Last modified: 2013-08-06 17:02:28 UTC

Wikimedia Bugzilla is closed!

Wikimedia migrated from Bugzilla to Phabricator. Bug reports are handled in Wikimedia Phabricator.
This static website is read-only and kept for historical purposes. It is not possible to log in, and apart from displaying bug reports and their history, links might be broken. See T42742, the corresponding Phabricator task, for complete and up-to-date bug report information.
Bug 40742 - Dumps project is showing the wrong quota for Gluster storage
Status: RESOLVED WONTFIX
Product: Wikimedia Labs
Classification: Unclassified
Component: General (Other open bugs)
Version: unspecified
Hardware: All
OS: All
Priority: Normal
Severity: normal
Target Milestone: ---
Assigned To: Nobody - You can work on this!
Keywords: upstream
Depends on:
Blocks:
Reported: 2012-10-03 14:50 UTC by Hydriz Scholz
Modified: 2013-08-06 17:02 UTC
CC: 4 users

See Also:
Web browser: ---
Mobile Platform: ---
Assignee Huggle Beta Tester: ---


Attachments: (none)

Description Hydriz Scholz 2012-10-03 14:50:07 UTC
This was reported a *very* long time ago, but Ryan seems to be ignoring me on IRC. :(

Anyway, the GlusterFS project storage is showing inconsistent figures for the df -h and du -sh commands (run inside /data/project).

This is the output of the commands:
----
hydriz@dumps-1:/data/project$ df -h
Filesystem                                 Size  Used Avail Use% Mounted on
projectstorage.pmtpa.wmnet:/dumps-project  300G  173G  128G  58% /data/project
----
hydriz@dumps-1:/data/project$ du -sh
11G	.
----
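
For reference, a minimal sketch of how the two figures can be compared side by side (df reports the Gluster server's accounting, while du walks the files visible from the client):
----
# Compare server-side accounting (df) with a client-side walk (du).
# On a healthy volume the two "used" figures should roughly agree.
# -P forces POSIX output so a long filesystem name does not wrap.
df -hP /data/project | tail -1 | awk '{print "df used: " $3}'
du -sh /data/project 2>/dev/null | awk '{print "du used: " $1}'
----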

Can someone look into this and resolve it soon? Many thanks!
Comment 1 Ryan Lane 2012-10-03 16:18:51 UTC
You ping me on IRC at random hours and disappear for days or weeks at a time. That's fine, but since you're mostly unreachable during my waking hours, bugs like this are always better. I had no clue you reported this.
Comment 2 Ryan Lane 2012-10-03 16:31:12 UTC
Seems this is yet another bug in Gluster. It has been reported upstream in numerous places and there's no response.
Comment 3 Nemo 2012-11-22 19:56:50 UTC
Yesterday df claimed projectstorage.pmtpa.wmnet:/dumps-project had 18 TB in total and I don't remember how many TB free, but not a byte could be written to disk because the quota had allegedly been reached.
Now that some stuff has been deleted, df says 234G used vs. 72G according to du, and writing to disk works again.
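
A sketch of how an admin could cross-check Gluster's own quota accounting against such symptoms, assuming quotas on this volume are managed with the standard gluster CLI and that the volume name matches the export name dumps-project:
----
# On the Gluster server (root): show the configured hard limit and
# the usage Gluster thinks the volume has.
gluster volume quota dumps-project list
# If this "Used" figure diverges badly from a du walk of the mount,
# the quota accounting itself has drifted, which would explain writes
# being rejected while df still shows free space.
----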
Comment 4 Nemo 2013-05-21 14:48:45 UTC
Now /data/project doesn't show up at all in df.
Comment 5 Ryan Lane 2013-05-21 17:57:45 UTC
Which instance are you trying to access this from?
Comment 6 Nemo 2013-05-21 18:12:24 UTC
(In reply to comment #5)
> Which instance are you trying to access this from?

dumps-1, dumps-2 (just tried) and all the others (tried in the past) behave the same.
Comment 7 Ryan Lane 2013-05-21 18:17:50 UTC
root@i-00000355:~# df
Filesystem                                1K-blocks     Used Available Use% Mounted on
/dev/vda1                                  10309828  2624584   7161636  27% /
udev                                        2020900        8   2020892   1% /dev
tmpfs                                        809992      252    809740   1% /run
none                                           5120        0      5120   0% /run/lock
none                                        2024972        0   2024972   0% /run/shm
/dev/vdb                                   41284928   180240  39007536   1% /mnt
projectstorage.pmtpa.wmnet:/dumps-home     52428800   135296  52293504   1% /home
projectstorage.pmtpa.wmnet:/dumps-project 314572800 96101632 218471168  31% /data/project


^^ note that the filesystem is an automount and needs to be mounted for df to work. It'll automatically mount itself when it is accessed.
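
A hedged illustration of that automount behaviour: any access to the path should trigger the mount, after which df reports the filesystem.
----
# Touching the path triggers the automounter; afterwards df sees it.
stat /data/project > /dev/null   # any access works, e.g. ls or cd
df -h /data/project
----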
Comment 8 Nemo 2013-05-21 18:53:56 UTC
(In reply to comment #7)
> ^^ note that the filesystem is an automount and needs to be mounted for df to
> work. It'll automatically mount itself when it is accessed.

I browsed directories and deleted files; isn't that enough? Anyway, OK, we're now back to comment 0.
Comment 9 Peter Bena 2013-05-30 07:51:54 UTC
Did you consider switching to NFS?
Comment 10 Nemo 2013-05-30 08:02:27 UTC
(In reply to comment #9)
> Did you consider switching to NFS?

Who is "you"? Are the project users supposed to do something? Thanks.
Comment 11 Hydriz Scholz 2013-05-30 14:10:51 UTC
(In reply to comment #9)
> Did you consider switching to NFS?

Yes, but that created an extremely large load on the host nodes, which was the main factor in crashing the whole of Labs last year (see bug 36993).

This might not be relevant now, since we are not doing much I/O, so we could experiment with that. However, we should be resolving the root cause, the Gluster bug itself, since it can hit other projects in the future.
Comment 12 Nemo 2013-08-06 14:58:58 UTC
Problem still current:

$ df -h
Filesystem                                 Size  Used Avail Use% Mounted on
[...]
projectstorage.pmtpa.wmnet:/dumps-project  300G  102G  199G  34% /data/project

$ du -shc /data/project/
11G     /data/project/
Comment 13 Ryan Lane 2013-08-06 16:48:02 UTC
Yes, as I mentioned, I have no plans to fix this or even investigate it. Are you having issues writing to the filesystem? If not, we'll just wait until Gluster is replaced by NFS.
Comment 14 Nemo 2013-08-06 16:50:31 UTC
OK, thanks for clarifying the status on Bugzilla too. How hard is it to raise the quota to 400 GB so that we can use the standard 300?

(In reply to comment #13)
> Are you having issues writing to the filesystem?

Not yet since you fixed it, thanks.
Comment 15 Ryan Lane 2013-08-06 16:55:38 UTC
I think the 300GB will actually be usable. I can raise it, but you guys have mentioned in your emails that it isn't necessary.
Comment 16 Nemo 2013-08-06 17:02:28 UTC
(In reply to comment #15)
> I think the 300GB will actually be usable. 

Hm? When?

> I can raise it, but you guys have
> mentioned in your emails that it isn't necessary.

Well, Hydriz said that we'll try and manage with 300 GB, but not with 200.
