Last modified: 2013-11-13 09:43:59 UTC

Wikimedia Bugzilla is closed!

Wikimedia migrated from Bugzilla to Phabricator. Bug reports are handled in Wikimedia Phabricator.
This static website is read-only and kept for historical purposes. It is not possible to log in, and apart from displaying bug reports and their history, links might be broken. See T43967, the corresponding Phabricator task, for complete and up-to-date bug report information.
Bug 41967 - GlusterFS performance problems
Status: RESOLVED WORKSFORME
Product: Wikimedia Labs
Classification: Unclassified
Component: Infrastructure (Other open bugs)
Version: unspecified
Hardware: All
OS: All
Importance: Low normal
Target Milestone: ---
Assigned To: Ryan Lane
Keywords: performance
Depends on: 36993 36994
Blocks:
Reported: 2012-11-10 14:45 UTC by Nemo
Modified: 2013-11-13 09:43 UTC
CC: 7 users

See Also:
Web browser: ---
Mobile Platform: ---
Assignee Huggle Beta Tester: ---


Attachments

Description Nemo 2012-11-10 14:45:57 UTC
GlusterFS performance is sometimes not ideal.
Comment 1 Andre Klapper 2012-11-10 15:01:07 UTC
Assuming that Ubuntu 12.04 is used, this would be about glusterfs 3.2.5-1ubuntu1.

RedHat ticket (which is in NEEDINFO state) recommends gathering more information as follows (a concrete sketch follows the list):
- # gluster volume profile VOLNAME start
- Run the workload
- # gluster volume profile VOLNAME info (repeat this a few times while the workload is running on the volume)
- # gluster volume profile VOLNAME stop (once the workload is complete)
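
A minimal sketch of that profiling sequence, assuming a hypothetical volume named "projects" and shell access to a host with the gluster CLI:

  # start collecting per-brick I/O statistics for the volume
  gluster volume profile projects start
  # ...run the workload against the mounted volume, then sample the stats
  # (repeat the info call a few times while the workload is active)
  gluster volume profile projects info
  # stop profiling once the workload is complete
  gluster volume profile projects stop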
Comment 2 Ryan Lane 2012-11-16 20:46:03 UTC
No. We're using the newest 3.3 from gluster debs.
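
To double-check which version is actually installed on a given instance, a quick sketch (package names differ between the Ubuntu archive and the upstream gluster debs, so treat the grep as a rough filter):

  # report the version of the installed glusterfs binary
  glusterfs --version
  # list any gluster-related packages and their versions
  dpkg -l | grep -i gluster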
Comment 3 Nemo 2013-01-14 12:26:58 UTC
Just to give an idea of how much of a bottleneck it is, here is top output even when issuing a zip -9 (the slowest read/write possible with zip):

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
 1801 root      20   0  444m 147m 1724 R  103  1.8  96:21.75 glusterfs
29831 nemobis   20   0 20640  11m  712 S   42  0.1  10:11.65 zip

Is there a way to enable multithreading for glusterfs? Maybe with multiple cores dedicated to it, it would manage to get enough data for a single CPU core to work on.
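
For what it's worth, GlusterFS exposes the worker-thread count of its io-threads translator as a volume option; a minimal sketch of tuning it, assuming a hypothetical volume named "projects" and that the deployed 3.3 packages accept the option (whether raising it helps this workload is untested):

  # show the volume's current configuration, including any reconfigured options
  gluster volume info projects
  # raise the io-threads translator's worker-thread count (the value 32 is illustrative)
  gluster volume set projects performance.io-thread-count 32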
Comment 4 Antoine "hashar" Musso (WMF) 2013-11-13 09:43:59 UTC
The workaround is to either:
- use /dev/vdb mounted on /mnt, with the drawback that if the instance dies, the data are lost (see the sketch after this list), or
- migrate a given labs project to the shared NFS server, as has been done for toollabs and deployment-prep (beta cluster).
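
A minimal sketch of the first workaround, assuming the instance's ephemeral disk really is /dev/vdb and is not yet formatted (device name and filesystem choice are assumptions, not taken from this bug):

  # format the ephemeral disk; this destroys anything already on it
  sudo mkfs.ext4 /dev/vdb
  # mount it on /mnt; data here is lost if the instance dies or is deleted
  sudo mount /dev/vdb /mnt
  # optionally make the mount survive reboots
  echo '/dev/vdb /mnt ext4 defaults 0 2' | sudo tee -a /etc/fstab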

Closing this bug since we have workarounds.


