Last modified: 2014-01-24 00:35:12 UTC

Wikimedia Bugzilla is closed!

Wikimedia migrated from Bugzilla to Phabricator. Bug reports are handled in Wikimedia Phabricator.
This static website is read-only and exists for historical purposes. It is not possible to log in, and aside from displaying bug reports and their history, links may be broken. See T62348, the corresponding Phabricator task, for complete and up-to-date bug report information.
Bug 60348 - Handle large bursts of jobs more elegantly
Status: NEW
Product: MediaWiki
Classification: Unclassified
Component: JobQueue (Other open bugs)
Version: 1.23.0
Hardware: All
OS: All
Priority: Normal
Severity: normal
Target Milestone: ---
Assigned To: Nobody - You can work on this!
Depends on:
Blocks:
Reported: 2014-01-22 16:49 UTC by Chad H.
Modified: 2014-01-24 00:35 UTC (History)
CC: 4 users

See Also:
Web browser: ---
Mobile Platform: ---
Assignee Huggle Beta Tester: ---


Attachments

Description Chad H. 2014-01-22 16:49:14 UTC
The job queue is way more awesome than it was 2 years ago. With the improved code + redis architecture, it's incredibly reliable and we're doing way more jobs than ever before.

We tend to keep up with the small day-to-day jobs perfectly. Most queues are near empty on enwiki most of the time, or in the case of cirrus/htmlCache/linksUpdate jobs, maybe a few hundred/thousand at a time. No big deal.

What we do *not* handle well is a large burst of jobs: someone edits a super-high-use template, we reindex all of enwiki or commons in Cirrus, anything like that. We end up with millions of jobs, and it takes weeks to clear the backlog without manual intervention.

It would be nice to do something better in this case. I have no clue what this better thing may be.
Comment 1 Brion Vibber 2014-01-24 00:26:12 UTC
Putting giant bulk operations onto their own subqueues and interleaving them with other actions might be good.
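The subqueue-and-interleave idea can be sketched as a scheduler that keeps bulk jobs on a dedicated queue and drains them at a fixed ratio against normal jobs, so a burst of millions of jobs cannot starve the day-to-day queues. This is a minimal illustration only, not MediaWiki's actual JobQueue API; the class name and the `normal_per_bulk` ratio are hypothetical.

```python
from collections import deque


class InterleavingScheduler:
    """Serve up to `normal_per_bulk` normal jobs for every bulk job,
    so a huge bulk backlog drains slowly without blocking normal work."""

    def __init__(self, normal_per_bulk=4):
        self.normal = deque()
        self.bulk = deque()
        self.normal_per_bulk = normal_per_bulk
        self._since_bulk = 0  # normal jobs served since the last bulk job

    def push(self, job, bulk=False):
        (self.bulk if bulk else self.normal).append(job)

    def pop(self):
        # Prefer normal jobs until the ratio is used up (or there is no
        # bulk work at all), then let one bulk job through.
        if self.normal and (self._since_bulk < self.normal_per_bulk
                            or not self.bulk):
            self._since_bulk += 1
            return self.normal.popleft()
        if self.bulk:
            self._since_bulk = 0
            return self.bulk.popleft()
        return None  # both queues empty
```

With a ratio of 1, two normal jobs and three bulk jobs interleave as normal, bulk, normal, bulk, bulk; once the normal queue is empty, the remaining bulk backlog drains unimpeded.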
