Last modified: 2012-12-27 01:10:53 UTC

Wikimedia Bugzilla is closed!

Wikimedia migrated from Bugzilla to Phabricator. Bug reports are handled in Wikimedia Phabricator.
This static website is read-only and for historical purposes. It is not possible to log in. Except for displaying bug reports and their history, links might be broken. See T39731, the corresponding Phabricator task, for complete and up-to-date bug report information.
Bug 37731 - Editing very high-use templates results in a timeout in BacklinkCache::partition()
Status: RESOLVED FIXED
Product: Wikimedia
Classification: Unclassified
Component: General/Unknown (Other open bugs)
Version: unspecified
Hardware: All
OS: All
Importance: Unprioritized normal
Target Milestone: ---
Assigned To: Aaron Schulz
URL:
Depends on:
Blocks:
Reported: 2012-06-20 02:31 UTC by MZMcBride
Modified: 2012-12-27 01:10 UTC
CC: 6 users

See Also:
Web browser: ---
Mobile Platform: ---
Assignee Huggle Beta Tester: ---


Attachments

Description MZMcBride 2012-06-20 02:31:58 UTC
When someone edits a very high-use template (a template with millions of transclusions), the template edit doesn't finish cleanly. Instead of the template page reloading on page save, the user is presented with a read timeout error message. This read timeout behavior has existed for a few years now.
Comment 1 MZMcBride 2012-06-20 02:35:26 UTC
According to Tim Starling in #wikimedia-tech just now:

> [20-Jun-2012 02:28:44] Fatal error: Maximum execution time of 180 seconds exceeded at /usr/local/apache/common-local/php-1.20wmf5/includes/db/DatabaseMysql.php on line 285
> that's me
> and it fails in BacklinkCache->partition, very nice

This was a result of this edit: <https://commons.wikimedia.org/w/index.php?diff=prev&oldid=72965783>. The template has 2,783,343 transclusions according to the Toolserver's commonswiki_p right now.
Comment 2 Tim Starling 2012-06-20 03:13:14 UTC
Updated summary to reflect cause.
Comment 3 Tim Starling 2012-07-03 03:56:46 UTC
Workaround method to queue jobs from the command line: 
<https://commons.wikimedia.org/wiki/User:Tim_Starling/fixing_link_tables>
Comment 4 Aaron Schulz 2012-11-16 19:15:45 UTC
https://gerrit.wikimedia.org/r/#/c/32488/
Comment 5 db [inactive,noenotif] 2012-11-28 13:26:23 UTC
(In reply to comment #4)
> https://gerrit.wikimedia.org/r/#/c/32488/

Status Merged
Comment 6 Asher Feldman 2012-12-27 00:11:50 UTC
https://gerrit.wikimedia.org/r/#/c/32488/ added a limit to BacklinkCache::getNumLinks but some related jobs are still failing in BacklinkCache::getLinks like this:

Wed Dec 26 23:40:42 UTC 2012    mw14    commonswiki     BacklinkCache::getLinks 10.0.6.61       2008    MySQL client ran out of memory (10.0.6.61)      SELECT  /*! STRAIGHT_JOIN */ page_namespace,page_title,page_id  FROM `templatelinks`,`page`  WHERE tl_namespace = '10' AND tl_title = 'Date' AND (page_id=tl_from)  ORDER BY tl_from

That query returns 12384915 rows and would have to be batched.
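Batching a result set of that size would typically mean keyset pagination on the `ORDER BY` column (`tl_from`) instead of fetching all rows in one query. The following is a minimal illustrative sketch of that idea, not MediaWiki's actual implementation; it uses an in-memory SQLite stand-in for the `templatelinks`/`page` join shown in the log, with hypothetical data:

```python
import sqlite3

# In-memory stand-in for the templatelinks/page join from the log above.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE templatelinks (tl_from INTEGER, tl_namespace INTEGER, tl_title TEXT);
    CREATE TABLE page (page_id INTEGER PRIMARY KEY, page_namespace INTEGER, page_title TEXT);
""")
conn.executemany("INSERT INTO templatelinks VALUES (?, 10, 'Date')",
                 [(i,) for i in range(1, 1001)])
conn.executemany("INSERT INTO page VALUES (?, 0, ?)",
                 [(i, "Page_%d" % i) for i in range(1, 1001)])

def get_links_batched(conn, batch_size=100):
    """Yield backlink rows in fixed-size batches keyed on tl_from,
    so the client never holds the full result set in memory."""
    last = 0
    while True:
        rows = conn.execute(
            """SELECT page_namespace, page_title, page_id
               FROM templatelinks JOIN page ON page_id = tl_from
               WHERE tl_namespace = 10 AND tl_title = 'Date' AND tl_from > ?
               ORDER BY tl_from LIMIT ?""",
            (last, batch_size)).fetchall()
        if not rows:
            break
        last = rows[-1][2]  # page_id equals tl_from here, so it advances the cursor
        yield rows

total = sum(len(batch) for batch in get_links_batched(conn))
print(total)  # 1000 rows retrieved, 100 at a time
```

Each batch restarts the scan after the last seen `tl_from`, so memory use stays bounded by the batch size regardless of how many millions of backlinks the template has.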
Comment 7 Aaron Schulz 2012-12-27 01:10:53 UTC
(In reply to comment #6)
> https://gerrit.wikimedia.org/r/#/c/32488/ added a limit to
> BacklinkCache::getNumLinks but some related jobs are still failing in
> BacklinkCache::getLinks like this:
> 
> Wed Dec 26 23:40:42 UTC 2012    mw14    commonswiki    
> BacklinkCache::getLinks
> 10.0.6.61       2008    MySQL client ran out of memory (10.0.6.61)     
> SELECT 
> /*! STRAIGHT_JOIN */ page_namespace,page_title,page_id  FROM
> `templatelinks`,`page`  WHERE tl_namespace = '10' AND tl_title = 'Date' AND
> (page_id=tl_from)  ORDER BY tl_from
> 
> That query returns 12384915 rows and would have to be batched.

Moved to bug 43452 since this is about users getting timeouts.


