Last modified: 2012-12-02 10:13:35 UTC
For some unknown reason, the Ext-Wikibase job can start eating all available memory and send gallium into swap. When running update.php, it eventually stops at:

5:55:24 [exec] ...site_identifiers table already exists.

The PHP process then eats as much memory as it can before getting killed by the Linux OOM killer. The command line for one of the processes was:

php /var/lib/jenkins/jobs/Ext-Wikibase/workspace/maintenance/update.php --quick --conf /var/lib/jenkins/jobs/MediaWiki-Tests-Extensions/workspace/LocalSettings.php

Will have to investigate. I have disabled the Ext-Wikibase job as a workaround.
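One way to make this kind of failure cheaper to diagnose is to cap the process's address space, so a runaway allocation dies quickly with an out-of-memory error instead of dragging the whole host into swap. A minimal Python sketch of that general technique (the helper name and the cap value are illustrative, not part of the actual Jenkins setup):

```python
import resource
import subprocess
import sys

def run_with_memory_cap(code, cap_bytes):
    """Run a Python snippet in a child process whose address space is
    capped, so a runaway allocation fails fast instead of swapping."""
    def set_limit():
        # Applied in the child between fork() and exec().
        resource.setrlimit(resource.RLIMIT_AS, (cap_bytes, cap_bytes))
    return subprocess.run(
        [sys.executable, "-c", code],
        preexec_fn=set_limit,
        capture_output=True,
        text=True,
    )

# A 1 GiB allocation under a 512 MiB cap dies with MemoryError,
# while a small workload runs normally.
big = run_with_memory_cap("x = bytearray(1 << 30)", 512 * 1024 * 1024)
ok = run_with_memory_cap("print('done')", 512 * 1024 * 1024)
```

The same idea would apply to the PHP process here: running update.php with an explicit, modest memory_limit would at least turn the slow swap death into an immediate fatal error with a usable message.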
Might be a race condition with the SQLite database. Need to double-check which file is used; the databases should be different for each job, though they are all named my_wiki.sqlite.
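This would not by itself explain the memory growth, but if two jobs ever do open the same my_wiki.sqlite, SQLite's whole-file write lock makes the collision easy to demonstrate. A minimal Python sketch (the file and table names are borrowed from the report above; the helper function is hypothetical):

```python
import os
import sqlite3
import tempfile

def demo_shared_db_collision(path):
    """Open two connections to the same SQLite file, as two Jenkins jobs
    sharing one my_wiki.sqlite would, and show the write-lock conflict."""
    # isolation_level=None puts the sqlite3 module in autocommit mode,
    # so explicit BEGIN statements are passed straight through to SQLite.
    a = sqlite3.connect(path, timeout=0.1, isolation_level=None)
    b = sqlite3.connect(path, timeout=0.1, isolation_level=None)
    try:
        a.execute("CREATE TABLE IF NOT EXISTS site_identifiers (id INTEGER)")
        a.execute("BEGIN IMMEDIATE")      # connection A takes the write lock
        try:
            b.execute("BEGIN IMMEDIATE")  # connection B waits, then gives up
            return None                   # no conflict (unexpected here)
        except sqlite3.OperationalError as exc:
            return str(exc)               # typically "database is locked"
        finally:
            a.execute("ROLLBACK")
    finally:
        a.close()
        b.close()

shared = os.path.join(tempfile.mkdtemp(), "my_wiki.sqlite")
error = demo_shared_db_collision(shared)
```

If the workspaces really do use separate files, this is a dead end; checking the absolute database path each job's LocalSettings.php points at should settle it.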
Could also be language objects being loaded (see bug 41103), but that's a wild guess.
Going by the last log entry, I would guess the issue is happening while importing sites from meta. No idea why it would fail there, though.
I think the updater should not automatically pull stuff from meta. Instead, it should import site info from the interwiki table. That would provide a clean upgrade path for existing wikis.
You could try writing debug output to stdout in Jenkins. I might also set one of the $wgDebug* settings to investigate what is going on. I wanted to attach a trace to a faulty process, but apparently that needs root access :-(
I removed some code from the updater routine for Wikibase; see patch I3335b5f9. It's possible that this fixes the issue, though I don't know how it would have caused PHP to run out of memory. There was some code in there that would download the site matrix from meta and initialize the sites table based on it. Instead, the sites table should be initialized from any existing entries in the interwiki table; see bug 42201.
Jenkins is running.
I am still not sure what caused this issue, but it is definitely fixed :-) Thanks!