Last modified: 2014-10-10 18:57:13 UTC
I get this on various edits once my wiki grows to a certain size; I then have to `vagrant destroy` it. I have the Flow, MobileFrontend, monobook, mantle, and echo roles enabled.

```
{"error":{"code":"internal_api_error_JobQueueError",
 "info":"Exception Caught: Redis server error: Could not insert 1 ParsoidCacheUpdateJobOnDependencyChange job(s)."}}

#0 /vagrant/mediawiki/includes/jobqueue/JobQueueRedis.php(236): JobQueueRedis->throwRedisException()
#1 /vagrant/mediawiki/includes/jobqueue/JobQueue.php(340): JobQueueRedis->doBatchPush()
#2 /vagrant/mediawiki/includes/jobqueue/JobQueue.php(311): JobQueue->batchPush()
#3 /vagrant/mediawiki/includes/jobqueue/JobQueueGroup.php(127): JobQueue->push()
#4 /vagrant/mediawiki/extensions/Parsoid/php/Parsoid.hooks.php(62): JobQueueGroup->push()
#5 /vagrant/mediawiki/extensions/Parsoid/php/Parsoid.hooks.php(77): ParsoidHooks::updateTitle()
#6 /vagrant/mediawiki/includes/Hooks.php(206): ParsoidHooks::onArticleEditUpdates()
#7 /vagrant/mediawiki/includes/GlobalFunctions.php(3984): Hooks::run()
#8 /vagrant/mediawiki/includes/page/WikiPage.php(2211): wfRunHooks()
#9 /vagrant/mediawiki/includes/page/WikiPage.php(1935): WikiPage->doEditUpdates()
#10 /vagrant/mediawiki/includes/page/Article.php(2002): WikiPage->doEditContent()
#11 /vagrant/mediawiki/includes/EditPage.php(1901): Article->__call()
#12 /vagrant/mediawiki/includes/api/ApiEditPage.php(403): EditPage->internalAttemptSave()
#13 /vagrant/mediawiki/includes/api/ApiMain.php(930): ApiEditPage->execute()
#14 /vagrant/mediawiki/includes/api/ApiMain.php(364): ApiMain->executeAction()
#15 /vagrant/mediawiki/includes/api/ApiMain.php(335): ApiMain->executeActionWithErrorHandling()
#16 /vagrant/mediawiki/api.php(85): ApiMain->execute()
#17 /var/www/w/api.php(5): include()
#18 {main}
```
We have had other reports of similar-sounding problems, but I can't find them in Bugzilla at the moment. Have you checked whether your jobrunner is actually running? As this error shows, we store jobs in Redis. Those jobs are not subject to the normal Redis cache-eviction policy, because losing jobs is considered bad. If the jobrunner has gone AWOL, the jobs will build up until Redis can't hold any more.
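A quick way to check both halves of that: the `jobrunner` process name and the `mwscript` wrapper below are assumptions about the usual MediaWiki-Vagrant layout, so adjust them for your install.

```shell
# Sketch of the suggested checks; "jobrunner" as a process name and the
# mwscript wrapper are assumptions about the MediaWiki-Vagrant setup.

# Is a jobrunner process alive? The [j] keeps grep from matching itself.
if ps aux | grep -q '[j]obrunner'; then
    echo "jobrunner process found"
else
    echo "no jobrunner process running"
fi

# If the mwscript wrapper is available, count queued jobs by type.
if command -v mwscript >/dev/null 2>&1; then
    mwscript showJobs.php --wiki=wiki --group
fi
```

If `showJobs.php` reports a large and growing backlog while no jobrunner process is running, that matches the failure mode described above.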
Hit this again. Max was able to fix it by running `echo flushall | redis-cli`. How am I getting into this state?

```
{"error":{"code":"internal_api_error_JobQueueError",
 "info":"Exception Caught: Redis server error: Could not insert 1 ParsoidCacheUpdateJobOnDependencyChange job(s)."}}

#0 /vagrant/mediawiki/includes/jobqueue/JobQueueRedis.php(236): JobQueueRedis->throwRedisException()
#1 /vagrant/mediawiki/includes/jobqueue/JobQueue.php(340): JobQueueRedis->doBatchPush()
#2 /vagrant/mediawiki/includes/jobqueue/JobQueue.php(311): JobQueue->batchPush()
#3 /vagrant/mediawiki/includes/jobqueue/JobQueueGroup.php(127): JobQueue->push()
#4 /vagrant/mediawiki/extensions/Parsoid/Parsoid.hooks.php(62): JobQueueGroup->push()
#5 /vagrant/mediawiki/extensions/Parsoid/Parsoid.hooks.php(77): ParsoidHooks::updateTitle()
#6 /vagrant/mediawiki/includes/Hooks.php(206): ParsoidHooks::onArticleEditUpdates()
#7 /vagrant/mediawiki/includes/GlobalFunctions.php(4004): Hooks::run()
#8 /vagrant/mediawiki/includes/page/WikiPage.php(2211): wfRunHooks()
#9 /vagrant/mediawiki/includes/page/WikiPage.php(1935): WikiPage->doEditUpdates()
#10 /vagrant/mediawiki/includes/page/Article.php(2004): WikiPage->doEditContent()
#11 /vagrant/mediawiki/includes/EditPage.php(1902): Article->__call()
#12 /vagrant/mediawiki/includes/api/ApiEditPage.php(403): EditPage->internalAttemptSave()
#13 /vagrant/mediawiki/includes/api/ApiMain.php(932): ApiEditPage->execute()
#14 /vagrant/mediawiki/includes/api/ApiMain.php(364): ApiMain->executeAction()
#15 /vagrant/mediawiki/includes/api/ApiMain.php(335): ApiMain->executeActionWithErrorHandling()
#16 /vagrant/mediawiki/api.php(85): ApiMain->execute()
#17 /var/www/w/api.php(5): include()
#18 {main}
```
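For what it's worth, `FLUSHALL` wipes every key in that Redis instance (sessions and caches included), not just the stuck jobs. A narrower cleanup might look like the sketch below; the `*:jobqueue:*` key pattern is an assumption based on JobQueueRedis's key naming, so inspect what the scan matches before actually deleting anything.

```shell
# Sketch only, not a vetted procedure. FLUSHALL wipes everything Redis
# holds; this instead deletes only the job-queue keys. The
# "*:jobqueue:*" pattern is an assumption about JobQueueRedis key
# naming -- inspect the scan output before deleting.
if command -v redis-cli >/dev/null 2>&1; then
    redis-cli --scan --pattern '*:jobqueue:*' | while read -r key; do
        echo "deleting $key"
        redis-cli del "$key"
    done
else
    echo "redis-cli not installed"
fi
```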
The `INFO` redis-cli command might give you some clues about how much memory and how many keys Redis has allocated. Running it before and after the jobrunner runs would probably be most valuable. We've discussed adding lightweight monitoring to MWV to track down issues like this. That doesn't help you at the moment, but I'll definitely keep Redis in mind when it comes to implementation.
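A minimal before/after snapshot along those lines (the field names come from standard `INFO` output; whether jobs actually drain between the two calls depends on your jobrunner setup):

```shell
# Snapshot Redis memory use and key count; take one before and one
# after a jobrunner pass to see whether jobs are being drained.
snapshot() {
    redis-cli INFO memory | grep '^used_memory_human'
    printf 'keys: '
    redis-cli DBSIZE
}

if command -v redis-cli >/dev/null 2>&1; then
    snapshot   # before the jobrunner runs
    # ... wait for (or manually trigger) a jobrunner pass ...
    snapshot   # after
else
    echo "redis-cli not installed"
fi
```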