
Wikimedia Bugzilla is closed!

Wikimedia migrated from Bugzilla to Phabricator. Bug reports are handled in Wikimedia Phabricator.
This static website is read-only and kept for historical purposes. It is not possible to log in, and except for displaying bug reports and their history, links might be broken. See T45652, the corresponding Phabricator task, for complete and up-to-date bug report information.
Bug 43652 - Implement ability to search wikitext of current Wikimedia wiki pages with regular expressions (regex)
Status: RESOLVED FIXED
Product: MediaWiki extensions
Classification: Unclassified
Component: CirrusSearch
Version: unspecified
Hardware: All
OS: All
Importance: Low enhancement
Target Milestone: ---
Assigned To: Nobody - You can work on this!
Duplicates: 54503
Depends on:
Blocks:
Reported: 2013-01-05 04:14 UTC by MZMcBride
Modified: 2014-08-09 13:51 UTC
CC: 16 users
See Also:
Web browser: ---
Mobile Platform: ---
Assignee Huggle Beta Tester: ---

Description MZMcBride 2013-01-05 04:14:35 UTC
Wikimedia should offer the ability to search the current wikitext of live wiki pages with regular expressions. This would be very helpful in identifying various problems across wikis.
Comment 1 Andre Klapper 2013-01-05 13:31:07 UTC
==> enhancement.
Comment 2 MZMcBride 2014-01-31 04:39:20 UTC
I was just thinking about this issue again. Because of the way prose works, you want to be able to search for "\bthe the\b", where "\b" is a word boundary, and catch other silliness like that. MapReduce works wonderfully for this (Ori and I tried BigQuery at some point). I think resolving this bug would be a good goal for 2014. We could even look at Labs instead of the production cluster, if needed. Copying some of the search folks as this is fundamentally a search issue.
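A minimal sketch of this kind of offline scan, assuming only the Python standard library and a bzip2-compressed dump (the filename and pattern are placeholders, not part of any proposed tool):

import bz2
import re

# Hypothetical dump filename; any compressed wikitext dump slice works.
DUMP = "enwiki-pages-articles.xml.bz2"

# "\b" marks a word boundary, so this matches the duplicated word "the the".
pattern = re.compile(r"\bthe the\b")

with bz2.open(DUMP, "rt", encoding="utf-8") as dump:
    for lineno, line in enumerate(dump, 1):
        if pattern.search(line):
            print(lineno, line.strip())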
Comment 3 Nik Everett 2014-02-03 14:36:50 UTC
I wonder if this is something that can replace some of the more uncommon customizations that lsearchd did to improve recall.  It might not be, because this is really an expert tool and those uncommon customizations (dash handling and such) affect everyone.

In any case, I think it might be useful to lean on search to cut down the list of pages that must be checked.  Lucene search and Elasticsearch both seem well optimized for a "first pass" you'd use to identify candidates that might match the regex.  I suppose it wouldn't always be the right thing to do, but it might be nice.

I like implementing this in labs because it could be a real performance drain on the production infrastructure if done there.  OTOH, if we put the wikitext in Elasticsearch we could have it run the regexes pretty easily.  The only trouble would be making sure the regexes don't cause a performance problem and I'm not sure that is possible.
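To make the "first pass" idea concrete, here is a rough sketch, assuming a local Elasticsearch index named "enwiki" with "title" and "text" source fields (the endpoint, index, and field names are all assumptions):

import re
import requests

# Hypothetical endpoint and index; adjust for the real instance.
ES = "http://localhost:9200/enwiki/_search"

pattern = re.compile(r"\bthe the\b")

# First pass: a cheap phrase query narrows the candidate set.
query = {
    "query": {"match_phrase": {"text": "the the"}},
    "size": 100,
    "_source": ["title", "text"],
}
hits = requests.get(ES, json=query).json()["hits"]["hits"]

# Second pass: run the real regex only over the candidates.
for hit in hits:
    if pattern.search(hit["_source"]["text"]):
        print(hit["_source"]["title"])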
Comment 4 MZMcBride 2014-02-05 02:53:14 UTC
(In reply to comment #3)
> I like implementing this in labs because it could be a real performance drain
> on the production infrastructure if done there.  OTOH, if we put the wikitext
> in Elasticsearch we could have it run the regexes pretty easily.  The only
> trouble would be making sure the regexes don't cause a performance problem
> and I'm not sure that is possible.

Can you please ballpark how much work would be involved in setting up Elasticsearch with the most recent English Wikipedia page text (wikitext) dump on Labs for use with sane regular expressions? The current dump is about 19.1 GB compressed (cf. <http://dumps.wikimedia.org/enwiki/20140102/>).
Comment 5 Nik Everett 2014-02-05 14:56:17 UTC
I suppose that depends on how good you need it to be.  I spent half an hour this morning and have an instance loading the data.  It is using the wikipedia river, which is a toy thing the Elasticsearch folks maintain, ostensibly for testing.  It isn't what we want in the end for a great many reasons, not least of which that it munges the wikitext something fierce.  But it is something: it was easy to set up and gives us something to play with.

I think what you are asking for is actually a few pieces:
* A tool to keep the index up to date - My guess is this'd take a day to get to know labs, another day or two to get it working the first time, then about a week of bug fixes spread out over the first couple of months.
* A tool to dispatch queries against it sanely - I'm less sure about this.  Anywhere from a couple of days to a month depending on surprises.  I can't really estimate bug fixes because my estimate for the tool itself is already so rough.

I'll play with the wikipedia river instance and see what kind of queries I can fire off against it manually.

Finally, if we forgo making the second tool, then users could technically just use it as an Elasticsearch instance with wikitext on it.  I'm not sure how many people that would be useful for or what kind of protection it'd need to have.  I imagine hiding it in the labs network and making folks sign in to labs with port forwarding would be safe enough.
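As a taste of what firing queries against that instance manually might look like (the instance URL and field names are assumptions; the wikipedia river's actual field layout may differ):

import requests

# Hypothetical river-loaded index.
ES = "http://localhost:9200/wikipedia/_search"

# Elasticsearch "regexp" queries run against single analyzed terms in the
# index, so the pattern here matches one token ("color" or "colour").
query = {"query": {"regexp": {"text": "colou?r"}}, "size": 10}

resp = requests.get(ES, json=query)
for hit in resp.json()["hits"]["hits"]:
    print(hit["_source"].get("title"))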
Comment 6 Chad H. 2014-02-05 17:49:54 UTC
So I think this is a great idea and so I talked to Marc today about doing this in labs. He's on board with the idea, but confirmed my fear that it's bad timing. We're in the middle of trying to move labs to eqiad so it's a bad time to set up a new service--I'm thinking we set this up like database replication to real hardware, then figure out how people can query against it.

In the meantime, I've started a page on wikitech: https://wikitech.wikimedia.org/wiki/Search/Labs_services. Let's work on hashing out some of the implementation details while we let ops finish the migration.
Comment 7 Gabriel Wicke 2014-02-06 02:07:53 UTC
We have a dumpGrepper in the parsoid repository:
https://git.wikimedia.org/blob/mediawiki%2Fservices%2Fparsoid/ac5483ae6cba6be86989457ea7cf2ae6e460388a/tests%2FdumpGrepper.js

Quickstart:
git clone https://gerrit.wikimedia.org/r/p/mediawiki/services/parsoid
cd parsoid
npm install libxmljs
cd tests
nodejs dumpGrepper --help # show options
zcat dump.xml.gz | nodejs dumpGrepper <regexp>
Comment 8 Chad H. 2014-02-14 17:57:07 UTC
(In reply to Gabriel Wicke from comment #7)
> We have a dumpGrepper in the parsoid repository

Yeah, there's a plethora of tools to search dumps for text. This is about searching the real-time indexes though :)
Comment 9 Chad H. 2014-02-20 21:50:38 UTC
*** Bug 54503 has been marked as a duplicate of this bug. ***
Comment 10 Nik Everett 2014-06-06 15:31:47 UTC
https://gerrit.wikimedia.org/r/#/c/137733/


The patch isn't really fully ready, but it's on its way.

Before we merge it I'd like to get some kind of better response to regex syntax errors than we have now.

I can live with deploying it without the optimization in Elasticsearch that'll make it faster and more memory efficient.  That'll be nice but not required.

I can also live with deploying it without highlighting, so long as we get highlighting "real soon" afterwards.
Comment 11 Waldir 2014-06-27 12:04:17 UTC
The patch has been merged and should be deployed (according to [[mw:MediaWiki 1.24/wmf10]] and looking at [[Special:Version]]), but the insource: prefix doesn't seem to work yet. Any hints why that could be?
Comment 13 Nik Everett 2014-06-27 13:34:59 UTC
This one requires that the index be rebuilt before it'll work.  I tried to make that clear in my email to ambassadors about it but I should have posted that here as well.

The reindex is proceeding as quickly as I'm able to get it:
1.  group0 wikis have been done since Monday.
2.  group1 wikis are about 60% of the way through the process.
3.  wikipedias are about 10% of the way through the process.

One of the problems with the whole expand-templates thing that Cirrus does is that the reindex process is really slow when you have to dip all the way down to MediaWiki to get the data.  In this case we do.  We might have been able to create a one-shot tool to do this, but that would have been a big chunk more work and a bit more risk of failure.  What we're doing now we've done many times before.  Safe = easier for a team of two to manage.

Anyway, some wikis are starting to see it:
https://en.wikipedia.org/wiki/Special:Search/insource:/a/
https://www.mediawiki.org/w/index.php?title=Special%3ASearch&profile=default&search=url+insource%3A%2F%26title%3Dfoo%2F&fulltext=Search
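
The same insource:/.../ syntax takes arbitrary regexes; for example (these particular queries are illustrative, not from the patch itself):

insource:/\bthe the\b/     finds the duplicated word from comment 2
insource:/&title=foo/      finds a raw URL parameter in wikitext, as in the second link above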

As promised it's quite slow (~30 seconds on enwiki).  Right now it's still using the timeouts from the full-text search.  If that becomes a problem we'll have to raise them and look at other tricks to speed this up.

Marking this fixed now that it's working, pending index building.  I (or you) can verify it once it works on your wiki.
