Last modified: 2014-01-02 17:15:57 UTC
This list is increasingly (ab)used on Meta, disrupting the work of Commons administrators who tag images as copyright violations (source). It is also very inflexible, so URLs like http://www.google.de/url? (matched by \bgoogle\..*?\/url\?) get blacklisted wholesale. At least established users therefore need a way to override it. AbuseFilter and the title blacklist are much more flexible in this area. Thanks in advance.
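To illustrate how broad that pattern is, here is a quick check (Python sketch; the pattern is the one quoted above, the sample URLs are made up) showing that it matches ordinary Google redirect links regardless of TLD:

```python
import re

# Blacklist pattern quoted above; it matches any "google.<tld>...​/url?" link.
pattern = re.compile(r"\bgoogle\..*?\/url\?")

urls = [
    "http://www.google.de/url?q=http://example.org",          # redirect link
    "http://www.google.com/url?sa=t&url=http://example.org",  # redirect link
    "http://www.google.de/search?q=example",                  # plain search, no /url?
]

for url in urls:
    blocked = bool(pattern.search(url))
    print(url, "-> blocked" if blocked else "-> allowed")
```

The first two URLs are blocked, the plain search URL is not, which is exactly the kind of overreach the request is about.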
I can confirm every word of the requester's statement; this kind of spam filtering is simply an unnecessary and unwelcome disruption of daily admin work.
That may introduce new problems, since only some people could add such content:
· Admin adds {{copyvio|...}}
· Uploader removes the tag.
· Normal user tries to revert.
Or, more subtly:
· Admin mentions a site on the VP without realising it is blacklisted.
· Bot tries to archive it, but can't add the URL to the archive page.
(In reply to comment #2) A warning to the admin adding such links should be sufficient.
This needs fixing asap, and really shouldn't take very long at all.
This really shouldn't be implemented per comment 2.
(In reply to comment #5) > This really shouldn't be implemented per comment 2. Then you're just making life impractical for admins. In fact, both issues described in comment 2 can easily be overcome with a new 'spam-filter bypass' userright added to the bot, rollbacker, and admin user groups. Anyone with one of these bits is trustworthy enough not to deliberately introduce spam links.
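The proposed userright could be modelled like any other rights check. A minimal sketch (Python, with made-up group and right names; MediaWiki itself configures this differently, in PHP):

```python
# Hypothetical group -> rights mapping carrying a 'sboverride' bypass right
# for the groups named in the comment above.
GROUP_RIGHTS = {
    "bot": {"bot", "sboverride"},
    "rollbacker": {"rollback", "sboverride"},
    "sysop": {"delete", "block", "sboverride"},
    "user": set(),
}

def may_bypass_blacklist(user_groups):
    """True if any of the user's groups carries the bypass right."""
    return any("sboverride" in GROUP_RIGHTS.get(g, set()) for g in user_groups)

print(may_bypass_blacklist(["sysop"]))  # an admin may bypass the blacklist
print(may_bypass_blacklist(["user"]))   # a normal user may not
```

The blacklist check would then simply be skipped for any user for whom `may_bypass_blacklist` returns true.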
(In reply to comment #4) > This needs fixing asap, and really shouldn't take very long at all. Please see http://www.mediawiki.org/wiki/Bugzilla/Fields for the meaning of the severity and priority values. Resetting.
Agreed with comment 2. Wikis with bad spam blacklists should sort those out rather than making their sysops unaware of the issues with it.
(In reply to comment #8) >Agreed with comment 2. Wikis with bad spam blacklists should sort those out There are no "bad" entries in the spam blacklist. But as soon as a URL matches a regex on the meta blacklist ([[:m:Spam blacklist]]), the search begins for why the action was prevented. At a minimum, a meaningful error message is required, providing a) the origin of the match, b) the regex that matched, and c) how to fix it (e.g. local whitelisting or removal from the global blacklist). >rather than making their sysops unaware of the issues with it. They are not "unaware" if a warning as suggested under comment 3 is added. The API also supports warnings.
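A minimal sketch of the kind of error message requested in points a)–c) above (Python; the blacklist entries and the `explain_block` helper are hypothetical, and the real blacklist code works differently):

```python
import re

# Hypothetical blacklist entries as (regex, origin) pairs. Real entries live
# on [[:m:Spam blacklist]] (global) and MediaWiki:Spam-blacklist (local).
BLACKLIST = [
    (r"\bgoogle\..*?\/url\?", "global (meta) blacklist"),
    (r"\bexample-spam\.com\b", "local blacklist"),
]

def explain_block(url):
    """Return an error message naming (a) the origin, (b) the matching
    regex, and (c) how to fix it; return None if the URL is allowed."""
    for regex, origin in BLACKLIST:
        if re.search(regex, url):
            fix = ("add the URL to the local whitelist, or request removal "
                   "from the global blacklist on Meta"
                   if "global" in origin
                   else "request removal from the local blacklist")
            return (f"The URL {url} was blocked.\n"
                    f"  Origin: {origin}\n"
                    f"  Matching regex: {regex}\n"
                    f"  To fix: {fix}")
    return None

print(explain_block("http://www.google.de/url?q=http://example.org"))
```

With a message like this, neither admins nor bots would have to guess which entry, on which wiki's list, caused the block.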