Last modified: 2014-10-28 20:24:15 UTC
In addition to just the part that matches the regex.
I'm not sure if there will be a simple way to do this without breaking b/c...
I'm still not completely comfortable with the idea of logging full (blacklisted) URLs like this. I think this may have been an intentional design decision?
(In reply to comment #2)
> I think this may have been an intentional design decision?

No, it was an oversight on my part.
(In reply to comment #2)
> I'm still not completely comfortable with the idea of logging full
> (blacklisted) URLs like this. I think this may have been an intentional
> design decision?

For contrast, the abuse log (Extension:AbuseFilter) already records every detail of an editing action, even when the edit is rejected.
I think this REALLY should be added ASAP - spammers use redirects to spam their sites anyway, and hits like 'goo.gl', 'ow.ly', and 'tinyurl.com' do not help at all. Having the full link lets us see what is actually being linked to, and whether the spam problem still exists (please do 'disable' the links by removing the 'http://' part, so nobody accidentally clicks a bad link). Thanks! --~~~~
Can we please have a fix for this? In addition, a way to find links that someone attempted to spam would be a nice addition as well. It is now nigh impossible to find who tried to add http://www.xxx.com.
Un-cookie licking.
Change 169314 had a related patch set uploaded by Ejegg: Log full URLs on spam blacklist hit https://gerrit.wikimedia.org/r/169314
(In reply to Kunal Mehta (Legoktm) from comment #1)
> I'm not sure if there will be a simple way to do this without breaking b/c...

What's your concern re: breakage? Is the content of Special:Log being parsed by things that expect just the matching domain? Or is the worry that regexes shared between wikis running different versions of this extension would be inconsistent? The patch I submitted shouldn't have the latter problem, as it constructs a full-line-matching regex for logs only when the initial regex detects a match.
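The two-stage approach described above can be sketched roughly as follows. This is an illustrative Python sketch only, not the actual SpamBlacklist PHP code from the patch; the function name, the URL-character class, and the sample domain are all assumptions made for the example:

```python
import re


def find_full_urls(text, blacklist_regex):
    """Two-stage match: run the plain blacklist regex first, and only on a
    hit build a wider pattern that captures the whole URL for logging.

    Illustrative sketch only -- names and the URL pattern are assumptions,
    not the real extension code.
    """
    if not re.search(blacklist_regex, text):
        return []  # common case: no blacklisted match, no extra work

    # Second stage: wrap the blacklist fragment so the capture extends
    # over the full URL (scheme through the last URL-safe character).
    full_pattern = r'https?://[^\s"<>]*(?:' + blacklist_regex + r')[^\s"<>]*'
    return re.findall(full_pattern, text)


# Example: the log entry would get the full target, not just the domain.
urls = find_full_urls(
    'spam at http://tinyurl.com/abc123 here',
    r'tinyurl\.com',
)
```

The point of ordering it this way is that the expensive full-URL pattern is only ever constructed and run after the cheap blacklist regex has already found a match, so the no-spam fast path is unaffected.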