Last modified: 2013-08-27 23:19:34 UTC
See https://gerrit.wikimedia.org/r/75253. One option might be to add parserTests options that specify both the selser change to test and the expected result. Another option would be to hook up mocha or some other test runner so that we can write non-parserTests test cases for these sorts of things.
Is there a reason why this won't be caught with exhaustive selser testing?
I'll leave this for Subbu to answer. Subbu: is exhaustive selser testing an adequate replacement for the tests that you've been doing manually to verify fostering fixes?
For this patchset, adding that snippet as a test and letting the selser test runner generate a change against it would cover this example. But bug 51718 makes this harder, because wt2wt will show a diff and fail the selser test. This bug report is quite specific; the more general problem is that the selser change generator cannot produce all kinds of edit patterns. So the real question is whether a selser option could give us greater control over testing those scenarios. I ran into a couple of examples of this while working on patches, but they are escaping me right now. We could give this some thought and either repurpose this bug, or open a new one that spells out the gaps in our current setup and proposes a solution.
Generating selser change permutations iteratively, instead of with generate-and-test as discussed in bug 50316, can make selser testing exhaustive. Are the gaps you are referring to in assignment generation, or in the kinds of changes we currently support?
The kinds of changes we can generate. The only changes currently generated are text-based, but real edits add, delete, and move entire DOM subtrees, not just text nodes.
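For concreteness, here is a sketch of the structural edit operations a text-only change generator cannot express: inserting, deleting, and moving whole subtrees. The tree representation and function names are illustrative, not Parsoid's DOM API:

```javascript
// Hypothetical sketch of structural (subtree-level) edits, as opposed
// to the text-node-only changes the current generator produces.
function insertSubtree(node, index, subtree) {
  node.children.splice(index, 0, subtree);
}

function deleteSubtree(node, index) {
  return node.children.splice(index, 1)[0];
}

function moveSubtree(node, from, to) {
  insertSubtree(node, to, deleteSubtree(node, from));
}

// Example: reordering two sibling subtrees, an edit pattern that no
// combination of text-node changes can produce.
const body = { name: 'body', children: [
  { name: 'p', children: [] },
  { name: 'table', children: [] },
] };
moveSubtree(body, 0, 1);
console.log(body.children.map(c => c.name)); // [ 'table', 'p' ]
```

A change generator extended with operations like these could exercise the fostering and reordering scenarios currently only testable by hand.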
https://gerrit.wikimedia.org/r/#/c/81429/ is one more example where our testing setup gets in the way of writing a whole batch of tests that would catch these errors early and prevent regressions in the future.