Last modified: 2014-08-01 12:59:33 UTC
We should track the mean time to merge (probably per repo?) so we can see trends and the effects of changes such as improved test coverage or new hardware.
Created attachment 15992 [details]
Mean time spent in Zuul/Jenkins by Mediawiki core changes after they are merged

Zuul reports a bunch of metrics to Graphite over statsd. The metrics are described at http://ci.openstack.org/zuul/statsd.html#metrics . One of them is the time a change spends in the queue (which is the Zuul overhead plus the time to build all the jobs).

We could graph the mean time of jobs triggered after a merge somewhere, though I am not sure how helpful it is going to be. The metric is zuul.pipeline.postmerge.mediawiki.core.resident_time.mean

The attached graph represents the last three months. The URL is:
http://graphite.wikimedia.org/render/?width=856&height=600&_salt=1405949992.208&from=-3months&target=zuul.pipeline.postmerge.mediawiki.core.resident_time.mean

Note that the time to run jobs depends on how many jobs are already executing on the server; Zuul/Jenkins holds a build until a slot frees up, so the metric is not very representative.
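For reference, here is a minimal sketch of pulling the same metric through Graphite's standard render API as JSON instead of a rendered PNG. It assumes graphite.wikimedia.org exposes the usual /render endpoint and that the Python requests library is available; the metric name and the three-month window are taken from the comment above.

#!/usr/bin/env python
# Sketch: fetch the post-merge resident_time mean for mediawiki/core from
# Graphite's render API and print the datapoints.
# Assumption: graphite.wikimedia.org serves the standard /render endpoint.
import requests

GRAPHITE = 'http://graphite.wikimedia.org/render/'
METRIC = 'zuul.pipeline.postmerge.mediawiki.core.resident_time.mean'

params = {
    'target': METRIC,
    'from': '-3months',   # same window as the attached graph
    'format': 'json',     # JSON datapoints instead of a PNG image
}

resp = requests.get(GRAPHITE, params=params)
resp.raise_for_status()

for series in resp.json():
    print(series['target'])
    # Each datapoint is a [value, unix_timestamp] pair; value is null for
    # intervals where no change went through the pipeline.
    for value, timestamp in series['datapoints']:
        if value is not None:
            print('  %s -> %s' % (timestamp, value))

The same query parameters could be fed to a dashboard instead of a script; the point is only that the data is already available from Graphite without any extra instrumentation.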