Last modified: 2014-06-28 05:17:51 UTC
http://growthdoc.wmflabs.org/GuidedTour/ (should be an HTTP redirect) and http://proveit.wmflabs.org/ (not a redirect) are both responding with 502 Bad Gateway. The host they're connecting to is still up (82-day uptime, in fact), but I don't know how to test it without the proxy. Both are still in the proxy list at https://wikitech.wikimedia.org/wiki/Special:NovaProxy
They proxy to docs.eqiad.wmflabs.
Upon further investigation, /etc/apache2/sites-enabled/ is now being managed by puppet. It is emptying these entries, but there is no "This directory is managed by puppet" warning.
I think ori refactored the apache* classes today. ori, could this be related?
Yes, probably. Are these files declared in Puppet?
(In reply to Ori Livneh from comment #4)
> Yes, probably. Are these files declared in Puppet?

Nope.
So the new puppet module installs virthosts and also removes any virthosts that aren't currently installed? That appeals to the obsessive/compulsive in me but does seem a little harsh for labs users. Could puppet use an alternative to sites-available and permit non-puppetized links on sites-enabled? Or better yet, somehow know to only wipe out files that it itself created?
Or, heck, just check $realm and skip the deletion step on labs.
(In reply to Andrew Bogott from comment #6)
> So the new puppet module installs virthosts and also removes any virthosts
> that aren't currently installed? That appeals to the obsessive/compulsive
> in me but does seem a little harsh for labs users. Could puppet use an
> alternative to sites-available and permit non-puppetized links on
> sites-enabled? Or better yet, somehow know to only wipe out files that it
> itself created?

Puppetizing it probably isn't a big deal for me. However, I obviously can't commit this to operations/puppet. That means I need to do some kind of self-hosted thing.

But https://wikitech.wikimedia.org/wiki/Help:Self-hosted_puppetmaster says, "This means that as soon as you add role::puppet::self the instance will stop receiving updates that are pushed into gerrit.", which I'd like to avoid if possible. Is there any way of having local puppet manifests on a Labs box without losing automatic updates?

(As for whether Puppet should actually clear the directory of unpuppetized stuff, I don't know.)
(In reply to Matthew Flaschen from comment #8)
> (In reply to Andrew Bogott from comment #6)
> > So the new puppet module installs virthosts and also removes any virthosts
> > that aren't currently installed? That appeals to the obsessive/compulsive
> > in me but does seem a little harsh for labs users. Could puppet use an
> > alternative to sites-available and permit non-puppetized links on
> > sites-enabled? Or better yet, somehow know to only wipe out files that it
> > itself created?
>
> Puppetizing it probably isn't a big deal for me. However, I obviously can't
> commit this to operations/puppet. That means I need to do some kind of
> self-hosted thing.
>
> But https://wikitech.wikimedia.org/wiki/Help:Self-hosted_puppetmaster says,
> "This means that as soon as you add role::puppet::self the instance will
> stop receiving updates that are pushed into gerrit.", which I'd like to
> avoid if possible. Is there any way of having local puppet manifests on a
> Labs box without losing automatic updates?
>
> (As far whether Puppet should actually clear the directory of unpuppetized
> stuff, I don't know).

/etc/apache2/apache2.conf is not managed by Puppet, so you can glob config files from an additional path of your choosing by adding an "Include /etc/apache2/unpuppetized-sites/*" (or whatever) line to that file.
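A minimal sketch of that suggestion, using a scratch directory in place of the real /etc/apache2 so it can be tried safely. The directory name "unpuppetized-sites" and the vhost file are illustrative placeholders, not from the actual deployment:

```shell
# Stand-in for /etc/apache2 (illustrative; a real box would edit the
# actual apache2.conf and then reload apache2).
APACHE_ROOT=$(mktemp -d)
mkdir -p "$APACHE_ROOT/unpuppetized-sites"
touch "$APACHE_ROOT/apache2.conf"

# Append the glob Include once, guarding against duplicate lines.
LINE='Include unpuppetized-sites/*'
grep -qxF "$LINE" "$APACHE_ROOT/apache2.conf" \
    || echo "$LINE" >> "$APACHE_ROOT/apache2.conf"

# A vhost dropped here would survive Puppet runs, since Puppet only
# purges sites-enabled, not this extra directory.
cat > "$APACHE_ROOT/unpuppetized-sites/proveit.conf" <<'EOF'
<VirtualHost *:80>
    ServerName proveit.wmflabs.org
</VirtualHost>
EOF

grep Include "$APACHE_ROOT/apache2.conf"
```

Since apache2.conf itself is unmanaged, the added line would not be reverted on the next Puppet run.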
(In reply to Matthew Flaschen from comment #8)
> [...]
> But https://wikitech.wikimedia.org/wiki/Help:Self-hosted_puppetmaster says,
> "This means that as soon as you add role::puppet::self the instance will
> stop receiving updates that are pushed into gerrit.", which I'd like to
> avoid if possible. Is there any way of having local puppet manifests on a
> Labs box without losing automatic updates?
> [...]

At the moment no, but cf. bug #66683.
The fact that puppet actively destroys your local apache config is silly. If you want to puppetize and get me to merge your changes into the prod branch that's great, but we definitely need a path to support local, stable changes that don't require you to set up a puppetmaster. Possibly Matt's suggestion with apache2.conf will work... failing that, I expect Ori has a master plan which will get us back to a point where we support local vhosts. Ori, is that wrong?
(In reply to Andrew Bogott from comment #11)
> we definitely need a path to support local, stable changes that
> don't require you to set up a puppetmaster.

Not for production, IMO. But I do see the use-case for labs. I propose the following:

  file { '/etc/apache2/local-sites':
      ensure  => directory,
      owner   => 'root',
      group   => 'root',
      mode    => '0755',
      require => Package['apache2'],
  }

  file_line { 'load_local_sites':
      path    => '/etc/apache2/apache2.conf',
      line    => 'Include local-sites/*',
      require => File['/etc/apache2/local-sites'],
      notify  => Service['apache2'],
  }
(These resources would be declared in a Labs-specific manifest.)
Sure, that'll work.
Well... it's moot now I guess, but I'd really prefer that puppet FAIL when there are untracked vhosts, rather than just cheerfully destroy them.
In fact, there's an even easier way to do this that doesn't require tampering with apache2.conf, and it's to replace the second (file_line) resource with:

  apache::site { 'local_sites':
      content => "Include /etc/apache2/local-sites/*\n",
      require => File['/etc/apache2/local-sites'],
  }
Change 142439 had a related patch set uploaded by Ori.livneh: On Labs, provision an Apache config dir that is not managed by Puppet https://gerrit.wikimedia.org/r/142439
Change 142439 merged by Andrew Bogott: On Labs, provision an Apache config dir that is not managed by Puppet https://gerrit.wikimedia.org/r/142439
ok -- now (well, in an hour or so) you should be able to recreate your sites by hand by placing the vhost files in /etc/apache2/sites-local. Please try it out and report back here how it works.
Thanks, I added symbolic links there, pointing to sites-available, and it's fixed. For the record, sites-available was never being erased AFAICT, so sites were being disabled but not being erased completely.
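A sketch of the symlink fix described above, again using a scratch directory in place of /etc/apache2 (the directory layout matches the thread; the vhost file name is illustrative):

```shell
# Stand-in for /etc/apache2. sites-available was never purged, so the
# surviving configs can simply be relinked into the new Puppet-ignored
# sites-local directory.
APACHE_ROOT=$(mktemp -d)
mkdir -p "$APACHE_ROOT/sites-available" "$APACHE_ROOT/sites-local"
touch "$APACHE_ROOT/sites-available/proveit.conf"   # illustrative vhost

# Symlink each surviving config into sites-local.
for conf in "$APACHE_ROOT"/sites-available/*.conf; do
    ln -sf "$conf" "$APACHE_ROOT/sites-local/$(basename "$conf")"
done

ls -l "$APACHE_ROOT/sites-local"
```

Symlinking (rather than copying) keeps sites-available as the single source of truth, mirroring how a2ensite populates sites-enabled.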