Last modified: 2014-02-22 00:13:39 UTC
Before we can really rely on logstash, some work is needed to ensure that log events from the various input systems reach the cluster via a reliable transport and that multiple logstash nodes can consume that input. In the current udp2log relay setup, only the logstash1001 instance processes all incoming logs. Any time that node is restarted, all log events are lost until it comes back up (2-3 minutes).
<cool_aid_advertisement> Why not use Kafka as the messaging bus? That would solve the reliability / durability concerns, it's already operated by Ops for the Analytics team so it builds on existing infrastructure, and there seems to be a producer/consumer plugin for logstash available at https://github.com/joekiller/logstash-kafka (hahaha more debianization fun) </cool_aid_advertisement>
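For the sake of discussion, a Kafka-backed pipeline might look roughly like the sketch below. This is a hedged illustration, not a tested config: the option names (`zk_connect`, `topic_id`, `group_id`) follow the logstash-kafka plugin's README at the time, and the host names, topic name, and consumer group are made up for this example.

```
# Hypothetical logstash pipeline using the logstash-kafka plugin.
# All names here (zookeeper hosts, topic, group) are placeholders.
input {
  kafka {
    zk_connect => "zk1001.example:2181,zk1002.example:2181"  # assumed ZooKeeper ensemble
    topic_id   => "logstash-events"                          # hypothetical topic name
    group_id   => "logstash-consumers"                       # one group => each event consumed once
  }
}
output {
  elasticsearch {
    host => "localhost"
  }
}
```

With a shared consumer group, several logstash nodes could pull from the same topic, so restarting one node would let the others keep draining the queue instead of dropping events the way the single logstash1001 relay does today.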