A new version of Ekanite, the syslog server with built-in search, has been released. v1.2.1 includes a very important bug fix for an issue that affected TCP operation.
You can download v1.2.1 from the GitHub releases page.
A new version of Ekanite, the syslog server with built-in search, has been released. v1.2.0 includes some minor fixes and improvements.
You can download v1.2.0 from the GitHub releases page.
A new version of Ekanite, the syslog server with built-in search, has been released. v1.1.0 includes an important bug fix related to TCP connection handling, as well as some other minor fixes and improvements.
You can download v1.1.0 from the GitHub releases page.
Ekanite is an open-source Syslog server with built-in log search. Thanks to some nice work by Fabian Zaremba, Ekanite now supports searching your logs via a browser.
If you’d like to understand more about the design and development of Ekanite, check out this series of posts.
It’s been 18 months since the first commit to my first significant Go project — syslog-gollector. After an initial burst of activity to create a functional Syslog Collector that streamed to Apache Kafka, the source code hadn’t been updated much. But today I received a report that it no longer built, so I spent some time porting the code to the latest version of Shopify’s Sarama Kafka client library.
It was amusing to see how naive much of my early Go code was.
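If you’re curious what the newer Sarama API looks like, here’s a minimal sketch of producing a log line to Kafka with a SyncProducer. The broker address and topic name are assumptions for illustration, not syslog-gollector’s actual configuration.

```go
package main

import (
	"log"

	"github.com/Shopify/sarama"
)

func main() {
	config := sarama.NewConfig()
	config.Producer.Return.Successes = true // SyncProducer requires this

	// Broker address and topic here are illustrative assumptions.
	producer, err := sarama.NewSyncProducer([]string{"localhost:9092"}, config)
	if err != nil {
		log.Fatalf("failed to create producer: %v", err)
	}
	defer producer.Close()

	msg := &sarama.ProducerMessage{
		Topic: "logs",
		Value: sarama.StringEncoder("<134>1 2015-06-01T00:00:00Z host1 app1 - - - hello, world"),
	}
	partition, offset, err := producer.SendMessage(msg)
	if err != nil {
		log.Fatalf("failed to send message: %v", err)
	}
	log.Printf("message stored at partition %d, offset %d", partition, offset)
}
```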
This is the last part of a 3-part series “Designing and building a search system for log data”. Be sure to check out part 1 and part 2.
In the last post we examined the design and implementation of Ekanite, a system for indexing log data and making that data available for search in near-real-time. In this final post, let’s see Ekanite in action.
Continue reading Designing a search system for log data — part 3
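As a taste of what part 3 covers, here’s a minimal Go sketch that sends an RFC5424-formatted log line to Ekanite over TCP. The listen address and the newline-delimited framing are assumptions for this sketch; the actual address is whatever the server was started with.

```go
package main

import (
	"fmt"
	"log"
	"net"
)

func main() {
	// The address is an assumption for this sketch; Ekanite's actual
	// TCP listen address is set when the server is started.
	conn, err := net.Dial("tcp", "localhost:5514")
	if err != nil {
		log.Fatalf("failed to connect: %v", err)
	}
	defer conn.Close()

	// A minimal RFC5424 syslog message; the fields are illustrative.
	msg := "<134>1 2015-05-05T12:00:00Z host1 app1 1234 - - database connection refused"
	if _, err := fmt.Fprintf(conn, "%s\n", msg); err != nil {
		log.Fatalf("failed to send: %v", err)
	}
}
```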
This is the second part of a 3-part series “Designing and building a search system for log data”. Be sure to check out part 1. Part 3 follows this post.
In the previous post I outlined some of the high-level requirements for a system that indexes log data and makes that data available for search, all in near-real-time. Satisfying these requirements involves making trade-offs, and sometimes there are no easy answers.
Continue reading Designing a search system for log data — part 2
This is the first part of a 3-part series “Designing and building a search system for log data”. Part 2 is here, and part 3 is here.
For the past few years, I’ve been building indexing and search systems for various types of data, often at scale. It’s fascinating work — only at scale does O(n) really come alive. Developing embedded systems teaches you how computers really work, but working on search systems and databases teaches you that algorithms really do matter.
Continue reading Designing a search system for log data — part 1
I’ve started coding in Go (golang), and I received some advice recently from Robert Griesemer, whom I was fortunate enough to sit beside at a recent Go Meetup. To learn Go, Robert suggested that I code a solution in Go for a problem I had previously solved in a different language.
AWS have posted the video online of Jim Nisbet’s and my talk at AWS re:Invent 2013. In it, Jim and I describe the system we built at Loggly, which uses Apache Kafka, Twitter Storm, and elasticsearch to build a high-performance log aggregation and analytics SaaS solution running on AWS EC2.
Continue reading Infrastructure at Scale: Apache Kafka, Twitter Storm and elasticsearch