Following up on my earlier post, it has been pretty straightforward so far to migrate this blog from Rackspace to GCP. It’s going pretty much as expected, though the architecture will be slightly different from what I initially thought.
Tag Archives: operations
Meaningful Uptime Measurements for the Cloud
Another interesting paper came my way, thanks to the Morning Paper mailing list. “Nines are Not Enough: Meaningful Metrics for Clouds” discusses a topic that I deal with regularly in my role at Google.
SLIs, SLOs, and SLAs are easy to discuss in a general sense, but surprisingly subtle to put into practice. This paper, authored by Google engineers, explores why this is so, and offers a new framework for thinking about them.
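To make that concrete, here is a minimal sketch (not from the paper) of the kind of request-based availability SLI and SLO the discussion revolves around. The traffic numbers and the three-nines target are invented for illustration.

```go
package main

import "fmt"

// A toy availability SLI: the fraction of requests that were "good".
// The SLO is a target on that fraction over some window. Deciding what
// counts as "good", and over which window, is where the subtlety lives;
// the figures below are made up.
func main() {
	const (
		totalRequests = 1_000_000
		goodRequests  = 999_100
		sloTarget     = 0.999 // three nines over the window
	)

	sli := float64(goodRequests) / float64(totalRequests)
	errorBudget := (1 - sloTarget) * totalRequests
	errorsSpent := float64(totalRequests - goodRequests)

	fmt.Printf("SLI: %.5f (target %.3f)\n", sli, sloTarget)
	fmt.Printf("error budget: %.0f requests, spent: %.0f\n", errorBudget, errorsSpent)
	fmt.Println("SLO met:", sli >= sloTarget)
}
```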
Deploying Vallified on GCP
Since I recently joined Google Cloud Platform (GCP), I thought it was time to get some practical experience with the platform. As a result, I’m going to migrate this blog from Rackspace to GCP — specifically I’ll use GCE for WordPress, and Cloud SQL for the persistent database storage.
Monitoring: it’s not just for production
Monitoring — the measurement of your system, the gathering of telemetry, and alerting when it behaves anomalously — is key to running large-scale, modern computer systems. But what many developers today don’t realise is that monitoring can be a key part of your design cycle too.
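As a rough illustration of how cheap it is to bake measurement in from the start, here is a small sketch using Go’s standard-library expvar package to export a request counter from a toy HTTP server. The handler and the metric name are made up for the example.

```go
package main

import (
	"expvar"
	"fmt"
	"log"
	"net/http"
)

// requestCount is exported at /debug/vars alongside Go runtime stats,
// so even a prototype can be inspected while you exercise it.
var requestCount = expvar.NewInt("handler_requests_total")

func handler(w http.ResponseWriter, r *http.Request) {
	requestCount.Add(1)
	fmt.Fprintln(w, "hello")
}

func main() {
	http.HandleFunc("/", handler)
	// Importing expvar registers its handler on the default mux,
	// so counters are visible at http://localhost:8080/debug/vars.
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```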
Stop asking "how much data do you have?"
In every field there is a question that, while it sounds interesting, betrays a naiveté and lack of sophistication.
In my field — SaaS and data platforms — it’s “how much data do you have?”
rqlite v3: Globally replicating SQLite
rqlite is an open-source distributed relational database, which uses SQLite as its storage engine. rqlite is written in Go and uses Raft to achieve consensus across a set of SQLite databases. It gracefully handles leader election, and can tolerate machine failure.
With the v3 release series, rqlite can now replicate SQLite databases on a global scale, with very little effort. Let’s see it in action using the AWS EC2 cloud.
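As a taste of what that looks like from a client’s point of view, here is a small sketch that queries a node over rqlite’s HTTP API. It assumes a node listening on localhost:4001 and a hypothetical table named foo; any node in the cluster can accept the request.

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"net/url"
)

// Send a read query to an rqlite node's HTTP query endpoint and print
// the JSON response. The address and table name are assumptions made
// for this illustration.
func main() {
	q := url.Values{}
	q.Set("q", "SELECT * FROM foo")

	resp, err := http.Get("http://localhost:4001/db/query?" + q.Encode())
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	body, err := io.ReadAll(resp.Body)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(body))
}
```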
InfluxDB and the Raft consensus protocol
Designing a search system for log data — part 3
This is the last part of a 3-part series “Designing and building a search system for log data”. Be sure to check out part 1 and part 2.
In the last post we examined the design and implementation of Ekanite, a system for indexing log data, and making that data available for search in near-real-time. In this final post, let’s see Ekanite in action.
Continue reading Designing a search system for log data — part 3
Designing a search system for log data — part 2
This is the second part of a 3-part series “Designing and building a search system for log data”. Be sure to check out part 1. Part 3 follows this post.
In the previous post I outlined some of the high-level requirements for a system that indexes log data and makes that data available for search, all in near-real-time. Satisfying these requirements involves making trade-offs, and sometimes there are no easy answers.
Continue reading Designing a search system for log data — part 2
Designing a search system for log data — part 1
This is the first part of a 3-part series “Designing and building a search system for log data”. Part 2 is here, and part 3 is here.
For the past few years, I’ve been building indexing and search systems for various types of data, often at scale. It’s fascinating work — only at scale does O(n) really come alive. Developing embedded systems teaches you how computers really work, but working on search systems and databases teaches you that algorithms really do matter.
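To illustrate that point about O(n), here is a self-contained sketch (not from the series) that contrasts a linear scan with a hash-indexed lookup over the same data; the data set and sizes are invented.

```go
package main

import (
	"fmt"
	"time"
)

// At small n, a linear scan and an indexed lookup are indistinguishable;
// at large n the difference dominates. Build n integers, then time
// finding the last one both ways.
func main() {
	const n = 5_000_000
	items := make([]int, n)
	index := make(map[int]int, n)
	for i := range items {
		items[i] = i
		index[i] = i
	}
	target := n - 1

	start := time.Now()
	for _, v := range items { // O(n): scan every element
		if v == target {
			break
		}
	}
	fmt.Println("scan:  ", time.Since(start))

	start = time.Now()
	v := index[target] // O(1): hash lookup
	fmt.Println("lookup:", time.Since(start), "value:", v)
}
```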
Continue reading Designing a search system for log data — part 1