rqlite is a lightweight, open-source, distributed relational database written in Go, with SQLite as its storage engine. v5.7.0 is now out, and introduces a major implementation change: replacing JSON encoding with Protocol Buffers for Raft log entries.
For those new to Raft, it is a method for ensuring that a set of computers all agree on some state, even in the event of computer failure or network faults. Such methods are known as Distributed Consensus protocols. In particular, Raft generates a log of commands, which records all changes that have taken place, and on which all the computers agree. In rqlite, SQLite statements are stored in these commands and written durably to disk before being applied to the SQLite database.
There were a few reasons for the change.
- Reduce disk usage. Protobuf encoding is more efficient than JSON, so every addition to the Raft log now means fewer bytes written to disk, often far fewer.
- More easily support compression. rqlite supports batching SQLite requests, as well as arbitrarily long SQLite statements, so requests can often benefit from compression. This again means fewer bytes need to be written to disk. While JSON can also be compressed, Protocol Buffers offers more sophisticated data modeling, making it a little easier to mark a log entry as compressed (which is necessary for robust decoding of that data later on).
- Potentially increase rqlite performance, thanks to the reduced disk load.
- More sophisticated data modeling, to help with future development and maintenance of the core data models. Because these data models must be serialized to disk, backwards compatibility is important. Protocol Buffers makes that easier.
- Potentially form the basis of a gRPC API in the future.
This was the first time I introduced Protobufs into any of my open-source projects, and it was pretty straightforward. The data modeling went through a few iterations, but was also pretty easy. In fact, the new Protobuf types are now used throughout the code base, and allowed me to remove many other types – previously each layer of code defined its own types. While this latter approach had its advantages — clear separation of each module for example — it did result in a fair amount of data copying, as each layer had to first transform data into another slightly different data model before calling the next layer.
Initially I also coded top-down, adding the new types in the HTTP serving layer, followed by the Store layer. This quickly became awkward and slow, and wasn't a good guide to the ongoing Protobuf data modeling either. Development proceeded much better when I abandoned the top-down approach and started adding the data models from the bottom up: starting with the DB layer, then moving to the Store layer, and finishing with the HTTP layer.
Overall I am happy with the new code and the use of Protobufs, but the rqlite codebase is beginning to grow in complexity. A major goal for rqlite has always been a clean design and implementation. Protobufs are definitely more complex than simple JSON, and therefore the codebase is now more complicated. However the generated Go support for Protobufs shouldn’t need much maintenance going forward, so hopefully the day-to-day development complexity remains low.
The performance improvements have been disappointing, however, and I've seen only marginal improvements in write throughput. Basic testing shows that performance has increased for some workloads by about 10%. The reason is that the underlying Raft system calls fsync after every write to the log, and that is the real performance bottleneck. That call can't be avoided, not without risking data loss.
However, for highly compressible requests, disk usage has gone way down, and that's a big win. And even for small SQL statements, which are not compressed, Protobuf encoding still saves significant space: smaller SQL commands take up about 50% less disk space when encoded.
5.7.0 can also read log entries encoded in older versions, so it’s fully backwards-compatible with earlier releases in the 5.x series.
So if you’ve been using rqlite for large SQL statements, or use batching, be sure to check out release 5.7.0. You can download it from the releases page.