Something just doesn’t feel right about node.js.
I’ve been coding in it for almost a year now, and it’s been fun, but I’ve decided it’s just a waypoint to somewhere better.
Bjarne Stroustrup has a great paper on his website titled Evolving a language in and for the real world: C++ 1991-2006. It provides fascinating insights into the development of the language and the challenges involved, and discusses interesting design ideas. If you have even a basic understanding of C++, it’s such a worthwhile read.
SQLite is a “self-contained, serverless, zero-configuration, transactional SQL database engine”. However, it doesn’t come with replication built in, so if you want to store mission-critical data in it, you better back it up. The usual approach is to continually copy the SQLite file on every change.
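One safe way to take such a copy while the database is live is SQLite’s online backup API, rather than a raw file copy. Here is a minimal sketch in C++ of that approach; the file names are placeholders, and error handling is kept to a bare minimum.

```cpp
// backup.cpp -- sketch of copying a live SQLite database with the backup API.
// Build roughly as: g++ -std=c++11 backup.cpp -lsqlite3
#include <sqlite3.h>
#include <cstdio>

int main()
{
    sqlite3 *src = nullptr, *dst = nullptr;
    sqlite3_open("data.db", &src);      // the live database (placeholder name)
    sqlite3_open("backup.db", &dst);    // where the copy goes

    // Copy every page of the "main" database from src into dst.
    sqlite3_backup *b = sqlite3_backup_init(dst, "main", src, "main");
    if (b != nullptr) {
        sqlite3_backup_step(b, -1);     // -1 means copy all remaining pages
        sqlite3_backup_finish(b);
    }
    std::printf("backup result: %s\n", sqlite3_errmsg(dst));

    sqlite3_close(dst);
    sqlite3_close(src);
    return 0;
}
```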
I wanted SQLite, I wanted it distributed, and I really wanted a more elegant solution for replication. So rqlite was born.
So far, coding in Go has been fun. It comes with functionality that makes it clear the Go team really has been writing system software (useful stuff like this, and this). And then I read about the Go Memory Model, and had my consciousness raised.
I’ve started coding in Go (golang), and I recently received some advice from Robert Griesemer, whom I was fortunate enough to sit beside at a Go Meetup. To learn Go, Robert suggested that I code a solution in Go for a problem I had previously solved in a different language.
Over 16 years I’ve written software up and down the entire stack. At the very start of my career I wrote boot ROM software for specialized embedded devices. This kind of programming taught me so much about how computers really work.
Java is the predominant language of Big Data technologies. HBase, Lucene, elasticsearch, Cassandra – all are written in Java and, of course, run inside a Java Virtual Machine (JVM). Some other important Big Data technologies, while not written in Java, also run inside a JVM. Examples include Apache Storm, which is written in Clojure, and Apache Kafka, which is written in Scala. This makes basic knowledge of the JVM quite important when it comes to deploying and operating Big Data technologies.
In my last blog post I explained why writing design documents is such a powerful approach to building well-engineered systems. But what should one document? When it comes to software, if one documents too much, the content of the documentation can become inaccurate very quickly, and inaccurate documentation is quickly ignored.
Many software engineers never write design documents. Design documentation takes time, and implementations often proceed so far without any documentation that if it happens at all, it becomes an act of recording what has already been done, a tedious task at the best of times.
Many software engineers argue “the code exists, it’s running, it’s working, let’s move on and build the next thing.”
My father worked for many years in QA at Beckman, an American medical instruments firm. His job was to ensure that newly-manufactured centrifuge rotors would hold up when spun at thousands of RPMs. He used to tell me that the Beckman philosophy could be summarised in one sentence — “There is no substitute for quality”.
I came across a very readable paper on distributed systems, Distributed systems for fun and profit. I recommend it to anyone interested in learning more about distributed systems and the challenges involved in designing, building, and operating them.
As technical lead at Loggly, I am ultimately responsible for a well-engineered infrastructure. And one way to ensure the system is designed and implemented well is to stay as close as possible to the code, making sure that the team and I write quality software.
But it can be difficult to complete the design and implementation of the features I am responsible for, ensure that what the team produces is well-implemented, and understand every line of code — there is only so much time in the day.
When running a large real-time processing system, monitoring is critical. But it does more than let you keep an eye on your system. During development it lets you test hypotheses about how the system works and how it performs when certain parameters are changed, and it takes the guesswork out of working with dynamic systems.
The Boost ASIO Library is a wonderful piece of software. I’ve built high-performance, event-driven I/O C++ programs with it that just scream; it works very well. However, there is one subtlety with timers, specifically when cancelling timers that have already expired.
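The subtlety, as documented for Boost.Asio’s deadline_timer, is that cancel() only affects wait operations that have not yet completed: if the timer has already expired, the completion handler still runs and is passed a success error code rather than operation_aborted. The following is a minimal sketch of that behaviour; the file name and build line are just illustrative.

```cpp
// expired_timer.cpp -- a sketch of cancelling an already-expired timer.
// Build roughly as: g++ -std=c++11 expired_timer.cpp -lboost_system -pthread
#include <boost/asio.hpp>
#include <iostream>

int main()
{
    boost::asio::io_service io;

    // A short timer that we will later try to cancel.
    boost::asio::deadline_timer short_timer(io, boost::posix_time::milliseconds(10));
    short_timer.async_wait([](const boost::system::error_code& ec) {
        // If the timer had already expired when cancel() was called, this
        // handler still runs, and ec is success, not operation_aborted.
        if (ec == boost::asio::error::operation_aborted)
            std::cout << "short timer: cancelled\n";
        else
            std::cout << "short timer: expired normally\n";
    });

    // A longer timer whose handler attempts the cancellation.
    boost::asio::deadline_timer long_timer(io, boost::posix_time::milliseconds(100));
    long_timer.async_wait([&short_timer](const boost::system::error_code&) {
        // By now the short timer has expired and its handler has already run
        // (or is queued), so there is nothing left to cancel: cancel() returns 0.
        std::size_t n = short_timer.cancel();
        std::cout << "wait operations cancelled: " << n << "\n";
    });

    io.run();
    return 0;
}
```

The practical consequence is that a completion handler cannot rely on operation_aborted alone to tell whether it was cancelled; if that distinction matters to your program, you need to track it yourself, for example with a flag set before calling cancel().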
CPU emulation, particularly of older processors, is an interesting topic.
While emulation source code for various CPU cores is easily available, I wanted to better understand how to interface the emulated CPU with my host machine. Therefore I decided to write a simple example of a host system for an emulated MOS Technology 6502 microprocessor.
The goal would be to have the emulated 6502 write “Hello, world” to the console of my Linux desktop machine.
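To make that concrete, here is a sketch of the kind of host-side glue involved. The bus_read/bus_write functions and the choice of $F001 as a memory-mapped character-output port are my own assumptions for illustration; a real 6502 core would call hooks like these on every memory access, and the loop in main() merely stands in for the emulated program, which would do the equivalent with STA instructions.

```cpp
// A sketch of one way to hook an emulated 6502 up to the host console.
#include <cstdint>
#include <cstdio>

static uint8_t ram[0x10000];              // 64 KB of emulated address space
static const uint16_t CHAR_OUT = 0xF001;  // hypothetical memory-mapped output port

// The emulated CPU would call this for every memory read.
uint8_t bus_read(uint16_t addr)
{
    return ram[addr];
}

// The emulated CPU would call this for every memory write. Writes to the
// output port are forwarded to the host console instead of being stored.
void bus_write(uint16_t addr, uint8_t value)
{
    if (addr == CHAR_OUT) {
        std::putchar(static_cast<char>(value));
        std::fflush(stdout);
    } else {
        ram[addr] = value;
    }
}

int main()
{
    // Stand-in for the emulated program: a real setup would load 6502 machine
    // code into ram[] and step the CPU core, which would perform these writes
    // itself as it executed the program.
    const char* msg = "Hello, world\n";
    for (const char* p = msg; *p != '\0'; ++p) {
        bus_write(CHAR_OUT, static_cast<uint8_t>(*p));
    }
    return 0;
}
```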
I really like having inline source when using gdb. Code Complete, by Steve McConnell, has an entire chapter explaining how you should proactively step through all code you write, and not just when you’re actively debugging an issue. Having followed this practice for a few years now, I can testify that it increases your productivity enormously. I simply can’t imagine not doing so before committing any code.
Valgrind comprises a bunch of very useful tools for detecting problems with your programs. I first came across it a couple of years back and find it excellent. In particular I use Memcheck, its memory error detector, which helps you catch errors such as memory leaks and invalid accesses. In my experience these types of errors sometimes indicate logic errors, not just places where you’ve forgotten to free some previously allocated memory, which is another reason why it is such a great tool.
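As a small, entirely contrived example of the kinds of error Memcheck reports, the following program contains both an invalid write and a leak; the file name is a placeholder, and the commands in the comment show a typical invocation.

```cpp
// leaky.cpp -- a contrived example of the errors Memcheck reports.
// Compile with debug symbols and run under Valgrind, roughly:
//   g++ -g leaky.cpp -o leaky
//   valgrind --leak-check=full ./leaky
#include <iostream>

int main()
{
    int* buf = new int[10];
    buf[0] = 7;

    // Invalid write: one element past the end of the allocation.
    // Memcheck reports this as an "Invalid write of size 4" with a stack trace.
    buf[10] = 42;

    std::cout << buf[0] << std::endl;

    // buf is never deleted, so the leak summary reports the 40 bytes
    // as definitely lost.
    return 0;
}
```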