rqlite is a lightweight, open-source, distributed relational database written in Go, which uses SQLite as its storage engine.
As part of the 7.14.2 release, I ran most of the source code through GPT-4. Let’s take a look at some of the changes it suggested, which of them I added to the code — and what this says about the future of programming.
What did GPT-4 find?
Let’s take a look.
- A mishandled type assertion, and a bug in my metrics. GPT-4 found two real bugs. One was causing my metrics to be wrong; it suggested a fix, and I changed the code immediately since it was so obviously a bug. GPT-4 also noticed a bug in my switch statement logic: it told me that I seemed to want fallthrough behaviour, but wasn't getting it. I added its suggested fixes for both issues and, in the same commit, made some style changes it suggested.
- Speeding up the test cycle. GPT-4 suggested that I cache my Go module dependencies when running tests on CircleCI. I promptly made the change, and reduced my testing time by more than a minute (about a 10% improvement). High-quality, effective testing is really important to me, so I love changes that make testing better.
- Tightened some timer handling, and reduced pressure on the garbage collector. It suggested I change my code slightly to reduce pressure on the garbage collector, as well as restructure my timer code. The changes were reasonable and resulted in some improvement, so I decided to add them.
- Detected that I had a duplicated test. This was an interesting one, obviously the result of a copy-and-paste error. It suggested that I remove the duplicated test and make some style improvements, all of which I did.
- Found unnecessary type-assertions. I made the suggested changes, which resulted in easier-to-read code.
- More suggestions to decrease garbage collection pressure. I incorporated the change — but only after I asked GPT-4 to explain why it would improve the code.
- Channel-handling improvements related to Queued Writes. GPT-4 spotted a missing unit test, which I added. It also suggested I tighten how the queues in the code are closed.
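The CircleCI change was along these lines (an illustrative sketch, not rqlite's actual config; the job name, image, and cache key are made up): `restore_cache` and `save_cache` keyed on `go.sum`, so modules are re-downloaded only when dependencies change.

```yaml
# Illustrative CircleCI job: cache the Go module download directory,
# keyed on go.sum, so unchanged dependencies are restored from cache
# rather than re-downloaded on every test run.
jobs:
  test:
    docker:
      - image: cimg/go:1.20
    steps:
      - checkout
      - restore_cache:
          keys:
            - go-mod-v1-{{ checksum "go.sum" }}
      - run: go mod download
      - run: go test ./...
      - save_cache:
          key: go-mod-v1-{{ checksum "go.sum" }}
          paths:
            - /home/circleci/go/pkg/mod
```

Bumping the `v1` prefix in the key is the usual way to invalidate the cache manually if it ever gets into a bad state.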
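The first bullet above touches on two classic Go pitfalls. As a rough sketch (the names and values here are hypothetical, not rqlite's actual code): a type assertion without the comma-ok form panics on a mismatch, which can silently break metrics handling, and switch cases in Go do not fall through unless `fallthrough` is written explicitly:

```go
package main

import "fmt"

// recordGauge is a hypothetical metrics helper: the comma-ok form of
// the type assertion avoids a panic (and a wrong metric) when the
// stored value is not the expected type.
func recordGauge(stats map[string]interface{}, key string) int {
	v, ok := stats[key].(int)
	if !ok {
		return 0 // value missing or of an unexpected type
	}
	return v
}

// describe shows explicit fallthrough: without the keyword, Go runs
// only the matching case, unlike C.
func describe(level int) string {
	out := ""
	switch level {
	case 2:
		out += "verbose "
		fallthrough // deliberately also run the next case
	case 1:
		out += "logging enabled"
	default:
		out = "logging disabled"
	}
	return out
}

func main() {
	stats := map[string]interface{}{"conns": 5, "name": "node1"}
	fmt.Println(recordGauge(stats, "conns")) // 5
	fmt.Println(recordGauge(stats, "name"))  // 0, not a panic
	fmt.Println(describe(2))                 // verbose logging enabled
}
```

The comma-ok form also covers the later bullet about unnecessary assertions: when a variable already has a concrete type, asserting it again is pure noise and can be deleted.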
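The queue-closing suggestion was in this spirit (a hypothetical sketch, not rqlite's actual Queued Writes implementation): closing an already-closed channel panics in Go, so guarding `Close` with `sync.Once` makes shutdown safe to trigger from more than one place:

```go
package main

import (
	"fmt"
	"sync"
)

// Queue is a hypothetical write queue. Close may be called more than
// once (say, by a shutdown path and a defer); sync.Once ensures the
// channels are closed exactly once.
type Queue struct {
	ch   chan string
	done chan struct{}
	once sync.Once
}

func NewQueue() *Queue {
	return &Queue{ch: make(chan string, 16), done: make(chan struct{})}
}

// Write rejects new items once the queue is closed. (A production
// version would also need to guard the send itself against a
// concurrent Close; this sketch keeps the check simple.)
func (q *Queue) Write(s string) error {
	select {
	case <-q.done:
		return fmt.Errorf("queue closed")
	default:
	}
	q.ch <- s
	return nil
}

func (q *Queue) Close() {
	q.once.Do(func() {
		close(q.done)
		close(q.ch)
	})
}

func main() {
	q := NewQueue()
	q.Write("a")
	q.Close()
	q.Close()                 // safe: no panic on double close
	fmt.Println(q.Write("b")) // queue closed
	fmt.Println(<-q.ch)       // a (buffered value still readable)
}
```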
Is this the future?
It may not be the entire future, but it’s definitely part of it. The first thing that struck me about using GPT-4 with rqlite is that it’s fun. Sometimes it feels like a supercharged linter, and at other times it feels like magic.
I’ve been developing rqlite for more than 9 years. And having to write some boilerplate code before I can push out the newest feature does get toilsome over time. GPT-4 has sped up some of this work significantly, lowering the barrier to starting new feature work. And that’s great.
I am an experienced Go programmer, however. I can very quickly evaluate whether its suggestions are meaningful, worthwhile and, most importantly, correct. Very occasionally it suggested something that was wrong, and I spotted it quickly.
But you know what else can be wrong? People. And as a senior engineer I need to check others’ code all the time (and they check mine!). But I do worry about relatively inexperienced programmers just copying-and-pasting code without thinking, and expecting it all to work.
But after just a month of using GPT-4 — and GitHub Copilot — I won’t go back. It’s too much fun, and too much of a productivity boost.
5 thoughts on “What did GPT-4 find wrong with the rqlite source code?”
Very insightful! I’ve also been using it while programming.
Did you find that specific prompts work better than others?
I’m not actually sure yet, though I find myself thinking in terms of “prompts” more and more, instead of just purely code. I’m sure you’ve seen the same change in yourself.
How did you run the code through GPT-4? What were the mechanics of that like? Did you prompt or did you just copy and paste code and say, “tell me what you would change”… I’m curious how that worked.
Yes, exactly. I usually say something like “Here is some Go code. Any improvements, suggestions, or bugs you see?” I then paste the code immediately afterwards, all in one prompt.
Amazing! These are fantastic examples of more advanced-level code improvements!
I used ChatGPT (not GPT-4) to help me build a hobby project with little programming background. I also asked it for feedback on code, but more often the improvement was from “this doesn’t work and throws this error” to “it works now!”
It’s super cool to see that it also improves code at higher levels. Also, I agree it’s fun and sometimes feels like magic!!
Especially as an inexperienced programmer, I found many benefits from working with ChatGPT. For example, people joining a new team / tech stack, etc., could benefit a ton from having this always-available buddy that helps them ramp up more quickly.
Anyway… cool stuff! Check out my post about my experience if you’re interested: