Born in 1986, and not getting my first computer until the late 1990s, I always envied my older colleagues who grew up alongside the computing revolution. I was mesmerized by their stories of writing a 3D game for the Atari in 400 lines of code at the age of 11, or hacking a programmable calculator by figuring out its memory management. They grew up in a world of limitations that challenged their creativity, whereas I grew up in a world without limits. A world that moved too fast for us to worry about code optimization. It was always easier to buy more RAM...
The world of constraints exists today
As I started working on embedded systems at Sonova, I realized that the world of hardware limitations still exists today; it is just a matter of discovering it. Hundreds of devices we encounter in daily life still face rigid restrictions on size or power consumption. As a result, they may have limited memory or computing power, yet they often aspire to give users the most captivating experience possible. Developing software for such a device is no piece of cake. With on-board memory measured in hundreds of kilobytes, you are going to count every byte, especially if the task you are trying to achieve is ambitious.
To eat or not to eat 20 bytes
One day at work, I decided to get rid of a nasty hack I had come across in the source code: a macro spanning a few lines, with some logic inside. After a quick refactor, I discovered that in the process of making the solution "nice" I had sacrificed 20 bytes of memory. Is that a steep price to pay for a nice API? Well... that depends on how much spare memory is left on the device and how nice the new API really is. Had it been really nice, I wouldn't have hesitated to keep it, but I judged that it was not nice enough to cost me 20 bytes.
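The post doesn't show the original hack, but the trade-off it describes often looks like replacing a multi-line macro with a real function. As a hypothetical sketch (the names and logic below are invented for illustration):

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical "nasty hack": a macro with logic inside. It expands
 * inline at every call site, so there is no function body in flash
 * and no call/return overhead. */
#define CLAMP_AND_STORE(buf, idx, val)  \
    do {                                \
        int v_ = (val);                 \
        if (v_ > 255) v_ = 255;        \
        if (v_ < 0)   v_ = 0;          \
        (buf)[(idx)] = (uint8_t)v_;     \
    } while (0)

/* The "nice" refactor: a proper function with a clean API. In a
 * debug build, where the compiler does not inline, this adds a
 * function body plus call/return code at each use - a handful of
 * bytes each, which is how 20 bytes can quietly disappear. */
static void clamp_and_store(uint8_t *buf, size_t idx, int val)
{
    if (val > 255) val = 255;
    if (val < 0)   val = 0;
    buf[idx] = (uint8_t)val;
}
```

Both behave identically; the difference only shows up in the memory map of an unoptimized build.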
Debug is king
But wait – it was a debug build that I had checked for memory consumption. I bet that in the release configuration, with all its compiler optimizations, my "nice" solution would not eat up those additional 20 bytes. I contemplated this for a minute and concluded that the release build didn't matter: I should use the debug configuration as my reference. Why? If I can no longer fit my debug build onto the device, I cannot develop any new features. The release build would probably use less memory, but I would not be able to fill it with all the features I would like.
The other day, I wanted to impress my colleagues with a super-hacky use of bitfields to save a bunch of memory. What I knew about bitfields was that they help save memory at the cost of a few extra operations to apply a bit mask. What I learned that day was that the code generated for accessing the bitfields can take up much more memory than you just saved on storing your data. It is always a good idea to verify whether the optimizations you believe in are really optimizing anything. They might do quite the opposite.
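A minimal sketch of that trap, with invented field names (note that `uint8_t` bitfields are a common compiler extension in GCC and Clang, not guaranteed by the C standard):

```c
#include <stdint.h>

/* Plain version: 3 bytes of data, and each field access
 * compiles to a single trivial load or store. */
struct flags_plain {
    uint8_t enabled;
    uint8_t mode;     /* only needs values 0..7  */
    uint8_t channel;  /* only needs values 0..15 */
};

/* Bitfield version: 1 byte of data, but every access now
 * compiles to load + mask + shift (and a read-modify-write
 * sequence on stores). On a small target, those extra
 * instructions in flash can cost more than the two bytes
 * of RAM the packing saved. */
struct flags_packed {
    uint8_t enabled : 1;
    uint8_t mode    : 3;
    uint8_t channel : 4;
};
```

The data savings are visible in `sizeof`; the code cost only shows up in the linker map or disassembly, which is why it is easy to miss.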
Even though I do not yet feel much of a low-level hacker myself, I finally have a story to tell. It won't beat the 400-line Atari game, but it's good enough for now. Who knows, one day there might be someone who envies me my challenging world of constraints. Someone who, like me, believes that limitations breed creativity.
Dariusz Daniłko, Senior Embedded Software Developer at Sonova