Tomas' Laboratory

I Measured How Fast I Could Type Binary Encodings by Hand. Patterns Speed Us Up.

I've spent most of my career deep in systems, protocols, and low-level tooling, and it's often painfully obvious that the slowest part of a workflow isn't the network or the hardware. Sometimes, it's the person at the keyboard.

So this week, I ran a quick experiment to measure just that: How efficiently can a human type out 32 bytes of binary data, using different encoding formats?

Sometimes, for security or operational reasons, you can’t just copy and paste sensitive data. You have to transcribe it by hand. That might mean reading a string over the phone, writing it on paper, or transferring data across an air gap. In those cases, encoding schemes go from an implementation detail to something that affects the real-world usability of a system.


The Test

I wrote a small console app that generates 32 random bytes, encodes them in each of the formats below, shows me the encoded string, and times how long it takes me to type it back.
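Something along these lines (a simplified Python sketch rather than the actual app, with only two of the encoders shown):

    import base64, secrets, time

    # Map a format name to an encoder (bytes -> str); only two shown here.
    ENCODERS = {
        "Hex": lambda b: b.hex(),
        "Base64": lambda b: base64.b64encode(b).decode(),
    }

    payload = secrets.token_bytes(32)   # 32 random bytes = 256 bits

    for name, encode in ENCODERS.items():
        target = encode(payload)
        print(f"\n{name}: {target}")
        start = time.monotonic()
        typed = input("Type it back: ")
        elapsed = time.monotonic() - start
        print(f"{256 / elapsed:.2f} bits/sec, "
              f"{'exact match' if typed == target else 'typo!'}")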

What I expected

A fairly linear tradeoff between how many characters an encoding produces and how fast I could type them.

What I found

A weirdly rich psychological layer that turned out to be the main story.


Speed vs. Sanity

Unsurprisingly, Binary was a pain. At around 2.5 bits/sec, it was miserable. But I had to do it for science. I don’t recommend it. Oddly, though, I didn’t have to worry much about typos—there were only two keys to press. Were it not for the sheer length, I could almost imagine it being pleasant. But I think that's just a coping mechanism to deal with all that pain.

Decimal was slightly better, at around 4.5 bits/sec. The only good thing I can say about it is: "thank God it’s not binary!"

Then came the family from Hex through Base91. This was where it got interesting. I expected wildly different results, but they were all fairly pleasant to type, and the bits/sec numbers clustered together more than I anticipated. Whatever I saved through shorter strings, I traded away for a more complex character set.

The sweet spot, if you’re balancing efficiency and human-friendliness, seemed to be Base58. It avoids confusing characters (like 0/O and l/1) and skips the special characters that make Base64 slightly more annoying. Short string, no symbols, readable. Not bad.
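For reference, Base58 (the Bitcoin-style alphabet) is just Base62 with the four look-alike characters removed:

    import string

    # Bitcoin-style Base58 alphabet: Base62 minus the look-alike characters.
    BASE58 = "123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz"
    BASE62 = string.digits + string.ascii_uppercase + string.ascii_lowercase

    print(sorted(set(BASE62) - set(BASE58)))   # ['0', 'I', 'O', 'l']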


The Surprise: BIP39

Then I got to BIP39, which uses a fixed dictionary of 2048 English words—think apple, glory, tornado. On paper, it should have been the worst: at 146 characters, it's even longer than decimal. But it didn’t feel that way.
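The length penalty is easy to account for: 2048 words means 11 bits per word, and the standard BIP39 scheme adds a small checksum, so 32 bytes comes out to 24 words.

    # BIP39 bit accounting for a 32-byte payload.
    entropy_bits = 32 * 8                          # 256 bits
    checksum_bits = entropy_bits // 32             # BIP39 appends entropy/32 = 8 bits
    print((entropy_bits + checksum_bits) // 11)    # 2048 words = 11 bits/word -> 24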

Typing BIP39 was… pleasant.

Unlike Binary or Base64, it didn’t feel like defusing a bomb. What I noticed the most was the possibility of fuzzy error correction. If I typed aple instead of apple, or rovust instead of robust, my brain could still read it, and in theory, a system could correct it. That’s not something you can do with hex or base32—47f2 vs 47f1 is a totally different byte, and good luck spotting the difference.
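That kind of correction is cheap to implement, too. Here’s a minimal sketch with Python’s difflib (the short WORDLIST is just a stand-in for the full 2048-word dictionary):

    import difflib

    # Stand-in for the full 2048-word BIP39 dictionary.
    WORDLIST = ["apple", "glory", "tornado", "robust", "message", "dawn"]

    def correct(word):
        # Snap a typo back onto the closest word in the list, if any is close enough.
        matches = difflib.get_close_matches(word, WORDLIST, n=1, cutoff=0.6)
        return matches[0] if matches else None

    print(correct("aple"))     # apple
    print(correct("rovust"))   # robust

The BIP39 word list is even designed so that the first four letters of every word are unique, so a plain prefix match already gets you most of the way there.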


My Own Experiment: Badabibu

Inspired by BIP39, I also tested my own encoding called Badabibu. It’s made of made-up syllables (like ba, bi, bo, etc.) strung together to form pseudo-words. Surprisingly, it held its own in terms of speed. But it didn’t offer the same fuzzy-correction potential. If you see bulapo, do you know what was meant? Probably not. You lose the semantic fallback that BIP39’s real words provide. Real words have value.

It was interesting that even though spaces technically add yet another thing to type, they actually make you go faster. With Badabibu (4) (space every 4 characters), I could type faster than Badabibu (8) (space every 8 characters).
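For the curious, here’s a toy version of the idea rather than the exact alphabet and packing I used: 16 consonant-vowel syllables, one per nibble, with a configurable group size like the (4) and (8) variants.

    # Toy syllable encoding in the spirit of Badabibu: 16 two-letter syllables,
    # one per nibble (the real alphabet and packing differ).
    SYLLABLES = [c + v for c in "bdklmnst" for v in "ai"]   # 8 consonants x 2 vowels

    def encode(data, group=4):
        nibbles = []
        for byte in data:
            nibbles.extend((byte >> 4, byte & 0x0F))
        text = "".join(SYLLABLES[n] for n in nibbles)
        # Space every `group` characters, like the (4) and (8) variants.
        return " ".join(text[i:i + group] for i in range(0, len(text), group))

    print(encode(b"\xe4\x54", group=4))   # taka kika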


Future Thoughts: Beyond BIP39?

This got me thinking: what if you went further? What if instead of single words, you used common three-word phrases as your encoding base?

BIP39 already gives you decent error recovery because the words are meaningful. But what if, instead of apple fish dawn, you had apple fell down? You’d get built-in contextual verification. The phrase makes sense, so even if there’s a typo, the human brain can tell. That’s powerful.
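The information math doesn’t change, only the dictionary: N tokens carry log2(N) bits each, whether a token is a single word or a whole phrase. A quick back-of-the-envelope for a few hypothetical phrase-list sizes:

    import math

    # Tokens needed to carry 32 bytes (256 bits), ignoring checksums.
    for size in (2048, 65536, 1_000_000):
        bits = math.log2(size)
        print(f"{size:>9} entries: {bits:5.2f} bits/token, "
              f"{math.ceil(256 / bits)} tokens")

A list of a million distinct phrases would carry about 20 bits each, so 32 bytes fits in 13 phrases. You’d type more characters overall, but every one of them would read like language.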


The Numbers

Encoding     | Speed (bits/sec) | Total Length (chars) | Sample                                    | Pleasantness (1–5)
Binary       | 2.44             | 319                  | 1110 0100 0101 0100 1101 1001 ...         | 1
Decimal      | 4.55             | 96                   | 8745 9735 2913 7014 ...                   | 2
Hex          | 7.87             | 79                   | 47f3 e682 4dac 2463 ...                   | 5
Base32       | 8.93             | 64                   | T4H7 OHTF 4VV4 XGD6 ...                   | 5
Base58       | 10.8             | 54                   | HJ8o NhGq TKsC 5yQh ...                   | 5
Base64       | 8.51             | 54                   | I3cL bBO7 WV2l LZF5 ...                   | 4
Base91       | 9.41             | 49                   | sWiM FgB{ 43_Q 5#1& ...                   | 4
BIP39        | 10.62            | 146                  | robust message dawn wash client pilot ... | 5
Badabibu (4) | 10.4             | 107                  | mabi neni duso sapu ludi nabe ...         | 5
Badabibu (8) | 9.57             | 96                   | midadubi kudalane lekokamu detimari ...   | 4

Note: Total lengths are measured from the full strings including spaces.


So What?

This tiny typing test reminded me of something I’ve seen again and again across systems, teams, and architectures: the human layer matters more than we think.

Encoding schemes are usually chosen for efficiency, compatibility, or size. But when humans are in the loop, different questions emerge: how fast can someone type it, how easily can they spot and fix a typo, and how unpleasant is the whole experience?

The takeaway for me was that what you type matters more than how much you type. BIP39 is 50% longer than the equivalent decimal encoding, and yet I went more than twice as fast even though I typed more characters overall.

When we type, randomness slows us down and patterns speed us up.


Want to try this test yourself? Let me know—maybe I’ll clean up the app and share it.