Biometrics – we should avoid them

Biometrics (the use of your own biological data as a key to a lock) sounds cool and awesome.

And it’s a disaster. Why is it a disaster? Because it’s the one password you can’t change. So if it’s hacked, you’re permanently screwed. And to date, they always get hacked, easily.

Tsutomu Matsumoto, in particular, has defeated many biometric systems with literally a few dollars’ worth of parts – his famous “gummy finger” attacks fooled fingerprint scanners with gelatin.

At best, this kind of thing would be a secondary or tertiary piece of information. At best.


The week in reading – September 16th-22nd 2013

This week involved a lot of magazine reading, some non-fiction reading, and only a little fiction reading. I think it’s because the week before I’d read several fantastic science fiction books back to back, and it’s hard for me to read “just really good” after reading “phenomenal”.

I’m trying to put things in the order I read them, although I have decided to group them into Books and Magazines. At some point, I’ll do actual book reviews.


Just My Type, Simon Garfield

The week started off with my finishing an amazing book about fonts.

I love typography. I even made my own font once upon a time. Back in the early 1990s, I wanted a better monospaced font for printing code, so I made one using Fontographer and called it Dallas Roman (although it was more inspired by Palatino than Times Roman). I’ve lost track of the actual Type 1 font itself (I was sloppy with backups some years ago), but I probably still have printouts made with the font.

Simon Garfield manages to walk through much of the history of fonts and typographers, with great stories about the fonts, the evolution of design and printing, and the classification of font families themselves. This is not a scholarly work; there are other books that attempt to put everything in order. This is a book about the love of fonts.

Read it. You might find a new love of the letterform. At the very least, you will be entertained and informed.

I have to go, though; I need to get a replacement font editor and make a new font for programmers and code…

Digital Rapture: The Singularity Anthology

I did manage to read one fiction book, or rather a collection of short stories, all around the theme of The Singularity – the idea that knowledge grows exponentially and at some point we undergo a radical transition, whether it’s AIs much smarter than humans, humans evolved as far beyond Homo sapiens as we are beyond our simian ancestors, or something else.

I enjoyed most of the stories in the book. I’d previously read some of them, but many years ago.

“The Last Question”, written by Isaac Asimov in 1956, is an early example of “what happens when things get really smart?”. It’s also notable because it’s a typical example of the portrayal of computers up until the 1970s – computers are vast mainframes covering acres of land with a few faithful priest-like attendants to enter data and read the printout.

I heartily recommend the anthology.

Reading in the Brain, Stanislas Dehaene

I tend to read a lot of books at the same time; it’s rare that I’ll read one book entirely before moving on. It’s not that books become dull or boring; I just find many things interesting.

Stanislas Dehaene is a neuroscience researcher focusing on cognition, and more narrowly on language and number processing. He’s written a fascinating book on our current understanding of how the brain manages the act of reading. The sheer amount of parallel processing thrown at letter recognition, for example, is staggering. Our knowledge has grown quickly in recent years thanks to MRI and other brain-scanning techniques, which let us actually test theories of reading. As a result, some of our assumptions have been disproved, and others expanded. For example, while there is no actual left-brain-is-for-logic, right-brain-is-for-creativity split, there are biases towards specific areas of the brain for specific kinds of processing. However, the brain can also route around damage to some degree. Reading in a healthy brain does involve the left hemisphere much more than the right, but the right hemisphere can take up the task if need be.

The central theory is that we don’t have brand-new circuitry for reading – instead, we’ve re-purposed existing mechanisms for detecting shapes in scenery and recognizing the actors in a scene. And there’s a feedback loop in the letter shapes themselves: it looks like the selection process for letter shapes was driven by what matched best against those existing mechanisms.

Learning to read alters the brain. For example, literate people have better verbal memory than illiterate people, and this is thought to be related to how both verbal and visual language processing are done. Plato was wrong.

Whole language? Doesn’t match the cognitive neuroscience evidence. Phonics is the way to teach people to read. We don’t actually recognize words in one gulp, even though it feels like it to an experienced reader. Instead, there’s a tremendous amount of parallel processing going on, and an advanced reader has a lot more circuitry tuned to the task than a neophyte.

Reading it gave me ideas for better computer vision. Existing text recognition hovers around a 99.5% accuracy rate, which isn’t all that good in reality – it means a typo every few sentences. Humans’ accuracy rate is phenomenally high, and our ability to recognize letter shapes despite rotation, size, or accidental features is also much better than anything a computer can do. It looks like we may have to throw a lot more resources at the first few stages of text recognition to get to that point, but it will be worth it.
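A quick back-of-the-envelope check on that typo rate (the 80-characters-per-sentence figure is my own assumption, not a number from the book):

```python
# How often does a 99.5%-accurate OCR system make a typo?
accuracy = 0.995
chars_per_sentence = 80          # assumed average sentence length

errors_per_char = 1 - accuracy   # 0.005 errors per character
sentences_between_errors = 1 / (errors_per_char * chars_per_sentence)
print(round(sentences_between_errors, 2))  # prints 2.5
```

So at 99.5% per-character accuracy, you hit a typo roughly every two to three sentences, which matches the complaint above.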


MIT Technology Review, September/October 2013

I bought this issue because the front matter (the letter from the editor) covered “Seven over 70”. This issue was the yearly “35 Innovators under 35” round-up, and I’ve gotten a little tired of that, but the editor’s page recognized that the list seems to say “no one over 35 innovates”, so he covered seven people he knew who were still innovating past their 70s.

This issue covered some of the advances in 3D printing, like 3D printing of a battery, or of a replacement (soft plastic) ear with integrated electronics. It was also interesting to get peeks into robotics, 3D imaging, and banking. Speaking of banking, Dwolla sounds like a moderately fresh approach to low-friction electronic purchasing. Unlike PayPal, Stripe and the others, it’s not piggybacked on the current system, so maybe it can really evolve into the super low-cost and yet secure transaction system that we desperately need. However, my money is still on something like Bitcoin, because we need anonymous money.

Technology has also brought us amazing ventures like Evans Wadongo’s solar-charged LED lanterns, made for about $23 each and distributed to villages in Kenya, displacing kerosene lanterns that cost a lot more to run (about $1 a week, which is a lot of money there) and give far worse light. Light is one of those things that lets us advance, because we can do work outside of the daytime.

Or a nuclear reactor that produces almost no waste, behaves far more safely in a runaway condition, and can be made economically in a smaller form factor. Of course, there’s a long road ahead, because this reactor exists only on paper, and the mood is against nuclear energy at the moment.

The most interesting article, though, was “The Next Silicon Valley” – a chronicle of all the attempts to grow a new one, and some analysis of why that has failed to happen to date.

Harvard Business Review, October 2013 issue

I read HBR about one issue in three. This time the tentpole of the magazine was innovation – how to engineer breakthrough ideas. There was a worthwhile article by the previous director and deputy director of DARPA (2009-2012) on their tactics, and their explanation for why DARPA has been so effective at generating useful ideas. There was a so-so article about corporate VC activity.

There was an outstanding article about knowledge workers and how corporate operations need to change. Fortunately for me, its conclusions match the direction I had already been operating in – people are organized around projects, and workers move from project to project, instead of projects coming to the workers (the latter leads to protectionism, invented work, stagnation, and bad alignment). I also like the mystery-to-heuristic-to-algorithm progression they described.

Something related to this is going to be one of the big drivers of change in the next 50 years – it’s possible that pretty much every non-skilled job will disappear.

2600 magazine, September 2013 issue

Or, in full – 2600, The Hacker Quarterly, Volume Thirty, Number Two, September 2013.

I haven’t read 2600 in several years now. It just wasn’t interesting enough to me to keep reading it. But I was in Micro Center, and the back cover of the latest 2600 issue had a funny bit about a Micro Center in Cambridge MA that would ring up purchasers of 2600 under the automatic nom-de-guerre of “B. Hacker”, so I figured I’d read an issue, especially given the past few months’ worth of NSA revelations. After all, every paranoid fever dream of the 2600 crowd seems to have turned out to be true.

But it’s still the same magazine. If I were much younger, it would still be endlessly fascinating. Even now, it’s interesting to see glimpses of subsets of cracker culture (I reserve the term ‘hacker’ for those who make things). One thing 2600 does is avoid being dramatic or self-important. It’s all subdued, and while some of it is people’s imagination, a lot of it is true. Prosaic, detailed, no punch lines, but true. And there’s no mockery from me for 2600. I respect it.

Maximum PC, November 2013

I still read this, but I should stop. It’s the best of its ilk, which is probably why it’s still around. But the hard-core PC-builder market isn’t one I live in any more. The magazine is still entertaining and has great information in it (there was a roundup of 30 consumer cloud-storage systems; surely that was all of them). But there’s only so much time in the day – focus, focus.

Economist, September 21st-27th 2013

I don’t really read or watch the news during the week. Instead, I read the Economist cover to cover every week. I’ve found that there’s very little news while it’s actually happening; instead you get talking heads using a lot of words to say the same few things over and over again. My theory is that if something truly important happened during the week, someone would tell me.

And there’s always a wealth of information in each issue. Aside from the travails of the US Administration vis-à-vis Syria and Bashar Assad, we have thoughts about what forgeries say about the idea of great art, what impact intelligent machines will have on jobs, and the decline of state capitalism (which is a good thing).

It also has a dry but wicked sense of humor. You’ll see lines like “… Gazprom, a Kremlin-run racket masquerading as a corporation…”.

I recommend a steady diet of the Economist for everyone.


Serialization libraries

I’m going to collect all the serialization libraries I know about. Over the next few weeks, I’ll use all of them and compare.


Cap’n Proto –

Apache Thrift –

Apache Avro –

Tny –

Nanopb –

empb –


Git feature: --assume-unchanged

This is a cool feature. You can mark files as “yes, I know this is tracked by Git, but I don’t want my changes committed.”

For example, say there’s a config file that’s checked in, and you need to make local edits to test with. You often accidentally commit those changes (you forget). You can tell Git to ignore changes to this file instead. Let’s say we have a file config.xml that we want to edit locally and leave edited.

git update-index --assume-unchanged config.xml

After this, we can commit all we want, and Git will ignore config.xml. If you need to commit a change to it, you can undo this with

git update-index --no-assume-unchanged config.xml

If you’ve forgotten which files you have set the “assume unchanged” bit on, you can run git ls-files -v to see; files with the bit set are listed with a lowercase status letter.
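Here’s the whole workflow end to end, in a throwaway repository (the repo setup and the config.xml contents are made up purely for illustration):

```shell
# Set up a throwaway repo with a tracked config.xml (illustration only).
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q
git config user.email "you@example.com"
git config user.name "You"
echo '<config/>' > config.xml
git add config.xml
git commit -qm 'add config'

# Make a local-only edit, then hide it from status and commits.
echo '<config debug="true"/>' > config.xml
git update-index --assume-unchanged config.xml
git status --porcelain            # prints nothing: the edit is hidden

# Files with the bit set show a lowercase status letter (h).
git ls-files -v | grep '^[a-z]'   # h config.xml

# Undo when you really do want to commit the change.
git update-index --no-assume-unchanged config.xml
git status --porcelain            # now shows " M config.xml"
```

Note that while the bit is set, even git add and git commit -a will skip the file, which is exactly the point.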

This is an edge case, but useful for some work flows.