[As ever, you can read this on the BBC News website]
When I started work as a professional programmer, writing in the C programming language, I sometimes wrote very bad code. It worked, but it wasn’t what you’d call ‘industrial strength’, largely because it didn’t do nearly enough checking.
As a result my programs would crash when given unexpected input, such as a word typed into a field where a number was required, or because they used a variable in a calculation without first checking that it had been properly initialised.
Fortunately I had talented and patient colleagues who showed me the difference between student programming and serious coding, and who understood that validating data, checking variables and handling all possible error conditions are not just useful extras but at least as important as the part of the program that does the actual work.
The lesson has stayed with me, even though I now write little production code and only occasionally mess around with other people’s programs.
Sadly, it seems that the developers behind three of the most widely used electronic voting systems in the United States have never grasped this important principle.
Following concerns about the accuracy of the electronic voting systems used in last year’s elections, the California state legislature commissioned computer science and cryptography experts at the University of California to review the main players and ensure that ‘California voters are being asked to cast their ballots on machines that are secure, accurate, reliable, and accessible’.
Anyone looking for reassurance will have had their hopes dashed, as the recently published report into e-voting systems from Diebold, Hart InterCivic and Sequoia found massive security holes in the source code which, combined with poor physical security and badly-designed procedures, make it impossible to rely on them to record votes accurately.
The report says that ‘the security mechanisms provided for all systems analyzed were inadequate to ensure accuracy and integrity of the election results and of the systems that provide those results’, which is about as bad as it gets.
And there is clear evidence of misleading comments by the voting machine manufacturers. Security researcher Ed Felten notes in his commentary on the work that ‘Diebold claimed in 2003 that its use of hard-coded passwords was “resolved in subsequent versions of the software”. Yet the current version still uses at least two hard-coded passwords — one is “diebold” and another is the eight-byte sequence 1,2,3,4,5,6,7,8’.
Apparently part of the problem was that the researchers actually had access to the systems they were testing. In a statement Hart InterCivic complained that investigators had ‘unfettered access to all technical documentation and source code information’, implying that since hackers or those trying to manipulate the vote would be less well prepared the bad coding doesn’t really matter.
A system can only be used in an election if it is certified by the relevant authorities, and it was clear from the California study that none of the machines examined was up to the job, so their certification was withdrawn at the start of August.
Unfortunately California’s Secretary of State Debra Bowen is clearly a trusting soul because she immediately gave them all a new certification provided that security features were added to ‘protect the integrity of the vote’.
Placing such trust in vendors who have shown a comprehensive inability to understand the security requirements of election systems seems to demonstrate a naivety about software development and integrity that is all too common in politicians.
Such problems are not confined to the United States, of course, though the campaign for more openness about the technology used in electronic voting seems to have made more progress there than elsewhere.
Here in the UK the Open Rights Group, resolute campaigners for civil liberties in the digital world, sent observers to several of the e-voting pilot projects in the May 2007 English and Scottish elections.
They had to fight through a bureaucracy which seemed to see openness as a dangerous aberration, where ‘observers were frequently subject to seemingly arbitrary and changeable decisions via unclear lines of authority’, and the final report makes chilling reading.
It outlines many problems, noting that ‘inadequate attention was given to system design, systems access and audit trails. Systems used both inappropriate hardware and software, and were insufficiently secured’.
A big problem for ORG is that ‘E-voting is a “black box system”, where the mechanisms for recording and tabulating the vote are hidden from the voter. This makes public scrutiny impossible, and leaves statutory elections open to error and fraud’.
The Electoral Commission, the body responsible for the administration of elections in the UK, has also been looking at the trials and it recently called for a halt to pilot projects while security and testing procedures are improved, an implicit admission that the ORG analysis of flaws in the May pilots was well-founded.
We can only hope that these warnings are heeded, and that the UK politicians show more awareness of the problems of building secure voting systems than the Californian officials have demonstrated.
Electronic voting is not the same as online voting. The argument that voting by text message or over the internet diminishes the importance of democratic engagement does not apply to attempts to replace the pencil-and-paper ballot with modern technologies, which could make voting more accessible and counting faster and perhaps even more reliable.
But we would be better off keeping an old, paper-based system that we can trust rather than rushing to replace it with flawed technologies whose inevitable failure will further damage trust in the democratic process.