“Speaking Code” by Geoff Cox and Alex McLean

I found this book in Blackwell’s in Oxford, and it instantly appealed to someone who has made his living primarily from voice and code. The colourful cover also appeals, and pleasingly it too is a program, written in the colour-notated language Piet.

The book’s subtitle is “Coding as aesthetic and political expression”: it sets out to connect coding to voice, and to analyse how the advent of ubiquitous computing affects our notions of action, work, voice and speech.

For someone with little grounding in the relevant political, philosophical and literary context (which, as far as I can tell, rests primarily on Marx and Arendt, and more recently Virno, with a liberal sprinkling of French post-modern philosophers and the ever-present Žižek), Cox’s language, references and style are hard work, and they come from a left-wing humanities culture I’ve been conditioned to be suspicious of. But the functional code (mostly McLean’s; there is also code poetry, which is fun but, I think, less interesting) is clear, playful and technically thought-provoking. One example patches the Linux kernel to make the machine slow down when it is busy; another tries to say “hello” to every server on the internet (since it’s IPv4, this could be read as a “Last Chance to See” style of greeting!), a sort of polite and manic code-cousin of Wowbagger the Infinitely Prolonged. In the first case, it’s interesting to think through the implications of making a machine behave more like a human; in the second, to wonder how likely one would be to fall under suspicion, or even arrest, for running an apparently harmless script. Other hacks attempt to follow all your followers’ followers on Twitter, or to defriend all your friends while inviting them to meet each other in the flesh.
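
To give a flavour of the second example, here is a minimal sketch of the idea, not the book’s actual code, assuming that a plain TCP “hello” on port 80 counts as a greeting:

    # Not the code from the book: a minimal sketch of the "greet every
    # IPv4 server" idea.  At this rate the full address space would take
    # years to walk, which is rather the point.
    import ipaddress
    import socket

    def greet(addr, port=80, timeout=0.5):
        """Open a connection and say hello; ignore hosts that don't answer."""
        try:
            with socket.create_connection((addr, port), timeout=timeout) as s:
                s.sendall(b"hello\r\n")
        except OSError:
            pass  # unreachable, refused or timed out: move politely on

    for host in ipaddress.ip_network("0.0.0.0/0").hosts():
        greet(str(host))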

We are treated to a deep dive into the Hofstadterian strange loop that code sets up between speech and action, described in the introductory chapter 0 as “double coding”, the difference between what humans and machines make of a program, not forgetting its comments.

There are some interesting analyses: Mechanical Turk is obviously ripe for political analysis even to this naïf, while there’s a powerful critique of proprietary cloud-based services, made all the more striking by the fact that it emerges from principled argument rather than the more pragmatic starting point of projects like Freedom Box. Yet my final impression is one of mild disappointment: there are plenty of sources I can now go and read, yet not only does the book not set out a programme (reasonably enough, as it is an academic work, not a polemic), but I didn’t reach the end feeling I’d read a compelling analysis or synthesis. Many apparently important sentences are either too vague to be sure of their meaning, or appear to include important misunderstandings without justification: describing JavaScript as “proprietary, indeed owned by Google”, or the operations of conjunction and disjunction as “complex”, are just two glaring examples. Other passages which were unclear to me seemed to rely on a background I lack, which is a pity given that one might hope politically-ignorant “codeworkers” such as myself would form a significant portion of the book’s audience.

As a result, the final impression is that the book is both too long for the limited message it conveys, and too short for readers lacking background in one of its two sides (there are plenty of unexplained technical references that will baffle those without a considerable grasp of computing). This is a pity, as the authors clearly have the knowledge of, and sympathy towards, both sides of the subject required to write a compelling book, and education in this area is sorely needed: by the mass of codeworkers who are by and large politically inert despite being highly educated, and, perhaps to a lesser degree, by political activists who, despite being technically savvy, have perhaps failed to grasp quite how fundamentally computing has changed the nature of the game.

Ridiculously for a book in MIT Press’s “Software Studies” series, and doubly so for a book that contains source code, there is no electronic version or accompanying web site. Many of the sites referred to in the book seem to suffer from the same insouciance towards preservation, which seems oddly prevalent among digital artists given the lengths to which more traditional workers go to preserve their œuvre. A smaller gripe (by no means unique to this book) is that the endnotes comprise both references and expansions on the main text, resulting in a lot of pointless flipping back and forth to the former in order to catch the latter. Rarely have form and content been so out of whack.

For personal space we need personal cyberspace

Thanks to Anna Patalong for posting the Guardian article that got me thinking about this topic.

The internet is famously a refuge for human horrors of all kinds, where whatever your pet hate or perversion you can find like-minded people with whom to celebrate and practise it; but the common image of out-of-the-way websites and password-protected forums only applies to the most egregious examples, the tabloid fodder.

Most of the hate, the quotidian racism, sexism and general denigration of the unfamiliar and uncomfortable is in plain sight, and flourishes in the most anodyne, homogeneous, controlled environments. Those on the receiving end are well aware of it, but few realise exactly what the problem is.

After all, Facebook deletes the nasty stuff, right?

In the Guardian article linked to above, the activist Soraya Chemaly hits the bull on the horns: “It’s not about censorship in the end. It’s about choosing to define what is acceptable.” Indeed, it is about censorship, and that is precisely choosing to define what is acceptable. The problem is that acceptability is defined by the mores of the majority: the heavy hand of the Dead White Male decrees that mere depictions in cake of human sex organs are an abomination and must be suppressed, while groups celebrating rape culture are fine. Facebook claims that “it’s not Facebook’s job to decide what is acceptable”, but by removing anything they have already done so: it’s the removal of the labia cupcakes that legitimises the rape jokes.

This particular imbalance is likely on any service paid for by mainstream advertisers; but imbalance is inevitable in any centralized service: even if Facebook removed nothing of their own accord, they would be subject to local law, and hence two major sources of take-down: law enforcement agencies (with mixed results: goodbye child pornography, goodbye political activism) and corporations (goodbye anything that may infringe our IP). Facebook, in other words, is what it’s like living in a privatized country.

We can certainly work to change Facebook’s idea of what is acceptable, but its centralized control will always tend to conservative uniformity. The physical world, full of iniquity at worst and compromise at best as it is, is much more nuanced, and crucially tends to have the property that the more personal the space, the greater the control we have of it. Interior decoration is more or less up to the inhabitant, but Facebook does not hesitate to censor our personal profiles.

Here then is a new reason to support personal computing initiatives, like the FreedomBox, along with privacy and control of our private data: we’ve unwittingly ceded the very construction of society to the cloud, and I fear that if we don’t take it back, then increasingly the gains we’ve made in physical society will not only be slowed and blunted, but reversed in the new domain that was supposed to be the freest space of all.

The reason I’ve not heard of your cool language is because it’s non-free

Dear developer/researcher/company, I’ve just found your cool language. It might be recently announced, or it might’ve been lurking on the ’net for years. I read your page about it, and I went to download it. Oh dear, it’s non-free. Maybe I’ll try it anyway, though I won’t be using it for all the obvious reasons. Still, that explains why I’ve never heard of it until now.

One thing that a lot of developers seem to overlook is the sheer inertia that any non-free program has to overcome. If your program is free, a lot of people who wouldn’t otherwise use it will do so, and there’s a good chance of its getting into free software distros (which, by the way, are not just for free OSes). Free software is also much more likely to spread via non-free products, hardware or software. All that goes double for language implementations, because issues of portability, maintenance and licensing are even more acute when you’re making an investment in writing code.

So increasingly I conclude that language authors who insist on non-free licensing Just Don’t Get It. There’s one big exception: if your language is secret sauce, then hoarding it to try to make a fortune is at least rational. An example is K/q, which appears to have done very nicely for its author, who also wisely chose as his market the financial sector, in which your clients are likely to be less affected by the factors mentioned above: they’ll have big budgets, and be writing code they don’t expect to distribute, and which may well not have a long shelf-life.

Otherwise, the landscape is littered with duh. Until a few years ago I excused those who had perhaps not fully understood, or even predated, the internet-mediated explosion of free software (though there are plenty of examples of earlier generations who understood the value of freedom as well as anyone, such as Donald Knuth, whose typesetting languages TeX and Metafont dominate mathematical typesetting, and have maintained a large presence in many technical fields for over 30 years), but no longer. Some authors eventually see the light: Carl Sassenrath, author of the Amiga OS, eventually freed his intriguing REBOL language after 15 years of getting nowhere (and, as far as I can tell from reading between the lines, running out of money). Development now seems to be at least trickling along, though from reading the commit logs, he’s just reviewing, not writing. Other projects are almost too late: Strongtalk, a Smalltalk variant with static typing, was freed by Sun in 2006, ten years after its developers were acquired by Sun; since then no-one seems to have taken it on, and this now nearly 20-year-old code is languishing, probably never to be relevant. Then there are “close-but-no-cigar” efforts such as that of Mark Tarver, whose Lisp descendant, Qi, was proprietary, and who almost learnt the lesson with its successor, Shen, except that he forbade redistribution of derivative works which do not adhere to his spec for the language, rather than simply requiring that such derivatives change their name.

Stupidest of all, however, are the academic projects that suffer the same fate. Until around 1990, code was only a by-product: academics mostly wrote papers, and it was the published papers that contained the interesting information; you could recode the systems they described for yourself, and since programs were short and systems short-lived and incompatible, that was fine. Now, research prototypes often involve significant engineering, and unpublished code is wasted effort. It’s incomprehensible that publicly-funded researchers are even allowed not to publish their code, but some don’t. The most egregious current example is the Viewpoints Research Institute, whose stellar cast, among them the inventor of Smalltalk, Alan Kay, is publishing intriguing papers, but only fragments of code: it’s as if he’s learnt nothing from the last 40 years.

I really hope this problem will die with the last pre-internet generation.

Tags: computing

A tale of two Ebens

Recently I’ve been following the spectacular progress of Raspberry Pi with awe and delight. Eben Upton’s brainchild of a computer that empowers and inspires children to learn to program has garnered a lot of attention in the adult world; it remains to be seen whether its plucky British inventor and retro computing appeal can translate into real success where it matters—with children.

Meanwhile, I’ve been led to some startling talks given recently by Eben Moglen, the founder of the Software Freedom Law Center, and general counsel for the Free Software Foundation. They both deal with the importance of free software and free hardware, and the second in particular recasts the arguments into the current political climate as a strategy for getting the attention of politicians. It’s in the second talk too that Moglen insists on the importance of children, describing children’s curiosity as the greatest force for social change that we possess.

Moglen’s urgent and inspiring call to action (“technologists must engage politically to save civilisation”) rather overshadows Upton’s (“get kids excited about programming with cool toys”), but they share at least two foci: not just children, but also small inexpensive computers; for Moglen spends some time talking about the Freedom Box project, whose aim is to create small, inexpensive, low-power servers that everyone can use to regain control over their own data.

And indeed, people are already making the Freedom Box software stack run on Raspberry Pis.

Is there some hope that, as well as claiming the children, we might also be able to reclaim their parents? I find it rather sad that so much emphasis is placed on children, as it overlooks the childlike potential of adults: for that potential is all of ours while we yet live.

Raspberry Pi: enough to go round?

Raspberry Pi’s most important achievement so far is in generating considerable publicity. The genius of its marketing is that it appeals to the current generation of tech journalists, who were raised on the 1980s home computers whose spirit it invokes. Whether it will appeal to today’s children, however, is less obvious, and arguably more important. Having the inner workings exposed (unlike home computers) should help, though it also exposes a serious failing (more on that later).

What measure of success?

How successful it is likely to be depends largely on what you think the problem is.

Reproducing the past

Raspberry Pi’s founder, Eben Upton, defines the challenge as getting a pocket-money-priced computer, suitable for teaching children to program, into mass production. The R-Pi meets that definition for the sort of middle-class household that boasts a spare HDMI-compatible monitor plus an old mouse and keyboard, and offers generous pocket-money; elsewhere, failing to count input devices and a display in the cost seems disingenuous. An all-software solution that ran on just about anything, including phones and old PCs, and could be freely downloaded, would seem nearer the mark.

Upton would reply that merely adding “an app for that” doesn’t invite the child to program as the old home computers did (when you switched them on you were immediately presented with a programming environment). Compare this with games consoles and PCs: you can program them, but by default they offer games or an office desktop respectively.

More seriously, many parents quite rightly lock down the computers their children use to prevent their visiting undesirable web sites or installing new software, or even insist on their being supervised: forbidding conditions under which to nurture the sort of exploratory play by which we all learn to love programming. A separate device which belongs to the child, contains no sensitive parental data, and can’t go online addresses all these problems, and the child can be left alone with it as safely as with a book.

Rebuilding the workforce

So far, so good: we’ve recreated a small corner of the 1980s, and a small self-selecting segment of relatively privileged children will have a chance to become programmers. But we already need far more programmers today than when the children of the 1980s entered work, and we’ll need even more when today’s children grow up. To make up the shortfall, programming needs to go mainstream.

This is a challenge that’s already being met locally in many areas; Upton’s approach is to reach out to children directly via programming competitions (or “bribery” as he calls it); although this approach might work without substantial involvement by schools, it seems unwise not to make a serious push for inclusion in the school curriculum.

Remaking society

I believe, however, that programming is far more important and central a skill for the modern world than even its most ardent industrial cheerleaders suggest. Being a non-programmer today is like being illiterate two hundred years ago: it’s possible to get by without understanding anything about programming, but you end up relying heavily on others.

It’s a subtle point, because it’s rare that one needs to actually read or write code; rather, one needs to understand how computers work because increasingly they are embedded in, and hence govern, the systems we use to organise our lives.

Many competent and confident users of computers are reduced to impotent gibbering by machine malfunction, because learning how to operate a computer gives one very little insight into how they fail, whereas understanding bugs and other failures is central to learning how to program. It’s as if the person who could help you repair your blender is the one you’d ask how to cook a soufflé, or as if the person best able to navigate a car was a mechanic.

(Why computer systems are like this is a fascinating question whose answer involves the immaturity of the technology, its complexity, and the degree to which interface and systems design is still driven by technical rather than human considerations, but one I can’t elaborate on further here.)

Even more important is the mindset underlying programming: programmers, like scientists, believe that systems have rules which, if they can’t be looked up (“reading the source code”), can be discovered and codified (“reverse engineering”). But programming has an additional, empowering belief: that rules can be changed or replaced. In a society that is increasingly rule-bound and run by machines, a programmer’s mindset offers both the belief that things can be improved, and the tools to change them. That is why it’s essential that every child should understand at least the principles of programming, even if they never read or write a line of code as an adult.

Scaling up

Hence, it is necessary that programming become part of the core school curriculum, and it will be a good sign that it is embedding itself in our culture when it becomes so. Raspberry Pi has three major problems here: the hardware, the software, and connectivity.

Seeing to the bottom

The problem with the hardware is optically obvious, because of R-Pi’s lack of external casing: it’s entirely closed. You can see the components, but you can’t take it apart to see how it works, or modify it in any way. This is partly a result of the nanometre scale on which modern electronics is built, but it’s also caused by the increasingly draconian intellectual property régime under which we suffer. Unfortunately, the beating heart of the R-Pi, a Broadcom SoC (“System on a Chip”), is a prime example of this.

Even more unfortunately, it’s hard to see how anything like the R-Pi could be built without such regressive technology (in this case via special help from Broadcom that Upton, as an employee, managed to secure). All this means that the R-Pi is not only of little use in firing the imagination of the next generation of hardware engineers (just as sorely needed as the software kind, if not in such numbers), but its hardware reinforces the very “black-box, do not touch” mentality that its software is trying to break down.

Programming for all

Unfortunately, the programming environments provided, although open, are the standard machine-first arcane languages and tools that adults struggle with. Why not use something like Squeak Etoys, which is based on decades of research in both programming and teaching programming? (The plurality is part of the problem too: the R-Pi offers distracting choice, unlike old home computers which simply dumped you into their one built-in programming environment.) Fortunately, this is easy to fix: just update the software shipped with R-Pi.

Changing the world, learning together

The final problem, connectivity, is a subtler one. Above, I mentioned that an advantage of giving a child their own device is that it need not be connected to the internet, and hence can be safe for them to play with unsupervised. But the R-Pi lacks other sorts of connection that are important. First, it can’t affect the world physically (though peripherals attached to it could). While the privacy and absolute power one enjoys in the virtual world inside the computer is exhilarating and empowering, children also love toys that have real world effects, and it’s an important aid to the imagination to see that one’s electronic creations can have direct physical outcomes.

The Logo systems of the ’70s and ’80s had a natural real-world extension in the form of drawing “turtles”; today we have Lego Mindstorms, but they’re expensive, and only partly open. What we need is a RepRap for children. Secondly, children want to play with each other; their computers should be able to network too. The One Laptop Per Child machines do this; R-Pis should be able to too (and again, fortunately, it’s mainly a matter of software).
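
For a sense of scale, the kind of program a child writes for an on-screen turtle is only a few lines; the point of a physical turtle (or a Mindstorms robot, or a RepRap) is that the very same commands could drive real motors. A minimal sketch, using Python’s built-in turtle module rather than anything shipped with the R-Pi:

    # A child's first turtle program: draw a square, then a five-pointed star.
    # On a physical drawing turtle the same commands would move a real pen.
    import turtle

    pen = turtle.Turtle()

    for _ in range(4):      # a square
        pen.forward(100)
        pen.right(90)

    pen.penup()
    pen.goto(150, -50)
    pen.pendown()

    for _ in range(5):      # a star
        pen.forward(100)
        pen.right(144)

    turtle.done()           # keep the window open until it is closed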

Feeding the five million

In summary, Raspberry Pi is, closed hardware aside, a great platform that could help catalyse a much-needed revolution in the perception of programming. The good news is that the remaining technical steps are in software, and can be taken without the heroic step of re-mortgaging one’s house, as Upton did to fund R-Pi. The bad news is that the rest of the job is social, and hence much trickier to achieve than a bank loan.

Computing can’t be left to teachers and business

Today the education secretary, Michael Gove, announced an overhaul of the ICT curriculum. This is good news and long overdue; having recently been castigated by the great, the good, and Google for our poor ICT teaching, the government has responded and is launching a campaign to overhaul the way ICT is taught: out with word processing and spreadsheets, and in with programming.

So, I should be happy: mission innocents saved accomplished? Sadly not; apart from the natural wariness of any “major government initiative”, this one falls down in two important ways.

First, Gove made his big announcement at an education industry gathering, BETT, and made several references to the importance of industry, both as determining what skills should be taught, and as partners to help teach them. In some vocational subjects, this makes sense, but ICT is a compulsory part of the core curriculum. It is not the function of education to prepare workers for business, and businesses are neither interested in nor competent to decide how to educate people. There’s a very obvious sense in which this is the case as far as ICT goes: children must be educated for life (even if the rhetoric of continuing education bears full fruit, adults simply cannot learn as children do), while the ICT skills that business demands change every few years. So here, as in traditionally academic subjects, we should view any industry involvement with the scepticism (and, dash it, cynicism) that it deserves.

Secondly, the announcement essentially removes ICT from the National Curriculum (the Whitehall-speak is “withdrawing the Programme of Study”). There are positive noises about supporting teachers with actual money, alongside the usual guff about liberating them, but the government are still washing their hands of responsibility for what is now the most important subject taught in schools.

As the culture minister, Ed Vaizey, understands, knowledge of how computers work is now as fundamental as literacy. It’s too basic and important to leave unsupervised even if, on the other hand, it’s so new and changing so rapidly that Gove is correct in saying that a traditionally-written curriculum “would become obsolete almost immediately”.

But the elements of computing do not change so rapidly, and they are the important bit. In the mid-’90s the undergrad course on computation theory I attended was thirty years old, and it was just as relevant and up-to-date as it had been when it was written. Many of the computer languages and operating systems in use today are at least as old, as are almost all of the concepts on which they are based.

And although many of the elements have been with us for decades, they are only now becoming fundamental to our society in the same way as literacy and numeracy. Very few people have any idea what that really means. Two crucial points need to be made: that everyone needs to learn how to program, not just programmers; and that programming is not just about computers, just as literacy is not just about speaking, reading and writing. The programming mindset can transform one’s world-view, and, like literacy, it’s particularly empowering, as it brings not only an understanding of how to decompose problems and invent rules to solve them, but the sense that the rule systems which govern our society are software, and can be changed.

Working out how to get all that across will certainly be aided by freeing teachers to experiment. Championing the process while capturing and disseminating best practice and embedding it in our culture needs central leadership. This is a far from unpromising announcement, but it’s only the beginning of the cultural shift we really need.

Si monumentum requiris, ædifica!

Steve Jobs is dead, and the plaudits are rolling in. In a career lasting a little over thirty years, Jobs co-founded Apple and led it to all its triumphs; the company has been part of every stage of the computer revolution and, for many people, has been the exemplar of each of its waves, from the Apple ][ to the Macintosh to the iPod to the iPhone to the iPad. Of the countless innovators who drove digital technology into our lives, Jobs was one of the few able to embed it in our hearts and minds, and by far the most influential. His uncompromising insistence on the marriage of form and function set expectations for industrial design well beyond his industry. Stephen Fry was not far from the mark when he told BBC News in August: “I don’t think there is another human being on the planet who has been more influential in the last 30 years on the way culture has developed.”

Here in 2011, it’s amazing to look back over the last 30 years and see how completely computing technology has transformed our lives and culture. But step back a bit further and what’s odd is not how much, but how little we’ve achieved. Looking at Jobs’s achievements, I couldn’t help wondering how we’d ended up in a world where he really was the most successful innovator of his era. In 1968, Doug Engelbart, head of the Augmentation Research Center at the Stanford Research Institute, showed us the future in one astonishing 90-minute demo, including the public debut of the mouse, hypertext, remote collaboration, videoconferencing and more, the result of just six years’ research by a team of 18. It would have been more believable coming from Stephen Fry’s alternate world in Making History. How did the computer revolution underperform so badly?

When beauty is just skin-deep

For the truth is, pretty much every revolutionary product Apple brought to market was merely the first successful incarnation of decades-old ideas. The Macintosh, released in 1984, embodied user interface work done at Xerox in the 1970s. The latest incarnation of its system software, Mac OS X, is based on NeXTSTEP, the operating system developed by NeXT, the company Jobs founded when he was fired by Apple in 1985. NeXTSTEP was itself built on the Mach 2.5 kernel[1] and BSD UNIX, both ’80s repackagings of ’70s designs.[2] The hardware was much the same: off-the-shelf parts, dressed beautifully. Jobs told Wired “When people look at an iMac, they think the design is really great, but most people don’t understand it’s not skin deep”, but that’s exactly what it was: where Apple excelled was industrial design. From the look to the choice and arrangements of internal parts,[3] formal and functional elegance was all. And it helped the bottom line: as Jobs told Wired, “Focus does not mean saying yes, it means saying no,” and that meant simplicity not just in individual products, but in the product line as a whole. Despite not being the world’s largest manufacturer in most of its product categories, Apple boasts industry-beating economies of scale thanks to its tiny product range, which means that it uses more of each component than other manufacturers, and can thereby boost its profit margins, or demand uniquely customized components from its suppliers.

Tales vs Tools

The power of Apple’s brand is legendary inside and outside the company. There’s the story of the contractors who were laid off and yet came to work for six months to finish their program, as well as the familiar pictures of fanbois[4] queueing at Apple Stores around the globe for each new product release. Unfortunately, Apple’s brand has become an end in itself for the company. Digital devices are tools, but Apple realised, as car manufacturers had before them, that it was more effective to sell them not merely with stories, as symbols of a better, richer life, but as the very tools the purchaser needed to create that life. In his commencement speech at Stanford in 2005, Steve Jobs said: “Don’t be trapped by dogma—which is living with the results of other people’s thinking.” But Apple told its customers why and how to use its products, and from its experiments with allowing third-party manufacturers to build Mac-compatibles to its iTunes media sales and now its App Stores, first for iPhone and then for Mac OS, it has leveraged that loyalty to dictate what customers may do with their devices.

What price freedom?

And so Jobs compromised with profit. I don’t think it had much to do with avarice; offering consumers desirable products is a tried-and-tested route to success, and computing has many visionaries whose inspiration has touched almost no-one outside the field: Doug Engelbart’s astonishing array of new technologies, Chuck Moore’s insistence that computers can be thousands of times simpler, David Gelernter’s reimagining of the user interface, Carl Sassenrath’s crusade against software complexity; all have spent decades doing incredible work that goes almost unused. Xerox, whose Star workstation from the late ’70s was the basis for the Macintosh, came up with the “memory prosthesis” in the ’90s, an idea which makes perfect sense with today’s smart mobile phones, and which no-one has implemented. The obscure publicly-funded Viewpoints Research Institute, whose board of advisers reads like a who’s who of computer science, and whose small research staff all have stellar track records, is trying to rewrite the rules of building software (and, as far as one can tell, succeeding), but there are few tangible results. From academics to entrepreneurs, none of these brilliant inventors has had an iota of Steve Jobs’s direct impact on the everyday world. Sadly, one of the most brilliant, Jef Raskin, who started development of the Macintosh in 1979, was forced out of Apple by Steve Jobs, failed to find commercial success elsewhere, and in 2004, like Jobs, was diagnosed with pancreatic cancer, from which he died in early 2005. His book on interface design emphasizes the importance of software design based on how people actually use computers, rather than on superficial attractiveness.

There’s another, more fundamental sort of freedom that Jobs eschewed in his determination to control users’ devices: the freedom to study how the software and hardware work, modify them, and share those modifications. While most users will never want, or in most cases be able, to do such a thing, it’s an option that has proven to be of powerful benefit to society, from the days of computing pioneers sharing programming techniques to the rise of the free and open source software movements. Under Jobs, Apple itself both used and contributed to open source and free software, but usually only when forced to by free software licenses, and never as a corporate badge of pride. Apple’s brand emphasised revolution, but its product policies encouraged conformity. Even Steve Jobs could not conform the market square to the circle of free users.

Si monumentum requiris, ædifica!

And so we are left with an immense challenge: the greatest popularizer of computing technology is dead, but the digital revolution has only achieved a fraction of its potential. If we want his monument, we’ll have to build it ourselves.


  1. Not a microkernel; Mach didn’t become that until the (final) version 3.0. 

  2. The original Mac OS was newer, built from the ground up by Apple for the Macintosh; but it was not up to the job of running multiple applications on the same machine, and despite two separate attempts to build a proper operating system from scratch, Pink and Copland, Apple never managed it. An ex-Apple employee, Jean-Louis Gassée, proved it could be done with Be Inc.’s BeOS, but Steve Jobs’s charisma and the more mature state of NeXTSTEP won Apple over when they were looking for a basis for Mac OS X. 

  3. The same Wired article quotes a friend of Jobs, in the early days, describing how he would dictate “how to lay out the P[rinted]C[ircuit] board so it would look nice”. 

  4. From the French faune des bois, meaning “wood-faun”, a creature that happily spends the night outside, sometimes pictured with the tail of an ass. 

Tags: computing

Old-hat futurism

I had not previously heard of Ben Hammersley, but he says he “helps people understand the modern world.” Recently he gave a speech to the IAAC (Information Assurance Advisory Council), and the tone was very much “you are all living in the past”. He makes some excellent points in the second half of his talk, about how security theatre is widely seen by the public as an oppressive sham, and how it’s no longer acceptable for leaders to be proud of their technical incompetence, but the first half is both out of date (worrying for a self-described futurist) and out of kilter (worrying for someone supposedly acting as a “translator” between those inventing the future and those running the show).

As often with people who get it badly wrong, he starts from the right premise, quoting William Gibson: “the future is already here, just not evenly distributed”; and then ignores it, going on about how our lives are now all on Facebook, how we all expect people to be instantly available on the end of a phone 24/7, and so on. This is, of course, all true…for the tiny minority in power. But he’s missed the other (and sharper) edge of Gibson’s blade: the future, like the wealth to which it’s so tightly linked, is getting more and more unevenly distributed. Not only are there people half-way round the planet still living in the stone age, but there are people a few hundred yards away living half in the present and half in the 1970s. That’s a much more important split than Hammersley’s between people who grew up before and after the end of the cold war.

Sentiments like “Facebook, Twitter, Google and all the rest are, in many ways the very definition of modern life in the democratic west” are just evidence of the echo-chamber mentality those three engender. And anyone who still believes the absolute claim that “networks beat hierarchies” simply hasn’t paid attention since 9/11. Scariest of all is that, tiny as the minority it represents is, this view really _is_ reality, in the sense that it’s true for everyone with power, who have a huge impact on what happens to everyone else. I’m not sure I really want our rulers to understand Hammersley’s future (delay the inevitable as long as possible!), though I also suspect many of them have a much better grasp of it than he gives them credit for.

New Programmers Wanted For Old Stuff

Although computer science seems to have lost the glamour it had in the ’80s, there still seems to be a steady stream of volunteers to work on all sorts of exciting free and open-source software projects (even though my alma mater is having trouble finding good applicants to read Computer Science; more on that story earlier, and also, I hope later).

But what about the less exciting stuff? The fundamental tools and applications that we programmers still use, directly or indirectly? I mean GNU coreutils and GNU autotools, not to mention pieces that we take even more for granted, such as the shell, the C library and the kernel. (In case this all sounds like thinly-disguised Linux-centrism and you’re wondering why I didn’t just say “bash, glibc and Linux”, that’s because while I work mostly on GNU/Linux, I’m mostly interested in portable programs.)

“But isn’t this all legacy stuff?”, I hear you cry. If you never stray from the comforts of Eclipse, then maybe yes, but there are still plenty of us typing “ls” and “grep”. If you’re one such, and you contribute to free software, why not help out? It’s not all legacy code in maintenance mode, and we certainly need help. Rather like the MS-DOS team in the early ’90s, there’s a tiny core of maybe a few dozen major contributors maintaining much of the command-line software stack (outside the kernel and gcc). Unlike them, we are mostly not paid to do so; but we do have many opportunities for innovation and invention.

The UNIX command-line may seem like a dead backwater, of interest only to the dull writers of sclerotic standards, but that’s to mistake effect for cause. Yes, it’s mature, and hence capable of standardization, and that’s a good thing: even 10 years ago, many UNIX boxes lacked a decent POSIX implementation, whereas now almost all have one (or can get one by adding GNU). The ISO C99 standard added important features to the language. GNU autotools has matured from a somewhat cranky portability tool into a great leveller, making it easy to write code that will build and run on any major OS (yes, including Windows), thanks not just to increasing maturity and stability, but also to new projects such as the amazing gnulib, which papers over the cracks in a wide range of POSIX API implementations and provides useful data structures and other APIs missing from the standards, and autoconf-archive, which supplies autoconf macros for dozens of common configuration tasks and for hundreds of languages, tools and libraries.

Using these tools I was able to remove all platform-specific C from GNU Zile, a cut-down Emacs clone, cutting its code size by about 2,000 lines (20% of the code base), and slash the size of its configure.ac (build system configuration file). All this while adding a test suite with nearly 100 tests, plus a few extra features.

And it’s not just Zile: stalwarts like GNU grep and coreutils have been made over, and, largely unnoticed by users, are looking much prettier under the hood (though there are important bug fixes, new features and performance improvements too). Even Emacs, with its immense code-base and ancient build system, is gradually being brought up to date.

The most exciting thing for me is the synergy: the more the tools are improved, the greater the leverage obtained when they are used, and the more they are used, the less effort is required to maintain the packages, and the work becomes easier: less time wrestling the system, more time improving it. And more fun: if you think C is hard, dull and slow work, think again. We too can have quick rebuilding thanks to ccache, easy bug-bashing with Valgrind, not to mention code completion and navigation either in an IDE like Anjuta or from the evergreen Emacs, which is finally integrating and polishing a decade of, until now largely invisible (and unusable), work on modern IDE tools.

Unfortunately at the moment this reduction in effort is being absorbed simply by enabling a tiny team to keep more packages up to scratch, but wouldn’t it be great if more people joined in?

Next time: the future of the past: it gets even more exciting!

Tags: computing

Kindle 3 is a good first attempt

Giving my girlfriend a Kindle for Christmas was the carrot in a multi-pronged strategy to avoid needing more bookshelves (the stick being “I will start giving away your books”, and my contribution being to archive books I’ve read, or return the many that aren’t even mine). This therefore required that I stock it with books before she got her hands on it, which in turn was all the excuse I needed to play with the thing.

My lazy solution was simply to download all of Feedbooks; I wrote some scripts to make this actually lazy, rather than brain-numbingly dull. In the process I found that while the Kindle is nice to hold and great to read, it struggles to cope with a large collection of books (even though the nearly 3,000 volumes of Feedbooks only half-filled its 4GB memory), and is woeful as a research tool. And, of course, Amazon’s first-mover-evil surfaced early.
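
For the curious, the scripts amounted to little more than the following sketch: walk an Atom/OPDS-style catalogue and download each book. The catalogue URL and the assumption that every entry carries a Mobipocket link are placeholders for illustration, not the real Feedbooks layout:

    # A rough sketch of "download everything": walk an Atom/OPDS catalogue,
    # follow each entry's Mobipocket link, and save the book for the Kindle.
    # The URL and the MIME-type assumption are illustrative placeholders.
    import os
    import urllib.request
    import xml.etree.ElementTree as ET

    ATOM = "{http://www.w3.org/2005/Atom}"
    CATALOGUE = "https://example.org/publicdomain/catalog.atom"  # placeholder

    def download_catalogue(url, dest="books"):
        os.makedirs(dest, exist_ok=True)
        with urllib.request.urlopen(url) as feed:
            root = ET.parse(feed).getroot()
        for entry in root.iter(ATOM + "entry"):
            title = entry.findtext(ATOM + "title", "untitled")
            for link in entry.iter(ATOM + "link"):
                if link.get("type") == "application/x-mobipocket-ebook":
                    path = os.path.join(dest, title.replace("/", "_") + ".mobi")
                    urllib.request.urlretrieve(link.get("href"), path)

    download_catalogue(CATALOGUE)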

Here are the problems I had:

1. Amazon’s own store doesn’t seem to contain free books. I think it’s poor form not to give people a straightforward choice of free editions of out-of-copyright works. The Kindle may be a loss leader, but at £109 it’s still not cheap. Feedbooks, rather than integrating easily into the Kindle, like, say, a 3rd-party software provider into Ubuntu’s Software Center, provide a catalogue which itself is in the form of a book, doesn’t automatically update, and offers a list ordered only by title. In other words, it’s useless; one is better off using the built-in web browser to search the online catalogue…

2. …or better, another browser, since the Kindle’s is woefully slow (and I don’t just mean the screen update). It’s just about usable, and hence useful in an emergency, but is no good as, for example, an online research tool to use in parallel with the books you have downloaded, although…

3. …offline search is awful too. With just the few ebooks that come loaded on the device, it was slow; with the thousands of books I loaded, it simply locked up the device, even when trying to search in the manual, presumably already indexed. The Kindle seems to index its contents in the background, but even now, over a week later, search doesn’t work. The only effective navigation is by a book’s table of contents, and, to choose which books to read, the user-definable collections, though…

4. …collections are a pain to set up for many books, as you have to select each book manually; there is no way I have found to select a range. (Fortunately, I was able to define collections programmatically, but this will be beyond most users.)
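
For those who want to try the programmatic route: as far as I can tell (none of this is documented by Amazon, so treat it as an assumption pieced together from the usual forums), Kindle 3 collections live in a JSON file on the device, system/collections.json, where each collection name carries a locale suffix and sideloaded books are identified by the SHA-1 of their on-device path prefixed with “*”. A sketch of what seemed to work for me:

    # Sketch of building collections.json by hand.  The format (locale-suffixed
    # collection names, items keyed by "*" + SHA-1 of the book's path under
    # /mnt/us) is community folklore, not documented by Amazon: an assumption.
    import hashlib
    import json
    import time

    DOCS = "/mnt/us/documents"      # the documents folder as the Kindle sees it

    def item_key(kindle_path):
        return "*" + hashlib.sha1(kindle_path.encode("utf-8")).hexdigest()

    def build_collections(mapping):
        """mapping: collection name -> list of filenames under documents/."""
        now = int(time.time() * 1000)
        return {
            name + "@en-US": {
                "items": [item_key(DOCS + "/" + f) for f in files],
                "lastAccess": now,
            }
            for name, files in mapping.items()
        }

    collections = build_collections({"Feedbooks SF": ["some_novel.mobi"]})
    with open("collections.json", "w") as out:  # copy to the Kindle's system/
        json.dump(collections, out)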

In summary, it’s a lovely device, but the software is rather toytown. Amazon could improve it (and indeed, the 3.0.3 firmware update, at the experimental stage when I checked, claims, vaguely, “performance improvements”), but given that their main interest is in selling books and Kindles, I’m not hopeful that it will happen before the next hardware iteration; whether it happens at all depends on competition, and there should be plenty of that, to go by the number of other ebook readers.