“Notes on the Synthesis of Form” was the first monograph, published in 1964, by Christopher Alexander, the Vienna-born British architect who first studied mathematics at Cambridge University and then spent most of his career at the University of California at Berkeley. “The Thinking Hand” was written in 2009 by Juhani Pallasmaa, the Finnish architect and long-time professor of architecture at the Helsinki University of Technology. They were born within a few weeks of each other in 1936. Both have undertaken major projects, but while Pallasmaa’s look familiar to students of modern architecture, Alexander’s are idiosyncratic and widely dismissed by his peers, though as a theorist he has been influential. (My copy of “Notes on the Synthesis of Form” comes from Upper Iowa University Library, and the return card shows its having been taken out only once, in 1968.) Alexander has had considerable influence in computer science: both “Notes on the Synthesis of Form” and his later “A Pattern Language” have shaped developments in programming languages and techniques.
What particularly fascinates me about these books and their authors is that a summary of their arguments gives completely the opposite impression to the character of their authors’ works. Pallasmaa calls for a reconnection with embodied thinking in an era that has become too visual and virtual, while Alexander demands a formal approach to design in a world that has become too complex for intuitive approaches; but it is Pallasmaa’s architecture that is modern and Alexander’s that is traditional.
The key to this apparent paradox is in the authors’ characters: Pallasmaa is unabashedly modern in his insistence on the architect’s central importance as a visionary artist–engineer, while Alexander is much more cautious: he argues that architects, and designers in general, largely fail to cope with the problems with which they are faced: their “intuitive ability to organize physical form is…reduced to nothing by the size of the tasks”, and that instead they “hide [their] incompetence in a frenzy of artistic individuality”.
The respective orientations pervade and structure the books. Pallasmaa draws largely on other artists for his inspiration, and takes a thematic approach, with chapters including “The Mysterious Hand”, “The Working Hand”, and “Embodied Thinking”. He illuminates his argument with copious illustrations and fulsome references, with a full page of endnotes at the end of several of the short chapters (eight in 140 pages, with generous margins and the aforementioned frequent illustrations). Alexander by contrast offers a programme, and divides his 200-page volume into three main parts: first, an analysis of the problem, with chapters that define the design problem and the traditional “unselfconscious” and modern “selfconscious” design processes; secondly, an exposition of his formal analytic–synthetic process based on the extraction of a “program” or decomposition of the problem from formal–functional “constructive diagrams”; and thirdly two appendices which respectively give an extended example of the approach and the mathematical justification of the formal process. He draws on a similarly wide range of sources, overwhelmingly scientific, from disciplines as diverse as biology, mathematics and anthropology.
In summary, Alexander’s approach emphasizes science and craft, largely taking artistry for granted (when it’s not part of the problem), while Pallasmaa insists on the need for an embodied artistic vision.
Both authors observe the limitations of abstract intellectual effort, but for different reasons and in different ways. Alexander defines the problem of design as one of dividing an “ensemble” into “form” and “context”, and then designing the form so as to ensure “good fit” between the two parts. He then observes that in traditional societies design is an “unselfconscious process”, that is, neither codified nor formally taught, but rather encoded in the patterns of the society and its objects. Crucially, he says, the learned skills consist simply of attempting to correct “bad fit”. A maker who has come across the problem before may use a learned solution; otherwise, a random change may be made, and effective solutions may become part of the tradition. Alexander does not mention natural selection, but in fact this is what he is describing. As in nature, it is tremendously powerful, and matches the structure of the design problem itself. It is impossible to give an exhaustive list of what constitutes “good fit”, but only a partial list of “misfits” that have arisen in past experience: the tradition consists of a series of adaptations to past problems.
For the unselfconscious process to work, two conditions must be met: the design problem must be decomposable into problems that can be solved separately, and the culture and the physical environment must not change too quickly for the tradition to reach an equilibrium of good fit. The need for selfconscious design in the modern world has arisen, Alexander says, because both conditions have been broken: society has become too complex and changes too fast for unselfconscious processes to work (though there are also counter-forces which exacerbate the problem, for example, “buildings are more permanent”). (It might be interesting to reflect on cause and effect here, in particular, whether unselfconscious processes actually broke down, or whether they were abandoned for other reasons as modern society developed, but Alexander doesn’t; however, his later work such as “A Pattern Language” strongly suggests that he reconsidered the applicability of unselfconscious processes to the modern world, which given the successes of free-market capitalism seems only sensible.)
Alexander claims that selfconscious design doesn’t work at present because of a combination of sheer complexity (the classic “seven plus or minus two” phenomenon) and the human tendency to analyse in linguistic terms which don’t fit reality. His solution is to boost human cognitive abilities with formal methods. He describes this as a loss of innocence, but says that “whether we decide to stand for or against pure intuition as a method, we must do so for reasons which can be discussed”. He contrasts this attitude with designers who “insist that design must be a purely intuitive process: that it is hopeless to try and understand it”.
Pallasmaa implicitly contradicts this view by insisting on the primacy of embodied wisdom over intellect: “I cannot perhaps intellectually analyse…what is wrong with my work during the design…process”. Yet later he says: “In my view, the discipline of architecture has to be grounded on a trinity of conceptual analysis, the making of architecture, and experiencing…it”. He asserts that “creativity is always linked with the happy moment when conscious control can be forgotten”, yet later that “great artists…emphasize the role of restrictions and constraints”. These positions can be reconciled: the restrictions and constraints must be internalized so that they are no longer conscious. It is then possible to imagine using Alexander’s formal techniques as a framework within which to design, while the actual design work is carried on in Pallasmaa’s embodied, intuitive mode.
Pallasmaa, however, concentrates on the embodied mode of thought, which he sees as neglected, and does little to show its place in the larger picture. This is a pity, because the emphasis unbalances the picture he paints: rather like the fingers, which contain no muscles of their own and are powerless when divorced from the body, his argument, by paying insufficient attention to its context, fails to persuade. It suffers from a frequent lack of connection: despite an approving quotation from Berger describing van Gogh at work, in which “the gestures come from his hand, his wrist, his arm”, at no point does Pallasmaa explicitly acknowledge the hand’s dependence as a mechanical component; and similarly, though he quotes Heidegger saying “only a being who can speak, that is, think, can have hands”, he repeatedly gives the impression that he believes the hand can literally think, separately from the brain. Again, he fails to acknowledge the irony, having quoted Henry Moore on the danger of analysing creative work, of his own reliance on artists who have no such qualms.
This lack of selfconsciousness is precisely what Alexander warns against. Pallasmaa asserts that “an established and successful professional would hardly stop to ponder questions such as, what is the floor, the window, or the door”, exactly what Alexander has spent his career doing. Pallasmaa says “the true artist…collaborates with the silent tradition of the craft”, but fails to acknowledge the problems with this position, or indeed, his lack of balance in concentrating on the artistic side of architecture.
In his penultimate paragraph, Pallasmaa says, beautifully, “architecture has to slow down experience, halt time, and…maintain and defend silence”. This is true, though it is no less the task of every designer, and each person who would live aright. It seems, though, that Pallasmaa has confused means and ends: his defence of silence has become a silent defence.
The 1960s was in many ways a more optimistic time than now: the world was a larger place, technology was less powerful and our knowledge relatively undeveloped, but there was a greater sense of progress in the face of more tractable problems. The limits to growth were yet to appear. Alexander’s notes are very much in the spirit of the times, using the latest research to propose a way to address the problems of the day. Pallasmaa’s work is also arguably very much in the spirit of the times in its more personal, inward focus, and its more partial gaze; but in the fifty years that separate the books not only our problems but our ability to solve them are vastly greater, which makes Pallasmaa’s work look at best pessimistic or unambitious, and at worst out of touch navel-gazing. “The Thinking Hand” is elegiac and beautiful in places, but its call to mysticism, often shrouded in academic turns of phrase, does the profession no favours, while the occasional overconfidence of the young author of “Notes on the Synthesis of Form” is excused by his directness, vitality and enthusiasm.
Where I live, in London, the problems are only getting worse: the prestige projects are now overwhelmingly for private clients, and the public purse is stretched as never before (regulations for schools were recently relaxed to make it possible to build them smaller), as the state continues to abdicate its traditional client roles. The result is a generation of architects whose interests lie in exclusive engagement with the rich and powerful, designing privately owned, privately inhabited (or, in many cases, uninhabited) investment fortresses that continue to swallow up previously public land. Public use increasingly means shopping, which excludes those without the means to over-consume, and so the vast majority of citizens are left as powerless spectators of the urban landscape, unable to affect or afford anything, able at best to gawp at the attractions and buy something in the gift shops of this enormous private gallery.
Under these conditions, Pallasmaa’s response is an entirely understandable one from an evidently humane person, but it is Alexander who speaks to our need.
Thanks to Thomas Impiglia and Donna Mairi Macfayden respectively for recommending the books to me.
I found this book, subtitled “Coding as aesthetic and political expression”, in Blackwell’s in Oxford, and it instantly appealed to me as one who has made his living primarily from voice and code. The colourful cover also appeals, and pleasingly it too is a program, in the colour-notated language Piet.
The book sets out to connect coding to voice, and to analyse how the advent of ubiquitous computing affects notions of action, work, voice and speech.
For someone with little grounding in the relevant political, philosophical and literary background, which as far as I can tell rests primarily on Marx and Arendt, and more recently Virno, with a liberal sprinkling of French post-modern philosophers and the ever-present Žižek, Cox’s language, references and style are all hard work, and come from a left-wing humanities culture I’ve been conditioned to be suspicious of. But the functional code (mostly McLean’s; there is also code poetry, which is fun but, I think, less interesting) is clear, playful and technically thought-provoking. One example patches the Linux kernel to make the machine slow down when it is busy; another tries to say “hello” to every server on the internet (since it’s IPv4, this could be read as a “Last Chance to See” style of greeting!), a sort of polite and manic code-cousin of Wowbagger the Infinitely Prolonged. In the first case, it’s interesting to think of the implications of making a machine behave more like a human; in the second, to wonder how likely one would be to fall under suspicion, or even arrest, for running an apparently harmless script. Other hacks attempt to follow all your followers’ followers on Twitter, or to defriend all your friends while inviting them to meet each other in the flesh.
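The book’s scripts aren’t reproduced here, but the internet-greeting idea can be sketched in a few lines of Python (my own reconstruction, not McLean’s code; the function names and greeting port are invented for illustration):

```python
import itertools
import socket

def ipv4_addresses():
    """Yield every one of the 2**32 dotted-quad IPv4 addresses in order."""
    for a, b, c, d in itertools.product(range(256), repeat=4):
        yield f"{a}.{b}.{c}.{d}"

def say_hello(addr, port=80, timeout=0.5):
    """Try to open a TCP connection to addr and send a greeting.

    Failures (hosts that are down, filtered, or nonexistent) are
    silently ignored: a polite wave, not a probe.
    """
    try:
        with socket.create_connection((addr, port), timeout=timeout) as sock:
            sock.sendall(b"hello\n")
            return True
    except OSError:
        return False

# Even at one greeting per second, visiting all 2**32 addresses would
# take over 136 years -- hence the Wowbagger comparison.
```

Run against the whole address space, this is exactly the “apparently harmless script” that might attract attention; greeting only one’s own network is the safer experiment.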
We are treated to a deep dive into the Hofstadterian strange loop that code sets up between speech and action, described in the introductory chapter 0 as “double coding”: the difference between what humans and machines make of a program, not forgetting its comments.
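The idea is easy to demonstrate with a toy example of my own (not one from the book): a program whose comment addresses the human reader while the code addresses the machine, and the two disagree.

```python
def add(a, b):
    # Subtracts b from a.   <- the story told to the human reader
    return a + b            # <- the instruction given to the machine

# The machine, indifferent to the comment, computes the sum:
print(add(2, 3))  # prints 5
```

The human and the machine read the same text and come away with different programs; the gap between them is where the book’s “double coding” lives.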
As a result, the final impression is that the book is both too long for the limited message it conveys, and too short for readers lacking background in one of its two sides (there are plenty of unexplained technical references that will baffle those without a considerable grasp of computing). This is a pity, as the authors clearly have both the knowledge of and sympathy towards both sides of the subject required to write a compelling book, and education in this area is sorely needed: both by the mass of codeworkers who are by and large politically inert despite being highly educated, and, perhaps to a lesser degree, by political activists who, despite being technically savvy, have perhaps failed to grasp quite how fundamentally computing has changed the nature of the game.
Ridiculously for a book in MIT Press’s “Software Studies” series, and doubly for a book that contains source code, there is no electronic version or accompanying web site. Many of the sites referred to in the book seem to suffer from the same insouciance towards preservation, which seems oddly prevalent among digital artists given the lengths to which more traditional workers go to preserve their œuvre. A smaller gripe (which is by no means unique to this book) is that the endnotes comprise both references and expansions on the main text, resulting in a lot of pointless flipping back and forth to the former in order to catch the latter. Rarely have form and content been so out of whack.
Thanks to Anna Patalong for posting the Guardian article that got me thinking about this topic.
The internet is famously a refuge for human horrors of all kinds, where whatever your pet hate or perversion you can find like-minded people with whom to celebrate and practise it; but the common image of out-of-the-way websites and password-protected forums only applies to the most egregious examples, the tabloid fodder.
Most of the hate, the quotidian racism, sexism and general denigration of the unfamiliar and uncomfortable is in plain sight, and flourishes in the most anodyne, homogeneous, controlled environments. Those on the receiving end are well aware of it, but few realise exactly what the problem is.
After all, Facebook deletes the nasty stuff, right?
In the Guardian article linked to above, the activist Soraya Chemaly hits the bull on the horns: “It’s not about censorship in the end. It’s about choosing to define what is acceptable.” Indeed, it is about censorship, and that is precisely choosing to define what is acceptable. The problem is that acceptability is defined by the mores of the majority: the heavy hand of the Dead White Male decrees that mere depictions in cake of human sex organs are an abomination and must be suppressed, while groups celebrating rape culture are fine. Facebook claims that “it’s not Facebook’s job to decide what is acceptable”, but by removing anything they have already done so: it’s the removal of the labia cupcakes that legitimises the rape jokes.
This particular imbalance is likely on any service paid for by mainstream advertisers; but imbalance is inevitable in any centralized service: even if Facebook removed nothing of their own accord, they would be subject to local law, and hence two major sources of take-down: law enforcement agencies (with mixed results: goodbye child pornography, goodbye political activism) and corporations (goodbye anything that may infringe our IP). Facebook, in other words, is what it’s like living in a privatized country.
We can certainly work to change Facebook’s idea of what is acceptable, but its centralized control will always tend to conservative uniformity. The physical world, full of iniquity at worst and compromise at best as it is, is much more nuanced, and crucially tends to have the property that the more personal the space, the greater the control we have of it. Interior decoration is more or less up to the inhabitant, but Facebook does not hesitate to censor our personal profiles.
Here then is a new reason to support personal computing initiatives, like the FreedomBox, along with privacy and control of our private data: we’ve unwittingly ceded the very construction of society to the cloud, and I fear that if we don’t take it back, then increasingly the gains we’ve made in physical society will not only be slowed and blunted, but reversed in the new domain that was supposed to be the freest space of all.
A curious train of thought started as I was on the way to a local bookshop to buy a mother’s day card. I buy most of my greetings cards in this bookshop, but I’ve never bought a book there in the two and a half years I’ve lived in the area; I don’t buy many books, and mostly I buy through Amazon.
I say “through” because I find that most of the books I buy come from Amazon Marketplace sellers, not directly from Amazon, which is why I’m not sure that Amazon is altogether a Bad Thing: through their marketplace I have access to many small sellers, and it’s far from obvious that a foreign company operating an internet marketplace should be expected to pay UK taxes on that part of their profits. (I agree that they should do so for their direct UK trade.)
Further, I occasionally sell books through Amazon myself, something I’d never managed before: it took the combination of convenience and a huge potential market to find buyers for the small range of somewhat rebarbative volumes I offer. But it still feels wrong: the system is biased in favour of large sellers, most obviously Amazon themselves. Pitting tiny booksellers against each other across the country seems like a way to make everyone miserable, even as it enables the really small sellers (like me!) to get into the game at all.
A healthier system would be one that encourages localism. A monetized version of Freecycle? I might use such a system if it enabled me to make a little more on my sales by not having to pay postage, though I’d want to be able to post inventory data just once: locally and to Amazon.
But then it struck me that the reason nationwide Amazon Marketplace works is that one can send books anywhere in the country for the same amount: Royal Mail’s universal delivery obligation, more or less. Designed to give access to national life for everyone, however physically isolated, this distortion of the market also tips the scales in favour of large organisations.
But it’s letters that are really important to social inclusion, not parcels, and increasingly we’re using the internet to communicate. What if internet access were gradually substituted for universal delivery as the Royal Mail’s obligation? Universal high-speed broadband access has been a political “priority” for some time now, though actual delivery of it has been slow. If over the course of a generation (to allow for education, so that access really is universal: there are still far too many people who are not internet-literate to make this fair today) we made this substitution, we could complete this important infrastructural investment, at the same time removing the artificial distortion to the physical landscape and thereby encouraging local trade. The infinitely malleable online markets would quickly adapt to postage-per-mile. There would need to be exceptions, for example, for sending medicines by post, but this could be charged to the agency with the relevant obligation, in the case of medicine, the NHS, thus rendering the argument over and justification for different cases more transparent than at present.
Why give the Royal Mail this job, which has little to do with their current function or expertise? Because in fact it has everything to do with it: the function is a social one, the same as currently discharged by the Mail. The current fashion in the public sector, when desiring to regulate a sector in the public interest, is to set up a commissioning quango to buy in services from private sector organisations with the relevant technical competence (hopefully). This seems to me to put the cart before the horse when the mission is a social one. The Royal Mail and the closely allied Post Office have decades of successful experience in delivering universal services (banking, benefits, bill paying…); this is “just” another one. Also, it seems hugely wasteful to spend a generation winding down one service while ramping up another that, modulo technology, is very similar in shape: a last-mile communications service.
It’d be good to have a financial incentive to actually buy books from my local bookshop.
Dear developer/researcher/company, I’ve just found your cool language. It might be recently announced, or it might’ve been lurking on the ’net for years. I read your page about it, and I went to download it. Oh dear, it’s non-free. Maybe I’ll try it anyway, though I won’t be using it for all the obvious reasons. Still, that explains why I’ve never heard of it until now.
One thing that a lot of developers seem to overlook is the sheer inertia that any non-free program has to overcome. If your program is free, a lot of people who wouldn’t otherwise use it will do so, and there’s a good chance of its getting into free software distros (which, by the way, are not just for free OSes). Free software is also much more likely to spread via non-free products, hardware or software. All that goes double for language implementations, because issues of portability, maintenance and licensing are even more acute when you’re making an investment in writing code.
So increasingly I conclude that language authors who insist on non-free licensing Just Don’t Get It. There’s one big exception: if your language is secret sauce, then hoarding it to try to make a fortune is at least rational. An example is K/q, which appears to have done very nicely for its author, who also wisely chose as his market the financial sector, in which your clients are likely to be less affected by the factors mentioned above: they’ll have big budgets, and be writing code they don’t expect to distribute, and which may well not have a long shelf-life.
Otherwise, the landscape is littered with duh. Until a few years ago I excused those who had perhaps not fully understood, or even predated, the internet-mediated explosion of free software (though there are plenty of examples of earlier generations who understood the value of freedom as well as anyone, such as Donald Knuth, whose typesetting languages TeX and Metafont dominate mathematical typesetting, and have maintained a large presence in many technical fields for over 30 years), but no longer. Some authors eventually see the light: Carl Sassenrath, author of the Amiga OS, eventually freed his intriguing REBOL language after 15 years of getting nowhere (and, as far as I can tell from reading between the lines, running out of money). Development now seems to be at least trickling along, and from reading the commit logs, he’s just reviewing, not writing. Other projects are almost too late: Strongtalk, a Smalltalk variant with static typing, was freed by Sun in 2006, ten years after its developers were acquired; since then, no-one seems to have taken it on, and this now 20-year-old code is languishing, probably never to be relevant. Then there are “close-but-no-cigar” efforts such as that of Mark Tarver, whose Lisp descendant, Qi, was proprietary, but who almost learnt the lesson with its successor, Shen, except that he forbade redistribution of derivative works which do not adhere to his spec for the language, rather than simply requiring that such derivatives change their name.
Stupidest of all, however, are the academic projects that suffer the same fate. Until around 1990, code was only a by-product: academics mostly wrote papers, and it was the published papers that contained the interesting information; you could recode the systems they described for yourself, and since programs were short and systems short-lived and incompatible, that was fine. Now, research prototypes often involve significant engineering, and unpublished code is wasted effort. It’s incomprehensible that publicly-funded researchers are even allowed not to publish their code, but some don’t. The most egregious current example is the Viewpoints Research Institute, whose stellar cast, among them Alan Kay, the inventor of Smalltalk, publishes intriguing papers but only fragments of code: it’s as if they’ve learnt nothing from the last 40 years.
I really hope this problem will die with the last pre-internet generation.
GNOME 3.6 has been released, and as usual that means updating a raft of extensions that I use to restore sanity to my desktop (mostly, switching off all the guff I don’t use, and putting old apps’ system tray icons back up at the top of the screen where they’re actually useful, rather than hidden away in the message tray).
Most of the extensions should just work, but they don’t, because GNOME Shell extensions are broken by design. Let me count the ways:
Extensions are declared to work with a version of GNOME, not a version of the API. Typically, the version in the metadata.json file includes a revision number. At the very worst (since in practice the GNOME team seems to change the API with every release, of which more below) it should include a minor version number (e.g. “3.6”). There should be absolutely no need, barring bugs in GNOME Shell, to update extensions across minor releases.
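For illustration, here is a hypothetical metadata.json (the uuid, name and description are invented): the shell-version list names exact GNOME releases, and must be extended by hand for every point release the extension is to work with.

```json
{
  "uuid": "example-tray-icons@example.org",
  "name": "Example Tray Icons",
  "description": "Puts old apps' tray icons back at the top of the screen.",
  "shell-version": ["3.4.2", "3.6", "3.6.1"]
}
```

Under the scheme argued for above, a single “3.6” entry would suffice for the whole 3.6 series.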
Minor API changes, like removing an underscore from an identifier, break extensions. Seriously dudes, stop it. We never had this pain with GNOME 2: Compiz extensions, on the whole, worked from one release to the next. GNOME 3 should be GNOME 3. By all means tweak the API as it goes along, but make it backwards compatible.
The built-in tools (Looking Glass) are cute, but useless for development (Looking Glass stops the desktop, so you have to keep switching between it and your editor). It shouldn’t be harder to patch a GNOME Shell extension than a GTK app written in C, it should be much easier.
On the plus side, extensions.gnome.org works faster and is better now: for example, it allows you to install extensions that have not yet been checked on the latest version. For several extensions that was all I needed; for others I needed to change the version number manually; a couple needed trivial patches, and some others I had to switch to alternatives. This stupid dance was more than half the work in upgrading from Ubuntu 12.04 to 12.10, a fact which reflects only a little credit on Canonical, as I would class the amount of upgrade-induced work there as “acceptable” rather than “minimal”.
Recently I’ve been following the spectacular progress of Raspberry Pi with awe and delight. Eben Upton’s brainchild of a computer that empowers and inspires children to learn to program has garnered a lot of attention in the adult world; it remains to be seen whether its plucky British inventor and retro computing appeal can translate into real success where it matters—with children.
Meanwhile, I’ve been led to some startling talks given recently by Eben Moglen, the founder of the Software Freedom Law Center, and general counsel for the Free Software Foundation. They both deal with the importance of free software and free hardware, and the second in particular recasts the arguments into the current political climate as a strategy for getting the attention of politicians. It’s in the second talk too that Moglen insists on the importance of children, describing children’s curiosity as the greatest force for social change that we possess.
Moglen’s urgent and inspiring call to action (“technologists must engage politically to save civilisation”) rather overshadows Upton’s (“get kids excited about programming with cool toys”), but they share at least two foci: not just children, but also small inexpensive computers; for Moglen spends some time talking about the Freedom Box project, to create small inexpensive low-power servers that everyone can use to regain control over their own data.
And indeed, people are already making the Freedom Box software stack run on Raspberry Pis.
Is there some hope that, as well as claiming the children, we may also be able to reclaim their parents? I find it rather sad that so much emphasis is placed on children, as it overlooks the childlike potential of adults: for that potential is all of ours while we yet live.
Raspberry Pi’s most important achievement so far is in generating considerable publicity. The genius of its marketing is that it appeals to the current generation of tech journalists, who were raised on the 1980s home computers whose spirit it invokes. Whether it will appeal to today’s children, however, is less obvious, and arguably more important. Having the inner workings exposed (unlike the home computers) should help, though it also exposes a serious failing (more on that later).
How successful it is likely to be depends largely on what you think the problem is.
Raspberry Pi’s founder, Eben Upton, defines the challenge as getting a pocket-money-priced computer, suitable for teaching children to program, into mass production. The R-Pi meets that definition for the sort of middle-class household that boasts a spare HDMI-compatible monitor plus an old mouse and keyboard and offers generous pocket-money, but elsewhere failing to count input devices and a display in the cost seems disingenuous; an all-software solution that ran on just about anything, including phones and old PCs, and could be freely downloaded, would seem nearer the mark.
Upton would reply that merely adding “an app for that” doesn’t invite the child to program as the old home computers did (when you switched them on you were immediately presented with a programming environment). Compare this with games consoles and PCs: you can program them, but by default they offer games or an office desktop respectively.
More seriously, many parents quite rightly lock down the computers their children use to prevent their visiting undesirable web sites or installing new software, or even insist on their being supervised: forbidding conditions under which to nurture the sort of exploratory play by which we all learn to love programming. A separate device which belongs to the child, contains no sensitive parental data, and can’t go online addresses all these problems, and the child can be left alone with it as safely as with a book.
So far, so good: we’ve recreated a small corner of the 1980s, and a small self-selecting segment of relatively privileged children will have a chance to become programmers. But we already need far more programmers today than when the children of the 1980s entered work, and we’ll need even more when today’s children grow up. To make up the shortfall, programming needs to go mainstream.
This is a challenge that’s already being met locally in many areas; Upton’s approach is to reach out to children directly via programming competitions (or “bribery” as he calls it); although this approach might work without substantial involvement by schools, it seems unwise not to make a serious push for inclusion in the school curriculum.
I believe, however, that programming is far more important and central a skill for the modern world than even its most ardent industrial cheerleaders suggest. Being a non-programmer today is like being illiterate two hundred years ago: it’s possible to get by without understanding anything about programming, but you end up relying heavily on others.
It’s a subtle point, because it’s rare that one needs to actually read or write code; rather, one needs to understand how computers work because increasingly they are embedded in, and hence govern, the systems we use to organise our lives.
Many competent and confident users of computers are reduced to impotent gibbering by machine malfunction, because learning how to operate a computer gives one very little insight into how computers fail, whereas understanding bugs and other failures is central to learning how to program. It’s as if the person who could help you repair your blender were the one you’d ask how to cook a soufflé, or as if the person best able to navigate a car were a mechanic.
(Why computer systems are like this is a fascinating question whose answer involves the immaturity of the technology, its complexity, and the degree to which interface and systems design is still driven by technical rather than human considerations, but one I can’t elaborate on further here.)
Even more important is the mindset underlying programming: programmers, like scientists, believe that systems have rules which, if they can’t be looked up (“reading the source code”), can be discovered and codified (“reverse engineering”). But programming has an additional, empowering belief: that rules can be changed or replaced. In a society that is increasingly rule-bound and run by machines, a programmer’s mindset offers both the belief that things can be improved, and the tools to change them. That is why it’s essential that every child should understand at least the principles of programming, even if they never read or write a line of code as an adult.
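To make that mindset concrete, here is a toy sketch (mine, not the article’s) in Python. The rule, the `check` helper, and the password examples are all invented for illustration: a “rule” is just ordinary data, which can be read, tested against examples, and, crucially, swapped for a better one.

```python
# A "rule" stored as ordinary data: it can be inspected, tested, and replaced.
rules = {
    # Original rule: a password must be at least 8 characters long.
    "password_ok": lambda p: len(p) >= 8,
}

def check(password):
    """Apply whatever rule is currently in force."""
    return rules["password_ok"](password)

# Discover the rule's behaviour empirically ("reverse engineering"):
assert check("longenough")
assert not check("short")

# The empowering step: the rule is not fixed. Replace it with a stricter one.
rules["password_ok"] = lambda p: len(p) >= 12 and any(c.isdigit() for c in p)

assert not check("longenough")    # no longer qualifies under the new rule
assert check("muchlongerpw123")   # satisfies length and digit requirements
```

The point is not the password checking, which is deliberately trivial, but the habit of mind: the rule is visible, testable, and changeable, rather than a fixed property of the machine.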
Hence, it is necessary that programming become part of the core school curriculum, and it will be a good sign that it is embedding itself in our culture when it becomes so. Raspberry Pi has three major problems here: the hardware, the software, and connectivity.
The problem with the hardware is plainly visible, thanks to the R-Pi’s lack of external casing: despite appearances, it’s entirely closed. You can see the components, but you can’t take it apart to see how it works, or modify it in any way. This is partly a result of the nanometre scale on which modern electronics is built, but it’s also caused by the increasingly draconian intellectual property régime under which we suffer. Unfortunately, the beating heart of the R-Pi, a Broadcom SoC (“System on a Chip”), is a prime example of this.
Even more unfortunately, it’s hard to see how anything like the R-Pi could be built without such regressive technology (in this case, via special help from Broadcom that Upton, as an employee, managed to secure). All this means that the R-Pi is not only of little use in firing the imagination of the next generation of hardware engineers (just as sorely needed, if not in such numbers, as the software kind), but its hardware actively reinforces the “black-box, do not touch” mentality that its software is trying to break down.
The software has a problem too: the programming environments provided, although open, are the standard machine-first, arcane languages and tools that adults struggle with. Why not use something like Squeak Etoys, which is based on decades of research in both programming and teaching programming? (The plurality is part of the problem too: the R-Pi offers distracting choice, unlike old home computers, which simply dumped you into their one built-in programming environment.) Fortunately, this is easy to fix: just update the software shipped with the R-Pi.
The final problem, connectivity, is a subtler one. Above, I mentioned that an advantage of giving a child their own device is that it need not be connected to the internet, and hence can be safe for them to play with unsupervised. But the R-Pi lacks other sorts of connection that are important. First, it can’t affect the world physically (though peripherals attached to it could). While the privacy and absolute power one enjoys in the virtual world inside the computer is exhilarating and empowering, children also love toys that have real world effects, and it’s an important aid to the imagination to see that one’s electronic creations can have direct physical outcomes.
The Logo systems of the ’70s and ’80s had a natural real-world extension in the form of drawing “turtles”; today we have Lego Mindstorms, but they’re expensive, and only partly open. What we need is a RepRap for children. Secondly, children want to play with each other; their computers should be able to network too. The One Laptop Per Child machines do this; R-Pis should be able to too (and again, fortunately, it’s mainly a matter of software).
In summary, Raspberry Pi is, closed hardware aside, a great platform that could help catalyse a much-needed revolution in the perception of programming. The good news is that the remaining technical steps are in software, and can be taken without the heroic step of re-mortgaging one’s house, as Upton did to fund R-Pi. The bad news is that the rest of the job is social, and hence much trickier to achieve than a bank loan.
Today the education secretary, Michael Gove, announced an overhaul of the ICT curriculum. This is good news and long overdue; having recently been castigated by the great, the good, and Google for our poor ICT teaching, the government has responded and is launching a campaign to overhaul the way ICT is taught: out with word processing and spreadsheets, and in with programming.
So, I should be happy: mission “innocents saved” accomplished? Sadly not; apart from the natural wariness due any “major government initiative”, this one falls down in two important ways.
First, Gove made his big announcement at an education industry gathering, BETT, and made several references to the importance of industry, both as determining what skills should be taught, and as partners to help teach them. In some vocational subjects, this makes sense, but ICT is a compulsory part of the core curriculum. It is not the function of education to prepare workers for business, and businesses are neither interested in nor competent to decide how to educate people. There’s a very obvious sense in which this is the case as far as ICT goes: children must be educated for life (even if the rhetoric of continuing education bears full fruit, adults simply cannot learn as children do), while the ICT skills that business demands change every few years. So here, as in traditionally academic subjects, we should view any industry involvement with the scepticism (and, dash it, cynicism) that it deserves.
Secondly, the announcement essentially removes ICT from the National Curriculum (the Whitehall-speak is “withdrawing the Programme of Study”). There are positive noises about supporting teachers with actual money, alongside the usual guff about liberating them, but the government is still washing its hands of responsibility for what is now the most important subject taught in schools.
As the culture minister, Ed Vaizey, understands, knowledge of how computers work is now as fundamental as literacy. It’s too basic and important to leave unsupervised even if, on the other hand, it’s so new and changing so rapidly that Gove is correct in saying that a traditionally-written curriculum “would become obsolete almost immediately”.
But the elements of computing do not change so rapidly, and they are the important bit. In the mid-’90s the undergrad course on computation theory I attended was thirty years old, and it was just as relevant and up-to-date as it had been when it was written. Many of the computer languages and operating systems in use today are at least as old, as are almost all of the concepts on which they are based.
And although many of the elements have been with us for decades, they are only now becoming fundamental to our society in the same way as literacy and numeracy. Very few people have any idea what that really means. Two crucial points need to be made: that everyone needs to learn how to program, not just programmers; and that programming is not just about computers, just as literacy is not just about speaking, reading and writing. The programming mindset can transform one’s world-view, and, like literacy, it’s particularly empowering, as it brings not only an understanding of how to decompose problems and invent rules to solve them, but the sense that the rule systems which govern our society are software, and can be changed.
Working out how to get all that across will certainly be aided by freeing teachers to experiment. Championing the process while capturing and disseminating best practice and embedding it in our culture needs central leadership. This is a far from unpromising announcement, but it’s only the beginning of the cultural shift we really need.
Steve Jobs is dead, and the plaudits are rolling in. In a career lasting a little over thirty years, Apple, the company he co-founded and led to all its triumphs, has been involved at every stage of the computer revolution, and, for many people, has been the exemplar of each of its waves, from the Apple ][ to the Macintosh to the iPod to the iPhone to the iPad. Of the countless innovators who drove digital technology into our lives, Jobs was one of the few able to embed it in our hearts and minds, and by far the most influential. His uncompromising insistence on the marriage of form and function set expectations for industrial design well beyond his industry. Stephen Fry was not far from the mark when he told BBC News in August “I don’t think there is another human being on the planet who has been more influential in the last 30 years on the way culture has developed.”
Here in 2011, it’s amazing to look back over the last 30 years and see how completely computing technology has transformed our lives and culture. But step back a bit further and what’s odd is not how much, but how little we’ve achieved. Looking at Jobs’s achievements, I couldn’t help wondering how we’d ended up in a world where he really was the most successful innovator of his era. In 1968, Doug Engelbart, head of the Augmentation Research Center at the Stanford Research Institute, showed us the future in one astonishing 90-minute demo, including the public debut of the mouse, hypertext, remote collaboration, videoconferencing and more, the result of just six years’ research by a team of 18. It would have been more believable coming from Stephen Fry’s alternate world in Making History. How did the computer revolution underperform so badly?
For the truth is, pretty much every revolutionary product Apple brought to market was merely the first successful incarnation of decades-old ideas. The Macintosh, released in 1984, embodied user interface work done at Xerox in the 1970s. The latest incarnation of its system software, Mac OS X, is based on NeXTSTEP, the operating system developed by NeXT, the company Jobs founded when he was fired by Apple in 1985. NeXTSTEP was itself built on the Mach 2.5 kernel[1] and BSD UNIX, both ’80s repackagings of ’70s designs.[2] The hardware was much the same: off-the-shelf parts, dressed beautifully. Jobs told Wired “When people look at an iMac, they think the design is really great, but most people don’t understand it’s not skin deep”, but that’s exactly what it was: where Apple excelled was industrial design. From the look to the choice and arrangement of internal parts,[3] formal and functional elegance was all. And it helped the bottom line: as Jobs told Wired, “Focus does not mean saying yes, it means saying no,” and that meant simplicity not just in individual products, but in the product line as a whole. Despite not being the world’s largest manufacturer in most of its product categories, Apple boasts industry-beating economies of scale thanks to its tiny product range, which means that it uses more of each component than other manufacturers, and can thereby boost its profit margins, or demand uniquely customized components from its suppliers.
The power of Apple’s brand is legendary inside and outside the company. There’s the story of the contractors who were laid off and yet came to work for six months to finish their program, as well as the familiar pictures of fanbois[4] queueing at Apple Stores around the globe for each new product release. Unfortunately, Apple’s brand has become an end in itself for the company. Digital devices are tools, but Apple realised, as car manufacturers had before them, that it was more effective to sell them not merely with stories and as symbols of a better, richer life, but as the very tools the purchaser needed to create that life. In his commencement speech at Stanford in 2005, Steve Jobs said: “Don’t be trapped by dogma—which is living with the results of other people’s thinking.” But Apple told its customers why and how to use its products, and from its experiments with allowing third-party manufacturers to build Mac-compatibles to its iTunes media sales and now its App Store, first for iPhone and then Mac OS, it has leveraged its customers’ loyalty to dictate what they may do with their devices.
And so Jobs compromised with profit. I don’t think it had much to do with avarice; offering consumers desirable products is a tried-and-tested route to success, and computing has many visionaries whose inspiration has touched almost no-one outside the field: Doug Engelbart’s astonishing array of new technologies, Chuck Moore’s insistence that computers can be thousands of times simpler, David Gelernter’s reimagining of the user interface, Carl Sassenrath’s crusade against software complexity; all have spent decades doing incredible work that goes almost unused. Xerox, whose Star workstation from the late ’70s was the basis for the Macintosh, came up with the “memory prosthesis” in the ’90s, an idea which makes perfect sense with today’s smart mobile phones, and which no-one has implemented. The obscure publicly-funded Viewpoints Research Institute, whose board of advisers reads like a who’s who of computer science, and whose small research staff all have stellar track records, is trying to rewrite the rules of building software (and, as far as one can tell, succeeding), but there are few tangible results. From academics to entrepreneurs, none of these brilliant inventors has had an iota of Steve Jobs’s direct impact on the everyday world. Sadly, one of the most brilliant, Jef Raskin, who started development of the Macintosh in 1979, was forced out of Apple by Steve Jobs, failed to find commercial success elsewhere, and in 2004, like Jobs, was diagnosed with pancreatic cancer, from which he died a year later. His book on interface design emphasizes the importance of software design based on how people actually use computers, rather than on superficial attractiveness.
There’s another, more fundamental sort of freedom that Jobs eschewed in his determination to control users’ devices: the freedom to study how the software and hardware work, modify them, and share those modifications. While most users will never want, or in most cases be able, to do such a thing, it’s an option that has proven to be of powerful benefit to society, from the days of computing pioneers sharing programming techniques, to the rise of the free and open source software movements. Under Jobs, Apple itself both used and contributed to open source and free software, but usually only when forced to by free software licenses, and never as a corporate badge of pride. Apple’s brand emphasised revolution, but its product policies encouraged conformity. Even Steve Jobs could not conform the market square to the circle of free users.
And so we are left with an immense challenge: the greatest popularizer of computing technology is dead, but the digital revolution has only achieved a fraction of its potential. If we want his monument, we’ll have to build it ourselves.
[1] Not a microkernel; Mach didn’t become one until the (final) version 3.0.
[2] The original Mac OS was newer, built from the ground up by Apple for the Macintosh; but it was not up to the job of running multiple applications on the same machine, and despite two separate attempts to build a proper operating system from scratch, Pink and Copland, Apple never managed it. An ex-Apple employee, Jean-Louis Gassée, proved it could be done with Be Inc.’s BeOS, but Steve Jobs’s charisma and the more mature state of NeXTSTEP won Apple over when they were looking for a basis for Mac OS X.
[3] The same Wired article quotes a friend of Jobs, in the early days, describing how he would dictate “how to lay out the P[rinted] C[ircuit] board so it would look nice”.
[4] From the French faune des bois, meaning “wood-faun”, a creature that happily spends the night outside, sometimes pictured with the tail of an ass.