Chris Edwards: July 2009 Archives

Endless endless

31 July 2009

Take two Macs running Screen Sharing and someone who couldn't be bothered to cross the office to put a laptop to sleep and this is what happens. I'd been scanning some notebook pages to park in DevonThink. Because the laptop was physically closer to the scanner/printer, I used Screen Sharing from the laptop to control the scanner-management software that I had running on the desktop machine.

Once I got back to the desktop I thought it would be a good idea to put the laptop to sleep. Although it is a matter of metres away, I decided on the lazy option of just going into the laptop through Screen Sharing from the desktop machine. As the screen painted, I realised my fundamental error. Hello, Infinite Loop.


Well, it took a few moments to fill in the image as bits of screen data whizzed from one machine to the other. And, not surprisingly, everything got a little sluggish. Keystrokes certainly didn't work. I'm not sure whether that was because of the cycles Screen Sharing was chewing up or because the keystrokes were flying backwards and forwards over the network to see which machine really owned them. I had to grab a camera to take this picture, then just shut the lid on the MacBook Pro to cut the feedback loop.

Remember, don't try this at home, kids.

If there is a difference between synthetic biology and plain old genetic engineering, it is one of scale. Genetic engineering typically relies on just adding a gene here or there. Synthetic biology is about working on big chunks of the genome - inserting or deleting entire sections.

As a result, it's not a surprise that people see gene synthesis as a key enabling technology for synthetic biology. You need bigger chunks of DNA. Ergo, you need to synthesise a lot of DNA. The J Craig Venter Institute (JCVI) is the most famous proponent of this philosophy, having enlisted the help of four gene-synthesis companies, and a lot of delicate cloning work at the JCVI itself, to build an entire genome from scratch. Ultimately, that synthetic genome is meant to reboot an existing bacterial cell with a new operating system. But it's by no means a popular approach, not least because it's so expensive to do and because it's not that useful in practice.

Anyone trying to dramatise the lives of the Pre-Raphaelite Brotherhood has my sympathy. At first glance, the project seems to have everything you need for post-watershed TV: sex; glamour; tragedy. It also has too many characters, most of them pretty unsympathetic gits, not afraid to do a bit of Victorian moralising while destroying the people they purported to love.

The only one who isn't a git is a bit naïve and dull, gets married, paints the ultimate chocolate-box picture, gets rich and er...that's it. It becomes pretty obvious pretty quickly why no-one has tried dramatising the story before.

The Office of Life Sciences Blueprint for the UK's National Health Service (NHS) is a remarkable document. It's found a way to turn the nation's single largest consumer of public funds into a profit centre. Although the blandly worded manifesto for a technologically led healthcare system will probably remain mostly unimplemented amid the upheaval of a likely change in ruling party come 2010, it still carries a slightly chilling undercurrent.

Take this nugget for example (with my emphasis):

"1.5 The NHS is a unique selling point for the UK, and has the potential to add significantly to the UK's attractiveness as a base for life sciences, providing high-quality healthcare to all, and offering a competitive advantage with its vast patient databases for clinical trials and investigations. There is also a vital role for the NHS as a value-creator and an engine of economic growth, leading the way in the uptake of innovative medicines and technologies, deepening collaboration with industry, and helping the industry to flourish and grow. In turn, a flourishing life sciences industry will help generate the step-change innovation needed to maintain quality and productivity into the future."

In other words: "Invest here. We've got lots of sick people with all their records sitting in a shiny new database (well eventually they will be)."

One thing that's been niggling at me all day on Google's Chrome OS is: if everything just runs in a browser-based sandbox, why bother with an x86 port? You are, on average, going to pay more for an x86-based machine than for one based around an equivalent ARM processor for the simple reason that ARM licensees will tear each other's throats out to get into a high-volume design. Intel doesn't have the same kind of pressure, although it does need to play nice for a while to avoid driving too many customers into the ARM world.

But the main reason for slinging an x86 processor into anything at the low end of the pricing or size scale is compatibility with Microsoft Windows. If all the applications run in a browser, presumably using some mixture of JavaScript, XML and services from The Cloud, there is no real advantage in having an x86.

PC makers may prefer to put Chrome OS onto an x86-based board for inventory reasons - one selling point may be that such a stripped-down environment gives them a way of selling a $99 machine without cannibalising the market for slightly more capable machines able to run a full Linux or Windows 7 OS. Having to do an ARM version as well as an x86 one for very similar hardware increases design cost, perhaps to the point where the price differential between Intel's silicon and everybody else's becomes irrelevant.

Then you have the compatibility play: force Windows into its own little sandbox and run it under virtualisation for those who cannot give up on everything Microsoft. Google has made no mention of virtualisation. But the idea of a Windows that cannot do too much damage to your main computing environment has its attractions. It's something that the military is already using in a different context through software such as Green Hills' Integrity.

Strictly speaking, Windows runs on emulated hardware on top of Integrity, but the idea is that Integrity has full control over the system and, in one implementation, has been certified to Common Criteria EAL 6+. In other words, it's very tough to hack into, unlike Windows, which sits two levels down the scale. EAL 4 sounds good but means the system is only designed to resist "casual" attacks.

I can't see Google going to Common Criteria certification. But a heavily slimmed down kernel provides a smaller attack surface. Having Windows isolated through virtualisation with the tunnel to and from the web monitored by the core OS may provide a more secure way to run Windows applications, or at least the illusion of it, which is all Google really needs. Then Google can sell its own sandbox as a more trouble-free environment: "only go to Windows when you really, absolutely must get to Exchange through a VPN connection".

Otherwise, the only reason left for x86 compatibility featuring so prominently is that Chrome OS was initially written for that architecture. That's not really much of a reason to pay more for the hardware.

In the spirit of the scientific wagers that saw Richard Feynman bet against micromachining and Stephen Hawking reject the idea of Cygnus X-1 harbouring a black hole, Professor Lewis Wolpert has staked a case of port on maverick biologist Rupert Sheldrake being wrong about the role of the genome in determining the fate of living organisms.

However, it is a bet that Wolpert expects to lose, not because he believes he is wrong but because computer technology and human knowledge will not be able to establish his position in 20 years' time, when the outcome of the bet is to be decided.

"My guess is that I have been quite generous," Wolpert told me on the question of timescale. It could take 40 years, not 20, for science to work out how to predict the shape of an organism from the genetic information contained in its egg. "It is really a matter of being able to develop the molecular biology needed to understand the interactions within the cell: the way in which all the proteins interact within the cells."

Not only that, there is the issue of how much compute power will be needed to do the prediction. "It will be immense," said Wolpert, who reckons 40 years is a more likely timescale in which to prove him right.

The full arguments from both Wolpert and Sheldrake are published in this week's New Scientist. The release that has gone out from Sheldrake's publisher, Icon Books, makes it look as though Wolpert had narrowed the scope of the bet too far. "[Wolpert] is convinced that it is only a matter of time before all the details of an organism can be predicted on the basis of the genome," it read.

That rang a few alarm bells: if that meant the genome in terms of a sequence of DNA, the chances are that there is not enough information in it to predict much with any accuracy. This looked to be a bet Wolpert was guaranteed to lose. But Wolpert explained that, from his point of view, the bet takes into account the additional information that an egg might hold and not just the sequence of DNA in the genome. "It involves the entire chemical constitution of the egg."

Wolpert is not a big fan of epigenetics as a means of inheritance but acknowledges that the action of proteins on DNA, often silencing genes by adding chemical side chains, or by wrapping the DNA around them, influences the behaviour of the cell.

The bet stems from a March debate at the University of Cambridge between the two in which Sheldrake maintained the position that the genome cannot determine how an organism will form. "It provides nothing more than the code for making proteins," said Sheldrake. His argument is that for cells to differentiate something else is needed. For Sheldrake, that something else is 'morphic resonance': the result of fields formed by biological material that communicates form and function to similar creatures.

That DNA provides the recipe for proteins, and only does that, is no problem for Wolpert. It's not just proteins that DNA templates for: RNA is the initial product and, in anything above bacteria, an important product for cell development in its own right. I don't think Sheldrake and Wolpert would disagree on what DNA generates; the difference lies in the emphasis.

For Sheldrake it is "just proteins" (and RNA). For Wolpert, that's all you need. Emergence takes care of the rest. It is in the complex interactions between proteins, DNA and RNA that organisms grow and develop. Nothing else is needed. Our only problem right now is that we cannot predict precisely how an organism might develop purely from the mixture of DNA and proteins in an egg, assuming we could, in the first place, take a snapshot of all those things.

In the March debate, Sheldrake argued that Francis Crick and Sidney Brenner had made all these deterministic claims before, back in 1963. And then, the timeframe was only ten years. What's so different now?

Clearly, Brenner underestimated the complexity of the job of using genetics to predict development. Crick took on consciousness, which was an even harder thing to explain. But, in that time, molecular biology has answered a good many questions about development even if, at the same time, it has raised many more.

Sheldrake's position boils down to an argument that fields we cannot detect directly are somehow able to act on biological material, and biological material alone, in apparently complex ways. In fact, anything that does not have an immediate explanation in the molecular world simply complicates the field part of the equation. And, in Sheldrake's view, this is a field with memory. That's some field.

TechCrunch calls Google's announcement of an operating system designed to run precisely one application a nuclear bomb dropped on Microsoft. A commenter further down tones it down a bit: "a bullet aimed at Microsoft". Or maybe it's a fart in the general direction of Redmond, WA?

The TechCrunch claim is based on a largely detail-free, claim-heavy post on the official Google blog. It's all vaguely reminiscent of the pronouncements that Sun Microsystems made for Solaris in the 1990s with almost zero evidence:

"It's scalable. Way more scalable than any other OS."

"But it's a warmed-over Unix."

"No other Unix or OS is as scalable as this one. It will scale from your toaster to a mainframe."

As it turned out, it scaled from an expensive workstation to an expensive minicomputer. And its creator looked on almost helplessly as Windows and another warmed-over Unix ate even into that space. Oddly enough, Linux can run in a toaster, just as long as you don't mind slamming a few megabytes of DRAM into an appliance with the sole function of heating bread.

What has Google got in its hype playbook? Why, the web, of course. "...the operating systems that browsers run on were designed in an era where there was no web." What, like Unix? This is either a clue that the infrastructure for Chrome OS is resolutely not Linux or just a bit of marketese that Google hopes people will forget as the project nears fruition. In either case, it raises the question: exactly which core feature do existing multitasking OS implementations lack that a browser requires to be built in? Are there special spinlocks or mutual-exclusion semaphores that a browser needs?

In fact, Chrome OS will run on a Linux kernel. Erm, wouldn't that be an OS designed before the web came along? Linux itself may have been born after Tim Berners-Lee and colleagues implemented the initial HTTP protocol. But the core architecture is that of an operating system designed in the late 1960s, which is even pre-Internet.

The Chrome OS will apparently have a new security infrastructure. Built on top of Linux. "Users won't have to deal with viruses, malware and security updates." That's right, because Linux never has to implement security updates. Oh, wait a minute...

It's possible that Google will insert checks for buffer overflows and other common attacks. But those modules have been available for Linux for some time.

"We have a lot of work to do, and we’re definitely going to need a lot of help from the open source community to accomplish this vision." So, this marvellous OS that is designed in the era of the browser...isn't really ready yet? It doesn't launch until the second half of next year, although an early version of the source code should arrive in the autumn of this year. But the early FUD clobbers Ubuntu and the rest. And Android.

The FUD has already infected TechCrunch where, apparently, Android just isn't built for the x86, whereas Chrome OS is. I'm sure that kind of thing really bothered the engineers at Apple when they looked at OS X. "You know, this thing just isn't built for x86...oh no, it's OK, I found an x86 compiler."

Android remains Linux with some kind of weird Java engine on top. Java was designed explicitly to run on register-limited architectures such as the x86. Unix was designed well before RISC architectures such as ARM existed. Plus, in the meantime, Intel came up with a few tricks to get around the register limitations of the x86. Plus, as MG Siegler then admits, there are ports of Android to the x86.

Google admits there will be overlap between Chrome OS and Android but adds: "we believe choice will drive innovation for the benefit of everyone". Translation? "You work it out. We will have two Linux-based OSs, one of which is designed for netbooks and desktops but has been hobbled more than the one for phones."

So, in an environment where support for Android is respectable, but still fragile, Google drops a bomb on its own OS. And, because the time between announcement and actual code is relatively long, people won't have a good idea of how restricted the Chrome OS environment is. It also raises the question of whether Android is actually web-ready if Google needs another go at an OS.

Finally, you have to consider the main target: Microsoft. Google claims applications written for Chrome OS will run in any "standards-based" browser. Stalwarts of web development will probably let out a hollow laugh at this point: which standards are we talking about here? And, note the emphasis: apps written for Chrome. It does not mean the converse.

As an example, Outlook Web Access runs well in precisely one browser: IE. Does Google plan to reverse engineer IE web apps in the hope of running them on its own browser? Or does it hope that the primary destination for Chrome OS users will be Google Apps and need not worry about the rest?

I was going to write on the insidious effect of search-engine optimisation (SEO) on communication but Read/Write Web basically wrote the first half of it for me:

"It's happening to more and more of the blogs I read: the personality, quirkiness and unique voice that once made them so appealing to me are fading. In their place, an SEO-driven uniformity that puts keyword placement ahead of pretty much everything.

"That approach has been afflicting newspapers for some time, as clever headlines give way to the kind of blandness that only a machine could love (which is no coincidence, because machines are the target audience). And many pro bloggers who rely on AdSense for their revenue have been doing it for years.

"But now I'm starting to see it trickle into the blogging of friends and loved ones. I understand the desire to rank more highly in search engines, but as SEO goes mainstream, I can't help but feel we're losing something."

(I don't normally copy out most of a blog post, but you might want to go there anyway as there's also a cartoon.)

The long-term effect of SEO is the thing that really bothers me. The chances are that Google and other search engines will react faster than those creating the 'rules' that people think they need to obey to get better rankings. But the rules might stick around long enough to become a tradition. Worse, people might even forget why the rules were created: "Why are we doing this?"; "That's the way we do it round here."

And, let's face it, what's so great about the idea of writing for a machine? Machines can't read. It's going to be a while before they can. All they can do, for the moment, is make a list of keywords and their relative positions, from which they compute some kind of weighting matrix. All we can hope for in the short term is that the war of keyword stuffing gets so bad - and it's a war where the spammers and sploggers are better able to push keyword stuffing as far as it will go - that the search engines get better at looking for the warning signs of over-SEO'd pages.
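That keyword-and-position bookkeeping is simple enough to sketch. Here's a toy Python model - the function name and the linear position weighting are my inventions, nothing a real engine uses - that scores each keyword by how often it appears, with early occurrences counting for more. This is exactly the signal SEO rules try to game.

```python
from collections import defaultdict

def keyword_weights(text, keywords):
    """Score each keyword by frequency, weighting early occurrences more.

    A toy model: real search engines combine far more signals, but the
    frequency-times-position idea is the one SEO advice chases.
    """
    words = text.lower().split()
    n = len(words)
    scores = defaultdict(float)
    for pos, word in enumerate(words):
        word = word.strip(".,;:!?\"'()")  # drop trailing punctuation
        if word in keywords:
            # early words count for more: weight falls linearly towards 0.5
            scores[word] += 1.0 - 0.5 * (pos / max(n - 1, 1))
    return dict(scores)

headline = "Cheap flights: find cheap flights and cheap hotels today"
print(keyword_weights(headline, {"cheap", "flights"}))
```

Run it on a keyword-stuffed headline like the one above and "cheap" outscores everything else - which is precisely why the stuffing happens, and why a counter-heuristic can spot it: scores that high relative to document length are themselves a warning sign.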

But spam has encouraged search engines to ignore the out-of-band data that writers could use to improve findability without wrecking the text itself. The meta tag? Vestigial at best, and yet, with better heuristics to spot spammy pages, it is arguably the best home for SEO data. Far better, from the point of view of people, than stuffing the SEO data into headlines and crossheads.

The effect on Google of widespread SEO has not done the search engine that much good either. It's taken a while, but the results pages for many popular topics are showing the creeping nonsense that afflicted AltaVista on its way down. The problem for Microsoft is that Bing starts off just as bad.

Free fall

1 July 2009

Malcolm Gladwell's lengthy demolition of Chris Anderson's latest book Free has given Gladwell's review a lot more attention than the book itself. Google Blog Search turned up an estimated 8000 hits for "gladwell anderson review free new yorker". Even given Google's legendary inaccuracy in calculating the number of hits (the chances are it's closer to 800), that's still a lot of comment and bloggage. And here's another one.

Alan Patrick of Broadstuff argues that Gladwell has the necessary superstar cachet to be noticed by the media - economists have criticised Free with barely a nod. But there is another reason why people have latched onto Gladwell's critique: it's one member of The Big Idea book-writing club rounding on another. While Patrick argues that the fascination with Gladwell's review is symptomatic of a forward march into the Age of Unreason, it could be that one bestselling form of business book is nearing extinction.

It's a form that's been falling apart for a while. Michiko Kakutani in the New York Times began her review of Gladwell's own Outliers with:

"Malcolm Gladwell's two humongous best sellers, 'The Tipping Point' and 'Blink', share a shake-and-bake recipe that helps explain their popularity. Both popularize scientific, sociological and psychological theories in a fashion that makes for lively water-cooler chatter about Big Intriguing Concepts...Both books are filled with colorful anecdotes and case studies that read like entertaining little stories. Both use Powerpoint-type catchphrases to plant concepts in the reader's mind. And both project a sort of self-help chirpiness, which implies they are giving the reader useful new insights into the workings of everyday life."

I don't know how chirpy Free is for I haven't read it. But that paragraph could so easily be applied to the glut of high-concept business books that greet you on the promo tables of airport bookshops around the world.

Normally, for a book of this sort, you have to stretch out the argument to ten chapters in the knowledge that one would do the job - which is possibly why the lecture tours do so well. Why read the book when the author can cut out the chaff live, without the unnecessary bother of reading? It is at least one step up from my use of video recorders, which seem to wind up watching films on my behalf. The video recorder never gives me a summary of whether the film was worth the decoding.

Kakutani, unlike the victim of her review, gives away the secret in the second paragraph: "'Outliers'...employs this same recipe. It is also glib, poorly reasoned and thoroughly unconvincing".

Now, of course, most reviewers can wield the knife with greater impunity. I don't believe Kakutani has a high-concept business book on the shelves. The problem for Gladwell is that his critique of Free can so easily be applied to his own work. You wouldn't have to change many words in the final sentence of the New Yorker review:

"The only iron law here is the one too obvious to write a book about, which is that the digital age has so transformed the ways in which things are made and sold that there are no iron laws."

So, if the proponents of the high-concept business book are now turning on the form - maybe Gladwell has had a bit of an epiphany or reckons the game is up - will publishers start to turn away from it, fearing that the public itself is getting bored with expositions of easy theories that leave out the counter-evidence?