Technology: July 2009 Archives

Endless endless

31 July 2009

Take two Macs running Screen Sharing and someone who couldn't be bothered to cross the office to put a laptop to sleep, and this is what happens. I'd been scanning some notebook pages to park in DEVONthink. Because the laptop was physically closer to the scanner/printer, I used Screen Sharing from the laptop to control the scanner-management software that I had running on the desktop machine.

Once I got back to the desktop I thought it would be a good idea to put the laptop to sleep. Although it is a matter of metres away, I decided on the lazy option of just going into the laptop through Screen Sharing from the desktop machine. As the screen painted, I realised my fundamental error. Hello Infinite Loop.


Well, it took a few moments to fill in the image as bits of screen data whizzed from one machine to the other. And, not surprisingly, everything got a little sluggish. Keystrokes certainly didn't work. I'm not sure whether that was because of the cycles Screen Sharing was chewing up or because the keystrokes were flying backwards and forwards over the network to see which machine really owned them. I had to grab a camera to take this picture, then just shut the lid on the MacBook Pro to cut the feedback loop.

Remember, don't try this at home, kids.

If there is a difference between synthetic biology and plain old genetic engineering, it is one of scale. Genetic engineering typically relies on just adding a gene here or there. Synthetic biology is about working on big chunks of the genome - inserting or deleting entire sections.

As a result, it's not a surprise that people see gene synthesis as a key enabling technology for synthetic biology. You need bigger chunks of DNA. Ergo, you need to synthesise a lot of DNA. The J Craig Venter Institute (JCVI) is the most famous proponent of this philosophy, having enlisted the help of four gene-synthesis companies, and a lot of delicate cloning work at the JCVI, to build an entire genome from scratch. Ultimately, that synthetic genome is meant to reboot an existing bacterial cell with a new operating system. But it's by no means a popular approach, not least because it's so expensive to do and because it's not that useful in practice.

The Office of Life Sciences Blueprint for the UK's National Health Service (NHS) is a remarkable document. It's found a way to turn the nation's single largest consumer of public funds into a profit centre. Although the blandly worded manifesto for a technologically led healthcare system will probably remain mostly unimplemented amid the upheaval of a likely change in ruling party come 2010, it still carries a slightly chilling undercurrent.

Take this nugget for example (with my emphasis):

"1.5 The NHS is a unique selling point for the UK, and has the potential to add significantly to the UK's attractiveness as a base for life sciences, providing high-quality healthcare to all, and offering a competitive advantage with its vast patient databases for clinical trials and investigations. There is also a vital role for the NHS as a value-creator and an engine of economic growth, leading the way in the uptake of innovative medicines and technologies, deepening collaboration with industry, and helping the industry to flourish and grow. In turn, a flourishing life sciences industry will help generate the step-change innovation needed to maintain quality and productivity into the future."

In other words: "Invest here. We've got lots of sick people with all their records sitting in a shiny new database (well eventually they will be)."

One thing that's been niggling at me all day on Google's Chrome OS is: if everything just runs in a browser-based sandbox, why bother with an x86 port? You are, on average, going to pay more for an x86-based machine than one based around an equivalent ARM processor for the simple reason that ARM licensees will tear each other's throats out to get into a high-volume design. Intel doesn't have the same kind of pressure, although it does need to play nice for a while to avoid driving too many customers into the ARM world.

But the main reason for slinging an x86 processor into anything at the low end of the pricing or size scale is compatibility with Microsoft Windows. If all the applications run in a browser, presumably using some mixture of JavaScript, XML and services from The Cloud, there is no real advantage in having an x86.

PC makers may prefer to put Chrome OS onto an x86-based board for inventory reasons - one selling point may be that such a stripped-down environment gives them a way of selling a $99 machine without cannibalising the market for slightly more capable machines able to run a full Linux or Windows 7 OS. Having to do an ARM version as well as x86 for very similar hardware increases design cost, perhaps to the point where the price differential between Intel's silicon and everybody else's becomes irrelevant.

Then you have the compatibility option: force Windows into its own little sandbox and run it under virtualisation as an option for those who cannot give up on everything Microsoft. Google has made no mention of virtualisation. But the idea of a Windows that cannot do too much damage to your main computing environment has its attractions. It's something that the military is already using in a different context through software such as Green Hills' Integrity.

Strictly speaking, Windows runs on emulated hardware on top of Integrity, but the idea is that Integrity has full control over the system and, in one implementation, has been certified to Common Criteria EAL 6+. In other words, it's very tough to hack into, unlike Windows, which sits two levels down. EAL 4 sounds good but means it's vulnerable to "casual" attacks.

I can't see Google going to Common Criteria certification. But a heavily slimmed down kernel provides a smaller attack surface. Having Windows isolated through virtualisation with the tunnel to and from the web monitored by the core OS may provide a more secure way to run Windows applications, or at least the illusion of it, which is all Google really needs. Then Google can sell its own sandbox as a more trouble-free environment: "only go to Windows when you really, absolutely must get to Exchange through a VPN connection".

Otherwise, the only reason left for x86 compatibility featuring so prominently is that Chrome was written initially for that architecture. That's not really much of a reason to pay more for the hardware.

In the spirit of scientific wagers that saw Richard Feynman bet against micromachining and Stephen Hawking reject the idea of Cygnus X-1 harbouring a black hole, Professor Lewis Wolpert has wagered a case of port that maverick biologist Rupert Sheldrake is wrong about the role of the genome in determining the fate of living organisms.

However, it is a bet that Wolpert expects he will lose, not because he believes he is wrong but because computer technology and human knowledge will not be able to establish his position in 20 years' time, when the outcome of the bet is to be decided.

"My guess is that I have been quite generous," Wolpert told me on the question of timescale. It could take 40 years, not 20, for science to work out how to predict the shape of an organism from the genetic information contained in its egg. "It is really a matter of being able to develop the molecular biology needed to understand the interactions within the cell: the way in which all the proteins interact within the cells."

Not only that, there is the issue of how much compute power will be needed to do the prediction. "It will be immense," said Wolpert, who reckons 40 years is a more likely timescale in which to prove him right.

The full arguments from both Wolpert and Sheldrake are published in this week's New Scientist. The release that has gone out from Sheldrake's publisher, Icon Books, makes it look as though Wolpert had narrowed the scope of the bet too far. "[Wolpert] is convinced that it is only a matter of time before all the details of an organism can be predicted on the basis of the genome," it read.

That rang a few alarm bells because, if that meant the genome as a sequence of DNA, the chances are that there is not enough information in it to predict much with any accuracy. This looked to be a bet Wolpert was guaranteed to lose. But Wolpert explained that, from his point of view, the bet takes into account the additional information that an egg might have and not just the sequence of DNA in the genome. "It involves the entire chemical constitution of the egg."

Wolpert is not a big fan of epigenetics as a means of inheritance but acknowledges that the action of proteins on DNA, often silencing genes by adding chemical side chains, or by wrapping the DNA around them, influences the behaviour of the cell.

The bet stems from a March debate at the University of Cambridge between the two in which Sheldrake maintained the position that the genome cannot determine how an organism will form. "It provides nothing more than the code for making proteins," said Sheldrake. His argument is that for cells to differentiate something else is needed. For Sheldrake, that something else is 'morphic resonance': the result of fields formed by biological material that communicates form and function to similar creatures.

That DNA provides the recipe for proteins, and only does that, is no problem for Wolpert. It's not just proteins that DNA templates for: RNA is the initial product and, in anything above bacteria, an important product for cell development. I don't think Sheldrake and Wolpert would disagree on what DNA generates; the difference lies in the emphasis.

For Sheldrake it is "just proteins" (and RNA). For Wolpert, that's all you need. Emergence takes care of the rest. It is in the complex interactions between proteins, DNA and RNA that organisms grow and develop. Nothing else is needed. Our only problem right now is that we cannot predict precisely how an organism might develop purely from the mixture of DNA and proteins in an egg, assuming we could, in the first place, take a snapshot of all those things.
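Wolpert's "nothing else is needed" position can be illustrated with a toy model (my own sketch, not anything either man proposed): a tiny Boolean gene-regulatory network in which every cell carries an identical "genome" - the same update rules - yet different starting chemistries settle into different stable fates, with some starting states left oscillating.

```python
from itertools import product

# Toy Boolean gene-regulatory network: three genes, identical "genome"
# (the same update rules) in every cell. A gene is on (1) or off (0).
def step(state):
    a, b, c = state
    return (
        int(not b),        # A is repressed by B
        int(not a),        # B is repressed by A (a toggle switch)
        int(a and not b),  # C needs A on and B off
    )

def attractor(state, max_steps=20):
    """Iterate the network until the state stops changing."""
    for _ in range(max_steps):
        nxt = step(state)
        if nxt == state:
            return state
        state = nxt
    return None  # oscillating, no fixed point

# Same rules, different initial chemistry, different cell "fates".
for start in product((0, 1), repeat=3):
    print(start, "->", attractor(start))
```

The point of the toy is only that differentiated outcomes emerge from interaction rules plus initial conditions; no extra field is required.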

In the March debate, Sheldrake argued that Francis Crick and Sidney Brenner had made all these deterministic claims before, back in 1963. And then, the timeframe was only ten years. What's so different now?

Clearly, Brenner underestimated the complexity of the job of using genetics to predict development. Crick took on consciousness, which was an even harder thing to explain. But, in that time, molecular biology has answered a good many questions about development even if, at the same time, it has raised many more.

Sheldrake's position boils down to an argument that fields we cannot detect directly are somehow able to act on biological material, and biological material alone, in apparently complex ways. In fact, anything that does not have an immediate explanation in the molecular world simply complicates the field part of the equation. And, in Sheldrake's view, this is a field with memory. That's some field.

TechCrunch calls Google's announcement of an operating system designed to run precisely one application a nuclear bomb on Microsoft. A commenter further down tones it down a bit: "a bullet aimed at Microsoft". Or maybe it's a fart in the general direction of Redmond, WA?

The TechCrunch claim is based on a largely detail-free, claim-heavy blog post at the official Google blog. It's all vaguely reminiscent of the pronouncements that Sun Microsystems made for Solaris in the 1990s with almost zero evidence:

"It's scalable. Way more scalable than any other OS."

"But it's a warmed-over Unix."

"No other Unix or OS is as scalable as this one. It will scale from your toaster to a mainframe."

As it turned out, it scaled from an expensive workstation to an expensive minicomputer. And its creator looked on, near helpless, as Windows and another warmed-over Unix ate even into that space. Oddly enough, Linux can run in a toaster, just as long as you don't mind slamming a few megabytes of DRAM into an appliance with the sole function of heating bread.

What has Google got in its hype playbook? Why, the web, of course. "...the operating systems that browsers run on were designed in an era where there was no web." What, like Unix? This is either a clue that the infrastructure for Chrome OS is resolutely not Linux or just a bit of marketese that Google hopes people will forget as the project nears fruition. In either case, it raises the question: exactly which core feature do existing multitasking OS implementations lack that a browser requires to be built in? Are there special spinlocks or mutual exclusion semaphores that a browser requires?
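For what it's worth, the primitives a real browser leans on are exactly the stock ones every multitasking OS has shipped for decades. A minimal sketch (hypothetical, in Python rather than anything Chrome actually uses): a browser-style event loop serialising work from "network" threads with nothing more exotic than a queue and a lock.

```python
import threading
import queue

# A browser-style event loop built from stock OS primitives:
# worker threads ("network fetches") post results to a queue and a
# single main loop consumes them. Nothing here needs a kernel
# feature that didn't exist long before the web.
events = queue.Queue()
rendered = []
lock = threading.Lock()  # guards the shared "page" state

def fetch(url):
    # Stand-in for a network fetch; posts a completion event.
    events.put(("loaded", url))

def main_loop(expected):
    handled = 0
    while handled < expected:
        kind, payload = events.get()  # blocks, much like select()/poll()
        if kind == "loaded":
            with lock:
                rendered.append(payload)
        handled += 1

workers = [threading.Thread(target=fetch, args=(u,))
           for u in ("page.html", "style.css", "app.js")]
for w in workers:
    w.start()
main_loop(expected=3)
for w in workers:
    w.join()

print(sorted(rendered))
```

Ordinary threads, a mutex and a blocking queue: the same machinery Unix-family systems have offered since well before HTTP existed.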

In fact, Chrome OS will run on a Linux kernel. Erm, wouldn't that be an OS designed before the web came along? Linux itself may have been born after Tim Berners-Lee and colleagues implemented the initial HTTP protocol. But the core architecture is that of an operating system designed in the late 1960s, which is even pre-Internet.

The Chrome OS will apparently have a new security infrastructure. Built on top of Linux. "Users won't have to deal with viruses, malware and security updates." That's right, because Linux never has to implement security updates. Oh, wait a minute...

It's possible that Google will insert checks for buffer overflows and other common attacks. But those modules have been available for Linux for some time.

"We have a lot of work to do, and we’re definitely going to need a lot of help from the open source community to accomplish this vision." So, this marvellous OS that is designed in the era of the browser...isn't really ready yet? It doesn't launch until the second half of next year, although an early version of the source code should arrive in the autumn of this year. But the early FUD clobbers Ubuntu and the rest. And Android.

The FUD has already infected TechCrunch where, apparently, Android just isn't built for the x86, whereas Chrome OS is. I'm sure that kind of thing really bothered the engineers at Apple when they looked at OS X. "You know, this thing just isn't built for x86...oh no, it's OK, I found an x86 compiler."

Android remains Linux with some kind of weird Java engine on top. Java was designed explicitly to run on register-limited architectures such as the x86. Unix was designed well before RISC architectures such as ARM existed. Plus, in the meantime, Intel came up with a few tricks to get around the register limitations of the x86. And, as MG Siegler then admits, there are ports of Android to the x86.

Google admits there will be overlap between Chrome OS and Android but adds: "we believe choice will drive innovation for the benefit of everyone". Translation? "You work it out. We will have two Linux-based OSs, one of which is designed for netbooks and desktops but has been hobbled more than the one for phones."

So, in an environment where support for Android is respectable, but still fragile, Google drops a bomb on its own OS. And, because the time between announcement and actual code is relatively long, people won't have a good idea of how restricted the Chrome OS environment is. It also raises the question of whether Android is actually web-ready if Google needs another go at an OS.

Finally, you have to consider the main target: Microsoft. Google claims applications written for Chrome OS will run in any "standards-based" browser. Stalwarts of web development will probably let out a hollow laugh at this point: which standards are we talking about here? And, note the emphasis: apps written for Chrome. It does not mean the converse.

As an example, Outlook Web Access runs well in precisely one browser: IE. Does Google plan to reverse engineer IE web apps in the hope of running them on its own browser? Or does it hope that the primary destination for Chrome OS users will be Google Apps and need not worry about the rest?