June 2009 Archives

The Intel/Nokia deal that will see the two companies work on Linux-based software for mobiles is both good and bad news for Microsoft. But it's a real problem for Nokia's own Symbian group. When you consider that the Symbian OS started life as a mobile internet device (MID) operating system, mutated into smartphone software and took more than ten years to get where it is today, this is not the outcome that Nokia could have wanted when it decided to back Symbian. And it leaves Nokia looking a bit flat-footed against much smaller but apparently much more nimble competitors. The only good news for Symbian is that at least it's not LiMo.

Nokia outsells the more fashionable brands, such as Apple, by a long way. This is something that the Symbian people were keen to hammer home when they decided to go open source last year. But reviews of the latest high-end Nokia phones are not good, with a number pointing to the overall clunkiness of the Symbian interface when compared with more recent software that was put together in a fraction of the time needed to get Symbian OS into its current state.

So, Nokia is left trying to work with Intel on knocking Moblin into shape so that they can take on the iPhone and the various Android-based devices going through design. In a back-handed way, this is good news for Microsoft when you look at the potential for bigger MIDs and small netbooks. Balkanisation of the Linux landscape only plays into Microsoft's hands. Intel will have its very own virtualisation software courtesy of Wind River, although the future embedded software subsidiary is mainly pitching that at telecom infrastructure equipment. It's a small leap of imagination to go from a design that runs Moblin (or indeed Android) to one that bungs on Windows 7 as well for those moments when only MS Office will do.

There is precious little detail in what Intel and Nokia have said so far, but Nokia is just the kind of customer Intel would want for the Atom-based devices it has arranged to have made at TSMC. For Nokia, it makes the choice between ARM and Atom little more than a matter of deciding which core to slap down on the system-on-chip (SoC). The open question is how much Nokia will want to outsource design to Intel, given that it has been slimming down its own chip-design operations, versus how much control the DIY approach will give the Finnish company.

At the same time, Intel gets a lot of useful intellectual property (IP) that it can slap into an Atom- or Centrino-style chipset that might finally make a netbook a netbook rather than just a very cheap laptop. You have to hope Nokia got a very keen deal on Atom pricing, given that this is a deal that gives Intel much better access to a market it craves, and one in which, if the number-one chipmaker succeeds, it will be able to play the monopoly game. Being able to pit Qualcomm and the other ARM processor suppliers against each other has made ARM a much more attractive proposition to handset makers than anything Intel can come up with. But if, through a deal with a top supplier, Intel can get a foot in the door, the balance of power may shift.

But it's worth remembering that Nokia's previous attempts at the MID business, and its deals with Intel, have hardly been stellar successes.

Len Jelinek, director and chief analyst at iSuppli, has stuck his neck out and called the end of Moore's Law as an economic driver in 2014. He hasn't said development has to stop technologically, more that the advantages of going beyond the 20nm or 18nm node will no longer outweigh the costs.

He is at odds with the official Intel position. Leading Intel technologist Kelin Kuhn was in London to talk about some of the potential roadblocks on the way to 15nm and beyond. And she was bullish about the future, even though it relies on some very big changes to the way chips are made within the next five years.

And you don't have to wait long for someone to tell you that, in the 1980s, a bunch of researchers reckoned 1µm was the limit. The story is relayed in much the same way history teachers tell of Victorian fears that riding in cars at more than 30mph would tear people's heads off. So, has Jelinek got it wrong?

OK. So the first thing to get out of the way is Moore's Law itself: it's not really a law, is it? Not like Newton's laws of motion or the laws of thermodynamics, where bad things happen if they don't work. But it has proved to be a very good observation of an industry, and one that has held for about 40 years. It's actually an observation on what price elasticity and technological development can do for an industry.

I think it should actually be called the Moore-Noyce Law because it was Bob Noyce, Moore's colleague at Fairchild and then Intel, who came up with the pricing model that meant Moore's Law became the key to predicting a market sector driven by deflation.

You might be thinking: who cares? So what if it stops? The world certainly won't stop but long-held assumptions about the development of computing and the development of a technology-driven society do rest on the idea that there is this constant upgrade cycle in action.

Second, Moore's Law isn't really very detailed. When Moore plotted some graphs and extrapolated in the mid-1960s, there was no such thing as a 'process node'. There was no International Technology Roadmap for Semiconductors (ITRS) to tell you what the half-pitch measurement for the first metal layer was expected to be from year to year. Moore explained later that the actual reduction in size of the transistors and circuits was responsible for just one-third of the increase in chip function per dollar every two years. (It started off as a doubling every year, but this had levelled out to a doubling every two years by the time Moore gave his more detailed analysis of the original graph in the mid-1970s.) The other two-thirds came from an increase in chip size and improvements in design techniques. Although the electronics industry was one of the first to employ computers for design, most layout was still done by hand on sheets of plastic even ten years into the Moore's Law period.
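
To put rough numbers on that split, here's a back-of-envelope sketch in Python. Treating the three contributions as compounding equally is my simplifying assumption for illustration, not Moore's exact accounting:

```python
# Back-of-envelope arithmetic on Moore's 1975 decomposition. Assumption
# (mine, for illustration): shrink, bigger die and cleverer design each
# contribute equally, so each supplies the cube root of the doubling
# that happens every two years.
total_per_period = 2.0                          # 2x more function per dollar every two years
per_contribution = total_per_period ** (1 / 3)  # ~1.26x from shrink alone per period

years = 10
periods = years / 2

print(f"Overall gain over {years} years: {total_per_period ** periods:.0f}x")  # 32x
print(f"From shrink alone: {per_contribution ** periods:.1f}x")                # 3.2x
```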

What has happened since is that the one-third down to shrinkage in two dimensions now accounts for the bulk of the biennial improvement in density. So, it's easy to equate Moore's Law with process technology. But there is no reason for things to stay that way. The original graph only plots two things: 'number of components per function' versus time. There is no declaration of how any of that is actually to be achieved.
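
The geometry behind that two-dimensional shrinkage is simple enough to check in a couple of lines; the conventional 0.7x linear scaling per node is the assumption here:

```python
# Why a node-to-node shrink roughly doubles density: scale each linear
# dimension by ~0.7 and the area a feature occupies roughly halves.
linear_shrink = 0.7
relative_area = linear_shrink ** 2                           # ~0.49 of the original area
print(f"Density gain per node: {1 / relative_area:.2f}x")    # ~2.04x
```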

Personally, I reckon (and I'm not alone by a long shot) that unless there's some kind of miracle in extreme ultraviolet lithography, or someone comes up with some crazy way of continuing to use 193nm lithography, there's going to be a bit of a hiatus between 22nm and 18nm or 15nm.

The question is: will anybody outside the industry actually notice? Because there are a few tricks the chipmakers can pull to honour the promise of Moore's Law while utterly breaking the connection between process node and the trend that has driven chipmaking since the mid-1960s. The lead feature in the upcoming E&T deals with part of that. There is also one from a few weeks back that goes into one of the ways out. But I'll add the links in, as well as some of the iSuppli charts (they do explain a lot), and follow up later. I wanted to get this post out quickly.

The Stretta Procedure on Tom Oberheim's revival of the SEM analogue synthesiser:

"The recent introduction of Tom Oberheim's SEM re-issue sparked a spirited debate on the sonic differences between surface mount and through-hole components..."

You have got to be kidding me. No, wait. This is a world where Monster Cable is doing good business. I can believe that some 'experts' reckon there is an audible difference based on the way in which you solder components to a printed circuit board.

It's only a matter of time before someone determines that the key to the original SEM's sound is the lead content in the solder – I'm assuming Oberheim is using lead-free solders for the new SEM so that the company gets to sell them in Europe and Japan.

The only real difference – other than the updated specs of the new components on the PCB – is going to be if a surface-mount chip 'tombstones' off the board, giving you an accidentally circuit-bent synth. That's why washing-machine makers didn't use surface-mount components for a long time: the manufacturers worried about the chips being shaken off the board. Given that it's mostly discretes used in the new SEM, as I understand it, I can't see that happening very often, unless you fancy going all Keith Emerson on it.

I need to go find the discussion so I can have a good laugh.

Bi curious

8 June 2009

This has to be the candidate for the shortest web URL evah. I clicked on a heavily truncated link on Twitter and was delivered to a working web server showing the internet's own version of the programmer's Hello World message: "It works!"

Nothing very special about that other than the fact that the address was just two letters: bi. Curious as to how you could get to a working website without a single dot, I had a look around to see who owned the 'bi' top-level domain. It turns out to be Burundi, which seems to have a single web server that directs accesses to addresses like 'gov.bi' to the same curt but cheery page.
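
If you want to poke at this yourself, here's a minimal sketch in Python. Whether the bare 'bi' name still resolves is entirely up to how Burundi's registry runs its DNS, and the Host-header trick is just one way of coaxing an HTTP library into fetching a dotless name:

```python
# Check whether the bare, dotless 'bi' TLD resolves and serves a page.
# Treat this as a curiosity, not a guarantee: it depends entirely on
# Burundi's DNS configuration at the time you run it.
import socket
import urllib.request

addr = socket.gethostbyname("bi.")  # trailing dot marks the name as fully qualified
print(f"'bi.' resolves to {addr}")

# Some HTTP libraries reject dotless hostnames, so fetch by IP address
# and supply the name in the Host header instead.
req = urllib.request.Request(f"http://{addr}/", headers={"Host": "bi"})
print(urllib.request.urlopen(req, timeout=10).read(200))
```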


Burundi's not very well off, so maybe they should give Bitly and TinyURL a run for their money (if there is any) in the URL shortening business.

Silicon and software marriages, particularly in embedded systems, have never been very happy affairs. Freescale Semiconductor bought several software tools companies in its time and has vacillated between having its own software operation and letting third parties do the job. The problem is that silicon companies are almost uniformly terrible at extracting money from customers for software. They tend to treat software as a loss leader in the hope they can lock systems houses into long-term buys for silicon.

There are two ways to look at Intel's decision to buy Wind River. The optimistic scenario is that Intel management looked at Wind River and realised the software company was moving into areas that are key to the chipmaker's development. Wind River has an active role in the Multicore Association, which is developing software standards for multicore processors. Intel needs to understand as much as it can about multicore development. However, the chipmaker already has its own team working on this stuff and, from what it has published so far, seems to be ahead of Wind River's own work. The difference is that Wind River has more commercial experience of multiprocessing through its work with telecoms customers. The multicore API committee that Wind River's engineers sit on has its roots in telecom.
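
For flavour, the message-passing model that those multicore standards set out to codify looks something like this. A toy sketch using Python's multiprocessing in place of the real C-level APIs, with a made-up payload:

```python
# A toy illustration of the message-passing style that multicore API
# standards aim to pin down: cores (here, OS processes) exchange messages
# over explicit channels rather than sharing memory behind each other's
# backs. Python's multiprocessing stands in for the real C bindings.
from multiprocessing import Process, Queue

def worker(inbox: Queue, outbox: Queue) -> None:
    # Receive one message, do some 'work', send a reply back.
    packet = inbox.get()
    outbox.put(f"processed({packet})")

if __name__ == "__main__":
    to_worker: Queue = Queue()
    from_worker: Queue = Queue()
    p = Process(target=worker, args=(to_worker, from_worker))
    p.start()
    to_worker.put("telecom-frame-42")   # hypothetical payload
    print(from_worker.get())            # -> processed(telecom-frame-42)
    p.join()
```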