You might call Larry Boucher a parallel entrepreneur and high-bandwidth guy. He developed the SCSI interconnect, the veteran parallel bus now making its transition to fast serial incarnations. Boucher went on to found adapter maker Adaptec in 1981, storage system maker Auspex in 1987 and his current company, Alacritech, in 1997. Alacritech today is designing hardware acceleration for the ubiquitous TCP/IP protocol. And Boucher is already brewing ideas for a startup that would pursue a fresh approach to server blades, a landmark system concept that he says today's biggest computer companies have gotten all wrong.
EE Times: What's the future of computing?
Larry Boucher: Storage is exploding, and the hot area of computing is storage. If you look at a Web farm today, it's a very early precursor to tomorrow's video-on-demand systems, which are where we are going.
VoD is too narrow a term, really, but it's an easier concept to understand than something like "rich-data servers." Hollywood online is another way to think of it. It's nontrivial.
EET: What does that mean for tomorrow's computer?
Boucher: I think everyone is pursuing it from their own vantage point as opposed to backing off and seeing where the center of this system is. Seven years ago the center of this system was the processor, but the world has changed. The center of the system is now the switched backplane, and the processor is one more peripheral.
We've had a change in demand. Up until seven years ago we wanted the next killer app, and everything was focused on how to make this processor more efficient. When the killer app was the most important thing, the processor was the most important thing.
The Internet has changed what we want. Seven years ago, the demand was processing cycles; today it's data delivery. That's a huge change in what we want out of this environment.
Seven years ago 90 percent of all data that was stored was being stored to be moved from processor to processor. Today 90 percent of all data is being stored to be viewed: Its goal is not to be processed but to be seen. All the Web data is there to be viewed. So we've moved from where the center of gravity for computing was data processing to where it is data delivery. As a result, the center of the computer farm is the backplane of the switch, even though that has not been widely recognized.
EET: What's wrong with the current crop of dense server blades we're seeing?
Boucher: How to get more computers into a smaller space is not the problem for the Internet data center. The underlying problem is: How do I get the data to the ultimate end user off the head of this disk? And, by the way, I need to move the amount of data up by two orders of magnitude. In this light, the present blade servers are a joke. They are exactly 180° from what you want. They give you the least data-transfer capability.
The backplane of a switch is what people are building to move data for superlow latency and superhigh bandwidth.
EET: Describe the architecture you would propose.
Boucher: Imagine you take a four- or eight-way Pentium computer and build it so it plugs right into a Cisco Catalyst switch. You have a serious processor that is as close to the client computer as you can get. If you deploy three or four of those in a switch, you have a supercomputer. You can let those things talk via Layers 1 and 2 of whatever the backplane is, because everyone has their own backplane, and Layers 3 and 4 are Infiniband, if the Infiniband crew doesn't kill itself. They are trying.
Why not plug storage blades into that same switch, with a blade that goes to Fibre Channel or iSCSI [Internet Small Computer Systems Interface] and that has RAID on it? Now you have a basic switch that can be configured however you want [as] a supercomputer, a truly nasty Web server or a file server, and you can put however much of whatever you need into it. It's one infrastructure that supports all your resources.
EET: Has anyone ever tried it?
Boucher: This is basically the architecture we built at Auspex 15 years ago. I took the concept to both Cisco and Nortel about 10 years ago.
Cisco at the time was sufficiently wrapped up with VoIP [voice-over-Internet Protocol] that they didn't want to get into it. At Nortel we actually went far enough to build a prototype using an Auspex box and a Nortel switch to see if it could be made to work. It was pretty interesting, but it never went anywhere.
EET: Why not?
Boucher: The switch guys don't have a clue about how to build storage, file systems or processing blades. Even the switch guys think those are peripherals [to the computer]. But if you were to do a startup and go to a Foundry [Networks Inc.] or Extreme [Networks] and get their backplane technology and build blades that would plug into it, that would be great. I've given this suggestion to a number of people. I had been pushing them off on Cisco and realized that was a dumb idea, and so recently I told them maybe Foundry and Extreme [were] a better place to go. . . . I may go to the venture community and get them really working on it at some point in time.
If you made that product available it would be at a price/performance point that hasn't been available yet. It would be really exciting. The switch companies are in the perfect position at the center of the infrastructure today.
EET: So the communication companies are the new computer companies, if they would just realize it and open their architectures?
Boucher: That's exactly right. They are in the right place in the infrastructure and in the market. The computer companies are out of position, and they are not trying to get into position. If they were, they would not be building these so-called blade servers, which are really just very large-node computer servers.
The players that are entrenched in yesterday are staying in yesterday. The Fibre Channel switch guys will be storage switch guys forever and will die. The computer guys are large enough that you've got to believe that maybe they will figure out how to move forward, but they are not moving very fast, and their idea of a blade server is . . . well, I don't think there will be a huge number of those things sold. They are missing the idea. Still, I do not believe the computer industry can force the direction to be other than switch-based computing.
EET: You said the Infiniband crew is killing itself. How so?
Boucher: The problem with the Infiniband crew is that it is defining an entire software stack, and that's not going to work, because Layers 1 and 2 are going to be Ethernet. If you do anything other than Ethernet, you will fail.
All Infiniband really is, is a decent way to cluster. If no one realizes that and the Infiniband crew continues to insist their Layers 1 and 2 are so much better than Ethernet, then warp-over-TCP [a remote direct-memory-access capability that brings Infiniband techniques to Ethernet] will kill them.
That's exactly what happened to Adaptec and storage-over-IP. I tried to get Adaptec years ago to do storage-over-IP with SCP/STP, a SCSI encapsulation protocol that defines a way to wrap SCSI and run it over any transport. So when you are in your local area you have something that is the equivalent of Fibre Channel, but you could also put it on TCP for the wide area.
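The interview doesn't spell out the protocol's wire format, but the idea Boucher describes — wrapping a SCSI command descriptor block (CDB) in a small transport-neutral frame so the same bytes can ride a local link or a TCP connection — can be sketched in a few lines. The header layout, field names and magic number below are invented for illustration, not the actual Adaptec design:

```python
import struct

# Hypothetical framing header: 2-byte marker, 2-byte flags, 4-byte payload
# length, all network byte order. The real protocol's layout is not given
# in the interview; this is purely illustrative.
HDR = struct.Struct("!HHI")
MAGIC = 0x5C51  # arbitrary marker chosen for this sketch

def encapsulate(cdb: bytes, flags: int = 0) -> bytes:
    """Wrap a SCSI CDB in a transport-neutral frame."""
    return HDR.pack(MAGIC, flags, len(cdb)) + cdb

def decapsulate(frame: bytes) -> bytes:
    """Recover the CDB from a frame, checking the marker and length."""
    magic, flags, length = HDR.unpack_from(frame)
    if magic != MAGIC:
        raise ValueError("not an encapsulated SCSI frame")
    cdb = frame[HDR.size:HDR.size + length]
    if len(cdb) != length:
        raise ValueError("truncated frame")
    return cdb

# A 6-byte READ(6) CDB: opcode 0x08, LBA 0, transfer 1 block, control 0.
read6 = bytes([0x08, 0x00, 0x00, 0x00, 0x01, 0x00])
frame = encapsulate(read6)
assert decapsulate(frame) == read6
```

The transport neutrality is the point: the same `frame` could be handed to a raw-Ethernet driver on the local network or written to a TCP socket for the wide area, which is the split Boucher describes.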
Adaptec had this all done and running in the labs, but . . . [in the end] they killed it. Then IBM and Cisco came out with iSCSI, and that was it. The way people have designed iSCSI, it only works one way. That's OK, but it's not as elegant an architecture as it could have been.
Just as Adaptec did that with SCP/STP, the Infiniband crew, if they continue to keep their heads firmly inserted in the sand, will kill themselves, and warp and Ethernet will beat them.
The point is that the backplane of a switch will run TCP/IP for both networking and storage, and it will either run clustering too or have Infiniband Layers 3 and 4 running clustering on the backplane of the switch.
EET: So what is the new competitive battleground?
Boucher: Externally, Layers 1 and 2 are going to be Ethernet; there is never going to be anything else. Internally in the system, Layers 1 and 2 of the switch will always be the secret sauce. I don't see that ever becoming a standard. Extreme Networks wouldn't exist if they had to be on a standard. If you really want to build a nasty switch, you figure out the backplane yourself. That's how it is today. Every single switch out there has its own backplane and its own controls. But what we will see now is large companies or startups plugging processor or storage blades into those switches.
EET: Let's talk about what you are doing at Alacritech. What was the genesis of this idea of hardware-accelerated TCP/IP?
Boucher: When I was at Auspex we had three different blades: networking, storage and processing. Each one cost us about $2,000 to make. The storage blade had ten 40-Mbyte/second SCSI interfaces on it, for a total bandwidth of 3.2 Gbits/s. Next to it was a networking blade with two 100Base-T links, for 200 Mbits/s. There was a little discrepancy there.
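The discrepancy he describes follows directly from the numbers in the interview:

```python
# Storage blade: ten SCSI interfaces at 40 Mbytes/s each.
storage_mbytes_s = 10 * 40               # 400 Mbytes/s aggregate
storage_mbits_s = storage_mbytes_s * 8   # 3200 Mbits/s = 3.2 Gbits/s

# Networking blade: two 100Base-T links at 100 Mbits/s each.
network_mbits_s = 2 * 100                # 200 Mbits/s aggregate

# At equal cost (~$2,000 per blade), the storage blade moves
# 16x the data of the networking blade.
ratio = storage_mbits_s / network_mbits_s
print(ratio)  # -> 16.0
```

That 16x gap at identical blade cost is the price/performance imbalance that motivated putting the network protocol in hardware.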
I sat down with the networking team and said, "Guys, we've got to get our price/performance up." The only difference between the two blades was that one had the protocol in hardware and the other had it in software.
So we tried to put the protocol in hardware, and they proved it couldn't be done. I worked very hard trying to convince them it could be done. Eventually, I gave up.
Once I left Auspex, I figured that, since the world knew it couldn't be done, and it had been proved that it couldn't be done, but it clearly needed to be done, we ought to go do it. The world no longer thinks it can't be done, but the networking world still doesn't think it needs to be done. That's an irony, and it's one of our challenges.
People thought Adaptec did SCSI, but what Adaptec really did was put disk-protocol processing in hardware. What that did was reduce cost and increase performance radically.