Likely not a lot. How much of it would require physically meeting the people he talks to, in an age of email and video conferencing? It's quite possible he's never actually met some of the folks he keeps in touch with. (And I'd guess that some of those he has met, he met through industry trade shows, where they more or less came to him.)
@DMc: Good question. My hunch is most of his interactions (and most of the interactions of his peers all across the industry) involve a significant amount of face time at non-public events the big companies sponsor for their partners around particular initiatives and design reviews and customer updates.
I think there's a whole ecosystem of mini focused events and meetings most of us never hear about, where a lot of relationship building and work gets done in the quiet NDA world.
You raise an interesting point: I personally think that relationships are built (or at least solidified) face-to-face. I wonder how many readers think that face-to-face meetings are still important, especially when it comes to customers, or if everything can be done digitally now.
Oh, face time certainly helps. Any sales person is likely to tell you that actually meeting with the customer is critical to closing the deal.
But Intel's guy isn't a salesman. He's a chip designer. He's talking to people who use his chips about what they do and how they do it, and how his chips could be made better for what they do. How much of that actually requires an in-person meeting?
On a different line, people collaborate all the time without actually meeting. Software developers team up on projects, and many, especially in open source development, will never meet, because they are scattered all over the world. How many people contributing to the Linux kernel have ever actually met Linus Torvalds? They might like to, but it's not necessary for what they do.
If I'm in the Intel engineer's position, the data I need will be provided electronically. I'll want specs on the configuration of the servers using my chips, analysis of the workload and task mix on those systems, profiling of code, all the little details about what the machines actually do on a daily basis, and where the choke points are that get in the way. I might want remote access to the servers, so I can see what the server admins see.
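To make that concrete, here's a toy Python sketch of the kind of profiling data I'd want (purely illustrative; `handle_requests` is a made-up stand-in for whatever the servers actually run):

```python
import cProfile
import pstats

# Hypothetical stand-in for the real server workload.
def handle_requests():
    total = 0
    for i in range(1_000_000):
        total += i * i
    return total

profiler = cProfile.Profile()
profiler.enable()
handle_requests()
profiler.disable()

# Sort by cumulative time to surface the choke points.
stats = pstats.Stats(profiler)
stats.sort_stats("cumulative").print_stats(10)
```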
None of that requires that I personally meet with the people providing the data. I've no doubt he's met some of the folks he's dealing with, but what he needs doesn't require it.
I keep hearing the PC era is over, but they still take up most of the floorspace in the tech area of most retail electronics stores. I think the reason that PCs are no longer driving innovation in processors is because they're already much faster than the wireless and SMB networks to which the vast majority of PCs tend to connect. Thoughts?
The PC era is hardly over, but the market has shifted.
The PC market is largely saturated. Most folks who can use a PC likely have one. There is a substantial market, but it's replacements and upgrades. New sales will be relatively few, and the growth beloved of the financial markets won't exist.
Growth is in data centers and mobile, and Intel's challenge is addressing those markets.
They have a leg up in datacenters, because those already use Intel processors. Mobile is much more of a challenge, because ARM has the same sort of leg up there, and Intel is still struggling to match ARM's power efficiency in an environment where battery life is the scarce resource.
But Intel still faces a challenge in retaining datacenter market share. As datacenters proliferate and server density grows, power efficiency becomes an issue there, too, as power costs skyrocket. ARM is poised to challenge them in the datacenter with chips that use less power and generate less heat.
PCs have been faster than the networks they connect to for a while, but network connectivity isn't the only factor. Historically, most of what the PC did was purely local: users ran programs stored on local drives, and created and manipulated data that was also local. HD access speed, CPU power, and graphics performance were far more important than how quick the network was. For most PC usage, they arguably still are.
DM: You're right...there's a lot more upgrading and replacement going on in the PC market. And people are always fascinated with what's new -- and tablets are on the upswing. That said, I think most PC owners may get a tablet (and certainly a smartphone) eventually, but I suspect most will still want a PC/laptop for those times when a tablet or phone just isn't enough. I'm a writer, so a laptop is pretty much indispensable for me. What do others think? Is a tablet alone enough for you?
It's really interesting that some of the biggest customers also may be funding ARM competitors. It's also interesting to consider the effect this customization will have on the shape of Intel going forward, assuming those large customers stay with it -- I'm guessing smaller margins, perhaps tied to more competitive (read: lower) pricing.
Is that what others see in the crystal ball for Intel?
If I have a datacenter, I have steadily increasing server density, with higher power requirements and far greater amounts of heat to dissipate. (At a former employer, the server room was expanded with a lot more servers being put into production. I realized the difference when I no longer needed to wear a sweater in the server room, and my boss was calling in electricians and A/C vendors to upgrade the power coming in and cool the larger number of machines. Compared to the datacenters someone like Google or Facebook might operate, it was a tiny operation, and the big boys would be the same thing squared and cubed.)
Big datacenter customers are seeing far higher demands for power and A/C as server density continually rises. Current generations of ARM processors have the potential of being able to do the same sorts of tasks as Intel processors do, with much lower power requirements. If I'm a datacenter operator, I'll find that prospect attractive, and I'll be thinking about when and under what circumstances making a shift would be a good idea.
@Dylan: some of the software purists will take exception to your description "...accelerates some big-data algorithm such as Hadoop," because Hadoop is an open source platform for distributed and scalable computing.
The trend toward fast, distributed and scalable computing has accelerated recently with the growth of cloud computing & storage. To that end, there has been a growth / consolidation of data centers in the US & EU toward mega datacenters. I doubt this will be the norm in developing economies, where the combination of lower power servers and better computing platforms like Hadoop will trigger the trend toward mini / micro datacenters. Power consumption will be another big driver toward this latter trend.
MP- thanks for weighing in and adding your perspective. I wish to point out, for the record, that the above story is authored by Rick Merritt, so it's not my description. But even so, it's not clear to me why software purists would take exception to that description. Just because Hadoop is open source?
@Dylan, I stand corrected! You and Rick are the usual suspects! The point I was trying to make is that Hadoop is not an algorithm but a software framework with many basic algorithms. It is up to the developer using this framework to implement application-specific algorithms. It is an implementation of MapReduce, which is ideally suited to mapping (sorting and the like) and reducing (computing basic parameters of the data, like frequency) over large sets of data in a distributed computing environment. Central to this ecosystem is the high speed communication and computing & storage infrastructure.
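For anyone following along, here's a minimal single-machine sketch of that MapReduce shape in Python (a real Hadoop job distributes these phases across a cluster; this just shows the two phases):

```python
from collections import defaultdict

def map_phase(lines):
    # Map: emit (word, 1) pairs.
    for line in lines:
        for word in line.split():
            yield (word.lower(), 1)

def reduce_phase(pairs):
    # Reduce: sum the counts for each key (word frequency).
    counts = defaultdict(int)
    for word, n in pairs:
        counts[word] += n
    return counts

lines = ["the cat sat", "the cat ran"]
print(dict(reduce_phase(map_phase(lines))))
# {'the': 2, 'cat': 2, 'sat': 1, 'ran': 1}
```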
Hadoop's open source tag hasn't hindered its adoption and it is here to stay, gainfully employing many! You should come by one of these days to Yahoo HQ on Mathilda for the Hadoop meetup every third Wednesday @6:00PM. There is plenty of Hadoop talk not to mention free food & beer!
It may be inaccurate to talk about proprietary algorithms. They certainly exist, but an algorithm is simply a precise rule (or set of rules) specifying how to solve some problem. That algorithm will translate to computer code, in the form of instructions the CPU executes. The focus will be on the instructions, rather than the algorithm.
When speed is an issue, developers apply profiling to see where the code spends its time, and attempt to optimize those parts. Sometimes that results in optimized code. Sometimes it results in an entirely different algorithm that yields the same results in a different, more efficient way.
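A toy example of what I mean, in Python: two functions that return the same answer, where profiling would push you from the first to the second.

```python
import timeit

data = list(range(1000)) * 2  # every value appears twice

def naive(xs):
    # O(n^2): for each element, scan everything before it.
    return sum(1 for i, x in enumerate(xs) if x in xs[:i])

def faster(xs):
    # O(n): remember what we've already seen.
    seen, dupes = set(), 0
    for x in xs:
        if x in seen:
            dupes += 1
        seen.add(x)
    return dupes

assert naive(data) == faster(data)  # same results
print(timeit.timeit(lambda: naive(data), number=1))
print(timeit.timeit(lambda: faster(data), number=1))
```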
In this case, I can see data center operators doing the analysis, and going to Intel saying "Our servers spend a lot of time executing the following members of the X86 instruction set. What can you do to speed that up?"
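On Linux, you can get exactly that kind of instruction-level picture with perf. A hedged sketch, driving it from Python (`./server_benchmark` is a made-up stand-in for the real workload):

```python
import subprocess

# Ask `perf stat` for hardware counters on a workload. Assumes perf is
# installed and the kernel allows access to performance counters.
cmd = ["perf", "stat", "-e", "instructions,cycles,cache-misses",
       "--", "./server_benchmark"]
result = subprocess.run(cmd, capture_output=True, text=True)
print(result.stderr)  # perf stat writes its counter summary to stderr
```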
I'm wondering when we are going to see RISC come back into vogue as a result.
@Susan Rambo: the current trend in many developing economies (like India & Brazil) is indeed toward mega datacenters, driven largely by international companies and some domestic ones. There are many other small to medium datacenters, some of which are colos, whereas many others are independently owned and local to the Indian market. Tulip Telecom, for example, has a 900K-sqft facility in Bangalore which I would categorize as a mega datacenter (though I don't know the number of pods in that datacenter).
What I was saying below regarding mini & micro datacenters is an evolving trend to serve those who largely use mobile computing for the majority of their needs. The storage and computing demand for that segment of the market is only now taking off in India, but I would argue more than 90% of the infrastructure is not there! Unlike in the US, a majority of Indian datacenters are in urban areas, where power shortages and infrastructure challenges make it very hard to build mega datacenters. Though the country has seen good investments in fiberoptic infrastructure, the amount of active equipment to utilize it leaves a lot to be desired. C-DOT in India has been pushing for micro datacenters linked by high-speed connectivity, which has motivated a couple of startups toward that model.
MP: Thanks for the reminder about India and other still-developing economies. True, the infrastructure in most of India is just now being built. Many friends in India tell me it's still rare to have reliable Wi-Fi for laptops, and so wireless is leapfrogging there. It's good for those of us in the West to remember that technology in different places evolves in different ways. In the US, I believe the laptop is a permanent part of the home environment for the foreseeable future, even as tablet and mobile sales soar.
Do other readers in the west disagree? Are you ready to stop buying laptops/desktops entirely?
Just to be pithy--and because I am curious--I asked Ronak over email if he uses social networks (the digital kind). Still waiting for a response, probably not because he is ON Facebook but maybe because he is INSIDE Facebook. He said he had a couple of customer meetings in the Bay Area.
I was expecting to read about an engineer who created a social network at his work in order to "evangelize" his particular ideology, a la Steve Jobs. But it's really about a guy who's specifying chips for social networks. I guess that's why we read the article!