IBM's name is synonymous with the hardware business. It's hard to believe that IBM could be a software-only development company. There are already numerous software companies out there; IBM stands out from the crowd because of its hardware experience.
If EDRAM is so good, and IBM is talking about selling it to someone, doesn't that give IBM leverage to ask for specific conditions, for example the ability to purchase wafers using EDRAM at a reasonable price?
And if that's the case, it's most likely being kept secret from us and from company employees, as is common in negotiations.
And if the tech is so great, why wouldn't GlobalFoundries use it?
Embedded DRAM (I prefer eDRAM as an abbreviation, since EDRAM was previously used for Enhanced DRAM) has significant density advantages but involves significant additional manufacturing cost. This is somewhat like incorporating flash memory into a computation-oriented chip: it adds cost and may delay availability or the rollout of a new process, but incorporating flash can more easily reduce system cost for small systems by removing the need for external persistent storage.
If an eDRAM-capable process is available sufficiently later at sufficiently higher cost, it can be attractive to use a smaller ordinary process with SRAM instead, since the smaller process will shrink the logic, improve its performance and energy efficiency, and make SRAM density more comparable to eDRAM density in the older process.
eDRAM is most useful when bandwidth or latency demands integration but the desired capacity would be too great for practical integration if SRAM were used. (eDRAM also has some error-resilience advantage over SRAM.) This makes it great for huge L3 caches integrated into the processor chip (or even just into the same multichip module as the processor chip). It could also be useful as a high-bandwidth memory for a GPU.
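To make the density argument concrete, here is a rough back-of-the-envelope comparison of raw bit-cell array area for a large L3 cache. The cell-area figures are stand-in assumptions for illustration (a 6T SRAM cell is typically a few times larger than a trench eDRAM cell at the same node), not vendor data:

```python
# Rough, illustrative comparison of bit-cell array area for a large L3.
# The cell areas below are assumed stand-in figures, not vendor data.

BITS_PER_MB = 8 * 1024 * 1024

def array_area_mm2(capacity_mb, cell_area_um2):
    """Raw bit-cell area only; ignores sense amps, decoders, ECC, tags."""
    bits = capacity_mb * BITS_PER_MB
    return bits * cell_area_um2 / 1e6  # um^2 -> mm^2

sram_cell = 0.10   # um^2/bit, assumed 6T SRAM cell
edram_cell = 0.03  # um^2/bit, assumed trench eDRAM cell

cap = 32  # MB, e.g. a large integrated L3
print(f"SRAM : {array_area_mm2(cap, sram_cell):.1f} mm^2")
print(f"eDRAM: {array_area_mm2(cap, edram_cell):.1f} mm^2")
```

Under these assumed numbers the SRAM array needs roughly three times the silicon of the eDRAM array, which is why eDRAM pays off exactly when the desired capacity is too large to integrate as SRAM.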
The lack of great success of MoSys' 1T-SRAM (higher-performance DRAM abstracted behind an SRAM interface) hints that the market for a memory with its capacity advantage is not especially large. (I seem to recall it was used by HP for a PA-RISC processor's off-chip cache.)
For a foundry, eDRAM seems to be something of a niche feature (more so than support for integrated flash memory). Without IBM's server demand for it, the market may not be able to support the extra costs. Process development is already expensive and risky, so GlobalFoundries' rejecting the extra cost and risk for such a niche feature is understandable.
(I also seem to recall that IBM's eDRAM technology was linked with SOI. SOI also adds cost and seems to be less popular among foundry users.)
A technology can be theoretically very useful ("great") without being very profitable.
By the way, IBM's POWER7 exploited the presence of trench capacitors to provide better power regulation. This additional advantage might expand the niche slightly but only slightly.
I did not mean to imply that SOI was required for eDRAM but that SOI was exploited by IBM's more recent eDRAM uses (i.e., linked). To confirm this vague recollection I found John E. Barth's 2008 presentation "eDRAM to the Rescue" in which it is stated "Use the Buried oxide to simplify the process & reduce parasitics – half the cost of bulk eDRAM".
In hindsight I should have been clearer (or, even better, searched for the above statement rather than relying on recollection).
Yes, you're right. In bulk silicon, eDRAM needs a collar oxide (a rather thick oxide on the walls of the top of the trench) to prevent a parasitic NMOS transistor from turning on. With an SOI substrate, you don't need it anymore.
Qualcomm, Oracle, Apple, Digital Equipment and others have all shown that competitive processors can be built in foundries. The real issue is the cost of IBM retaining its own fabs/process vs. the cost of foundry services. As zSeries and pSeries sales have fallen, and the game business was lost, the cost of IBM maintaining its own fabs has become untenable, and transition to future technology nodes unaffordable.
This article is written as if the design space is very limited. Since IBM is in the system business, there are many design options for balancing overall system cost, reliability, and performance. For example, rather than foundry-supplied eDRAM, it might be more cost effective to use RAM on a silicon interposer or RAM attached to the processor chip with TSVs.
Very interesting original post and comments section.
* In my opinion, IBM's value is in its ability to control and optimize the hardware AND software elements. The ROI on the POWER architecture should be judged on the systems and software sales this underlying architecture delivers. I think the decision IBM took not to take on x86 in the general server market was the right path. Therefore (and you could argue I am a hardware guy and am therefore biased!) a shift to software only is a dangerous path. I view those solutions as less "sticky," which opens IBM up to more competition.
* I don't believe it is mandatory to own a fab to build compelling high-end server products... However, if you don't have your own fab, you certainly need to be a large and influential enough company to get the custom process tweaks implemented that make your particular solution shine.
While it's no substantial change for microelectronics itself, IBM is signaling fairly clearly that it's never going to invest in any new hardware opportunity because of the commodity disaster that is the cloud.
I feel for this IBM employee who sees good technologies with opportunities for continued differentiation from competitors, but the sad truth is that when the buck stops at the C-level, IBM will never spend money it could use to buy back shares on opportunities that might perhaps someday grow its hardware businesses. Not organically, and not through acquisition. Not unless something radically changes at the top levels of management.
You might look at GE reinvesting in being a maker of tangible things and think it could happen to IBM, too. But there's no bellwether moment for IBM like GE Capital pulling the whole conglomerate down like an anchor. IBM will continue to act as if the share price is the company and all it's worth, and continue to view any investment in its hardware business as detrimental. For the sake of the employees, it is hard not to hope they sell microelectronics in its entirety.
There's a general problem with companies: the stock market punishes them for long-term projects. Attacking Intel would be such a project. Why behave in such a manner? Because stock investors basically look for short-term gains. Invest in long-term projects, and you decrease your appeal.
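The short-termism point can be sketched with a toy discounted-cash-flow comparison. All the cash-flow numbers and discount rates here are made up for illustration; the mechanism is simply that a payoff arriving many years out is worth far less under an aggressive discount rate, so long-horizon R&D bets look bad next to quick returns:

```python
# Toy DCF sketch: the same nominal payoff is worth much less when it
# arrives years later and is discounted aggressively. Figures invented.

def npv(cashflows, rate):
    """Net present value of a list of (year, amount) cash flows."""
    return sum(amount / (1 + rate) ** year for year, amount in cashflows)

# Project A: quick buyback-style payoff; Project B: long R&D bet.
short_term = [(1, 110)]                      # $110 next year
long_term = [(1, -20), (2, -20), (8, 400)]   # invest now, pay off in year 8

for rate in (0.05, 0.20):
    a = npv(short_term, rate)
    b = npv(long_term, rate)
    print(f"rate {rate:.0%}: short-term NPV={a:.0f}, long-term NPV={b:.0f}")
```

At a patient 5% rate the long-term project dominates; at an impatient 20% rate its value collapses while the short-term payoff barely moves, which is roughly the pressure a short-horizon stock market applies.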
That's why you get companies today hoarding cash, in a world so filled with technological opportunities.
One cannot be in the electronics industry for long without being aware of the contributions of IBM, and admiring the transitions they have made over the years.
For this topic: IBM invested in custom development (fabs, design, EDA) at a time when these markets literally did not exist externally. In the meantime, the world has changed significantly, and it is not clear whether these custom techniques in fact provide significant competitive differentiation for their server-class products. Plenty of companies (HP, EMC, etc.) seem to find a way to build compelling products without these investments.
Like any large organization, I expect it is difficult to change gears, but I suspect there is little choice. Inspired leadership would figure out how to utilize the talented individuals within these organizations in a more productive manner. At least, that is the hope.
MBA saleswomen Carly Fiorina and Meg Whitman ruined the once great hardware company HP. Now Rometty with experience mostly in sales is shutting down Hardware mfg at IBM. The US has had to pay a heavy price for women's equality ! Engineers and hardware guys have the handicap that they need capital to do most anything significant so these airhead MBAs in pantsuits sent by Wall St. can boss over them.