Dear CompExpert: Can you point to any easy-to-use parallel programming model suitable for tomorrow's 64+ core x86 processors now on the drawing board?
I am well aware of the massive amount of work done several decades ago by the likes of Thinking Machines and many others to create massively parallel systems and code, but most researchers I have talked to agree that, despite those incredible efforts, no one ever solved the problem, and it was put back on the shelf for today's researchers to take up again.
Thank you, Mapou, for taking the time to counter the smoke and mirrors of the businesses and institutions whose goal is not to make a leap forward, but to make money. Profit is fine, but not when the attitude of those businesses is to drain billions out of the economy while making no progress.
I was astonished at Rick's comment in passing that "researchers have failed to create a useful parallel programming model in the past". Parallel programming research and implementation have been going on for half a century. Tens of thousands of PhDs have been granted in the field in the U.S. alone. And yet "researchers have failed to create a useful parallel programming model in the past." We have a problem, Redmond! Words cannot convey the depth and strength of my feelings on reading this.
Surely, Microsoft, which has monopolized the software industry and illegally controlled PC hardware development, should be broken up into separate entities, but even that doesn't seem to be enough.
Considering the resources and money that have been wasted, only to arrive at "researchers have failed to create a useful parallel programming model in the past" (I can't get over that; I just can't), our university systems need to be taken apart and those responsible for the failures removed from positions of control.
It also turns out that applications that process hierarchical data structures can be automatically and dynamically parallelized at runtime, with full parallel processing capability. The hierarchical data structure itself maps out the parallel processing pathways, and those pathways can be processed in parallel automatically. Each pathway can also hold multiple data occurrences, which can themselves be processed in parallel. This technique requires no user assistance and no separate design step; it can parallelize most of the full program pathway and utilize a very high number of cores. This automatic hierarchical parallel processing was described in a recent article: www.devx.com/SpecialaReports/Article/40939/1954
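The idea above can be illustrated with a minimal sketch (not the article's actual implementation): sibling subtrees of a hierarchy are independent pathways, so a runtime can fan them out to a worker pool without any user-written parallel design. The tree layout, the `process_value` work function, and the use of Python's `ThreadPoolExecutor` are all illustrative assumptions.

```python
# Hedged sketch: letting the shape of a hierarchy drive parallelism.
# Sibling subtrees are independent "pathways", so each can be handed to
# the pool automatically; no explicit parallel design step is written
# by the user beyond the plain recursive traversal.
from concurrent.futures import ThreadPoolExecutor

def process_value(value):
    """Placeholder per-node work (illustrative: just square the value)."""
    return value * value

def process_tree(node, executor):
    """Process one node, then process its child subtrees in parallel."""
    result = process_value(node["value"])
    # Each child subtree is an independent pathway: submit it to the pool.
    futures = [executor.submit(process_tree, child, executor)
               for child in node.get("children", [])]
    return {"value": result, "children": [f.result() for f in futures]}

if __name__ == "__main__":
    tree = {
        "value": 2,
        "children": [
            {"value": 3, "children": []},
            {"value": 4, "children": [{"value": 5, "children": []}]},
        ],
    }
    with ThreadPoolExecutor(max_workers=8) as pool:
        print(process_tree(tree, pool))
```

Note that this toy version blocks a worker while it waits on its children's results, so the pool must be sized generously relative to tree depth; a production runtime of the kind the article describes would presumably use a non-blocking, work-stealing scheduler instead.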
Another well-researched and fascinating article by Rick Merritt. Thank you, Rick. It's too bad the brainy folks at Berkeley, Urbana-Champaign, and Stanford have nothing interesting to offer in the way of a solution to the parallel programming crisis. To boldly come out and declare that "there is no silver bullet" is annoyingly anti-climactic. In my considered opinion, it has all been a waste of time and money. I had high hopes for UC Berkeley, if only because Dr. Edward Lee, the man who showed everybody that multithreading is evil, is a member of Berkeley's Parallel Labs. I was hoping that Prof. Lee would use his considerable influence to convince the industry that multithreading is not a viable parallel software model. Instead, the wise folks at Berkeley decided to kowtow to Intel's business interests. It makes sense, because Intel would be in a world of trouble if the industry abandoned threads and, as we all know, Berkeley's work is partly funded by Intel. David Patterson's strategy seems to be: tell them what they want to hear, not what's good for the industry. In all, I just wanted to say that I am terribly disappointed. What a waste of time, money, and brains! You people at Berkeley, Stanford, and UIUC cannot say that you never saw the writing on the wall. You have no excuses, in my opinion. Google or Bing "How to Solve the Parallel Programming Crisis" to learn about the real solution to the crisis.
What are the engineering and design challenges in creating successful IoT devices? These devices are usually small, resource-constrained electronics designed to sense, collect, send, and/or interpret data. Some of the devices need to be smart enough to act upon data in real time, 24/7. Are the design challenges the same as with embedded systems, but with a few developer and IT skills added in? What do engineers need to know? Rick Merritt talks with two experts about the tools and best options for designing IoT devices in 2016. Specifically, the guests will discuss sensors, security, and lessons from IoT deployments.