I don't really think there will be any new reliability problems at the exascale level; if anything, I'd expect such systems to be more reliable.
Years ago, my father described the vacuum tube computing devices he worked on as "operating on the principle of burning themselves out." I understand that some systems had technicians always on staff with the singular job of continually locating and replacing burned-out tubes.
With 750,000 more or less equal processors, no single one of them is significant. Load-leveling algorithms have been around for a long time, and non-functioning units can simply be removed from the processing flow in software and returned to it once repaired or replaced.
Hot-swapping technology is nothing new either. That being the case, any failed units could be left in place without worry until a scheduled maintenance window.
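The scheme described above, routing work only to healthy units and re-admitting repaired ones, can be sketched in a few lines. This is a toy illustration, not any real exascale scheduler; the `Unit` and `Scheduler` names are hypothetical.

```python
class Unit:
    """A hypothetical compute unit with a health flag."""
    def __init__(self, uid):
        self.uid = uid
        self.healthy = True

class Scheduler:
    """Dispatches tasks round-robin over healthy units only."""
    def __init__(self, units):
        self.units = list(units)

    def mark_failed(self, uid):
        # Remove the unit from the processing flow via software.
        for u in self.units:
            if u.uid == uid:
                u.healthy = False

    def mark_repaired(self, uid):
        # Return the unit to the flow once repaired or replaced.
        for u in self.units:
            if u.uid == uid:
                u.healthy = True

    def dispatch(self, tasks):
        # Failed units are simply skipped; no task ever lands on them.
        healthy = [u for u in self.units if u.healthy]
        return {task: healthy[i % len(healthy)].uid
                for i, task in enumerate(tasks)}
```

A failed unit stays physically in place but receives no work, which is exactly why it can wait for the next maintenance window.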
If Watson understood the category of the question (U.S. Cities), I don't understand how this error could have happened by accident. Certainly he had a very thorough atlas / map database that was used for other questions. It would be the necessary cross check before presenting an answer. Given all the question marks, I'm wondering about a sense of humor / strategy to keep the game interesting.
Yes, saying that "Toronto is a U.S. city" is a bad mistake, but it just means Watson was grasping at straws. An even more telling mistake was in round one, where Watson gave exactly the same wrong answer as the human before it. It should have known the answer was wrong, but it did not listen to what the human said.
The way IBM explained it to me, Watson does not scan its database at question time but uses prefabricated indices. These are machine-generated using neural, genetic, and statistical learning algorithms, so there is no specific list of all U.S. cities. A weird answer that is so obviously wrong--like Toronto as a U.S. city--only means the association was roundabout.
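The point about prefabricated indices can be made concrete with a toy example; this is in no way Watson's actual architecture, and the clue text and scoring here are invented for illustration. Instead of scanning source text at question time, candidate answers are scored against an index built beforehand, so a roundabout term overlap can rank a wrong answer highly with no category list to cross-check against.

```python
from collections import defaultdict

# Toy "knowledge" mapping candidate answers to associated text.
documents = {
    "Chicago": "its largest airport is named for a World War II hero",
    "Toronto": "its second largest airport is named for a World War II battle",
}

# Build the index once, ahead of time: term -> {answer: weight}.
index = defaultdict(dict)
for answer, text in documents.items():
    for term in text.lower().split():
        index[term][answer] = index[term].get(answer, 0) + 1

def candidates(clue):
    """Score answers by how many clue terms their indexed text shares."""
    scores = defaultdict(int)
    for term in clue.lower().split():
        for answer, weight in index.get(term, {}).items():
            scores[answer] += weight
    return sorted(scores.items(), key=lambda kv: -kv[1])
```

Note that nothing here knows which answers are U.S. cities; the ranking comes purely from overlapping terms, which is how an obviously wrong candidate can surface near the top.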
Watson missed the Final Jeopardy question in the category "U.S. Cities" when it gave the answer "Toronto ?????". To Watson's credit, it did put in those question marks, and it bet very little (still soundly beating its human opponents). However, that was the most disappointing moment for me so far (besides the lack of a chatbot function that would let it socially interact with Alex Trebek). It underscored that Watson is not truly intelligent but is just a highly advanced search engine with enormous raw computational abilities. The humans still win no matter what the money scores say.
@KB3001 Yeah, reliability will be a major issue at exascale. Hopefully we will design new tools to solve it.
By the way, the latest news is that "Watson flexed its considerable computational muscle, trouncing two all-time Jeopardy champions in last night's game and made a puzzling stumble with its answer in the final round".
source - http://www.pcmag.com/article2/0,2817,2380429,00.asp