The prospect of replacing the human brain with smart algorithms is a little scary at first, but when you look at it from the angle of reducing human error it seems quite acceptable--even desirable--especially if it prevents wrong diagnoses by doctors.
When a good doctor prescribes a medicine or a particular treatment to a patient, he considers, apart from the pathological and other tests, many other factors - the patient's history, the patient's attitude toward the treatment, whether the patient can afford a particular treatment, and many others that may not have a logical basis.
Will the so-called "Big Data" acting as a brain be able to arrive at the correct prescription based on these factors?
Consideration of such factors may not necessarily fall into the category of "human error"; it may more aptly be called "human judgement," in my opinion.
Robots replace humans in car manufacturing. SMT, a miniature version of the auto-manufacturing robot, replaces humans in the electronics world. Building an algorithm to drive the decisions in these manufacturing processes is relatively easy. The trend, no doubt, will keep moving into other fields.
It isn't a surprise to me that computer algorithms are moving into medicine. Most medical information and knowledge will be in the cloud in the very near future. The more data, the better the understanding of the disease. Humans can't process that much data all at once, and even if they could, memorizing most of the critical information is not an easy task. Logically speaking, computer algorithms will eventually come into the picture to help M.D.s sort the information and diagnose patients. Then, will computer algorithms be able to replace M.D.s completely? Before answering that question, we need to ask another: if a patient is misdiagnosed, who is responsible - the clinic/hospital, the company selling the service, or the big data?
I do believe the new world is coming. It is coming sooner and faster than I thought. Embracing it may be the only option.
Interesting point about service liability, @chanj0... but it will get resolved... it works somehow with driverless subway systems, with robots, and with planes on autopilot. The biggest challenge I see is the human aspect: will you trust a computer diagnosis? Will you be able to get a second opinion from a human being? Kris
Digging deeper into Jeff Hawkins' Grok anomaly identification tool, I was delighted to see that Grok defines "understanding" as matching internal predictions of what will happen in incoming data to what actually happens. Hawkins first advocated this idea as far back as his 2004 book "On Intelligence." Since in a 2010 paper I argued that embedding quantum electrodynamics in classical time leads to essentially the same idea (no, I'm not kidding), Hawkins' book is now definitely on my Christmas list.
It's easy to underestimate the importance of prediction in cognition, especially since traditional simulation is CPU intensive and is rarely if ever linked to real-time sensor data processing. But consider: if you know that some large subset of real-time image inputs is generated by a single rigid object in motion, it is possible in principle to make very good predictions of what will happen in those inputs by looking only at some very small subset of them that allows you to verify how the object is moving. If you play that idea out at multiple levels, prediction begins to look like a pretty impressive way to start focusing data collection and processing resources on what's really important.
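To make that concrete, here's a toy sketch of the idea (my own invention, not anything from Grok): extrapolate a rigid object's tracked points under a constant-velocity motion estimate, then verify the prediction against only a small random sample of the incoming points instead of processing all of them.

```python
# Toy sketch: predict where a rigid object's tracked points will be in
# the next frame from a constant-velocity estimate, then spot-check the
# prediction against a small random subset of the observed inputs.
import random

def predict(points, velocity, dt=1.0):
    """Predict next positions of tracked points under rigid translation."""
    vx, vy = velocity
    return [(x + vx * dt, y + vy * dt) for x, y in points]

def verify(predicted, observed, sample_size=3, tol=0.5):
    """Check only a small random subset of points against observations."""
    idx = random.sample(range(len(predicted)), sample_size)
    return all(
        abs(predicted[i][0] - observed[i][0]) <= tol and
        abs(predicted[i][1] - observed[i][1]) <= tol
        for i in idx
    )

points = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]  # object corners
velocity = (2.0, 1.0)                                      # estimated motion
next_frame = [(2.0, 1.0), (3.0, 1.0), (2.0, 2.0), (3.0, 2.0)]

print(verify(predict(points, velocity), next_frame))  # True: motion model holds
```

As long as the spot-checks keep passing, you can trust the model's predictions and skip most of the raw input, which is exactly the resource-focusing payoff described above.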
So, I have a prediction about prediction: over the next ten years, real-time IT systems that deal with massive quantities of sensor data will increasingly include a powerful ability to predict key aspects of incoming data before the data actually arrives. Think of it as reality caching, if that helps, and like memory caching it can happen at multiple levels. Open source frameworks like Grok will help promote this growth by encouraging faster exchanges of ideas in this new field of "predictology."
Here too, it does not seem inconceivable that a computer algorithm can be made to consider at least as many factors in making a decision as a human can, and to do so more consistently. I would think that a doctor is more likely than a computer to overlook results reported in some obscure medical journal that may have a bearing on a decision he is about to make, or to underweight that particular information because he has forgotten some key paragraphs of the article that showed a direct relation to his current case.
"Sparse distributed representations" sounds like the human brain's technique to deal with huge quantities of data, without letting itself become overwhelmed by minutiae. Haven't we all experienced people who are more capable of doing this than others? Some people get so bogged down with marginally relevant details that they can't ever seem to get the job done. Others seem more capable of "cutting through the c**p," and making good decisions. And the other side of the coin, some people trivialize or dismiss truly important factors that should come into play, often for lack of that information, and make the wrong decisions.
It doesn't seem inconceivable that computer algorithms can be written to do this sort of SDR thing more consistently than humans do. Initially, no doubt, the algorithm would only be used as an aid to the doctor. In this first phase, it would be really nice if, when the algorithm disagrees with the doctor, all of the factors the algorithm considered were listed, with their weights. Perhaps then the doctor might change his decision, or vice versa, based on the information the algorithm had.
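That "explain your disagreement" idea could be sketched very simply. Here's a hypothetical weighted-factor scorer (the factor names, weights, and threshold are all invented for illustration) that dumps everything it considered whenever its call differs from the doctor's:

```python
# Hypothetical sketch: a weighted-factor scorer that, when it disagrees
# with the doctor, lists every factor it considered with its weight.
# All names, weights, and the threshold are invented for illustration.

def score(factors, weights):
    """Weighted sum of observed factor values."""
    return sum(weights[name] * value for name, value in factors.items())

def diagnose(factors, weights, threshold=1.0):
    return "treat" if score(factors, weights) >= threshold else "monitor"

weights = {"fever": 0.6, "journal_finding": 0.9, "patient_history": 0.4}
factors = {"fever": 1.0, "journal_finding": 1.0, "patient_history": 0.5}

algorithm_call = diagnose(factors, weights)
doctor_call = "monitor"
if algorithm_call != doctor_call:
    # Surface everything the algorithm weighed, so the doctor can review it.
    print(f"algorithm says {algorithm_call!r}, doctor says {doctor_call!r}:")
    for name, value in factors.items():
        print(f"  {name}: value={value}, weight={weights[name]}")
```

Note that the obscure journal finding carries the largest weight here, which is exactly the kind of factor the comment above suggests a busy doctor might overlook but a machine would not.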
The medical application seems to me to be a combination of pattern matching and trend analysis. That isn't so different from how a human doctor performs a diagnosis, but with access to enough sensor data and global information a computer's analysis can be more complete. Where a human doctor would see an infection and use knowledge of local conditions (this particular bug is going around) to decide what might be the problem, a computer could use data from entire regions and even track the progress of particular infections. Even now the CDC is working with Google, monitoring inquiries about particular symptoms to track disease outbreaks. This seems to be an extension of that idea.
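The trend-analysis half of that can be sketched in a few lines. Here's a made-up example (the counts and threshold are invented, and real systems are far more sophisticated) that flags days when a region's symptom-inquiry count jumps well above its recent baseline:

```python
# Sketch of the trend-analysis idea: flag a possible outbreak when a
# region's daily symptom-inquiry count rises well above the average of
# the preceding days. Data and threshold are invented for illustration.

def outbreak_days(daily_counts, window=7, factor=2.0):
    """Return indices of days whose count exceeds `factor` times the
    average of the preceding `window` days."""
    flagged = []
    for i in range(window, len(daily_counts)):
        baseline = sum(daily_counts[i - window:i]) / window
        if daily_counts[i] > factor * baseline:
            flagged.append(i)
    return flagged

counts = [10, 12, 9, 11, 10, 13, 11, 12, 30, 45, 50]
print(outbreak_days(counts))  # [8, 9, 10] -- the spike stands out
```

Run the same check per region and you get exactly the regional tracking described above: local doctors see one spike, while the aggregate view sees the outbreak spreading.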
Funny, I too was thinking that the medical diagnostic application isn't so different from the autonomous vehicle application, except perhaps in the scope & magnitude of the pattern matching and inferences required. The medical problem seems to be the more complex of the two.