
AI Fits Best in Your Pocket

Re: distributed intelligence & vision
moloned   7/17/2014 4:23:40 PM
Tango and Kinect are indeed thought-leading developments in this space, but they are by no means the only ones, as we've seen from the Amazon smartphone, Oculus Rift, and the new HTC One with dual cameras and Lytro-like focus-later capability.  2015 will be a key inflection point for the transition of such products to the mass market; hopefully many of them will leverage Myriad technology.

-David

Re: Time to leverage the "peace dividend of the Smartphone Wars"
moloned   7/17/2014 4:18:59 PM
Your suggestion of an open platform for robotics that can leverage Myriad and offer Tango-like capabilities is an intriguing one, and certainly food for thought as we roll out our next-generation product.  We will be presenting Myriad2 on August 12th at Hot Chips in the Flint Center in Cupertino.

-David

Time to leverage the "peace dividend of the Smartphone Wars"
jamesclardy   7/17/2014 1:52:24 PM
Couldn't agree more with your article!

Chris Anderson of 3D Robotics (and former Editor-in-Chief at Wired) likes to talk about leveraging the "peace dividend of the Smartphone Wars" in disruptive ways, applied to new markets. Formerly unapproachable levels of processing power are now available at consumer price points... IMO, Movidius could help general-purpose APs react to visual stimuli in real time at low cost and power. This would be a boon to autonomous cars, UAVs, industrial robots, etc., all of which need to be able to take action to protect human safety without resorting to the cloud. Leveraging DNNs as Baidu and Neurala recommend makes sense - especially if the learnings could be openly curated and shared. I'd love to see an embedded version of the Movidius/Tango architecture open to hobbyists and hackers...

- Jim in Austin

distributed intelligence & vision
odeniz   7/17/2014 12:25:28 PM
My two cents on this interesting piece of writing: vision is by far the most challenging sensor in terms of required processing. It may also be the most valuable. 'Distributed' vision, or wireless vision sensors all around us, has so far been an elusive goal, despite the clear trend of computer vision moving out of the factory. Google's Project Tango and the miniaturisation effort that brought us Kinect may be the two most prominent first efforts in this respect.

Re: Programming shift
moloned   7/17/2014 12:12:01 PM
I agree that much of the convenience of our current programming abstractions will have to be abandoned as Moore's law slows and we can no longer rely on the 18-month 2x ratchet in performance.  This is a huge challenge for the SW industry; as David Patterson said, "parallelism is the biggest challenge in 50 years because industry is betting its future that parallel programming will be useful". We are all going to have to think more locally (embedded) in order to keep the whole economy going, compensating for the slowing of Moore's law by writing more optimised software while still achieving rapid time-to-market.  Is the solution more APIs like OpenVX for computer vision and similar efforts for AI etc., or will we see more fragmentation similar to Apple's break away from OpenGL in the form of its Metal graphics API?
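For illustration, here is roughly what the OpenVX graph model looks like from the application side (a rough, untested sketch based on the 1.0 C API; function names are from the spec): the application declares a dataflow graph once, and the runtime decides how to map it onto whatever vision hardware sits underneath.

/* Sketch of an OpenVX 1.0 pipeline: Gaussian blur followed by Sobel,
 * expressed as a graph so the runtime can schedule it on the
 * available vision accelerator. Untested, illustrative only. */
#include <VX/vx.h>

int main(void)
{
    vx_context ctx   = vxCreateContext();
    vx_graph   graph = vxCreateGraph(ctx);

    /* Input/output images; the virtual image is an intermediate that
     * the runtime may keep entirely on-chip. */
    vx_image in      = vxCreateImage(ctx, 640, 480, VX_DF_IMAGE_U8);
    vx_image blurred = vxCreateVirtualImage(graph, 640, 480, VX_DF_IMAGE_U8);
    vx_image gx      = vxCreateImage(ctx, 640, 480, VX_DF_IMAGE_S16);
    vx_image gy      = vxCreateImage(ctx, 640, 480, VX_DF_IMAGE_S16);

    /* Nodes describe what to compute, not where or in what order. */
    vxGaussian3x3Node(graph, in, blurred);
    vxSobel3x3Node(graph, blurred, gx, gy);

    if (vxVerifyGraph(graph) == VX_SUCCESS)   /* validate and optimise */
        vxProcessGraph(graph);                /* execute the pipeline  */

    vxReleaseContext(&ctx);   /* a full program would release each object */
    return 0;
}

The point is exactly the one in question: by expressing the work as a graph rather than explicit loops, the same source could in principle be retargeted to a DSP, a GPU or a vision processor without rewriting it.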

-David

Re: Future is here!
moloned   7/17/2014 12:01:01 PM
My own view is that grand efforts to understand the massive human brain, with its 80B neurons and countless synapses, such as the EU Human Brain Project or the US BRAIN program, are doomed to failure when we can't even model the 302-neuron connectome of the C. elegans worm.  I believe that more modest efforts like the OpenWorm project will tackle brain function on a micro level, allowing us to gradually scale up to complex behaviour, just as Kilby's very primitive IC in 1959 paved the way to the multi-billion-transistor SoCs we have today.  In fact, AI at the level of an insect like a cockroach or an ant would already allow us to build very capable autonomous machines that could improve our daily lives, performing useful tasks at very modest cost.

-David

Re: Distribution is (again) with us!
moloned   7/17/2014 11:55:54 AM
I suspect the (re)integration trend may be slowing due to the increasing cost per transistor as technologies scale below 28nm.  To me, at least, a distributed model makes the most sense, as energy consumption, heat dissipation and bandwidth can be allocated in such a way as to arrive at something close to a global minimum, allowing mobile-to-cloud solutions to scale better.

-David

Distribution is (again) with us!
dtrainor   7/17/2014 11:46:15 AM
I agree that pushing computational ability and "intelligence" towards sensors and mobile devices in a power-efficient way makes a lot of sense for future large-scale cloud applications. This is particularly true when media data analysis and transfer is involved. It's another interesting example of the regular "integration-redistribution" cycle that has been repeating in many areas of electronics, engineering and computing in recent years.

Programming shift
cristian.olar   7/17/2014 11:26:14 AM
As computing shifts more and more to the devices closer to the sensor, this will also translate into a need for more and more research work to happen at the embedded level. If I look at the world today, most processors close to the sensors interfacing with the external world implement relatively simple algorithms, maybe just an FIR filter or the like, and leave the decisions to other, stronger APs, or in this case the cloud.

What I see is that the main processors (or, let's talk about the cloud now) are usually based on some architecture that supports programming in very abstract ways, while the processors closer to the sensors are still usually used in a very "bare-metal" way to squeeze out the maximum possible optimization in terms of power usage, speed, etc.

So this shift in where the processing power lives, I think, means either that today's researchers will have to abandon part of the abstractions they are used to and get closer to the metal or, alternatively, that we are at the start of a new strand of development towards better embedded tools in general, which would enable research labs and universities to program embedded devices at the same abstraction levels they are used to.

The interesting problem I see with the latter is that while main processors tend to stay within one or two families, which allows development of their tools to be spread across scores of teams around the world, the processors closer to the sensor are a lot more diverse. Many of them have dedicated instruction set architectures and therefore, in principle, require separate toolchains. In practice, though, we are seeing the rise of tools that try to abstract the architecture away to some extent, LLVM-based compilers for example. So this new trend of moving processing closer to the sensor will, I think, open new doors for such portability tools.

In any case, the future will be interesting to witness.

Future is here!
vali_#1   7/17/2014 11:24:14 AM
Great Vision that connected a new range of dots!

It makes sense. The smart devices in our pockets are being geared up with more and more sensors. In parallel, the Internet of Things heralds a new tsunami of data. Not all of it will fit in the network bandwidth on the way up to the cloud, and it would not be intelligent to push it that way either. On the other hand, the natural push would be to make the smart devices really smarter.

What would make a device smarter if not extra intelligence at a reasonable cost? And a reasonable cost also means reasonable power consumption.

Is AI the way to go? What neural network architectures/solutions will make this happen? What would be the HW/SW breakdown to minimize latency but still provide as much intelligence as possible to the cloud? Is this a new era for data mining too?

In the end I like this parallel to the human brain: "Latency is also an issue for the human brain, where distributed processing powers our reflexes without involving the frontal cortex." Yet another dot connected! How many dots are left to connect until we replicate the human brain? How about an uber-cloud overseeing other brain replicas 'socializing' with their data?
