
Reverse-Engineering Insect Brains to Make Robots

07.19.2022

British startup Opteran, a spin–out of the University of Sheffield, has a completely different view of neuromorphic engineering compared to most of the industry. The company has reverse–engineered insect brains to derive new algorithms for collision avoidance and navigation that can be used in robotics.

Opteran calls its new approach to AI “natural intelligence,” taking direct biological inspiration for the algorithm portion of the system. This approach is distinct from existing computer vision approaches, which mainly use either mainstream AI/deep learning or photogrammetry, a technique that uses 2D photographs to infer information about 3D objects, such as dimensions.

Opteran’s natural intelligence requires no training data and no training, much like a biological brain. Deep learning today is capable only of narrow AI — it can execute carefully defined tasks within a limited environment, such as a computer game — but it requires huge amounts of training data, computation, and power. Opteran wants to get around the limitations of deep learning by closely mimicking what brains really do, in order to build autonomous robots that can interact with the real world on a tight computation and energy budget.

“Our purpose is to reverse– or re–engineer nature’s algorithms to create a software brain that enables machines to perceive, behave, and adapt more like natural creatures,” said professor James Marshall, chief scientific officer at Opteran, in a recent presentation at the Embedded Vision Summit.

“Imitating the brain to develop AI is an old idea, going back to Alan Turing,” he said. “Deep learning, on the other hand, is based on a cartoon of a tiny part of the primate brain visual cortex that ignores the vast complexity of a real brain… modern neuroscience techniques are increasingly being applied to give the information we need to faithfully reverse engineer how real brains solve the problem of autonomy.”

Reverse engineering brains requires studying animal behavior, neuroscience, and anatomy together. Opteran has been working with honeybee brains because they are both sufficiently simple and capable of orchestrating complex behavior. A honeybee can navigate over distances of 7 miles and communicate its mental map accurately to other bees. It does all this with fewer than a million neurons, in an energy-efficient brain the size of a pinhead.

Opteran has successfully reverse-engineered the algorithm honeybees use for optical flow estimation (the apparent motion of objects in a scene caused by the relative motion of the observer). This algorithm performs optical flow processing at 10 kHz while consuming under a watt, running on a small FPGA.

“This performance exceeds the deep learning state of the art by orders of magnitude in all dimensions, including robustness, power, and speed,” Marshall said.




Biological algorithms

Biological motion detection was mathematically modeled in the 1960s based on experiments with insect brains. The model is called the Hassenstein–Reichardt Detector, and it has been verified many times over via different experimental methods. In this model, the brain receives signals from two neighboring photoreceptors in the eye, with the input from one receptor delayed. When the delayed signal from one receptor arrives at the same time as the direct signal from its neighbor, the detector neuron fires, indicating that the viewed object is moving. A mirrored subunit, with the other signal delayed, detects motion in the opposite direction (hence the symmetry in the model).
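The correlate-and-subtract scheme described above can be sketched in a few lines of code. This is a toy discrete-time version for illustration only; the function and variable names are mine, not drawn from the literature or from Opteran's implementation:

```python
import numpy as np

def hassenstein_reichardt(left, right, delay=1):
    """Toy correlation-based motion detector in the spirit of the
    Hassenstein-Reichardt model.

    left, right: 1-D arrays of luminance samples from two neighboring
    photoreceptors. A positive output indicates motion from the left
    receptor toward the right one; negative output, the opposite.
    """
    # Delay each receptor's signal by `delay` samples.
    d_left = np.concatenate([np.zeros(delay), left[:-delay]])
    d_right = np.concatenate([np.zeros(delay), right[:-delay]])
    # Each half-detector correlates one delayed signal with the
    # neighbor's undelayed signal; subtracting the mirrored
    # half-detector makes the output direction-selective.
    return d_left * right - left * d_right
```

A bright edge moving left to right reaches the left receptor first; by the time it reaches the right receptor, the delayed left signal coincides with it, so the first term dominates and the summed output is positive. Reverse the motion and the second term dominates instead.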

(Left) the Hassenstein–Reichardt Detector, a model of motion detection in biological brains. (Right) Opteran’s patented algorithm derived from honeybee brains. (Source: Opteran)

Marshall explained in his presentation that the Hassenstein–Reichardt Detector, while sufficient to model motion detection in fruit flies, is highly sensitive to spatial frequency (the distribution pattern of dark and light in an image) and contrast, and therefore not a great fit for generalized visual navigation.

“Honeybees do something cleverer, which is a novel arrangement of these elementary units,” Marshall said. “Honeybee flying behavior shows great robustness to spatial frequency and contrast, so there must be something else going on.”

Opteran used behavioral and neuroscientific data from honeybees to come up with its own visual inertial odometry estimator and collision avoidance algorithm (on the right in the diagram above). This algorithm was benchmarked and found to be superior to FlowNet2s (a state-of-the-art deep learning algorithm at the time) in terms of theoretical accuracy and noise robustness. Marshall pointed out that the deep learning implementation would also require GPU acceleration, with the associated power penalty.

Real–world robotics

It’s a nice theory, but does it work in the real world? Opteran has indeed been applying its algorithms in real–world robotics. The company has developed a robot dog demo, Hopper, in a similar form factor to Boston Dynamics’ Spot. Hopper uses an edge–based vision–only solution based on Opteran’s collision prediction and avoidance algorithm; when a potential collision is identified, a simple controller makes it turn away.
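The article doesn't detail Hopper's controller beyond "turn away on predicted collision," but a minimal reactive scheme in the spirit of insect flow balancing might look like the following. Everything here — the function name, the steering rule, and the thresholds — is an illustrative assumption, not Opteran's algorithm:

```python
import numpy as np

def steer_command(flow_mag, gain=0.5, collision_threshold=2.0):
    """Toy reactive controller: given per-pixel optical-flow
    magnitudes across the visual field, turn away from the side
    with more flow (nearby obstacles produce faster apparent
    motion). Hypothetical sketch, not Opteran's implementation.

    flow_mag: 2-D array (height x width) of flow magnitudes.
    Returns (turn, collision): a signed turn command (positive =
    turn left, away from a right-side obstacle) and a flag raised
    when total flow suggests an imminent collision.
    """
    h, w = flow_mag.shape
    left = flow_mag[:, : w // 2].mean()    # mean flow, left half
    right = flow_mag[:, w // 2 :].mean()   # mean flow, right half
    turn = gain * (right - left)
    collision = (left + right) > collision_threshold
    return turn, collision
```

The appeal of this kind of rule is that it needs no map and no depth estimate: the optical flow field alone, which Opteran's FPGA pipeline already produces, is enough to drive a turn-away reflex.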

Opteran is also working on a 3D navigation algorithm, again based on honeybees. This solution will be equivalent to today’s SLAM (simultaneous localization and mapping) algorithms, but it will also handle path planning, routing, and semantics. Marshall said it will run on a fraction of a watt on the same hardware.

“Another big saving is in terms of the map size generated by this approach,” he said. “Whereas classical photogrammetry–based SLAM generates map sizes on the order of hundreds of megabytes to gigabytes per meter squared, causing significant problems for mapping large areas, we have maps consuming only kilobytes of memory.”

A demo of this algorithm powering a small drone in flight uses a single low–resolution camera (less than 10,000 pixels) to perform autonomous vision–based navigation.

Hardware and software

Opteran’s development kit uses a small Xilinx Zynqberry FPGA module which weighs less than 30g and consumes under 3W. It requires two cameras. The development kit uses cheap ($20) Raspberry Pi cameras, but Opteran will work with OEMs to calibrate algorithms for other camera types during product development.

The current FPGA can run Opteran’s omnidirectional optical flow processing and collision prediction algorithms simultaneously. Future hardware may migrate to larger FPGAs or GPUs as required, Marshall said.

The company is building a software stack for robotics applications. On top of an electronically stabilized panoramic vision system, there is collision avoidance, then navigation. Work is underway on a decision engine to allow a robot to decide where it should go and under what circumstances (due in 2023). Future elements include social, causal, and abstract engines, which will allow robots to interact with each other, to infer causal structures in real world environments, and to abstract general principles from experienced situations. All these engines will be based on biological systems — no deep learning or rule–based systems.

Opteran completed a funding round of $12 million last month, which will fund the commercialization of its natural intelligence approach and the development of the remaining algorithms in its stack. Customer pilots so far have used stabilized vision, collision avoidance, and navigation capabilities in cobot arms, drones, and mining robots.

Future research directions could also include studying other animals with more complex brains, Marshall said.

“We started with insects, but the approach scales,” he said. “We’ll be looking at vertebrates in due course, that’s absolutely on our roadmap.”
