PARIS – On the day EE Times visited Chronocam, a Paris-based startup developing an event-based computer vision technology, a ripple of shock went through the whole building. Intel had just announced the acquisition of Mobileye.
The big surprise hit shortly after a talk with the startup's CEO. This reporter closed her notebook, came out of the meeting room and proceeded on the obligatory company tour. Chronocam’s CEO reappeared unexpectedly, staring into his smartphone. Calmly, but with tones of deep portent, he said, “Intel just bought Mobileye. Fifteen billion dollars.”
The move to combine Intel and Mobileye, two pillars of the computing and vision markets, carries far-reaching ramifications for everyone involved in the automotive market.
But what, for example, might happen to a startup like Chronocam?
Although the CEO has never described Mobileye as a competitor, Chronocam’s technology, if successful, could overthrow the conventional CMOS image sensors and image processing algorithms currently used in cars, drones and robotics.
The startup’s sensor technology, designed not for human consumption but for machines to sense and detect, would completely alter today’s CMOS image sensor market.
Chronocam’s event-driven sensor, still so new, has not yet been used in any commercial cars, drones or robots.
Experts in the imaging/sensing market see Chronocam at the crossroads of imaging and sensing. Charting the course of image sensors over the next 15 years, Pierre Cambou, imaging activity leader at Yole Développement, considers Chronocam’s technology to be one of many disruptive technologies emerging as the industry shifts from an era of photography to machine sensing.
Chronocam’s story is intriguing -- even more so in the context of a red-hot Advanced Driver Assistance System (ADAS)/autonomous vehicle market in which competition has escalated to the point that big players are now beginning to merge.
Enter the combination of Intel and Mobileye. In the face of such super-combos, is Chronocam tilting at windmills?
Will the Intel/Mobileye deal quash Chronocam’s plucky plan to participate in the automotive sensor market? Maybe. But Luca Verre, CEO and co-founder of Chronocam, remains optimistic.
Chronocam CEO Luca Verre
A common thread that ties Chronocam with Intel and Mobileye (and by extension, Nvidia, NXP/Qualcomm and others) is that they’re all pursuing the same thing: An autonomous driving machine that can sense, detect and analyze massive amounts of data -- almost in real time -- so that a car can instantly respond to any imminent danger.
Chronocam is nowhere near the same league as the big boys in the chip industry. But the startup isn’t discouraged. Here’s why.
Data acquisition vs. data processing
As Chronocam’s Verre explains, big GPU/CPU companies are still trying to figure out the best way to process massively collected data more accurately and more quickly. In contrast, “Our focus is not on the processing side, but on data acquisition,” said Verre.
Chronocam’s sensor technology is designed to acquire data that’s simplified and tailored for machines to use. This dramatically reduced data load should allow cars to make almost real-time decisions.
Verre suspects that the Intel/Mobileye deal will intensify competition on the processing side, eventually knocking lesser players off the field.
However, Verre believes this market shakeout will spare the data acquisition side -- companies like Chronocam. “That’s my short-term view,” he said. The longer view, Verre suspects, is that those in the processing field will eventually start paying attention to more efficient, alternative data acquisition solutions designed from the ground up to reduce the amount of data collected by sensors.
Leading the pack
Only a handful of players in the electronics industry are on the path to machine sensing, and Chronocam leads the pack. The company’s technology is already out of the lab, getting close to the commercial market.
The French startup has developed an event-driven computer vision technology that captures imaging data not based on artificially created frames, but driven by events within view. Last October, Chronocam raised $15 million in Series B funding from investors including Renault, Robert Bosch Venture Capital and Intel Capital.
Other players in the same field include iniLabs (Zurich), a spin-off of Swiss academic institutes whose mission is to promote neuromorphic engineering technology. Samsung also discussed its own event-driven vision sensor for the first time at last month’s ISSCC.
Having interviewed Chronocam a year ago, EE Times returned to Paris to catch up with CEO Verre. We asked him how Chronocam is positioning itself on the market (any changes?), which segments of the machine vision market it’s now going after, and how he views the emerging market.
Before getting into Chronocam’s story in full, here’s a recap as to how the company’s event-driven sensors work -- a story told a year ago by Christoph Posch, Chronocam’s co-founder and CTO.
These event-driven devices are not designed to acquire a sequence of snapshots; they dispense with frames altogether. Instead, they generate a continuous-time stream of information from an array of autonomous pixels. Each pixel independently adapts its acquisition process to the visual input it receives. As a result, these sensors eliminate the data redundancy that plagues conventional image sensors. The asynchronous pixel circuits also acquire scene dynamics at very high temporal resolution and capture data over a wide dynamic range.
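The per-pixel change detection Posch describes can be sketched in a few lines of Python. This is a toy frame-differencing simulation of the idea, not Chronocam’s actual design: real event pixels are asynchronous analog circuits with no frames at all, and the log-intensity model and contrast threshold below are illustrative assumptions.

```python
import numpy as np

def generate_events(prev_frame, new_frame, threshold=0.15):
    """Toy event generation: each pixel independently compares its current
    log-intensity to its previous level and emits an ON (+1) or OFF (-1)
    event only when the change exceeds a contrast threshold.
    Returns a list of (row, col, polarity) tuples."""
    # Change-detection pixels respond to relative (log) intensity changes.
    log_prev = np.log1p(prev_frame.astype(float))
    log_new = np.log1p(new_frame.astype(float))
    diff = log_new - log_prev
    events = []
    rows, cols = np.where(np.abs(diff) > threshold)
    for r, c in zip(rows, cols):
        events.append((int(r), int(c), 1 if diff[r, c] > 0 else -1))
    return events

# A static scene produces no events; only the pixel that changed reports.
prev = np.full((4, 4), 100, dtype=np.uint8)
new = prev.copy()
new[2, 3] = 180  # one pixel brightens
print(generate_events(prev, new))  # → [(2, 3, 1)]
```

The key property the sketch illustrates: unchanged pixels contribute nothing to the output stream, which is where the redundancy elimination comes from.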
Shown on the screen are the movement of hands captured by Chronocam's sensors (Photo: EE Times)
Even the slightest hint of a market revolution is exciting to any inventor of a new technology. This isn’t a thrill, however, that stirs the incumbents who supply conventional image sensors.
With that resistance in place, it’s tough to convince others to consider a path they’ve never taken before. Talking them into embracing it, joining the revolution and building “an eco-system” is not a job for the fainthearted. Chronocam’s CEO Verre, however, is undeterred.
Verre said he’s moving as fast as he can, while navigating the complex vision market and adjusting strategies where necessary.
But here’s the question.
What value?
Chronocam’s arrival in the ADAS market with its disruptive technology can be puzzling. Hasn’t the industry -- especially companies like Mobileye -- already solved most of the tech-related problems in the ADAS segment? What value, if any, do car OEMs or tier ones see in Chronocam’s offerings that isn’t available elsewhere?
First, Verre made it clear, “We are not generating [so-called] images.” Chronocam, he said, is generating visual data for machines to use.
He cited three key advantages Chronocam’s event-driven sensor can provide. “We generate much less data, we enable faster reaction time, and we operate at a much wider dynamic range,” he explained.
Unlike conventional image sensors, which use many redundant frames (consuming more power), Chronocam’s event-driven computer vision technology captures only local pixel-level changes caused by movements in a scene.
This dramatic reduction in data is critical, because “we don’t overload the bandwidth,” said Verre. “The very limited amount of data” can also positively affect “the cost and amount of resources necessary for any ADAS applications.” Further, it enables machines to “process data in real time.”
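To make the bandwidth argument concrete, here is a back-of-envelope comparison of a frame-based sensor against an event stream. All of the event-side parameters (active-pixel fraction, event rate, bytes per event) are illustrative assumptions, not Chronocam figures; the point is only that data volume scales with scene activity rather than with frame rate.

```python
def frame_bandwidth_bytes(width, height, fps, bytes_per_pixel=1):
    """Raw data rate of a conventional frame-based sensor: every pixel
    is read out every frame, whether or not anything changed."""
    return width * height * fps * bytes_per_pixel

def event_bandwidth_bytes(width, height, active_fraction,
                          events_per_pixel_s, bytes_per_event=4):
    """Rough event-stream data rate: only pixels that see change report.
    active_fraction and events_per_pixel_s are illustrative assumptions."""
    active_pixels = width * height * active_fraction
    return active_pixels * events_per_pixel_s * bytes_per_event

# VGA sensor, 30 frames/s, 1 byte per pixel:
print(frame_bandwidth_bytes(640, 480, 30))  # → 9216000 (~9.2 MB/s)
# Same array, assuming only 5% of pixels are active at 10 events/s each:
print(int(event_bandwidth_bytes(640, 480, 0.05, 10)))  # → 614400 (~0.6 MB/s)
```

Under these assumed numbers the event stream carries roughly 15x less data; a mostly static scene would widen the gap further, while a scene of pure motion would narrow it.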
An additional perk of using Chronocam’s computer vision sensor is its dynamic range.
The event-driven computer vision can capture visual data independent of light conditions, bright or dark. “The acquisition process to extract high-quality temporal data under such a wide dynamic range is a huge advantage,” said Verre. Compared to conventional image sensors, whose dynamic range falls between 80 and 100 dB, Chronocam offers a 120 to 140 dB dynamic range, he explained.
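Those decibel figures translate directly into contrast ratios, assuming the conventional 20·log10 definition of image-sensor dynamic range (the article does not state which convention Verre used).

```python
def db_to_contrast_ratio(db):
    """Convert a dynamic-range figure in decibels to the ratio between the
    brightest and darkest intensities a sensor can capture in one scene,
    using the common convention: dynamic range = 20 * log10(ratio)."""
    return 10 ** (db / 20)

# Conventional sensor at the top of its 80-100 dB range:
print(db_to_contrast_ratio(100))  # → 100000.0 (a 10^5 : 1 contrast ratio)
# Event-driven sensor at 140 dB:
print(db_to_contrast_ratio(140))  # → 10000000.0 (10^7 : 1)
```

In other words, the extra 40 dB Verre cites corresponds to handling scenes with roughly a hundred times more contrast between their brightest and darkest regions.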
Chronocam’s market positioning, however, has shifted significantly in one respect.
Chronocam now believes, in entering the ADAS/autonomous car market, it doesn’t need to chart a collision course with an entire community of incumbent CMOS image sensor suppliers.
In a departure from what we heard from the startup a year ago, Chronocam today is pitching its technology as one of several different sensors to be added to ADAS/autonomous cars for safety.
The reality is that CMOS cameras are well established in modern cars. As shown below by Yole Développement, their number is rapidly growing.
In addition to those multiple cameras, ADAS cars are also already deploying different sensors that range from radars and lidars to ultrasound.
Verre predicted, “Radar will probably stay. And cameras are necessary to display information [for human consumption.]” But Chronocam’s event-driven sensors, when added, can “push the performance of other sensors, and speed up the process in extracting data.”
Verre noted that event-driven sensors suffer no motion blur and operate independent of lighting conditions. Their high-temporal-resolution data can complement other sensors. He said Chronocam has no partners to announce at this point. His goal is to create, along with other sensor partners, an ecosystem with event-driven sensors as an integral part.
Chronocam’s sensors in a car, for example, could replace an expensive lidar. They could make Autonomous Emergency Braking a standard feature for non-luxury cars, because event-driven sensors do not require intensive data processing.
Will car OEMs add one more sensor to the mix?
Now, what about "deep learning"? A lot of autonomous cars currently in development are beginning to depend on deep learning to process massively collected sensor data.
Which markets?
As Nvidia tells the story, deep learning “relies on powerful GPUs, access to vast troves of data, and sophisticated algorithms for deep neural networks to solve complex problems.” Nvidia’s GPUs, for example, are used both for the learning and inference sides of deep learning.
Data sets currently used for deep learning are based on data captured by traditional CMOS image sensors. If Chronocam decides to opt for deep learning, the company needs to train a data set of its own -- driven by event-based data.
The foundation of event-driven sensors is neuromorphic engineering. This will work for Chronocam, not against it, said Verre, because the amount of data needed for training is much smaller. Event-driven sensors are by nature capable of reacting very fast. Chronocam’s own AI library -- enabling a full event-based approach -- will be ready before the end of this year, he added.
Chronocam has set its sights on three markets: automotive, AR/VR and robotics. The earliest segment the company is ready for -- scheduled to hit the market in 2018 -- is robotics. Event-driven sensors are used in robotics to monitor a safe distance between a human worker and a co-robot when they’re working together on a factory floor.
When such a system deploys a traditional image sensor, the minimum distance that must be maintained is 40 cm, according to Verre, because the latency incurred by a traditional image sensor’s data acquisition renders it unsafe for a co-robot to get any closer to its human partner.
In contrast, using Chronocam’s sensor, the distance drops to 3 cm, he said. Chronocam is already working with a leading robotic manufacturer in the United States (but Verre declined to name names). A prototype will be ready by the middle of this year.
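A toy latency model shows how detection delay drives the required separation. The formula and every number below are our illustrative assumptions, chosen merely to land near the 40 cm and 3 cm figures Verre cited; they are not Chronocam’s published data or an actual safety calculation.

```python
def min_safe_distance(sensor_latency_s, approach_speed_m_s, margin_m):
    """Toy separation model (an assumption, not Chronocam's formula):
    the gap must at least cover the ground a human can close during the
    sensor's detection latency, plus a fixed safety margin in meters."""
    return approach_speed_m_s * sensor_latency_s + margin_m

# Assumed: a human hand moving at 2 m/s and a 2 cm fixed margin.
# Frame-based pipeline with ~190 ms of acquisition+processing latency:
print(round(min_safe_distance(0.190, 2.0, 0.02), 3))  # → 0.4 (m)
# Event-driven sensor with ~5 ms latency:
print(round(min_safe_distance(0.005, 2.0, 0.02), 3))  # → 0.03 (m)
```

The sketch captures the qualitative point: when acquisition latency shrinks by orders of magnitude, the distance budget collapses from body-length scale to fingertip scale. Real collaborative-robot safety distances also account for robot stopping time and are governed by standards, which this model ignores.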
AR/VR is the second market Chronocam will enter, with a commercial product hitting the market in 2019. Chronocam’s sensors will be applied to eye tracking and visual odometry for AR/VR headsets.
The last but biggest market Chronocam is counting on is automotive. Lined up as partners are Renault and Nissan, in addition to OEMs and tier ones outside Europe. The plan is to roll out a prototype before the end of 2018 and “be on the market in 2021,” according to Verre.
For any startup, targeting the automotive market is daunting because of the long product design-in cycles required for cars. Chronocam, however, is fortunate to have other markets -- such as robotics and AR/VR -- to go after in the meantime.
Verre predicted that the company will begin seeing actual sales in 2019 – around $20 million a year.
Need a strong foundry partner
The top priority for the startup now is to finish prototypes for all three markets, prove they work and build partnerships in each segment.
A year ago, Chronocam might have thought delivering a camera running a first layer of event-based computation (software development kit) was enough to convince customers of its technology. But now, Verre is convinced that Chronocam must offer a full vertical solution to work with its partners. Without that breadth, Chronocam knows, its radically different technology will never be accepted.
For its event-driven sensors to succeed, Chronocam needs a strong foundry partner. Currently, the startup is using a standard CMOS imaging process to make its sensor at VGA resolution. “This needs to be improved at least to HD,” said Verre. The goal is to “move to backside 3D stacking technology to produce a sensor with its pixel pitch reduced to 5 microns, compared to the current 15 microns.”
A year ago, Chronocam had raised only 1.5 million euros from investors such as CEA Investment and Robert Bosch VC. Now, with $15 million in its war chest, Chronocam is beefing up its engineering team. Verre plans to grow the company to 50 people by the end of this year.
Chronocam CEO Luca Verre in front of his office. The startup's HQ is tucked away in a courtyard of rue du Faubourg Saint Antoine, near Bastille in Paris (Photo: EE Times)
— Junko Yoshida, Chief International Correspondent, EE Times