Dr. Massimiliano Versace is the co-founder and CEO of Neurala, and the company visionary. After his pioneering research in brain-inspired computing and deep networks, he continues to inspire and lead the world of autonomous robotics. He has spoken at dozens of events and venues, including TEDx, NASA, the Pentagon, GTC, InterDrone, National Labs, Air Force Research Labs, HP, iRobot, Samsung, LG, Qualcomm, Ericsson, BAE Systems, AI World, Mitsubishi, ABB and Accenture, among many others.
You initially studied psychology and then pivoted to neuroscience. What was your rationale at the time?
The pivot was natural. Psychology provided one side of the “training coin” – the study of psychological phenomena. However, if one is interested in what mechanistically causes thoughts and behavior, one inevitably lands on studying the organ responsible for them, and ends up studying Neuroscience!
When did you realize that you wanted to apply your understanding of the human brain towards emulating the human brain in an AI system?
The next step, Neuroscience to AI, is trickier. While Neuroscience is concerned with the detailed study of the anatomy and physiology of the nervous system and how brains give rise to behavior, another, complementary path to an even greater understanding is to build a synthetic version of them. An analogy I like to give is that one can gain a partial understanding of how an engine works by knocking off a cylinder and the radiator and concluding that cylinders and radiators are important to engine functioning. Another, deeper way to understand an engine is to build one from scratch – and likewise, one can study intelligence by building a synthetic (artificial) version of it.
What are some of the early deep learning projects that you worked on?
In 2009, for DARPA, we worked on building a “whole brain emulation” for an autonomous robot using an advanced chip designed by Hewlett Packard. In a nutshell, our task was to emulate the brain and some of the key autonomous and learning behaviors of a small rodent in a form factor small enough to be portable and implemented in compact hardware.
Could you share the genesis story behind Neurala?
Neurala as a company started in 2006 to house some patent work around using GPUs (Graphics Processing Units) for deep learning. While this might be thought of as trivial today, at the time GPUs were not used for AI at all, and we pioneered that concept by imagining that each pixel in a graphics card could be used to process a neuron (vs. a section of a scene to render on the screen). Thanks to the parallelism of GPUs, which mimics our brain’s parallelism to a (commercially viable) extent, we were able to achieve learning and execution speeds for our algorithms that all of a sudden made AI and Deep Learning practical. We had to wait a few more years to leave academia as the world “caught up” (we were already firm believers!) on the reality of AI. In 2013, we took the company out of stealth mode (as we were already funded by NASA and US Air Force Research Labs) and entered the Techstars Boston program. From there, we started to hire a few employees and raised private capital. Still, it was not until 2017 that, with a fresh injection of capital and the industry maturing further, we were able to land the first important deployments and put our AI in 56M devices, ranging from cameras to smartphones, drones, and robots.
One of Neurala’s early projects was working on NASA’s Mars rover. Could you share with us highlights of this project?
NASA had a very specific problem: they wanted to explore technology to power future unmanned missions, where the autonomous system (e.g., a rover) would not rely on step-by-step guidance from Earth’s mission control. Communication delays make this kind of control impossible – just remember how clunky the communication was between Earth and Matt Damon in the movie “The Martian”. Our solution: endow each rover with a brain of its own. NASA turned to us, as we were already seen as experts in building these autonomous “mini-brains” with DARPA, to endow a rover with a small-form-factor Deep Learning system able not only to run on the robot, but also to adapt in real time and learn new things as the robot is operating – for example, learning new objects (e.g., rocks, signs of water) as they are encountered and creating a meaningful map of an unexplored planet. The challenge was huge, but so was the payoff: a Deep Learning technology able to run on very little processing power and learn from even a single piece of data (e.g., an image). This went beyond what Deep Learning was able to accomplish at the time (and even today!).
Neurala has designed the Lifelong-DNN. Can you elaborate on how this differs from a regular DNN and the advantages it offers?
Designed for the NASA use case above, Lifelong-DNN (L-DNN), as the name states, can learn during its whole life cycle. This is unlike traditional Deep Neural Networks (DNNs), which can either be trained or perform “inference” (namely, a classification). In L-DNN, as in humans, there is no difference between learning and classifying. Every time we look at something, we both “classify” it (this is a chair) and learn about it (this chair is new, never seen it before, I now know a bit more about it). Unlike DNNs, L-DNN is always learning, constantly comparing what it knows about the world with the new information presented to it, and is naturally able to detect anomalies. For example, if one of my children played a joke on me and painted my chair pink, I would recognize it right away. Since my L-DNN has learned over time that my chair is black, when my perception of it mismatches my memory of it, L-DNN produces an anomaly signal. This is used in Neurala’s products in various ways (see below).
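To make the distinction concrete, below is a minimal, hypothetical sketch in Python of the “classify and learn in the same step” idea. It is not Neurala’s L-DNN implementation – the class name, parameters, and threshold are illustrative assumptions – just a toy prototype-based classifier that updates its memory on every observation and raises an anomaly signal when what it perceives deviates too far from what it remembers.

```python
# A toy, hypothetical illustration of "learning while classifying" -- NOT
# Neurala's actual L-DNN. Each label is remembered as a running feature
# prototype; every observation both classifies (nearest prototype) and
# learns (nudges that prototype), and a large mismatch between perception
# and memory is reported as an anomaly (e.g., the chair painted pink).
import numpy as np

class LifelongPrototypeClassifier:
    def __init__(self, anomaly_threshold=1.0, learning_rate=0.1):
        self.prototypes = {}               # label -> running feature prototype
        self.threshold = anomaly_threshold
        self.lr = learning_rate

    def observe(self, features, label=None):
        """Classify a feature vector and, in the very same step, update memory."""
        features = np.asarray(features, dtype=float)
        if not self.prototypes:                           # first observation ever
            first_label = label if label is not None else "unknown"
            self.prototypes[first_label] = features.copy()
            return first_label, False

        # 1. Classify: find the closest stored prototype.
        distances = {lbl: np.linalg.norm(features - proto)
                     for lbl, proto in self.prototypes.items()}
        best_label = min(distances, key=distances.get)
        is_anomaly = distances[best_label] > self.threshold

        # Unlabeled observation that doesn't match memory: report it, don't learn it.
        if is_anomaly and label is None:
            return best_label, True

        # 2. Learn: nudge the matching prototype toward the new observation,
        #    or store a brand-new prototype for a never-seen label.
        target = label if label is not None else best_label
        if target in self.prototypes:
            self.prototypes[target] += self.lr * (features - self.prototypes[target])
        else:
            self.prototypes[target] = features.copy()
        return best_label, is_anomaly
```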
Can you discuss what the Brain Builder custom vision AI is, and how it enables faster, easier, and less expensive robotics applications?
Since L-DNN naturally learns about the world and can understand when something is anomalous or deviates from a learned standard, Neurala’s products, Brain Builder and VIA (Visual Inspection Automation), are used to quickly set up visual inspection tasks using just a few images of “good products”. For example, in a production setting, one can use 20 images of “good bottles” and create a Visual Quality Inspection “mini-brain” able to recognize good bottles and detect when a bad bottle (e.g., one with a broken cap) is produced. This can be done with L-DNN very easily, quickly, and on a simple CPU, leveraging the NASA technology built over more than 10 years of intense R&D.
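As a rough illustration of that workflow – again hypothetical, and not the actual Brain Builder/VIA API – the sketch below reuses the toy LifelongPrototypeClassifier from the previous example: enroll a handful of “good bottle” images as a one-class memory, then flag anything that deviates from it as a potential defect. The feature extractor, image sizes, and threshold are all stand-ins.

```python
# Hypothetical inspection workflow built on the toy classifier sketched earlier
# (LifelongPrototypeClassifier); random arrays stand in for real camera frames,
# and the anomaly threshold is purely illustrative.
import numpy as np

def extract_features(image):
    # Placeholder feature extractor: in practice this would be a pretrained
    # CNN backbone; here we simply flatten and L2-normalize the pixel array.
    flat = np.asarray(image, dtype=float).ravel()
    return flat / (np.linalg.norm(flat) + 1e-8)

rng = np.random.default_rng(0)
template = rng.random((32, 32))                      # stand-in "good bottle" appearance
good_images = [template + 0.01 * rng.standard_normal((32, 32)) for _ in range(20)]

defective = template.copy()
defective[:8, :8] = 0.0                              # simulate a visible defect (broken cap)

inspector = LifelongPrototypeClassifier(anomaly_threshold=0.1)

# Enrollment: ~20 images of good bottles are enough to build the "good" memory.
for image in good_images:
    inspector.observe(extract_features(image), label="good_bottle")

# Inspection: a frame that deviates from the learned memory raises an anomaly.
label, is_defect = inspector.observe(extract_features(defective))
print(label, "-> defect!" if is_defect else "-> ok")
```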
In a previous interview, you recommended that entrepreneurs always aim to start a business that is slightly impossible. Did you feel that Neurala was slightly impossible when you first launched the company?
I still recall my friend and colleague, Anatoli, spitting out his espresso when I said, “one day, our technology will run on a cell phone”. It sounded impossible, but all you needed to do was imagine it and work for it. Today, it runs on millions of phones. We envision a world where thousands of artificial eyes can spot issues in industrial machines and processes, providing a previously unimaginable level of quality and control – previously impossible, as it would require thousands of people per machine. Hope nobody is drinking espresso while reading this…