January 9, 2018, Las Vegas – videantis GmbH, a leading supplier of computer vision and video coding semiconductor solutions, today announced its new v-MP6000UDX visual processing architecture and v-CNNDesigner tool. The new processor increases deep learning algorithm performance by up to three orders of magnitude, while maintaining software compatibility with the already very powerful and successful v-MP4000HDX architecture.
Videantis continues to see strong demand for smart sensing systems that combine deep learning with other computer vision and video processing techniques such as SLAM or structure from motion, wide-angle lens correction, and video compression. Videantis is the only company that can run all these tasks on a single unified processing architecture. This simplifies SoC design and integration, eases software development, reduces dark silicon, and provides additional flexibility to address a wide variety of use cases.
“We’ve quietly been working on our deep learning solution together with a few select customers for quite some time and are now ready to announce this exciting new technology to the broader market,” says Hans-Joachim Stolberg, CEO at videantis. “Running deep convolutional nets efficiently in real time requires new performance levels and careful optimization, which we’ve addressed with both a new processor architecture and a new optimization tool. Compared to other solutions on the market, we took great care to create an architecture that truly processes all layers of CNNs on a single architecture, rather than adding standalone accelerators where performance breaks down on the data transfers in between.”
In automotive, videantis has seen strong growth due to the industry’s rapid adoption of advanced driver assistance systems that make cars safer and provide a better driving experience. The technology has been adopted by several leading semiconductor companies and OEMs and is already on the road in millions of vehicles. Another area of growth is virtual and augmented reality, where new headsets use smart cameras for a wide variety of tasks, including localization, depth extraction, and eye tracking. A recent trend is to bring deep-learning-based algorithms to embedded vision systems. Deep learning, however, requires orders of magnitude more compute and bandwidth, which videantis addresses with its new v-MP6000UDX architecture.
Stolberg added: “Using some clever design features, the v-MP6000UDX architecture we’re announcing today increases throughput on key neural network implementations by roughly three orders of magnitude, while remaining extremely low power and compatible with our v-MP4000HDX architecture. This compatibility ensures a seamless upgrade path for our customers toward adding deep learning capabilities to their systems, without having to rewrite the computer vision software they’ve already developed for our architecture.”
Mike Demler, Senior Analyst at The Linley Group, said, “Embedded vision is enabling a wide range of new applications such as automotive ADAS, autonomous drones, new AR/VR experiences, and self-driving cars. Videantis is providing an architecture that can run all the visual computing tasks that a typical embedded vision system needs, while meeting stringent power, performance, and cost requirements.”
“With its innovative, specialized processors, videantis has long been a pioneer in enabling the proliferation of computer vision into mass-market applications,” said Jeff Bier, founder of the Embedded Vision Alliance. “By enabling the deployment of deep learning as well as conventional computer vision algorithms, processors like the v-MP6000UDX are making the promise of more intelligent devices a reality.”
The v-MP6000UDX processor architecture includes an extended instruction set optimized for running convolutional neural nets, increases multiply-accumulate throughput eightfold to 64 MACs per core, and extends the number of cores from typically 8 to up to 256. Alongside the new architecture, videantis also announced v-CNNDesigner, a new tool that enables easy porting of neural networks designed and trained with frameworks such as TensorFlow or Caffe. v-CNNDesigner analyzes, optimizes, and parallelizes trained neural networks for efficient processing on the v-MP6000UDX architecture. With this tool, implementing a neural network is fully automated; it takes just minutes to get CNNs running on the low-power videantis processing architecture.
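For illustration only, the sketch below shows the kind of trained network such a flow starts from, using only standard TensorFlow/Keras calls; it does not show any videantis-specific API, and the layer sizes and the file name example_cnn.h5 are illustrative assumptions. Per the announcement, a trained and exported model of this sort is what v-CNNDesigner would analyze, optimize, and parallelize for the v-MP6000UDX.

# Minimal sketch (assumptions noted above): define, train, and export a small CNN.
import tensorflow as tf

# A small example classifier; architecture and sizes are illustrative only.
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16, 3, activation="relu", input_shape=(64, 64, 3)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

# Training on application data is omitted here; after training, export the
# trained network so it can be handed to a porting tool such as v-CNNDesigner.
model.save("example_cnn.h5")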
Availability
The v-MP6000UDX processor platform and v-CNNDesigner software tool are available for licensing today.
About videantis
Headquartered in Hannover, Germany, videantis is a one-stop deep learning, computer vision and video processor IP provider, delivering flexible computer vision, imaging, and multi-standard HW/SW video coding solutions for automotive, mobile, consumer, and embedded markets. Based on a unified processor platform approach that is licensed to chip manufacturers, videantis provides tailored solutions to meet the specific needs of its customers. With deep expert know-how in camera and video applications and strong SoC design and system architecture expertise, videantis serves a worldwide customer base with a diverse range of target applications, such as advanced driver assistance systems and autonomous driving, mobile phones, AR/VR, IoT, gesture interfacing, computational photography, in-car infotainment, and over-the-top TV. videantis has been recognized with the Red Herring Award and multiple Deloitte Technology Fast 50 Awards as one of the fastest growing technology companies in Germany.
For more information, please visit www.videantis.com.