Deliver "Smarter" Faster: Design Methodology for AI/ML Processor Design
By Joe Sawicki, executive vice president of IC EDA division, Mentor
EETimes (June 19, 2019)
New design tools can develop circuits for AI and machine learning faster than ever. AI/ML is being used to make those same design tools even faster.
We are at the beginning of an age where artificial intelligence (AI) processing will advance in sophistication rapidly and become ubiquitous. While the concept of AI — giving machines the ability to mimic cognitive functions to learn and solve problems and then take an action — has been an academic discipline since the mid-1950s, it wasn’t until the last five years that AI processing, mostly in the form of machine learning (ML), could step out of the dimly-lit halls of research and supercomputer one-offs and move to practical everyday use. Why?
The data generated by the internet and billions of smart devices gives us more than enough raw material to assemble the sizable data sets needed to train ML-based systems. In addition, we now have ubiquitous high-performance compute power in smart devices, along with the high-bandwidth communications infrastructure to process and transfer massive data sets quickly. This compute power also gives us the canvas to develop ever more sophisticated and specialized algorithms for particular tasks, further expanding the application of AI/ML.