Paul Williamson on Edge AI, Llama 3.2 on Arm
By Nitin Dahad, EETimes (September 27, 2024)
EE Times caught up with Paul Williamson, senior VP and general manager of the IoT business for Arm, for an exclusive virtual interview after his keynote talk at the Edge Impulse Imagine conference in Mountain View, Calif., this week.
During the interview, Williamson provided an overview of his talk at the Edge Impulse event. He then touched on examples of edge AI and why it is all about small language models (SLMs) trained for specific tasks at the edge. “The edge is increasingly about expert systems rather than large generic models,” Williamson said.
He helped answer the question, “How do we make edge AI real in real-world applications?” and discussed the significance of the other announcement Arm made this week: the company’s collaboration with Meta to enable the Llama 3.2 LLMs to run on Arm CPUs.
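To make the small-model idea concrete, the sketch below loads one of the smaller Llama 3.2 text models and runs it entirely on a CPU-only host, the kind of scenario the Arm–Meta collaboration targets. It is a minimal illustration using the Hugging Face transformers API rather than Arm's own integration; the model ID, the maintenance-assistant prompt, and the bfloat16 setting are assumptions made for the example, and access to the meta-llama checkpoint is gated by Meta's license.

```python
# Minimal sketch: run a small Llama 3.2 model on a CPU-only (e.g., Arm) host.
# Assumptions: torch and transformers are installed, and the
# meta-llama/Llama-3.2-1B-Instruct checkpoint is accessible (license-gated).
# This illustrates the "small expert model at the edge" idea, not Arm's specific integration.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-3.2-1B-Instruct"  # one of the small Llama 3.2 text models

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # reduced-precision weights for memory-constrained hosts
)  # with no accelerator configured, the model loads and runs on the CPU

# A task-specific prompt, in the spirit of an "expert system" at the edge.
messages = [
    {"role": "system", "content": "You are a concise assistant for factory-floor maintenance."},
    {"role": "user", "content": "List likely causes of an overheating spindle motor."},
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)
output_ids = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```

Paired with quantization, small task-focused models like this are the kind of workload Williamson described as defining AI at the edge.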