Parsing the Mindboggling Cost of Ownership of Generative AI
By Lauro Rizzatti, VSORA
EETimes (November 2, 2023)
The latest algorithms, such as GPT-4, challenge the current state of the art in processing hardware, and GenAI accelerators aren't keeping up. In fact, no hardware on the market today can run the full GPT-4.
The current focus of large language model (LLM) development on creating smaller but more specialized LLMs that can run on existing hardware is a diversion. The GenAI industry needs semiconductor innovations in computing methods and architectures capable of delivering multiple petaFLOPS of performance at greater than 50% efficiency, reducing latency to less than two seconds per query, constraining energy consumption, and shrinking cost to 0.2 cents per query.
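To make the relationship among these targets concrete, consider a minimal back-of-envelope sketch in Python. This is not the article's cost model; every numeric input (accelerator price, lifetime, power draw, electricity price, peak throughput, FLOPs per query) is an illustrative assumption.

# Back-of-envelope cost-per-query model. Every numeric input below is an
# illustrative assumption chosen for the sketch, not a figure from the article.
ACCELERATOR_COST_USD = 30_000   # assumed purchase price of one accelerator
LIFETIME_YEARS = 3              # assumed depreciation window
POWER_KW = 0.7                  # assumed board power draw
ENERGY_COST_PER_KWH = 0.10      # assumed datacenter electricity price, USD
PEAK_PFLOPS = 2.0               # assumed peak throughput in petaFLOPS
EFFICIENCY = 0.50               # the >50% utilization target cited above
FLOPS_PER_QUERY = 1e15          # assumed compute needed for one LLM query

seconds_of_life = LIFETIME_YEARS * 365 * 24 * 3600
sustained_flops = PEAK_PFLOPS * 1e15 * EFFICIENCY

# Latency: how long one query occupies the accelerator at the sustained rate.
latency_s = FLOPS_PER_QUERY / sustained_flops

# Cost: amortized hardware (capex) plus electricity (opex) for that time.
capex_usd_per_s = ACCELERATOR_COST_USD / seconds_of_life
energy_usd_per_s = POWER_KW * ENERGY_COST_PER_KWH / 3600
cost_per_query_usd = (capex_usd_per_s + energy_usd_per_s) * latency_s

print(f"latency: {latency_s:.2f} s/query")                     # target: < 2 s
print(f"cost:    {100 * cost_per_query_usd:.3f} cents/query")  # target: 0.2

With these assumed inputs, the sketch yields about 1 s and 0.03 cents per query. The structural point it illustrates is that both latency and cost scale inversely with sustained throughput, i.e. peak performance times efficiency, which is why the article's targets couple performance, efficiency, latency, and cost rather than treating any one in isolation.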
Once this is in place, and it is only a matter of time, the full promise of transformers deployed on edge devices will be realized.