Industry Expert Blogs
Rambus HBM3 Controller IP Gives AI Training a New Boost
Rambus Blog | Oct. 26, 2023
As AI continues to grow in reach and complexity, the unrelenting demand for more memory requires the constant advancement of high-performance memory IP solutions. We’re pleased to announce that our HBM3 Memory Controller now enables an industry-leading memory throughput of over 1.23 Terabytes per second (TB/s) for training recommender systems, generative AI and other compute-intensive AI workloads.
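The 1.23 TB/s figure can be sanity-checked with simple bandwidth arithmetic. As a hedged sketch: the HBM3 standard defines a 1024-bit-wide interface per device, and the assumption here (not stated in the excerpt above) is a 9.6 Gb/s per-pin data rate, the top HBM3 speed Rambus has announced:

```python
# Back-of-envelope check of the quoted 1.23 TB/s throughput figure.
# Assumptions (not from the blog text): a 1024-bit-wide HBM3 device
# interface running at 9.6 Gb/s per pin.
PIN_RATE_GBPS = 9.6          # data rate per pin, in Gb/s
INTERFACE_WIDTH_BITS = 1024  # HBM3 interface width per device

throughput_gbs = PIN_RATE_GBPS * INTERFACE_WIDTH_BITS / 8  # GB/s
print(f"{throughput_gbs / 1000:.2f} TB/s")  # -> 1.23 TB/s
```

At 9.6 Gb/s across 1024 pins, that works out to 1,228.8 GB/s, which rounds to the 1.23 TB/s headline number.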
According to OpenAI, the amount of compute used in the largest AI training runs has increased roughly 10X per year since 2012, and this shows no signs of slowing down any time soon! The growth of AI training data sets is being driven by a number of factors: increasingly complex AI models, the vast amounts of online data being produced and made available, and the continued push for greater accuracy and robustness in AI models.
OpenAI’s very own ChatGPT, the most talked about large language model (LLM) of this year, is a great example to illustrate the growth of AI data sets. When ChatGPT was first released to the public in November 2022, it was built on GPT-3.5, a model with 175 billion parameters. GPT-4, released just a few months later, is reported to use upwards of 1.5 trillion parameters. This staggering growth illustrates just how large models and their data sets are becoming in such a short period of time.
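To see why parameter counts of this scale put pressure on memory systems, a rough footprint calculation helps. This sketch assumes 2 bytes per parameter (FP16/BF16 weights) and deliberately ignores optimizer state, activations, and KV caches, all of which multiply the footprint further during training:

```python
# Rough memory-footprint arithmetic for the parameter counts above.
# Assumption: 2 bytes per parameter (FP16/BF16 weights only);
# optimizer state and activations would add several times more.
BYTES_PER_PARAM = 2

for name, params in [("GPT-3", 175e9), ("GPT-4 (reported)", 1.5e12)]:
    gigabytes = params * BYTES_PER_PARAM / 1e9
    print(f"{name}: {gigabytes:,.0f} GB of weights")
```

Weights alone jump from 350 GB to roughly 3,000 GB between the two models, which is why training throughput is increasingly gated by memory bandwidth rather than raw compute.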
Related Blogs
- Intel Embraces the RISC-V Ecosystem: Implications as the Other Shoe Drops
- Mitigating Side-Channel Attacks In Post Quantum Cryptography (PQC) With Secure-IC Solutions
- How PCIe 7.0 is Boosting Bandwidth for AI Chips
- Extending Arm Total Design Ecosystem to Accelerate Infrastructure Innovation
- Ecosystem Collaboration Drives New AMBA Specification for Chiplets