Create high-performance SoCs using network-on-chip IP
By Andy Nightingale, Arteris IP
EDN (March 13, 2023)
A system-on-chip (SoC) containing a million transistors was considered a large device in the not-so-distant past. Today, SoCs commonly contain up to a billion transistors. Consider, for example, the recent case study with SiMa.ai and its new machine learning (ML) chip, the MLSoC, which provides effortless machine learning at the embedded edge.
This MLSoC, created at the 16-nm technology node, comprises billions of transistors. As is almost invariably the case in today’s SoC designs, the MLSoC is composed of a sophisticated mix of off-the-shelf third-party intellectual property (IP) blocks coupled with an internally developed machine learning accelerator (MLA) IP.
Third-party IPs implement well-known, standardized functions, such as processor cores, communication interfaces (Ethernet, USB, I2C, and SPI), and peripherals; these are the sorts of functions that are not worth the time and effort to develop internally. The “secret sauce” that differentiates this SoC from its competitors is the MLA, which delivers 50 trillion operations per second (TOPS) while consuming a minuscule 5 watts of power, an efficiency of 10 TOPS per watt.
One problem with combining hundreds of IPs from various vendors is that the SoC industry has defined and adopted multiple interconnect protocols (OCP, APB, AHB, AXI, STBus, and DTL), and each IP may use a different one. In addition, each IP may support a different data width and run at a different clock frequency. As you can imagine, getting these IPs to talk to each other can be daunting.
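To make the mismatch concrete, the following Python sketch (not from the article; block names, protocols, widths, and frequencies are illustrative assumptions) models the interface attributes of a few IP blocks and flags every pair that cannot be wired together directly. Each flagged pair implies a protocol bridge, width converter, or clock-domain crossing that the on-chip interconnect must provide.

```python
# Hypothetical sketch: why heterogeneous IP interfaces need bridging.
# All names and values below are illustrative, not taken from the MLSoC design.

from dataclasses import dataclass
from itertools import combinations

@dataclass(frozen=True)
class IpInterface:
    name: str          # IP block name (illustrative)
    protocol: str      # interconnect protocol, e.g. "AXI", "AHB", "APB"
    data_width: int    # data bus width in bits
    clock_mhz: float   # interface clock frequency in MHz

# A handful of example IP blocks with mismatched interfaces.
blocks = [
    IpInterface("cpu_cluster",    "AXI", 128, 1200.0),
    IpInterface("ml_accelerator", "AXI", 256,  800.0),
    IpInterface("usb_ctrl",       "AHB",  32,  200.0),
    IpInterface("spi_ctrl",       "APB",  32,  100.0),
]

# Report every pairwise mismatch; each one requires a bridge, a width
# converter, or a clock-domain crossing somewhere in the interconnect.
for a, b in combinations(blocks, 2):
    issues = []
    if a.protocol != b.protocol:
        issues.append(f"protocol {a.protocol} vs {b.protocol}")
    if a.data_width != b.data_width:
        issues.append(f"width {a.data_width}b vs {b.data_width}b")
    if a.clock_mhz != b.clock_mhz:
        issues.append(f"clock {a.clock_mhz} MHz vs {b.clock_mhz} MHz")
    if issues:
        print(f"{a.name} <-> {b.name}: " + "; ".join(issues))
```

Even in this four-block example, every pairing needs at least one form of adaptation; scale that to hundreds of IPs and the case for a dedicated network-on-chip interconnect becomes clear.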