Can 10 Gbps Ethernet be an Embedded Design Solution?
Ron Wilson, Intel FPGA

10 Gbps Ethernet (10GbE) has established itself as the standard way to connect server cards to the top-of-rack (ToR) switch in data-center racks. So what's it doing in the architectural plans for next-generation embedded systems? It is a tale of two separate but connected worlds.

Inside the Data Center

If we can say that a technology has a homeland, then the home turf of 10GbE would be inside the cabinets that fill data centers. There, the standard has provided a bridge across a perplexing architectural gap.

Data centers live or die by multiprocessing: their ability to partition a huge task across hundreds, or thousands, of server cards and storage devices. And multiprocessing in turn succeeds or fails on communications: the ability to move data so effectively that the whole huge assembly of CPUs, DRAM arrays, solid-state drives (SSDs), and disks acts as if it were one giant shared-memory, many-core system.

This need puts special stress on the interconnect fabric. Obviously it must offer high bandwidth at the lowest possible latency. And since the interconnect will touch nearly every server and storage-controller card in the data center, it must be inexpensive (implying commodity CMOS chips), compact, and power efficient.

And the interconnect must support a broad range of services. Blocks of data must shuffle to and from DRAM arrays, SSDs, and disks. Traffic must pass between servers and the Internet. Remote direct memory access (RDMA) must allow servers to treat each other's memory as local. Some tasks may want to stream data through a hardware accelerator without using DRAM or cache on the server cards. As data centers take on network functions virtualization (NFV), applications may try to reproduce the data flows they enjoyed in hard-wired appliances.
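To make the RDMA point concrete, the sketch below shows roughly what a one-sided remote read looks like through the Linux verbs API (libibverbs). It is a minimal illustration, not part of any design discussed here: it assumes a reliable queue pair has already been connected, the local buffer has been registered, and the peer's address and rkey were exchanged out of band; the function name and parameters are illustrative.

    /* Minimal sketch of a one-sided RDMA read using libibverbs.
     * Assumes: a connected reliable (RC) queue pair, a completion queue,
     * a registered local buffer (mr), and the peer's address/rkey obtained
     * out of band. Error handling is abbreviated. */
    #include <infiniband/verbs.h>
    #include <stdint.h>
    #include <stdio.h>

    int rdma_read_remote(struct ibv_qp *qp, struct ibv_cq *cq,
                         struct ibv_mr *mr, void *local_buf, uint32_t len,
                         uint64_t remote_addr, uint32_t rkey)
    {
        /* Describe the local destination buffer. */
        struct ibv_sge sge = {
            .addr   = (uintptr_t)local_buf,
            .length = len,
            .lkey   = mr->lkey,
        };

        /* Work request: pull 'len' bytes from the peer's memory
         * with no involvement from the remote CPU. */
        struct ibv_send_wr wr = {
            .wr_id      = 1,
            .sg_list    = &sge,
            .num_sge    = 1,
            .opcode     = IBV_WR_RDMA_READ,
            .send_flags = IBV_SEND_SIGNALED,
        };
        wr.wr.rdma.remote_addr = remote_addr;
        wr.wr.rdma.rkey        = rkey;

        struct ibv_send_wr *bad_wr = NULL;
        if (ibv_post_send(qp, &wr, &bad_wr))
            return -1;

        /* Busy-poll the completion queue until the read finishes. */
        struct ibv_wc wc;
        int n;
        while ((n = ibv_poll_cq(cq, 1, &wc)) == 0)
            ;   /* spin; production code would back off or use events */
        if (n < 0 || wc.status != IBV_WC_SUCCESS) {
            fprintf(stderr, "RDMA read failed (status %d)\n", wc.status);
            return -1;
        }
        return 0;   /* local_buf now holds the remote data */
    }

The write direction (IBV_WR_RDMA_WRITE) is symmetric, which is what lets a server push or pull blocks of a peer's DRAM without interrupting the peer's CPUs.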