Scaling the 100 GbE Memory Wall
Michael Sporer, Director of IC Marketing, MoSys
4/14/2014 10:15 AM EDT
All of the interrelated system-level tradeoffs, including performance, pin count, and die area, are ultimately driven by power consumption. At 100 and 400 GbE, network chip vendors must think in terms of end-to-end solutions for equipment OEMs. To remain competitive, those OEMs plan to introduce multi-terabit systems that aggregate multiple 100 Gbit/s ports on each line card.
Two current technology trends, 100 Gbit/s line speeds in network appliances and the transition to IPv6, compound design complexity. At both the network SoC and OEM appliance level, solutions must deliver performance, network management, and quality of service. The crucial parameters are absolute delay, delay jitter, minimum delivered bandwidth, and packet loss.[1] Network engineers monitor and manage networks against these parameters, which also serve as the basis of contractual service-level agreements.
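To make these parameters concrete, the sketch below shows one way per-packet measurements could be reduced to the four metrics listed above. It is an illustrative assumption rather than anything described in this article: the record fields, the simplified jitter estimator, and the sample data are all hypothetical.

```python
# Hypothetical sketch: deriving the QoS parameters named above (delay,
# delay jitter, delivered bandwidth, packet loss) from per-packet records.
# Field and function names are illustrative, not a real monitoring API.

from dataclasses import dataclass
from typing import List

@dataclass
class PacketRecord:
    seq: int          # sender-assigned sequence number
    sent_s: float     # send timestamp (seconds)
    recv_s: float     # receive timestamp (seconds)
    size_bytes: int   # packet size on the wire

def qos_metrics(received: List[PacketRecord], packets_sent: int) -> dict:
    """Compute basic service-level metrics from received-packet records."""
    delays = [p.recv_s - p.sent_s for p in received]
    mean_delay = sum(delays) / len(delays)

    # Delay jitter as the mean absolute variation between consecutive
    # packets (a simplification of the RFC 3550 interarrival-jitter idea).
    variations = [abs(b - a) for a, b in zip(delays, delays[1:])]
    jitter = sum(variations) / len(variations) if variations else 0.0

    # Packet loss ratio: packets sent but never received.
    loss_ratio = 1.0 - len(received) / packets_sent

    # Delivered bandwidth over the observation window (bits per second).
    window = received[-1].recv_s - received[0].recv_s
    delivered_bps = (sum(p.size_bytes for p in received) * 8 / window
                     if window > 0 else 0.0)

    return {
        "mean_delay_s": mean_delay,
        "jitter_s": jitter,
        "loss_ratio": loss_ratio,
        "delivered_bps": delivered_bps,
    }

# Example: four 1500-byte packets sent, one (seq 2) lost in transit.
if __name__ == "__main__":
    rx = [
        PacketRecord(0, 0.000000, 0.000050, 1500),
        PacketRecord(1, 0.000012, 0.000065, 1500),
        PacketRecord(3, 0.000036, 0.000095, 1500),
    ]
    print(qos_metrics(rx, packets_sent=4))
```

In a real line card these statistics would be gathered in hardware counters at line rate rather than in software, but the definitions of the contractual metrics are the same.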