Benefits

Using the NIU block in VisualSim provides:

  • Flexible Arbitration: Modify arbitration logic to test fairness, priority, and QoS policies.
  • Virtual Channel Support: Prevent head-of-line blocking and ensure predictable latency.
  • Cross-Memory Compatibility: Connects to DDR, LPDDR, GDDR, and HBM controllers.
  • Performance Visibility: Simulate packet throughput, latency, and buffer utilization.
  • Industry IP Alignment: Supports Synopsys and Cadence NoCs while also enabling customer-specific NIU development.
  • System Reliability: Accurate reordering ensures data integrity across out-of-order responses.

The NIU (Network Interface Unit) block in VisualSim models the critical boundary element between masters/initiators (CPUs, GPUs, DSPs, NPUs) and the NoC fabric. It is responsible for packetizing memory requests, applying arbitration and QoS policies, setting up virtual channels, and reordering responses before returning them to the correct initiator.
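The packetization step described above can be sketched in a few lines. This is an illustrative example only: the flit fields (`src_id`, `seq_num`, `is_tail`) and the 32-byte flit size are assumptions for demonstration, not the VisualSim block's internal representation.

```python
from dataclasses import dataclass

FLIT_SIZE_BYTES = 32  # assumed value of the Flit_Size parameter

@dataclass
class Flit:
    src_id: int      # initiator that issued the request
    seq_num: int     # position within the packet, used later for reordering
    is_tail: bool    # marks the last flit of the packet
    payload: bytes

def packetize(src_id: int, data: bytes) -> list[Flit]:
    """Split one memory request's data into a sequence of flits."""
    chunks = [data[i:i + FLIT_SIZE_BYTES]
              for i in range(0, len(data), FLIT_SIZE_BYTES)]
    return [Flit(src_id, seq, seq == len(chunks) - 1, chunk)
            for seq, chunk in enumerate(chunks)]

# A 100-byte write from initiator 3 becomes 4 flits (32+32+32+4 bytes),
# with the last flit flagged as the packet tail.
flits = packetize(src_id=3, data=bytes(100))
```

On the response path, the same sequence numbers let the reorder buffer re-sequence out-of-order flits before delivery to the initiator.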

NIUs emerged as a core building block of Network-on-Chip (NoC) architectures, which evolved in the 2000s to address multi-core scaling, bandwidth demands, and heterogeneous SoC integration. Companies such as Arteris, ARM (CoreLink/CMN), Cadence, and Synopsys pioneered NIU-based NoCs, and today almost every advanced SoC uses some form of NIU for scalable interconnect and memory subsystem performance.

The NIU in VisualSim enables architects to evaluate latency, throughput, priority schemes, arbitration logic, and congestion control, making it essential for the design of modern AI accelerators, multi-core processors, and safety-critical SoCs.

Overview

The NIU block in VisualSim provides the following features:

  • Request Handling Unit: Forwards read/write requests from initiators to the NoC.
  • Packet Encoder: Converts memory transactions into packetized flits for NoC transmission.
  • QoS Manager: Implements arbitration and priority-based scheduling for critical workloads.
  • Virtual Channel Manager: Allocates and manages VCs to prevent head-of-line blocking.
  • Request Buffer: Stores outstanding transactions to manage congestion.
  • Reorder Buffer: Ensures out-of-order responses are properly sequenced before delivery.
  • Flow Control Mechanism: Credit- or backpressure-based flow regulation to avoid deadlock.
  • Memory Request Decoder: Translates packets back into memory transactions.
  • Response Buffer: Stores responses before delivering them to initiators.
  • Data Path Manager: Directs response routing to the correct NIU master.
  • Interfaces: Connects seamlessly to NoC routers, switches, and memory controllers (DDR, LPDDR, GDDR, HBM).

Supported Standards

The NIU does not conform to a single industry standard but aligns with:

  • CoreLink CMN-600/700/Cyprus/S3 and AMBA AXI/CHI protocols: Widely used in ARM-based SoCs.
  • Arteris FlexNoC and Ncore: Frameworks for scalable and configurable NoC architectures.
  • Cadence and Baya Systems NoC IPs: Supported for interoperability and design validation.
  • JEDEC DRAM Interfaces (DDR, LPDDR, GDDR, HBM): Supported via controller integration.
  • UCIe: Supported through AMBA C2C and other interconnect protocols.

Key Parameters

Key configurable parameters include:

  • Flit_Size: Defines granularity of packet transfers.
  • Request_Buffer_Size: Configures outstanding request queue depth.
  • Reorder_Buffer_Size: Controls out-of-order response handling.
  • QoS_Mode: Enables arbitration schemes for fairness or priority.
  • Priority_Enable: Configures weighted arbitration rules.
  • Bandwidth_per_Port: Assigns link capacity per NIU port.
  • NoC_Speed: Defines overall NoC operating frequency.
  • Interconnect_QoS: Policy selection for congestion management.
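A typical parameter set might look like the following sketch, using the parameter names listed above. The values, the dictionary representation, and the sanity check are assumptions for illustration; VisualSim's actual configuration interface may differ.

```python
# Hypothetical NIU configuration using the documented parameter names.
niu_params = {
    "Flit_Size": 32,                  # bytes per flit
    "Request_Buffer_Size": 16,        # outstanding-request queue depth
    "Reorder_Buffer_Size": 32,        # in-flight responses that can be re-sequenced
    "QoS_Mode": "weighted",           # e.g. fairness vs. priority arbitration
    "Priority_Enable": True,          # enable weighted arbitration rules
    "Bandwidth_per_Port": 25.6,       # assumed units: GB/s per NIU port
    "NoC_Speed": 2.0,                 # assumed units: GHz
    "Interconnect_QoS": "regulator",  # congestion-management policy selection
}

def sanity_check(p: dict) -> None:
    # Every outstanding request may eventually need a reorder slot for its
    # response, so an undersized reorder buffer can stall request issue.
    assert p["Reorder_Buffer_Size"] >= p["Request_Buffer_Size"]
    assert p["Flit_Size"] > 0 and p["NoC_Speed"] > 0

sanity_check(niu_params)
```

Sweeping values such as `Request_Buffer_Size` or `QoS_Mode` across simulation runs is how the latency, throughput, and congestion trade-offs described in this document are typically explored.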

Application

The NIU block applies wherever high-performance, multi-core SoCs are built:

  • AI & ML Accelerators: Managing tensor/matrix traffic across compute cores and HBM stacks.
  • Automotive SoCs: Safety-critical gateways and ADAS compute platforms.
  • Mobile Processors: Integrating CPUs, GPUs, ISPs, and NPUs in smartphones.
  • Datacenter & HPC: Large-scale heterogeneous SoCs with DDR5/HBM controllers.
  • Aerospace & Defense: Mission-critical NoCs requiring deterministic QoS and fault tolerance.

Integrations

  • Connects with: NoC routers, switches, and memory controllers.
  • Interfaces with: processors, GPUs, DSPs, NPUs, and accelerators as masters/initiators.
  • Works with: LPDDR, DDR, GDDR, and HBM memory blocks.
  • Integrates into: Arteris, Cadence, Synopsys, and ARM CoreLink-style NoCs.
