Improving the throughput of processor-based products

The two fundamental measures of processor performance are task latency and throughput. Most microprocessors are latency-oriented architectures: their goal is to minimize the running time of a single sequential program by reducing task-level latency wherever possible.
Throughput-oriented processors, in contrast, arise from the assumption that they will be presented with workloads in which parallelism is abundant. This fundamental difference leads to architectures that differ from traditional sequential machines. Broadly speaking, throughput-oriented processors rely on three key architectural features: an emphasis on many simple processing cores, extensive hardware multithreading, and the use of single-instruction, multiple-data (SIMD) execution.
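As a loose illustration of the SIMD idea (one instruction applied to many data elements at once), the minimal sketch below contrasts an element-by-element loop with a single whole-array operation. NumPy is used here only as a software analogy for hardware SIMD lanes, not as part of any processor design.

```python
import numpy as np

a = np.arange(1_000, dtype=np.float32)
b = np.arange(1_000, dtype=np.float32)

# Scalar style: one addition per loop iteration.
c_scalar = np.empty_like(a)
for i in range(len(a)):
    c_scalar[i] = a[i] + b[i]

# Data-parallel style: a single whole-array operation, the way a SIMD unit
# applies one instruction to many data elements per cycle.
c_simd = a + b

assert np.allclose(c_scalar, c_simd)
```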
No successful processor can afford to optimize aggregate task throughput while completely ignoring single-task latency, or vice versa.
Throughput-oriented processors achieve even higher levels of performance by using many simple, and hence small, processing cores.
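As a rough back-of-the-envelope comparison, the sketch below shows how many slow, simple cores can beat one fast core on aggregate throughput while losing on single-task latency. All figures are invented purely for illustration.

```python
# Hypothetical figures purely for illustration.
fast_core_task_time = 1.0      # seconds per task on one fast, latency-oriented core
slow_core_task_time = 4.0      # seconds per task on one simple, small core
num_slow_cores = 32            # a throughput-oriented chip packs many such cores

# Latency: time to finish a single task.
latency_fast = fast_core_task_time                        # 1.0 s
latency_slow = slow_core_task_time                        # 4.0 s (worse per task)

# Throughput: tasks completed per second, assuming abundant parallel work.
throughput_fast = 1 / fast_core_task_time                 # 1 task/s
throughput_slow = num_slow_cores / slow_core_task_time    # 8 tasks/s (better in aggregate)

print(latency_fast, latency_slow, throughput_fast, throughput_slow)
```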
How fast can data get into and out of the processor? This sets the minimum latency that can be achieved.
One of the most important network performance metrics is throughput. One way to increase throughput is to implement an efficient routing mechanism. Routing is the selection of a path for traffic in a network, which can also be a network-on-chip. The routing process usually directs forwarding on the basis of routing tables, which maintain a record of the routes to various network destinations.
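A minimal sketch of the routing-table idea, mapping each destination to a next hop and a cost; the node and port names are invented for illustration.

```python
# Minimal sketch of a routing table: destination -> (next hop, cost).
# Entries are invented purely for illustration.
routing_table = {
    "node_A": ("port_1", 1),
    "node_B": ("port_2", 3),
    "node_C": ("port_1", 2),
}

def forward(destination):
    """Return the output port for a packet, or None if no route is known."""
    entry = routing_table.get(destination)
    return entry[0] if entry else None

print(forward("node_B"))  # port_2
```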
Dynamic routing attempts to achieve high throughput by constructing routing tables automatically, based on information carried by routing protocols, allowing the network to act nearly autonomously in avoiding network failures and blockages. Dynamic routing dominates the Internet. Examples of dynamic-routing protocols and algorithms include Routing Information Protocol (RIP), Open Shortest Path First (OSPF) and Enhanced Interior Gateway Routing Protocol (EIGRP).
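As a rough sketch of how a distance-vector protocol in the spirit of RIP builds its tables automatically, the toy update step below merges a neighbour's advertised routes into the local table whenever they offer a cheaper path. Node names and link costs are invented, and real RIP adds details (hop limits, timeouts, split horizon) that are omitted here.

```python
# Toy distance-vector update: adopt a neighbour's route whenever going through
# that neighbour is cheaper than what we already know.
def update_table(local_table, neighbour, neighbour_table, link_cost):
    changed = False
    for dest, cost in neighbour_table.items():
        new_cost = cost + link_cost
        if dest not in local_table or new_cost < local_table[dest][1]:
            local_table[dest] = (neighbour, new_cost)  # (next hop, cost)
            changed = True
    return changed

table = {"R2": ("R2", 1)}                 # we already reach R2 directly
advertised = {"R3": 1, "R4": 2}           # routes advertised by neighbour R2
update_table(table, "R2", advertised, link_cost=1)
print(table)  # {'R2': ('R2', 1), 'R3': ('R2', 2), 'R4': ('R2', 3)}
```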
Hence, the designer has to balance latency against throughput and choose the mechanism accordingly, i.e., whether a routing protocol or a switching scheme is used.
A crossbar switch is one of the many options that can be implemented for routing from input to output. A crossbar switch is an assembly of individual switches between a set of inputs and a set of outputs, arranged in a matrix. If the crossbar switch has M inputs and N outputs, then it has a matrix with M × N cross-points, or places where the connections cross.
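A minimal sketch of that M × N cross-point matrix: each closed cross-point connects one input to one output, and at most one input may drive a given output at a time. The 4 × 4 size is an arbitrary example.

```python
# Minimal sketch of an M x N crossbar: a matrix of cross-points, where at most
# one input may drive any given output at a time.
class Crossbar:
    def __init__(self, num_inputs, num_outputs):
        self.num_inputs = num_inputs
        self.num_outputs = num_outputs
        self.output_to_input = {}          # closed cross-points: output -> input

    def connect(self, inp, out):
        """Close the cross-point (inp, out) if the output is free."""
        if out in self.output_to_input:
            return False                   # output already driven: would conflict
        self.output_to_input[out] = inp
        return True

    def disconnect(self, out):
        self.output_to_input.pop(out, None)

xbar = Crossbar(num_inputs=4, num_outputs=4)   # 4 x 4 = 16 cross-points
print(xbar.connect(0, 2))   # True  - input 0 now drives output 2
print(xbar.connect(1, 2))   # False - output 2 is already in use
```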

VisualSim Architect helps improve throughput and latency by allowing the designer to make changes to the routing. So while designing the core of a processor, I had multiple options for increasing the throughput of the system, such as:
• Using high-level commands to write good programs.
• Using efficient routing schemes.
• Crossbar switching between the ports.
• Using snoop commands.
• Introducing a delay to model the time the router takes to access its routing table.
• Providing a memory controller and a cache in the hardware model (see the sketch after this list).
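As a rough illustration of why a cache in front of the memory controller raises throughput, the toy model below charges a small latency for hits and a large one for misses. The cache size and latencies are invented, and this is not VisualSim's own memory model, just a sketch of the idea.

```python
# Toy direct-mapped cache in front of a memory controller: hits are served
# quickly, misses pay the full memory latency. All numbers are invented.
CACHE_LINES = 64
HIT_LATENCY = 2          # cycles, hypothetical
MISS_LATENCY = 100       # cycles, hypothetical

cache = [None] * CACHE_LINES   # each entry holds the tag of the cached address

def access(address):
    """Return the latency in cycles for one memory access."""
    index = address % CACHE_LINES
    tag = address // CACHE_LINES
    if cache[index] == tag:
        return HIT_LATENCY
    cache[index] = tag           # fill the line on a miss
    return MISS_LATENCY

# Repeatedly touching a small working set: the first pass misses, later passes hit.
total = sum(access(addr) for _ in range(10) for addr in range(32))
print(total, "cycles for 320 accesses")
```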
Designing processors in VisualSim Architect gives architects a range of options for trading off throughput against latency, and for varying the model design according to the application.
#Throughput #VisualSimAR #Simulation #Processors #Logic