Cache

Stochastic and cycle-accurate cache with snoop

Quick Explanation

  • Supports stochastic and cycle-accurate cache models
  • Supports request queuing
  • Supports cache access latency
  • Supports cache hit-miss evaluation
  • Supports cache prefetch
  • Supports address-range mapping
  • Supports associativity, write-back, write-through and aging

Protocol

  • Supports L1, L2, L3
  • Snooping standards from ARM, Xilinx and other consortia

Cache

Cache is a small, fast memory that keeps copies of data from a larger, slower memory; the trade-off between memory speed and size is universal. Without a cache, every time the CPU requests data, the request goes to main memory and the response travels back across the system bus, which is a slow process. A cache stores frequently accessed data and, where possible, the data around it, so that the CPU sees the quickest possible response time. One of the key goals in system design is to ensure the processor is not slowed down by the storage devices it works with: slowdowns waste processor cycles, leaving the CPU idle while it waits for the information it needs.
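The cycle-saving effect described above can be sketched in a few lines. The latencies and class below are illustrative assumptions, not part of VisualSim:

```python
# Minimal sketch: a dict-backed cache in front of a slow main memory,
# counting cycles. Latency values are assumed for illustration.

MAIN_MEMORY_LATENCY = 100   # cycles per main-memory access (assumed)
CACHE_LATENCY = 1           # cycles per cache access (assumed)

class SimpleCache:
    def __init__(self):
        self.store = {}
        self.cycles = 0

    def read(self, address):
        self.cycles += CACHE_LATENCY
        if address in self.store:            # hit: served from the cache
            return self.store[address]
        self.cycles += MAIN_MEMORY_LATENCY   # miss: fetch from main memory
        self.store[address] = f"data@{address}"
        return self.store[address]

cache = SimpleCache()
cache.read(0x40)     # cold miss: 101 cycles
cache.read(0x40)     # hit: 1 cycle
print(cache.cycles)  # 102
```

The second read costs 1 cycle instead of 101, which is the entire motivation for placing the cache between the CPU and main memory.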

Overview

VisualSim models can incorporate cache blocks in the definition of the hardware architecture. The cache block is used to model the L1 (I- and D-cache), L2 and L3 caches. VisualSim has two cache abstraction blocks: stochastic cache and cycle-accurate cache. In addition, the processor block has functionality to describe the caches within the core. Caches can be linked to create a hierarchy and can support the TLB, TBU/TCU and System Memory Management Unit. The differences between the three types of blocks lie in the level of detail of the cache input, addressing, miss generation, number of prefetch lines, associativity and cache algorithm such as the MESI protocol. The stochastic block counts the number of accesses and generates a miss at the end of a cache line; the hit-miss parameter can also generate a miss for any cache access. The cycle-accurate cache maintains the content at each address position: requests access a specific address, and writes trigger the replacement policies.
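The stochastic abstraction described above can be sketched as follows. The block stores no data; it only counts accesses, flags a deterministic miss at the end of each cache line, and otherwise lets a hit probability decide (standing in for the hit-miss parameter). Class and parameter names here are illustrative, not the VisualSim block's:

```python
# Sketch of a stochastic cache model: access counting plus an
# end-of-line miss, with a probabilistic hit decision in between.
import random

class StochasticCache:
    def __init__(self, words_per_line, hit_ratio):
        self.words_per_line = words_per_line
        self.hit_ratio = hit_ratio       # stands in for the hit-miss parameter
        self.access_count = 0

    def access(self):
        self.access_count += 1
        # Deterministic miss at the end of each cache line
        if self.access_count % self.words_per_line == 0:
            return "miss"
        # Otherwise the hit expression/probability decides
        return "hit" if random.random() < self.hit_ratio else "miss"

random.seed(0)
c = StochasticCache(words_per_line=8, hit_ratio=0.95)
results = [c.access() for _ in range(16)]
print(results.count("miss"))
```

A cycle-accurate model, by contrast, would track the tag and contents of every line so that each individual address lookup resolves to a real hit or miss.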

Stochastic Cache Parameters:

  • Cache_Speed_Mhz: Speed of the cache in MHz
  • Cache_Size_KBytes: Size of the cache in KBytes
  • Words_per_Cache_Line: Number of words per cache line
  • FIFO_Buffers: Number of outstanding requests that can be queued for processing
  • Cache_Hit_Expression: Expression that determines whether an access is a hit
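Two of these parameters can be illustrated directly: the cache speed fixes the per-access latency, and FIFO_Buffers bounds the number of outstanding requests. The numbers below are illustrative assumptions, not defaults:

```python
# Sketch: deriving access latency from the cache speed, and limiting
# outstanding requests to the FIFO buffer depth. Values are assumed.
from collections import deque

CACHE_SPEED_MHZ = 800                    # assumed Cache_Speed_Mhz value
FIFO_BUFFERS = 4                         # assumed FIFO_Buffers value
latency_ns = 1_000.0 / CACHE_SPEED_MHZ   # one cache cycle in nanoseconds

queue = deque()
rejected = 0
for req in range(6):                     # 6 requests arrive, 4 buffer slots
    if len(queue) < FIFO_BUFFERS:
        queue.append(req)                # accepted as an outstanding request
    else:
        rejected += 1                    # buffer full: request stalls upstream
print(len(queue), rejected, latency_ns)  # 4 2 1.25
```

In the real block, rejected requests would back-pressure the requester rather than be dropped; the point is only that FIFO_Buffers caps concurrency.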

Cycle-Accurate Cache Parameters:

  • Cache_Size_KB: Size of the cache in KBytes
  • Cache_Speed_Mhz: Speed of the cache in MHz
  • Cache_Bytes_per_Word: Number of bytes per word
  • Bus_Width: Width of the bus connecting the cache
  • Cache_Line: Size of a cache line
  • Associativity: Direct-mapped, 2-, 4-, 8-, 16- or 32-way
  • Replacement Policy: Least Recently Used (LRU), Most Recently Used (MRU)
  • Write Policy: Write-Back, Write-Through
  • Prefetch_Lines: Number of lines to prefetch for each request
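How size, line size and associativity combine, and how LRU replacement behaves, can be sketched with a minimal set-associative model. This is an illustrative sketch, not the VisualSim block itself:

```python
# Sketch of a set-associative cache with LRU replacement. Each set is
# an OrderedDict whose insertion order tracks recency of use.
from collections import OrderedDict

class SetAssocCache:
    def __init__(self, size_kb, line_bytes, associativity):
        self.assoc = associativity
        self.line_bytes = line_bytes
        self.num_sets = (size_kb * 1024) // (line_bytes * associativity)
        self.sets = [OrderedDict() for _ in range(self.num_sets)]

    def access(self, address):
        line_addr = address // self.line_bytes
        index = line_addr % self.num_sets
        tag = line_addr // self.num_sets
        s = self.sets[index]
        if tag in s:
            s.move_to_end(tag)           # refresh this line's LRU position
            return "hit"
        if len(s) >= self.assoc:
            s.popitem(last=False)        # evict the least recently used line
        s[tag] = True                    # install the new line
        return "miss"

# 32 KB, 64-byte lines, 4-way: 32768 / (64 * 4) = 128 sets
c = SetAssocCache(size_kb=32, line_bytes=64, associativity=4)
print(c.access(0x1000))   # miss (cold)
print(c.access(0x1000))   # hit (same line)
print(c.access(0x1004))   # hit (same 64-byte line)
```

Swapping `popitem(last=False)` for `popitem(last=True)` would turn the same structure into the MRU policy listed above.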

The cache is accessed only when there is a miss in the internal memory.
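The fall-through behavior across the hierarchy can be sketched as a simple chain: a request moves to the next, larger level only on a miss at the current one. Level names and contents below are illustrative:

```python
# Sketch of hierarchical lookup: internal memory, then L1, then L2,
# then main memory. A request falls through only on a miss.
def lookup(address, levels):
    """levels: list of (name, set of cached line addresses), innermost first."""
    for name, contents in levels:
        if address in contents:
            return name                  # served at this level
        # miss at this level: fall through to the next one
    return "main_memory"

hierarchy = [
    ("internal", {0x10}),
    ("L1", {0x10, 0x20}),
    ("L2", {0x10, 0x20, 0x30}),
]
print(lookup(0x20, hierarchy))   # "L1"
print(lookup(0x99, hierarchy))   # "main_memory"
```

In the full model each level would also add its access latency and allocate the line on the way back up, but the control flow is the same.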