A method comprising: a monitor agent monitoring access by a producer agent or a consumer agent to one or more memories; and the monitor agent causing data to be moved, at a time determined by the monitor agent and not as part of a request from the consumer agent, from a first memory to a second memory that is accessible by the consumer agent with less latency than the first memory.
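The monitor-agent behavior above can be sketched as follows. This is a minimal illustrative model, not the claimed implementation: the two-tier dict memories, the `MonitorAgent` name, and the access-count threshold that triggers the move are all assumptions.

```python
class MonitorAgent:
    """Watches producer/consumer accesses and, on its own schedule,
    moves data from a slow memory tier to a faster one."""

    def __init__(self, slow, fast, threshold=2):
        self.slow = slow            # first memory (higher latency)
        self.fast = fast            # second memory (lower latency)
        self.threshold = threshold  # hypothetical move policy
        self.hits = {}              # per-key access counts observed

    def observe(self, key):
        """Record an access by the producer or consumer agent."""
        self.hits[key] = self.hits.get(key, 0) + 1
        # The agent decides when to move data -- not the consumer.
        if self.hits[key] >= self.threshold and key in self.slow:
            self.fast[key] = self.slow[key]  # copy to low-latency memory

    def read(self, key):
        """Consumer read: served from the fast memory when available."""
        if key in self.fast:
            return self.fast[key], "fast"
        return self.slow[key], "slow"
```

Note the key property: `read` never asks for a move; the transfer happens inside `observe`, at a time the agent alone determines.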
An intelligent cache memory system ... performs prefetches based on the data fetching characteristics of the CPU. The system includes cache control logic and a first and a second cache memory ... used to track the data fetching history of the CPU. ... On a read cycle, the cache control logic returns the data being fetched by the CPU from either the first or the second cache memory or the main memory ... the cache control logic initiates a prefetch and updates the data fetching history ... Prefetch is conditioned on the data fetching history, while the data fetching history update is conditioned on where the data requested by the CPU are fetched.
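The two conditions in that last sentence can be made concrete with a small sketch. Here the "history" is reduced to a single confirmed-stride flag; that simplification, and every name below, is an assumption for illustration only.

```python
class HistoryPrefetcher:
    """Sketch: prefetch is conditioned on the fetching history;
    the history update is conditioned on where data were fetched."""

    def __init__(self):
        self.cache = {}
        self.last_addr = None
        self.stride_confirmed = False  # minimal "data fetching history"

    def read(self, addr, memory):
        hit = addr in self.cache
        data = self.cache[addr] if hit else memory[addr]
        if not hit:
            self.cache[addr] = data
            # History update only on a miss (i.e., conditioned on
            # where the requested data were actually fetched from).
            if self.last_addr is not None:
                self.stride_confirmed = (addr == self.last_addr + 1)
        self.last_addr = addr
        # Prefetch only when the history supports it.
        if self.stride_confirmed:
            nxt = addr + 1
            if nxt in memory and nxt not in self.cache:
                self.cache[nxt] = memory[nxt]
        return data
```

After two consecutive misses at sequential addresses the stride is confirmed and the next line is fetched ahead of the CPU's request.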
Prefetches to a cache memory subsystem are made from predictions which are based on access patterns stored by context. An access pattern is generated from prior accesses of a data processing system processing in a like context. During a training sequence, an actual trace of memory accesses is processed to generate unit patterns which serve in making future predictions and to identify statistics such as pattern accuracy for each unit pattern.
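A per-context trainer of this kind can be sketched as below. The delta-histogram representation of a "unit pattern" and the accuracy statistic are illustrative assumptions standing in for the patterns and statistics the abstract describes.

```python
from collections import defaultdict, Counter

class ContextPatternPredictor:
    """Sketch: learn access patterns per context from a training
    trace, then predict future accesses from the stored patterns."""

    def __init__(self):
        # Per-context histograms of address deltas ("unit patterns").
        self.patterns = defaultdict(Counter)

    def train(self, context, trace):
        """Process an actual trace of memory accesses for a context."""
        for prev, cur in zip(trace, trace[1:]):
            self.patterns[context][cur - prev] += 1

    def predict(self, context, addr):
        """Predict the next access and report a pattern-accuracy
        statistic (fraction of training deltas matching the choice)."""
        hist = self.patterns[context]
        if not hist:
            return None
        delta, count = hist.most_common(1)[0]
        accuracy = count / sum(hist.values())
        return addr + delta, accuracy
```

A process re-entering a like context reuses the pattern trained earlier, which is the essence of the scheme above.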
... provide history-based movement of shared data in coherent cache memories. A plurality of entries are stored in a consume-after-produce (CAP) table ... Each of the entries is associated with a plurality of storage elements ... caches and includes information on prior usage ... Upon a miss by a processing node to a cache included therein, any storage elements are transferred to the cache from ... memory ... another cache. An entry is created ... associated with the storage elements that caused the miss. A push prefetching engine may be used to create the entry.
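The CAP-table mechanism can be sketched in a few lines. This is a loose model under stated assumptions: the table is a plain dict keyed by the missing address, and "push prefetch" is reduced to inserting the associated lines into a target cache set.

```python
class CAPTable:
    """Sketch of a consume-after-produce table: on a miss, remember
    which storage elements moved together, so a push prefetching
    engine can push them ahead of the next similar miss."""

    def __init__(self):
        self.entries = {}  # miss address -> lines transferred with it

    def on_miss(self, miss_addr, transferred_lines):
        # Create an entry associated with the storage elements
        # that caused (and accompanied) the miss.
        self.entries[miss_addr] = list(transferred_lines)

    def push_prefetch(self, miss_addr, target_cache):
        # Push the previously associated lines into the consumer's
        # cache before it asks for them.
        for line in self.entries.get(miss_addr, []):
            target_cache.add(line)
```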
A method and apparatus are described for protecting cache lines allocated to a cache by a run-ahead prefetcher from premature eviction, preventing thrashing. The invention also prevents premature eviction of cache lines still in use, such as lines allocated by the run-ahead prefetcher but not yet referenced by normal execution. A protection bit indicates whether its associated cache line has protected status in the cache or whether it may be evicted.
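The protection-bit idea can be sketched with a toy cache. The FIFO victim order and the rule "a normal-execution reference clears the bit" are assumptions chosen for illustration; the patent's replacement policy may differ.

```python
class ProtectedCache:
    """Sketch: lines allocated by a run-ahead prefetcher carry a
    protection bit and are skipped by eviction until normal
    execution has referenced them."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.lines = {}   # addr -> (data, protected_bit)
        self.order = []   # FIFO victim order (illustrative)

    def prefetch(self, addr, data):
        """Run-ahead prefetcher allocation: line enters protected."""
        self._insert(addr, data, protected=True)

    def access(self, addr):
        """Normal-execution reference clears the protection bit."""
        if addr in self.lines:
            data, _ = self.lines[addr]
            self.lines[addr] = (data, False)
            return data
        return None

    def _insert(self, addr, data, protected):
        if len(self.lines) >= self.capacity:
            # Evict the oldest *unprotected* line; if every line is
            # still protected, this sketch simply overflows.
            for victim in self.order:
                if not self.lines[victim][1]:
                    self.order.remove(victim)
                    del self.lines[victim]
                    break
        self.lines[addr] = (data, protected)
        self.order.append(addr)
```

The protected line survives eviction pressure until it has served its first demand reference, which is exactly the thrashing scenario the text describes preventing.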
A processor executes one or more prefetch threads and one or more main computing threads. Each prefetch thread executes instructions ahead of a main computing thread to retrieve data for the main computing thread, such as data that the main computing thread may use in the immediate future. Data is retrieved for the prefetch thread and stored in a memory, ...
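A software analogue of the prefetch-thread idea is easy to sketch with Python threads. The `staging` dict standing in for the intermediate memory, and joining the helper before the main pass (real hardware overlaps them), are illustrative simplifications.

```python
import threading

def prefetch_thread(addrs, memory, staging):
    """Runs ahead of the main computing thread, retrieving data the
    main thread may use in the immediate future."""
    for a in addrs:
        staging[a] = memory[a]   # simulate the long-latency fetch

def main_thread(addrs, memory, staging):
    """Main computing thread: consumes from staging when the
    prefetch thread has already retrieved the data."""
    out = []
    for a in addrs:
        out.append(staging.get(a, memory[a]))
    return out
```

Usage: start `prefetch_thread` over the upcoming address stream, then run `main_thread` over the same stream; hits in `staging` model demand accesses satisfied by the prefetch thread's earlier work.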
A compiler apparatus for a computer system that is capable of improving the hit rate of a cache memory comprises a prefetch target extraction device, a thread activation process insertion device, and a thread process creation device, and creates threads for performing prefetch and prepurge. Prefetch and prepurge threads created by this compiler apparatus perform prefetch and prepurge in parallel with the operation of the main program, taking into consideration program priorities and the usage ratio of the cache memory.
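The planning side of such a compiler pass can be sketched as a function that emits prefetch and prepurge work items. The `(addr, dead_addr)` target pairs, the usage-ratio gate, and the high-water threshold are all hypothetical names introduced here for illustration.

```python
def plan_prefetch_prepurge(targets, cache_usage_ratio, high_water=0.8):
    """Sketch of a compiler planning step: schedule a prefetch for
    each extracted target, and a prepurge of a dead line when the
    cache usage ratio is high enough to warrant freeing space."""
    plan = []
    for addr, dead_addr in targets:
        plan.append(("prefetch", addr))
        if cache_usage_ratio > high_water and dead_addr is not None:
            plan.append(("prepurge", dead_addr))
    return plan
```

The emitted plan would then be executed by helper threads running in parallel with the main program, as the abstract describes.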
... a prefetch unit incorporates therein a circuit for issuing a request to read out one group of data to be prefetched, and registers for holding the group of data read ... The group of data are read out from a cache memory or a main memory under the control of a cache request unit. ...
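The request-circuit-plus-registers structure can be sketched like this. The callable standing in for the cache request unit and the dict standing in for the holding registers are modeling assumptions, not the hardware design.

```python
class PrefetchUnit:
    """Sketch: issues a single request for one group of data and
    holds the returned group in registers until consumed."""

    def __init__(self, cache_request_unit):
        # cache_request_unit(base, count) models the unit that reads
        # the group from a cache memory or the main memory.
        self.request = cache_request_unit
        self.registers = {}   # holds the prefetched group of data

    def prefetch_group(self, base, count):
        """One request reads out the whole group to be prefetched."""
        group = self.request(base, count)
        self.registers = dict(zip(range(base, base + count), group))

    def read(self, addr):
        """Serve a later demand access from the holding registers."""
        return self.registers.get(addr)
```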