Q: What does the term 'cache hit ratio' mean?

Best Answer

The cache hit ratio is the fraction of data requests that are satisfied from the cache rather than from the slower backing storage (main memory, disk, etc.): hits divided by total accesses (hits plus misses). A higher ratio means more requests are served from the cache and less time is spent waiting on slow storage.
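To make the ratio concrete, here is a tiny Python calculation with invented counts (not part of the original answer):

    # Hypothetical counters: 970 requests served from the cache (hits) and
    # 30 that had to go to the slower backing storage (misses).
    hits, misses = 970, 30

    hit_ratio = hits / (hits + misses)          # fraction of accesses served from the cache
    print(f"cache hit ratio: {hit_ratio:.1%}")  # -> cache hit ratio: 97.0%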

Wiki User ∙ 2011-11-23 09:19:53

Related questions

What is the difference between cache vs cold cache vs hot cache vs warm cache vs cache hit vs cache miss?

Firstly, it sounds like you are asking for general definitions rather than differential ones, which is tricky because these terms are context specific.

- Cache miss: the requested data is not in the cache and must be loaded from the original source.
- Cache hit: the requested data was served from the cache (with no implication of what "type" of cache was hit).
- Cold cache: the slowest cache hit possible. The actual loading mechanism depends on the type of cache (for a CPU cache this could mean an L2 or L3 hit, for a disk cache a RAM hit on the drive, for a web cache a drive hit in the caching device).
- Hot cache: the fastest cache hit possible. Again this depends on the mechanism (for a CPU it could be an L1 hit, for a disk an OS cache hit, for a web cache a RAM hit in the caching device).
- Warm cache: anything in between, such as L2 when L1 is hot and L3 is cold. It is a less precise term and is often used to imply "hot" when the performance is actually closer to "cold."


What is the data found in the cache called?

cache hit


What is cache hit and cache miss?

In cache memory, when the CPU refers to memory and finds the word in the cache, it is said to be a hit. If the word is not found in the cache and has to be fetched from main memory instead, it counts as a miss.


How is the power of processors measured?

The power of processors (computers) is generally measured in MIPS, or millions of instructions per second. However, this is a subjective measure because it depends on the instruction mix, the cache hit/miss ratio, processor versus cache versus memory speed, and various other factors. Benchmark tests for processors are complex, diverse, and relatively standardized so that useful comparisons can be made from them.


What is a cache hit?

In memory terms, it means the memory location you tried to access was in the cache. Hitting the cache is very fast, but due to hardware limitations the cache itself is very small. When a program needs to fetch memory it looks in the cache first. Cache management is designed differently depending on the operating system, but in most cases the memory locations that are used most often are kept in the cache. The same idea applies to a disk cache: instead of moving the disk drive head to find a file, it is faster if the file is already in the cache, so when the file is needed the system looks in the cache first, and if it is there, that is a cache hit.


What is the difference between a cache hit and a cache miss?

A cache is a high-speed memory that acts as a buffer to reduce the speed mismatch between the CPU and main memory. Cache hit: whenever the CPU requests data, it first checks the cache; if the data is present, it is taken from the cache itself, and this is referred to as a cache hit. Cache miss: when the data is not found in the cache, it is taken from main memory and a copy of it is also kept in the cache for further use; this is known as a cache miss. - Anand Bhat
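A minimal Python sketch of the behaviour described above (not part of the original answer); the dict and the read_from_main_memory helper are invented stand-ins for the real cache and the slower backing memory:

    cache = {}

    def read_from_main_memory(address):
        return f"data@{address}"      # placeholder for the slow memory access

    def read(address):
        if address in cache:          # cache hit: serve directly from the cache
            return cache[address]
        value = read_from_main_memory(address)   # cache miss: go to main memory
        cache[address] = value        # keep a copy for further use
        return value

    read(0x10)   # miss: fetched from "main memory" and copied into the cache
    read(0x10)   # hit: served from the cache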


What is the size of L1 and L2 cache?

Usually the L2 cache is larger than the L1 cache, so that when a miss occurs in the L1 cache the processor can still look for the data in the L2 cache. If the data is in neither cache, the processor goes to main memory.
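As a rough illustration of that lookup order (a sketch under assumed names, not taken from the answer), in Python:

    l1_cache, l2_cache = {}, {}

    def load_from_main_memory(address):
        return f"data@{address}"      # made-up stand-in for a main-memory access

    def read(address):
        if address in l1_cache:               # fastest case: L1 hit
            return l1_cache[address]
        if address in l2_cache:               # L1 miss, L2 hit
            value = l2_cache[address]
        else:                                 # miss in both caches: go to main memory
            value = load_from_main_memory(address)
            l2_cache[address] = value
        l1_cache[address] = value             # keep it in L1 for next time
        return value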


What is the objective of cache only memory architecture?

Cache memory is high-speed memory holding data that is repeatedly requested by the cache client (the CPU). Whenever the data requested by the CPU is present in the cache, the cache supplies it directly; this is known as a cache hit (fast). When the data is not available in the cache, the cache fetches the containing block from main memory and feeds it to the CPU; this is termed a cache miss (slow).


Why is it possible to achieve a high hit rate with a relatively small amount of cache?

Well, cache hit rate isn't determined only by the size of the cache. If the cache is used inefficiently, or if the processor is clocked too far out of stability, the hit rate can decrease. The inverse also holds: if the cache is working near perfectly and the processor is clocked properly, the hit rate will be reasonably high. High miss rates are most often caused by having too little cache, but the factors mentioned above have an impact too. A small cache isn't bad, provided it is big enough for the workload. Processors aim for a hit rate of at least 97% on simple instruction streams; misses beyond that have an increasingly heavy impact on performance. Three misses out of 100 is already significant when you consider how many billions of cycles per second a processor goes through.
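A back-of-the-envelope Python calculation (the cycle costs are invented for illustration) shows why a few percent of misses matter so much:

    # Assumed costs: a hit takes 1 cycle, a miss costs 100 cycles.
    # Average memory access time = hit_time + miss_rate * miss_penalty.
    hit_time, miss_penalty = 1, 100

    for hit_rate in (0.99, 0.97):
        amat = hit_time + (1 - hit_rate) * miss_penalty
        print(f"hit rate {hit_rate:.0%}: ~{amat:.1f} cycles per access on average")

    # hit rate 99%: ~2.0 cycles per access on average
    # hit rate 97%: ~4.0 cycles per access on average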


What is the relationship between a computer's CPU speed, CPU cache and its main bus speed with regard to the computer's performance?

For bulk memory-to-memory copy, the main bus bandwidth will dominate, and the other components must be able to keep up (to saturate the bus and the storage components behind it). For single-byte random reads, latency will dominate. We want read requests put on the main bus to be satisfied with low latency, but inevitably several CPU cycles will pass (stalls will happen) before the request is satisfied. A larger cache may reduce the number of such requests, but the cache hit ratio is very sensitive to the application workload, and in the worst case we see a low cache hit ratio plus extra latency while still paying the expense of a large cache.


How does cache memory reduce the memory access time?

Cache resides between the CPU and the main memory. Whenever the CPU requires some data, it first looks for it in the cache. If the data is available in the cache (called a hit), it is transferred to the CPU. If the data is not available in the cache (called a miss), it has to be fetched into the cache from main memory. So by keeping frequently used data available in the cache, we reduce the average time needed to fetch data from memory.


Is cache memory a removable memory?

No. Cache memory is used to store data that has been needed recently, on the grounds that it will be faster to access when and if it is needed again. When requested data is found in the cache you have a cache hit; when it has to be retrieved from the hard drive (or wherever its original storage was) again, that is a cache miss. Retrieving data from the hard drive is much slower than retrieving it from the cache.
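To illustrate the "keep recently needed data close by" idea in software (an example added here, not part of the original answer), Python's standard functools.lru_cache decorator does exactly this for function results; read_record is a made-up stand-in for a slow lookup:

    from functools import lru_cache

    @lru_cache(maxsize=128)           # remember the 128 most recently used results
    def read_record(record_id):
        # Stand-in for a slow read from disk or other original storage.
        return f"record-{record_id}"

    read_record(7)            # first call: a miss, the slow path runs
    read_record(7)            # repeat call: a hit, served from the in-memory cache
    print(read_record.cache_info())   # CacheInfo(hits=1, misses=1, maxsize=128, currsize=1)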
