History of the Computer – Cache Memory, Part 1 of 2
We examined early digital computer memory in an earlier article in this series (see History of the Computer – Core Memory) and noted that the current standard is the RAM (random access memory) chip. This is in line with Moore's often-cited law (Gordon Moore was one of Intel's founders), which states that the component density of integrated circuits, roughly speaking performance per unit cost, doubles every 18 months. Early core memory had cycle times measured in microseconds; today we talk in nanoseconds.
The term "cache" is familiar to personal computer users. It is one of the performance features mentioned when discussing the latest processor or hard drive: the processor may have L1 and L2 caches, and disks come with caches of various sizes. Some programs also use a cache, also called a buffer, for example when writing data to a CD writer. Early CD-writing programs suffered failures when the buffer ran empty, so good buffering support made a real difference.
Mainframe systems had used cache memory for many years. The concept became popular in the 1970s as a way to speed up memory access time. This was the era when magnetic core memory was gradually being replaced by integrated circuits, or chips. Although chips were far more efficient in terms of physical space, they brought their own problems of reliability and heat generation. Chips of one design were faster, hotter and more expensive than chips of another design, which were cheaper but slower. Speed has always been one of the most important factors in computer sales, and design engineers have always looked for ways to improve performance.
The concept of cache memory is based on the fact that a computer is, by nature, a sequential processing machine. Of course, one of the great benefits of a computer program is that it can "branch" or "jump" out of sequence, which is the subject of another article in this series. Even so, enough instructions follow one another in order to make a buffer, or cache, a useful addition to the computer.
The idea of caching is to predict what data the CPU will need and to read it from memory ahead of time. Take a program consisting of a series of instructions, each stored in a memory location, say from address 100 upwards. The instruction at location 100 is read from memory and executed by the CPU; then the instruction at location 101 is read and executed, followed by those at 102, 103, and so on.
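The sequential-fetch pattern described above can be sketched in a toy model. This is only an illustration of the principle, not a real cache design: the block size of four instructions and the addresses starting at 100 are assumptions, the latter taken from the example in the text.

```python
# Toy model of sequential instruction fetch with a small prefetch cache.
# Block size and addresses are illustrative, not from any real machine.

BLOCK_SIZE = 4  # instructions fetched from main memory per cache fill

def run(addresses):
    """Count cache hits and misses for a stream of instruction addresses."""
    cache = set()
    hits = misses = 0
    for addr in addresses:
        if addr in cache:
            hits += 1
        else:
            misses += 1
            # On a miss, fetch a whole block starting at the missed address,
            # betting that execution will continue sequentially.
            cache = {addr + i for i in range(BLOCK_SIZE)}
    return hits, misses

# A purely sequential program of 16 instructions starting at location 100,
# as in the text: only one slow memory access per block of four.
hits, misses = run(range(100, 116))
print(hits, misses)  # 12 hits, 4 misses
```

Because the program runs straight through memory, three out of every four fetches are satisfied from the fast cache; only the first instruction of each block pays the cost of a main-memory read.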
If the memory in question is core memory, reading an instruction takes 1 microsecond. If the processor takes, say, 100 nanoseconds to execute the instruction, it then has to wait 900 nanoseconds for the next instruction (1 microsecond = 1000 nanoseconds). The effective instruction rate of the CPU is therefore one instruction per microsecond. (The times and speeds quoted are typical but do not refer to any particular hardware; they simply illustrate the principle.)
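The arithmetic above can be worked through explicitly. All the figures here are the article's illustrative numbers, not measurements of real hardware, and the assumption that the next memory read overlaps with execution is mine.

```python
# Worked version of the timing arithmetic from the article.
# Figures are illustrative, not from any specific machine.

MEMORY_READ_NS = 1000   # core memory cycle time: 1 microsecond per read
EXECUTE_NS = 100        # time the CPU needs to execute one instruction

# Assuming the next read starts while the CPU executes, memory is the
# bottleneck and the CPU sits idle for the difference.
idle_ns = MEMORY_READ_NS - EXECUTE_NS          # 900 ns of waiting
cycle_ns = max(MEMORY_READ_NS, EXECUTE_NS)     # 1000 ns per instruction
rate_mips = 1e9 / cycle_ns / 1e6               # millions of instructions/sec

print(idle_ns, cycle_ns, rate_mips)  # 900 1000 1.0
```

The CPU spends 90% of its time waiting: it could in principle run ten times faster if instructions arrived as quickly as it can execute them, which is exactly the gap a cache is meant to close.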