Memory Guide.




SIMM module identification

SIMMs, just like the DRAM chips that comprise them, are specified in terms of depth and width, which indicate the SIMM's capacity and whether or not it supports parity. Here are some examples of popular 30- and 72-pin SIMMs. Note that the parity SIMMs are distinguished by the "x 9" or "x 36" format specifications. This is because parity memory adds a parity bit to every 8 bits of data. So, a 30-pin SIMM provides 8 data bits per cycle, plus a parity bit, which equals 9 bits; 72-pin SIMMs provide 32 bits per cycle, plus 4 parity bits, which equals 36 bits.
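As a rough illustration of how a parity bit is derived, the small C sketch below computes the even-parity bit for a single byte. The even-parity convention and the names used are illustrative assumptions; real parity generation is done in hardware, not software.

    #include <stdio.h>
    #include <stdint.h>

    /* Illustrative only: compute an even-parity bit for one 8-bit value.
       A parity SIMM stores one such bit alongside every 8 data bits,
       which is why a 30-pin parity SIMM is "x 9" rather than "x 8". */
    static unsigned even_parity(uint8_t data)
    {
        unsigned ones = 0;
        for (int bit = 0; bit < 8; bit++)
            ones += (data >> bit) & 1u;
        return ones & 1u;          /* 1 if the count of 1-bits is odd */
    }

    int main(void)
    {
        uint8_t byte = 0xB6;       /* 1011 0110 -> five 1-bits */
        printf("data = 0x%02X, parity bit = %u\n", byte, even_parity(byte));
        return 0;
    }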

 

Refresh

A memory module is made up of electrical cells. The refresh process recharges these cells, which are arranged on the chip in rows. The refresh rate refers to the number of rows that must be refreshed.

Two common refresh rates are 2K and 4K. The 2K components are capable of refreshing more cells at a time and they complete the process faster; therefore, 2K components use more power than 4K components.
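To make the difference concrete, the following C sketch works out how often a row must be refreshed for 2K (2,048-row) and 4K (4,096-row) components. The 64-millisecond refresh period is an assumed, typical figure rather than one quoted in this guide.

    #include <stdio.h>

    /* Illustrative sketch: rows refreshed per period for 2K vs 4K refresh.
       The 64 ms refresh period is an assumed, typical figure. */
    int main(void)
    {
        const double period_ms = 64.0;          /* assumed refresh period */
        const int rates[] = { 2048, 4096 };     /* "2K" and "4K" refresh  */

        for (int i = 0; i < 2; i++) {
            double interval_us = period_ms * 1000.0 / rates[i];
            printf("%dK refresh: %d rows, one row every %.1f microseconds\n",
                   rates[i] / 1024, rates[i], interval_us);
        }
        return 0;
    }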

Other specially-designed DRAM components feature self refresh technology, which enables the components to refresh on their own -- independent of the CPU or external refresh circuitry. Self refresh technology, which is built into the DRAM chip itself, reduces power consumption dramatically. It is commonly used in notebook and laptop computers.

 

3.3-volt versus 5-volt

Computer memory components operate at either 3.3 volts or 5 volts. Until recently, 5 volts was the industry standard. Making integrated circuits, or ICs, faster requires a reduced cell geometry, that is, a reduction in the size of the basic `building blocks.' As components become smaller and smaller, the cell size and memory circuitry also become smaller and more sensitive. As a result, these components cannot withstand the stress of operating at 5 volts. Also, 3.3-volt components can operate faster and use less power.
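A rough rule of thumb helps quantify the power savings: the dynamic (switching) power of a CMOS circuit scales with the square of its supply voltage. The short C sketch below applies that approximation; it is a general estimate, not a figure taken from any specific memory part.

    #include <stdio.h>

    /* Rule-of-thumb sketch: dynamic power scales roughly with V squared
       (P ~ C * V^2 * f), so dropping from 5 V to 3.3 V cuts switching
       power by more than half at the same clock rate. */
    int main(void)
    {
        double v_old = 5.0, v_new = 3.3;
        double ratio = (v_new * v_new) / (v_old * v_old);
        printf("A 3.3 V part draws about %.0f%% of the switching power "
               "of a 5 V part (a %.0f%% reduction)\n",
               ratio * 100.0, (1.0 - ratio) * 100.0);
        return 0;
    }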

 

Composite versus noncomposite modules

The terms composite and noncomposite refer to the number of chips used on a given module. The term noncomposite describes memory modules that use fewer chips. For a module to work with fewer chips, those chips must be higher in density to provide the same total capacity.
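As a worked example, the C sketch below shows how the same 16 MB (128-megabit) module can be built either noncomposite, from a few high-density chips, or composite, from many lower-density chips. The chip densities used here are assumed, illustrative values.

    #include <stdio.h>

    /* Illustrative arithmetic: chip count = module capacity / chip density.
       The densities below are example values, not figures from this guide. */
    int main(void)
    {
        const int module_mbit = 16 * 8;             /* 16 MB = 128 Mbit */
        const int densities_mbit[] = { 16, 4 };     /* per-chip density  */
        const char *kind[] = { "noncomposite", "composite" };

        for (int i = 0; i < 2; i++)
            printf("%-12s: %d chips of %d Mbit each\n",
                   kind[i], module_mbit / densities_mbit[i], densities_mbit[i]);
        return 0;
    }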

 

EDO Memory

Extended Data Output, or EDO memory, is one of a series of recent innovations in DRAM chip technology. On computer systems designed to support it, EDO memory allows a CPU to access memory 10 to 15 percent faster than comparable fast-page mode chips.

 

Synchronous DRAM

Synchronous DRAM (SDRAM) uses a clock to synchronize signal input and output on a memory chip. The clock is coordinated with the CPU clock so the timing of the memory chips and the timing of the CPU are in `synch.' Synchronous DRAM saves time in executing commands and transmitting data, thereby increasing the overall performance of the computer. SDRAM memory allows the CPU to access memory approximately 25 percent faster than EDO memory.

 

DDR or SDRAM II

Double data rate (DDR) SDRAM is a faster version of SDRAM that reads data on both the rising and the falling edges of the system clock, thus doubling the data rate of the memory chip. In music, this would be similar to playing a note on both the upbeat and the downbeat.
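The arithmetic behind that doubling is simple. The C sketch below assumes an example 64-bit-wide module running at a 100 MHz clock; both the bus width and the clock rate are illustrative assumptions, not specifications from this guide.

    #include <stdio.h>

    /* Illustrative sketch: peak transfer rate of a 64-bit module at a
       100 MHz clock, with one transfer per cycle (SDRAM) versus two
       per cycle (DDR). Assumed example values throughout. */
    int main(void)
    {
        const double clock_mhz = 100.0;   /* assumed module clock    */
        const int bus_bytes = 8;          /* assumed 64-bit data bus */

        double sdr_mb_s = clock_mhz * bus_bytes;        /* one transfer per cycle  */
        double ddr_mb_s = clock_mhz * bus_bytes * 2;    /* two transfers per cycle */
        printf("SDRAM: %.0f MB/s peak\n", sdr_mb_s);
        printf("DDR  : %.0f MB/s peak (both clock edges)\n", ddr_mb_s);
        return 0;
    }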

 

RDRAM® (Rambus™ DRAM)

RDRAM is a unique design developed by a company called Rambus, Inc. RDRAM is extremely fast and uses a narrow, high-bandwidth "channel" to transmit data at speeds about ten times faster than standard DRAM. Rambus technology is expected to be used as main PC memory starting in about 1999.

 

SLDRAM (Synclink DRAM)

SLDRAM is the major competing technology to RDRAM. Backed by a consortium of chip manufacturers, Synclink extends the Synchronous DRAM four-bank architecture to 16 banks and incorporates a new system interface and control logic to increase performance.

 

Cache memory

Cache memory is a special high-speed memory designed to accelerate the processing of memory instructions by the CPU. The CPU can access instructions and data located in cache memory much faster than instructions and data in main memory. For example, on a typical 100-megahertz system board, it takes the CPU as much as 180 nanoseconds to obtain information from main memory, compared to just 45 nanoseconds from cache memory. Therefore, the more instructions and data the CPU can access directly from cache memory, the faster the computer can run.
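Those figures can be combined into an average access time once a cache hit rate is assumed. The C sketch below uses the 45- and 180-nanosecond numbers quoted here together with assumed example hit rates to show how average access time drops as more requests are satisfied from cache.

    #include <stdio.h>

    /* Sketch using the access times quoted above (45 ns cache, 180 ns
       main memory). The hit rates are assumed example values. */
    int main(void)
    {
        const double t_cache_ns = 45.0, t_main_ns = 180.0;
        const double hit_rates[] = { 0.0, 0.5, 0.9 };

        for (int i = 0; i < 3; i++) {
            double h = hit_rates[i];
            double avg = h * t_cache_ns + (1.0 - h) * t_main_ns;
            printf("hit rate %.0f%% -> average access time %.1f ns\n",
                   h * 100.0, avg);
        }
        return 0;
    }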

Types of cache memory include primary cache (also known as Level 1 [L1] cache) and secondary cache (also known as Level 2 [L2] cache). Cache can also be referred to as internal or external. Internal cache is built into the computer's CPU, and external cache is located outside the CPU.

Primary cache is the cache located closest to the CPU. Usually, primary cache is internal to the CPU, and secondary cache is external. Some early-model personal computers have CPU chips that don't contain internal cache. In these cases the external cache, if present, would actually be the primary (L1) cache.

Earlier we used the analogy of a room with a work table and a set of file cabinets to understand the relationship between main memory and a computer's hard disk. If memory is like the work table that holds the files you're working on, making them easy to reach, cache memory is like a bulletin board that holds the papers you refer to most often. When you need the information on the bulletin board, you simply glance up and there it is.

You can also think of cache memory as a worker's tool belt that holds the tools and parts needed most often. In this analogy, main memory is similar to a portable tool box and the hard disk is like a large utility truck or a workshop.

The "brain" of a cache memory system is called the cache memory controller. When a cache memory controller retrieves an instruction from main memory, it also takes back the next several instructions to cache. This occurs because there is a high likelihood that the adjacent instruction will also be needed. This increases the chance that the CPU will find the instruction it needs in cache memory, thereby enabling the computer to run faster.
