EEC4133 Computer Organization & Architecture
Chapter 7: Memory
by Muhazam Mustapha, April 2014
Learning Outcome
• By the end of this chapter, students are expected to be able to understand and explain the concepts of memory, cache memory and virtual memory, and construct a given memory structure
Most of the material in this slide set is adapted from Murdocca & Heuring 2007
Chapter Content
• Memory and Memory Organization
• Cache Memory
• Virtual Memory
Memory & Memory Organization
The Memory Hierarchy
Simple RAM Cell
• SRAM (a) and DRAM (b):
Flash Memory Cell
RAM Pinout
Memory Organization
• 4-word × 4-bit memory:
Memory Organization
• Simplified:
Memory Organization
• Use two 4-word × 4-bit RAM to construct 4-word × 8-bit RAM (4-word × 1-byte):
Memory Organization
• Use two 4-word × 4-bit RAM to construct 8-word × 4-bit RAM:
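The chip-combining idea above can be sketched in code. This is a minimal illustrative model (the class and method names are my own, not from the slides): the extra MSB of the 3-bit address acts as a chip-select line choosing between the two 4-word × 4-bit chips, while the low 2 bits address a word inside the selected chip.

```python
# Sketch: build an 8-word x 4-bit memory from two 4-word x 4-bit RAM chips.
# The address MSB is used as a chip-select; the 2 LSBs go to the chosen chip.

class RAM4x4:
    """A 4-word x 4-bit RAM chip: 2 address bits, 4 data bits per word."""
    def __init__(self):
        self.words = [0] * 4                      # four 4-bit words

    def read(self, addr):
        return self.words[addr & 0b11]

    def write(self, addr, value):
        self.words[addr & 0b11] = value & 0b1111  # keep only 4 data bits

class RAM8x4:
    """8-word x 4-bit memory from two RAM4x4 chips (hypothetical wiring)."""
    def __init__(self):
        self.chips = [RAM4x4(), RAM4x4()]

    def _select(self, addr):
        chip_select = (addr >> 2) & 1             # MSB selects the chip
        return self.chips[chip_select], addr & 0b11

    def read(self, addr):
        chip, offset = self._select(addr)
        return chip.read(offset)

    def write(self, addr, value):
        chip, offset = self._select(addr)
        chip.write(offset, value)

mem = RAM8x4()
mem.write(5, 0b1010)   # address 5 = 0b101 -> chip 1, offset 1
print(mem.read(5))     # -> 10
```

A 4-word × 8-bit memory would instead wire the same 2 address bits to both chips in parallel and concatenate their 4-bit outputs into one byte.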
Dual In-Line Memory Module
• A 256 MB dual in-line memory module (DIMM) organized for a 64-bit word, with sixteen 16M × 8-bit RAM chips (eight chips on each side of the DIMM)
ROM from Decoder andOR Gate
RAM as Look Up Table ALU
Cache Memory
Improving Memory Performance
• Memory, as the main component in a computing system, oftentimes cannot meet the microprocessor's requirements:
– It is too slow to keep up with current microprocessor speeds
– It is too small to hold the entire modern code and multimedia data
• Smart workarounds are needed to solve these problems
Improving Memory Performance
• To solve the speed problem, the memory caching (cache memory) scheme was invented
• To solve the size problem, the memory virtualizing (virtual memory) scheme was invented
[Diagram: μP ↔ Cache Memory (SRAM, fast) ↔ Main Memory (DRAM) ↔ Virtual Memory (hard disk, limitless space)]
Cache Memory
• [Definition] Cache memory is a much smaller but much faster piece of memory than the main memory, used to temporarily hold the content of the main memory locations the microprocessor is accessing, so that the processor need not go directly to / from the main memory
• Content from the main memory is loaded into the cache on demand
Cache Memory
• Since cache memory is smaller, we can only keep a small portion of main memory in it
• Cache memory is subdivided into slots – 1 slot holds 1 portion (block) of main memory
• Hence there is a need to keep track of which part of main memory is being kept in which cache slot; this process is called mapping
Mapping Scheme
• [Definition] Mapping in cache memory is the scheme to link which slot in cache is holding which part in main memory
• There are 3 mapping schemes:
– Associative Mapping
– Direct Mapping
– Set Associative Mapping
Associative Mapping
• [Definition] Associative Mapping is a cache mapping scheme that allows any block in main memory to be kept in any slot in the cache
• The link to the right main memory location is resolved by tagging
• The MSB part of the address is used as the tag; the LSB part is used to address the specific word within the slot
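The tag/word split can be sketched numerically. The sizes below are assumptions for illustration (8-word blocks, so 3 LSBs select the word; every remaining MSB belongs to the tag, since any block may sit in any slot):

```python
# Sketch (assumed block size): associative-mapping address split.
# With 8-word blocks, the 3 LSBs pick the word within a slot and
# all remaining MSBs form the tag compared against every slot.
WORD_BITS = 3

def split_address(addr):
    word = addr & ((1 << WORD_BITS) - 1)  # LSBs: word within the slot
    tag = addr >> WORD_BITS               # MSBs: tag
    return tag, word

tag, word = split_address(0b10110101)     # address 181
print(tag, word)                          # -> 22 5
```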
Associative Mapping
Replacement Policy
• Since the cache is smaller, there are times when it is full and needs to flush current content and reload with the newly demanded memory
• With associative mapping we need a replacement policy to decide which slot gets flushed and replaced
Replacement Policy
• Types of replacement policies:
– Least recently used (LRU)
– First-in/first-out (FIFO)
– Least frequently used (LFU)
– Random
– Optimal (used for analysis only – look backward in time and reverse-engineer the best possible strategy for a particular sequence of memory references)
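As an illustration of the first policy in the list, here is a minimal LRU sketch for a small fully associative cache (the class is hypothetical, not from the slides); an `OrderedDict` tracks recency so the least recently used slot can be evicted on a miss:

```python
# Sketch: least-recently-used (LRU) replacement for a tiny
# fully associative cache, tracking recency with an OrderedDict.
from collections import OrderedDict

class LRUCache:
    def __init__(self, num_slots):
        self.num_slots = num_slots
        self.slots = OrderedDict()          # block tag -> cached block

    def access(self, tag):
        """Return True on a hit; on a miss, load the block,
        evicting the least recently used slot if the cache is full."""
        if tag in self.slots:
            self.slots.move_to_end(tag)     # mark as most recently used
            return True
        if len(self.slots) == self.num_slots:
            self.slots.popitem(last=False)  # evict the LRU slot
        self.slots[tag] = "block %d" % tag
        return False

cache = LRUCache(2)
hits = [cache.access(t) for t in [1, 2, 1, 3, 2]]
print(hits)   # [False, False, True, False, False] -- block 2 was evicted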
Direct Mapping
• [Definition] Direct Mapping is a cache mapping scheme that allows each block in main memory to be kept in only one specific slot in the cache
• Which memory block occupies a slot is still resolved by tagging, since many blocks map to 1 slot
• The MSB part of the address is used as the tag and slot number; the LSB part is used to address the specific word within the slot
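The three-way split can be sketched with assumed field sizes (8-word blocks and 16 slots are my illustrative choices): the slot number comes straight from the address, which is what forces each block into one specific slot.

```python
# Sketch (assumed sizes): direct-mapped address split into tag | slot | word.
# 8-word blocks -> 3 word bits; 16 slots -> 4 slot bits; the rest is the tag.
WORD_BITS, SLOT_BITS = 3, 4

def split_direct(addr):
    word = addr & ((1 << WORD_BITS) - 1)
    slot = (addr >> WORD_BITS) & ((1 << SLOT_BITS) - 1)
    tag = addr >> (WORD_BITS + SLOT_BITS)
    return tag, slot, word

print(split_direct(0b1_0110_101))   # -> (1, 6, 5): tag 1, slot 6, word 5
```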
Direct Mapping
Set Associative Mapping
• [Definition] Set Associative Mapping is a cache mapping scheme that combines the associative and direct mapping schemes:
– Slots are grouped into sets
– Sets are directly mapped
– Slots within a set are associatively mapped
Set Associative Mapping
• The link to the right main memory location is also resolved by tagging
• The MSB part of the address is used as the tag and set number; the LSB part is used to address the specific word within the slot
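The split mirrors direct mapping, but the middle field now selects a set rather than a single slot (sizes below are again my illustrative assumptions); slots inside the chosen set are then searched associatively by tag, so no slot field is needed:

```python
# Sketch (assumed sizes): set-associative address split into tag | set | word.
# 8-word blocks -> 3 word bits; 8 sets -> 3 set bits; the rest is the tag.
WORD_BITS, SET_BITS = 3, 3

def split_set_assoc(addr):
    word = addr & ((1 << WORD_BITS) - 1)
    set_no = (addr >> WORD_BITS) & ((1 << SET_BITS) - 1)
    tag = addr >> (WORD_BITS + SET_BITS)
    return tag, set_no, word

print(split_set_assoc(0b10_110_101))   # -> (2, 6, 5): tag 2, set 6, word 5
```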
Set Associative Mapping
Read & Write Schemes
Hit Ratios & Effective Access Times
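The slide's worked figures are not reproduced here, but the standard relation can be sketched with illustrative numbers of my own choosing: with hit ratio h, the effective access time is the hit-weighted average of the cache and main-memory access times.

```python
# Sketch (illustrative timings, not from the slides):
#     EAT = h * t_cache + (1 - h) * t_main
def effective_access_time(h, t_cache, t_main):
    """Hit-ratio-weighted average access time (same time unit throughout)."""
    return h * t_cache + (1 - h) * t_main

# e.g. 95% hit ratio, 10 ns cache, 100 ns main memory -> about 14.5 ns
print(effective_access_time(0.95, 10, 100))
```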
Virtual Memory
Main Memory Size Limit
• The size of memory ICs has expanded greatly over the last decades
• This memory size explosion, however, can never catch up with the bigger explosion of software size in terms of its code area and data area – especially with multimedia
• Hence there is a need to load only the required portion of code and data from the hard disk into memory
Overlaying
• [Definition] Overlaying is a system whereby the parts of software to be loaded and flushed are decided and done manually by the programmer
Virtual Memory
• [Definition] Virtual Memory is a system whereby the parts of software to be loaded from and flushed to the hard disk are decided and handled automatically and transparently by the operating system
• Again this raises the need to map the content of the real main memory to its virtual counterpart on the hard disk
Virtual Memory
Page Table
• This is the mapping scheme between real and virtual memory
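A page-table lookup can be sketched as follows (the page size and table contents are hypothetical): the virtual address splits into a virtual page number and an offset, the table maps the page number to a physical frame, and a missing entry means the page must be fetched from disk (a page fault).

```python
# Sketch (assumed sizes): virtual-to-physical translation via a page table.
# Assume 1 KB pages -> 10 offset bits; None marks a page not in main memory.
OFFSET_BITS = 10

page_table = {0: 3, 1: 7, 2: None, 3: 1}   # hypothetical page -> frame map

def translate(vaddr):
    vpn = vaddr >> OFFSET_BITS               # virtual page number (MSBs)
    offset = vaddr & ((1 << OFFSET_BITS) - 1)  # offset within page (LSBs)
    frame = page_table.get(vpn)
    if frame is None:
        raise RuntimeError("page fault: page %d must be loaded from disk" % vpn)
    return (frame << OFFSET_BITS) | offset   # physical address

print(translate(1 * 1024 + 20))   # page 1 -> frame 7: 7*1024 + 20 = 7188
```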
Address Translation
Segmentation
• Segmentation allows 2 users to share the same code with different data:
Fragmentation
• After many loads and flushes, main memory may become fragmented – it can then be defragmented: