Bluetronics Inc

DMA and Memory Management Unit

Rajarajan.E ([email protected])

The design may need resistors or terminators on control signals (such as read and write strobes) so that the control signals don't float to the active state during the brief period when neither the processor nor the DMA controller is driving them.

DMA controllers require initialization by software. Typical setup parameters include the base address of the source area, the base address of the destination area, the length of the block, and whether the DMA controller should generate a processor interrupt once the block transfer is complete. It's typically possible to have the DMA controller automatically increment one or both addresses after each byte (word) transfer, so that the next transfer will be from the next memory location. Transfers between peripherals and memory often require that the peripheral address not be incremented after each transfer. When the address is not incremented, each data byte will be transferred to or from the same memory location.
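To make the initialization concrete, here is a minimal C sketch that programs such a controller for a memory-to-peripheral transfer. The register map (DMA_SRC, DMA_DST, DMA_LEN, DMA_CTRL), the base address, and the control bits are hypothetical placeholders rather than any real device's layout; a part's datasheet defines the actual registers.

    #include <stdint.h>

    /* Hypothetical memory-mapped DMA controller registers (illustrative only). */
    #define DMA_BASE  0x40001000u
    #define DMA_SRC   (*(volatile uint32_t *)(DMA_BASE + 0x00u))  /* source base address      */
    #define DMA_DST   (*(volatile uint32_t *)(DMA_BASE + 0x04u))  /* destination base address */
    #define DMA_LEN   (*(volatile uint32_t *)(DMA_BASE + 0x08u))  /* block length in bytes    */
    #define DMA_CTRL  (*(volatile uint32_t *)(DMA_BASE + 0x0Cu))  /* control register         */

    #define DMA_CTRL_START     (1u << 0)  /* begin the block transfer                             */
    #define DMA_CTRL_IRQ_DONE  (1u << 1)  /* interrupt the processor when the block completes     */
    #define DMA_CTRL_SRC_INC   (1u << 2)  /* increment the source address after each transfer     */
    #define DMA_CTRL_DST_INC   (1u << 3)  /* increment the destination address after each transfer */

    /* Copy a memory buffer to a fixed peripheral data register: the source
     * address increments, the peripheral (destination) address does not. */
    static void dma_start_mem_to_periph(const void *src, uint32_t periph_reg, uint32_t len)
    {
        DMA_SRC  = (uint32_t)(uintptr_t)src;
        DMA_DST  = periph_reg;
        DMA_LEN  = len;
        DMA_CTRL = DMA_CTRL_SRC_INC | DMA_CTRL_IRQ_DONE | DMA_CTRL_START;
    }

Leaving DMA_CTRL_DST_INC clear keeps the destination fixed, which matches the fixed-peripheral-address case described above; setting both increment bits would give an ordinary memory-to-memory copy.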

Burst or single-cycle

DMA operations can be performed in either burst or single-cycle mode. Some DMA controllers support both. In burst mode, the DMA controller keeps control of the bus until all the data buffered by the requesting device has been transferred to memory (or until the output device's buffer is full, if writing to a peripheral). In single-cycle mode, the DMA controller gives up the bus after each transfer. This minimizes the amount of time that the DMA controller keeps the processor off the memory bus, but it requires that the bus request/acknowledge sequence be performed for every transfer. This overhead can reduce overall system throughput if a lot of data needs to be transferred. In most designs, you would use single-cycle mode if your system cannot tolerate more than a few cycles of added interrupt latency. Likewise, if the peripheral devices can buffer very large amounts of data, causing the DMA controller to tie up the bus for an excessive amount of time, single-cycle mode is preferable.

Note that some DMA controllers have larger address registers than length registers. For instance, a DMA controller with a 32-bit address register and a 16-bit length register can access a 4GB memory space, but can only transfer 64KB per block. If your application requires DMA transfers of larger amounts of data, software intervention is required after each block.
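To sketch that software intervention, the loop below splits a large memory-to-memory copy into blocks that fit a 16-bit length register and re-arms the controller after each one. The helpers dma_start_block() and dma_wait_done() are assumed stand-ins, not a real driver API.

    #include <stddef.h>
    #include <stdint.h>

    /* Hypothetical helpers: start one DMA block and wait for it to finish. */
    extern void dma_start_block(uint32_t src, uint32_t dst, uint16_t len);
    extern void dma_wait_done(void);

    /* A 16-bit length register limits each block to just under 64KB. */
    #define DMA_MAX_BLOCK 0xFFFFu

    /* Transfer an arbitrarily large buffer by re-programming the controller
     * after each block, as described above. */
    static void dma_copy_large(uint32_t src, uint32_t dst, size_t total)
    {
        while (total > 0) {
            size_t chunk = (total > DMA_MAX_BLOCK) ? DMA_MAX_BLOCK : total;
            dma_start_block(src, dst, (uint16_t)chunk);
            dma_wait_done();              /* the per-block software intervention        */
            src   += (uint32_t)chunk;     /* advance both addresses for a memory copy   */
            dst   += (uint32_t)chunk;
            total -= chunk;
        }
    }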

Get on the bus

The simplest way to use DMA is to select a processor with an internal DMA controller. This eliminates the need for external bus buffers and ensures that the timing is handled correctly. Also, an internal DMA controller can transfer data to on-chip memory and peripherals, which is something that an external DMA controller cannot do. Because the handshake is handled on-chip, the overhead of entering and exiting DMA mode is often much lower than when an external controller is used.

If an external DMA controller or processor is used, be sure that the hardware handles the transition between transfers correctly. To avoid the problem of bus contention, ensure that bus requests are inhibited if the bus is not free. This prevents the DMA controller from requesting the bus before the processor has reacquired it after a transfer. So you see, DMA is not as mysterious as it sometimes seems. DMA transfers can provide real advantages when the system is properly designed.

Figure 1: A DMA controller shares the processor's memory bus

MMU: Memory Management Unit

The Memory Management Unit (MMU) is the hardware component that manages virtual memory systems. Among the functions of such devices are the translation of virtual addresses to physical addresses, memory protection, cache control, bus arbitration, and, in simpler computer architectures, bank switching. Typically, the MMU is part of the CPU, though in some designs it is a separate chip. The MMU includes a small amount of memory that holds a table matching virtual addresses to physical addresses. This table is called the Translation Look-aside Buffer (TLB). All requests for data are sent to the MMU, which determines whether the data is in RAM or needs to be fetched from the mass storage device. If the data is not in memory, the MMU issues a page fault interrupt.

The computer hardware that is responsible for managing the computer's memory system is called the memory management unit (MMU). This component serves as a buffer between the CPU and system memory. The functions performed by the memory management unit can typically be divided into three areas: hardware memory management, operating system memory management, and application memory management. Although the memory management unit can be a separate chip component, it is usually integrated into the central processing unit (CPU).

Generally, the hardware associated with memory management includes random access memory (RAM) and memory caches. RAM is the physical storage that holds the data currently in use; it is the main storage area of the computer where data is read and written. Memory caches are used to hold copies of certain data from the main memory. The CPU accesses this information held in the memory cache, which helps speed up the processing time.

When the physical memory, or RAM, runs out of memory space, the computer automatically uses virtual memory from the hard disk to run the requested program. The memory management unit allocates memory from the operating system to various applications. The virtual address area, which is located within the central processing unit, consists of a range of addresses that are divided into pages. Pages are secondary storage blocks that are equal in size. The automated paging process allows the operating system to utilize storage space scattered on the hard disk.

Instead of the user receiving an error message that there is not enough memory, the MMU automatically instructs the system to build enough virtual memory to execute the application. Contiguous virtual memory space is created out of a pool of equal-size blocks of virtual memory for running the application. This feature is a major key to making the process work effectively and efficiently, because the system is not required to create one chunk of virtual memory to handle the program requirements. Creating various sizes of memory space to accommodate different-size programs would cause a problem known as fragmentation. This could lead to the possibility of not having enough free space for larger programs when the total space available is actually enough.

Application memory management entails the process of allocating the memory required to run a program from the available memory resources. In larger operating systems, many copies of the same application can be running. The memory management unit often assigns an application the memory address that best fits its need; it's simpler to assign these programs the same addresses. Also, the memory management unit can distribute memory resources to programs on an as-needed basis. When the operation is completed, the memory is recycled for use elsewhere. One of the main challenges for the memory management unit is to sense when data is no longer needed and can be discarded. This frees up memory for use by other processes. Automatic and manual memory management has become a separate field of study because of this issue. Inefficient memory management presents a major issue when it comes to the optimal performance of computer systems.

Fig: Schematic of the operation of an MMU

Modern MMUs typically divide the virtual address space (the range of addresses used by the processor) into pages, each having a size which is a power of 2, usually a few kilobytes, but they may be much larger. The bottom n bits of the address (the offset within a page) are left unchanged. The upper address bits are the (virtual) page number. The MMU normally translates virtual page numbers to physical page numbers via an associative cache called a Translation Look-aside Buffer (TLB). When the TLB lacks a translation, a slower mechanism involving hardware-specific data structures or software assistance is used. The data found in such data structures are typically called page table entries (PTEs), and the data structure itself is typically called a page table. The physical page number is combined with the page offset to give the complete physical address. A PTE or TLB entry may also include information about whether the page has been written to (the dirty bit), when it was last used (the accessed bit, for a least recently used page replacement algorithm), what kind of processes (user mode, supervisor mode) may read and write it, and whether it should be cached.
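The sketch below shows this translation for an assumed flat, single-level page table with 4KB pages; real MMUs consult the TLB first and usually walk multi-level tables, and the pte_t layout here is illustrative rather than any particular architecture's format.

    #include <stdbool.h>
    #include <stdint.h>

    #define PAGE_SHIFT        12u                         /* assume 4KB pages             */
    #define PAGE_OFFSET_MASK  ((1u << PAGE_SHIFT) - 1u)   /* bottom 12 bits: page offset  */

    /* A simplified page table entry carrying the bits mentioned in the text. */
    typedef struct {
        uint32_t frame    : 20;  /* physical page (frame) number                      */
        uint32_t present  : 1;   /* a physical page is mapped to this virtual page    */
        uint32_t writable : 1;   /* writes are permitted                              */
        uint32_t user     : 1;   /* accessible from user mode (vs. supervisor only)   */
        uint32_t accessed : 1;   /* set on any access, for LRU-style replacement      */
        uint32_t dirty    : 1;   /* set on write: page must be saved before eviction  */
    } pte_t;

    /* Translate a virtual address to a physical address, or report a fault. */
    bool translate(const pte_t *page_table, uint32_t vaddr, uint32_t *paddr)
    {
        uint32_t vpn    = vaddr >> PAGE_SHIFT;       /* upper bits: virtual page number */
        uint32_t offset = vaddr & PAGE_OFFSET_MASK;  /* left unchanged by translation   */
        pte_t pte = page_table[vpn];

        if (!pte.present)
            return false;                            /* the MMU would signal a page fault */

        *paddr = ((uint32_t)pte.frame << PAGE_SHIFT) | offset;
        return true;
    }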

Sometimes, a TLB entry or PTE prohibits access to a virtual page, perhaps because no physical random access memory has been allocated to that virtual page. In this case the MMU signals a page fault to the CPU. The operating system (OS) then handles the situation, perhaps by trying to find a spare frame of RAM and setting up a new PTE to map it to the requested virtual address. If no RAM is free, it may be necessary to choose an existing page, using some replacement algorithm, and save it to disk (this is called "paging"). With some MMUs, there can also be a shortage of PTEs or TLB entries, in which case the OS will have to free one for the new mapping.
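A simplified page-fault handler along these lines might look like the following sketch. It reuses the hypothetical pte_t and PAGE_SHIFT from the previous sketch, and every helper function is an assumed stand-in for OS-specific frame allocation, page replacement, and disk I/O.

    /* Hypothetical OS helpers (not a real kernel API). */
    extern int   find_free_frame(void);                    /* returns -1 if RAM is full     */
    extern int   choose_victim_frame(void);                /* page replacement algorithm    */
    extern void  write_frame_to_disk(int frame);
    extern void  read_page_from_disk(uint32_t vpn, int frame);
    extern pte_t *pte_for_frame(int frame);                /* PTE currently mapping a frame */

    void handle_page_fault(pte_t *page_table, uint32_t faulting_vaddr)
    {
        uint32_t vpn = faulting_vaddr >> PAGE_SHIFT;
        int frame = find_free_frame();

        if (frame < 0) {                        /* no free RAM: page something out ("paging") */
            frame = choose_victim_frame();
            pte_t *victim = pte_for_frame(frame);
            if (victim->dirty)
                write_frame_to_disk(frame);     /* save modified contents before reuse */
            victim->present = 0;                /* unmap the victim page               */
        }

        read_page_from_disk(vpn, frame);        /* bring in the requested page */
        page_table[vpn].frame    = (uint32_t)frame;
        page_table[vpn].present  = 1;
        page_table[vpn].dirty    = 0;
        page_table[vpn].accessed = 0;
    }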

In some cases a "page fault" may indicate a software bug. A key benefit of an MMU is memory protection: an OS can use it to protect against errant programs, by disallowing access to memory that a particular program should not have access to. Typically, an OS assigns each program its own virtual address space.

An MMU also reduces the problem of fragmentation of memory. After blocks of memory have been allocated and freed, the free memory may become fragmented (discontinuous), so that the largest contiguous block of free memory may be much smaller than the total amount. With virtual memory, a contiguous range of virtual addresses can be mapped to several non-contiguous blocks of physical memory.
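As a small illustration of this last point, the sketch below (again using the hypothetical pte_t and page table from above) maps a contiguous three-page virtual buffer onto three widely separated physical frames; the frame numbers are arbitrary, and the only point is that virtual contiguity does not require physical contiguity.

    /* Map virtual pages 0x400..0x402 onto scattered physical frames. */
    void map_scattered_buffer(pte_t *page_table)
    {
        const uint32_t first_vpn = 0x00400u;
        const uint32_t frames[3] = { 0x013A7u, 0x00052u, 0x0F001u };  /* non-contiguous frames */

        for (uint32_t i = 0; i < 3; i++) {
            page_table[first_vpn + i].frame    = frames[i];
            page_table[first_vpn + i].present  = 1;
            page_table[first_vpn + i].writable = 1;
        }
    }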