CSE 7343/5343, Spring 2003
Topic 2-3: Virtual Memory
Prof. Jeff Tian, CSE/SoE/SMU, Dallas, TX 75275
tian@engr.smu.edu; www.engr.smu.edu/~tian/class/7343.03s
- Dates: 3/18-3/20.
- Reading: Ch 10.
Virtual Memory Concepts
- memory management: logical => physical
- extension: when the mapping is not one-to-one
- basic concept:
- virtual: not necessarily physical
- in concept: abstraction
- what users see
(not necessarily) = what is physically in memory
may correspond to disk and other storage devices
- direct extension of paging concept
- example Fig 10.1 (p.319)
- practical implications:
- partially in memory for program execution/process management
- virtual memory space >> physical memory space
(but still limited by the sum of all storage devices)
- general implementation:
- mapping between virtual and physical
- many different possibilities
- swapping and dynamic loading (Ch. 9) as basis
- paging-based implementation most common
- other schemes possible
- comparison: simplicity, transparency, efficiency, etc.
- VM: why, and why is it possible?
- locality assumption (more later)
close neighbours more likely to be used together (spatial locality)
usage sequence/clusters over time (temporal locality)
- evolution/migration/locality concepts
more in page replacement later
- rare conditions, exception handling, etc.
- difference between declared and actual memory request
sparse matrix example
- major benefits:
- reduced physical limitations
(from uniform large logical address space)
- increased degree of multiprogramming
- better resource utilization
- may result in less I/O
- major cost: overhead associated with implementation
- cost-benefit analysis
overall and for different implementation schemes
Virtual Memory: Demand Paging
- most common VM scheme
- relating memory frames with backing store
Fig 10.2 (p.320)
pages - frames - disk space
- idea: paging with loading/swapping
- typically #pages >> #frames
- page table with valid/invalid bit
- valid: in main memory (MM); mapped to it
- invalid: in backing store
- example Fig 10.3 (p.321)
- address mapping and memory reference
- (logical) memory reference steps
- access page table
- valid: map and reference
(same as regular paging in Ch.9)
- invalid (page fault):
find a frame
swap in
reset table entry
then restart (reference)
Fig 10.4 (p.322)
- no more frames: replacement
multiple steps too
replacement algorithms later
- frame vs. page
- physical memory to hold a page
- occupied or free
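The reference steps above (access page table; valid: map and reference; invalid: find a frame, swap in, reset the table entry, restart) can be sketched as a minimal simulation. The class and field names below are illustrative assumptions, not the book's data structures, and the backing-store transfer is elided to a comment:

```python
# Minimal demand-paging simulation: a page table with a valid/invalid
# bit, where an invalid reference triggers a page fault.
class DemandPager:
    def __init__(self, num_pages, num_frames):
        self.valid = [False] * num_pages      # valid/invalid bit per page
        self.frame_of = [None] * num_pages    # page -> frame mapping
        self.free_frames = list(range(num_frames))
        self.faults = 0

    def reference(self, page):
        if self.valid[page]:                  # valid: map and reference
            return self.frame_of[page]
        self.faults += 1                      # invalid: page fault
        if not self.free_frames:
            raise RuntimeError("no free frame: replacement needed (later)")
        frame = self.free_frames.pop()        # find a free frame
        # (swap the page in from the backing store here)
        self.frame_of[page] = frame           # reset table entry
        self.valid[page] = True
        return frame                          # then restart the reference

pager = DemandPager(num_pages=8, num_frames=3)
pager.reference(2)   # page fault: loaded into a free frame
pager.reference(2)   # hit: mapped directly
```

The "no free frame" case is left as an error here; the replacement algorithms below fill that gap.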
- Demand paging: implementation
- reference/mapping: above
- what to start with: nothing vs pre-loading
(again, locality and other concerns)
- extra bit/field for valid/invalid indicator
- backing store (swap space)
- software support for mapping/locating/replacement/etc.
- details later
- frame allocation and page replacement
- allocation of frames
- simple for demand paging
- compare to segmentation based VM
- typically predetermined #frames/process
- initial loading vs pure demand loading
- minimal number of frames (to start computation)
- allocation criteria: internal/external/equal
- local vs. global allocation/replacement
- replacement: affect performance
- implementing replacement:
- swap in always required
- swap out: not always
use of dirty (modify) bit
-- not modified: no need to swap out, just overwrite
- replacement algorithm later
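The dirty-bit rule above can be sketched as follows (a minimal illustration; the dict-based memory, backing store, and parameter names are assumptions for the sketch):

```python
# Eviction with a dirty (modify) bit: swap in is always required,
# but swap out is only needed when the victim frame was modified.
def evict(frame, dirty, backing_store, memory):
    """Free a frame, writing it back only if its dirty bit is set."""
    if dirty[frame]:
        backing_store[frame] = memory[frame]  # modified: must swap out
        dirty[frame] = False
    # not modified: no write-back; the frame can simply be overwritten
```

Skipping the write-back for clean pages halves the I/O for an eviction in the best case, since only the swap-in remains.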
- Demand Paging: performance
- direct measure: access time
- indirect measure: page fault count or rate
- affected by various factors
- internal factor: scheme/algorithm
- external factor: reference sequence and probabilities
page fault probability: p
- cost/performance of individual operations
- effective access time = (1 - p) x ma + p x page-fault-time
- page fault time: time for all operations (steps 1-12 p.326)
(main ops: service pf; load; restart)
- close link to page replacement algorithms
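The effective-access-time formula above can be made concrete with a small computation. The specific numbers (ma = 100 ns, page-fault service time = 8 ms) are illustrative assumptions, not figures from the text:

```python
# Effective access time: (1 - p) * ma + p * page_fault_time,
# with p the page-fault probability (times in nanoseconds).
def effective_access_time(p, ma_ns=100, fault_ns=8_000_000):
    return (1 - p) * ma_ns + p * fault_ns

# Even a tiny fault rate dominates: with these assumed numbers,
# p = 0.001 gives about 8100 ns, an ~80x slowdown over the
# 100 ns no-fault memory access.
print(effective_access_time(0.001))
```

This is why keeping p low via a good replacement algorithm matters so much for performance.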
Virtual Memory: Page Replacement
- commonly used replacement algorithms:
- FIFO
- optimal
- LRU
- MRU
- other, and combinations
- working/evaluation w.r.t. reference strings
- general expectation:
inverse relation between #frames and #page faults
example Fig 10.8 (p.335)
may vary for different algorithms
- rationale for different algorithms:
from workload (ref-str) characterization
locality and other assumptions
- FIFO
- simple to understand and implement
- works well for (strictly) sequential accesses/references
- how it works: example
- Belady's anomaly: more frames may even result in more page faults
- see the example connected to Fig 10.10 (p.337)
reference string 1,2,3,4,1,2,5,1,2,3,4,5
with 3 or 4 frames
- possible improvement: how to reduce page faults
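FIFO behaviour, including Belady's anomaly on the Fig 10.10 reference string, can be checked with a short simulation (a sketch, not code from the text):

```python
from collections import deque

def fifo_faults(refs, num_frames):
    """Count page faults under FIFO replacement."""
    frames = deque()                 # oldest page at the left
    faults = 0
    for page in refs:
        if page in frames:
            continue                 # hit
        faults += 1
        if len(frames) == num_frames:
            frames.popleft()         # evict the oldest page
        frames.append(page)
    return faults

# Belady's anomaly on the Fig 10.10 reference string:
refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(fifo_faults(refs, 3))  # 9 faults
print(fifo_faults(refs, 4))  # 10 faults: more frames, more faults
```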
- optimal replacement
- idea: replace the page that will not be used for the longest time into the future
- Belady's anomaly will not occur
- example in class
and in book (Fig 10.11, p.338)
- difficulty: future knowledge
- similarity with SJF
- solution: approximation based on certain assumptions
- LRU, MRU, etc.
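The optimal rule above can be simulated after the fact, when the whole reference string is known (a sketch for illustration only; in practice the future is unavailable, which is exactly the difficulty noted above):

```python
def optimal_faults(refs, num_frames):
    """Count page faults under optimal (Belady's) replacement:
    evict the page whose next use lies furthest in the future."""
    frames = set()
    faults = 0
    for i, page in enumerate(refs):
        if page in frames:
            continue                       # hit
        faults += 1
        if len(frames) == num_frames:
            future = refs[i + 1:]
            # victim: a page never used again, else the one used
            # furthest ahead (ties do not change the fault count)
            victim = max(frames, key=lambda q: future.index(q)
                         if q in future else len(future) + 1)
            frames.remove(victim)
        frames.add(page)
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(optimal_faults(refs, 3))  # 7 faults (vs 9 for FIFO)
```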
- LRU, MRU, etc.
- LRU: least-recently used; tracks temporal locality more closely; a recency-based refinement of FIFO
- MRU: most-recently used
- locality and working set concepts: works well with LRU
- LRU: no Belady's anomaly
- example in class
and in book (Fig 10.12, p.339)
- approximation for LRU: reduced information/overhead
- LFU/MFU: least/most frequently used, as alternative to LRU/MRU
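LRU can be simulated on the same reference string used for FIFO above (a sketch; a real implementation would approximate recency with hardware support rather than a list):

```python
def lru_faults(refs, num_frames):
    """Count page faults under LRU replacement."""
    frames = []                      # least-recently used page first
    faults = 0
    for page in refs:
        if page in frames:
            frames.remove(page)      # hit: refresh recency
        else:
            faults += 1
            if len(frames) == num_frames:
                frames.pop(0)        # evict least-recently used page
        frames.append(page)          # page is now most recently used
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
# No Belady's anomaly: the fault count never rises as frames increase.
print([lru_faults(refs, n) for n in (3, 4, 5)])  # [10, 8, 5]
```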
Virtual Memory: Other Considerations
- management of free frame pool
- segmentation based VM
- thrashing: more time spent paging (replacement etc.) than execution
- example: Fig 10.15 (p.349)
- locality and working set model
- allocation > working set size
- LRU generally works better
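The working-set model above can be sketched directly: WS(t, delta) is the set of distinct pages referenced in the last delta references up to time t, and allocating at least |WS| frames per process is the guard against thrashing. The reference string below is an illustrative assumption, not one from the text:

```python
def working_set(refs, t, delta):
    """Working set WS(t, delta): distinct pages referenced in the
    window of the last `delta` references ending at time t."""
    return set(refs[max(0, t - delta + 1): t + 1])

refs = [1, 2, 1, 3, 1, 2, 4, 4, 4, 5]
print(working_set(refs, 5, 4))   # window refs[2..5] -> {1, 2, 3}
print(working_set(refs, 8, 4))   # locality shift -> {2, 4}
```

When the sum of working-set sizes exceeds the available frames, some process should be suspended rather than squeezed, since squeezing is what produces thrashing.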
Prepared by Jeff Tian
(tian@engr.smu.edu).
Last update March 21, 2003.