Mock Quiz Hub
Recent Updates
Added: OS Mid 1 Quiz
Added: OS Mid 2 Quiz
Added: OS Lab 1 Quiz
Check back for more updates!
Quiz
Question 1 of 60
Quiz ID: q1
What is the primary benefit of virtual memory according to the lecture?
Faster CPU execution speed
Separation of user logical memory from physical memory
Reduced power consumption
Better graphics performance
Question 2 of 60
Quiz ID: q2
Which of the following is NOT a benefit of virtual memory mentioned in the lecture?
Logical address space can be much larger than physical address space
Allows address spaces to be shared by several processes
Reduces the need for secondary storage
Allows for more efficient process creation
Question 3 of 60
Quiz ID: q3
Virtual memory can be implemented via which two methods?
Demand paging and demand segmentation
Static allocation and dynamic allocation
Sequential access and random access
Buffering and caching
Question 4 of 60
Quiz ID: q4
In virtual address space design, where does the stack typically start?
At address 0
At the middle of address space
At the maximum logical address and grows down
At a random location
Question 5 of 60
Quiz ID: q5
What is the main advantage of having unused address space between stack and heap?
Faster memory access
No physical memory is needed until the heap or stack grows onto a new page
Better security
Reduced fragmentation
Question 6 of 60
Quiz ID: q6
What is demand paging?
Loading the entire process into memory at startup
Bringing a page into memory only when it is needed
Allocating memory in fixed-size chunks
Compressing memory pages
Question 7 of 60
Quiz ID: q7
What are the benefits of demand paging compared to loading entire processes?
Less I/O needed, less memory needed, faster response, more users
Better graphics, faster CPU, more storage
Reduced power consumption, better security
Simplified programming, easier debugging
Question 8 of 60
Quiz ID: q8
What is a lazy swapper in the context of demand paging?
A swapper that works slowly
A swapper that never swaps a page into memory unless the page will be needed
A swapper that only works during idle time
A swapper that compresses pages
Question 9 of 60
Quiz ID: q9
In the valid-invalid bit scheme, what does 'v' represent?
Virtual memory
Valid page reference
In-memory (memory resident)
Variable size page
Question 10 of 60
Quiz ID: q10
What happens when the MMU encounters an 'i' bit during address translation?
The page is immediately loaded
A page fault occurs
The process is terminated
Memory is compressed
Question 11 of 60
Quiz ID: q11
What is the first step in handling a page fault?
Find a free frame
Reference to a page traps to the operating system
Reset the page tables
Swap the page into memory
Question 12 of 60
Quiz ID: q12
After a page fault occurs and the OS determines it's a valid reference, what is the next step?
Restart the instruction
Find free frame
Reset validation bit
Terminate the process
Question 13 of 60
Quiz ID: q13
What is pure demand paging?
Loading all pages at once
Starting a process with no pages in memory
Using only physical memory
Compressing all pages
Question 14 of 60
Quiz ID: q14
What hardware support is needed for demand paging?
Page table with valid/invalid bit, secondary memory, instruction restart
Cache memory, faster CPU, more registers
Graphics card, sound card, network card
Floating point unit, vector processor
Question 15 of 60
Quiz ID: q15
What is a free-frame list?
A list of processes waiting for memory
A pool of free frames for satisfying page fault requests
A list of pages to be swapped out
A directory of memory locations
Question 16 of 60
Quiz ID: q16
What is zero-fill-on-demand?
Filling memory with random data
Zeroing out frame contents before allocation
Filling frames with program code
Creating empty files
Question 17 of 60
Quiz ID: q17
According to the performance example, if memory access time is 200 nanoseconds and average page-fault service time is 8 milliseconds, what is the EAT when p = 1/1000?
200 nanoseconds
8.2 microseconds
8 milliseconds
1000 nanoseconds
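For study purposes, the EAT figure can be checked with a short sketch (Python, using only the numbers given in the question):

```python
# Effective access time (EAT) under demand paging:
# EAT = (1 - p) * memory_access + p * page_fault_service_time
mem_ns = 200              # memory access time: 200 nanoseconds
fault_ns = 8_000_000      # page-fault service time: 8 ms in nanoseconds
p = 1 / 1000              # page-fault probability from the question

eat_ns = (1 - p) * mem_ns + p * fault_ns
print(eat_ns)             # 8199.8 ns, i.e. roughly 8.2 microseconds
```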
Question 18 of 60
Quiz ID: q18
For performance degradation less than 10%, what should be the maximum page fault rate?
Less than 1 page fault in every 1,000 accesses
Less than 1 page fault in every 400,000 accesses
Less than 1 page fault in every 100 accesses
Less than 1 page fault in every 10,000 accesses
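The 10% threshold above follows from the same EAT formula; a quick check (a Python sketch, same numbers as the previous question):

```python
# Keep performance degradation under 10%: EAT < 1.1 * memory access time.
# Solving 200 + p * (8_000_000 - 200) < 220 for p:
mem_ns, fault_ns = 200, 8_000_000
p_max = (0.1 * mem_ns) / (fault_ns - mem_ns)
print(p_max)              # ~2.5e-6
print(round(1 / p_max))   # ~399,990 -> about 1 fault per 400,000 accesses
```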
Question 19 of 60
Quiz ID: q19
What is Copy-on-Write (COW)?
A method to copy files between processes
Allowing parent and child processes to initially share pages, copying only when modified
A technique to write data to multiple locations
A backup mechanism for virtual memory
Question 20 of 60
Quiz ID: q20
Why is Copy-on-Write more efficient for process creation?
It uses less CPU time
Only modified pages are copied
It requires less disk space
It uses fewer system calls
Question 21 of 60
Quiz ID: q21
What is vfork()?
A system call to create virtual memory
A variation on fork() where parent suspends and child uses copy-on-write address space
A method to allocate virtual frames
A technique to validate memory pages
Question 22 of 60
Quiz ID: q22
What happens when there is no free frame available for a page fault?
The process is terminated
Page replacement must occur
The system crashes
Memory is automatically expanded
Question 23 of 60
Quiz ID: q23
What is the purpose of the modify (dirty) bit in page replacement?
To identify invalid pages
To reduce overhead of page transfers by only writing modified pages to disk
To mark pages for compression
To indicate page access frequency
Question 24 of 60
Quiz ID: q24
What are the basic steps in page replacement?
Find desired page, find free frame, bring page in, continue process
Compress memory, allocate space, load page, restart
Terminate process, clean memory, restart system
Save state, find victim, swap out, swap in
Question 25 of 60
Quiz ID: q25
In the FIFO page replacement algorithm, how many page faults occur with the reference string 7,0,1,2,0,3,0,4,2,3,0,3,0,3,2,1,2,0,1,7,0,1 using 3 frames?
12
15
18
20
Question 26 of 60
Quiz ID: q26
What is Belady's Anomaly?
More frames always result in fewer page faults
Adding more frames can cause more page faults
Page replacement algorithms are inefficient
Virtual memory causes system crashes
Question 27 of 60
Quiz ID: q27
What is the optimal page replacement algorithm?
Replace the page that was least recently used
Replace the page that arrived first
Replace the page that will not be used for the longest period of time
Replace the page with the lowest address
Question 28 of 60
Quiz ID: q28
How many page faults does the optimal algorithm produce with the given reference string and 3 frames?
7
9
12
15
Question 29 of 60
Quiz ID: q29
What does the LRU (Least Recently Used) algorithm replace?
The page that arrived first
The page that will not be used for longest time
The page that has not been used for the longest amount of time
The page with the lowest reference count
Question 30 of 60
Quiz ID: q30
How many page faults does LRU produce with the same reference string using 3 frames?
9
12
15
18
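The three replacement questions above all use the reference string 7,0,1,2,0,3,0,4,2,3,0,3,0,3,2,1,2,0,1,7,0,1 with 3 frames. A small simulator (a Python sketch, not tied to any particular implementation) reproduces the fault counts for FIFO, optimal, and LRU:

```python
def simulate(refs, frames, policy):
    """Count page faults for `frames` frames under FIFO, OPT, or LRU."""
    mem, faults = [], 0
    for i, page in enumerate(refs):
        if page in mem:
            if policy == "LRU":      # on a hit, refresh recency
                mem.remove(page)
                mem.append(page)
            continue
        faults += 1
        if len(mem) == frames:
            if policy in ("FIFO", "LRU"):
                mem.pop(0)           # oldest arrival / least recently used
            else:                    # OPT: evict the page used farthest ahead
                future = refs[i + 1:]
                victim = max(mem, key=lambda p: future.index(p)
                             if p in future else len(future))
                mem.remove(victim)
        mem.append(page)
    return faults

refs = [7,0,1,2,0,3,0,4,2,3,0,3,0,3,2,1,2,0,1,7,0,1]
for policy in ("FIFO", "OPT", "LRU"):
    print(policy, simulate(refs, 3, policy))
# FIFO 15, OPT 9, LRU 12
```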
Question 31 of 60
Quiz ID: q31
What are two implementation approaches for LRU?
Counter implementation and stack implementation
Array implementation and linked list implementation
Hardware implementation and software implementation
Sequential implementation and parallel implementation
Question 32 of 60
Quiz ID: q32
What is the main problem with LRU implementation?
It produces too many page faults
It needs special hardware and is still slow
It cannot handle large page sizes
It is incompatible with virtual memory
Question 33 of 60
Quiz ID: q33
What is the reference bit in LRU approximation?
A bit that counts references to a page
A bit associated with each page, initially 0, set to 1 when page is referenced
A bit that indicates page validity
A bit that shows page modification status
Question 34 of 60
Quiz ID: q34
What is the second-chance algorithm?
A variation of optimal algorithm
Generally FIFO, plus hardware-provided reference bit
A hardware-based LRU implementation
A compression-based replacement algorithm
Question 35 of 60
Quiz ID: q35
In the enhanced second-chance algorithm, which page category is best to replace?
(1,1) recently used and modified
(1,0) recently used but clean
(0,1) not recently used but modified
(0,0) neither recently used nor modified
Question 36 of 60
Quiz ID: q36
What does LFU (Least Frequently Used) algorithm replace?
The page with the largest reference count
The page with the smallest reference count
The page that was least recently used
The page that arrived first
Question 37 of 60
Quiz ID: q37
What is the main benefit of page-buffering algorithms?
They eliminate page faults entirely
Keep a pool of free frames so frames are available when needed
They compress memory pages
They predict future page references
Question 38 of 60
Quiz ID: q38
What is the minimum number of frames each process needs?
At least 1 frame
Depends on the instruction set architecture
At least 10 frames
Depends on the process size
Question 39 of 60
Quiz ID: q39
In fixed allocation, what is equal allocation?
Each process gets frames proportional to its size
Each process gets the same number of frames
Each process gets frames based on priority
Each process gets frames based on CPU usage
Question 40 of 60
Quiz ID: q40
What is the formula for proportional allocation?
ai = (si/S) × m
ai = si × m
ai = m/si
ai = S/si
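The proportional-allocation formula can be illustrated with a short sketch (Python; the process sizes and frame count below are illustrative, not from the quiz):

```python
# Proportional allocation: a_i = (s_i / S) * m
m = 62                     # total free frames (illustrative)
sizes = [10, 127]          # s_i for two hypothetical processes
S = sum(sizes)             # 137
alloc = [int(s / S * m) for s in sizes]  # truncate to whole frames
print(alloc)               # [4, 57] -- leftover frames redistributed separately
```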
Question 41 of 60
Quiz ID: q41
What is global replacement?
Process selects replacement frame from only its own allocated frames
Process selects replacement frame from the set of all frames
All processes share the same replacement algorithm
Replacement occurs globally across all systems
Question 42 of 60
Quiz ID: q42
What is an advantage of local replacement over global replacement?
Higher throughput
More consistent per-process performance
Better memory utilization
Faster execution time
Question 43 of 60
Quiz ID: q43
What is thrashing?
A process using too much CPU time
A process is busy swapping pages in and out
Multiple processes competing for resources
System running out of disk space
Question 44 of 60
Quiz ID: q44
What causes thrashing according to the lecture?
Too many processes running
Insufficient CPU power
Size of locality > total memory size
Poor programming practices
Question 45 of 60
Quiz ID: q45
What is the working-set model?
A model for CPU scheduling
A model using a working-set window (Δ) - a fixed number of page references
A model for disk scheduling
A model for process synchronization
Question 46 of 60
Quiz ID: q46
What is WSSi in the working-set model?
Working set size of all processes
Working set of Process Pi - total number of pages referenced in the most recent Δ
Window size for process i
Wait state for process i
Question 47 of 60
Quiz ID: q47
According to the working-set model, what happens if D > m?
System performance improves
Thrashing occurs
More processes can be added
Memory is automatically expanded
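The working-set idea in the three questions above can be sketched in a few lines (Python; the reference string, the window Δ, and the second process's WSS are illustrative assumptions):

```python
# WSS at time t = number of distinct pages in the last `delta` references.
def wss(refs, t, delta):
    return len(set(refs[max(0, t - delta + 1): t + 1]))

refs = [1, 2, 1, 3, 2, 4, 1, 2, 3, 4, 4, 4, 1, 1, 2, 2]  # hypothetical
print(wss(refs, 9, 5))   # window holds pages {4, 1, 2, 3} -> 4

# Thrashing check: D = sum of all WSS_i; if D > m frames, thrashing looms.
m = 6                    # total frames (illustrative)
D = wss(refs, 9, 5) + 3  # second process's WSS assumed to be 3
print(D > m)             # True -> the OS should suspend a process
```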
Question 48 of 60
Quiz ID: q48
What is Page-Fault Frequency (PFF)?
A more direct approach than working-set model
A method to increase page faults
A scheduling algorithm
A memory allocation technique
Question 49 of 60
Quiz ID: q49
Why is kernel memory treated differently from user memory?
Kernel memory is faster
Kernel requests memory for structures of varying sizes, and some needs to be contiguous
Kernel memory is more secure
Kernel memory uses different hardware
Question 50 of 60
Quiz ID: q50
What is the buddy system?
A system for sharing memory between processes
Memory allocated using power-of-2 allocator
A backup memory system
A system for memory compression
Question 51 of 60
Quiz ID: q51
What is a disadvantage of the buddy system?
Slow allocation speed
Fragmentation
High memory overhead
Complex implementation
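The power-of-2 rounding behind the last two questions, and the internal fragmentation it causes, can be sketched directly (Python; the 21 KB request size is illustrative):

```python
# Buddy system: each request is rounded up to the next power-of-2 block.
def buddy_block(request_kb):
    size = 1
    while size < request_kb:
        size *= 2
    return size

print(buddy_block(21))       # a 21 KB request gets a 32 KB block
print(buddy_block(21) - 21)  # 11 KB lost to internal fragmentation
```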
Question 52 of 60
Quiz ID: q52
What is a slab in the slab allocator?
A type of memory controller
One or more physically contiguous pages
A compression algorithm
A scheduling policy
Question 53 of 60
Quiz ID: q53
What are the benefits of the slab allocator?
Faster CPU performance
No fragmentation, fast memory request satisfaction
Better graphics performance
Reduced power consumption
Question 54 of 60
Quiz ID: q54
What is prepaging?
Removing pages from memory
Prepaging all or some of the pages a process will need before they are referenced
Compressing pages before storage
Validating page references
Question 55 of 60
Quiz ID: q55
What factors must be considered in page size selection?
Fragmentation, page table size, resolution, I/O overhead
CPU speed, memory speed, disk speed
Process priority, user preferences, system load
Network bandwidth, graphics capability
Question 56 of 60
Quiz ID: q56
What is TLB Reach?
The distance TLB can access
The amount of memory accessible from the TLB
The number of TLB entries
The speed of TLB access
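TLB reach is a one-line product; a sketch with illustrative numbers (the entry count and page size are assumptions, not from the quiz):

```python
# TLB reach = (number of TLB entries) * (page size):
# the amount of memory the TLB can map at once.
entries = 64
page_size = 4 * 1024          # 4 KB pages
reach = entries * page_size
print(reach // 1024, "KB")    # 256 KB
```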
Question 57 of 60
Quiz ID: q57
In the program structure example, why does Program 2 have fewer page faults than Program 1?
Program 2 uses less memory
Program 2 accesses array elements in row-major order, following memory layout
Program 2 has better optimization
Program 2 runs faster
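The access-order effect in the question above can be simulated (a Python sketch assuming a 128x128 array stored row-major, one row per page, and a single available frame):

```python
N = 128

def faults(order):
    """Count faults with one frame: a fault whenever the page changes."""
    loaded, count = None, 0
    for i, j in order:
        page = i             # row index selects the page (row-major layout)
        if page != loaded:
            count += 1
            loaded = page
    return count

row_major = [(i, j) for i in range(N) for j in range(N)]  # Program 2
col_major = [(i, j) for j in range(N) for i in range(N)]  # Program 1
print(faults(col_major))  # 16384 faults: every access hits a new page
print(faults(row_major))  # 128 faults: one per row
```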
Question 58 of 60
Quiz ID: q58
What is I/O interlock?
A method to speed up I/O operations
Pages must sometimes be locked into memory during I/O operations
A synchronization mechanism for I/O
A technique to reduce I/O overhead
Question 59 of 60
Quiz ID: q59
What technique does Windows use for virtual memory management?
Pure demand paging
Demand paging with clustering
Segmentation only
Static allocation
Question 60 of 60
Quiz ID: q60
In Solaris virtual memory management, what is the purpose of the pageout process?
To allocate new pages to processes
To perform paging using modified clock algorithm
To compress memory pages
To schedule process execution