Systems Architecture
Answers to 121 Assignment 3
Question One
The Weirdnix OS uses this process priority mechanism:
- Process priorities range from 0 (best) to 16 (worst).
- The maximum timeslice for any process is 10ms.
- If a process uses all of or exceeds its timeslice, then its priority
is lowered by one point (i.e. its priority number increases by one).
- If a process blocks before it uses all of its timeslice, then its
priority is raised by one point (i.e. its priority number decreases by one).
- When a process is returned to a Ready queue, it is put on the tail
of the Ready queue.
- At context switch time, the next process chosen to run comes from
the head of the highest priority non-empty Ready queue.
The system currently has four processes. The following table lists
their current priority, and the amount of CPU they will want to use
before blocking.
Process | Current Priority | Future CPU Use (ms)
--------+------------------+---------------------
   1    |        0         | 10, 5, 5, 8, 12
   2    |        1         | 20, 20, 20, 20
   3    |        4         | 20, 5, 20, 5, 20, 5
   4    |        8         | 2, 10, 8, 8, 8, 8
i.e. Process 1 wants to do 10ms then block, 5ms then block, 5ms then
block etc.
Which process will finish executing first? Which process will finish
last? Write out the scheduling of each process in turn, with details
of its priority as it is returned to the Ready queues.
Answer to Question One - Straight Back to Ready Queue
I wrote a Perl program to simulate the scheduling, assuming that any
process which uses all of or exceeds its timeslice (i.e. runs for the
full 10ms) is put straight back onto a ready queue one priority worse.
Here is what it printed out:
About to run process 1, was prio 0
Process 1 runs for 10 out of 10 ms, pre-empted
Process 1 goes to ready queue 1
About to run process 2, was prio 1
Process 2 runs for 10 out of 20 ms, pre-empted
Process 2 goes to ready queue 2
About to run process 1, was prio 1
Process 1 runs for 5 ms
Process 1 goes to ready queue 0
About to run process 1, was prio 0
Process 1 runs for 5 ms
Process 1 goes to ready queue 0
About to run process 1, was prio 0
Process 1 runs for 8 ms
Process 1 goes to ready queue 0
About to run process 1, was prio 0
Process 1 runs for 10 out of 12 ms, pre-empted
Process 1 goes to ready queue 1
About to run process 1, was prio 1
Process 1 runs for 2 ms
Process 1 has exited
About to run process 2, was prio 2
Process 2 runs for 10 out of 10 ms, pre-empted
Process 2 goes to ready queue 3
About to run process 2, was prio 3
Process 2 runs for 10 out of 20 ms, pre-empted
Process 2 goes to ready queue 4
About to run process 3, was prio 4
Process 3 runs for 10 out of 20 ms, pre-empted
Process 3 goes to ready queue 5
About to run process 2, was prio 4
Process 2 runs for 10 out of 10 ms, pre-empted
Process 2 goes to ready queue 5
About to run process 3, was prio 5
Process 3 runs for 10 out of 10 ms, pre-empted
Process 3 goes to ready queue 6
About to run process 2, was prio 5
Process 2 runs for 10 out of 20 ms, pre-empted
Process 2 goes to ready queue 6
About to run process 3, was prio 6
Process 3 runs for 5 ms
Process 3 goes to ready queue 5
About to run process 3, was prio 5
Process 3 runs for 10 out of 20 ms, pre-empted
Process 3 goes to ready queue 6
About to run process 2, was prio 6
Process 2 runs for 10 out of 10 ms, pre-empted
Process 2 goes to ready queue 7
About to run process 3, was prio 6
Process 3 runs for 10 out of 10 ms, pre-empted
Process 3 goes to ready queue 7
About to run process 2, was prio 7
Process 2 runs for 10 out of 20 ms, pre-empted
Process 2 goes to ready queue 8
About to run process 3, was prio 7
Process 3 runs for 5 ms
Process 3 goes to ready queue 6
About to run process 3, was prio 6
Process 3 runs for 10 out of 20 ms, pre-empted
Process 3 goes to ready queue 7
About to run process 3, was prio 7
Process 3 runs for 10 out of 10 ms, pre-empted
Process 3 goes to ready queue 8
About to run process 4, was prio 8
Process 4 runs for 2 ms
Process 4 goes to ready queue 7
About to run process 4, was prio 7
Process 4 runs for 10 out of 10 ms, pre-empted
Process 4 goes to ready queue 8
About to run process 2, was prio 8
Process 2 runs for 10 out of 10 ms, pre-empted
Process 2 has exited
About to run process 3, was prio 8
Process 3 runs for 5 ms
Process 3 has exited
About to run process 4, was prio 8
Process 4 runs for 8 ms
Process 4 goes to ready queue 7
About to run process 4, was prio 7
Process 4 runs for 8 ms
Process 4 goes to ready queue 6
About to run process 4, was prio 6
Process 4 runs for 8 ms
Process 4 goes to ready queue 5
About to run process 4, was prio 5
Process 4 runs for 8 ms
Process 4 has exited
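The scheduling rules above can be checked mechanically. The original
simulator was Perl; what follows is a minimal Python sketch of the same
rules (timeslice, priority adjustment, tail-of-queue requeueing), with
the process data taken from the question's table. It reproduces the
exit order shown in the trace.

```python
# Python sketch of the first simulation's rules (the original program
# was Perl; the process data comes from the question's table).

TIMESLICE = 10      # ms
WORST_PRIO = 16     # priorities run from 0 (best) to 16 (worst)

def simulate(processes):
    """processes maps pid -> (initial priority, list of CPU bursts in ms).
    Returns the order in which the processes exit."""
    queues = [[] for _ in range(WORST_PRIO + 1)]
    state = {}              # pid -> [priority, remaining ms, later bursts]
    for pid, (prio, bursts) in processes.items():
        state[pid] = [prio, bursts[0], list(bursts[1:])]
        queues[prio].append(pid)
    exit_order = []
    while any(queues):
        # head of the highest-priority (lowest-numbered) non-empty queue
        q = next(i for i, qu in enumerate(queues) if qu)
        pid = queues[q].pop(0)
        prio, remaining, later = state[pid]
        if remaining >= TIMESLICE:            # uses all of (or more than) its slice
            remaining -= TIMESLICE
            prio = min(prio + 1, WORST_PRIO)  # priority worsens
            if remaining == 0:                # burst ended exactly on the slice
                if not later:
                    exit_order.append(pid)
                    continue
                remaining = later.pop(0)
        else:                                 # blocks before the slice is up
            prio = max(prio - 1, 0)           # priority improves, floored at 0
            if not later:
                exit_order.append(pid)
                continue
            remaining = later.pop(0)
        state[pid] = [prio, remaining, later]
        queues[prio].append(pid)              # tail of its new ready queue
    return exit_order
```

Calling `simulate({1: (0, [10, 5, 5, 8, 12]), 2: (1, [20, 20, 20, 20]),
3: (4, [20, 5, 20, 5, 20, 5]), 4: (8, [2, 10, 8, 8, 8, 8])})` returns
`[1, 2, 3, 4]`, matching the trace above.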
Answer to Question One - Blocked Before Return to Ready Queue
I then wrote a second Perl program to simulate the scheduling. This
time, any process which blocks before using all of its timeslice is not
immediately put back on a ready queue. Instead, a different process is
chosen to run, and only when that process starts running is the blocked
process put back onto a ready queue. Here are the results of the
simulation:
About to run process 1, was prio 0
Process 1 runs for 10 out of 10 ms, pre-empted
Process 1 goes to ready queue 1
About to run process 2, was prio 1
Process 2 runs for 10 out of 20 ms, pre-empted
Process 2 goes to ready queue 2
About to run process 1, was prio 1
Process 1 runs for 5 ms
About to run process 2, was prio 2
Blocked process 1 goes to ready queue 0
Process 2 runs for 10 out of 10 ms, pre-empted
Process 2 goes to ready queue 3
About to run process 1, was prio 0
Process 1 runs for 5 ms
About to run process 2, was prio 3
Blocked process 1 goes to ready queue 0
Process 2 runs for 10 out of 20 ms, pre-empted
Process 2 goes to ready queue 4
About to run process 1, was prio 0
Process 1 runs for 8 ms
About to run process 3, was prio 4
Blocked process 1 goes to ready queue 0
Process 3 runs for 10 out of 20 ms, pre-empted
Process 3 goes to ready queue 5
About to run process 1, was prio 0
Process 1 runs for 10 out of 12 ms, pre-empted
Process 1 goes to ready queue 1
About to run process 1, was prio 1
Process 1 runs for 2 ms
Process 1 has exited
About to run process 2, was prio 4
Process 2 runs for 10 out of 10 ms, pre-empted
Process 2 goes to ready queue 5
About to run process 3, was prio 5
Process 3 runs for 10 out of 10 ms, pre-empted
Process 3 goes to ready queue 6
About to run process 2, was prio 5
Process 2 runs for 10 out of 20 ms, pre-empted
Process 2 goes to ready queue 6
About to run process 3, was prio 6
Process 3 runs for 5 ms
About to run process 2, was prio 6
Blocked process 3 goes to ready queue 5
Process 2 runs for 10 out of 10 ms, pre-empted
Process 2 goes to ready queue 7
About to run process 3, was prio 5
Process 3 runs for 10 out of 20 ms, pre-empted
Process 3 goes to ready queue 6
About to run process 3, was prio 6
Process 3 runs for 10 out of 10 ms, pre-empted
Process 3 goes to ready queue 7
About to run process 2, was prio 7
Process 2 runs for 10 out of 20 ms, pre-empted
Process 2 goes to ready queue 8
About to run process 3, was prio 7
Process 3 runs for 5 ms
About to run process 4, was prio 8
Blocked process 3 goes to ready queue 6
Process 4 runs for 2 ms
About to run process 3, was prio 6
Blocked process 4 goes to ready queue 7
Process 3 runs for 10 out of 20 ms, pre-empted
Process 3 goes to ready queue 7
About to run process 4, was prio 7
Process 4 runs for 10 out of 10 ms, pre-empted
Process 4 goes to ready queue 8
About to run process 3, was prio 7
Process 3 runs for 10 out of 10 ms, pre-empted
Process 3 goes to ready queue 8
About to run process 2, was prio 8
Process 2 runs for 10 out of 10 ms, pre-empted
Process 2 has exited
About to run process 4, was prio 8
Process 4 runs for 8 ms
About to run process 3, was prio 8
Blocked process 4 goes to ready queue 7
Process 3 runs for 5 ms
Process 3 has exited
About to run process 4, was prio 7
Process 4 runs for 8 ms
Blocked process 4 goes to ready queue 6
About to run process 4, was prio 6
Process 4 runs for 8 ms
Process 4 has exited
In both simulations, process 1 finishes executing first and process 4
finishes last.
Question Two
The Pentium has pages that are 4,096 bytes in size. The MMU is using
the following page table:
Logical Page | Physical Page Frame
-------------+--------------------
      0      |         27
      1      |         41
      2      |      invalid
      3      |         12
      4      |         28
      5      |         44
      6      |         12
Determine the physical address that will be referenced when a process
reads from these logical locations:
- 560
- 12,290
- 24,575
- 8,000
If the process writes to logical address 15,000, determine what other
logical address will be affected, and explain why this other address
will be affected.
Answer to Question Two
By multiplying all the page and frame numbers by 4,096, we get the
base address of each page and frame:
Logical Page Base Address | Physical Page Frame Base Address
--------------------------+---------------------------------
            0             |             110592
         4096             |             167936
         8192             |            invalid
        12288             |              49152
        16384             |             114688
        20480             |             180224
        24576             |              49152
- Logical address 560 is on the page with base 0 at offset 560, so
that's the physical address 110592 + 560 = 111152.
- Logical address 12290 is on the page with base 12288 at offset 2, so
that's the physical address 49152 + 2 = 49154.
- Logical address 24575 is on the page with base 20480 at offset 4095,
so that's the physical address 180224 + 4095 = 184319.
- Logical address 8000 is on the page with base 4096 at offset 3904, so
that's the physical address 167936 + 3904 = 171840.
- If a process writes to logical address 15000, this is on the page
with base address 12288 at offset 2712, so that's the physical address
49152 + 2712 = 51864. However, the frame at physical address 49152
is also mapped to the page at logical address 24576, so the write
to logical 15000, physical 51864 will also affect the logical address
24576 + 2712 = 27288.
Just to be clear on this answer, below are two tables showing both
logical addresses mapping to the same physical address:
Logical Page (base address) | Physical Frame (base address) | Aliased Logical Page (base address)
----------------------------+-------------------------------+------------------------------------
3*4096 (12288)              | 12*4096 (49152)               | 6*4096 (24576)

Logical Page + offset (address) | Physical Frame + offset (address) | Aliased Logical Page + offset (address)
--------------------------------+-----------------------------------+----------------------------------------
3*4096+2712 (15000)             | 12*4096+2712 (51864)              | 6*4096+2712 (27288)
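The address arithmetic above can be checked with a short sketch
(Python; the page table is the one given in the question, with the
invalid logical page 2 simply absent from the mapping):

```python
PAGE_SIZE = 4096

# logical page -> physical frame; logical page 2 is invalid (absent)
page_table = {0: 27, 1: 41, 3: 12, 4: 28, 5: 44, 6: 12}

def translate(logical):
    """Split a logical address into a page number and an offset,
    then rebase the offset onto the mapped physical frame."""
    page, offset = divmod(logical, PAGE_SIZE)
    if page not in page_table:
        raise ValueError("page fault: logical page %d is invalid" % page)
    return page_table[page] * PAGE_SIZE + offset
```

`translate(560)` gives 111152, `translate(12290)` gives 49154,
`translate(24575)` gives 184319 and `translate(8000)` gives 171840.
`translate(15000)` and `translate(27288)` both give 51864, which is the
aliasing that makes the write to 15000 visible at 27288.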
Question Three
A computer has five page frames. The time of loading, time of last
access, and the accessed (A) and modified (M) bits for each page frame
are as shown below (times are in clock ticks; smaller numbers are
older):
Frame | Load Time | Access Time | A | M
------+-----------+-------------+---+---
  0   |    700    |      -      | 0 | 0
  1   |    220    |     600     | 1 | 0
  2   |    500    |     510     | 1 | 1
  3   |    450    |     500     | 0 | 1
  4   |    400    |     590     | 1 | 1
- Which page frame will NRU replace? Explain why.
- Which page frame will LRU replace? Explain why.
- Which page frame will FIFO replace? Explain why.
- Which page frame will Second Chance replace? Explain why.
Answer to Question Three
- NRU categorises each frame into one of four classes:
- Class 1: not accessed, not modified
- Class 2: not accessed, modified
- Class 3: accessed, not modified
- Class 4: accessed, modified
NRU chooses a frame from the lowest-numbered non-empty class to replace.
Here, frame 0 is not accessed and not modified (Class 1), so frame 0
will be replaced.
- LRU picks the frame unused for the longest time to replace. Here,
frame 0 has never been used, so it has been unused for the longest
and would be replaced.
- FIFO replaces the frame that has been in memory the longest. Here,
frame 1 was loaded at time 220, the oldest load time, so it would be
replaced.
- Second Chance examines the frames in FIFO order, starting with the
oldest. If a frame's A bit is 0, it is replaced. Otherwise, its A bit
is cleared, it is treated as if just loaded (sent to the back of the
FIFO order), and the algorithm repeats. Here, frame 1 (loaded at 220)
is the oldest but has A=1, so it is skipped; frame 4 (loaded at 400)
also has A=1 and is skipped. The next oldest frame is frame 3 (loaded
at 450) with A=0, so frame 3 would be replaced.
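The four policies can be expressed as small victim-selection functions
over the frame table (a Python sketch; the tuple encoding of the table
is my own, and the NRU class ordering is the (A, M) ordering used
above):

```python
# frame -> (load time, time of last access or None, A bit, M bit)
frames = {
    0: (700, None, 0, 0),
    1: (220, 600, 1, 0),
    2: (500, 510, 1, 1),
    3: (450, 500, 0, 1),
    4: (400, 590, 1, 1),
}

def nru(frames):
    # lowest class first: ordering by (A, M) matches Classes 1-4 above
    return min(frames, key=lambda f: (frames[f][2], frames[f][3], f))

def lru(frames):
    # a never-accessed frame counts as least recently used
    return min(frames, key=lambda f: -1 if frames[f][1] is None else frames[f][1])

def fifo(frames):
    # oldest load time goes first
    return min(frames, key=lambda f: frames[f][0])

def second_chance(frames):
    order = sorted(frames, key=lambda f: frames[f][0])   # FIFO order
    a = {f: frames[f][2] for f in frames}
    i = 0
    while True:
        f = order[i % len(order)]
        if a[f] == 0:
            return f        # no second chance left: replace it
        a[f] = 0            # clear A and move on, as if freshly loaded
        i += 1
```

On this table the functions pick frame 0 (NRU), frame 0 (LRU),
frame 1 (FIFO) and frame 3 (Second Chance), agreeing with the
reasoning above.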
Question Four
Semaphores are usually implemented by the OS, so that a process is
blocked by the OS if it cannot acquire a semaphore, and made ready
when it does acquire the semaphore.
An alternative method of implementing semaphores is to allow a process
to turn off pre-emption to enter a critical section; this stops any
other process from running, making the critical section atomic.
Suggest reasons why this method is generally never used to implement
semaphores.
Answer to Question Four
If a process turns off pre-emption, there is no guarantee that it will
ever turn pre-emption back on. This would be acceptable if the process
were I/O-bound, since it would block as soon as it made an I/O system
call. However, if the process is CPU-bound, it now has exclusive access
to the CPU and will never be forced to relinquish it. On an interactive
operating system, all user interaction would stop; in fact, no other
process would ever get the CPU again. Because a process may never give
up the CPU, this mechanism is not used to implement semaphores.
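The conventional OS-style implementation mentioned at the start of the
question can be sketched at user level (a Python sketch, with a
condition variable standing in for the OS's block/ready transitions;
the class name is my own):

```python
import threading

class BlockingSemaphore:
    """Sketch of the usual semaphore implementation: a caller that
    cannot acquire the semaphore is blocked, and made ready again
    on release, rather than being allowed to disable pre-emption."""
    def __init__(self, value=1):
        self._value = value
        self._cond = threading.Condition()

    def acquire(self):
        with self._cond:
            while self._value == 0:
                self._cond.wait()       # block the caller
            self._value -= 1

    def release(self):
        with self._cond:
            self._value += 1
            self._cond.notify()         # make one blocked caller ready
```

While one thread waits in `acquire()`, every other thread keeps
running; the whole point is that only the waiter is stopped, not the
rest of the system.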
Question Five
Explain why context switching between threads in the same process
is faster than switching between threads that belong to different
processes. An OS designer might choose to prioritise context switches
between threads in the same process; show how this can lead to CPU
starvation for other threads.
Answer to Question Five
The threads within a process share the process' memory map. When we
need to context switch between threads within a process, we only have
to save and restore the registers of the threads; we don't have to
save and disable one memory map, and reload and re-enable a second
memory map. We save time skipping the memory map work, and so the
context switch between the threads is faster than between separate
processes.
As this context switch is faster, a designer might choose to prioritise
context switches between threads in the same process, so as to minimise
the time wasted doing context switches. However, this can lead to
CPU starvation for other threads and processes.
Consider a process with three threads, all of which are CPU-bound, and
a second process with a single thread which is also CPU-bound. If we
prioritise switches between threads in the same process, the first
process effectively becomes its own ready queue of three threads: as
soon as one of its threads is selected to run, the scheduler keeps
preferring the other two threads in that process. The three threads
will be selected over and over, and the single thread in the second
process will be left at a lower effective priority and starved of CPU
time.
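The starvation argument can be made concrete with a toy scheduler
sketch (Python; the process and thread labels are hypothetical):

```python
from collections import deque

def schedule(threads, steps):
    """Toy scheduler over CPU-bound threads, given as (process, thread)
    pairs. After each run it prefers a ready thread from the process
    that just ran, modelling 'prioritise same-process switches'."""
    ready = deque(threads)
    ran = []
    current_proc = None
    for _ in range(steps):
        # prefer a thread of the process that just ran, else the head
        pick = next((t for t in ready if t[0] == current_proc), ready[0])
        ready.remove(pick)
        ran.append(pick)
        ready.append(pick)      # CPU-bound: straight back onto the queue
        current_proc = pick[0]
    return ran

# Three threads in process A, one thread in process B.
history = schedule([("A", 1), ("A", 2), ("A", 3), ("B", 1)], steps=30)
```

In `history`, process A's three threads cycle indefinitely and
`("B", 1)` never appears: B's single thread is starved exactly as
argued above.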