Categories: 2016, Operating System

Operating Systems Discussion

GATE 2016

Operating System

Process

What is a process?

A program under execution: that is the informal definition.

try.c:

#include <stdio.h>

int main() {
    printf("Hello");
    return 0;
}

The above program prints "Hello". So now I compile it and make an executable:

gcc try.c -o try

The executable is called try. Now, I execute it

./try

This starts the process. So, what happens now?

Our program is currently on disk. The long term scheduler brings it to memory. Our executable is just a stream of bytes corresponding to the machine instructions to print "Hello". But when it becomes a process it has more components. These are needed by the OS mainly because it has to handle multiple processes and multiple users at a time. So, for each process a PCB is created, which holds the important info needed for a process. One thing there is the "process state". So, what will be the initial state of "try"?

Yes, ready. This means the (short term) scheduler in the OS can pick it for execution anytime, at which point the state changes to running.

Now, say I run this code and get the output "Hello". After the process changes to the running state, will it be changed to blocked or ready, assuming a normal system?

printf is non-blocking, so the process terminates, or it might go to the ready state in between as another process comes in. Usually multiple context switches happen within 1 s on a normal PC.

Now, threads. What are they? Why do we need them?

There are many cases where we need a process to do multiple jobs at a time, like keeping a progress bar running while doing some processing. For this we have threads, which are like sub-processes but not entirely processes. A process might have a single thread or multiple threads; by default every process has a single thread. To create multiple threads we use libraries like pthreads; languages like Java have built-in thread support.
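A minimal sketch with the POSIX pthreads library (assuming a POSIX system; compile with gcc file.c -pthread):

#include <stdio.h>
#include <pthread.h>

/* second thread: could, e.g., keep a progress bar updated */
void *worker(void *arg) {
    printf("Hello from the worker thread\n");
    return NULL;
}

int main() {
    pthread_t tid;
    pthread_create(&tid, NULL, worker, NULL); /* the process now has 2 threads */
    printf("Hello from the main thread\n");
    pthread_join(tid, NULL);                  /* wait for the worker to finish */
    return 0;
}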

Now, there are 2 types of threads- user level and kernel level. Difference?

A user level thread is invisible to the kernel, so it is like just a function call and hence fast. But for the same reason, if one of the threads makes a blocking call, like a scanf, then all threads of the process get blocked. To handle this we can map the user level threads to distinct kernel level threads, but this needs kernel support. Kernel threads, on the other hand, are more time consuming to manage.

http://gateoverflow.in/706/gate2001_1-13

(Reference: CSE675.02_MIPS-ISA_part3.pdf)

Inter Process Communication

Message queue

Shared memory

Socket Programming

Synchronisation

Let us start with synchronisation. We know that 2 processes on different computers can communicate using the ISO-OSI protocol stack, right?

But when 2 processes on the same host need to communicate, we need synchronisation.

Now, what do we mean by synchronisation? The processes in a host may be independent as well as dependent.

For independent processes there is no issue. But for dependent ones we need synchronisation so that we get correct results.

Take an example with 2 processes P1 and P2, where P1's output is P2's input; in this case we need synchronisation.

Okay, let's move ahead. Before talking about synchronisation, let us understand some important terms.

1) Critical section. So tell me, what is a critical section?

A critical section is a piece of code that accesses a shared resource (e.g., count in the example below) and that must not be executed by more than one process concurrently.

Got this ?

Sir, I mean the one I said in the chat,

the P1 and P2 code (the count example).

yes, got it…

P1:

Read count

Add count, 1

Write count

P2:

Read count

Sub count, 1

Write count

Okay, the second term we need to understand is 2) Race condition: when the execution order of instructions determines the result on a shared resource, we have a race condition. The one who completes last (or later) will win it.

fine ?

Okay, let's move ahead. Before moving on to the conditions for synchronisation,

just want to say that every critical section has an entry and an exit section.

(We will understand their importance in the Reader-Writer problem.)

An entry section is basically a request by a process to enter the critical section.

Say a process P1 has entered and another process P2 wants to enter the critical section; then the entry section won't allow it. Once we are done with the CS we exit, and after the exit the entry section will allow process P2 in.

The relation between CS and race condition is not very clear. How are they related in the previous example? Ohh, see, count is a shared resource, right? So only one process at a time may access it. That access is nothing but a critical section: you don't want your count to be accessed by another process, say P2, at the same time.

Okay. So a race condition is solved by moving the code that accesses count into a critical section, right?

yes

P1 : load count (5)

P1: count = count+1

P2: load count (5)

P2: count = count -1

P2: write count (4)

P1: write count (6).

This is the issue, right? The update of P2 got lost.
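The same lost update can be reproduced with two POSIX threads sharing count; a sketch (thread names p1/p2 are mine; count++ and count-- compile to exactly the read-modify-write sequences above, so the final value is unpredictable):

#include <stdio.h>
#include <pthread.h>

int count = 5;                    /* shared resource */

void *p1(void *arg) { for (int i = 0; i < 100000; i++) count++; return NULL; }
void *p2(void *arg) { for (int i = 0; i < 100000; i++) count--; return NULL; }

int main() {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, p1, NULL);
    pthread_create(&t2, NULL, p2, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("count = %d\n", count); /* expected 5, but updates can get lost */
    return 0;
}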

By making the access to count a CS, we ensure either P1 or P2 accesses it at a time, but not both. Now we solved our problem but caused many other problems. Like?

Problems happen because now we have to ensure only one process gets access to critical section. How to do this?

Okay, now assuming we have understood race condition and critical section,

let's move to the conditions for synchronisation.

There are certain conditions that every synchronisation mechanism must satisfy.

1) Mutual exclusion: no 2 processes can be in the critical section at the same time.

2) Progress: if a process, say P1, doesn't want to enter the CS, it shouldn't block other processes either. Say P2 wants to go; it can go. P1 shouldn't block P2. Only those processes that want to enter may take part in this decision, and the decision must be made within a definite time.

The above 2 are primary requirements

and the next 2 are secondary ones.

3) Bounded waiting: there is no fixed limit on the number of times a process may enter the CS (for example, I can't say P1 should go only 2 times; it can go whenever it wants).

From Galvin: there exists a bound on the number of times other processes are allowed to enter the CS after a process has made a request to enter the CS and before that request is granted.

4) Portability: it should work across all platforms. For example, the INC instruction is not supported on all platforms.

Did you get all 4 conditions? (Please reply.)

Galvin says bounded waiting is a "must" satisfy requirement.

So, there was this question: does progress imply bounded waiting?

Also, does bounded waiting imply progress?

Progress = a process may get access to the CS any number of times if no other process is competing for it.

Bounded waiting = it should not be the case that a process waits infinitely long to get into the CS; the waiting should be bounded.

That is the word meaning of bounded waiting, but the formal definition does not agree with it. Bounded waiting doesn't imply progress.

Agree?

What’s deadlock?

two goats trying to cross a narrow bridge.

So, whats livelock?

Two people trying to get past each other, but both again and again making counter-moves to each other, such that neither gets past the other.

So, the thing is, goats don't make counter-moves? 🙂

Do deadlock or livelock satisfy progress?

Which satisfies bounded waiting?

Whether it is livelock or deadlock, no process involved makes any further progress.

So there is neither progress nor bounded wait; the waiting is infinite, everyone waiting for the other, either in a dead state or a live state.

A dead state means doing nothing;

a live state means doing something but not making progress.

Yes, progress is absent in both these locks, because the wait to enter the CS becomes "indefinite", violating the progress definition.

But bounded waiting is indeed satisfied. This is because the bounded waiting definition speaks of a "bound on the number of times other processes enter the CS", and in a deadlock/livelock this holds (no one enters). This shows the importance of following the definition exactly.

The difference between livelock and deadlock is that in a deadlock there is no execution, while in a livelock the processes keep executing, trying to get out; they just can't.

A finite bound on the number of times other processes enter the CS = bounded waiting? But isn't this bound infinite here?

Bounded waiting means after some time the critical section is ensured?

Here no other process is entering the CS at all, so the count is 0. The bound is not with respect to "time" as per the definition.

If not with respect to time, then with respect to what?

The bound is on the number of times other processes are allowed to enter the CS after this process requests CS access and before that access is granted.

Now, synchronisation mechanisms are of 2 types: 1) busy waiting

2) non-busy waiting.

An example of busy waiting: if a process P1 is in the CS and another process P2 wants to enter, P2 keeps waiting by checking a condition repeatedly, like while(flag == 2);

In other words, P2 is simply wasting CPU cycles.

An example of non-busy waiting?

For non-busy waiting: if a process P1 is in the CS and P2 also wants to enter, P2 checks just once. Since it finds that someone is already in the CS, it goes to sleep. When P1 completes its work, it wakes up P2.

Right?

Yes. There is no continuous checking of the condition, so CPU cycles can be saved.

Yes, CPU cycles are saved in non-busy waiting.
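A sketch of the non-busy style using a POSIX mutex and condition variable (the names m, cv and in_cs are mine; contrast with the spinning while(flag == 2); above):

#include <pthread.h>

pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
pthread_cond_t cv = PTHREAD_COND_INITIALIZER;
int in_cs = 0;                       /* 1 while someone is in the CS */

void enter_cs(void) {
    pthread_mutex_lock(&m);
    while (in_cs)
        pthread_cond_wait(&cv, &m);  /* sleep instead of spinning; m is released while asleep */
    in_cs = 1;
    pthread_mutex_unlock(&m);
}

void exit_cs(void) {
    pthread_mutex_lock(&m);
    in_cs = 0;
    pthread_cond_signal(&cv);        /* the wakeup P1 gives to the sleeping P2 */
    pthread_mutex_unlock(&m);
}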

okay.. now, we need to see some solutions 🙂 till now only problems..

There are various solutions to achieve synchronisation:

Software approach: 1) strict alternation and Dekker's algorithm 2) Peterson's algorithm

Hardware approach: 1) test and set 2) exchange and swap

And OS supported solutions: semaphores

and monitors.

yes and almost all these have been asked multiple times for GATE.. one of their favorite question area..

Test and set: whats the importance of this?

We test a condition and based on it we set a variable. Usually we check whether we got the turn and, if so, record that we have taken it. Test and set is an "atomic" instruction, which ensures no other process interferes in between.

Issue: the hardware must support it.
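A sketch of a spinlock built on C11's atomic_flag, whose atomic_flag_test_and_set is exactly an atomic test-and-set:

#include <stdatomic.h>

atomic_flag lock = ATOMIC_FLAG_INIT;

void acquire(void) {
    /* atomically: set the flag and return its old value; spin while it was already set */
    while (atomic_flag_test_and_set(&lock))
        ;                            /* busy wait */
}

void release(void) {
    atomic_flag_clear(&lock);
}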

Strict alternation issues? It works only for 2 processes.

https://en.wikipedia.org/wiki/Dekker%27s_algorithm

Is progress satisfied? Yes. Why? The progress definition just says that when a process wants to enter the CS, only those processes "NOT" executing in their remainder section can participate in the decision making. This is different from saying that only the processes which want to enter the CS participate in the decision making. Got it?

Peterson's algorithm issues?

It works only for 2 processes.

Progress? yes

Bounded waiting? yes.

For n processes:

Bounded waiting? No.

https://en.wikipedia.org/wiki/Peterson%27s_algorithm
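A sketch of Peterson's algorithm for processes 0 and 1 (the flag and turn variables follow the usual textbook presentation; on modern out-of-order hardware this additionally needs memory fences to work correctly):

int flag[2] = {0, 0};  /* flag[i] = process i wants to enter the CS */
int turn;              /* which process yields */

void enter(int i) {
    int j = 1 - i;
    flag[i] = 1;                   /* I want in */
    turn = j;                      /* but you go first */
    while (flag[j] && turn == j)
        ;                          /* busy wait only while j also wants in */
}

void leave(int i) {
    flag[i] = 0;
}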

Bakery algorithm: Satisfies all requirements.

Semaphores

Important: has two atomic operations wait (P(s)) and signal (V(s)).

The main reason semaphores came into the picture is that wakeup signals (as in producer-consumer) should not be wasted.

A semaphore ensures that out of all n processes, only 1 at a time gets access.

Agree?

No. That holds only for a binary semaphore. For a general semaphore, "count" number of processes can succeed in wait and thus enter the CS.

Now, semaphores are of 2 types. 1) Binary semaphore, which can take the value 0 or 1: it allows 1 process in the CS at a time.

2) Counting semaphore: it can have positive, 0 and negative values, and multiple processes can be in the CS. A positive value indicates how many more processes can still succeed in wait; a negative value indicates the number of processes that got blocked and didn't get a chance to enter the CS, so they are put in a queue (I mean, their PCBs are put in a queue).

Agree ?

why pcb?

When we say processes are put in a queue, each process is represented by its PCB, so I can even say the PCBs are queued.

@arjun sir

Yes. We need all the info of a process, so the PCB is the natural thing to queue. Right?

yes

There are 2 operations that can be carried out on a semaphore: P and V.

And yes, binary semaphores are also called mutexes.


P(s) and V(s) what they do? Assume binary..

P(s) decrements the semaphore value. For a binary semaphore, if the value is already 0, P(s) blocks: the value can't go negative. This is how only 1 process at a time gets into the CS.

Yes. Basically the binary value 1 says the CS is free to take, and 0 says it is locked. The importance of the semaphore operations is that they are atomic, ensuring that between checking the lock and taking it, no other process interferes.

So, another question: to implement a semaphore, do we need hardware support? With the Test_And_Set hardware instruction we can implement a semaphore, as the semaphore operations map straight onto this instruction.

Can we make the semaphore operations P(s) and V(s) themselves critical sections? How does this help us do without the hardware instruction?

We can use any of the "synchronisation" mechanisms we saw earlier, like Dekker's or Peterson's algorithm. We don't need explicit hardware support for implementing a semaphore, though it would speed up the implementation.
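A sketch of P and V for a counting semaphore in the style just described. enter_cs/exit_cs stand for any of the mutual exclusion mechanisms above, and block/wakeup_one are placeholders for the OS sleep/wakeup primitives, so this is pseudocode in C syntax, not a complete implementation:

typedef struct {
    int value;        /* when negative, -value = number of blocked processes */
    /* a queue of blocked PCBs would live here */
} semaphore;

void P(semaphore *s) {
    enter_cs();                     /* make the whole operation atomic */
    s->value--;
    if (s->value < 0) {
        /* put this process's PCB on s's queue, then */
        exit_cs();
        block();                    /* sleep until some V() wakes us */
        return;
    }
    exit_cs();
}

void V(semaphore *s) {
    enter_cs();
    s->value++;
    if (s->value <= 0)
        wakeup_one(s);              /* move one PCB from s's queue to ready */
    exit_cs();
}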

Deadlock.

Why deadlock? When a process waits for a resource and that resource is held by another process which in turn waits for a resource held by the first process directly or via a chain, deadlock happens. Once deadlock occurs, no process in the chain can proceed. So, conditions for deadlock are

Mutual Exclusion
Hold and wait
No preemption
Circular wait

Yes. If processes were allowed to share resources, deadlock could not happen; that is why mutual exclusion is condition 1. Now, if a process waiting for a resource released all the resources already held by it, then also deadlock could not occur; hold and wait is condition 2. Condition 3, no preemption, says a resource cannot be forcibly taken away from the process holding it. Condition 4 is already stated in the definition: a circular wait.

What is the difference between hold and wait & circular wait?

Circular wait says that P1 waits for P2, P2 for P3 and so on finally some Pn waits for P1. But hold and wait just says that when a process waits for a resource it must be holding some other resource.

My first statement regarding “implication” was wrong. You got why?

Yes. I would say the same thing (the difference you stated between circular wait and hold and wait).

Yes. But does circular wait imply hold and wait? Hold and wait seems to be a necessary condition for circular wait, right?

Yes, from the definitions this implication seems true. But I'm not 100% sure; shall confirm later.

Can I say every circular wait implies hold and wait, but the converse is not true?

It is clear that hold and wait is a less strict condition: it doesn't imply circular wait. Circular wait, in turn, seems to imply hold and wait.

Sir, in Galvin they have clearly mentioned that of all 4 conditions, circular wait and hold and wait are somewhat dependent on each other.

Yes, just checked. It is clearly said that "circular wait" implies "hold and wait". So, that part is over 🙂

Now, how to handle deadlock?

Deadlock prevention
Deadlock avoidance
Deadlock recovery
Ignore the problem 🙂

The fourth solution is the easiest and hence commonly used; the result is that sometimes the OS hangs and we have to reboot. Can someone explain the other 3?

Deadlock prevention: we stop the conditions for deadlock from ever arising. E.g., resource sharing to break mutual exclusion, or preemption to break the no-preemption condition. Basically, in deadlock prevention we ensure that at least one of the four deadlock characteristics is never met.

Of the 4 handling methods, 1 & 2 act before a deadlock occurs, and 3 & 4 after the deadlock has occurred.

Yes, prevention and avoidance ensure deadlock never happens. But the problem is that they are over-restrictive and hence reduce system utilization. Yes.

Still, we have to study them 🙂

Deadlock prevention – we learnt 4 necessary conditions for deadlock. So, ensure at least 1 of them never happens.

Deadlock avoidance- Here safe state comes.

Define safe state?

omg , very much fun here 🙂

Safe state is one where deadlock-free execution of processes can occur.

yes. A safe state means there is a way to avoid deadlock- a safe sequence of execution exists so as to ensure no deadlock. This part is important- because many people make mistake here.

An unsafe state doesn’t mean deadlock.

Yes, an unsafe state may or may not lead to deadlock. Sometimes everything can complete successfully also.

Any example?

Consider a state in which, if a process P1 requests resource R2, a deadlock happens. This state (before the request) is an unsafe state. But only if such a request actually comes do we reach a deadlock. As long as no such request comes, there is no deadlock, though the state is unsafe. So, unsafe means "risk" of deadlock, not deadlock itself.

Good resource to learn about deadlocks: http://web.cs.wpi.edu/~cs3013/c07/lectures/Section07-Deadlocks.pdf 🙂

I have one question. Does frozen state mean the same as deadlock state?

"Frozen"? I suppose so, but I've never seen it. Usually GATE questions don't confuse with words, unlike other exams.

Thanks for the link- I’ll update on website 🙂

Okay, sorry. This was also mentioned in Galvin.

which section?

7.3

They have said a system can be in a frozen state but not in a deadlock state; that's all, no further explanation. Okay, the section name is: Methods for Handling Deadlocks.

yes okay. So the difference is this.

A frozen state means the system doesn't respond to new processes; there is no deadlock there. This happens when we simply run a process that utilizes all system resources and never finishes. So, Galvin says this state can happen anytime and the OS must handle such cases, and the same handling mechanism can be used for deadlocks too. Got it? ok 🙂

So, the most commonly used algorithm in books: Banker's algorithm. Can someone explain this informally? Only the main points.

The Banker's algorithm tells us whether, after allocating the requested resources, the system will still be in a safe state. If it is safe we actually allocate; if not, the system delays that request.

Banker's algorithm works in an iterative manner. In each iteration we check whether some process's remaining need can be met from the available resources. If it can, we assume that process runs to completion and frees all its resources; otherwise we look for another process whose need can be met. If all processes can finish this way, we are in a safe state. The tables maintained are Available, Max, Allocation and Need (Need = Max - Allocation).

There was some problem with the definition above: Banker's algorithm always guarantees that the system stays in a safe state, so any request is granted only if the resulting state is safe. The rest is fine.
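A sketch of the safety check at the heart of Banker's algorithm (the table values are toy numbers of mine; Need = Max - Allocation):

#include <stdio.h>

#define N 3   /* processes */
#define M 2   /* resource types */

int main() {
    int alloc[N][M] = {{0,1},{2,0},{3,0}};
    int max[N][M]   = {{2,2},{3,2},{8,0}};
    int avail[M]    = {3,2};
    int finish[N]   = {0};

    /* keep making passes until no process could be finished in a full pass */
    for (int changed = 1; changed; ) {
        changed = 0;
        for (int i = 0; i < N; i++) {
            if (finish[i]) continue;
            int ok = 1;
            for (int j = 0; j < M; j++)
                if (max[i][j] - alloc[i][j] > avail[j]) ok = 0; /* Need > Available */
            if (ok) {               /* pretend process i runs to completion */
                for (int j = 0; j < M; j++) avail[j] += alloc[i][j];
                finish[i] = 1;
                changed = 1;
            }
        }
    }
    for (int i = 0; i < N; i++)
        if (!finish[i]) { printf("unsafe state\n"); return 0; }
    printf("safe state\n");   /* a safe sequence covering all processes exists */
    return 0;
}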

Why we use bankers algorithm and not Resource allocation graph algorithm?

Because when we have multiple resource instances we cannot use resource allocation graph.

Running time of resource allocation graph alg?

Running time of banker’s alg?

For the resource allocation graph algorithm it is n*n.

For Banker's it is m*n*n, where m is the number of resource types and n is the number of processes.

yes, m = 1 for resource allocation graph.

Yes. Sir, one more thing: in spite of the above reason, it is still said that Banker's is inefficient compared to the resource allocation graph algorithm.

Why so?

I don't know; Galvin 😛 I think because of the number of operations?

In which part of Galvin is it mentioned?

wait sir , just looking

okay..

Now, deadlock detection: similar methods.

Wait-for graph: detect a cycle; applicable when each resource has a single instance; n^2 operations.
An algorithm similar to Banker's for the multiple-instance case: m * n^2 operations.

So, what happens once we detect deadlock?

If a deadlock is detected, we need to recover. Several ways exist:

A human operator can also step in.
We can roll back all processes to a point before the deadlock.
We can preempt resources.
We can even restart from the beginning, but then all work and computation is lost; so we can see which process has done the minimum work and roll back that one.
We can even kill a process.

Good 🙂 Yes, so many ways but no standard way. It depends on the situation; we want to ensure minimum wastage of work and also no starvation, that is, the same process shouldn't be killed every time. To avoid this we can set a bound on the maximum number of times each process can be rolled back.

Can we use LaTeX here, sir? You can install an addon for LaTeX too.

I think it won't work here, but the content will be copied to the webpage and there it'll work. So, you can use LaTeX. Can you give the link please @sir?

http://www.gatecse.in/groups/operating-systems/ Thank you.

How is a page (OS) different from a block (COA)? And when a virtual address is generated, where do block and page come into play? How do we merge these concepts? (And post a reference too.) Anyone answering?

A block is part of a page, and a word/byte is part of a block. We read about blocks in CO and pages in OS. Correct me if I'm wrong.

We normally use “block” for caches. A cache line or cache block means the smallest unit of data it takes in.

“page” is used for virtual memory. We can see more later. Next we can see process scheduling fast and then move to Virtual memory.

@Arjun: Can you give an example where both concepts combine? E.g., a memory address is accessed by a process; where do the block and the page come into the picture?

Load R0, 2000

So, we want to move data from memory location 2000 to R0. Assume system employs virtual memory.

So, what are the sequence of steps to be followed?


2000 here is a virtual address, meaning the actual physical RAM address is not 2000.
2000 must be converted to the actual physical address by the MMU.
First the PTE for 2000 is searched in the TLB, which is like a cache for the page table. Suppose it is not there.
Now the PTE is looked up in the page table, and say we get the actual physical address as 2400.
Now we have to fetch the data from main memory.
First we look in the cache. Suppose we get a cache miss.
Here there are 2 cases: the data is actually present in main memory, or it is a page fault, meaning the data has to be loaded from secondary storage (or it is an invalid memory access).
We just want a byte or word of data; let's assume a word of 4 bytes. But memory usually operates at a bigger granularity, as each memory access is expensive. So, rather than sending a word, it sends data to the cache in blocks. What is a block here? It depends on the cache; 64 bytes is usual on modern Intel machines. Page size, in contrast, is typically a few KB (4 KB commonly, with huge pages going into MBs). We can even say a page is a block, but this block is different from the cache block.
So now the data is in the cache, and the cache got 64 bytes whereas we needed only 4. So it sends only the 4 bytes to the CPU, and this is done using the best mechanism possible.

Sir dont we have cache hit here ?

Yes, in the above point I said memory sends data in "blocks". The cache is checked before the memory access, but we have to assume a cache is present first. Also, there are 4 types of caches. Once we get the physical address we can surely check the cache, but some caches even allow lookup using the virtual address. So this gets complicated; for simplicity we can assume a physically addressed cache. I'll modify point 6 to include the cache.

ok got it .

So does memory relocation happen during this stage?

I mean, do the limit and base registers come into the picture during this stage? Right?

okay.

I haven't said anything about how the virtual-to-physical address translation itself is done; that happens in step 4. We can see it afterwards. I just gave an overview so that later discussions will be easy.

In step 9, I said "best mechanism" so that you can all imagine what the best would be… This won't be asked in GATE.

We haven't yet discussed deadlock recovery.

We have, right? It's just an informal discussion in Galvin. Anything more in other texts?

So, is deadlock over? You have to know Banker's algorithm, which is simple to apply but which we haven't worked through in detail.

Take a break?

Process scheduling is pretty easy.. So we can go fast..

@arjun sir, in the above steps, suppose we have a page fault: how does the MMU know at what disk address the block is stored, given that the CPU generated a virtual address? Does the MMU maintain an index to handle page faults?

Yes, when a process is created, all virtual addresses in use are mapped. If main memory is not sufficient, this is done using swap space on Linux (similar mechanisms exist in other OSes), and this mapping is known to the OS.

Ok sir, I have read somewhere that an inverted page table is used for this purpose. Is that true, sir?

No. The inverted page table is used for a different purpose. We can see it later.

When the page table size becomes large we can use an inverted page table: for each frame of main memory we have one entry.

Okay, so we can be back in 30 mins. I expect someone to quickly explain CPU scheduling, maybe using some example question. We can also define the important terms there.

CPU Scheduling:

Good job 🙂

Just an intro: CPU scheduling matters only when we have multiprogramming, that is, when multiple processes are ready to be executed; when one process goes into a wait (say for IO), another is scheduled to run. This is done by the CPU scheduler, also called the short term scheduler. The short term scheduler then calls a program called the dispatcher, which is responsible for

taking a process from ready to running.

Scheduling takes place under one of these conditions:

1. when a process moves from running to ready (e.g., on a timer interrupt)

2. when a process moves from running to waiting/blocked (for IO)

3. when a process moves from waiting to ready (IO completion)

4. when a process terminates

1 and 3 are basically preemptive, 2 and 4 non-preemptive; when scheduling happens only at 2 and 4, it is called non-preemptive scheduling.

Types of scheduler:

1. Long term scheduler: decides which processes are admitted to the ready queue, i.e., when we start a program execution (say by ./a.out), the long term scheduler decides when it is put into the ready queue (of course this happens in a split second and we often don't even realize it). It is responsible for deciding the degree of multiprogramming.

2. Medium term scheduler: not always needed; used only when physical memory is limited. The medium term scheduler swaps less needed processes out of main memory to disk, and vice versa.

3. Short term scheduler: this is the CPU scheduler, which decides which process gets the running state. It is named "short" because it runs frequently.

So, let's see some terms and their importance:

some of the important terms related to scheduling.

Anyone explaining these?

arrival time: the time at which a process arrives
waiting time: the total time a process spends waiting in the ready queue
turnaround time: the time between submission and completion of a process
response time: the time from submission until the process produces its first response
throughput: the number of processes completed per unit time
CPU utilization: the time for which the CPU is busy; yes, the percentage of time the CPU does useful work, similar to network utilization or any other utilization.

Good.. No definition for CPU utilization 🙂

The CPU scheduler aims to maximize CPU utilization as well as throughput. But sometimes other factors also matter, as in a real time system, where response time is important.

Dispatcher: Doesn’t take part in decision making but gives control of CPU to the process selected by CPU scheduler.

Dispatch latency: the time consumed by the dispatcher to give control to the process.

it’s basically overhead

Types of Scheduling

Before we see scheduling policies, just to confirm – CPU scheduling is done for CPU bursts- when a process needs IO it voluntarily releases CPU.

FCFS
SJF(Non preemptive )
SJF preemptive or SRTF
RR
Longest job first
Multilevel queue
Multilevel feedback
Highest Response ratio

FCFS

easy to implement as we just need a queue
non preemptive – once a process gets CPU no one else gets CPU until it releases it.
fair scheduling?? It gives a fair chance to every process; there is no concept of priority here. But the first-arriving process can use the CPU for a long time if it has a large CPU burst, which is not fair to those arriving during that time: the convoy effect.
no starvation?? As long as all processes are guaranteed to terminate, there won't be starvation, as we follow a queue. But waiting times can be very long.
suffers from convoy effect?? When IO-bound processes are queued behind a CPU-bound process, the CPU-bound process holds them up. Simply put, short processes sometimes have to wait for a long process to complete, due to FCFS's non-preemptive nature.

Okay, now we can see example.. Who made these? Good job 🙂

Example (arrival time for all is 0):

process  burst time
P1       10
P2       5
P3       5

0——P1—–10—–P2——15——P3—–20

process  waiting time  turnaround time
P1       0             10
P2       10            15
P3       15            20

avg waiting time = 25/3

avg turnaround time = 45/3 = 15

Turn around time = waiting time + execution time.

Fine?

yes

2. SJF (shortest job first): no preemption.

Considering the same example as above:

0—–P2—-5—–P3—–10——P1—–20

process  waiting time  turnaround time
P1       10            20
P2       0             5
P3       5             10

avg waiting time=15/3=5

we can see waiting time is reduced here….

next is preemptive SJF- when a shorter job comes it can preempt an already running job. Also called Shortest Remaining Time First

process  arrival time  burst time
P1       0             8
P2       1             4
P3       2             9
P4       3             5

0—p1—1—-p2—–5—–p4——10——p1——17——p3—–26

process  waiting time  turnaround time
P1       9             17
P2       0             4
P3       15            24
P4       2             7

Comments about SJF:

Optimal for average waiting time.

But practically impossible- why?

We can't predict the next CPU burst… Yes, it is something that happens in the future; no process declares ahead "I need this much CPU time". But we can predict it.

static prediction techniques:

based on process size
based on process type

The problem with this method is that two processes of the same size can differ in nature and complexity, so one may take more or less time than the other.

dynamic prediction techniques:

simple averaging: just take the average of the process's previous CPU bursts (their sum divided by their count)
exponential averaging

T(n+1) = a * t(n) + (1 - a) * T(n), where T(n+1) is the predicted next burst, t(n) is the last actual burst, T(n) is the previous prediction, and a is the smoothing factor with 0 <= a <= 1. For example, with a = 0.5, T(n) = 10 and t(n) = 6, the new prediction is 0.5*6 + 0.5*10 = 8.

ROUND ROBIN:

-> used in time sharing systems

→ the time quantum (also known as the time slice) is important

→ preemptive in nature: it is FCFS with preemption at regular intervals

→ implementation is done using a circular queue

→ lower and higher priority processes get an equal chance

→ each process runs for its quantum and is then preempted and another process is brought in

→ minimum average response time is achieved here. Response time is the time between a process being submitted and the first response (output) being produced; we can approximate it as the time between a process being submitted and it getting the CPU for the first time.

Let's take a small example with time slice 2 units:

process  arrival time  burst time
P1       0             2
P2       1             3
P3       1             4

so gantt chart is :

0—p1–2—p2–4—p3—6-p2—7-p3—9

(note: each slot above is 2 units, except P2's final slot, which is 1 unit)

process  waiting time  turnaround time
P1       0             2
P2       3             6
P3       4             8

avg waiting time: (0 + 3 + 4)/3 = 7/3

avg response time: (0 + 1 + 3)/3 = 4/3, taking response time as the first-CPU instant minus the arrival time. Can any scheduler improve this?

avg turnaround time: (2 + 6 + 8)/3 = 16/3

Issue with Round Robin scheduling?

If the time quantum is very small, then more context switches happen. Yes, and context switches are expensive.

If the time quantum is large, it degenerates to FCFS.

Highest Response Ratio Next: here we select the process with the highest response ratio.

It is non-preemptive in nature.

Response ratio = (waiting time + burst time) / burst time

At each scheduling decision, you calculate the response ratio of every waiting process and pick the highest.

HRRN not only favours short jobs but also favours longer jobs that have been waiting a long time.

Priority Scheduling:

SJF is a special case of priority scheduling, with the priority being the inverse of the (predicted) next CPU burst.

Multilevel Queue :

When we say a process enters the ready state, the ready state is nothing but a ready queue (made up of several sub-queues).

We have a higher priority queue above and lower priority ones below, and processes are permanently categorized into these queues. The problem is that a process in a lower queue (e.g., a queue for student applications) can starve, because higher priority processes keep coming.

So, in order to avoid the above problem, we have MULTILEVEL FEEDBACK:

here everything is the same, except that the priority of a process can change (i.e., it can move from a lower queue to a higher level queue if it has waited for a long time).

Note that each queue may implement its own scheduling algorithm.

summary:

scheduling                                  starvation
FCFS                                        no (long waits if a long job comes first: the convoy effect, but still no starvation)
SJF (non-preemptive)                        yes
SRTF                                        yes
Priority                                    yes
RR                                          no
Longest job first                           yes
Highest response ratio                      no (waiting jobs age into higher ratios)
Multilevel queue                            yes
Multilevel feedback queue                   no
Longest remaining time first (preemptive)   no

What if processes keep coming one after another? The processes which came first won't be given time, isn't it? So there will be starvation?

That can't occur: whichever process came first will get the CPU first, right? FCFS.

Ok yeah, the queue order does not change.

So, that summarizes CPU scheduling

fork(): used in UNIX.

Why is fork used?

To create child processes so that more than one piece of work can be handled at a time? Yes, it can be used for that.

But fork is also how new processes get created at all. Whenever a new process is created in Unix, fork is called. When we run "./a.out" on a Linux shell, the shell does a fork; the new process starts as its child, and the child then replaces its copied address space (the shell's) with the address space of the new program using an exec call.


Each fork() call causes the process to split into 2. So, if we call fork() n times in succession (which means each successive fork is executed by all previously created processes as well), we get 2^n processes in total and 2^n - 1 child processes.
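A sketch of this (3 successive forks, so 2^3 = 8 processes print):

#include <stdio.h>
#include <unistd.h>
#include <sys/wait.h>

int main() {
    for (int i = 0; i < 3; i++)
        fork();                      /* every existing process splits in two */
    printf("pid %d\n", getpid());    /* executed by all 2^3 = 8 processes */
    while (wait(NULL) > 0)
        ;                            /* each parent reaps its children */
    return 0;
}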

fork() creates the child process and, in the parent, returns the child's process id. This return value is used to distinguish parent from child, as every other piece of code is exactly the same for both.

We can get the process id using getpid(),

and the parent's process id using getppid().

→ resources are not shared here (parent and child have their own copies)

→ the child may be a replica of the parent, or it may have its own code to execute (via exec)

→ because of fork(), a process can even move from ready straight to exit. How can this happen? A parent that has created a child can, after some execution, decide to terminate that child, so the child goes from ready to exit.

→ one of the main uses of fork is in web server applications, where a fork can be done every time a new request is to be served. But these days this is replaced by multithreaded programs.

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main() {
    pid_t pid = fork();

    if (pid == -1) {
        /* When fork() returns -1, an error happened. */
        perror("fork failed");
        exit(-1);
    }
    else if (pid == 0) {
        /* When fork() returns 0, we are in the child process. */
        printf("Hello from the child process!\n");
        exit(0);
    }
    else {
        /* When fork() returns a positive number, we are in the parent process
           and the return value is the PID of the newly created child process. */
        printf("Inside parent\n");
    }
    return 0;
}

Q. With fork, different physical address spaces are given to child and parent but the same virtual addresses.

So when we print the addresses in both, will the virtual addresses be printed, i.e., the same for both?

yes

Why ?

In virtual addressing, a user process sees only logical addresses; the MMU converts logical addresses to physical ones.

Are we done with this chapter ?

yes, fork is clear? yes

This chapter was the best. We have summarized all the important points, and it can be used for revision 😀

What is the difference between deadlock and livelock?

In a deadlock, no one changes state and no useful work happens either.

But in a livelock, processes may change their state, but still no useful work is done.

Did you get it?

Yeah

In deadlock nothing happens, but in livelock the process tries to get out of the lock and falls back into it, like 2 people meeting at the middle of a one-man bridge, both going back and then both coming forward together again, and this getting repeated.

Is disk seeking over?

nope we will cover it later

okay, so now we can move to virtual memory.

Dynamic loading:

Address binding is postponed until program execution; the program/module can keep changing its location in memory until execution starts. This is done by the OS at runtime.

Dynamic linking:

Dynamic linking is linking separate modules, e.g., resolving a function call, at runtime. It is usually done when dynamic loading is used.

Both of the above mainly concern code that lives in separate modules. Suppose we write a simple program:

#include <stdio.h>

int main() {
    printf("Hello");
    return 0;
}

here we have one function call to printf.

Do we have any library here? stdio is the library here. Linked by C linker (?)

stdio.h is just an include file; we could simply declare int printf(...); instead of the #include. The actual library is libc, which contains the code for printf. So libc can be made part of our executable, which is called static linking, or it can be loaded during execution, which is called dynamic loading. (Loading only when we first call printf is called lazy binding; even if we load the library at process start it is still dynamic loading.)

When this program is loaded to memory, what is dynamically loaded?

Now, what's dynamic linking? We called printf in the above code. With dynamic linking, which code gets executed is decided at runtime and not when the executable is made. How is this done? A simple mechanism:

In the executable:

    printf -> printfadd

Table:

    printfadd -> [ ]

As shown above, in the executable printf is linked to an address (printfadd) which is an entry in a table. This table is filled by the loader at runtime, and thus which function is actually called is decided only at runtime. With some trickery we can divert this printf call to any other function, including our own custom printf, all thanks to dynamic linking.
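The mechanism can be mimicked in C with a table of function pointers; a toy model of the real Global Offset Table, where printf_slot and my_printf are names I made up:

#include <stdio.h>
#include <stdarg.h>

/* a custom printf the "loader" could divert calls to */
int my_printf(const char *fmt, ...) {
    va_list ap;
    va_start(ap, fmt);
    fputs("[diverted] ", stdout);
    int r = vprintf(fmt, ap);
    va_end(ap);
    return r;
}

/* the "table" slot: all calls go through this pointer, filled at runtime */
int (*printf_slot)(const char *, ...) = printf;

int main() {
    printf_slot("Hello\n");      /* resolves to the real printf */
    printf_slot = my_printf;     /* the loader (here, us) rebinds the slot */
    printf_slot("Hello\n");      /* the very same call site is now diverted */
    return 0;
}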

So, what's the main use of dynamic loading?

We don't load modules that are not being used; only the required modules are loaded, when needed. So more space is available for other processes.

yes, and also modules can be shared (if shareable) thus reducing the memory usage.

In which case do more page faults occur, dynamic or static linking? And in which case is process execution faster?

Dynamic linking causes more page faults, because pages aren't loaded into the page table at the time of linking (?)

Page tables contain pages?

Page tables just contain virtual-physical address mapping.

If we use lazy binding- on first access to a dynamically loaded library- we get a page fault. But this is just one and not very significant. Overall dynamic linking and dynamic loading reduces memory usage and thus can result in lower page faults.

Which is faster for execution? Theoretically, dynamic linking involves one extra table lookup and hence cannot be faster than static linking. But it is not noticeably slower either, as the tables are implemented with hardware support like indirect addressing.

Is hardware support required for dynamic loading/linking, and which module of the OS does it?

Hardware support is not a necessity; it just speeds things up. And it is done by the dynamic loader, part of the runtime.

What is meant by link time binding, and does load time binding see the logical address as the same as the physical address?

Link time is part of static linking. At link time we get logical addresses, but only relative ones, not absolute. During program loading we get absolute addresses, which are still logical. (How is that different from an absolute address? It is absolute only within the process's logical address space.) The actual physical address is not relevant at all, as it is handled by the MMU and is not visible outside.

For example, at link time we assume the process starts at, say, 1000 or 0. Now the process may be loaded at address 5000 in memory (this is also a virtual address). So every address in the executable is adjusted by the difference, 5000-1000 or 5000-0. This is done by a relocating loader using the base and limit registers, which is usually an assignment in B.Tech.

Yes, base register is used, but limit is for protection.

What is limit register? Isnt displacement used for relocation?

The limit register says a process can't access an address beyond base+limit, so the process's address-space length is put in limit. If you try to access something outside the base to base+limit range, a trap is raised.

In multilevel paging all page tables are present in main memory? yes, it should be.

okay then, i guess we can see VM problems tomorrow..

Can we do any gate problem on paging now?

First we need to show an example as to how address translation works.. Anyone volunteering? assume 1 level page table.

We have a physical address space (PAS) and a logical address space (LAS). The PAS is basically the RAM, while the LAS is the process's view of memory; usually the PAS is much smaller than the LAS. So we divide the LAS into pages and the PAS into frames. The pages of the LAS required by the process are brought into the PAS (main memory). For the mapping between them we create a page table. It is usually created separately for every process, and its base address is stored in the PCB. The number of entries in the page table equals the number of pages in the LAS, and each entry gives a frame number (if that page is in RAM). Frame size and page size are equal.

The CPU generates a logical address, which is interpreted in two parts: first the page number and second the page offset (word number). Using the page number we index into the page table and read off the frame number.

This frame number concatenated with the offset (which stays the same) is the physical address.

Thanks, quite nice. @anurag, we can stop here. See the last sentence: it shows how the mapping is done and is quite important for solving problems. To understand it properly you must try a mapping yourself. So, before tomorrow's discussion, try to make a page table yourself. If you can do that, all the problems become easy.

And we use VM mainly for

Increasing available process memory
Enabling protection in multiprocessing environment

So, we can see tomorrow 7pm

==============================================================

Virtual Memory

Can we start ?

yes..

As seen yesterday virtual memory is used to provide an abstraction to physical memory- user needn’t worry about how physical memory is organized or allocated.

Now, virtual memory means processes work with virtual addresses, and these are converted to actual physical addresses; this address translation is the most difficult task here.

How is address translation done?

The CPU generates a logical address. Now, this logical address is nothing but a page number and an offset.

However, since only a part of a process comes into memory, we have a page table for each process.

The page table is always loaded and gives us the information about which page is loaded into which frame of main memory. So we get the frame number, go to that particular frame, and use the offset to fetch a byte (word).

Yes, that's correct; the answer is the page table. Actually, almost all dynamic mapping is implemented using "tables": yesterday we saw one table for dynamic linking, the Global Offset Table, and now this page table. In C++, dynamic polymorphism uses another table. Even switch statements are implemented using a table. So this is a quite important, and also easy, mechanism, as it is just a lookup…

So, a page table maps a given logical address to a given physical address. Is this true?

yes

Right. But page table doesn’t have an entry for each logical address. It is done at a larger granularity and that is called a “page”. A page is a block of logical addresses for which there is an entry in page table.

A frame is similarly a block of "physical" addresses. Pages and frames have the same size, and we use the "offset bits", the lower bits of both physical and logical addresses, to index within a frame. And the page size is fixed.

Clear? yes

Anyone giving an example mapping? i.e., a page table contents..

Yes ,

Q) let us assume Logical address space = 128KB and physical address space is 128KB

So let us find the logical address and physical address sizes (in bits), the page size, page offset, number of pages, number of frames, and the page table entry size.

// Is it fine if I take this as the example? @arjun sir

Yes, go ahead.

Okay 🙂

Now we know that logical address space = 128KB

so the number of bits used to represent a logical address = lg(128 * 2^10) = 17 bits

same goes for physical address space

just wait.

The above statements show that we are addressing a "byte", i.e., we give an address to each byte of the 128 KB. Some systems allow only word addressing; with a word size of, say, 2 B, we would need only 16 bits for the above example.

Also, why we use byte addressing and not bit addressing? What we do in case we want to operate on bits?

Yes, if I consider a word as 2 bytes I have to change the calculation accordingly.

Two reasons for byte addressing: each byte gets its own address,

and, more importantly, a bit is too small an amount of information to be useful compared to a byte.

And even if we were bit addressable, the amount of translation work would be high.

yes, practical reasons- in C language the smallest data size is a byte and similarly for most languages. Only rarely we want to use a data smaller than a byte- in that case we use “bit operations”.

@Priti you can continue assuming byte addressing..

yes sir .

Number of bits required for a logical address = 17 bits

Number of bits required for a physical address = 17 bits

Assume page size = 4 KB, so offset bits = 12.

number of pages = Logical address space / page size = 2^17/2^12 = 2^5

Note page size = frame size, so that one page fits exactly into 1 frame.

no of frames = physical address space (main memory size) / page size = 2^5

Page table entry: the bits required to represent a frame number.

Note that a page table entry contains the frame number, protection bits, and some other information too (I can describe the contents later).

How is the number of offset bits obtained? (From the 4 KB page size; I assumed that.)

Again we assume byte addressing. So, we find no. of bytes in 4KB = 4K. Now, to address each of them in binary we need lg (4K) = 12 bits. Ok

Okay. So, we got a page table of size 32.

No, that is 32 entries; the page table size = number of pages * page table entry size = (32 * 5) bits.

page   frame
0      25
1      21
2      8
...
31     1

So, the above can be a page table. It contains only the mapping. The left side is the index and is not stored in the page table. Only the right side and some extra info like Priti said (protection, valid bits etc.) are stored, and these constitute the PTE, the page table entry.

In this example, what will be the min. size of PTE? – minimum means assume no other bits are there.

Taking the minimum size of a PTE as just the information required to represent a frame number: 5 bits.

Yes; and the word "min" is used because, apart from any other information, a PTE will always hold the frame number.

yes. 5 is correct. So, min. size of page table will be ?

5 * 32 = 160 bits. This is the start of most questions in this area.
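A sketch of the translation for this very example (17-bit byte addresses, 12 offset bits, a 32-entry table holding only frame numbers):

#include <stdio.h>

#define OFFSET_BITS 12
#define PAGES 32

unsigned page_table[PAGES];   /* PTE = just the 5-bit frame number */

unsigned translate(unsigned logical) {
    unsigned page   = logical >> OFFSET_BITS;            /* top 5 bits */
    unsigned offset = logical & ((1u << OFFSET_BITS) - 1);
    return (page_table[page] << OFFSET_BITS) | offset;   /* frame, then offset */
}

int main() {
    page_table[0] = 25;   /* the mapping from the table above */
    page_table[1] = 21;
    /* logical 0x1ABC is page 1, offset 0xABC -> frame 21 -> 0x15ABC */
    printf("0x%X -> 0x%X\n", 0x1ABC, translate(0x1ABC));
    return 0;
}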

Now, another thing here: we mapped a 128 KB logical space to a 128 KB physical space, so the mapping could be done 1-1, which makes calculations a lot easier.

Also, there was a previous GATE question asking if such a mapping makes sense? i.e., if both logical and physical spaces are same size, do we need virtual memory?

no

We brought in virtual memory so that program size can be greater than main memory size.

This gives programmers flexibility; they don't need to worry about the size.

Yes, sorry, I took a wrong example.

We can do multi level paging example also 🙂

Nope. The answer is "yes". Virtual memory is used not just to increase the available memory for a process; it also provides an abstraction of the address space. In a multiuser environment, this is required to provide protection. In Galvin this is mentioned in an early section, hardware protection, and at least 3 previous GATE questions have come from this.

@Arjun sir, we can achieve protection by just partitioning memory; why take on the overhead of maintaining a page table?

Yes, but that would be very ineffective. Consider our PC and the number of processes running: it is impractical to partition the physical memory among them. Theoretically it is possible if we have a very, very large physical memory or just 2 or 3 processes.

So, the example is fine. But lets move to a larger one to show the need of multi-level page tables.

Before that- where is page table stored?

In the main memory.

Yes, so suppose I do this instruction

Mov R0, 2000 //moves content of memory location 2000 to register R0

How many memory accesses will be there? (assume the earlier page table)

2 accesses

Yes. One for accessing page table and another for accessing the actual entry. And this happens for every memory access and thus is really bad. So, we need mechanism for avoiding this memory access for page table- which can be done if page table is inside CPU or as close as possible. We can see this later.

Also, does accessing a page table cause page fault?

i think so yes

If the page frame NUMBER is not present in page table, it causes page fault.

i.e., the logical address entry doesn’t have a corresponding physical address- so we accessed an invalid address and got page fault.

For valid memory access, page tables won’t cause any page fault- as the whole table is always in main memory- not in secondary.

But before that, if I am not wrong, we should check the valid/invalid bit, right?

Yes, that's also true. Along with each PTE we can have protection bits, and violating them causes a fault. Any example of this you have seen while programming?

No ,i dont know

Trying to modify a constant in C can cause this, because constants are stored in read-only segments (a segment is like a page, as we'll see later) on Linux and the protection bit is set to forbid modification. So if we access it for a write, we get a segmentation fault.

A segmentation fault also occurs when accessing an invalid memory location.

Yes, an invalid memory location causes a segmentation fault because no valid mapping is present; the valid bit will be unset in the page table.

So, in GATE they ask more practical questions. Somethings which were practical in 2000 are no longer relevant now. So, while practicing questions (especially numericals) you should give more importance for recent questions.

And let's assume a 32 bit logical address space and a 4 KB page size, which were the common configuration until recently (a 64 bit logical space is the most common today). So, what will be the page table size? How much memory is required, and can we reduce the requirement?

You didn't give any information about the PTE, or about the physical address space.

I can find the number of pages, but I can't find the page table size.

yes, you cant find PTE size without physical memory details. Assume 4GB main memory.

no of pages = 2^20

page table size = no of pages * PTE = (2^20)*20 bits

Yes. We have 2^20 page frames too, and thus need at least 20 bits to address them. So the PTE must be at least 20 bits.

So, this will be 2.5 MB, right?

How can we reduce this usage?

Reducing the usage means less space required, right? For this we could use an inverted page table.

But the inverted page table is not good to use here.

Since here page table size >> page size, we will use multilevel paging.

yes, we use multi-level paging.

Inverted paging is not the solution here. It is used to reduce page table memory usage, but it only makes sense when the physical address space is much smaller than the virtual address space. Then we build the (inverted) table indexed by physical frame, storing the corresponding logical page number and the process identifier.

But the inverted page table has many disadvantages, right? For example, we can't share pages among processes, and we can't implement copy-on-write. Yes, and searching for a particular entry takes a long time.

Sir, in the book they mention that we use hashing to speed up access in the inverted page table. Can you elaborate how?

In an inverted page table the entries are indexed by physical frame and store the logical page number. But the CPU generates a virtual address, not a physical one, so we can't directly index into this table like a normal page table. We have to search to find which physical frame holds our logical page.

But it doesn't make sense: if we already had the physical address, why look it up in this table to find the virtual address? Is the search in the inverted page table done on the physical address or the logical one?

Who has the physical address? We have just the virtual address; we want to know which physical frame maps to this virtual address, and that physical frame is what we need.

Nope, the search is on the virtual address.

I think you are confusing the page table with the inverted page table. In an inverted page table we don't store the frame number; the frame number is the index.

We search for the entry whose stored page number matches ours, and its index is the frame number. But this requires a sequential search of the inverted page table, which is why we use hashing. Now my doubt is: how is the hashing done here?

A — B
C — D

In the inverted table we have B (our page number) and we want to know which A maps to it. So we search the second column of the table, find B, and then get A (the physical frame number).

And inverted page table uses process id to narrow down the address space..

Yes, and when we use a hash table, you may have more than one page's information chained at one slot. If I am not wrong, it is associative in nature?

You mean multiple physical page frames for one logical page?

No sir. Since we are using a hash function, more than one page may hash to the same value. So here an entry basically consists of 3 parts: 1) page number 2) frame number 3) next pointer.

It's like each entry in it is a linked list again.

Actually the hash table is a separate table, not the inverted table itself.

It gives the physical frame directly, but the mapping is not one-one, as told above. So we first look here and then confirm against the inverted page table; thus we avoid a full search of the inverted page table.
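A sketch of the hashed lookup, following the 3-part entry just described (the bucket count and names are mine; chaining resolves collisions):

#include <stdio.h>
#include <stdlib.h>

#define BUCKETS 16

struct ipte {
    unsigned page;        /* virtual page number (a real one also stores the pid) */
    unsigned frame;       /* physical frame holding that page */
    struct ipte *next;    /* chain of pages hashing to the same bucket */
};

struct ipte *buckets[BUCKETS];

unsigned hash(unsigned page) { return page % BUCKETS; }

void insert(unsigned page, unsigned frame) {
    struct ipte *e = malloc(sizeof *e);
    e->page = page; e->frame = frame;
    e->next = buckets[hash(page)];
    buckets[hash(page)] = e;
}

int lookup(unsigned page) {      /* returns frame number, or -1 = fault */
    for (struct ipte *e = buckets[hash(page)]; e; e = e->next)
        if (e->page == page) return e->frame;
    return -1;
}

int main() {
    insert(7, 3);
    printf("page 7 -> frame %d\n", lookup(7));
    printf("page 9 -> frame %d (page fault)\n", lookup(9));
    return 0;
}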

Now, how does multi level paging reduce memory requirement?

The number of frames is much less than the number of pages, so now we have an entry corresponding to every frame and not every page.

this is for inverted page table. How about multi-level paging?

oh sorry..

For multilevel paging, we only keep the parts of the page table in which our page-frame entries are actually present.

Yes, that's the correct one. We split our page table and use only the parts needed by our process. We don't need the whole page table if our process uses only a small amount of memory.

But here memory access time increases .

Yes, say if we have 3 levels, we need to access memory 3 times + 1 real access. So, this is expensive.

also it is common for all processes.

Are all the levels present in main memory? If yes, then how does this reduce the size?

yes, thats a good question. And the answer is not everything is in memory. We can see an example.

there was gate question on this….

like above we had 32 bit logical space and 4KB page size. Earlier we had 2^20 pages. Now, suppose we divide the page table to 2 levels- first one using the top 10 bits and second one using the next 10 bits. We also assume 4GB physical address space.

so here no of frames=2^32/2^12=2^20

so 20 bits for frame

so each entry of level 2 page table is of 20 bits

so size of page table at level 2=2^10*20/8 bytes fine..

So, size of page table 1 = ?

2^20 / 2^10 = 2^10 entries; size = 2^10 * entry size. Can't we get the entry size?

It will be the number of bits required to address a second level page table?

yes

Yes. And that will be? 10 bits. Yes: we have 2^10 second level page tables, one for each first level page table entry.

No. of page tables at level 2 = 2^10

So, the total memory usage for page tables = 2^10 * 10 + 2^10 * 2^10 * 20 bits, almost 2.5 MB, just over the previous case using a single level page table. So we just added the extra memory for the first level page table; no space saving yet.
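A sketch of how a 32-bit virtual address is split in this 10/10/12 scheme:

#include <stdio.h>

int main() {
    unsigned va  = 0x12345678u;
    unsigned l1  = va >> 22;           /* top 10 bits: index into the level-1 table */
    unsigned l2  = (va >> 12) & 0x3FF; /* next 10 bits: index into a level-2 table */
    unsigned off = va & 0xFFF;         /* low 12 bits: offset within the frame */
    printf("L1 index = %u, L2 index = %u, offset = 0x%X\n", l1, l2, off);
    return 0;
}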

Why can't we keep the second level page tables in secondary memory?

Before that question, we can ask: do we need all those second level page tables at all? Only if our process uses those addresses do we need them, right? That is, since we split the tables, we can keep only the required page tables in memory. (Page tables are not swapped out like normal pages, as taking a page fault on a page table access makes things complicated.)

If the frame number is 20 bits, then how come each entry of the page table is 20 bits? Wouldn't it be the page number? I mean, the logical address consists of page number and page offset, right? Please correct me if I am wrong @Arjun

The page table is used for address mapping, i.e., logical to physical,

so each entry needs the frame number, i.e., the physical location of the logical page.

The second level page table points directly to frames. So, for 2^20 frames it needs 20 bits.

A page table entry consists of the frame number and, if required, some other information like the valid bit etc.

Is the number of pages the same at each level? And the page table entry size at each level?

No, not necessarily- we saw the above case where 10 bits was the minimum PTE size in level 1 but 20 bits in level 2.

yes

@Arjun sir, if a page fault occurs, then how do we map the page number to a frame number? How does the OS decide where the page is present on disk, and more importantly, how is this done using the virtual address generated by the CPU? Suppose I request page no. 2 containing (a,b,c,d); a page fault occurs and the request goes to the page fault handler, which searches in the swap space- but how is this page number mapped to the disk block where the content is actually available? ok.. :)

There is no search in swap space- the requested page is directly mapped to a hard disk location, just like a physical memory page (the disk also has addresses). The only difference here is that the frame is on disk and not in physical memory.

When a process is created, the OS allocates memory for it- this can grow during the process run, but again the OS gives the new memory. So, as long as memory is available in physical memory, the process gets an actual physical frame (no page fault for any valid memory access). When physical memory is over, the OS allocates memory from disk- like swap space on Linux. Now, this is absent in the page table and causes a page fault. Then the OS brings the requested page from swap space to main memory.

Now, we have covered all terms for virtual memory- except TLB. What is a TLB?

It is the Translation Lookaside Buffer, used to speed up address translation… we cache the frequently used page table entries… the TLB is small compared to the page table, so its access time is less… if we get the entry in the TLB it's called a TLB hit, otherwise a miss

if there is a miss, then we need to look into the page table to get the required entry..

yes, but in practice the TLB miss rate is very low. So, even with, say, 3-level page tables, performance is not bad.

yes

okay.. So, any questions on this topic?

One thing: we considered page tables and cache separately in CO, but questions can combine them 🙂

How is the average access time of TLB calculated?

TLB access time is fixed, similar to cache access time. What is usually asked is the effective memory access time.

So, if we assume a physically addressed cache, a TLB hit rate Th and a cache hit rate Ch, this will be

Th * TLBtime + (1 - Th) * (TLBtime + MemoryTime)
+ Ch * CacheTime + (1 - Ch) * (CacheTime + MemoryTime) (assuming a single-level page table and no page fault).
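
As a rough illustration, a small C sketch evaluating this formula- all the hit rates and times below are made-up values, not from any real system:

#include <stdio.h>

int main(void) {
    double Th = 0.95;       /* TLB hit rate (made up)           */
    double Ch = 0.90;       /* cache hit rate (made up)         */
    double t_tlb   = 2.0;   /* TLB access time, ns (made up)    */
    double t_cache = 5.0;   /* cache access time, ns (made up)  */
    double t_mem   = 100.0; /* memory access time, ns (made up) */

    /* TLB miss costs one extra memory access (single-level page table);
       cache miss costs one memory access for the data itself. */
    double emat = Th * t_tlb + (1 - Th) * (t_tlb + t_mem)
                + Ch * t_cache + (1 - Ch) * (t_cache + t_mem);

    printf("effective memory access time = %.2f ns\n", emat);
    return 0;
}

With these numbers, emat = 2 + 5 + 5 + 10 = 22 ns.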

The cache can be virtually addressed too- which means we needn't wait for address translation to look up the cache.

there are four types of cache:

physically tagged virtually indexed

physically tagged physically indexed

virtually tagged virtually indexed

virtually tagged physically indexed

yes. We saw what "tagging" is during the CO discussion. You should read the properties of each of these types of cache- especially the virtually indexed physically tagged cache.

any reference for that sir?

Click to access 3810-20.pdf

Even I need a reference

https://en.wikipedia.org/wiki/CPU_cache

for this cache

http://gateoverflow.in/490/gate2008_67 – why 25?? It should be 24 24 24, right??

Why it should be 24?

Because the frame number bits won't change at any level. – As per which definition?

I didn't get you- which definition thing? Every page table will have a frame number as the page table entry, right?

yes. but is it defined anywhere that this must be the same across different page table levels?

I haven’t seen this anywhere and practically there is no need for it also. Moreover this is explained in this link by experts 🙂

http://www.cs.utexas.edu/~lorenzo/corsi/cs372/06F/hw/3sol.html

okay. I didn't know. I will look at this link and the types of cache also

okay..

Difference between a page and a segment?

A page is from the system's view, while the user sees each division of the program as a segment.

Page size is fixed, segment size is not.
Segments are logical separations of a program- like code, data etc. can be placed in different segments.

Paging suffers from _______ fragmentation? internal

a very small amount of internal fragmentation. yes, because the page size is fixed- only the last page of a process may be partially filled.

Segmentation suffers from _______ fragmentation?

only external fragmentation possible.

Why internal fragmentation? We can have segments of any size- so it shouldn't cause internal fragmentation. It causes external fragmentation.

^ yeah

We also use paging with segmentation. That is, at the top level we have segments and then we have pages. The implementation is more like multi-level paging, with the first table being for segments instead of pages.

here: we first divide into segments and then each segment into pages. And we have a page table for each segment, right?

yes. correct.
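
A minimal sketch of such a translation, with made-up field widths and names (nothing here matches a real architecture; a real segment table would hold page table base addresses, which the 2D array stands in for):

#include <stdio.h>

#define NUM_SEGMENTS  4
#define PAGES_PER_SEG 16

/* one small page table per segment */
unsigned page_tables[NUM_SEGMENTS][PAGES_PER_SEG];

/* logical address = 2-bit segment | 4-bit page | 12-bit offset */
unsigned translate(unsigned la) {
    unsigned seg    = (la >> 16) & 0x3;
    unsigned page   = (la >> 12) & 0xF;
    unsigned offset =  la        & 0xFFF;
    unsigned frame  = page_tables[seg][page];   /* segment's page table */
    return (frame << 12) | offset;
}

int main(void) {
    page_tables[1][3] = 42;   /* segment 1, page 3 -> frame 42 */
    unsigned la = (1u << 16) | (3u << 12) | 0x01A;
    printf("LA 0x%X -> PA 0x%X\n", la, translate(la));
    return 0;
}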

sir, please provide some link on the four types of cache listed above (physically/virtually tagged × physically/virtually indexed)

the link is given, but that pdf contains nothing on these topics

Click to access CSE240A-MBT-L18-VirtualMemory.ppt.pdf

and sir, the previous Etherpad was better; here randomly 2-3 people start writing at the same time, very difficult to follow- shall see this..

if there is an entry in the TLB (TLB hit), then will its data be present in the cache, or need it not be present???

No relation- it may or may not be present. But if the TLB hits, the memory access is guaranteed not to go to disk- it'll not cause a page fault if the access is valid.

"it'll not cause a page fault if the access is valid" – meaning?

Suppose we access memory for a write operation on a read-only (RO) page. TLB hit, but a page fault happens.

so even if there is a TLB hit, a page fault may still occur

yes. but don’t confuse this in problems. Usually they give page fault rate independent of TLB access.

sir, can you explain fork()?

it's a function, right??

yes. fork is a function. What does it do? It clones the process at the point where it is called. That is, an identical child process is created and added to the scheduler, which works the same as the parent except for one variable- the return value of fork- which is 0 in the child but the child's process id in the parent.
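
For instance, a minimal sketch using that return value (the messages are just illustrative):

#include <stdio.h>
#include <sys/types.h>
#include <unistd.h>

int main(void) {
    pid_t pid = fork();    /* the process is cloned at this point */
    if (pid == 0)
        printf("child: fork returned 0\n");
    else
        printf("parent: fork returned child pid %d\n", (int) pid);
    return 0;
}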

say

#include <stdio.h>
#include <unistd.h>

int main(void) {
    fork();
    printf("a\n");
    fork();
    fork();
    printf("b\n");
    printf("c\n");
    return 0;
}

then the output can happen in many orders- "a" is printed by 2 processes and "b" and "c" by 8 processes each, so we see 2 a's, 8 b's and 8 c's in some interleaving (with b before c within each process)

The problem with printf is that its output is buffered. The stdio buffer is part of process memory and gets copied to the child on fork- so if it wasn't flushed before the fork, the same output may appear more than once, even if the printf is above the fork..
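
A small sketch that can show this duplication- the exact behaviour depends on the buffering mode, so treat it as illustrative:

#include <stdio.h>
#include <unistd.h>

int main(void) {
    printf("before fork");    /* no newline: may sit in the stdio buffer */
    fork();                   /* the buffer is copied into the child     */
    printf(" | after fork\n");
    return 0;                 /* each process flushes its own copy       */
}

Since there is no newline before the fork, "before fork" usually stays buffered and appears twice in the output; adding a newline or calling fflush(stdout) before the fork avoids the duplicate on a line-buffered terminal.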

how does user-level thread switching happen?

User-level threads are all managed by a thread library at user level. Even switching and synchronisation- everything is managed by the thread library.

how does kernel-level thread switching happen?

http://stackoverflow.com/questions/7439608/steps-in-context-switching

So, we only have File Systems left in OS.

Arjun sir, I have 2-3 questions? yes..

It will take 1 minute sir

In process scheduling, by default do we need to count context switches at the start and end?

if they don't specify

I wouldn’t count 🙂 but should watch the question..

Okay. In the first discussion you corrected us on the progress definition- that the decision will be taken by processes NOT in the remainder section. So these processes are the ones waiting to enter the CS, right?

Ohh yes !

And one more thing: in the last discussion you said a large quantum converts RR into FCFS.

If the time quantum is greater than the burst time of all processes, each process finishes within its first quantum and no preemption ever happens- so RR is nothing but FCFS.

yes, here it's fine, but just saying the time quantum is "large" doesn't imply it will be FCFS. I got it (Y)

🙂

But is this proper? I mean, wouldn't it be more appropriate to say the quantum should be infinity (note: infinity will exceed the burst time of each process)? – Only if the burst time of some process is infinity must the time quantum also be infinity..

okay, then.. we can finish today.. We have File system left in OS, which we can see in December if needed..

Next week is for Pointers and Decidability.

okay . Thank you 🙂

okay sir. 🙂

sir, one final question: in the case of a cache miss or a page fault, should we add the memory access time or not??
