Process scheduling is the basis of a multiprogramming operating system. By switching the processor between existing processes, the operating system makes the computer system more productive and efficient. The goal of multiprogramming is to have some process running at all times so that processor utilization is maximized. On a computer with a single processor (a uniprocessor system), no more than one process can be running at any moment; if there are several processes in the system, one process runs while the rest wait until the processor is free and they are scheduled to run.

Multiprogramming rests on a simple idea: a process executes until it has to wait for something, usually the completion of an I/O operation. Under multiprogramming, several processes are kept in memory at the same time, so that whenever one process must wait another can use the CPU; the purpose is to keep several processes in progress at any given time and so maximize CPU usage. The purpose of time-sharing is to cycle the CPU among the processes quickly enough that each user can interact with a program while it runs.
Whenever a running process must wait, the operating system takes the processor from it and gives it to another process, and this pattern repeats continuously: each time a process waits, another process takes over the processor. In other words, if there is more than one process, the others must wait until the CPU is free and they can be scheduled. CPU scheduling, switching the CPU between processes, is therefore the basis of a multiprogramming operating system and is how the operating system makes the computer more productive.

Scheduling is a basic function of the operating system: a collection of policies and mechanisms that decide the order in which the computer system's work is done. The scheduler decides which process will run, when it will run, and for how long. Almost every computer resource is scheduled before use, and since the CPU is one of the main resources of a computer system, CPU scheduling is central to operating system design. A process that has not yet received its CPU allocation waits in the ready queue, and an algorithm is needed to decide whose turn comes next; one such algorithm is priority scheduling.

The types of scheduling algorithms are:
1. Nonpreemptive, using the concepts:
a. FIFO (First In First Out), also called FCFS (First Come First Served)
b. SJF (Shortest Job First)
c. HRN (Highest Ratio Next)
d. MFQ (Multiple Feedback Queues)
2. Preemptive, using the concepts:
a. RR (Round Robin)
b. SRF (Shortest Remaining First)
c. PS (Priority Scheduling)
d. GS (Guaranteed Scheduling)

Besides classifying algorithms by whether or not the CPU can be taken from a process by force, they can also be classified by their use of priorities:
1. Scheduling algorithms without priorities.
2. Priority scheduling algorithms, consisting of:
a. Static priority
b. Dynamic priority

Nonpreemptive algorithms

1) First In First Out (FIFO)
FIFO is the simplest, non-priority scheduling: processes are given the processor in order of their arrival time, and once a process gets the processor it runs to completion. Judged against the usual scheduling criteria:
• Fairness: fair in the formal sense (the process that arrives first is served first), but unfair in practice because long jobs make short jobs wait, and unimportant jobs can make important jobs wait a long time.
• Efficiency: very efficient.
• Response time: very poor, so it is unsuitable for interactive and real-time systems.
• Turnaround time: not good.
• Throughput: not good. FIFO is rarely used on its own; it is usually combined with other schemes.
• Good for batch systems that very rarely interact with the user, for example numerical analysis applications or table generation.
• Not good (not useful) for interactive systems, because it does not give good response times.
• Cannot be used for real-time systems (real-time applications).

First Come First Served (FCFS)
This is the simplest CPU scheduling algorithm. Each process that enters the ready state is placed into a FIFO queue according to its arrival time, and the process that arrives first is executed first.

For example, suppose three processes arrive together at 0 ms: P1 has a burst time of 24 ms, P2 a burst time of 5 ms, and P3 a burst time of 3 ms.
Calculate the average waiting time and the average turnaround time (turnaround time = waiting time + burst time) of the three processes using the FCFS algorithm.

Process | Burst time
P1 | 24 ms
P2 | 5 ms
P3 | 3 ms

The waiting time for P1 is 0 ms (P1 does not have to wait), for P2 it is 24 ms (P2 waits for P1 to finish), and for P3 it is 29 ms (P3 waits for P1 and P2 to finish). The average waiting time is (0 + 24 + 29) / 3 = 17.67 ms. The turnaround time is 24 ms for P1, 29 ms for P2 (measured from time 0, when P2 arrived, until it finishes), and 32 ms for P3. The average turnaround time for the three processes is (24 + 29 + 32) / 3 = 28.33 ms.

The drawbacks of this algorithm are:
a. The average waiting time is quite long.
b. The convoy effect: short processes wait a long time behind a long process that is being executed by the CPU.

2) Shortest Job First (SJF)
SJF scheduling can be viewed as a special case of priority scheduling in which the priority of each process is its (predicted) run time: the CPU is allocated to the process with the shortest expected time to completion, and processes with equal priority are handled FCFS. Priorities in general may be static or dynamic, and a common example of dynamic priorities is giving I/O-bound processes a very high priority; both topics are treated under Priority Scheduling below. SJF assumes that the time each process needs to complete is known in advance. Scheduling the shortest job first gives high efficiency and a low average turnaround time without requiring externally assigned priorities.

Example: there are four processes (jobs) A, B, C and D with run times of 8, 4, 4 and 4 minutes. If they are executed in that order, the turnaround time is 8 minutes for A, 12 for B, 16 for C and 20 for D. If the four processes are scheduled shortest job first, the turnaround time is 4 for B, 8 for C, 12 for D and 20 for A.

SJF is attractive because it gives the smallest average response time, which is very good for interactive processes. Interactive processes generally follow a pattern of waiting for a command, executing it, waiting for the next command, and so on. The problem is that the length of a job is not known when it enters the system, so the length must be estimated from the process's previous behavior. Since processes do not all arrive together, the scheme must in practice be applied dynamically.
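To make these numbers easy to check, here is a minimal Python sketch (not part of the original text) that computes waiting and turnaround times for any fixed execution order, assuming every process arrives at time 0 as in the two examples above. Applying it to the FCFS example and to the shortest-job-first ordering reproduces the quoted averages.

```python
# Minimal sketch: waiting/turnaround times for a fixed execution order,
# assuming every process arrives at time 0 (as in the examples above).

def schedule_metrics(order, burst):
    """order: list of process names in execution order
       burst: dict mapping process name -> burst time"""
    clock = 0
    waiting, turnaround = {}, {}
    for p in order:
        waiting[p] = clock               # time spent in the ready queue
        clock += burst[p]
        turnaround[p] = clock            # completion time - arrival time (0)
    n = len(order)
    return waiting, turnaround, sum(waiting.values()) / n, sum(turnaround.values()) / n

# FCFS example: P1 = 24 ms, P2 = 5 ms, P3 = 3 ms, served in arrival order.
burst = {"P1": 24, "P2": 5, "P3": 3}
print(schedule_metrics(["P1", "P2", "P3"], burst))
# average waiting ~17.67 ms, average turnaround ~28.33 ms

# SJF example: jobs A = 8, B = 4, C = 4, D = 4 minutes, shortest first.
jobs = {"A": 8, "B": 4, "C": 4, "D": 4}
print(schedule_metrics(["B", "C", "D", "A"], jobs))
# turnaround: B = 4, C = 8, D = 12, A = 20
```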
In practice, pure SJF scheduling is rarely used; it serves mainly as a theoretical benchmark against which the turnaround times of other algorithms are compared.

3) Highest Ratio Next (HRN)
HRN is nonpreemptive scheduling in which the priority of a process is based not only on its service time but also on how long it has been waiting; once a process gets the processor, it runs to completion. The dynamic priority is calculated with the formula:

Priority = (waiting time + service time) / service time

Because the service time appears in the divisor, shorter jobs get better priorities, and because the waiting time appears in the numerator, a process that has waited longer also gets a better chance. The method is called Highest Ratio Next because waiting time plus service time is the response time, and the process with the highest response ratio is served next.
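As an illustration (not from the original text), this small Python sketch picks the next job under HRN by computing the response ratio of every waiting job; the job names and times are hypothetical, chosen only for the example.

```python
# Sketch of Highest Ratio Next: pick the waiting job with the largest
# response ratio (waiting time + service time) / service time.
# Job names and times below are hypothetical, chosen only to illustrate.

def hrn_next(jobs, now):
    """jobs: list of (name, arrival_time, service_time); now: current clock."""
    def ratio(job):
        name, arrival, service = job
        waiting = now - arrival
        return (waiting + service) / service
    return max(jobs, key=ratio)

waiting_jobs = [("A", 0, 10),   # long job, has waited 20
                ("B", 15, 3),   # short job, has waited 5
                ("C", 18, 6)]   # medium job, has waited 2
print(hrn_next(waiting_jobs, now=20))
# ratios: A = (20+10)/10 = 3.0, B = (5+3)/3 = 2.67, C = (2+6)/6 = 1.33 -> A runs next
```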
4) Multiple Feedback Queues (MFQ)
MFQ is dynamic-priority scheduling. It prevents (or reduces) swapping by giving processes that use the processor heavily (processes that need a long time to finish) more quanta at a time. The scheduler maintains priority classes for the processes: the highest class runs for one quantum, the next class for two quanta, the next for four quanta, and so on. The rules are:
• Start a process in the highest class.
• If the process uses up all of the quanta allotted to it, lower its priority class.
• A process entering the system for the first time is placed in the highest class.
This mechanism keeps long-running processes from being swapped in and out over and over again, while preventing interactive processes from having to wait long.

Preemptive algorithms

1) Round Robin (RR)
Round Robin is:
• The oldest, simplest and fairest scheduling algorithm; it is widely used and easy to implement.
• Scheduling in which a running process is preempted not by another process but by the scheduler, based on how much time has elapsed (it is preempted by time).
• Scheduling without priorities.
• Scheduling that assumes all processes are equally important, so no process gets special priority. Each process is given an amount of processor time called a quantum (time slice) during which it is allowed to run. If the process is still running at the end of its quantum, the CPU preempts it and gives the processor to another process. The scheduler maintains a list of runnable processes; when a process has used up its quantum, it is placed at the end of that list.

2) Shortest Remaining First (SRF)
SRF is:
• Dynamic-priority scheduling.
• Preemptive, suited to timesharing.
• A refinement of SJF.
In SRF, the process with the lowest estimated remaining run time is executed, including newly arrived processes. Under SJF, once a process starts executing it runs to completion; under SRF, a running process can be preempted by a new process whose estimated run time is lower than the running process's remaining time.
Weaknesses:
• It has greater overhead than SJF, because SRF must record the service time each job has already consumed and must handle the occasional preemption.
• Newly arriving short processes are executed almost immediately, so long jobs have a longer mean waiting time and a larger variation in waiting time than under SJF.
Recording the service time already consumed adds overhead. In theory SRF gives the minimum waiting time, but because of the context-switch overhead there are situations in which SJF performs better than SRF.

Shortest Job First (SJF) worked example
This algorithm schedules differently from FCFS: each process in the ready queue is executed in order of smallest burst time. This results in a shorter waiting time for each process, and because the average waiting time is also shorter, the algorithm is optimal in that sense. Its drawbacks are:
• It is difficult to predict the burst time of the process to be executed next.
• Processes with large burst times wait longer, because processes with smaller burst times are executed first.
The algorithm comes in two variants:
1. Preemptive. If a process is being executed by the CPU and a process in the ready queue has a burst time smaller than the remaining time of the running process, the running process is replaced by the process from the ready queue. Preemptive SJF is often called Shortest Remaining Time First scheduling.
2. Non-preemptive. The CPU does not allow a process in the ready queue to displace the process it is currently executing, even if the new process has a smaller burst time.

For example, four processes arrive with the arrival and burst times shown in the table below. Calculate the average waiting time and turnaround time of the four processes using the SJF algorithm.

Process | Arrival time | Burst time
P1 | 0 ms | 7 ms
P2 | 2 ms | 4 ms
P3 | 4 ms | 1 ms
P4 | 5 ms | 4 ms

Preemptive solution:
The average waiting time is (9 + 1 + 0 + 2) / 4 = 3 ms, where:
P1: (0 - 0) + (11 - 2) = 9
P2: (2 - 2) + (5 - 4) = 1
P3: (4 - 4) = 0
P4: (7 - 5) = 2
The average turnaround time is ((9 + 7) + (1 + 4) + (0 + 1) + (2 + 4)) / 4 = 7 ms.

Non-preemptive solution:
The average waiting time is (0 + 6 + 3 + 7) / 4 = 4 ms, where:
P1: (0 - 0) = 0
P2: (8 - 2) = 6
P3: (7 - 4) = 3
P4: (12 - 5) = 7
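As a cross-check on the preemptive (Shortest Remaining Time First) solution above, here is a minimal Python simulation, not part of the original text, that advances the clock one millisecond at a time and always runs the arrived, unfinished process with the least remaining burst time.

```python
# Minimal SRTF (preemptive SJF) simulation: one tick = 1 ms; always run the
# arrived, unfinished process with the smallest remaining burst time.

def srtf(procs):
    """procs: dict name -> (arrival, burst). Returns waiting and turnaround times."""
    remaining = {p: b for p, (a, b) in procs.items()}
    finish = {}
    t = 0
    while remaining:
        ready = [p for p in remaining if procs[p][0] <= t]
        if not ready:                       # CPU idle until the next arrival
            t += 1
            continue
        p = min(ready, key=lambda x: remaining[x])
        remaining[p] -= 1
        t += 1
        if remaining[p] == 0:
            finish[p] = t
            del remaining[p]
    waiting = {p: finish[p] - a - b for p, (a, b) in procs.items()}
    turnaround = {p: finish[p] - a for p, (a, b) in procs.items()}
    return waiting, turnaround

procs = {"P1": (0, 7), "P2": (2, 4), "P3": (4, 1), "P4": (5, 4)}
w, ta = srtf(procs)
print(w, sum(w.values()) / 4)     # {'P1': 9, 'P2': 1, 'P3': 0, 'P4': 2} -> average 3
print(ta, sum(ta.values()) / 4)   # {'P1': 16, 'P2': 5, 'P3': 1, 'P4': 6} -> average 7
```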
3) Priority Scheduling (PS)
Each process is assigned a priority, and the highest-priority process gets the processor first. It is assumed that every process has a definite priority, so processes are executed according to the priorities they hold. An illustration of such priorities is a military computer, where processes started by a general have priority 100, those of a colonel 90, a major 80, a captain 70, a lieutenant 60, and so on. On UNIX, the priority of a process is changed with the nice command. Priorities can be assigned in two ways:
a) Static priorities: the priority does not change.
Advantages:
• Easy to implement.
• Relatively small overhead.
Weakness:
• Not responsive to changes in the environment that may call for a priority adjustment.
b) Dynamic priorities: a mechanism for responding to changes in the operating system's environment. The priority initially given to a process may be short-lived; it is then adjusted to a value that better fits the environment.
Weakness:
Implementing a dynamic priority mechanism is more complex and has greater overhead, but that overhead is offset by the improved responsiveness of the system.

Example of priority scheduling:
Processes that perform a lot of input/output (I/O-bound processes) spend most of their time waiting for I/O operations to complete. Such processes are given a very high priority, so that as soon as they need the processor they get it, immediately issue their next I/O request, and then block again waiting for the I/O to complete. The processor can then be used by other processes: the I/O-bound processes effectively run in parallel with the processes that really need the processor, while the I/O-bound processes wait for their DMA operations to finish. If, instead, these I/O-bound processes had to wait a long time for the processor (because of a low priority), they would only burden memory: they would have to be kept in memory without making progress, endlessly waiting for their I/O operations and for their share of the processor.

4) Guaranteed Scheduling (GS)
Guaranteed scheduling makes a realistic promise about performance and keeps it: if there are N users (processes), each gets 1/N of the CPU's processing power. To do this, the system must keep track of how much CPU time each process has received since login and how long each user has been logged in. The CPU time a process is entitled to is the time since login divided by N, which makes the ratio easy to compute: since the CPU time actually consumed by each process is also known, the system can calculate the ratio between the CPU time actually consumed and the CPU time the process is entitled to. A ratio of 0.5 means the process has had only half of the CPU time it is entitled to, and a ratio of 2.0 means it has had twice its entitlement. The algorithm then runs the process with the lowest ratio until its ratio rises above that of its nearest competitor. This simple idea can be implemented in real-time systems, and it is a form of dynamic-priority scheduling.
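Here is a small sketch (not from the original text) of the bookkeeping guaranteed scheduling needs: compute each process's entitled CPU time, its consumed-to-entitled ratio, and pick the process with the lowest ratio. The process names and times are made up for illustration.

```python
# Sketch of guaranteed scheduling: each of N processes is entitled to 1/N of the
# CPU since its login; run the process whose consumed/entitled ratio is lowest.
# The processes and times below are hypothetical, for illustration only.

def pick_next(processes, now):
    """processes: dict name -> (login_time, cpu_time_consumed)."""
    n = len(processes)
    def ratio(name):
        login, consumed = processes[name]
        entitled = (now - login) / n          # fair share since login
        return consumed / entitled if entitled > 0 else float("inf")
    return min(processes, key=ratio)

procs = {"editor":  (0, 10),    # logged in at t=0, used 10 s of CPU
         "compile": (0, 40),    # logged in at t=0, used 40 s of CPU
         "backup":  (30, 5)}    # logged in at t=30, used 5 s of CPU
print(pick_next(procs, now=90))
# entitled = 30 s for editor/compile, 20 s for backup;
# ratios: editor 0.33, compile 1.33, backup 0.25 -> backup runs next
```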
Multilevel Feedback Queue
This algorithm is very similar to the multilevel queue algorithm; the difference is that it allows a process to move between queues. If a process uses the CPU for too long, it is moved to a lower queue. This favors interactive processes, which use only a little CPU time. Likewise, a process that has waited too long in a lower queue is raised to a higher level. Usually the highest priority is given to processes with the smallest CPU bursts, so that the CPU is fully utilized and the I/O devices are kept busy; the lower the level, the longer the CPU bursts of the processes in it.

The algorithm is defined by several parameters, among others:
• The number of queues.
• The scheduling algorithm of each queue.
• When to promote a process to a higher queue.
• When to demote a process to a lower queue.
• Which queue a process enters when it needs service.

[Figure 4: multilevel feedback queue]

A definition like this makes the algorithm widely used, because it can easily be reconfigured to fit a particular system; but to configure the scheduler well, we have to know good values for those parameters. The multilevel feedback queue is a multilevel, queue-based algorithm, and the fundamental difference between it and an ordinary multilevel queue is the possibility of a process moving from one queue to another, whether to a lower or a higher priority. For example:
• All newly arriving processes are placed in queue 0 (quantum = 8 ms).
• If a process does not finish within 8 ms, it is interrupted and moved to queue 1 (quantum = 16 ms).
• Queue 1 is served only when there are no more processes in queue 0; if a process in queue 1 does not finish within 16 ms, it is moved to queue 2.
• Queue 2 is served when queues 0 and 1 are empty, and it runs with the FCFS algorithm.
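To make the three-queue example concrete, here is a minimal Python sketch, not part of the original text, of a multilevel feedback queue with quanta of 8 ms and 16 ms and an FCFS bottom queue; processes are demoted whenever they exhaust their quantum, and for simplicity all are assumed to arrive at time 0.

```python
from collections import deque

# Sketch of the 3-level feedback queue described above:
# queue 0 (quantum 8 ms), queue 1 (quantum 16 ms), queue 2 (FCFS, runs to completion).
QUANTA = [8, 16, None]          # None = no quantum limit (FCFS)

def mlfq(bursts):
    """bursts: dict name -> CPU burst in ms; all assumed to arrive at time 0."""
    queues = [deque(bursts.items()), deque(), deque()]
    t, log = 0, []
    while any(queues):
        level = next(i for i, q in enumerate(queues) if q)   # highest non-empty queue
        name, remaining = queues[level].popleft()
        quantum = QUANTA[level]
        run = remaining if quantum is None else min(quantum, remaining)
        t += run
        log.append((name, level, t))
        if run < remaining:                                   # quantum exhausted: demote
            queues[level + 1].append((name, remaining - run))
    return log

print(mlfq({"A": 5, "B": 30, "C": 12}))
# A finishes in queue 0; B and C are demoted to queue 1; B is demoted once more
# and finishes in queue 2 under FCFS.
```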