CS 450: Operating Systems

xv6 Machine Problem 1: Modifying the Scheduler


In this first xv6 assignment, you will be modifying and adding code to a number of different files that together implement the kernel. The ultimate goal will be to change the current scheduling algorithm in xv6 (which is just a simple round robin) to something more sophisticated. Along the way, you'll learn how to build the kernel and test/debug your work, and will hopefully acquire a working knowledge of some essential kernel tasks and modules.


This assignment is broken into parts, each of which builds on the last. At first, you won't actually be adding any source code, but fret not, you will! The goal at the beginning is simply to get you comfortable with the workflow, so that the more extensive code changes in later parts of this assignment (as well as in future assignments) won't be too intimidating!

Part 1 - Environment Setup (10 Points)

Obtaining the Repository

You should've received an e-mail invitation to share a private repository with me on Bitbucket --- you need to have accepted this invitation in order to continue.

Assuming you have Git installed on your machine, simply run the following command (with your own username) to clone your repository into a directory named xv6 on your machine.

$ git clone https://bitbucket.org/SeanWallace/cs450-summer2016-USERNAME.git xv6

Next, you should create an upstream branch that tracks my public xv6 repository, just in case I push any more commits to it for upcoming machine problems.

$ cd xv6
$ git remote add upstream https://bitbucket.org/SeanWallace/xv6.git

You can now fetch and merge the master branch from this upstream repository into your own master branch. Nothing should happen, as I won't have pushed any new commits yet.

$ git fetch upstream
From https://bitbucket.org/SeanWallace/xv6
 * [new branch]      master     -> upstream/master
$ git merge upstream/master
Already up-to-date.

Running xv6

In order to run xv6, we'll be using a combination of the VirtualBox virtualization platform and the Vagrant virtual environment management tool. Installers for both are available for Linux, Mac OS X, and Windows from the VirtualBox and Vagrant websites; download and install them first, then come back here.

While VirtualBox comes with a graphical user interface, we won't be using it directly. Instead, everything will be done through Vagrant's command line interface.

Start up your platform's command line interpreter (a terminal emulator on Linux/OS X, or Windows command prompt), change into the cloned "xv6" directory, then type the following command:

$ vagrant up

If everything's been installed properly, this command will download the necessary images from the Internet and configure a virtual Linux machine that you can subsequently connect to in order to build and test xv6. It'll take a while. The output will look something like this (note: this is from the first run; subsequent runs will produce much less output):

Bringing machine 'default' up with 'virtualbox' provider...
==> default: Box 'ubuntu/trusty64' could not be found. Attempting to find and install...
    default: Box Provider: virtualbox
    default: Box Version: >= 0
==> default: Loading metadata for box 'ubuntu/trusty64'
    default: URL: https://atlas.hashicorp.com/ubuntu/trusty64
==> default: Adding box 'ubuntu/trusty64' (v20150909.1.0) for provider: virtualbox
    default: Downloading: https://atlas.hashicorp.com/ubuntu/boxes/trusty64/versions/20150909.1.0/providers/virtualbox.box
==> default: Successfully added box 'ubuntu/trusty64' (v20150909.1.0) for 'virtualbox'!
==> default: Importing base box 'ubuntu/trusty64'...
==> default: Matching MAC address for NAT networking...
==> default: Checking if box 'ubuntu/trusty64' is up to date...
==> default: Setting the name of the VM: xv6_default_1442369930025_97653
==> default: Clearing any previously set forwarded ports...
==> default: Clearing any previously set network interfaces...
==> default: Preparing network interfaces based on configuration...
    default: Adapter 1: nat
==> default: Forwarding ports...
    default: 22 => 2222 (adapter 1)
==> default: Running 'pre-boot' VM customizations...
==> default: Booting VM...
==> default: Waiting for machine to boot. This may take a few minutes...
    default: SSH address:
    default: SSH username: vagrant
    default: SSH auth method: private key
    default: Warning: Connection timeout. Retrying...
    default: Vagrant insecure key detected. Vagrant will automatically replace
    default: this with a newly generated keypair for better security.
    default: Inserting generated public key within guest...
    default: Removing insecure key from the guest if it's present...
    default: Key inserted! Disconnecting and reconnecting using new SSH key...
==> default: Machine booted and ready!
==> default: Checking for guest additions in VM...
==> default: Mounting shared folders...
    default: /vagrant => /Volumes/Data/Users/sean/Repos/Classes/CS450/xv6
==> default: Running provisioner: shell...
    default: Running: /var/folders/q0/tqggx2357tz5h__gw4qymqrc0000gn/T/vagrant-shell20150915-80952-s0ho43.sh
==> default: Reading package lists...
==> default: Building dependency tree...
==> default: Reading state information...
==> default: tmux is already the newest version.
==> default: 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.

When done, try the command "vagrant status" --- it should produce the following output:

Current machine states:

default                   running (virtualbox)

The VM is running. To stop this VM, you can run `vagrant halt` to
shut it down forcefully, or you can run `vagrant suspend` to simply
suspend the virtual machine. In either case, to restart it again,
simply run `vagrant up`.

This means the virtual machine is now running. You can connect to it using the command "vagrant ssh", which will result in the following:

Welcome to Ubuntu 14.04.3 LTS (GNU/Linux 3.13.0-63-generic x86_64)

 * Documentation:  https://help.ubuntu.com/

  System information as of Wed Sep 16 02:25:22 UTC 2015

  System load:  0.81              Processes:           82
  Usage of /:   3.1% of 39.34GB   Users logged in:     0
  Memory usage: 12%               IP address for eth0:
  Swap usage:   0%

  Graph this data and manage this system at:

  Get cloud support with Ubuntu Advantage Cloud Guest:


You're now logged into the virtual machine. The "/vagrant" directory on the virtual machine is synchronized with the git repository you cloned earlier, and we'll be going there to build xv6 and run it using an emulator that was automatically installed for you on the virtual machine. The following commands will do this:

$ cd /vagrant
$ make
$ make qemu-nox

The first make command should've resulted in a (successful) build, while the second should've produced the following output:

qemu-system-i386 -nographic -hdb fs.img xv6.img -smp 1 -m 512  -snapshot
cpu0: starting
init: starting sh

The shell prompt that follows is produced by xv6, which you've now booted into on top of an emulator (which is in turn running within a virtual machine).

You can explore a bit here, but when you're ready to exit the xv6 session, use the key sequence "^a x" (i.e., hit the 'a' key while holding down the control key, then release both and hit the 'x' key) — ^a is a special prefix key used to send the emulator an interrupt sequence.

You should now be dropped back into the virtual Linux machine. To get out of that and back onto your own machine, just type "exit". At this point, if you wish, you can shut down the virtual machine by entering "vagrant halt". To bring it back up, just use "vagrant up". To SSH into a running virtual machine, use "vagrant ssh".

Working on xv6

The great thing about using Vagrant is that it automatically gives us the ability to synchronize directory contents between the host machine and the virtual (or "guest") machine. This means you can do all your coding with tools installed on your own machine (e.g., favorite IDEs/editors) and only SSH into the virtual machine for testing purposes.

The xv6 kernel codebase is already sitting in your cloned repository, so you will simply open and edit any files in there. When done, you will do a "vagrant ssh", then "cd /vagrant" in the guest machine to get to the synchronized directory (containing your changes) so as to build and test your changes.

To re-build the xv6 image, you'll always want to first do a "make clean" before running "make", as weird errors will often arise otherwise. Get in the habit of just running the commands together with "make clean ; make" (the shell accepts multiple commands separated by semicolons).

If you wish to debug (e.g., step through and set breakpoints in) the kernel, you can use "make qemu-nox-gdb" to run the emulator in a mode that allows you to attach a gdb session to it. This will require that you "vagrant ssh" into the virtual machine twice --- once to start up the emulator and another to run gdb --- or make use of a terminal multiplexer (tmux is already installed for you).

What To Submit

Your deliverable for this part is simple: take a screenshot of your terminal window with xv6 running and commit it to the root of your repository. Make sure your name or username or CWID is visible in at least one part of the screenshot.

Part 2 - Process Accounting (30 Points)

Now that we've gotten the easy stuff out of the way, it's time to actually get working on this assignment. Our goal is to change the scheduler in xv6 to something a bit more robust, but before we even get to that step we're going to need some accounting facilities to keep track of CPU bursts for all of the processes running on our OS.

Adding A System Call

For this part of the lab, you'll need to add a number of system calls to xv6:

  1. start_burst: A call to this will do the necessary accounting to start a new CPU burst.
  2. end_burst: Conversely, a call to this will end the CPU burst started by the previous call.
  3. print_bursts: This system call will print out all of the CPU bursts a given process has produced.

For example, the following test program:

#include "types.h"
#include "stat.h"
#include "user.h"

int main(int argc, char *argv[])
{
	int process_count;
	int pid = -1;

	for (process_count = 0; process_count <= 2; process_count++) {
		pid = fork();
		if (pid == 0) // children break out to run the test
			break;
	}

	if (pid == 0) { // Children execute the test
		int i, j, garbage;
		for (i = 0; i < 10; i++) {
			int rand_num = rand() * (100 * getpid());
			garbage = 0;
			for (j = 0; j < rand_num; j++)
				garbage += 1;
			sleep(1); // trap into the kernel, ending the burst
		}
		printf(1, "Process %d bursts: ", getpid());
		print_bursts();
	} else { // the parent waits for all children
		while (wait() != -1)
			;
	}
	exit();
}
This program will produce output like the following (note that the bursts are randomly generated, so you should NOT expect your output to be identical):

Process 7 bursts: 6, 80, 156, 84, 40, 24, 128, 60, 36, 144, 184 
Process 8 bursts: 102, 180, 92, 44, 24, 148, 76, 32, 164, 178
Process 9 bursts: 2, 108, 196, 100, 56, 24, 164, 88, 44, 174, 147
Process 10 bursts: 2, 108, 244, 112, 68, 24, 180, 92, 44, 169, 99


You will need to modify a number of different files for this exercise, though the total number of lines of code you'll be adding is quite small. At a minimum, you'll need to alter syscall.h, syscall.c, user.h, and usys.S to implement your new system call (try tracing how some other system call is implemented, e.g., uptime, for clues). You will also need to update struct proc, located in proc.h, to add tracking data structures for each process' CPU bursts. To re-initialize your data structure when a process terminates, you may want to look into the functions found in proc.c.
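The exact layout is up to you, but one possible shape for the per-process data is a fixed-size array plus a counter, zeroed whenever a process slot is recycled. This is a standalone sketch, not xv6 code: the struct name, field names, and function names below are all invented for illustration.

```c
#include <string.h>

#define MAX_BURSTS 100        /* hypothetical cap on recorded bursts */

/* Fields you might embed in struct proc (proc.h); names are illustrative. */
struct burst_info {
  int bursts[MAX_BURSTS];     /* recorded CPU burst lengths, in ticks */
  int nbursts;                /* how many bursts have been recorded */
  int burst_start;            /* tick at which the current burst began */
};

/* Re-initialize the accounting when a process slot is reused
   (in xv6, wherever proc.c sets up or tears down a slot). */
void burst_reset(struct burst_info *b) {
  memset(b, 0, sizeof(*b));
}

/* Record a finished burst; silently drop it if the array is full. */
void burst_record(struct burst_info *b, int length) {
  if (b->nbursts < MAX_BURSTS)
    b->bursts[b->nbursts++] = length;
}
```

A fixed-size array keeps the bookkeeping allocation-free, which matters inside the kernel; dropping bursts past the cap is one simple policy (a ring buffer would be another).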

Remember what a CPU burst is: time consumed on the CPU by the program. Thus, any trap to the OS is a voluntary indication by that program that it is done with its current burst. This is not to be confused with a hardware interrupt such as the clock timer, as those happen outside of the program's logical control flow. In class we considered the time between CPU bursts to be I/O, but you should not limit yourself to just I/O calls (indeed, the test program uses sleep). Therefore, you will also need to call the start_burst and end_burst system calls every time any system call is invoked (other than the ones you are implementing, of course). There is one central area where this takes place; you'll need to find it and insert your calls in the appropriate places.
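One way to picture the flow is that the running process's burst ends the moment it traps in for a system call, and a new burst starts when control returns to it. The sketch below is purely a standalone model, not xv6 code: the tick counter and every name in it are stand-ins, and where the hooks actually belong in the kernel is left for you to find.

```c
/* Stand-ins for xv6 state; all names here are invented for illustration. */
static int ticks;       /* models a global tick counter */
static int burst_start; /* tick at which the current burst began */
static int last_burst;  /* length of the most recently finished burst */

static void end_burst(void)   { last_burst = ticks - burst_start; }
static void start_burst(void) { burst_start = ticks; }

/* Model of a system-call dispatch: the burst ends when the process
   traps into the kernel, and a new one starts when control returns. */
static void dispatch_syscall(int kernel_cost) {
  end_burst();
  ticks += kernel_cost; /* time spent inside the kernel */
  start_burst();
}
```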

Chapter 3 of the xv6 book contains details on traps and system calls (though most of the low level details won't be necessary for you to complete this exercise).


To test your implementation, you'll run the schedtest executable (when booted into xv6), which is the full version of the program above. Because the program depends on your implementation (that is, the system calls it uses haven't been implemented yet), it isn't compiled into xv6 by default. When you are ready to test, modify the Makefile so that the schedtest executable is built: change the last line of the UPROGS declaration from #_getcount\ to _schedtest\.

Part 3 - Modifying the Scheduler (40 Points)

Now that we have the necessary accounting measures in place, it's time to actually modify the scheduler in xv6! By default, xv6 uses a round robin scheduler; this is both boring and unsuitable for our needs, so we are going to change it to a Shortest Remaining Time First (SRTF) scheduler. As we discussed in class, the way this works is very simple: when it is time to select a process to run on the CPU, we select the one with the shortest remaining time.

Now, the question becomes: how do we know which process has the shortest remaining time? Luckily, that part is relatively simple. While it's never possible to truly know exactly how much time a process has left, we can make an educated guess. We discussed a couple of methods in class (some more sophisticated than others), but for the purposes of this assignment we are going to use the average of the last three CPU bursts. Since in the last part of this assignment you implemented automatic accounting for CPU bursts, this becomes as easy as averaging the last three bursts you recorded.

Really important: if there are fewer than three previous bursts recorded (as when the process has just begun running), you should fall back to the default scheduling method (RR). Only when there are three or more previous bursts to average should you schedule based on which process has the shortest average.
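The selection rule can be sketched in standalone C like this (not xv6 code: locking and the real proc table are omitted, and all of these names are invented). A process only participates in SRTF selection once it has three recorded bursts; if no process qualifies, the caller falls back to RR.

```c
#define NPROC 8
#define MAX_BURSTS 100

/* A stripped-down stand-in for a proc-table entry. */
struct pstate {
  int runnable;           /* nonzero if the process is RUNNABLE */
  int bursts[MAX_BURSTS]; /* recorded CPU burst lengths, in ticks */
  int nbursts;            /* how many bursts have been recorded */
};

/* Average of the last three recorded bursts (caller ensures nbursts >= 3). */
static int avg_last_three(const struct pstate *p) {
  int n = p->nbursts;
  return (p->bursts[n-1] + p->bursts[n-2] + p->bursts[n-3]) / 3;
}

/* Return the index of the runnable process with the smallest average of
   its last three bursts, or -1 if no runnable process has three bursts
   yet (in which case the caller should fall back to plain RR). */
int pick_srtf(const struct pstate ptable[], int n) {
  int best = -1, best_avg = 0;
  for (int i = 0; i < n; i++) {
    if (!ptable[i].runnable || ptable[i].nbursts < 3)
      continue;
    int avg = avg_last_three(&ptable[i]);
    if (best < 0 || avg < best_avg) {
      best = i;
      best_avg = avg;
    }
  }
  return best;
}
```

In the real scheduler you would do this scan with the proc-table lock held, exactly like the existing round robin loop does.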

Your mission for this part is to modify the scheduler function in proc.c to pick the process whose last three CPU bursts have the smallest average among all processes. To see this in action, let's look at output from our test program again (this time with a few more processes to make it interesting):

Process 9 bursts: 6, 203, 408, 200, 88, 48, 336, 152, 80, 360, 488, turnaround time: 2377
Process 10 bursts: 249, 440, 232, 120, 64, 336, 176, 88, 368, 506, turnaround time: 2579
Process 12 bursts: 10, 270, 520, 256, 112, 80, 432, 224, 104, 479, 504, turnaround time: 3003
Process 14 bursts: 16, 336, 624, 320, 160, 80, 488, 240, 125, 422, 496, turnaround time: 3312
Process 15 bursts: 16, 344, 648, 336, 168, 96, 552, 264, 112, 431, 476, turnaround time: 3447
Process 16 bursts: 7, 391, 720, 344, 200, 80, 568, 265, 108, 442, 427, turnaround time: 3555
Process 17 bursts: 14, 392, 760, 392, 184, 88, 620, 239, 108, 453, 365, turnaround time: 3617
Process 18 bursts: 3, 400, 824, 408, 208, 104, 591, 256, 126, 422, 303, turnaround time: 3646

One of the first things you're going to want to do is come up with some way to track the turnaround time of each process. You could use wait times instead if you wish, but they are much harder to keep track of and really not necessary. Our test program uses the PID of each process to increase the amount of time it spends in its bursts, which we see in the output: the first process listed has a turnaround time of about 2400 and the last about 3600, with an overall average of 3192. Can we do better? I think so. Below is output from the same test, but using an SRTF scheduler instead of the standard RR scheduler:

Process 9 bursts: 6, 195, 408, 32, 11, 7, 42, 20, 10, 43, 58, turnaround time: 833
Process 11 bursts: 10, 255, 644, 34, 16, 9, 50, 25, 12, 52, 74, turnaround time: 1182
Process 13 bursts: 10, 302, 1013, 40, 18, 9, 61, 30, 15, 62, 84, turnaround time: 1645
Process 10 bursts: 217, 692, 759, 19, 7, 45, 23, 12, 46, 65, turnaround time: 1886
Process 15 bursts: 14, 359, 1542, 46, 19, 12, 67, 35, 17, 71, 100, turnaround time: 2283
Process 16 bursts: 21, 344, 1932, 47, 23, 11, 73, 35, 18, 78, 106, turnaround time: 2689
Process 17 bursts: 3, 391, 2315, 49, 25, 12, 1, 79, 39, 20, 81, 114, turnaround time: 3130
Process 18 bursts: 9, 416, 2716, 51, 25, 14, 81, 42, 20, 89, 120, turnaround time: 3584

There are a few things to take away from just this output:

  1. The average turnaround time for all processes has decreased by over 1000 ticks (to 2154). We said in class that SRTF and algorithms like it produce the least amount of wait time. This of course also has an impact on turnaround time: if we wait less, we finish sooner overall. Score! Note that starvation is still very possible with any greedy scheduling algorithm!
  2. Not all of the processes are listed in the order they were created. The randomness of our bursts adds some realism to these results: since we are no longer scheduling each process in RR fashion, the processes with the smallest bursts (or, in our case, the smallest last three bursts) finish first, so processes no longer complete in creation order.
  3. The bursts themselves are often shorter (and less uniform)! xv6 is a preemptively scheduled operating system that uses RR (the fairest of them all) to determine which job runs next. If there are 10 jobs to be scheduled onto the system, each of those 10 jobs gets exactly the same amount of time to run in a given time unit. The SRTF algorithm all but guarantees this fairness will no longer be present.


As noted above, you will need to preserve at least some of the logic already in xv6's scheduler until you are able to accumulate enough bursts to actually compute an average (and therefore make an informed scheduling decision). That said, your implementation should only ever "fall back" to that method of scheduling, not depend on it outright. Take a good hard look at the current implementation and try making your changes progressively.

Other than the accounting necessary to keep track of turnaround time, the rest of your implementation will be focused on the scheduler function. As such, your start and end burst system calls shouldn't require any changes at all; you will be working directly with the data structure you defined in the second part.
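For the turnaround bookkeeping itself, one simple approach (shown as a standalone sketch; the field names are invented, not existing xv6 fields) is to stamp each process with the tick count when it is created and subtract that from the tick count when it exits:

```c
/* Illustrative fields: in xv6 these could live in struct proc,
   stamped from the kernel's tick counter at creation and at exit. */
struct timing {
  int creation_tick;   /* tick count when the process was created */
  int completion_tick; /* tick count when the process exited */
};

/* Turnaround time = completion - creation, in ticks. */
int turnaround(const struct timing *t) {
  return t->completion_tick - t->creation_tick;
}
```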


Testing should be easy and straightforward, and if you didn't in the last part, you should vary the number of processes running on the system in this part. Because of the randomness of the burst lengths, it might not be obvious right away whether your implementation is working, so repetition is key. If you see roughly the same results each time and they roughly match the ones above, you're probably on the right track. While noisy and not something your final submission should include, print statements that fire when you switch to a process with a shorter average can be a big help during development.


For all parts of this assignment the submission process is identical and easy. Just make sure that you've committed all your work to your local repository --- git commit -am "Your commit message" will do the trick --- and then push these commits to your private repository (shared with me):

$ git push origin master
Counting objects: 3, done.
Delta compression using up to 4 threads.
Compressing objects: 100% (3/3), done.
Writing objects: 100% (3/3), 292 bytes | 0 bytes/s, done.
Total 3 (delta 2), reused 0 (delta 0)
To git@bitbucket.org:SeanWallace/cs450-summer2016-USERNAME.git
   9dce23b..0bdfeac  master -> master

If you see output like the above, your commits have been pushed. You might want to do a "git status" to make sure you have no uncommitted changes and/or aren't ahead of the remote repository. If you really want to make sure, log into your Bitbucket account and check the "Commits" tab. If you don't see your commit(s) there, I won't be able to either!

Last Updated: Wednesday, February 3, 2016 0:43 AM