Introduction

Operating systems are now an integral part of our daily lives. Most of our appliances, gadgets and technology at some point require interaction with an operating system. We will take a brief look at two of the most prominent platforms: how they function, the features they provide and the differences between them. We will mainly discuss Microsoft's flagship operating system, Windows, but we will also look at Linux, the open-source operating system first released by Linus Torvalds.


Discussion

Microsoft Windows is the most widely used operating system on desktop computers today. Its current version, Windows 10, accounts for over 70% of Windows installations. Windows is a closed-source, hybrid-kernel, single-user multitasking OS. It allows users to run multiple applications at the same time, sharing the computer's resources among the tasks the user executes. Other popular multitasking operating systems include Apple's macOS and the many Linux distributions.

Microsoft originally found success in the 1980s with its single-user, single-tasking operating system, MS-DOS. Single-tasking systems are limited to one user performing one task at a time; in MS-DOS you would input commands line by line to perform actions on your computer. Microsoft released Windows in November 1985, featuring the company's first iteration of a graphical user interface. A decade later Microsoft published Windows 95, backed by a significant marketing campaign. A task bar ran along the bottom of the screen, and arguably one of the platform's most identifiable elements was introduced: the Start menu. Always located at the lower left of the screen, it was constantly visible and gave easy access to applications, files and folders. Instead of using the command line, less savvy users could quickly and easily navigate their computer to find what they needed.

Linux is an open-source, monolithic, multi-user multitasking OS. It was conceived by Linus Torvalds, and the overwhelming majority of mobile, server and cloud-based systems run some form of Linux distribution. What sets Linux apart from its competitors is its open source code, free to be modified and iterated upon. It is a highly customizable OS and enjoys an active, dedicated and helpful community focused on sharing. Users can run a distribution directly from a live CD or USB without ever installing it on the computer. Linux software is distributed through repositories, where users can download and install many free applications in one place. For those concerned about privacy, Linux collects little to no data from its users. The huge variety of Linux distros means that you can often repurpose low-end machines or find a suitable operating system for niche hardware or applications.

Microsoft, along with the Linux community and other vendors, also develops what are known as network operating systems (NOS). These were born from the desire to allow global access to databases and applications across multiple computers and electronic devices. With a NOS you can manage security features such as authentication and access control, carry out system maintenance on all connected devices and monitor device resources.

Network operating systems can function on a client/server basis, where clients access one or more file servers to share resources, or on a peer-to-peer basis, where all computers are considered equal and share directly with each other. The benefits of using a NOS include centrally managed security, the simple addition and removal of resources and the ability to access and maintain devices operating in a different location. With distributed networking systems, users can share general communications, files, documents, even network bandwidth and disk storage, without needing to connect to a centralized server. Any NOS must be robust in design, as it will likely support workstations running different operating systems; within one network the devices could be running macOS, Windows and Linux simultaneously. It should recognize a wide variety of third-party applications and hardware whilst remaining secure and easy to access.
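
To make the client/server model concrete, here is a minimal sketch using Python's standard socket module. The address, port and request string are hypothetical, and a real NOS would layer authentication, access control and a file-sharing protocol on top of a connection like this one:

    import socket
    import threading
    import time

    HOST, PORT = "127.0.0.1", 5000  # hypothetical local address for the demo

    def run_server():
        # A toy "file server" that answers a single client request.
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
            srv.bind((HOST, PORT))
            srv.listen(1)
            conn, _addr = srv.accept()
            with conn:
                request = conn.recv(1024).decode()
                # A real NOS would authenticate the client and check
                # access control lists before serving anything.
                conn.sendall(f"serving resource: {request}".encode())

    def run_client():
        # A client asking the server for a shared resource.
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
            cli.connect((HOST, PORT))
            cli.sendall(b"shared/report.txt")
            print(cli.recv(1024).decode())

    if __name__ == "__main__":
        threading.Thread(target=run_server, daemon=True).start()
        time.sleep(0.2)  # give the server a moment to start listening
        run_client()

In a peer-to-peer arrangement each machine would run both roles, acting as a server for its own shared resources and as a client of everyone else's.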


Architecture

Linux is monolithic: all core services and functionality share the same kernel space. Although modern Microsoft Windows is at its heart monolithic, it is considered by many to be a hybrid because a large part of its subsystems runs fully in user mode. Both Linux and Windows now generally use 64-bit architecture, but you can still find home computers running 32-bit operating systems. The move to 64-bit was made to accommodate 64-bit CPUs, which were not widely available for consumer PCs until the early 2000s. A 32-bit processor needs a 32-bit operating system, whereas a 64-bit processor can run either a 64-bit or a 32-bit operating system; with a 32-bit operating system, however, the CPU will not function at maximum capability. The main differences between the two architectures are the width of the registers and memory addresses: a 64-bit CPU can process more data per instruction and address far more memory. Software that works with large amounts of data can therefore operate more efficiently on a multicore 64-bit processor, and a 64-bit operating system is naturally optimized for this hardware.

Most computers from the 1990s up to the 2000s were 32-bit machines. A 32-bit machine can address a maximum of 4 GB of RAM, which was sufficient at the time. However, with users running multiple monitors, browsing countless webpages and keeping various applications open at once, there was a need to increase the available memory. A 64-bit system can in theory address 16 exabytes of RAM, more than enough for the foreseeable future. Software developers now almost exclusively program for 64-bit systems, and manufacturers tend not to offer 32-bit versions of their hardware given the lack of market demand.
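
The 4 GB and 16 exabyte figures fall straight out of the pointer width: with one byte per address, an n-bit machine can name 2^n bytes. A quick check in Python:

    # One byte per address: an n-bit pointer can name 2**n bytes.
    print(f"32-bit: {2**32 / 2**30:.0f} GiB of addressable memory")  # 4 GiB
    print(f"64-bit: {2**64 / 2**60:.0f} EiB of addressable memory")  # 16 EiB

(In practice, current 64-bit CPUs expose fewer than 64 physical address bits, but the theoretical ceiling is what the 16 exabyte figure reflects.)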


File, Process and Memory Management

File management is an important part of an OS. Files on a computer are all part of a hierarchical system that begins with the root, then directories and finally subdirectories. How a computer stores, retrieves and names files can make a huge difference to organization and efficiency. Attributes are assigned to files upon creation or modification; characteristics such as file type, size and location on the disk help a user locate and organize their system. In Windows, the default file manager is Windows Explorer, and users navigate their computer's directories with the mouse and keyboard to locate and manipulate files and folders. An alternative is to use the Command Prompt; the Linux equivalent is the terminal. With various commands a user can interact with their chosen system to manage files. Most popular operating systems come with a file manager built in, and that is sufficient for the majority of users; there are also third-party applications for businesses and power users who may need more advanced options.
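
As an illustration of these attributes, the sketch below uses Python's portable os.stat call, which works on both Windows and Linux, to report the type, size and modification time recorded for each entry in the current directory:

    import os
    import stat
    import time

    def describe(path):
        # Read the attributes the OS stores alongside a file.
        info = os.stat(path)
        kind = "directory" if stat.S_ISDIR(info.st_mode) else "file"
        print(f"{path}: {kind}, {info.st_size} bytes, "
              f"modified {time.ctime(info.st_mtime)}")

    for entry in os.listdir("."):
        describe(entry)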

Another important function of the operating system is memory and process management. A process is the binary code of a program being executed; it is an active entity, whereas a program is considered a passive entity. The OS needs to allocate resources to a process and allow for information sharing and exchange, and the system will move processes back and forth between main memory and disk space during program execution. Memory management tracks each memory location and how much has been allocated to a process, and it decides which process will receive memory and when. If some memory is freed or unallocated, it updates the status.

Every program that we open must be copied from a storage device into main memory, also called RAM, and there are various techniques for managing this. 'Single contiguous allocation' is the simplest method, whereby almost all of the computer's memory is made available to one application; MS-DOS allocates memory this way. 'Partitioned allocation' divides the main memory into multiple partitions, each storing the information required for a specific task. 'Swapping' temporarily moves a process from main memory back to storage; it allows for dynamic relocation, where address transformations are performed during the execution of a program. This means that the OS can easily move a process when needed and a program can grow over time.
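
A minimal sketch of partitioned allocation, assuming four fixed partitions and a best-fit placement policy (the partition sizes and process names are made up for the illustration):

    # Fixed partition sizes in KB; None marks a free partition.
    partitions = {64: None, 128: None, 256: None, 512: None}

    def allocate(process, size_kb):
        # Best fit: use the smallest free partition the process fits in.
        for part_size in sorted(partitions):
            if partitions[part_size] is None and size_kb <= part_size:
                partitions[part_size] = process
                return part_size
        return None  # no fit: the OS would swap a process out or queue this one

    print(allocate("editor", 100))   # placed in the 128 KB partition
    print(allocate("browser", 300))  # placed in the 512 KB partition

Even this toy shows the trade-off: the editor occupies a 128 KB partition while needing only 100 KB, wasting 28 KB, a problem known as internal fragmentation.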

Each process has a unique ID assigned by the OS, a process state, I/O status information such as the devices allocated to the process, and scheduling information. Scheduling information is used to decide which process should execute and which should be placed in a waiting state, thereby making full use of the CPU. If a CPU is idle, the operating system chooses a process from the ready queue to be executed. This selection is completed by the CPU scheduler, which makes its decision based on the processes that are ready and waiting in memory and allocates the CPU's attention accordingly. There are various algorithms that provide strategies for scheduling. 'First come, first served' scheduling is simple: it forms a queue, like customers in a shop each waiting their turn. This method can incur long average wait times, particularly if the first process takes a long time. 'Shortest job first' picks the quickest job and completes it first, then the next quickest, and so on. There are quite a few other techniques not mentioned here; each has its positive and negative aspects, and all of them affect the performance of your operating system, as they determine which of your programs takes priority over the computer's resources.
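
The gap between these two strategies is easy to demonstrate with a classic textbook workload of three CPU bursts; running the same jobs shortest-first cuts the average waiting time from 17 ms to 3 ms:

    def average_wait(bursts):
        # Each job waits for the total runtime of everything scheduled before it.
        wait = elapsed = 0
        for burst in bursts:
            wait += elapsed
            elapsed += burst
        return wait / len(bursts)

    jobs = [24, 3, 3]  # CPU burst lengths in ms, in arrival order

    print("First come, first served:", average_wait(jobs))          # 17.0
    print("Shortest job first:      ", average_wait(sorted(jobs)))  # 3.0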


Kernel Operation

Microsoft Windows features a hybrid kernel design; all of its current systems are based on the original Windows NT, released in 1993. The kernel program starts immediately after the bootloader and controls every other program and process on the computer. The hybrid design attempts to merge the benefits of microkernel and monolithic architectures: monolithic systems run the whole operating system in kernel mode, while microkernel-based systems run most of the OS as separate processes, largely outside the kernel, with the kernel coordinating messaging between those processes.

The kernel in any platform is a key program, crucial for the OS to function. On x86 hardware the protection model defines four rings, and Windows and Linux generally leave the two middle rings unused. The innermost ring manages the hardware and has direct access to the CPU and system memory. Processes that run inside kernel mode can impact the whole system, so if something fails it will likely result in a complete shutdown. Instructions are carried out via a hardware abstraction layer (HAL) that hides differences in hardware to provide a consistent platform for software to run on. The outermost ring, known as user mode, handles user interaction with the kernel. When you open a user-mode application, Windows creates a process and gives it a private virtual address space so that it cannot change or interfere with data belonging to another application. It is isolated, meaning any crash will be limited to that single application. Code that runs in kernel mode, by contrast, shares a single virtual address space: if a kernel-mode driver accidentally writes to the wrong address, data could be compromised, and if it crashes the entire operating system may shut down.
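
The isolation of user-mode processes can be observed directly. On a POSIX system (os.fork is not available on Windows), the sketch below creates a child process and shows that a write into the child's copy of memory never reaches the parent's address space:

    import os

    value = 42  # lives in this process's private virtual address space

    pid = os.fork()       # POSIX only: child gets its own copy of memory
    if pid == 0:          # child process
        value = 99        # modifies only the child's copy
        os._exit(0)
    else:                 # parent process
        os.waitpid(pid, 0)
        print(value)      # prints 42: the child could not touch our memory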


Security Features

The Trusted Computer System Evaluation Criteria defines four security classifications for computer systems, from high to low: A, B, C and D. Most commercially available systems are no higher than the C class, where users are accountable for their own data. In order to protect the computer from interference, several functions provide a secure environment. A common method is user authentication, whereby anyone who requests access is verified with a username and password. We can now also use fingerprint, retina and facial scans, voice prints and two-factor authentication with a second device to help secure the system. Most operating systems use some form of containment so that an application only controls specific resources. Limited containment, which is used by most commercial operating systems, bases its decisions on user identity and ownership without other criteria such as the integrity of a particular program. As such, it is quite easy to breach the security of a system should an application be compromised: if a user launches an infected application, the attacker gains all of the privileges associated with that user. Another mechanism is access control, where the owner of a computer chooses who can access any particular resource. This is commonly found in networked systems and involves setting up various tiers of privilege so that sensitive areas can be ringfenced.
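
A minimal sketch of both ideas, using Python's standard pbkdf2_hmac for salted password hashing and a small, hypothetical privilege table for tiered access control:

    import hashlib
    import hmac
    import os

    def hash_password(password, salt=None):
        # Store a salted hash, never the password itself.
        salt = salt or os.urandom(16)
        digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
        return salt, digest

    def verify(password, salt, digest):
        candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
        return hmac.compare_digest(candidate, digest)  # timing-safe comparison

    # Tiered access control: each resource demands a minimum privilege level.
    LEVELS = {"guest": 0, "user": 1, "admin": 2}
    REQUIRED = {"/public": "guest", "/home": "user", "/config": "admin"}

    def can_access(role, resource):
        return LEVELS[role] >= LEVELS[REQUIRED[resource]]

    salt, digest = hash_password("hunter2")  # illustrative credential
    print(verify("hunter2", salt, digest))   # True: authentication succeeds
    print(can_access("user", "/config"))     # False: admin tier required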


Summary

Overall there are a multitude of choices, and the right operating system will generally depend on the type of computer and the needs of the user. Windows is extremely popular on desktop PCs and laptops and in businesses and server rooms. It is a very familiar platform for most people, so it can be easily integrated into places of work and study. It is so popular that practically every hardware manufacturer and software developer supports recent versions of Windows, and its vast library of software can meet almost any need. Linux distributions hold a significant market share in server environments, and Android enjoys the lion's share of the mobile userbase. Because Linux is open and free to use, it has lent itself well to innovation, and compatibility is growing each year. It is certainly an exciting time, where our only burden is a greater choice than ever before.