Multitasking is a computer's ability to run multiple tasks seemingly at the same time; forks and threads are the mechanisms for creating and managing those tasks. From early real-time operating systems to modern Unix-like systems, they turned single-processor machines into multitaskers. In the MicroBasement, multitasking connects vintage time-sharing systems to today's multicore beasts: the clever software tricks that make one CPU feel like many. This write-up covers the concept of multitasking, RTOSes and Unix, what forks and threads are, how they're implemented with timers and interrupts, the illusion of parallelism on single-core systems, how multiprocessors changed things, and the complexity of multicore programming.
Multitasking allows a computer to execute multiple processes or tasks "simultaneously" by rapidly switching between them. It started in the 1960s with mainframes needing to handle multiple users. Real-time operating systems (RTOS) prioritize tasks for embedded systems (e.g., VxWorks, 1980s), ensuring deadlines are met. Unix (1969, Bell Labs) popularized preemptive multitasking, where the OS interrupts tasks via a scheduler. Other systems like DOS (1981) were single-tasking, but OS/2 (1987) and Windows NT (1993) brought true multitasking to PCs.
A **fork** (the Unix `fork()` system call) creates a new process by duplicating the current one: the child process gets a copy of the parent's memory, file descriptors, and code. It's heavyweight but isolated (separate address space). Threads are lightweight: they share the same process memory but each has its own stack and registers. Threads (e.g., POSIX pthreads) are faster to create and switch between, but risk corrupting shared data (race conditions). Forks suit independent tasks; threads suit parallel subtasks within a process.
Multitasking relies on hardware timers and interrupts. A system timer (e.g., the PIT on x86) generates periodic interrupts ("ticks", typically every 1–10 ms; classic Unix ran at 100 Hz). The OS kernel's scheduler handles the interrupt, saves the current task's state (a context switch), and loads another. Preemptive multitasking interrupts running tasks; cooperative multitasking requires tasks to yield voluntarily. RTOSes use priority-based scheduling; Unix uses round-robin or priority queues. This creates the illusion of parallelism on single-core CPUs.
On single-core systems, only one process runs at a time — the OS switches so fast (milliseconds) it appears concurrent. This "time-slicing" fooled users into thinking multiple programs ran simultaneously. Early Unix and Windows 95 used this for multitasking; it worked for GUI apps but struggled with CPU-intensive tasks. The illusion breaks under heavy load, causing slowdowns.
Multiprocessor (and later multicore) systems (e.g., IBM's multiprocessor mainframes of the 1960s, Intel's Core 2 Duo in 2006) allow true parallelism: multiple tasks run on separate cores simultaneously. This boosted performance for multithreaded apps (video encoding, servers). However, splitting work across cores is hard: programs must be thread-safe, use locks/mutexes to avoid data races, and scale across cores (Amdahl's Law caps the speedup by whatever fraction of the work stays serial). OS schedulers distribute threads across cores, but poor design leads to bottlenecks.
Multitasking, forks, and threads turned computers from sequential machines into parallel powerhouses, enabling modern OSes like Linux and Windows. In the MicroBasement, they remind us that the "illusion" of multitasking on early systems laid the groundwork for today's multicore world — a clever mix of hardware interrupts and software magic that keeps our digital lives running smoothly.