# Design Outline
Gila is a microkernel, and almost all functionality of the OS is relegated to
"server" processes. A server is a process that provides one specific piece of
functionality. The means by which processes will locate services is still up
in the air, but I believe I want to implement something similar to (if not
compatible with) D-Bus.
## Development Goals
- No custom tooling: Gila should compile, build, and run without any custom
languages, syntaxes, formats, emulators, build tools, frameworks, compilers,
etc.
- Easy to work on: Gila should be easy for developers to understand, and not
only the kernel itself, but also the way userspace works together and the way
the build system produces an ISO image. I want people to be able to easily
write, compile, install, and run a program within a bootable ISO.
## Inspiration
- [Linux](https://kernel.org): A highly featureful monolithic kernel, with
support for namespacing different kinds of resources.
- [The seL4 Microkernel](https://sel4.systems/): Formally verified,
capability-based microkernel, setting the gold standard for secure kernels.
Capable of hosting virtual machines as well as normal processes.
- [Redox OS](https://www.redox-os.org/): A Unix-like microkernel OS, written
almost entirely in Rust, with source-level compatibility for much Linux
software.
- [Fuchsia's Zircon Kernel](https://fuchsia.dev/): A new kernel and OS by
Google, sometimes speculated to eventually replace the Linux kernel within
Android. Features a really cool component model for applications/system
services.
## Boot Process
After being loaded by the bootloader at a random address, the kernel will
perform some early memory management work so that it can allocate memory for
the userboot binary. Userboot is an executable that is loaded as a boot
module and initialized as the very first process. The userboot concept is
borrowed from the Zircon kernel, used in Google's Fuchsia OS.
Userboot has only one job, and that is to parse the compressed initramfs image
and start the true init system based on the contents of that image. After that,
it can exit. This approach eliminates a lot of code from the kernel, since we
don't have to parse ELF files or perform decompression in-kernel.
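
As a rough sketch, userboot's entire job might look like the following. Every
function here other than `main` is a placeholder for an API that does not
exist yet, and the `/sbin/init` path is equally hypothetical:

```rust
// Hypothetical sketch of userboot's single job. No real Gila API exists yet;
// every helper below is a stand-in.

/// Placeholder: the kernel hands userboot the initramfs module's bytes.
fn initramfs_bytes() -> &'static [u8] { unimplemented!() }

/// Placeholder: decompress the archive entirely in userspace.
fn decompress(image: &[u8]) -> Vec<u8> { unimplemented!() }

/// Placeholder: pull a single file out of the unpacked archive.
fn find_file<'a>(archive: &'a [u8], path: &str) -> Option<&'a [u8]> { unimplemented!() }

/// Placeholder: parse the ELF, map it, and start it as a new process.
fn spawn_elf(elf: &[u8]) -> Result<(), ()> { unimplemented!() }

fn main() {
    // 1. Decompress the initramfs image the kernel loaded as a module.
    let archive = decompress(initramfs_bytes());

    // 2. Locate and start the true init system from the archive contents.
    //    The "/sbin/init" path is illustrative only.
    let init = find_file(&archive, "/sbin/init").expect("no init in initramfs");
    spawn_elf(init).expect("failed to start init");

    // 3. Userboot's job is done; it simply exits.
}
```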
The init system will rely only on a small set of kernel & userboot APIs to
bring up other "system software". Userboot will effectively be treated as a
part of the kernel, allowing users to build userspace initramfs archives
without having to recompile the kernel.
The benefit of this approach is threefold:
- The system does not need to include a filesystem OR disk driver if neither
the disk nor the filesystem is read or written.
- The driver or filesystem server can crash, and the whole stack can recover.
- The user or developer can trivially introduce new drivers without a reboot.
This goes for filesystem drivers AND disk device drivers.
Hence, the system can be configured in two ways:
- The drivers can all be included in the initramfs for diskless operation.
- The bare minimum drivers needed for disk access are included in the
initramfs, and all other drivers are included in the root filesystem.
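
To make these two configurations concrete, the initramfs contents might look
something like this. Every path and driver name below is purely illustrative;
Gila defines no such layout yet:

```text
# Diskless: everything lives in the initramfs.
/sbin/init
/servers/bus
/drivers/net-virtio
/drivers/fb-simple

# Disk-backed: only what is needed to reach the root filesystem.
/sbin/init
/servers/bus
/drivers/blk-nvme      # disk device driver
/drivers/fs-ext4       # filesystem server; everything else loads from disk
```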
## APIs
Processes will access services by means of a data bus, possibly similar to
D-Bus. In this model, a process would obtain information from a service by
making an IPC call to the kernel, which would either serve as the bus server
itself or delegate bus traffic to a special server process. From there, the
client process may establish a connection to the system bus, and use that
connection to request services.
For example, if a process wanted to request the kernel version, it could
access the service `site.shibedrill.Gila`, the object path `/site/shibedrill/Gila/Kernel`,
and the property `site.shibedrill.Gila.Kernel.Version`. If the same process
wanted to access the vendor ID of a specific PCI device, it could access
service `site.shibedrill.Pci`, object `/site/shibedrill/Pci/Device/07/00/0`, and
property `site.shibedrill.Pci.Device.Vendor`. This property would be present
in all PCI devices, as it would be defined in an interface common to all PCI
device objects in the service's namespace.
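
From the client's side, the kernel-version lookup above might read roughly as
follows. The `Bus` type and its methods are hypothetical stand-ins for
whatever connection API the standard library ends up exposing; only the
service, object, and property names come from the example above:

```rust
// Hypothetical client-side view of the D-Bus-like lookup described above.
// `Bus` and its methods are placeholders, not a real API.

struct Bus;

impl Bus {
    /// Placeholder: IPC call to the kernel (or bus server) to join the system bus.
    fn connect_system() -> Result<Bus, ()> { unimplemented!() }

    /// Placeholder: read one property from an object owned by a service.
    fn get_property(&self, service: &str, object: &str, property: &str) -> Result<String, ()> {
        unimplemented!()
    }
}

fn main() -> Result<(), ()> {
    let bus = Bus::connect_system()?;

    // Ask the kernel service for its version, using the names from the text.
    let version = bus.get_property(
        "site.shibedrill.Gila",
        "/site/shibedrill/Gila/Kernel",
        "site.shibedrill.Gila.Kernel.Version",
    )?;
    println!("kernel version: {version}");

    // The PCI vendor-ID lookup would be the same call with the PCI names.
    Ok(())
}
```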
## Device Drivers
Device drivers, in this userspace concept, are initialized as needed. If a
process requests a service provided by a driver that is not yet running, a
privileged process (or the kernel) will initialize a device driver process.
If the relevant device is present, the kernel will map the necessary portions
of physical memory into the driver's address space, and return information on
the mappings to the new driver. If the device does not exist, the message bus
will return an error.
How the kernel will recognize whether a device is present is still unknown.
Hopefully, a comprehensive enumeration system can be developed which does not
require device definitions to be built into the kernel. I am considering a
system where device driver binaries have an "enumerate" entry point in
conjunction with their "main" entry point: the "enumerate" function
instructs the driver server to search for compatible devices and to fork the
driver if any are found. This removes all device-specific code from the
kernel.
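
From the driver's side, that split could look roughly like the sketch below.
Everything here is invented for illustration, including the `Device` type,
the scanning helper, and the example PCI IDs; Gila has no driver ABI yet:

```rust
// Sketch of the "enumerate"/"main" split described above. All names and
// signatures are hypothetical.

/// Placeholder for a device found during enumeration (e.g. a PCI function).
pub struct Device {
    pub vendor: u16,
    pub device: u16,
}

/// Placeholder: walk whatever buses this driver knows how to scan.
fn scan_candidate_devices() -> Vec<Device> { unimplemented!() }

/// Entry point the driver server calls at load time. It reports which
/// devices this binary can handle; the server then forks one driver process
/// per match, and the kernel maps each device's registers into that
/// process's address space.
pub fn enumerate() -> Vec<Device> {
    scan_candidate_devices()
        .into_iter()
        .filter(|d| d.vendor == 0x1234 && d.device == 0x5678) // illustrative IDs
        .collect()
}

/// Entry point run in the forked process, once per matched device.
pub fn driver_main(device: Device) {
    // Placeholder: drive the hardware through the kernel-provided mappings.
    let _ = device;
    unimplemented!()
}
```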
## Servers vs. Shared Libraries
Servers and shared libraries serve similar purposes: they make some
functionality usable by any process without code duplication. However, there
are times when developers should prefer one over the other.
A server should be used when:
- The function must somehow acquire a mutually exclusive lock on a resource.
- The function should complete asynchronously.
A shared library should be used when:
- No resources involved need to be mutually exclusive.
- The function is non-blocking and synchronous.
Hence, servers are very important for things like disk drivers and
filesystem drivers, where unsynchronized writes could cause data loss. It
should also be noted that libraries *can*, and often will, call procedures
on servers. The mechanics of making such calls will be handled by the
standard library and shared across processes.
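
As a sketch of that last point, a shared library might hide a server call
behind an ordinary function, so callers never see the IPC. The
`site.shibedrill.Fs` service name follows the naming convention above but is
invented, as is `call_server`:

```rust
// Sketch: a library function that looks synchronous to its caller but is
// implemented as a call to a filesystem server. All names are hypothetical.

/// Placeholder: send a request message to the named service and block until
/// its reply arrives (the standard library would own this logic).
fn call_server(service: &str, request: &[u8]) -> Result<Vec<u8>, ()> { unimplemented!() }

/// Library-facing API: reads a whole file. The caller neither knows nor
/// cares that a filesystem server holds the exclusive lock on the disk.
pub fn read_file(path: &str) -> Result<Vec<u8>, ()> {
    call_server("site.shibedrill.Fs", path.as_bytes())
}
```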