# Design Outline

Gila is a microkernel, and almost all functionality of the OS is delegated to "server" processes. A server is a process that provides one specific piece of functionality. The mechanism by which processes will locate services is still undecided, but I intend to implement something similar to (if not compatible with) D-Bus.

## Boot Process

Gila initializes as a bare kernel, with the bootloader providing an init RAM filesystem in the form of a `.tar.lzma` archive. The kernel reads this archive and launches an init process (`/system/bin/init`). The init process has its own configuration file located at `/system/cfg/init.toml`, which should detail the steps needed to bring the system up to multi-user status. This config file will also contain versioning information for compatibility, stating which kernel version and architecture it targets.
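As a rough sketch of what such a config might look like (every key name below is invented for illustration; nothing here is a committed format):

```toml
# /system/cfg/init.toml — hypothetical layout; all keys are illustrative

[compat]
kernel = "0.1.0"   # kernel version this config was written against
arch = "x86_64"    # target architecture

# Ordered steps to bring the system to multi-user status.
[[stage]]
name = "mount-root"
exec = "/system/bin/fsd"

[[stage]]
name = "multi-user"
exec = "/system/bin/sessiond"
```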

If the init system needs to access a filesystem, it must first get the handle of the filesystem server. If the filesystem server is not running when this handle request is made, the kernel will launch the server before returning its handle. From there, the filesystem server will request the handle of the disk driver that corresponds to the requested filesystem. The kernel then launches the disk driver server, and assigns it a seat based on the device it drives, granting it access to the memory region responsible for that device.
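The lazy-launch flow above can be sketched as follows. The `Kernel` type, handle table, and server names are all invented for illustration; real Gila interfaces may look quite different:

```rust
use std::collections::HashMap;

// Hypothetical sketch of on-demand server launch.
#[derive(Clone, Copy, PartialEq, Eq, Debug)]
struct Handle(u32);

struct Kernel {
    running: HashMap<String, Handle>, // servers already launched
    next: u32,
}

impl Kernel {
    fn new() -> Self {
        Kernel { running: HashMap::new(), next: 1 }
    }

    /// Return a handle to the named server, launching it first if needed.
    fn request_handle(&mut self, server: &str) -> Handle {
        if let Some(&h) = self.running.get(server) {
            return h; // already running: hand back the existing handle
        }
        // Server not running: launch it before returning its handle.
        let h = Handle(self.next);
        self.next += 1;
        self.running.insert(server.to_string(), h);
        h
    }
}

fn main() {
    let mut k = Kernel::new();
    let fs = k.request_handle("fsd");       // first request launches the server
    let fs_again = k.request_handle("fsd"); // second request reuses it
    assert_eq!(fs, fs_again);
    println!("filesystem server handle: {:?}", fs);
}
```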

The benefits of this approach are threefold:

- The system need not include a filesystem driver or a disk driver at all if the disk and filesystem are never read or written.
- The driver or filesystem server can crash, and the whole stack can recover.
- The user or developer can trivially introduce new drivers without a reboot. This applies to filesystem drivers and disk device drivers alike.

The system can therefore be configured in two ways:

- All drivers are included in the initramfs, for diskless operation.
- Only the bare minimum of drivers needed for disk access is included in the initramfs, and all other drivers ship on the root filesystem.

## APIs

Processes will access services by means of a data bus, possibly similar to D-Bus. In this model, a process accesses a service by making an IPC call to the kernel, which would either serve as a bus server itself or delegate bus management to a dedicated server process. From there, the client process may establish a connection to the system bus and use that connection to request services.

For example, if a process wanted to request the kernel version, it could access the service `site.shibedrill.Gila`, the object path `/site/shibedrill/Gila/Kernel`, and the property `site.shibedrill.Gila.Kernel.Version`. If the same process wanted to access the vendor ID of a specific PCI device, it could access the service `site.shibedrill.Pci`, the object `/site/shibedrill/Pci/Device/07/00/0`, and the property `site.shibedrill.Pci.Device.Vendor`. This property would be present on all PCI devices, as it would be defined in an interface common to all PCI device objects in the service's namespace.
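The lookup described above can be modeled as a simple keyed property tree. This is only a hedged sketch: the `Service` type and `get_property` call are stand-ins for whatever the real bus protocol ends up being, and the version string is illustrative:

```rust
use std::collections::HashMap;

/// Hypothetical in-process model of one bus service's property tree.
struct Service {
    name: String,
    // (object path, fully qualified property name) -> value
    properties: HashMap<(String, String), String>,
}

impl Service {
    /// Look up a property on an object, as a bus GetProperty call might.
    fn get_property(&self, object: &str, property: &str) -> Option<&String> {
        self.properties.get(&(object.to_string(), property.to_string()))
    }
}

fn main() {
    let mut props = HashMap::new();
    props.insert(
        ("/site/shibedrill/Gila/Kernel".to_string(),
         "site.shibedrill.Gila.Kernel.Version".to_string()),
        "0.1.0".to_string(), // illustrative version string
    );
    let gila = Service {
        name: "site.shibedrill.Gila".to_string(),
        properties: props,
    };
    let version = gila.get_property(
        "/site/shibedrill/Gila/Kernel",
        "site.shibedrill.Gila.Kernel.Version",
    );
    println!("{} kernel version: {:?}", gila.name, version);
}
```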

## Device Drivers

Device drivers, in this userspace concept, are initialized as-needed. If a process requests a service provided by a driver that is not yet running, a privileged process (or the kernel) will initialize a device driver process. If the relevant device is present, the kernel will map the necessary portions of physical memory into the driver's address space, and return information on the mappings to the new driver. If the device does not exist, the message bus will return an error.
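A minimal sketch of that decision, under stated assumptions: `Mapping`, `BusError`, the device-name strings, and all addresses below are invented for illustration, not real Gila interfaces.

```rust
// Hedged sketch of driver initialization.
#[derive(Debug, PartialEq)]
struct Mapping {
    phys_base: usize, // physical base of the device's MMIO region
    virt_base: usize, // where it was mapped in the driver's address space
    len: usize,
}

#[derive(Debug, PartialEq)]
enum BusError {
    NoSuchDevice,
}

/// Launch a driver for `device` if the device is present: map its MMIO
/// region into the new driver's address space and report the mapping.
fn init_driver(present: &[&str], device: &str) -> Result<Mapping, BusError> {
    if !present.contains(&device) {
        // Device absent: the message bus returns an error to the requester.
        return Err(BusError::NoSuchDevice);
    }
    // Illustrative addresses only; a real kernel consults bus enumeration.
    Ok(Mapping { phys_base: 0xfebf_0000, virt_base: 0x4000_0000, len: 0x1000 })
}

fn main() {
    let present = ["pci:07:00.0"]; // devices a hypothetical enumeration found
    match init_driver(&present, "pci:07:00.0") {
        Ok(m) => println!("driver mapped {:#x}..{:#x}", m.virt_base, m.virt_base + m.len),
        Err(e) => println!("launch failed: {:?}", e),
    }
}
```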

How the kernel will recognize whether a device is present is still an open question. Ideally, a comprehensive enumeration system can be developed that does not require device definitions to be built into the kernel.

## Servers vs. Shared Libraries

Servers and shared libraries serve similar purposes: they make some functionality usable by any process without code duplication. However, there are cases where processes and developers should prefer one over the other.

A server should be used when:

- The function must acquire a mutually exclusive lock on a resource.
- The function should complete asynchronously.

A shared library should be used when:

- No resource involved needs to be mutually exclusive.
- The function is non-blocking and synchronous.

Servers are therefore essential for things like disk drivers and filesystem drivers, where unsynchronized writes could cause data loss. It should also be noted that libraries can, and often will, call procedures provided by servers. The mechanics of making such a call will be handled by the standard library and shared across processes.
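The split above can be sketched in a few lines. `checksum` stands in for a pure library function, and `send_to_fs_server` is a stand-in for the standard library's real IPC call (both names are invented for illustration):

```rust
/// Library-side: touches no shared resource, non-blocking, synchronous.
fn checksum(data: &[u8]) -> u32 {
    data.iter()
        .fold(0u32, |acc, &b| acc.wrapping_mul(31).wrapping_add(b as u32))
}

/// Library-side wrapper that forwards to the server holding the disk lock.
fn write_file(path: &str, data: &[u8]) -> Result<(), String> {
    send_to_fs_server(path, data) // IPC; the server serializes all writes
}

/// Stand-in for the standard library's IPC machinery; always "succeeds" here.
fn send_to_fs_server(path: &str, data: &[u8]) -> Result<(), String> {
    println!("fs server: writing {} bytes to {}", data.len(), path);
    Ok(())
}

fn main() {
    let data = b"hello";
    println!("local checksum: {:#x}", checksum(data)); // pure library call
    write_file("/home/user/notes.txt", data).unwrap(); // routed to the server
}
```

The caller never sees the difference between the two paths; only the library author chooses, per function, whether the work stays local or crosses into a server.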