Dinghy concept papers: Values Palette

Here we consider design qualities that could influence the design of a platform.

[!] This is not a set of design principles for dinghy. Rather: in order to think, we look at possibilities.

Cheap process forking

Unix has this. Unix developers can compose systems by creating and leaning on many processes, including short-lived ones. Cheap process forking encourages the unix piping mechanism.

Upside: allows for rapid composition of complex pipelines from simple tools.
Downside: requires lots of copying of memory from one process space to the next.

The early apache webserver httpd handled CGI through such a strategy: the server spawned a new child process for each request. (A sketch of this pattern appears below, after the Everything is a File notes.)

Upside: offloads lots of nasty concurrency problems onto the operating system, which has already had to solve them.
Downside: does not scale as well as later single-process efforts like apache 2 (uses threads) and nginx (focuses on async techniques).

Summary: cheap process forking is good for prototyping, bad for scale.

Network-centric filesystem

Most commonly seen in institutions with lots of unix systems arranged around a large NFS filesystem held on high-cost equipment.

Has practical value. A dedicated team can ensure data is backed up. You can log in from any system. You can easily share data between operators.

Network-centric filesystems are an anti-pattern in some settings. If you are processing lots of data, they encourage users to be in a mindset of drawing data to the work. At scale, it is better to instead send work to the data.

Everything is a File

This is contested territory. We will handle it through different perspectives, below.

Everything is a File (Stream of Bytes)

This is the unix way. There is a central hierarchical filesystem. Things that are not data-stored-on-disk can be attached to the hierarchy as though they were, and you can interact with them using the same tools as you would use to interact with data-stored-on-disk. (Kind of. Some of the time.)

There is some elegance. You can use select(2) to detect non-blocking behaviour across files, sockets and stdin alike. Kind of. This is not at all possible in Windows.

There is some inelegance. Lots of edge-cases. Calling select(2) on files does not behave the same as calling it on network sockets. There are things you can't write to.

Linus wrote,

    The whole point with "everything is a file" is not that you have some random filename (indeed, sockets and pipes show that "file" and "filename" have nothing to do with each other), but the fact that you can use common tools to operate on different things.

And later,

    The UNIX philosophy is often quoted as "everything is a file", but that really means "everything is a stream of bytes". In Windows, you have 15 different versions of "read()" with sockets and files and pipes all having strange special cases and special system calls. That's not the UNIX way. It should be just a "read()", and then people can use general libraries and treat all sources the same.

https://yarchive.net/comp/linux/everything_is_file.html
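To make the "just a read()" point concrete: a minimal sketch, assuming a POSIX system, in which one select(2) waits on standard input and a pipe together, and one read(2) drains whichever is ready. The buffer size and the pipe message are arbitrary illustration.

```c
/* Sketch: one select(2), one read(2), regardless of what the fd "is". */
#include <stdio.h>
#include <string.h>
#include <sys/select.h>
#include <unistd.h>

int main(void)
{
    int pipefd[2];
    if (pipe(pipefd) == -1) return 1;
    const char *msg = "hello from a pipe\n";
    write(pipefd[1], msg, strlen(msg));

    int fds[2] = { STDIN_FILENO, pipefd[0] };
    fd_set readable;
    FD_ZERO(&readable);
    FD_SET(fds[0], &readable);
    FD_SET(fds[1], &readable);
    int maxfd = fds[1] > fds[0] ? fds[1] : fds[0];

    /* One wait primitive covers both descriptors ... */
    if (select(maxfd + 1, &readable, NULL, NULL, NULL) == -1) return 1;

    /* ... and one read primitive drains whichever is ready. */
    for (int i = 0; i < 2; i++) {
        if (FD_ISSET(fds[i], &readable)) {
            char buf[256];
            ssize_t n = read(fds[i], buf, sizeof buf);
            if (n > 0) fwrite(buf, 1, (size_t)n, stdout);
        }
    }
    return 0;
}
```

The "kind of" caveat above still applies: select(2) on a regular on-disk file simply reports it as always ready, which is one of the edge cases.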
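Returning to Cheap process forking: the fork-per-request strategy of the early CGI-era httpd can be sketched in a few lines, assuming a POSIX system. The port number and the canned reply are placeholders and error handling is mostly elided; this illustrates the pattern, it is not a server.

```c
/* Sketch: fork-per-connection, in the spirit of early CGI-era httpd.
 * Each request lives and dies in its own child process; isolation and
 * scheduling are delegated to the kernel. Port 8080 is a placeholder. */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <signal.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    signal(SIGCHLD, SIG_IGN);            /* let the kernel reap children */

    int srv = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr = { 0 };
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(8080);
    if (bind(srv, (struct sockaddr *)&addr, sizeof addr) == -1) return 1;
    if (listen(srv, 16) == -1) return 1;

    for (;;) {
        int conn = accept(srv, NULL, NULL);
        if (conn == -1) continue;
        if (fork() == 0) {               /* child: one whole process per request */
            close(srv);
            const char *reply = "HTTP/1.0 200 OK\r\n\r\nhello\r\n";
            write(conn, reply, strlen(reply));
            close(conn);
            _exit(0);
        }
        close(conn);                     /* parent: straight back to accept() */
    }
}
```

This is the concurrency offload described above: the parent never juggles two requests at once, because the kernel does.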
Everything is a File (Namespace emphasis)

This is the plan-9 way. Plan-9 achieves minimalism through rigid structuring of the system into namespaces that look like filesystems. Linus uses the term {Everything is a namespace} to contrast the Plan-9 ethos with the {Everything is a stream of bytes} principle as he interprets it.

In Plan-9, there is a standard interface (namespace) that presents as though it is a filesystem. Applications present as filesystems. Network connections present as filesystems. The system interface presents as a filesystem. Consequently, the Plan 9 system-call interface is very small.

The system is discoverable. You can browse around /proc and look for things.

It can take effort to build all applications so as to honour this interface. How do you implement a web browser such that it exposes itself as a namespace? Perhaps this speaks less of plan-9 failure, and more of what a terrible settlement the web is.

Everything is an icon

The BeOS/Haiku way. A variety of concepts are exposed to a browsable GUI as icons. There is a concept of a hierarchical filesystem. Icons represent metadata and an optional stream of data (a file).

Sophisticated use of metadata tags leads to an innovative type of naked application. In one example, users interact with email without the use of an email client. A daemon draws mail down into icons as it arrives. Users use the standard OS GUI to view their set of email. There are small, distinct programs for viewing and writing email.

This hints at a design tradition to rival monolithic applications such as Outlook and Photoshop. Imagine an art program where each layer was a file in a directory.

With that said, I am wary of this as a design principle. IPC-via-filesystem suffers from tragedy-of-the-commons interface bleed. Imagine reading a file and expecting to see X or Y. But the writer wrote Z. Who is at fault?

Everything is an interface

Imagine an operating system where there was no filesystem. Rather, you had to interact with interfaces. You could use existing tools to construct new types and interfaces, and then interact with those. Consider: how to interact with the network, which produces data that is outside your sphere of control.

The Memory Map is the Computer

What is the essence of the computer? Here we consider that it could be the memory map. Computer architectures often present several types of memory in a single memory map. By accessing addresses A to B, you can read from ROM. Addresses B to C will hook into standard RAM. Addresses C to D will hook into video RAM.

Potentially, you could build an instruction architecture strictly oriented around reads and writes to memory. You could even implement mathematics operations as a consequence of writing to fake addresses. What behaviour would we expect when the program attempts to divide by zero? (A toy sketch appears below, after the Multi-threading notes.)

The CPU is the Computer

What is the essence of the computer? Someone might point to the Central Processing Unit. You can tell by its name that it is the heart of the computer. The CPU reads from memory and writes to memory, and thereby drives activity. Computers are inherently synchronous.

The Bus is the Computer

What is the essence of the computer? It is the bus. The term CPU is a misnomer. Processing cores are simply peers on a bus. Operations come and go: user input, network interaction, interrupts. Computers are inherently asynchronous.

The Backbone is the Computer

This follows the logic of "The Bus is the Computer" through to a further conclusion. Picture a system which is a set of computers arranged around a private, contained network. They are running different nodes of a single deployment of a software system. Operations come and go. Computers are simply peers on a bus. Computers are inherently asynchronous.

Multi-threading

Mainstream operating system interfaces are generally synchronous: system calls block. Mainstream applications are generally asynchronous: they are event-driven. Multithreading is a means of bridging those two worlds.
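A minimal sketch of that bridge, assuming POSIX threads: the blocking read(2) the operating system offers is parked on a worker thread, so the rest of the program can stay event-shaped. The names (worker, on_input) are invented for illustration; a real application would hand results to its event loop through a queue rather than calling a handler directly.

```c
/* Sketch: wrapping a blocking system call in a thread so the rest of the
 * program can behave asynchronously. Compile with -pthread. */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

/* The "asynchronous application" side: called when data arrives. */
static void on_input(const char *buf, ssize_t len)
{
    (void)buf;
    printf("got %ld bytes\n", (long)len);
}

/* The "synchronous operating system" side: a worker blocked in read(2). */
static void *worker(void *arg)
{
    (void)arg;
    char buf[256];
    for (;;) {
        ssize_t n = read(STDIN_FILENO, buf, sizeof buf);   /* blocks */
        if (n <= 0) break;
        on_input(buf, n);                                   /* hand off */
    }
    return NULL;
}

int main(void)
{
    pthread_t t;
    pthread_create(&t, NULL, worker, NULL);
    /* ... the main thread is free to run an event loop, redraw a UI, etc. ... */
    pthread_join(t, NULL);
    return 0;
}
```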
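Returning to The Memory Map is the Computer: the "maths as a consequence of writes to fake addresses" idea can be modelled in ordinary C. The addresses, the divide device and the status register below are all invented for illustration, and the status register is only one possible answer to the divide-by-zero question.

```c
/* Sketch: a toy memory map where arithmetic is a side effect of
 * reads and writes. Addresses and the status flag are invented. */
#include <stdint.h>
#include <stdio.h>

#define DIV_A      0x0100   /* write: dividend */
#define DIV_B      0x0104   /* write: divisor  */
#define DIV_RESULT 0x0108   /* read:  quotient */
#define DIV_STATUS 0x010C   /* read:  0 ok, 1 divide-by-zero */

static uint32_t a, b;       /* the "device" behind the fake addresses */

static void bus_write(uint32_t addr, uint32_t value)
{
    if (addr == DIV_A) a = value;
    if (addr == DIV_B) b = value;
}

static uint32_t bus_read(uint32_t addr)
{
    if (addr == DIV_RESULT) return b ? a / b : 0;
    if (addr == DIV_STATUS) return b ? 0 : 1;   /* divide-by-zero surfaces as
                                                   a status word, not a trap */
    return 0;
}

int main(void)
{
    bus_write(DIV_A, 84);
    bus_write(DIV_B, 2);
    printf("84 / 2 = %u (status %u)\n",
           (unsigned)bus_read(DIV_RESULT), (unsigned)bus_read(DIV_STATUS));

    bus_write(DIV_B, 0);
    printf("84 / 0 -> status %u\n", (unsigned)bus_read(DIV_STATUS));
    return 0;
}
```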
Pure async

Speculation: what about an OS where all system interactions were based on callbacks? (Tanenbaum comments)

GUI-driven development

Applications where the developer starts with a GUI, and then works out the data structures later. Examples:

- Programs written in Visual Basic
- Programs written in Interface Builder/WO
- Stuff built for MacOS classic

Problem: data structures tend to be a mess. Concurrency tends to be a mess.

Outliner-driven development

Userland Frontier presents an outliner as the entrypoint into the system. http://frontier.userland.com/

(Need to do work here. Waiting on a house move to unpack my computers.)

User is a Guest

Positive: System is Lord Protector

Computers are necessarily complicated. System software has a duty to silently run things in an orderly manner. The user should only be exposed to things they are likely to care about.

Negative: System is Jailor

Computers are not necessarily complicated. Systems software should not make the owner feel like a guest in their own home.

FPGAs

Positive: Liquid hardware is Awesome

Microchips are obsolete. We can build what we want in FPGAs. We can create hardware without needing to get our hands dirty again. We can easily create extra layers.

Negative: Liquid hardware is Terrifying

Intel has put lots of hidden layers into our main computing platform. The security is dodgy. Unless we are careful, all those problems are going to get a lot worse with FPGAs. How could you know if your FPGA had been backdoored? We should be wary of every layer.