Unix is a multi-user operating system whose development was started in 1969 by Ken Thompson and Dennis Ritchie; the name also covers the entire family of operating systems derived from the original Unix. Since Unix is also a trademark that only applies to specific products, terms like "*nix" are often used to refer to the entire family of Unix-compatible systems (including GNU/Linux). In this article, we don't bother to respect the trademark (Linux is a Unix for us).
Unix was originally a mainframe-like time-sharing operating system scaled down to much smaller computers with far more limited processing power and storage space. To keep the system small, elegant and flexible, it was built as a set of "small and sharp" tools that interoperate with each other via input/output piping. Unix later accumulated bloat, and from the 1980s microcomputer point of view it was already seen as a huge and complex OS for big computers.
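As a concrete illustration of this tool-composition idea, the following minimal C sketch wires two standard tools together roughly the way a shell handles "ls | wc -l", using only the POSIX pipe(), fork(), dup2() and exec() calls; error handling is kept to a bare minimum.

    /* Sketch of how a shell might wire up "ls | wc -l" with POSIX calls.
       Error handling is minimal for the sake of brevity. */
    #include <stdio.h>
    #include <unistd.h>
    #include <sys/wait.h>

    int main(void)
    {
        int fd[2];                    /* fd[0]: read end, fd[1]: write end */
        if (pipe(fd) == -1) { perror("pipe"); return 1; }

        if (fork() == 0) {            /* first child: "ls" writes into the pipe */
            dup2(fd[1], STDOUT_FILENO);
            close(fd[0]); close(fd[1]);
            execlp("ls", "ls", (char *)NULL);
            perror("execlp ls"); _exit(127);
        }
        if (fork() == 0) {            /* second child: "wc -l" reads from the pipe */
            dup2(fd[0], STDIN_FILENO);
            close(fd[0]); close(fd[1]);
            execlp("wc", "wc", "-l", (char *)NULL);
            perror("execlp wc"); _exit(127);
        }
        close(fd[0]); close(fd[1]);   /* parent closes both ends and waits */
        wait(NULL); wait(NULL);
        return 0;
    }

The point is less the specific calls than the composition: neither tool knows anything about the other, yet they cooperate through a plain byte stream.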
The possibility of reimplementing the system gradually, one tool at a time, was a major reason why Unix was chosen as the basis of the GNU project, even though Stallman didn't particularly like it.
Historically, it may be interesting to compare Unix with Forth, which was born around the same time for a somewhat similar purpose (bringing a "mainframe-grade" environment to a small computer), although Forth is a memory-oriented single-user system whereas Unix is a disk-oriented multi-user system. A major difference is that while Unix adopted many ideas and principles from the mainframe world, Forth actively questioned them in order to get as small as possible. Also, Forth is a programming language to the core, while Unix consists of many separate tools, some of which have a programming language built in.
Advantages of Unix from the permacomputing perspective
- The basic idea of having small, flexible and interoperable tools is close to permacomputing ideals.
- The use of a high-level language (C) has made it independent of specific processor and computer architectures. Programs tend to be source-code-compatible across Unix systems and often even with non-Unix systems (a small example follows this list).
- There are several independent but largely compatible implementations (classical Unix, GNU/Linux, Minix ...), many of which are FLOSS.
- Unix-like systems may have relatively low hardware requirements, especially when talking about "barebones" systems mainly used with character terminals.
- Long history of use in a wide variety of device types (embedded, workstation, server, supercomputer, etc.).
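To give a feel for the source-level portability mentioned in the list above, here is a tiny sketch that relies only on ISO C and POSIX interfaces; the same file should compile unchanged with the system C compiler of practically any Unix-like system (e.g. "cc example.c" — the file name is only illustrative).

    /* Uses nothing beyond ISO C and POSIX (unistd.h, sysconf()),
       so the same source should build on any Unix-like system. */
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        long page  = sysconf(_SC_PAGESIZE);  /* memory page size in bytes */
        long files = sysconf(_SC_OPEN_MAX);  /* per-process open-file limit */

        printf("page size: %ld bytes\n", page);
        printf("open file limit: %ld\n", files);
        return 0;
    }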
Disadvantages and problems
- Modern, "real-world" Unix systems, like most general-purpose operating systems, suffer from a lot of bloat and unnecessary complexity.
- A lot of this complexity is in one way or another related to legacy compatibility, as are many of the weird quirks one finds in Unix. One might say that this has resulted from an excessive prioritization of accumulated tradition over system-level refactoring.
- Unix has reached such a dominant position in many areas of computing that it represents a monoculture that narrows technological diversity.
- Despite having a long legacy, Unix is far from a bedrock platform. Software often needs to be constantly maintained in order to keep it compatible with various libraries and other changing pieces of the environment.
- Binary compatibility between different versions of the same OS may be surprisingly bad; even C libraries (such as glibc) may change their ABIs in ways that cause incompatibilities and force recompilation (see the sketch at the end of this section).
- Inefficiencies and limitations that can be traced back to pre-Unix mainframe ideals:
  - Preference for sequences of plain-text lines in input and output (as in 80-column IBM punched cards). Translating between plain-text formats and various internal representations causes overhead, and large plain-text files are often cumbersome to work with.
  - There are good tools for defining a task (by writing a command line), but the means for affecting a task while it is running are much more limited (as in the old batch-job culture). The possibilities for interoperability between running programs are much weaker than between programs that have not yet been started.
  - A "waterfall model" of software compilation, producing static monoliths that are very difficult to change arbitrarily, especially while they are running. This has resulted in a plethora of scripting/configuration languages that compensate for the inflexibility.
- The original Unix shell was designed to be quite minimal, and programmability was added to it in ad-hoc ways by later developers. Various scripting languages have come into use as a response to the messiness of shell scripting.
- Unix can be considered far too large and complex for many of the tasks it is currently used for (embedded systems, single-user mobile computers, etc.).
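To make the glibc point above a bit more tangible, the following glibc-specific sketch prints both the C library version the binary was compiled against and the version it is actually running against, using gnu_get_libc_version() from <gnu/libc-version.h>; on a non-glibc system it will not even compile.

    /* glibc-specific sketch: compare the C library version the binary was
       built against with the one it is running against. */
    #include <stdio.h>
    #include <gnu/libc-version.h>

    int main(void)
    {
        printf("built against   glibc %d.%d\n", __GLIBC__, __GLIBC_MINOR__);
        printf("running against glibc %s\n", gnu_get_libc_version());
        return 0;
    }

When the two differ, symbol versioning (e.g. memcpy@GLIBC_2.14) usually keeps old binaries running on a newer glibc, but a binary built against a newer glibc generally cannot be expected to run on an older one.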