Unix had (and still may have, I'm not up on Linux, etc) a really major, hard
boundary between 'user' code, in processes, and the kernel. There are
'instructions' that invoke system primitives - but not too many, and limited
interactions across that boundary. So, restricted semantics.
The same is true for the more recent Unix variants, modulo a few special cases such as the ones Larry mentions, but broadly speaking userland and the kernel are still separated.
Imagine building a
large application which had a hard boundary across the middle of it, with
extremely limited interactions across the boundary.
You mean like the Web? :-)
In 2000-2005 I wrote a substantial quasi-batch application that supported $EMPLOYER's main product. It was written about half in shell scripts and half in Perl - or, more accurately, it was driven entirely by shell scripts, but whenever I needed a pipeline component that wasn't already available in SunOS or as third-party open source, I wrote it in Perl. (There was also a single 10-line C program, written to eliminate a performance bottleneck.) So the application as a whole was full of hard boundaries across which nothing could pass except text streams, and I found that this added substantially to its debuggability and maintainability.
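That style can be sketched in miniature (the data and field layout here are invented for illustration, not from the actual application): a pipeline of standard tools in which each stage is a separate process, and the only thing crossing each boundary is newline-delimited text.

```shell
#!/bin/sh
# Miniature sketch of the "hard boundaries, text streams only" style:
# each stage below runs in its own process; the only interface between
# stages is plain text flowing over a pipe.  Records are "key,count"
# lines (invented example data).

printf 'widget,3\ngadget,7\nwidget,5\n' |
  sort -t, -k1,1 |                                    # group records by key
  awk -F, '{sum[$1] += $2}
           END {for (k in sum) print k "," sum[k]}' | # total per key
  sort                                                # deterministic output order
```

Because every interface is plain text, any stage can be pulled out and debugged in isolation - feed it a sample file, eyeball its output - which is exactly the debuggability property described above.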