> On Mar 21, 2016, at 10:00 PM, John Cowan <cowan@mercury.ccil.org> wrote:
>
> Dan Cross scripsit:
>
>> Those file structures are collected into a single, global table. The
>> question is why this latter table? One could rather imagine an
>> implementation where open() allocates (e.g., via malloc()) a new 'struct
>> file' that contains as a structure field an 'int refcnt' that is
>> incremented when a descriptor is dup()'d or as a side-effect of a fork(),
>> and is decremented as a result of a close(); when 'refcnt' drops to zero,
>> the structure could be freed with e.g. 'mfree'. What is the benefit of
>> 'struct file file[];'?
>
> Sure you could, but it would be more complex, slower, and less robust.
> "When in doubt, use brute force." --ken
And hard-coded limits, like the filesystem table, were all over the
place in early OSes, mostly to cope with the scarce, shared memory of
tiny-RAM systems, where it was better to just statically allocate
things at compile time. This made the code simpler (and smaller),
which made it both faster and allowed one to pack more functionality
into the system. It was rare that you’d have so much memory that you
could take advantage of dynamic allocation. If you used up all the
file descriptors that were statically compiled into the kernel,
chances are you wouldn’t have had the address space, or the RAM, to
source and sink data to the files in question, nor to deal with the
connections between the file table and the file system.
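
To make the contrast concrete, here is a rough sketch of that kind of
statically allocated table, loosely in the spirit of the early kernels
(NFILE, the field names, and falloc() here are illustrative, not the
exact historical declarations): everything is sized at compile time,
and "allocation" is just a linear scan for a free slot.

#include <stddef.h>

#define NFILE 100                       /* fixed at compile time */

struct file {
        char f_flag;                    /* read/write flags */
        char f_count;                   /* reference count; 0 == free slot */
        int  f_inode;                   /* stand-in for an inode reference */
        long f_offset;                  /* current file offset */
} file[NFILE];                          /* one static, global table */

/* Grab a free slot by scanning the table; no malloc, no free list. */
struct file *
falloc(void)
{
        struct file *fp;

        for (fp = &file[0]; fp < &file[NFILE]; fp++)
                if (fp->f_count == 0) {
                        fp->f_count = 1;
                        return fp;
                }
        return NULL;                    /* table full: a hard, static limit */
}

No allocator, no free list, no failure mode beyond "table full":
exactly the sort of brute force Ken was advocating.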
Dynamic allocation, and moving away from static limits, only came
about later, as memory sizes grew. It was that scarcity that made
Ken’s advice such a win on the hardware of the day.
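
For contrast, here is a minimal sketch of the dynamically allocated,
reference-counted alternative Dan describes above (file_alloc,
file_release, and the field names are my own hypothetical names, not
anything from a historical kernel):

#include <stdlib.h>

struct file {
        int  refcnt;            /* dup()/fork() bump it; close() drops it */
        long offset;            /* current file offset */
        /* ... inode pointer, flags, and so on ... */
};

struct file *
file_alloc(void)
{
        struct file *fp = malloc(sizeof(*fp));

        if (fp != NULL) {
                fp->refcnt = 1;
                fp->offset = 0;
        }
        return fp;              /* NULL when the allocator is out of memory */
}

void
file_release(struct file *fp)
{
        if (--fp->refcnt == 0)
                free(fp);       /* last reference gone; return the memory */
}

Conceptually tidy, but it drags in an allocator, fragmentation, and
out-of-memory handling that the kernels of that era could not afford,
which is the "more complex, slower, and less robust" John mentions.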
Warner