I think it was BSD 4.1 that added quotas to the disk system, and I was just wondering if anyone ever used them, in academia or industry. As a user and an admin I never used them and, while I thought the feature was interesting, just figured that the users would sort it out amongst themselves. Which they mostly did.
So, anyone ever use this feature?
David
Several list members report having used, or suffered under, filesystem
quotas.
At the University of Utah, in the College of Science, and later, the
Department of Mathematics, we have always had an opposing view:
Disk quotas are magic meaningless numbers imposed by some bozo
ignorant system administrator in order to prevent users from
getting their work done.
Thus, in my 41 years of systems management at Utah, we have not had a
SINGLE SYSTEM with user disk quotas enabled.
We have run PDP-11s with RT-11, RSX, and RSTS, PDP-10s with TOPS-20,
VAXes with VMS and BSD Unix, an Ardent Titan, a Stardent, a Cray
EL/94, and hundreds of Unix workstations from Apple, DEC, Dell, HP,
IBM, NeXT, SGI, and Sun with numerous CPU families (Alpha, Arm, MC68K,
SPARC, MIPS, NS 88000, PowerPC, x86, x86_64, and maybe others that I
forget at the moment).
For the last 15+ years, our central fileservers have run ZFS on
Solaris 10 (SPARC, then on Intel x86_64), and for the last 17 months,
on GNU/Linux CentOS 7.
Each ZFS dataset gets its space from a large shared pool of disks, and
each dataset has a quota: thus, space CAN fill up in a given dataset,
so that some users might experience a disk-full situation. In
practice, that rarely happens, because a cron job runs every 20
minutes, looking for datasets that are nearly full, and giving them a
few extra GB if needed. Affected users will, within an average of 10
minutes or so, no longer see disk-full problems. If we see a serious
imbalance
in the sizes of previously similar-sized datasets, we manually move
directory trees between datasets to achieve a reasonable balance, and
reset the dataset quotas.
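A minimal sketch of that cron job in Bourne shell, assuming only the
standard zfs(8) commands (the 90% threshold, the 10 GB increment, and
the pool name "tank" here are placeholders, not the actual values in
production):

    #!/bin/sh
    # Top up nearly-full ZFS datasets; run from cron, e.g. */20 * * * *.
    # Sketch only: threshold, increment, and pool name are placeholders.
    zfs list -Hp -o name,used,quota -r tank |
    while read name used quota
    do
        # Datasets with no quota report 0 in parseable (-p) mode; skip.
        [ "$quota" -eq 0 ] && continue
        # Over 90% full: raise the quota by 10 GB.
        if [ $((used * 100 / quota)) -ge 90 ]
        then
            zfs set quota=$((quota + 10 * 1024 * 1024 * 1024)) "$name"
        fi
    done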
We make nightly ZFS snapshots (hourly for user home directories), and
send the nightlies to an off-campus server in a large datacenter, and
we write nightly filesystem backups to a tape robot. The tape technology
generations have evolved through 9-track, QIC, 4mm DAT, 8mm DAT, DLT,
LTO-4, LTO-6, and perhaps soon, LTO-8.
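In outline, the nightly snapshot-and-send cycle amounts to something
like this sketch (the dataset "tank/home" and the host
"offsite.example.org" are invented names, not our actual systems):

    #!/bin/sh
    # Nightly ZFS snapshot-and-replicate sketch; names are placeholders.
    today=$(date +%Y%m%d)
    yesterday=$(date -d yesterday +%Y%m%d)   # GNU date, as on CentOS 7
    # Snapshot the dataset tree recursively.
    zfs snapshot -r tank/home@$today
    # Ship only the changes since the previous nightly snapshot.
    zfs send -R -i tank/home@$yesterday tank/home@$today |
        ssh offsite.example.org zfs receive -F tank/home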
Our main fileserver talks through a live SAN FibreChannel mirror to
independent storage arrays in two different buildings.
Thus, we always have two live copies of all data, and a third
far-away live copy that is no more than 24 hours old.
Yes, we do see runaway output files from time to time, and an
occasional student (among currently more than 17,000 accounts) who
uses an unreasonable amount of space. In such cases, we deal with the
job, or user, involved, and get space freed up; other users remain
largely unaware of the temporary space crisis.
The result of our no-quotas policy is that few of our users have ever
seen a disk-full condition; they just get on with their work, as they,
and we, expect them to do.
-------------------------------------------------------------------------------
- Nelson H. F. Beebe Tel: +1 801 581 5254 -
- University of Utah FAX: +1 801 581 4148 -
- Department of Mathematics, 110 LCB Internet e-mail: beebe(a)math.utah.edu -
- 155 S 1400 E RM 233 beebe(a)acm.org beebe(a)computer.org -
- Salt Lake City, UT 84112-0090, USA URL: http://www.math.utah.edu/~beebe/ -
-------------------------------------------------------------------------------
i used the fair share scheduler whilst a sysadmin of a small cray at UNSW. being an expensive machine, the various departments who paid for it wanted, well, their fair share.
in a different job i had a cron job that restricted Sybase backend engines to a subset of the cpus on a big SGI box during peak hours; at night sybase had free rein of all cpus.
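in modern linux terms (not the original IRIX setup; the process name,
cpu ranges, and hours below are guesses), the idea looks something
like this pair of crontab entries:

    # crontab sketch -- a linux analogue, not the original IRIX setup.
    # weekday mornings: confine all Sybase engine processes to cpus 0-7.
    0 8 * * 1-5   pgrep dataserver | xargs -n1 taskset -apc 0-7
    # weekday evenings: let them roam all 32 cpus again.
    0 20 * * 1-5  pgrep dataserver | xargs -n1 taskset -apc 0-31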
anyone ever do anything similar?
-Steve
> From: KatolaZ
> I remember a 5MB quota at uni when I was an undergrad, and I definitely
> remember when it was increased to 10MB :)
Light your cigar with disk blocks!
When I was in high school, I had an account on the school's computer, a
PDP-11/20 running RSTS, with a single RF11 disk (well, technically, an RS11
drive on an RF11 controller). For those whose jaw didn't bounce off the floor
reading that: the RS11 was a fixed-head disk with a total capacity of 512KB
(1024 512-byte blocks).
IIRC, my disk quota was 5 blocks. :-)
Noel
----- Forwarded message from meljmel-unix(a)yahoo.com -----
Warren,
Thanks for your help. To my amazement, in one day I received
8 requests for the documents you posted on the TUHS mailing
list for me. If you think it's appropriate you can post that
everything has been claimed. I will be mailing the Unix TMs
and other papers to Robert Swierczek <rmswierczek(a)gmail.com>
who said he will scan any one-of-a-kind items and make them
available to you and TUHS. The manuals/books will be going
to someone else who very much wanted them.
Mel
----- End forwarded message -----
> That photo is not Belle, or at least not the Belle machine that the article
> is about.
The photo shows the piece-sensing (by tuned resonant circuits)
chess board that Joe Condon built before he and Ken built the
hardware version of Belle that reigned as world computer chess
champion for several years beginning in 1980 and became the
first machine to earn a master rating.
Doug
> From: "John P. Linderman"
> Brian interviewing Ken
Ah, thanks for that. I had intended to go (since I've never met Ken), but
alas, my daughter's family had previously scheduled to visit that weekend, so
I couldn't go.
The 'grep' story was amusing, but historically, probably the most valuable
thing was the detail on the origins of B - DMR's paper on early C ("The
Development of the C Language") mentions the FORTRAN, but doesn't give the
detail on why that got canned, and B appeared instead.
Noel