Hi,
Branden wrote:
> the amount of space in the instruction for encoding registers seems to
> me to have played a major role in the design of the RV32I/E and C
> (compressed) extension instruction formats of RISC-V.
Before RISC-V, the need for code density caused ARM to move away from
the original, highly regular instruction format of one 32-bit word per
instruction to Thumb and then Thumb-2 encoding. IIRC, Thumb moved to
16 bits per instruction, which Thumb-2 expanded to also allow some
32-bit instructions. The mobile market was getting going, and the
financial and power costs of the storage options meant code density
mattered more.
The original ARM instructions had the top four bits hold the ‘condition
code’ which decided if the instruction was executed, thus the top hex
nibble was readable.
 0  1  2  3  4  5  6  7  8  9  a  b  c  d  e  f
eq ne cs cc mi pl vs vc hi ls ge lt gt le al nv
Data processing instructions, like
and rd, rn, rm ; d = n & m
aligned each of the four-bit fields identifying which of the sixteen
registers were used on nibble boundaries, so again it was readable as
hex.
xxxx000a aaaSnnnn ddddcccc ctttmmmm
The ‘a aaa’ above wasn't aligned, but still neatly picked which of the
sixteen data-processing instructions was used.
and eor sub rsb add adc sbc rsc tst teq cmp cmn orr mov bic mvn
And so it went on. A SoftWare Interrupt had an aligned 1111 to select
it and the low twenty-four bits as the interrupt number.
xxxx1111 yyyyyyyy yyyyyyyy yyyyyyyy
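The nibble layouts above can be sketched in a few lines of Python. This
is a toy decoder covering only the two formats described in this post
(the shift/operand-2 bits and all other instruction classes are
ignored, and the printed mnemonics are illustrative rather than real
assembler syntax):

```python
# Sketch of decoding the nibble-aligned fields described above.
# Layouts: xxxx000a aaaSnnnn ddddcccc ctttmmmm  (data processing)
#          xxxx1111 yyyyyyyy yyyyyyyy yyyyyyyy  (SWI)
COND = "eq ne cs cc mi pl vs vc hi ls ge lt gt le al nv".split()
OPS  = "and eor sub rsb add adc sbc rsc tst teq cmp cmn orr mov bic mvn".split()

def decode(word):
    cond = COND[(word >> 28) & 0xF]      # top nibble: condition code
    if (word >> 24) & 0xF == 0xF:        # aligned 1111 selects SWI
        return f"swi{cond} {word & 0xFFFFFF:#x}"   # low 24 bits: number
    op = OPS[(word >> 21) & 0xF]         # 'a aaa': data-processing op
    rn = (word >> 16) & 0xF              # operand register n
    rd = (word >> 12) & 0xF              # destination register d
    rm = word & 0xF                      # operand register m
    return f"{op}{cond} r{rd}, r{rn}, r{rm}"

print(decode(0xE0021003))   # -> andal r1, r2, r3
print(decode(0xEF000011))   # -> swial 0x11
```

Every field falls out with a shift and a single-nibble mask, which is
the point: the decoder barely has to do any work.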
I assume this neat arrangement helped keep the decoding circuitry small
leading to a simpler design and lower power consumption. The latter was
important because Acorn, the ARM chip's designer, wanted a cheaper
plastic case rather than ceramic so they set a design limit of 1 W. Due
to the poor tooling available, it came in at 0.1 W after allowing for a
margin of error. This was so low that Acorn were surprised when an
early board ran without power connected to the ARM; they found it was
clocking just from the leakage of the surrounding support chips.
Also, Acorn's Roger Wilson, who designed the ARM's instruction set, was
an expert assembly programmer (e.g. he wrote the 16 KiB BASIC ROM for
the 6502), so he approached it from the programmer's viewpoint as well
as that of the chip designer he became.
Thumb and Thumb-2 naturally had to destroy all this, so the
instructions are no longer orthogonal. Having coded swtch() in
assembler for various ARM Cortex M-..., I find it a pain to have to
keep checking which instructions are available on a given model and
which registers they can access. On ARM 2, there were few rules to
remember and writing assembler was fun.
--
Cheers, Ralph.
I’m wondering if anyone here is able to assist …
tvtwm is Tom LaStrange’s modified version of the twm window manager: Tom’s Virtual TWM. Somewhat unusually, tvtwm modelled the screen as a single large window and the display was a viewport that could be shifted around it, rather than the now-standard model of distinct virtual desktops.
I’m trying to rebuild the history of its releases. I have the initial release, and the first 6 patches Tom published via comp.sources.x, the 7th patch from the X11R5 contrib tape, the 10th patch from the X11R6 contrib tape, the widely available patch 11, and an incomplete patch 12 via personal email from Chris Ross, who took over maintenance from Tom at some point.
So, I’m looking for patch 8 and/or patch 9 (either one would do, since I can reconstruct the other from what I have plus one).
I’ve failed to find either of them. I’m not sure how they were distributed, and my searches have proven fruitless so far.
Does anyone here happen to have a trove of X11 window manager source code tucked away?
Thanks in advance,
d
Forwarded separately as original bounced due to use of old 'minnie'
address. Sorry :-(
---------- Forwarded message ---------
From: Rudi Blom <rudi.j.blom(a)gmail.com>
Date: Thu, 15 Dec 2022 at 10:18
Subject: [COFF] Re: DevOps/SRE [was Re: [TUHS] Re: LOC [was Re: Re: Re.:
Princeton's "Unix: An Oral History": who was in the team in "The Attic"?]
To: <mparson(a)bl.org>
Cc: coff <coff(a)minnie.tuhs.org>
<snip>
Our basic tooling is github enterprise for source and saltstack is our
config management/automation framework.
Their work-flow is supposed to basically be:
<snip>
Makes me wonder how current twitter deadlines are affecting 'quality', of
the code that is. Twits and tweets are a different matter :-)
Cheers,
uncle rubl
--
The more I learn the better I understand I know nothing.
On 14 Dec 2022 06:54 -0500, from brad(a)anduin.eldar.org (Brad Spencer):
> [...] but you needed to know 6809 or 68000 assembly to create anything
> new for the OS itself,
Wasn't that the norm at the time, though? As I recall one of the
things that really set UNIX apart from other operating systems up
until about the early 1990s was precisely how machine-independent it
was by virtue of (with the exception of the early versions) having
been written in something other than assembler.
--
✍ Michael Kjörling 🏡 https://michael.kjorling.se
“Remember when, on the Internet, nobody cared that you were a dog?”
Since I haven't seen it mentioned here: According to various sources,
Fred Brooks passed away on 17th November - see
https://en.wikipedia.org/wiki/Fred_Brooks
--
Peter Jeremy
(Moving to COFF, probably drifted enough from UNIX history)
On 2022-11-09 03:01, steve jenkin wrote:
>> On 9 Nov 2022, at 19:41, Dan Cross <crossd(a)gmail.com> wrote:
>>
>> To tie this back to TUHS a little bit...when did being a "sysadmin"
>> become a thing unto itself? And is it just me, or has that largely
> been superseded by SRE (which I think of as what one used to,
>> perhaps, call a "system programmer") and DevOps, which feels like a
>> more traditional Unix-y kind of thing?
>>
>> - Dan C.
>
> In The Beginning, We were All Programmers…
<snip>
I got started in this field in the mid '90s, just as the Internet
started moving from mostly EDU & military to the start of dial-up ISPs.
My first job was at a small community college/satellite campus of
UTexas, where my co-worker and I set up the first website for a UTexas
satellite campus. I'd played with VMS and SunOS; Linux was brand new
and was something we could install on a system built out of spare
parts from the closet. At the time, my job title was "Assistant Systems
Manager," where my main job was to add/remove users from the VMS system,
reset stuck terminal lines, clean out the print queue, etc. Linux was
very much a toy and the Linux system we installed was a playground. It
was mostly myself, a few others on the team, and a few CS students that
wanted to use something that looked more like Unix than VMS.
> SRE roles & as a discipline has developed, alongside DevOps, into
> managing & fault finding in large clusters of physical and virtual
> machines.
My next several years were spent dot-com hopping as a sysadmin, mostly
in IT shops where we kept the systems the company used online and
working. The mail server(s), web-servers, ftp sites, database servers,
NFS/CIFS, etc.
My job-title for most of my jobs through the mid '00s was (senior)
sysadmin.
I then spent 8 years as a senior product support "engineer" at IBM
(I was CAG/SWAT, for anyone that's familiar with IBM/Rational's job
roles), during which time I started seeing the rise of what they
eventually started calling DevOps in the early 2010s.
As the web grew bigger and bigger, and the concept of Software as a
Service and so-called "Cloud" services (AWS, Azure, etc.) became more
and more of a thing, the job of keeping the systems that ran those
services started splitting off of IT and into their own teams.
They took what they learned in IT, tried to codify some "best practices"
around monitoring, automation and tooling, started using more
shrink-wrapped stuff like ansible/chef/saltstack instead of home-grown
stuff we (re)wrote with each job, etc, started forcing ourselves to be
part of the dev/test/deploy cycle of the products we were supporting,
etc, and someone branded the new work-flow as 'DevOps'. I've glossed
over the dev side of that a bit, as they also got more and better build
tools, IDEs, and for better or worse, all things git.
My current day-job is being a DevOps manager. I started here 8 years
ago on the DevOps team and was promoted to manager 4 years ago.
> Never done it myself, but it’d seem the potential for screw-ups is
> now infinite and unlimited in time :)
Yup, the potential for pushing a bad config or bit of code to dozens,
hundreds, or even thousands of systems with the click of a mouse or a
single command line has never been higher, but only if the dev/test
cycle failed to find the error (or wasn't properly followed) before
someone decided to deploy.
The guys on my team are supposed to have tested their stuff in their
environments before even committing it to the repo, then it spends some
time in the QA/test lab before it gets pushed to production. They're not
even supposed to commit directly to the main repo; it should be done as
a pull-request, and someone else at least does an eye-ball review to
look for obvious mistakes, which should have been caught by the
originator if they were doing proper testing in their dev environment
first.
Our basic tooling is github enterprise for source and saltstack is our
config management/automation framework.
Their work-flow is supposed to basically be:
1 pull latest copy of main repo
2 branch a working set
3 make their changes
4 use something like vagrant to spin up test VMs to test their changes
(some people use docker instead of vagrant/virtualbox)
5 loop over 3-4 until it works
6 commit their changes to their branch
7 pull-request to main
a. someone else on the team does an eyeball code-review
b. other team member performs the merge
8 cherry-pick changes to the next release branch if changes need to
go in the next release, PR those picks to the release branches, same
process as above for merges.
9 push changes to the test env (test env is running on the next release
branch)
10 when QA clears the release, we push to prod on release day.
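For concreteness, steps 1-7 above might look something like this. All
paths, branch and file names here are made up for illustration, and a
fresh local repo stands in for the GitHub Enterprise remote; steps 4-5
(the vagrant/docker test loop) are only noted in comments:

```shell
set -e
# 1: stand-in for pulling the latest copy of the main repo
git init -q /tmp/salt-states-demo && cd /tmp/salt-states-demo
git config user.email demo@example.com && git config user.name Demo
git commit -q --allow-empty -m "initial state of main"

git checkout -q -b fix-nginx-timeout           # 2: branch a working set
echo "timeout: 90" > nginx.sls                 # 3: make the change
# 4-5: vagrant up (or docker run) a test VM and loop until the change works
git add nginx.sls
git commit -q -m "Raise nginx proxy timeout"   # 6: commit to the branch
# 7: push the branch and open a pull-request; a teammate reviews and merges
git log --oneline -1
```

The point of the dance is that nothing reaches main, let alone a
release branch, without having passed through a second pair of eyes.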
The developers that actually write the software offering have similar
workflows for their stuff, except they have a build-system involved to
compile & pkg stuff up & put the packages into the package repo which
get deployed to test (and eventually prod) with saltstack rules.
Our SRE is mostly concerned with making sure the monitoring of
everything is up to snuff, the playbooks for acting on alerts are
up-to-date, and the on-call person can follow them. We have a meeting
every other week to go over the alerts & playbooks to make sure that
we're keeping things up to date there. He doesn't manage the systems at
all,
he just makes sure all the moving pieces are properly monitored and we
know how to deal with the problems as they come up.
--
Michael Parson
Pflugerville, TX
KF5LGQ
Hi,
Will someone update the 386BSD Wikipedia page for me? Wikipedia doesn't
like my IPv6 address at home nor my VPS and I've given up trying to find
a way to make the edit myself without breaking IPv6 on my network.
The second paragraph of the history page states the following:
"The porting process with code was extensively documented in an 18-part
series written by Lynne Jolitz and William Jolitz in Dr. Dobb's Journal
beginning in January 1991."
There are only 17 parts to the series. My authoritative source is the
DDJ DVD available on The Internet Archive. There are 17 articles; the
months are listed below.
In case my ability to find the articles is called into question, the
Tracing BSD System Calls article from DDJ March 1998 states the
following in the fourth paragraph.
"(see the DDJ series "Porting UNIX to the 386," by William and Lynne
Jolitz, January-November 1991 and February-July 1992)"
01 - 91-Jan - PORTING UNIX TO THE 386: A PRACTICAL APPROACH - Designing
the software specification
02 - 91-Feb - PORTING UNIX TO THE 386: THREE INITIAL PC UTILITIES -
Getting to the hardware
03 - 91-Mar - PORTING UNIX TO THE 386: THE STANDALONE SYSTEM - Creating
a protected-mode standalone C programming environment
04 - 91-Apr - PORTING UNIX TO THE 386 LANGUAGE TOOLS CROSS SUPPORT -
Developing the initial utilities
05 - 91-May - PORTING UNIX TO THE 386 THE INITIAL ROOT FILESYSTEM -
Completing the toolset
06 - 91-Jun - PORTING UNIX TO THE 386 RESEARCH & THE COMMERCIAL SECTOR -
Where does BSD fit in?
07 - 91-Jul - PORTING UNIX TO THE 386: A STRIPPED-DOWN KERNEL - Onto the
initial utilities
08 - 91-Aug - PORTING UNIX TO THE 386: THE BASIC KERNEL - Overview and
initialization
09 - 91-Sep - PORTING UNIX TO THE 386: THE BASIC KERNEL -
Multiprogramming and Multitasking, Part One
10 - 91-Oct - PORTING UNIX TO THE 386: THE BASIC KERNEL -
Multiprogramming and Multitasking, Part II
11 - 91-Nov - PORTING UNIX TO THE 386: THE BASIC KERNEL - Device
autoconfiguration
12 - 92-Feb - PORTING UNIX TO THE 386: DEVICE DRIVERS - Drivers for the
basic kernel
13 - 92-Mar - PORTING UNIX TO THE 386 DEVICE DRIVERS - Entering,
exiting, and masking processor interrupts
14 - 92-Apr - PORTING UNIX TO THE 386 DEVICE DRIVERS - Getting into and
out of interrupt routines
15 - 92-May - PORTING UNIX TO THE 386: MISSING PIECES, PART 1 -
Completing the 386BSD kernel
16 - 92-Jun - PORTING UNIX TO THE 386 MISSING PIECES II - Completing the
386BSD kernel
17 - 92-Jul - PORTING UNIX TO THE 386 THE FINAL STEP - Running light
with 386BSD
--
Grant. . . .
unix || die
[ Moved to COFF ]
On Thu, 4 Aug 2022, Dan Cross wrote:
[...]
> It wasn't particularly notable, or at least didn't leave much of an
> impression; it presented a pretty "standard" (and primitive!)
> graphical experience. I believe it was monochrome, with amber
> on black.
It also ran on the Applix 1616, an Aussie-designed and -built 68000
system; it was pretty much ahead of its time.
https://en.wikipedia.org/wiki/Applix_1616
-- Dave