If you were judicious and used just the good parts, PL/I seemed just fine.
Essentially all of my work at Yorktown was in PL/I.
Every programming language I've encountered has had its share of what I call toxic language features--things that impair reliability and maintainability. Good production programming shops ban the use of these features in their code.
In the case of PL/I, IMO the most toxic feature is the DEFAULT statement, which is Fortran's IMPLICIT on steroids. The end result is that when you see a variable declaration, you have to review every applicable DEFAULT statement to figure out what the variable's attributes actually are.
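Something like this, for instance (a rough sketch from memory--don't hold me to the exact syntax, and the names are made up):

   DEFAULT RANGE(*) STATIC;            /* every name picks up STATIC           */
   DEFAULT RANGE(A:H) FIXED DECIMAL;   /* names starting A-H: fixed decimal    */
   DEFAULT RANGE(I:N) FLOAT BINARY;    /* names starting I-N: float binary     */
   /* ... possibly pages of intervening code ... */
   DECLARE COUNT;   /* ends up STATIC FIXED DECIMAL--but you only know that    */
                    /* after tracking down every DEFAULT statement in scope    */

The DEFAULT statements can be far away from the DECLARE, and they stack, so no single line tells you the whole story.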
PL/I was designed to be a successor language to both COBOL and Fortran. One of the ugly features of Fortran is a fussy set of rules on statement ordering, particularly for declaration-type statements. PL/I relaxed those rules: declarations can appear anywhere in a block and apply throughout that block. I know of folks who composed programs at the keypunch who would jot down variable declarations on a piece of paper as they came up with them, then, when they reached the END statement of the scope block, punch out all the cards for the DECLARE statements in one batch. The practice was annoying in that, while reading the code, you'd encounter variables that hadn't been declared yet and have to rummage through the rest of the block to find the declarations. The PL/I shops I worked in required all declarations to be at the beginning of the scope block.
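Which meant you could legally end up with something like this (a minimal sketch--the variable names are made up):

   B: BEGIN;
      TOTAL = PRICE * QTY;                /* variables used up here...       */
      PUT SKIP LIST (TOTAL);
      /* ... possibly hundreds of statements later ... */
      DECLARE TOTAL FIXED DECIMAL (9,2);  /* ...and declared way down here   */
      DECLARE PRICE FIXED DECIMAL (7,2);
      DECLARE QTY   FIXED BINARY (15);
   END B;

The declarations still govern the whole block, so the compiler is perfectly happy; the human reader is the one left hunting.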
PL/I also has very weak data typing--you can convert almost any data type into almost any other. The language has a large and baroque set of conversion rules, and they don't always produce intuitive results, particularly when doing fixed decimal division (there's a sketch of the classic division surprise after the story below). Weak typing can also mask typos, leading to hard-to-find bugs. I once mistyped:
IF A ^= B   [I'm using ^ here for the EBCDIC NOT-sign character, ¬]
as:
IF A =^ B
where A and B were both declared as character strings. So what I had actually written was "if A equals NOT B". The compiler happily generated code to treat B as a character string of 0s and 1s, convert it to a bit string, apply the NOT operation, then similarly convert A to a bit string and do the comparison. This of course caused unexpected program behavior. It might have taken me a long time to find, except that the compiler issued a warning diagnostic: "data conversion will be done by subroutine call". That message almost always means you've unintentionally mixed data types.
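Back to the fixed decimal division surprise I mentioned above: the classic illustration goes something like this (a sketch from memory; the precision arithmetic is as I recall it from the old IBM manuals, so I may be off by a digit):

   DECLARE X FIXED DECIMAL (15,2);
   X = 25 + 1/3;
   /* 1/3 comes out as FIXED DECIMAL (15,14), i.e. 0.33333333333333,  */
   /* which leaves room for only one integer digit.  Adding 25        */
   /* overflows that digit, so you get 5.33... (or a FIXEDOVERFLOW    */
   /* condition) instead of the 25.33 anyone would expect.            */

Perfectly legal, no typo anywhere, and still a booby trap.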
-Paul W.