I think that is the problem, in that the write order needs to be data blocks, then inodes, and finally superblocks to do the least damage in a crash.
That is definitely the case, and perhaps the biggest fix in BSD (and other later systems) was to make the file system writes more consistently ordered, so that at worst you got some orphaned blocks that needed intervention to reclaim, rather than a trashed filesystem.
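To make the ordering concrete, here's a little sketch of the idea against a file-backed disk image (my illustration only, not V6 or BSD code; the offsets, sizes, and names are invented). Each piece is forced to stable storage before the next one is touched:

    /* disk_order.c -- illustrative only; layout and offsets are made up */
    #include <fcntl.h>
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>

    #define DATA_OFF  8192L   /* a data block somewhere in the data area */
    #define INODE_OFF 1024L   /* its inode in the inode table            */
    #define SUPER_OFF 0L      /* superblock / free list in block 0       */

    static void put(int fd, long off, const char *what)
    {
        char blk[512];
        memset(blk, 0, sizeof blk);
        strncpy(blk, what, sizeof blk - 1);
        if (pwrite(fd, blk, sizeof blk, off) != (ssize_t)sizeof blk)
            exit(1);
        if (fsync(fd) != 0)   /* barrier: this write is on disk before */
            exit(1);          /* the next one starts                   */
    }

    int main(void)
    {
        int fd = open("disk.img", O_RDWR | O_CREAT, 0666);
        if (fd < 0)
            return 1;
        put(fd, DATA_OFF,  "data");   /* 1. crash after this: lost write only    */
        put(fd, INODE_OFF, "inode");  /* 2. crash after this: free list is stale */
        put(fd, SUPER_OFF, "super");  /* 3. all three down: consistent           */
        close(fd);
        return 0;
    }

A crash between steps leaves only an orphaned block or a stale free list, both of which icheck/dcheck could repair; write the superblock first and a crash leaves it pointing at garbage.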
It was mandatory for operators at JHU to understand how the file system was laid out on disk, what icheck/dcheck reported, and what the options were for fixing things. Link counts that were too low and duplicates in the free list should NEVER happen with an intelligently ordered set of I/O operations, but that's not what Version 6 UNIX had. It wasn't uncommon to find several errors in the file system that would degenerate into system faults if not corrected.
But all that aside, even in those shaky days, typing sync multiple times really didn't accomplish anything, and it became even less useful as the file systems grew more stable.
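For what it's worth, the reason the ritual bought nothing is that sync(2) on those systems only scheduled the flush of dirty buffers and returned; it did not wait for the I/O to finish. A second call right behind the first just found the same buffers already queued, something like:

    #include <unistd.h>

    int main(void)
    {
        sync();   /* schedules all dirty buffers for writing; POSIX still
                     allows it to return before the writes complete */
        sync();   /* walks the same buffer cache again -- the writes are
                     already queued, so this adds nothing */
        /* the only value of typing it three times was the seconds it
           took to type, during which the first flush drained */
        return 0;
    }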