Imho the problem is the fixation on parser generators and BNF. It's just a lot easier to write a recursive descent parser than to figure out the correct BNF for anything other than a toy language with horrible syntax.
Imo BNF (or some other formal notation) is quite useful for defining your syntax; my biggest gripe with BNF in particular is the way it handles operator precedence (through nested recursive expressions), which can get messy quite fast.
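To illustrate the gripe: in classic BNF, precedence is encoded by introducing one nonterminal per precedence level, each referring to the next tighter one. A hypothetical grammar for basic arithmetic already needs three levels:

```
expr   ::= expr "+" term | expr "-" term | term
term   ::= term "*" factor | term "/" factor | factor
factor ::= NUMBER | "(" expr ")"
```

A language with ten-plus precedence levels (comparison, shifts, bitwise ops, etc.) needs a chain of ten-plus such nonterminals, which is exactly where this style gets unwieldy.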
Pratt parsers don't even use this recursion; they only have a concept of 'binding strength'. In layman's terms: if I'm parsing, say, a binary expression, and I've managed to parse a binary subexpression, and the next token I'm looking at is another binary operator, do I continue parsing that subexpression, making it the RHS of my original expression, or do I finish my original expression, which will then become the LHS of the new one?
It represents this through the concept of stickiness, with one simple rule: the subexpression always sticks to the operator that's more sticky.
This is both quite easy to imagine, and easy to encode, as stickiness is just a number.
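A minimal sketch of that "stickiness is just a number" idea, assuming a toy token stream of ints and single-character operator strings (the `BINDING` table and tuple-based AST are illustrative choices, not any particular library's API):

```python
BINDING = {'+': 1, '-': 1, '*': 2, '/': 2}  # higher number = stickier

def parse(tokens, pos=0, min_bind=0):
    """Parse tokens[pos:] into a nested tuple AST, stopping at any
    operator whose binding strength is below min_bind."""
    lhs = tokens[pos]  # assume the stream starts with a number
    pos += 1
    while pos < len(tokens):
        op = tokens[pos]
        if BINDING[op] < min_bind:
            break  # less sticky: lhs belongs to the caller's expression
        # stickier: the subexpression to the right sticks to this operator
        rhs, pos = parse(tokens, pos + 1, BINDING[op] + 1)  # +1 => left-assoc
        lhs = (op, lhs, rhs)
    return lhs, pos

ast, _ = parse([1, '+', 2, '*', 3, '-', 4])
# → ('-', ('+', 1, ('*', 2, 3)), 4): '*' out-sticks '+', so 2 sticks to it
```

Note there is no grammar level per precedence tier; adding an operator is just one new table entry.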
I think a simpler, more straightforward notation that incorporates precedence would be better.
I would argue the opposite: Being describable in BNF is exactly the hallmark of sensible syntax in a language, and of a language easily amenable to recursive descent parsing. Wirth routinely published (E)BNF for the languages he designed.
> But then, pushing regular languages theory into the curriculum, just to rush over it so you can use them for parsing is way worse.
At least in the typical curriculum of German universities, students already know the whole theory of regular languages quite well from their Theoretical Computer Science lectures, so in a compiler lecture the lecturer can indeed rush over this topic, because it is just repetition.