His talk used the attack he implemented as an example of a broader family of attacks. In his wrapping-up section (the moral) he lists other ways one could embed backdoors into systems, noting that the lower in the stack you go, the harder the backdoor is to detect. From Ken Thompson's conclusion:
The moral is obvious. You can't trust code that you did not totally create yourself. (Especially code from companies that employ people like me.) No amount of source-level verification or scrutiny will protect you from using untrusted code. In demonstrating the possibility of this kind of attack, I picked on the C compiler. I could have picked on any program-handling program such as an assembler, a loader, or even hardware microcode. As the level of program gets lower, these bugs will be harder and harder to detect. A well-installed microcode bug will be almost impossible to detect.
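For readers who haven't seen the paper, the two-stage trick Thompson describes can be illustrated with a toy sketch. This is not Thompson's code (his target was the C compiler; this models a "compiler" as a plain source-to-source function, and all names here are hypothetical), but it shows the essential move: the trojan recognizes two inputs, the victim program and the compiler itself, so that rebuilding the compiler from clean source still reproduces the trojan.

```python
# Toy model of the trusting-trust attack: a "compiler" is just a
# function from source text to source text. All names are hypothetical.

BACKDOOR = 'print("backdoor active")'

def trojaned_compile(source: str) -> str:
    """Pass source through unchanged, except for two recognized targets."""
    if "def login" in source:
        # Attack 1: when compiling the login program, append a backdoor
        # that exists only in the binary, never in the source.
        return source + "\n" + BACKDOOR
    if "def trojaned_compile" in source:
        # Attack 2: when compiling the compiler itself, re-insert the
        # whole trojan, so a rebuild from clean compiler source is
        # still compromised. (Marked symbolically here.)
        return source + "\n# [trojan re-inserted into compiler binary]"
    # Everything else compiles honestly, so the trojan stays invisible.
    return source

compiled = trojaned_compile("def login(user): return user == 'root'")
print(compiled)
```

Attack 2 is why source-level scrutiny fails: once a compromised compiler binary exists, the clean source tree no longer determines the binaries you actually run.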
It really isn't. People just love that Thompson paper, so they always bring it up. Whereas I predicted this exact problem repeatedly, and people dismissed it, especially for microcontrollers, where there's not much else to modify. And here we have the attack vector proven on a microcontroller. :)
[1] https://www.ece.cmu.edu/~ganger/712.fall02/papers/p761-thomp...