Profiling the compilation process suggests that this isn't the case. Rust's higher level passes are rarely the dominant part of execution time.
Check out https://github.com/lqd/rustc-benchmarking-data/tree/main/res... and the other benchmarks in that repository for some data on how real-world crates' compile time is spent. You'll find that backend code generation and optimization dominate most crates' compile times. There are a few exceptions: particularly macro-heavy crates, and a couple of crates with deeply nested types that hit some quadratic behavior in the compiler. But overall, the backend is still the largest piece.
The front end is time-consuming enough that replacing the backend with something lightweight like Go's wouldn't get you a 5-10x improvement, which is what I think you'd need to really move the needle on user perception. Moreover, a lot of the backend slowdown is due to front-end choices like monomorphization, which generates large amounts of intermediate code that must then be optimized away.
I doubt that a hypothetical version of Rust that avoided monomorphization would compile any faster. I remember doing experiments to that effect in the early days and finding that monomorphization wasn't really slower. That's because all the runtime bookkeeping necessary to operate on value types generically adds up to a ton of code that has to be optimized away, and it ends up a wash. As a point of comparison, Swift does all this bookkeeping, and it's not appreciably faster to compile than Rust; Swift goes this route for ABI stability reasons, not for compiler performance.
To go faster you would need not only a non-monomorphizing compiler but also boxed types. That would be a very different language, one higher-level than even Go (which monomorphizes generics).
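To make the tradeoff concrete, here is a minimal sketch (the function names are my own, purely illustrative): a generic function is monomorphized into a separate copy per concrete type it's called with, while a `dyn` trait object keeps a single compiled copy and dispatches through a vtable at runtime — the "boxed types" direction described above.

```rust
use std::fmt::Display;

// Monomorphized: the compiler emits a distinct copy of this function
// for every concrete T it is instantiated with (here: i32 and &str).
fn describe_mono<T: Display>(value: T) -> String {
    format!("value = {value}")
}

// Dynamic dispatch: a single copy is compiled; each call goes through
// a fat pointer (data pointer + vtable) at runtime.
fn describe_dyn(value: &dyn Display) -> String {
    format!("value = {value}")
}

fn main() {
    // Two instantiations -> two functions in the binary.
    assert_eq!(describe_mono(42), "value = 42");
    assert_eq!(describe_mono("hi"), "value = hi");

    // One function, shared across all Display types.
    assert_eq!(describe_dyn(&42), "value = 42");
    assert_eq!(describe_dyn(&"hi"), "value = hi");
}
```

The monomorphized version gives the optimizer full type information (and inlining opportunities) per copy, which is exactly the intermediate-code volume the backend then has to chew through.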
Just wanted to note that Go does only partial monomorphization: it monomorphizes per gcshape, not per concrete type. This severely limits the optimization potential and adds a runtime dispatch cost, at least in its initial implementation.
Then there is an open niche for a "development mode" that quickly outputs barely optimized binaries with proper error handling. (I do know about debug, etc.)
It already exists: it's called "debug" mode, and it's what you get when you don't compile in release mode. The biggest problem with debug mode is how slow the unoptimized code is. For back-end stuff it doesn't matter, but for things like gamedev you want your dependencies compiled in release mode. Fortunately, Cargo lets you specify that some deps should be compiled with optimizations even when your project is built in debug mode.
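For reference, a sketch of what that looks like in practice using Cargo's profile overrides: optimize all dependencies while keeping your own crate unoptimized for fast incremental builds.

```toml
# Cargo.toml
# Build all dependencies with optimizations even in `cargo build` (dev profile),
# while your own crate stays at the default opt-level = 0 for fast rebuilds.
[profile.dev.package."*"]
opt-level = 3
```

Since your own crate is the one that changes most often, this keeps rebuilds fast while the (rarely recompiled) dependencies run at release-like speed.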