Microcoding has been a thing since the 1950s; it’s the default. Early RISCs tried to do without it, and for a brief time RISCs weren’t microcoded kinda by definition, but it snuck back in because hard-wiring everything just isn’t worth it. You can maybe get away with a fully hardwired design on MIPS, but Arm? Tough luck. RISC-V can be done without microcode, and that can make microcontroller-scale chips simpler, but you can also implement the full RV32I instruction set in terms of the RVC compressed subset and come out faster. Not to mention that once you get to things like the vector extensions, you definitely want microcode. The Cray-1 was hardwired, but Cray, too, dropped that approach for a reason.
I guess these days RISC more or less means “a decent chunk of the instruction set isn’t microcoded and can instead serve as the microcode”, whereas with modern CISC processors the instruction set and the microcode may have no direct correspondence at all.
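To make that contrast concrete, here’s a toy sketch in Python — the instruction and micro-op names are made up for illustration, not taken from any real ISA or decoder — of the two styles: a RISC-ish table where each instruction maps straight onto one internal op, versus a CISC-ish microcode ROM that expands one instruction into several.

```python
# Toy sketch of the two decode styles; all names are hypothetical.

# Internal micro-op vocabulary shared by both decoders.
MICRO_OPS = {"load", "store", "alu_add", "alu_sub"}

# RISC-ish decode: each architectural instruction is itself one micro-op.
RISC_DECODE = {
    "add": ["alu_add"],
    "sub": ["alu_sub"],
    "lw":  ["load"],
    "sw":  ["store"],
}

# CISC-ish decode: a microcode table expands one instruction into a sequence
# of micro-ops; there's no 1:1 correspondence with the architectural ISA.
CISC_MICROCODE_ROM = {
    "push":    ["alu_sub", "store"],          # bump stack pointer, then store
    "add_mem": ["load", "alu_add", "store"],  # read-modify-write in one instruction
    "add":     ["alu_add"],                   # simple instructions may still be one micro-op
}

def decode(insn, table):
    """Expand an architectural instruction into the internal micro-ops that run."""
    uops = table[insn]
    assert all(u in MICRO_OPS for u in uops)
    return uops

print(decode("add", RISC_DECODE))             # ['alu_add']                   1:1
print(decode("add_mem", CISC_MICROCODE_ROM))  # ['load', 'alu_add', 'store']  1:many
```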
Virtually all modern x86 chips work that way.