ARM can be really powerful; there are ARM servers out there.
More powerful than x86? Or are there other reasons to use it in a desktop?
At least as powerful for less energy, and being energy efficient is a good thing even in desktops.
I didn’t know ARM was as powerful as x86.
Both are instruction sets. They are part of the equation that gives a CPU its “power”, but they aren’t the only factor.
What gives ARM its power-efficiency edge is its smaller instruction set, which translates to a smaller die doing the same work. That is also its Achilles’ heel: workloads that use the missing instructions have to be translated, either by hardware or by software, or they simply won’t run. Each approach has its drawback (a bigger die and lower energy efficiency, or more overhead and slower execution; there’s a small sketch of the software path below).
But for workloads that don’t lean on x86-specific features, ARM is very competitive.
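To make that trade-off concrete, here’s a minimal C sketch of the software path (popcount32 is a hypothetical name, and the dispatch pattern is just the kind of thing compilers and emulators do, not code lifted from any particular one):

```c
#include <stdint.h>

/* Population count (number of set bits). x86 has a single POPCNT
 * instruction for this; a target without one has to do the same work
 * with a handful of instructions per call -- the "more overhead,
 * slower execution" side of the trade-off. */
static inline uint32_t popcount32(uint32_t x)
{
#if defined(__POPCNT__)
    /* Hardware path: compiles down to one instruction. */
    return (uint32_t)__builtin_popcount(x);
#else
    /* Software fallback: classic SWAR bit-twiddling. Correct, but
     * several times the instruction count of the hardware path. */
    x = x - ((x >> 1) & 0x55555555u);
    x = (x & 0x33333333u) + ((x >> 2) & 0x33333333u);
    x = (x + (x >> 4)) & 0x0F0F0F0Fu;
    return (x * 0x01010101u) >> 24;
#endif
}
```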
Yeah, but would those workloads be more performant if they used CISC features?
No, and the above commenter is a little mixed up. While we originally thought the benefit of RISC CPUs was their smaller instruction set - hence the name - it’s turned out that the gains really come from a couple of other things common to RISC architectures. In x86 pretty much every instruction can reference memory directly, but in RISC architectures you can only do it from a few specific instructions. Modern RISC architectures actually tend to have a lot of instructions, so RISC means something more like “load/store architecture” nowadays.
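Here’s what that looks like in practice; the assembly in the comments is roughly what gcc or clang emit for each target (exact registers and scheduling will vary):

```c
/* One C statement, lowered two ways. */
int add_from_memory(int acc, const int *p)
{
    return acc + *p;
    /* x86-64: the add itself can reference memory directly:
     *     add  edi, dword ptr [rsi]
     *     mov  eax, edi
     *     ret
     *
     * AArch64 (a load/store architecture): memory is only touched
     * by the load; the arithmetic is register-to-register:
     *     ldr  w8, [x1]
     *     add  w0, w0, w8
     *     ret
     */
}
```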
Another big part of RISC architectures is they try to make instruction fetch+decode as easy as possible. x86 instructions are a nightmare to decode and that adds a lot of complexity and somewhat limits optimization opportunities. There’s some more to it, like how RISC thinks about the job of the compiler, but in my experience load/store and ease of fetch+decode are the main differentiators for RISC.
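A toy model of the decode difference (deliberately simplified, nothing like a real decoder):

```c
#include <stddef.h>
#include <stdint.h>

/* AArch64: every instruction is exactly 4 bytes, so the boundaries of
 * the next N instructions are known up front and several decoders can
 * work on them in parallel. */
static size_t next_boundary_aarch64(size_t offset)
{
    return offset + 4; /* always */
}

/* x86: instructions are 1 to 15 bytes, and the length only falls out of
 * partially decoding the bytes themselves (prefixes, opcode, ModRM, SIB,
 * displacement, immediate...). This toy handles just two opcodes to show
 * the shape of the problem: boundary N depends on having decoded
 * boundaries 0..N-1, which serializes the front end. */
static size_t next_boundary_x86_toy(const uint8_t *code, size_t offset)
{
    switch (code[offset]) {
    case 0x90: return offset + 1; /* NOP: 1 byte */
    case 0xB8: return offset + 5; /* MOV EAX, imm32: opcode + 4 bytes */
    default:   return offset + 1; /* real x86 decode is far hairier */
    }
}
```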
More towards your question, a lot of the issues with running x86 programs on ARM (really, running any program on a different architecture than it was compiled for) come down to the program depending on very specific behaviors that may not be the same across architectures and may be computationally expensive to emulate. For some great write-ups about that kind of thing, check out the Dolphin (Wii emulator) blog posts.
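A small source-level taste of those architecture-specific behaviors (binary emulation hits subtler versions of the same problem, like x86’s stronger memory-ordering guarantees): plain char is signed in x86 ABIs but unsigned in the usual ARM ABIs, so this prints differently on the two.

```c
#include <stdio.h>

int main(void)
{
    char c = 0x80; /* bit pattern 1000 0000 */

    /* On x86 ABIs plain 'char' is signed, so c holds -128 on common
     * compilers; on the usual ARM ABIs it's unsigned, so c holds 128.
     * Same source, different branch taken. */
    if (c < 0)
        puts("x86-style: char is signed here, c == -128");
    else
        puts("ARM-style: char is unsigned here, c == 128");
    return 0;
}
```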