The final RISCs vs. CISCs – 2: Definitions

After talking about abstraction levels, we come straight to the heart of the matter, which necessarily passes through the definition of what RISCs are and, consequently (because the two concepts are mutually exclusive, as already mentioned), what CISCs are.

RISC: Reduced Instruction Set Computer

The idea of a RISC architecture had already been conceived in the mid-1970s by engineers at IBM, who then realised it in what is considered the first RISC: the 801 processor.

It was, however, in 1980 that the acronym was coined, in an article by Patterson and Ditzel that has gone down in history: The Case for the Reduced Instruction Set Computer, which sets out the guiding principles of this new (macro)family of processors. Here is an explanatory excerpt:

By a judicious choice of the proper instruction set and the design of a corresponding architecture, we feel that it should be possible to have a very simple instruction set that can be very fast. This may lead to a substantial net gain in overall program execution speed. This is the concept of the Reduced Instruction Set Computer.

The key phrases of this excerpt represent the cornerstones, but still leave some doubts: the instruction set should be ‘very simple’, fine, but does that refer (also) to the (small?) number of instructions available? The first phrase (‘a very simple instruction set’) would seem to lean in that direction.

It is also unclear whether this relates to the structure/complexity of the individual instructions: should they be inherently ‘simple’ (that is, not instructions that internally do ‘a lot of work’)? Linking this to performance (the second phrase: instructions ‘can be very fast’) supports this reading, since ‘complicated’ instructions tend to be slow to execute.

However, reading the entire article ultimately leads to exactly these conclusions: the processor should have a small number of instructions, which must also be simple.

The concept was reiterated and better defined a couple of years later, in an article that can be considered the RISC manifesto, signed by Patterson (again) and Séquin: Design and implementation of RISC I (1982), from which I extract the relevant passages leading to an intelligible definition of a RISC architecture:

The Reduced Instruction Set Computer (RISC) is an architecture particularly well suited for implementation as a single-chip VLSI computer. It demonstrates that by a judicious choice of a small set of instructions and the design of a corresponding microarchitecture, one can obtain a machine with high throughput. […]
Because of the bandwidth bottleneck at the chip periphery, the emphasis in a VLSI chip must be on self-contained action. Most of the RISC I instructions are thus “register-to-register” and take place entirely inside the chip. Data memory access is restricted to the LOAD and STORE instructions. The instructions are kept simple so that they can be executed in a single, short machine cycle; and they are each one word (32-bits) long to avoid the hardware complexity associated with variable-length instructions. Less frequent operations are implemented with instruction sequences or subroutines.

This can be summarised in what I call the four pillars of RISC architectures (illustrated by the sketch after the list):

  1. There must be a reduced set of instructions.
  2. Only so-called load/store instructions can access memory (read/write data).
  3. Instructions must have a fixed length -> they cannot be variable-length.
  4. Instructions must be simple -> they execute in a single clock cycle.
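
To make the pillars concrete, here is a minimal toy sketch in C of an invented machine of mine (not any real ISA): the opcode set is deliberately tiny (pillar 1), only LOAD and STORE touch data memory (pillar 2), every instruction is one fixed 32-bit word (pillar 3), and each loop iteration models one instruction completing in one clock cycle (pillar 4).

    #include <stdint.h>
    #include <stdio.h>

    /* Toy load/store machine, invented purely for illustration.
     * Pillar 1: a deliberately tiny opcode set.
     * Pillar 2: only LOAD/STORE access data memory.
     * Pillar 3: every instruction is one fixed 32-bit word.
     * Pillar 4: one loop iteration = one instruction = one cycle. */
    enum { OP_ADD, OP_SUB, OP_LOAD, OP_STORE, OP_HALT };

    /* Fixed encoding: opcode in bits 31..24, rd 23..16, rs1 15..8, rs2 7..0. */
    static uint32_t enc(uint32_t op, uint32_t rd, uint32_t rs1, uint32_t rs2) {
        return op << 24 | rd << 16 | rs1 << 8 | rs2;
    }

    int main(void) {
        uint32_t reg[16] = {0}, mem[64] = {0};
        mem[0] = 5; mem[1] = 7;                /* input data               */
        reg[1] = 0; reg[2] = 1; reg[3] = 2;    /* addresses (word indices) */

        const uint32_t prog[] = {
            enc(OP_LOAD,  4, 1, 0),  /* r4 = mem[r1]  (memory access)        */
            enc(OP_LOAD,  5, 2, 0),  /* r5 = mem[r2]  (memory access)        */
            enc(OP_ADD,   6, 4, 5),  /* r6 = r4 + r5  (register-to-register) */
            enc(OP_STORE, 6, 3, 0),  /* mem[r3] = r6  (memory access)        */
            enc(OP_HALT,  0, 0, 0),
        };

        unsigned cycles = 0;
        for (uint32_t pc = 0; ; pc++, cycles++) {  /* one instruction per cycle */
            uint32_t i = prog[pc];
            uint32_t op = i >> 24, rd = (i >> 16) & 0xFF,
                     rs1 = (i >> 8) & 0xFF, rs2 = i & 0xFF;
            if (op == OP_HALT) break;
            switch (op) {
            case OP_ADD:   reg[rd] = reg[rs1] + reg[rs2]; break;
            case OP_SUB:   reg[rd] = reg[rs1] - reg[rs2]; break;
            case OP_LOAD:  reg[rd] = mem[reg[rs1]];       break;  /* only these two */
            case OP_STORE: mem[reg[rs1]] = reg[rd];       break;  /* touch memory   */
            }
        }
        printf("mem[2] = %u after %u cycles\n", (unsigned)mem[2], cycles);
        return 0;  /* prints: mem[2] = 12 after 4 cycles */
    }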

Two things immediately jump out from this definition. The first is that a RISC architecture must fulfil all four of these requirements: failing even one of them is enough not to have a RISC and, consequently, to be dealing with a CISC.

If anyone still has any doubts, I strongly recommend reading both publications carefully to see how RISCs, as they were defined (and also realised, at the time), must indeed faithfully comply with the four requirements above.

It is equally true that a clear distinction is drawn between RISCs and ‘everything else’, represented by CISCs, which are always cited in clear opposition to RISCs (and this holds in general, not only in these two publications). There is no escaping it!

Immediate consequences

As I mentioned in the previous article, the immediate consequence of all this is that many architectures, even famous ones, which have long been filed (and still are!) under the RISC banner, are in reality… (not ‘would be’: they are!) CISCs.

Take, for example, those developed by ARM: at least the fourth requirement is certainly not fulfilled, because it is well known that several instructions of its ISAs are particularly complicated (especially those of ARM32, Thumb, and Thumb-2, which also support load/store instructions operating on multiple registers).
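
To see why an instruction such as ARM’s LDM (load multiple) clashes with single-cycle execution, here is a hedged C model of its semantics: the 16-bit register list in the encoding triggers one memory transfer per set bit, so a single instruction can perform up to sixteen loads. The one-cycle-per-transfer accounting is my own simplification for illustration, not ARM’s documented timing.

    #include <stdint.h>

    /* Simplified model of ARM32 LDMIA (load multiple, increment after):
     * one 32-bit read per register named in the 16-bit list, lowest
     * register from the lowest address. Base write-back, alignment and
     * bus behaviour are deliberately ignored; the cycle count is an
     * illustrative assumption, not ARM's documented timing. */
    unsigned ldmia(uint32_t reg[16], unsigned base, uint16_t reglist,
                   const uint32_t *mem)
    {
        uint32_t addr = reg[base] / 4;     /* word index into mem[]        */
        unsigned cycles = 0;
        for (int r = 0; r < 16; r++)
            if (reglist & (1u << r)) {
                reg[r] = mem[addr++];      /* one transfer per register... */
                cycles++;                  /* ...hence at least one cycle  */
            }
        return cycles;  /* e.g. reglist 0x00F0 loads r4..r7: 4 cycles */
    }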

The third requirement certainly does not hold for any of the Thumb architectures, because they have a variable-length instruction set (16- or 32-bit instructions). The first requirement, finally, is trivially not met by any of these architectures (even the very first model, the ARM1, had some twenty instructions, most of them ‘doubled’ by the ability to specify an immediate value, and further enriched by the use of a far from ‘simple’ barrel shifter).
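
The variable length that violates the third pillar can be read straight off the Thumb-2 encoding: the width of an instruction is determined by the top five bits of its first halfword. A small sketch, based on my reading of the ARM architecture manual (treat the constants as assumptions to verify):

    #include <stdint.h>

    /* Thumb-2 instruction width from its first halfword: if the top
     * five bits are 0b11101, 0b11110 or 0b11111, a second halfword
     * follows (32-bit instruction); otherwise it is 16 bits long.
     * Constants per my reading of the ARM architecture manual. */
    static unsigned thumb_insn_bytes(uint16_t hw1)
    {
        unsigned top5 = hw1 >> 11;
        return (top5 == 0x1D || top5 == 0x1E || top5 == 0x1F) ? 4 : 2;
    }
    /* thumb_insn_bytes(0x2001) == 2  (MOVS r0, #1: a 16-bit encoding)
     * thumb_insn_bytes(0xF04F) == 4  (first half of a 32-bit MOV.W)  */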

Ultimately, all this means that talking about ARM vs. x86 in terms of RISCs vs. CISCs becomes completely useless, as well as totally nonsensical: they are both CISCs!

The inseparable link between RISCs and microarchitectures

At this point, it would be interesting to see, definition in hand, how many and which architectures could really be classified as RISCs. Of the best-known ones, probably only the ‘basic’ version of RISC-V could be, and only if there were at least one microarchitecture capable of executing all of its instructions in a single clock cycle (which I very much doubt is possible, and I am not referring only to the effects of branch misprediction).
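
The doubt can be made concrete with a back-of-the-envelope model: even for the plainest load or branch, the cycle count depends on the cache hierarchy and the branch predictor, which belong to the microarchitecture rather than to the ISA. All the numbers below are illustrative assumptions of mine, not measurements of any real core.

    /* Why "every instruction in one clock cycle" is a bet on the
     * microarchitecture: the latency of even a simple load or branch
     * depends on where the data is and on prediction. The numbers are
     * illustrative assumptions, not measurements of any real core. */
    enum mem_level { L1_HIT, L2_HIT, DRAM };

    static unsigned load_cycles(enum mem_level where)
    {
        switch (where) {
        case L1_HIT: return 4;     /* plausible L1 latency            */
        case L2_HIT: return 12;    /* plausible L2 latency            */
        default:     return 200;   /* DRAM: on the order of hundreds  */
        }
    }

    static unsigned branch_cycles(int predicted_correctly)
    {
        return predicted_correctly ? 1 : 15;  /* plausible flush penalty */
    }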

This inevitably leads to the second element derived from the RISC definition: both architectural (ISA) and microarchitectural requirements are part of it. This implies, in turn, that the second and third levels of abstraction (which we discussed in the previous article) are inextricably involved.

In short, it is the very definition of RISC that has implications at the microarchitecture level. Therefore, and irrespective of the other three requirements/pillars, a processor that calls itself RISC absolutely cannot escape its instructions being ‘simple’, because the objective is that their execution completes in a single clock cycle.

At this point, someone (well, more than someone, given how long we have been reading such claims) might even argue that this definition no longer applies, because the world has changed since then and “there are no longer RISCs or CISCs, as they have both become mixed up”.

I am fiercely opposed to this stance, because it seeks to wipe out definitions that were, and remain, objective and, above all, still perfectly valid in themselves (I see no reason why they would no longer be). Leaving aside the fact that no such ‘exchange’/‘reshuffling’ between RISCs and CISCs ever actually took place (but I will say more about this in another article in the series).

I will stop here for the time being, because the purpose was to provide precise definitions to be used to correctly classify the members of the two macro-families. The next piece will focus on the one-sided propaganda that has been (and still is!) in place over the past 40+ years, aimed at glorifying RISCs and, on the flip side, mocking and pillorying CISCs.
