
| From: James Knott <james.knott@rogers.com>
| Many years ago, I used to maintain Data General Eclipse systems. The
| CPU used microcode to control AMD bit slice processors and associated
| logic. The microcode instructions were over 100 bits wide. Now
| *THAT'S* RISC. ;-)
Technically, that was called (horizontal) microcode. Geac did the same thing. Several years later, when I was with ISG, we developed a 128-bit processor that we jokingly called a VRISC processor because it had something like 6 instructions. We were using the processor in a graphics display system.

On 05/21/2016 04:01 PM, D. Hugh Redelmeier wrote:
With WCS (writable control store), a customer could sweat bullets and perhaps get an important performance improvement. It wasn't easy. Perhaps that is similar to the way GPUs can be used very effectively for some computations.
My opinions:
Microcode made sense when circuits were significantly faster than core memory and there was no cache: several microcode instructions could be "covered" by the time it took to fetch a word from core.
The Geac system was originally designed with core memory, where access times were in the range of microseconds; the microcode in the CPU was clocked at about 4 MHz and was built using 4-bit bit-slice ALUs and a lot of random logic.
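As a rough back-of-the-envelope check of that "covering", treating the figures above (roughly 1 microsecond core access, 4 MHz microcode clock) as the assumptions:

    #include <stdio.h>

    /* How many micro-instructions fit inside one core-memory access?
     * Assumed figures from above: ~1 us core access and a 4 MHz
     * microcode clock, i.e. 250 ns per micro-step. */
    int main(void)
    {
        double core_access_ns = 1000.0;                 /* ~1 us core cycle   */
        double ucode_clock_hz = 4.0e6;                  /* ~4 MHz microcode   */
        double ucode_step_ns  = 1.0e9 / ucode_clock_hz; /* 250 ns per step    */

        printf("micro-instructions per core fetch: about %.0f\n",
               core_access_ns / ucode_step_ns);         /* prints: about 4    */
        return 0;
    }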
Microcode can still make sense but only for infrequent things or for powerful microcode where one micro-instruction does just about all the work of one macro-instruction. Even with these considerations, it tends to make the pipeline longer and thus the cost of branches higher.
Microcode also helped with reusing gates, for example by coding a multiply instruction as a loop of adds and shifts. Nowadays most processors have ripple multipliers.
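As a sketch of that kind of loop, in plain C standing in for the microcode (an illustration of the technique, not any particular machine's micro-program): the multiplier is examined one bit at a time, and the shifted multiplicand is added in whenever the bit is set.

    #include <stdint.h>

    /* Shift-and-add multiply: the same technique a microcoded multiply
     * loop used so the machine could get by with only an adder and a
     * shifter instead of a dedicated multiplier. */
    uint32_t shift_add_mul(uint16_t a, uint16_t b)
    {
        uint32_t acc    = 0;
        uint32_t addend = a;      /* multiplicand, shifted left each step */

        while (b != 0) {
            if (b & 1)            /* current multiplier bit set?          */
                acc += addend;
            addend <<= 1;
            b >>= 1;
        }
        return acc;               /* == (uint32_t)a * b for 16-bit inputs */
    }

Each pass of the loop corresponds to one micro-step per multiplier bit, which is why dedicated hardware multipliers are so much faster.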
The big thing about RISC was that it got rid of microcode. At just the right time -- when caches and semiconductor memory were coming onstream. Of course UNIX was required because it was the only popular portable OS.
RISC also benefited from increased transistor density.
The idea of leaving (static) scheduling to the compiler instead of (dynamic) scheduling in the hardware is important but not quite right. Many things are not known until the actual operations are done. For example, is a memory fetch going to hit the cache or not? I think that this is what killed the Itanium project. I think that both kinds of scheduling are needed.
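A small illustration of why static scheduling alone falls short (the function here is just a made-up example): whether each pointer load hits the cache depends on the run-time layout of the data, so only the hardware, at execution time, can react to the actual latency.

    #include <stddef.h>

    struct node { struct node *next; long value; };

    /* Pointer-chasing sum: every iteration begins with a load whose
     * latency depends on whether that node happens to be in cache.
     * A compiler doing static scheduling cannot know this ahead of
     * time; an out-of-order core discovers it dynamically, load by
     * load. */
    long sum_list(const struct node *n)
    {
        long total = 0;
        while (n != NULL) {
            total += n->value;   /* needs the result of the previous load */
            n = n->next;         /* L1 hit: a few cycles; miss: hundreds   */
        }
        return total;
    }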
CISC losses: the Instruction Fetch Unit and the Instruction Decoder are complex and potential bottlenecks (they add to pipeline stages). CISC instruction sets live *way* past their best-before date.
RISC losses: instructions are usually less dense. More memory is consumed. More cache (and perhaps memory) bandwidth is consumed too. Instruction sets are not allowed to change as quickly as the underlying hardware so the instruction set is not as transparent as it should be.
x86 almost vanquished RISC. No RISC workstations remain. On servers, RISC has retreated a lot. SPARC and Power don't seem to be growing. But from out in left field, ARM seems to be eating x86's lunch. ATOM, x86's champion, has been cancelled (at least as a brand).
The x86, although popular, is not the best example of a CISC design; a better one was the National Semiconductor NS32000, which I believe was the first production 32-bit microprocessor. The current 64-bit x86 is just the latest in a long series of patches to the 8086. I believe the last original CPU design from Intel was the iAPX 432. Intel had plans to dead-end the x86 in favour of the Itanium as the step up to 64 bits, but AMD scuttled those plans by designing a 64-bit extension to the instruction set. A number of RISC processors still live on, mostly in embedded applications: MIPS, ARM, Power (IBM). It was a shame to see the end of the Alpha; it was a nice processor and opened the door to the NUMA interprocessor interconnects that have only just arrived in the Intel world.
--
Alvin Starr                   ||   voice: (905)513-7688
Netvel Inc.                   ||   Cell:  (416)806-0133
alvin@netvel.net              ||