
On 05/22/2016 12:11 AM, D. Hugh Redelmeier wrote:
| From: Alvin Starr <alvin@netvel.net>
| Geac did the same thing.
| Several years later when I was with ISG we developed a 128-bit processor
| that we jokingly called a VRISC processor because it had something like
| 6 instructions.
| We were using the processor in a graphics display system.
My impression is that GEAC was in a really special place in the market. It was highly vertical -- hardware through applications deployment. It may have helped vendor lock-in to have their own hardware.
I don't actually know what their hardware advantage was, if any. Perhaps they understood transaction processing better than designers of minis or micros.
The Geac history that I was told goes like this. Geac used HP minicomputers. Mike Sweet designed a disk controller that was actually faster and smarter than the HP mini, so they took that hardware and just built their own computers. The 8000 was a 4-processor system, but the processors each served specific functions (Disk, Tape, Comms, CPU). The thing was, to support small banks you needed a system that was close to IBM mainframe performance. So you could be an IBM VAR or ????? This was in the 1980 time frame, and at that time Geac had a system that could run a credit union and was about the size of an IBM communications concentrator. Geac had a lot of good technology but got caught up in a second-system design and kind of got Osborned.
| The Geac system was originally designed with core memory where the
| access times were in the range of micro-seconds and the clock speed of
| the microcode in the CPU was about 4MHz, built using 4-bit bit-slice ALUs
| and a lot of random logic.
If you were interested in hardware in those days, that was an attractive approach. But if you were really interested in supporting credit unions and libraries, I don't see that this was a good use of your energy.
(I did know Gus German before GEAC. Interesting guy. One of the original four undergrads who wrote the WATFOR compiler (for the IBM 7040/44).)
| Microcode also helped with reusing gates.
| For example, coding a multiply instruction as a loop of adds and shifts.
| Nowadays most processors have ripple multipliers.
Some of the original RISC machines had a "multiply step" instruction. You just wrote a sequence of them. The idea was that each instruction took exactly one cycle, and a full multiply didn't fit in one cycle, so it was broken into single-cycle steps.
| RISC also benefited from increased transistor density.
Every processor benefited from that. I guess RISC processors had more regularity and that made the design cycle take less engineering which in turn could improve time-to-market. But Intel had so many engineers that that advantage disappeared.
But there was a point where a whole RISC CPU would fit on a single die but a comparable CISC CPU would not. This made a big difference. But we passed that roughly when the i386 came out. Of course Intel's process advantage helped a bit.
| The x86 although popular is not the best example of a CISC design.
| The National Semiconductor NS32000 which I believe was the first
| production 32-bit microprocessor.
It sure wasn't the first if you count actually really working silicon. I have some scars to prove it.
While still at Geac I was part of the group evaluating future processors. Intel was trying to sell us the 286 and hinting at the upcoming 432. Motorola had the 68000 and a segmented memory manager co-processor. NS had the 16000 (at the time) and plans for a paged virtual memory manager and a floating point co-processor. The group liked the NS design because it was clean and consistent: all the instructions had the same addressing modes. I still have an NS32000 Multibus system in my basement. It's running a variant of V7 (kind of a BSD 3.x) that we got from Bill Jolitz.
| The current x86 64-bit is just the last of a long set of patches from
| the 8086.
Yes, but the amazing thing is that the i386 and AMD patches were actually quite elegant. If I remember correctly, Gordon Bell said roughly that you could update an architecture once, but after that things get to be a mess. I'm impressed that the AMD architecture is OK. I'm not counting all the little hacks (that I don't even know) like MMX, AVX, ...
| I believe the last original CPU design from intel was the iAPX 432.
i860? i960 (with Siemens)? Itanium (with HP)?
I wonder how much Intel and the other partners brought to the table in the design of the i960 and Itanium. My impression was that HP developed the base design as an outgrowth of their PA-RISC work. I have never been a great fan of Intel. To me it just seemed that they did little original work and leveraged their size and the work of others. But that is just my personal impression.
| Intel had plans to dead-end the x86 in favour of the Itanium as the step
| up to 64-bit, but AMD scuttled those plans by designing a 64-bit
| instruction set extension.
Yeah. It kind of pulled an Intel. The AMD architecture filled a growing gap in x86's capability and was good enough.
One of Intel's motivations for Itanium seemed to be to own the architecture. It really was unhappy with having given a processor license to AMD.
| A number of RISC processors still live on, mostly in embedded applications.
| MIPS.
I didn't mention MIPS. Mostly because they seem to be shrinking. Most MIPS processors that I see are in routers and newer models seem to be going to ARM.
| It was a shame to see the end of the Alpha. It was a nice processor and
| opened the door to NUMA interprocessor interconnects that just came into
| the Intel world.
If I remember correctly, some Alpha folks went to AMD, some went to Sun, and some went to Intel.
The Alpha group were working closely with AMD toward the end, and the HyperTransport technology got spun off into a separate group. I liked the Alpha too; we had a number of them early on running Digital Unix and Linux. An interesting read is http://www.hypertransport.org/docs/news/Digitimes-Processor-War-01-27-06.pdf
Intel doesn't have all the good ideas even if it had all the processor sales.
AMD's first generation of 64-bit processors was clearly superior to Intel's processors, up until the Core 2 came out. AMD still lost to Intel in the market. It's sad to see AMD's stale products now.
I think that ARM's good idea was to stay out of Intel's field of view until they grew strong. Intel had an ARM license (transferred from DEC) from the StrongARM work. They decided to stop using it after producing some chips focused on networking etc. ("XScale"). They sold it to Marvell.
There were and are a lot of architectures in the embedded space but ARM seems to be the one that scaled. It could be the wisdom of ARM Inc. but I don't know that.
On the other hand, there are now a number of interesting processor designs out there. Some are based on FPGAs, like the OpenCores projects, and others, like the Parallella and Tilera, are more custom. So hopefully new designs will not have to come from the big guys.
---
Talk Mailing List
talk@gtalug.org
https://gtalug.org/mailman/listinfo/talk
--
Alvin Starr                   ||   voice: (905)513-7688
Netvel Inc.                   ||   Cell:  (416)806-0133
alvin@netvel.net              ||