IBM - cache skirmish story.

Actually this is more of a can't-see-the-forest-for-the-trees issue, or a when-your-only-tool-is-a-hammer-everything-looks-like-a-nail type thing, rather than one of Hugh's war stories. I'm a corporate-correctness kind of guy, so in my mind wars are sanctioned. This was a pure and simple unsanctioned tech skirmish.

I only ever did one hack on IBM-owned equipment. It was while I was visiting their Celestica site. The problem, which had been happening to my friend, who worked there and did some of her jobs remotely from home in the early 90's, was typematic delay. However, IBM staff told her it was the network lagging, both at home and at the office. My friend was an internal event planner and would login to something, VAX/VMS I think. I never looked or asked; her words, not mine. I just checked the BIOS.

After I booted to DOS I noticed severe cache lag while typing. If you weren't a touch typist, you'd hardly notice the issue. I'm touch trained and so was my friend. I rebooted, went into the BIOS, and increased the typematic rate from 4 cps to 12 cps. I am a fast typist; my friend was lightning fast. She dragged me up to Celestica the next day to do the same thing on her office computer.

For that hack, and for helping out with some tech stuff, i.e. setting up hardware which was being awarded to students at the annual banquet, I got to see Celine Dion perform with Peabo Bryson and eat an outstanding meal coordinated by the event group at the Inn on the Park. Also, since the group got an award for their coordination of the banquet, I got to share in that award at Marche.

I'll always remember my friend saying nobody will ever believe that "my carpenter boyfriend fixed a computer issue that IBM techieboobies could not."

So, I hope that the cache wiring diagrams have improved at IBM over the years, or this snippet of ledger news doesn't bode well at all. https://www.ibm.com/blogs/blockchain/2018/04/blockchain-based-batavia-platfo...

-- Russell
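(Aside: on a modern Linux console, the typematic knobs that the BIOS exposed can also be set with kbdrate(8); a minimal example, values illustrative only:

    # repeat at 30 characters/second after a 250 ms initial delay (run as root)
    kbdrate -r 30 -d 250

The kernel rounds to the nearest rate the keyboard controller actually supports.)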

On 04/24/2018 08:44 AM, Russell via talk wrote:
My friend was an internal event planner and would login to something, VAX/VMS
VAX/VMS was a DEC operating system. I used to work with it on DEC VAX 11/780 systems.
I got to see Celine Dion perform
No good deed goes unpunished. ;-)

On Tue, Apr 24, 2018, 8:53 AM James Knott via talk <talk@gtalug.org> wrote:
On 04/24/2018 08:44 AM, Russell via talk wrote:
My friend was an internal event planner and would login to something, VAX/VMS
VAX/VMS was a DEC operating system. I used to work with it on DEC VAX 11/780 systems.
Whatever she was using for line-by-line editing of reports. She was a fast modal operator, and hers was a local cache problem. She was also a very serious employee; I wouldn't even look over her shoulder to see what she was doing at the time. Her words, not mine.
I got to see Celine Dion perform
No good deed goes unpunished. ;-)
This was before the fame set in. Blue jeans and charmingly crooked teeth and all.

On 04/24/2018 09:54 AM, Russell Reiter wrote:
VAX/VMS was a DEC operating system. I used to work with it on DEC VAX 11/780 systems.
Whatever she was using for line by line editing of reports. She was a fast modal operator and hers was a local cache problem. She was also a very serious employee. I wouldn't even look over her shoulder to see what she was doing at the time. Her words not mine.
IBM has VM (VM/CMS), which I used when I worked at IBM. One difference with IBM's user interface was that it was screen-based, rather than line-based as in other OSes. You still often see it in businesses (I saw it in Lowe's the other day), where the entire screen is processed at once. The only other place I've seen screens used, other than web sites, was on the old telegram system at CNCP. However, that system was based on a Data General Nova 800 and used on dumb terminals (made by VST) that used delay line memory.

| From: James Knott via talk <talk@gtalug.org>
| However, that system was based on a Data
| General Nova 800 and used on dumb terminals (made by VST) that used
| delay line memory.

Video terminal "VDT" development was very much gated by developments of memory technology. CRTs need constant refresh so there needs to be some kind of backing store for the image. Many different solutions were developed.

Tektronix developed a CRT technology that retained an image once it was written. The trouble was that the only kind of erasing was total image erasing. Think of an etch-a-sketch. I used one of these with a PDP-8 in the late 1960s. It had a lot of advantages over the Teletype Model 33 ASR. Think of the output as going through more(1): after a page of output, you had to type a control character to request the next page.

Remember, the PDP-8 was a computer costing $10000 or more and having only 4k words of main memory (12 bits/word) (1967). A frame buffer for a black and white 640x480 screen would require 25k words! The tail would be wagging the dog. In those days RAM was implemented as core memory. Each bit was a little torus of ferrite, with three or so wires running through it. Assembled by hand. It cost roughly a buck a byte.

A little earlier, IBM made the 2250 display. It was a vector display: the screen was painted via vectors. For graphs, this was a very dense representation. It cost more than a house. Both University of Waterloo and University of Toronto had one, highly subsidized by IBM. I think that it was developed for NASA.

The next step was to store characters in a buffer: much more compact than a pixel buffer. Refreshing would be by raster scan, but a character-generating ROM would generate pixels for each character on the fly. The buffer would be about 25x80 = 2000 bytes. Even this was expensive, so different kinds of implementations were used:
- magnetostrictive delay lines (e.g. in the IBM 2260 or the VST)
- shift registers (logically similar to delay lines, but using semiconductors) (e.g. DataPoint terminals)
- finally: RAM

Until RAM was used, terminals often did not allow editing in the middle of a screen; up, down, etc. were not implemented. These were the bad old days.

One early CRT that I worked with was the product of an MASc thesis at UofT. It used a slow-decay orange phosphor. The refresh was the duty of the program in the attached computer (IBM 1710). The output was encoded the same way as plotter output was encoded. As long as the program output the same stuff every quarter (?) second or so, it was visible. (The attached computer was about a hundred thousand times slower than current machines.)

In the mid-1970s, the Dynamic Graphics Project at U of T commissioned a couple of displays: a large monochrome vector display and a more modest colour raster display with one byte per pixel and a 256 entry colour mapping table. Each cost about $20k.

Eventually RAM became cheap enough that a colour frame buffer was affordable for individuals. For example, The Atari ST (1985) supported only 16 colours at a time for a resolution of 320x200 -- somewhat usable. I preferred mine in monochrome at 640x400 but my kids preferred colour.

Now GPU cards come with 4G or more of RAM!
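(For scale: a 640x480 monochrome bitmap is 640 x 480 = 307,200 bits, which at 12 bits per word is the 25,600 words Hugh mentions. The character-buffer scheme he describes can be sketched in a few lines; this is a minimal illustration, with a made-up two-glyph font ROM standing in for a real character generator -- all names here are illustrative, not any actual terminal's:

    # Sketch of character-generator refresh: a 25x80 character buffer
    # (2000 bytes) is expanded into pixels on the fly, one scanline at
    # a time, by indexing a font ROM -- no full frame buffer needed.

    ROWS, COLS = 25, 80
    CHAR_H, CHAR_W = 8, 8   # assumed 8x8 glyphs

    # Hypothetical font ROM: maps a character code to 8 row-bitmaps.
    FONT_ROM = {
        ord(' '): [0x00] * 8,
        ord('A'): [0x18, 0x24, 0x42, 0x7E, 0x42, 0x42, 0x42, 0x00],
    }

    char_buffer = bytearray(b' ' * (ROWS * COLS))   # the whole "screen": 2000 bytes
    char_buffer[0] = ord('A')

    def scanline(y):
        """Generate one raster line of pixels (True = lit) on the fly."""
        row, glyph_row = divmod(y, CHAR_H)
        bits = []
        for col in range(COLS):
            code = char_buffer[row * COLS + col]
            rowbyte = FONT_ROM.get(code, [0] * CHAR_H)[glyph_row]
            bits.extend(bool(rowbyte & (0x80 >> i)) for i in range(CHAR_W))
        return bits

    # Each vertical refresh walks y from 0..199, regenerating 640x200
    # pixels from just 2000 bytes of character buffer.
    print(len(scanline(3)))   # 640 pixels in one line

A 640x200 bitmap would need 16,000 bytes; the character buffer gets away with 2,000.)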

On 04/24/2018 01:33 PM, D. Hugh Redelmeier via talk wrote:
| From: James Knott via talk <talk@gtalug.org>
| However, that system was based on a Data | General Nova 800 and used on dumb terminals (made by VST) that used | delay line memory.
Video terminal "VDT" development was very much gated by developments of memory technology. CRTs need constant refresh so there needs to be some kind of backing store for the image.
Many different solutions were developed.
As I mentioned, the VST terminals used a delay line. This was a coil of wire, with the signal inserted at one end and retrieved at the other. There was an adjustment for the position of the sensor, to adjust the total delay. The only operations that could be done directly on the stored data were inserting or deleting a single character. Anything beyond that required reading and rewriting the display, which involved sending the data to the Nova 800, where the operation was performed, and back to the terminal. In that office there was another system, based on a PDP-8i and Philips terminals. Those terminals used core memory.
Now GPU cards come with 4G or more of RAM!
My first video card, in my IMSAI 8080, came with 512 bytes! I soon increased it to a whopping 1K. ;-)

On Tue, Apr 24, 2018 at 01:33:11PM -0400, D. Hugh Redelmeier via talk wrote:
Eventually RAM became cheap enough that a colour frame buffer was affordable for individuals. For example, The Atari ST (1985) supported only 16 colours at a time for a resolution of 320x200 -- somewhat usable. I preferred mine in monochrome at 640x400 but my kids preferred colour.
The Amiga cheated and could do 4096 colours at that resolution using about the same amount of RAM.
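(The Amiga's trick was Hold-And-Modify: 6 bits per pixel, where two control bits select either a 16-entry palette lookup or "keep the previous pixel's colour but replace one 4-bit R/G/B component", so 4096 colours are reachable while 320x200x6 bits is only 48,000 bytes, versus 32,000 for the ST's 4-plane mode. A minimal decoder sketch, simplified: real HAM stores bitplanes and starts each line from the border colour, whereas this takes per-pixel values and starts from black:

    # Sketch of Amiga HAM6 decoding: 6 bits/pixel = 2 control + 4 data.
    # control 00: take palette[data]; 01: keep previous colour but set
    # blue = data; 10: set red; 11: set green. 4096 colours total.

    def decode_ham6(pixels, palette):
        """pixels: 6-bit values; palette: 16 (r, g, b) 4-bit triples."""
        r = g = b = 0                      # colour carried along the scanline
        out = []
        for p in pixels:
            ctrl, data = p >> 4, p & 0x0F
            if ctrl == 0b00:
                r, g, b = palette[data]    # ordinary palette pixel
            elif ctrl == 0b01:
                b = data                   # hold r, g; modify blue
            elif ctrl == 0b10:
                r = data                   # hold g, b; modify red
            else:
                g = data                   # hold r, b; modify green
            out.append((r, g, b))
        return out

    palette = [(i, i, i) for i in range(16)]          # hypothetical grey ramp
    print(decode_ham6([0x05, 0x1F, 0x2F], palette))
    # [(5, 5, 5), (5, 5, 15), (15, 5, 15)]
)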
Now GPU cards come with 4G or more of RAM!
I remember the video card we had in the first x86 my dad had for a CAD system. It had 2MB VRAM and 1MB DRAM and a TI 34020 processor (the TIGA architecture). It worked using display lists for the most part, which meant that even though it was on an ISA bus it was able to operate amazingly fast: CAD simply sent commands to update the display list and draw specific parts of the list at specific zoom levels. Windows 3.1 was also impressive, since all the GDI commands were simply sent to the card to be done there, and moving a window was a copy command sent to the card followed by the appropriate fill of the leftovers. Of course, eventually VLB came along and people started wanting multimedia support for playing videos, which it was not suitable for. Of course, Direct3D, and OpenGL to some extent, are also display lists, not counting the transfer of texture bitmaps. Some things are just so efficient they are worth using. -- Len Sorensen
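(The display-list idea Len describes is easy to model: the host records small commands, and the card replays them at redraw time. A toy sketch, not the TIGA or GDI API:

    # Toy display-list model: the host appends small drawing commands
    # over a slow bus; the "card" replays them at redraw time, so
    # moving a window is one new command, not a bitmap transfer.

    class DisplayList:
        def __init__(self):
            self.commands = []              # (opcode, args) tuples

        def record(self, opcode, *args):
            self.commands.append((opcode, args))

        def replay(self, renderer):
            for opcode, args in self.commands:
                getattr(renderer, opcode)(*args)   # dispatch to the renderer

    class LoggingRenderer:
        """Stand-in for the graphics processor: prints what it would draw."""
        def line(self, x0, y0, x1, y1):
            print(f"line ({x0},{y0})-({x1},{y1})")
        def rect(self, x, y, w, h):
            print(f"rect {w}x{h} at ({x},{y})")

    dl = DisplayList()
    dl.record("rect", 10, 10, 200, 120)     # a "window"
    dl.record("line", 10, 38, 210, 38)      # its title-bar divider
    dl.replay(LoggingRenderer())            # the card redraws from the list
)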

I forgot to mention the amazing plasma technology invented at University of Illinois for the Plato project. <https://ece.illinois.edu/newsroom/article/9931>

First, let's think of a neon tube. It gets turned on when a high voltage is applied. With no voltage, no light. But once it has started, an intermediate voltage can maintain the light. If it hasn't started, that voltage will leave the tube off. So, powered by that voltage, the tube is a memory. You can tell its state by the amount of current it draws. Same with pixels on a plasma display. So each pixel is its own memory and no refresh is needed.

I first saw a Plato terminal when CDC (a mainframe computer company which had commercial rights to Plato) loaned a few Plato terminals to University of Waterloo for a summer (1972?).

(Some old devices did use neon tubes as memories, and not displays. One bit per tube. One example: the IBM 407 accounting machine.)

These things were way in advance of other technology. I don't know why they didn't take over the world. Perhaps they were expensive.

================

Another technology that I did not mention: Williams tube memories. These are CRTs, but they were used as memories, not displays. So they are a bit off topic.
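(The strike/sustain behaviour described above is a hysteresis loop, and the one-bit logic is small enough to model. A minimal sketch with made-up voltage thresholds; real tubes vary:

    # A neon tube as a one-bit memory. Illustrative thresholds only.
    STRIKE_V = 90    # voltage needed to start the glow
    SUSTAIN_V = 60   # voltage that keeps an existing glow alive

    class NeonBit:
        def __init__(self):
            self.lit = False

        def apply(self, volts):
            if volts >= STRIKE_V:
                self.lit = True      # write a 1: strike the tube
            elif volts < SUSTAIN_V:
                self.lit = False     # write a 0: drop below sustain
            # between SUSTAIN_V and STRIKE_V the tube simply holds its
            # state -- that intermediate voltage *is* the memory

        def read(self):
            return self.lit          # a lit tube draws current; a dark one doesn't

    bit = NeonBit()
    bit.apply(70); print(bit.read())   # False: 70 V can't strike a dark tube
    bit.apply(95); print(bit.read())   # True: struck
    bit.apply(70); print(bit.read())   # True: sustained -- it remembers
)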

On 04/24/2018 03:10 PM, D. Hugh Redelmeier via talk wrote:
First, let's think of a neon tube. It gets turned on when a high voltage is applied. With no voltage, no light. But once it has started, an intermediate voltage can maintain the light. If it hasn't started, that voltage will leave the tube off. So, powered by that voltage, the tube is a memory. You can tell its state by the amount of current it draws.
Neon lamps have long been known to have a memory effect. Many years ago, I worked on an ancient system, made by Teleregister, at the old Toronto Stock Exchange on Bay St. It used neon bulbs for some storage. It also used a memory drum and flip-flops built around 4 vacuum tubes. One of my first tasks in the morning was to slowly crank up the filament voltage for all the vacuum tubes in the system. I also had to start a motor/generator set to provide +/- 130V DC to run it. Go to the link and search on Teleregister for a description. http://www.torontoghosts.org/index.php/the-city-of-toronto/public-buildings/122-the-former-toronto-stock-exchange-current-design-exchange-?showall=1&limitstart= That system was installed over a year before I was born!

On 04/24/2018 03:10 PM, D. Hugh Redelmeier via talk wrote:
I forgot to mention the amazing plasma technology invented at University of Illinois for the Plato project.
[snip]

Another thing CRT displays were useful for was "the magic of science" demos. Take a flyback connector and disconnect it from the tube. Turn it upside down and stick a pencil in it, pointing straight up. Create a little hat from a cigarette pack's tinfoil and stick some straight pins on the outside edge, pointing in opposite directions. Balance the hat on the pencil tip. Turn on the power. The high voltage will cause electrons to stream from the ends of the pins, making the little hat rotate. You have an ion drive ;) Try that with your skinny little LCD display.

-- Alvin Starr || land: (905)513-7688 Netvel Inc. || Cell: (416)806-0133 alvin@netvel.net ||

On April 24, 2018 1:33:11 PM EDT, "D. Hugh Redelmeier via talk" <talk@gtalug.org> wrote:
| From: James Knott via talk <talk@gtalug.org>
| However, that system was based on a Data | General Nova 800 and used on dumb terminals (made by VST) that used | delay line memory.
If someone saw you loading a mercury delay line memory module onto a truck these days, the Five Eyes guys would be on you faster than Trump on a tweet about a national security threat. https://upload.wikimedia.org/wikipedia/commons/f/fd/Mercury_memory.jpg
Video terminal "VDT" development was very much gated by developments of memory technology. CRTs need constant refresh so there needs to be some kind of backing store for the image.
Many different solutions were developed.
Tektronix developed a CRT technology that retained an image once it was written. The trouble was that the only kind of erasing was total image erasing. Think of an etch-a-sketch. I used one of these with a PDP-8 in the late 1960s. It had a lot of advantages over the Teletype Model 33 ASR. Think of the output as going through more(1): after a page of output, you had to type a control character to request the next page.
Remember, the PDP-8 was a computer costing $10000 or more and having only 4k words of main memory (12 bits/word) (1967). A frame buffer for a black and white 640x480 screen would require 25k words! The tail would be wagging the dog. In those days RAM was implemented as core memory. Each bit was a little torus of ferrite, with three or so wires running through it. Assembled by hand. It cost roughly a buck a byte.
I bump into a guy in my neighbourhood once in a while. He told me about weaving some of this stuff together when he was at UofT. I only ever saw the core display at the OSC.
A little earlier, IBM made the 2250 display. It was a vector display: the screen was painted via vectors. For graphs, this was a very dense representation. It cost more than a house. Both University of Waterloo and University of Toronto had one, highly subsidized by IBM. I think that it was developed for NASA.
The next step was to store characters in a buffer: much more compact than a pixel buffer. Refreshing would be by raster scan, but a character-generating ROM would generate pixels for each character on the fly. The buffer would be about 25x80 = 2000 bytes. Even this was expensive, so different kinds of implementations were used:
- magnetostrictive delay lines (e.g. in the IBM 2260 or the VST)
- shift registers (logically similar to delay lines, but using semiconductors) (e.g. DataPoint terminals)
- finally: RAM
Until RAM was used, terminals often did not allow editing in the middle of a screen; up, down, etc. were not implemented. These were the bad old days.
One early CRT that I worked with was the product of an MASc thesis at UofT. It used a slow-decay orange phosphor. The refresh was the duty of the program in the attached computer (IBM 1710). The output was encoded the same way as plotter output was encoded. As long as the program output the same stuff every quarter (?) second or so, it was visible. (The attached computer was about a hundred thousand times slower than current machines.)
In the mid-1970s, the Dynamic Graphics Project at U of T commissioned a couple of displays: a large monochrome vector and a more modest colour raster display with one byte per pixel and a 256 entry colour mapping table. Each cost about $20k.
Eventually RAM became cheap enough that a colour frame buffer was affordable for individuals. For example, The Atari ST (1985) supported only 16 colours at a time for a resolution of 320x200 -- somewhat usable. I preferred mine in monochrome at 640x400 but my kids preferred colour.
Now GPU cards come with 4G or more of RAM!
I chose my recent motherboards for the acoustic isolation gained by placing the audio channels on different PCB layers, and also for the fact that the planar is bolstered to accept massive video cards. Ledger domain likes CUDA cores. The US miners are staking out our power at the border. Thought I'd look into it. https://www.ctvnews.ca/mobile/business/power-sucking-bitcoin-mines-spark-bac...
-- Russell

On 04/24/2018 04:00 PM, Russell via talk wrote:
| However, that system was based on a Data
| General Nova 800 and used on dumb terminals (made by VST) that used
| delay line memory.
If someone saw you loading a mercury delay line memory module onto a truck these days, the Five Eyes guys would be on you faster than Trump on a tweet about a national security threat.
The delay line was a coil of wire, with the data carried as acoustic wave on it. I've never worked with a mercury delay line.


| From: James Knott via talk <talk@gtalug.org>
| The delay line was a coil of wire, with the data carried as acoustic
| wave on it.

<http://www.computerhistory.org/revolution/memory-storage/8/309>
<https://en.wikipedia.org/wiki/Delay_line_memory>

I suspect that all wire delay lines were magnetostrictive, not acoustic. I don't know that.

On 04/24/2018 06:06 PM, D. Hugh Redelmeier via talk wrote:
| From: James Knott via talk <talk@gtalug.org>
| The delay line was a coil of wire, with the data carried as acoustic | wave on it.
<http://www.computerhistory.org/revolution/memory-storage/8/309> <https://en.wikipedia.org/wiki/Delay_line_memory>
I suspect that all wire delay lines were magnetostrictive, not acoustic. I don't know that.
At one point there were spring-based audio delay devices used for adding reverb, but they quickly got replaced with memory-based solutions when DRAM started showing up. The huge growth in RAM killed lots of very interesting technologies. Who remembers core memory, with the 20x30 inch boards of very thin wires? Or bubble memory? -- Alvin Starr || land: (905)513-7688 Netvel Inc. || Cell: (416)806-0133 alvin@netvel.net ||

On 04/24/2018 07:42 PM, Alvin Starr via talk wrote:
I suspect that all wire delay lines were magnetostrictive, not acoustic. I don't know that.
At one point there were spring-based audio delay devices used for adding reverb, but they quickly got replaced with memory-based solutions when DRAM started showing up.
They weren't spring-based, just a loose coil of wire.

On 04/24/2018 09:04 PM, James Knott via talk wrote:
On 04/24/2018 07:42 PM, Alvin Starr via talk wrote:
I suspect that all wire delay lines were magnetostrictive, not acoustic. I don't know that.
At one point there were spring-based audio delay devices used for adding reverb, but they quickly got replaced with memory-based solutions when DRAM started showing up. They weren't spring-based, just a loose coil of wire.
Take a look at https://en.wikipedia.org/wiki/Reverberation under Spring reverberators. These are the ones I knew of but there may have been some other devices based on other operating principles. -- Alvin Starr || land: (905)513-7688 Netvel Inc. || Cell: (416)806-0133 alvin@netvel.net ||

On 04/24/2018 10:33 PM, Alvin Starr via talk wrote:
At one point there were spring-based audio delay devices used for adding reverb, but they quickly got replaced with memory-based solutions when DRAM started showing up. They weren't spring-based, just a loose coil of wire.
Take a look at https://en.wikipedia.org/wiki/Reverberation under Spring reverberators. These are the ones I knew of but there may have been some other devices based on other operating principles.
I remember those reverb springs, from back when I was a kid. The delay lines I'm referring to look like the one pictured in this link. https://en.wikipedia.org/wiki/Delay_line_memory#Magnetostrictive_delay_lines

On 04/24/2018 10:37 PM, James Knott via talk wrote:
At one point there were spring-based audio delay devices used for adding reverb, but they quickly got replaced with memory-based solutions when DRAM started showing up. They weren't spring-based, just a loose coil of wire.
Take a look at https://en.wikipedia.org/wiki/Reverberation under Spring reverberators. These are the ones I knew of but there may have been some other devices based on other operating principles.
I remember those reverb springs, from back when I was a kid. The delay lines I'm referring to look like the one pictured in this link. https://en.wikipedia.org/wiki/Delay_line_memory#Magnetostrictive_delay_lines
Forgot to mention: that brass rod at the front is threaded and used to position the pickup transducer, which is that blue block. You'd range the delay line by jamming a key to provide constant characters, then adjust the screw one way and then the other, to the point where the text on the display started breaking up, and then set it halfway between the two points.

On 04/24/2018 10:37 PM, James Knott via talk wrote:
At one point there were spring-based audio delay devices used for adding reverb, but they quickly got replaced with memory-based solutions when DRAM started showing up. They weren't spring-based, just a loose coil of wire.
Take a look at https://en.wikipedia.org/wiki/Reverberation under Spring reverberators. These are the ones I knew of but there may have been some other devices based on other operating principles.
I remember those reverb springs, from back when I was a kid. The delay lines I'm referring to look like the one pictured in this link. https://en.wikipedia.org/wiki/Delay_line_memory#Magnetostrictive_delay_lines
Ah, now that is interesting. Sending bits down the line as sound waves is an interesting concept. You would need to feed them from the output back into the input, along with some timing signals, to use it for memory. I got into computers when core memory was on its way out, so I never got to see any systems built using this kind of technology. Cheap mass-produced bits in silicon have made a whole bunch of really interesting technologies go away. -- Alvin Starr || land: (905)513-7688 Netvel Inc. || Cell: (416)806-0133 alvin@netvel.net ||
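(That feed-the-output-back-to-the-input idea can be sketched directly: model the line as a fixed-length queue of bits in flight, where a write replaces a bit as it passes the input, and a read has to wait for the addressed bit to come around. A toy model, not any real machine's timing:

    # Sketch of a recirculating delay-line memory: the data persists
    # only because it keeps moving down the line and back in.
    from collections import deque

    class DelayLineMemory:
        def __init__(self, nbits):
            self.line = deque([0] * nbits)  # bits currently in flight
            self.pos = 0                    # address of the bit now at the output

        def tick(self, write=None):
            """One bit-time: the output bit re-enters the input (or is replaced)."""
            bit = self.line.popleft()       # arrives at the pickup transducer
            self.line.append(bit if write is None else write)
            self.pos = (self.pos + 1) % len(self.line)
            return bit

        def write(self, addr, value):
            while self.pos != addr:         # wait for the slot to come around
                self.tick()
            self.tick(write=value)

        def read(self, addr):
            while self.pos != addr:
                self.tick()
            return self.tick()              # read it as it passes; it recirculates

    mem = DelayLineMemory(1000)
    mem.write(42, 1)
    print(mem.read(42))   # 1 -- after waiting for bit 42 to come around again
)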

On 04/24/2018 11:00 PM, Alvin Starr via talk wrote:
I got into computers when core memory was on its way out so I never got to see any systems built using this kind of technology.
My first computer, which I bought in 1975, had static RAM chips, but a couple of years later, when I started working on computers at work, they were all core memory. I still have a core memory plane from one of those systems. Here's a photo of that core plane: https://drive.google.com/file/d/1F0K1vDzT0HjrBDKySpQ8m91TPw2UDCsS/view?usp=s...
participants (7):
- Alvin Starr
- D. Hugh Redelmeier
- James Knott
- lsorense@csclub.uwaterloo.ca
- Russell
- Russell Reiter
- Scott Allen