War Story: Asus UX305ca SSD failures
We bought two Asus UX305CA notebooks about three years ago. The Microsoft Store had a remarkably good deal on them. I'm not the only GTALUGger to buy this notebook.

Two years ago one of the m.2 SATA SSDs suddenly stopped working. If I remember correctly, it didn't even show up as a disk. Last week the same thing happened on the second notebook.

The only warning was that a few days earlier the firmware forgot what to boot. I easily fixed that by telling it again. This could easily have been a CMOS battery problem, but I guess it wasn't.

The computer acted as if the drive were not there. I installed it in a different machine and it was not detected there either. The firmware ("BIOS" is not the correct term) on both machines failed to see it. A live Fedora system (booted off a USB stick) failed to see it. It's dead, Jim.

Lesson: SSDs don't give you warning about failures. Much worse (in my modest experience) than HDDs. Backup now. I'm skeptical about S.M.A.R.T. for SSDs.

Inference: there may have been something wrong with the model of SSD used by Asus in the UX305CA. Two out of two failed us: Micron M600 256 GB. I've had other SSDs fail, but only in the early days of SSDs.

It's fairly easy to replace such a drive. You need to remove about a dozen screws and pry open the case. For the UX305CA you need a Torx T5 driver. For prying: use an old credit card, guitar pick, or spudger (don't use a screwdriver since it may scratch or scar the case).

Lesson: search YouTube for videos on taking apart your notebook. They are not perfect, but they give you an idea of what you are in for.

I'm actually doing musical chairs with SSDs. I have a spare NVMe SSD, but the Asus can only take an m.2 SATA drive. I have a Dell notebook with an m.2 SATA drive, but it can also support an NVMe drive.

- broken m.2 SATA drive from Asus => ???
- m.2 SATA SSD from Dell notebook => Asus
- new unused NVMe SSD => Dell notebook

Background on SSD interfaces: first there were mSATA SSDs. Those were supported on some older machines (e.g. ThinkPad T20 and T30 notebooks). Then came NGFF ("Next Generation Form Factor") connectors and interfaces, about the time of Intel's Haswell processors. They were quickly renamed to m.2 (because it rolls off the tongue, doesn't it?). Originally m.2 drives used the same SATA interface. More recently, the NVMe interface was added. It's essentially PCIe on a different connector, and that interface is much faster than SATA. It's so good that most new desktop motherboards support it -- it's not just for notebooks.

I'm not sure when one would notice the speed difference between SATA and NVMe. SATA SSDs are already almost always a lot faster than HDDs. The first generation of NVMe SSDs had internal bottlenecks that may have limited the improvement over SATA.
On Thu, 1 Aug 2019 at 11:15, D. Hugh Redelmeier via talk <talk@gtalug.org> wrote:
Two years ago one of the m.2 SATA SSDs suddenly stopped working. If I remember correctly, it didn't even show up as a disk.
Last week the same thing happened on the second notebook.
Yikes! I guess I better see if I have anything on mine I need to back up! So far the only issue I've had with mine is the power button, after I spilled a drink on it. I had to replace the keyboard, and now I have a rubber cover over the keyboard (very cheap on AliExpress).
On Thu, 1 Aug 2019 11:15:43 -0400 (EDT) "D. Hugh Redelmeier via talk" <talk@gtalug.org> wrote:
Two years ago one of the m.2 SATA SSDs suddenly stopped working. If I remember correctly, it didn't even show up as a disk.
Last week the same thing happened on the second notebook.
The only warning was that a few days earlier the firmware forgot what to boot. I easily fixed that by telling it again. This could easily have been a CMOS battery problem but I guess it wasn't.
The computer acted as if the drive were not there. I installed it in a different machine and it was not detected in the other machine either. The firmware ("BIOS" is not the correct term) on both machines failed to see it. A live Fedora system (booted off a USB stick) failed to see it. It's dead, Jim.
Lesson: SSDs don't give you warning about failures. Much worse (in my modest experience) than HDDs. Backup now. I'm skeptical about S.M.A.R.T. for SSDs.
Hugh, When I bought a hard drive at Best Buy, I asked about SSDs. I understand that there is a maximum number of writes you can do to them, and the number is rather small. I was buying a backup drive that runs at night while I am in bed, so I went for cheap and reliable. -- Howard Gibson hgibson@eol.ca jhowardgibson@gmail.com http://home.eol.ca/~hgibson
| From: Howard Gibson via talk <talk@gtalug.org>
| When I bought a hard drive at Best Buy, I asked about SSDs.

Seeking advice from Best Buy isn't a great idea.

Q: What's the difference between a used car salesperson and a computer salesperson?
A: The used car salesperson knows that he's lying.

| I understand that there is a maximum number of writes you can do to them,
| and the number is rather small. I was buying a backup drive that runs
| at night while I am in bed, so I went for cheap and reliable.

Don't buy a backup drive, buy several. At least alternate them. Otherwise all your backups may disappear in the same nasty event.

If you wear out a backup device (SSD or HDD), you are doing it wrong. (SSDs actually have decent "endurance" specs for normal uses. Do the arithmetic, if you care.)

I imagine that an HDD (or several) would be better for backups than an SSD:

- HDDs are quite a bit cheaper per byte than SSDs.
- HDDs are fast enough for backups.
- Backups usually need decent sequential write performance, something that HDDs are fine with. Relative to HDDs, SSDs excel at random access, something that rarely matters with backups.
- Many recent-generation inexpensive SSDs slow to a crawl once their write buffer is full. This would likely happen with a backup.
- There's a finite lifetime for information written to an HDD; my guess: 5 years is safe. You don't want to find this out experimentally. SSD information might well be significantly shorter-lived: I've heard claims of this but don't know the reality. I don't wish to find out :-)

All archives need to be recopied regularly. Media change (I have some information stranded on 9-track tapes). It seems as if the newer the medium, the shorter the lifespan.
- petroglyphs: long long time
- clay tablets: millennia
- paper (pre-wood-pulp): five hundred years
- paper made from wood pulp: 75 years
- punch cards and paper tape: 100 years
- 9-track mag tape: 10 years
- digital cassette tape: 4 years (formats changed too quickly)
- floppy disks: 5 years? Depends on the format (consider 3.0" floppies)
- USB flash drives: I've had them die after a year, but that's not expected.
- hard drives: death by standards evolution. Try finding an ST506 controller. Or MFM, ESDI, SCSI, FireWire. Support for even PATA is fading.
- LaserDisc, magneto-optical disks, CD-ROM, DVD (multiple standards), Blu-ray: each has standards that get obsolete. The actual data may deteriorate too. I do have some DVDs that claim to have a lifetime of over 100 years.
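The "do the arithmetic" aside above can be sketched in a few lines. The endurance rating and workload numbers below are illustrative assumptions, not the specs of any particular drive:

```python
# Back-of-the-envelope SSD endurance check.
# Assumed (illustrative) numbers: a drive rated for 100 TBW
# (terabytes written) and a nightly backup writing 20 GB.
tbw_rating_tb = 100      # drive endurance rating, terabytes written
daily_writes_gb = 20     # backup workload per day

years = (tbw_rating_tb * 1000) / daily_writes_gb / 365
print(f"~{years:.0f} years to exhaust the rated endurance")
```

With those numbers you'd take over a decade of nightly backups to exhaust the rating, which is the point: for normal use, write endurance is rarely the limiting factor.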
On Fri, Aug 2, 2019, 7:23 AM Stewart C. Russell via talk <talk@gtalug.org> wrote:
On 2019-08-01 11:09 p.m., D. Hugh Redelmeier via talk wrote:
- punch cards and paper tape: 100 years
- 9-track mag tape: 10 years
Would this form factor be sized like commercial 8-track audio tape, with an extra track squeezed in for sync? I recall my musician friends in the '80s were buying four-track cassette recorders with simultaneous synchronization. They used standard stereo cassettes, which played back two channels in stereo; you flipped the tape over to play the other two. One popular recorder brand was Fostex; it used both sides of the cassette at once. You could record three channels, then mix down to one, play back and simultaneously record another two over top and mix again, then run the audio through the unit's mixer, balance the output to stereo, and take the line out for either dub recording or live playback.
Good luck getting a reader for any of these now. At least the paper media is scannable.
Chances are if you have the data on tape, you already have the reader. These folks will repair or replace your equipment.
https://www.repairmytapedrive.com/?gclid=Cj0KCQjwvo_qBRDQARIsAE-bsH-VNiWSUGL...
It may seem out of date, but there is still a strong business case for maintaining the original archive records on original format, as well as a copy transferred to newer media, depending on the importance of the dataset itself.
Stewart
--- Post to this mailing list talk@gtalug.org Unsubscribe from this mailing list https://gtalug.org/mailman/listinfo/talk
On 2019-08-02 08:03 AM, Russell Reiter via talk wrote:
> - 9-track mag tape: 10 years
Would this form factor be sized like commercial 8 track audio tape with an extra track squeezed in for sync?
No, it was 1/2" tape on an open reel. And it was 9 or 7 actual tracks, with one track used for the parity bit. They used odd parity, to guarantee at least one "1" bit, to provide clocking for the NRZI encoding used. https://en.wikipedia.org/wiki/Non-return-to-zero
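The odd-parity trick can be sketched in a few lines of Python. `add_odd_parity` is a hypothetical helper name for illustration, not anything from a real tape driver:

```python
def add_odd_parity(bits):
    """Append a parity bit so the total number of 1s is odd.

    With odd parity, even an all-zero data frame gets a 1 bit, so
    every frame on the tape contains at least one flux transition
    for NRZI clock recovery to lock onto.
    """
    parity = 1 if sum(bits) % 2 == 0 else 0
    return bits + [parity]

# An all-zero byte still yields a 1 somewhere in the frame:
assert add_odd_parity([0] * 8) == [0] * 8 + [1]
# The total count of 1s is always odd:
assert sum(add_odd_parity([1, 0, 1, 1])) % 2 == 1
```

Even parity would detect single-bit errors just as well, but (as noted above) an all-zero frame would then carry no 1 at all, and the drive would have nothing to clock on.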
On Fri, Aug 2, 2019, 8:49 AM James Knott via talk, <talk@gtalug.org> wrote:
On 2019-08-02 08:03 AM, Russell Reiter via talk wrote:
> - 9-track mag tape: 10 years
Would this form factor be sized like commercial 8 track audio tape with an extra track squeezed in for sync?
No, it was 1/2" tape on an open reel. And it was 9 or 7 actual tracks, with one track used for the parity bit. They used odd parity, to guarantee at least one "1" bit, to provide clocking for the NRZI encoding used.
Interesting. In audio recording synchronization, one track is usually a click track. Not necessarily tic tic tic, but rather tic toc, etc. This helps the listener who is overdubbing another instrument to centre on the beat more accurately. The interval between the tic and the toc sets the logical state of the beat, tic being the leading edge of the beat and toc the trailing edge. I read that in RLL over serial UART reception, the receiver is often clocked at a higher rate (than the bus clock?) in order to garner a broader validated baseline of leading and trailing signal edges.

From your link, NRZ may map to the trailing edge of the signal, but not necessarily, as there are several other validation methods for non-clocked signals.
Also interesting is that NRZI seems to have two definitions: Non Return to Zero Inverted, or NRZ-IBM. Some nice pictures of an IBM unit in this link to a manual, for any other creative anachronists. http://ibm-1401.info/223-6988-729-MagTapeCE-InstRef-62-r.pdf
On 2019-08-04 08:09 AM, Russell Reiter wrote:
Also interesting is that NRZI seems to have two definitions. Non Return Zero Inverted or NRZ-IBM.
NRZI was created by IBM, specifically for use with tape drives. They were one of the earliest, if not the earliest, to use mag tape. I read the technical reason for "inverted" many years ago, but I have forgotten the details. Often that sort of thing is done to obtain the best performance from something. One such example was the use of odd parity. From a strictly error-detection point of view, odd or even will work, but with odd there will always be one "1" bit for clocking, as I mentioned.
Some nice pictures of an IBM unit in this link to a manual, for any other creative anachronists.
http://ibm-1401.info/223-6988-729-MagTapeCE-InstRef-62-r.pdf
I used to work on drives that looked similar. However, they were made by a company called Potter, but had the Collins branding on them.
On Sun, Aug 4, 2019, 9:21 AM James Knott via talk <talk@gtalug.org> wrote:
On 2019-08-04 08:09 AM, Russell Reiter wrote:
Also interesting is that NRZI seems to have two definitions. Non Return Zero Inverted or NRZ-IBM.
NRZI was created by IBM, specifically for use with tape drives. They were one of the earliest, if not the earliest, to use mag tape. I read the technical reason for "inverted" many years ago, but I have forgotten the details. Often that sort of thing is done to obtain the best performance from something. One such example was the use of odd parity. From a strictly error-detection point of view, odd or even will work, but with odd there will always be one "1" bit for clocking, as I mentioned.
From looking at the manual, inverted might be a reference to their NOR & XOR logic gates.
Check out the wiring patches on the unit in the manual; you can see what a cluster fork that could turn out to be if you had to troubleshoot it, especially where line voltage is used for sync.
From your Wikipedia link, it is indicated that NRZI was designed to work with or without a clock sync. A term I never heard before, off keying, refers to using the actual line polarity to determine whether the logical state is 0 or 1; that is where a line clock tic is not used.
Here's the second paragraph from your link. There are secondary data sync methods when there is no specific timing signal multiplexed into the stream.

"For a given data signaling rate, i.e., bit rate, the NRZ code requires only half the baseband bandwidth required by the Manchester code (the passband bandwidth is the same). When used to represent data in an asynchronous communication scheme, the absence of a neutral state requires other mechanisms for bit synchronization when a separate clock signal is not available."

It goes on to say that NRZ needs half the bandwidth of RZ encoding. I guess that's why there are numerous validation methods built into NRZ. That's an attractive feature for in-house IT, at a time when they had to cobble their own systems together with parts from different manufacturers.
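A minimal sketch of NRZI encoding (invert the line level on a 1, hold it on a 0) shows why the parity-guaranteed 1 bit discussed earlier matters. `nrzi_encode` is an illustrative helper, not code from any real driver:

```python
def nrzi_encode(bits, level=0):
    """NRZI: invert the line level on a 1 bit, hold it on a 0 bit."""
    out = []
    for b in bits:
        if b:
            level ^= 1   # a 1 produces a transition
        out.append(level)  # a 0 leaves the level unchanged
    return out

# A long run of 0s produces a flat line -- no transitions for the
# receiver to recover a clock from. Guaranteeing a 1 per frame
# (odd parity) guarantees at least one transition per frame.
assert nrzi_encode([0, 0, 0, 0]) == [0, 0, 0, 0]
assert nrzi_encode([1, 1, 0, 1]) == [1, 0, 0, 1]
```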
Some nice pictures of an IBM unit in this link to a manual, for any
other creative anachronists.
http://ibm-1401.info/223-6988-729-MagTapeCE-InstRef-62-r.pdf
I used to work on drives that looked similar. However, they were made by a company called Potter, but had the Collins branding on them.
How often would you do routine servicing, as opposed to repairs? They look like huge dust magnets to me and I can't see dust and magnetic tape playing well together.
On 2019-08-05 08:28 AM, Russell Reiter via talk wrote:
How often would you do routine servicing, as opposed to repairs? They look like huge dust magnets to me and I can't see dust and magnetic tape playing well together.
I've forgotten the interval, but they received regular maintenance. I just did it when my supervisor told me to. Maintenance would involve cleaning the heads, tape path and capstan, checking air filters and running reliability tests. Data centres tend to be fairly clean but, back in those days, we had to tell a couple of the programmers to not smoke in that area.
On 2019-08-05 08:28 AM, Russell Reiter via talk wrote:
From looking at the manual, inverted might be a reference to their NOR & XOR logic gates.
No that wasn't it. I don't recall the details, but it had to do with data recovery. I read about it in a book about IBM's early computers, which described the development of their various computers and other hardware.
On Mon, Aug 5, 2019, 9:38 AM James Knott via talk, <talk@gtalug.org> wrote:
On 2019-08-05 08:28 AM, Russell Reiter via talk wrote:
From looking at the manual, inverted might be a reference to their NOR & XOR logic gates.
No that wasn't it. I don't recall the details, but it had to do with data recovery. I read about it in a book about IBM's early computers, which described the development of their various computers and other hardware.
Thanks; by adding "data recovery" as key words to an NRZI info search I got to this link. http://pallen.ece.gatech.edu/Academic/ECE_6440/Summer_2003/L200-CDR-I(2UP).p... I'm a pictorially oriented learner. That and my limited math skills make this sort of presentation doc, with diagrams of logic gates and circuits and such, very helpful, if only to get a handle on the correct terminology when dealing with a concept.
On 2019-08-05 09:38 AM, James Knott wrote:
No that wasn't it. I don't recall the details, but it had to do with data recovery. I read about it in a book about IBM's early computers, which described the development of their various computers and other hardware.
This is the book I read. I borrowed it from the Mississauga Library, but they don't seem to have it anymore. https://mitpress.mit.edu/books/ibms-early-computers
On Mon, Aug 5, 2019, 11:58 AM James Knott via talk, <talk@gtalug.org> wrote:
On 2019-08-05 09:38 AM, James Knott wrote:
No that wasn't it. I don't recall the details, but it had to do with data recovery. I read about it in a book about IBM's early computers, which described the development of their various computers and other hardware.
This is the book I read. I borrowed it from the Mississauga Library, but they don't seem to have it anymore.
https://mitpress.mit.edu/books/ibms-early-computers
Wow, the index is 22 pages. Must have been a heavy read. Maybe the reference library still has a copy. $50.00 for the hardcover but $70 for the paperback is kind of hard to understand, but so are lots of things these days.
On 2019-08-05 01:54 PM, Russell Reiter wrote:
$50.00 for the hard cover but $70 for paperback is kind of hard to understand but so are lots of things these days.
That $50 is only because it's out of print. Otherwise, it would cost more. ;-) I borrowed it from the Mississauga Library. Perhaps other libraries might have it. It would be nice if some of these old books could be released as ebooks. 1986 is recent enough that it would have been composed on a computer.
On Mon, Aug 5, 2019, 2:17 PM James Knott via talk, <talk@gtalug.org> wrote:
On 2019-08-05 01:54 PM, Russell Reiter wrote:
$50.00 for the hard cover but $70 for paperback is kind of hard to understand but so are lots of things these days.
That $50 is only because it's out of print. Otherwise, it would cost more. ;-)
I borrowed it from the Mississauga Library. Perhaps other libraries might have it. It would be nice if some of these old books could be released as ebooks. 1986 is recent enough that it would have been composed on a computer.
Yeah, an ebook would be great; you get a lot of clues to current developments from reading the words of past developers. Personally, when I got started hacking, I read operational manuals when I found them discarded, for many years before I got my hands on any viable hardware. My first build, in the mid-'80s, was a found 8086 which needed SIPPs and a 5 MB drive for storage, which I found elsewhere among other discards. Bonus: the case had two 5.25" floppy drives, so you could load DOS 1.0 more quickly. Heck, that was half the fun. I lived close to IBM's Don Mills campus. At one point, as IBM staff living in the neighbourhood upgraded their own home terminal units, there were treasures by the curb every week.
On 2019-08-02 8:03 a.m., Russell Reiter via talk wrote:
On Fri, Aug 2, 2019, 7:23 AM Stewart C. Russell via talk <talk@gtalug.org <mailto:talk@gtalug.org>> wrote:
On 2019-08-01 11:09 p.m., D. Hugh Redelmeier via talk wrote:
> - punch cards and paper tape: 100 years
> - 9-track mag tape: 10 years
[snip]
Good luck getting a reader for any of these now. At least the paper media is scannable.
Chances are if you have the data on tape, you already have the reader. These folks will repair or replace your equipment. [snip] It may seem out of date, but there is still a strong business case for maintaining the original archive records on the original format, as well as a copy transferred to newer media, depending on the importance of the dataset itself.
I have printouts of programs that had been printed on an old IBM 1403 chain printer, some programs that are on punch cards, and one on paper tape. I also have two mag tape reels that were used with IBM mainframes. I have no idea what is on those tapes. The printouts, punch cards, and paper tape have survived intact for over 40 years.

I still (mostly) remember how to read punch cards. I would have to find a site to help decode the paper tape, but it could be run through an ASR 33 teletype to generate a printout.

I recently discovered I have a cassette tape with 4K BASIC for the Altair 8800, dated 1976. I have now archived the audio on that cassette onto my computer. I have other cassette tapes I used with old computers that I'm going to digitize and attempt to decode.

It is interesting to realize that a lot of this "old school technology" has survived many a decade, yet modern devices like CDs, DVDs, and hard drives often have much shorter shelf lives.

--
Cheers! Kevin. http://www.ve3syb.ca/
https://www.patreon.com/KevinCozens
"Nerds make the shiny things that distract the mouth-breathers, and that's why we're powerful" --Chris Hardwick
Owner of Elecraft K2 #2172 | #include <disclaimer/favourite>
On Fri, Aug 9, 2019, 2:45 AM Kevin Cozens via talk <talk@gtalug.org> wrote:
On 2019-08-02 8:03 a.m., Russell Reiter via talk wrote:
On Fri, Aug 2, 2019, 7:23 AM Stewart C. Russell via talk < talk@gtalug.org <mailto:talk@gtalug.org>> wrote:
On 2019-08-01 11:09 p.m., D. Hugh Redelmeier via talk wrote:
> - punch cards and paper tape: 100 years
> - 9-track mag tape: 10 years
[snip]
Good luck getting a reader for any of these now. At least the paper media is scannable.
Chances are if you have the data on tape, you already have the reader. These folks will repair or replace your equipment. [snip] It may seem out of date, but there is still a strong business case for maintaining the original archive records on the original format, as well as a copy transferred to newer media, depending on the importance of the dataset itself.
I have printouts of programs that had been printed on an old IBM 1403 chain printer, some programs that are on punch cards, and one on paper tape. I also have two mag tape reels that were used with IBM mainframes. I have no idea what is on those tapes. The printouts, punch cards, and paper tape have survived intact for over 40 years.
I still (mostly) remember how to read punch cards. I would have to find a site to help decode the paper tape but it could be run through an ASR 33 teletype to generate a printout.
I recently discovered I have a cassette tape with 4K BASIC for the Altair 8800, dated 1976. I have now archived the audio on that cassette onto my computer. I have other cassette tapes I used with old computers that I'm going to digitize and attempt to decode.
There are also technologies which were developed but which got overwhelmed by rapid changes in other areas. The laser optical turntable for playing old records is one. https://en.m.wikipedia.org/wiki/Laser_turntable Also from this link, a camera which scans the vinyl grooves and uses software to reconstruct the sound. Both of these technologies were eclipsed by compact disc technology, yet each of them could (probably) non-destructively read the records and reconstruct the sound. Sort of a microfiche-picture treatment of sound, instead of its traditional use for tiny copies of text and pictures.
It is interesting to realize that a lot of this "old school technology" has survived many a decade yet modern devices like CDs, DVDs, and hard drives often have much shorter shelf lives.
One of the philosophical founders of media theory, Marshall McLuhan, said: "the medium is the message." This is probably more true today than it was when he coined the phrase, given all the hyperbole around the collection of metadata on the net these days.

The issue with recording data electronically is bit-rot. This problem is amplified by making a copy of a copy of a copy, etc. Having a master copy, no matter what the form, would be beneficial for any data recovery expert, should they have an urgent need to reconstruct the original data after suspected corruption.

We can't reverse entropy, at least not yet. The best we can hope to do is retard it. Whether data is corrupted by dust and scratches in vinyl or CD records, we should always be able to recreate the old technology used for the creation of media at the time it was originally written. Except, of course, that which is lost to the ancient past, like the laser anti-gravity devices used by the Egyptians to cut and stack rocks into pyramids. We have the lasers but fall short on the anti-gravity devices. Personally I believe that if we don't bit-rot the planet first, we will get there eventually. ;-)
-- Cheers!
Kevin.
http://www.ve3syb.ca/
https://www.patreon.com/KevinCozens
"Nerds make the shiny things that distract the mouth-breathers, and that's why we're powerful" --Chris Hardwick
Owner of Elecraft K2 #2172 | #include <disclaimer/favourite>
On Fri, Aug 9, 2019, 8:58 AM James Knott via talk, <talk@gtalug.org> wrote:
On 2019-08-09 08:03 AM, Russell Reiter via talk wrote:
This problem is amplified by making a copy of a copy of a copy etc.
That is certainly the case with analog, but with digital the copies should be exactly the same, even if on a different media.
There is a possibility that keeping your photos in raw form will protect from major copy errors, but in all situations of moving bits in a data stream, there is the possibility of transient error. JPEG was considered lossy as it could not fully recreate the full raw data. 25 MB or more of raw image data per image is, or was, a hefty size to move across the bus in the early days, much less across the internet. Now we have so-called lossless JPEG; however, its accuracy is based on predictive sampling rather than a pure collection of bits per pixel. https://en.m.wikipedia.org/wiki/Lossless_JPEG

Humans lose thousands of skin cells a day, yet the fabric of the persona stays the same. I think photo data is a little the same: a few stray bits lost here or there won't change the picture that much. However, it is possible to corrupt a significant bit which would make decoding the picture impossible.
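For what it's worth, prediction by itself does not make a codec lossy: the decoder applies the same prediction and adds back the stored residual, recovering the pixels bit-exactly. A toy sketch of the idea (left-neighbour prediction; `predict_encode`/`predict_decode` are hypothetical helper names, far simpler than the real lossless-JPEG predictors):

```python
def predict_encode(pixels):
    """Toy lossless predictive coder: store each pixel as the
    difference (residual) from its left neighbour."""
    residuals = [pixels[0]]  # first pixel stored as-is
    for i in range(1, len(pixels)):
        residuals.append(pixels[i] - pixels[i - 1])
    return residuals

def predict_decode(residuals):
    """Reverse the prediction: add each residual back onto the
    previously reconstructed pixel."""
    pixels = [residuals[0]]
    for r in residuals[1:]:
        pixels.append(pixels[-1] + r)
    return pixels

row = [118, 119, 119, 121, 200, 201]
assert predict_decode(predict_encode(row)) == row  # bit-exact round trip
```

The compression win comes from the residuals being mostly small numbers, which entropy-code well; no information is discarded.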
On 2019-08-09 12:47 PM, Russell Reiter wrote:
There is a possibility that keeping your photos in raw form will protect from major copy errors, but in all situations of moving bits in a data stream, there is the possibility of transient error. JPEG was considered lossy as it could not fully recreate the full raw data. 25 MB or more of raw image data per image is, or was, a hefty size to move across the bus in the early days, much less across the internet. Now we have so-called lossless JPEG; however, its accuracy is based on predictive sampling rather than a pure collection of bits per pixel.
Jpegs are not copies. They are manipulated to save space. When you copy a file, that is, do not modify it, then the copy should be an exact replica. If you're really worried, you can use shasum etc. to ensure the integrity of the copy.
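The shasum idea can be sketched in Python with the standard-library hashlib; the streaming helper here is just an illustration (it reads in chunks so a large photo never has to fit in RAM):

```python
import hashlib
import os
import shutil
import tempfile

def sha256sum(path, chunk=1 << 16):
    """Hash a file in 64 KB chunks and return the hex digest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while block := f.read(chunk):
            h.update(block)
    return h.hexdigest()

# Copy a file, then verify the copy is bit-identical to the original.
with tempfile.TemporaryDirectory() as d:
    src = os.path.join(d, "photo.jpg")
    dst = os.path.join(d, "backup.jpg")
    with open(src, "wb") as f:
        f.write(os.urandom(100_000))  # stand-in for a real photo
    shutil.copyfile(src, dst)
    assert sha256sum(src) == sha256sum(dst)
```

Matching digests mean the copy is, to any practical certainty, an exact replica; a mismatch tells you something changed, though not where.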
On Fri, Aug 9, 2019, 12:51 PM James Knott via talk, <talk@gtalug.org> wrote:
On 2019-08-09 12:47 PM, Russell Reiter wrote:
There is a possibility that keeping your photos in raw form will protect from major copy errors, but in all situations of moving bits in a data stream, there is the possibility of transient error. JPEG was considered lossy as it could not fully recreate the full raw data. 25 MB or more of raw image data per image is, or was, a hefty size to move across the bus in the early days, much less across the internet. Now we have so-called lossless JPEG; however, its accuracy is based on predictive sampling rather than a pure collection of bits per pixel.
Jpegs are not copies. They are manipulated to save space. When you copy a file, that is, do not modify it, then the copy should be an exact replica. If you're really worried, you can use shasum etc. to ensure the integrity of the copy.
Jpegs are an exported file format created from aggregated image data collected by the CCD. They are digital files, subject to transmission errors just like any other signal. My 13-megapixel phone saves image data directly as a JPEG file. Sure, the raw data has been manipulated before writing the original, but that is much different than having an image recorded in raw format and then exporting a copy in a lossy format in order to save space.
On 2019-08-09 01:19 PM, Russell Reiter wrote:
Jpegs are an exported file format created from aggregated image data collected by the CCD. They are digital files and subject to transmission errors just like any other signal. My 13 megapixel phone saves image data directly as a jpg file. Sure the raw data has been manipulated before writing the original, but that is much different than having an image recorded using raw format and then exporting a copy in a lossy format in order to save space.
Perhaps you need to learn about some of the technology. Some media, such as CDs & DVDs, use forward error correction to ensure data is read back correctly. When you transfer data over a network, there is a checksum used with TCP and with IP(v4), and a CRC check at the Ethernet level. On top of that, some applications provide their own integrity check. So, it would be very difficult for an error to propagate. Now, I am aware some media will deteriorate with time, so the sensible thing to do is make periodic backups.
On Fri, Aug 9, 2019, 1:36 PM James Knott via talk <talk@gtalug.org> wrote:
On 2019-08-09 01:19 PM, Russell Reiter wrote:
Jpegs are an exported file format created from aggregated image data collected by the CCD. They are digital files and subject to transmission errors just like any other signal. My 13 megapixel phone saves image data directly as a jpg file. Sure the raw data has been manipulated before writing the original, but that is much different than having an image recorded using raw format and then exporting a copy in a lossy format in order to save space.
Perhaps you need to learn about some of the technology. Some media, such as CDs & DVDs, use forward error correction to ensure data is read back correctly. When you transfer data over a network, there is a checksum used with TCP and with IP(v4), and a CRC check at the Ethernet level. On top of that, some applications provide their own integrity check. So, it would be very difficult for an error to propagate.

Umm, what if two bits of the transmitted file are inverted but the count adds up to the same checksum? Checksums report errors; it would be up to the operator to ensure the data was copied correctly. So I can't agree with the semantics of your statement about forward correction ensuring a correct copy. I think that since all these factors deal with accidental errors, the term bit-rot was not correct. Bit corruption, on the other hand, would include those changes induced by other corrupting factors, as opposed to happenstance and transient errors.

Now, I am aware some media will deteriorate with time, so the sensible thing to do is make periodic backups.

CRC as an ECC tool also has its limitations where the SNR is high.
Well that is where this thread started. I think the logical conclusion of that was, the ancient recording methods had the longest true lifespan and as more and more technology was introduced, so did the valid life of the data get reduced.
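[The two-inverted-bits objection above is easy to demonstrate: a simple additive checksum misses compensating bit flips that a CRC catches. A minimal Python sketch, using only the standard library; the three-byte message is an invented example:]

```python
# Demonstrates the objection above: a simple additive checksum cannot
# detect two compensating bit flips, while a CRC does.
import zlib

orig = bytes([0x10, 0x20, 0x30])
# Flip one bit up in byte 0 and one bit down in byte 1: the sum is unchanged.
bad = bytes([0x11, 0x1F, 0x30])

assert sum(orig) % 256 == sum(bad) % 256      # additive checksum: collision
assert zlib.crc32(orig) != zlib.crc32(bad)    # CRC: error detected
print("additive checksum missed the corruption; CRC32 caught it")
```

[A degree-32 CRC is guaranteed to detect any error burst shorter than 33 bits, which covers this two-byte corruption; an additive checksum has no such guarantee.]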
--- Post to this mailing list talk@gtalug.org Unsubscribe from this mailing list https://gtalug.org/mailman/listinfo/talk
On Sat, Aug 10, 2019, 10:39 AM D. Hugh Redelmeier via talk <talk@gtalug.org> wrote:
| From: Alvin Starr via talk <talk@gtalug.org>
| It would be interesting to see the bit densities over time from stone
| tablets (how would cave paintings count?) to the latest production storage
| systems.
| It would also be interesting to know how many people had access to the storage
| media over time. That would have started with a few priests and specially
| trained people to just about everybody having a cell phone with a few GB of
| storage.
Yeah. The main reason for new formats is that they are cheaper or larger. Sadly not because of endurance.
One McLuhan-esque observation is that what gets recorded on new media is likely less significant / important. My brother often remarks that photos on film (or glass and metal plates!) were way more considered and significant than photos now. We are buried in snaps and will need AI to find the ones that we might actually be interested in later.
Another emerging media issue is that as the cost of delivering media content converges toward zero, the value of that written content also lessens. While the author of the link below lambasted Open Source Cheapskates, the demise of the Linux Journal speaks to that effect on users of Linux. I never subscribed, but I picked up a copy now and then, or browsed it at the library. https://betanews.com/2019/08/08/linux-journal-dies-again-rip/ All print media is feeling the digital crunch, and while everyone is now able to publish on the net in whatever fashion they choose, it is nice to have authentic curation of information. That too will probably require some kind of AI in the not too distant future. A masthead on a broadsheet was an emblem of credibility; on the web it appears under the sponsoring message of the day.
A separate problem is that we often don't know how useful or interesting something is until later. I imagine that to collectors TV Guide is more precious than National Geographic since nobody saved the former and everybody saved the latter.
(It is thought that the NSA is buried in data with insufficient means to find even the obviously interesting stuff.)
| From: James Knott via talk <talk@gtalug.org>
| On 2019-08-09 01:19 PM, Russell Reiter wrote:
| > Jpegs are an exported file format created from aggregated image data
| > collected by the CCD. They are digital files and subject to
| > transmission errors just like any other signal. My 13 megapixel phone
| > saves image data directly as a jpg file. Sure the raw data has been
| > manipulated before writing the original, but that is much different
| > than having an image recorded using raw format and then exporting a
| > copy in a lossy format in order to save space.
In cameras, it seems to be called RAW, not raw. I think that is to remind us that each camera has its own proprietary format. Only sometimes is the format disclosed.
Worse: some cameras never have a raw image: they convert to JPEG on the fly, since their storage has neither the bandwidth nor the capacity to store a whole raw image.
| Perhaps you need to learn about some of the technology. Some media,
| such as CDs & DVDs use forward error correction, to ensure data is
| copied correctly.
A significant number of problems cannot be corrected by FEC. In particular, misplacing a CD or having a 100% failure in an SSD (the original problem that prompted the thread).
| When you transfer data over a network, there is a
| checksum used with TCP, with IP(v4) and a CRC check at the Ethernet
| level. On top of that some applications provide their own integrity
| check. So, it would be very difficult for an error to propagate. Now,
An excuse to roll out another story.
When I was trying to make my Altair useful, I wrote a monitor. I needed a way of doing integrity checks for files recorded to audio tape and files sent over a serial line. I decided that CRC16 was way better than a simple XOR checksum. I looked into it and devised a way of calculating CRCs in software by byte-at-a-time operations rather than bit at a time (CRC was designed to be implemented in hardware, bit-wise, with a linear-feedback shift register).
I wrote the code in 8080 assembler and in C.
I released the code on usenet (no, not all of usenet was porn).
Unbeknownst to me, my code made it into the RFCs for PPP. Open source works. <https://tools.ietf.org/html/rfc1134>
BTW, CRC isn't any good at detecting forgeries. For that we have cryptographic hashes. And those hashes need to be protected against forgery too (digital signatures or another secure communications channel).
On 2019-08-10 12:52 PM, Russell Reiter via talk wrote:
While the author of the link below lambasted Open Source Cheapskates, the demise of the Linux Journal speaks to that effect on users of Linux. I never subscribed, but I picked up a copy now and then, or browsed it at the library.
I subscribed for several years, but then Shawn Powers took over as editor and it became less interesting. I still maintained my subscription, until the paper version ended.
On 8/9/19 12:47 PM, Russell Reiter via talk wrote:
On Fri, Aug 9, 2019, 8:58 AM James Knott via talk, <talk@gtalug.org <mailto:talk@gtalug.org>> wrote:
On 2019-08-09 08:03 AM, Russell Reiter via talk wrote:
> This problem is amplified by making a copy of a copy of a copy etc.
That is certainly the case with analog, but with digital the copies should be exactly the same, even if on a different media.
There is a possibility that keeping your photos in raw form will protect against major copy errors, but in all situations of moving bits in a data stream there is the possibility of transient error. JPEG was considered lossy because it could not fully recreate the full raw data. 25 MB or more of raw image data per image is, or was, a hefty size to move across the bus in the early days, much less across the internet. Now we have so-called lossless JPEG; however, its accuracy is based on predictive sampling rather than a pure collection of bits per pixel. When you say keeping your photos in the raw format, you are opening up a few other questions. First, if the photos are film based, then over time your master image will fade and any copying will result in some form of loss. If you're using most phones/cameras, you have no actual access to the raw images unless you start hacking, so your first image is already a less-than-exact copy of the intensity of the raw pixels in the sensor.
Once you have an image file, copying will almost always result in a perfect copy of the original. Bit error rates in copying and storage are exceedingly low, and there are numerous methods to decrease the risk to insanely low levels. Data loss in modern computer systems during copy is much more of an academic discussion than a real-life discussion.
It is not clear from the referenced article whether lossless JPEG is a lossy compression scheme. It may work as well as any of the popular compression schemes like Huffman or LZP, where data can be compressed and exactly restored. All compression schemes take advantage of some kind of pattern in the data; once the data does not match that pattern, the compressed data can often be larger than the raw data. There is no compression scheme that will guarantee compression of any arbitrary data stream.
Humans lose thousands of skin cells a day, yet the fabric of the persona stays the same. I think photo data is a little the same: a few stray bits lost here or there won't change the picture that much. However, it is possible to corrupt a significant bit, which would make decoding the picture impossible.
Raw digital photo images, as an array of RGB pixels, will be very tolerant of bit errors without destroying the whole of the image. GIF or JPEG compression has the downside of being sensitive to errors and will not fail gracefully, but given the reliability of digital copies, I would be hard pressed to say that it is less reliable than a photo in a drawer. Data loss by accident is way more likely than data loss due to bit-rot. -- Alvin Starr || land: (647)478-6285 Netvel Inc. || Cell: (416)806-0133 alvin@netvel.net ||
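[One everyday method for confirming that a copy is bit-for-bit identical, as discussed above, is to compare cryptographic digests of the original and the copy. A minimal Python sketch using only the standard library; the file names are invented for illustration:]

```python
# Verify that a copied file is bit-identical by comparing SHA-256 digests.
import hashlib
import os
import shutil
import tempfile

src = os.path.join(tempfile.mkdtemp(), "photo.raw")
with open(src, "wb") as f:
    f.write(os.urandom(1 << 20))  # stand-in for 1 MiB of image data

dst = src + ".copy"
shutil.copyfile(src, dst)

def sha256(path):
    """Hash a file in chunks so large images don't need to fit in RAM."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

assert sha256(src) == sha256(dst)
print("copy verified: digests match")
```

[This is the same idea as the checksums-on-the-wire discussion earlier in the thread, applied end-to-end: if the digests match, no error slipped through any layer of the copy.]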
On 8/9/19 12:47 PM, Russell Reiter via talk wrote: [snip]
Humans lose thousands of skin cells a day, yet the fabric of the persona stays the same. I think photo data is a little the same, a few stray bits lost here or there won't change the picture that much. However it is possible to corrupt that significant bit which would make decoding the picture impossible.
That's an interesting analogy. Humans are both digital and analog. The digital component is the DNA, which contains the whole of the instructions to build and operate a human. The analog component is the way we are grown and the effects of the outside world. In theory you could take a cell and use the DNA to build a copy of the person, but it would not be a very faithful copy, even leaving aside the effects of living and learning. Or I guess it would be as good as identical twins where one was thrown into a rocket at relativistic speeds to come back later. The other thing about humans is the algorithm of sex to create new humans, where data is combined in new ways to provide the ability to evolve. -- Alvin Starr || land: (647)478-6285 Netvel Inc. || Cell: (416)806-0133 alvin@netvel.net ||
On 8/9/19 8:03 AM, Russell Reiter via talk wrote:
On Fri, Aug 9, 2019, 2:45 AM Kevin Cozens via talk <talk@gtalug.org <mailto:talk@gtalug.org>> wrote:
On 2019-08-02 8:03 a.m., Russell Reiter via talk wrote: > On Fri, Aug 2, 2019, 7:23 AM Stewart C. Russell via talk <talk@gtalug.org <mailto:talk@gtalug.org>
[snip]
I recently discovered I have a cassette tape with 4K BASIC for Altair 8800 dated 1976. I have now archived the audio on that cassette onto my computer. I have other cassette tapes I used with old computers that I'm going to digitize and attempt to decode.
There are also technologies which were developed but which got overwhelmed by rapid changes in other areas. The laser optical turntable for playing old records is one.
https://en.m.wikipedia.org/wiki/Laser_turntable
Also from this link, a camera which scans the vinyl grooves and uses software to reconstruct the sound. Both of these technologies were eclipsed by compact disk technology, yet each of them could (probably) non destructively read the records and reconstruct the sound. Sort of a microfiche picture treatment of sound, instead of its traditional use for tiny copies of text and pictures.
The problem with records in general was the destructive playback system: the more you loved your music, the worse it got. The old 78 RPM recordings were made of a shellac material that I believe was more resilient than vinyl, but both the masters and the produced records were subject to degradation with use.
It is interesting to realize that a lot of this "old school technology" has survived many a decade yet modern devices like CDs, DVDs, and hard drives often have much shorter shelf lives.
One of the philosophical founders of media theory, Marshall McLuhan, said "the medium is the message." This is probably more true today than it was when he coined the phrase, given all the hyperbole around the collection of metadata on the net these days.
The issue with recording data electronically is bit-rot. This problem is amplified by making a copy of a copy of a copy etc. Having a master copy, no matter what the form, would be beneficial for any data recovery expert, should they have an urgent need to reconstruct the original data after suspected corruption.
Bit-rot is not new but dates back to the earliest recording technologies (clay tablets), where people copying something would make errors and the meaning of the work would change slightly. When books were copied by hand the error rate was very high, but the move to the printing press dramatically reduced the error rate. The current state of the art in error detection and correction has an amazingly low error rate, and some schemes can tolerate the loss of large chunks of data. What are the chances that someone on this mailing list will see a misspelled word due to bit-rot? I have some old records that I converted to digital recordings because I liked the music, and cranking up the old gramophone is so much more work than point and click. I can assure you that my "master" copies suffered from bit-rot, but at least now the loss per replay has gone down by a factor measured in 10 to the power of some number.
We can't reverse entropy, at least not yet. The best we can hope to do is retard it. Whether data is corrupted by dust and scratches in vinyl or CD records, we should always be able to recreate the old technology used for the creation of media at the time it was originally written. Except of course that which is lost to the ancient past, like the laser anti-gravity devices used by the Egyptians to cut and stack rocks into pyramids.
We have the lasers but fall short on the anti-gravity devices. Personally I believe that if we don't bit-rot the planet first, we will get there eventually. ;-)
What happens when we digitally encode the world with next-gen ECC systems? We can live forever and never have to worry about forgetting our stupid mistakes, because they will be preserved with perfect digital fidelity. -- Alvin Starr || land: (647)478-6285 Netvel Inc. || Cell: (416)806-0133 alvin@netvel.net ||
On Fri, Aug 9, 2019, 9:23 AM Alvin Starr via talk <talk@gtalug.org> wrote:
On 8/9/19 8:03 AM, Russell Reiter via talk wrote:
On Fri, Aug 9, 2019, 2:45 AM Kevin Cozens via talk <talk@gtalug.org> wrote:
On 2019-08-02 8:03 a.m., Russell Reiter via talk wrote:
On Fri, Aug 2, 2019, 7:23 AM Stewart C. Russell via talk < talk@gtalug.org
[snip]
I recently discovered I have a cassette tape with 4K BASIC for Altair 8800 dated 1976. I have now archived the audio on that cassette onto my computer. I have other cassette tapes I used with old computers that I'm going to digitize and attempt to decode.
There are also technologies which were developed but which got overwhelmed by rapid changes in other areas. The laser optical turntable for playing old records is one.
https://en.m.wikipedia.org/wiki/Laser_turntable
Also from this link, a camera which scans the vinyl grooves and uses software to reconstruct the sound. Both of these technologies were eclipsed by compact disk technology, yet each of them could (probably) non destructively read the records and reconstruct the sound. Sort of a microfiche picture treatment of sound, instead of its traditional use for tiny copies of text and pictures.
The problem with records in general was the destructive playback system: the more you loved your music, the worse it got. The old 78 RPM recordings were made of a shellac material that I believe was more resilient than vinyl, but both the masters and the produced records were subject to degradation with use.
There were lots of hand-crank gramophones around when I was a kid. Those shellac disks were brittle, and since most of the players had old springs, the playback speed was inconsistent, but that was all part of the fun.
It is interesting to realize that a lot of this "old school technology" has survived many a decade yet modern devices like CDs, DVDs, and hard drives often have much shorter shelf lives.
One of the philosophical founders of media theory, Marshall McLuhan, said "the medium is the message." This is probably more true today than it was when he coined the phrase, given all the hyperbole around the collection of metadata on the net these days.
The issue with recording data electronically is bit-rot. This problem is amplified by making a copy of a copy of a copy etc. Having a master copy, no matter what the form, would be beneficial for any data recovery expert, should they have an urgent need to reconstruct the original data after suspected corruption.
Bit-rot is not new but dates back to the earliest recording technologies (clay tablets), where people copying something would make errors and the meaning of the work would change slightly. When books were copied by hand the error rate was very high, but the move to the printing press dramatically reduced the error rate. The current state of the art in error detection and correction has an amazingly low error rate, and some schemes can tolerate the loss of large chunks of data. What are the chances that someone on this mailing list will see a misspelled word due to bit-rot?
I have some old records that I converted to digital recordings because I liked the music and cranking up the old gramophone is so much more work than point and click. I can assure you that my "master" copies suffered from bit-rot but at least now the loss per replay has gone down by a factor measured in 10 to the power of some number.
We can't reverse entropy, at least not yet. The best we can hope to do is retard it. Whether data is corrupted by dust and scratches in vinyl or CD records, we should always be able to recreate the old technology used for the creation of media at the time it was originally written. Except of course that which is lost to the ancient past, like the laser anti-gravity devices used by the Egyptians to cut and stack rocks into pyramids.
We have the lasers but fall short on the anti-gravity devices. Personally I believe that if we don't bit-rot the planet first, we will get there eventually. ;-)
What happens when we digitally encode the world with next-gen ECC systems? We can live forever and never have to worry about forgetting our stupid mistakes, because they will be preserved with perfect digital fidelity.
Forgetting something is a great gift. One cancer researcher said that cannabis may help his patients because it makes the cancer cells forget to divide and multiply. It's possible the universe may be entirely mathematical in nature and physicality is just one of its many abstractions. The free-electron theory led researchers to the god particle. However, Professor Higgs said, "I'm an atheist, so please don't refer to the discovery using that metaphor." At this point it may be more than just vague theory that the universe is holographic, with each bit of it mathematically predisposed to recreate itself in its own entirety.
-- Alvin Starr || land: (647)478-6285 Netvel Inc. || Cell: (416)806-0133 alvin@netvel.net ||
| From: Alvin Starr via talk <talk@gtalug.org>
| The old 78RPM recordings were made of a shellac material that I believe was
| more resilient than vinyl but both the masters and produced records were
| subject to degradation with use.

I gave away my grandmother's 78s to someone who promised to digitize them and give me a copy. That has yet to happen.

The needles used for 78s are quite different from those for 33 RPM LPs. Much cruder. They were typically steel or cactus thorns! Mixed in with the shellac was an abrasive, intended to keep the needle sharp! Imagine where the filings ended up!

In the stereo LP world, great care was taken to avoid anything abrasive. The needles were made of very hard materials like diamond.
On 8/9/19 2:44 AM, Kevin Cozens via talk wrote:
On 2019-08-02 8:03 a.m., Russell Reiter via talk wrote:
On Fri, Aug 2, 2019, 7:23 AM Stewart C. Russell via talk <talk@gtalug.org <mailto:talk@gtalug.org>> wrote:
On 2019-08-01 11:09 p.m., D. Hugh Redelmeier via talk wrote:
> > - punch cards and paper tape: 100 years
> > - 9-track mag tape: 10 years
[snip]
Good luck getting a reader for any of these now. At least the paper media is scannable.
Chances are if you have the data on tape, you already have the reader. These folks will repair or replace your equipment. [snip] It may seem out of date, but there is still a strong business case for maintaining the original archive records in the original format, as well as a copy transferred to newer media, depending on the importance of the dataset itself.
This one piqued my curiosity having worked with most of these formats so I thought to ask the question. "How many of these can still be read and how"
I have printouts of programs that had been printed on an old IBM 1403 chain printer, some programs that are on punch cards, and one on paper tape. I also have two mag tapes reels that were used with IBM mainframes. I have no idea what is on those tapes. The printouts, punch cards, and paper tape have survived intact for over 40 years.
You could retype them or cut them up and put them on a scanner with OCR.
I still (mostly) remember how to read punch cards. I would have to find a site to help decode the paper tape, but it could be run through an ASR 33 teletype to generate a printout. I figured a DIY kind of reader would work, but I fell over this one: https://www.ebay.com/sch/i.html?LH_CAds=&_ex_kw=&_fpos=&_fspt=1&_mPrRngCbx=1&_nkw=paper+tape+reader&_sacat=&_sadis=&_sop=12&_udhi=&_udlo=&_fosrp=1
80-column punch cards don't seem to have readers on eBay, but someone built a really nice low-tech Arduino solution. https://arduining.com/2012/06/10/arduino-punched-card-reader/ This project kind of makes me think of a book I saw years ago about how to build a computer using paper clips and tin cans. Here is a serious overkill kind of solution. https://www.youtube.com/watch?v=LcwxW2ne-UU&feature=youtu.be
I recently discovered I have a cassette tape with 4K BASIC for Altair 8800 dated 1976. I have now archived the audio on that cassette onto my computer. I have other cassette tapes I used with old computers that I'm going to digitize and attempt to decode.
These are more interesting because the recording format can be much more custom, from multi-tone audio to custom raw encoding formats.
It is interesting to realize that a lot of this "old school technology" has survived many a decade yet modern devices like CDs, DVDs, and hard drives often have much shorter shelf lives.
I am not sure durability was the primary concern when these formats were devised; it was likely more about increasing the information density and trying to make the format machine-processable. It would be interesting to see the bit densities over time from stone tablets (how would cave paintings count?) to the latest production storage systems. It would also be interesting to know how many people had access to the storage media over time. That would have started with a few priests and specially trained people, and ended with just about everybody having a cell phone with a few GB of storage. -- Alvin Starr || land: (647)478-6285 Netvel Inc. || Cell: (416)806-0133 alvin@netvel.net ||
On 2019-08-02 07:23 AM, Stewart C. Russell via talk wrote:
On 2019-08-01 11:09 p.m., D. Hugh Redelmeier via talk wrote:
- punch cards and paper tape: 100 years
- 9-track mag tape: 10 years

Good luck getting a reader for any of these now. At least the paper media is scannable.
I used to work with all of those, back when I was a computer tech working on mini computers. There were even a couple of 7 track tape drives.
On Thu, 1 Aug 2019 23:09:32 -0400 (EDT) "D. Hugh Redelmeier via talk" <talk@gtalug.org> wrote:
- petroglyphs: long long time
- clay tablets: millennia
- paper (pre-wood-pulp): five hundred years
- paper made from wood pulp: 75 years
- punch cards and paper tape: 100 years
- 9-track mag tape: 10 years
- digital cassette tape: 4 years (formats changed too quickly)
- floppy disks: 5 years? Depends on the format (consider 3.0" floppies)
- USB flash drives: I've had them die after a year, but that's not expected.
- hard drives: death by standards evolution. Try finding an ST506 controller. Or MFM, ESDI, SCSI, FireWire. Support for even PATA is fading.
- Laser Disc, Magneto-optical disks, CD-ROM, DVD (multiple standards), BluRay: each has standards that get obsolete. The actual data may deteriorate too. I do have some DVDs that claim to have a lifetime of over 100 years.
Hugh, I copy my HDD backup to Blu-Rays periodically. Occasionally I have had to recover stuff from them, and it has always worked. Typically this was months after the fact. I archive my digital photos to DVD. I store these in a dark, cool place, and again, they are doing fine. My good camera has two SD cards, one of which I have designated as a backup. I have my DVD archive, my Blu-Ray backups, and I archive the SDs when they are full. I think my odds are pretty good. It is getting harder to find DVD and Blu-Ray discs in stores. The next time I order DVDs, it will be online, and I will order archival quality. -- Howard Gibson hgibson@eol.ca jhowardgibson@gmail.com http://home.eol.ca/~hgibson
On Thu, Aug 01, 2019 at 07:00:43PM -0400, Howard Gibson via talk wrote:
When I bought a hard drive at Best Buy, I asked about SSDs. I understand that there is a maximum number of writes you can do to them, and the number is rather small. I was buying a backup drive that runs at night while I am in bed, so I went for cheap and reliable.
A low-end SSD can usually last 1000 writes. That is 1000 writes to every block, and it does wear leveling. So for a backup drive you would never reach that. As a swap partition on a machine without enough RAM, you can easily reach it. For something constantly writing a ton of log files, you could easily reach it. In a typical user machine, you probably never would either. For example, Samsung says the 860 EVO is rated for 600 TB total writes for the 1 TB model, so 600 times the size of the drive. Of course they take some wear leveling and moving stuff around into account. The 860 Pro doubles that to 1200 TB written for a 1 TB model. Go to an enterprise SSD and a 960 GB drive can have 6000 TB written as endurance. They cost more of course. -- Len Sorensen
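[Len's endurance figures are easy to turn into a lifetime estimate. A back-of-envelope Python sketch using the 600 TB rating he quotes for the 1 TB 860 EVO; the 50 GB/day workload is an assumption for illustration, not a figure from the thread:]

```python
# Rough SSD lifetime from a TBW (terabytes-written) endurance rating.
tbw_bytes = 600e12       # 600 TB rating quoted above for a 1 TB 860 EVO
daily_write = 50e9       # assumed 50 GB/day workload (hypothetical)

days = tbw_bytes / daily_write
print(round(days), "days")          # 12000 days
print(round(days / 365), "years")   # about 33 years
```

[Even at a heavy 50 GB/day, the rated endurance outlasts any realistic service life of the drive, which is Len's point: a typical user machine never gets close. Note that write amplification, raised in the next message, would shrink this estimate by whatever factor the controller multiplies host writes.]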
| From: Lennart Sorensen via talk <talk@gtalug.org>
| A low end SSD can usually last 1000 writes. That is 1000 writes to
| every block and it does wear leveling.

There is the invisible problem of "write amplification". Does write amplification count against the official endurance specifications? If so, the user can have no real idea of where they are on the odometer.

Write amplification can get pathologically bad if the drive controller thinks that the disk is close to full.

- manufacturers always "overprovision". They provide more flash memory than is in the view of the disk that the computer sees. That prevents the drive from actually filling up. I presume that cheaper drives have less overprovisioning. The amount of overprovisioning isn't disclosed on spec sheets.

- using "trim" (see fstrim(8)) can inform the drive controller of blocks that the filesystem considers deleted. This cannot be inferred by the controller until the logical block is overwritten.

- the user can leave some of the disk space unused, and this helps as long as the controller knows that the space is unused.

Another thing that needlessly wears SSDs: updating access times in the inodes of open files. This is needed for POSIX semantics, but I think that modern Linux systems default to being lazy about those updates (a Good Thing). I could be wrong about this -- I haven't checked.

A purely log-structured filesystem would probably be good for SSDs.
On Tue, Aug 06, 2019 at 12:14:35PM -0400, D. Hugh Redelmeier via talk wrote:
There is the invisible problem of "write amplification". Does write amplification count against the official endurance specifications? If so, the user can have no real idea of where they are on the odometer.
That is true, and I suspect it does count, although they may have taken some into account in the specs.
Write amplification can get pathologically bad if the drive controller thinks that the disk is close to full.
- manufacturers always "overprovision". They provide more flash memory than is in the view of the disk that the computer sees. That prevents the drive actually from filling up. I presume that cheaper drives have less overprovisioning. The amount of overprovisioning isn't something disclosed on spec sheets.
I think that is part of why you get a 1TB consumer drive but a 960GB enterprise drive. Internally they are likely the same amount of actual flash blocks.
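The 1 TB consumer versus 960 GB enterprise comparison can be made concrete with a little arithmetic (the shared raw-flash figure here is an assumption for illustration, not a disclosed number):

```python
# Overprovisioning: extra raw flash beyond the user-visible capacity,
# conventionally quoted relative to the usable capacity.
def overprovisioning_pct(raw_gb, usable_gb):
    return 100.0 * (raw_gb - usable_gb) / usable_gb

# Hypothetical drives built from the same 1024 GB of raw flash:
print(overprovisioning_pct(1024, 1000))  # "1 TB" consumer drive: 2.4%
print(overprovisioning_pct(1024, 960))   # "960 GB" enterprise: ~6.7%
```

Same flash, but the enterprise drive keeps nearly three times the spare area, which gives the controller more room for wear leveling and garbage collection.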
- using "trim" (see fstrim(8)) can inform the drive controller of blocks that the filesystem considers deleted. This cannot be inferred by the controller until the logical block is overwritten.
- the user can leave some of the disk space unused and this helps as long as the controller knows that the space is unused.
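For the curious, fstrim(8) is a thin wrapper around the Linux FITRIM ioctl. A minimal Python sketch of that mechanism (assuming Linux and a filesystem that supports FITRIM; the actual call needs root and a real mount point, so it only runs when given an argument):

```python
import fcntl
import os
import struct
import sys

# FITRIM is _IOWR('X', 121, struct fstrim_range) in <linux/fs.h>;
# fstrim_range is three u64 fields: start, len, minlen.
def _IOWR(typ, nr, size):
    return (3 << 30) | (size << 16) | (ord(typ) << 8) | nr

FSTRIM_RANGE = struct.Struct('QQQ')          # start, len, minlen
FITRIM = _IOWR('X', 121, FSTRIM_RANGE.size)  # 0xC0185879

def trim(mountpoint):
    """Ask the filesystem to discard all free blocks (what fstrim does)."""
    buf = bytearray(FSTRIM_RANGE.pack(0, 2**64 - 1, 0))
    fd = os.open(mountpoint, os.O_RDONLY)
    try:
        fcntl.ioctl(fd, FITRIM, buf)
        start, length, minlen = FSTRIM_RANGE.unpack(buf)
        return length  # bytes trimmed, filled in by the kernel
    finally:
        os.close(fd)

if __name__ == '__main__' and len(sys.argv) > 1:
    print(f"trimmed {trim(sys.argv[1])} bytes")
```

In practice you would just enable the fstrim.timer systemd unit (or run fstrim from cron) rather than call the ioctl yourself.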
Another thing that needlessly wears SSDs: updating access times in the inodes of open files. This is needed for POSIX semantics but I think that modern Linux systems default to being lazy about those updates (a Good Thing). I could be wrong about this -- I haven't checked.
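Hugh's guess about lazy atime updates is right: relatime has been the Linux default since kernel 2.6.30. A small sketch of how you might check a mount's atime behaviour by parsing its option string (as found in /proc/mounts):

```python
import os

# Classify the atime behaviour of a mount from its option string.
# With plain "relatime" (the Linux default since 2.6.30), the inode's
# atime is only rewritten when it is older than the mtime/ctime or
# more than a day stale -- sparing the SSD most atime writes.
def atime_mode(options):
    opts = options.split(',')
    for mode in ('noatime', 'relatime', 'strictatime'):
        if mode in opts:
            return mode
    return 'strictatime'  # plain "atime": update on every access

# Check the real root filesystem, where available:
if os.path.exists('/proc/mounts'):
    with open('/proc/mounts') as f:
        for line in f:
            dev, mnt, fstype, options, *rest = line.split()
            if mnt == '/':
                print(mnt, atime_mode(options))
```

Mounting with noatime goes further and skips the updates entirely, at the cost of strict POSIX atime semantics.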
A purely log-structured filesystem would probably be good for SSDs.
Might be.

-- Len Sorensen
On Thu, Aug 01, 2019 at 11:15:43AM -0400, D. Hugh Redelmeier via talk wrote:
We bought two Asus ux305ca notebooks about three years ago. The Microsoft Store had a remarkably good deal on them. I'm not the only GTALUGger to buy this notebook.
Lesson here: Don't buy Asus laptop, if you want reliability.

-- William Park <opengeometry@yahoo.ca>
| From: William Park via talk <talk@gtalug.org> | Lesson here: Don't buy Asus laptop, if you want reliability. Can you expand on that?
On 8/1/19 7:14 PM, William Park via talk wrote:
On Thu, Aug 01, 2019 at 11:15:43AM -0400, D. Hugh Redelmeier via talk wrote:
We bought two Asus ux305ca notebooks about three years ago. The Microsoft Store had a remarkably good deal on them. I'm not the only GTALUGger to buy this notebook. Lesson here: Don't buy Asus laptop, if you want reliability.
The lesson should be: don't buy the least expensive product if you want reliability.

I am typing this on a 6+ year old Asus laptop with a couple of 256G SSDs and I have not had a problem with it. Most manufacturers will produce products that are scaled back with the least expensive components. I have a few "cheap" HP, Acer, and Dell laptops that have been trashed in a couple of years in that category. I would be willing to bet that Microsoft had Asus make a special production run with parts of "less expensive" quality so that they could meet that "good deal" price point.

Remember: you only get what you pay for (on a good day).

--
Alvin Starr           ||   land: (647)478-6285
Netvel Inc.           ||   Cell: (416)806-0133
alvin@netvel.net      ||
On 2019-08-02 09:35 AM, Alvin Starr via talk wrote:
The lesson should be don't buy the least expensive product if you want reliability.
Yep. That's why I bought a Lenovo ThinkPad, rather than a plain Lenovo notebook.
Remember. You only get what you pay for
That's something a lot of people have never learned.
Hi Alvin, On Fri, 2 Aug 2019 09:35:47 -0400 Alvin Starr via talk <talk@gtalug.org> wrote:
On Thu, Aug 01, 2019 at 11:15:43AM -0400, D. Hugh Redelmeier via talk wrote:
We bought two Asus ux305ca notebooks about three years ago. The Microsoft Store had a remarkably good deal on them. I'm not the only GTALUGger to buy this notebook.

On 8/1/19 7:14 PM, William Park via talk wrote:

Lesson here: Don't buy Asus laptop, if you want reliability.

The lesson should be: don't buy the least expensive product if you want reliability. I am typing this on a 6+ year old Asus laptop with a couple of 256G SSDs and I have not had a problem with it. Most manufacturers will produce products that are scaled back with the least expensive components. I have a few "cheap" HP, Acer, and Dell laptops that have been trashed in a couple of years in that category. I would be willing to bet that Microsoft had Asus make a special production run with parts of "less expensive" quality so that they could meet that "good deal" price point.
You will not lose your money, but as the odds will be fairly even, you will not win anything :)
Remember. You only get what you pay for(on a good day).
Well said!
+1 on all Alvin said.

Many brands seem to need one or two really low-end PoS units to give Best Buy (and others) something for the far end of the display table where all that matters is price. You may even see really nice outside finishes on the units, or even a connector you won't see on more-expensive units (ooh! an RS-232 port!) because at that point the decision is wholly superficial. While every brand seems to have something at this low end, IMO Acer seems to predominate.

FWIW my current laptop is an Asus which I've also had for 6+ years and I'm very happy with it.

On Fri, 2 Aug 2019 at 09:36, Alvin Starr via talk <talk@gtalug.org> wrote:
On 8/1/19 7:14 PM, William Park via talk wrote:
On Thu, Aug 01, 2019 at 11:15:43AM -0400, D. Hugh Redelmeier via talk wrote:
We bought two Asus ux305ca notebooks about three years ago. The Microsoft Store had a remarkably good deal on them. I'm not the only GTALUGger to buy this notebook. Lesson here: Don't buy Asus laptop, if you want reliability.
The lesson should be don't buy the least expensive product if you want reliability.
I am typing this on a 6+ year old Asus laptop with a couple of 256G SSDs and I have not had a problem with it.
Most manufacturers will produce products that are scaled back with the least expensive components.
I have a few "cheap" HP, Acer, and Dell laptops that have been trashed in a couple of years in that category.
I would be willing to bet that Microsoft had Asus make a special production run with parts of "less expensive" quality so that they could meet that "good deal" price point.
Remember: you only get what you pay for (on a good day).
--
Alvin Starr           ||   land: (647)478-6285
Netvel Inc.           ||   Cell: (416)806-0133
alvin@netvel.net      ||
-- Evan Leibovitch, Toronto Canada @evanleibovitch or @el56
| From: Alvin Starr via talk <talk@gtalug.org>

| The lesson should be don't buy the least expensive product if you want
| reliability.

Not in this case.

| Most manufacturers will produce products that are scaled back with the least
| expensive components.

Not exactly the case for notebooks. For one thing, there are many fewer manufacturers (ODMs and OEMs) than there are brands. Don't just look at the brand to discern quality.

There seem to be quality and price "bands". But note that quality isn't a single thing. Much of what the higher-priced notebooks have may be of no interest to you. Here are some of the things that I've noticed about business notebooks and how important they are to me.

For example, businesses want a product that is available for years (kind of like LTS Linux distros). For my (one-off) purchases, I'd rather have the latest and greatest features. I really don't need a VGA port!

Business sales take a lot more negotiation. This often results in padded list prices.

Businesses expect long-term software support. This is ALMOST useless to me: I want to run Linux and most manufacturers don't support that. The ones that do support distros I don't care about (but it still would be useful to me). The one bit that I do value: continuing upgrades to the firmware. That seems rare except in business-class hardware.

Businesses seem to expect decent hardware manuals. That's something I value.

| I have a few "cheap" HP, Acer, and Dell laptops that have been trashed in a
| couple of years in that category.

Cheap/inexpensive notebooks have been pretty good to me. But I generally know what I'm getting into. If you don't want to waste your time figuring this out, it is safer to throw money at the problem.

The Asus UX305ca was not a cheap notebook. It is an Ultrabook(tm), so it is mechanically less robust than, say, a ThinkPad T<whatever>. I've seen used models for sale at the price we paid for a new one three years ago.
| I would be willing to bet that Microsoft had Asus make a special production
| run with parts of "less expensive" quality so that they could meet that "good
| deal" price point.

That's not the Microsoft Store model. They do sell standard vendor hardware, with a couple of wrinkles:

- their supply chain seems to be US-based. Evidence: they seem to sell devices with US keyboards where the versions sold in places like Best Buy have the Canadian Bilingual keyboards.

- they often sell "Signature Editions" which have clean-ish Windows loads (no crapware other than what is part of Windows). That's the only difference.

They have sometimes been inept at pricing: sometimes non-competitive and sometimes (but rarely) very low. I've wondered whether they don't always understand the difference between US$ and CA$.

| Remember. You only get what you pay for(on a good day).

A good first-order rule of thumb. Not an inviolable law of nature. I have fun discovering the cases where it does not apply. I even post some of the examples to this list.

================

One thing that annoys me and might be relevant to our plight:

- the SSD (Micron M600) had a firmware update <https://media.digikey.com/pdf/PCNs/Micron/PCN_31973.pdf>

- the SSD crash we experienced might have been prevented by this firmware update

- the Micron update, as distributed, will not apply to OEM devices (the Micron firmware utility will refuse)

- Asus never issued a version of the firmware update for its notebooks

This is Standard Operating Procedure for most vendors, but I think that it is unconscionable.
On Thu, Aug 01, 2019 at 07:14:04PM -0400, William Park via talk wrote:
Lesson here: Don't buy Asus laptop, if you want reliability.
Don't buy consumer-oriented laptops if you want reliability.

My wife had one Asus laptop that was pretty near indestructible, but it was very much a business-oriented model. That was the R1 tablet PC about 12 years ago. It survived an amazing amount of abuse.

Thinkpads are generally very sturdy. Ideapads are total junk. One aims at business, the other at consumers.

-- Len Sorensen
participants (13):
- ac
- Alvin Starr
- Chris F.A. Johnson
- D. Hugh Redelmeier
- Evan Leibovitch
- Howard Gibson
- James Knott
- Kevin Cozens
- lsorense@csclub.uwaterloo.ca
- Russell Reiter
- Stewart C. Russell
- Tim Tisdall
- William Park