
I could use some help from people who are more expert at GnuPG than I am, which is most people. Using GnuPG 2.0.26 and libgcrypt 1.6.2, running under Gentoo with kernel 3.17.4, all at the command line.

I have a gpg key (since 2004) and it's always worked well for me. Just today, though, I tried to use it to decrypt a file I'd successfully decrypted many times in the past:

$ gpg -d File_to_Decrypt > decrypted_file

I kept getting a "bad session key" error. After doing some googling, I eventually tried the suggestion to run:

$ gpgconf --reload

After doing so, however, when I try to decrypt the file as before:

$ gpg -d File_to_Decrypt > decrypted_file

gpg now asks for the data file, and when I offer File_to_Decrypt, I get back a list of signatures, each correctly identified with my DSA key ID (as shown by gpg --list-keys), but each entry is followed by the line "BAD signature from <my_ID> [ultimate]" -- which seems like very bad news indeed.

Naturally, the encrypted file is valuable (to me), and I have backup encrypted copies, but nothing in clear...

Well, the gnupg key hasn't changed since February 2004, and it's worked fine all the way to at least mid-October 2014, the last time I encrypted/decrypted the file. Now I'm more or less clueless. What should I try next?

-- 
Peter King                              peter.king@utoronto.ca
Department of Philosophy
170 St. George Street #521
The University of Toronto              (416)-978-3311 ofc
Toronto, ON  M5R 2M8  CANADA
http://individual.utoronto.ca/pking/
=========================================================================
GPG keyID 0x7587EC42 (2B14 A355 46BC 2A16 D0BC 36F5 1FE6 D32A 7587 EC42)
gpg --keyserver pgp.mit.edu --recv-keys 7587EC42
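[A low-risk first check, using only commands that ship with gpg (File_to_Decrypt is the same placeholder name as above): dump the packet structure of the encrypted file. If the file itself is damaged, parse errors usually show up here; if it parses cleanly, suspicion shifts toward the keyring or passphrase handling.

$ gpg --list-packets File_to_Decrypt

This may prompt for the passphrase; even so, the outer packets (a ":pubkey enc packet:" for the session key followed by an encrypted-data packet) and any parse errors are informative on their own.]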

I'd try to restore an older keyring and try with that. If you get the same results you can always put the current keyring back.

You could also try to install GnuPG 1.x and use that. GnuPG 1.x doesn't have all the command line options that GnuPG 2.x does, but 1.x should still be able to decrypt a file.

There's also a new version, GnuPG 2.1.x which I haven't used yet, but may provide better diagnostics.

There's an unlikely chance that your encrypted file is corrupted in such a manner as to throw signature errors; did you try one of your backed-up files?

--Bob.

On 26/12/14 12:06 AM, Peter King wrote:
I could use some help from people who are more expert at GnuPG than I am, which is most people.
Using GnuPG 2.0.26 and libgcrypt 1.6.2, running under Gentoo with kernel 3.17.4, all at the command line.
I have a gpg key (since 2004) and it's always worked well for me. Just today, though, I tried to use it to decrypt a file I'd successfully decrypted many times in the past:
$ gpg -d File_to_Decrypt > decrypted_file
I kept getting a "bad session key" error. After doing some googling, I eventually tried the suggestion to run:
$ gpgconf --reload
After doing so, however, when I try to decrypt the file as before:
$ gpg -d File_to_Decrypt > decrypted_file
gpg now asks for the data file, and when I offer File_to_Decrypt, I get back a list of signatures, each correctly identified with my DSA key ID (as shown by gpg --list-keys), but each entry is followed by the line "BAD signature from <my_ID> [ultimate]" -- which seems like very bad news indeed.
Naturally, the encrypted file is valuable (to me), and I have backup encrypted copies, but nothing in clear...
Well, the gnupg key hasn't changed since February 2004, and it's worked fine all the way to at least mid-October 2014, the last time I encrypted/decrypted the file. Now I'm more or less clueless. What should I try next?
-- 
Bob Jonkman <bjonkman@sobac.com>          Phone: +1-519-669-0388
SOBAC Microcomputer Services              http://sobac.com/sobac/
http://bob.jonkman.ca/blogs/              http://sn.jonkman.ca/bobjonkman/
Software --- Office & Business Automation --- Consulting
GnuPG Fngrprnt: 04F7 742B 8F54 C40A E115 26C2 B912 89B0 D2CC E5EA
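[A sketch of the keyring suggestion that avoids touching the live setup: point gpg at a copy of an older ~/.gnupg with --homedir instead of swapping rings in place. The backup path below is a placeholder.

$ cp -a /path/to/old/backup/dot-gnupg ~/.gnupg-old     # placeholder path to an older keyring backup
$ gpg --homedir ~/.gnupg-old -d File_to_Decrypt > decrypted_file

If that behaves the same way, the current keyring probably isn't the culprit.]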

On Fri, Dec 26, 2014 at 08:58:18PM -0500, Bob Jonkman wrote:
I'd try to restore an older keyring and try with that. If you get the same results you can always put the current keyring back.
You could also try to install GnuPG 1.x and use that. GnuPG 1.x doesn't have all the command line options that GnuPG 2.x does, but 1.x should still be able to decrypt a file.
There's also a new version, GnuPG 2.1.x which I haven't used yet, but may provide better diagnostics.
There's an unlikely chance that your encrypted file is corrupted in such a manner as to throw signature errors; did you try one of your backed-up files?
Thanks for the suggestions. The particular file is backed up on three other computers daily, and I get the same problem on each of them. Of course, if there were damage to the file, the damaged file would have been backed up.

When I get back to Toronto in a few days I'll check a few places where an older version of the file might be. Ditto for the keyring. I have some hopes that I will get hold of a sufficiently old version of the file and of gpg to decrypt it that way. We'll see.

I upgraded to the latest testing version under Gentoo (not the bleeding-edge version), 2.0.26-r2, but it made no difference to the problem. I haven't yet tried to downgrade GnuPG.

One of the things that puzzles me is that the error is "bad session key" -- which at least suggests that the file is okay and the passphrase checks out, but that something about my current "session" is at fault. This seems to happen to people who run gpg-agent, but I'm not doing that. I have no idea what GnuPG thinks a "session" is... perhaps it has a bad key or something cached in its memory? But I've completely rebooted my laptop several times and the same problem comes up, so that doesn't seem likely.

Thanks again!

-- 
Peter King  peter.king@utoronto.ca
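[Since the same file lives on several machines, comparing checksums is a quick way to confirm whether every copy is byte-for-byte identical, and therefore carries the same damage, if any. The hostname and remote path below are placeholders.

$ sha256sum File_to_Decrypt
$ ssh backuphost sha256sum /path/to/File_to_Decrypt     # placeholder host and path

Matching digests mean the backups are exact copies of the same (possibly already corrupted) file.]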

When I get back to Toronto in a few days I'll check a few places where an older version of the file might be. Ditto for the keyring. I have some hopes that I will get hold of a sufficiently old version of the file and of gpg to decrypt it that way. We'll see.
I managed to locate two partial versions of the missing file, from which I could reconstruct most of it. Still no idea about what went wrong, but given that the partial versions decrypted without problem, my guess is a disk error or something of the sort that corrupted the encrypted file, which was then propagated to all my backups.

Moral of the Story (one moral among many): Keep static time-stamped backups as well as current redundant copies. Will implement a scheme to do so this week, a better New Year's resolution than most!

Thanks to all who offered suggestions.

-- 
Peter King  peter.king@utoronto.ca
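[One minimal way to keep static, time-stamped copies alongside the usual mirroring; just a sketch, with placeholder file and directory names:

$ cp -a important_file.gpg "archive/important_file.gpg.$(date +%Y%m%d)"
$ sha256sum archive/important_file.gpg.* > archive/SHA256SUMS   # record digests so later corruption is detectable
$ sha256sum -c archive/SHA256SUMS                               # verify the archive from time to time

Because each dated copy is never rewritten, a disk error in the working copy cannot silently propagate into the archive the way it can with a plain mirror.]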

On 12/30/2014 04:53 PM, Peter King wrote:
Moral of the Story (one moral among many): Keep static time-stamped backups as well as current redundant copies. Will implement a scheme to do so this week, a better New Year's resolution than most!
This is way more common than people expect.

The ZFS folks found out that they needed to checksum disk files, somewhat to their surprise. ICL used to do it on a sector-by-sector basis as part of the sector footer, and would noisily recover when they detected problems. Sun 3.5 did too, but you had to do the recovery by booting (!) /stand/diag

--dave

-- 
David Collier-Brown,           | Always do right. This will gratify
System Programmer and Author   | some people and astonish the rest
davecb@spamcop.net             |                       -- Mark Twain

David Collier-Brown wrote:
On 12/30/2014 04:53 PM, Peter King wrote:
Moral of the Story (one moral among many): Keep static time-stamped backups as well as current redundant copies. Will implement a scheme to do so this week, a better New Year's resolution than most!
This is way more common than people expect.
The ZFS folks found out that they needed to checksum disk files, somewhat to their surprise. ICL used to do it on a sector-by-sector basis as part of the sector footer, and would noisily recover when they detected problems. Sun 3.5 did too, but you had to do the recovery by booting (!) /stand/diag
I've had ZFS, and recently now also BTRFS, tell me that it was getting checksum errors from a disk. In one case it seemed like a one-off glitch, but the other seems to be hardware that's failing by flailing, sometimes returning whatever crap it thinks it might have seen; in any event its problems are widespread, and that disk is being replaced and e-wasted. Defensively, either of those filesystems is a better thing than the callow, trusting filesystems of yesteryear. Subtle problems you don't notice until long after waste far too much of one's time.

At the application level, one thing that can help is keeping important files in git (and pushing a copy to another computer!), so that you have copies and a history of revisions, and it keeps checksums and will squawk at corruption. Another is cold-storage backups that don't overwrite existing files, just add new ones, for things like media and other archival files that shouldn't ever change after their initial creation.

-- 
Anthony de Boer
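[A sketch of the git approach described above, assuming a bare repository has already been created on the other machine; the remote URL and directory names are placeholders.

$ cd ~/important && git init
$ git add File_to_Decrypt.gpg && git commit -m "snapshot"
$ git remote add backup user@otherhost:backups/important.git   # placeholder remote
$ git push backup master
$ git fsck --full      # checks object checksums and complains about corruption

Every object in git is stored under its own hash, so a corrupted copy shows up at fsck, clone, or push time rather than years later.]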

On Tue, Dec 30, 2014 at 04:53:09PM -0500, Peter King wrote:
I managed to locate two partial versions of the missing file, from which I could reconstruct most of it. Still no idea about what went wrong, but given that the partial versions decrypted without problem, my guess is a disk error or something of the sort that corrupted the encrypted file, which was then propagated to all my backups.
Moral of the Story (one moral among many): Keep static time-stamped backups as well as current redundant copies. Will implement a scheme to do so this week, a better New Year's resolution than most!
Thanks to all who offered suggestions.
I suppose things like rsnapshot, which keeps copies with hardlinks to save space, really ought to be on a good filesystem. By default rsync doesn't compare files if the size and timestamp match, which of course means rsnapshot could happily think you have a good copy of a file that has in fact become corrupt because the underlying disk is failing. Making it always do a read compare would make it much slower, so having the filesystem maintain the redundancy and checksums does seem more efficient.

Getting reliable storage is getting complicated.

Just remember not to yell at your disks, though. They don't like that. :)

-- 
Len Sorensen
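[For the record, rsync can be forced into the full read compare mentioned above; it is much slower because every file on both sides gets read, but it will catch a silently corrupted copy (source/ and backup/ are placeholders):

$ rsync -a --checksum source/ backup/

If memory serves, rsnapshot can pass the same flag through to rsync via the rsync_long_args setting in rsnapshot.conf, at the same cost in backup time.]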

On 2015-01-01 5:53 PM, Lennart Sorensen wrote:
On Tue, Dec 30, 2014 at 04:53:09PM -0500, Peter King wrote:
I managed to locate two partial versions of the missing file, from which I could reconstruct most of it. Still no idea about what went wrong, but given that the partial versions decrypted without problem, my guess is a disk error or something of the sort that corrupted the encrypted file, which was then propagated to all my backups.
Moral of the Story (one moral among many): Keep static time-stamped backups as well as current redundant copies. Will implement a scheme to do so this week, a better New Year's resolution than most!
Thanks to all who offered suggestions.
I suppose things like rsnapshot, which keeps copies with hardlinks to save space, really ought to be on a good filesystem. By default rsync doesn't compare files if the size and timestamp match, which of course means rsnapshot could happily think you have a good copy of a file that has in fact become corrupt because the underlying disk is failing. Making it always do a read compare would make it much slower, so having the filesystem maintain the redundancy and checksums does seem more efficient.
The benefit of rsnapshot being that unless the original or subsequent versions are deleted, it is possible to go back in time to a version of the file that is intact. If the underlying disk is failing and corrupting files then ZFS or rsnapshot or tarballs won't make a difference anyway.

FWIW, I use rsnapshot for backups on top of ZFS (on Linux) as a production remote backup server. Apart from lengthy delete times (which is an issue with BTRFS as well, and with rsnapshot for any meaningful amount of backups on any filesystem), it has been a reliable and space-efficient backup system for a few years now.

Cheers,
Jamon
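[For anyone curious what such a setup looks like, a minimal rsnapshot.conf excerpt along these lines (fields must be TAB-separated; the paths, host, and retention counts are placeholders, and newer rsnapshot releases spell "interval" as "retain"):

snapshot_root	/backup/snapshots/
interval	daily	7
interval	weekly	4
backup	user@host:/home/user/	host/

Each run hardlinks unchanged files against the previous snapshot, which is what makes the dated copies cheap in space.]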

On Thu, Jan 01, 2015 at 08:37:10PM +0000, Jamon Camisso wrote:
The benefit of rsnapshot being that unless the original or subsequent versions are deleted, it is possible to go back in time to a version of the file that is intact. If the underlying disk is failing and corrupting files then ZFS or rsnapshot or tarballs won't make a difference anyway.
But ZFS would help. If the file is corrupted by a disk, then the checksum will almost certainly fail, and the copy from an alternate disk in the pool will be read instead; that copy is very unlikely to be corrupted at the same time.
FWIW, I use rsnapshot for backups on top of ZFS (on Linux) as a production remote backup server. Apart from lengthy delete times (which is an issue with BTRFS as well, and with rsnapshot for any meaningful amount of backups on any filesystem), it has been a reliable and space-efficient backup system for a few years now.
I like rsnapshot too, but it is worth remembering that if your backup of a file is ever corrupted by the disk, you won't get a new copy when using rsnapshot/rsync in the typical mode.

-- 
Len Sorensen
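[To make the ZFS point concrete: with a redundant pool, a periodic scrub reads every block, and blocks that fail their checksum are rewritten from the good copy on another device ("tank" is a placeholder pool name):

$ zpool scrub tank
$ zpool status -v tank    # shows checksum error counts and, after a scrub, any files with unrecoverable errors

On a single-disk pool the same commands will still detect corruption, but there is no second copy to repair from.]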
participants (6): Anthony de Boer, Bob Jonkman, David Collier-Brown, Jamon Camisso, Lennart Sorensen, Peter King