Toronto Public Library website

What is going on with the library website? There was a CBC article that said there was a ransomware attack, but it's been down for a week and it's hard to imagine why it would take so long to recover unless their infrastructure was much weaker than I would expect.

On 2023-11-08 08:49, Warren McPherson via talk wrote:
What is going on with the library website? There was a CBC article that said there was a ransomware attack, but it's been down for a week and it's hard to imagine why it would take so long to recover unless their infrastructure was much weaker than I would expect.
It's been mentioned in the news recently and they expect to be up within a week. Perhaps recovering from a ransomware attack is a bigger job than you imagine.

Recovery doesn't take that long; that comes from experience. The main reason they wouldn't be back up is that they don't have confidence they could prevent another attack. Taking longer than I'd imagine likely suggests prehistoric infrastructure. Solving that problem for public institutions is a hard and interesting one.

On Wed, Nov 8, 2023 at 8:52 AM James Knott via talk <talk@gtalug.org> wrote:
On 2023-11-08 08:49, Warren McPherson via talk wrote:
What is going on with the library website? There was a CBC article that said there was a ransomware attack, but it's been down for a week and it's hard to imagine why it would take so long to recover unless their infrastructure was much weaker than I would expect.
It's been mentioned in the news recently and they expect to be up within a week. Perhaps recovering from a ransomware attack is a bigger job than you imagine.
--- Post to this mailing list talk@gtalug.org Unsubscribe from this mailing list https://gtalug.org/mailman/listinfo/talk

It does depend on how much you have to recover. I have clients whose data sets take weeks to months to copy over multi-gigabit links. If you have to recover every computer in the building from a clean backup, that can take a long time. Or worse, imagine you have to re-install, then upgrade each system, then restore the application data. Because you installed these systems over years, you did not do automated rollouts; each one is a one-off.

Lots of people have the habit of only backing up application or user data because whole-system backups "take so much more space". Then your restore process goes from a single copy of the whole system to a complex set of tasks to discover what configurations or hidden data are missing.

How often do people actually do a recovery to test whether the backup data works? Back in the days of tapes I worked for a company that had a bug that caused the tapes to be written with zeros. They did have a verify step in the process where they read the data after it was written, but they only verified that the checksums matched the data. Fortunately this was found by a customer, and none of the financial institutions involved had a bad day.

On 2023-11-08 09:10, Warren McPherson via talk wrote:
Recovery doesn't take that long. That comes from experience. The main reason they wouldn't be up again is they don't have confidence they could prevent another attack. Taking longer than I imagine likely suggests prehistoric infrastructure. Solving that problem for public institutions is a hard and interesting problem.
On Wed, Nov 8, 2023 at 8:52 AM James Knott via talk <talk@gtalug.org> wrote:
On 2023-11-08 08:49, Warren McPherson via talk wrote:
What is going on with the library website? There was a CBC article that said there was a ransomware attack, but it's been down for a week and it's hard to imagine why it would take so long to recover unless their infrastructure was much weaker than I would expect.
It's been mentioned in the news recently and they expect to be up within a week. Perhaps recovering from a ransomware attack is a bigger job than you imagine.
-- Alvin Starr || land: (647)478-6285 Netvel Inc. || Cell: (416)806-0133 alvin@netvel.net ||
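The verify step in Alvin's tape story failed because it checksummed what was on the tape rather than comparing against the data that was supposed to be there. A minimal sketch of the difference (Python; the in-memory "tape" and function names are illustrative, not anyone's actual backup software):

```python
import hashlib

def checksum(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def weak_verify(tape: bytes) -> bool:
    # Only proves the tape is internally consistent: reading the tape
    # back and checksumming it "passes" even if it is all zeros.
    return checksum(tape) == checksum(tape)

def strong_verify(source: bytes, tape: bytes) -> bool:
    # Compares what was read back against the data we meant to write.
    return checksum(source) == checksum(tape)

source = b"customer records"
buggy_tape = b"\x00" * len(source)  # the write bug: all zeros

print(weak_verify(buggy_tape))            # True  - the bug goes unnoticed
print(strong_verify(source, buggy_tape))  # False - caught before a customer finds it
```

The weak check is what the vendor shipped; the strong check is what would have caught the zeroed tapes at write time instead of at restore time.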

On 2023-11-08 10:12, Alvin Starr via talk wrote:
It does depend on how much you have to recover. I have clients whose data sets take weeks to months to copy over multi gigabit links.
Not too long ago I had to move a database to a new server. A couple of TBs needed to be moved. Both machines were part of the same network but it still took the better part of a week to copy the data to the new box and get it up and running with the copied data.
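As a rough sanity check on why such moves take so long (illustrative numbers, not Kevin's actual setup): 2 TB over a saturated gigabit link is only a handful of hours of raw transfer, so most of a week-long move is dump/restore, indexing, and validation overhead rather than wire time.

```python
def transfer_hours(size_tb: float, link_gbps: float, efficiency: float = 0.7) -> float:
    """Hours to move size_tb terabytes over a link_gbps link,
    running at the given fraction of theoretical throughput."""
    size_bits = size_tb * 8e12  # 1 TB = 8e12 bits
    seconds = size_bits / (link_gbps * 1e9 * efficiency)
    return seconds / 3600

# 2 TB over gigabit at 70% efficiency: roughly 6-7 hours of raw copying
print(round(transfer_hours(2, 1), 1))
```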
They did have a verify step in the process where they read the data after it was written but they only verified that the checksums matched the data. Fortunately this was found by a customer and none of the financial institutions involved had a bad day.
Lucky that someone noticed in time to correct the issue. I wonder how a customer was the one to notice it rather than the people doing the data recovery.

--
Cheers!
Kevin.

https://www.patreon.com/KevinCozens | "Nerds make the shiny things that
                                    | distract the mouth-breathers, and
Owner of Elecraft K2 #2172          | that's why we're powerful"
#include <disclaimer/favourite>     |            --Chris Hardwick

On 2023-11-08 18:20, Kevin Cozens via talk wrote:
On 2023-11-08 10:12, Alvin Starr via talk wrote:
It does depend on how much you have to recover. I have clients whose data sets take weeks to months to copy over multi gigabit links.
Not too long ago I had to move a database to a new server. A couple of TBs needed to be moved. Both machines were part of the same network but it still took the better part of a week to copy the data to the new box and get it up and running with the copied data.

A common thing is to move blobs of data out of a database and into files on a file system. That seems like a good idea until you get a hundred million or so small files in a 5-10 level deep directory tree. Then something like a tar or dump can take weeks to run, but an image copy of the whole filesystem can be done in a day.
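Alvin's small-files point is mostly per-file overhead: every file costs metadata operations (open, stat, create, close) regardless of its size, while a block-level image copy pays only for the bytes. A rough model with illustrative numbers (the 10 ms per-file cost and 500 MB/s sequential rate are assumptions for the sketch):

```python
def file_copy_days(n_files: int, per_file_overhead_s: float = 0.01) -> float:
    """Days spent on per-file metadata work alone, ignoring data bytes."""
    return n_files * per_file_overhead_s / 86400

def image_copy_days(size_tb: float, mb_per_s: float = 500) -> float:
    """Days for a sequential block-level copy of the whole filesystem."""
    return size_tb * 1e6 / mb_per_s / 86400

# 100 million small files at 10 ms of overhead each vs a 10 TB image copy
print(round(file_copy_days(100_000_000), 1))  # ~11.6 days of metadata churn alone
print(round(image_copy_days(10), 2))          # ~0.23 days: well under a day
```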
They did have a verify step in the process where they read the data after it was written but they only verified that the checksums matched the data. Fortunately this was found by a customer and none of the financial institutions involved had a bad day.
Lucky that someone noticed in time to correct the issue. I wonder how a customer was the one to notice it rather than the people doing the data recovery.
Someone tried to extract something from a backup and got back a big block of zeros. Once it was figured out what had happened, it was all hands on deck to fix it and get the fix rolled out to the other customers.

Another one was a client's client who faithfully ran backups each night for years. But someone forgot to tell them to replace the tape. You can guess what happened when they had a drive failure...

I am always harping at people to test their recovery process regularly.
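Testing the recovery process, as Alvin urges, can be automated. A minimal sketch (Python; the tar-based backup format and directory layout are assumptions for illustration, not anyone's actual setup) that restores a backup into a scratch directory and compares checksums against the live data:

```python
import hashlib
import tarfile
import tempfile
from pathlib import Path

def tree_hashes(root: Path) -> dict:
    """Map each file's path (relative to root) to its SHA-256 digest."""
    return {
        str(p.relative_to(root)): hashlib.sha256(p.read_bytes()).hexdigest()
        for p in sorted(root.rglob("*"))
        if p.is_file()
    }

def restore_test(backup_tar: Path, live_dir: Path) -> bool:
    """Restore the archive into a throwaway directory and verify that
    every live file comes back byte-identical."""
    with tempfile.TemporaryDirectory() as scratch:
        with tarfile.open(backup_tar) as tar:
            tar.extractall(scratch)  # archive assumed trusted here
        return tree_hashes(Path(scratch)) == tree_hashes(live_dir)
```

Run on a schedule, a False return flags backup drift (or a tape full of zeros) long before a real drive failure forces the question.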

Speaking personally? It probably was. My reasoning comes from a rather disturbing exchange I had with an employee about the site's lack of inclusive design. The sense I got is that those in charge took a "let's build things with lots of third-party input based on the latest trend" approach, instead of building a solid, secure, progressive-enhancement-based floor. Articles I saw on the CP24 site hinted that some staffer likely downloaded a file or opened an attachment. If you trust your computer foundations to third parties, again speaking personally, then you cannot swiftly put things back together.

Just my 2 cents,
Kare

On Wed, 8 Nov 2023, Warren McPherson via talk wrote:
What is going on with the library website? There was a CBC article that said there was a ransomware attack, but it's been down for a week and it's hard to imagine why it would take so long to recover unless their infrastructure was much weaker than I would expect.

On 2023-11-08 11:35, Karen Lewellen via talk wrote:
Speaking personally? It probably was. My reasoning comes from a rather disturbing exchange I had with an employee about the site's lack of inclusive design. The sense I got is that those in charge took a "let's build things with lots of third-party input based on the latest trend" approach, instead of building a solid, secure, progressive-enhancement-based floor. Articles I saw on the CP24 site hinted that some staffer likely downloaded a file or opened an attachment. If you trust your computer foundations to third parties, again speaking personally, then you cannot swiftly put things back together. Just my 2 cents, Kare
In the library's defense, lots of bigger and supposedly more secure organizations have been hit by ransomware attacks. Phishing is getting more and more sophisticated, and all it takes is a momentary lapse.
On Wed, 8 Nov 2023, Warren McPherson via talk wrote:
What is going on with the library website? There was a CBC article that said there was a ransomware attack, but it's been down for a week and it's hard to imagine why it would take so long to recover unless their infrastructure was much weaker than I would expect.

An employer is constantly phishing staff, in hopes of sensitizing people so that real attacks won't get through. Alas, all they do is make us paranoid. Humans are particularly bad at /reliably/ detecting attacks, so occasional attacks get through, after which we get even more paranoid and wonder if our jobs are on the line...

Every single phishing attack I've seen, real or self-inflicted, laughable or brilliant, got detected by spamcop.net. Does the company use a spam filter? Sure, but it's the Microsoft one, which is useless. Any time I see something I don't recognize at work, I paste it into spamcop.

So:

1. /Do/ use technological means to deal with ransomware attacks
2. Make sure it's a /credible/ means

By this I mean a backup service like one Lexis Nexis had: they connected via a VPN, they were only connected when backing up, the connection was a disk mount, and they offered /financial guarantees/. That last reassured my VP: she said "they don't want to be sued out of business, and know a legal publisher like us will be litigious if they mess up". The only thing I didn't like was how slow it was to do a restore (;-))

--dave

On 11/8/23 13:23, Alvin Starr via talk wrote:
On 2023-11-08 11:35, Karen Lewellen via talk wrote:
Speaking personally? It probably was. My reasoning comes from a rather disturbing exchange I had with an employee about the site's lack of inclusive design. The sense I got is that those in charge took a "let's build things with lots of third-party input based on the latest trend" approach, instead of building a solid, secure, progressive-enhancement-based floor. Articles I saw on the CP24 site hinted that some staffer likely downloaded a file or opened an attachment. If you trust your computer foundations to third parties, again speaking personally, then you cannot swiftly put things back together. Just my 2 cents, Kare
In the library's defense, lots of bigger and supposedly more secure organizations have been hit by ransomware attacks.
Phishing is getting more and more sophisticated and all it takes is a momentary lapse.
On Wed, 8 Nov 2023, Warren McPherson via talk wrote:
What is going on with the library website? There was a CBC article that said there was a ransomware attack, but it's been down for a week and it's hard to imagine why it would take so long to recover unless their infrastructure was much weaker than I would expect.
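The pattern Dave describes, attaching the backup target only for the duration of the backup and then detaching, keeps ransomware on the client from ever seeing the backup volume. A sketch of the sequence (Python; the `vpn`, `mount`, and `rsync` command lines are illustrative assumptions, not the actual LexisNexis service):

```python
import subprocess

def backup_window_commands(src: str, mount_point: str = "/mnt/backup") -> list:
    """Command sequence: bring the link up only long enough to back up."""
    return [
        ["vpn", "up"],                                   # connect to the backup provider
        ["mount", "backup-server:/vault", mount_point],  # attach the remote disk
        ["rsync", "-a", "--delete", src, mount_point],   # copy the data
        ["umount", mount_point],                         # detach immediately
        ["vpn", "down"],                                 # and disconnect
    ]

def run_backup(src: str, dry_run: bool = True) -> None:
    for cmd in backup_window_commands(src):
        if dry_run:
            print(" ".join(cmd))  # show the plan without touching the system
        else:
            subprocess.run(cmd, check=True)  # abort the sequence on any failure

run_backup("/srv/data")  # dry run: prints the five-step plan
```

The design point is the window: outside those few commands there is no mounted path an attacker on the client could encrypt.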

I sometimes freelance for an IT company; they got hit by ransomware and took 3 weeks to recover. They are an IT company with 400 employees: a dozen of those are in administrative positions, and all the rest are programmers, system admins, and DB admins. A library would have a harder time recovering.

They had backups, but the download times were long. We had to basically create a new network infrastructure on the side, reimage every laptop, change every single password, and connect them to the new infrastructure. Every file that wasn't on the "known clean" backup was scrapped. We worked 12-16 hours per day for a week to recreate the infrastructure, and two more weeks for everyone to gradually recover their files and data. So if they take a month to recover I would not be surprised.

Mauro
https://www.maurosouza.com - registered Linux User: 294521
Scripture is both history, and a love letter from God.

On Wed, Nov 8, 2023 at 4:35 PM David Collier-Brown via talk <talk@gtalug.org> wrote:
An employer is constantly phishing staff, in hopes of sensitizing people so that real attacks won't get through. Alas, all they do is make us paranoid.
Humans are particularly bad at *reliably* detecting attacks, so occasional attacks get through, after which we get even more paranoid, and wonder if our jobs are on the line...
Every single phishing attack I've seen, real or self-inflicted, laughable or brilliant, got detected by spamcop.net. Does the company use a spam filter? Sure, but it's the Microsoft one, which is useless. Any time I see something I don't recognize at work, I paste it into spamcop.
So:
1. *Do* use technological means to deal with ransomware attacks 2. Make sure it's a *credible* means
By this I mean a backup service like one Lexis Nexis had: they connected via a VPN, they were only connected when backing up, the connection was a disk mount, and they offered *financial guarantees. *
That last reassured my VP: she said "they don't want to be sued out of business, and know a legal publisher like us will be litigious if they mess up". The only thing I didn't like was how slow it was to do a restore (;-))
--dave

On 11/8/23 13:23, Alvin Starr via talk wrote:
On 2023-11-08 11:35, Karen Lewellen via talk wrote:
Speaking personally? It probably was. My reasoning comes from a rather disturbing exchange I had with an employee about the site's lack of inclusive design. The sense I got is that those in charge took a "let's build things with lots of third-party input based on the latest trend" approach, instead of building a solid, secure, progressive-enhancement-based floor. Articles I saw on the CP24 site hinted that some staffer likely downloaded a file or opened an attachment. If you trust your computer foundations to third parties, again speaking personally, then you cannot swiftly put things back together. Just my 2 cents, Kare
In the library's defense, lots of bigger and supposedly more secure organizations have been hit by ransomware attacks.
Phishing is getting more and more sophisticated and all it takes is a momentary lapse.
On Wed, 8 Nov 2023, Warren McPherson via talk wrote:
What is going on with the library website? There was a CBC article that said there was a ransomware attack, but it's been down for a week and it's hard to imagine why it would take so long to recover unless their infrastructure was much weaker than I would expect.
participants (7)
- Alvin Starr
- David Collier-Brown
- James Knott
- Karen Lewellen
- Kevin Cozens
- Mauro Souza
- Warren McPherson