Linux hardening question

Hi all,

I am starting down the road of hardening a Linux server, using the Ubuntu server image as my starting point. I searched a few articles and compiled a list of things to do, but so far the material is a bit dated. So I was wondering if anyone has ideas to help me harden my system, which I plan to use to host my website on a VPS. So far I've got steps for the following:

- SSH: no root login, public-key login only
- DenyHosts to reduce brute-force password attacks
- Block port scanning
- Disable PING response
- Close unused ports

Q: What services should I consider disabling from starting automatically?

Q: What programs (like telnet) should I remove from my system?

I am reading up on iptables and also know about ufw, but I'm not sure how to set up a good firewall, i.e. what to block and what not. Any other ideas or a checklist would be appreciated.

Thanks, TH
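[A minimal sketch of the sshd settings behind the "no root login, public-key login" step. These are standard OpenSSH option names, but check sshd_config(5) for your version, and keep an open session while you restart sshd so you don't lock yourself out:]

```
# /etc/ssh/sshd_config fragment
PermitRootLogin no
PasswordAuthentication no
PubkeyAuthentication yes
ChallengeResponseAuthentication no
```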

On 2017-06-27 07:37 PM, Truth Hacker via talk wrote:
I am starting to go down the road to harden a Linux server, I am using the Ubuntu server image as my starting point. [snip] Q: What service should I consider disabling from starting automatically.
Disable any service you won't need for what you are going to be doing with the machine. :)
I am reading up on iptable and also know about ufw, but not sure how to setup a good firewall, like what to block and not.
It depends on the extent to which you want to harden the machine. One way to set up a firewall is to deny everything by default, then open holes for the services you need. firewalld is another firewall-related package I've been running across lately.

Install logwatch and have it send the logs to you on a daily basis. Use fail2ban to automatically firewall any machine that fails too many times to log in via SSH.

You may also want to "chmod 711 /etc", FWIW.

If you are really serious about hardening a machine, read up on SELinux.

-- Cheers! Kevin. http://www.ve3syb.ca/ |"Nerds make the shiny things that distract Owner of Elecraft K2 #2172 | the mouth-breathers, and that's why we're | powerful!" #include <disclaimer/favourite> | --Chris Hardwick
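[The deny-everything-by-default approach above can be sketched with ufw for a web-serving VPS. This is a hedged example, not a complete policy: it assumes SSH on port 22 and a plain HTTP/HTTPS site, and must run as root:]

```shell
# Deny all inbound by default, then open only what the host serves.
ufw default deny incoming
ufw default allow outgoing
ufw limit 22/tcp          # allow SSH, but rate-limit brute-force attempts
ufw allow 80/tcp
ufw allow 443/tcp
ufw --force enable        # --force skips the interactive confirmation
```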

On Tue, Jun 27, 2017 at 07:53:02PM -0400, Kevin Cozens via talk wrote:
On 2017-06-27 07:37 PM, Truth Hacker via talk wrote:
I am starting to go down the road to harden a Linux server, I am using the Ubuntu server image as my starting point. [snip] Q: What service should I consider disabling from starting automatically.
Disable any service you won't need for what you are going to be doing with the machine. :)
I am reading up on iptable and also know about ufw, but not sure how to setup a good firewall, like what to block and not.
It depends on the extent to which you want to harden the machine. One way to set up a firewall is deny everything by default then open the holes for the services you need. firewalld is also a firewall related package I've been running across lately.
Install logwatch and have it send the logs to you on a daily basis. Use fail2ban to automatically firewall any machine who fails too many times to login via SSH.
You may also want to "chmod 711 /etc", FWIW.
How well does that work out? So regular users (and services not running as root) can't resolve DNS anymore (can't read nsswitch.conf or resolv.conf). That sounds inconvenient.
If you are really serious about hardening a machine read up on SELinux.
-- Len Sorensen

On 2017-06-28 10:05 AM, Lennart Sorensen wrote:
On Tue, Jun 27, 2017 at 07:53:02PM -0400, Kevin Cozens via talk wrote:
You may also want to "chmod 711 /etc", FWIW.
How well does that work out? So regular users (and services not running as root) can't resolve DNS anymore (can't read nsswitch.conf or resolv.conf). That sounds inconvenient.
It works out well. I've been doing it for years. It seems some people somehow misread or misunderstood the chmod. I meant "chmod" and definitely not "chmod -R" as I think some people chose to interpret it.

It will inconvenience someone needing to do something on the machine where they have to look at some file in /etc. They will typically su to root first or use sudo.

The main idea is that it limits some of the casual poking around on the machine that some non-root, non-staff users of the machine may want to do. It won't do much to slow down some system cracker who manages to illegally gain access to a system.

BTW, I liked that comment about temporarily changing perms on /tmp just to mess with the heads of some users. :)

-- Cheers! Kevin.

On Sat, Jul 01, 2017 at 02:54:51PM -0400, Kevin Cozens via talk wrote:
It works out well. I've been doing it for years. It seems some people somehow misread or misunderstood the chmod. I meant "chmod" and definitely not "chmod -R" as I think some people chose to interpret it.
It will inconvenience someone needing to do something on the machine where they have to look at some file in /etc. They will typically su to root first or use sudo.
The main idea is that it limits some of the casual poking around on the machine that some non-root, non-staff users of the machine may want to do. It won't do much to slow down some system cracker who manages to illegally gain access to a system.
BTW, I liked that comment about temporarily changing perms on /tmp just to mess with the heads of some users. :)
It wasn't on purpose though. Doing dpkg-deb -x foo.deb in /tmp as root is a BAD IDEA. Don't do it. Remember to use a subfolder. Much easier to clean up. Some people just don't learn from their mistakes, like the one named down there ---v -- Len Sorensen

On 27 June 2017 at 19:53, Kevin Cozens via talk <talk@gtalug.org> wrote:
On 2017-06-27 07:37 PM, Truth Hacker via talk wrote:
I am starting to go down the road to harden a Linux server, I am using the Ubuntu server image as my starting point.
[snip]
Q: What service should I consider disabling from starting automatically.
Disable any service you won't need for what you are going to be doing with the machine. :)
Better still, uninstall... The OpenBSD philosophy is that virtually all services are deactivated by default; you are expected to configure and activate anything that you need. That's philosophically pretty appropriate. Unfortunately, some services may pull in others that you weren't expecting. At any rate, reviewing /etc/init.d, /lib/systemd/system, and such is a wise idea.
You may also want to "chmod 711 /etc", FWIW.
That means that non-root-space applications will have no access to their configuration in /etc, thereby breaking services. Notable ones I notice there include:
- Oops, your shell can't get at defaults under /etc
- Postgres default configuration on my Debian system
- MySQL default configuration

It also breaks users' DNS resolution, normally controlled by /etc/resolv.conf. /etc/passwd is probably needful too... I wouldn't be too quick to chmod /etc ...

-- When confronted by a difficult problem, solve it by reducing it to the question, "How would the Lone Ranger handle this?"
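[The review of boot-time services suggested above looks something like this on a systemd-based Ubuntu. The avahi-daemon line is only an illustration; decide service by service what your machine actually needs:]

```shell
# Inventory what is enabled at boot, plus socket-activated services.
systemctl list-unit-files --type=service --state=enabled
systemctl list-sockets

# Example: stop and disable a service you decide you don't need
# (avahi-daemon is a hypothetical pick, not a recommendation).
systemctl disable --now avahi-daemon.service
```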

Christopher Browne via talk wrote:
On 27 June 2017 at 19:53, Kevin Cozens via talk <talk@gtalug.org> wrote:
You may also want to "chmod 711 /etc", FWIW.
That means that non-root-space applications will have no access to their configuration in /etc, thereby breaking services.
Umm, no. The x-bit is what you need to access files inside a directory, so a non-root user can still access /etc/resolv.conf and so on. Not having the r-bit means you can't "read" the directory itself and get a list of the files in it. So no filename autocompletion for you while you're trying to cat that file!

However, all the filenames that matter in /etc are fairly canonical, and not being able to "ls /etc" isn't really going to slow folk down much, just unnecessarily annoy them.

Many years ago a coworker tried "chmod 700" on /etc etc, and chmod 600 on many key files, the upshot of which was that everything on the "secured" firewall had to run as root and it ended up less secure.

-- Anthony de Boer
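[The x-bit-vs-r-bit distinction above can be demonstrated without root by toggling the owner's own bits on a scratch directory; the same checks apply to "other" under 711/744. The mktemp directory and mode values are just for this demo, and root bypasses these permission checks entirely, so run it as a normal user:]

```shell
# x on a directory = may traverse it (open files inside by name);
# r on a directory = may list the names it contains.
dir=$(mktemp -d)
echo secret > "$dir/conf"

chmod 300 "$dir"    # --wx: x but no r (owner-bit analogue of 711)
ls "$dir" >/dev/null 2>&1       && list_wx=ok || list_wx=denied
cat "$dir/conf" >/dev/null 2>&1 && read_wx=ok || read_wx=denied

chmod 400 "$dir"    # r--: r but no x (owner-bit analogue of 744)
ls "$dir" >/dev/null 2>&1       && list_r=ok || list_r=denied
cat "$dir/conf" >/dev/null 2>&1 && read_r=ok || read_r=denied

echo "no r bit: list=$list_wx read=$read_wx"
echo "no x bit: list=$list_r read=$read_r"

chmod 700 "$dir" && rm -r "$dir"
```

As a non-root user this prints listing denied / read ok for the r-less directory, and listing ok / read denied for the x-less one, matching the description above.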

On Wed, Jun 28, 2017 at 07:21:55PM -0400, Anthony de Boer via talk wrote:
Christopher Browne via talk wrote:
On 27 June 2017 at 19:53, Kevin Cozens via talk <talk@gtalug.org> wrote:
You may also want to "chmod 711 /etc", FWIW.
That means that non-root-space applications will have no access to their configuration in /etc, thereby breaking services.
Umm, no. The x-bit is what you need to access files inside a directory, so a non-root user can still access /etc/resolv.conf and so on. Not having the r-bit means you can't "read" the directory itself and get a list of files in it. So no filename autocompletion for you while you're trying to cat that file!
Without the r bit you can not read the contents of a file.
However, all the filenames that matter in /etc are fairly canonical and not being able to "ls /etc" isn't really going to slow folk down much, just unnecessarily annoy them.
Yes removing the x bit would probably not be a problem, but removing the r bit would.
Many years ago a coworker tried "chmod 700" on /etc etc, and chmod 600 on many key files, the upshot of which was that everything on the "secured" firewall had to run as root and it ended up less secure.
And 711 is no better. 744 might work OK though. Now if you meant chmod JUST /etc, then sure fine. I think we all thought you meant recursively chmod /etc which would be a disaster. -- Len Sorensen

On Thu, Jun 29, 2017 at 09:24:09AM -0400, Lennart Sorensen via talk wrote:
On Wed, Jun 28, 2017 at 07:21:55PM -0400, Anthony de Boer via talk wrote:
Christopher Browne via talk wrote:
On 27 June 2017 at 19:53, Kevin Cozens via talk <talk@gtalug.org> wrote:
You may also want to "chmod 711 /etc", FWIW.
That means that non-root-space applications will have no access to their configuration in /etc, thereby breaking services.
Umm, no. The x-bit is what you need to access files inside a directory, so a non-root user can still access /etc/resolv.conf and so on. Not having the r-bit means you can't "read" the directory itself and get a list of files in it. So no filename autocompletion for you while you're trying to cat that file!
Without the r bit you can not read the contents of a file.
However, all the filenames that matter in /etc are fairly canonical and not being able to "ls /etc" isn't really going to slow folk down much, just unnecessarily annoy them.
Yes removing the x bit would probably not be a problem, but removing the r bit would.
Many years ago a coworker tried "chmod 700" on /etc etc, and chmod 600 on many key files, the upshot of which was that everything on the "secured" firewall had to run as root and it ended up less secure.
And 711 is no better. 744 might work OK though.
Now if you meant chmod JUST /etc, then sure fine. I think we all thought you meant recursively chmod /etc which would be a disaster.
OK that 'you' should have been the person that suggested chmod on /etc. -- Len Sorensen

I think the OP will be the only user on the server, so chmod on /etc is not that important. If someone exploits a service and gets a shell on the box, chmod will not help much. Jailing the accessible services in a container, or an old-school chroot, would be nice. On Jun 29, 2017 10:24, "Lennart Sorensen via talk" <talk@gtalug.org> wrote:
Christopher Browne via talk wrote:
On 27 June 2017 at 19:53, Kevin Cozens via talk <talk@gtalug.org> wrote:
You may also want to "chmod 711 /etc", FWIW.
That means that non-root-space applications will have no access to their configuration in /etc, thereby breaking services.
On Wed, Jun 28, 2017 at 07:21:55PM -0400, Anthony de Boer via talk wrote:
Umm, no. The x-bit is what you need to access files inside a directory, so a non-root user can still access /etc/resolv.conf and so on. Not having the r-bit means you can't "read" the directory itself and get a list of files in it. So no filename autocompletion for you while you're trying to cat that file!
Without the r bit you can not read the contents of a file.
However, all the filenames that matter in /etc are fairly canonical and not being able to "ls /etc" isn't really going to slow folk down much, just unnecessarily annoy them.
Yes removing the x bit would probably not be a problem, but removing the r bit would.
Many years ago a coworker tried "chmod 700" on /etc etc, and chmod 600 on many key files, the upshot of which was that everything on the "secured" firewall had to run as root and it ended up less secure.
And 711 is no better. 744 might work OK though.
Now if you meant chmod JUST /etc, then sure fine. I think we all thought you meant recursively chmod /etc which would be a disaster.
-- Len Sorensen --- Talk Mailing List talk@gtalug.org https://gtalug.org/mailman/listinfo/talk
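[The jailing idea mentioned above (container or old-school chroot) has a lightweight middle ground in systemd's per-service sandboxing. A hedged sketch for a hypothetical unit named mywebapp.service; every option here is documented in systemd.exec(5):]

```shell
# Drop-in that confines one service: read-only /usr and /etc,
# no access to /home, a private /tmp, and no privilege escalation.
mkdir -p /etc/systemd/system/mywebapp.service.d
cat > /etc/systemd/system/mywebapp.service.d/harden.conf <<'EOF'
[Service]
ProtectSystem=strict
ProtectHome=yes
PrivateTmp=yes
NoNewPrivileges=yes
EOF
systemctl daemon-reload
```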

Lennart Sorensen wrote:
On Wed, Jun 28, 2017 at 07:21:55PM -0400, Anthony de Boer via talk wrote:
Many years ago a coworker tried "chmod 700" on /etc etc, and chmod 600 on many key files, the upshot of which was that everything on the "secured" firewall had to run as root and it ended up less secure.
And 711 is no better. 744 might work OK though.
You mean "OK" in the "OK if you want to really torque nonroot users off" sense, right? Just for fun, try "chmod 744 /etc" in a root shell, then "ls -la /etc" from a nonroot shell. Then change it back to 755 and deal with any other users wondering why the machine did a weird there. (For extra points, do this on a nonshared machine!) Things like ls get really confused if they can see that the files are there but can't even stat them let alone any other access. Users staring at all that STDERR don't fare much better. -- Anthony de Boer

On Thu, Jun 29, 2017 at 10:18:26AM -0400, Anthony de Boer via talk wrote:
Lennart Sorensen wrote:
On Wed, Jun 28, 2017 at 07:21:55PM -0400, Anthony de Boer via talk wrote:
Many years ago a coworker tried "chmod 700" on /etc etc, and chmod 600 on many key files, the upshot of which was that everything on the "secured" firewall had to run as root and it ended up less secure.
And 711 is no better. 744 might work OK though.
You mean "OK" in the "OK if you want to really torque nonroot users off" sense, right?
Just for fun, try "chmod 744 /etc" in a root shell, then "ls -la /etc" from a nonroot shell. Then change it back to 755 and deal with any other users wondering why the machine did a weird there. (For extra points, do this on a nonshared machine!)
Things like ls get really confused if they can see that the files are there but can't even stat them let alone any other access. Users staring at all that STDERR don't fare much better.
I find accidentally changing permissions on /tmp a much better way to get people confused and annoyed at you. -- Len Sorensen

IMHO if you are looking for a hardened system you should not start with Ubuntu. Ubuntu is what I like to call "kitchen sink Linux". Start with a minimal Debian install, then add the packages you need incrementally; package removal is never an exact rollback of package installation. Then add your IDS and customize a host-based firewall. Disable IPv6. Disable broadcast ICMP. Etcetera, etcetera, etcetera... On Thu, Jun 29, 2017 at 3:20 PM Lennart Sorensen via talk <talk@gtalug.org> wrote:
On Thu, Jun 29, 2017 at 10:18:26AM -0400, Anthony de Boer via talk wrote:
Lennart Sorensen wrote:
On Wed, Jun 28, 2017 at 07:21:55PM -0400, Anthony de Boer via talk wrote:
Many years ago a coworker tried "chmod 700" on /etc etc, and chmod 600 on many key files, the upshot of which was that everything on the "secured" firewall had to run as root and it ended up less secure.
And 711 is no better. 744 might work OK though.
You mean "OK" in the "OK if you want to really torque nonroot users off" sense, right?
Just for fun, try "chmod 744 /etc" in a root shell, then "ls -la /etc" from a nonroot shell. Then change it back to 755 and deal with any other users wondering why the machine did a weird there. (For extra points, do this on a nonshared machine!)
Things like ls get really confused if they can see that the files are there but can't even stat them let alone any other access. Users staring at all that STDERR don't fare much better.
I find accidentally changing permissions on /tmp a much better way to get people confused and annoyed at you.
-- Len Sorensen
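[The "disable IPv6, disable broadcast icmp" items suggested above map to kernel sysctls; a sketch of a /etc/sysctl.d fragment, applied with `sysctl --system`. Note that whether disabling IPv6 outright is wise is argued both ways in this thread:]

```
# /etc/sysctl.d/99-hardening.conf (example fragment)
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
net.ipv4.icmp_echo_ignore_broadcasts = 1
```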

On 06/29/2017 03:31 PM, Ansar Mohammed via talk wrote:
Disable IPv6.
Why? That's the way the Internet is moving.
Perhaps something like this would be useful: https://www.suse.com/documentation/sles11/book_hardening/data/book_hardening...

Not really. We have a 12% adoption of IPv6 in Canada. On Thu, Jun 29, 2017 at 3:42 PM James Knott <james.knott@rogers.com> wrote:
On 06/29/2017 03:31 PM, Ansar Mohammed via talk wrote:
Disable IPv6.
Why? That's the way the Internet is moving.
Perhaps something like this would be useful:
https://www.suse.com/documentation/sles11/book_hardening/data/book_hardening...

On 06/29/2017 04:06 PM, Ansar Mohammed wrote:
Not really. We have a 12% adoption of IPv6 in Canada.
And growing. Rogers started offering IPv6 a bit over a year ago. It's now available to every cable and cell customer (some cable customers may need a new modem). Telus has also had it for a while, along with Teksavvy over ADSL. There are other Canadian companies offering it, though Bell seems to be stuck.

There are simply not enough IPv4 addresses to go around, and there haven't been for quite some time. Some carriers are providing IPv4 only via carrier-grade NAT, which means you can pretty well forget about accessing your own network.

Also, IPv6 brings with it some security features. For example, IPsec was originally designed for IPv6 and then added to IPv4. IPv6 can also use something called "privacy addresses", where a random number is used to form part of your address. These addresses change frequently, so it would be difficult to attack them. There are other security benefits to IPv6 that are not available in IPv4.

Like it or not, IPv6 is coming. Better get used to it. I've been running IPv6 for over 7 years and have been using that time to learn about it. As for address space, the smallest amount an ISP is supposed to provide is a /64 prefix. That leaves the customer with 2^64 addresses. I have a /56 prefix from Rogers, which gives me 2^72 addresses, or 256 /64s.

Now, given that other than the address space, IPv6 is pretty much the same as IPv4, what are you afraid of?
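[The "privacy addresses" mentioned above are the RFC 4941 temporary-address extensions; on Linux they can be enabled via sysctl, e.g. with a /etc/sysctl.d fragment like this:]

```
# Prefer temporary, randomized IPv6 source addresses for
# outgoing connections (RFC 4941 privacy extensions).
net.ipv6.conf.all.use_tempaddr = 2
net.ipv6.conf.default.use_tempaddr = 2
```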

It's not a matter of being afraid of anything. Security 101 tells you to reduce your attack surface area. I would not increase my attack surface area just for the sake of being an early adopter of IPv6. To be clear the conversation is about hardening. This is the right thing to do. On Thu, Jun 29, 2017 at 5:05 PM James Knott via talk <talk@gtalug.org> wrote:
On 06/29/2017 04:06 PM, Ansar Mohammed wrote:
Not really. We have a 12% adoption of IPv6 in Canada.
And growing. Rogers started offering IPv6 a bit over a year ago. It's now available to every cable and cell customer (some cable customers may need a new modem). Telus has also had it for a while, along with Teksavvy over ADSL. There are other Canadian companies that are offering it, though Bell seems to be stuck. There are simply not enough IPv4 addresses to go around and there hasn't been for quite some time. Some carriers are providing IPv4 only via carrier grade NAT, which means you can pretty well forget about accessing your own network. Also, IPv6 brings with it some security features. For example, IPSec was originally designed for IPv6 and then added to IPv4. IPv6 can also use something called "privacy addresses", where a random number is used to form part of your address. These addresses change frequently, so it would be difficult to attack them. There are other security benefits to IPv6 that are not available in IPv4.
Like it or not, IPv6 is coming. Better get used to it.
I've been running IPv6 for over 7 years and have been using that time to learn about it. As for address space, the smallest amount an ISP is supposed to provide is a /64 prefix. That leaves the customer with 2^64 addresses. I have a /56 prefix from Rogers, which gives me 2^72 addresses or 256 /64s.
Now, given that other than the address space, IPv6 is pretty much the same as IPv4, what are you afraid of?

On 06/29/2017 05:14 PM, Ansar Mohammed wrote:
It's not a matter of being afraid of anything. Security 101 tells you to reduce your attack surface area. I would not increase my attack surface area just for the sake of being an early adopter of IPv6.
To be clear the conversation is about hardening. This is the right thing to do.
Then you'll be hardening yourself out of a growing portion of the Internet. I use a browser addon called "ShowIP" which displays the web site's IP address. I can see a significant part of the sites I go to are now IPv6.

Also, if you don't know how to set up a firewall on IPv6, you really can't consider yourself capable of hardening anything. For example, consider setting up a firewall. On Cisco gear, unless you filter on address, your IPv4 and IPv6 rules are identical. On other firewalls, such as pfSense, you can do both IPv4 & IPv6 with one rule. You can also have separate rules if needed, your choice.

Also, if you're not competent with IPv6, you'll never get some certifications such as CCNA etc. They require you to know IPv6.

BTW, here's the IPv6 address for gtalug.org: 2600:3c03::f03c:91ff:fe50:ea0a
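[The "same rule for IPv4 and IPv6" point above can be sketched with the classic iptables/ip6tables pair (nftables and ufw can express both families in one place). A hedged, root-only example, not a full policy:]

```shell
# Apply the same inbound policy to both stacks.
for fw in iptables ip6tables; do
    $fw -P INPUT DROP
    $fw -A INPUT -i lo -j ACCEPT
    $fw -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
    $fw -A INPUT -p tcp --dport 22 -j ACCEPT
    $fw -A INPUT -p tcp --dport 443 -j ACCEPT
done
# IPv6 needs ICMPv6 (neighbour discovery, PMTU) to function at all.
ip6tables -A INPUT -p ipv6-icmp -j ACCEPT
```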

Again, please follow the thread; this is not about competency or capability with IPv6. This is a simple question about hardening a Linux system. My entire network runs IPv6 also, but my home systems do not need to be hardened.

There have been many IPv6-only bugs and exploits, including last year's IPv6 ping of death on Cisco. https://tools.cisco.com/security/center/content/CiscoSecurityAdvisory/cisco-... The stack simply isn't as battle-tested as IPv4. Oh, and that growing portion of the internet that's IPv6-only is primarily China.

What's your business reason for the additional risk of IPv6? Does your application support IPv6? Has your application been tested with IPv6? Do you have users that are IPv6-only? If you don't need it on a hardened system, you are just adding another attack vector for no good reason. On Thu, Jun 29, 2017 at 5:36 PM James Knott via talk <talk@gtalug.org> wrote:
On 06/29/2017 05:14 PM, Ansar Mohammed wrote:
It's not a matter of being afraid of anything. Security 101 tells you to reduce your attack surface area. I would not increase my attack surface area just for the sake of being an early adopter of IPv6.
To be clear the conversation is about hardening. This is the right thing to do.
Then you'll be hardening yourself out of a growing portion of the Internet. I use a browser addon called "ShowIP" which displays the web site's IP address. I can see a significant part of the sites I go to are now IPv6. Also, if you don't know how to set up a firewall on IPv6, you really can't consider yourself capable of hardening anything. For example, consider setting up a firewall. On Cisco gear, unless you filter on address, your IPv4 and IPv6 rules are identical. On other firewalls, such as pfSense, you can do both IPv4 & IPv6 with one rule. You can also have separate rules if needed, your choice. Also, if you're not competent with IPv6, you'll never get some certifications such as CCNA etc. They require you to know IPv6.
BTW, here's the IPv6 address for gtalug.org: 2600:3c03::f03c:91ff:fe50:ea0a

I have worked with telecommunications and networks for many years (I first worked on a computer network in 1978, before there was such a thing as Ethernet or IPv4) and often see IPv6 in my work. I cannot say I'm not going to work with it or that the customer shouldn't use it. I have to be prepared to deal with the situation, and these days that includes being competent with IPv6.

Also, I wasn't referring to home users when I was talking about hardening. Much of my work has been in high-security data centres, where there are public web sites, among others, running in a protected environment. In today's world, working with IPv6 is part of the job, and disabling it, when it is the future, is just plain incompetence. If you can't protect against attacks via IPv6 as you would via IPv4, you really should be looking for another job. IPv6 is here now; learn to deal with it instead of hiding from it. It's not going away. On 06/29/2017 06:18 PM, Ansar Mohammed wrote:
Again, please follow the thread, this is not about competency or capability on IPv6.
This is a simple question on hardening a Linux system. My entire network runs IPv6 also. But my home systems do not need to be hardened.
There have been many IPv6 only bugs and exploits including last years IPv6 ping of death on Cisco. https://tools.cisco.com/security/center/content/CiscoSecurityAdvisory/cisco-...
The stack simply isn't as battle tested as IPv4.
Oh, and that growing portion of the internet that's IPv6 only is primarily China.
What's your business reason for the additional risk of IPv6?
Does your application support IPv6?
Has your application been tested with IPv6?
Do you have users that are IPv6 only?
If you don't need it on a hardened system, you are just adding another attack vector for no good reason.
On Thu, Jun 29, 2017 at 5:36 PM James Knott via talk <talk@gtalug.org <mailto:talk@gtalug.org>> wrote:
On 06/29/2017 05:14 PM, Ansar Mohammed wrote:
It's not a matter of being afraid of anything. Security 101 tells you to reduce your attack surface area. I would not increase my attack surface area just for the sake of being an early adopter of IPv6. To be clear, the conversation is about hardening. This is the right thing to do.
Then you'll be hardening yourself out of a growing portion of the Internet. I use a browser addon called "ShowIP" which displays the web site's IP address. I can see a significant part of the sites I go to are now IPv6. Also, if you don't know how to set up a firewall on IPv6, you really can't consider yourself capable of hardening anything. For example, consider setting up a firewall. On Cisco gear, unless you filter on address, your IPv4 and IPv6 rules are identical. On other firewalls, such as pfSense, you can do both IPv4 & IPv6 with one rule. You can also have separate rules if needed, your choice. Also, if you're not competent with IPv6, you'll never get some certifications such as CCNA etc. They require you to know IPv6.
BTW, here's the IPv6 address for gtalug.org: 2600:3c03::f03c:91ff:fe50:ea0a

Actually James, incompetence would be opening up a high security system to additional attack vectors without a good business or technical reason (which you really haven't provided). On Thu, Jun 29, 2017 at 6:33 PM James Knott via talk <talk@gtalug.org> wrote:
I have worked with telecommunications and networks for many years (I first worked on a computer network in 1978, before there was such a thing as Ethernet or IPv4) and often see IPv6 in my work. I cannot say I'm not going to work with it or the customer shouldn't use it. I have to be prepared to deal with the situation and these days that includes being competent with IPv6. Also, I wasn't referring to home users when I was talking about hardening. Much of my work has been in high security data centres, where there are public web sites, among others, running in a protected environment. In today's world, working with IPv6 is part of the job and disabling it, when it is the future, is just plain incompetence. If you can't protect attacks via IPv6 as you would via IPv4, you really should be looking for another job. IPv6 is here now, learn to deal with it, instead of hiding from it. It's not going away.
On 06/29/2017 06:18 PM, Ansar Mohammed wrote:
Again, please follow the thread, this is not about competency or capability on IPv6.
This is a simple question on hardening a Linux system. My entire network runs IPv6 also. But my home systems do not need to be hardened.
There have been many IPv6 only bugs and exploits including last years IPv6 ping of death on Cisco.
https://tools.cisco.com/security/center/content/CiscoSecurityAdvisory/cisco-...
The stack simply isn't as battle tested as IPv4.
Oh, and that growing portion of the internet that's IPv6 only is primarily China.
What's your business reason for the additional risk of IPv6?
Does your application support IPv6?
Has your application been tested with IPv6?
Do you have users that are IPv6 only?
If you don't need it on a hardened system, you are just adding another attack vector for no good reason.
On Thu, Jun 29, 2017 at 5:36 PM James Knott via talk <talk@gtalug.org <mailto:talk@gtalug.org>> wrote:
On 06/29/2017 05:14 PM, Ansar Mohammed wrote:
It's not a matter of being afraid of anything. Security 101 tells you to reduce your attack surface area. I would not increase my attack surface area just for the sake of being an early adopter of IPv6. To be clear, the conversation is about hardening. This is the right thing to do.
Then you'll be hardening yourself out of a growing portion of the Internet. I use a browser addon called "ShowIP" which displays the web site's IP address. I can see a significant part of the sites I go to are now IPv6. Also, if you don't know how to set up a firewall on IPv6, you really can't consider yourself capable of hardening anything. For example, consider setting up a firewall. On Cisco gear, unless you filter on address, your IPv4 and IPv6 rules are identical. On other firewalls, such as pfSense, you can do both IPv4 & IPv6 with one rule. You can also have separate rules if needed, your choice. Also, if you're not competent with IPv6, you'll never get some certifications such as CCNA etc. They require you to know IPv6.
BTW, here's the IPv6 address for gtalug.org: 2600:3c03::f03c:91ff:fe50:ea0a

On 06/29/2017 06:46 PM, Ansar Mohammed wrote:
Actually James, incompetence would be opening up a high security system to additional attack vectors without a good business or technical reason (which you really haven't provided).
The business reason is the world is moving to IPv6. Failing to accept that fact means crippling the future. Why have companies such as Google, Microsoft, Apple, Cisco, Globe & Mail, SuSE, Mozilla and many, many more, along with Canadian and U.S. federal governments made the move to IPv6? Don't they have systems that have to be hardened? The Internet has been hobbled by IPv4 for far too long. It's time to move on. Refusing to use IPv6 means you're stuck in the past.

On June 29, 2017 7:37:54 PM EDT, James Knott via talk <talk@gtalug.org> wrote:
On 06/29/2017 06:46 PM, Ansar Mohammed wrote:
Actually James, incompetence would be opening up a high security system to additional attack vectors without a good business or technical reason (which you really haven't provided).
The business reason is the world is moving to IPv6. Failing to accept that fact means crippling the future. Why have companies such as Google, Microsoft, Apple, Cisco, Globe & Mail, SuSE, Mozilla and many, many more, along with Canadian and U.S. federal governments made the move to IPv6? Don't they have systems that have to be hardened?
The Internet has been hobbled by IPv4 for far too long. It's time to move on. Refusing to use IPv6 means you're stuck in the past.
These words were attributed to Benjamin Franklin: "If you fail to plan, you are planning to fail!"

This is the sort of reasoning which provided for IPv6's creation in the first place. The internet is running out of address space. Any networked system, currently hardened or otherwise, has to take the future into account when planning for the present.

Perhaps it's the term "hardening" which is the difficult concept. Things which are hard are often brittle and prone to fractures. Perhaps "tempering" might be a better term. This implies resilience, as in bonding a tempered steel edge to softer iron. For a chef's knife or a sword, the hard part keeps its cutting edge for effective slicing, and the softer part absorbs the force which might fracture the knife were it completely hardened.

Certainly the systematic rollout of IPv6 follows a plan of sustainable modularity akin to the historic Unix norms, and it is the planned way of the future, at least so far. The only real question is how long it will be before you must enable IPv6 or be shut out of mainstream knowledge, interests or economies. Competent or not, we will all face up to this, some sooner than others.
--- Talk Mailing List talk@gtalug.org https://gtalug.org/mailman/listinfo/talk
-- Russell Sent by K-9 Mail

On 06/30/2017 08:45 AM, Russell wrote:
"If you fail to plan, you are planning to fail!"
This is the sort of reasoning which provided for IPV6's creation in the first place. The internet is running out of address space. Any networked system, currently hardened or otherwise, has to take the future into account when planning for the present.
According to Vint Cerf, IPv4 was never intended to be released as a public system. It was intended to demonstrate the concepts, and the "official" version would have a much larger address space.
The only real question is how long will it be before you must enable IPv6 or be shut out of mainstream knowledge, interests or economies?
That's already happening in some parts of the world.
Competent or not, we will all face up to this, some sooner than others.
I'm the kind of person who always has to learn. I first read about IPv6 in the April 1995 issue of Byte magazine (I still have that issue on my shelf, along with every other paper issue of Byte, going back to Vol 1 #1, Sept 1975). When I decided it was time to start working with IPv6, I used a 6in4 tunnel, back in May 2010. Also, I spend a lot of time with Wireshark, looking at what's on the wire, to fully understand the various protocols. I can also recommend the "IPv6 Essentials" book from O'Reilly as an excellent reference. I have no use for those who insist IPv4 is good enough, when it hasn't been since the day it became necessary to use NAT. It also has other flaws that have been addressed in IPv6. So, anyone who's in the business but not willing to work with IPv6 should start looking for another job. They'll soon be obsolete.

On Fri, Jun 30, 2017 at 09:34:06AM -0400, James Knott via talk wrote:
According to Vint Cerf, IPv4 was never intended to be released as a public system. It was intended to demonstrate the concepts, and the "official" version would have a much larger address space.
That's what happens when you make a decent beta and show it to the people in charge. They instantly want to use it, without wanting to spend the time to make the final version properly. :) -- Len Sorensen

On 06/30/2017 10:53 AM, Lennart Sorensen wrote:
On Fri, Jun 30, 2017 at 09:34:06AM -0400, James Knott via talk wrote:
According to Vint Cerf, IPv4 was never intended to be released as a public system. It was intended to demonstrate the concepts, and the "official" version would have a much larger address space. That's what happens when you make a decent beta and show it to the people in charge. They instantly want to use it, without wanting to spend the time to make the final version properly. :)
Here's a link about what he said: "It's enough to do an experiment," he said. "The problem is the experiment never ended." http://www.networkworld.com/article/2227543/software/software-why-ipv6-vint-...

| From: James Knott via talk <talk@gtalug.org>
| I have no use for those who insist IPv4 is good enough, when it
| hasn't been since the day it became necessary to use NAT.

Actually NAT was not introduced to deal with a global shortage of IP addresses. It was introduced to get rid of a local shortage.

For example, Rogers@home (the first broadband service for consumers in my area) was marketed as meant for hooking one device (not a server!) to the internet. The theory was that you'd pay extra for each other device and they would get their own IP. This wasn't 100% crazy since most homes that had a computer that could connect to the internet had only one.

I ran NAT (and servers) at home with a Linux gateway because I did already have a LAN. (Unlike most folks, I had globally routable addresses in my LAN but of course Rogers could not route that traffic to me.)

Pretty soon people wanted to run LANs at home BUT they were Microsoft LANs -- not safe in public. So naturally a broadband router-with-NAT made a lot of sense. Now many folks think NATing is the normal and most reasonable form of firewall!

NAT actually damages the internet's original design. Nodes are peers, not clients or servers. But only initiators (clients, roughly speaking) can be behind NAT. So many protocols have had to be butchered to survive NAT.

On 07/01/2017 05:38 PM, D. Hugh Redelmeier via talk wrote:
| From: James Knott via talk <talk@gtalug.org>
| I have no use for those who insist IPv4 is good enough, when it | hasn't been since the day it became necessary to use NAT.
Actually NAT was not introduced to deal with a global shortage of IP addresses. It was introduced to get rid of a local shortage.
For example, Rogers@home (the first broadband service for consumers in my area) was marketed as meant for hooking one device (not a server!) to the internet. The theory was that you'd pay extra for each other device and they would get their own IP. This wasn't 100% crazy since most homes that had a computer that could connect to the internet had only one.
I ran NAT (and servers) at home with a Linux gateway because I did already have a LAN. (Unlike most folks, I had globally routable addresses in my LAN but of course Rogers could not route that traffic to me.)
These days, I get a /56 prefix from Rogers. That's 2^72 addresses, which get split into 256 /64s. Rogers can route the entire /56 prefix to me. My first Internet connection was with io.org, using SLIP, not PPP, over dial up. I had a static address then. I also had Rogers@home.
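The arithmetic behind that /56 can be checked with Python's standard ipaddress module. A quick sketch, using the 2001:db8::/56 documentation prefix as a stand-in for a real Rogers delegation:

```python
import ipaddress

# 2001:db8::/56 is a documentation prefix standing in for the ISP-delegated one.
prefix = ipaddress.ip_network("2001:db8::/56")

# A /56 spans 2^(128-56) = 2^72 addresses...
assert prefix.num_addresses == 2**72

# ...and splits into 2^(64-56) = 256 /64 subnets, one per LAN segment.
lans = list(prefix.subnets(new_prefix=64))
assert len(lans) == 256
print(lans[0])  # the first LAN prefix, 2001:db8::/64
```

Each /64 in turn holds 2^64 addresses, which is why SLAAC-style autoconfiguration assumes a /64 per link.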
Pretty soon people wanted to run LANs at home BUT they were Microsoft LANs -- not safe in public. So naturally a broadband router-with-NAT made a lot of sense.
Back in those days, Microsoft networks did not use IP. I recall reading, while at IBM, what went into making it IP compatible. (I had access to a lot of technical info, when I worked at IBM.)
Now many folks think NATing is the normal and most reasonable form of firewall!
NAT actually damages the internet's original design. Nodes are peers, not clients or servers. But only initiators (clients, roughly speaking) can be behind NAT. So many protocols have had to be butchered to survive NAT.
Yep, you may recall the days when FTP wouldn't work through NAT. However, the address limitation of IPv4 was recognized well over 20 years ago and led to the development of IPv6. As I mentioned, I first heard of it in 1995. You may want to see what Vint Cerf has to say about it. He's been regretting 32 bit addresses for many years. Incidentally, I first heard about NAT when I saw a dial up NAT router, at Computer Fest in 1996. Also, at IBM, I had 5 static IPv4 addresses, 1 for my computer and 4 for testing in my work. I similarly had 5 SNA addresses. Back then, my computer's address was 9.29.146.147.

| From: James Knott via talk <talk@gtalug.org>
| On 07/01/2017 05:38 PM, D. Hugh Redelmeier via talk wrote:
| > For example, Rogers@home (the first broadband service for consumers in my
| > area)

I'm wrong. Rogers Wave was the first in my area (1997 or 1998, I think). It was rebranded in 2000 to Rogers @ Home.

| These days, I get a /56 prefix from Rogers.

I'm not sure why I don't get IPv6 from Rogers. I intend to look into that -- probably I've misconfigured something on my gateway (a PC running CentOS 7; the cable modem is running in bridge mode).

My IPv4 /24 is globally assigned. That's not going to happen with IPv6.

| > Pretty soon people wanted to run LANs at home BUT they were Microsoft LANs
| > -- not safe in public. So naturally a broadband router-with-NAT made a
| > lot of sense.
|
| Back in those days, Microsoft networks did not use IP. I recall
| reading, while at IBM, what went into making it IP compatible. (I had
| access to a lot of technical info, when I worked at IBM.)

Was that still true in 1997? I thought Windows for Workgroups 3.11 had a TCP/IP stack, and Windows 95 must have (but I didn't use Windows).

| > NAT actually damages the internet's original design. Nodes are peers, not
| > clients or servers. But only initiators (clients, roughly speaking) can
| > be behind NAT. So many protocols have had to be butchered to survive NAT.
|
| Yep, you may recall the days when FTP wouldn't work through NAT.

Right. But part of that is that FTP was a very early protocol and was not designed that well. Even an FTP client can't survive NAT without the NATting box having special-purpose code to rewrite things inside the FTP packets.

| However, the address limitation of IPv4 was recognized well over 20
| years ago and led to the development of IPv6. As I mentioned, I first
| heard of it in 1995. You may want to see what Vint Cerf has to say
| about it. He's been regretting 32 bit addresses for many years.

Of course IPv6 is a Good Thing. But change is hard, especially if one sees no immediate personal benefit.

I think that it is even worse that we don't use DNSSec. The security implications of not securing DNS seem enormous. And while listing currently lost causes, I really wish we'd gotten to Opportunistic Encryption.

| Incidentally, I first heard about NAT when I saw a dial up NAT router,
| at Computer Fest in 1996.

I miss Computer Fests.

| Also, at IBM, I had 5 static IPv4 addresses, 1 for my computer and 4 for
| testing in my work. I similarly had 5 SNA addresses. Back then, my
| computer's address was 9.29.146.147.

I got my /24 before I had a broadband connection. I think that it was in the late 1980s when I was pondering what IP addresses to stick on my 2-node LAN. I didn't want to use an RFC 1918 address (this was before RFC 1918 or even 1597). So I naively asked for some IPs and got them. It was many years before they were actually routed from the internet to my LAN.

On July 2, 2017 10:29:00 AM EDT, "D. Hugh Redelmeier via talk" <talk@gtalug.org> wrote:
| From: James Knott via talk <talk@gtalug.org>
| On 07/01/2017 05:38 PM, D. Hugh Redelmeier via talk wrote:
<snip previous>
And while listing currently lost causes, I really wish we'd gotten to Opportunistic Encryption.
Ok, this made me chuckle. One of the first questions I typically get when I get into a computer discussion with a non-techie parent is, "What should my child learn as part of the basics?" I always said try a machine assembly language. My answer these days is a one-time pad.
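For what it's worth, the one-time pad is simple enough to sketch in a few lines of Python. This is a toy illustration of the XOR construction, not a vetted implementation; a real pad must be truly random, kept secret, never reused, and at least as long as the message:

```python
import secrets

def otp_encrypt(plaintext: bytes, key: bytes) -> bytes:
    """XOR each message byte with the corresponding pad byte."""
    assert len(key) >= len(plaintext), "pad must be at least message length"
    return bytes(m ^ k for m, k in zip(plaintext, key))

# XOR is its own inverse, so applying the same pad twice
# recovers the original message.
message = b"attack at dawn"
pad = secrets.token_bytes(len(message))  # one-time, truly random key
ciphertext = otp_encrypt(message, pad)
assert otp_encrypt(ciphertext, pad) == message
```

The teaching value is that every security property hangs on key management, which is exactly the social problem discussed elsewhere in this thread.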
| Incidentally, I first heard about NAT when I saw a dial up NAT router,
| at Computer Fest in 1996.
I miss Computer Fests.
| Also, at IBM, I had 5 static IPv4 addresses, 1 for my computer and 4 for
| testing in my work. I similarly had 5 SNA addresses. Back then, my
| computer's address was 9.29.146.147.
I got my /24 before I had a broadband connection. I think that it was in the late 1980s when I was pondering what IP addresses to stick on my 2-node LAN. I didn't want to use an RFC 1918 address (this was before RFC 1918 or even 1597). So I naively asked for some IPs and got them. It was many years before they were actually routed from the internet to my LAN.

On 07/02/2017 11:01 AM, Russell via talk wrote:
And while listing currently lost causes, I really wish we'd gotten to Opportunistic Encryption.
Ok, this made me chuckle.
One of the first questions I typically get when I get into a computer discussion with a non-techie parent is, "What should my child learn as part of the basics?" I always said try a machine assembly language.
I recently watched a show on CNN about how the Russians interfered with the U.S. election, including stealing thousands of emails. If they'd just used X.509 encryption, that wouldn't have been an issue. Every modern email app supports it, yet it's generally not used.

On July 2, 2017 11:06:35 AM EDT, James Knott via talk <talk@gtalug.org> wrote:
On 07/02/2017 11:01 AM, Russell via talk wrote:
And while listing currently lost causes, I really wish we'd gotten to Opportunistic Encryption.
Ok, this made me chuckle.
One of the first questions I typically get when I get into a computer discussion with a non-techie parent is, "What should my child learn as part of the basics?" I always said try a machine assembly language.
I recently watched a show on CNN about how the Russians interfered with the U.S. election, including stealing thousands of emails. If they'd just used X.509 encryption, that wouldn't have been an issue. Every modern email app supports it, yet it's generally not used.
It took a lot of highway deaths before car manufacturers were compelled to make seatbelts standard from the factory. It took another generation and a lot of physical hacks, i.e. no start till buckled up, to get people to use them.

It's open to everyone to generate or use a keysigning authority. Perhaps that was the problem with Hillary Clinton's use of a home network mailserver? That she signed her own keys privately but then did the government's work using them. This would clearly be outside of government and even any reasonable business policy. Notwithstanding that, the least amount of understanding of any network topology is a tool which may be used in formulating an exploit.

Linking cryptography needs a strong enforcement of objectives and policy. X.509 is a policy objective; a part of an abstraction which has established a protocol of verification under standardly applied protocols. Everyone ignores standards at their own peril.

In effect, there is truth to the statement that cloud services are more secure. They are generally large enough to have enabled rapid response protocols for such eventualities. Interac e-Transfer was down locally last week. No doubt some recent TLS issues are to blame. For economic and security reasons the general public will probably not know whether there was actual financial loss or what network hardware topology has been modified.

On 07/03/2017 08:56 AM, Russell wrote:
It took a lot of highway deaths before car manufacturers were compelled to make seatbelts standard from the factory. It took another generation and a lot of physical hacks, i.e. no start till buckled up, to get people to use them.
It's open to everyone to generate or use a keysigning authority.
Yep. I get mine from cacert.org. The problem is most people don't know about them or bother with getting them. In large organizations, directory servers can provide them. I worked at IBM Canada HQ, back in the late '90s. One of the first things I had to do, when I started, was get my email certificates. This was on Lotus Notes. Any LDAP server should be able to support X.509 certificates, which makes them easier to use. Otherwise, you have to manually exchange them, by sending signed email.
Perhaps that was the problem with Hillary Clinton's use of a home network mailserver? That she signed her own keys privately but then did the government's work using them. This would clearly be outside of government and even any reasonable business policy.
My understanding is that they were plain text email. If they had been encrypted, then this wouldn't have been such a problem. Also, it was the DNC's server, along with another Democrat one, that was hacked, not Hillary's. Regardless, she shouldn't have been using a personal server for government business. But that pales in comparison with Trump's use of Twitter.

On July 3, 2017 9:15:23 AM EDT, James Knott via talk <talk@gtalug.org> wrote:
On 07/03/2017 08:56 AM, Russell wrote:
It took a lot of highway deaths before car manufacturers were compelled to make seatbelts standard from the factory. It took another generation and a lot of physical hacks, i.e. no start till buckled up, to get people to use them.
It's open to everyone to generate or use a keysigning authority.
Yep. I get mine from cacert.org. The problem is most people don't know about them or bother with getting them. In large organizations, directory servers can provide them. I worked at IBM Canada HQ, back in the late '90s. One of the first things I had to do, when I started, was get my email certificates. This was on Lotus Notes. Any LDAP server should be able to support X.509 certificates, which makes them easier to use. Otherwise, you have to manually exchange them, by sending signed email.
Perhaps that was the problem with Hillary Clinton's use of a home network mailserver? That she signed her own keys privately but then did the government's work using them. This would clearly be outside of government and even any reasonable business policy.
My understanding is that they were plain text email. If they had been encrypted, then this wouldn't have been such a problem. Also, it was the DNC's server, along with another Democrat one, that was hacked, not Hillary's. Regardless, she shouldn't have been using a personal server for government business. But that pales in comparison with Trump's use of Twitter.
Welcome to the new age of digital populism, where Twitter upvotes are an acceptable metric of facts. I think Donald Trump is living proof that you shouldn't do something on the net just because you are able to. If he was just Trump the TV actor or real estate developer, it wouldn't be such a big deal. The fact that Trump is Commander In Chief, in charge of the US's entire system of both physical and SIGINT defences, is probably the strongest argument I can make for blockchaining governments as a part of any fundamental approach to a nation's security.

Twitter is a distraction from good governance, yet every politician seems to use it to build their personal brand. Worse, they actually use taxpayer-funded resources under the guise of getting the word out. As an aside, apparently the TTC has two full-time employees monitoring Twitter for negative comments. The spin doctoring of Twitter-posted complaints can start even before an involved worker gets back from lunch.

Rightly or wrongly, waiting for facts is an essential part of problem solving. By eliminating the wait time for collection and evaluation of information, and then verifying the correctness of that information as factual or other, Twitter cloaks legal innuendo in a veil of truth, expressing opinion as fact when it is no such thing.

The more confused people are in a crowd, the easier it is for pickpockets to work. This is a physical reality. The same holds true for virtual spaces like Twitter. Bob Dylan once said you shouldn't let your TV get your kicks for you. Twitter is about as useful as TV in that respect. As much as TV abuse only affects your own perception or misperceptions of facts, with Twitter you can pass that state along, virally. The grifters, hucksters and other charlatans seem to be having an internet field day on Twitter and other so-called social media right now. Hopefully this can change.
But for now, don't ever assume that Donald Trump is using Twitter in order to create clarity rather than confusion. His approach to this medium is just the digitized version of his business practices. Confusion is the oldest and most effective sales tactic in the world. The US reaffirmed its commitment to maintaining that status quo in governance in their last election, which made hucksterism a desirable characteristic for a US president. The unheralded truth is that there is profit in confusion as an actual product.

Also, contrary to the president's assertions, there is no such thing as fake news. News is a product for sale and bears little resemblance to true facts and upheld truths. Those ideas are subjective in nature and each of us has to make that decision for ourselves. IMHO Twitter is the last place on the planet to look for truth.

It's like when I was a kid in rural NB and we shared a party line phone. Our ring was two long, one short. The family rule was you never gave out personal information over the phone, except for emergencies, because anyone else on the line, and not just the operator, could be listening in. Lots of untrue stories made the rounds of whole neighbourhoods based on "party line" facts before the truth was known. That to me is Twitter in a nutshell: a swamp of ego-reinforcing braggadocio and untested facts, at least when used by businesses and politicians.

All soapboxing aside, for me it's about deciding the who, what, where, why and how of metrical analysis in trust of facts. Twitter is so far down my trust list, I don't have an account or even read most of the Twitter that is fed to my email account by people who do use it, like my local politicians after I sign up for updates on community issues I am following.

On 2017-07-03 08:56 AM, Russell via talk wrote:
It's open to everyone to generate or use a keysigning authority.
Unfortunately, that's a technical solution for a social problem: keys and authorities need to be something a user (almost) never needs to worry about. Mail clients need to come with relevant keys to verify most other users' identity, or the uptake of secure e-mail will be too low to reach critical mass.

I've worked with X.509-based signing in two very different domains, and in each there have been deep problems that limit the value of the process incredibly:

* in the construction industry, X.509-signed secure PDFs are used to move final drawings and contractual communications (‘transmittals’) around. Unfortunately, many of these are only verifiable within the issuer's company or between members of the same trade associations, as companies and associations act as signing authorities. Many users aren't aware that scans of electronically signed documents are no longer electronically signed.

* in amateur radio, the US hobbyist/lobby group ARRL maintains a full X.509 infrastructure for secure collection and verification of radio contest logs. The maintainers of this system (‘Logbook of the World’) have done a lot to make the process simple, but there are still roadblocks such as keys expiring every few years. It doesn't help that the majority of radio hams who do radio contests are very technologically conservative, and received wisdom has it that Logbook of the World is hard to use and unreliable.

So while everyone could get secure keys, too few people do it to make the process worthwhile.

cheers,
Stewart

On July 4, 2017 9:03:36 PM EDT, "Stewart C. Russell via talk" <talk@gtalug.org> wrote:
On 2017-07-03 08:56 AM, Russell via talk wrote:
It's open to everyone to generate or use a keysigning authority.
Unfortunately, that's a technical solution for a social problem: keys and authorities need to be something a user (almost) never needs to worry about. Mail clients need to
As a kid I never had a key to our front door. It was never locked. Our entire legal system of governance is a technical solution to social problems. Most people don't realize that ISO standards compliance is voluntary and only enforceable in measures of associated trust, as you point out in your examples below. I think that when James raised X.509 certificate authority, within the scope of email hacking of politicians, he was saying that the lack of understanding of established trust mechanisms is a weak link in government processes. Individual freedom to not generate keys for personal email is quite a bit different than email used in business and in government.
come with relevant keys to verify most other users' identity, or the uptake of secure e-mail will be too low to reach critical mass.
I've worked with X.509-based signing in two very different domains, and in each there have been deep problems that limit the value of the process incredibly:
* in the construction industry, X.509-signed secure PDFs are used to move final drawings and contractual communications (‘transmittals’) around. Unfortunately, many of
Not to be trite, but these types of documents are limited in scope and the loss of security is trivial to the national interest. Any breaches which are discovered to be a result of these insecure transmissions are dealt with in civil courts.
these are only verifiable within the issuer's company or between members of the same trade associations, as companies and associations act as signing authorities. Many users aren't aware that scans of electronically signed documents are no longer electronically signed.
* in amateur radio, the US hobbyist/lobby group ARRL maintains a full X.509 infrastructure for secure collection and verification of radio contest logs. The maintainers of this system (‘Logbook of the World’) have done a lot to make the process simple, but there are still roadblocks such as keys expiring every few years. It doesn't help that the majority of radio hams who do radio contests are very technologically conservative, and received wisdom has it that Logbook of the World is hard to use and unreliable.
So while everyone could get secure keys, too few people do it to make the process worthwhile.
If a friend emailed me something and I was worried, I could say, this is sensitive, delete it, we have to deal with it face to face. In business or government I could say HUSH, you're leaking secrets, and inform SYSOPS, who would then review the incident and either remind us of policy or move to remediate the factors which allowed the potential leak.

I think in all cases it's about economy of scale. Groups of people using internet networks all either set or ignore threat levels for themselves. You would hope that COMSEC in government is somewhat higher than: gee, I left the front door wide open, I hope no one goes in and takes something important from me before I get back. For a sitting US president that COMSEC process is impeachment.

Any grifter could tell you the problem with Trump's Twitter bloviation. It's not so much what he says, but that he speaks without knowledge or understanding, thus revealing his personality. Couple that with known past issues relating to emails and leaks, and they have an understanding of the topology they are going to grift. It's been pointed out that if Nixon had lied the way Trump is lying he would never have been impeached.

I'm a trained typist. I have used dictation tape devices. I have accidentally erased bits of recordings cycling back and forth on the tape while working on a research project. If I have done that, you can be sure that I believe it could also have happened to Rosemary Woods while she was transcribing Nixon.

Blockchain government communication over IPv6. I wonder what the edgepoint of trust is in that case?
cheers, Stewart

On 07/02/2017 10:29 AM, D. Hugh Redelmeier via talk wrote:
| From: James Knott via talk <talk@gtalug.org>
| On 07/01/2017 05:38 PM, D. Hugh Redelmeier via talk wrote:
| > For example, Rogers@home (the first broadband service for consumers in my | > area)
I'm wrong. Rogers Wave was the first in my area (1997 or 1998, I think). It was rebranded in 2000 to Rogers @ Home.
| These days, I get a /56 prefix from Rogers.
I'm not sure why I don't get IPv6 from Rogers. I intend to look into that -- probably I've misconfigured something on my gateway (a PC running CentOS 7; the cable modem is running in bridge mode).
Call Rogers. IPv6 is available to everyone, but some modems may have to be replaced. If you use a separate router, it has to support DHCPv6-PD, as that's how the prefix is assigned. I use a refurb computer running pfSense. BTW, when I got a new modem, a little over a year ago, it was part of a bundle that, while providing pretty much the same service, cost me about $50 less per month.
My IPv4 /24 is globally assigned. That's not going to happen with IPv6.
Actually it does. I have no problem reaching computers on my LAN when I'm elsewhere. With Rogers you can have a /64 to /56 all to yourself and they are all globally unique and reachable from anywhere in the world. BTW, with the Rogers modem/routers in router mode, you only get a /64. With a separate router, you can select any prefix size between /64 (2^64 addresses) and /56 (2^72). The Rogers cell network also supports IPv6 and, with the newer phones, even tethered devices get IPv6 addresses.
| > Pretty soon people wanted to run LANs at home BUT they were Microsoft LANs
| > -- not safe in public. So naturally a broadband router-with-NAT made a
| > lot of sense.
|
| Back in those days, Microsoft networks did not use IP. I recall
| reading, while at IBM, what went into making it IP compatible. (I had
| access to a lot of technical info, when I worked at IBM.)
Was that still true in 1997? I thought Windows for Workgroups 3.11 had a TCP/IP stack, and Windows 95 must have (but I didn't use Windows).
Yes, up to 3.x, TCP/IP was an option, because ol' Billy didn't think the Internet was going anywhere. It was standard in W95 and later.
| > NAT actually damages the internet's original design. Nodes are peers, not
| > clients or servers. But only initiators (clients, roughly speaking) can
| > be behind NAT. So many protocols have had to be butchered to survive NAT.
| >
| Yep, you may recall the days when FTP wouldn't work through NAT.
Right. But part of that is that FTP was a very early protocol and was not designed that well. Even an FTP client can't survive NAT without the NATting box having special-purpose code to rewrite things inside the FTP packets.
Passive vs active mode.
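That distinction is exactly where NAT bites: a passive-mode (227) reply embeds the server's IP address and data port inside the message body, so a NAT gateway must rewrite the payload, not just the packet headers. A rough sketch of the parsing involved (my own illustration, not code from any particular FTP implementation):

```python
import re

def parse_pasv(reply: str):
    """Extract host and data port from a 227 'Entering Passive Mode' reply.

    The six numbers are h1,h2,h3,h4,p1,p2: four address bytes and a
    16-bit port sent as p1*256 + p2. Because the address travels in the
    payload, plain NAT (which rewrites only headers) broke FTP.
    """
    m = re.search(r"\((\d+),(\d+),(\d+),(\d+),(\d+),(\d+)\)", reply)
    if not m:
        raise ValueError("not a valid 227 reply")
    h1, h2, h3, h4, p1, p2 = (int(x) for x in m.groups())
    return f"{h1}.{h2}.{h3}.{h4}", p1 * 256 + p2

host, port = parse_pasv("227 Entering Passive Mode (192,168,1,10,19,137)")
assert (host, port) == ("192.168.1.10", 5001)  # 19*256 + 137
```

Active mode is worse still: the client sends its own address in a PORT command and then expects an inbound connection, which a NAT box must both rewrite and expect.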
| However, the address limitation of IPv4 was recognized well over 20
| years ago and led to the development of IPv6. As I mentioned, I first
| heard of it in 1995. You may want to see what Vint Cerf has to say
| about it. He's been regretting 32 bit addresses for many years.
Of course IPv6 is a Good Thing. But change is hard, especially if one sees no immediate personal benefit.
I think that it is even worse that we don't use DNSSec. The security implications of not securing DNS seem enormous.
As I understand it, that's coming. In another thread (possibly openSUSE list) I was discussing SMTP ports. One person was claiming only port 25 was required, with StartTLS, but due to security concerns, the move to full TLS & DNSSec is recommended.
And while listing currently lost causes, I really wish we'd gotten to Opportunistic Encryption.
| Incidentally, I first heard about NAT when I saw a dial up NAT router,
| at Computer Fest in 1996.
I miss Computer Fests.
| Also, at IBM, I had 5 static IPv4 addresses, 1 for my computer and 4 for
| testing in my work. I similarly had 5 SNA addresses. Back then, my
| computer's address was 9.29.146.147.
I got my /24 before I had a broadband connection. I think that it was in the late 1980s when I was pondering what IP addresses to stick on my 2-node LAN. I didn't want to use an RFC 1918 address (this was before RFC 1918 or even 1597). So I naively asked for some IPs and got them. It was many years before they were actually routed from the internet to my LAN.
Back in the early days, it wasn't hard to get multiple addresses. It was later, with the shortage looming, that they got stingy with addresses. Back in the dial up days, I originally had a static address, but one of the reasons for ISPs moving to dynamic addresses was to free them up, when someone disconnected. BTW, contrary to popular belief, DHCP was not used with dial up. You just got whatever address was assigned to the port you connected to. Many years ago, I set up a dial up "terminal server", on Red Hat, and had to configure the address for the port to use, along with proxy arp.

| From: James Knott via talk <talk@gtalug.org>
| On 07/02/2017 10:29 AM, D. Hugh Redelmeier via talk wrote:
| > I'm not sure why I don't get IPv6 from Rogers. I intend to look into
| > that -- probably I've misconfigured something on my gateway (a PC
| > running CentOS 7; the cable modem is running in bridge mode).
|
| Call Rogers. IPv6 is available to everyone, but some modems may have to
| be replaced. If you use a separate router, it has to support DHCPv6-PD,
| as that's how the prefix is assigned. I use a refurb computer running
| pfSense.

I just assume that dhclient knows how to do this. But I'll have to look into it. My service is new (a month or two) and so my modem must be up to date. I'm just using it as a modem, not a router. Anyway, my starting point is seeing if my system is doing anything wrong before I ask Rogers.

| BTW, when I got a new modem, a little over a year ago, it was part of a
| bundle that, while providing pretty much the same service, cost me about
| $50 less per month.

Rogers and Bell are or were in a competitive spasm. I get gigabit internet and modest cable TV for $100/month on a two-year contract. Bell offered a similar contract but there is no Fibre To The Home in my neighbourhood. Service is limited to VDSL2 at 50 megabits. The competition seems to have lessened at the moment. Bell offers a 2 year contract with a good price for the first year. The ads are worded misleadingly so you won't notice that the second year is twice as expensive.

In any case, neither Bell nor Rogers know how to route my IP addresses into my home, so I have to use a third party ISP that uses Bell's last mile. (I want two connections but only one routes my IPs.)

| > My IPv4 /24 is globally assigned. That's not going to happen with
| > IPv6.
|
| Actually it does. I have no problem reaching computers on my LAN when
| I'm elsewhere. With Rogers you can have a /64 to /56 all to yourself
| and they are all globally unique and reachable from anywhere in the world.
By "Globally assigned" I meant "assigned to me directly by (the precursor to) ARIN". That makes it portable: I can keep the IP addresses when I move between service providers.

Globally routable addresses are now assigned by a process like feudalism: IANA gives addresses to RIPE, ARIN, etc. Internet companies on the backbone get addresses from RIPE, ARIN, etc. (depending on their geographic location). ISPs get subassignments from their upstream providers. Apply this last rule recursively.

So if you, an edge user, get IP addresses, they are not yours but are merely loaned to you by your upstream.

If your system has multiple internet connections and your upstreams are willing to support this, perhaps you can get your own addresses assigned (and an ASN -- something I don't have).

The smallest global assignment of IPv4 addresses is /24 (256 addresses). This is to reduce the size of the routing tables in core routers. They even grumble about /24 being too small and burdensome. I don't know about IPv6.

| BTW, with the Rogers modem/routers in router mode, you only get a /64.
| With a separate router, you can select any prefix size between /64
| (2^64 addresses) and /56 (2^72 addresses).

I did not know that.

| > I think that it is even worse that we don't use DNSSec. The security
| > implications of not securing DNS seem enormous.
|
| As I understand it, that's coming. In another thread (possibly openSUSE
| list) I was discussing SMTP ports. One person was claiming only port 25
| was required, with StartTLS, but due to security concerns, the move to
| full TLS & DNSSec is recommended.

Secure flows require encryption AND authentication. Email mostly seems to travel over encrypted paths, but the authentication appears to be dodgy. My nodes just have self-signed certificates, and that seems to work. DNSSec could provide better authentication, with the right convention. There surely are such conventions, but I'm not versed in them. But DNSSec is able to prevent all spoofing of DNS.
Except by someone who can subvert the root.

| Back in the early days, it wasn't hard to get multiple addresses.

Do you mean a /24 from ARIN or something smaller from your upstream?

| addresses. Back in the dial up days, I originally had a static address,
| but one of the reasons for ISPs moving to dynamic addresses was to free
| them up, when someone disconnected.

Having a static IP address for an intermittent connection wasn't too important.

Broadband for the masses, from Bell and Rogers, was meant for consumers. Static IP addresses were used for price discrimination: organizations that wanted static IP addresses had to pay a lot more even though it cost Bell and Rogers almost nothing. Remember: since broadband connections were essentially always on, they always used one IP address.

On 07/03/2017 01:44 AM, D. Hugh Redelmeier via talk wrote:
In any case, neither Bell nor Rogers know how to route my IP addresses into my home so I have to use a third party ISP that uses Bell's last mile. (I want two connections but only one routes my IPs.)
| > My IPv4 /24 is globally assigned. That's not going to happen with
| > IPv6.
|
| Actually it does. I have no problem reaching computers on my LAN when
| I'm elsewhere. With Rogers you can have a /64 to /56 all to yourself
| and they are all globally unique and reachable from anywhere in the world.
By "Globally assigned" I meant "Assigned to me directly by (the precursor to) ARIN". That makes it portable: I can keep the IP addresses when I move between service providers.
Globally Routable addresses are now assigned by a process like feudalism: IANA gives addresses to RIPE, ARIN, etc. Internet companies on the backbone get addresses from RIPE, ARIN, etc (depending on their geographic location). ISPs get subassignments from their upstream providers. Apply this last rule recursively.
So if you, an edge user, get IP addresses, they are not yours but are merely loaned to you by upstream.

That's correct. My /56 is part of Rogers' block. I don't know why you need your own block these days. With IPv6 it's very easy to change address blocks when you change providers. Just start up the new connection and make it primary. Then update DNS, and after a while disconnect the old service.
If your system has multiple internet connections and your upstreams are willing to support this, perhaps you can get your own addresses assigned (and an ASN -- something I don't have).
Then you'd need a provider that's willing to talk some routing protocol, such as OSPF, with you. I don't know that Bell or Rogers would, at least not at the consumer level.
The smallest global assignment of IPv4 addresses is /24 (256 addresses). This is to reduce the size of the routing tables in core routers. They even grumble about /24 being too small and burdensome.
Yep, I remember that crash a few years ago, when the routing tables got too big. With IPv6, the address blocks are handed out in a hierarchical manner geographically to reduce the size of routing tables.
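The prefix arithmetic running through this thread is easy to check: a prefix of length n out of a b-bit address covers 2^(b-n) addresses. A quick sketch in plain Python (nothing from the thread assumed beyond the prefix lengths mentioned):

```python
# Addresses covered by a prefix: 2 ** (address_bits - prefix_len)
def prefix_size(prefix_len: int, address_bits: int) -> int:
    return 2 ** (address_bits - prefix_len)

print(prefix_size(24, 32))    # IPv4 /24 -> 256 addresses
print(prefix_size(64, 128))   # IPv6 /64 -> 2**64 addresses
print(prefix_size(56, 128))   # IPv6 /56 -> 2**72 addresses
```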
| Back in the early days, it wasn't hard to get multiple addresses.
Do you mean /24 from ARIN or something smaller from your upstream?
No, I meant they were easy to get because there weren't a lot of users, so no shortage.
| addresses. Back in the dial up days, I originally had a static address, | but one of the reasons for ISPs moving to dynamic addresses was to free | them up, when someone disconnected.
Having a static IP address for an intermittent connection wasn't too important.
Also, I don't think SLIP supported automatic address assignment, so static only.
Broadband for the masses, from Bell and Rogers, was meant for consumers. Static IP addresses were used for price discrimination: organizations that wanted static IP addresses had to pay a lot more even though it cost Bell and Rogers almost nothing. Remember: since broadband connections were essentially always on, they always used one IP address.
And that means some people are forced to live behind carrier-grade NAT. Incidentally, an excellent book is "IPv6 Essentials" from O'Reilly.

On 07/03/2017 01:44 AM, D. Hugh Redelmeier via talk wrote:
I just assume that dhclient knows how to do this. But I'll have to look into it.
Actually, this is something that caused me problems. I used to use openSUSE for my firewall, but it couldn't handle DHCPv6-PD. As a result, I switched to pfSense for my firewall. However, a Linux computer should be able to get an IPv6 address for itself when connected directly to the modem. The "PD" refers to prefix delegation, and it's how a router is assigned the LAN prefix.

BTW, prior to getting IPv6 from Rogers, I used a 6in4 tunnel to get IPv6 from a tunnel broker. My openSUSE router/firewall worked fine with this. I also had a /56 prefix then. Some people recommend handing out /48 (2^80 addresses) prefixes to everyone. There are enough of those to give every person on earth well over 4000 of them, and this is with only 1/8th of the entire IPv6 address space allocated for global unicast addresses.
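The "/48 for everyone" claim above is a quick back-of-envelope check (the population figure is my rough assumption, not from the thread):

```python
# 2000::/3 -- 1/8 of the IPv6 space -- is the global unicast range.
# Count how many /48 prefixes fit inside a /3.
forty_eights_in_global_unicast = 2 ** (48 - 3)

population = 7_500_000_000  # rough 2017 world population (assumed)
per_person = forty_eights_in_global_unicast // population
print(per_person)  # comfortably over 4000 /48s per person
```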

On 07/03/2017 01:44 AM, D. Hugh Redelmeier via talk wrote:
| > I'm not sure why I don't get IPv6 from Rogers. I intend to look into
| > that -- probably I've misconfigured something on my gateway (a PC
| > running CentOS 7; the cable modem is running in bridge mode).
|
| Call Rogers. IPv6 is available to everyone, but some modems may have to
| be replaced. If you use a separate router, it has to support DHCPv6-PD,
| as that's how the prefix is assigned. I use a refurb computer running
| pfSense.
I just assume that dhclient knows how to do this. But I'll have to look into it.
My service is new (a month or two) and so my modem must be up to date. I'm just using it as a modem, not a router.
Anyway, my starting point is seeing if my system is doing anything wrong before I ask Rogers
You can try connecting a computer directly to the modem. It should get an IPv6 address.

On Mon, Jul 03, 2017 at 01:44:52AM -0400, D. Hugh Redelmeier via talk wrote:
I just assume that dhclient knows how to do this. But I'll have to look into it.
Not likely. I think you want dhcp6c or something like that. -- Len Sorensen
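For reference, a sketch of both approaches; the interface names are placeholders, and the flags/config syntax are as I recall them from the ISC dhclient and wide-dhcpv6 documentation, so verify locally before relying on them:

```shell
# ISC dhclient: -6 selects DHCPv6, -P requests prefix delegation
dhclient -6 -P eth0

# wide-dhcpv6-client: dhcp6c reads /etc/wide-dhcpv6/dhcp6c.conf, e.g.:
#   interface eth0 { send ia-pd 0; };
#   id-assoc pd 0 { prefix-interface eth1 { sla-id 1; sla-len 8; }; };
dhcp6c eth0
```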

On July 1, 2017 5:38:14 PM EDT, "D. Hugh Redelmeier via talk" <talk@gtalug.org> wrote:
| From: James Knott via talk <talk@gtalug.org>
| I have no use for those who insist IPv4 is good enough, when it | hasn't been since the day it became necessary to use NAT.
Actually NAT was not introduced to deal with a global shortage of IP addresses. It was introduced to get rid of a local shortage.
For example, Rogers@home (the first broadband service for consumers in my area) was marketed as meant for hooking one device (not a server!) to the internet. The theory was that you'd pay extra for each other device and they would get their own IP. This wasn't 100% crazy since most homes that had a computer that could connect to the internet had only one.
I ran NAT (and servers) at home with a Linux gateway because I did already have a LAN. (Unlike most folks, I had globally routable addresses in my LAN but of course Rogers could not route that traffic to me.)
Pretty soon people wanted to run LANs at home BUT they were Microsoft LANs -- not safe in public. So naturally a broadband router-with-NAT made a lot of sense.
Now many folks think NATing is the normal and most reasonable form of firewall!
NAT actually damages the internet's original design. Nodes are peers, not clients or servers. But only initiators (clients, roughly speaking) can be behind NAT. So many protocols have had to be butchered to survive NAT.
I came across this memo of general interest to this topic. Section 4 in particular.

https://tools.ietf.org/rfc/rfc4864.txt

4. Using IPv6 Technology to Provide the Market Perceived Benefits of NAT

   The facilities in IPv6 described in Section 3 can be used to provide
   the protection perceived to be associated with IPv4 NAT. This section
   gives some examples of how IPv6 can be used securely.
--- Talk Mailing List talk@gtalug.org https://gtalug.org/mailman/listinfo/talk
-- Russell Sent by K-9 Mail

On 07/02/2017 09:08 AM, Russell via talk wrote:
I came across this memo of general interest to this topic. Section 4 in particular.
https://tools.ietf.org/rfc/rfc4864.txt
4. Using IPv6 Technology to Provide the Market Perceived Benefits of NAT
The facilities in IPv6 described in Section 3 can be used to provide the protection perceived to be associated with IPv4 NAT. This section gives some examples of how IPv6 can be used securely.
Yep. While I haven't read that RFC, I knew that a long time ago. The sole reason NAT provides protection is its stateful nature. That's set up when an outgoing connection is made, allowing the reverse traffic. Beyond that, you have to have some means of specifically allowing incoming traffic. This is no different from a firewall that has default deny-all and rules added to permit access.

Of course, not using NAT means you can access the same service on multiple devices without changing port numbers, etc. On top of this, NAT requires hacks, such as VTUN, to get around the problems it causes. This is even before we get to those who are behind carrier-grade NAT and have no means of reaching their own network from the outside.
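The stateful behaviour described above is exactly what a default-deny netfilter ruleset gives you without any NAT; a minimal sketch (port 22 is just an example, and these rules need root):

```shell
iptables -P INPUT DROP                                 # default deny inbound
iptables -A INPUT -i lo -j ACCEPT                      # allow loopback
iptables -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
                                                       # return traffic for outgoing flows
iptables -A INPUT -p tcp --dport 22 -j ACCEPT          # explicit hole for SSH
```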

On 06/29/2017 06:18 PM, Ansar Mohammed wrote:
Oh, and that growing portion of the internet that's IPv6 only is primarily China.
Actually, Belgium is in the lead, at around 35%. However, in many parts of the world, including, but not limited to, China, IPv6 is the only thing available because of the way IPv4 was handed out.
What's your business reason for the additional risk of IPv6?
Given IPv6 has some additional security features, perhaps we should be asking about the risk of IPv4.

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256

Mr. Mohammed,

Thanks for sharing your thoughts.

At no time did you ever state that you refused to use IPv6. You actually stated that you do, in fact, use IPv6. Neither did you ever state that IPv4 is "good enough."

Aside from IPv4 vs IPv6, do you have any suggestions for hardening a Linux system?

kind regards,
Daniel Villarreal
PGP key 2F6E 0DC3 85E2 5EC0 DA03 3F5B F251 8938 A83E 7B49

On 06/29/2017 06:18 PM, Ansar Mohammed via talk wrote:
Again, please follow the thread, this is not about competency or capability on IPv6.
This is a simple question on hardening a Linux system. My entire network runs IPv6 also. But my home systems do not need to be hardened.
There have been many IPv6-only bugs and exploits, including last year's IPv6 ping of death on Cisco. https://tools.cisco.com/security/center/content/CiscoSecurityAdvisory/cisco-sa-20160525-ipv6
The stack simply isn't as battle tested as IPv4.
Oh, and that growing portion of the internet that's IPv6 only is primarily China.
What's your business reason for the additional risk of IPv6?
Does your application support IPv6?
Has your application been tested with IPv6?
Do you have users that are IPv6 only?
If you don't need it on a hardened system, you are just adding another attack vector for no good reason. ...
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v2

iQEzBAEBCAAdBQJZVnAAFhx5b3VjYW5saW51eEBnbWFpbC5jb20ACgkQ8lGJOKg+
e0kIaAf/Yw4QQQweyh0NEX2oro/YrvsDZU3r8zPKL/NXc42w38Q9imJr6J6Ue+Si
6jQ5hZRO0O29Q6Z0DcA1nAg+jOhVBl+cK+TF4RVlxDvAIM55WtwuouQaT5TZwXb/
PRLkR6ZzNnmiIb37jbe0hZSK9CYmI/0wPwyCB5JmrlUNanMA93i4AjBBgKKD24qm
w0ph6SPscSv44BkynkOS8Qf5yMZsGt8JjOs19HvJ5AlwUx67aLHIrBvF8SWKw2/W
22Md8cey26LdxChdXR1L7pDCyjxw/OBtTX0Q78ypxucYi7zqx3CVP8HIGO1dZX1T
Othjz6Gq6UE6nYRIJupcfAA295nbPg==
=lWC7
-----END PGP SIGNATURE-----

UFW, fail2ban, and Ansible have all been mentioned, which gives me an opportunity to mention a Hugh-like "war story" related to hardening.

It appears that Debian 9 (aka "stretch," which is now "stable") included a stupid-ass version of fail2ban. Our cloud machines have always included a kitchen-sink jail.local, including blocks for PHP and MySQL (although we have yet to use either of them), because fail2ban always just logged warnings on start-up and went about its business. The version of fail2ban in Debian 9 doesn't warn when it finds a non-existent log file in the config: it errors out completely and refuses to start. This is a known issue that the devs "just haven't had time to fix." (I blame the fail2ban devs less than the Debian devs, who shouldn't have included this version of fail2ban.)

Anyway, one of the options we looked at as a replacement/supplement/alternative was UFW's rate-limiting feature. But since UFW is meant to be simple, it's totally unconfigurable. After six connections in 30 seconds, an IP is blocked for 30 seconds or a minute. Note - I didn't say "six failed connections," which I had read by implication because that would have been sane (at least for SSH). Imagine using Ansible with that rule being enforced. For those not familiar, Ansible makes several SSH connections PER SECOND. Ansible gets itself banned by the time it's done "gathering facts" (essentially the setup phase, before it does any of the stuff you were trying to achieve in the script). We disabled that little fix in a hurry.

For the person with the original question: UFW is fairly good, although simplistic. But don't use its rate-limiting feature. Fail2ban seems to be a reasonable product, but you'll have to hand-craft your configs - no cutting-and-pasting a "secure version" from the internet, at least not if you're using Debian (which I would also recommend). You should also keep a close eye on the security advisories for your OS (and apply them).
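For what it's worth, a jail.local in the spirit of that war story enables only jails whose log files actually exist and whitelists your steady IPs; this is a sketch, and the address below is a documentation example, not a real one:

```
# /etc/fail2ban/jail.local (sketch)
[DEFAULT]
bantime  = 3600
ignoreip = 127.0.0.1/8 203.0.113.10   # your own stable IPs

[sshd]
enabled = true

# Leave php/mysql jails out entirely unless those services (and their
# log files) exist; this fail2ban version aborts on missing logs.
```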
For Debian, that's https://www.debian.org/security/ . We have it feed notifications into Slack, which is very helpful. On 30 June 2017 at 11:36, Daniel Villarreal via talk <talk@gtalug.org> wrote:
-----BEGIN PGP SIGNED MESSAGE----- Hash: SHA256
Mr. Mohammed,
Thanks for sharing your thoughts.
At no time did you ever state that you refused to use IPv6. You actually stated that you do, in fact, use IPv6. Neither did you ever state that IPv4 is "good enough."
Aside from IPv4 vs IPv6, do you have any suggestions for hardening a Linux system?
kind regards, Daniel Villarreal PGP key 2F6E 0DC3 85E2 5EC0 DA03 3F5B F251 8938 A83E 7B49
On 06/29/2017 06:18 PM, Ansar Mohammed via talk wrote:
Again, please follow the thread, this is not about competency or capability on IPv6.
This is a simple question on hardening a Linux system. My entire network runs IPv6 also. But my home systems do not need to be hardened.
There have been many IPv6-only bugs and exploits, including last year's IPv6 ping of death on Cisco. https://tools.cisco.com/security/center/content/CiscoSecurityAdvisory/cisco-sa-20160525-ipv6
The stack simply isn't as battle tested as IPv4.
Oh, and that growing portion of the internet that's IPv6 only is primarily China.
What's your business reason for the additional risk of IPv6?
Does your application support IPv6?
Has your application been tested with IPv6?
Do you have users that are IPv6 only?
If you don't need it on a hardened system, you are just adding another attack vector for no good reason. ...
-- Giles https://www.gilesorr.com/ gilesorr@gmail.com

On Thu, Jun 29, 2017 at 07:31:10PM +0000, Ansar Mohammed wrote:
IMHO if you are looking for a hardened system you should not start with Ubuntu. Ubuntu is what I like to call 'kitchen sink Linux'.
Yeah I wouldn't start with that either.
Start with a minimal Debian install, then add the packages you need incrementally.
I would start with that too.
Package removal is never an exact rollback of package installation.
Well it should be able to be, although I agree sometimes it isn't.
Then add your IDS, customize whatever host based firewall. Disable IPv6.
I use that all the time. I think most of my internet traffic is IPv6. Why would anyone disable that? -- Len Sorensen

On Tue, Jun 27, 2017 at 07:37:29PM -0400, Truth Hacker via talk wrote:
I am starting to go down the road to harden a Linux server, I am using the Ubuntu server image as my starting point.
I searched a few articles and compiled a list of things to do, but so far the stuff is a bit dated. So I was wondering if anyone has ideas to help me harden my system, which I plan to use to host my website on a VPS host.
So far I've got steps for the following:
SSH / No root login, public key login
I must be awful. I don't do that.
Using DenyHost to reduce brute force password hacking
Is that anything like fail2ban?
Block port scanning Disable PING response
Why?
Closing unused ports
Well any proper firewall would block everything except what is explicitly allowed in, which should take care of that.
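In UFW terms, that default-deny posture looks something like the sketch below (the ports are examples for a web host; adjust to what you actually serve):

```shell
ufw default deny incoming    # block everything inbound by default
ufw default allow outgoing
ufw allow 22/tcp             # SSH
ufw allow 80,443/tcp         # web server
ufw enable
```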
Q: What service should I consider disabling from starting automatically.
Anything you are not using.
Q: What program should I remove like (telnet) from my system.
telnet is fine. telnetd, on the other hand, shouldn't be installed by default on any distribution made this millennium.
I am reading up on iptable and also know about ufw, but not sure how to setup a good firewall, like what to block and not.
I personally like using shorewall to manage iptables.
Any other ideas or checklist would be appreciated.
-- Len Sorensen

On Tue, Jun 27, 2017 at 7:37 PM, Truth Hacker via talk <talk@gtalug.org> wrote:
Hi All,
I am starting to go down the road to harden a Linux server, I am using the Ubuntu server image as my starting point.
I searched a few articles and compiled a list of things to do, but so far the stuff is a bit dated. So I was wondering if anyone has ideas to help me harden my system, which I plan to use to host my website on a VPS host.
So far I've got steps for the following:
SSH / No root login, public key login Using DenyHost to reduce brute force password hacking Block port scanning Disable PING response Closing unused ports
Q: What service should I consider disabling from starting automatically.
Q: What program should I remove like (telnet) from my system.
I am reading up on iptable and also know about ufw, but not sure how to setup a good firewall, like what to block and not.
Any other ideas or checklist would be appreciated.
I used to follow [My First 10 Minutes On A Server][0], but found it too annoying to follow a "checklist," so I converted it to [an Ansible playbook][1]. I now use dev-sec's [Hardening Framework][2] as it does everything I want. I find this stuff extremely boring, so automating the work is a big +1 for me.

For a firewall, I use UFW as it's well documented and easy to use.

[0]: https://www.codelitt.com/blog/my-first-10-minutes-on-a-server-primer-for-sec...
[1]: https://github.com/myles/2016-10-11-ansible/tree/master/1-getting-started/ex...
[2]: http://dev-sec.io/
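A minimal playbook using the dev-sec roles might look like the sketch below; the role names are the ones I believe were published on Ansible Galaxy at the time, so verify them before relying on this:

```yaml
# site.yml (sketch; role names assumed)
- hosts: webservers
  become: true
  roles:
    - dev-sec.os-hardening
    - dev-sec.ssh-hardening
```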

On 27/06/17 07:37 PM, Truth Hacker via talk wrote:
Hi All,
I am starting to go down the road to harden a Linux server, I am using the Ubuntu server image as my starting point.
I searched a few articles and compiled a list of things to do, so far the stuff is a bit dated. So I was wondering if anyone has stuff ideas to help me harden my system which I plan to use to host my website using a VPS host.
So far I've got step for the following:
SSH / No root login, public key login
I don't disable root login; I actually use it frequently. But I disable PasswordAuthentication (occasionally, on some servers, whitelisting some users who are allowed to use PasswordAuthentication using 'Match user'). I certainly disable PasswordAuthentication for root, but I allow root login with a keypair.

fail2ban, as others have mentioned, I always enable too. Though it's nice to whitelist some of your own IPs if they're steady, as otherwise a few times a year I found legit users getting themselves banned (using a different computer, or forgetting a password and thinking keys were set up when they weren't, a typo in the username, etc.). Whitelisting the office IP address has stopped my co-workers from tripping fail2ban :)
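As a sketch, that policy in sshd_config terms (the user name is hypothetical; 'prohibit-password' is the keys-only root setting in recent OpenSSH):

```
# /etc/ssh/sshd_config (fragment)
PasswordAuthentication no
PermitRootLogin prohibit-password    # root may log in, but only with a key

Match User alice                     # hypothetical user allowed passwords
    PasswordAuthentication yes
```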
participants (15)

- Ansar Mohammed
- Anthony de Boer
- Blaise Alleyne
- Christopher Browne
- D. Hugh Redelmeier
- Daniel Villarreal
- Giles Orr
- James Knott
- Kevin Cozens
- lsorense@csclub.uwaterloo.ca
- Mauro Souza
- Myles Braithwaite
- Russell
- Stewart C. Russell
- Truth Hacker