
Since this weekend seems to be my hacker's holiday at home, no fires, no feuds, and no FUBAR, I thought I'd update the progress on the Toshiba laptop Peter offered up to the list.
When last hacked I had XFCE running on LightDM with a small RAM upgrade. Still not enough to force the video aperture to 64 MB, but videos played well and the system did not appear to overheat.
Then I got cocky. I had fixed the DSDT tables but still couldn't connect to the APIC timer. Everything seemed stable, so I installed Chrome. The rest happened so quickly I didn't even have time to say "this does not look good" and, presto, one cooked drive.
I had acquired the other Toshiba I was helping my friend with when he got tired of failing to get the battery to charge after installing Windows 7. So I had another drive to tinker with, but there are two important differences in the laptops. The one Peter gave me has a single core and runs without the battery being installed; the other one is quad core and will not start without a battery being present.
So what to do: experiment with the other drive or something else? I went with something else. I removed the drive, removed the battery, removed the wifi and booted. The unit fails to register the Realtek Ethernet and loops the message, but the unit stays alive.
So now I have a fairly quiet three-port USB charger. Oh yeah, I also have the unit sitting on a USB cooling base with the access plates removed from the base, but this is for the next part: Sugar on a Stick. This is the base OS for the One Laptop per Child project. For some reason this OS requires the battery to be installed. I imagine that is because of the form factor of the prototype laptops, which appear to have a permanent built-in battery. So the upshot is Sugar runs fine, no Ethernet as of yet, but the unit itself is quite stable. Hours of use, and I've left it on overnight. No freeze-ups, crashes or chokes the next day, and no excessive heating. No temp gauge, but by hand it feels a lot cooler.
So I will probably upgrade the RAM at some point, which will increase the video aperture to 64 MB, and try a full Fedora with LXDE or something similar.
So thanks to Peter, it's loads of fun and keeps me out of trouble. Well, for the most part anyway. :-) Russell

Very interesting, Russell. Thanks for the update. I'm glad we're keeping you off the streets and out of trouble ;). Regards - Peter
-- Peter Hiscocks Syscomp Electronic Design Limited, Toronto http://www.syscompdesign.com USB Oscilloscope and Waveform Generator 647-839-0325

I have to say the Toshiba hardware is a cut above any of my other reclamations to date. I'd have happily underclocked this unit to keep it functional. I like the form factor a lot and, as I said, Sugar runs fine. On another note, in respect of one of the laptops you surprised me with when I picked up the Toshiba: project reflow video is on the back burner after certain classified materials were discovered. Construction of the fume hood has stopped and I am engaged in a strict social contract not to Frankenstein any electronics in food preparation areas. Curses, foiled again. It's kind of like the time my dad caught me doping the tissue paper wings of a Fokker biplane in my room with the windows closed. However, in this case no lecture was needed. I am, after all, not completely intransigent to reason. I haven't even looked at the HP yet except to confirm: yep, that's one dead battery. Many hours of fun left in the tickle trunk now, so my thanks and regards. Russell On Sunday, March 15, 2015, <phiscock@ee.ryerson.ca> wrote:
Very interesting, Russell. Thanks for the update.
I'm glad we're keeping you off the streets and out of trouble ;).
Regards Peter

Russell, What's with the change of font sizes in the majority of your posts to this list? You have a small font on some paragraphs and then a larger font on others. Is this something you're doing intentionally? Maybe it's just how it's presented on my end using the Gmail web reader. Anyone else seeing this? Is there any way you can change to plain text only posts? -- Scott

I'm not sure what's happening to the rendering. I'm not doing anything regarding format myself, just using phablet defaults. No irregular font sizes in my display. The quoted text is in one colour, new text in another. I thought issues of plain text vs. HTML were sorted when the mailserver was upgraded. Just checked this with my Samsung and no issues for my display on this device either, so for the moment I don't know what's going on. On Monday, March 16, 2015, Scott Allen <mlxxxp@gmail.com> wrote:
Russell,
What's with the change of font sizes in the majority of your posts to this list? You have a small font on some paragraphs and then a larger font on others. Is this something you're doing intentionally?
Maybe it's just how it's presented on my end using the Gmail web reader. Anyone else seeing this?
Is there any way you can change to plain text only posts?
-- Scott

On Mon, Mar 16, 2015 at 12:23:58PM -0400, Scott Allen wrote
Russell,
What's with the change of font sizes in the majority of your posts to this list? You have a small font on some paragraphs and then a larger font on others. Is this something you're doing intentionally?
Maybe it's just how it's presented on my end using the Gmail web reader. Anyone else seeing this?
Is there any way you can change to plain text only posts?
My mailreader, mutt, sees...
[-- Attachment #1 --] [-- Type: multipart/alternative, Encoding: 7bit, Size: 9.7K --]
...and gives me the text version. Actually, you can pull popmail from Gmail encapsulated in SSL using server=pop.gmail.com and port=995 -- Walter Dnes <waltdnes@waltdnes.org>
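As a quick, hedged illustration of that last point: pulling mail from Gmail's POP3 server over SSL on port 995 can be done with nothing but the Python standard library. The address and password below are placeholders (Gmail normally wants an app-specific password for this).

    import poplib

    conn = poplib.POP3_SSL("pop.gmail.com", 995)
    conn.user("you@example.com")         # hypothetical address
    conn.pass_("app-password-here")      # hypothetical app password
    count, size = conn.stat()            # (message count, mailbox size in bytes)
    print(f"{count} messages, {size} bytes waiting")
    conn.quit()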

OK, just checked the rendering of sent messages on this phablet and sure enough I noticed a slight difference in font sizes. Those with better eyesight might quibble over my use of the word slight, but I digress here. I think the last time this was an issue with mail for me was when Majordomo was choking on MIME attachments to HTML messages, hence the mail server upgrade. Same solution this time: abandon the natively provided Gmail client, which compels the sender to HTML, and use K-9 Mail, which does not. Retro thanks to Mario, who provided this mail agent suggestion the last time, when a variant of the IFS variable was trashing Google's servers and borking mail sent via Gmail in general. On another note, I notice Debian implemented dash to try to deal with exploits. Does anyone know offhand what Fedora has implemented, or is all this handled through SELinux policy management? On March 16, 2015 1:27:49 PM EDT, Walter Dnes <waltdnes@waltdnes.org> wrote:
My mailreader, mutt, sees...
[-- Attachment #1 --] [-- Type: multipart/alternative, Encoding: 7bit, Size: 9.7K --]
...and gives me the text version. Actually, you can pull popmail from Gmail encapsulated in SSL using server=pop.gmail.com and port=995
-- Sent from my Android device with K-9 Mail. Please excuse my brevity.

On Mon 16 Mar 2015 14:34 -0400, R Russell Reiter wrote:
On another note, I notice Debian implemented dash to try to deal with exploits. Does anyone know offhand what Fedora has implemented, or is all this handled through SElinux policy management?
dash is for performance, not security.

I'm not sure that performance and security aren't interchangeable concepts. While the implementation of dash did improve performance, it also mitigated the effects of the Shellshock vulnerability discovered last year. On Tue, Mar 17, 2015 at 9:32 AM, Loui Chang <louipc.ist@gmail.com> wrote:
dash is for performance, not security.

On 17 March 2015 at 10:16, Russell Reiter <rreiter91@gmail.com> wrote:
I'm not sure that performance and security aren't interchangeable concepts. While the implementation of dash did improve performance, it also mitigated the effects of the Shellshock vulnerability discovered last year.
Well, if you examine the package information about Dash, the description is reasonably specific... https://packages.debian.org/sid/shells/dash
"The Debian Almquist Shell (dash) is a POSIX-compliant shell derived from ash. Since it executes scripts faster than bash, and has fewer library dependencies (making it more robust against software or hardware failures), it is used as the default system shell on Debian systems."
I agree that performance is somewhat related to security; a denial of service can result from poor performance. But the above seems to be descriptive of why Dash was chosen as the default shell in Debian post-Squeeze.
Fewer library dependencies is an interesting additional property. That is presumably "more secure" as well, but I think they were after "more reliable" which, while not unrelated, is a distinctly separate measure. -- When confronted by a difficult problem, solve it by reducing it to the question, "How would the Lone Ranger handle this?"

I think what is important to remember is that most recently discovered exploits were in fact known at one point or another, at least to the original authors of the code, if not necessarily documented and shared. How much information is shared between allies and foes is usually a matter of operational security.
I believe that Debian has moved towards implementing dependency-based booting with an eye to, at some time in the future, compiling the OS each time at runtime. In this case brevity would be a factor in the time it takes to initialize key security layers, foiling "injected" exploits as opposed to "discovered" ones. However, too much simplicity can lead to security holes and other hidden features. I tend to disagree that reliability and security are distinctly separate and measurable. Each may be quantified as a measure of trust in relationship to the other and acted upon accordingly in relation to any OPSEC priorities. On Tue, Mar 17, 2015 at 11:05 AM, Christopher Browne <cbbrowne@gmail.com> wrote:
Well, if you examine the package information about Dash, the description is reasonably specific... https://packages.debian.org/sid/shells/dash
"The Debian Almquist Shell (dash) is a POSIX-compliant shell derived from ash.
Since it executes scripts faster than bash, and has fewer library dependencies (making it more robust against software or hardware failures), it is used as the default system shell on Debian systems."
I agree that performance is somewhat related to security; a denial of service can result from poor performance. But the above seems to be descriptive of why Dash was chosen as the default shell in Debian post-Squeeze.
Fewer library dependencies is an interesting additional property. That is presumably "more secure" as well, but I think they were after "more reliable" which, while not unrelated, is a distinctly separate measure. -- When confronted by a difficult problem, solve it by reducing it to the question, "How would the Lone Ranger handle this?"

On Tue, Mar 17, 2015 at 11:37:02AM -0400, Russell Reiter wrote:
I believe that Debian has moved towards implementing dependency-based booting with an eye to, at some time in the future, compiling the OS each time at runtime.
Certainly not. Debian does binary distribution and to do anything else is stupid. Good grief, try compiling KDE each time you boot. Not going to happen. -- Len Sorensen

<snip the stupid stuff>
Certainly not. Debian does binary distribution and to do anything else is stupid.
I think you mis-presume that technology will stay at current levels and that capacity planning is done the week before instead of the decade before. Reminds me of Bill Gates' "no one is going to need more than 640K," or IBM's forecast that the entire world's computing would be done on one or two of their mainframes.
Good grief, try compiling KDE each time you boot. Not going to happen.
Says you. I'm not even going to mention VLIW, concurrency booting in failover, control granularity on process exit, or the fact that the first time I compiled a kernel on a 486 it took 20 hours. Want to bet there is one now that does it in 20 seconds, and how long will it be before that is microseconds? What's stupid is not engaging possibilities.
-- Len Sorensen

-------- Original Message -------- "...going to need more than 640k, or IBM's..." Umm, I meant 64K of ROM. Sorry. -- Sent from my Android device with K-9 Mail. Please excuse my brevity.

On Tue, Mar 17, 2015 at 05:53:56PM -0400, Russell Reiter wrote:
I think you miss-presume that technology will stay at current levels and that capacity planning is done the week before instead of the decade before.
Reminds me of Bill Gates, no one is going to need more than 640k, or IBM's forecast that the entire worlds computing would be done on one or two of their mainframes.
So you want to use all the improvements in technology to do pointless repetitive work? Why would you want that? Debian went to parallel startup to improve startup times. Your idea would make it much slower.
Good grief, try compiling KDE each time you boot. Not going to happen.
Says you. I'm not even going to mention VLIW, concurrency booting in failover, control granularity on process exit, or the fact that the first time I compiled a kernel on a 486 it took 20 hours. Want to bet there is one now that does it in 20 seconds, and how long will it be before that is microseconds? What's stupid is not engaging possibilities.
Well, the best kernel compile time I have heard of was 7 seconds. Of course it was on an IBM p795 with 256 4GHz cores. That's way beyond what we are going to have this decade in a typical machine.
Recompiling serves no purpose. Do you want to first recompile the compiler before compiling the OS? What are you going to compile the compiler with? How far down do you want to go? If you want to check your code, then checksum it and validate that it matches the code last time you booted; if it does, great, don't bother redoing anything. If it doesn't match, well, is that because the code is now corrupt (in which case recompiling is a bad idea) or because you made a change?
Useful uses of modern technology are to make things faster or more power efficient so they can run longer. Laptops with 10 to 15 hour battery lives exist now. Those we didn't have in the past. I don't want to waste that battery life running pointless compile jobs.
I think source distributions like Gentoo are stupid, but at least they only compile things once. They aren't that crazy. Some people like what they can do with tweaking the settings and turning features on and off. I prefer things that are tested and work and don't waste tons of CPU time compiling what the distribution could have already compiled. -- Len Sorensen
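A minimal sketch of the checksum-and-compare idea Len describes, instead of recompiling: hash the binaries once, record the digests, and at each boot just verify they still match. The watched file list and manifest path here are hypothetical.

    import hashlib
    import json
    import pathlib

    MANIFEST = pathlib.Path("/var/lib/boot-check/manifest.json")   # hypothetical location
    WATCHED = ["/bin/dash", "/usr/bin/python3"]                    # hypothetical file list

    def digest(path):
        # Stream the file through SHA-256 so large binaries need not fit in memory.
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(65536), b""):
                h.update(chunk)
        return h.hexdigest()

    current = {p: digest(p) for p in WATCHED}
    if MANIFEST.exists():
        recorded = json.loads(MANIFEST.read_text())
        changed = [p for p in WATCHED if recorded.get(p) != current[p]]
        print("changed since last check:", changed or "nothing")
    else:
        # First run: record the digests for future boots to compare against.
        MANIFEST.parent.mkdir(parents=True, exist_ok=True)
        MANIFEST.write_text(json.dumps(current, indent=2))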

Lennart Sorensen wrote:
Recompiling serves no purpose. Do you want to first recompile the compiler before compiling the OS? What are you going to compile the compiler with? How far down do you want to go?
Too much of a modern system is in various scripting languages, which do effectively that every time you run them. Granted, so long as you're not doing tight inner loops in a script, the performance hit isn't as bad as it can be. Shell scripts that forked N things per loop iteration used to really crawl along, though the fact that we had a couple of dozen users on a 386 running SVR3 might have had something to do with it too. Optimization is getting to be a lost art.
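A quick illustration of that fork-per-iteration cost, as a rough sketch (the command, iteration count, and any timing figures are arbitrary and will vary by machine):

    import subprocess
    import time

    N = 200

    start = time.perf_counter()
    for _ in range(N):
        subprocess.run(["/bin/true"])   # fork + exec a process on every iteration
    forked = time.perf_counter() - start

    start = time.perf_counter()
    total = 0
    for _ in range(N):
        total += 1                      # the same trivial "work" done in-process
    in_process = time.perf_counter() - start

    print(f"{N} fork/execs: {forked:.3f}s, in-process loop: {in_process:.6f}s")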
I think source distributions like gentoo are stupid, but at least they only compile things once. They aren't that crazy. Some people like what they can do with tweaking the settings and turning features on and off. I prefer things that are tested and work and don't waste tons of CPU time compiling what the distribution could have already compiled.
Part of the reason I run Gentoo is to have the source code aboard my system and be sure the binaries were compiled from exactly that; their infrastructure facilitates that, and the build process has to be robust enough to work on various strange folks' machines. (The other part was, at the time, wanting something as unlike an RPM-based distro as possible, due to having had enough of that for a while.) Crazy would be taking something like Debian or Red Hat, where you're supposed to love and run their distributed binaries, and recompile them all yourself and find out how many builds only worked that once and are irreproducible. But someone somewhere has to keep them honest. :-) -- Anthony de Boer

On Tue, Mar 17, 2015 at 11:17:45PM -0400, Anthony de Boer wrote:
Too much of a modern system is in various scripting languages, which do effectively that everytime you run them. Granted, so long as you're not doing tight inner loops in a script the performance hit isn't as bad as it can be. Shell scripts that forked N things per loop iteration used to really crawl along, though the fact we had a couple of dozen users on a 386 running SVR3 might have had something to do with it too.
Optimization is getting to be a lost art.
Well there is a difference between not optimizing, and purposely unoptimizing.
Part of the reason I run Gentoo is to have the source code aboard my system and be sure the binaries were compiled from exactly that; their infrastructure facilitates that and the build process has to be robust enough to work on various strange folks' machines. (The other part was at the time wanting something as unlike an RPM-based distro as possible due to having had enough of that for awhile.)
But running Gentoo you do not recompile the same code each time you boot. Just doing it once is sufficient.
Crazy would be taking something like Debian or Red Hat where you're supposed to love and run their distributed binaries and recompile them all yourself and find out how many builds only worked that once and are irreproduceable. But someone somewhere has to keep them honest. :-)
Debian even has a "reproducible builds" project going on, hunting down any package that generates a different result each time you rebuild it, and fixing them. -- Len Sorensen

On 17 March 2015 at 23:17, Anthony de Boer <adb@adb.ca> wrote:
Lennart Sorensen wrote:
Recompiling serves no purpose. Do you want to first recompile the compiler before compiling the OS? What are you going to compile the compiler with? How far down do you want to go?
Too much of a modern system is in various scripting languages, which do effectively that everytime you run them. Granted, so long as you're not doing tight inner loops in a script the performance hit isn't as bad as it can be. Shell scripts that forked N things per loop iteration used to really crawl along, though the fact we had a couple of dozen users on a 386 running SVR3 might have had something to do with it too.
Optimization is getting to be a lost art.
I think source distributions like gentoo are stupid, but at least they only compile things once. They aren't that crazy. Some people like what they can do with tweaking the settings and turning features on and off. I prefer things that are tested and work and don't waste tons of CPU time compiling what the distribution could have already compiled.
Part of the reason I run Gentoo is to have the source code aboard my system and be sure the binaries were compiled from exactly that; their infrastructure facilitates that and the build process has to be robust enough to work on various strange folks' machines. (The other part was at the time wanting something as unlike an RPM-based distro as possible due to having had enough of that for awhile.)
Crazy would be taking something like Debian or Red Hat where you're supposed to love and run their distributed binaries and recompile them all yourself and find out how many builds only worked that once and are irreproduceable. But someone somewhere has to keep them honest. :-)
You don't need to compile everything *every* time to keep them honest; you need to compile it *once*. And it's not so much you as there being *someone*. And better still if the "someone" is an automated batch process so that we can have a non-negligible amount of confidence that it's repeatable.
And some of the "recompile for (imagined) security" takes this to further heights of silliness...
- Do we need to recompile Bash (or Dash or zsh or whatever) each time we reboot?
- Oh dear, that means we need to recompile the Perl, Python, and Ruby distributions every time. Should we be running the test suites, too, to verify that they're working as predicted?
- It seems idiotic to need to recompile KDE, libraries *and* apps.
- I'm running StumpWM as my window manager; this "security by recompiling everything" model means I need to recompile SBCL (the Common Lisp environment).
- Whoops, can we really trust things if we haven't recompiled GCC/LLVM since the last time we rebooted? If recompiling code lends security, then surely not.
- Have you recompiled Grub lately?
And all of this falls out of deciding that when people say "reliability," they don't *really* mean that; they really mean "security." And when they say "performance", they don't *really* mean that; they really meant to say "security" (even though they didn't, which ought to be a hint that it wasn't what they meant).
Claim was made that Debian switched from using Bash as the default shell (!= "default login shell", by the way) "because security." When the declared reasons didn't have the word "security" anywhere.
But I guess that since *everything* is really computer security, then the plans must be already well under way for Debian to recompile everything, from the kernel to Grub to all the scripting engines, during the boot process. -- When confronted by a difficult problem, solve it by reducing it to the question, "How would the Lone Ranger handle this?"

On Wed, Mar 18, 2015 at 12:07:10PM -0400, Christopher Browne wrote:
You don't need to compile everything *every* time to keep them honest; you need to compile it *once*. And it's not so much you as there being *someone*. And better still if the "someone" is an automated batch process so that we can have a non-negligible amount of confidence that it's repeatable.
And some of the "recompile for (imagined) security" takes this to further heights of silliness...
- Do we need to recompile Bash (or Dash or zsh or whatever) each time we reboot?
- Oh dear, that means we need to recompile the Perl, Python, and Ruby distributions every time. Should we be running the test suites, too, to verify that they're working as predicted?
Sure, but why trust the test suites haven't been tampered with?
- It seems idiotic to need to recompile KDE, libraries *and* apps.
- I'm running StumpWM as my window manager; this "security by recompiling everything" model means I need to recompile SBCL (the Common Lisp environment).
- Whoops, can we really trust things if we haven't recompiled GCC/LLVM since the last time we rebooted? If recompiling code lends security, then surely not.
- Have you recompiled Grub lately?
And all of this falls out of deciding that when people say "reliability," they don't *really* mean that; they really mean "security." And when they say "performance", they don't *really* mean that; they really meant to say "security" (even though they didn't, which ought to be a hint that it wasn't what they meant).
Claim was made that Debian switched from using Bash as the default shell (!= "default login shell", by the way) "because security." When the declared reasons didn't have the word "security" anywhere.
But I guess that since *everything* is really computer security, then the plans must be already well under way for Debian to recompile everything, from the kernel to Grub to all the scripting engines during the boot process.
But why trust your compiler? All such a stupid idea is doing is moving the problem, while putting some stuff in front that sounds like it's doing something to improve security while doing no such thing. There are ways to make sure you are booting trusted code. Recompiling from source at boot is not one of them. It does the opposite, in fact. -- Len Sorensen

On 18 March 2015 at 12:36, Lennart Sorensen <lsorense@csclub.uwaterloo.ca> wrote:
On Wed, Mar 18, 2015 at 12:07:10PM -0400, Christopher Browne wrote:
- Oh dear, that means we need to recompile the Perl, Python, and Ruby distributions every time. Should we be running the test suites, too, to verify that they're working as predicted?
Sure, but why trust the test suites haven't been tampered with?
Yep, that means we need to download the sources of *everything*, from trusted sources, and check the checksums. Recursively. And it doesn't really validate that the test suites are any good, which is distinct from tampered with... The only way to be totally confident the test suites are any good is if you wrote them yourself.
But I guess that since *everything* is really computer security, then the plans must be already well under way for Debian to recompile everything, from the kernel to Grub to all the scripting engines during the boot process.
But why trust your compiler? All such a stupid idea is doing is moving the problem, while putting some stuff in front that sounds like they are doing something to improve security, while doing no such thing.
There are ways to make sure you are booting trusted code. Recompiling from source at boot is not one of them. It does the opposite, in fact.
Yep, it shuffles around the problem, pretending that the compiling process is a grand protection.
This properly steps us back to Ken Thompson's paper on trust, http://cm.bell-labs.com/who/ken/trust.html where he points out an exploit (discovered by Multics folk somewhat earlier) whereby a suitably hacked compiler might put arbitrary exploits anywhere into this process.
And there's actually a tale in the last week pointing to attempts to do exactly what Thompson is pointing at; it seems as though some TLA agencies have tried such stunts with the Apple compiler toolchain, Xcode. http://www.macrumors.com/2015/03/10/leaked-cia-documents-hacked-xcode/
Gentoo, at one time, had proponents that would claim no end of benefits from compiling everything from scratch. I don't think that's what it's about now, but at one time there were plenty of "fanboys" claiming that they were making their system better and understanding it better just by virtue of watching the successive series of "make" output, lines of logs indicating what file GCC most recently compiled, and with what flags, scroll by. Pointing back to those fun times, with maximum sarcasm... http://funroll-loops.teurasporsaat.org/
Watching the compiler 'logging' scroll past doesn't represent actual understanding. (And if someone pulled Thompson's exploit on your compiler toolchain, recompiling ensures INsecurity!)
Instead, I'll step back to Thompson's paper... "The moral is obvious. You can't trust code that you did not totally create yourself."
That's a deeper statement than it seems; deep trust requires that you write your own compiler, your own libraries, your own linker, your own bootloader, and so forth. But the shallow interpretation also works decently. Recompiling someone else's code using someone else's compiler using someone else's control scripts doesn't provide deep trust. -- When confronted by a difficult problem, solve it by reducing it to the question, "How would the Lone Ranger handle this?"

On 18 Mar 2015 12:59 pm, "Christopher Browne" <cbbrowne@gmail.com> wrote:
Instead, I'll step back to Thompson's paper...
"The moral is obvious. You can't trust code that you did not totally create yourself."
This whole conversation/hissy-fit is missing yet another problem. Even Ken Thompson wrote code with bugs and constructed code vulnerable to exploitation. I certainly don't blindly trust the code I write - I make the confident assumption that it is buggy and possibly dangerous. If anyone on Earth wrote a program that actually addressed my needs I wouldn't be writing the code at all. Trusting something so complicated it requires a computer to interpret and run it is a challenge. Many approaches are being explored, differences of opinion are being generated constantly. That's OK. There are a lot of things that are now standard that are based on history rather than good theoretical bases. Let's look at working solutions, ask questions and try to be friendly and a little more understanding. There is no right way, and priorities differ.

<snip>
And all of this falls out of deciding that when people say "reliability," they don't *really* mean that; they really mean "security." And when they say "performance", they don't *really* mean that; they really meant to say "security" (even though they didn't, which ought to be a hint that it wasn't what they meant).
If it is a hint, then open reasoning is on the table. There is a type of security in obscurity, as witnessed by the discovery of embedded exploits which, of necessity, must be addressed by enterprise.
Claim was made that Debian switched from using Bash as the default shell (!= "default login shell", by the way) "because security." When the declared reasons didn't have the word "security" anywhere.
But I guess that since *everything* is really computer security, then the plans must be already well under way for Debian to recompile everything, from the kernel to Grub to all the scripting engines during the boot process.
I'm not privy to the inner workings of Debian plans, but all the best planners, I think from the logistic perspective, work on failover and have a contingency for rapid deployment in case a primary plan doesn't work as expected and does indeed fail in service. This is what government brings to the table that enterprise does not: the willingness to spend large amounts of money on two radically different plans with identical aims. So if Debian does not have a fully formulated plan for a compile-at-runtime OS, I'd bet there is a set of schemas on someone's drawing board somewhere. -- Sent via K-9 Mail.

On Wed 18 Mar 2015 12:59 -0400, R. Russell Reiter wrote:
I'm not privy to the inner workings of Debian plans, but all the best planners, I think from the logistic perspective, work on failover and have a contingency for rapid deployment in case a primary plan doesn't work as expected and does indeed fail in service.
And thus, cdrkit was born.

<snip>
Recompiling serves no purpose. Do
I consider security in mission critical environments to be a valid purpose and note that enterprise is not necessarily mission critical whereas national security is uniformly considered to be very critical.
Useful uses of modern technology is to make things faster or more power efficient so they can run longer.
This is an enterprise point of view. It has validity in its own niche. There are other critical niches which have different aims and security needs: fire control systems, for example.
Laptops with 10 to 15 hour battery lives exist now.
Li-ion technology has improved in the last ten years, and it's true those improvements were driven by markets rather than government or national interests, but those interests can and will take advantage of those improvements. Let me put it this way: would you deliberately not take advantage of a secure booting feature which, because of hardware and software improvements, works with little or no added overhead? You could do that as a matter of personal preference, but in enterprise you would lose market share when your bank clients discover your system is not as secure as your competitors'.
I think source distributions like gentoo are stupid, but at least they only compile things once. They aren't that crazy.
So you only compile your critical dependency system once at runtime, and if and when you make hardware or other critical changes you do it again. There are valid reasons to harden systems and keep them hard. You got 82,000 hours of spin time from one of your drives; that's a lot of time between boots. I don't just pull this stuff out from under my hat, you know; I do a lot of reading. It's just that I'm limited to the stuff that's not above my pay grade or not otherwise trade secrets. -- Sent from my Android device with K-9 Mail. Please excuse my brevity.

On Wed, Mar 18, 2015 at 04:17:52AM -0400, R Russell Reiter wrote:
I consider security in mission critical environments to be a valid purpose and note that enterprise is not necessarily mission critical whereas national security is uniformly considered to be very critical.
If you want security, recompiling again and again is not a solution, it is a risk. Validating your binary with some kind of checksum would be useful. Sure you should compile it yourself from validated sources, and then sign the result, and then leave it alone and just check the signature each time you boot.
LI technology has improved in the last ten years and its true those improvements were drive by markets rather than government or national interests but those interests can and will take advantage of those improvements.
Let me put it this way, would you deliberately not take advantage of a secure booting feature which because of hardware and software improvements, works with little or no added overhead?
Yes I would. I despise secure boot. It has its use in a few special cases, but the vast majority of places it is being pushed is purely to try and control people's hardware. Of course secure booting relies on signed binaries, and certainly does NOT support recompiling the code each time you boot.
You could do that as a matter of personal preference but in enterprise you would lose market share when your bank clients discover your system is not as secure as your competitors.
So you only compile your critical dependency system once at runtime and if and when you make hardware or other critical changes you do it again. There are valid reasons to harden systems and keep them hard. You got 82000 hrs spin time from one of your drives, that's a lot of times between boots.
There is no reason to trust your sources any more than your precompiled binary at boot time, hence recompiling is plain stupid and serves no purpose. You turn it from a problem of validating your binary into one of validating your compiler binary and your source code, and wasting a lot of time every boot. Compiling trusted code in a trusted environment, then signing it and using secure boot to validate the signed binary and running it, does make sense, but compiling multiple times does not.
I don't just pull this stuff out from under my hat you know, I do a lot of reading. Its just that I'm limited to the stuff that's not above my pay grade or not otherwise trade secrets.
Well it sure seems like you do a lot of the time. -- Len Sorensen
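A toy sketch of that sign-once, verify-at-every-boot idea. Real secure boot chains use asymmetric signatures checked by firmware; an HMAC stands in here just to keep the shape of the check visible, and the key handling and file paths are purely hypothetical.

    import hashlib
    import hmac

    KEY = b"build-server-secret"                      # hypothetical signing key

    def sign(binary: bytes) -> str:
        # Produced once, on the trusted build host, right after compiling.
        return hmac.new(KEY, binary, hashlib.sha256).hexdigest()

    def verify(binary: bytes, signature: str) -> bool:
        # Run at every boot; constant-time comparison avoids timing leaks.
        return hmac.compare_digest(sign(binary), signature)

    kernel = open("/boot/vmlinuz", "rb").read()        # hypothetical paths
    stored_sig = open("/boot/vmlinuz.sig").read().strip()
    print("boot allowed" if verify(kernel, stored_sig) else "refuse to boot")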


Compiling trusted code in a trusted
environment and then signing it and using secure boot to validate the signed binary and running it does make sense, but compiling multiple times does not.
I dislike the term "secure" and prefer "trust" myself; however, the demands of enterprise are different from my own. The number of compiles is related to the number of boots; again, IMHO this is far less of a problem than you make it out to be.
I don't just pull this stuff out from under my hat you know, I do a lot of reading. Its just that I'm limited to the stuff that's not above my pay grade or not otherwise trade secrets.
Well it sure seems like you do a lot of the time.
Well, that is your opinion and you are entitled to it. My opinion is that there is a need and a call for the development of a trusted compiler to be used in a security-enhanced OS which compiles at runtime and is based on the modular nature of the hardware itself and the needs of the enterprise in question. -- Sent via K-9 Mail.

On Wed, 2015-03-18 at 10:59 -0400, R. Russell Reiter wrote:
Compiling trusted code in a trusted
environment and then signing it and using secure boot to validate the signed binary and running it does make sense, but compiling multiple times does not.
I dislike the term secure and prefer trust myself however the demands of enterprise are different than my own. The number of compiles is related to the number of boots again IMHO, this is far less of a problem than you would make it out to be.
Ken Thompson's classic Turing Award lecture: <http://cm.bell-labs.com/who/ken/trust.html> Trust is not cheap. If it looks cheap it may just be gullibility. Mel.

On Tue 17 Mar 2015 11:37 -0400, Russell Reiter wrote:
I believe that Debian has moved towards implementing dependency-based booting with an eye to, at some time in the future, compiling the OS each time at runtime.
I have contemplated the possibility of something like this with the increased adoption of continuous integration, except recompiles and reboots would be a lot more frequent. This would only really be used for automated build and test systems though, not for general use.

On 19/03/15 10:14 AM, Loui Chang wrote:
On Tue 17 Mar 2015 11:37 -0400, Russell Reiter wrote:
I believe that Debian has moved towards implementing dependency-based booting with an eye to, at some time in the future, compiling the OS each time at runtime.
I have contemplated the possibility of something like this with the increased adoption of continuous integration, except recompiles and reboots would be a lot more frequent. This would only really be used for automated build and test systems though, not for general use.
Tangentially related: http://nixos.org Sure, you don't compile at runtime, but you can choose which version of a package you run and can have as many versions installed as you like. Package paths are based on the hash of the compiler input, IIRC, so every version is unique in its build/run environment. Try it out, lots of fun. Cheers, Jamon
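A loose sketch of that idea (this is not Nix's actual derivation algorithm, just the shape of it): hash everything that goes into a build and use the digest as part of the install path, so builds with different inputs can never collide and can coexist.

    import hashlib

    def store_path(name: str, version: str, inputs: dict) -> str:
        # Hash the build inputs: source digests, flags, dependencies, and so on.
        h = hashlib.sha256()
        for key in sorted(inputs):
            h.update(f"{key}={inputs[key]}\n".encode())
        return f"/nix/store/{h.hexdigest()[:32]}-{name}-{version}"

    # Two builds that differ only in a compiler flag land in different paths,
    # so both versions can be installed side by side.
    a = store_path("hello", "2.10", {"src": "sha256:abc123", "CFLAGS": "-O2"})
    b = store_path("hello", "2.10", {"src": "sha256:abc123", "CFLAGS": "-O3"})
    print(a)
    print(b)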

On March 19, 2015 10:14:33 AM EDT, Loui Chang <louipc.ist@gmail.com> wrote:
On Tue 17 Mar 2015 11:37 -0400, Russell Reiter wrote:
I believe that Debian has moved towards implementing dependency-based booting with an eye to, at some time in the future, compiling the OS each time at runtime.
I have contemplated the possibility of something like this with the increased adoption of continuous integration, except recompiles and reboots would be a lot more frequent. This would only really be used for automated build and test systems though, not for general use.
You might find this interesting. It is Yale U announcing VLIW reasoning in the '80s. More work with less iron, and from my perspective, instructions ordered at compile time make for fewer clocking issues. At the moment the stumbling block appears to be the lack of a robust memory subsystem and flakiness in RAM, which seem to be due to the limitations in current nano-manufacturing techniques used for RAM chipsets. Notwithstanding my complaints about dirty AC's involvement in bit flipping. http://dl.acm.org/citation.cfm?doid=800046.801649
-- Sent via K-9 Mail.

On Fri, Mar 20, 2015 at 07:34:35AM -0400, R. Russell Reiter wrote:
You might find this interesting. It is Yale U announcing VLIW reasoning in the 80's. More work with less iron and from my perspective, instructions ordered at compile time makes for fewer clocking issues.
Yes it does, hence why VLIW works great in DSPs, where you know exactly what you will be doing all the time at compile time. In general-purpose software, on the other hand, you don't know at compile time, and the compiler cannot generate efficient code for VLIW and ends up rarely generating more than one instruction per VLIW bundle, while a more complex CPU design (like most of the ones people actually buy and use) that does run-time instruction reordering is able to actually use multiple execution units per clock cycle. So yes, VLIW is more efficient if you order the instructions at compile time. Too bad in reality that has turned out to be impossible in the general case. Makes wonderful DSP/GPU/other stream processing chips though. -- Len Sorensen

<snip>.
So yes VLIW is more efficient if you order the instructions at compile time. Too bad in reality that has turned out to be impossible in the general case. Makes wonderful DSP/GPU/other stream processing chips though.
I note that Transmeta's Crusoe uses code morphing to achieve a kind of bipolar state. I don't really know if it is truly comfortable with VLIW or not. I think it is well within the realm of possibility that an ILP environment could offer this as a trusted feature: two-channel booting. That is, there are two channels defined and each boots an identical copy of an operating system to a steady ready state, hopefully from the same clock tick. Only one channel exposes itself as user space; the other acts as a dynamic checksum of the kernel space. Humans may be defined as left- or right-brained; why not computers? This is sort of like the parallel RAS proposed to deal with current issues with the quality of manufacture of RAM and the resulting reliability issues, also notwithstanding the amount of heuristics which are needed at machine level. I'll omit my theories of EM containment in near fields which affect RAM and cause bit flips for the moment. I think machine learning is real and that capacity planning does take this into account. Given the exponential growth of IT infrastructure in this wired world, the id, the ego and the superego of the internet are well documented. So IMHO when properly co-ordinated, the future becomes now, and even if it is only in my imagination, I can still have fun with it. Sent via K-9 Mail.

On Mon, Mar 16, 2015 at 12:23:58PM -0400, Scott Allen wrote:
Russell,
What's with the change of font sizes in the majority of your posts to this list? You have a small font on some paragraphs and then a larger font on others. Is this something you're doing intentionally?
Maybe it's just how it's presented on my end using the Gmail web reader. Anyone else seeing this?
Is there any way you can change to plain text only posts?
At least the plain text version in the multipart alternative is readable. -- Len Sorensen
participants (13)
- Anthony de Boer
- Christopher Browne
- Jamon Camisso
- Lennart Sorensen
- Loui Chang
- Mel Wilson
- phiscock@ee.ryerson.ca
- R Russell Reiter
- R. Russell Reiter
- Russell Reiter
- Scott Allen
- Walter Dnes
- William Witteman