Re: [GTALUG] BOOST moved into an IDE using APT-Get

Hello all, Kevin Cozens and/or Stewart C. Russell wrote:
I didn't know anything about Boost until I had to deal with it as a set of dependencies on something I wanted to compile. Some of the imaging libraries I use as part of my document filing system use Boost. Thankfully, all of them can be coerced to use library versions installed by: sudo apt install libboost-all-dev I wouldn't want to venture further than that.
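For reference, the packaged route looks something like this. This is a sketch for Debian/Ubuntu systems: the version dpkg reports will vary by release, and `myapp.cpp` is a hypothetical stand-in for whatever you are compiling.

```shell
# Install the whole packaged Boost development set (Debian/Ubuntu).
sudo apt install libboost-all-dev

# See which Boost version the distro shipped.
dpkg -s libboost-dev | grep '^Version'

# Link a program against packaged Boost libraries, e.g. Boost.Filesystem.
g++ -std=c++11 myapp.cpp -lboost_filesystem -lboost_system -o myapp
```

The convenience is that the headers and shared libraries land in the default search paths, so no `-I`/`-L` flags are needed; the trade-off is being pinned to whatever Boost version the distro release froze.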
Dan says: The official BOOST site has some disparaging commentary on repackaging efforts of various sorts. I think if your target system is x86, then apt-get is probably a fine choice. But if you have to cross-compile, or worry about updates, or want to enable non-specific improvements in the indefinite future of any calling app, then the hard way is the easy way!

I'm just leaning into asio now, the sockets and file system interface package. I have a commercial app running with all of it made from Berkeley sockets, and this is OK. But I think the customer base is hankering for more security (not just HTTP), and all the stuff I don't understand is probably in there someplace. So as a contingency, I'm constructing the alternative while the first few hundred users squeak like guinea pigs for now.

regs Dan

my ref: boost, gtalug

Dan K via talk wrote:
The official BOOST site has some disparaging commentary on repackaging efforts of various sorts. I think if your target system is x86, then apt-get is probably a fine choice. But if you have to cross-compile, or worry about updates, or want to enable non-specific improvements in the indefinite future of any calling app, then the hard way is the easy way!
You can apt-get stuff onto all manner of different architectures with no problem; Debian-style packages are built for quite a range. The philosophy there (and with most desktop/server-ish Linux distros) is that software is compiled on its native architecture, so the source code gets farmed out to a hodgepodge of different boxes to compile for ARM, MIPS, PowerPC, and whatever else they want to support.

Five years ago a bunch of us on this list visited Seneca/York for the launch of Fedora-for-ARM and saw the equipment used to compile that distro on that architecture: a few dozen PogoPlugs and similar zip-tied to racks, with a few "normal" servers providing swap-over-NFS to solve the not-enough-RAM-on-those-bitty-computers problem. Multiply that by the number of distros and the number of architectures, because I'm sure the others are using similar lash-ups.

Note also that though the Linux distros came from x86, that flavour is getting long in the tooth and the default for many a year now has been AMD64/x86-64. Improved small hardware means things like a Raspberry Pi are more than powerful enough to compile their own code natively, although desktop software is starting to assume 64-bit with huge wodges of memory and becomes problematic on 32-bit, even with more RAM than most of us could have afforded 20 years ago.

Meanwhile, the traditional embedded assumption has been that the resources of the target are not sufficient for native compilation, so that world leans much more on cross-compilers hosted on a reasonably beefy workstation or server. You can even mix and match, with an upgraded target device for native compiles but a cross-compiler for developers who need a quick edit-recompile-test loop on one binary, or need to be able to do that on a laptop in the field.
Another "gotcha" is that some software wasn't written to cross-compile and has build scripting that _will_ assume the build environment is the target, so if you have any pieces like that then a native compile is the easier route. -- Anthony de Boer
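To make the cross-versus-native distinction concrete: a cross build typically just swaps the compiler for a target-triplet-prefixed one. This sketch assumes a Debian-style `arm-linux-gnueabihf` toolchain is installed and uses a hypothetical `hello.c` as a stand-in.

```shell
# Native compile: build machine and target share an architecture.
gcc hello.c -o hello

# Cross-compile for 32-bit ARM from an x86-64 host.
arm-linux-gnueabihf-gcc hello.c -o hello-arm

# Autoconf-style packages take the target as --host; build scripts
# that ignore this and probe the build machine instead are exactly
# the "gotcha" described above.
./configure --build=x86_64-linux-gnu --host=arm-linux-gnueabihf
make
```

`file hello-arm` is a quick way to confirm the output really is an ARM binary rather than one for the build host.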

Hello to all. Anthony de Boer said:
[ I ]
You can apt-get stuff onto all manner of different architectures with no problem; Debian-style packages are built for quite a range. And the philosophy there (and with most desktop/server-ish Linux distros) is that software is compiled on its native architecture, so the source code gets farmed out to a hodgepodge of different boxes to compile for ARM, MIPS, PowerPC, and whatever else they want to support.
[ II ]
Note also that though the Linux distros are coming from x86, nowadays that's getting long in the tooth and the default flavour for many a year now has been AMD64/x86-64. Improved small hardware means things like a Raspberry Pi are more than powerful enough to compile their own code natively, although desktop software is starting to assume 64-bit with huge wodges of memory and becomes problematic on 32-bit even with more RAM than most of us could afford 20 years ago.
[ III ]
You can even mix-and-match, with an upgraded target device for native compiles but a cross-compiler for developers who need a quick edit-recompile-test loop on one binary, or need to be able to do that on a laptop in the field. Another "gotcha" is that some software wasn't written to cross-compile and has build scripting that _will_ assume the build environment is the target, so if you have any pieces like that then a native compile is the easier route.
Thanks for the synopsis of this stuff. Regarding this, we haven't really decided long term whether the SBCs we have made and provisioned should be almost bare metal without an operating system, or should carry ftp and telnet and "everything" so they can be fixed remotely. So for the first few hundred (made this week) we split the difference, as in [ III ] above. Even trusting libraries on the target versus linking -static is pretty much undetermined.

So if I read "cross-compiles leave more open-ended future-proofing if you wear scuba flippers while compiling", I'd put on a pair just to be safe. Some famous-sounding BOOST write-up said that using only subsets of the prepared work is a little uncool, so I have really tried to do as told. I'm using BOOST just a little, but all over. Maybe the next sweetening will use asio, the sockets kit. I'm trying to stay ahead of user requirements and, while the first version is in use, prepare work as a contingency. I don't like the hand-made sockets stuff I wrote, but it does seem to work. Cross-compiling with compatibility for C++11 is not trivial.

Regards to all, Dan
participants (2)
- Anthony de Boer
- dank@enggrp.com