Flatpak: Anyone with Experience or Opinions on It?

Hello,

I've seen this new technology (or not new... I'm not sure... it appears to be a new name for xdg-app) pop up on various forums. I had a look at their web site (http://flatpak.org/index.html#page-top) but haven't really investigated it with any rigour. It looks like it may have been developed by people associated with Fedora and may be a replacement for RPM, APT, and the like (https://en.wikipedia.org/wiki/Flatpak).

In any case, has anyone on this list looked at this or used it? Is it any good? Is it a good replacement for RPM or APT? Am I off-track here asking that question?

Thanks,
Brad
--
Brad Fonseca
Mobile: 416-876-2191
XMPP: brad.fonseca@blah.im
XMPP (alt): brad.fonseca@xmpp.dk

| Subject: [GTALUG] Flatpak: Anyone with Experience or Opinions on It?
|
| It looks like it may have been developed by people associated with
| Fedora and may be a replacement for RPM, APT, and the like
| (https://en.wikipedia.org/wiki/Flatpak).
|
| In any case, has anyone on this list looked at this or used it? Is it
| any good? Is it a good replacement for RPM or APT? Am I off-track here
| asking that question?

I have not (knowingly) used this technology. But here's my take anyway (based on guess-work!):

Packaging like dpkg and rpm is fine but the result is tied to a release of a distro. The main reason is that the shared libraries are partially bound into the binary package. But there are other subtle and annoying differences between distros that are visible to programs.

Packaging little virtual machines is way more portable but somewhat expensive and awkward. This is reasonable for a service but not for most programs.

Various folks have tried to find a middle ground. The barriers are market buy-in, not technology. There are two that I'm aware of:

- Canonical's "snap"
- "flatpak", sponsored by Red Hat

As usual, Canonical appears to me to be more of a control freak than Red Hat. Naturally, my preference is for the more open one, flatpak. But I'm not 100% on-board. I like old-fashioned packages (mostly).

The idea behind both, as I understand it, is to create a sandbox for each application. I think that each application carries around its own libraries etc. I hate the duplication implied.

There is also a non-technical idea. Since flatpaks are universal among Linux distros, the distro would no longer be in the business of distributing the package. And no longer standing behind it for bug fixes and security vetting.

Win: you are no longer tied to the (possibly old) version that your distro includes. Example: python and perl projects seem to want to run repos for users that are orthogonal to distros, something this model would better support.
Win: projects no longer have to package their stuff for the various distros. They no longer have to match the downstream release cadence.

Lose: you have to be a connoisseur of each project from which you take flatpaks. Each one has a different development and QA process with possibly unknowable risks and rewards.

Lose: when a bug is found and fixed in a shared library, each distro can fix it with one update. But each flatpak and snap that uses that library needs to be re-issued and updated on each installation.

Lose: I obsessively do updates. On each of my Fedora systems, that amounts to "sudo dnf update". On Windows, it is annoyingly more intricate:

    check for updates (note: this must be repeated until no change happens)
    for x in firefox chrome Adobe Reader etc.
        perform x's check-for-updates procedure
    Microsoft Applications Store: check for downloads

With flatpaks, Linux becomes more like Windows. Win or lose?

python 2 must die. But with flatpaks or snaps, each program has its own universe so we don't need a flag day when everyone switches. But everyone should switch. Yesterday.

Right now I'm fairly comfortable with Fedora and Red Hat's QA. I don't really want to take that on myself. Except for a few packages about which I have special knowledge or concerns.

As an example of the role of distros, consider the Linux Kernel. It used to be common for folks to take the Linus kernel and build it on their own machine and use it in place of their distro's kernel. It wasn't too hard. Linus went to some trouble to make sure a release was clean. I infer that things have changed. All distros take a raw release and fix it up before shipping it. And you want those fixes. It's not impossible to build a Linus kernel and use it but it is probably not worthwhile.
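The Windows-style check-until-no-change procedure above is a fixed-point loop. Here is a runnable toy model of just that control flow; the pending-update count is invented and no real package manager is involved:

```shell
#!/bin/sh
# Toy model of the "repeat until no change happens" update loop.
# 'pending' pretends three rounds of updates are queued.
pending=3

check_for_updates() {
    # Report success (0) while something changed, failure (1) once clean.
    if [ "$pending" -gt 0 ]; then
        pending=$((pending - 1))
        return 0
    fi
    return 1
}

rounds=0
while check_for_updates; do
    rounds=$((rounds + 1))
done
echo "passes until clean: $rounds"
```

By contrast, a distro's package manager (and, to be fair, a single `flatpak update`) collapses the whole loop into one command; the per-app loop only returns if each application ships its own updater.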

On 03/11/17 11:33 AM, D. Hugh Redelmeier via talk wrote:
| Subject: [GTALUG] Flatpak: Anyone with Experience or Opinions on It?
| It looks like it may have been | developed by people associated with Fedora and may be a replacement for | RPM, APT, and the like (https://en.wikipedia.org/wiki/Flatpak). | | In any case, has anyone on this list looked at this or used it? Is it | any good? Is it a good replacement for RPM or APT? Am I off-track here | asking that question?
I have not (knowingly) used this technology. But here's my take anyway (based on guess-work!):
Packaging like dpkg and rpm is fine but the result is tied to a release of a distro. The main reason is that the shared libraries are partially bound into the binary package. But there are other subtle and annoying differences between distros that are visible to programs.
Packaging little virtual machines is way more portable but somewhat expensive and awkward. This is reasonable for a service but not most programs.
Various folks have tried to find a middle ground. The barriers are market buy-in, not technology. There are two that I'm aware of:
- Canonical's "snap"
- "flatpak" sponsored by Red Hat
There's actually an NP-complete problem hiding in the process of avoiding needing two different versions of a library, and numerous people have tried to avoid it by

- inventing an OS-level mechanism
- moving individual instances of the problem into different virtual machines
- restricting the danger area to only one language

This has motivated part of the flatpak and related work; for more info than you wanted, see also https://leaflessca.wordpress.com/2017/02/12/dll-hell-and-avoiding-an-np-comp...

--dave
--
David Collier-Brown,         | Always do right. This will gratify
System Programmer and Author | some people and astonish the rest
davecb@spamcop.net           | -- Mark Twain

I've long found it disappointing the way shared libraries are dealt with in Linux and other OSs.

To me, the obvious solution is to install every library into a directory named for the version, or name the library itself with a version number. Then, if you wish, a default version can be chosen and linked/symlinked into the default directory.

That way, a program that wants a particular version gives the compiler/linker the appropriate search path to find the preferred version.

And of course, once you've built a working version, you can choose to statically link it rather than use shared libraries.

We solved this in the obvious way at UWaterloo in the late 1980's (as part of xhier). I still shake my head that this kind of thing is still a problem.

Cheers!

John

On Fri, 2017/11/03 11:50:55AM -0400, David Collier-Brown via talk <talk@gtalug.org> wrote:
| This has motivated part of the flatpak and related work: for more info
| than you wanted, see also
| https://leaflessca.wordpress.com/2017/02/12/dll-hell-and-avoiding-an-np-comp...
|
| --dave
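The per-version-directory scheme described above can be sketched with plain directories and symlinks. The prefix and library names below are invented for illustration, not the actual xhier layout:

```shell
#!/bin/sh
# Sketch of per-version install directories with a symlinked default.
prefix=$(mktemp -d)

# Each release gets its own directory, named for the version.
mkdir -p "$prefix/lib/libfoo-1.2" "$prefix/lib/libfoo-2.0" "$prefix/lib/default"
touch "$prefix/lib/libfoo-1.2/libfoo.so" "$prefix/lib/libfoo-2.0/libfoo.so"

# The chosen machine-wide default is just a symlink.
ln -s ../libfoo-2.0/libfoo.so "$prefix/lib/default/libfoo.so"

# A program that wants the default links against -L"$prefix/lib/default";
# one that pins 1.2 points its search path (and rpath) at that version's
# directory instead, e.g.:
#   cc main.c -L"$prefix/lib/libfoo-1.2" -lfoo -Wl,-rpath,"$prefix/lib/libfoo-1.2"
readlink "$prefix/lib/default/libfoo.so"
```

Flipping the default for the whole machine is then a one-line symlink change, while pinned programs are unaffected.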

On Fri, Nov 3, 2017 at 1:04 PM, John Sellens via talk <talk@gtalug.org> wrote:
I've long found it disappointing the way shared libraries are dealt with in linux and other OSs.
To me, the obvious solution is to install every library into a directory named for the version, or name the library itself with a version number. Then, if you wish, a default version can be chosen and linked/symlinked into the default directory.
That way, a program that wants a particular version gives the compiler/linker the appropriate search path to find the preferred version.
How do you ensure security updates happen everywhere, or that you are not linking to an insecure version? What about old software which is no longer maintained? Also, isn't work duplicated?

Dhaval

Those are not problems which are specific to linking to/using particular versions of libraries. How do you ensure that security updates of commands and configuration files happen? It's not a new or different problem.

One can choose to use the default version, which by implication will be the latest and greatest version that is installed on the machine. And your program/package will get updates as they are installed.

If you use a particular version of the library:
- a local admin can choose to accept the risk
- a package maintainer can label the package risky, and/or delete/disable/deprecate the package
- a program maintainer can update the code to use the new version

One can't abdicate responsibility for security by assuming that your binary will run with a secure version of a library.

Cheers

John

On Fri, 2017/11/03 01:09:47PM -0400, Dhaval Giani <dhaval.giani@gmail.com> wrote:
| How do you ensure security updates happen everywhere, or that you are
| not linking to an insecure version? What about old software which is
| no longer maintained? Also, isn't work duplicated?
|
| Dhaval

On Fri, Nov 03, 2017 at 01:09:47PM -0400, Dhaval Giani via talk wrote:
How do you ensure security updates happen everywhere, or that you are not linking to an insecure version? What about old software which is no longer maintained? Also, isn't work duplicated?
Exactly. That is why we don't want multiple versions of the library around. We want to get rid of the buggy version with a big security problem. Containers and things like flatpak and snap just make this problem much worse. They are essentially reinventing static linking, which is not a good place to go back to.

--
Len Sorensen

On 03/11/17 01:09 PM, Dhaval Giani wrote:
On Fri, Nov 3, 2017 at 1:04 PM, John Sellens via talk <talk@gtalug.org> wrote:
I've long found it disappointing the way shared libraries are dealt with in linux and other OSs.
To me, the obvious solution is to install every library into a directory named for the version, or name the library itself with a version number. Then, if you wish, a default version can be chosen and linked/symlinked into the default directory.
That way, a program that wants a particular version gives the compiler/linker the appropriate search path to find the preferred version.
How do you ensure security updates happen everywhere, or that you are not linking to an insecure version? What about old software which is no longer maintained? Also, isn't work duplicated?
Dhaval
I very much like the library maintainers (meaning me!) adding a new version to an interface whenever it has to change. Then the caller by default gets the current one, with the security bugs fixed, but can specify an old one if they need different semantics. (Or, horror of horrors, if they currently depend on the bug.)

We did that a lot inside Sun, and the Linux glibc maintainers do too: it made my work a _lot_ easier when stuff was changing quickly.

--dave
--
David Collier-Brown,         | Always do right. This will gratify
System Programmer and Author | some people and astonish the rest
davecb@spamcop.net           | -- Mark Twain
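On Linux the mechanism behind this is GNU symbol versioning: a linker version script plus `.symver` directives. Here is a minimal sketch with invented names (`libfoo`, `foo_init`), not any real library's map:

```
# libfoo.map -- GNU ld version script; each node names one interface version
FOO_1.0 { global: foo_init; local: *; };
FOO_2.0 { global: foo_init; } FOO_1.0;

/* foo.c -- both implementations stay inside the library */
int foo_init_old(void);   /* semantics FOO_1.0 callers were built against */
int foo_init_new(void);   /* current semantics, with the bug fixed */
__asm__(".symver foo_init_old,foo_init@FOO_1.0");  /* old binaries bind here */
__asm__(".symver foo_init_new,foo_init@@FOO_2.0"); /* @@ marks the default */
```

Built with something like `cc -shared -fPIC foo.c -Wl,--version-script=libfoo.map -o libfoo.so`, existing binaries keep getting the FOO_1.0 behaviour they were linked against, while newly linked programs pick up FOO_2.0 by default.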

John Sellens via talk wrote:
I've long found it disappointing the way shared libraries are dealt with in linux and other OSs.
To me, the obvious solutions is to install every library into a directory named for the version, or name the library itself with a version number. Then, if you wish, a default version can be chosen and linked/symlinked into the default directory.
That way, a program that wants a particular version gives the compiler/linker the appropriate search path to find the preferred version.
And of course, once you've built a working version, you can choose to static link it rather than use shared libraries.
We solved this in the obvious way at UWaterloo in the late 1980's (as part of xhier). I still shake my head that this kind of thing is still a problem.
Speaking largely from my experience with Node.js, this doesn't work so well in practice. It becomes a major headache for "pr developers"[0] and users.

pr developers have to use libraries without being 100% sure what the current feature set and known bugs are. This creates a bunch of patch libraries to back-port existing features and bug fixes to the project, and then you have dependency hell.

Users don't like it because it requires extra space to have multiple versions of the same library. Especially when it comes to minor version releases. And it's getting quite common for people to run out of disk space again (the average laptop SSD is ~500GB, and photos, videos, games, and even applications have gotten a lot larger in the past five years).

The only place this type of system works is in monolithic repos where the entire source is under one umbrella, ala Facebook and Google (and maybe Mozilla).[1][2][3]

[0]: pr developers - people who aren't major contributors on a project but will submit a Pull Request.
[1]: <https://cacm.acm.org/magazines/2016/7/204032-why-google-stores-billions-of-lines-of-code-in-a-single-repository/fulltext>
[2]: <https://gregoryszorc.com/blog/2014/09/09/on-monolithic-repositories/>
[3]: <https://code.facebook.com/posts/218678814984400/scaling-mercurial-at-facebook/>

| From: Myles Braithwaite 👾 via talk <talk@gtalug.org>
|
| John Sellens via talk wrote:
| > To me, the obvious solution is to install every library into a directory
| > named for the version, or name the library itself with a version number.
| > Then, if you wish, a default version can be chosen and linked/symlinked
| > into the default directory.

Sure. But I'd rather have correctness than stability.

Library users should code to a contract (in Unix, the manpage). As time goes on, the contract gets stronger so old programs continue to work. If they don't work, that is a bug somewhere. Fix it. I know: easier said than done. Much easier said than done. But problems pushed to the future tend to snowball.

The Python case does not fit this model. Python 3's contract is not strictly stronger than Python 2's contract. So code in Python 2 has to be ported to Python 3. In this case, having both versions around is a good idea. And they should not have an identical name.

This is easier with older stable facilities. Like C and its libraries. It is harder for things that don't separate the contract from the code. Not a good model.

| Speaking largely from my experience with Node.js, this doesn't work so
| well in practice. It becomes a major headache for "pr developers"[0]
| and users.
|
| pr developers have to use libraries without being 100% sure what the
| current feature set and known bugs are. This creates a bunch of patch
| libraries to back-port existing features and bug fixes to the project,
| and then you have dependency hell.

Wow. How horrible. How unprofessional. Don't do that.

| Users don't like it because it requires extra space to have multiple
| versions of the same library.

A recipe for horrible security flaws. The Equifax breach? A Struts library that was out of date and bound into something or other. Ditto for the CRA downtime in the run-up to tax deadlines this year.

| Especially when it comes to minor version
| releases.
| And it's getting quite common for people to run out of disk
| space again (the average laptop SSD is ~500GB and photos, videos, games,
| and even applications have gotten a lot larger in the past five years).

Does that average laptop contain Node.js? n copies of Node.js?

Boy, required resources have grown a lot. For my Altair I wrote a monitor, including a text editor, assembler, debugger, eprom burner, and audio cassette I/O package, that fit in 7K. Self-hosting: it could reassemble itself and burn new eproms with the result.

My new Razer keyboard says it needs 200MB of space for MS Windows drivers (mostly to flash lights, I think). It seems to work fine on Linux without a driver but I would like to turn off the green lights on each key.

| The only place this type of system works is in monolithic repos where
| the entire source is under one umbrella, ala Facebook and Google (and
| maybe Mozilla).[1][2][3]

And, perhaps, a Linux distro.

On Fri, Nov 03, 2017 at 03:01:04PM -0400, D. Hugh Redelmeier via talk wrote:
My new Razer keyboard says it needs 200MB of space for MS Windows drivers (mostly to flash lights, I think). It seems to work fine on Linux without a driver but I would like to turn off the green lights on each key.
15 years ago I encountered an HP all-in-one printer that insisted you had to install all 600MB of software on the CD to print. You could uninstall most of it afterwards, since it was useless scanning software, OCR crap, etc., but the installer insisted you had to have it all.

--
Len Sorensen

As an example of the role of distros, consider the Linux Kernel. It used to be common for folks to take the Linus kernel and build it on their own machine and use it in place of their distro's kernel. It wasn't too hard. Linus went to some trouble to make sure a release was clean. I infer that things have changed. All distros take a raw release and fix it up before shipping it. And you want those fixes. It's not impossible to build a Linus kernel and use it but it is probably not worthwhile.
Define: all distros :-). Fedora keeps updating to the latest mainline (stable) and is quite aggressive in its upgrade schedule. RHEL, SLES (and other enterprise distros), not so much. RHEL and SLES have different stability models (which require them to backport a lot of patches). Canonical sticks to one release for its entire release life (which puts them somewhere in between).

The kernel is actually a really terrible example for flatpaks because it doesn't have dependencies like other packages.

Dhaval

| From: Dhaval Giani via talk <talk@gtalug.org>
|
| > As an example of the role of distros, consider the Linux Kernel. It
| > used to be common for folks to take the Linus kernel and build it on
| > their own machine and use it in place of their distro's kernel. It
| > wasn't too hard. Linus went to some trouble to make sure a release
| > was clean. I infer that things have changed. All distros take a raw
| > release and fix it up before shipping it. And you want those fixes.
| > It's not impossible to build a Linus kernel and use it but it is
| > probably not worthwhile.
|
| Define: all distros :-).

I think that my description covers them all. That does not in any way imply that all distros do identical things.

| Fedora keeps updating to the latest mainline
| (stable) and is quite aggressive in its upgrade schedule. RHEL, SLES
| (and other enterprise distros) not so much. RHEL and SLES have
| different stability models (which require them to backport a lot of
| patches). Canonical sticks to one release for its entire release life
| (which has them somewhere in between).

Sure. I didn't know that about Ubuntu. RHEL seems too conservative for me. The policy might be because some customers create kernel code that might break if an internal kernel API changes, and Linus reserves the right to change that.

| The kernel is actually a really
| terrible example of flatpaks because it doesn't have dependencies like
| other packages.

Perhaps. But it is an example of the role of distros (all I claimed). As I said at the start, I don't have any experience with flatpaks or snaps (to my knowledge).
participants (7)
- Brad Fonseca
- D. Hugh Redelmeier
- David Collier-Brown
- Dhaval Giani
- John Sellens
- lsorense@csclub.uwaterloo.ca
- Myles Braithwaite 👾