The advantage of bundles is that they contain more metadata than a simple tarball.
Compatibility problems might arise, but at least the application launcher could provide more
meaningful feedback to the user.
It could even integrate with the OS’ package management system, especially now that Linux has
PackageKit, to say “hey, the user wants to run this new bundle that needs libfoo-x.y; install
whatever package is necessary to provide that”.
You’d need a package management system that automatically scans built packages for the
libraries they provide; RPM does that, though I’m not sure whether DPKG does.
To provide context: the discussion was about how application vendors can easily target LSB 4 with a single binary image. My post was a direct response to another reader’s argument that complicated schemes do not add much real value over tarballs. I begged to differ, arguing that the metadata available in bundles makes integrating binary applications much easier.
The issue of binary distribution triggers an allergic reaction in some parts of the FLOSS community, a reaction that is, in my opinion, rather unwarranted. Even Debian provides, in its non-free repository, stub packages that download vendor binaries and build a standard .deb package out of them. There are clear advantages to making binary-only applications better behaved; in fact, they are the same advantages that justify package management systems with graph-based dependency tracking in the first place: dependency, dependency, dependency. When installing or upgrading a package, you want all its dependencies to be pulled in automatically. When upgrading a library, you want to be sure that all its dependents will still work. When there is a security vulnerability, you want a non-technical end user to be notified, preferably within a fixed period of the vulnerability being made public (through periodic updates), or the next time they launch the application concerned.
There have been attempts to create a one-size-fits-all universal package format that is distribution-independent and vendor-friendly. This is a red herring, IMHO, for the same reason that the Unix market splintered in the ’70s and ’80s, and that we have a proliferation of Linux distributions, plus multiple independent BSD operating systems, each with their own ports tree (DragonFly being an exception in that it shares NetBSD’s pkgsrc system). It’s nice to control your own packaging format, or, if it’s a shared format (like RPM is), to control the naming conventions and so on.
What application bundles can do is provide the best of both worlds: vendors can ship binary-only bundles that declare dependencies in a least-common-denominator format that the LSB can standardize, for example:
<Provides>
    <lib>libbaz-a.b</lib>
</Provides>
<Requires>
    <lsb-version>4.0</lsb-version>
    <bin>convert</bin>
    <lib>libfoo-x.y</lib>
    <lib>libbar-z.w</lib>
</Requires>
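A launcher could extract this metadata with a few lines of Python. The sketch below assumes the manifest is wrapped in a <bundle> root element (an assumption on my part; the LSB would have to define the actual envelope) and uses the element names from the example above:

```python
import xml.etree.ElementTree as ET

MANIFEST = """
<bundle>
  <Provides>
    <lib>libbaz-a.b</lib>
  </Provides>
  <Requires>
    <lsb-version>4.0</lsb-version>
    <bin>convert</bin>
    <lib>libfoo-x.y</lib>
    <lib>libbar-z.w</lib>
  </Requires>
</bundle>
"""

def parse_manifest(xml_text):
    """Return (provides, requires): dicts mapping dependency kind to names."""
    root = ET.fromstring(xml_text)

    def collect(section):
        node = root.find(section)
        out = {}
        if node is not None:
            for child in node:  # e.g. <lib>, <bin>, <lsb-version>
                out.setdefault(child.tag, []).append(child.text)
        return out

    return collect("Provides"), collect("Requires")

provides, requires = parse_manifest(MANIFEST)
print(requires["lib"])  # ['libfoo-x.y', 'libbar-z.w']
```

The dict-of-lists result keeps the kinds of dependencies (libraries, binaries, LSB version) separate, so the launcher can hand each kind to the appropriate resolver.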
The first time the bundle is launched, the launcher can add it to its index of available bundles. If any dependencies are missing, the system-native package manager (or a meta-management infrastructure such as PackageKit) is triggered to install them. The bundles themselves can be placed anywhere, though library bundles (“frameworks” in NeXTSTEP/OpenStep/OS X parlance) should probably be placed in predetermined paths, e.g. /Library/Frameworks, /System/Library/Frameworks and ~/Library/Frameworks.
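The launch-time check itself is trivial once the manifest is parsed. A minimal sketch, assuming the set of installed “provides” has already been read from the native package database (the function and variable names here are hypothetical):

```python
def missing_dependencies(required_libs, installed_provides):
    """Return the required library names that no installed package provides."""
    return [lib for lib in required_libs if lib not in installed_provides]

# Hypothetical state assembled from the native package database.
installed = {"libbar-z.w", "libbaz-a.b"}
required = ["libfoo-x.y", "libbar-z.w"]

missing = missing_dependencies(required, installed)
if missing:
    # At this point the launcher would hand the gap to PackageKit
    # (e.g. via its pkcon command-line frontend); shown as a comment
    # because the exact call depends on the frontend in use.
    print("would ask PackageKit to install:", missing)
# prints: would ask PackageKit to install: ['libfoo-x.y']
```

The point is that the launcher never needs distribution-specific knowledge: it only compares declared names against whatever the native database says is present.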
The only problem is that the system-provided libraries might not be ABI-compatible with the specified LSB standard: libraries written in C++, for example, after a compiler ABI change. The native packages would probably need to declare their compliance, or non-compliance, with LSB standards.
And one last nice thing about bundles: fat binaries. It’s easy to provide multi-arch bundles, and stripping away unwanted architectures is a simple rm operation.
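As a concrete illustration of that stripping step: assuming a hypothetical layout where each architecture lives in its own subdirectory under Contents/, removing the unwanted ones is just a recursive delete (the rm -rf equivalent):

```python
import os
import shutil
import tempfile

def strip_architectures(bundle_dir, keep):
    """Delete per-architecture subdirectories of the bundle, except those in `keep`.

    Assumes a layout where each architecture lives under
    <bundle>/Contents/<arch>/ -- a hypothetical convention for illustration.
    """
    contents = os.path.join(bundle_dir, "Contents")
    for arch in os.listdir(contents):
        if arch not in keep:
            shutil.rmtree(os.path.join(contents, arch))

# Build a toy fat bundle, then keep only x86_64.
root = tempfile.mkdtemp()
for arch in ("x86_64", "aarch64", "ppc"):
    os.makedirs(os.path.join(root, "Contents", arch))
strip_architectures(root, keep={"x86_64"})
print(sorted(os.listdir(os.path.join(root, "Contents"))))  # ['x86_64']
```

Because each architecture is self-contained in its own directory, no relinking or repackaging is needed afterwards; the bundle simply becomes thinner.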