
I'm surprised to see "GCC as shipped by RHEL 7" used as an argument downthread. If you need to use an older GCC then you can equally well use an older Boost.


Exactly!

People point out that Red Hat provides newer compilers (and newer, patched C++ standard libraries layered on the default one!) and he says "It's not us, it's our customers. And no, they won't use any compiler that isn't the system default".

Either you only use the system defaults, and that includes the old Boost included with RHEL, or you don't. I guess he includes a private Boost copy as part of his project. Well, include a private GCC too then. A dependency is a dependency.

Those customers have silly requirements for no reason. The whole world shouldn't make efforts to accommodate them.

I am going to bet those customers are going to argue about "stability". All this comes from people reading "stable" as "rock solid, never crashes, always works as expected" when in the case of RHEL it actually means "it doesn't change, no new bugs introduced, and you can rely on the old bugs staying there". Those customers just need educating.


The customer is king: when you need them, it's the one trying to make money who has to accommodate.

When not, others will.


The customer is king indeed, but why do they expect/want to use the latest Boost if their infra is old? Use the latest version your infra supports, simple!


The customer

- "Wants" "stable", i.e. the default gcc

- Doesn't care what Boost version is used

- Pays the developer

The developer

- Wants the latest toolchain

- Wants the latest Boost

- Makes money

The developer

- Accepts using an old compiler because Boost keeps compatibility with it

- Does not pay Boost developers for it

There is no single entity that wants an old compiler (other than to satisfy others) and new Boost.


You can often have multiple dependencies.

Library A uses Boost and requires an older version of GCC. Library B uses Boost and requires the newer version of Boost. You want to use libraries A and B in the same project, what now?


> Library B uses Boost and requires the newer version of Boost.

but the same question applies to Library B: if your regulations state that you can't update your compiler version past the default distro one, why can you bring in some random recent libraries that are definitely not part of the distro since they depend on a Boost version that is more recent than the one your distro provides?

Of course this is all very stupid when you can install GCC 11 and compile in C++20 mode on RHEL 7 with the official devtoolsets...

But the core problem is, as always, tying compilers to Linux distros, as if a C++ compiler version were relevant in any way to your operating system's stability...


It's not a question of regulations. You either need to use the old library with the new compiler or the new library with the old compiler.

But the old one is crufty and barely maintained and nobody wants to touch it, and the new one is only using one feature of the new version of Boost, so it's easier to blacklist the newer version of Boost than to overhaul all the old code. But is that what we wanted to cause?

Moreover, with widely used core libraries like this, that sort of thing happens repeatedly, and now the downstream users have to do work they wouldn't have had to do if compatibility was maintained. At scale probably a lot more work than it would be for the widely used thing to maintain compatibility. That seems bad.


If you've chosen to use an ancient OS, with an obsolete compiler, you should not expect to be able to use anything new with it. Just keep the mothballed code as-is. Maybe find an equally old version of library B if you must.

Or pay RedHat to backport library B to the old Boost for you :)


Also you can almost certainly use a newer compiler to target old OSes.

The real problem is when you have a big pile of ancient, unmaintained code that uses long-deprecated stuff. At that point, no, you really can't expect to seamlessly interoperate with new code.


Like Nodejs, just have both, dedupe when possible, static compilation, still fight over getting both libraries to co-operate with different Boosts, rage a bit, curse thee, thy name is dependency hell!


That can be made to work. It is not pleasant or easy but it is technically possible to compile different parts of the code with different compilers and library versions and link them together.

The vast majority of the time it is just not worth it.


> what now?

Build each one as a separate shared library and wrap each one with a C interface?


You can expose a C++ interface, as long as all the types used are ABI stable (i.e. std library types, but not Boost types).

I have used many libraries that use Boost internally (often in a custom namespace) but do not expose it in the API.


I've never worked in a big C++ code base or had that issue, but could you put library B in a namespace?


EDIT: I'm not sure the below answers the question as asked, but it does clarify why A and B probably want to use the same version of boost.

In practice A and B would each have their own namespaces in C++ codebases, but that wouldn't resolve the tension if each wanted a different version of boost. One approach to resolve that tension is to figure out how to have two versions of boost in the same dependency tree. The below is addressing that proposal.

---

Practically, no. You could certainly create a new namespace for C++ names: functions, classes, global variables, and so on.

But there are other "names" in C++ that don't respect C++ namespaces: C symbols, included headers, preprocessor names, and library names (as in `-lfoobar` on a link line). You'd need to make up new names for all of these and possibly a few more things to make a full duplicate of a library.

Now, if you managed to do all that, there are still problems to watch out for. For instance, it's common for projects to assume that global or static names in a C++ namespace can be treated as unique process-wide values in a running program. As in, `mynamespace::important_mutex` might guard access to specific hardware, so having a `mynamespace2::important_mutex` also thinking it has the same job would be bad.

And if that wasn't a problem, you still have to think about types that show up in APIs. How will downstream code be written when you have a `boost::string_ref` and a `boost2::string_ref` in the same codebase? Which `string_ref` do you use where? Can you efficiently convert from one to the other as needed? Will that require changing a lot of downstream code?


The only sane solution, for libraries that need wide backward and forward compatibility, is to expose only ABI/API-stable types in the interface, but Boost doesn't. You can still use Boost internally, but make the symbols private and/or put them in a private namespace.

At the limit a stable interface is a C interface, but it doesn't have to be. GCC's std types are fairly stable, and Qt manages a rich interface while maintaining robust ABI compatibility. It is hard work, and not always worth it, of course.


Narrowing the interface helps, but the other "interface" is how the linker resolves names to specific addresses of code or data. The example I mention involving mutexes does not require that those mutexes show up in public interfaces or necessarily "break" ABI guarantees. The mutexes don't even have to be used by the same source files! I guess you could consider it a library design flaw, but it's basically never mentioned as a design antipattern if it is one.

Note that it's not just mutexes. The same can happen with other kinds of "one per process" resources: memory pools, thread pools, connection pools, caches, registry objects, etc.


Yeah, I've had to do this with other dependencies (which we didn't have source for), including an old or broken version of the same library we needed. It's a bit of a pain to get everything into a namespace, and of course there's the bloat in the executable.

Even more fun when two dependencies both use different versions of the same lib.

I much prefer bringing everything into our source tree up front and doing the build ourselves rather than just linking a prebuilt lib but sometimes you don't have that option.


Do you mean have library B built with its own separate copy of the new version?

e.g. You have Library A using LibDependency-1.0.0 and Library B using a separately compiled LibDependency-2.0.0? Then have MyAwesomeApp linking LibA and LibB and just accept the binary+memory overhead of two copies (albeit different versions) of LibDependency?


Probably, unless you need to share LibDependency data structures between 1.0.0 and 2.0.0, in which case, it depends on the implementation of LibDependency.

Rather than trying to do this at the language level with namespaces or whatever, it's probably easier to compile and link each version of the problem dependency into a separate library (static or dynamic), then to make sure each of your own libraries and executables only directly links to one version of the problem library.

This way, you don't have to rename or namespace anything, because conflicting names will only be exposed externally if you're linking to the problem library dynamically, in which case you should be able to arrange for the names to be qualified by the identity of the correct version of the problem library at dynamic load time (how to ensure this is platform-specific).


I've never tried it, but I read that a use for namespaces was taking a library and wrapping all the #include statements in namespace library {} or whatever to avoid one stepping on another. Depending on how the library is written (and if it's all in source form rather than .lib files) I guess it should work?


The problem comes from what happens when you're trying to use LibDependency from your own code; say, for example, you have something like:

    LibDependency.h:
      typedef struct SomeOpaqueLibDependencyType * SomeOpaqueLibDependencyTypeRef;
      SomeOpaqueLibDependencyTypeRef MakeTheThing();
      void UseTheThing(SomeOpaqueLibDependencyTypeRef);

   
    LibraryA:
      ...
      SomeOpaqueLibDependencyTypeRef getSomething();

    LibraryB:
      ...
      void doSomething(SomeOpaqueLibDependencyTypeRef);

    MyAwesomeApp:
      LibraryB.doSomething(LibraryA.getSomething()) // pseudocode
The problem is that because LibraryA and LibraryB have distinct copies of LibDependency, the source/API compatible type you're using may have an incompatible internal structure.

As a library author there are things you can do for ABI compatibility, but they all basically boil down to providing a bunch of non-opaque API types that carry some kind of versioning (either an explicit version number or, slightly implicitly, a size field, which IIRC is the MS standard). You also have opaque types where the exposure of details is more restricted: generally just an opaque pointer, or maybe a pointer to a type that has a vtable (either an automatic one or a manually constructed one). In general, use of the non-opaque portions of the API is fairly restricted because they have ABI implications, so a user of the library communicates by passing that non-opaque data to the library, while the library provides largely opaque results along with an ABI-stable API that can be used to ask questions of an otherwise opaque type.

This works in general, and it means you don't have to rebuild everything from scratch any time you update anything. It breaks down however when you have different versions of the same library in the same process. The problem is that while you see a single opaque type, it's not opaque to the library itself so while an opaque type from two different versions of a library may look the same to you, the implementation may differ between the two versions. Take a hypothetical:

    LibDependency.h:

      typedef struct OpaqueArray *ArrayRef;
      struct ArrayCallbacks {
        int version;
        size_t elementSize;
        void (*destroyElement)(void *);
        void (*copyElement)(void *, const void*);
      };
      ArrayRef ArrayCreate(const struct ArrayCallbacks*, size_t);
      void ArraySet(ArrayRef, size_t, void*);
      void *ArrayGet(ArrayRef, size_t);
which is a kind of generic, vaguely ABI-stable-looking thing (I'm in a comment box, assume real code would have more thought/fewer errors), but let's imagine a "plausible" v1.0

    LibDependency-1.0.c:
      struct InternalArrayCallbacks {
        void (*destroyElement)(void *);
        void (*copyElement)(void *, const void*);
      };
      struct OpaqueArray {
        struct InternalArrayCallbacks callbacks;
        size_t elementSize;
        char buffer[];
      };
      ArrayRef ArrayCreate(const struct ArrayCallbacks* callbacks, size_t size) {
        size_t allocationSize = sizeof(struct OpaqueArray) + size * callbacks->elementSize;
        struct OpaqueArray *result = malloc(allocationSize);
        /* initialize result, copy appropriate callbacks, etc */
        return result;
      }
      void ArraySet(ArrayRef array, size_t idx, void* value) {
        array->callbacks.copyElement(array->buffer + idx * array->elementSize, value);
      }
etc.

Now v1.1 says "oh maybe we should bounds check":

    LibDependency-1.1.c:
      ...
      struct OpaqueArray {
        struct InternalArrayCallbacks callbacks;
        size_t elementSize;
        size_t maxSize;
        char buffer[];
      };
      ...
      void ArraySet(ArrayRef array, size_t idx, void* value) {
        if (idx >= array->maxSize) abort();
        array->callbacks.copyElement(array->buffer + idx * array->elementSize, value);
      }
There's been no source change, no feature change, and from a library/OS implementor's PoV no ABI change, but if I had an ArrayRef from the 1.0 implementation and passed it somewhere that used the 1.1 implementation, or vice versa, the result would be sadness.

As a library implementor there's a lot you have to do and/or think about to ensure ABI stability, and it is manageable, but more or less all of the techniques break down when the scenario is "multiple versions of the same library inside a single process".


We have used two boost versions built in different namespaces. Most of it works but there can be Fun if two now-independent versions of the same function use the same resource.


https://developers.redhat.com/products/developertoolset/

this greatly extends what can be considered supported by RHEL 7 (unless the requirement is "don't install any packages")


One big use case for Boost is to provide alternative implementations of standard containers, and sometimes even of language features that whatever toolchain you need to use doesn't have yet. Being bound to an ancient Boost version as well kind of limits the use you can get out of that.


You can build a new compiler on RHEL7 and ship the runtime libstdc++ with your program. We did that in the days of RHEL5, it was a bit of a hassle but it worked.



