> people want to believe that they can update to the latest OpenSSL and fix bugs without breaking things
Yeah. It's a bug in the culture, really, and culture is much harder to change than software.
> problem when you share data with code you didn't fully control the linking of
Yeah, the data model needs to be versioned too. You can't pass data between different versions of an application without risking a bug. The options I'm aware of are A) provide that loose-abstraction API and hope for the best, or B) provide versioned drivers that transform the data between versions as needed.
A is what we do today. B would be sort of like a stepwise upgrade, where to go from 6.3.1 to 9.0.0, you upgrade 6.3.1 -> 6.4.0 -> 7.0.0 -> 8.0.0 -> 9.0.0. For every version that modifies the data model, you'd write a new driver that deals only with the changes. When OpenSSL 6.3.1 writes data to a file, it would store v6.3.1 as metadata. When OpenSSL 9.0.0 reads it, it would first pass the data through all the drivers up to 9.0.0. When it writes, it would pass the data back through the drivers in reverse and store it as v6.3.1. To upgrade the data model version permanently, the program could snapshot the old data so you could restore it in case of problems. (Much of this is similar to how database migrations work, although with migrations, going backward usually isn't feasible.)
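To make option B concrete, here's a minimal sketch of that driver chain. Everything here is hypothetical (the version numbers, field names, and functions are invented for illustration; none of this is a real OpenSSL API): each driver handles only the delta between two adjacent versions, reads upgrade forward through the chain, and writes downgrade back through it in reverse.

```python
# Sketch of option B: versioned data-model migration drivers.
# All versions and field names are hypothetical examples.

PATH = ["6.3.1", "6.4.0", "7.0.0", "8.0.0", "9.0.0"]

def rename(d, old, new):
    """Return a copy of dict d with key `old` renamed to `new`."""
    return {new if k == old else k: v for k, v in d.items()}

# One driver per adjacent version pair; each deals only with the changes.
UP = {
    ("6.3.1", "6.4.0"): lambda d: rename(d, "cipher_name", "cipher"),
    ("6.4.0", "7.0.0"): lambda d: {**d, "bits": int(d["bits"])},
    ("7.0.0", "8.0.0"): lambda d: d,  # no data-model change in this release
    ("8.0.0", "9.0.0"): lambda d: {**d, "aead": d.get("aead", False)},
}
DOWN = {
    ("6.4.0", "6.3.1"): lambda d: rename(d, "cipher", "cipher_name"),
    ("7.0.0", "6.4.0"): lambda d: {**d, "bits": str(d["bits"])},
    ("8.0.0", "7.0.0"): lambda d: d,
    ("9.0.0", "8.0.0"): lambda d: {k: v for k, v in d.items() if k != "aead"},
}

def read(stored):
    """Upgrade a record from its stored version to the newest data model."""
    data, ver = dict(stored["data"]), stored["version"]
    i = PATH.index(ver)
    for frm, to in zip(PATH[i:], PATH[i + 1:]):
        data = UP[(frm, to)](data)
    return data

def write(data, target_version):
    """Downgrade the in-memory model back to the on-disk version."""
    i = PATH.index(target_version)
    for to, frm in zip(reversed(PATH[i:-1]), reversed(PATH[i + 1:])):
        data = DOWN[(frm, to)](data)
    return {"version": target_version, "data": data}
```

A v6.3.1 record round-trips: `read` walks it up through every driver to the 9.0.0 model, and `write(..., "6.3.1")` walks it back down, so the file stays readable by the old version. This is the part that's hard to do after the fact, though: writing `DOWN` requires knowing exactly what each version changed, which is why the drivers would have to come from whoever changed the data model.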
Who's going to write those migration drivers though? Not OpenSSL, because they don't think it's valid to link multiple versions of their library into the same executable. But it would also be hard for anyone else to write them, because the underlying incompatible data structures were supposed to be opaque to library users. Note that I'm talking about objects that only live in program memory; they're never persisted to disk.
This is the underlying problem: it's the software developers' philosophy and practice that are the limitation, not a technical thing. Doesn't matter if it's program memory or disk or an API or ABI, it's all about what version of X works with what version of Y. If we're explicit about it, we can automatically use the right version of X with the right version of Y. But we can't if the developers decide not to adopt this paradigm. Which is where we are today. :(