That’s a nice way to word it, @Stephen. It’s exactly as we had been discussing.
However, I still have doubts about the approach whereby updating the data to make it compatible with a dependency necessarily increments the MAJOR version number. I think it somewhat contradicts this part of the pattern:
> PATCH version when you make backwards-compatible fixes, e.g.
> - Corrections to existing data
> - Changes to metadata
When you combine this pattern with the dependencies pattern, which explicitly models which version of the data the foreign keys depend on, an increment in the MAJOR version number of a dependency already seems explicit enough. The application can then decide whether it actually needs to use all of its dependencies.
If it does, and there is an increment in the MAJOR version number of any of its dependencies, it is already clear that there is a breaking change in the set of ‘the data plus all of its dependencies’.
On the other hand, if the application does not use all of its dependencies (e.g. if it does not need the fields that hold foreign keys), the change would not break it. The application can figure this out by looking at the declared dependencies and deciding whether it needs to dereference them. However, if the situation discussed here causes an increment to the MAJOR version number, the application cannot make use of the data, because the versioning system is signalling a breaking change even though the data would still be usable by the unmodified application.
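To make the argument concrete, here is a minimal sketch in Python of the decision an application could make for itself. All names here (the descriptor shape, `funders`, `is_breaking_for`) are hypothetical illustrations, not anything the specs define:

```python
# Hypothetical descriptor for grants.csv: its "dependencies" section
# models which version of each dependency the foreign keys point at.
GRANTS_DESCRIPTOR = {
    "name": "grants",
    "version": "2.3.1",
    "dependencies": {"funders": "1.0.0"},  # illustrative dependency
}

def major(version: str) -> int:
    """Extract the MAJOR component of a SemVer-style version string."""
    return int(version.split(".")[0])

def is_breaking_for(uses_foreign_keys: bool, pinned: str, available: str) -> bool:
    """A dependency's MAJOR bump only breaks an application that
    actually dereferences the foreign keys into that dependency."""
    return uses_foreign_keys and major(available) > major(pinned)

# An app that ignores the foreign-key fields can keep consuming grants.csv:
print(is_breaking_for(False, "1.0.0", "2.0.0"))  # False: not breaking
# An app that joins against the dependency must treat the bump as breaking:
print(is_breaking_for(True, "1.0.0", "2.0.0"))   # True: breaking
```

The point of the sketch is that the breaking/non-breaking decision depends on the consumer, which is information the dependencies pattern already exposes without forcing a MAJOR bump on grants.csv itself.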
So, maybe this situation should be labelled as a PATCH change in grants.csv. The specs could then let the application decide for itself whether it needs to make use of its dependencies, in which case a MAJOR version bump in any of those would be considered a breaking change.