This repository has been archived by the owner on May 3, 2022. It is now read-only.
The current spec includes all artifacts used to deploy a bundle within the invocation image. This means that any change to a Helm chart, a Compose file, or an OPA bundle requires the invocation image to be rebuilt. It also prevents a common invocation image from being shared across different bundles, or from being signed by an external entity.
The proposal is to reduce invocation images to just the binaries, and possibly the mix-ins, required to perform a deployment, separating them from any other artifact used for that deployment.
Scenario: Azure, AWS, and Google each make invocation images available for deploying to their respective clouds. These cloud-specific invocation images have mix-ins packaged within them. The invocation images are signed by the respective clouds, so consumers know exactly what each image is capable of.
A developer wishes to deploy an app that includes Kubernetes, storage, IP addresses, and a Redis Cache instance.
They reference the cloud provider's invocation image.
The Helm chart and ARM template are also referenced in the bundle.
When the bundle is "built", the Helm chart is pushed to the registry as an individual OCI artifact type. The ARM template can be pushed as its own OCI artifact type, or it could be pushed as a cnab.artifact, meaning CNAB provides for generic artifacts that don't yet have an OCI-aware CLI.
An OCI Index is created that references these artifacts, along with a reference to the invocation image and the images used to run the app in AKS.
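As a rough sketch of what such an index could look like — the `io.cnab.*` annotation keys, media types, and digests below are placeholders for illustration, not values defined by the spec:

```json
{
  "schemaVersion": 2,
  "mediaType": "application/vnd.oci.image.index.v1+json",
  "manifests": [
    {
      "mediaType": "application/vnd.oci.image.manifest.v1+json",
      "digest": "sha256:…",
      "annotations": { "io.cnab.role": "invocation-image" }
    },
    {
      "mediaType": "application/vnd.oci.image.manifest.v1+json",
      "digest": "sha256:…",
      "annotations": { "io.cnab.role": "helm-chart" }
    },
    {
      "mediaType": "application/vnd.oci.image.manifest.v1+json",
      "digest": "sha256:…",
      "annotations": { "io.cnab.role": "arm-template" }
    }
  ]
}
```

Each entry resolves independently, which is what lets the chart or template change without touching (or re-signing) the invocation image manifest.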
Any CNAB configuration information is placed in the invocation image's config. Alternatively, the specific CNAB config for this bundle could be a separate artifact as well.
By separating these elements out, we get:
- Builds don't actually require `docker build`. In most cases, a developer is referencing existing invocation images and only iterating on the artifacts used for the deployment.
- Invocation images are well known, signed, and approved by the different clouds, or by the IT shops of different companies. Consumers don't have to worry about unknown code executing at deployment time.
- By referencing mix-ins within an invocation image, the execution environment can understand the intent of the CNAB and decide whether this bundle should have specific mix-in rights. If not, it can be blocked before a partial deployment is started — a state that is near impossible to roll back.
- Referencing artifacts as external entities allows re-use, as multiple bundles may reference the same Helm chart, OPA bundle, Singularity image, Terraform template, ...
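The mix-in gating point above can be sketched in a few lines. This is a hypothetical policy check, not anything from the CNAB spec: the mix-in names and the `authorize` helper are assumptions, illustrating only that because a bundle declares its mix-ins up front, an environment can reject it before any deployment begins.

```python
# Hypothetical pre-deployment policy check. Because a bundle declares which
# mix-ins its invocation image carries, the execution environment can decide
# up front, before anything is partially deployed.
ALLOWED_MIXINS = {"helm", "arm"}  # mix-ins this environment permits (assumed names)

def authorize(bundle_mixins):
    """Return the set of disallowed mix-ins; an empty set means the bundle may run."""
    return set(bundle_mixins) - ALLOWED_MIXINS

# A bundle asking only for helm + arm passes the gate...
assert authorize(["helm", "arm"]) == set()
# ...while one that also wants terraform is blocked before deployment starts.
assert authorize(["helm", "terraform"]) == {"terraform"}
```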
Hey @SteveLasker, thanks tons for the issue. Before I think about this as deeply as I want to, I want to clarify the current problems the issue sees. I read above the following issues with the current implementation:
Any change to invocation image will cause the bundle to be rebuilt.
…and that seems to be it for current problems with the spec.
On the positive side, if the invocation image were anything other than a specific image (as it is currently), that would enable a complete injection experience and lots of sharing/storage benefits. I can see these, but the first thing I want to point out is that, so far as I can see, there's now nothing of consequence to sign: no coherent statement can be made about whatever the injected item is, nor about what the runtime does with it.
If I am understanding things correctly — and I expect others to help me here — we gain the benefits of separation of concerns while losing one of the most critical things: any certainty in the specification about signatures for both imported and exported (thick) bundles. Functionally, that would drop the certainty-of-signature aspect down onto the tooling. Each tool could in fact make strong guarantees, but only for itself, not for other tools.
So, first question is simple: Do I understand the knock-on effects of the proposal correctly?