[aws-ecr-assets] DockerImageAsset: Specify image tag #2663
Comments
I'd like this too. I saw the relevant code in aws-cdk/packages/@aws-cdk/aws-ecr-assets/lib/image-asset.ts (lines 39 to 44 at 09304f7).
Any updates on this request?
+1
Additional point: without lifecycle policies (#6692), and since specifying a repository name is now deprecated in favor of using a CDK-wide repository, there is currently no way to tell my images apart whatsoever: all I see are commit hashes. I have no viable way of cleaning up unused resources.
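For contrast, lifecycle rules are straightforward on a repository you manage yourself. A minimal CDK Python sketch (repository name and retention count are made-up placeholders), showing exactly the knob the shared assets repository does not expose:

```python
from aws_cdk import Stack, aws_ecr as ecr
from constructs import Construct


class RepoStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)

        # A user-managed repository with a lifecycle rule: expire everything
        # beyond the 20 most recent images. The shared CDK assets repository
        # created by the bootstrap stack offers no equivalent configuration.
        ecr.Repository(
            self,
            "MyServiceRepo",
            repository_name="my-service",  # illustrative name
            lifecycle_rules=[ecr.LifecycleRule(max_image_count=20)],
        )
```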
I don't think we will be able to support specifying an image tag for docker assets because all the assets go to the same ECR repository (created by the bootstrap stack) and are tagged according to their asset hash. I believe the solution to this use case is to use something like "ecr-deployment" (see the proposal in #12597), which will allow you to "deploy" a docker image to a user-defined ECR repository, and then you can control everything. @nason can you confirm that #12597 will address your use case and then we can close this as a duplicate?
It might be a little late for this thread, but I'd like to collect some use cases in here for this feature.
Not exactly sure what this means. If you say, "build [...] an image from a build-specific tag": is putting that tag in your Dockerfile not working for you?

If your request is actually about controlling the tags that Docker images get pushed TO: it's true, we don't allow that. The same way we push ZIPs under generated names to a single S3 bucket (the asset bucket from the bootstrap stack), we do the same for ECR images. The asset mechanism gets files from "your disk" into "a running CDK application", nothing more and nothing less. How we get them there is at our discretion and is an implementation detail you shouldn't rely on. As such, you also shouldn't rely on the repositories where images get pushed, nor on the tags they get there. If you want to push images to a well-known location for consumption by something other than your CDK app (again: what is the use case?), the solution would be to set up a CodePipeline that builds and pushes a Docker image to a dedicated ECR repository you control the lifecycle of. If not that, then an EcrDeployment construct as proposed in #12597 would also do it (although as of this writing that construct hasn't been written yet).
That is a fair point, and an unfortunate downside of the method we've chosen. Our plan for addressing this was going to be a garbage collector, which would clean unused files and images from the bootstrap resources. Please upvote issue #11071 to help us prioritize this work among everything else that people are asking for.
The AWS CLI allows specifying more than one image tag (see https://docs.aws.amazon.com/AmazonECR/latest/userguide/image-retag.html), so I could add a version tag to a Docker image uploaded by CDK to the aws-cdk/assets repository. Is it possible to implement support for optional image tags in DockerImageAsset that would supplement the auto-generated, asset-hash-based tag? These tags should be sufficient to enable image cleanup and management.
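For reference, the multi-tag capability from the linked docs boils down to re-putting an existing image manifest under an additional tag. A minimal boto3 sketch; the repository name and tags are made-up placeholders, and CDK does not do any of this for you today:

```python
import boto3

ecr = boto3.client("ecr")

# Placeholders for illustration; the repository and hash tag would normally be
# discovered from your deployment rather than hard-coded.
repository = "aws-cdk/assets"
existing_tag = "0123456789abcdef"   # the asset-hash tag CDK pushed
extra_tag = "my-service-v1.2.3"     # the human-friendly tag you want to add

# Fetch the manifest of the already-pushed image...
image = ecr.batch_get_image(
    repositoryName=repository,
    imageIds=[{"imageTag": existing_tag}],
)["images"][0]

# ...and put the same manifest back under an additional tag. ECR stores one
# image with two tags; no layers are duplicated.
ecr.put_image(
    repositoryName=repository,
    imageManifest=image["imageManifest"],
    imageTag=extra_tag,
)
```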
@rareelement please take a look at cdk-ecr-deployment. It should address your use case. I am also closing this as a duplicate of #12597.
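For later readers, usage of cdk-ecr-deployment looks roughly like the sketch below, assuming the package is installed as cdk-ecr-deployment and imported as cdk_ecr_deployment; the destination repository and tag are made-up placeholders:

```python
from aws_cdk import Stack, aws_ecr as ecr
from aws_cdk.aws_ecr_assets import DockerImageAsset
from constructs import Construct
import cdk_ecr_deployment as ecrdeploy


class ImageCopyStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)

        # Build the image as a regular CDK asset (hash-tagged, bootstrap repo)...
        asset = DockerImageAsset(self, "MyImage", directory="./image")

        # ...and create a repository whose lifecycle and tags you fully control.
        repo = ecr.Repository(self, "MyRepo", repository_name="my-service")

        # Copy the asset image into your repository under a tag you choose.
        ecrdeploy.ECRDeployment(
            self,
            "CopyImage",
            src=ecrdeploy.DockerImageName(asset.image_uri),
            dest=ecrdeploy.DockerImageName(f"{repo.repository_uri}:v1.2.3"),
        )
```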
@eladb Is cdk-ecr-deployment officially supported by AWS? Anyway, it is not going to help with my use case of keeping the aws-cdk/assets repository slim, unless the expectation is that the aws-cdk/assets repository will be regularly and completely cleaned (which is not realistic in most companies with multiple teams/services). This now-closed issue is different from #12597. If ECR supports multiple image tags, I don't see why CDK cannot support them as well, and why AWS users should have to incur additional costs by duplicating Docker images.
@rareelement thanks for the additional context. I'll reopen the issue and we'll discuss solutions.
@phuwin95 wrote:
Seems like a bug. Care to raise a separate issue for it?
Hey. I'm using image.asset_hash, in case that helps you avoid using only latest. Sample Python code below:
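A minimal sketch of that idea, assuming the image is consumed from an ECS task definition (directory and construct names are illustrative):

```python
from aws_cdk import Stack, aws_ecs as ecs
from aws_cdk.aws_ecr_assets import DockerImageAsset
from constructs import Construct


class ServiceStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)

        asset = DockerImageAsset(self, "MyImage", directory="./image")

        # asset.asset_hash is the tag the image is pushed under, so referencing
        # "<repository>:<asset_hash>" pins the exact build instead of :latest.
        task = ecs.FargateTaskDefinition(self, "TaskDef")
        task.add_container(
            "app",
            image=ecs.ContainerImage.from_ecr_repository(
                asset.repository, tag=asset.asset_hash
            ),
        )
```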
@eladb I found that in CDK v1 you could use `ACCOUNT_ID.dkr.ecr.REGION.${Token[AWS.URLSuffix.X]}/aws-cdk/assets:ASSET_HASH`, so I was able to use something like this:

```python
codebuild_project = codebuild.Project(
self,
f"{self.service}-project",
project_name=f"{self.service}-project",
build_spec=BuidlSpecHelper.get_buildspec(
file_location="buildspec.yml",
),
environment=codebuild.BuildEnvironment(
build_image=codebuild.LinuxBuildImage.from_ecr_repository(image.repository, image.image_uri.split(":")[-1]),
compute_type=codebuild.ComputeType.SMALL,
),
logging=codebuild.LoggingOptions(
cloud_watch=codebuild.CloudWatchLoggingOptions(
log_group=logs.LogGroup(
self,
f"{self.service}-codebuild-log-group",
log_group_name=f"/aws/codebuild/{self.service}",
retention=logs.RetentionDays.TWO_WEEKS,
),
)
),
)
```

Unfortunately, this no longer works in CDK v2, since `image.image_uri` is now an unresolved token (`${Token[TOKEN.X]}`). That also leads to an issue similar to the one @phuwin95 mentioned, because the `split(":")[-1]` runs on the unresolved token and returns the whole image URI, so the resulting image reference is wrong:

```json
...
"testservicetestserviceprojectXXXXX": {
"Type": "AWS::CodeBuild::Project",
"Properties": {
"Artifacts": {
"Type": "NO_ARTIFACTS"
},
"Environment": {
"ComputeType": "BUILD_GENERAL1_SMALL",
"Image": {
"Fn::Join": [
"",
[
"ACCOUNT_ID.dkr.ecr.us-west-2.",
{
"Ref": "AWS::URLSuffix"
},
"/cdk-CDK_ID-container-assets-ACCOUNT_ID-REGION:",
{
"Fn::Sub": "ACCOUNT_ID.dkr.ecr.REGION.${AWS::URLSuffix}/cdk-CDK_ID-container-assets-ACCOUNT_ID-REGION:ASSET_HASH"
}
]
]
},
...
```

It also needs an unexpected refactor in the code when migrating, using what you suggested here or something like this:

```python
codebuild_project = codebuild.Project(
self,
f"{self.service}-project",
project_name=f"{self.service}-project",
build_spec=BuidlSpecHelper.get_buildspec(
file_location="buildspec.yml",
),
environment=codebuild.BuildEnvironment(
build_image=codebuild.LinuxBuildImage.from_ecr_repository(image.repository, image.asset_hash),
compute_type=codebuild.ComputeType.SMALL,
),
logging=codebuild.LoggingOptions(
cloud_watch=codebuild.CloudWatchLoggingOptions(
log_group=logs.LogGroup(
self,
f"{self.service}-codebuild-log-group",
log_group_name=f"/aws/codebuild/{self.service}",
retention=logs.RetentionDays.TWO_WEEKS,
),
)
),
)
```

I already found the issue while migrating the stack and have a solution, but I'm not sure if that output was expected or is part of a bug; in any case it should maybe be documented or reported somewhere else. What do you think, @eladb?
So it looks like this has spread out to a couple of different issues. One is managing your own ECR repository, which is enabled via #12597 and the ecr-deployment module. The other is keeping the CDK assets ECR repository clean, which is being tracked as "garbage collection" in #6692. Closing in favor of those in order to consolidate discussion and tracking.
I still think there's a lot of merit in the original proposal. CDK assets are quite powerful; they helped us get rid of our Docker build setup that ran independently of the CDK build. That setup required a lot of maintenance: cleaning up images, setting permissions correctly, and the extra overhead of maintaining the build pipeline for the Docker images. With that in mind, I would love to keep using CDK assets, specifically Docker assets! They take care of permissions easily, our pipeline has gotten simpler, and cleaning up images after they are no longer in use is handled by this nifty toolkit cleaner.

However, it is really hard to find the assets you're looking for when you need to inspect something. For example, fixing vulnerabilities is a pain in the ass when all I have is a random SHA256 hash from ECR to go on. I have to scour through my build logs to see where it was built. I sometimes use Docker Scout to inspect the image, but that still doesn't give me a reference to the actual time and config that were used when creating the image. I can imagine several use cases that would benefit a lot from being able to tag Docker image assets yourself.

These are all indeed fixable by maintaining your own Docker repository and using something like the ECR deployment module, but like I said, there is a lot of value in keeping those images as part of the CDK assets repository. Therefore, I would like to propose reopening this issue and seeing if we can implement it anyway.
@MrArnoldPalmer could you consider my comments above? :) I don't think my post has a lot of visibility because the issue is closed.
I think this is now possible with https://docs.aws.amazon.com/cdk/api/v2/docs/app-staging-synthesizer-alpha-readme.html?
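A minimal sketch of what that might look like, assuming the alpha module is installed as aws-cdk.app-staging-synthesizer-alpha and its API still matches the linked readme; the app_id is a made-up placeholder:

```python
import aws_cdk as cdk
from aws_cdk.app_staging_synthesizer_alpha import AppStagingSynthesizer

# Per-app staging resources: assets (including Docker images) go to an
# app-specific staging repository instead of the shared bootstrap one,
# which at least lets you tell one app's images from another's.
app = cdk.App(
    default_stack_synthesizer=AppStagingSynthesizer.default_resources(
        app_id="my-app",  # illustrative identifier
    )
)

cdk.Stack(app, "MyStack")
app.synth()
```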
We are using a DockerImageAsset with an ECS ContainerDefinition, and would like to build, tag, and deploy an image from a build-specific tag (i.e. not latest).

I see that #2334 made deploy-time context hashes from the image available recently. Is there a way to accomplish this with those? I'm getting stuck trying, and can't seem to specify an image SHA in the container definition, only a tag.

Would it make sense to tag+push DockerImageAsset with these hashes and/or user-provided tags? https://github.com/awslabs/aws-cdk/blob/master/packages/aws-cdk/lib/docker.ts#L64

I'd be happy to open a PR if this is a welcome feature! Also, thank you for this great tooling, it's been a pleasure to work with on my projects recently 💯
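For context, the wiring the issue describes looks roughly like this in current CDK; a sketch with made-up directory and construct names, where the pushed tag is still derived from the asset hash rather than anything the user chooses:

```python
from aws_cdk import Stack, aws_ecs as ecs
from aws_cdk.aws_ecr_assets import DockerImageAsset
from constructs import Construct


class AppStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)

        # The image is built locally and pushed to the bootstrap assets
        # repository, tagged with its asset hash; there is no parameter here
        # for a custom tag.
        asset = DockerImageAsset(self, "AppImage", directory="./app")

        task = ecs.FargateTaskDefinition(self, "TaskDef")
        task.add_container(
            "app",
            image=ecs.ContainerImage.from_docker_image_asset(asset),
        )
```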