ECR - What is the registry and what is a repository?

Trying to familiarize myself with ECR - I understand that you should be able to push as many repositories as you like to a Docker registry.
However, ECR has a concept of a 'repository'.
So if I have 10 different containers comprising my app, does that mean I need 10 repositories in the registry?
Can I have one repository and push 10 different containers with their 'latest' tags?
Currently, if I tag another image with the same {registry}/{repository_name} pattern, it replaces the latest tag on my other image:
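For illustration (a reconstructed sketch; the account, region, and image names are placeholders):
docker tag app-a 123456789012.dkr.ecr.us-east-1.amazonaws.com/myapp:latest
docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/myapp:latest
docker tag app-b 123456789012.dkr.ecr.us-east-1.amazonaws.com/myapp:latest
docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/myapp:latest
After the second push, latest points at app-b and the app-a image is left untagged.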

If you want the detailed descriptions, I would check out the What Is Amazon EC2 Container Registry? page, which describes the components in detail, but the high-level difference is this: each account has a registry, and each registry can contain several repositories. Each repository can contain several images. An image can have several tags, but a tag can only exist once per repository.
If you look at the reference to a repository:
[account].dkr.ecr.[region].amazonaws.com/[repository_name]
The part in front of the first / is your registry; the part after it is your repository.
So what you're experiencing here is that by pushing a second image to the same repository, you're changing the reference that the latest tag is pointing to.
If you want multiple distinct images, each with its own latest tag, each one should have its own repository. Based on the pricing for ECR, you only pay for storage and data transfer from an ECR repo, so there's little benefit to not creating additional repos.
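As a minimal sketch with the AWS CLI (the repository and image names are placeholders, and this assumes the CLI is configured for your account and region):
aws ecr create-repository --repository-name myapp/service-a
docker tag service-a:latest [account].dkr.ecr.[region].amazonaws.com/myapp/service-a:latest
docker push [account].dkr.ecr.[region].amazonaws.com/myapp/service-a:latest
Repeat per image; each repository then keeps its own independent latest tag.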

Related

How does CodePipeline's ECR source action detect a change to an image in ECR?

I am setting up a pipeline with an Amazon ECR source to an ECS deploy, and have been following the steps in the tutorial here.
My issue is that when my private ECR repository is updated with a Docker image, the pipeline is not triggered. I am not applying the latest tag to the image, just a semantic versioning tag which includes a build number and a short Git commit hash, e.g.:
myserver:b21-6d22b379a
myserver:b20-c90b134a
etc.
In the Image Tag option in the ECR source action it says: Choose the image tag that triggers your pipeline when a change occurs in the image repository.
If I leave it blank and just specify the ECR repository name, such as myserver, will it look for a new image only if the latest tag is moved to another image with a different SHA digest in ECR?
Or is it smart enough to detect the change in ECR based on the timestamp + SHA digest of a new image even if the image did not have the latest tag applied?
I want to avoid using the latest image tag, as with an ECS Fargate cluster my understanding is that a new container will simply pull the latest tag irrespective of whether CodeDeploy has published a new task definition with a new image tag.
So how does one specify the Image & Tag in the ECR source action if not using the latest tag on the Docker image in ECR? Does it require a fixed tag to be used for the auto deploy from ECR to ECS to work?
As per https://docs.aws.amazon.com/codepipeline/latest/userguide/action-reference-ECR.html#action-reference-ECR-config :
If a value for ImageTag is not specified, the value defaults to latest
So if you leave "ImageTag" empty, only images with the "latest" tag will get deployed.
And yes, it requires a fixed tag for deployment to work. As such, the use of tags is limited: you can, for example, use tags to ensure a specific image only gets deployed in a specific environment (e.g. images tagged "staging" get deployed in the staging environment, and images tagged "production" get deployed in the production environment), but you can't use tags dynamically.
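To illustrate, the ECR source action's configuration in a pipeline definition might look like the following fragment (a sketch based on the action reference linked above, not a complete pipeline; the repository name and tag are placeholders):
{
  "name": "Source",
  "actionTypeId": { "category": "Source", "owner": "AWS", "provider": "ECR", "version": "1" },
  "configuration": { "RepositoryName": "myserver", "ImageTag": "staging" },
  "outputArtifacts": [ { "name": "SourceArtifact" } ]
}
A push of a new image digest to the staging tag would then trigger the pipeline.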

Sharing Travis configuration among multiple repositories

We develop some custom elements using Polymer. Each element lives in a separate repository under one organization. All of them have the same Travis config file, so it would be logical to define and edit it in one place.
Is there any way to set up only one .travis.yml file for all repositories under a specific organization?
Is there a way to specify a default Travis config for an organization?
I couldn't find any information about the topic in the Travis docs.
There is no mechanism for this (that I'm aware of) provided by the Travis service itself. I've 'solved' this by scripting the update of the .travis.yml in my family of modules from a template in a higher-level repository.
This is now possible (still in beta though): https://docs.travis-ci.com/user/build-config-imports
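As a sketch of what those docs describe (the organization, repository, file, and branch names here are placeholders), each element's .travis.yml could import a shared config:
import:
  - source: my-org/travis-shared:common.yml@main
    mode: deep_merge
The shared common.yml in the my-org/travis-shared repository then holds the configuration common to all elements.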

Git Repository is not visible on docker hub for automatic build

I already have 2 automatic builds on Docker Hub with Dockerfiles hosted on GitHub. They are working great. My problem is that now I want to use a Dockerfile in a GitHub repository which is not my own, but of which I am an admin and member with full access. I can see several other repositories on the Docker Hub page when I try to create a new automatic build. They are very similar to the one I want to use, but the one I want to use is not listed there, although I have full access to it. I read through the documentation from Docker Hub and I also logged in and out. Furthermore, I delinked my GitHub account and relinked it (with write permissions). So my question is: how can I make the other repository visible on Docker Hub in order to create an automatic build?

Can you share Docker Images uploaded to Google Container Registry between different accounts?

We'd like to have a separate test and prod project on the Google Cloud Platform but we want to reuse the same docker images in both environments. Is it possible for the Kubernetes cluster running on the test project to use images pushed to the prod project? If so, how?
Looking at your question, I believe by account you mean project.
The command for pulling an image from the registry is:
$ gcloud docker pull gcr.io/your-project-id/example-image
This means that as long as your account is a member of the project that the image belongs to, you can pull the image from that project into any other project that your account is a member of.
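If the pull is done by the test cluster's nodes rather than by your own account, a common approach is to grant the test project's service account read access to the prod project's registry storage (a sketch with placeholder project and account names; gcr.io images for a project are stored in that project's artifacts bucket):
gsutil iam ch serviceAccount:test-nodes@test-project.iam.gserviceaccount.com:objectViewer gs://artifacts.prod-project.appspot.com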
Yes, it's possible, since the container images are stored on a per-project basis.

Deploy a Docker image without using a repository

I'm building a Docker image on my build server (using TeamCity). After the build is done I want to take the image and deploy it to some server (staging, production).
All tutorials I have found either
push the image to some repository where it can be downloaded (pulled) by the server(s), which in small projects introduces additional complexity, or
use a Heroku-like approach and build the images "near" or at the machine where they will be run.
I really think that nothing special should be done at the (app) servers. Images, IMO, should act as closed, self-sufficient binaries that represent the application as a whole and can be passed between the build server, testing, QA, etc.
However, when I save a standard NodeJS app based on the official node repository, it is 1.2 GB. Passing such a file from server to server is not very comfortable.
Q: Is there some way to export/save and "upload" just the changed parts (layers) of an image via SSH without introducing the complexity of a Docker repository? The server would then pull the missing layers from the public hub.docker.com in order to avoid the slow upload from my network to the cloud.
Investigating the content of a saved tarfile, it should not be difficult from a technical point of view. The push command does basically just that - it never uploads layers that are already present in the repo.
Q2: Do you think that running a small repo on the Docker host that I'm deploying to, in order to achieve this, is a good approach?
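For reference, the baseline full-image transfer the question wants to improve on can be done with standard commands (host and image names are placeholders):
docker save myapp:latest | gzip | ssh user@staging 'gunzip | docker load'
This ships every layer, which is exactly the 1.2 GB problem described above; the open question is limiting the transfer to changed layers.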
If your code can live on GitHub or Bitbucket, why not just use Docker Hub automated builds for free? That way, on your node you just have to docker pull user/image. The GitHub repository and the Docker Hub automated builds can both be private, so you don't have to expose your code to the world. Although you may have to pay for more than one private repository or build.
If you do still want to build your own images, then when you run the build command you see output similar to the following:
Step 0 : FROM ubuntu
---> c4ff7513909d
Step 1 : MAINTAINER Maluuba Infrastructure Team <infrastructure@maluuba.com>
---> Using cache
---> 858ff007971a
Step 2 : EXPOSE 8080
---> Using cache
---> 493b76d124c0
Step 3 : RUN apt-get -qq update
---> Using cache
---> e66c5ff65137
Each of the hashes, e.g. ---> c4ff7513909d, is an intermediate layer. You can find folders named with that hash at /var/lib/docker/graph, for example:
ls /var/lib/docker/graph | grep c4ff7513909d
c4ff7513909dedf4ddf3a450aea68cd817c42e698ebccf54755973576525c416
As long as you copy all the intermediate layers to your deployment server, you won't need an external Docker repository. If you are only changing one of the intermediate layers, you only need to recopy that one for a redeployment. Notice that each of the steps listed in the Dockerfile leads to an intermediate layer, so as long as you only change the last line in the Dockerfile, you will only need to upload one layer. Therefore I would recommend putting your ADD code line at the end of your Dockerfile:
ADD MyGeneratedCode /var/my_generated_code
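Putting that together, a Dockerfile ordered this way (mirroring the build steps shown above, with the answer's placeholder path) would be:
FROM ubuntu
MAINTAINER Maluuba Infrastructure Team <infrastructure@maluuba.com>
EXPOSE 8080
RUN apt-get -qq update
ADD MyGeneratedCode /var/my_generated_code
Only the final ADD layer changes between builds, so only that layer needs to be copied to the deployment server.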