List available docker tags when creating a release - azure-devops

Backstory:
We have a web app that creates batch jobs in Azure using Docker images. In the application configuration there is a parameter that defines which version of the Docker image the batch job should use. In our current setup we need to manually change the parameter whenever we deploy a new version of the Docker image.
What I want to do is choose which Docker image to use when I create a release for the web app. I already have a working release pipeline where I manually type in which version of the Docker image I want to use, but I would like to be able to choose from the available Docker images in the repository. The Docker images are built in Azure DevOps and we have a tag on each build with the version number.
Is it possible to achieve this?
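As far as I know there is no built-in dropdown that is populated from a registry, but two building blocks get close: a script step that lists the registry's tags, and (in YAML pipelines) a runtime parameter chosen at queue time. A minimal sketch, assuming the images live in an Azure Container Registry named myregistry under a repository named mywebapp (both names are placeholders):

parameters:
  - name: imageTag
    displayName: Docker image tag to deploy
    type: string
    default: latest

steps:
  # List the tags currently available in the registry, newest first
  - script: az acr repository show-tags --name myregistry --repository mywebapp --orderby time_desc --output table
    displayName: List available image tags
  # Use whichever tag was entered when the run was queued
  - script: echo "Deploying mywebapp:${{ parameters.imageTag }}"
    displayName: Deploy with the chosen tag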

Related

Use output from GitHub Actions container image build to feed tag value

Synopsis
My overarching objective is to automate the build and publish of a container image to the GitHub Packages registry (and/or DockerHub). I have already set up a project that accomplishes this; however, there's a "gotcha."
I want to use output from the container image build process to update the tag that's assigned to the container image.
For example:
ghcr.io/pcgeek86/aws-powershell:<versionNumber>
The Caveat
Unfortunately, the GitHub Action for Docker does a build and a push simultaneously. Hence, I am unable to capture the output from the container image build, parse the version number, and then update the environment variable containing the new tag value.
Question
Is there a way to separate the container 1) build and 2) push into separate steps, so that I can capture the build output and use that to modify the container image tag, before it's pushed up to the GitHub Packages registry?
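One way to get this split is to use plain docker commands in the workflow: build first, compute the tag from the built image, and only then push. A minimal sketch; the "version" label is an assumption (it presumes the Dockerfile records its version with something like LABEL version=1.2.3), and GHCR_TOKEN is a placeholder secret name:

jobs:
  build-and-push:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # 1) Build only - nothing is pushed yet
      - name: Build image locally
        run: docker build -t localbuild:latest .
      # 2) Capture the version from the built image's labels
      - name: Extract version from the image
        id: ver
        run: echo "version=$(docker inspect --format '{{ index .Config.Labels "version" }}' localbuild:latest)" >> "$GITHUB_OUTPUT"
      # 3) Re-tag with the captured version and push
      - name: Tag and push
        run: |
          echo "${{ secrets.GHCR_TOKEN }}" | docker login ghcr.io -u ${{ github.actor }} --password-stdin
          docker tag localbuild:latest ghcr.io/pcgeek86/aws-powershell:${{ steps.ver.outputs.version }}
          docker push ghcr.io/pcgeek86/aws-powershell:${{ steps.ver.outputs.version }}

The same split should also be possible with docker/build-push-action by leaving push set to false on the build step; the plain-docker version just makes the capture step explicit.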

How to manage software updates on docker-compose with one machine per user architecture?

We are deploying a Java backend and React UI application using docker-compose. Our Docker containers are running Java, Caddy, and Postgres.
What's unusual about this architecture is that we are not running the application as a cluster. Each user gets their own server with their own subdomain. Everything is working nicely, but we need a strategy for managing/updating machines as the number of users grows.
We can accept some down time in the middle of the night, so we don't need to have high availability.
We're just not sure what would be the best way to update software on all machines. And we are pretty new to Docker and have no experience with Kubernetes or Ansible, Chef, Puppet, etc. But we are quick to pick things up.
We expect to have hundreds to thousands of users. Each machine runs the same code but has environment variables that are unique to the user. Our original provisioning takes care of that, so we do not anticipate having to change those with software updates. But a solution that can also provide that ability would not be a bad thing.
So, the question is, when we make code changes and want to deploy the updated Java jar or the React application, what would be the best way to get those out there in an automated fashion?
Some things we have considered:
Docker Hub (concerns about rate limiting)
Deploying our own Docker repo
Kubernetes
Ansible
https://containrrr.dev/watchtower/
Other things that we probably need include GitHub Actions to build and update the Docker images.
We are open to ideas that are not listed here, because there is a lot we don't know about managing many machines running docker-compose. So please feel free to offer suggestions. Many thanks!
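For what it's worth, the last item on that list (Watchtower) is usually added as one more service in each machine's docker-compose.yml; it then polls the registry and restarts containers when a newer image appears. A minimal sketch based on its documented usage (the poll interval is an arbitrary choice):

services:
  watchtower:
    image: containrrr/watchtower
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    command: --interval 300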
In your case I advise you to use Kubernetes in combination with CD tools. One of them is Buddy. I think this is the best way to make such updates in an automated fashion. Of course you can use just Kubernetes, but with Buddy or another CD tool you will make it faster and easier. In my answer I describe Buddy, but there are many popular CD tools for automating workflows in Kubernetes, for example GitLab or CodeFresh.io - pick whichever is actually best for you. Take a look: CD-automation-tools-Kubernetes.
Every time you update your application code or Kubernetes configuration, you have two ways to update your cluster: kubectl apply or kubectl set image. With Buddy you can automate these updates and avoid most of the steps below (executing the kubectl apply or kubectl set image commands yourself) by doing a simple push to Git.
Such a workflow most often looks like this:
1. Edit application code or the configuration .yml file
2. Push changes to your Git repository
3. Build a new Docker image
4. Push the Docker image
5. Log in to your K8s cluster
6. Run kubectl apply or kubectl set image to apply the changes to the K8s cluster (see the command sketch below)
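For reference, steps 5-6 as plain commands look roughly like this; the context, deployment, and image names are placeholders:

kubectl config use-context my-cluster
kubectl apply -f deployment.yml
kubectl set image deployment/my-app my-app=registry.example.com/my-app:v2

kubectl apply re-applies the whole manifest, while kubectl set image only bumps the container image, so you would normally run one or the other, not both.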
Buddy is a CD tool that you can use to automate your whole K8s release workflow, for example:
managing Dockerfile updates
building Docker images and pushing them to the Docker registry
applying new images on your K8s cluster
managing configuration changes of a K8s Deployment
etc.
With Buddy you will have to configure just one pipeline.
With every change in your app code or the YAML config file, this tool will apply the deployment and Kubernetes will start transforming the containers to the desired state.
Pipeline configuration for running Kubernetes pods or jobs
Assume that we have an application on a K8s cluster and its repository contains:
source code of our application
a Dockerfile with instructions on creating an image of your app
DB migration scripts
a Dockerfile with instructions on creating an image that will run the migration during the deployment (db migration runner)
In this case, we can configure a pipeline that will:
1. Build the application and migration images
2. Push them to Docker Hub
3. Trigger the DB migration using the previously built image. We can define the image, commands, and deployment in a YAML file (a Job sketch follows below).
4. Use either Apply K8s Deployment or Set K8s Image to update the image in your K8s application.
You can adjust the above workflow to fit your environment and your application's properties.
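As a rough illustration of step 3, the migration runner can be expressed as a Kubernetes Job that the pipeline applies once per deployment; every name and command below is a placeholder:

apiVersion: batch/v1
kind: Job
metadata:
  name: db-migration
spec:
  backoffLimit: 2
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: migration-runner
          # the db migration runner image built in step 1
          image: myaccount/db-migration-runner:1.2.0
          command: ["./run-migrations.sh"]

Once the Job completes, the pipeline's final step updates the application Deployment to the new image.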
Buddy supports GitLab as a Git provider. Integration of the two tools is easy and only requires authorizing GitLab in your profile. Thanks to this integration you can create pipelines that will build, test, and deploy your app code to the server. Of course, if you are already using GitLab, there is no need to set up Buddy as an extra tool, because GitLab is itself a CD tool for automating workflows in Kubernetes.
You can find more information here: buddy-workflow-kubernetes.
Read also: automating-workflows-kubernetes.
As it turns out, we found that a paid Docker Hub plan addressed all of our needs. I appreciate the excellent information from @Malgorzata.

Automated deployment of Docker images for a PHP application

I am working on an automated Azure build for a Docker application.
I need to connect to the container registry, pull the images from it, and push them to a Docker Swarm resource deployed in Azure.
Can you please suggest the steps?
I need to automate this using a PowerShell script.
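For illustration, such a PowerShell script would essentially wrap the az and docker CLIs. A minimal sketch, assuming an Azure Container Registry named myregistry and a Swarm service named myapp (all names and the tag are placeholders, and the last command must run on a Swarm manager node):

az acr login --name myregistry
docker pull myregistry.azurecr.io/myapp:1.0.0
docker service update --image myregistry.azurecr.io/myapp:1.0.0 myapp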

Can you share Docker Images uploaded to Google Container Registry between different accounts?

We'd like to have a separate test and prod project on the Google Cloud Platform but we want to reuse the same docker images in both environments. Is it possible for the Kubernetes cluster running on the test project to use images pushed to the prod project? If so, how?
Looking at your question, I believe by account you mean project.
The command for pulling an image from the registry is:
$ gcloud docker pull gcr.io/your-project-id/example-image
This means that as long as your account is a member of the project the image belongs to, you can pull the image from that project into any other project your account is a member of.
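For a cluster (rather than a person) to pull across projects, the usual approach was to grant the test project's service account read access to the Cloud Storage bucket that backs the prod project's registry; a sketch with placeholder project IDs:

gsutil iam ch serviceAccount:TEST-PROJECT-NUMBER-compute@developer.gserviceaccount.com:objectViewer gs://artifacts.prod-project-id.appspot.com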
Yes, it's possible, since access to the images is granted per project rather than per cluster.

What are the best practices to deploy and host artifacts for a Docker multi-container environment in Elastic Beanstalk for Scala apps?

I have several Scala applications that I want to deploy in a Docker multi-container environment on Amazon's Elastic Beanstalk.
It seems like the whole process is a bit more complicated than I was expecting, so I'm really looking forward to hearing some feedback on best practices and other ways to improve my entire process and be able to "automate" some steps (if possible).
This is my current process:
1. To generate my project's artifacts I'm using the sbt-docker plugin. This plugin generates the project's artifacts (jars and Dockerfile) under [app-route]/target/docker.
2. I upload these artifacts (jars and Dockerfile) into a git repository (currently doing this "manually").
3. As Amazon's Elastic Beanstalk requires for Docker multi-container environments, I need an online repository to "host" the images: it could be Docker Hub or Quay.io. Either requires me to have a git repository in which it can find the artifacts to be able to generate the project's image.
4. Having created the multi-container environment in Elastic Beanstalk, I proceed to upload the Dockerrun.aws.json file as detailed in Amazon's documentation, and also the .ebextensions/elb-listeners.config file with the settings of the ports (since I'm running multiple apps).
5. Magic! Amazon generates my environment: same URL, different ports for all my apps (as specified in the configuration files in step 4).
I would love to find a way to automate step 2, since it requires an extra repo for each app. I have my apps hosted in a git repo, and I have an "extra" repo for each where I host the artifacts generated in step 1 so that I can do step 3.
If you're willing to use a different SBT plugin for step 1, then you can automate step 2.
Although quay.io supports building your image from GitHub, they do not require it. (You can publish a local Docker image directly to your quay.io repository.)
Use the sbt-native-packager plugin in project/plugins.sbt.
Set up the plugin settings in build.sbt, like: dockerRepository := Some("quay.io/myaccount")
Your step 1 becomes: sbt docker:stage
Followed by: sbt docker:publishLocal
Check your image names and tags with docker images. The new image should have a name like quay.io/myaccount/app
Before you can publish to quay.io, you must docker login quay.io. Read their tutorial.
Your step 2 becomes sbt docker:publish. Now your quay.io account should contain the same IMAGE ID as your local Docker daemon.
Proceed with steps 3+ on the AWS side...
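Putting the whole flow together as shell commands (the image name and tag depend on your dockerRepository setting and build version):

sbt docker:stage
sbt docker:publishLocal
docker images
docker login quay.io
sbt docker:publish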
I am not really familiar with Scala; however, I believe the artifacts could be generated by Jenkins/CircleCI inside your container as it is built on Jenkins/CircleCI, and then the appropriate image tags referenced within your Dockerrun.aws.json.
Hope that helps.