I deploy a CloudFormation stack, say teststack, on AWS ECS via the command
aws cloudformation deploy --template-file ./CloudFormationTemplate.yml --stack-name teststack --force-upload
My stack executes a certain Docker image, say myname/myimage:latest.
I want to deploy and update the stack via a pipeline (I'm using GitLab, but I guess that is not relevant to the question here).
In this setup, I may modify my Docker image without touching the CloudFormation template file; I then build and push the new image myname/myimage:latest to my registry; finally, I trigger a new pipeline, which runs the command aws cloudformation deploy ... --force-upload again.
When executing aws cloudformation deploy ... --force-upload, the pipeline returns No changes to deploy. Stack stack-name is up to date.
Evidently, since the stack already references the latest-tagged image, CloudFormation reports that everything is up to date, without pulling the new latest image.
Is there a way to force AWS CloudFormation to pull new Docker images from my registry?
Is using tags other than latest an option here? If so, you could tag the latest change you want to deploy, say myname/myimage:0-0-1, and then update the container definition in your CloudFormation template to use this new tag.
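For illustration, a minimal sketch of the relevant template pieces, assuming an ECS task definition and a parameterized tag (the ImageTag parameter and resource names here are just placeholders):

Parameters:
  ImageTag:
    Type: String
    Default: 0-0-1

Resources:
  TaskDefinition:
    Type: AWS::ECS::TaskDefinition
    Properties:
      Family: myimage-task
      ContainerDefinitions:
        - Name: myimage
          Image: !Sub "myname/myimage:${ImageTag}"
          Memory: 512   # plus your ports, environment, etc.

Your pipeline can then pass a fresh tag on each run, e.g. aws cloudformation deploy ... --parameter-overrides ImageTag=0-0-2, which gives CloudFormation a real change to deploy.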
If you wish to continue using the latest tag, you probably can't force the deployment through a CloudFormation template update. In my project, when I didn't want to change the tag, I ended up doing the update using the AWS CLI:
aws ecs update-service --service ${ecsService} --cluster ${ecsCluster} --region ${region} --force-new-deployment
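In a pipeline it can also help to wait for the rollout to settle, so the job only succeeds once the new tasks are running:
aws ecs wait services-stable --cluster ${ecsCluster} --services ${ecsService} --region ${region}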
Related
I am relatively new to AWS and Kubernetes. I have created a self-managed Kubernetes cluster running in AWS (not using EKS). I have successfully created a pipeline in AWS CodePipeline that builds my container and uploads it to ECR. Currently I deploy the created image to the cluster manually by running the following commands:
kubectl delete deployment my-service
kubectl apply -f my-service_deployment.yaml
How can I automate this manual step in AWS CodePipeline? How can I run the above commands as part of the pipeline?
Regarding my deployment YAML files, where should I store them? (Currently I store them locally on the master node.)
I am missing some best practices for this process.
Your YAML manifests shouldn't live on your master node (ever); they should be stored in a version control system (GitHub/GitLab/Bitbucket, etc.).
To automate the deployment of your Docker image based on a new artifact version in ECR, you can use a great tool named FluxCD; it is actually very simple to install (https://fluxcd.io/docs/get-started/) and you can easily configure it to automatically deploy your images to your cluster each time there is a new image in your ECR registry.
This way, your CodePipeline will build the code, run the tests, build the image, tag it, and push it to ECR, and FluxCD will deploy it to Kubernetes. (Flux also natively reconciles on a configurable interval, so even if you make a small change to your manifests, it will be deployed automatically! A rough sketch of the objects involved follows.)
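As a sketch of what that configuration can look like (the names and the ECR URL below are placeholders, and this assumes Flux's image automation controllers are installed):

apiVersion: image.toolkit.fluxcd.io/v1beta2
kind: ImageRepository
metadata:
  name: my-service
  namespace: flux-system
spec:
  image: 123456789012.dkr.ecr.eu-west-1.amazonaws.com/my-service  # placeholder ECR repo, scanned on the interval below
  interval: 1m0s
---
apiVersion: image.toolkit.fluxcd.io/v1beta2
kind: ImagePolicy
metadata:
  name: my-service
  namespace: flux-system
spec:
  imageRepositoryRef:
    name: my-service
  policy:
    semver:
      range: ">=1.0.0"  # pick the highest semver tag; adjust to your tagging scheme

An ImageUpdateAutomation object then commits the new tag back to your git repository, which is what actually updates the manifests.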
You can also make use of Argo CD; it is very easy to install and use compared to AWS CodePipeline.
Argo CD was designed specifically for Kubernetes and thus offers a much better way to deploy to K8s.
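For a flavour of what that looks like, a minimal Argo CD Application that syncs a manifest folder automatically (the repo URL, path, and names are placeholders):

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-service
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/your-org/your-repo.git  # placeholder repository
    targetRevision: main
    path: manifests/my-service
  destination:
    server: https://kubernetes.default.svc
    namespace: default
  syncPolicy:
    automated:
      prune: true     # remove resources deleted from git
      selfHeal: true  # revert manual drift in the cluster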
I have a Node.js app running in AKS that needs to access a Key Vault. I have used the Deployment center in the K8s service to set up DevOps. By mistake I did the setup in the Deployment center twice, which led to two copies of the .yml files (deploytoAksCluster.yml and deploytoAksCluster-1.yml). I have fixed this, but when I run the following command to enable pod identity I get an error.
az aks update -g $resource_group -n $k8s_name --enable-pod-identity
Error:
(BadRequest) Tag name cannot be hidden-DevOpsInfo:GH:my-GithubOrg/myApplication:main:deploytoAksCluster-1.yml:deploytoAksCluster-1.yml:59a0dfdb:my-akscluster:1646402646541.43;GH:my-GithubOrg/myApplication:main:deploytoAksCluster-1.yml:deploytoAksCluster-1.yml:13350477:my-akscluster:1646924094935.21; or be longer than 512 characters. Please see https://learn.microsoft.com/en-us/azure/azure-resource-manager/resource-group-using-tags for more details.
Currently I have only one workflow in GitHub (deploytoAksCluster.yml), but the error with reference to deploytoAksCluster-1.yml never goes away.
I have used this sample as inspiration: https://learn.microsoft.com/en-us/azure/aks/use-azure-ad-pod-identity#run-a-sample-application
What I have tried:
removed the duplicate files
reintroduced the duplicate files
deleted the deployment
This is how the AKS Deployment center looks: https://i.stack.imgur.com/ziFEk.png
Update
59a0dfdb refers to a git commit. This commit resulted in a failed workflow. The workflow has been fixed and everything deploys nicely to K8s, but --enable-pod-identity keeps complaining with the above error. I have removed the commit from the GitHub history.
I have even removed the repository in GitHub.
There must be some git history somewhere in K8s that --enable-pod-identity is hung up on somehow?
Please retry deleting the deployment again; in some cases trying twice removes the cluster resources that are no longer required.
Also check whether the corresponding resource group has been deleted, and clear the cache.
Try updating your git version, depending on the type of OS you are using.
Also note, from Azure tags on an AKS cluster:
NOTE: If you're using existing resources when you're creating a new cluster, such as an IP address or route table, az aks create overwrites the set of tags. If you delete that cluster later, any tags set by the cluster will be removed.
To update the tags on an existing cluster, we need to run az aks update with the --tags parameter.
Reference: Errors when trying to create, update, scale, delete or upgrade cluster
The tag name of the cluster was auto-generated as "hidden-DevOpsInfo:GH:my-GithubOrg/myApplication:main:deploytoAksCluster-1.yml:deploytoAksCluster-1.yml:59a0dfdb:my-akscluster:1646402646541.43;GH:my-GithubOrg/myApplication:main:deploytoAksCluster-1.yml:deploytoAksCluster-1.yml:13350477:my-akscluster:1646924094935.21".
The solution was in the error message: "Tag name cannot be..."
I got the tags by running:
az aks show -g $resource_group -n $k8s_name --query '[tags]'
and updated the tag with:
az aks update --resource-group $resource_group --name $k8s_name --tags "key"="Value"
I am trying to start a Kubernetes cluster, then run tests and publish the results. Do you have any idea how this can be done?
I created a pipeline, but I do not know which YAML to use, or which task to add first: Kubernetes deploy or something else.
We have a Kubernetes deployment.yml file; it takes the container image (exampleacr.io/sampleapp) that we are going to publish on AKS (app version: app/v1).
The service.yml just exposes the application (app version: v1).
Both YAML files need to be added; a rough sketch of them follows, and please refer to WAY 2 for modifying them manually.
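A minimal sketch of the two manifests (names, labels, and ports here are illustrative):

# deployment.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sampleapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: sampleapp
  template:
    metadata:
      labels:
        app: sampleapp
    spec:
      containers:
        - name: sampleapp
          image: exampleacr.io/sampleapp  # image pushed to ACR by the build stage
          ports:
            - containerPort: 80

# service.yml
apiVersion: v1
kind: Service
metadata:
  name: sampleapp
spec:
  type: LoadBalancer  # exposes the app outside the cluster
  selector:
    app: sampleapp
  ports:
    - port: 80
      targetPort: 80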
WAY 1:
Quick way: the Deploy to Azure Kubernetes Service template will do everything that's needed, because if you use it, the required variables get defined for you.
Steps:
Create an AKS cluster and an ACR (container registry) in Azure.
In Azure DevOps:
Create a pipeline > choose any source, e.g. an application hosted in GitHub.
Then select Deploy to Azure Kubernetes Service > select your AKS subscription > select the existing cluster > then select the container registry that you want to put the Docker image into. Keep the remaining settings as default.
Click Validate and configure.
Azure Pipelines will generate a YAML file (a trimmed sketch of it appears after the reference below).
In the review pipeline YAML view of azure-pipelines.yml you have two stages: Build and Deploy.
Click Save and run: this saves the YAML file to the master branch and creates the manifest files (deployment.yml and service.yml) for the Kubernetes deployment.
Save and run will also trigger a build.
Reference
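For orientation, a trimmed sketch of the kind of two-stage azure-pipelines.yml the template generates (the service connection, environment name, and task versions are placeholders and may differ in your generated file):

trigger:
  - master

stages:
  - stage: Build
    jobs:
      - job: Build
        pool:
          vmImage: ubuntu-latest
        steps:
          - task: Docker@2
            inputs:
              containerRegistry: 'myAcrConnection'  # placeholder service connection
              repository: 'sampleapp'
              command: buildAndPush
              Dockerfile: '$(Build.SourcesDirectory)/Dockerfile'
              tags: '$(Build.BuildId)'

  - stage: Deploy
    dependsOn: Build
    jobs:
      - deployment: Deploy
        environment: 'my-aks-environment'  # placeholder AKS environment
        pool:
          vmImage: ubuntu-latest
        strategy:
          runOnce:
            deploy:
              steps:
                - task: KubernetesManifest@0
                  inputs:
                    action: deploy
                    manifests: |
                      $(Pipeline.Workspace)/manifests/deployment.yml
                      $(Pipeline.Workspace)/manifests/service.yml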
WAY 2: Using a Docker image
To modify the azure-pipelines.yml file: in the third step above, select Docker image instead of Deploy to Azure Kubernetes Service.
Under Configure pipeline, if the Dockerfile is in Build.SourcesDirectory in our application, it will appear as, say, $(Build.SourcesDirectory)/app/Dockerfile.
That is the Dockerfile the pipeline builds.
In the review pipeline YAML view of azure-pipelines.yml, a few things can be modified:
You can change the tag variable to the repo name, and the deployment.yml and service.yml files can be added to the pipeline YAML with a few modifications.
The Build stage is automatic; there is no need to modify it.
You have to add the push and deploy stages to the YAML file as shown in the article.
And get the source code here.
I have been looking for a Jenkins plugin to deploy a Docker image to a Kubernetes cluster using the K8s API. It would access the REST API with a YAML file, using credentials that are already configured. If there is no such plugin, please let me know of other simple examples. Thanks for reading.
I think you are looking for plugins similar to these:
https://wiki.jenkins-ci.org/display/JENKINS/Kubernetes+CI+Plugin
https://wiki.jenkins.io/display/JENKINS/Kubernetes+Pipeline+Plugin
I'm using a simple Execute Shell step in Jenkins with the following command:
kubectl --server="https://kubeapi.example.com" --token=$ACCESS_TOKEN set image deployment/deployment_name container_name=repo/image:"$BUILD_NUMBER-$SHORT_GIT_COMMIT"
You can save your $ACCESS_TOKEN as a Secret text credential and use it as a variable in Jenkins.
The job builds, tags, and publishes a Docker image to the Docker repo, then sets the image in the Kubernetes cluster.
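Put together, the full shell step might look roughly like this (all names are placeholders; ACCESS_TOKEN and SHORT_GIT_COMMIT come from a Jenkins Secret text binding and your job's environment):

# build and tag the image with the build number and short commit SHA
docker build -t repo/image:"$BUILD_NUMBER-$SHORT_GIT_COMMIT" .
# publish it to the registry
docker push repo/image:"$BUILD_NUMBER-$SHORT_GIT_COMMIT"
# point the running deployment at the new image
kubectl --server="https://kubeapi.example.com" --token="$ACCESS_TOKEN" \
  set image deployment/deployment_name \
  container_name=repo/image:"$BUILD_NUMBER-$SHORT_GIT_COMMIT"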
I have a docker image pushed to Container Registry with docker push gcr.io/go-demo/servertime and a pod created with kubectl run servertime --image=gcr.io/go-demo-144214/servertime --port=8080.
How can I enable automatic updating of the pod every time I push a new version of the image?
I would suggest switching to some kind of CI to manage the process and, instead of triggering on docker push, triggering the process on pushing the commit to the git repository. Also, if you switch to using a higher-level Kubernetes construct such as a Deployment, you will be able to run a rolling update of your pods to the new image version. Our process is roughly as follows:
git commit #triggers CI build
docker build -t yourimage:gitsha1 .
docker push yourimage:gitsha1
sed -i 's/{{TAG}}/gitsha1/g' deployment.yml
kubectl apply -f deployment.yml
Where deployment.yml is a template for our deployment that gets updated to the new tag version.
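A minimal sketch of such a template (the {{TAG}} placeholder is what the sed step above rewrites; other names are illustrative):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: yourdeployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: yourapp
  template:
    metadata:
      labels:
        app: yourapp
    spec:
      containers:
        - name: yourcontainer
          image: yourimage:{{TAG}}  # replaced with the git SHA before kubectl apply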
If you do it manually, it might be easier to simply update the image in an existing deployment by running kubectl set image deployment/yourdeployment <containernameinpod>=yourimage:gitsha1
I'm on the Spinnaker team.
It might be a bit heavy, but without knowing your other areas of consideration: Spinnaker is a CD platform from which you can trigger k8s deployments from registry updates.
Here's a codelab to get you started.
If you'd rather shortcut the setup process, you can get a starter Spinnaker instance with k8s and GCR integration pre-setup via the Cloud Launcher.
You can find further support on our slack channel (I'm #stevenkim).
It would need some glue, but you could use Docker Hub, which lets you define a webhook for each repository when a new image is pushed or a new tag created.
This would mean you'd have to build your own web API server to handle the incoming notifications and use them to update the pod. And you'd have to use Docker Hub, not Google Container Registry, which doesn't allow webhooks.
So, probably too many changes for the problem you're trying to solve.