I have been looking for a Jenkins plugin to deploy a Docker image to a Kubernetes cluster using the K8s API. It would access the REST API with a YAML file, using credentials that are already configured. If there is no such plugin, please point me to other simple examples. Thanks for reading.
I think you are looking for plugins similar to these:
https://wiki.jenkins-ci.org/display/JENKINS/Kubernetes+CI+Plugin
https://wiki.jenkins.io/display/JENKINS/Kubernetes+Pipeline+Plugin
I'm using a simple Execute Shell step in Jenkins with the following command:
kubectl --server="https://kubeapi.example.com" --token=$ACCESS_TOKEN set image deployment/deployment_name container_name=repo/image:"$BUILD_NUMBER-$SHORT_GIT_COMMIT"
You can save your $ACCESS_TOKEN as a Secret text credential and use it as a variable in Jenkins.
The job builds, tags, and publishes a Docker image to the Docker repo, then sets the image in the Kubernetes cluster.
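End to end, the Execute Shell step might look roughly like the sketch below. The repo name, deployment name, and container name are placeholders, and SHORT_GIT_COMMIT is assumed to be set earlier in the job:

# Build, tag, and push the image (repo/image and the tag scheme are placeholders)
docker build -t repo/image:"$BUILD_NUMBER-$SHORT_GIT_COMMIT" .
docker push repo/image:"$BUILD_NUMBER-$SHORT_GIT_COMMIT"

# Point the existing deployment at the new image; Kubernetes rolls the pods for you
kubectl --server="https://kubeapi.example.com" --token=$ACCESS_TOKEN \
  set image deployment/deployment_name container_name=repo/image:"$BUILD_NUMBER-$SHORT_GIT_COMMIT"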
Related
I need to create and deploy, on an existing Kubernetes cluster, an application based on a Docker image hosted in a private Harbor repo on a remote server.
I could use this if the repo was public:
kubectl create deployment <deployment_name> --image=<full_path_to_remote_repo>:<tag>
Since the repo is private, the username, password etc. are required for it to be pulled. How do I modify the above command to embed that information?
Thanks in advance.
P.S.
I'm looking for a way that doesn't involve creating a secret using kubectl create secret and then creating a yaml defining the deployment.
The goal is to have kubectl pull the image using the supplied creds and deploy it on the cluster without any other steps. Could this be achieved with a single (above) command?
Edit:
Creating and using a secret is acceptable if there is a way to specify the secret as an option in the kubectl command rather than in a YAML file (really trying to avoid YAML). Is there a way of doing that?
There are no flags to pass an imagePullSecret to kubectl create deployment, unfortunately.
If you're coming from the world of Docker Compose or Swarm, having one-line deployments is fairly common. But even these deployment tools use underlying configuration and .yml files, like docker-compose.yml.
For Kubernetes, there is official documentation on pulling images from private registries, and there is even special handling for docker registries. Check out the article on creating Docker config secrets too.
According to the docs, you must define a secret in this way to make it available to your cluster. Because Kubernetes is built for resiliency/scalability, any machine in your cluster may have to pull your private image, and therefore each machine needs access to your secret. That's why it's treated as its own entity, with its own manifest and YAML file.
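That said, the secret itself can be created and attached entirely from the command line, with no hand-written YAML. A minimal sketch, assuming a secret named harbor-cred and the default service account (both placeholders):

# Create a docker-registry secret from the Harbor credentials
kubectl create secret docker-registry harbor-cred \
  --docker-server=harbor.example.com \
  --docker-username=<username> \
  --docker-password=<password>

# Attach it to the default service account so pods pull with it automatically
kubectl patch serviceaccount default \
  -p '{"imagePullSecrets": [{"name": "harbor-cred"}]}'

After that, the plain kubectl create deployment command from the question works unchanged, because pods running under the patched service account pick up the pull secret without any per-deployment YAML.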
I am relatively new to AWS and kubernetes. I have created a self-managed kubernetes cluster running in AWS (not using EKS). I have successfully created a pipeline in AWS CodePipeline that builds my container and uploads it to ECR. Currently I am manually deploying the created image in the cluster by running the following commands:
kubectl delete deployment my-service
kubectl apply -f my-service_deployment.yaml
How can I automate this manual step in AWS CodePipeline? How can I run the above commands as part of the pipeline?
Regarding my deployment YAML files, where should I store these files? (Currently I store them locally on the master node.)
I am missing some best practices for this process.
Your YAML manifests shouldn't live on your master node (ever); they should be stored in a version control system (GitHub/GitLab/Bitbucket, etc.).
To automate the deployment of your Docker image based on a new artifact version in ECR, you can use a great tool named FluxCD. It is very simple to install (https://fluxcd.io/docs/get-started/) and you can easily configure it to automatically deploy your images into your cluster each time a new image lands in your ECR registry.
This way your CodePipeline builds the code, runs the tests, builds the image, tags it, and pushes it to ECR, and FluxCD deploys it to Kubernetes. It can also natively sync every X minutes (based on your configuration), so even a small change to your manifests is deployed automatically!
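As a rough illustration, the image-automation objects might look like the sketch below. This assumes Flux's image reflector and automation controllers are installed; the apiVersion may differ by Flux release, and the ECR path, names, and tag policy are placeholders:

apiVersion: image.toolkit.fluxcd.io/v1beta2
kind: ImageRepository
metadata:
  name: my-service
  namespace: flux-system
spec:
  # Placeholder ECR repo; Flux scans it for new tags on this interval
  image: 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-service
  interval: 5m
---
apiVersion: image.toolkit.fluxcd.io/v1beta2
kind: ImagePolicy
metadata:
  name: my-service
  namespace: flux-system
spec:
  imageRepositoryRef:
    name: my-service
  policy:
    # Deploy the highest semver tag; use a numerical policy if you tag by build number
    semver:
      range: '>=1.0.0'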
bguess
You can also make use of Argo CD; it is very easy to install and use compared to AWS CodePipeline.
Argo CD was specifically designed for Kubernetes and thus offers a much better way to deploy to K8s.
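For example, a single Argo CD Application object can keep the cluster synced with the manifests in your Git repo. A minimal sketch, with the repo URL, path, and names as placeholders:

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-service
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/your-org/your-manifests.git  # placeholder repo
    targetRevision: HEAD
    path: k8s
  destination:
    server: https://kubernetes.default.svc
    namespace: default
  syncPolicy:
    automated:
      prune: true     # remove resources deleted from Git
      selfHeal: true  # undo manual drift in the cluster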
I'm new to website deployment. I have an assignment where I have to deploy an existing website using Kubernetes and improve its performance with autoscaling. I chose this app: https://github.com/IBM/MAX-Image-Caption-Generator-Web-App. I deployed the app using Okteto, and here are the steps I've done:
Download the source code into my local computer
okteto up
okteto build
kubectl apply -f https://raw.githubusercontent.com/IBM/MAX-Image-Caption-Generator/master/max-image-caption-generator.yaml
kubectl apply -f https://raw.githubusercontent.com/IBM/MAX-Image-Caption-Generator-Web-App/master/max-image-caption-generator-web-app.yaml
App is successfully deployed
My question is: how can I autoscale the app? Should I create my own Docker image, or can I just change the existing YAML configuration locally and re-deploy the app?
I'm sorry if my explanation and question are not clear enough.
I checked your YAML file; I think you will need to create your own Docker image so that you will be able to deploy your website.
This is their Docker image: image: quay.io/codait/max-image-caption-generator-web-app:latest
You can use their YAML as a reference at least, and update the image to deploy your own website.
For autoscaling you have to use the HPA (HorizontalPodAutoscaler) object in Kubernetes; you can configure the autoscaling using YAML only and simply apply those changes.
Read more about HPA: https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/
A simple example of scaling a web application with an HPA in YAML: https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/
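Concretely, an HPA for this app might look like the sketch below. The Deployment name is an assumption (check the name in the web app's manifest), the CPU target is a placeholder, and CPU-based scaling requires the Metrics Server to be installed:

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: max-image-caption-generator-web-app
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: max-image-caption-generator-web-app  # assumed; match your Deployment's name
  minReplicas: 1
  maxReplicas: 5
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70  # add replicas when average CPU goes above 70%

Save it as hpa.yaml and run kubectl apply -f hpa.yaml, just like the other manifests.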
When I deploy to my Windows Service Fabric cluster from Azure Container Registry, the latest image is not pulled from ACR - instead the latest image available on the cluster node is just started.
I tried
deploying as a Service Fabric application
deploying with Compose
over VSTS and manually from the PowerShell command line.
With both options I explicitly referred to the :latest image.
Please use explicit, unique image tags, not 'latest'; with :latest the cluster node can keep running the image version it already has cached instead of pulling the newest one from ACR. Pinning a tag per build is a best practice.
I have a docker image pushed to Container Registry with docker push gcr.io/go-demo/servertime and a pod created with kubectl run servertime --image=gcr.io/go-demo-144214/servertime --port=8080.
How can I enable automatic updates of the pod every time I push a new version of the image?
I would suggest switching to some kind of CI to manage the process, and triggering it on a commit pushed to the Git repository instead of on docker push. Also, if you switch to a higher-level Kubernetes construct such as a Deployment, you will be able to run a rolling update of your pods to the new image version. Our process is roughly as follows:
git commit #triggers CI build
docker build yourimage:gitsha1
docker push yourimage:gitsha1
sed -i 's/{{TAG}}/gitsha1/g' deployment.yml
kubectl apply -f deployment.yml
Here deployment.yml is a template for our deployment in which the {{TAG}} placeholder is replaced with the new image tag.
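For illustration, such a template might look like this; a minimal sketch with placeholder names matching the commands above:

# deployment.yml -- {{TAG}} is substituted by the sed step in CI
apiVersion: apps/v1
kind: Deployment
metadata:
  name: yourdeployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: yourapp
  template:
    metadata:
      labels:
        app: yourapp
    spec:
      containers:
      - name: yourcontainer
        image: yourimage:{{TAG}}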
If you do it manually, it might be easier to simply update the image in an existing deployment by running kubectl set image deployment/yourdeployment <containernameinpod>=yourimage:gitsha1
I'm on the Spinnaker team.
Might be a bit heavy, but without knowing your other areas of consideration: Spinnaker is a CD platform that can trigger k8s deployments from registry updates.
Here's a codelab to get you started.
If you'd rather shortcut the setup process, you can get a starter Spinnaker instance with k8s and GCR integration pre-setup via the Cloud Launcher.
You can find further support on our slack channel (I'm #stevenkim).
It would need some glue, but you could use Docker Hub, which lets you define a webhook for each repository when a new image is pushed or a new tag created.
This would mean you'd have to build your own web API server to handle the incoming notifications and use them to update the pod. And you'd have to use Docker Hub rather than Google Container Registry, which doesn't allow webhooks.
So, probably too many changes for the problem you're trying to solve.