I'm trying to understand what good practice looks like for what I assume is a pretty common scenario: building and deploying a Kubernetes Deployment using kubectl.
Scenario:
I want to build a project and deploy it to my cluster.
I’ve got a yaml file with my deployment (and related things), and some source code with a Dockerfile.
I want to avoid using :latest in the image tag, per the usual best-practice recommendation, and instead use a unique tag for each image.
I have easy access to a unique identifier, either a CI build number or the git SHA, available as an environment variable at build time.
So that seems pretty simple:
docker build -t my-registry/my-app:${BUILD_NUMBER} .
docker push my-registry/my-app:${BUILD_NUMBER}
kubectl apply -f kube.yaml
Except that there is no native way to tell kubectl what image name to use.
I understand that kubectl does not support environment-variable substitution in manifests, but I've never seen the specific use case of image tags addressed explicitly.
I can use sed replacement, but that seems to go against the spirit of 'no variable template support', and I barely understand what sed does when I write it, so I pity the next developer who has to read my commands.
I feel like for such a basic scenario, "I want to build and deploy to my cluster", there might be a simpler solution that doesn't feel like it's fighting kubectl.
Skaffold does a great job of assigning a tag and digest for each built image, but it would be nice to have a simple process to start with before bringing in more high level tooling.
Besides "Use Skaffold" or "Use Helm" what should I do to have a simple build + deploy process?
Is templating yaml files okay, either with sed, mo, or other replacement tools?
Are there any example projects that have a simple build and deploy pipeline using just kubectl and maybe kustomize?
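For instance, with kustomize I imagine the flow would look roughly like this (an untested sketch; the image name is taken from my kube.yaml below, and it assumes a kustomization.yaml next to it and a reasonably recent kubectl):

# kustomization.yaml, sitting next to kube.yaml
resources:
  - kube.yaml

# then, in CI:
kustomize edit set image my-registry/my-app=my-registry/my-app:${BUILD_NUMBER}
kubectl apply -k .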
kube.yaml (note the lack of a tag on the image name)
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: example
  name: my-app
spec:
  minReadySeconds: 2
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      name: my-app
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: "my-registry/my-app"
          ports:
            - containerPort: 3000
          resources:
            limits:
              cpu: 300m
              memory: 600Mi
            requests:
              cpu: 50m
              memory: 100Mi
Is kubectl set image <deployment> <image> an option? It does what you are looking for, but the change won't be reflected in your local copy of kube.yaml, so you would need to keep a dummy image value for "my-registry/my-app" in kube.yaml so that other developers aren't misled into thinking they have to change that value by hand each time they want to re-deploy.
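For the Deployment in the question, that would look roughly like this (deployment name, container name, and namespace taken from kube.yaml above):

kubectl apply -f kube.yaml
kubectl set image deployment/my-app my-app=my-registry/my-app:${BUILD_NUMBER} --namespace=example

The apply creates or updates everything else; set image then triggers a new rollout with the freshly pushed tag.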
For simple cases you can use kubectl run. For example:
kubectl run my-app --image=my-registry/my-app:${BUILD_NUMBER} --port=3000 --replicas=1 --env VAR=value --expose --limits 'cpu=300m,memory=600Mi'
In theory it's a deprecated command, but it's been deprecated for 2 years now and it's still there ;] Anyway, if you feel unsafe about it, the deprecation message gives a hint how you can waste a day or 2 making it work in a non-deprecated way ;]
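For what it's worth, the non-deprecated route it hints at splits that one-liner into separate commands, something along these lines (a sketch; exact flags vary between kubectl versions):

kubectl create deployment my-app --image=my-registry/my-app:${BUILD_NUMBER}
kubectl set resources deployment my-app --limits=cpu=300m,memory=600Mi
kubectl set env deployment/my-app VAR=value
kubectl expose deployment my-app --port=3000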
Related
I am trying to update a deployment via the YAML file, similar to this question. I have the following yaml file...
apiVersion: apps/v1
kind: Deployment
metadata:
  name: simple-server-deployment
  labels:
    app: simple-server
spec:
  replicas: 3
  selector:
    matchLabels:
      app: simple-server
  template:
    metadata:
      labels:
        app: simple-server
    spec:
      containers:
        - name: simple-server
          image: nginx
          ports:
            - name: http
              containerPort: 80
I tried changing the code by changing replicas: 3 to replicas: 1. Next I redeployed with kubectl apply -f simple-deployment.yml and got deployment.apps/simple-server-deployment configured. However, when I run kubectl rollout history deployment/simple-server-deployment I only see 1 entry...
REVISION  CHANGE-CAUSE
1         <none>
How do I do the same thing while increasing the revision, so that it is possible to roll back?
I know this can be done without the YAML but this is just an example case. In the real world I will have far more changes and need to use the YAML.
You can use the --record flag, so in your case the command will look like:
kubectl apply -f simple-deployment.yml --record
However, a few notes.
First, the --record flag is deprecated - you will see the following message when you run kubectl apply with the --record flag:
Flag --record has been deprecated, --record will be removed in the future
However, there is no replacement for this flag yet; keep in mind that in the future there probably will be.
Second, not every change will be recorded (even with the --record flag) - I tested your example from the main question and there is no new revision. Why? It's because:
@deech this is expected behavior. The Deployment only creates a new revision (i.e. another ReplicaSet) when you update its pod template. Scaling it won't create another revision.
Considering the two points above, you need to think (and probably test) whether the --record flag is suitable for you. Maybe it's better to use some version control system like git, but as I said, it depends on your requirements.
A change of replicas does not create a new history record. You can add --record to your apply command and check the annotation later to see what the last applied spec was.
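To illustrate, after a pod-template change applied with --record, the rollout history picks up the command as the change cause (output abridged; --record stores it in the kubernetes.io/change-cause annotation):

kubectl apply -f simple-deployment.yml --record
kubectl rollout history deployment/simple-server-deployment
REVISION  CHANGE-CAUSE
1         kubectl apply --filename=simple-deployment.yml --record=true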
I have a file for a Job resource, which looks something like the one below. I need to run multiple instances with this definition, with separate arguments for each.
apiVersion: batch/v1
kind: Job
metadata:
  generateName: abc-
spec:
  template:
    spec:
      containers:
        - name: abc
          image: index.docker.io/some/image:latest
          imagePullPolicy: Always
      imagePullSecrets:
        - name: some_secret
      restartPolicy: Never
  backoffLimit: 4
I can successfully run this job resource with
kubectl create -f my-job.yml
But I'm not sure how to pass the arguments corresponding to:
command: ['arg1', 'arg2']
I think updating the file with my dynamic args for each request is just messy.
I tried kubectl patch -f my-job.yml --type='json' -p='[{"op": "add", "path": "/spec/template/spec/containers/0/", "value": {"command": ["arg1","arg2"] } }]', which works well for a Deployment kind, but for a Job it doesn't work.
I also tried
sudo kubectl run explicitly-provide-name-which-i-dont-want-to --image=index.docker.io/some/image:latest --restart=Never -- arg1 arg2, but with this I won't be able to pass the imagePullSecrets.
Kind of a generic answer here, just trying to guide you. In general, what you are expressing is the need to 'parameterize' your Kubernetes deployment descriptors. There are different ways: some are simple, some others are a bit hacky, and finally there is github.com/kubernetes/helm.
Personally, I would strongly suggest you go through installing Helm on your cluster and then 'migrate' your Job, or any vanilla Kubernetes deployment descriptor, into a Helm chart. This will eventually give you the 'parameterization' power that you need to spin up jobs in different ways and with different configs.
But if this sounds like too much for you, I can recommend something that I was doing before I discovered Helm. Using things like bash / envsubst, I was manually templating the parts of the yaml file with placeholders (e.g. environment variables) and then feeding the yaml to tools like envsubst, which replaced the placeholders with the values from the environment. Ugly? Yes. Maintainable? Maybe, for a couple of simple examples. Example of envsubst here.
apiVersion: batch/v1
kind: Job
metadata:
spec:
  template:
    spec:
      containers:
        - name: abc
          image: index.docker.io/some/image:latest
          imagePullPolicy: Always
      imagePullSecrets:
        - name: $SOME_ENV_VALUE
      restartPolicy: Never
  backoffLimit: 4
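The envsubst step itself is a one-liner; a minimal sketch, assuming the template above is saved as my-job.tpl.yml (a hypothetical filename):

export SOME_ENV_VALUE=some_secret
envsubst < my-job.tpl.yml | kubectl create -f -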
Hope that helps.. but seriously, if you have time, consider checking out Helm.
I would also consider sourcing the command arguments from environment variables. These variables are then provided by Helm, as javapapo has mentioned.
I found this guide via Google:
Start Kubernetes job from command line with parameters
But the Helm chart solution suggested by javapapo is the best way, I guess.
I was looking for a straightforward way to create Kubernetes resource templates (such as Pod, Deployment, Service, etc.), though I couldn't find any good tool that does it. The tools I came across are either no longer maintained or have a steep learning curve (such as kustomize).
Some time ago, kubectl introduced generators, which can be used as follows to produce a resource configuration. For instance:
$ kubectl run helloworld --image=helloworld \
--dry-run --output=yaml --generator=run-pod/v1
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: helloworld
  name: helloworld
spec:
  containers:
    - image: helloworld
      name: helloworld
      resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
But kubectl generators should not be used, since most of them have been deprecated.
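On newer kubectl versions, a non-deprecated way to get a similar scaffold is the client-side dry run of the kubectl create subcommands, for example:

kubectl create deployment helloworld --image=helloworld \
  --dry-run=client --output=yaml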
One might issue kubectl get on all the resources, commit them to source control, and use that to restore the Kubernetes cluster, though that implies the Kubernetes resources were already created, whereas my interest is in generating these resource configurations in the first place.
Please recommend your favorite tool for generating/creating Kubernetes resources, or explain what the best practice is for handling my use case.
Creating yaml files and versioning them with git is the new best practice. This approach can also be automated with GitOps and Flagger. Another way is to create a Helm chart and customize it with the template feature.
You should definitely run away from generating resources on the command line either way. You should have a source of truth, and CLI generation will not give you that.
Is there any way to pass the image version from a variable/config when passing a manifest .yaml to the kubectl command?
Example :
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx
spec:
  replicas: 1
  selector:
    app: nginx
  template:
    metadata:
      name: nginx
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:${IMAGE_VERSION}
          imagePullPolicy: Always
          resources:
            limits:
              cpu: "1.2"
              memory: 100Mi
          ports:
            - containerPort: 80
The use case is to launch a specific image version which is set at the Kubernetes level, with the variable resolved by Kubernetes itself on the server side.
Thanks and Regards,
Ravi
K8s manifest files are static YAML/JSON.
If you would like to template the manifests (and manage multiple resources in a bundle-like fashion), I strongly recommend you have a look at Helm.
I've recently created a Workshop which focuses precisely on the "Templating" features of Helm.
Helm does a lot more than just templating; it is built as a full-fledged package manager for Kubernetes applications (think Apt/Yum/Homebrew).
If you want to handle everything client side, have a look at https://github.com/errordeveloper/kubegen
Although, at some point, you will need the other features of Helm, and a migration will be needed when that time comes, I recommend biting the bullet and going for Helm straight up.
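To give a flavor of the templating, a minimal illustrative chart snippet might look like this (all names and values here are made up for the example):

# values.yaml
image:
  repository: nginx
  tag: "1.7.9"

# templates/rc.yaml (excerpt)
      containers:
        - name: nginx
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"

The tag can then be overridden at install time, e.g. (Helm 3 syntax):

helm install my-release ./my-chart --set image.tag=${IMAGE_VERSION}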
After looking into this recently, we decided to just go with sed: wrap kubectl apply in a small bash script and replace the placeholders before running apply.
We did look into more sophisticated tooling, but we only found Helm. However, Helm is a complex piece of technology that does way more than just templating. It changes your workflow a lot, as you no longer deploy using kubectl, and you have to have a Helm package repo around to push your packages to. Our assessment was that Helm is not useful for deploying our application and that using it just for the templating is overkill.
Here is an example of how to do this with sed (it is an excerpt from my typical CircleCI config):
# Build up a sed program that swaps each placeholder for its CI-provided value.
replaces="s/{.Namespace}/$CIRCLE_BRANCH/;"
replaces="$replaces s/{.CiBuild}/$CIRCLE_BUILD_NUM/;"
replaces="$replaces s/{.CiCommit}/$CIRCLE_SHA1/;"
replaces="$replaces s/{.CiUser}/$CIRCLE_USERNAME/;"
# Substitute the placeholders and pipe the result straight into kubectl.
# Note: pick a different sed delimiter (e.g. s|...|...|) if a value can contain '/'.
sed -e "$replaces" ./k8s/app.yaml | ./kubectl --kubeconfig="$(pwd)/.kube/config" apply --namespace="$NAMESPACE" -f -