Kubernetes Simple Declarative Management

Is there a simpler way to do declarative management than writing the template files myself?
E.g. create the template files with kubectl commands and then use kubectl apply on those templates:
kubectl create deployment my-app --image=nginx:latest --replicas=3 --port=8080 --dry-run=client --output=yaml > my-deployment.yaml
kubectl create service loadbalancer my-app --tcp=80:8080 --dry-run=client -o=yaml > my-service.yaml
And after this apply the generated template files:
kubectl apply -f .
Is it OK to use such an approach in production, or is it considered bad practice?

While it's quite common to write the individual YAML manifests
yourself, it's less common to apply them individually as you've shown
in your question. Where I work, we use Kustomize to manage our
manifests; this is a tool that assembles a collection of manifests
into a configuration which you then apply all at once using kubectl apply (other folks use Helm, but I don't have any experience with that tool).
There are lots of examples in the documentation, but for your example, you might do something like this:
Put your deployment manifest in my-deployment.yaml
Put your service manifest in my-service.yaml
Create a file kustomization.yaml with the following content:
resources:
- my-deployment.yaml
- my-service.yaml
To create or update your resources in the cluster, you run:
kustomize build . | kubectl apply -f -
Kustomize has a number of options for generating things like
ConfigMaps and Secrets from files, for creating a hierarchical
configuration in which you can override portions of your manifests
with modified content, etc.
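For example, a kustomization.yaml that also generates a ConfigMap and a TLS Secret from local files might look like this (the file names here are hypothetical):
resources:
- my-deployment.yaml
- my-service.yaml

configMapGenerator:
- name: my-app-config
  files:
  - app.properties

secretGenerator:
- name: my-app-tls
  type: kubernetes.io/tls
  files:
  - tls.crt
  - tls.key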
Note that a version of Kustomize is actually built into the kubectl
command; for the above example, you could have simply run:
kubectl apply -k .
The version of Kustomize built into kubectl is a little older than the standalone version, but for simple configurations it works just fine.
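If you want to see which Kustomize version your kubectl embeds, recent kubectl releases print it next to the client version (the exact output format varies between releases):
kubectl version --client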

Related

linkerd Inject with helm or namespace?

I can't seem to find a simple answer to my question:
how do I use the linkerd inject command/options when using helm to install a package, e.g. postgres?
I have done it with another package, but that was by adding the annotation to a values file and supplying that file when running the helm install command.
With Istio, all I have to do is add a label to the namespace and it works.
So I started looking into adding the annotation to the namespaces I am working with, using the kubectl create namespace command.
However, I can't seem to find a way to add any annotation at the point of creating a namespace, unless I use a file.
So, I either need a way to add this annotation to the namespace with the create command, or a way to set it when installing packages with helm.
Thanks,
I think there are a couple of ways to do this. It all depends on what you are trying to achieve and how you'd like to manage your underlying infrastructure.
I assume you want to automate the installation of helm charts. If you are going to create the namespace using kubectl create namespace, then you can follow that up with kubectl annotate namespace <created-namespace> linkerd.io/inject=enabled, as shown below.
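Concretely, that two-step flow might look like this (the namespace name is just a placeholder):
kubectl create namespace my-namespace
kubectl annotate namespace my-namespace linkerd.io/inject=enabled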
Alternatively, you can make use of the Linkerd CLI and its inject command -- the workflow here involves a combination of kubectl and linkerd commands, so I'm not sure it's what you're looking for. Nonetheless, you can do something like kubectl create namespace <my-namespace> --dry-run=client -o yaml | linkerd inject - | kubectl apply -f - (the --dry-run=client flag keeps kubectl from creating the namespace before the annotated manifest is applied).
Last but not least, if you don't have to use kubectl create namespace, you can pipe a namespace manifest to kubectl directly and call it a day. You can use something similar to the snippet below:
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Namespace
metadata:
  name: foo
  annotations:
    linkerd.io/inject: enabled
EOF
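To confirm the annotation landed, you can read it back, e.g. with a jsonpath query (just one way to check):
kubectl get namespace foo -o jsonpath='{.metadata.annotations.linkerd\.io/inject}'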

How do I actually get the output of kubectl kustomize into my cluster?

I have a very simple kustomization.yaml:
configMapGenerator:
- name: icecast-config
  files:
  - icecast.xml
When I run kubectl kustomize . it spits out a generated configMap properly, but how do I actually load it into my cluster? I'm missing some fundamental step.
With Kustomize you can use the -k (or --kustomize) flag instead of -f when using kubectl apply; it points at a directory containing a kustomization.yaml. Example:
kubectl apply -k <my-folder>
See Declarative Management of Kubernetes Objects Using Kustomize
You could also do, for example:
kubectl kustomize . | kubectl apply -f -
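As a side note: the generated ConfigMap gets a content-hash suffix appended to its name, so the object created in the cluster will look something like this (the hash shown is illustrative):
apiVersion: v1
kind: ConfigMap
metadata:
  name: icecast-config-5g9dm2gtf8
data:
  icecast.xml: |
    ...
Kustomize rewrites any references to the ConfigMap in your other manifests to use the hashed name, which forces a rollout whenever the file content changes.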

one or more valid Kubernetes manifests are required to run skaffold

When I run skaffold init in my app directory it shows me:
one or more valid Kubernetes manifests are required to run skaffold
The content of the directory: [screenshot of the project directory; it contains only application sources, no Kubernetes manifests]
Do I have to provide Kubernetes manifests file with for example Pod, Service, etc?
Yes, you need Kubernetes manifests in the same project: typically a Deployment manifest, and perhaps a Service and an Ingress as well if you want them.
A Deployment manifest can be generated with (using > to redirect the output to a file):
kubectl create deployment my-app --image=my-image --dry-run=client -o yaml > deployment.yaml
Note: There is an alpha feature flag --generate-manifests that might do this for you.
E.g. with
skaffold init --generate-manifests
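For reference, skaffold init ends up writing a skaffold.yaml that wires the image build to your manifests; a minimal sketch, assuming a kubectl-based deploy (the apiVersion and names here are illustrative, check the Skaffold docs for your version):
apiVersion: skaffold/v2beta29
kind: Config
build:
  artifacts:
  - image: my-image
deploy:
  kubectl:
    manifests:
    - deployment.yaml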

Is there an imperative command to create daemonsets in kubernetes?

I was wondering if there is an easier way to create DaemonSets in k8s other than writing YAML files.
For example, for pods we have the kubectl run --generator=run-pod/v1 command. I was wondering if there is something similar for a DaemonSet.
Thanks in advance.
There is no such quick kubectl create-style command for creating DaemonSets. But you can do it in another way.
One way to do this is:
$ kubectl create deploy nginx --image=nginx --dry-run=client -o yaml | \
sed '/null\|{}\|replicas/d;/status/,$d;s/Deployment/DaemonSet/g' > nginx-ds.yaml
$ kubectl apply -f nginx-ds.yaml
If you don't want to save the yaml data to any file, here's how you can do this:
$ kubectl create deploy nginx --image=nginx --dry-run=client -o yaml | \
sed '/null\|{}\|replicas/d;/status/,$d;s/Deployment/DaemonSet/g' | \
kubectl apply -f -
You have your DaemonSet now.
What we are doing here is: first we generate a Deployment YAML, then we replace kind: Deployment with kind: DaemonSet and remove replicas: 1 from it (the sed expression also strips the null/empty placeholder fields and the status section).
That's how we get the YAML for a DaemonSet.
You can consult the Kubernetes documentation for DaemonSets, which includes example DaemonSet YAML files. However, there is no imperative way to create a DaemonSet. You could create a Deployment specification and convert it to a DaemonSet specification: change the kind to DaemonSet and remove the strategy, replicas, and status fields. That would do, as the sketch below shows.
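After those edits, the resulting manifest looks roughly like this (labels follow what kubectl create deploy generates for an nginx deployment):
apiVersion: apps/v1
kind: DaemonSet
metadata:
  labels:
    app: nginx
  name: nginx
spec:
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx
        name: nginx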

Kubernetes rolling deployment using the yaml file

I have deployed an application into Kubernetes using the following command.
kubectl apply -f deployment.yaml -n <NAMESPACE>
I have my deployment content in the deployment yaml file.
This is working fine. Now, I have updated a few things in the deployment.yaml file and hence would like to update the deployment.
Option 1:- Delete and deploy again
kubectl delete -f deployment.yaml -n <NAMESPACE>
kubectl apply -f deployment.yaml -n <NAMESPACE>
Option 2:- Use set to update changes
kubectl set image deployment/nginx-deployment nginx=nginx:1.91
I don't want to use this approach as I am keeping my deployment.yaml file in GitHub.
Option 3:- Using edit command
kubectl edit deployment/nginx-deployment
I don't want to use the above 3 options.
Is there any way to update the deployment using the file itself? Like:
kubectl update deployment.yaml -n NAMESPACE
This way, I will make sure that I will always have the latest deployment file in my GitHub repo.
As @Daisy Shipton has said, what you want to do can be done with a simple command: kubectl apply -f deployment.yaml.
I will also add that I don't think it's correct to use Option 2 to update the image used by the Pod with an imperative command! If the source of truth is the Deployment file in your GitHub repo, you should simply update that file, by modifying the image used by your Pod's container there.
Otherwise, the next time you update your Deployment object, if you forget to modify the .yaml file first, you will set the Pods back to the previous Nginx image.
So you should be cautious about using imperative commands to update the specification of any Kubernetes object!
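In practice, the declarative loop then looks like this (the image tag and commit message are illustrative):
# 1. Edit deployment.yaml so the container uses the new image, e.g. image: nginx:1.25
# 2. Re-apply the same file; kubectl computes the diff against the live object
kubectl apply -f deployment.yaml -n <NAMESPACE>
# 3. Commit the change so the GitHub repo stays the source of truth
git add deployment.yaml
git commit -m "Update nginx image"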