Kubernetes apply to get to desired state - kubernetes

I feel like I have a terrible knowledge gap when it comes to managing the resource states within Kubernetes.
Suppose I have 2 Deployments in my cluster, foo1 and foo2. They are defined in separate YAML files, foo1.yaml and foo2.yaml, both inside a my-dir directory, and have been applied with kubectl apply -f my-dir/
Now I want to make a third deployment, but also delete my second deployment. I know that I can do this in 2 steps:
Make another foo3.yaml file inside the directory and then do kubectl apply -f my-dir/foo3.yaml
Run kubectl delete -f my-dir/foo2.yaml to get rid of the second deployment.
My question is: can I do this in one shot by keeping the "desired state" in my directory? That is, is there any way I can delete foo2.yaml, create a new foo3.yaml, and then just run kubectl apply -f my-dir/ so that Kubernetes handles the deletion of the removed resource file as well? What am I missing here?

The best and easiest way is to use a DevOps tool such as Jenkins, Ansible, or Terraform to manage your deployments. If you don't want to use external tools, there is a Python client library for managing Kubernetes. With this library you can fetch the details of your Kubernetes resources (deployments, pods, etc.) and manage your cluster. Similarly, if you want deletions to be handled, you just need to add a few more lines that delete the corresponding resource when its file is removed.

Related

Migrating resources from one OpenShift cluster to another

I have an OpenShift cluster and I want to move its resources to another cluster;
e.g. I have 40 Secrets, 20 ConfigMaps, and some other resources such as deployment configs and more.
Moving these Secrets and ConfigMaps manually is mind-numbingly tedious.
What is the best approach?
I would recommend trying out Monokle's Compare & Sync feature.
It allows you to visually compare the resources of two clusters and deploy resources from one to the other.
You can read more about how this works in the docs.
OpenShift has an "official" process for this called "Migration Toolkit for Containers (MTC)":
https://docs.openshift.com/container-platform/4.12/migration_toolkit_for_containers/about-mtc.html
Velero is also a great tool for your scenario. You can back up your namespaces, with control over which objects are included, and restore them elsewhere with or without making changes:
https://velero.io/docs/v1.10/migration-case/
Follow these steps:
move secrets and config maps
move deployments
move services
move routes
As an example of how to carry out each of the steps above:
1 - Login to the first cluster:
oc login --token="your-token-for-first-server" --server="your-first-server"
2 - Export your resources:
oc get -o yaml cm > configmaps.yaml
oc get -o yaml secrets > secrets.yaml
...
There are also some default ConfigMaps and Secrets that you don't need to copy; you can delete them from these files after exporting.
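If you prefer to filter at export time instead, here is a sketch; the field selector below is only an example, assuming your oc/kubectl version supports filtering Secrets by type:
# skip service-account token Secrets, which the cluster recreates on its own
oc get secrets --field-selector type!=kubernetes.io/service-account-token -o yaml > secrets.yaml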
3 - Login to the second cluster:
oc login --token="your-token-for-second-server" --server="your-second-server"
If you forget this step and load the resources into the first cluster again, you will get a "resource already exists" error, so be careful not to skip it.
4 - Load the resources into the second cluster:
oc create -f configmaps.yaml
oc create -f secrets.yaml
...
There might be easier ways too, and there is a lot of information about this that is beyond my knowledge.
There are also some considerations you need to be aware of:
You may not need to move Pods; usually they are created and controlled by other resources such as deployment configs.
In some companies, databases are managed completely separately by DBA teams, so you may not need to change anything; but if your database is inside your cluster, you should consider moving its PVs.
Using a Helm chart or OpenShift templates can make this kind of task much easier.
You can include the templates in your GitLab CI/CD pipelines, change only the cluster URL, and redeploy to get everything up and running again; a sketch follows below.
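For instance, a minimal GitLab CI job along these lines (the image, variable names, and template file are placeholders for illustration):
deploy:
  image: quay.io/openshift/origin-cli:latest    # any image that ships the oc client
  script:
    - oc login --token="$OC_TOKEN" --server="$CLUSTER_URL"
    - oc process -f template.yaml | oc apply -f -
Pointing the pipeline at another cluster then only means changing the CLUSTER_URL variable.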
In the end, if you are migrating from version 3 to 4, this article might be helpful.

Deleting kubernetes yaml: how to prevent old objects from floating around?

I'm working on a continuous deployment routine for a Kubernetes application: every time I push a git tag, a GitHub Action is activated, which calls kubectl apply -f kubernetes to apply a bunch of YAML Kubernetes definitions.
Let's say I add YAML for a new service and deploy it -- kubectl will add it.
But then later on, I simply delete the YAML for that service and redeploy -- kubectl will NOT delete it.
Is there any way that kubectl can recognize that the service YAML is missing, and respond by deleting the service automatically during continuous deployment? In my local test, the service remains floating around.
Does the developer have to know to connect kubectl to the production cluster and delete the service manually, in addition to deleting the YAML definition?
Is there a mechanism for Kubernetes to "know what's missing"?
You need to use a CI/CD tool for Kubernetes to achieve what you need. As mentioned by Sithroo, Helm is a very good option.
Helm lets you fetch, deploy and manage the lifecycle of applications, both 3rd party products and your own.
No more maintaining random groups of YAML files (or very long ones) describing pods, replica sets, services, RBAC settings, etc. With helm, there is a structure and a convention for a software package that defines a layer of YAML templates and another layer that changes the templates, called values. Values are injected into templates, thus allowing a separation of configuration and defining where changes are allowed. This whole package is called a Helm Chart.
Essentially you create structured application packages that contain everything they need to run on a Kubernetes cluster, including dependencies the application requires. Source
Before you start, I recommend these articles explaining its quirks and features.
The missing CI/CD Kubernetes component: Helm package manager
Continuous Integration & Delivery (CI/CD) for Kubernetes Using CircleCI & Helm
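To give a feel for the lifecycle Helm manages, a short sketch (the release name and chart path are made up for the example):
helm install my-release ./my-chart --set image.tag=1.2.0    # first deploy
helm upgrade my-release ./my-chart --set image.tag=1.3.0    # roll out a change
helm rollback my-release 1                                  # return to revision 1
helm uninstall my-release                                   # remove everything the chart created
The last command is what addresses this question: Helm tracks what it installed, so it can also delete it when it is no longer wanted.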
There's no such way. You can apply resources from a YAML file from anywhere, as long as you can reach the cluster and have a kubeconfig, so Kubernetes will not know how to respond to a file deletion. If you still want to do this, you can write a program (e.g. in Go) that checks for the files in one place and deletes the corresponding resource whenever a file gets deleted.
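A rough shell sketch of that idea, assuming each Deployment lives in manifests/<name>.yaml (that naming convention is an assumption of the example):
# delete any deployment whose manifest file no longer exists
for dep in $(kubectl get deployments -o name); do
  name=${dep#deployment.apps/}                     # strip the resource-kind prefix
  [ -f "manifests/${name}.yaml" ] || kubectl delete "$dep"
done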
One way to do this within Kubernetes is with an operator: whenever one of your files changes, you update the custom resource (CRD) that the operator uses to deploy resources.
Before deleting the yaml file, you can run kubectl delete -f file.yaml, this way all the resources created by this file will be deleted.
However, what you are looking for is achieving the desired state with k8s. You can do this by using tools like Helmfile.
Helmfile allows you to specify all the resources you want in one file, and it will converge on that desired state every time you run helmfile apply.
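A minimal helmfile.yaml sketch, reusing the foo1/foo2 names from the first question (the chart paths are placeholders):
releases:
  - name: foo1
    namespace: default
    chart: ./charts/foo1
  - name: foo2
    namespace: default
    chart: ./charts/foo2
    installed: false       # marked for removal; helmfile apply uninstalls it
Running helmfile apply converges the cluster on this file, installing, upgrading, or uninstalling releases as needed.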

Using full Declarative approach in Kubernetes

We can use a declarative approach for creating and updating Kubernetes resources using kubectl apply -f; how can we do the same for recycling the resources that are no longer needed?
I have used kubectl delete, but that is imperative, and sometimes things need to be deleted in the proper order.
Is there a way to always use kubectl apply and have it figure out by itself which resources to keep and which to delete, just like Terraform does?
Or should we conclude that currently the declarative approach works for resource creation and updates only?
Use case:
For example, we have decided not to provide the K8s API to end users and instead give them a repository where they keep and update their YAML files, which a bot applies to the cluster whenever a pull request is merged. So we need declarative delete as well, so that we don't have to clean up after users. A Terraform provider might be a solution, but then things are locked to Terraform and users need to learn one more tool instead of using the native k8s format.
Turns out that they have added a declarative approach for pruning resources that are no longer present in the YAML manifests:
kubectl apply -f <directory/> --prune -l your=label
With quite a few cautions, though.
As an alternative to kubectl delete, you can use kubectl apply to identify objects to be deleted after their configuration files have been removed from the directory. Apply with --prune queries the API server for all objects matching a set of labels, and attempts to match the returned live object configurations against the object configuration files. If an object matches the query, and it does not have a configuration file in the directory, and it has a last-applied-configuration annotation, it is deleted.
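Tying this back to the original question above, a sketch (it assumes every manifest in my-dir/ carries a label such as app=my-stack, which you must add yourself, and newer kubectl versions may additionally want an allowlist of prunable types):
rm my-dir/foo2.yaml                               # drop the manifest you no longer want
kubectl apply -f my-dir/ --prune -l app=my-stack  # foo3 is created, foo2 is pruned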

kubernetes - kubectl run vs create and apply

I'm just getting started with kubernetes and setting up a cluster on AWS using kops. In many of the examples I read (and try), there will be commands like:
kubectl run my-app --image=mycompany/myapp:latest --replicas=1 --port=8080
kubectl expose deployment my-app --port=80 --type=LoadBalancer
This seems to do several things behind the scenes, and I can view the manifest files created using kubectl edit deployment, and so forth. However, I see many examples where people create the manifest files by hand and use commands like kubectl create -f or kubectl apply -f.
Am I correct in assuming that both approaches accomplish the same goals, but that by creating the manifest files yourself, you have a finer grain of control?
Would I then have to be creating Service, ReplicationController, and Pod specs myself?
Lastly, if you create the manifest files yourself, how do people generally structure their projects as far as storing these files? Are they simply in a directory alongside the project they are deploying?
The fundamental question is how to apply all of the K8s objects into the k8s cluster. There are several ways to do this job.
Using Generators (Run, Expose)
Using Imperative way (Create)
Using Declarative way (Apply)
All of the above ways have a different purpose and simplicity. For instance, if you want to check quickly whether a container works as you intended, you might use generators, as in the sketch below.
If you want to version-control the k8s objects, it's better to use the declarative way, which helps determine the accuracy of data in k8s objects.
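Side by side, the three styles look roughly like this (the manifest filename is hypothetical; note that in current kubectl versions kubectl run creates a bare Pod rather than a Deployment):
kubectl run my-app --image=mycompany/myapp:latest                # generator: quick one-off
kubectl create deployment my-app --image=mycompany/myapp:latest  # imperative
kubectl apply -f my-app-deployment.yaml                          # declarative, version-controllable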
Deployment, ReplicaSet and Pods are different layers which solve different problems. All of these concepts provide flexibility to k8s.
Pods: makes sure that related containers are scheduled together and run efficiently.
ReplicaSet: makes sure that the k8s cluster has the desired number of replicas of the Pods.
Deployment: makes sure that you can have different versions of Pods and provides the capability to roll back to a previous version (see the manifest sketch below).
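To make the layering concrete, here is a minimal Deployment manifest; the image and port echo the question above, and the comments mark where each layer lives:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 1                # maintained by a ReplicaSet the Deployment creates
  selector:
    matchLabels:
      app: my-app
  template:                  # the Pod template the ReplicaSet stamps out
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: mycompany/myapp:latest
          ports:
            - containerPort: 8080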
Lastly, it depends on your use case how you want to use these concepts or methodologies. It's not about which is good or which is bad.
There is a little more nuance to the difference between apply and create than what is already mentioned here. Kubectl create can be used imperatively on the command line or declaratively against a manifest file.
Kubectl apply is used declaratively against a manifest file. You can't use kubectl apply imperatively.
One key difference is when you already have an object and you want to update something. Even if you used a manifest file with kubectl create, you will get an error when you use kubectl create again to update the same resource. But, if you use kubectl apply, you will not get an error. It will update the resource without any issues.
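A quick way to see this behavior for yourself (my-app.yaml stands in for any valid manifest):
kubectl create -f my-app.yaml   # first run: resource created
kubectl create -f my-app.yaml   # second run: Error from server (AlreadyExists)
kubectl apply -f my-app.yaml    # succeeds on the first run and every run after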
So, the convention is: kubectl apply creates AND updates resources, kubectl create creates resources, and kubectl run creates a pod with a specific image, namespace, etc., for experimentation and testing (often with the --dry-run=client option).

How to Update Kubernetes Deployments + Services

I have put together a simple cluster with several deploys that interact nicely, DNS works, etc. However, as I'm using Deployments, I have a few questions that I could not find answered in the docs.
How do I non-destructively update a deployment with a new copy of the deploy file? I've got edit and replace, but I'd really like to just pass in the file proper with its changed fields (version, image, ports, etc.).
What's the preferred way of exposing a deployment as a service? There's a standalone file, there's an expose command... anything else I should consider? Is it possible to bundle the service into the deployment file?
How do I non-destructively update a deployment
You can use kubectl replace or kubectl apply. Replace is a full replacement. Apply tries to do a selective patch operation.
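For example, with the files from the first question above:
kubectl apply -f my-dir/foo1.yaml     # patches only the fields present in the file
kubectl replace -f my-dir/foo1.yaml   # replaces the live object with the file's full spec
Note that kubectl replace requires the object to already exist, while kubectl apply will create it if it is missing.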
What's the preferred way of exposing a deployment as a service?
All of your suggestions are valid. Some people prefer a script, and for that kubectl expose is great. Some people want more control and versioning, so YAML files + kubectl apply or kubectl replace are appropriate. You can bundle multiple YAML "documents" into a single file, just join the blocks with "---" on a line by itself.
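A minimal sketch of such a bundled file (names, image, and ports are placeholders echoing the earlier examples):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: mycompany/myapp:latest
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 8080
A single kubectl apply -f of this file creates or updates both objects together.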