Install Order in K8s

I have a set of YAML files which are of different Kinds, like:
1 PVC
1 PV (the above PVC claims this PV)
1 Service
1 StatefulSet (the above Service is for this StatefulSet)
1 ConfigMap (the above StatefulSet uses this ConfigMap)
Does the install order of these objects matter to bring up an application using them?

If you run kubectl apply -f dir on a directory containing all of those files, it should work, at least on a recent version, as bugs in this area have been raised and addressed.
However, there are some soft dependencies that are still under discussion. For this reason, some people choose to order the resources themselves, or use a deployment tool like Helm, which deploys resources in a defined order.
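If you do want a deterministic order without extra tooling, a common convention is to prefix the filenames so that lexical order matches dependency order, since kubectl applies a directory's files in alphabetical order. A minimal sketch (filenames are hypothetical):

my-dir/
  00-configmap.yaml
  01-pv.yaml
  02-pvc.yaml
  03-service.yaml
  04-statefulset.yaml

kubectl apply -f my-dir/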

Related

Migrating resources from one OpenShift cluster to another

I have an OpenShift cluster and I want to move its resources to another cluster,
e.g. I have 40 Secrets, 20 ConfigMaps, and some other resources such as DeploymentConfigs and more.
Moving these Secrets and ConfigMaps manually is mind-numbing.
What is the best approach?
I would recommend trying out Monokle's Compare & Sync feature.
It allows you to visually compare the resources of two clusters and deploy resources from one to the other.
You can read more about how this works in the docs.
OpenShift has an "official" process for this called "Migration Toolkit for Containers (MTC)":
https://docs.openshift.com/container-platform/4.12/migration_toolkit_for_containers/about-mtc.html
Velero is also a great tool for your scenario. You can back up your namespaces with per-object granularity, and restore them elsewhere with or without making changes:
https://velero.io/docs/v1.10/migration-case/
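A minimal sketch of that flow with the Velero CLI (backup and namespace names are placeholders):

velero backup create my-ns-backup --include-namespaces my-namespace
# point velero at the destination cluster, then:
velero restore create --from-backup my-ns-backup

Note that the restore reads from the same object storage location, so both clusters need Velero configured against the same backup storage.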
Follow these steps:
move secrets and config maps
move deployments
move services
move routes
As an example of how to carry out each of the steps above:
1 - Log in to the first cluster:
oc login --token="your-token-for-first-server" --server="your-first-server"
2 - Export your resources:
oc get -o yaml cm > configmaps.yaml
oc get -o yaml secrets > secrets.yaml
...
There are also some default ConfigMaps and Secrets which you don't need to copy; you can remove those entries from the files after exporting.
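If your own secrets are all of type Opaque (an assumption), one way to skip the auto-generated service-account tokens is to filter on the secret type at export time:

oc get secrets --field-selector type=Opaque -o yaml > secrets.yaml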
3 - Log in to the second cluster:
oc login --token="your-token-for-second-server" --server="your-second-server"
If you forget this step, you may get a "resource already exists" error when loading the resources, because you are still pointed at the first cluster.
4 - Load the resources into the second cluster:
oc create -f configmaps.yaml
oc create -f secrets.yaml
...
There may be easier ways too; there is a lot of information about this beyond my knowledge.
There are also some considerations you need to be aware of:
You may not need to move Pods; usually they are created and controlled by other resources such as DeploymentConfigs.
In some companies, databases are managed completely separately by DBA teams, so you may not need to change anything; but if your database lives within your cluster, you should consider moving its PVs.
Using Helm charts or OpenShift templates can make this kind of task much easier.
You can include templates in your GitLab CI/CD pipelines, change only the cluster URL, and redeploy to get everything up and running.
In the end, if you are migrating from version 3 to 4, this article might be helpful.

Kubernetes apply to get to desired state

I feel like I have a terrible knowledge gap when it comes to managing the resource states within Kubernetes.
Suppose I have 2 deployments in my cluster, foo1 and foo2. They are defined in separate YAML files, foo1.yaml and foo2.yaml, both inside a my-dir directory, and have been applied with kubectl apply -f my-dir/
Now I want to make a third deployment, but also delete my second deployment. I know that I can do this in 2 steps:
Make another foo3.yaml file inside the directory and then do kubectl apply -f my-dir/foo3.yaml
Run kubectl delete -f my-dir/foo2.yaml to get rid of the second deployment.
My question is: can I do this in one shot by keeping the "desired state" in my directory? I.e., is there any way that I can delete foo2.yaml, create a new foo3.yaml, and then just do kubectl apply -f my-dir/ to let Kubernetes handle the deletion of the removed resource as well? What am I missing here?
The best and easiest way is to use a DevOps tool like Jenkins, Ansible, or Terraform to manage your deployments. If you don't want to use external tools, there is a Python client library for Kubernetes. With it you can fetch the details of your Kubernetes resources (deployments, pods, etc.) and manage your cluster; deleting a deployment whenever its file is removed only takes a few more lines.
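For what it's worth, kubectl apply itself has a (still alpha) --prune flag that approximates this one-shot behaviour: objects matching the label selector whose manifests no longer appear in the applied files get deleted. A sketch, assuming your resources carry a label such as app=my-app:

kubectl apply -f my-dir/ --prune -l app=my-app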

Running a Common Lisp application on a Kubernetes cluster

I have deployed 10+ Java/JS microservices to prod in GKE, and all is good. None use external volumes; it's a simple pipeline of generating a new image and pushing it to the container registry, and when upgrading the app to a new version, we just deploy a new Deployment with the new image, and the pods are upgraded via rolling update.
My question is: how would this look for a Common Lisp application? The main benefit of the language is that the code can be changed at runtime. Should the .lisp config files be attached as a ConfigMap? (An update to a ConfigMap still requires recreating pods for the changes to be applied.) Or maybe as some volume? (But what about 10 pods of the same Deployment all reading from the same volume? What if there are 50 pods or more; won't there be problems?) And should the deploy of a new version of the application look like v1 and v2 (new pods), or do we somehow use the benefits of runtime changes (with the solutions I mentioned above), so the pods' version stays the same while the new code is added via some external mechanism?
I would probably generate an image with the compiled code, and possibly a post-dump image, then rely on Kubernetes to restart pods in your Deployment or StatefulSet in a sensible way. If necessary (and web-based), use readiness checks to gate which pods receive requests.
As an aside, the projected contents of a ConfigMap should show up inside the container, unless you have specified the filename(s) of the projected keys from the ConfigMap, so it should be possible to keep the source that way, then have either the code itself check for updates or another mechanism to signal "time for a reload". But unless you pair that with compilation, you would probably end up with interpreted code.
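A sketch of that ConfigMap-as-volume approach (all names hypothetical): files projected this way are refreshed in place by the kubelet some time after the ConfigMap changes (unless mounted via subPath), and every pod of the Deployment, whether 10 or 50, reads its own identical copy:

apiVersion: v1
kind: ConfigMap
metadata:
  name: lisp-source
data:
  app.lisp: |
    (defun handler () "hello")
---
# pod template fragment of the Deployment
spec:
  containers:
    - name: app
      image: my-lisp-app:v1
      volumeMounts:
        - name: source
          mountPath: /srv/lisp
  volumes:
    - name: source
      configMap:
        name: lisp-source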

Deleting kubernetes yaml: how to prevent old objects from floating around?

I'm working on a continuous deployment routine for a Kubernetes application: every time I push a git tag, a GitHub Action is activated which calls kubectl apply -f kubernetes to apply a bunch of YAML Kubernetes definitions.
Let's say I add YAML for a new Service, and deploy it: kubectl will add it.
But then later on, I simply delete the YAML for that Service, and redeploy: kubectl will NOT delete it.
Is there any way that kubectl can recognize that the Service YAML is missing, and respond by deleting the Service automatically during continuous deployment? In my local test, the Service remains floating around.
Does the developer have to know to connect kubectl to the production cluster and delete the Service manually, in addition to deleting the YAML definition?
Is there a mechanism for Kubernetes to "know what's missing"?
You need to use a CI/CD tool for Kubernetes to achieve what you need. As mentioned by Sithroo, Helm is a very good option.
Helm lets you fetch, deploy and manage the lifecycle of applications, both 3rd-party products and your own.
No more maintaining random groups of YAML files (or very long ones) describing pods, replica sets, services, RBAC settings, etc. With Helm, there is a structure and a convention for a software package that defines a layer of YAML templates and another layer that changes the templates, called values. Values are injected into templates, allowing a separation of configuration and defining where changes are allowed. This whole package is called a Helm Chart.
Essentially you create structured application packages that contain everything they need to run on a Kubernetes cluster, including dependencies the application requires. Source
Before you start, I recommend these articles explaining its quirks and features.
The missing CI/CD Kubernetes component: Helm package manager
Continuous Integration & Delivery (CI/CD) for Kubernetes Using CircleCI & Helm
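The property of Helm most relevant to this question: Helm tracks every resource belonging to a release, so if you remove a manifest from the chart's templates and upgrade, Helm deletes the corresponding live object from the cluster. A typical CI step (release and chart names are hypothetical):

helm upgrade --install my-app ./my-chart --namespace my-namespace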
There's no such way. You can deploy resources from a YAML file from anywhere, as long as you can reach the cluster and have your kubeconfig set up, so Kubernetes cannot know how to respond to a file's deletion. If you still want to do this, you could write a program (e.g. in Go) that watches the files in one place and deletes the corresponding resource whenever a file is removed.
One way to do this within Kubernetes is with an operator: whenever there is any change in your files, you update the CRD used to deploy resources via the operator.
Before deleting the YAML file, you can run kubectl delete -f file.yaml; this way all the resources created by that file will be deleted.
However, what you are looking for is achieving a desired state using k8s. You can do this with tools like Helmfile.
Helmfile allows you to specify all the releases you want in one file, and it will converge to the desired state every time you run helmfile apply.
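A minimal helmfile.yaml sketch (release and chart names are hypothetical):

releases:
  - name: my-app
    namespace: my-namespace
    chart: ./charts/my-app
    values:
      - values.yaml

To remove a release from the cluster, the usual pattern is to flip it to installed: false and run helmfile apply, rather than simply deleting the entry from the file.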

Kubernetes rolling update vs set image

After some intense Google and SO searching, I couldn't find any document that mentions both rolling update and set image, let alone stresses the difference between the two.
Can anyone shed light? When would I rather use either of those?
EDIT: It's worth mentioning that I'm already working with Deployments (rather than ReplicationControllers directly) and that I'm using YAML configuration files. It would also be nice to know if there's a way to perform either of those using configuration files rather than direct commands.
In older k8s versions, the ReplicationController was the only resource for managing a group of replicated pods. To update the pods of a ReplicationController you would use kubectl rolling-update.
Later, k8s introduced the Deployment, which manages ReplicaSet resources. A Deployment can be updated via kubectl set image.
Working with Deployment resources (as you already do) is the preferred way. I guess the ReplicationController and its rolling-update command are mainly still there for backward compatibility.
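To make the contrast concrete (resource and image names below are hypothetical):

# legacy: imperative rolling update of a ReplicationController
kubectl rolling-update my-rc --image=registry.example.com/my-app:v2

# current: change the Deployment's pod template; its controller rolls the pods over
kubectl set image deployment/my-app web=registry.example.com/my-app:v2

As for the configuration-files part of the edit: with a Deployment you can stay declarative by changing the image tag in the YAML file and running kubectl apply -f deployment.yaml, which triggers the same rolling update.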
UPDATE: As mentioned in the comments:
To update a Deployment I used kubectl patch, as it can also change things like adding new env vars, whereas kubectl set image is rather limited and can only change the image version. Also note that patch can be applied to all k8s resources and is not restricted to Deployments.
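For example (names hypothetical), a strategic merge patch that bumps the image and adds an env var in one command; containers in the patch are merged with the live spec by name:

kubectl patch deployment my-app -p '{"spec":{"template":{"spec":{"containers":[{"name":"web","image":"registry.example.com/my-app:v2","env":[{"name":"LOG_LEVEL","value":"debug"}]}]}}}}'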
Later, I shifted my deployment processes to use Helm, a really neat and k8s-native package management tool. I can highly recommend having a look at it.