How do people test Kubernetes config locally (Kustomize)?

Scenario
We have a large, complex set of Kustomize configurations with replacements, CRDs, SOPS, etc.
We can generate the config locally/CI with
..\kustomize.exe build --load-restrictor=LoadRestrictionsNone .\path > sample-build.yaml
But this doesn't test the actual values that will be used. It doesn't detect things like missing config maps.
kubectl apply -f .\dev-01.sample.yaml --dry-run=server
could be used, but that would need a cloud cluster for each developer, or require the developer to run all the containers locally (e.g. in Docker).
Are there any commands to 'expand' sample-build.yaml so I can see the actual environment variables that will be passed to the container at deploy time, but without a local cluster?
What do other people use to CI Kustomize?
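For reference, a minimal sketch of two common ways to validate the rendered output in CI without a full cluster; it assumes kubeconform and kind are installed, and flag names should be checked against their current docs:

# Schema-validate the rendered manifests offline (no cluster needed)
kubeconform -strict -summary -ignore-missing-schemas sample-build.yaml

# Or spin up a throwaway kind cluster in CI for a server-side dry-run
kind create cluster --name kustomize-ci
kubectl apply --dry-run=server -f sample-build.yaml
kind delete cluster --name kustomize-ci

Schema validation catches structural mistakes but, like a client-side dry-run, it won't detect references to ConfigMaps that only exist in the real cluster; the kind variant gets closer, at the cost of spinning up a cluster per CI run.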

Related

Migrating resources from one OpenShift cluster to another

I have an Openshift cluster and I want to move its resources to another cluster,
e.g. I have 40 Secrets, and 20 ConfigMaps, and some other resources such as deployment configs and more.
Moving these secrets and config maps manually is mind-numbing.
What is the best approach?
I would recommend trying out Monokle's Compare & Sync feature.
It allows you to visually compare the resources of two clusters and deploy resources from one to the other.
You can read more about how this works in the docs.
OpenShift has an "official" process for this called "Migration Toolkit for Containers (MTC)":
https://docs.openshift.com/container-platform/4.12/migration_toolkit_for_containers/about-mtc.html
Velero is also a great tool for your scenario. You can back up your namespaces with per-object granularity, and restore them elsewhere with or without changes:
https://velero.io/docs/v1.10/migration-case/
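For the cluster-to-cluster case, both clusters need to point at the same backup storage location; the flow then looks roughly like this (backup and namespace names are placeholders):

# On the source cluster
velero backup create my-ns-backup --include-namespaces my-namespace

# On the target cluster (configured with the same backup storage location)
velero restore create --from-backup my-ns-backup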
Follow these steps:
move secrets and config maps
move deployments
move services
move routes
As an example of how to carry out each of the steps mentioned above:
1 - Login to the first cluster:
oc login --token="your-token-for-first-server" --server="your-first-server"
2 - Export your resources:
oc get -o yaml cm > configmaps.yaml
oc get -o yaml secrets > secrets.yaml
...
There are also some default ConfigMaps and Secrets that you don't need to copy; you can remove them from the files after exporting.
3 - Login to the second cluster:
oc login --token="your-token-for-second-server" --server="your-second-server"
If you forget to log in to the second cluster, the create commands below will run against the first one and fail with a "resource already exists" error, so be careful not to skip this step.
4 - Load resources to the second cluster
oc create -f configmaps.yaml
oc create -f secrets.yaml
...
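One gotcha with this approach: oc get -o yaml exports server-populated metadata (resourceVersion, uid, creationTimestamp, etc.) that the second cluster may reject on create. A rough cleanup sketch, assuming mikefarah's yq v4 is installed (adjust the field list to your needs):

# Strip server-populated metadata before loading into the second cluster
yq 'del(.items[].metadata.resourceVersion) | del(.items[].metadata.uid) | del(.items[].metadata.creationTimestamp) | del(.items[].metadata.managedFields)' configmaps.yaml > configmaps.clean.yaml
oc create -f configmaps.clean.yaml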
There may be easier ways too, and there is a lot more information about this than I can cover here.
There are also some considerations you need to be aware of:
You usually don't need to move Pods; they are created and controlled by other resources such as DeploymentConfigs.
In some companies, databases are managed completely separately by DBA teams and you may not need to change anything, but if your database runs inside the cluster, you should consider moving its PV.
Using Helm charts or OpenShift templates can make this kind of task much easier.
You can include the templates in your GitLab CI/CD pipelines, change only the cluster URL, and redeploy to get everything up and running.
In the end, if you are migrating from version 3 to 4, this article might be helpful.

Kubernetes - how to specify image pull secrets, when creating deployment with image from remote private repo

I need to create and deploy on an existing Kubernetes cluster, an application based on a docker image which is hosted on private Harbor repo on a remote server.
I could use this if the repo was public:
kubectl create deployment <deployment_name> --image=<full_path_to_remote_repo>:<tag>
Since the repo is private, the username, password etc. are required for it to be pulled. How do I modify the above command to embed that information?
Thanks in advance.
P.S.
I'm looking for a way that doesn't involve creating a secret using kubectl create secret and then creating a yaml defining the deployment.
The goal is to have kubectl pull the image using the supplied creds and deploy it on the cluster without any other steps. Could this be achieved with a single (above) command?
Edit:
Creating and using a secret is acceptable if there is a way to specify the secret as an option in the kubectl command rather than specifying it in a YAML file (really trying to avoid YAML). Is there a way of doing that?
There are no flags to pass an imagePullSecret to kubectl create deployment, unfortunately.
If you're coming from the world of Docker Compose or Swarm, having one line deployments is fairly common. But even these deployment tools use underlying configuration and .yml files, like docker-compose.yml.
For Kubernetes, there is official documentation on pulling images from private registries, and there is even special handling for docker registries. Check out the article on creating Docker config secrets too.
According to the docs, you must define a secret in this way to make it available to your cluster. Because Kubernetes is built for resiliency/scalability, any machine in your cluster may have to pull your private image, and therefore each machine needs access to your secret. That's why it's treated as its own entity, with its own manifest and YAML file.
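That said, you can still avoid writing the YAML yourself: kubectl can generate the Secret and attach it to the namespace's default service account for you. A rough sketch (registry host, credentials and names are placeholders):

# Create the registry credential without hand-writing a manifest
kubectl create secret docker-registry regcred \
  --docker-server=<harbor-host> \
  --docker-username=<user> \
  --docker-password=<password>

# Attach it to the default service account so pods in this namespace can pull
kubectl patch serviceaccount default -p '{"imagePullSecrets": [{"name": "regcred"}]}'

# Then the original one-liner works unchanged
kubectl create deployment <deployment_name> --image=<full_path_to_remote_repo>:<tag>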

Deploy containers in pod using docker compose volumes

I was given a docker compose file for superset which included volumes mounted from the repo itself.
docker-compose-non-dev.yml
I have to deploy this as containers in a pod in an EKS cluster. I can't figure out how the volumes should be done because the files are mounted locally from the repo when we run:
docker-compose up
[ EDIT ]
I just built the container with the files I needed inside it.
Docker Compose is a tool geared towards local deployments (as you may know), so it optimizes its workflows with that assumption. One way to work around this is by wrapping the Docker image(s) that Compose brings up with the additional files you have in your local environment. For example, a wrapper Dockerfile would look something like:
FROM <original image>
ADD <local files to new image>
The resulting image is what you would run in the cloud on EKS.
Of course there are many other ways to work around it, such as using Kubernetes volumes and (pre-)populating them with the local files, or baking the local files into the original image from the get-go, etc.
All in all, the traditional Compose model of thinking (with local file mappings) isn't very "cloud deployment friendly".
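If you do want to go the Kubernetes-volumes route instead of rebuilding the image, one lightweight option is to load the local files into a ConfigMap and mount that into the pod. A rough sketch (the directory and names are placeholders, and ConfigMaps are limited to roughly 1 MiB):

# Load the local config files into a ConfigMap
kubectl create configmap superset-config --from-file=./docker/

# Then reference it from the pod spec as a volume:
#   volumes:
#     - name: superset-config
#       configMap:
#         name: superset-config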
You can convert docker-compose.yaml files with a tool called kompose.
It's as easy as running
kompose convert
in a directory containing a docker-compose.yaml file.
This will create a bunch of files which you can deploy with kubectl apply -f . (or kompose up). You can read more here.
However, even though kompose will generate PersistentVolumeClaim manifests, no PersistentVolumes will be created. You have to create those yourself (the cluster may provision PVs dynamically based on the PVCs generated by kompose, but I would not rely on that).
Docker Compose is mainly used for development, testing, and single-host deployments [reference], which is not exactly what Kubernetes was created for (the latter being cloud oriented).

Skaffold dev stream logs of pods created by helm hooks

I would like to see the output from my pre-install/post-install helm hooks when using skaffold dev, but this does not seem to work.
Which filters does skaffold use to get all the pods for log tailing? Is there a way to force skaffold to pick up the hooks by applying some labels (e.g. skaffold.dev/run-id: static) ?
Context
Doing dev with local docker, the image building is pretty fast, so for some use cases there is no need to use file sync and special dev-mode container images with file watching inside.
There is this feature request: https://github.com/GoogleContainerTools/skaffold/issues/1441, but this is for adding hooks to skaffold itself.
The pods created by helm hooks are not removed (https://github.com/GoogleContainerTools/skaffold/issues/2876), but this is expected behavior for helm delete.
Thanks @acristu for the question. Skaffold dev here.
Currently, skaffold is unaware of pods deployed in the pre and post helm hooks.
The reason is that we don't parse the manifests in these hooks, and hence can't transform them to add the required label skaffold.dev/run-id.
Currently there is no way to force skaffold to pick up the logs from these pods/containers.
That said, there is a pending feature request to extend the current log configuration to include resourceType or resourceName, like the portForward section:
portForward: # describes user defined resources to port-forward.
  - resourceType: # Kubernetes type that should be port forwarded.
    resourceName:
Supporting this in skaffold would be a great idea.
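Until something like that lands, one workaround is to tail the hook pods yourself in a second terminal while skaffold dev runs; the job name and label below are placeholders that depend on your chart:

# Follow the logs of a pre-install hook job directly
kubectl logs -f job/my-chart-pre-install -n my-namespace

# Or, if the hook pods carry a label
kubectl logs -f -l app.kubernetes.io/component=pre-install-hook -n my-namespace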

Deleting kubernetes yaml: how to prevent old objects from floating around?

I'm working on a continuous deployment routine for a Kubernetes application: every time I push a git tag, a GitHub Action is triggered which calls kubectl apply -f kubernetes to apply a bunch of YAML Kubernetes definitions.
Let's say I add YAML for a new service, and deploy it -- kubectl will add it.
But then later on, I simply delete the YAML for that service, and redeploy -- kubectl will NOT delete it.
Is there any way that kubectl can recognize that the service YAML is missing, and respond by deleting the service automatically during continuous deployment? In my local test, the service remains floating around.
Does the developer have to know to connect kubectl to the production cluster and delete the service manually, in addition to deleting the YAML definition?
Is there a mechanism for Kubernetes to "know what's missing"?
You need to use a CI/CD tool for Kubernetes to achieve what you need. As mentioned by Sithroo, Helm is a very good option.
Helm lets you fetch, deploy and manage the lifecycle of applications, both 3rd party products and your own.
No more maintaining random groups of YAML files (or very long ones) describing pods, replica sets, services, RBAC settings, etc. With Helm, there is a structure and a convention for a software package that defines a layer of YAML templates and another layer that changes the templates, called values. Values are injected into templates, thus allowing a separation of configuration and defining where changes are allowed. This whole package is called a Helm Chart.
Essentially you create structured application packages that contain everything they need to run on a Kubernetes cluster, including dependencies the application requires. (Source)
Before you start, I recommend these articles explaining its quirks and features.
The missing CI/CD Kubernetes component: Helm package manager
Continuous Integration & Delivery (CI/CD) for Kubernetes Using CircleCI & Helm
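With Helm, the deploy step in a CI pipeline typically collapses to a single idempotent command; a rough sketch (release name, chart path and values file are placeholders):

# Install the release if missing, upgrade it if it already exists
helm upgrade --install myapp ./charts/myapp \
  --namespace production \
  --values values-production.yaml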
There's no such way. You can deploy resources from a YAML file from anywhere, as long as you can reach the cluster and have a kubeconfig set up, so Kubernetes will not know how to respond to a file deletion. If you still want to do this, you can write a program (e.g. in Go) that watches the files in one place and deletes the corresponding resource whenever a file gets deleted.
Another way to do it within Kubernetes is with an operator: whenever your files change, you update the custom resource that the operator uses to deploy the resources.
Before deleting the YAML file, you can run kubectl delete -f file.yaml; this way all the resources created by that file will be deleted.
However, what you are looking for is achieving a desired state with k8s. You can do this by using tools like Helmfile.
Helmfile allows you to declare all the releases you want in one file, and it will converge to that desired state every time you run helmfile apply.
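A minimal sketch of what that can look like (release name and chart path are placeholders; note that to have Helmfile remove a release, you typically mark it with installed: false rather than just deleting the entry):

# Declare the desired releases in helmfile.yaml
cat > helmfile.yaml <<'EOF'
releases:
  - name: my-service
    namespace: default
    chart: ./charts/my-service
    values:
      - values.yaml
EOF

# Converge the cluster to the declared state
helmfile apply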