How to migrate to k8s Kustomize? - kubernetes

I'm a beginner with k8s and I don't have knowledge about Helm, Kustomize, GitOps, Flux2, and other related terms.
I have an app on my k8s cluster and my goal is to move to GitOps using Flux2. Flux2 requires a Kustomization, so my current goal is to migrate my manifests to Kustomize.
What I'm Trying To Do
Migrate the deployment and service manifests from my k8s cluster to a Kustomize setup inside a folder in a git repo.
Problem
I extracted the deployment and service manifests, but they contain a lot of autogenerated fields. I don't know how to produce a minimal YAML from each manifest.
What I Tried So Far
Extracted my app's deployment and service YAMLs from my production k8s cluster into a folder (with the autogenerated fields).
Created a kustomization.yaml file:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- ./service.yaml
- ./deployment.yaml
I set up a dev k8s cluster locally on my machine with k3d.
Questions
I don't know how to reduce this to a minimal folder containing deployment.yaml, service.yaml, and kustomization.yaml.
I'm not even sure this is the right way to build a minimal Kustomize example.
How can I test it myself?
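For illustration, the autogenerated fields to strip from a `kubectl get deployment -o yaml` export are typically `status`, plus `uid`, `resourceVersion`, `creationTimestamp`, `generation`, and `managedFields` under `metadata`, and the `kubectl.kubernetes.io/last-applied-configuration` annotation. A minimal deployment.yaml might then look like the sketch below (the `my-app` name, image, and port are placeholders, not from the question):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app          # placeholder name
  labels:
    app: my-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: my-app:1.0   # placeholder image
          ports:
            - containerPort: 8080
```

To test yourself against the k3d cluster, `kubectl kustomize ./folder` renders the manifests without applying anything, and `kubectl apply -k ./folder --dry-run=server` validates them against the API server without persisting them.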

Related

How is deletion of K8s objects managed in skaffold deploy

I am starting to play with skaffold to handle continuous deployment in my Kubernetes cluster.
I have a bunch of yaml files that just wait to be applied with kubectl, among them a.yaml and b.yaml:
apiVersion: skaffold/v2beta29
kind: Config
metadata:
  name: skaffold-deploy
deploy:
  kubectl:
    manifests:
      - a.yaml
      - b.yaml
Now, I made a change that requires deleting objects (in terms of kubectl delete) described in b.yaml (I simply removed the file from my directory).
Is it possible to do so with skaffold?
If I skaffold deploy with this skaffold.yaml file:
apiVersion: skaffold/v2beta29
kind: Config
metadata:
  name: skaffold-deploy
deploy:
  kubectl:
    manifests:
      - a.yaml
objects in b.yaml are neither deleted nor updated.
I looked for a way to do this in the official documentation but could not find anything related to it. skaffold delete seems to delete everything that was previously deployed with it.
Thanks a lot in advance

How to generate kubernetes configmap from the Quarkus `application.properties` with its `helm extension`?

How can the properties in Quarkus application.properties be made available as a ConfigMap or environment variables in a Kubernetes container?
Quarkus provides Helm and Kubernetes extensions that generate resources (YAML) during the build, which can be used to deploy the application to Kubernetes. However, these extensions do not explain how to generate a ConfigMap holding the application properties set in application.properties, and the site gives no directions on it either.
This is the purpose of the Kubernetes Config extension. After adding the Kubernetes Config, Kubernetes, and Helm extensions to your Maven/Gradle configuration, you first need to enable it by adding the following properties to your application.properties:
quarkus.kubernetes-config.enabled=true
quarkus.kubernetes-config.config-maps=app-config
With these two properties, Quarkus will try to load the config map named app-config at startup as config source.
Where is the ConfigMap named app-config? You need to write it on your own and write the application properties there, for example:
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  application.properties: |
    hello.message=Hello %s from configmap
and then save this content as the file src/main/kubernetes/kubernetes.yml. (Note that the file must be named kubernetes.yml and live in the folder src/main/kubernetes.) More information is in this link.
The Kubernetes extension will aggregate the resources within the file src/main/kubernetes/kubernetes.yml into the generated target/kubernetes/kubernetes.yml (you will notice your configmap is there).
And finally, the Helm extension will inspect the target/kubernetes folder and create the Helm chart templates accordingly.
You can checkout a complete example in this link.
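For reference, the three extensions mentioned above correspond to the following Maven dependencies (a sketch with version numbers omitted; note that the Helm extension lives in the Quarkiverse group, not io.quarkus):

```xml
<!-- Kubernetes Config extension: loads ConfigMaps/Secrets as a config source at startup -->
<dependency>
  <groupId>io.quarkus</groupId>
  <artifactId>quarkus-kubernetes-config</artifactId>
</dependency>
<!-- Kubernetes extension: generates target/kubernetes/kubernetes.yml during the build -->
<dependency>
  <groupId>io.quarkus</groupId>
  <artifactId>quarkus-kubernetes</artifactId>
</dependency>
<!-- Helm extension (Quarkiverse): derives a Helm chart from the generated resources -->
<dependency>
  <groupId>io.quarkiverse.helm</groupId>
  <artifactId>quarkus-helm</artifactId>
</dependency>
```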

Jenkins deployment with Kustomize - how to add JENKINS_OPTS

I feel like this should be an already-asked question, but I'm having difficulty finding a concrete answer. I'm deploying Jenkins through ArgoCD by defining the deployment via Kustomize (Kubernetes YAML). I want to inject a prefix so that Jenkins starts on /jenkins, but I don't see a way to add it. I saw online that I can have an env tag, but no full example of this was available. Where would I inject a prefix value if using Kubernetes YAML for a Jenkins deployment?
So, I solved this issue myself, and I'd like to post the answer as this is the top searched question when searching "Kustomize Jenkins_opts".
In your project, assuming you are using Kustomize to deploy Jenkins (This will work with any app deployment where you want to inject values when deploying), you should have a project structure similar to this:
ProjectA
|
|---> app.yaml       // contains the YAML definitions for your deployment
|---> kustomize.yaml // entry file to run Kustomize to deploy your app
Add a new file to your project structure. Name it whatever you want, I named mine something like app-env.yaml. It will look something like:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: jenkins
spec:
  template:
    spec:
      containers:
        - name: jenkins
          env:
            - name: JENKINS_OPTS
              value: --prefix=/jenkins
This injects the --prefix flag, which sets the URL prefix for Jenkins, into the Jenkins container on deployment. You can add multiple env variables and inject any value you want. My example uses Jenkins-specific flags since this question centered around Jenkins, but it works for any app. Add this file to your Kustomize file from earlier:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namePrefix: kustomize-
resources:
  - app.yaml
patchesStrategicMerge:
  - app-env.yaml
When your app is deployed via K8s, it will run the startup process for your app, while passing the values defined in your env file. Hope this helps anyone else.
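To check that the patch merges as intended before handing it to ArgoCD, you can render the output locally with `kubectl kustomize .`. Assuming app.yaml defines the jenkins Deployment, the rendered Deployment should contain the env entry, roughly like this excerpt (a sketch; all other fields come from app.yaml):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kustomize-jenkins   # namePrefix applied
spec:
  template:
    spec:
      containers:
        - name: jenkins
          env:
            - name: JENKINS_OPTS
              value: --prefix=/jenkins
```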

Application not showing in ArgoCD when applying yaml

I am trying to setup ArgoCD for gitops. I used the ArgoCD helm chart to deploy it to my local Docker Desktop Kubernetes cluster. I am trying to use the app of apps pattern for ArgoCD.
The problem is that when I apply the yaml to create the root app, nothing happens.
Here is the yaml (created by the command helm template apps/ -n argocd from my public repo https://github.com/gajewa/gitops):
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: root
  finalizers:
    - resources-finalizer.argocd.argoproj.io
spec:
  destination:
    server: https://kubernetes.default.svc
    namespace: argocd
  project: default
  source:
    path: apps/
    repoURL: https://github.com/gajewa/gitops.git
    targetRevision: HEAD
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
The resource is created, but nothing actually happens in the Argo UI. No application is visible. So I tried to create the app via the web UI, even pasting the YAML in there. The application is created in the web UI, and it seems to synchronise and see the repo with the YAML templates of Prometheus and Argo, but it doesn't actually create the Prometheus application in ArgoCD. And the Prometheus part of the root app is forever progressing.
Here are some screenshots:
The main page with the root application (where also argo-cd and prometheus should be visible but aren't):
And then the root app view where something is created for each template but Argo seems that it can't create kubernetes deployments/pods etc from this:
I thought maybe the CRD definitions are not present in the k8s cluster but I checked and they're there:
λ kubectl get crd
NAME                       CREATED AT
applications.argoproj.io   2021-10-30T16:27:07Z
appprojects.argoproj.io    2021-10-30T16:27:07Z
I've run out of things to check as to why the apps aren't actually deployed. I was following this tutorial: https://www.arthurkoziel.com/setting-up-argocd-with-helm/
The problem is that you have to add the code below to the metadata in your manifest file.
Just change the namespace to the one ArgoCD was deployed in (the default is argocd):
metadata:
  namespace: argocd
From another SO post:
https://stackoverflow.com/a/70276193/13641680
It turns out that at the moment ArgoCD can only recognize application declarations made in the ArgoCD namespace.
Related GitHub Issue
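Putting the fix together with the original manifest, the root Application would declare its own namespace explicitly (everything else unchanged from the question):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: root
  namespace: argocd   # must match the namespace ArgoCD is deployed in
  finalizers:
    - resources-finalizer.argocd.argoproj.io
spec:
  destination:
    server: https://kubernetes.default.svc
    namespace: argocd
  project: default
  source:
    path: apps/
    repoURL: https://github.com/gajewa/gitops.git
    targetRevision: HEAD
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
```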

Single Container Pod yaml

Forgive my ignorance, but I can't seem to find a way to use a YAML file to deploy a single-container pod (read: kind: Pod). It appears the only way to do it is to use a deployment YAML file (read: kind: Deployment) with a replica count of 1.
Is there really no way?
The reason I ask is that it would be nice to put everything in source control, including one-offs like databases.
It would be awesome if there were a site with all the available options you can use in a YAML file (like Vagrant's Vagrantfile). There isn't one, right?
Thanks!
You should be able to find pod yaml files easily. For example, the documentation has an example of a Pod being created.
apiVersion: v1
kind: Pod
metadata:
  name: hello-world
spec:  # specification of the pod's contents
  restartPolicy: Never
  containers:
    - name: hello
      image: "ubuntu:14.04"
      command: ["/bin/echo", "hello", "world"]
One thing to note is that if a deployment or a replica set can create a resource on your behalf, there is no reason why you couldn't do the same yourself.
kubectl get pod <pod-name> -o yaml should give you the YAML spec of a created pod.
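If you'd rather not write the spec by hand, kubectl can also generate a minimal single-container Pod skeleton for you. For example, `kubectl run hello-world --image=ubuntu:14.04 --restart=Never --dry-run=client -o yaml` prints a Pod manifest without touching the cluster, roughly like this (kubectl also emits a few empty fields such as `status: {}`):

```yaml
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: hello-world
  name: hello-world
spec:
  containers:
    - image: ubuntu:14.04
      name: hello-world
  restartPolicy: Never
```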
There are also Kubernetes charts, which serve as a repository of configuration for complex applications, using the Helm package manager. This would serve you well for deploying more complex applications.
Never mind, figured it out. It's possible: you just use the multi-container YAML file (example found here: https://kubernetes.io/docs/user-guide/pods/multi-container/) but specify only one container.
I'd tried it before but had inadvertently mistyped the yaml formatting.
Thanks rubber ducky!