How to install kubernetes/ingress-nginx using kubectl (not helm)?

I would like to install kubernetes/ingress-nginx using kubectl apply -f when I deploy to AKS (Azure), but I cannot figure out how.
I know that I can do kubectl apply -f https://github.com/kubernetes/ingress-nginx/blob/master/deploy/static/provider/cloud/deploy.yaml
but the problem is that this manifest is essentially non-configurable and bundles a lot of resources.
Any ideas? I don't want to start editing and customizing deploy.yaml in its current form, as it's super ugly. It is an option, but does anyone have a better idea?
I know that I can use helm, and that's what the current production version uses, but for some reason I need to try to move to kubectl apply -f.
Thanks in advance.

Have a look at Kustomize
https://kubernetes.io/blog/2018/05/29/introducing-kustomize-template-free-configuration-customization-for-kubernetes/
https://github.com/kubernetes-sigs/kustomize/tree/master/examples/helloWorld
It was made exactly for your use case.
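For example, a minimal kustomization.yaml could pull in the upstream manifest as a base and patch only the fields you care about. This is just a sketch: the raw download URL, the replica count, and the ingress-nginx-controller name/namespace are assumptions that must match what the deploy.yaml of your version actually declares.
# Download the upstream manifest once (raw URL assumed; adjust to your version)
curl -Lo deploy.yaml https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/provider/cloud/deploy.yaml

# kustomization.yaml
resources:
  - deploy.yaml
patchesStrategicMerge:
  - controller-replicas.yaml

# controller-replicas.yaml -- hypothetical patch overriding only the replica count
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ingress-nginx-controller    # must match the name used in deploy.yaml
  namespace: ingress-nginx          # must match the namespace used in deploy.yaml
spec:
  replicas: 2
You then apply it with kubectl apply -k . (kubectl 1.14+), so you never have to hand-edit deploy.yaml itself.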

I would suggest using helm3 for installing packages (it allows configuration with maintainability as the main aim). You can look at the helm chart for nginx-ingress (https://github.com/helm/charts/tree/master/stable/nginx-ingress) and configure its parameters as well.
Note that there are multiple helm charts for nginx-ingress. You can choose whichever works best for you (one is community maintained and the other is nginx maintained).
Edit: helm template can be used to render the yaml, which can then be applied directly with kubectl. Moreover, helm3 works without any server-side component.
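For example, assuming the community-maintained ingress-nginx chart and its public repo (release name, namespace, and the replicaCount value below are illustrative):
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm template my-ingress ingress-nginx/ingress-nginx --namespace ingress-nginx --set controller.replicaCount=2 > ingress-nginx.yaml
kubectl create namespace ingress-nginx
kubectl apply -f ingress-nginx.yaml
helm template renders the chart entirely client-side, so you keep helm's configurability while still deploying with kubectl apply -f.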

provide custom command line options for kubectl for kubernetes operator

I have one kubernetes operator (ex: kubectl get oracle_ctrl). Now I want to provide custom arguments for the kubectl command.
ex: kubectl apply oracle_ctrl --auto-discover=true --name=vcn1
I could write one more controller to do the same job, but I don't want to write another controller; I want to make use of the existing one.
Is it possible to use operator-sdk to provide custom args to kubectl?
No, this isn't possible.
kubernetes/kubectl#914 has a little further discussion of this, but its essential conclusion is "we should start the proposal and design process to eventually write something better than kubectl create to support it". Your CRD can define additional columns to be shown in kubectl get (sketched below), but this is really the only kubectl-related extension point. You could potentially create a kubectl plugin or another CLI tool that does what you need.
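For the extra columns, the CRD itself declares them via additionalPrinterColumns. A hedged sketch, assuming a hypothetical example.com group and spec fields named after your flags:
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: oraclectrls.example.com          # hypothetical CRD name
spec:
  group: example.com
  names:
    kind: OracleCtrl
    plural: oraclectrls
  scope: Namespaced
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          x-kubernetes-preserve-unknown-fields: true
      additionalPrinterColumns:          # extra columns shown by `kubectl get oraclectrls`
        - name: AutoDiscover
          type: boolean
          jsonPath: .spec.autoDiscover   # assumed spec field
        - name: Name
          type: string
          jsonPath: .spec.name           # assumed spec field
Note this only affects how kubectl get displays your resources; it does not add new flags to kubectl apply.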
Rather than using the kubectl imperative tools, it's often a better practice to directly write YAML artifacts and commit them to source control. You can parameterize these using tools like Helm or Kustomize. If kubectl apply -f or helm install is your primary way of loading things into the cluster, then you don't need custom CLI options to make this work.

How to view the manifest file used to create a Kubernetes resource?

I have K8s deployed on an EC2-based cluster.
There is an application running in the deployment, and I am trying to figure out the manifest files that were used to create the resources.
Deployment, service and ingress files were used to create the app setup.
I tried the following command, but I'm not sure if it's the correct one, as it also returns a lot of extra data like lastTransitionTime, lastUpdateTime and status:
kubectl get deployment -o yaml
What is the correct command to view the manifest yaml files of an existing deployed resource?
There is no specific way to do that. You should store your source files in source control like any other code. Think of it like decompiling: you can do it, but what you get back is not the same as what you put in. That said, check for the last-applied-configuration annotation; if you used kubectl apply, it holds a JSON version of a more original-ish manifest, though probably still with some defaulted fields.
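If the resource was created with kubectl apply, you can read that annotation back directly (the deployment and namespace names below are placeholders):
kubectl apply view-last-applied deployment/YOUR-DEPLOYMENT -n YOUR-NAMESPACE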
You can try using the --export flag, but it is deprecated and may not work perfectly.
kubectl get deployment -o yaml --export
Refer: https://github.com/kubernetes/kubernetes/pull/73787
Another trick is to run kubectl edit with cat as the "editor", which just dumps the object to stdout without changing anything, e.g.:
KUBE_EDITOR="cat" kubectl edit secrets rook-ceph-mon -o yaml -n rook-ceph 2>/dev/null >user.yaml

Kubernetes CSI driver upgrade

We are developing a k8s CSI driver.
Currently, in order to upgrade the driver, we delete the installed operator pods, CRDs and roles and recreate them from the new version's images.
What is the suggested way to do an upgrade? Or is uninstall/install the suggested method?
I couldn't find any relevant information.
We also support installing from OpenShift. Is there any difference regarding upgrades on OpenShift?
You should start from this documentation:
This page describes to CSI driver developers how to deploy their
driver onto a Kubernetes cluster.
Especially:
Deploying a CSI driver onto Kubernetes is highlighted in detail in
Recommended Mechanism for Deploying CSI Drivers on Kubernetes.
You will also find all the necessary info there, along with an example.
Your question lacks some details regarding your use case, but I strongly recommend starting from the guide linked above.
Please let me know if that helps.
CSI drivers can differ, but I believe the best approach is to do a rolling update of your plugin's DaemonSet. It will happen automatically once you apply the new DaemonSet configuration, e.g. a newer docker image.
For more details, see https://kubernetes.io/docs/tasks/manage-daemon/update-daemon-set/
For example:
kubectl get -n YOUR-NAMESPACE daemonset YOUR-DAEMONSET --export -o yaml > plugin.yaml
vi plugin.yaml # Update your image tag(s)
kubectl apply -n YOUR-NAMESPACE -f plugin.yaml
A shorter way to update just the image:
kubectl set image ds/YOUR-DAEMONSET-NAME YOUR-CONTAINER-NAME=YOUR-IMAGE-URL:YOUR-TAG -n YOUR-NAMESPACE
Note: I found that I also needed to restart (kill) the pod with the external provisioner. There's probably a more elegant way to handle this, but it works in a pinch.
kubectl delete pod -n YOUR-NAMESPACE YOUR-EXTERNAL-PROVISIONER-POD
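On kubectl 1.15+ a slightly cleaner option is the rollout commands: watch the DaemonSet update and, assuming your external provisioner runs in a Deployment (or StatefulSet), restart it in place instead of deleting the pod by hand. Names below are placeholders:
kubectl rollout status -n YOUR-NAMESPACE daemonset/YOUR-DAEMONSET
kubectl rollout restart -n YOUR-NAMESPACE deployment/YOUR-EXTERNAL-PROVISIONER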

What is the recommended alternative to kubectl '--generator' option?

One of the points in the kubectl best practices section of the Kubernetes docs states:
Pin to a specific generator version, such as kubectl run
--generator=deployment/v1beta1
But a little further down in the doc, we learn that, except for Pod, the use of the --generator option is deprecated and that it will be removed in future versions.
Why is this being done? Doesn't the generator make it easier to create a template file for the resource definition of a deployment, service, and other resources? What alternative is the kubernetes team suggesting? That isn't in the docs :(
kubectl create is the recommended alternative if you want to create more than just a pod (like a deployment).
https://kubernetes.io/docs/reference/kubectl/conventions/#generators says:
Note: kubectl run --generator except for run-pod/v1 is deprecated in v1.12.
This pull request has the reason why generators (except run-pod/v1) were deprecated:
The direction is that we want to move away from kubectl run because it's over bloated and complicated for both users and developers. We want to mimic docker run with kubectl run so that it only creates a pod, and if you're interested in other resources kubectl create is the intended replacement.
For deployment you can try
kubectl create deployment hello-node --image=gcr.io/hello-minikube-zero-install/hello-node
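If what you want from the generator is a starting template file, kubectl create with a client-side dry run produces the same kind of YAML (on kubectl older than 1.18 the flag is plain --dry-run instead of --dry-run=client):
kubectl create deployment hello-node --image=gcr.io/hello-minikube-zero-install/hello-node --dry-run=client -o yaml > deployment.yaml
kubectl apply -f deployment.yaml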

How to bind kubernetes resource to helm release

If I run kubectl apply -f <some statefulset>.yaml separately, is there a way to bind the StatefulSet to a previous helm release (e.g. by specifying some tags in the yaml file)?
As far as I know, you cannot do it.
You can always create resources via templates before installing the Helm chart, but I have never seen a way to attach an already-applied resource to an existing release.