Kubernetes: how to deploy a pod created by a configuration file?

I created a pod from a configuration file that defines a volume and a privileged security context.
How can I deploy this pod?
I tried deploying it with kubectl run and with a deployment configuration file, but that creates a new pod without my volume and privileged security context.
Best regards,
Daniel

Use these commands to create and verify the pod.
This will create the pod
# kubectl create -f abc-pod.yml
This will list the running pods
# kubectl get pods
This will show the details of that pod
# kubectl describe pod <pod_name>
This will show the logs of a container in the pod
# kubectl logs <pod_name> -c <container_name>
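For reference, a manifest of the kind the question describes (roughly what abc-pod.yml above might contain) could look like the sketch below. The pod name, image, and paths here are placeholders, not the asker's actual file:
apiVersion: v1
kind: Pod
metadata:
  name: abc-pod
spec:
  containers:
  - name: app
    image: nginx                  # placeholder image
    securityContext:
      privileged: true            # the privileged setting the question mentions
    volumeMounts:
    - name: data
      mountPath: /data            # placeholder mount path
  volumes:
  - name: data
    hostPath:
      path: /tmp/data             # placeholder host path
kubectl run does not expose volumes or securityContext settings through its ordinary flags, which is why pods created that way come up without them; kubectl create -f (or kubectl apply -f) uses the manifest exactly as written.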

Related

How to promote a pod to a deployment for scaling

I'm running the example in the "Service Discovery" chapter of the book "Kubernetes: Up and Running". The original command to run a deployment is kubectl run alpaca-prod --image=gcr.io/kuar-demo/kuard-amd64:blue --replicas=3 --port=8080 --labels="ver=1,app=alpaca,env=prod"; however, in Kubernetes 1.25 the --replicas parameter of the run command is no longer supported. I planned to run without replicas and then use kubectl scale to scale the deployment later. The problem is that the run command only creates a pod, not a deployment (and the scale command expects a deployment). So how do I promote my pod to a deployment? My Kubernetes version is 1.25.
There is no way to promote it. You could change labels and so on, but instead of that it is easier to create a new deployment and delete the existing pod.
As an easy first step, dump the existing running pod to a YAML file:
kubectl get pod <POD name> -o yaml > pod-spec.yaml
Now create a deployment spec YAML file:
kubectl create deployment deploymentname --image=imagename --dry-run=client -o yaml > deployment-spec.yaml
Edit the deployment-spec.yaml file, and with pod-spec.yaml open in another tab, copy the spec section from the pod file into the pod template of the new deployment file, as sketched below.
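As a rough sketch of the end result (the deployment name and app label are just the defaults kubectl create deployment would generate; the image and port come from the book's command above), the edited deployment-spec.yaml would look something like this, with the pod's spec pasted under spec.template.spec:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deploymentname
spec:
  replicas: 3
  selector:
    matchLabels:
      app: deploymentname
  template:
    metadata:
      labels:
        app: deploymentname        # must also match any Service selector you rely on
    spec:
      # paste the spec section from pod-spec.yaml here, dropping
      # node-specific fields such as nodeName, status and runtime metadata
      containers:
      - name: kuard
        image: gcr.io/kuar-demo/kuard-amd64:blue
        ports:
        - containerPort: 8080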
Once deployment-spec.yaml is ready you can apply it. If a service is involved, make sure its selector labels match the labels on the deployment's pod template:
kubectl apply -f deployment-spec.yaml
Finally, delete the single running pod:
kubectl delete pod <POD name>

Not able to access an application running on a Kubernetes pod (using Docker Desktop single-node cluster)

The service is running, but I get an error while trying to access it.
[Screenshots of the service, the access error, the kubectl get pods output, and the Service and Deployment YAML files were attached to the question but are omitted here.]
Check the pod status to see whether it is running or not.
Also, you can try port-forwarding to the pod:
kubectl port-forward <pod_name> 8086:8086
and then open http://localhost:8086.
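If forwarding to the pod works, you can also try forwarding to the service itself and hitting it with curl. The port 8086 is only taken from the command above; substitute your own service name and port:
kubectl port-forward service/<service_name> 8086:8086
curl http://localhost:8086
If the pod responds through a port-forward but not through the service, the usual culprits are a service selector that does not match the pod's labels, or a targetPort that does not match the container's port.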

kubectl create doesn't seem to do anything

I am running the command
kubectl create -f mypod.yaml --namespace=mynamespace
as I need to specify the environment variables through a ConfigMap I created and referenced in the mypod.yaml file. Kubernetes returns
pod/mypod created
but kubectl get pods doesn't show it in my list of pods, and I can't access it by name, as if it did not exist. However, if I try to create it again, it says that the pod is already created.
What may cause this, and how would I diagnose the problem?
By default, kubectl commands operate in the default namespace. But you created your pod in the mynamespace namespace.
Try one of the following:
kubectl get pods -n mynamespace
kubectl get pods --all-namespaces
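If you are going to keep working in that namespace, you can also make it the default for your current kubectl context so the -n flag is no longer needed:
kubectl config set-context --current --namespace=mynamespace
kubectl get pods        # now lists pods in mynamespace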

Unable to deploy MariaDB on Kubernetes using openstack-helm charts

I am trying to deploy OpenStack on Kubernetes using Helm charts. I am seeing the error below when trying to deploy MariaDB. mariadb-server-0 looks for a PVC which is in the Lost state. I tried creating the PersistentVolume and assigning it, but the pod still looks for the lost PVC, as shown in the error below.
2018-10-05T17:05:04.087573+00:00 node2: kubelet[9897]: E1005 17:05:04.087449 9897 desired_state_of_world_populator.go:273] Error processing volume "mysql-data" for pod "mariadb-server-0_openstack(c259471b-c8c0-11e8-9636-441ea14dfc98)": error processing PVC "openstack"/"mysql-data-mariadb-server-0": PVC openstack/mysql-data-mariadb-server-0 has non-bound phase ("Lost") or empty pvc.Spec.VolumeName ("pvc-74e81ef0-bb97-11e8-9636-441ea14dfc98")
Is there a way to delete the old PVC entry from the cluster, so MariaDB doesn't look for it when deploying?
Thanks,
Ab
To delete a PVC, you can just use the typical kubectl commands.
See all the PVCs:
kubectl -n <namespace> get pvc
To delete PVCs:
kubectl -n <namespace> delete pvc <pvc-id-from-the-previous-command>
Similarly, I would check the PVs to see if there are any dangling PVs. Note that PVs are cluster-scoped, so no namespace flag is needed.
See all the PVs:
kubectl get pv
To delete PVs:
kubectl delete pv <pv-id-from-the-previous-command>
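Applied to the error above, a plausible sequence is sketched below. It assumes the MariaDB pod is managed by a StatefulSet whose volumeClaimTemplate will request a fresh PVC once the lost one and the pod are deleted (check your chart before deleting anything):
kubectl -n openstack delete pvc mysql-data-mariadb-server-0
kubectl -n openstack delete pod mariadb-server-0
# the StatefulSet controller recreates the pod, which binds a newly created PVC
kubectl -n openstack get pvc,pods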

How to kill pods on Kubernetes local setup

I am starting to explore running Docker containers with Kubernetes. I did the following:
Docker run etcd
docker run master
docker run service proxy
kubectl run web --image=nginx
To clean up the state, I first stopped all the containers and cleared the downloaded images. However, I still see pods running.
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
web-3476088249-w66jr 1/1 Running 0 16m
How can I remove this?
To delete the pod:
kubectl delete pods web-3476088249-w66jr
If this pod was started via some ReplicaSet or Deployment or anything else that creates replicas, then find that resource and delete it first.
kubectl get all
This will list all the resources that have been created in your k8s cluster. To get information about resources created in a specific namespace, use kubectl get all --namespace=<your_namespace>.
To get info about the resource that is controlling this pod, you can do
kubectl describe pod web-3476088249-w66jr
There will be a "Controlled By" field (or a similar owner field) that identifies which resource created it.
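As a concrete sketch using the pod name from the question (the deployment name web is inferred from the kubectl run web command above, so verify it with kubectl get deploy first):
kubectl describe pod web-3476088249-w66jr | grep "Controlled By"
kubectl get deploy
kubectl delete deploy web
kubectl get pods        # the pod should now terminate instead of being recreated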
When you do kubectl run ... (with the older kubectl version in use here), that creates a deployment, not a pod directly. You can check this with kubectl get deploy. If you want to delete the pod, you need to delete the deployment with kubectl delete deploy DEPLOYMENT.
I would recommend creating a namespace for testing when doing this kind of thing. You just do kubectl create ns test, then do all your tests in that namespace (by adding -n test to your commands). Once you have finished, you just do kubectl delete ns test, and you are done.
If you defined your object as a Pod, then
kubectl delete pod <--all | pod name>
will remove the generated pod(s). But if you wrapped your Pod in a Deployment object, then running the command above will only trigger their re-creation.
In that case, you need to run
kubectl delete deployment <--all | deployment name>
Note that this does not also remove a Service object pointing at the deleted Deployment; a Service is a separate resource and has to be deleted on its own.
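For example, if you exposed the deployment with kubectl expose (assuming here that the Service shares the Deployment's name), the full cleanup would be roughly:
kubectl delete deployment web
kubectl get services              # check whether a Service was created for it
kubectl delete service web        # only if such a Service exists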