How to get a kubernetes service as a spec yaml after it has been `exposed`? - kubernetes

I'm very new to kubernetes and I find it a bit confusing, so to understand it I'd like to know what exactly kubectl expose deployment xxx --type=LoadBalancer --name=xxx does. I was wondering if it is possible to extract this service to a yaml spec definition somehow.
I understand that I'm creating a service, but I'm not sure how it figures out all the ports automatically. I'd like to have the same thing in a file so I can run it like kubectl apply -f ./service.yaml.

kubectl expose does not assign ports automatically. It only uses a port if a port definition is present in the deployment yaml; otherwise it will give an error like:
error: couldn't find port via --port flag or introspection
To assign the ports at expose time, use:
kubectl expose deployment xxx --type=NodePort --name=xxx --port=80 --target-port=8080
You can get the yaml by running this:
kubectl get service xxx -o yaml
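For reference, the exported manifest will look roughly like the sketch below; the name, selector and ports come from your deployment and the expose flags, and cluster-generated fields such as clusterIP, uid and resourceVersion (not shown) will differ and can be stripped before re-applying with kubectl apply -f ./service.yaml:
apiVersion: v1
kind: Service
metadata:
  name: xxx                  # from --name
spec:
  type: LoadBalancer         # from --type
  selector:
    app: xxx                 # copied from the deployment's selector labels
  ports:
  - protocol: TCP
    port: 80                 # from --port
    targetPort: 8080         # from --target-port, or the containerPort found by introspection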

Related

k8s create service with port, targetPort and nodePort identical

I need to create Kubernetes services whose nodePort is auto-allocated by K8S, and port/targetPort must be equal to the nodePort. (The requirement comes from the spec of a Spark YARN node as the backend of the services.)
Maybe I can first create the service with a fixed dummy port/targetPort and an auto-allocated nodePort, then update the service to set port/targetPort to the same value as the nodePort.
But is there any better way of doing this?
There are two main ways to expose a resource on k8s.
The first one is the kubectl expose command: with this one you can choose the pod/deploy to expose but not the nodePort value. Then, as you already know, you must set the nodePort value on the created yaml.
Another way to expose is the kubectl create service nodeport command: with this one you can set the port, target port and nodePort.
If you know the label of the pod to expose (for example app: superPod), you can create a file using a placeholder (for example TOREPLACE) and your chosen port value (for example 30456), then replace the placeholder:
On linux:
portValue=30456 && kubectl create service nodeport TOREPLACE \
--node-port=$portValue --tcp=$portValue --dry-run=client -oyaml > file.yaml \
&& sed -i 's/app: TOREPLACE/app: yourselector/g' file.yaml \
&& sed -i 's/name: TOREPLACE/name: yourselector-name/g' file.yaml
This will create the file with the preferred values.
After that, you can apply the file using kubectl apply -f file.yaml
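For illustration, with the example values above (port 30456, selector app: yourselector, name yourselector-name), the generated file.yaml should end up roughly like this; exact field names may vary slightly with your kubectl version:
apiVersion: v1
kind: Service
metadata:
  name: yourselector-name
  labels:
    app: yourselector
spec:
  type: NodePort
  selector:
    app: yourselector
  ports:
  - name: "30456"
    protocol: TCP
    port: 30456          # service port
    targetPort: 30456    # container port
    nodePort: 30456      # fixed instead of auto-allocated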
However, depending on your needs, if you want reliable customization of your resources, you could try to use:
https://kubernetes.io/docs/tasks/manage-kubernetes-objects/kustomization/
or
https://helm.sh/docs/
Hope it helps.

oc get deployment is returning No resources found

"oc get deployment" command is returning "No resources Found" as the result.
Even if I put an option of assigning or defining the namespace using -n as the option to above command, I am getting the same result.
Whereas, I am getting the correct result of oc get pods command.
Meanwhile, the oc version is
oc - v3.6.0
kubernetes - v1.6.1
openshift - v3.11.380
Check if you are connected to the correct kubernetes environment (especially if you're running more than one).
If that is correct, I guess either you don't have any deployments at all, or the deployments are in a different namespace than you think.
Try out listing all deployments:
oc get deployments -A
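If you are not sure which cluster or project you are pointed at, standard oc commands like these can confirm it:
oc whoami --show-server    # which API server the client is talking to
oc project                 # which project (namespace) is currently selected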
There are other objects that create pods, such as a StatefulSet or a DaemonSet. Because it is OpenShift, my feeling is that the pods were created by a DeploymentConfig, which is a popular way to create applications there.
Anyway, you can find out which object is the owner of the pods by looking at the pod's ownerReferences. This command should work:
oc get pod -o yaml <podname> | grep ownerReference -A 6
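The grep output would look something like this illustrative snippet (names and values are assumptions, uid omitted):
  ownerReferences:
  - apiVersion: v1
    blockOwnerDeletion: true
    controller: true
    kind: ReplicationController
    name: myapp-1        # illustrative
If the kind is ReplicationController with a name like myapp-1, the pod most likely comes from the DeploymentConfig myapp, and you can list those with oc get dc -n <namespace> instead of oc get deployment.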

Kong Gateway using Kubernetes

Trying to deploy Kong gateway via Kubernetes:
Created a namespace: kong-helm
Applied yaml files (using kubectl on the kong-helm namespace) which include: configmap.yaml, service.yaml, secret.yaml, ingress.yaml.
Upon applying the dbless.yaml (https://raw.githubusercontent.com/Kong/kubernetes-ingress-controller/master/deploy/single/all-in-one-dbless.yaml), the ingress dbless pod is running.
With kubectl get svc --all-namespaces I am able to see that the service (kong-test-poc) is created.
But when port forwarding is attempted with kubectl port-forward service/kong-test-poc 80:8080
I get the following error: Error from server (NotFound): services "kong-test-poc" not found
Can you please tell me how to rectify this error?
I believe you are missing the specific namespace where the service is running, so the command is going to your default namespace.
kubectl -n kong-helm port-forward service/kong-test-poc 8080:8080
I also recommend using a different local port than 80, as this is a privileged port on Unix. Also make sure that kong-test-poc is configured to listen on 8080 (you didn't post the definition).
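Since the service definition wasn't posted, here is a rough sketch of what kong-test-poc would need to look like for that port-forward to work; the selector label and container port are assumptions:
apiVersion: v1
kind: Service
metadata:
  name: kong-test-poc
  namespace: kong-helm
spec:
  selector:
    app: kong-test-poc       # assumed pod label
  ports:
  - name: proxy
    protocol: TCP
    port: 8080               # the service port used by port-forward
    targetPort: 8080         # the port the Kong container listens on (assumed)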

kubernetes to openshift equivalent command

In kubernetes I have this command:
kubectl create deployment nginx --image=ewoutp/docker-nginx-curl -n web
What should I run if I want to create this inside an OpenShift cluster?
I tried this
oc create deployment nginx --image=ewoutp/docker-nginx-curl -n web
I am getting this error:
error: no matches for extensions/, Kind=Deployment
Can someone help me?
It might indicate that your OpenShift cluster is not running. Check oc status to view the status of your current project. If it is not running, you should create a new project.
If your cluster is running, you can run oc create deployment nginx --image=ewoutp/docker-nginx-curl -n web -o yaml to verify whether the apiVersion is correct. The currently used version is apps/v1. If it is incorrect, you can save the output to a file and edit it to match the current version.
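As a sketch of what the edited file would contain for this example (container name and labels are assumptions), something like the following apps/v1 manifest could then be applied with oc apply -f nginx.yaml -n web:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  namespace: web
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: docker-nginx-curl     # assumed container name
        image: ewoutp/docker-nginx-curl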

Check the working of a service in kubernetes

I created a pod to test my service in kubernetes, but I didn't get anything back. Here are my commands:
kubectl run --generator=run-pod/v1 nginx-resolver --image=nginx
kubectl expose pod nginx-resolver --name=nginx-resolver-service --port=80 --target-port=80 --type=ClusterIP
kubectl run --generator=run-pod/v1 test-nslookup --image=busybox:1.28 --rm -it -- nslookup nginx-resolver-service
Please help me understand why. Thanks.
Run the following command and see what you are getting wrong about this cmd:
$ kubectl run --help
Create and run a particular image, possibly replicated.
Creates a deployment or job to manage the created container(s).
Examples:
# Start a single instance of nginx.
kubectl run nginx --image=nginx
# Start a single instance of hazelcast and let the container expose port 5701 .
kubectl run hazelcast --image=hazelcast --port=5701
...
So, the kubectl run cmd creates a deployment or a job.
If it is a deployment, it creates (a) first, a replicaset, (b) then Pod(s).
If it is a job, it creates a Pod.
But you are trying to expose a Pod whose name is not the correct one. You can see the name of the Pod(s) created by the cmd kubectl run:
$ kubectl get pods --namespace=<namespace> | grep "nginx-resolver"
$ kubectl get pods --namespace=<namespace> | grep "test-nslookup"
Then use those names to expose the Pod.
You can optionally expose your Deployment instead. To do so, see its help text. Run:
$ kubectl expose deployment --help
Expose a resource as a new Kubernetes service.
Looks up a deployment, service, replica set, replication controller or pod by name and uses the selector for that
resource as the selector for a new service on the specified port. A deployment or replica set will be exposed as a
service only if its selector is convertible to a selector that service supports, i.e. when the selector contains only
the matchLabels component. Note that if no port is specified via --port and the exposed resource has multiple ports, all
will be re-used by the new service. Also if no labels are specified, the new service will re-use the labels from the
resource it exposes.
Possible resources include (case insensitive):
pod (po), service (svc), replicationcontroller (rc), deployment (deploy), replicaset (rs)
Examples:
...
# Create a service for an nginx deployment, which serves on port 80 and connects to the containers on port 8000.
kubectl expose deployment nginx --port=80 --target-port=8000
...
If you want to see the log interactively, you need to set the --restart option of your test-nslookup pod to Never or OnFailure. Otherwise, kubernetes will just restart your pod indefinitely and you won't see anything.
So your last command should be:
kubectl run --generator=run-pod/v1 test-nslookup --image=busybox:1.28 -it --restart=OnFailure -- nslookup nginx-resolver-service
Why?
Probably because of this issue.
It seems there is a delay of 5s before kubectl run actually prints something.
So, in order to do it without changing the restart option, you'll need to change your command like this (note the sleep 7: you'll have to wait 7 seconds before seeing the logs):
kubectl run --generator=run-pod/v1 test-nslookup --image=busybox:1.28 -it --rm -- sh -c 'sleep 7; nslookup nginx-resolver-service'
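If the DNS lookup succeeds, you can also check the service end to end by fetching the nginx default page from inside the cluster, for example with the same busybox image (the pod name test-curl is just an example):
kubectl run --generator=run-pod/v1 test-curl --image=busybox:1.28 --rm -it --restart=Never \
  -- wget -qO- http://nginx-resolver-service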