Is there a command to check which pods have a service applied - kubernetes

I have made a service.yaml and have created the service.
kind: Service
apiVersion: v1
metadata:
  name: cass-operator-service
spec:
  type: LoadBalancer
  ports:
    - port: 9042
      targetPort: 9042
  selector:
    name: cass-operator
Is there a way to check which pods the service has been applied to?
Using the above service, I want to connect to a cluster in Google Cloud running Kubernetes/Cassandra on external_ip:9042. But with the above service, I am not able to.
kubectl get svc shows
NAME                    TYPE           CLUSTER-IP     EXTERNAL-IP     PORT(S)          AGE
cass-operator-service   LoadBalancer   10.51.247.82   34.91.214.233   9042:31902/TCP   73s
So probably the service is listening on 9042 but is forwarding to port 31902. I want both ports to be 9042. Is that possible to do?

The best way is to follow labels and selectors.
Your Pod has a labels section, and the Service uses it in its selector section; there are some examples in:
https://kubernetes.io/docs/concepts/services-networking/service/
You can find the selectors of your service with:
kubectl describe svc cass-operator-service
You can list your labels with:
kubectl get pods --show-labels
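For example, with the Service above, the output of the two commands should line up like this (pod names and ages here are illustrative):
kubectl describe svc cass-operator-service | grep Selector
Selector:  name=cass-operator
kubectl get pods --show-labels
NAME              READY   STATUS    RESTARTS   AGE   LABELS
cass-operator-0   1/1     Running   0          5m    name=cass-operator
The Service routes to exactly those pods whose labels match its selector.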

You can get the pods by querying with the selector, like the following:
kubectl get pods -l name=cass-operator
You can also list all pods which are serving traffic behind a kubernetes service by running
kubectl get ep <service name> -o=jsonpath='{.subsets[*].addresses[*].ip}' | tr ' ' '\n' | xargs -I % kubectl get pods -o=name --field-selector=status.podIP=%
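A simpler sanity check, if you don't need the pod names, is to look at the service's Endpoints object directly; it lists the pod IPs the service currently routes to (output is illustrative):
kubectl get ep cass-operator-service
NAME                    ENDPOINTS        AGE
cass-operator-service   10.48.0.5:9042   73s
An empty ENDPOINTS column means the selector matches no ready pods.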

Related

k3s - Can't access my service based on service name

I have created a service like this:
apiVersion: v1
kind: Service
metadata:
  name: amen-sc
spec:
  ports:
    - name: http
      port: 3030
      targetPort: 8000
  selector:
    component: scc-worker
I am able to access this service from within pods in the same cluster (and namespace) using the IP address I get from kubectl get svc, but I am not able to access it using the service name, e.g. curl amen-sc:3030.
Please advise what could possibly be wrong.
I intend to expose certain pods, only within my cluster and access them using the service-name:port format.
Make sure you have the DNS service configured and its corresponding pods running:
kubectl get svc -n kube-system -l k8s-app=kube-dns
and
kubectl get pods -n kube-system -l k8s-app=kube-dns
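As a quick end-to-end test you can also resolve the service name from a throwaway pod (the busybox:1.28 image is a common choice here, as its nslookup is known to behave well):
kubectl run -it --rm dns-test --image=busybox:1.28 --restart=Never -- nslookup amen-sc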

Expose port from container in a pod minikube kubernetes

I'm new to K8s. I'm trying Minikube with 2 containers running in a pod, created with this command:
kubectl apply -f deployment.yaml
and this deployment.yaml:
apiVersion: v1
kind: Pod
metadata:
  name: site-home
spec:
  restartPolicy: Never
  volumes:
    - name: v-site-home
      emptyDir: {}
  containers:
    - name: site-web
      image: site-home:1.0.0
      ports:
        - containerPort: 80
      volumeMounts:
        - name: v-site-home
          mountPath: /usr/share/nginx/html/assets/quotaLago
    - name: site-cron
      image: site-home-cron:1.0.0
      volumeMounts:
        - name: v-site-home
          mountPath: /app/quotaLago
I have a shared volume, so if I understand correctly I cannot use a Deployment but only Pods (maybe a StatefulSet?).
In any case I want to expose port 80 from the container site-web in the pod site-home.
In the official docs I see this for deployments:
kubectl expose deployment hello-node --type=LoadBalancer --port=8080
but I cannot use for example:
kubectl expose pod site-web --type=LoadBalancer --port=8080
any idea?
but I cannot use for example:
kubectl expose pod site-web --type=LoadBalancer --port=8080
Of course you can; however, exposing a single Pod via a LoadBalancer Service doesn't make much sense. If you have a Deployment, which typically manages a set of Pods between which the real load can be balanced, LoadBalancer does its job. However, you can still use it just for exposing a single Pod.
Note that your container exposes port 80, not 8080 (containerPort: 80 in your container specification), so you need to specify it as the target port in your Service. Your kubectl expose command may look like this:
kubectl expose pod site-web --type=LoadBalancer --port=8080 --target-port=80
If you provide only the --port=8080 flag to your kubectl expose command, it assumes that the target port's value is the same as the value of --port. You can easily check it yourself by looking at the service you've just created:
kubectl get svc site-web -o yaml
and you'll see something like this in spec.ports section:
- nodePort: 32576
  port: 8080
  protocol: TCP
  targetPort: 8080
After exposing your Pod (or Deployment) properly, i.e. using:
kubectl expose pod site-web --type=LoadBalancer --port=8080 --target-port=80
you'll see something similar:
- nodePort: 31181
  port: 8080
  protocol: TCP
  targetPort: 80
After issuing kubectl get services you should see similar output:
NAME       TYPE           CLUSTER-IP     EXTERNAL-IP     PORT(S)          AGE
site-web   LoadBalancer   <Cluster IP>   <External IP>   8080:31188/TCP   4m42s
If then you go to http://<External IP>:8080 in your browser or run curl http://<External IP>:8080 you should see your website's frontend.
Keep in mind that this solution makes sense and will be fully functional in a cloud environment which is able to provide you with a real load balancer. Note that if you declare such a Service type in Minikube, it in fact creates a NodePort service, as it is unable to provide you with a real load balancer. So your application will be available on your Node's (your Minikube VM's) IP address on a randomly selected port in the range 30000-32767 (in my example it's port 31181).
As to your question about the volume:
I've a shared volume so if I understand I cannot use deployment but
only pods (maybe stateful set?)
Yes, if you want to use an emptyDir volume specifically, it cannot be shared between different Pods (even if they were scheduled on the same node); it is shared only between containers within the same Pod. If you want to use a Deployment, you'll need to think about another storage solution, such as a PersistentVolume.
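A minimal sketch of that direction, assuming a storage class that supports the ReadWriteMany access mode (the claim name here is illustrative):
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: quota-lago-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
You would then reference it from the Deployment's volumes section with persistentVolumeClaim: {claimName: quota-lago-pvc} in place of emptyDir: {}.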
EDIT:
At first I didn't notice the error in your command:
kubectl expose pod site-web --type=LoadBalancer --port=8080
You're trying to expose a non-existing Pod, as your Pod's name is site-home, not site-web. site-web is the name of one of your containers (within your site-home Pod). Remember: we're exposing the Pod, not its containers, via a Service.
I change 80->8080 but I always come to error: kubectl expose pod site-home --type=LoadBalancer --port=8080 returns: error: couldn't retrieve selectors via --selector flag or introspection: the pod has no labels and cannot be exposed. See 'kubectl expose -h' for help and examples.
The key point here is: the pod has no labels and cannot be exposed
It looks like your Pod doesn't have any labels defined, which are required so that the Service can select this particular Pod (or a set of similar Pods sharing the same label) from among other Pods in your cluster. You need at least one label in your Pod definition. Adding a simple label name: site-web under the Pod's metadata section should help. It may look like this in your Pod definition:
apiVersion: v1
kind: Pod
metadata:
  name: site-home
  labels:
    name: site-web
spec:
  ...
Now you may even provide this label as the selector in your service; however, it should be handled automatically if you omit the --selector flag:
kubectl expose pod site-home --type=LoadBalancer --port=8080 --target-port=80 --selector=name=site-web
Remember: in Minikube a real load balancer cannot be created, and a NodePort-type service will be created instead of a LoadBalancer. The command kubectl get svc will tell you on which port (in the range 30000-32767) your application will be available.
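On Minikube there is also a convenience command that resolves this mapping for you (assuming the service is named site-home):
minikube service site-home --url
It prints the http://<minikube-ip>:<node-port> URL under which the service is reachable.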
and kubectl expose pod site-web --type=LoadBalancer --port=8080 returns: Error from server (NotFound): pods "site-web" not found. site-home is the pod, site-web is the container with the port exposed. What's the issue?
If you don't have a Pod with the name "site-web", you can expect such a message. Here you are simply trying to expose a non-existing Pod.
If I expose a port from a container, is the port automatically exposed for the pod as well?
Yes, you have the port defined in the container definition. Your Pods automatically expose all ports that are exposed by the containers within them.
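For a quick test that doesn't involve any Service at all, you can also forward a local port straight to the Pod (assuming the Pod is Running):
kubectl port-forward pod/site-home 8080:80
curl http://localhost:8080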

kubectl expose pods using selector as a NodePort service via command-line

I have a requirement to expose pods using a selector as a NodePort service from the command line. There can be one or more pods, and the service needs to include the pods dynamically as they come and go. For example,
NAMESPACE   NAME          READY   STATUS    RESTARTS   AGE
rakesh      rakesh-pod1   1/1     Running   0          4d18h
rakesh      rakesh-pod2   1/1     Running   0          4d18h
I can create a service that selects the pods I want using a service-definition yaml file:
apiVersion: v1
kind: Service
metadata:
  name: np-service
  namespace: rakesh
spec:
  type: NodePort
  ports:
    - name: port1
      port: 30005
      targetPort: 30005
      nodePort: 30005
  selector:
    abc.property: rakesh
However, I need to achieve the same via the command line. I tried the following:
kubectl -n rakesh expose pod --selector="abc.property: rakesh" --port=30005 --target-port=30005 --name=np-service --type=NodePort
and a few other variations without success.
Note: I understand that currently there is no way to specify the node port using the command line. A random port is allocated between 30000-32767. I am fine with that, as I can patch it later to the required port.
Also note: These pods are not part of a deployment unfortunately. Otherwise, expose deployment might have worked.
So, it kind of boils down to selecting pods based on selector and exposing them as a service.
My kubernetes version: 1.12.5 upstream
My kubectl version: 1.12.5 upstream
You can do:
kubectl expose $(kubectl get po -l abc.property=rakesh -o name) --port 30005 --name np-service --type NodePort
You can create a NodePort service with a specified name using the create service nodeport command:
kubectl create service nodeport NAME [--tcp=port:targetPort] [--dry-run=server|client|none]
For example:
kubectl create service nodeport myservice --node-port=31000 --tcp=3000:80
You can check the kubectl reference for more:
https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#-em-service-nodeport-em-
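Since you mentioned patching the port afterwards, this is one way to do it; a sketch assuming the service ended up as np-service in the rakesh namespace with a single port entry:
kubectl -n rakesh patch svc np-service --type='json' -p='[{"op":"replace","path":"/spec/ports/0/nodePort","value":30005}]'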

Getting a Kubernetes Ingress endpoint/IP address

Base OS : CentOS (1 master 2 minions)
K8S version : 1.9.5 (deployed using KubeSpray)
I am new to Kubernetes Ingress and am setting up 2 different services, each reachable with its own path.
I have created 2 deployments :
kubectl run nginx --image=nginx --port=80
kubectl run echoserver --image=gcr.io/google_containers/echoserver:1.4 --port=8080
I have also created their corresponding services :
kubectl expose deployment nginx --target-port=80 --type=NodePort
kubectl expose deployment echoserver --target-port=8080 --type=NodePort
My svc are:
[root@node1 kubernetes]# kubectl get svc
NAME         TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
echoserver   NodePort   10.233.48.121   <none>        8080:31250/TCP   47m
nginx        NodePort   10.233.44.54    <none>        80:32018/TCP     1h
My node IP address is 172.16.16.2 and I can access both pods using
http://172.16.16.2:31250 &
http://172.16.16.2:32018
Now on top of this I want to deploy an Ingress so that I can reach both pods not via 2 IPs and 2 different ports but via 1 IP address with different paths.
So my Ingress file is:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: fanout-nginx-ingress
spec:
  rules:
    - http:
        paths:
          - path: /nginx
            backend:
              serviceName: nginx
              servicePort: 80
          - path: /echo
            backend:
              serviceName: echoserver
              servicePort: 8080
This yields:
[root@node1 kubernetes]# kubectl describe ing fanout-nginx-ingress
Name:             fanout-nginx-ingress
Namespace:        development
Address:
Default backend:  default-http-backend:80 (<none>)
Rules:
  Host  Path    Backends
  ----  ----    --------
  *
        /nginx  nginx:80 (<none>)
        /echo   echoserver:8080 (<none>)
Annotations:
Events:  <none>
Now when I try accessing the Pods using the node IP address (172.16.16.2), I get nothing.
http://172.16.16.2/echo
http://172.16.16.2/nginx
Is there something I have missed in my configs ?
I had the same issue on my bare-metal installation, or rather something close to that (a kubernetes virtual cluster: a set of virtual machines connected via a Host-Only Adapter). Here is a link to my kubernetes vlab.
First of all, make sure that you have an ingress controller installed. Currently there are two ingress controllers worth trying: the kubernetes nginx ingress controller and the nginx kubernetes ingress controller. I installed the first one.
Installation
Go to the installation instructions and execute the first step:
# prerequisite-generic-deployment-command
$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/mandatory.yaml
Next, get the IP addresses of the cluster nodes.
$ kubectl get nodes -o wide
NAME     STATUS   ROLES    ...   INTERNAL-IP
master   Ready    master   ...   192.168.121.110
node01   Ready    <none>   ...   192.168.121.111
node02   Ready    <none>   ...   192.168.121.112
Further, create an ingress-nginx service of type LoadBalancer. I do it by downloading the NodePort service template from the installation tutorial and making the following adjustments in the svc-ingress-nginx-lb.yaml file.
$ curl https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/provider/baremetal/service-nodeport.yaml > svc-ingress-nginx-lb.yaml
# my changes to svc-ingress-nginx-lb.yaml
  type: LoadBalancer
  externalIPs:
    - 192.168.121.110
    - 192.168.121.111
    - 192.168.121.112
  externalTrafficPolicy: Local
# create the ingress-nginx service
$ kubectl apply -f svc-ingress-nginx-lb.yaml
Verification
Check that the ingress-nginx service was created.
$ kubectl get svc -n ingress-nginx
NAME            TYPE           CLUSTER-IP     EXTERNAL-IP                                       PORT(S)                      AGE
ingress-nginx   LoadBalancer   10.110.127.9   192.168.121.110,192.168.121.111,192.168.121.112   80:30284/TCP,443:31684/TCP   70m
Check that the nginx-ingress-controller deployment was created.
$ kubectl get deploy -n ingress-nginx
NAME                       DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
nginx-ingress-controller   1         1         1            1           73m
Check that the nginx-ingress pod is running.
$ kubectl get pods --all-namespaces -l app.kubernetes.io/name=ingress-nginx
NAMESPACE       NAME                                        READY   STATUS    RESTARTS   AGE
ingress-nginx   nginx-ingress-controller-5cd796c58c-lg6d4   1/1     Running   0          75m
Finally, check the ingress controller version. Don't forget to change the pod name!
$ kubectl exec -it nginx-ingress-controller-5cd796c58c-lg6d4 -n ingress-nginx -- /nginx-ingress-controller --version
-------------------------------------------------------------------------------
NGINX Ingress controller
Release: 0.21.0
Build: git-b65b85cd9
Repository: https://github.com/aledbf/ingress-nginx
-------------------------------------------------------------------------------
Testing
Test that the ingress controller is working by executing the steps in this tutorial; of course, you will omit the minikube part.
Successful execution of all the steps will create an ingress resource that should look like this.
$ kubectl get ing
NAME               HOSTS                         ADDRESS                                           PORTS   AGE
ingress-tutorial   myminikube.info,cheeses.all   192.168.121.110,192.168.121.111,192.168.121.112   80      91m
And pods that look like this.
$ kubectl get pods
NAME                              READY   STATUS    RESTARTS   AGE
cheddar-cheese-6f94c9dbfd-cll4z   1/1     Running   0          110m
echoserver-55dcfbf8c6-dwl6s       1/1     Running   0          104m
stilton-cheese-5f6bbdd7dd-8s8bf   1/1     Running   0          110m
Finally, test that a request to myminikube.info propagates via the ingress load balancer.
$ curl myminikube.info
CLIENT VALUES:
client_address=10.44.0.7
command=GET
real path=/
query=nil
request_version=1.1
request_uri=http://myminikube.info:8080/
SERVER VALUES:
server_version=nginx: 1.10.0 - lua: 10001
HEADERS RECEIVED:
accept=*/*
host=myminikube.info
user-agent=curl/7.29.0
x-forwarded-for=10.32.0.1
x-forwarded-host=myminikube.info
x-forwarded-port=80
x-forwarded-proto=http
x-original-uri=/
x-real-ip=10.32.0.1
x-request-id=b2fb3ee219507bfa12472c7d481d4b72
x-scheme=http
BODY:
It was a long journey to make ingress work in a bare-metal-like environment. Thus, I will include the relevant links that helped me along:
reproducible tutorial
installation of minikube on ubuntu
ingress I
ingress II
digging
reverse engineering on ingress in kubernetes
Check if you have an ingress controller in your cluster:
$ kubectl get po --all-namespaces
You should see something like:
kube-system   nginx-ingress-controller-gwts0   1/1   Running   0   18d
It's only possible to create an ingress to address services inside the namespace in which the Ingress resides.
Cross-namespace ingresses are not implemented for security reasons.
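Given the Namespace: development shown in your describe output, it is worth confirming that both backing services live in that same namespace:
$ kubectl get svc -n development nginx echoserver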
It seems that your cluster is missing an Ingress controller.
In general, an Ingress controller works as follows:
1. It searches for a certain type of object (ingress, "nginx") in the cluster.
2. It parses that object and creates a configuration section for a specific ingress pod.
3. It updates that pod object (restarts it with the updated configuration).
That particular pod is responsible for processing traffic from the incoming ports (usually a couple of dedicated ports on the nodes) to the configured traffic destinations in the cluster.
You can choose from two supported and maintained controllers: Nginx and GCE.
The ingress controller consists of several components that you create during installation.
Here is the installation part from the Nginx Ingress documentation:
curl https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/namespace.yaml | kubectl apply -f -
curl https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/default-backend.yaml | kubectl apply -f -
curl https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/configmap.yaml | kubectl apply -f -
curl https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/tcp-services-configmap.yaml | kubectl apply -f -
curl https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/udp-services-configmap.yaml | kubectl apply -f -
If you have RBAC authorization configured in your cluster:
curl https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/rbac.yaml | kubectl apply -f -
curl https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/with-rbac.yaml | kubectl apply -f -
If no RBAC configured:
curl https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/without-rbac.yaml | kubectl apply -f -
In case you create your cluster from scratch:
curl https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/provider/baremetal/service-nodeport.yaml | kubectl apply -f -
Verify your installation:
kubectl get pods --all-namespaces -l app=ingress-nginx --watch
You should see something like:
NAMESPACE       NAME                                       READY   STATUS    RESTARTS   AGE
ingress-nginx   nginx-ingress-controller-699cdf846-nj2rw   1/1     Running   0          1h
Check available services and their parameters:
kubectl get services --all-namespaces
If you are using a custom service provider deployment (minikube, AWS, Azure, GKE), follow the Nginx Ingress documentation for installation details.
See the official Kubernetes Ingress documentation for details about Ingress.
I was using the microk8s default nginx ingress controller on a later version of k8s (> 1.18) and I noticed this specific annotation was causing me an issue:
kubernetes.io/ingress.class: "nginx"
It's present in a lot of older documentation and examples, but it's apparently deprecated (see https://kubernetes.io/docs/concepts/services-networking/ingress/), and I had also defined an ingressClassName of "public" using the newer spec field. I'm not sure if it was the conflict between the two that caused the issue, but once I removed the deprecated annotation my address appeared.
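For reference, a minimal sketch of the newer form, using the networking.k8s.io/v1 API and the spec-level ingressClassName field ("public" being the microk8s class mentioned above; the other names are illustrative):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress
spec:
  ingressClassName: public
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-service
                port:
                  number: 80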
For your ingress resource (fanout-nginx-ingress) to work, you first need to deploy an ingress controller, which does not come by default with your local kubernetes cluster. You need to deploy it yourself.
There are many solutions out there and you can use any of them; the nginx ingress controller is fine.
For detailed information you can refer to a great video on ingress by Mumshad Mannambeth here:
https://www.youtube.com/watch?v=GhZi4DxaxxE

Does the Google Container Engine support DNS based service discovery?

From the kubernetes docs I see that there is a DNS-based service discovery mechanism. Does Google Container Engine support this? If so, what's the format of the DNS name to discover a service running inside Container Engine? I couldn't find the relevant information in the Container Engine docs.
The DNS name for services is as follows: {service-name}.{namespace}.svc.cluster.local.
Assuming you configured kubectl to work with your cluster, you should be able to get your service and namespace details by following the steps below.
Get your namespace
$ kubectl get namespaces
NAME          LABELS   STATUS
default       <none>   Active
kube-system   <none>   Active
You should ignore the kube-system entry, because that is for the cluster itself. All other entries are your namespaces. By default there will be one extra namespace called default.
Get your services
$ kubectl get services
NAME                  LABELS                                                    SELECTOR                   IP(S)            PORT(S)
broker-partition0     name=broker-partition0,type=broker                        name=broker-partition0     10.203.248.95    5050/TCP
broker-partition1     name=broker-partition1,type=broker                        name=broker-partition1     10.203.249.91    5050/TCP
kubernetes            component=apiserver,provider=kubernetes                   <none>                     10.203.240.1     443/TCP
service-frontend      name=service-frontend,service=frontend                    name=service-frontend      10.203.246.16    80/TCP
                                                                                                           104.155.61.198
service-membership0   name=service-membership0,partition=0,service=membership   name=service-membership0   10.203.246.242   80/TCP
service-membership1   name=service-membership1,partition=1,service=membership   name=service-membership1   10.203.248.211   80/TCP
This command lists all the services available in your cluster. So for example, if I want to get the IP address of the service-frontend service, I can use the following DNS name: service-frontend.default.svc.cluster.local.
Verify DNS with busybox pod
You can create a busybox pod and use that pod to execute the nslookup command to query the DNS server.
$ kubectl create -f - << EOF
apiVersion: v1
kind: Pod
metadata:
  name: busybox
  namespace: default
spec:
  containers:
    - image: busybox
      command:
        - sleep
        - "3600"
      imagePullPolicy: IfNotPresent
      name: busybox
  restartPolicy: Always
EOF
Now you can do an nslookup from the pod in your cluster.
$ kubectl exec busybox -- nslookup service-frontend.default.svc.cluster.local
Server:    10.203.240.10
Address 1: 10.203.240.10
Name:      service-frontend.default.svc.cluster.local
Address 1: 10.203.246.16
Here you see that the Address 1 entry is the IP of the service-frontend service, the same as the IP address listed by kubectl get services.
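Note that from a pod in the same namespace the short name is enough, because the cluster DNS search path expands it:
$ kubectl exec busybox -- nslookup service-frontend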
It should work the same way as mentioned in the doc you linked to. Have you tried that? (i.e. "my-service.my-ns")