Kubernetes: How to expose a Pod as a service

I am learning kubernetes and created first pod using below command
kubectl run helloworld --image=<image-name> --port=8080
The Pod creation was successful.
But since it is neither a ReplicationController nor a Deployment, how can I expose it as a Service? Please advise.

Please refer to the Kubernetes documentation on the Service concept: https://kubernetes.io/docs/tutorials/kubernetes-basics/expose/expose-intro/
At the end of the page there is also an interactive tutorial using Minikube.

You can create the Service with a selector that matches the Pod's labels:
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: helloworld
  ports:
    - protocol: TCP
      port: 80
      targetPort: 9376
If the selector matches the Pod's labels, the Service will route traffic to the Pod, and you can expose it.
Ref: https://kubernetes.io/docs/concepts/services-networking/service/
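Note that kubectl run typically labels the pod run=<name> rather than app=<name>, so for the helloworld pod from the question the selector would need to match that label. A minimal sketch, assuming the default label added by kubectl run and the --port=8080 from the question:
apiVersion: v1
kind: Service
metadata:
  name: helloworld
spec:
  selector:
    run: helloworld    # kubectl run adds run=<name> by default
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8080   # must match the port the container listens on
You can confirm the Service picked up the Pod with kubectl get endpoints helloworld.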

You may simply use --expose while creating the pod:
$ kubectl run nginx --image=nginx --port=80 --expose
service/nginx created
pod/nginx created
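This creates, alongside the pod, a ClusterIP service named after the pod, listening on the --port you passed (80 here). A quick check:
$ kubectl get svc nginx
$ kubectl get endpoints nginx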

Thanks all. I was able to achieve this using the command below (thanks to the comment from Amit kumar):
# Create a service for a pod valid-pod, which serves on port 444 with the name "frontend"
kubectl expose pod valid-pod --port=444 --name=frontend
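To verify, you can check the service and its endpoints, and try reaching it from a throwaway pod (a sketch; the busybox test pod is illustrative and assumes valid-pod actually serves HTTP on port 444):
$ kubectl get svc frontend
$ kubectl get endpoints frontend
$ kubectl run tmp --rm -it --image=busybox --restart=Never -- wget -qO- http://frontend:444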

How to restrict a pod to connect to only 2 pods using NetworkPolicy, and how to test the connection, in k8s in a simple way?

Do I still need to expose the pod via a ClusterIP service?
There are 3 pods: main, front, api. I need to allow ingress and egress connections for the main pod only from/to the api and front pods. I also created service-main, a service that exposes the main pod on port 80.
I don't know how to test it; I tried:
k exec main -it -- sh
nc -z -v -w 5 service-main 80
and
k exec main -it -- sh
curl front:80
The main.yaml pod:
apiVersion: v1
kind: Pod
metadata:
  labels:
    app: main
    item: c18
  name: main
spec:
  containers:
    - image: busybox
      name: main
      command:
        - /bin/sh
        - -c
        - sleep 1d
The front.yaml:
apiVersion: v1
kind: Pod
metadata:
  labels:
    app: front
  name: front
spec:
  containers:
    - image: busybox
      name: front
      command:
        - /bin/sh
        - -c
        - sleep 1d
The api.yaml:
apiVersion: v1
kind: Pod
metadata:
  labels:
    app: api
  name: api
spec:
  containers:
    - image: busybox
      name: api
      command:
        - /bin/sh
        - -c
        - sleep 1d
The main-to-front-networkpolicy.yaml:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: front-end-policy
spec:
  podSelector:
    matchLabels:
      app: main
  policyTypes:
    - Ingress
    - Egress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: front
      ports:
        - port: 8080
  egress:
    - to:
        - podSelector:
            matchLabels:
              app: front
      ports:
        - port: 8080
What am I doing wrong? Do I still need to expose the main pod via a service? Shouldn't the network policy take care of this already?
Also, do I need to write containerPort: 80 in the main pod? How do I test connectivity and ensure that ingress and egress work only between the main pod and the api and front pods?
I tried the lab from a CKAD prep course; it had 2 pods: secure-pod and web-pod. There was an issue with connectivity, and the solution was to create a network policy and test using netcat from inside web-pod's container:
k exec web-pod -it -- sh
nc -z -v -w 1 secure-service 80
connection open
UPDATE: ideally I want answers to these:
A clear explanation of the difference between a Service and a NetworkPolicy.
If both a Service and a NetworkPolicy exist, what is the order of evaluation that the traffic/request goes through? Does it first go through the NetworkPolicy and then the Service, or vice versa?
If I want the front and api pods to send/receive traffic to main, do I need separate services exposing the front and api pods?
Network policies and services are two different and independent Kubernetes resources.
Service is:
An abstract way to expose an application running on a set of Pods as a network service.
Good explanation from the Kubernetes docs:
Kubernetes Pods are created and destroyed to match the state of your cluster. Pods are nonpermanent resources. If you use a Deployment to run your app, it can create and destroy Pods dynamically.
Each Pod gets its own IP address, however in a Deployment, the set of Pods running in one moment in time could be different from the set of Pods running that application a moment later.
This leads to a problem: if some set of Pods (call them "backends") provides functionality to other Pods (call them "frontends") inside your cluster, how do the frontends find out and keep track of which IP address to connect to, so that the frontend can use the backend part of the workload?
Enter Services.
Also another good explanation in this answer.
For production you should use workload resources instead of creating pods directly:
Pods are generally not created directly and are created using workload resources. See Working with Pods for more information on how Pods are used with workload resources.
Here are some examples of workload resources that manage one or more Pods:
Deployment
StatefulSet
DaemonSet
And use services to make requests to your application.
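For example, the standalone main pod from the question could be replaced by a minimal Deployment; a sketch keeping the same labels so the network policy discussed below still applies:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: main
spec:
  replicas: 1
  selector:
    matchLabels:
      app: main
  template:
    metadata:
      labels:
        app: main
        item: c18
    spec:
      containers:
        - name: main
          image: busybox
          command: ["/bin/sh", "-c", "sleep 1d"]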
Network policies are used to control traffic flow:
If you want to control traffic flow at the IP address or port level (OSI layer 3 or 4), then you might consider using Kubernetes NetworkPolicies for particular applications in your cluster.
Network policies target pods, not services (an abstraction). Check this answer and this one.
Regarding your examples: your network policy is correct (as I tested it below). The problem is that your cluster may not be compatible:
For Network Policies to take effect, your cluster needs to run a network plugin which also enforces them. Project Calico or Cilium are plugins that do so. This is not the default when creating a cluster!
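A quick way to check whether such a plugin is installed is to look for its pods (names vary per installation; grep patterns here are illustrative):
$ kubectl get pods -n kube-system | grep -e calico -e cilium
If nothing shows up, the API server will still accept NetworkPolicy objects, but they simply won't be enforced.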
Test on a kubeadm cluster with the Calico plugin: I created similar pods to yours, but I changed the container part:
spec:
  containers:
    - name: main
      image: nginx
      command: ["/bin/sh", "-c"]
      args: ["sed -i 's/listen .*/listen 8080;/g' /etc/nginx/conf.d/default.conf && exec nginx -g 'daemon off;'"]
      ports:
        - containerPort: 8080
So the NGINX app is available on port 8080.
Let's check the pods' IPs:
user@shell:~$ kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
api 1/1 Running 0 48m 192.168.156.61 example-ubuntu-kubeadm-template-2 <none> <none>
front 1/1 Running 0 48m 192.168.156.56 example-ubuntu-kubeadm-template-2 <none> <none>
main 1/1 Running 0 48m 192.168.156.52 example-ubuntu-kubeadm-template-2 <none> <none>
Let's exec into the running main pod and try to make a request to the api pod:
root@main:/# curl 192.168.156.61:8080
<!DOCTYPE html>
...
<title>Welcome to nginx!</title>
It is working.
After applying your network policy:
user@shell:~$ kubectl apply -f main-to-front.yaml
networkpolicy.networking.k8s.io/front-end-policy created
user@shell:~$ kubectl exec -it main -- bash
root@main:/# curl 192.168.156.61:8080
...
It is not working anymore, which means the network policy was applied successfully.
A nice option to get more information about the applied network policy is to run the kubectl describe command:
user@shell:~$ kubectl describe networkpolicy front-end-policy
Name:         front-end-policy
Namespace:    default
Created on:   2022-01-26 15:17:58 +0000 UTC
Labels:       <none>
Annotations:  <none>
Spec:
  PodSelector:     app=main
  Allowing ingress traffic:
    To Port: 8080/TCP
    From:
      PodSelector: app=front
  Allowing egress traffic:
    To Port: 8080/TCP
    To:
      PodSelector: app=front
  Policy Types: Ingress, Egress
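To double-check the other direction of the policy, you could also verify that only the front pod can reach main on port 8080. A sketch, using main's IP from the kubectl get pods -o wide output above and assuming curl is available in those containers as it is in main (exact error output may differ):
user@shell:~$ kubectl exec -it front -- curl -m 5 192.168.156.52:8080   # allowed: ingress from app=front on 8080
user@shell:~$ kubectl exec -it api -- curl -m 5 192.168.156.52:8080    # should time out: api is not in the ingress rule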

k3s - Can't access my service based on service name

I have created a service like this:
apiVersion: v1
kind: Service
metadata:
  name: amen-sc
spec:
  ports:
    - name: http
      port: 3030
      targetPort: 8000
  selector:
    component: scc-worker
I am able to access this service from within pods of the same cluster (and namespace) using the IP address I get from kubectl get svc, but I am not able to access it using the service name, e.g. curl amen-sc:3030.
Please advise what could possibly be wrong.
I intend to expose certain pods only within my cluster and access them using the service-name:port format.
Make sure you have the DNS service configured and its corresponding pods running:
kubectl get svc -n kube-system -l k8s-app=kube-dns
and
kubectl get pods -n kube-system -l k8s-app=kube-dns
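If those look healthy, you can also test name resolution directly from a throwaway pod (busybox:1.28 is commonly used for this because its nslookup behaves reliably; the pod name here is illustrative):
$ kubectl run dnstest --rm -it --image=busybox:1.28 --restart=Never -- nslookup amen-sc
If the lookup fails, inspect /etc/resolv.conf inside your pods and check the CoreDNS logs in kube-system.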

Kubernetes Service get Connection Refused

I am trying to create an application in Kubernetes (Minikube) and expose its service to other applications in the same cluster, but I get connection refused if I try to access this service from a Kubernetes node.
The application just listens for HTTP on 127.0.0.1:9897 and sends a response.
Below is my yaml file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: exporter-test
  namespace: datenlord-monitoring
  labels:
    app: exporter-test
spec:
  replicas: 1
  selector:
    matchLabels:
      app: exporter-test
  template:
    metadata:
      labels:
        app: exporter-test
    spec:
      containers:
        - name: prometheus
          image: 34342/hello_world
          ports:
            - containerPort: 9897
---
apiVersion: v1
kind: Service
metadata:
  name: exporter-test-service
  namespace: datenlord-monitoring
  annotations:
    prometheus.io/scrape: 'true'
    prometheus.io/port: '9897'
spec:
  selector:
    app: exporter-test
  type: NodePort
  ports:
    - port: 8080
      targetPort: 9897
      nodePort: 30001
After I apply this yaml file, the pod and the service deploy correctly, and I am sure the pod works, since when I log in to the pod with
kubectl exec -it exporter-test-* -- sh and run curl 127.0.0.1:9897, I get the correct response.
Also, if I run kubectl port-forward exporter-test-* -n datenlord-monitoring 8080:9897, I get the correct response from localhost:8080. So the application itself should work well.
However, when I try to access this service from another application in the same K8s cluster via exporter-test-service.datenlord-monitoring.svc:30001, or run curl nodeIp:30001 on a k8s node, or curl clusterIp:8080 on a k8s node, I get Connection refused.
Has anyone had the same issue before? I'd appreciate any help. Thanks!
You are mixing two things here. NodePort is the port on which the application is available from outside your cluster. Inside your cluster, you need to access your service via the service port, not the NodePort.
Try changing exporter-test-service.datenlord-monitoring.svc:30001 to exporter-test-service.datenlord-monitoring.svc:8080.
Welcome to the community!
There is no issue with the behaviour you observed.
In short, a Kubernetes cluster (Minikube in this case) has its own isolated network with internal DNS.
One way to access your service on the node: you specified a nodePort for your service, which made the service accessible at localhost:30001. You can check it by running on your host:
$ kubectl get svc -n datenlord-monitoring
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
exporter-test-service NodePort 10.111.191.159 <none> 8080:30001/TCP 2m45s
# Test:
curl -I localhost:30001
HTTP/1.1 200 OK
Another way to expose the service to the host network is to use minikube tunnel (run it in another console). You'll need to change the service type from NodePort to LoadBalancer:
$ kubectl get svc -n datenlord-monitoring
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
exporter-test-service LoadBalancer 10.111.191.159 10.111.191.159 8080:30001/TCP 18m
# Test:
$ curl -I 10.111.191.159:8080
HTTP/1.1 200 OK
Why some of the options don't work:
Connection to the service by its DNS name + NodePort: the NodePort links the host IP and the NodePort to the service port inside the Kubernetes cluster. The internal DNS is not accessible outside the cluster (unless you add the IPs to /etc/hosts on your host machine).
Inside the cluster you should use the internal DNS name with the internal service port, which is 8080 in your case. You can check how this works with a separate container in the same namespace (e.g. image curlimages/curl) and get the following:
$ kubectl exec -it curl -n datenlord-monitoring -- curl -I exporter-test-service:8080
HTTP/1.1 200 OK
Or from the pod in a different namespace:
$ kubectl exec -it curl-default-ns -- curl -I exporter-test-service.datenlord-monitoring.svc:8080
HTTP/1.1 200 OK
I've attached useful links below which will help you understand this difference.
Edit: DNS inside the deployed pod:
$ kubectl exec -it exporter-test-xxxxxxxx-yyyyy -n datenlord-monitoring -- bash
root@exporter-test-74cf9f94ff-fmcqp:/# cat /etc/resolv.conf
nameserver 10.96.0.10
search datenlord-monitoring.svc.cluster.local svc.cluster.local cluster.local
options ndots:5
Useful links:
DNS for pods and services
Service types
Accessing apps in Minikube
You need to change 127.0.0.1:9897 to 0.0.0.0:9897 so that the application listens on all interfaces and accepts requests arriving from outside the pod's loopback interface.
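You can reproduce the difference with any simple server; python3 -m http.server below is just an illustration, not the app from the question:
# reachable only from inside the pod (kubectl exec); connection refused via the service
python3 -m http.server 9897 --bind 127.0.0.1
# reachable through the service and port-forwarding as well
python3 -m http.server 9897 --bind 0.0.0.0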

Expose a port from a container in a pod (Minikube, Kubernetes)

I'm new to K8s. I'm trying Minikube with 2 containers running in a pod, created with this command:
kubectl apply -f deployment.yaml
and this deployment.yaml:
apiVersion: v1
kind: Pod
metadata:
  name: site-home
spec:
  restartPolicy: Never
  volumes:
    - name: v-site-home
      emptyDir: {}
  containers:
    - name: site-web
      image: site-home:1.0.0
      ports:
        - containerPort: 80
      volumeMounts:
        - name: v-site-home
          mountPath: /usr/share/nginx/html/assets/quotaLago
    - name: site-cron
      image: site-home-cron:1.0.0
      volumeMounts:
        - name: v-site-home
          mountPath: /app/quotaLago
I have a shared volume, so if I understand correctly I cannot use a Deployment, only Pods (maybe a StatefulSet?).
In any case I want to expose port 80 of the container site-web in the pod site-home.
In the official docs I see this for deployments:
kubectl expose deployment hello-node --type=LoadBalancer --port=8080
but I cannot use for example:
kubectl expose pod site-web --type=LoadBalancer --port=8080
any idea?
but I cannot use for example:
kubectl expose pod site-web --type=LoadBalancer --port=8080
Of course you can; however, exposing a single Pod via a LoadBalancer Service doesn't make much sense. If you have a Deployment, which typically manages a set of Pods between which the real load can be balanced, a LoadBalancer does its job. However, you can still use it just for exposing a single Pod.
Note that your container exposes port 80, not 8080 (containerPort: 80 in your container specification), so you need to specify it as the target-port in your Service. Your kubectl expose command may look like this:
kubectl expose pod site-web --type=LoadBalancer --port=8080 --target-port=80
If you provide only the --port=8080 flag to your kubectl expose command, it assumes that the target-port's value is the same as the value of --port. You can easily check it yourself by looking at the service you've just created:
kubectl get svc site-web -o yaml
and you'll see something like this in spec.ports section:
- nodePort: 32576
  port: 8080
  protocol: TCP
  targetPort: 8080
After exposing your Pod (or Deployment) properly, i.e. using:
kubectl expose pod site-web --type=LoadBalancer --port=8080 --target-port=80
you'll see something similar:
- nodePort: 31181
  port: 8080
  protocol: TCP
  targetPort: 80
After issuing kubectl get services you should see similar output:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
site-web ClusterIP <Cluster IP> <External IP> 8080:31188/TCP 4m42s
If you then go to http://<External IP>:8080 in your browser, or run curl http://<External IP>:8080, you should see your website's frontend.
Keep in mind that this solution makes sense and will be fully functional in a cloud environment which is able to provide you with a real load balancer. Note that if you declare such a Service type in Minikube, it in fact creates a NodePort service, as it is unable to provide you with a real load balancer. So your application will be available on your Node's (your Minikube VM's) IP address on a randomly selected port in the range 30000-32767 (in my example it's port 31181).
As to your question about the volume:
I have a shared volume, so if I understand correctly I cannot use a Deployment, only Pods (maybe a StatefulSet?)
Yes: an emptyDir volume specifically cannot be shared between different Pods (even if they were scheduled on the same node); it is shared only between containers within the same Pod. If you want to use a Deployment, you'll need to think about another storage solution, such as a PersistentVolume.
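A minimal sketch of such a claim (the storage class and access mode depend on your cluster; ReadWriteMany in particular is only supported by some storage backends):
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: site-home-pvc
spec:
  accessModes:
    - ReadWriteMany   # needed if several pods on different nodes mount it
  resources:
    requests:
      storage: 1Gi
The pods (or a Deployment's pod template) would then reference it via a persistentVolumeClaim volume instead of emptyDir.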
EDIT:
At first I didn't notice the error in your command:
kubectl expose pod site-web --type=LoadBalancer --port=8080
You're trying to expose a non-existent Pod, as your Pod's name is site-home, not site-web. site-web is the name of one of the containers (within your site-home Pod). Remember: we expose Pods, not containers, via a Service.
I changed 80 -> 8080, but I still get an error: kubectl expose pod site-home --type=LoadBalancer --port=8080 returns: error: couldn't retrieve selectors via --selector flag or introspection: the pod has no labels and cannot be exposed. See 'kubectl expose -h' for help and examples.
The key point here is: the pod has no labels and cannot be exposed.
It looks like your Pod doesn't have any labels defined. They are required so that the Service can select this particular Pod (or a set of similar Pods which have the same label) from among the other Pods in your cluster. You need at least one label in your Pod definition. Adding a simple label name: site-web under the Pod's metadata section should help. It may look like this in your Pod definition:
apiVersion: v1
kind: Pod
metadata:
  name: site-home
  labels:
    name: site-web
spec:
  ...
Now you may even provide this label as the selector in your service; however, it should be handled automatically if you omit the --selector flag:
kubectl expose pod site-home --type=LoadBalancer --port=8080 --target-port=80 --selector=name=site-web
Remember: in Minikube a real load balancer cannot be created, so instead of a LoadBalancer a NodePort-type service will be created. The command kubectl get svc will tell you on which port (in the range 30000-32767) your application will be available.
And kubectl expose pod site-web --type=LoadBalancer --port=8080 returns: Error from server (NotFound): pods "site-web" not found. site-home is the pod, site-web is the container with the port exposed. What's the issue?
If you don't have a Pod named "site-web", you can expect such a message. Here you are simply trying to expose a non-existent Pod.
If I expose a port from a container, is the port automatically exposed for the pod as well?
Yes, you have the port defined in the container definition. Your Pods automatically expose all ports that are exposed by the containers within them.

kubectl expose pods using selector as a NodePort service via command-line

I have a requirement to expose pods using a selector as a NodePort service from the command line. There can be one or more pods, and the service needs to include the pods dynamically as they come and go. For example,
NAMESPACE NAME READY STATUS RESTARTS AGE
rakesh rakesh-pod1 1/1 Running 0 4d18h
rakesh rakesh-pod2 1/1 Running 0 4d18h
I can create a service that selects the pods I want using a service-definition yaml file:
apiVersion: v1
kind: Service
metadata:
  name: np-service
  namespace: rakesh
spec:
  type: NodePort
  ports:
    - name: port1
      port: 30005
      targetPort: 30005
      nodePort: 30005
  selector:
    abc.property: rakesh
However, I need to achieve the same via the command line. I tried the following:
kubectl -n rakesh expose pod --selector="abc.property: rakesh" --port=30005 --target-port=30005 --name=np-service --type=NodePort
and a few other variations, without success.
Note: I understand that there is currently no way to specify the node port on the command line; a random port is allocated between 30000-32767. I am fine with that, as I can patch it later to the required port.
Also note: these pods are unfortunately not part of a deployment; otherwise, expose deployment might have worked.
So it boils down to selecting pods based on a selector and exposing them as a service.
My kubernetes version: 1.12.5 upstream
My kubectl version: 1.12.5 upstream
You can do:
kubectl -n rakesh expose $(kubectl -n rakesh get po -l abc.property=rakesh -o name) --port 30005 --name np-service --type NodePort
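kubectl expose copies the pod's labels into the service's selector, so the resulting service keeps matching pods labeled abc.property=rakesh as they come and go. You can confirm the selector afterwards:
$ kubectl -n rakesh get svc np-service -o jsonpath='{.spec.selector}'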
You can create a NodePort service with a specified name using the create service nodeport command:
kubectl create service nodeport NAME [--tcp=port:targetPort] [--dry-run=server|client|none]
For example:
kubectl create service nodeport myservice --node-port=31000 --tcp=3000:80
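Note that kubectl create service nodeport sets the selector to app=<name> (here app=myservice), which would not match the pods labeled abc.property: rakesh from the question. You can adjust it afterwards, for example with kubectl set selector:
$ kubectl -n rakesh set selector service myservice 'abc.property=rakesh'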
You can check the kubectl reference for more:
https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#-em-service-nodeport-em-