Expose every replica pod in a deployment/replicaset with a NodePort service - kubernetes

I have 2 bare-metal worker nodes and a ReplicaSet with 10 pod replicas. How can I expose each pod using a NodePort service?

You can use the kubectl expose command to create a NodePort Service for a ReplicaSet.
This is a template that may be useful:
kubectl expose rs <REPLICASET_NAME> --port=<PORT> --target-port=<TARGET_PORT> --type=NodePort
The most important flags are:
--port: The port that the service should serve on. Copied from the resource being exposed, if unspecified.
--target-port: Name or number for the port on the container that the service should direct traffic to. Optional.
--type: Type for this service: ClusterIP, NodePort, LoadBalancer, or ExternalName. Default is 'ClusterIP'.
NOTE: Detailed information on this command can be found in the Kubectl Reference Docs.
I'll walk through an example to illustrate how it works.
First, I created a simple app-1 ReplicaSet:
$ cat app-1-rs.yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: app-1
  labels:
    app: app-1
spec:
  replicas: 10
  selector:
    matchLabels:
      app: app-1
  template:
    metadata:
      labels:
        app: app-1
    spec:
      containers:
      - name: nginx
        image: nginx
$ kubectl apply -f app-1-rs.yaml
replicaset.apps/app-1 created
$ kubectl get rs
NAME DESIRED CURRENT READY AGE
app-1 10 10 10 93s
Then I used the kubectl expose command to create a NodePort Service for the app-1 ReplicaSet:
# kubectl expose rs <REPLICASET_NAME> --port=<PORT> --target-port=<TARGET_PORT> --type=NodePort
$ kubectl expose rs app-1 --port=80 --type=NodePort
service/app-1 exposed
$ kubectl get svc app-1
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
app-1 NodePort 10.8.6.92 <none> 80:32275/TCP 18s
$ kubectl describe svc app-1 | grep -i endpoints
Endpoints: 10.4.0.15:80,10.4.0.16:80,10.4.0.17:80 + 7 more...
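With the NodePort allocated above (32275 in my example), the Service should be reachable from outside the cluster on any node's IP, assuming no firewall blocks that port. A quick check (the node IP below is a placeholder):
$ curl http://<NODE_IP>:32275
Requests are load-balanced by kube-proxy across the 10 pod endpoints listed above.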
If you want to make some modifications before creating the Service, you can export the Service definition to a manifest file and apply it after the modifications:
kubectl expose rs app-1 --port=80 --type=NodePort --dry-run=client -oyaml > app-1-svc.yaml
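For illustration, the generated manifest should look roughly like the following (field order and defaulted fields may differ slightly). You could, for example, pin a fixed nodePort here, as long as it falls within the cluster's NodePort range (30000-32767 by default), and then apply it:
apiVersion: v1
kind: Service
metadata:
  labels:
    app: app-1
  name: app-1
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
    nodePort: 30080   # optional: pin a specific port instead of letting Kubernetes pick one
  selector:
    app: app-1
  type: NodePort
$ kubectl apply -f app-1-svc.yaml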
Additionally, it's worth considering whether you can use a Deployment instead of directly using a ReplicaSet. As we can find in the Kubernetes ReplicaSet documentation:
Deployment is a higher-level concept that manages ReplicaSets and provides declarative updates to Pods along with a lot of other useful features. Therefore, we recommend using Deployments instead of directly using ReplicaSets, unless you require custom update orchestration or don't require updates at all.
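If you do switch to a Deployment, a minimal equivalent of the ReplicaSet above would look something like this (a sketch, reusing the same labels and image; not taken from the question). kubectl expose works the same way on it, e.g. kubectl expose deploy app-1 --port=80 --type=NodePort (delete the earlier app-1 Service first if it still exists):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-1
  labels:
    app: app-1
spec:
  replicas: 10
  selector:
    matchLabels:
      app: app-1
  template:
    metadata:
      labels:
        app: app-1
    spec:
      containers:
      - name: nginx
        image: nginx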

You can create a Service whose selector points to the pods, use the NodePort type, and make sure kube-proxy is running on every node.
An example YAML is below; set the right selector and port for your pods. After the Service is created, get its details to find the assigned nodePort (see the commands after the manifest).
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: MyApp
  type: NodePort
  ports:
  - name: http
    protocol: TCP
    port: 80
    targetPort: 80
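Kubernetes assigns the nodePort from its NodePort range (30000-32767 by default). A couple of ways to look it up (the service name my-service is taken from the manifest above):
$ kubectl get svc my-service
$ kubectl get svc my-service -o jsonpath='{.spec.ports[0].nodePort}'
The app is then reachable at <any-node-IP>:<that-nodePort>, provided kube-proxy is running on the node and the port is not firewalled.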
The official docs cover this well: https://kubernetes.io/docs/concepts/services-networking/service/

Related

kubernetes ingress network policy blocking services

I created a kubernetes pod efgh in namespace ns1
kubectl run efgh --image=nginx -n ns1
I created another pod in default namespace
kubectl run apple --image=nginx
I created a service efgh in namespace ns1
kubectl expose pod efgh --port=80 -n ns1
Now I created a network policy to block incoming connections to the pod
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: test-network-policy
  namespace: ns1
spec:
  podSelector:
    matchLabels:
      run: efgh
  policyTypes:
  - Ingress
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          project: ns1
  - from:
    - namespaceSelector:
        matchLabels:
          project: default
      podSelector:
        matchLabels:
          run: apple
    ports:
    - protocol: TCP
      port: 80
Checking the pods in ns1 gives me
NAME READY STATUS RESTARTS AGE IP
efgh 1/1 Running 0 3h4m 10.44.0.4
Checking the services in ns1 gives me
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
efgh ClusterIP 10.109.170.238 <none> 80/TCP 164m
Once I open a terminal in the apple pod and run the commands below, they work:
curl http://10-44-0-4.ns1.pod
curl http://10.44.0.4
but when I try to curl the pod through the service, it fails:
curl http://10.109.170.238
If I delete the network policy, the above curl works.
I think this is an issue with my local Kubernetes cluster; I tried it elsewhere and it works.
When I did a port-forward:
root@kubemaster:/home/vagrant# kubectl port-forward service/efgh 8080:80 -n ns1
Forwarding from 127.0.0.1:8080 -> 80
Forwarding from [::1]:8080 -> 80
See below; more details are in the Kubernetes docs on Publishing Services (ServiceTypes).
Publishing Services (ServiceTypes)
For some parts of your application (for example, frontends) you may want to expose a Service onto an external IP address, that's outside of your cluster.
Kubernetes ServiceTypes allow you to specify what kind of Service you want. The default is ClusterIP.
Type values and their behaviors are:
ClusterIP: Exposes the Service on a cluster-internal IP. Choosing this value makes the Service only reachable from within the cluster. This is the default ServiceType.
NodePort: Exposes the Service on each Node's IP at a static port (the NodePort). A ClusterIP Service, to which the NodePort Service routes, is automatically created. You'll be able to contact the NodePort Service, from outside the cluster, by requesting <NodeIP>:<NodePort>.
LoadBalancer: Exposes the Service externally using a cloud provider's load balancer. NodePort and ClusterIP Services, to which the external load balancer routes, are automatically created.
ExternalName: Maps the Service to the contents of the externalName field (e.g. foo.bar.example.com), by returning a CNAME record with its value. No proxying of any kind is set up.
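If the goal is also to reach this service from outside the cluster, one option (a sketch; service name and namespace are taken from the question) is to switch its type from ClusterIP to NodePort:
kubectl patch svc efgh -n ns1 -p '{"spec": {"type": "NodePort"}}'
kubectl get svc efgh -n ns1    # the assigned nodePort appears in the PORT(S) column
Note that the NetworkPolicy above still applies at the pod level regardless of the Service type.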

Call to Kubernetes service failed

kubernetes version: 1.5.2
os: centos 7
etcd version: 3.4.0
First, I create an etcd pod. The etcd Dockerfile and pod YAML file look like this:
etcd dockerfile:
FROM alpine
COPY . /usr/bin
WORKDIR /usr/bin
CMD etcd --listen-client-urls http://0.0.0.0:2379 --advertise-client-urls http://0.0.0.0:2379
EXPOSE 2379
pod yaml file
apiVersion: v1
kind: Pod
metadata:
  name: etcd
  namespace: storageplatform
  labels:
    app: etcd
spec:
  containers:
  - name: etcd
    image: "karldoenitz/etcd:3.4.0"
    ports:
    - containerPort: 2379
      hostPort: 12379
After building the Docker image and pushing it to Docker Hub, I ran kubectl apply -f etcd.yaml to create the etcd pod.
The IP of the etcd pod is 10.254.140.117. I ran ETCDCTL_API=3 etcdctl --endpoints=175.24.47.64:12379 put 1 1 and got OK.
My service yaml:
apiVersion: v1
kind: Service
metadata:
  name: storageservice
  namespace: storageplatform
spec:
  type: NodePort
  ports:
  - port: 12379
    targetPort: 12379
    nodePort: 32379
  selector:
    app: etcd
I applied the YAML file to create the service. Running kubectl get services -n storageplatform, I got this information:
NAMESPACE NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
storageplatform storageservice 10.254.140.117 <nodes> 12379:32379/TCP 51s
Finally, I ran the command
ETCDCTL_API=3 etcdctl --endpoints=10.254.140.117:32379 get 1
or
ETCDCTL_API=3 etcdctl --endpoints={host-ip}:32379 get 1
I got Error: context deadline exceeded.
What's the matter? How to make the service useful?
You defined a service that is available inside the Kubernetes network via the service name/IP (10.254.140.117) and the service port (12379), and which is also reachable on ALL nodes of the Kubernetes cluster, even from outside the Kubernetes network, via the node port (32379).
You need to fix the service so it maps to the correct container port: targetPort must match the pod's containerPort (and the port in the Dockerfile).
If the error Error: context deadline exceeded persists, it hints at a communication problem. This is expected when you combine the internal service IP with the external node port (your first get 1). For the node port (your second command) I assume that either the etcd pod is not running properly, or the port is firewalled on the node.
Change the service to refer to containerPort instead of hostPort
apiVersion: v1
kind: Service
metadata:
  name: storageservice
  namespace: storageplatform
spec:
  type: NodePort
  ports:
  - port: 2379
    targetPort: 2379
    nodePort: 32379
  selector:
    app: etcd
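To verify the fix (a sketch; the names and the 32379 node port come from the manifests above), check that the Service actually has the pod as an endpoint and then hit the node port from outside:
kubectl get endpoints storageservice -n storageplatform
ETCDCTL_API=3 etcdctl --endpoints=<node-ip>:32379 put 1 1
If the endpoints list is empty, the selector or pod labels are wrong; if it is populated but the external call still times out, check firewalling on the node.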

Can't talk with other pods using a Kubernetes Service

As I understand it, I should be able to talk to other pods from within a specific pod by sending an HTTP request to the fully qualified domain name (FQDN) of the service.
The system runs locally with minikube.
The service's YML -
apiVersion: v1
kind: Service
metadata:
  name: kubia
spec:
  sessionAffinity: ClientIP
  ports:
  - port: 80
    targetPort: 8080
  selector:
    app: kubia
The describe of the service -
Name: kubia
Namespace: default
Labels: <none>
Annotations: <none>
Selector: app=kubia
Type: ClusterIP
IP: 10.111.178.111
Port: <unset> 80/TCP
TargetPort: 8080/TCP
Endpoints: 172.17.0.7:8080,172.17.0.8:8080,172.17.0.9:8080
Session Affinity: ClientIP
Events: <none>
I'm trying to do that with -
kubectl exec -it kubia-gqd5l bash
where kubia-gqd5l is the pod.
In the bash session I tried to send a request with -
curl http://kubia
Where kubia is the name of the service.
and I got this error -
curl: (6) Could not resolve host: kubia.
It is important to note that I do manage to communicate with the service via -
kubectl exec kubia-gqd5l -- curl -s http://10.111.178.111
any idea?
Kubernetes clusters usually have DNS deployed. That allows pod to pod communications within the cluster (among other things) by using the name of the corresponding Kubernetes services. See https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/
Does your Kubernetes cluster/minikube have DNS running?
Something else to check is the selector in the Service definition - make sure the pod/deployment has the app: kubia label as specified in the selector.
Otherwise, and per the doc at the link above, because the lookup of the service is from a pod in the same namespace, you shouldn't need to use the namespace along with the service name: (quote) "...Assume a Service named foo in the Kubernetes namespace bar. A Pod running in namespace bar can look up this service by simply doing a DNS query for foo. A Pod running in namespace quux can look up this service by doing a DNS query for foo.bar".
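To check both of the above from the command line (a sketch; the pod name comes from the question, and nslookup assumes the container image ships it):
kubectl get pods -l app=kubia                           # should list kubia-gqd5l and its siblings
kubectl get endpoints kubia                             # should show the pod IPs, as in the describe output
kubectl get pods -n kube-system -l k8s-app=kube-dns     # is cluster DNS running?
kubectl exec kubia-gqd5l -- nslookup kubia              # does the service name resolve from inside the pod?
If the DNS lookup fails while the endpoints look correct, the problem is cluster DNS (or the pod's /etc/resolv.conf), not the Service itself.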
Have a look at this answer: 2 Kubernetes pod communicating without knowing the exposed address. To target a service, it's better to include the namespace with the service name.
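For example (a sketch, assuming the service lives in the default namespace and the cluster uses the default cluster.local domain):
kubectl exec kubia-gqd5l -- curl -s http://kubia.default
kubectl exec kubia-gqd5l -- curl -s http://kubia.default.svc.cluster.local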

Kubernetes ingress (hostNetwork=true), can't reach service by node IP - GCP

I am trying to expose a deployment using Ingress, where the DaemonSet has hostNetwork=true, which would allow me to skip the additional LoadBalancer layer and expose my service directly on the Kubernetes node's external IP. Unfortunately, I can't reach the Ingress controller from the external network.
I am running Kubernetes version 1.11.16-gke.2 on GCP.
I setup my fresh cluster like this:
gcloud container clusters get-credentials gcp-cluster
kubectl create serviceaccount --namespace kube-system tiller
kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
helm init --service-account tiller --upgrade
helm install --name ingress --namespace ingress-nginx --set rbac.create=true,controller.kind=DaemonSet,controller.service.type=ClusterIP,controller.hostNetwork=true stable/nginx-ingress
I run the deployment:
cat <<EOF | kubectl create -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-node
spec:
  selector:
    matchLabels:
      app: hello-node
  template:
    metadata:
      labels:
        app: hello-node
    spec:
      containers:
      - name: hello-node
        image: gcr.io/google-samples/node-hello:1.0
        ports:
        - containerPort: 8080
EOF
Then I create service:
cat <<EOF | kubectl create -f -
apiVersion: v1
kind: Service
metadata:
  name: hello-node
spec:
  ports:
  - port: 80
    targetPort: 8080
  selector:
    app: hello-node
EOF
and ingress resource:
cat <<EOF | kubectl create -f -
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
  name: hello-node-single-ingress
spec:
  backend:
    serviceName: hello-node
    servicePort: 80
EOF
I get the node external IP:
12:50 $ kubectl get nodes -o json | jq '.items[] | .status .addresses[] | select(.type=="ExternalIP") | .address'
"35.197.204.75"
Check if ingress is running:
12:50 $ kubectl get ing
NAME HOSTS ADDRESS PORTS AGE
hello-node-single-ingress * 35.197.204.75 80 8m
12:50 $ kubectl get pods --namespace ingress-nginx
NAME READY STATUS RESTARTS AGE
ingress-nginx-ingress-controller-7kqgz 1/1 Running 0 23m
ingress-nginx-ingress-default-backend-677b99f864-tg6db 1/1 Running 0 23m
12:50 $ kubectl get svc --namespace ingress-nginx
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
ingress-nginx-ingress-controller ClusterIP 10.43.250.102 <none> 80/TCP,443/TCP 24m
ingress-nginx-ingress-default-backend ClusterIP 10.43.255.43 <none> 80/TCP 24m
Then trying to connect from the external network:
curl 35.197.204.75
Unfortunately it times out
On Kubernetes Github there is a page regarding ingress-nginx (host-netork: true) setup:
https://kubernetes.github.io/ingress-nginx/deploy/baremetal/#via-the-host-network
which mentions:
"This approach does not leverage any Service object to expose the NGINX Ingress controller. If the ingress-nginx Service exists in the target cluster, it is recommended to delete it."
I've tried to follow that and delete ingress-nginx services:
kubectl delete svc --namespace ingress-nginx ingress-nginx-ingress-controller ingress-nginx-ingress-default-backend
but this doesn't help.
Any ideas how to set up the Ingress on the node's external IP? What am I doing wrong? The amount of confusion over running Ingress reliably without an LB overwhelms me. Any help much appreciated!
EDIT:
When another service exposing my deployment via a NodePort gets created:
cat <<EOF | kubectl create -f -
apiVersion: v1
kind: Service
metadata:
  name: hello-node2
spec:
  ports:
  - port: 80
    targetPort: 8080
  type: NodePort
  selector:
    app: hello-node
EOF
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
hello-node ClusterIP 10.47.246.91 <none> 80/TCP 2m
hello-node2 NodePort 10.47.248.51 <none> 80:31151/TCP 6s
I still can't access my service e.g. using: curl 35.197.204.75:31151.
However when I create 3rd service with LoadBalancer type:
cat <<EOF | kubectl create -f -
apiVersion: v1
kind: Service
metadata:
  name: hello-node3
spec:
  ports:
  - port: 80
    targetPort: 8080
  type: LoadBalancer
  selector:
    app: hello-node
EOF
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
hello-node ClusterIP 10.47.246.91 <none> 80/TCP 7m
hello-node2 NodePort 10.47.248.51 <none> 80:31151/TCP 4m
hello-node3 LoadBalancer 10.47.250.47 35.189.106.111 80:31367/TCP 56s
I can access my service using the external LB IP: 35.189.106.111.
The problem was missing firewall rules on GCP.
Found the answer: https://stackoverflow.com/a/42040506/2263395
Running:
gcloud compute firewall-rules create myservice --allow tcp:80,tcp:30301
Where 80 is the ingress port and 30301 is the NodePort. In production you would probably use just the ingress port.
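To confirm the rule was created and to review what is already open, something like this may help (a sketch; the rule name myservice comes from the command above):
gcloud compute firewall-rules list --filter="name=myservice"
gcloud compute firewall-rules describe myservice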

Google container connect to service

I'm following a course on PluralSight where the course author puts a docker image onto kubernetes and then access it via his browser. I'm trying to replicate what he does but I cannot manage to reach the website. I believe I might be connecting to the wrong IP.
I have a ReplicationController that's running 10 pods:
rc.yml
apiVersion: v1
kind: ReplicationController
metadata:
  name: hello-rc
spec:
  replicas: 10
  selector:
    app: hello-world
  template:
    metadata:
      labels:
        app: hello-world
    spec:
      containers:
      - name: hello-pod
        image: nigelpoulton/pluralsight-docker-ci:latest
        ports:
        - containerPort: 8080
I then tried to expose the rc:
kubectl expose rc hello-rc --name=hello-svc --target-port=8080 --type=NodePort
$ kubectl get services
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
hello-svc 10.27.254.160 <nodes> 8080:30488/TCP 30s
kubernetes 10.27.240.1 <none> 443/TCP 1h
My Google container endpoint is 35.xxx.xx.xxx, and when running kubectl describe svc hello-svc the NodePort is 30488.
Thus I try to access the app at 35.xxx.xx.xxx:30488 but the site can’t be reached.
If you want to access your service via the NodePort, you need to open your firewall for that port (and that instance).
A better way is to create a service of type LoadBalancer (--type=LoadBalancer) and access it on the IP Google will give you.
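Both options sketched as commands, assuming the NodePort is 30488 as in the question (the allocated nodePort can change if the service is recreated), and with allow-hello-nodeport as an example firewall-rule name:
# Option 1: open the GCP firewall for the NodePort
gcloud compute firewall-rules create allow-hello-nodeport --allow tcp:30488
# Option 2: recreate the service with type LoadBalancer and wait for the external IP
kubectl delete svc hello-svc
kubectl expose rc hello-rc --name=hello-svc --target-port=8080 --type=LoadBalancer
kubectl get svc hello-svc -w
Once the LoadBalancer service shows an external IP, the app is reachable at http://<EXTERNAL-IP>:8080.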
Do not forget to delete the load balancer when you are done.