I have a single-node k8s cluster with 2 web applications running on 2 NGINX k8s pods.
nginx-deployment1 --> WEBAPP1 --> nginx-svc-app1 --> <K8s_controller_IP>:30080/webapp1
nginx-deployment2 --> WEBAPP2 --> nginx-svc-app2 --> <K8s_controller_IP>:30081/webapp2
It connects only on the respective NodePort IP, but not on <K8s_controller_IP>:30080/webapp1 or <K8s_controller_IP>:30081/webapp2. Could you please help me understand what I am missing?
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment1
  labels:
    name: nginx-app1
spec:
  replicas: 3
  selector:
    matchLabels:
      name: nginx-app1
  template:
    metadata:
      labels:
        name: nginx-app1
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-svc-app1
  labels:
    name: nginx-svc-app1
spec:
  type: NodePort
  ports:
  - port: 80
    nodePort: 30080
    name: app1_port
  selector:
    name: nginx-app1
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment2
  labels:
    name: nginx-app2
spec:
  replicas: 3
  selector:
    matchLabels:
      name: nginx-app2
  template:
    metadata:
      labels:
        name: nginx-app2
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-svc-app2
  labels:
    name: nginx-svc-app2
spec:
  type: NodePort
  ports:
  - port: 80
    nodePort: 30081
    name: http
  selector:
    name: nginx-app2
Your port configuration is incomplete: you should set targetPort: 80 for both services. With a NodePort service, three port fields are in play. port is the port the Service itself listens on (its cluster-internal endpoint), and nodePort is the port opened on every node, so your spec means:
Receive incoming traffic on the Service endpoint at port 80.
Receive incoming traffic on node port 30080.
targetPort is the port that receives the traffic in the pods of the deployment. You are not specifying it; when omitted it defaults to the value of port, so set it explicitly to make the forwarding unambiguous. Add targetPort: 80 to both services.
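For illustration, the first Service with an explicit targetPort might look like this (a sketch based on your manifest; note that port names may only contain lowercase alphanumerics and '-', so app1_port needs renaming in any case):

apiVersion: v1
kind: Service
metadata:
  name: nginx-svc-app1
spec:
  type: NodePort
  ports:
  - port: 80          # port the Service listens on
    targetPort: 80    # port the nginx pods receive traffic on
    nodePort: 30080   # port opened on every node
    name: app1-port   # renamed: underscores are not valid in port names
  selector:
    name: nginx-app1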
Additionally, you're not running 2 pods: you're running 2 deployments, each with 3 pods (replicas) in it. The traffic you send to each service will be distributed by the service across all the pods that match the selector, i.e. all 3 of your pods. It's important to understand how this communication takes place.
Also, k8s_controller_ip is a misleading way of describing that IP address; use the term k8s node IP or k8s_master_node_ip instead. The IP address belongs to the node that is running the cluster, not to any controller.
I am trying to add IPs manually using an Endpoints object in YAML; however, my minikube cluster keeps the default endpoint IPs instead of the ones specified in the YAML file. Why?
YAML file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx-app
  template:
    metadata:
      labels:
        app: nginx-app
    spec:
      containers:
      - name: nginx-container
        image: nginx:1.16
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Endpoints
metadata:
  name: nginx-service
subsets:
- ports:
  - port: 80
  addresses:
  - ip: 172.17.0.11   # configured ip
  - ip: 172.17.0.12   # configured ip
  - ip: 172.17.0.13   # configured ip
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  type: NodePort
  selector:
    app: nginx-app
  ports:
  - protocol: TCP
    nodePort: 30464
    port: 90
    targetPort: 80
IPs in the endpoint output (note 172.17.0.6, 172.17.0.7 and 172.17.0.8, while I specified 172.17.0.11, 172.17.0.12 and 172.17.0.13 in the YAML):
/home/ravi/k8s>kubectl get endpoints
NAME            ENDPOINTS                                    AGE
kubernetes      192.168.49.2:8443                            36h
nginx-service   172.17.0.6:80,172.17.0.7:80,172.17.0.8:80   5m59s
I have tried replicating your issue and got the configured IP addresses in the endpoints. The difference might also be caused by the namespace you are deploying into, so check that as well.
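If you want to verify that, these read-only commands show the Endpoints object Kubernetes is actually serving (standard kubectl; nginx-service is the name from your manifest):

kubectl get endpoints nginx-service -o yaml   # full object, including its namespace
kubectl get endpoints --all-namespaces        # endpoints across every namespace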
my deployment yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 3
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
my service yaml:
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  selector:
    app: nginx
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
I then curl 10.104.239.140, but get an error: curl: (7) Failed connect to 10.104.239.140:80; Connection timed out.
Can anyone tell me what's wrong?
Welcome to SO. The service you've deployed is of type ClusterIP, which means it can only be accessed from within the cluster. In your case, it seems you're trying to access it from outside the cluster, and thus the connection timed out.
What you can do is deploy a service of type NodePort or LoadBalancer to access it from outside the cluster. You can read more about the different service types here.
Your service would end up looking something like this:
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  type: NodePort  # or LoadBalancer (supported by cloud providers like AWS)
  selector:
    app: nginx
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
    # Optional field
    # By default and for convenience, the Kubernetes control plane
    # will allocate a port from a range (default: 30000-32767)
    nodePort: 30001
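Once applied, a quick external test might look like this (a sketch; <node-ip> is a placeholder for one of your nodes' addresses):

kubectl get nodes -o wide      # the INTERNAL-IP / EXTERNAL-IP columns show node addresses
curl http://<node-ip>:30001    # reaches the Service through its nodePort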
I have the cluster setup below in AKS
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hpa-example
spec:
  replicas: 3
  selector:
    matchLabels:
      app: hpa-example
  template:
    metadata:
      labels:
        app: hpa-example
    spec:
      containers:
      - name: hpa-example
        image: gcr.io/google_containers/hpa-example
        ports:
        - name: http-port
          containerPort: 80
        resources:
          requests:
            cpu: 200m
---
apiVersion: v1
kind: Service
metadata:
  name: hpa-example
spec:
  ports:
  - port: 31001
    nodePort: 31001
    targetPort: http-port
    protocol: TCP
  selector:
    app: hpa-example
  type: NodePort
---
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: hpa-example-autoscaler
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: hpa-example
  minReplicas: 1
  maxReplicas: 10
  targetCPUUtilizationPercentage: 50
The idea of this is to test autoscaling.
I need to have this available externally, so I added:
apiVersion: v1
kind: Service
metadata:
  name: load-balancer-autoscaler
spec:
  selector:
    app: hpa-example
  ports:
  - port: 31001
    targetPort: 31001
  type: LoadBalancer
This now gives me an external IP; however, I cannot connect to it in Postman or via a browser.
What have I missed?
I have tried switching the ports between 80 and 31001, but that makes no difference.
As posted by user #David Maze:
What's the exact URL you're trying to connect to? What error do you get? (On the load-balancer-autoscaler service, the targetPort needs to match the name or number of a ports: in the pod, or you could just change the hpa-example service to type: LoadBalancer.)
I reproduced your scenario and found an issue in your configuration that prevents you from connecting to this Deployment.
From the perspective of the Deployment and the Service of type NodePort, everything seems to work okay.
The Service of type LoadBalancer, on the other hand, has a problem:
apiVersion: v1
kind: Service
metadata:
  name: load-balancer-autoscaler
spec:
  selector:
    app: hpa-example
  ports:
  - port: 31001
    targetPort: 31001 # <--- CULPRIT
  type: LoadBalancer
This definition sends your traffic directly to the pods on port 31001, whereas it should send it to port 80 (the port your app is responding on). You can change it with either:
targetPort: 80
targetPort: http-port
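Put together, the corrected Service might look like this (a sketch based on your manifest, using the named container port):

apiVersion: v1
kind: Service
metadata:
  name: load-balancer-autoscaler
spec:
  selector:
    app: hpa-example
  ports:
  - port: 31001           # port the load balancer listens on
    targetPort: http-port # resolves to containerPort 80 in the pods
  type: LoadBalancer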
You could also change the Service of type NodePort (hpa-example) to LoadBalancer, as pointed out by user #David Maze!
After changing this definition you will be able to run:
$ kubectl get service
NAME                       TYPE           CLUSTER-IP    EXTERNAL-IP   PORT(S)           AGE
load-balancer-autoscaler   LoadBalancer   10.4.32.146   AA.BB.CC.DD   31001:31497/TCP   9m41s
Then curl AA.BB.CC.DD:31001 will get the reply of OK!
I encourage you to look at these additional resources regarding Kubernetes Services:
Docs.microsoft.com: AKS: Network: Services
Stackoverflow.com: Questions: Difference between nodePort and LoadBalancer service types
Kubernetes.io: Docs: Concepts: Service
I have a manifest like the following:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: my-redis
spec:
  selector:
    matchLabels:
      app: my-redis
  replicas: 1
  template:
    metadata:
      labels:
        app: my-redis
    spec:
      containers:
      - name: my-redis
        image: redis
        ports:
        - name: redisport1
          containerPort: 6379
          hostPort: 6379
---
apiVersion: v1
kind: Service
metadata:
  name: redis-service
  labels:
    app: my-redis
spec:
  type: NodePort
  selector:
    name: my-redis
  ports:
  - name: redisport1
    port: 6379
    targetPort: 6379
    nodePort: 30036
    protocol: TCP
This is a sample that reproduces my problem. My intention is to create a simple cluster that has a pod with a redis container in it, exposed to my localhost. Still, get services gives me the following output:
redis-service   NodePort   10.107.233.66   <none>   6379:30036/TCP   10s
If I swap NodePort for LoadBalancer, I get an external IP, but the port still doesn't work.
Can you help me identify why I'm failing to map port 6379 to my localhost, please?
Thanks,
In order to access your app through the node port, you have to use this URL:
http://{node ip}:{node port}
If you are using minikube, the minikube IP is your node IP; you can retrieve it using the minikube ip command.
You can also use the minikube service redis-service --url command to get the URL for accessing your application through the node port.
For anybody who's interested in the question, I found the problem. After Ijaz's fix, I also needed to change the Service selector to match the label on the pod; it was a typo on my end!
The pod has the app=my-redis label, but the Service selector had name=my-redis. Matching them fixed the access problem.
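For reference, the fixed part of the Service is just the selector (a sketch; it must match the app: my-redis label from the pod template):

spec:
  type: NodePort
  selector:
    app: my-redis   # was name: my-redis, which matched no pods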
You don't need the hostPort:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: my-redis
spec:
  selector:
    matchLabels:
      app: my-redis
  replicas: 1
  template:
    metadata:
      labels:
        app: my-redis
    spec:
      containers:
      - name: my-redis
        image: redis
        ports:
        - name: redisport1
          containerPort: 6379
---
apiVersion: v1
kind: Service
metadata:
  name: redis-service
  labels:
    app: my-redis
spec:
  type: NodePort
  selector:
    name: my-redis
  ports:
  - name: redisport1
    port: 6379
    targetPort: 6379
    nodePort: 30036
    protocol: TCP
Now nodePort 30036 can be used to access the service on any worker node.
If the cluster node is somewhere else and you want to make the port available on your local client, just use kubectl port-forward:
kubectl port-forward svc/redis-service 6379:6379
https://kubernetes.io/docs/tasks/access-application-cluster/port-forward-access-application-cluster/
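With the port-forward running, a local Redis client should be able to connect (a sketch, assuming redis-cli is installed on your machine):

redis-cli -h 127.0.0.1 -p 6379 ping   # expected reply: PONG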
Notes:
On-prem installs of k8s don't support the LoadBalancer service type out of the box.
A ClusterIP is a virtual IP on the cluster's service network, reachable only from inside the cluster.
A Node IP is the IP of a machine that is running the k8s cluster.
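To see those node IPs on your own cluster (standard kubectl):

kubectl get nodes -o wide   # the INTERNAL-IP / EXTERNAL-IP columns list the node addresses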
Currently, I have working K8s API pods in a K8s service that connects to a K8s Redis service, with K8s pods of its own. The problem is that I am using NodePort, meaning BOTH are exposed to the public. I only want the API accessible to the public. The issue is that if I make the Redis service non-public, the API can't see it. Is there a way to connect two Services without exposing one to the public?
This is my API service yaml:
apiVersion: v1
kind: Service
metadata:
  name: app-api-svc
spec:
  selector:
    app: app-api
    tier: api
  ports:
  - protocol: TCP
    port: 5000
    nodePort: 30400
  type: NodePort
And this is my Redis service yaml:
apiVersion: v1
kind: Service
metadata:
  name: app-api-redis-svc
spec:
  selector:
    app: app-api-redis
    tier: celery_broker
  ports:
  - protocol: TCP
    port: 6379
    nodePort: 30537
  type: NodePort
First, configure the Redis service as a ClusterIP service. It will be private, visible only to other services. This can be done by removing the line with the type option:
apiVersion: v1
kind: Service
metadata:
  name: app-api-redis-svc
spec:
  selector:
    app: app-api-redis
    tier: celery_broker
  ports:
  - protocol: TCP
    port: 6379
    targetPort: [the port exposed by the Redis pod]
Finally, when you configure the API to reach Redis, the address should be app-api-redis-svc:6379.
And that's all. I have a lot of services communicating with each other this way. If this doesn't work for you, let me know in the comments.
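For example, if your API reads its Redis address from environment variables (REDIS_HOST and REDIS_PORT here are hypothetical names; use whatever your API actually expects), the API Deployment's pod template could pass the service name like this:

containers:
- name: app-api
  image: your-api-image          # placeholder image
  env:
  - name: REDIS_HOST             # hypothetical variable name
    value: "app-api-redis-svc"   # the ClusterIP service name
  - name: REDIS_PORT
    value: "6379"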
I'm going to try to take the best from all answers and my own research and make a short guide that I hope you will find helpful:
1. Test connectivity
Connect to a different pod, e.g. a ruby pod:
kubectl exec -it some-pod-name -- /bin/sh
Verify it can ping the service in question:
ping redis
Can it connect to the port? (I found telnet did not work for this)
nc -zv redis 6379
2. Verify your service selectors are correct
If your service config looks like this:
kind: Service
apiVersion: v1
metadata:
name: redis
labels:
app: redis
role: master
tier: backend
spec:
ports:
- port: 6379
targetPort: 6379
selector:
app: redis
role: master
tier: backend
verify those selectors are also set on your pods:
kubectl get pods --selector=app=redis,role=master,tier=backend
Confirm that your service is tied to your pods by running:
$ kubectl describe service redis
Name:              redis
Namespace:         default
Labels:            app=redis
                   role=master
                   tier=backend
Annotations:       <none>
Selector:          app=redis,role=master,tier=backend
Type:              ClusterIP
IP:                10.47.250.121
Port:              <unset>  6379/TCP
Endpoints:         10.44.0.16:6379
Session Affinity:  None
Events:            <none>
Check the Endpoints: field and confirm it's not blank.
More info can be found at:
https://kubernetes.io/docs/tasks/debug-application-cluster/debug-service/#my-service-is-missing-endpoints
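A quick way to re-run that check later (assuming the Service is named redis, as above):

kubectl get endpoints redis   # an empty ENDPOINTS column means the selector matches no pods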
I'm not sure about redis, but I have a similar application: a Java web application running as a pod that is exposed to the outside world through a nodePort, with a mongodb container running as another pod.
In the webapp deployment specification, I map the app to the mongodb service by passing the service name as a parameter; I have pasted the specification below, which you can modify accordingly. There should be a similar mapping parameter for Redis, where you would use the service name, which is "mongoservice" in my case.
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: empappdepl
  labels:
    name: empapp
spec:
  replicas: 1
  template:
    metadata:
      labels:
        name: empapp
    spec:
      containers:
      - resources:
          limits:
            cpu: 0.2
        image: registryip:5000/employee:1
        imagePullPolicy: IfNotPresent
        name: wsemp
        ports:
        - containerPort: 8080
          name: wsemp
        command: ["java","-Dspring.data.mongodb.uri=mongodb://mongoservice/microservices", "-Djava.security.egd=file:/dev/./urandom","-jar","/app.jar"]
      imagePullSecrets:
      - name: myregistrykey
---
apiVersion: v1
kind: Service
metadata:
  labels:
    name: empwhatever
  name: empservice
spec:
  ports:
  - port: 8080
    targetPort: 8080
    protocol: TCP
    name: http
    nodePort: 30062
  type: NodePort
  selector:
    name: empapp
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: mongodbdepl
  labels:
    name: mongodb
spec:
  replicas: 1
  template:
    metadata:
      labels:
        name: mongodb
    spec:
      containers:
      - resources:
          limits:
            cpu: 0.3
        image: mongo
        imagePullPolicy: IfNotPresent
        name: mongodb
        ports:
        - containerPort: 27017
---
apiVersion: v1
kind: Service
metadata:
  labels:
    name: mongowhatever
  name: mongoservice
spec:
  ports:
  - port: 27017
    targetPort: 27017
    protocol: TCP
  selector:
    name: mongodb
Note that the mongodb service doesn't need to be exposed as a NodePort.
Kubernetes enables inter-service communication by letting services reach other services by their service name.
In your scenario, the redis service should be accessible from other services at
app-api-redis-svc.default:6379. Here default is the namespace your service is running under.
This internally routes your requests to your redis pod on the target container port.
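A quick in-cluster sanity check of that DNS name (a sketch, assuming nc is available in your API pod's image; <your-api-pod> is a placeholder):

kubectl exec -it <your-api-pod> -- nc -zv app-api-redis-svc.default 6379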
Check out this link for the different service discovery options provided by Kubernetes.
Hope it helps!