I have a list of pods like so:
❯ kubectl get pods -l app=test-pod
NAME                               READY   STATUS    RESTARTS   AGE
test-deployment-674667c867-jhvg4   1/1     Running   0          14m
test-deployment-674667c867-ssx6h   1/1     Running   0          14m
test-deployment-674667c867-t4crn   1/1     Running   0          14m
I have a service
kubectl get services
NAMESPACE   NAME           TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)    AGE
default     test-service   ClusterIP   10.100.4.138   <none>        4000/TCP   15m
I perform a DNS query:
❯ kubectl exec -ti test-deployment-674667c867-jhvg4 -- /bin/bash
root@test-deployment-674667c867-jhvg4:/# busybox nslookup test-service
Server: 10.100.0.10
Address: 10.100.0.10:53
Name: test-service.default.svc.cluster.local
Address: 10.100.4.138
My config:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: test-pod
  template:
    metadata:
      labels:
        app: test-pod
    spec:
      containers:
      - name: python-http-server
        image: python:2.7
        command: ["/bin/bash"]
        args: ["-c", "echo \" Hello from $(hostname)\" > index.html; python -m SimpleHTTPServer 80"]
        ports:
        - name: http
          containerPort: 80
---
kind: Service
apiVersion: v1
metadata:
  name: test-service
spec:
  selector:
    app: test-pod
  ports:
  - protocol: TCP
    port: 4000
    targetPort: http
How can I instead get the IP addresses of all the pods via a DNS query?
Ideally, I would like to perform an nslookup of a single name and get back the IPs of all the pods.
You have to use a headless Service with selectors. A DNS lookup of a headless Service returns the IP addresses of the backing pods instead of a single cluster IP.
See here:
https://kubernetes.io/docs/concepts/services-networking/service/#headless-services
.spec.clusterIP must be "None"
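For the Deployment above, a minimal sketch of such a headless Service could look like this (same selector and target port as test-service; the name test-service-headless is just an illustration, not part of the original config):

apiVersion: v1
kind: Service
metadata:
  name: test-service-headless   # example name for illustration
spec:
  clusterIP: None               # "None" is what makes the Service headless
  selector:
    app: test-pod
  ports:
  - protocol: TCP
    port: 4000
    targetPort: http

With that in place, busybox nslookup test-service-headless should return one A record per ready pod (three addresses here) rather than the single ClusterIP.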
Related
I'm running a Cassandra cluster, which I want to access through a Service from a client running in the same cluster. After installing the Cassandra Helm chart, my pods and services look like this:
NAME              READY   STATUS    RESTARTS   AGE
pod/cassandra-0   1/1     Running   0          10m

NAME                         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                                        AGE
service/cassandra            ClusterIP   10.97.169.106   <none>        9042/TCP,9160/TCP,8080/TCP                     10m
service/cassandra-headless   ClusterIP   None            <none>        7000/TCP,7001/TCP,7199/TCP,9042/TCP,9160/TCP   10m
and I'm informed that Cassandra can be accessed at cassandra.default.svc.cluster.local:9042 from within the cluster.
After applying a ConfigMap, which looks like this:
apiVersion: v1
kind: ConfigMap
metadata:
  name: cassandra-configmap
data:
  database_address: cassandra.default.svc.cluster.local
and a DB client deployment, which looks like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cassandra-web
  labels:
    app: cassandra-web
spec:
  replicas: 1
  selector:
    matchLabels:
      app: cassandra-web
  template:
    metadata:
      labels:
        app: cassandra-web
    spec:
      containers:
      - name: cassandra-web
        image: metavige/cassandra-web
        ports:
        - containerPort: 9042
        env:
        - name: CASSANDRA_HOST
          valueFrom:
            configMapKeyRef:
              name: cassandra-configmap
              key: database_address
        - name: CASSANDRA_USER
          value: cassandra
        - name: CASSANDRA_PASSWORD
          value: somePassword
The problem is that my client can't translate the service name to an address and is unable to connect to the database, failing with IPAddr::InvalidAddressError: invalid address. If I replace cassandra.default.svc.cluster.local with 10.97.169.106, everything works, so there must be a problem with name resolution.
I tried a lookup from a busybox pod, which looks OK:
$ kubectl create -f https://k8s.io/examples/admin/dns/busybox.yaml
$ kubectl exec -ti busybox -- nslookup cassandra.default
Server: 10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local
Name: cassandra.default.svc.cluster.local
Address 1: 10.97.169.106 cassandra.default.svc.cluster.local
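A further check worth running is the same lookup from inside the cassandra-web pod itself, since its /etc/resolv.conf can differ from busybox's. A sketch, with the pod name as a placeholder and assuming the image ships getent (most glibc-based images do):

kubectl exec -it <cassandra-web-pod-name> -- cat /etc/resolv.conf
kubectl exec -it <cassandra-web-pod-name> -- getent hosts cassandra.default.svc.cluster.local

If getent resolves the name there, the issue is in how the client parses the value rather than in cluster DNS.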
I deployed two simple HTTP pods and a Service for them, and it works fine: I can curl the service IP or the service name, and the Service round-robins across the pods.
But after I delete one pod, Kubernetes creates a new one to replace it. When I curl the Service, the new pod never responds; only the remaining old one is OK.
Why doesn't Kubernetes add the new pod to the Service so that I can curl the service IP or service name as before?
Deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mvn-dp
spec:
  selector:
    matchLabels:
      run: mvn-demo
  replicas: 2
  template:
    metadata:
      labels:
        run: mvn-demo
    spec:
      containers:
      - name: mvndemo
        image: 192.168.0.193:59999/mvndemo
        ports:
        - containerPort: 8080
Service:
apiVersion: v1
kind: Service
metadata:
  name: mvn-svc
  labels:
    run: mvn-demo
spec:
  ports:
  - port: 8080
    protocol: TCP
  #type: NodePort
  selector:
    run: mvn-demo
kdes svc mvn-svc
Name:              mvn-svc
Namespace:         default
Labels:            run=mvn-demo
Annotations:
Selector:          run=mvn-demo
Type:              ClusterIP
IP:                10.97.21.218
Port:              <unset>  8080/TCP
TargetPort:        8080/TCP
Endpoints:         100.101.153.220:8080,100.79.233.220:8080
Session Affinity:  None
Events:            <none>
kpod
NAME                     READY   STATUS    RESTARTS   AGE   IP                NODE      NOMINATED NODE   READINESS GATES
mvn-dp-8f59c694f-2mwq8   1/1     Running   0          81m   100.79.233.220    worker2   <none>           <none>
mvn-dp-8f59c694f-xmt6m   1/1     Running   0          87m   100.101.153.220   worker3   <none>           <none>
[root@master1 k8s-yaml]# curl http://10.97.21.218:8080
Hello Docker World, from: mvn-dp-8f59c694f-xmt6m
[root@master1 k8s-yaml]# curl http://10.97.21.218:8080
curl: (7) Failed connect to 10.97.21.218:8080; Connection timed out
As you can see, mvn-dp-8f59c694f-2mwq8 is newer than the other pod, because I deleted one pod and Kubernetes replaced it with this new one.
Set a label on your Deployment metadata:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mvn-dp
  labels:
    run: mvn-demo
it will work
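To confirm that the Service now sees the replacement pod, check its endpoints; the new pod's IP should be listed alongside the old one:

kubectl get endpoints mvn-svc
kubectl describe svc mvn-svc

If the new pod's IP is missing from Endpoints, its labels don't match the Service selector (or its readiness check is failing), which is exactly the symptom behind the curl timeout above.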
I'm doing a deployment on GKE, and when I try to access the page I get the message
ERR_CONNECTION_REFUSED
I have defined a LoadBalancer service for the deployment, and the configuration is as follows.
This is the .yaml for the deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: bonsai-onboarding
spec:
  selector:
    matchLabels:
      app: bonsai-onboarding
  replicas: 2
  template:
    metadata:
      labels:
        app: bonsai-onboarding
    spec:
      containers:
      - name: bonsai-onboarding
        image: "eu.gcr.io/diaphanum/onboarding-iocash-master_web:v1"
        ports:
        - containerPort: 3000
This is the service .yaml file.
apiVersion: v1
kind: Service
metadata:
  name: lb-onboarding
spec:
  type: LoadBalancer
  selector:
    app: bonsai-onboarding
  ports:
  - protocol: TCP
    port: 3000
    targetPort: 3000
This is working fine, and all is green in GKE :)
kubectl get pods,svc
NAME                                     READY   STATUS    RESTARTS   AGE
pod/bonsai-onboarding-8586b9b699-flhbn   1/1     Running   0          3h23m
pod/bonsai-onboarding-8586b9b699-p9sn9   1/1     Running   0          3h23m

NAME                    TYPE           CLUSTER-IP    EXTERNAL-IP   PORT(S)          AGE
service/kubernetes      ClusterIP      XX.xx.yy.YY   <none>        443/TCP          29d
service/lb-onboarding   LoadBalancer   XX.xx.yy.YY   XX.xx.yy.YY   3000:32618/TCP   3h
Then when I tried to connect, the error was ERR_CONNECTION_REFUSED.
I think it is a network issue, because I ran the following tests from my local machine:
Ping [load balancer IP] ---> Correct
Telnet [Load Balancer IP] 3000 ---> Correct
From Cloud Shell I forwarded port 3000 to 8080, and from another Cloud Shell a curl to http://localhost:8080 worked fine.
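For reference, that port-forward check was roughly the following (a sketch; the resource name is taken from the Service above):

kubectl port-forward svc/lb-onboarding 8080:3000
curl http://localhost:8080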
Any idea about the problem?
Thanks in advance
I've changed your deployment a little bit to check it on my cluster, because your image was unreachable:
deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: bonsai-onboarding
spec:
  selector:
    matchLabels:
      app: bonsai-onboarding
  replicas: 2
  template:
    metadata:
      labels:
        app: bonsai-onboarding
    spec:
      containers:
      - name: bonsai-onboarding
        image: nginx:latest
        ports:
        - containerPort: 80
service:
apiVersion: v1
kind: Service
metadata:
  name: lb-onboarding
spec:
  type: LoadBalancer
  selector:
    app: bonsai-onboarding
  ports:
  - protocol: TCP
    port: 3000
    targetPort: 80
and it works out of the box:
kubectl get pods,svc
NAME                                     READY   STATUS    RESTARTS   AGE
pod/bonsai-onboarding-7bdf584499-j2nv7   1/1     Running   0          6m58s
pod/bonsai-onboarding-7bdf584499-vc7kh   1/1     Running   0          6m58s

NAME                    TYPE           CLUSTER-IP       EXTERNAL-IP      PORT(S)          AGE
service/kubernetes      ClusterIP      10.XXX.XXX.1     <none>           443/TCP          8m35s
service/lb-onboarding   LoadBalancer   10.XXX.XXX.230   35.XXX.XXX.235   3000:31637/TCP   67s
and I'm able to reach 35.XXX.XXX.235:3000 from any IP:
Welcome to nginx!
...
Thank you for using nginx.
You can check if your app is reachable using this command:
nmap -Pn $(kubectl get svc lb-onboarding -o jsonpath='{.status.loadBalancer.ingress[*].ip}')
Maybe the cause of your "ERR_CONNECTION_REFUSED" problem is in the configuration of your image? I found no problem with your deployment and load balancer configuration.
Ensure containerPort is defined in the spec of the deployment/statefulset/pod and that the application is actually listening on that port. Also ensure your firewall rules are not blocking the node port.
gcloud compute firewall-rules create myservice --allow tcp:3000
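A quick way to verify that the app inside the container is really listening on the containerPort is to bypass the load balancer entirely (a sketch; the pod name is a placeholder, and ss may need to be swapped for netstat depending on what the image ships):

kubectl exec -it <bonsai-onboarding-pod> -- ss -tlnp        # is anything bound to :3000?
kubectl port-forward <bonsai-onboarding-pod> 3000:3000      # then curl http://localhost:3000

If the port-forward plus curl works but the load balancer path does not, the problem is in front of the pod (service or firewall); if it fails as well, the app itself is not listening on 3000.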
I created a dummy pod to test the DB connection using the following YAML.
pod.yaml
kind: Pod
apiVersion: v1
metadata:
  name: marks-dummy-pod
spec:
  containers:
  - name: marks-dummy-pod
    image: djtijare/ubuntuping:v1
    command: ["/bin/bash", "-ec", "while :; do echo '.'; sleep 5 ; done"]
  restartPolicy: Never
Dockerfile used:
FROM ubuntu
RUN apt-get update && apt-get install -y iputils-ping
CMD bash
I created the service as
postgresservice.yaml
kind: Service
apiVersion: v1
metadata:
  name: postgressvc
spec:
  type: ClusterIP
  ports:
  - port: 5432
    targetPort: 5432
and the Endpoints object for that service as
kind: Endpoints
apiVersion: v1
metadata:
  name: postgressvc
subsets:
- addresses:
  - ip: 172.31.6.149
  ports:
  - port: 5432
Then I ran ping 172.31.6.149 inside the pod (kubectl exec -it marks-dummy-pod bash), but it doesn't work (ping localhost works).
output of kubectl get pods,svc,ep -o wide
NAME                  READY   STATUS    RESTARTS   AGE   IP             NODE              NOMINATED NODE   READINESS GATES
pod/marks-dummy-pod   1/1     Running   0          43m   192.168.1.63   ip-172-31-11-87   <none>           <none>

NAME                  TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)    AGE   SELECTOR
service/postgressvc   ClusterIP   10.107.58.81   <none>        5432/TCP   33m   <none>

NAME                    ENDPOINTS           AGE
endpoints/postgressvc   172.31.6.149:5432   32m
Output after following the answer by P Ekambaram:
kubectl get pods,svc,ep -o wide gives
NAME                            READY   STATUS    RESTARTS   AGE   IP             NODE              NOMINATED NODE   READINESS GATES
pod/postgres-855696996d-w6h6c   1/1     Running   0          44s   192.168.1.66   ip-172-31-11-87   <none>           <none>

NAME               TYPE       CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE   SELECTOR
service/postgres   NodePort   10.110.203.204   <none>        5432:31076/TCP   44s   app=postgres

NAME                 ENDPOINTS           AGE
endpoints/postgres   192.168.1.66:5432   44s
So the problem was in my DNS pod in the kube-system namespace.
I just created a new Kubernetes setup and made sure DNS was working.
For the new setup, refer to my answer to another question:
How to start kubelet service?
The postgres pod is missing?
Did you create the Endpoints object, or was it auto-generated?
Share the pod definition YAML.
You shouldn't be creating the Endpoints object manually; that is wrong. Follow the deployment below for Postgres.
apiVersion: v1
kind: ConfigMap
metadata:
  name: postgres-config
  labels:
    app: postgres
data:
  POSTGRES_DB: postgres
  POSTGRES_USER: postgres
  POSTGRES_PASSWORD: example
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres
spec:
  replicas: 1
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
      - name: postgres
        image: postgres:11
        imagePullPolicy: Always
        ports:
        - containerPort: 5432
        envFrom:
        - configMapRef:
            name: postgres-config
        volumeMounts:
        - mountPath: /var/lib/postgresql/data
          name: postgres-data
      volumes:
      - name: postgres-data
        emptyDir: {}
---
apiVersion: v1
kind: Service
metadata:
  name: postgres
  labels:
    app: postgres
spec:
  type: NodePort
  ports:
  - port: 5432
  selector:
    app: postgres
Undeploy the postgres service and endpoint and deploy the above YAML.
It should work.
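Once the postgres pod is Running, a quick connectivity check from inside the cluster could look like this (a sketch using a throwaway client pod; the password example comes from the ConfigMap above):

kubectl run pg-client --rm -it --image=postgres:11 -- psql -h postgres -U postgres -d postgres

If service DNS works, psql prompts for the password and connects; if name resolution is the problem, it fails with a "could not translate host name" error instead.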
Why is the NODE IP prefixed with ip-?
You should create a deployment for your database and then a service that targets that deployment, and then test using the service. Why ping with the IP?
I use CentOS 7.
My Pod:
apiVersion: v1
kind: Pod
metadata:
  name: tomcat
spec:
  containers:
  - image: ec2-73-99-254-8.eu-central-1.compute.amazonaws.com:5000/tom
    name: tomcat
    command: ["sh","-c","/opt/tomcat/bin/deploy-and-run.sh"]
    volumeMounts:
    - mountPath: /maven
      name: app-volume
    ports:
    - containerPort: 8080
  volumes:
  - name: app-volume
    hostPath:
      path: /maven
My Service:
apiVersion: v1
kind: Service
metadata:
  name: tomcat
spec:
  ports:
  - name: http
    port: 80
    targetPort: 8080
    protocol: TCP
  selector:
    name: tomcat
The services look like this:
# kubectl get svc
NAME         CLUSTER_IP      EXTERNAL_IP   PORT(S)   SELECTOR      AGE
kubernetes   10.254.0.1      <none>        443/TCP   <none>        14h
tomcat       10.254.206.26   <none>        80/TCP    name=tomcat   13h
And Pods:
# kubectl get pod
NAME     READY   STATUS    RESTARTS   AGE
tomcat   1/1     Running   0          13h
And when I run curl:
curl 10.254.206.26
curl: (56) Recv failure: Connection reset by peer
Kube-proxy logs at that moment show something like this:
kube-proxy[22273]: Couldn't find an endpoint for default/tomcat:http: missing service entry
kube-proxy[22273]: Failed to connect to balancer: missing service entry
But when I curl the pod IP address directly on port 8080, it works fine.
When I run kubectl get endpoints:
NAME         ENDPOINTS             AGE
kubernetes   195.234.109.11:6443   14h
tomcat       <none>                14h
The ENDPOINTS field showing <none> for tomcat in this output looks strange.
What's wrong?
Services work by matching labels, not names. You are attempting to match based on the name of your pod, but the pod has no labels at all. Try changing the metadata for your pod to
metadata:
  name: tomcat
  labels:
    name: tomcat
and see if that helps.
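Once the pod carries that label, the Service should pick it up; you can confirm with:

kubectl get endpoints tomcat

which should now list the pod's IP on port 8080 instead of <none>.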