Kubernetes pod can't access other pods exposed by a service

New to Kubernetes.
To build our testing environment, I'm trying to set up a PostgreSQL instance in Kubernetes that's accessible to other pods in the testing cluster.
The pod and service are both syntactically valid and running; both show up in the output of kubectl get pods / kubectl get svc. But when another pod tries to access the database, it times out.
Here's the specification of the pod:
# this defines the postgres server
apiVersion: v1
kind: Pod
metadata:
  name: postgres
  labels:
    app: postgres
spec:
  hostname: postgres
  restartPolicy: OnFailure
  containers:
  - name: postgres
    image: postgres:9.6.6
    imagePullPolicy: IfNotPresent
    ports:
    - containerPort: 5432
      protocol: TCP
And here is the definition of the service:
# this defines a "service" that makes the postgres server publicly visible
apiVersion: v1
kind: Service
metadata:
  name: postgres
spec:
  selector:
    app: postgres
  type: ClusterIP
  ports:
  - port: 5432
    protocol: TCP
I'm certain that something is wrong with at least one of those, but I'm not sufficiently familiar with Kubernetes to know which.
If it's relevant, we're running on Google Kubernetes Engine.
Help appreciated!
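For anyone reproducing this: a quick way to test connectivity from inside the cluster is a throwaway client pod that tries the Service's DNS name. The default namespace and the postgres user are assumptions here; adjust as needed.

# throwaway pod to test reachability of the Service by its DNS name
# (namespace and credentials are assumptions; psql will prompt if a password is set)
kubectl run psql-test --rm -it --image=postgres:9.6.6 --restart=Never -- \
  psql -h postgres.default.svc.cluster.local -p 5432 -U postgres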

Related

Cannot Access Application Deployment from Outside in Kubernetes

I'm trying to access my Golang microservice that is running in the Kubernetes cluster and has the following manifest:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: email-application-service
  namespace: email-namespace
spec:
  selector:
    matchLabels:
      run: internal-service
  template:
    metadata:
      labels:
        run: internal-service
    spec:
      containers:
      - name: email-service-application
        image: some_image
        ports:
        - containerPort: 8000
          hostPort: 8000
          protocol: TCP
        envFrom:
        - secretRef:
            name: project-secrets
        imagePullPolicy: IfNotPresent
To access this Deployment from outside of the cluster I'm using a Service as well, and I've set up an external IP for test purposes, which is supposed to forward HTTP requests to port 8000, where my application is actually running:
apiVersion: v1
kind: Service
metadata:
  name: email-internal-service
  namespace: email-namespace
spec:
  type: ClusterIP
  externalIPs:
  - 192.168.0.10
  selector:
    run: internal-service
  ports:
  - name: http
    port: 8000
    targetPort: 8000
    protocol: TCP
The problem is that when I try to send a GET request from outside the cluster by executing curl -f http://192.168.0.10:8000/, it just hangs until it times out.
I've checked the state of the pods, the application logs, the selector/label matching between the Service and Deployment manifests, and the namespaces, but all of that looks fine and works properly.
(There is also a secret config, but it deployed and is also working fine.)
Thanks...
Referencing jordanm's solution: put the Service back to a plain ClusterIP and then use port-forward with kubectl -n email-namespace port-forward svc/email-internal-service 8000:8000. You will then be able to access the service via http://localhost:8000. You may also be interested in github.com/txn2/kubefwd
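For example, with the manifests from the question applied as-is, the test sequence would look roughly like this:

# terminal 1: forward local port 8000 to the Service inside the cluster
kubectl -n email-namespace port-forward svc/email-internal-service 8000:8000

# terminal 2: the same GET request, now through the tunnel instead of the externalIP
curl -f http://localhost:8000/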

How can you connect to a database inside a k8s cluster that's behind a headless service?

Given a database that is part of a StatefulSet and sits behind a headless service, how can I use a local client (outside of the cluster) to access the database? Is it possible to create a separate Service that targets a specific pod by its stable ID?
There are multiple ways you can connect to this database service. You can use:
Port-forward: https://kubernetes.io/docs/tasks/access-application-cluster/port-forward-access-application-cluster/
Service of type LoadBalancer: https://kubernetes.io/docs/concepts/services-networking/service/#loadbalancer
Service of type NodePort: https://kubernetes.io/docs/concepts/services-networking/service/#nodeport
Example of a MySQL database running on K8s: https://kubernetes.io/docs/tasks/run-application/run-single-instance-stateful-application/
apiVersion: v1
kind: Service
metadata:
  name: mysql
spec:
  ports:
  - port: 3306
  selector:
    app: mysql
  clusterIP: None
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql
spec:
  selector:
    matchLabels:
      app: mysql
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - image: mysql:5.6
        name: mysql
        env:
        # Use secret in real usage
        - name: MYSQL_ROOT_PASSWORD
          value: password
        ports:
        - containerPort: 3306
          name: mysql
        volumeMounts:
        - name: mysql-persistent-storage
          mountPath: /var/lib/mysql
      volumes:
      - name: mysql-persistent-storage
        persistentVolumeClaim:
          claimName: mysql-pv-claim
The easiest way is to try port-forwarding:
kubectl port-forward -n <NAMESPACE Name> <POD name> 3306:3306
Using the above command you can create a proxy from your local machine into the K8s cluster and test against localhost:3306.
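For example, once that port-forward is running, a local MySQL client can connect through the tunnel (the root password comes from the manifest above):

# connect to the forwarded port on localhost
mysql -h 127.0.0.1 -P 3306 -u root -p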
This is not a method for production use cases; it's meant for debugging.
NodePort: exposes the port on the worker nodes' IPs, so if a worker gets removed during autoscaling, the IP may change over time.
I would recommend creating a new Service with the appropriate label selector and type LoadBalancer, as sketched below.
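As a rough sketch of that recommendation, and of the question's "target a specific pod by its stable id" part: assuming the database is a StatefulSet whose first pod is named mysql-0, a LoadBalancer Service can select that single pod via the statefulset.kubernetes.io/pod-name label that the StatefulSet controller sets automatically (the Service name here is made up):

apiVersion: v1
kind: Service
metadata:
  name: mysql-0-external      # hypothetical name
spec:
  type: LoadBalancer
  selector:
    statefulset.kubernetes.io/pod-name: mysql-0   # label added automatically to StatefulSet pods
  ports:
  - port: 3306
    targetPort: 3306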

Access redis by service name in Kubernetes

I created a Redis Deployment and Service in Kubernetes. I can access Redis from another pod by service IP, but I can't access it by service name.
Here is the Redis YAML:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-deployment
  namespace: myapp-ns
spec:
  replicas: 1
  selector:
    matchLabels:
      component: redis
  template:
    metadata:
      labels:
        component: redis
    spec:
      containers:
      - name: redis
        image: redis
        ports:
        - containerPort: 6379
---
apiVersion: v1
kind: Service
metadata:
  name: redis
  namespace: myapp-ns
spec:
  type: ClusterIP
  selector:
    component: redis
  ports:
  - port: 6379
    targetPort: 6379
I applied your file, and I am able to ping and telnet to the service both from within the same namespace and from a different namespace. To test this, I created pods in the same namespace and in a different namespace and installed telnet and ping. Then I exec'ed into them and did the below tests:
Same Namespace
kubectl exec -it <same-namespace-pod> /bin/bash
# ping redis
PING redis.<redis-namespace>.svc.cluster.local (172.20.211.84) 56(84) bytes of data.
# telnet redis 6379
Trying 172.20.211.84...
Connected to redis.<redis-namespace>.svc.cluster.local.
Escape character is '^]'.
Different Namespace
kubectl exec -it <different-namespace-pod> /bin/bash
# ping redis.<redis-namespace>.svc.cluster.local
PING redis.test.svc.cluster.local (172.20.211.84) 56(84) bytes of data.
# telnet redis.<redis-namespace>.svc.cluster.local 6379
Trying 172.20.211.84...
Connected to redis.<redis-namespace>.svc.cluster.local.
Escape character is '^]'.
If you are not able to do that due to DNS resolution issues, you could look at /etc/resolv.conf in your pod to make sure it has the search prefixes svc.cluster.local and cluster.local.
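For example, a quick check from inside the affected pod (the pod name is a placeholder):

kubectl exec -it <pod-name> -- cat /etc/resolv.conf
# typical output with the default cluster DNS setup (nameserver IP varies per cluster):
#   nameserver 10.96.0.10
#   search <your-namespace>.svc.cluster.local svc.cluster.local cluster.local
#   options ndots:5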
"I created a redis deployment and service in kubernetes, I can access redis from another pod by service ip, but I can't access it by service name"
Keep in mind that you can use the Service name on its own to access the backend Pods it exposes only from within the same namespace. Looking at your Deployment and Service YAML manifests, we can see they're deployed in the myapp-ns namespace. That means only a Pod deployed in this namespace can access your Service by its short name.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-deployment
  namespace: myapp-ns ### 👈
spec:
  replicas: 1
  selector:
    matchLabels:
      component: redis
  template:
    metadata:
      labels:
        component: redis
    spec:
      containers:
      - name: redis
        image: redis
        ports:
        - containerPort: 6379
---
apiVersion: v1
kind: Service
metadata:
  name: redis
  namespace: myapp-ns ### 👈
spec:
  type: ClusterIP
  selector:
    component: redis
  ports:
  - port: 6379
    targetPort: 6379
So if you deploy the following Pod:
apiVersion: v1
kind: Pod
metadata:
  name: redis-client
  namespace: myapp-ns ### 👈
spec:
  containers:
  - name: redis-client
    image: debian
you will be able to access your Service by its name, so the following commands (provided you've installed all required tools) will work:
redis-cli -h redis
telnet redis 6379
However, if your redis-client Pod is deployed to a completely different namespace, you will need to use the fully qualified domain name (FQDN), which is built according to the DNS naming rules for Services:
redis-cli -h redis.myapp-ns.svc.cluster.local
telnet redis.myapp-ns.svc.cluster.local 6379

How to access a pod by its hostname from within another pod of the same namespace?

Is there a way to access a pod by its hostname?
I have a pod with hostname my-pod-1 that needs to connect to another pod with hostname my-pod-2.
What is the best way to achieve this without services?
Based on your description, a headless Service is what you're looking for. With a headless Service whose name matches the pods' subdomain (or one that governs a StatefulSet), you can reach an individual pod at <pod-hostname>.<service-name>.<namespace>.svc.cluster.local.
Alternatively, you can access the pod by its IP address directly.
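A minimal sketch of that, using the names from the question and assuming the pods set both hostname and a subdomain that matches the headless Service's name (the app label, the port, and the default namespace are assumptions):

apiVersion: v1
kind: Service
metadata:
  name: my-pods              # pods below use this as their subdomain
spec:
  clusterIP: None            # headless: DNS resolves straight to pod IPs
  selector:
    app: my-pods
  ports:
  - port: 5672               # illustrative port
---
apiVersion: v1
kind: Pod
metadata:
  name: my-pod-1
  labels:
    app: my-pods
spec:
  hostname: my-pod-1
  subdomain: my-pods         # must match the headless Service's name
  containers:
  - name: app
    image: rabbitmq          # illustrative image

With this in place, my-pod-2 (configured the same way) can reach this pod at my-pod-1.my-pods.default.svc.cluster.local.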
In order to connect from one pod to another by name (and not by IP), replace the other pod's IP with the name of a Service that points to it.
For example, if my-pod-1 (172.17.0.2) is running RabbitMQ, and my-pod-2 (172.17.0.4) is running a RabbitMQ consumer (let's say in Python), then in my-pod-2, instead of running:
spec:
  containers:
  - name: consumer-container
    image: shlomimn/rabbitmq_consumer:latest
    args: ["/app/consumer.py","-p","5672","-s","172.17.0.2"]
Use:
spec:
  containers:
  - name: consumer-container
    image: shlomimn/rabbitmq_consumer:latest
    args: ["/app/consumer.py","-p","5672","-s","rabbitmq-svc"]
Where rabbitmq_service.yaml is:
apiVersion: v1
kind: Service
metadata:
  name: rabbitmq-svc
  namespace: rabbitmq-ns
spec:
  selector:
    app: rabbitmq
  ports:
  - name: rabbit-main
    protocol: TCP
    port: 5672
    targetPort: 5672
Shlomi

Using a pod without using the node IP

I have a Postgres pod running locally on a CoreOS VM.
I can access Postgres using the IP of the minion (node) it's on, but I'm trying to set things up so that I don't have to know exactly which minion the pod is on while still being able to use Postgres.
Here is my pod:
apiVersion: v1
kind: Pod
metadata:
  name: postgresql
  labels:
    role: postgres-client
spec:
  containers:
  - image: postgres:latest
    name: postgres
    ports:
    - containerPort: 5432
      hostPort: 5432
      name: pg-port
    volumeMounts:
    - name: nfs
      mountPath: /mnt
  volumes:
  - name: nfs
    nfs:
      server: nfs.server
      path: /
And here is a Service I tried to set up, but it doesn't seem correct:
apiVersion: v1
kind: Service
metadata:
  name: postgres-client
spec:
  ports:
  - port: 5432
    targetPort: 5432
  selector:
    app: postgres-client
I'm guessing that the selector for your service is not finding any matching backends.
Try changing app: postgres-client to role: postgres-client in the service definition (or vice versa in the pod definition above).
The label selector has to match both the key and value (i.e. role and postgres-client). See the Labels doc for more details.
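As a sketch of that first suggestion (keeping the pod's existing role: postgres-client label and fixing only the Service), the corrected definition would look like this:

apiVersion: v1
kind: Service
metadata:
  name: postgres-client
spec:
  ports:
  - port: 5432
    targetPort: 5432
  selector:
    role: postgres-client   # now matches the label on the pod above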