I created a redis deployment and service in kubernetes,
I can access redis from another pod by service ip, but I can't access it by service name
the redis yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-deployment
  namespace: myapp-ns
spec:
  replicas: 1
  selector:
    matchLabels:
      component: redis
  template:
    metadata:
      labels:
        component: redis
    spec:
      containers:
        - name: redis
          image: redis
          ports:
            - containerPort: 6379
---
apiVersion: v1
kind: Service
metadata:
  name: redis
  namespace: myapp-ns
spec:
  type: ClusterIP
  selector:
    component: redis
  ports:
    - port: 6379
      targetPort: 6379
I applied your file, and I am able to ping and telnet to the service both from within the same namespace and from a different namespace. To test this, I created pods in the same namespace and in a different namespace and installed telnet and ping. Then I exec'ed into them and did the below tests:
Same Namespace
kubectl exec -it <same-namespace-pod> -- /bin/bash
# ping redis
PING redis.<redis-namespace>.svc.cluster.local (172.20.211.84) 56(84) bytes of data.
# telnet redis 6379
Trying 172.20.211.84...
Connected to redis.<redis-namespace>.svc.cluster.local.
Escape character is '^]'.
Different Namespace
kubectl exec -it <different-namespace-pod> -- /bin/bash
# ping redis.<redis-namespace>.svc.cluster.local
PING redis.test.svc.cluster.local (172.20.211.84) 56(84) bytes of data.
# telnet redis.<redis-namespace>.svc.cluster.local 6379
Trying 172.20.211.84...
Connected to redis.<redis-namespace>.svc.cluster.local.
Escape character is '^]'.
If you are not able to do that due to DNS resolution issues, you could look at /etc/resolv.conf in your Pod to make sure it has the search prefixes svc.cluster.local and cluster.local.
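For reference, a Pod's /etc/resolv.conf on a typical cluster looks something like the below; the nameserver IP and the first entry in the search list will differ per cluster and per Pod namespace:

# cat /etc/resolv.conf
search <pod-namespace>.svc.cluster.local svc.cluster.local cluster.local
nameserver 10.96.0.10
options ndots:5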
I created a redis deployment and service in kubernetes, I can access redis from another pod by service ip, but I can't access it by service name
Keep in mind that you can use the Service's short name to access the backend Pods it exposes only within the same namespace. Looking at your Deployment and Service yaml manifests, we can see they're deployed within the myapp-ns namespace. It means that you can access your Service by its short name only from a Pod which is deployed within this namespace.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-deployment
  namespace: myapp-ns ### <--- note the namespace
spec:
  replicas: 1
  selector:
    matchLabels:
      component: redis
  template:
    metadata:
      labels:
        component: redis
    spec:
      containers:
        - name: redis
          image: redis
          ports:
            - containerPort: 6379
---
apiVersion: v1
kind: Service
metadata:
  name: redis
  namespace: myapp-ns ### <--- note the namespace
spec:
  type: ClusterIP
  selector:
    component: redis
  ports:
    - port: 6379
      targetPort: 6379
So if you deploy the following Pod:
apiVersion: v1
kind: Pod
metadata:
  name: redis-client
  namespace: myapp-ns ### <--- same namespace as the Service
spec:
  containers:
    - name: redis-client
      image: debian
      # keep the container running so you can exec into it
      command: ["sleep", "infinity"]
you will be able to access your Service by its name, so the following commands (provided you've installed all required tools) will work:
redis-cli -h redis
telnet redis 6379
However, if your redis-client Pod is deployed to a completely different namespace, you will need to use the fully qualified domain name (FQDN), which is built according to the rule described here:
redis-cli -h redis.myapp-ns.svc.cluster.local
telnet redis.myapp-ns.svc.cluster.local 6379
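If name resolution itself is in doubt, you can also check it directly from the client Pod (assuming a DNS lookup tool such as nslookup is available in the image, e.g. via the dnsutils package):

nslookup redis                               # short name, same namespace only
nslookup redis.myapp-ns.svc.cluster.local    # FQDN, works from any namespace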
Related
I am trying to access a Kubernetes NodePort service in the browser on macOS, running on minikube, but I can't.
This is the Service definition file:
apiVersion: v1
kind: Service
metadata:
name: myapp-service
spec:
type: NodePort
ports:
- port: 80
targetPort: 80
nodePort: 30004
selector:
app: myapp
And this is the Pod definition file:
apiVersion: v1
kind: Pod
metadata:
  name: nginx-2
  labels:
    env: production
    app: myapp
spec:
  containers:
    - name: nginx
      image: nginx
And this is the Deployment definition file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-deployment
  labels:
    tier: frontend
    app: myapp
spec:
  selector:
    matchLabels:
      env: production
  replicas: 6
  template:
    metadata:
      name: nginx-2
      labels:
        env: production
    spec:
      containers:
        - name: nginx
          image: nginx
I cannot access the NodePort service using this URL in the browser:
http://localhost:30004
Also, by entering the minikube ip instead of localhost in the above command, a timeout happens.
And finally, by running the below command:
minikube service myapp-service --url
A sample output like this is generated:
http://127.0.0.1:53751
❗ Because you are using a Docker driver on darwin, the terminal needs to be open to run it.
But then the below error is shown to the user:
The connection was reset
Update: the issue was that the label app: myapp was missing from the template section of the Deployment definition file.
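For completeness, the corrected template section would carry the label the Service selects on, roughly like this (only the app: myapp line is new):

  template:
    metadata:
      name: nginx-2
      labels:
        env: production
        app: myapp   # <- matches the Service selector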
Look, you are not exposing your localhost on the NodePort. You are exposing the NodePort on your Kubernetes cluster.
So you have to access it at http://<node-ip>:<nodePort>.
Think about what is localhost.
You have localhost on your Pc.
You have localhost on the VM (minikube node).
You have localhost in each container running inside your cluster.
If you want to use your PC's localhost to access a port inside a container in a Pod, you can do this:
kubectl port-forward svc/<serviceName> <localPort>:<servicePort>
For example:
kubectl port-forward svc/serviceName 80:80
This starts a port-forwarding session. As long as it is running you can access the service from your browser.
This is good only for testing.
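Applied to the Service in this question (myapp-service, whose service port is 80), that might look like:

kubectl port-forward svc/myapp-service 8080:80
# in another terminal, while the port-forward is running:
curl http://localhost:8080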
To access the NodePort, use:
<minikube-ip>:<NodePort>
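For example, on drivers where the node IP is directly reachable (the IP below is only an illustration; use whatever minikube ip prints on your machine):

minikube ip
# 192.168.49.2
curl http://192.168.49.2:30004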
I am working my way through a kubernetes tutorial using GKE, but it was written with Azure in mind - though it has been working ok so far.
The first part where it has not worked has been with exercises regarding coreDNS - which I understand does not exist on GKE - it's kubedns only?
Is this why I can't get a pod endpoint with:
export PODIP=$(kubectl get endpoints hello-world-clusterip -o jsonpath='{ .subsets[].addresses[].ip}')
and then curl:
curl http://$PODIP:8080
My deployment is definitely on the right port:
ports:
  - containerPort: 8080
And, in fact, the deployment for the tutorial is from a google sample.
Is this to do with coreDNS or authorisation/needing a service account? What can I do to make the curl request work?
Deployment yaml is:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-world-customdns
spec:
  replicas: 3
  selector:
    matchLabels:
      app: hello-world-customdns
  template:
    metadata:
      labels:
        app: hello-world-customdns
    spec:
      containers:
        - name: hello-world
          image: gcr.io/google-samples/hello-app:1.0
          ports:
            - containerPort: 8080
      dnsPolicy: "None"
      dnsConfig:
        nameservers:
          - 9.9.9.9
---
apiVersion: v1
kind: Service
metadata:
  name: hello-world-customdns
spec:
  selector:
    app: hello-world-customdns
  ports:
    - port: 80
      protocol: TCP
      targetPort: 8080
To give a deeper insight on what Gari comments: when exposing a service outside your cluster, the Service must be configured as NodePort or LoadBalancer, since ClusterIP only exposes the Service on a cluster-internal IP, making it reachable only from within the cluster. Cloud Shell is a shell environment for managing resources hosted on Google Cloud, not part of the cluster, and that's why you're not getting any response. To change this, you can change your yaml file with the following:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-world-customdns
spec:
  replicas: 3
  selector:
    matchLabels:
      app: hello-world-customdns
  template:
    metadata:
      labels:
        app: hello-world-customdns
    spec:
      containers:
        - name: hello-world
          image: gcr.io/google-samples/hello-app:1.0
          ports:
            - containerPort: 8080
      dnsPolicy: "None"
      dnsConfig:
        nameservers:
          - 9.9.9.9
---
apiVersion: v1
kind: Service
metadata:
  name: hello-world-customdns
spec:
  selector:
    app: hello-world-customdns
  type: NodePort
  ports:
    - port: 80
      protocol: TCP
      targetPort: 8080
After redeploying your service, you can run kubectl get all -o wide in Cloud Shell to validate that a NodePort-type Service has been created with a node port and target port.
To test your deployment, just send a curl request to the external IP of one of your nodes, including the node port that was assigned; the command should look something like:
curl <node_IP_address>:<Node_port>
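You can look up your nodes' external IPs and confirm the assigned node port with standard kubectl commands, for example:

kubectl get nodes -o wide               # EXTERNAL-IP column
kubectl get svc hello-world-customdns   # PORT(S) column shows 80:<node-port>/TCP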
I am trying to expose a deployment I made on minikube:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deployment-test
  labels:
    app: debian
spec:
  replicas: 1
  selector:
    matchLabels:
      app: debian
  strategy: {}
  template:
    metadata:
      labels:
        app: debian
    spec:
      containers:
        - image: agracia10/debian_bash:latest
          name: debian
          ports:
            - containerPort: 8006
          resources: {}
      restartPolicy: Always
status: {}
I decided to follow what is written here.
I try to expose the deployment using the following command:
kubectl expose pod deployment-test-8497d6f458-xxhgm --type=NodePort --port=8080 --target-port=80
but when I try to then access the service created by the expose command, using the url provided by
minikube service deployment-test-8497d6f458-xxhgm --url
it throws an error using packetsender to try and connect to the service:
packet sender log
I'm not really sure what the reason for this could be; I think it has something to do with what kubectl get services shows in the EXTERNAL-IP field. Also, when I retrieve the node IP using minikube ip it gives an address, but minikube service --url gives the 127.0.0.1 address. In any case, using either one does not work.
It's not working due to a port configuration mismatch.
Your deployment's container is running on 8006, but you have exposed 8080 and your target port is --target-port=80.
So that is why it's not working.
The ideal flow of traffic goes like:
Service (NodePort, ClusterIP or any other type) > Deployment > Pods
Below is an example Deployment and Service:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: blog-app-server-instance
  labels:
    app: blog-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: blog-app
  template:
    metadata:
      labels:
        app: blog-app
    spec:
      containers:
        - name: blog-app-server
          image: agracia10/debian_bash:latest
          ports:
            - containerPort: 8006
---
apiVersion: v1
kind: Service
metadata:
  name: blog-app-service
  labels:
    app: blog-app
spec:
  selector:
    app: blog-app
  type: NodePort
  ports:
    - port: 80
      nodePort: 31364
      targetPort: 8006
      protocol: TCP
      name: http
So the things I have changed are the image and the target port.
Once your NodePort Service is up and running, you send the request on port 80 or 31364,
and it will redirect the request internally to the target port, which is 8006 on the container.
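For example, assuming you are on minikube, a quick check against the node port from the Service above could be:

curl http://$(minikube ip):31364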
With this command you exposed your deployment on the wrong target port:
kubectl expose pod deployment-test-8497d6f458-xxhgm --type=NodePort --port=8080 --target-port=80
Ideally it should be 8006.
As far as I know, the simplest way to expose the deployment as a service is to run the command below; you don't expose the pod, you expose the deployment.
kubectl expose deployment deployment-test --type NodePort --port 80 --target-port 8006
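You can then check the generated Service and grab a reachable URL from minikube, for example:

kubectl get svc deployment-test
minikube service deployment-test --url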
My application has two Deployments, each with a Pod.
Can I create a Service to distribute load across these 2 Pods, which are part of different Deployments?
If so, how?
Yes, it is possible to achieve. A good explanation of how to do it can be found in the Kubernetes documentation. However, keep in mind that both Deployments should provide the same functionality, as the output should have the same format.
A Kubernetes Service is an abstraction which defines a logical set of Pods running somewhere in your cluster, that all provide the same functionality. When created, each Service is assigned a unique IP address (also called clusterIP). This address is tied to the lifespan of the Service, and will not change while the Service is alive. Pods can be configured to talk to the Service, and know that communication to the Service will be automatically load-balanced out to some pod that is a member of the Service.
Based on the example from the documentation.
1. nginx Deployment. Keep in mind that a Deployment can have more than one label.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  selector:
    matchLabels:
      run: nginx
      env: dev
  replicas: 2
  template:
    metadata:
      labels:
        run: nginx
        env: dev
    spec:
      containers:
        - name: nginx
          image: nginx
          ports:
            - containerPort: 80
2. nginx-second Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-second
spec:
  selector:
    matchLabels:
      run: nginx
      env: prod
  replicas: 2
  template:
    metadata:
      labels:
        run: nginx
        env: prod
    spec:
      containers:
        - name: nginx-second
          image: nginx
          ports:
            - containerPort: 80
Now, to pair Deployments with Services, you have to use a selector based on the Deployments' labels. Below you can find 2 Service YAMLs: nginx-service, which points to both Deployments, and nginx-service-1, which points only to the nginx-second Deployment.
## Both Deployments
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  ports:
    - port: 80
      protocol: TCP
  selector:
    run: nginx
---
### To nginx-second deployment
apiVersion: v1
kind: Service
metadata:
  name: nginx-service-1
spec:
  ports:
    - port: 80
      protocol: TCP
  selector:
    env: prod
You can verify that each Service binds to its Deployments by checking the endpoints.
$ kubectl get pods -l run=nginx -o yaml | grep podIP
podIP: 10.32.0.9
podIP: 10.32.2.10
podIP: 10.32.0.10
podIP: 10.32.2.11
$ kk get ep nginx-service
NAME ENDPOINTS AGE
nginx-service 10.32.0.10:80,10.32.0.9:80,10.32.2.10:80 + 1 more... 3m33s
$ kk get ep nginx-service-1
NAME ENDPOINTS AGE
nginx-service-1 10.32.0.10:80,10.32.2.11:80 3m36s
Yes, you can do that.
Add a common label key/value pair to both Deployments' Pod specs and use that common label as the selector in the Service definition.
With such a Service defined, requests will be load-balanced across all the matching Pods.
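A minimal sketch of that idea, using a hypothetical common label and Service name (adapt the names to your Deployments):

# in the Pod template metadata of both Deployments:
  labels:
    app: backend          # hypothetical common label
---
# Service selecting Pods from both Deployments:
apiVersion: v1
kind: Service
metadata:
  name: backend-service   # hypothetical name
spec:
  selector:
    app: backend
  ports:
    - port: 80
      protocol: TCP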
I have a manifest like the following:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: my-redis
spec:
  selector:
    matchLabels:
      app: my-redis
  replicas: 1
  template:
    metadata:
      labels:
        app: my-redis
    spec:
      containers:
        - name: my-redis
          image: redis
          ports:
            - name: redisport1
              containerPort: 6379
              hostPort: 6379
---
apiVersion: v1
kind: Service
metadata:
  name: redis-service
  labels:
    app: my-redis
spec:
  type: NodePort
  selector:
    name: my-redis
  ports:
    - name: redisport1
      port: 6379
      targetPort: 6379
      nodePort: 30036
      protocol: TCP
This is a sample that reproduces my problem. My intention here is to create a simple cluster that has a Pod with a redis container in it, and it should be exposed to my localhost. Still, kubectl get services gives me the following output:
redis-service NodePort 10.107.233.66 <none> 6379:30036/TCP 10s
If I swap NodePort with LoadBalancer, I get an external IP, but the port still doesn't work.
Can you help me identify why I'm failing to map port 6379 to my localhost, please?
Thanks,
In order to access your app through a NodePort, you have to use this URL:
http://{node ip}:{node port}
If you are using minikube, the minikube IP is the node IP. You can retrieve it using the minikube ip command.
You can also use the minikube service redis-service --url command to get the URL to access your application through the node port.
For anybody who's interested in the question, I found the problem. After Ijaz's fix, I also needed to change the selector to match the label in the Pod; it was a typo on my end!
The Pod has the app=my-redis label, but the Service selector had name=my-redis. Matching them fixed the access problem.
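In other words, the Service selector has to match the Pod template's label exactly; with the manifests above, the fix looks like:

  selector:
    app: my-redis   # was: name: my-redis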
You don't need the hostPort:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: my-redis
spec:
  selector:
    matchLabels:
      app: my-redis
  replicas: 1
  template:
    metadata:
      labels:
        app: my-redis
    spec:
      containers:
        - name: my-redis
          image: redis
          ports:
            - name: redisport1
              containerPort: 6379
---
apiVersion: v1
kind: Service
metadata:
  name: redis-service
  labels:
    app: my-redis
spec:
  type: NodePort
  selector:
    name: my-redis
  ports:
    - name: redisport1
      port: 6379
      targetPort: 6379
      nodePort: 30036
      protocol: TCP
Now the nodePort 30036 can be used to access the service on any worker node.
If the cluster node is somewhere else and you want to make the port available on your local client, then just do kubectl port-forward:
kubectl port-forward svc/redis-service 6379:6379
https://kubernetes.io/docs/tasks/access-application-cluster/port-forward-access-application-cluster/
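With the port-forward running, a quick check from your local machine (assuming redis-cli is installed locally) would be:

redis-cli -h 127.0.0.1 -p 6379 ping
# expected reply: PONG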
Notes:
- On-prem installs of k8s don't support Services of type LoadBalancer out of the box
- The ClusterIP is a virtual IP that is reachable only from inside the cluster
- The node IP is the IP of a machine that is running the k8s cluster