I am following a very simple tutorial that spawns a pod with an HTTP endpoint and a service to expose that app on Kubernetes.
The setup is very simple:
app-pod.yml
apiVersion: v1
kind: Pod
metadata:
  name: hello-pod
  labels:
    app: web
spec:
  containers:
  - name: web-ctr
    image: nigelpoulton/getting-started-k8s:1.0
    ports:
    - containerPort: 8080
And the NodePort service:
apiVersion: v1
kind: Service
metadata:
  name: ps-nodeport
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 8080
    nodePort: 31111
    protocol: TCP
  selector:
    app: web
The service and pod seem to be healthy, but I can't reach the running app: localhost:31111 gives a "This site can't be reached" message.
I am new to this stuff, so any help will be appreciated.
In a kind cluster, by default, a NodePort may not be bound to localhost. Please check the following resources:
https://kind.sigs.k8s.io/docs/user/quick-start/#mapping-ports-to-the-host-machine
How to use NodePort with kind?
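For reference, a minimal sketch of a kind cluster config that maps the service's NodePort to the host, based on the documentation linked above (the cluster has to be created, or recreated, with this config, e.g. kind create cluster --config kind-config.yaml):
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  extraPortMappings:
  - containerPort: 31111  # your service's nodePort
    hostPort: 31111       # the port exposed on localhost
    protocol: TCP
After recreating the cluster this way, localhost:31111 should reach the service.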
The simplest way to access the service from localhost (as you are trying to do) would be to use
kubectl port-forward
e.g. the following command would work in your case, forwarding localhost:31111 to port 80 of the ps-nodeport service (which in turn targets the pod's 8080):
kubectl port-forward service/ps-nodeport 31111:80
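With the port-forward running, the app should respond locally, e.g.:
curl localhost:31111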
I'm trying to access my Golang microservice that is running in the Kubernetes cluster and has the following manifest:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: email-application-service
  namespace: email-namespace
spec:
  selector:
    matchLabels:
      run: internal-service
  template:
    metadata:
      labels:
        run: internal-service
    spec:
      containers:
      - name: email-service-application
        image: some_image
        ports:
        - containerPort: 8000
          hostPort: 8000
          protocol: TCP
        envFrom:
        - secretRef:
            name: project-secrets
        imagePullPolicy: IfNotPresent
To access this Deployment from outside the cluster I'm using a Service as well, and I've set up an external IP for test purposes, which is supposed to forward HTTP requests to port 8000, where my application is actually running:
apiVersion: v1
kind: Service
metadata:
  name: email-internal-service
  namespace: email-namespace
spec:
  type: ClusterIP
  externalIPs:
  - 192.168.0.10
  selector:
    run: internal-service
  ports:
  - name: http
    port: 8000
    targetPort: 8000
    protocol: TCP
The problem is that when I try to send a GET request from outside the cluster by executing curl -f http://192.168.0.10:8000/, it just hangs until the timeout.
I've checked the state of the pods, the logs of the application, the matching of the selector/template names in the Service and Deployment manifests, and the namespaces, but all of this is fine and working properly.
(There is also a secret config, but it deployed and is also working fine.)
Thanks...
Making reference to jordanm's solution: you want to put it back to ClusterIP and then use port-forward with kubectl -n email-namespace port-forward svc/email-internal-service 8000:8000. You will then be able to access the service via http://localhost:8000. You may also be interested in github.com/txn2/kubefwd.
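For the kubefwd option, a sketch (assuming kubefwd is installed; it needs sudo because it writes service entries to /etc/hosts):
sudo kubefwd svc -n email-namespace
# in another terminal, the service is then reachable by its cluster name:
curl http://email-internal-service:8000/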
I have a running pod that was created with the following pod-definition.yaml:
apiVersion: v1
kind: Pod
metadata:
  name: microservice-one-pod-name
  labels:
    app: microservice-one-app-label
    type: front-end
spec:
  containers:
  - name: microservice-one
    image: vismarkjuarez1994/microserviceone
    ports:
    - containerPort: 2019
I then created a Service using the following service-definition.yaml:
kind: Service
apiVersion: v1
metadata:
  name: microserviceone-service
spec:
  ports:
  - port: 30008
    targetPort: 2019
    protocol: TCP
  selector:
    app: microservice-one-app-label
  type: NodePort
I then ran kubectl describe node minikube to find the node IP I should be connecting to, which yielded:
Addresses:
  InternalIP:  192.168.49.2
  Hostname:    minikube
But I get no response when I run the following curl command:
curl 192.168.49.2:30008
The request also times out when I try to access 192.168.49.2:30008 from a browser.
The pod logs show that the container is up and running. Why can't I access my Service?
The problem is that you are trying to access your service at the port parameter, which is the internal port at which the service is exposed, even when using the NodePort type.
The parameter you were searching for is called nodePort, which can optionally be specified together with port and targetPort. Quoting the documentation:
By default and for convenience, the Kubernetes control plane will
allocate a port from a range (default: 30000-32767)
Since you didn't specify a nodePort, one in that range was automatically picked. You can check which one with:
kubectl get svc -o wide
And then access your service externally at that port.
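For illustration, the output looks something like this (the 31452 below is a made-up value; yours will differ):
$ kubectl get svc microserviceone-service
NAME                      TYPE       CLUSTER-IP    EXTERNAL-IP   PORT(S)           AGE
microserviceone-service   NodePort   10.96.45.12   <none>        30008:31452/TCP   5m
The allocated nodePort is the number after the colon in the PORT(S) column, so here the service would be reachable at 192.168.49.2:31452.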
As an alternative, you can change your service definition to be something like:
kind: Service
apiVersion: v1
metadata:
name: microserviceone-service
spec:
ports:
- port: 30008
targetPort: 2019
nodePort: 30008
protocol: TCP
selector:
app: microservice-one-app-label
type: NodePort
But keep in mind that you may need to delete your service and create it again in order to change the nodePort allocated.
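A sketch of that, assuming the file name from the question:
kubectl delete svc microserviceone-service
kubectl apply -f service-definition.yaml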
I think you missed the Port in your service:
apiVersion: v1
kind: Pod
metadata:
  name: microservice-one-pod-name
  labels:
    app: microservice-one-app-label
    type: front-end
spec:
  containers:
  - name: microservice-one
    image: vismarkjuarez1994/microserviceone
    ports:
    - containerPort: 2019
and your service should be like this:
kind: Service
apiVersion: v1
metadata:
  name: microserviceone-service
spec:
  ports:
  - port: 2019
    targetPort: 2019
    nodePort: 30008
    protocol: TCP
  selector:
    app: microservice-one-app-label
  type: NodePort
You can also access your app after enabling the Minikube ingress addon, if you want to try Ingress with Minikube:
minikube addons enable ingress
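With Minikube you can also ask for a ready-to-use URL for a NodePort service, which avoids looking up the node IP and port by hand:
minikube service microserviceone-service --url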
I've got this webserver config:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: webserver
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: webserver
    spec:
      containers:
      - name: webserver
        image: nginx:alpine
        ports:
        - containerPort: 80
        volumeMounts:
        - name: hostvol
          mountPath: /usr/share/nginx/html
      volumes:
      - name: hostvol
        hostPath:
          path: /home/docker/vol
and this web service config:
apiVersion: v1
kind: Service
metadata:
  name: web-service
  labels:
    run: web-service
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
  selector:
    app: webserver
I was expecting to be able to connect to the webserver via http://192.168.99.100:80 with this config, but Chrome gives me an ERR_CONNECTION_REFUSED.
I tried minikube service --url web-service, which gives http://192.168.99.100:30276; however, this also gets ERR_CONNECTION_REFUSED.
Any further suggestions?
UPDATE
I updated the port / targetPort to 80.
However, I now get:
ERR_CONNECTION_REFUSED for http://192.168.99.100:80/
and
an nginx 403 for http://192.168.99.100:31540/
In your service, you can define a nodePort:
apiVersion: v1
kind: Service
metadata:
  name: web-service
  labels:
    run: web-service
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 80
    nodePort: 32700
    protocol: TCP
  selector:
    app: webserver
Now, you will be able to access it at http://<node-ip>:32700.
Be careful with port 80. Ideally, you would have an nginx ingress controller running on port 80 and all traffic would be routed through it. Using port 80 as the nodePort will mess up your deployment.
In your service, you did not specify a targetPort, so the service uses the port value as the targetPort; however, your container is listening on 80. Add targetPort: 80 to the service.
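For completeness, the relevant fragment of the service would then look like this (a sketch of just the ports section):
ports:
- port: 80
  targetPort: 80  # the port nginx listens on inside the container
  protocol: TCP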
The NodePort range is 30000-32767 by default. When you expose a service without specifying a nodePort, Kubernetes picks a random port from that range for you.
You can check the port with the command below:
kubectl get svc
In your case, the application is exposed on port 31540. Your issue seems to be the nginx configuration. Check the nginx logs.
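For example, assuming the Deployment name from your manifest:
kubectl logs deployment/webserver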
Please check the permissions of the mounted volume /home/docker/vol.
To fix this, you have to make the mounted directory and its contents publicly readable:
chmod -R o+rX /home/docker/vol
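Note that with Minikube the hostPath directory lives on the Minikube VM, not on your workstation, so the chmod has to be run there:
minikube ssh
chmod -R o+rX /home/docker/vol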
I'm running a simple Spring microservice project with Minikube. I have two projects: lucky-word-client (on port 8080) and lucky-word-server (on port 8888), but I can't get the client to communicate with the server. If lucky-word-client reaches lucky-word-server, the result is the word "Evviva"; otherwise the word is "Default". When I run minikube service lucky-client in the terminal, the output is Default instead of Evviva. I want the client to communicate with the server through DNS. I followed the guide https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/ but without success. How can I modify the service or pods to link the client and the server?
This is the pod of lucky-word-client:
apiVersion: v1
kind: Pod
metadata:
  name: lucky-client
  namespace: default
spec:
  containers:
  - image: lucky-client-img
    imagePullPolicy: IfNotPresent
    name: lucky-client
This is the pod of lucky-word-server:
apiVersion: v1
kind: Pod
metadata:
  name: lucky-server
  namespace: default
spec:
  containers:
  - image: lucky-server-img
    imagePullPolicy: IfNotPresent
    name: lucky-server
This is the service, through which lucky-word-client communicates with lucky-word-server:
kind: Service
apiVersion: v1
metadata:
  name: lucky-client
spec:
  selector:
    app: lucky-client
  ports:
  - protocol: TCP
    targetPort: 8080
    port: 80
  type: NodePort
If you want DNS-based service discovery to communicate with the server, follow the steps below:
Enable the kube-dns addon via the minikube addons enable kube-dns command. This enables service discovery in your Kubernetes cluster.
Make sure the kube-dns addon is enabled using the minikube addons list command.
In your client application code, change the server URL endpoint to the following: http://lucky-server:8888. "lucky-server" is the metadata.name property of your Kubernetes server service yaml definition.
Or, instead of lucky-server, you can use the fully qualified name lucky-server.default.svc.cluster.local in the server URL, since you are deploying your service in the default namespace.
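You can verify the DNS resolution from inside the client pod first (assuming the image ships nslookup, as busybox/alpine-based images usually do):
kubectl exec -it lucky-client -- nslookup lucky-server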
You need a service for your lucky-server:
kind: Service
apiVersion: v1
metadata:
  name: lucky-server
spec:
  selector:
    app: lucky-server
  ports:
  - protocol: TCP
    targetPort: 8888
    port: 80
  type: NodePort
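Note that, as posted, neither of your pods carries any labels, so the selectors in these services would match nothing. You would also need to label the pods, e.g. for the server pod:
metadata:
  name: lucky-server
  namespace: default
  labels:
    app: lucky-server
(and likewise app: lucky-client on the client pod).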
Currently, I have working K8s API pods in a K8s service that connects to a K8s Redis service, which has K8s pods of its own. The problem is that I am using NodePort, meaning BOTH are exposed to the public. I only want the API accessible to the public. The issue is that if I make the Redis service not public, the API can't see it. Is there a way to connect two services without exposing one to the public?
This is my API service yaml:
apiVersion: v1
kind: Service
metadata:
  name: app-api-svc
spec:
  selector:
    app: app-api
    tier: api
  ports:
  - protocol: TCP
    port: 5000
    nodePort: 30400
  type: NodePort
And this is my Redis service yaml:
apiVersion: v1
kind: Service
metadata:
  name: app-api-redis-svc
spec:
  selector:
    app: app-api-redis
    tier: celery_broker
  ports:
  - protocol: TCP
    port: 6379
    nodePort: 30537
  type: NodePort
First, configure the Redis service as a ClusterIP service. It will be private, visible only to other services. This can be done by removing the line with the type option.
apiVersion: v1
kind: Service
metadata:
  name: app-api-redis-svc
spec:
  selector:
    app: app-api-redis
    tier: celery_broker
  ports:
  - protocol: TCP
    port: 6379
    targetPort: [the port exposed by the Redis pod]
Finally, when you configure the API to reach Redis, the address should be app-api-redis-svc:6379.
And that's all. I have a lot of services communicating with each other this way. If this doesn't work for you, let me know in the comments.
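For instance, a common pattern is to pass that address to the API pods via environment variables in the API deployment (the variable names here are hypothetical; use whatever your API actually reads):
env:
- name: REDIS_HOST
  value: app-api-redis-svc
- name: REDIS_PORT
  value: "6379"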
I'm going to try to take the best from all answers and my own research and make a short guide that I hope you will find helpful:
1. Test connectivity
Connect to a different pod, e.g. a Ruby pod:
kubectl exec -it some-pod-name -- /bin/sh
Verify it can ping the service in question:
ping redis
Can it connect to the port? (I found telnet did not work for this)
nc -zv redis 6379
2. Verify your service selectors are correct
If your service config looks like this:
kind: Service
apiVersion: v1
metadata:
  name: redis
  labels:
    app: redis
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: redis
    role: master
    tier: backend
Verify that those selectors are also set on your pods:
kubectl get pods --selector=app=redis,role=master,tier=backend
Confirm that your service is tied to your pods by running:
$ kubectl describe service redis
Name:              redis
Namespace:         default
Labels:            app=redis
                   role=master
                   tier=backend
Annotations:       <none>
Selector:          app=redis,role=master,tier=backend
Type:              ClusterIP
IP:                10.47.250.121
Port:              <unset>  6379/TCP
Endpoints:         10.44.0.16:6379
Session Affinity:  None
Events:            <none>
Check the Endpoints field and confirm it's not blank.
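You can also query the endpoints directly:
kubectl get endpoints redis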
More info can be found at:
https://kubernetes.io/docs/tasks/debug-application-cluster/debug-service/#my-service-is-missing-endpoints
I'm not sure about Redis, but I have a similar application: a Java web application running as a pod, exposed to the outside world through a nodePort, plus a MongoDB container running as a pod.
In the webapp deployment specification, I map it to the MongoDB service through its name by passing the service name as a parameter. I have pasted the specification below; you can modify it accordingly. There should be a similar mapping parameter in Redis where you would have to use the service name, which is "mongoservice" in my case.
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: empappdepl
  labels:
    name: empapp
spec:
  replicas: 1
  template:
    metadata:
      labels:
        name: empapp
    spec:
      containers:
      - resources:
          limits:
            cpu: 0.2
        image: registryip:5000/employee:1
        imagePullPolicy: IfNotPresent
        name: wsemp
        ports:
        - containerPort: 8080
          name: wsemp
        command: ["java","-Dspring.data.mongodb.uri=mongodb://mongoservice/microservices", "-Djava.security.egd=file:/dev/./urandom","-jar","/app.jar"]
      imagePullSecrets:
      - name: myregistrykey
---
apiVersion: v1
kind: Service
metadata:
  labels:
    name: empwhatever
  name: empservice
spec:
  ports:
  - port: 8080
    targetPort: 8080
    protocol: TCP
    name: http
    nodePort: 30062
  type: NodePort
  selector:
    name: empapp
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: mongodbdepl
  labels:
    name: mongodb
spec:
  replicas: 1
  template:
    metadata:
      labels:
        name: mongodb
    spec:
      containers:
      - resources:
          limits:
            cpu: 0.3
        image: mongo
        imagePullPolicy: IfNotPresent
        name: mongodb
        ports:
        - containerPort: 27017
---
apiVersion: v1
kind: Service
metadata:
  labels:
    name: mongowhatever
  name: mongoservice
spec:
  ports:
  - port: 27017
    targetPort: 27017
    protocol: TCP
  selector:
    name: mongodb
Note that the MongoDB service doesn't need to be exposed as a NodePort.
Kubernetes enables inter-service communication by allowing services to reach other services using their service names.
In your scenario, the Redis service should be accessible from other services at app-api-redis-svc.default:6379. Here default is the namespace under which your service is running.
This internally routes your requests to your Redis pod on the target container port.
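A quick way to test this from inside the cluster is a throwaway pod with redis-cli (the pod name below is a hypothetical test name; the redis image ships the CLI):
kubectl run -it --rm redis-test --image=redis --restart=Never -- redis-cli -h app-api-redis-svc.default -p 6379 ping
A PONG reply confirms the service is reachable by name.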
Check out this link for the different service discovery options provided by Kubernetes.
Hope it helps!