I can't curl nginx which I deployed on my k8s cluster

my deployment yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 3
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
my service yaml:
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  selector:
    app: nginx
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
Then I curl 10.104.239.140 (the Service's cluster IP), but I get an error: curl: (7) Failed connect to 10.104.239.140:80; Connection timed out
Can anyone tell me what's wrong?

Welcome to SO. The Service you've deployed is of type ClusterIP, which means it can only be accessed from within the cluster. In your case, it seems you're trying to access it from outside the cluster, which is why the connection timed out.
What you can do is deploy a Service of type NodePort or LoadBalancer to access it from outside the cluster. You can read more about the different Service types here.
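As a side note: if you only want to test the deployment from your workstation without changing the Service type, kubectl port-forward also works against a plain ClusterIP Service (the local port 8080 below is an arbitrary choice):
kubectl port-forward svc/nginx-service 8080:80
# in a second terminal:
curl http://127.0.0.1:8080
To actually expose the Service outside the cluster, though, change its type.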
Your Service would end up looking something like this:
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  type: NodePort  # or LoadBalancer (supported by cloud providers like AWS)
  selector:
    app: nginx
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
    # Optional field.
    # By default and for convenience, the Kubernetes control plane
    # will allocate a port from a range (default: 30000-32767).
    nodePort: 30001
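Once that's applied, a quick way to verify it end to end (assuming the manifest is saved as nginx-service.yaml) could be:
kubectl apply -f nginx-service.yaml
kubectl get svc nginx-service   # confirm PORT(S) shows 80:30001/TCP
kubectl get nodes -o wide       # pick a node's INTERNAL-IP or EXTERNAL-IP
curl http://<node-ip>:30001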

Related

Why can't I curl endpoint on GCP?

I am working my way through a Kubernetes tutorial using GKE, but it was written with Azure in mind, though it has been working OK so far.
The first part where it has not worked has been the exercises regarding CoreDNS, which I understand does not exist on GKE; is it kube-dns only there?
Is this why I can't get a pod endpoint with:
export PODIP=$(kubectl get endpoints hello-world-clusterip -o jsonpath='{ .subsets[].addresses[].ip}')
and then curl:
curl http://$PODIP:8080
My deployment is definitely on the right port:
ports:
- containerPort: 8080
And, in fact, the deployment for the tutorial is from a Google sample.
Is this to do with coreDNS or authorisation/needing a service account? What can I do to make the curl request work?
Deployment yaml is:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-world-customdns
spec:
  replicas: 3
  selector:
    matchLabels:
      app: hello-world-customdns
  template:
    metadata:
      labels:
        app: hello-world-customdns
    spec:
      containers:
      - name: hello-world
        image: gcr.io/google-samples/hello-app:1.0
        ports:
        - containerPort: 8080
      dnsPolicy: "None"
      dnsConfig:
        nameservers:
        - 9.9.9.9
---
apiVersion: v1
kind: Service
metadata:
  name: hello-world-customdns
spec:
  selector:
    app: hello-world-customdns
  ports:
  - port: 80
    protocol: TCP
    targetPort: 8080
To add some depth to what Gari commented: when exposing a service outside your cluster, that service must be configured as NodePort or LoadBalancer, since ClusterIP only exposes the Service on a cluster-internal IP, making it reachable only from within the cluster. Cloud Shell is a shell environment for managing resources hosted on Google Cloud, not part of the cluster, and that's why you're not getting any response. To change this, modify your yaml file as follows:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-world-customdns
spec:
  replicas: 3
  selector:
    matchLabels:
      app: hello-world-customdns
  template:
    metadata:
      labels:
        app: hello-world-customdns
    spec:
      containers:
      - name: hello-world
        image: gcr.io/google-samples/hello-app:1.0
        ports:
        - containerPort: 8080
      dnsPolicy: "None"
      dnsConfig:
        nameservers:
        - 9.9.9.9
---
apiVersion: v1
kind: Service
metadata:
  name: hello-world-customdns
spec:
  selector:
    app: hello-world-customdns
  type: NodePort
  ports:
  - port: 80
    protocol: TCP
    targetPort: 8080
After redeploying your service, you can run kubectl get all -o wide in Cloud Shell to validate that a NodePort-type service has been created with a node and target port.
To test your deployment, just send a curl request to the external IP of one of your nodes, including the node port that was assigned. The command should look something like:
curl <node_IP_address>:<node_port>
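One caveat worth adding, since this is GKE: the nodes sit behind a GCP firewall that does not open node ports by default, so a rule like the following may also be needed (the rule name here is just an example):
gcloud compute firewall-rules create allow-nodeports --allow tcp:30000-32767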

Not able to access the application using Load Balancer service in Azure Kubernetes Service

I have created a small nginx deployment of type LoadBalancer in Azure Kubernetes Service, but I was unable to access the application through the LoadBalancer service. Can someone provide a solution?
I have already updated the security group to allow all traffic, but no luck.
Do I need to update any other security group to access the application?
Please find the deployment file below.
cat nginx.yml
apiVersion: v1
kind: Service
metadata:
  name: nginx-kubernetes
spec:
  type: LoadBalancer
  ports:
  - port: 8080
    targetPort: 8080
  selector:
    app: hello-kubernetes
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-kubernetes
spec:
  replicas: 3
  selector:
    matchLabels:
      app: hello-kubernetes
  template:
    metadata:
      labels:
        app: hello-kubernetes
    spec:
      containers:
      - name: hello-kubernetes
        image: nginx:latest
        ports:
        - containerPort: 8080
The nginx container uses port 80 by default, and you are trying to connect to port 8080 where nothing is listening, thus getting connection refused.
Take a look at the nginx container Dockerfile here. What port do you see?
All you need to do to make it work is to change the target port, like the following (keeping type: LoadBalancer so the service stays externally reachable):
apiVersion: v1
kind: Service
metadata:
  name: nginx-kubernetes
spec:
  type: LoadBalancer
  ports:
  - port: 8080
    targetPort: 80
  selector:
    app: hello-kubernetes
Additionally, it would be nice to change the containerPort as follows:
spec:
  containers:
  - name: hello-kubernetes
    image: nginx:latest
    ports:
    - containerPort: 80
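After applying both changes, a rough way to verify the LoadBalancer on AKS (it can take a minute or two for Azure to assign the external IP) would be:
kubectl get service nginx-kubernetes --watch   # wait until EXTERNAL-IP is populated
curl http://<EXTERNAL-IP>:8080                 # 8080 is the service port above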

GKE NodePort service refusing incoming traffic

I have created a NodePort service in Google Cloud with the following specification. I have a firewall rule that allows traffic from 0.0.0.0/0 on port 30100, and I have verified in the Stackdriver logs that the traffic is allowed. But when I use curl or a browser to hit http://<node-ip>:30100, I don't get any response. I couldn't work out how to debug the issue either; can someone please advise?
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginxv1
  template:
    metadata:
      labels:
        app: nginxv1
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: nginxv1
  namespace: default
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 8080
    nodePort: 30100
  selector:
    app: nginxv1
  type: NodePort
Thanks.
You need to fix the container port: it must be 80, because the nginx container exposes that port, as you can see here.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginxv1
  template:
    metadata:
      labels:
        app: nginxv1
    spec:
      containers:
      - name: nginx
        image: nginx:latest
---
apiVersion: v1
kind: Service
metadata:
  name: nginxv1
  namespace: default
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
    nodePort: 30100
  selector:
    app: nginxv1
  type: NodePort
Also, you need to create a firewall rule to permit traffic to the node, as mentioned by @danyL in the comments:
gcloud compute firewall-rules create test-node-port --allow tcp:30100
Get the node IP with the command:
kubectl get nodes -o wide
And then try to access the nginx page with:
curl http://<NODEIP>:30100
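If it still doesn't answer, it may be worth confirming the Service works from inside the cluster before blaming the firewall; a throwaway curl pod is one way to do that (curlimages/curl is just a convenient small image):
kubectl run curl-test --rm -it --restart=Never --image=curlimages/curl -- \
    curl -s http://nginxv1.default.svc.cluster.local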

Can't access service in my local kubernetes cluster using NodePort

I have a manifest like the following:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: my-redis
spec:
  selector:
    matchLabels:
      app: my-redis
  replicas: 1
  template:
    metadata:
      labels:
        app: my-redis
    spec:
      containers:
      - name: my-redis
        image: redis
        ports:
        - name: redisport1
          containerPort: 6379
          hostPort: 6379
---
apiVersion: v1
kind: Service
metadata:
  name: redis-service
  labels:
    app: my-redis
spec:
  type: NodePort
  selector:
    name: my-redis
  ports:
  - name: redisport1
    port: 6379
    targetPort: 6379
    nodePort: 30036
    protocol: TCP
This is a sample that reproduces my problem. My intention here is to create a simple cluster that has a pod with a redis container in it, which should be exposed to my localhost. Still, kubectl get services gives me the following output:
redis-service NodePort 10.107.233.66 <none> 6379:30036/TCP 10s
If I swap NodePort for LoadBalancer, I get an external IP, but the port still doesn't work.
Can you help me identify why I'm failing to map port 6379 to my localhost, please?
Thanks,
In order to access your app through the node port, you have to use this URL:
http://{node ip}:{node port}.
If you are using minikube, the minikube IP is your node IP; you can retrieve it using the minikube ip command.
You can also use the minikube service redis-service --url command to get the URL for accessing your application through the node port.
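For example, with the manifest above (nodePort 30036), the check on minikube might look like:
minikube ip                           # node IP, e.g. 192.168.49.2
minikube service redis-service --url  # prints the full URL directly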
For anybody who's interested in the question: I found the problem. After Ijaz's fix, I also needed to change the selector to match the label on the pod; it was a typo on my end!
The pod has the app=my-redis label, but the Service selector had name=my-redis. Matching them fixed the access problem.
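A quick way to catch this kind of label/selector mismatch is to check whether the Service has any endpoints at all:
kubectl get endpoints redis-service   # <none> means the selector matches no pods
kubectl get pods --show-labels        # compare the pod labels against the Service selector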
You don't need the hostPort:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: my-redis
spec:
  selector:
    matchLabels:
      app: my-redis
  replicas: 1
  template:
    metadata:
      labels:
        app: my-redis
    spec:
      containers:
      - name: my-redis
        image: redis
        ports:
        - name: redisport1
          containerPort: 6379
---
apiVersion: v1
kind: Service
metadata:
  name: redis-service
  labels:
    app: my-redis
spec:
  type: NodePort
  selector:
    app: my-redis  # must match the pod label (see the follow-up above)
  ports:
  - name: redisport1
    port: 6379
    targetPort: 6379
    nodePort: 30036
    protocol: TCP
Now the nodePort 30036 can be used to access the service on any worker node.
If the cluster node is somewhere else and you want to make the port available on your local client, just use kubectl port-forward:
kubectl port-forward svc/redis-service 6379:6379
https://kubernetes.io/docs/tasks/access-application-cluster/port-forward-access-application-cluster/
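With the port-forward running, a quick sanity check from a second terminal (assuming redis-cli is installed locally) could be:
redis-cli -h 127.0.0.1 -p 6379 ping   # expect: PONG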
Notes:
On-prem installs of k8s don't support services of type LoadBalancer out of the box
ClusterIP is a virtual IP on the cluster's service network, reachable only from inside the cluster
Node IP is the IP of a machine that is running the k8s cluster

Kubernetes (Minikube): link between client and server

I'm running a simple Spring microservice project with Minikube. I have two projects: lucky-word-client (on port 8080) and lucky-word-server (on port 8888), but the client can't communicate with the server. If lucky-word-client reaches lucky-word-server, the result is the word "Evviva"; otherwise the word is "Default". When I run minikube service lucky-client in the terminal, the output is Default instead of Evviva.
This is the Dockerfile of lucky-word-server:
FROM frolvlad/alpine-oraclejdk8
ADD build/libs/common-config-server-0.0.1-SNAPSHOT.jar common-config-server.jar
EXPOSE 8888
ENTRYPOINT ["/usr/bin/java", "-Xmx128m", "-Xms128m"]
CMD ["-jar", "common-config-server.jar"]
This is the Dockerfile of lucky-word-client:
FROM frolvlad/alpine-oraclejdk8
ADD build/libs/lucky-word-client-0.0.1-SNAPSHOT.jar lucky-word-client.jar
EXPOSE 8080
ENTRYPOINT ["/usr/bin/java", "-Xmx128m", "-Xms128m"]
CMD ["-jar", "-Dspring.profiles.active=italian", "lucky-word-client.jar"]
This is the deployment of lucky-word-server:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: lucky-server
spec:
  selector:
    matchLabels:
      app: lucky-server
  replicas: 1
  template:
    metadata:
      labels:
        app: lucky-server
    spec:
      containers:
      - name: lucky-server
        image: lucky-server-img
        imagePullPolicy: Never
        ports:
        - containerPort: 8888
This is the service of lucky-word-server:
kind: Service
apiVersion: v1
metadata:
  name: lucky-server
spec:
  selector:
    app: lucky-server
  ports:
  - protocol: TCP
    port: 8888
  type: NodePort
This is the deployment of lucky-word-client:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: lucky-client
spec:
  selector:
    matchLabels:
      app: lucky-client
  replicas: 1
  template:
    metadata:
      labels:
        app: lucky-client
    spec:
      containers:
      - name: lucky-client
        image: lucky-client-img
        imagePullPolicy: Never
        ports:
        - containerPort: 8080
This is the service of lucky-word-client:
kind: Service
apiVersion: v1
metadata:
  name: lucky-client
spec:
  selector:
    app: lucky-client
  ports:
  - protocol: TCP
    port: 8080
  type: NodePort
As @suren stated, you should specify the target port in the service definition.
You should also change the endpoint URL that the client uses to call the server so that it reflects the minikube host IP. There are a couple of ways to achieve that; the naive method would be as follows.
Change your Kubernetes service for the server to have a static nodePort, as follows (note the explicit targetPort, since the server container listens on 8888):
kind: Service
apiVersion: v1
metadata:
  name: lucky-server
spec:
  selector:
    app: lucky-server
  ports:
  - protocol: TCP
    port: 8080
    targetPort: 8888  # the server container listens on 8888
    nodePort: 32002
  type: NodePort
And in your client code, just change the endpoint of the server as follows:
http://{minikube_host_ip}:32002, replacing {minikube_host_ip} with the IP address of the minikube host.
But if you don't want to hard-code the minikube IP, you can inject it as an environment variable in your Kubernetes deployment script, and have your application read that environment variable.
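A minimal sketch of that injection in the client Deployment (the variable name LUCKY_SERVER_URL is hypothetical; the Spring client would have to read it instead of a hard-coded URL):
spec:
  containers:
  - name: lucky-client
    image: lucky-client-img
    imagePullPolicy: Never
    ports:
    - containerPort: 8080
    env:
    - name: LUCKY_SERVER_URL                 # hypothetical variable name
      value: "http://192.168.99.100:32002"   # replace with your minikube ip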
Your services don't specify targetPort; when it differs from the container port, the requests end up in the wrong place, so spell it out explicitly. It should look like this:
kind: Service
apiVersion: v1
metadata:
  name: lucky-server
spec:
  selector:
    app: lucky-server
  ports:
  - protocol: TCP
    targetPort: 8888  # this is your container port, where the requests are sent
    port: 8888        # this is the service port; it is reachable at svc-ip:8888
  type: NodePort
You should do the same with the other service. Also double-check which service port you are calling: the services are on 8888 and 8080, and you might be hitting them on port 80.
There might be more issues, but for now these for sure cause a problem.
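For completeness, the analogous change for the client Service would look something like this:
kind: Service
apiVersion: v1
metadata:
  name: lucky-client
spec:
  selector:
    app: lucky-client
  ports:
  - protocol: TCP
    targetPort: 8080  # lucky-word-client's container port
    port: 8080
  type: NodePort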