Docker for Mac Kubernetes load balancer NoHttpResponseException - kubernetes

I'm trying to run a simple load balancer service in front of a Deployment pod.
I've installed the Docker for Mac edge version.
The problem is that when I make a GET request to the exposed load balancer URL http://localhost:8081/api/v1/posts/health, the error I get is:
org.apache.http.NoHttpResponseException: localhost:8081 failed to respond
When doing:
k get services
I get:
Clearly, the service is running, but localhost:8081 fails to respond, and I can't figure out why.
My service resource:
---
apiVersion: v1
kind: Service
metadata:
  name: posts-api-svc
  # namespace: nginx-ingress
  labels:
    app: posts-api
    rel: beta
    env: dev
spec:
  type: LoadBalancer
  selector:
    app: posts-api
    rel: beta
    env: dev
  ports:
    - protocol: TCP
      port: 8081
My deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: posts-api-deployment
  # namespace: nginx-ingress
  labels:
    app: posts-api
    rel: beta
    env: dev
spec:
  replicas: 1
  selector:
    matchLabels:
      app: posts-api
      env: dev
      rel: beta
  template:
    metadata:
      labels:
        app: posts-api
        env: dev
        rel: beta
    spec:
      containers:
        - name: posts-api
          image: kimgysen/posts-api:latest
          ports:
            - containerPort: 8083
          livenessProbe:
            httpGet:
              path: /api/v1/posts/health
              port: 8083
            initialDelaySeconds: 120
            timeoutSeconds: 1
Should be a basic setup!
My deployment pod does not show any restarts; everything looks good.
Any advice welcome!
Edit:
When using port 31082, I get the error:
org.apache.http.conn.HttpHostConnectException: Connect to localhost:31082 [localhost/127.0.0.1] failed: Connection refused (Connection refused)
Notes:
There is no specific reason why I used port 8083; I tried NodePort first (with multiple services), and now a LoadBalancer. The next step will be Ingress, but that didn't work out for me the first time, so I'm going step by step.
I used port 8081 instead of port 80 because I read somewhere that on macOS port 80 is only to be used by the root user.

The Service port had to correspond to the Deployment's containerPort.
I can now access the API on localhost:8083.
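For reference, a corrected Service might look like the sketch below. This is an assumption based on the fix described: when targetPort is omitted it defaults to port, so the original Service forwarded port 8081 to pod port 8081, where nothing was listening.
apiVersion: v1
kind: Service
metadata:
  name: posts-api-svc
  labels:
    app: posts-api
    rel: beta
    env: dev
spec:
  type: LoadBalancer
  selector:
    app: posts-api
    rel: beta
    env: dev
  ports:
    - protocol: TCP
      port: 8083
      targetPort: 8083  # must match the Deployment's containerPort (8083)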

Related

Cannot Access Application Deployment from Outside in Kubernetes

I'm trying to access my Golang microservice that is running in the Kubernetes cluster and has the following manifest:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: email-application-service
  namespace: email-namespace
spec:
  selector:
    matchLabels:
      run: internal-service
  template:
    metadata:
      labels:
        run: internal-service
    spec:
      containers:
        - name: email-service-application
          image: some_image
          ports:
            - containerPort: 8000
              hostPort: 8000
              protocol: TCP
          envFrom:
            - secretRef:
                name: project-secrets
          imagePullPolicy: IfNotPresent
To access this Deployment from outside the cluster, I'm using a Service as well.
I've set up an external IP for test purposes, which is supposed to forward HTTP requests to port 8000, where my application is actually running:
apiVersion: v1
kind: Service
metadata:
  name: email-internal-service
  namespace: email-namespace
spec:
  type: ClusterIP
  externalIPs:
    - 192.168.0.10
  selector:
    run: internal-service
  ports:
    - name: http
      port: 8000
      targetPort: 8000
      protocol: TCP
The problem is that when I try to send a GET request from outside the cluster by executing curl -f http://192.168.0.10:8000/, it just hangs until the timeout.
I've checked the state of the pods, the application logs, and the matching of the selector/template labels between the Service and Deployment manifests, as well as the namespaces, but all of this is fine and working properly.
(There is also a secret config, but it is deployed and also working fine.)
Thanks...
Referencing jordanm's solution: you want to put the Service back to ClusterIP and then use port forwarding with kubectl -n email-namespace port-forward svc/email-internal-service 8000:8000. You will then be able to access the service via http://localhost:8000. You may also be interested in github.com/txn2/kubefwd.

Unable to connect to gRPC server in Kubernetes cluster, but can connect when I port-forward [Connection refused]

Been stuck on this error for the past few days!
I have an HTTP server that is meant to connect to the gRPC server through the client. It works fine on my local machine when I start the gRPC server and then start my HTTP server. However, when I try to deploy it in a cluster, the HTTP server is unable to connect, with the error message: error receiving stream from timer rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing dial tcp 10.109.237.114:5996: connect: connection refused"
What I find particularly weird is that if I port-forward the gRPC server from the cluster, my local HTTP server connects to it just fine. Inspecting the connection within the cluster, I can see that the port is open, but it still refuses connections.
[image of netstat inspection]
Notes
I experience this issue on both minikube and DOKS.
I built these images on an M1 Mac.
There is no gRPC authentication: rpc.DialContext(ctx, serverAddress, grpc.WithTransportCredentials(insecure.NewCredentials()))
GRPC SERVER FILE
apiVersion: apps/v1
kind: Deployment
metadata:
  name: x-service
  labels:
    type: xx
    service: x-svc
spec:
  replicas: 1
  selector:
    matchLabels:
      type: xx
      service: x-svc
  template:
    metadata:
      labels:
        type: xx
        service: x-svc
    spec:
      containers:
        - name: x-api
          image: x/image
---
apiVersion: v1
kind: Service
metadata:
  name: x-service
spec:
  ports:
    - protocol: TCP
      port: 5996
      targetPort: 5996
  selector:
    type: xx
    service: x-svc
HTTP SERVER FILE
apiVersion: apps/v1
kind: Deployment
metadata:
  name: b-service
  labels:
    type: be
    service: be-svc
spec:
  replicas: 1
  selector:
    matchLabels:
      type: be
      service: be-svc
  template:
    metadata:
      labels:
        type: be
        service: be-svc
    spec:
      containers:
        - name: bapi
          image: x/grpc
          imagePullPolicy: Always
          env:
            - name: X_ADDRESS
              value: x-service:5996
---
apiVersion: v1
kind: Service
metadata:
  name: b-api-svc
spec:
  type: NodePort
  ports:
    - port: 8080
  selector:
    type: be
    service: be-svc
A few days ago I ran into a similar situation:
"Locally you can communicate with the gRPC service via port-forward, but it fails to communicate inside the cluster (between Pods)."
If both Pods are inside the same cluster, you should use the service name and port number instead of the host name. In your case this should be x-service:5996. If you have a namespace, then it should be x-service.<enter-namespace-here>:5996.
In the Java context, your code should look similar to this:
ManagedChannel channel = ManagedChannelBuilder
        .forTarget("x-service:5996")
        .usePlaintext()
        .build();
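As a small illustration on the YAML side (assuming the default namespace; both forms resolve through the cluster's DNS), the X_ADDRESS env entry in the HTTP server Deployment above could equally use the fully qualified service name:
env:
  - name: X_ADDRESS
    # equivalent to x-service:5996 when the client runs in the default namespace
    value: x-service.default.svc.cluster.local:5996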
If I am correct, within the k8s cluster your pods can interact with each other using their service names (and ports). However, if you want to establish communication via Ingress, check this documentation: https://doc.traefik.io/traefik/routing/providers/kubernetes-ingress/
Hopefully this helps you.

I have a Kubernetes Deployment and Service working, but my Ingress somehow only shows up talking to port 80 no matter what

I have a GKE cluster that I'm trying to get going with HTTPS load balancing.
So far I have:
deployment
service (x 2 -- see below)
ingress
SSL cert -- google managed version
All of these seem to be working, but I'm getting a 502 error when connecting to the hostname via https:
Error: Server Error
The server encountered a temporary error and could not complete your request.
Please try again in 30 seconds.
When trying to trace this down, I found a debugging post, but combing through it I noticed that his ingress shows ports 80,443, while I can never get mine to show anything but port 80.
This is even after I split my service into two different services, one on port 443 and one on port 80; now I'm only telling the ingress about the 443 service, and it still shows up with just port 80, and I'm still getting the 502 error.
The YAML for the deployment (asked by the commenter below):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-deployment
  labels:
    app: myapp
spec:
  replicas: 2
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp
          image: gcr.io/myapp-dev/myapp-container:latest
          ports:
            - containerPort: 8080
The YAML for the '443 service':
apiVersion: v1
kind: Service
metadata:
  name: my-service443
spec:
  type: NodePort
  selector:
    app: myapp
  ports:
    - name: https
      protocol: TCP
      port: 443
      targetPort: 8080
And the Ingress:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress
  annotations:
    # If the class annotation is not specified it defaults to "gce".
    kubernetes.io/ingress.global-static-ip-name: "kubething"
    networking.gke.io/managed-certificates: clearspring-cert
    kubernetes.io/ingress.class: "gce"
spec:
  defaultBackend:
    service:
      name: my-service443
      port:
        name: https
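(One thing worth checking, as an aside: the networking.gke.io/managed-certificates annotation above assumes a GKE ManagedCertificate object named clearspring-cert exists in the same namespace; if it is missing or has not finished provisioning, that is a common reason the load balancer serves only port 80. A minimal sketch, with a placeholder domain:)
apiVersion: networking.gke.io/v1
kind: ManagedCertificate
metadata:
  name: clearspring-cert
spec:
  domains:
    - example.yourdomain.com  # placeholder; must be a DNS name pointing at the static IP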
I don't understand (a) why the ingress is showing only port 80 and (b) why I'm still getting 502 errors.
Thanks much for any help whatsoever!
It looks like it was missing readiness and liveness probes; when I changed the deployment like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cleardev-deployment
  labels:
    app: clearspring
spec:
  replicas: 2
  selector:
    matchLabels:
      app: clearspring
  template:
    metadata:
      labels:
        app: clearspring
    spec:
      containers:
        - name: clearspring
          image: gcr.io/clearspring-dev/clearspring-container:latest
          ports:
            - containerPort: 8080
          readinessProbe:
            tcpSocket:
              port: 8080
            initialDelaySeconds: 5
            periodSeconds: 10
          livenessProbe:
            tcpSocket:
              port: 8080
            initialDelaySeconds: 15
            periodSeconds: 20
Then the status changed from UNHEALTHY to Unknown ... but I was still getting the 502 error.
The liveness probe did its job: the process was not listening on port 8080 on all interfaces, just on 127.0.0.1. I fixed that ... still not working, but then I tried EXPOSE 8080 in the Dockerfile, and now I guess I need to look at firewall rules, because the liveness/readiness probes can't connect.
Note that I had to delete and recreate the cluster to get this far ... I think. I first tried just updating the deployment and didn't get any change from UNHEALTHY.

Why can't I curl an endpoint on GCP?

I am working my way through a Kubernetes tutorial using GKE, but it was written with Azure in mind, though it has been working OK so far.
The first part that has not worked is the exercises regarding CoreDNS, which I understand does not exist on GKE; it's kube-dns only?
Is this why I can't get a pod endpoint with:
export PODIP=$(kubectl get endpoints hello-world-clusterip -o jsonpath='{ .subsets[].addresses[].ip}')
and then curl:
curl http://$PODIP:8080
My deployment is definitely on the right port:
ports:
  - containerPort: 8080
And, in fact, the deployment for the tutorial is from a Google sample.
Is this to do with CoreDNS or authorisation/needing a service account? What can I do to make the curl request work?
Deployment yaml is:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-world-customdns
spec:
  replicas: 3
  selector:
    matchLabels:
      app: hello-world-customdns
  template:
    metadata:
      labels:
        app: hello-world-customdns
    spec:
      containers:
        - name: hello-world
          image: gcr.io/google-samples/hello-app:1.0
          ports:
            - containerPort: 8080
      dnsPolicy: "None"
      dnsConfig:
        nameservers:
          - 9.9.9.9
---
apiVersion: v1
kind: Service
metadata:
  name: hello-world-customdns
spec:
  selector:
    app: hello-world-customdns
  ports:
    - port: 80
      protocol: TCP
      targetPort: 8080
Expanding on what Gari commented: when exposing a service outside your cluster, the service must be configured as NodePort or LoadBalancer, since ClusterIP only exposes the Service on a cluster-internal IP, making it reachable only from within the cluster. Cloud Shell is a shell environment for managing resources hosted on Google Cloud and is not part of the cluster, which is why you're not getting any response. To change this, update your YAML file as follows:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-world-customdns
spec:
  replicas: 3
  selector:
    matchLabels:
      app: hello-world-customdns
  template:
    metadata:
      labels:
        app: hello-world-customdns
    spec:
      containers:
        - name: hello-world
          image: gcr.io/google-samples/hello-app:1.0
          ports:
            - containerPort: 8080
      dnsPolicy: "None"
      dnsConfig:
        nameservers:
          - 9.9.9.9
---
apiVersion: v1
kind: Service
metadata:
  name: hello-world-customdns
spec:
  selector:
    app: hello-world-customdns
  type: NodePort
  ports:
    - port: 80
      protocol: TCP
      targetPort: 8080
After redeploying your service, you can run kubectl get all -o wide in Cloud Shell to validate that the NodePort-type service has been created with a node port assigned.
To test your deployment, send a curl request to the external IP of one of your nodes, including the node port that was assigned. The command should look something like:
curl <node_IP_address>:<Node_port>
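(Alternatively, if you'd rather not look up node IPs, a LoadBalancer Service gets its own external IP on GKE; a minimal sketch of just the Service, leaving the Deployment above unchanged:)
apiVersion: v1
kind: Service
metadata:
  name: hello-world-customdns
spec:
  selector:
    app: hello-world-customdns
  type: LoadBalancer
  ports:
    - port: 80          # external port on the load balancer's IP
      protocol: TCP
      targetPort: 8080  # the container's port
You would then curl http://<external_IP_address>/ once the external IP is assigned.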

SFTP server is not accessible when deployed to Kubernetes (GKE)

The SFTP server is not accessible when exposed using a NodePort Service and a Kubernetes Ingress. However, if the same deployment is exposed using a Service of type LoadBalancer, it works fine.
Below is the deployment file for SFTP in GKE, using the atmoz/sftp Dockerfile.
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: sftp
  labels:
    environment: production
    app: sftp
spec:
  replicas: 1
  minReadySeconds: 10
  template:
    metadata:
      labels:
        environment: production
        app: sftp
      annotations:
        container.apparmor.security.beta.kubernetes.io/sftp: runtime/default
    spec:
      containers:
        - name: sftp
          image: atmoz/sftp:alpine
          imagePullPolicy: Always
          args: ["user:pass:1001:100:upload"]
          ports:
            - containerPort: 22
          securityContext:
            capabilities:
              add: ["SYS_ADMIN"]
          resources: {}
If I expose this deployment normally using a Kubernetes Service of type LoadBalancer like below:
apiVersion: v1
kind: Service
metadata:
  labels:
    environment: production
  name: sftp-service
spec:
  type: LoadBalancer
  ports:
    - name: sftp-port
      port: 22
      protocol: TCP
      targetPort: 22
  selector:
    app: sftp
The above Service gets an external IP, which I can simply use in the sftp xxx.xx.xx.xxx command to get access using the pass password.
However, when I try to expose the same deployment using a GKE Ingress, it does not work. Below is the manifest for the ingress:
# First I create a NodePort service to expose the deployment internally
---
apiVersion: v1
kind: Service
metadata:
  labels:
    environment: production
  name: sftp-service
spec:
  type: NodePort
  ports:
    - name: sftp-port
      port: 22
      protocol: TCP
      targetPort: 22
      nodePort: 30063
  selector:
    app: sftp
# Ingress service has the SFTP service as its default backend
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: basic-ingress-2
spec:
  backend:
    serviceName: sftp-service
    servicePort: 22
  rules:
    - http:
        paths:
          # "http-service-sample" is a service exposing a simple hello-world app deployment
          - path: /sample
            backend:
              serviceName: http-service-sample
              servicePort: 80
After an external IP is assigned to the Ingress (I know it takes a few minutes to completely set up), xxx.xx.xx.xxx/sample starts working, but sftp -P 80 xxx.xx.xx.xxx doesn't work.
Below is the error I get from the server:
ssh_exchange_identification: Connection closed by remote host
Connection closed
What am I doing wrong in the above setup? Why is the LoadBalancer Service able to allow access to the SFTP service, while the Ingress fails?
Routing any traffic other than HTTP/HTTPS through a Kubernetes Ingress is currently not fully supported (see the docs).
You can try the workaround described here: Kubernetes: Routing non HTTP Request via Ingress to Container
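(As a sketch of that kind of workaround: the community ingress-nginx controller, unlike the GCE ingress controller used above, can proxy raw TCP through a ConfigMap referenced by its --tcp-services-configmap flag. The external port and namespaces below are assumptions:)
apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: ingress-nginx
data:
  # external port 2222 -> port 22 of "sftp-service" in the "default" namespace
  "2222": "default/sftp-service:22"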