Connecting a frontend API application to a backend database - Kubernetes

Can someone please help me complete this scenario? I have everything running successfully, but without the expected end result. I have PostgreSQL and Redis (back-end) pods along with a front-end API application that I will be using to send API requests. Unfortunately, after setting everything up and confirming it is all running on the Kubernetes dashboard, nothing happens when I send an API request, as if there were no connection between my services. Before desperately posting here I did some research: I tried a tutorial that uses an Ingress to connect my services, but it did not work. I also came across a tutorial that connects the back-end service to the front-end service using some sort of upstream configuration file, but that did not do the trick either.
Here are my YAML configuration files if anyone is interested:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: dictionary-project
  labels:
    app: net
spec:
  replicas: 1
  selector:
    matchLabels:
      app: dictionary
      tier: frontend
  template:
    metadata:
      labels:
        app: dictionary
        tier: frontend
    spec:
      hostNetwork: true
      containers:
      - name: dictionaryapi
        image: amin/dictionary_server_api:latest
        ports:
        - containerPort: 400
---
apiVersion: v1
kind: Service
metadata:
  name: dictionary-service
  labels:
    app: net
spec:
  selector:
    app: dictionary
    tier: frontend
  type: NodePort
  ports:
  - nodePort: 31003
    port: 67
    targetPort: 400
    protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  name: postgrebackendservice
  labels:
    run: backend
spec:
  selector:
    app: postgres
    tier: backend
  type: ClusterIP
  ports:
  - port: 5432
    targetPort: 5432
    name: postgresdb
---
apiVersion: v1
kind: Service
metadata:
  name: reddisbackendservice
  labels:
    run: backend
spec:
  selector:
    app: reddis
    tier: backend
  type: ClusterIP
  ports:
  - port: 6379
    targetPort: 6379
    name: client
  - port: 16379
    targetPort: 16379
    name: gossip
Output of all services:
Name: dictionary-service
Namespace: default
Labels: app=net
Annotations: <none>
Selector: app=dictionary,tier=frontend
Type: NodePort
IP: 10.126.146.18
Port: <unset> 67/TCP
TargetPort: 400/TCP
NodePort: <unset> 31003/TCP
Endpoints: 192.168.x.x:400
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>

Name: postgrebackendservice
Namespace: default
Labels: run=backend
Annotations: <none>
Selector: app=postgres,tier=backend
Type: ClusterIP
IP: 10.182.374.13
Port: postgresdb 5432/TCP
TargetPort: 5432/TCP
Endpoints: 192.168.x.x:5432
Session Affinity: None
Events: <none>

Name: reddisbackendservice
Namespace: default
Labels: run=backend
Annotations: <none>
Selector: app=reddis,tier=backend
Type: ClusterIP
IP: 10.182.0.60
Port: client 6379/TCP
TargetPort: 6379/TCP
Endpoints: 192.168.x.x:6379
Port: gossip 16379/TCP
TargetPort: 16379/TCP
Endpoints: 192.168.x.x:16379
Session Affinity: None
Events: <none>
I am testing my front-end API application in a web browser by sending a request to http://{workernodeIP}:31003/swagger, but the page does not load because there is no connection to the server.
Cluster information:
Kubernetes version: v1.18.6
Environment being used: bare-metal, 1 VM Master Node and 1 VM Worker Node
Installation method: kubeadm
Host OS: Ubuntu 18.04
CNI and version: calico
CRI and version: Docker 19.03.6

From the docs here
containerPort:
List of ports to expose from the container. Exposing a port here gives
the system additional information about the network connections a
container uses, but is primarily informational. Not specifying a port
here DOES NOT prevent that port from being exposed. Any port which is
listening on the default "0.0.0.0" address inside a container will be
accessible from the network. Cannot be updated.
As you can see, containerPort is informational and does not make the app listen on that port. To actually make the app listen on port 400, you need to change the port in the application's code or configuration.
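As a sketch of how the pieces should line up (assuming the image actually listens on port 80, a common default for an API serving /swagger; verify this against your app's configuration), the Service would look like:

apiVersion: v1
kind: Service
metadata:
  name: dictionary-service
spec:
  selector:
    app: dictionary
    tier: frontend
  type: NodePort
  ports:
  - nodePort: 31003   # fixed external port on every node
    port: 8080        # cluster-internal port other services use (arbitrary choice)
    targetPort: 80    # must match the port the container actually listens on
    protocol: TCP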

Related

Service Endpoint not created although container port is online

I have a simple Service that connects to a port from a container inside a pod.
All pretty straightforward.
This was working too, but out of nowhere the endpoint for port 18080 is no longer created.
So I began to investigate and looked at this question, but nothing there helped.
The container is up, no errors/events, all green.
I can also reach the pod directly at pod-ip:18080 from another container, so the endpoint should be reachable for the service.
I can't see errors in:
journalctl -u snap.microk8s.daemon-*
I am using microk8s v1.20.
Where else can I debug this situation?
I am out of tools.
Service:
kind: Service
apiVersion: v1
metadata:
  name: aedi-service
spec:
  selector:
    app: server
  ports:
  - name: aedi-host-ws #-port
    port: 51056
    protocol: TCP
    targetPort: host-ws-port
  - name: aedi-http
    port: 18080
    protocol: TCP
    targetPort: fcs-http
Deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: server-deployment
  labels:
    app: server
spec:
  replicas: 1
  selector:
    matchLabels:
      app: server
  template:
    metadata:
      labels:
        app: server
        srv: os-port-mapping
        name: dns-service
    spec:
      hostname: fcs
      containers:
      - name: fcs
        image: '{{$fcsImage}}'
        imagePullPolicy: {{$pullPolicy}}
        ports:
        - containerPort: 18080
Service Description:
Name: aedi-service
Namespace: fcs-only
Labels: app.kubernetes.io/managed-by=Helm
Annotations: meta.helm.sh/release-name: fcs-only
meta.helm.sh/release-namespace: fcs-only
Selector: app=server
Type: ClusterIP
IP Families: <none>
IP: 10.152.183.247
IPs: 10.152.183.247
Port: aedi-host-ws 51056/TCP
TargetPort: host-ws-port/TCP
Endpoints: 10.1.116.70:51056
Port: aedi-http 18080/TCP
TargetPort: fcs-http/TCP
Endpoints:
Session Affinity: None
Events: <none>
Pod Info:
NAME READY STATUS RESTARTS AGE LABELS
server-deployment-76b5789754-q48xl 6/6 Running 0 23m app=server,name=dns-service,pod-template-hash=76b5789754,srv=os-port-mapping
kubectl get svc aedi-service -o wide:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
aedi-service ClusterIP 10.152.183.247 <none> 443/TCP,1884/TCP,51052/TCP,51051/TCP,51053/TCP,51056/TCP,18080/TCP,51055/TCP 34m app=server
Your service spec refers to a port named "fcs-http", but that name was not declared in the deployment. Try:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: server-deployment
...
        ports:
        - containerPort: 18080
          name: fcs-http # <-- add the name here
...
Wrong service configuration:
- name: aedi-http
  port: 18080           # ---> this exposes the service; it is not related to the container port
  protocol: TCP
  targetPort: fcs-http  # ---> this should be 18080, corresponding to the container port
If you still want to use a name instead of the port number, you must also define that name in the deployment YAML, like below:
containers:
- name: fcs
  image: '{{$fcsImage}}'
  imagePullPolicy: {{$pullPolicy}}
  ports:
  - containerPort: 18080
    name: fcs-http
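After applying either fix, the missing endpoint should show up again. A quick way to confirm (the namespace is taken from the describe output above):

kubectl -n fcs-only get endpoints aedi-service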

How to deploy custom nginx app on kubernetes?

I want to deploy a custom nginx app on my Kubernetes cluster.
I have three Raspberry Pis in a cluster. My deployment file looks as follows:
kubepodDeploy.yaml
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-nginx
spec:
  selector:
    matchLabels:
      run: my-nginx
  replicas: 2
  template:
    metadata:
      labels:
        run: my-nginx
    spec:
      containers:
      - name: my-nginx
        image: privateRepo/my-nginx
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: my-nginx
  labels:
    run: my-nginx
spec:
  type: NodePort
  ports:
  - port: 8080
    targetPort: 80
    protocol: TCP
    name: http
  - port: 443
    protocol: TCP
    name: https
  selector:
    run: my-nginx
How can I deploy it so that I can access my app by IP address? Which service type do I need?
my service details are:
kubectl describe service my-nginx
Name: my-nginx
Namespace: default
Labels: run=my-nginx
Annotations: Selector: run=my-nginx
Type: NodePort
IP: 10.99.107.194
Port: http 8080/TCP
TargetPort: 80/TCP
NodePort: http 30488/TCP
Endpoints: 10.32.0.4:80,10.32.0.5:80
Port: https 443/TCP
TargetPort: 443/TCP
NodePort: https 32430/TCP
Endpoints: 10.32.0.4:443,10.32.0.5:443
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
You cannot access the application on ipaddress:8080 without running a proxy server in front or changing iptables rules (not a good idea). The NodePort service type always exposes the service on a port in the 30000-32767 range,
so at any point your service will be reachable on ipaddress:some_higher_port.
You can run a proxy in front that redirects traffic to the node port; since 8080 is your requirement, run the proxy server on port 8080 as well.
Just to add: the proxy server will not be part of the Kubernetes cluster.
If you are on a cloud provider, consider using a LoadBalancer service and accessing your app via a DNS name.
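In the meantime, given the describe output above, the app should already answer on the assigned node ports (substitute a real node IP):

curl http://<node-ip>:30488/       # http, the NodePort assigned to service port 8080
curl -k https://<node-ip>:32430/   # https, the NodePort assigned to service port 443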

Kubernetes Load Balancer Type not responding to External IP Address

I've been trying to use the configuration below to expose my application on a public IP. This is on Azure. The public IP is generated, but when I browse to it I get nothing.
This is a Django app whose container runs on port 8000. The service currently runs on port 80, but even if I configure the service to run on port 8000 it still doesn't work.
Is there something wrong with the way my service is defined?
apiVersion: v1
kind: Service
metadata:
  name: web
  labels:
    app: hmweb
spec:
  ports:
  - port: 80
  selector:
    app: hmweb
    tier: frontend
  type: LoadBalancer
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hmweb-deployment
  labels:
    app: hmweb
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hmweb
  template:
    metadata:
      labels:
        app: hmweb
    spec:
      containers:
      - name: hmweb
        image: nw_webimage
        envFrom:
        - configMapRef:
            name: new-config
        command: ["/bin/sh","-c"]
        args: ["gunicorn saleor.wsgi -w 2 -b 0.0.0.0:8000"]
        ports:
        - containerPort: 8000
      imagePullSecrets:
      - name: key
Output of kubectl describe service web (the name of the service):
Name: web
Namespace: default
Labels: app=hmweb
Annotations: kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"app":"hmweb"},"name":"web","namespace":"default"},"spec":{"ports":[{"port":...
Selector: app=hmweb
Type: LoadBalancer
IP: 10.0.86.131
LoadBalancer Ingress: 13.69.127.16
Port: <unset> 80/TCP
TargetPort: 8000/TCP
NodePort: <unset> 31827/TCP
Endpoints: 10.244.0.112:8000
Session Affinity: None
External Traffic Policy: Cluster
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal EnsuringLoadBalancer 8m service-controller Ensuring load balancer
Normal EnsuredLoadBalancer 7m service-controller Ensured load balancer
The reason is that your service has two selectors, app: hmweb and tier: frontend, while your deployment's pods carry only the single label app: hmweb. Hence, when your service is created it cannot find any pods that have both labels and doesn't connect to any pods. Also, if your container runs on port 8000 you must set targetPort to the container's port; otherwise targetPort defaults to the same value as port, i.e. port: 80 as defined in your service.
The corrected service YAML is:
apiVersion: v1
kind: Service
metadata:
  name: web
  labels:
    app: hmweb
spec:
  ports:
  - port: 80
    targetPort: 8000
    protocol: TCP
  selector:
    app: hmweb
  type: LoadBalancer
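One quick way to verify the fix is to check that the service now has endpoints and that the selector actually matches the pod:

kubectl get endpoints web
kubectl get pods -l app=hmweb -o wide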
Hope this helps.

Connection timed out when attempting to access any service in Kubernetes

I've created a deployment and a service and deployed them using Kubernetes, but when I try to access them via curl I always get a connection timed out error.
Here are my YAML files:
Deployment.yml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    app: locations-service
  name: locations-service
  namespace: default
spec:
  replicas: 2
  selector:
    matchLabels:
      app: locations-service
  template:
    metadata:
      labels:
        app: locations-service
    spec:
      containers:
      - image: dropwizard:latest
        imagePullPolicy: Never # just for testing!
        name: locations-service
        ports:
        - containerPort: 8080
          protocol: TCP
          name: app-port
        - containerPort: 8081
          protocol: TCP
          name: admin-port
        resources: {}
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
Service.yml
apiVersion: v1
kind: Service
metadata:
  labels:
    app: locations-service
  name: locations-service
  namespace: default
spec:
  type: NodePort
  externalTrafficPolicy: Cluster
  ports:
  - name: "8080"
    port: 8080
    targetPort: 8080
    protocol: TCP
  - name: "8081"
    port: 8081
    targetPort: 8081
    protocol: TCP
  selector:
    app: locations-service
I also tried to add ingress routes and hit them, but the same problem still occurs.
Note that the application is successfully deployed, and I can check the logs from the k8s dashboard.
Another example: I have the following svc
kubectl describe service webapp1-svc
Name: webapp1-svc
Namespace: default
Labels: app=webapp1
Annotations: <none>
Selector: app=webapp1
Type: NodePort
IP: 10.0.0.219
Port: <unset> 80/TCP
TargetPort: 80/TCP
NodePort: <unset> 30080/TCP
Endpoints: 172.17.0.4:80
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
and tried to access it using
curl -v 10.0.0.219:30080
* Rebuilt URL to: 10.0.0.219:30080/
* Trying 10.0.0.219...
* connect to 10.0.0.219 port 30080 failed: Connection timed out
* Failed to connect to 10.0.0.219 port 30080: Connection timed out
* Closing connection 0
curl: (7) Failed to connect to 10.0.0.219 port 30080: Connection timed out
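One detail worth flagging in the curl above: it mixes the two address types. The ClusterIP serves the service port (80 here), while the NodePort is only served on node IPs:

curl -v http://10.0.0.219:80/      # ClusterIP + service port, reachable from inside the cluster only
curl -v http://<node-ip>:30080/    # node IP + NodePort, reachable from outside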

How to make two Kubernetes Services talk to each other?

Currently, I have working K8s API pods in a K8s service that connects to a K8s Redis service with K8s pods of its own. The problem is, I am using NodePort, meaning BOTH are exposed to the public. I only want the API accessible to the public. The issue is that if I make the Redis service not public, the API can't see it. Is there a way to connect two Services without exposing one to the public?
This is my API service yaml:
apiVersion: v1
kind: Service
metadata:
  name: app-api-svc
spec:
  selector:
    app: app-api
    tier: api
  ports:
  - protocol: TCP
    port: 5000
    nodePort: 30400
  type: NodePort
And this is my Redis service yaml:
apiVersion: v1
kind: Service
metadata:
  name: app-api-redis-svc
spec:
  selector:
    app: app-api-redis
    tier: celery_broker
  ports:
  - protocol: TCP
    port: 6379
    nodePort: 30537
  type: NodePort
First, configure the Redis service as a ClusterIP service. It will be private, visible only to other services. This can be done by removing the line with the type option (ClusterIP is the default).
apiVersion: v1
kind: Service
metadata:
  name: app-api-redis-svc
spec:
  selector:
    app: app-api-redis
    tier: celery_broker
  ports:
  - protocol: TCP
    port: 6379
    targetPort: [the port exposed by the Redis pod]
Finally, when you configure the API to reach Redis, the address should be app-api-redis-svc:6379.
And that's all. I have a lot of services communicating with each other this way. If this doesn't work for you, let me know in the comments.
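For instance, a common way to wire this up (the variable name and image below are hypothetical) is to pass the service address to the API container as an environment variable in its deployment:

containers:
- name: app-api
  image: app-api:latest                          # hypothetical image name
  env:
  - name: REDIS_URL                              # hypothetical variable your app reads
    value: "redis://app-api-redis-svc:6379/0"    # the service name resolves via cluster DNS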
I'm going to try to take the best from all answers and my own research and make a short guide that I hope you will find helpful:
1. Test connectivity
Connect to a different pod, e.g. a ruby pod (see also the throwaway-pod sketch after these commands):
kubectl exec -it some-pod-name -- /bin/sh
Verify it can ping the service in question:
ping redis
Can it connect to the port? (I found telnet did not work for this.)
nc -zv redis 6379
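If no suitable pod is handy, a throwaway pod works too (busybox here is just an example image):

kubectl run tmp-debug --rm -it --image=busybox -- sh
# then, inside the pod:
nc -zv redis 6379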
2. Verify your service selectors are correct
If your service config looks like this:
kind: Service
apiVersion: v1
metadata:
  name: redis
  labels:
    app: redis
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: redis
    role: master
    tier: backend
verify those selectors are also set on your pods:
kubectl get pods --selector=app=redis,role=master,tier=backend
Confirm that your service is tied to your pods by running:
kubectl describe service redis
Name: redis
Namespace: default
Labels: app=redis
role=master
tier=backend
Annotations: <none>
Selector: app=redis,role=master,tier=backend
Type: ClusterIP
IP: 10.47.250.121
Port: <unset> 6379/TCP
Endpoints: 10.44.0.16:6379
Session Affinity: None
Events: <none>
Check the Endpoints: field and confirm it's not blank.
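The endpoints can also be listed directly:

kubectl get endpoints redis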
More info can be found at:
https://kubernetes.io/docs/tasks/debug-application-cluster/debug-service/#my-service-is-missing-endpoints
I'm not sure about Redis, but I have a similar application: a Java web application running as a pod that is exposed to the outside world through a NodePort, and a mongodb container running as a pod.
In the webapp deployment specification, I map it to the mongodb service through its name by passing the service name as a parameter; I have pasted the specification below, which you can adapt accordingly. There should be a similar mapping parameter for Redis where you would use its service name, which is "mongoservice" in my case.
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: empappdepl
  labels:
    name: empapp
spec:
  replicas: 1
  template:
    metadata:
      labels:
        name: empapp
    spec:
      containers:
      - resources:
          limits:
            cpu: 0.2
        image: registryip:5000/employee:1
        imagePullPolicy: IfNotPresent
        name: wsemp
        ports:
        - containerPort: 8080
          name: wsemp
        command: ["java","-Dspring.data.mongodb.uri=mongodb://mongoservice/microservices", "-Djava.security.egd=file:/dev/./urandom","-jar","/app.jar"]
      imagePullSecrets:
      - name: myregistrykey
---
apiVersion: v1
kind: Service
metadata:
  labels:
    name: empwhatever
  name: empservice
spec:
  ports:
  - port: 8080
    targetPort: 8080
    protocol: TCP
    name: http
    nodePort: 30062
  type: NodePort
  selector:
    name: empapp
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: mongodbdepl
  labels:
    name: mongodb
spec:
  replicas: 1
  template:
    metadata:
      labels:
        name: mongodb
    spec:
      containers:
      - resources:
          limits:
            cpu: 0.3
        image: mongo
        imagePullPolicy: IfNotPresent
        name: mongodb
        ports:
        - containerPort: 27017
---
apiVersion: v1
kind: Service
metadata:
  labels:
    name: mongowhatever
  name: mongoservice
spec:
  ports:
  - port: 27017
    targetPort: 27017
    protocol: TCP
  selector:
    name: mongodb
Note that the mongodb service doesn't need to be exposed as a NodePort.
Kubernetes enables inter-service communication by letting services reach other services by their service names.
In your scenario, the redis service should be accessible from other services at
app-api-redis-svc.default:6379, where default is the namespace your service is running under.
This internally routes your requests to the redis pod on the target container port.
Check out this link for the different service-discovery modes Kubernetes provides.
Hope it helps
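As a quick illustration of the DNS side of this (run from any pod in the cluster that has nslookup available):

nslookup app-api-redis-svc.default.svc.cluster.local   # fully qualified service name
nslookup app-api-redis-svc                             # short form, resolved via the pod's DNS search path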