How can my services communicate with each other in a Kubernetes deployment?

Part of my deployment looks like this
client -- main service __ service 1
                       |__ service 2
NOTE: Each of these 4 services is a container, and I'm trying to run each one in its own Pod (without using a multi-container Pod).
The main service must call service 1, get the results, then send those results to service 2, get that result, and send it back to the web client.
The main service operates in this order:
receive request from the web client on port 80
make request to http://localhost:8000 (service 1)
make request to http://localhost:8001 (service 2)
merge results
respond to web client with result
My deployments for service 1 and 2 look like this
SERVICE 1
apiVersion: v1
kind: Service
metadata:
  name: serviceone
spec:
  selector:
    run: serviceone
  ports:
    - port: 80
      targetPort: 5050
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: serviceone-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      run: serviceone
  template:
    metadata:
      labels:
        run: serviceone
    spec:
      containers:
        - name: serviceone
          image: test.azurecr.io/serviceone:v1
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 5050
SERVICE 2
apiVersion: v1
kind: Service
metadata:
  name: servicetwo
spec:
  selector:
    run: servicetwo
  ports:
    - port: 80
      targetPort: 5000
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: servicetwo-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      run: servicetwo
  template:
    metadata:
      labels:
        run: servicetwo
    spec:
      containers:
        - name: servicetwo
          image: test.azurecr.io/servicetwo:v1
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 5000
But I don't know what the Service and Deployment would look like for the main service, which has to make requests to the two other services.
EDIT: This is my attempt at the Service/Deployment for the main service
apiVersion: v1
kind: Service
metadata:
  name: mainservice
spec:
  selector:
    run: mainservice
  ports:
    - port: 80 # incoming traffic from web client pod
      targetPort: 80 # traffic goes to container port 80
  selector:
    run: serviceone
  ports:
    - port: ?
      targetPort: 8000 # the port the container is hardcoded to send traffic to service one
  selector:
    run: servicetwo
  ports:
    - port: ?
      targetPort: 8001 # the port the container is hardcoded to send traffic to service two
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mainservice-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      run: mainservice
  template:
    metadata:
      labels:
        run: mainservice
    spec:
      containers:
        - name: mainservice
          image: test.azurecr.io/mainservice:v1
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 80
EDIT 2: alternate attempt at the service after finding this https://kubernetes.io/docs/concepts/services-networking/service/#multi-port-services
apiVersion: v1
kind: Service
metadata:
  name: mainservice
spec:
  selector:
    run: mainservice
  ports:
    - name: incoming
      port: 80 # incoming traffic from web client pod
      targetPort: 80 # traffic goes to container port 80
    - name: s1
      port: 8080
      targetPort: 8000 # the port the container is hardcoded to send traffic to service one
    - name: s2
      port: 8081
      targetPort: 8001 # the port the container is hardcoded to send traffic to service two

The main service doesn't need to know anything about the services it calls other than their names. Simply access them using the name of each Service, i.e. serviceone and servicetwo (http://serviceone:80), and the requests will be forwarded to the correct Pod.
Reference: https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/
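As an illustration (a sketch, not the asker's actual manifests), the main service only needs a single-port Service for its own incoming traffic; the outgoing calls are made by the application itself to http://serviceone and http://servicetwo instead of localhost. The SERVICE_ONE_URL / SERVICE_TWO_URL environment variables below are hypothetical, added only to show one way of avoiding hardcoded endpoints:
apiVersion: v1
kind: Service
metadata:
  name: mainservice
spec:
  selector:
    run: mainservice
  ports:
    - port: 80        # incoming traffic from the web client
      targetPort: 80  # container port of the main service
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mainservice-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      run: mainservice
  template:
    metadata:
      labels:
        run: mainservice
    spec:
      containers:
        - name: mainservice
          image: test.azurecr.io/mainservice:v1
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 80
          env:
            # Hypothetical variables; the application would read these
            # instead of calling http://localhost:8000 and http://localhost:8001.
            - name: SERVICE_ONE_URL
              value: "http://serviceone:80"
            - name: SERVICE_TWO_URL
              value: "http://servicetwo:80"
Note that the Service for mainservice never mentions serviceone or servicetwo: a Service only describes how traffic reaches the Pods behind it, not where those Pods send their own requests.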

Related

Digitalocean kubernetes cluster load balancer doesn't work properly as round robin

I created a DigitalOcean Kubernetes cluster and a service with 4 replicas. The service type is LoadBalancer, and the load balancer is created. I posted my request to the target endpoint using Postman, with the Pod hostname written into the endpoint, but every time I get the response from the same Pod. If the load were balanced by the load balancer, the requests should go to each and every Pod, but that is not happening as I expected.
My manifest file is like below.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: user-service-deployment
  labels:
    app: user-service
spec:
  replicas: 3
  selector:
    matchLabels:
      app: user-service
  template:
    metadata:
      labels:
        app: user-service
    spec:
      containers:
        - name: user-service
          image: myRepo/ms-user-service:1.0.1
          ports:
            - containerPort: 8080
          env:
            - name: SPRING_PROFILES_ACTIVE
              value: proud
            - name: MY_NODE_NAME
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
---
apiVersion: v1
kind: Service
metadata:
  name: user-service
spec:
  selector:
    app: user-service
  type: LoadBalancer
  ports:
    - protocol: TCP
      port: 8080
      targetPort: 8080
please resolve this problem.
You have to disable the keepalive feature (service.beta.kubernetes.io/do-loadbalancer-enable-backend-keepalive) when you create the service. Under metadata.annotations you can configure it as below; see the DigitalOcean load balancer annotations documentation for more detail.
apiVersion: v1
kind: Service
metadata:
  name: user-service
  annotations:
    service.beta.kubernetes.io/do-loadbalancer-enable-backend-keepalive: "false"
spec:
  selector:
    app: user-service
  type: LoadBalancer
  ports:
    - protocol: TCP
      port: 8080
      targetPort: 8080
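Once the annotation is applied, one way to check the distribution (a sketch; the external IP and the endpoint path are placeholders) is to call the load balancer repeatedly and see whether the responses come from different Pods:
# Find the external IP assigned to the DigitalOcean load balancer
kubectl get svc user-service

# Call the endpoint several times; with backend keepalive disabled,
# responses should rotate across the Pods over time.
for i in $(seq 1 10); do curl -s http://<EXTERNAL-IP>:8080/your-endpoint; echo; done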

Access pod from another pod with kubernetes url

I have two Pods created with a Deployment and a Service. My problem is as follows: the Pod "my-gateway" accesses the URL "adm-contact" at "http://127.0.0.1:3000/adm-contact", which should reach another Pod called "my-adm-contact". How can I make this work? I tried the following command: kubectl port-forward my-gateway-5b85498f7d-5rwnn 3000:3000 8879:8879 but it gives this error:
E0526 21:56:34.024296 12428 portforward.go:400] an error occurred forwarding 3000 -> 3000: error forwarding port 3000 to pod 2d5811c20c3762c6c249a991babb71a107c5dd6b080c3c6d61b4a275b5747815, uid : exit status 1: 2022/05/27 00:56:35 socat[2494] E connect(16, AF=2 127.0.0.1:3000, 16): Connection refused
Note that the images built with the Dockerfile have EXPOSE 3000 8879.
Here are my YAMLs:
Deployment my-adm-contact:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-adm-contact
  labels:
    app: my-adm-contact
spec:
  selector:
    matchLabels:
      run: my-adm-contact
  template:
    metadata:
      labels:
        run: my-adm-contact
    spec:
      containers:
        - name: my-adm-contact
          image: my-contact-adm
          imagePullPolicy: Never
          ports:
            - containerPort: 8879
              hostPort: 8879
              name: admcontact8879
          readinessProbe:
            httpGet:
              path: /adm-contact
              port: 8879
            initialDelaySeconds: 30
            periodSeconds: 10
            failureThreshold: 6
Service my-adm-contact:
apiVersion: v1
kind: Service
metadata:
  name: my-adm-contact
  labels:
    run: my-adm-contact
spec:
  selector:
    app: my-adm-contact
  ports:
    - name: 8879-my-adm-contact
      port: 8879
      protocol: TCP
      targetPort: 8879
  type: LoadBalancer
status:
  loadBalancer: {}
Deployment my-gateway:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-gateway
  labels:
    app: my-gateway
spec:
  selector:
    matchLabels:
      run: my-gateway
  template:
    metadata:
      labels:
        run: my-gateway
    spec:
      containers:
        - name: my-gateway
          image: api-gateway
          imagePullPolicy: Never
          ports:
            - containerPort: 3000
              hostPort: 3000
              name: home
            #- containerPort: 8879
            #  hostPort: 8879
            #  name: adm
          readinessProbe:
            httpGet:
              path: /adm-contact
              port: 8879
              path: /
              port: 3000
            initialDelaySeconds: 30
            periodSeconds: 10
            failureThreshold: 6
Service my-gateway:
apiVersion: v1
kind: Service
metadata:
  name: my-gateway
  labels:
    run: my-gateway
spec:
  selector:
    app: my-gateway
  ports:
    - name: 3000-my-gateway
      port: 3000
      protocol: TCP
      targetPort: 3000
    - name: 8879-my-gateway
      port: 8879
      protocol: TCP
      targetPort: 8879
  type: LoadBalancer
status:
  loadBalancer: {}
What k8s-cluster environment are you running this in? I ask because a service.type of LoadBalancer is a special kind: when the Service is created, your cloud provider's controller will spot this and provision a load balancer for it. See https://kubernetes.io/docs/concepts/services-networking/service/#loadbalancer
If you're not deploying this in a suitable cloud environment, your services won't do anything.
I had a quick look at your SO profile and, sorry if this is presumptuous, I don't mean to be, it looks like you're relatively new to k8s. You shouldn't need to do any port-forwarding/kubectl proxying, and this should be a lot simpler than you might think.
When you create a service k8s will 'create' a DNS entry for you which points to the pod(s) specified by your selector.
I think you're trying to reach a setup where code running in my-gateway pod can connect to http://adm-contact on port 3000 and reach a listening service on the adm-contact pod. Is that correct?
If so, the outline solution is to expose tcp/3000 in the adm-contact pod, and create a service called adm-contact that has a selector for adm-contact pod.
This is a sample manifest I've just created which runs nginx and then creates a service for it, allowing any pod on the cluster to connect to it, e.g. curl http://nginx-service.default.svc. In this example I'm exposing port 80 because I didn't want to have to modify the nginx config, but the principle is the same.
apiVersion: v1
kind: Pod
metadata:
  labels:
    app: nginx
  name: nginx
spec:
  containers:
    - image: nginx
      imagePullPolicy: Always
      name: nginx
      ports:
        - containerPort: 80
          protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  ports:
    - port: 80
      protocol: TCP
      targetPort: 80
  selector:
    app: nginx
  type: ClusterIP
The k8s docs on Services are pretty helpful if you want more https://kubernetes.io/docs/concepts/services-networking/connect-applications-service/
A service can be reached by its own name from pods in its namespace:
so a service foo in namespace bar can be reached at http://foo from a pod in namespace bar.
From other namespaces that service is reachable at http://foo.bar.svc.cluster.local. Change out the service name and namespace for your use case.
k8s dns is explained here in the docs:
https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/
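A quick way to see this in practice (a sketch; foo and bar are the placeholder service and namespace from above) is to start a throwaway Pod and resolve the name from inside the cluster:
# Start a temporary busybox Pod with an interactive shell
kubectl run tmp --rm -it --image=busybox --restart=Never -- sh

# Inside the Pod:
nslookup foo                         # short name works when the Pod is in namespace bar
nslookup foo.bar.svc.cluster.local   # fully qualified name works from any namespace
wget -qO- http://foo.bar.svc.cluster.local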
I have taken the YAML you provided and assembled it here.
From another comment I see the URL you're trying to connect to is: http://gateway-service.default.svc.cluster.local:3000/my-adm-contact-service
The ability to resolve service names to pods only functions inside the cluster: coredns (a k8s pod) is the part which recognises when a service has been created and what IP(s) it's available at.
So another pod in the cluster e.g. one created by kubectl run bb --image=busybox -it -- sh would be able to resolve the command ping gateway-service, but pinging gateway-service from your desktop will fail because they're not both seeing the same DNS.
The api-gateway container will be able to connect to my-adm-contact-service on ports 3000 or 8879, and the my-adm-contact container will equally be able to connect to gateway-service on port 3000 - but only when those containers are running inside the cluster.
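For example (a sketch; it assumes a shell and an HTTP client such as curl are available in the api-gateway image), the in-cluster calls would look like this rather than going through 127.0.0.1:
# Open a shell inside the running api-gateway Pod
kubectl exec -it my-gateway -- sh

# Then call the other Pod by its Service name on either exposed port
curl http://my-adm-contact-service:3000/adm-contact
curl http://my-adm-contact-service:8879/adm-contact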
I think you're trying to access this from outside the cluster, so now that the port/service types are correct you could re-try kubectl port-forward svc/gateway-service 3000:3000. This will let you connect to 127.0.0.1:3000, and the traffic will be routed to port 3000 on the api-gateway container.
If you need to proxy to the other my-adm-contact-service then you'll have to issue similar kubectl commands in other shells, one per service:port combination. For completeness, if you wanted to route traffic from your local machine to all three container/port sets, you'd run:
# format kubectl port-forward svc/name src:dest (both TCP)
kubectl port-forward svc/gateway-service 3000:3000
kubectl port-forward svc/my-adm-contact-service 8879:8879
kubectl port-forward svc/my-adm-contact-service 3001:3000 #NOTE the changed local port, because localhost:3000 is already used
You will need a new shell for each kubectl, or run it as a background job.
apiVersion: v1
kind: Pod
metadata:
  name: my-adm-contact
  labels:
    app: my-adm-contact
spec:
  containers:
    - image: my-contact-adm
      imagePullPolicy: Never
      name: my-adm-contact
      ports:
        - containerPort: 8879
          protocol: TCP
        - containerPort: 3000
          protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  name: my-adm-contact-service
spec:
  ports:
    - port: 8879
      protocol: TCP
      targetPort: 8879
      name: adm8879
    - port: 3000
      protocol: TCP
      targetPort: 3000
      name: adm3000
  selector:
    app: my-adm-contact
  type: ClusterIP
---
apiVersion: v1
kind: Pod
metadata:
  name: my-gateway
  labels:
    app: my-gateway
spec:
  containers:
    - image: api-gateway
      imagePullPolicy: Never
      name: my-gateway
      ports:
        - containerPort: 3000
          protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  name: gateway-service
spec:
  ports:
    - port: 3000
      protocol: TCP
      targetPort: 3000
  selector:
    app: my-gateway
  type: ClusterIP

Not able to access the application using Load Balancer service in Azure Kubernetes Service

I have created a small nginx Deployment with a Service of type LoadBalancer in Azure Kubernetes Service, but I am unable to access the application through the LoadBalancer service. Can someone provide the solution?
I have already updated the security group to allow all traffic, but it did not help.
Do I need to update any security group to access the application?
Please find the deployment file.
cat nginx.yml
apiVersion: v1
kind: Service
metadata:
  name: nginx-kubernetes
spec:
  type: LoadBalancer
  ports:
    - port: 8080
      targetPort: 8080
  selector:
    app: hello-kubernetes
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-kubernetes
spec:
  replicas: 3
  selector:
    matchLabels:
      app: hello-kubernetes
  template:
    metadata:
      labels:
        app: hello-kubernetes
    spec:
      containers:
        - name: hello-kubernetes
          image: nginx:latest
          ports:
            - containerPort: 8080
The nginx container listens on port 80 by default, but you are pointing the Service at port 8080, where nothing is listening, and thus you get connection refused.
Take a look at the nginx container Dockerfile. What port do you see?
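If you want to confirm the listening port yourself, one option (a sketch) is to dump the effective nginx configuration from a running Pod of the Deployment:
# Print the nginx configuration from one Pod of the nginx-kubernetes Deployment
kubectl exec deploy/nginx-kubernetes -- nginx -T | grep listen
# The output should include a line such as:  listen 80;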
All you need to do to make it work is to change the targetPort like the following:
apiVersion: v1
kind: Service
metadata:
  name: nginx-kubernetes
spec:
  ports:
    - port: 8080
      targetPort: 80
  selector:
    app: hello-kubernetes
Additionally, it would be nice to change the containerPort as follows:
spec:
  containers:
    - name: hello-kubernetes
      image: nginx:latest
      ports:
        - containerPort: 80
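After applying the corrected Service and Deployment (and assuming the Service keeps type: LoadBalancer as in the original manifest), a quick check would be, with the external IP as a placeholder:
# Wait for the EXTERNAL-IP column to be populated, then test it
kubectl get svc nginx-kubernetes
curl http://<EXTERNAL-IP>:8080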

LoadBalancer service not reachable

I have a very simple web app based on HTML, JavaScript, a little jQuery, and AngularJS. It is tested locally on Eclipse JEE and on Tomcat and works fine, and its image works fine on Docker locally.
I can access it in the browser using localhost:8080/xxxx, 127.0.0.1:8080/xxxx, or 0.0.0.0:8080. But when I deploy to Google Kubernetes, I get "This site can not be reached" when I use the external IP in the browser. I can ping my external IP, but curl does not work. It's not a firewall issue, because the sample voting app from Docker Hub works fine on my Kubernetes cluster.
my Dockerfile:
FROM tomcat:9.0
ADD GeoWebv3.war /usr/local/tomcat/webapps/GeoWeb.war
EXPOSE 8080
my pod yaml
apiVersion: v1
kind: Pod
metadata:
  name: front-app-pod
  labels:
    name: front-app-pod
    app: demo-geo-app
spec:
  containers:
    - name: front-app
      image: myrepo/mywebapp:v2
      ports:
        - containerPort: 80
my service yaml
apiVersion: v1
kind: Service
metadata:
  name: front-service
  labels:
    name: front-service
    app: demo-geo-app
spec:
  type: LoadBalancer
  ports:
    - port: 80
      targetPort: 80
  selector:
    name: front-app-pod
    app: demo-geo-app
Change your YAMLs like this:
apiVersion: v1
kind: Service
metadata:
  name: front-service
  labels:
    name: front-service
    app: demo-geo-app
spec:
  type: LoadBalancer
  ports:
    - port: 80
      targetPort: 8080
  selector:
    name: front-app-pod
    app: demo-geo-app
---
apiVersion: v1
kind: Pod
metadata:
  name: front-app-pod
  labels:
    name: front-app-pod
    app: demo-geo-app
spec:
  containers:
    - name: front-app
      image: myrepo/mywebapp:v2
      ports:
        - containerPort: 8080
You expose port 8080 in the Docker image, so in the Service you have to set targetPort: 8080 to redirect the traffic arriving at the load balancer on port 80 to the container's port 8080.
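After re-applying both objects, a quick check (a sketch; the external IP is a placeholder and the /GeoWeb path follows from the GeoWeb.war name in the Dockerfile) would be:
# The EXTERNAL-IP column shows the load balancer address once it is provisioned
kubectl get svc front-service

# Port 80 on the load balancer now forwards to Tomcat's port 8080 in the Pod
curl http://<EXTERNAL-IP>/GeoWeb/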

Kubernetes (Minikube): link between client and server

I'm running a simple Spring microservice project with Minikube. I have two projects: lucky-word-client (on port 8080) and lucky-word-server (on port 8888), but I can't get the client to communicate with the server. In fact, if lucky-word-client communicates with lucky-word-server, the result is the word "Evviva"; otherwise the word is "Default". When I run minikube service lucky-client in the terminal, the output is Default instead of Evviva.
This is the file Dockerfile of lucky-word-server:
FROM frolvlad/alpine-oraclejdk8
ADD build/libs/common-config-server-0.0.1-SNAPSHOT.jar common-config-server.jar
EXPOSE 8888
ENTRYPOINT ["/usr/bin/java", "-Xmx128m", "-Xms128m"]
CMD ["-jar", "common-config-server.jar"]
This is the file Dockerfile of lucky-word-client:
FROM frolvlad/alpine-oraclejdk8
ADD build/libs/lucky-word-client-0.0.1-SNAPSHOT.jar lucky-word-client.jar
EXPOSE 8080
ENTRYPOINT ["/usr/bin/java", "-Xmx128m", "-Xms128m"]
CMD ["-jar", "-Dspring.profiles.active=italian", "lucky-word-client.jar"]
This is the deployment of lucky-word-server:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: lucky-server
spec:
  selector:
    matchLabels:
      app: lucky-server
  replicas: 1
  template:
    metadata:
      labels:
        app: lucky-server
    spec:
      containers:
        - name: lucky-server
          image: lucky-server-img
          imagePullPolicy: Never
          ports:
            - containerPort: 8888
This is the service of lucky-word-server:
kind: Service
apiVersion: v1
metadata:
  name: lucky-server
spec:
  selector:
    app: lucky-server
  ports:
    - protocol: TCP
      port: 8888
  type: NodePort
This is the deployment of lucky-word-client:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: lucky-client
spec:
  selector:
    matchLabels:
      app: lucky-client
  replicas: 1
  template:
    metadata:
      labels:
        app: lucky-client
    spec:
      containers:
        - name: lucky-client
          image: lucky-client-img
          imagePullPolicy: Never
          ports:
            - containerPort: 8080
This is the service of lucky-word-client:
kind: Service
apiVersion: v1
metadata:
  name: lucky-client
spec:
  selector:
    app: lucky-client
  ports:
    - protocol: TCP
      port: 8080
  type: NodePort
As #suren stated, you should specify the target port in the service definition.
You should also change the endpoint URL of the server that the client calls to reflect the Minikube host IP. There are a couple of ways to achieve that; the naive method would be as follows.
Change your Kubernetes Service for the server to have a static NodePort as follows:
kind: Service
apiVersion: v1
metadata:
  name: lucky-server
spec:
  selector:
    app: lucky-server
  ports:
    - protocol: TCP
      port: 8080
      nodePort: 32002
  type: NodePort
And in your client code just change the endpoint of the server as follows:
http://{minikube_host_ip}:32002 (replace {minikube_host_ip} with the IP address of the Minikube host).
But if you don't want to hard-code the Minikube IP, you can inject it as an environment variable in your Kubernetes deployment script, and that environment variable should be picked up in your Dockerfile or application; a sketch follows below.
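For illustration only (a sketch; SERVER_URL is a hypothetical variable name and 192.168.49.2 stands in for the output of minikube ip), the client Deployment could pass the server endpoint in like this, and the Spring application would read it instead of a hard-coded URL:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: lucky-client
spec:
  selector:
    matchLabels:
      app: lucky-client
  replicas: 1
  template:
    metadata:
      labels:
        app: lucky-client
    spec:
      containers:
        - name: lucky-client
          image: lucky-client-img
          imagePullPolicy: Never
          ports:
            - containerPort: 8080
          env:
            # Hypothetical variable; replace the value with http://<minikube ip>:32002
            - name: SERVER_URL
              value: "http://192.168.49.2:32002"
Inside the cluster, though, the simpler option is usually to skip the NodePort entirely and call the server by its Service name (for example http://lucky-server:8888), as described elsewhere on this page.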
Your services don't specify a targetPort, so it isn't explicit where requests should go inside the Pod. You need to specify the targetPort parameter. It should look like this:
kind: Service
apiVersion: v1
metadata:
  name: lucky-server
spec:
  selector:
    app: lucky-server
  ports:
    - protocol: TCP
      targetPort: 8888 # this is your container port, where the requests are sent
      port: 8888       # this is the service port; it is reachable at svc-ip:8888
  type: NodePort
You should do the same with the other service (see the sketch below). Also check the service ports: they are now 8080 and 8888, so make sure you are not hitting them on port 80.
There might be more issues, but for now these are the ones that certainly cause a problem.
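For completeness, a sketch of the matching client Service with an explicit targetPort (assuming the client container listens on 8080, as its Deployment declares) might look like this:
kind: Service
apiVersion: v1
metadata:
  name: lucky-client
spec:
  selector:
    app: lucky-client
  ports:
    - protocol: TCP
      targetPort: 8080 # container port of lucky-word-client
      port: 8080       # service port, reachable at svc-ip:8080
  type: NodePort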