Problems communicating Kubernetes pods with external endpoints (REST services, SQL Server, Kafka, Redis, etc.)

I have a Kubernetes cluster with one node. I have dockerized Java services that access REST services, SQL Server, Kafka and other endpoints outside the Kubernetes cluster but in the same Google Cloud network.
The reason I'm asking for help is that I can't connect the dockerized Java services running inside the pods to the external endpoints mentioned above.
I tried Flannel as the network plugin before, but I have since reset the cluster and installed Calico, without positive results.
Pods of the cluster running by default: (output not shown)
Cluster nodes: (output not shown)
I deploy some dockerized Java services as CronJobs and others as Deployments. To let these CronJobs and Deployments communicate with external endpoints like Kafka, SQL Server, etc., I use Services.
An example of each of them:
Cronjob:
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: cronjob-name
spec:
  schedule: "*/5 * * * *"
  jobTemplate:
    spec:
      template:
        metadata:
          labels:
            cronjob1: cronjob-name
        spec:
          containers:
          - image: repository/repository-name:service-name:version
            imagePullPolicy: ""
            name: service-name
            resources: {}
          restartPolicy: OnFailure
      selector:
        matchLabels:
          cronjob1: cronjob-name
Deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    deployment1: deployment_name
  name: deployment_name
spec:
  replicas: 1
  selector:
    matchLabels:
      deployment1: deployment_name
  strategy: {}
  template:
    metadata:
      labels:
        deployment1: deployment_name
    spec:
      containers:
      - image: repository/repository-name:service-name:version
        imagePullPolicy: ""
        name: service-name
        resources: {}
      imagePullSecrets:
      - name: dockerhub
      restartPolicy: Always
      serviceAccountName: ""
      volumes: null
status: {}
Service:
apiVersion: v1
kind: Service
metadata:
  name: sqlserver
spec:
  type: ClusterIP
  selector:
    cronjob1: cronjob1
    deployment1: deployment1
  ports:
  - protocol: TCP
    port: 1433
    targetPort: 1433
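For context, the pattern the Kubernetes documentation describes for reaching an endpoint that lives outside the cluster is a Service without a selector paired with a manually created Endpoints object of the same name; a minimal sketch (the IP below is only a placeholder for the real SQL Server address):
apiVersion: v1
kind: Service
metadata:
  name: sqlserver
spec:
  ports:
  - protocol: TCP
    port: 1433
    targetPort: 1433
---
apiVersion: v1
kind: Endpoints
metadata:
  name: sqlserver
subsets:
- addresses:
  - ip: 10.128.0.5   # placeholder: the SQL Server's internal IP in the Google Cloud network
  ports:
  - port: 1433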
My problem is that from the Java services I can't connect to, for example, the SQL Server instance. I've checked the DNS and the Calico pod logs and there were no errors. I've also tried opening a shell in the pods while they are running, and from inside a pod I can't even telnet to the SQL Server instance.
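A connectivity test like that telnet one can also be reproduced from a throwaway pod; a minimal sketch (the IP is a placeholder for the SQL Server address):
kubectl run net-test --rm -it --image=busybox --restart=Never -- nc -vz 10.128.0.5 1433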
Could you give me some idea about what the problem is, or what tests I could run?
Thank you very much!

I resolved the problem by configuring the Kubernetes cluster again, but with Calico instead of Flannel. Thanks for the replies. I hope this helps anyone else.
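For anyone rebuilding a kubeadm-based cluster the same way, the sequence looks roughly like this (a sketch, assuming Calico's default pod CIDR and the standard Calico manifest URL):
sudo kubeadm reset
sudo kubeadm init --pod-network-cidr=192.168.0.0/16
kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml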

Related

I expose my pod in Kubernetes but I can't seem to establish a connection with it

I am trying to expose a deployment I made on minikube:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deployment-test
  labels:
    app: debian
spec:
  replicas: 1
  selector:
    matchLabels:
      app: debian
  strategy: {}
  template:
    metadata:
      labels:
        app: debian
    spec:
      containers:
      - image: agracia10/debian_bash:latest
        name: debian
        ports:
        - containerPort: 8006
        resources: {}
      restartPolicy: Always
status: {}
I decided to follow what is written here.
I tried to expose the deployment using the following command:
kubectl expose pod deployment-test-8497d6f458-xxhgm --type=NodePort --port=8080 --target-port=80
but when I try to access the service created by the expose command, using the URL provided by
minikube service deployment-test-8497d6f458-xxhgm --url
Packet Sender throws an error when trying to connect to the service (see the Packet Sender log).
I'm not really sure what the reason for this could be. I think it has something to do with the fact that, when I get the services, nothing shows in the external IP field. Also, when I retrieve the node IP using minikube ip it gives one address, but minikube service --url gives the 127.0.0.1 address. In any case, using either one does not work.
It's not working due to a port configuration mismatch.
Your deployment's container is listening on port 8006, but you exposed port 8080 and set the target port to 80 (--target-port=80), so the traffic never reaches the container.
The ideal flow of traffic goes like:
Service (NodePort, ClusterIP or any other type) > Deployment > Pods
Below is an example of a deployment and service:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: blog-app-server-instance
  labels:
    app: blog-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: blog-app
  template:
    metadata:
      labels:
        app: blog-app
    spec:
      containers:
      - name: blog-app-server
        image: agracia10/debian_bash:latest
        ports:
        - containerPort: 8006
---
apiVersion: v1
kind: Service
metadata:
  name: blog-app-service
  labels:
    app: blog-app
spec:
  selector:
    app: blog-app
  type: NodePort
  ports:
  - port: 80
    nodePort: 31364
    targetPort: 8006
    protocol: TCP
    name: http
So the things I have changed are the image and the target port.
Once your NodePort service is up and running, you send requests to port 80 (or node port 31364), and it will redirect them internally to the target port, 8006, which is what the container listens on.
With the command you used, you exposed your deployment on the wrong target port:
kubectl expose pod deployment-test-8497d6f458-xxhgm --type=NodePort --port=8080 --target-port=80
Ideally the target port should be 8006.
As far as I know, the simplest way to expose the deployment through a service is to run the command below; note that you expose the deployment, not the pod:
kubectl expose deployment deployment-test --type=NodePort --port=80 --target-port=8006
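Once the deployment is exposed, you can check the generated service and hit it directly; a quick sketch (the node port will differ on your cluster):
kubectl get svc deployment-test
minikube service deployment-test --url
curl "$(minikube service deployment-test --url)"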

Can we create a Service to link two Pods from different Deployments?

My application has two Deployments, each with a Pod.
Can I create a Service to distribute load across these 2 Pods, which are part of different Deployments?
If so, how?
Yes, it is possible to achieve. A good explanation of how to do it can be found in the Kubernetes documentation. However, keep in mind that both Deployments should provide the same functionality, as the output should have the same format.
A Kubernetes Service is an abstraction which defines a logical set of Pods running somewhere in your cluster, that all provide the same functionality. When created, each Service is assigned a unique IP address (also called clusterIP). This address is tied to the lifespan of the Service, and will not change while the Service is alive. Pods can be configured to talk to the Service, and know that communication to the Service will be automatically load-balanced out to some pod that is a member of the Service.
Based on the example from the documentation:
1. nginx Deployment. Keep in mind that a Deployment can have more than one label.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  selector:
    matchLabels:
      run: nginx
      env: dev
  replicas: 2
  template:
    metadata:
      labels:
        run: nginx
        env: dev
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
2. nginx-second Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-second
spec:
  selector:
    matchLabels:
      run: nginx
      env: prod
  replicas: 2
  template:
    metadata:
      labels:
        run: nginx
        env: prod
    spec:
      containers:
      - name: nginx-second
        image: nginx
        ports:
        - containerPort: 80
Now, to pair Deployments with Services, you have to use a selector based on the Deployments' labels. Below you can find 2 Service YAMLs: nginx-service, which points to both Deployments, and nginx-service-1, which points only to the nginx-second Deployment.
## Both Deployments
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  ports:
  - port: 80
    protocol: TCP
  selector:
    run: nginx
---
### To nginx-second deployment
apiVersion: v1
kind: Service
metadata:
  name: nginx-service-1
spec:
  ports:
  - port: 80
    protocol: TCP
  selector:
    env: prod
You can verify that a Service binds to a Deployment by checking its endpoints.
$ kubectl get pods -l run=nginx -o yaml | grep podIP
    podIP: 10.32.0.9
    podIP: 10.32.2.10
    podIP: 10.32.0.10
    podIP: 10.32.2.11
$ kubectl get ep nginx-service
NAME            ENDPOINTS                                               AGE
nginx-service   10.32.0.10:80,10.32.0.9:80,10.32.2.10:80 + 1 more...    3m33s
$ kubectl get ep nginx-service-1
NAME              ENDPOINTS                     AGE
nginx-service-1   10.32.0.10:80,10.32.2.11:80   3m36s
Yes, you can do that.
Add a common label key/value pair to both Deployments' pod specs and use that common label as the selector in the Service definition.
With such a Service, requests will be load-balanced across all the matching Pods.
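For instance (the names here are placeholders), if both Deployments' pod templates carry the shared label app: my-app, a single Service selecting only that key will pick up all of their Pods:
apiVersion: v1
kind: Service
metadata:
  name: my-app-service
spec:
  selector:
    app: my-app   # shared label present in both Deployments' pod templates
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80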

How to get the pods into a Running state

I'm trying to set up Cassandra on a Kubernetes cluster made of three virtual machines, using two different files (Deployment and Service). To apply each of them I use the command
kubectl create -f file.yaml
The Service file works perfectly, but when I start the other one with three replicas, the state of the pods is CrashLoopBackOff instead of Running.
The configuration of the Deployment file is the following:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: cassandra
  labels:
    app: cassandra
spec:
  replicas: 3
  selector:
    matchLabels:
      app: cassandra
  template:
    metadata:
      labels:
        app: cassandra
    spec:
      containers:
      - name: cassandra
        image: gcr.io/google_containers/cassandra:v5
        ports:
        - containerPort: 9042
And this is the Service file:
apiVersion: v1
kind: Service
metadata:
  labels:
    app: cassandra
  name: cassandra
spec:
  type: NodePort
  ports:
  - port: 9042
  selector:
    app: cassandra
I appreciate any help on this.
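As a general starting point, the reason behind a CrashLoopBackOff usually shows up in the pod events and in the logs of the previously crashed container, for example:
kubectl get pods -l app=cassandra
kubectl describe pod <pod-name>
kubectl logs <pod-name> --previous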
You shouldn't be using a Deployment for running stateful applications. StatefulSets are recommended for running databases like Cassandra.
Follow the link below for reference: https://kubernetes.io/docs/tutorials/stateful-application/cassandra/
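A minimal sketch of that approach, trimmed down from the linked tutorial (the image is kept from the question as a placeholder; the tutorial uses a newer one and adds persistent storage), pairs a headless Service with a StatefulSet:
apiVersion: v1
kind: Service
metadata:
  name: cassandra
  labels:
    app: cassandra
spec:
  clusterIP: None          # headless service gives each pod a stable DNS name
  ports:
  - port: 9042
  selector:
    app: cassandra
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: cassandra
spec:
  serviceName: cassandra   # must match the headless service above
  replicas: 3
  selector:
    matchLabels:
      app: cassandra
  template:
    metadata:
      labels:
        app: cassandra
    spec:
      containers:
      - name: cassandra
        image: gcr.io/google_containers/cassandra:v5
        ports:
        - containerPort: 9042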

Grafana is not working on a Kubernetes cluster while using a k8s Service

I am trying to set up a very simple monitoring stack for my k8s cluster. I have successfully created the Prometheus pod and it is running fine.
When I tried to create the Grafana pod the same way, it is not accessible through the node port.
My Grafana deployment file is:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: grafana-deployment
  namespace: monitoring
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: grafana-server
    spec:
      containers:
      - name: grafana
        image: grafana/grafana:5.1.0
        ports:
        - containerPort: 3000
And the Service file is:
apiVersion: v1
kind: Service
metadata:
  name: grafana-service
  namespace: monitoring
spec:
  selector:
    app: grafana-server
  type: NodePort
  ports:
  - port: 3000
    targetPort: 3000
Note: when I create a simple Docker container on the same host using the same image, it works fine.
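For comparison, the standalone container was started along the lines of the following (assuming the default Grafana port):
docker run -d -p 3000:3000 grafana/grafana:5.1.0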
It turned out that my server provider had not opened these ports (like 3000 for Grafana or 5601 for Kibana). I never thought of this, since I have been using these servers for quite a long time and never faced such a blocker; they implemented these rules recently.
After getting the ports approved, I tried the same config again and it worked like a charm.
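A quick external check for this kind of issue, before digging into Kubernetes itself, is to read back the allocated node port and probe it from outside the cluster (assuming nc is available on the client machine):
kubectl -n monitoring get svc grafana-service   # note the node port in the 3000:3XXXX/TCP column
nc -vz <node-public-ip> <node-port>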

How to expose an "election-based master and secondaries" service outside the Kubernetes cluster?

I've been trying to implement a service inside Kubernetes where each Pod needs to be accessible from outside the cluster.
The topology of my service is simple: 3 members, one of them acting as master at any time (election-based); writes go to the primary; reads go to the secondaries. This is a MongoDB replica set, by the way.
They work with no issues inside the Kubernetes cluster, but from outside the only thing I have is a NodePort service that load-balances incoming connections to one of them, while I need to access each one of them separately, depending on what I want to do from my client (write or read).
What kind of Kubernetes resource should I use to give individual access to each one of the members of my service?
In order to access every pod from outside, you can create a separate Service for each pod and use the NodePort type.
Because a Service uses selectors to find the available backends, you can create just one Service for the master:
apiVersion: v1
kind: Service
metadata:
  name: my-master
  labels:
    run: my-master
spec:
  type: NodePort
  ports:
  - port: #your-external-port
    targetPort: #your-port-exposed-in-pod
    protocol: TCP
  selector:
    run: my-master
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-master
spec:
  selector:
    matchLabels:
      run: my-master
  replicas: 1
  template:
    metadata:
      labels:
        run: my-master
    spec:
      containers:
      - name: mongomaster
        image: yourcoolimage:latest
        ports:
        - containerPort: #your-port-exposed-in-pod
Also, you can use one Service for all your read-only replicas; this Service will balance requests across all of them.
apiVersion: v1
kind: Service
metadata:
  name: my-replicas
  labels:
    run: my-replicas
spec:
  type: LoadBalancer
  ports:
  - port: #your-external-port
    targetPort: #your-port-exposed-in-pod
    protocol: TCP
  selector:
    run: my-replicas
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-replicas
spec:
  selector:
    matchLabels:
      run: my-replicas
  replicas: 2
  template:
    metadata:
      labels:
        run: my-replicas
    spec:
      containers:
      - name: mongoreplica
        image: yourcoolimage:latest
        ports:
        - containerPort: #your-port-exposed-in-pod
I also suggest you do not expose Pods outside of your network, for security reasons. It would be better to create strict firewall rules to restrict any unexpected connections.
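With the two Services in place, the allocated ports can be read back and used from the client, roughly like this (host and ports are placeholders):
kubectl get svc my-master my-replicas
mongo --host <node-ip> --port <master-node-port>                    # writes: connect to the primary
mongo --host <replicas-external-ip> --port <your-external-port>     # reads: balanced across the secondaries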