Service not reachable in Kubernetes - mongodb

For my research project, I need to deploy Graylog in our Kubernetes infrastructure. Graylog uses MongoDB which is deployed on the same cluster.
kubectl describe svc -n mongodb
Name: mongodb
Namespace: mongodb
Labels: app=mongodb
Annotations: Selector: app=mongodb
Type: ClusterIP
IP: 10.109.195.209
Port: 27017 27017/TCP
TargetPort: 27017/TCP
Endpoints: 10.244.2.21:27017
Session Affinity: None
Events: <none>
I use the deployment script below to deploy Graylog:
apiVersion: v1
kind: Service
metadata:
  name: graylog3
spec:
  type: NodePort
  selector:
    app: graylog-deploy
  ports:
    - name: "9000"
      port: 9000
      targetPort: 9000
      nodePort: 30003
    - name: "12201"
      port: 12201
      targetPort: 12201
      nodePort: 30004
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: graylog-deploy
  labels:
    app: graylog-deploy
spec:
  replicas: 1
  selector:
    matchLabels:
      name: graylog-deploy
  template:
    metadata:
      labels:
        name: graylog-deploy
        app: graylog-deploy
    spec:
      containers:
        - name: graylog3
          image: graylog/graylog:3.0
          env:
            - name: GRAYLOG_PASSWORD_SECRET
              value: g0ABP9MJnWCjWtBX9JHFgjKAmD3wGXP3E0JQNOKlquDHnCn5689QAF8rRL66HacXLPA6fvwMY8BZoVVw0JqHnSAZorDDOdCk
            - name: GRAYLOG_ROOT_PASSWORD_SHA2
              value: 8c6976e5b5410415bde908bd4dee15dfb167a9c873fc4bb8a81f6f2ab448a918
            - name: GRAYLOG_HTTP_EXTERNAL_URI
              value: http://Master_IP:30003/
            - name: GRAYLOG_ELASTICSEARCH_HOSTS
              value: http://elasticsearch:9200
            - name: GRAYLOG_MONGODB_URI
              value: mongodb://mongodb:27017/graylog
          ports:
            - containerPort: 9000
            - containerPort: 12201
Graylog is throwing an exception:
Caused by: java.net.UnknownHostException: mongodb
But when deploying it using the MongoDB IP, it runs successfully.
I am new to Kubernetes and don't know what I am doing wrong here.
Thanks.

Since your mongodb is running in a different namespace called mongodb, you need to provide the FQDN for the service in that namespace. Your Graylog is in the default namespace.
So, to access the mongodb service in the mongodb namespace, change your YAML as below:
- name: GRAYLOG_MONGODB_URI
  value: mongodb://mongodb.mongodb:27017/graylog
Here is a link that might provide more insight.
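If you want to confirm the name resolves before redeploying Graylog, a quick check (just a sketch, assuming standard cluster DNS and that you can run a throwaway busybox pod) is:
kubectl run dns-test --rm -it --restart=Never --image=busybox -- nslookup mongodb.mongodb
# The fully qualified form mongodb.mongodb.svc.cluster.local should resolve as well.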

Related

Can we create multiple instances of a Kubernetes cluster & access it through different external IPs?

In brief, I want to create multiple instances of a pod containing MongoDB & Mongo-express containers, which can be accessed through different external IPs. Even when we change something from the Mongo-express GUI, the change should be reflected only in that particular instance's Mongo database, not the others. That means each instance should also create a separate volume.
My current YAML code is below (right now it creates a single instance only, which can be accessed through localhost):
apiVersion: v1
kind: ConfigMap
metadata:
  name: mongodb-configmap
data:
  database_url: mongodb-service
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mongodb-statefulset
spec:
  serviceName: "mongodb-service"
  selector:
    matchLabels:
      app: mongodb
  template:
    metadata:
      labels:
        app: mongodb
    spec:
      containers:
        - name: mongodb
          image: mongo
          ports:
            - containerPort: 27017
          env:
            - name: MONGO_INITDB_ROOT_USERNAME
              value: "admin"
            - name: MONGO_INITDB_ROOT_PASSWORD
              value: "pass"
          volumeMounts:
            - name: mongo-initdb
              mountPath: /docker-entrypoint-initdb.d
      restartPolicy: Always
      volumes:
        - name: mongo-initdb
          configMap:
            name: mongo-initdb
---
apiVersion: v1
kind: Service
metadata:
  name: mongodb-service
spec:
  clusterIP: None
  selector:
    app: mongodb
  ports:
    - port: 27017
      targetPort: 27017
      protocol: TCP
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mongo-express
  labels:
    app: mongo-express
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mongo-express
  template:
    metadata:
      labels:
        app: mongo-express
    spec:
      hostname: mongo-express
      subdomain: mongodb-service
      containers:
        # Container for Mongo-Service
        - name: mongo-express
          image: mongo-express
          ports:
            - containerPort: 8081
          env:
            - name: ME_CONFIG_MONGODB_ADMINUSERNAME
              value: "admin"
            - name: ME_CONFIG_MONGODB_ADMINPASSWORD
              value: "pass"
            - name: ME_CONFIG_MONGODB_SERVER
              valueFrom:
                configMapKeyRef:
                  name: mongodb-configmap
                  key: database_url
---
apiVersion: v1
kind: Service
metadata:
  name: mongo-express-service
spec:
  selector:
    app: mongo-express
  type: LoadBalancer
  ports:
    - port: 8081
      targetPort: 8081
      protocol: TCP
      name: port1
You don't need to create multiple instances of a cluster, you just need 2 instances of your application: essentially, two copies of your application that are isolated in different namespaces, both reachable from the outside through an externalIP.
Let's call these namespaces instance-1 and instance-2. This way, you can even use the same resource names. It works for pods because their IP addresses will differ, and because the internal DNS name uses the namespace as one of its components: e.g. mongo-express-service.instance-1 is a different DNS name than mongo-express-service.instance-2. The same goes for ConfigMaps, since they are namespaced as well; you can use the same name in both namespaces if you want. The same is true for your Deployment and StatefulSet.
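As a concrete illustration (assuming the instance-1 and instance-2 namespaces and the default cluster.local domain), the same Service name ends up with two distinct in-cluster DNS names:
# Same Service name, two namespaces, two different DNS names:
mongo-express-service.instance-1.svc.cluster.local   # copy in instance-1
mongo-express-service.instance-2.svc.cluster.local   # copy in instance-2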
That being said, you can very easily set an externalIP on your LoadBalancer type service (mongo-express) by adding the following field to your Service manifest:
apiVersion: v1
kind: Service
metadata:
  name: mongo-express-service
spec:
  selector:
    app: mongo-express
  type: LoadBalancer
  ports:
    - port: 8081
      targetPort: 8081
      protocol: TCP
      name: port1
  externalIPs:
    - 0.0.0.0 #<======= CHANGE_ME
Simply create two separate namespaces:
kubectl create namespace instance-1
kubectl create namespace instance-2
Then configure the right externalIPs field for each Service, e.g.:
apiVersion: v1
kind: Service
metadata:
  name: mongo-express-service
spec:
  selector:
    app: mongo-express
  type: LoadBalancer
  ports:
    - port: 8081
      targetPort: 8081
      protocol: TCP
      name: port1
  externalIPs:
    - 1.1.1.1 #<======= CHANGE_ME
and for your second instance
apiVersion: v1
kind: Service
metadata:
  name: mongo-express-service
spec:
  selector:
    app: mongo-express
  type: LoadBalancer
  ports:
    - port: 8081
      targetPort: 8081
      protocol: TCP
      name: port1
  externalIPs:
    - 2.2.2.2 #<======= CHANGE_ME
Now, let's assume you have two copies of your YAML file: the first, my-application-1.yaml, has externalIPs set to 1.1.1.1 and all other resources the same; the second, my-application-2.yaml, has externalIPs set to 2.2.2.2 and all the rest of the resources the same.
Now you can apply all components of your application once to your instance-1 namespace.
# Assuming this is the service with externalIP 1.1.1.1
kubectl apply -n instance-1 -f my-application-1.yaml
And the same components, with the modified externalIPs, to the other namespace:
# Assuming this is the service with externalIP 2.2.2.2
kubectl apply -n instance-2 -f my-application-2.yaml
Now you can reach your services easily on external IPs at the endpoints:
1.1.1.1:8081 # This is instance 1
2.2.2.2:8081 # This is instance 2
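To confirm which external IP each copy actually got, you can list the Service in each namespace (a quick check, assuming the namespaces above):
kubectl get svc mongo-express-service -n instance-1 -o wide   # should show 1.1.1.1
kubectl get svc mongo-express-service -n instance-2 -o wide   # should show 2.2.2.2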
I hope this helps!

Service Endpoint not created although container port is online

I have a simple Service that connects to a port on a container inside a pod.
All pretty straightforward.
This was working too, but out of nowhere the endpoint is no longer created for port 18080.
So I began to investigate and looked at this question, but nothing there helped.
The container is up, no errors/events, all green.
I can also make the request against the pod's IP on port 18080 from another container inside the cluster, so the endpoint should be reachable for the service.
I can't see errors in:
journalctl -u snap.microk8s.daemon-*
I am using microk8s v1.20.
Where else can I debug this situation?
I am out of tools.
Service:
kind: Service
apiVersion: v1
metadata:
name: aedi-service
spec:
selector:
app: server
ports:
- name: aedi-host-ws #-port
port: 51056
protocol: TCP
targetPort: host-ws-port
- name: aedi-http
port: 18080
protocol: TCP
targetPort: fcs-http
Deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: server-deployment
  labels:
    app: server
spec:
  replicas: 1
  selector:
    matchLabels:
      app: server
  template:
    metadata:
      labels:
        app: server
        srv: os-port-mapping
        name: dns-service
    spec:
      hostname: fcs
      containers:
        - name: fcs
          image: '{{$fcsImage}}'
          imagePullPolicy: {{$pullPolicy}}
          ports:
            - containerPort: 18080
Service Description:
Name: aedi-service
Namespace: fcs-only
Labels: app.kubernetes.io/managed-by=Helm
Annotations: meta.helm.sh/release-name: fcs-only
meta.helm.sh/release-namespace: fcs-only
Selector: app=server
Type: ClusterIP
IP Families: <none>
IP: 10.152.183.247
IPs: 10.152.183.247
Port: aedi-host-ws 51056/TCP
TargetPort: host-ws-port/TCP
Endpoints: 10.1.116.70:51056
Port: aedi-http 18080/TCP
TargetPort: fcs-http/TCP
Endpoints:
Session Affinity: None
Events: <none>
Pod Info:
NAME READY STATUS RESTARTS AGE LABELS
server-deployment-76b5789754-q48xl 6/6 Running 0 23m app=server,name=dns-service,pod-template-hash=76b5789754,srv=os-port-mapping
kubectl get svc aedi-service -o wide:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
aedi-service ClusterIP 10.152.183.247 <none> 443/TCP,1884/TCP,51052/TCP,51051/TCP,51053/TCP,51056/TCP,18080/TCP,51055/TCP 34m app=server
Your service spec refers to a port named "fcs-http", but that name was not declared in the deployment. Try:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: server-deployment
...
          ports:
            - containerPort: 18080
              name: fcs-http # <-- add the name here
...
The service configuration is wrong:
- name: aedi-http
  port: 18080           # -----> this is the port the Service exposes; it is not related to the container port
  protocol: TCP
  targetPort: fcs-http  # -----> this should be 18080, to match the container port
If you still want to use a name instead of a port number, you should define the name in the Deployment YAML too, like below:
containers:
  - name: fcs
    image: '{{$fcsImage}}'
    imagePullPolicy: {{$pullPolicy}}
    ports:
      - containerPort: 18080
        name: fcs-http
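Either way, once targetPort resolves to a port that actually exists on the pod, the missing endpoint should appear. A quick way to confirm (assuming the Service and namespace from the question) is:
kubectl get endpoints aedi-service -n fcs-only
# Both ports should now list the pod IP, e.g. 10.1.116.70:51056 and 10.1.116.70:18080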

unable to access the application deployed on kubernetes cluster using kubernetes playground

I have a 3 node cluster created on kubernetes playground
The 3 nodes as seen on the UI are :
192.168.0.13 : Master
192.168.0.12 : worker
192.168.0.11 : worker
I have a front end app connected to backend mysql.
The deployment and service definition for front end is as below.
apiVersion: v1
kind: Service
metadata:
  name: springboot-app
spec:
  type: NodePort
  ports:
    - port: 8080
  selector:
    app: springboot-app
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: springboot-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: springboot-app
  template:
    metadata:
      labels:
        app: springboot-app
    spec:
      containers:
        - image: chinmayeepdas/springbootapp:1.0
          name: springboot-app
          env:
            - name: DATABASE_HOST
              value: demo-mysql
            - name: DATABASE_NAME
              value: chinmayee
            - name: DATABASE_USER
              value: root
            - name: DATABASE_PASSWORD
              value: root
            - name: DATABASE_PORT
              value: "3306"
          ports:
            - containerPort: 8080
              name: app-port
My pods for the UI and backend are up and running.
[node1 ~]$ kubectl describe service springboot-app
Name: springboot-app
Namespace: default
Labels: <none>
Annotations: <none>
Selector: app=springboot-app
Type: NodePort
IP: 10.96.187.226
Port: <unset> 8080/TCP
TargetPort: 8080/TCP
NodePort: <unset> 30373/TCP
Endpoints: 10.32.0.2:8080,10.32.0.3:8080,10.40.0.3:8080
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
Now when I do
http://192.168.0.12:30373/employee/getAll
I don't see any result; I get "This site can't be reached".
What IP address do I have to give in the URL?
Try this solution:
kubectl proxy --address 0.0.0.0
Then access it as http://localhost:30373/employee/getAll
or maybe:
http://localhost:8080/employee/getAll
Let me know if this fixes the access issue and which one works.
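As an additional sanity check, independent of the proxy, you can curl the NodePort directly from one of the nodes (using the node IPs and NodePort from the question), since a NodePort should be opened on every node:
curl http://192.168.0.12:30373/employee/getAll
curl http://192.168.0.13:30373/employee/getAll   # the master node exposes the NodePort as well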

Kubernetes on Spinnaker - Interservice communication

I have a sample application running on a Kubernetes cluster: two microservices, one a MongoDB container and the other a Java Spring Boot container.
The Spring Boot container interacts with the MongoDB container through a service and stores data in the MongoDB container.
The specs are provided below.
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: empappdepl
  labels:
    name: empapp
spec:
  replicas: 1
  template:
    metadata:
      labels:
        name: empapp
    spec:
      containers:
        - resources:
            limits:
              cpu: 0.5
          image: 11.168.xx.xx:5000/employee:latest
          imagePullPolicy: IfNotPresent
          name: wsemp
          ports:
            - containerPort: 8080
              name: wsemp
          command: ["java","-Dspring.data.mongodb.uri=mongodb://mongoservice/microservices", "-Djava.security.egd=file:/dev/./urandom","-jar","/app.jar"]
      imagePullSecrets:
        - name: myregistrykey
---
apiVersion: v1
kind: Service
metadata:
  labels:
    name: empwhatever
  name: empservice
spec:
  ports:
    - port: 8080
      nodePort: 30062
  type: NodePort
  selector:
    name: empapp
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: mongodbdepl
  labels:
    name: mongodb
spec:
  replicas: 1
  template:
    metadata:
      labels:
        name: mongodb
    spec:
      containers:
        - resources:
            limits:
              cpu: 1
          image: mongo
          imagePullPolicy: IfNotPresent
          name: mongodb
          ports:
            - containerPort: 27017
---
apiVersion: v1
kind: Service
metadata:
  labels:
    name: mongowhatever
  name: mongoservice
spec:
  ports:
    - port: 27017
      targetPort: 27017
      protocol: TCP
  type: NodePort
  selector:
    name: mongodb
I would like to know how this communication can be accomplished in spinnaker since it creates its own labels and selectors.
Thanks,
This is how it needs to be done.
Each load balancer created for the application is the service. So for the mongodb application, after a load balancer is created with the NodePort settings, get the name of the service, e.g. mongodb-dev. A server group for mongodb also needs to be created.
Then, when creating the employee server group, you need to specify the command arguments one per line for that container, as mentioned here:
https://github.com/spinnaker/spinnaker/issues/2021#issuecomment-334885467
"java","-Dspring.data.mongodb.uri=mongodb://name-of-mongodb-service/microservices", "-Djava.security.egd=file:/dev/./urandom","-jar","/app.jar"
Now, when the employee and mongodb pods start, they are able to get the mapping and communicate properly.

How to make two Kubernetes Services talk to each other?

Currently, I have working K8s API pods in a K8s service that connects to a K8s Redis service, which has K8s pods of its own. The problem is that I am using NodePort, meaning BOTH are exposed to the public. I only want the API accessible to the public. The issue is that if I make the Redis service not public, the API can't see it. Is there a way to connect two Services without exposing one to the public?
This is my API service yaml:
apiVersion: v1
kind: Service
metadata:
  name: app-api-svc
spec:
  selector:
    app: app-api
    tier: api
  ports:
    - protocol: TCP
      port: 5000
      nodePort: 30400
  type: NodePort
And this is my Redis service yaml:
apiVersion: v1
kind: Service
metadata:
  name: app-api-redis-svc
spec:
  selector:
    app: app-api-redis
    tier: celery_broker
  ports:
    - protocol: TCP
      port: 6379
      nodePort: 30537
  type: NodePort
First, configure the Redis service as a ClusterIP service. It will be private, visible only to other services. This can be done by removing the line with the type option.
apiVersion: v1
kind: Service
metadata:
  name: app-api-redis-svc
spec:
  selector:
    app: app-api-redis
    tier: celery_broker
  ports:
    - protocol: TCP
      port: 6379
      targetPort: [the port exposed by the Redis pod]
Finally, when you configure the API to reach Redis, the address should be app-api-redis-svc:6379
And that's all. I have a lot of services communicating with each other in this way. If this doesn't work for you, let me know in the comments.
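How the API actually picks up that address depends on your code, but one common pattern is to pass the service name in via environment variables in the API Deployment. A minimal sketch (the variable names REDIS_HOST and REDIS_PORT are just illustrative; use whatever your application reads):
        env:
          - name: REDIS_HOST   # illustrative name; adapt to your application's configuration
            value: "app-api-redis-svc"
          - name: REDIS_PORT
            value: "6379"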
I'm going to try to take the best from all answers and my own research and make a short guide that I hope you will find helpful:
1. Test connectivity
Connect to a different pod, e.g. a Ruby pod:
kubectl exec -it some-pod-name -- /bin/sh
Verify it can ping to the service in question:
ping redis
Can it connect to the port? (I found telnet did not work for this)
nc -zv redis 6379
2. Verify your service selectors are correct
If your service config looks like this:
kind: Service
apiVersion: v1
metadata:
  name: redis
  labels:
    app: redis
    role: master
    tier: backend
spec:
  ports:
    - port: 6379
      targetPort: 6379
  selector:
    app: redis
    role: master
    tier: backend
Verify that those selectors are also set on your pods:
kubectl get pods --selector=app=redis,role=master,tier=backend
Confirm that your service is tied to your pods by running:
$> kubectl describe service redis
Name: redis
Namespace: default
Labels: app=redis
role=master
tier=backend
Annotations: <none>
Selector: app=redis,role=master,tier=backend
Type: ClusterIP
IP: 10.47.250.121
Port: <unset> 6379/TCP
Endpoints: 10.44.0.16:6379
Session Affinity: None
Events: <none>
Check the Endpoints: field and confirm it's not blank.
More info can be found at:
https://kubernetes.io/docs/tasks/debug-application-cluster/debug-service/#my-service-is-missing-endpoints
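A quick way to do that same check from the command line (assuming the Service is named redis) is:
kubectl get endpoints redis
# If the ENDPOINTS column is empty or <none>, the selector does not match any ready pod.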
I'm not sure about Redis, but I have a similar application: a Java web application running as a pod that is exposed to the outside world through a NodePort, and a MongoDB container running as a pod.
In the webapp deployment specification, I map the app to the MongoDB service by passing the service name as a parameter. I have pasted the specification below; you can modify it accordingly. There should be a similar parameter for Redis as well, where you would use the service name, which is "mongoservice" in my case.
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: empappdepl
  labels:
    name: empapp
spec:
  replicas: 1
  template:
    metadata:
      labels:
        name: empapp
    spec:
      containers:
        - resources:
            limits:
              cpu: 0.2
          image: registryip:5000/employee:1
          imagePullPolicy: IfNotPresent
          name: wsemp
          ports:
            - containerPort: 8080
              name: wsemp
          command: ["java","-Dspring.data.mongodb.uri=mongodb://mongoservice/microservices", "-Djava.security.egd=file:/dev/./urandom","-jar","/app.jar"]
      imagePullSecrets:
        - name: myregistrykey
---
apiVersion: v1
kind: Service
metadata:
  labels:
    name: empwhatever
  name: empservice
spec:
  ports:
    - port: 8080
      targetPort: 8080
      protocol: TCP
      name: http
      nodePort: 30062
  type: NodePort
  selector:
    name: empapp
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: mongodbdepl
  labels:
    name: mongodb
spec:
  replicas: 1
  template:
    metadata:
      labels:
        name: mongodb
    spec:
      containers:
        - resources:
            limits:
              cpu: 0.3
          image: mongo
          imagePullPolicy: IfNotPresent
          name: mongodb
          ports:
            - containerPort: 27017
---
apiVersion: v1
kind: Service
metadata:
  labels:
    name: mongowhatever
  name: mongoservice
spec:
  ports:
    - port: 27017
      targetPort: 27017
      protocol: TCP
  selector:
    name: mongodb
Note that the mongodb service doesn't need to be exposed as a NodePort.
Kubernetes enables inter-service communication by allowing services to communicate with each other using their service names.
In your scenario, the Redis service should be accessible from other services at
app-api-redis-svc.default:6379. Here default is the namespace under which your service is running.
This internally routes your requests to the Redis pod running on the target container port.
Check out this link for the different service discovery options provided by Kubernetes.
Hope it helps