I have three services: axon, command, and query. I am trying to run them via Kubernetes. With docker-compose and Swarm everything works perfectly, but somehow it's not working via K8s.
I am getting the following error:
Connecting to AxonServer node axonserver:8124 failed: UNAVAILABLE: Unable to resolve host axonserver
Below are my config files.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: axonserver
  labels:
    app: axonserver
spec:
  serviceName: axonserver
  replicas: 1
  selector:
    matchLabels:
      app: axonserver
  template:
    metadata:
      labels:
        app: axonserver
    spec:
      containers:
        - name: axonserver
          image: axoniq/axonserver
          env:
            - name: AXONSERVER_HOSTNAME
              value: axonserver
          imagePullPolicy: Always
          ports:
            - name: grpc
              containerPort: 8124
              protocol: TCP
            - name: gui
              containerPort: 8024
              protocol: TCP
Here is the command-service YAML, which contains the Service as well:
apiVersion:
kind: Pod
metadata:
  name: command-service
  labels:
    name: peanuts
    app: axonserver
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: axonserver
    spec:
      containers:
        - image: celcin/command-svc
          name: command-service
          ports:
            - containerPort: 8080
      restartPolicy: Always
status: {}
---
apiVersion: v1
kind: Service
metadata:
  name: command-service
  labels:
    name: peanuts
    app: axonserver
spec:
  ports:
    - name: "8081"
      port: 8081
      targetPort: 8080
  selector:
    labels:
      app: axonserver
Here is the last one, the query-service YAML file:
apiVersion: v1
kind: Pod
metadata:
  name: query-service
  labels:
    name: peanuts
    app: axonserver
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: axonserver
    spec:
      containers:
        - image: celcin/query-svc
          name: query-service
          ports:
            - containerPort: 8080
      restartPolicy: Always
---
apiVersion: v1
kind: Service
metadata:
  name: query-service
  labels:
    name: peanuts
    app: axonserver
spec:
  ports:
    - name: "8082"
      port: 8082
      targetPort: 8080
  selector:
    labels:
      app: axonserver
Your YAML is somewhat mixed up. If I understood you correctly, you have three services:
command-service
query-service
axonserver
Your setup should be configured so that command-service and query-service expose their own ports, while both connect to the ports exposed by axonserver. Here is my attempt at your YAML:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: axonserver
  labels:
    app: axonserver
spec:
  serviceName: axonserver
  replicas: 1
  selector:
    matchLabels:
      app: axonserver
  template:
    metadata:
      labels:
        app: axonserver
    spec:
      containers:
        - name: axonserver
          image: axoniq/axonserver
          imagePullPolicy: Always
          ports:
            - name: grpc
              containerPort: 8124
              protocol: TCP
            - name: gui
              containerPort: 8024
              protocol: TCP
The ports you defined in:
ports:
  - name: command-srv
    containerPort: 8081
    protocol: TCP
  - name: query-srv
    containerPort: 8082
    protocol: TCP
are not ports of Axon Server, but of your command-service and query-service and should be exposed in those containers.
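Also note that the error "Unable to resolve host axonserver" suggests there is no Service named axonserver at all: Kubernetes DNS only creates records for Services, and a StatefulSet's serviceName must point at an existing (typically headless) Service. A minimal sketch of what that Service could look like (this is not part of your original manifests, so adjust names and ports to your setup):

apiVersion: v1
kind: Service
metadata:
  name: axonserver
  labels:
    app: axonserver
spec:
  clusterIP: None  # headless Service, as expected by the StatefulSet's serviceName
  selector:
    app: axonserver
  ports:
    - name: grpc
      port: 8124
      targetPort: 8124
    - name: gui
      port: 8024
      targetPort: 8024

With this Service in place, clients in the same namespace can reach Axon Server at axonserver:8124.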
Kind regards,
Simon
Related
Flask deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: flask-api
spec:
  replicas: 1
  selector:
    matchLabels:
      app: flask-api
  template:
    metadata:
      labels:
        app: flask-api
    spec:
      containers:
        - name: flask-api-container
          image: umarrafaqat/flask-app:latest
          imagePullPolicy: Always
          ports:
            - containerPort: 5000
              protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  name: flask-app-service
spec:
  type: ClusterIP
  ports:
    - port: 5000
  selector:
    app: flask-api
React deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: react-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: react-app
  template:
    metadata:
      labels:
        app: react-app
    spec:
      containers:
        - name: react-app-container
          image: umarrafaqat/react-app:latest
          imagePullPolicy: Always
          ports:
            - containerPort: 3000
              protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  name: react-app-service
spec:
  ports:
    - port: 3000
  selector:
    app: react-app
Ingress:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: react-app-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
spec:
  rules:
    - host: localhost
      http:
        paths:
          - backend:
              service:
                name: react-app-service
                port:
                  number: 3000
            path: /
            pathType: Prefix
I want to access this app on localhost but cannot do so. I am running it on minikube.
I have a Flask pod that connects to a MongoDB service through the environment variable SERVICE_HOST (DNS discovery didn't work for some reason). When I change something in the MongoDB service and re-apply it, the Flask pod can no longer connect to the service, since the service host changes, and I have to recreate the pod manually every time. Is there a way to automate this, a bit like docker-compose's depends_on directive? (One workaround is sketched after the manifests below.)
Flask YAML:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: proxy23-api-deployment
  labels:
    app: proxy23-api
spec:
  replicas: 2
  selector:
    matchLabels:
      app: proxy23-api
  template:
    metadata:
      labels:
        app: proxy23-api
    spec:
      containers:
        - name: proxy23-api
          image: my_image
          ports:
            - containerPort: 5000
          env:
            - name: DB_URI
              value: mongodb://$(PROXY23_DB_SERVICE_SERVICE_HOST):27017
            - name: DB_NAME
              value: db
            - name: PORT
              value: "5000"
      imagePullSecrets:
        - name: registry-credentials
---
apiVersion: v1
kind: Service
metadata:
  name: proxy23-api-service
spec:
  selector:
    app: proxy23-api
  type: NodePort
  ports:
    - port: 9002
      targetPort: 5000
      nodePort: 30002
MongoDB YAML:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: proxy23-db-deployment
  labels:
    app: proxy23-db
spec:
  replicas: 1
  selector:
    matchLabels:
      app: proxy23-db
  template:
    metadata:
      labels:
        app: proxy23-db
    spec:
      containers:
        - name: proxy23-db
          image: mongo:bionic
          ports:
            - containerPort: 27017
          volumeMounts:
            - name: proxy23-storage
              mountPath: /data/db
      volumes:
        - name: proxy23-storage
          persistentVolumeClaim:
            claimName: proxy23-db-pvc
---
apiVersion: v1
kind: Service
metadata:
  name: proxy23-db-service
spec:
  selector:
    app: proxy23-db
  type: NodePort
  ports:
    - port: 27017
      targetPort: 27017
      nodePort: 30003
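If DNS can be made to work, the usual way around this is to address the database by its Service name rather than the injected SERVICE_HOST variable: the DNS name stays stable even when the Service is deleted and recreated, whereas the environment variable is only captured at pod startup. A minimal sketch against the manifests above (only the env section shown):

env:
  - name: DB_URI
    # The Service name resolves via cluster DNS; the fully qualified form is
    # proxy23-db-service.<namespace>.svc.cluster.local. No pod recreation is
    # needed when the Service is re-applied, because the name does not change.
    value: mongodb://proxy23-db-service:27017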
I have a Golang webapp pod running in a Kubernetes cluster, and I tried to deploy a Prometheus pod to monitor it.
I set prometheus.io/port to 2112 in the service.yaml file, which is the port the Golang webapp listens on, but when I go to the Prometheus dashboard, it says the 2112 endpoint is down.
I'm following this guide and tried the solution from this thread, but I still get a result saying the 2112 endpoint is down.
Below are my service.yaml and deployment.yaml:
apiVersion: v1
kind: Service
metadata:
  name: prometheus-service
  namespace: monitoring
  annotations:
    prometheus.io/scrape: 'true'
    prometheus.io/path: '/metrics'
    prometheus.io/port: '2112'
spec:
  selector:
    app: prometheus-server
  type: NodePort
  ports:
    - port: 8080
      targetPort: 9090
      nodePort: 30000
---
apiVersion: v1
kind: Service
metadata:
  namespace: monitoring
  name: goapp
spec:
  type: NodePort
  selector:
    app: golang
  ports:
    - name: main
      protocol: TCP
      port: 80
      targetPort: 2112
      nodePort: 30001
apiVersion: apps/v1
kind: Deployment
metadata:
  name: prometheus-deployment
  namespace: monitoring
  labels:
    app: prometheus-server
spec:
  replicas: 1
  selector:
    matchLabels:
      app: prometheus-server
  template:
    metadata:
      labels:
        app: prometheus-server
    spec:
      containers:
        - name: prometheus
          image: prom/prometheus
          args:
            - "--config.file=/etc/prometheus/prometheus.yml"
            - "--storage.tsdb.path=/prometheus/"
          ports:
            - containerPort: 9090
          volumeMounts:
            - name: prometheus-config-volume
              mountPath: /etc/prometheus/
            - name: prometheus-storage-volume
              mountPath: /prometheus/
      volumes:
        - name: prometheus-config-volume
          configMap:
            defaultMode: 420
            name: prometheus-server-conf
        - name: prometheus-storage-volume
          emptyDir: {}
---
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: monitoring
  name: golang
spec:
  replicas: 1
  selector:
    matchLabels:
      app: golang
  template:
    metadata:
      labels:
        app: golang
    spec:
      containers:
        - name: gogo
          image: effy77/gogo2
          ports:
            - containerPort: 2112
I will try adding prometheus.io/port: 2112 to the Prometheus deployment part, as I suspect that might be the cause.
I was confused about where to put the annotations and got my clarification from this thread: they need to go under the metadata of the service that Prometheus should scrape. So in my case they belong in goapp's metadata (see also the scrape-config sketch below):
apiVersion: v1
kind: Service
metadata:
  namespace: monitoring
  name: goapp
  annotations:
    prometheus.io/scrape: 'true'
    prometheus.io/path: '/metrics'
    prometheus.io/port: '2112'
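For context, these prometheus.io/* annotations are not built into Prometheus itself; they only take effect because the scrape config in prometheus-server-conf relabels service-discovery metadata. The conventional job looks roughly like this (a sketch of the common pattern, not necessarily the exact config from the guide):

scrape_configs:
  - job_name: 'kubernetes-service-endpoints'
    kubernetes_sd_configs:
      - role: endpoints
    relabel_configs:
      # Only scrape services annotated with prometheus.io/scrape: "true"
      - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scrape]
        action: keep
        regex: true
      # Honour the prometheus.io/path annotation, if present
      - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_path]
        action: replace
        target_label: __metrics_path__
        regex: (.+)
      # Honour the prometheus.io/port annotation, if present
      - source_labels: [__address__, __meta_kubernetes_service_annotation_prometheus_io_port]
        action: replace
        regex: ([^:]+)(?::\d+)?;(\d+)
        replacement: $1:$2
        target_label: __address__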
I'm well versed in Docker, but I must be doing something wrong here with K8s. I'm running skaffold with minikube and trying to get DNS between containers working. Here's my deployment:
kind: Deployment
apiVersion: apps/v1
metadata:
  name: my-api
  labels:
    app: my-api
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-api
  template:
    metadata:
      name: my-api
      labels:
        app: my-api
    spec:
      containers:
        - name: my-api-postgres
          image: postgres:11.2-alpine
          env:
            - name: POSTGRES_USER
              value: "my-api"
            - name: POSTGRES_DB
              value: "my-api"
            - name: POSTGRES_PASSWORD
              value: "my-pass"
          ports:
            - containerPort: 5432
        - name: my-api-redis
          image: redis:5.0.4-alpine
          command: ["redis-server"]
          args: ["--appendonly", "yes"]
          ports:
            - containerPort: 6379
        - name: my-api-node
          image: my-api-node
          command: ["npm"]
          args: ["run", "start-docker-dev"]
          ports:
            - containerPort: 3000
However, in this scenario my-api-node can't contact my-api-postgres via the DNS hostname my-api-postgres. Any idea what I'm doing wrong?
You have defined all three containers as part of the same Pod. Pods share a common network namespace, so in your current setup (which is not correct, more on that in a second) you could talk to the other containers using localhost:<port>.
The 'correct' way of doing this would be to create a deployment for each application, and front those deployments with services.
Your example would roughly become (untested):
kind: Deployment
apiVersion: apps/v1
metadata:
  name: my-api-node
  namespace: my-api
  labels:
    app: my-api-node
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-api-node
  template:
    metadata:
      name: my-api-node
      labels:
        app: my-api-node
    spec:
      containers:
        - name: my-api-node
          image: my-api-node
          command: ["npm"]
          args: ["run", "start-docker-dev"]
          ports:
            - containerPort: 3000
---
apiVersion: v1
kind: Service
metadata:
  namespace: my-api
  name: my-api-node
spec:
  selector:
    app: my-api-node
  ports:
    - protocol: TCP
      port: 3000
      targetPort: 3000
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: my-api-redis
  namespace: my-api
  labels:
    app: my-api-redis
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-api-redis
  template:
    metadata:
      name: my-api-redis
      labels:
        app: my-api-redis
    spec:
      containers:
        - name: my-api-redis
          image: redis:5.0.4-alpine
          command: ["redis-server"]
          args: ["--appendonly", "yes"]
          ports:
            - containerPort: 6379
---
apiVersion: v1
kind: Service
metadata:
  namespace: my-api
  name: my-api-redis
spec:
  selector:
    app: my-api-redis
  ports:
    - protocol: TCP
      port: 6379
      targetPort: 6379
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: my-api-postgres
  namespace: my-api
  labels:
    app: my-api-postgres
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-api-postgres
  template:
    metadata:
      name: my-api-postgres
      labels:
        app: my-api-postgres
    spec:
      containers:
        - name: my-api-postgres
          image: postgres:11.2-alpine
          env:
            - name: POSTGRES_USER
              value: "my-api"
            - name: POSTGRES_DB
              value: "my-api"
            - name: POSTGRES_PASSWORD
              value: "my-pass"
          ports:
            - containerPort: 5432
---
apiVersion: v1
kind: Service
metadata:
  namespace: my-api
  name: my-api-postgres
spec:
  selector:
    app: my-api-postgres
  ports:
    - protocol: TCP
      port: 5432
      targetPort: 5432
DNS records get registered for Services, so you connect to those and are forwarded to the Pods behind them (simplified). If you need to reach your node app from the outside world, that's a whole additional deal, and you should look at LoadBalancer-type Services or Ingress.
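For example (a sketch with hypothetical environment variable names; adjust to however the app actually reads its config), the node container would then reach the other services by their Service DNS names:

containers:
  - name: my-api-node
    image: my-api-node
    env:
      # Plain Service names resolve within the same namespace; the fully
      # qualified form is <service>.<namespace>.svc.cluster.local.
      - name: POSTGRES_HOST  # hypothetical variable name
        value: my-api-postgres.my-api.svc.cluster.local
      - name: REDIS_HOST     # hypothetical variable name
        value: my-api-redis.my-api.svc.cluster.local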
As an addition to johnharris85's answer on DNS, once you separate your apps, which you should do in your scenario:
Multi-container Pods are usually reserved for specific use cases, for example sidecar containers that help the main container with particular tasks, or proxies, bridges, and adapters that provide connectivity to some specific destination (a minimal sidecar example is sketched below).
In your case you can easily separate them. Right now you have a Deployment with one Pod holding three containers, which communicate with each other via localhost rather than DNS names, as already mentioned.
After that, I recommend reading about DNS inside Kubernetes and how communication works once Services step into the game.
For Pods specifically, you can read more here.
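To make that sidecar use case concrete, here is a minimal sketch (hypothetical names and a placeholder image, not from the original answer) of a Pod in which a log-shipping sidecar shares a volume with the main container:

apiVersion: v1
kind: Pod
metadata:
  name: app-with-sidecar  # hypothetical example
spec:
  containers:
    - name: main-app
      image: my-app:latest  # placeholder image that writes /var/log/app/app.log
      volumeMounts:
        - name: logs
          mountPath: /var/log/app
    - name: log-shipper  # sidecar: tails whatever main-app writes
      image: busybox
      command: ["sh", "-c", "tail -F /var/log/app/app.log"]
      volumeMounts:
        - name: logs
          mountPath: /var/log/app
  volumes:
    - name: logs
      emptyDir: {}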
I am deploying an nginx image using the following deployment files in Google Cloud.
For the ReplicationController:
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx-web
  labels:
    name: nginx-web
    app: demo
spec:
  replicas: 2
  template:
    metadata:
      labels:
        name: nginx-web
    spec:
      containers:
        - name: nginx-web
          image: nginx
          ports:
            - containerPort: 5000
              name: http
              protocol: TCP
For the Service:
apiVersion: v1
kind: Service
metadata:
  name: nginx-web
  labels:
    name: nginx-web
    app: demo
spec:
  selector:
    name: nginx-web
  type: LoadBalancer
  ports:
    - port: 84
      targetPort: 5000
      protocol: TCP
But when I curl the external IP (which I got from the load balancer) on port 84, I get a connection refused error. What might be the issue?
The nginx image you are using in your replication controller listens on port 80 (that's how the image is built).
You need to fix your replication controller spec like this:
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx-web
  labels:
    name: nginx-web
    app: demo
spec:
  replicas: 2
  template:
    metadata:
      labels:
        name: nginx-web
    spec:
      containers:
        - name: nginx-web
          image: nginx
          ports:
            - containerPort: 80
              name: http
              protocol: TCP
And also adjust your service like this:
apiVersion: v1
kind: Service
metadata:
  name: nginx-web
  labels:
    name: nginx-web
    app: demo
spec:
  selector:
    name: nginx-web
  type: LoadBalancer
  ports:
    - port: 84
      targetPort: 80
      protocol: TCP