I set up the Hazelcast Kubernetes configuration as per the explanation given in the link below:
https://vertx.io/docs/vertx-hazelcast/java/#_configuring_for_kubernetes
However, Hazelcast only discovers the members running on the same node; it does not find members on the other nodes in the cluster.
Please help us solve this issue.
Following is the service file for Hazelcast, of type ClusterIP,
apiVersion: v1
kind: Service
metadata:
  name: cb-hazelcast-service
spec:
  selector:
    component: cb-hazelcast-service
  type: ClusterIP
  clusterIP: None
  ports:
  - name: hz-port-name
    port: 5701
    protocol: TCP
Following is the deployment file for microservice 1,
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cb-agent-service
spec:
  replicas: 1
  minReadySeconds: 30
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
    type: RollingUpdate
  selector:
    matchLabels:
      app: cb-agent-service
  template:
    metadata:
      labels:
        app: cb-agent-service
        component: cb-hazelcast-service
    spec:
      containers:
      - name: cb-agent-service
        image: <docker-image-hub>/agent-service:hz-dns-001
        #imagePullPolicy: IfNotPresent
        volumeMounts:
        - mountPath: /usr/data/logs
          name: shared-logs
        ports:
        - containerPort: 8085
          name: cbport
        - name: hazelcast
          containerPort: 5701
      volumes:
      - name: shared-logs
        hostPath:
          path: /usr/data/logs
---
apiVersion: v1
kind: Service
metadata:
  name: cb-agent-service
  labels:
    vertx-cluster: "true"
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 8085
  selector:
    app: cb-agent-service
Following is the deployment file for another microservice,
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cb-transaction-service
spec:
  replicas: 1
  minReadySeconds: 30
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
    type: RollingUpdate
  selector:
    matchLabels:
      app: cb-transaction-service
  template:
    metadata:
      labels:
        app: cb-transaction-service
        component: cb-hazelcast-service
    spec:
      containers:
      - name: cb-transaction-service
        image: <docker-image-hub>/transaction-service:hz-dns-001
        #imagePullPolicy: IfNotPresent
        volumeMounts:
        - mountPath: /usr/data/logs
          name: shared-logs
        ports:
        - containerPort: 8085
          name: cbport
        - name: hazelcast
          containerPort: 5701
      nodeSelector:
        service: transaction
      volumes:
      - name: shared-logs
        hostPath:
          path: /usr/data/logs
---
apiVersion: v1
kind: Service
metadata:
  name: cb-transaction-service
  labels:
    vertx-cluster: "true"
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 8085
  selector:
    app: cb-transaction-service
Following is the cluster.xml file for all the microservices
<?xml version="1.0" encoding="UTF-8"?>
<hazelcast xsi:schemaLocation="http://www.hazelcast.com/schema/config hazelcast-config-3.6.xsd"
           xmlns="http://www.hazelcast.com/schema/config"
           xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <properties>
    <property name="hazelcast.memcache.enabled">false</property>
    <property name="hazelcast.wait.seconds.before.join">0</property>
    <property name="hazelcast.logging.type">slf4j</property>
    <property name="hazelcast.health.monitoring.delay.seconds">2</property>
    <property name="hazelcast.max.no.heartbeat.seconds">5</property>
    <property name="hazelcast.max.no.master.confirmation.seconds">10</property>
    <property name="hazelcast.master.confirmation.interval.seconds">10</property>
    <property name="hazelcast.member.list.publish.interval.seconds">10</property>
    <property name="hazelcast.connection.monitor.interval">10</property>
    <property name="hazelcast.connection.monitor.max.faults">2</property>
    <property name="hazelcast.partition.migration.timeout">10</property>
    <property name="hazelcast.migration.min.delay.on.member.removed.seconds">3</property>
    <!-- at the moment the discovery needs to be activated explicitly -->
    <property name="hazelcast.discovery.enabled">true</property>
    <property name="hazelcast.rest.enabled">false</property>
  </properties>
  <network>
    <port auto-increment="true" port-count="10000">5701</port>
    <outbound-ports>
      <ports>0</ports>
    </outbound-ports>
    <join>
      <multicast enabled="false"/>
      <tcp-ip enabled="false"/>
      <discovery-strategies>
        <discovery-strategy enabled="true"
                            class="com.hazelcast.kubernetes.HazelcastKubernetesDiscoveryStrategy">
          <properties>
            <property name="service-dns">cb-hazelcast-service.default.svc.cluster.local</property>
          </properties>
        </discovery-strategy>
      </discovery-strategies>
    </join>
  </network>
  <partition-group enabled="false"/>
  <executor-service name="default">
    <pool-size>16</pool-size>
    <!--Queue capacity. 0 means Integer.MAX_VALUE.-->
    <queue-capacity>0</queue-capacity>
  </executor-service>
  <map name="__vertx.subs">
    <backup-count>1</backup-count>
    <time-to-live-seconds>0</time-to-live-seconds>
    <max-idle-seconds>0</max-idle-seconds>
    <max-size policy="PER_NODE">0</max-size>
    <eviction-percentage>25</eviction-percentage>
    <merge-policy>com.hazelcast.map.merge.LatestUpdateMapMergePolicy</merge-policy>
  </map>
  <semaphore name="__vertx.*">
    <initial-permits>1</initial-permits>
  </semaphore>
</hazelcast>
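Since the service-dns strategy relies on the headless service resolving to one A record per ready Hazelcast pod, it can help to verify that lookup from inside the cluster before digging into the Hazelcast side. A minimal throwaway pod for that check (a sketch only; the dnsutils image and the default namespace are assumptions, not part of the setup above):
apiVersion: v1
kind: Pod
metadata:
  name: dns-debug
spec:
  restartPolicy: Never
  containers:
  - name: dns-debug
    # any image that ships nslookup/dig works here
    image: registry.k8s.io/e2e-test-images/jessie-dnsutils:1.3
    command: ["sleep", "infinity"]
# once the pod is Running:
#   kubectl exec dns-debug -- nslookup cb-hazelcast-service.default.svc.cluster.local
# the lookup should return the pod IP of every Hazelcast member, across all nodes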
Related
I have a Flask pod that connects to a MongoDB service through the environment variable SERVICE_HOST (DNS discovery didn't work for some reason). When I change something in the MongoDB service and re-apply it, the Flask pod can no longer connect to the service because the service host changes, and I have to recreate the pod manually every time. Is there a way to automate this, something like the docker-compose depends_on directive? (See the sketch after the two manifests below.)
flask yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: proxy23-api-deployment
  labels:
    app: proxy23-api
spec:
  replicas: 2
  selector:
    matchLabels:
      app: proxy23-api
  template:
    metadata:
      labels:
        app: proxy23-api
    spec:
      containers:
      - name: proxy23-api
        image: my_image
        ports:
        - containerPort: 5000
        env:
        - name: DB_URI
          value: mongodb://$(PROXY23_DB_SERVICE_SERVICE_HOST):27017
        - name: DB_NAME
          value: db
        - name: PORT
          value: "5000"
      imagePullSecrets:
      - name: registry-credentials
---
apiVersion: v1
kind: Service
metadata:
  name: proxy23-api-service
spec:
  selector:
    app: proxy23-api
  type: NodePort
  ports:
  - port: 9002
    targetPort: 5000
    nodePort: 30002
mongodb yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: proxy23-db-deployment
  labels:
    app: proxy23-db
spec:
  replicas: 1
  selector:
    matchLabels:
      app: proxy23-db
  template:
    metadata:
      labels:
        app: proxy23-db
    spec:
      containers:
      - name: proxy23-db
        image: mongo:bionic
        ports:
        - containerPort: 27017
        volumeMounts:
        - name: proxy23-storage
          mountPath: /data/db
      volumes:
      - name: proxy23-storage
        persistentVolumeClaim:
          claimName: proxy23-db-pvc
---
apiVersion: v1
kind: Service
metadata:
  name: proxy23-db-service
spec:
  selector:
    app: proxy23-db
  type: NodePort
  ports:
  - port: 27017
    targetPort: 27017
    nodePort: 30003
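One way to avoid depending on the injected PROXY23_DB_SERVICE_SERVICE_HOST variable is to point DB_URI at the Service's DNS name, which stays stable when the Service is re-applied, and to gate the app's startup on MongoDB being resolvable, roughly in the spirit of depends_on. A sketch of the relevant part of the API Deployment's pod spec (the busybox init container is an assumption, not part of the original manifests):
    spec:
      # wait until the MongoDB Service exists in DNS before starting the app,
      # similar in spirit to docker-compose's depends_on
      initContainers:
      - name: wait-for-mongo
        image: busybox:1.28
        command: ['sh', '-c', 'until nslookup proxy23-db-service; do echo waiting for mongodb; sleep 2; done']
      containers:
      - name: proxy23-api
        image: my_image
        ports:
        - containerPort: 5000
        env:
        - name: DB_URI
          # stable cluster DNS name of the Service instead of the injected env var
          value: mongodb://proxy23-db-service.default.svc.cluster.local:27017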
I created a Kubernetes cluster on Amazon, and I am running my pod (container) and a volume in this cluster. Now I want to run a Samba server on that volume and connect my pod to the Samba server. Is there a tutorial for this, or how can I solve this problem? By the way, I am working on Windows 10. Here is my deployment code with the volume:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deployment
  labels:
    app: application
spec:
  replicas: 2
  selector:
    matchLabels:
      project: k8s
  template:
    metadata:
      labels:
        project: k8s
    spec:
      containers:
      - name: k8s-web
        image: mine/flask:latest
        volumeMounts:
        - mountPath: /test-ebs
          name: my-volume
        ports:
        - containerPort: 8080
      volumes:
      - name: my-volume
        persistentVolumeClaim:
          claimName: pv0004
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv0004
spec:
  capacity:
    storage: 5Gi
  accessModes:
  - ReadWriteOnce
  awsElasticBlockStore:
    fsType: ext4
    volumeID: [my-Id-volume]
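The Deployment above references a PersistentVolumeClaim named pv0004, but only a PersistentVolume is defined; a claim is also needed for the pod to bind to that volume. A minimal sketch of such a claim (the empty storageClassName for static binding and the matching size are assumptions):
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pv0004
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: ""   # bind statically instead of provisioning a new volume
  volumeName: pv0004     # target the EBS-backed PersistentVolume defined above
  resources:
    requests:
      storage: 5Gi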
You can check out the Samba container Docker image at: https://github.com/dperson/samba
---
kind: Service
apiVersion: v1
metadata:
  name: smb-server
  labels:
    app: smb-server
spec:
  type: LoadBalancer
  selector:
    app: smb-server
  ports:
  - port: 445
    name: smb-server
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: smb-server
spec:
  replicas: 1
  selector:
    matchLabels:
      app: smb-server
  template:
    metadata:
      name: smb-server
      labels:
        app: smb-server
    spec:
      containers:
      - name: smb-server
        image: dperson/samba
        env:
        - name: PERMISSIONS
          value: "0777"
        args: ["-u", "username;test", "-s", "share;/smbshare/;yes;no;no;all;none", "-p"]
        volumeMounts:
        - mountPath: /smbshare
          name: data-volume
        ports:
        - containerPort: 445
      volumes:
      - name: data-volume
        hostPath:
          path: /smbshare
          type: DirectoryOrCreate
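If the goal is for the Samba server to serve the EBS-backed volume rather than a node-local directory, the hostPath volume in the smb-server Deployment could be swapped for the claim from the question (a sketch; note that an EBS volume with ReadWriteOnce can only be attached to one node at a time, so the web pod would then reach the share over the smb-server Service on port 445 rather than mount the disk itself):
      volumes:
      - name: data-volume
        persistentVolumeClaim:
          claimName: pv0004   # the claim bound to the EBS PersistentVolume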
I have a Golang web app pod running in a Kubernetes cluster, and I tried to deploy a Prometheus pod to monitor it.
I specified prometheus.io/port: 2112 in the service.yaml file, which is the port the Golang web app listens on, but when I go to the Prometheus dashboard it says that the 2112 endpoint is down.
I'm following this guide and tried this thread's solution, but I still get a result saying the 2112 endpoint is down.
Below are my service.yaml and deployment.yaml.
apiVersion: v1
kind: Service
metadata:
  name: prometheus-service
  namespace: monitoring
  annotations:
    prometheus.io/scrape: 'true'
    prometheus.io/path: '/metrics'
    prometheus.io/port: '2112'
spec:
  selector:
    app: prometheus-server
  type: NodePort
  ports:
  - port: 8080
    targetPort: 9090
    nodePort: 30000
---
apiVersion: v1
kind: Service
metadata:
  namespace: monitoring
  name: goapp
spec:
  type: NodePort
  selector:
    app: golang
  ports:
  - name: main
    protocol: TCP
    port: 80
    targetPort: 2112
    nodePort: 30001
apiVersion: apps/v1
kind: Deployment
metadata:
  name: prometheus-deployment
  namespace: monitoring
  labels:
    app: prometheus-server
spec:
  replicas: 1
  selector:
    matchLabels:
      app: prometheus-server
  template:
    metadata:
      labels:
        app: prometheus-server
    spec:
      containers:
      - name: prometheus
        image: prom/prometheus
        args:
        - "--config.file=/etc/prometheus/prometheus.yml"
        - "--storage.tsdb.path=/prometheus/"
        ports:
        - containerPort: 9090
        volumeMounts:
        - name: prometheus-config-volume
          mountPath: /etc/prometheus/
        - name: prometheus-storage-volume
          mountPath: /prometheus/
      volumes:
      - name: prometheus-config-volume
        configMap:
          defaultMode: 420
          name: prometheus-server-conf
      - name: prometheus-storage-volume
        emptyDir: {}
---
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: monitoring
  name: golang
spec:
  replicas: 1
  selector:
    matchLabels:
      app: golang
  template:
    metadata:
      labels:
        app: golang
    spec:
      containers:
      - name: gogo
        image: effy77/gogo2
        ports:
        - containerPort: 2112
I will try adding prometheus.io/port: 2112 to the Prometheus deployment part, as I suspect that might be the cause.
I was confused about where to put the annotations and got my clarification from this thread: they need to go under the metadata of the service that Prometheus should scrape. So in my case they need to be in goapp's metadata.
apiVersion: v1
kind: Service
metadata:
  namespace: monitoring
  name: goapp
  annotations:
    prometheus.io/scrape: 'true'
    prometheus.io/path: '/metrics'
    prometheus.io/port: '2112'
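These annotations only have an effect if the Prometheus configuration keys off them. A sketch of the annotation-driven scrape job commonly used for this (assumed to live in the prometheus-server-conf ConfigMap mounted above; the job name is arbitrary):
scrape_configs:
- job_name: 'kubernetes-service-endpoints'
  kubernetes_sd_configs:
  - role: endpoints
  relabel_configs:
  # only scrape services annotated with prometheus.io/scrape: 'true'
  - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scrape]
    action: keep
    regex: true
  # honour a custom metrics path from prometheus.io/path
  - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_path]
    action: replace
    target_label: __metrics_path__
    regex: (.+)
  # rewrite the target address to the port given in prometheus.io/port (2112 here)
  - source_labels: [__address__, __meta_kubernetes_service_annotation_prometheus_io_port]
    action: replace
    regex: ([^:]+)(?::\d+)?;(\d+)
    replacement: $1:$2
    target_label: __address__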
I have 3 services: axon, command, and query. I am trying to run them via Kubernetes. With docker-compose and Swarm everything works perfectly, but somehow it is not working via K8s.
I am getting the following error:
Connecting to AxonServer node axonserver:8124 failed: UNAVAILABLE: Unable to resolve host axonserver
Below are my config files.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: axonserver
  labels:
    app: axonserver
spec:
  serviceName: axonserver
  replicas: 1
  selector:
    matchLabels:
      app: axonserver
  template:
    metadata:
      labels:
        app: axonserver
    spec:
      containers:
      - name: axonserver
        image: axoniq/axonserver
        env:
        - name: AXONSERVER_HOSTNAME
          value: axonserver
        imagePullPolicy: Always
        ports:
        - name: grpc
          containerPort: 8124
          protocol: TCP
        - name: gui
          containerPort: 8024
          protocol: TCP
Here is the command-service YAML, which contains the service as well.
apiVersion:
kind: Pod
metadata:
  name: command-service
  labels:
    name: peanuts
    app: axonserver
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: axonserver
    spec:
      containers:
      - image: celcin/command-svc
        name: command-service
        ports:
        - containerPort: 8080
      restartPolicy: Always
status: {}
---
apiVersion: v1
kind: Service
metadata:
  name: command-service
  labels:
    name: peanuts
    app: axonserver
spec:
  ports:
  - name: "8081"
    port: 8081
    targetPort: 8080
  selector:
    labels:
      app: axonserver
Here is the last service, the query-service YAML file:
apiVersion: v1
kind: Pod
metadata:
  name: query-service
  labels:
    name: peanuts
    app: axonserver
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: axonserver
    spec:
      containers:
      - image: celcin/query-svc
        name: query-service
        ports:
        - containerPort: 8080
      restartPolicy: Always
---
apiVersion: v1
kind: Service
metadata:
  name: query-service
  labels:
    name: peanuts
    app: axonserver
spec:
  ports:
  - name: "8082"
    port: 8082
    targetPort: 8080
  selector:
    labels:
      app: axonserver
Your YAML is somewhat mixed up. If I understood you correctly, you have three services:
command-service
query-service
axonserver
Your setup should be configured so that command-service and query-service expose their own ports, while both connect to the ports exposed by axonserver. Here is my attempt at your YAML:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: axonserver
  labels:
    app: axonserver
spec:
  serviceName: axonserver
  replicas: 1
  selector:
    matchLabels:
      app: axonserver
  template:
    metadata:
      labels:
        app: axonserver
    spec:
      containers:
      - name: axonserver
        image: axoniq/axonserver
        imagePullPolicy: Always
        ports:
        - name: grpc
          containerPort: 8124
          protocol: TCP
        - name: gui
          containerPort: 8024
          protocol: TCP
The ports you defined in:
ports:
- name: command-srv
  containerPort: 8081
  protocol: TCP
- name: query-srv
  containerPort: 8082
  protocol: TCP
are not ports of Axon Server but of your command-service and query-service, and they should be exposed in those containers.
Kind regards,
Simon
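For the Unable to resolve host axonserver error itself, the serviceName: axonserver referenced by the StatefulSet only resolves in cluster DNS if a Service with that name actually exists, and no such Service is shown in the files above. A minimal headless Service sketch under that assumption:
apiVersion: v1
kind: Service
metadata:
  name: axonserver
  labels:
    app: axonserver
spec:
  clusterIP: None   # headless, gives the StatefulSet pod a stable DNS name
  selector:
    app: axonserver
  ports:
  - name: grpc
    port: 8124
    targetPort: 8124
  - name: gui
    port: 8024
    targetPort: 8024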
I've set up Kubernetes to use the Traefik Ingress to provide name-based routing. I am a little lost as to how to configure the automatic Let's Encrypt SSL certs. How do I reference the TOML files and configure HTTPS? I am using a simple container with the NGINX image below to test this.
Below is my YAML for the deployment/service/ingress.
apiVersion: v1
kind: Service
metadata:
  name: web
  labels:
    app: hmweb
spec:
  ports:
  - name: http
    port: 80
    targetPort: 80
    protocol: TCP
  selector:
    app: hmweb
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: web-ingress
  annotations:
    kubernetes.io/ingress.class: traefik
spec:
  rules:
  - host: example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: web
          servicePort: http
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hmweb-deployment
  labels:
    app: hmweb
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hmweb
  template:
    metadata:
      labels:
        app: hmweb
    spec:
      containers:
      - name: hmweb
        image: nginx:latest
        envFrom:
        - configMapRef:
            name: config
        ports:
        - containerPort: 80
I have also included my ingress.yaml:
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: traefik-ingress-controller
---
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: traefik-ingress-controller
  namespace: kube-system
  labels:
    k8s-app: traefik-ingress-lb
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: traefik-ingress-lb
  template:
    metadata:
      labels:
        k8s-app: traefik-ingress-lb
      name: traefik-ingress-lb
    spec:
      serviceAccountName: traefik-ingress-controller
      terminationGracePeriodSeconds: 60
      containers:
      - image: traefik
        name: traefik-ingress-lb
        ports:
        - name: http
          containerPort: 80
        - name: admin
          containerPort: 8080
        args:
        - --api
        - --kubernetes
        - --logLevel=INFO
---
kind: Service
apiVersion: v1
metadata:
  name: traefik-ingress-service
  namespace: kube-system
spec:
  selector:
    k8s-app: traefik-ingress-lb
  ports:
  - protocol: TCP
    port: 80
    name: web
  - protocol: TCP
    port: 8080
    name: admin
  type: LoadBalancer
You could build a custom image and include the TOML file that way; however, that would NOT be best practice. Here's how I did it:
1) Deploy your TOML configuration to Kubernetes as a ConfigMap, like so:
apiVersion: v1
kind: ConfigMap
metadata:
  name: cfg-traefik
  labels:
    app: traefik
data:
  traefik.toml: |
    defaultEntryPoints = ["http", "https"]
    [entryPoints]
      [entryPoints.http]
        address = ":80"
        [entryPoints.http.redirect]
          entryPoint = "https"
      [entryPoints.https]
        address = ":443"
        [entryPoints.https.tls]
    [acme]
      email = "you@email.com"
      storage = "/storage/acme.json"
      entryPoint = "https"
      acmeLogging = true
      onHostRule = true
      [acme.tlsChallenge]
2) Connect the configuration to your Traefik deployment. Here's my configuration:
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: dpl-traefik
  labels:
    k8s-app: traefik
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: traefik
  template:
    metadata:
      labels:
        k8s-app: traefik
      name: traefik
    spec:
      serviceAccountName: svc-traefik
      terminationGracePeriodSeconds: 60
      volumes:
      - name: config
        configMap:
          name: cfg-traefik
      - name: cert-storage
        persistentVolumeClaim:
          claimName: pvc-traefik
      containers:
      - image: traefik:alpine
        name: traefik
        volumeMounts:
        - mountPath: "/config"
          name: "config"
        - mountPath: "/storage"
          name: cert-storage
        ports:
        - name: http
          containerPort: 80
        - name: https
          containerPort: 443
        - name: admin
          containerPort: 8080
        args:
        - --api
        - --kubernetes
        - --logLevel=INFO
        - --configFile=/config/traefik.toml
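The deployment above also mounts a PersistentVolumeClaim named pvc-traefik for the ACME storage (/storage/acme.json), which is not shown. A minimal sketch of that claim (size and default storage class are assumptions):
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-traefik
  labels:
    app: traefik
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi   # acme.json is small; adjust as needed
Note as well that the Service in front of Traefik needs to expose port 443 alongside 80 and 8080, otherwise the https entrypoint and the redirect defined in the TOML are not reachable from outside.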