I am trying to install DataStax Enterprise (DSE) Cassandra on Kubernetes on Ubuntu, but I am getting the error below in the pod logs:
Apache Cassandra: Unable to gossip with any seeds
Replicas: 2
Heap memory given: 8GB
RAM given: 16GB
Server: AWS EC2
Disk: Local provision
Note: Minikube is not required for this process
Here's my cluster.yaml file:
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: cassandra
namespace: cassandra
labels:
app: cassandra
spec:
serviceName: cassandra
replicas: 2
selector:
matchLabels:
app: cassandra
template:
metadata:
labels:
app: cassandra
spec:
terminationGracePeriodSeconds: 1800
containers:
- name: cassandra
image: datastax/dse-server:6.7.7
imagePullPolicy: Always
ports:
- containerPort: 7000
name: intra-node
- containerPort: 7001
name: tls-intra-node
- containerPort: 7199
name: jmx
- containerPort: 9042
name: cql
resources:
limits:
cpu: "8000m"
memory: 16Gi
requests:
cpu: "8000m"
memory: 16Gi
securityContext:
capabilities:
add:
- IPC_LOCK
lifecycle:
preStop:
exec:
command:
- /bin/sh
- -c
- nodetool drain
env:
- name: MAX_HEAP_SIZE
value: 8G
- name: HEAP_NEWSIZE
value: 8G
- name: SEEDS
value: "10.32.0.2,10.32.0.5,10.32.0.6"
- name: CLUSTER_NAME
value: "Demo Cluster"
- name: DC
value: "dc1"
- name: RACK
value: "rack1"
- name: NUM_TOKENS
value: "128"
- name: DS_LICENSE
value: "accept"
- name: POD_IP
valueFrom:
fieldRef:
fieldPath: status.podIP
readinessProbe:
exec:
command:
- /bin/bash
- -c
- "cp /ready_probe.sh /tmp/ready_probe.sh && chmod 777 /tmp/ready_probe.sh && /tmp/ready_probe.sh"
initialDelaySeconds: 20
timeoutSeconds: 5
volumeMounts:
- name: cassandra-data
mountPath: /var/lib/cassandra
subPath: cassandra
- name: cassandra-data
mountPath: /var/lib/spark
subPath: spark
- name: cassandra-data
mountPath: /var/lib/dsefs
subPath: dsefs
- name: cassandra-data
mountPath: /var/log/cassandra
subPath: log-cassandra
- name: cassandra-data
mountPath: /var/log/spark
subPath: log-spark
- name: cassandra-data
mountPath: /config
subPath: config
- name: cassandra-data
mountPath: /var/lib/opscenter
subPath: opscenter
- name: cassandra-data
mountPath: /var/lib/datastax-studio
subPath: datastax-studio
- name: script
mountPath: /ready_probe.sh
subPath: ready.sh
volumes:
- name: script
configMap:
name: script
affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/hostname
operator: In
values:
- ubuntuserver
volumeClaimTemplates:
- metadata:
name: cassandra-data
spec:
accessModes: [ "ReadWriteOnce" ]
storageClassName: local-storage
resources:
requests:
storage: 128Gi
Can anyone please help me with that?
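In case it is relevant: the SEEDS value above is hardcoded to pod IPs. My assumption (just a sketch, not something I have confirmed, and it relies on a headless Service named cassandra existing in the cassandra namespace) is that the seeds should instead point at the stable StatefulSet pod DNS names, something like:
- name: SEEDS
  value: "cassandra-0.cassandra.cassandra.svc.cluster.local,cassandra-1.cassandra.cassandra.svc.cluster.local"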
I would like to add a fluent-bit agent as a sidecar container to an existing Istio Ingress Gateway Deployment that is generated via external tooling (istioctl). I figured using ytt and its overlays would be a good way to accomplish this since it should let me append an additional container to the Deployment and a few extra volumes while leaving the rest of the generated YAML intact.
Here's a placeholder Deployment that approximates an istio-ingressgateway to help visualize the structure:
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: istio-ingressgateway
namespace: istio-system
spec:
selector:
matchLabels:
app: istio-ingressgateway
template:
metadata:
labels:
app: istio-ingressgateway
spec:
containers:
- args:
- example-args
command: ["example-command"]
image: gcr.io/istio/proxyv2
imagePullPolicy: Always
name: istio-proxy
volumes:
- name: example-volume-secret
secret:
secretName: example-secret
- name: example-volume-configmap
configMap:
name: example-configmap
I want to add a container to this that looks like:
- name: fluent-bit
image: fluent/fluent-bit
resources:
limits:
memory: 100Mi
requests:
cpu: 10m
memory: 10Mi
volumeMounts:
- name: fluent-bit-config
mountPath: /fluent-bit/etc
- name: varlog
mountPath: /var/log
- name: dockercontainers
mountPath: /var/lib/docker/containers
readOnly: true
and volumes that look like:
- name: fluent-bit-config
configMap:
name: ingressgateway-fluent-bit-forwarder-config
- name: varlog
hostPath:
path: /var/log
- name: dockercontainers
hostPath:
path: /var/lib/docker/containers
I managed to hack something together by modifying the overlay files example in the ytt playground; it looks like this:
#@ load("@ytt:overlay", "overlay")
#@overlay/match by=overlay.subset({"kind": "Deployment", "metadata":{"name":"istio-ingressgateway"}}),expects=1
---
spec:
template:
spec:
containers:
#@overlay/append
- name: fluent-bit
image: fluent/fluent-bit
resources:
limits:
memory: 100Mi
requests:
cpu: 10m
memory: 10Mi
volumeMounts:
- name: fluent-bit-config
mountPath: /fluent-bit/etc
- name: varlog
mountPath: /var/log
- name: dockercontainers
mountPath: /var/lib/docker/containers
readOnly: true
#@overlay/match by=overlay.subset({"kind": "Deployment", "metadata":{"name":"istio-ingressgateway"}}),expects=1
---
spec:
template:
spec:
volumes:
#@overlay/append
- name: fluent-bit-config
configMap:
name: ingressgateway-fluent-bit-forwarder-config
#@overlay/append
- name: varlog
hostPath:
path: /var/log
#@overlay/append
- name: dockercontainers
hostPath:
path: /var/lib/docker/containers
What I am wondering, though, is what is the best, most idiomatic way of using ytt to do this?
Thanks!
What you have now is good! The one suggestion I would make is that, if the volumes and containers always need to be added together, they be combined into the same overlay, like so:
#@ load("@ytt:overlay", "overlay")
#@overlay/match by=overlay.subset({"kind": "Deployment", "metadata":{"name":"istio-ingressgateway"}}),expects=1
---
spec:
template:
spec:
containers:
#@overlay/append
- name: fluent-bit
image: fluent/fluent-bit
resources:
limits:
memory: 100Mi
requests:
cpu: 10m
memory: 10Mi
volumeMounts:
- name: fluent-bit-config
mountPath: /fluent-bit/etc
- name: varlog
mountPath: /var/log
- name: dockercontainers
mountPath: /var/lib/docker/containers
readOnly: true
volumes:
#@overlay/append
- name: fluent-bit-config
configMap:
name: ingressgateway-fluent-bit-forwarder-config
#@overlay/append
- name: varlog
hostPath:
path: /var/log
#@overlay/append
- name: dockercontainers
hostPath:
path: /var/lib/docker/containers
This will guarantee that any time the container is added, the appropriate volumes will be included as well.
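If you later want to reuse the overlay for more than one gateway, one option (a sketch only; the values file and key name below are assumptions, and the ConfigMap name is just the one from your example) is to pull the names that vary out into ytt data values:
#@data/values
---
fluent_bit_configmap: ingressgateway-fluent-bit-forwarder-config
and then reference them from the overlay after loading the data module with #@ load("@ytt:data", "data"):
- name: fluent-bit-config
  configMap:
    name: #@ data.values.fluent_bit_configmap
Both files are then passed to ytt alongside the generated Istio manifests, e.g. ytt -f istio-generated.yaml -f overlay.yaml -f values.yaml (file names here are placeholders).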
I have deployed Prometheus and Grafana on a Kubernetes cluster with the following manifest file:
apiVersion: apps/v1
kind: Deployment
metadata:
name: grafana
namespace: kube-monitoring
labels:
app: grafana
spec:
replicas: 1
selector:
matchLabels:
app: grafana
template:
metadata:
labels:
app: grafana
spec:
containers:
- name: grafana
image: grafana/grafana:6.3.2
imagePullPolicy: IfNotPresent
env:
- name: GF_SECURITY_ADMIN_USER
value: admin
- name: GF_SECURITY_ADMIN_PASSWORD
valueFrom:
secretKeyRef:
name: grafana-secret
key: admin-password
ports:
- containerPort: 3000
resources:
limits:
cpu: 500m
memory: 2500Mi
requests:
cpu: 100m
memory: 100Mi
livenessProbe:
exec:
command:
- wget
- localhost:3000
- --spider
initialDelaySeconds: 30
periodSeconds: 30
volumeMounts:
- mountPath: /var/lib/grafana
subPath: grafana
name: grafana-storage
readOnly: false
- mountPath: /etc/grafana/provisioning/datasources/
name: grafana-datasource-conf
readOnly: true
- mountPath: /etc/grafana/provisioning/dashboards/
name: grafana-dashboards-conf
readOnly: false
- mountPath: /var/lib/grafana/dashboards/0/k8s-cluster-usage
name: grafana-dashboard-k8s-cluster-usage
readOnly: false
- mountPath: /var/lib/grafana/dashboards/0/k8s-cluster-nodes
name: grafana-dashboard-k8s-cluster-nodes
readOnly: false
- mountPath: /var/lib/grafana/dashboards/0/k8s-core-dns
name: grafana-dashboard-k8s-core-dns
readOnly: false
securityContext:
runAsUser: 472
fsGroup: 472
restartPolicy: Always
volumes:
- name: grafana-storage
persistentVolumeClaim:
claimName: pvc-grafana
- name: grafana-datasource-conf
configMap:
name: grafana-datasource-conf
items:
- key: datasource.yaml
path: datasource.yaml
- name: grafana-dashboards-conf
configMap:
name: grafana-dashboards-conf
items:
- key: dashboards.yaml
path: dashboards.yaml
- name: grafana-dashboard-k8s-cluster-usage
configMap:
name: grafana-dashboard-k8s-cluster-usage
- name: grafana-dashboard-k8s-cluster-nodes
configMap:
name: grafana-dashboard-k8s-cluster-nodes
- name: grafana-dashboard-k8s-core-dns
configMap:
name: grafana-dashboard-k8s-core-dns
and the dashboard config is at https://pastebin.com/zAYn9BhY (it's too long to include here).
Among the dashboards, Core DNS and Cluster Usages show proper data and graphs, but Cluster Nodes doesn't show any data; every metric says "No data points".
Can anyone help here?
Cluster Nodes won't show you any metrics because you are probably missing metrics-server.
If you are starting with the whole Prometheus stack, I would consider using prometheus-operator deployed via Helm. It is a little overwhelming at first, but it is fairly easy to get going with, and prometheus-operator will deploy metrics-server too.
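As an illustration of what the operator gives you: scrape targets are declared with ServiceMonitor resources instead of static scrape configs. A minimal sketch (the names, labels, and port below are placeholders that have to match your actual node-exporter Service; the Helm chart normally creates these for you) would be:
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: node-exporter
  namespace: kube-monitoring
  labels:
    release: prometheus
spec:
  selector:
    matchLabels:
      app: node-exporter
  endpoints:
    - port: metrics
      interval: 30s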
I am running an application as a StatefulSet with 2 Pods. I recently discovered an issue which requires clearing some contents on disk and restarting the application.
I would like to minimize customer impact by having at least one Pod running.
This is fairly trivial for pod-1, as I can scale it down, do the cleanup, and scale it back up. However, a StatefulSet will not run pod-1 if pod-0 is not running, so I can't just take pod-0 out of the Service the same way.
I am aware that there is perhaps a way to relabel the Pod to take pod-0 out of the Service. Unfortunately this is not an option, as (from what I understand) it will spin up a new pod-0.
Is there a method to expose a select Pod via the Service, or to remove it from the Service endpoints and re-add it later?
Example Spec File
spec:
podManagementPolicy: OrderedReady
replicas: 2
revisionHistoryLimit: 10
selector:
matchLabels:
app: myapp
serviceName: myapp-headless
template:
metadata:
creationTimestamp: null
labels:
app: myapp
spec:
affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: app
operator: NotIn
values:
- confluence
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchExpressions:
- key: app
operator: In
values:
- myapp
topologyKey: kubernetes.io/hostname
containers:
- env:
- name: JVM_MINIMUM_MEMORY
value: 4g
- name: JVM_MAXIMUM_MEMORY
value: 4g
- name: CATALINA_CONNECTOR_PROXYNAME
value: myapp.dev.example.com
- name: CATALINA_CONNECTOR_PROXYPORT
value: "443"
- name: CATALINA_CONNECTOR_SCHEME
value: https
- name: CATALINA_CONNECTOR_SECURE
value: "true"
- name: CLUSTER
value: "true"
- name: CLUSTER_DOMAIN
value: myapp-headless.proteus.svc.cluster.local
- name: CROWD_SSO
value: "false"
- name: CROWD_APP_NAME
value: myapp
- name: CROWD_APP_PASSWORD
value: xxx
- name: CROWD_BASEURL
value: https://crowd.dev.example.com
image: xxx
imagePullPolicy: IfNotPresent
name: myapp
ports:
- containerPort: 8080
name: http
protocol: TCP
- containerPort: 40001
name: ehcache
protocol: TCP
resources:
limits:
memory: 8Gi
requests:
memory: 4Gi
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /var/vendor/application-data/myapp-home
name: home
- mountPath: /var/vendor/application-data/myapp-home/shared
name: shared
- mountPath: /var/vendor/application-data/myapp-home/dbconfig.xml
name: myapp-db-config
subPath: dbconfig.xml
- mountPath: /opt/vendor/myapp/logs
name: tomcat-logs
dnsPolicy: ClusterFirst
imagePullSecrets:
- name: ecr
restartPolicy: Always
schedulerName: default-scheduler
securityContext:
fsGroup: 2
runAsUser: 2
terminationGracePeriodSeconds: 30
volumes:
- configMap:
defaultMode: 420
name: myapp-dbconfig-cm
name: myapp-db-config
- name: shared
persistentVolumeClaim:
claimName: myapp-shared
- emptyDir: {}
name: tomcat-logs
updateStrategy:
type: OnDelete
volumeClaimTemplates:
- metadata:
creationTimestamp: null
name: home
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 5Gi
storageClassName: myapp-home-volume
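One idea I have been toying with, though I am not sure it is sound, is to put client traffic behind a second Service whose selector requires an extra label that is not part of the StatefulSet's matchLabels (the label name in-rotation below is made up), so a pod can be pulled out of the endpoints by removing that label and re-added later by putting it back:
apiVersion: v1
kind: Service
metadata:
  name: myapp-active
spec:
  selector:
    app: myapp
    in-rotation: "true"
  ports:
    - name: http
      port: 8080
      targetPort: http
Removing the label with kubectl label pod myapp-0 in-rotation- should then (as far as I understand) drop myapp-0 from the endpoints without the controller recreating the pod, since the label is not part of the StatefulSet selector.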
I am creating an OrientDB cluster on Kubernetes (in Minikube) and I use a StatefulSet to create the pods. I am trying to mount all the OrientDB cluster configs into a folder named /configs, and with the container command I copy the files from there into the standard /orientdb/config folder. However, after I added this functionality the pods were created at a slower rate, and sometimes I get exceptions like:
Unable to connect to the server: net/http: TLS handshake timeout
Before that, I tried to mount the configs directly into the /orientdb/config folder, but I got a permissions error, and from my research it seems that mounting directly over that folder is prohibited.
How can I approach this issue? Is there a workaround?
The StatefulSet looks like this:
kind: StatefulSet
apiVersion: apps/v1
metadata:
name: orientdbservice
spec:
serviceName: orientdbservice
replicas: 3
selector:
matchLabels:
service: orientdb
type: container-deployment
template:
metadata:
labels:
service: orientdb
type: container-deployment
spec:
containers:
- name: orientdbservice
image: orientdb:2.2.36
command: ["/bin/sh","-c", "cp /configs/* /orientdb/config/ ; /orientdb/bin/server.sh -Ddistributed=true" ]
env:
- name: ORIENTDB_ROOT_PASSWORD
valueFrom:
secretKeyRef:
name: orientdb-password
key: password.txt
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
ports:
- containerPort: 2424
name: port-binary
- containerPort: 2480
name: port-http
volumeMounts:
- name: config
mountPath: /orientdb/config
- name: orientdb-config-backups
mountPath: /configs/backups.json
subPath: backups.json
- name: orientdb-config-events
mountPath: /configs/events.json
subPath: events.json
- name: orientdb-config-distributed
mountPath: /configs/default-distributed-db-config.json
subPath: default-distributed-db-config.json
- name: orientdb-config-hazelcast
mountPath: /configs/hazelcast.xml
subPath: hazelcast.xml
- name: orientdb-config-server
mountPath: /configs/orientdb-server-config.xml
subPath: orientdb-server-config.xml
- name: orientdb-config-client-logs
mountPath: /configs/orientdb-client-log.properties
subPath: orientdb-client-log.properties
- name: orientdb-config-server-logs
mountPath: /configs/orientdb-server-log.properties
subPath: orientdb-server-log.properties
- name: orientdb-config-plugin
mountPath: /configs/pom.xml
subPath: pom.xml
- name: orientdb-databases
mountPath: /orientdb/databases
- name: orientdb-backup
mountPath: /orientdb/backup
- name: orientdb-data
mountPath: /orientdb/bin/data
volumes:
- name: config
emptyDir: {}
- name: orientdb-config-backups
configMap:
name: orientdb-configmap-backups
- name: orientdb-config-events
configMap:
name: orientdb-configmap-events
- name: orientdb-config-distributed
configMap:
name: orientdb-configmap-distributed
- name: orientdb-config-hazelcast
configMap:
name: orientdb-configmap-hazelcast
- name: orientdb-config-server
configMap:
name: orientdb-configmap-server
- name: orientdb-config-client-logs
configMap:
name: orientdb-configmap-client-logs
- name: orientdb-config-server-logs
configMap:
name: orientdb-configmap-server-logs
- name: orientdb-config-plugin
configMap:
name: orientdb-configmap-plugin
- name: orientdb-data
hostPath:
path: /import_data
type: Directory
volumeClaimTemplates:
- metadata:
name: orientdb-databases
labels:
service: orientdb
type: pv-claim
spec:
accessModes: [ "ReadWriteOnce" ]
resources:
requests:
storage: 20Gi
- metadata:
name: orientdb-backup
labels:
service: orientdb
type: pv-claim
spec:
accessModes: [ "ReadWriteOnce" ]
resources:
requests:
storage: 10Gi
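One alternative I am considering (just a sketch; the busybox image and the exact structure are my assumptions, and I have not verified that it helps with the slow startup) is to do the copy in an initContainer that shares the config emptyDir, so the main container can run the stock entrypoint with -Ddistributed=true and no cp step:
initContainers:
  - name: copy-config
    image: busybox:1.31
    command: ["sh", "-c", "cp /configs/* /orientdb/config/"]
    volumeMounts:
      - name: config
        mountPath: /orientdb/config
      - name: orientdb-config-hazelcast
        mountPath: /configs/hazelcast.xml
        subPath: hazelcast.xml
      # ...plus the remaining /configs/* subPath mounts, the same ones as on the main container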
I need to expose two of my named ports via environment variables. This is my Kubernetes Deployment YAML file:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: flink-taskmanager1255
spec:
replicas: 1
template:
metadata:
labels:
app: flink
component: taskmanager
spec:
volumes:
- name: task-pv-storage
persistentVolumeClaim:
claimName: task-pv-claim
containers:
- name: taskmanager
image: myrepo:9555/flink
args:
- taskmanager
resources:
limits:
cpu: "1"
memory: "2Gi"
requests:
cpu: "0.5"
memory: "1Gi"
ports:
- containerPort: 5021
name: data
- containerPort: 5022
name: rpc
- containerPort: 5125
name: query
livenessProbe:
tcpSocket:
port: data
initialDelaySeconds: 35
periodSeconds: 10
volumeMounts:
- mountPath: "/usr/share/flink/"
name: task-pv-storage
env:
- name: JOB_MANAGER_RPC_ADDRESS
value: flink-jobmanager
- name: FLINK_CONFIG_PATH
value: /usr/share/flink/flink-conf.yaml
- name: FLINK_LOG_DIR
value: /usr/share/flink/logs/
- name: TM_RPC_PORT
valueFrom:
resourceFieldRef:
containerName: taskmanager
fieldPath: ports.rpc
- name: TM_DATA_PORT
valueFrom:
resourceFieldRef:
containerName: taskmanager
fieldPath: ports.data
I get this error:
error: error converting YAML to JSON: yaml: line 51: mapping values are not allowed in this context
I believe the way I am trying to access my named ports is wrong, but I have no clue what the correct approach is. What is the right way to reference the named ports?
For starters, try to rearrange your YAML like this (one indentation level less):
- name: TM_RPC_PORT
valueFrom:
resourceFieldRef:
containerName: taskmanager
fieldPath: ports.rpc
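If the manifest then parses but the API still rejects the downward API reference: as far as I know, resourceFieldRef only supports resource requests and limits (cpu and memory), not container ports, so ports.rpc may not be accepted regardless of indentation. Since the port numbers are fixed in the same spec anyway, a simple fallback (a sketch using the numbers from your manifest) is to set the values directly:
- name: TM_RPC_PORT
  value: "5022"
- name: TM_DATA_PORT
  value: "5021"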