Expose named ports through environment variables in Kubernetes

I need to expose two of my named ports via environment variables. This is my Kubernetes Deployment YAML file:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: flink-taskmanager1255
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: flink
        component: taskmanager
    spec:
      volumes:
      - name: task-pv-storage
        persistentVolumeClaim:
          claimName: task-pv-claim
      containers:
      - name: taskmanager
        image: myrepo:9555/flink
        args:
        - taskmanager
        resources:
          limits:
            cpu: "1"
            memory: "2Gi"
          requests:
            cpu: "0.5"
            memory: "1Gi"
        ports:
        - containerPort: 5021
          name: data
        - containerPort: 5022
          name: rpc
        - containerPort: 5125
          name: query
        livenessProbe:
          tcpSocket:
            port: data
          initialDelaySeconds: 35
          periodSeconds: 10
        volumeMounts:
        - mountPath: "/usr/share/flink/"
          name: task-pv-storage
        env:
        - name: JOB_MANAGER_RPC_ADDRESS
          value: flink-jobmanager
        - name: FLINK_CONFIG_PATH
          value: /usr/share/flink/flink-conf.yaml
        - name: FLINK_LOG_DIR
          value: /usr/share/flink/logs/
        - name: TM_RPC_PORT
            valueFrom:
              resourceFieldRef:
                containerName: taskmanager
                fieldPath: ports.rpc
        - name: TM_DATA_PORT
            valueFrom:
              resourceFieldRef:
                containerName: taskmanager
                fieldPath: ports.data
I get this error:
error: error converting YAML to JSON: yaml: line 51: mapping values are not allowed in this context
I believe the way I am referencing my named ports is wrong, but I have no clue what the right way is. What is the correct way to access the named ports?

For starters, try rearranging your YAML like this (one indent less on the valueFrom block):
- name: TM_RPC_PORT
  valueFrom:
    resourceFieldRef:
      containerName: taskmanager
      fieldPath: ports.rpc
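You can then check that the corrected manifest at least parses with a client-side dry run (a minimal sketch; the file name is a placeholder, and older kubectl versions use a plain --dry-run flag instead):
kubectl apply --dry-run=client -f taskmanager-deployment.yaml
Note that this only validates the YAML structure and basic schema; it does not tell you whether the resourceFieldRef reference itself will resolve at runtime.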

Related

Apache Cassandra: Unable to gossip with any seeds on Kubernetes/ Ubuntu Server

I am trying to install DataStax Cassandra (DSE) on Kubernetes on Ubuntu, but I am getting the error below in the pod logs.
Apache Cassandra: Unable to gossip with any seeds
Replicas: 2
Heap memory given: 8GB
RAM given: 16GB
Server: AWS EC2
Disk: local provisioning
Note: Minikube is not required for this process.
Here's my cluster.yaml file:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: cassandra
  namespace: cassandra
  labels:
    app: cassandra
spec:
  serviceName: cassandra
  replicas: 2
  selector:
    matchLabels:
      app: cassandra
  template:
    metadata:
      labels:
        app: cassandra
    spec:
      terminationGracePeriodSeconds: 1800
      containers:
      - name: cassandra
        image: datastax/dse-server:6.7.7
        imagePullPolicy: Always
        ports:
        - containerPort: 7000
          name: intra-node
        - containerPort: 7001
          name: tls-intra-node
        - containerPort: 7199
          name: jmx
        - containerPort: 9042
          name: cql
        resources:
          limits:
            cpu: "8000m"
            memory: 16Gi
          requests:
            cpu: "8000m"
            memory: 16Gi
        securityContext:
          capabilities:
            add:
            - IPC_LOCK
        lifecycle:
          preStop:
            exec:
              command:
              - /bin/sh
              - -c
              - nodetool drain
        env:
        - name: MAX_HEAP_SIZE
          value: 8G
        - name: HEAP_NEWSIZE
          value: 8G
        - name: SEEDS
          value: "10.32.0.2,10.32.0.5,10.32.0.6"
        - name: CLUSTER_NAME
          value: "Demo Cluster"
        - name: DC
          value: "dc1"
        - name: RACK
          value: "rack1"
        - name: NUM_TOKENS
          value: "128"
        - name: DS_LICENSE
          value: "accept"
        - name: POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        readinessProbe:
          exec:
            command:
            - /bin/bash
            - -c
            - "cp /ready_probe.sh /tmp/ready_probe.sh && chmod 777 /tmp/ready_probe.sh && /tmp/ready_probe.sh"
          initialDelaySeconds: 20
          timeoutSeconds: 5
        volumeMounts:
        - name: cassandra-data
          mountPath: /var/lib/cassandra
          subPath: cassandra
        - name: cassandra-data
          mountPath: /var/lib/spark
          subPath: spark
        - name: cassandra-data
          mountPath: /var/lib/dsefs
          subPath: dsefs
        - name: cassandra-data
          mountPath: /var/log/cassandra
          subPath: log-cassandra
        - name: cassandra-data
          mountPath: /var/log/spark
          subPath: log-spark
        - name: cassandra-data
          mountPath: /config
          subPath: config
        - name: cassandra-data
          mountPath: /var/lib/opscenter
          subPath: opscenter
        - name: cassandra-data
          mountPath: /var/lib/datastax-studio
          subPath: datastax-studio
        - name: script
          mountPath: /ready_probe.sh
          subPath: ready.sh
      volumes:
      - name: script
        configMap:
          name: script
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/hostname
                operator: In
                values:
                - ubuntuserver
  volumeClaimTemplates:
  - metadata:
      name: cassandra-data
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: local-storage
      resources:
        requests:
          storage: 128Gi
Can anyone please help me with that?

Grafana is generating links with base URL http://localhost:3000 instead of using my URL

I deployed Grafana 7 on Kubernetes; here is my deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: grafana-core
  namespace: monitoring
  labels:
    app: grafana
    component: core
spec:
  selector:
    matchLabels:
      app: grafana
  replicas: 1
  template:
    metadata:
      labels:
        app: grafana
        component: core
    spec:
      initContainers:
      - name: init-chown-data
        image: grafana/grafana:7.0.3
        imagePullPolicy: IfNotPresent
        securityContext:
          runAsUser: 0
        command: ["chown", "-R", "472:472", "/var/lib/grafana"]
        volumeMounts:
        - name: grafana-persistent-storage
          mountPath: /var/lib/grafana
      containers:
      - image: grafana/grafana:7.0.3
        name: grafana-core
        imagePullPolicy: IfNotPresent
        securityContext:
          runAsUser: 472
        # env:
        envFrom:
        - secretRef:
            name: grafana-env
        env:
        # The following env variables set up basic auth with the default admin user and admin password.
        - name: GF_INSTALL_PLUGINS
          value: grafana-clock-panel,grafana-simple-json-datasource,camptocamp-prometheus-alertmanager-datasource
        - name: GF_AUTH_BASIC_ENABLED
          value: "true"
        - name: GF_SECURITY_ADMIN_USER
          valueFrom:
            secretKeyRef:
              name: grafana
              key: admin-username
        - name: GF_SECURITY_ADMIN_PASSWORD
          valueFrom:
            secretKeyRef:
              name: grafana
              key: admin-password
        - name: GF_AUTH_ANONYMOUS_ENABLED
          value: "false"
        readinessProbe:
          httpGet:
            path: /login
            port: 3000
          initialDelaySeconds: 30
          timeoutSeconds: 1
        volumeMounts:
        - name: grafana-persistent-storage
          mountPath: /var/lib/grafana
        - name: grafana-datasources
          mountPath: /etc/grafana/provisioning/datasources
      volumes:
      - name: grafana-persistent-storage
        persistentVolumeClaim:
          claimName: grafana-storage
      - name: grafana-datasources
        configMap:
          name: grafana-datasources
      nodeSelector:
        kops.k8s.io/instancegroup: monitoring-nodes
It is working well, but every time it generates a URL, it uses the base URL http://localhost:3000 instead of https://grafana.company.com.
Where can I configure that? I couldn't find an env var that handles it.
Configure the root_url option in the [server] section of your Grafana config file, or set the environment variable GF_SERVER_ROOT_URL to https://grafana.company.com/.
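For example, you could add it to the existing env list of the Deployment above (a sketch only; substitute your own domain):
- name: GF_SERVER_ROOT_URL
  value: "https://grafana.company.com/"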
I have found it can be done by setting the env variable inside the Grafana pod. This setup is a tricky one: getting the URL format of GF_SERVER_ROOT_URL wrong can cause problems, e.g. your.url with no quotes, "your.url" without https:// or http://, or even "http://your.url" with no / at the end.
grafana:
  env:
    GF_SERVER_ROOT_URL: "http://your.url/"
  notifiers:
    notifiers.yaml:
      notifiers:
      - name: telegram
        type: telegram
        uid: telegram
        is_default: true
        settings:
          bottoken: "yourbottoken"
          chatid: "-yourchatid"
and then reference uid: "telegram" in the provisioned dashboards.
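For illustration, with classic dashboard alerting the panel's alert block in the provisioned dashboard JSON would point at that notifier by uid, roughly like this (a fragment only; field names may differ in newer Grafana versions):
"notifications": [
  { "uid": "telegram" }
]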

Grafana dashboard not showing data (on EKS)

I have deployed Prometheus and Grafana on a Kubernetes cluster with the following manifest file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: grafana
  namespace: kube-monitoring
  labels:
    app: grafana
spec:
  replicas: 1
  selector:
    matchLabels:
      app: grafana
  template:
    metadata:
      labels:
        app: grafana
    spec:
      containers:
      - name: grafana
        image: grafana/grafana:6.3.2
        imagePullPolicy: IfNotPresent
        env:
        - name: GF_SECURITY_ADMIN_USER
          value: admin
        - name: GF_SECURITY_ADMIN_PASSWORD
          valueFrom:
            secretKeyRef:
              name: grafana-secret
              key: admin-password
        ports:
        - containerPort: 3000
        resources:
          limits:
            cpu: 500m
            memory: 2500Mi
          requests:
            cpu: 100m
            memory: 100Mi
        livenessProbe:
          exec:
            command:
            - wget
            - localhost:3000
            - --spider
          initialDelaySeconds: 30
          periodSeconds: 30
        volumeMounts:
        - mountPath: /var/lib/grafana
          subPath: grafana
          name: grafana-storage
          readOnly: false
        - mountPath: /etc/grafana/provisioning/datasources/
          name: grafana-datasource-conf
          readOnly: true
        - mountPath: /etc/grafana/provisioning/dashboards/
          name: grafana-dashboards-conf
          readOnly: false
        - mountPath: /var/lib/grafana/dashboards/0/k8s-cluster-usage
          name: grafana-dashboard-k8s-cluster-usage
          readOnly: false
        - mountPath: /var/lib/grafana/dashboards/0/k8s-cluster-nodes
          name: grafana-dashboard-k8s-cluster-nodes
          readOnly: false
        - mountPath: /var/lib/grafana/dashboards/0/k8s-core-dns
          name: grafana-dashboard-k8s-core-dns
          readOnly: false
      securityContext:
        runAsUser: 472
        fsGroup: 472
      restartPolicy: Always
      volumes:
      - name: grafana-storage
        persistentVolumeClaim:
          claimName: pvc-grafana
      - name: grafana-datasource-conf
        configMap:
          name: grafana-datasource-conf
          items:
          - key: datasource.yaml
            path: datasource.yaml
      - name: grafana-dashboards-conf
        configMap:
          name: grafana-dashboards-conf
          items:
          - key: dashboards.yaml
            path: dashboards.yaml
      - name: grafana-dashboard-k8s-cluster-usage
        configMap:
          name: grafana-dashboard-k8s-cluster-usage
      - name: grafana-dashboard-k8s-cluster-nodes
        configMap:
          name: grafana-dashboard-k8s-cluster-nodes
      - name: grafana-dashboard-k8s-core-dns
        configMap:
          name: grafana-dashboard-k8s-core-dns
and the dashboard config is https://pastebin.com/zAYn9BhY (it's too long to include here).
Among these, Core DNS and Cluster Usage show proper data and graphs, but Cluster Nodes doesn't show any data; every metric says "No data points".
Can anyone help here?
Cluster Nodes won't show you any metrics, because you are probably missing metrics-server.
If you are starting with the whole Prometheus stack, I would consider using prometheus-operator deployed via Helm. It is a little overwhelming at first, but it is a fairly easy way to get started, and prometheus-operator will deploy metrics-server too.
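For example, with Helm 3 (a sketch only; the release name and namespace are placeholders, and older Helm repos published this chart as stable/prometheus-operator):
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update
helm install monitoring prometheus-community/kube-prometheus-stack --namespace kube-monitoring
Alternatively, the standalone metrics-server can be installed directly, e.g. kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml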

Kubernetes cannot access Cassandra within same namespace

I get
All host(s) tried for query failed (tried: 10.244.0.72/10.244.0.72:9042 (com.datastax.driver.core.exceptions.TransportException: [10.244.0.72/10.244.0.72:9042] Channel has been closed))
when trying to access Cassandra from within the same namespace. However, when I forward ports, it works fine from localhost, and the keyspace is created successfully.
kubectl port-forward cassandra1-0 9042:9042
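With the port forwarded I can connect from localhost, for example with cqlsh (the client here is just an illustration of the working path):
cqlsh 127.0.0.1 9042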
My YAML:
apiVersion: v1
kind: Service
metadata:
  name: cassandra1
  labels:
    app: cassandra1
spec:
  ports:
  - name: "cql"
    protocol: "TCP"
    port: 9042
    targetPort: 9042
  - name: "thrift"
    protocol: "TCP"
    port: 9160
    targetPort: 9160
  selector:
    app: cassandra1
  type: NodePort
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: cassandra1
  labels:
    app: cassandra1
spec:
  serviceName: cassandra1
  replicas: 1
  selector:
    matchLabels:
      app: cassandra1
  template:
    metadata:
      labels:
        app: cassandra1
    spec:
      terminationGracePeriodSeconds: 1800
      containers:
      - name: cassandra1
        image: gcr.io/google-samples/cassandra:v13
        imagePullPolicy: Always
        ports:
        - containerPort: 7000
          name: intra-node
        - containerPort: 7001
          name: tls-intra-node
        - containerPort: 7199
          name: jmx
        - containerPort: 9042
          name: cql
        - containerPort: 9160
          name: thrift
        resources:
          limits:
            cpu: "500m"
            memory: 1Gi
          requests:
            cpu: "500m"
            memory: 1Gi
        securityContext:
          capabilities:
            add:
            - IPC_LOCK
        lifecycle:
          preStop:
            exec:
              command:
              - /bin/sh
              - -c
              - nodetool drain
        env:
        - name: MAX_HEAP_SIZE
          value: 512M
        - name: HEAP_NEWSIZE
          value: 100M
        - name: CASSANDRA_SEEDS
          value: "cassandra1-0.cassandra1.default.svc.cluster.local"
        - name: CASSANDRA_CLUSTER_NAME
          value: "cassandra1"
        - name: CASSANDRA_DC
          value: "DC1-cassandra1"
        - name: CASSANDRA_RACK
          value: "Rack1-cassandra1"
        - name: POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        readinessProbe:
          exec:
            command:
            - /bin/bash
            - -c
            - /ready-probe.sh
          initialDelaySeconds: 15
          timeoutSeconds: 5
        # These volume mounts are persistent. They are like inline claims,
        # but not exactly because the names need to match exactly one of
        # the stateful pod volumes.
        volumeMounts:
        - name: cassandra1-data
          mountPath: /cassandra1_data
  volumeClaimTemplates:
  - metadata:
      name: cassandra1-data
      namespace: default
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 1Gi
Cassandra starts with the following properties:
Starting Cassandra on 10.244.0.72
CASSANDRA_CONF_DIR /etc/cassandra
CASSANDRA_CFG /etc/cassandra/cassandra.yaml
CASSANDRA_AUTO_BOOTSTRAP true
CASSANDRA_BROADCAST_ADDRESS 10.244.0.72
CASSANDRA_BROADCAST_RPC_ADDRESS 10.244.0.72
CASSANDRA_CLUSTER_NAME cassandra1
CASSANDRA_COMPACTION_THROUGHPUT_MB_PER_SEC
CASSANDRA_CONCURRENT_COMPACTORS
CASSANDRA_CONCURRENT_READS
CASSANDRA_CONCURRENT_WRITES
CASSANDRA_COUNTER_CACHE_SIZE_IN_MB
CASSANDRA_DC DC1-cassandra1
CASSANDRA_DISK_OPTIMIZATION_STRATEGY ssd
CASSANDRA_ENDPOINT_SNITCH SimpleSnitch
CASSANDRA_GC_WARN_THRESHOLD_IN_MS
CASSANDRA_INTERNODE_COMPRESSION
CASSANDRA_KEY_CACHE_SIZE_IN_MB
CASSANDRA_LISTEN_ADDRESS 10.244.0.72
CASSANDRA_LISTEN_INTERFACE
CASSANDRA_MEMTABLE_ALLOCATION_TYPE
CASSANDRA_MEMTABLE_CLEANUP_THRESHOLD
CASSANDRA_MEMTABLE_FLUSH_WRITERS
CASSANDRA_MIGRATION_WAIT 1
CASSANDRA_NUM_TOKENS 32
CASSANDRA_RACK Rack1-cassandra1
CASSANDRA_RING_DELAY 30000
CASSANDRA_RPC_ADDRESS 0.0.0.0
CASSANDRA_RPC_INTERFACE
CASSANDRA_SEEDS cassandra1-0.cassandra1.default.svc.cluster.local
CASSANDRA_SEED_PROVIDER org.apache.cassandra.locator.SimpleSeedProvider
changed ownership of '/cassandra_data/data' from root to cassandra
changed ownership of '/cassandra_data' from root to cassandra
In my application, which runs in the same namespace, I tried setting the Cassandra port to 9042 and the host to:
10.240.0.4 (hostIP)
10.244.0.72 (podIP)
cassandra1 (name of the service)
cassandra1.default
cassandra1.default.svc.cluster.local
cassandra1-0.cassandra1.default.svc.cluster.local
_cql._tcp.cassandra1.default.svc.cluster.local
I also tried different Service types: headless, ClusterIP, NodePort.
Does anybody have any ideas about what is wrong, or what else I can try to get this to work?

Getting Consul and Registrator to work in Kubernetes

I'm trying to use Consul with Registrator in GCE and Kubernetes. Everything launches fine except Registrator.
Here is my deployment:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  creationTimestamp: null
  name: consul
spec:
  replicas: 1
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        service: consul
    spec:
      restartPolicy: Always
      containers:
      - name: consul
        image: eu.gcr.io/xxx/consul
        ports:
        - containerPort: 8300
          protocol: TCP
        - containerPort: 8400
          protocol: TCP
        - containerPort: 8500
          protocol: TCP
        - containerPort: 53
          protocol: UDP
        env:
        - name: MY_POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        args:
        - -server
        - -bootstrap
        - -advertise=$(MY_POD_IP)
      - name: registrator
        args:
        - -internal
        - -ip=$(MY_POD_IP)
        - consul://localhost:8500
        env:
        - name: MY_POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        image: eu.gcr.io/xxx/registrator
        volumeMounts:
        - mountPath: /tmp/docker.sock
          name: registrator-claim0
      volumes:
      - name: registrator-claim0
        persistentVolumeClaim:
          claimName: registrator-claim0
status: {}
Here are the log outputs for Consul and Registrator:
In docker-compose everything works fine, but I haven't got my head completely around Kubernetes and GCE yet. Thanks for the help!
I have switched to Linkerd, which works very well together with Kubernetes.