Kubernetes cannot access Cassandra within same namespace - kubernetes

I get the following error when trying to access Cassandra from within the same namespace:
All host(s) tried for query failed (tried: 10.244.0.72/10.244.0.72:9042 (com.datastax.driver.core.exceptions.TransportException: [10.244.0.72/10.244.0.72:9042] Channel has been closed))
However, when I forward the port, it works fine from localhost, and the keyspace is created successfully:
kubectl port-forward cassandra1-0 9042:9042
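With the forward in place, a local check along the following lines succeeds (the cqlsh invocation is only an illustration and assumes cqlsh is installed locally; it is not necessarily the exact command used):
cqlsh 127.0.0.1 9042 -e "DESCRIBE KEYSPACES"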
My YAML:
apiVersion: v1
kind: Service
metadata:
name: cassandra1
labels:
app: cassandra1
spec:
ports:
- name: "cql"
protocol: "TCP"
port: 9042
targetPort: 9042
- name: "thrift"
protocol: "TCP"
port: 9160
targetPort: 9160
selector:
app: cassandra1
type: NodePort
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: cassandra1
labels:
app: cassandra1
spec:
serviceName: cassandra1
replicas: 1
selector:
matchLabels:
app: cassandra1
template:
metadata:
labels:
app: cassandra1
spec:
terminationGracePeriodSeconds: 1800
containers:
- name: cassandra1
image: gcr.io/google-samples/cassandra:v13
imagePullPolicy: Always
ports:
- containerPort: 7000
name: intra-node
- containerPort: 7001
name: tls-intra-node
- containerPort: 7199
name: jmx
- containerPort: 9042
name: cql
- containerPort: 9160
name: thrift
resources:
limits:
cpu: "500m"
memory: 1Gi
requests:
cpu: "500m"
memory: 1Gi
securityContext:
capabilities:
add:
- IPC_LOCK
lifecycle:
preStop:
exec:
command:
- /bin/sh
- -c
- nodetool drain
env:
- name: MAX_HEAP_SIZE
value: 512M
- name: HEAP_NEWSIZE
value: 100M
- name: CASSANDRA_SEEDS
value: "cassandra1-0.cassandra1.default.svc.cluster.local"
- name: CASSANDRA_CLUSTER_NAME
value: "cassandra1"
- name: CASSANDRA_DC
value: "DC1-cassandra1"
- name: CASSANDRA_RACK
value: "Rack1-cassandra1"
- name: POD_IP
valueFrom:
fieldRef:
fieldPath: status.podIP
readinessProbe:
exec:
command:
- /bin/bash
- -c
- /ready-probe.sh
initialDelaySeconds: 15
timeoutSeconds: 5
# These volume mounts are persistent. They are like inline claims,
# but not exactly because the names need to match exactly one of
# the stateful pod volumes.
volumeMounts:
- name: cassandra1-data
mountPath: /cassandra1_data
volumeClaimTemplates:
- metadata:
name: cassandra1-data
namespace: default
spec:
accessModes: [ "ReadWriteOnce" ]
resources:
requests:
storage: 1Gi
Cassandra starts with the following properties:
Starting Cassandra on 10.244.0.72
CASSANDRA_CONF_DIR /etc/cassandra
CASSANDRA_CFG /etc/cassandra/cassandra.yaml
CASSANDRA_AUTO_BOOTSTRAP true
CASSANDRA_BROADCAST_ADDRESS 10.244.0.72
CASSANDRA_BROADCAST_RPC_ADDRESS 10.244.0.72
CASSANDRA_CLUSTER_NAME cassandra1
CASSANDRA_COMPACTION_THROUGHPUT_MB_PER_SEC
CASSANDRA_CONCURRENT_COMPACTORS
CASSANDRA_CONCURRENT_READS
CASSANDRA_CONCURRENT_WRITES
CASSANDRA_COUNTER_CACHE_SIZE_IN_MB
CASSANDRA_DC DC1-cassandra1
CASSANDRA_DISK_OPTIMIZATION_STRATEGY ssd
CASSANDRA_ENDPOINT_SNITCH SimpleSnitch
CASSANDRA_GC_WARN_THRESHOLD_IN_MS
CASSANDRA_INTERNODE_COMPRESSION
CASSANDRA_KEY_CACHE_SIZE_IN_MB
CASSANDRA_LISTEN_ADDRESS 10.244.0.72
CASSANDRA_LISTEN_INTERFACE
CASSANDRA_MEMTABLE_ALLOCATION_TYPE
CASSANDRA_MEMTABLE_CLEANUP_THRESHOLD
CASSANDRA_MEMTABLE_FLUSH_WRITERS
CASSANDRA_MIGRATION_WAIT 1
CASSANDRA_NUM_TOKENS 32
CASSANDRA_RACK Rack1-cassandra1
CASSANDRA_RING_DELAY 30000
CASSANDRA_RPC_ADDRESS 0.0.0.0
CASSANDRA_RPC_INTERFACE
CASSANDRA_SEEDS cassandra1-0.cassandra1.default.svc.cluster.local
CASSANDRA_SEED_PROVIDER org.apache.cassandra.locator.SimpleSeedProvider
changed ownership of '/cassandra_data/data' from root to cassandra
changed ownership of '/cassandra_data' from root to cassandra
In my application, which runs in the same namespace, I tried setting the Cassandra port to 9042 and the host to each of the following:
10.240.0.4 (hostIP)
10.244.0.72 (podIP)
cassandra1 (name of the service)
cassandra1.default
cassandra1.default.svc.cluster.local
cassandra1-0.cassandra1.default.svc.cluster.local
_cql._tcp.cassandra1.default.svc.cluster.local
I also tried different Service types: headless, ClusterIP, and NodePort.
Does anybody have ANY idea what is wrong, or what else I can try to get this to work?
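For reference, a minimal in-cluster connectivity check can be sketched as below; the throwaway pod names and images are illustrative and not part of the original setup:
kubectl run cassandra-client --rm -it --restart=Never --image=cassandra:3.11 -- \
  cqlsh -e "DESCRIBE KEYSPACES" cassandra1.default.svc.cluster.local 9042
kubectl run tcp-check --rm -it --restart=Never --image=busybox -- \
  nc -z -w 2 cassandra1.default.svc.cluster.local 9042
If the raw TCP check succeeds but the driver still fails, the problem is more likely on the client/driver side than in the Service definition.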

Related

Can't read ingress-nginx controller pod's logs after configuring filebeat

We are currently working on persisting ingress-nginx logs. We implemented a Filebeat sidecar to push the logs to Logstash, but we cannot read access.log and error.log at this path (/var/log/nginx/access.log). We now need to read the logs from this path; please suggest a solution for this issue.
The ingress-nginx manifests are deployed as follows.
Filebeat ConfigMap:
apiVersion: v1
kind: ConfigMap
metadata:
name: filebeat-configmap
namespace: ingress-nginx
data:
filebeat.yml: |
filebeat:
config:
modules:
path: /usr/share/filebeat/modules.d/*.yml
reload:
enabled: true
modules:
- module: nginx
access:
var.paths: ["/var/log/nginx/access.log*"]
error:
var.paths: ["/var/log/nginx/error.log*"]
output:
logstash:
hosts: ["logstash-logstash-headless:9600"]
loadbalance: true
Deployment file:
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
helm.sh/chart: ingress-nginx-3.10.1
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/version: 0.41.2
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/component: controller
name: ingress-nginx-controller
namespace: ingress-nginx
spec:
selector:
matchLabels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/component: controller
revisionHistoryLimit: 10
minReadySeconds: 0
template:
metadata:
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/component: controller
spec:
dnsPolicy: ClusterFirst
containers:
- name: controller
image: k8s.gcr.io/ingress-nginx/controller:v0.41.2@sha256:1f4f402b9c14f3ae92b11ada1dfe9893a88f0faeb0b2f4b903e2c67a0c3bf0de
imagePullPolicy: IfNotPresent
lifecycle:
preStop:
exec:
command:
- /wait-shutdown
args:
- /nginx-ingress-controller
- --publish-service=$(POD_NAMESPACE)/ingress-nginx-controller
- --election-id=ingress-controller-leader
- --ingress-class=nginx
- --configmap=$(POD_NAMESPACE)/ingress-nginx-controller
- --validating-webhook=:8443
- --validating-webhook-certificate=/usr/local/certificates/cert
- --validating-webhook-key=/usr/local/certificates/key
- --annotations-prefix=nginx.ingress.kubernetes.io
- --enable-ssl-passthrough
securityContext:
capabilities:
drop:
- ALL
add:
- NET_BIND_SERVICE
runAsUser: 101
allowPrivilegeEscalation: true
env:
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- name: LD_PRELOAD
value: /usr/local/lib/libmimalloc.so
livenessProbe:
httpGet:
path: /healthz
port: 10254
scheme: HTTP
initialDelaySeconds: 10
periodSeconds: 10
timeoutSeconds: 1
successThreshold: 1
failureThreshold: 5
readinessProbe:
httpGet:
path: /healthz
port: 10254
scheme: HTTP
initialDelaySeconds: 10
periodSeconds: 10
timeoutSeconds: 1
successThreshold: 1
failureThreshold: 3
ports:
- name: http
containerPort: 80
protocol: TCP
- name: https
containerPort: 443
protocol: TCP
- name: webhook
containerPort: 8443
protocol: TCP
- name: prometheus
containerPort: 10254
protocol: TCP
volumeMounts:
- name: webhook-cert
mountPath: /usr/local/certificates/
readOnly: true
- name: nginx-logs
mountPath: var/log/nginx
resources:
requests:
cpu: 100m
memory: 90Mi
- name: filebeat-nginx
image: docker.elastic.co/beats/filebeat:7.13.0
volumeMounts:
- name: nginx-logs
mountPath: var/log/nginx
- name: filebeat-config
mountPath: /usr/share/filebeat/filebeat.yml
subPath: filebeat.yml
nodeSelector:
kubernetes.io/os: linux
serviceAccountName: ingress-nginx
terminationGracePeriodSeconds: 300
volumes:
- name: nginx-logs
- name: webhook-cert
secret:
secretName: ingress-nginx-admission
- name: filebeat-config
configMap:
name: filebeat-configmap
items:
- key: filebeat.yml
path: filebeat.yml
It looks like there is a typo in the file path in the following code:
- name: nginx-logs
mountPath: var/log/nginx
resources:
requests:
cpu: 100m
memory: 90Mi
- name: filebeat-nginx
image: docker.elastic.co/beats/filebeat:7.13.0
volumeMounts:
- name: nginx-logs
mountPath: var/log/nginx
The file paths need to be changed from var/log/nginx to /var/log/nginx (add a forward slash at the beginning of the path).
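To confirm the sidecar can now see the files, something along these lines should list both logs (the pod name is a placeholder for your actual controller pod):
kubectl -n ingress-nginx exec <ingress-nginx-controller-pod> -c filebeat-nginx -- ls -l /var/log/nginx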

K8s MinIO deployment: how to access the console?

How do I access the Minio console?
minio.yaml
apiVersion: v1
kind: Service
metadata:
name: minio
labels:
app: minio
spec:
clusterIP: None
ports:
- port: 9000
name: minio
selector:
app: minio
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: minio
spec:
serviceName: minio
replicas: 4
selector:
matchLabels:
app: minio
template:
metadata:
labels:
app: minio
spec:
terminationGracePeriodSeconds: 20
affinity:
podAntiAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 100
podAffinityTerm:
labelSelector:
matchExpressions:
- key: app
operator: In
values:
- minio
topologyKey: kubernetes.io/hostname
containers:
- name: minio
env:
- name: MINIO_ACCESS_KEY
value: "hengshi"
- name: MINIO_SECRET_KEY
value: "hengshi202020"
image: minio/minio:RELEASE.2018-08-02T23-11-36Z
args:
- server
- http://minio-0.minio-internal.cts-cernerdevtools-minio.svc.cluster.local/data/
- http://minio-1.minio-internal.cts-cernerdevtools-minio.svc.cluster.local/data/
- http://minio-2.minio-internal.cts-cernerdevtools-minio.svc.cluster.local/data/
- http://minio-3.minio-internal.cts-cernerdevtools-minio.svc.cluster.local/data/
ports:
- containerPort: 9000
- containerPort: 9001
volumeMounts:
- name: minio-data
mountPath: /data
volumeClaimTemplates:
- metadata:
name: minio-data
spec:
accessModes:
- ReadWriteMany
resources:
requests:
storage: 300M
---
apiVersion: v1
kind: Service
metadata:
name: minio-service
spec:
type: NodePort
ports:
- name: server-port
port: 9000
targetPort: 9000
protocol: TCP
nodePort: 30009
- name: console-port
port: 9001
targetPort: 9001
protocol: TCP
nodePort: 30010
selector:
app: minio
curl http://NodeIP:30010 fails.
I tried the container arg --console-address ":9001" and the env var MINIO_BROWSER, but the console is still not accessible.
One more question: what are the startup parameters for the latest MinIO image? There seems to be something wrong with my args.
You can specify --console-address :9001 in the args: section of your deployment.yaml, as shown below:
args:
- server
- --console-address
- :9001
- /data
In the same way, your Service and Ingress need to point to port 9001 now with the latest MinIO:
ports:
- protocol: TCP
port: 9001
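After applying the change, the console should answer on the NodePort that maps to 9001 (30010 in the Service from the question); a quick check, assuming the node IP is reachable from where you run it:
kubectl apply -f minio.yaml
curl -I http://<NodeIP>:30010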

Zonal network endpoint group unhealthy even though the container application is working properly

I've created a Kubernetes cluster on Google Cloud, and even though the application is running properly (which I've checked by running requests inside the cluster), the NEG health check does not seem to be working. Any ideas on the cause?
I've tried changing the Service from NodePort to LoadBalancer and different ways of adding annotations to the Service. I was thinking that it might perhaps be related to the HTTPS requirement on the Django side.
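For context, an in-cluster request of the kind mentioned above would look roughly like this (the test pod name and image are placeholders, and <pod-ip> would come from kubectl get pod -o wide):
kubectl run curl-test --rm -it --restart=Never --image=curlimages/curl -- \
  curl -H 'X-Forwarded-Proto: https' http://<pod-ip>:5000/ht/
The full manifests follow.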
# [START kubernetes_deployment]
apiVersion: apps/v1
kind: Deployment
metadata:
name: moner-app
labels:
app: moner-app
spec:
replicas: 1
strategy:
type: Recreate
selector:
matchLabels:
app: moner-app
template:
metadata:
labels:
app: moner-app
spec:
containers:
- name: moner-core-container
image: my-template
imagePullPolicy: Always
resources:
requests:
memory: "128Mi"
limits:
memory: "512Mi"
startupProbe:
httpGet:
path: /ht/
port: 5000
httpHeaders:
- name: "X-Forwarded-Proto"
value: "https"
failureThreshold: 30
timeoutSeconds: 10
periodSeconds: 10
initialDelaySeconds: 90
readinessProbe:
initialDelaySeconds: 120
httpGet:
path: "/ht/"
port: 5000
httpHeaders:
- name: "X-Forwarded-Proto"
value: "https"
periodSeconds: 10
failureThreshold: 3
timeoutSeconds: 10
livenessProbe:
initialDelaySeconds: 30
failureThreshold: 3
periodSeconds: 30
timeoutSeconds: 10
httpGet:
path: "/ht/"
port: 5000
httpHeaders:
- name: "X-Forwarded-Proto"
value: "https"
volumeMounts:
- name: cloudstorage-credentials
mountPath: /secrets/cloudstorage
readOnly: true
env:
# [START_secrets]
- name: THIS_POD_IP
valueFrom:
fieldRef:
fieldPath: status.podIP
- name: GRACEFUL_TIMEOUT
value: '120'
- name: GUNICORN_HARD_TIMEOUT
value: '90'
- name: DJANGO_ALLOWED_HOSTS
value: '*,$(THIS_POD_IP),0.0.0.0'
ports:
- containerPort: 5000
args: ["/start"]
# [START proxy_container]
- image: gcr.io/cloudsql-docker/gce-proxy:1.16
name: cloudsql-proxy
command: ["/cloud_sql_proxy", "--dir=/cloudsql",
"-instances=moner-dev:us-east1:core-db=tcp:5432",
"-credential_file=/secrets/cloudsql/credentials.json"]
resources:
requests:
memory: "64Mi"
limits:
memory: "128Mi"
volumeMounts:
- name: cloudsql-oauth-credentials
mountPath: /secrets/cloudsql
readOnly: true
- name: ssl-certs
mountPath: /etc/ssl/certs
- name: cloudsql
mountPath: /cloudsql
# [END proxy_container]
# [START volumes]
volumes:
- name: cloudsql-oauth-credentials
secret:
secretName: cloudsql-oauth-credentials
- name: ssl-certs
hostPath:
path: /etc/ssl/certs
- name: cloudsql
emptyDir: {}
- name: cloudstorage-credentials
secret:
secretName: cloudstorage-credentials
# [END volumes]
# [END kubernetes_deployment]
---
# [START service]
apiVersion: v1
kind: Service
metadata:
name: moner-svc
annotations:
cloud.google.com/neg: '{"ingress": true, "exposed_ports": {"5000":{}}}' # Creates an NEG after an Ingress is created
cloud.google.com/backend-config: '{"default": "moner-backendconfig"}'
labels:
app: moner-svc
spec:
type: NodePort
ports:
- name: moner-core-http
port: 5000
protocol: TCP
targetPort: 5000
selector:
app: moner-app
# [END service]
---
# [START certificates_setup]
apiVersion: networking.gke.io/v1
kind: ManagedCertificate
metadata:
name: managed-cert
spec:
domains:
- domain.com
- app.domain.com
# [END certificates_setup]
---
apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
name: moner-backendconfig
spec:
customRequestHeaders:
headers:
- "X-Forwarded-Proto:https"
healthCheck:
checkIntervalSec: 15
port: 5000
type: HTTP
requestPath: /ht/
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: managed-cert-ingress
annotations:
kubernetes.io/ingress.global-static-ip-name: moner-ssl
networking.gke.io/managed-certificates: managed-cert
kubernetes.io/ingress.class: "gce"
spec:
defaultBackend:
service:
name: moner-svc
port:
name: moner-core-http
Apparently, you didn't have a GCP firewall rule allowing traffic on port 5000 to your GKE nodes. Creating an ingress firewall rule with source IP range 0.0.0.0/0 and port TCP 5000, targeted at your GKE nodes, could allow your setup to work even with port 5000.
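Such a rule could be created with gcloud along these lines (the rule name, network, and node tag are placeholders for your own values):
gcloud compute firewall-rules create allow-neg-health-5000 \
  --network=<your-vpc> \
  --direction=INGRESS \
  --action=ALLOW \
  --rules=tcp:5000 \
  --source-ranges=0.0.0.0/0 \
  --target-tags=<gke-node-tag>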
I'm still not sure why, but I managed to get it working when I moved the Service to port 80 and kept the health check on port 5000.
Service config:
# [START service]
apiVersion: v1
kind: Service
metadata:
name: moner-svc
annotations:
cloud.google.com/neg: '{"ingress": true, "exposed_ports": {"5000":{}}}' # Creates an NEG after an Ingress is created
cloud.google.com/backend-config: '{"default": "moner-backendconfig"}'
labels:
app: moner-svc
spec:
type: NodePort
ports:
- name: moner-core-http
port: 80
protocol: TCP
targetPort: 5000
selector:
app: moner-app
# [END service]
Backend config:
apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
name: moner-backendconfig
spec:
customRequestHeaders:
headers:
- "X-Forwarded-Proto:https"
healthCheck:
checkIntervalSec: 15
port: 5000
type: HTTP
requestPath: /ht/

Why don't the pods' hostnames resolve?

I have a problem with pods in Kubernetes.
I have an app pod (invoice) with an init container that checks whether the MySQL pod (invoice-mysql) is running.
invoice-mysql is running and ready, but the init container in the invoice pod does not see it.
Here are the logs of the init container:
DB is not yet reachable;sleep for 10s before retry
DB is not yet reachable;sleep for 10s before retry
nc: bad address 'invoice-mysql'
nc: bad address 'invoice-mysql'
DB is not yet reachable;sleep for 10s before retry
nc: bad address 'invoice-mysql'
DB is not yet reachable;sleep for 10s before retry
nc: bad address 'invoice-mysql'
Here is the YAML of invoice:
apiVersion: apps/v1
kind: Deployment
metadata:
name: invoice
namespace: jhipster
spec:
replicas: 1
selector:
matchLabels:
app: invoice
version: 'v1'
template:
metadata:
labels:
app: invoice
version: 'v1'
spec:
affinity:
podAntiAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- podAffinityTerm:
labelSelector:
matchExpressions:
- key: app
operator: In
values:
- invoice
topologyKey: kubernetes.io/hostname
weight: 100
initContainers:
- name: init-ds
image: busybox:latest
command:
- '/bin/sh'
- '-c'
- |
while true
do
rt=$(nc -z -w 1 invoice-mysql 3306)
if [ $? -eq 0 ]; then
echo "DB is UP"
break
fi
echo "DB is not yet reachable;sleep for 10s before retry"
sleep 10
done
containers:
- name: invoice-app
image: docker.pkg.github.com/morsi84/kubernetes/invoice
env:
- name: SPRING_PROFILES_ACTIVE
value: prod
- name: SPRING_CLOUD_CONFIG_URI
value: http://admin:${jhipster.registry.password}@jhipster-registry.Jhipster.svc.cluster.local:8761/config
- name: JHIPSTER_REGISTRY_PASSWORD
valueFrom:
secretKeyRef:
name: registry-secret
key: registry-admin-password
- name: EUREKA_CLIENT_SERVICE_URL_DEFAULTZONE
value: http://admin:${jhipster.registry.password}@jhipster-registry.Jhipster.svc.cluster.local:8761/eureka/
- name: SPRING_DATASOURCE_URL
value: jdbc:mysql://invoice-mysql.Jhipster.svc.cluster.local:3306/invoice?useUnicode=true&characterEncoding=utf8&useSSL=false&useLegacyDatetimeCode=false&serverTimezone=UTC&createDatabaseIfNotExist=true
- name: SPRING_SLEUTH_PROPAGATION_KEYS
value: 'x-request-id,x-ot-span-context'
- name: JAVA_OPTS
value: ' -Xmx256m -Xms256m'
resources:
requests:
memory: '512Mi'
cpu: '500m'
limits:
memory: '1Gi'
cpu: '1'
ports:
- name: http
containerPort: 8081
readinessProbe:
httpGet:
path: /management/health
port: http
initialDelaySeconds: 20
periodSeconds: 15
failureThreshold: 6
livenessProbe:
httpGet:
path: /management/health
port: http
initialDelaySeconds: 120
imagePullSecrets:
- name: regcred
Here is the YAML of invoice-mysql:
apiVersion: apps/v1
kind: Deployment
metadata:
name: invoice-mysql
namespace: jhipster
spec:
replicas: 1
selector:
matchLabels:
app: invoice-mysql
template:
metadata:
labels:
app: invoice-mysql
spec:
volumes:
- name: data
emptyDir: {}
containers:
- name: mysql
image: mysql:8.0.20
env:
- name: MYSQL_USER
value: root
- name: MYSQL_ALLOW_EMPTY_PASSWORD
value: 'yes'
- name: MYSQL_DATABASE
value: invoice
args:
- --lower_case_table_names=1
- --skip-ssl
- --character_set_server=utf8mb4
- --explicit_defaults_for_timestamp
ports:
- containerPort: 3306
volumeMounts:
- name: data
mountPath: /var/lib/mysql/
resources:
requests:
memory: '512Mi'
cpu: '500m'
limits:
memory: '1Gi'
cpu: '1'
---
apiVersion: v1
kind: Service
metadata:
name: invoice-mysql
namespace: jhipster
spec:
selector:
app: invoice-mysql
ports:
- port: 3306
Here is a description of the invoice-mysql service:
Name: invoice-mysql
Namespace: jhipster
Labels: <none>
Annotations: <none>
Selector: app=invoice-mysql
Type: ClusterIP
IP: 10.98.220.110
Port: <unset> 3306/TCP
TargetPort: 3306/TCP
Endpoints: 10.244.1.66:3306
Session Affinity: None
Events: <none>
Working environment:
Faulty environment:
Can you help me, please?
The problem was in fact due to the firewall on my CentOS hosts; I disabled the firewall and the names started resolving.
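On CentOS that typically means something like the following on each node (shown only for illustration; whether disabling the firewall outright is acceptable depends on your environment):
sudo systemctl stop firewalld
sudo systemctl disable firewalld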
I think you should use a full service domain instead of invoice-mysql. Can you try using invoice-mysql.Jhipster.svc.cluster.local instead?
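A quick way to test that suggestion is to resolve the full name from a throwaway pod in the same namespace (the busybox tag is pinned here only as an example, and the namespace is lowercased to match the manifests):
kubectl run -n jhipster dns-test --rm -it --restart=Never --image=busybox:1.28 -- \
  nslookup invoice-mysql.jhipster.svc.cluster.local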

How to connect to a NATS Streaming cluster

I am new to Kubernetes and am trying to set up a NATS Streaming cluster. I am using the following manifest file, but I am confused about how to access the NATS Streaming server from my application. I am using Azure Kubernetes Service.
---
apiVersion: v1
kind: ConfigMap
metadata:
name: stan-config
data:
stan.conf: |
# listen: nats-streaming:4222
port: 4222
http: 8222
streaming {
id: stan
store: file
dir: /data/stan/store
cluster {
node_id: $POD_NAME
log_path: /data/stan/log
# Explicit names of resulting peers
peers: ["nats-streaming-0", "nats-streaming-1", "nats-streaming-2"]
}
}
---
apiVersion: v1
kind: Service
metadata:
name: nats-streaming
labels:
app: nats-streaming
spec:
type: ClusterIP
selector:
app: nats-streaming
ports:
- port: 4222
targetPort: 4222
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: nats-streaming
labels:
app: nats-streaming
spec:
selector:
matchLabels:
app: nats-streaming
serviceName: nats-streaming
replicas: 3
volumeClaimTemplates:
- metadata:
name: stan-sts-vol
spec:
accessModes:
- ReadWriteOnce
volumeMode: "Filesystem"
resources:
requests:
storage: 1Gi
template:
metadata:
labels:
app: nats-streaming
spec:
# Prevent NATS Streaming pods running in same host.
affinity:
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- topologyKey: "kubernetes.io/hostname"
labelSelector:
matchExpressions:
- key: app
operator: In
values:
- nats-streaming
# STAN Server
containers:
- name: nats-streaming
image: nats-streaming
ports:
- containerPort: 8222
name: monitor
- containerPort: 7777
name: metrics
args:
- "-sc"
- "/etc/stan-config/stan.conf"
# Required to be able to define an environment variable
# that refers to other environment variables. This env var
# is later used as part of the configuration file.
env:
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
volumeMounts:
- name: config-volume
mountPath: /etc/stan-config
- name: stan-sts-vol
mountPath: /data/stan
# Disable CPU limits.
resources:
requests:
cpu: 0
livenessProbe:
httpGet:
path: /
port: 8222
initialDelaySeconds: 10
timeoutSeconds: 5
volumes:
- name: config-volume
configMap:
name: stan-config
I tried using nats://nats-streaming:4222, but it gives the following error:
stan: connect request timeout (possibly wrong cluster ID?)
I am referring to https://docs.nats.io/nats-on-kubernetes/minimal-setup.
You did not specify the NATS client port 4222 in the StatefulSet, even though it is the port your Service targets:
...
ports:
- port: 4222
targetPort: 4222
...
As you can see from simple-nats.yml, they have set up the following ports:
...
containers:
- name: nats
image: nats:2.1.0-alpine3.10
ports:
- containerPort: 4222
name: client
hostPort: 4222
- containerPort: 7422
name: leafnodes
hostPort: 7422
- containerPort: 6222
name: cluster
- containerPort: 8222
name: monitor
- containerPort: 7777
name: metrics
command:
- "nats-server"
- "--config"
- "/etc/nats-config/nats.conf"
...
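To verify the fix, you can check which container ports the pod actually exposes and probe the Service from inside the cluster (the test pod name and image are illustrative):
kubectl get pod nats-streaming-0 -o jsonpath='{.spec.containers[0].ports}'
kubectl run nats-check --rm -it --restart=Never --image=busybox -- \
  nc -z -w 2 nats-streaming 4222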
As for exposing the service externally, I would recommend reading Using a Service to Expose Your App and Exposing an External IP Address to Access an Application in a Cluster.
There is also a nice article, although a bit old (2017), Exposing ports to Kubernetes pods on Azure, and you can also check the Azure docs: Quickstart: Deploy an Azure Kubernetes Service cluster using the Azure CLI.
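For quick local testing before exposing anything publicly, a port-forward to the Service also works:
kubectl port-forward svc/nats-streaming 4222:4222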