Why doesn't the pod's hostname resolve? - kubernetes

I have a problem with pods in Kubernetes.
I have an app pod (invoice) with an init container that checks whether the MySQL pod (invoice-mysql) is running.
invoice-mysql is running and ready, but the init container in the invoice pod does not see it.
Here are the logs of the init container:
DB is not yet reachable;sleep for 10s before retry
DB is not yet reachable;sleep for 10s before retry
nc: bad address 'invoice-mysql'
nc: bad address 'invoice-mysql'
DB is not yet reachable;sleep for 10s before retry
nc: bad address 'invoice-mysql'
DB is not yet reachable;sleep for 10s before retry
nc: bad address 'invoice-mysql'
Here is the YAML of invoice:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: invoice
  namespace: jhipster
spec:
  replicas: 1
  selector:
    matchLabels:
      app: invoice
      version: 'v1'
  template:
    metadata:
      labels:
        app: invoice
        version: 'v1'
    spec:
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
            - podAffinityTerm:
                labelSelector:
                  matchExpressions:
                    - key: app
                      operator: In
                      values:
                        - invoice
                topologyKey: kubernetes.io/hostname
              weight: 100
      initContainers:
        - name: init-ds
          image: busybox:latest
          command:
            - '/bin/sh'
            - '-c'
            - |
              while true
              do
                rt=$(nc -z -w 1 invoice-mysql 3306)
                if [ $? -eq 0 ]; then
                  echo "DB is UP"
                  break
                fi
                echo "DB is not yet reachable;sleep for 10s before retry"
                sleep 10
              done
      containers:
        - name: invoice-app
          image: docker.pkg.github.com/morsi84/kubernetes/invoice
          env:
            - name: SPRING_PROFILES_ACTIVE
              value: prod
            - name: SPRING_CLOUD_CONFIG_URI
              value: http://admin:${jhipster.registry.password}#jhipster-registry.Jhipster.svc.cluster.local:8761/config
            - name: JHIPSTER_REGISTRY_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: registry-secret
                  key: registry-admin-password
            - name: EUREKA_CLIENT_SERVICE_URL_DEFAULTZONE
              value: http://admin:${jhipster.registry.password}#jhipster-registry.Jhipster.svc.cluster.local:8761/eureka/
            - name: SPRING_DATASOURCE_URL
              value: jdbc:mysql://invoice-mysql.Jhipster.svc.cluster.local:3306/invoice?useUnicode=true&characterEncoding=utf8&useSSL=false&useLegacyDatetimeCode=false&serverTimezone=UTC&createDatabaseIfNotExist=true
            - name: SPRING_SLEUTH_PROPAGATION_KEYS
              value: 'x-request-id,x-ot-span-context'
            - name: JAVA_OPTS
              value: ' -Xmx256m -Xms256m'
          resources:
            requests:
              memory: '512Mi'
              cpu: '500m'
            limits:
              memory: '1Gi'
              cpu: '1'
          ports:
            - name: http
              containerPort: 8081
          readinessProbe:
            httpGet:
              path: /management/health
              port: http
            initialDelaySeconds: 20
            periodSeconds: 15
            failureThreshold: 6
          livenessProbe:
            httpGet:
              path: /management/health
              port: http
            initialDelaySeconds: 120
      imagePullSecrets:
        - name: regcred
Here is the YAML of invoice-mysql:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: invoice-mysql
  namespace: jhipster
spec:
  replicas: 1
  selector:
    matchLabels:
      app: invoice-mysql
  template:
    metadata:
      labels:
        app: invoice-mysql
    spec:
      volumes:
        - name: data
          emptyDir: {}
      containers:
        - name: mysql
          image: mysql:8.0.20
          env:
            - name: MYSQL_USER
              value: root
            - name: MYSQL_ALLOW_EMPTY_PASSWORD
              value: 'yes'
            - name: MYSQL_DATABASE
              value: invoice
          args:
            - --lower_case_table_names=1
            - --skip-ssl
            - --character_set_server=utf8mb4
            - --explicit_defaults_for_timestamp
          ports:
            - containerPort: 3306
          volumeMounts:
            - name: data
              mountPath: /var/lib/mysql/
          resources:
            requests:
              memory: '512Mi'
              cpu: '500m'
            limits:
              memory: '1Gi'
              cpu: '1'
---
apiVersion: v1
kind: Service
metadata:
  name: invoice-mysql
  namespace: jhipster
spec:
  selector:
    app: invoice-mysql
  ports:
    - port: 3306
Here is a description of the invoice-mysql service:
Name: invoice-mysql
Namespace: jhipster
Labels: <none>
Annotations: <none>
Selector: app=invoice-mysql
Type: ClusterIP
IP: 10.98.220.110
Port: <unset> 3306/TCP
TargetPort: 3306/TCP
Endpoints: 10.244.1.66:3306
Session Affinity: None
Events: <none>
Working environment / faulty environment: (screenshots omitted)
Can you help me, please?

The problem was in fact due to the firewall on my CentOS hosts; I disabled the firewall and the names started resolving.
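If disabling the firewall entirely is not an option, here is a rough sketch of the openings that are usually needed on CentOS (assuming firewalld and a VXLAN-based CNI such as Flannel; the exact ports depend on your CNI plugin):

sudo firewall-cmd --permanent --add-port=53/tcp --add-port=53/udp   # DNS to CoreDNS
sudo firewall-cmd --permanent --add-port=8472/udp                   # VXLAN overlay (Flannel default; assumption)
sudo firewall-cmd --permanent --add-port=10250/tcp                  # kubelet
sudo firewall-cmd --permanent --add-masquerade
sudo firewall-cmd --reload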

I think you should use a full service domain instead of invoice-mysql. Can you try using invoice-mysql.Jhipster.svc.cluster.local instead?
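To see how the name actually resolves from inside the cluster, a quick test from a throwaway pod in the same namespace might look like this (a sketch; busybox:1.28 is used because nslookup in newer busybox images is known to misbehave, and the pod name dns-test is arbitrary):

kubectl run dns-test --rm -it --restart=Never -n jhipster --image=busybox:1.28 -- nslookup invoice-mysql
kubectl run dns-test --rm -it --restart=Never -n jhipster --image=busybox:1.28 -- nslookup invoice-mysql.jhipster.svc.cluster.local

If the short name fails but the fully qualified name works, the pod's DNS search path is the problem; if both fail, that points at cluster DNS (or, as above, node firewalling) rather than the Service itself.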

Related

K8s deployment of Minio: how to access the console?

How do I access the Minio console?
minio.yaml
apiVersion: v1
kind: Service
metadata:
  name: minio
  labels:
    app: minio
spec:
  clusterIP: None
  ports:
    - port: 9000
      name: minio
  selector:
    app: minio
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: minio
spec:
  serviceName: minio
  replicas: 4
  selector:
    matchLabels:
      app: minio
  template:
    metadata:
      labels:
        app: minio
    spec:
      terminationGracePeriodSeconds: 20
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
            - weight: 100
              podAffinityTerm:
                labelSelector:
                  matchExpressions:
                    - key: app
                      operator: In
                      values:
                        - minio
                topologyKey: kubernetes.io/hostname
      containers:
        - name: minio
          env:
            - name: MINIO_ACCESS_KEY
              value: "hengshi"
            - name: MINIO_SECRET_KEY
              value: "hengshi202020"
          image: minio/minio:RELEASE.2018-08-02T23-11-36Z
          args:
            - server
            - http://minio-0.minio-internal.cts-cernerdevtools-minio.svc.cluster.local/data/
            - http://minio-1.minio-internal.cts-cernerdevtools-minio.svc.cluster.local/data/
            - http://minio-2.minio-internal.cts-cernerdevtools-minio.svc.cluster.local/data/
            - http://minio-3.minio-internal.cts-cernerdevtools-minio.svc.cluster.local/data/
          ports:
            - containerPort: 9000
            - containerPort: 9001
          volumeMounts:
            - name: minio-data
              mountPath: /data
  volumeClaimTemplates:
    - metadata:
        name: minio-data
      spec:
        accessModes:
          - ReadWriteMany
        resources:
          requests:
            storage: 300M
---
apiVersion: v1
kind: Service
metadata:
  name: minio-service
spec:
  type: NodePort
  ports:
    - name: server-port
      port: 9000
      targetPort: 9000
      protocol: TCP
      nodePort: 30009
    - name: console-port
      port: 9001
      targetPort: 9001
      protocol: TCP
      nodePort: 30010
  selector:
    app: minio
curl http://NodeIP:30010 fails.
I tried the container args --console-address ":9001" and the env variable MINIO_BROWSER, but the console is still not accessible.
One more question: what are the startup parameters for the latest Minio image? There seems to be something wrong with my args.
You can specify --console-address :9001 in the args: section of your deployment.yaml, as shown below.
args:
  - server
  - --console-address
  - :9001
  - /data
In the same way, your Service and Ingress need to point to port 9001 now with the latest Minio.
ports:
  - protocol: TCP
    port: 9001
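Alternatively, a port-forward is a quick way to verify that the console is actually listening, without going through the NodePort (a sketch; the Service and port names follow the manifests above):

kubectl port-forward svc/minio-service 9001:9001
# then open http://localhost:9001 in a browser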

Zonal network endpoint group unhealthy even though the container application is working properly

I've created a Kubernetes cluster on Google Cloud, and even though the application is running properly (which I've verified by sending requests from inside the cluster), the NEG health check does not seem to be working. Any ideas on the cause?
I've tried changing the service from NodePort to LoadBalancer and different ways of adding annotations to the service. I was also thinking that it might be related to the HTTPS requirement on the Django side.
# [START kubernetes_deployment]
apiVersion: apps/v1
kind: Deployment
metadata:
  name: moner-app
  labels:
    app: moner-app
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: moner-app
  template:
    metadata:
      labels:
        app: moner-app
    spec:
      containers:
        - name: moner-core-container
          image: my-template
          imagePullPolicy: Always
          resources:
            requests:
              memory: "128Mi"
            limits:
              memory: "512Mi"
          startupProbe:
            httpGet:
              path: /ht/
              port: 5000
              httpHeaders:
                - name: "X-Forwarded-Proto"
                  value: "https"
            failureThreshold: 30
            timeoutSeconds: 10
            periodSeconds: 10
            initialDelaySeconds: 90
          readinessProbe:
            initialDelaySeconds: 120
            httpGet:
              path: "/ht/"
              port: 5000
              httpHeaders:
                - name: "X-Forwarded-Proto"
                  value: "https"
            periodSeconds: 10
            failureThreshold: 3
            timeoutSeconds: 10
          livenessProbe:
            initialDelaySeconds: 30
            failureThreshold: 3
            periodSeconds: 30
            timeoutSeconds: 10
            httpGet:
              path: "/ht/"
              port: 5000
              httpHeaders:
                - name: "X-Forwarded-Proto"
                  value: "https"
          volumeMounts:
            - name: cloudstorage-credentials
              mountPath: /secrets/cloudstorage
              readOnly: true
          env:
            # [START_secrets]
            - name: THIS_POD_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
            - name: GRACEFUL_TIMEOUT
              value: '120'
            - name: GUNICORN_HARD_TIMEOUT
              value: '90'
            - name: DJANGO_ALLOWED_HOSTS
              value: '*,$(THIS_POD_IP),0.0.0.0'
          ports:
            - containerPort: 5000
          args: ["/start"]
        # [START proxy_container]
        - image: gcr.io/cloudsql-docker/gce-proxy:1.16
          name: cloudsql-proxy
          command: ["/cloud_sql_proxy", "--dir=/cloudsql",
                    "-instances=moner-dev:us-east1:core-db=tcp:5432",
                    "-credential_file=/secrets/cloudsql/credentials.json"]
          resources:
            requests:
              memory: "64Mi"
            limits:
              memory: "128Mi"
          volumeMounts:
            - name: cloudsql-oauth-credentials
              mountPath: /secrets/cloudsql
              readOnly: true
            - name: ssl-certs
              mountPath: /etc/ssl/certs
            - name: cloudsql
              mountPath: /cloudsql
        # [END proxy_container]
      # [START volumes]
      volumes:
        - name: cloudsql-oauth-credentials
          secret:
            secretName: cloudsql-oauth-credentials
        - name: ssl-certs
          hostPath:
            path: /etc/ssl/certs
        - name: cloudsql
          emptyDir: {}
        - name: cloudstorage-credentials
          secret:
            secretName: cloudstorage-credentials
      # [END volumes]
# [END kubernetes_deployment]
---
# [START service]
apiVersion: v1
kind: Service
metadata:
  name: moner-svc
  annotations:
    cloud.google.com/neg: '{"ingress": true, "exposed_ports": {"5000":{}}}' # Creates an NEG after an Ingress is created
    cloud.google.com/backend-config: '{"default": "moner-backendconfig"}'
  labels:
    app: moner-svc
spec:
  type: NodePort
  ports:
    - name: moner-core-http
      port: 5000
      protocol: TCP
      targetPort: 5000
  selector:
    app: moner-app
# [END service]
---
# [START certificates_setup]
apiVersion: networking.gke.io/v1
kind: ManagedCertificate
metadata:
  name: managed-cert
spec:
  domains:
    - domain.com
    - app.domain.com
# [END certificates_setup]
---
apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: moner-backendconfig
spec:
  customRequestHeaders:
    headers:
      - "X-Forwarded-Proto:https"
  healthCheck:
    checkIntervalSec: 15
    port: 5000
    type: HTTP
    requestPath: /ht/
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: managed-cert-ingress
  annotations:
    kubernetes.io/ingress.global-static-ip-name: moner-ssl
    networking.gke.io/managed-certificates: managed-cert
    kubernetes.io/ingress.class: "gce"
spec:
  defaultBackend:
    service:
      name: moner-svc
      port:
        name: moner-core-http
Apparently, you didn't have a GCP firewall rule allowing traffic on port 5000 to your GKE nodes. Creating an ingress firewall rule with source IP range 0.0.0.0/0 and port TCP 5000, targeted at your GKE nodes, could allow your setup to work even on port 5000.
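A sketch of such a rule with gcloud (the rule name and network are placeholders; Google's documented health-check source ranges 130.211.0.0/22 and 35.191.0.0/16 can be used instead of 0.0.0.0/0 to keep the rule tighter):

gcloud compute firewall-rules create allow-neg-health-check-5000 \
    --network=default \
    --direction=INGRESS \
    --action=ALLOW \
    --rules=tcp:5000 \
    --source-ranges=130.211.0.0/22,35.191.0.0/16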
I'm still not sure why, but I've managed to make it work by moving the service to port 80 and keeping the health check on 5000.
Service config:
# [START service]
apiVersion: v1
kind: Service
metadata:
  name: moner-svc
  annotations:
    cloud.google.com/neg: '{"ingress": true, "exposed_ports": {"5000":{}}}' # Creates an NEG after an Ingress is created
    cloud.google.com/backend-config: '{"default": "moner-backendconfig"}'
  labels:
    app: moner-svc
spec:
  type: NodePort
  ports:
    - name: moner-core-http
      port: 80
      protocol: TCP
      targetPort: 5000
  selector:
    app: moner-app
# [END service]
Backend config:
apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: moner-backendconfig
spec:
  customRequestHeaders:
    headers:
      - "X-Forwarded-Proto:https"
  healthCheck:
    checkIntervalSec: 15
    port: 5000
    type: HTTP
    requestPath: /ht/

HPA showing unknown in k8s

I configured HPA using the command shown below:
kubectl autoscale deployment isamruntime-v1 --cpu-percent=20 --min=1 --max=3 --namespace=default
horizontalpodautoscaler.autoscaling/isamruntime-v1 autoscaled
However, the HPA cannot identify the CPU load.
pranam#UNKNOWN kubernetes % kubectl get hpa
NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE
isamruntime-v1 Deployment/isamruntime-v1 <unknown>/20% 1 3 0 3s
I read a number of articles that suggested installing the metrics server, so I did that.
pranam#UNKNOWN kubernetes % kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/download/v0.3.6/components.yaml
clusterrole.rbac.authorization.k8s.io/system:aggregated-metrics-reader created
clusterrolebinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator configured
rolebinding.rbac.authorization.k8s.io/metrics-server-auth-reader configured
apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io configured
serviceaccount/metrics-server configured
deployment.apps/metrics-server configured
service/metrics-server configured
clusterrole.rbac.authorization.k8s.io/system:metrics-server configured
clusterrolebinding.rbac.authorization.k8s.io/system:metrics-server configured
I can see the metrics server.
pranam#UNKNOWN kubernetes % kubectl get pods -o wide --namespace=kube-system
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
calico-kube-controllers-7d88b45844-lz8zw 1/1 Running 0 22d 10.164.27.28 10.164.27.28 <none> <none>
calico-node-bsx6p 1/1 Running 0 8d 10.164.27.39 10.164.27.39 <none> <none>
calico-node-g229m 1/1 Running 0 8d 10.164.27.46 10.164.27.46 <none> <none>
calico-node-slwrh 1/1 Running 0 22d 10.164.27.28 10.164.27.28 <none> <none>
calico-node-tztjg 1/1 Running 0 8d 10.164.27.44 10.164.27.44 <none> <none>
coredns-7d6bb98ccc-d8nrs 1/1 Running 0 25d 172.30.93.205 10.164.27.28 <none> <none>
coredns-7d6bb98ccc-n28dm 1/1 Running 0 25d 172.30.93.204 10.164.27.28 <none> <none>
coredns-7d6bb98ccc-zx5jx 1/1 Running 0 25d 172.30.93.197 10.164.27.28 <none> <none>
coredns-autoscaler-848db65fc6-lnfvf 1/1 Running 0 25d 172.30.93.201 10.164.27.28 <none> <none>
dashboard-metrics-scraper-576c46d9bd-k6z85 1/1 Running 0 25d 172.30.93.195 10.164.27.28 <none> <none>
ibm-file-plugin-7c57965855-494bz 1/1 Running 0 22d 172.30.93.216 10.164.27.28 <none> <none>
ibm-iks-cluster-autoscaler-7df84fb95c-fhtgv 1/1 Running 0 2d23h 172.30.137.98 10.164.27.46 <none> <none>
ibm-keepalived-watcher-9w4gb 1/1 Running 0 8d 10.164.27.39 10.164.27.39 <none> <none>
ibm-keepalived-watcher-ps5zm 1/1 Running 0 8d 10.164.27.46 10.164.27.46 <none> <none>
ibm-keepalived-watcher-rzxbs 1/1 Running 0 8d 10.164.27.44 10.164.27.44 <none> <none>
ibm-keepalived-watcher-w6mxb 1/1 Running 0 25d 10.164.27.28 10.164.27.28 <none> <none>
ibm-master-proxy-static-10.164.27.28 2/2 Running 0 25d 10.164.27.28 10.164.27.28 <none> <none>
ibm-master-proxy-static-10.164.27.39 2/2 Running 0 8d 10.164.27.39 10.164.27.39 <none> <none>
ibm-master-proxy-static-10.164.27.44 2/2 Running 0 8d 10.164.27.44 10.164.27.44 <none> <none>
ibm-master-proxy-static-10.164.27.46 2/2 Running 0 8d 10.164.27.46 10.164.27.46 <none> <none>
ibm-storage-watcher-67466b969f-ps55m 1/1 Running 0 22d 172.30.93.217 10.164.27.28 <none> <none>
kubernetes-dashboard-c6b4b9d77-27zwb 1/1 Running 2 22d 172.30.93.218 10.164.27.28 <none> <none>
metrics-server-79d847cf58-6frsf 2/2 Running 0 3m23s 172.30.93.226 10.164.27.28 <none> <none>
public-crbro6um6l04jalpqrsl5g-alb1-8465f75bb4-88vl5 4/4 Running 0 11h 172.30.93.225 10.164.27.28 <none> <none>
public-crbro6um6l04jalpqrsl5g-alb1-8465f75bb4-vx68d 4/4 Running 0 11h 172.30.137.104 10.164.27.46 <none> <none>
vpn-58b48cdc7c-4lp9c 1/1 Running 0 25d 172.30.93.193 10.164.27.28 <none> <none>
I am using Istio and Sysdig; I am not sure whether that breaks anything. My k8s versions are shown below.
pranam#UNKNOWN kubernetes % kubectl version
Client Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.7", GitCommit:"be3d344ed06bff7a4fc60656200a93c74f31f9a4", GitTreeState:"clean", BuildDate:"2020-02-11T19:34:02Z", GoVersion:"go1.13.6", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.7+IKS", GitCommit:"3305158dfe9ee1f89f596ef260135dcba881848c", GitTreeState:"clean", BuildDate:"2020-06-17T18:32:22Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"}
My YAML file is:
#Assumes create-docker-store-secret.sh used to create dockerlogin secret
#Assumes create-secrets.sh used to create key file, sam admin, and cfgsvc secrets
apiVersion: storage.k8s.io/v1beta1
# Create StorageClass with gidallocate=true to allow non-root user access to mount
# This is used by PostgreSQL container
kind: StorageClass
metadata:
name: ibmc-file-bronze-gid
labels:
kubernetes.io/cluster-service: "true"
provisioner: ibm.io/ibmc-file
parameters:
type: "Endurance"
iopsPerGB: "2"
sizeRange: "[1-12000]Gi"
mountOptions: nfsvers=4.1,hard
billingType: "hourly"
reclaimPolicy: "Delete"
classVersion: "2"
gidAllocate: "true"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: ldaplib
spec:
accessModes:
- ReadWriteMany
resources:
requests:
storage: 50M
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: ldapslapd
spec:
accessModes:
- ReadWriteMany
resources:
requests:
storage: 50M
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: ldapsecauthority
spec:
accessModes:
- ReadWriteMany
resources:
requests:
storage: 50M
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: postgresqldata
spec:
storageClassName: ibmc-file-bronze-gid
accessModes:
- ReadWriteMany
resources:
requests:
storage: 50M
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: isamconfig
spec:
accessModes:
- ReadWriteMany
resources:
requests:
storage: 50M
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: openldap
labels:
app: openldap
spec:
selector:
matchLabels:
app: openldap
replicas: 1
template:
metadata:
labels:
app: openldap
spec:
volumes:
- name: ldaplib
persistentVolumeClaim:
claimName: ldaplib
- name: ldapslapd
persistentVolumeClaim:
claimName: ldapslapd
- name: ldapsecauthority
persistentVolumeClaim:
claimName: ldapsecauthority
- name: openldap-keys
secret:
secretName: openldap-keys
containers:
- name: openldap
image: ibmcom/isam-openldap:9.0.7.0
ports:
- containerPort: 636
env:
- name: LDAP_DOMAIN
value: ibm.com
- name: LDAP_ADMIN_PASSWORD
value: Passw0rd
- name: LDAP_CONFIG_PASSWORD
value: Passw0rd
volumeMounts:
- mountPath: /var/lib/ldap
name: ldaplib
- mountPath: /etc/ldap/slapd.d
name: ldapslapd
- mountPath: /var/lib/ldap.secAuthority
name: ldapsecauthority
- mountPath: /container/service/slapd/assets/certs
name: openldap-keys
# This line is needed when running on Kubernetes 1.9.4 or above
args: [ "--copy-service"]
# useful for debugging startup issues - can run bash, then exec to the container and poke around
# command: [ "/bin/bash"]
# args: [ "-c", "while /bin/true ; do sleep 5; done" ]
# Just this line to get debug output from openldap startup
# args: [ "--loglevel" , "trace","--copy-service"]
---
# for external service access, see https://console.bluemix.net/docs/containers/cs_apps.html#cs_apps_public_nodeport
apiVersion: v1
kind: Service
metadata:
name: openldap
labels:
app: openldap
spec:
ports:
- port: 636
name: ldaps
protocol: TCP
selector:
app: openldap
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: postgresql
labels:
app: postgresql
spec:
selector:
matchLabels:
app: postgresql
replicas: 1
template:
metadata:
labels:
app: postgresql
spec:
securityContext:
runAsNonRoot: true
runAsUser: 70
fsGroup: 0
volumes:
- name: postgresqldata
persistentVolumeClaim:
claimName: postgresqldata
- name: postgresql-keys
secret:
secretName: postgresql-keys
containers:
- name: postgresql
image: ibmcom/isam-postgresql:9.0.7.0
ports:
- containerPort: 5432
env:
- name: POSTGRES_USER
value: postgres
- name: POSTGRES_PASSWORD
value: Passw0rd
- name: POSTGRES_DB
value: isam
- name: POSTGRES_SSL_KEYDB
value: /var/local/server.pem
- name: PGDATA
value: /var/lib/postgresql/data/db-files/
volumeMounts:
- mountPath: /var/lib/postgresql/data
name: postgresqldata
- mountPath: /var/local
name: postgresql-keys
# useful for debugging startup issues - can run bash, then exec to the container and poke around
# command: [ "/bin/bash"]
# args: [ "-c", "while /bin/true ; do sleep 5; done" ]
---
# for external service access, see https://console.bluemix.net/docs/containers/cs_apps.html#cs_apps_public_nodeport
apiVersion: v1
kind: Service
metadata:
name: postgresql
spec:
ports:
- port: 5432
name: postgresql
protocol: TCP
selector:
app: postgresql
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: isamconfig
labels:
app: isamconfig
spec:
selector:
matchLabels:
app: isamconfig
replicas: 1
template:
metadata:
labels:
app: isamconfig
spec:
securityContext:
runAsNonRoot: true
runAsUser: 6000
volumes:
- name: isamconfig
persistentVolumeClaim:
claimName: isamconfig
- name: isamconfig-logs
emptyDir: {}
containers:
- name: isamconfig
image: ibmcom/isam:9.0.7.1_IF4
volumeMounts:
- mountPath: /var/shared
name: isamconfig
- mountPath: /var/application.logs
name: isamconfig-logs
env:
- name: SERVICE
value: config
- name: CONTAINER_TIMEZONE
value: Europe/London
- name: ADMIN_PWD
valueFrom:
secretKeyRef:
name: samadmin
key: adminpw
readinessProbe:
tcpSocket:
port: 9443
initialDelaySeconds: 5
periodSeconds: 10
livenessProbe:
tcpSocket:
port: 9443
initialDelaySeconds: 120
periodSeconds: 20
# command: [ "/sbin/bootstrap.sh" ]
imagePullSecrets:
- name: dockerlogin
---
# for external service access, see https://console.bluemix.net/docs/containers/cs_apps.html#cs_apps_public_nodeport
apiVersion: v1
kind: Service
metadata:
name: isamconfig
spec:
# To make the LMI internet facing, make it a NodePort
type: NodePort
ports:
- port: 9443
name: isamconfig
protocol: TCP
# make this one statically allocated
nodePort: 30442
selector:
app: isamconfig
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: isamwrprp1-v1
labels:
app: isamwrprp1
spec:
selector:
matchLabels:
app: isamwrprp1
version: v1
replicas: 1
template:
metadata:
labels:
app: isamwrprp1
version: v1
spec:
securityContext:
runAsNonRoot: true
runAsUser: 6000
volumes:
- name: isamconfig
emptyDir: {}
- name: isamwrprp1-logs
emptyDir: {}
containers:
- name: isamwrprp1
image: ibmcom/isam:9.0.7.1_IF4
ports:
- containerPort: 443
volumeMounts:
- mountPath: /var/shared
name: isamconfig
- mountPath: /var/application.logs
name: isamwrprp1-logs
env:
- name: SERVICE
value: webseal
- name: INSTANCE
value: rp1
- name: CONTAINER_TIMEZONE
value: Europe/London
- name: AUTO_RELOAD_FREQUENCY
value: "5"
- name: CONFIG_SERVICE_URL
value: https://isamconfig:9443/shared_volume
- name: CONFIG_SERVICE_USER_NAME
value: cfgsvc
- name: CONFIG_SERVICE_USER_PWD
valueFrom:
secretKeyRef:
name: configreader
key: cfgsvcpw
livenessProbe:
exec:
command:
- /sbin/health_check.sh
- livenessProbe
initialDelaySeconds: 10
periodSeconds: 10
timeoutSeconds: 2
readinessProbe:
exec:
command:
- /sbin/health_check.sh
initialDelaySeconds: 10
periodSeconds: 10
timeoutSeconds: 2
imagePullSecrets:
- name: dockerlogin
---
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: isamwrprp1-v2
labels:
app: isamwrprp1
spec:
selector:
matchLabels:
app: isamwrprp1
version: v2
replicas: 1
template:
metadata:
labels:
app: isamwrprp1
version: v2
spec:
securityContext:
runAsNonRoot: true
runAsUser: 6000
volumes:
- name: isamconfig
emptyDir: {}
- name: isamwrprp1-logs
emptyDir: {}
containers:
- name: isamwrprp1
image: ibmcom/isam:9.0.7.1_IF4
ports:
- containerPort: 443
volumeMounts:
- mountPath: /var/shared
name: isamconfig
- mountPath: /var/application.logs
name: isamwrprp1-logs
env:
- name: SERVICE
value: webseal
- name: INSTANCE
value: rp1
- name: CONTAINER_TIMEZONE
value: Europe/London
- name: AUTO_RELOAD_FREQUENCY
value: "5"
- name: CONFIG_SERVICE_URL
value: https://isamconfig:9443/shared_volume
- name: CONFIG_SERVICE_USER_NAME
value: cfgsvc
- name: CONFIG_SERVICE_USER_PWD
valueFrom:
secretKeyRef:
name: configreader
key: cfgsvcpw
livenessProbe:
exec:
command:
- /sbin/health_check.sh
- livenessProbe
initialDelaySeconds: 10
periodSeconds: 10
timeoutSeconds: 2
readinessProbe:
exec:
command:
- /sbin/health_check.sh
initialDelaySeconds: 10
periodSeconds: 10
timeoutSeconds: 2
imagePullSecrets:
- name: dockerlogin
---
# for external service access, see https://console.bluemix.net/docs/containers/cs_apps.html#cs_apps_public_nodeport
apiVersion: v1
kind: Service
metadata:
name: isamwrprp1
spec:
type: NodePort
sessionAffinity: ClientIP
ports:
- port: 443
name: isamwrprp1
protocol: TCP
nodePort: 30443
selector:
app: isamwrprp1
---
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: isamwrpmobile
labels:
app: isamwrpmobile
spec:
selector:
matchLabels:
app: isamwrpmobile
replicas: 1
template:
metadata:
labels:
app: isamwrpmobile
spec:
securityContext:
runAsNonRoot: true
runAsUser: 6000
volumes:
- name: isamconfig
emptyDir: {}
- name: isamwrpmobile-logs
emptyDir: {}
containers:
- name: isamwrpmobile
image: ibmcom/isam:9.0.7.1_IF4
ports:
- containerPort: 443
volumeMounts:
- mountPath: /var/shared
name: isamconfig
- mountPath: /var/application.logs
name: isamwrpmobile-logs
env:
- name: SERVICE
value: webseal
- name: INSTANCE
value: mobile
- name: CONTAINER_TIMEZONE
value: Europe/London
- name: AUTO_RELOAD_FREQUENCY
value: "5"
- name: CONFIG_SERVICE_URL
value: https://isamconfig:9443/shared_volume
- name: CONFIG_SERVICE_USER_NAME
value: cfgsvc
- name: CONFIG_SERVICE_USER_PWD
valueFrom:
secretKeyRef:
name: configreader
key: cfgsvcpw
livenessProbe:
exec:
command:
- /sbin/health_check.sh
- livenessProbe
initialDelaySeconds: 10
periodSeconds: 10
timeoutSeconds: 2
readinessProbe:
exec:
command:
- /sbin/health_check.sh
initialDelaySeconds: 10
periodSeconds: 10
timeoutSeconds: 2
imagePullSecrets:
- name: dockerlogin
---
# for external service access, see https://console.bluemix.net/docs/containers/cs_apps.html#cs_apps_public_nodeport
apiVersion: v1
kind: Service
metadata:
name: isamwrpmobile
spec:
type: NodePort
sessionAffinity: ClientIP
ports:
- port: 443
name: isamwrpmobile
protocol: TCP
nodePort: 30444
selector:
app: isamwrpmobile
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: isamruntime-v1
labels:
app: isamruntime
spec:
selector:
matchLabels:
app: isamruntime
version: v1
replicas: 1
template:
metadata:
labels:
app: isamruntime
version: v1
spec:
securityContext:
runAsNonRoot: true
runAsUser: 6000
volumes:
- name: isamconfig
emptyDir: {}
- name: isamruntime-logs
emptyDir: {}
containers:
- name: isamruntime
image: ibmcom/isam:9.0.7.1_IF4
ports:
- containerPort: 443
volumeMounts:
- mountPath: /var/shared
name: isamconfig
- mountPath: /var/application.logs
name: isamruntime-logs
env:
- name: SERVICE
value: runtime
- name: CONTAINER_TIMEZONE
value: Europe/London
- name: AUTO_RELOAD_FREQUENCY
value: "5"
- name: CONFIG_SERVICE_URL
value: https://isamconfig:9443/shared_volume
- name: CONFIG_SERVICE_USER_NAME
value: cfgsvc
- name: CONFIG_SERVICE_USER_PWD
valueFrom:
secretKeyRef:
name: configreader
key: cfgsvcpw
livenessProbe:
exec:
command:
- /sbin/health_check.sh
- livenessProbe
initialDelaySeconds: 10
periodSeconds: 10
timeoutSeconds: 2
readinessProbe:
exec:
command:
- /sbin/health_check.sh
initialDelaySeconds: 10
periodSeconds: 10
timeoutSeconds: 2
imagePullSecrets:
- name: dockerlogin
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: isamruntime-v2
labels:
app: isamruntime
spec:
selector:
matchLabels:
app: isamruntime
version: v2
replicas: 1
template:
metadata:
labels:
app: isamruntime
version: v2
spec:
securityContext:
runAsNonRoot: true
runAsUser: 6000
volumes:
- name: isamconfig
emptyDir: {}
- name: isamruntime-logs
emptyDir: {}
containers:
- name: isamruntime
image: ibmcom/isam:9.0.7.1_IF4
ports:
- containerPort: 443
volumeMounts:
- mountPath: /var/shared
name: isamconfig
- mountPath: /var/application.logs
name: isamruntime-logs
env:
- name: SERVICE
value: runtime
- name: CONTAINER_TIMEZONE
value: Europe/London
- name: AUTO_RELOAD_FREQUENCY
value: "5"
- name: CONFIG_SERVICE_URL
value: https://isamconfig:9443/shared_volume
- name: CONFIG_SERVICE_USER_NAME
value: cfgsvc
- name: CONFIG_SERVICE_USER_PWD
valueFrom:
secretKeyRef:
name: configreader
key: cfgsvcpw
livenessProbe:
exec:
command:
- /sbin/health_check.sh
- livenessProbe
initialDelaySeconds: 10
periodSeconds: 10
timeoutSeconds: 2
readinessProbe:
exec:
command:
- /sbin/health_check.sh
initialDelaySeconds: 10
periodSeconds: 10
timeoutSeconds: 2
imagePullSecrets:
- name: dockerlogin
---
apiVersion: v1
kind: Service
metadata:
name: isamruntime
spec:
ports:
- port: 443
name: isamruntime
protocol: TCP
selector:
app: isamruntime
---
I am not sure why the CPU load is shown as unknown. Have I missed a step or made a mistake? Can someone help?
Regards,
Pranam
Based on the issue shown, it appears that you have not set the resource requests and limits in the deployment.yaml file.
If you run kubectl explain deployment, you will see in the container spec:
resources:
  limits:
    cpu:
    memory:
  requests:
    cpu:
    memory:
If you add values for the keys mentioned above, the HPA issue should be resolved.
Did you specify the resources block when you defined your app's deployment?
I don't remember where it was stated, but I encountered this case once when I forgot that.
More information about managing resources for containers: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
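For example, a minimal sketch of what the runtime container in the question could declare (the CPU and memory numbers are placeholders, not a recommendation):

containers:
  - name: isamruntime
    image: ibmcom/isam:9.0.7.1_IF4
    resources:
      requests:
        cpu: 100m
        memory: 512Mi
      limits:
        cpu: 500m
        memory: 1Gi

Once every container in the targeted Deployment declares a CPU request and the metrics pipeline is serving data (kubectl top pods should return numbers), the TARGETS column of kubectl get hpa should change from <unknown> to an actual percentage.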

Unable to update kubernetes container /etc/hosts file using hostAliases

I am trying to update a Kubernetes container's /etc/hosts file through hostAliases in a Deployment; however, /etc/hosts is not updated even though the deployment is successful.
I cannot make out what is preventing hostAliases from being applied. Please also suggest any alternative way to update the /etc/hosts entries of a Kubernetes container.
Here is the deployment.yml file I am using; I appreciate your help:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: test-services
  namespace: dev
  labels:
    app: test-services
spec:
  replicas: 1
  revisionHistoryLimit: 0
  selector:
    matchLabels:
      app: test-services
  template:
    metadata:
      labels:
        app: test-services
    spec:
      containers:
        - name: test-services
          image: test-services:latest
          imagePullPolicy: Always
          ports:
            - containerPort: 8080
          readinessProbe:
            httpGet:
              path: /health
              port: 8080
            initialDelaySeconds: 30
            periodSeconds: 5
          livenessProbe:
            httpGet:
              path: /health
              port: 8080
            initialDelaySeconds: 60
            periodSeconds: 5
          env:
            - name: JAVA_OPTS
              value: "-Dspring.profiles.active=dev"
            - name: DB_USER
              valueFrom:
                secretKeyRef:
                  name: db-credentials
                  key: username
            - name: DB_PASS
              valueFrom:
                secretKeyRef:
                  name: db-credentials
                  key: password
      restartPolicy: Always
      hostAliases:
        - ip: "127.0.0.1"
          hostnames:
            - "foo.local"
            - "bar.local"
        - ip: "10.1.2.3"
          hostnames:
            - "foo.remote"
            - "bar.remote"
      imagePullSecrets:
        - name: registry-secret
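One way to see what actually ends up in the running container (assuming the pod is up) is to read the file directly; entries added through hostAliases are appended by the kubelet under a "Kubernetes-managed hosts file" comment:

kubectl -n dev exec deploy/test-services -- cat /etc/hosts

With an older kubectl that cannot target a Deployment directly, use the pod name from kubectl -n dev get pods instead. Note also that hostAliases only applies to pods that do not use hostNetwork.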

Kubernetes cannot access Cassandra within the same namespace

I get
All host(s) tried for query failed (tried: 10.244.0.72/10.244.0.72:9042 (com.datastax.driver.core.exceptions.TransportException: [10.244.0.72/10.244.0.72:9042] Channel has been closed))
when trying to access Cassandra within the same namespace. However, when I forward ports it works fine from localhost, and the keyspace is created successfully.
kubectl port-forward cassandra1-0 9042:9042
My YAML:
apiVersion: v1
kind: Service
metadata:
  name: cassandra1
  labels:
    app: cassandra1
spec:
  ports:
    - name: "cql"
      protocol: "TCP"
      port: 9042
      targetPort: 9042
    - name: "thrift"
      protocol: "TCP"
      port: 9160
      targetPort: 9160
  selector:
    app: cassandra1
  type: NodePort
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: cassandra1
  labels:
    app: cassandra1
spec:
  serviceName: cassandra1
  replicas: 1
  selector:
    matchLabels:
      app: cassandra1
  template:
    metadata:
      labels:
        app: cassandra1
    spec:
      terminationGracePeriodSeconds: 1800
      containers:
        - name: cassandra1
          image: gcr.io/google-samples/cassandra:v13
          imagePullPolicy: Always
          ports:
            - containerPort: 7000
              name: intra-node
            - containerPort: 7001
              name: tls-intra-node
            - containerPort: 7199
              name: jmx
            - containerPort: 9042
              name: cql
            - containerPort: 9160
              name: thrift
          resources:
            limits:
              cpu: "500m"
              memory: 1Gi
            requests:
              cpu: "500m"
              memory: 1Gi
          securityContext:
            capabilities:
              add:
                - IPC_LOCK
          lifecycle:
            preStop:
              exec:
                command:
                  - /bin/sh
                  - -c
                  - nodetool drain
          env:
            - name: MAX_HEAP_SIZE
              value: 512M
            - name: HEAP_NEWSIZE
              value: 100M
            - name: CASSANDRA_SEEDS
              value: "cassandra1-0.cassandra1.default.svc.cluster.local"
            - name: CASSANDRA_CLUSTER_NAME
              value: "cassandra1"
            - name: CASSANDRA_DC
              value: "DC1-cassandra1"
            - name: CASSANDRA_RACK
              value: "Rack1-cassandra1"
            - name: POD_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
          readinessProbe:
            exec:
              command:
                - /bin/bash
                - -c
                - /ready-probe.sh
            initialDelaySeconds: 15
            timeoutSeconds: 5
          # These volume mounts are persistent. They are like inline claims,
          # but not exactly because the names need to match exactly one of
          # the stateful pod volumes.
          volumeMounts:
            - name: cassandra1-data
              mountPath: /cassandra1_data
  volumeClaimTemplates:
    - metadata:
        name: cassandra1-data
        namespace: default
      spec:
        accessModes: [ "ReadWriteOnce" ]
        resources:
          requests:
            storage: 1Gi
Cassandra starts with the following properties:
Starting Cassandra on 10.244.0.72
CASSANDRA_CONF_DIR /etc/cassandra
CASSANDRA_CFG /etc/cassandra/cassandra.yaml
CASSANDRA_AUTO_BOOTSTRAP true
CASSANDRA_BROADCAST_ADDRESS 10.244.0.72
CASSANDRA_BROADCAST_RPC_ADDRESS 10.244.0.72
CASSANDRA_CLUSTER_NAME cassandra1
CASSANDRA_COMPACTION_THROUGHPUT_MB_PER_SEC
CASSANDRA_CONCURRENT_COMPACTORS
CASSANDRA_CONCURRENT_READS
CASSANDRA_CONCURRENT_WRITES
CASSANDRA_COUNTER_CACHE_SIZE_IN_MB
CASSANDRA_DC DC1-cassandra1
CASSANDRA_DISK_OPTIMIZATION_STRATEGY ssd
CASSANDRA_ENDPOINT_SNITCH SimpleSnitch
CASSANDRA_GC_WARN_THRESHOLD_IN_MS
CASSANDRA_INTERNODE_COMPRESSION
CASSANDRA_KEY_CACHE_SIZE_IN_MB
CASSANDRA_LISTEN_ADDRESS 10.244.0.72
CASSANDRA_LISTEN_INTERFACE
CASSANDRA_MEMTABLE_ALLOCATION_TYPE
CASSANDRA_MEMTABLE_CLEANUP_THRESHOLD
CASSANDRA_MEMTABLE_FLUSH_WRITERS
CASSANDRA_MIGRATION_WAIT 1
CASSANDRA_NUM_TOKENS 32
CASSANDRA_RACK Rack1-cassandra1
CASSANDRA_RING_DELAY 30000
CASSANDRA_RPC_ADDRESS 0.0.0.0
CASSANDRA_RPC_INTERFACE
CASSANDRA_SEEDS cassandra1-0.cassandra1.default.svc.cluster.local
CASSANDRA_SEED_PROVIDER org.apache.cassandra.locator.SimpleSeedProvider
changed ownership of '/cassandra_data/data' from root to cassandra
changed ownership of '/cassandra_data' from root to cassandra
In my application, which runs in the same namespace, I tried setting the Cassandra port to 9042 and the host to:
10.240.0.4 (hostIP)
10.244.0.72 (podIP)
cassandra1 (name of the service)
cassandra1.default
cassandra1.default.svc.cluster.local
cassandra1-0.cassandra1.default.svc.cluster.local
_cql._tcp.cassandra1.default.svc.cluster.local
I also tried different Service types: headless, ClusterIP, NodePort.
Does anybody have ANY idea what is wrong, or what else I can try to get this to work?
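One more thing that can be checked is raw TCP reachability of the Service from another pod in the same namespace, independent of the DataStax driver (a sketch; the pod name nc-test and the busybox image are arbitrary):

kubectl run nc-test --rm -it --restart=Never --image=busybox:1.28 -- sh -c 'nc -z -w 2 cassandra1.default.svc.cluster.local 9042 && echo OPEN || echo CLOSED'

If that reports OPEN but the driver still fails with "Channel has been closed", the problem is more likely on the Cassandra side (for example the node still bootstrapping, or the broadcast RPC address) than in Kubernetes networking or the Service definition.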