"No appropriate python interpreter found. command terminated with exit code 1" - kubernetes

I am trying to set up replication in Cassandra. The Cassandra nodes are set up in Kubernetes, which I am running locally, and I am just a beginner. I am unable to run cqlsh on the Cassandra pods.
The configuration file for the pods is:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: cassandra
  labels:
    app: cassandra
spec:
  serviceName: cassandra
  replicas: 3
  selector:
    matchLabels:
      app: cassandra
  template:
    metadata:
      labels:
        app: cassandra
    spec:
      terminationGracePeriodSeconds: 1800
      containers:
      - name: cassandra
        image: gcr.io/google-samples/cassandra:v14
        imagePullPolicy: Always
        ports:
        - containerPort: 7000
          name: intra-node
        - containerPort: 7001
          name: tls-intra-node
        - containerPort: 7199
          name: jmx
        - containerPort: 9042
          name: cql
        resources:
          limits:
            cpu: "100m"
            memory: 1Gi
          requests:
            cpu: "100m"
            memory: 1Gi
        securityContext:
          capabilities:
            add:
            - IPC_LOCK
        lifecycle:
          preStop:
            exec:
              command:
              - /bin/sh
              - -c
              - nodetool drain
        env:
        - name: MAX_HEAP_SIZE
          value: 512M
        - name: HEAP_NEWSIZE
          value: 100M
        - name: CASSANDRA_SEEDS
          value: "cassandra-0.cassandra.default.svc.cluster.local"
        - name: CASSANDRA_CLUSTER_NAME
          value: "K8Demo"
        - name: CASSANDRA_DC
          value: "DC1-K8Demo"
        - name: CASSANDRA_RACK
          value: "Rack1-K8Demo"
        - name: POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        # readinessProbe:
        #   exec:
        #     command:
        #     - /bin/bash
        #     - -c
        #     - /ready-probe.sh
        #   initialDelaySeconds: 20
        #   timeoutSeconds: 10
        # These volume mounts are persistent. They are like inline claims,
        # but not exactly because the names need to match exactly one of
        # the stateful pod volumes.
        volumeMounts:
        - name: cassandra-data-storage
          mountPath: "/mnt/data"
      volumes:
      - name: cassandra-data-storage
        persistentVolumeClaim:
          claimName: cassandra-claim
  # These are converted to volume claims by the controller
  # and mounted at the paths mentioned above.
  # do not use these in production until ssd GCEPersistentDisk or other ssd pd
  # volumeClaimTemplates:
  # - metadata:
  #     name: cassandra-claim
  #   spec:
  #     storageClassName: localstorage
  #     accessModes: ["ReadWriteOnce"]
  #     resources:
  #       requests:
  #         storage: 1Gi
  #     selector:
  #       matchLabels:
  #         type: local
---
kind: PersistentVolume
apiVersion: v1
metadata:
  name: cassandra-data-volume
  labels:
    type: local
spec:
  storageClassName: local-storage
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/data"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: cassandra-claim
spec:
  storageClassName: local-storage
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
Details of pods running & the error:
ubuntu@ubuntu-VirtualBox:~/cassandra$ k get all
NAME READY STATUS RESTARTS AGE
pod/nginx-sample-9456bbbf9-9v78w 1/1 Running 2 (40m ago) 18h
pod/nginx-sample-9456bbbf9-4kdq5 1/1 Running 2 (40m ago) 18h
pod/nginx-sample-9456bbbf9-4nc8d 1/1 Running 2 (40m ago) 18h
pod/cassandra-1 1/1 Running 0 12m
pod/cassandra-2 1/1 Running 0 12m
pod/cassandra-0 1/1 Running 0 12m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kubernetes ClusterIP 10.152.183.1 <none> 443/TCP 25d
service/cassandra ClusterIP None <none> 9042/TCP 170m
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/nginx-sample 3/3 3 3 18h
NAME DESIRED CURRENT READY AGE
replicaset.apps/nginx-sample-9456bbbf9 3 3 3 18h
NAME READY AGE
statefulset.apps/cassandra 3/3 12m
ubuntu@ubuntu-VirtualBox:~/cassandra$ kubectl exec --stdin --tty cassandra-0 -- cqlsh
No appropriate python interpreter found.
command terminated with exit code 1
ubuntu@ubuntu-VirtualBox:~/cassandra$ kubectl exec --stdin --tty cassandra-0 -- /bin/bash
root@cassandra-0:/# cqlsh
No appropriate python interpreter found.
root@cassandra-0:/#
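cqlsh is a Python script, so this error means the cqlsh wrapper cannot find a Python interpreter inside the container. A quick diagnostic sketch, assuming the image at least provides a POSIX shell:

kubectl exec cassandra-0 -- sh -c 'for p in python python2 python3; do command -v "$p" || echo "$p: not found"; done'

If none of these are found, cqlsh cannot run inside this image as-is.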

Related

Cannot create private Kubernetes registry

I want to create a private Kubernetes registry from this tutorial: https://www.linuxtechi.com/setup-private-docker-registry-kubernetes/
I implemented this:
Generate Self-Signed Certificate
cd /opt
sudo mkdir certs
cd certs
sudo touch registry.key
cd /opt
sudo openssl req -newkey rsa:4096 -nodes -sha256 -keyout \
./certs/registry.key -x509 -days 365 -out ./certs/registry.crt
ls -l certs/
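To sanity-check the generated certificate before pointing the registry at it, something like this should work:

sudo openssl x509 -in /opt/certs/registry.crt -noout -subject -dates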
Create registry folder
cd /opt
mkdir registry
Copy-paste private-registry.yaml into /opt/registry
apiVersion: apps/v1
kind: Deployment
metadata:
  name: private-repository-k8s
  labels:
    app: private-repository-k8s
spec:
  replicas: 1
  selector:
    matchLabels:
      app: private-repository-k8s
  template:
    metadata:
      labels:
        app: private-repository-k8s
    spec:
      volumes:
        - name: certs-vol
          hostPath:
            path: /opt/certs
            type: Directory
        - name: registry-vol
          hostPath:
            path: /opt/registry
            type: Directory
      containers:
        - image: registry:2
          name: private-repository-k8s
          imagePullPolicy: IfNotPresent
          env:
            - name: REGISTRY_HTTP_TLS_CERTIFICATE
              value: "/certs/registry.crt"
            - name: REGISTRY_HTTP_TLS_KEY
              value: "/certs/registry.key"
          ports:
            - containerPort: 5000
          volumeMounts:
            - name: certs-vol
              mountPath: /certs
            - name: registry-vol
              mountPath: /var/lib/registry
kubernetes@kubernetes1:/opt/registry$ kubectl create -f private-registry.yaml
deployment.apps/private-repository-k8s created
kubernetes@kubernetes1:/opt/registry$ kubectl get deployments private-repository-k8s
NAME READY UP-TO-DATE AVAILABLE AGE
private-repository-k8s 0/1 1 0 12s
kubernetes@kubernetes1:/opt/registry$
I have the following questions:
1. I have a control plane and two worker nodes. Is it possible to have a folder located only on the control plane under /opt/registry and deploy images to all worker nodes without using shared folders?
2. As a more resilient alternative: is it possible to have a folder under /opt/registry on the control plane and on all worker nodes, and deploy images to all worker nodes without manually creating shared folders? I want Kubernetes to manage replication of the repository across all nodes, i.e. the data in /opt/registry should be synchronized automatically by Kubernetes.
3. Do you know how I can debug this configuration? As you can see, the pod is not starting.
EDIT: Log file:
kubernetes@kubernetes1:/opt/registry$ kubectl logs private-repository-k8s-6ddbcd9c45-s6dfq
Error from server (BadRequest): container "private-repository-k8s" in pod "private-repository-k8s-6ddbcd9c45-s6dfq" is waiting to start: ContainerCreating
kubernetes@kubernetes1:/opt/registry$
Attempt 2:
I tried this configuration, deployed from the control plane:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv1
spec:
  capacity:
    storage: 256Mi  # specify your own size
  volumeMode: Filesystem
  persistentVolumeReclaimPolicy: Retain
  local:
    path: /opt/registry  # can be any path
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:  # specify the node label which maps to your control-plane node.
        - key: kubernetes1
          operator: In
          values:
          - controlplane-1
  accessModes:
    - ReadWriteOnce  # only 1 node will read/write on the path.
  # - ReadWriteMany  # multiple nodes will read/write on the path
Note: the control-plane hostname is kubernetes1, so I changed the value in the configuration above. I get this:
kubernetes@kubernetes1:~$ cd /opt/registry
kubernetes@kubernetes1:/opt/registry$ kubectl create -f private-registry1.yaml
persistentvolume/pv1 created
kubernetes@kubernetes1:/opt/registry$ kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
default private-repository-k8s-6ddbcd9c45-s6dfq 0/1 ContainerCreating 0 2d1h
kube-system calico-kube-controllers-58dbc876ff-dgs77 1/1 Running 4 (125m ago) 2d13h
kube-system calico-node-czmzc 1/1 Running 4 (125m ago) 2d13h
kube-system calico-node-q4lxz 1/1 Running 4 (125m ago) 2d13h
kube-system coredns-565d847f94-k94z2 1/1 Running 4 (125m ago) 2d13h
kube-system coredns-565d847f94-nt27m 1/1 Running 4 (125m ago) 2d13h
kube-system etcd-kubernetes1 1/1 Running 5 (125m ago) 2d13h
kube-system kube-apiserver-kubernetes1 1/1 Running 5 (125m ago) 2d13h
kube-system kube-controller-manager-kubernetes1 1/1 Running 5 (125m ago) 2d13h
kube-system kube-proxy-97djs 1/1 Running 5 (125m ago) 2d13h
kube-system kube-proxy-d8bzs 1/1 Running 4 (125m ago) 2d13h
kube-system kube-scheduler-kubernetes1 1/1 Running 5 (125m ago) 2d13h
kubernetes@kubernetes1:/opt/registry$ kubectl logs private-repository-k8s-6ddbcd9c45-s6dfq
Error from server (BadRequest): container "private-repository-k8s" in pod "private-repository-k8s-6ddbcd9c45-s6dfq" is waiting to start: ContainerCreating
Unfortunately, the pod is again not created.
For the first question, you can try creating a PersistentVolume with node affinity set to a specific control-plane node and tying it to the Deployment via a PersistentVolumeClaim. Here's an example:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv1
spec:
  capacity:
    storage: 256Mi  # specify your own size
  volumeMode: Filesystem
  persistentVolumeReclaimPolicy: Retain
  local:
    path: /opt/registry  # can be any path
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:  # specify the node label which maps to your control-plane node.
        - key: kubernetes.io/hostname
          operator: In
          values:
          - controlplane-1
  accessModes:
    - ReadWriteOnce  # only 1 node will read/write on the path.
  # - ReadWriteMany  # multiple nodes will read/write on the path

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pv1-claim
spec:  # should match specs added in the PersistentVolume
  accessModes:
    - ReadWriteOnce
  volumeMode: Filesystem
  resources:
    requests:
      storage: 256Mi

apiVersion: apps/v1
kind: Deployment
metadata:
  name: private-repository-k8s
  labels:
    app: private-repository-k8s
spec:
  replicas: 1
  selector:
    matchLabels:
      app: private-repository-k8s
  template:
    metadata:
      labels:
        app: private-repository-k8s
    spec:
      volumes:
        - name: task-pv-storage
          persistentVolumeClaim:
            claimName: pv1-claim  # specify the PVC that you've created. PVC and Deployment must be in same namespace.
      containers:
        - image: registry:2
          name: private-repository-k8s
          imagePullPolicy: IfNotPresent
          env:
            - name: REGISTRY_HTTP_TLS_CERTIFICATE
              value: "/certs/registry.crt"
            - name: REGISTRY_HTTP_TLS_KEY
              value: "/certs/registry.key"
          ports:
            - containerPort: 5000
          volumeMounts:
            - name: task-pv-storage
              mountPath: /opt/registry
For question # 2, can you share the logs of your pod?
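Since the pod is stuck in ContainerCreating, kubectl logs will not return anything useful yet; the scheduling and volume-mount events are usually more telling, for example:

kubectl describe pod private-repository-k8s-6ddbcd9c45-s6dfq
kubectl get events --sort-by=.metadata.creationTimestamp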
You can try with the following file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: private-repository-k8s
  labels:
    app: private-repository-k8s
spec:
  replicas: 1
  selector:
    matchLabels:
      app: private-repository-k8s
  template:
    metadata:
      labels:
        app: private-repository-k8s
    spec:
      volumes:
        - name: task-pv-storage
          persistentVolumeClaim:
            claimName: pv1-claim  # specify the PVC that you've created. PVC and Deployment must be in same namespace.
      containers:
        - image: registry:2
          name: private-repository-k8s
          imagePullPolicy: IfNotPresent
          env:
            - name: REGISTRY_HTTP_TLS_CERTIFICATE
              value: "/certs/registry.crt"
            - name: REGISTRY_HTTP_TLS_KEY
              value: "/certs/registry.key"
          ports:
            - containerPort: 5000
          volumeMounts:
            - name: task-pv-storage
              mountPath: /opt/registry
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pv1-claim
spec:  # should match specs added in the PersistentVolume
  accessModes:
    - ReadWriteMany
  volumeMode: Filesystem
  resources:
    requests:
      storage: 256Mi
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv1
spec:
  capacity:
    storage: 256Mi  # specify your own size
  volumeMode: Filesystem
  persistentVolumeReclaimPolicy: Retain
  local:
    path: /opt/registry  # can be any path
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:  # specify the node label which maps to your control-plane node.
        - key: kubernetes1
          operator: In
          values:
          - controlplane-1
  accessModes:
    - ReadWriteMany
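After applying the manifests, it is worth confirming that the claim actually binds before re-checking the pod; a quick sketch (the file name is just a placeholder for wherever you saved the manifests above):

kubectl apply -f private-registry-combined.yaml
kubectl get pv,pvc
kubectl get pods -l app=private-repository-k8s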

Kubernetes and mongo, PV, PVC

Hi, just a newbie question.
I managed (I think) to implement a PV and PVC for MongoDB. I'm using a local PV, not one in the cloud.
Is there a way to save the data across container restarts when Kubernetes runs on my PC?
I'm not sure I got this right, but what I need is to keep the Mongo data after a restart. What is the best way? (No MongoDB Atlas.)
UPDATE:
I managed to make the tickets service DB work great, but I have two other services where it just won't work. I updated the YAML files so you can see the current state. auth-mongo is set up just the same as tickets-mongo, so why won't it work?
The tickets-mongo-depl YAML file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tickets-mongo-depl
spec:
  replicas: 1
  selector:
    matchLabels:
      app: tickets-mongo
  template:
    metadata:
      labels:
        app: tickets-mongo
    spec:
      containers:
        - name: tickets-mongo
          image: mongo
          args: ["--dbpath", "data/auth"]
          livenessProbe:
            exec:
              command:
                - mongo
                - --disableImplicitSessions
                - --eval
                - "db.adminCommand('ping')"
            initialDelaySeconds: 30
            periodSeconds: 10
            timeoutSeconds: 5
            successThreshold: 1
            failureThreshold: 6
          volumeMounts:
            - mountPath: /data/auth
              name: tickets-data
      volumes:
        - name: tickets-data
          persistentVolumeClaim:
            claimName: tickets-pvc
---
apiVersion: v1
kind: Service
metadata:
  name: tickets-mongo-srv
spec:
  selector:
    app: tickets-mongo
  ports:
    - name: db
      protocol: TCP
      port: 27017
      targetPort: 27017
auth-mongo-depl.yaml :
apiVersion: apps/v1
kind: Deployment
metadata:
  name: auth-mongo-depl
spec:
  replicas: 1
  selector:
    matchLabels:
      app: auth-mongo
  template:
    metadata:
      labels:
        app: auth-mongo
    spec:
      containers:
        - name: auth-mongo
          image: mongo
          args: ["--dbpath", "data/db"]
          livenessProbe:
            exec:
              command:
                - mongo
                - --disableImplicitSessions
                - --eval
                - "db.adminCommand('ping')"
            initialDelaySeconds: 30
            periodSeconds: 10
            timeoutSeconds: 5
            successThreshold: 1
            failureThreshold: 6
          volumeMounts:
            - mountPath: /data/db
              name: auth-data
      volumes:
        - name: auth-data
          persistentVolumeClaim:
            claimName: auth-pvc
---
apiVersion: v1
kind: Service
metadata:
  name: auth-mongo-srv
spec:
  selector:
    app: auth-mongo
  ports:
    - name: db
      protocol: TCP
      port: 27017
      targetPort: 27017
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pv-auth 1Gi RWO Retain Bound default/auth-pvc auth 78m
pv-orders 1Gi RWO Retain Bound default/orders-pvc orders 78m
pv-tickets 1Gi RWO Retain Bound default/tickets-pvc tickets 78m
I'm using mongo containers with tickets, orders, and auth services.
Just adding some info to make it clear.
NAME READY STATUS RESTARTS AGE
auth-depl-66c5d54988-ffhwc 1/1 Running 0 36m
auth-mongo-depl-594b98fcc5-k9hj8 1/1 Running 0 36m
client-depl-787cf6c7c6-xxks9 1/1 Running 0 36m
expiration-depl-864d846445-b95sh 1/1 Running 0 36m
expiration-redis-depl-64bd9fdb95-sg7fc 1/1 Running 0 36m
nats-depl-7d6c7dc46-m6mcg 1/1 Running 0 36m
orders-depl-5478cf4dfd-zmngj 1/1 Running 0 36m
orders-mongo-depl-5f974847d7-bz9s4 1/1 Running 0 36m
payments-depl-78f85d94fd-4zs55 1/1 Running 0 36m
payments-mongo-depl-5d5c47494b-7zjrl 1/1 Running 0 36m
tickets-depl-84d59fd47c-cs4k5 1/1 Running 0 36m
tickets-mongo-depl-66798d9874-cfbqb 1/1 Running 0 36m
Example of one of the PVs:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-tickets
  labels:
    type: local
spec:
  storageClassName: tickets
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/tmp"
All I had to do was change the hostPath path in each PV; using the same path makes the apps fail.
pv1:
  hostPath:
    path: "/path/x1"
pv2:
  hostPath:
    path: "/path/x2"
Like so, just not the same path.
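For completeness, a PVC that would bind to the pv-tickets volume above could look like this (a sketch, assuming the default namespace; the storageClassName must match the PV):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: tickets-pvc
spec:
  storageClassName: tickets   # must match the PV's storageClassName
  accessModes:
    - ReadWriteOnce           # must be compatible with the PV's access modes
  resources:
    requests:
      storage: 1Gi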

Why my GKE node pool does not auto-scale down?

I've got a preemptible node pool which is clearly under-utilized.
The node pool hosts a deployment with an HPA, with the following setup:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: backend
  labels:
    app: backend
spec:
  replicas: 1
  selector:
    matchLabels:
      app: backend
  template:
    metadata:
      labels:
        app: backend
    spec:
      initContainers:
        - name: wait-for-database
          image: ### IMAGE ###
          command: ['bash', 'init.sh']
      containers:
        - name: backend
          image: ### IMAGE ###
          command: ["bash", "entrypoint.sh"]
          imagePullPolicy: Always
          resources:
            requests:
              memory: "200M"
              cpu: "50m"
          ports:
            - name: probe-port
              containerPort: 8080
              hostPort: 8080
          volumeMounts:
            - name: static-shared-data
              mountPath: /static
          readinessProbe:
            httpGet:
              path: /readiness/
              port: probe-port
            failureThreshold: 5
            initialDelaySeconds: 10
            periodSeconds: 10
            timeoutSeconds: 5
        - name: nginx
          image: nginx:alpine
          resources:
            requests:
              memory: "400M"
              cpu: "20m"
          ports:
            - containerPort: 80
          volumeMounts:
            - name: nginx-proxy-config
              mountPath: /etc/nginx/conf.d/default.conf
              subPath: app.conf
            - name: static-shared-data
              mountPath: /static
      volumes:
        - name: nginx-proxy-config
          configMap:
            name: backend-nginx
        - name: static-shared-data
          emptyDir: {}
      nodeSelector:
        cloud.google.com/gke-nodepool: app-dev
      tolerations:
        - effect: NoSchedule
          key: workload
          operator: Equal
          value: dev
---
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  name: backend
  namespace: default
spec:
  maxReplicas: 12
  minReplicas: 8
  scaleTargetRef:
    apiVersion: extensions/v1beta1
    kind: Deployment
    name: backend
  metrics:
    - resource:
        name: cpu
        targetAverageUtilization: 50
      type: Resource
---
The node pool also has the corresponding taint for this toleration.
The HPA utilization shows this:
NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE
backend-develop Deployment/backend-develop 10%/50% 8 12 8 38d
But the node pool has not scaled down for about a day, and there is no heavy load on this deployment:
NAME STATUS ROLES AGE VERSION
gke-dev-app-dev-fee1a901-fvw9 Ready <none> 22h v1.14.10-gke.36
gke-dev-app-dev-fee1a901-gls7 Ready <none> 22h v1.14.10-gke.36
gke-dev-app-dev-fee1a901-lf3f Ready <none> 24h v1.14.10-gke.36
gke-dev-app-dev-fee1a901-lgw9 Ready <none> 3d10h v1.14.10-gke.36
gke-dev-app-dev-fee1a901-qxkz Ready <none> 3h35m v1.14.10-gke.36
gke-dev-app-dev-fee1a901-s10l Ready <none> 22h v1.14.10-gke.36
gke-dev-app-dev-fee1a901-sj4d Ready <none> 22h v1.14.10-gke.36
gke-dev-app-dev-fee1a901-vdnw Ready <none> 27h v1.14.10-gke.36
There are no affinity settings for this deployment or node pool. Some of the nodes easily pack several identical pods, but others hold just one pod for hours and never scale down.
What could be wrong?
The issue was:
hostPort: 8080
This led to FailedScheduling events reporting that nodes "didn't have free ports".
That's why the under-used nodes were kept online.
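A sketch of the fix, based on the backend container from the manifest above: dropping hostPort lets several replicas share a node, so the autoscaler can pack pods and remove under-used nodes, while the readiness probe still reaches the named containerPort:

ports:
  - name: probe-port
    containerPort: 8080   # hostPort removed, so the pod no longer reserves port 8080 on the node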

Prometheus server in pending state after installation using Helm

I am new to k8s and trying to set up Prometheus monitoring for the cluster. I used
"helm install" to set up Prometheus. Now two pods are still in Pending state:
prometheus-server
prometheus-alertmanager
I manually created a persistent volume for both.
Can anyone help me with how to map these PVs to the PVCs created by the Helm chart?
[centos@k8smaster1 ~]$ kubectl get pod -n monitoring
NAME READY STATUS RESTARTS AGE
prometheus-alertmanager-7757d759b8-x6bd7 0/2 Pending 0 44m
prometheus-kube-state-metrics-7f85b5d86c-cq9kr 1/1 Running 0 44m
prometheus-node-exporter-5rz2k 1/1 Running 0 44m
prometheus-pushgateway-5b8465d455-672d2 1/1 Running 0 44m
prometheus-server-7f8b5fc64b-w626v 0/2 Pending 0 44m
[centos@k8smaster1 ~]$ kubectl get pv
prometheus-alertmanager 3Gi RWX Retain Available 22m
prometheus-server 12Gi RWX Retain Available 30m
[centos@k8smaster1 ~]$ kubectl get pvc -n monitoring
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
prometheus-alertmanager Pending 20m
prometheus-server Pending 20m
[centos@k8smaster1 ~]$ kubectl describe pvc prometheus-alertmanager -n monitoring
Name: prometheus-alertmanager
Namespace: monitoring
StorageClass:
Status: Pending
Volume:
Labels: app=prometheus
chart=prometheus-8.15.0
component=alertmanager
heritage=Tiller
release=prometheus
Annotations: <none>
Finalizers: [kubernetes.io/pvc-protection]
Capacity:
Access Modes:
VolumeMode: Filesystem
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal FailedBinding 116s (x83 over 22m) persistentvolume-controller no persistent volumes available for this claim and no storage class is set
Mounted By: prometheus-alertmanager-7757d759b8-x6bd7
I am expecting the pods to get into the Running state.
!!!UPDATE!!!
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
prometheus-alertmanager Pending local-storage 4m29s
prometheus-server Pending local-storage 4m29s
[centos@k8smaster1 prometheus_pv_storage]$ kubectl describe pvc prometheus-server -n monitoring
Name: prometheus-server
Namespace: monitoring
StorageClass: local-storage
Status: Pending
Volume:
Labels: app=prometheus
chart=prometheus-8.15.0
component=server
heritage=Tiller
release=prometheus
Annotations: <none>
Finalizers: [kubernetes.io/pvc-protection]
Capacity:
Access Modes:
VolumeMode: Filesystem
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal WaitForFirstConsumer 11s (x22 over 4m59s) persistentvolume-controller waiting for first consumer to be created before binding
Mounted By: prometheus-server-7f8b5fc64b-bqf42
!!UPDATE-2!!
[centos@k8smaster1 ~]$ kubectl get pods prometheus-server-7f8b5fc64b-bqf42 -n monitoring -o yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: "2019-08-18T16:10:54Z"
  generateName: prometheus-server-7f8b5fc64b-
  labels:
    app: prometheus
    chart: prometheus-8.15.0
    component: server
    heritage: Tiller
    pod-template-hash: 7f8b5fc64b
    release: prometheus
  name: prometheus-server-7f8b5fc64b-bqf42
  namespace: monitoring
  ownerReferences:
  - apiVersion: apps/v1
    blockOwnerDeletion: true
    controller: true
    kind: ReplicaSet
    name: prometheus-server-7f8b5fc64b
    uid: c1979bcb-c1d2-11e9-819d-fa163ebb8452
  resourceVersion: "2461054"
  selfLink: /api/v1/namespaces/monitoring/pods/prometheus-server-7f8b5fc64b-bqf42
  uid: c19890d1-c1d2-11e9-819d-fa163ebb8452
spec:
  containers:
  - args:
    - --volume-dir=/etc/config
    - --webhook-url=http://127.0.0.1:9090/-/reload
    image: jimmidyson/configmap-reload:v0.2.2
    imagePullPolicy: IfNotPresent
    name: prometheus-server-configmap-reload
    resources: {}
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /etc/config
      name: config-volume
      readOnly: true
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: prometheus-server-token-7h2df
      readOnly: true
  - args:
    - --storage.tsdb.retention.time=15d
    - --config.file=/etc/config/prometheus.yml
    - --storage.tsdb.path=/data
    - --web.console.libraries=/etc/prometheus/console_libraries
    - --web.console.templates=/etc/prometheus/consoles
    - --web.enable-lifecycle
    image: prom/prometheus:v2.11.1
    imagePullPolicy: IfNotPresent
    livenessProbe:
      failureThreshold: 3
      httpGet:
        path: /-/healthy
        port: 9090
        scheme: HTTP
      initialDelaySeconds: 30
      periodSeconds: 10
      successThreshold: 1
      timeoutSeconds: 30
    name: prometheus-server
    ports:
    - containerPort: 9090
      protocol: TCP
    readinessProbe:
      failureThreshold: 3
      httpGet:
        path: /-/ready
        port: 9090
        scheme: HTTP
      initialDelaySeconds: 30
      periodSeconds: 10
      successThreshold: 1
      timeoutSeconds: 30
    resources: {}
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /etc/config
      name: config-volume
    - mountPath: /data
      name: storage-volume
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: prometheus-server-token-7h2df
      readOnly: true
  dnsPolicy: ClusterFirst
  enableServiceLinks: true
  priority: 0
  restartPolicy: Always
  schedulerName: default-scheduler
  securityContext:
    fsGroup: 65534
    runAsGroup: 65534
    runAsNonRoot: true
    runAsUser: 65534
  serviceAccount: prometheus-server
  serviceAccountName: prometheus-server
  terminationGracePeriodSeconds: 300
  tolerations:
  - effect: NoExecute
    key: node.kubernetes.io/not-ready
    operator: Exists
    tolerationSeconds: 300
  - effect: NoExecute
    key: node.kubernetes.io/unreachable
    operator: Exists
    tolerationSeconds: 300
  volumes:
  - configMap:
      defaultMode: 420
      name: prometheus-server
    name: config-volume
  - name: storage-volume
    persistentVolumeClaim:
      claimName: prometheus-server
  - name: prometheus-server-token-7h2df
    secret:
      defaultMode: 420
      secretName: prometheus-server-token-7h2df
status:
  conditions:
  - lastProbeTime: null
    lastTransitionTime: "2019-08-18T16:10:54Z"
    message: '0/2 nodes are available: 1 node(s) didn''t find available persistent
      volumes to bind, 1 node(s) had taints that the pod didn''t tolerate.'
    reason: Unschedulable
    status: "False"
    type: PodScheduled
  phase: Pending
  qosClass: BestEffort
Also, I have the volumes created and assigned to the local-storage class:
[centos@k8smaster1 prometheus_pv]$ kubectl get pv -n monitoring
prometheus-alertmanager 3Gi RWX Retain Available local-storage 2d19h
prometheus-server 12Gi RWX Retain Available local-storage 2d19h
If you are on EKS, your nodes need the following permission:
arn:aws:iam::aws:policy/service-role/AmazonEBSCSIDriverPolicy
as well as the Amazon EBS CSI Driver add-on.
Prometheus will try to create PersistentVolumeClaims with accessModes set to ReadWriteOnce, and a PVC is matched to a PersistentVolume only if the access modes are the same. Change the access mode of your PVs to ReadWriteOnce and it should work.
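For illustration, the prometheus-server PV from the question could be adjusted like this (a sketch; the hostPath shown here is a made-up example, keep whatever path your original PV used):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: prometheus-server
spec:
  storageClassName: local-storage
  capacity:
    storage: 12Gi
  accessModes:
    - ReadWriteOnce   # was ReadWriteMany; must match the access mode the chart's PVC requests
  hostPath:
    path: /mnt/prometheus-data   # hypothetical path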

How can I explore Kubernetes folder?

I tried to start Fabric on Kubernetes and I am getting a CrashLoopBackOff. After searching a bit, I can see from the logs that:
2019-06-05 07:30:19.216 UTC [main] main -> ERRO 001 Cannot run peer because error when setting up MSP from directory /etc/hyperledger/fabric/msp: err Could not load a valid signer certificate from directory /etc/hyperledger/fabric/msp/signcerts, err stat /etc/hyperledger/fabric/msp/signcerts: no such file or directory
How can I see whether I am mounting the correct folder?
I want to access my crashed container to check whether my msp folder is there.
Any help is appreciated!
Edit 1: kubectl describe pod for peer1 of org1:
Name: peer1-org1-7b9cf7fbd4-74b7q
Namespace: org1
Priority: 0
PriorityClassName: <none>
Node: minikube/10.0.2.15
Start Time: Wed, 05 Jun 2019 17:48:21 +0900
Labels: app=hyperledger
org=org1
peer-id=peer1
pod-template-hash=7b9cf7fbd4
role=peer
Annotations: <none>
Status: Running
IP: 172.17.0.9
Controlled By: ReplicaSet/peer1-org1-7b9cf7fbd4
Containers:
couchdb:
Container ID: docker://7b5e80103491476843d365dc234316ae55a92d66f2ea009cf9162583a76907fb
Image: hyperledger/fabric-couchdb:x86_64-1.0.0
Image ID: docker-pullable://hyperledger/fabric-couchdb@sha256:e89b0f95f6ff674fd043795090dd65a11d727ec005d925545cf0b4fc48aa221d
Port: 5984/TCP
Host Port: 0/TCP
State: Running
Started: Wed, 05 Jun 2019 17:49:49 +0900
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-sjp8t (ro)
peer1-org1:
Container ID: docker://95e743dceafbd78f7e29476302ac86d7eb48f97c9a50db3d174dc6684511c97b
Image: hyperledger/fabric-peer:x86_64-1.0.0
Image ID: docker-pullable://hyperledger/fabric-peer@sha256:b7c1c2a6b356996c3dbe2b9554055cd2b63194cd7a492a83de2dbabf7f7e3c65
Ports: 7051/TCP, 7052/TCP, 7053/TCP
Host Ports: 0/TCP, 0/TCP, 0/TCP
Command:
peer
Args:
node
start
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Error
Exit Code: 1
Started: Wed, 05 Jun 2019 17:50:58 +0900
Finished: Wed, 05 Jun 2019 17:50:58 +0900
Ready: False
Restart Count: 3
Environment:
CORE_LEDGER_STATE_STATEDATABASE: CouchDB
CORE_LEDGER_STATE_COUCHDBCONFIG_COUCHDBADDRESS: localhost:5984
CORE_VM_ENDPOINT: unix:///host/var/run/docker.sock
CORE_LOGGING_LEVEL: DEBUG
CORE_PEER_TLS_ENABLED: false
CORE_PEER_GOSSIP_USELEADERELECTION: true
CORE_PEER_GOSSIP_ORGLEADER: false
CORE_PEER_PROFILE_ENABLED: true
CORE_PEER_TLS_CERT_FILE: /etc/hyperledger/fabric/tls/server.crt
CORE_PEER_TLS_KEY_FILE: /etc/hyperledger/fabric/tls/server.key
CORE_PEER_TLS_ROOTCERT_FILE: /etc/hyperledger/fabric/tls/ca.crt
CORE_PEER_ID: peer1.org1
CORE_PEER_ADDRESS: peer1.org1:7051
CORE_PEER_GOSSIP_EXTERNALENDPOINT: peer1.org1:7051
CORE_PEER_LOCALMSPID: Org1MSP
Mounts:
/etc/hyperledger/fabric/msp from certificate (rw,path="peers/peer1.org1/msp")
/etc/hyperledger/fabric/tls from certificate (rw,path="peers/peer1.org1/tls")
/host/var/run/ from run (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-sjp8t (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
certificate:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: org1-pv
ReadOnly: false
run:
Type: HostPath (bare host directory volume)
Path: /run
HostPathType:
default-token-sjp8t:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-sjp8t
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 2m58s default-scheduler Successfully assigned org1/peer1-org1-7b9cf7fbd4-74b7q to minikube
Normal Pulling 2m55s kubelet, minikube Pulling image "hyperledger/fabric-couchdb:x86_64-1.0.0"
Normal Pulled 90s kubelet, minikube Successfully pulled image "hyperledger/fabric-couchdb:x86_64-1.0.0"
Normal Created 90s kubelet, minikube Created container couchdb
Normal Started 90s kubelet, minikube Started container couchdb
Normal Pulling 90s kubelet, minikube Pulling image "hyperledger/fabric-peer:x86_64-1.0.0"
Normal Pulled 71s kubelet, minikube Successfully pulled image "hyperledger/fabric-peer:x86_64-1.0.0"
Normal Created 21s (x4 over 70s) kubelet, minikube Created container peer1-org1
Normal Started 21s (x4 over 70s) kubelet, minikube Started container peer1-org1
Normal Pulled 21s (x3 over 69s) kubelet, minikube Container image "hyperledger/fabric-peer:x86_64-1.0.0" already present on machine
Warning BackOff 5s (x6 over 68s) kubelet, minikube Back-off restarting failed container
Edit 2:
kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
org1-artifacts-pv 500Mi RWX Retain Available 39m
org1-pv 500Mi RWX Retain Available 39m
org2-artifacts-pv 500Mi RWX Retain Available 39m
org2-pv 500Mi RWX Retain Available 39m
orgorderer1-pv 500Mi RWX Retain Available 39m
pvc-aa87a86f-876e-11e9-99ef-080027f6ce3c 10Mi RWX Delete Bound orgorderer1/orgorderer1-pv standard 39m
pvc-aadb69ff-876e-11e9-99ef-080027f6ce3c 10Mi RWX Delete Bound org2/org2-pv standard 39m
pvc-ab2e4d8e-876e-11e9-99ef-080027f6ce3c 10Mi RWX Delete Bound org2/org2-artifacts-pv standard 39m
pvc-abb04335-876e-11e9-99ef-080027f6ce3c 10Mi RWX Delete Bound org1/org1-pv standard 39m
pvc-abfaaf76-876e-11e9-99ef-080027f6ce3c 10Mi RWX Delete Bound org1/org1-artifacts-pv standard 39m
kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
org1-artifacts-pv Bound pvc-abfaaf76-876e-11e9-99ef-080027f6ce3c 10Mi RWX standard 40m
org1-pv Bound pvc-abb04335-876e-11e9-99ef-080027f6ce3c 10Mi RWX standard 40m
Edit 3: org1-cli.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: org1-artifacts-pv
spec:
  capacity:
    storage: 500Mi
  accessModes:
    - ReadWriteMany
  hostPath:
    path: "/opt/share/channel-artifacts"
  # nfs:
  #   path: /opt/share/channel-artifacts
  #   server: localhost #change to your nfs server ip here
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  namespace: org1
  name: org1-artifacts-pv
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Mi
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  namespace: org1
  name: cli
spec:
  replicas: 1
  strategy: {}
  template:
    metadata:
      labels:
        app: cli
    spec:
      containers:
        - name: cli
          image: hyperledger/fabric-tools:x86_64-1.0.0
          env:
            - name: CORE_PEER_TLS_ENABLED
              value: "false"
            #- name: CORE_PEER_TLS_CERT_FILE
            #  value: /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1/peers/peer0.org1/tls/server.crt
            #- name: CORE_PEER_TLS_KEY_FILE
            #  value: /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1/peers/peer0.org1/tls/server.key
            #- name: CORE_PEER_TLS_ROOTCERT_FILE
            #  value: /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1/peers/peer0.org1/tls/ca.crt
            - name: CORE_VM_ENDPOINT
              value: unix:///host/var/run/docker.sock
            - name: GOPATH
              value: /opt/gopath
            - name: CORE_LOGGING_LEVEL
              value: DEBUG
            - name: CORE_PEER_ID
              value: cli
            - name: CORE_PEER_ADDRESS
              value: peer0.org1:7051
            - name: CORE_PEER_LOCALMSPID
              value: Org1MSP
            - name: CORE_PEER_MSPCONFIGPATH
              value: /etc/hyperledger/fabric/msp
          workingDir: /opt/gopath/src/github.com/hyperledger/fabric/peer
          command: [ "/bin/bash", "-c", "--" ]
          args: [ "while true; do sleep 30; done;" ]
          volumeMounts:
            # - mountPath: /opt/gopath/src/github.com/hyperledger/fabric/peer
            #   name: certificate
            #   subPath: scripts
            - mountPath: /host/var/run/
              name: run
            # - mountPath: /opt/gopath/src/github.com/hyperledger/fabric/examples/chaincode/go
            #   name: certificate
            #   subPath: chaincode
            - mountPath: /etc/hyperledger/fabric/msp
              name: certificate
              subPath: users/Admin@org1/msp
            - mountPath: /opt/gopath/src/github.com/hyperledger/fabric/peer/channel-artifacts
              name: artifacts
      volumes:
        - name: certificate
          persistentVolumeClaim:
            claimName: org1-pv
        - name: artifacts
          persistentVolumeClaim:
            claimName: org1-artifacts-pv
        - name: run
          hostPath:
            path: /var/run
org1-namespace.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: org1
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: org1-pv
spec:
  capacity:
    storage: 500Mi
  accessModes:
    - ReadWriteMany
  hostPath:
    path: /opt/share/crypto-config/peerOrganizations/org1
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  namespace: org1
  name: org1-pv
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Mi
---
Edit 4: peer1-org1.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  namespace: org1
  name: peer1-org1
spec:
  replicas: 1
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: hyperledger
        role: peer
        peer-id: peer1
        org: org1
    spec:
      containers:
        - name: couchdb
          image: hyperledger/fabric-couchdb:x86_64-1.0.0
          ports:
            - containerPort: 5984
        - name: peer1-org1
          image: hyperledger/fabric-peer:x86_64-1.0.0
          env:
            - name: CORE_LEDGER_STATE_STATEDATABASE
              value: "CouchDB"
            - name: CORE_LEDGER_STATE_COUCHDBCONFIG_COUCHDBADDRESS
              value: "localhost:5984"
            - name: CORE_VM_ENDPOINT
              value: "unix:///host/var/run/docker.sock"
            - name: CORE_LOGGING_LEVEL
              value: "DEBUG"
            - name: CORE_PEER_TLS_ENABLED
              value: "false"
            - name: CORE_PEER_GOSSIP_USELEADERELECTION
              value: "true"
            - name: CORE_PEER_GOSSIP_ORGLEADER
              value: "false"
            - name: CORE_PEER_PROFILE_ENABLED
              value: "true"
            - name: CORE_PEER_TLS_CERT_FILE
              value: "/etc/hyperledger/fabric/tls/server.crt"
            - name: CORE_PEER_TLS_KEY_FILE
              value: "/etc/hyperledger/fabric/tls/server.key"
            - name: CORE_PEER_TLS_ROOTCERT_FILE
              value: "/etc/hyperledger/fabric/tls/ca.crt"
            - name: CORE_PEER_ID
              value: peer1.org1
            - name: CORE_PEER_ADDRESS
              value: peer1.org1:7051
            - name: CORE_PEER_GOSSIP_EXTERNALENDPOINT
              value: peer1.org1:7051
            - name: CORE_PEER_LOCALMSPID
              value: Org1MSP
          workingDir: /opt/gopath/src/github.com/hyperledger/fabric/peer
          ports:
            - containerPort: 7051
            - containerPort: 7052
            - containerPort: 7053
          command: ["peer"]
          args: ["node", "start"]
          volumeMounts:
            #- mountPath: /opt/gopath/src/github.com/hyperledger/fabric/peer/channel-artifacts
            #  name: certificate
            #  subPath: channel-artifacts
            - mountPath: /etc/hyperledger/fabric/msp
              name: certificate
              #subPath: crypto-config/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/msp
              subPath: peers/peer1.org1/msp
            - mountPath: /etc/hyperledger/fabric/tls
              name: certificate
              #subPath: crypto-config/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls/
              subPath: peers/peer1.org1/tls
            - mountPath: /host/var/run/
              name: run
      volumes:
        - name: certificate
          persistentVolumeClaim:
            claimName: org1-pv
        - name: run
          hostPath:
            path: /run
---
apiVersion: v1
kind: Service
metadata:
  namespace: org1
  name: peer1
spec:
  selector:
    app: hyperledger
    role: peer
    peer-id: peer1
    org: org1
  type: NodePort
  ports:
    - name: externale-listen-endpoint
      protocol: TCP
      port: 7051
      targetPort: 7051
      nodePort: 30003
    - name: chaincode-listen
      protocol: TCP
      port: 7052
      targetPort: 7052
      nodePort: 30004
---
You can do a kubectl edit pod <podname> -n <namespace> and change the command section to sleep 1000000000; the pod will then restart and you can get in there and see what's going on. Or just delete the deployment, edit your YAML to remove the peer launch command, redeploy the YAML, and see how the directories are laid out.
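A minimal sketch of that second approach, based on the peer1-org1 deployment above: swap the peer launch command for a long sleep, redeploy, then exec into the peer container and inspect the mount.

# in the peer1-org1 container spec, instead of command: ["peer"] / args: ["node", "start"]:
command: ["/bin/sh", "-c", "sleep 1000000000"]

kubectl -n org1 get pods
kubectl -n org1 exec -it <peer1-org1-pod-name> -c peer1-org1 -- ls -R /etc/hyperledger/fabric/msp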
After a bit of searching, I tried mounting the volume into the nginx Kubernetes PVC sample, changing the pod's claimName to my created PVC. From there I exec'd bash into it and explored the files, so I could see whether I had mounted the correct folder or not.