ERROR - Exception when attempting to create Namespaced Pod - kubernetes

I am trying to run Airflow DAGs on Kubernetes and I am new to it. I am trying to create an Airflow worker in Kubernetes, but I am getting an error.
I have set up a K8s cluster and am using the file below as the pod_template_file:
apiVersion: v1
kind: Pod
metadata:
  name: airflow-worker
  namespace: default
spec:
  containers:
    - image: apache/airflow:2.5.1
      name: base
      imagePullPolicy: IfNotPresent
      env:
        - name: AIRFLOW__DATABASE__SQL_ALCHEMY_CONN
          value: "postgresql://admin:test123.#postgres:5432/postgresdb"
        - name: AIRFLOW__CORE__EXECUTOR
          value: "LocalExecutor"
        - name: AIRFLOW__KUBERNETES_EXECUTOR__NAMESPACE
          value: "default"
        - name: AIRFLOW__CORE__DAGS_FOLDER
          value: "/opt/airflow/dags"
        - name: AIRFLOW__KUBERNETES_EXECUTOR__DELETE_WORKER_PODS
          value: "False"
        - name: AIRFLOW__KUBERNETES_EXECUTOR__DELETE_WORKER_PODS_ON_FAILURE
          value: "False"
      volumeMounts:
        - name: logs-pv
          mountPath: /opt/airflow/logs
        - name: dags-pv
          mountPath: /opt/airflow/dags
  restartPolicy: Never
  securityContext:
    runAsUser: 50000
    fsGroup: 50000
  serviceAccountName: "airflow-scheduler"
  volumes:
    - name: dags-pv
      persistentVolumeClaim:
        claimName: dags-pvc
    - name: logs-pv
      persistentVolumeClaim:
        claimName: logs-pvc
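For reference, the scheduler only uses this template if the pod_template_file option points at it, for example via an environment variable (the path below is just an illustrative example, not necessarily my exact setup):
AIRFLOW__KUBERNETES_EXECUTOR__POD_TEMPLATE_FILE=/opt/airflow/pod_templates/pod_template.yaml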
But Kubernetes throws the error below while attempting to create the dynamic worker pods:
ERROR - Exception when attempting to create Namespaced Pod: {
"metadata": {
"annotations": {
"dag_id": "our_first_dag_v15",
"task_id": "first_task",
"try_number": "1",
"run_id": "scheduled__2023-01-25T00:00:00+00:00"
},
"labels": {
"airflow-worker": "79",
"dag_id": "our_first_dag_v15",
"task_id": "first_task",
"try_number": "1",
"airflow_version": "2.5.1",
"kubernetes_executor": "True",
"run_id": "scheduled__2023-01-25T0000000000-32719d56f"
},
"name": "our-first-dag-v15-first-task-79aee54ed39941f7989a3f3f62d30b19",
"namespace": "default"
},
"spec": {
"containers": [
{
"args": [
"airflow",
"tasks",
"run",
"our_first_dag_v15",
"first_task",
"scheduled__2023-01-25T00:00:00+00:00",
"--local",
"--subdir",
"DAGS_FOLDER/our_first_dag.py"
],
"env": [
{
"name": "AIRFLOW_IS_K8S_EXECUTOR_POD",
"value": "True"
}
],
"name": "base"
}
]
}
}
Traceback (most recent call last):
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/executors/kubernetes_executor.py", line 271, in run_pod_async
body=sanitized_pod, namespace=pod.metadata.namespace, **kwargs
File "/home/airflow/.local/lib/python3.7/site-packages/kubernetes/client/api/core_v1_api.py", line 7356, in create_namespaced_pod
return self.create_namespaced_pod_with_http_info(namespace, body, **kwargs) # noqa: E501
File "/home/airflow/.local/lib/python3.7/site-packages/kubernetes/client/api/core_v1_api.py", line 7469, in create_namespaced_pod_with_http_info
collection_formats=collection_formats)
File "/home/airflow/.local/lib/python3.7/site-packages/kubernetes/client/api_client.py", line 353, in call_api
_preload_content, _request_timeout, _host)
File "/home/airflow/.local/lib/python3.7/site-packages/kubernetes/client/api_client.py", line 184, in __call_api
_request_timeout=_request_timeout)
File "/home/airflow/.local/lib/python3.7/site-packages/kubernetes/client/api_client.py", line 397, in request
body=body)
File "/home/airflow/.local/lib/python3.7/site-packages/kubernetes/client/rest.py", line 281, in POST
body=body)
File "/home/airflow/.local/lib/python3.7/site-packages/kubernetes/client/rest.py", line 234, in request
raise ApiException(http_resp=r)
kubernetes.client.exceptions.ApiException: (422)
Reason: Unprocessable Entity
HTTP response headers: HTTPHeaderDict({'Audit-Id': '89216f9f-3d14-42dd-b6a7-f364146f04ae', 'Cache-Control': 'no-cache, private', 'Content-Type': 'application/json', 'X-Kubernetes-Pf-Flowschema-Uid': '767099f9-56ae-4ee7-9b92-73744f8d76d1', 'X-Kubernetes-Pf-Prioritylevel-Uid': '24b3a709-e4ef-4c98-910f-a0eea9a0f036', 'Date': 'Thu, 26 Jan 2023 18:36:20 GMT', 'Content-Length': '435'})
HTTP response body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"Pod \"our-first-dag-v15-first-task-79aee54ed39941f7989a3f3f62d30b19\" is invalid: spec.containers[0].image: Required value","reason":"Invalid","details":{"name":"our-first-dag-v15-first-task-79aee54ed39941f7989a3f3f62d30b19","kind":"Pod","causes":[{"reason":"FieldValueRequired","message":"Required value","field":"spec.containers[0].image"}]},"code":422}
Please help me identify the problem. I am trying to run DAGs on a K8s cluster.
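As a sanity check, the template can also be validated against the cluster on its own (assuming it is saved as pod_template.yaml):
# Validate the template as a standalone Pod without actually creating it
kubectl apply --dry-run=server -f pod_template.yaml
# Print the image the first container resolves to
kubectl apply --dry-run=client -o jsonpath='{.spec.containers[0].image}' -f pod_template.yaml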

Related

Drone CI Stuck on Pending for arm

I'm trying to test CI/CD with Gitea and Drone, but the build is stuck on pending.
I was able to verify that my Gitea is connected to my drone-server.
Here is my .drone.yaml:
kind: pipeline
type: docker
name: arm64
platform:
  os: linux
  arch: arm64
steps:
- name: test
  image: 'golang:1.10-alpine'
  commands:
  - go test
- name: build
  image: 'golang:1.10-alpine'
  commands:
  - go build -o ./myapp
- name: publish
  image: plugins/docker
  settings:
    username: mjayson
    password:
      from_secret: docker_pwd
    repo: mjayson/sample
    tags: latest
- name: deliver
  image: sinlead/drone-kubectl
  settings:
    kubernetes_server:
      from_secret: k8s_server
    kubernetes_cert:
      from_secret: k8s_cert
    kubernetes_token:
      from_secret: k8s_token
  commands:
  - kubectl apply -f deployment.yml
I have set up Gitea and Drone in my K8s cluster. Configuration below:
apiVersion: v1
kind: ConfigMap
metadata:
name: drone-config
namespace: dev-ops
data:
DRONE_GITEA_SERVER: 'http://192.168.1.150:30000'
DRONE_GITEA_CLIENT_ID: '746a6cd1-cd31-4611-971b-e005bb80e662'
DRONE_GITEA_CLIENT_SECRET: 'O-NpPnTiFyIGZwqN7aeNDqIWR1sGIEJj8Cehcl0CtVI='
DRONE_RPC_SECRET: '1be6d1769148d95b5d04a84694cc0447'
DRONE_SERVER_HOST: '192.168.1.150:30001'
DRONE_SERVER_PROTO: 'http'
DRONE_LOGS_TRACE: 'true'
DRONE_LOGS_PRETTY: 'true'
DRONE_LOGS_COLOR: 'true'
---
apiVersion: v1
kind: PersistentVolume
metadata:
name: drone-server-pv
labels:
type: local
spec:
storageClassName: manual
capacity:
storage: 5Gi
accessModes:
- ReadWriteOnce
hostPath:
path: "/infra/drone"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: drone-server-pvc
spec:
storageClassName: manual
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 5Gi
---
kind: Service
apiVersion: v1
metadata:
name: drone-server-service
spec:
type: NodePort
selector:
app: drone-server
ports:
- name: drone-server-http
port: 80
targetPort: 80
nodePort: 30001
- name: drone-server-ssh
port: 443
targetPort: 443
nodePort: 30003
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: drone-server-deployment
labels:
app: drone-server
spec:
replicas: 1
selector:
matchLabels:
app: drone-server
template:
metadata:
labels:
app: drone-server
spec:
containers:
- name: drone-server
image: drone/drone:1.9
ports:
- containerPort: 80
name: gitea-http
- containerPort: 443
name: gitea-ssh
envFrom:
- configMapRef:
name: drone-config
volumeMounts:
- name: pv-data
mountPath: /data
volumes:
- name: pv-data
persistentVolumeClaim:
claimName: drone-server-pvc
apiVersion: apps/v1
kind: Deployment
metadata:
name: drone-runner-deployment
labels:
app: drone-runner
spec:
replicas: 1
selector:
matchLabels:
app: drone-runner
template:
metadata:
labels:
app: drone-runner
spec:
containers:
- name: drone-runner
image: 'drone/drone-runner-kube:latest'
ports:
- containerPort: 3000
name: runner-http
env:
- name: DRONE_RPC_HOST
valueFrom:
configMapKeyRef:
name: drone-config
key: DRONE_SERVER_HOST
- name: DRONE_RPC_PROTO
valueFrom:
configMapKeyRef:
name: drone-config
key: DRONE_SERVER_PROTO
- name: DRONE_RPC_SECRET
valueFrom:
configMapKeyRef:
name: drone-config
key: DRONE_RPC_SECRET
- name: DRONE_RUNNER_CAPACITY
value: '2'
- name: DRONE_LOGS_TRACE
valueFrom:
configMapKeyRef:
name: drone-config
key: DRONE_LOGS_TRACE
- name: DRONE_LOGS_PRETTY
valueFrom:
configMapKeyRef:
name: drone-config
key: DRONE_LOGS_PRETTY
- name: DRONE_LOGS_COLOR
valueFrom:
configMapKeyRef:
name: drone-config
key: DRONE_LOGS_COLOR
And here are the drone server logs:
}
{
"arch": "",
"kernel": "",
"kind": "pipeline",
"level": "debug",
"msg": "manager: request queue item",
"os": "",
"time": "2020-08-08T19:16:27Z",
"type": "kubernetes",
"variant": ""
}
{
"arch": "",
"kernel": "",
"kind": "pipeline",
"level": "debug",
"msg": "manager: context canceled",
"os": "",
"time": "2020-08-08T19:16:57Z",
"type": "kubernetes",
"variant": ""
}
{
"arch": "",
"kernel": "",
"kind": "pipeline",
"level": "debug",
"msg": "manager: request queue item",
"os": "",
"time": "2020-08-08T19:17:07Z",
"type": "kubernetes",
"variant": ""
}
{
"arch": "",
"kernel": "",
"kind": "pipeline",
"level": "debug",
"msg": "manager: context canceled",
"os": "",
"time": "2020-08-08T19:17:37Z",
"type": "kubernetes",
"variant": ""
}
{
"arch": "",
"kernel": "",
"kind": "pipeline",
"level": "debug",
"msg": "manager: request queue item",
"os": "",
"time": "2020-08-08T19:17:47Z",
"type": "kubernetes",
"variant": ""
}
{
"arch": "",
"kernel": "",
"kind": "pipeline",
"level": "debug",
"msg": "manager: context canceled",
"os": "",
"time": "2020-08-08T19:18:17Z",
"type": "kubernetes",
"variant": ""
}
{
"arch": "",
"kernel": "",
"kind": "pipeline",
"level": "debug",
"msg": "manager: request queue item",
"os": "",
"time": "2020-08-08T19:18:27Z",
"type": "kubernetes",
"variant": ""
}
{
"arch": "",
"kernel": "",
"kind": "pipeline",
"level": "debug",
"msg": "manager: context canceled",
"os": "",
"time": "2020-08-08T19:18:57Z",
"type": "kubernetes",
"variant": ""
}
{
"arch": "",
"kernel": "",
"kind": "pipeline",
"level": "debug",
"msg": "manager: request queue item",
"os": "",
"time": "2020-08-08T19:19:07Z",
"type": "kubernetes",
My drone runner log:
time="2020-08-08T19:13:07Z" level=info msg="starting the server" addr=":3000"
time="2020-08-08T19:13:07Z" level=info msg="successfully pinged the remote server"
time="2020-08-08T19:13:07Z" level=info msg="polling the remote server" capacity=2 endpoint="http://192.168.1.150:30001" kind=pipeline type=kubernetes
Not sure how to deal with this, as it is my first time facing such an issue. I also tried updating the drone server image from 1 to 1.9, but still nothing happens.
I replaced 'drone/drone-runner-kube:latest' with 'drone/drone-runner-docker:1' and specified the mount point /var/run/docker.sock.
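A sketch of that change, reusing the env wiring from the kube runner deployment above (the Deployment name and the hostPath mount are illustrative, not taken from my actual manifest):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: drone-runner-docker
spec:
  replicas: 1
  selector:
    matchLabels:
      app: drone-runner-docker
  template:
    metadata:
      labels:
        app: drone-runner-docker
    spec:
      containers:
      - name: drone-runner
        image: 'drone/drone-runner-docker:1'
        env:
        - name: DRONE_RPC_HOST
          valueFrom:
            configMapKeyRef:
              name: drone-config
              key: DRONE_SERVER_HOST
        - name: DRONE_RPC_PROTO
          valueFrom:
            configMapKeyRef:
              name: drone-config
              key: DRONE_SERVER_PROTO
        - name: DRONE_RPC_SECRET
          valueFrom:
            configMapKeyRef:
              name: drone-config
              key: DRONE_RPC_SECRET
        volumeMounts:
        - name: docker-sock
          mountPath: /var/run/docker.sock   # gives the runner access to the node's Docker daemon
      volumes:
      - name: docker-sock
        hostPath:
          path: /var/run/docker.sock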

Does Kubernetes take JSON format as input file to create configmap and secret?

I have an existing configuration file in JSON format, something like below
{
    "maxThreadCount": 10,
    "trackerConfigs": [{
            "url": "https://example1.com/",
            "username": "username",
            "password": "password",
            "defaultLimit": 1
        },
        {
            "url": "https://example2.com/",
            "username": "username",
            "password": "password",
            "defaultLimit": 1
        }
    ],
    "repoConfigs": [{
        "url": "https://github.com/",
        "username": "username",
        "password": "password",
        "type": "GITHUB"
    }],
    "streamConfigs": [{
        "url": "https://example.com/master.json",
        "type": "JSON"
    }]
}
I understand that I can pass a key/value-pair properties file with the --from-file option when creating a ConfigMap or Secret.
But how about a JSON-formatted file? Does Kubernetes accept a JSON file as input for creating ConfigMaps and Secrets as well?
$ kubectl create configmap demo-configmap --from-file=example.json
If I run this command, it says configmap/demo-configmap created. But how can I refer to this ConfigMap's values in another pod?
When you create a ConfigMap/Secret using --from-file, by default the file name becomes the key and the content of the file becomes the value.
For example, the ConfigMap you created will look like this:
apiVersion: v1
data:
  test.json: |
    {
        "maxThreadCount": 10,
        "trackerConfigs": [{
                "url": "https://example1.com/",
                "username": "username",
                "password": "password",
                "defaultLimit": 1
            },
            {
                "url": "https://example2.com/",
                "username": "username",
                "password": "password",
                "defaultLimit": 1
            }
        ],
        "repoConfigs": [{
            "url": "https://github.com/",
            "username": "username",
            "password": "password",
            "type": "GITHUB"
        }],
        "streamConfigs": [{
            "url": "https://example.com/master.json",
            "type": "JSON"
        }]
    }
kind: ConfigMap
metadata:
  creationTimestamp: "2020-05-07T09:03:55Z"
  name: demo-configmap
  namespace: default
  resourceVersion: "5283"
  selfLink: /api/v1/namespaces/default/configmaps/demo-configmap
  uid: ce566b36-c141-426e-be30-eb843ab20db6
You can mount the ConfigMap into your pod as a volume, where the key becomes the file name and the value becomes the content of the file, like the following:
apiVersion: v1
kind: Pod
metadata:
  name: test-pod
spec:
  containers:
    - name: test-container
      image: k8s.gcr.io/busybox
      command: [ "/bin/sh", "-c", "ls /etc/config/" ]
      volumeMounts:
        - name: config-volume
          mountPath: /etc/config
  volumes:
    - name: config-volume
      configMap:
        name: demo-configmap
  restartPolicy: Never
When the pod runs, the command ls /etc/config/ produces the output below:
test.json
ConfigMaps are containers for key/value pairs. So, if you create a ConfigMap from a file containing JSON, it will be stored with the file name as the key and the JSON as the value.
To access such a ConfigMap from a Pod, you have to mount it into your Pod as a volume:
How to mount config maps
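Alternatively, since the whole JSON document is just one value stored under the example.json key, it can also be exposed as a single environment variable (a sketch; the variable name is illustrative):
env:
- name: APP_CONFIG_JSON
  valueFrom:
    configMapKeyRef:
      name: demo-configmap
      key: example.json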
Unfortunately, the solution stated by hoque did not work for me. In my case, the app terminated with a very suspicious message:
Possible reasons for this include:
* You intended to execute a .NET program:
The application 'myapp.dll' does not exist.
* You intended to execute a .NET SDK command:
It was not possible to find any installed .NET SDKs.
Install a .NET SDK from:
https://aka.ms/dotnet-download
I could see that appsettings.json was deployed, but something had gone wrong here. In the end, this solution worked for me (similar, but with some extras):
spec:
  containers:
    - name: webapp
      image: <my image>
      volumeMounts:
        - name: appconfig
          # "mountPath: /app" only doesn't work (app crashes)
          mountPath: /app/appsettings.json
          subPath: appsettings.json
  volumes:
    - name: appconfig
      configMap:
        name: my-config-map
        # Required since "mountPath: /app" only doesn't work (app crashes)
        items:
          - key: appsettings.json
            path: appsettings.json
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-config-map
  labels:
    app: swpq-task-02-team5
data:
  appsettings.json: |
    {
      ...
    }
I had this issue for a couple of days, as I wanted to mount a JSON config file (config.production.json) from my local directory into a specific location inside the pod's containers (/var/lib/ghost). The config below worked for me. Please note the mountPath and subPath keys, which did the trick. The snippet below is from a kind=Deployment, shortened for ease of reading:
spec:
  volumes:
    - name: configmap-volume
      configMap:
        name: configmap
  containers:
    - env:
        - name: url
          value: https://www.example.com
      volumeMounts:
        - name: configmap-volume
          mountPath: /var/lib/ghost/config.production.json
          subPath: config.production.json
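To confirm the file actually landed where Ghost expects it, something like this works (the deployment name is illustrative):
kubectl exec deploy/ghost -- ls -l /var/lib/ghost/config.production.json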

Intermittent failure of container mounts in Kubernetes

We are seeing an intermittent failure of volume mount with this error message:
Error: cannot find volume "work" to mount into container "notebook".
The issue happens on ~5% of pod launches (they all have the same config). The volume is backed by a PVC that is created immediately before pod creation.
We are running on GKE with version v1.11.7-gke.12.
Pod manifest is here:
{
apiVersion: 'v1',
kind: 'Pod',
metadata: {
name: 'some pod name',
annotations: {},
labels: {},
},
spec: {
restartPolicy: 'OnFailure',
securityContext: {
fsGroup: 100,
},
automountServiceAccountToken: false,
volumes: [
{
name: 'work',
persistentVolumeClaim: {
claimName: pvcName,
},
},
],
containers: [
{
name: 'notebook',
image,
workingDir: undefined, // this is defined in Dockerfile
ports: [
{
name: 'notebook-port',
containerPort: port,
},
],
args: [...command.split(' '), ...args],
imagePullPolicy: 'IfNotPresent',
volumeMounts: [
{
name: 'work',
mountPath: '/home/jovyan/work',
},
],
resources: {
requests: {
memory: '256M',
},
limits: {
memory: '1G',
},
},
},
{
name: 'watcher',
image: 'gcr.io/deepnote-200602/wacher:0.0.3',
imagePullPolicy: 'Always',
volumeMounts: [
{
name: 'work',
mountPath: '/home/jovyan/work',
},
],
},
],
},
}
}
Any help or ideas would be greatly appreciated! I am also very happy to try any suggestions about which other logs/steps might be useful to isolate the issue.
Most likely the volume is not bound. Can you check and confirm the status of the PVC referenced below?
claimName: pvcName
kubectl get pvc | grep pvcName
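If the claim shows Pending rather than Bound, the events on the claim usually say why:
kubectl describe pvc pvcName   # the Events section explains why a claim stays Pending
kubectl get pv                 # confirm a matching PersistentVolume exists and is Bound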

Unable to deploy Minio in kubernetes cluster using Helm

I am trying to deploy Minio in Kubernetes using the Helm stable charts. When I check the status of the release with
helm status minio
the desired pod capacity is 4, but the current count is 0.
I looked through the journalctl logs for any messages from the kubelet, but found none.
I have attached all the rendered chart templates; can someone please point out what I am doing wrong?
---
# Source: minio/templates/secrets.yaml
apiVersion: v1
kind: Secret
metadata:
name: RELEASE-NAME-minio
labels:
app: minio
chart: minio-1.7.0
release: RELEASE-NAME
heritage: Tiller
type: Opaque
data:
accesskey: RFJMVEFEQU1DRjNUQTVVTVhOMDY=
secretkey: bHQwWk9zWmp5MFpvMmxXN3gxeHlFWmF5bXNPUkpLM1VTb3VqeEdrdw==
---
# Source: minio/templates/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: RELEASE-NAME-minio
labels:
app: minio
chart: minio-1.7.0
release: RELEASE-NAME
heritage: Tiller
data:
initialize: |-
#!/bin/sh
set -e ; # Have script exit in the event of a failed command.
# connectToMinio
# Use a check-sleep-check loop to wait for Minio service to be available
connectToMinio() {
ATTEMPTS=0 ; LIMIT=29 ; # Allow 30 attempts
set -e ; # fail if we can't read the keys.
ACCESS=$(cat /config/accesskey) ; SECRET=$(cat /config/secretkey) ;
set +e ; # The connections to minio are allowed to fail.
echo "Connecting to Minio server: http://$MINIO_ENDPOINT:$MINIO_PORT" ;
MC_COMMAND="mc config host add myminio http://$MINIO_ENDPOINT:$MINIO_PORT $ACCESS $SECRET" ;
$MC_COMMAND ;
STATUS=$? ;
until [ $STATUS = 0 ]
do
ATTEMPTS=`expr $ATTEMPTS + 1` ;
echo \"Failed attempts: $ATTEMPTS\" ;
if [ $ATTEMPTS -gt $LIMIT ]; then
exit 1 ;
fi ;
sleep 2 ; # 1 second intervals between attempts
$MC_COMMAND ;
STATUS=$? ;
done ;
set -e ; # reset `e` as active
return 0
}
# checkBucketExists ($bucket)
# Check if the bucket exists, by using the exit code of `mc ls`
checkBucketExists() {
BUCKET=$1
CMD=$(/usr/bin/mc ls myminio/$BUCKET > /dev/null 2>&1)
return $?
}
# createBucket ($bucket, $policy, $purge)
# Ensure bucket exists, purging if asked to
createBucket() {
BUCKET=$1
POLICY=$2
PURGE=$3
# Purge the bucket, if set & exists
# Since PURGE is user input, check explicitly for `true`
if [ $PURGE = true ]; then
if checkBucketExists $BUCKET ; then
echo "Purging bucket '$BUCKET'."
set +e ; # don't exit if this fails
/usr/bin/mc rm -r --force myminio/$BUCKET
set -e ; # reset `e` as active
else
echo "Bucket '$BUCKET' does not exist, skipping purge."
fi
fi
# Create the bucket if it does not exist
if ! checkBucketExists $BUCKET ; then
echo "Creating bucket '$BUCKET'"
/usr/bin/mc mb myminio/$BUCKET
else
echo "Bucket '$BUCKET' already exists."
fi
# At this point, the bucket should exist, skip checking for existence
# Set policy on the bucket
echo "Setting policy of bucket '$BUCKET' to '$POLICY'."
/usr/bin/mc policy $POLICY myminio/$BUCKET
}
# Try connecting to Minio instance
connectToMinio
# Create the bucket
createBucket bucket none false
config.json: |-
{
"version": "26",
"credential": {
"accessKey": "DR06",
"secretKey": "lt0ZxGkw"
},
"region": "us-east-1",
"browser": "on",
"worm": "off",
"domain": "",
"storageclass": {
"standard": "",
"rrs": ""
},
"cache": {
"drives": [],
"expiry": 90,
"maxuse": 80,
"exclude": []
},
"notify": {
"amqp": {
"1": {
"enable": false,
"url": "",
"exchange": "",
"routingKey": "",
"exchangeType": "",
"deliveryMode": 0,
"mandatory": false,
"immediate": false,
"durable": false,
"internal": false,
"noWait": false,
"autoDeleted": false
}
},
"nats": {
"1": {
"enable": false,
"address": "",
"subject": "",
"username": "",
"password": "",
"token": "",
"secure": false,
"pingInterval": 0,
"streaming": {
"enable": false,
"clusterID": "",
"clientID": "",
"async": false,
"maxPubAcksInflight": 0
}
}
},
"elasticsearch": {
"1": {
"enable": false,
"format": "namespace",
"url": "",
"index": ""
}
},
"redis": {
"1": {
"enable": false,
"format": "namespace",
"address": "",
"password": "",
"key": ""
}
},
"postgresql": {
"1": {
"enable": false,
"format": "namespace",
"connectionString": "",
"table": "",
"host": "",
"port": "",
"user": "",
"password": "",
"database": ""
}
},
"kafka": {
"1": {
"enable": false,
"brokers": null,
"topic": ""
}
},
"webhook": {
"1": {
"enable": false,
"endpoint": ""
}
},
"mysql": {
"1": {
"enable": false,
"format": "namespace",
"dsnString": "",
"table": "",
"host": "",
"port": "",
"user": "",
"password": "",
"database": ""
}
},
"mqtt": {
"1": {
"enable": false,
"broker": "",
"topic": "",
"qos": 0,
"clientId": "",
"username": "",
"password": "",
"reconnectInterval": 0,
"keepAliveInterval": 0
}
}
}
}
---
# Source: minio/templates/service.yaml
apiVersion: v1
kind: Service
metadata:
name: RELEASE-NAME-minio
labels:
app: minio
chart: minio-1.7.0
release: RELEASE-NAME
heritage: Tiller
spec:
type: ClusterIP
clusterIP: None
ports:
- name: service
port: 9000
targetPort: 9000
protocol: TCP
selector:
app: minio
release: RELEASE-NAME
---
# Source: minio/templates/statefulset.yaml
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
name: RELEASE-NAME-minio
labels:
app: minio
chart: minio-1.7.0
release: RELEASE-NAME
heritage: Tiller
spec:
serviceName: RELEASE-NAME-minio
replicas: 4
selector:
matchLabels:
app: minio
release: RELEASE-NAME
template:
metadata:
name: RELEASE-NAME-minio
labels:
app: minio
release: RELEASE-NAME
spec:
containers:
- name: minio
image: node1:5000/minio/minio:RELEASE.2018-09-01T00-38-25Z
imagePullPolicy: IfNotPresent
command: [ "/bin/sh",
"-ce",
"cp /tmp/config.json &&
/usr/bin/docker-entrypoint.sh minio -C server
http://RELEASE-NAME-minio-0.RELEASE-NAME-minio.default.svc.cluster.local/export
http://RELEASE-NAME-minio-1.RELEASE-NAME-minio.default.svc.cluster.local/export
http://RELEASE-NAME-minio-2.RELEASE-NAME-minio.default.svc.cluster.local/export
http://RELEASE-NAME-minio-3.RELEASE-NAME-minio.default.svc.cluster.local/export" ]
volumeMounts:
- name: export
mountPath: /export
- name: minio-server-config
mountPath: "/tmp/config.json"
subPath: config.json
- name: minio-config-dir
mountPath:
ports:
- name: service
containerPort: 9000
env:
- name: MINIO_ACCESS_KEY
valueFrom:
secretKeyRef:
name: RELEASE-NAME-minio
key: accesskey
- name: MINIO_SECRET_KEY
valueFrom:
secretKeyRef:
name: RELEASE-NAME-minio
key: secretkey
livenessProbe:
tcpSocket:
port: service
initialDelaySeconds: 5
periodSeconds: 30
timeoutSeconds: 1
successThreshold: 1
failureThreshold: 3
readinessProbe:
tcpSocket:
port: service
periodSeconds: 15
timeoutSeconds: 1
successThreshold: 1
failureThreshold: 3
resources:
requests:
cpu: 250m
memory: 256Mi
volumes:
- name: minio-user
secret:
secretName: RELEASE-NAME-minio
- name: minio-server-config
configMap:
name: RELEASE-NAME-minio
- name: minio-config-dir
emptyDir: {}
volumeClaimTemplates:
- metadata:
name: export
spec:
accessModes: [ "ReadWriteOnce" ]
storageClassName: local-fast
resources:
requests:
storage: 49Gi
---
# Source: minio/templates/ingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: RELEASE-NAME-minio
labels:
app: minio
chart: minio-1.7.0
release: RELEASE-NAME
heritage: Tiller
annotations:
nginx.ingress.kubernetes.io/affinity: cookie
nginx.ingress.kubernetes.io/session-cookie-hash: sha1
nginx.ingress.kubernetes.io/session-cookie-name: route
spec:
tls:
- hosts:
- minio.sample.com
secretName: tls-secret
rules:
- host: minio.sample.com
http:
paths:
- path: /
backend:
serviceName: RELEASE-NAME-minio
servicePort: 9000
I suspect you are not getting the physical volume. Check your kube-controller-manager logs on your active master; this will vary depending on the cloud you are using (AWS, GCP, Azure, OpenStack, etc.). The kube-controller-manager usually runs in a Docker container on the master, so you can do something like:
docker logs <kube-controller-manager-container>
Also, check:
kubectl get pvc
kubectl get pv
Hope it helps.
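In this chart the claims come from the volumeClaimTemplates entry named export, so there should be one claim per replica (assuming the usual <template>-<statefulset>-<ordinal> naming):
kubectl get pvc
# e.g. export-RELEASE-NAME-minio-0 ... export-RELEASE-NAME-minio-3 should all be Bound
kubectl describe pvc export-RELEASE-NAME-minio-0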
A bit more digging gave me the answer: the StatefulSet was deployed, but the pods were not created.
kubectl describe statefulset -n <namespace> minio
The log said it was looking for a mount path, which was "" (in previous versions of the charts); changing it solved my issue.
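For illustration, the empty mountPath on minio-config-dir in the StatefulSet above is the line in question; a guess at the corrected mount, assuming the config dir should be Minio's default as in later chart versions:
- name: minio-config-dir
  mountPath: /root/.minio/    # was "" in the rendered chart above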

Kubernetes using secrets in pod

I have a Spring Boot app image which needs the following property:
server.ssl.keyStore=/certs/keystore.jks
I am loading the keystore file into a Secret using the below command:
kubectl create secret generic ssl-keystore-cert --from-file=./server-ssl.jks
I use the below Secret reference in my deployment.yaml:
{
  "name": "SERVER_SSL_KEYSTORE",
  "valueFrom": {
    "secretKeyRef": {
      "name": "ssl-keystore-cert",
      "key": "server-ssl.jks"
    }
  }
}
With the above reference, I am getting the below error.
Error: failed to start container "app-service": Error response from
daemon: oci runtime error: container_linux.go:265: starting container
process caused "process_linux.go:368: container init caused \"setenv:
invalid argument\"" Back-off restarting failed container
If I go with the volume mount option:
"spec": {
"volumes": [
{
"name": "keystore-cert",
"secret": {
"secretName": "ssl-keystore-cert",
"items": [
{
"key": "server-ssl.jks",
"path": "keycerts"
}
]
}
}
],
"containers": [
{
"env": [
{
"name": "JAVA_OPTS",
"value": "-Dserver.ssl.keyStore=/certs/keystore/keycerts"
}
],
"name": "app-service",
"ports": [
{
"containerPort": 8080,
"protocol": "TCP"
}
],
"volumeMounts": [
{
"name": "keystore-cert",
"mountPath": "/certs/keystore"
}
],
"imagePullPolicy": "IfNotPresent"
}
]
I am getting the below error with the above approach.
Caused by: java.lang.IllegalArgumentException: Resource location must
not be null at
org.springframework.util.Assert.notNull(Assert.java:134)
~[spring-core-4.3.7.RELEASE.jar!/:4.3.7.RELEASE] at
org.springframework.util.ResourceUtils.getURL(ResourceUtils.java:131)
~[spring-core-4.3.7.RELEASE.jar!/:4.3.7.RELEASE] at
org.springframework.boot.context.embedded.jetty.JettyEmbeddedServletContainerFactory.configureSslKeyStore(JettyEmbeddedServletContainerFactory.java:301)
~[spring-boot-1.4.5.RELEASE.jar!/:1.4.5.RELEASE]
I tried with the below option also, instead of JAVA_OPTS,
{
"name": "SERVER_SSL_KEYSTORE",
"value": "/certs/keystore/keycerts"
}
Still, the error is the same.
Not sure what the right approach is.
I tried to reproduce the situation with your configuration. I created a secret using the command:
kubectl create secret generic ssl-keystore-cert --from-file=./server-ssl.jks
I used this YAML as a test environment:
apiVersion: v1
kind: Pod
metadata:
  name: busybox
  namespace: default
spec:
  containers:
  - image: busybox
    command:
    - sleep
    - "3600"
    imagePullPolicy: IfNotPresent
    name: busybox
    env:
    - name: JAVA_OPTS
      value: "-Dserver.ssl.keyStore=/certs/keystore/server-ssl.jks"
    ports:
    - containerPort: 8080
      protocol: TCP
    volumeMounts:
    - name: secret-volume
      readOnly: true
      mountPath: "/certs/keystore"
  volumes:
  - name: secret-volume
    secret:
      secretName: ssl-keystore-cert
As you can see, I used the "server-ssl.jks" file name in the variable. If you create a secret from a file, Kubernetes stores that file in the secret under its file name as the key. When you mount the secret anywhere, that file is what gets written there. You tried to use /certs/keystore/keycerts, but it doesn't exist, which is what you see in the logs:
Resource location must not be null at org.springframework.util.Assert.notNull
because your mounted secret is at /certs/keystore/keycerts/server-ssl.jks.
It should work; just fix the paths.
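In other words, pick one combination and keep the pieces consistent, for example (a sketch based on the question's names, without the items remapping):
# In the container:
volumeMounts:
- name: keystore-cert
  mountPath: /certs/keystore
  readOnly: true
# In the pod spec:
volumes:
- name: keystore-cert
  secret:
    secretName: ssl-keystore-cert
# The secret key server-ssl.jks then shows up as /certs/keystore/server-ssl.jks,
# so the matching JVM option is:
# -Dserver.ssl.keyStore=/certs/keystore/server-ssl.jks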