I am using the Kubernetes JavaScript client (https://github.com/kubernetes-client/javascript) to create a Job, and I have set up a service account for the pod that creates it. However, I am getting this error when executing the job creation:
body: {
kind: 'Status',
apiVersion: 'v1',
metadata: {},
status: 'Failure',
message: 'Job.batch "compiler-job" is invalid: spec.template.spec.containers: Required value',
reason: 'Invalid',
details: {
name: 'compiler-job',
group: 'batch',
kind: 'Job',
causes: [Array]
},
code: 422
},
I am sure there is something wrong with the body options that I am passing to k8sBatchV1Api.createNamespacedJob(), but I am not sure what I am doing wrong. Here is the snippet of the manifest:
const kc = new k8s.KubeConfig();
kc.loadFromCluster();
const k8sBatchV1Api = kc.makeApiClient(k8s.BatchV1Api);
k8sBatchV1Api.createNamespacedJob('default', {
apiVersion: 'batch/v1',
kind: 'Job',
metadata: {
name: 'compiler-job'
},
spec: {
template: {
metadata: {
name: 'compiler-job'
},
spec: {
containers: {
image: 'perl',
name: 'compiler-job',
command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]
},
restartPolicy: "OnFailure"
}
}
}
}).catch((e: any) => console.log(e));
Here is the serviceaccount.yaml file:
apiVersion: v1
kind: ServiceAccount
metadata:
name: create-job
namespace: default
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
namespace: default
name: create-job-role
rules:
- apiGroups: [ "batch", "extensions" ]
resources: [ "jobs" ]
verbs: [ "get", "list", "watch", "create", "update", "patch", "delete" ]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: create-job-rolebinding
namespace: default
subjects:
- kind: ServiceAccount
name: create-job
namespace: default
roleRef:
kind: ClusterRole
name: create-job-role
apiGroup: rbac.authorization.k8s.io
I have been at this for almost a week and have no clue.
I think that is because containers must be a list (the pod spec expects an array of container objects), not a single object. Can you try this?
const kc = new k8s.KubeConfig();
kc.loadFromCluster();
const k8sBatchV1Api = kc.makeApiClient(k8s.BatchV1Api);
k8sBatchV1Api.createNamespacedJob('default', {
apiVersion: 'batch/v1',
kind: 'Job',
metadata: {
name: 'compiler-job'
},
spec: {
template: {
metadata: {
name: 'compiler-job'
},
spec: {
containers: [{
image: 'perl',
name: 'compiler-job',
command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]
}],
restartPolicy: "OnFailure"
}
}
}
}).catch((e: any) => console.log(e));
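If the Job still does not behave after this fix, it is worth confirming that the RBAC actually permits job creation and inspecting what was created; a quick check, assuming kubectl access to the cluster:
# Verify the service account may create Jobs (uses impersonation)
kubectl auth can-i create jobs.batch -n default --as=system:serviceaccount:default:create-job
# Inspect the Job and the pods it spawned (the job controller labels them with job-name)
kubectl get job compiler-job -n default -o yaml
kubectl get pods -n default -l job-name=compiler-job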
My K8s cluster needs persistent storage
This is the pv-claim.yaml file
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: task-pv-claim
spec:
storageClassName: manual
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 30Mi
This is the pv-volume.yaml file
apiVersion: v1
kind: PersistentVolume
metadata:
name: task-pv-volume
labels:
type: local
spec:
storageClassName: manual
capacity:
storage: 50Mi
accessModes:
- ReadWriteOnce
hostPath:
path: "kpi-mon/configurations"
And this is my deployment.yaml file:
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ .Values.name }}-deployment
labels:
app: {{ .Values.name }}
xappRelease: {{ .Release.Name }}
spec:
replicas: 1
selector:
matchLabels:
app: {{ .Values.name }}
template:
metadata:
labels:
app: {{ .Values.name }}
xappRelease: {{ .Release.Name }}
spec:
containers:
- name: {{ .Values.name }}
image: {{ .Values.repository -}}{{- .Values.image -}}:{{- .Values.tag }}
imagePullPolicy: IfNotPresent
ports:
- name: rmr
containerPort: {{ .Values.rmrPort }}
protocol: TCP
- name: rtg
containerPort: {{ .Values.rtgPort }}
protocol: TCP
volumeMounts:
- name: app-cfg
mountPath: {{ .Values.routingTablePath }}{{ .Values.routingTableFile }}
subPath: {{ .Values.routingTableFile }}
- name: app-cfg
mountPath: {{ .Values.routingTablePath }}{{ .Values.vlevelFile }}
subPath: {{ .Values.vlevelFile }}
- name: task-pv-storage
mountPath: "kpi-mon/configurations"
envFrom:
- configMapRef:
name: {{ .Values.name }}-configmap
volumes:
- name: app-cfg
configMap:
name: {{ .Values.name }}-configmap
items:
- key: {{ .Values.routingTableFile }}
path: {{ .Values.routingTableFile }}
- key: {{ .Values.vlevelFile }}
path: {{ .Values.vlevelFile }}
- name: task-pv-storage
persistentVolumeClain:
claimName: task-pv-claim
---
apiVersion: v1
kind: Service
metadata:
name: {{ .Values.name }}-rmr-service
labels:
xappRelease: {{ .Release.Name }}
spec:
selector:
app: {{ .Values.name }}
type: NodePort
ports:
- name: rmr
protocol: TCP
port: {{ .Values.rmrPort }}
targetPort: {{ .Values.rmrPort }}
- name: rtg
protocol: TCP
port: {{ .Values.rtgPort }}
targetPort: {{ .Values.rtgPort }}
I'm trying to map a local directory into the Pod's file system.
I followed this documentation: https://kubernetes.io/docs/tasks/configure-pod-container/configure-persistent-volume-storage/
When I try to load the deployment, an error occurs and I cannot understand what the problem is.
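Two details in the manifests above stand out as likely causes, assuming they are pasted verbatim: persistentVolumeClain in the deployment is a typo for persistentVolumeClaim, and Kubernetes requires hostPath.path and mountPath to be absolute paths, so kpi-mon/configurations would be rejected. A corrected sketch (the absolute path /kpi-mon/configurations is only an example):
# pv-volume.yaml
hostPath:
  path: "/kpi-mon/configurations"
# deployment.yaml
volumeMounts:
- name: task-pv-storage
  mountPath: "/kpi-mon/configurations"
volumes:
- name: task-pv-storage
  persistentVolumeClaim:
    claimName: task-pv-claim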
I have created a job that runs the following command (job-command.sh):
cat << EOF | kubectl apply -f -
{
"apiVersion": "v1",
"kind": "Secret",
"metadata": {
"name": "my-secret-name",
"namespace": "my-namespace"
},
"data": {
"cert": "my-certificate-data"
},
"type": "Opaque"
}
EOF
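Note that values under data must be base64-encoded (plain strings belong under stringData), so "my-certificate-data" would have to be the encoded form. As an alternative sketch, kubectl can do the encoding for you, and piping through apply keeps it idempotent (--dry-run=client needs kubectl 1.18+):
kubectl create secret generic my-secret-name -n my-namespace \
  --from-literal=cert=my-certificate-data \
  --dry-run=client -o yaml | kubectl apply -f -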
The job looks like this:
apiVersion: batch/v1
kind: Job
metadata:
name: "my-name-job1"
namespace: "my-namespace"
labels:
app: "my-name"
chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
release: "{{ .Release.Name }}"
heritage: "{{ .Release.Service }}"
spec:
template:
metadata:
labels:
app: my-name
spec:
serviceAccountName: my-name-sa
restartPolicy: Never
containers:
- name: myrepo
image: "repo:image"
imagePullPolicy: "Always"
command: ["sh", "-c"]
args:
- |-
/job-command.sh
And the RBAC like this:
apiVersion: v1
kind: ServiceAccount
metadata:
name: my-name-sa
namespace: my-namespace
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: my-name-rolebinding
namespace: my-namespace
labels:
app: "{{ .Chart.Name }}"
chart: {{ .Chart.Name }}-{{ .Chart.Version }}
heritage: {{ .Release.Service }}
release: {{ .Release.Name }}
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: my-name-role
subjects:
- kind: ServiceAccount
name: my-name-sa
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: my-name-role
namespace: my-namespace
labels:
app: "{{ .Chart.Name }}"
chart: {{ .Chart.Name }}-{{ .Chart.Version }}
heritage: {{ .Release.Service }}
release: {{ .Release.Name }}
rules:
- apiGroups: ['']
resources: ['secrets']
verbs: ["get", "watch", "list", "create", "update", "patch"]
I'm getting the following issue:
"error: error validating "STDIN": error validating data: the server does not allow access to the requested resource; if you choose to ignore these errors, turn validation off with --validate=false"
I don't understand what's wrong... the service account used by the job should give it access to create secrets...
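Two things worth checking, assuming the manifests are applied exactly as shown: a RoleBinding subject of kind ServiceAccount needs an explicit namespace: my-namespace field, and the message itself comes from kubectl's client-side validation, which fetches the OpenAPI schema from the API server and can fail even when the write itself would be allowed (hence the hint about --validate=false). Impersonation shows what the service account can actually do:
# Check the effective permissions of the job's service account
kubectl auth can-i create secrets -n my-namespace \
  --as=system:serviceaccount:my-namespace:my-name-sa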
I am trying a PoC with Spring Cloud Data Flow streams and have the application running in Pivotal Cloud Foundry. Trying the same in Kubernetes, the Spring Data Flow server dashboard does not load. I debugged the issue and found the root cause: when the dashboard loads, it hits the Skipper REST endpoint /api, which returns a response with the URLs of the other Skipper endpoints, but the returned URLs are all http. How can I force Skipper to return https URLs instead of http? Below is the response when I curl the same endpoint.
C:\>curl -k https://<skipper_url>/api
RESPONSE FROM SKIPPER
{
"_links" : {
"repositories" : {
"href" : "http://<skipper_url>/api/repositories{?page,size,sort}",
"templated" : true
},
"deployers" : {
"href" : "http://<skipper_url>/api/deployers{?page,size,sort}",
"templated" : true
},
"releases" : {
"href" : "http://<skipper_url>/api/releases{?page,size,sort}",
"templated" : true
},
"packageMetadata" : {
"href" : "**http://<skipper_url>/api/packageMetadata{?page,size,sort,projection}**",
"templated" : true
},
"about" : {
"href" : "http://<skipper_url>/api/about"
},
"release" : {
"href" : "http://<skipper_url>/api/release"
},
"package" : {
"href" : "http://<skipper_url>/api/package"
},
"profile" : {
"href" : "http://<skipper_url>/api/profile"
}
}
}
Kubernetes deployment YAML:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: skipper-server-network-policy
spec:
podSelector:
matchLabels:
app: skipper-server
ingress:
- from:
- namespaceSelector:
matchLabels:
gkp_namespace: ingress-nginx
egress:
- {}
policyTypes:
- Ingress
- Egress
---
apiVersion: v1
kind: Secret
metadata:
name: poc-secret
data:
.dockerconfigjson: ewogICJhdXRocyI6
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: skipper-server
labels:
app: skipper-server
spec:
replicas: 1
selector:
matchLabels:
app: skipper-server
template:
metadata:
labels:
app: skipper-server
annotations:
kubernetes.io/psp: nonroot
spec:
containers:
- name: skipper-server
image: <image_path>
imagePullPolicy: Always
ports:
- containerPort: 7577
protocol: TCP
resources:
limits:
cpu: "4"
memory: 2Gi
requests:
cpu: 25m
memory: 1Gi
securityContext:
runAsUser: 99
imagePullSecrets:
- name: poc-secret
serviceAccount: spark
serviceAccountName: spark
---
apiVersion: v1
kind: Service
metadata:
name: skipper-server
labels:
app: skipper-server
spec:
ports:
- port: 80
targetPort: 7577
protocol: TCP
name: http
selector:
app: skipper-server
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: skipper-server
annotations:
ingress.kubernetes.io/ssl-passthrough: "true"
ingress.kubernetes.io/secure-backends: "true"
kubernetes.io/ingress.allow.http: "true"
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
spec:
rules:
- host: "<skipper_url>"
http:
paths:
- path: /
backend:
serviceName: skipper-server
servicePort: 80
tls:
- hosts:
- "<skipper_url>"
SKIPPER APPLICATION.properties
spring.datasource.url=jdbc:h2:mem:testdb
spring.datasource.driverClassName=org.h2.Driver
spring.datasource.username=sa
spring.datasource.password=
spring.server.use-forward-headers=true
The root cause was the Skipper /api endpoint returning http URLs for the /deployers endpoint, with the Kubernetes ingress then trying to redirect and getting blocked with a 308 error. Adding the below to the Skipper env properties fixed the issue.
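For context: Skipper (via Spring HATEOAS) builds those _links from the incoming request, so unless Tomcat is told to trust the proxy's X-Forwarded-Proto header, the links stay http behind a TLS-terminating ingress. The behavior can be reproduced by sending the header manually (the pod address here is illustrative):
# Hit the container port directly; the header simulates the ingress
curl -s http://<skipper-pod-ip>:7577/api -H "X-Forwarded-Proto: https"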
DEPLOYMENT
apiVersion: apps/v1
kind: Deployment
metadata:
name: skipper-server
spec:
  template:
    spec:
      containers:
      - name: skipper-server
        env:
        - name: "server.tomcat.internal-proxies"
          value: ".*"
        - name: "server.use-forward-headers"
          value: "true"
INGRESS
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: skipper-server
annotations:
nginx.ingress.kubernetes.io/ssl-redirect: "false"
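As a side note, on Spring Boot 2.2 and later server.use-forward-headers is deprecated in favor of a single strategy property, so the equivalent application.properties entry would be:
server.forward-headers-strategy=NATIVE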
I am trying to deploy Minio in Kubernetes using the stable Helm chart,
and when I try to check the status of the release
helm status minio
the pod desired capacity is 4, but current is 0
I tried to look at the journalctl logs for any logs from kubelet, but found none.
I have attached all the rendered Helm chart manifests; can someone please point out what I am doing wrong?
---
# Source: minio/templates/secrets.yaml
apiVersion: v1
kind: Secret
metadata:
name: RELEASE-NAME-minio
labels:
app: minio
chart: minio-1.7.0
release: RELEASE-NAME
heritage: Tiller
type: Opaque
data:
accesskey: RFJMVEFEQU1DRjNUQTVVTVhOMDY=
secretkey: bHQwWk9zWmp5MFpvMmxXN3gxeHlFWmF5bXNPUkpLM1VTb3VqeEdrdw==
---
# Source: minio/templates/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: RELEASE-NAME-minio
labels:
app: minio
chart: minio-1.7.0
release: RELEASE-NAME
heritage: Tiller
data:
initialize: |-
#!/bin/sh
set -e ; # Have script exit in the event of a failed command.
# connectToMinio
# Use a check-sleep-check loop to wait for Minio service to be available
connectToMinio() {
ATTEMPTS=0 ; LIMIT=29 ; # Allow 30 attempts
set -e ; # fail if we can't read the keys.
ACCESS=$(cat /config/accesskey) ; SECRET=$(cat /config/secretkey) ;
set +e ; # The connections to minio are allowed to fail.
echo "Connecting to Minio server: http://$MINIO_ENDPOINT:$MINIO_PORT" ;
MC_COMMAND="mc config host add myminio http://$MINIO_ENDPOINT:$MINIO_PORT $ACCESS $SECRET" ;
$MC_COMMAND ;
STATUS=$? ;
until [ $STATUS = 0 ]
do
ATTEMPTS=`expr $ATTEMPTS + 1` ;
echo \"Failed attempts: $ATTEMPTS\" ;
if [ $ATTEMPTS -gt $LIMIT ]; then
exit 1 ;
fi ;
sleep 2 ; # 2 second intervals between attempts
$MC_COMMAND ;
STATUS=$? ;
done ;
set -e ; # reset `e` as active
return 0
}
# checkBucketExists ($bucket)
# Check if the bucket exists, by using the exit code of `mc ls`
checkBucketExists() {
BUCKET=$1
CMD=$(/usr/bin/mc ls myminio/$BUCKET > /dev/null 2>&1)
return $?
}
# createBucket ($bucket, $policy, $purge)
# Ensure bucket exists, purging if asked to
createBucket() {
BUCKET=$1
POLICY=$2
PURGE=$3
# Purge the bucket, if set & exists
# Since PURGE is user input, check explicitly for `true`
if [ $PURGE = true ]; then
if checkBucketExists $BUCKET ; then
echo "Purging bucket '$BUCKET'."
set +e ; # don't exit if this fails
/usr/bin/mc rm -r --force myminio/$BUCKET
set -e ; # reset `e` as active
else
echo "Bucket '$BUCKET' does not exist, skipping purge."
fi
fi
# Create the bucket if it does not exist
if ! checkBucketExists $BUCKET ; then
echo "Creating bucket '$BUCKET'"
/usr/bin/mc mb myminio/$BUCKET
else
echo "Bucket '$BUCKET' already exists."
fi
# At this point, the bucket should exist, skip checking for existence
# Set policy on the bucket
echo "Setting policy of bucket '$BUCKET' to '$POLICY'."
/usr/bin/mc policy $POLICY myminio/$BUCKET
}
# Try connecting to Minio instance
connectToMinio
# Create the bucket
createBucket bucket none false
config.json: |-
{
"version": "26",
"credential": {
"accessKey": "DR06",
"secretKey": "lt0ZxGkw"
},
"region": "us-east-1",
"browser": "on",
"worm": "off",
"domain": "",
"storageclass": {
"standard": "",
"rrs": ""
},
"cache": {
"drives": [],
"expiry": 90,
"maxuse": 80,
"exclude": []
},
"notify": {
"amqp": {
"1": {
"enable": false,
"url": "",
"exchange": "",
"routingKey": "",
"exchangeType": "",
"deliveryMode": 0,
"mandatory": false,
"immediate": false,
"durable": false,
"internal": false,
"noWait": false,
"autoDeleted": false
}
},
"nats": {
"1": {
"enable": false,
"address": "",
"subject": "",
"username": "",
"password": "",
"token": "",
"secure": false,
"pingInterval": 0,
"streaming": {
"enable": false,
"clusterID": "",
"clientID": "",
"async": false,
"maxPubAcksInflight": 0
}
}
},
"elasticsearch": {
"1": {
"enable": false,
"format": "namespace",
"url": "",
"index": ""
}
},
"redis": {
"1": {
"enable": false,
"format": "namespace",
"address": "",
"password": "",
"key": ""
}
},
"postgresql": {
"1": {
"enable": false,
"format": "namespace",
"connectionString": "",
"table": "",
"host": "",
"port": "",
"user": "",
"password": "",
"database": ""
}
},
"kafka": {
"1": {
"enable": false,
"brokers": null,
"topic": ""
}
},
"webhook": {
"1": {
"enable": false,
"endpoint": ""
}
},
"mysql": {
"1": {
"enable": false,
"format": "namespace",
"dsnString": "",
"table": "",
"host": "",
"port": "",
"user": "",
"password": "",
"database": ""
}
},
"mqtt": {
"1": {
"enable": false,
"broker": "",
"topic": "",
"qos": 0,
"clientId": "",
"username": "",
"password": "",
"reconnectInterval": 0,
"keepAliveInterval": 0
}
}
}
}
---
# Source: minio/templates/service.yaml
apiVersion: v1
kind: Service
metadata:
name: RELEASE-NAME-minio
labels:
app: minio
chart: minio-1.7.0
release: RELEASE-NAME
heritage: Tiller
spec:
type: ClusterIP
clusterIP: None
ports:
- name: service
port: 9000
targetPort: 9000
protocol: TCP
selector:
app: minio
release: RELEASE-NAME
---
# Source: minio/templates/statefulset.yaml
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
name: RELEASE-NAME-minio
labels:
app: minio
chart: minio-1.7.0
release: RELEASE-NAME
heritage: Tiller
spec:
serviceName: RELEASE-NAME-minio
replicas: 4
selector:
matchLabels:
app: minio
release: RELEASE-NAME
template:
metadata:
name: RELEASE-NAME-minio
labels:
app: minio
release: RELEASE-NAME
spec:
containers:
- name: minio
image: node1:5000/minio/minio:RELEASE.2018-09-01T00-38-25Z
imagePullPolicy: IfNotPresent
command: [ "/bin/sh",
"-ce",
"cp /tmp/config.json &&
/usr/bin/docker-entrypoint.sh minio -C server
http://RELEASE-NAME-minio-0.RELEASE-NAME-minio.default.svc.cluster.local/export
http://RELEASE-NAME-minio-1.RELEASE-NAME-minio.default.svc.cluster.local/export
http://RELEASE-NAME-minio-2.RELEASE-NAME-minio.default.svc.cluster.local/export
http://RELEASE-NAME-minio-3.RELEASE-NAME-minio.default.svc.cluster.local/export" ]
volumeMounts:
- name: export
mountPath: /export
- name: minio-server-config
mountPath: "/tmp/config.json"
subPath: config.json
- name: minio-config-dir
mountPath:
ports:
- name: service
containerPort: 9000
env:
- name: MINIO_ACCESS_KEY
valueFrom:
secretKeyRef:
name: RELEASE-NAME-minio
key: accesskey
- name: MINIO_SECRET_KEY
valueFrom:
secretKeyRef:
name: RELEASE-NAME-minio
key: secretkey
livenessProbe:
tcpSocket:
port: service
initialDelaySeconds: 5
periodSeconds: 30
timeoutSeconds: 1
successThreshold: 1
failureThreshold: 3
readinessProbe:
tcpSocket:
port: service
periodSeconds: 15
timeoutSeconds: 1
successThreshold: 1
failureThreshold: 3
resources:
requests:
cpu: 250m
memory: 256Mi
volumes:
- name: minio-user
secret:
secretName: RELEASE-NAME-minio
- name: minio-server-config
configMap:
name: RELEASE-NAME-minio
- name: minio-config-dir
emptyDir: {}
volumeClaimTemplates:
- metadata:
name: export
spec:
accessModes: [ "ReadWriteOnce" ]
storageClassName: local-fast
resources:
requests:
storage: 49Gi
---
# Source: minio/templates/ingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: RELEASE-NAME-minio
labels:
app: minio
chart: minio-1.7.0
release: RELEASE-NAME
heritage: Tiller
annotations:
nginx.ingress.kubernetes.io/affinity: cookie
nginx.ingress.kubernetes.io/session-cookie-hash: sha1
nginx.ingress.kubernetes.io/session-cookie-name: route
spec:
tls:
- hosts:
- minio.sample.com
secretName: tls-secret
rules:
- host: minio.sample.com
http:
paths:
- path: /
backend:
serviceName: RELEASE-NAME-minio
servicePort: 9000
I suspect you are not getting the persistent volume. Check your kube-controller-manager logs on your active master. This will vary depending on the cloud you are using: AWS, GCP, Azure, OpenStack, etc. The kube-controller-manager usually runs in a Docker container on the master, so you can do something like:
docker logs <kube-controller-manager-container>
Also, check:
kubectl get pvc
kubectl get pv
Hope it helps.
A bit more digging gave me the answer: the StatefulSet was deployed, but the pods were not created.
kubectl describe statefulset -n <namespace> minio
The events said it was looking for a mount path, which was "" (empty in previous versions of the chart); changing it solved my issue.
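For reference, the empty value is the minio-config-dir mount in the StatefulSet above (the mountPath: line with nothing after it). Any non-empty absolute path lets the pods be created; later chart versions default to the Minio config directory, along the lines of:
- name: minio-config-dir
  mountPath: "/root/.minio/"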