Hi, I have created the following CustomResourceDefinition - crd.yaml:
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: test.demo.k8s.com
namespace: testns
spec:
group: demo.k8s.com
versions:
- name: v1
served: true
storage: true
scope: Namespaced
names:
plural: testpod
singular: testpod
kind: testpod
The corresponding resource is as below - cr.yaml
kind: testpod
metadata:
name: testpodcr
namespace: testns
spec:
containers:
- name: testname
image: test/img:v5.16
env:
- name: TESTING_ON
valueFrom:
configMapKeyRef:
name: kubernetes-config
key: type
volumeMounts:
- name: testvol
mountPath: "/test/vol"
readOnly: true
When I use a client-go program to fetch the spec value of the CR object 'testpodcr', the value comes back as null.
func (c *TestConfigclient) AddNewPodForCR(obj *TestPodConfig) *v1.Pod {
log.Println("logging obj \n", obj.Name) // Prints the name as testpodcr
log.Println("Spec value: \n", obj.Spec) //Prints null
dep := &v1.Pod{
ObjectMeta: meta_v1.ObjectMeta{
//Labels: labels,
GenerateName: "test-pod-",
},
Spec: obj.Spec,
}
return dep
}
Can anyone please help me figure out why the spec value comes back as null?
There is an error in your crd.yaml file. I am getting the following error:
$ kubectl apply -f crd.yaml
The CustomResourceDefinition "test.demo.k8s.com" is invalid: metadata.name: Invalid value: "test.demo.k8s.com": must be spec.names.plural+"."+spec.group
In your configuration, the name test.demo.k8s.com does not match the plural testpod found in spec.names.
I modified your crd.yaml and now it works:
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: testpods.demo.k8s.com
namespace: testns
spec:
group: demo.k8s.com
versions:
- name: v1
served: true
storage: true
scope: Namespaced
names:
plural: testpods
singular: testpod
kind: Testpod
$ kubectl apply -f crd.yaml
customresourcedefinition.apiextensions.k8s.io/testpods.demo.k8s.com created
After that, your cr.yaml also had to be fixed:
apiVersion: "demo.k8s.com/v1"
kind: Testpod
metadata:
name: testpodcr
namespace: testns
spec:
containers:
- name: testname
image: test/img:v5.16
env:
- name: TESTING_ON
valueFrom:
configMapKeyRef:
name: kubernetes-config
key: type
volumeMounts:
- name: testvol
mountPath: "/test/vol"
readOnly: true
After that I created the namespace testns and finally created the Testpod object successfully:
$ kubectl create namespace testns
namespace/testns created
$ kubectl apply -f cr.yaml
testpod.demo.k8s.com/testpodcr created
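If the Spec still comes back as null from client-go after fixing the CRD, it is worth confirming that the API server actually stored the spec before digging into the Go types. A quick check, using the names from the fixed files above, would be:
$ kubectl get testpods.demo.k8s.com testpodcr -n testns -o jsonpath='{.spec}'
If that prints the expected containers block, the usual remaining suspects on the client side are the Go type definition (JSON tags on the Spec field) and out-of-date generated deepcopy/client code.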
Related
I want to patch (overwrite) a list in a Kubernetes manifest with Kustomize.
I am using the patchesStrategicMerge method.
When I patch parameters that are not in a list, the patching works as expected - only the parameters addressed in patch.yaml are replaced, the rest is untouched.
When I patch a list, the whole list is replaced.
How can I replace only specific items in the list and leave the rest of the items untouched?
I found these two resources:
https://github.com/kubernetes-sigs/kustomize/issues/581
https://github.com/kubernetes/community/blob/master/contributors/devel/sig-api-machinery/strategic-merge-patch.md
but wasn't able to derive the desired solution from them.
Example code:
orig-file.yaml
apiVersion: monitoring.coreos.com/v1alpha1
kind: AlertmanagerConfig
metadata:
name: alertmanager-slack-config
namespace: system-namespace
spec:
test: test
other: other-stuff
receivers:
- name: default
slackConfigs:
- name: slack
username: test-user
channel: "#alerts"
sendResolved: true
apiURL:
name: slack-webhook-url
key: address
patch.yaml:
apiVersion: monitoring.coreos.com/v1alpha1
kind: AlertmanagerConfig
metadata:
name: alertmanager-slack-config
namespace: system-namespace
spec:
test: brase-yourself
receivers:
- name: default
slackConfigs:
- name: slack
username: Karl
kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- orig-file.yaml
patchesStrategicMerge:
- patch.yaml
What I get:
apiVersion: monitoring.coreos.com/v1alpha1
kind: AlertmanagerConfig
metadata:
name: alertmanager-slack-config
namespace: system-namespace
spec:
other: other-stuff
receivers:
- name: default
slackConfigs:
- name: slack
username: Karl
test: brase-yourself
What I want:
apiVersion: monitoring.coreos.com/v1alpha1
kind: AlertmanagerConfig
metadata:
name: alertmanager-slack-config
namespace: system-namespace
spec:
other: other-stuff
receivers:
- name: default
slackConfigs:
- name: slack
username: Karl
channel: "#alerts"
sendResolved: true
apiURL:
name: slack-webhook-url
key: address
test: brase-yourself
What you can do is use a JSON patch instead of patchesStrategicMerge, so in your case:
cat <<EOF >./kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- orig-file.yaml
patches:
- path: patch.yaml
target:
group: monitoring.coreos.com
version: v1alpha1
kind: AlertmanagerConfig
name: alertmanager-slack-config
EOF
patch:
cat <<EOF >./patch.yaml
- op: replace
path: /spec/receivers/0/slackConfigs/0/username
value: Karl
EOF
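To verify the result you can render the kustomization locally; with the JSON patch only the addressed path is replaced, so the remaining slackConfigs fields stay untouched:
$ kubectl kustomize .
or, with the standalone binary:
$ kustomize build .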
How can I use a ConfigMap to write cluster node information to a JSON file?
The command below gives me node information:
kubectl get nodes -o jsonpath='{.items[*].status.addresses[?(@.type=="Hostname")].address}'
How can I use a ConfigMap to write the above output to a text file?
You can save the output of the command to any file.
Then use the file, or the data inside it, to create a ConfigMap.
After creating the ConfigMap you can mount it as a file in your deployment/pod.
For example:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
labels:
app: appname
name: appname
namespace: development
spec:
selector:
matchLabels:
app: appname
tier: sometier
template:
metadata:
creationTimestamp: null
labels:
app: appname
tier: sometier
spec:
containers:
- env:
- name: NODE_ENV
value: development
- name: PORT
value: "3000"
- name: SOME_VAR
value: xxx
image: someimage
imagePullPolicy: Always
name: appname
volumeMounts:
- name: your-volume-name
mountPath: "your/path/to/store/the/file"
readOnly: true
volumes:
- name: your-volume-name
configMap:
name: your-configmap-name
items:
- key: your-filename-inside-pod
path: your-filename-inside-pod
I added the following configuration to the deployment:
volumeMounts:
- name: your-volume-name
mountPath: "your/path/to/store/the/file"
readOnly: true
volumes:
- name: your-volume-name
configMap:
name: your-configmap-name
items:
- key: your-filename-inside-pod
path: your-filename-inside-pod
To create the ConfigMap from a file:
kubectl create configmap your-configmap-name --from-file=your-file-path
Or just create the ConfigMap with the output of your command:
apiVersion: v1
kind: ConfigMap
metadata:
name: your-configmap-name
namespace: your-namespace
data:
your-filename-inside-pod: |
output of command
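If you don't want to hand-write that manifest, you can also pipe the command output straight into kubectl create configmap. This is just a sketch reusing the placeholder names above and the jsonpath command from the question:
$ kubectl get nodes -o jsonpath='{.items[*].status.addresses[?(@.type=="Hostname")].address}' \
  | kubectl create configmap your-configmap-name --from-file=your-filename-inside-pod=/dev/stdin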
First, save the output of the kubectl get nodes command into a JSON file:
$ exampleCommand > node-info.json
Then create a proper ConfigMap.
Here is an example:
apiVersion: v1
kind: ConfigMap
metadata:
name: example-config
data:
node-info.json: |
{
"array": [
1,
2
],
"boolean": true,
"number": 123,
"object": {
"a": "egg",
"b": "egg1"
},
"string": "Welcome"
}
Then remember to add the following lines under the spec section of your pod configuration file:
env:
- name: NODE_CONFIG_JSON
valueFrom:
configMapKeyRef:
name: example-config
key: node-info.json
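Putting the pieces together, the flow could look like this (assuming the jsonpath command from the question and the example-config ConfigMap above):
$ kubectl get nodes -o jsonpath='{.items[*].status.addresses[?(@.type=="Hostname")].address}' > node-info.json
$ kubectl create configmap example-config --from-file=node-info.json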
You can also use a PodPreset.
PodPreset is an object that enables you to inject information, e.g. environment variables, into pods at creation time.
Look at the example below:
apiVersion: settings.k8s.io/v1alpha1
kind: PodPreset
metadata:
name: example
spec:
selector:
matchLabels:
app: your-pod
env:
- name: DB_PORT
value: "6379"
envFrom:
- configMapRef:
name: etcd-env-config
but remember that you have to also add:
env:
- name: NODE_CONFIG_JSON
valueFrom:
configMapKeyRef:
name: example-config
key: node-info.json
section to your pod definition, matching your PodPreset and ConfigMap configuration.
You can find more information here: podpreset, pod-preset-configuration.
Below is my Python script to update a secret so I can deploy to Kubernetes using kubectl. It works fine. But I want to create a Kubernetes cron job that will run a Docker container to update the secret from within the Kubernetes cluster. How do I do that? The AWS token lasts only 12 hours, so I have to regenerate it from within the cluster so I can still pull if a pod crashes, etc.
Is there an internal API I have access to within Kubernetes?
cmd = """aws ecr get-login --no-include-email --region us-east-1 > aws_token.txt"""
run_bash(cmd)
f = open('aws_token.txt').readlines()
TOKEN = f[0].split(' ')[5]
SECRET_NAME = "%s-ecr-registry" % (self.region)
cmd = """kubectl delete secret --ignore-not-found %s -n %s""" % (SECRET_NAME,namespace)
print (cmd)
run_bash(cmd)
cmd = """kubectl create secret docker-registry %s --docker-server=https://%s.dkr.ecr.%s.amazonaws.com --docker-username=AWS --docker-password="%s" --docker-email="david.montgomery#gmail.com" -n %s """ % (SECRET_NAME,self.aws_account_id,self.region,TOKEN,namespace)
print (cmd)
run_bash(cmd)
cmd = "kubectl describe secrets/%s-ecr-registry -n %s" % (self.region,namespace)
print (cmd)
run_bash(cmd)
cmd = "kubectl get secret %s-ecr-registry -o yaml -n %s" % (self.region,namespace)
print (cmd)
As it happens I literally just got done doing this.
Below is everything you need to set up a cronjob to roll your AWS docker login token, and then re-login to ECR, every 6 hours. Just replace the {{ variables }} with your own actual values.
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
namespace: {{ namespace }}
name: ecr-cred-helper
rules:
- apiGroups: [""]
resources:
- secrets
- serviceaccounts
- serviceaccounts/token
verbs:
- 'delete'
- 'create'
- 'patch'
- 'get'
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
name: ecr-cred-helper
namespace: {{ namespace }}
subjects:
- kind: ServiceAccount
name: sa-ecr-cred-helper
namespace: {{ namespace }}
roleRef:
kind: Role
name: ecr-cred-helper
apiGroup: ""
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: sa-ecr-cred-helper
namespace: {{ namespace }}
---
apiVersion: batch/v1beta1
kind: CronJob
metadata:
annotations:
name: ecr-cred-helper
namespace: {{ namespace }}
spec:
concurrencyPolicy: Allow
failedJobsHistoryLimit: 1
jobTemplate:
metadata:
creationTimestamp: null
spec:
template:
metadata:
creationTimestamp: null
spec:
serviceAccountName: sa-ecr-cred-helper
containers:
- command:
- /bin/sh
- -c
- |-
TOKEN=`aws ecr get-login --region ${REGION} --registry-ids ${ACCOUNT} | cut -d' ' -f6`
echo "ENV variables setup done."
kubectl delete secret -n {{ namespace }} --ignore-not-found $SECRET_NAME
kubectl create secret -n {{ namespace }} docker-registry $SECRET_NAME \
--docker-server=https://{{ ECR_REPOSITORY_URL }} \
--docker-username=AWS \
--docker-password="${TOKEN}" \
--docker-email="${EMAIL}"
echo "Secret created by name. $SECRET_NAME"
kubectl patch serviceaccount default -p '{"imagePullSecrets":[{"name":"'$SECRET_NAME'"}]}' -n {{ namespace }}
echo "All done."
env:
- name: AWS_DEFAULT_REGION
value: eu-west-1
- name: AWS_SECRET_ACCESS_KEY
value: '{{ AWS_SECRET_ACCESS_KEY }}'
- name: AWS_ACCESS_KEY_ID
value: '{{ AWS_ACCESS_KEY_ID }}'
- name: ACCOUNT
value: '{{ AWS_ACCOUNT_ID }}'
- name: SECRET_NAME
value: '{{ imagePullSecret }}'
- name: REGION
value: 'eu-west-1'
- name: EMAIL
value: '{{ ANY_EMAIL }}'
image: odaniait/aws-kubectl:latest
imagePullPolicy: IfNotPresent
name: ecr-cred-helper
resources: {}
securityContext:
capabilities: {}
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
dnsPolicy: Default
hostNetwork: true
restartPolicy: Never
schedulerName: default-scheduler
securityContext: {}
terminationGracePeriodSeconds: 30
schedule: 0 */6 * * *
successfulJobsHistoryLimit: 3
suspend: false
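For reference, once the manifest is saved (for example as ecr-cred-helper.yaml), applying it and triggering a one-off run to test it could look like this:
$ kubectl apply -f ecr-cred-helper.yaml
$ kubectl create job --from=cronjob/ecr-cred-helper ecr-cred-helper-manual -n {{ namespace }}
$ kubectl get secret {{ imagePullSecret }} -n {{ namespace }}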
I'm adding my solution for copying secrets between namespaces using a CronJob, because this was the Stack Overflow answer I was given when searching for secret copying with a CronJob.
In the source namespace, you need to define a Role, RoleBinding and ServiceAccount:
apiVersion: v1
kind: ServiceAccount
metadata:
name: demo-user-user-secret-service-account
namespace: source-namespace
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: demo-user-role
namespace: source-namespace
rules:
- apiGroups: [""]
resources: ["secrets"]
# Secrets you want to have access in your namespace
resourceNames: ["demo-user" ]
verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: demo-user-cron-role-binding
namespace: source-namespace
subjects:
- kind: ServiceAccount
name: demo-user-user-secret-service-account
namespace: source-namespace
roleRef:
kind: Role
name: demo-user-role
apiGroup: ""
and the CronJob definition will look like this:
apiVersion: batch/v1
kind: CronJob
metadata:
name: demo-user-user-secret-copy-cronjob
spec:
schedule: "* * * * *"
concurrencyPolicy: Forbid
failedJobsHistoryLimit: 5
successfulJobsHistoryLimit: 3
startingDeadlineSeconds: 10
jobTemplate:
spec:
template:
spec:
containers:
- name: demo-user-user-secret-copy-cronjob
image: bitnami/kubectl:1.25.4-debian-11-r6
imagePullPolicy: IfNotPresent
command:
- "/bin/bash"
- "-c"
- "kubectl -n source-namespace get secret demo-user -o json | \
jq 'del(.metadata.creationTimestamp, .metadata.uid, .metadata.resourceVersion, .metadata.ownerReferences, .metadata.namespace)' > /tmp/demo-user-secret.json && \
kubectl apply --namespace target-namespace -f /tmp/demo-user-secret.json"
restartPolicy: Never
securityContext:
privileged: false
allowPrivilegeEscalation: true
readOnlyRootFilesystem: true
runAsNonRoot: true
capabilities:
drop: [ "all" ]
serviceAccountName: demo-user-user-secret-service-account
In the target namespace you also need a Role and RoleBinding so that the CronJob in the source namespace can copy over the secrets.
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
namespace: target-namespace
name: demo-user-role
rules:
- apiGroups: [""]
resources:
- secrets
verbs:
- 'list'
- 'delete'
- 'create'
- 'patch'
- 'get'
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: demo-user-role-binding
namespace: target-namespace
subjects:
- kind: ServiceAccount
name: demo-user-user-secret-service-account
namespace: source-namespace
roleRef:
kind: Role
name: demo-user-role
apiGroup: ""
In your target-namespace deployment you can then read the secret keys as regular files.
apiVersion: apps/v1
kind: Deployment
metadata:
name: my-deployment
spec:
replicas: 1
...
spec:
containers:
- name: my-app
image: [image-name]
volumeMounts:
- name: your-secret
mountPath: /opt/your-secret
readOnly: true
volumes:
- name: your-secret
secret:
secretName: demo-user
items:
- key: ca.crt
path: ca.crt
- key: user.crt
path: user.crt
- key: user.key
path: user.key
- key: user.p12
path: user.p12
- key: user.password
path: user.password
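To check that the copy works before relying on the schedule, you can trigger the CronJob once by hand and look for the secret in the target namespace (assuming the CronJob was created in source-namespace):
$ kubectl create job --from=cronjob/demo-user-user-secret-copy-cronjob secret-copy-test -n source-namespace
$ kubectl get secret demo-user -n target-namespace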
So I was deploying a new cronjob today and got the following error:
Error: release acs-export-cronjob failed: CronJob.batch "acs-export-cronjob" is invalid: [spec.jobTemplate.spec.template.spec.containers: Required value, spec.jobTemplate.spec.template.spec.restartPolicy: Unsupported value: "Always": supported values: "OnFailure", "Never"]
here's some output from running helm on the same chart, no changes made, but with the --debug --dry-run flags:
NAME: acs-export-cronjob
REVISION: 1
RELEASED: Wed Oct 17 14:12:02 2018
CHART: generic-job-0.1.0
USER-SUPPLIED VALUES:
applicationName: users
command: publishAllForRealm
image: <censored>.amazonaws.com/sonic/acs-export:latest
jobAppArgs: ""
jobVmArgs: ""
jobgroup: acs-export-jobs
name: acs-export-cronjob
schedule: 0 * * * *
COMPUTED VALUES:
applicationName: users
command: publishAllForRealm
image: <censored>.amazonaws.com/sonic/acs-export:latest
jobAppArgs: ""
jobVmArgs: ""
jobgroup: acs-export-jobs
name: acs-export-cronjob
resources:
cpu: 100m
memory: 1Gi
schedule: 0 * * * *
sonicNodeGroup: api
springProfiles: export-job
HOOKS:
MANIFEST:
---
# Source: generic-job/templates/rbac.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
name: acs-export-cronjob-sa
---
# Source: generic-job/templates/rbac.yaml
kind: Role
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
name: acs-export-cronjob-manager
rules:
- apiGroups: ["extensions"]
resources: ["deployments"]
verbs: ["get"]
---
# Source: generic-job/templates/rbac.yaml
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
name: acs-export-cronjob-binding
subjects:
- kind: ServiceAccount
name: acs-export-cronjob-sa
roleRef:
kind: Role
name: acs-export-cronjob-manager
apiGroup: rbac.authorization.k8s.io
---
# Source: generic-job/templates/generic-job.yaml
apiVersion: batch/v1beta1
kind: CronJob
metadata:
name: acs-export-cronjob
labels:
app: generic-job
chart: "generic-job-0.1.0"
release: "acs-export-cronjob"
heritage: "Tiller"
spec:
schedule: 0 * * * *
successfulJobsHistoryLimit: 5
failedJobsHistoryLimit: 5
concurrencyPolicy: Forbid
startingDeadlineSeconds: 120
jobTemplate:
spec:
metadata:
name: acs-export-cronjob
labels:
jobgroup: acs-export-jobs
app: generic-job
chart: "generic-job-0.1.0"
release: "acs-export-cronjob"
heritage: "Tiller"
spec:
template:
metadata:
labels:
jobgroup: acs-export-jobs
app: generic-job
chart: "generic-job-0.1.0"
release: "acs-export-cronjob"
heritage: "Tiller"
annotations:
iam.amazonaws.com/role: arn:aws:iam::<censored>:role/k8s-service-role
spec:
restartPolicy: Never #<----------this is not 'Always'!!
serviceAccountName: acs-export-cronjob-sa
tolerations:
- key: sonic-node-group
operator: Equal
value: api
effect: NoSchedule
nodeSelector:
sonic-node-group: api
volumes:
- name: config
emptyDir: {}
initContainers:
- name: "get-users-vmargs-from-deployment"
image: <censored>.amazonaws.com/utils/kubectl-helm:latest
command: ["sh", "-c", "kubectl -n eu1-test get deployment users-vertxapp -o jsonpath=\"{..spec.containers[0].env[?(#.name=='APP_SPECIFIC_VM_ARGS')].value}\" > /config/users-vmargs && cat /config/users-vmargs"]
volumeMounts:
- mountPath: /config
name: config
- name: "get-users-yaml-appconfig-from-deployment"
image: <censored>.amazonaws.com/utils/kubectl-helm:latest
command: ["sh", "-c", "kubectl -n eu1-test get deployment users-vertxapp -o jsonpath=\"{..spec.containers[0].env[?(#.name=='APP_YAML_CONFIG')].value}\" > /config/users-appconfig && cat /config/users-appconfig"]
volumeMounts:
- mountPath: /config
name: config
containers: #<--------this field is not missing!
- image: <censored>.amazonaws.com/sonic/acs-export:latest
imagePullPolicy: Always
name: "users-batch"
command:
- "bash"
- "-c"
- 'APP_SPECIFIC_VM_ARGS="$(cat /config/users-vmargs) " APP_YAML_CONFIG="$(cat /config/users-appconfig)" /vertx-app/startvertx.sh'
env:
- name: FRENV
value: "batch"
- name: STACKNAME
value: eu1-test
- name: SPRING_PROFILES
value: "export-job"
- name: NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
volumeMounts:
- mountPath: /config
name: config
resources:
limit:
cpu: 100m
memory: 1Gi
If you paid attention, you may have noticed the line in the debug output that I marked with a comment afterwards, which sets restartPolicy to Never, quite the opposite of the Always the error message claims. You may also have noticed the other commented line, where the mandatory containers field is clearly specified, again in contradiction to the error message.
What's going on here?
Hah, found it! It was a simple mistake actually: I had an extra metadata section nested under jobTemplate.spec, duplicating the one under template. Removing the duplicate fixed my issue.
I really wish Helm's error messages were more helpful.
The corrected chart looks like:
NAME: acs-export-cronjob
REVISION: 1
RELEASED: Wed Oct 17 14:12:02 2018
CHART: generic-job-0.1.0
USER-SUPPLIED VALUES:
applicationName: users
command: publishAllForRealm
image: <censored>.amazonaws.com/sonic/acs-export:latest
jobAppArgs: ""
jobVmArgs: ""
jobgroup: acs-export-jobs
name: acs-export-cronjob
schedule: 0 * * * *
COMPUTED VALUES:
applicationName: users
command: publishAllForRealm
image: <censored>.amazonaws.com/sonic/acs-export:latest
jobAppArgs: ""
jobVmArgs: ""
jobgroup: acs-export-jobs
name: acs-export-cronjob
resources:
cpu: 100m
memory: 1Gi
schedule: 0 * * * *
sonicNodeGroup: api
springProfiles: export-job
HOOKS:
MANIFEST:
---
# Source: generic-job/templates/rbac.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
name: acs-export-cronjob-sa
---
# Source: generic-job/templates/rbac.yaml
kind: Role
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
name: acs-export-cronjob-manager
rules:
- apiGroups: ["extensions"]
resources: ["deployments"]
verbs: ["get"]
---
# Source: generic-job/templates/rbac.yaml
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
name: acs-export-cronjob-binding
subjects:
- kind: ServiceAccount
name: acs-export-cronjob-sa
roleRef:
kind: Role
name: acs-export-cronjob-manager
apiGroup: rbac.authorization.k8s.io
---
# Source: generic-job/templates/generic-job.yaml
apiVersion: batch/v1beta1
kind: CronJob
metadata:
name: acs-export-cronjob
labels:
app: generic-job
chart: "generic-job-0.1.0"
release: "acs-export-cronjob"
heritage: "Tiller"
spec:
schedule: 0 * * * *
successfulJobsHistoryLimit: 5
failedJobsHistoryLimit: 5
concurrencyPolicy: Forbid
startingDeadlineSeconds: 120
jobTemplate:
spec:
template:
metadata:
labels:
jobgroup: acs-export-jobs
app: generic-job
chart: "generic-job-0.1.0"
release: "acs-export-cronjob"
heritage: "Tiller"
annotations:
iam.amazonaws.com/role: arn:aws:iam::<censored>:role/k8s-service-role
spec:
restartPolicy: Never
serviceAccountName: acs-export-cronjob-sa
tolerations:
- key: sonic-node-group
operator: Equal
value: api
effect: NoSchedule
nodeSelector:
sonic-node-group: api
volumes:
- name: config
emptyDir: {}
initContainers:
- name: "get-users-vmargs-from-deployment"
image: <censored>.amazonaws.com/utils/kubectl-helm:latest
command: ["sh", "-c", "kubectl -n eu1-test get deployment users-vertxapp -o jsonpath=\"{..spec.containers[0].env[?(#.name=='APP_SPECIFIC_VM_ARGS')].value}\" > /config/users-vmargs && cat /config/users-vmargs"]
volumeMounts:
- mountPath: /config
name: config
- name: "get-users-yaml-appconfig-from-deployment"
image: <censored>.amazonaws.com/utils/kubectl-helm:latest
command: ["sh", "-c", "kubectl -n eu1-test get deployment users-vertxapp -o jsonpath=\"{..spec.containers[0].env[?(#.name=='APP_YAML_CONFIG')].value}\" > /config/users-appconfig && cat /config/users-appconfig"]
volumeMounts:
- mountPath: /config
name: config
containers:
- image: <censored>.amazonaws.com/sonic/acs-export:latest
imagePullPolicy: Always
name: "users-batch"
command:
- "bash"
- "-c"
- 'APP_SPECIFIC_VM_ARGS="$(cat /config/users-vmargs) " APP_YAML_CONFIG="$(cat /config/users-appconfig)" /vertx-app/startvertx.sh'
env:
- name: FRENV
value: "batch"
- name: STACKNAME
value: eu1-test
- name: SPRING_PROFILES
value: "export-job"
- name: NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
volumeMounts:
- mountPath: /config
name: config
resources:
limit:
cpu: 100m
memory: 1Gi
This may be due to a formatting error.
Look at the examples here and here.
The structure is
jobTemplate:
spec:
template:
spec:
restartPolicy: Never
As per the provided output, you have spec and restartPolicy on the same line:
jobTemplate:
spec:
template:
spec:
restartPolicy: Never #<----------this is not 'Always'!!
The same applies to spec.jobTemplate.spec.template.spec.containers.
Presumably Helm falls back to some default values instead of yours.
You can also try to generate the YAML file, convert it to JSON, and apply it.
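A quick variant of that idea is to render the chart locally and dry-run apply the output, which lets kubectl validate the rendered manifests without creating anything (the chart path here is just an assumption):
$ helm template ./generic-job --debug > rendered.yaml
$ kubectl apply --dry-run -f rendered.yaml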
I have created a Docker registry as a pod with a service, and login, push and pull are working. But when I try to create a pod that uses an image from this registry, the kubelet can't pull the image from the registry.
My registry pod:
apiVersion: v1
kind: Pod
metadata:
name: registry-docker
labels:
registry: docker
spec:
containers:
- name: registry-docker
image: registry:2
volumeMounts:
- mountPath: /opt/registry/data
name: data
- mountPath: /opt/registry/auth
name: auth
ports:
- containerPort: 5000
env:
- name: REGISTRY_AUTH
value: htpasswd
- name: REGISTRY_AUTH_HTPASSWD_PATH
value: /opt/registry/auth/htpasswd
- name: REGISTRY_AUTH_HTPASSWD_REALM
value: Registry Realm
volumes:
- name: data
hostPath:
path: /opt/registry/data
- name: auth
hostPath:
path: /opt/registry/auth
The pod I would like to create from the registry:
apiVersion: v1
kind: Pod
metadata:
name: nginx
spec:
containers:
- name: nginx
image: 10.96.81.252:5000/nginx:latest
imagePullSecrets:
- name: registrypullsecret
The error I get in my registry logs:
time="2018-08-09T07:17:21Z" level=warning msg="error authorizing
context: basic authentication challenge for realm \"Registry Realm\":
invalid authorization credential" go.version=go1.7.6
http.request.host="10.96.81.252:5000"
http.request.id=655f76a6-ef05-4cdc-a677-d10f70ed557e
http.request.method=GET http.request.remoteaddr="10.40.0.0:59088"
http.request.uri="/v2/" http.request.useragent="docker/18.06.0-ce
go/go1.10.3 git-commit/0ffa825 kernel/4.4.0-130-generic os/linux
arch/amd64 UpstreamClient(Go-http-client/1.1)"
instance.id=ec01566d-5397-4c90-aaac-f56d857d9ae4 version=v2.6.2
10.40.0.0 - - [09/Aug/2018:07:17:21 +0000] "GET /v2/ HTTP/1.1" 401 87 "" "docker/18.06.0-ce go/go1.10.3 git-commit/0ffa825
kernel/4.4.0-130-generic os/linux arch/amd64
UpstreamClient(Go-http-client/1.1)"
The secret I use, created from cat ~/.docker/config.json | base64:
apiVersion: v1
kind: Secret
metadata:
name: registrypullsecret
data:
.dockerconfigjson: ewoJImF1dGhzIjogewoJCSJsb2NhbGhvc3Q6NTAwMCI6IHsKCQkJImF1dGgiOiAiWVdSdGFXNDZaRzlqYTJWeU1USXoiCgkJfQoJfSwKCSJIdHRwSGVhZGVycyI6IHsKCQkiVXNlci1BZ2VudCI6ICJEb2NrZXItQ2xpZW50LzE4LjA2$
type: kubernetes.io/dockerconfigjson
The modification I have made to my default serviceaccount:
cat ./sa.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
creationTimestamp: 2018-08-03T09:49:47Z
name: default
namespace: default
# resourceVersion: "51625"
selfLink: /api/v1/namespaces/default/serviceaccounts/default
uid: 8eecb592-9702-11e8-af15-02f6928eb0b4
secrets:
- name: default-token-rfqfp
imagePullSecrets:
- name: registrypullsecret
file ~/.docker/config.json:
{
"auths": {
"localhost:5000": {
"auth": "YWRtaW46ZG9ja2VyMTIz"
}
},
"HttpHeaders": {
"User-Agent": "Docker-Client/18.06.0-ce (linux)"
}
The auths data has login credentials for "localhost:5000", but your image is at "10.96.81.252:5000/nginx:latest".
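One way to fix that is to create the pull secret directly for the address the kubelet actually uses, instead of re-encoding ~/.docker/config.json. A sketch with placeholder credentials:
$ kubectl create secret docker-registry registrypullsecret \
  --docker-server=10.96.81.252:5000 \
  --docker-username=<registry-user> \
  --docker-password=<registry-password> \
  --docker-email=<any-email>
Alternatively, run docker login 10.96.81.252:5000 first so that ~/.docker/config.json contains an entry for that host, and then re-create the secret from it.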