Below is my Python script to update a secret so I can deploy to Kubernetes using kubectl. It works fine, but I want to create a Kubernetes CronJob that will run a Docker container to update the secret from within the Kubernetes cluster. How do I do that? The AWS token lasts only 12 hours, so I have to regenerate it from within the cluster so that images can still be pulled if a pod crashes, etc.
Is there an internal API I have access to within Kubernetes?
cmd = """aws ecr get-login --no-include-email --region us-east-1 > aws_token.txt"""
run_bash(cmd)
f = open('aws_token.txt').readlines()
TOKEN = f[0].split(' ')[5]
SECRET_NAME = "%s-ecr-registry" % (self.region)
cmd = """kubectl delete secret --ignore-not-found %s -n %s""" % (SECRET_NAME,namespace)
print (cmd)
run_bash(cmd)
cmd = """kubectl create secret docker-registry %s --docker-server=https://%s.dkr.ecr.%s.amazonaws.com --docker-username=AWS --docker-password="%s" --docker-email="david.montgomery#gmail.com" -n %s """ % (SECRET_NAME,self.aws_account_id,self.region,TOKEN,namespace)
print (cmd)
run_bash(cmd)
cmd = "kubectl describe secrets/%s-ecr-registry -n %s" % (self.region,namespace)
print (cmd)
run_bash(cmd)
cmd = "kubectl get secret %s-ecr-registry -o yaml -n %s" % (self.region,namespace)
print (cmd)
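As for the internal API: kubectl run from inside a pod automatically picks up the pod's ServiceAccount token mounted at /var/run/secrets/kubernetes.io/serviceaccount, so the same commands work in-cluster once a suitable Role is bound. A quick way to verify from inside a running pod (the namespace name is only a placeholder):
# Check the bound ServiceAccount's permissions from inside the pod:
kubectl auth can-i create secrets -n my-namespace
kubectl auth can-i delete secrets -n my-namespace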
As it happens, I literally just got done doing this.
Below is everything you need to set up a CronJob that refreshes your AWS ECR login token and recreates the registry secret every 6 hours. Just replace the {{ variables }} with your own actual values.
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
namespace: {{ namespace }}
name: ecr-cred-helper
rules:
- apiGroups: [""]
resources:
- secrets
- serviceaccounts
- serviceaccounts/token
verbs:
- 'delete'
- 'create'
- 'patch'
- 'get'
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: ecr-cred-helper
namespace: {{ namespace }}
subjects:
- kind: ServiceAccount
name: sa-ecr-cred-helper
namespace: {{ namespace }}
roleRef:
kind: Role
name: ecr-cred-helper
apiGroup: ""
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: sa-ecr-cred-helper
namespace: {{ namespace }}
---
apiVersion: batch/v1beta1
kind: CronJob
metadata:
annotations:
name: ecr-cred-helper
namespace: {{ namespace }}
spec:
concurrencyPolicy: Allow
failedJobsHistoryLimit: 1
jobTemplate:
metadata:
creationTimestamp: null
spec:
template:
metadata:
creationTimestamp: null
spec:
serviceAccountName: sa-ecr-cred-helper
containers:
- command:
- /bin/sh
- -c
- |-
TOKEN=`aws ecr get-login --region ${REGION} --registry-ids ${ACCOUNT} | cut -d' ' -f6`
echo "ENV variables setup done."
kubectl delete secret -n {{ namespace }} --ignore-not-found $SECRET_NAME
kubectl create secret -n {{ namespace }} docker-registry $SECRET_NAME \
--docker-server=https://{{ ECR_REPOSITORY_URL }} \
--docker-username=AWS \
--docker-password="${TOKEN}" \
--docker-email="${EMAIL}"
echo "Secret created by name. $SECRET_NAME"
kubectl patch serviceaccount default -p '{"imagePullSecrets":[{"name":"'$SECRET_NAME'"}]}' -n {{ namespace }}
echo "All done."
env:
- name: AWS_DEFAULT_REGION
value: eu-west-1
- name: AWS_SECRET_ACCESS_KEY
value: '{{ AWS_SECRET_ACCESS_KEY }}'
- name: AWS_ACCESS_KEY_ID
value: '{{ AWS_ACCESS_KEY_ID }}'
- name: ACCOUNT
value: '{{ AWS_ACCOUNT_ID }}'
- name: SECRET_NAME
value: '{{ imagePullSecret }}'
- name: REGION
value: 'eu-west-1'
- name: EMAIL
value: '{{ ANY_EMAIL }}'
image: odaniait/aws-kubectl:latest
imagePullPolicy: IfNotPresent
name: ecr-cred-helper
resources: {}
securityContext:
capabilities: {}
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
dnsPolicy: Default
hostNetwork: true
restartPolicy: Never
schedulerName: default-scheduler
securityContext: {}
terminationGracePeriodSeconds: 30
schedule: 0 */6 * * *
successfulJobsHistoryLimit: 3
suspend: false
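To roll this out, apply all four objects together. Also note that aws ecr get-login was removed in AWS CLI v2, so on images that ship the newer CLI the token line inside the job script needs the v2 command instead. A sketch of both, with the file name purely as an example:
# Apply the Role, RoleBinding, ServiceAccount and CronJob in one shot
# ("ecr-cred-helper.yaml" is just an example file name):
kubectl apply -f ecr-cred-helper.yaml
# AWS CLI v2 equivalent of the token line (get-login-password prints only the
# password, so no cut is needed):
TOKEN=$(aws ecr get-login-password --region ${REGION})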
I'm adding my solution for copying secrets between namespaces using a CronJob, because this was the Stack Overflow answer I was given when searching for secret copying with a CronJob.
In the source namespace, you need to define a Role, RoleBinding and ServiceAccount:
apiVersion: v1
kind: ServiceAccount
metadata:
name: demo-user-user-secret-service-account
namespace: source-namespace
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: demo-user-role
namespace: source-namespace
rules:
- apiGroups: [""]
resources: ["secrets"]
# Secrets you want to have access in your namespace
resourceNames: ["demo-user" ]
verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: demo-user-cron-role-binding
namespace: source-namespace
subjects:
- kind: ServiceAccount
name: demo-user-user-secret-service-account
namespace: source-namespace
roleRef:
kind: Role
name: demo-user-role
apiGroup: ""
and the CronJob definition will look like this:
apiVersion: batch/v1
kind: CronJob
metadata:
name: demo-user-user-secret-copy-cronjob
spec:
schedule: "* * * * *"
concurrencyPolicy: Forbid
failedJobsHistoryLimit: 5
successfulJobsHistoryLimit: 3
startingDeadlineSeconds: 10
jobTemplate:
spec:
template:
spec:
containers:
- name: demo-user-user-secret-copy-cronjob
image: bitnami/kubectl:1.25.4-debian-11-r6
imagePullPolicy: IfNotPresent
command:
- "/bin/bash"
- "-c"
- "kubectl -n source-namespace get secret demo-user -o json | \
jq 'del(.metadata.creationTimestamp, .metadata.uid, .metadata.resourceVersion, .metadata.ownerReferences, .metadata.namespace)' > /tmp/demo-user-secret.json && \
kubectl apply --namespace target-namespace -f /tmp/demo-user-secret.json"
restartPolicy: Never
securityContext:
privileged: false
allowPrivilegeEscalation: true
readOnlyRootFilesystem: true
runAsNonRoot: true
capabilities:
drop: [ "all" ]
serviceAccountName: demo-user-user-secret-service-account
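Before relying on the schedule, you can fire the job once by hand to confirm the RBAC and the jq filter behave as expected (the job name below is arbitrary):
# Trigger a one-off run from the CronJob and inspect its output:
kubectl -n source-namespace create job test-copy --from=cronjob/demo-user-user-secret-copy-cronjob
kubectl -n source-namespace logs job/test-copy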
In the target namespace you also need a Role and RoleBinding so that the CronJob in the source namespace can copy the secrets over.
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
namespace: target-namespace
name: demo-user-role
rules:
- apiGroups: [""]
resources:
- secrets
verbs:
- 'list'
- 'delete'
- 'create'
- 'patch'
- 'get'
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: demo-user-role-binding
namespace: target-namespace
subjects:
- kind: ServiceAccount
name: demo-user-user-secret-service-account
namespace: source-namespace
roleRef:
kind: Role
name: demo-user-role
apiGroup: ""
In the deployment in your target namespace you can then read the secrets as regular files.
apiVersion: apps/v1
kind: Deployment
metadata:
name: my-deployment
spec:
replicas: 1
...
spec:
containers:
- name: my-app
image: [image-name]
volumeMounts:
- name: your-secret
mountPath: /opt/your-secret
readOnly: true
volumes:
- name: your-secret
secret:
secretName: demo-user
items:
- key: ca.crt
path: ca.crt
- key: user.crt
path: user.crt
- key: user.key
path: user.key
- key: user.p12
path: user.p12
- key: user.password
path: user.password
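Once the CronJob has run, a quick check that the copy landed and that the files are visible to the workload (names follow the examples above):
# Confirm the secret exists in the target namespace:
kubectl -n target-namespace get secret demo-user
# List the mounted files from a running pod of the deployment:
kubectl -n target-namespace exec deploy/my-deployment -- ls /opt/your-secret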
Related
I am deploying a GitLab runner in a K8s cluster and I need to pass data (environment, token, url) in the ConfigMap as a Secret. Below are the deployment and ConfigMap manifest files.
deployment.yml
apiVersion: apps/v1
kind: Deployment
metadata:
name: gitlab-runner
namespace: gitlab-runner
spec:
replicas: 1
selector:
matchLabels:
name: gitlab-runner
template:
metadata:
labels:
name: gitlab-runner
spec:
hostNetwork: true
serviceAccountName: default
containers:
- args:
- run
image: gitlab/gitlab-runner:latest
imagePullPolicy: Always
name: gitlab-runner
resources:
requests:
cpu: "100m"
limits:
cpu: "100m"
volumeMounts:
- name: config
mountPath: /etc/gitlab-runner/config.toml
readOnly: true
subPath: config.toml
volumes:
- name: config
configMap:
name: gitlab-runner-config
restartPolicy: Always
config.yml
apiVersion: v1
kind: ConfigMap
metadata:
name: gitlab-runner-config
namespace: gitlab-runner
data:
config.toml: |-
concurrent = 4
[[runners]]
name = "Kubernetes Runner"
url = "gitlab url"
token = "secrettoken"
executor = "kubernetes"
environment =
["TEST_VAR=THIS_IS_TEST_VAR_PRINTING", "SECOUND_TEST_VAR=This_is_2nd_test_var"]
[runners.kubernetes]
namespace = "gitlab-runner"
privileged = true
poll_timeout = 600
cpu_request = "1"
service_cpu_request = "200m"
[[runners.kubernetes.volumes.host_path]]
name = "docker"
mount_path = "/var/run/docker.sock"
host_path = "/var/run/docker.sock"
In the above ConfigMap YAML I need to pass the environment, token and url as a Kubernetes Secret.
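One straightforward option, using only standard Kubernetes rather than anything GitLab-specific, is to keep the whole config.toml in a Secret instead of a ConfigMap, since the token lives inside that file. A sketch, reusing the names above only as examples:
# Create the Secret from the rendered config.toml file:
kubectl -n gitlab-runner create secret generic gitlab-runner-config --from-file=config.toml
# In the deployment, swap the configMap volume for a secret volume:
#   volumes:
#   - name: config
#     secret:
#       secretName: gitlab-runner-config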
Trying to install the Crunchy Data postgres-operator.
My pgo-deploy pod is failing with an error.
I have set up default NFS storage by running the following commands:
# kubectl create -f rbac.yaml
the content is,
apiVersion: v1
kind: ServiceAccount
metadata:
name: nfs-client-provisioner
# replace with namespace where provisioner is deployed
namespace: pgo
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: nfs-client-provisioner-runner
rules:
- apiGroups: [""]
resources: ["nodes"]
verbs: ["get", "list", "watch"]
- apiGroups: [""]
resources: ["persistentvolumes"]
verbs: ["get", "list", "watch", "create", "delete"]
- apiGroups: [""]
resources: ["persistentvolumeclaims"]
verbs: ["get", "list", "watch", "update"]
- apiGroups: ["storage.k8s.io"]
resources: ["storageclasses"]
verbs: ["get", "list", "watch"]
- apiGroups: [""]
resources: ["events"]
verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: run-nfs-client-provisioner
subjects:
- kind: ServiceAccount
name: nfs-client-provisioner
# replace with namespace where provisioner is deployed
namespace: pgo
roleRef:
kind: ClusterRole
name: nfs-client-provisioner-runner
apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: leader-locking-nfs-client-provisioner
# replace with namespace where provisioner is deployed
namespace: pgo
rules:
- apiGroups: [""]
resources: ["endpoints"]
verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: leader-locking-nfs-client-provisioner
# replace with namespace where provisioner is deployed
namespace: pgo
subjects:
- kind: ServiceAccount
name: nfs-client-provisioner
# replace with namespace where provisioner is deployed
namespace: pgo
roleRef:
kind: Role
name: leader-locking-nfs-client-provisioner
apiGroup: rbac.authorization.k8s.io
# kubectl create -f class.yaml
the content:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: managed-nfs-storage
annotations:
storageclass.kubernetes.io/is-default-class: "true"
provisioner: k8s-sigs.io/nfs-subdir-external-provisioner # or choose another name, must match deployment's env PROVISIONER_NAME'
parameters:
archiveOnDelete: "false"
# kubectl create -f deployment.yaml
the content:
apiVersion: apps/v1
kind: Deployment
metadata:
name: nfs-client-provisioner
labels:
app: nfs-client-provisioner
# replace with namespace where provisioner is deployed
namespace: pgo
spec:
replicas: 1
strategy:
type: Recreate
selector:
matchLabels:
app: nfs-client-provisioner
template:
metadata:
labels:
app: nfs-client-provisioner
spec:
serviceAccountName: nfs-client-provisioner
containers:
- name: nfs-client-provisioner
image: k8s.gcr.io/sig-storage/nfs-subdir-external-provisioner:v4.0.2
volumeMounts:
- name: nfs-client-root
mountPath: /persistentvolumes
env:
- name: PROVISIONER_NAME
value: k8s-sigs.io/nfs-subdir-external-provisioner
- name: NFS_SERVER
value: 192.168.10.114
- name: NFS_PATH
value: /var/nfs/general
volumes:
- name: nfs-client-root
nfs:
server: 192.168.10.114
path: /var/nfs/general
Now, when I apply my configuration with # kubectl apply -f postgres-operator.yml:
apiVersion: v1
kind: ServiceAccount
metadata:
name: pgo-deployer-sa
namespace: pgo
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: pgo-deployer-cr
rules:
- apiGroups:
- ''
resources:
- namespaces
verbs:
- get
- list
- create
- patch
- delete
- apiGroups:
- ''
resources:
- pods
verbs:
- list
- apiGroups:
- ''
resources:
- secrets
verbs:
- list
- get
- create
- delete
- apiGroups:
- ''
resources:
- configmaps
- services
- persistentvolumeclaims
verbs:
- get
- create
- delete
- list
- apiGroups:
- ''
resources:
- serviceaccounts
verbs:
- get
- create
- delete
- patch
- list
- apiGroups:
- apps
- extensions
resources:
- deployments
- replicasets
verbs:
- get
- list
- watch
- create
- delete
- apiGroups:
- apiextensions.k8s.io
resources:
- customresourcedefinitions
verbs:
- get
- create
- delete
- apiGroups:
- rbac.authorization.k8s.io
resources:
- clusterroles
- clusterrolebindings
- roles
- rolebindings
verbs:
- get
- create
- delete
- bind
- escalate
- apiGroups:
- rbac.authorization.k8s.io
resources:
- roles
verbs:
- create
- delete
- apiGroups:
- batch
resources:
- jobs
verbs:
- delete
- list
- apiGroups:
- crunchydata.com
resources:
- pgclusters
- pgreplicas
- pgpolicies
- pgtasks
verbs:
- delete
- list
---
apiVersion: v1
kind: ConfigMap
metadata:
name: pgo-deployer-cm
namespace: pgo
data:
values.yaml: |-
# =====================
# Configuration Options
# More info for these options can be found in the docs
# https://access.crunchydata.com/documentation/postgres-operator/latest/installation/configuration/
# =====================
archive_mode: "true"
archive_timeout: "60"
backrest_aws_s3_bucket: ""
backrest_aws_s3_endpoint: ""
backrest_aws_s3_key: ""
backrest_aws_s3_region: ""
backrest_aws_s3_secret: ""
backrest_aws_s3_uri_style: ""
backrest_aws_s3_verify_tls: "true"
backrest_gcs_bucket: ""
backrest_gcs_endpoint: ""
backrest_gcs_key_type: ""
backrest_port: "2022"
badger: "false"
ccp_image_prefix: "registry.developers.crunchydata.com/crunchydata"
ccp_image_pull_secret: ""
ccp_image_pull_secret_manifest: ""
ccp_image_tag: "centos8-13.3-4.7.0"
create_rbac: "true"
crunchy_debug: "false"
db_name: ""
db_password_age_days: "0"
db_password_length: "24"
db_port: "5432"
db_replicas: "0"
db_user: "testuser"
default_instance_memory: "128Mi"
default_pgbackrest_memory: "48Mi"
default_pgbouncer_memory: "24Mi"
default_exporter_memory: "24Mi"
delete_operator_namespace: "false"
delete_watched_namespaces: "false"
disable_auto_failover: "false"
disable_fsgroup: "false"
reconcile_rbac: "true"
exporterport: "9187"
metrics: "false"
namespace: "pgo"
namespace_mode: "dynamic"
pgbadgerport: "10000"
pgo_add_os_ca_store: "false"
pgo_admin_password: "examplepassword"
pgo_admin_perms: "*"
pgo_admin_role_name: "pgoadmin"
pgo_admin_username: "admin"
pgo_apiserver_port: "8443"
pgo_apiserver_url: "https://postgres-operator"
pgo_client_cert_secret: "pgo.tls"
pgo_client_container_install: "false"
pgo_client_install: "true"
pgo_client_version: "4.7.0"
pgo_cluster_admin: "false"
pgo_disable_eventing: "false"
pgo_disable_tls: "false"
pgo_image_prefix: "registry.developers.crunchydata.com/crunchydata"
pgo_image_pull_secret: ""
pgo_image_pull_secret_manifest: ""
pgo_image_tag: "centos8-4.7.0"
pgo_installation_name: "devtest"
pgo_noauth_routes: ""
pgo_operator_namespace: "pgo"
pgo_tls_ca_store: ""
pgo_tls_no_verify: "false"
pod_anti_affinity: "preferred"
pod_anti_affinity_pgbackrest: ""
pod_anti_affinity_pgbouncer: ""
scheduler_timeout: "3600"
service_type: "ClusterIP"
sync_replication: "false"
backrest_storage: "nfsstorage"
backup_storage: "nfsstorage"
primary_storage: "nfsstorage"
replica_storage: "nfsstorage"
pgadmin_storage: "nfsstorage"
wal_storage: ""
storage1_name: "default"
storage1_access_mode: "ReadWriteOnce"
storage1_size: "1G"
storage1_type: "dynamic"
storage2_name: "hostpathstorage"
storage2_access_mode: "ReadWriteMany"
storage2_size: "1G"
storage2_type: "create"
storage3_name: "nfsstorage"
storage3_access_mode: "ReadWriteMany"
storage3_size: "10Gi"
storage3_type: "create"
storage3_supplemental_groups: "65534"
storage4_name: "nfsstoragered"
storage4_access_mode: "ReadWriteMany"
storage4_size: "1G"
storage4_match_labels: "crunchyzone=red"
storage4_type: "create"
storage4_supplemental_groups: "65534"
storage5_name: "storageos"
storage5_access_mode: "ReadWriteOnce"
storage5_size: "5Gi"
storage5_type: "dynamic"
storage5_class: "fast"
storage6_name: "primarysite"
storage6_access_mode: "ReadWriteOnce"
storage6_size: "4G"
storage6_type: "dynamic"
storage6_class: "primarysite"
storage7_name: "alternatesite"
storage7_access_mode: "ReadWriteOnce"
storage7_size: "4G"
storage7_type: "dynamic"
storage7_class: "alternatesite"
storage8_name: "gce"
storage8_access_mode: "ReadWriteOnce"
storage8_size: "300M"
storage8_type: "dynamic"
storage8_class: "standard"
storage9_name: "rook"
storage9_access_mode: "ReadWriteOnce"
storage9_size: "1Gi"
storage9_type: "dynamic"
storage9_class: "rook-ceph-block"
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: pgo-deployer-crb
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: pgo-deployer-cr
subjects:
- kind: ServiceAccount
name: pgo-deployer-sa
namespace: pgo
---
apiVersion: batch/v1
kind: Job
metadata:
name: pgo-deploy
namespace: pgo
spec:
backoffLimit: 0
template:
metadata:
name: pgo-deploy
spec:
serviceAccountName: pgo-deployer-sa
restartPolicy: Never
containers:
- name: pgo-deploy
image: registry.developers.crunchydata.com/crunchydata/pgo-deployer:centos8-4.7.0
imagePullPolicy: IfNotPresent
env:
- name: DEPLOY_ACTION
value: install
volumeMounts:
- name: deployer-conf
mountPath: "/conf"
volumes:
- name: deployer-conf
configMap:
name: pgo-deployer-cm
I get the following error:
# kubectl get pods -n pgo
NAME READY STATUS RESTARTS AGE
nfs-client-provisioner-7d485f5b8d-cnt57 1/1 Running 0 28m
pgo-deploy--1-ppzkw 0/1 Error 0 10m
# kubectl describe pod -n pgo pgo-deploy--1-ppzkw returns the following error:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 9m13s default-scheduler Successfully assigned pgo/pgo-deploy--1-ppzkw to dfsworker1
Normal Pulled 9m11s kubelet Container image "registry.developers.crunchydata.com/crunchydata/pgo-deployer:centos8-4.7.1" already present on machine
Normal Created 9m10s kubelet Created container pgo-deploy
Normal Started 9m10s kubelet Started container pgo-deploy
Warning FailedMount 8m58s (x3 over 9m) kubelet MountVolume.SetUp failed for volume "deployer-conf" : object "pgo"/"pgo-deployer-cm" not registered
I even tried with # kubectl apply -f https://raw.githubusercontent.com/CrunchyData/postgres-operator/v4.7.1/installers/kubectl/postgres-operator.yml
and it gives the same error. # kubectl -n pgo logs -f pgo-deploy--1-ppzkw gives the following error:
TASK [pgo-operator : Create PGClusters CRD] ************************************
fatal: [localhost]: FAILED! => {"changed": true, "cmd": ["kubectl", "create", "-f", "/ansible/postgres-operator/roles/pgo-operator/files/crds/pgclusters-crd.yaml"], "delta": "0:00:02.599141", "end": "2021-08-09 08:24:50.295545", "msg": "non-zero return code", "rc": 1, "start": "2021-08-09 08:24:47.696404", "stderr": "error: unable to recognize \"/ansible/postgres-operator/roles/pgo-operator/files/crds/pgclusters-crd.yaml\": no matches for kind \"CustomResourceDefinition\" in version \"apiextensions.k8s.io/v1beta1\"", "stderr_lines": ["error: unable to recognize \"/ansible/postgres-operator/roles/pgo-operator/files/crds/pgclusters-crd.yaml\": no matches for kind \"CustomResourceDefinition\" in version \"apiextensions.k8s.io/v1beta1\""], "stdout": "", "stdout_lines": []}
PLAY RECAP *********************************************************************
localhost : ok=21 changed=5 unreachable=0 failed=1 skipped=17 rescued=0 ignored=0
Can anyone help me solve this? All my machines are Ubuntu 20.04. It was all working with the same configuration and steps a few days ago, until I deleted the pgo namespace and then repeated all my previous steps.
My Kubernetes version: v1.22.0.
The error you provided says what is wrong:
error: unable to recognize \"/ansible/postgres-operator/roles/pgo-operator/files/crds/pgclusters-crd.yaml\": no matches for kind \"CustomResourceDefinition\" in version \"apiextensions.k8s.io/v1beta1\"
CustomResourceDefinition is no longer served from the beta API:
kubectl explain CustomResourceDefinition
KIND: CustomResourceDefinition
VERSION: apiextensions.k8s.io/v1
Ideally, the maintainer of that operator would already ship up-to-date CustomResourceDefinitions; in your case, the latest copy seems to be available over here.
If your CRD is outdated, there may be other changes you would want to pull in from Crunchy's latest release as well.
Otherwise, you can consider rewriting those objects yourself:
change apiVersion to apiextensions.k8s.io/v1
fix the spec so it complies with the current schema
move spec.additionalPrinterColumns, spec.subresources and spec.validation into the spec.versions array; you also no longer have to define a schema for your resources' metadata, if your CRD configured one.
The new layout would look something like this:
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
name: crname.api-group
spec:
group: api-group
names:
kind: CrName
listKind: CrNameList
plural: crnames
singular: crname
scope: Namespaced
versions:
- name: v1
additionalPrinterColumns:
- name: Age
type: date
jsonPath: .metadata.creationTimestamp
schema:
openAPIV3Schema:
properties:
apiVersion:
type: string
kind:
type: string
spec:
properties:
[...]
type: object
served: true
storage: true
subresources:
status: {}
- name: v1beta1
[...]
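If you do hand-convert the CRDs, the API server itself can validate the result before anything is created; a quick check, with the file name only as an example:
# Server-side dry run: validates the converted CRD against the live API
# without persisting it:
kubectl apply --dry-run=server -f pgclusters-crd.yaml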
I am trying to deploy Consul using a Kubernetes StatefulSet with the following manifest:
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: consul
labels:
app: consul
rules:
- apiGroups: [""]
resources:
- pods
verbs:
- get
- list
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: consul
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: consul
subjects:
- kind: ServiceAccount
name: consul
namespace: dev-ethernet
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: consul
namespace: dev-ethernet
labels:
app: consul
---
apiVersion: v1
kind: Secret
metadata:
name: consul-secret
namespace: dev-ethernet
data:
consul-gossip-encryption-key: "aIRpNkHT/8Tkvf757sj2m5AcRlorWNgzcLI4yLEMx7M="
---
apiVersion: v1
kind: ConfigMap
metadata:
name: consul-config
namespace: dev-ethernet
data:
server.json: |
{
"bind_addr": "0.0.0.0",
"client_addr": "0.0.0.0",
"disable_host_node_id": true,
"data_dir": "/consul/data",
"log_level": "INFO",
"datacenter": "us-west-2",
"domain": "cluster.local",
"ports": {
"http": 8500
},
"retry_join": [
"provider=k8s label_selector=\"app=consul,component=server\""
],
"server": true,
"telemetry": {
"prometheus_retention_time": "5m"
},
"ui": true
}
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: consul
namespace: dev-ethernet
spec:
selector:
matchLabels:
app: consul
component: server
serviceName: consul
podManagementPolicy: Parallel
replicas: 3
updateStrategy:
rollingUpdate:
partition: 0
type: RollingUpdate
template:
metadata:
labels:
app: consul
component: server
annotations:
consul.hashicorp.com/connect-inject: "false"
spec:
serviceAccountName: consul
affinity:
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchExpressions:
- key: app
operator: In
values:
- consul
topologyKey: kubernetes.io/hostname
terminationGracePeriodSeconds: 10
securityContext:
fsGroup: 1000
containers:
- name: consul
image: "consul:1.8"
args:
- "agent"
- "-advertise=$(POD_IP)"
- "-bootstrap-expect=3"
- "-config-file=/etc/consul/config/server.json"
- "-encrypt=$(GOSSIP_ENCRYPTION_KEY)"
env:
- name: POD_IP
valueFrom:
fieldRef:
fieldPath: status.podIP
- name: GOSSIP_ENCRYPTION_KEY
valueFrom:
secretKeyRef:
name: consul-secret
key: consul-gossip-encryption-key
volumeMounts:
- name: data
mountPath: /consul/data
- name: config
mountPath: /etc/consul/config
lifecycle:
preStop:
exec:
command:
- /bin/sh
- -c
- consul leave
ports:
- containerPort: 8500
name: ui-port
- containerPort: 8400
name: alt-port
- containerPort: 53
name: udp-port
- containerPort: 8080
name: http-port
- containerPort: 8301
name: serflan
- containerPort: 8302
name: serfwan
- containerPort: 8600
name: consuldns
- containerPort: 8300
name: server
volumes:
- name: config
configMap:
name: consul-config
volumeClaimTemplates:
- metadata:
name: data
labels:
app: consul
spec:
accessModes: [ "ReadWriteOnce" ]
storageClassName: aws-gp2
resources:
requests:
storage: 3Gi
But I get ==> encrypt has invalid key: illegal base64 data at input byte 1 when the container starts.
I generated consul-gossip-encryption-key locally using docker run -i -t consul keygen.
Does anyone know what's wrong here?
secret.data must be a base64-encoded string.
Try
kubectl create secret generic consul-gossip-encryption-key --from-literal=key="$(docker run -i -t consul keygen)" --dry-run -o=yaml
and replace
apiVersion: v1
kind: Secret
metadata:
name: consul-secret
namespace: dev-ethernet
data:
consul-gossip-encryption-key: "aIRpNkHT/8Tkvf757sj2m5AcRlorWNgzcLI4yLEMx7M="
ref: https://www.consul.io/docs/k8s/helm#v-global-gossipencryption
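The underlying issue is that values under data: are base64-decoded once before being handed to the container, so the already-base64 keygen output arrives as raw bytes that Consul then rejects. Two equivalent fixes, sketched with the names from the manifest above:
# Option 1: let kubectl do the encoding for you
kubectl -n dev-ethernet create secret generic consul-secret \
  --from-literal=consul-gossip-encryption-key="$(docker run --rm consul keygen)"
# Option 2: keep the YAML but use stringData:, which accepts the plain value:
#   stringData:
#     consul-gossip-encryption-key: <output of consul keygen>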
Hi, I have created the following CustomResourceDefinition - crd.yaml:
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: test.demo.k8s.com
namespace: testns
spec:
group: demo.k8s.com
versions:
- name: v1
served: true
storage: true
scope: Namespaced
names:
plural: testpod
singular: testpod
kind: testpod
The corresponding custom resource - cr.yaml - is below:
kind: testpod
metadata:
name: testpodcr
namespace: testns
spec:
containers:
- name: testname
image: test/img:v5.16
env:
- name: TESTING_ON
valueFrom:
configMapKeyRef:
name: kubernetes-config
key: type
volumeMounts:
- name: testvol
mountPath: "/test/vol"
readOnly: true
When I use a client-go program to fetch the spec value of the CR object 'testpodcr', the value comes back as null.
func (c *TestConfigclient) AddNewPodForCR(obj *TestPodConfig) *v1.Pod {
log.Println("logging obj \n", obj.Name) // Prints the name as testpodcr
log.Println("Spec value: \n", obj.Spec) //Prints null
dep := &v1.Pod{
ObjectMeta: meta_v1.ObjectMeta{
//Labels: labels,
GenerateName: "test-pod-",
},
Spec: obj.Spec,
}
return dep
}
Can anyone please help me figure out why the spec value comes back as null?
There is an error in your crd.yaml file. I am getting the following error:
$ kubectl apply -f crd.yaml
The CustomResourceDefinition "test.demo.k8s.com" is invalid: metadata.name: Invalid value: "test.demo.k8s.com": must be spec.names.plural+"."+spec.group
In your configuration the name: test.demo.k8s.com does not match the plural: testpod found in spec.names.
I modified your crd.yaml and now it works:
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: testpods.demo.k8s.com
namespace: testns
spec:
group: demo.k8s.com
versions:
- name: v1
served: true
storage: true
scope: Namespaced
names:
plural: testpods
singular: testpod
kind: Testpod
$ kubectl apply -f crd.yaml
customresourcedefinition.apiextensions.k8s.io/testpods.demo.k8s.com created
After that, your cr.yaml also had to be fixed:
apiVersion: "demo.k8s.com/v1"
kind: Testpod
metadata:
name: testpodcr
namespace: testns
spec:
containers:
- name: testname
image: test/img:v5.16
env:
- name: TESTING_ON
valueFrom:
configMapKeyRef:
name: kubernetes-config
key: type
volumeMounts:
- name: testvol
mountPath: "/test/vol"
readOnly: true
After that I created the namespace testns and finally created the Testpod object successfully:
$ kubectl create namespace testns
namespace/testns created
$ kubectl apply -f cr.yaml
testpod.demo.k8s.com/testpodcr created
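Before going back to client-go, it is worth confirming that the server actually stored the spec; if the command below shows your containers, the remaining problem is likely on the Go side (for example, missing json tags on the Spec field) rather than in the cluster:
# Check that the spec was persisted by the API server:
kubectl -n testns get testpod testpodcr -o yaml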
So I was deploying a new cronjob today and got the following error:
Error: release acs-export-cronjob failed: CronJob.batch "acs-export-cronjob" is invalid: [spec.jobTemplate.spec.template.spec.containers: Required value, spec.jobTemplate.spec.template.spec.restartPolicy: Unsupported value: "Always": supported values: "OnFailure", "Never"]
Here's some output from running Helm on the same chart, no changes made, but with the --debug --dry-run flags:
NAME: acs-export-cronjob
REVISION: 1
RELEASED: Wed Oct 17 14:12:02 2018
CHART: generic-job-0.1.0
USER-SUPPLIED VALUES:
applicationName: users
command: publishAllForRealm
image: <censored>.amazonaws.com/sonic/acs-export:latest
jobAppArgs: ""
jobVmArgs: ""
jobgroup: acs-export-jobs
name: acs-export-cronjob
schedule: 0 * * * *
COMPUTED VALUES:
applicationName: users
command: publishAllForRealm
image: <censored>.amazonaws.com/sonic/acs-export:latest
jobAppArgs: ""
jobVmArgs: ""
jobgroup: acs-export-jobs
name: acs-export-cronjob
resources:
cpu: 100m
memory: 1Gi
schedule: 0 * * * *
sonicNodeGroup: api
springProfiles: export-job
HOOKS:
MANIFEST:
---
# Source: generic-job/templates/rbac.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
name: acs-export-cronjob-sa
---
# Source: generic-job/templates/rbac.yaml
kind: Role
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
name: acs-export-cronjob-manager
rules:
- apiGroups: ["extensions"]
resources: ["deployments"]
verbs: ["get"]
---
# Source: generic-job/templates/rbac.yaml
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
name: acs-export-cronjob-binding
subjects:
- kind: ServiceAccount
name: acs-export-cronjob-sa
roleRef:
kind: Role
name: acs-export-cronjob-manager
apiGroup: rbac.authorization.k8s.io
---
# Source: generic-job/templates/generic-job.yaml
apiVersion: batch/v1beta1
kind: CronJob
metadata:
name: acs-export-cronjob
labels:
app: generic-job
chart: "generic-job-0.1.0"
release: "acs-export-cronjob"
heritage: "Tiller"
spec:
schedule: 0 * * * *
successfulJobsHistoryLimit: 5
failedJobsHistoryLimit: 5
concurrencyPolicy: Forbid
startingDeadlineSeconds: 120
jobTemplate:
spec:
metadata:
name: acs-export-cronjob
labels:
jobgroup: acs-export-jobs
app: generic-job
chart: "generic-job-0.1.0"
release: "acs-export-cronjob"
heritage: "Tiller"
spec:
template:
metadata:
labels:
jobgroup: acs-export-jobs
app: generic-job
chart: "generic-job-0.1.0"
release: "acs-export-cronjob"
heritage: "Tiller"
annotations:
iam.amazonaws.com/role: arn:aws:iam::<censored>:role/k8s-service-role
spec:
restartPolicy: Never #<----------this is not 'Always'!!
serviceAccountName: acs-export-cronjob-sa
tolerations:
- key: sonic-node-group
operator: Equal
value: api
effect: NoSchedule
nodeSelector:
sonic-node-group: api
volumes:
- name: config
emptyDir: {}
initContainers:
- name: "get-users-vmargs-from-deployment"
image: <censored>.amazonaws.com/utils/kubectl-helm:latest
command: ["sh", "-c", "kubectl -n eu1-test get deployment users-vertxapp -o jsonpath=\"{..spec.containers[0].env[?(#.name=='APP_SPECIFIC_VM_ARGS')].value}\" > /config/users-vmargs && cat /config/users-vmargs"]
volumeMounts:
- mountPath: /config
name: config
- name: "get-users-yaml-appconfig-from-deployment"
image: <censored>.amazonaws.com/utils/kubectl-helm:latest
command: ["sh", "-c", "kubectl -n eu1-test get deployment users-vertxapp -o jsonpath=\"{..spec.containers[0].env[?(#.name=='APP_YAML_CONFIG')].value}\" > /config/users-appconfig && cat /config/users-appconfig"]
volumeMounts:
- mountPath: /config
name: config
containers: #<--------this field is not missing!
- image: <censored>.amazonaws.com/sonic/acs-export:latest
imagePullPolicy: Always
name: "users-batch"
command:
- "bash"
- "-c"
- 'APP_SPECIFIC_VM_ARGS="$(cat /config/users-vmargs) " APP_YAML_CONFIG="$(cat /config/users-appconfig)" /vertx-app/startvertx.sh'
env:
- name: FRENV
value: "batch"
- name: STACKNAME
value: eu1-test
- name: SPRING_PROFILES
value: "export-job"
- name: NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
volumeMounts:
- mountPath: /config
name: config
resources:
limit:
cpu: 100m
memory: 1Gi
If you paid attention, you may have noticed line 101 (I added the comment afterwards) in the debug output, which sets restartPolicy to Never, quite the opposite of the Always the error message claims.
You may also have noticed line 126 (again, I added the comment after the fact) of the debug output, where the mandatory field containers is specified, again in direct contradiction of the error message.
What's going on here?
Hah, found it! It was a simple mistake, actually: I had an extra spec: metadata section under jobTemplate which was duplicated. Removing one of the dupes fixed my issue.
I really wish Helm's error messages were more helpful.
The corrected chart looks like:
NAME: acs-export-cronjob
REVISION: 1
RELEASED: Wed Oct 17 14:12:02 2018
CHART: generic-job-0.1.0
USER-SUPPLIED VALUES:
applicationName: users
command: publishAllForRealm
image: <censored>.amazonaws.com/sonic/acs-export:latest
jobAppArgs: ""
jobVmArgs: ""
jobgroup: acs-export-jobs
name: acs-export-cronjob
schedule: 0 * * * *
COMPUTED VALUES:
applicationName: users
command: publishAllForRealm
image: <censored>.amazonaws.com/sonic/acs-export:latest
jobAppArgs: ""
jobVmArgs: ""
jobgroup: acs-export-jobs
name: acs-export-cronjob
resources:
cpu: 100m
memory: 1Gi
schedule: 0 * * * *
sonicNodeGroup: api
springProfiles: export-job
HOOKS:
MANIFEST:
---
# Source: generic-job/templates/rbac.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
name: acs-export-cronjob-sa
---
# Source: generic-job/templates/rbac.yaml
kind: Role
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
name: acs-export-cronjob-manager
rules:
- apiGroups: ["extensions"]
resources: ["deployments"]
verbs: ["get"]
---
# Source: generic-job/templates/rbac.yaml
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
name: acs-export-cronjob-binding
subjects:
- kind: ServiceAccount
name: acs-export-cronjob-sa
roleRef:
kind: Role
name: acs-export-cronjob-manager
apiGroup: rbac.authorization.k8s.io
---
# Source: generic-job/templates/generic-job.yaml
apiVersion: batch/v1beta1
kind: CronJob
metadata:
name: acs-export-cronjob
labels:
app: generic-job
chart: "generic-job-0.1.0"
release: "acs-export-cronjob"
heritage: "Tiller"
spec:
schedule: 0 * * * *
successfulJobsHistoryLimit: 5
failedJobsHistoryLimit: 5
concurrencyPolicy: Forbid
startingDeadlineSeconds: 120
jobTemplate:
spec:
template:
metadata:
labels:
jobgroup: acs-export-jobs
app: generic-job
chart: "generic-job-0.1.0"
release: "acs-export-cronjob"
heritage: "Tiller"
annotations:
iam.amazonaws.com/role: arn:aws:iam::<censored>:role/k8s-service-role
spec:
restartPolicy: Never
serviceAccountName: acs-export-cronjob-sa
tolerations:
- key: sonic-node-group
operator: Equal
value: api
effect: NoSchedule
nodeSelector:
sonic-node-group: api
volumes:
- name: config
emptyDir: {}
initContainers:
- name: "get-users-vmargs-from-deployment"
image: <censored>.amazonaws.com/utils/kubectl-helm:latest
command: ["sh", "-c", "kubectl -n eu1-test get deployment users-vertxapp -o jsonpath=\"{..spec.containers[0].env[?(#.name=='APP_SPECIFIC_VM_ARGS')].value}\" > /config/users-vmargs && cat /config/users-vmargs"]
volumeMounts:
- mountPath: /config
name: config
- name: "get-users-yaml-appconfig-from-deployment"
image: <censored>.amazonaws.com/utils/kubectl-helm:latest
command: ["sh", "-c", "kubectl -n eu1-test get deployment users-vertxapp -o jsonpath=\"{..spec.containers[0].env[?(#.name=='APP_YAML_CONFIG')].value}\" > /config/users-appconfig && cat /config/users-appconfig"]
volumeMounts:
- mountPath: /config
name: config
containers:
- image: <censored>.amazonaws.com/sonic/acs-export:latest
imagePullPolicy: Always
name: "users-batch"
command:
- "bash"
- "-c"
- 'APP_SPECIFIC_VM_ARGS="$(cat /config/users-vmargs) " APP_YAML_CONFIG="$(cat /config/users-appconfig)" /vertx-app/startvertx.sh'
env:
- name: FRENV
value: "batch"
- name: STACKNAME
value: eu1-test
- name: SPRING_PROFILES
value: "export-job"
- name: NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
volumeMounts:
- mountPath: /config
name: config
resources:
limit:
cpu: 100m
memory: 1Gi
This may be due to a formatting error.
Look at the examples here and here.
The structure is
jobTemplate:
spec:
template:
spec:
restartPolicy: Never
As per the provided output, you have spec and restartPolicy at the same level:
jobTemplate:
spec:
template:
spec:
restartPolicy: Never #<----------this is not 'Always'!!
The same with spec.jobTemplate.spec.template.spec.containers
Presumably Helm then falls back to some default values instead of yours.
You can also try generating the YAML file, converting it to JSON, and applying that.
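For that last suggestion, something along these lines surfaces schema problems before Tiller is involved (Helm 2 syntax to match the output above; the chart path and values file are placeholders):
# Render the chart locally, then let kubectl validate it without creating anything:
helm template ./generic-job -f values.yaml > rendered.yaml
kubectl apply --dry-run -f rendered.yaml   # newer kubectl: --dry-run=client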