I create a K8S Secret with Helm in a pre-install hook.
This Secret holds a random password for a database user. When I uninstall the Helm chart I delete the database and the database user, so I'd like to delete the K8S Secret as well.
Everything works fine, except the secret is not deleted after uninstallation.
apiVersion: v1
kind: Secret
metadata:
  name: secret-name
  namespace: sample
  labels:
    app.kubernetes.io/managed-by: "sample"
    app.kubernetes.io/instance: "sample"
    app.kubernetes.io/version: "1.1"
  annotations:
    "helm.sh/hook": pre-install
    "helm.sh/hook-weight": "-1"
type: Opaque
stringData:
  user: "test-user"
  password: {{ randAlphaNum 10 | quote }}
In the documentation, there is the hook-delete-policy annotation, but the possible values are
before-hook-creation
hook-succeeded
hook-failed
In my case, none of the options seem to be right.
How can I automatically delete the K8S Secret at uninstall time?
Why not just use post-delete along with pre-install?
post-delete -- Executes on a deletion request after all of the release's resources have been deleted
annotations:
  "helm.sh/hook": pre-install,post-delete
I am currently doing a PoC on Vault for K8s, but I am having some issues injecting a secret into an example application. I have created a Service Account which is associated with a role, which is then associated with a policy that allows the service account to read secrets.
I have created a secret basic-secret, which I am trying to inject to my example application. The application is then associated with a Service Account. Below you can see the code for deploying the example application (Hello World) and the service account:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: basic-secret
  labels:
    app: basic-secret
spec:
  selector:
    matchLabels:
      app: basic-secret
  replicas: 1
  template:
    metadata:
      annotations:
        vault.hashicorp.com/agent-inject: "true"
        vault.hashicorp.com/tls-skip-verify: "true"
        vault.hashicorp.com/agent-inject-secret-helloworld: "secret/basic-secret/helloworld"
        vault.hashicorp.com/agent-inject-template-helloworld: |
          {{- with secret "secret/basic-secret/helloworld" -}}
          {
            "username" : "{{ .Data.username }}",
            "password" : "{{ .Data.password }}"
          }
          {{- end }}
        vault.hashicorp.com/role: "basic-secret-role"
      labels:
        app: basic-secret
    spec:
      serviceAccountName: basic-secret
      containers:
      - name: app
        image: jweissig/app:0.0.1
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: basic-secret
  labels:
    app: basic-secret
When I describe the pod (kubectl describe pod basic-secret-7d6777cdb8-tlfsw -n vault) of the deployment I get:
Furthermore, for logs (kubectl logs pods/basic-secret-7d6777cdb8-tlfsw vault-agent -n vault) I get:
Error from server (BadRequest): container "vault-agent" in pod "basic-secret-7d6777cdb8-tlfsw" is waiting to start: PodInitializing
I am not sure why the Vault agent is not initializing. If someone has any idea what might be the issue, I would appreciate it a lot!
Best, William.
Grab your container logs with kubectl logs pods/basic-secret-7d6777cdb8-tlfsw vault-agent-init -n vault because container vault-agent-init has to finish first.
Does your policy allow access to that secret (secret/basic-secret/helloworld)?
Did you create your role (basic-secret-role) in k8s auth? In role creation process, you can authorize certain namespaces so that might be a problem.
But let's see those agent-init logs first.
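For reference, the Vault-side policy and role the answer is asking about could look roughly like this (a sketch; the policy name basic-secret-policy is a placeholder, and the path assumes a KV v1 secrets engine mounted at secret/):
# Policy that allows reading the injected secret
vault policy write basic-secret-policy - <<EOF
path "secret/basic-secret/*" {
  capabilities = ["read"]
}
EOF

# Kubernetes auth role binding the service account and namespace to that policy
vault write auth/kubernetes/role/basic-secret-role \
    bound_service_account_names=basic-secret \
    bound_service_account_namespaces=vault \
    policies=basic-secret-policy \
    ttl=1h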
I want to create a post-install,post-upgrade helm hook (a Job to be more precise).
This will need the following RBAC resources (I have already added the corresponding helm-hook annotations)
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: "{{ .Release.Name }}-post-install-role"
  annotations:
    "helm.sh/hook": pre-install,pre-upgrade
    "helm.sh/hook-delete-policy": before-hook-creation,hook-succeeded,hook-failed
rules:
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: "{{ .Release.Name }}-post-install-rolebinding"
  annotations:
    "helm.sh/hook": pre-install,pre-upgrade
    "helm.sh/hook-delete-policy": before-hook-creation,hook-succeeded,hook-failed
subjects:
  - kind: ServiceAccount
    namespace: {{ .Release.Namespace }}
    name: "{{ .Release.Name }}-post-install-sa"
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: "{{ .Release.Name }}-post-install-role"
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: "{{ .Release.Name }}-post-install-sa"
  annotations:
    "helm.sh/hook": pre-install,pre-upgrade
    "helm.sh/hook-delete-policy": before-hook-creation,hook-succeeded,hook-failed
In my corresponding Job spec:
annotations:
  "helm.sh/hook": post-install,post-upgrade
  "helm.sh/hook-delete-policy": before-hook-creation
...
serviceAccountName: "{{ .Release.Name }}-post-install-sa"
I thought that by adding pre- hooks to the RBAC resources, I would make sure these were created before the actual Job, which is a post- hook.
By also setting the hook-delete-policy to before-hook-creation,hook-succeeded,hook-failed, these would also be deleted in all cases (whether the Job failed or succeeded) to avoid having them lying around for security considerations.
However, the Job creation errors out because it is unable to find the ServiceAccount:
error looking up service account elastic/elastic-stack-post-install-sa: serviceaccount "elastic-stack-post-install-sa" not found
Why is that?
Try using hook weights to ensure a deterministic order. Helm loads the hook with the lowest weight first (negative to positive):
"helm.sh/hook-weight": "0"
Example: create the service account with the lowest weight, as sketched below.
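A sketch of what that could look like, assuming the RBAC resources are moved to the same hook events as the Job (weights only order hooks that fire for the same event); names reuse the templates from the question:
# ServiceAccount (and likewise Role/RoleBinding): same hook events as the Job, lower weight
metadata:
  name: "{{ .Release.Name }}-post-install-sa"
  annotations:
    "helm.sh/hook": post-install,post-upgrade
    "helm.sh/hook-weight": "-5"
    "helm.sh/hook-delete-policy": before-hook-creation,hook-succeeded,hook-failed

# Job: same hook events, higher weight, so it starts after the RBAC resources exist
metadata:
  annotations:
    "helm.sh/hook": post-install,post-upgrade
    "helm.sh/hook-weight": "1"
    "helm.sh/hook-delete-policy": before-hook-creation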
As PGS suggested, the "helm.sh/hook-weight" annotation is the solution here.
Important Notes:
Hook weights can be positive, zero or negative numbers but must be represented as strings.
Example: "helm.sh/hook-weight": "-5" (Note: -5 within double quotes)
When Helm starts the execution cycle of hooks of a particular Kind it will sort those hooks in ascending order.
Hook weights ensure the following:
Hooks execute in the right weight sequence (negative to positive, in ascending order)
Hooks block each other (important for your scenario)
All hooks block the main K8s resources from starting
I have an ArgoCD installation and want to add a GitHub repository to it using SSH access with an SSH key pair, via the declarative DSL.
What I have is:
apiVersion: v1
data:
  sshPrivateKey: <my private ssh key base64 encoded>
  url: <url base64 encoded>
kind: Secret
metadata:
  annotations:
    meta.helm.sh/release-name: argocd-config
    meta.helm.sh/release-namespace: argocd
  creationTimestamp: "2021-06-30T12:39:35Z"
  labels:
    app.kubernetes.io/managed-by: Helm
    argocd.argoproj.io/secret-type: repo-creds
  name: repo-creds
  namespace: argocd
  resourceVersion: "364936"
  selfLink: /api/v1/namespaces/argocd/secrets/repo-creds
  uid: 8ca64883-302b-4a41-aaf6-5277c34dfbfc
type: Opaque
---
apiVersion: v1
data:
  url: <url base64 encoded>
kind: Secret
metadata:
  annotations:
    meta.helm.sh/release-name: argocd-config
    meta.helm.sh/release-namespace: argocd
  creationTimestamp: "2021-06-30T12:39:35Z"
  labels:
    app.kubernetes.io/managed-by: Helm
    argocd.argoproj.io/secret-type: repository
  name: argocd-repo
  namespace: argocd
  resourceVersion: "364935"
  selfLink: /api/v1/namespaces/argocd/secrets/argocd-repo
  uid: 09de56e0-3b0a-4032-8fb5-81b3a6e1899e
type: Opaque
I can manually connect to that GitHub private repo using that SSH key pair, but using the DSL, the repo doesn't appear in the ArgoCD GUI.
In the log of the argocd-repo-server I am getting the error:
time="2021-06-30T14:48:25Z" level=error msg="finished unary call with code Unknown" error="authentication required" grpc.code=Unknown grpc.method=GenerateManifest grpc.request.deadline="2021-06-30T14:49:25Z" grpc.service=repository.RepoServerService grpc.start_time="2021-06-30T14:48:25Z" grpc.time_ms=206.505 span.kind=server system=grpc
I deploy the secrets with helm.
So can anyone point me in the right direction? What am I doing wrong?
I basically followed the declarative documentation under: https://argoproj.github.io/argo-cd/operator-manual/declarative-setup/
Thanks in advance.
Best regards,
rforberger
I am not sure about Helm, since I am working with plain YAML files for now before moving to Helm. You could take a look at this GitHub issue on configuring an SSH key for Helm.
I had this issue when I was working with manifests. The repo config should be in the argocd-cm ConfigMap. The fix was this:
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-cm
  namespace: argocd
  labels:
    app.kubernetes.io/name: argocd-cm
    app.kubernetes.io/part-of: argocd
data:
  repositories: |
    - name: my-test-repo
      url: ssh://git@repo-url/path/to/repo.git
      type: git
      insecure: true                # to skip verification
      insecureIgnoreHostKey: true   # to ignore the host key for SSH
      sshPrivateKeySecret:
        name: private-repo-creds
        key: sshPrivateKey
---
apiVersion: v1
kind: Secret
metadata:
  name: private-repo-creds
  namespace: argocd
  labels:
    argocd.argoproj.io/secret-type: repo-creds
data:
  sshPrivateKey: <my private ssh key base64 encoded>
And I am not sure whether the documentation is correct, because the document in stable is a bit different, although both your link and the stable doc link are for the same version.
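After applying the ConfigMap and Secret, one way to check that ArgoCD picked up the repository (a sketch; it assumes the argocd CLI is installed and logged in, and the file names are placeholders):
kubectl apply -n argocd -f argocd-cm.yaml -f private-repo-creds.yaml
# list the configured repositories and their connection state
argocd repo list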
I had a Helm release whose deployment was not successful. I tried uninstalling it so that I could create a fresh one.
The weird thing I found is that some resources were created partially (a couple of Jobs) because of the failed deployment. Uninstalling the failed release with Helm does not remove those partially created resources, which could cause issues when I try to install the release again with some changes.
My question is: is there a way to ask Helm to completely delete all the resources related to a release?
Since there are no details on the partially created resources, one scenario could be that helm uninstall/delete does not delete the PVCs in the namespace. We resolved this by deploying the application into a separate namespace; when the Helm release is uninstalled/deleted, we delete that namespace as well. For a fresh deployment, create the namespace again and do the Helm installation into it for a clean install. Alternatively, you can set reclaimPolicy to "Delete" (instead of Retain) when creating the StorageClass, as mentioned in the post below.
PVC issue on helm: https://github.com/goharbor/harbor-helm/issues/268#issuecomment-505822451
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: rook-ceph-block
provisioner: ceph.rook.io/block
parameters:
  blockPool: replicapool
  # The value of "clusterNamespace" MUST be the same as the one in which your rook cluster exists
  clusterNamespace: rook-ceph-system
  # Specify the filesystem type of the volume. If not specified, it will use `ext4`.
# Optional; the default reclaimPolicy is "Delete". Other options are "Retain" and "Recycle", as documented at https://kubernetes.io/docs/concepts/storage/storage-classes/
reclaimPolicy: Delete
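The separate-namespace workflow described above might look like this (a sketch; the release, chart and namespace names are placeholders):
# install into a dedicated namespace
kubectl create namespace my-app
helm install my-release ./my-chart --namespace my-app

# tear everything down, including leftover Jobs and PVCs in that namespace
helm uninstall my-release --namespace my-app
kubectl delete namespace my-app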
As you said in the comment, the partially created object is a Job. In Helm there is a concept named hooks, which also run Jobs in different situations, like pre-install, post-install, etc. I think you used one of these.
The YAML of an example is given below. You can set "helm.sh/hook-delete-policy": hook-failed instead of hook-succeeded; then, if the hook fails, the Job will be deleted. For more, please see the official docs on Helm hooks.
apiVersion: batch/v1
kind: Job
metadata:
  name: {{ .Release.Name | quote }}
  labels:
    app.kubernetes.io/managed-by: {{ .Release.Service | quote }}
    app.kubernetes.io/instance: {{ .Release.Name | quote }}
  annotations:
    # This is what defines this resource as a hook. Without this line, the
    # job is considered part of the release.
    "helm.sh/hook": pre-install
    "helm.sh/hook-weight": "-5"
    "helm.sh/hook-delete-policy": hook-succeeded
spec:
  template:
    metadata:
      name: {{ .Release.Name | quote }}
      labels:
        app.kubernetes.io/managed-by: {{ .Release.Service | quote }}
        app.kubernetes.io/instance: {{ .Release.Name | quote }}
    spec:
      restartPolicy: Never
      containers:
      - name: pre-install-job
        image: "ubuntu"
        # command: ["/bin/sleep","{{ default "10" .Values.hook.job.sleepyTime }}"]
        args:
        - /bin/bash
        - -c
        - echo "pre-install hook"
Background
I am using TZCronJob to run cronjobs with timezones in Kubernetes. A sample cronjob.yaml might look like the following (as per the cronjobber docs). Note the timezone specified, the schedule, and kind=TZCronJob:
apiVersion: cronjobber.hidde.co/v1alpha1
kind: TZCronJob
metadata:
  name: hello
spec:
  schedule: "05 09 * * *"
  timezone: "Europe/Amsterdam"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: hello
            image: busybox
            args:
            - /bin/sh
            - -c
            - date; echo "Hello, World!"
          restartPolicy: OnFailure
Normally, with any old CronJob in Kubernetes, you can run kubectl create job test-job --from=cronjob/name_of_my_cronjob, as per the kubectl create job docs.
Error
However, when I try to run it with kubectl create job test-job --from=tzcronjob/name_of_my_cronjob (switching the from command to --from=tzcronjob/) I get:
error: from must be an existing cronjob: no kind "TZCronJob" is registered for version "cronjobber.hidde.co/v1alpha1" in scheme "k8s.io/kubernetes/pkg/kubectl/scheme/scheme.go:28"
When I try to take a peek at https://kubernetes.io/kubernetes/pkg/kubectl/scheme/scheme.go:28 I get 404, not found.
This almost worked, but to no avail:
kubectl create job test-job-name-v1 --image=tzcronjob/name_of_image
How can I create a new one-off job from my chart definition?
In Helm there is a mechanism called hooks.
Helm provides a hook mechanism to allow chart developers to intervene at certain points in a release’s life cycle. For example, you can use hooks to:
Load a ConfigMap or Secret during install before any other charts are loaded
Execute a Job to back up a database before installing a new chart, and then execute a second job after the upgrade in order to restore data
Run a Job before deleting a release to gracefully take a service out of rotation before removing it.
Hooks work like regular templates, but they have special annotations that cause Helm to utilize them differently. In this section, we cover the basic usage pattern for hooks.
Hooks are declared as an annotation in the metadata section of a manifest:
apiVersion: ...
kind: ....
metadata:
  annotations:
    "helm.sh/hook": "pre-install"
If the resource is a Job kind, Tiller will wait until the job successfully runs to completion, and if the job fails, the release will fail. This is a blocking operation, so the Helm client will pause while the Job is run.
HOW TO WRITE HOOKS:
Hooks are just Kubernetes manifest files with special annotations in the metadata section. Because they are template files, you can use all of the normal template features, including reading .Values, .Release, and .Template.
For example, this template, stored in templates/post-install-job.yaml, declares a job to be run on post-install:
apiVersion: batch/v1
kind: Job
metadata:
  name: "{{.Release.Name}}"
  labels:
    app.kubernetes.io/managed-by: {{.Release.Service | quote }}
    app.kubernetes.io/instance: {{.Release.Name | quote }}
    app.kubernetes.io/version: {{ .Chart.AppVersion }}
    helm.sh/chart: "{{.Chart.Name}}-{{.Chart.Version}}"
  annotations:
    # This is what defines this resource as a hook. Without this line, the
    # job is considered part of the release.
    "helm.sh/hook": post-install
    "helm.sh/hook-weight": "-5"
    "helm.sh/hook-delete-policy": hook-succeeded
spec:
  template:
    metadata:
      name: "{{.Release.Name}}"
      labels:
        app.kubernetes.io/managed-by: {{.Release.Service | quote }}
        app.kubernetes.io/instance: {{.Release.Name | quote }}
        helm.sh/chart: "{{.Chart.Name}}-{{.Chart.Version}}"
    spec:
      restartPolicy: Never
      containers:
      - name: post-install-job
        image: "alpine:3.3"
        command: ["/bin/sleep","{{default "10" .Values.sleepyTime}}"]
What makes this template a hook is the annotation:
annotations:
  "helm.sh/hook": post-install
Have you registered the custom resource TZCronJob? You can use kubectl get crd or kubectl api-versions to check.
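For example (a sketch; the CRD name below assumes the default cronjobber install):
# check whether the TZCronJob CRD and its API group are registered
kubectl get crd tzcronjobs.cronjobber.hidde.co
kubectl api-versions | grep cronjobber.hidde.co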
Kubernetes natively supports CronJobs; you don't need a custom resource definition or other third-party objects. Just update the YAML as below and it should work:
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: hello
spec:
  # Note: batch/v1beta1 CronJob has no timezone field; the schedule is
  # interpreted in the timezone of the kube-controller-manager.
  schedule: "05 09 * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: hello
            image: busybox
            args:
            - /bin/sh
            - -c
            - date; echo "Hello, World!"
          restartPolicy: OnFailure
If you want a timezone-aware cron job, then follow the steps below to install cronjobber:
# Install CustomResourceDefinition
$ kubectl apply -f https://raw.githubusercontent.com/hiddeco/cronjobber/master/deploy/crd.yaml
# Setup service account and RBAC
$ kubectl apply -f https://raw.githubusercontent.com/hiddeco/cronjobber/master/deploy/rbac.yaml
# Deploy Cronjobber (using the timezone db from the node)
$ kubectl apply -f https://raw.githubusercontent.com/hiddeco/cronjobber/master/deploy/deploy.yaml
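Since kubectl create job --from only understands the built-in CronJob kind (that is what the scheme error above is saying), one hedged workaround for a one-off run is to hand-copy the TZCronJob's jobTemplate into a plain Job and apply it:
apiVersion: batch/v1
kind: Job
metadata:
  name: test-job
spec:
  template:
    spec:
      containers:
      - name: hello
        image: busybox
        args:
        - /bin/sh
        - -c
        - date; echo "Hello, World!"
      restartPolicy: OnFailure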