How to add '- {}' value with helm --set parameter? - kubernetes

I am stuck on the following issue. I am trying to implement Kubernetes NetworkPolicies via values provided to Helm.
values.yml
...
networkpolicy: []
# Allows all ingress and egress
# - name: my-app
#   podSelector:
#     matchLabels:
#       app: my-app
#   egress:
#     - {}
#   ingress:
#     - {}
...
Running install command:
helm --debug --v 3 --kubeconfig $kubeconf upgrade --install $name \
  $helmchart \
  --set networkpolicy[0].name="my-app" \
  --set networkpolicy[0].podSelector.matchLabels.app="my-app" \
  --set networkpolicy[0].egress[0]="''{}''"
Error message:
...
helm.go:84: [debug] error validating "": error validating data: ValidationError(NetworkPolicy.spec.egress[0]): invalid type for io.k8s.api.networking.v1.NetworkPolicyEgressRule: got "string", expected "map"
...
How can I set the "- {}" value with --set networkpolicy[0].egress[0] ...?
Thanks.
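One workaround, sketched below under the assumption that you run Helm 3.10 or newer (which added --set-json); the release and chart names are placeholders. The root cause is that --set coerces scalar values to strings, so the empty rule arrives as the string "{}" rather than a map. Passing the value as JSON, or moving the list into a values file, preserves the empty map:

```shell
# Option 1: --set-json (Helm >= 3.10) parses its value as JSON, so an
# empty object stays a map instead of becoming the string "{}":
#   helm upgrade --install my-app ./mychart \
#     --set-json 'networkpolicy=[{"name":"my-app","podSelector":{"matchLabels":{"app":"my-app"}},"egress":[{}],"ingress":[{}]}]'

# Option 2: write the list to a values file and pass it with -f,
# which avoids --set's string coercion entirely:
cat > np-values.yaml <<'EOF'
networkpolicy:
  - name: my-app
    podSelector:
      matchLabels:
        app: my-app
    egress:
      - {}
    ingress:
      - {}
EOF
#   helm upgrade --install my-app ./mychart -f np-values.yaml

# Both empty rules survive as maps in the generated file:
grep -c '{}' np-values.yaml
```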

Related

letsencrypt kubernetes: How can i include ClusterIssuer in cert-manager using helm chart instead of deploying it as a separate manifest?

I would like to add SSL support to my web app (WordPress) deployed on Kubernetes. For that I deployed cert-manager using Helm like the following:
helm upgrade \
  cert-manager \
  --namespace cert-manager \
  --version v1.9.1 \
  --set installCRDs=true \
  --set ingressShim.defaultIssuerName=letsencrypt-prod \
  --set ingressShim.defaultIssuerKind=ClusterIssuer \
  --set ingressShim.defaultIssuerGroup=cert-manager.io \
  --create-namespace \
  jetstack/cert-manager --install
Then I deployed WordPress using Helm as well, with a values.yml that looks like:
# Change default svc type
service:
  type: ClusterIP
# Ingress resource
ingress:
  enabled: true
  hostname: app.benighil-mohamed.com
  path: /
  annotations:
    # kubernetes.io/ingress.class: azure/application-gateway
    kubernetes.io/ingress.class: "nginx"
    cert-manager.io/cluster-issuer: letsencrypt-prod
  extraTls:
    - hosts:
        - "{{ .Values.ingress.hostname }}" # i.e. app.benighil-mohamed.com
      secretName: "{{ .Release.Name }}-{{ .Values.ingress.hostname }}" # i.e. wp-app.benighil-mohamed.com
However, when I check certificates and certificaterequests I get the following:
vscode ➜ /workspaces/flux/ingress $ kubectl get certificate -n app -owide
NAME                               READY   SECRET                             ISSUER             STATUS                                         AGE
wp-benighil.benighil-mohamed.com   False   wp-benighil.benighil-mohamed.com   letsencrypt-prod   Issuing certificate as Secret does not exist   25m
vscode ➜ /workspaces/flux/ingress $ kubectl get certificaterequests -n app -owide
NAME                                     APPROVED   DENIED   READY   ISSUER             REQUESTOR                                         STATUS                                                                                             AGE
wp-benighil.benighil-mohamed.com-45d6s   True                False   letsencrypt-prod   system:serviceaccount:cert-manager:cert-manager   Referenced "ClusterIssuer" not found: clusterissuer.cert-manager.io "letsencrypt-prod" not found   27m
Any ideas, please?
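For context: the cert-manager chart's ingressShim flags only set the *default* issuer name for shim-generated Certificates; the chart does not create the ClusterIssuer itself, which is why the request reports it as not found. A minimal letsencrypt-prod ClusterIssuer has to be applied as its own manifest. A sketch, where the email and private-key secret name are placeholders:

```yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    # Production Let's Encrypt endpoint
    server: https://acme-v02.api.letsencrypt.org/directory
    email: admin@example.com                 # placeholder
    privateKeySecretRef:
      name: letsencrypt-prod-account-key    # placeholder
    solvers:
      - http01:
          ingress:
            class: nginx
```

Apply it with kubectl apply -f (ClusterIssuer is cluster-scoped, so no namespace is needed), after which the pending CertificateRequest should be able to resolve the issuer.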

Helm can't pull registry image

After helm upgrade I got this error:
Failed to pull image "myhostofgitlab.ru/common-core-executor:1bac97ef": rpc error: code = Unknown desc = Error response from daemon: Head https://myhostofgitlab.ruv2/common-core-executor/manifests/1bac97ef: denied: access forbidden
My run command:
k8s-deploy-Prod:
  image: alpine/helm:latest
  stage: deploy
  script:
    - helm upgrade ${PREFIX}-common-core-executor k8s/helm/common-core-executor --debug --atomic --install --wait --history-max 3
      --set image.repository=${CI_REGISTRY_IMAGE}/common-core-executor
      --set image.tag=${CI_COMMIT_SHORT_SHA}
      --set name=${PREFIX}-common-core-executor
      --set service.name=${PREFIX}-common-core-executor
      --set branch=${PREFIX}
      --set ingress.enabled=true
      --set ingress.hosts[0].host=${PREFIX}.common-core-executor.k8s.test.zone
      --set ingress.tls[0].hosts[0]=${PREFIX}.common-core-executor.k8s.test.zone
      --set secret.name=${PREFIX}-${PROJECT_NAME}-secret
      --timeout 2m0s
      -f k8s/helm/common-core-executor/common-core-executor-values.yaml
      -n ${NAMESPACE}
Where am I wrong?
Before that error I followed some steps from the official instructions. First I created the credentials like this (it's just sample data):
apiVersion: v1
kind: Secret
data:
  .dockerconfigjson: eyJhdXRocyI6eyJodHRwczovL2hvc3QtZm9yLXN0YWNrLW92ZXJmbG93OnsidXNlcm5hbWUiOiJzdGFja292ZXJmbG93IiwicGFzc3dvcmQiOiJzdGFja292ZXJmbG93IiwiYXV0aCI6Inh4eCJ9fX0=
metadata:
  name: regcred
  namespace: prod-common-service
type: kubernetes.io/dockerconfigjson
And added this in the containers section of deployment.yaml:
imagePullSecrets:
  - name: regcred
Thanks!
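One thing to check here: imagePullSecrets is a pod-level field, a sibling of containers in the pod template spec, not something nested inside the containers section, and the referenced Secret must exist in the same namespace the release is deployed to. A sketch of the relevant part of deployment.yaml, where the container name and image values are placeholders:

```yaml
spec:
  template:
    spec:
      # Pod-level field, alongside (not inside) "containers"
      imagePullSecrets:
        - name: regcred
      containers:
        - name: common-core-executor   # placeholder
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
```

If the secret sits under containers instead, Kubernetes silently ignores it and the kubelet pulls anonymously, which produces exactly this "denied: access forbidden" error.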

Velero + MinIO: Unknown desc = AuthorizationHeaderMalformed: The authorization header is malformed; the region 'us-east-1' is wrong;

I'm getting the issue below. Does anyone have an idea what could be wrong?
user#master-1:~$ kubectl logs -n velero velero-77b544f457-dw4hf
# REMOVED
An error occurred: some backup storage locations are invalid: backup store for location "aws" is invalid: rpc error: code = Unknown desc = AuthorizationHeaderMalformed: The authorization header is malformed; the region 'us-east-1' is wrong; expecting 'us-west-2'
status code: 400, request id: A3Q97JKM6GQRNABA, host id: b6g0on189w6hYgCrId/Xr0BP44pXjZPy2SqK2t7bn/+Ggq9FUY2N3KQHYRcMEuCCHY2L2vfsYEo=; backup store for location "velero" is invalid: rpc error: code = Unknown desc = AuthorizationHeaderMalformed: The authorization header is malformed; the region 'us-east-1' is wrong; expecting 'us-west-2'
status code: 400, request id: YF6DRKN7MYSXVBV4, host id: Y8/Gufd7R0BlZCIZqbJPfdAjVqK8+WLfWoANBDnipDkH421/vGt0Ne2E/yZw2bYf7rfms+rGxsg=
user#master-1:~$
I have installed Velero 1.4.2 with Helm chart:
user#master-1:~$ helm search repo velero --versions | grep -e 2.12.17 -e NAME
NAME                  CHART VERSION   APP VERSION   DESCRIPTION
vmware-tanzu/velero   2.12.17         1.4.2         A Helm chart for velero
user#master-1:~$
I used this command to install:
helm install velero vmware-tanzu/velero --namespace velero --version 2.12.17 -f velero-values.yaml \
--set-file credentials.secretContents.cloud=/home/era/creds-root.txt \
--set configuration.provider=aws \
--set configuration.backupStorageLocation.name=velero \
--set configuration.backupStorageLocation.bucket="velero" \
--set configuration.backupStorageLocation.prefix="" \
--set configuration.backupStorageLocation.config.region="us-east-1" \
--set image.repository=velero/velero \
--set image.tag=v1.4.2 \
--set image.pullPolicy=IfNotPresent \
--set initContainers[0].name=velero-plugin-for-aws \
--set initContainers[0].image=velero/velero-plugin-for-aws:v1.1.0 \
--set initContainers[0].volumeMounts[0].mountPath=/target \
--set initContainers[0].volumeMounts[0].name=plugins \
--replace
My credentials file passed:
$ cat creds-root.txt
[default]
aws_access_key_id=12345678
aws_secret_access_key=12345678
Helm values file:
user#master-1:~$ cat velero-values.yaml
configuration:
  provider: aws
  backupStorageLocation:
    name: minio
    provider: aws
    # caCert: null
    bucket: velero
    config:
      region: us-east-1
credentials:
  useSecret: true
  existingSecret: cloud-credentials
  secretContents: {}
extraEnvVars: {}
backupsEnabled: true
snapshotsEnabled: true
deployRestic: true
MinIO snapshot resource (MinIO is working at 192.168.2.239:9000):
# For MinIO
---
apiVersion: velero.io/v1
kind: VolumeSnapshotLocation
metadata:
  name: minio
  namespace: velero
spec:
  provider: openebs.io/cstor-blockstore
  config:
    bucket: velero
    prefix: cstor
    provider: aws
    # The region where the server is located.
    region: us-east-1
    # Profile for credentials; if not mentioned the plugin will use profile=default
    profile: user1
    # Whether to use path-style addressing instead of virtual-hosted bucket addressing.
    # Set to "true"
    s3ForcePathStyle: "true"
    # S3 URL. By default it will be generated from "region" and "bucket"
    s3Url: http://192.168.2.239:9000
    # You can specify the multiPartChunkSize here for explicitness.
    # multiPartChunkSize can be from 5Mi (5*1024*1024 bytes) to 5Gi
    # For more information: https://docs.min.io/docs/minio-server-limits-per-tenant.html
    # If not set, it will be calculated from the file size
    multiPartChunkSize: 64Mi
    # If MinIO is configured with a custom certificate, it can be passed to the plugin through caCert
    # The value of caCert must be base64 encoded
    # To encode, execute: cat ca.crt | base64 -w 0
    # caCert: LS0tLS1CRU...tRU5EIENFUlRJRklDQVRFLS0tLS0K
    # To disable certificate verification, set insecureSkipTLSVerify to "true"
    # By default insecureSkipTLSVerify is set to "false"
    insecureSkipTLSVerify: "true"
The aws resource which seems to be failing:
$ k get backupstoragelocation -n velero aws -o yaml
apiVersion: velero.io/v1
kind: BackupStorageLocation
metadata:
  annotations:
    helm.sh/hook: post-install,post-upgrade
    helm.sh/hook-delete-policy: before-hook-creation
  creationTimestamp: "2021-04-15T08:23:38Z"
  generation: 3
  labels:
    app.kubernetes.io/instance: velero
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: velero
    helm.sh/chart: velero-2.12.17
  managedFields:
  - apiVersion: velero.io/v1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          .: {}
          f:helm.sh/hook: {}
          f:helm.sh/hook-delete-policy: {}
        f:labels:
          .: {}
          f:app.kubernetes.io/instance: {}
          f:app.kubernetes.io/managed-by: {}
          f:app.kubernetes.io/name: {}
          f:helm.sh/chart: {}
      f:spec:
        .: {}
        f:config:
          .: {}
          f:region: {}
        f:objectStorage:
          .: {}
          f:prefix: {}
        f:provider: {}
    manager: Go-http-client
    operation: Update
    time: "2021-04-15T08:23:38Z"
  - apiVersion: velero.io/v1
    fieldsType: FieldsV1
    fieldsV1:
      f:spec:
        f:objectStorage:
          f:bucket: {}
    manager: kubectl-edit
    operation: Update
    time: "2021-04-15T17:52:46Z"
  name: aws
  namespace: velero
  resourceVersion: "1333724"
  selfLink: /apis/velero.io/v1/namespaces/velero/backupstoragelocations/aws
  uid: a51033b2-e53d-4751-9110-c9649de6aa67
spec:
  config:
    region: us-east-1
  objectStorage:
    bucket: velero
    prefix: backup
  provider: aws
user#master-1:~$
For some reason no plugins are listed:
user#master-1:~$ velero plugin get
user#master-1:~$
Velero is obviously crashing because of the original issue:
user#master-1:~$ kubectl get pods -n velero
NAME                      READY   STATUS             RESTARTS   AGE
restic-nqpsl              1/1     Running            0          7m52s
restic-pw897              1/1     Running            0          7m52s
restic-rtwzd              1/1     Running            0          7m52s
velero-77b544f457-dw4hf   0/1     CrashLoopBackOff   5          5m59s
user#master-1:~$
More resources:
user#master-1:~$ k get BackupStorageLocation -n velero
NAME     PHASE   LAST VALIDATED   AGE
aws                               10h
velero                            11m
user#master-1:~$ k get volumesnapshotlocation -n velero
NAME              AGE
default           11m
minio             39h
velero-snapshot   9h
user#master-1:~$
My MinIO service is started using Docker Compose and working fine:
version: '3.8'
services:
  minio:
    container_name: minio
    hostname: minio
    build:
      context: .
      dockerfile: Dockerfile
    restart: always
    ports:
      - "0.0.0.0:9000:9000"
    environment:
      # ROOT
      MINIO_ACCESS_KEY: 12345678
      MINIO_SECRET_KEY: 12345678
      MINIO_REGION: us-east-1
    command: server --address :9000 /data
    volumes:
      - ./data:/data
Unknown PHASE for backup locations:
user#master-1:~$ velero get backup-locations
NAME     PROVIDER   BUCKET/PREFIX   PHASE     LAST VALIDATED   ACCESS MODE
aws      aws        velero/backup   Unknown   Unknown          ReadWrite
velero   aws        velero          Unknown   Unknown          ReadWrite
user#master-1:~$
Test MinIO access separately:
bash-4.3# AWS_ACCESS_KEY_ID=12345678 AWS_SECRET_ACCESS_KEY=12345678 aws s3api get-bucket-location --endpoint-url http://192.168.2.239:9000 --bucket velero
{
  "LocationConstraint": "us-east-1"
}
bash-4.3#
Secrets are correct:
user#master-1:~$ k get secret -n velero cloud-credentials -o yaml | head -n 4
apiVersion: v1
data:
cloud: W2RlZmF-REMOVED
kind: Secret
user#master-1:~$
user#master-1:~$ k get secret -n velero
NAME                           TYPE                                  DATA   AGE
cloud-credentials              Opaque                                1      91m
default-token-8rwhg            kubernetes.io/service-account-token   3      2d20h
sh.helm.release.v1.velero.v1   helm.sh/release.v1                    1      45m
velero                         Opaque                                0      2d19h
velero-restic-credentials      Opaque                                1      40h
velero-server-token-8zm9k      kubernetes.io/service-account-token   3      45m
user#master-1:~$
The problem was missing configuration:
--set configuration.backupStorageLocation.config.s3Url="http://192.168.2.239:9000" \
--set configuration.backupStorageLocation.config.s3ForcePathStyle=true \
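Putting the fix together, the full working install would look roughly like this (a sketch assembled from the flags above; without s3Url the AWS SDK derives a real amazonaws.com endpoint from the region, which is where the conflicting us-east-1/us-west-2 responses come from):

```shell
# Same install as before, plus the two missing MinIO endpoint settings:
helm install velero vmware-tanzu/velero --namespace velero --version 2.12.17 -f velero-values.yaml \
  --set-file credentials.secretContents.cloud=/home/era/creds-root.txt \
  --set configuration.provider=aws \
  --set configuration.backupStorageLocation.name=velero \
  --set configuration.backupStorageLocation.bucket="velero" \
  --set configuration.backupStorageLocation.config.region="us-east-1" \
  --set configuration.backupStorageLocation.config.s3Url="http://192.168.2.239:9000" \
  --set configuration.backupStorageLocation.config.s3ForcePathStyle=true \
  --set initContainers[0].name=velero-plugin-for-aws \
  --set initContainers[0].image=velero/velero-plugin-for-aws:v1.1.0 \
  --set initContainers[0].volumeMounts[0].mountPath=/target \
  --set initContainers[0].volumeMounts[0].name=plugins
```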

how to convert nginx-ingress annotations to --set format to enable prometheus metrics

I want to set annotations on the command line while installing nginx-ingress. My values.yaml file looks like below, and I want to use command-line arguments instead of the values.yaml file.
controller:
  metrics:
    port: 10254
    enabled: true
    service:
      annotations:
        prometheus.io/scrape: "true"
        prometheus.io/port: "10254"
    serviceMonitor:
      enabled: true
      namespace: monitoring
      namespaceSelector:
        any: true
I tried the following arguments, but it gives an error:
--set controller.metrics.service.annotations."prometheus\.io\/scrape"="true" --set controller.metrics.service.annotations."prometheus\.io\/port"="10254"
Error:
Error: release nginx-ingress failed: Service in version "v1" cannot be handled as a Service: v1.Service.ObjectMeta: v1.ObjectMeta.Annotations: ReadString: expects " or n, but found 1, error found in #10 byte of ...|io/port":10254,"prom|..., bigger context ...|,"metadata":{"annotations":{"prometheus.io/port":10254,"prometheus.io/scrape":true},"labels":{"app.k|...
Any suggestions on how exactly these annotations should be passed?
I just had the same issue! When you look at the chart, they define it as a string, so the command below successfully sets the values. The trick is to use --set-string rather than --set:
helm upgrade ingress-controller ingress-nginx/ingress-nginx --namespace ingress-nginx --set controller.metrics.enabled=true --set-string controller.metrics.service.annotations."prometheus\.io/scrape"="true" --set-string controller.metrics.service.annotations."prometheus\.io/port"="10254"
The values are shown to be set when we validate with helm get values ingress-controller --namespace ingress-nginx:
controller:
  metrics:
    enabled: true
    service:
      annotations:
        prometheus.io/port: "10254"
        prometheus.io/scrape: "true"
Now, if you would like to get the details into Prometheus: this alone did not work for me. I had to use controller.podAnnotations to get it working:
helm upgrade ingress-controller ingress-nginx/ingress-nginx \
--namespace ingress-nginx \
--set controller.metrics.enabled=true \
--set-string controller.podAnnotations."prometheus\.io/scrape"="true" \
--set-string controller.podAnnotations."prometheus\.io/port"="10254"
Let me know if this worked for you! :)

Helm [stable/nginx-ingress] Getting issue while passing headers

Version of Helm and Kubernetes: Helm Client v2.14.1 and Kubernetes 1.13.7-gke.24
Which chart: stable/nginx-ingress [v0.24.1]
What happened: Trying to override headers using --set-string, but it does not work as expected. It always gives issues with the parsing:
/usr/sbin/helm install --name cx-nginx-1 --set controller.name=cx-nginx-1 --set controller.kind=Deployment --set controller.service.loadBalancerIP= --set controller.metrics.enabled=true --set-string 'controller.headers={"X-Different-Name":"true","X-Request-Start":"test-header","X-Using-Nginx-Controller":"true"}' .
Error: release cx-nginx-1 failed: ConfigMap in version "v1" cannot be handled as a ConfigMap: v1.ConfigMap.Data: ReadMapCB: expect { or n, but found [, error found in #10 byte of ...|","data":["\"X-Diffe|..., bigger context ...|{"apiVersion":"v1","data":["\"X-Different-Name\":\"true\"","\"X-Request-Start|...
What you expected to happen: I want to override the headers which are there by default in values.yaml with custom headers.
How to reproduce it (as minimally and precisely as possible):
I have provided the command to reproduce:
helm install --name cx-nginx-1 --set controller.name=cx-nginx-1 --set controller.kind=Deployment --set controller.service.loadBalancerIP= --set controller.metrics.enabled=true --set-string 'controller.headers={"X-Different-Name":"true","X-Request-Start":"test-header","X-Using-Nginx-Controller":"true"}' .
I tried to run in debug mode (--dry-run --debug); it shows me a ConfigMap like below:
apiVersion: v1
kind: ConfigMap
metadata:
  labels:
    app: nginx-ingress
    chart: nginx-ingress-1
    component: "cx-nginx-1"
    heritage: Tiller
    release: foiled-coral
  name: foiled-coral-nginx-ingress-custom-headers
  namespace: cx-ingress
data:
  - X-Different-Name:true
  - X-Request-Start:test-header
  - X-Using-Nginx-Controller:true
It seems like it's adding indent 4 instead of indent 2. I'm also getting the warning below:
Warning: Merging destination map for chart 'nginx-ingress'. Cannot overwrite table item 'headers', with non table value: map[X-Different-Name:true X-Request-Start:test-header X-Using-Nginx-Controller:true]
Kindly help me to pass the headers in the right way.
Note: controller.headers is deprecated; make sure to use controller.proxySetHeaders instead.
Helm's --set has some limitations.
Your best option is to avoid --set and use --values instead.
You can declare all your custom values in a file like this:
# values.yaml
controller:
  name: "cx-nginx-1"
  kind: "Deployment"
  service:
    loadBalancerIP: ""
  metrics:
    enabled: true
  proxySetHeaders:
    X-Different-Name: "true"
    X-Request-Start: "true"
    X-Using-Nginx-Controller: "true"
Then use it on install:
helm install --name cx-nginx-1 stable/nginx-ingress \
--values=values.yaml
If you want to use --set anyway, you should use this notation:
helm install --name cx-nginx-1 stable/nginx-ingress \
--set controller.name=cx-nginx-1 \
--set controller.kind=Deployment \
--set controller.service.loadBalancerIP= \
--set controller.metrics.enabled=true \
--set-string controller.proxySetHeaders.X-Different-Name="true" \
--set-string controller.proxySetHeaders.X-Request-Start="true" \
--set-string controller.proxySetHeaders.X-Using-Nginx-Controller="true"