Skaffold doesn't accept "command" parameter in yaml - kubernetes-helm

Here is the Skaffold yaml I'm using:
apiVersion: skaffold/v1
kind: Config
metadata:
  name: myapp-api
build:
  artifacts:
    - image: elodie/myapp-api
      context: .
      docker:
        dockerfile: Dockerfile
deploy:
  helm:
    releases:
      - name: elodie-api
        chartPath: bitnami/node
        remote: true
        setValues:
          command: ['/bin/bash', '-ec', 'npm start']
          image.repository: elodie/myapp-api
          service.type: LoadBalancer
          getAppFromExternalRepository: false
          applicationPort: 6666
        setValueTemplates:
          image.tag: "{{ .DIGEST_HEX }}"
When I add the command config I get the error parsing skaffold config: error parsing skaffold configuration file: unable to parse config: yaml: unmarshal errors: line 16: cannot unmarshal !!seq into string, yet the value is taken straight out of the values.yaml that Bitnami provides.
Why do I get this error? Any ideas?

setValues is turned into a sequence of --set arguments to helm, so setValues only supports string values.
Helm does support ways to represent other structures with --set. It looks like you should be able to use:
setValues:
  command: "{/bin/bash, -ec, npm start}"
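For reference, Skaffold turns each setValues entry into a --set flag, so the brace form above is roughly equivalent to running something like the following (release and chart names taken from the question):

  helm upgrade --install elodie-api bitnami/node \
    --set "command={/bin/bash, -ec, npm start}" \
    --set image.repository=elodie/myapp-api \
    --set service.type=LoadBalancer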

Related

Airflow installation with helm on kubernetes cluster is failing with db migration pod

Error:
Steps:
I have downloaded the helm chart from here https://github.com/apache/airflow/releases/tag/helm-chart/1.8.0 (Under Assets, Source code zip).
I added the following extra params to the default values.yaml:
createUserJob:
  useHelmHooks: false
migrateDatabaseJob:
  useHelmHooks: false
dags:
  gitSync:
    enabled: true
    # all data....
airflow:
  extraEnv:
    - name: AIRFLOW__API__AUTH_BACKEND
      value: "airflow.api.auth.backend.basic_auth"
ingress:
  web:
    tls:
      enabled: true
      secretName: wildcard-tls-cert
    host: "mydns.com"
    path: "/airflow"
I also need the KubernetesExecutor, hence I am using https://github.com/airflow-helm/charts/blob/main/charts/airflow/sample-values-KubernetesExecutor.yaml as k8sExecutor.yaml.
Installing using the following command:
helm install my-airflow airflow-8.6.1/airflow/ --values values.yaml --values k8sExecutor.yaml -n mynamespace
It worked when I tried it the following way:
helm repo add airflow-repo https://airflow-helm.github.io/charts
helm install my-airflow airflow-repo/airflow --version 8.6.1 --values k8sExecutor.yaml --values values.yaml
values.yaml here contains only the overridden parameters.
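If the install still fails, an optional sanity check is to confirm that the chart version you reference is actually published in the repo, for example:

  helm repo update
  helm search repo airflow-repo/airflow --versions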

Templates and Values in different repos via ArgoCD

I'm looking for insights into the following situation:
I have one ArgoCD application pointing to a Git repo (A), where there's a values.yaml;
I would like to use the Helm templates stored in a different repo (B);
Any suggestions/alternatives on how to make this work?
I think helm dependency can help solve your problem.
In the Chart.yaml of repo (A), declare a dependency on the chart from repo (B):
# Chart.yaml
dependencies:
  - name: chartB
    version: "0.0.1"
    repository: "https://link_to_chart_B"
Link references:
https://github.com/argoproj/argocd-example-apps/tree/master/helm-dependency
P.S.: You also need to add the chart repository to ArgoCD.
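For example, a minimal sketch of registering the chart repository via the ArgoCD CLI (the URL and name are placeholders from the snippet above); you can also declare it as a repository secret instead:

  argocd repo add https://link_to_chart_B --type helm --name chart-b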
The way we solved it is by writing a very simple Helm plugin and passing it the URL of the Helm chart location (ChartMuseum in our case) as an environment variable:
server:
  name: server
  config:
    configManagementPlugins: |
      - name: helm-yotpo
        generate:
          command: ["sh", "-c"]
          args: ["helm template --version ${HELM_CHART_VERSION} --repo ${HELM_REPO_URL} --namespace ${NAMESPACE} $HELM_CHART_NAME --name-template=${HELM_RELEASE_NAME} -f $(pwd)/${HELM_VALUES_FILE} "]
This works because the helm template command can be run with the --repo flag.
In the ArgoCD Application YAML you then call the new plugin:
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: application-test
  namespace: infra
spec:
  destination:
    namespace: infra
    server: https://kubernetes.default.svc
  project: infra
  source:
    path: "helm-values-files/telegraf"
    repoURL: https://github.com/YotpoLtd/argocd-example.git
    targetRevision: HEAD
    plugin:
      name: helm-yotpo
      env:
        - name: HELM_RELEASE_NAME
          value: "telegraf-test"
        - name: HELM_CHART_VERSION
          value: "1.8.18"
        - name: NAMESPACE
          value: "infra"
        - name: HELM_REPO_URL
          value: "https://helm.influxdata.com/"
        - name: HELM_CHART_NAME
          value: "telegraf"
        - name: HELM_VALUES_FILE
          value: "telegraf.yaml"
You can read more about it in the following blog post.
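For illustration, with the env values from this Application the plugin's generate command expands to roughly:

  helm template --version 1.8.18 --repo https://helm.influxdata.com/ --namespace infra telegraf \
    --name-template=telegraf-test -f $(pwd)/telegraf.yaml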

Running Skaffold fails if configured to work with Helm

I am trying to make Skaffold work with Helm.
Below is my skaffold.yml file:
apiVersion: skaffold/v2beta23
kind: Config
metadata:
  name: test-app
build:
  artifacts:
    - image: test.common.repositories.cloud.int/manager/k8s
      docker:
        dockerfile: Dockerfile
deploy:
  helm:
    releases:
      - name: my-release
        artifactOverrides:
          image: test.common.repositories.cloud.int/manager/k8s
        imageStrategy:
          helm: {}
Here is my values.yaml:
image:
  repository: test.common.repositories.cloud.int/manager/k8s
  tag: 1.0.0
Running the skaffold command results in:
...
Starting deploy...
Helm release my-release not installed. Installing...
Error: INSTALLATION FAILED: failed to download ""
deploying "my-release": install: exit status 1
Does anyone have an idea what is missing here?
I believe this is happening because you have not specified a chart to use for the helm release. I was able to reproduce your issue by commenting out the chartPath field in the skaffold.yaml file of the helm-deployment example in the Skaffold repo.
You can specify a local chart using the deploy.helm.release.chartPath field or a remote chart using the deploy.helm.release.remoteChart field.
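For example, a minimal sketch of the release with a local chart added; the ./charts/test-app path is an assumption, so point it at wherever your chart actually lives:

  deploy:
    helm:
      releases:
        - name: my-release
          chartPath: ./charts/test-app   # assumed local chart location
          artifactOverrides:
            image: test.common.repositories.cloud.int/manager/k8s
          imageStrategy:
            helm: {}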

Skaffold with helm fails to parse artifactOverrides

My skaffold.yaml
apiVersion: skaffold/v1
kind: Config
build:
  artifacts:
    - image: tons/whoami-mn
      jib: {}
  tagPolicy:
    gitCommit: {}
deploy:
  helm:
    releases:
      - name: whoami-mn
        chartPath: ./k8s/helm/whoami-mn
        artifactOverrides:
          image.repository: tons/whoami-mn
The command
skaffold dev --port-forward --namespace whoami-mn
The error
parsing skaffold config: unable to parse config: yaml: unmarshal errors:
line 11: field artifactOverrides not found in type v1.HelmRelease
Skaffold version: v1.13.1
Helm version: v3.3.0
Any idea why I'm getting the above error? Please let me know if I should post other parts of my code.
apiVersion: skaffold/v2beta6 was the key to it: artifactOverrides is not a field in the skaffold/v1 schema, which is why the parser rejected it.
In the future you can also try the skaffold fix command to find ways to update your schema automatically.
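A minimal usage sketch (run from the directory containing skaffold.yaml; check skaffold fix --help on your version for the exact flags):

  # print the config upgraded to the latest schema version
  skaffold fix
  # or rewrite skaffold.yaml in place
  skaffold fix --overwrite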

Kubernetes w/ helm: MountVolume.SetUp failed for volume "secret" : invalid character '\r' in string literal

I'm using a script that runs a helm command to upgrade my k8s deployment.
Before, I used kubectl to deploy directly; since moving to Helm and using charts, I see the following error on the k8s pods after deploying:
MountVolume.SetUp failed for volume "secret" : invalid character '\r' in string literal
My script looks similar to:
value1="foo"
value2="bar"
helm upgrade deploymentName --debug --install --atomic --recreate-pods --reset-values --force --timeout 900 pathToChartDir --set value1 --set value2
The deployment.yaml is as follows:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deploymentName
spec:
  selector:
    matchLabels:
      run: deploymentName
  replicas: 2
  template:
    metadata:
      labels:
        run: deploymentName
        app: appName
    spec:
      containers:
        - name: deploymentName
          image: {{ .Values.image.acr.registry }}/{{ .Values.image.name }}:{{ .Values.image.tag }}
          volumeMounts:
            - name: secret
              mountPath: /secrets
              readOnly: true
          ports:
            - containerPort: 1234
          env:
            - name: DOTENV_CONFIG_PATH
              value: "/secrets/env"
      volumes:
        - name: secret
          flexVolume:
            driver: "azure/kv"
            secretRef:
              name: "kvcreds"
            options:
              usepodidentity: "false"
              tenantid: {{ .Values.tenantid }}
              subscriptionid: {{ .Values.subsid }}
              resourcegroup: {{ .Values.rg }}
              keyvaultname: {{ .Values.kvname }}
              keyvaultobjecttype: secret
              keyvaultobjectname: {{ .Values.objectname }}
As can be seen, the error relates to the secret volume and its values.
I've triple checked there is no line-break or anything like that in the values.
I've run helm lint - no errors found.
I've run helm template - nothing strange or missing in output.
Update:
I've copied the output of helm template and put in a deploy.yaml file.
Then used kubectl apply -f deploy.yaml to manually deploy the service, and... it works.
That makes me think it's actually some kind of bug in Helm. Does that make sense?
Update 2:
I've also tried replacing the azure/kv volume with an emptyDir volume, and I was able to deploy using helm. It looks like a specific issue between Helm and the azure/kv volume?
Any ideas for a workaround?
To be completely accurate, I should say that the actual details of your \r problem might be different from mine.
I found the issue in my case by looking in the kv log of the AKS node (/var/log/kv-driver.log). In my case, the error was:
Original Error: autorest/azure: Service returned an error. Status=403 Code="Forbidden" Message="Access denied. Caller was not found on any access policy.\r\n
You can learn to SSH into the node on this page:
https://learn.microsoft.com/en-us/azure/aks/ssh
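For example, once on the node (this assumes the same log path mentioned above):

  # inspect the flexvolume driver log for the real error behind the '\r' message
  sudo tail -n 100 /var/log/kv-driver.log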
If you want to follow the solution, I opened an issue:
https://github.com/Azure/kubernetes-keyvault-flexvol/issues/121