Argo helm chart does not apply my custom values.yml file - kubernetes-helm

I deploy Argo with Helm 3 to my cluster:
helm upgrade --install argo argo/argo-cd -n argocd -f argovalues.yaml
My argovalues.yaml file is the following:
global.image.tag: "v2.0.1"
server.service.type: "NodePort"
server.name: "kabamaru"
server.ingress.enabled: true
server.metrics.enabled: true
server.additionalApplications: |
  - name: guestbook
    namespace: argocd
    additionalLabels: {}
    additionalAnnotations: {}
    project: default
    source:
      repoURL: https://github.com/argoproj/argocd-example-apps.git
      targetRevision: HEAD
      path: guestbook
      directory:
        recurse: true
    destination:
      server: https://kubernetes.default.svc
      namespace: argocd
    syncPolicy:
      automated:
        prune: false
        selfHeal: false
and... none of these values is applied. It is very frustrating.
If I do the following:
helm upgrade --install argo argo/argo-cd -n argocd --set server.name=hello
it works and the change is applied successfully!
What on earth is going on?

I'm using the helm upgrade command with the --set-file option and it works correctly:
helm upgrade --install argo argo/argo-cd -n argocd --set-file argovalues.yaml
There are other options like --set and --set-string as well. I think one of these can help you resolve your issue.
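For reference, here is a rough sketch of how those flags differ (the keys below are only illustrative; somekey is a placeholder, not a real chart value):
# --set parses the value with type inference (true/false become booleans)
helm upgrade --install argo argo/argo-cd -n argocd --set server.ingress.enabled=true
# --set-string always keeps the value as a string
helm upgrade --install argo argo/argo-cd -n argocd --set-string global.image.tag=v2.0.1
# --set-file expects key=path and loads the file contents as the value of that single key
helm upgrade --install argo argo/argo-cd -n argocd --set-file somekey=path/to/file.txt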

The solution
I managed to make it work like this:
argovalues.yaml
server:
  service:
    type: "NodePort" # by default 80:30080/TCP,443:30443/TCP
  image:
    tag: "v2.0.1"
  autoscaling:
    enabled: true
  ingress:
    enabled: true
  metrics:
    enabled: true
  additionalApplications:
    - name: bootstrap
      namespace: argocd
      additionalLabels: {}
      additionalAnnotations: {}
      project: default
      source:
        repoURL: https://github.com/YOUR_REPO/argotest.git
        targetRevision: HEAD
        path: bootstrap
        directory:
          recurse: true
      destination:
        server: https://kubernetes.default.svc
        namespace: argocd
      syncPolicy:
        automated:
          prune: true
          selfHeal: false
Note: the wrong indentation can drive you crazy!
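For what it's worth, my reading of why the original file was ignored: YAML does not split dotted keys, so a line like server.service.type: "NodePort" in a values file defines a single top-level key literally named server.service.type, which the chart never looks up; only --set interprets the dots as nesting. A minimal contrast:
# NOT equivalent: one top-level key whose name happens to contain dots
server.service.type: "NodePort"
# equivalent to --set server.service.type=NodePort: properly nested maps
server:
  service:
    type: "NodePort"
You can also double-check what a release actually received with helm get values argo -n argocd, or render the chart locally with helm template argo argo/argo-cd -f argovalues.yaml before upgrading.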

Related

Airflow installation with helm on kubernetes cluster is failing with db migration pod

Error:
Steps:
I have downloaded the helm chart from here https://github.com/apache/airflow/releases/tag/helm-chart/1.8.0 (Under Assets, Source code zip).
Added the following extra params to the default values.yaml:
createUserJob:
  useHelmHooks: false
migrateDatabaseJob:
  useHelmHooks: false
dags:
  gitSync:
    enabled: true
    # all data....
airflow:
  extraEnv:
    - name: AIRFLOW__API__AUTH_BACKEND
      value: "airflow.api.auth.backend.basic_auth"
ingress:
  web:
    tls:
      enabled: true
      secretName: wildcard-tls-cert
    host: "mydns.com"
    path: "/airflow"
I also need KubernetesExecutor, hence I am using https://github.com/airflow-helm/charts/blob/main/charts/airflow/sample-values-KubernetesExecutor.yaml as k8sExecutor.yaml.
Installing using the following command:
helm install my-airflow airflow-8.6.1/airflow/ --values values.yaml --values k8sExecutor.yaml -n mynamespace
It worked when I tried it the following way:
helm repo add airflow-repo https://airflow-helm.github.io/charts
helm install my-airflow airflow-repo/airflow --version 8.6.1 --values k8sExecutor.yaml --values values.yaml
values.yaml has only the overridden parameters.
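One general Helm detail worth keeping in mind here (not specific to the Airflow chart): when several --values files are passed, they are merged left to right, so keys in the last file take precedence. In the command above that means values.yaml overrides anything it shares with k8sExecutor.yaml:
helm install my-airflow airflow-repo/airflow --version 8.6.1 \
  --values k8sExecutor.yaml \
  --values values.yaml   # overlapping keys here win over k8sExecutor.yaml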

Process json logs with Grafana/loki

I have set up Grafana, Prometheus and Loki (2.6.1) as follows on my Kubernetes (1.21) cluster:
helm upgrade --install promtail grafana/promtail -n monitoring -f monitoring/promtail.yaml
helm upgrade --install prom prometheus-community/kube-prometheus-stack -n monitoring --values monitoring/prom.yaml
helm upgrade --install loki grafana/loki -n monitoring --values monitoring/loki.yaml
with:
# monitoring/loki.yaml
loki:
  schemaConfig:
    configs:
      - from: 2020-09-07
        store: boltdb-shipper
        object_store: s3
        schema: v11
        index:
          prefix: loki_index_
          period: 24h
  storageConfig:
    aws:
      s3: s3://eu-west-3/cluster-loki-logs
    boltdb_shipper:
      shared_store: filesystem
      active_index_directory: /var/loki/index
      cache_location: /var/loki/cache
      cache_ttl: 168h
# monitoring/promtail.yaml
config:
  serverPort: 80
  clients:
    - url: http://loki:3100/loki/api/v1/push
# monitoring/prom.yaml
prometheus:
  prometheusSpec:
    serviceMonitorSelectorNilUsesHelmValues: false
    serviceMonitorSelector: {}
    serviceMonitorNamespaceSelector:
      matchLabels:
        monitored: "true"
grafana:
  sidecar:
    datasources:
      defaultDatasourceEnabled: true
  additionalDataSources:
    - name: Loki
      type: loki
      url: http://loki.monitoring:3100
I get data from my containers, but whenever a container logs in JSON format, I can't access the nested fields:
{app="product", namespace="api-dev"} | unpack | json
Yields:
My aim is, for example, to filter by log.severity
Actually, following this answer, it turns out to be a promtail scraping issue.
The current (promtail-6.3.1 / Loki 2.6.1) helm chart default is to use cri as the pipeline stage, which expects log lines like:
"2019-04-30T02:12:41.8443515Z stdout xx message"
I should have used docker, which expects JSON; consequently, my promtail.yaml changed to:
config:
  serverPort: 80
  clients:
    - url: http://loki:3100/loki/api/v1/push
  snippets:
    pipelineStages:
      - docker: {}
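With the docker stage in place, the stored log line is the raw JSON, so the json parser alone should expose the nested fields; as far as I know it flattens nested keys with an underscore, so a filter on log.severity would look roughly like:
{app="product", namespace="api-dev"} | json | log_severity="error"
(The unpack stage from the original query should only be needed if promtail's pack stage is used.)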

How do I set helm values (not files) in ArgoCD Application spec

I looked all over the ArgoCD docs for this but somehow I cannot seem to find an answer. I have an application spec like so:
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: myapp
  namespace: argocd
spec:
  destination:
    namespace: default
    server: https://kubernetes.default.svc
  project: default
  source:
    helm:
      valueFiles:
        - my-values.yaml
    path: .
    repoURL: ssh://git#blah.git
    targetRevision: HEAD
However, I also need to specify a particular helm value (like you'd do with --set in the helm command). I see in the ArgoCD web UI that it has a spot for Values, but I have tried every combination of entries I can think of (somekey=somevalue, somekey: somevalue, somekey,somevalue). I also tried editing the manifest directly, but I still get similar errors.
The error is long nonsense that ends with error unmarshaling JSON: while decoding JSON: json: cannot unmarshal string into Go value of type map[string]interface {}
What is the correct syntax to set a single value, either through the web UI or the manifest file?
You would use parameters via spec.source.helm.parameters, something like:
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-app
  namespace: argocd
spec:
  project: my-project
  source:
    repoURL: https://charts.my-company.com
    targetRevision: "1234"
    chart: my-chart
    helm:
      parameters:
        - name: my.helm.key
          value: some-val
  destination:
    name: k8s-dev
    namespace: my-ns
Sample from Argo Docs - https://argo-cd.readthedocs.io/en/stable/user-guide/helm/#build-environment
To override just a few arbitrary parameters in the values, you can indeed use parameters: as the equivalent of Helm's --set option, or fileParameters: instead of --set-file:
...
helm:
  # Extra parameters to set (same as setting through values.yaml, but these take precedence)
  parameters:
    - name: "nginx-ingress.controller.service.annotations.external-dns\\.alpha\\.kubernetes\\.io/hostname"
      value: mydomain.example.com
    - name: "ingress.annotations.kubernetes\\.io/tls-acme"
      value: "true"
      forceString: true # ensures that value is treated as a string
  # Use the contents of files as parameters (uses Helm's --set-file)
  fileParameters:
    - name: config
      path: files/config.json
But to answer your original question: for the "Values" option in the GUI, you pass a literal YAML block in the manifest, like:
helm:
  # Helm values files for overriding values in the helm chart
  # The path is relative to the spec.source.path directory defined above
  valueFiles:
    - values-prod.yaml
  # Values file as block file
  values: |
    ingress:
      enabled: true
      path: /
      hosts:
        - mydomain.example.com
      annotations:
        kubernetes.io/ingress.class: nginx
        kubernetes.io/tls-acme: "true"
      labels: {}
Check the ArgoCD sample application for more details.
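As a rough mapping back to the Helm CLI (my own summary, not from the Argo docs): parameters behaves like --set, fileParameters like --set-file, and valueFiles/values like -f/--values. So the snippets above correspond roughly to:
helm template . -f values-prod.yaml --set ingress.enabled=true --set-file config=files/config.json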

Templates and Values in different repos via ArgoCD

I'm looking for insights for the following situation...
I have one ArgoCD application pointing to a Git repo (A), where there's a values.yaml;
I would like to use the Helm templates stored in a different repo (B);
Any suggestions/alternatives on how to make this work?
I think helm dependency can help solve your problem.
In the Chart.yaml of repo (A), declare the dependency (the chart of repo B):
# Chart.yaml
dependencies:
  - name: chartB
    version: "0.0.1"
    repository: "https://link_to_chart_B"
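If it helps, once the dependency is declared, values for the subchart can live in repo (A)'s values.yaml under a key named after the dependency (replicaCount below is just a placeholder for whatever chartB actually exposes):
# values.yaml in repo (A)
chartB:
  replicaCount: 2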
Link references:
https://github.com/argoproj/argocd-example-apps/tree/master/helm-dependency
P.S.: You need to add the chart repository to ArgoCD.
The way we solved it is by writing a very simple Helm plugin and passing it the URL of the Helm chart location (ChartMuseum in our case) as an env variable:
server:
  name: server
  config:
    configManagementPlugins: |
      - name: helm-yotpo
        generate:
          command: ["sh", "-c"]
          args: ["helm template --version ${HELM_CHART_VERSION} --repo ${HELM_REPO_URL} --namespace ${NAMESPACE} $HELM_CHART_NAME --name-template=${HELM_RELEASE_NAME} -f $(pwd)/${HELM_VALUES_FILE} "]
You can run the helm command with the --repo flag, and in the ArgoCD Application yaml you call the new plugin:
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: application-test
  namespace: infra
spec:
  destination:
    namespace: infra
    server: https://kubernetes.default.svc
  project: infra
  source:
    path: "helm-values-files/telegraf"
    repoURL: https://github.com/YotpoLtd/argocd-example.git
    targetRevision: HEAD
    plugin:
      name: helm-yotpo
      env:
        - name: HELM_RELEASE_NAME
          value: "telegraf-test"
        - name: HELM_CHART_VERSION
          value: "1.8.18"
        - name: NAMESPACE
          value: "infra"
        - name: HELM_REPO_URL
          value: "https://helm.influxdata.com/"
        - name: HELM_CHART_NAME
          value: "telegraf"
        - name: HELM_VALUES_FILE
          value: "telegraf.yaml"
You can read more about it in the following blog post.

Filebeat on Kubernetes modules are not working

I am using this guide to run filebeat on a Kubernetes cluster.
https://www.elastic.co/guide/en/beats/filebeat/master/running-on-kubernetes.html#_kubernetes_deploy_manifests
filebeat version: 6.6.0
I updated the config file with:
filebeat.yml: |-
  filebeat.config:
    inputs:
      # Mounted `filebeat-inputs` configmap:
      path: ${path.config}/inputs.d/*.yml
      # Reload inputs configs as they change:
      reload.enabled: false
    modules:
      path: ${path.config}/modules.d/*.yml
      # Reload module configs as they change:
      reload.enabled: false
  # To enable hints based autodiscover, remove `filebeat.config.inputs` configuration and uncomment this:
  #filebeat.autodiscover:
  #  providers:
  #    - type: kubernetes
  #      hints.enabled: true
  filebeat.modules:
    - module: nginx
      access:
        enabled: true
        var.paths: ["/var/log/nginx/access.log*"]
    - module: apache2
      access:
        enabled: true
        var.paths: ["/var/log/apache2/access.log*"]
      error:
        enabled: true
        var.paths: ["/var/log/apache2/error.log*"]
But the logs from the PHP application (/var/log/apache2/error.log) are not being fetched by filebeat. I checked by exec'ing into the filebeat pod and I see that the apache2 and nginx modules are not enabled.
How can I set this up correctly in the above yaml file?
UPDATE
I updated the filebeat config file with the settings below:
filebeat.autodiscover:
  providers:
    - type: kubernetes
      hints.enabled: true
      templates:
        - condition:
          config:
            - type: docker
              containers.ids:
                - "${data.kubernetes.container.id}"
              exclude_lines: ["^\\s+[\\-`('.|_]"] # drop asciiart lines
        - condition:
            equals:
              kubernetes.labels.app: "my-apache-app"
          config:
            - module: apache2
              log:
                input:
                  type: docker
                  containers.ids:
                    - "${data.kubernetes.container.id}"
apiVersion: v1
kind: ConfigMap
metadata:
  name: filebeat-modules
  namespace: default
  labels:
    k8s-app: filebeat
data:
  apache2.yml: |-
    - module: apache2
      access:
        enabled: true
      error:
        enabled: true
  nginx.yml: |-
    - module: nginx
      access:
        enabled: true
Now, I am logging apache errors to /dev/stderr so that I can see them through kubectl logs. The logs are showing up on the Kibana dashboard, but the apache module data is still not visible.
I tried checking with ./filebeat modules list:
Enabled:
apache2
nginx
Disabled:
[Screenshot: Kibana dashboard]