Datadog: API Key invalid, dropping transaction when installing Datadog agent - Kubernetes

I'm trying to install Datadog agent for a Kubernetes cluster using Helm.
This is the helm command I'm using for it:
helm repo add datadog https://helm.datadoghq.com
helm repo update
helm upgrade --install datadog datadog/datadog \
  --namespace monitoring \
  --create-namespace \
  --atomic \
  --set datadog.apiKey=<MY-DATADOG-API-KEY> \
  --set targetSystem=linux \
  --values values.yaml
Values file:
datadog:
  kubelet:
    host:
      valueFrom:
        fieldRef:
          fieldPath: spec.nodeName
    hostCAPath: /etc/kubernetes/certs/kubeletserver.crt
    tlsVerify: false # Required as of Agent 7.35. See Notes.
However, the agent's liveness probe fails with error 500, and the logs show the error below:
CLUSTER | ERROR | (pkg/forwarder/transaction/transaction.go:344 in internalProcess) | API Key invalid, dropping transaction for https://orchestrator.datadoghq.com/api/v1/orchestrator.

Here's how I solved it:
The issue had to do with the Datadog Destination Site. The Destination site for my metrics, traces, and logs is supposed to be datadoghq.eu. This is set using the variable DD_SITE, and it defaults to datadoghq.com if it is not set.
To check what your Datadog destination site is, just look at the URL of your Datadog dashboard:
For US it will be - https://app.datadoghq.com/
For EU it will be - https://app.datadoghq.eu/
To set this in your Helm chart, simply do either of the following:
helm repo add datadog https://helm.datadoghq.com
helm repo update
helm upgrade --install datadog datadog/datadog \
  --namespace monitoring \
  --create-namespace \
  --atomic \
  --set datadog.apiKey=<MY-DATADOG-API-KEY> \
  --set targetSystem=linux \
  --set datadog.site=datadoghq.eu \
  --values values.yaml
OR set it in your values file:
datadog:
  site: datadoghq.eu
  kubelet:
    host:
      valueFrom:
        fieldRef:
          fieldPath: spec.nodeName
    hostCAPath: /etc/kubernetes/certs/kubeletserver.crt
    tlsVerify: false # Required as of Agent 7.35. See Notes.
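After upgrading, you can double-check which site the running agent is pointed at by looking at the DD_SITE environment variable on an agent pod. A quick sketch; the label selector and pod name depend on your release:
kubectl -n monitoring get pods -l app=datadog   # label may differ with your release name
kubectl -n monitoring exec <datadog-agent-pod> -- printenv DD_SITE   # should print datadoghq.eu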
References:
Datadog Agent Forwarder fails liveness probe when new spot instance joins cluster, causing multiple restarts #1697
DD_SITE Set to us3.datadoghq.com, but process-agent and security-agent Still Try to Connect to non us3 endpoints #9180

Related

Helm can't pull registry image

After helm upgrade I got this error:
Failed to pull image "myhostofgitlab.ru/common-core-executor:1bac97ef": rpc error: code = Unknown desc = Error response from daemon: Head https://myhostofgitlab.ruv2/common-core-executor/manifests/1bac97ef: denied: access forbidden
My run command:
k8s-deploy-Prod:
  image: alpine/helm:latest
  stage: deploy
  script:
    - helm upgrade ${PREFIX}-common-core-executor k8s/helm/common-core-executor --debug --atomic --install --wait --history-max 3
      --set image.repository=${CI_REGISTRY_IMAGE}/common-core-executor
      --set image.tag=${CI_COMMIT_SHORT_SHA}
      --set name=${PREFIX}-common-core-executor
      --set service.name=${PREFIX}-common-core-executor
      --set branch=${PREFIX}
      --set ingress.enabled=true
      --set ingress.hosts[0].host=${PREFIX}.common-core-executor.k8s.test.zone
      --set ingress.tls[0].hosts[0]=${PREFIX}.common-core-executor.k8s.test.zone
      --set secret.name=${PREFIX}-${PROJECT_NAME}-secret
      --timeout 2m0s
      -f k8s/helm/common-core-executor/common-core-executor-values.yaml
      -n ${NAMESPACE}
Where am I wrong?
Before that error I followed some steps from the official instructions. First I created credentials like this (it's just sample data):
apiVersion: v1
kind: Secret
data:
  .dockerconfigjson: eyJhdXRocyI6eyJodHRwczovL2hvc3QtZm9yLXN0YWNrLW92ZXJmbG93OnsidXNlcm5hbWUiOiJzdGFja292ZXJmbG93IiwicGFzc3dvcmQiOiJzdGFja292ZXJmbG93IiwiYXV0aCI6Inh4eCJ9fX0=
metadata:
  name: regcred
  namespace: prod-common-service
type: kubernetes.io/dockerconfigjson
And I added this in the containers section of deployment.yaml:
imagePullSecrets:
- name: regcred
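For reference, imagePullSecrets belongs at the pod spec level of the Deployment template, as a sibling of containers rather than inside it; a minimal sketch with illustrative names:
spec:
  template:
    spec:
      imagePullSecrets:
        - name: regcred
      containers:
        - name: common-core-executor   # illustrative container name
          image: myhostofgitlab.ru/common-core-executor:1bac97ef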
Thanks!

How can I pass the correct parameters to Helm, using Ansible to install GitLab?

I'm writing an Ansible task to deploy GitLab in my k3s environment.
According to the doc, I need to execute this to install GitLab using Helm:
$ helm install gitlab gitlab/gitlab \
  --set global.hosts.domain=DOMAIN \
  --set certmanager-issuer.email=me@example.com
But the community.kubernetes.helm module doesn't handle --set parameters and only calls helm with the --values parameter.
So my Ansible task looks like this:
- name: Deploy GitLab
  community.kubernetes.helm:
    update_repo_cache: yes
    release_name: gitlab
    chart_ref: gitlab/gitlab
    release_namespace: git
    release_values:
      global.hosts.domain: example.com
      certmanager-issuer.email: info@example.com
But the helm chart still returns the error You must provide an email to associate with your TLS certificates. Please set certmanager-issuer.email.
I've tried manually in a terminal, and it seems that the GitLab helm chart requires --set parameters and fails with --values. But community.kubernetes.helm doesn't support --set.
What can I do?
Is there a bug on GitLab helm chart side?
it seems that the GitLab helm chart requires --set parameters and fail with --values
That is an erroneous assumption; what you are running into is that --set splits on ., because otherwise providing fully-formed YAML on the command line would be painful.
The correct values use sub-objects where the . occurs:
- name: Deploy GitLab
  community.kubernetes.helm:
    update_repo_cache: yes
    release_name: gitlab
    chart_ref: gitlab/gitlab
    release_namespace: git
    release_values:
      global:
        hosts:
          # https://gitlab.com/gitlab-org/charts/gitlab/-/blob/v4.4.5/values.yaml#L47
          domain: example.com
      # https://gitlab.com/gitlab-org/charts/gitlab/-/blob/v4.4.5/values.yaml#L592-595
      certmanager-issuer:
        email: info@example.com
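For contrast, the original flat release_values amounts to a values file with two literal dotted keys that the chart never reads; a sketch of the effective YAML:
# effective values passed by the original task: literal keys, not nested objects
global.hosts.domain: example.com
certmanager-issuer.email: info@example.com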

Provide nodeSelector to nginx ingress using helm

I spent some time looking into how to pass the parameters to helm in order to configure the nodeSelector properly.
Different tries led to different errors like:
Error: unable to build kubernetes objects from release manifest: error validating "": error validating data: ValidationError(Deployment.spec.template.spec.nodeSelector.kubernetes): invalid type for io.k8s.api.core.v1.PodSpec.nodeSelector: got "map", expected "string"
coalesce.go:196: warning: cannot overwrite table with non table for nodeSelector (map[])
Reference: https://learn.microsoft.com/en-us/azure/aks/ingress-static-ip
In the link above, we can see how it should be used:
helm install nginx-ingress stable/nginx-ingress \
  --namespace $NAMESPACE \
  --set controller.replicaCount=1 \
  --set controller.nodeSelector."kubernetes\.io/hostname"=$LOADBALANCER_NODE \
  --set controller.service.loadBalancerIP="$LOADBALANCER_IP" \
  --set controller.extraArgs.default-ssl-certificate="$NAMESPACE/$LOADBALANCER_NODE-ssl"
In general, the Helm docs are a good source of help on this: https://helm.sh/docs/intro/using_helm/#the-format-and-limitations-of---set
Here you can find all the nginx parameters: https://github.com/helm/charts/tree/master/stable/nginx-ingress
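If you prefer to avoid escaping the dotted key on the command line, the same nodeSelector can go in a values file passed with --values; a minimal sketch (the node name is a placeholder):
controller:
  replicaCount: 1
  nodeSelector:
    kubernetes.io/hostname: "<LOADBALANCER_NODE>"   # placeholder node name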

Helm [stable/nginx-ingress] Getting issue while passing headers

Version of Helm and Kubernetes: Client: &version.Version{SemVer:"v2.14.1"} and 1.13.7-gke.24
Which chart: stable/nginx-ingress [v0.24.1]
What happened: Trying to override headers using --set-string, but it does not work as expected. It always gives issues with the parsing:
/usr/sbin/helm install --name cx-nginx-1 --set controller.name=cx-nginx-1 --set controller.kind=Deployment --set controller.service.loadBalancerIP= --set controller.metrics.enabled=true --set-string 'controller.headers={"X-Different-Name":"true","X-Request-Start":"test-header","X-Using-Nginx-Controller":"true"}' . Error: release cx-nginx-1 failed: ConfigMap in version "v1" cannot be handled as a ConfigMap: v1.ConfigMap.Data: ReadMapCB: expect { or n, but found [, error found in #10 byte of ...|","data":["\"X-Diffe|..., bigger context ...|{"apiVersion":"v1","data":["\"X-Different-Name\":\"true\"","\"X-Request-Start|...
What you expected to happen: I want to override the headers that are there by default in values.yaml with custom headers.
How to reproduce it (as minimally and precisely as possible):
I have provided the command to reproduce it:
helm install --name cx-nginx-1 --set controller.name=cx-nginx-1 --set controller.kind=Deployment --set controller.service.loadBalancerIP= --set controller.metrics.enabled=true --set-string 'controller.headers={"X-Different-Name":"true","X-Request-Start":"test-header","X-Using-Nginx-Controller":"true"}' .
I tried to run it in debug mode (--dry-run --debug); it shows me a ConfigMap like below:
apiVersion: v1
kind: ConfigMap
metadata:
  labels:
    app: nginx-ingress
    chart: nginx-ingress-1
    component: "cx-nginx-1"
    heritage: Tiller
    release: foiled-coral
  name: foiled-coral-nginx-ingress-custom-headers
  namespace: cx-ingress
data:
  - X-Different-Name:true
  - X-Request-Start:test-header
  - X-Using-Nginx-Controller:true
It seems like it's adding an indent of 4 instead of 2. I'm also getting the warning below:
Warning: Merging destination map for chart 'nginx-ingress'. Cannot overwrite table item 'headers', with non table value: map[X-Different-Name:true X-Request-Start:test-header X-Using-Nginx-Controller:true]
Kindly help me to pass the headers in the right way.
Note: controller.headers is deprecated; make sure to use controller.proxySetHeaders instead.
Helm --set has some limitations.
Your best option is to avoid using --set and use --values instead.
You can declare all your custom values in a file like this:
# values.yaml
controller:
  name: "cx-nginx-1"
  kind: "Deployment"
  service:
    loadBalancerIP: ""
  metrics:
    enabled: true
  proxySetHeaders:
    X-Different-Name: "true"
    X-Request-Start: "true"
    X-Using-Nginx-Controller: "true"
Then use it on install:
helm install --name cx-nginx-1 stable/nginx-ingress \
  --values=values.yaml
If you want to use --set anyway, you should use this notation:
helm install --name cx-nginx-1 stable/nginx-ingress \
  --set controller.name=cx-nginx-1 \
  --set controller.kind=Deployment \
  --set controller.service.loadBalancerIP= \
  --set controller.metrics.enabled=true \
  --set-string controller.proxySetHeaders.X-Different-Name="true" \
  --set-string controller.proxySetHeaders.X-Request-Start="true" \
  --set-string controller.proxySetHeaders.X-Using-Nginx-Controller="true"
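To sanity-check the rendering before installing, the --dry-run --debug run from the question can be repeated against the values file (a sketch using the names above):
helm install --name cx-nginx-1 stable/nginx-ingress \
  --values=values.yaml --dry-run --debug
The custom-headers ConfigMap should then render data as a two-space-indented map of strings rather than a list.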

Should jx step helm apply create/produce a helm release

I'm struggling with jx, Kubernetes, and Helm. I run a Jenkinsfile on jx executing commands in the env directory:
sh 'jx step helm build'
sh 'jx step helm apply'
It finishes with success and deploys pods/creates deployments, etc.; however, helm list is empty.
When I execute something like helm install ... or helm upgrade --install ... it creates a release and helm list shows that.
Is it correct behavior?
More details:
EKS installed with:
eksctl create cluster --region eu-west-2 --name integration --version 1.12 \
  --nodegroup-name integration-nodes \
  --node-type t3.large \
  --nodes 3 \
  --nodes-min 1 \
  --nodes-max 10 \
  --node-ami auto \
  --full-ecr-access \
  --vpc-cidr "172.20.0.0/16"
Then I set up ingresses (external & internal) with some kubectl apply command (won't share the files). Then I set up routes and VPC-related stuff.
JX installed with:
jx install --provider=eks --ingress-namespace='internal-ingress-nginx' \
  --ingress-class='internal-nginx' \
  --ingress-deployment='nginx-internal-ingress-controller' \
  --ingress-service='internal-ingress-nginx' --on-premise \
  --external-ip='#########' \
  --git-api-token=######### \
  --git-username=######### --no-default-environments=true
Details from the installation:
? Select Jenkins installation type: Static Jenkins Server and Jenkinsfiles
? Would you like wait and resolve this address to an IP address and use it for the domain? No
? Domain ###########
? Cloud Provider eks
? Would you like to register a wildcard DNS ALIAS to point at this ELB address? Yes
? Your custom DNS name: ###########
? Would you like to enable Long Term Storage? A bucket for provider eks will be created No
? local Git user for GitHub server: ###########
? Do you wish to use GitHub as the pipelines Git server: Yes
? A local Jenkins X versions repository already exists, pull the latest? Yes
? A local Jenkins X cloud environments repository already exists, recreate with latest? Yes
? Pick default workload build pack: Kubernetes Workloads: Automated CI+CD with GitOps Promotion
Then I set up helm:
kubectl apply -f tiller-rbac-config.yaml
helm init --service-account tiller
where tiller-rbac-config.yaml is:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: tiller
    namespace: kube-system
helm version says:
Client: &version.Version{SemVer:"v2.13.1", GitCommit:"618447cbf203d147601b4b9bd7f8c37a5d39fbb4", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.13.1", GitCommit:"618447cbf203d147601b4b9bd7f8c37a5d39fbb4", GitTreeState:"clean"}
jx version says:
NAME VERSION
jx 2.0.258
jenkins x platform 2.0.330
Kubernetes cluster v1.12.6-eks-d69f1b
helm client Client: v2.13.1+g618447c
git git version 2.17.1
Operating System Ubuntu 18.04.2 LTS
Applications were imported this way:
jx import --branches="devel" --org ##### --disable-updatebot=true --git-api-token=##### --url git@github.com:#####.git
And environment was created this way:
jx create env --git-url=##### --name=integration --label=Integration --domain=##### --namespace=jx-integration --promotion=Auto --git-username=##### --git-private --branches="master|devel|test"
Going through the changelog, it seems that the tillerless mode has been made the default mode since version 2.0.246.
In Helm v2, Helm relies on its server-side component called Tiller. The Jenkins X tillerless mode means that instead of using Helm to install charts, the Helm client is only used for templating and generating the Kubernetes manifests. But then, those manifests are applied normally using kubectl, not helm/tiller.
The consequence is that Helm won't know about these installations/releases, because they were made by kubectl. So that's why you won't get the list of releases using Helm. That's the expected behavior, as you can read in the Jenkins X docs.
What --no-tiller means is to switch helm to use template mode which
means we no longer internally use helm install mychart to install a
chart, we actually use helm template mychart instead which generates
the YAML using the same helm charts and the standard helm
configuration management via --set and values.yaml files.
Then we use kubectl apply to apply the YAML.
As mentioned by James Strachan in the comments, when using the tillerless mode, you can view your deployments using jx step helm list
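In other words, what template mode boils down to is roughly the following (a sketch with placeholder chart and release names, not the exact commands jx runs internally):
helm template --name myrelease --values values.yaml ./mychart > manifests.yaml
kubectl apply -f manifests.yaml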