Configure SMTP for SonarQube on Kubernetes Helm Chart

I want to deploy SonarQube on Kubernetes automatically, so the goal is to have everything configured without manual steps. I successfully created a values.yaml for the Helm chart that installs the LDAP plugin and configures it against our domain controller. But the email settings like the SMTP host seem to be ignored.
I already tried to completely delete the release and re-install it:
helm delete --purge sonarqube-test
helm install stable/sonarqube --namespace sonarqube-test --name sonarqube-test -f values-test.yaml
Although I set e.g. http.proxyHost to our mail server, it is still empty in the UI after deploying that values.yaml.
The sonarProperties property is documented and it seems to work: other properties, like the LDAP ones, were applied, since I can log in using LDAP after updating the values.
I'm not sure whether this is Kubernetes-related, since others said it works in general. I went into the container using kubectl exec and looked at the generated sonar.properties file, and it looks fine:
$ cat /opt/sonarqube/conf/sonar.properties
email.from=noreply#mydomain.com
email.fromName=SonarQube Test
email.prefix=[SONARQUBE Test]
email.smtp_host.secured=mymailserver.internal
sonar.security.realm=LDAP
sonar.updatecenter.activate=true
sonar.web.javaOpts=-Xmx2048m -Xms128m -XX:+HeapDumpOnOutOfMemoryError -server
There were some more LDAP properties, like the bind user, which I removed here.
So why are the email settings not applied after upgrading the chart, or even after it was completely deleted and re-deployed?
values.yaml
replicaCount: 1
image:
  tag: 7.9-community
service:
  type: ClusterIP
  port: 80
ingress:
  enabled: true
  hosts:
    - name: sonarqube-test.mycluster.internal
      path: /
  tls:
    - hosts:
        - sonarqube-test.mycluster.internal
persistence:
  enabled: true
  storageClass: nfs-client
  accessMode: ReadWriteOnce
  size: 10Gi
postgresql:
  enabled: true
plugins:
  install:
    - "https://github.com/SonarSource/sonar-ldap/releases/download/2.2-RC3/sonar-ldap-plugin-2.2.0.601.jar"
sonarProperties:
  sonar.web.javaOpts: "-Xmx2048m -Xms128m -XX:+HeapDumpOnOutOfMemoryError -server"
  sonar.security.realm: LDAP
  ldap.url: "..."
  # More ldap config vars ...
  sonar.updatecenter.activate: true
  email.smtp_host.secured: "mymailserver.internal"
  email.fromName: "SonarQube Test"
  email.from: "noreply#mydomain.com"
  email.prefix: "[SONARQUBE Test]"
resources:
  limits:
    cpu: 4000m
    memory: 8096Mi
  requests:
    cpu: 500m
    memory: 3096Mi

You have defined the chart for SonarQube and configured TLS in your values.yaml file. Note that you don't specify a secret name; according to the SonarQube chart definition, your tls section should look like the template below. Remember that you have to create this secret in the proper namespace manually.
The template for configuring TLS looks like this:
tls: []
  # Secrets must be manually created in the namespace.
  # - secretName: chart-example-tls
  #   hosts:
  #     - chart-example.local
So in your case this section should look like this:
tls:
  # Secrets must be manually created in the namespace.
  - secretName: your-secret-name
    hosts:
      - sonarqube-test.mycluster.internal
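For completeness, such a secret could be created with kubectl before installing the chart; this is only a sketch, where your-secret-name is the placeholder from above and tls.crt/tls.key are assumed certificate and key files:
kubectl create secret tls your-secret-name \
  --cert=tls.crt --key=tls.key \
  --namespace sonarqube-test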
At the same time, when configuring the postgresql dependency you didn't specify the user, database, password and port for PostgreSQL, which you should do because you chose this database instead of MySQL.
Here is the template:
database:
  type: "postgresql"

## Configuration values for postgresql dependency
## ref: https://github.com/kubernetes/charts/blob/master/stable/postgresql/README.md
postgresql:
  # Enable to deploy the PostgreSQL chart
  enabled: true
  # To use an external PostgreSQL instance, set enabled to false and uncomment
  # the line below:
  # postgresServer: ""
  # To use an external secret for the password for an external PostgreSQL
  # instance, set enabled to false and provide the name of the secret on the
  # line below:
  # postgresPasswordSecret: ""
  postgresUser: "sonarUser"
  postgresPassword: "sonarPass"
  postgresDatabase: "sonarDB"
  # Specify the TCP port that PostgreSQL should use
  service:
    port: 5432
The most common cause of SMTP failures is a wrong outbound mail configuration.
You have to provide the following parameters in the SMTP configuration:
SMTP Host
SMTP Port
SMTP Username
SMTP Password
SMTP Encryption
Check that these parameters are those provided by your mail provider.
Check that you have followed the “Configure outbound mail settings” section in the application documentation page.
In your case you didn't specify the password, username and port.
Add the following entries to your sonar.properties definition:
email.smtp_port.secured=port-number
email.smtp_secure_connection.secured=true
email.smtp_username.secured=your-username
email.smtp_password.secured=your-password
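Since the chart renders sonar.properties from the sonarProperties block, these entries would presumably go into the values.yaml like this (a sketch only; the port, username and password placeholders must be replaced with your mail provider's values):
sonarProperties:
  email.smtp_host.secured: "mymailserver.internal"
  email.smtp_port.secured: "your-port"
  email.smtp_secure_connection.secured: "true"
  email.smtp_username.secured: "your-username"
  email.smtp_password.secured: "your-password"
  email.from: "noreply#mydomain.com"
  email.fromName: "SonarQube Test"
  email.prefix: "[SONARQUBE Test]"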
Next thing:
Make sure that your cloud environment allows traffic on SMTP ports.
To avoid massive spam attacks, several clouds do not allow SMTP traffic on their default ports.
Google Cloud Platform does not allow SMTP traffic through the default ports 25, 465 or 587.
GoDaddy also blocks SMTP traffic.
Here is the troubleshooting documentation for SMTP issues: SMTP-issues.
Make sure you are not hitting one of these restrictions.
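A quick way to check whether the mail port is reachable from inside the cluster at all is a throwaway pod; this is only a sketch (busybox's nc options vary between builds, and port 25 is just an example, use whatever port your mail server listens on):
kubectl run -it --rm smtp-test --image=busybox --restart=Never -n sonarqube-test -- \
  nc -w 5 mymailserver.internal 25
# A reachable SMTP server should answer with its banner (a line starting with 220).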
Please let me know if this helps.

Related

Why is ArgoCD confusing GitHub.com with my own public IP?

I have just set up a kubernetes cluster on bare metal using kubeadm, Flannel and MetalLB. Next step for me is to install ArgoCD.
I installed the ArgoCD yaml from the "Getting Started" page and logged in.
When adding my Git repositories ArgoCD gives me very weird error messages:
The error message seems to suggest that ArgoCD for some reason is resolving github.com to my public IP address (I am not exposing SSH, therefore connection refused).
I cannot find any reason why it would do this. When using https:// instead of SSH I get the same result, but on port 443.
I have put a dummy pod in the same namespace as ArgoCD and made some DNS queries. These queries resolved correctly.
What makes ArgoCD think that github.com resolves to my public IP address?
EDIT:
I have also checked for network policies in the argocd namespace and found no policy that was restricting egress.
I have had this working on clusters in the same network previously and have not changed my router firewall since then.
I solved my problem!
My /etc/resolv.conf had two lines that caused trouble:
domain <my domain>
search <my domain>
These lines were added during the installation of my host machine's OS, and I did not realize they would affect me in this way. After removing these lines, everything is now working perfectly.
Multiple people told me to check resolv.conf, but I didn't realize what these two lines did until now.
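For anyone hitting a similar case: a likely mechanism is that the search domain makes the resolver expand github.com to github.com.<my domain>, which a wildcard DNS record can then answer with the site's own public IP. A hedged way to check resolution from inside the cluster, using the dnsutils image from the Kubernetes DNS-debugging docs:
kubectl run -it --rm dnsutils --restart=Never \
  --image=registry.k8s.io/e2e-test-images/jessie-dnsutils:1.3 -- nslookup github.com
# If the answer is your own public IP, inspect /etc/resolv.conf in the pod for search/domain lines.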
That looks like argoproj/argo-cd issue 1510, where the initial diagnostic was that the cluster was blocking outbound connections to GitHub, and the suggestion was to check the egress configuration.
Yet, the issue was resolved with an ingress rule configuration that needs to be defined in values.yaml.
Argo CD by default expects a dedicated subdomain, but in that case it was served under the /argocd path:
ingress:
  enabled: true
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/backend-protocol: HTTP
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
    nginx.ingress.kubernetes.io/rewrite-target: /
  path: /argocd
  hosts:
    - www.example.com
and this is defined under templates >> argocd-server-deployment.yaml:
containers:
  - name: argocd-server
    image: {{ .Values.server.image.repository }}:{{ .Values.server.image.tag }}
    imagePullPolicy: {{ .Values.server.image.pullPolicy }}
    command:
      - argocd-server
      - --staticassets
      - /shared/app
      - --repo-server
      - argocd-repo-server:8081
      - --insecure
      - --basehref
      - /argocd
The same issue includes a case very similar to yours.
In any case, do check your Git configuration (git config -l) as seen from the Argo CD cluster, to look for any insteadOf rule that would automatically rewrite github.com into a local URL (as seen here).
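One possible way to run that check from inside the cluster, assuming the default argocd namespace and the standard argocd-repo-server deployment name:
kubectl exec -n argocd deploy/argocd-repo-server -- git config -l
# Look for url.<something>.insteadof entries that rewrite github.com.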

cetic-nifi Invalid host header issue

Helm version: v3.5.2
Kubernetes version: v1.20.4
NiFi chart version: latest (1.0.2)
Issue: [cetic/nifi]-issue
I'm trying to connect to the NiFi UI deployed in Kubernetes.
I have set the following properties in the values.yaml:
properties:
  # use externalSecure for when inbound SSL is provided by nginx-ingress or other external mechanism
  sensitiveKey: changeMechangeMe # Must to have minimal 12 length key
  algorithm: NIFI_PBKDF2_AES_GCM_256
  externalSecure: false
  isNode: false
  httpsPort: 8443
  webProxyHost: 10.0.39.39:30666
  clusterPort: 6007

# ui service
service:
  type: NodePort
  httpsPort: 8443
  nodePort: 30666
  annotations: {}
  # loadBalancerIP:
  ## Load Balancer sources
  ## https://kubernetes.io/docs/tasks/access-application-cluster/configure-cloud-provider-firewall/#restrict-access-for-loadbalancer-service
  ##
  # loadBalancerSourceRanges:
  # - 10.10.10.0/24
  ## OIDC authentication requires "sticky" session on the LoadBalancer for JWT to work properly...but AWS doesn't like it on creation
  # sessionAffinity: ClientIP
  # sessionAffinityConfig:
  #   clientIP:
  #     timeoutSeconds: 10800
10.0.39.39 is the Kubernetes master node's internal IP.
When NiFi gets started I get the following:
WARNING: Kubernetes configuration file is group-readable. This is insecure. Location: /home/k8sadmin/.kube/config
WARNING: Kubernetes configuration file is world-readable. This is insecure. Location: /home/k8sadmin/.kube/config
NAME: nifi
LAST DEPLOYED: Thu Nov 25 12:38:00 2021
NAMESPACE: jeed-cluster
STATUS: deployed
REVISION: 1
NOTES:
Cluster endpoint IP address will be available at:
kubectl get svc nifi -n jeed-cluster -o jsonpath='{.status.loadBalancer.ingress[*].ip}'
Cluster endpoint domain name is: 10.0.39.39:30666 - please update your DNS or /etc/hosts accordingly!
Once you are done, your NiFi instance will be available at:
https://10.0.39.39:30666/nifi
and when I do a curl:
curl https://10.0.39.39:30666 put sample.txt -k
<h1>System Error</h1>
<h2>The request contained an invalid host header [<code>10.0.39.39:30666</
the request [<code>/</code>]. Check for request manipulation or third-part
t.</h2>
<h3>Valid host headers are [<code>empty
<ul><li>127.0.0.1</li>
<li>127.0.0.1:8443</li>
<li>localhost</li>
<li>localhost:8443</li>
<li>[::1]</li>
<li>[::1]:8443</li>
<li>nifi-0.nifi-headless.jeed-cluste
<li>nifi-0.nifi-headless.jeed-cluste
<li>10.42.0.8</li>
<li>10.42.0.8:8443</li>
<li>0.0.0.0</li>
<li>0.0.0.0:8443</li>
</ul>
I tried a lot of things but still cannot whitelist the master node IP in the proxy hosts.
Ingress is not used.
Edit: it looks like the properties set in values.yaml are not applied to nifi.properties inside the pod. Is there any reason for this?
Appreciate the help!
With a NodePort service you can also assign a port number from the 30000-32767 range.
You can apply the values when you install your chart with:
properties:
  webProxyHost: localhost
  httpsPort:
This should let NiFi whitelist your https://localhost:
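If access through localhost is the goal, one way to reach the UI is a port-forward; this sketch assumes the service and namespace names from the Helm output above (svc/nifi in jeed-cluster) and the chart's https port 8443:
kubectl port-forward -n jeed-cluster svc/nifi 8443:8443
# then open https://localhost:8443/nifi in a browser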

How to set SMTP for microk8s grafana (prometheus addon)

To use alerting by e-mail in Grafana, we have to set the SMTP settings in grafana.ini.
On Ubuntu, we can easily run the grafana-prometheus-k8s stack by command
microk8s enable prometheus
However, how can we feed grafana.ini to grafana running in a k8s pod?
We can modify the Grafana Kubernetes deployment manifest with a volume mount to feed a grafana.ini from our host to Grafana running in a pod.
First, prepare your grafana.ini with SMTP settings. E.g.
[smtp]
enabled = true
host = smtp.gmail.com:465
# Please change user and password to your ones.
user = foo#bar.com
password = your-password
Then, you can place this file on your host. E.g. /home/mydir/grafana.ini
Modify the loaded grafana k8s deployment manifest:
kubectl edit deployments.apps -n monitoring grafana
Add a new mount to volumeMounts (not the one in kubectl.kubernetes.io/last-applied-configuration):
volumeMounts:
  - mountPath: /etc/grafana/grafana.ini
    name: mydir
    subPath: grafana.ini
Add a new hostPath to volumes:
volumes:
  - hostPath:
      path: /home/mydir
      type: ""
    name: mydir
Finally, restart the deployment:
kubectl rollout restart -n monitoring deployment grafana
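To confirm the file actually landed in the container, one possible check (assuming the deployment is still called grafana in the monitoring namespace) is:
kubectl exec -n monitoring deploy/grafana -- cat /etc/grafana/grafana.ini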
Run this command, then use a web browser on your host to navigate to http://localhost:8080 to reach the Grafana web app:
kubectl port-forward -n monitoring svc/grafana 8080:3000
Then, you can navigate to Alerting / Notification channels / Add channel to add an Email notification channel and test it!

How to set a service port for ingresses in helmfile?

I'm new to Kubernetes and Helm and want to set up SSO with OIDC using vouch-proxy.
I found a tutorial which explains how to do it and was able to write some helmfiles that were accepted by Kubernetes.
I added the ingress configuration to the values.yaml that I load in my helmfile.yaml.
helmfile.yaml
bases:
  - environments.yaml
---
releases:
  - name: "vouch"
    chart: "halkeye/vouch"
    version: {{ .Environment.Values.version }}
    namespace: {{ .Environment.Values.namespace }}
    values:
      - values.yaml
values.yaml
# vouch config
# bare minimum to get vouch running with OpenID Connect (such as okta)
config:
  vouch:
    some:
      other:
        values:

# important part
ingress:
  enabled: true
  hosts:
    - "vouch.minikube"
  paths:
    - /
With this configuration helmfile creates an Ingress for the correct host, but when I open the URL in my browser it returns a 404 Not Found, which makes sense since I didn't specify the correct port (9090).
I tried some notations to add the port, but they led either to helmfile not updating the pod or to 500 Internal Server errors.
How can I add a port in the configuration? Is that the "correct" way to do it, or should ingresses still be handled with kubectl?
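For context, what the rendered Ingress ultimately needs is a backend pointing at the chart's Service and the port it exposes; the manifest below is only a generic illustration of that shape (the Service name vouch and port 9090 are assumptions taken from the question, not from the chart's actual templates):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: vouch
spec:
  rules:
    - host: vouch.minikube
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: vouch      # hypothetical Service name
                port:
                  number: 9090   # the port the Service actually exposes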

SSL Certificate Error with python-arango Library

I am trying to connect an application through the python-arango library. I have set up ArangoDB on Kubernetes nodes using this tutorial. My YAML file for the cluster is like this:
---
apiVersion: "database.arangodb.com/v1alpha"
kind: "ArangoDeployment"
metadata:
  name: "arango-cluster"
spec:
  mode: Cluster
  image: arangodb/arangodb:3.7.6
  tls:
    caSecretName: arango-cluster-ca
  agents:
    storageClassName: my-local-storage
    resources:
      requests:
        storage: 2Gi
  dbservers:
    storageClassName: my-local-storage
    resources:
      requests:
        storage: 17Gi
  externalAccess:
    type: NodePort
    nodePort: 31200
The setup seems fine, since I am able to access the web UI as well as the Arango shell. However, when I use the python-arango library to connect my application to the DB, I get a certificate-related error:
Max retries exceeded with url: /_db/testDB/_api/document/demo/10010605 (Caused by SSLError(SSLError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:852)'),))
When doing kubectl get secrets, I see arango-cluster-ca there, which I have explicitly mentioned in the YAML file above. I have also set the verification flag in the Python code to False as follows:
db = client.db(name='testDB', verify=False, username='root', password='')
Yet, it does not bypass the verification as expected.
I would like to understand what I could have missed, either during setup or in the Python call, that keeps me from bypassing this SSL certificate error, or whether it is possible to set the certificate up properly. I tried this Arango tutorial to set up a certificate, but it did not work for me.
Thanks.
The only workaround I was able to figure out was to opt for the unsecured route.
Instead of having arango-cluster-ca in the spec.tls.caSecretName field of the Arango cluster config file, I set the field to None. That allowed me to connect over http without any issues.
I would still like to know if there is some way to get it connected via https, so I am still open to answers; otherwise I will close this.
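For the https route, one possible (untested here) approach is to extract the CA from the arango-cluster-ca secret and point python-arango at it through a custom HTTP client; the sketch below relies on python-arango's documented DefaultHTTPClient hook, assumes the secret stores the CA under the ca.crt key, and uses a placeholder node IP with the NodePort from the spec. Verification can still fail if the served certificate does not list the node IP.

# Extract the CA certificate first:
#   kubectl get secret arango-cluster-ca -o jsonpath='{.data.ca\.crt}' | base64 -d > ca.crt
from arango import ArangoClient
from arango.http import DefaultHTTPClient

class CAVerifiedHTTPClient(DefaultHTTPClient):
    def create_session(self, host):
        # Reuse the default requests.Session but pin verification to our CA bundle.
        session = super().create_session(host)
        session.verify = "ca.crt"
        return session

client = ArangoClient(
    hosts="https://<node-ip>:31200",  # <node-ip> is a placeholder for a cluster node
    http_client=CAVerifiedHTTPClient(),
)
db = client.db(name="testDB", username="root", password="")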