How to enable frontend HTTPS with Traefik in an Azure Service Fabric container - azure-service-fabric

My backend service runs in a Docker container hosted in Azure Service Fabric, and the service is stateful, so we use Traefik to expose the stateful service behind a stateless endpoint: Traefik forwards requests from the frontend to our backend. This works fine over HTTP. Now we have to enable HTTPS on the frontend.
I've configured HTTPS for Azure Service Fabric. When I log in to a cluster node, I can reach my backend service by its private IP, but I can't reach the service through the configured domain. The Traefik log shows "backend not found".
I'm using a self-signed certificate. Here is my configuration:
[traefikLog]
  filePath = "log/traefik.log"
  format = "json"
  logLevel = "DEBUG"

# Enable debug mode
#
# Optional
# Default: false
#
debug = true

# Traefik logs file
# If not defined, logs to stdout
#
# Optional
#
# traefikLogsFile = "log/traefik.log"

# Log level
#
# Optional
# Default: "ERROR"
# logLevel = "DEBUG"

# Entrypoints to be used by frontends that do not specify any entrypoint.
# Each frontend can specify its own entrypoints.
#
# Optional
# Default: ["http"]
#
defaultEntryPoints = ["http", "https"]

# Entrypoints definition
#
# Optional
# Default:
[entryPoints]
  [entryPoints.http]
    address = ":80"
    [entryPoints.http.redirect]
      entryPoint = "https"
  [entryPoints.https]
    address = ":443"
    [entryPoints.https.tls]
[acme]
  email = "abc@abc.com"
  storage = "acme.json"
  caServer = "https://acme-staging-v02.api.letsencrypt.org/directory"
  entryPoint = "https"
  [acme.httpChallenge]
    entryPoint = "http"
  [[acme.domains]]
    main = "domain1.azure.com"
  [[acme.domains]]
    main = "domain2.azure.com"

[entryPoints.traefik]
  address = ":8080"
# Enable access logs
# By default it will write to stdout and produce logs in the textual
# Common Log Format (CLF), extended with additional fields.
#
# Optional
#
[accessLog]
  # Sets the file path for the access log. If not specified, stdout will be used.
  # Intermediate directories are created if necessary.
  #
  # Optional
  # Default: os.Stdout
  #
  filePath = "log/log.txt"

  # Format is either "json" or "common".
  #
  # Optional
  # Default: "common"
  #
  # format = "common"

################################################################
# API definition
################################################################
[api]
  # Name of the related entry point
  #
  # Optional
  # Default: "traefik"
  #
  entryPoint = "traefik"

  # Enabled Dashboard
  #
  # Optional
  # Default: true
  #
  dashboard = true

  # Enable debug mode.
  # This will install HTTP handlers to expose Go expvars under /debug/vars and
  # pprof profiling data under /debug/pprof.
  # Additionally, the log level will be set to DEBUG.
  #
  # Optional
  # Default: false
  #
  debug = true
################################################################
# Service Fabric provider
################################################################

# Enable Service Fabric configuration backend
[serviceFabric]
  filename = "custom_config_template.tmpl"
  debugLogGeneratedTemplate = true

  # Service Fabric Management Endpoint
  clustermanagementurl = "https://localhost:19080"
  # Note: use "https://localhost:19080" if you're using a secure cluster

  # Service Fabric Management Endpoint API Version
  apiversion = "3.0"
  refreshSeconds = 10

  # Enable TLS connection.
  #
  # Optional
  #
  [serviceFabric.tls]
    cert = "certs/servicefabric.crt"
    key = "certs/servicefabric.key"
    insecureskipverify = true

# Enable REST Provider.
[rest]
  # Name of the related entry point
  #
  # Optional
  # Default: "traefik"
  #
  entryPoint = "traefik"
Here are some questions I don't understand:
1. In the dashboard, why is the frontend still HTTP and not HTTPS?
2. Why can't I reach my service at https://domain1.azure.com?
3. Do I have to enable HTTPS for my backend service as well? Right now I've done that, but I think it may be unnecessary: HTTPS vs. HTTP on the backend only matters when Traefik calls my backend, and we only need HTTPS on the Traefik frontend. Am I right?
4. In any case, since I have enabled HTTPS for my backend service, do I have to bind the backend to the same certificate that I configured in [entryPoints.https.tls]?

The issue was caused by my deployment: after updating the configuration, I had only redeployed the Traefik service. I had to redeploy both Traefik and the backend service.
1. See above: redeploying both services fixed it.
2. Same reason as question 1.
3. Backend HTTPS is unnecessary.
4. No.
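A quick way to confirm that Traefik has picked the backend up again after redeploying is to query its API on the dashboard entrypoint; a minimal sketch, assuming you run it from a cluster node and have jq available:
# List the frontends/backends Traefik 1.x currently knows about.
# :8080 is the "traefik" entrypoint defined in the configuration above.
curl -s http://localhost:8080/api/providers | jq .
# If the backend is missing from this output, that matches the "backend not found" log entry.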

Related

Grafana dashboard not working with Ingress

I have installed the kube-prometheus-stack below and get an error when trying to access the Grafana dashboard using its own Ingress URL. I believe I am missing something silly here but am unable to find any clues. I have looked at a similar post here and others as well.
Chart: kube-prometheus-stack-9.4.5
App Version: 0.38.1
When I navigate to the https://myorg.grafanatest.com URL, I get redirected to https://myorg.grafanatest.com/login with the following message.
Changes made to grafana/values.yaml:
grafana.ini:
  server:
    # The full public facing url you use in browser, used for redirects and emails
    root_url: https://myorg.grafanatest.com
Helm command used to install the Prometheus-Grafana operator after making the above changes:
helm install pg kube-prometheus-stack/ -n monitoring
I see the below settings in the grafana.ini file inside the Grafana pod.
[analytics]
check_for_updates = true
[grafana_net]
url = https://grafana.net
[log]
mode = console
[paths]
data = /var/lib/grafana/data
logs = /var/log/grafana
plugins = /var/lib/grafana/plugins
provisioning = /etc/grafana/provisioning
[server]
root_url = https://myorg.grafanatest.com/
Posting a solution here as it's working now. I followed the steps gumelaragum mentioned to create values.yaml, updated the values below in it, and passed that values.yaml to the helm install step. Not sure why it didn't work without enabling serve_from_sub_path, but that's fine as it's working now. Note that I didn't enable the Ingress section since I had already created the Ingress route outside the installation process.
helm show values prometheus-com/kube-prometheus-stack > custom-values.yaml
Then install after changing the below values in custom-values.yaml. Change the namespace as needed.
helm -n monitoring install -f ./custom-values.yaml pg prometheus-com/kube-prometheus-stack
grafana:
  enabled: true
  namespaceOverride: ""

  # set pspUseAppArmor to false to fix Grafana pod Init errors
  rbac:
    pspUseAppArmor: false

  grafana.ini:
    server:
      domain: mysb.grafanasite.com
      #root_url: "%(protocol)s://%(domain)s/"
      root_url: https://mysb.grafanasite.com/grafana/
      serve_from_sub_path: true

  ## Deploy default dashboards.
  ##
  defaultDashboardsEnabled: true

  adminPassword: prom-operator

  ingress:
    ## If true, Grafana Ingress will be created
    ##
    enabled: false

    ## Annotations for Grafana Ingress
    ##
    annotations: {}
      # kubernetes.io/ingress.class: nginx
      # kubernetes.io/tls-acme: "true"

    ## Labels to be added to the Ingress
    ##
    labels: {}

    ## Hostnames.
    ## Must be provided if Ingress is enabled.
    ##
    # hosts:
    #   - grafana.domain.com
    hosts:
      - mysb.grafanasite.com

    ## Path for grafana ingress
    path: /grafana/
I see the same values reflected in the grafana.ini file inside the Grafana container mount path (/etc/grafana/grafana.ini).
[analytics]
check_for_updates = true
[grafana_net]
url = https://grafana.net
[log]
mode = console
[paths]
data = /var/lib/grafana/data
logs = /var/log/grafana
plugins = /var/lib/grafana/plugins
provisioning = /etc/grafana/provisioning
[server]
domain = mysb.grafanasite.com
root_url = https://mysb.grafanasite.com/grafana/
serve_from_sub_path = true
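For reference, since the Ingress route in my case was created outside the chart, here is a rough sketch of what an externally managed Ingress for the /grafana/ sub-path could look like. The service name pg-grafana is an assumption based on the release name pg used above (check with kubectl -n monitoring get svc), and the networking.k8s.io/v1 API and nginx class are assumptions; adjust them to your cluster version and controller.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: grafana
  namespace: monitoring
  annotations:
    kubernetes.io/ingress.class: nginx   # or spec.ingressClassName, depending on your controller
spec:
  rules:
    - host: mysb.grafanasite.com
      http:
        paths:
          - path: /grafana/
            pathType: Prefix
            backend:
              service:
                name: pg-grafana   # assumed "<release>-grafana" service name
                port:
                  number: 80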
You need to edit the parent chart's values.yaml.
Get the default values.yaml from the kube-prometheus-stack chart and save it to a file:
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo add stable https://kubernetes-charts.storage.googleapis.com/
helm repo update
helm show values prometheus-community/kube-prometheus-stack > values.yaml
In the values.yaml file, edit it like this:
## Using default values from https://github.com/grafana/helm-charts/blob/main/charts/grafana/values.yaml
##
#### The grafana section below starts around line 509
grafana:
  enabled: true
  namespaceOverride: ""

  ## Deploy default dashboards.
  ##
  defaultDashboardsEnabled: true

  adminPassword: prom-operator

  ingress:
    ## If true, Grafana Ingress will be created
    ##
    enabled: true

    ## Annotations for Grafana Ingress
    ##
    annotations: {}
      # kubernetes.io/ingress.class: nginx
      # kubernetes.io/tls-acme: "true"

    ## Labels to be added to the Ingress
    ##
    labels: {}

    ## Hostnames.
    ## Must be provided if Ingress is enabled.
    ##
    # hosts:
    #   - grafana.domain.com
    hosts:
      - myorg.grafanatest.com

    ## Path for grafana ingress
    path: /
Set grafana.ingress.enabled to true and add myorg.grafanatest.com under grafana.ingress.hosts.
Apply it with:
helm -n monitoring install -f ./values.yaml kube-prometheus prometheus-community/kube-prometheus-stack
Hopefully this helps.
Update your grafana.ini config like this. The grafana.ini can usually be found in the Grafana ConfigMap:
kubectl get cm
kubectl edit cm map_name
data:
  grafana.ini: |
    [server]
    serve_from_sub_path = true
    domain = ingress-gateway.yourdomain.com
    root_url = http://ingress-gateway.yourdomain.com/grafana/
This grafana.ini is usually stored in a ConfigMap or in YAML files that can be edited.
Reapply or edit the rules and create a mapping in the Ingress; that should work.
Don't forget to restart your pod so that the ConfigMap changes are applied!
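For example, a minimal sketch, assuming Grafana runs as a Deployment in the monitoring namespace (the deployment name and label are assumptions; adjust them to your release):
# Restart the Grafana pods so they pick up the edited ConfigMap.
kubectl -n monitoring rollout restart deployment pg-grafana
# Alternatively, delete the pod and let the Deployment recreate it:
kubectl -n monitoring delete pod -l app.kubernetes.io/name=grafana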

Configure SMTP for SonarQube on Kubernetes Helm Chart

I want to automatically deploy SonarQube on Kubernetes, so the goal is to have everything configured automatically. I successfully created a values.yaml for the Helm chart that installs the LDAP plugin and configures it against our DC. But when configuring email settings like the SMTP host, they seem to be ignored.
I already tried to completely delete the chart and re-install it:
helm delete --purge sonarqube-test
helm install stable/sonarqube --namespace sonarqube-test --name sonarqube-test -f values-test.yaml
Although I set e.g. http.proxyHost to our mail server, it's still empty in the UI after deploying the values.yaml.
The sonarProperties property is documented and it seems to work: other properties, like the LDAP ones, were applied, since I can log in using LDAP after updating the values.
I'm not sure if this is k8s related, since others said it generally works. I went into the container using kubectl exec and looked at the generated sonar.properties file; it seems fine:
$ cat /opt/sonarqube/conf/sonar.properties
email.from=noreply@mydomain.com
email.fromName=SonarQube Test
email.prefix=[SONARQUBE Test]
email.smtp_host.secured=mymailserver.internal
sonar.security.realm=LDAP
sonar.updatecenter.activate=true
sonar.web.javaOpts=-Xmx2048m -Xms128m -XX:+HeapDumpOnOutOfMemoryError -server
There were some more properties for LDAP, like the bind user and so on, which I removed here.
So why are the email settings not applied after updating the chart, and not even when it got completely deleted and re-deployed?
values.yaml
replicaCount: 1
image:
  tag: 7.9-community
service:
  type: ClusterIP
  port: 80
ingress:
  enabled: true
  hosts:
    - name: sonarqube-test.mycluster.internal
      path: /
  tls:
    - hosts:
        - sonarqube-test.mycluster.internal
persistence:
  enabled: true
  storageClass: nfs-client
  accessMode: ReadWriteOnce
  size: 10Gi
postgresql:
  enabled: true
plugins:
  install:
    - "https://github.com/SonarSource/sonar-ldap/releases/download/2.2-RC3/sonar-ldap-plugin-2.2.0.601.jar"
sonarProperties:
  sonar.web.javaOpts: "-Xmx2048m -Xms128m -XX:+HeapDumpOnOutOfMemoryError -server"
  sonar.security.realm: LDAP
  ldap.url: "..."
  # More ldap config vars ...
  sonar.updatecenter.activate: true
  email.smtp_host.secured: "mymailserver.internal"
  email.fromName: "SonarQube Test"
  email.from: "noreply@mydomain.com"
  email.prefix: "[SONARQUBE Test]"
resources:
  limits:
    cpu: 4000m
    memory: 8096Mi
  requests:
    cpu: 500m
    memory: 3096Mi
You have defined a chart for SonarQube and configured TLS in your values.yaml file. Notice that you don't specify a secret name; according to the chart's definition, your tls section should look like the one below. Remember that you have to create this secret manually in the proper namespace.
The template for configuring tls looks like this:
tls: []
# Secrets must be manually created in the namespace.
# - secretName: chart-example-tls
#   hosts:
#     - chart-example.local
So in your case this section should look like this:
tls:
  # Secrets must be manually created in the namespace.
  - secretName: your-secret-name
    hosts:
      - sonarqube-test.mycluster.internal
At the same time, when configuring the postgresql dependency you didn't specify the user, database, password, and port for PostgreSQL, which you should do because you chose to use this database instead of MySQL.
Here is the template:
database:
  type: "postgresql"

## Configuration values for postgresql dependency
## ref: https://github.com/kubernetes/charts/blob/master/stable/postgresql/README.md
postgresql:
  # Enable to deploy the PostgreSQL chart
  enabled: true
  # To use an external PostgreSQL instance, set enabled to false and uncomment
  # the line below:
  # postgresServer: ""
  # To use an external secret for the password for an external PostgreSQL
  # instance, set enabled to false and provide the name of the secret on the
  # line below:
  # postgresPasswordSecret: ""
  postgresUser: "sonarUser"
  postgresPassword: "sonarPass"
  postgresDatabase: "sonarDB"
  # Specify the TCP port that PostgreSQL should use
  service:
    port: 5432
The most common cause of SMTP failures is a wrong outbound mail configuration.
You have to provide the following parameters in the SMTP configuration:
SMTP Host
SMTP Port
SMTP Username
SMTP Password
SMTP Encryption
Check that these parameters are those provided by your mail provider.
Check that you have followed the "Configure outbound mail settings" section in the application documentation page.
In your case you didn't specify the password, username, and port.
Add the following settings to your sonar.properties definition:
email.smtp_port.secured=your-port
email.smtp_secure_connection.secured=true
email.smtp_username.secured=your-username
email.smtp_password.secured=your-password
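Since the question configures everything through the Helm chart, the same settings would go into the existing sonarProperties block of values.yaml rather than into sonar.properties directly; a minimal sketch, where the port and the username/password values are placeholders to replace with your provider's details:
sonarProperties:
  # ... existing LDAP and email.from / fromName / prefix settings ...
  email.smtp_host.secured: "mymailserver.internal"
  email.smtp_port.secured: "587"                  # placeholder port
  email.smtp_secure_connection.secured: "true"
  email.smtp_username.secured: "your-username"    # placeholder
  email.smtp_password.secured: "your-password"    # placeholder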
Next thing: make sure that your cloud environment allows traffic on SMTP ports.
To avoid massive SPAM attacks, several clouds do not allow SMTP traffic on their default ports:
Google Cloud Platform does not allow SMTP traffic through default ports 25, 465 or 587.
GoDaddy also blocks SMTP traffic.
Here is the troubleshooting documentation for SMTP issues: SMTP-issues.
Make sure you aren't hitting one of these issues.
Please let me know if this helps.

How to sync a K8s service to a Consul cluster that is outside of K8s?

From the consul-k8s document:
The Consul server cluster can run either in or out of a Kubernetes cluster.
The Consul server cluster does not need to be running on the same machine or same platform as the sync process.
The sync process needs to be configured with the address to the Consul cluster as well as any additional access information such as ACL tokens.
The Consul cluster I am trying to sync is outside the k8s cluster. Based on the document, I must pass the address of the Consul cluster to the sync process. However, the Helm chart for installing the sync process doesn't contain any value to configure the Consul cluster IP address.
syncCatalog:
  # True if you want to enable the catalog sync. "-" for default.
  enabled: false
  image: null
  default: true # true will sync by default, otherwise requires annotation

  # toConsul and toK8S control whether syncing is enabled to Consul or K8S
  # as a destination. If both of these are disabled, the sync will do nothing.
  toConsul: true
  toK8S: true

  # k8sPrefix is the service prefix to prepend to services before registering
  # with Kubernetes. For example "consul-" will register all services
  # prepended with "consul-". (Consul -> Kubernetes sync)
  k8sPrefix: null

  # consulPrefix is the service prefix which prepends itself
  # to Kubernetes services registered within Consul
  # For example, "k8s-" will register all services prepended with "k8s-".
  # (Kubernetes -> Consul sync)
  consulPrefix: null

  # k8sTag is an optional tag that is applied to all of the Kubernetes services
  # that are synced into Consul. If nothing is set, defaults to "k8s".
  # (Kubernetes -> Consul sync)
  k8sTag: null

  # syncClusterIPServices syncs services of the ClusterIP type, which may
  # or may not be broadly accessible depending on your Kubernetes cluster.
  # Set this to false to skip syncing ClusterIP services.
  syncClusterIPServices: true

  # nodePortSyncType configures the type of syncing that happens for NodePort
  # services. The valid options are: ExternalOnly, InternalOnly, ExternalFirst.
  # - ExternalOnly will only use a node's ExternalIP address for the sync
  # - InternalOnly uses the node's InternalIP address
  # - ExternalFirst will preferentially use the node's ExternalIP address, but
  #   if it doesn't exist, it will use the node's InternalIP address instead.
  nodePortSyncType: ExternalFirst

  # aclSyncToken refers to a Kubernetes secret that you have created that contains
  # an ACL token for your Consul cluster which allows the sync process the correct
  # permissions. This is only needed if ACLs are enabled on the Consul cluster.
  aclSyncToken:
    secretName: null
    secretKey: null

  # nodeSelector labels for syncCatalog pod assignment, formatted as a multi-line string.
  # ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#nodeselector
  # Example:
  # nodeSelector: |
  #   beta.kubernetes.io/arch: amd64
  nodeSelector: null
So how can I set the Consul cluster IP address for the sync process?
It looks like the sync service runs via the Consul agent on the k8s host:
env:
  - name: HOST_IP
    valueFrom:
      fieldRef:
        fieldPath: status.hostIP
command:
  - consul-k8s sync-catalog \
    -http-addr=${HOST_IP}:8500
That can't be configured directly but helm can configure the agent/client via client.join (yaml src):
If this is null (default), then the clients will attempt to automatically join the server cluster running within Kubernetes. This means that with server.enabled set to true, clients will automatically join that cluster. If server.enabled is not true, then a value must be specified so the clients can join a valid cluster.
This value is passed to the consul agent as the --retry-join option.
client:
  enabled: true
  join:
    - consul1
    - consul2
    - consul3

syncCatalog:
  enabled: true
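A minimal sketch of how this could be applied, assuming the HashiCorp chart repository (the release name and values file name are placeholders; older setups installed from a local clone of the consul-helm repo instead):
helm repo add hashicorp https://helm.releases.hashicorp.com
helm repo update
# values.yaml contains the client.join and syncCatalog sections shown above
helm install consul hashicorp/consul -f values.yaml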

traefik 1.7.11 subdomain based access rules setup

I want to create IP-based subdomain access rules for the Traefik (1.7.11) ingress controller running on Kubernetes (EKS). All IPs are currently allowed to talk to the external/frontend entry point:
traefik.toml: |
  defaultEntryPoints = ["http","https"]
  logLevel = "INFO"
  [entryPoints]
    [entryPoints.http]
      address = ":80"
      compress = true
      [entryPoints.http.redirect]
        entryPoint = "https"
      [entryPoints.http.whiteList]
        sourceRange = ["0.0.0.0/0"]
    [entryPoints.https]
      address = ":443"
      compress = true
      [entryPoints.https.tls]
      [entryPoints.https.whiteList]
        sourceRange = ["0.0.0.0/0"]
But we only have prod environments running in this cluster.
I want to limit certain endpoints like monitoring.domain.com to a limited set of IPs (office locations) and keep *.domain.com (the default) accessible from the public internet.
Is there any way I can do this in Traefik?
You can try using the traefik.ingress.kubernetes.io/whitelist-source-range: "x.x.x.x/x, xxxx::/x" Traefik annotation on your Ingress object (a sketch follows below). You can also have 4 Ingress objects, one for each of stage.domain.com, qa.domain.com, dev.domain.com and prod.domain.com.
For anything other than prod.domain.com you can add a whitelist.
Another option is to change your traefik.toml with [entryPoints.http.whiteList], but then you may have to run different ingress controllers with a different ingress class for each environment.
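A minimal sketch of such an Ingress for the restricted subdomain, assuming Traefik 1.7 with the extensions/v1beta1 Ingress API; the service name, office CIDR, and Ingress name are placeholders:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: monitoring
  annotations:
    kubernetes.io/ingress.class: traefik
    # Only these source ranges may reach monitoring.domain.com (placeholder CIDR)
    traefik.ingress.kubernetes.io/whitelist-source-range: "203.0.113.0/24"
spec:
  rules:
    - host: monitoring.domain.com
      http:
        paths:
          - path: /
            backend:
              serviceName: monitoring-svc   # placeholder service
              servicePort: 80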

Traefik HTTP - HTTPS redirecting behind AWS ELB (TCP)

I have a Kubernetes setup where Traefik is my ingress controller. Traefik is behind an AWS ELB which listens on an SSL port (TCP:443) so that it can terminate the SSL using an ACM certificate. It then load balances to Traefik (in k8s), which listens on TCP:80. We require this setup because we whitelist on a per-ingress basis in Traefik and use the proxy protocol header to do this (we tried x-forwarded-for whitelisting on an HTTP load balancer but this was easy to bypass).
This is working for incoming HTTPS traffic, but I would like to set up HTTP to HTTPS redirection. So far I have set up a TCP:80 listener on the load balancer forwarding to TCP:81. I've also set up my Traefik entrypoints using a configuration file:
defaultEntryPoints = ["http"]
debug = false
logLevel = "INFO"
# Do not verify backend certificates (use https backends)
InsecureSkipVerify = true
[entryPoints]
[entryPoints.http]
address = ":80"
compress = true
[entryPoints.http.proxyProtocol]
insecure = true
trustedIPs = ["10.0.0.0/8"]
[entryPoints.redirect]
address = ":81"
compress = true
[entryPoints.http.redirect]
entryPoint = "http"
However this gives a
400 Bad Request
when I try to access any service on :80.
I assume this is because, for this method to work, Traefik itself needs to have an SSL listener, rather than the ELB.
Is there a way this can be set up so that all traffic that hits Traefik on :81 is rewritten to https?
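One direction to experiment with, not a confirmed fix for the 400: Traefik 1.7 entrypoints also accept a regex/replacement redirect, so the :81 entrypoint could rewrite incoming requests to the https URL itself instead of pointing at another entrypoint. A minimal sketch:
[entryPoints]
  # Redirect-only entrypoint: everything arriving on :81 is sent back as an https URL.
  [entryPoints.redirect]
    address = ":81"
    compress = true
    [entryPoints.redirect.redirect]
      regex = "^http://(.*)"
      replacement = "https://$1"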