Traefik v1.7 static certificates and dynamic ACME certificates - docker-compose

I am using traefik:1.7.6-alpine in Docker in swarm mode. I need to specify static SSL certificates alongside other self-managed ACME certificates.
This is the error I get when starting the container:
time="2020-06-18T02:45:52Z" level=info msg="\nStats collection is disabled.\nHelp us improve Traefik by turning this feature on :)\nMore details on: https://docs.traefik.io/basics/#collected-data\n"
time="2020-06-18T02:45:52Z" level=error msg="Failed to read new account, ACME data conversion is not available : unexpected end of JSON input"
time="2020-06-18T02:45:52Z" level=error msg="Unable to add ACME provider to the providers list: unable to get ACME account : unexpected end of JSON input"
time="2020-06-18T02:45:52Z" level=info msg="Preparing server https &{Address::443 TLS:0xc000288630 Redirect:<nil> Auth:<nil> WhitelistSourceRange:[] WhiteList:<nil> Compress:false ProxyProtocol:<nil> ForwardedHeaders:0xc0006a45c0} with readTimeout=0s writeTimeout=0s idleTimeout=3m0s"
time="2020-06-18T02:45:52Z" level=info msg="Preparing server traefik &{Address::8080 TLS:<nil> Redirect:<nil> Auth:<nil> WhitelistSourceRange:[] WhiteList:<nil> Compress:false ProxyProtocol:<nil> ForwardedHeaders:0xc0006a4560} with readTimeout=0s writeTimeout=0s idleTimeout=3m0s"
time="2020-06-18T02:45:52Z" level=info msg="Starting server on :443"
time="2020-06-18T02:45:52Z" level=info msg="Preparing server http &{Address::80 TLS:<nil> Redirect:<nil> Auth:<nil> WhitelistSourceRange:[] WhiteList:<nil> Compress:false ProxyProtocol:<nil> ForwardedHeaders:0xc0006a4580} with readTimeout=0s writeTimeout=0s idleTimeout=3m0s"
time="2020-06-18T02:45:52Z" level=info msg="Starting provider configuration.ProviderAggregator {}"
time="2020-06-18T02:45:52Z" level=info msg="Starting server on :8080"
time="2020-06-18T02:45:52Z" level=info msg="Starting provider *docker.Provider {\"Watch\":true,\"Filename\":\"\",\"Constraints\":null,\"Trace\":false,\"TemplateVersion\":2,\"DebugLogGeneratedTemplate\":false,\"Endpoint\":\"unix:///var/run/docker.sock\",\"Domain\":\"arkaangel.com\",\"TLS\":null,\"ExposedByDefault\":false,\"UseBindPortIP\":false,\"SwarmMode\":false,\"Network\":\"\",\"SwarmModeRefreshSeconds\":15}"
time="2020-06-18T02:45:52Z" level=info msg="Starting server on :80"
This is my traefik.toml
debug = true
logLevel = "INFO"
defaultEntryPoints = ["http", "https"]

[entryPoints]
  [entryPoints.http]
    address = ":80"
  [entryPoints.https]
    address = ":443"
    [entryPoints.https.tls]
      [[entryPoints.https.tls.certificates]]
        certFile = "/path/to/first/first.crt"
        keyFile = "/path/to/first/first.key"
      [[entryPoints.https.tls.certificates]]
        certFile = "/path/to/second/second.crt"
        keyFile = "/path/to/second/second.key"

[api]
  dashboard = true
  [api.statistics]
    recentErrors = 10

[docker]
  exposedByDefault = false
  watch = true
  domain = "mydomain.com"

[acme]
  email = "myemail@gmail.com"
  storage = "/etc/traefik/acme/acme.json"
  entryPoint = "https"
  acmeLogging = true
  onHostRule = true
  [acme.httpChallenge]
    entryPoint = "http"
  [[acme.domains]]
    main = "third-site.com"
And this is how I mount the acme.json file in the docker-compose to keep the generated certificates:
volumes:
  - ./traefik/acme/acme.json:/etc/traefik/acme/acme.json
The acme.json file has 600 permissions and owner root:root.
In addition to the configuration shown, here is what I have tried without being able to generate the certificates:
not mapping the acme.json file itself but its parent folder, so that Traefik creates acme.json (failed)
not mapping any volume for acme.json, so it is lost when the container is removed (failed)
changing the owner of acme.json to myuser:myuser, so that from inside the container user 1000 shows as the owner (failed)

The way I was able to solve the error "Failed to read new account, ACME data conversion is not available : unexpected end of JSON input" was to write {} inside acme.json: apparently Traefik fails when it tries to read an empty file and parse it as JSON.
Summary:
When creating acme.json on the host before mapping it, you have to do:
touch acme.json
echo '{}' > acme.json
chmod 600 acme.json
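The root cause can be reproduced outside Traefik; here is a minimal Python sketch (not Traefik's actual code) of why an empty acme.json fails but {} parses cleanly:

```python
import json

# An empty file reads as an empty string, which is not valid JSON:
try:
    json.loads("")
except json.JSONDecodeError as e:
    print("empty acme.json:", e)  # Expecting value: line 1 column 1 (char 0)

# An empty JSON object parses fine, which is why writing {} fixes the startup error:
print(json.loads("{}"))  # {}
```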

Related

Error in Argo workflow run on local Minikube K8s cluster with MinIO as the artifact repository

I'm running an Argo workflow on a local Minikube K8s cluster with MinIO. I'm setting up an artifact repository on MinIO where output artifacts from my workflow can be stored. I followed the instructions here: https://argoproj.github.io/argo-workflows/configure-artifact-repository/#configuring-minio
The error I'm running into is: failed to create new S3 client: Endpoint url cannot have fully qualified paths.
My MinIO endpoint is at http://127.0.0.1:52139.
Here is my workflow YAML file:
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: artifactory-repository-ref-
spec:
  archiveLogs: true
  entrypoint: main
  templates:
    - name: main
      container:
        image: docker/whalesay:latest
        command: [ sh, -c ]
        args: [ "cowsay hello world | tee /tmp/hello_world.txt" ]
      archiveLocation:
        archiveLogs: true
      outputs:
        artifacts:
          - name: hello_world
            path: /tmp/hello_world.txt
Here is my workflow-controller-configmap YAML which is deployed in the same namespace as the workflow:
# This file describes the config settings available in the workflow controller configmap
apiVersion: v1
kind: ConfigMap
metadata:
  name: workflow-controller-configmap
data: # "config: |" key is optional in 2.7+!
  artifactRepository: | # However, all nested maps must be strings
    archiveLogs: true
    s3:
      endpoint: argo-artifacts:9000
      bucket: my-bucket
      insecure: true
      accessKeySecret: # omit if accessing via AWS IAM
        name: my-minio-cred
        key: accessKey
      secretKeySecret: # omit if accessing via AWS IAM
        name: my-minio-cred
        key: secretKey
      useSDKCreds: true
I've created a secret called my-minio-cred in the same namespace where the workflow is running.
Here are the logs from the pod where the workflow is running:
time="2023-02-16T21:39:05.044Z" level=info msg="Starting Workflow Executor" version=v3.4.5
time="2023-02-16T21:39:05.047Z" level=info msg="Using executor retry strategy" Duration=1s Factor=1.6 Jitter=0.5 Steps=5
time="2023-02-16T21:39:05.047Z" level=info msg="Executor initialized" deadline="0001-01-01 00:00:00 +0000 UTC" includeScriptOutput=false namespace=argo podName=artifactory-repository-ref-5tcmt template="{\"name\":\"main\",\"inputs\":{},\"outputs\":{\"artifacts\":[{\"name\":\"hello_world\",\"path\":\"/tmp/hello_world.txt\"}]},\"metadata\":{},\"container\":{\"name\":\"\",\"image\":\"docker/whalesay:latest\",\"command\":[\"sh\",\"-c\"],\"args\":[\"cowsay hello world | tee /tmp/hello_world.txt\"],\"resources\":{}},\"archiveLocation\":{\"archiveLogs\":true,\"s3\":{\"endpoint\":\"http://127.0.0.1:52897\",\"bucket\":\"my-bucket\",\"insecure\":true,\"accessKeySecret\":{\"name\":\"my-minio-cred\",\"key\":\"accessKey\"},\"secretKeySecret\":{\"name\":\"my-minio-cred\",\"key\":\"secretKey\"},\"useSDKCreds\":true,\"key\":\"artifactory-repository-ref-5tcmt/artifactory-repository-ref-5tcmt\"}}}" version="&Version{Version:v3.4.5,BuildDate:2023-02-07T12:36:25Z,GitCommit:1253f443baa8ad1610d2e62ec26ecdc85fe1b837,GitTag:v3.4.5,GitTreeState:clean,GoVersion:go1.18.10,Compiler:gc,Platform:linux/arm64,}"
time="2023-02-16T21:39:05.047Z" level=info msg="Starting deadline monitor"
time="2023-02-16T21:39:08.048Z" level=info msg="Main container completed" error="<nil>"
time="2023-02-16T21:39:08.048Z" level=info msg="No Script output reference in workflow. Capturing script output ignored"
time="2023-02-16T21:39:08.048Z" level=info msg="No output parameters"
time="2023-02-16T21:39:08.048Z" level=info msg="Saving output artifacts"
time="2023-02-16T21:39:08.048Z" level=info msg="stopping progress monitor (context done)" error="context canceled"
time="2023-02-16T21:39:08.048Z" level=info msg="Deadline monitor stopped"
time="2023-02-16T21:39:08.048Z" level=info msg="Staging artifact: hello_world"
time="2023-02-16T21:39:08.049Z" level=info msg="Copying /tmp/hello_world.txt from container base image layer to /tmp/argo/outputs/artifacts/hello_world.tgz"
time="2023-02-16T21:39:08.049Z" level=info msg="/var/run/argo/outputs/artifacts/tmp/hello_world.txt.tgz -> /tmp/argo/outputs/artifacts/hello_world.tgz"
time="2023-02-16T21:39:08.049Z" level=info msg="S3 Save path: /tmp/argo/outputs/artifacts/hello_world.tgz, key: artifactory-repository-ref-5tcmt/artifactory-repository-ref-5tcmt/hello_world.tgz"
time="2023-02-16T21:39:08.049Z" level=info msg="Creating minio client using static credentials" endpoint="http://127.0.0.1:52897"
time="2023-02-16T21:39:08.049Z" level=warning msg="Non-transient error: Endpoint url cannot have fully qualified paths."
time="2023-02-16T21:39:08.049Z" level=info msg="Save artifact" artifactName=hello_world duration="282.917µs" error="failed to create new S3 client: Endpoint url cannot have fully qualified paths." key=artifactory-repository-ref-5tcmt/artifactory-repository-ref-5tcmt/hello_world.tgz
time="2023-02-16T21:39:08.049Z" level=error msg="executor error: failed to create new S3 client: Endpoint url cannot have fully qualified paths."
time="2023-02-16T21:39:08.049Z" level=info msg="S3 Save path: /tmp/argo/outputs/logs/main.log, key: artifactory-repository-ref-5tcmt/artifactory-repository-ref-5tcmt/main.log"
time="2023-02-16T21:39:08.049Z" level=info msg="Creating minio client using static credentials" endpoint="http://127.0.0.1:52897"
time="2023-02-16T21:39:08.049Z" level=warning msg="Non-transient error: Endpoint url cannot have fully qualified paths."
time="2023-02-16T21:39:08.049Z" level=info msg="Save artifact" artifactName=main-logs duration="28.5µs" error="failed to create new S3 client: Endpoint url cannot have fully qualified paths." key=artifactory-repository-ref-5tcmt/artifactory-repository-ref-5tcmt/main.log
time="2023-02-16T21:39:08.049Z" level=error msg="executor error: failed to create new S3 client: Endpoint url cannot have fully qualified paths."
time="2023-02-16T21:39:08.056Z" level=info msg="Create workflowtaskresults 403"
time="2023-02-16T21:39:08.056Z" level=warning msg="failed to patch task set, falling back to legacy/insecure pod patch, see https://argoproj.github.io/argo-workflows/workflow-rbac/" error="workflowtaskresults.argoproj.io is forbidden: User \"system:serviceaccount:argo:default\" cannot create resource \"workflowtaskresults\" in API group \"argoproj.io\" in the namespace \"argo\""
time="2023-02-16T21:39:08.057Z" level=info msg="Patch pods 403"
time="2023-02-16T21:39:08.057Z" level=warning msg="Non-transient error: pods \"artifactory-repository-ref-5tcmt\" is forbidden: User \"system:serviceaccount:argo:default\" cannot patch resource \"pods\" in API group \"\" in the namespace \"argo\""
time="2023-02-16T21:39:08.057Z" level=error msg="executor error: pods \"artifactory-repository-ref-5tcmt\" is forbidden: User \"system:serviceaccount:argo:default\" cannot patch resource \"pods\" in API group \"\" in the namespace \"argo\""
time="2023-02-16T21:39:08.057Z" level=info msg="Alloc=6350 TotalAlloc=12366 Sys=18642 NumGC=4 Goroutines=5"
time="2023-02-16T21:39:08.057Z" level=fatal msg="failed to create new S3 client: Endpoint url cannot have fully qualified paths."
I've tried changing the endpoint key in the workflow-controller-config.yaml from 127.0.0.1:52139 to 127.0.0.1:9000 and also to argo-artifacts:9000, but it still doesn't work. argo-artifacts is the name of the LoadBalancer service that's created by the helm install argo-artifacts minio/minio command.
I got the endpoint of the MinIO bucket from minikube service --url argo-artifacts, as described in the 'Configuring MinIO' section at https://argoproj.github.io/argo-workflows/configure-artifact-repository/#configuring-minio
Everything is in the same namespace.
What could be wrong here?
I tried changing the endpoint URL of the MinIO bucket, changing namespaces for different components, and changing the namespace that the argo-artifacts service gets deployed in.
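The error message suggests the client is being handed an endpoint that still carries a scheme (http://...); S3/MinIO clients generally expect a bare host:port plus a separate insecure flag. A hedged sketch (hypothetical helper, not part of Argo or minio-go) of normalizing such an endpoint:

```python
from urllib.parse import urlparse

def normalize_s3_endpoint(url: str):
    """Return (host:port, insecure) from an endpoint that may carry a scheme.

    Hypothetical helper for illustration only.
    """
    # urlparse only recognizes the host part when "//" is present,
    # so scheme-less inputs like "argo-artifacts:9000" get it prepended.
    parsed = urlparse(url if "//" in url else "//" + url)
    insecure = parsed.scheme != "https"
    return parsed.netloc, insecure

print(normalize_s3_endpoint("http://127.0.0.1:52139"))  # ('127.0.0.1:52139', True)
print(normalize_s3_endpoint("argo-artifacts:9000"))     # ('argo-artifacts:9000', True)
```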

GKE - click to deploy - prometheus-server cannot start due to `Opening storage failed open /data/XXX/meta.json: no such file or directory`

I deployed Prometheus + Grafana to my Kubernetes cluster using the Google click-to-deploy setup. Kubernetes is 1.14.10-gke.36. Unfortunately Prometheus isn't able to start: it continuously starts and then gets terminated due to Opening storage failed open /data/XXX/meta.json: no such file or directory.
I haven't changed anything in the setup. Does anyone know how to solve this?
My logs:
E 2020-05-29T14:50:09.823814201Z level=info ts=2020-05-29T14:50:09.81844866Z caller=main.go:220 msg="Starting Prometheus" version="(version=2.2.1, branch=HEAD, revision=bc6058c81272a8d938c05e75607371284236aadc)"
E 2020-05-29T14:50:09.823909260Z level=info ts=2020-05-29T14:50:09.818563567Z caller=main.go:221 build_context="(go=go1.10, user=root@149e5b3f0829, date=20180314-14:15:45)"
E 2020-05-29T14:50:09.823917570Z level=info ts=2020-05-29T14:50:09.818590869Z caller=main.go:222 host_details="(Linux 4.14.138+ #1 SMP Tue Sep 3 02:58:08 PDT 2019 x86_64 prometheus-1-prometheus-1 (none))"
E 2020-05-29T14:50:09.823924055Z level=info ts=2020-05-29T14:50:09.818612642Z caller=main.go:223 fd_limits="(soft=1048576, hard=1048576)"
E 2020-05-29T14:50:09.828319848Z level=info ts=2020-05-29T14:50:09.828105376Z caller=web.go:382 component=web msg="Start listening for connections" address=0.0.0.0:9090
E 2020-05-29T14:50:09.909997968Z level=info ts=2020-05-29T14:50:09.828059101Z caller=main.go:504 msg="Starting TSDB ..."
E 2020-05-29T14:50:09.911528108Z level=info ts=2020-05-29T14:50:09.911319586Z caller=main.go:398 msg="Stopping scrape discovery manager..."
E 2020-05-29T14:50:09.911559337Z level=info ts=2020-05-29T14:50:09.91137533Z caller=main.go:411 msg="Stopping notify discovery manager..."
E 2020-05-29T14:50:09.911600849Z level=info ts=2020-05-29T14:50:09.911396384Z caller=main.go:432 msg="Stopping scrape manager..."
E 2020-05-29T14:50:09.911611998Z level=info ts=2020-05-29T14:50:09.9114095Z caller=main.go:394 msg="Scrape discovery manager stopped"
E 2020-05-29T14:50:09.911617814Z level=info ts=2020-05-29T14:50:09.911434087Z caller=manager.go:460 component="rule manager" msg="Stopping rule manager..."
E 2020-05-29T14:50:09.911624146Z level=info ts=2020-05-29T14:50:09.911468881Z caller=manager.go:466 component="rule manager" msg="Rule manager stopped"
E 2020-05-29T14:50:09.911630066Z level=info ts=2020-05-29T14:50:09.911546355Z caller=notifier.go:512 component=notifier msg="Stopping notification manager..."
E 2020-05-29T14:50:09.911742492Z level=info ts=2020-05-29T14:50:09.911620851Z caller=main.go:407 msg="Notify discovery manager stopped"
E 2020-05-29T14:50:09.911807858Z level=info ts=2020-05-29T14:50:09.911746592Z caller=main.go:573 msg="Notifier manager stopped"
E 2020-05-29T14:50:09.911864605Z level=info ts=2020-05-29T14:50:09.91179338Z caller=main.go:426 msg="Scrape manager stopped"
E 2020-05-29T14:50:09.919909034Z level=error ts=2020-05-29T14:50:09.919646048Z caller=main.go:582 err="Opening storage failed open /data/01D37JS32JWMR54HQXBQCRJW1V/meta.json: no such file or directory"
E 2020-05-29T14:50:09.919945114Z level=info ts=2020-05-29T14:50:09.91972603Z caller=main.go:584 msg="See you next time!"

Cannot send data to Minio using Argo workflow running on Minikube

I'm testing Argo workflows on Minikube, and I'm using MinIO to upload/download data created within the workflow. When I submit the template YAML, I get a failed to save outputs error on the pod.
I checked the logs using kubectl logs -n air [POD NAME] -c wait; the result is below.
time="2019-04-24T04:25:27Z" level=info msg="Creating a docker executor"
time="2019-04-24T04:25:27Z" level=info msg="Executor (version: v2.2.1, build_date: 2018-10-11T16:27:29Z) and goes on and on
time="2019-04-24T04:25:27Z" level=info msg="Waiting on main container"
time="2019-04-24T04:25:29Z" level=info msg="main container started with container ID: 86afd5f5a35fbea3fcd65fdf565f8194d79535034d94548bb371681faf549e6e"
time="2019-04-24T04:25:29Z" level=info msg="Starting annotations monitor"
time="2019-04-24T04:25:29Z" level=info msg="docker wait 86afd5f5a35fbea3fcd65fdf565f8194d79535034d94548bb371681faf549e6e"
time="2019-04-24T04:25:29Z" level=info msg="Starting deadline monitor"
time="2019-04-24T04:25:33Z" level=info msg="Main container completed"
time="2019-04-24T04:25:33Z" level=info msg="No sidecars"
time="2019-04-24T04:25:33Z" level=info msg="Saving output artifacts"
time="2019-04-24T04:25:33Z" level=info msg="Saving artifact: get-data"
time="2019-04-24T04:25:33Z" level=info msg="Archiving 86afd5f5a35fbea3fcd65fdf565f8194d79535034d94548bb371681faf549e6e:/data/ to /argo/outputs/artifacts/get-data.tgz"
time="2019-04-24T04:25:33Z" level=info msg="sh -c docker cp -a 86afd5f5a35fbea3fcd65fdf565f8194d79535034d94548bb371681faf549e6e:/data/ - | gzip > /argo/outputs/artifacts/get-data.tgz"
time="2019-04-24T04:25:33Z" level=info msg="Annotations monitor stopped"
time="2019-04-24T04:25:34Z" level=info msg="Archiving completed"
time="2019-04-24T04:25:34Z" level=info msg="Creating minio client 192.168.99.112:31774 using IAM role"
time="2019-04-24T04:25:34Z" level=info msg="Saving from /argo/outputs/artifacts/get-data.tgz to s3 (endpoint: 192.168.99.112:31774, bucket: reseach-bucket, key: /data/)"
time="2019-04-24T04:25:34Z" level=info msg="Deadline monitor stopped"
time="2019-04-24T04:26:04Z" level=info msg="Alloc=3827 TotalAlloc=11256 Sys=9830 NumGC=4 Goroutines=7"
time="2019-04-24T04:26:04Z" level=fatal msg="Get http://169.254.169.254/latest/meta-data/iam/security-credentials: dial tcp 169.254.169.254:80: i/o and goes on and on
And the template yaml file looks like this:
apiVersion: argoproj.io/v1alpha1
kind: Workflow
...
########################################
- name: template-data-handling
activeDeadlineSeconds: 10800
outputs:
artifacts:
- name: get-data
path: /data/
s3:
endpoint: 192.168.99.112:31774
bucket: reseach-bucket
key: /data/
secretKeySecret:
name: minio-credentials
key: accesskey
secretKeySecret:
name: minio-credentials
key: secretkey
retryStrategy:
limit: 1
container:
image: demo-pipeline
imagePullPolicy: Never
command: [/bin/sh, -c]
args:
- |
python test.py
Could someone help?
Did you create the minio-credentials secret, containing secretkey and accesskey, in the namespace where the workflow is running?
Example:
The Argo controller pod runs in the argo namespace, while the workflow template is submitted in the default namespace. The minio-credentials secret should then be available in the default namespace.
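To make that concrete, a secret manifest of the shape the workflow's s3 block references, created in the workflow's namespace (the access/secret key values are placeholders):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: minio-credentials
  namespace: default   # must match the namespace the workflow runs in
type: Opaque
stringData:
  accesskey: <your-minio-access-key>
  secretkey: <your-minio-secret-key>
```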

Traefik whitelist with X-Forwarded-For header using entryPoints.http.forwardedHeaders not working

I am trying to put an ingress resource behind a whitelist using traefik 1.7.6, but it keeps returning a 403 status code.
Note that Traefik is behind a load balancer that adds the X-Forwarded-For header to the request (https://docs.cloud.oracle.com/iaas/Content/Balance/Reference/httpheaders.htm).
Here is the traefik configuration
data:
  traefik.toml: |
    defaultEntryPoints = ["http"]
    [entryPoints]
      [entryPoints.http]
        address = ":80"
        [entryPoints.http.forwardedHeaders]
          trustedIPs = ["<**load-balancer-ip**>"]
      [entryPoints.traefik]
        address = ":8080"
        [entryPoints.traefik.auth.basic]
          users = ["admin:secret"]
    [kubernetes]
      [kubernetes.ingressEndpoint]
        publishedService = "default/mygateway"
    [ping]
      entryPoint = "http"
    [api]
      entryPoint = "traefik"
And here is the ingress resource that should be whitelisted:
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: frontend-ingress
  annotations:
    kubernetes.io/ingress.class: traefik
    traefik.frontend.rule.type: PathPrefixStrip
    ingress.kubernetes.io/whitelist-x-forwarded-for: "true"
    traefik.ingress.kubernetes.io/whitelist-source-range: "<whitelisted-ips-range>"
spec:
  rules:
    - host:
      http:
        paths:
          - path: /mypath
            backend:
              serviceName: myservice
              servicePort: myserviceport
---
If I make a request from one of the whitelisted IPs, I can see that Traefik isn't using the X-Forwarded-For header to resolve my real IP.
time="2019-02-28T14:31:51Z" level=debug msg="request &{Method:GET URL:/mypath Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Authorization:[Basic YWRtaW46YWRtaW4=] Referer:[https://<load-balancer-ip>/mypath] Accept-Encoding:[gzip, deflate, br] Accept-Language:[en-US,en;q=0.9,it;q=0.8] Connection:[keep-alive] Access-Control-Allow-Origin:[*] Accept:[application/json, text/plain, */*] Dnt:[1] User-Agent:[Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/72.0.3626.119 Safari/537.36]] Body:{} GetBody:<nil> ContentLength:0 TransferEncoding:[] Close:false Host:<load-balancer-ip> Form:map[] PostForm:map[] MultipartForm:<nil> Trailer:map[] RemoteAddr:<load-balancer-ip>:35818 RequestURI:/mypath TLS:<nil> Cancel:<nil> Response:<nil> ctx:0xc000b5e5d0} - rejecting: "<load-balancer-ip>:35818\" matched none of the white list"
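For context, proxy-aware whitelisting generally resolves the client IP by walking X-Forwarded-For from right to left and skipping trusted proxy hops, which is why the remote address must match trustedIPs before the header is consulted. A rough Python sketch of that general mechanism (not Traefik's actual implementation; the trusted range is a placeholder):

```python
from ipaddress import ip_address, ip_network

# Placeholder trusted range standing in for the load balancer's IPs.
TRUSTED = [ip_network("10.0.0.0/8")]

def client_ip(remote_addr: str, xff: str) -> str:
    """Pick the real client IP: scan XFF hops right to left, skip trusted proxies."""
    hops = [h.strip() for h in xff.split(",") if h.strip()] + [remote_addr]
    for hop in reversed(hops):
        if not any(ip_address(hop) in net for net in TRUSTED):
            return hop
    # Every hop was a trusted proxy; fall back to the leftmost entry.
    return hops[0]

print(client_ip("10.0.0.5", "203.0.113.7, 10.0.0.9"))  # 203.0.113.7
```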
Adding the service file.yml:
---
kind: Service
apiVersion: v1
metadata:
  name: traefik-ingress-controller
  labels:
    app: traefik-ingress-controller
  annotations:
spec:
  selector:
    app: traefik-ingress-controller
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
      nodePort: 30080
      name: http
  type: NodePort
  externalTrafficPolicy: Local
---
UPDATE: Traefik debug logs
time="2019-03-01T13:57:15Z" level=info msg="Using TOML configuration file /config/traefik.toml"
time="2019-03-01T13:57:15Z" level=info msg="Traefik version v1.7.6 built on 2018-12-14_06:43:37AM"
time="2019-03-01T13:57:15Z" level=debug msg="Global configuration loaded {\"LifeCycle\":{\"RequestAcceptGraceTimeout\":0,\"GraceTimeOut\":10000000000},\"GraceTimeOut\":0,\"Debug\":false,\"CheckNewVersion\":true,\"SendAnonymousUsage\":false,\"AccessLogsFile\":\"\",\"AccessLog\":null,\"TraefikLogsFile\":\"\",\"TraefikLog\":null,\"Tracing\":null,\"LogLevel\":\"DEBUG\",\"EntryPoints\":{\"http\":{\"Address\":\":80\",\"TLS\":null,\"Redirect\":null,\"Auth\":null,\"WhitelistSourceRange\":null,\"WhiteList\":null,\"Compress\":false,\"ProxyProtocol\":null,\"ForwardedHeaders\":{\"Insecure\":false,\"TrustedIPs\":[\"<loadbalancer-ip-1>/32\",\"<loadbalancer-ip-2>/32\"]}},\"traefik\":{\"Address\":\":8080\",\"TLS\":null,\"Redirect\":null,\"Auth\":{\"basic\":{\"users\":[\"admin:$apr1$FV9Q2vjA$gJt2UT8bt6DT6RVDm5qI20\"]}},\"WhitelistSourceRange\":null,\"WhiteList\":null,\"Compress\":false,\"ProxyProtocol\":null,\"ForwardedHeaders\":{\"Insecure\":true,\"TrustedIPs\":null}}},\"Cluster\":null,\"Constraints\":[],\"ACME\":null,\"DefaultEntryPoints\":[\"http\"],\"ProvidersThrottleDuration\":2000000000,\"MaxIdleConnsPerHost\":200,\"IdleTimeout\":0,\"InsecureSkipVerify\":false,\"RootCAs\":null,\"Retry\":null,\"HealthCheck\":{\"Interval\":30000000000},\"RespondingTimeouts\":null,\"ForwardingTimeouts\":null,\"AllowMinWeightZero\":false,\"KeepTrailingSlash\":false,\"Web\":null,\"Docker\":null,\"File\":null,\"Marathon\":null,\"Consul\":null,\"ConsulCatalog\":null,\"Etcd\":null,\"Zookeeper\":null,\"Boltdb\":null,\"Kubernetes\":{\"Watch\":true,\"Filename\":\"\",\"Constraints\":[],\"Trace\":false,\"TemplateVersion\":0,\"DebugLogGeneratedTemplate\":false,\"Endpoint\":\"\",\"Token\":\"\",\"CertAuthFilePath\":\"\",\"DisablePassHostHeaders\":false,\"EnablePassTLSCert\":false,\"Namespaces\":null,\"LabelSelector\":\"\",\"IngressClass\":\"\",\"IngressEndpoint\":{\"IP\":\"\",\"Hostname\":\"\",\"PublishedService\":\"default/anotherservice\"}},\"Mesos\":null,\"Eureka\":null,\"ECS\":null,\"Rancher\":null,\"Dyna
moDB\":null,\"ServiceFabric\":null,\"Rest\":null,\"API\":{\"EntryPoint\":\"traefik\",\"Dashboard\":true,\"Debug\":false,\"CurrentConfigurations\":null,\"Statistics\":null},\"Metrics\":null,\"Ping\":{\"EntryPoint\":\"http\"},\"HostResolver\":null}"
time="2019-03-01T13:57:15Z" level=info msg="\nStats collection is disabled.\nHelp us improve Traefik by turning this feature on :)\nMore details on: https://docs.traefik.io/basics/#collected-data\n"
time="2019-03-01T13:57:15Z" level=info msg="Preparing server http &{Address::80 TLS:<nil> Redirect:<nil> Auth:<nil> WhitelistSourceRange:[] WhiteList:<nil> Compress:false ProxyProtocol:<nil> ForwardedHeaders:0xc0003550e0} with readTimeout=0s writeTimeout=0s idleTimeout=3m0s"
time="2019-03-01T13:57:15Z" level=info msg="Preparing server traefik &{Address::8080 TLS:<nil> Redirect:<nil> Auth:0xc000525aa0 WhitelistSourceRange:[] WhiteList:<nil> Compress:false ProxyProtocol:<nil> ForwardedHeaders:0xc00002a5a0} with readTimeout=0s writeTimeout=0s idleTimeout=3m0s"
time="2019-03-01T13:57:15Z" level=info msg="Starting provider configuration.ProviderAggregator {}"
time="2019-03-01T13:57:15Z" level=info msg="Starting server on :80"
time="2019-03-01T13:57:15Z" level=info msg="Starting server on :8080"
time="2019-03-01T13:57:15Z" level=info msg="Starting provider *kubernetes.Provider {\"Watch\":true,\"Filename\":\"\",\"Constraints\":[],\"Trace\":false,\"TemplateVersion\":0,\"DebugLogGeneratedTemplate\":false,\"Endpoint\":\"\",\"Token\":\"\",\"CertAuthFilePath\":\"\",\"DisablePassHostHeaders\":false,\"EnablePassTLSCert\":false,\"Namespaces\":null,\"LabelSelector\":\"\",\"IngressClass\":\"\",\"IngressEndpoint\":{\"IP\":\"\",\"Hostname\":\"\",\"PublishedService\":\"default/anotherservice\"}}"
time="2019-03-01T13:57:15Z" level=debug msg="Using Ingress label selector: \"\""
time="2019-03-01T13:57:15Z" level=info msg="ingress label selector is: \"\""
time="2019-03-01T13:57:15Z" level=info msg="Creating in-cluster Provider client"
time="2019-03-01T13:57:16Z" level=debug msg="Received Kubernetes event kind *v1.Endpoints"
time="2019-03-01T13:57:16Z" level=debug msg="Skipping updating Ingress default/frontend-ingress due to service default/anotherservice having no status set"
time="2019-03-01T13:57:16Z" level=debug msg="Skipping updating Ingress default/service-ingress due to service default/anotherservice having no status set"
time="2019-03-01T13:57:16Z" level=debug msg="Skipping updating Ingress default/other-ingress due to service default/anotherservice having no status set"
time="2019-03-01T13:57:16Z" level=debug msg="Skipping updating Ingress default/whitelist-ingress due to service default/anotherservice having no status set"
time="2019-03-01T13:57:16Z" level=debug msg="Skipping updating Ingress default/traefik-dashboard-ingress due to service default/anotherservice having no status set"
time="2019-03-01T13:57:16Z" level=debug msg="Received Kubernetes event kind *v1.Secret"
time="2019-03-01T13:57:16Z" level=debug msg="Skipping updating Ingress default/frontend-ingress due to service default/anotherservice having no status set"
time="2019-03-01T13:57:16Z" level=debug msg="Skipping updating Ingress default/service-ingress due to service default/anotherservice having no status set"
time="2019-03-01T13:57:16Z" level=debug msg="Skipping updating Ingress default/other-ingress due to service default/anotherservice having no status set"
time="2019-03-01T13:57:16Z" level=debug msg="Skipping updating Ingress default/whitelist-ingress due to service default/anotherservice having no status set"
time="2019-03-01T13:57:16Z" level=debug msg="Skipping updating Ingress default/traefik-dashboard-ingress due to service default/anotherservice having no status set"
time="2019-03-01T13:57:16Z" level=debug msg="Skipping Kubernetes event kind *v1.Secret"
time="2019-03-01T13:57:16Z" level=debug msg="Configuration received from provider kubernetes: {\"backends\":{\"/fe\":{\"servers\":{\"myservice-5c55585b8-cxprc\":{\"url\":\"http://10.11.4.70:3000\",\"weight\":1}},\"loadBalancer\":{\"method\":\"wrr\"}},\"/promo\":{\"servers\":{\"promo-api-service-7d5968c556-rrfk2\":{\"url\":\"http://10.11.3.53:9189\",\"weight\":1}},\"loadBalancer\":{\"method\":\"wrr\"}},\"/anotherservice\":{\"servers\":{\"anotherservice-7856788888-dwnjc\":{\"url\":\"http://10.11.4.4:8762\",\"weight\":1},\"anotherservice-7856788888-kczj5\":{\"url\":\"http://10.11.3.9:8762\",\"weight\":1},\"anotherservice-7856788888-sxtzt\":{\"url\":\"http://10.11.5.6:8762\",\"weight\":1}},\"loadBalancer\":{\"method\":\"wrr\"}},\"/sockjs-node\":{\"servers\":{\"myservice-5c55585b8-cxprc\":{\"url\":\"http://10.11.4.70:3000\",\"weight\":1}},\"loadBalancer\":{\"method\":\"wrr\"}},\"/static\":{\"servers\":{\"myservice-5c55585b8-cxprc\":{\"url\":\"http://10.11.4.70:3000\",\"weight\":1}},\"loadBalancer\":{\"method\":\"wrr\"}},\"/traefik\":{\"servers\":{\"traefik-ingress-controller-5945dd5fbf-hkt6p\":{\"url\":\"http://10.11.3.99:8080\",\"weight\":1}},\"loadBalancer\":{\"method\":\"wrr\"}}},\"frontends\":{\"/fe\":{\"entryPoints\":[\"http\"],\"backend\":\"/fe\",\"routes\":{\"/fe\":{\"rule\":\"PathPrefixStrip:/fe\"}},\"passHostHeader\":true,\"priority\":0,\"basicAuth\":null,\"whiteList\":{\"sourceRange\":[\"<whitelisted-ip-subnet>/24\",\"<whitelisted-ip-subnet>/24\"],\"useXForwardedFor\":true}},\"/promo\":{\"entryPoints\":[\"http\"],\"backend\":\"/promo\",\"routes\":{\"/promo\":{\"rule\":\"PathPrefix:/promo\"}},\"passHostHeader\":true,\"priority\":0,\"basicAuth\":null},\"/anotherservice\":{\"entryPoints\":[\"http\"],\"backend\":\"/anotherservice\",\"routes\":{\"/anotherservice\":{\"rule\":\"PathPrefixStrip:/anotherservice\"}},\"passHostHeader\":true,\"priority\":0,\"basicAuth\":null},\"/sockjs-node\":{\"entryPoints\":[\"http\"],\"backend\":\"/sockjs-node\",\"routes\":{\"/sockjs-no
de\":{\"rule\":\"PathPrefix:/sockjs-node\"}},\"passHostHeader\":true,\"priority\":0,\"basicAuth\":null},\"/static\":{\"entryPoints\":[\"http\"],\"backend\":\"/static\",\"routes\":{\"/static\":{\"rule\":\"PathPrefix:/static\"}},\"passHostHeader\":true,\"priority\":0,\"basicAuth\":null},\"/traefik\":{\"entryPoints\":[\"http\"],\"backend\":\"/traefik\",\"routes\":{\"/traefik\":{\"rule\":\"PathPrefixStrip:/traefik\"}},\"passHostHeader\":true,\"priority\":0,\"basicAuth\":null}}}"
time="2019-03-01T13:57:16Z" level=debug msg="Wiring frontend /fe to entryPoint http"
time="2019-03-01T13:57:16Z" level=debug msg="Creating backend /fe"
time="2019-03-01T13:57:16Z" level=debug msg="configured IP white list: [<whitelisted-ip-subnet>/24 <whitelisted-ip-subnet>/24]"
time="2019-03-01T13:57:16Z" level=debug msg="Configured IP Whitelists: [<whitelisted-ip-subnet>/24 <whitelisted-ip-subnet>/24]"
time="2019-03-01T13:57:16Z" level=debug msg="Adding TLSClientHeaders middleware for frontend /fe"
time="2019-03-01T13:57:16Z" level=debug msg="Creating load-balancer wrr"
time="2019-03-01T13:57:16Z" level=debug msg="Creating server myservice-5c55585b8-cxprc at http://10.11.4.70:3000 with weight 1"
time="2019-03-01T13:57:16Z" level=debug msg="Creating route /fe PathPrefixStrip:/fe"
time="2019-03-01T13:57:16Z" level=debug msg="Wiring frontend /promo to entryPoint http"
time="2019-03-01T13:57:16Z" level=debug msg="Creating backend /promo"
time="2019-03-01T13:57:16Z" level=debug msg="Adding TLSClientHeaders middleware for frontend /promo"
time="2019-03-01T13:57:16Z" level=debug msg="Creating load-balancer wrr"
time="2019-03-01T13:57:16Z" level=debug msg="Creating server promo-api-service-7d5968c556-rrfk2 at http://10.11.3.53:9189 with weight 1"
time="2019-03-01T13:57:16Z" level=debug msg="Creating route /promo PathPrefix:/promo"
time="2019-03-01T13:57:16Z" level=debug msg="Wiring frontend /anotherservice to entryPoint http"
time="2019-03-01T13:57:16Z" level=debug msg="Creating backend /anotherservice"
time="2019-03-01T13:57:16Z" level=debug msg="Adding TLSClientHeaders middleware for frontend /anotherservice"
time="2019-03-01T13:57:16Z" level=debug msg="Creating load-balancer wrr"
time="2019-03-01T13:57:16Z" level=debug msg="Creating server anotherservice-7856788888-dwnjc at http://10.11.4.4:8762 with weight 1"
time="2019-03-01T13:57:16Z" level=debug msg="Creating server anotherservice-7856788888-kczj5 at http://10.11.3.9:8762 with weight 1"
time="2019-03-01T13:57:16Z" level=debug msg="Creating server anotherservice-7856788888-sxtzt at http://10.11.5.6:8762 with weight 1"
time="2019-03-01T13:57:16Z" level=debug msg="Creating route /anotherservice PathPrefixStrip:/anotherservice"
time="2019-03-01T13:57:17Z" level=debug msg="Configuration received from provider kubernetes: {\"backends\":{\"/fe\":{\"servers\":{\"myservice-5c55585b8-cxprc\":{\"url\":\"http://10.11.4.70:3000\",\"weight\":1}},\"loadBalancer\":{\"method\":\"wrr\"}},\"/promo\":{\"servers\":{\"promo-api-service-7d5968c556-rrfk2\":{\"url\":\"http://10.11.3.53:9189\",\"weight\":1}},\"loadBalancer\":{\"method\":\"wrr\"}},\"/anotherservice\":{\"servers\":{\"anotherservice-7856788888-dwnjc\":{\"url\":\"http://10.11.4.4:8762\",\"weight\":1},\"anotherservice-7856788888-kczj5\":{\"url\":\"http://10.11.3.9:8762\",\"weight\":1},\"anotherservice-7856788888-sxtzt\":{\"url\":\"http://10.11.5.6:8762\",\"weight\":1}},\"loadBalancer\":{\"method\":\"wrr\"}},\"/sockjs-node\":{\"servers\":{\"myservice-5c55585b8-cxprc\":{\"url\":\"http://10.11.4.70:3000\",\"weight\":1}},\"loadBalancer\":{\"method\":\"wrr\"}},\"/static\":{\"servers\":{\"myservice-5c55585b8-cxprc\":{\"url\":\"http://10.11.4.70:3000\",\"weight\":1}},\"loadBalancer\":{\"method\":\"wrr\"}},\"/traefik\":{\"servers\":{\"traefik-ingress-controller-5945dd5fbf-hkt6p\":{\"url\":\"http://10.11.3.99:8080\",\"weight\":1},\"traefik-ingress-controller-5945dd5fbf-xd5b6\":{\"url\":\"http://10.11.4.88:8080\",\"weight\":1}},\"loadBalancer\":{\"method\":\"wrr\"}}},\"frontends\":{\"/fe\":{\"entryPoints\":[\"http\"],\"backend\":\"/fe\",\"routes\":{\"/fe\":{\"rule\":\"PathPrefixStrip:/fe\"}},\"passHostHeader\":true,\"priority\":0,\"basicAuth\":null,\"whiteList\":{\"sourceRange\":[\"<whitelisted-ip-subnet>/24\",\"<whitelisted-ip-subnet>/24\"],\"useXForwardedFor\":true}},\"/promo\":{\"entryPoints\":[\"http\"],\"backend\":\"/promo\",\"routes\":{\"/promo\":{\"rule\":\"PathPrefix:/promo\"}},\"passHostHeader\":true,\"priority\":0,\"basicAuth\":null},\"/anotherservice\":{\"entryPoints\":[\"http\"],\"backend\":\"/anotherservice\",\"routes\":{\"/anotherservice\":{\"rule\":\"PathPrefixStrip:/anotherservice\"}},\"passHostHeader\":true,\"priority\":0,\"basicAuth\":null},
\"/sockjs-node\":{\"entryPoints\":[\"http\"],\"backend\":\"/sockjs-node\",\"routes\":{\"/sockjs-node\":{\"rule\":\"PathPrefix:/sockjs-node\"}},\"passHostHeader\":true,\"priority\":0,\"basicAuth\":null},\"/static\":{\"entryPoints\":[\"http\"],\"backend\":\"/static\",\"routes\":{\"/static\":{\"rule\":\"PathPrefix:/static\"}},\"passHostHeader\":true,\"priority\":0,\"basicAuth\":null},\"/traefik\":{\"entryPoints\":[\"http\"],\"backend\":\"/traefik\",\"routes\":{\"/traefik\":{\"rule\":\"PathPrefixStrip:/traefik\"}},\"passHostHeader\":true,\"priority\":0,\"basicAuth\":null}}}"
time="2019-03-01T13:57:17Z" level=debug msg="Skipping updating Ingress default/other-ingress due to service default/anotherservice having no status set"
time="2019-03-01T13:57:17Z" level=debug msg="Skipping updating Ingress default/whitelist-ingress due to service default/anotherservice having no status set"
time="2019-03-01T13:57:17Z" level=debug msg="Skipping updating Ingress default/traefik-dashboard-ingress due to service default/anotherservice having no status set"
time="2019-03-01T13:57:17Z" level=debug msg="Skipping updating Ingress default/frontend-ingress due to service default/anotherservice having no status set"
time="2019-03-01T13:57:17Z" level=debug msg="Skipping Kubernetes event kind *v1.Endpoints"
time="2019-03-01T13:57:17Z" level=debug msg="Received Kubernetes event kind *v1.Endpoints"
time="2019-03-01T13:57:17Z" level=debug msg="Skipping updating Ingress default/whitelist-ingress due to service default/anotherservice having no status set"
time="2019-03-01T13:57:17Z" level=debug msg="Skipping updating Ingress default/traefik-dashboard-ingress due to service default/anotherservice having no status set"
time="2019-03-01T13:57:17Z" level=debug msg="Skipping updating Ingress default/frontend-ingress due to service default/anotherservice having no status set"
time="2019-03-01T13:57:17Z" level=debug msg="Skipping updating Ingress default/service-ingress due to service default/anotherservice having no status set"
time="2019-03-01T13:57:17Z" level=debug msg="Skipping updating Ingress default/other-ingress due to service default/anotherservice having no status set"
time="2019-03-01T13:57:17Z" level=debug msg="Skipping Kubernetes event kind *v1.Endpoints"
time="2019-03-01T13:57:17Z" level=debug msg="Received Kubernetes event kind *v1.Endpoints"
time="2019-03-01T13:57:17Z" level=debug msg="Skipping updating Ingress default/frontend-ingress due to service default/anotherservice having no status set"
time="2019-03-01T13:57:17Z" level=debug msg="Skipping updating Ingress default/service-ingress due to service default/anotherservice having no status set"
time="2019-03-01T13:57:17Z" level=debug msg="Skipping updating Ingress default/other-ingress due to service default/anotherservice having no status set"
time="2019-03-01T13:57:17Z" level=debug msg="Skipping updating Ingress default/whitelist-ingress due to service default/anotherservice having no status set"
time="2019-03-01T13:57:17Z" level=debug msg="Skipping updating Ingress default/traefik-dashboard-ingress due to service default/anotherservice having no status set"
time="2019-03-01T13:57:17Z" level=debug msg="Skipping Kubernetes event kind *v1.Endpoints"
time="2019-03-01T13:57:18Z" level=debug msg="Wiring frontend /fe to entryPoint http"
time="2019-03-01T13:57:18Z" level=debug msg="Creating backend /fe"
time="2019-03-01T13:57:18Z" level=debug msg="configured IP white list: [<whitelisted-ip-subnet>/24 <whitelisted-ip-subnet>/24]"
time="2019-03-01T13:57:18Z" level=debug msg="Configured IP Whitelists: [<whitelisted-ip-subnet>/24 <whitelisted-ip-subnet>/24]"
time="2019-03-01T13:57:18Z" level=debug msg="Adding TLSClientHeaders middleware for frontend /fe"
time="2019-03-01T13:57:18Z" level=debug msg="Creating load-balancer wrr"
time="2019-03-01T13:57:18Z" level=debug msg="Creating server myservice-5c55585b8-cxprc at http://10.11.4.70:3000 with weight 1"
time="2019-03-01T13:57:18Z" level=debug msg="Creating route /fe PathPrefixStrip:/fe"
time="2019-03-01T13:57:18Z" level=debug msg="Wiring frontend /promo to entryPoint http"
time="2019-03-01T13:57:18Z" level=debug msg="Creating backend /promo"
time="2019-03-01T13:57:18Z" level=debug msg="Adding TLSClientHeaders middleware for frontend /promo"
time="2019-03-01T13:57:18Z" level=debug msg="Creating load-balancer wrr"
time="2019-03-01T13:57:18Z" level=debug msg="Creating server promo-api-service-7d5968c556-rrfk2 at http://10.11.3.53:9189 with weight 1"
time="2019-03-01T13:57:18Z" level=debug msg="Creating route /promo PathPrefix:/promo"
time="2019-03-01T13:57:18Z" level=debug msg="Wiring frontend /anotherservice to entryPoint http"
time="2019-03-01T13:57:18Z" level=debug msg="Creating backend /anotherservice"
time="2019-03-01T13:57:18Z" level=debug msg="Adding TLSClientHeaders middleware for frontend /anotherservice"
time="2019-03-01T13:57:18Z" level=debug msg="Creating load-balancer wrr"
time="2019-03-01T13:57:18Z" level=debug msg="Creating server anotherservice-7856788888-kczj5 at http://10.11.3.9:8762 with weight 1"
time="2019-03-01T13:57:18Z" level=debug msg="Creating server anotherservice-7856788888-sxtzt at http://10.11.5.6:8762 with weight 1"
time="2019-03-01T13:57:18Z" level=debug msg="Creating server anotherservice-7856788888-dwnjc at http://10.11.4.4:8762 with weight 1"
time="2019-03-01T13:57:18Z" level=debug msg="Creating route /anotherservice PathPrefixStrip:/anotherservice"
time="2019-03-01T13:57:18Z" level=debug msg="Wiring frontend /sockjs-node to entryPoint http"
time="2019-03-01T13:57:18Z" level=debug msg="Creating backend /sockjs-node"
time="2019-03-01T13:57:18Z" level=debug msg="Adding TLSClientHeaders middleware for frontend /sockjs-node"
time="2019-03-01T13:57:18Z" level=debug msg="Creating load-balancer wrr"
time="2019-03-01T13:57:18Z" level=debug msg="Creating server myservice-5c55585b8-cxprc at http://10.11.4.70:3000 with weight 1"
time="2019-03-01T13:57:18Z" level=debug msg="Creating route /sockjs-node PathPrefix:/sockjs-node"
time="2019-03-01T13:57:18Z" level=debug msg="Wiring frontend /static to entryPoint http"
time="2019-03-01T13:57:18Z" level=debug msg="Creating backend /static"
time="2019-03-01T13:57:18Z" level=debug msg="Adding TLSClientHeaders middleware for frontend /static"
time="2019-03-01T13:57:18Z" level=debug msg="Creating load-balancer wrr"
time="2019-03-01T13:57:18Z" level=debug msg="Creating server myservice-5c55585b8-cxprc at http://10.11.4.70:3000 with weight 1"
time="2019-03-01T13:57:18Z" level=debug msg="Creating route /static PathPrefix:/static"
time="2019-03-01T13:57:18Z" level=debug msg="Wiring frontend /traefik to entryPoint http"
time="2019-03-01T13:57:18Z" level=debug msg="Creating backend /traefik"
time="2019-03-01T13:57:18Z" level=debug msg="Adding TLSClientHeaders middleware for frontend /traefik"
time="2019-03-01T13:57:18Z" level=debug msg="Creating load-balancer wrr"
time="2019-03-01T13:57:18Z" level=debug msg="Creating server traefik-ingress-controller-5945dd5fbf-hkt6p at http://10.11.3.99:8080 with weight 1"
time="2019-03-01T13:57:18Z" level=debug msg="Creating server traefik-ingress-controller-5945dd5fbf-xd5b6 at http://10.11.4.88:8080 with weight 1"
time="2019-03-01T13:57:18Z" level=debug msg="Creating route /traefik PathPrefixStrip:/traefik"
time="2019-03-01T13:57:18Z" level=info msg="Server configuration reloaded on :80"
time="2019-03-01T13:57:19Z" level=debug msg="Configuration received from provider kubernetes: {\"backends\":{\"/fe\":{\"servers\":{\"myservice-5c55585b8-cxprc\":{\"url\":\"http://10.11.4.70:3000\",\"weight\":1}},\"loadBalancer\":{\"method\":\"wrr\"}},\"/promo\":{\"servers\":{\"promo-api-service-7d5968c556-rrfk2\":{\"url\":\"http://10.11.3.53:9189\",\"weight\":1}},\"loadBalancer\":{\"method\":\"wrr\"}},\"/anotherservice\":{\"servers\":{\"anotherservice-7856788888-dwnjc\":{\"url\":\"http://10.11.4.4:8762\",\"weight\":1},\"anotherservice-7856788888-kczj5\":{\"url\":\"http://10.11.3.9:8762\",\"weight\":1},\"anotherservice-7856788888-sxtzt\":{\"url\":\"http://10.11.5.6:8762\",\"weight\":1}},\"loadBalancer\":{\"method\":\"wrr\"}},\"/sockjs-node\":{\"servers\":{\"myservice-5c55585b8-cxprc\":{\"url\":\"http://10.11.4.70:3000\",\"weight\":1}},\"loadBalancer\":{\"method\":\"wrr\"}},\"/static\":{\"servers\":{\"myservice-5c55585b8-cxprc\":{\"url\":\"http://10.11.4.70:3000\",\"weight\":1}},\"loadBalancer\":{\"method\":\"wrr\"}},\"/traefik\":{\"servers\":{\"traefik-ingress-controller-5945dd5fbf-69jkg\":{\"url\":\"http://10.11.5.90:8080\",\"weight\":1},\"traefik-ingress-controller-5945dd5fbf-hkt6p\":{\"url\":\"http://10.11.3.99:8080\",\"weight\":1},\"traefik-ingress-controller-5945dd5fbf-xd5b6\":{\"url\":\"http://10.11.4.88:8080\",\"weight\":1}},\"loadBalancer\":{\"method\":\"wrr\"}}},\"frontends\":{\"/fe\":{\"entryPoints\":[\"http\"],\"backend\":\"/fe\",\"routes\":{\"/fe\":{\"rule\":\"PathPrefixStrip:/fe\"}},\"passHostHeader\":true,\"priority\":0,\"basicAuth\":null,\"whiteList\":{\"sourceRange\":[\"<whitelisted-ip-subnet>/24\",\"<whitelisted-ip-subnet>/24\"],\"useXForwardedFor\":true}},\"/promo\":{\"entryPoints\":[\"http\"],\"backend\":\"/promo\",\"routes\":{\"/promo\":{\"rule\":\"PathPrefix:/promo\"}},\"passHostHeader\":true,\"priority\":0,\"basicAuth\":null},\"/anotherservice\":{\"entryPoints\":[\"http\"],\"backend\":\"/anotherservice\",\"routes\":{\"/anotherservice\":{\"rule\"
:\"PathPrefixStrip:/anotherservice\"}},\"passHostHeader\":true,\"priority\":0,\"basicAuth\":null},\"/sockjs-node\":{\"entryPoints\":[\"http\"],\"backend\":\"/sockjs-node\",\"routes\":{\"/sockjs-node\":{\"rule\":\"PathPrefix:/sockjs-node\"}},\"passHostHeader\":true,\"priority\":0,\"basicAuth\":null},\"/static\":{\"entryPoints\":[\"http\"],\"backend\":\"/static\",\"routes\":{\"/static\":{\"rule\":\"PathPrefix:/static\"}},\"passHostHeader\":true,\"priority\":0,\"basicAuth\":null},\"/traefik\":{\"entryPoints\":[\"http\"],\"backend\":\"/traefik\",\"routes\":{\"/traefik\":{\"rule\":\"PathPrefixStrip:/traefik\"}},\"passHostHeader\":true,\"priority\":0,\"basicAuth\":null}}}"
time="2019-03-01T13:57:19Z" level=debug msg="Skipping Kubernetes event kind *v1.Endpoints"
time="2019-03-01T13:57:19Z" level=debug msg="Received Kubernetes event kind *v1.Endpoints"
time="2019-03-01T13:57:19Z" level=debug msg="Skipping updating Ingress default/traefik-dashboard-ingress due to service default/anotherservice having no status set"
time="2019-03-01T13:57:19Z" level=debug msg="Skipping updating Ingress default/frontend-ingress due to service default/anotherservice having no status set"
time="2019-03-01T13:57:19Z" level=debug msg="Skipping updating Ingress default/service-ingress due to service default/anotherservice having no status set"
time="2019-03-01T13:57:19Z" level=debug msg="Skipping updating Ingress default/other-ingress due to service default/anotherservice having no status set"
time="2019-03-01T13:57:19Z" level=debug msg="Skipping updating Ingress default/whitelist-ingress due to service default/anotherservice having no status set"
time="2019-03-01T13:57:19Z" level=debug msg="Skipping Kubernetes event kind *v1.Endpoints"
time="2019-03-01T13:57:20Z" level=debug msg="Wiring frontend /fe to entryPoint http"
time="2019-03-01T13:57:20Z" level=debug msg="Creating backend /fe"
time="2019-03-01T13:57:20Z" level=debug msg="configured IP white list: [<whitelisted-ip-subnet>/24 <whitelisted-ip-subnet>/24]"
time="2019-03-01T13:57:20Z" level=debug msg="Configured IP Whitelists: [<whitelisted-ip-subnet>/24 <whitelisted-ip-subnet>/24]"
time="2019-02-28T08:49:02Z" level=debug msg="request &{Method:GET URL:/ Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Upgrade-Insecure-Requests:[1] User-Agent:[Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/72.0.3626.119 Safari/537.36] Dnt:[1] Accept-Encoding:[gzip, deflate, br] X-Forwarded-Prefix:[/<my-path>] Connection:[keep-alive] Cache-Control:[max-age=0] Authorization:[Basic YWRtaW46YWRtaW4=] Accept:[text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8] Accept-Language:[en-US,en;q=0.9,it;q=0.8]] Body:{} GetBody:<nil> ContentLength:0 TransferEncoding:[] Close:false Host:10.29.5.13 Form:map[] PostForm:map[] MultipartForm:<nil> Trailer:map[] RemoteAddr:<load-balancer-ip>:50846 RequestURI:/ TLS:<nil> Cancel:<nil> Response:<nil> ctx:0xc000baf440} - rejecting: \"<load-balancer-ip>:50846\" matched none of the white list

Server gave HTTP response to HTTPS client

I am trying to set up Prometheus to monitor nodes, services and endpoints for my Kubernetes cluster [1 master, 7 minions]. For that I have a very basic prometheus.yml file:
scrape_configs:
- job_name: 'kubernetes-pods'
  tls_config:
    insecure_skip_verify: true
  kubernetes_sd_configs:
  - role: pod
Before starting the Prometheus application, I ran the below 2 commands:
export KUBERNETES_SERVICE_HOST=172.9.25.6
export KUBERNETES_SERVICE_PORT=8080
I can access the Kubernetes API server using http://172.9.25.6:8080
The connection is made over HTTP, NOT HTTPS.
Now when I start the application, I get the below ERROR:
level=info ts=2017-12-13T20:39:05.312987614Z caller=kubernetes.go:100 component="target manager" discovery=k8s msg="Using pod service account via in-cluster config"
level=info ts=2017-12-13T20:39:05.313443232Z caller=main.go:371 msg="Server is ready to receive requests."
level=error ts=2017-12-13T20:39:05.316618074Z caller=main.go:211 component=k8s_client_runtime err="github.com/prometheus/prometheus/discovery/kubernetes/kubernetes.go:205: Failed to list *v1.Pod: Get https://172.9.25.6:8080/api/v1/pods?resourceVersion=0: http: server gave HTTP response to HTTPS client"
I also tried adding scheme: http to my prometheus.yml config, but it did not work. How can I configure the client to accept HTTP responses?
Try specifying api_server inside kubernetes_sd_configs. When api_server is unset, Prometheus falls back to in-cluster configuration (your log line "Using pod service account via in-cluster config" shows this), and the in-cluster fallback always talks HTTPS to the API server:
scrape_configs:
- job_name: 'kubernetes-pods'
  kubernetes_sd_configs:
  - role: pod
    api_server: http://172.9.25.6:8080