Error creating TLS config after updating Traefik to v1.3.6 - kubernetes

I'm attempting to update from Traefik v1.2.3 to v1.3.6 on Kubernetes. I have my TLS certificates mounted inside the pods from secrets. Under v1.2.3, everything works as expected. When I try to apply my v1.3.6 deployment (the only change being the new Docker image), the pods fail to start with the following message:
time="2017-08-22T20:27:44Z" level=error msg="Error creating TLS config: tls: failed to find any PEM data in key input"
time="2017-08-22T20:27:44Z" level=fatal msg="Error preparing server: tls: failed to find any PEM data in key input"
Below is my traefik.toml file:
defaultEntryPoints = ["http","https"]
[entryPoints]
  [entryPoints.http]
  address = ":80"
    [entryPoints.http.redirect]
    entryPoint = "https"
  [entryPoints.https]
  address = ":443"
    [entryPoints.https.tls]
      [[entryPoints.https.tls.certificates]]
      CertFile = "/ssl/wildcard.foo.mydomain.com.crt"
      KeyFile = "/ssl/wildcard.foo.mydomain.com.key"
      [[entryPoints.https.tls.certificates]]
      CertFile = "/ssl/wildcard.mydomain.com.crt"
      KeyFile = "/ssl/wildcard.mydomain.com.key"
      [[entryPoints.https.tls.certificates]]
      CertFile = "/ssl/wildcard.local.crt"
      KeyFile = "/ssl/wildcard.local.key"
[kubernetes]
labelselector = "expose=internal"
My initial impression of the errors produced by the pods is that the keys in the secret are not valid. However, I am able to base64-decode the contents of the secret and see that the values match those of the certificate files I have stored locally. Additionally, I would expect to see this error on any version of Traefik if these were, in fact, invalid. In reviewing the change log for Traefik, I see that the SSL library was updated, but the related PR indicates that this only added ciphers and did not remove any previously supported.
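For reference, this is roughly how I compared the secret contents with the local files (the secret name and data keys below are placeholders for my actual ones):
# Decode one key from the secret and check that openssl can parse it as PEM
kubectl get secret traefik-ssl -o jsonpath='{.data.wildcard\.mydomain\.com\.key}' | base64 -d | openssl pkey -noout
# Compare the decoded secret against the local copy of the key
kubectl get secret traefik-ssl -o jsonpath='{.data.wildcard\.mydomain\.com\.key}' | base64 -d | diff - wildcard.mydomain.com.key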
Edit with additional info:
Running with --logLevel=DEBUG provides this additional information (provided in full below in case it's helpful):
[cluster-traefik-2693375319-w67hf] time="2017-08-22T21:41:19Z" level=debug msg="Global configuration loaded {"GraceTimeOut":10000000000,"Debug":false,"CheckNewVersion":true,"AccessLogsFile":"","TraefikLogsFile":"","LogLevel":"DEBUG","EntryPoints":{"http":{"Network":"","Address":":80","TLS":null,"Redirect":{"EntryPoint":"https","Regex":"","Replacement":""},"Auth":null,"Compress":false},"https":{"Network":"","Address":":443","TLS":{"MinVersion":"","CipherSuites":null,"Certificates":[{"CertFile":"/ssl/wildcard.foo.mydomain.com.crt","KeyFile":"/ssl/wildcard.foo.mydomain.com.key"},{"CertFile":"/ssl/wildcard.mydomain.com.crt","KeyFile":"/ssl/wildcard.mydomain.com.key"},{"CertFile":"/ssl/wildcard.local.crt","KeyFile":"/ssl/wildcard.local.key"}],"ClientCAFiles":null},"Redirect":null,"Auth":null,"Compress":false}},"Cluster":null,"Constraints":[],"ACME":null,"DefaultEntryPoints":["http","https"],"ProvidersThrottleDuration":2000000000,"MaxIdleConnsPerHost":200,"IdleTimeout":180000000000,"InsecureSkipVerify":false,"Retry":null,"HealthCheck":{"Interval":30000000000},"Docker":null,"File":null,"Web":{"Address":":8080","CertFile":"","KeyFile":"","ReadOnly":false,"Statistics":null,"Metrics":{"Prometheus":{"Buckets":[0.1,0.3,1.2,5]}},"Path":"","Auth":null},"Marathon":null,"Consul":null,"ConsulCatalog":null,"Etcd":null,"Zookeeper":null,"Boltdb":null,"Kubernetes":{"Watch":true,"Filename":"","Constraints":[],"Endpoint":"","Token":"","CertAuthFilePath":"","DisablePassHostHeaders":false,"Namespaces":null,"LabelSelector":"expose=internal"},"Mesos":null,"Eureka":null,"ECS":null,"Rancher":null,"DynamoDB":null}"
[cluster-traefik-2693375319-w67hf] time="2017-08-22T21:41:19Z" level=info msg="Preparing server https &{Network: Address::443 TLS:0xc42060d800 Redirect:<nil> Auth:<nil> Compress:false}"
[cluster-traefik-2693375319-w67hf] time="2017-08-22T21:41:19Z" level=error msg="Error creating TLS config: tls: failed to find any PEM data in key input"
[cluster-traefik-2693375319-w67hf] time="2017-08-22T21:41:19Z" level=fatal msg="Error preparing server: tls: failed to find any PEM data in key input"

This issue turned out to be new validation logic in the crypto/tls library in Go 1.8: it now validates that the certificate blocks end in -----, whereas before it did not. The private key for one of my certificate files ended in ---- (missing a hyphen). Adding the missing character fixed this issue.
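For anyone hitting the same thing, a quick way to spot it is to look at the PEM delimiters directly, or let openssl parse the key (paths as in the config above):
# Each PEM block must start with "-----BEGIN ..." and end with "-----END ...-----" (five hyphens on each side)
head -1 /ssl/wildcard.foo.mydomain.com.key
tail -1 /ssl/wildcard.foo.mydomain.com.key
# openssl will refuse to parse the key if the delimiters are malformed
openssl pkey -in /ssl/wildcard.foo.mydomain.com.key -noout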

Related

Want to integrate Argo server with Keycloak

I tried in incognito mode as well, but the same issue exists.
Currently I have added the following in server-deployment.yaml:
args:
- server
- --auth-mode
- sso
And in values.yaml:
sso:
  # SSO configuration when SSO is specified as a server auth mode.
  # All the values are required. SSO is activated by adding --auth-mode=sso
  # to the server command line.
  #
  # The root URL of the OIDC identity provider.
  issuer: http://<keycloak_ip>/auth/realms/demo
  # Name of a secret and a key in it to retrieve the app OIDC client ID from.
  clientId:
    name: argo
    key: client-id
  # Name of a secret and a key in it to retrieve the app OIDC client secret from.
  clientSecret:
    name: "argo-server-sso"
    key: client-secret
  # The OIDC redirect URL. Should be in the form /oauth2/callback.
  redirectUrl: http:///argo/oauth2/callback
And in the Keycloak UI, I have created the client and client credentials.
kubectl create secret generic "argo-server-sso" --from-literal=client-secret=9a9c60ba-647d-480c-b6fa-82c19caad26a
kubectl create secret generic "argo" --from-literal=client-id=argo
After hitting the Argo server URL, I manually need to click on the login option; after that the Keycloak page appears, but then a popup comes up saying "Failed to login: Unauthorized".
Server logs:
kubectl logs argo-server-5c7f8c5cbb-9fcqk
time="2021-01-20T12:06:26.876Z" level=info authModes="[sso]" baseHRef=/ managedNamespace= namespace=default secure=false
time="2021-01-20T12:06:26.877Z" level=warning msg="You are running in insecure mode. Learn how to enable transport layer security: https://argoproj.github.io/argo/tls/"
time="2021-01-20T12:06:26.877Z" level=info msg="config map" name=argo-workflow-controller-configmap
time="2021-01-20T12:06:28.318Z" level=info msg="SSO configuration" clientId="{{argo} client-id }" issuer="http://10.xx.xx.xx:xxxx/auth/realms/demo" redirectUrl="http://xx/argo/oauth2/callback"
time="2021-01-20T12:06:28.318Z" level=info msg="SSO enabled"
time="2021-01-20T12:06:28.322Z" level=info msg="Starting Argo Server" instanceID= version=v2.12.2
time="2021-01-20T12:06:28.322Z" level=info msg="Creating event controller" operationQueueSize=16 workerCount=4
time="2021-01-20T12:06:28.323Z" level=info msg="Argo Server started successfully on http://localhost:2746"
time="2021-01-20T12:07:21.990Z" level=info msg="finished unary call with code Unauthenticated" error="rpc error: code = Unauthenticated desc = token not valid for running mode" grpc.code=Unauthenticated grpc.method=GetVersion grpc.service=info.InfoService grpc.start_time="2021-01-20T12:07:21Z" grpc.time_ms=0.379 span.kind=server system=grpc
time="2021-01-20T12:07:22.009Z" level=info msg="finished unary call with code Unauthenticated" error="rpc error: code = Unauthenticated desc = token not valid for running mode" grpc.code=Unauthenticated grpc.method=ListWorkflowTemplates grpc.service=workflowtemplate.WorkflowTemplateService grpc.start_time="2021-01-20T12:07:22Z" grpc.time_ms=0.075 span.kind=server system=grpc
I integrated ArgoCD with Keycloak successfully.
You have one clear/visible issue: the YAML indentation is wrong.
Make sure you keep the right indentation as per the default values in the Helm chart:
https://github.com/argoproj/argo-helm/blob/1aea2c41798972ff0077108f926bb9095f3f9deb/charts/argo/values.yaml#L255-L283
Accordingly, your values should be:
(assuming your Argo is served at the hostname workflows.company.com)
server:
  extraArgs:
  - --auth-mode=sso
sso:
  issuer: http://<keycloak_ip>/auth/realms/demo
  clientId:
    name: argo
    key: client-id
  clientSecret:
    name: "argo-server-sso"
    key: client-secret
  redirectUrl: https://workflows.company.com/argo/oauth2/callback
From the Keycloak side now, under your client, make sure you fill in the Valid Redirect URIs field as per your ingress hostname.
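Then re-apply the chart so the corrected values take effect; a minimal sketch, assuming the release is named argo and the chart comes from the argo-helm repo:
helm repo add argo https://argoproj.github.io/argo-helm
helm upgrade argo argo/argo -f values.yaml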

minio+KMS x509: certificate signed by unknown authority

I am trying to use MinIO as a local S3 server. I am following this article.
I downloaded key and cert files.
I added the env parameters:
set MINIO_KMS_KES_ENDPOINT=https://play.min.io:7373
set MINIO_KMS_KES_KEY_FILE=D:\KMS\root.key
set MINIO_KMS_KES_CERT_FILE=D:\KMS\root.cert
set MINIO_KMS_KES_KEY_NAME=my-minio-key
I started the MinIO server: D:\>minio.exe server D:\Photos
It logs after startup:
Endpoint: http://169.254.182.253:9000 http://169.254.47.198:9000 http://172.17.39.193:9000 http://192.168.0.191:9000 http://169.254.103.105:9000 http://169.254.209.102:9000 http://169.254.136.71:9000 http://127.0.0.1:9000
AccessKey: minioadmin
SecretKey: minioadmin
Browser Access:
http://169.254.182.253:9000 http://169.254.47.198:9000 http://172.17.39.193:9000 http://192.168.0.191:9000 http://169.254.103.105:9000 http://169.254.209.102:9000 http://169.254.136.71:9000 http://127.0.0.1:9000
Command-line Access: https://docs.min.io/docs/minio-client-quickstart-guide
$ mc.exe alias set myminio http://169.254.182.253:9000 minioadmin minioadmin
Object API (Amazon S3 compatible):
Go: https://docs.min.io/docs/golang-client-quickstart-guide
Java: https://docs.min.io/docs/java-client-quickstart-guide
Python: https://docs.min.io/docs/python-client-quickstart-guide
JavaScript: https://docs.min.io/docs/javascript-client-quickstart-guide
.NET: https://docs.min.io/docs/dotnet-client-quickstart-guide
Detected default credentials 'minioadmin:minioadmin', please change the credentials immediately using 'MINIO_ACCESS_KEY' and 'MINIO_SECRET_KEY'
I opened the UI in a browser: http://localhost:9000/minio/mybacket/
I tried to upload a jpg file and got an exception:
<?xml version="1.0" encoding="UTF-8"?> <Error><Code>InternalError</Code><Message>We encountered an internal error, please try again.</Message><Key>Completed.jpg</Key><BucketName>mybacket</BucketName><Resource>/minio/upload/mybacket/Completed.jpg</Resource><RequestId>1634A6E5663C9D70</RequestId><HostId>4a46a947-6473-4d53-bbb3-a4f908d444ce</HostId></Error>
And I got this exception in minio console:
Error: Post "https://play.min.io:7373/v1/key/generate/my-minio-key": x509: certificate signed by unknown authority
3: cmd\api-errors.go:1961:cmd.toAPIErrorCode()
2: cmd\api-errors.go:1986:cmd.toAPIError()
1: cmd\web-handlers.go:1116:cmd.(*webAPIHandlers).Upload()
Most probably your OS trust store (containing the Root CA certificates) does not trust Let's Encrypt (the Let's Encrypt Authority X3 CA certificate).
The server https://play.min.io:7373 serves a TLS certificate issued by Let's Encrypt.
See:
openssl s_client -showcerts -servername play.min.io -connect play.min.io:7373
In any case, check the root CA store of your Windows machine.
See: https://security.stackexchange.com/questions/48437/how-can-you-check-the-installed-certificate-authority-in-windows-7-8
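If the Let's Encrypt root is indeed missing, one possible fix (assuming you want to trust it machine-wide) is to download the ISRG Root X1 certificate from https://letsencrypt.org/certs/isrgrootx1.pem and import it from an elevated command prompt:
certutil -addstore -f Root isrgrootx1.pem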

Outlook Mailcow wrong SSL certificate

Recently I set up a mail server (mailcow) with the help of the following tutorial. I tried to log in with Outlook, but Outlook says that the certificate cannot be verified. Why is the value of the "Issued to" field mail.example.org and not xxx.xxx.xx?
mailcow.conf:
MAILCOW_HOSTNAME=xxx.xxx.xx
HTTP_PORT=8080
HTTP_BIND=0.0.0.0
HTTPS_PORT=8443
HTTPS_BIND=0.0.0.0
SMTP_PORT=25
SMTPS_PORT=465
SUBMISSION_PORT=587
IMAP_PORT=143
IMAPS_PORT=993
POP_PORT=110
POPS_PORT=995
SIEVE_PORT=4190
DOVEADM_PORT=127.0.0.1:19991
SQL_PORT=127.0.0.1:13306
ADDITIONAL_SAN=
# Skip running ACME (acme-mailcow, Let's Encrypt certs) - y/n
SKIP_LETS_ENCRYPT=n
# Skip IPv4 check in ACME container - y/n
SKIP_IP_CHECK=n
# Skip HTTP verification in ACME container - y/n
SKIP_HTTP_VERIFICATION=n
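A quick way to see which certificate the mail services are actually presenting (IMAPS port from the config above; replace the hostname with your own):
openssl s_client -connect xxx.xxx.xx:993 -servername xxx.xxx.xx </dev/null 2>/dev/null | openssl x509 -noout -subject -issuer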
In the configuration file I set SKIP_LETS_ENCRYPT=y and generated the SSL certificate with Let's Encrypt myself. Look here.
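A minimal sketch of doing that with certbot in standalone mode (port 80 must be reachable and free while it runs; the target paths are the usual mailcow ones, adjust if your setup differs):
certbot certonly --standalone -d xxx.xxx.xx
# copy the resulting fullchain and key, e.g. to data/assets/ssl/cert.pem and data/assets/ssl/key.pem, then restart the stack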

Problem getting certificate from let's encrypt using Traefik with docker

I've set up Traefik with Docker and a service behind it. The basic setup works: I can browse to port 80 using the domain name, I'm redirected to HTTPS, and then I see "invalid certificate", since the Let's Encrypt part is broken.
[ router ] <-:80/:443-> [linux/docker [Traefik:80/:443][Service:8080]]
Here is the entry in the log (domain edited):
acme: Error -> One or more domains had a problem:\n[xyz.example.net] acme: error: 400 :: urn:ietf:params:acme:error:connection :: Fetching
http://xyz.example.net/.well-known/acme-challenge/eIAFZqaGMHMWaBjINjzk4m8PuWiYfuCHCTnSU9M:
Error getting validation data, url: \n"
The error message is accurate: I cannot browse to that URL. I have noticed that I can go to that URL using the internal IP (http://10.0.0.21/.well-known/acme-challenge/key), and Traefik responds with this in the log:
traefik | time="2019-05-28T21:20:52Z" level=error msg="Error getting challenge for token retrying in 542.914495ms"
I suspect the problem is the domain name redirect setup. My service is at xyz.example.net (and so is Traefik). I suspect that Traefik is redirecting all traffic coming in on xyz.example.net:80/:443 to the service, and not handling /.well-known/acme-challenge itself. Do I need to give the gateway itself a name? (E.g. zzz.example.net is Traefik and xyz.example.net is the service?)
How can I fix this?
My TOML file:
debug = false
logLevel = "ERROR" # DEBUG, INFO, WARN, ERROR, FATAL, PANIC
InsecureSkipVerify = true
defaultEntryPoints = ["https", "http"]
[entryPoints]
  [entryPoints.http]
    [entryPoints.http.redirect]
    entryPoint = "https"
  [entryPoints.https]
  address = ":443"
    [entryPoints.https.tls]
[retry]
[docker]
endpoint = "unix:///var/run/docker.sock"
domain = "example.net"
watch = true
exposedbydefault = false
[acme]
email = "me@example.net"
storage = "acme.json"
entryPoint = "https"
onDemand = false
onHostRule = true
  [acme.httpChallenge]
  entryPoint = "http"
I noticed that although the internal IP 192.xxx worked, the external IP did not. Of course this seems like a firewall problem, BUT the firewall lets traffic through just fine for the services that I was testing, so I was confused.
The solution? Port 80 was not being forwarded on the firewall; 443 was. So when I tried testing with curl or a browser I was typing https://xyz.example.net, and that was working.
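For anyone debugging the same thing, a quick external check (run from outside your own network) that port 80 actually reaches Traefik:
curl -v http://xyz.example.net/.well-known/acme-challenge/test
# a quick 404 from Traefik means port 80 is forwarded correctly; a connection timeout points at the firewall/port forwarding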

Not able to unseal data from Vault

I am trying to experiment with HashiCorp Vault.
The version that I am using is Vault v0.11.0.
Startup log below:
Api Address: https://ldndsr000004893:8200
Cgo: disabled
Cluster Address: https://ldndsr000004893:8201
Listener 1: tcp (addr: "ldndsr000004893:8200", cluster address: "10.75.40.30:8201", max_request_duration: "1m30s", max_request_size: "33554432", tls: "disabled")
Log Level: info
Mlock: supported: true, enabled: false
Storage: file
Version: Vault v0.11.0
Version Sha: 87492f9258e0227f3717e3883c6a8be5716bf56
Server configuration as below:
listener "tcp" {
address = "ldndsr000004893:8200"
scheme = "http"
tls_disable = 1
}
#storage "inmem" {
#}
#storage "zookeeper" {
# address = "localhost:2182"
# path = "vault/"
#}
storage "file" {
path = "/app/iag/phoenix/vault/data"
}
# Advertise the non-loopback interface
api_addr = "https://ldndsr000004893:8200"
disable_mlock = true
ui=true
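(The server is started with this file; the config path below is a placeholder for wherever it actually lives.)
vault server -config=/app/iag/phoenix/vault/config.hcl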
I have input a number of key/value pairs into Vault and was able to retrieve data normally using the Vault command line. But suddenly it stopped working, and I am not able to unseal the vault from either the UI or the command line.
UI error:
Any advice on this issue? I am going to use Vault for storing all credential information.
Turns out it was a problem with the Vault UI running in the Chrome browser.
I had to open a new incognito window, and it showed the sign-in window; after I keyed in the token, Vault got unsealed.
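For reference, a minimal sketch of checking and unsealing from the command line as well (Vault 0.11 syntax; the address and key values are placeholders):
export VAULT_ADDR="http://ldndsr000004893:8200"
vault status                            # reports Sealed: true while the vault is sealed
vault operator unseal <unseal-key-1>
vault operator unseal <unseal-key-2>
vault operator unseal <unseal-key-3>    # repeat until the unseal key threshold is met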