Let's suppose I have these functions mapped to web on port 8080:
/uppercase
/toChars
I can get the data from them using:
curl -d '["test"]' localhost:8080/uppercase,toChars -H "Content-Type: application/json"
Now I have created a gateway route to map:
/uppercase,toChars
But when I run the same curl as above (changing the port to 8081, since the gateway runs on 8081), it doesn't give me back the requested information. I also tried /uppercase|toChars. Here is the definition that I am using:
spring:
  cloud:
    gateway:
      routes:
        - id: app8080
          uri: http://localhost:8080
          predicates:
            - Path=/uppercase,/toChars
I have tried the different combinations of Path but none of them worked. Any help?
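For reference, the Path predicate takes a comma-separated list of patterns, so Path=/uppercase,/toChars only matches those two exact paths; a request to /uppercase,toChars is a third, different path and never matches. One possible way to cover the composed form is a template-variable pattern that matches any single path segment. This is only a sketch under that assumption, not a tested configuration:

spring:
  cloud:
    gateway:
      routes:
        - id: app8080
          uri: http://localhost:8080
          predicates:
            # {functions} matches any one path segment, including /uppercase,toChars
            - Path=/{functions}

With that in place, curl -d '["test"]' localhost:8081/uppercase,toChars should be forwarded to the same path on port 8080.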
The minimal example from https://doc.traefik.io/traefik/user-guides/docker-compose/basic-example/ works on my local machine. However, when I try to adapt it to use TLS, I run into an issue. I'm a Traefik newbie, so I might be making an obvious mistake.
This is my attempt:
version: "3.3"
services:
traefik:
image: "traefik:v2.8"
container_name: "traefik"
command:
- "--log.level=DEBUG"
- "--accesslog=true"
- "--api.insecure=true"
- "--providers.docker=true"
- "--providers.docker.exposedbydefault=false"
- "--entrypoints.web.address=:80"
- "--entrypoints.websecure.address=:443"
ports:
- "443:443"
- "8080:8080"
volumes:
- "/var/run/docker.sock:/var/run/docker.sock:ro"
whoami:
image: "traefik/whoami"
container_name: "simple-service"
labels:
- "traefik.enable=true"
- "traefik.http.routers.whoami.rule=Host(`127.0.0.1`)"
- "traefik.http.routers.whoami.entrypoints=websecure"
So the major modification is to use "traefik.http.routers.whoami.entrypoints=websecure" instead of "traefik.http.routers.whoami.entrypoints=web".
Running
$ curl -k https://127.0.0.1
I get
404 page not found
The Traefik log shows no routing-related issues, and the internal Traefik routing setup (shown with curl https://127.0.0.1:8080/api/rawdata | jq .) looks the same as in the working example, apart from the changed port.
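For what it's worth, in Traefik v2 a router attached to a TLS entrypoint also needs TLS enabled on the router itself, otherwise HTTPS requests fall through to the default 404. A minimal sketch of the whoami labels with that one extra line (assuming the rest of the file stays as above and Traefik serves its default certificate):

  whoami:
    image: "traefik/whoami"
    container_name: "simple-service"
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.whoami.rule=Host(`127.0.0.1`)"
      - "traefik.http.routers.whoami.entrypoints=websecure"
      # without this the router stays HTTP-only on the websecure entrypoint
      - "traefik.http.routers.whoami.tls=true"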
So I opted for a new answer instead of just editing the old one (the reason being that even incorrect answers teach something).
My reference is this great post by Marc Mogdanz (link: https://marcmogdanz.de/posts/infrastructure-with-traefik-and-cloudflare/).
The direct answer to your query is:
Expose port 8080 but do not publish it
Add a host name rule. This will allow Traefik to route a URL request to its own port 8080.
The affected part of the compose file would be as follows (assuming that the URL https://dashboard.example.com is the desired URL to reach the dashboard):
    expose:
      - 8080
    ...
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.traefik.rule=Host(`dashboard.example.com`)"
      - "traefik.http.routers.traefik.tls=true"
      - "traefik.http.services.traefik.loadbalancer.server.port=8080"
Finally, I noticed you are testing on localhost. If you are testing on a local machine, use localhost for the dashboard and keep 127.0.0.1 for whoami.
Or, alternately, add a static entry for a subdomain (see https://stackoverflow.com/a/19016600).
Either way, Traefik is looking at the SNI requested - not necessarily the IP address - when matching the Host rule.
Request ----> Docker:443 ---> {Traefik} ---"SNI?"---"127.0.0.1"---> {whoami}
                                  |
                                  |
                                8080 <--- "dashboard.localhost"
Add the following entry to your Traefik:
"--entrypoints.websecure.address=:8080"
Normally the alternative ports would be 8080 for HTTP and 8443 for HTTPS, but since your example specifically uses https://~:8080, I have adapted it accordingly.
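Taking the main dashboard.example.com variant above (websecure on the standard :443 rather than the :8080 alternative just mentioned), a rough sketch of how the traefik service itself could look; the host name is only a placeholder:

  traefik:
    image: "traefik:v2.8"
    command:
      - "--api.insecure=true"                        # dashboard served on internal port 8080
      - "--providers.docker=true"
      - "--providers.docker.exposedbydefault=false"
      - "--entrypoints.websecure.address=:443"
    ports:
      - "443:443"        # only the TLS entrypoint is published
    expose:
      - 8080             # dashboard port stays internal to the Docker network
    volumes:
      - "/var/run/docker.sock:/var/run/docker.sock:ro"
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.traefik.rule=Host(`dashboard.example.com`)"
      - "traefik.http.routers.traefik.entrypoints=websecure"
      - "traefik.http.routers.traefik.tls=true"
      - "traefik.http.services.traefik.loadbalancer.server.port=8080"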
I am trying to use WSO2 as an authorization server with OAuth2. I referred to the links below:
Link
As mentioned in the link, Google is used as the authenticator, but can I use WSO2 instead of Google?
I have created a service provider in WSO2, then selected the OAuth/OpenID Connect configuration and used the client ID and secret for the oauth2 proxy image. But I am not sure what provider name I have to give.
spec:
  containers:
  - args:
    - --provider=wso2
    - --email-domain=*
    - --upstream=file:///dev/null
    - --http-address=0.0.0.0:4180
    env:
    - name: OAUTH2_PROXY_CLIENT_ID
      value: 0UnfZFZDb
    - name: OAUTH2_PROXY_CLIENT_SECRET
      value: rZroDX6uOsySSt4eN
    # docker run -ti --rm python:3-alpine python -c 'import secrets,base64; print(base64.b64encode(base64.b64encode(secrets.token_bytes(16))));'
    - name: OAUTH2_PROXY_COOKIE_SECRET
      value: b'cFF0enRMdEJrUGlaU3NSTlkyVkxuQT09'
    image: quay.io/pusher/oauth2_proxy:v4.1.0-amd64
and in the ingress, I have added the following annotations
nginx.ingress.kubernetes.io/auth-url: "http://oauth2-proxy.auth.svc.cluster.local:4180/oauth2/auth"
nginx.ingress.kubernetes.io/auth-signin: "https://identity.wso2.com:443/commonauth?rd=/"
but I am getting an authentication error.
Can I use WSO2 as an authorization server, similar to GitHub or Google?
For WSO2, do I need to create a custom oauth2 image?
Are my k8s ingress annotations correct (I tried multiple values like start?rd=$escaped_request_uri, etc.)?
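For what it's worth, oauth2_proxy has no built-in wso2 provider name, so --provider=wso2 will not be recognized; the usual approach is its generic OIDC provider pointed at the WSO2 Identity Server issuer. A rough sketch under that assumption (the issuer URL below is a placeholder and has to match the issuer configured in your WSO2 deployment, often https://<is-host>:9443/oauth2/token):

spec:
  containers:
  - args:
    - --provider=oidc
    # placeholder issuer; replace with the issuer value from your WSO2 Identity Server
    - --oidc-issuer-url=https://identity.wso2.com/oauth2/token
    - --email-domain=*
    - --upstream=file:///dev/null
    - --http-address=0.0.0.0:4180

With this kind of setup the auth-signin annotation normally points at oauth2-proxy's own /oauth2/start endpoint (e.g. https://<oauth2-proxy-host>/oauth2/start?rd=$escaped_request_uri) rather than directly at the WSO2 commonauth URL.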
I am trying to configure Traefik so that I can access services via domain names without having to assign different ports. For example, two MongoDB services, both on the default port, but in different domains: example.localhost and example2.localhost. Only this example works. I mean, other cases probably work too, but I can't connect to them, and I don't understand what the problem is. It is probably not even a problem with Traefik.
I have prepared a repository with an example that works. You just need to generate your own certificate with mkcert. The page at example.localhost returns the 403 Forbidden error but you should not worry about it, because the purpose of this configuration is to show that SSL is working (padlock, green status). So don't focus on 403.
Only the SSL connection to the mongo service works. I tested it with Robo 3T: after selecting SSL, setting the host to example.localhost, and choosing the certificate for a self-signed (or own CA) connection, it works. That is the only thing that works this way. Connections to redis (Redis Desktop Manager) and to pgsql (PhpStorm, DBeaver, DbVisualizer) do not work, regardless of whether I provide certificates or not. I do not forward SSL to the services; I only connect to Traefik with it. I have spent long hours on this and searched the internet, but I haven't found the answer yet. Has anyone solved this?
PS. I work on Linux Mint, so my configuration should work in this environment without any problem. I would ask for solutions for Linux.
If you do not want to browse the repository, I attach the most important files:
docker-compose.yml
version: "3.7"
services:
traefik:
image: traefik:v2.0
ports:
- 80:80
- 443:443
- 8080:8080
- 6379:6379
- 5432:5432
- 27017:27017
volumes:
- /var/run/docker.sock:/var/run/docker.sock:ro
- ./config.toml:/etc/traefik/traefik.config.toml:ro
- ./certs:/etc/certs:ro
command:
- --api.insecure
- --accesslog
- --log.level=INFO
- --entrypoints.http.address=:80
- --entrypoints.https.address=:443
- --entrypoints.traefik.address=:8080
- --entrypoints.mongo.address=:27017
- --entrypoints.postgres.address=:5432
- --entrypoints.redis.address=:6379
- --providers.file.filename=/etc/traefik/traefik.config.toml
- --providers.docker
- --providers.docker.exposedByDefault=false
- --providers.docker.useBindPortIP=false
apache:
image: php:7.2-apache
labels:
- traefik.enable=true
- traefik.http.routers.http-dev.entrypoints=http
- traefik.http.routers.http-dev.rule=Host(`example.localhost`)
- traefik.http.routers.https-dev.entrypoints=https
- traefik.http.routers.https-dev.rule=Host(`example.localhost`)
- traefik.http.routers.https-dev.tls=true
- traefik.http.services.dev.loadbalancer.server.port=80
pgsql:
image: postgres:10
environment:
POSTGRES_DB: postgres
POSTGRES_USER: postgres
POSTGRES_PASSWORD: password
labels:
- traefik.enable=true
- traefik.tcp.routers.pgsql.rule=HostSNI(`example.localhost`)
- traefik.tcp.routers.pgsql.tls=true
- traefik.tcp.routers.pgsql.service=pgsql
- traefik.tcp.routers.pgsql.entrypoints=postgres
- traefik.tcp.services.pgsql.loadbalancer.server.port=5432
mongo:
image: mongo:3
labels:
- traefik.enable=true
- traefik.tcp.routers.mongo.rule=HostSNI(`example.localhost`)
- traefik.tcp.routers.mongo.tls=true
- traefik.tcp.routers.mongo.service=mongo
- traefik.tcp.routers.mongo.entrypoints=mongo
- traefik.tcp.services.mongo.loadbalancer.server.port=27017
redis:
image: redis:3
labels:
- traefik.enable=true
- traefik.tcp.routers.redis.rule=HostSNI(`example.localhost`)
- traefik.tcp.routers.redis.tls=true
- traefik.tcp.routers.redis.service=redis
- traefik.tcp.routers.redis.entrypoints=redis
- traefik.tcp.services.redis.loadbalancer.server.port=6379
config.toml
[tls]
  [[tls.certificates]]
    certFile = "/etc/certs/example.localhost.pem"
    keyFile = "/etc/certs/example.localhost-key.pem"
Build & Run
mkcert example.localhost # in ./certs/
docker-compose up -d
Prepare step by step
Install mkcert (run also mkcert -install for CA)
Clone my code
In certs folder run mkcert example.localhost
Start container by docker-compose up -d
Open the page https://example.localhost/ and check that the connection is secure
If address http://example.localhost/ is not reachable, add 127.0.0.1 example.localhost to /etc/hosts
Certs:
Public: ./certs/example.localhost.pem
Private: ./certs/example.localhost-key.pem
CA: ~/.local/share/mkcert/rootCA.pem
Test MongoDB
Install Robo 3T
Create new connection:
Address: example.localhost
Use SSL protocol
CA Certificate: rootCA.pem (or Self-signed Certificate)
Test Redis
Install RedisDesktopManager
Create new connection:
Address: example.localhost
SSL
Public Key: example.localhost.pem
Private Key: example.localhost-key.pem
Authority: rootCA.pem
So far:
Can connect to Postgres via IP (info from Traefik)
jdbc:postgresql://172.21.0.4:5432/postgres?sslmode=disable
jdbc:postgresql://172.21.0.4:5432/postgres?sslfactory=org.postgresql.ssl.NonValidatingFactory
Try telnet (the IP changes on every Docker restart):
> telnet 172.27.0.5 5432
Trying 172.27.0.5...
Connected to 172.27.0.5.
Escape character is '^]'.
^]
Connection closed by foreign host.
> telnet example.localhost 5432
Trying ::1...
Connected to example.localhost.
Escape character is '^]'.
^]
HTTP/1.1 400 Bad Request
Content-Type: text/plain; charset=utf-8
Connection: close
400 Bad RequestConnection closed by foreign host.
If I connect directly to Postgres, everything looks fine. If I connect via Traefik, I get a Bad Request when closing the connection. I have no idea what this means, or whether it means anything at all.
At least for the PostgreSQL issue, it seems that the connection is started in cleartext and then upgraded to TLS:
Docs
Mailing list discussion
Issue on another proxy project
So it is basically impossible to use TLS termination with a proxy if that proxy doesn't support the protocol's cleartext handshake followed by the upgrade to TLS.
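In practice this means that with Traefik v2 the Postgres entrypoint can only do plain TCP forwarding: drop tls=true on the router and use the catch-all HostSNI(`*`) rule, which forwards the raw stream and lets the client negotiate SSL with Postgres directly, at the cost of losing per-domain routing (one database per entrypoint/port). A hedged sketch of the pgsql labels under that workaround:

  pgsql:
    image: postgres:10
    labels:
      - traefik.enable=true
      # no SNI is available before the Postgres cleartext handshake, so only
      # the catch-all rule can match; Traefik just forwards the TCP stream
      - traefik.tcp.routers.pgsql.rule=HostSNI(`*`)
      - traefik.tcp.routers.pgsql.entrypoints=postgres
      - traefik.tcp.routers.pgsql.service=pgsql
      - traefik.tcp.services.pgsql.loadbalancer.server.port=5432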
Update to #jose-liber's answer:
SNI routing for Postgres with STARTTLS has been added to Traefik in this PR. Traefik now listens to the initial bytes sent by the Postgres client and, if it is going to initiate a TLS handshake (note that Postgres connections start out as cleartext and are then upgraded to TLS), Traefik handles the handshake itself and can then read the TLS headers containing the SNI information it needs to route the request properly. This means that you can use HostSNI(`example.com`) together with TLS to expose Postgres databases under different subdomains.
As of writing this answer, I was able to get this working with the v3.0.0-beta2 image (Reference)
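Concretely, under that image the label set from the original question should then work, since Traefik performs the cleartext-to-TLS upgrade before reading the SNI. A sketch (only the image tag differs from the question's compose file; the version comes from the answer above):

  traefik:
    image: traefik:v3.0.0-beta2
    # entrypoints, providers and certificates as in the original compose file

  pgsql:
    image: postgres:10
    labels:
      - traefik.enable=true
      - traefik.tcp.routers.pgsql.rule=HostSNI(`example.localhost`)
      - traefik.tcp.routers.pgsql.tls=true
      - traefik.tcp.routers.pgsql.service=pgsql
      - traefik.tcp.routers.pgsql.entrypoints=postgres
      - traefik.tcp.services.pgsql.loadbalancer.server.port=5432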
I have a server with Docker and port 80 open.
I use Traefik to route between containers, and I want to host a PostgreSQL database. I start the container with these settings:
postgresql:
  image: orchardup/postgresql
  environment:
    - "POSTGRESQL_PASS=***"
  labels:
    - "traefik.enable=true"
    - "traefik.frontend.rule=Path:/postgresql/"
But it does not work.
What am I doing wrong?
Traefik is a layer 7 reverse proxy.
Postgres doesn't use HTTP and requires a layer 4 proxy.
You need to look at using another product to proxy Postgres connections.
I guess the host name should be the name of the Postgres container, not the name of the service. I suggest using container_name in your compose file so the name stays consistent.
There are HTTP front ends for Postgres, but I think it's way easier to just run it outside of Traefik. Keep your backend and frontend inside Traefik, and the db outside:
docker pull postgres:latest
docker run -p 5432:5432 postgres
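In compose terms that amounts to publishing the database port straight to the host and keeping Traefik labels only on the HTTP services; a rough sketch (image and host names are placeholders, and the frontend.rule label follows the v1 syntax used in the question):

  postgres:
    image: postgres:latest
    ports:
      - "5432:5432"    # published directly on the host, bypassing Traefik

  backend:
    image: my-backend:latest    # hypothetical application image
    labels:
      - "traefik.enable=true"
      - "traefik.frontend.rule=Host:backend.example.com"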
I'm having network issues running services in docker-compose. Essentially I'm just trying to make a GET request through Kong to a simple Flask API I have set up. The docker-compose.yml is below:
version: "3.0"
services:
postgres:
image: postgres:9.4
container_name: kong-database
ports:
- "5432:5432"
environment:
- POSTGRES_USER=kong
- POSTGRES_DB=kong
web:
image: kong:latest
container_name: kong
environment:
- DATABASE=postgres
- KONG_PG_HOST=postgres
restart: always
ports:
- "8000:8000"
- "443:8443"
- "8001:8001"
- "7946:7946"
- "7946:7946/udp"
links:
- postgres
ui:
image: pgbi/kong-dashboard
container_name: kong-dashboard
ports:
- "8080:8080"
employeedb:
build: test-api/
restart: always
ports:
- "5002:5002"
I add the API to Kong with the command curl -i -X POST --url http://localhost:8001/apis/ --data name=employeedb --data upstream_url=http://localhost:5002 --data hosts=employeedb --data uris=/employees. I've tried this with many combinations of inputs, including different names and passing the Docker network IP or the test-api service name as the host in upstream_url. After adding the API to Kong I get:
HTTP/1.1 502 Bad Gateway
Date: Tue, 11 Jul 2017 14:17:17 GMT
Content-Type: text/plain; charset=UTF-8
Transfer-Encoding: chunked
Connection: keep-alive
Server: kong/0.10.3
Additionally, I've gone into the running containers with docker exec -it <container-id> /bin/bash and attempted to curl the expected Flask endpoint. From the container running the API I was able to make a successful call to both localhost:5002/employees and employeedb:5002/employees. However, when making the request from the container running Kong I see:
curl -iv -X GET --url 'http://employeedb:5002/employees'
* About to connect() to employeedb port 5002 (#0)
* Trying X.X.X.X...
* Connection refused
* Failed connect to employeedb:5002; Connection refused
* Closing connection 0
Am I missing some sort of configuration that exposes the containers to one another?
You need to make the employeedb container visible to Kong by defining a link, as you did with the PostgreSQL database. Just add it as an additional entry directly below - postgres and it should be reachable by Kong:
    ....
    links:
      - postgres
      - employeedb
    ....
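In context, the Kong service would then look roughly like this (other keys unchanged from the question's compose file):

  web:
    image: kong:latest
    container_name: kong
    environment:
      - DATABASE=postgres
      - KONG_PG_HOST=postgres
    links:
      - postgres
      - employeedb

As a side note, on the default network that Compose creates for a v3 file, service names such as employeedb already resolve between containers, so the link mainly documents the dependency; the upstream_url registered in Kong still needs to use that service name (http://employeedb:5002) rather than localhost.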