I would really appreciate some help with the current issue I am experiencing.
Context:
I have been upgrading my instance of keycloak from 16.x to 18.x.
After many hours of research, I have been defeated by this one issue.
Issue:
When I go to the site URL (for this example, https://thing.com/), I am greeted with "Resource not found" instead of the Keycloak welcome page.
In my Chrome network monitor it shows the following: [screenshot: error in network monitor]
Infra:
Keycloak lives on its own machine. The URL reaches Keycloak through a Caddy service acting as a reverse proxy.
Relevant scripts:
Docker-compose
version: "3.1"
services:
keycloak:
image: quay.io/keycloak/keycloak:18.0.2
environment:
JAVA_OPTS: "-XX:+UseContainerSupport -XX:MaxRAMPercentage=75.0 -Djava.net.preferIPv4Stack=true -Djboss.modules.system.pkgs=\"org.jboss.byteman\" -Djava.awt.headless=true"
KC_HOSTNAME_PORT: 8080
KC_HOSTNAME: ${KC_HOME}
KC_PROXY: edge
KC_DB_URL: 'jdbc:postgresql://${KEYCLOAK_DB_ADDR}/${KEYCLOAK_DB_DATABASE}?sslmode=require'
KC_DB: postgres
KC_DB_USERNAME: ${KEYCLOAK_DB_USER}
KC_DB_PASSWORD: ${KEYCLOAK_DB_PASSWORD}
KC_HTTP_RELATIVE_PATH: /auth
KC_HOSTNAME_STRICT_HTTPS: 'false'
command: start --auto-build
ports:
- 8080:8080
- 8443:8443
volumes:
- backup:/var/backup
healthcheck:
test: curl -I http://127.0.0.1:8080/
volumes:
backup:
NOTE: If I remove KC_HTTP_RELATIVE_PATH: /auth it behaves as intended. However, I would prefer not to remove it, since a lot of the services using Keycloak are tied to that relative path.
I can replicate this with a local docker image built using the same environment variables.
Does anyone perhaps know some secret ninja moves I could do to get it to direct to the welcome page?
Automatic redirect from / to KC_HTTP_RELATIVE_PATH is not supported in Keycloak 18 (see https://github.com/keycloak/keycloak/discussions/10274).
You have to add the redirect in the reverse proxy; in Caddy, the redir directive does this.
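For reference, a minimal Caddyfile sketch of that redirect (the hostname and upstream address are placeholders for your setup, not taken from the question):

thing.com {
    # send the bare root to the Keycloak relative path
    redir / /auth/ permanent
    # proxy everything else to Keycloak
    reverse_proxy 127.0.0.1:8080
}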
Related
In order to integrate Superset with my application properly, I need to enable HTTPS. During development I'm running it with docker-compose. I couldn't find any useful information on how to do that. The version I'm running is v1.1.0
I'd be very glad if you could help me. Thank you
Clone superset repo
git clone https://github.com/apache/superset.git
Add nginx-proxy as a service in docker-compose.yaml:
  nginx-proxy:
    image: jwilder/nginx-proxy
    ports:
      - "443:443"
    volumes:
      - .certs:/etc/nginx/certs
      - /var/run/docker.sock:/tmp/docker.sock:ro
    environment:
      DEFAULT_HOST: superset.localhost
Add VIRTUAL_HOST and WEB_PORTS to the superset service's environment in docker-compose.yaml:
    environment:
      CYPRESS_CONFIG: "${CYPRESS_CONFIG}"
      VIRTUAL_HOST: superset.localhost
      WEB_PORTS: 8088
Let's create a self-signed certificate for HTTPS.
mkdir .certs && cd .certs
wget https://gist.githubusercontent.com/OnnoGabriel/f717192ed92bf55725337358f4af5ab2/raw/9b669462299c9981bd7864901f09fc2885d9e780/create_certificates.sh
sudo chmod 700 ./create_certificates.sh
sudo ./create_certificates.sh
When prompted, enter the domain name --> superset.localhost
To make browsers trust our root CA, we have to import it into them.
Firefox --> https://support.securly.com/hc/en-us/articles/360008547993-How-to-Install-Securly-s-SSL-Certificate-in-Firefox-on-Windows
Chrome --> https://support.securly.com/hc/en-us/articles/206081828-How-to-manually-install-the-Securly-SSL-certificate-in-Chrome
Take the rootCA.crt file and add it to your browsers following the instructions above.
Now start your Superset Docker instance:
docker-compose -f docker-compose.yml up -d
Now you can access Superset at https://superset.localhost
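As a quick sanity check from the host, something like this should return a response over HTTPS (this assumes the script above left rootCA.crt in .certs and that Superset's /health endpoint is available; otherwise just hit the root URL):

curl --resolve superset.localhost:443:127.0.0.1 \
     --cacert .certs/rootCA.crt \
     https://superset.localhost/health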
For more details, check out this article:
https://betterprogramming.pub/docker-powered-web-development-utilizing-https-and-local-domain-names-a57f129e1c4d
I am running DataJoint LabBook through the provided Docker container (https://datajoint.github.io/datajoint-labbook/user.html#installation) and wondered whether there is a way to move it away from the default port (80?). I am not sure I understand the instructions in the .yaml (docker-compose-deploy.yaml); it seems to me that there is a pharus endpoint (5000) and then there are two port definitions (443:443, 80:80) further down. I am not sure what those refer to.
Yes, you can move the DataJoint LabBook service to a different port; however, a few changes are necessary for it to function properly.
TL;DR
Assuming that you are accessing DataJoint LabBook locally, follow these steps:
Add the line 127.0.0.1 fakeservices.datajoint.io to your hosts file (see the example after these steps); verify the hosts file location on your system.
Modify the ports configuration in docker-compose-deploy.yaml as:
    ports:
      - "3000:443" # replace 3000 with the port of your choosing
      #- "80:80" # disables HTTP -> HTTPS redirect
Navigate in your Google Chrome browser to https://fakeservices.datajoint.io:3000
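For reference, the hosts entry from step 1 is a single line (the file is typically /etc/hosts on Linux/macOS and C:\Windows\System32\drivers\etc\hosts on Windows):

127.0.0.1 fakeservices.datajoint.io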
Detailed Explanation
Let me first speak a bit on the architecture and then describe the relevant changes as we go along.
Below is the Docker Compose file presented in the documentation. I'll make the assumption that you are attempting to run this locally.
# PHARUS_VERSION=0.1.0 DJLABBOOK_VERSION=0.1.0 docker-compose -f docker-compose-deploy.yaml pull
# PHARUS_VERSION=0.1.0 DJLABBOOK_VERSION=0.1.0 docker-compose -f docker-compose-deploy.yaml up -d
#
# Intended for production deployment.
# Note: You must run both commands above for minimal outage.
# Make sure to add an entry into your /etc/hosts file as `127.0.0.1 fakeservices.datajoint.io`
# This serves as an alias for the domain to resolve locally.
# With this config and the configuration below in NGINX, you should be able to verify it is
# running properly by navigating in your browser to `https://fakeservices.datajoint.io`.
# If you don't update your hosts file, you will still have access at `https://localhost`
# however it should simply display 'Not secure' since the cert will be invalid.
version: "2.4"
x-net: &net
  networks:
    - main
services:
  pharus:
    <<: *net
    image: datajoint/pharus:${PHARUS_VERSION}
    environment:
      - PHARUS_PORT=5000
  fakeservices.datajoint.io:
    <<: *net
    image: datajoint/nginx:v0.0.16
    environment:
      - ADD_zlabbook_TYPE=STATIC
      - ADD_zlabbook_PREFIX=/
      - ADD_pharus_TYPE=REST
      - ADD_pharus_ENDPOINT=pharus:5000
      - ADD_pharus_PREFIX=/api
      - HTTPS_PASSTHRU=TRUE
    entrypoint: sh
    command:
      - -c
      - |
        rm -R /usr/share/nginx/html
        curl -L $$(echo "https://github.com/datajoint/datajoint-labbook/releases/download/\
        ${DJLABBOOK_VERSION}/static-djlabbook-${DJLABBOOK_VERSION}.zip" | tr -d '\n' | \
        tr -d '\t') -o static.zip
        unzip static.zip -d /usr/share/nginx
        mv /usr/share/nginx/build /usr/share/nginx/html
        rm static.zip
        /entrypoint.sh
    ports:
      - "443:443"
      - "80:80"
    depends_on:
      pharus:
        condition: service_healthy
networks:
  main:
First, the Note in the header comment above is important and seems to have been missed in the DataJoint LabBook documentation (I've filed this issue to update it). Make sure to follow the instruction in the Note as 'secure' access is required from pharus (more on this below).
From the Docker Compose file, you will note 2 services:
pharus - A DataJoint REST API backend service. This service is configured to listen on port 5000; however, it is not actually exposed to the host. This means it will not conflict with anything and does not require any change, as it is entirely contained within a local, virtual Docker network.
fakeservices.datajoint.io - A proxying service that is exposed to the host and thus accessible locally and publicly against the host. Its primary purpose is to either:
a) forward requests beginning with /api to pharus, or
b) resolve other requests to the DataJoint LabBook GUI.
DataJoint LabBook's GUI is a static web app, which means it can be served both insecurely (HTTP, typically port 80) and securely (HTTPS, typically port 443). Because of the secure requirement from pharus, requests made to port 80 are simply redirected to 443 and exposed for convenience. Therefore, if we want to move DataJoint LabBook to a new port, we should change the mapping of 443 to a new port on the host and disable the 80 -> 443 redirect. The port update would look like so:
    ports:
      - "3000:443" # replace 3000 with the port of your choosing
      #- "80:80" # disables HTTP -> HTTPS redirect
Finally, after configuring and bringing up the services, you should be able to confirm the port change by navigating to https://fakeservices.datajoint.io:3000 in your Google Chrome browser.
I am setting up Rundeck internally for myself to test.
I am currently attempting to access the official repositories for plugins; however, I know for a fact the server has no internet connection.
I see nowhere in the documentation any instructions on how to apply a web proxy to the Rundeck application.
Has anyone done this before?
EDIT
The Server is a RHEL8 machine.
I am not referring to using a reverse proxy.
** FOUND ANSWER **
After a couple of days of searching:
If you are using a server that is disconnected from the internet, have an internal proxy to route external traffic, and are using the RHEL package of Rundeck:
Solution
Edit your /etc/sysconfig/rundeckd file and paste a custom RDECK_JVM_SETTINGS at the end of the file:
RDECK_JVM_SETTINGS="${RDECK_JVM_SETTINGS:- -Xmx1024m -Xms256m -XX:MaxMetaspaceSize=256m -server -Dhttp.proxySet=true -Dhttp.proxyHost=server -Dhttp.proxyPort=8080 -Dhttps.proxySet=true -Dhttps.proxyHost=server -Dhttps.proxyPort=80 -Dhttp.nonProxyHosts=*.place.com }"
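After saving, restart the service so the new JVM options are picked up (the RHEL package's systemd unit, as used later in this thread):

sudo systemctl restart rundeckd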
You can test it quickly using Docker Compose.
The idea is to put the NGINX container in front of the Rundeck container.
/your/path/docker-compose.yml content:
version: "3.7"
services:
rundeck:
build:
context: .
args:
IMAGE: ${RUNDECK_IMAGE:-rundeck/rundeck:3.3.10}
container_name: rundeck-nginx
ports:
- 4440:4440
environment:
RUNDECK_GRAILS_URL: http://localhost
RUNDECK_SERVER_FORWARDED: "true"
nginx:
image: nginx:alpine
volumes:
- ./config/nginx.conf:/etc/nginx/conf.d/default.conf:ro
ports:
- 80:80
/your/path/Dockerfile content:
ARG IMAGE
FROM ${IMAGE}
If you check the volumes block, you'll see a specific NGINX configuration is needed at the ./config path:
/your/path/config/nginx.conf content:
server {
  listen 80 default_server;
  server_name rundeck-cl;
  location / {
    # get the rundeck internal address/port
    proxy_pass http://rundeck:4440;
  }
}
To build:
docker-compose build
To run:
docker-compose up
To see your Rundeck instance:
Open your browser and go to http://localhost; you will see Rundeck behind the NGINX proxy server.
Edit: here is an example using NGINX on CentOS/RHEL.
1- Install Rundeck via YUM on Rundeck Server.
2- Install NGINX via YUM, just do sudo yum -y install nginx (if you like, you can do this in the same Rundeck server or just in another one).
3- NGINX side. Go to /etc/nginx/nginx.conf and add the following block inside server section:
location /rundeck {
  proxy_pass http://your-rundeck-host:4440;
}
Save the file.
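For context, a minimal sketch of the surrounding server block with that location added (server_name and the upstream host are placeholders):

server {
  listen 80;
  server_name your-nginx-ip-or-dns-name;

  location /rundeck {
    # forward to the Rundeck host/port from step 1
    proxy_pass http://your-rundeck-host:4440;
  }
}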
4- RUNDECK side. Create a new file at /etc/sysconfig path named rundeckd with the following content:
RDECK_JVM_OPTS="-Dserver.web.context=/rundeck"
Give permissions to rundeck user: chown rundeck:rundeck /etc/sysconfig/rundeckd and save it.
5- RUNDECK side. Open the /etc/rundeck/rundeck-config.properties file and check the grails.serverURL parameter; you need to put the external IP or server DNS name and the context defined on the NGINX side:
grails.serverURL=http://your-nginx-ip-or-dns-name/rundeck
Save it.
6- NGINX side. Start the NGINX service: systemctl start nginx (later, if you want it to start on every boot, run systemctl enable nginx).
7- RUNDECK side. Start the Rundeck service: systemctl start rundeckd (this takes a few seconds; later you can enable the service to start on every boot with systemctl enable rundeckd).
Now rundeck is behind the NGINX proxy server, just open your browser and type: http://your-nginx-ip-or-dns-name/rundeck.
Let's assume your Rundeck is running on an internal server with the domain name "internal-rundeck.com:4440" and you want to expose it at "external-rundeck.com/rundeck" through NGINX; follow the steps below.
step 1:
In rundeck
RUNDECK_GRAILS_URL="external-rundeck.com/rundeck"
RUNDECK_SERVER_CONTEXTPATH="/rundeck"
RUNDECK_SERVER_FORWARDED=true
Set the above configurations as environment variables in your deployment file (see the sketch below).
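If the deployment file is a Docker Compose service, a sketch of setting those variables might look like this (the http:// scheme is an assumption; use whatever scheme you expose externally):

    environment:
      RUNDECK_GRAILS_URL: "http://external-rundeck.com/rundeck"
      RUNDECK_SERVER_CONTEXTPATH: "/rundeck"
      RUNDECK_SERVER_FORWARDED: "true"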
step 2:
In nginx
location /rundeck/ {
  proxy_pass http://internal-rundeck.com:4440/rundeck/;
}
Add this to your NGINX config file and it will work.
Question:
Can anybody with access to the host machine connect to a Docker network, or are services running within a docker network only visible to other services running in a Docker network assuming the ports are not exposed?
Background:
I currently have a web application with a postgresql database backend where both components are being run through docker on the same machine, and only the web app is exposing ports on the host machine. The web-app has no trouble connecting to the db as they are in the same docker network. I was considering removing the password from my database user so that I don't have to store the password on the host and pass it into the web-app container as a secret. Before I do that I want to ascertain how secure the docker network is.
Here is a sample of my docker-compose:
version: '3.3'
services:
  database:
    image: postgres:9.5
    restart: always
    volumes:
      # preserves the database between containers
      - /var/lib/my-web-app/database:/var/lib/postgresql/data
  web-app:
    image: my-web-app
    depends_on:
      - database
    ports:
      - "8080:8080"
      - "8443:8443"
    restart: always
    secrets:
      - source: DB_USER_PASSWORD
secrets:
  DB_USER_PASSWORD:
    file: /secrets/DB_USER_PASSWORD
Any help is appreciated.
On a native Linux host, anyone who has or can find the container-private IP address can directly contact the container. (Unprivileged prodding around with ifconfig can give you some hints that it's there.) On non-Linux there's typically a hidden Linux VM, and if you can get a shell in that, the same trick works. And of course if you can run any docker command then you can docker exec a shell in the container.
Docker's network-level protection isn't strong enough to be the only thing securing your database. Using standard username-and-password credentials is still required.
(Note that the docker exec path is especially powerful: since the unencrypted secrets are ultimately written into a path in the container, being able to run docker exec means you can easily extract them. Restricting docker access to root only is also good security practice.)
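For example, a minimal sketch on a native Linux host (the container name and IP are illustrative, not taken from the compose file above):

# find the database container's private IP on the compose network
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' myproject_database_1

# connect straight to Postgres from the host, even though no port is published
psql -h 172.18.0.2 -U postgres

# or, with docker access, read the mounted secret directly
docker exec myproject_web-app_1 cat /run/secrets/DB_USER_PASSWORD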
I am integrating my Go application with Stackdriver logging via cloud.google.com/go/logging. My application works perfectly fine when deployed in GCP on the Flex engine. However, when I run my app locally, as soon as I hit localhost:8080 I get the following error on my console and the application gets killed automatically:
Metadata fetch failed: Get http://metadata/computeMetadata/v1/instance/attributes/gae_project: dial tcp: lookup metadata on 127.0.0.11:53: server misbehaving
My understanding is that when running locally, the code should not try to access Google's internal metadata, which is what is happening above. I dug deeper and it looks like this part is handled in cloud.google.com/go/compute/metadata/metadata.go. I might be wrong here, but it looks like I have to set an env variable for the code to work properly. Pasting from the documentation in metadata.go:
// metadataHostEnv is the environment variable specifying the
// GCE metadata hostname. If empty, the default value of
// metadataIP ("169.254.169.254") is used instead.
// This is variable name is not defined by any spec, as far as
// I know; it was made up for the Go package.
metadataHostEnv = "GCE_METADATA_HOST"
If all of my understanding is true, what should I set GCE_METADATA_HOST to? If I am wrong about my understanding, why am I seeing this error? Is it possible that this error has something to do with my Docker and not with Stackdriver logging?
I am running my app in a container with docker-compose. I am performing go install, which generates the binary, and then I am simply executing the binary.
EDIT: This is my compose file
version: '3'
services:
  dev:
    image: <gcr_image>
    entrypoint:
      - /bin/sh
      - -c
      - "cat ./config-scripts/config.sh >> /root/.bashrc; bash"
    command: bash
    stdin_open: true
    tty: true
    working_dir: /code
    environment:
      - ENV1=value1
      - ENV2=value2
    ports:
      - "8080:8080"
    volumes:
      - .:/code
      - ~/.npmrc:/root/.npmrc
      - ~/.config/gcloud:/root/.config/gcloud
      - /var/run/docker.sock:/var/run/docker.sock