DataJoint LabBook - how to change ports

I am running DataJoint LabBook through the provided Docker container (https://datajoint.github.io/datajoint-labbook/user.html#installation) and wondered whether there is a way to move it away from the default port (80?). I am not sure I understand the instructions in the .yaml (docker-compose-deploy.yaml): it seems to me that there is a pharus endpoint (5000) and then two port definitions (443:443, 80:80) further down, and I am not sure what those refer to.

Yes, you can move the DataJoint LabBook service to a different port; however, a few changes are necessary for it to function properly.
TL;DR
Assuming that you are accessing DataJoint LabBook locally, follow these steps:
Add the line 127.0.0.1 fakeservices.datajoint.io to your hosts file (its location depends on your operating system; see the one-liner after these steps).
Modify the ports configuration in docker-compose-deploy.yaml as:
ports:
  - "3000:443" # replace 3000 with the port of your choosing
  #- "80:80"   # disables HTTP -> HTTPS redirect
Navigate in your Google Chrome browser to https://fakeservices.datajoint.io:3000
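For reference, on Linux or macOS the hosts entry mentioned above can be added with a one-liner like the following (on Windows, edit C:\Windows\System32\drivers\etc\hosts instead); this is just a convenience sketch:
echo "127.0.0.1 fakeservices.datajoint.io" | sudo tee -a /etc/hosts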
Detailed Explanation
Let me first speak a bit on the architecture and then describe the relevant changes as we go along.
Below is the Docker Compose file presented in the documentation. I'll make the assumption that you are attempting to run this locally.
# PHARUS_VERSION=0.1.0 DJLABBOOK_VERSION=0.1.0 docker-compose -f docker-compose-deploy.yaml pull
# PHARUS_VERSION=0.1.0 DJLABBOOK_VERSION=0.1.0 docker-compose -f docker-compose-deploy.yaml up -d
#
# Intended for production deployment.
# Note: You must run both commands above for minimal outage.
# Make sure to add an entry into your /etc/hosts file as `127.0.0.1 fakeservices.datajoint.io`
# This serves as an alias for the domain to resolve locally.
# With this config and the configuration below in NGINX, you should be able to verify it is
# running properly by navigating in your browser to `https://fakeservices.datajoint.io`.
# If you don't update your hosts file, you will still have access at `https://localhost`
# however it should simply display 'Not secure' since the cert will be invalid.
version: "2.4"
x-net: &net
networks:
- main
services:
pharus:
<<: *net
image: datajoint/pharus:${PHARUS_VERSION}
environment:
- PHARUS_PORT=5000
fakeservices.datajoint.io:
<<: *net
image: datajoint/nginx:v0.0.16
environment:
- ADD_zlabbook_TYPE=STATIC
- ADD_zlabbook_PREFIX=/
- ADD_pharus_TYPE=REST
- ADD_pharus_ENDPOINT=pharus:5000
- ADD_pharus_PREFIX=/api
- HTTPS_PASSTHRU=TRUE
entrypoint: sh
command:
- -c
- |
rm -R /usr/share/nginx/html
curl -L $$(echo "https://github.com/datajoint/datajoint-labbook/releases/download/\
${DJLABBOOK_VERSION}/static-djlabbook-${DJLABBOOK_VERSION}.zip" | tr -d '\n' | \
tr -d '\t') -o static.zip
unzip static.zip -d /usr/share/nginx
mv /usr/share/nginx/build /usr/share/nginx/html
rm static.zip
/entrypoint.sh
ports:
- "443:443"
- "80:80"
depends_on:
pharus:
condition: service_healthy
networks:
main:
First, the Note in the header comment above is important and seems to have been missed in the DataJoint LabBook documentation (I've filed this issue to update it). Make sure to follow the instruction in the Note, since pharus requires 'secure' (HTTPS) access (more on this below).
From the Docker Compose file, you will note 2 services:
pharus - A DataJoint REST API backend service. This service is configured to listen on port 5000; however, it is not actually exposed to the host. This means it will not conflict with anything on the host and does not require any change, as it is entirely contained within a local, virtual Docker network.
fakeservices.datajoint.io - A proxying service that is exposed to the host and thus accessible both locally and publicly. Its primary purpose is to either:
a) forward requests beginning with /api to pharus, or
b) resolve other requests to the DataJoint LabBook GUI.
DataJoint LabBook's GUI is a static web app, which means it can be served both insecurely (HTTP, typically port 80) and securely (HTTPS, typically port 443). Because of the secure requirement from pharus, port 80 is exposed only for convenience and requests made to it are simply redirected to 443. Therefore, to move DataJoint LabBook to a new port, we only need to map 443 to a new port on the host and disable the 80 -> 443 redirect. The port update would look like this:
ports:
  - "3000:443" # replace 3000 with the port of your choosing
  #- "80:80"   # disables HTTP -> HTTPS redirect
Finally, after configuring and bringing up the services, you should be able to confirm the port change by navigating to https://fakeservices.datajoint.io:3000 in your Google Chrome browser.
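If you also want to double-check the change from the command line, a quick sketch (reusing the commands from the compose file header; -k skips certificate verification for the self-signed cert) would be:
PHARUS_VERSION=0.1.0 DJLABBOOK_VERSION=0.1.0 docker-compose -f docker-compose-deploy.yaml up -d
docker-compose -f docker-compose-deploy.yaml ps   # expect 3000->443 in the ports column
curl -k https://fakeservices.datajoint.io:3000/   # should return the GUI's static HTML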

Related

Keycloak18 index page "Resource not found"

I would really appreciate some help with the current issue I am experiencing.
Context:
I have been upgrading my instance of keycloak from 16.x to 18.x.
After many hours of research, I have been defeated by this one issue.
Issue:
When I go to the site URL, for this example https://thing.com/, I am greeted with "Resource not found" instead of the Keycloak welcome page.
The Chrome network monitor shows the following: (screenshot: error with network monitor)
Infra:
Keycloak lives on its own machine. The URL reaches Keycloak through a Caddy service acting as a reverse proxy.
Relevant scripts:
Docker-compose
version: "3.1"
services:
keycloak:
image: quay.io/keycloak/keycloak:18.0.2
environment:
JAVA_OPTS: "-XX:+UseContainerSupport -XX:MaxRAMPercentage=75.0 -Djava.net.preferIPv4Stack=true -Djboss.modules.system.pkgs=\"org.jboss.byteman\" -Djava.awt.headless=true"
KC_HOSTNAME_PORT: 8080
KC_HOSTNAME: ${KC_HOME}
KC_PROXY: edge
KC_DB_URL: 'jdbc:postgresql://${KEYCLOAK_DB_ADDR}/${KEYCLOAK_DB_DATABASE}?sslmode=require'
KC_DB: postgres
KC_DB_USERNAME: ${KEYCLOAK_DB_USER}
KC_DB_PASSWORD: ${KEYCLOAK_DB_PASSWORD}
KC_HTTP_RELATIVE_PATH: /auth
KC_HOSTNAME_STRICT_HTTPS: 'false'
command: start --auto-build
ports:
- 8080:8080
- 8443:8443
volumes:
- backup:/var/backup
healthcheck:
test: curl -I http://127.0.0.1:8080/
volumes:
backup:
NOTE: If I remove KC_HTTP_RELATIVE_PATH: /auth it behaves as intended. However, I would prefer not to remove it, since a lot of the services using Keycloak are tied to that relative path.
I can replicate this with a local docker image built using the same environment variables.
Does anyone perhaps know some secret ninja moves I could do to get it to redirect to the welcome page?
Automatic redirect from / to KC_HTTP_RELATIVE_PATH is not supported in Keycloak 18 (see https://github.com/keycloak/keycloak/discussions/10274).
You have to add the redirect in the reverse proxy; in Caddy there is redir.
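For example, a minimal Caddyfile sketch (assuming Caddy v2 and that Keycloak is reachable from Caddy as keycloak:8080 - adjust names to your setup) could look like:
thing.com {
  # send the bare root to the Keycloak relative path
  redir / /auth/ permanent
  # everything else is proxied to Keycloak as before
  reverse_proxy keycloak:8080
}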

Rundeck behind web proxy

I am setting up Rundeck internally for myself to test.
I am currently attempting to access the official plugin repositories; however, I know for a fact the server has no direct internet connection.
I see nothing in the documentation with instructions on how to apply the web proxy to the Rundeck application.
Has anyone done this before?
EDIT
The Server is a RHEL8 machine.
I am not referring to using a reverse proxy.
** FOUND ANSWER **
After a couple of days of searching:
If you are using a server that is disconnected from the internet
Have an internal proxy to route external traffic
Using the RHEL package of rundeck
Solution
edit your /etc/sysconfig/rundeckd file
paste custom RDECK_JVM_SETTINGS at the end of the file
RDECK_JVM_SETTINGS="${RDECK_JVM_SETTINGS:- -Xmx1024m -Xms256m -XX:MaxMetaspaceSize=256m -server -Dhttp.proxySet=true -Dhttp.proxyHost=server -Dhttp.proxyPort=8080 -Dhttps.proxySet=true -Dhttps.proxyHost=server -Dhttps.proxyPort=80 -Dhttp.nonProxyHosts=*.place.com }"
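After saving the file, restart the service so the new JVM flags are picked up, and optionally confirm they landed on the Java command line (a quick sketch):
sudo systemctl restart rundeckd
ps aux | grep -o 'proxyHost=[^ ]*'   # should show the configured proxy host(s)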
You can test it quickly using Docker Compose.
The idea is to put the NGINX container in front of the Rundeck container.
/your/path/docker-compose.yml content:
version: "3.7"
services:
rundeck:
build:
context: .
args:
IMAGE: ${RUNDECK_IMAGE:-rundeck/rundeck:3.3.10}
container_name: rundeck-nginx
ports:
- 4440:4440
environment:
RUNDECK_GRAILS_URL: http://localhost
RUNDECK_SERVER_FORWARDED: "true"
nginx:
image: nginx:alpine
volumes:
- ./config/nginx.conf:/etc/nginx/conf.d/default.conf:ro
ports:
- 80:80
/your/path/Dockerfile content:
ARG IMAGE
FROM ${IMAGE}
If you check the volumes block, you will see that you need a specific NGINX configuration at the ./config path:
/your/path/config/nginx.conf content:
server {
  listen 80 default_server;
  server_name rundeck-cl;
  location / {
    # get the rundeck internal address/port
    proxy_pass http://rundeck:4440;
  }
}
To build:
docker-compose build
To run:
docker-compose up
To see your Rundeck instance:
Open your browser and go to localhost; you will see Rundeck behind the NGINX proxy server.
Edit: here is an example using NGINX on CentOS/RHEL.
1- Install Rundeck via YUM on Rundeck Server.
2- Install NGINX via YUM, just do sudo yum -y install nginx (if you like, you can do this in the same Rundeck server or just in another one).
3- NGINX side. Go to /etc/nginx/nginx.conf and add the following block inside the server section:
location /rundeck {
  proxy_pass http://your-rundeck-host:4440;
}
Save the file.
4- RUNDECK side. Create a new file at /etc/sysconfig path named rundeckd with the following content:
RDECK_JVM_OPTS="-Dserver.web.context=/rundeck"
Give permissions to rundeck user: chown rundeck:rundeck /etc/sysconfig/rundeckd and save it.
5- RUNDECK side. Open the /etc/rundeck/rundeck-config.properties file and check the grails.serverURL parameter; you need to put the external IP or server DNS name and the correct context defined in the NGINX-side configuration.
grails.serverURL=http://your-nginx-ip-or-dns-name/rundeck
Save it.
6- NGINX side. Start the NGINX service: systemctl start nginx (later if you like to enable on every boot, just do systemctl enable nginx).
7- RUNDECK side. Start the Rundeck service, systemctl start rundeckd (this takes some seconds, later you can enable the service to start on every server boot, just do: systemctl enable rundeckd).
Now rundeck is behind the NGINX proxy server, just open your browser and type: http://your-nginx-ip-or-dns-name/rundeck.
Let's assume your Rundeck is running on an internal server with the domain name "internal-rundeck.com:4440" and you want to expose it as "external-rundeck.com/rundeck" through NGINX. Follow the steps below.
step 1:
In rundeck
RUNDECK_GRAILS_URL="external-rundeck.com/rundeck"
RUNDECK_SERVER_CONTEXTPATH="/rundeck"
RUNDECK_SERVER_FORWARDED=true
Set the above configuration in your deployment file as environment variables.
step 2:
In nginx
location /rundeck/ {
  proxy_pass http://internal-rundeck.com:4440/rundeck/;
}
Add this to your NGINX config file and it works.
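After editing the config, test and reload NGINX so the new location block takes effect, for example:
sudo nginx -t && sudo systemctl reload nginx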

Is a service running in a docker network secure from outside connection, even from localhost?

Question:
Can anybody with access to the host machine connect to a Docker network, or are services running within a docker network only visible to other services running in a Docker network assuming the ports are not exposed?
Background:
I currently have a web application with a postgresql database backend where both components are being run through docker on the same machine, and only the web app is exposing ports on the host machine. The web-app has no trouble connecting to the db as they are in the same docker network. I was considering removing the password from my database user so that I don't have to store the password on the host and pass it into the web-app container as a secret. Before I do that I want to ascertain how secure the docker network is.
Here is a sample of my docker-compose:
version: '3.3'
services:
  database:
    image: postgres:9.5
    restart: always
    volumes:
      # preserves the database between containers
      - /var/lib/my-web-app/database:/var/lib/postgresql/data
  web-app:
    image: my-web-app
    depends_on:
      - database
    ports:
      - "8080:8080"
      - "8443:8443"
    restart: always
    secrets:
      - source: DB_USER_PASSWORD
secrets:
  DB_USER_PASSWORD:
    file: /secrets/DB_USER_PASSWORD
Any help is appreciated.
On a native Linux host, anyone who has or can find the container-private IP address can directly contact the container. (Unprivileged prodding around with ifconfig can give you some hints that it's there.) On non-Linux there's typically a hidden Linux VM, and if you can get a shell in that, the same trick works. And of course if you can run any docker command then you can docker exec a shell in the container.
Docker's network-level protection isn't strong enough to be the only thing securing your database. Using standard username-and-password credentials is still required.
(Note that the docker exec path is especially powerful: since the unencrypted secrets are ultimately written into a path in the container, being able to run docker exec means you can easily extract them. Restricting docker access to root only is also good security practice.)
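As a concrete illustration of that last point, here is roughly what anyone with Docker access on the host could do, assuming Compose's default container naming and secret mount path (the <project> names below are placeholders):
# find the database container's private IP on the Compose network
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}} {{end}}' <project>_database_1
# or simply read the mounted secret from inside the web-app container
docker exec <project>_web-app_1 cat /run/secrets/DB_USER_PASSWORD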

docker-compose succeed but server does not response when request

I have built a RESTful API web service using the Flask framework, with Redis as the main database, MongoDB as a backup store, and Celery as a task queue to store data into MongoDB in the background.
Then I dockerized my application using docker-compose. Here is my docker-compose.yml:
version: '3'
services:
  web:
    build: .
    ports:
      - "5000:5000"
    volumes:
      - .:/app
  redis:
    image: "redis:alpine"
    ports:
      - "6379:6379"
  mongo:
    image: "mongo:3.6.5"
    ports:
      - "27017:27017"
    environment:
      MONGO_INITDB_DATABASE: syncapp
Here is my Dockerfile:
# base image
FROM python:3.5-alpine
MAINTAINER xhoix <145giakhang#gmail.com>
# copy just the requirements.txt first to leverage Docker cache
# install all dependencies for Python app
COPY ./requirements.txt /app/requirements.txt
WORKDIR /app
# install dependencies in requirements.txt
RUN pip install -r requirements.txt
# copy all content to work directory /app
COPY . /app
# specify the port number the container should expose
EXPOSE 5000
# run the application
CMD ["python", "/app/app.py"]
After running docker-compose up, the app server, Redis and Mongo all run fine. But when I use Postman or curl to call the API, for example http://127.0.0.1:5000/sync/api/v1.0/users, which should return all users as JSON, the result is Could not get any response: There was an error connecting to http://127.0.0.1:5000/sync/api/v1.0/users.
I have no idea why this happens.
Thanks for any help and suggestion!
I found the cause of the issue:
After an hour of debugging, it turns out I only needed to change the app host to 0.0.0.0. When mapping ports, Docker seems to default to 0.0.0.0: when I run docker-compose ps, the PORTS column of each container has the format 0.0.0.0:<port>-><port>. I am not sure whether that is the root cause, but I made the change and the problem is solved.
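In other words, inside the container the Flask app has to bind to all interfaces rather than 127.0.0.1 so the published port can reach it; in app.py that would look roughly like this (the app object name is an assumption):
# app.py -- bind to 0.0.0.0 so the port published by docker-compose reaches Flask
if __name__ == '__main__':
    app.run(host='0.0.0.0', port=5000)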
If the operating system is Linux, use:
ifconfig -a
If the operating system is Windows, use:
ipconfig /all
Then look for the Docker (or other virtualization) interface and use its IPv4/inet address.
Or just use the Docker command:
docker network inspect bridge
Then use the gateway IP listed under IPAM.
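If you prefer to grab just the gateway address directly, a format string works too (sketch, using the default bridge network):
docker network inspect bridge -f '{{(index .IPAM.Config 0).Gateway}}'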

docker-compose mongodb phoenix, [error] failed to connect: ** (Mongo.Error) tcp connect: connection refused - :econnrefused

Hi I am getting this error when I try to run docker-compose up on my yml file.
This is my docker-compose.yml file
version: '3.6'
services:
  phoenix:
    # tell docker-compose which Dockerfile it needs to build
    build:
      context: .
      dockerfile: Dockerfile.development
    # map the port of phoenix to the local dev port
    ports:
      - 4000:4000
    # mount the code folder inside the running container for easy development
    volumes:
      - . .
    # make sure we start mongodb when we start this service
    depends_on:
      - db
  db:
    image: mongo:latest
    volumes:
      - ./data/db:/data/db
    ports:
      - 27017:27017
This is my Dockerfile:
# base image elixir to start with
FROM elixir:1.6
# install hex package manager
RUN mix local.hex --force
RUN mix local.rebar --force
# install the latest phoenix
RUN mix archive.install https://github.com/phoenixframework/archives/raw/master/phx_new.ez --force
# create app folder
COPY . .
WORKDIR ./
# install dependencies
RUN mix deps.get
# run phoenix in *dev* mode on port 4000
CMD mix phx.server
Is this a problem with my dev.exs setup or something to do with the compatibility of docker and phoenix / docker and mongodb?
https://docs.docker.com/compose/compose-file/#depends_on explicitly says:
There are several things to be aware of when using depends_on:
depends_on does not wait for db and redis to be “ready” before starting web - only until they have been started. If you need to wait for a service to be ready,
and advises you to implement the logic to wait for mongodb to spinup and be ready to accept connections by yourself: https://docs.docker.com/compose/startup-order/
In your case it could be something like:
CMD wait-for-db.sh && mix phx.server
where wait-for-db.sh can be as simple as
#!/bin/bash
until nc -z db 27017; do echo "waiting for db"; sleep 1; done
for which you need nc and wait-for-db.sh installed in the container.
There are plenty of other alternative tools to test if db container is listening on the target port.
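One way to wire that into the question's Dockerfile could be the sketch below; it assumes wait-for-db.sh sits next to the Dockerfile and installs netcat in case the base image does not ship it:
# copy the wait script into the image and make it executable
COPY wait-for-db.sh /usr/local/bin/wait-for-db.sh
RUN chmod +x /usr/local/bin/wait-for-db.sh
# netcat is needed by the script (skip if already present in the base image)
RUN apt-get update && apt-get install -y --no-install-recommends netcat && rm -rf /var/lib/apt/lists/*
# wait for mongodb before starting phoenix
CMD wait-for-db.sh && mix phx.server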
UPDATE:
The network connection between containers is described at https://docs.docker.com/compose/networking/:
When you run docker-compose up, the following happens:
A network called myapp_default is created, where myapp is name of the directory where docker-compose.yml is stored.
A container is created using phoenix’s configuration. It joins the network myapp_default under the name phoenix.
A container is created using db’s configuration. It joins the network myapp_default under the name db.
Each container can now look up the hostname phoenix or db and get back the appropriate container’s IP address. For example, phoenix’s application code could connect to the URL mongodb://db:27017 and start using the Mongodb database.
It was an issue with my dev environment not connecting to the MongoDB URL specified in docker-compose. Instead of localhost, it should be db, as named in my docker-compose.yml file.
For clarity on the dev env, modify config/dev.exs as follows (replace with the correct vars):
username: System.get_env("PGUSER"),
password: System.get_env("PGPASSWORD"),
database: System.get_env("PGDATABASE"),
hostname: System.get_env("PGHOST"),
port: System.get_env("PGPORT"),
Create a .env file in the root folder of your project (replace with the vars relevant to the db service used):
PGUSER=some_user
PGPASSWORD=some_password
PGDATABASE=some_database
PGPORT=5432
PGHOST=db
Note that we have added port.
The host can be localhost locally, but it should be mongodb or db (or a full URL) when working with docker-compose, a server, or k8s.
will update answer for prod config...