Force HTTPS on Swisscom CloudFoundry - swisscomdev

I'm serving my web app using Gunicorn running in a Docker container. Is there a way I can force it to use HTTPS rather than HTTP?
Dockerfile
FROM ubuntu:latest
RUN apt-get update
RUN apt-get install -y python python-pip git
RUN apt-get install -y nodejs npm
RUN apt-get install -y nginx
RUN ln -s /usr/bin/nodejs /usr/bin/node
RUN pip install gunicorn greenlet gevent
RUN npm install --global bower gulp
COPY /flask/requirements.txt /flask/requirements.txt
COPY /flask/package.json /flask/package.json
COPY /flask/bower.json /flask/bower.json
WORKDIR /flask
RUN pip install -r requirements.txt
RUN npm install
RUN bower install --allow-root
WORKDIR /
COPY /flask /flask
COPY /configurations/production/* /flask/
WORKDIR /flask
RUN gulp build --production
EXPOSE 9000
ENTRYPOINT ["gunicorn", "-c", "gunicorn_config.py", "wsgi:app"]

In Swisscom's PaaS, HTTPS is terminated on the load balancer. Therefore you cannot use the trivial approach of simply redirecting HTTP to HTTPS, because all traffic that reaches your app is plain HTTP.
What you can do, though, is check the X-Forwarded-Proto HTTP header and return a redirect to HTTPS when that header states the request was made over plain HTTP.
X-Forwarded-Proto
X-Forwarded-Proto header gives the scheme of the HTTP request from the client. The scheme is HTTP if the client made an insecure request (on port 80) or HTTPS if the client made a secure request (on port 443). Developers can configure their apps to reject insecure requests by inspecting the HTTP headers of incoming traffic and rejecting traffic that includes X-Forwarded-Proto with the scheme of HTTP.
Source: https://docs.developer.swisscom.com/concepts/http-routing.html

Related

Python file in Docker container connecting to remote database through VPN

I have the following dockerfile:
# Base image
FROM osgeo/gdal:ubuntu-small-latest
# Working directory
RUN mkdir /code
# Pip and apt-get install packages
RUN apt-get update \
&& apt-get -y install libpq-dev gcc \
&& apt install -y python3-pip
RUN pip install psycopg2
# Copy main file and its binaries -> Run main file
COPY /last_upload.py code/last_upload.py
WORKDIR code
#CMD ["python", "last_upload.py"]
After I build and run the image, I run the following in the opened bash terminal:
python last_upload.py
This file won't run because at some point a connection is made to a remote Postgres database. The following error is shown:
psycopg2.OperationalError: connection to server at "XXX.XXX.X.X", port 5432 failed: Connection refused
Is the server running on that host and accepting TCP/IP connections?
Should you define a port or something for the Docker container?
This IP address (XXX.XXX.X.X) is reached through a VPN.
Anything would help.
PS: In Spyder (and not in Docker) everything works fine.
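One way to narrow this down is to test raw TCP reachability to the database from inside the container before involving psycopg2, since "Connection refused" is a network-level failure rather than a database or authentication error. A minimal sketch, with the VPN host and port as placeholders:

# reachability_check.py - minimal sketch; host and port are placeholders
import socket

try:
    with socket.create_connection(("XXX.XXX.X.X", 5432), timeout=5):
        print("TCP connection to the Postgres port succeeded")
except OSError as exc:
    print(f"TCP connection failed: {exc}")

If this fails inside the container but succeeds on the host, the container simply has no route to the VPN address.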

How do I connect to MongoDB, running in Github codespaces, using MongoDB Compass?

I'm trying out Github codespaces, specifically the "Node.js & Mongo DB" default settings.
The port is forwarded, and my objective is to connect with MongoDB Compass running on my local machine.
The address forwarded to 27017 is something like https://<long-address>.githubpreview.dev/
My attempt
I attempted to use the following connection string, but it did not work in MongoDB Compass. It failed with "No addresses found at host". I'm actually unsure how I would even determine whether MongoDB is running in the GitHub codespace.
mongodb+srv://root:example#<long-address>.githubpreview.dev/
.devcontainer files
docker-compose.yml
version: '3.8'
services:
  app:
    build:
      context: .
      dockerfile: Dockerfile
      args:
        # Update 'VARIANT' to pick an LTS version of Node.js: 16, 14, 12.
        # Append -bullseye or -buster to pin to an OS version.
        # Use -bullseye variants on local arm64/Apple Silicon.
        VARIANT: "16"
    volumes:
      - ..:/workspace:cached
    init: true
    # Overrides default command so things don't shut down after the process ends.
    command: sleep infinity
    # Runs app on the same network as the database container, allows "forwardPorts" in devcontainer.json function.
    network_mode: service:db
    # Uncomment the next line to use a non-root user for all processes.
    # user: node
    # Use "forwardPorts" in **devcontainer.json** to forward an app port locally.
    # (Adding the "ports" property to this file will not forward from a Codespace.)
  db:
    image: mongo:latest
    restart: unless-stopped
    volumes:
      - mongodb-data:/data/db
    # Uncomment to change startup options
    environment:
      MONGO_INITDB_ROOT_USERNAME: root
      MONGO_INITDB_ROOT_PASSWORD: example
      MONGO_INITDB_DATABASE: foo
    # Add "forwardPorts": ["27017"] to **devcontainer.json** to forward MongoDB locally.
    # (Adding the "ports" property to this file will not forward from a Codespace.)
volumes:
  mongodb-data: null
And a devcontainer.json file
// For format details, see https://aka.ms/devcontainer.json. For config options, see the README at:
// https://github.com/microsoft/vscode-dev-containers/tree/v0.203.0/containers/javascript-node-mongo
// Update the VARIANT arg in docker-compose.yml to pick a Node.js version
{
  "name": "Node.js & Mongo DB",
  "dockerComposeFile": "docker-compose.yml",
  "service": "app",
  "workspaceFolder": "/workspace",

  // Set *default* container specific settings.json values on container create.
  "settings": {},

  // Add the IDs of extensions you want installed when the container is created.
  "extensions": [
    "dbaeumer.vscode-eslint",
    "mongodb.mongodb-vscode"
  ],

  // Use 'forwardPorts' to make a list of ports inside the container available locally.
  "forwardPorts": [3000, 27017],

  // Use 'postCreateCommand' to run commands after the container is created.
  // "postCreateCommand": "yarn install",

  // Comment out connect as root instead. More info: https://aka.ms/vscode-remote/containers/non-root.
  "remoteUser": "node",

  "features": {
    "git": "os-provided"
  }
}
and finally a Docker file:
# [Choice] Node.js version (use -bullseye variants on local arm64/Apple Silicon): 16, 14, 12, 16-bullseye, 14-bullseye, 12-bullseye, 16-buster, 14-buster, 12-buster
ARG VARIANT=16-bullseye
FROM mcr.microsoft.com/vscode/devcontainers/javascript-node:0-${VARIANT}
# Install MongoDB command line tools if on buster and x86_64 (arm64 not supported)
ARG MONGO_TOOLS_VERSION=5.0
RUN . /etc/os-release \
&& if [ "${VERSION_CODENAME}" = "buster" ] && [ "$(dpkg --print-architecture)" = "amd64" ]; then \
curl -sSL "https://www.mongodb.org/static/pgp/server-${MONGO_TOOLS_VERSION}.asc" | gpg --dearmor > /usr/share/keyrings/mongodb-archive-keyring.gpg \
&& echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/mongodb-archive-keyring.gpg] http://repo.mongodb.org/apt/debian $(lsb_release -cs)/mongodb-org/${MONGO_TOOLS_VERSION} main" | tee /etc/apt/sources.list.d/mongodb-org-${MONGO_TOOLS_VERSION}.list \
&& apt-get update && export DEBIAN_FRONTEND=noninteractive \
&& apt-get install -y mongodb-database-tools mongodb-mongosh \
&& apt-get clean -y && rm -rf /var/lib/apt/lists/*; \
fi
# [Optional] Uncomment this section to install additional OS packages.
# RUN apt-get update && export DEBIAN_FRONTEND=noninteractive \
# && apt-get -y install --no-install-recommends <your-package-list-here>
# [Optional] Uncomment if you want to install an additional version of node using nvm
# ARG EXTRA_NODE_VERSION=10
# RUN su node -c "source /usr/local/share/nvm/nvm.sh && nvm install ${EXTRA_NODE_VERSION}"
# [Optional] Uncomment if you want to install more global node modules
# RUN su node -c "npm install -g <your-package-list-here>"
Update
I also posted here in the MongoDB community, but no help...
As @iravinandan said, you need to set up a tunnel.
Publishing a port alone won't help as all incoming requests are going through an http proxy.
If you dig CNAME <long-address>.githubpreview.dev you will see it's github-codespaces.app.online.visualstudio.com. You can put anything in the githubpreview.dev subdomain and it will still be resolved on the DNS level.
The proxy relies on HTTP Host header to route the request to correct upstream so it will work for HTTP protocols only.
To use any other protocol (MongoDb wire protocol in your case) you need to set up a TCP tunnel from codespaces to your machine.
Simplest set up - direct connection
At the time of writing the default Node + Mongo codespace uses Debian buster, so ssh port forwarding would be the obvious choice. In the codespace/VSCode terminal:
ssh -R 27017:localhost:27017 your_public_ip
Then in your Compass connect to
mongodb://localhost:27017
It will require your local machine to run sshd, of course, to have a public IP (or at least your router should forward incoming ssh traffic to your computer), and to allow it in the firewall. You can pick any port if 27017 is already in use locally.
It's the simplest setup, but it exposes your laptop to the internet, and it's just a matter of time before it gets compromised.
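To sanity-check both the tunnel and whether mongod is running at all, a quick ping from Python also works; a minimal sketch, assuming the root/example credentials from the docker-compose.yml above:

# check_mongo.py - minimal sketch; credentials are the ones from docker-compose.yml above
from pymongo import MongoClient

client = MongoClient("mongodb://root:example@localhost:27017/",
                     serverSelectionTimeoutMS=3000)
print(client.admin.command("ping"))  # prints {'ok': 1.0} if the server is reachable

The same connection string works from the codespace terminal (the app container shares the db container's network) and from your laptop once the tunnel is up.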
A bit more secure - jumpbox in the middle
To keep your local system behind the DMZ you can set up a jumpbox instead - a minimalistic, disposable Linux box somewhere on the internet, which will be used to chain two tunnels:
Remote port forwarding from codespace to the jumpbox
Local port forwarding from your laptop to the jumpbox
The same
mongodb://localhost:27017
in Mongo Compass.
The jumpbox has to expose sshd to the internet, but you can minimise the risks by hardening its security. After all, it doesn't do anything but proxy traffic. An EC2 nano instance will be more than enough; just keep in mind large data transfers might be expensive.
Hassle-free tunnel-as-a-service
Something you can try in 5 minutes. ngrok has been around for more than a decade and does exactly this - it sells tunnels (with a free tier sufficient for the demo).
In your codespace/VScode terminal:
npm i ngrok --save-dev
Installing it as a dev dependency avoids installing it globally every time while ensuring you don't ship it with production code.
You will need to register an account on ngrok (SSO with GitHub will do) to get an authentication token and pass it in the codespaces/VSCode terminal:
./node_modules/.bin/ngrok authtoken <the token>
Please remember it saves the token to the home directory, which will be wiped after a rebuild. Once authorised you can open the tunnel in the codespaces/VSCode terminal:
./node_modules/.bin/ngrok tcp 27017
Codespaces will automatically forward the port, and the terminal will show you some stats (mind the free tier limit) and the connection string.
The subdomain and port will change every time you open the tunnel. In this example the connection parameters for MongoDB Compass would be:
mongodb://0.tcp.ngrok.io:18862
with authorization parameters at the MongoDB level as needed.
Again, keep in mind you leave your MongoDB exposed to the internet (0.tcp.ngrok.io:18862), and Mongo accepts unauthenticated connections. I wouldn't leave it open for longer than necessary.
Use the built-in MongoDB client
The Node + Mongo environment comes with a handy VS Code MongoDB plugin pre-installed.
Of course it lacks many of Compass's analytical tools, but it works out of the box and is sufficient for development. Just open the plugin and connect to localhost.
Compass D.I.Y
The best option to get Compass functionality without compromising security, and to achieve the zero-config objective, is to host Compass yourself. It's an Electron application and works perfectly in a browser in MongoDB Atlas.
The source code is available at https://github.com/mongodb-js/compass.
With a bit of effort you can craft a Docker image to host Compass, include this image in the docker-compose file, and forward the port in devcontainer.json.
GitHub Codespaces will take care of authentication (keep the forwarded port private so only the owner of the space has access to it). All communication from your desktop to Compass will be over HTTPS, and Compass to MongoDB will stay local to the Docker network. Security-wise it will be on par with the VS Code MongoDB plugin.

Unable to access the Kubernetes Dashboard on Google Cloud Platform

This is my first time of setting up Kubernetes on Google Cloud Platform.
These are the steps I followed:
I created an account on Google Cloud Platform and spun up a new instance:
https://console.cloud.google.com/compute
Installed the gcloud SDK:
curl https://sdk.cloud.google.com | bash
Configured my Google Cloud Platform account information
gcloud auth login
Installed the latest version of Kubernetes
curl -sS https://get.k8s.io | bash
Launched a new cluster:
kubernetes/cluster/kube-up.sh
Confirmed that my configuration along with the cluster management credentials are stored in:
sudo nano /home/promisepreston/.kube/config
Installed kubectl on the server
curl -LO "https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl"
chmod +x ./kubectl
sudo mv ./kubectl /usr/local/bin/kubectl
Ran the command below, which outputs the URLs for the master and services, including DNS, UI, and monitoring:
kubectl cluster-info
Deployed the Dashboard UI by running the following command:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0/aio/deploy/recommended.yaml
And finally, I tried accessing the Dashboard by running the following command:
kubectl proxy
Which should make the Dashboard available at:
http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/
However, when I visit that URL I get the error:
Unable to connect
And even when I try the command below:
curl http://localhost:8001/api
I get the error below:
curl: (7) Failed to connect to localhost port 8001: Connection refused
I have looked through a lot of documentation and tried multiple solutions, but none seems to work for me.
Installed kubectl on the server
You need kubectl on the machine from which you're going to access your cluster. If you installed it on the server and ran kubectl proxy on the server, then you can access the proxy only from that server (depending on your network config).
If you do curl http://localhost:8001/api on the server - it will work.
So, you need to install kubectl on your machine, set up the k8s context for it, and then run kubectl proxy - after that, all requests to the proxy will be forwarded to your cluster.
Each request to the k8s API server needs to be authenticated; when you run kubectl proxy, the proxy will basically take care of authentication and the SSL/TLS related stuff.
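Once kubectl proxy is running on your machine, any plain HTTP client can talk to the API through it without dealing with certificates or tokens itself. A minimal sketch using Python's requests library, assuming the proxy's default address of 127.0.0.1:8001:

# list_namespaces.py - minimal sketch; assumes `kubectl proxy` is listening on 127.0.0.1:8001
import requests

resp = requests.get("http://127.0.0.1:8001/api/v1/namespaces", timeout=5)
resp.raise_for_status()
for ns in resp.json()["items"]:
    print(ns["metadata"]["name"])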
Read this for more info: Use an HTTP Proxy to Access the Kubernetes API
and The Kubernetes API
Configure Access to Multiple Clusters - may also be useful
Basically you need to do the following:
Note: these steps should be done directly on your local machine, not on the server or in a terminal connected to the server:
Install the gcloud SDK:
# Add the Cloud SDK distribution URI as a package source
echo "deb [signed-by=/usr/share/keyrings/cloud.google.gpg] http://packages.cloud.google.com/apt cloud-sdk main" | sudo tee -a /etc/apt/sources.list.d/google-cloud-sdk.list
# Import the Google Cloud public key
curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key --keyring /usr/share/keyrings/cloud.google.gpg add -
# Update the package list and install the Cloud SDK
sudo apt-get update && sudo apt-get install google-cloud-sdk
Configure your Google Cloud Platform account information:
gcloud auth login
Install Kubectl the Kubernetes command line tool:
curl -LO "https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl"
chmod +x ./kubectl
sudo mv ./kubectl /usr/local/bin/kubectl
Install Minikube, which will be used to run Kubernetes on your local machine:
curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube_latest_amd64.deb
sudo dpkg -i minikube_latest_amd64.deb
Start Minikube to pull the latest image of Kubernetes onto your local system and configure it with kubectl:
minikube start
If you already have kubectl installed, you can now use it to access your shiny new cluster:
kubectl get po -A
Minikube bundles the Kubernetes Dashboard, allowing you to get easily acclimated to your new environment:
minikube dashboard

No access to apache docker server

In my Docker image I need to run an Apache server to serve my website, a GlassFish server for the corresponding backend, and MongoDB, to which the backend connects.
My dockerfile looks like this:
FROM httpd:2.4
FROM glassfish:latest
FROM mongo:3.6
COPY /backend_war_exploded /usr/local/glassfish4/glassfish/domains/domain1/autodeploy/backend_war_exploded
COPY /backend_war_exploded /usr/local/glassfish4/bin/backend_war_exploded
COPY /dist /usr/local/apache2/htdocs/
After building the image I run and start it with:
docker run -dit --name application -p 80:80 -p 8080:8080 -p 27017:27017 applicationimg
docker start application
When I try to access it via http://localhost:80 it returns ERR_EMPTY_RESPONSE. Same for the backend, but I can access MongoDB on port 27017. When I comment out the FROM lines in my Dockerfile and run everything separately, it works fine. Does somebody see the mistake? Thanks in advance.
UPDATE
I followed your suggestion and rewrote the Dockerfile:
FROM ubuntu:16.04
COPY /dist /var/www/html/
COPY /backend_war_exploded /glassfish4/glassfish/domains/domain1/autodeploy/backend_war_exploded
RUN apt-get update && apt-get install -y apache2
RUN apt-get install -y openjdk-8-jdk
RUN apt-get install -y wget && apt-get install -y unzip
RUN wget http://download.java.net/glassfish/4.1.2/release/glassfish-4.1.2.zip
RUN unzip glassfish-4.1.2.zip
RUN cd /glassfish4/bin/ && ./asadmin start-domain domain1
EXPOSE 80
EXPOSE 8080
The web server starts up and is accessible via localhost:80, but the GlassFish server only starts while building the image and is no longer running when the container is started. When I access the container via docker exec I can navigate to GlassFish and start it manually. What is the issue?
You need to depend on one FROM only and add the other tools through RUN steps, or use a single image for each application and connect them together through a Docker network or by creating a docker-compose.yml, which will be easier; you can check it through here. Using multiple FROM statements does not mean that you are going to have all 3 in 1.
For more information about how to create a Dockerfile and how to deploy your application with multiple containers, you can check the Get Started tutorial from Docker.
In order to run multiple services inside one container you need to use a service manager like Supervisor. Check the following link for more details: Multi-Service Container

CouchDB won't start badmatch error bad_return CentOS7

I've been trying to install CouchDB on a fresh CentOS 7 DigitalOcean droplet. I get no errors when installing with the following steps:
yum -y update
yum -y groupinstall "Development Tools"
yum -y install libicu-devel curl-devel ncurses-devel libtool libxslt fop java-1.6.0-openjdk java-1.6.0-openjdk-devel unixODBC unixODBC-devel openssl-devel
Step 2 - Installing Erlang
wget http://www.erlang.org/download/otp_src_R16B02.tar.gz
tar -zxvf otp_src_R16B02.tar.gz
cd otp_src_R16B02
./configure && make
make install
Step 3 - Installing the SpiderMonkey JS Engine
wget http://ftp.mozilla.org/pub/mozilla.org/js/js185-1.0.0.tar.gz
tar -zxvf js185-1.0.0.tar.gz
cd js-1.8.5/js/src
./configure && make
make install
Step 4 - Installing CouchDB
wget http://mirror.olnevhost.net/pub/apache/couchdb/source/1.6.1/apache-couchdb-1.6.1.tar.gz
tar -xvf apache-couchdb-1.6.1.tar.gz
cd apache-couchdb-1.6.1
./configure && make
make install
Step 5 - Setting up CouchDB
adduser --no-create-home couchdb
chown -R couchdb:couchdb /usr/local/var/lib/couchdb /usr/local/var/log/couchdb /usr/local/var/run/couchdb
ln -sf /usr/local/etc/rc.d/couchdb /etc/init.d/couchdb
chkconfig --add couchdb
chkconfig couchdb on
vi /usr/local/etc/couchdb/local.ini
Should you need to access couchdb from the web, in the [httpd] section, look for a setting called bind_address and change it to 0.0.0.0 - this will make CouchDB bind all available addresses.
[httpd]
port = 5984
bind_address = 0.0.0.0
service couchdb start
/etc/init.d/couchdb status (this has no output)
And I get the following when I try to run:
/usr/local/bin/couchdb
Apache CouchDB 1.6.1 (LogLevel=info) is starting.
{"init terminating in do_boot",{{badmatch,{error,{bad_return,{{couch_app,start,[normal,["/usr/local/etc/couchdb/default.ini","/usr/local/etc/couchdb/local.ini"]]},{'EXIT',{{badmatch,{error,{shutdown,{failed_to_start_child,couch_secondary_services,{shutdown,{",[]},{couch_uuids,new_prefix,0,[{file,"couch_uuids.erl"},{line,84}]},{couch_uuids,state,0,[{file,"couch_uuids.erl"},{line,100}]},{couch_uuids,init,1,[{file,"couch_uuids.erl"},{line,50}]},{gen_server,init_it,6,[{file,"gen_server.erl"},{line,304}]},{proc_lib,init_p_do_apply,3,[{file,"proc_lib.erl"},{line,239}]}]}}}}}}},[{couch_server_sup,start_server,1,[{file,"couch_server_sup.erl"},{line,98}]},{application_master,start_it_old,4,[{file,"application_master.erl"},{line,269}]}]}}}}}},[{couch,start,0,[{file,"couch.erl"},{line,18}]},{init,start_it,1,[]},{init,start_em,1,[]}]}}
Crash dump was written to: erl_crash.dump
init terminating in do_boot ()
Does anyone know how to get past this?
Note: I get "no such file or directory" when trying the answer from here.
Can you check if erlang-crypto is a separate module that is maybe not installed?
CouchDB (imho rightfully) doesn't account for distributions splitting up the monolithically released Erlang installation.
Your error is raised in the UUID module and the only thing I can think of immediately is the crypto dependency that might be missing.