How can I use 'mongo-express' on Cloud9? - mongodb

I installed mongo-express and it looks OK,
but I can't reach port 8081 from the outside world...
Maybe someone can advise on another DB visualisation service I can use on Cloud9?

Since Cloud9 workspaces only expose port 8080, you can modify the mongo-express config (https://github.com/andzdroid/mongo-express/blob/master/config.default.js) to set the port to 8080 within the following section:
site: {
  // baseUrl: the URL that mongo express will be located at
  // Remember to add the forward slash at the end!
  baseUrl: '/',
  port: 8081, // <<--- change to 8080
  cookieSecret: 'cookiesecret',
  sessionSecret: 'sessionsecret',
  cookieKeyName: 'mongo-express'
},
You should find the config.default.js within your workspace. Just copy/rename it to config.js and change the port from 8081 to 8080 and you should be all set.
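If you prefer the terminal, the copy and port change can be scripted (a sketch; it assumes you are in the mongo-express directory and that the config contains the `port: 8081` line shown above):

```shell
# copy the default config and switch mongo-express to Cloud9's port 8080
cp config.default.js config.js
sed -i "s/port: 8081/port: 8080/" config.js
```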
Hope this helps.

I recently tried to set up mongo-express on Cloud9, and the setup has changed from the accepted answer. Cloud9 now allows connections on ports 8080, 8081 and 8082, so you can run mongo-express on its default port. Here's what worked for me:
Start a new workspace with Node
Install Express - npm install express --save
Install Mongo - sudo apt-get install -y mongodb-org then mongod --bind_ip=$IP --nojournal. These steps are from the Cloud9 docs. At this point Mongo is running on your server.
Install Mongo-Express - npm install mongo-express --save
Navigate to the mongo-express directory - cd node_modules/mongo-express.
Copy the config.default.js file - cp config.default.js config.js
Open the config.js file to edit - nano config.js (using nano, but feel free to use another editor)
Scroll down and edit the host property in the site object to be 0.0.0.0. That line will now look like: host: process.env.VCAP_APP_HOST || '0.0.0.0',
Save and exit the config.js file
While still in the node_modules/mongo-express directory run node app.js.
At this point the Mongo Express app is running and can be accessed at http://your-app-domain.c9users.io:8081. If you're using the default user you can login with admin:pass.
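For reference, the steps above can be collected into one shell session (a sketch of the commands already listed; the host edit is done with sed here instead of nano, assuming the default config line reads || 'localhost'):

```shell
# install and start MongoDB (from the Cloud9 docs)
sudo apt-get install -y mongodb-org
mongod --bind_ip=$IP --nojournal &
# install express and mongo-express
npm install express mongo-express --save
# prepare the mongo-express config
cd node_modules/mongo-express
cp config.default.js config.js
sed -i "s/'localhost'/'0.0.0.0'/" config.js   # host must be 0.0.0.0
# run the app
node app.js
```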

Related

How do I connect to MongoDB, running in Github codespaces, using MongoDB Compass?

I'm trying out Github codespaces, specifically the "Node.js & Mongo DB" default settings.
The port is forwarded, and my objective is to connect with MongoDB Compass running on my local machine.
The address forwarded to 27017 is something like https://<long-address>.githubpreview.dev/
My attempt
I attempted to use the following connection string, but it did not work in MongoDB Compass. It failed with No addresses found at host. I'm also unsure how to even tell whether MongoDB is actually running in the Github codespace.
mongodb+srv://root:example#<long-address>.githubpreview.dev/
.devcontainer files
docker-compose.yml
version: '3.8'
services:
  app:
    build:
      context: .
      dockerfile: Dockerfile
      args:
        # Update 'VARIANT' to pick an LTS version of Node.js: 16, 14, 12.
        # Append -bullseye or -buster to pin to an OS version.
        # Use -bullseye variants on local arm64/Apple Silicon.
        VARIANT: "16"
    volumes:
      - ..:/workspace:cached
    init: true
    # Overrides default command so things don't shut down after the process ends.
    command: sleep infinity
    # Runs app on the same network as the database container, allows "forwardPorts" in devcontainer.json function.
    network_mode: service:db
    # Uncomment the next line to use a non-root user for all processes.
    # user: node
    # Use "forwardPorts" in **devcontainer.json** to forward an app port locally.
    # (Adding the "ports" property to this file will not forward from a Codespace.)
  db:
    image: mongo:latest
    restart: unless-stopped
    volumes:
      - mongodb-data:/data/db
    # Uncomment to change startup options
    environment:
      MONGO_INITDB_ROOT_USERNAME: root
      MONGO_INITDB_ROOT_PASSWORD: example
      MONGO_INITDB_DATABASE: foo
    # Add "forwardPorts": ["27017"] to **devcontainer.json** to forward MongoDB locally.
    # (Adding the "ports" property to this file will not forward from a Codespace.)
volumes:
  mongodb-data: null
And a devcontainer.json file
// For format details, see https://aka.ms/devcontainer.json. For config options, see the README at:
// https://github.com/microsoft/vscode-dev-containers/tree/v0.203.0/containers/javascript-node-mongo
// Update the VARIANT arg in docker-compose.yml to pick a Node.js version
{
  "name": "Node.js & Mongo DB",
  "dockerComposeFile": "docker-compose.yml",
  "service": "app",
  "workspaceFolder": "/workspace",
  // Set *default* container specific settings.json values on container create.
  "settings": {},
  // Add the IDs of extensions you want installed when the container is created.
  "extensions": [
    "dbaeumer.vscode-eslint",
    "mongodb.mongodb-vscode"
  ],
  // Use 'forwardPorts' to make a list of ports inside the container available locally.
  "forwardPorts": [3000, 27017],
  // Use 'postCreateCommand' to run commands after the container is created.
  // "postCreateCommand": "yarn install",
  // Comment out to connect as root instead. More info: https://aka.ms/vscode-remote/containers/non-root.
  "remoteUser": "node",
  "features": {
    "git": "os-provided"
  }
}
and finally a Docker file:
# [Choice] Node.js version (use -bullseye variants on local arm64/Apple Silicon): 16, 14, 12, 16-bullseye, 14-bullseye, 12-bullseye, 16-buster, 14-buster, 12-buster
ARG VARIANT=16-bullseye
FROM mcr.microsoft.com/vscode/devcontainers/javascript-node:0-${VARIANT}
# Install MongoDB command line tools if on buster and x86_64 (arm64 not supported)
ARG MONGO_TOOLS_VERSION=5.0
RUN . /etc/os-release \
&& if [ "${VERSION_CODENAME}" = "buster" ] && [ "$(dpkg --print-architecture)" = "amd64" ]; then \
curl -sSL "https://www.mongodb.org/static/pgp/server-${MONGO_TOOLS_VERSION}.asc" | gpg --dearmor > /usr/share/keyrings/mongodb-archive-keyring.gpg \
&& echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/mongodb-archive-keyring.gpg] http://repo.mongodb.org/apt/debian $(lsb_release -cs)/mongodb-org/${MONGO_TOOLS_VERSION} main" | tee /etc/apt/sources.list.d/mongodb-org-${MONGO_TOOLS_VERSION}.list \
&& apt-get update && export DEBIAN_FRONTEND=noninteractive \
&& apt-get install -y mongodb-database-tools mongodb-mongosh \
&& apt-get clean -y && rm -rf /var/lib/apt/lists/*; \
fi
# [Optional] Uncomment this section to install additional OS packages.
# RUN apt-get update && export DEBIAN_FRONTEND=noninteractive \
# && apt-get -y install --no-install-recommends <your-package-list-here>
# [Optional] Uncomment if you want to install an additional version of node using nvm
# ARG EXTRA_NODE_VERSION=10
# RUN su node -c "source /usr/local/share/nvm/nvm.sh && nvm install ${EXTRA_NODE_VERSION}"
# [Optional] Uncomment if you want to install more global node modules
# RUN su node -c "npm install -g <your-package-list-here>"
Update
I also posted in the MongoDB community forum, but got no help...
As @iravinandan said, you need to set up a tunnel.
Publishing a port alone won't help, as all incoming requests go through an HTTP proxy.
If you dig CNAME <long-address>.githubpreview.dev you will see it's github-codespaces.app.online.visualstudio.com. You can put anything in the githubpreview.dev subdomain and it will still resolve at the DNS level.
The proxy relies on the HTTP Host header to route the request to the correct upstream, so it works for HTTP protocols only.
To use any other protocol (the MongoDB wire protocol in your case) you need to set up a TCP tunnel from the codespace to your machine.
Simplest set up - direct connection
At the time of writing the default Node + Mongo codespace uses Debian buster, so ssh port forwarding would be the obvious choice. In the codespace/VSCode terminal:
ssh -R 27017:localhost:27017 your_public_ip
Then in your Compass connect to
mongodb://localhost:27017
It will of course require your local machine to run sshd, have a public IP (or at least have your router forward incoming ssh traffic to your computer), and allow it in the firewall. You can pick any port if 27017 is already in use locally.
It's the simplest setup, but it exposes your laptop to the internet, and it's only a matter of time before it gets attacked.
A bit more secure - jumpbox in the middle
To keep your local system behind a DMZ you can set up a jumpbox instead: a minimalistic, disposable Linux box somewhere on the internet, used to chain two tunnels:
Remote port forwarding from codespace to the jumpbox
Local port forwarding from your laptop to the jumpbox
The same
mongodb://localhost:27017
in Mongo Compass.
The jumpbox has to expose sshd to the internet, but you can minimise the risk by hardening its security; after all, it does nothing but proxy traffic. An EC2 nano instance will be more than enough, just keep in mind that large data transfers might be expensive.
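The two tunnels can be chained like this (a sketch; user and jumpbox.example.com are placeholders for your own jumpbox credentials):

```shell
# 1. In the codespace terminal: forward the codespace's mongodb
#    port 27017 to port 27017 on the jumpbox
ssh -N -R 27017:localhost:27017 user@jumpbox.example.com
# 2. On your laptop: forward local port 27017 to the jumpbox's 27017
ssh -N -L 27017:localhost:27017 user@jumpbox.example.com
```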
Hassle-free tunnel-as-a-service
Something you can try in 5 minutes: ngrok has been around for more than a decade and does exactly this - it sells tunnels (with a free tier sufficient for the demo).
In your codespace/VScode terminal:
npm i ngrok --save-dev
Installing it as a dev dependency avoids reinstalling it every time while ensuring it doesn't ship with production code.
You will need to register an account on ngrok (SSO with github will do) to get an authentication code and pass it to the codespaces/VSCode terminal:
./node_modules/.bin/ngrok authtoken <the token>
Please remember it saves the token to the home directory, which will be wiped after a rebuild. Once authorised you can open the tunnel in the codespaces/VSCode terminal:
./node_modules/.bin/ngrok tcp 27017
Codespaces will automatically forward the port, and the ngrok terminal output will show some stats (mind the free tier limit) and the connection string.
The subdomain and port change every time you open the tunnel.
For example, the connection parameters for MongoDB Compass might be:
mongodb://0.tcp.ngrok.io:18862
with authorization parameters on mongodb level as needed.
Again, keep in mind that you leave your MongoDB exposed to the internet (0.tcp.ngrok.io:18862 in this example) and that Mongo accepts unauthenticated connections.
I wouldn't leave it open for longer than necessary.
Use built-in mongodb client
The Node + Mongo environment comes with the handy MongoDB VSCode plugin pre-installed.
Of course it lacks many of Compass's analytical tools, but it works out of the box and is sufficient for development. Just open the plugin and connect to localhost.
Compass D.I.Y
The best option for getting Compass functionality without compromising security, while keeping the zero-config objective, is to host Compass yourself. It's an Electron application, and MongoDB Atlas demonstrates it can work perfectly in a browser.
The source code is available at https://github.com/mongodb-js/compass.
With a bit of effort you can craft a Docker image to host Compass, include that image in the docker-compose file, and forward the port in devcontainer.json.
Github Codespaces will take care of authentication (keep the forwarded port private so only the owner of the codespace has access to it). All communication from your desktop to Compass will be over HTTPS, and Compass-to-MongoDB traffic stays local to the Docker network. Security-wise it will be on par with the VSCode MongoDB plugin.
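As a sketch, the docker-compose addition could look something like this (the ./compass build context and the port are assumptions - you would craft the Compass-hosting image yourself, as described above):

```yaml
  compass:
    build: ./compass          # hypothetical Dockerfile serving Compass over HTTP
    network_mode: service:db  # join the db network, like the app service above
    # then add the chosen port (e.g. 8080) to "forwardPorts" in devcontainer.json
    # and keep that forwarded port private in the Codespaces ports panel
```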

How to enable HTTPS in Superset when running locally

In order to integrate Superset with my application properly, I need to enable HTTPS. During development I'm running it with docker-compose. I couldn't find any useful information on how to do that. The version I'm running is v1.1.0
I'd be very glad if you could help me. Thank you
Clone superset repo
git clone https://github.com/apache/superset.git
Add nginx-proxy as a service in docker-compose.yaml:
nginx-proxy:
  image: jwilder/nginx-proxy
  ports:
    - "443:443"
  volumes:
    - .certs:/etc/nginx/certs
    - /var/run/docker.sock:/tmp/docker.sock:ro
  environment:
    DEFAULT_HOST: superset.localhost
Add VIRTUAL_HOST and WEB_PORTS to the superset service's environment in docker-compose.yaml:
environment:
  CYPRESS_CONFIG: "${CYPRESS_CONFIG}"
  VIRTUAL_HOST: superset.localhost
  WEB_PORTS: 8088
Let's create a self-signed certificate for HTTPS.
mkdir .certs && cd .certs
wget https://gist.githubusercontent.com/OnnoGabriel/f717192ed92bf55725337358f4af5ab2/raw/9b669462299c9981bd7864901f09fc2885d9e780/create_certificates.sh
sudo chmod 700 ./create_certificates.sh
sudo ./create_certificates.sh
When prompted, enter the domain name: superset.localhost
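If you'd rather not fetch the gist, a plain self-signed certificate can also be generated directly with openssl (a minimal sketch; unlike the gist script, it doesn't create a separate root CA to import):

```shell
# one self-signed cert/key pair for superset.localhost, valid for a year
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout superset.localhost.key -out superset.localhost.crt \
  -subj "/CN=superset.localhost"
```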
To make a browser trust our root CA, we have to import it:
Firefox --> https://support.securly.com/hc/en-us/articles/360008547993-How-to-Install-Securly-s-SSL-Certificate-in-Firefox-on-Windows
Chrome --> https://support.securly.com/hc/en-us/articles/206081828-How-to-manually-install-the-Securly-SSL-certificate-in-Chrome
Take the rootCA.crt file and add it to your browsers following the instructions above.
Now start your Superset docker instance:
docker-compose -f docker-compose.yml up -d
Now you can access Superset at https://superset.localhost.
For more details check out this article:
https://betterprogramming.pub/docker-powered-web-development-utilizing-https-and-local-domain-names-a57f129e1c4d

Rundeck behind web proxy

I am setting up Rundeck internally for myself to test.
I am currently attempting to access the official repositories for plugins; however, I know for a fact the server has no internet connection.
I see no instructions in the documentation on how to apply a web proxy to the Rundeck application.
Has anyone done this before?
EDIT
The Server is a RHEL8 machine.
I am not referring to using a reverse proxy.
** FOUND ANSWER **
After a couple of days of searching:
If you are using a server that is disconnected from the internet
Have an internal proxy to route external traffic
Using the RHEL package of rundeck
Solution
edit your /etc/sysconfig/rundeckd file
paste custom RDECK_JVM_SETTINGS at the end of the file
RDECK_JVM_SETTINGS="${RDECK_JVM_SETTINGS:- -Xmx1024m -Xms256m -XX:MaxMetaspaceSize=256m -server -Dhttp.proxySet=true -Dhttp.proxyHost=server -Dhttp.proxyPort=8080 -Dhttps.proxySet=true -Dhttps.proxyHost=server -Dhttps.proxyPort=80 -Dhttp.nonProxyHosts=*.place.com }"
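Before restarting rundeckd you can check that the proxy actually routes outbound traffic, by issuing a request through it from the Rundeck host (a sketch; server:8080 is the hypothetical proxy host/port from the settings above):

```shell
# should print an HTTP status line if the proxy passes traffic through
curl -x http://server:8080 -I https://www.rundeck.com
```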
You can test it quickly using Docker Compose.
The idea is to put the NGINX container in front of the Rundeck container.
/your/path/docker-compose.yml content:
version: "3.7"
services:
  rundeck:
    build:
      context: .
      args:
        IMAGE: ${RUNDECK_IMAGE:-rundeck/rundeck:3.3.10}
    container_name: rundeck-nginx
    ports:
      - 4440:4440
    environment:
      RUNDECK_GRAILS_URL: http://localhost
      RUNDECK_SERVER_FORWARDED: "true"
  nginx:
    image: nginx:alpine
    volumes:
      - ./config/nginx.conf:/etc/nginx/conf.d/default.conf:ro
    ports:
      - 80:80
/your/path/Dockerfile content:
ARG IMAGE
FROM ${IMAGE}
If you check the volumes block, you'll see you need a specific NGINX configuration at the ./config path:
/your/path/config/nginx.conf content:
server {
    listen 80 default_server;
    server_name rundeck-cl;
    location / {
        # get the rundeck internal address/port
        proxy_pass http://rundeck:4440;
    }
}
To build:
docker-compose build
To run:
docker-compose up
To see your Rundeck instance:
Open your browser and go to localhost; you will see Rundeck behind the NGINX proxy server.
Edit: here is an example using NGINX on CentOS/RHEL.
1- Install Rundeck via YUM on Rundeck Server.
2- Install NGINX via YUM, just do sudo yum -y install nginx (if you like, you can do this in the same Rundeck server or just in another one).
3- NGINX side. Go to /etc/nginx/nginx.conf and add the following block inside server section:
location /rundeck {
    proxy_pass http://your-rundeck-host:4440;
}
Save the file.
4- RUNDECK side. Create a new file at /etc/sysconfig path named rundeckd with the following content:
RDECK_JVM_OPTS="-Dserver.web.context=/rundeck"
Give permissions to rundeck user: chown rundeck:rundeck /etc/sysconfig/rundeckd and save it.
5- RUNDECK side. Open the /etc/rundeck/rundeck-config.properties file and check the grails.serverURL parameter; you need to set the external IP or server DNS name and the context defined on the NGINX side.
grails.serverURL=http://your-nginx-ip-or-dns-name/rundeck
Save it.
6- NGINX side. Start the NGINX service: systemctl start nginx (later if you like to enable on every boot, just do systemctl enable nginx).
7- RUNDECK side. Start the Rundeck service, systemctl start rundeckd (this takes some seconds, later you can enable the service to start on every server boot, just do: systemctl enable rundeckd).
Now rundeck is behind the NGINX proxy server, just open your browser and type: http://your-nginx-ip-or-dns-name/rundeck.
Let's assume your Rundeck is running on an internal server at "internal-rundeck.com:4440" and you want to expose it at "external-rundeck.com/rundeck" through NGINX. Follow the steps below.
step 1:
In Rundeck, set the following as environment variables in your deployment file:
RUNDECK_GRAILS_URL="external-rundeck.com/rundeck"
RUNDECK_SERVER_CONTEXTPATH="/rundeck"
RUNDECK_SERVER_FORWARDED=true
step 2:
In NGINX, add this to your config file:
location /rundeck/ {
    proxy_pass http://internal-rundeck.com:4440/rundeck/;
}

docker-compose mongodb phoenix, [error] failed to connect: ** (Mongo.Error) tcp connect: connection refused - :econnrefused

Hi I am getting this error when I try to run docker-compose up on my yml file.
This is my docker-compose.yml file
version: '3.6'
services:
  phoenix:
    # tell docker-compose which Dockerfile it needs to build
    build:
      context: .
      dockerfile: Dockerfile.development
    # map the port of phoenix to the local dev port
    ports:
      - 4000:4000
    # mount the code folder inside the running container for easy development
    volumes:
      - . .
    # make sure we start mongodb when we start this service
    depends_on:
      - db
  db:
    image: mongo:latest
    volumes:
      - ./data/db:/data/db
    ports:
      - 27017:27017
This is my Dockerfile:
# Elixir base image to start with
FROM elixir:1.6
# install hex package manager
RUN mix local.hex --force
RUN mix local.rebar --force
# install the latest phoenix
RUN mix archive.install https://github.com/phoenixframework/archives/raw/master/phx_new.ez --force
# create app folder
COPY . .
WORKDIR ./
# install dependencies
RUN mix deps.get
# run phoenix in *dev* mode on port 4000
CMD mix phx.server
Is this a problem with my dev.exs setup or something to do with the compatibility of docker and phoenix / docker and mongodb?
https://docs.docker.com/compose/compose-file/#depends_on explicitly says:
There are several things to be aware of when using depends_on:
depends_on does not wait for db and redis to be “ready” before starting web - only until they have been started. If you need to wait for a service to be ready,
and advises you to implement the logic to wait for mongodb to spinup and be ready to accept connections by yourself: https://docs.docker.com/compose/startup-order/
In your case it could be something like:
CMD wait-for-db.sh && mix phx.server
where wait-for-db.sh can be as simple as
#!/bin/bash
until nc -z db 27017; do echo "waiting for db"; sleep 1; done
(note the db hostname: inside the phoenix container, MongoDB is reachable as db rather than localhost), for which you need nc and wait-for-db.sh installed in the container.
There are plenty of other alternative tools to test if db container is listening on the target port.
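A slightly more defensive variant adds a retry limit, so the container fails fast instead of hanging forever (a sketch; db and 27017 match the compose file above, and nc must be available in the image):

```shell
#!/bin/bash
# wait_for_port HOST PORT [TRIES]: poll until the port accepts TCP
# connections, or give up after TRIES attempts (default 30).
wait_for_port() {
  local host="${1:-db}" port="${2:-27017}" tries="${3:-30}"
  until nc -z "$host" "$port" 2>/dev/null; do
    tries=$((tries - 1))
    if [ "$tries" -le 0 ]; then
      echo "gave up waiting for $host:$port" >&2
      return 1
    fi
    echo "waiting for $host:$port"
    sleep 1
  done
}
```

With this sourced in wait-for-db.sh, the CMD becomes wait_for_port db 27017 && mix phx.server.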
UPDATE:
The network connection between containers is described at https://docs.docker.com/compose/networking/:
When you run docker-compose up, the following happens:
A network called myapp_default is created, where myapp is name of the directory where docker-compose.yml is stored.
A container is created using phoenix’s configuration. It joins the network myapp_default under the name phoenix.
A container is created using db’s configuration. It joins the network myapp_default under the name db.
Each container can now look up the hostname phoenix or db and get back the appropriate container’s IP address. For example, phoenix’s application code could connect to the URL mongodb://db:27017 and start using the Mongodb database.
It was an issue with my dev environment not connecting to the MongoDB URL specified in docker-compose. Instead of localhost, it should be db, as named in my docker-compose.yml file.
For clarity to dev env:
modify config/dev.exs to (replace with correct vars)
username: System.get_env("PGUSER"),
password: System.get_env("PGPASSWORD"),
database: System.get_env("PGDATABASE"),
hostname: System.get_env("PGHOST"),
port: System.get_env("PGPORT"),
create a dot env file on the root folder of your project (replace with relevant vars to the db service used)
PGUSER=some_user
PGPASSWORD=some_password
PGDATABASE=some_database
PGPORT=5432
PGHOST=db
Note that we have added the port.
The host can be localhost when running directly on your machine, but it should be the database service name (db here, or whatever your compose file names it) or a full URL when working with docker-compose, a remote server, or k8s.
I will update the answer with the prod config...

Docker Compose port issue. Cannot launch docker project on localhost

I am on a mac (El Capitan, stable, 10.11.6) with Docker Desktop for Mac stable installed.
I am running a simple javascript app on the official node image. Here's what the Dockerfile looks like:
FROM node
WORKDIR /usr/local/src
And here's the docker-compose.yml:
version: '2'
services:
web:
container_name: myproject_dev
build: .
command: npm run development
ports:
- "1234:8000"
- "1235:8080"
- "80:80"
volumes:
- ./my-project:/usr/local/src
Running docker-compose up starts everything normally:
myproject_dev | http://localhost:8080/webpack-dev-server/
myproject_dev | webpack result is served from /assets/
myproject_dev | content is served from /usr/local/src
And docker ps shows that the ports are mapped:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
820694f618b4 myproject_web "npm run development" 20 minutes ago Up 20 minutes 0.0.0.0:80->80/tcp, 0.0.0.0:1234->8000/tcp, 0.0.0.0:1235->8080/tcp myproject_dev
But I am unable to see the project page on the browser (using localhost:1234). Works fine when I run the project outside the docker. So, an issue with the project is ruled out.
Tried the following:
use a different node docker
switch between docker beta and stable versions
stop all host apache/nginx services
But no luck :( What am I missing here?
The service you're running is only listening on the container's localhost interface, so nothing outside the container can access it. It needs to listen on 0.0.0.0.
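You can check which interface the dev server binds to from the host (a sketch; myproject_dev is the container name above, and ss/netstat availability inside the node image is an assumption):

```shell
# 127.0.0.1:8080 means container-local only; 0.0.0.0:8080 or *:8080
# means it's reachable through the published ports
docker exec myproject_dev sh -c "ss -tln || netstat -tln"
```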
Ping doesn't work that way; it queries a host, not a port on that host. To test your ability to connect to a given port on a given host, you probably want something like nc -vz <host> <port>.