How do I connect to MongoDB, running in GitHub Codespaces, using MongoDB Compass? - mongodb

I'm trying out GitHub Codespaces, specifically the "Node.js & Mongo DB" default configuration.
The port is forwarded, and my objective is to connect with MongoDB Compass running on my local machine.
The address forwarded to 27017 is something like https://<long-address>.githubpreview.dev/
My attempt
I attempted to use the following connection string, but it did not work in MongoDB Compass. It failed with No addresses found at host. I'm also unsure how I would even determine whether MongoDB is actually running in the GitHub codespace.
mongodb+srv://root:example@<long-address>.githubpreview.dev/
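(Note: two things make a string like this a non-starter regardless of the proxy: mongodb+srv:// requires DNS SRV records, which a githubpreview.dev hostname does not publish, and any special characters in the credentials must be percent-encoded. A quick sketch of building a plain mongodb:// string - buildMongoUri is a hypothetical helper, and the password is made up for illustration:)

```javascript
// Sketch: building a plain (non-SRV) connection string with
// percent-encoded credentials. buildMongoUri is a made-up helper,
// not part of any MongoDB library.
function buildMongoUri({ user, password, host, port = 27017, authSource = 'admin' }) {
  const u = encodeURIComponent(user);
  const p = encodeURIComponent(password);
  return `mongodb://${u}:${p}@${host}:${port}/?authSource=${authSource}`;
}

console.log(buildMongoUri({
  user: 'root',
  password: 'p@ss#word', // hypothetical password; @ and # must be encoded
  host: 'localhost',
}));
// mongodb://root:p%40ss%23word@localhost:27017/?authSource=admin
```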
.devcontainer files
docker-compose.yml
version: '3.8'
services:
  app:
    build:
      context: .
      dockerfile: Dockerfile
      args:
        # Update 'VARIANT' to pick an LTS version of Node.js: 16, 14, 12.
        # Append -bullseye or -buster to pin to an OS version.
        # Use -bullseye variants on local arm64/Apple Silicon.
        VARIANT: "16"
    volumes:
      - ..:/workspace:cached
    init: true
    # Overrides default command so things don't shut down after the process ends.
    command: sleep infinity
    # Runs app on the same network as the database container, allows "forwardPorts" in devcontainer.json function.
    network_mode: service:db
    # Uncomment the next line to use a non-root user for all processes.
    # user: node
    # Use "forwardPorts" in **devcontainer.json** to forward an app port locally.
    # (Adding the "ports" property to this file will not forward from a Codespace.)

  db:
    image: mongo:latest
    restart: unless-stopped
    volumes:
      - mongodb-data:/data/db
    # Uncomment to change startup options
    environment:
      MONGO_INITDB_ROOT_USERNAME: root
      MONGO_INITDB_ROOT_PASSWORD: example
      MONGO_INITDB_DATABASE: foo
    # Add "forwardPorts": ["27017"] to **devcontainer.json** to forward MongoDB locally.
    # (Adding the "ports" property to this file will not forward from a Codespace.)

volumes:
  mongodb-data: null
And a devcontainer.json file
// For format details, see https://aka.ms/devcontainer.json. For config options, see the README at:
// https://github.com/microsoft/vscode-dev-containers/tree/v0.203.0/containers/javascript-node-mongo
// Update the VARIANT arg in docker-compose.yml to pick a Node.js version
{
    "name": "Node.js & Mongo DB",
    "dockerComposeFile": "docker-compose.yml",
    "service": "app",
    "workspaceFolder": "/workspace",

    // Set *default* container specific settings.json values on container create.
    "settings": {},

    // Add the IDs of extensions you want installed when the container is created.
    "extensions": [
        "dbaeumer.vscode-eslint",
        "mongodb.mongodb-vscode"
    ],

    // Use 'forwardPorts' to make a list of ports inside the container available locally.
    "forwardPorts": [3000, 27017],

    // Use 'postCreateCommand' to run commands after the container is created.
    // "postCreateCommand": "yarn install",

    // Comment out connect as root instead. More info: https://aka.ms/vscode-remote/containers/non-root.
    "remoteUser": "node",
    "features": {
        "git": "os-provided"
    }
}
And finally a Dockerfile:
# [Choice] Node.js version (use -bullseye variants on local arm64/Apple Silicon): 16, 14, 12, 16-bullseye, 14-bullseye, 12-bullseye, 16-buster, 14-buster, 12-buster
ARG VARIANT=16-bullseye
FROM mcr.microsoft.com/vscode/devcontainers/javascript-node:0-${VARIANT}
# Install MongoDB command line tools if on buster and x86_64 (arm64 not supported)
ARG MONGO_TOOLS_VERSION=5.0
RUN . /etc/os-release \
&& if [ "${VERSION_CODENAME}" = "buster" ] && [ "$(dpkg --print-architecture)" = "amd64" ]; then \
curl -sSL "https://www.mongodb.org/static/pgp/server-${MONGO_TOOLS_VERSION}.asc" | gpg --dearmor > /usr/share/keyrings/mongodb-archive-keyring.gpg \
&& echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/mongodb-archive-keyring.gpg] http://repo.mongodb.org/apt/debian $(lsb_release -cs)/mongodb-org/${MONGO_TOOLS_VERSION} main" | tee /etc/apt/sources.list.d/mongodb-org-${MONGO_TOOLS_VERSION}.list \
&& apt-get update && export DEBIAN_FRONTEND=noninteractive \
&& apt-get install -y mongodb-database-tools mongodb-mongosh \
&& apt-get clean -y && rm -rf /var/lib/apt/lists/*; \
fi
# [Optional] Uncomment this section to install additional OS packages.
# RUN apt-get update && export DEBIAN_FRONTEND=noninteractive \
# && apt-get -y install --no-install-recommends <your-package-list-here>
# [Optional] Uncomment if you want to install an additional version of node using nvm
# ARG EXTRA_NODE_VERSION=10
# RUN su node -c "source /usr/local/share/nvm/nvm.sh && nvm install ${EXTRA_NODE_VERSION}"
# [Optional] Uncomment if you want to install more global node modules
# RUN su node -c "npm install -g <your-package-list-here>"
Update
I also posted this in the MongoDB community forums, but no help so far...

As @iravinandan said, you need to set up a tunnel.
Publishing a port alone won't help as all incoming requests are going through an http proxy.
If you dig CNAME <long-address>.githubpreview.dev you will see it resolves to github-codespaces.app.online.visualstudio.com. You can put anything in the githubpreview.dev subdomain and it will still be resolved at the DNS level.
The proxy relies on the HTTP Host header to route the request to the correct upstream, so it will work for HTTP protocols only.
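That Host-based routing boils down to a lookup keyed on the header. A minimal sketch - routeByHost and the upstream table are illustrative only, not the actual Codespaces proxy:

```javascript
// Sketch of Host-header routing as an HTTP reverse proxy would do it.
// routeByHost and the upstream table are illustrative, not the real
// GitHub Codespaces implementation.
function routeByHost(hostHeader, upstreams) {
  // Strip an optional :port suffix and lowercase, per HTTP semantics.
  const name = String(hostHeader).split(':')[0].toLowerCase();
  return upstreams[name] || null;
}

const upstreams = {
  'abc-3000.githubpreview.dev': 'http://127.0.0.1:3000',
  'abc-27017.githubpreview.dev': 'http://127.0.0.1:27017',
};

console.log(routeByHost('abc-3000.githubpreview.dev', upstreams));
// http://127.0.0.1:3000

// A MongoDB driver speaks the wire protocol and never sends a Host
// header, so the proxy has nothing to route on:
console.log(routeByHost(undefined, upstreams)); // null
```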
To use any other protocol (MongoDb wire protocol in your case) you need to set up a TCP tunnel from codespaces to your machine.
Simplest setup - direct connection
At the time of writing the default Node + Mongo codespace uses Debian buster, so ssh port forwarding is the obvious choice. In the codespace/VSCode terminal:
ssh -R 27017:localhost:27017 your_public_ip
Then in Compass connect to
mongodb://localhost:27017
It will of course require your local machine to run sshd, have a public IP (or at least have your router forward incoming ssh traffic to your computer), and allow it through the firewall. You can pick any port if 27017 is already in use locally.
It's the simplest setup, but it exposes your laptop to the internet, and it's only a matter of time before it gets infected.
A bit more secure - jumpbox in the middle
To keep your local system behind a DMZ you can set up a jumpbox instead - a minimalistic, disposable Linux box somewhere on the internet, used to chain 2 tunnels:
Remote port forwarding from codespace to the jumpbox
Local port forwarding from your laptop to the jumpbox
The same
mongodb://localhost:27017
in MongoDB Compass.
The jumpbox has to expose sshd to the internet, but you can minimise the risk by hardening its security. After all, it doesn't do anything but proxy traffic. An EC2 nano will be more than enough; just keep in mind that large data transfers might be expensive.
Hassle-free tunnel-as-a-service
Something you can try in 5 minutes. ngrok has been around for years and does exactly this - it sells tunnels (with a free tier sufficient for the demo).
In your codespace/VScode terminal:
npm i ngrok --save-dev
Installing it as a dev dependency means you don't have to reinstall it every time, while ensuring it doesn't ship with production code.
You will need to register an account with ngrok (SSO with GitHub will do) to get an authentication token, and pass the token in the codespaces/VSCode terminal:
./node_modules/.bin/ngrok authtoken <the token>
Please remember it saves the token to the home directory, which will be wiped after a rebuild. Once authorised you can open the tunnel in the codespaces/VSCode terminal:
./node_modules/.bin/ngrok tcp 27017
Codespaces will automatically forward the port, and the terminal will show you some stats (mind the free tier limit) and the connection string.
The subdomain and port will change every time you open the tunnel.
The connection parameters for MongoDB Compass will then be something like:
mongodb://0.tcp.ngrok.io:18862
with authorization parameters on mongodb level as needed.
Again, keep in mind you leave your mongodb exposed to the internet (0.tcp.ngrok.io:18862), and mongo accepts unauthenticated connections.
I wouldn't leave it open for longer than necessary.
Use built-in mongodb client
The node + mongo environment comes with a handy VSCode plugin pre-installed (mongodb.mongodb-vscode).
Of course it lacks many of Compass's analytical tools, but it works out of the box and is sufficient for development.
Just open the plugin and connect to localhost.
Compass D.I.Y
The best option to get Compass functionality without compromising security and still achieve the zero-config objective is to host Compass yourself. It's an Electron application, and a web version of it runs in the browser in MongoDB Atlas.
The source code is available at https://github.com/mongodb-js/compass.
With a bit of effort you can craft a docker image to host Compass, include this image in the docker-compose file, and forward the port in devcontainer.json.
GitHub Codespaces will take care of authentication (keep the forwarded port private so only the owner of the codespace has access to it). All communication from the desktop to Compass will be over https, and Compass to mongodb will be local to the docker network. Security-wise it will be on par with the VSCode mongodb plugin.
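As a sketch of that wiring - the image name below is a placeholder for something you would build yourself from the compass sources, not a published image:

```yaml
# Hypothetical addition to the .devcontainer docker-compose.yml.
# "local/compass-web" is a placeholder image you would build from
# the mongodb-js/compass repository; it does not exist on Docker Hub.
  compass:
    image: local/compass-web
    restart: unless-stopped
    # Same trick as the app service: share the db network namespace so
    # Compass reaches MongoDB on localhost:27017 inside docker.
    network_mode: service:db
```

You would then add the Compass HTTP port to "forwardPorts" in devcontainer.json and keep that forwarded port private.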

Related

Services on docker swarm are not whitelisted by Atlas cluster that is peered with the VPC

The overview of the environment:
MongoDB cluster on Atlas that is peered with the VPC
EC2 instance running in the VPC
Docker swarm inside the EC2 instance
What I am experiencing:
I am able to connect to mongo using the mongo CLI from the EC2 instance
All my containers are unable to connect to the mongodb even though they are running on this EC2 instance
As soon as I whitelist the public IP of the EC2 instance they are able to connect - but this is weird; I want them to be able to connect because the instance they are running on can connect without any special whitelisting.
Swarm initialisation command I used:
docker swarm init --advertise-addr <private ip of the EC2>
It didn't work when I tried with the public IP, and it also doesn't work when I don't add --advertise-addr to the swarm init.
Additional useful information:
Dockerfile:
FROM node:12-alpine as builder
ENV TZ=Europe/London
RUN npm i npm@latest -g
RUN mkdir /app && chown node:node /app
WORKDIR /app
RUN apk add --no-cache python3 make g++ tini \
&& apk add --update tzdata
USER node
COPY package*.json ./
RUN npm install --no-optional && npm cache clean --force
ENV PATH /app/node_modules/.bin:$PATH
COPY . .
EXPOSE 8080
FROM builder as dev
USER node
CMD ["nodemon", "src/services/server/server.js"]
FROM builder as prod
USER node
HEALTHCHECK --interval=30s CMD node healthcheck.js
ENTRYPOINT ["/sbin/tini", "--"]
CMD ["node", "--max-old-space-size=2048" ,"src/services/server/server.js"]
I have no clue why it behaves like this. How can I fix it?
After a meeting with a senior DevOps engineer we finally found the problem.
It turns out the CIDR block of the network the containers were running in overlapped with the CIDR of the VPC. What we did was add the following to the docker-compose file:
networks:
  wm-net:
    driver: overlay
    ipam:
      driver: default
      config:
        - subnet: 172.28.0.0/16 # this CIDR doesn't overlap with the VPC
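Before picking a subnet like that, you can check it against the VPC CIDR. A rough sketch - cidrsOverlap is a hypothetical helper, IPv4 only:

```javascript
// Sketch: check whether two IPv4 CIDR blocks overlap.
// cidrsOverlap and friends are hypothetical helpers, IPv4 only.
function ipToInt(ip) {
  // Avoid bitwise ops, which would go negative past 2^31.
  return ip.split('.').reduce((acc, oct) => acc * 256 + Number(oct), 0);
}

function cidrRange(cidr) {
  const [ip, bits] = cidr.split('/');
  const size = 2 ** (32 - Number(bits));
  const base = Math.floor(ipToInt(ip) / size) * size; // align to block start
  return [base, base + size - 1];
}

function cidrsOverlap(a, b) {
  const [aLo, aHi] = cidrRange(a);
  const [bLo, bHi] = cidrRange(b);
  return aLo <= bHi && bLo <= aHi;
}

// e.g. a VPC at 172.31.0.0/16 vs a docker network at 172.31.32.0/19:
console.log(cidrsOverlap('172.31.0.0/16', '172.31.32.0/19')); // true
// The subnet chosen above is safe against that VPC:
console.log(cidrsOverlap('172.31.0.0/16', '172.28.0.0/16')); // false
```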
Your swarm containers are running on a separate network on your EC2.
As explained here, a special 'ingress' network is created by docker when initializing a swarm.
In order to allow them to connect you may need to reconfigure the default settings set up by docker, or whitelist the specific network interface that is used by docker's ingress network.

What is the difference between docker-machine and docker-compose?

I think I don't get it. First, I created docker-machine:
$ docker-machine create -d virtualbox dev
$ eval $(docker-machine env dev)
Then I wrote Dockerfile and docker-compose.yml:
FROM python:2.7
ENV PYTHONUNBUFFERED 1
RUN mkdir /code
WORKDIR /code
ADD requirements.txt /code/
RUN pip install -r requirements.txt
ADD . /code/
version: '2'
services:
  db:
    image: postgres
  web:
    build: .
    restart: always
    command: python manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/code
    ports:
      - "8000:8000"
    links:
      - db
Finally, I built and started the image:
$ docker-compose build --no-cache
$ docker-compose start
I checked ip of my virtual machine
$ docker-machine ip dev
and successfully opened the site in my browser. But when I made some changes to my code - nothing happened. So I logged into the "dev" machine:
$ docker-machine ssh dev
and I didn't find my code! So I logged into the docker "web" container:
$ docker exec -it project_web_1 bash
and the code was there, but unchanged.
What is docker-machine for? What is the point? Why doesn't docker sync files after changes? It looks like docker + docker-machine + docker-compose are a pain in the a...s for local development :-)
Thanks.
Docker is the command-line tool that uses containerization to manage multiple images and containers and volumes and such -- a container is basically a lightweight virtual machine. See https://docs.docker.com/ for extensive documentation.
Until recently Docker didn't run on native Mac or Windows OS, so another tool was created, Docker-Machine, which creates a virtual machine (using yet another tool, e.g. Oracle VirtualBox), runs Docker on that VM, and helps coordinate between the host OS and the Docker VM.
Since Docker isn't running on your actual host OS, docker-machine needs to deal with IP addresses and ports and volumes and such. And its settings are saved in environment variables, which means you have to run commands like this every time you open a new shell:
eval $(docker-machine env default)
docker-machine ip default
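What `eval $(docker-machine env default)` actually does is export the handful of variables that `docker-machine env` prints. A sketch of parsing that output - parseEnvOutput is a made-up helper, and the sample values are illustrative:

```javascript
// Sketch: turn the output of `docker-machine env default` into an
// object. parseEnvOutput is a made-up helper for illustration.
function parseEnvOutput(text) {
  const env = {};
  for (const line of text.split('\n')) {
    const m = line.match(/^export (\w+)="(.*)"$/);
    if (m) env[m[1]] = m[2]; // comment lines simply don't match
  }
  return env;
}

// Typical output shape (values are illustrative):
const sample = [
  'export DOCKER_TLS_VERIFY="1"',
  'export DOCKER_HOST="tcp://192.168.99.100:2376"',
  'export DOCKER_CERT_PATH="/Users/me/.docker/machine/machines/default"',
  'export DOCKER_MACHINE_NAME="default"',
  '# Run this command to configure your shell:',
  '# eval $(docker-machine env default)',
].join('\n');

console.log(parseEnvOutput(sample).DOCKER_HOST);
// tcp://192.168.99.100:2376
```

This is why the docker CLI in a fresh shell talks to the wrong place until you re-run the eval: the variables only exist per shell session.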
Docker-Compose is essentially a higher-level scripting interface on top of Docker itself, making it easier (ostensibly) to manage launching several containers simultaneously. Its config file (docker-compose.yml) is confusing since some of its settings are passed down to the lower-level docker process, and some are used only at the higher level.
I agree that it's a mess; my advice is to start with a single Dockerfile and get it running either with docker-machine or with the new beta native Mac/Windows Docker, and ignore docker-compose until you feel more comfortable with the lower-level tools.

Bluemix IBM Container with Mongodb connection failed

I've been trying to prepare an image containing mongodb in a Docker container from the following dockerfile:
# Dockerizing MongoDB: Dockerfile for building MongoDB images
# Based on ubuntu:latest, installs MongoDB following the instructions from:
# http://docs.mongodb.org/manual/tutorial/install-mongodb-on-ubuntu/
# Format: FROM repository[:version]
FROM ubuntu:latest
# Format: MAINTAINER Name <email@addr.ess>
MAINTAINER Name <my@gmail.com>
# Installation:
# Import MongoDB public GPG key AND create a MongoDB list file
RUN apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv 7F0CEB10
RUN echo "deb http://repo.mongodb.org/apt/ubuntu "$(lsb_release -sc)"/mongodb-org/3.0 multiverse" | tee /etc/apt/sources.list.d/mongodb-org-3.0.list
# Update apt-get sources AND install MongoDB
RUN apt-get update && apt-get install -y mongodb-org
# Create the MongoDB data directory
RUN mkdir -p /data/db
# Expose port 27017 from the container to the host
EXPOSE 27017
# Set usr/bin/mongod as the dockerized entry-point application
ENTRYPOINT ["/usr/bin/mongod"]
After running it locally, it all works perfectly, but upon running it on Bluemix and assigning a public IP address to it, the connection attempt results in the following error:
$ mongo --host 134.168.37.176
MongoDB shell version: 2.6.3
connecting to: 134.168.37.176:27017/test
2015-11-01T17:24:10.557+0100 Error: couldn't connect to server 134.168.37.176:27017 (134.168.37.176), connection attempt failed at src/mongo/shell/mongo.js:148
exception: connect failed
This is the container configuration in Bluemix:
Could you tell me why I'm not able to make the connection? Am I doing something wrong?
The error you are having is because port 27017 is not open in IBM Containers.
I suggest you open a support ticket with IBM Bluemix Support and ask for this port to be opened, or check with the IBM Bluemix Support team for an alternative open port you can use.
You can open a support ticket in the following link:
http://ibm.biz/bluemixsupport
I believe you will have to just use the private IP for the container, e.g. 10.x.x.x. Port 27017 should be open if your application is also running in IBM Containers. I realize this can be a pain when testing locally on your own machine, and it would be easier to just have the public IP address with port 27017 open.

How can I use 'mongo-express' on Cloud9?

I installed mongo-express, and it looks OK:
but I can't reach port 8081 from the outside world...
Maybe someone can advise another db-visualisation service I can use on Cloud9?
Since Cloud9 workspaces only expose port 8080, you can modify the mongo-express config (https://github.com/andzdroid/mongo-express/blob/master/config.default.js) to set the port to 8080 within the following section:
site: {
//baseUrl: the URL that mongo express will be located at
//Remember to add the forward slash at the end!
baseUrl: '/',
port: 8081, // <<--- 8080
cookieSecret: 'cookiesecret',
sessionSecret: 'sessionsecret',
cookieKeyName: 'mongo-express'
},
You should find the config.default.js within your workspace. Just copy/rename it to config.js and change the port from 8081 to 8080 and you should be all set.
Hope this helps.
I recently tried to set up mongo-express on Cloud9 and the setup has changed from the accepted answer. Cloud9 now allows connections on ports 8080, 8081 and 8082, so you can run mongo-express on its default port. Here's what worked for me:
Start a new workspace with Node
Install Express - npm install express --save
Install Mongo - sudo apt-get install -y mongodb-org then mongod --bind_ip=$IP --nojournal. These steps are from the Cloud9 docs. At this point Mongo is running on your server.
Install Mongo-Express - npm install mongo-express --save
Navigate to the mongo-express directory - cd node_modules/mongo-express.
Copy the config.default.js file - cp config.default.js config.js
Open the config.js file to edit - nano config.js (using nano, but feel free to use another editor)
Scroll down and edit the host property in the site object to be 0.0.0.0. That line will now look like: host: process.env.VCAP_APP_HOST || '0.0.0.0',
Save and exit the config.js file
While still in the /node_modules/mongo-express directory run node app.js.
At this point the Mongo Express app is running and can be accessed at http://your-app-domain.c9users.io:8081. If you're using the default user you can login with admin:pass.

Mongos Install/Setup in Elastic Beanstalk

Looking down the road at sharding, we would like to be able to have multiple mongos instances. The recommendation seems to be to put mongos on each application server. I was thinking I'd just load balance them on their own servers, but this article http://craiggwilson.com/2013/10/21/load-balanced-mongos/ indicates that there are issues with this.
So I'm back to having it on the application servers. However, we are using Elastic Beanstalk. I could install Mongo on this as a package install. But, this creates an issue with Mongos. I have not been able to find out how to get a mongos startup going using the mongodb.conf file. For replicated servers, or config servers, additional entries in the conf file can cause it to start up the way I want. But I can't do that with Mongos. If I install Mongo, it actually starts up as mongodb. I need to kill that behaviour, and get it to start as Mongos, pointed at my config servers.
All I can think of is:
Kill the mongodb startup script, that autostarts the database in 'normal' mode.
Create a new upstart script that starts up mongos, pointed at the config servers.
Any thoughts on this? Or does anyone know if I'm just being obtuse, and I can copy a new mongodb.conf file into place on beanstalk that will start up the server as mongos?
We are not planning on doing this right off the bat, but we need to prepare somewhat, as if I don't have the pieces in place, I'll need to completely rebuild my beanstalk servers after the fact. I'd rather deploy ready to go, with all the software installed.
I created a folder called ".ebextensions" and a file called "aws.config". The contents of this file are as follows:
files:
  "/etc/yum.repos.d/mongodb.repo":
    mode: "000644"
    content: |
      [MongoDB]
      name=MongoDB Repository
      baseurl=http://downloads-distro.mongodb.org/repo/redhat/os/x86_64
      gpgcheck=0
      enabled=1

container_commands:
  01_enable_rootaccess:
    command: echo Defaults:root \!requiretty >> /etc/sudoers
  02_install_mongo:
    command: yum install -y mongo-10gen-server
    ignoreErrors: true
  03_turn_mongod_off:
    command: sudo chkconfig mongod off
  04_create_mongos_startup_script:
    command: sudo sh -c "echo '/usr/bin/mongos -configdb $MONGO_CONFIG_IPS -fork -logpath /var/log/mongo/mongos.log --logappend' > /etc/init.d/mongos.sh"
  05_update_mongos_startup_permissions:
    command: sudo chmod +x /etc/init.d/mongos.sh
  06_start_mongos:
    command: sudo bash /etc/init.d/mongos.sh
What this file does is:
Creates a "mongodb.repo" file (see http://docs.mongodb.org/manual/tutorial/install-mongodb-on-red-hat-centos-or-fedora-linux/).
Runs 6 container commands (these are run after the server is created but before the WAR is deployed). These are:
Enable root access - this is required for "sudo" commands afaik.
Install Mongo - install mongo as a service using the yum command. We only need "mongos", but this has not yet been separated from the mongo server package. This may change in future.
Turn mongod off - this means the mongod program isn't run if the server restarts.
Create a script to run mongos. Note the $MONGO_CONFIG_IPS in step 4; you can pass these in using the configuration page in Elastic Beanstalk. This will run on a server reboot.
Set permissions to execute. The reason I did 4/5 as opposed to putting this into the files: section is that the files: section did not expand the IP addresses from the environment variable.
Run the script created in step 4.
This works for me. My WAR file simply connects to localhost and all the traffic goes through the router. I stumbled about for a couple of days on this as the documentation is fairly slim in both Amazon AWS and MongoDB.
http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/customize-containers-ec2.html
UPDATE: If you are having problems with my old answer, please try the following - it works for version 3 of Mongo and is currently being used in our production MongoDB cluster.
This version is more advanced in that it uses internal DNS (via AWS Route53) - note the mongo-cfg1.internal hostnames. This is recommended best practice and well worth setting up as a private zone in Route53. It means that if there's an issue with one of the MongoDB config instances, you can replace the broken instance and update the private IP address in Route53 - no updates required in each Elastic Beanstalk environment, which is really cool. However, if you don't want to create a zone you can simply insert the IP addresses in the configDB attribute (like my first example).
files:
  "/etc/yum.repos.d/mongodb.repo":
    mode: "000644"
    content: |
      [mongodb-org-3.0]
      name=MongoDB Repository
      baseurl=http://repo.mongodb.org/yum/amazon/2013.03/mongodb-org/3.0/x86_64/
      gpgcheck=0
      enabled=1
  "/opt/mongos.conf":
    mode: "000755"
    content: |
      net:
        port: 27017
      operationProfiling: {}
      processManagement:
        fork: "true"
      sharding:
        configDB: mongo-cfg1.internal.company.com:27019,mongo-cfg2.internal.company.com:27019,mongo-cfg3.internal.company.com:27019
      systemLog:
        destination: file
        path: /var/log/mongos.log

container_commands:
  01_install_mongo:
    command: yum install -y mongodb-org-mongos-3.0.2
    ignoreErrors: true
  02_start_mongos:
    command: "/usr/bin/mongos -f /opt/mongos.conf > /dev/null 2>&1 &"
I couldn't get @bobmarksie's solution to work, but thanks to anowak and avinci here for this .ebextensions/aws.config file:
files:
  "/home/ec2-user/install_mongo.sh":
    mode: "000755"
    owner: root
    group: root
    content: |
      #!/bin/bash
      echo "[MongoDB]
      name=MongoDB Repository
      baseurl=http://downloads-distro.mongodb.org/repo/redhat/os/x86_64
      gpgcheck=0
      enabled=1" | tee -a /etc/yum.repos.d/mongodb.repo
      yum -y update
      yum -y install mongodb-org-server mongodb-org-shell mongodb-org-tools

commands:
  01install_mongo:
    command: ./install_mongo.sh
    cwd: /home/ec2-user
    test: '[ ! -f /usr/bin/mongo ] && echo "MongoDB not installed"'

services:
  sysvinit:
    mongod:
      enabled: true
      ensureRunning: true
      commands: ['01install_mongo']