(OpenShift) Sharing MongoDB between apps not possible anymore?

There are articles both here and over at OpenShift itself that suggest I can configure an app with a MongoDB and then set envvars within other apps to access that DB
e.g.
How do you access a MongoDB database from two Openshift apps?
and
https://blog.openshift.com/sharing-database-across-applications/
The problem is that the envvar/hostname for the app's DB is either "localhost" or a dotted quad (129.xxx.xxx.xxx), which clearly won't (and, as I tried, actually doesn't) work if used in a connection string from another app (in the same OpenShift 'domain').
Is this something OpenShift has done to discourage this sort of usage, at least on the free tier? Or am I missing something? Has anyone got this working at the moment?
P.S. Using the app's public hostname (e.g. app-domain.rhcloud.com) doesn't work either (even from an 'internally hosted' app).

You need to create the application as scalable, then add MongoDB, for it to work correctly. MongoDB will then go onto its own gear, with its own IP address and port number (remember to use this port number, or nothing will work).
You will never be able to access your MongoDB gear from outside of OpenShift without using the rhc port-forward command (https://developers.openshift.com/en/managing-port-forwarding.html), because of firewall and other security restrictions.
However, if you use the scaled-application approach, you will be able to connect to your MongoDB instance from other OpenShift Online hosted applications.
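As a rough sketch of what that looks like in practice (the app names dbapp/otherapp and the SHARED_MONGO_URL variable are placeholders, and the variable names assume the standard OpenShift v2 MongoDB cartridge):
# On the app that owns the scaled MongoDB cartridge, note the gear's address and port
$ rhc ssh dbapp
$ echo $OPENSHIFT_MONGODB_DB_HOST $OPENSHIFT_MONGODB_DB_PORT
# Back on your workstation, hand a full connection string to the second app
$ rhc env set SHARED_MONGO_URL="mongodb://user:password@<db-host>:<db-port>/dbname" -a otherapp
# For local tools only (not app-to-app traffic), tunnel to the gear instead
$ rhc port-forward dbapp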

Related

Change the Database Address of an existing Meteor App running on an Ubuntu Cloud Server

I have a Meteor app running on an Ubuntu droplet on DigitalOcean (your basic virtual machine). This app was written by a company that went out of business and left us with nothing.
The database is a MongoDB currently running on IBM Compose. Compose is shutting down in a month, so the database needs to be moved and our app needs to connect to the new database.
I had no issues exporting the data and creating a MongoDB with all of it on a different server.
What I cannot for the life of me figure out is where on the live Meteor app server I would change the address of the database connection. There is no simple top-level config file where I can change this? Does anyone out there know where I would do this?
I realize that in the long term I will need to either rewrite or deprecate this aging app, but in the short term the company relies on it, and IBM decided to just shut down their Compose service, so please help!
Meteor mainly uses MONGO_URL and MONGO_OPLOG_URL, which are configured as environment variables: https://docs.meteor.com/environment-variables.html#MONGO-OPLOG-URL
You don't set these within the code but during deployment. If you are running on localhost and want to connect to the external MongoDB you can simply use:
$ MONGO_URL="mongodb://user:password@myserver.com:port" meteor
If you want to deploy the app, you should stick with the docs: https://galaxy-guide.meteor.com/mongodb.html#authentication
If you use MUP then configure the mongo appropriately: https://meteor-up.com/docs.html#mongodb
Edit: If your app was previously deployed using MUP, you can try to restore the environment variables from /opt/app-name/config (where app-name is the name of your app). That directory contains env.list (all environment variables, including your MONGO_URL) and start.sh, which you can use to recreate the mup.js config.
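As a minimal sketch (the path assumes the /opt/app-name layout mentioned above; host, credentials and database name are placeholders):
# On the droplet, find the current values
$ grep MONGO /opt/app-name/config/env.list
# Edit env.list so MONGO_URL (and MONGO_OPLOG_URL, if set) point at the new server, e.g.
# MONGO_URL=mongodb://user:password@new-db-host:27017/appdb
# Then restart the app so it picks up the new values (a Docker container or an
# upstart/systemd service, depending on which MUP version did the deployment)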

How can we access Bluemix hosted "Compose for MongoDB" service from "outside"?

Situation:
I have created a new Compose for MongoDB service instance in Bluemix today.
Need:
I have to access this MongoDB DIRECTLY with tools (e.g. Mongo Management Studio Pro, mongo.exe, etc.) for bulk loading, testing, ad-hoc data fixes, etc.
Problem:
I have not found any docs, samples, nor a CLEAR statement that
a) gives me some confirmation that THIS is possible
b) gives me COMPLETE information (not just some technical fragments that might have worked a year ago) on how to do it.
Maybe I am looking in the wrong places or do not know the right people. However, I am stuck on this, and before quitting Bluemix MongoDB, maybe somebody has a copy/paste solution or a hands-on step-by-step manual.
Any help welcome. Thanks!
Connecting to the MongoDB service in Bluemix from an application is possible. For this answer I have used the application "Robo3T"; here are the steps:
Access your MongoDB service in your Bluemix account, usually under "Cloud Foundry Services".
Open the "Manage" section and, from "Connection Settings", copy the "HTTPS" connection address and port. In this example: "sl-eu-lon-2-portal.5.dblayer.com" and "20651".
In Robo3T, create a new connection with the connection address from the previous step.
In the "Authentication" tab, configure the database name, username and password. The credentials are found in the same "Connection Settings" panel as in step 1.
From "Connection Settings", copy the SSL certificate into a text file and save it locally.
In Robo3T, add the certificate to the connection in the "SSL" tab.
Test the connection and save the settings.
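For reference, a roughly equivalent connection from the mongo shell might look like this (host and port taken from step 2 above; the certificate file name, database and credentials are placeholders):
$ mongo --ssl --sslCAFile ./compose-cert.pem \
    sl-eu-lon-2-portal.5.dblayer.com:20651/admin -u <username> -p <password>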
Answer
YES, Bluemix hosted Compose for MongoDB instances can be connected to from the mongo shell and from some up-to-date DB management tools.
However, you have to make sure, especially if you are running the newest DB versions, that your tools (shell and DB management GUIs) support the newest DB features, such as encryption.
Origin of the Problem
My problem was due to older, and therefore incompatible, versions of the mongo shell and DB management tools being run against the newest MongoDB versions, with their particularities around encryption and the multiple servers that have to be handled in the URI.
At least two DB management tools are not compatible with the newest DB version and will take their time to get fixed. The problem is that neither will tell you about this. They just do not connect. No logs on either side. Period.
So my advice here: look for tool providers who state explicit compatibility with the specific version of your DB.
Advice to the Bluemix Team
It would not take much time to provide some sample connection strings for the most common tools, like the mongo shell, MongoBooster, etc., to take the hassle and guesswork out of interpreting the environment variables and figuring out what is needed for a specific connection string and what is not.
For instance, MongoDB Atlas hosting provides, for every cluster, ready-made connection strings for many tools that you can just copy/paste, and you're done!
Connecting to Atlas took me 5 minutes. For Bluemix I have lost hours! Not because it is complex, but because the documentation and the generated info are somewhat incomplete and messy, at least for those who do not do connection strings for a living!

How to use Postgres or Mongo databases in PCF Dev (Pivotal Cloud Foundry Dev)?

In PCF Dev (the replacement for Micro Cloud Foundry) I saw 3 services in the marketplace: MySQL, Redis and RabbitMQ, but I need to use Mongo and Postgres for my stuff. Is there any easy way to add them to this deployment?
PCF Dev does not currently include support for MongoDB or Postgres service instances. It is also not currently possible to install tiles or BOSH releases.
All of these things may be supported eventually, but for now, you can run MongoDB or Postgres on your host system and create a user-provided service instance using the cf CLI.
Here's an example for Postgres: https://docs.tibco.com/pub/bwcf/1.0.0/doc/html/GUID-D7408016-8C7B-4637-BCC5-EDD9D5C52267.html
Note that you must use host.pcfdev.io instead of localhost to refer to the host system (instead of the PCF Dev VM). In the example above, your URL might look like:
postgresql://host.pcfdev.io:5432/postgres
(Also note that host.pcfdev.io may actually be host2.pcfdev.io if your system domain is local2.pcfdev.io instead of local.pcfdev.io)
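For example, a user-provided service instance wrapping that Postgres URL could be created and bound with the cf CLI (service and app names here are just placeholders):
$ cf create-user-provided-service my-postgres -p '{"uri":"postgresql://host.pcfdev.io:5432/postgres"}'
$ cf bind-service my-app my-postgres
$ cf restage my-app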
~Stephen Levine, PCF Dev Product Manager

Docker: multiple linked containers for each customer

I'm developing a platform that monitors emails, saves the results in a Mongo database (through Parse Server) and displays the results on a web app (using AngularJS).
Basically, for each customer I want an SMTP server, a Parse Server, a MongoDB and a web platform.
I thought of using Docker for more scalability, and the idea is to automatically create the containers when the user signs up on my website, but I don't really understand how to link these containers together: web1|smtp1 connected to parse1 connected to mongo1, and web2|smtp2 connected to parse2 connected to mongo2.
In the end, I want the customers to access the web app through web1.website.com, so I think I should also use HAProxy.
I'm just wondering if this is really the best way to do it, as I'm going crazy with the automation process, and whether you have any tips for doing that.
With Docker (specifically Docker Compose), linking containers together is very easy. In fact, it happens out of the box! If you compose all your containers at the same time, a Docker network is created for your "composed app" and all containers can see each other.
See the documentation here.
Each container can now look up the hostname web or db and get back the appropriate container’s IP address. For example, web’s application code could connect to the URL postgres://db:5432 and start using the Postgres database.
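If you end up scripting the per-customer stacks yourself rather than writing a compose file, the same name-based linking works on a user-defined network. A minimal sketch (image names and the DATABASE_URI variable are placeholders for whatever your Parse Server image expects):
# One isolated network per customer
$ docker network create customer1
$ docker run -d --network customer1 --name mongo1 mongo
$ docker run -d --network customer1 --name parse1 -e DATABASE_URI=mongodb://mongo1:27017/customer1 my-parse-server-image
$ docker run -d --network customer1 --name web1 my-angular-web-image
# parse1 resolves "mongo1" by name; HAProxy on the host then routes web1.website.com to web1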

Restricting access via meteor mongo

I'm asking this out of concern for my database's security. Meteor encourages developers to remove the insecure package and move all database-altering operations to methods executed safely on the server, which one can happily do.
However, it strikes me, after deploying to mywebsite.com with meteor deploy mywebsite.com, that the command meteor mongo mywebsite.com seems to be runnable, and to connect, for anyone who cares to try it. How would one mitigate this direct access, or is it not actually as open as I believe?
I was worried for no reason - the credentials that you set up when deploying an application for the first time are required for access to the production database from an unfamiliar machine. You will be interactively prompted when accessing via meteor mongo.
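In other words, from a machine that has never authenticated against that deployment, something like this will not open a shell until the deploy credentials are supplied:
$ meteor mongo mywebsite.com
# prompts interactively for the credentials set up at deploy time before connecting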