How to use Postgres or Mongo databases in PCF Dev (Pivotal Cloud Foundry Dev)?

In PCF Dev (the replacement for Micro Cloud Foundry) I saw three services in the marketplace: MySQL, Redis, and RabbitMQ. But I need to use Mongo and Postgres for my stuff; is there any easy way to add them to this deployment?

PCF Dev does not currently include support for MongoDB or Postgres service instances. It is also not currently possible to install tiles or BOSH releases.
All of these things may be supported eventually, but for now, you can run MongoDB or Postgres on your host system and create a user-provided service instance using the cf CLI.
Here's an example for Postgres: https://docs.tibco.com/pub/bwcf/1.0.0/doc/html/GUID-D7408016-8C7B-4637-BCC5-EDD9D5C52267.html
Note that you must use host.pcfdev.io instead of localhost to refer to the host system (instead of the PCF Dev VM). In the example above, your URL might look like:
url> postgresql://host.pcfdev.io:5432/postgres
(Also note that host.pcfdev.io may actually be host2.pcfdev.io if your system domain is local2.pcfdev.io instead of local.pcfdev.io)
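For reference, a minimal sketch of the cf CLI steps for a user-provided Postgres service (the service and app names here are placeholders):
$ cf create-user-provided-service my-postgres -p '{"uri":"postgresql://user:password@host.pcfdev.io:5432/postgres"}'
$ cf bind-service my-app my-postgres
$ cf restage my-app
The bound app then reads the credentials from its VCAP_SERVICES environment variable.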
~Stephen Levine, PCF Dev Product Manager

Related

Change the Database Address of an existing Meteor App running on an Ubuntu Cloud Server

I have a Meteor app running on an Ubuntu Droplet on DigitalOcean (your basic virtual machine). This app was written by a company that went out of business and left us with nothing.
The database is a MongoDB currently running on IBM Compose. Compose is shutting down in a month and the Database needs to be moved and our App needs to connect to the new database.
I had no issues exporting and creating a MongoDB with all the data on a different server.
I cannot for the life of me figure out where on the live Meteor app server I would change the address of the database connection. There doesn't seem to be a simple top-level config file where I can change this. Does anyone out there know where I would do this?
I realize that in the long term I will need to either rewrite or deprecate this aging app, but in the short term the company relies on it, and IBM decided to just shut down their Compose service, so please help!
It is mostly the MONGO_URL and MONGO_OPLOG_URL that are configured, as environment variables: https://docs.meteor.com/environment-variables.html#MONGO-OPLOG-URL
You don't set these within the code but during deployment. If you are running on localhost and want to connect to the external MongoDB, you can simply use:
$ MONGO_URL="mongodb://user:password@myserver.com:port" meteor
If you want to deploy the app, you should stick with the docs: https://galaxy-guide.meteor.com/mongodb.html#authentication
If you use MUP then configure the mongo appropriately: https://meteor-up.com/docs.html#mongodb
Edit: If your app was previously deployed using MUP, you can try to restore the environment variables from /opt/app-name/config (where app-name is the name of your app), which contains env.list (including all environment variables, and thus your MONGO_URL) and start.sh, which you can use to recreate the mup.js config.
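If the app runs as a plain Node bundle (built with meteor build) rather than under MUP, a minimal sketch of pointing it at the new database via environment variables (all hostnames, credentials, and paths below are placeholders, not from the original deployment):
$ export MONGO_URL="mongodb://user:password@newdbserver.example.com:27017/myapp"
$ export MONGO_OPLOG_URL="mongodb://oplogger:password@newdbserver.example.com:27017/local?authSource=admin"
$ export ROOT_URL="https://myapp.example.com"
$ export PORT=3000
$ node bundle/main.js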

Transfer MongoDB dump on external hard drive to google cloud platform

As part of my thesis project, I have been given a MongoDB dump of size 240GB, which is on my external hard drive. I'll have to use this data to run my Python scripts for a short duration. However, since my dataset is huge and I cannot mongoimport it on my local MongoDB server (since I don't have enough internal memory), my professor gave me a $100 Google Cloud Platform coupon so I can use Google's cloud computing resources.
So far I have researched that I can do it this way:
Create a Compute Engine instance in GCP and install MongoDB on it. Transfer the MongoDB dump to the remote instance and run the scripts to get the output.
This method works well, but I'm looking for a way to create a remote database server in GCP so that I can run my scripts locally, something like one of the following:
Creating a remote MongoDB server on GCP so that I can establish a remote Mongo connection and run my scripts locally.
Transferring the MongoDB dump to Google's Datastore so that I can use the Datastore API to connect remotely and run my scripts locally.
I have given some thought to using MongoDB Atlas, but because of the size of the data I would be billed heavily, and I cannot use my GCP coupon there.
Any help or suggestions on how either of the two methods can be implemented are appreciated.
There are two parts to your question.
First, you can create a Compute Engine VM with MongoDB installed and load your backup onto it. Then open the right firewall rules to allow the connection from your local environment to the Google Compute Engine VM. The connection will be performed with a simple login/password.
You can assign a static IP to your VM. That way, if the VM reboots, it keeps the same IP (which makes your local connection easier).
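As a rough sketch of that first part with the gcloud CLI (the address name, region, and source IP below are placeholders; lock --source-ranges down to your own IP):
$ gcloud compute addresses create mongo-ip --region=us-central1
$ gcloud compute firewall-rules create allow-mongo --allow=tcp:27017 --source-ranges=203.0.113.5/32
# then, from your local environment:
$ mongo STATIC_IP:27017/mydb -u user -p password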
Second, BE CAREFUL with Datastore. It's a good product, a serverless, document-oriented NoSQL database, but it's absolutely not a MongoDB equivalent. You can't perform aggregations, you are limited in search capabilities, etc. It's designed for specific use cases (I don't know yours, but don't assume it's a MongoDB equivalent!).
Anyway, if you use Datastore, you will have to use a service account or install the Google Cloud SDK in your local environment to be authenticated and able to call the Datastore API. There is no login/password in this case.
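For completeness, a sketch of that Datastore-style authentication from a local environment (the project ID is a placeholder):
$ gcloud auth application-default login
$ gcloud config set project my-thesis-project
# local scripts then pick up Application Default Credentials automatically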

How can we access Bluemix hosted "Compose for MongoDB" service from "outside"?

Situation:
I created a new Compose for MongoDB service instance in Bluemix today.
Need:
I have to access this MongoDB DIRECTLY with tools (e.g. Mongo Management Studio Pro, mongo.exe, etc.) for bulk loading, testing, ad-hoc data fixes, etc.
Problem:
I have not found any docs, samples, nor a CLEAR statement that
a) gives me some confirmation that THIS is possible
b) gives me COMPLETE information (not just some technical fragments that might have worked a year ago) on how to do it.
Maybe I am looking in the wrong places or do not know the right people. However, I am stuck on this, and before quitting Bluemix MongoDB, maybe somebody has a copy/paste solution or a hands-on step-by-step manual.
Any help welcome. Thanks!
Connecting to the MongoDB service in Bluemix from an application is possible. For this answer I have used the application "Robo3T"; here are the steps:
Access your MongoDB service in your Bluemix account, usually under "Cloud Foundry Services".
Open the "Manage" section; from "Connection Settings", copy the connection address and port from "HTTPS". In this example, "sl-eu-lon-2-portal.5.dblayer.com" and "20651".
In Robo3T create a new connection with the connection address from previous step
In the "Authentication" tab, configure the database name, username, and password. The credentials are found as in step 1.
From "Connection Settings" copy the SSL Certificate into a text file and save locally.
In Robo3T, add the certificate to the connection in the "SSL" tab.
Test the connection and save the settings
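The same connection from the mongo shell might look like this, assuming a legacy mongo shell and reusing the example host and port from above (the certificate file name is a placeholder):
$ mongo --ssl --sslCAFile compose-cert.pem --host sl-eu-lon-2-portal.5.dblayer.com --port 20651 -u admin -p PASSWORD --authenticationDatabase admin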
Answer
YES, Bluemix-hosted Compose for MongoDB instances can be connected to from the mongo shell and some updated DB management tools.
However, if you are running the newest DB versions, you have to make sure that your tools (shell and DB management GUIs) support the newest DB features, such as encryption.
Origin of the Problem
My problem was due to older, and therefore incompatible, versions of the mongo shell and DB management tools running against the newest MongoDB versions, with their particulars around encryption and multiple servers in the URI.
At least two DB management tools are not compatible with the newest DB version and will take their time to get fixed. The problem is that neither will tell you about this. They just do not connect. No logs on either side. Period.
So my advice here: look for tool providers who state dedicated compliance with the specific version of your DB.
Advice to the Bluemix Team
It might not take much time to provide some sample connection strings for the most common tools, like the mongo shell, MongoBooster, etc., to take the hassle and guesswork out of interpreting the environment variables and figuring out what is and is not needed for specific connection strings.
For instance, MongoDB Atlas hosting provides ready-made connection strings for every cluster, for many tools; you can just copy/paste and you're done!
Connecting to Atlas took me 5 minutes. For Bluemix I have lost hours! Not because it is complex, but because the documentation and the generated info is somehow incomplete and messy - at least for those who do not build connection strings for a living!

How to monitor IBM Compose DB

I have an IBM Bluemix app. Bluemix created & deployed a Compose-powered MongoDB for me. But I also have a separate MongoDB deployment on Compose (http://compose.com).
The problem is, the Bluemix-created MongoDB deployment has some issue I don't understand. Because of it, I cannot use any GUI tool such as Robomongo (https://robomongo.org) or MongoClient to monitor the database. But most importantly, I cannot even use the mongoimport CLI tool to import data.
So, if there's some way that I can either import the Bluemix created db into the Compose.io website or I can import / use Compose.io created DB into Bluemix, that would be great.
It depends on what you want to do. There is a guide on Mongo by Bluemix with a Node reference, and there is a MongoDB UI written in Node. That is kind of official.
You can connect with another MongoDB UI (I mean a full app) if you are using the newest DB versions and your shell and DB management GUIs support the newest DB features, including encryption. There is no official reference; you have to search the whole earth, try, and fail.
RoboMongo/Robo 3T does not work. You can ask on IBM developerWorks to receive an official answer; I guess you'll get a response something like this.
The MongoDB Compose on Bluemix uses SSL. So to connect to it from RoboMongo or another tool, you need to either use the certificate displayed on the MongoDB credentials screen or just use unvalidated SSL.
So if this is the Bluemix MongoDB URI:
"uri": "mongodb://admin:KUGHDSBKJSLKNA#bluemix-sandbox-xxx-y-portal.z.dblayer.com:29802,bluemix-sandbox-....-dblayer.com:29802/compose?ssl=true&authSource=admin"
You use the following in your GUI Tool:
Hostname: bluemix-sandbox-xxx-y-portal.z.dblayer.com
Port: 29802
User: admin
Password: KUGHDSBKJSLKNA
AuthenticationDB: admin
SSL: Unvalidated.
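Equivalently, from a legacy mongo shell, the unvalidated-SSL variant of this connection might look like this (reusing the example values above):
$ mongo --ssl --sslAllowInvalidCertificates --host bluemix-sandbox-xxx-y-portal.z.dblayer.com --port 29802 -u admin -p KUGHDSBKJSLKNA --authenticationDatabase admin compose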

(OpenShift) Sharing MongoDB between Apps not possible anymore?

There are articles both here and over at OpenShift itself that suggest I can configure an app with a MongoDB and then set envvars within other apps to access that DB
e.g.
How do you access a MongoDB database from two Openshift apps?
and
https://blog.openshift.com/sharing-database-across-applications/
The problem is that the envvars/hostname for the app DB is either "localhost" or a dotted-quad (129.xxx.xxx.xxx), which clearly won't (and actually doesn't; I tried it) work if used in a connection string from another app (in the same OpenShift 'domain').
Is this something OpenShift has done to discourage this sort of usage, at least on the 'free' tier? Or am I missing something / has anyone got this working at the moment?
p.s. using the actual connection string for the app (e.g. app-domain.rhcloud.com) doesn't work (even from an 'internally hosted' app)
You need to create the application as scalable, then add MongoDB for it to work correctly. MongoDB will then go onto its own gear, with its own IP address and port number (remember to use this port number, or nothing will work).
You will never be able to access your MongoDB gear from outside of OpenShift without using the rhc port-forward command (https://developers.openshift.com/en/managing-port-forwarding.html), because of firewall and other security issues.
However, if you use the scaled application approach, you will be able to connect to your MongoDB instance from other OpenShift Online hosted applications.
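For local access during development, the port-forward route mentioned above might look like this (the app name, credentials, and database name are placeholders):
$ rhc port-forward -a myapp
# rhc prints the local forwards, e.g. mongodb on 127.0.0.1:27017;
# while the forward is running, connect locally:
$ mongo 127.0.0.1:27017/myapp -u admin -p password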