Is there any way other than MLab/ObjectRocket to migrate the app from Parse to MongoDB?

I have created an AWS EC2 instance with the RocksDB storage engine (MongoRocks), and now I am trying to migrate the Parse application to this instance as instructed here:
https://gyminutesapp.wordpress.com/2016/04/28/migrating-parse-mongodb-to-mongorocks-on-aws
Is it compulsory to do it via MLab/ObjectRocket, or is there another way?
Can anyone help me out with the further steps: how do I connect to Parse Server and migrate the data?

You can move to any MongoDB database. Set up any server, install MongoDB, allow remote access, and push your data from parse.com to your own MongoDB database. This is the first step in the Parse migration process.
Below are the other steps to take care of:
1. Host the open-source Parse Server and configure it to connect to your database (a minimal sketch follows this list).
2. Fix your cloud code; minor changes might be required.
3. Replace the cloud modules you are using with their npm counterparts.
4. Deploy!
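For step 1, a minimal Parse Server entry point looks roughly like the sketch below (the app ID, master key, hosts, and database URI are placeholders; replace them with your app's existing keys and your own MongoDB address):

// Minimal sketch of a self-hosted Parse Server (Node.js).
// All IDs, keys, hosts, and paths below are placeholders.
const express = require('express');
const { ParseServer } = require('parse-server');

const api = new ParseServer({
  databaseURI: 'mongodb://user:password@your-ec2-host:27017/parse', // your own MongoDB
  appId: 'myAppId',          // must match your existing Parse app ID
  masterKey: 'myMasterKey',  // must match your existing master key
  serverURL: 'http://localhost:1337/parse',
  cloud: './cloud/main.js'   // your migrated cloud code (step 2)
});

const app = express();
app.use('/parse', api);      // mount the Parse API
app.listen(1337, () => console.log('parse-server running on port 1337'));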

Related

Change the Database Address of an existing Meteor App running on a Ubuntu Cloud Server

I have a Meteor App running on a Ubuntu Droplet on Digital Ocean (your basic virtual machine). This app was written by a company that went out of business and left us with nothing.
The database is a MongoDB currently running on IBM Compose. Compose is shutting down in a month and the Database needs to be moved and our App needs to connect to the new database.
I had no issues exporting and creating a MongoDB with all the data on a different server.
I cannot for the life of me figure out where on the live Meteor app server I would change the address of the database connection. There is no simple top-level config file where I can change this, is there? Does anyone out there know where I would do this?
I realize that in the long term I will need to either rewrite or deprecate this aging app, but in the short term the company relies on it and IBM decided to just shut down their Compose service so please help!!
Mostly it is the MONGO_URL and MONGO_OPLOG_URL that matter; they are configured as environment variables: https://docs.meteor.com/environment-variables.html#MONGO-OPLOG-URL
Note that you don't set these within the code but during deployment. If you are running on localhost and want to connect to the external MongoDB, you can simply use:
$ MONGO_URL="mongodb://user:password@myserver.com:port" meteor
If you want to deploy the app, you should stick with the docs: https://galaxy-guide.meteor.com/mongodb.html#authentication
If you use MUP then configure the mongo appropriately: https://meteor-up.com/docs.html#mongodb
Edit: If your app was previously deployed using MUP, you can try to recover the environment variables from /opt/app-name/config (where app-name is the name of your app). That directory contains env.list (with all environment variables, including your MONGO_URL) and start.sh, which you can use to recreate the mup.js config.
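For illustration, the relevant part of a mup.js config looks roughly like this sketch (host, credentials, paths, and the Mongo URL are all placeholders):

// Sketch of a mup.js config; every value below is a placeholder.
module.exports = {
  servers: {
    one: { host: '203.0.113.10', username: 'root', pem: '~/.ssh/id_rsa' }
  },
  app: {
    name: 'app-name',
    path: '../app',
    servers: { one: {} },
    env: {
      ROOT_URL: 'https://myapp.example.com',
      MONGO_URL: 'mongodb://user:password@newserver.example.com:27017/appdb' // the new database
    }
  }
};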

How to connect build agent to PostgreSQL database

My integration tests for my ASP.NET Core application require a connection to a PostgreSQL database. In my deployment pipeline I only want to deploy if my integration tests pass.
How do I supply a working connection string inside the Microsoft build agent?
I looked under service connections and couldn't see anything related to a database.
If you are using a Microsoft-hosted agent, then your database needs to be accessible from the internet.
Otherwise, you need to run the tests on a self-hosted agent that can access your database.
I assume the default connection string is in appsettings.json. You could store the actual database connection string in a secret variable, then update the appsettings.json file with that variable's value through some task (e.g. Set Json Property) or programmatically (e.g. a PowerShell script) before running the web app and starting the tests during the build.
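For example, the programmatic option can be a small script run as a build step before the tests. Here is a sketch in Node.js; the variable name PG_CONNECTION_STRING and the ConnectionStrings.Default key are assumptions about your setup, not names your project necessarily uses:

// Sketch: overwrite the connection string in appsettings.json from an
// environment variable (e.g. a secret pipeline variable mapped into this step).
// PG_CONNECTION_STRING and ConnectionStrings.Default are assumed names.
const fs = require('fs');

const path = 'appsettings.json';
const settings = JSON.parse(fs.readFileSync(path, 'utf8'));
settings.ConnectionStrings = settings.ConnectionStrings || {};
settings.ConnectionStrings.Default = process.env.PG_CONNECTION_STRING || '';
fs.writeFileSync(path, JSON.stringify(settings, null, 2));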
If you can use any PostgreSQL database, you can use a service container with a Docker image that provides PostgreSQL (e.g. postgres).
For a classic pipeline, you could call the docker command to run the image.
I would recommend using runsettings, which you can override in the task; that way you keep your connection string out of source control. Please check this link. As for service connections: you don't need one; all you need is a proper connection string.
Since I don't know the details of how you connect to your DB, I can't give you more info. If you provide an example of how you already connect to the database, I can try to give a better answer.

Transfer a MongoDB dump on an external hard drive to Google Cloud Platform

As a part of my thesis project, I have been given a MongoDB dump of size 240GB which is on my external hard drive. I'll have to use this data to run my Python scripts for a short duration. However, my dataset is huge and I cannot import it into my local MongoDB server (I don't have enough internal storage), so my professor gave me a $100 Google Cloud Platform coupon that lets me use Google's cloud computing resources.
So far I have found that I can do it this way:
Create a Compute Engine instance in GCP and install MongoDB on it. Transfer the MongoDB dump to the instance and run the scripts there to get the output.
This method works well, but I'm looking for a way to create a remote database server in GCP so that I can run my scripts locally, something like one of the following:
Creating a remote MongoDB server on GCP so that I can establish a remote Mongo connection and run my scripts locally.
Transferring the MongoDB dump to Google's Datastore so that I can use the Datastore API to connect remotely and run my scripts locally.
I have thought about using MongoDB Atlas, but because of the size of the data I would be billed heavily, and I cannot use my GCP coupon there.
Any help or suggestions on how either of the two methods can be implemented are appreciated.
There are two parts to your question.
First, you can create a Compute Engine VM with MongoDB installed and load your backup onto it. Then, open the right firewall rules to allow connections from your local environment to the Google Compute Engine VM. The connection will be performed with a simple login/password.
You can use a static IP on your VM; that way, the VM keeps the same IP across reboots, and your local connection will be easier.
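Once the firewall rule is open and a database user exists, your local scripts only need to point at the VM's static IP. Below is a sketch with the MongoDB Node.js driver (IP, credentials, and names are placeholders; the same URI works from pymongo in your Python scripts):

// Sketch: connect from a local machine to MongoDB running on the GCE VM.
// The IP, credentials, database, and collection names are placeholders.
const { MongoClient } = require('mongodb');

async function main() {
  const client = new MongoClient('mongodb://scriptUser:secret@203.0.113.10:27017/thesis');
  await client.connect();
  const docs = await client.db('thesis').collection('samples').find().limit(5).toArray();
  console.log(docs);
  await client.close();
}

main().catch(console.error);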
Second, BE CAREFUL with Datastore. It's a good product, a serverless document-oriented NoSQL database, but it is absolutely not a MongoDB equivalent. You can't perform aggregations, you are limited in search capabilities, and so on. It's designed for specific use cases (I don't know yours, but don't assume it is a MongoDB equivalent!).
Anyway, if you use Datastore, you will have to use a service account or install the Google Cloud SDK on your local environment to be authenticated and able to call the Datastore API. There is no login/password in this case.
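For illustration, with a service account the Datastore client picks up credentials from the GOOGLE_APPLICATION_CREDENTIALS environment variable, so no login/password appears anywhere. A sketch (the project ID and kind name are placeholders):

// Sketch: query Datastore with the official Node.js client.
// Credentials come from GOOGLE_APPLICATION_CREDENTIALS, not a login/password.
const { Datastore } = require('@google-cloud/datastore');

async function main() {
  const datastore = new Datastore({ projectId: 'my-thesis-project' }); // placeholder project
  const query = datastore.createQuery('Sample').limit(5);              // placeholder kind
  const [entities] = await datastore.runQuery(query);
  console.log(entities);
}

main().catch(console.error);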

Not able to migrate the data from Parse to local machine

As some of you might be aware, the Parse service is shutting down in about a year, so I am following the migration process as per their tutorials. However, I am not able to migrate the data from Parse to a local database (i.e. MongoDB).
I've started the MongoDB instance locally on 27017, and also created an admin user as part of the migration, based on these tutorials: Reference-1 & Reference-2.
But when I try to migrate the data from the Parse developer console, I get "No Reachable Servers" or a network error, and I don't understand why. I suspect the connection string I am using for this, but I am not sure.
I am new to MongoDB so I don't have much idea about this; any help would be greatly appreciated.
Since the migration tool runs at parse.com, the tool needs to be able to access your MongoDB instance over the Internet.
Since you're using a local IP (192.168.1.101), parse.com cannot connect to your IP and the transfer will time out.
Either you need to make your MongoDB reachable from the Internet, or you can - as they do in their guide - use an existing MongoDB service.
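For reference, the migration tool expects a connection string that points at a publicly reachable host, along these lines (hostname, credentials, and database name are placeholders):

mongodb://migrationUser:password@your-public-hostname.example.com:27017/yourdb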

Talend, MongoDB connection

I am facing a problem with a MongoDB connection.
I have successfully imported the tMongo components into my Talend Open Studio 5.1.1 and, by copying the mongo-1.3.jar file to the lib/java folder, my MongoDB jobs run successfully. The problem is that even if I provide a fake server path (IP) and a fake port for MongoDB, my job runs without an error and gives me one row with no data. The same goes for the right IP and port.
How do I resolve this?
I think the connection is not working. As you may know, MongoDB only checks whether the connection actually works when you perform a query on it.
(Yes, it doesn't check for a successful connection when you just connect to it.)
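The same lazy behaviour is easy to see outside Talend, for example with the MongoDB Node.js driver (a sketch; the host is deliberately fake). Constructing the client never contacts the server; the error only surfaces on the first real operation:

// Sketch: creating the client object succeeds even for a fake host...
const { MongoClient } = require('mongodb');

async function main() {
  const client = new MongoClient('mongodb://fake-host:27017', {
    serverSelectionTimeoutMS: 2000 // fail fast for this demo
  });
  try {
    // ...the connection is only validated here, at the first real operation
    await client.connect();
    await client.db('test').collection('demo').findOne();
  } catch (err) {
    console.error('Failure surfaces only now:', err.message);
  } finally {
    await client.close();
  }
}

main();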
I would suggest instead adding the MongoDB components provided in Talend for Big Data by following the steps below.
The components provided for MongoDB are tMongoDBInput, tMongoDBOutput, tMongoDBConnection, etc.
Alternatively, you can download the components from http://www.talendforge.org/exchange/ and search for Mongo instead of using Talend Big Data, but I would suggest using Talend for Big Data for this.
1. The components come in zipped format; unzip them. In Talend Big Data you will find the components in the Component folder.
2. Copy these unzipped components to the installation path of TOS:
C:\Talend\TOS_DI-Win32-r84309-V5.1.1\plugins\org.talend.designer.components.localprovider_5.1.1.r84309\components
3. Copy the mongo-1.3.jar file from the component folder into:
C:\Talend\TOS_DI-Win32-r84309-V5.1.1\lib\java
On many systems you might not be able to see this folder; in that case, proceed with ADMINISTRATOR privileges.
4. (Optional, needed only on a few systems) Add the corresponding entry inside index.xml and save index.xml.
5. Restart TOS.
Then you will be able to use them as normal components.
Cheers!
The reason for the job running without any error could be the connection/metadata you have used for the Mongo connector. It should not be possible for the job to run without any error after giving a fake path.
I guess you might have configured (re-modified) the repository connection, while the component is still using built-in metadata.