IBM Bluemix: Bulk load data into MongoDB

I have created a MongoDB service in Bluemix and I can successfully access it in an app deployed on Bluemix. I can create data in the MongoDB instance programmatically through my app, but what I want to do is load data into MongoDB from my laptop.
I am not able to ping the MongoDB web address from my laptop, so I cannot connect to it from a standalone Java program.
What is the way ahead to bulk load data into MongoDB on Bluemix?

mongodb: You cannot connect to this experimental service from outside of Bluemix. If you want to use your standalone Java program to interact with this service, consider pushing it to Bluemix as another application:
cf push mystandaloneapp -p standalone.jar --no-route
Then, bind the same mongodb instance to this application. When you restage the application, it should get the credentials in the VCAP_SERVICES environment variable.
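A hedged sketch of that bind-and-restage step from the cf CLI (the service instance name mymongodb is a placeholder for whatever your Bluemix dashboard shows):
cf bind-service mystandaloneapp mymongodb
cf restage mystandaloneapp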
mongolab: Assuming you created the mongolab service, from your Bluemix Dashboard, find and click on your MongoLab instance. From there, launch the MongoLab Dashboard. Click on your deployment (IbmCloud_***). You should see instructions on how to connect to mongo from shell, as well as import/export commands.
mongoimport -h ds049570.mongolab.com:49570 -d IbmCloud_ee4rm8hq_ecl23uf8 -c <collection> -u <user> -p <password> --file <input file>
You should also be able to connect to this from your Java program.
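For reference, connecting from the mongo shell with the same host, database, and credentials as the import command above would look something like this:
mongo ds049570.mongolab.com:49570/IbmCloud_ee4rm8hq_ecl23uf8 -u <user> -p <password>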
Finally, check out the MongoDB by Compose service, which is an IBM-provided MongoDB service with a dashboard.

Related

How to see/install pg_activity for the Crunchy Data Postgres Operator?

I have set up Rancher (RKE, Kubernetes) for my application.
The application uses Postgres, so I set up the Crunchy Data Postgres Operator and created a Postgres cluster with it.
Everything is fine, but now I want to see the pg_activity for my PostgreSQL.
How can I see the activity of the whole Postgres cluster?
You can use the monitoring tools in Rancher to monitor Postgres.
Apart from that, you can open a shell inside the respective database pod and run a CLI command to check the output, as in the sketch below.
In Rancher, you can also use a client tool to connect to the cluster and run CLI commands to check pg_activity.
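A minimal sketch of the in-pod approach, assuming you have kubectl access to the cluster (the pgo namespace and the pod name are placeholders for your deployment):
kubectl -n pgo exec -it <postgres-pod-name> -- psql -U postgres -c "SELECT pid, usename, state, query FROM pg_stat_activity;"
pg_activity itself is an interactive top-like tool, so if it is installed in the pod you could run it the same way in place of the psql command.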
Client Docker image: https://hub.docker.com/r/jbergknoff/postgresql-client/
You can also deploy a GUI Docker client on Rancher and use it.
GUI Postgres client: https://hub.docker.com/r/dpage/pgadmin4/
GUI example: https://dataedo.com/kb/query/postgresql/list-database-sessions#:~:text=Using%20pgAdmin,all%20connected%20sessions%20(3).

Use Hasura with Google Cloud Run and Google Cloud SQL

The docs describe that Hasura needs the Postgres connection string in the HASURA_GRAPHQL_DATABASE_URL env var.
Example:
docker run -d -p 8080:8080 \
-e HASURA_GRAPHQL_DATABASE_URL=postgres://username:password@hostname:port/dbname \
hasura/graphql-engine:latest
It looks like my problem is that the instance connection name for Google Cloud SQL has the form PROJECT_ID:REGION:INSTANCE_ID and is not a TCP hostname.
From the Cloud Run docs (https://cloud.google.com/sql/docs/postgres/connect-run) I got this example:
postgres://<db_user>:<db_pass>@/<db_name>?unix_sock=/cloudsql/<cloud_sql_instance_name>/.s.PGSQL.5432
but it does not seem to work. Ideas?
I'm currently adding the cloud_sql_proxy to the container as a workaround so that I can connect to TCP 127.0.0.1:5432, but I'm looking for a direct connection to Google Cloud SQL.
// EDIT: Thanks for the comments; beta8 mostly did the trick, but I had also missed the --set-cloudsql-instances parameter: https://cloud.google.com/sdk/gcloud/reference/beta/run/deploy#--set-cloudsql-instances
My full cloud-run command:
gcloud beta run deploy \
--image gcr.io/<PROJECT_ID>/graphql-server:latest \
--region <CLOUD_RUN_REGION> \
--platform managed \
--set-env-vars HASURA_GRAPHQL_DATABASE_URL="postgres://<DB_USER>:<DB_PASS>@/<DB_NAME>?host=/cloudsql/<PROJECT_ID>:<CLOUD_SQL_REGION>:<INSTANCE_ID>" \
--timeout 900 \
--set-cloudsql-instances <PROJECT_ID>:<CLOUD_SQL_REGION>:<INSTANCE_ID>
As of v1.0.0-beta.8, which has better support for Postgres connection string parameters, I've managed to get the Unix socket connection working from Cloud Run to Cloud SQL without embedding the proxy in the container.
The connection should look something like this:
postgres://<user>:<password>@/<database>?host=/cloudsql/<instance_name>
Notice that the client will add the suffix /.s.PGSQL.5432 for you.
Make sure you have also added the Cloud SQL Client permission.
If the Hasura database requires that exact connection string format, you can use it. However, you cannot use Cloud Run's Cloud SQL support. You will need to whitelist the entire Internet so that your Cloud Run instance can connect. Cloud Run does not publish a CIDR block of addresses. This method is not recommended.
The Unix socket method is for the Cloud SQL Proxy that Cloud Run supports. This is the connection method used internally to your container when Cloud Run is managing the connection to Cloud SQL. Note that for this method, IP-based hostnames are not supported when your client connects to Cloud Run's Cloud SQL Proxy.
You can embed the Cloud SQL Proxy directly in your container. Then you can use 127.0.0.1 as the hostname part of the connection string. This will require a shell script as your Cloud Run entrypoint that launches both the proxy and your application (a sketch follows below). Based on your scenario, I recommend this method.
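A minimal entrypoint sketch, assuming the proxy binary was copied to /cloud_sql_proxy in the image and that INSTANCE_CONNECTION_NAME is provided as an environment variable (both names are assumptions, not requirements):
#!/bin/sh
# start the proxy in the background, listening on 127.0.0.1:5432
/cloud_sql_proxy -instances=${INSTANCE_CONNECTION_NAME}=tcp:5432 &
# then run the application in the foreground
exec graphql-engine serve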
The Cloud SQL Proxy is written in Go and the source code is published.
If you choose to embed the proxy, don't forget to add the Cloud SQL Client role to the Cloud Run service account.
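Granting that role from the CLI would look something like this (the project ID and service account email are placeholders for your own values):
gcloud projects add-iam-policy-binding <PROJECT_ID> \
  --member serviceAccount:<SERVICE_ACCOUNT_EMAIL> \
  --role roles/cloudsql.client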

IBM Cloud and Database for MongoDB

I'd like to know how to get the connection string for my MongoDB database to use with Mongoose. I was looking around, but the information is old, and when I thought I had found good documentation (this is the link), well, it doesn't work: the command doesn't exist. This is the command:
ibmcloud cdb deployment-connections example-mongo -u admin
Specifically, the cdb part is not recognized.
I hope someone can help me, please.
You can get the connection string for Databases for MongoDB using the ibmcloud cdb plugin, but it must be installed first. You can install it using the command:
ibmcloud plugin install cloud-databases
Then you can start using the cdb plugin. After that, you can get your MongoDB connection strings with:
ibmcloud cdb cxn <name of mongo deployment>
For the CA cert you'll need to connect, the plugin will decode it for you with:
ibmcloud cdb cacert <name of mongo deployment>
You'll also be able to change the admin password as well.
Or, you can go to your IBM Cloud dashboard, click on your Databases for MongoDB instance, select "Service credentials" on the left panel of the MongoDB management panel, and then create a service credential. The credential will give you a username, a password, the CA cert (encoded), and connection strings as well.
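If you prefer the CLI for that last step, creating and then viewing a credential would look something like this (the key name mongo-creds is a placeholder, and flags can vary between CLI versions, so treat this as a sketch):
ibmcloud resource service-key-create mongo-creds Administrator --instance-name <name of mongo deployment>
ibmcloud resource service-key mongo-creds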

How can I connect a NodeJS app to a MongoDB running in a Docker container on AWS?

I am attempting to deploy my first MEAN stack application ('weatherapp') to production on AWS.
I deployed my NodeJS/Express/Angular app to AWS Elastic Beanstalk (preconfigured Linux machine running Node). This works fine and I can view the app in the browser.
Separately I created a docker container running MongoDB and deployed it to AWS / EC2 following the steps in this post:
https://blog.codeship.com/running-mean-web-application-docker-containers-aws/
My question is - how do I connect the two?
In my NodeJS app I was connecting to my local Mongo instance locally like this:
'mongodb://localhost:27017/weatherapp'
What steps can I take to find out what the connection string should be for my production Mongo instance on docker?
Thanks in advance!
The answer to this is two-fold. We need to set some options on the Docker side in the EC2 instance and then some security groups and configuration on the AWS side. First, we'll start on the Docker container side.
Container
When you run the MongoDB container, you will want to do two things:
Persist the data to disk.
Open the MongoDB port to the container.
To persist the data to disk you will want to do something like -v /data/db:/data/db. This will make the MongoDB data available at /data/db on the host. This makes sure that an accidental deletion or upgrade of the container doesn't lose any data.
Next, we need to publish the MongoDB port so that applications external of Docker can connect to it. The default MongoDB port is 27017 so let's publish that using -p 27017:27017.
If your original command for starting MongoDB was:
docker run --name mymongodb -d mongo
Then the new one would be:
docker run --name mymongodb -d -p 27017:27017 -v /data/db:/data/db mongo
AWS
Now, we need to edit the security group of your EC2 instance and configuration of Elastic Beanstalk.
Security Groups
First, take a look at your Security Groups in the EC2 console. You will have a group for the Elastic Beanstalk application named similar to awseb-e-xanf9hqrw3-stack-AWSEBSecurityGroup-1N2T1AI2H05I8 with an ID similar to sg-07fb8c43. We'll use this ID in the next step, so copy it somewhere.
Now find the Security Group attached to your EC2 instance running the Docker container. You will need to add a new rule to this group allowing access to the MongoDB container. Edit the group and add a new inbound rule for:
Type: Custom TCP
Protocol: TCP
Port range: 27017
Source: sg-07fb8c43
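The same rule can be added from the AWS CLI if you prefer (the Docker instance's security group ID is a placeholder):
aws ec2 authorize-security-group-ingress \
  --group-id <docker-instance-sg-id> \
  --protocol tcp \
  --port 27017 \
  --source-group sg-07fb8c43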
This will allow the Elastic Beanstalk EC2 instances (using sg-07fb8c43) to access the MongoDB port on your Docker EC2 instance.
Elastic IP
You'll likely want a more static IP address for your EC2 instance in case it reboots. Navigate to the Elastic IPs section of the EC2 console and allocate a new address to your Docker EC2 instance.
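From the AWS CLI, allocating and attaching the address would look roughly like this (the instance ID comes from your own account, and the allocation ID is returned by the first command):
aws ec2 allocate-address --domain vpc
aws ec2 associate-address --instance-id <docker-instance-id> --allocation-id <allocation-id-from-previous-step>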
The new Elastic IP will be the address you use in your Elastic Beanstalk configuration to connect to MongoDB. If your address was 54.67.29.50 then your application would connect to mongodb://54.67.29.50:27017.
Elastic Beanstalk
Now, instead of hardcoding this address in your Node.js application, you should configure your application to pull the information from an environment variable. In your application, read the MongoDB URL from something like process.env.MONGO_URI. Then, in your Elastic Beanstalk application configuration, navigate to the Software Configuration and then down to Environment Properties. Here, create a property named MONGO_URI with the value mongodb://54.67.29.50:27017. This will allow you to easily change the MongoDB instance should it ever change, or if you launch multiple environments with different databases.
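If you use the EB CLI, the same property can be set from the terminal; a sketch, assuming your app expects the database name (weatherapp) in the URI as it did locally:
eb setenv MONGO_URI=mongodb://54.67.29.50:27017/weatherapp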

Install Chef Server 11 with AWS RDS

Now that AWS has a PostgreSQL service in RDS, I tried to install Chef Server 11 with a PostgreSQL RDS instance by editing attributes in /opt/chef-server/embedded/cookbooks/chef-server/attributes/default.rb
default['chef_server']['postgresql']['vip'] = "rds instance endpoint"
and importing the database with the following command:
/opt/chef-server/embedded/bin/psql -h "rds instance endpoint" -p 5432 -U "user_name" "database_name" < /opt/chef-server/embedded/service/erchef/lib/chef_db-f086a97/priv/pgsql_schema.sql
But I am not able to achieve that. chef-server-ctl reconfigure gives an error:
curl -sf http://127.0.0.1:8000/_status returned 7
Please help me to configure chef server with RDS instance.
I think I was able to solve my problem. It was because of the encrypted password in the erchef config file. I edited
/opt/chef-server/embedded/cookbooks/chef-server/templates/default/echef.config.rb
accordingly, and it seems to be working perfectly fine now.
Thanks
The chef-server-rds cookbook on github can be used to install Chef Server 11 with AWS RDS.
Given an IAM key and secret, it will provision the RDS instance if it doesn't exist in the account, initialize the Chef schema, install the appropriate platform-specific chef-server Omnibus package, and perform the initial configuration of Chef Server on an AWS EC2 Ubuntu instance.
Using Postgres on Amazon RDS offloads DB resource use away from the chef-server host. It also enables various DB functions like scaling, backup, and restore to be done independently of the chef-server installation. Similar configurations can be written for other DB service providers.