Can't retrieve MongoDB to local drive using SCP from AWS EC2

I have a Docker container running Strapi (which used MongoDB) on a now defunct AWS EC2 instance. I need the content off that server - it can't run because it's too full. So I've tried to retrieve all the files using SCP, which worked a treat apart from downloading the database content (the actual stuff I need - Strapi and Docker boot up fine, but because there is no database content, it treats it as a new instance).
Every time I try to download the contents of the db from AWS I get 'permission denied'.
I'm using SCP something like this:
scp -i /directory/to/***.pem -r user@ec2-xx-xx-xxx-xxx.compute-1.amazonaws.com:strapi-docker/* /your/local/directory/files/to/download
Does anyone know how I can get this entire Docker container running locally with the database content?

You can temporarily change permissions (recursively) on the directory in question to be world-readable using chmod.
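For example, a minimal sketch run on the EC2 instance (assuming the database files live under strapi-docker/db - adjust the path to wherever your data volume actually is):
sudo chmod -R a+r strapi-docker/db
sudo find strapi-docker/db -type d -exec chmod a+rx {} +
The second command adds the execute bit to directories so they can be traversed; after that the scp command above should be able to copy the db directory, and you can restore the original permissions afterwards.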

Related

Postgres docker container external access issue when using a bind mounted data directory with pre-existing database

I am trying to use the data directory from a pre-existing database & bring up a new Postgres Docker container (same version, 9.5) with its '/var/lib/postgresql/data' bind mounted to that data directory.
I find that even though I am able to bring up the container & use psql within the container to connect to it, external connections fail with an invalid password. This is despite me setting the POSTGRES_USER, POSTGRES_DB & POSTGRES_PASSWORD environment variables.
Is there a way to make this work? I also tried this method but ran into a permission error:
"sh: 1: /usr/local/bin/init.sh: Permission denied"
Thanks
This happens when your user/group ID does not match the file owner. You should run your Docker container with --user.
Please have a look at the 'Arbitrary --user Notes' section of https://hub.docker.com/_/postgres.
Hope that will help you fix your problem.
For Compose, look at https://docs.docker.com/compose/reference/run/.
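A minimal sketch of what that might look like (the host data path is a placeholder):
docker run --user "$(id -u):$(id -g)" -v /path/to/existing/data:/var/lib/postgresql/data -p 5432:5432 postgres:9.5
With Compose you can pass the same value with docker-compose run --user "$(id -u):$(id -g)" db, or set a user: entry on the service.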
OK, I figured out the way to do this & it turns out to be very simple. All I did was:
Add a script & copy it into docker-entrypoint-initdb.d in my Dockerfile.
In the script I had a loop that waited for the db to be up & running before resetting the superuser password & privileges.
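A hypothetical sketch of such a script (the reset statement is a placeholder for whatever your setup needs):
#!/bin/sh
until pg_isready -h localhost -U "$POSTGRES_USER"; do
  echo "waiting for postgres..."
  sleep 1
done
psql -v ON_ERROR_STOP=1 -U "$POSTGRES_USER" -d "$POSTGRES_DB" -c "ALTER USER \"$POSTGRES_USER\" WITH PASSWORD '$POSTGRES_PASSWORD';"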
Thanks

How do I SSH from a Docker container to a remote server

I am building a Docker image off the postgres image, and I would like to seed it with some data.
I am following the initialization-scripts section of the documentation.
But the problem I am facing now is that my initialisation script needs to ssh to a remote database and dump data from there. Basically something like this:
ssh remote.host "pg_dump -U user -d somedb" > some.sql
but this fails with the error ssh: command not found
Question now is, in general, how do I ssh from a docker container to a remote server. In this case, specifically how do I ssh from a docker container to a remote database server as part of the initialisation step of seeding a postgres database?
As a general rule you don't do things this way. Typical Docker images contain only the server they're running and some core tools, but network clients like ssh or curl generally aren't part of this. In the particular case of ssh, securely managing the credentials required is also tricky (not impossible, but not obvious).
In your particular case, I might rearrange things so that your scripts didn't have the hard assumption the database was running locally. Provision an empty database container, then run your script from the host targeting that empty database. It may even work to set the PGHOST and PGPORT environment variables to point to your host machine's host name and the port you publish the database interface on, and then run that script unmodified.
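A rough sketch of that arrangement, assuming the container publishes Postgres on port 5432 (the names and credentials are placeholders):
docker run -d --name seed-db -p 5432:5432 -e POSTGRES_PASSWORD=secret postgres
ssh remote.host "pg_dump -U user -d somedb" > some.sql
PGHOST=localhost PGPORT=5432 PGPASSWORD=secret psql -U postgres -f some.sql
Here ssh and psql run on the host, where those clients are installed, and only the database itself lives in the container.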
Looking closer at that specific command, you also may find it better to set up a cron job to run that specific database dump and put the contents somewhere. Then a developer can get a snapshot of the data without having to make a connection to the live database server, and you can limit the number of people who will have access. Once you have this dump file, you can use the /docker-entrypoint-initdb.d mechanism to cause it to be loaded at first startup time.
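For example (a sketch; the seed directory name is arbitrary):
ssh remote.host "pg_dump -U user -d somedb" > seed/some.sql
docker run -d -e POSTGRES_PASSWORD=secret -v "$PWD/seed:/docker-entrypoint-initdb.d:ro" postgres
The official postgres image runs any *.sql files it finds in /docker-entrypoint-initdb.d the first time it initialises an empty data directory.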

How to use Docker with mongo to achieve replication and enable authentication

I want to use Docker to run MongoDB, and at the same time have mongod use my own configuration file so I can achieve replication and enable authentication.
I have scanned some files but that hasn't resolved the problem.
Any ideas?
The Docker mongo image has a docker-entrypoint.sh that it calls in its Dockerfile.
Check if you can:
create your own image which would create the right user and restart mongo with authentication on: see "umputun/mongo-auth" and its init.sh script
or mount a createUser.js script in docker-entrypoint-initdb.d.
See "how to make a mongo docker container with auth"

Importing data from an external drive to MongoDB hosted on Google Compute Engine

I deployed MongoDB on Google Cloud. I am having trouble importing data now. I have data in JSON format on my hard drive, and would like to import it to the database. I tried multiple ways that didn't work:
directly specifying the location of the file
saving the file in a Google storage bucket.
These are the commands I ran:
mongoimport -d test -c trialcollection -f /mongobucket/trial.json
mongoimport -d test -c trialcollection /mongobucket/trial.json
mongoimport -d test -c trialcollection -f C:/desktop/mongo/trial.json
How do I get data to import into Mongo hosted on the Google compute engine?
It sounds like you have the JSON files on your local computer and you need to mongoimport them into your remote GCE MongoDB instance. The best way to do that is to copy the files that you need over to your GCE instance.
If you haven't already, you should install the Google Cloud SDK on your local system. After you've installed that, you should be able to use the gcloud compute copy-files command to copy the files from your local system to your GCE instance. This command essentially works like scp.
From there you can use gcloud compute ssh to connect to your instance and then run the mongoimport command locally on your GCE instance.
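A sketch of the full sequence (the instance name, zone and paths are placeholders):
gcloud compute copy-files C:/desktop/mongo/trial.json my-instance:~/trial.json --zone us-central1-a
gcloud compute ssh my-instance --zone us-central1-a
mongoimport -d test -c trialcollection --file ~/trial.json
Note that mongoimport's option for the input file is --file; -f is short for --fields.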

Click to deploy MEAN Stack on Google Compute Engine Clone Repo Locally

On Compute Engine, using the click-to-deploy option for MEAN, how can we clone the repo of the sample app it locally creates so that we can start editing and pushing changes?
I tried gcloud init my-project, however all it seems to do is initialize an empty repo. And indeed, when I go to the "source code" section for that project, there is nothing there.
How do I get the source code for this particular instance, setup a repo locally for it and then deploy changes to the same instance? Any help would be greatly appreciated.
OK, well I have made some progress. Once you click-to-deploy, GCE will present you with a command to access your MEAN stack application through an SSH tunnel.
It will look something like this:
gcloud compute ssh --ssh-flag=-L3000:localhost:3000 --project=project-id --zone us-central1-f instance-name
You can change the port numbers as long as your firewall rules allow that specific port.
https://console.developers.google.com/project/your-project-id/firewalls/list
Once you SSH in, you will see the target directory, named the same as whatever you told mean-io to use as the application name when you ran mean init.
I first made a copy of this folder (mine was named "flow") with cp -r flow flow-bck, and then I removed some unnecessary directories with:
cd flow-bck && rm -rf node_modules bower_components .bower* .git
All of this is to set up copying that folder to my local machine using gcloud compute copy-files, which is available after installing the Google Cloud SDK.
On my local machine, I ran the following:
gcloud compute copy-files my-instance-name:/remote/path/to/flow-bck /local/path/to/destination --zone the-instance-region
Above, 'my-instance-name', '/remote/path/to', '/local/path/to', and 'the-instance-region' obviously need to be changed to your deployment's info, etc.
This copied all the files from the remote instance to a folder called flow-bck on my local machine, at the defined local path. I renamed this folder to match its name on the remote, flow, and then did:
cd flow && npm install
This installed all the needed modules and dependencies for MEAN.io. Now the important part is that you have to kill your remote SSH connection so that you can start running the local version of the app, because the SSH tunnel will already be using that same port (3000), unless you changed it when you tunnelled in.
Then, in my local app directory flow, I ran gulp to start the local version of the app on port 3000. It loads up and runs just fine. I needed to create a new user, as it's obviously not the same database.
Also, I know this is basic stuff, but not too long ago I would have forgotten to start the mongodb process by running mongod beforehand. In any case, mongo must be running before you can start the app locally.
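In short, the local startup sequence described above is roughly (the data path is a placeholder):
mongod --dbpath /local/path/to/data &
cd flow && gulp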
Now the two things I haven't done yet are editing and deploying a new version based on this... and there's the nagging question of whether this is all even necessary. It'd be great to find that this can all be done with a few simple commands.