Python fabric with EC2 instance - deployment

I am working on setting up an auto-deployment script using Python Fabric on an EC2 instance. We already have the code repositories cloned on the EC2 instance over HTTPS (without a username, https://bitbucket.org/) instead of SSH.
Cloning the repositories over SSH would solve my problem for now, but I just wanted to know if the following is possible:
After connecting to the remote EC2 instance using Fabric, if my next command is hg clone, it asks for a username and password, and I have to type these two things manually at the prompt.
Is there any way to pass these values automatically at run time?
Thanks!

You can use SSH keys and clone over Mercurial's ssh:// protocol. It will authenticate without a password once the key is set up.
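A minimal sketch of that setup on the EC2 instance; the repository path is a placeholder, and the public key has to be added to the Bitbucket account first:
ssh-keygen -t rsa                 # skip if the instance already has a key pair
cat ~/.ssh/id_rsa.pub             # add this under Bitbucket's SSH keys settings
hg clone ssh://hg@bitbucket.org/youruser/yourrepo   # no credential prompt
If you'd rather keep HTTPS, Mercurial also accepts credentials embedded in the URL (hg clone https://user:password@bitbucket.org/youruser/yourrepo), which answers the run-time question literally, but it leaves the password in your shell history and Fabric's output.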

How do I connect via SSH to the master node on GCP?

I have a problem connecting to the master node on GCP. I want to create a directory on the master node, so I need to connect to it via SSH or some other method, but none of the pages I found in my browser worked for me. This is the situation I ran into:
[screenshot: the terminal on the GCP VM, showing the SSH command and a permission denied error]
The screenshot above shows the terminal where I entered the command on the GCP VM. When I tried to SSH into the master node, I got the permission error. Could anyone help me? I would appreciate it very much.
I think it's due to a wrong SSH key, or the default key's permissions being denied.
If that node is running in GCP itself, you can go to the instances section, where there is an option to SSH in directly instead of running the command from the browser shell.
Go to the instance list, find your instance, and click its SSH button; it will open the SSH session for you without needing a key.
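If you would rather stay on the command line, gcloud can create and manage the SSH key for you; a sketch, where the instance name and zone are placeholders:
gcloud compute ssh my-master-node --zone us-central1-a
If you are reusing an existing key and get a permission error, also check that the private key file has strict permissions, e.g. chmod 600 ~/.ssh/google_compute_engine (the default key file gcloud generates).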

IntelliJ SSH to remote DB

I'm trying to figure out how I can connect to my DB via SSH using the IntelliJ "Database" feature. We are using OpenShift, and there is one pod running, which is our only connection to our DB.
Example:
oc rsh pod_name
After that comes the psql connection string to log in to our DB.
Can I somehow point to my script so I can connect, or is there a better way? The issue is that the pod name changes dynamically now and then, which is why I want to point to my script: it solves the issue by fuzzy-searching for the pod name.
Here is the doc on how to connect using SSH. How does your script work? Can you point it to just change the pod name in your OpenSSH config, for example, without establishing the SSH connection itself? Then you simply use that config in IntelliJ, with the host alias, to create the SSH tunnel and use it with the database connection.
Or there is another way: in the data source properties, on the 'Options' tab, you can use the 'Before connection' configuration, where you can add your script; the script creates an SSH tunnel on some static local port, and the database connection is set to 'localhost:port' (see the sketch below).
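A sketch of such a 'Before connection' script, assuming the DB pod's name contains 'postgres' and PostgreSQL listens on 5432; adjust the grep pattern and ports to your cluster:
POD=$(oc get pods -o name | grep -m1 postgres)
oc port-forward "$POD" 5432:5432 &
With that running, the data source just points at localhost:5432.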

Click to deploy MEAN Stack on Google Compute Engine - Clone Repo Locally

On Compute Engine, using the click-to-deploy option for MEAN, how can we clone the repo of the sample app it creates so that we can start editing it locally and pushing changes?
I tried gcloud init my-project; however, all it seems to do is initialize an empty repo. And indeed, when I go to the "source code" section for that project, there is nothing there.
How do I get the source code for this particular instance, set up a repo locally for it, and then deploy changes to the same instance? Any help would be greatly appreciated.
OK, well, I have made some progress. Once you click to deploy, GCE will present you with a command to access your MEAN stack application through an SSH tunnel.
It will look something like this:
gcloud compute ssh --ssh-flag=-L3000:localhost:3000 --project=project-id --zone us-central1-f instance-name
You can change the port numbers as long as your firewall rules allow that specific port.
https://console.developers.google.com/project/your-project-id/firewalls/list
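If the port isn't open yet, the rule can also be created from the command line; a sketch, where the rule name is a placeholder:
gcloud compute firewall-rules create allow-mean-3000 --allow tcp:3000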
Once you SSH in, you will see the target directory, named whatever you told mean-io to use as the application name when you ran mean init.
I first made a copy of this folder (mine was named "flow") with cp -r flow flow-bck, and then removed some unnecessary directories:
cd flow-bck && rm -rf node_modules bower_components .bower* .git
All of this was to set up copying that folder to my local machine using gcloud compute copy-files, available after installing the Google Cloud SDK.
On my local machine, I ran the following:
gcloud compute copy-files my-instance-name:/remote/path/to/flow-bck /local/path/to/destination --zone the-instance-region
Above, 'my-instance-name', '/remote/path/to', '/local/path/to', and 'the-instance-region' obviously need to be changed to your deployment's info.
This copied all the files from the remote instance into a folder called flow-bck at the defined local path. I renamed this folder to match the remote name, flow, and then ran:
cd flow && npm install
This installed all the modules needed for MEAN.io. The important part here is that you have to kill your remote SSH connection so that you can run the local version of the app, because the SSH tunnel is already using that same port (3000), unless you changed it when you tunneled in.
Then, in my local app directory flow, I ran gulp to start the local version of the app on port 3000. It loaded up and ran just fine. I needed to create a new user, as it's obviously not the same database.
Also, I know this is basic stuff, but not too long ago I would have forgotten to start the mongodb process by running mongod beforehand. In any case, mongo must be running before you can start the app locally.
The two things I haven't done yet are editing and deploying a new version based on this... and then there's the nagging question of whether this is all even necessary. It would be great to find that this can all be done with a few simple commands.

How to get MONGO_URL from command line Meteor Up deployment?

I am currently deploying to Digital Ocean using Meteor Up. If I don't specify a MONGO_URL in mup.json, can I get the value from the command line while the website is running? I.e., I don't want to shut down the site.
If I go to the app directory and run meteor mongo --url, I get the following error:
mongo: Meteor isn't running a local MongoDB server.
This command only works while Meteor is running your application
locally. Start your application first. (This error will also occur if
you asked Meteor to use a different MongoDB server with $MONGO_URL when
you ran your application.)
If you're trying to connect to the database of an app you deployed
with 'meteor deploy', specify your site's name with this command.
Even if I run the app from the app directory, it will only give the localhost MONGO_URL. I need the MONGO_URL for the deployed app.
I have also taken a look at a similar question as suggested by some of the answers. I disagree that it is "impossible" to get the MONGO_URL without some other program running on the server. It's not as if we are defying the laws of physics here, folks. Fundamentally, there should be a way to access it. Just because no one has yet figured it out doesn't mean it is impossible.
meteor mongo --url should return the URL.
Try opening another shell in the app directory and running that command.
Meteor Up packages your app in production mode with meteor build so that it runs via node rather than the meteor command line interface. Among other things, this means meteor foo won't work on the remote server (at least not by default). So what you're really looking for is a way to access mongo itself remotely.
I recently set up mongo on an AWS EC2 instance and listed some lessons learned here: https://stackoverflow.com/a/28846703/2669596. Some details of how you do it are going to be different on Digital Ocean, but these are the main things you have to take care of once mongo itself is installed:
Public IP/DNS Address: This is probably fine already since you can deploy to the server.
Port Security Rules: You need to make sure port 27017 is open for TCP access, at least from your IP address. MongoDB also has an http interface you can set up; if you want to use that you'll need to open 28017 as well.
/etc/mongod.conf (file location may differ depending on Linux flavor):
Uncomment port=27017 to make sure you have the default port (I don't think this is actually necessary, but it made me feel better and it's good to know where to change the default port...).
Comment out bind_ip=127.0.0.1 in order to listen to external interfaces (e.g. remote connections).
Uncomment httpinterface=true if you want to use the http interface.
You may have to restart the mongod service via sudo service mongod restart. That's a problem if you can't have downtime, but I don't know of a way around it if you change the config file.
Create User: You need to create an admin and/or user to access the database remotely.
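For reference, a sketch of the relevant /etc/mongod.conf lines after those edits (auth = true is my addition to make the user actually enforced; option names can vary between MongoDB versions):
port = 27017
# bind_ip = 127.0.0.1
auth = true
httpinterface = true
And a sketch of creating the user from the mongo shell on the server, with placeholder credentials and the mup.json appName as the database:
use mup-app-name
db.createUser({ user: "username", pwd: "password", roles: [ "readWrite" ] })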
Once you've done all of that, you should be able to access the database from your local machine (assuming you have the mongo client installed locally) by running
mongo server.url.com:27017/mup-app-name -u username -p
where server.url.com is the URL or IP address of your remote server, mup-app-name is the appName parameter from your mup.json file, and username is the user you created to access the database. You'll be prompted for that user's password after you run the command (or you can put it after -p on the same line, depending on the password).
There may also be a way to do this by setting up nginx to reverse-proxy 127.0.0.1:27017 on your remote server, but I've never done it and that's just me speculating.

capistrano insisting on password

First, my teammate is successfully deploying on almost exactly the same setup, using the exact same deploy config as me. Therefore, this cannot be a deploy configuration issue; there is nothing local or unique to either of our machines.
Second, I can successfully log in from my machine using ssh user@server.com without a password prompt.
However, I have tried everything to stop Capistrano from asking this question:
--recursive; fi"
servers: ["myserver.com"]
Password:
* [deploy:update_code] rolling back
I have tried every single password I have, and also tried not entering a password. I don't even know what this password is for. Is it SSH? Because I don't even have a password-protected key file.
I'm totally lost, and I've been debugging this for 5 hours now without a single change in status. I'd really appreciate some help on how I can find out what the problem is.
Note: cap deploy simply works for my teammate using the same config and the same server. Everything is the same except for a different key file (and note that mine works, tested via the ssh command).
Do you have to specify user@server.com to SSH to your server successfully (i.e., do you have a different username on your remote server from the one on your local machine)?
You might just need to tell Capistrano what username it should be using to connect with by adding it to your deploy.rb:
set :user, "your-username"
You could also change the default username SSH will pick for that server by using ~/.ssh/config:
Host your.server.name
User your-username
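Since you mentioned using a different key file than your teammate, you can also pin the key in that same entry; a sketch, where the key path is a placeholder:
Host your.server.name
User your-username
IdentityFile ~/.ssh/your_deploy_key
Capistrano 2 can likewise be pointed at a specific key from deploy.rb with ssh_options[:keys] = ["~/.ssh/your_deploy_key"].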