IntelliJ SSH to remote DB - PostgreSQL

I'm trying to figure out how I can connect to my DB via SSH using the IntelliJ "Database" feature. We are using OpenShift, and there is one pod running that serves as our only connection to the DB.
Example:
oc rsh pod_name
After that, the psql connection string is used to log in to our DB.
Can I somehow point IntelliJ to my script so I can connect, or is there a better way? The issue is that the pod name changes dynamically now and then. That's why I want to point to my script, which solves this by fuzzy-searching for the pod name.

Here is the documentation on how to connect using SSH. How does your script work? Could you point it to update the pod name in your OpenSSH config, for example, without establishing the SSH connection itself? Then you could use that config entry's alias in IntelliJ to create an SSH tunnel and attach it to the database connection.
Alternatively, in the data source properties, on the 'Options' tab, you can use the 'Before connection' configuration to add your script. The script can create an SSH tunnel on a static local port, and the database connection can then point at 'localhost:port'.
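As a rough illustration of the second option, a 'Before connection' script could resolve the current pod name and open a port-forward on a fixed local port. This is only a sketch with several assumptions that are not from the question: the pod name contains the fragment db-proxy, the pod itself listens on the PostgreSQL port 5432 (for example because it runs a proxy), and local port 15432 is free.
#!/bin/sh
# Hypothetical sketch: find the current pod by a name fragment and forward a local port to it.
POD=$(oc get pods -o name | grep db-proxy | head -n 1)
# Run the forward in the background so the 'Before connection' step can return.
oc port-forward "$POD" 15432:5432 &
The data source would then connect to localhost:15432.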

Related

How to set a remote connection to a Vagrant container using "Visual Studio Code Remote - SSH"?

I'm exploring the new set of extensions called the VS Code Remote Pack, and I want to connect to a Vagrant container using the Remote - Containers extension. On Windows 10, how could I do that?
I tried the extension, but it requires me to have Docker installed, so I suppose it only works with Docker containers. I wonder whether somebody has already managed to connect to a Vagrant box.
These are the docs for the extension: https://code.visualstudio.com/docs/remote/containers
VS Code Remote - Containers currently only supports Docker (its implementation executes docker commands). Please open a feature request if you would like to see other tools supported.
As an alternative, you could try using Remote - SSH to connect to Vagrant machines. That should work but will require some extra setup.
Sorry for updating this so late.
The solution was pretty simple. As #MnZrk commented, what needs to be done to set up the connection is the following:
Run vagrant ssh-config > some-file.txt. This will generate a file with the SSH configuration. Here is an example of that file:
Host default
HostName 127.0.0.1
User vagrant
Port 2222
UserKnownHostsFile /dev/null
StrictHostKeyChecking no
PasswordAuthentication no
IdentityFile C:/Users/User/project/.vagrant/machines/default/virtualbox/private_key
IdentitiesOnly yes
LogLevel FATAL
ForwardAgent yes
ForwardX11 yes
Notice that the host name is default; you can rename it to whatever you want so you can identify it more easily.
Copy the content of some-file.txt into your SSH configuration file. This file can be edited directly from VS Code by pressing F1 and typing Remote-SSH: Open Configuration File..., then selecting the file you use for SSH configuration. Once that file opens, just copy the content of some-file.txt there.
Finally, press F1 again and type Remote-SSH: Connect to Host..., choose the connection with the host name default (or the one you chose in the first step), and that's all.
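For example, if you renamed the host in the first step, only the first line of the block changes (the name below is arbitrary):
Host my-vagrant-box
HostName 127.0.0.1
User vagrant
Port 2222
IdentityFile C:/Users/User/project/.vagrant/machines/default/virtualbox/private_key
and my-vagrant-box is then the entry to pick from the Remote-SSH: Connect to Host... list.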

How do I SSH from a Docker container to a remote server

I am building a Docker image off the postgres image, and I would like to seed it with some data.
I am following the initialization-scripts section of the documentation.
But the problem I am facing now is that my initialisation script needs to ssh to a remote database server and dump data from there. Basically something like this:
ssh remote.host "pg_dump -U user -d somedb" > some.sql
but this fails with the error ssh: command not found.
The question now is, in general, how do I ssh from a Docker container to a remote server? Specifically, how do I ssh from a Docker container to a remote database server as part of the initialisation step of seeding a postgres database?
As a general rule you don't do things this way. Typical Docker images contain only the server they're running and some core tools, but network clients like ssh or curl generally aren't part of this. In the particular case of ssh, securely managing the credentials required is also tricky (not impossible, but not obvious).
In your particular case, I might rearrange things so that your scripts didn't have the hard assumption the database was running locally. Provision an empty database container, then run your script from the host targeting that empty database. It may even work to set the PGHOST and PGPORT environment variables to point to your host machine's host name and the port you publish the database interface on, and then run that script unmodified.
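A minimal sketch of that rearrangement, assuming the container's port is published on local port 5432 and using placeholder names (seeded-db, somedb, the password) rather than anything from the question:
# On the host, where ssh and the Postgres client tools are available:
ssh remote.host "pg_dump -U user -d somedb" > some.sql
# Start an empty Postgres container and publish its port on the host:
docker run -d --name seeded-db -e POSTGRES_PASSWORD=secret -p 5432:5432 postgres
# Point the client tools at the published port and load the dump
# (you may be prompted for the password, or set PGPASSWORD):
PGHOST=localhost PGPORT=5432 createdb -U postgres somedb
PGHOST=localhost PGPORT=5432 psql -U postgres -d somedb -f some.sql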
Looking closer at that specific command, you may also find it better to set up a cron job to run that specific database dump and put the contents somewhere. Then a developer can get a snapshot of the data without having to make a connection to the live database server, and you can limit the number of people who will have access. Once you have this dump file, you can use the /docker-entrypoint-initdb.d mechanism to cause it to be loaded at first startup time.
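A rough sketch of that pattern; the schedule, paths, and image are placeholders, not taken from the answer:
# crontab entry on a machine that can reach the live database: dump it nightly at 02:00
0 2 * * * ssh remote.host "pg_dump -U user -d somedb" > /var/backups/somedb.sql
# Dockerfile for the seeded image (the dump file is assumed to be in the build context);
# any .sql file in /docker-entrypoint-initdb.d/ is loaded on first startup:
FROM postgres
COPY somedb.sql /docker-entrypoint-initdb.d/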

Connection error in pgAdmin 4

I added a server and there is an error like this.
I am using pgAdmin 4.
How can I fix this?
Firstly, check if the server is listening.
Use the following command on your server:
netstat -nlt|grep :5432
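For reference, a line like the following means PostgreSQL is listening only on the loopback interface, which ties in with the listen_addresses setting discussed below (the exact columns may differ slightly between systems):
tcp        0      0 127.0.0.1:5432          0.0.0.0:*               LISTEN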
If that's OK, SSH to your server (or use whatever method you normally use to access it) and view the file:
/etc/postgresql/9.1/main/postgresql.conf
Find the line starting with:
listen_addresses=
If the value after "=" is 'localhost', it means that you cannot connect from outside. To be able to do so, change it to:
listen_addresses='*'
Now you will be able to connect from anywhere, not just from the server itself. And don't forget to restart your DBMS and check that it works.
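On a Debian/Ubuntu-style install (which matches the path above), the restart would typically be one of:
sudo service postgresql restart
sudo systemctl restart postgresql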
Ah, one more thing: you may also need to give your user access rights to your database.
Open the file:
/etc/postgresql/9.1/main/pg_hba.conf
And add:
host    all    all    0.0.0.0/0    md5
You also need to restart (or at least reload) your DBMS for the changes to take effect.
P.S. You should enable SSL.
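If you would rather not accept connections from every address, a narrower pg_hba.conf rule is possible; the network range below is only an example:
host    all    all    192.168.1.0/24    md5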

How to get MONGO_URL from command line Meteor Up deployment?

I am currently deploying to Digital Ocean using Meteor Up. If I don't specify a MONGO_URL in the mup.json, can I get the value from the command line while the website is running, i.e. without shutting down the site?
If I go to the app directory and run meteor mongo --url, I get the following error:
mongo: Meteor isn't running a local MongoDB server.
This command only works while Meteor is running your application
locally. Start your application first. (This error will also occur if
you asked Meteor to use a different MongoDB server with $MONGO_URL when
you ran your application.)
If you're trying to connect to the database of an app you deployed
with 'meteor deploy', specify your site's name with this command.
Even if I run the app from the app directory, it will only give the localhost MONGO_URL. I need the MONGO_URL for the deployed app.
I have also taken a look at a similar question as suggested by some of the answers. I disagree that it is "impossible" to get the MONGO_URL without some other program running on the server. It's not as if we are defying the laws of physics here, folks. Fundamentally, there should be a way to access it. Just because no one has yet figured it out doesn't mean it is impossible.
meteor mongo --url should return the URL.
Try opening another shell in the app directory and running that command.
Meteor Up packages your app in production mode with meteor build so that it runs via node rather than the meteor command line interface. Among other things, this means meteor foo won't work on the remote server (at least not by default). So what you're really looking for is a way to access mongo itself remotely.
I recently set up mongo on an AWS EC2 instance and listed some lessons learned here: https://stackoverflow.com/a/28846703/2669596. Some details of how you do it are going to be different on Digital Ocean, but these are the main things you have to take care of once mongo itself is installed:
Public IP/DNS Address: This is probably fine already since you can deploy to the server.
Port Security Rules: You need to make sure port 27017 is open for TCP access, at least from your IP address. MongoDB also has an http interface you can set up; if you want to use that you'll need to open 28017 as well.
/etc/mongod.conf (file location may differ depending on Linux flavor):
Uncomment port=27017 to make sure you have the default port (I don't think this is actually necessary, but it made me feel better and it's good to know where to change the default port...).
Comment out bind_ip=127.0.0.1 in order to listen to external interfaces (e.g. remote connections).
Uncomment httpinterface=true if you want to use the http interface.
You may have to restart the mongod service via sudo service mongod restart. That's a problem if you can't have downtime, but I don't know of a way around that if you change the config file.
Create User: You need to create an admin and/or user to access the database remotely.
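Pulling the /etc/mongod.conf changes above together, the relevant lines would end up looking roughly like this (older INI-style config assumed; YAML-style config files use different keys):
# listen on the default port
port = 27017
# commented out so mongod listens on external interfaces as well
# bind_ip = 127.0.0.1
# optional HTTP interface on port 28017
httpinterface = true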
Once you've done all of that, you should be able to access the database from your local machine (assuming you have the mongo client installed locally) by running
mongo server.url.com:27017/mup-app-name -u username -p
where server.url.com is the URL or IP address of your remote server, mup-app-name is the appName parameter from your mup.json file, username is the user you created to access the database, and you'll be prompted for that user's password after you run the command (or you could put it after -p on the same line, depending on the password).
There may also be a way to do this by setting up nginx to reverse-proxy 127.0.0.1:27017 on your remote server, but I've never done it and that's just me speculating.

Python fabric with EC2 instance

I am working on setting up an auto-deployment script using Python Fabric on an EC2 instance. We already have the code repositories cloned on the EC2 instance over HTTPS (without a user name, https://bitbucket.org/) instead of SSH.
Cloning the repositories over SSH would solve my problem for now. But I just wanted to know if the following is possible:
After connecting to the remote EC2 instance using Fabric, if my next command is hg clone, it asks for a user name and password. I have to type these two things manually at the command prompt.
Is there any way to pass these values automatically at run time?
Thanks!
You can use SSH keys and connect over Mercurial's ssh:// protocol. It will authenticate without a password once that's set up.
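A minimal sketch of that setup; the key path and repository URL below are placeholders, not taken from the question:
# On the EC2 instance, generate a key pair (no passphrase so it can run unattended):
ssh-keygen -t ed25519 -N "" -f ~/.ssh/bitbucket_key
# Add ~/.ssh/bitbucket_key.pub to the Bitbucket account's SSH keys
# (and, if needed, point ssh at the key via an entry in ~/.ssh/config), then:
hg clone ssh://hg@bitbucket.org/team/repo
With that in place, an hg clone run through Fabric no longer prompts for credentials.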