Locally I have no problems, but on the remote server where I want to run the routine, it is not possible.
I also specified a connection string locally to run the routine on the remote server, but that does not work either.
I have a cloud service based on Firebird databases. Every customer has his own database file, so many connection definitions are loaded into my service at startup. This all works well.
Currently the load on the server is fine, so the database files are on the same machine as the service itself. Later I could scale out to another server.
My question is:
Does it matter whether I use a local Firebird connection, or should I prefer a remote connection (via TCP/IP), even though I am on the same machine?
Are there advantages/disadvantages or any limits? This server gets a lot of requests.
I am using Firebird 2.5.7 (64Bit).
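For illustration, the two connection styles being compared differ only in the connection string: a local connection uses the bare database file path, while a TCP/IP loopback connection prefixes the host name. A sketch using Firebird's isql tool (the path and credentials are placeholders):

```shell
# Local connection: same-machine access via the bare file path
isql -user SYSDBA -password masterkey /data/firebird/customer1.fdb

# TCP/IP loopback connection: goes through the network listener (default port 3050)
isql -user SYSDBA -password masterkey localhost:/data/firebird/customer1.fdb
```

The same distinction applies to connection strings used by drivers: with a host prefix, traffic passes through the Firebird server's TCP listener even on the local machine.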
I am new to Postgres and wanted to understand: if I set up a remote Postgres server and use pgAdmin to connect to it, will it use local resources to run the queries?
I tried it on a user machine here and it seems to be the case. I would like the submitted queries to run on the server itself and not consume local resources. Any suggestions would be helpful.
Thanks
Saurabh
No, the query runs on the server. However, downloading and displaying the results can take some time and can consume resources on the client, depending on the size of the result set.
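One way to convince yourself of this is to watch the server side while a long-running query executes. A sketch using psql (host, user, and database names are placeholders):

```shell
# Session 1: run a deliberately slow query from the client machine
psql -h dbserver.example.com -U app -d appdb -c "SELECT pg_sleep(30);"

# Session 2, meanwhile: the query appears in pg_stat_activity, i.e. the
# backend process doing the work is a server-side process
psql -h dbserver.example.com -U app -d appdb \
  -c "SELECT pid, state, query FROM pg_stat_activity WHERE state = 'active';"
```

The client only holds the connection and renders the result; the CPU and I/O for the query itself happen in the server's backend process.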
I have a master and a slave database with streaming replication. From my local machine, I can connect to the master via psql, describe tables, and everything works fine. However, from my web server, I can connect to the master fine, but when I run \d, it freezes. I can't cancel the command; it just hangs, and I have to force-quit the SSH session to exit. The same thing happens when I try tab completion.
I can read from the standard postgres views, such as pg_stat_activity.
I can connect and read data just fine from the slave, on both local and the web server.
I'm using the same user and database on local and web, and pg_hba.conf allows that user/database combo from anywhere.
Any ideas?
I have my Oracle server installed on a remote machine, and I want a script on my local machine that checks whether the Oracle server is up and running. I know this can be checked by creating a connection through sqlplus or JDBC, but in this case no Oracle client will be present, and I am saving the JDBC approach as my last option. Is there any other simple way to check this that can easily be implemented in a shell script?
Thanks
Not really. The only way to be certain that the database is responding to queries is to run a query on it, such as the venerable:
select dummy from dual
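If no Oracle client is available at all, the closest shell-only approximation is a TCP check against the listener port. Note this only proves the listener is reachable, not that the instance is open and answering queries, which is exactly the answer's point. A sketch assuming the default listener port 1521 (host name is a placeholder):

```shell
# port_open HOST PORT -> exit status 0 if a TCP connection succeeds within 3s.
# Uses bash's /dev/tcp pseudo-device; this is a bash feature, hence "bash -c".
port_open() {
    timeout 3 bash -c "exec 3<>/dev/tcp/$1/$2" 2>/dev/null
}

# Weak liveness check: reachable listener != database open for queries,
# but it does catch a dead host or a stopped listener.
if port_open oradb.example.com 1521; then
    echo "listener reachable"
else
    echo "listener NOT reachable"
fi
```

For certainty that the database answers queries, there is no substitute for actually running one, as above.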
I am trying to use Phing to deploy a site to the server.
The command that should create the database or apply changes:
<pdosqlexec url="mysql:host=${db.host};dbname=${db.name}"
userid="${db.user}"
password="${db.pass}"
src="${project.basedir}/deploy/mysqlbuiltscripts/create_database.sql"/>
It works fine on my local machine, but I need to apply the changes on the server too.
The main problem: I only have access to the server database via SSH.
Question: how can I execute this command through an SSH tunnel?
P.S. I tried using <ssh username="${username}" password="${password}" host="${host}" command="${myMysqlCommand}">, but it does not suit me because it does not write changes to Phing's "changelog" table.
Are you using dbdeployTask? If you are generating a delta for the remote server, then your file should have the changelog present.
If you don't have access to your remote server, you may need to do the dbdeploy work on the remote server directly or tunnel your requests through ssh.
My dbdeploy steps are:
Run phing -> dbdeploy task
Get a delta sql
With mysql, run the delta sql script on the remote server
Enjoy
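For the tunnelling part of the steps above, a common pattern is to forward a local port to MySQL on the remote host over SSH and point the delta run at that local port. A sketch with placeholder host names and credentials:

```shell
# Forward local port 3307 to MySQL (3306) on the remote server, in the background
ssh -f -N -L 3307:127.0.0.1:3306 deploy@dbserver.example.com

# Run the generated delta through the tunnel; 127.0.0.1:3307 now reaches the
# remote MySQL, so the changelog table is updated on the server itself
mysql -h 127.0.0.1 -P 3307 -u dbuser -p dbname < deploy/delta.sql
```

With the tunnel up, the pdosqlexec url can likewise point at the tunnel (host=127.0.0.1;port=3307), so Phing writes to the remote changelog table directly.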