All,
I'm running a query on the target server that retrieves data from a source server. My query is using the digest function. Digest is on both servers. It is embedded in a UDF that is also present on both servers. BTW, the "select" portion of the query runs perfectly on the source server.
I would think that when you submit a remote query it executes on the remote box, yet I am receiving a "function digest(text, unknown) does not exist ..." error. Also, since all the functions are in the public schema on both servers, I don't see how Postgres is failing to find the function.
Any help appreciated.
TIA,
Mike
Queries are executed on the server specified in the connection string (on localhost if no host was given), using the role from the connection string. That also means the search_path of that role applies.
Unrelated to your question, but I would try two things while debugging the issue you described (there is a sketch after this list):
1) Connect to the remote server using the same role as in the dblink connection string and execute the query there.
2) Schema-qualify the function inside the dblink query: public.digest(..)
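Here is a minimal sketch of the second suggestion, assuming psycopg2 and placeholder connection details, table, and column names (target-host, sourcedb, events, payload, etc. are illustrative, not from the original post):

import psycopg2

# Connect to the target (local) server; host/dbname/user are placeholders.
conn = psycopg2.connect("host=target-host dbname=targetdb user=app_user")
cur = conn.cursor()

# Schema-qualify digest() in the SQL text sent to the source server, so the
# remote role's search_path no longer matters.
cur.execute("""
    SELECT *
    FROM dblink(
        'host=source-host dbname=sourcedb user=app_user password=secret',
        'SELECT id, public.digest(payload, ''sha256'') FROM public.events'
    ) AS t(id integer, payload_hash bytea)
""")
for row in cur.fetchall():
    print(row)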
I'm trying to connect to a Google Cloud SQL instance using dblink, which works well when I set my username and password in the connection string, but I would like to store my client credentials in the SQL instance so that I don't have to put my password explicitly in the connection string.
The .pgpass file that will be used is the one belonging to the OS user running the local database (~postgres/.pgpass in most cases). And for security reasons, it works only if you are a superuser locally. Can you meet those criteria?
but I would like to save my Client credentials in the SQL instance
What does "SQL instance" mean? I would not think that .pgpass would count as being inside the SQL instance.
An alternative solution is to create a foreign server with postgres_fdw. This doesn't seem to be documented (edit: it is documented here, but the example uses dblink_fdw rather than postgres_fdw), but you can pass the name of a postgres_fdw foreign server (in single quotes) to the dblink functions as the connection string. dblink will then pull the password to be used from the USER MAPPING for that server and user. I would think the USER MAPPING counts as being inside the "SQL instance". A sketch of that setup follows.
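Here is a minimal sketch of that approach, assuming psycopg2 and placeholder names (the server name gcp_sql, the remote host/dbname, and the credentials are all illustrative, not from the original post):

import psycopg2

# Connect to the local database; dbname is a placeholder.
conn = psycopg2.connect("dbname=localdb")
conn.autocommit = True
cur = conn.cursor()

# One-time setup: extensions, a foreign server, and a user mapping that
# stores the remote password on the SQL side.
cur.execute("CREATE EXTENSION IF NOT EXISTS dblink")
cur.execute("CREATE EXTENSION IF NOT EXISTS postgres_fdw")
cur.execute("""
    CREATE SERVER gcp_sql FOREIGN DATA WRAPPER postgres_fdw
        OPTIONS (host '10.0.0.5', port '5432', dbname 'remotedb')
""")
cur.execute("""
    CREATE USER MAPPING FOR CURRENT_USER SERVER gcp_sql
        OPTIONS (user 'remote_user', password 'secret')
""")

# The foreign server name now stands in for the connection string, so no
# password appears in the query itself.
cur.execute("SELECT * FROM dblink('gcp_sql', 'SELECT 1') AS t(x integer)")
print(cur.fetchone())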
I have a pgpool 3.5.4 with memcache enabled, and I use it to connect to Redshift.
I wrote two simple programs, one in Java (JDBC postgresql-9.4.1212.jre6.jar) and another one in Python (using the psycopg2 postgres package), that just connect to pgpool and execute a simple query (e.g. select * from customer limit 10;), and I've noticed strange and different behaviors. I also ran the queries using the command line tool psql.
1) Using JDBC with pgpool with caching enabled I get an error
2016-11-15 10:56:27: pid 31043: FATAL: Backend throw an error message
2016-11-15 10:56:27: pid 31043: DETAIL: Exiting current session because of an error from backend
2016-11-15 10:56:27: pid 31043: HINT: BACKEND Error: "portal "pgpool31043" does not exist"
2) Using JDBC with pgpool with caching disabled it works
3) Using psycopg2 or the psql command line with pgpool, with caching either enabled or disabled, it works
Can someone help me understand why only JDBC is not working?
There are two protocols JDBC uses to communicate: the simple query protocol and the extended query protocol.
pgpool-II, however, doesn't work very well with the extended query protocol.
In the documentation of the pgjdbc driver on GitHub (https://github.com/pgjdbc/pgjdbc) there is a connection parameter named preferQueryMode. To fix this issue, just set preferQueryMode to simple, and the problem will go away.
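For example, the parameter can be appended directly to the JDBC connection URL (host, port, and database name below are placeholders; 9999 is pgpool's usual listen port):
jdbc:postgresql://pgpool-host:9999/mydb?preferQueryMode=simple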
I have tested this setup with two customers so far, using pgpool in front of Postgres and Redshift, and it worked perfectly.
I have a problem connecting to DB2 through VBScript. I am using the following connection string:
Driver={IBM DB2 ODBC DRIVER};Database=mydatabase;Hostname=myHostName;Port=myPortName;Protocol=TCPIP;Uid=myUserID;Pwd=myPassword;
Upon using the above connection string, I am getting an error message stating:
[IBM][CLI Driver] SQL30061N The database alias or database name "myDatabase" was not found at the remote node. SQLSTATE=08004
Can anyone please suggest a solution for this? I tried using DBALIAS in place of Database, but it says the parameter is incorrect.
Suggestions?
Looks like your database name is incorrect.
You can find the correct value by issuing the following query in either QMF or SPUFI:
SELECT CURRENT SERVER FROM SYSIBM.SYSDUMMY1
Yes, most likely an incorrect database name has been specified. Also, you cannot run SQL without a connection as proposed by Vivek8086, but you can try to find the name in the Db2 MSTR output in JES if you have an ID on the remote system, or run the -DIS DDF Db2 command (if you know the Db2 SYSID).
I have a script that I wrote to query MongoDB in Python using PyMongo. I am trying to use this script to connect to a remote MongoDB server, run the query within the script, and then dump the data I get back from MongoDB into a file.
What are the parameters I need to have at the top of the script to connect to this database, use my username and password, switch to the correct database and then run the query?
A couple of options.
First, you could pass a MongoDB URI to MongoClient as an argument, then switch databases as needed using the standard methods for getting a database once connected.
Alternatively, you can connect as normal, get the desired database the same way, and then use the authenticate function to authenticate against that database. A sketch of the URI approach follows.
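Here is a minimal sketch of the URI approach, assuming placeholder host, credentials, database, collection, and filter (none of these names come from the original post):

import json
from pymongo import MongoClient

# Host, credentials, and authSource are placeholders; adjust to your server.
client = MongoClient("mongodb://myuser:mypassword@remote-host:27017/?authSource=admin")

db = client["mydatabase"]                                 # switch to the desired database
cursor = db["mycollection"].find({"status": "active"})    # the query itself

# Dump the results to a file, one JSON document per line.
with open("results.json", "w") as out:
    for doc in cursor:
        doc["_id"] = str(doc["_id"])                      # ObjectId is not JSON-serializable
        out.write(json.dumps(doc) + "\n")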
I'm trying to create a Catalyst project connecting to an existing MS SQL Server database. I got the correct connection string and it's authenticating, but it's not finding any tables. Anyone have an idea of what I might be missing?
I substituted the real IP address, database name, username, and password, but you get the idea.
This is the command I run:
script\qa_utility_create.pl model DB DBIC::Schema QA_Utility::Schema create=static "db_schema=DatabaseName" "dbi:ODBC:Driver={sql server};Server=1.1.1.1,1433;Database=DatabaseName" username password
When I run this, I get the below error:
exists "C:\strawberry\perl\site\bin\QA_Utility\lib\QA_Utility\Model"
exists "C:\strawberry\perl\site\bin\QA_Utility\t"
Dumping manual schema for QA_Utility::Schema to directory C:\strawberry\perl\site\bin\QA_Utility\lib ...
Schema dump completed.
WARNING: No tables found, did you forget to specify db_schema?
exists "C:\strawberry\perl\site\bin\QA_Utility\lib\QA_Utility\Model\DB.pm"
Check your db_schema as the warning suggests. You have it set to the database name, but for SQL Server the schema is usually "dbo".
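For example, the same command as above with db_schema pointed at the SQL Server schema instead of the database name (server, database, and credentials are the same placeholders used in the question):
script\qa_utility_create.pl model DB DBIC::Schema QA_Utility::Schema create=static "db_schema=dbo" "dbi:ODBC:Driver={sql server};Server=1.1.1.1,1433;Database=DatabaseName" username password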
So I had similar issues connecting to a MySQL database, which drove me crazy for about 4 hours (I'm a newbie to Catalyst).
The create script was executing OK, but failed to pick up any tables, giving the "WARNING: No tables found..." message.
The tables were, however, present in the database.
Prior to this, I had been getting errors when the script tried to connect to the database, and after playing with the arguments for a while the connection errors cleared, so I assumed all was good at that point (wrong!).
The suggested solution to specify the db_schema was misleading at this point, as the problem was really the connection failing to return any valid data. So I think what was happening was that it was finding the database and connecting OK, but not returning any data, and thus no tables.
After about 4 hours of playing with the connection arguments, one combination just magically worked.
So here is the successful command line....
script/testcatalyst_create.pl model DB DBIC::Schema testcatalyst::Schema::perl_test create=static dbi:mysql:perl_test:user=root
The parameter that was causing the error was the last one, which specifies the connection details (dbi:mysql...).
Previously I had tried...
script/testcatalyst_create.pl model DB DBIC::Schema testcatalyst::Schema::perl_test create=dynamic dbi:mysql:perl_test,username=root
and many other formats from various online searches. The ":user=root" turned out to be the correct format.
Hope this helps someone else!