Impala FDW for Postgres 9.5

I am looking for an Impala Foreign Data Wrapper for Postgres 9.5. Searching the internet, I could find only one reference: https://github.com/lapug/impala_fdw
But according to its README, the FDW is not yet complete.
Can someone point me to another Impala FDW I can use to connect Postgres to Impala?

Since Impala supports JDBC and ODBC, you've got some options:
My jdbc2_fdw fork with 9.5 patches - compiles and can retrieve results, though it is not fully tested yet. It incorporates mc-soi's jdbc2_fdw patch for PostgreSQL 9.5 and includes additional changes.
jdbc2_fdw - only works with PostgreSQL 9.4 and earlier.
odbc_fdw
I'm using odbc_fdw and heimir-sverrisson's jdbc2_fdw successfully with PostgreSQL 9.4.
PostgreSQL 9.5 changed the FDW API. I just got jdbc2_fdw working to some degree, but I had to make additional patches in the fork mentioned above. It allows retrieving results from a foreign table and creating a materialized view on it. More testing is needed.
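For the odbc_fdw route, the general shape is below. This is a minimal sketch only: it assumes an ODBC DSN named Impala is already configured, the server/table/column names are hypothetical, and the exact OPTIONS keys (dsn, table, ...) vary between odbc_fdw versions.
CREATE EXTENSION odbc_fdw;
CREATE SERVER impala_srv FOREIGN DATA WRAPPER odbc_fdw
  OPTIONS (dsn 'Impala');            -- DSN as defined in odbc.ini (hypothetical name)
CREATE USER MAPPING FOR CURRENT_USER SERVER impala_srv;
CREATE FOREIGN TABLE impala_sales (id integer, amount numeric)
  SERVER impala_srv
  OPTIONS (table 'sales');           -- option names differ across odbc_fdw forks
-- materialize a local snapshot, as described above
CREATE MATERIALIZED VIEW sales_snapshot AS SELECT * FROM impala_sales;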

Postgresql with IBM App Connect Enterprise

I am trying to use IBM App Connect Enterprise to connect to a PostgreSQL data source and execute complex database queries (complex SELECT, INSERT, and UPDATE statements).
All I can find is the LoopBack node, which supports only limited (select, insert, and update) statements.
Is there any option to include ESQL with a PASSTHRU function, as is used with ODBC (Oracle data sources)?
You could use App Connect for the interactions with PostgreSQL: https://www.ibm.com/docs/en/app-connect/containers_cd?topic=examples-connecting-app-connect-postgresql
You should be able to call your App Connect flow using a Callable Flow (via the Switch Server): https://www.ibm.com/docs/en/app-connect/12.0?topic=pecf-preparing-environment-split-processing-between-app-connect-enterprise-app-connect-cloud
You should be able to add a PostgreSQL database as an ODBC database through the unixODBC layer (i.e. add an entry to the odbcinst.ini file). It's not directly supported, though, so if you hit an issue you'd need to reproduce it with a supported database.
I haven't tried it myself yet; I hope to have time in the not-too-distant future.
Searching the internet for "unixodbc postgres odbcinst.ini" gave some good results, which is where I'd start.
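For what it's worth, a typical unixODBC registration looks like the sketch below. The driver path is an assumption (it varies by distribution and package), and the DSN values are placeholders.
In /etc/odbcinst.ini:
[PostgreSQL]
Description = psqlODBC driver for PostgreSQL (unicode build)
Driver      = /usr/lib/x86_64-linux-gnu/odbc/psqlodbcw.so
In /etc/odbc.ini (or ~/.odbc.ini):
[MyPostgres]
Driver     = PostgreSQL
Servername = db.example.com
Port       = 5432
Database   = mydb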

How to check whether the cstore FDW is present on a citus build of PostgreSQL?

I am running the Citus PostgreSQL build from here:
https://github.com/citusdata/docker/blob/master/docker-compose.yml
But I don't know how to check whether the instance has the cstore foreign data wrapper available for columnar support. I'm guessing there's a way to do it from psql, much like with HSTORE?
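One way to check from psql - a minimal sketch, assuming cstore_fdw ships as a regular extension in that image:
-- extensions available to CREATE EXTENSION on this server
SELECT name, default_version, installed_version
FROM pg_available_extensions
WHERE name LIKE '%cstore%';
-- extensions already installed in the current database (what \dx shows)
SELECT extname, extversion FROM pg_extension;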

Postgres link to Big Query using ODBC drivers

I'm trying to build a link from Postgres (a Windows installation) to Google BigQuery. To do so, I found third-party ODBC drivers by Simba, and installed and configured them successfully. The next step was to create a link in Postgres. I was looking at the dblink function in Postgres to do so. The documentation for dblink_connect states that I need to pass a libpq-style connection info string, similar to hostaddr=127.0.0.1 port=5432 dbname=mydb user=postgres password=mypasswd.
The question is: how should I create a dblink connection using the installed ODBC drivers? What should my hostaddr and port be?
When I google for a Postgres dblink connection using ODBC, I always find how to connect to Postgres instead of from Postgres. Is it possible at all?
You could also simply install the FDW and query foreign tables of BigQuery: https://github.com/gabfl/bigquery_fdw
Postgres dblink is a module that supports connections to other PostgreSQL databases; it doesn't support ODBC data sources.
You may want to try ODBC-Link, which allows connecting to any ODBC data source. Another approach is to use an ODBC version of the Postgres Foreign Data Wrappers. Several extensions implement FDWs for ODBC data sources, and they are listed on the Postgres FDW wiki page.
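The ODBC FDW approach mirrors the Impala sketch earlier in this page. Assuming odbc_fdw is installed and the Simba DSN is named BigQuery in the ODBC manager (both assumptions, as are the table and column names; OPTIONS keys vary between ODBC FDW implementations):
CREATE SERVER bq_srv FOREIGN DATA WRAPPER odbc_fdw
  OPTIONS (dsn 'BigQuery');          -- the DSN configured for the Simba driver
CREATE USER MAPPING FOR CURRENT_USER SERVER bq_srv;
CREATE FOREIGN TABLE bq_events (event_id bigint, ts timestamp)
  SERVER bq_srv
  OPTIONS (table 'events');
SELECT count(*) FROM bq_events;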

Connecting SAS 9.2 with Amazon Redshift

I need to create reports/summary tables on Redshift using SAS. My client's data is on Amazon Redshift, and he provided me all the credentials to access the database. I have SAS 9.2 (32-bit) and downloaded the PostgreSQL 32-bit ODBC driver to my system (as Redshift is based on PostgreSQL). I set up the ODBC data source successfully, and now I am connecting from SAS using the command below:
LIBNAME RdSft ODBC DSN='Redshift server' user='xxxxxxx' pw='xxxxxx';
data Rdsft.new_table;
set Rdsft.old_table(obs=10);
run;
I am able to connect and can see the contents of tables on Redshift, but I am not able to create any tables there. Sometimes I could, but it took hours to create a table with just 10 observations. Someone suggested I use DbVisualizer for this task, but I am comfortable only with SAS.
Please suggest.
If you have SAS/ACCESS, try using the postgres engine for the library instead of going via ODBC, e.g.:
libname RdSft postgres server="<server-address>" database=<db-name> port=5432 user='xxxxxxx' pw='xxxxxx';
Also, try adding conopts="UseServerSidePrepare=1" to the libname statement, as suggested by this article: http://support.sas.com/kb/52/585.html
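Putting the two suggestions together, the libname statement would look like this (server and database placeholders as above):
libname RdSft postgres server="<server-address>" database=<db-name> port=5432
  user='xxxxxxx' pw='xxxxxx' conopts="UseServerSidePrepare=1";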
The simple fact of the matter is that when you're connecting to Redshift via ODBC, even your simple data step query:
data Rdsft.new_table;
set Rdsft.old_table(obs=10);
run;
essentially translates to "select * from rdsft.old_table" before the obs= subset is applied.
The SAS/ACCESS postgres solution is solid. You may also want to use PROC SQL, select only the columns you want, and subset as much as possible. PROC SQL translates more readily into Redshift's query language through ODBC than the data step does.
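For example, explicit SQL pass-through keeps the subsetting on the Redshift side. A sketch only, reusing the DSN and credentials from the question; the column names are hypothetical:
proc sql;
  connect to odbc (datasrc='Redshift server' user='xxxxxxx' password='xxxxxx');
  create table work.top10 as
  select * from connection to odbc
    ( select col_a, col_b        /* pick only the columns you need */
      from old_table
      limit 10 );                /* subset runs on Redshift, not in SAS */
  disconnect from odbc;
quit;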
SAS will hopefully be issuing a SAS/ACCESS for REDSHIFT option sometime soon! :)

PostgreSQL 9.1 backup and restore to 8.4

I'm trying to upload a database, which I developed locally, into our development server.
I installed PostgreSQL 9.1 on my machine and the development server uses 8.4.
When trying to restore the database to 8.4 using the dump file created by 9.1, I get this error:
pg_restore: [archiver (db)] could not execute query: ERROR: syntax error at or near "EXTENSION"
LINE 1: CREATE EXTENSION IF NOT EXISTS plpgsql WITH SCHEMA pg_catalo...
A quick search tells me that "EXTENSION" doesn't exist prior to 9.1.
I'm not sure I should look for an option in pg_dump that ignores "extensions", as the database I'm trying to upload relies on the PostGIS extension for most of its data.
While upgrading the development server and installing PostGIS on it is an option, I'd like to know of a different route: one where I don't need to change anything on the server while keeping the functionality of the database I developed.
Of course, other workarounds are welcome; my sole aim in uploading my database to the server is to reduce the amount of reconfiguration I have to do on my project whenever I need to deploy something for our team.
This is an old post, but I had the same problem today and there is a better, more reliable way of loading a PG 9.1 db into a PG 8.4 server. The method proposed by Craig will fail on the target machine because the PL/pgSQL language will not be created.
pg_dump -Upostgres -hlocalhost > 9.1.db
In the dump file, replace this line:
CREATE EXTENSION IF NOT EXISTS plpgsql WITH SCHEMA pg_catalog;
with this line:
CREATE LANGUAGE plpgsql;
and delete this line (or comment it out):
COMMENT ON EXTENSION plpgsql IS 'PL/pgSQL procedural language';
You can use sed to make the changes.
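For instance, the following sed invocation applies both edits to the dump created above (the output file name 8.4.db is chosen here just for illustration):
sed -e 's/^CREATE EXTENSION IF NOT EXISTS plpgsql WITH SCHEMA pg_catalog;$/CREATE LANGUAGE plpgsql;/' \
    -e '/^COMMENT ON EXTENSION plpgsql/d' \
    9.1.db > 8.4.db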
Often it is not possible to upgrade an 8.4 server because of application dependencies.
Backporting databases can be painful and difficult.
You could try using 8.4's pg_dump to dump it, but it'll probably fail.
You'll probably want to extract the table and function definitions from a --schema-only dump text file, load them into the old DB by hand, then do a pg_dump --data-only and restore that to import the data.
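Roughly like this, assuming a database named mydb and a dev server host devserver (both placeholders):
pg_dump --schema-only mydb > schema.sql
# hand-edit schema.sql until it loads cleanly on 8.4, apply it with psql, then:
pg_dump --data-only mydb > data.sql
psql -h devserver -d mydb -f data.sql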
After that, if you're going to continue working on your machine too, install PostgreSQL 8.4 and use that for further development so you don't introduce more incompatibilities and so it's easy to move dumps around.
In your position I'd just upgrade the outdated target server to 9.1.