How can I link a Google spreadsheet to PostgreSQL? [closed]

How can I link a Google spreadsheet to PostgreSQL? I googled and got some sample code for MySQL, and a couple of products which do it. As per this link, support exists for a few databases.

My approach uses R with its googlesheets and DBI packages. googlesheets connects the spreadsheet to R, and DBI connects R to PostgreSQL. You can update the table whenever you want by simply re-running the script. With this approach you can also transform the data before storing it in PostgreSQL. Another option is Python with the pandas and gspread libraries.
More info:
Googlesheets: https://cran.r-project.org/web/packages/googlesheets/vignettes/basic-usage.html
DBI: https://cran.r-project.org/web/packages/DBI/vignettes/DBI-1.html

We have been pulling Google Sheets into QGIS via PostgreSQL Foreign Data Wrappers. We then build a materialized view that connects records to geometry (via School Numbers, for example) and use the materialized view as a standard spatial table.
From there we have also published this 'google sheet materialized view' to a web application via node.js and added a button to refresh the map when the Google Sheet data has changed. It works really well.
Method 1: Multicorn FDW and GSheets Extension:
(Note: The gspreadsheet_fdw extension to the Multicorn FDW is questionably maintained, and there is a risk that v4 of the GSheets API may break it... we don't know yet, and are planning for the possibility that we might have to fall back to Method 2 (below) if it does indeed break.)
To connect to a Google Sheet via a PostgreSQL FDW, use the Multicorn FDW:
https://github.com/Kozea/Multicorn
Then install the gspreadsheet_fdw extension for Multicorn:
https://github.com/lincolnturner/gspreadsheet_fdw
In PostgreSQL, create the multicorn FDW:
CREATE SERVER multicorn_gspreadsheet
FOREIGN DATA WRAPPER multicorn
OPTIONS (wrapper 'gspreadsheet_fdw.GspreadsheetFdw');
Then create the FDW table connecting to the Google Sheet using the gspreadsheet extension of multicorn:
CREATE FOREIGN TABLE speced_fdw.centerprogram_current_gsheet (
column1 integer NULL,
column2 varchar NULL,
column3 varchar NULL
)
SERVER multicorn_gspreadsheet
OPTIONS (keyfile '/usr/pgsql-9.5/share/credential.json', gskey 'example_d33lk2kdislids');
You now have a foreign table pointing directly to your Google Sheet, which you can build a materialized view from.
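For example, a minimal sketch of that step using the foreign table defined above (the materialized view name is illustrative):
-- Snapshot the sheet locally so queries don't hit the Sheets API every time
CREATE MATERIALIZED VIEW speced_fdw.centerprogram_current AS
SELECT column1, column2, column3
FROM speced_fdw.centerprogram_current_gsheet;

-- Re-read the Google Sheet whenever fresh data is needed
REFRESH MATERIALIZED VIEW speced_fdw.centerprogram_current;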
Method 2: Using FILE_FDW and CSV GSheet Export:
The other option is to use file_fdw, which ships with PostgreSQL, and connect to the CSV export of the GSheet using wget.
First, create the server:
CREATE SERVER fdw_files
FOREIGN DATA WRAPPER file_fdw;
Then create the FDW table:
CREATE FOREIGN TABLE public.test_file_fdw (
"name" varchar NOT NULL,
"date" varchar NULL,
"address" varchar NULL
)
SERVER fdw_files
OPTIONS (program 'wget -q -O - "https://docs.google.com/spreadsheets/d/2343randomurlcharacters_r0/export?gid=969976&format=csv"', format 'csv', header 'true');
The above options for wget are listed here: https://www.gnu.org/software/wget/manual/html_node/HTTP-Options.html

You can do this using Google Apps Script and a database connector such as psycopg2 if you're using Python, or node-postgres if you're using Node. Run your DB connector on some server, front it with a REST API, and fetch the data from the Google Sheet using Apps Script.
One thing you'd need to figure out is permissions (Google calls them scopes).
You can build an addon and publish it internally. Here is a good tutorial on how to do this.
If you don't want to build your own addon, you can use database connectors like Castodia, Kpibees and Seekwell from the Google Marketplace. Most of them support Postgres.

Wow, I forgot about this question. I got this working by adding an intermediate MySQL database. The MySQL database had a single table that was used to talk to the Google Sheets API; in turn, that MySQL table was exposed as a foreign table in the Postgres database.
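The answer doesn't say which wrapper exposed the MySQL table to Postgres; a minimal sketch using the mysql_fdw extension (an assumption, with all names hypothetical) could look like this:
-- Assumes the mysql_fdw extension is installed; every name below is illustrative
CREATE EXTENSION mysql_fdw;

CREATE SERVER sheets_bridge
FOREIGN DATA WRAPPER mysql_fdw
OPTIONS (host '127.0.0.1', port '3306');

CREATE USER MAPPING FOR CURRENT_USER
SERVER sheets_bridge
OPTIONS (username 'bridge', password 'secret');

-- The single MySQL table that the Google Sheets API code writes into
CREATE FOREIGN TABLE sheet_rows (
id integer,
payload text
)
SERVER sheets_bridge
OPTIONS (dbname 'sheets', table_name 'sheet_rows');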

Related

Transfer data from Redshift to PostgreSQL

I tried searching for this but couldn't find an answer.
What is the best way to copy data from Redshift to a PostgreSQL database? It could be a Talend job, any other tool, code, etc.; anyhow, I want to transfer data from Redshift to a PostgreSQL database.
Also, you can suggest any third-party database tool if it has this kind of functionality.
As far as I know, we can do this using AWS Data Migration Service, but I'm not sure whether our source and destination databases meet its criteria.
Can anyone please suggest something better?
The way I do it is with a Postgres foreign data wrapper and dblink.
This way, the Redshift table is available directly within Postgres.
Follow the instructions here to set it up https://aws.amazon.com/blogs/big-data/join-amazon-redshift-and-amazon-rds-postgresql-with-dblink/
The important part of that link is this code:
CREATE EXTENSION postgres_fdw;
CREATE EXTENSION dblink;
CREATE SERVER foreign_server
FOREIGN DATA WRAPPER postgres_fdw
OPTIONS (host '<amazon_redshift_ip>', port '<port>', dbname '<database_name>', sslmode 'require');
CREATE USER MAPPING FOR <rds_postgresql_username>
SERVER foreign_server
OPTIONS (user '<amazon_redshift_username>', password '<password>');
For my use case I then set up a postgres materialised view with indexes based upon that.
create materialized view if not exists your_new_view as
SELECT some,
columns,
etc
FROM dblink('foreign_server'::text, '
<the redshift sql>
'::text) t1(some bigint, columns bigint, etc character varying(50));
create unique index if not exists index1
on your_new_view (some);
create index if not exists index2
on your_new_view (columns);
Then on a regular basis I run (on postgres)
REFRESH MATERIALIZED VIEW your_new_view;
or
REFRESH MATERIALIZED VIEW CONCURRENTLY your_new_view;
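If the pg_cron extension is available, that refresh can also be scheduled from inside Postgres; a minimal sketch (job name and interval are illustrative):
-- Runs every 30 minutes; CONCURRENTLY relies on the unique index created above
SELECT cron.schedule(
'refresh_your_new_view',
'*/30 * * * *',
'REFRESH MATERIALIZED VIEW CONCURRENTLY your_new_view'
);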
In the past, I managed to transfer data from one PostgreSQL database to another by doing a pg_dump and piping the output as an SQL command to the second instance.
Amazon Redshift is based on PostgreSQL, so this method should work, too.
You can control whether pg_dump should include the DDL to create tables, or whether it should just load the data (--data-only).
See: PostgreSQL: Documentation: 8.0: pg_dump

Get information about schema, tables, primary keys

How do I get the names of the schemas, tables, and primary keys?
How can I find out the related authorizations?
The only information I have is obtained by the command below:
db2 => connect
Database Connection Information
Database server = DB2/AIX64 11.1.3.3
SQL authorization ID = mkrugger
Local database alias = DBRCF
You can use the command line (interactive command line processor), if you want, but if you are starting out then it is easier to use a GUI tool.
An example of a free GUI is IBM Data Studio, and there are many more (any GUI that works with JDBC should work with Db2 on Linux/Unix/Windows). These are easy to find online and download if you are permitted.
To use the Db2 command line processor (CLP), which is what you show in your question, here are some example commands:
list tables for all
list tables for user
list tables for schema ...
describe table ...
describe indexes for table ...
Reference for LIST TABLES command
You can also use plain SQL to read the catalog views, which describe the schemas, tables, and primary keys as a series of views.
Look in the free online documentation for details of views like SYSCAT.TABLES, SYSCAT.COLUMNS, SYSCAT.INDEXES, and hundreds of others.
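For example, a couple of sketch queries against those catalogs; SYSCAT.KEYCOLUSE (key columns) and SYSCAT.TABAUTH (table privileges) are two further views not mentioned above:
-- User tables, excluding the system catalogs
SELECT tabschema, tabname
FROM syscat.tables
WHERE type = 'T' AND tabschema NOT LIKE 'SYS%';

-- Columns that make up each primary, unique, or foreign key constraint
SELECT tabschema, tabname, constname, colname, colseq
FROM syscat.keycoluse
ORDER BY tabschema, tabname, constname, colseq;

-- Table-level authorizations for the current authorization ID
SELECT tabschema, tabname, grantee, selectauth, insertauth, updateauth, deleteauth
FROM syscat.tabauth
WHERE grantee = USER;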
Depending on which Db2 product is installed locally, there are a range of other command-line based tools. One in particular is db2look which lets you extract all of the DDL of the database (or a subset of it) into a plain text file if you prefer that.

Google Cloud: importing data from Cloud Storage fails with "Could not complete the operation"

It is my first time using Google Cloud.
I am setting everything up and wanted to add a sample database to my Google Cloud account to test a few things; however, when I try to import my sample DB I get this error:
Could not complete the operation.
I have already made a bucket and uploaded my SQL file there.
This is my sample SQL file:
CREATE DATABASE IF NOT EXISTS student DEFAULT CHARACTER SET utf8mb4 COLLATE utf8mb4_0900_ai_ci;
USE student;

CREATE TABLE person (
id int(11) NOT NULL,
name varchar(25) NOT NULL,
age int(3) NOT NULL,
sex text,
email text NOT NULL,
study varchar(20) NOT NULL,
birthday date NOT NULL
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4;

INSERT INTO person (id, `name`, age, sex, email, study, birthday)
VALUES (1, 'Saeed', 30, 'M', 'nakafisarrd#gmail.com', 'computer', '1987-04-30');

ALTER TABLE person ADD PRIMARY KEY (id);
This is the tutorial I am following.
Furthermore, I have installed the Google App Engine SDK for Node. It doesn't show me any error, so I can't figure out what is going wrong here!
I think you might be mixing two topics here, so I will guide you through both process:
· First, you want to create a Cloud SQL instance by importing a *.sql file, as explained in the tutorial you followed. Bear in mind, however, that the tutorial uses MySQL, so you should create a PostgreSQL instance instead if that fits your environment better. I would also recommend following the official documentation on this topic, as it explains clearly how to achieve what you want. I tested the script you provided on a 2nd Generation MySQL instance after changing the first line from CREATE DATABASE IF NOT EXISTS student DEFAULT CHARACTER SET utf8mb4 COLLATE utf8mb4_0900_ai_ci; to CREATE DATABASE IF NOT EXISTS student; and it worked (the adjusted opening is shown below).
· Second, you want to upload your Node.js application to Google Cloud Platform, and for that purpose I suggest using App Engine. As of today, Node.js support is only available in the App Engine flexible environment, so you can follow the guided quickstart in the documentation.
So I would recommend that you first decide whether you want to use MySQL or PostgreSQL, create a Cloud SQL instance that matches your requirements, and then import the .sql file you shared. Everything should work fine.
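For reference, a sketch of the adjusted opening of that script (the rest of the file stays the same):
-- utf8mb4_0900_ai_ci is a MySQL 8 collation, so the collation clause is dropped
-- to let the import run on a 2nd Generation (MySQL 5.6/5.7) instance
CREATE DATABASE IF NOT EXISTS student;
USE student;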

Linking MS Access table to PG Admin schema

I would like to link an MS Access table to a table in pgAdmin, if that is possible, for use in a Postgres query. I have searched for an answer, but all I can find are answers for listing Postgres tables in Access, which is almost the opposite of what I want to do.
I want to be able to access the data entered in an Access form without having to continually import it into a table in pgAdmin.
I'm not even sure that is possible, but any method easier than importing the table into pgAdmin every day would be useful.
Thanks
Gary
Try the PostgreSQL OGR Foreign Data Wrapper. It's built for spatial data, but it works perfectly well with non-spatial tables. If you have the PostGIS extension installed you will already have it.
https://github.com/pramsey/pgsql-ogr-fdw
There are several examples on that page, but the command
ogr_fdw_info -s <pathToAccessFile> -l <tablename>
will return CREATE SERVER and CREATE FOREIGN TABLE statements, which you can edit as required and then run in pgAdmin.
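For illustration only, the generated statements have roughly this shape (the actual datasource, GDAL driver name, and column list come from the ogr_fdw_info output):
CREATE EXTENSION IF NOT EXISTS ogr_fdw;

-- Hypothetical names; the GDAL driver ('format') for .mdb files depends on your build
CREATE SERVER access_svr
FOREIGN DATA WRAPPER ogr_fdw
OPTIONS (datasource 'C:\data\forms.mdb', format 'ODBC');

CREATE FOREIGN TABLE public.form_entries (
fid bigint,
entry_date varchar,
notes varchar
)
SERVER access_svr
OPTIONS (layer 'FormEntries');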

Show linked servers in PostgreSQL

I used the statements below to create a link to an Oracle server from PostgreSQL. I see there are methods to alter and drop the server, but I cannot find a command that lists all available servers that have been created.
Is the information stored anywhere in postgresql?
Where I can view it?
CREATE EXTENSION oracle_fdw;
CREATE SERVER oradb FOREIGN DATA WRAPPER oracle_fdw
You want the pg_foreign_server and pg_foreign_table system catalogs.
http://www.postgresql.org/docs/current/static/catalog-pg-foreign-server.html
http://www.postgresql.org/docs/current/static/catalog-pg-foreign-table.html
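For example, a minimal query listing every server together with the wrapper it uses (pg_foreign_data_wrapper supplies the wrapper name):
-- List every foreign server, its wrapper, and its connection options
SELECT s.srvname, w.fdwname, s.srvoptions
FROM pg_foreign_server s
JOIN pg_foreign_data_wrapper w ON w.oid = s.srvfdw;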
select * from information_schema._pg_foreign_servers;
You can also use the following commands in the psql client:
\det+ - List of foreign tables
\des+ - List of foreign servers