I have two databases with identical structure on two different machines (but with different data), and I want to transfer the contents of one table into the corresponding table in the other database. How do I do that from pgAdmin? I'm new to PostgreSQL; with MySQL's phpMyAdmin I'd simply export the table as SQL and get a text file with a bunch of INSERT INTO statements. Is there an equivalent in pgAdmin?
Yes, backup using "PLAIN" format (SQL statements) and then (when connected to the other DB) open the file and run it.
Or you could select the "COMPRESS" format in the backup dialogue and then use the restore dialogue on the other side.
Also there's an equivalent of phpMyAdmin for Postgres, called "phppgadmin". Select the table in question and then use the "Export" tab.
pg_dump from the command line
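For example, a plain-format dump of a single table, piped straight into the second database (host, user, database, and table names here are placeholders):

pg_dump -h host1 -U myuser -t mytable -F p mydb > mytable.sql
psql -h host2 -U myuser -d mydb -f mytable.sql

The plain format (-F p, the default) is the same kind of SQL text file phpMyAdmin gives you, except it uses COPY statements by default; add --inserts if you want literal INSERT statements.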
I use Azure SQL DB (Single DB, Basic, DTU, Provisioned).
There are two different DBs, say, DB-1 and DB-2.
For DB-1, I have Admin access.
For DB-2, I have read-only access. (No permission to create new tables.)
The two DBs have no links. I access them using SSMS.
The requirement:
In DB-2, there is a table [EMP] with 1000 rows.
Only 250 of them need to be exported and inserted into a new table in DB-1 (with all columns).
How can I achieve this in SSMS?
Thanks in advance!
There is no way to do this in SSMS alone. If this is an ad-hoc project, I would query the records, copy and paste them into Excel, use Excel to build INSERT statements, and then run those against DB-1.
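As a variation that skips the Excel step, you can have DB-2 generate the INSERT statements for you. A minimal sketch, where EmpId and EmpName are hypothetical columns of [EMP] and TOP (250) stands in for whatever condition actually picks your 250 rows:

-- Run against DB-2: each result row is a ready-made INSERT statement
SELECT TOP (250)
       'INSERT INTO dbo.EMP_copy (EmpId, EmpName) VALUES ('
       + CAST(EmpId AS varchar(20)) + ', N'''
       + REPLACE(EmpName, '''', '''''') + ''');'   -- double up quotes in string values
FROM dbo.EMP;

Copy the result set into a query window connected to DB-1 (after creating dbo.EMP_copy there) and execute it.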
If this is something that will need to be sustainable, I'd recommend looking into Azure Data Factory.
I tried searching for this but couldn't find an answer.
What is the best way to copy data from Redshift to a PostgreSQL database?
It could be a Talend job, any other tool, or code; one way or another I want to transfer data from Redshift to a PostgreSQL database.
A third-party database tool would also be fine if it has this kind of functionality.
Also, as far as I know this can be done with AWS Data Migration Service, but I'm not sure whether our source and destination databases meet its criteria.
Can anyone please suggest something better?
The way I do it is with a Postgres foreign data wrapper and dblink.
This way, the Redshift table is available directly within Postgres.
Follow the instructions here to set it up https://aws.amazon.com/blogs/big-data/join-amazon-redshift-and-amazon-rds-postgresql-with-dblink/
The important part of that link is this code:
CREATE EXTENSION postgres_fdw;
CREATE EXTENSION dblink;
CREATE SERVER foreign_server
FOREIGN DATA WRAPPER postgres_fdw
OPTIONS (host '<amazon_redshift_ip>', port '<port>', dbname '<database_name>', sslmode 'require');
CREATE USER MAPPING FOR <rds_postgresql_username>
SERVER foreign_server
OPTIONS (user '<amazon_redshift_username>', password '<password>');
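Once the user mapping is in place, a quick sanity check (a minimal sketch; the column alias is arbitrary) is to run a trivial query through dblink:

-- should return a single row with x = 1 if the link works
SELECT x FROM dblink('foreign_server', 'SELECT 1') AS t(x int);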
For my use case I then set up a postgres materialised view with indexes based upon that.
create materialized view if not exists your_new_view as
SELECT some,
columns,
etc
FROM dblink('foreign_server'::text, '
<the redshift sql>
'::text) t1(some bigint, columns bigint, etc character varying(50));
create unique index if not exists index1
on your_new_view (some);
create index if not exists index2
on your_new_view (columns);
Then on a regular basis I run (on postgres)
REFRESH MATERIALIZED VIEW your_new_view;
or, to avoid blocking reads while the view refreshes (this variant requires the unique index created above):
REFRESH MATERIALIZED VIEW CONCURRENTLY your_new_view;
In the past, I managed to transfer data from one PostgreSQL database to another by doing a pg_dump and piping the output as an SQL command to the second instance.
Amazon Redshift is based on PostgreSQL, so this method should work, too.
You can control whether pg_dump should include the DDL to create tables, or whether it should just load the data (--data-only).
See: PostgreSQL: Documentation: 8.0: pg_dump
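A sketch of that pipe, with placeholder hosts and names (Redshift listens on port 5439 by default; since Redshift's PostgreSQL lineage is old, an older pg_dump client may be needed):

pg_dump --data-only --table=mytable -h my-cluster.redshift.amazonaws.com -p 5439 -U myuser mydb | psql -h my-postgres-host -U myuser mydb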
How do I get the names of the schema, tables, and primary keys?
How can I find out which authorizations my user has?
The only information I have is obtained by the command below:
db2 => connect
Database Connection Information
Database server = DB2/AIX64 11.1.3.3
SQL authorization ID = mkrugger
Local database alias = DBRCF
You can use the command line (interactive command line processor), if you want, but if you are starting out then it is easier to use a GUI tool.
One example of a free GUI is IBM Data Studio, and there are many more (any GUI that works with JDBC should work with Db2 on Linux/Unix/Windows). These are easy to find online and download if you are permitted.
To use the Db2 command line processor (CLP), which is what you show in your question, here are some example commands:
list tables for all
list tables for user
list tables for schema ...
describe table ...
describe indexes for table ...
Reference for LIST TABLES command
You can also use plain SQL to read the catalog views, which describes the schemas, tables, primary keys as a series of views.
Look in the online free documentation for details of views like SYSCAT.TABLES, SYSCAT.COLUMNS , SYSCAT.INDEXES and hundreds of other views.
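For example (schema and table names are placeholders), these two catalog queries list the tables in a schema and the primary-key columns of one table:

-- tables in a given schema
SELECT tabschema, tabname FROM syscat.tables WHERE tabschema = 'MYSCHEMA';

-- primary-key columns of one table ('P' marks a primary-key constraint)
SELECT k.colname, k.colseq
FROM syscat.tabconst c
JOIN syscat.keycoluse k
  ON k.constname = c.constname AND k.tabschema = c.tabschema AND k.tabname = c.tabname
WHERE c.type = 'P' AND c.tabschema = 'MYSCHEMA' AND c.tabname = 'EMP';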
Depending on which Db2 product is installed locally, there are a range of other command-line based tools. One in particular is db2look which lets you extract all of the DDL of the database (or a subset of it) into a plain text file if you prefer that.
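For example, using the database alias from the question and a placeholder output file:

db2look -d DBRCF -e -o ddl_out.sql

Here -d names the database, -e extracts the DDL statements, and -o writes them to a file instead of standard output.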
We have many PostgreSQL databases with the same structure, each using only the public schema.
How can I group all of them into a single database using separate schemas?
You can dump each database's definition and data, edit the output to set the default schema to whatever you choose, and run the script back into the target database.
Remember to make the dump in plain SQL format; pg_dump's custom format won't work for this. The schema change will only need a change on one line, like
SET search_path TO whateverschema;
If you don't want to edit the dumps (maybe they're very large), you can of course also restore them one by one into the public schema, alter the tables into the desired schema, and then repeat for the next one.
There is no special way to convert an existing database into a schema in another database unfortunately.
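For that restore-then-alter route, the move is one statement per object; for example (table and schema names hypothetical):

-- move a restored table out of public into its own schema
ALTER TABLE public.orders SET SCHEMA customer_x;

Views, sequences, and functions can be moved with the same SET SCHEMA clause on their respective ALTER commands.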
I forgot to post the answer; after all, klin's comment was the answer. These steps were the solution.
Inside customer_x database:
alter schema public rename to customer_x;
Then take a pg_dump of customer_x:
pg_dump "customer_x" --schema "customer_x" -f customer_x.sql
Inside new conglomerated database:
DROP schema customer_x CASCADE;
create schema customer_x;
Then load the dump of customer_x:
psql "conglomerated_database" -f customer_x.sql
I have to transfer data from an old database to a new database where the table names and column names are different.
Can it be done with a DOS command or any other solution?
The new database is PostgreSQL and the old one is MySQL.
My concern is that the table names and column names are different; the number of columns is the same.
Thank you
I do not know the PostgreSQL part, but for SQL Server you can use sqlcmd.exe to export data in text format, with or without column names.
Please check
http://msdn.microsoft.com/en-us/library/ms162773.aspx
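For the MySQL-to-PostgreSQL case asked about here, one command-line sketch (all table, column, and database names are hypothetical) is to export the old table as tab-separated text and load it into the differently named Postgres table, mapping columns by position:

mysql -N -B -e "SELECT old_col1, old_col2 FROM old_table" old_db > data.tsv
psql new_db -c "\copy new_table (new_col1, new_col2) FROM 'data.tsv'"

Since the column count matches, only the order of the SELECT list and the \copy column list has to line up. One caveat: MySQL writes NULL as the literal string NULL while psql's text format expects \N, so a sed pass in between may be needed.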