dump subset of table - postgresql

I want to dump a subset of a table of my postgres database. Is there a way to dump a SELECT statement without creating a view?
I need to copy a part of the table to another PostgreSQL database.

Use COPY to dump it directly to disk.
Example (from the fine manual) using a SELECT:
COPY
(SELECT * FROM country WHERE country_name LIKE 'A%')
TO '/usr1/proj/bray/sql/a_list_countries.copy';
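
To get the data into the other PostgreSQL database, you can load that file back with COPY ... FROM on the target side. A minimal sketch, assuming a country table with the same structure already exists in the target database:

COPY country FROM '/usr1/proj/bray/sql/a_list_countries.copy';

If you only have client access to the target server, psql's \copy does the same thing but reads the file from the client machine instead of the server.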

Related

how to dump data into a temporary table (without actually creating the temporary table) from an external table in a Hive script at run time

In SQL Server stored procedures, we have the option of creating a temporary table "#temp" whose structure matches that of the table it selects from; we don't explicitly create or spell out the structure of the "#temp" table.
Is there a similar option in a HiveQL script to create a temp table at run time without actually defining the table structure, so that I can dump data into the temp table and use it? The code below shows an example of a #temp table in SQL.
SELECT name, age, gender
INTO #MaleStudents
FROM student
WHERE gender = 'Male'
Hive has the concept of temporary tables, which are local to a user's session. These tables behave just like any other table, and can be created using CTAS commands too. Hive automatically deletes all temporary tables at the end of the Hive session in which they are created.
Read more about them in the Hive documentation or on DWGEEK.
You can create a simple temporary table and perform any operation on it.
Once you are done with your work and log out of your session, it will be deleted automatically.
The syntax for a temporary table is:
CREATE TEMPORARY TABLE TABLE_NAME_HERE (key string, value string)
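
The closest analogue of the SELECT ... INTO #temp pattern is a temporary table created via CTAS. A minimal sketch, assuming a reasonably recent Hive version and the student table from the question:

-- temporary CTAS: the table structure is taken from the query result
CREATE TEMPORARY TABLE male_students AS
SELECT name, age, gender
FROM student
WHERE gender = 'Male';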

create (or copy) table schema using postgres_fdw or dblink

I have many tables in different databases and want to bring them together into one database.
It seems like I have to create a foreign table in the target database (where I want to merge them all) with the schema of each table.
I am sure there is a way to automate this (by the way, I am going to use the psql command), but I do not know where to start.
What I have found so far is that I can use:
select * from information_schema.columns
where table_schema = 'public' and table_name = 'mytable'
Here is a more detailed explanation of what I want to do:
1. Copy tables from another database; the tables have the same column names and data types.
2. Using postgres_fdw, I needed to set up a field name and data type for each table (the table names are also the same).
3. Then I want to union the tables that have the same name into one single table.
4. For that, I am going to add a prefix to each table name, for instance mytable in db1, db2 and db3 becomes db1_mytable, db2_mytable and db3_mytable in my local database.
Thanks to Albe's comment, I managed it, and now I need to figure out how to do the 4th step using a psql command.
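
For reference, one way to avoid writing out every column definition by hand is IMPORT FOREIGN SCHEMA, which postgres_fdw supports from PostgreSQL 9.5 on. A sketch, assuming a foreign server named db1_server already points at db1 (the server and schema names here are just placeholders):

-- pull in the remote table definition without typing the columns
CREATE SCHEMA IF NOT EXISTS db1_remote;
IMPORT FOREIGN SCHEMA public LIMIT TO (mytable)
    FROM SERVER db1_server INTO db1_remote;

-- materialize a local, prefixed copy of the remote table
CREATE TABLE db1_mytable AS SELECT * FROM db1_remote.mytable;

Repeating this for db2_server, db3_server and so on can be scripted with psql, and the final merge is then a UNION ALL over the prefixed tables.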

Dumping only certain part of table with pg_dump

I want to dump a table of a PostgreSQL database (on Heroku), but I only want the rows of the table that match a certain criterion, e.g.
created_at > '2016-01-01'.
Is that even possible using the pg_dump utility?
pg_dump cannot do that. You can use COPY to extract data from a single table with a condition:
COPY (SELECT * FROM tab WHERE created_at > '2016-01-01') TO '/data/dumpfile';
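
Since the database is on Heroku, you won't be able to write a file on the server's filesystem, so the client-side variant via psql's \copy is probably what you want. A sketch, assuming you connect with psql (the file name is just an example):

\copy (SELECT * FROM tab WHERE created_at > '2016-01-01') TO 'dumpfile.csv' WITH CSV HEADER

This runs the same query but writes the result to a file on your local machine.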

Can I copy from one table to another in Redshift

I understand that the COPY command imports lots of data very efficiently. But copying data from one table to another with the INSERT command is slow. Is there a more efficient way to copy data from one table to the other? Or should I use the UNLOAD command to unload the table into S3, then COPY it back from there?
You can do insert into new_table (select * from old_table).
But for bigger tables you should always unload from the old table and then copy to the new table.
The COPY command loads data in parallel and works fast; UNLOAD also unloads data in parallel. So UNLOAD followed by COPY is a good option for copying data from one table to another.
When you run COPY it automatically applies encoding (compression) to your data. When you do insert into (select * from ...) it does not apply compression/encoding; you need to explicitly apply encoding types when you create the new table.
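
A sketch of the UNLOAD-then-COPY route, assuming an S3 bucket and an IAM role your cluster can use (the bucket name and role ARN are placeholders):

-- unload the source table to S3 in parallel slices
UNLOAD ('SELECT * FROM old_table')
TO 's3://my-bucket/old_table_'
IAM_ROLE 'arn:aws:iam::123456789012:role/my-redshift-role';

-- load the slices back into the target table in parallel
COPY new_table
FROM 's3://my-bucket/old_table_'
IAM_ROLE 'arn:aws:iam::123456789012:role/my-redshift-role';

Depending on the data, you may also want the ESCAPE option on both commands so embedded delimiters and newlines round-trip cleanly.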
If you want to copy the records from source_table to target_table, the query is simply:
insert into target_table select * from source_table

PostgreSQL: How to insert huge data into table?

I need to insert a huge number of records into my database tables. How can I do that in PostgreSQL 9.3?
Example:
/* Table creation */
create table tabletest(slno int,name text,lname text, address text, city text);
/* Records insertion */
insert into tabletest values ... -- Here I need to insert thousands of records in bulk.
Short answer: use the COPY command.
Details are available in the Postgres 9.3 documentation.
Note that the file must be readable from the Postgres server machine, because server-side COPY is meant to be used mainly by DBAs.
And if your data is in Excel, you'll have to export it to CSV format first, as Postgres cannot read Excel-formatted data directly.
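
A sketch of the bulk load, assuming the data has been exported to a CSV file at /tmp/tabletest.csv (a placeholder path):

-- server-side load: the file must live on the database server
COPY tabletest (slno, name, lname, address, city)
FROM '/tmp/tabletest.csv' WITH (FORMAT csv, HEADER);

-- client-side alternative from psql, reading the file from your machine
\copy tabletest FROM '/tmp/tabletest.csv' WITH (FORMAT csv, HEADER)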