I want to copy some data from one table to another, but I don't want to copy all the rows, just a few of them (say, only the first 100).
I didn't find an option in the COPY command for this.
So is that possible or not?
Thank you in advance.
You didn't look very hard.
https://www.postgresql.org/docs/current/static/sql-copy.html
To copy into a file just the countries whose names start with 'A':
COPY (SELECT * FROM country WHERE country_name LIKE 'A%') TO
'/usr1/proj/bray/sql/a_list_countries.copy';
So you would want COPY (SELECT ... LIMIT 100).
I'm assuming you want to copy between databases or via an intermediate file or something, otherwise you just want to use INSERT ... SELECT as shown in another answer.
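For example, a minimal sketch that writes only 100 rows to a file (the path and ordering column are illustrative; without an ORDER BY, which 100 rows you get is arbitrary):
-- Write the first 100 countries by name to a server-side file
COPY (SELECT * FROM country ORDER BY country_name LIMIT 100)
TO '/tmp/first_100_countries.copy';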
You can use
INSERT INTO table2(col_list)
SELECT col_list FROM table1 WHERE conditions ORDER BY order_col_list LIMIT X;
You can limit the number of rows when you COPY (SELECT ... LIMIT) to a file, but not when you COPY from one. You did not find such an option in the docs because there is none.
Use INSERT INTO ... SELECT FROM ... LIMIT 100 instead.
So I have a query that shows a huge number of mutations in Postgres. The quality of the data is bad and I have "cleaned" it as much as possible.
To make my report as user-friendly as possible, I want to filter out some rows that I know the customer doesn't need.
I have the following columns: id, change_type, atr, module, value_old and value_new.
For change_type = 'update' I always want to show every row.
For the rest of the rows I want to build some kind of logic with a combination of atr and module.
For example, if change_type <> 'update' and the concatenation of atr and module is 'weightperson', then I don't want to show that row.
In this case ids 3 and 11 are worthless and should not be shown.
Is this the best way to solve this or does anyone have another idea?
select * from t1
where concat(atr,module) not in ('weightperson','floorrentalcontract')
In the end my "not in" part will be filled with over 100 combinations and the query will not look good. Maybe a solution with a CTE would make it look prettier, and I'm also concerned about the performance.
CREATE TABLE t1(id integer, change_type text, atr text, module text, value_old text, value_new text) ;
INSERT INTO t1 VALUES
(1,'create','id','person',null ,'9'),
(2,'create','username','person',null ,'abc'),
(3,'create','weight','person',null ,'60'),
(4,'update','id','order','4231' ,'4232'),
(5,'update','filename','document','first.jpg' ,'second.jpg'),
(6,'delete','id','rent','12' ,null),
(7,'delete','cost','rent','600' ,null),
(8,'create','id','rentalcontract',null ,'110'),
(9,'create','tenant','rentalcontract',null ,'Jack'),
(10,'create','rent','rentalcontract',null ,'420'),
(11,'create','floor','rentalcontract',null ,'1')
You could put the list of combinations in a separate table and join with that table, or have them listed directly in a with-clause like this:
with combinations_to_remove as (
select *
from (values
('weight', 'person'),
('floor' ,'rentalcontract')
) as t (atr, module)
)
select t1.*
from t1
left join combinations_to_remove using(atr, module)
where combinations_to_remove.atr is null
I guess it would be cleaner and easier to maintain if you put them in a separate table!
Read more on with-queries if that sounds strange to you.
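A minimal sketch of the separate-table variant (the table is illustrative, and the change_type = 'update' exception from the question is folded in):
-- Lookup table holding the atr/module combinations to hide
create table combinations_to_remove (atr text, module text);
insert into combinations_to_remove values
('weight', 'person'),
('floor', 'rentalcontract');
-- Anti-join: keep updates unconditionally, plus any row with no matching combination
select t1.*
from t1
left join combinations_to_remove c using (atr, module)
where t1.change_type = 'update' or c.atr is null;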
I have a field, let's call it total_sales where the value it returns is 3621731641
I would like to convert that so it has a thousand separator commas inserted into it. So it would ultimately return as 3,621,731,641
I've looked through the Redshift documentation and have not been able to find anything.
A query similar to the following should work for you.
select to_char(<columnname>,'999,999,999,999') from table1;
Make sure the pattern in the second parameter has enough digits for your maximum value.
It will not give you a currency symbol unless you specify 'L' in the second parameter, like below:
select to_char(<columnname>,'l999,999,999,999') from table1;
Money format: select '$'||trim(to_char(1000000000.555,'999G999G999G999.99'))
select to_char(<columnname>,'999,999,999,999') from table1;
or
select to_char(<columnname>,'999G999G999G999') from table1;
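For example, with the literal from the question (note that to_char left-pads the result with spaces to the width of the pattern, so trim is often useful):
select to_char(3621731641, '999,999,999,999');       -- '   3,621,731,641' (left-padded)
select trim(to_char(3621731641, '999,999,999,999')); -- '3,621,731,641'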
I'm currently writing a script which will allow me to input a file (generally .sql) and it'll generate a list of every table that's used in that file. The process is simple: it opens the input file, checks each line for a substring, and if that substring exists, outputs the line to the screen.
The substrings being checked are T-SQL keywords that are indicative of a selected table, such as INTO, FROM and JOIN. Not being a T-SQL wizard, those 3 keywords are the only ones I know of that are used to select a table in a query.
So my question is: in T-SQL, are INTO, FROM and JOIN the only ways to get a table, or are there others?
There are many ways to reference a table; here are some of them:
DELETE
FROM
INTO
JOIN
MERGE
OBJECT_ID (N'dbo.mytable', N'U'), where U is the object type for a table (see the sketch after this list).
TABLE, e.g. ALTER TABLE, TRUNCATE TABLE, DROP TABLE
UPDATE
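A minimal sketch of the OBJECT_ID check (the table name is illustrative):
-- Returns a non-NULL object id only if dbo.mytable exists as a user table ('U')
IF OBJECT_ID(N'dbo.mytable', N'U') IS NOT NULL
    PRINT 'dbo.mytable is a user table';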
However, by using your script, you'll not only get real tables, but maybe also a VIEW or a temporary table. Here are 2 examples:
-- Example 1
SELECT *
FROM dbo.myview
-- Example 2
WITH tmptable AS
(
SELECT *
FROM mytable
)
SELECT *
FROM tmptable
Is there a way to make a Redshift Copy while at the same time generating the row_number() within the destination table?
I am basically looking for the equivalent of the below, except that the group of rows does not come from a SELECT but from a COPY command for a file on S3:
insert into aTable (id, x, y, z)
select
#{startIncrement} + row_number() over (order by x) as id,
x, y, z
from anotherTable;
Thx
What I understand from your question is that you need to insert an additional column id into the table, and that id is not present in the CSV file. If my understanding is right, please follow the approach below.
Copy the data from the CSV file to a temp table, say "aTableTemp", which has the same schema but without the column "id". Then insert the data from "aTableTemp" into "aTable" as follows:
Insert into aTable
Select #{startIncrement}+row_number() over(order by x) as id, * from aTableTemp
I hope this solves your problem.
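A minimal sketch of the two-step approach, assuming a CSV on S3 with columns x, y, z (the bucket, IAM role, and the literal start value all stand in for your own):
-- Step 1: stage the file into a temp table without the id column
create temp table aTableTemp (x int, y int, z int);
copy aTableTemp (x, y, z)
from 's3://my-bucket/my-file.csv'
iam_role 'arn:aws:iam::123456789012:role/MyRedshiftRole'
csv;
-- Step 2: generate ids while moving the rows into the destination table
insert into aTable (id, x, y, z)
select 1000 + row_number() over (order by x), x, y, z  -- 1000 stands in for #{startIncrement}
from aTableTemp;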
Maybe COPY into a table with an identity column and just don't copy into that field?
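A sketch of that idea with the same illustrative columns; when the identity column is left out of the COPY column list, Redshift generates its values automatically:
-- Destination table with an auto-generated id
create table aTable (
id bigint identity(1, 1),
x int,
y int,
z int
);
-- Listing only x, y, z lets Redshift fill the identity column itself
copy aTable (x, y, z)
from 's3://my-bucket/my-file.csv'
iam_role 'arn:aws:iam::123456789012:role/MyRedshiftRole'
csv;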
I'm trying to get the whole result of a query into a variable, so I can loop through it and make inserts.
I don't know if it's possible.
I'm new to Postgres and stored procedures; any help will be very welcome.
Something like:
declare result (I don't know what kind of data type I should use to get a query);
select into result label, number, desc from data
Thanks in advance!
I think you have to read the PostgreSQL documentation about cursors.
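If you really do need to process the result row by row, here is a minimal PL/pgSQL sketch (assuming a data2 target table with the same columns; note that desc is a reserved word and has to be quoted):
do $$
declare
  r record;
begin
  -- loop over the query result one row at a time
  for r in select label, number, "desc" from data loop
    insert into data2 (label, number, "desc")
    values (r.label, r.number, r."desc");
  end loop;
end
$$;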
But if you just want to insert data from one table to another, you can do:
insert into data2 (label, number, "desc")
select label, number, "desc"
from data
If you want to "save" the data from a query, you can also use a temporary table, which you can create with the usual CREATE TABLE or with CREATE TABLE AS:
create temporary table temp_data as
(
select label, number, "desc"
from data
)
see documentation