update testdata.test
set abcd = (select abc
from DATA1
order by random()
limit 1
)
Doing this populates every row of the TEST table with the same single random entry from the DATA1 table.
What I need is to populate each row of the TEST table with its own random entry from the DATA1 table.
Reference the outer table from the subquery so that it becomes a correlated subquery; then it has to be executed for every row. The CASE comparison of test.abcd with itself is always true and is only there to force the correlation:
UPDATE testdata.test
SET abcd = (SELECT CASE WHEN test.abcd IS NOT DISTINCT FROM test.abcd
THEN abc
END
FROM data1
ORDER BY random()
LIMIT 1
);
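If the correlation works, each row gets its own random pick; a quick sanity check (tables as in the question):
-- the uncorrelated version yields exactly 1 distinct value;
-- the correlated one should yield many:
SELECT count(DISTINCT abcd) FROM testdata.test;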
Related
Table a has 100 rows with a name column.
Table b has the same structure as table a, but has 10 million rows.
I need a query that finds the values of table a that are not in table b.
However, comparing the values of table a with the values of table b takes too long.
I want to complete the work in 5 seconds, but I don't know how.
Below are the methods I tried. Both tables' name columns have b-tree indexes.
1.
select
name
from
a
where
not exists (select
name
from
b
where
a.name = b.name
);
2.
select
a.name
from
a left outer join b
on a.name = b.name
where
b.name is null;
You want the following index on the B table:
CREATE INDEX name_idx ON b (name);
This should allow Postgres to rapidly look up each of the 100 names in the a table against the above index. This should avoid the need to do a full table scan of the b table, which, as you are seeing, can be costly.
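To verify, you can ask Postgres for the actual plan of the anti-join (same tables as above):
EXPLAIN (ANALYZE)
SELECT a.name
FROM a LEFT JOIN b ON a.name = b.name
WHERE b.name IS NULL;
With the index in place you would expect index lookups into b (e.g. a nested loop over the 100 rows of a) rather than a sequential scan of b.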
I have created the following query:
select t.id, t.row_id, t.content, t.location, t.retweet_count, t.favorite_count, t.happened_at,
a.id, a.screen_name, a.name, a.description, a.followers_count, a.friends_count, a.statuses_count,
c.id, c.code, c.name,
t.parent_id
from tweets t
join accounts a on a.id = t.author_id
left outer join countries c on c.id = t.country_id
where t.row_id > %s
-- order by t.row_id
limit 100
Where %s is a number that starts at 0 and is incremented by 100 after each query. I want to fetch all records from the database using this method, simply increasing %s in the where condition. I found this approach on https://ivopereira.net/efficient-pagination-dont-use-offset-limit. I also included a column in my table corresponding to the row number (I named it row_id). Now the problem is that when I run this query the first time, it returns rows with a row_id of 3 million. I would like the cursor (not sure if my terminology is correct) to start from rows with row_id 1 through 100 and so on. The table contains 7 million rows. Am I missing something obvious with which I could achieve my goal?
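The commented-out order by is almost certainly the problem: without an ORDER BY, Postgres is free to return any 100 matching rows, in any order, so the "pages" are not deterministic. A minimal sketch of the fix (joins and most columns omitted for brevity; assuming an index on row_id so the ordering is cheap):
select t.row_id, t.id
from tweets t
where t.row_id > %s
order by t.row_id -- without this, LIMIT returns an arbitrary set of rows
limit 100;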
PostgreSQL version 9.4
I have a table with an integer column, which has a number of integers with some gaps, like the sample below; I'm trying to get an existing id from the column at random with the following query, but it returns NULL occasionally:
CREATE TABLE
IF NOT EXISTS test_tbl(
id INTEGER);
INSERT INTO test_tbl
VALUES (10),
(13),
(14),
(16),
(18),
(20);
-------------------------------
SELECT * FROM test_tbl;
-------------------------------
SELECT COALESCE(tmp.id, 20) AS classification_id
FROM (
SELECT tt.id,
row_number() over(
ORDER BY tt.id) AS row_num
FROM test_tbl tt
) tmp
WHERE tmp.row_num = floor(random() * 10);
Please let me know what I'm doing wrong.
but it returns NULL occasionally
And I must add to this that it sometimes returns more than one row, right?
In your sample data there are 6 rows, so the column row_num will have a value from 1 to 6.
This:
floor(random() * 10)
creates a random number from 0 up to 9, because random() itself returns a value from 0 up to 0.9999....
You should use:
floor(random() * 6 + 1)::int
to get a random integer from 1 to 6.
But this alone would not solve the problem: random() is volatile, so the WHERE clause computes a fresh random number for each row. As a result, row_num may never match the generated number (returning nothing), or it may match more than once (returning more than one row).
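If you want to keep the row_number() approach, generate the random target once, in its own CTE, so it cannot be re-evaluated per row; a sketch against the test_tbl from the question (CTEs are materialized in 9.4, so pick is computed a single time):
WITH numbered AS (
    SELECT id, row_number() OVER (ORDER BY id) AS row_num
    FROM test_tbl
), pick AS (
    -- one random row number between 1 and the row count
    SELECT floor(random() * (SELECT count(*) FROM test_tbl))::int + 1 AS target
)
SELECT numbered.id
FROM numbered
JOIN pick ON numbered.row_num = pick.target;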
The proper (although sometimes not the most efficient) way to get a random row is:
SELECT id FROM test_tbl ORDER BY random() LIMIT 1
Also check other links from SO, like:
quick random row selection in Postgres
You could select one row and order by random(); this way you are guaranteed to hit an existing row:
select id
from test_tbl
order by random()
LIMIT 1;
I have a table a; after the SQL below runs, it contains the same record several times.
Here is my query:
for server_id in (select bs.id from status.servers bs
join settings.config blc on bs.id = blc.server_id
where blc.lane_number = (dataitem->>'No')::SMALLINT AND blc.min_length <= (dataitem->>'len')::real
)
LOOP
insert into a(measurement_id, server_id, status)
VALUES (
measurement_id,server_id,false
);
END LOOP;
And as a result I have records in table a like:
id  meas_id  serv_id  status
1   12       1        f
2   12       1        f
3   12       1        f
I've changed the code a little; the working code has no syntax mistakes.
Answering
"why do I have the same records with different ids?"
Table a probably has a default value for column id, so values are taken from a sequence; most probably you created it with the serial data type... Those results are expected then. If you want to supply your own value, you should not skip the column in the column list, so
insert into a(measurement_id, server_id, status)
must become
insert into a(id, measurement_id, server_id, status)
and the value passed accordingly...
If you expected one result (judging by the repeated server_id value), you need to add distinct to the loop query:
for server_id in (select distinct bs.id from status.servers bs
because currently your select returns three rows with the same bs.id, as a result of the join matching three rows on the join key...
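As an aside, the loop is not needed at all; the same inserts can be done in one set-based statement. A sketch under the same assumptions (measurement_id and dataitem are variables of the surrounding function):
INSERT INTO a (measurement_id, server_id, status)
-- measurement_id here resolves to the PL/pgSQL variable; rename it if a
-- joined table happens to have a column of the same name
SELECT DISTINCT measurement_id, bs.id, false
FROM status.servers bs
JOIN settings.config blc ON bs.id = blc.server_id
WHERE blc.lane_number = (dataitem->>'No')::smallint
  AND blc.min_length <= (dataitem->>'len')::real;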
I am working on a Postgres query to remove duplicates from a table. The following table is dynamically generated, and I want to write a query that removes a record when the first column holds a duplicate value.
The table looks something like this
1st col   2nd col
4         62
6         34
5         26
5         12
I want to write a query which removes either row 3 or row 4.
There is no need for an intermediate table:
delete from df1
where ctid not in (select min(ctid)
from df1
group by first_column);
If you are deleting many rows from a large table, the approach with an intermediate table is probably faster.
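For completeness, a sketch of that intermediate-table approach, reusing the distinct on trick shown next (table and column names as in this answer):
create table df1_dedup as
select distinct on (first_column) *
from df1
order by first_column;

drop table df1;
alter table df1_dedup rename to df1;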
If you just want to get unique values for one column, you can use:
select distinct on (first_column) *
from the_table
order by first_column;
Or simply
select first_column, min(second_column)
from the_table
group by first_column;
select count(first) as cnt, first, min(second) as second
from df1
group by first
having count(first) = 1
if you want to keep one of the rows (sorry, I initially missed that you wanted that):
select first, min(second)
from df1
group by first
Where the table's name is df1 and the columns are named first and second.
You can actually leave off the count(first) as cnt if you want.
At the risk of stating the obvious, once you know how to select the data you want (or don't want), deleting the records in any of a dozen ways is simple.
If you want to replace the table or make a new table you can just use create table as for the deletion:
create table tmp as
select count(first) as cnt, first, min(second) as second
from df1
group by first
having count(first) = 1;
drop table df1;
create table df1 as select * from tmp;
or using DELETE FROM:
DELETE FROM df1 WHERE first NOT IN (SELECT first FROM tmp);
You could also use select into, etc, etc.
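One caveat: NOT IN matches nothing if the subquery returns a NULL, so if first is nullable, the NOT EXISTS form is safer:
DELETE FROM df1
WHERE NOT EXISTS (SELECT 1
                  FROM tmp
                  WHERE tmp.first = df1.first);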
if you want to SELECT unique rows:
SELECT * FROM ztable u
WHERE NOT EXISTS ( -- There is no other record
SELECT * FROM ztable x
WHERE x.id = u.id -- with the same id
AND x.ctid < u.ctid -- , but with a different(lower) "internal" rowid
); -- so u.* must be unique
if you want to SELECT the other rows, which were suppressed in the previous query:
SELECT * FROM ztable nu
WHERE EXISTS ( -- another record exists
SELECT * FROM ztable x
WHERE x.id = nu.id -- with the same id
AND x.ctid < nu.ctid -- , but with a different(lower) "internal" rowid
);
if you want to DELETE records, making the table unique (but keeping one record per id):
DELETE FROM ztable d
WHERE EXISTS ( -- another record exists
SELECT * FROM ztable x
WHERE x.id = d.id -- with the same id
AND x.ctid < d.ctid -- , but with a different(lower) "internal" rowid
);
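If you would rather keep the row with the highest ctid per id instead of the lowest, flip the comparison:
DELETE FROM ztable d
WHERE EXISTS ( -- another record exists
SELECT * FROM ztable x
WHERE x.id = d.id -- with the same id
AND x.ctid > d.ctid -- , but with a higher "internal" rowid
);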
So basically I did this:
create temp table t1 as
select first, min(second) as second
from df1
group by first;

select *
from df1
inner join t1 on t1.first = df1.first and t1.second = df1.second;
It's a satisfactory answer. Thanks for your help #Hack-R