I have a temporary table #temp with empty columns test001 through test048.
I want to update each column individually, for each row, with some voltage values.
The output should look like the table below:
test001   test002   test003
101.6     NULL      99.25
NULL      102.5     89.45
NULL      68.45     103.0
I can do it with a WHILE loop, updating the cell for each column individually, but the WHILE loop takes more than 3 minutes to process thousands of records.
Is there any alternative way to update the columns row-wise?
Thanks in advance
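One common alternative is a single set-based UPDATE with a JOIN instead of the per-row loop. The sketch below is only an illustration: the source table #voltage_readings and the join key RowID are assumptions, since the question does not show the real schema.
-- Sketch: set-based update instead of a WHILE loop (T-SQL).
-- #voltage_readings and RowID are hypothetical; adjust to the real schema.
UPDATE t
SET t.test001 = v.test001,
    t.test002 = v.test002,
    t.test003 = v.test003
    -- ... remaining testNNN columns ...
FROM #temp AS t
JOIN #voltage_readings AS v
    ON v.RowID = t.RowID;
A statement like this touches every row once, which is usually far faster than thousands of single-row updates.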
I have seen a strange behavior in PostgreSQL (PostGIS).
I have two tables in PostGIS with geometry columns. One table is a grid and the other one is lines. I want to delete all grid cells that no line passes through.
In other words, I want to delete the rows from one table when they have no spatial intersection with any row of the second table.
First, in a subquery, I find the ids of the rows that have an intersection. Then I delete every row whose id is not in that returned list of ids.
DELETE FROM base_grid_916453354
WHERE id NOT IN
(
SELECT DISTINCT bg.id
FROM base_grid_916453354 bg,
     (SELECT * FROM tracks_heatmap_1000 LIMIT 100000) tr
WHERE bg.geom && tr.buffer
);
The following subquery returns in only 12 seconds:
SELECT DISTINCT bg.id
FROM base_grid_916453354 bg,
     (SELECT * FROM tracks_heatmap_1000 LIMIT 100000) tr
WHERE bg.geom && tr.buffer
while the whole query did not return even after 1 hour!
I ran the EXPLAIN command, but I cannot interpret its result:
How can I improve this query, and why does deleting based on the returned list take so much time?
It is very strange, because the subquery is a spatial query between two tables of 9 million and 100k rows, while the delete part is just checking a list and deleting! In my mind, the delete part should be much, much easier.
Don't post text as images of text!
Increase work_mem until the subplan becomes a hashed subplan.
Or rewrite the query to use NOT EXISTS rather than NOT IN.
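For the first suggestion, a minimal sketch; the 256MB figure is only an example value to tune:
-- Raise work_mem for the current session only, then re-run the DELETE;
-- keep increasing it until EXPLAIN shows a hashed SubPlan for the NOT IN.
SET work_mem = '256MB';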
I found a fast way to do this query. As @jjanes said, I used NOT EXISTS:
DELETE FROM base_grid_916453354 bg
WHERE NOT EXISTS
(
SELECT 1
FROM tracks_heatmap_1000 tr
WHERE bg.geom && tr.buffer
);
This query takes around 1 minute, which is acceptable for the size of my tables.
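A likely reason for the difference, as far as I can tell: PostgreSQL can plan NOT EXISTS as an anti-join, whereas a NOT IN whose subquery result does not fit in a hashed subplan gets re-scanned for every candidate row, which is consistent with the hashed-subplan advice above.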
I use PostgreSQL and I have a database table with more than 5 million records. The structure of the table is as follows:
A lot of records are inserted every day, and there are many records with the same reference.
I want to select all records, but without duplicates, i.e. without multiple rows for the same reference.
I tried the following query:
SELECT DISTINCT ON (reference) reference_url, reference
FROM daily_run_vehicle
WHERE handled = false AND retries < 5
ORDER BY reference DESC;
It executes and gives me the correct result, but it takes too long.
Is there any better way to do this?
Create sort keys on the columns you used in the WHERE condition.
After a large data load into the table, run the VACUUM command; it will refresh all the keys. After that, analyze the table with the ANALYZE command; that helps rebuild the table's statistics.
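On plain PostgreSQL the closest equivalent of the sort-key suggestion is an index on the filtered and ordered columns. A sketch under that assumption (the index name is made up, and a partial index matching the WHERE clause is just one option):
-- Partial index matching the query's filter; DISTINCT ON (reference) ... ORDER BY reference DESC
-- can then read the index instead of sorting 5M+ rows.
CREATE INDEX daily_run_vehicle_ref_idx
    ON daily_run_vehicle (reference DESC)
    WHERE handled = false AND retries < 5;

-- Refresh dead-row bookkeeping and planner statistics after large loads.
VACUUM ANALYZE daily_run_vehicle;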
Instead of stating each column name individually, is there a more efficient way to select all rows that do not have any NULLs from a table in a Postgres database?
For example, if there are 20 columns in a table, how can I avoid typing out each of those columns individually?
Just check the whole row:
select *
from my_table
where my_table is not null
my_table is not null is only true if all columns in that row are not null.
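A minimal demonstration, with a made-up table, of how the row-level test behaves:
CREATE TEMP TABLE my_table (a int, b text, c date);

INSERT INTO my_table VALUES
    (1, 'x',  '2020-01-01'),  -- no NULLs: kept
    (2, NULL, '2020-01-02');  -- one NULL: filtered out

SELECT *
FROM my_table
WHERE my_table IS NOT NULL;   -- true only when every column is non-null
Only the first row comes back, because the whole-row test is true only when every column in the row is non-null.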
I want to sum and subtract two or more timestamp columns.
I'm using PostgreSQL and I have a structure as you can see:
I can't round the minutes or seconds, so I'm trying to extract the EPOCH and do the operation afterwards, but I always get an error: the first EXTRACT recognizes the column, but when I add the second EXTRACT to the same SQL command I get an error message saying that the second column does not exist.
I'll give you an example:
SELECT
EXAMPLE.PERSON_ID,
COALESCE(EXTRACT(EPOCH from EXAMPLE.LEFT_AT),0) +
COALESCE(EXTRACT(EPOCH from EXAMPLE.ARRIVED_AT),0) AS CREDIT
FROM
EXAMPLE
WHERE
EXAMPLE.PERSON_ID = 1;
In this example I would get an error like:
Column ARRIVED_AT does not exist
Why is this happening?
Can I sum/subtract time values from the same row?
Is ARRIVED_AT a calculated value instead of a column? What did you run to get the query results image you posted showing those columns?
The following script does what you expect, so there's something about the structure of the table you're querying that isn't what you expect.
CREATE SCHEMA so46801016;
SET search_path=so46801016;
CREATE TABLE trips (
person_id serial primary key,
arrived_at time,
left_at time
);
INSERT INTO trips (arrived_at, left_at) VALUES
('14:30'::time, '19:30'::time)
, ('11:27'::time, '20:00'::time)
;
SELECT
t.person_id,
COALESCE(EXTRACT(EPOCH from t.left_at),0) +
COALESCE(EXTRACT(EPOCH from t.arrived_at),0) AS credit
FROM
trips t;
DROP SCHEMA so46801016 CASCADE;
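For these two sample rows the credit column comes out as 122400 and 113220: EXTRACT(EPOCH FROM ...) on a time value yields seconds since midnight, so the query simply sums the two times in each row.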
My table looks like this:
Table name:   Number_List
Columns:      Num_ID        INTEGER
              First_Number  VARCHAR(16)
              Last_Number   VARCHAR(16)
In that table, Num_ID is the PK, and the columns First_Number and Last_Number always hold an 8-digit number.
My requirement is to update those columns to 6-digit entries.
Consider the entries in the two columns to be 32659814 (First_Number) and 32659819 (Last_Number). I need to write an update query that changes them to 326598 (First_Number) and 326598 (Last_Number).
The table has 15K entries and I need to update all of them with a single query in a single execution.
Please help me to resolve this.
TIA.
All you need is SUBSTR:
UPDATE SESSION.NUMBER_LIST
SET FIRST_NUMBER = SUBSTR(FIRST_NUMBER, 1, 6)
  , LAST_NUMBER  = SUBSTR(LAST_NUMBER, 1, 6);
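A quick check on the sample values from the question; SYSIBM.SYSDUMMY1 is Db2's one-row dummy table, which matches the SESSION schema used above (this assumes a Db2 environment):
-- Both sample numbers truncate to the same 6-digit prefix.
SELECT SUBSTR('32659814', 1, 6) AS first_number,   -- 326598
       SUBSTR('32659819', 1, 6) AS last_number     -- 326598
FROM SYSIBM.SYSDUMMY1;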