Oracle: TABLE ACCESS FULL with Primary key? - oracle10g

There is a table:
CREATE TABLE temp
(
IDR decimal(9) NOT NULL,
IDS decimal(9) NOT NULL,
DT date NOT NULL,
VAL decimal(10) NOT NULL,
AFFID decimal(9),
CONSTRAINT PKtemp PRIMARY KEY (IDR,IDS,DT)
)
;
Let's see the plan for a select-star query:
SQL>explain plan for select * from temp;
Explained.
SQL> select plan_table_output from table(dbms_xplan.display('plan_table',null,'serial'));
PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------
---------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)|
---------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 61 | 2 (0)|
| 1 | TABLE ACCESS FULL| TEMP | 1 | 61 | 2 (0)|
---------------------------------------------------------------
Note
-----
- 'PLAN_TABLE' is old version
11 rows selected.
SQL Server 2008 shows a Clustered Index Scan in the same situation. What is the reason?

select * with no where clause means: read every row in the table and fetch every column.
What do you gain by using an index? You have to go to the index, get a rowid, translate the rowid into a table offset, and then read the table block.
What happens when you do a full table scan? You go to the first rowid in the table, then read on through the table to the end.
Which one of these is faster, given the table you have above? The full table scan. Why? Because it skips having to go to the index, retrieve values, and then go back to where the table lives to fetch the rows.
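By contrast, here is a sketch of queries where the optimizer can use the index instead, because everything needed is already in the index or the predicate is selective on the leading key columns. Depending on statistics, expect something like INDEX FAST FULL SCAN or INDEX RANGE SCAN rather than TABLE ACCESS FULL:
-- only indexed (NOT NULL) columns are selected, so the index alone can answer it
explain plan for select IDR, IDS, DT from temp;
select plan_table_output from table(dbms_xplan.display('plan_table',null,'serial'));
-- a selective predicate on the leading primary key columns
explain plan for select * from temp where IDR = 1 and IDS = 2;
select plan_table_output from table(dbms_xplan.display('plan_table',null,'serial'));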

To answer this more simply without mumbo-jumbo, the reason is:
Clustered Index = Table
That's by definition in SQL Server. If this is not clear, look up the definition.
To be absolutely clear once again, since most people seem to miss this: the clustered index IS the table itself. It therefore follows that "Clustered Index Scan" is another way of saying "Table Scan", or what Oracle calls "TABLE ACCESS FULL".
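A small T-SQL sketch of the same experiment on the SQL Server side (the table name temp2 is just a hypothetical stand-in to avoid clashing with the Oracle table): the PRIMARY KEY creates a clustered index by default, and SELECT * shows a Clustered Index Scan, i.e. a scan of the table itself:
CREATE TABLE temp2
(
    IDR decimal(9) NOT NULL,
    IDS decimal(9) NOT NULL,
    DT date NOT NULL,
    VAL decimal(10) NOT NULL,
    AFFID decimal(9),
    CONSTRAINT PKtemp2 PRIMARY KEY (IDR, IDS, DT)   -- clustered by default
);
GO
SET SHOWPLAN_TEXT ON;
GO
SELECT * FROM temp2;   -- expect: Clustered Index Scan on PKtemp2
GO
SET SHOWPLAN_TEXT OFF;
GO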

Related

Sometimes a PostgreSQL query takes too much time to load data

I am using PostgreSQL 13.4 and have intermediate-level experience with PostgreSQL.
I have a table which stores readings from a device for each customer. For each customer we receive around 3,000 readings a day, and we store them in our usage_data table as below.
We have the proper indexing in place.
Column | Data Type | Index name | Idx Access Type
-------------+-----------------------------+---------------------------+---------------------------
id | bigint | |
customer_id | bigint | idx_customer_id | btree
date | timestamp | idx_date | btree
usage | bigint | idx_usage | btree
amount | bigint | idx_amount | btree
I also have a composite index on two columns, shown below.
CREATE INDEX idx_cust_date
ON public.usage_data USING btree
(customer_id ASC NULLS LAST, date ASC NULLS LAST)
;
Problem
I have observed a few weird incidents.
When I tried to get the data for 06/02/2022 for a customer, it took almost 20 seconds. It's a simple query, as below:
SELECT * FROM usage_data WHERE customer_id =1 AND date = '2022-02-06'
The execution plan
When I execute the same kind of query for a 15-day range, I receive the result in 32 milliseconds.
SELECT * FROM usage_data WHERE customer_id =1 AND date > '2022-05-15' AND date <= '2022-05-30'
The execution plan
Tried Solution
I thought it might be an indexing issue, since I am facing this only for this particular date.
Hence I dropped all the indexes from the table and recreated them.
But that didn't resolve the problem.
Solution That Worked (Not Recommended)
To solve this I tried another way: I created a new database and restored the old database into it.
I executed the same query in the new database, and this time it took 10 milliseconds.
The execution plan
I don't think this is a proper solution for a production server.
Any idea why we are facing this issue for this specific data?
Please let me know if any additional information is required.
Please guide. Thanks
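For reference, the execution plans mentioned above were attached as links; a plan like that can be regenerated with EXPLAIN (ANALYZE, BUFFERS). A sketch for the first query:
EXPLAIN (ANALYZE, BUFFERS)
SELECT * FROM usage_data WHERE customer_id = 1 AND date = '2022-02-06';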

Is this INSERT statement containing SELECT subquery safe for multiple concurrent writes?

In Postgres, suppose I have the following table, to be used like a singly linked list, where each row has a reference to the previous row.
Table node
Column | Type | Collation | Nullable | Default
-------------+--------------------------+-----------+----------+----------------------
id | uuid | | not null | gen_random_uuid()
created_at | timestamp with time zone | | not null | now()
name | text | | not null |
prev_id | uuid | | |
I have the following INSERT statement, which includes a SELECT to look up the last row, used as data for the new row to be inserted.
INSERT INTO node(name, prev_id)
VALUES (
:name,
(
SELECT id
FROM node
ORDER BY created_at DESC
LIMIT 1
)
)
RETURNING id;
I understand storing prev_id may seem redundant in this example (ordering can be derived from created_at), but that is beside the point. My question: Is the above INSERT statement safe for multiple concurrent writes? Or, is it necessary to explicitly use LOCK in some way?
For clarity, by "safe" I mean: is it possible that by the time the SELECT subquery has executed and found the "last row", another concurrent query has just finished an insert, so the "last row" found earlier is no longer the last row and this insert uses the wrong "last row" value? The effect would be that multiple rows share the same prev_id value, which is invalid for a linked list structure.
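One way to make such a pattern safe (a sketch, not necessarily the only option) is to serialize the writers with a transaction-scoped advisory lock; the key 42 below is an arbitrary, hypothetical number chosen to represent this list:
BEGIN;
-- every writer takes the same advisory lock, so the SELECT ... LIMIT 1 and the
-- INSERT run one writer at a time; the lock is released automatically at COMMIT
SELECT pg_advisory_xact_lock(42);
INSERT INTO node(name, prev_id)
VALUES (
    :name,
    (SELECT id FROM node ORDER BY created_at DESC LIMIT 1)
)
RETURNING id;
COMMIT;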

PostgreSQL 13 - Performance Improvement to delete large table data

I am using PostgreSQL 13 and have intermediate-level experience with PostgreSQL.
I have a table named tbl_employee. It stores employee details for a number of customers.
Below is my table structure, followed by datatype and index access method
Column | Data Type | Index name | Idx Access Type
-------------+-----------------------------+---------------------------+---------------------------
id | bigint | |
name | character varying | |
customer_id | bigint | idx_customer_id | btree
is_active | boolean | idx_is_active | btree
is_delete | boolean | idx_is_delete | btree
I want to delete the employees of a specific customer by customer_id.
The table has 1,800,000+ records in total.
When I execute the query below for customer_id 1001, it returns a count of 85,000.
SELECT COUNT(*) FROM tbl_employee WHERE customer_id=1001;
When I perform the delete operation for this customer using the query below, it takes 2 hours 45 minutes to delete the records.
DELETE FROM tbl_employee WHERE customer_id=1001
Problem
I would expect this query to take less than a minute to delete the records. Is it normal for it to take such a long time, or is there a way we can optimise it and reduce the execution time?
Below is the EXPLAIN output of the delete query.
The values are seq_page_cost = 1 and random_page_cost = 4.
Below is the number of pages occupied by the table "tbl_employee", from pg_class.
Please guide. Thanks
During:
DELETE FROM tbl_employee WHERE customer_id=1001
is there any other operation accessing this table? If this SQL is the only thing accessing the table, I don't think it should take so much time.
In an RDBMS, each SQL statement is also a transaction, unless it's wrapped in BEGIN; and COMMIT; to make a multi-statement transaction.
It's possible your multirow DELETE statement is generating a very large transaction that's forcing PostgreSQL to thrash -- to spill its transaction logs from RAM to disk.
You can try repeating this statement until you've deleted all the rows you need to delete:
DELETE FROM tbl_employee WHERE customer_id=1001 LIMIT 1000;
Doing it this way will keep your transactions smaller, and may avoid the thrashing.
DELETE FROM tbl_employee WHERE customer_id=1001 LIMIT 1000; will not work, though, because DELETE in PostgreSQL does not support LIMIT.
To make the batch delete smaller, you can try this:
DELETE FROM tbl_employee WHERE ctid IN (SELECT ctid FROM tbl_employee where customer_id=1001 limit 1000)
Repeat until there is nothing left to delete.
Here ctid is a system column of PostgreSQL tables; it identifies the physical location of a row.

Update part (order number) that is primary key in Postgres

I have a table:
book_id | part | name
1 | 1 | chap 1
1 | 2 | chap 2
1 | 3 | chap 3
1 | 4 | chap 4
The primary key is (book_id, part).
How can I delete part 2 and update the order of the parts to get:
book_id | part | name
1 | 1 | chap 1
1 | 2 | chap 3
1 | 3 | chap 4
I can do it in a transaction and first delete part 2, but how can I then update the part column without getting a duplicate primary key error?
I would choose a different approach. Instead of persisting the part number, persist the order of the parts:
CREATE TABLE book_part (
book_id bigint NOT NULL,
part_order real NOT NULL,
name text NOT NULL,
PRIMARY KEY (book_id, part_order)
);
The first part that gets entered gets a part_order of 0.0. If you add a part at the beginning or the end, you just assign a part_order 1.0 less or more than the previous minimum or maximum. If you insert a part between two existing parts, you assign a part_order that is the arithmetic mean of the adjacent parts.
An example:
-- insert the first part
INSERT INTO book_part VALUES (1, 0.0, 'Introduction');
-- insert a part at the end
INSERT INTO book_part VALUES (1, 1.0, 'Getting started with PostgreSQL');
-- insert a part between the two existing parts
INSERT INTO book_part VALUES (1, 0.5, 'The history of PostgreSQL');
-- adding yet another part between two existing parts
INSERT INTO book_part VALUES (1, 0.25, 'An introductory example');
The actual part number is calculated when you query the table:
SELECT book_id,
row_number() OVER (PARTITION BY book_id ORDER BY part_order) AS part,
name
FROM book_part;
The beauty of that is that you don't need to update a lot of rows when you add or delete a part.
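For example, removing a part is just a DELETE; since the part number is computed by row_number() at query time, the remaining parts renumber themselves (a small sketch using the rows inserted above):
-- delete "The history of PostgreSQL"; re-running the SELECT above shows
-- the remaining parts renumbered, with no UPDATE at all
DELETE FROM book_part WHERE book_id = 1 AND part_order = 0.5;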
Unlike most RDBMSs, PostgreSQL cannot update primary key values when an intermediate state would collide with an existing value, unless you use a deferred constraint.
In fact, PostgreSQL checks the constraint row by row as the update proceeds, which makes it find a "phantom" duplicate key, while other RDBMSs that respect the standard use a set-based check (MS SQL Server, Oracle, DB2...).
So you must use a deferred constraint.
ALTER TABLE book_part
ALTER CONSTRAINT ??? *PK constraint name* ??? DEFERRABLE INITIALLY IMMEDIATE;
This is a severe limitation of PG... See "5 – The hard way to update unique values" in
http://mssqlserver.fr/postgresql-vs-sql-server-mssql-part-3-very-extremely-detailed-comparison/
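Once the primary key is deferrable, the delete-and-renumber from the question can be done in one transaction. A sketch, assuming the question's table is named book_chapter (a hypothetical name, since the question doesn't give one) and its primary key is deferrable:
BEGIN;
SET CONSTRAINTS ALL DEFERRED;            -- defer the (deferrable) PK check
DELETE FROM book_chapter WHERE book_id = 1 AND part = 2;
UPDATE book_chapter SET part = part - 1
WHERE book_id = 1 AND part > 2;          -- transient duplicates are tolerated
COMMIT;                                  -- the PK is checked once, here at the end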

Is it possible to use postgres/psql COPY into a specific partition of a table?

I am currently looking into an efficient way to allocate data into a partitioned table. Is it possible to use postgres/psql to COPY data into a specific table partition (instead of using INSERT)?
According to the documentation on COPY here:
COPY FROM can be used with plain, foreign, or partitioned tables or with views that have INSTEAD OF INSERT triggers.
And according to the documentation on partitioning here:
Be aware that COPY ignores rules. If you want to use COPY to insert data, you'll need to copy into the correct partition table rather than into the master. COPY does fire triggers, so you can use it normally if you use the trigger approach.
From my understanding of the aforementioned resources, it seems possible to copy into a partition; however, I can't find any examples or support for that online.
In other words, can I write something like:
COPY some_table_partition_one FROM '/some_dir/some_file'
COPY to a partitioned table was introduced in v11:
Allow INSERT, UPDATE, and COPY on partitioned tables to properly route rows to foreign partitions (Etsuro Fujita, Amit Langote)
But COPY directly to a partition is possible in all releases since v10, where declarative partitioning was introduced.
It seems like we forgot to remove the second quotation from the documentation.
It is possible at least with PG 12.2:
CREATE TABLE measurement (
city_id int not null,
logdate date not null,
peaktemp int,
unitsales int
) PARTITION BY RANGE (logdate);
CREATE TABLE
CREATE TABLE measurement_y2020m03 PARTITION OF measurement
FOR VALUES FROM ('2020-03-01') TO ('2020-03-31');
CREATE TABLE
CREATE TABLE measurement_y2020m04 PARTITION OF measurement
FOR VALUES FROM ('2020-04-01') TO ('2020-04-30');
CREATE TABLE
insert into measurement values (1, current_date, 10,100);
INSERT 0 1
select * from measurement;
city_id | logdate | peaktemp | unitsales
---------+------------+----------+-----------
1 | 2020-03-27 | 10 | 100
(1 row)
cat /tmp/m.dat
4,2020-04-01,40,400
copy measurement_y2020m04 from '/tmp/m.dat' delimiter ',';
COPY 1
select * from measurement;
city_id | logdate | peaktemp | unitsales
---------+------------+----------+-----------
1 | 2020-03-27 | 10 | 100
4 | 2020-04-01 | 40 | 400
(2 rows)