I have a table defined as:
CREATE TABLE process (
batch_id Integer
,product_id Integer
,machine_id Integer
,created_date DATE
,updated_date DATE
,primary key(batch_id,product_id,machine_id)
)
But I generally use SQL like
SELECT *
FROM process
WHERE product_id = 123
AND machine_id = 1
When I check the SQL plan for this, it does not use the primary key index.
Do I need to create another index on both columns?
The database is DB2.
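The composite primary key index leads with batch_id, which your WHERE clause doesn't filter on, so the optimizer generally can't use it for this query. A minimal sketch of an index that does match the predicate (the index name is illustrative):
CREATE INDEX ix_process_product_machine
    ON process (product_id, machine_id);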
I have 4 different tables that are linked to each other in the following way (I only kept the essential columns in each table to emphasise the relationships between them):
create TABLE public.country (
country_code varchar(2) NOT NULL PRIMARY KEY,
country_name text NOT NULL
);
create table public.address
(
id integer generated always as identity primary key,
country_code text not null,
CONSTRAINT FK_address_2 FOREIGN KEY (country_code) REFERENCES public.country (country_code)
);
create table public.client_order
(
id integer generated always as identity primary key,
address_id integer null,
CONSTRAINT FK_client_order_1 FOREIGN KEY (address_id) REFERENCES public.address (id)
);
create table public.client_order_line
(
id integer generated always as identity primary key,
client_order_id integer not null,
product_id integer not null,
client_order_status_id integer not null default 0,
quantity integer not null,
CONSTRAINT FK_client_order_line_0 FOREIGN KEY (client_order_id) REFERENCES public.client_order (id)
);
I want to get the data in the following way: for each client order line, show the product_id, quantity and country_name (corresponding to that client order line).
I tried this so far:
SELECT country_name FROM public.country WHERE country_code = (
SELECT country_code FROM public.address WHERE id = (
SELECT address_id FROM public.client_order WHERE id= 5
)
)
to get the country name given a client_order_id. I don't know how to change this to get all the information mentioned above from the client_order_line table, which looks like this:
id  client_order_id  product_id  status  quantity
1   1                122         0       1000
2   2                122         0       3000
3   2                125         0       3000
4   3                445         0       2000
Thanks a lot!
You need a few joins.
select col.client_order_id,
col.product_id,
col.client_order_status_id as status,
col.quantity,
c.country_name
from client_order_line col
left join client_order co on col.client_order_id = co.id
left join address a on co.address_id = a.id
left join country c on a.country_code = c.country_code
order by col.client_order_id;
Alternatively you can use your select query as a scalar subquery expression.
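For instance, a sketch of that variant (equivalent output; since client_order.id is the primary key, the subquery returns at most one row):
select col.client_order_id,
       col.product_id,
       col.client_order_status_id as status,
       col.quantity,
       (select c.country_name
          from client_order co
          join address a on co.address_id = a.id
          join country c on a.country_code = c.country_code
         where co.id = col.client_order_id) as country_name
from client_order_line col
order by col.client_order_id;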
Table definition is as follows:
CREATE TABLE public.the_table
(
id integer NOT NULL DEFAULT nextval('the_table_id_seq'::regclass),
report_timestamp timestamp without time zone NOT NULL,
value_id integer NOT NULL,
text_value character varying(255),
numeric_value double precision,
bool_value boolean,
dt_value timestamp with time zone,
exported boolean NOT NULL DEFAULT false,
CONSTRAINT the_table_fkey_valdef FOREIGN KEY (value_id)
REFERENCES public.value_defs (value_id) MATCH SIMPLE
ON UPDATE NO ACTION ON DELETE RESTRICT
)
WITH (
OIDS=FALSE
);
ALTER TABLE public.the_table
OWNER TO postgres;
Indices:
CREATE INDEX the_table_idx_id ON public.the_table USING brin (id);
CREATE INDEX the_table_idx_timestamp ON public.the_table USING btree (report_timestamp);
CREATE INDEX the_table_idx_tsvid ON public.the_table USING brin (report_timestamp, value_id);
CREATE INDEX the_table_idx_valueid ON public.the_table USING btree (value_id);
The query is:
SELECT * FROM the_table r WHERE r.value_id = 1064 ORDER BY r.report_timestamp desc LIMIT 1;
When running this query, PostgreSQL does not use the_table_idx_valueid.
Why?
If anything, this index will help:
CREATE INDEX ON the_table (value_id, report_timestamp);
Depending on the selectivity of the condition and the number of rows in the table, PostgreSQL may correctly deduce that a sequential scan and a sort is faster than an index scan.
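One way to verify is to compare the actual plan before and after creating that index, for example:
EXPLAIN (ANALYZE, BUFFERS)
SELECT * FROM the_table r
WHERE r.value_id = 1064
ORDER BY r.report_timestamp DESC
LIMIT 1;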
I have a database about weather that updates every second.
It contains temperature and wind speed.
This is my database:
CREATE TABLE `new_table`.`test` (
`id` INT(10) NOT NULL,
`date` DATETIME NOT NULL,
`temperature` VARCHAR(25) NOT NULL,
`wind_speed` INT(10) NOT NULL,
`humidity` FLOAT NOT NULL,
PRIMARY KEY (`id`))
ENGINE = InnoDB
DEFAULT CHARACTER SET = utf8
COLLATE = utf8_bin;
I need to find the average temperature every hour.
This is my code:
SELECT AVG(temperature), date
FROM new_table.test
GROUP BY HOUR(date)
My query works, but I want to move the average value and its date into another table.
This is the table:
CREATE TABLE `new_table`.`table1` (
`idsea_state` INT(10) NOT NULL,
`dateavg` DATETIME NOT NULL,
`avg_temperature` VARCHAR(25) NOT NULL,
PRIMARY KEY (`idsea_state`))
ENGINE = InnoDB
DEFAULT CHARACTER SET = utf8
COLLATE = utf8_bin;
Is it possible? Can you give me the code?
To insert new rows into a table based on data obtained from another table, you set up an INSERT query targeting the destination table, then run a sub-query that pulls the data from the source table; the result set returned by the sub-query supplies the values for the INSERT command.
Here is the basic structure; note that the VALUES keyword is not used:
INSERT INTO `table1`
(`dateavg`, `avg_temperature`)
SELECT `date` , avg(`temperature`)
FROM `test`;
It's also important to note that the columns returned by the result set are matched positionally to the columns listed in the INSERT clause of the outer query.
e.g. if you had a query
INSERT INTO table1 (`foo`, `bar`, `baz`)
SELECT `a`, `y`, `g` FROM table2
a would be inserted into foo
y would go into bar
g would go into baz
due to their respective positions
I have made a working demo - http://www.sqlfiddle.com/#!9/ff740/4
I made the changes below to simplify the example and demonstrate the concept involved.
Here are the DDL changes I made to your original code:
CREATE TABLE `test` (
`id` INT(10) NOT NULL AUTO_INCREMENT,
`date` DATETIME NOT NULL,
`temperature` FLOAT NOT NULL,
`wind_speed` INT(10),
`humidity` FLOAT ,
PRIMARY KEY (`id`))
ENGINE = InnoDB
DEFAULT CHARACTER SET = utf8
COLLATE = utf8_bin;
CREATE TABLE `table1` (
`idsea_state` INT(10) NOT NULL AUTO_INCREMENT,
`dateavg` VARCHAR(55),
`avg_temperature` VARCHAR(25),
PRIMARY KEY (`idsea_state`))
ENGINE = InnoDB
DEFAULT CHARACTER SET = utf8
COLLATE = utf8_bin;
INSERT INTO `test`
(`date`, `temperature`) VALUES
('2013-05-03', 7.5),
('2013-06-12', 17.5),
('2013-10-12', 37.5);
INSERT INTO `table1`
(`dateavg`, `avg_temperature`)
SELECT `date` , avg(`temperature`)
FROM `test`;
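To keep the original hourly requirement, the same INSERT ... SELECT pattern can carry a GROUP BY; a sketch, assuming DATE_FORMAT is used to truncate each timestamp to its hour:
INSERT INTO `table1`
(`dateavg`, `avg_temperature`)
SELECT DATE_FORMAT(`date`, '%Y-%m-%d %H:00:00'), AVG(`temperature`)
FROM `test`
GROUP BY DATE_FORMAT(`date`, '%Y-%m-%d %H:00:00');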
There is this field in a table:
room_id INT NOT NULL CONSTRAINT room_id_ref_room REFERENCES room
I have two tables for two kinds of rooms: standard_room and family_room
How to do something like this:
room_id INT NOT NULL CONSTRAINT room_id_ref_room REFERENCES standard_room or family_room
I mean, room_id should reference either standard_room or family_room.
Is it possible to do so?
Here is the pattern I've been using.
CREATE TABLE room (
room_id serial primary key,
room_type VARCHAR not null,
CHECK CONSTRAINT room_type in ("standard_room","family_room"),
UNIQUE (room_id, room_type)
);
CREATE_TABLE standard_room (
room_id integer primary key,
room_type VARCHAR not null default "standard_room",
FOREIGN KEY (room_id, room_type) REFERENCES room (room_id, room_type),
CHECK CONSTRAINT room_type = "standard_room"
);
CREATE_TABLE family_room (
room_id integer primary key,
room_type VARCHAR not null default "family_room",
FOREIGN KEY (room_id, room_type) REFERENCES room (room_id, room_type),
CHECK CONSTRAINT room_type = "family_room"
);
That is, the 'subclasses' point at the super-class by way of a type discriminator column, so that the pointed-to base row is of the correct type and the primary key of the super-class matches the child's.
Here's the same SQL from the accepted answer, fixed to work on Postgres 12.8. There are a few issues beyond the CREATE_TABLE syntax mistake:
CREATE TABLE room (
room_id serial primary key,
room_type VARCHAR not null,
CONSTRAINT room_in_scope CHECK (room_type in ('standard_room','family_room')),
CONSTRAINT unique_room_type_combo UNIQUE (room_id, room_type)
);
CREATE TABLE standard_room (
room_id integer primary key,
room_type VARCHAR not null default 'standard_room',
CONSTRAINT roomid_std_roomtype_fk FOREIGN KEY (room_id, room_type) REFERENCES public."room" (room_id, room_type),
CONSTRAINT std_room_constraint CHECK (room_type = 'standard_room')
);
CREATE TABLE family_room (
room_id integer primary key,
room_type VARCHAR not null default 'family_room',
CONSTRAINT roomid_fam_roomtype_fk FOREIGN KEY (room_id, room_type) REFERENCES "room" (room_id, room_type),
CONSTRAINT fam_room_constraint CHECK (room_type = 'family_room')
);
NOTE: The SQL above uses check constraints to enforce that the child tables' room_type values default to the parent table's room_type values: 'standard_room' or 'family_room'.
PROBLEM: Since each child table's primary key is the parent room's primary key, you can't insert more than one record per room into these two child tables.
insert into room (room_type) VALUES ('standard_room'); -- Works
insert into room (room_type) values ('family_room'); -- Works
insert into standard_room (room_id, pictureAttachment) VALUES (1, 'Before Paint'); -- Works
insert into standard_room (room_id, pictureAttachment) VALUES (1, 'After Paint'); -- Fails
insert into standard_room (room_id, pictureAttachment) VALUES (1, 'With Furniture'); -- Fails
insert into family_room (room_id, pictureAttachment) VALUES (2, 'Before Kids'); -- Works
insert into family_room (room_id, pictureAttachment) VALUES (2, 'With Kids'); -- Fails
To make the tables accept more than one row, you have to remove the primary keys from the standard_room and family_room tables, which is BAD database design.
Despite 26 upvotes I will ping OP about this as I can see the answer was typed free hand.
Alternate Solutions
For smallish tables with fewer than a handful of variations, a simple alternative is a single table with Bool columns standing in for the different child tables.
Single Table "Room"

Id  IsStandardRoom  IsFamilyRoom  Desc             Dimensions
1   True            False         Double Bed, BIR  3 x 4
2   False           True          3 Set Lounge     5.5 x 7
SELECT * FROM Room WHERE IsStandardRoom = true;
At the end of the day, in a relational database it's not very common to add room types when doing so means creating the related tables with DDL commands (CREATE, ALTER, DROP).
A typical future-proof database design allowing for more tables would look something like this:
Multi Many-To-Many Table "Room"

Id  TableName  TableId
1   Std        8544
2   Fam        236
3   Std        4351
Either Standard or Family:
select * from standard_room sr where sr.room_id in
(select TableId from room where TableName = 'Std');
select * from family_room fr where fr.room_id in
(select TableId from room where TableName = 'Fam');
Or both:
select * from standard_room sr where sr.room_id in
(select TableId from room where TableName = 'Std')
UNION
select * from family_room fr where fr.room_id in
(select TableId from room where TableName = 'Fam');
Sample SQL to demo Polymorphic fields:
If you want to have different Data Types in the polymorphic foreign key fields then you can use this solution. Table r1 stores a TEXT column, r2 stores a TEXT[] Array column and r3 a POLYGON column:
CREATE OR REPLACE FUNCTION null_zero(anyelement)
RETURNS INTEGER
LANGUAGE SQL
AS $$
SELECT CASE WHEN $1 IS NULL THEN 0 ELSE 1 END;
$$;
CREATE TABLE r1 (
r1_id SERIAL PRIMARY KEY
, r1_text TEXT
);
INSERT INTO r1 (r1_text)
VALUES ('foo bar'); --TEXT
CREATE TABLE r2 (
r2_id SERIAL PRIMARY KEY
, r2_text_array TEXT[]
);
INSERT INTO r2 (r2_text_array)
VALUES ('{"baz","blurf"}'); --TEXT[] ARRAY
CREATE TABLE r3 (
r3_id SERIAL PRIMARY KEY
, r3_poly POLYGON
);
INSERT INTO r3 (r3_poly)
VALUES ( '((1,2),(3,4),(5,6),(7,8))' ); --POLYGON
CREATE TABLE flex_key_shadow (
flex_key_shadow_id SERIAL PRIMARY KEY
, r1_id INTEGER REFERENCES r1(r1_id)
, r2_id INTEGER REFERENCES r2(r2_id)
, r3_id INTEGER REFERENCES r3(r3_id)
);
ALTER TABLE flex_key_shadow ADD CONSTRAINT only_one_r
CHECK(
null_zero(r1_id)
+ null_zero(r2_id)
+ null_zero(r3_id)
= 1)
;
CREATE VIEW flex_key AS
SELECT
flex_key_shadow_id as Id
, CASE
WHEN r1_id IS NOT NULL THEN 'r1'
WHEN r2_id IS NOT NULL THEN 'r2'
WHEN r3_id IS NOT NULL THEN 'r3'
ELSE 'wtf?!?'
END AS "TableName"
, CASE
WHEN r1_id IS NOT NULL THEN r1_id
WHEN r2_id IS NOT NULL THEN r2_id
WHEN r3_id IS NOT NULL THEN r3_id
ELSE NULL
END AS "TableId"
FROM flex_key_shadow
;
INSERT INTO public.flex_key_shadow (r1_id,r2_id,r3_id) VALUES
(1,NULL,NULL),
(NULL,1,NULL),
(NULL,NULL,1);
SELECT * FROM flex_key;
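Given the three shadow rows inserted above, this should return (1, 'r1', 1), (2, 'r2', 1) and (3, 'r3', 1): one virtual key per referenced table.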
Using: Firebird 2.52
For the following SELECT to perform well, do I need indexes on additional fields in my table?
Desired query:
select inventory_id, max(batch_no) from invty_batch
where inventory_id = :I
group by inventory_id
Table structure:
CREATE TABLE INVTY_BATCH (
ROW_ID INTEGER NOT NULL,
INVENTORY_ID INTEGER NOT NULL,
BATCH_NO VARCHAR(8) NOT NULL,
INVTYRCPT_ID INTEGER NOT NULL,
UNITPRICE NUMERIC(12, 2) DEFAULT 0.0 NOT NULL);
ALTER TABLE INVTY_BATCH ADD PRIMARY KEY (ROW_ID);
CREATE UNIQUE INDEX IXINVTYIDBATCHNO ON INVTY_BATCH(INVENTORY_ID,BATCH_NO);
Will creating indexes on inventory_id and batch_no columns benefit performance for the given query?
Try creating an index on the batch_no field, because the query searches on that field.
PS: Use a descending index, because the search is for the max value.
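A sketch of that suggestion in Firebird syntax (the index name is illustrative):
CREATE DESCENDING INDEX IX_INVTY_BATCHNO_DESC
    ON INVTY_BATCH (BATCH_NO);
A descending composite index on (INVENTORY_ID, BATCH_NO) would also cover the equality filter on inventory_id, if the existing ascending unique index isn't picked for the MAX.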