How can I find a null field efficiently using index - postgresql

I have a query where I'm trying to find a null field from millions of records. There will only be one or two.
The query looks like this:
SELECT *
FROM "table"
WHERE "id" = $1
AND "end_time" IS NULL
ORDER BY "start_time" DESC LIMIT 1
How can I make this query more performant, e.g. by using indexes in the database?

Try a partial index, something like:
create index iname on "table" (id, start_time) where end_time is null;
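A quick way to check that the planner actually picks the partial index, as a sketch (the literal 42 below is only a placeholder for the $1 parameter):

-- 42 stands in for the $1 parameter; run this against your real data
explain (analyze, buffers)
select *
from "table"
where id = 42
  and end_time is null
order by start_time desc
limit 1;

Because the partial index only contains the one or two rows where end_time is null, the lookup touches a handful of pages no matter how many millions of finished rows the table holds.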

Related

creating index with true=any() on postgres and making queries faster

I have one query which cannot be modified, but I need to create indexes to make it faster. The query is:
select allcommuni0_.assignedto as col_0_0_
from communication.vwallcommunication allcommuni0_
where (allcommuni0_.projectid in (750))
  and true = any (select case when lower(communicat1_.claimnumber) = lower('193') then true else false end
                  from communication.vwclaimdetails communicat1_
                  where allcommuni0_.communicationviewid = communicat1_.communicationviewid
                    and allcommuni0_.type = communicat1_.type)
group by allcommuni0_.assignedto
limit 100;
There are no indexes right now. How do I create an optimized index for the true=any() construct on Postgres?
Create an index on the columns type and communicationviewid.
CREATE INDEX vwclaimdetails_lower_claimnumber_idx ON communication.claimdetails (
    (CASE WHEN communicationoutboundid IS NOT NULL
          THEN communicationoutboundid
          ELSE communicationinboundid END),
    (CASE WHEN communicationoutboundid IS NOT NULL
          THEN 'Outbound'::text
          ELSE 'Inbound'::text END)
);
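In addition, here is a sketch of the plain B-tree indexes suggested above, assuming the view columns communicationviewid, type and claimnumber map directly onto columns of the base table communication.claimdetails (that mapping is an assumption; adjust to the real view definition):

-- supports the correlated subquery's join back to the outer query
create index claimdetails_view_join_idx
    on communication.claimdetails (communicationviewid, type);

-- supports the lower(claimnumber) = lower('193') comparison
create index claimdetails_lower_claimnumber_idx
    on communication.claimdetails (lower(claimnumber));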

index with where condition - Oracle

I would like to have an equivalent Oracle statement for the SQL Server query below.
SQL QUERY:
CREATE UNIQUE NONCLUSTERED INDEX ValidSub_Category ON ValidSub (Category ASC) WHERE (category IS NOT NULL)
Purpose: this index is created to allow the column to contain more than one NULL record while not allowing duplicate strings.
Thanks in advance
I found it
CREATE UNIQUE INDEX VALIDSUB_CATEGORY
ON VALIDSUB (CASE WHEN Category IS NOT NULL THEN CATEGORY END);
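This works because Oracle does not store index entries whose key is entirely NULL; the CASE expression evaluates to NULL for NULL categories, so those rows never enter the unique index. A small throwaway demonstration (the table definition here is hypothetical):

CREATE TABLE ValidSub (Category VARCHAR2(50));

CREATE UNIQUE INDEX ValidSub_Category
    ON ValidSub (CASE WHEN Category IS NOT NULL THEN Category END);

INSERT INTO ValidSub VALUES (NULL);   -- allowed
INSERT INTO ValidSub VALUES (NULL);   -- also allowed: NULL keys are not indexed
INSERT INTO ValidSub VALUES ('A');    -- allowed
INSERT INTO ValidSub VALUES ('A');    -- fails with ORA-00001 (unique constraint violated)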

Select rows in postgres table where an array field contains NULL

Our system uses postgres for its database.
We have queries that can select rows from a database table where an array field in the table contains a specific value, e.g.:
Find which employee manages the employee with ID 123.
staff_managed_ids is a postgres array field containing the IDs of the employees that THIS employee manages.
This query works as expected:
select *
from employees
where 123=any(staff_managed_ids)
We now need to query where an array field contains a postgres NULL. We tried the following query, but it doesn't work:
select *
from employees
where NULL=any(staff_managed_ids)
We know the staff_managed_ids array field contains NULLs from other queries.
Are we using NULL wrongly?
NULL cannot be compared using =. The only predicates that work with NULL are IS NULL and IS NOT NULL.
To check for nulls, you need to unnest the elements:
select e.*
from employees e
where exists (select *
              from unnest(e.staff_managed_ids) as x(staff_id)
              where x.staff_id is null);
If all your id values are positive, you could write something like this:
select *
from employees
where (-1 < all(staff_managed_ids)) is null;
How this works: -1 should be less than all values, but a comparison with NULL makes the whole array-comparison expression NULL.
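A quick sketch checking both approaches against throwaway data (the table employees_demo and its values are made up for illustration):

create temp table employees_demo (id int, staff_managed_ids int[]);
insert into employees_demo values
    (1, array[123, 456]),
    (2, array[789, null]);   -- row 2 has a NULL element

-- exists/unnest version: returns row 2
select e.*
from employees_demo e
where exists (select 1
              from unnest(e.staff_managed_ids) as x(staff_id)
              where x.staff_id is null);

-- "is null" trick: also returns row 2, provided all real ids are positive
select *
from employees_demo
where (-1 < all(staff_managed_ids)) is null;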

OrientDB Indexes are not used

I have a problem. Here is part of a sample query:
select * from tablename WHERE (id = "some4294-0643-4eaa-a262-7479c1859860" OR code = "some4294-0643-4eaa-a262-7479c1859860")
and deleted is null and blablablabla...>
And there are 2 indexes on this table: id and code.
If I'm querying tablename this way, indexes are not used.
In other query: select from (select * from tablename WHERE (id = "some4294-0643-4eaa-a262-7479c1859860" OR code = "some4294-0643-4eaa-a262-7479c1859860"))
where deleted is null and blablablabla...>
Indexes are used.
The problem is that my query is even more complex, and I don't really want to deal with a select inside a select, but I really want the indexes to be used.
Is there any way to build an index for the first statement?

Is it possible to optimize a SELECT COUNT(*) query using a filtered index as a hint to achieve constant speed?

I'd like to count all the Orders that are not urgent and whose order status = 1 (shipped).
This should be a very simple query to optimize. I'd like to put a simple filtered index on the Orders table to cover this query and make it a constant-time/O(1) operation. However, when I look at the query plan, it looks like it's using an Index Scan, which doesn't make sense. Ideally, this query should just return the number of items in the index.
The table looks like this (simplified to get to the essence):
CREATE TABLE [dbo].[Orders](
    [Id] [int] IDENTITY(1,1) NOT NULL,
    [IsUrgent] [bit] NOT NULL,
    [Status] [tinyint] NOT NULL,
    CONSTRAINT [PK_Orders] PRIMARY KEY CLUSTERED ( [Id] ASC )
);
I've created this filtered index:
CREATE INDEX IX_Orders_ShippedNonUrgent ON Orders(Id) WHERE IsUrgent = 0 AND Status = 1;
Now, when I do this query:
SELECT COUNT(*) FROM Orders WHERE IsUrgent = 0 AND Status = 1
I see that the query plan is using IX_Orders_ShippedNonUrgent, but it's doing an Index Scan and performing around 200 reads across the ~150,000 rows in Orders.
Is it possible to always have this query run in constant time assuming the filtered index is kept up to date? Ideally, it should only perform 1 read to get the size of the index.
If I switch to a non-filtered index like this:
CREATE INDEX IX_Orders_IsUrgentStatus ON Orders(IsUrgent, Status);
The query plan uses an Index Seek, but still performs many more reads than should be necessary to answer this simple query.
UPDATE
I'm able to do this
SELECT TOP 1 rows
FROM sys.partitions p
INNER JOIN sys.indexes i
    ON i.name = 'IX_Orders_ShippedNonUrgent'
    AND i.object_id = p.object_id
    AND i.index_id = p.index_id
and get the result in 9 reads, but it seems like there should be a much easier and less brittle way to get this with the simple COUNT(*) query.
It seems like what I want isn't possible. The best answer was left in the comments by Nikola Markovinović, which is to forget about the filtered index and use an indexed view instead:
CREATE VIEW [dbo].vw_Orders_TotalShippedNonUrgent WITH SCHEMABINDING
AS
SELECT COUNT_BIG(*) AS TotalOrders
FROM dbo.Orders WHERE IsUrgent = 0 AND Status = 1;
with
CREATE UNIQUE CLUSTERED INDEX IX_vw_Orders_TotalShippedNonUrgent ON vw_Orders_TotalShippedNonUrgent(TotalOrders);
This forces me to create a view and its index for each summary statistic I want, as well as rewriting queries to read from the view instead of using the simple approach, but it is fast at only 2 reads.
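For reference, querying the indexed view looks like this; note that on non-Enterprise editions of SQL Server the NOEXPAND hint is needed for the view's index to be used (a sketch against the view defined above):

SELECT TotalOrders
FROM dbo.vw_Orders_TotalShippedNonUrgent WITH (NOEXPAND);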
I'll leave this question open for awhile in case anyone has a simpler approach that's just as fast.