index with where condition - Oracle - oracle12c

I would like to have an equivalent Oracle query for the SQL Server query below.
SQL QUERY:
CREATE UNIQUE NONCLUSTERED INDEX ValidSub_Category ON ValidSub (Category ASC) WHERE (category IS NOT NULL)
Purpose: this index ensures the column can hold more than one NULL record while disallowing duplicate non-NULL strings.
Thanks in advance

I found it:
CREATE UNIQUE INDEX VALIDSUB_CATEGORY
  ON VALIDSUB (CASE WHEN Category IS NOT NULL THEN Category END);
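This works because Oracle does not store index entries whose key columns are all NULL, so the CASE expression keeps non-NULL values unique while any number of NULLs are allowed. A minimal sketch of the behaviour, assuming a hypothetical one-column table:

```sql
-- Hypothetical table for illustration
CREATE TABLE ValidSub (Category VARCHAR2(50));

CREATE UNIQUE INDEX ValidSub_Category
  ON ValidSub (CASE WHEN Category IS NOT NULL THEN Category END);

INSERT INTO ValidSub VALUES (NULL);  -- ok
INSERT INTO ValidSub VALUES (NULL);  -- ok: all-NULL keys are not indexed
INSERT INTO ValidSub VALUES ('A');   -- ok
-- a second INSERT of 'A' would fail with ORA-00001 (unique constraint violated)
```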

Related

creating index with true=any() on postgres and making queries faster

I have one query which cannot be modified, but I need to create indexes on it to make it faster. The query is:
select allcommuni0_.assignedto as col_0_0_
from communication.vwallcommunication allcommuni0_
where (allcommuni0_.projectid in (750))
and true=any (select case when lower(communicat1_.claimnumber)=lower('193') then true else false end from communication.vwclaimdetails communicat1_
where allcommuni0_.communicationviewid=communicat1_.communicationviewid and allcommuni0_.type=communicat1_.type)
group by allcommuni0_.assignedto limit 100;
There are no indexes right now. How do I create an optimized index for the true = any() construct in Postgres?
Create an index on the columns communicationviewid and type.
CREATE INDEX vwclaimdetails_lower_claimnumber_idx
  ON communication.claimdetails (
    (CASE WHEN communicationoutboundid IS NOT NULL
          THEN communicationoutboundid
          ELSE communicationinboundid END),
    (CASE WHEN communicationoutboundid IS NOT NULL
          THEN 'Outbound'::text
          ELSE 'Inbound'::text END)
  );
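The query also filters on lower(communicat1_.claimnumber), so an expression index on that column may help as well. A sketch, assuming claimnumber lives on the same base table:

```sql
-- Matches the lower(claimnumber) = lower('193') comparison in the query
CREATE INDEX claimdetails_lower_claimnumber_idx
  ON communication.claimdetails (lower(claimnumber));
```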

Within the WHERE condition we filter using ROW. Is there any way to index on it?

Within a Postgres SELECT statement, I have a query which filters like this, for example:
select *
from table1
where ROW (table1.created_on, table1.id) < ROW ('2022-02-05 09:37:06.719', 'b8e4c048-ec10-4c7e-9811');
Can we index this?
Just a plain multicolumn index should do it:
create index on table1 (created_on, id);
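This works because PostgreSQL's row-wise comparison uses the same lexicographic ordering as a multicolumn btree index. A minimal sketch with hypothetical column types:

```sql
-- Hypothetical definitions matching the query's columns
CREATE TABLE table1 (
  created_on timestamp,
  id         text
);
CREATE INDEX ON table1 (created_on, id);

-- The planner can answer this with a single index scan:
-- SELECT * FROM table1
-- WHERE ROW (created_on, id) < ROW ('2022-02-05 09:37:06.719', '...');
```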

How to create index for postgresql jsonb field (array data) and text field

Please let me know how to create an index for the query below.
SELECT * FROM customers
WHERE identifiers @>
'[{"systemName": "SAP", "systemReference": "33557"}]'
AND country_code = 'IN';
identifiers is jsonb type and data is as below.
[{"systemName": "ERP", "systemReference": "TEST"}, {"systemName": "FEED", "systemReference": "2733"}, {"systemName": "SAP", "systemReference": "33557"}]
country_code is varchar type.
Either create a GIN index on identifiers ..
CREATE INDEX customers_identifiers_idx ON customers
USING GIN(identifiers);
.. or a composite index with identifiers and country_code (gin_trgm_ops requires the pg_trgm extension).
CREATE INDEX customers_country_code_identifiers_idx ON customers
USING GIN (identifiers, country_code gin_trgm_ops);
Whether the second option pays off depends on the value distribution of country_code.
You can create a GIN index on jsonb columns in PostgreSQL; GIN has built-in operator classes that handle the jsonb operators. Learn more about GIN indexes here: https://www.postgresql.org/docs/12/gin-intro.html
For varchar types, a btree index is good enough.
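Putting those two points together, an alternative sketch is two separate indexes; the planner can combine them with a BitmapAnd when both filters are selective:

```sql
-- GIN for the jsonb containment test, btree for the equality filter
CREATE INDEX customers_identifiers_idx ON customers USING gin (identifiers);
CREATE INDEX customers_country_code_idx ON customers (country_code);
```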

How can I find a null field efficiently using an index

I have a query that looks for a NULL field among millions of records. There will only be one or two.
The query looks like this:
SELECT *
FROM "table"
WHERE "id" = $1
AND "end_time" IS NULL
ORDER BY "start_time" DESC LIMIT 1;
How can I make this query more performant, e.g. using indexes in the database?
Try a partial index, something like:
create index iname on "table" (id, start_time) where end_time is null;
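The (id, start_time) ordering also serves the ORDER BY start_time DESC LIMIT 1 via a backward index scan; declaring the direction explicitly is an equivalent variant:

```sql
CREATE INDEX iname ON "table" (id, start_time DESC)
  WHERE end_time IS NULL;
```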

PostgreSQL gist index

I have a table with two date columns, dateFrom and dateTo. I would like to use a daterange approach in queries together with a GiST index, but it doesn't seem to work. The table looks like:
CREATE TABLE test (
id bigserial,
begin_date date,
end_date date
);
CREATE INDEX "idx1"
ON test
USING gist (daterange(begin_date, end_date));
Then when I try to EXPLAIN a query like:
SELECT t.*
FROM test t
WHERE daterange(t.begin_date,t.end_date,'[]') && daterange('2015-12-30 00:00:00.0','2016-10-28 00:00:00.0','[]')
I get a Seq Scan.
Is this usage of the GiST index wrong, or is this scenario not feasible?
You have an index on the expression daterange(begin_date, end_date), but you query your table with daterange(begin_date, end_date, '[]') && .... PostgreSQL won't do the math for you. To re-phrase your problem: it is as if you indexed (int_col + 2) and queried WHERE int_col + 1 > 2. Because the two expressions are different, the index will not be used under any circumstances. But as you can see, you can sometimes do the math (i.e. re-phrase the formula) yourself.
You'll either need:
CREATE INDEX idx1 ON test USING gist (daterange(begin_date, end_date, '[]'));
Or:
CREATE INDEX idx2 ON test USING gist (daterange(begin_date, end_date + 1));
Note: both of them create a range which includes end_date. The latter relies on the fact that daterange is discrete.
And use the following predicates for each of the indexes above:
WHERE daterange(begin_date, end_date, '[]') && daterange(?, ?, ?)
Or:
WHERE daterange(begin_date, end_date + 1) && daterange(?, ?, ?)
Note: the third parameter of the range constructor on the right side of && does not matter (in the context of index usage).