Group By expression in Oracle 19c - oracle12c

I am installing a database schema in Oracle 19c, and the installation scripts have been used repeatedly in Oracle 12 without problems.
My problem with 19c is that when it runs our views script, it throws an error on some of the views. The error we are seeing is "not a group by expression".
We have a few views where for example we have something like this:
SELECT name, TRUNC(date) as Day
FROM sometable
GROUP BY name, TRUNC(date)
The error points at the SELECT list, as though Oracle doesn't see that the field is already in the GROUP BY expression.
As said, these queries have worked fine in Oracle 12 for years; it is only now, when moving to 19c, that we are seeing problems.
Is this a bug in 19c or does something need to be applied?

19c, you say? Can't reproduce it.
SQL> select banner from v$version;
BANNER
--------------------------------------------------------------------------------
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Here's the table you used, recreated (so that you can't say it is the culprit). Obviously, date is an invalid column name; that word is reserved for the DATE datatype.
SQL> create table sometable as
2 select 'Little' name, sysdate datum
3 from dual
4 connect by level <= 3;
Table created.
Does query itself work? Yes:
SQL> select name, trunc(datum) as day
2 from sometable
3 group by name, trunc(datum);
NAME DAY
------ --------
Little 26.01.22
Note that - as you aren't aggregating anything - you could have used DISTINCT instead of GROUP BY:
SQL> select DISTINCT name, trunc(datum) as day
2 from sometable;
NAME DAY
------ --------
Little 26.01.22
Can I create a view? Yes:
SQL> create view v_sometable as
2 select name, trunc(datum) as day
3 from sometable
4 group by name, trunc(datum);
View created.
SQL>
As I said, I can't reproduce it.
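For comparison, the error you mention (ORA-00979: not a GROUP BY expression) is raised when a SELECT item is not covered by the GROUP BY clause. A minimal sketch against the same test table (column names as above, not your real view):
select name, trunc(datum) as day, datum   -- datum itself is not in the GROUP BY
from sometable
group by name, trunc(datum);
If one of your views selects an ungrouped column like that, or writes the expression slightly differently in the SELECT list and the GROUP BY, you'd get this error on any version.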
Please, copy/paste your own SQL*Plus session (just like I did) so that we'd see what exactly you did and how Oracle responded.

Related

PostgreSQL how do I COUNT with a condition?

Can someone please assist with a query I am working on for school, using a sample database from the PostgreSQL tutorial? Here is my query that gets me the raw data, which I can export to Excel and then put in a pivot table to get the needed counts. The goal is to make a query that does the counting itself, so I don't have to do the manual extraction to Excel and the subsequent pivot table:
SELECT
i.film_id,
r.rental_id
FROM
rental as r
INNER JOIN inventory as i ON i.inventory_id = r.inventory_id
ORDER BY film_id, rental_id
;
From the database this gives me a list of films (by film_id) showing each time the film was rented (by rental_id), and it works fine if I'm just exporting to Excel. Since we don't want that manual process, what I need is to make the query itself count how many times a given film (by film_id) was rented. The results should be something like this (just showing the first five rows here; the query need not limit itself to five):
film_id | COUNT of rental_id
1 | 23
2 | 7
3 | 12
4 | 23
5 | 12
Database setup instructions can be found here: LINK
I have tried using COUNTIF and CASE (following other posts here), but I can't get either to work. Please help.
Did you try this?:
SELECT
i.film_id,
COUNT(1)
FROM
rental as r
INNER JOIN inventory as i ON i.inventory_id = r.inventory_id
GROUP BY i.film_id
ORDER BY film_id;
If the same rental_id can appear more than once in your data, you may want to use COUNT(DISTINCT r.rental_id), as sketched below.
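A sketch of that variant, using the same tables as in the question:
SELECT
i.film_id,
COUNT(DISTINCT r.rental_id) AS rental_count
FROM
rental as r
INNER JOIN inventory as i ON i.inventory_id = r.inventory_id
GROUP BY i.film_id
ORDER BY i.film_id;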

How to find the difference between two table sizes in postgres using shell script

I have one table named 'Table_size_details' in my database which stores the size of the tables in my Postgres database, every Friday.
Date Table_Name Table_size Growth_Difference Growth_Percentage
---- ----------- ---------- ----------------- -----------------
20-08-2021 Demo 1.2 GB
13-08-2021 Demo 578 MB
I have been given a task to add two more columns, 'Growth_Difference' and 'Growth_Percentage'. In the 'Growth_Difference' column I need the difference between the current table size (1.2 GB) and the previous week's table size (578 MB), displayed in MB. I also need the growth percentage between the current table size and the previous week's table size.
I have been asked to develop this using a shell script.
Table_size_old=`psql -d abc -At -c "SELECT Table_size_details from abc order by Date desc limit 1;"`
Table_size_new=`psql -At -c "SELECT pg_size_pretty(pg_total_relation_size('Table_size_details'));"`
growth_table=`expr $Table_size_new - $Table_size_old;`
That is the logic I have used to find the difference between the new and old table sizes, but I'm getting "expr: syntax error" on the growth_table line. I believe it's because I'm trying to subtract values like "578 MB" from "1.2 GB".
I'm new to shell scripting, could anyone help me to find a solution?
Appreciate your help in advance.
There is no need to even attempt this in a shell script; further, since both growth columns are computed values, there is no need to store them at all. This can be done in a single query, or perhaps better still, a single query that populates a view. With that in place, getting what you are looking for is a simple SELECT from that view.
First off, however, do not store the size as a string made of a number plus a unit code. Store a single numeric value in one constant unit instead, converting all values to that unit. For example, pick GB as the constant unit; 578 MB is then stored as 0.578, and no unit conversion is ever needed. With that done (or added), create a view as follows:
create or replace view table_size_growth as
select table_name
, run_date
, size_in_gb
, Round( (size_in_gb - gb_last_week)::numeric,6) growth_in_gb
, case when gb_last_week < 0.0000001 -- set to desired precision
then null::double precision
else round((100 * (size_in_gb - gb_last_week)/abs(gb_last_week))::numeric,6)
end growth_in_pct
from (select ts.*, lag(ts.size_in_gb) over( partition by ts.table_name
order by ts.run_date) gb_last_week
from table_size_details ts
) s
order by table_name, run_date;
Your script (or anything else) now needs only the single query select * from table_size_growth. Note: this returns every week for every table you are capturing; add a WHERE clause as needed.
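If the numbers still have to land in a shell variable, a minimal sketch (assuming the view above and the database name abc from the question) would be:
# -At gives unaligned, tuples-only output, so the variable holds just the number
growth=$(psql -d abc -At -c "select growth_in_gb from table_size_growth where table_name = 'Demo' order by run_date desc limit 1")
echo "Growth since last week: ${growth} GB"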

Postgresql UPDATE not updating, query finishes successfully

I'm trying to do this:
api-prod::DATABASE=> SELECT winner from factoring_bid WHERE id = 16184;
winner
--------
f
(1 row)
api-prod::DATABASE=> UPDATE factoring_bid set winner = 't' WHERE id=16184;
UPDATE 1
api-prod::DATABASE=> SELECT winner from factoring_bid WHERE id = 16184;
winner
--------
f
(1 row)
The UPDATE seems to run fine, but the change to this record (and some others) never actually sticks. And yes, the user running the query has write permissions.
More info:
This is a Heroku postgresql database.
Updating some other records in the factoring_bid table works just fine.
The factoring_bid table is related to another table called factoring_auction. All factoring_bid rows related to the factoring_auction from the example have the same problem: they can't be updated.
factoring_bid rows related to other auctions have no problem. Maybe my application (a Django REST Framework API) introduced some error in the bids of this specific auction? I can't think of anything, but it seems odd that the broken rows are all related.
We don't use explicit locks, and there seems to be no lock held on the database at the moment.
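(For reference, a sketch of the catalog queries one could run to back that up and to rule out a trigger silently reverting the value; the table name is taken from the question:)
-- any non-internal triggers defined on factoring_bid
select tgname, tgenabled from pg_trigger
where tgrelid = 'factoring_bid'::regclass and not tgisinternal;
-- any sessions holding an open transaction (and possibly an old snapshot)
select pid, state, xact_start, query from pg_stat_activity where state <> 'idle';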

How to pull records between two dates in DB2

I tried the two ways below and neither works:
Select * from Table
where SERV_DATE BETWEEN '03/01/2013' AND '03/31/2013'
This is also not working:
Select * from Table
where SERV_DATE BETWEEN DATE('03/01/2013') AND
DATE('03/31/2013')
What should be the correct format ?
Did you try what NealB suggested? The reason 03/01/2013 is not accepted as a date format is that it is region dependent: in the US it means March 1, 2013, while in the UK it means January 3, 2013. Without knowing the locale, it is not certain what the actual date is.
"Why would DB2 give an error on one format and work fine when given a different format?" - Don't forget that DB2 is an old lady, and like all old ladies she has her peculiarities. You just get used to them and there will be a happy ending.
SELECT * FROM tableName WHERE date(modifiedBy_date) between '2017-07-28' AND '2017-08-01';
Works cool for DB2.
Select * from Table
where SERV_DATE BETWEEN DATE('2013-03-01') AND DATE('2013-03-31');
Worked for me.
Select * from Table
where (SERV_DATE BETWEEN '03/01/2013' AND '03/31/2013')
Select * from Table
where (SERV_DATE BETWEEN '2013-03-01' AND '2013-03-31')
select count(*) from TABLE where time_stamp BETWEEN DATE('2018-01-01') AND DATE('2018-01-31');
Here time_stamp is the field name; substitute your own timestamp field name for time_stamp.

Feasibility of recreating complex SQL query in Crystal Reports XI

I have about 10 fairly complex SQL queries on SQL Server 2008 - but the client wants to be able to run them from their internal network (as opposed to from the non-local web app) through Crystal Reports XI.
The client's internal network does not allow us to (a) have write access to their proprietary db, nor (b) allow us to set up an intermediary SQL server (meaning we can not set up stored procedures or other data cleaning).
The SQL contains multiple instances of row_number() over (partition by col1, col2), group by col1, col2 with cube|rollup, and/or (multiple) pivots.
Can this even be done? Everything I've read seems to indicate that this is only feasible via stored procedure and I would still need to pull the data from the proprietary db first.
Following is a stripped-back version of one of the queries (e.g., JOINs not directly related to functionality, WHERE clauses, and half a dozen columns have been removed)...
select sum(programID)
, sum([a.Asian]) as [Episodes - Asian], sum([b.Asian]) as [Eps w/ Next Svc - Asian], sum([c.Asian])/sum([b.Asian]) as [Avg Days to Next Svc - Asian]
, etc... (repeats for each ethnicity)
from (
select programID, 'a.' + ethnicity as ethnicityA, 'b.' + ethnicity as ethnicityB, 'c.' + ethnicity as ethnicityC
, count(*) as episodes, count(daysToNextService) as episodesWithNextService, sum(daysToNextService) as daysToNextService
from (
select programID, ethnicity, datediff(d, dateOfDischarge, nextDateOfService) as daysToNextService from (
select t1.userID, t1.programID, t1.ethnicity, t1.dateOfDischarge, t1.dateOfService, min(t2.dateOfService) as nextDateOfService
from TABLE1 as t1 left join TABLE1 as t2
on datediff(d, t1.dateOfService, t2.dateOfService) between 1 and 31 and t1.userID = t2.userID
group by t1.userID, t1.programID, t1.ethnicity, t1.dateOfDischarge, t1.dateOfService
) as a
) as a
group by programID
) as a
pivot (
max(episodes) for ethnicityA in ([A.Asian],[A.Black],[A.Hispanic],[A.Native American],[A.Native Hawaiian/ Pacific Isl.],[A.White],[A.Unknown])
) as pA
pivot (
max(episodesWithNextService) for ethnicityB in ([B.Asian],[B.Black],[B.Hispanic],[B.Native American],[B.Native Hawaiian/ Pacific Isl.],[B.White],[B.Unknown])
) as pB
pivot (
max(daysToNextService) for ethnicityC in ([C.Asian],[C.Black],[C.Hispanic],[C.Native American],[C.Native Hawaiian/ Pacific Isl.],[C.White],[C.Unknown])
) as pC
group by programID with rollup
Sooooooo.... can something like this even be translated into Crystal Reports XI?
Thanks!
When you create your report, instead of selecting a table or stored procedure, choose Add Command.
This will allow you to put in whatever valid T-SQL statement you want. Using common table expressions (CTEs) and inline views, I've managed to create some rather large, complex statements (in excess of 400 lines) against Oracle and SQL Server, so it is indeed feasible. However, if you use parameters you should consider using sp_executesql, and you'll have to figure out how to avoid SQL injection.
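A minimal sketch of the sp_executesql pattern with a typed parameter (the filter column here is illustrative, not part of the report above):
-- the value is passed as a parameter, not concatenated into the SQL string
DECLARE @sql nvarchar(max) =
N'select programID, count(*) as episodes from TABLE1 where programID = @progId group by programID';
EXEC sp_executesql @sql, N'@progId int', @progId = 42;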