How can I select the missing dates from an Oracle table?

I have a table emp_attn (emp_name varchar2(30), attn_dt date) with values like:
select * from emp_attn;
The result looks like:
EMP_NAME ATTN_DT
-------------------- ----------
SAM 02/05/2013
SAM 03/05/2013
SAM 07/05/2013
SAM 08/05/2013
SAM 13/05/2013
SAM 14/05/2013
SAM 17/05/2013
SAM 18/05/2013
SAM 19/05/2013
SAM 20/05/2013
SAM 21/05/2013
SAM 22/05/2013
SAM 23/05/2013
SAM 24/05/2013
SAM 25/05/2013
RAM 01/05/2013
RAM 03/05/2013
RAM 07/05/2013
RAM 08/05/2013
RAM 10/05/2013
RAM 11/05/2013
RAM 14/05/2013
RAM 18/05/2013
RAM 19/05/2013
RAM 20/05/2013
RAM 23/05/2013
RAM 24/05/2013
RAM 25/05/2013
I need the missing attn_dt values, together with the emp_name, from this table.

You can use the DUAL table to find the missing dates.
Consider the example below:
http://sqlfiddle.com/#!4/5cbc7/10
Here the test table has 10 rows.
SELECT dat
  FROM (SELECT TO_CHAR(SYSDATE + LEVEL, 'dd/MM/yyyy') dat
          FROM DUAL
        CONNECT BY LEVEL <= 12 -- You can change as per your date range
       ) a
 WHERE dat NOT IN (SELECT dates FROM test);
The query above returns the dates that are not present in the test table.
You can get the missing dates for your own case by adjusting two parts of this query:
SYSDATE - give your start date.
LEVEL (in the CONNECT BY condition) - set it to cover your date range.
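Applied to the emp_attn table from the question, the same idea can generate each employee's date range between their first and last recorded day, then anti-join it against the stored rows. A minimal sketch, assuming attn_dt holds plain dates without a time component; the PRIOR SYS_GUID() condition is the usual trick that lets CONNECT BY generate rows per source row without raising a loop error:
SELECT r.emp_name, r.dt AS missing_dt
  FROM (SELECT e.emp_name,
               e.min_dt + LEVEL - 1 AS dt -- one row per day in the range
          FROM (SELECT emp_name,
                       MIN(attn_dt) min_dt,
                       MAX(attn_dt) max_dt
                  FROM emp_attn
                 GROUP BY emp_name) e
        CONNECT BY LEVEL <= e.max_dt - e.min_dt + 1
               AND PRIOR emp_name = emp_name    -- stay within one employee
               AND PRIOR SYS_GUID() IS NOT NULL -- defeat cycle detection
       ) r
 WHERE NOT EXISTS (SELECT 1
                     FROM emp_attn a
                    WHERE a.emp_name = r.emp_name
                      AND a.attn_dt = r.dt);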


Group By expression in Oracle 19c

I am installing a database schema in Oracle 19c, and the installation scripts have been used repeatedly in Oracle 12 without problems.
My problem with 19c is that when it runs our views script, it throws an error on some of the views. The error we are seeing is 'not a GROUP BY expression'.
We have a few views where for example we have something like this:
SELECT name, TRUNC(date) as Day
FROM sometable
GROUP BY name, TRUNC(date)
It is pointing the error at the select as though it doesn't see that the field is already in the group by expression.
As said, these queries have worked fine in Oracle 12 for years; it is only now, when moving to 19c, that we are seeing problems.
Is this a bug in 19c or does something need to be applied?
19c, you say? Can't reproduce it.
SQL> select banner from v$version;
BANNER
--------------------------------------------------------------------------------
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Here is the table I used (so that you wouldn't say that this is the culprit); obviously, date is an invalid column name, as that word is reserved for the DATE datatype.
SQL> create table sometable as
2 select 'Little' name, sysdate datum
3 from dual
4 connect by level <= 3;
Table created.
Does the query itself work? Yes:
SQL> select name, trunc(datum) as day
2 from sometable
3 group by name, trunc(datum);
NAME DAY
------ --------
Little 26.01.22
Note that - as you aren't aggregating anything - you could have used DISTINCT instead of GROUP BY:
SQL> select DISTINCT name, trunc(datum) as day
2 from sometable;
NAME DAY
------ --------
Little 26.01.22
Can I create a view? Yes:
SQL> create view v_sometable as
2 select name, trunc(datum) as day
3 from sometable
4 group by name, trunc(datum);
View created.
SQL>
As I said, I can't reproduce it.
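For contrast, a query that genuinely does raise ORA-00979 is one whose SELECT list and GROUP BY clause use different expressions; if the views script contains something along these lines, that would explain the error. A minimal sketch against the same test table:
-- raises ORA-00979 ("not a GROUP BY expression"), because the SELECT
-- list contains datum itself while GROUP BY only has trunc(datum)
select name, datum
from sometable
group by name, trunc(datum);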
Please, copy/paste your own SQL*Plus session (just like I did) so that we'd see what exactly you did and how Oracle responded.

How to find the difference between two table sizes in postgres using shell script

I have one table named 'Table_size_details' in my database which stores the size of the tables in my Postgres database, every Friday.
Date Table_Name Table_size Growth_Difference Growth_Percentage
---- ----------- ---------- ----------------- -----------------
20-08-2021 Demo 1.2 GB
13-08-2021 Demo 578 MB
I have been given a task to add two more columns, 'Growth_Difference' and 'Growth_Percentage'. In the 'Growth_Difference' column I need to find the difference between the current table size (1.2 GB) and the previous week's table size (578 MB) and display it in MB. I also need to find the growth percentage between the current table size and the previous week's table size.
I have been asked to develop this using a shell script.
Table_size_old=`psql -d abc -At -c "SELECT Table_size_details from abc order by Date desc limit 1;"`
Table_size_new=`psql -At -c "SELECT pg_size_pretty(pg_total_relation_size('Table_size_details'));"`
growth_table=`expr $Table_size_new - $Table_size_old;`
I used the logic above to find the difference between the new and old table sizes, but I'm getting expr: syntax error on the growth_table line. I believe that's because I'm trying to take the difference between '1.2 GB' and '578 MB' as strings.
I'm new to shell scripting; could anyone help me find a solution?
Appreciate your help in advance.
There is no need to even attempt this in a shell script; moreover, since both growth columns are computed values, there is no need to store them. This can be done in a single query, or perhaps even better, a single query that populates a view. With that, getting what you are looking for is a simple SELECT from that view.
First off, however, do not store the size as a string containing a number and a unit code. Instead store a single numeric value in a constant unit, converting all values to that unit. For example, with GB as the constant unit, 578 MB would be stored as 0.578; this way no unit conversion is needed later. With that done (or added), create a view as follows:
create or replace view table_size_growth as
select table_name
, run_date
, size_in_gb
, Round( (size_in_gb - gb_last_week)::numeric,6) growth_in_gb
, case when gb_last_week < 0.0000001 -- set to desired precision
then null::double precision
else round((100 * (size_in_gb - gb_last_week)/abs(gb_last_week))::numeric,6)
end growth_in_pct
from (select ts.*, lag(ts.size_in_gb) over( partition by ts.table_name
order by ts.run_date) gb_last_week
from table_size_details ts
) s
order by table_name, run_date;
Your script (or anything else) now needs only the single query select * from table_size_growth. Note: this provides every week for every table you are capturing; use a WHERE clause as needed.
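With the view in place, the shell part shrinks to a single psql call. A minimal sketch, assuming the database is named abc and the monitored table is Demo, as in the question:
#!/bin/sh
# Fetch the latest size and growth figures for one table from the view.
psql -d abc -At -F ' ' -c \
  "SELECT run_date, size_in_gb, growth_in_gb, growth_in_pct
     FROM table_size_growth
    WHERE table_name = 'Demo'
    ORDER BY run_date DESC
    LIMIT 1;"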

TimescaleDB: Understanding the return values after creating hypertable and the creation of chunks after populating the hypertable

I have an existing table in my database named price (264 rows), and I converted it into a hypertable price_hypertable by doing:
CREATE TABLE price_hypertable (LIKE price INCLUDING DEFAULTS INCLUDING CONSTRAINTS EXCLUDING INDEXES);
SELECT create_hypertable('price_hypertable', 'start');
and the output it gave me is as follows:
create_hypertable
-------------------------------
(4,public,price_hypertable,t)
(1 row)
The next thing I did was to populate the price_hypertable as follows:
insert into price_hypertable select * from price;
And I got the following output:
INSERT 0 264
Now, I wanted to check the chunks created, for which I did:
select public.show_chunks('price_hypertable');
and the output I got:
show_chunks
----------------------------------------
_timescaledb_internal._hyper_4_3_chunk
_timescaledb_internal._hyper_4_4_chunk
(2 rows)
When I do:
select * from _timescaledb_internal._hyper_4_3_chunk;
select * from _timescaledb_internal._hyper_4_4_chunk ;
I see that the 264 entries are split as follows:
_timescaledb_internal._hyper_4_3_chunk has 98 rows
_timescaledb_internal._hyper_4_4_chunk has 166 rows
I have a few questions about these steps and their outputs:
Can someone please explain what the values 4 and t represent in the output of
SELECT create_hypertable('price_hypertable', 'start');?
After populating price_hypertable, the data was automatically split into chunks, but of different sizes. Why does this happen? Why wasn't the data just split in half (132 rows in each chunk instead of 98 and 166)?
Any help is appreciated. Thanks
For the first question, it is easier to see what they represent by executing create_hypertable as
SELECT * FROM create_hypertable('price_hypertable', 'start');
This gives something like:
hypertable_id | schema_name | table_name | created
---------------+-------------+--------------------+---------
4 | public | price_hypertable | t
For the second question, TmTron already answered: the rows are sorted into buckets based on time, and the rows are not necessarily evenly spaced over time. There is no automation that picks the correct interval for each bucket.
You can find information about the return values in the API documentation on create_hypertable which also discuss the parameter chunk_time_interval that can be used to set the chunk size.
Related to your 2nd question: when you don't specify chunk_time_interval explicitly, the default is 7 days: see create_hypertable, Best Practices.
So the number of rows in each chunk depends on the distribution of your data (according to your start date-time column).
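If the default does not suit your data, you can set the interval explicitly when creating the hypertable. A sketch reusing the names from the question (the one-day interval is purely illustrative):
-- Explicitly choose the chunk interval instead of the 7-day default.
SELECT create_hypertable('price_hypertable', 'start',
                         chunk_time_interval => INTERVAL '1 day');
For an existing hypertable, set_chunk_time_interval changes the interval used for chunks created from that point on.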

Export data from db2 with column names

I want to export data from DB2 tables to CSV format. I also need the first row to contain the column names.
I have had little success using the following command:
EXPORT TO "TEST.csv"
OF DEL
MODIFIED BY NOCHARDEL coldel: ,
SELECT col1,'COL1',x'0A',col2,'COL2',x'0A'
FROM TEST_TABLE;
But with this I get data like
Row1 Value:COL1:
Row1 Value:COL2:
Row2 Value:COL1:
Row2 Value:COL2:
etc.
I also tried the following query
EXPORT TO "TEST.csv"
OF DEL
MODIFIED BY NOCHARDEL
SELECT 'COL1',col1,'COL2',col2
FROM ADMIN_EXPORT;
But this lists the column name next to each row's data when opened in Excel.
Is there a way I can get data in the format below
COL1 COL2
value value
value value
when opened in Excel?
Thanks
After days of searching I solved this problem this way:
EXPORT TO ...
SELECT 1 as id, 'COL1', 'COL2', 'COL3' FROM sysibm.sysdummy1
UNION ALL
(SELECT 2 as id, COL1, COL2, COL3 FROM myTable)
ORDER BY id
You can't select a constant string in db2 from nothing, so you have to select from sysibm.sysdummy1.
To get the manually added column names into the first row, you have to add a pseudo-id and sort the UNION result by that id; otherwise the header row can end up at the bottom of the resulting file.
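Put together, a complete command could look like the sketch below (the output file path is a placeholder, and the string literals must match your column list); note that the pseudo-id itself lands in the first field of every exported row:
EXPORT TO "/tmp/test.csv" OF DEL MODIFIED BY NOCHARDEL
SELECT 1 AS id, 'COL1', 'COL2' FROM sysibm.sysdummy1
UNION ALL
(SELECT 2 AS id, col1, col2 FROM TEST_TABLE)
ORDER BY id;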
Quite an old question, but I recently encountered a similar one and realized this can be achieved much more easily in the 11.5 release with the EXTERNAL TABLE feature; see the answer here:
https://stackoverflow.com/a/57584730/11946299
Example:
$ db2 "create external table '/home/db2v115/staff.csv'
using (delimiter ',' includeheader on) as select * from staff"
DB20000I The SQL command completed successfully.
$ head /home/db2v115/staff.csv | column -t -s ','
ID NAME DEPT JOB YEARS SALARY COMM
10 Sanders 20 Mgr 7 98357.50
20 Pernal 20 Sales 8 78171.25 612.45
30 Marenghi 38 Mgr 5 77506.75
40 O'Brien 38 Sales 6 78006.00 846.55
50 Hanes 15 Mgr 10 80659.80
60 Quigley 38 Sales 66808.30 650.25
70 Rothman 15 Sales 7 76502.83 1152.00
80 James 20 Clerk 43504.60 128.20
90 Koonitz 42 Sales 6 38001.75 1386.70
Insert the column names as the first row in your table.
Use order by to make sure that the row with the column names comes out first.

Postgresql Custom Order for fixed values

I need a select result to look like this
UK
Europe
USA
The values are fixed (no table is needed). The order is important, so ORDER BY 1 does not work.
What is the SQL query (as simple as possible) that will build this result?
You could use VALUES lists:
VALUES ('UK'), ('Europe'), ('USA');
column1
---------
UK
Europe
USA
(3 rows)
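A bare VALUES list generally comes back in the order it was written, but once the rows feed into joins or other clauses it is safer to carry an explicit sort key. A small sketch (the ord column exists only to pin the order):
SELECT name
FROM (VALUES (1, 'UK'), (2, 'Europe'), (3, 'USA')) AS t(ord, name)
ORDER BY ord;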