How do I convert a decimal number to a time? Suppose I have the decimal number 2.56; converted to a time, it would be 02:33:36.
CREATE TABLE t1(COL1 INTEGER NOT NULL, COL2 DECIMAL(10,2));
insert into t1(col1, col2)
values(1024 , 2.56 ),
(1024 , 4.23 ),
(1024 , 1.67 ),
(1024 , 0.56 );
$ db2 "select i, j
, time('00.00.00')
+ int(j) hours
+ int(mod(j,1)*60) minutes
+ int(mod(mod(j,1)*60*60,60)) seconds as time
from table( values(1024 , 2.56 ), (1024 , 4.23 )
, (1024 , 1.67 ), (1024 , 0.56 )
) a(i,j)"
I J TIME
----------- ----- --------
1024 2.56 02:33:36
1024 4.23 04:13:48
1024 1.67 01:40:12
1024 0.56 00:33:36
4 record(s) selected.
BTW, you should store time values as a TIME datatype, not DECIMAL.
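As a sanity check outside the database, the same hours/minutes/seconds arithmetic can be sketched in Python (the function name is mine; Decimal is used to mirror SQL's exact DECIMAL math, since binary floats would occasionally round the seconds wrong):

```python
from decimal import Decimal

def decimal_to_time(value: str) -> str:
    """Convert decimal hours (e.g. "2.56") to hh:mm:ss, mirroring
    the int()/mod() expressions in the DB2 query above."""
    d = Decimal(value)
    hours = int(d)                          # int(j)
    frac = d % 1                            # mod(j, 1)
    minutes = int(frac * 60)                # int(mod(j,1)*60)
    seconds = int((frac * 60 * 60) % 60)    # int(mod(mod(j,1)*60*60,60))
    return f"{hours:02d}:{minutes:02d}:{seconds:02d}"

print(decimal_to_time("2.56"))  # 02:33:36
```

Running it over the four sample values reproduces the query output above.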
Database part
I have created a table. Here is the PostgreSQL query that I use to populate it:
INSERT INTO growth_test.trax_qa_db_growth
SELECT
now()::date as executed_date,
schemaname as table_schema,
relname as table_name,
pg_size_pretty(pg_relation_size(relid)) as data_size,
pg_relation_size(relid) as full_data_size
FROM pg_catalog.pg_statio_user_tables
WHERE schemaname='production'
order by pg_relation_size(relid) desc;
Power BI part
I have to get the data from this table into Power BI. I want to visualize the growth of the database based on the "full_data_size" column between two specific dates chosen with a slicer. How can I visualize DB growth between specific dates?
I have tried "DATESBETWEEN", but that does not give the output that I want.
Start Date & End Date are the measures that I have created for the slicer.
DAX function
Data Size in the period =
VAR total =
SUM ( trax_qa_db_growth[full_data_size] ) + 0
RETURN
IF (
total < 1024,
CALCULATE(FORMAT ( total, "#0.0# B" ),DATESBETWEEN('trax_qa_db_growth'[executed_date],[Start Date],[End Date])),
IF (
total < POWER ( 2, 20 ),
CALCULATE(FORMAT ( total / POWER ( 2, 10 ), "#0.0# KB" ),DATESBETWEEN('trax_qa_db_growth'[executed_date],[Start Date],[End Date])),
IF (
total < POWER ( 2, 30 ),
CALCULATE(FORMAT ( total / POWER ( 2, 20 ), "#0.0# MB" ),DATESBETWEEN('trax_qa_db_growth'[executed_date],[Start Date],[End Date])),
IF (
total < POWER ( 2, 40 ),
CALCULATE(FORMAT ( total / POWER ( 2, 30 ), "#0.0# GB" ),DATESBETWEEN('trax_qa_db_growth'[executed_date],[Start Date],[End Date])),
CALCULATE(FORMAT ( total / POWER ( 2, 40 ), "#0.0# TB" ),DATESBETWEEN('trax_qa_db_growth'[executed_date],[Start Date],[End Date]))
)
)
))
This DAX function didn't give me the result the way I want; instead it shows the size from full_data_size.
So, I have tried a different method. I hope that will be the correct way.
New DAX function
Data Size in the period =
VAR total =
SUM ( trax_qa_db_growth[full_data_size] ) + 0
RETURN
IF (
total < 1024,
CALCULATE(FORMAT ( total, "#0.0# B" )),
IF (
total < POWER ( 2, 20 ),
CALCULATE(FORMAT ( total / POWER ( 2, 10 ), "#0.0# KB" ),FILTER(trax_qa_db_growth, trax_qa_db_growth[executed_date] = [End Date]))-CALCULATE(FORMAT ( total / POWER ( 2, 10 ), "#0.0# KB" ),FILTER(trax_qa_db_growth, trax_qa_db_growth[executed_date] = [Start Date])),
IF (
total < POWER ( 2, 30 ),
CALCULATE(FORMAT ( total / POWER ( 2, 20 ), "#0.0# MB" ),FILTER(trax_qa_db_growth, trax_qa_db_growth[executed_date] = [End Date]))-CALCULATE(FORMAT ( total / POWER ( 2, 10 ), "#0.0# KB" ),FILTER(trax_qa_db_growth, trax_qa_db_growth[executed_date] = [Start Date])),
IF (
total < POWER ( 2, 40 ),
CALCULATE(FORMAT ( total / POWER ( 2, 30 ), "#0.0# GB" ),FILTER(trax_qa_db_growth, trax_qa_db_growth[executed_date] = [End Date]))-CALCULATE(FORMAT ( total / POWER ( 2, 10 ), "#0.0# KB" ),FILTER(trax_qa_db_growth, trax_qa_db_growth[executed_date] = [Start Date])),
CALCULATE(FORMAT ( total / POWER ( 2, 40 ), "#0.0# TB" ),FILTER(trax_qa_db_growth, trax_qa_db_growth[executed_date] = [End Date]))-CALCULATE(FORMAT ( total / POWER ( 2, 10 ), "#0.0# KB" ),FILTER(trax_qa_db_growth, trax_qa_db_growth[executed_date] = [Start Date]))
)
)
))
What is the issue with the "New DAX function"? How do I calculate the DB growth?
Is there any way to calculate DB growth in PostgreSQL? (Oracle has that type of option.)
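The growth calculation itself is just "size at end date minus size at start date", with the unit formatting applied afterwards. Here is a rough Python sketch of that logic (the snapshot dates, dict, and function names are made up for illustration), using the same powers-of-two thresholds as the DAX measure; note that the subtraction must happen on the raw byte counts, before FORMAT-style string conversion:

```python
def format_size(total: float) -> str:
    """Human-readable size using the same base-2 thresholds as the DAX measure."""
    if total < 1024:
        return f"{total:.1f} B"
    if total < 2 ** 20:
        return f"{total / 2 ** 10:.1f} KB"
    if total < 2 ** 30:
        return f"{total / 2 ** 20:.1f} MB"
    if total < 2 ** 40:
        return f"{total / 2 ** 30:.1f} GB"
    return f"{total / 2 ** 40:.1f} TB"

# Hypothetical snapshot totals of full_data_size per executed_date.
snapshots = {"2023-01-01": 3 * 2 ** 20, "2023-02-01": 5 * 2 ** 20}

def growth(start: str, end: str) -> str:
    # Subtract raw byte counts first, then format -- formatting first
    # would turn the values into strings that cannot be subtracted.
    return format_size(snapshots[end] - snapshots[start])

print(growth("2023-01-01", "2023-02-01"))  # 2.0 MB
```

This also hints at the bug in the "New DAX function": FORMAT returns text, so subtracting two CALCULATE(FORMAT(...)) results operates on strings, not numbers.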
Is it possible to average over every fixed interval and group by one column in MS SQL Server?
Suppose I have a table A as under:
NAME Interval Data1 Data2
1 0.01 1 4
1 0.05 4 2
1 0.09 7 6
1 0.11 1 2
1 0.15 7 6
1 0.18 3 1
1 0.19 2 5
2 0.209 9 0
I want the output grouped by Name, with an average computed over every interval of 10.
So for example:
Name - 1
Interval Start - 0
Interval End - 10
Data 1 Avg - 4 [(1 + 4 + 7) / 3]
Data 2 Avg - 4 [(4 + 2 + 6) / 3]
AND
Name - 1
Interval Start - 10
Interval End - 20
Data 1 Avg - 3.25 [(1 + 7 + 3 + 2) / 4]
Data 2 Avg - 3.50 [(2 + 6 + 1 + 5) / 4]
So I want the output as below. The interval per "Name" column is different.
Name Interval-Start Interval-End DataAvg1 DataAvg2
1 0 10 4 4
1 10 20 3.25 3.50
2 0 10 0 0
2 10 20 0 0
2 20 30 9 0
I used the query below, but I can't figure out the per-interval logic.
SELECT Name, Interval, AVG(Data1) AS Data1Avg, AVG(Data2) AS Data2Avg
FROM TableA
GROUP BY Name;
Can someone please help me with it?
Using a cursor and temp tables:
--drop table dbo.#result
--drop table dbo.#steps
CREATE TABLE dbo.#result
(
[Name] varchar(50),
[Interval-Start] float,
[Interval-End] float,
[DataAvg1] float,
[DataAvg2] float
)
CREATE TABLE dbo.#steps
(
[IntervalStart] float,
[IntervalEnd] float
)
declare @min int, @max int, @step float
DECLARE @Name varchar(50), @IntervalStart float, @IntervalEnd float;
set @min = 0
set @max = 1
set @step = 0.1
insert into #steps
select @min + Number * @step IntervalStart, @min + Number * @step + @step IntervalEnd
from master..spt_values
where type = 'P' and number between 0 and (@max - @min) / @step
DECLARE _cursor CURSOR FOR
SELECT [Name], [IntervalStart], [IntervalEnd] FROM
(select [Name] from [TableA] Group by [Name]) t
INNER JOIN #steps on 1=1
OPEN _cursor;
FETCH NEXT FROM _cursor
INTO @Name, @IntervalStart, @IntervalEnd;
WHILE @@FETCH_STATUS = 0
BEGIN
insert into dbo.#result
select @Name, @IntervalStart, @IntervalEnd, AVG(CAST(Data1 as FLOAT)), AVG(CAST(Data2 as FLOAT))
FROM [TableA]
where [NAME] = @Name and Interval between @IntervalStart and @IntervalEnd
FETCH NEXT FROM _cursor
INTO @Name, @IntervalStart, @IntervalEnd;
END
CLOSE _cursor;
DEALLOCATE _cursor;
select * from dbo.#result
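For what it's worth, the bucketing logic can be prototyped quickly outside SQL. This Python sketch is my own interpretation, assuming the Interval values map onto the 0..100 scale by a factor of 100 and are bucketed into widths of 10; it reproduces the expected averages from the question:

```python
from collections import defaultdict

rows = [  # (Name, Interval, Data1, Data2) from table A
    (1, 0.01, 1, 4), (1, 0.05, 4, 2), (1, 0.09, 7, 6),
    (1, 0.11, 1, 2), (1, 0.15, 7, 6), (1, 0.18, 3, 1),
    (1, 0.19, 2, 5), (2, 0.209, 9, 0),
]

buckets = defaultdict(list)
for name, interval, d1, d2 in rows:
    start = int(interval * 100 // 10) * 10   # e.g. 0.11 -> bucket 10..20
    buckets[(name, start)].append((d1, d2))

result = {
    key: (sum(d1 for d1, _ in vals) / len(vals),
          sum(d2 for _, d2 in vals) / len(vals))
    for key, vals in buckets.items()
}
print(result[(1, 10)])  # (3.25, 3.5)
```

The same grouping key (floor of Interval divided by the bucket width) could drive a set-based GROUP BY in SQL, avoiding the cursor entirely.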
This question already has answers here:
How to avoid the "divide by zero" error in SQL?
I have the following table:
Item          Ordqty  Total_Costprice  TotalSaleprice  onhand    Markup
ENG-MAN-0102          3852             203.34          2494      73.992
SPG-P-018             2716             1232.80         473.2232
A8                    8.62             9.335                     0.71
A136                  1621             148.35          518       0.3777
LA            1228    7.68             14.897                    7.217
ENG-DOR       1039    34.94            50.8166                   15.8766
A13-00-S      968     153.64           107                       0.9997
Code is
SELECT
total_costprice,
markup,
CASE WHEN markup=0 THEN 0 ELSE 100*(markup)/costprice END AS pctmarkup
This gives a divide by zero error. I need to show the percentage markup for the markup values.
You need to use the NULLIF function:
select
total_costprice
,markup
,case when markup=0 then 0 else 100*(markup/NULLIF(costprice,0)) END as pctmarkup
Based on your values this will work. I inserted 0 where you don't have any data; I don't know if that is true.
declare @myt table (item nvarchar(50),ordqty int, total_costprice numeric(18,4),totalsalesprice numeric(18,4),onhand numeric(18,4),markup numeric(18,4)
)
insert into @myt
values
('ENG-MAN-0102', 0 , 3852 , 203.34 , 2494 , 73.992 ),
('SPG-P-018' , 0 , 2716 , 1232.80 , 473.2232 , 0 ),
('A8' , 0 , 8.62 , 9.335 , 0 , 0.71 ),
('A136' , 0 , 1621 , 148.35 , 518 , 0.3777 ),
('LA' , 1228 , 7.68 , 14.897 , 0 , 7.217 ),
('ENG-DOR' , 1039 , 34.94 , 50.8166 , 0 , 15.8766 ),
('A13-00-S' , 968 , 153.64 , 107 , 0 , 0.9997 )
select * ,CASE WHEN markup=0 THEN 0 ELSE 100*(markup)/total_costprice end as Pct from @myt
Result
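The NULLIF trick works because division by NULL yields NULL instead of raising an error. The equivalent guard in plain Python (function name mine) looks like this:

```python
def pct_markup(markup: float, costprice: float):
    """Percentage markup, guarded like CASE ... NULLIF in the SQL above.

    Returns None (SQL NULL) instead of raising when costprice is 0.
    """
    if markup == 0:
        return 0
    if costprice == 0:      # NULLIF(costprice, 0) -> NULL
        return None
    return 100 * markup / costprice

print(pct_markup(73.992, 203.34))  # roughly 36.39
```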
I have two columns in my table that contain time in and time out. I need to subtract them and get the hours spent, by converting them to time or in any other way.
CREATE TABLE [dbo].[FELData](
[RCIN1] [numeric](4, 0) NOT NULL,
[RCOUT1] [numeric](4, 0) NOT NULL
) ON [PRIMARY]
RCIN1 RCOUT1 Desire Result
150 1930 17:40
615 1747 11:32
410 1830 14:20
400 1600 12:00
Here is what I have done so far, but it is not returning the right numbers:
SELECT rcin1, rcout1, DATEDIFF(mi,
CAST(STUFF(RIGHT('0'+CAST(rcin1 AS VARCHAR(8)),4),3,0,':') AS DATETIME),
CAST(STUFF(RIGHT('0'+CAST(rcout1 AS VARCHAR(8)),4),3,0,':') AS DATETIME)
)/60.0 AS [Hours]
FROM FELData;
RCIN1 RCOUT1 Returning
150 1930 17.66
615 1747 11.53
410 1830 13.25
400 1600 12.83
How can I fix this?
Update
Sometimes there might be user data-entry errors, like:
RCIN1 RCOUT1 Returning
49 1930 Should return 0
Thanks
This does almost what you want, but it fails to discern that 49, 1930 is bad data. How do you know it is?
Since the data you have is numeric, there is no need to convert it to a string, split it, parse it, convert it, and recombine it. Just do the math!
[UPDATED CODE]
-- Sample data.
declare #Samples as Table ( RCIn1 Numeric(4,0), RCOut1 Numeric(4,0) );
insert into #Samples ( RCIn1, RCOut1 ) values
( 150, 1930 ), ( 615, 1747 ), ( 410, 1830 ), ( 400, 1600 ),
( 49, 1930 ); -- This is allegedly "bad" data, but no explanation is given.
select * from #Samples;
with
-- Numeric values converted to instances of TIME datatype.
Times as (
select RCIn1, RCOut1,
Cast( DateAdd( minute, Floor( RCIn1 / 100 ) * 60 + RCIn1 % 100, 0 ) as Time ) as RCIn1Time,
Cast( DateAdd( minute, Floor( RCOut1 / 100 ) * 60 + RCOut1 % 100, 0 ) as Time ) as RCOut1Time
from #Samples )
-- Calculate delta times.
select *, Cast( DateAdd( minute, DateDiff( minute, RCIn1Time, RCOut1Time ), 0 ) as Time ) as DeltaTime
from Times
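The "just do the math" idea is easy to verify: treat the numeric value as HHMM, convert to minutes since midnight, and subtract. A quick Python sketch over the sample rows (function names mine) reproduces the desired results:

```python
def hhmm_to_minutes(n: int) -> int:
    """Interpret a numeric value like 1930 as 19:30, as minutes since midnight."""
    return (n // 100) * 60 + n % 100    # same as Floor(n/100)*60 + n%100 above

def delta(rc_in: int, rc_out: int) -> str:
    minutes = hhmm_to_minutes(rc_out) - hhmm_to_minutes(rc_in)
    return f"{minutes // 60:02d}:{minutes % 60:02d}"

for rc_in, rc_out in [(150, 1930), (615, 1747), (410, 1830), (400, 1600)]:
    print(rc_in, rc_out, delta(rc_in, rc_out))
# 150 1930 -> 17:40, 615 1747 -> 11:32, 410 1830 -> 14:20, 400 1600 -> 12:00
```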
Your first problem is storing time as a number, but here is how I would convert a number into a time datatype:
declare #timeNumb int = 1500
select
convert( time , (
reverse( substring( reverse( convert( varchar, #timeNumb) ) , 3, 2 ) ) + ':' +
reverse( substring( reverse( convert( varchar, #timeNumb) ) , 1, 2 ) )
)
)
After that, using datediff() should give you the interval length you are looking for.
try this
-- temp table for data sample
DECLARE #FELData AS TABLE
(
[RCIN1] NUMERIC(4, 0) ,
[RCOUT1] NUMERIC(4, 0)
)
INSERT INTO #FELData
( RCIN1, RCOUT1 )
VALUES ( 150, 1930 ),
( 615, 1747 ),
( 410, 1830 ),
( 400, 1600 )
--Solution
SELECT T.[RCIN1] ,
T.[RCOUT1] ,
RIGHT('0' + CONVERT(VARCHAR(2), T.M / 60), 2) + ':'
+ RIGHT('00' + CONVERT(VARCHAR(2), T.M % 60), 2) AS Duration
FROM ( SELECT FD.* ,
DATEDIFF(MINUTE,
CONVERT(TIME, STUFF(RIGHT('0'
+ CONVERT(VARCHAR(8), FD.[RCIN1]),
4), 3, 0, ':')),
CONVERT(TIME, STUFF(RIGHT('0'
+ CONVERT(VARCHAR(8), FD.[RCOUT1]),
4), 3, 0, ':'))) AS M
FROM #FELData AS FD
) T
output result
I'd like to write a query that will calculate the total amount of activity that occurred within each 15 minute interval of the day using only timestamps that correspond to activity start and stop times.
Here is a sample data set:
DATE StartDateTime StopDateTime
2/2/2015 2/2/2015 7:00 2/2/2015 7:25
2/2/2015 2/2/2015 7:20 2/2/2015 7:29
2/2/2015 2/2/2015 7:35 2/2/2015 7:42
2/2/2015 2/2/2015 8:05 2/2/2015 8:14
2/2/2015 2/2/2015 8:16 2/2/2015 8:20
2/2/2015 2/2/2015 8:29 2/2/2015 8:40
2/2/2015 2/2/2015 8:55 2/2/2015 9:25
And this is what I'd like to be able to get:
DATE Interval activityTime(min)
2/2/2015 2/2/2015 7:00 15
2/2/2015 2/2/2015 7:15 19
2/2/2015 2/2/2015 7:30 7
2/2/2015 2/2/2015 7:45 0
2/2/2015 2/2/2015 8:00 9
2/2/2015 2/2/2015 8:15 5
2/2/2015 2/2/2015 8:30 10
2/2/2015 2/2/2015 8:45 5
2/2/2015 2/2/2015 9:00 15
2/2/2015 2/2/2015 9:15 10
I've searched for a way to organize the data the way I need, and this is the closest I've been able to find so far, though I haven't been able to get it to work:
Splitting time + duration into intervals in t-sql
I'm pretty new to SQL, so any explanation of solutions would be much appreciated. This is also my first post on Stack Overflow, so please let me know if the data are not in the preferred format or if there are any additional questions. Thanks!
Assuming some reasonably recent version of SQL Server, this ought to be a good start:
-- Some sample data.
declare #Samples as Table ( SampleId Int Identity, Start DateTime, Stop DateTime );
insert into #Samples ( Start, Stop ) values
( '2/2/2015 7:00', '2/2/2015 7:25' ),
( '2/2/2015 7:20', '2/2/2015 7:29' ),
( '2/2/2015 7:35', '2/2/2015 7:42' ),
( '2/2/2015 8:05', '2/2/2015 8:14' ),
( '2/2/2015 8:16', '2/2/2015 8:20' ),
( '2/2/2015 8:29', '2/2/2015 8:40' ),
( '2/2/2015 8:55', '2/2/2015 9:25' );
select * from #Samples;
-- Find the limits and align them to quarter hours.
declare #Min as DateTime;
declare #Max as DateTime;
select #Min = min( Start ), #Max = max( Stop )
from #Samples;
set #Min = DateAdd( minute, -DatePart( minute, #Min ) % 15, #Min );
set #Max = DateAdd( minute, 15 - DatePart( minute, #Max ) % 15, #Max );
select #Min as [Min], #Max as [Max];
-- Go for it.
with QuarterHours ( QuarterStart, QuarterStop )
as (
select #Min, DateAdd( minute, 15, #Min )
union all
select QuarterStop, DateAdd( minute, 15, QuarterStop )
from QuarterHours
where QuarterStop < #Max ),
Overlaps
as ( select QH.QuarterStart, QH.QuarterStop, S.Start, S.Stop,
case
when S.Start <= QH.QuarterStart and S.Stop >= QH.QuarterStop then 15
when S.Start <= QH.QuarterStart and S.Stop < QH.QuarterStop then DateDiff( minute, QH.QuarterStart, S.Stop )
when S.Start > QH.QuarterStart and S.Stop >= QH.QuarterStop then DateDiff( minute, S.Start, QH.QuarterStop )
when S.Start > QH.QuarterStart and S.Stop < QH.QuarterStop then DateDiff( minute, S.Start, S.Stop )
else 0 end as Overlap
from QuarterHours as QH left outer join
#Samples as S on S.Start <= QH.QuarterStop and S.Stop >= QH.QuarterStart )
select QuarterStart, sum( Overlap ) as [ActivityTime]
from Overlaps
group by QuarterStart
order by QuarterStart;
You can change the last select to either select * from QuarterHours or select * from Overlaps to see some of the intermediate values.
Explanatory notes:
You can use any range (#Min/#Max) you want, I just took them from the sample data so that the example would run. I used a table variable for the same reason, no need to create a "real" table for the sake of an example.
The Common Table Expression (CTE) creates, via recursion, a table of QuarterHours that covers the desired range. (A numbers table or tally table could also be used to generate the quarter hours.) Then a LEFT OUTER JOIN with the sample data is used to locate all of the Overlaps, if any, with each quarter hour. That preserves the quarter hours for which there is no activity.
The final SELECT summarizes the results.
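The CASE expression in the Overlaps CTE is equivalent to the standard interval-overlap formula max(0, min(stop, quarter_stop) - max(start, quarter_start)). A Python re-check of the sample data (my own sketch, not part of the SQL solution) reproduces the desired output:

```python
from datetime import datetime, timedelta

samples = [("7:00", "7:25"), ("7:20", "7:29"), ("7:35", "7:42"),
           ("8:05", "8:14"), ("8:16", "8:20"), ("8:29", "8:40"),
           ("8:55", "9:25")]

def t(s):  # parse "H:MM" on the sample date
    return datetime.strptime("2015-02-02 " + s, "%Y-%m-%d %H:%M")

start = t("7:00")
activity = {}
for q in range(10):  # ten quarter hours from 7:00 to 9:30
    q_start = start + timedelta(minutes=15 * q)
    q_stop = q_start + timedelta(minutes=15)
    total = 0
    for a, b in samples:
        # Overlap of [a, b] with [q_start, q_stop], clamped at zero.
        overlap = min(t(b), q_stop) - max(t(a), q_start)
        total += max(0, int(overlap.total_seconds()) // 60)
    activity[q_start.strftime("%H:%M")] = total

print(activity)  # {'07:00': 15, '07:15': 19, '07:30': 7, '07:45': 0, ...}
```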
The following query will give you each 15-minute increment that contains at least one start time and the total amount (in minutes) of activity for the entire duration that started in that 15-minute increment.
select Date,
Convert( SmallDatetime, Floor( Cast( StartDateTime as float ) * 96.0 ) / 96.0 ) Increment,
Sum( DateDiff( second, StartDateTime, StopDateTime )) / 60 Duration
from Activities
group by Date, Convert( SmallDatetime, Floor( Cast( StartDateTime as float ) * 96.0 ) / 96.0 );
Which returns this:
Date Increment Duration
---------- ------------------- --------
2015-02-02 2015-02-02 07:00:00 25
2015-02-02 2015-02-02 07:15:00 9
2015-02-02 2015-02-02 07:30:00 7
2015-02-02 2015-02-02 08:00:00 9
2015-02-02 2015-02-02 08:15:00 15
2015-02-02 2015-02-02 08:45:00 30
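That grouping (floor each start time to its quarter hour, then sum whole durations) can be double-checked in Python; this is my sketch of the same idea as the float-times-96 flooring trick:

```python
from datetime import datetime

rows = [("7:00", "7:25"), ("7:20", "7:29"), ("7:35", "7:42"),
        ("8:05", "8:14"), ("8:16", "8:20"), ("8:29", "8:40"),
        ("8:55", "9:25")]

def t(s):
    return datetime.strptime("2015-02-02 " + s, "%Y-%m-%d %H:%M")

totals = {}
for a, b in rows:
    start, stop = t(a), t(b)
    # Floor the start to its 15-minute increment
    # (same effect as Floor(float * 96) / 96 in the query above).
    inc = start.replace(minute=start.minute - start.minute % 15)
    minutes = int((stop - start).total_seconds()) // 60
    key = inc.strftime("%H:%M")
    totals[key] = totals.get(key, 0) + minutes

print(totals)
```

Note how the 8:55-9:25 activity puts all 30 minutes into the 08:45 increment; that overflow into later increments is exactly what the discussion below is about.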
I was just looking into calculating the running total with overflow into the next increment when a couple of things occurred to me. One is that you're going to need every 15-minute increment during the time of your query, whether any activity starts in it or not. So we would have to use a tally table to ensure every interval is generated, if nothing else to catch the overflow minutes from the previous interval.
Then, of course, there is tracking the running total with overflow. While this is possible (see https://stackoverflow.com/a/861073/3658753 for a good explanation of how), it hit me that the combination of the two (tally table and running total) is an awful lot of overhead to be performed in SQL. Remember that performing calculations in SQL is many times faster than even the fastest disk access, but performing calculations in any high level language (Java, C++, C# or even scripting languages like Perl) is going to be many times faster than SQL. Plus the maintainability of the SQL solution will be deep in the basement.
So my recommendation at this point is to take the query above and feed it into a good reporting engine or your application and have them perform the additional calculations. Performance-wise, you'll be way ahead.