I am trying to make a time adjustment (due to the summer time change).
I have the table below, created in Azure SQL with datetime as the data type.
This is a DIM table, and this is what I have now:
start_dt                  end_dt                    hour_diff
2022-01-22 00:00:00.000   2022-03-13 01:59:59.000   -5
2022-03-13 02:00:00.000   2022-11-06 01:59:59.000   -4
What I am trying to do is, if punch time is between 1/22/2022 12:00 AM --> 3/13/2022 01:59:59 AM (inclusive), it will take hour_diff of -5.
So, if somebody clocks in at 3/13/2022 1:59:59 AM, it will be -5 hour_diff.
If punch time is between 3/13/2022 2:00 AM --> 11/6/2022 01:59:59 AM (inclusive), it will take hour_diff of -4.
So, if somebody clocks in at 3/13/2022 2:00:00 AM, it will be -4 hour_diff.
Just FYI, below is the SQL statement that I am using:
select
    CONVERT(nvarchar(255), DateAdd(hour, hour_diff, ps), 0) punch_start,
    CONVERT(nvarchar(255), DateAdd(hour, hour_diff, ps1), 0) punch_end
from [dbo].[Table1]
cross apply
(
    values (
        Try_Convert(datetime, punch_start),
        Try_Convert(datetime, punch_end)
    )
) x (ps, ps1)
left join [dbo].[DIM] d
    on ps between d.start_dt and d.end_dt
    or ps1 between d.start_dt and d.end_dt
I am trying to make sure this query works with the current data in the DIM table (using BETWEEN).
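For illustration, a small sanity check of the BETWEEN lookup at the DST boundary, using the example punch times above (just a sketch against the DIM table shown):
-- boundary check: 01:59:59 should pick up -5, 02:00:00 should pick up -4
select p.punch, d.hour_diff
from (values
         (convert(datetime, '2022-03-13T01:59:59')),
         (convert(datetime, '2022-03-13T02:00:00'))
     ) p (punch)
left join [dbo].[DIM] d
    on p.punch between d.start_dt and d.end_dt;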
Related
Let's say we have the dates
'2017-01-01'
and
'2017-01-04'
and I would like to get a series of exactly N timestamps in between these dates, in this case 7 timestamps:
SELECT * FROM
generate_series_n(
'2017-01-01'::timestamp,
'2017-01-04'::timestamp,
7
)
which I would like to return something like this:
2017-01-01-00:00:00
2017-01-01-12:00:00
2017-01-02-00:00:00
2017-01-02-12:00:00
2017-01-03-00:00:00
2017-01-03-12:00:00
2017-01-04-00:00:00
How can I do this in postgres?
Possibly this can be useful: using generate_series and doing the math in the SELECT.
select '2022-01-01'::date + generate_series * ('2022-05-31'::date - '2022-01-01'::date) / 15
from generate_series(1, 15);
output
?column?
------------
2022-01-11
2022-01-21
2022-01-31
2022-02-10
2022-02-20
2022-03-02
2022-03-12
2022-03-22
2022-04-01
2022-04-11
2022-04-21
2022-05-01
2022-05-11
2022-05-21
2022-05-31
(15 rows)
WITH seconds AS
(
    SELECT EXTRACT(epoch FROM ('2017-01-04'::timestamp - '2017-01-01'::timestamp))::integer AS sec
),
step_seconds AS
(
    SELECT sec / 7 AS step FROM seconds
)
SELECT generate_series('2017-01-01'::timestamp, '2017-01-04'::timestamp, (step || 'S')::interval)
FROM step_seconds
Converting this to a function is easy; let me know if you have trouble with it.
One problem with this solution is that extracting the epoch from an interval assumes 30-day months. If this is a problem for your use case (long intervals), you can tweak the logic for getting the seconds from the interval.
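For example, a minimal sketch of such a function (the name generate_series_n is taken from the question; it is not a built-in):
CREATE OR REPLACE FUNCTION generate_series_n(start_ts timestamp, end_ts timestamp, n integer)
RETURNS SETOF timestamp
LANGUAGE sql AS $$
    -- step = total seconds between the bounds, divided by n
    SELECT generate_series(
               start_ts,
               end_ts,
               ((EXTRACT(epoch FROM (end_ts - start_ts))::integer / n) || ' seconds')::interval
           );
$$;
-- SELECT * FROM generate_series_n('2017-01-01'::timestamp, '2017-01-04'::timestamp, 7);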
You can divide the difference between the end and the start value by the number of values you want:
SELECT *
FROM generate_series('2017-01-01'::timestamp,
                     '2017-01-04'::timestamp,
                     ('2017-01-04'::timestamp - '2017-01-01'::timestamp) / 7)
This could be wrapped into a function if you want to avoid repeating the start and end value.
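For example, a possible wrapper sketched under the same assumptions (the name generate_series_n again comes from the question, not from PostgreSQL):
CREATE OR REPLACE FUNCTION generate_series_n(start_ts timestamp, end_ts timestamp, n integer)
RETURNS SETOF timestamp
LANGUAGE sql AS $$
    -- divide the whole span into n equal steps
    SELECT generate_series(start_ts, end_ts, (end_ts - start_ts) / n);
$$;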
I am relatively new to DAX and PowerBI, and have an issue that is driving me up the wall.
I have a table with data from my source system and a Calendar table; I also have a Measures table where various DAX measures are calculated/stored.
In the source System, I have 3 columns that are relevant here: Unique_ID, PurchaseDate, TerminationDate
TerminationDate either has a value (as in the subscription is terminated) or is NULL (and therefore active)
The problem I am trying to solve is that I wish to know how many active subscriptions I had N months ago, and to have this as a dynamic value.
I initially thought I had success with the below bit of code; however, when I change the variable month1 by increments of 1 (e.g. MONTH(TODAY())-2, MONTH(TODAY())-3, etc.), the value doesn't change.
Previous 1 month Total active Services =
var month1= MONTH(TODAY())-1
var year= YEAR(TODAY())
return
calculate(
COUNTROWS(Table1),
FILTER(Table1,
Table1[PurchaseDate] >= month1
&& Table1[PurchaseDate] >= year
&& ((Table1[TerminationDate] <= month1
&& Table1[TerminationDate] <= year)
|| isblank(Table1[TerminationDate]) )
))
So then I decided that the above was trying to be too clever, and that I'd simplify things: have one measure for all subscriptions with a purchase date less than or equal to today's Month-1/Year, another measure where the termination date was less than or equal to today's Month-1/Year, and then subtracting one from the other should give the answer.
Previous 1 month Total active Services calculation 2 =
var month1= MONTH(TODAY())-1
var year= YEAR(TODAY())
return
calculate(
COUNTROWS(Table1),
FILTER(Table1,
Table1[TerminationDate] <= month1
&& Table1[TerminationDate] <= year
))
Bear in mind I'm validating the accuracy of the above by running this in the SQL backend (and this is all being done on a copy of the DB, so I know there are no production changes):
select count(t.unique_id)
from table1 t
where t.terminationdate <= '2020-07-31'
and then doing the same but with '2020-06-30' etc.
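For reference, a sketch of the full "active as of a date" count I am ultimately after, in SQL terms (cut-off date and column names taken from the examples above):
select count(t.unique_id)
from table1 t
where t.purchasedate <= '2020-07-31'
  and (t.terminationdate > '2020-07-31' or t.terminationdate is null)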
Ideally the solution wouldn't necessitate the creation of additional columns nor utilize slicers - this is for (what should be) a very simple calculation.
Sample Data:
Unique_ID PurchaseDate TerminationDate
WM-SP-998407 2016-06-01 07:42:41.000 2020-01-02 11:25:26.000
WM-SP-998412 2016-06-01 08:02:11.000 2017-08-30 11:26:31.000
WM-SP-998417 2016-06-01 08:11:01.000 2017-08-30 11:26:05.000
WM-SP-998422 2016-06-01 08:11:02.000 2017-08-30 11:25:49.000
WM-SP-998427 2016-06-01 08:22:41.000 2018-08-30 11:26:18.000
WM-SP-998432 2016-06-01 08:47:41.000 NULL
WM-SP-998437 2016-06-01 09:22:41.000 2020-03-07 08:10:49.000
WM-SP-998442 2016-06-01 09:25:42.000 NULL
WM-SP-998447 2016-06-01 09:51:11.000 2018-08-30 11:26:33.000
WM-SP-998452 2016-06-01 09:51:11.000 NULL
WM-SP-998457 2016-06-01 10:00:51.000 NULL
My Excel file (104976x10) contains a large amount of data.
A column: Time (unit year)
B column: Year
C column: Day of the year
D column: Hour
E column: Minute
and other columns containing values.
I would like to convert the columns from column B through column E to a date format like 'dd/mm/yyyy HH:MM'.
Example of the data:
1998,41655251 1998 152 1 0 12,5 12,0 11,8 11,9 12,0
I would like to have a date instead of the 2nd, 3rd, 4th and 5th columns.
1998,41655251 01/06/1998 01:00 12,5 12,0 11,8 11,9 12,0
or
1998,41655251 01/06/1998 01:00 1998 152 1 0 12,5 12,0 11,8 11,9 12,0
Welcome to SO.
Matlab has two types of date format:
datetime, introduced in 2014b.
datenum, introduced long ago (before 2006b); it is basically a double-precision value giving the number of days since January 0, 0000.
I think the best way is to use datetime, and give it the year, month, day, hour and minute values like this:
t = datetime(1998,1,152,1,0,0)
t = '01-Jun-1998 01:00:00'
As you can see, with month 1 and the day of year as the day, the days automatically overflow into the months, so day 152 of 1998 gives the 1st of June, as in your example.
To change the format:
t.Format = 'dd/MM/yyyy HH:mm'
t = '01/06/1998 01:00'
To convert it to a string, you can simply use string(t).
This is an example that combines the above functions to read an xlsx file and write a new one with the updated column.
data = xlsread('test.xlsx');                    % read the numeric data
S = size(data);
% year (col B), day of year (col C), hour (col D), minute (col E)
t = datetime(data(:,2), 1, data(:,3), data(:,4), data(:,5), 0);
t.Format = 'dd/MM/yyyy HH:mm';
data2 = num2cell(data(:,1));                    % keep the first column
data2(:,2) = cellstr(string(t));                % formatted date as second column
data2(:,3:S(2)-3) = num2cell(data(:,6:end));    % copy the remaining value columns
xlswrite('test2.xlsx', data2);
I have recently been working with Postgres and I have to make several calculations. However, I have not been able to imitate the HOUR() function of Excel; I read the official documentation but it did not help me much.
The function receives a decimal and obtains the hour, minutes and seconds from it; for example, the decimal 0.99988426 returns 11:59:50. I tried doing this in Postgres (I use PostgreSQL 10.4) with the to_timestamp function: select to_char (to_timestamp (0.99988426), 'HH24: MI: SS'); this returns 19:00:00. Surely I am omitting something; any idea how to solve this?
24:00:00 or 86400 seconds = 1
Half a day (12:00 noon) or 43200 seconds = 43200/86400 = 0.5
23:59:50 (i.e. 11:59:50 PM) or 86390 seconds = 86390/86400 = 0.99988426
So to convert your decimal value to time, all you have to do is multiply it by 86400, which gives you seconds, and then convert it to your format in one of the following ways:
SELECT TO_CHAR((0.99988426 * 86400) * '1 second'::interval, 'HH24:MI:SS');
SELECT (0.99988426 * 86400) * interval '1 sec';
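If you need this in several places, a minimal sketch of wrapping the same expression in a function (the name excel_time is made up for illustration; it is not a built-in):
CREATE OR REPLACE FUNCTION excel_time(frac numeric)
RETURNS text
LANGUAGE sql AS $$
    -- fraction of a day -> seconds -> formatted time of day
    SELECT to_char((frac * 86400) * interval '1 second', 'HH24:MI:SS');
$$;
-- SELECT excel_time(0.99988426);  --> 23:59:50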
There are two major differences to handle:
Excel does not consider the time zone. Serial date 0 starts at 0h00, but Postgres applies your time zone, so it becomes 19h. You would need to use UTC in the Postgres result to get the same value as in Excel.
select to_char (to_timestamp (0), 'HH24: MI: SS'),to_char (to_timestamp (0) AT TIME ZONE 'UTC', 'HH24: MI: SS');
to_char | to_char
------------+------------
19: 00: 00 | 00: 00: 00
Excel considers that 1 is one day, while to_timestamp in Postgres considers 1 as 1 second. To get the same behavior, multiply your number by 86400, i.e. the number of seconds in a day:
select to_char (to_timestamp (0.99988426*86400) AT TIME ZONE 'UTC', 'HH24: MI: SS');
to_char
------------
23: 59: 50
(1 row)
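A possible wrapper for this variant too, sketched with a made-up function name excel_hour_utc:
CREATE OR REPLACE FUNCTION excel_hour_utc(frac double precision)
RETURNS text
LANGUAGE sql AS $$
    -- fraction of a day -> epoch seconds -> UTC time of day
    SELECT to_char(to_timestamp(frac * 86400) AT TIME ZONE 'UTC', 'HH24:MI:SS');
$$;
-- SELECT excel_hour_utc(0.99988426);  --> 23:59:50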
Is there a built-in function in PostgreSQL 9.5 to calculate the appropriate century/millennium?
When I use birth_date::TIMESTAMP from a table, sometimes it prefixes 19 and sometimes it prefixes 20. Example below:
Input:
28JUN80
25APR48
Output:
"1980-06-28 00:00:00"
"2048-04-25 00:00:00"
I also have records in the table with birth_date holding values like "07APR1963" which gets computed appropriately as "1963-04-07 00:00:00".
I need to use a CASE statement: when the length is 7 characters, prefix the year with 19, and when it's 9 characters, just load it as it is.
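For illustration, a sketch of that CASE approach (the table arasu and column member_birth_date are the ones used in the answer below; the format strings are my assumption):
select case
           when length(trim(member_birth_date)) = 7
               then to_timestamp(substr(trim(member_birth_date), 1, 5)
                                 || '19' || substr(trim(member_birth_date), 6, 2),
                                 'ddmonyyyy')
           else to_timestamp(trim(member_birth_date), 'ddmonyyyy')
       end as birth_ts
from arasu;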
Per https://en.wikipedia.org/wiki/Unix_time, the Unix epoch is the
beginning (00:00:00 1 January 1970)
So if you don't specify the century but just the last two digits with YY, years 70-99 resolve to 19xx (from the epoch onward) and years 00-69 resolve to 20xx. If you want to force the 20th century, either append the full year as you do, or specify CC, e.g.:
t=> select
to_timestamp('1JAN70', 'ddmonYY')
, to_timestamp('31DEC69', 'ddmonyy')
, to_timestamp('31DEC69 20', 'ddmonyy cc');
to_timestamp | to_timestamp | to_timestamp
------------------------+------------------------+------------------------
1970-01-01 00:00:00+00 | 2069-12-31 00:00:00+00 | 1969-12-31 00:00:00+00
(1 row)
https://www.postgresql.org/docs/current/static/functions-formatting.html
In conversions from string to timestamp or date, the CC (century)
field is ignored if there is a YYY, YYYY or Y,YYY field. If CC is used
with YY or Y then the year is computed as the year in the specified
century. If the century is specified but the year is not, the first
year of the century is assumed.
Update
So in your case you should do something like:
vao=# create table arasu (member_birth_date character(9)); insert into arasu values ('28JUN80'),('25APR48');
CREATE TABLE
INSERT 0 2
vao=# select to_timestamp(member_birth_date||' 20', 'ddmonYY cc') from arasu;
to_timestamp
------------------------
1980-06-28 00:00:00+03
1948-04-25 00:00:00+03
(2 rows)