T-SQL: Calculating time stats when StartTime and EndTime span different months

I am querying a database with records of meeting room reservations. Since we are a global company, we have meeting reservations that span different months, for example:
StartTime            EndTime              MeetingName
06/30/2010 11:45PM   07/01/2010 01:00AM   My Meeting
If I want to determine utilization for July 2010, how would I include the hour that the room was utilized in the above reservation, given that my report covers only hours utilized within July?
The same holds true if I were reporting on June: how would I report only the 15 minutes that the meeting used in June?
If you need more information, let me know.

Since your database isn't standardized on a single time zone, you will first need to determine the time zone for each record (or, if possible, for each org unit) and convert everything to the same time zone. Once that's done, you can clip each reservation to the month boundary and count only the portion that falls inside the reporting month.
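A minimal T-SQL sketch of the clipping step (the Reservations table and its column names are assumed from the question): clamp each reservation to the boundaries of the reporting month, then measure what remains.
DECLARE @MonthStart datetime = '2010-07-01';
DECLARE @MonthEnd   datetime = DATEADD(MONTH, 1, @MonthStart);

SELECT MeetingName,
       DATEDIFF(MINUTE,
                -- clamp the start up to the month start
                CASE WHEN StartTime < @MonthStart THEN @MonthStart ELSE StartTime END,
                -- clamp the end down to the month end
                CASE WHEN EndTime   > @MonthEnd   THEN @MonthEnd   ELSE EndTime   END
       ) AS MinutesUtilizedInMonth
FROM Reservations
WHERE StartTime < @MonthEnd    -- keep only reservations that
  AND EndTime   > @MonthStart; -- overlap the reporting month
For the example reservation this yields 60 minutes for July 2010 and 15 minutes for June 2010.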

Related

How to calculate YTD hours on a Crystal Report?

My company has a payroll program that prints payroll checks and pay stubs using Crystal Reports. The report selects payroll data from a table that has these columns: Hours, PeriodAmount, and YTDAmount. The hours are based on the payroll period ending date, and the report selects records for a single pay period end date, so hours are selected for a single pay period only. However, we would like to print YTD hours on the pay stub, in other words, hours accumulated from the beginning of the year to the current pay period. Since YTD hours is not a column in the table, would we need to include a subreport to accumulate hours? I don't think running totals would work since we are selecting a single period only. Any thoughts? Thanks.
Yes, a subreport is one option.
Another option is to select all pay periods in the current year and compute the total for the current period using a conditional total. For example, create a formula that returns the hours if the record belongs to the current period and zero otherwise, then SUM that formula per employee. That sum gives you the hours for the current period, while a plain SUM of the Hours column per employee gives you the YTD hours.
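A sketch of such a formula in Crystal syntax (the {Payroll.PeriodEndDate}, {Payroll.Hours} field names and the {?PayPeriodEndDate} parameter are assumptions, not the actual report's names):
// Returns hours only for records in the selected pay period;
// summing this per employee yields the current-period hours,
// while summing {Payroll.Hours} directly yields the YTD hours.
If {Payroll.PeriodEndDate} = {?PayPeriodEndDate} Then
    {Payroll.Hours}
Else
    0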

Powerapps Filter Collection By Today's Date

Good day all,
I am trying to filter today's results from a SQL table into a collection in Power Apps. The column "dt" is a column of datetime type in SQL.
This is my powerapps filter:
ClearCollect(myCollectionName, Filter(myDatasource, Text(dt,"dd/mm/yyyy") = Text(Now(),"dd/mm/yyyy" )));
It seems like the collection is still empty even though there is data for today in SQL. May I know if my approach is the correct way of filtering?
Short answer: the data is likely being converted based on the client's time zone. To fix it, you can apply the time zone offset to the data from the SQL table, something along the lines of:
ClearCollect(
    myCollectionName,
    Filter(
        myDatasource,
        Text(DateAdd(dt, TimeZoneOffset(dt), Minutes), "dd/mm/yyyy") =
            Text(Now(), "dd/mm/yyyy")))
Long(er) answer: the datetime type in SQL Server represents an absolute date and time value. For example, the value '2021-12-23 09:30:00' represents 9:30 in the morning of the 23rd of December, 2021, in any part of the world. The date/time type in Power Apps, however, represents a point in time, typically referring to the local time where the app is being executed (or created). For example, if I selected that value while in the US Pacific Time Zone (UTC-08:00), it would represent the same instant as someone in London (UTC+00:00) selecting 2021-12-23 17:30:00. Since the two types represent different concepts, mismatches like the one you are facing can occur. To fix this, we can either use a type in SQL Server that has the same semantics as Power Apps (for example, 'datetimeoffset'), or adjust the time as it is transferred between SQL and Power Apps.
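For the first alternative, a minimal sketch (the table name is assumed) of switching the column to datetimeoffset, so each value carries its own UTC offset:
-- Hypothetical: change the column type so each value records
-- which UTC offset it was captured in.
ALTER TABLE myTable ALTER COLUMN dt datetimeoffset(0);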
The blog post at https://powerapps.microsoft.com/en-us/blog/working-with-datetime-values-in-sql explains in more detail how to work with date/time values in SQL and Power Apps.

OBIEE YTD Issues

I have a fact table housing different granularity (date grain)
Monthly
Daily
The month data can be accessed by filtering on the end-of-month date or by using the YYYYMM date format. In the OBIEE RPD repository, the fact is set to LAST aggregation.
I want to perform Year-to-Date analysis, and I want to sum only month-end dates.
Using the TODATE(Measure) function, it tends to sum up all the data throughout the month, e.g.:
Date         Amount   YTD TODATE(Amount)
31/01/2016   100      100
28/02/2016   200      300
14/03/2016    50      350 *
31/03/2016   100      450
I want YTD to ignore the 50 and return 400, and likewise to ignore any other dates that fall mid-month. But if I select 14/03/2016, I want 350 to be returned.
Thanks.
Alter the table to add a flag: 'Y' if the record is at the specified monthly grain, and 'N' if it is not.
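A hypothetical sketch of that flag (the fact_table and event_date names are assumptions; EOMONTH is SQL Server syntax, use LAST_DAY on an Oracle backend):
-- Add the flag column, then mark the rows that sit at month-end grain.
ALTER TABLE fact_table ADD month_end_flag CHAR(1);

UPDATE fact_table
SET    month_end_flag = CASE WHEN event_date = EOMONTH(event_date)
                             THEN 'Y' ELSE 'N' END;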
In the logical layer, create two distinct LTSs (logical table sources), the first filtered on the flag = 'Y'. This is where you will calculate and source all your to-date measures. The second LTS can either be filtered to 'N', or left covering all the data, depending on what you want to do with it.
The performance gains come from the fact that any monthly measures you build off that monthly LTS will hit only records flagged as month-grain and bypass all the other data that is not relevant. So if a user runs a report asking only for monthly measures, the query will automatically filter to that specific data.
If a user selects your to-date measure and a specific-date measure on the same report, OBIEE should fire off two separate queries to get the data and stitch the results together based on common dimensions.
Could someone build this in the front end? Probably. You would have to use some sort of PERIODROLLING function and tell it to aggregate at the month level, but I am afraid it may still roll those mid-month days up into a larger-than-desired number. A TODATE function will not work here.

How to handle dates in neo4j

I'm a historian of medieval history and I'm trying to encode networks between kings, dukes, popes, etc. over a period of about 50 years (from 1220 to 1270) in medieval Germany. As I'm not a specialist in graph databases, I'm looking for a way to handle dates and date ranges.
Is there a way to attach a date range to an edge, so that an edge representing a relationship disappears after, e.g., 3 years?
Is there a way to query for relationships whose date tag falls within a given date range?
The common way to deal with dates in Neo4j is to store them either as a string representation or as millis since epoch (i.e. milliseconds passed since Jan 01 1970).
The first approach keeps the graph more easily readable; the latter allows you to do math, e.g. calculate deltas.
In your case I'd store two properties called validFrom and validTo on the relationships. Your queries then need to make sure you're looking at the correct time interval.
E.g. to find the king(s) in charge of France from Jan 01 1220 to Dec 31st 1221 you do:
MATCH (c:Country{name:'France'})-[r:HAS_KING]->(king)
WHERE r.validFrom >= -23667123600000 AND r.validTo <= -23604051600000
RETURN king, r.validFrom, r.validTo
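Creating such a relationship could look like this (a hypothetical sketch; the :Person label and the king's name are assumptions, and the epoch-millis values are the ones from the query above):
MATCH (c:Country {name: 'France'}), (k:Person {name: 'Some King'})
CREATE (c)-[:HAS_KING {validFrom: -23667123600000,  // Jan 01 1220, per the query above
                       validTo:   -23604051600000   // Dec 31 1221
          }]->(k)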
Addendum
Since Neo4j 3.0 there's the APOC library, which provides a couple of functions for converting timestamps to/from human-readable date strings.
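For example, a small round-trip sketch using APOC's apoc.date.parse and apoc.date.format:
// Parse a readable date into epoch millis, then format it back.
WITH apoc.date.parse('1220-01-01', 'ms', 'yyyy-MM-dd') AS millis
RETURN millis, apoc.date.format(millis, 'ms', 'yyyy-MM-dd') AS readable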
You can also store the dates in their number representation in the following format: YYYYMMDD
In your case 12200101 would be Jan 1st 1220 and 12701231 would be Dec 31st 1270.
It's a useful and readable format and you can perform range searches like:
MATCH (h:HistoricEvent)
WHERE h.date >= 12200101 AND h.date < 12701231
RETURN h
It would also let you order by dates, if you need to.
As of Neo4j 3.4, the system handles durations and dates natively; see the official documentation. See more examples here.
An example related to the original question: retrieve the historical events that happened within the last 30 days from now:
WITH duration({days: 30}) AS duration
MATCH (h:HistoricEvent)
WHERE date() - duration < date(h.date)
RETURN h
Another option for dates, one that keeps the number of nodes/properties you create fairly low, is a linked list of years (earliest year of interest to latest), one of months (1-12), and one of days of the month (1-31). Every "event" in your graph can then be connected to a year, a month, and a day. This way you don't have to create a new node for every new combination of year, month, and day; you just have a single set of years, one of months, and one of days. I scale the numbers to make manipulating them easier, like so:
Years are yyyy*10000
Months are mm*100
Days are dd
so if you run a query such as
MATCH (event)-[:happened]->(t:time)
WITH event, sum(t.num) AS date
RETURN event.name, date
ORDER BY date
You will get a list of all events in chronological order, with dates like January 17th, 1904 appearing as 19040117 (yyyymmdd format).
Further, since these are linked lists where, for example,
...-(t0:time {num:19040000})-[:precedes]->(t1:time {num:19050000})-...
ordering is built into the nodes too.
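Hooking an event up to its date could then look like this (a hypothetical sketch following the conventions above; the sums in the earlier query reconstruct 19040000 + 100 + 17 = 19040117):
// Connect one event to its year, month, and day nodes.
MATCH (y:time {num: 19040000}), (m:time {num: 100}), (d:time {num: 17})
CREATE (e:event {name: 'Some event'})
CREATE (e)-[:happened]->(y), (e)-[:happened]->(m), (e)-[:happened]->(d)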
This is, so far, how I have liked to do my event dating.

Query across timezones

I'm developing an app where a user can request that an email be sent to them at a specific time every day in their timezone. For example User A lives in London and schedules an email at 2pm every day London time and User B lives in New York and schedules an email at 2pm New York time.
I'm wondering how I should configure my Postgres database so that a scheduler can fire every minute and query for all emails to be sent at that minute, regardless of which timezone they're in.
The one thing I want to avoid is having to run multiple queries, once per timezone.
Due to the (rather idiotic, quite frankly) rules for daylight saving time (DST) across the world, a local time can mean all kinds of things in absolute (UTC) time.
Save a time (not timetz!) and the time zone name (not the abbreviation) for when to send the emails. Tricky details under this related question:
Time zone names with identical properties yield different result when applied to timestamp
CREATE TABLE event (
  event_id   serial PRIMARY KEY
, alarm_time time  -- local alarm time
, tz         text  -- time zone name
, ...
);
Use the following expression to "cook" the exact daily point in time, taking local DST settings into account:
SELECT current_date + alarm_time AT TIME ZONE tz;
Example:
SELECT current_date + '2:30'::time AT TIME ZONE 'Europe/London' AS alarm_ts
Returns:
alarm_ts
2014-05-19 02:30:00+02
Use timestamp with time zone (timestamptz) across your whole application. Be sure to understand how it works. This comprehensive post may be of help (it also explains the AT TIME ZONE construct):
Ignoring timezones altogether in Rails and PostgreSQL
Just to be clear: once you have "cooked" the daily UTC time, you can translate it to, and work with, any local time just as well. But it might be less confusing to do all the rest in UTC.
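A minimal sketch of the per-minute scheduler query built on that expression (assuming the event table above; note that near midnight the target zone's "today" can differ from the server's current_date):
-- One query covers every time zone: AT TIME ZONE tz converts each row's
-- local wall-clock alarm into an absolute point in time (timestamptz),
-- which is then compared to the current minute.
SELECT event_id
FROM   event
WHERE  date_trunc('minute', (current_date + alarm_time) AT TIME ZONE tz)
     = date_trunc('minute', now());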