How to select rows between two different time spans in pgAdmin 4

I have a table in pgAdmin 4. One of the columns is a timestamp. I want to extract all rows where the hour of that timestamp is greater than or equal to 0 and smaller than 6, and also the rows where it is between 18 and 23.
I tried the following query but the result was an empty table.
select
*
from tl
where date_part('hour',tl.tstamp) >=0 and date_part('hour',tl.tstamp) <6
and date_part('hour',tl.tstamp) >=18 and date_part('hour',tl.tstamp) <=23;
Each of these time spans works well on its own, but when I try to execute them together I get the empty result mentioned above.
I would appreciate it if somebody could tell me what the problem is and how I can fix it.
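For reference, no single hour value can satisfy both ranges at the same time, so joining the two conditions with AND can never match a row. A minimal sketch of the same filter with the two ranges combined by OR (keeping the original table and column names) would be:
-- keep rows whose hour falls in [0, 6) or in [18, 23]
select *
from tl
where (date_part('hour', tl.tstamp) >= 0 and date_part('hour', tl.tstamp) < 6)
   or (date_part('hour', tl.tstamp) >= 18 and date_part('hour', tl.tstamp) <= 23);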

Compute rolling average across years while displaying data split by year

The dashboard in the linked workbook shows a table with sales split by year on the top. Below, there's a table with the rolling average of the last 4 weeks, including the current. It's set to show NULL if there are not enough data points. I'd like for it to compute the first January 2018 value based on the current week and 3 full weeks from the end of 2017. Carrying that concept forward, all NULLs from 2018 onward will be eliminated. The NULLs for the first 5 weeks of 2017 will be the only NULL values. The average should always be computed on a full 4 weeks (28 days) even when week 53 doesn't contain 7 days.
How can I write a calculation to achieve what's described above?
I've tried putting the WINDOW_AVG function inside a LOD, but that's not allowed. Furthermore, I've also tried using FIXED and even FIXED inside WINDOW_AVG.
Here's one of my attempts:
{FIXED [Week_int]:
WINDOW_AVG(SUM([Sales]), -4, 0)
}
It returns this error: "Error: Level of detail expressions cannot contain table calculations or the ATTR function"
Here's the data structure. It includes one value of Sales per day.
Basically, I created dummy data in Excel by creating dates (from 1-1-2017 to 2-2-2021) and filling in some random values (uniform distribution * 5000) against those dates.
I added WEEK(date) to Columns and YEAR(date) to Rows as in your screenshot, and SUM(value) on the Text marks card.
Thereafter, I added a Moving Average table calculation and edited it to use the previous 4 values and the next 0 values (check "current value" if you want to include the current record), then checked "Null if there are not enough values" (your requirement). Under Compute Using, I selected Specific Dimensions and changed the order of the fields below, dragging Year above Week (Table (across then down) will also create the same view).
You should be able to get the desired view.
Regarding your query on the number of days in the week, Tableau handles it automatically if you have chosen the week date part.
Edit: I verified this in Excel; the method is working correctly.
See the average of the first 28 values in Excel,
and the view built in Tableau:
Here's the corrected dashboard hosted on Tableau Public.
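If you would rather precompute the measure in the database than rely on a table calculation, the same idea can be sketched in SQL; the daily_sales table and its sale_date and sales columns here are hypothetical stand-ins for one row of sales per day:
-- daily_sales(sale_date, sales) is a hypothetical one-row-per-day table
-- 4-week (28-day) rolling average as of each day: the sum of the last 28 days
-- divided by 4, left NULL until a full 28-day window is available
SELECT
    sale_date,
    CASE
        WHEN ROW_NUMBER() OVER (ORDER BY sale_date) >= 28 THEN
            SUM(sales) OVER (
                ORDER BY sale_date
                ROWS BETWEEN 27 PRECEDING AND CURRENT ROW
            ) / 4.0
    END AS rolling_4_week_avg
FROM daily_sales;
Because the window is defined purely by row order, it rolls across the year boundary the same way it rolls across any other week.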

Retrieving all entries 10 days prior to days in table, which matches where clause

I have a random table of data - with dates and numbers:
Date Open Volume abschange
2016.12.08D00:00:00.000000000 11035.76 1.74835e+008 1.30177
2016.12.09D00:00:00.000000000 11170.18 1.0383e+008 0.2994598
2016.12.12D00:00:00.000000000 11198.42 8.98117e+007 0.07331357
...
2016.12.30D00:00:00.000000000 11443.31 4.18109e+007 0.3298871
2017.01.02D00:00:00.000000000 11426.38 4.74561e+007 1.504853
So from this table, I would like to be able to create a list which holds all the entries from the 10 days prior to the days that have abschange>1.
I thought it would be easiest to start with a focus on those dates:
Date abschange
---------------------------------------
2016.12.08D00:00:00.000000000 1.30177
2017.01.02D00:00:00.000000000 1.504853
2017.01.25D00:00:00.000000000 1.099709
2017.01.31D00:00:00.000000000 1.344625
2017.02.06D00:00:00.000000000 1.016427
2017.02.21D00:00:00.000000000 1.265196
Then create a flat list:
mynewdates: raze tablewithDateAndabschange each
which gives me:
2016.12.08D........ 2017.01.02D......
Then I get stuck when I want to add the 10 prior dates for each entry in this list.
Could I actually get my desired result in one line of code based on the first table, or should I follow the path I am on?
For both, if possible, what would the solution be?
If my understanding is correct, your requirement is:
for each date for which abschange>1, get the last 10 dates before this date from the table.
The query below will create that map. It is based on the following assumptions:
The Date column is unique and ordered (ascending), as it appears to be from your example.
If that condition is not true, it will require a minor change to the query below to work with duplicates and an unordered list.
The table is not keyed.
q) (tbl[`Date]a)!b#'where#'not null b:tbl[`Date] -1+(a:where 1<tbl`abschange)-\:til 10
UPDATE: Based on the discussion in the comments.
Just add a second step to check whether the dates list for the first result is empty; in that case, generate the last 10 dates from that date.
Finally it generates a table where each row contains the prior dates (max 10) for each date with abschange>1.
q)d:b#'where#'not null b:tbl[`Date] -1+(a:where 1<tbl`abschange)-\:til 10
q)d[0]:$[0=count d 0;(tbl[`Date]a 0)-1+til 10;d 0]
q)([]dates:d)
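Outside q, the same lookup can be illustrated with a SQL window function; this is only a cross-language sketch, and the quotes table with its trade_date and abschange columns is a hypothetical stand-in for the table above:
-- quotes(trade_date, abschange) is a hypothetical stand-in for the table above
-- for each row with abschange > 1, collect the dates of the 10 rows
-- immediately before it (fewer if the table starts later)
SELECT trade_date, prior_dates
FROM (
    SELECT trade_date,
           abschange,
           array_agg(trade_date) OVER (
               ORDER BY trade_date
               ROWS BETWEEN 10 PRECEDING AND 1 PRECEDING
           ) AS prior_dates
    FROM quotes
) t
WHERE abschange > 1;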

How to get all missing days between two dates

I will try to explain the problem on an abstract level first:
I have X amount of data as input, which is always going to have a field DATE. Before, the dates that came as input (after some processing) were put in a table as output. Now, I am asked to output both the input dates and every date between the minimum date received and one year from that moment. If there was originally no input for some day between these two dates, all fields must come out as 0, or equivalent.
Example: I have two inputs, one with '18/03/2017' and the other with '18/03/2018'. I now need to create output data for all the missing dates between '18/03/2017' and '18/04/2017'. So, output '19/03/2017' with every field set to 0, and the same for the 20th and the 21st and so on.
I know how to do this programmatically, but in PowerCenter I do not. I've been told to do the following (which I have done, but I would like to know of a better method):
Get the minimum date, day0. Then, with an Aggregator, create 365 fields, each holding day0+1, day0+2, and so on, to create an artificial year.
After that we do several transformations, like sorting the dates and a union between them, to get the data ready for a Joiner. The idea of the Joiner is to do a Full Outer Join between the original data and the data with all fields set to 0 that we got from the previous Aggregator.
Then a Router picks, with one of its groups, the data that had actual dates (and fields without nulls) and, with another group, the rows where all fields are null; those fields are then given a 0 before finally being written to a table.
I am wondering how this can be achieved while, for starters, removing the need to add 365 days to a date. If I were to do this same process for 10 years instead of one, the task gets ridiculous really quickly.
I was wondering about an XOR type of operation, or some other function that would cut the number of steps needed for what I (maybe wrongly) feel is a simple task. Currently I need 5 steps just to know which dates are missing between two dates, a minimum and one year from that point.
I have tried to be as clear as possible, but if I failed at any point please let me know!
I'm not sure what the aggregator is supposed to do?
The same with the 'full outer' join? A normal join on a constant port is fine :)
Can you calculate the needed number of 'duplicates' before the 'joiner'? In that case a lookup configured to return 'all rows' and a less-than-or-equal predicate can help make the mapping much more readable.
In any case you will need a helper table (or file) with a sequence of numbers between 1 and the number of potential duplicates (or more).
I use our time dimension in the warehouse, which has one row per day from 1753-01-01 for the next 200000 days, and a primary integer column with values from 1 and up...
You've identified that you know how to do this programmatically, and to be fair this problem is more suited to that sort of solution... but that doesn't exclude PowerCenter by any means: just feed the 2 dates into a Java transformation and apply some code to produce all the dates between them, outputting a record for each. The Java transformation is ideal for record generation.
OK... so you could override your Source Qualifier to achieve this in the selection query itself (I'm giving an Oracle-based example as it's what I'm used to, and I'm assuming your input data is from a table). I looked up the CONNECT BY syntax here:
SQL to generate a list of numbers from 1 to 100
SELECT MIN(tablea.DATEFIELD) + levquery.n - 1 AS Port1
FROM tablea,
     (SELECT LEVEL n FROM DUAL CONNECT BY LEVEL <= 365) levquery
GROUP BY levquery.n
(Check if the query works for you - haven't access to pc to test it at the minute)
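To carry that through to the zero-filled output described in the question, a hedged Oracle-style sketch could build the calendar in a subquery and left-join the original data onto it; tablea and DATEFIELD come from the query above, while some_value is a placeholder for whatever fields need the 0 default:
-- some_value is a placeholder for the fields that need the 0 default
-- build a 365-day calendar starting at the earliest input date, then
-- left-join the real data and zero-fill the days that have no input row
SELECT cal.day                AS datefield,
       NVL(src.some_value, 0) AS some_value
FROM (
    SELECT d.day0 + lev.n - 1 AS day
    FROM (SELECT MIN(DATEFIELD) AS day0 FROM tablea) d
    CROSS JOIN (SELECT LEVEL AS n FROM DUAL CONNECT BY LEVEL <= 365) lev
) cal
LEFT JOIN tablea src
       ON src.DATEFIELD = cal.day
ORDER BY cal.day;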

Tableau Future and Current References

Tough problem I am working on here.
I have a table of CustomerIDs and CallDates. I want to measure whether there is a 'repeat call' within a certain period of time (up to 30 days).
I plan on creating a parameter called RepeatTime which is a range from 0 - 30 days, so the user can slide a scale to see the number/percentage of total repeats.
In Excel, I have this working. I sort CustomerID in order and then sort CallDate from earliest to latest. I then have formulas like:
=IF(AND(CurrentCustomerID = FutureCustomerID, FutureCallDate - CurrentCallDate <= RepeatTime), 1,0)
CurrentCustomerID = the current row, and the FutureCustomerID = the following row (so it is saying if the customer ID is the same).
FutureCallDate = the following row and the CurrentCallDate = the current row. It is subtracting the future call time from the first call time to measure the time in between.
The goal is to be able to see, dynamically, how many customers called in for a specific reason within maybe 4 hours or 1 day or 5 days, etc. All of the way up until 30 days (this is our actual metric but it is good to see the calls which are repeats within a shorter time frame so we can investigate).
I had a similar problem; see here for a detailed version: Array calculation in Tableau, maxif routine.
In your case, that is basically the same thing as mine, so you could apply that solution, but I find the one I'm about to give easier to understand. I would do:
1) Create a calculated field called RepeatTime:
DATEDIFF('day',LOOKUP(MAX(CallDates),-1),MAX(CallDates))
This will calculate how many days have passed since the previous call to the current one. You can add an IFNULL so you don't get Null values for the first entry.
2) Drag CustomersID, CallDates and RepeatTime to the worksheet (they can be on the Marks card; they don't need to be on Rows or Columns).
3) Configure the table calculation of RepeatTime: Compute using Advanced..., partitioning by CustomersID, addressing by CallDates.
Also sort by the field CallDates, Maximum, Ascending.
This will guarantee the table calculation works properly.
4) Now you have a base that you can use for what you need. You can either export it to csv or mdb and connect to it.
The best approach, actually, is to have this RepeatTime field calculated outside Tableau, in your database, so it's already there when you connect to it. But this is a way to use Tableau to do the calculation for you.
Unfortunately there's no direct way to do this with your database.
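If you do push the calculation to the database, a window function can produce an equivalent flag; this is a hedged sketch in PostgreSQL-flavored SQL, where the calls table, its customer_id and call_date (a DATE) columns, and the :repeat_time placeholder are all hypothetical:
-- calls(customer_id, call_date DATE) and :repeat_time are hypothetical
-- flag a call as a repeat when the same customer's next call
-- falls within :repeat_time days (mirrors the Excel formula above)
SELECT
    customer_id,
    call_date,
    CASE
        WHEN LEAD(call_date) OVER (
                 PARTITION BY customer_id
                 ORDER BY call_date
             ) - call_date <= :repeat_time
        THEN 1 ELSE 0
    END AS is_repeat
FROM calls;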

Want to create 2 columns with different period parameters from the same data in SSRS

I want to create a monthly report, calculating the previous 2-month average as a percentage of the previous 12-month average. Basically I want to see which shops have dropped in sales in the previous 2 months, and hopefully only show the shops that have decreased 20% in sales.
So I believe the columns need to be like this:
Shop|Products|Avg of 12 months|Avg of 2 months| %
Since I have many entries for the sales, I also need to sum the previous 12 months and then average it, as well as sum the previous 2 months and average that.
I have thought of some ways to do it, but they didn't seem to work and seemed too complicated and complex.
I'm hoping there is a simpler solution to this? Do I need to use a pivot table?
I'm using PostgreSQL 9.1 with Visual Studio 10.
Thanks a bunch
When something seems too complicated to resolve with a single query, I create and populate a DataTable at runtime and pass it to the ReportViewer.
In this case you can:
create a DataTable with Shop and Product as the PK (if you want to print the report for a period of months you can also add Month to the PK). The other 2 columns will be Avg12Months and Avg2Months
insert a record for each combination of Shop/Product (and, if applicable, Month)
for each Shop/Product record, calculate and save the results for Avg12Months and Avg2Months
pass your DataTable to ReportViewer
use a single Tablix to display the results (sorting, grouping and other operations can be done in the Tablix)
Some steps can be combined in order to speed up the process.
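For the calculation step, here is a minimal PostgreSQL 9.1-compatible sketch of the aggregation such a DataTable could be populated from; the sales table and its shop, product, sale_date and amount columns are hypothetical, and the HAVING clause keeps only the shop/product combinations whose 2-month average has dropped at least 20% below the 12-month average:
-- sales(shop, product, sale_date, amount) is a hypothetical table
-- per shop/product: monthly average over the last 12 months vs the last 2 months,
-- keeping only combinations where the recent average dropped by 20% or more
SELECT
    shop,
    product,
    SUM(CASE WHEN sale_date >= CURRENT_DATE - INTERVAL '12 months' THEN amount ELSE 0 END) / 12.0 AS avg_12_months,
    SUM(CASE WHEN sale_date >= CURRENT_DATE - INTERVAL '2 months'  THEN amount ELSE 0 END) / 2.0  AS avg_2_months
FROM sales
WHERE sale_date >= CURRENT_DATE - INTERVAL '12 months'
GROUP BY shop, product
HAVING SUM(CASE WHEN sale_date >= CURRENT_DATE - INTERVAL '2 months' THEN amount ELSE 0 END) / 2.0
     < 0.8 * (SUM(CASE WHEN sale_date >= CURRENT_DATE - INTERVAL '12 months' THEN amount ELSE 0 END) / 12.0);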