Create a stock chart using PostgreSQL

I am trying to make one of those stock charts using PostgreSQL.
My data would look something like this:
stock_data
stock_price | trade_datetime
5.1 | 1/1/2000 1:00 PM
6.2 | 1/1/2000 2:00 PM
5.0 | 1/2/2000 1:00 PM
3.4 | 1/2/2000 2:00 PM
4.8 | 1/2/2000 3:00 PM
7.0 | 1/3/2000 2:30 PM
5.9 | 1/3/2000 5:55 PM
Desired result
MIN | MAX | AVG | close | date
5.1 | 6.2 | 5.65| 6.2 | 1/1/2000
3.4 | 5.0 | 4.4 | 4.8 | 1/2/2000
5.9 | 7.0 | 6.45| 5.9 | 1/3/2000
I am thinking I probably need to use window functions, but I just can't seem to get this one right.

You can do this by combining the usual aggregate functions with a LATERAL join to a subquery that uses the LAST_VALUE window function:
SELECT
    MIN(stock_price)           AS "MIN"
  , MAX(stock_price)           AS "MAX"
  , AVG(stock_price)           AS "AVG"
  , MAX(closing.closing_price) AS "close"
  , trade_datetime::date       AS "date"
FROM
    stock_data
    INNER JOIN LATERAL (
        SELECT
            -- without an ORDER BY and an explicit frame, LAST_VALUE is not
            -- guaranteed to return the latest trade of the day
            LAST_VALUE(closing_data.stock_price) OVER (
                PARTITION BY closing_data.trade_datetime::date
                ORDER BY closing_data.trade_datetime
                ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING
            ) AS closing_price
        FROM
            stock_data AS closing_data
        WHERE
            closing_data.trade_datetime::date = stock_data.trade_datetime::date
    ) AS closing ON true
GROUP BY
    trade_datetime::date
ORDER BY
    trade_datetime::date ASC;
Yields:
| MIN | MAX | AVG                | close | date       |
| --- | --- | ------------------ | ----- | ---------- |
| 5.1 | 6.2 | 5.6500000000000000 | 6.2   | 2000-01-01 |
| 3.4 | 5.0 | 4.4000000000000000 | 4.8   | 2000-01-02 |
| 5.9 | 7.0 | 6.4500000000000000 | 5.9   | 2000-01-03 |
DB Fiddle
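If you would rather avoid the lateral join, here is a simpler sketch (same stock_data table, not from the original answer) that takes the closing price from an ordered ARRAY_AGG in the same pass as the other aggregates:
SELECT
    MIN(stock_price)  AS "MIN"
  , MAX(stock_price)  AS "MAX"
  , AVG(stock_price)  AS "AVG"
    -- first element of the prices ordered latest-first = last trade of the day
  , (ARRAY_AGG(stock_price ORDER BY trade_datetime DESC))[1] AS "close"
  , trade_datetime::date AS "date"
FROM stock_data
GROUP BY trade_datetime::date
ORDER BY trade_datetime::date;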

Related

PostgreSQL Query Using Crosstab

Now, I am learning PostgreSQL. While studying, I found crosstab in PostgreSQL. I tried to apply this function to my table, but it does not work. Please help!
This is my table:
year | type | count
------+----------+----
2015 | AS | 6
2015 | HY | 6
2015 | KR | 6
2015 | SE | 6
2016 | AS | 2
2016 | HY | 2
2016 | KR | 2
2016 | SE | 2
2017 | AS | 1
2017 | HY | 1
2017 | KR | 1
2017 | SE | 1
2018 | AS | 2
2018 | HY | 2
2018 | KR | 2
2018 | SE | 2
I want to change this table to look like this:
year | AS | HY | KR | SE |
----------------------------------
2015 | 6 | 6 | 6 | 6 |
2016 | 2 | 2 | 2 | 2 |
2017 | 1 | 1 | 1 | 1 |
2018 | 2 | 2 | 2 | 2 |
To make that table, I wrote a query using crosstab, but it does not work!
Please let me know how to write this query.
You can achieve this without crosstab by using an aggregate function with a FILTER clause.
Query:
select
    year,
    max("count") filter (where type = 'AS') as "AS",
    max("count") filter (where type = 'HY') as "HY",
    max("count") filter (where type = 'KR') as "KR",
    max("count") filter (where type = 'SE') as "SE"
from
    tbl
group by
    year
order by
    year asc;
Demo: DB Fiddle
And if you are trying to learn crosstab, this answer is really great and explains well how to pivot using crosstab:
PostgreSQL Crosstab Query
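For reference, a minimal crosstab sketch for this shape of data (assuming the table is called tbl as above and that the tablefunc extension can be installed) would look something like this:
CREATE EXTENSION IF NOT EXISTS tablefunc;

SELECT *
FROM crosstab(
    $$ SELECT year, type, "count" FROM tbl ORDER BY 1, 2 $$,
    $$ VALUES ('AS'), ('HY'), ('KR'), ('SE') $$
) AS ct (year int, "AS" int, "HY" int, "KR" int, "SE" int);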

Tableau - How to check if a value equals a value from another row and column

I have the following table:
+------------+--------------+---------+---------+---------+
| Category | Subcategory |FruitName| Date1 | Date2 |
+------------+--------------+---------+---------+---------+
| A | 1 | Foo | 2011 | 2017 |
| | +---------+---------+---------+
| | |Pineapple| 2011 | 2013 |
| | +---------+---------+---------+
| | | Apple | 2017 | 2018 |
| +--------------+---------+---------+---------+
| | 2 | Peach | 2014 | 2015 |
| | +---------+---------+---------+
| | | Orange | 2015 | 2018 |
| | +---------+---------+---------+
| | | Banana | 2009 | 2013 |
+------------+--------------+---------+---------+---------+
I'd like to display the fruit names where Date1 from one row equals Date2 from another row, but only when they match within the same Subcategory. In the table above, this filter should retrieve the rows based on those criteria, and the final table would look like this:
+------------+--------------+---------+---------+---------+
| Category | Subcategory |FruitName| Date1 | Date2 |
+------------+--------------+---------+---------+---------+
| A | 1 | Foo | 2011 | 2017 |
| | +---------+---------+---------+
| | | Apple | 2017 | 2018 |
| +--------------+---------+---------+---------+
| | 2 | Peach | 2014 | 2015 |
| | +---------+---------+---------+
| | | Orange | 2015 | 2018 |
+------------+--------------+---------+---------+---------+
How can I possibly achieve this?
The logic you provided does not match the output you provided. If you are after that output, your logic should be:
SELECT DISTINCT f1.*
FROM fruits f1
JOIN fruits f2
  ON f1.Subcategory = f2.Subcategory
WHERE f1.Date1 = f2.Date2
   OR f1.Date2 = f2.Date1;
(DISTINCT avoids duplicate rows when a fruit matches more than one other row.)
If your data source supports custom SQL, you can use the above query straight away. If not, you can still achieve it in Tableau using a full outer join and a calculated field (Tableau doesn't support OR conditions in joins):
Create a self full outer join on Subcategory.
Create a calculated field called 'FILTER' that is true when a row's Date1 equals the joined row's Date2, or its Date2 equals the joined row's Date1.
Apply a data source filter to keep only 'FILTER' = True.
Hide the fields from the right-hand side of the join and you will have the required output.

PostgreSQL Crosstab generate_series of weeks for columns

From a table of "time entries" I'm trying to create a report of weekly totals for each user.
Sample of the table:
+-----+---------+-------------------------+--------------+
| id | user_id | start_time | hours_worked |
+-----+---------+-------------------------+--------------+
| 997 | 6 | 2018-01-01 03:05:00 UTC | 1.0 |
| 996 | 6 | 2017-12-01 05:05:00 UTC | 1.0 |
| 998 | 6 | 2017-12-01 05:05:00 UTC | 1.5 |
| 999 | 20 | 2017-11-15 19:00:00 UTC | 1.0 |
| 995 | 6 | 2017-11-11 20:47:42 UTC | 0.04 |
+-----+---------+-------------------------+--------------+
Right now I can run the following and basically get what I need
SELECT COALESCE(SUM(time_entries.hours_worked),0) AS total,
time_entries.user_id,
week::date
--Using generate_series here to account for weeks with no time entries when
--doing the join
FROM generate_series( (DATE_TRUNC('week', '2017-11-01 00:00:00'::date)),
(DATE_TRUNC('week', '2017-12-31 23:59:59.999999'::date)),
interval '7 day') as week LEFT JOIN time_entries
ON DATE_TRUNC('week', time_entries.start_time) = week
GROUP BY week, time_entries.user_id
ORDER BY week
This will return
+-------+---------+------------+
| total | user_id | week |
+-------+---------+------------+
| 14.08 | 5 | 2017-10-30 |
| 21.92 | 6 | 2017-10-30 |
| 10.92 | 7 | 2017-10-30 |
| 14.26 | 8 | 2017-10-30 |
| 14.78 | 10 | 2017-10-30 |
| 14.08 | 13 | 2017-10-30 |
| 15.83 | 15 | 2017-10-30 |
| 8.75 | 5 | 2017-11-06 |
| 10.53 | 6 | 2017-11-06 |
| 13.73 | 7 | 2017-11-06 |
| 14.26 | 8 | 2017-11-06 |
| 19.45 | 10 | 2017-11-06 |
| 15.95 | 13 | 2017-11-06 |
| 14.16 | 15 | 2017-11-06 |
| 1.00 | 20 | 2017-11-13 |
| 0 | | 2017-11-20 |
| 2.50 | 6 | 2017-11-27 |
| 0 | | 2017-12-04 |
| 0 | | 2017-12-11 |
| 0 | | 2017-12-18 |
| 0 | | 2017-12-25 |
+-------+---------+------------+
However, this is difficult to parse, particularly when there is no data for a week. What I would like is a pivot or crosstab table where the weeks are the columns and the rows are the users, including nulls/zeros for every combination (for instance, when a user had no entries in a given week, or a week had no entries from any user).
Something like this
+---------+---------------+--------------+--------------+
| user_id | 2017-10-30 | 2017-11-06 | 2017-11-13 |
+---------+---------------+--------------+--------------+
| 6 | 4.0 | 1.0 | 0 |
| 7 | 4.0 | 1.0 | 0 |
| 8 | 4.0 | 0 | 0 |
| 9 | 0 | 1.0 | 0 |
| 10 | 4.0 | 0.04 | 0 |
+---------+---------------+--------------+--------------+
I've been looking around online and it seems that "dynamically" generating a list of columns for crosstab is difficult. I'd rather not hard-code them, which seems like an odd thing to do for dates anyway, or use something like a CASE expression per week number.
Should I look for another solution besides crosstab? If I could get the series of weeks for each user including all nulls I think that would be good enough. It just seems that right now my join strategy isn't returning that.
Personally I would use a date dimension table and use that table as the basis for the query. I find it far easier to use tabular data for these types of calculations as it leads to SQL that's easier to read and maintain. There's a great article on creating a date dimension table in PostgreSQL at https://medium.com/@duffn/creating-a-date-dimension-table-in-postgresql-af3f8e2941ac, though you could get away with a much simpler version of this table.
Ultimately you would use the date table as the base of the SELECT ... FROM section and then join against it, probably using common table expressions, to build up the calculations.
I'll write up a solution demonstrating how you could create such a query if you would like.
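In the meantime, here is a minimal sketch of that idea (using a plain weeks CTE built from generate_series in place of a full date dimension table; table and column names are taken from the question). It cross joins every user with every week so the zero weeks show up, which you can then pivot in your reporting layer:
WITH weeks AS (
    -- one row per week in the reporting window
    SELECT week::date AS week
    FROM generate_series(DATE_TRUNC('week', DATE '2017-11-01'),
                         DATE_TRUNC('week', DATE '2017-12-31'),
                         INTERVAL '7 day') AS week
),
users AS (
    SELECT DISTINCT user_id FROM time_entries
)
SELECT u.user_id,
       w.week,
       COALESCE(SUM(te.hours_worked), 0) AS total
FROM users u
CROSS JOIN weeks w
LEFT JOIN time_entries te
       ON te.user_id = u.user_id
      AND DATE_TRUNC('week', te.start_time)::date = w.week
GROUP BY u.user_id, w.week
ORDER BY u.user_id, w.week;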

Symfony2 Query to find last working date from Holiday Calendar

I have a calendar entity in my project which manages the open and close times of business days for the whole year.
Below are the records for a specific month:
id | today_date | year | month_of_year | day_of_month | is_business_day
-------+---------------------+------+---------------+-------------+---------------+
10103 | 2016-02-01 00:00:00 | 2016 | 2 | 1 | t
10104 | 2016-02-02 00:00:00 | 2016 | 2 | 2 | t
10105 | 2016-02-03 00:00:00 | 2016 | 2 | 3 | t
10106 | 2016-02-04 00:00:00 | 2016 | 2 | 4 | t
10107 | 2016-02-05 00:00:00 | 2016 | 2 | 5 | t
10108 | 2016-02-06 00:00:00 | 2016 | 2 | 6 | f
10109 | 2016-02-07 00:00:00 | 2016 | 2 | 7 | f
10110 | 2016-02-08 00:00:00 | 2016 | 2 | 8 | t
10111 | 2016-02-09 00:00:00 | 2016 | 2 | 9 | t
10112 | 2016-02-10 00:00:00 | 2016 | 2 | 10 | t
10113 | 2016-02-11 00:00:00 | 2016 | 2 | 11 | t
10114 | 2016-02-12 00:00:00 | 2016 | 2 | 12 | t
10115 | 2016-02-13 00:00:00 | 2016 | 2 | 13 | f
10116 | 2016-02-14 00:00:00 | 2016 | 2 | 14 | f
10117 | 2016-02-15 00:00:00 | 2016 | 2 | 15 | t
10118 | 2016-02-16 00:00:00 | 2016 | 2 | 16 | t
10119 | 2016-02-17 00:00:00 | 2016 | 2 | 17 | t
10120 | 2016-02-18 00:00:00 | 2016 | 2 | 18 | t
I want to get the today_date from 7 working days back. Suppose today_date is 2016-02-18; then the date 7 working days back is 2016-02-09.
You can use row_number() for this, like so:
SELECT * FROM
    (SELECT t.*, row_number() OVER (ORDER BY today_date DESC) AS rnk
     FROM Calender t
     WHERE today_date <= current_date
       AND is_business_day = 't') AS ranked
WHERE rnk = 7
This will give you the row of the 7th business day back from today's date.
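Equivalently, a small sketch with the same assumptions (the Calender table and columns shown in the question) that skips the window function and uses OFFSET/LIMIT instead:
SELECT today_date
FROM Calender
WHERE today_date <= current_date
  AND is_business_day = 't'
ORDER BY today_date DESC
OFFSET 6 LIMIT 1;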
I see that you tagged your question with Doctrine, ORM and Datetime. Were you after a QueryBuilder solution? Maybe this is closer to what you want:
$qb->select('c.today_date')
    ->from(Calendar::class, 'c')
    ->where('c.today_date <= :today')
    ->andWhere("c.is_business_day = 't'")
    ->orderBy('c.today_date', 'DESC')
    ->setMaxResults(7)
    ->setParameter('today', new \DateTime('now'), \Doctrine\DBAL\Types\Type::DATETIME);

How to sort MySQLi table by time in ASC order

I want to display my table according to their time in ascending order.
table sample:
+------+----------+-----+
| id | time | val |
+------+----------+-----+
| 1 | 01:22 AM | a |
+------+----------+-----+
| 2 | 03:12 PM | b |
+------+----------+-----+
| 3 | 07:21 AM | c |
+------+----------+-----+
| 4 | 01:52 PM | d |
+------+----------+-----+
| 5 | 07:40 PM | e |
+------+----------+-----+
It should be arranged in AM-to-PM order:
a - 01:22 AM
c - 07:21 AM
d - 01:52 PM
b - 03:12 PM
e - 07:40 PM
The table column is not in a date/time format; it's only a VARCHAR.
"The table column is not in a date/time format; it's only a VARCHAR" - that's your problem.
Change it to the TIME type and everything will sort correctly.
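For example (a sketch assuming a hypothetical table name my_table, with the column still stored as VARCHAR), you can sort the existing strings by parsing them with STR_TO_DATE, and optionally convert the column for good:
-- sort the VARCHAR values as real times without changing the schema
SELECT val, `time`
FROM my_table
ORDER BY STR_TO_DATE(`time`, '%h:%i %p');

-- or convert the column to a proper TIME type
UPDATE my_table SET `time` = STR_TO_DATE(`time`, '%h:%i %p');
ALTER TABLE my_table MODIFY `time` TIME;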