I have a table (table1) that holds fact data. Its columns are (products, start, end, value1, month [calculated column]), where start and end are timestamps.
What I am trying to build is a table and a bar chart that show the sum of value1 for each month, divided by a factor that depends on the month (this is a yearly report; I load one year of data into Qlik Sense).
I used start and end to generate an autoCalendar as a timestamp field in the Qlik Sense data manager. Then I derive the month from start and store it in the calculated column "month" in table1, using the autoCalendar feature (Month(start.autoCalendar.Month)).
After that, I created another table (table2) with two columns (month, value2), where value2 is the factor by which value1 must be divided for each month, i.e. sum(value1)/1520 for January, sum(value1)/650 for February, and so on. The two month columns link the tables in Qlik Sense, so in my expression I can calculate sum(value1) and pick up the value2 that matches the month in table2.
The calculation works correctly, but one thing is still missing: the products do not have a value (value1) in every month. For example, say I have products (p1, p2, ...); table1 holds data for p1 in (Jun, Feb, Nov) and for p2 in (Mar, Apr, May, Dec). Hence, when the data are presented in a Qlik Sense table or a bar chart, I can only see the months that have values in the fact table. (The Qlik Sense table has two dimensions, [products] and [month], and the measure m1 [sum(value1)/value2].)
What I want is a yearly report showing all 12 months; in my example I see only 3 months for p1 and 4 months for p2. When there is no data, the measure [m1] should be 0, and I want that 0 to appear in my table and chart.
I think it could be solved if I could show the data of the Qlik Sense table as a right outer join across my relationship (table1.month >> table2.month). So, is it possible to have such an outer join in Qlik Sense, or is there a better solution to my problem?
Update
Got it. Not sure if this is the best approach, but in these cases I usually fill the missing records during the script load.
// Main table
Sales:
Load
*,
ProductId & '-' & Month as Key_Product_Month
;
Load * Inline [
ProductId, Month, SalesAmount
P1 , 1 , 10
P1 , 2 , 20
P1 , 3 , 30
P2 , 1 , 40
P2 , 2 , 50
];
// Get distinct products and assign 0 as SalesAmount
Products_Temp:
Load
distinct ProductId,
0 as SalesAmount
Resident
Sales
;
join (Products_Temp) // Cross join in this case
Load
distinct Month
Resident
Sales
;
// After the cross join Products_Temp table contains
// all possible combinations between ProductId and Month
// and for each combination SalesAmount = 0
Products_Temp_1:
Load
*,
ProductId & '-' & Month as Key_Product_Month1 // Generate the unique id
Resident
Products_Temp
;
Drop Table Products_Temp; // we don't need this anymore
Concatenate (Sales)
// Concatenate to the main table only the ProductId-Month
// combinations that are missing
Load
*
Resident
Products_Temp_1
Where
Not Exists(Key_Product_Month, Key_Product_Month1)
;
Drop Table Products_Temp_1; // not needed any more
Drop Fields Key_Product_Month1, Key_Product_Month; // not needed any more
Before the script: (screenshot omitted)
After the script: (screenshot omitted)
The table link in Qlik Sense (and QlikView) behaves more like a full outer join. If you want to show the ids from only one table (and not all of them), you can create an additional field in that table and then perform your calculations on top of this field instead of the linked one. For example:
Table1:
Load
id,
value1
From
MyQVD1.qvd (qvd)
;
Table2:
Load
id,
id as MyRightId,
value2
From
MyQVD2.qvd (qvd)
;
In the example above both tables will still be linked on the id field, but if you want to count only the id values in the right table (Table2) you just need to type:
count( MyRightId )
I know this question has been answered and I quite like Stefan's approach, but I hope my answer will help other users. I recently ran into something similar and used slightly different logic with the following script:
// Main table
Sales:
Load * Inline [
ProductId, Month, SalesAmount
P1 , 1 , 10
P1 , 2 , 20
P1 , 3 , 30
P2 , 1 , 40
P2 , 2 , 50
];
Cartesian:
//Create a combination of all ProductId and Month and then load the existing data into this table
NoConcatenate Load distinct ProductId Resident Sales;
Join
Load Distinct Month Resident Sales;
Join Load ProductId, Month, SalesAmount Resident Sales; //Existing data loaded
Drop Table Sales;
This results in the following output table:
ProductId | Month | SalesAmount
----------+-------+------------
P1        | 1     | 10
P1        | 2     | 20
P1        | 3     | 30
P2        | 1     | 40
P2        | 2     | 50
P2        | 3     | (null)
The null value in the new (bottom-most) row can stay like that, but if you prefer replacing it, use the Map ... Using process.
Related
I'm working on a system to manage the problems in different projects.
I have the following tables:
Projects
id | Description   | Country
---+---------------+--------
1  | 3D experience | Brazil
2  | Lorem Epsum   | Chile

Problems
id | idProject | Description
---+-----------+--------------
1  | 1         | Not loading
2  | 1         | Breaking down

Problems_status
id | idProblem | Status | Start_date | End_date
---+-----------+--------+------------+-----------
1  | 1         | Red    | 2020-10-17 | 2020-10-25
2  | 1         | Yellow | 2020-10-25 | 2020-11-20
3  | 1         | Red    | 2020-11-20 |
4  | 2         | Red    | 2020-11-01 | 2020-11-25
5  | 2         | Yellow | 2020-11-25 | 2020-12-22
6  | 2         | Red    | 2020-12-22 | 2020-12-23
7  | 2         | Green  | 2020-12-23 |
In the above examples, problem 1 is still Red, and problem 2 is Green (no end date).
I need to create a chart for when the user selects a specific project, showing the status of the problems along the weeks (starting from the week of the first registered problem). The chart for project 1 should look like this (chart image omitted).
I'm trying to write a query in PostgreSQL that returns a table like the one below, so that I can populate this chart:
Week  | Green | Yellow | Red
------+-------+--------+----
42/20 | 0     | 0      | 1
43/20 | 0     | 0      | 1
44/20 | 0     | 1      | 0
...   | ...   | ...    | ...
04/21 | 1     | 0      | 1
I've been trying multiple ways but just can't figure out how to do that; could someone help me, please?
Below is a db-fiddle to help:
CREATE TABLE projects (
id serial NOT NULL,
description character varying(50) NOT NULL,
country character varying(50) NOT NULL,
CONSTRAINT projects_pkey PRIMARY KEY (id)
);
CREATE TABLE problems (
id serial NOT NULL,
id_project integer NOT NULL,
description character varying(50) NOT NULL,
CONSTRAINT problems_pkey PRIMARY KEY (id),
CONSTRAINT problems_id_project_fkey FOREIGN KEY (id_project)
REFERENCES projects (id) MATCH SIMPLE
);
CREATE TABLE problems_status (
id serial NOT NULL,
id_problem integer NOT NULL,
status character varying(50) NOT NULL,
start_date date NOT NULL,
end_date date,
CONSTRAINT problems_status_pkey PRIMARY KEY (id),
CONSTRAINT problems_status_id_problem_fkey FOREIGN KEY (id_problem)
REFERENCES problems (id) MATCH SIMPLE
);
INSERT INTO projects (description, country) VALUES ('3D experience','Brazil');
INSERT INTO projects (description, country) VALUES ('Lorem Epsum','Chile');
INSERT INTO problems (id_project ,description) VALUES (1,'Not loading');
INSERT INTO problems (id_project ,description) VALUES (1,'Breaking down');
INSERT INTO problems_status (id_problem, status, start_date, end_date) VALUES
(1, 'Red', '2020-10-17', '2020-10-25'),(1, 'Yellow', '2020-10-25', '2020-11-20'),
(1, 'Red', '2020-11-20', NULL),(2, 'Red', '2020-11-01', '2020-11-25'),
(2, 'Yellow', '2020-11-25', '2020-12-22'),(2, 'Red', '2020-12-22', '2020-12-23'),
(2, 'Green', '2020-12-23', NULL);
If I understood correctly, your goal is to produce a weekly tally by problem status for a particular project over a specific time period (from the earliest date in the db to the current date). Further, if a problem status spans weeks, it should be included in each week's tally. That involves two time periods: the report period versus the status start/end dates, and checking for overlap between those dates. There are four overlap scenarios that need checking; let A be any week in the report period and B the start/end of a status. Allowing that A must end within the reporting period, but B does not, we have the following:
A starts, B starts, A ends, B ends. B overlaps end of A.
A starts, B starts, B ends, A ends. B totally contained within A.
B starts, A starts, B ends, A ends. B overlaps start of A.
B starts, A starts, A ends, B ends. A totally enclosed within B.
Fortunately, Postgres provides functionality to handle all of the above, meaning the query does not have to handle the individual validations: DATERANGEs and the overlap operator (&&). The difficult work then becomes defining each week within A. Then employ the overlap operator on the daterange for each week in A against the daterange for B (start_date, end_date), and do conditional aggregation for each overlap detected. See the full example here.
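For illustration (my addition, not part of the original answer), this is how daterange and && behave; the default bounds are '[)', inclusive start and exclusive end:
SELECT daterange('2020-10-19','2020-10-26') && daterange('2020-10-24','2020-12-01');  -- true: the ranges share days
SELECT daterange('2020-10-19','2020-10-26') && daterange('2020-10-26','2020-12-01');  -- false: the first range excludes 2020-10-26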
with problem_list( problem_id ) as
-- identify the specific problem_ids desired
(select ps.id
from projects p
join problems ps on(ps.id_project = p.id)
where p.id = &selected_project
) --select * from problem_list;
, report_period(srange, erange) as
-- generate the first day of week (Mon) for the
-- oldest start date through the week of current_date
(select min(first_of_week(ps.start_date))
, first_of_week(current_date)
from problem_status ps
join problem_list pl
on (pl.problem_id = ps.id_problem)
) --select * from report_period;
, weekly_calendar(wk,yr, week_dates) as
-- expand the start, end date ranges to week dates (Mon-Sun)
-- and identify the week number with year
(select extract( week from mon)::integer wk
, extract( isoyear from mon)::integer yr
, daterange(mon, mon+6, '[]'::text) wk_dates
from (select generate_series(srange,erange, interval '7 days')::date mon
from report_period
) d
) -- select * from weekly_calendar;
, status_by_week(yr,wk,status) as
-- determine where problem start_date, end_date overlaps each calendar week
-- then where multiple statuses exist for any week keep only the last
( select yr,wk,status
from (select wc.yr,wc.wk,ps.status
-- , ps.start_date, wc.week_dates,id_problem
, row_number() over (partition by ps.id_problem,yr,wk order by yr, wk, start_date desc) rn
from problem_status ps
join problem_list pl on (pl.problem_id = ps.id_problem)
join weekly_calendar wc on (wc.week_dates && daterange(ps.start_date,ps.end_date)) -- actual overlap test
) ac
where rn=1
) -- select * from status_by_week order by wk;
select 'Project ' || p.id || ': ' || p.description Project
, to_char(wk,'fm09') || '/' || substr(to_char(yr,'fm0000'),3) "WK"
, "Red", "Yellow", "Green"
from projects p
cross join (select sbw.yr,sbw.wk
, count(*) filter (where sbw.status = 'Red') "Red"
, count(*) filter (where sbw.status = 'Yellow') "Yellow"
, count(*) filter (where sbw.status = 'Green') "Green"
from status_by_week sbw
group by sbw.yr, sbw.wk
) sr
where p.id = &selected_project
order by yr,wk;
The CTEs and main operate as follows:
problem_list: Identifies the problems (id_problem) related to the specified project.
report_period: Identifies the full reporting period, start to end.
weekly_calendar: Generates the beginning date (Mon) and ending date (Sun) for each week within the reporting period (A above). Along the way it also gathers the week of the year and the ISO year.
status_by_week: This is the real workhorse, performing two tasks. First it passes each problem over each week in the calendar, building a row for each overlap detected. Then it enforces the "one status per week" rule.
Finally, the main select aggregates the statuses into the appropriate buckets and adds the syntactic sugar of building the project name.
Note the function first_of_week(). This is a user-defined function, available in the example and below. I created it some time ago and have found it useful. You are free to use it, but you do so without any claim of suitability or guarantee.
create or replace
function first_of_week(date_in date)
returns date
language sql
immutable strict
/*
* Given a date return the first day of the week according to ISO-8601
*
* ISO-8601 Standard (in short)
* 1 All weeks begin on Monday.
* 2 All Weeks have exactly 7 days.
* 3 First week of any year is the Monday on or before 4-Jan.
* This implies that the last few days of Dec may be in the
* first week of the following year and that the first few
* days of Jan may be in week 52 (or 53) of the prior year.
* (Not at the same time obviously.)
*
*/
as $$
with wk_adj(l_days) as (values (array[0,1,2,3,4,5,6]))
select date_in - l_days[ extract (isodow from date_in)::integer ]
from wk_adj;
$$;
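A quick sanity check of the function (my example; 2020-10-19 was a Monday):
SELECT first_of_week(date '2020-10-22');  -- returns 2020-10-19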
In the example I have implemented the query as a SQL function, as it seems db<>fiddle has issues with bound variables and substitution variables. Besides, it gave me the ability to parameterize it (I hate hard-coded values). For the example I added additional data for extra testing, mostly data that will not be selected, and an additional status to show what happens if the query encounters something other than those 3 status values (in this case Pink). This is easy to remove; just get rid of OTHER.
Your observation that "the daterange is covering mon-mon, instead of mon-sun" is incorrect, although it would appear that way to someone not used to looking at them. Let's take week 43. If you queried the date range it would show [2020-10-19,2020-10-26), and yes, both of those dates are Mondays. However, the bracketing characters have meaning: the leading character [ says the date is included and the trailing character ) says the date is not included. A standard condition:
somedate <@ '[2020-10-19,2020-10-26)'::daterange
is the same as
somedate >= 2020-10-19 and somedate < 2020-10-26
This is why, when you changed the increment from "mon+6" to "mon+5", you fixed week 43 but introduced errors into other weeks.
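To see the boundary behavior directly (my illustration, not the original author's):
SELECT date '2020-10-25' <@ daterange('2020-10-19','2020-10-26');  -- true: last day of week 43
SELECT date '2020-10-26' <@ daterange('2020-10-19','2020-10-26');  -- false: excluded by the ')' bound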
You can fill in blanks using COALESCE to select the first non-null value in the list.
SELECT COALESCE(<some_value_that_could_be_null>, <some_value_that_will_not_be_null>);
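For instance, against the problems_status table from the question (my example), open-ended statuses get today's date instead of NULL:
SELECT id, status, COALESCE(end_date, CURRENT_DATE) AS end_date
FROM problems_status;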
If you want to force the bounds of your time range into a result set you can UNION your result set with a specific date.
SELECT ... -- your data query here
UNION ALL
SELECT end_ts -- WHERE end_ts is a timestamptz type
In order to UNION you will need to have the same arity and the same types of fields returned in the unioned query. You can fill in everything other than the timestamp with NULL cast to whichever the matching type is.
More concrete example:
WITH data AS -- get raw data
(
SELECT p.id
, ps.status
, ps.start_date
, COALESCE(ps.end_date, CURRENT_DATE, '2025-01-01'::DATE) AS end_date -- you can fill in NULL values with COALESCE
, pj.country
, pj.description
, MAX(start_date) OVER (PARTITION BY p.id) AS latest_update
FROM problems p
JOIN projects pj ON (pj.id = p.id_project)
JOIN problem_status ps ON (p.id = ps.id_problem)
UNION ALL -- force bounds in the following
SELECT NULL::INTEGER -- could be null or a defaulted value
, NULL::TEXT -- could be null or a defaulted value
, start_date -- either as an input param to a function or a hard-coded date
, end_date -- either as an input param to a function or a hard-coded date
, NULL::TEXT
, NULL::TEXT
, NULL::DATE
) -- aggregate in the following
SELECT <week> -- you'll have to figure out how you're getting weeks out of the DATE data
, COUNT(*) FILTER (WHERE status = 'Red')
, COUNT(*) FILTER (WHERE status = 'Yellow')
, COUNT(*) FILTER (WHERE status = 'Green')
FROM data
WHERE start_date = latest_update
GROUP BY <week>
;
Some of the features used in this query are very powerful and you should look them up if they're new to you and you are going to be doing a bunch of reporting queries. Mainly coalesce, common table expressions (CTE), window functions, and aggregate expressions.
Aggregate Expressions
WITH Queries (CTEs)
COALESCE
Window Functions
I wrote a dbfiddle for you to take a look at here after you updated your requirements.
Suppose I have the following tables:
product_prices
product|price|date
-------+-----+----------
apple |10 |2014-03-01
-------+-----+----------
apple |20 |2014-05-02
-------+-----+----------
egg |2 |2014-03-03
-------+-----+----------
egg |4 |2015-10-12
purchases:
user|product|date
----+-------+----------
John|apple |2014-03-02
----+-------+----------
John|apple |2014-06-03
----+-------+----------
John|egg |2014-08-13
----+-------+----------
John|egg |2016-08-13
What I need is table similar to this:
name|product|purchase date |price date|price
----+-------+--------------+----------+-----
John|apple |2014-03-02 |2014-03-01|10
----+-------+--------------+----------+-----
John|apple |2014-06-03 |2014-05-02|20
----+-------+--------------+----------+-----
John|egg |2014-08-13 |2014-08-13|2
----+-------+--------------+----------+-----
John|egg |2016-08-13 |2015-10-12|4
Or "what is the price for product at this day". Where price is calculated based on date from products table.
On real DB I tried to use something similar to:
SELECT name, product, pu.date, pp.date, pp.price
FROM purchases AS pu
LEFT JOIN product_prices AS pp
ON pu.date = (
SELECT date
FROM product_prices
ORDER BY date DESC LIMIT 1);
But I keep getting either only the left part of the table (with (null) instead of the price dates and prices) or many rows with all the combinations of prices and dates.
I would suggest changing the product_prices table to use a daterange column instead (or at least a start_date and an end_date).
You can use an exclusion constraint to make sure you never have overlapping ranges for one product and an insert trigger that "closes" the "current" prices and creates a new unbounded range for the newly inserted price.
A daterange can efficiently be indexed and with that in place the query gets as easy as:
SELECT name, product, pu.date, pp.valid_during, pp.price
FROM purchases AS pu
LEFT JOIN product_prices AS pp ON pu.date <@ pp.valid_during
(assuming the range column is named valid_during)
The exclusion constraint would only work however if the product was an integer (not a varchar) - but I guess your real product_purchases table uses a foreign key to some product table anyway (which is an integer).
The new table definitions could look something like this:
create table purchase_prices
(
product_id integer not null references products,
price numeric(16,4) not null,
valid_during daterange not null
);
And the constraint that prevents overlapping ranges:
alter table purchase_prices
add constraint check_price_range
exclude using gist (product_id with =, valid_during with &&);
The constraint needs the btree_gist extension.
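The "closing" insert trigger mentioned above could look roughly like this; a sketch only, with the function and trigger names mine, assuming new prices always start after existing ones:
create extension if not exists btree_gist;

create or replace function close_current_price()
returns trigger
language plpgsql
as $$
begin
  -- close the product's currently open-ended range at the new range's start
  update purchase_prices
     set valid_during = daterange(lower(valid_during), lower(new.valid_during), '[)')
   where product_id = new.product_id
     and upper_inf(valid_during);
  return new;
end;
$$;

create trigger close_current_price_trg
  before insert on purchase_prices
  for each row
  execute function close_current_price();  -- PostgreSQL 11+; use EXECUTE PROCEDURE on older versions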
As always, improving query speed comes with a price, and in this case it's the higher maintenance cost of the GiST index. You would need to run some tests to see whether the easier (and most probably much faster) query outweighs the slower insert performance on purchase_prices.
Look at your scalar sub-query very closely. It is not correlated back to the outer query. In other words, it will return the same result every time: the latest date in the product_prices table. Period. Think about the query out of context:
SELECT date
FROM product_prices
ORDER BY date DESC LIMIT 1
There are two problems with it:
It will return 2015-10-12 for every row in the join, and ultimately nothing was purchased on that date, hence null.
Your approximation of closest is that the dates are equal. Unless you have a product_prices row for every product for every single date, you'll always have misses. "Closest" implies distance and ranking.
WITH close_prices_by_purchase AS (
    SELECT
        p."user",
        p.product,
        p.date AS purchase_date,
        pp.date AS price_date,
        pp.price,
        row_number() over (partition by p."user", p.product, p.date
                           order by pp.date desc) as distance -- rank price dates by how close they fall before the purchase date
    FROM purchases AS p
    INNER JOIN product_prices AS pp ON pp.product = p.product
    WHERE pp.date <= p.date
)
SELECT "user" AS name, product, purchase_date, price_date, price
FROM close_prices_by_purchase
WHERE distance = 1; -- shortest distance
You can try something like this, although I am sure there's a better way:
with diffs as (
select
a.*,
b."date" as bdate,
b.price,
b."date" - a."date" as diffdays,
row_number() over (
partition by "user", a."product", a."date"
order by "user", a."product", a."date", b."date" - a."date" desc
) as sr
from purchases a
inner join product_prices b on a.product = b.product
where b."date" - a."date" < 1
)
select
"user" as "name",
product,
"date" as "purchase date",
bdate as "price date",
price
from diffs
where sr = 1
Example: https://www.db-fiddle.com/f/dwQ9EXmp1SdpNpxyV1wc6M/0
Explanation
I joined both tables, computed the difference between the purchase date and the price date, and ranked the rows by the price date closest to (but not after) the purchase date. Rank 1 goes to the closest date; the rows with rank 1 are then extracted.
This is a great place to use date ranges! We know the start date of the price range and we can use a window function to get the next date. At that point, it's really easy to figure out the price on any day.
with price_ranges as
(select product,
price,
date as price_date,
daterange(date, lead(date, 1)
OVER (partition by product order by date), '[)'
) as valid_price_range from product_prices
)
select "user" as name,
purchases.product,
purchases.date,
price_date,
price
from purchases
join price_ranges on purchases.product = price_ranges.product
and purchases.date <@ price_ranges.valid_price_range
order by purchases.date;
Full disclosure: I've seen one variation of this question for MySQL, and the PostgreSQL answer didn't satisfy me.
I have 2 tables: reviews and businesses. In the reviews table, the only 3 relevant columns for the purpose of this question are business_id, date (yyyy-mm-dd), and stars (1-5), and the primary key is review_id. In the businesses table, the relevant columns are business_id, year, and month. The year and month columns are there because there is another column in the businesses table called review_count, which represents the number of reviews a business received in each month of each year. Because of this, the composite primary key of this table is (business_id, year, month).
Essentially, I am trying to create a column in the business table with the average rating (represented by stars) a business received on each month of each year it was open.
The following query gives me the exact result I want:
SELECT round(CAST(AVG(stars) AS NUMERIC), 2)
FROM reviews_for_trending_businesses
WHERE business_id IN (SELECT DISTINCT(business_id)
FROM trending_businesses_v2)
GROUP BY business_id, EXTRACT("year" FROM reviews_for_trending_businesses.date), EXTRACT('month' FROM reviews_for_trending_businesses.date);
This code returns the column and all the correct values that I want to insert into my business table.
However, when I try to actually update the table, I get an error saying more than one row was returned by the subquery used as an expression. This is the code I'm trying to update with:
UPDATE trending_businesses_v2
SET avg_monthly_rating = (SELECT round(CAST(AVG(stars) AS NUMERIC), 2)
FROM reviews_for_trending_businesses
WHERE business_id IN (SELECT DISTINCT(business_id)
FROM trending_businesses_v2)
GROUP BY business_id, EXTRACT("year" FROM reviews_for_trending_businesses.date), EXTRACT('month' FROM reviews_for_trending_businesses.date));
I've tried a number of other solutions as well, including using joins, but keep getting a similar error.
UPDATE: Still No Answer but getting Closer:
Still can't quite figure out where I'm going wrong here. I also don't understand why I have to groupby 'rtb.date' here if I'm only extracting values from it (returned error if I didn't).
UPDATE trending_businesses_v2 tb
SET avg_monthly_rating = t.val
FROM (SELECT business_id, EXTRACT("year" FROM rtb.date) AS year, EXTRACT('month' FROM rtb.date) AS month, round(CAST(AVG(stars) AS NUMERIC), 2) as val
FROM reviews_for_trending_businesses rtb
WHERE business_id IN (SELECT DISTINCT(business_id)
FROM trending_businesses_v2
)
GROUP BY business_id, year, month, rtb.date
) t
WHERE t.business_id = tb.business_id AND
t.year = tb.year AND t.month = tb.month;
You need to match the rows, presumably using a business id and date. Something like this:
UPDATE trending_businesses_v2 tb
SET avg_monthly_rating = t.val
FROM (SELECT business_id, date_trunc('month', rtb.date) as yyyymm, round(CAST(AVG(stars) AS NUMERIC), 2) as val
FROM reviews_for_trending_businesses rtb
WHERE business_id IN (SELECT DISTINCT(business_id)
FROM trending_businesses_v2
)
GROUP BY business_id, date_trunc('month', rtb.date)
) t
WHERE t.business_id = tb.business_id AND
t.yyyymm = tb.?;
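Since the question states that trending_businesses_v2 keeps year and month as separate columns, one way to complete that last condition (my sketch, not part of the original answer) would be:
WHERE t.business_id = tb.business_id AND
      EXTRACT(year FROM t.yyyymm) = tb.year AND
      EXTRACT(month FROM t.yyyymm) = tb.month;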
I have a pretty simple chart with a likely common issue. I've searched for several hours on the interweb but have only gotten so far in finding a similar situation.
The basics of what I'm pulling contain a created_by, person_id, and risk score.
the risk score can be:
1 VERY LOW
2 LOW
3 MODERATE STABLE
4 MODERATE AT RISK
5 HIGH
6 VERY HIGH
I want to get a headcount of persons at each risk score and display a risk count even if there is a count of 0 for that risk score, but SSRS 2005 likes to suppress zero counts.
I've tried this in the point labels
=IIF(IsNothing(count(Fields!person_id.value)),0,count(Fields!person_id.value))
Ex: I'm missing values for "1 LOW" because the creator hasn't assigned any "1 LOW" risk scores.
Here's a screenshot of what I get (image omitted), but I'd like to have a column for a count even when it doesn't exist in the returned results.
@Nathan
Example scenario:
select professor.name, grades.score, student.person_id
from student
inner join grades on student.person_id = grades.person_id
inner join professor on student.professor_id = professor.professor_id
where
student.professor_id = #professor
Not all students are necessarily in the grades table.
I have =Count(Fields!person_id.Value) for my data points, and the series is grouped on =Fields!score.Value.
If there were a bunch of A, B, and D grades but no C's or F's, how would I show labels for potentially non-existent counts?
In your example, the problem is that no results are returned for grades that are not linked to any students. To solve this, ideally there would be a table in your source system listing all the possible values of "score" (e.g. A - F), and you would join this into your query such that at least one row is returned for each possible value.
If such a table doesn't exist and the possible score values are known and static, then you could manually create a list of them in your query. In the example below I create a subquery that returns a combination of all professors and all possible scores (A - F) and then LEFT join this to the grades and student tables (the LEFT join means that the professor/score rows are returned even if no students have those scores in the grades table).
SELECT
professor.name
, professorgrades.score
, student.person_id
FROM
(
SELECT professor_id, score
FROM professor
CROSS JOIN
(
SELECT 'A' AS score
UNION
SELECT 'B'
UNION
SELECT 'C'
UNION
SELECT 'D'
UNION
SELECT 'E'
UNION
SELECT 'F'
) availablegrades
) professorgrades
INNER JOIN professor ON professorgrades.professor_id = professor.professor_id
LEFT JOIN grades ON professorgrades.score = grades.score
LEFT JOIN student ON grades.person_id = student.person_id AND
professorgrades.professor_id = student.professor_id
WHERE professorgrades.professor_id = 1
See a live example of how this works here: SQLFIDDLE
-- Union a zero-count row for every risk score onto the per-person rows,
-- so scores nobody has still appear with RiskCount = 0
SELECT RS.RiskScoreId, RS.Description, SUM(DT.RiskCount) AS RiskCount
FROM (
SELECT RiskScoreId, 1 AS RiskCount
FROM People
UNION ALL
SELECT RiskScoreId, 0 AS RiskCount
FROM RiskScores
) DT
INNER JOIN RiskScores RS ON RS.RiskScoreId = DT.RiskScoreId
GROUP BY RS.RiskScoreId, RS.Description
ORDER BY RS.RiskScoreId
Each quarter's sales data is contained in a row in the data source.
Account 1's 4 quarters of sales data would be in 4 separate records, each containing the account name, quarter number, and count of items purchased.
The report should show, in each detail row: account name, q1 count, q2 count, q3 count, q4 count, total year count.
I'm new to Crystal, but it seems like this should be easy; how would I do this?
I'd probably create the result list using some slightly complex SQL and then just display it on the Crystal report... but if you're wanting to accomplish this entirely inside Crystal, take a look at http://aspalliance.com/1041_Creating_a_Crosstab_Report_in_Visual_Studio_2005_Using_Crystal_Reports.all.
Here's a stab at the SQL that would be required...
select
    t.accountName,
    (select sum(itemCount) from myTable where quarterName = 'q1' and accountName = t.accountName) as q1Count,
    (select sum(itemCount) from myTable where quarterName = 'q2' and accountName = t.accountName) as q2Count,
    (select sum(itemCount) from myTable where quarterName = 'q3' and accountName = t.accountName) as q3Count,
    (select sum(itemCount) from myTable where quarterName = 'q4' and accountName = t.accountName) as q4Count,
    (select sum(itemCount) from myTable where accountName = t.accountName) as yearCount
from myTable t
group by t.accountName;
If your data source has the sales date in it (and I assume it would), you can create a formula called @SalesQuarter:
if month({TableName.SalesDate}) in [1,2,3] then '1' else
if month({TableName.SalesDate}) in [4,5,6] then '2' else
if month({TableName.SalesDate}) in [7,8,9] then '3'
else '4'
You can then add a cross-tab to your report, and use the new @SalesQuarter field as the column header of your cross-tab.
This assumes your sales are all within the same year.
Add a group on {account}.
In the group footer add a running total for each quarter.
For each quarter, create a running total with the following settings:
Running Total Name: a unique name for each formula, for example Q1, Q2, Q3, Q4
Field to summarize: {items purchased}
Type of summary: sum
Evaluate: Use a formula: {quarter number} = 1 (or 2, 3, 4, depending on which quarter you are summing)
Reset: On Change of Group {account}