Hello, I am trying to create a query that shows the results of a civics test for seniors, but I am having trouble getting the results to display the way I want.
Here is my query:
select sch.name,
u.SCS_GI_CIVICS_TEST
from students s
left join u_students u
on s.dcid = u.studentsdcid
left join schools sch
on s.schoolid = sch.school_number
WHERE s.enroll_status = '0 '
and s.grade_level ='12'
and sch.ADDRESS is not null
and (u.scs_gi_civics_test like '%SpEd-Exempted%'
or u.scs_gi_civics_test IS NULL
or u.scs_gi_civics_test like '%Passed%'
or u.scs_gi_civics_test like '%Failed%'
or u.scs_gi_civics_test like '%Has Not Taken%');
The schools column is fine. In my civics test results there are NULLs, which mean the same as Has Not Taken. I want to merge those and count them. I need a count and a percentage for NULL + Has Not Taken, Passed, Failed, and SpEd-Exempted. I know I need a pivot or a CASE to accomplish this, but I am quite lost on how to do it with the NULLs involved.
I need the following columns:
School, SpEd-Exempted Count, SpEd-Exempted Percent, Passed Count, Passed Percent, Failed Count, Failed Percent, Has Not Taken Count, and Has Not Taken Percent (Has Not Taken includes both Has Not Taken and NULL values)
*** The percentage of each would come from a total of all these values.
Any assistance would be greatly appreciated!
I hope the query below gives you what you need. The inner SELECT with the CASE expressions flags each student of every school as passed / failed / not taken / SpEd-Exempted; NULL results are folded into not_taken, since they mean the same as Has Not Taken.
The outer SELECT counts all students (tot_students) as well as the students in each category.
COUNT(pass)*100/COUNT(dcid) gives the percentage of students at a school who passed.
The query is grouped by school.
SELECT name,
COUNT(sped) sped_count,
(COUNT(sped)*100/COUNT(dcid)) perc_sped,
COUNT(pass) pass_count,
(COUNT(pass)*100/COUNT(dcid)) perc_pass,
COUNT(fail) fail_count,
(COUNT(fail)*100/COUNT(dcid)) perc_fail,
COUNT(not_taken) not_taken,
COUNT(not_taken)*100/COUNT(dcid) perc_not_taken,
COUNT(dcid) tot_students
FROM
(SELECT s.dcid,
sch.name,
CASE
WHEN u.scs_gi_civics_test LIKE '%SpEd-Exempted%'
THEN 1
ELSE NULL
END sped ,
CASE
WHEN u.scs_gi_civics_test LIKE '%Passed%'
THEN 1
ELSE NULL
END pass ,
CASE
WHEN u.scs_gi_civics_test LIKE '%Failed%'
THEN 1
ELSE NULL
END fail ,
CASE
WHEN u.scs_gi_civics_test LIKE '%Has Not Taken%'
OR u.scs_gi_civics_test IS NULL
THEN 1
ELSE NULL
END not_taken
FROM students s
LEFT JOIN u_students u
ON s.dcid = u.studentsdcid
LEFT JOIN schools sch
ON s.schoolid = sch.school_number
WHERE s.enroll_status = '0 '
AND s.grade_level ='12'
AND sch.ADDRESS IS NOT NULL
)
GROUP BY name
I have two tables: mutation and ann_commune. The columns code_postal and id_commune are in ann_commune; all the other ones are in mutation.
I need to compute the average of the top 3 values for each of several departments.
WITH six as (
SELECT commune as commune_six, AVG(valeur_fonciere) as moyenne_six
FROM mutation
LEFT JOIN ann_commune on mutation.id_commune = ann_commune.id_commune
WHERE code_postal LIKE '06%'
GROUP BY commune_six
ORDER BY moyenne_six DESC
LIMIT 3
),
treize as (
SELECT commune as commune_treize, AVG(valeur_fonciere) as moyenne_treize
FROM mutation
LEFT JOIN ann_commune on mutation.id_commune = ann_commune.id_commune
WHERE code_postal LIKE '13%'
GROUP BY commune_treize
ORDER BY moyenne_treize DESC
LIMIT 3
)
select AVG(moyenne_six)::NUMERIC(7,0) as Moyenne_top3_06,
AVG(moyenne_treize)::NUMERIC(7,0) as Moyenne_top3_13
from six, treize
I am sure there is a way to write nicer code, because I have to do the same thing with 10 averages. I have not found the solution, but I am sure it exists.
Thank you in advance :).
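For what it's worth, one common way to generalize this is to rank communes per department in a single pass with a window function and then average the top 3 of each department, which yields one row per department instead of one column pair per department. A minimal sketch, assuming the same mutation / ann_commune tables and that the department is the first two characters of code_postal (the ranked CTE name and the aliases are mine, not from the original post):
WITH ranked AS (
    -- average value per commune, ranked within each department prefix
    SELECT LEFT(code_postal, 2) AS departement,
           commune,
           AVG(valeur_fonciere) AS moyenne_commune,
           ROW_NUMBER() OVER (PARTITION BY LEFT(code_postal, 2)
                              ORDER BY AVG(valeur_fonciere) DESC) AS rang
    FROM mutation
    LEFT JOIN ann_commune ON mutation.id_commune = ann_commune.id_commune
    GROUP BY LEFT(code_postal, 2), commune
)
-- keep the top 3 communes of each department and average them
SELECT departement,
       AVG(moyenne_commune)::NUMERIC(7,0) AS moyenne_top3
FROM ranked
WHERE rang <= 3
GROUP BY departement
ORDER BY departement;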
I have a set of data like this
The result should look like this
My query:
SELECT max(pi.pi_serial) AS proforma_invoice_id,
max(mo.manufacturing_order_master_id) AS manufacturing_order_master_id,
max(pi.amount_in_local_currency) AS sales_value
FROM proforma_invoice pi
JOIN schema_order_map som ON pi.pi_serial = som.pi_id
LEFT JOIN manufacturing_order_master mo ON som.mo_id = mo.manufacturing_order_master_id
WHERE to_date(pi.proforma_invoice_date, 'DD/MM/YYYY') BETWEEN to_date('01/03/2021', 'DD/MM/YYYY') AND to_date('19/04/2021', 'DD/MM/YYYY')
AND pi.pi_serial in (9221,
9299)
GROUP BY mo.manufacturing_order_master_id,
pi.pi_serial
ORDER BY pi.pi_serial
Option 1: Create a "Running Total" field in Crystal Reports to sum up only one "sales_value" per "proforma_invoice_id".
Option 2: Add a helper column to your PostgreSQL query like so:
case
when row_number()
over (partition by proforma_invoice_id
order by manufacturing_order_master_id)
= 1
then sales_value
else 0
end
as sales_value
I prepared this SQLFiddle with an example for you (and would of course like to encourage you to do the same for your next db query related question on SO, too :-)
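To make Option 2 concrete, here is a hedged sketch of how that helper column could slot into the posted query; the alias sales_value_once is my own, and this is not necessarily the exact content of the SQLFiddle:
SELECT max(pi.pi_serial) AS proforma_invoice_id,
       max(mo.manufacturing_order_master_id) AS manufacturing_order_master_id,
       -- only the first row per proforma invoice carries the sales value,
       -- so a plain sum over sales_value_once counts each invoice once
       CASE
         WHEN row_number() OVER (PARTITION BY pi.pi_serial
                                 ORDER BY mo.manufacturing_order_master_id) = 1
         THEN max(pi.amount_in_local_currency)
         ELSE 0
       END AS sales_value_once
FROM proforma_invoice pi
JOIN schema_order_map som ON pi.pi_serial = som.pi_id
LEFT JOIN manufacturing_order_master mo ON som.mo_id = mo.manufacturing_order_master_id
WHERE to_date(pi.proforma_invoice_date, 'DD/MM/YYYY')
      BETWEEN to_date('01/03/2021', 'DD/MM/YYYY') AND to_date('19/04/2021', 'DD/MM/YYYY')
  AND pi.pi_serial IN (9221, 9299)
GROUP BY mo.manufacturing_order_master_id, pi.pi_serial
ORDER BY pi.pi_serial;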
select driverid, count(*)
from f1db.results
where position is null
group by driverid order by driverid
My thought: first find all the records where position is null, then apply the aggregate function.
select driverid, count(*) filter (where position is null) as outs
from f1db.results
group by driverid order by driverid
This is the first time I have come across the FILTER clause, and I am not sure what it means.
The two code blocks return different results.
I have already googled, but there don't seem to be many tutorials about FILTER. Kindly share some links.
A more helpful term to search for is aggregate filters, since FILTER (WHERE) is only used with aggregates.
A filter clause is an additional filter that is applied only to that aggregate and nowhere else. That means it doesn't affect the rest of the columns!
You can use it to aggregate over a subset of the data, for example to get the percentage of cats in a pet shop:
SELECT shop_id,
CAST(COUNT(*) FILTER (WHERE species = 'cat')
AS DOUBLE PRECISION) / COUNT(*) as "percentage"
FROM animals
GROUP BY shop_id
For more information, see the docs
FILTER ehm... filters the records which should be aggregated - in your case, your aggregation counts all positions that are NULL.
E.g. you have 10 records:
SELECT COUNT(*)
would return 10.
If 2 of the records have a NULL position, then
SELECT COUNT(*) FILTER (WHERE position IS NULL)
would return 2.
What you want to achieve is to count all non-NULL records:
SELECT COUNT(*) FILTER (WHERE position IS NOT NULL)
In the example, it returns 8.
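As a side note on why the two original queries return different results: the WHERE version removes rows before grouping, so drivers without any NULL position disappear from the output entirely, whereas the FILTER version keeps every driver and simply reports 0 for them. A hedged sketch, assuming the posted f1db.results table:
SELECT driverid,
       COUNT(*)                                 AS total_results, -- every result row
       COUNT(*) FILTER (WHERE position IS NULL) AS outs           -- only the NULL positions
FROM f1db.results
GROUP BY driverid
ORDER BY driverid;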
All,
I am an iOS developer. Currently we have about 2.5 lakh (250,000) records stored in the database, and we have implemented search functionality on top of it. Below is the query we are using.
select CustomerMaster.CustomerName ,CustomerMaster.CustomerNumber,
CallActivityList.CallActivityID,CallActivityList.CustomerID,CallActivityList.UserID,
CallActivityList.ActivityType,CallActivityList.Objective,CallActivityList.Result,
CallActivityList.Comments,CallActivityList.CreatedDate,CallActivityList.UpdateDate,
CallActivityList.CallDate,CallActivityList.OrderID,CallActivityList.SalesPerson,
CallActivityList.GratisProduct,CallActivityList.CallActivityDeviceID,
CallActivityList.IsExported,CallActivityList.isDeleted,CallActivityList.TerritoryID,
CallActivityList.TerritoryName,CallActivityList.Hours,UserMaster.UserName,
(FirstName ||' '||LastName) as UserNameFull,UserMaster.TerritoryID as UserTerritory
from
CallActivityList
inner join CustomerMaster
ON CustomerMaster.DeviceCustomerID = CallActivityList.CustomerID
inner Join UserMaster
On UserMaster.UserID = CallActivityList.UserID
where
(CustomerMaster.CustomerName like '%T%' or
CustomerMaster.CustomerNumber like '%T%' or
CallActivityList.ActivityType like '%T%' or
CallActivityList.TerritoryName like '%T%' or
CallActivityList.SalesPerson like '%T%' )
and CallActivityList.IsExported!='2' and CallActivityList.isDeleted != '1'
order by
CustomerMaster.CustomerName
limit 50 offset 0
Without the ORDER BY, the query returns results in 0.5 seconds. But when I attach the ORDER BY, the time increases to 2 seconds.
I have tried indexing, but it is not making any noticeable change. Can anyone please help? If it can't be done quickly through the query, how else can we make it fast?
Thanks in advance.
This is due to the LIMIT. Without ORDER BY, only 50 records have to be processed, and any 50 can be returned. With ORDER BY, all the records have to be processed in order to determine which ones are the first 50 (in order).
The problem is that the ORDER BY is performed on a joined table. Otherwise you could apply the limit to the main table (I assume it is CallActivityList) first and then join.
SELECT ...
FROM
(SELECT ... FROM CallActivityList ORDER BY ... LIMIT 50 OFFSET 0) AS CAL
INNER JOIN CustomerMaster ON ...
INNER JOIN UserMaster ON ...
ORDER BY ...
This would reduce the costs for joining the tables. If this is not possible, try at least to join CallActivityList with CustomerMaster. Apply the limit to those and finally join with UserMaster.
SELECT ...
FROM
(SELECT ...
FROM
CallActivityList
INNER JOIN CustomerMaster ON ...
ORDER BY CustomerMaster.CustomerName
LIMIT 50 OFFSET 0) AS ActCust
INNER JOIN UserMaster ON ...
ORDER BY ...
Also, in order to make the ordering unambiguous, I would include more columns in the ORDER BY, such as the call date and call ID. Otherwise the paging could be inconsistent.
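For illustration, a hedged sketch of that second variant using the column names from the posted query (the ActCust alias and the CallActivityID tie-breaker are assumptions, FirstName/LastName are taken to live on UserMaster, and the LIKE filters from the original WHERE clause would go into the inner query as well):
SELECT ActCust.*,
       UserMaster.UserName,
       (UserMaster.FirstName || ' ' || UserMaster.LastName) AS UserNameFull,
       UserMaster.TerritoryID AS UserTerritory
FROM
  (SELECT CustomerMaster.CustomerName,
          CustomerMaster.CustomerNumber,
          CallActivityList.*
   FROM CallActivityList
   INNER JOIN CustomerMaster
     ON CustomerMaster.DeviceCustomerID = CallActivityList.CustomerID
   WHERE CallActivityList.IsExported != '2'
     AND CallActivityList.isDeleted != '1'
   ORDER BY CustomerMaster.CustomerName, CallActivityList.CallActivityID
   LIMIT 50 OFFSET 0) AS ActCust
INNER JOIN UserMaster
  ON UserMaster.UserID = ActCust.UserID
ORDER BY ActCust.CustomerName, ActCust.CallActivityID;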
Is there a way to set an upper limit on a calculation (calculated field) that is already in a CASE expression? I'm calculating percentages and obviously don't want the highest value to exceed 100.
If it weren't in a CASE expression already, I'd create something like 'case when calculation > 100.0 then 100 else calculation end as needed_percent', but I can't do that now.
Thanks for any suggestions.
I think using the LEAST function is the best option.
select least((case when ...), 100) from ...
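For example, a hedged sketch with made-up table and column names (progress, completed, and total are not from the post), capping a CASE-based percentage at 100:
SELECT id,
       LEAST(CASE
               WHEN total > 0 THEN completed * 100.0 / total
               ELSE 0
             END, 100) AS needed_percent
FROM progress;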
There is a way to set an upper limit on a calculated field by creating an outer query. Check out my example below. The inner query is the query you currently have; then create an outer query around it and use a WHERE clause to limit it to <= 1.
SELECT
z.id,
z.name,
z.percent
FROM(
SELECT
id,
name,
CASE WHEN id = 2 THEN sales / SUM(sales) OVER () ELSE NULL END AS percent -- window SUM, so no GROUP BY is needed
FROM
users_table
) AS z
WHERE z.percent <= 1