I'm new to SSRS. I want to group a transactions table by customerid and count how many transactions per customerid. I was able to do that.
But then I want to sort by that count, and/or filter by that count. How do you do that?
Thanks!
To set up sorting and filtering on row groups, right click on the row group.
You can access the Group Sorting and Filtering properties here. They should both allow you to set up rules based on the name of your count column.
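For example (a sketch; the dataset field name TransactionId is assumed, so adjust to your schema), the group sort can use an aggregate expression rather than a plain field:
=Count(Fields!TransactionId.Value)
In Group Properties > Sorting, order by that expression Z to A to show the busiest customers first; in Group Properties > Filters, use the same expression with the operator > and a value such as =2 (entering the value as an expression helps keep the comparison numeric).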
Option 1
If you have no need to show the individual transactions in the report, then the aggregation should be performed at the database level in the query, not by SSRS. You'll get the benefits of:
- Faster rendering.
- Less data sent over the network.
- Less data for the SSRS engine to process, so any ordering can be performed more quickly.
- Your data set can be 'pre-ordered' by putting the most common/expected values in the ORDER BY clause of the underlying query, which also gives rendering a speed boost.
- Any filters can be applied directly against the aggregated data returned by the query, without having to build complex expressions in SSRS. This also gives a performance boost when rendering.
- You could have a "filter" parameter used in the HAVING clause of an aggregate query. Again, this is a performance boost due to less data across the network and less to process, and it gives your reports a level of interactivity, as opposed to trying to pre-define user tastes with filter conditions set on expressions or a 'best guess'.
Example
-- Will filter out any customers who have 2 or fewer transactions
DECLARE @Filter AS int = 2;

SELECT
    CustomerId
    ,COUNT(TransactionId) AS TransactionCount
FROM
    Transactions
GROUP BY
    CustomerId
HAVING
    COUNT(TransactionId) > @Filter
Option 2
If you still need to show the transactions, then add an additional column to your query that performs the COUNT() using the OVER clause with PARTITION BY CustomerId, like so:
COUNT(TransactionId) OVER (PARTITION BY CustomerId) AS CustomerTransactionCount
Assuming a very simple table structure you'll end up with a query structure like so:
SELECT
    CustomerId
    ,TransactionId
    ,TransactionAttribute_1
    ,TransactionAttribute_2
    ,TransactionAttribute_3
    ...
    ,TransactionAttribute_n
    ,COUNT(TransactionId) OVER (PARTITION BY CustomerId) AS CustomerTransactionCount
FROM
    Transactions
You'll be able to use CustomerTransactionCount as a filter and sorting column in any row/column groups within SSRS.
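As a sketch of the SSRS side under this approach (field name taken from the query above): in Group Properties > Sorting, order by
=Fields!CustomerTransactionCount.Value
and in Group Properties > Filters, compare the same expression with > and a value such as =2. Because the count arrives pre-computed on every row, no aggregate expression is needed here.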
Drawback of this approach
Window functions, i.e. those using OVER (PARTITION BY ...), cannot be referenced in WHERE or HAVING clauses, because they are evaluated after those clauses are processed. This means any filtering would have to be carried out by SSRS.
Workaround options
- Take the query above and wrap a CTE around it; this allows us to filter based on the aggregate results.
- Put the aggregate in a derived table.
CTE Example
--Filter variable
DECLARE @Filter AS int = 2;

WITH DataSet_CTE AS
(
    -- Build the data set with transaction information and the aggregate column
    SELECT
        CustomerId
        ,TransactionId
        ,TransactionAttribute_1
        ,TransactionAttribute_2
        ,TransactionAttribute_3
        ...
        ,TransactionAttribute_n
        ,COUNT(TransactionId) OVER (PARTITION BY CustomerId) AS CustomerTransactionCount
    FROM
        Transactions
)
-- Filter and return data
SELECT *
FROM DataSet_CTE
WHERE CustomerTransactionCount > @Filter
Derived Table Example
--Filter variable
DECLARE @Filter AS int = 2;

SELECT
    *
FROM
(
    -- Build the data set with transaction information and the aggregate column
    SELECT
        CustomerId
        ,TransactionId
        ,TransactionAttribute_1
        ,TransactionAttribute_2
        ,TransactionAttribute_3
        ...
        ,TransactionAttribute_n
        ,COUNT(TransactionId) OVER (PARTITION BY CustomerId) AS CustomerTransactionCount
    FROM
        Transactions
) AS DataSet
WHERE
    DataSet.CustomerTransactionCount > @Filter
When I use SELECT * FROM table, PostgreSQL returns the data ordered by id. But when I use SELECT DISTINCT * FROM table, PostgreSQL returns the same data set (there are no duplicates), yet the order changes, which is beyond my understanding.
How does PostgreSQL sort the data when using DISTINCT * without any ORDER BY clause?
If you put DISTINCT into a query, PostgreSQL sorts the result set by all result columns in order to eliminate duplicates. The sort order is “implementation defined” unless you add an explicit ORDER BY clause.
Two remarks:
- Without the DISTINCT, the table is returned in id order because you inserted the rows in that order and performed no updates or deletes, and because there are no concurrent sequential scans on the table. You can never rely on an order in the result set unless you use ORDER BY.
- DISTINCT can be very expensive on large result sets. Use it only if you are certain you need it.
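A minimal illustration (your_table stands in for the real table name): if a stable order matters, state it explicitly, even with DISTINCT:
SELECT DISTINCT * FROM your_table ORDER BY id;
The planner may still use a sort or a hash to eliminate the duplicates, but with the explicit ORDER BY the final output order is guaranteed either way.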
I have created a materialized view for the purposes of feeding into a dashboard.
My goal is to make this table selectable in the fastest way possible and I'm not sure how to approach it. I was hoping that if I describe the table and how it will be used, someone could offer some direction.
The context is a website with funnel steps. Each row is an instance of a user triggering a funnel step, such as add to cart, checkout, payment details, and finally transaction.
Since the table is for the purposes of analytics, it will be refreshed automatically with cron once a day only, in the morning, so I'm not worried about real time update speed, only select speed with various where clauses.
Suppose I have the fields described below:
(N = ~13M, expected to be ~20M by January, and growing by several million per month)
The table is unique on the combination of session id, user id and funnel step.
- Session Id (Id, so some duplication but generally very very granular - Varchar)
- User Id (Id, so some duplication but generally very very granular - Varchar)
- Date (Date)
- Funnel Step (10 distinct values - varchar)
- Device Category (3 distinct values - Varchar)
- Country (~ 100 distinct values - varchar)
- City (~1000+ distinct values - varchar)
- Source (several thousand distinct values, nevertheless, stakeholder would like a filter - varchar)
Would I index each field individually? Or should I put all the fields in one single index? Per the documentation, I think I can include up to 32 columns in one index. But would that be advisable here, given my primary goal of select query speed over everything else?
The table will feed into a dashboard that reads it and dynamically translates filter inputs into WHERE clauses. Each time the user adjusts a filter, the table is read, grouped, and aggregated based on the filter / WHERE clause inputs.
Example query:
select
event_action,
count(distinct user_id) as users
from website_data.ecom_funnel
where date >= $input_start_date
and date <= $input_end_date
and device_category in ($mobile, $desktop, $tablet)
and country in ($list of all countries minus any not selected)
and source in ($list of all sources minus any not selected)
group by 1 order by users desc
This will result in a funnel shaped table of data.
I cannot aggregate beforehand because the primary metric of concern is users, not sessions, and these must be de-duplicated from the underlying table. Classic example: suppose a person visits a website once a day for a week. The count of unique visitors for that week is 1, whereas if I summed daily unique visitors I would get 7. Similarly with my table, some users take multiple sessions to complete the funnel. This is why I cannot pre-aggregate the table: I need to apply filters to the underlying data and then COUNT(DISTINCT user_id).
Here's explain on a subset of fields if it is useful:
QUERY PLAN
Sort (cost=862194.66..862194.68 rows=9 width=24)
Sort Key: (count(DISTINCT client_id)) DESC
-> GroupAggregate (cost=847955.01..862194.51 rows=9 width=24)
Group Key: event_action
-> Sort (cost=847955.01..852701.48 rows=1898589 width=37)
Sort Key: event_action
-> Seq Scan on ecom_funnel (cost=0.00..589150.14 rows=1898589 width=37)
Filter: ((device_category = ANY ('{mobile,desktop}'::text[])) AND (source = 'google'::text))
My overarching, specific question is: given my use case, should I index each field individually or should I create one single index? Does it matter?
On top of that, any tips for optimising this materialized view to run a select query faster would be appreciated.
Looking at your filter conditions, you should check the cardinality of the device_category field by running
select device_category, count(*) from website_data.ecom_funnel group by device_category
and looking at the values to determine whether an index should lead with this column. A possible index here (without knowing the cardinality) would be a multicolumn one:
(device_category, date)
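As a sketch, the corresponding DDL would be (the index name is illustrative):
CREATE INDEX ix_ecom_funnel_device_date
    ON website_data.ecom_funnel (device_category, date);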
That said, there's no benefit to creating an index on each separate column, as your query wouldn't use them all; so yes, it does matter. You would also slow down the CRUD operations that aren't reads.
Creating one index on all columns probably won't speed things up much either, but that depends on the data lying under the hood (in the table) and on how selective your filters are (the cardinality of the values in the filtered columns). It would most likely create a large overhead of walking the index tree and then fetching row ids to return the data you need.
Summing up, I would try to narrow the index down to the columns that matter most in your filtering, meaning the ones that cut out the most data. If your query is meant to return the majority of rows from the table, then indexing won't help and you would, unfortunately, need to pre-aggregate.
Hope it helps.
Edit: I've just read that you already posted the counts of distinct values in your table. I'm not sure which column Funnel Step corresponds to, but assuming it's the column named event_action, it might be beneficial to instead create an index that helps with the grouping as well:
(date, event_action)
It seems like you have omitted the GROUP BY clause entirely; it should be included, grouping by event_action, since that's what your SELECT is doing.
If you narrow the date down to several days/months in every select query, it can be a huge benefit to put the date column first in the index.
Remember that the position of a column within an index matters.
If you are looking at values spanning several months, say, you could pre-aggregate and store the precalculated values for each month in another table, and then UNION ALL that data to a query that selects only the current (still being updated) period, as sketched below.
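A hedged sketch of that idea, reusing the table and column names from the example query (the rollup table name is made up, the non-date filters are abbreviated to device_category, and the date filter is assumed to align to month boundaries). Note that the rollup keeps user_id, so the final COUNT(DISTINCT user_id) still de-duplicates users who appear in several months:

-- Rollup of closed months: one row per month/step/dimensions/user
CREATE TABLE website_data.ecom_funnel_monthly AS
SELECT date_trunc('month', date) AS month,
       event_action, device_category, country, source, user_id
FROM website_data.ecom_funnel
WHERE date < date_trunc('month', current_date)  -- closed months only
GROUP BY 1, 2, 3, 4, 5, 6;

-- Closed months come from the rollup, the current month from the live table
SELECT event_action, count(distinct user_id) AS users
FROM (
    SELECT event_action, user_id
    FROM website_data.ecom_funnel_monthly
    WHERE month >= date_trunc('month', $input_start_date)
      AND device_category in ($mobile, $desktop, $tablet)
    UNION ALL
    SELECT event_action, user_id
    FROM website_data.ecom_funnel
    WHERE date >= date_trunc('month', current_date)
      AND device_category in ($mobile, $desktop, $tablet)
) t
GROUP BY 1
ORDER BY users DESC;

The rollup shrinks the scanned row count only as much as users repeat sessions within a month, so it is worth measuring before committing to the extra refresh step.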
I'm trying to implement various sorts as described in this article.
I have a typical Sales Measure Group partitioned by fiscal period. If I try to add an order by clause to the query it fails when processing because SSAS wraps the query into a subquery. Is there a way to prevent this from happening? How do I ensure the sort order in a case like this?
Here is the code that is generated for a partition:
SELECT *
FROM
(
SELECT *
FROM [Sales]
WHERE SaleDate between '1/1/2015' and '1/28/2015'
order by SaleDate
)
AS [Sales]
I replaced the field names with * for clarity.
You could try
SELECT TOP 100 PERCENT * FROM Sales ORDER BY SaleDate
but that is not guaranteed to work. The best way to get the order you want is to ensure the clustered index is on the column you want to order by.
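A sketch of that approach, using the table and column from the generated partition query (the index name is made up, and it assumes Sales has no clustered index yet):
CREATE CLUSTERED INDEX IX_Sales_SaleDate ON dbo.Sales (SaleDate);
SQL Server still gives no formal ordering guarantee without an ORDER BY on the outermost query, but with the clustered index on SaleDate an unordered scan will normally return rows in that order, which is typically what the SSAS processing query sees.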
Can anyone please help me combine these two queries into a single query?
I am using IBM DB2.
(SELECT
    TABLE1.COLS, TABLE2.COLS, TABLE3.COLS
FROM
    TABLE1, TABLE2, TABLE3, TABLE_PROB
WHERE
    TABLE_PROB.COL = TABLE1.COL AND OTHER_CLAUSE)
UNION
(SELECT
    TABLE1.COLS, TABLE2.COLS, TABLE3.COLS
FROM
    TABLE1, TABLE2, TABLE3, TABLE_PROB1
WHERE
    TABLE_PROB1.COL = TABLE1.COL AND OTHER_CLAUSE)
The two queries before and after the UNION are identical except that "TABLE_PROB" is replaced with "TABLE_PROB1". No columns are selected from either of these tables; they are only used for filtering in the WHERE clause.
Can anyone tell me how to combine both into a single query?
This query can be considered for the following scenario.
There are a few employee detail tables which contain the details of all employees.
"TABLE_PROB" contains the list of contract employees and "TABLE_PROB1" contains the list of permanent employees. I need to get the details of both contract and permanent employees based on a few criteria.
Since the query has a big WHERE clause and SELECT list, firing two queries with a UNION increases the cost of the query, so I need to merge them into a single query.
Thanks for the help in advance.
You cannot avoid the UNION because you still have to access both TABLE_PROB and TABLE_PROB1. Depending on your DB2 version, platform, and the system configuration this might perform a bit better:
SELECT
TABLE1.COLS, TABLE2.COLS, TABLE3.COLS
FROM
TABLE1,TABLE2,TABLE3
WHERE
OTHER_CLAUSE
AND
EXISTS (
SELECT 1
FROM TABLE_PROB
WHERE COL=TABLE1.COL
UNION
SELECT 1
FROM TABLE_PROB1
WHERE COL=TABLE1.COL
)
Depending on the contents of TABLE_PROB.COL and TABLE_PROB1.COL, UNION ALL instead of UNION might also prove beneficial.
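Another variant worth testing (same schematic names as above) replaces the UNION inside the EXISTS with two EXISTS predicates combined with OR; depending on the optimizer this can avoid materializing the union:
SELECT
    TABLE1.COLS, TABLE2.COLS, TABLE3.COLS
FROM
    TABLE1, TABLE2, TABLE3
WHERE
    OTHER_CLAUSE
AND
    (EXISTS (SELECT 1 FROM TABLE_PROB WHERE COL = TABLE1.COL)
     OR EXISTS (SELECT 1 FROM TABLE_PROB1 WHERE COL = TABLE1.COL))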
SELECT DISTINCT tblJobReq.JobReqId
, tblJobReq.JobStatusId
, tblJobClass.JobClassId
, tblJobClass.Title
, tblJobReq.JobClassSubTitle
, tblJobAnnouncement.JobClassDesc
, tblJobAnnouncement.EndDate
, tblJobAnnouncement.AgencyMktgVerbage
, tblJobAnnouncement.SpecInfo
, tblJobAnnouncement.Benefits
, tblSalary.MinRateSal
, tblSalary.MaxRateSal
, tblSalary.MinRateHour
, tblSalary.MaxRateHour
, tblJobClass.StatementEval
, tblJobReq.ApprovalDate
, tblJobReq.RecruiterId
, tblJobReq.AgencyId
FROM ((tblJobReq
LEFT JOIN tblJobAnnouncement ON tblJobReq.JobReqId = tblJobAnnouncement.JobReqId)
INNER JOIN tblJobClass ON tblJobReq.JobClassId = tblJobClass.JobClassId)
LEFT JOIN tblSalary ON tblJobClass.SalaryCode = tblSalary.SalaryCode
WHERE (tblJobReq.JobClassId in (SELECT JobClassId
from tblJobClass
WHERE tblJobClass.Title like '%Family Therapist%'))
When I try to execute the query it results in the following error:
Cannot sort a row of size 8130, which is greater than the allowable maximum of 8094
I checked and didn't find any solution other than truncating (with SUBSTRING()) the tblJobAnnouncement.JobClassDesc column, which has a size of around 8000.
Is there any workaround so that I need not truncate the values? Or can this query be optimised? Any setting in SQL Server 2000?
The [non-obvious] reason why SQL needs to sort is the DISTINCT keyword.
Depending on the data and underlying table structures, you may be able to do away with this DISTINCT, and hence not trigger this error.
You readily found the alternative solution which is to truncate some of the fields in the SELECT list.
Edit: Answering "Can you please explain how DISTINCT would be the reason here?"
Generally, the fashion in which the DISTINCT requirement is satisfied varies with:
- the data context (expected number of rows, presence/absence of indexes, size of rows...)
- the version/make of the SQL implementation (the query optimizer in particular receives new or modified heuristics with each new version, sometimes resulting in alternate query plans for the same constructs in various contexts)
Yet, all the possible plans associated with a "DISTINCT query" involve *some form* of sorting of the qualifying records. In its simplest form, the plan first produces the list of qualifying rows (the records which satisfy the WHERE/JOINs/etc. parts of the query) and then sorts this list (which possibly includes some duplicates), only retaining the very first occurrence of each distinct row. In other cases, for example when only a few columns are selected and when some index covering these columns is available, no explicit sorting step is used in the query plan, but the reliance on an index implicitly implies the "sortability" of the underlying columns. In yet other cases, steps involving various forms of merging or hashing are selected by the query optimizer, and these too, eventually, imply the ability to compare two rows.
Bottom line: DISTINCT implies some sorting.
In the specific case of the question, the error reported by SQL Server, which prevents the query from completing, is that sorting is not possible on rows bigger than a certain size; and the DISTINCT keyword is the only apparent reason for the query to require any sorting (BTW, many other SQL constructs imply sorting: UNION, for example), hence the idea of removing the DISTINCT (if that is logically possible).
In fact you should remove it, for test purposes, to confirm that, without DISTINCT, the query completes OK (even if it includes some duplicates). Once this fact is confirmed, and if the query can effectively produce duplicate rows, look into ways of producing a duplicate-free query without the DISTINCT keyword; constructs involving subqueries can sometimes be used for this purpose, as in the sketch below.
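As an illustrative sketch only, abbreviated to a few columns: if the duplicates come from tblSalary carrying several rows per SalaryCode (something to verify against your data), the salary join can be replaced with correlated subqueries so each outer row appears exactly once, and the DISTINCT can go. Note also that if JobClassId is unique in tblJobClass, the IN subquery is redundant because tblJobClass is already joined, so the title filter can be applied directly:
SELECT JR.JobReqId,
       tblJobClass.JobClassId,
       tblJobClass.Title,
       (SELECT MIN(S.MinRateSal)
          FROM tblSalary AS S
         WHERE S.SalaryCode = tblJobClass.SalaryCode) AS MinRateSal  -- one such subquery per salary column
FROM tblJobReq AS JR
INNER JOIN tblJobClass
        ON JR.JobClassId = tblJobClass.JobClassId
WHERE tblJobClass.Title LIKE '%Family Therapist%'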
An unrelated hint is to use table aliases, with a short string to avoid repeating the long table names. For example (only a few tables are aliased, but you get the idea...):
SELECT DISTINCT JR.JobReqId, JR.JobStatusId,
tblJobClass.JobClassId, tblJobClass.Title,
JR.JobClassSubTitle, JA.JobClassDesc, JA.EndDate, JA.AgencyMktgVerbage,
JA.SpecInfo, JA.Benefits,
S.MinRateSal, S.MaxRateSal, S.MinRateHour, S.MaxRateHour,
tblJobClass.StatementEval,
JR.ApprovalDate, JR.RecruiterId, JR.AgencyId
FROM (
(tblJobReq AS JR
LEFT JOIN tblJobAnnouncement AS JA ON JR.JobReqId = JA.JobReqId)
INNER JOIN tblJobClass ON JR.JobClassId = tblJobClass.JobClassId)
LEFT JOIN tblSalary AS S ON tblJobClass.SalaryCode = S.SalaryCode
WHERE (JR.JobClassId in
(SELECT JobClassId from tblJobClass
WHERE tblJobClass.Title like '%Family Therapist%'))
FYI, running this SQL command on your DB can fix the problem if it is caused by space that needs to be reclaimed after dropping variable length columns:
DBCC CLEANTABLE (0, 'dbo.TableName')
See: http://msdn.microsoft.com/en-us/library/ms174418.aspx
This is a limitation of SQL Server 2000. You can:
Split it into two queries and combine elsewhere
SELECT ID, ColumnA, ColumnB FROM TableA JOIN TableB
SELECT ID, ColumnC, ColumnD FROM TableA JOIN TableB
Truncate the columns appropriately
SELECT LEFT(LongColumn,2000)...
Remove any redundant columns from the SELECT
SELECT ColumnA, ColumnB --, IDColumnNotUsedInOutput
FROM TableA
Migrate off of SQL Server 2000