Getting percentage of breaching SLA tickets - amazon-redshift

I have a table with data for Zendesk tickets, including all the SLA details.
For instance, with this table I can write an expression like the one below to see whether a particular service ticket has breached the SLA or not:
case when m.reply_time_in_minutes.[calendar] > 120 then 'Yes' else 'No' end as "fr_sla_breach"
If I want to get a percentage of breaches for each person, how can I approach this?
I am not a developer, but if you show me a way to think about it, I can develop it.
The output of the data will look like below:
person      ticket id   fr_sla_breach
Person A    1234        yes
Person B    3453        no
This is the result I am able to get. But if I want to get the output below, how can I go about it?
person      fr_sla_breach_percentage
Person A    50%
I am able to see, for a particular person, how many tickets breached the SLA and how many did not. But I am not sure how to approach turning that into a percentage.
Complete Query:
SELECT assignee_name,
       'https://hevodata.zendesk.com/agent/tickets/' || t.id AS URL,
       t.created_at,
       m.solved_at,
       m.initially_assigned_at,
       CASE
           WHEN m.reply_time_in_minutes.[calendar] > 120 THEN 'Yes'
           ELSE 'No'
       END AS "fr_sla_breach"
FROM tickets t
JOIN ticket_metrics m ON t.id = m.ticket_id
WHERE date(t.created_at) BETWEEN 'MM/DD/YYYY' AND 'MM/DD/YYYY'
  AND t.assignee_name LIKE ('XXX%')
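One way to approach the percentage (a sketch that reuses the tables, the placeholder date filter, and the 120-minute threshold from the query above; adjust names to your schema): instead of returning one row per ticket, group by assignee and divide the count of breaching tickets by the total ticket count for that assignee.

SELECT t.assignee_name,
       ROUND(100.0 * SUM(CASE WHEN m.reply_time_in_minutes.[calendar] > 120 THEN 1 ELSE 0 END)
             / COUNT(*), 2) AS fr_sla_breach_percentage
FROM tickets t
JOIN ticket_metrics m ON t.id = m.ticket_id
WHERE date(t.created_at) BETWEEN 'MM/DD/YYYY' AND 'MM/DD/YYYY'
  AND t.assignee_name LIKE ('XXX%')
GROUP BY t.assignee_name;

The 100.0 multiplier forces decimal division so integer truncation does not collapse every result to 0.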

Related

Tableau: Getting Aggregate Count Based on Boolean Attributes

I am really new to Tableau and I need help with a calculation.
My simplified data consists of three columns:
customer no, transaction date, lost_flag
Here lost_flag is a boolean that is true if a customer has not made a transaction in the last 365 days:
(max([Transaction Date]) < dateadd('year', -1, max([Report Date])))
I need to find the:
1. number of customers that are lost
2. number of customers that are not lost
3. attrition rate
For number one, I initially did
countd(if ([Lost_flag]) then [Customer No] else "" END)
But obviously it did not work.
Note: Customer_No is not unique here since this is a transactional sales data source
Thanks in advance.
First you need to make sure that your lost flag is calculated at the customer level rather than at the transaction level. To do this, use the following formula. Note that it is similar to yours, except that it is fixed at customer ID and replaces the report date with today's date:
Lost Flag = { FIXED [Customer ID]: (max([Transaction Date]) < dateadd('year', -1, max(TODAY()))) }
This will add a TRUE or FALSE flag against every transaction for a customer. It is important that this is fixed at the customer ID level rather than at the transaction level; otherwise all old transactions for a customer would be flagged as lost even if they have a recent transaction.
So, in order to see how many customers are lost, do the following:
1) drag Lost Flag onto the Rows shelf
2) drag Customer ID onto the Text mark, then right-click it and choose Measure > Count (Distinct).
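If you would rather compute the three numbers (lost, not lost, attrition rate) at the data-source level instead of in Tableau, a rough SQL sketch of the same logic is below; the table and column names (transactions, customer_no, transaction_date) are assumptions standing in for your actual source.

-- One row per customer with their most recent transaction date,
-- then count lost vs. not lost and derive the attrition rate.
WITH last_tx AS (
    SELECT customer_no, MAX(transaction_date) AS last_transaction
    FROM transactions
    GROUP BY customer_no
)
SELECT SUM(CASE WHEN last_transaction < CURRENT_DATE - INTERVAL '1 year' THEN 1 ELSE 0 END) AS lost_customers,
       SUM(CASE WHEN last_transaction >= CURRENT_DATE - INTERVAL '1 year' THEN 1 ELSE 0 END) AS not_lost_customers,
       100.0 * SUM(CASE WHEN last_transaction < CURRENT_DATE - INTERVAL '1 year' THEN 1 ELSE 0 END) / COUNT(*) AS attrition_rate_pct
FROM last_tx;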

SQL - using the Min field to achieve desired result

Wondering the best SQL to handle the situation below: the client only wants to see invoices that have been declined. I started with "only show me rows where STATUS_ID = 2", but then realized the invoice was actually paid because it was resubmitted and accepted, so that didn't work. What is the best way to handle two records like the ones below, where I don't want the SQL to return any records if the manifest + order code combination has a 1? Would you do a MIN on STATUS_ID or something of that nature?
VENDOR NAME     manifest       ORDER_CODE   STATUS_ID
VENDOR 12345    BHGSDKJF1234   RU07         2 (invoice declined)
VENDOR 12345    BHGSDKJF1234   RU07         1 (paid)
This trick can work for you in this case, but it doesn't solve the general case (what happens if the STATUS_ID for paid is 3, and the possible values are 0-5?).
In general, you can use a CASE expression that gives you 1 (true) if the invoice has a row with STATUS_ID = 1 and 0 otherwise, then take the MAX() for each invoice, as sketched below.
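A sketch of that CASE/MAX idea, assuming a hypothetical invoices table with the columns shown above; it keeps only the invoices that have a declined row and no paid row:

SELECT vendor_name, manifest, order_code
FROM invoices
GROUP BY vendor_name, manifest, order_code
HAVING MAX(CASE WHEN status_id = 1 THEN 1 ELSE 0 END) = 0   -- no 'paid' row for this invoice
   AND MAX(CASE WHEN status_id = 2 THEN 1 ELSE 0 END) = 1;  -- at least one 'declined' row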
You can also consider another design that might work for you:
Add a time/timestamp column (for your purpose, you could use SYSDATE as the insertion time of the record into the db).
Once you have a time column, you can pick the latest STATUS_ID for each invoice (i.e. the STATUS_ID in the row with the max time).

Making a dashboard that has to run a similar query too many times, and performance is bad

I have a question here. We have a customer list, a product list, and a sales table. We want to show, for each customer, the total sales of each product.
So I use a query like the following:
select ...
from ...
where customer = '' and product = ''
The query is a standard, simple one, but the table/dashboard is 20 x 10. That means I have to run the query once for each customer/product pair, i.e. 200 times, which is super slow.
How can I improve this? Thanks.
Right now the dashboard shows 20 customers and 10 products, and I go to the database 200 times. The 20 customers are picked from the customer list (the first 20, then the next 20, and so on), and the products are chosen the same way.
You can use group by ... Something like
Select customer, product, sum(sales) -- or whatever you need
From ...
Group by customer, product
The server would do the aggregation, and the query is much faster than doing 200 queries
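If you only need the 20 customers and 10 products currently shown on the dashboard, you can filter before grouping so everything still comes back in one round trip. A sketch, with hypothetical table and column names sales(customer, product, amount) standing in for yours:

SELECT customer, product, SUM(amount) AS total_sales
FROM sales
WHERE customer IN ('Customer 01', 'Customer 02')   -- list the 20 customers on the current page
  AND product IN ('Product 01', 'Product 02')      -- list the 10 products on the current page
GROUP BY customer, product;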

Faster CROSS JOIN alternative - PostgreSQL

I am trying to CROSS JOIN two tables, customers and items, so I can then create a sales-by-customer-by-item report. I have 2,000 customers and 2,000 items.
SELECT customer_name FROM customers; --Takes 100ms
SELECT item_number FROM items; --Takes 50ms
SELECT customer_name, item_number FROM customers CROSS JOIN items; --Takes 200000ms
I know this is 4 million rows, but is it possible to get this to run any faster? I want to eventually join this with a sales table like this:
SELECT customer_name, item_number, sales_total FROM customers CROSS JOIN items LEFT JOIN sales ON (customers.customer_name = sales.customer_name AND items.item_number = sales.item_number);
The sales table will obviously not have all customers or all items, so the goal here is to have a report that shows all customers and all items along with what was sold and not sold.
I'm using PostgreSQL 8.4
To answer your question: no, you can't do a cross join faster than that - if you could, then that would be how CROSS JOIN would be implemented.
But really you don't want a cross join. You probably want two separate queries, one which lists all customers, and another which lists all items and whether or not they were sold.
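A rough sketch of those two queries, assuming the sales table is keyed by customer_name and item_number as in the question:

-- 1) Sales by customer (customers with no sales show 0)
SELECT c.customer_name, COALESCE(SUM(s.sales_total), 0) AS total_sales
FROM customers c
LEFT JOIN sales s ON s.customer_name = c.customer_name
GROUP BY c.customer_name;

-- 2) Sales by item (items with no sales show 0, i.e. never sold)
SELECT i.item_number, COALESCE(SUM(s.sales_total), 0) AS total_sales
FROM items i
LEFT JOIN sales s ON s.item_number = i.item_number
GROUP BY i.item_number;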
This really needs to be multiple reports. I can think of several off the top of my head that will yield more efficient packaging of information:
Report: count of all purchases by customer/item (obvious).
Report: list of all items not purchased, by customer (see the sketch below).
Report: Summary of Report #2 (count of items) in order to prioritize which customers to focus on.
Report: list of all customers that have not bought an item, by item.
Report: Summary of Report #3 (count of customers) in order to identify both the most popular and unpopular items for further action.
Report: List of all customers who purchased an item in the past, but did not purchase it this reporting period. This report is only relevant when the sales table has a date and the customers are expected to be regular buyers (i.e. disposable widgets). Won't work as well for things like service contracts.
The point here is that one should not insist that the tool process every possible outcome at once and generate more data than anyone could possibly digest manually. One should engage the end users and consumers of the data about what their needs are and tailor the output to meet those needs. It will make both sides' lives much easier in the long run.
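A sketch of Report #2 (items a given customer has never purchased), using the customers/items/sales tables from the question; the hard-coded customer name is just a placeholder, and filtering to one customer at a time keeps the result small enough to act on:

SELECT i.item_number
FROM items i
WHERE NOT EXISTS (
    SELECT 1
    FROM sales s
    WHERE s.customer_name = 'Some Customer'   -- the customer being reviewed
      AND s.item_number = i.item_number
);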
If you wish to see all items for a given client (even if the client has no items), I would rather try
SELECT c.customer_name, i.item_number, s.sales_total
FROM customers c
LEFT JOIN sales s ON c.customer_name = s.customer_name
LEFT JOIN items i ON i.item_number = s.item_number
This should give you a list of all clients and all items joined by sales.
Perhaps you want something like this?
select c.customer_name, i.item_number, count( s.customer_name ) as total_sales
from customers c full join sales s on s.customer_name = c.customer_name
full join items i on i.item_number = s.item_number
group by c.customer_name, i.item_number

TSQL - Best way to select data where a leave date falls in range of an invoice

Background: I have a payroll system where leave is paid only if it falls within the range of the invoice being paid. So if the invoice covers the last 2 weeks, then only leave in the last 2 weeks is paid.
I want to write a SQL query to select that leave.
Assume a table called DailyLeaveLedger which has, among others, a LeaveDate and a Paid flag.
Assume a table called Invoice that has a WeekEnding field and a NumberWeeksCovered field.
Now assume a week ending date of 15/05/09, NumberWeeksCovered = 2, and a LeaveDate of 11/05/09.
This is an example of how I want it written. The actual query is quite complex, but I want the LeaveDate check to be an IN subquery.
SELECT *
FROM DailyLeaveLedger
WHERE Paid = 0 AND
LeaveDate IN (SELECT etc...What should this be to do this)
Not sure if it's possible the way I mention it?
Malcolm
So LeaveDate should be between (WeekEnding - NumberWeeksCovered weeks) and WeekEnding for some Invoice?
If I've understood it right, you might be able to use an EXISTS() subquery, something like this:
SELECT *
FROM DailyLeaveLedger dl
WHERE Paid = 0 AND
      EXISTS (SELECT *
              FROM Invoice i
              WHERE DateAdd(week, -i.NumberWeeksCovered, i.WeekEnding) < dl.LeaveDate
                AND i.WeekEnding >= dl.LeaveDate
              /* and an extra clause in here to make sure
                 the invoice is for the same person as the DailyLeaveLedger row */
             )
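If you prefer a JOIN over EXISTS, an equivalent sketch is below (same caveat: a predicate tying the invoice to the same person as the leave row still needs to be added). Note that a plain JOIN can return the same leave row more than once if several invoices overlap the same date, which is why DISTINCT is included and why EXISTS is usually the cleaner choice here.

SELECT DISTINCT dl.*
FROM DailyLeaveLedger dl
JOIN Invoice i
  ON DateAdd(week, -i.NumberWeeksCovered, i.WeekEnding) < dl.LeaveDate
 AND dl.LeaveDate <= i.WeekEnding
WHERE dl.Paid = 0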