I have a QGIS project with lines classified into 5 types by a field. While composing a report, I need a table that shows the five line types in the first column and the total length for each type in the second column. I have no idea how to do this.
Any help, please?
Solved.
CASE
WHEN "Infra_class" = 0 THEN round( sum( "length", filter:= "Infra_class" = 0 ) )
WHEN "Infra_class" = 1 THEN round( sum( "length", filter:= "Infra_class" = 1 ) )
WHEN "Infra_class" = 2 THEN round( sum( "length", filter:= "Infra_class" = 2 ) )
WHEN "Infra_class" = 3 THEN round( sum( "length", filter:= "Infra_class" = 3 ) )
ELSE round( sum( "length", filter:= "Infra_class" = 4 ) )
END
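For what it's worth, the same per-class total can usually be had without the CASE by grouping the aggregate on the class field itself; a minimal sketch, assuming QGIS 3.x aggregate syntax and the "Infra_class"/"length" fields above:
round( sum( "length", group_by:= "Infra_class" ) )
Each row of the report table then shows the rounded total length for the class that row's feature belongs to.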
I need to plot data extracted from a table as a pie chart.
I am selecting data from a table to count tickets under different scenarios. I can already select the data in a shape that plots fine in Excel.
But I need to select the same data in a shape that can also be plotted as a pie chart.
SELECT Sum(CASE
             WHEN Date(reportdate) < Date(current timestamp)
                  AND status NOT IN (SELECT value
                                     FROM synonymdomain
                                     WHERE maxvalue IN ( 'RESOLVED', 'CLOSED', 'REJECTED' )
                                       AND domainid IN ( 'INCIDENTSTATUS' ))
                  AND incident.ir IS NOT NULL THEN 1
           END) AS IMs_Balance_Carry_Forward,
       Sum(CASE
             WHEN Date(reportdate) = Date(current timestamp)
                  AND status NOT IN (SELECT value
                                     FROM synonymdomain
                                     WHERE maxvalue IN ( 'RESOLVED', 'CLOSED', 'REJECTED' )
                                       AND domainid IN ( 'INCIDENTSTATUS' ))
                  AND incident.ir IS NOT NULL THEN 1
           END) AS IM_Added_During_the_day
FROM incident
Current Result:
IMS_BALANCE_CARRY_FORWARD   IM_ADDED_DURING_THE_DAY
120                         8

Required Result:
Column1                     Column2
IMS_BALANCE_CARRY_FORWARD   120
IM_ADDED_DURING_THE_DAY     8
You want an unpivot capability; this has been answered here before: previous question and answer.
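A minimal sketch of that idea for the query above, assuming DB2-style SQL as in the question: wrap the aggregation in a common table expression (named totals here purely for illustration) and UNION the two totals back together as one label/value row each, which is the shape a pie chart needs.
WITH totals AS (
    SELECT Sum(CASE
                 WHEN Date(reportdate) < Date(current timestamp)
                      AND status NOT IN (SELECT value
                                         FROM synonymdomain
                                         WHERE maxvalue IN ( 'RESOLVED', 'CLOSED', 'REJECTED' )
                                           AND domainid IN ( 'INCIDENTSTATUS' ))
                      AND incident.ir IS NOT NULL THEN 1
               END) AS ims_balance_carry_forward,
           Sum(CASE
                 WHEN Date(reportdate) = Date(current timestamp)
                      AND status NOT IN (SELECT value
                                         FROM synonymdomain
                                         WHERE maxvalue IN ( 'RESOLVED', 'CLOSED', 'REJECTED' )
                                           AND domainid IN ( 'INCIDENTSTATUS' ))
                      AND incident.ir IS NOT NULL THEN 1
               END) AS im_added_during_the_day
    FROM incident
)
SELECT 'IMS_BALANCE_CARRY_FORWARD' AS column1, ims_balance_carry_forward AS column2 FROM totals
UNION ALL
SELECT 'IM_ADDED_DURING_THE_DAY', im_added_during_the_day FROM totals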
I have a query like this:
select r.timestamp, r.value
from result_table r
where timestamp > ( NOW() - INTERVAL '120 hour' )
and r.id % 10 = 1
where id is the auto-incrementing primary key.
Both 120 and 10 can be any other number (chosen by the user depending on their needs). Basically, the user wants data for some time interval with some decimation.
Obviously, it runs too slowly on a large amount of data. What should the index(es) be here?
PostgreSQL supports expression (function-based) indexes.
The predicate
where
timestamp > ( NOW() - INTERVAL '120 hour' )
and r.id % 10 = 1
needs the index (timestamp, (id % 10)) to get more performance.
Query:
CREATE INDEX timestamp__idmod10
    ON result_table (timestamp, (id % 10));
see demos
with index http://sqlfiddle.com/#!17/8e63b/6
without index http://sqlfiddle.com/#!17/9be99/3
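To check whether the planner actually uses the new index on your real data (and not just in the fiddles), you can look at the plan, e.g.:
EXPLAIN ANALYZE
SELECT r.timestamp, r.value
FROM result_table r
WHERE r.timestamp > ( NOW() - INTERVAL '120 hour' )
  AND r.id % 10 = 1;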
Edited because of a comment:
Thanks, Raymond. However, (id % 10) is not that good, since instead of 10 it can be any other number: 9, 11, 100, 1, etc.
Another approach is to use generate_series() and a derived table to generate an id list matching % number = 1, and use that result set in an IN clause.
P.S. This statement assumes an id column defined as SERIAL and a table with one million records or fewer. Also keep in mind that the generate_series() function takes some time.
SQL statement (with 10 standing in for the user's chosen decimation number):
SELECT
  numbers.number
FROM (
  SELECT generate_series(1, 1000000) AS number
) AS numbers
WHERE
  numbers.number % 10 = 1
Then you can use the index
CREATE INDEX timestamp_id ON result_table(timestamp, id);
And the query
SELECT
*
FROM
result_table
WHERE
timestamp > ( NOW() - INTERVAL '120 hour' )
AND
id IN (
SELECT
numbers.number FROM (
SELECT
generate_series(1, 1000000) as number
) AS numbers
WHERE
numbers.number % 10 = 1
)
see demo http://sqlfiddle.com/#!17/5013c0/6 with example data.
I have a concern regarding the use of multiple WITH clauses in a query, because under some conditions they slow the query down, as in the example below.
The first WITH clause takes 0.345 sec to fetch 98948 records, while the second WITH clause takes 13 sec to fetch 68199 records, even though it returns fewer records than the first one. The only difference is that the second WITH clause uses an aggregate function to calculate the sum of charges.
Can anybody please help us understand why the second query takes so much time?
1. This clause takes 0.318 sec to fetch 98948 records:
WITH delinquency_lease_details AS (
SELECT
dp.cid,
dp.id AS delinquency_policy_id,
p.id,
dp.threshold_amount,
dp.small_balance_amount,
dp.delinquency_threshold_type_id,
cl.id,
cl.primary_customer_id AS customer_id,
cl.lease_status_type_id,
cl.occupancy_type_id,
COALESCE( cl.building_name || ' - ' || cl.unit_number_cache, cl.building_name, cl.unit_number_cache ) AS unit_number,
func_format_customer_name( cl.name_first, cl.name_last, cl.company_name ) AS customer_name,
cl.property_name,
TZ.time_zone_name AS property_timezone
FROM
cached_leases cl
JOIN lease_details ld ON ( ld.cid = cl.cid AND ld.lease_id = cl.id )
JOIN delinquency_policies dp ON ( dp.cid = ld.cid AND ld.delinquency_policy_id = dp.id )
JOIN properties p ON ( p.cid = lp.cid AND p.id = lp.property_id )
JOIN time_zones TZ ON ( TZ.id = p.time_zone_id )
WHERE
cl.cid = 1111
AND cl.lease_status_type_id IN ( 4, 5 )
AND cl.occupancy_type_id IN ( 1, 2, 3, 4, 6, 9, 10, 11 )
AND cl.termination_list_type_id IS NULL
)
SELECT * FROM delinquency_lease_details;
2. This clause takes 13 sec to fetch 68199 records, whereas if I run the same query without the WITH clause it takes 0.564 seconds:
WITH delinquent_balance AS (
SELECT
dld.cid,
dld.id,
min( c.post_date ) AS min_post_date,
sum( c.transaction_amount_due ) AS delinquent_amount
FROM
cached_leases dld
JOIN charges c ON ( c.cid = dld.cid AND c.lease_id = dld.id AND c.is_temporary = FALSE AND c.is_deleted = FALSE )
JOIN charge_codes cc ON ( c.ar_code_id = cc.id AND c.cid = cc.cid AND cc.ledger_filter_id = 27 )
WHERE
dld.cid = 1111
AND ( ( c.transaction_amount_due > 0 AND c.post_date < CURRENT_DATE ) OR c.transaction_amount_due < 0 )
AND NOT EXISTS (
select
1
from
repayment_charges
WHERE
cid = c.cid
AND property_id = c.property_id
AND charge_id = c.id
AND is_active = true
)
GROUP BY
dld.cid,
dld.id
) select * from delinquent_balance;
As per this link, the WITH clause is an optimization barrier for PostgreSQL. If that is really the cause, what should we use in place of the WITH clause for complex queries? I have used 10 WITH clauses in the full query and they slow it down; I have shown only two of them here to reach some conclusion, because the second clause takes far more time than the first one.
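One note on that barrier: on PostgreSQL 12 and later it can be lifted per CTE with NOT MATERIALIZED, which lets the planner inline the CTE body into the outer query; on older versions the usual workaround is to move the body into a plain subquery or a view. A sketch against the second clause, with the body trimmed to the core join and aggregate:
WITH delinquent_balance AS NOT MATERIALIZED (
    SELECT dld.cid,
           dld.id,
           min( c.post_date )              AS min_post_date,
           sum( c.transaction_amount_due ) AS delinquent_amount
    FROM cached_leases dld
    JOIN charges c ON ( c.cid = dld.cid AND c.lease_id = dld.id AND c.is_temporary = FALSE AND c.is_deleted = FALSE )
    WHERE dld.cid = 1111
    GROUP BY dld.cid, dld.id
)
SELECT * FROM delinquent_balance;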
I need an example of a cursor for my meter system, where the system reads the meters every month.
The cursor needs to check that every meter has a reading registered in the current year. For meters with missing readings, an estimated value is added, such that the daily consumption equals the daily consumption in the previous period plus 15%. If no previous period exists, the above kWh value is used.
How about something like this? (The MonthSeed table could become a real table in your database.)
declare @MonthSeed table (MonthNumber int)
insert into @MonthSeed values (1),(2),(3),(4),(5),(6),(7),(8),(9),(10),(11),(12)
-- assumes declared table "Reading" with fields ( Id int, [Date] datetime, MeterNo varchar(50), Consumption int )
select
m.MeterNo,
r.Date,
calculatedConsumption = isnull(r.Consumption, -- read consumption
isnull((select max(r2.Consumption) Consumption from Reading r2 where datepart(month, r2.Date) = (m.MonthNumber - 1) and r2.MeterNo = m.MeterNo) * 1.15, -- previous consumption + 15%
9999)) -- default consumption
from
(select distinct
MeterNo,
MonthNumber
from
Reading, @MonthSeed) m
left join
Reading r on r.MeterNo = m.MeterNo and datepart(month, r.Date) = m.monthNumber
EDIT FOLLOWING COMMENTS - EXAMPLE OF ADDING MISSING READINGS
As commented, you need to include an insert before the select, insert into Reading (MeterNo, Date, Consumption), and, making use of the left join to the Reading table, include a check for the reading id being null, i.e. missing: where r.Id is null.
I noticed that this would result in null date entries when inserting into the Reading table, so I included a date expression in the main sub-select, Date = dateadd(month, monthnumber, @SeedDate); the main select was amended to show a date for missing entries, isnull(r.Date, m.Date).
I've calculated @SeedDate to be the 1st of the current month one year ago, but you may want to pass in an earlier date.
declare @MonthSeed table (MonthNumber int)
insert into @MonthSeed values (1),(2),(3),(4),(5),(6),(7),(8),(9),(10),(11),(12)
-- assumes declared table "Reading" with fields ( Id int, [Date] datetime, MeterNo varchar(50), Consumption int )
declare @SeedDate datetime = (select dateadd(month, datediff(month, 0, getdate())-12, 0)) -- this month, last year
insert into Reading (MeterNo, Date, Consumption)
select
m.MeterNo,
isnull(r.Date, m.Date),
calculatedConsumption =
isnull(r.Consumption, -- read consumption
isnull(1.15 * (select max(r2.Consumption) Consumption
from Reading r2
where datepart(month, r2.Date) = (m.MonthNumber - 1)
and r2.MeterNo = m.MeterNo), -- previous consumption + 15%
9999)) -- default consumption
from
(select distinct
MeterNo,
MonthNumber,
Date = dateadd(month, monthnumber, @SeedDate)
from
Reading
cross join
@MonthSeed) m
left join
Reading r on r.MeterNo = m.MeterNo and datepart(month, r.Date) = m.monthNumber
where
r.Id is null
select * from Reading
(The following assumes SQL Server 2005 or later.)
Scrounge around in here and see if there's anything of value:
declare @StartDate as Date = '2012-01-01'
declare @Now as Date = GetDate()
declare @DefaultConsumption as Int = 2000 -- KWh.
declare @MeterReadings as Table
( MeterReadingId Int Identity, ReadingDate Date, MeterNumber VarChar(10), Consumption Int )
insert into @MeterReadings ( ReadingDate, MeterNumber, Consumption ) values
( '2012-01-13', 'E154', 2710 ),
( '2012-01-19', 'BR549', 650 ),
( '2012-02-15', 'E154', 2970 ),
( '2012-02-19', 'BR549', 618 ),
( '2012-03-16', 'BR549', 758 ),
( '2012-04-11', 'E154', 2633 ),
( '2012-04-20', 'BR549', 691 )
; with Months ( Month ) as (
select @StartDate as [Month]
union all
select DateAdd( mm, 1, Month )
from Months
where Month < @Now
),
MeterNumbers ( MeterNumber ) as (
select distinct MeterNumber
from @MeterReadings )
select M.Month, MN.MeterNumber,
MR.MeterReadingId, MR.ReadingDate, MR.Consumption,
Coalesce( MR.Consumption, @DefaultConsumption ) as [BillableConsumption],
( select Max( ReadingDate ) from @MeterReadings where MeterNumber = MN.MeterNumber and ReadingDate < M.Month ) as [PriorReadingDate],
( select Consumption from @MeterReadings where MeterNumber = MN.MeterNumber and ReadingDate =
( select Max( ReadingDate ) from @MeterReadings where MeterNumber = MN.MeterNumber and ReadingDate < M.Month ) ) as [PriorConsumption],
( select Consumption from @MeterReadings where MeterNumber = MN.MeterNumber and ReadingDate =
( select Max( ReadingDate ) from @MeterReadings where MeterNumber = MN.MeterNumber and ReadingDate < M.Month ) ) * 1.15 as [PriorConsumptionPlus15Percent]
from Months as M cross join
MeterNumbers as MN left outer join
@MeterReadings as MR on MR.MeterNumber = MN.MeterNumber and DateAdd( dd, 1 - DatePart( dd, MR.ReadingDate ), MR.ReadingDate ) = M.Month
order by M.Month, MN.MeterNumber