QGIS: calculating distances for different types of lines for a report table

I have a QGIS project with lines classified into 5 types by a field. While composing a report, I need a table that shows the five line types in the first column and the total length for each type in the second column. I have no idea how to do this.
Any help, please?

Solved, with this expression:
CASE
WHEN "Infra_class" = 0 THEN round( sum( "length", filter:= "Infra_class" = 0 ) )
WHEN "Infra_class" = 1 THEN round( sum( "length", filter:= "Infra_class" = 1 ) )
WHEN "Infra_class" = 2 THEN round( sum( "length", filter:= "Infra_class" = 2 ) )
WHEN "Infra_class" = 3 THEN round( sum( "length", filter:= "Infra_class" = 3 ) )
ELSE round( sum( "length", filter:= "Infra_class" = 4 ) )
END

How to properly nest select queries in PostgreSQL? A specific (somewhat complex) example

I'd like some help with a query that I'm trying to build. This is what I have so far:
select sd.address,
       sd.colour_code,
       s.id as deactivated_id,
       s.type,
       'http://site/groups/' || c.group_id || '/user/' || c.subject_id as URL
from sensors s
join contracts c on s.contract_id = c.id
join sensordevices sd on sd.id = s.device_id
where s.begin_time is not null and
      s.end_time is not null and
      c.end_date is null and
      not exists (select * from sensors sb
                  where sb.begin_time is not null and -- b properly entered
                        sb.end_time is null and       -- b active
                        sb.contract_id = s.contract_id and
                        s.type = sb.type and
                        s.end_time <= sb.begin_time)
;
This generates a table (not reproduced here) to which I would like to add a barcode column. The barcode is not stored in the database; it is a derived attribute.
In order to get barcodes I was given the following query:
SELECT
    ean || '' ||
    (
        10 - (
            (
                SELECT SUM(digit)
                FROM (
                    SELECT ROW_NUMBER() OVER () AS pos, digit
                    FROM (
                        SELECT REGEXP_SPLIT_TO_TABLE(ean::TEXT, ''::TEXT)::INT AS digit
                    ) AS split
                ) AS sub
                WHERE pos < LENGTH(ean::TEXT)
                  AND pos % 2 = LENGTH(ean::TEXT) % 2
            )
            +
            (
                SELECT SUM(digit * 3)
                FROM (
                    SELECT ROW_NUMBER() OVER () AS pos, digit
                    FROM (
                        SELECT REGEXP_SPLIT_TO_TABLE(ean::TEXT, ''::TEXT)::INT AS digit
                    ) AS split
                ) AS sub
                WHERE pos < LENGTH(ean::TEXT)
                  AND pos % 2 = (LENGTH(ean::TEXT) - 1) % 2
            )
        ) % 10
    ) % 10 AS barcode
FROM (
    SELECT '0' || sd.type || '00000' || sd.address AS ean
    FROM sensors s
    JOIN sensordevices sd ON s.device_id = sd.id
) AS ean;
Which returns a table with a single column, barcode.
So, my question is, how can I use the second query to select the barcode for the first statement?
What I tried was the following:
In my query, after select I added brc.barcode, and after from I added (SECOND-LARGE-QUERY-HERE) brc, but I got this error: [53100] ERROR: could not write to file "base/pgsql_tmp/pgsql_tmp62390.6721": No space left on device. I assume this is because it is trying to temporarily save a list of every possible barcode before executing the query. I also tried adding some constraints inside the nested query, but I got another error there.
Could someone help me with the query?
Thank you very much for your time,
Bill
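
For what it's worth, the out-of-space error is what you would expect from adding the barcode query with no join condition: every row of the first query gets paired with every barcode, so the server has to materialise the cross product. A sketch of one way around it, assuming the barcode should be derived from the same sensordevices row the first query already reads: wrap the supplied check-digit arithmetic in a helper function (the name ean_with_check_digit is made up here) and call it once per row:
-- Hypothetical helper; the arithmetic is copied verbatim from the supplied query,
-- just parameterised on a single ean value.
CREATE OR REPLACE FUNCTION ean_with_check_digit(ean text)
RETURNS text
LANGUAGE sql
STABLE
AS $$
    SELECT ean || '' ||
    (
        10 - (
            (SELECT SUM(digit)
             FROM (SELECT ROW_NUMBER() OVER () AS pos, digit
                   FROM (SELECT REGEXP_SPLIT_TO_TABLE(ean, '')::INT AS digit) AS split
                  ) AS sub
             WHERE pos < LENGTH(ean)
               AND pos % 2 = LENGTH(ean) % 2)
            +
            (SELECT SUM(digit * 3)
             FROM (SELECT ROW_NUMBER() OVER () AS pos, digit
                   FROM (SELECT REGEXP_SPLIT_TO_TABLE(ean, '')::INT AS digit) AS split
                  ) AS sub
             WHERE pos < LENGTH(ean)
               AND pos % 2 = (LENGTH(ean) - 1) % 2)
        ) % 10
    ) % 10;
$$;
The first query could then simply add ean_with_check_digit('0' || sd.type || '00000' || sd.address) AS barcode to its select list and keep everything else unchanged. Whether that is the right EAN for each row is an assumption on my part, so treat this as a starting point rather than a tested answer.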

Find and Replace numbers in a string

If I input a string as given below, I should be able to convert as mentioned below.
Ex 1: String - 5AB89C should be converted as 0000000005AB0000000089C
Ex 2: String GH1HJ should be converted as GH0000000001HJ
Ex 3: String N99K7H45 should be converted as N0000000099K0000000007H0000000045
Each number should be padded with leading zeros to a total width of 10 digits, including the number itself. In Ex 1, the number 5 is padded with 9 leading zeros to make 10 digits; in the same way, 89 is padded with 8 leading zeros to make a total of 10 digits. Letters and any special characters should be left untouched.
Once you get a copy of PatternSplitCM, this is easy as pie.
Here's how we do it with one value:
DECLARE @string VARCHAR(8000) = '5AB89C';

SELECT CASE f.[Matched] WHEN 1 THEN RIGHT('0000000000' + f.Item, 10) ELSE f.Item END
FROM dbo.PatternSplitCM(@string, '[0-9]') AS f
ORDER BY f.ItemNumber
FOR XML PATH('');
Returns: 0000000005AB0000000089C
Now against a table:
-- sample data
DECLARE @table TABLE (StringId INT IDENTITY, String VARCHAR(8000));
INSERT @table (String)
VALUES ('5AB89C'), ('GH1HJ'), ('N99K7H45');

SELECT t.StringId, oldstring = t.String, newstring = f.padded
FROM @table AS t
CROSS APPLY
(
    SELECT CASE f.[Matched] WHEN 1 THEN RIGHT('0000000000' + f.Item, 10) ELSE f.Item END
    FROM dbo.PatternSplitCM(t.String, '[0-9]') AS f
    ORDER BY f.ItemNumber
    FOR XML PATH('')
) AS f(padded);
Returns:
StringId  oldstring  newstring
--------  ---------  ---------------------------------
1         5AB89C     0000000005AB0000000089C
2         GH1HJ      GH0000000001HJ
3         N99K7H45   N0000000099K0000000007H0000000045
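On SQL Server 2017 or later, a sketch of the same approach that reassembles the pieces with STRING_AGG instead of FOR XML PATH (untested, reusing the @table variable above):
SELECT t.StringId,
       oldstring = t.String,
       newstring = (SELECT STRING_AGG(CASE f.[Matched]
                                           WHEN 1 THEN RIGHT('0000000000' + f.Item, 10)
                                           ELSE f.Item
                                      END, '')
                           WITHIN GROUP (ORDER BY f.ItemNumber)
                    FROM dbo.PatternSplitCM(t.String, '[0-9]') AS f)
FROM @table AS t;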
... and that's it. The code to create PatternSplitCM is below.
PatternSplitCM Code:
CREATE FUNCTION dbo.PatternSplitCM
(
    @List VARCHAR(8000) = NULL,
    @Pattern VARCHAR(50)
) RETURNS TABLE WITH SCHEMABINDING
AS
RETURN
WITH numbers AS (
    SELECT TOP (ISNULL(DATALENGTH(@List), 0))
           n = ROW_NUMBER() OVER (ORDER BY (SELECT NULL))
    FROM (VALUES (0),(0),(0),(0),(0),(0),(0),(0),(0),(0)) d (n),
         (VALUES (0),(0),(0),(0),(0),(0),(0),(0),(0),(0)) e (n),
         (VALUES (0),(0),(0),(0),(0),(0),(0),(0),(0),(0)) f (n),
         (VALUES (0),(0),(0),(0),(0),(0),(0),(0),(0),(0)) g (n)
)
SELECT ItemNumber = ROW_NUMBER() OVER (ORDER BY MIN(n)),
       Item       = SUBSTRING(@List, MIN(n), 1 + MAX(n) - MIN(n)),
       Matched
FROM (
    SELECT n, y.Matched, Grouper = n - ROW_NUMBER() OVER (ORDER BY y.Matched, n)
    FROM numbers
    CROSS APPLY (
        SELECT Matched = CASE WHEN SUBSTRING(@List, n, 1) LIKE @Pattern THEN 1 ELSE 0 END
    ) y
) d
GROUP BY Matched, Grouper;

DAX closest value match with no relationship

I'm trying to migrate a report from Excel into Power BI and I'm hoping someone can help me as I'm new to DAX.
I have two tables and one (let's call it table A) contains a column of planned start Date/Times for events while the other contains the actual start Date/Times of the same events. There is usually only a few minutes difference between the planned and actual start times.
I need to match the closest actual start Date/Time from Table B to the planned start Date/Times in table A.
There are no existing columns that I can use to create a relationship between the two tables.
If I can find the closest actual start time and pull it into Table A then I can create a relationship from that.
In Excel I would do this with an array formula such as this: (here I'm just assuming everything is in column A of each table)
{=INDEX(TableB!A:A,MATCH(MIN(ABS(TableB!A:A-TableA!A1)),ABS(TableB!A:A-TableA!A1),0),1)}
I have found the following DAX code online but it will only return the next lowest value even if there is a closer value which is higher.
IF (
    HASONEVALUE ( TableA[A] ),
    CALCULATE (
        MAX ( TableB[A] ),
        FILTER ( TableB, TableB[A] <= VALUES ( TableA[A] ) )
    )
)
I've also tried to figure out a way to do this by building a date/time table that contains every minute of the date range my data covers (about 2 years), but as I said I'm new to DAX and haven't been able to figure it out.
Is there any way to use something similar to the MIN(ABS( part of the Excel formula in DAX (as it has these functions) to calculate this in a calculated column? Is this possible without an existing relationship, or will I have to continue to do this part of the work in Excel every time I want to update this report?
Any help greatly appreciated.
Create a calculated column in the Planned table, call it ActualClosestDate and use this expression:
ActualClosestDate =
IF (
DATEDIFF (
CALCULATE (
MAX ( TableB[Actual] ),
FILTER ( TableB, [Planned] >= [Actual] && TableA[Event] = TableB[Event] )
),
[Planned],
SECOND
)
< DATEDIFF (
[Planned],
CALCULATE (
MIN ( TableB[Actual] ),
FILTER ( TableB, [Planned] <= [Actual] && TableA[Event] = TableB[Event] )
),
SECOND
),
CALCULATE (
MAX ( TableB[Actual] ),
FILTER ( TableB, [Planned] >= [Actual] && TableA[Event] = TableB[Event] )
),
CALCULATE (
MIN ( TableB[Actual] ),
FILTER ( TableB, [Planned] <= [Actual] && TableA[Event] = TableB[Event] )
)
)
Where:
[Planned] is the Planned Start Date/time column in TableA
[Actual] is the Actual Start Date/Time column in TableB
Replace according to your model.
If you don't have an Event column in each table, suppress that condition in the FILTER functions.
UPDATE: Calculating three separate columns could improve performance, instead of performing the whole calculation in one expression.
BeforePlanned =
DATEDIFF (
CALCULATE (
MAX ( TableB[Actual] ),
FILTER ( TableB, [Planned] >= [Actual] && TableA[Event] = TableB[Event] )
),
[Planned],
SECOND
)
AfterPlanned =
DATEDIFF (
[Planned],
CALCULATE (
MIN ( TableB[Actual] ),
FILTER ( TableB, [Planned] <= [Actual] && TableA[Event] = TableB[Event] )
),
SECOND
)
ActualClosestDate =
IF (
[BeforePlanned] < [AfterPlanned],
CALCULATE (
MAX ( TableB[Actual] ),
FILTER ( TableB, [Planned] >= [Actual] && TableA[Event] = TableB[Event] )
),
CALCULATE (
MIN ( TableB[Actual] ),
FILTER ( TableB, [Planned] <= [Actual] && TableA[Event] = TableB[Event] )
)
)
You could even split it into more columns, e.g. a column to get the previous actual date and a column to get the next actual date; then you just need:
ActualClosestDate =
IF ( [BeforePlanned] < [AfterPlanned], [PreviousActualDate], [NextActualDate] )
Let me know if this helps.

Pivot and Null Value

In this simple PIVOT example (T-SQL), I am trying to replace the NULL with 0. I have tried all the suggestions I found, but I am still getting NULL. How do I replace the NULL value in a PIVOT with 0?
select *
from
(
select ISNULL([vendid],0)as [vendid],isnull(origdocamt,0)as origdocamt
from APDoc
) X
pivot
(sum(origdocamt) for vendid in ([AAA],[BBB])) As P
Output:
AAA    BBB
-----  -----
45800  NULL
You will want to use IsNull or Coalesce to perform the replacement of null in the final select:
select
[AAA] = IsNull([AAA], 0),
[BBB] = IsNull([BBB], 0)
from
(
select [vendid],
origdocamt
from APDoc
) X
pivot
(
sum(origdocamt)
for vendid in ([AAA],[BBB])
) As P
The NULLs you are seeing are produced by the PIVOT itself (there are no rows for that vendid), not by NULLs in the source columns, so the inner ISNULL calls don't remove them. If you want to keep them anyway, just add the outer ISNULLs as well:
select ISNULL([AAA],0) as [AAA], ISNULL([BBB],0) as [BBB]
from
(
select ISNULL([vendid],0)as [vendid],isnull(origdocamt,0)as origdocamt
from APDoc
) X
pivot
(sum(origdocamt) for vendid in ([AAA],[BBB])) As P
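As a sanity check, here is a minimal, self-contained repro (hypothetical data, not the real APDoc table) showing the outer ISNULL doing the work when one vendor has no rows at all:
DECLARE @APDoc TABLE (vendid VARCHAR(10), origdocamt DECIMAL(18,2));
INSERT INTO @APDoc (vendid, origdocamt) VALUES ('AAA', 45000), ('AAA', 800);   -- no BBB rows

SELECT [AAA] = ISNULL([AAA], 0),
       [BBB] = ISNULL([BBB], 0)
FROM (SELECT vendid, origdocamt FROM @APDoc) AS X
PIVOT (SUM(origdocamt) FOR vendid IN ([AAA], [BBB])) AS P;
-- Returns: AAA = 45800.00, BBB = 0.00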

Check for equal amounts of negative numbers as positive numbers

I have a table with two columns: intGroupID, decAmount
I want to have a query that can basically return the intGroupID as a result if for every positive(+) decAmount, there is an equal and opposite negative(-) decAmount.
So a table of (id=1,amount=1.0),(1,2.0),(1,-1.0),(1,-2.0) would return back the intGroupID of 1, because for each positive number there exists a negative number to match.
What I know so far is that there must be an equal number of decAmounts (so I enforce a count(*) % 2 = 0) and the sum of all rows must = 0.0. However, some cases that get by that logic are:
ID | Amount
1 | 1.0
1 | -1.0
1 | 2.0
1 | -2.0
1 | 3.0
1 | 2.0
1 | -4.0
1 | -1.0
This has a sum of 0.0 and has an even number of rows, but there is not a 1-for-1 relationship of positives to negatives. I need a query that can basically tell me if there is a negative amount for each positive amount, without reusing any of the rows.
I tried counting the distinct absolute values of the numbers and enforcing that it is less than the count of all rows, but it's not catching everything.
The code I have so far:
DECLARE @tblTest TABLE (
    intGroupID INT,
    decAmount DECIMAL(19,2)
);

INSERT INTO @tblTest (intGroupID, decAmount)
VALUES (1,-1.0),(1,1.0),(1,2.0),(1,-2.0),(1,3.0),(1,2.0),(1,-4.0),(1,-1.0);

DECLARE @intABSCount INT = 0,
        @intFullCount INT = 0;

SELECT @intFullCount = COUNT(*) FROM @tblTest;

SELECT @intABSCount = COUNT(*) FROM (
    SELECT DISTINCT ABS(decAmount) AS absCount FROM @tblTest GROUP BY ABS(decAmount)
) AS absCount;

SELECT t1.intGroupID
FROM @tblTest AS t1
/* Make sure there is an even number of rows */
INNER JOIN
    (SELECT COUNT(*) AS intCount FROM @tblTest) AS t2 ON t2.intCount % 2 = 0
/* Make sure the sum = 0.0 */
INNER JOIN
    (SELECT SUM(decAmount) AS decSum FROM @tblTest) AS t3 ON decSum = 0.0
/* Make sure the count of absolute values < count of values */
WHERE @intABSCount < @intFullCount
GROUP BY t1.intGroupID;
I think there is probably a better way to check this table, possibly by finding pairs and removing them from the table and seeing if there's anything left in the table once there are no more positive/negative matches, but I'd rather not have to use recursion/cursors.
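A sketch of that pair-them-off idea without cursors (not one of the posted answers; it assumes the @tblTest table variable above and ignores zero amounts): rank the positives and the negatives separately within each (intGroupID, ABS(decAmount)) bucket, pair the n-th positive with the n-th negative via a full join, and flag any group that still has an unpaired row.
WITH pos AS (
    SELECT intGroupID, ABS(decAmount) AS absAmount,
           ROW_NUMBER() OVER (PARTITION BY intGroupID, ABS(decAmount)
                              ORDER BY (SELECT NULL)) AS rn
    FROM @tblTest
    WHERE decAmount > 0
), neg AS (
    SELECT intGroupID, ABS(decAmount) AS absAmount,
           ROW_NUMBER() OVER (PARTITION BY intGroupID, ABS(decAmount)
                              ORDER BY (SELECT NULL)) AS rn
    FROM @tblTest
    WHERE decAmount < 0
)
-- groups where every positive pairs 1-for-1 with a negative of the same size
SELECT g.intGroupID
FROM (SELECT DISTINCT intGroupID FROM @tblTest) AS g
WHERE NOT EXISTS (
    SELECT 1
    FROM pos p
    FULL JOIN neg n
           ON n.intGroupID = p.intGroupID
          AND n.absAmount  = p.absAmount
          AND n.rn         = p.rn
    WHERE COALESCE(p.intGroupID, n.intGroupID) = g.intGroupID
      AND (p.rn IS NULL OR n.rn IS NULL)   -- an unpaired row in this group
);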
Create TABLE #tblTest (
intA INT
,decA DECIMAL(19,2)
);
INSERT INTO #tblTest (intA,decA)
VALUES (1,-1.0),(1,1.0),(1,2.0),(1,-2.0),(1,3.0),(1,2.0),(1,-4.0),(1,-1.0), (5,-5.0),(5,5.0) ;
SELECT * FROM #tblTest;
SELECT
intA
, MIN(Result) as IsBalanced
FROM
(
SELECT intA, X,Result =
CASE
WHEN count(*)%2 = 0 THEN 1
ELSE 0
END
FROM
(
---- Start thinking here --- inside-out
SELECT
intA
, x =
CASE
WHEN decA < 0 THEN
-1 * decA
ELSE
decA
END
FROM #tblTest
) t1
Group by intA, X
)t2
GROUP BY intA
Not tested, but I think you can get the idea.
This returns the ids that do not conform.
The negative case is easier to test / debug:
select pos.*, neg.*
from
( select id, amount, count(*) as ccount
from tbl
where amount > 0
group by id, amount ) pos
full outer join
( select id, amount, count(*) as ccount
from tbl
where amount < 0
group by id, amount ) neg
on pos.id = neg.id
and pos.amount = -neg.amount
and pos.ccount = neg.ccount
where pos.id is null
or neg.id is null
I think this will return a list of the ids that do conform:
select distinct(id) from tbl
except
select distinct(isnull(pos.id, neg.id))
from
( select id, amount, count(*) as ccount
from tbl
where amount > 0
group by id, amount ) pos
full outer join
( select id, amount, count(*) as ccount
from tbl
where amount < 0
group by id, amount ) neg
on pos.id = neg.id
and pos.amount = -neg.amount
and pos.ccount = neg.ccount
where pos.id is null
or neg.id is null
Boy, I found a simpler way to do this than my previous answers. I hope all my crazy edits are saved for posterity.
This works by grouping all numbers for an id by their absolute value (1, -1 grouped by 1).
The sum of each group determines whether there is an equal number of pairs: if it is 0 then the group is balanced; any other value for the sum means there is an imbalance.
The detection of evenness by the COUNT aggregate is only necessary to detect an even number of zeros. I assumed that 0's could exist and they should occur an even number of times. Remove it if this isn't a concern, as 0 will always pass the first test.
I rewrote the query a bunch of different ways to get the best execution plan. The final result below only has one big heap sort which was unavoidable given the lack of an index.
Query
WITH tt AS (
    SELECT intGroupID,
           CASE WHEN SUM(decAmount) <> 0 OR COUNT(*) % 2 = 1 THEN 1 ELSE 0 END AS unequal
    FROM @tblTest
    GROUP BY intGroupID, ABS(decAmount)
)
SELECT tt.intGroupID,
       CASE WHEN SUM(unequal) != 0 THEN 'not equal' ELSE 'equal' END AS [pair]
FROM tt
GROUP BY intGroupID;
Tested Values
(1,-1.0),(1,1.0),(1,2),(1,-2), -- should work
(2,-1.0),(2,1.0),(2,2),(2,2), -- fail, two positive twos
(3,1.0),(3,1.0),(3,-1.0), -- fail two 1's , one -1
(4,1),(4,2),(4,-.5),(4,-2.5), -- fail: adds up the same sum, but different values
(5,1),(5,-1),(5,0),(5,0), -- work, test zeros
(6,1),(6,-1),(6,0), -- fail, test zeros
(7,1),(7,-1),(7,-1),(7,1),(7,1) -- fail, 3 x 1
Results
intGroupID  pair
----------  ---------
1           equal
2           not equal
3           not equal
4           not equal
5           equal
6           not equal
7           not equal
The following should return the unbalanced groups:
;with pos as (
    select intGroupID, ABS(decAmount) m
    from TableName
    where decAmount > 0
), neg as (
    select intGroupID, ABS(decAmount) m
    from TableName
    where decAmount < 0
)
select distinct IsNull(p.intGroupID, n.intGroupID) as intGroupID
from pos p
full join neg n on n.intGroupID = p.intGroupID and abs(n.m - p.m) < 1e-8
where p.m is NULL or n.m is NULL
To get the unpaired elements, the select statement can be changed to the following:
select IsNull(p.intGroupID, n.intGroupID) as intGroupID, IsNull(p.m, -n.m) as decAmount
from pos p
full join neg n on n.intGroupID = p.intGroupID and abs(n.m - p.m) < 1e-8
where p.m is NULL or n.m is NULL
Does this help?
-- Expected result - group 1 and 3
declare @matches table (groupid int, value decimal(5,2))
insert into @matches select 1, 1.0
insert into @matches select 1, -1.0
insert into @matches select 2, 2.0
insert into @matches select 2, -2.0
insert into @matches select 2, -2.0
insert into @matches select 3, 3.0
insert into @matches select 3, 3.5
insert into @matches select 3, -3.0
insert into @matches select 3, -3.5
insert into @matches select 4, 4.0
insert into @matches select 4, 4.0
insert into @matches select 4, -4.0

-- Get groups where we have matching positive/negatives, with the same number of each
select mat.groupid, min(case when pos.PositiveCount = neg.NegativeCount then 1 else 0 end) as 'Match'
from @matches mat
LEFT JOIN (select groupid, SUM(1) as 'PositiveCount', Value
           from @matches where value > 0 group by groupid, value) pos
       on pos.groupid = mat.groupid and pos.value = ABS(mat.value)
LEFT JOIN (select groupid, SUM(1) as 'NegativeCount', Value
           from @matches where value < 0 group by groupid, value) neg
       on neg.groupid = mat.groupid and neg.value = case when mat.value < 0 then mat.value else mat.value * -1 end
group by mat.groupid
-- If at least one pair within a group doesn't match, reject the group
having min(case when pos.PositiveCount = neg.NegativeCount then 1 else 0 end) = 1
You can compare your values this way:
declare @t table (id int, amount decimal(4,1))
insert @t values (1,1.0),(1,-1.0),(1,2.0),(1,-2.0),(1,3.0),(1,2.0),(1,-4.0),(1,-1.0),(2,-1.0),(2,1.0)

;with a as
(
    select count(*) cnt, id, amount
    from @t
    group by id, amount
)
select id from @t
except
select b.id from a
full join a b
    on a.cnt = b.cnt and a.amount = -b.amount
where a.id is null
For some reason I can't write comments. However, Daniel's comment is not correct, and my solution does accept (6,1),(6,-1),(6,0), which can be correct: 0 is not specified in the question, and since it is a zero value it can be handled either way. My answer does NOT accept (3,1.0),(3,1.0),(3,-1.0).
To Blam: no, I am not missing
or b.id is null
My solution is like yours, but not exactly identical.