Here is my issue: I'm working on a water supply project where I have to merge line features representing pipes if they are made of the same material and touch each other. The merge is done two by two, which means that some features get duplicated in certain cases, as shown in the figure below:
This shows exactly my issue. After the merge I get three records, but what I want is just one record that encompasses all the pipes satisfying the conditions in the where clause:
Here is the query I use to do the merge:
drop table if exists touches_material;
create table touches_material as
select distinct a.*, st_union(a.geom, b.geom) as fusion
from pipe a, pipe b
where a.id < b.id
  and a.material = b.material
  and st_touches(a.geom, b.geom)
group by a.id, a.geom, b.geom;
The following picture shows the expected result on test data; it was produced with the QGIS software:
But this is what I'm getting with my query:
If you have any idea how to achieve the result I described, I would be very thankful for an answer. Best regards.
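For reference, a rough, untested sketch of an alternative, aggregate-based approach (assuming PostGIS 2.2 or later for ST_ClusterIntersecting, which groups intersecting geometries into one collection per connected component) instead of the pairwise self-join:
-- Rough sketch: one record per connected group of same-material pipes.
-- st_clusterintersecting is an aggregate returning an array with one
-- GeometryCollection per set of mutually intersecting (touching) geometries;
-- unnest() turns that array back into rows.
drop table if exists touches_material;
create table touches_material as
select material,
       unnest(st_clusterintersecting(geom)) as fusion
from pipe
group by material;
-- If single (multi)linestrings are preferred over geometry collections,
-- each cluster could afterwards be passed through
-- st_linemerge(st_collectionhomogenize(fusion)).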
Related
I am trying to join/concatenate 2 tables, because it would be more comfortable to work with them since they have the "year-month" column in common. As you can see below:
I only have one table loaded in Qlik Sense, the first one, and I'm trying to load the second one into that same table. There would be no point in carrying the "year-month" period column twice.
I'm using only expressions, and the tables are related by year-month.
Any ideas?
With Only Expressions
It might be tricky, because what you're trying to do is typically handled before the data reaches the front-end. If you're only trying to work with the data and select from it easily, add the two separate tables as a master item (whatever chart you would like), then add that chart to your sheet. That will let you treat it as a single chart data set.
Ultimately, it doesn't solve your problem, but it's the best you're going to get if you want to use only expressions.
Combining the Tables
I think this is your intent: use the data load script. The data load script is a simple SQL-like script that executes when the "Load data" button is pressed. Here you can create columns, do math, join tables, split tables, do whatever you want.
Information about the script loader:
https://help.qlik.com/en-US/sense/February2020/Subsystems/Hub/Content/Sense_Hub/Scripting/introduction-data-modeling.htm
It seems like what you're trying to do is an outer join. The reserved word for outer joins in Qlik data load script syntax is simply join -- it's an outer join by default, and it joins on the fields the two tables have in common (year-month in your case).
At its core, you would simply have something like this:
LOAD * from table1.csv;
join LOAD a, d from table2.csv;
I am a novice SAS programmer.
I have a question about merging two datasets.
The two data sets look like this (see the attached Excel sheet image):
Please let me know the key concepts or code to make this happen!
I have searched for the answer through Googling etc., but there is no site that solves exactly what I want.
(If possible, I would like to tackle the above question without PROC SQL.)
To get the desired result you should do a Cartesian product (cross join), which returns all the rows of all tables: each row in table1 is paired with every row in table2. I have used PROC SQL to do this, and I am eager to see how it can be done with a DATA step. Here's what I know:
Proc Sql;
create table test_merge as
select a.*, b.type_rhs, b.rhs1, b.rhs2
from test a, test11 b
where a.yearmonth=b.yearmonth
;
quit;
Again, I am new to SAS as well, and I think this is one of the ways to create the desired output.
When working with huge data, you will see a note in the log that says "The execution of this query involves performing one or more Cartesian product joins that can not be optimized."
I know how to use Talend's tMap component to output data that matches the lookup data, but I don't know how to output the rows that do not match the lookup table. Maybe a simple question for a senior user. Thanks all the way.
Regards,
Joe
Two steps are required to gather rejected rows:
On the left-hand side, set Join Model to Inner Join on the join for which you want to find rejected rows.
On the right-hand side, set Catch lookup inner join reject to true. This output will receive all rejected entries, so you can create one output that gets all matched entries and another that delivers only the rejected rows.
Usually this leads to a tMap with two output flows in your job.
In the tMap output table there are setting options. Go there and you will see a couple of options such as "Catch lookup inner join reject" and "Catch output reject"; you can set them to true or false based on your need. My guess is that you are looking for "Catch lookup inner join reject".
I have a query like this, which we use to generate data for our custom dashboard (a Rails app):
SELECT AVG(wait_time) FROM (
    SELECT TIMESTAMPDIFF(MINUTE, a.finished_time, b.start_time) wait_time
    FROM (
        SELECT MAX(start_time + INTERVAL avg_time_spent SECOND) finished_time, branch
        FROM mytable
        WHERE name IN ('test_name')
          AND status = 'SUCCESS'
        GROUP BY branch) a
    INNER JOIN (
        SELECT MIN(start_time) start_time, branch
        FROM mytable
        WHERE name IN ('test_name_specific')
        GROUP BY branch) b
    ON a.branch = b.branch
    HAVING avg_time_spent BETWEEN 0 AND 1000) t
GROUP BY week
Now I am trying to port this to Tableau, and I am not able to find a way to represent this data in Tableau. I am stuck on how to represent the inner group by in a calculated field. I could also try to just use a custom SQL data source, but I am already using another data source.
columns in mytable -
start_time
avg_time_spent
name
branch
status
I think this could be achieved with the new Level of Detail expressions, but unfortunately I am stuck on version 8.3.
Save custom SQL for rare cases. This doesn't look like a rare case. Let Tableau generate the SQL for you.
If you simply connect to your table, then you can usually write calculated fields to get the information you want. I'm not exactly sure why you have test_name in one part of your query but test_name_specific in another, so ignoring that, here is a simplified example for a similar query.
If you define a calculated field called worst_case_test_time as
DATEDIFF('minute', MIN(start_time), MAX(DATEADD('second', avg_time_spent, start_time)))
it seems close to what your original query computes.
It would help if you explained what exactly you are trying to compute. It appears to be some sort of worst-case bound for average test time. There may be an even simpler formula, but it's hard to know without a little context.
You could filter on status = 'SUCCESS' and avg_time_spent < 1000, and place branch and WEEK(start_time) on, say, the Rows and Columns shelves.
P.S. Your query seems a little off. Don't you need an aggregation function like MAX or AVG after the HAVING keyword?
I hope you can help find an answer to a problem that will become a recurring theme at work. This involves denormalising data from RDBMS tables to flat file formats with repeating groups (sharing domain and meaning) across columns. Unfortunately this is unavoidable.
Here's a very simplified example of the transformation I'd require:
TABLE A                          TABLE B (1 -> MANY)
A_KEY     FIELD_A                B_KEY     A_KEY     FIELD_B
A_KEY_01  A_VALUE_01             B_KEY_01  A_KEY_01  B_VALUE_01
A_KEY_02  A_VALUE_02             B_KEY_02  A_KEY_01  B_VALUE_02
                                 B_KEY_03  A_KEY_02  B_VALUE_03
This will become:
A_KEY     FIELD_A     B_KEY1    FIELD_B1    B_KEY2    FIELD_B2
A_KEY_01  A_VALUE_01  B_KEY_01  B_VALUE_01  B_KEY_02  B_VALUE_02
A_KEY_02  A_VALUE_02  B_KEY_03  B_VALUE_03
Each entry from TABLE A will have one row in the output flat file with one column per related field from TABLE B. Columns in the output file can have empty values for fields obtained from TABLE B.
I realise this will create an extremely wide file, but this is a requirement. I've had a look at MapForce and Apatar, but I think this problem is too bizarre or I can't use them correctly.
My question: is there already a tool that will accomplish this or should I develop one from scratch (I don't want to reinvent the wheel)?
I'm pretty sure you can't solve this in plain SQL, but depending on your RDBMS, it may be possible to create a stored procedure or some such thing. Otherwise it's a fairly easy thing to do in a scripting language. Which technology are you using?
Does this help?
using-pivot-in-sql-server-2008
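For anyone who follows that link, here is a rough, untested sketch of how the pivoting could be expressed in SQL Server using ROW_NUMBER() and conditional aggregation, which gives the same effect as PIVOT when two columns (B_KEY and FIELD_B) have to be spread out per position; the names TABLE_A and TABLE_B stand in for whatever the real tables are called:
-- Number each TABLE B row within its A_KEY, then spread positions 1 and 2
-- (add more CASE lines for further positions) into their own columns.
WITH numbered AS (
    SELECT A_KEY, B_KEY, FIELD_B,
           ROW_NUMBER() OVER (PARTITION BY A_KEY ORDER BY B_KEY) AS pos
    FROM TABLE_B
)
SELECT a.A_KEY, a.FIELD_A,
       MAX(CASE WHEN n.pos = 1 THEN n.B_KEY   END) AS B_KEY1,
       MAX(CASE WHEN n.pos = 1 THEN n.FIELD_B END) AS FIELD_B1,
       MAX(CASE WHEN n.pos = 2 THEN n.B_KEY   END) AS B_KEY2,
       MAX(CASE WHEN n.pos = 2 THEN n.FIELD_B END) AS FIELD_B2
FROM TABLE_A a
LEFT JOIN numbered n ON n.A_KEY = a.A_KEY
GROUP BY a.A_KEY, a.FIELD_A;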
Thanks for all your help. As it turns out, the relationship is ONE -> a maximum of 3, and this constraint will not change as the data is now static, so the following run-of-the-mill SQL works:
select A.A_KEY, A.FIELD_A,
       B.B_KEY  as B_KEY1, B.FIELD_B  as FIELD_B1,
       B2.B_KEY as B_KEY2, B2.FIELD_B as FIELD_B2,
       B3.B_KEY as B_KEY3, B3.FIELD_B as FIELD_B3
from A
    left join B    on (A.A_KEY = B.A_KEY)
    left join B B2 on (A.A_KEY = B2.A_KEY and B2.B_KEY != B.B_KEY)
    left join B B3 on (A.A_KEY = B3.A_KEY and B3.B_KEY != B.B_KEY
                       and B3.B_KEY != B2.B_KEY)
group by A.A_KEY
order by A.A_KEY