how to perform a full outer join using tJoin in Talend

I am trying to implement a full outer join using the tJoin component but I am not getting the expected results. Could anyone help me with this?
Screenshot of tJoin:

In fact Talend's tJoin does not implement a full join, but you can achieve it by reading your inputs twice: perform a left join on one read and a right join (main and lookup reversed) on the other, then unite the two flows with tUnite and keep unique rows with tUniqRow.

I think tJoin is for LEFT or INNER joins.
For FULL joins you need to use a tMap.
Regards,
TRF

Related

How to use OPTIMIZE ZORDER BY in Databricks

I have two dataframes (from a Delta Lake table) that do a left join via an id column.
sd1, sd2
%sql
select
a.columnA,
b.columnB
from sd1 a
left outer join sd2 b
on a.id = b.id
The problem is that my query takes a long time. Looking for ways to improve the results, I found OPTIMIZE ZORDER BY in a YouTube video;
according to the video it seems to be useful for ordering columns that are going to be part of the WHERE condition.
But since the two dataframes use the id in the join condition, could it be interesting to order that column?
spark.sql(f'OPTIMIZE delta.`{sd1_delta_table_path}` ZORDER BY (id)')
The logic that follows in my head is that if we first order that column, then it will take less time to look up the rows to make the match. Is this correct?
Thanks in advance
OPTIMIZE ZORDER may help a bit by placing related data together, but its usefulness may depend on the data type used for the id column. OPTIMIZE ZORDER relies on the data-skipping functionality that just gives you min & max statistics per file, which may not be useful when you have big ranges in your joins.
You can also tune file sizes, to avoid scanning too many small files.
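For example, with Delta on Databricks the target file size can be set as a table property (a sketch; the table name sd1 and the 128mb value are assumptions, not recommendations):
-- Assumed table name; pick a target size that matches your scan patterns.
ALTER TABLE sd1 SET TBLPROPERTIES ('delta.targetFileSize' = '128mb');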
But from my personal experience, for joins, bloom filters give better performance because they allow you to skip files more efficiently than plain data skipping. Just build a bloom filter on the id column...
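On Databricks that could look like the following (a sketch; the table name and the fpp/numItems options are assumptions you would tune to your data volume):
-- Bloom filter index on the join column; fpp = target false-positive rate.
CREATE BLOOMFILTER INDEX ON TABLE sd1
FOR COLUMNS (id OPTIONS (fpp = 0.1, numItems = 50000000));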

Error while extracting data from two dataframes using SQL

I'm trying to extract the data by joining the two tables in pyspark. My join query looks like:
SELECT COUNT(DISTINCT m.ticker), to_date(m.date)
FROM extractalpha_cam2 m
LEFT OUTER JOIN TOP1000 u ON u.date = to_date(m.date)
GROUP BY m.date
ORDER BY m.date
It is throwing the error:
Error:Py4JJavaError: An error occurred while calling
z:org.apache.zeppelin.spark.ZeppelinContext.showDF
But when I tried extracting the data from each table individually, it worked fine. My queries from a single table are:
SELECT to_date(date) FROM extractalpha_cam2
SELECT date from TOP1000
These two queries work fine. Can anyone help me extract the data from both tables by joining them?
It would be really helpful if anyone could share a link that can guide me in writing efficient queries in pyspark.
I checked and found that this error comes when the job you are running takes more time than the timeout you set. In my case it was 300 seconds.
Let me know if anyone has a more valuable answer than this. Thanks
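As a side note (an assumption on my part, not something the answer above confirms): 300 seconds is also Spark's default broadcast join timeout, so if that is the limit being hit it can be raised from SQL:
-- spark.sql.broadcastTimeout defaults to 300 seconds; raise it for slow broadcast joins.
SET spark.sql.broadcastTimeout = 1200;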

using talend to output data not matched in lookup table

I know how to use Talend's tMap component to output data matched in the lookup table; however, I don't know how to output the rows that are not matched with the data in the lookup table. Probably a simple question for a senior user. Thanks all the way.
Regards,
Joe
Two steps are required to gather rejected rows:
On the left-hand side, set Join Model to Inner Join on the join for which you want to find rejected rows.
On the right-hand side, set Catch lookup inner join reject to true on an output table. That output will get all rejected entries, so you can create one output which gets all matched entries and another output which delivers only the rejected rows.
Usually this leads to a tMap with two output flows in your job.
In the tMap output table there is a settings option. Go to that and you will see a couple of options like "Catch lookup inner join reject" & "Catch output reject"; you can set them to true/false based on your need. My guess is that you are looking for "Catch lookup inner join reject".

ZF_DB: how to make a simple select over two or more tables without a join?

Example:
SELECT `cat`.`id_catalog`, COUNT(parent.id_catalog) - 1 AS `level`
FROM `tbl_catalog` AS `cat`, `tbl_catalog` AS `parent`
WHERE cat.`left` BETWEEN parent.`left` AND parent.`right`
GROUP BY `cat`.`id_catalog`
ORDER BY `cat`.`left` ASC
It doesn't seem to work if I use ZF; ZF creates this query with a join only. How do I create the select without a join in ZF_DB?
By the way, maybe I'm doing something wrong in this query. It is a simple nested-set DB with parent, left and right fields. Perhaps there is another way to use a join to get the depth of a node. Anyway, it would be interesting to get an answer for both ways :)
Thanks in advance to all who looks it through:)
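For what it's worth, the comma join plus the WHERE filter is equivalent to an explicit INNER JOIN, which Zend_Db_Select can produce via joinInner() (a sketch of the SQL form only; mapping it onto your ZF version is an assumption):
SELECT cat.id_catalog, COUNT(parent.id_catalog) - 1 AS level
FROM tbl_catalog AS cat
INNER JOIN tbl_catalog AS parent
ON cat.`left` BETWEEN parent.`left` AND parent.`right`
GROUP BY cat.id_catalog
ORDER BY cat.`left` ASC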

Optimising (My)SQL Query

I usually use ORM instead of SQL and I am slightly out of touch on the different JOINs...
SELECT `order_invoice`.*
, `client`.*
, `order_product`.*
, SUM(product.cost) as net
FROM `order_invoice`
LEFT JOIN `client`
ON order_invoice.client_id = client.client_id
LEFT JOIN `order_product`
ON order_invoice.invoice_id = order_product.invoice_id
LEFT JOIN `product`
ON order_product.product_id = product.product_id
WHERE (order_invoice.date_created >= '2009-01-01')
AND (order_invoice.date_created <= '2009-02-01')
GROUP BY `order_invoice`.`invoice_id`
The tables/columns are logically named... it's a shop-type application... the query works... it's just very, very slow...
I use the Zend Framework and would usually use Zend_Db_Table_Row::find(Parent|Dependent)Row(set)('TableClass'), but I would have to make lots of joins and I thought it would improve performance to do it all in one query instead of hundreds...
Can I improve the above query by using more appropriate JOINs or a different implementation? Many thanks.
The query is wrong; the GROUP BY is wrong. All columns in the SELECT part that are not in an aggregate function have to be in the GROUP BY, and you mention only one column.
Change the SQL mode: set it to ONLY_FULL_GROUP_BY.
When this is done and you have a correct query, use EXPLAIN to find out how the query is executed and which indexes are used. Then start optimizing.
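For instance (a sketch against the tables above; the column list is reduced to the grouping key and the aggregate so the GROUP BY is valid):
-- Make MySQL reject SELECT columns that are neither grouped nor aggregated.
SET SESSION sql_mode = 'ONLY_FULL_GROUP_BY';
-- Inspect the execution plan and index usage of the slow query.
EXPLAIN
SELECT order_invoice.invoice_id, SUM(product.cost) AS net
FROM order_invoice
LEFT JOIN order_product ON order_invoice.invoice_id = order_product.invoice_id
LEFT JOIN product ON order_product.product_id = product.product_id
WHERE order_invoice.date_created >= '2009-01-01'
AND order_invoice.date_created <= '2009-02-01'
GROUP BY order_invoice.invoice_id;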