Does DataJoint's "join" operator require tables to have the same value at shared secondary attributes?

I am a researcher in Loren Frank's lab at UCSF. When I call populate on a Computed DataJoint table that depends on two upstream tables whose entries have the same values for shared primary attributes but different values for a shared secondary attribute ("analysis_file_name"), I get the following error:
~/anaconda3/envs/nwb_datajoint/lib/python3.8/site-packages/datajoint/condition.py in assert_join_compatibility(expr1, expr2)
     63     if not isinstance(expr1, U) and not isinstance(expr2, U):  # dj.U is always compatible
     64         try:
---> 65             raise DataJointError(
     66                 "Cannot join query expressions on dependent attribute `%s`" % next(
     67                     r for r in set(expr1.heading.secondary_attributes).intersection(

DataJointError: Cannot join query expressions on dependent attribute `analysis_file_name`
To make clear why this situation arises: our lab currently has a workflow in which every DataJoint table that stores data in NWB files has the secondary attribute "analysis_file_name", which contains the name of the analysis file storing the data. Consequently, entries across two tables can share values at their primary attributes but differ in the value of the secondary attribute "analysis_file_name". The above error seems to arise when joining two such tables, e.g. during the autopopulation of a third table that depends on them. Could folks from DataJoint clarify whether, in order to join two tables (e.g. during autopopulation of a third table that depends on them), entries that have the same values at shared primary attributes must also have the same values at shared secondary attributes? Thanks for any clarification on this.

This is indeed a bug and I filed it here: issue 980
We are working on the solution in PR981.
The problem is that the default key_source did not project out the secondary attributes before joining.
In the meantime, the workaround is to override the key_source property, projecting out the secondary attributes in one of the parent tables. For example, if the two parents are A and B, then define the following property on the computed table:
@property
def key_source(self):
    # A.proj() drops A's secondary attributes, so analysis_file_name is no longer shared by both operands of the join
    return A.proj() * B
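For orientation, here is a minimal sketch of how this override could sit inside the computed table's class definition; the schema name, the parent tables A and B, and the make body are placeholders rather than anything specific to your pipeline:

import datajoint as dj

schema = dj.schema('my_schema')  # placeholder schema name

@schema
class MyComputed(dj.Computed):
    definition = """
    -> A
    -> B
    ---
    result: float
    """

    @property
    def key_source(self):
        # restrict the source of keys for populate: A.proj() keeps only A's
        # primary attributes, so the join with B no longer involves analysis_file_name
        return A.proj() * B

    def make(self, key):
        # fetch from A and B restricted by key, compute, then insert
        self.insert1(dict(key, result=0.0))  # placeholder computation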

Related

Reference foreign keys using SSIS-Lookup

I am asking for help on the following topic. I am trying to create an ETL process with two Excel data sources (S1, ~300 rows, and S2, ~7000 rows). S1 contains project information and employee details, and S2 contains the number of hours each employee worked on each project at a given timestamp.
I want to insert the number of hours each employee worked on each project at a timestamp into the fact table, referencing the existing primary keys in the dimension tables. If an entry is not already present in the dimension tables, I want to add a new entry first and use the newly generated id. The destination table structure (data warehouse, star schema) is shown here: [image: Destination Table Structure]
In SSIS, I created three Data Flow tasks for filling the dimension tables (project, employee and time) with distinct values first (using group by, since S1 and S2 contain a lot of duplicate rows), and a fourth Data Flow task (see image below) to insert the fact table data, and this is where I'm running into problems:
[Image: Data Flow Task FactTable]
I am using three Lookup transformations to retrieve the foreign keys project_id, employee_id and time_id from the dimension tables (using project name, employee number and timestamp). If the id is found, it is passed on all the way to Merge Join 1; if not, a new dimension entry is created (let's say for project) and the newly generated project_id is passed on instead. The same goes for employee and time, respectively.
There are two issues with this:
1) The "amount of hours" (passed by Multicast four, see image above) is not matched in the final result (No Match).
2) The number of rows being inserted keeps increasing forever (an endless join, I believe due to the Merge Joins).
What I've tried:
I have used one UNION instead of three Merge Joins before, but this resulted in the foreign keys ending up in separate rows instead of being merged together.
I also used Merge (instead of Merge Join) and combined the join and sort conditions in every way I could think of.
I understand that this scenario might be confusing for everybody else, but thank you for taking the time to look at it! Any help is greatly appreciated.
Solved it
For anybody having similar issues:
Separate the data flows that fill the dimension tables from those that fill the fact tables, and that will do the trick.
It's a clean solution and easier to debug.
Also: don't run the Lookup transformations in parallel, but rather one after the other, passing the attributes along. This saves unnecessary merges as well.
To sum up:
Four Data Flow Tasks: three for filling dimension tables ONLY and one for filling fact tables ONLY.
Loading Multiple Tables using SSIS keeping foreign key relationships
The answer posted by onupdatecascade is basically it.
Good luck!

How do I restrict the number of columns used in an INSERT when using EntityFramework?

Let's say you have an Entity with 26 columns. It matches the corresponding table which also has 26 columns.
From time to time I would like to be able to send fewer columns in an INSERT (Add) operation than are specified in the entity, because of certain business rules. (In our case we have a trigger on a table that automatically populates certain fields with data, and we routinely leave those columns out of our INSERT statements.)
I know that I can use DTOs to restrict the number of columns returned, but how do I restrict the number of columns sent?
If there are operations that insert entities providing only a subset of columns (the non-nullable ones, for example), you can consider using a bounded context with an entity declaration for just those applicable columns. A bounded context is a smaller, single-purpose context for reading and writing data; it is needed here because a single EF context does not support multiple entity definitions mapped to the same table.

MATLAB- Joining tables w/ overlapping data using key variable WHERE neither table contains all data points from the other one

I am working on combining two tables with different types of patient information using the PID (patient identity) variable present in both tables. Usually the function "join" (https://www.mathworks.com/help/matlab/ref/table.join.html) does the trick when one of the tables has information on all the patients from the other one. But in my case, each table has certain PID values (i.e. patients) that are not present in the other one. How do I create a new table that combines patient info from both tables but contains only the patients present in both?
I could probably write some long, clunky code to do this manually, but I was wondering if there's a function (or a few functions) that can do the task more efficiently. Thank you
The solution is to use either innerjoin or outerjoin instead of join. For a table containing only the patients present in both tables, innerjoin is the one you want; outerjoin keeps all patients from both tables and fills unmatched entries with missing values.

Merge neo4j relationships into one while returning the result if certain condition satisfies

My use case is:
I have to return the whole graph in the result, but with one condition:
If there is more than one relationship between two particular nodes in the same direction, I have to merge them into a single relationship. For example, say there are two nodes 'm' and 'n' with three relationships between them, r1, r2, r3 (all in the same direction); when I get the result after running the Cypher query, I should get only one relationship between 'n' and 'm'.
I also need to perform some operations on top of this: the resulting merged relationship should retain the properties and values of one of the original relationships, chosen based on a timestamp field that is one of the properties on every relationship.
Note: all my relationships have the same set of properties (the number and names of the properties are the same across all relationships; only the values may differ).
Any help would be appreciated. Thanks in advance.
You mean something like this?
Delete all except the first:
MATCH (a)-[r]->(b)
WITH a, b, type(r) AS type, collect(r) AS rels
// rels[0] is kept; every additional parallel relationship of the same type is deleted
FOREACH (r IN rels[1..] | DELETE r)
To keep the relationship with the latest timestamp instead, order by timestamp before collecting:
MATCH (a)-[r]->(b)
WITH a, r, b
ORDER BY r.timestamp DESC
WITH a, b, type(r) AS type, collect(r) AS rels
// rels[0] now holds the newest relationship; its properties are the ones retained
FOREACH (r IN rels[1..] | DELETE r)
If you want to do all those operations virtually, just on the query results, you would do them in your programming language of choice (a sketch follows below).
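For illustration, a minimal Python sketch assuming the relationships have already been fetched from the query results as plain dictionaries; the field names start, end, type and timestamp are placeholders for however your driver returns them:

def merge_parallel_relationships(rels):
    """Keep, for each (start, end, type) group, only the relationship
    with the latest timestamp; its properties are the ones retained."""
    best = {}
    for rel in rels:
        key = (rel["start"], rel["end"], rel["type"])
        if key not in best or rel["timestamp"] > best[key]["timestamp"]:
            best[key] = rel
    return list(best.values())

# Hypothetical query results: three parallel KNOWS relationships between n and m
rels = [
    {"start": "n", "end": "m", "type": "KNOWS", "timestamp": 1, "weight": 0.2},
    {"start": "n", "end": "m", "type": "KNOWS", "timestamp": 5, "weight": 0.9},
    {"start": "n", "end": "m", "type": "KNOWS", "timestamp": 3, "weight": 0.4},
]
print(merge_parallel_relationships(rels))
# keeps only the timestamp-5 relationship between n and m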

ETL Process when and how to add in Foreign Keys T-SQL SSIS

I am in the early stages of creating a Data Warehouse based loosely on the Kimball methodology.
I am currently investigating my source data. I understand that adding a primary key (not a natural key) will then allow me to make the connections between the facts and dimensions.
Sounds like a silly question but how exactly is this done? Are there any good articles that run through this process?
I would imagine we bring in all of the dimensions first. And when the fact data is brought over, a lookup is performed that "pushes" the foreign key into the fact table? At what point is this done? Within SSIS, what is the "best practice" method? Is this all done in one package, for example?
Is that roughly how it happens?
In this case, do we have to be particularly careful about the order in which we load our data, or could we end up loading facts for which there is no corresponding dimension?
I would imagine we bring in all of the dimensions first. And when the fact data is brought over, a lookup is performed that "pushes" the foreign key into the fact table? At what point is this done? Within SSIS, what is the "best practice" method? Is this all done in one package, for example?
It would depend on your schema and table design.
Assuming it's a star schema and the FK is based on the data value itself:
DIM1 <- FACT1 -> DIM2
  ^
  |
FACT2 -> DIM3
you'll first fill DIM1 and DIM2 before inserting into FACT1 as you would need the FK.
Assuming it's a snowflake schema:
DIM1_1
  ^
  |
DIM1 <- FACT1 -> DIM2
you'll first fill DIM1_1 then DIM1 and DIM2 before inserting into FACT1.
Assuming the FK relation is based on something else (usually a number) instead of the data value itself (an optimization when dealing with huge amounts of data and/or strings as dimension values), you won't need to wait until you have inserted the data into the DIM table. I'm sure it's very confusing :), so I'll try to explain it briefly. The steps involved would be something like this (assume a simple star schema with two tables, FACT1 and DIMENSION1):
1. Extract FACT and DIMENSION values from the data set you are processing.
2. Generate a unique number based on the DIMENSION's value (which, say, is a string), using a reproducible algorithm (e.g. SHA1: given the same string, it always gives the same number).
3. Insert the number and the FACT values into the FACT1 table.
4. Insert the number and the DIMENSION values into the DIMENSION1 table.
Steps 3 & 4 can be done in parallel. as long as there is NO constraint in place. A join on a numeric column would be more efficient than one of a string.
And there is no need to store the mapping for #2 because it's reproducible (just ensure you pick the right algo).
Obviously this can be extended for snowflake schema and/or multiple dimensions.
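A minimal Python sketch of that hashing idea, assuming the surrogate key fits in a 64-bit integer and using made-up table and column names:

import hashlib

def surrogate_key(value: str) -> int:
    # Reproducible: the same dimension value always hashes to the same number,
    # so fact rows and dimension rows can be generated independently.
    digest = hashlib.sha1(value.encode("utf-8")).digest()
    return int.from_bytes(digest[:8], "big")  # truncated to a 64-bit key (assumption)

# Hypothetical rows extracted from the source data set (step 1)
rows = [
    {"project": "Apollo", "hours": 7.5},
    {"project": "Gemini", "hours": 3.0},
    {"project": "Apollo", "hours": 2.0},
]

# Steps 2-4: compute the key once, then build the FACT1 and DIMENSION1 loads
fact_rows = [(surrogate_key(r["project"]), r["hours"]) for r in rows]
dim_rows = {(surrogate_key(r["project"]), r["project"]) for r in rows}  # distinct dimension values

print(fact_rows)
print(dim_rows)

One caveat: truncating the hash makes key collisions possible in principle, so implementations of this kind usually either keep the full hash or enforce a uniqueness check on the dimension table.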
HTH