Present the prev and next relations of the current todo - emacs

It seems that org-todo is good at displaying tasks and reminding you about them, but weak at expressing their logical relations.
Suppose I am working on Task E:
* TODO Task E
In the middle of Task E, I realize its prev-task D should get done first, so I set up a new task:
* PREV Task D
E.org::* TODO Task E
Since this task is set in a different file from the one where Task E resides, a link to E is added.
However, the current task E isn't aware that prev links to it exist, so manual work is needed to insert the prev link into E:
* TODO Task E
D.org::* PREV Task D
Things get complicated when more prev and next tasks are bound to the current task, and worse still when the current task itself serves as prev or next to other projects. Such intertwined relations maintained by hand are a disaster.
I am not very familiar with magic functions like containers etc., so I have to express the logical relations by inserting links manually.
Are there any solutions to this problem?

Related

neo4j creating random empty nodes when merging

I'm trying to create a new node with label C and relationships a-->c and b-->c, but only if the whole pattern a-->c, b-->c does not already exist.
a and b already exist (merged before the rest of the query).
The query below is a portion of the query I want to write to accomplish this.
However, it creates a random empty node devoid of properties and labels and attaches the relationship to that node instead. This shouldn't be possible and is certainly not what I want. How do I stop it from happening?
merge (a: A {id: 1})
merge (b: B {id:1})
with *
call {with a, b
match (a)-[:is_required]->(dummy:C), (a)-[:is_required]->(b)
with count(*) as cnt
where cnt = 0
merge (temp: Temporary {id: 12948125})
merge (a)-[:is_required]->(temp)
return temp
}
return *
Thanks
I think there are a couple of problems here:
There are restrictions on how you can use variables introduced with WITH in a subquery. This article helps to explain them: https://neo4j.com/developer/kb/conditional-cypher-execution/
You may be expecting WHERE to introduce conditional flow the way IF does in other languages. WHERE is a filter (FILTER might have been a better choice of keyword than WHERE). In this case you are filtering on cnt = 0, but then never reference cnt again, so merge (temp: Temporary {id: 12948125}) and merge (a)-[:is_required]->(temp) always get executed. The trouble is that, because of the above restrictions on using variables inside subqueries, the (a) node you are trying to reference doesn't exist; it is not the one from the outer query. Neo4j then simply creates an empty node with no properties or labels and links it to the :Temporary node - this is completely valid, and it is why you are getting empty nodes.
This query should result in what you intend:
merge (a: A {id: 1})
merge (b: B {id:1})
with *
// Check if a is connected to b or :C (can't use a again otherwise we'd overwrite it)
optional match(x:A {id: 1}) where exists((a)-[:is_required]->(:C)) or exists((a)-[:is_required]->(b))
with *, count(x) as cnt
// use a case to 'fool' foreach into creating the extra :Temporary node required if a is not related to b or :C
foreach ( i in case when cnt = 0 then [1] else [] end |
merge (temp: Temporary {id: 12948125})
merge (a)-[:is_required]->(temp)
)
with *
// Fetch the :Temporary node if it was created
optional match (a)-[:is_required]->(t:Temporary)
return *
There are APOC procedures you could use to perform conditional query execution (they are mentioned in the linked article). You could also try looking for a path from (a) and checking its length, rather than introducing a new MATCH and the variable x and then checking for the existence of related nodes.
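As a sketch of the APOC route (assuming the APOC plugin is installed; apoc.do.when runs one of two Cypher strings depending on a boolean, and the labels and id values here are taken from the question):

```cypher
MERGE (a:A {id: 1})
MERGE (b:B {id: 1})
// Compute the condition once, then branch on it.
WITH a, b,
     exists((a)-[:is_required]->(:C)) OR exists((a)-[:is_required]->(b)) AS related
CALL apoc.do.when(
  NOT related,
  // if-branch: create the :Temporary node and link it to a
  'MERGE (temp:Temporary {id: 12948125})
   MERGE (a)-[:is_required]->(temp)
   RETURN temp',
  // else-branch: do nothing
  'RETURN null AS temp',
  {a: a}
) YIELD value
RETURN a, b, value.temp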
If anyone is having the same problem, the answer is that the Neo4j Browser is displaying nonexistent nodes. The query executes fine.

Postgresql sequence: lock strategy to prevent record skipping on a table queue

I have a table that acts like a queue (let's call it queue) with a sequence numbering its rows 1..N.
Some triggers insert into this queue (the triggers run inside transactions).
External machines then track the sequence number and ask the remote database: give me the sequences greater than 10 (for example).
The problem:
In some cases transactions 1 and 2 begin (the numbers are examples), but transaction 2 commits before transaction 1. If, in between, a host asks the queue for sequences greater than N, transaction 1's sequence numbers are skipped.
How can I prevent this?
I would proceed like this:
add a column state to the table that you change as soon as you process an entry
get the next entry with
SELECT ... FROM queuetab
WHERE state = 'new'
ORDER BY seq
LIMIT 1
FOR UPDATE SKIP LOCKED;
update state in the row you found and process it
As long as you do the last two actions in a single transaction, this will ensure that you are never blocked, always get the first available entry, and never skip an entry.
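Put together, the recipe above might look like this (queuetab and seq come from the query above; the payload column and the literal 42 are placeholders for your real columns and for the seq value fetched by the SELECT):

```sql
-- One-time setup: the state column from step 1.
ALTER TABLE queuetab ADD COLUMN state text NOT NULL DEFAULT 'new';

-- Consumer, all in one transaction:
BEGIN;

SELECT seq, payload             -- "payload" stands in for your real columns
FROM queuetab
WHERE state = 'new'
ORDER BY seq
LIMIT 1
FOR UPDATE SKIP LOCKED;

-- process the entry, then mark it using the seq returned above:
UPDATE queuetab SET state = 'done' WHERE seq = 42;

COMMIT;
```

Because the row stays locked until COMMIT, a concurrent consumer skips it (SKIP LOCKED) and takes the next 'new' entry instead of blocking.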

In Snowflake task do we have something like child task will wait until dependency are met other then parent task condition

I have 4 streams:
A_STREAM, B_STREAM, C_STREAM, D_STREAM
I have a chain of tasks where A_TASK is the parent and it has 3 child tasks (B_TASK, C_TASK, D_TASK).
CREATE TASK A_TASK
WAREHOUSE = XYZ
SCHEDULE = '15 MINUTE'
WHEN SYSTEM$STREAM_HAS_DATA('A_STREAM')
AS
DO Something;
CREATE TASK C_TASK
WAREHOUSE=XYZ
AFTER A_TASK
WHEN SYSTEM$STREAM_HAS_DATA('C_STREAM')
AS
DO SOMETHING;
Say A_TASK got triggered and completed, but when execution reached C_TASK, stream C_STREAM had no data, so the task wasn't triggered.
After 5 minutes C_STREAM got data.
The issue is that this data will never get loaded into the target table from C_STREAM, since A_TASK won't get triggered the next time around. How do we tackle this kind of scenario?
I can't separate these tasks, since they operate on the same target table.
In Snowflake tasks, is there something like a child task waiting until its dependency is met, other than the parent task's condition?

Simple sequence of events

Assume events of type A, B, C or D are being emitted. I want to detect whenever an event of type A is followed by an event of type B. In other words, I want to detect a sequence, for which Esper's EPL provides the -> operator.
However, what I described above is ambiguous. What I actually want is the following: whenever a B is detected, I want it matched with the most recent A.
I have been playing around with EPL's syntax, but the best I could come up with was that:
select * from pattern [(every a=A) -> b=B]
This, however, matches each B with the oldest A that occurred after the last match. Weird...
Help is much appreciated! :P
I use joins a lot for simple matching. The other option is match-recognize. The join looks like this:
select * from B unidirectional, A.std:lastevent()
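Written out with aliases (assuming A and B are registered event types), the unidirectional keyword makes B the sole trigger of the join, and the std:lastevent() data window retains only the most recent A, so each incoming B is paired with the latest A:

```
select a, b
from B as b unidirectional, A.std:lastevent() as a
```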

Use Spring Batch to write in different Data Sources

For a project I need to process items from one table and generate 3 different items for 3 different tables, all 3 in a second data source different from that of the first item. The implementation uses Spring Batch over Oracle DB. I think this question is similar to what I need, but there only one different item is written at the end.
To illustrate the situation:
DataSource 1    DataSource 2
------------    -----------------------------
Table A         Table B    Table C    Table D
The reader should read one item from table A. In the processor, using the information from the item in A, 3 new items will be created of type B, C and D. In addition, the item from table A will be updated.
The writer should be able to write all 4 items at the same time. My first implementation uses a JpaItemWriter to update item A, but I don't know how the processor could hand the other 3 items to the writer so that all of them are saved at the same time.
Can a processor return several items of different types? Would I need to create 4 steps, each one writing one of the items? And in that case, would it be error safe (if there is an error writing D, would A, B, and C be rolled back)?
Thanks in advance for your support!
Your question is really two questions. Let's look at each individually:
Can an ItemProcessor return multiple items
An ItemProcessor can only return one item at a time for each item that is passed in. Because of this, in your specific scenario, you'll need your ItemProcessor to return a wrapper object that wraps items A, B, C, and D.
How can I write different types in the same step
Spring Batch relies heavily on composition in its programming model. Since your ItemProcessor will be returning a wrapper object, you'll end up writing an ItemWriter that unwraps items A, B, C, and D and delegates the writing of each to the appropriate writer. So in the final solution, you'll end up with 5 ItemWriters: one for each item type and one that wraps all of those. Take a look at our CompositeItemWriter as an example: https://github.com/spring-projects/spring-batch/blob/master/spring-batch-infrastructure/src/main/java/org/springframework/batch/item/support/CompositeItemWriter.java
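To make the shape concrete, here is a minimal, self-contained sketch of that wrapper-plus-delegating-writer pattern. The two interfaces are simplified stand-ins for Spring Batch's own (which live in org.springframework.batch.item), and the String payloads and the AbcdBundle/AbcdProcessor/AbcdWriter names are invented for illustration:

```java
import java.util.ArrayList;
import java.util.List;

// Simplified stand-ins for Spring Batch's interfaces, so the sketch compiles
// on its own. In a real job you'd implement the framework's versions.
interface ItemProcessor<I, O> { O process(I item); }
interface ItemWriter<T> { void write(List<? extends T> items); }

// Wrapper returned by the processor: the updated A plus the derived B, C, D.
// String fields are placeholders for your real entity classes.
class AbcdBundle {
    final String a, b, c, d;
    AbcdBundle(String a, String b, String c, String d) {
        this.a = a; this.b = b; this.c = c; this.d = d;
    }
}

// Processor: builds B, C and D from A, updates A, and wraps all four.
class AbcdProcessor implements ItemProcessor<String, AbcdBundle> {
    public AbcdBundle process(String a) {
        return new AbcdBundle(a + "-updated", "B:" + a, "C:" + a, "D:" + a);
    }
}

// Writer that unwraps each bundle and delegates every type to its own writer.
// In a real step these delegates would be four JpaItemWriters, and the step's
// chunk transaction would roll all four back together on failure.
class AbcdWriter implements ItemWriter<AbcdBundle> {
    private final ItemWriter<String> aW, bW, cW, dW;
    AbcdWriter(ItemWriter<String> aW, ItemWriter<String> bW,
               ItemWriter<String> cW, ItemWriter<String> dW) {
        this.aW = aW; this.bW = bW; this.cW = cW; this.dW = dW;
    }
    public void write(List<? extends AbcdBundle> bundles) {
        List<String> as = new ArrayList<>(), bs = new ArrayList<>();
        List<String> cs = new ArrayList<>(), ds = new ArrayList<>();
        for (AbcdBundle x : bundles) {
            as.add(x.a); bs.add(x.b); cs.add(x.c); ds.add(x.d);
        }
        aW.write(as); bW.write(bs); cW.write(cs); dW.write(ds);
    }
}
```

The same unwrapping could also be expressed with the framework's CompositeItemWriter plus per-type adapter writers; the hand-rolled delegate above just makes the flow explicit.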