I would like to create a JSON path for every node in the tree.
The goal of every path is to store all the ancestors of the current node.
I've already found a solution to a similar problem in this question:
Nested leaf node.
However, it only works for leaf nodes, and I need a query for every single node in the tree.
Looking for a solution in pgsql.
I did try to solve it on my own using the query mentioned above, but could not figure it out.
Glad for any help.
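For reference, a rough sketch of the usual recursive-CTE approach (the table and column names here are assumptions, not yours; shown with SQLite so it is runnable as-is, but Postgres accepts the same `WITH RECURSIVE` syntax):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE node (id INTEGER PRIMARY KEY, parent_id INTEGER, name TEXT);
INSERT INTO node VALUES (1, NULL, 'root'), (2, 1, 'a'), (3, 2, 'b'), (4, 2, 'c');
""")

# Walk down from the roots, accumulating the ancestor path for EVERY
# node (not just the leaves) -- each emitted row carries its own path.
rows = conn.execute("""
WITH RECURSIVE tree(id, name, path) AS (
    SELECT id, name, name FROM node WHERE parent_id IS NULL
    UNION ALL
    SELECT n.id, n.name, t.path || '/' || n.name
    FROM node n JOIN tree t ON n.parent_id = t.id
)
SELECT id, name, path FROM tree ORDER BY id
""").fetchall()

for r in rows:
    print(r)
```

In PostgreSQL the same CTE works unchanged; to get a JSON array of ancestors instead of a delimited string, you could select `to_jsonb(string_to_array(path, '/'))` in the final query.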
My apologies in advance if this question has already been asked; if so, I could not find it.
So, I have this huge database divided by country, where I need to import each country's database individually and then, in Power Query, append the queries as one.
When I imported the US files, Power Query automatically generated a Transform File folder with 4 helper queries:
Then I just duplicated the query US - Sales and named it UK - Sales, pointing it to the UK sales folder:
The Transform File folder didn't duplicate, though.
Everything seems to be working just fine right now; however, I'd like to know if this could be a problem in the near future, because I still have several countries to go. Should I manually import new queries as new connections instead of just duplicating them, or does it just not matter?
Many thanks!
The Transform File folder group contains the code that is called to transform a list of files. It is reusable code. You can see the Sample File, which serves as the template for the transform actions.
As long as the file used as the Sample File has the same structure as the files that you are feeding into the command, you can use any query with any list of files.
One thing you need to make sure of is that the Sample File is not removed from your data source. You may want to create a new dummy file just for that purpose, make sure it won't be deleted, and then point the Sample File query to pull just that file.
The Transform helper queries are special queries: you may edit them, but you cannot delete them and recreate your own manually. They are automatically created by Power Query when combining a list of contents and are inherently linked to the parent query.
That is to say, you cannot replicate them; you must use the Combine function provided by Power Query to create the helper queries.
You may, however, avoid duplicating the queries: replicate your steps in the parent query and use a table union to join the file lists before combining the contents with the same helper queries.
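As a rough M sketch of that last suggestion (the folder paths and the generated function name `#"Transform File"` are placeholders; match them to what Power Query generated for you), the parent query could union the file lists and invoke the one set of helper queries:

```m
let
    // One Folder.Files step per country (paths are placeholders)
    USFiles  = Folder.Files("C:\Sales\US"),
    UKFiles  = Folder.Files("C:\Sales\UK"),
    // Union the file lists first...
    AllFiles = Table.Combine({USFiles, UKFiles}),
    // ...then combine once, through the single generated helper function
    WithData = Table.AddColumn(AllFiles, "Data",
                   each #"Transform File"([Content]))
in
    WithData
```

This way every country shares the same Sample File and helper queries instead of each duplicate carrying its own.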
I have a project that has iteration paths set up like this: Project-Name/sub-project/year/quarter/sprint
I want to get the list of sub-projects in each project (or a list of all iteration paths, and I can clean the data myself). Is this possible using the REST API? If not, are there any other ways to achieve this?
Thank you.
You can use the classification nodes API to get the iteration tree.
The following will give you a nested list of iterations with the children depth set to 1 (which should contain your sub-projects as the last level). See here for an example of the returned data.
GET https://dev.azure.com/fabrikam/Fabrikam-Fiber-Git/_apis/wit/classificationnodes/Iterations?api-version=6.0&depth=1
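As a sketch of consuming that response (the JSON below is a trimmed, hypothetical example of the shape the API returns: each node carries a name and an optional children array):

```python
def names_at_depth(node, depth):
    """Collect the names of all nodes exactly `depth` levels below `node`."""
    if depth == 0:
        return [node["name"]]
    names = []
    for child in node.get("children", []):
        names.extend(names_at_depth(child, depth - 1))
    return names

# Trimmed, hypothetical response from .../classificationnodes/Iterations?depth=1
response = {
    "name": "Project-Name",
    "structureType": "iteration",
    "children": [
        {"name": "sub-project-A"},
        {"name": "sub-project-B"},
    ],
}

print(names_at_depth(response, 1))  # the sub-projects
```

Raising the `depth` query parameter and the `depth` argument together would let you pull the deeper year/quarter/sprint levels the same way.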
Problem
We have a graph in which locations are connected by services. Services that have common key-values are connected by service paths. We want to find all the service paths we can use to get from Location A to Location Z.
The following query matches services that go directly from A to Z and service paths that take one hop to get from A to Z:
MATCH p=(origin:location)-[:route]->(:service)-[:service_path*0..1]->
(:service)-[:route]->(destination:location)
WHERE origin.name='A' AND destination.name='Z'
RETURN p;
and runs fine.
But if we expand the search to service paths that may take two hops between A and Z:
MATCH p=(origin:location)-[:route]->(:service)-[:service_path*0..2]->
(:service)-[:route]->(destination:location)
WHERE origin.name='A' AND destination.name='Z'
RETURN p;
the query runs forever.
It never times out or crashes the server - it just runs continuously.
However, if we make the variable-length part of the query bidirectional:
MATCH p=(origin:location)-[:route]->(:service)-[:service_path*0..2]-
(:service)-[:route]->(destination:location)
WHERE origin.name='A' AND destination.name='Z'
RETURN p;
The same query that ran forever now executes instantaneously (~30ms on a dev database with default Postgres config).
More info
Behavior in different versions
This problem occurs in AgensGraph 2.2dev (cloned from GitHub). In Agens 2.1.0, the first query -[:service_path*0..1]-> still works fine, but both the broken query -[:service_path*0..2]-> AND the version that works in 2.2dev, -[:service_path*0..2]-, result in an error:
ERROR: btree index keys must be ordered by attribute
This leads us to believe that the problem is related to this commit, which was included as a bug fix in Agens 2.1.1:
fix: Make re-scanning inner scan work
VLE threw "btree index keys must be ordered by attribute" because
re-scanning inner scan is not properly done in some cases. A regression
test is added to check it.
Duplicate paths returned, endlessly
In AgensBrowser v1.0, we are able to get results out by putting LIMIT at the end of the broken query. The query always returns the maximum number of rows, but the resulting graph is very sparse. Direct paths and paths with one hop show up, but only one longer path appears.
In the result set, the shorter paths are returned in individual records as expected, but the first occurrence of a two-hop path is duplicated for the rest of the rows.
If we return some collection along with the paths, like RETURN p, nodes(p) LIMIT 100, the query again runs infinitely.
(Interestingly, this row duplication also occurs for a slightly different query in which we used the bidirectional fix, but there the entire expected graph was returned. This may deserve its own post.)
The EXPLAIN plans are identical for the one-hop and two-hop queries
We could not compare EXPLAIN ANALYZE output (because you can't ANALYZE a query that never finishes running), but the query plans were exactly identical between all of the queries - those that ran and those that didn't.
Increased logging revealed nothing
We set the logging level for Postgres to DEBUG5, the highest level, and ran the infinitely-running query. The logs showed nothing amiss.
Is this a bug or a problem with our data model?
My goal is, given a CuratorFramework that is decorated with a path to a root node, and a String, to watch for events two levels down any path to that String.
More specifically, I would like to watch for events on any path ROOT/<anything here>/INPUT_STRING. I also need to watch for nodes being added in the middle layer, but I am not interested in the contents of those middle nodes (only that they appeared, so I can watch for a child to be created for the INPUT_STRING).
My idea was to create a NodeCache for each path to ROOT/<added middle node>/INPUT_STRING whenever a middle node is added. I thought I could then watch for middle nodes being added using a PathChildrenCache, but that seems like overkill since I'm not interested in the contents of the middle nodes.
Is there some better way to create a NodeCache for the INPUT_STRING two levels down? Or should I be using a PathChildrenCache, even though I don't care about the contents of the middle nodes?
You can use a TreeCache to cache/watch/listen to a tree of ZNodes. I believe that will do what you need: http://curator.apache.org/curator-recipes/tree-cache.html
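A TreeCache fires events for every node under its root, so inside the listener you would simply filter for paths of the form ROOT/&lt;middle&gt;/INPUT_STRING and ignore everything else. A minimal sketch of that filter (the class and method names are mine, not Curator's):

```java
public class TwoLevelFilter {
    // True when `path` is exactly two levels below `root` and its last
    // segment equals `input`, i.e. root/<anything>/input.
    static boolean matches(String path, String root, String input) {
        if (!path.startsWith(root + "/")) {
            return false;
        }
        String[] parts = path.substring(root.length() + 1).split("/");
        return parts.length == 2 && parts[1].equals(input);
    }

    public static void main(String[] args) {
        System.out.println(matches("/root/mid/config", "/root", "config")); // true
        System.out.println(matches("/root/config", "/root", "config"));     // false
    }
}
```

In the TreeCacheListener you would call this on event.getData().getPath(); NODE_ADDED events for the middle nodes still arrive, but you just don't act on them, so there is no need to manage per-path NodeCaches yourself.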
I managed to download a planet file from OSM and converted it to o5m format with osmconvert, and additionally deleted all of the author information from it to keep the file size smaller. I am trying to fetch every POI from this database for the whole world, so I am not interested in cities, towns, highways, ways, etc., only amenities.
First I tried to achieve this by using osmosis, which, as it appears, does what I want, only it always runs out of memory because the file is too huge to process. (I could split the file up into smaller ones, but I would like to avoid this if possible.)
I then experimented with osmfilter; there I managed to filter for every node which has a tag named amenity, but I have several problems which I can't solve:
a. if I use the following command:
osmfilter planet.o5m -v --keep-tags="amenity=" -o=amenities.osm
It keeps all nodes and filters out every tag which doesn't have amenity in its name.
b. if I use this command:
osmfilter planet.o5m -v --keep-tags="all amenity=" -o=amenities.osm
It now filters out all nodes which don't have the amenity tag, but it also filters out all additional tags from the matching nodes, which contain information that I need (for example, the name of the POI, or a description).
c. if I use this command:
osmfilter planet.o5m -v --keep-tags="all amenity= all name=" -o=amenities.osm
This keeps every node which has EITHER name or amenity in its tags, which leaves me with several named cities or highways (data that I don't need).
I also tried separating these with an AND operator, but it says that I can't use an AND operator when filtering for tags. Any idea how I could achieve the desired result?
End note: I am running a Windows 7 system, so no Linux-based program would help me. :|
Please try the --keep= option instead of the --keep-tags option. The latter has no influence on which objects will be kept in the file.
For example:
osmfilter planet.o5m --keep="amenity= and name=" -o=amenities.osm
will keep just those objects which have an amenity tag AND a name tag.
Be aware that all dependent objects will also be in the output file. For example, if there is a way object with the requested tags, each node of this way will be in the output file as well. The same is valid for relations and their dependent ways and nodes.
If you do not want this kind of behaviour, please add --ignore-dependencies.
Some more information can be found here: https://wiki.openstreetmap.org/wiki/Osmfilter
osmfilter inputfile.osm --keep-nodes="amenity=" --keep-tags="all amenity= name=" --ignore-dependencies -o=outputfile.osm
This is exactly what you are looking for:
keep nodes with the tag amenity (--keep-nodes)
keep only info about amenities and names (--keep-tags)