Can SPARQL be used to find subjects having identical objects for a given predicate? Consider a class Variable with a data property
Variable --hasvalue--> Integer. If there are five instances such as
a ----hasvalue------> 2
b ----hasvalue------> 1
c ----hasvalue------> 2
d ----hasvalue------> 0
e ----hasvalue------> 1
how can I extract that a and c have the same value, while b and e have the same value? The GROUP BY option works for grouping as above, but is it possible to extract the subjects corresponding to each grouped object?
It's always easier to write example code if you provide sample data to work with. Please provide sample data in the future. Your suggested data looks like this:
@prefix : <urn:ex:> .
:a :hasValue 2 .
:b :hasValue 1 .
:c :hasValue 2 .
:d :hasValue 0 .
:e :hasValue 1 .
You can use a query with group by and group_concat to concatenate the variables together for each distinct value:
prefix : <urn:ex:>
select ?value (group_concat(?variable) as ?variables) {
?variable :hasValue ?value
}
group by ?value
-------------------------------
| value | variables |
===============================
| 2 | "urn:ex:c urn:ex:a" |
| 1 | "urn:ex:e urn:ex:b" |
| 0 | "urn:ex:d" |
-------------------------------
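If you only want the values shared by more than one subject (so that :d drops out), a HAVING clause on top of the same grouping should do it; a minimal variation of the query above:
prefix : <urn:ex:>
select ?value (group_concat(?variable) as ?variables) {
  ?variable :hasValue ?value
}
group by ?value
having (count(?variable) > 1)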
My table looks something like this:
id | data
1 | A=1000 B=2000
2 | A=200 C=300
In kdb, is there a way to normalize the data so that the final table is as follows:
id | data.1 | data.2
1 | A | 1000
1 | B | 2000
2 | A | 200
2 | C | 300
One option would be to make use of 0: and its key-value parsing functionality, documented here: https://code.kx.com/q/ref/file-text/#key-value-pairs e.g.
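(This assumes the sample table t as defined in the answer below; per the linked docs, in the parse string "S= ", S declares symbol keys, = is the key-value separator and the space is the pair separator.)
q)t:([]id:1 2;data:("A=1000 B=2000";"A=200 C=300"))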
q)ungroup delete data from {x,`data1`data2!"S= "0:x`data}'[t]
id data1 data2
---------------
1 A "1000"
1 B "2000"
2 A "200"
2 C "300"
Assuming you want data2 to be of long datatype (j), you can do:
update "J"$data2 from ungroup delete data from {x,`data1`data2!"S= "0:x`data}'[t]
You could use a combination of vs (vector from scalar), each-both ' and ungroup:
q)t:([]id:1 2;data:("A=1000 B=2000";"A=200 C=300"))
q)t
id data
------------------
1 "A=1000 B=2000"
2 "A=200 C=300"
q)select id, dataA:`$data[;0], dataB:"J"$data[;1] from
ungroup update data: "=" vs '' " " vs ' data from t
id dataA dataB
--------------
1 A 1000
1 B 2000
2 A 200
2 C 300
I wouldn't recommend naming the columns with a ., e.g. data.1.
I've got a requirement to build a list report to show volume by 3 grouped-by columns. The issue I'm having is that if nothing happened on specific days for the specific grouped columns, I can't force it to show 0.
What I'm currently getting is something like:
ABC | AA | 01/11/2017 | 1
ABC | AA | 03/11/2017 | 2
ABC | AA | 05/11/2017 | 1
what i need is:
ABC | AA | 01/11/2017 | 1
ABC | AA | 02/11/2017 | 0
ABC | AA | 03/11/2017 | 2
ABC | AA | 04/11/2017 | 0
ABC | AA | 05/11/2017 | 1
I've tried going down the route of unioning a "dummy" query with no query filters; however, there are days where nothing has happened at all for those first 2 columns, so it doesn't always populate.
Hope that makes sense; any help would be greatly appreciated!
To anyone who wanted an answer: I figured it out. Query 1 is for just the dates; as there will always be some form of event happening daily, it will always give a complete date range.
Query 2 is for the other 2 grouped-by columns.
Create a data item in each with "1" as the result (it would work with anything, as long as they are the same).
Then left join Query 1 to Query 2 on this new data item.
This gives a full combination of all 3 columns needed. The resulting "Query 3" can then be left joined again to get the measures. The final query (depending on aggregation) may need to have the measure data item wrapped in COALESCE/ISNULL to produce a 0 on those days when nothing happened.
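In plain SQL terms, the same pattern looks roughly like this (the constant-"1" join is just Cognos's way of expressing a cross join; facts, event_date, col1, col2 and volume are hypothetical names):
-- Build the full date x group grid first, then left-join the measures.
SELECT g.col1,
       g.col2,
       d.event_date,
       COALESCE(SUM(f.volume), 0) AS volume   -- 0 on days nothing happened
FROM  (SELECT DISTINCT event_date FROM facts) d      -- Query 1: all dates
CROSS JOIN
      (SELECT DISTINCT col1, col2 FROM facts) g      -- Query 2: group combos
LEFT JOIN facts f
       ON f.event_date = d.event_date
      AND f.col1 = g.col1
      AND f.col2 = g.col2
GROUP BY g.col1, g.col2, d.event_date;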
I have a table1 containing a column A, where ~100,000 strings (varchar) are stored. Each string has multiple words which are separated by spaces. Further, the strings have different lengths, i.e. one string may consist of 3 words while another contains 7.
Then I have a column B stored in a second table2, which contains only 100 strings in the same format: multiple words per string, separated by spaces.
The goal is to determine how likely a record of column B matches (possibly multiple) records of column A, based on the words. The result should also include a ranking. I was thinking of using full text search in a loop, but I don't know how to do this, or whether there is a better way to achieve it.
I don't know if you can "turn" the table into a dictionary to use full text search for ranking here. But you can query it with some primitive ranking quite easily, e.g.:
t=# with a(a) as (values('a b c'),('a c d'),('b e f'),('r b t'),('q w'))
, b(i,b) as (values(1,'a b'), (2,'e'), (3,'b'))
, p as (select unnest(string_to_array(b.b,' ')) arr,i from b)
select a phrases,arr match_words,count(1) over (partition by arr) words_in_matches, count(1) over (partition by i) matches,i from a left join p on a.a like '%'||arr||'%';
phrases | match_words | words_in_matches | matches | i
---------+-------------+------------------+---------+---
r b t | b | 6 | 5 | 1
a b c | b | 6 | 5 | 1
b e f | b | 6 | 5 | 1
a b c | a | 2 | 5 | 1
a c d | a | 2 | 5 | 1
b e f | e | 1 | 1 | 2
r b t | b | 6 | 3 | 3
a b c | b | 6 | 3 | 3
b e f | b | 6 | 3 | 3
q w | | 1 | 1 |
(10 rows)
phrases are rows from your big table.
match_words are the tokens from your small table (split on spaces).
words_in_matches is the number of matches each token produced.
matches is the number of big-table phrases matched by each small-table phrase.
i is the index of the phrase from the small table.
So you can order by the third or fourth column to get some sort of ranking...
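If you do want real full-text ranking rather than LIKE matching, a minimal sketch using to_tsvector/ts_rank could look like this (assuming the columns are table1.a and table2.b, that the words contain no tsquery special characters, and using the 'simple' configuration to avoid stemming):
SELECT t2.b,
       t1.a,
       -- rank each A string against the OR-ed words of the B string
       ts_rank(to_tsvector('simple', t1.a),
               to_tsquery('simple', replace(trim(t2.b), ' ', ' | '))) AS rank
FROM table2 t2
JOIN table1 t1
  ON to_tsvector('simple', t1.a) @@
     to_tsquery('simple', replace(trim(t2.b), ' ', ' | '))
ORDER BY t2.b, rank DESC;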
If my question is a bit obscure, here is what I mean: we can aggregate one column over multiple rows using array_agg. For instance, I have this table:
foo | bar | baz
-------+-------+-------
1 | 10 | 20
1 | 12 | 23
1 | 15 | 26
1 | 16 | 21
If I invoke this query:
select
foo,
array_agg(bar) as bars
from table
group by (foo)
resulting in:
foo | bars
-------+----------------
1 | {10,12,15,16}
What would be the query to get this table (using bar and baz)?
foo | barbazs
-------+------------------------------------
1 | {{10,20},{12,23},{15,26},{16,21}}
I checked functions-aggregate (postgresql.org), but there doesn't seem to be any function with that effect, or am I missing something?
array_agg accepts arrays as input values as well.
We just need a way to build an array from the two input columns bar and baz, which can be done using the ARRAY constructor:
SELECT foo, array_agg(ARRAY[bar, baz]) as barbaz FROM table GROUP BY foo;
foo | barbaz
-----+-----------------------------------
1 | {{10,20},{12,23},{15,26},{16,21}}
Note : It also works with DISTINCT (...array_agg(distinct array[bar,baz])...)
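As a self-contained check with the sample data from the question (the ORDER BY inside the aggregate pins down the row order, which is otherwise unspecified):
WITH t(foo, bar, baz) AS (
  VALUES (1, 10, 20), (1, 12, 23), (1, 15, 26), (1, 16, 21)
)
SELECT foo, array_agg(ARRAY[bar, baz] ORDER BY bar) AS barbazs
FROM t
GROUP BY foo;
-- => 1 | {{10,20},{12,23},{15,26},{16,21}}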
I need a PostgreSQL query that returns the count of every combination of records.
For example, I have a table T with columns A, B, C, D, E and other columns that are not of importance:
Table T
--------------
A | B | C | D | E
The query should return a table R with the values from columns A, B, C, D, and a count for how many times each configuration occurs with the specified E value.
Table R
---------------
A | B | C | D | count
When all of the counts for each record are added together, it should equal the total number of records in the original table.
It seems like a very simple problem, but due to my lack of SQL knowledge, I cannot figure out how to do this.
The only solution I can think of is this:
select a, b, c, d, count(*)
from T
where e = 'abc'
group by a, b, c, d
But when adding up the counts from this query, the total is way more than the count of the original table. It seems like count(*) shouldn't be used, or I'm just going about this the wrong way. I'd really appreciate any advice on how I should go about this. Thank you all.
NULL values couldn't possibly fool you. Consider this demo:
WITH t(a,b,c,d) AS (
VALUES
(1,2,3,4)
,(1,2,3,NULL)
,(2,2,3,NULL)
,(2,2,3,NULL)
,(2,2,3,4)
,(2,NULL,NULL,NULL)
,(NULL,NULL,NULL,NULL)
)
SELECT a, b, c, d, count(*)
FROM t
GROUP BY a, b, c, d
ORDER BY a, b, c, d;
a | b | c | d | count
---+---+---+---+-------
1 | 2 | 3 | 4 | 1
1 | 2 | 3 | | 1
2 | 2 | 3 | 4 | 1
2 | 2 | 3 | | 2
2 | | | | 1
| | | | 1
There must be some other misunderstanding here.
I figured it out; it was something really stupid. I forgot to specify the WHERE e = 'abc' clause in the SELECT count(*) when comparing the counts. Thanks anyway for your help, guys!
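For the record, the sanity check only works when both sides use the same filter; a minimal sketch against the table T from the question:
-- sum of the grouped counts ...
SELECT sum(cnt)
FROM (
  SELECT a, b, c, d, count(*) AS cnt
  FROM T
  WHERE e = 'abc'
  GROUP BY a, b, c, d
) grouped;
-- ... must equal the total filtered the same way, not the unfiltered count:
SELECT count(*) FROM T WHERE e = 'abc';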