KDB+ / Q: How to create a namespace within a function

I tried to build a function that takes in the name of a namespace, a list of keys, and a list of values. However, the function is unable to read the keys and values into the namespace.
For example:
q) makens:{[ns,keylist,valuelist] temp_ns: x.keylist.valuelist; set[.ns;temp_ns]}
q) makens:[`hello;`age`gender;`10 "M"]
I'm expecting my output to be:
q) .hello
      | ::
age   | `10
gender| "M"
Instead, the namespace isn't created, and when I use set at the end it doesn't treat .ns as the namespace name, i.e.:
q) makens:[`hello;`age`gender;`10 "M"]
q) .hello
'.hello
[3] .hello
^
q) .ns
`10`M

You can use:
q)makens:{[ns;keylist;valuelist](` sv `,ns) upsert ((`,keylist)!(::),valuelist)};
q)makens[`hello;`age`gender;`10,"M"]
`.hello
q).hello
      | ::
age   | `10
gender| "M"

direct answer to sparql select query of equivalent class for graphdb?

I have an "EquivalentTo" definition in Protege of a class EquivClass as (hasObjProp some ClassA) and (has_data_prop exactly 1 rdfs:Literal)
Is there a form of SPARQL query for GraphDB 9.4 to get the "direct" answer to a select query of an equivalent class without having to collect and traverse all the constituent blank nodes explicitly? Basically, I'm looking for a short cut. I'm not looking to get instances of the equivalent class, just the class definition itself in one go. I've tried to search for answers, but I'm not really clear on what possibly related answers are saying.
I'd like to get something akin to
(hasObjProp some ClassA) and (has_data_prop exactly 1 rdfs:Literal)
as an answer to the SELECT query on EquivClass. If the answer is "not possible", that's enough. I can write the blank node traversal with the necessary properties myself.
Thanks!!
Files are -
Ontology imported into GraphDB: tester.owl - https://pastebin.com/92K7dKRZ
SELECT of all triples from GraphDB *excluding* inferred triples: tester-graphdb-sparql-select-all-excl-inferred.tsv - https://pastebin.com/fYdG37v5
SELECT of all triples from GraphDB *including* inferred triples: tester-graphdb-sparql-select-all-incl-inferred.tsv - https://pastebin.com/vvqPH1FZ
Added a sample query in response to #UninformedUser. I use "select *" as an example, but really I'm interested in the "end results", i.e. ?fp, ?fo, ?rop, ?roo. Essentially, I'm looking for something simpler and more succinct than what I have below. The example I posted only has a single intersection ("and" clause); in my real-world set there are multiple equivalent classes with different numbers of "and" clauses.
PREFIX owl: <http://www.w3.org/2002/07/owl#>
PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
PREFIX : <http://www.semanticweb.org/ontologies/2020/9/tester#>
select * where {
:EquivClass owl:equivalentClass ?bneq .
?bneq ?p ?bnhead .
?bnhead rdf:first ?first .
?first ?fp ?fo .
?bn3 rdf:rest ?rest .
?rest ?rp ?ro .
?ro ?rop ?roo .
filter(?bn3 != owl:Class && ?ro!=rdf:nil)
}
You can unroll the list using a property path:
prefix : <http://www.semanticweb.org/ontologies/2020/9/tester#>
prefix owl: <http://www.w3.org/2002/07/owl#>
prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
prefix xsd: <http://www.w3.org/2001/XMLSchema#>
prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#>
select * {
:EquivClass owl:equivalentClass/owl:intersectionOf/rdf:rest*/rdf:first ?restr.
?restr a owl:Restriction .
optional {?restr owl:onProperty ?prop}
optional {?restr owl:cardinality ?cardinality}
optional {?restr owl:someValuesFrom ?class}
}
This returns:
|   | restr   | prop           | cardinality                 | class   |
| 1 | _:node3 | :hasObjProp    |                             | :ClassA |
| 2 | _:node5 | :has_data_prop | "1"^^xsd:nonNegativeInteger |         |
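Since the real data has multiple equivalent classes with differing numbers of "and" clauses, the same property path should generalise by binding the class as a variable instead of naming :EquivClass directly (an untested sketch, using the same prefixes as above):
select * {
?cls owl:equivalentClass/owl:intersectionOf/rdf:rest*/rdf:first ?restr .
?restr a owl:Restriction .
optional {?restr owl:onProperty ?prop}
optional {?restr owl:cardinality ?cardinality}
optional {?restr owl:someValuesFrom ?class}
}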

What is the most efficient way to extract the last part of a split string in PostgreSQL?

I want to extract the subdomain of a fully qualified domain up to the second level in a PostgreSQL function.
At the moment I have the following snippet which works, but I'm not sure if this is the most efficient way of doing it:
subdomains := left(query, length(query) - length(tld));
RETURN reverse(split_part(reverse(subdomains), '.', 1)) || tld;
It is guaranteed that the query ends with the tld substring.
Examples:
+---------------------+---------+---------------+
| query | tld | output |
+---------------------+---------+---------------+
| abc.example.com | .com | example.com |
| x.y.z.example.co.uk | .co.uk | example.co.uk |
| zzz.123.yyy.com.br | .com.br | yyy.com.br |
+---------------------+---------+---------------+
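Putting that snippet together with the first row of the table as a quick sanity check (here subdomains is 'abc.example', i.e. the result of the left() call):
select reverse(split_part(reverse('abc.example'), '.', 1)) || '.com';  -- example.com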
This one is not horribly efficient either, but at least it does not reverse twice. I'd guess that array_length is cheap and string_to_array is roughly as expensive as split_part; that may be wrong, but it's worth trying.
sd_arr := string_to_array(subdomains, '.');
RETURN sd_arr[array_length(sd_arr , 1)] || tld;
Somewhat better w/o variable assignment:
RETURN (select arr[array_length(arr,1)] from (select string_to_array(subdomains, '.') as arr) t) || tld;
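For example, with the 'abc.example.com' / '.com' row from the question (so subdomains is 'abc.example'), this evaluates as follows (a quick check, not a benchmark):
select (select arr[array_length(arr, 1)] from (select string_to_array('abc.example', '.') as arr) t) || '.com';  -- example.com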
Not sure if this is more efficient, but you can compare it to your implementation:
create or replace function get_domain(p_input text, p_tld text)
returns text
as
$$
declare
l_tld text[];
l_items text[];
begin
l_tld := string_to_array(trim('.' from p_tld), '.');
l_items := string_to_array(trim('.' from p_input), '.');
return array_to_string(l_items[cardinality(l_items) - cardinality(l_tld):], '.');
end
$$
language plpgsql
immutable;
It essentially converts the input and the top-level domain into arrays (stripping off any leading . to avoid empty array elements).
It then calculates the starting element to be returned by subtracting the length (=number of elements) of the tld from the length of the input. So for the input x.y.z.example.co.uk this is 6 - 2, which means it returns everything starting with the 4th element, which is then converted back to a "dotted" notation.
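For example, calling it with the inputs from the question's table:
select get_domain('abc.example.com', '.com');        -- example.com
select get_domain('x.y.z.example.co.uk', '.co.uk');  -- example.co.uk
select get_domain('zzz.123.yyy.com.br', '.com.br');  -- yyy.com.br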

Syntax error raised when trying to update a value with XOR in PostgreSQL

I'm trying to update a column of a salary table in my Postgres database. The script I'm trying to use is:
UPDATE "LC".sex
SET sex = CHAR(ASCII('f') ^ ASCII('m') ^ ASCII(sex));
As this worked in MySQL. However, I got a syntax error:
ERROR: syntax error at or near "ASCII"
LINE 2: SET sex = CHAR(ASCII('f') ^ ASCII('m') ^ ASCII(sex));
I tried to dig around and tried my luck with the CHR() function, and then got this:
function chr(double precision) does not exist
I've nearly given up until I tried this:
SELECT CHAR(ASCII('f') ^ ASCII('m') ^ ASCII('f'));
And that gave me the same syntax error; however, SELECT CHAR(ASCII('f') ^ ASCII('m'); does work in Postgres. So I'm critically stumped. What am I doing wrong?
Thanks.
The ^ operator is for exponentiation in PostgreSQL; you want # for bitwise XOR. See Mathematical Operators in the fine manual for details.
So you could say:
update "LC".sex
set sex = chr(ascii('f') # ascii('m') # ascii(sex));
However, I'm a little curious about what you're trying to accomplish with all the bit wrangling. If sex is 'f' then you get 'm'; if sex is 'm' then you get 'f'; if sex is null you get null; if sex is anything else then you get nonsense:
=> select sex, chr(ascii('f') # ascii('m') # ascii(sex))
from (values ('f'), ('m'), ('F'), ('M'), (null), ('X'), ('y')) t(sex);
sex | chr
-----+-----
f | m
m | f
F | M
M | F
|
X | S
y | r
(7 rows)
If you just want to flip the sexes then why not say so:
update "LC".sex
set sex = case lower(sex) when 'f' then 'm' when 'm' then 'f' else null end;
A minor modification will preserve case if that's an issue. This converts anything other than 'f', 'F', 'm', and 'M' to null, but presumably that's not a problem.
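One possible version of that case-preserving modification (just a sketch; anything other than 'f', 'F', 'm', 'M' still becomes null because the CASE has no ELSE):
update "LC".sex
set sex = case sex when 'f' then 'm' when 'm' then 'f' when 'F' then 'M' when 'M' then 'F' end;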

Postgres json select not ignoring quotes

I have the following table and setup
create table test (
id serial primary key,
name text not null,
meta json
);
insert into test (name, meta) values ('demo1', '{"name" : "Hello"}');
However, when I run this query, this is the result
select * from test;
id | name | meta
----+-------+--------------------
1 | demo1 | {"name" : "Hello"}
(1 row)
but
select * from test where meta->'name' = 'Hello';
ERROR: operator does not exist: json = unknown
LINE 1: select * from test where meta->'name' = 'Hello';
^
HINT: No operator matches the given name and argument type(s). You might need to add explicit type casts.
select * from test where cast(meta->'name' as text) = 'Hello';
id | name | meta
----+------+------
(0 rows)
and this works
select * from test where cast(meta->'name' as text) = '"Hello"';
id | name | meta
----+-------+--------------------
1 | demo1 | {"name" : "Hello"}
(1 row)
Can anyone tell me what the relevance of this quote is and why it's not doing a simple string search/comparison? Alternatively, does this have something to do with the casting?
That's because -> returns the JSON field as json (quotes and all), not as a plain value, so you need a cast to tell PostgreSQL which data type you are after.
To run the query the way you want, use ->> instead, which returns the JSON element as text; see JSON Functions and Operators in the docs.
So your query should look like:
select *
from test
where meta->>'name' = 'Hello';
See it working here: http://sqlfiddle.com/#!15/bf866/8
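To see the difference between the two operators on your row (-> keeps the JSON quoting, ->> strips it), something like:
select meta->'name' as arrow, meta->>'name' as double_arrow from test;
 arrow   | double_arrow
---------+--------------
 "Hello" | Hello
(1 row)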

Using wildcards in column value in LIKE condition

I have a situation where I need to find an entity matching a given filename. The filename is in this form:
filename1 = "ABCD_126518.pdf";
filename2 = "XYZ_32162.pdf";
In the Oracle DB, I have entities with filename_patterns like the following:
ID | filename_pattern
1 | ABCD_
2 | KLM
3 | XYZ_
I need to find the pattern ID that the given filename matches. In the given example it should be ID = 1 for filename1 and ID = 3 for filename2. What should the query look like in Java for the named query?
I need something like
SELECT p FROM FilenamePattern p WHERE p.filename_pattern || "%" LIKE :param;
We use Oracle DB and JPA 1.0.
How about,
SELECT p FROM FilenamePattern p WHERE :param LIKE CONCAT(p.filename_pattern, '%')
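The same idea in plain Oracle SQL, assuming a hypothetical table filename_patterns holding the rows shown above (just to illustrate the reversed LIKE):
-- :fname is the filename being matched, e.g. 'ABCD_126518.pdf'
SELECT id
FROM filename_patterns
WHERE :fname LIKE filename_pattern || '%';
Note that _ is itself a single-character wildcard in LIKE, which still matches here because the filenames contain a literal underscore in that position.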