I have an OrientDB document database. I executed the following commands via Studio:
DROP CLASS student;
DROP CLASS studyCourse;
CREATE CLASS student;
CREATE CLASS studyCourse;
CREATE PROPERTY student.Id INTEGER;
CREATE INDEX Student.Id UNIQUE;
CREATE PROPERTY student.surname STRING;
CREATE PROPERTY student.FK_studyCourse_abbreviation STRING;
CREATE PROPERTY studyCourse.abbreviation STRING;
CREATE INDEX studyCourse.abbreviation UNIQUE;
CREATE PROPERTY studyCourse.name STRING;
CREATE LINK student TYPE LINKSET FROM student.FK_studyCourse_abbreviation TO studyCourse.abbreviation INVERSE;
INSERT INTO studyCourse SET abbreviation = 'Inf', name = 'informatics';
INSERT INTO student SET Id = '11111', surname = 'Miller';
UPDATE studyCourse ADD student = (SELECT FROM student WHERE Id = '11111') WHERE abbreviation = 'Inf';
Now I want to select values as described in the manual ( http://orientdb.com/docs/2.1/SQL.html ):
SELECT * FROM studyCourse WHERE student.surname = 'Miller';
There are no records found.
Try using contains instead of =:
SELECT FROM studyCourse WHERE student.surname contains 'Miller'
This is working for me:
----+-----+-----------+------------+-----------+-------
# |#RID |#CLASS |abbreviation|name |student
----+-----+-----------+------------+-----------+-------
0 |#14:0|studyCourse|Inf |informatics|[1]
----+-----+-----------+------------+-----------+-------
Ivan
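Since student is a LINKSET here, the plain equality in the question compares the projected collection of surnames against a single string and never matches, while contains evaluates the condition per linked record. An equivalent form using a sub-condition (a sketch against the schema above, untested) would be:
SELECT FROM studyCourse WHERE student CONTAINS (surname = 'Miller')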
I want all my objects to have a unique id that is set by PostgreSQL (a serial column) and another id that depends on the first one.
When creating an object, if I set the second id after saving it, I get an INSERT and then an UPDATE on the table, which is not ideal.
So, to have only one INSERT, I fetch the id from the PostgreSQL sequence and set the id with it instead of letting PostgreSQL assign it at INSERT time.
I'm pretty new to SQLAlchemy and want to be sure that this way of doing things is race-condition proof.
Thanks for your thoughts on this idea.
class MyModel:
    def __init__(self, session, **data):
        """
        Base constructor for almost all model classes, performing common tasks
        """
        cls = type(self)
        if session:
            """To avoid having an UPDATE right after the INSERT we manually fetch
            the next available id using a PostgreSQL internal:
            SELECT nextval(pg_get_serial_sequence('events', 'id'));
            To do that we need the table's name and the sequence
            column's name; luckily we use the same name in all our
            models.
            """
            table_name = cls.__tablename__
            qry = f"SELECT nextval(pg_get_serial_sequence('{table_name}', 'id'))"
            rs = session.execute(qry)
            # TODO: find a non-ugly way to do that
            for row in rs:
                next_id = row[0]
            # manually set the object id
            self.id = next_id
            # set the external_id before saving the object in the database
            self.ex_id = cls.ex_id_prefix + str(self.id)
            session.add(self)
            session.flush([self])
If you are targeting Postgresql 12 or later, you can use a generated column. SQLAlchemy's Computed column type will create such a column, and we can pass an SQL expression to compute the value.
The model would look like this:
class MyModel(Base):
    __tablename__ = 't68225046'
    ex_id_prefix = 'prefix_'

    id = sa.Column(sa.Integer, primary_key=True)
    ex_id = sa.Column(sa.String,
                      sa.Computed(sa.text(":p || id::varchar").bindparams(p=ex_id_prefix)))
producing this DDL
CREATE TABLE t68225046 (
    id SERIAL NOT NULL,
    ex_id VARCHAR GENERATED ALWAYS AS ('prefix_' || id::varchar) STORED,
    PRIMARY KEY (id)
)
and a single insert statement
2021-09-19 ... INFO sqlalchemy.engine.Engine BEGIN (implicit)
2021-09-19 ... INFO sqlalchemy.engine.Engine INSERT INTO t68225046 DEFAULT VALUES RETURNING t68225046.id
2021-09-19 ... INFO sqlalchemy.engine.Engine [generated in 0.00014s] {}
2021-09-19 ... INFO sqlalchemy.engine.Engine COMMIT
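To double-check the generated column outside of SQLAlchemy, a plain SQL session against the same table should show the value being filled in (a sketch, assuming the DDL above has been applied):
INSERT INTO t68225046 DEFAULT VALUES;
SELECT id, ex_id FROM t68225046;
-- expected shape of the result (illustrative): 1 | prefix_1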
For earlier releases of Postgresql, or if you don't need to store the value in the database, you could simulate it with a hybrid property.
import sqlalchemy as sa
from sqlalchemy import orm
from sqlalchemy.ext.hybrid import hybrid_property
from sqlalchemy.sql import cast
Base = orm.declarative_base()
class MyModel(Base):
    __tablename__ = 't68225046'
    ex_id_prefix = 'prefix_'

    id = sa.Column(sa.Integer, primary_key=True)

    @hybrid_property
    def ex_id(self):
        return self.ex_id_prefix + str(self.id)

    @ex_id.expression
    def ex_id(cls):
        # See https://stackoverflow.com/a/54487891/5320906
        return cls.ex_id_prefix + cast(cls.id, sa.String)
I have a many:many relationship between 2 tables: note and tag, and want to be able to search all notes by their tagId. Because of the many:many I have a junction table note_tag.
My goal is to expose a computed field on my Postgraphile-generated Graphql schema that I can query against, along with the other properties of the note table.
I'm playing around with postgraphile-plugin-connection-filter. This plugin makes it possible to filter by things like authorId (which would be 1:many), but I'm unable to figure out how to filter by a many:many. I have a computed column on my note table called tags, which is JSON. Is there a way to "look into" this json and pick out where id = 1?
Here is my computed column tags:
create or replace function note_tags(note note, tagid text)
returns jsonb as $$
  select
    json_strip_nulls(
      json_agg(
        json_build_object(
          'title', tag.title,
          'id', tag.id
        )
      )
    )::jsonb
  from note
  inner join note_tag on note_tag.tag_id = tagid and note_tag.note_id = note.id
  left join note_tag nt on note.id = nt.note_id
  left join tag on nt.tag_id = tag.id
  where note.account_id = '1'
  group by note.id, note.title;
$$ language sql stable;
As I understand the function above, I am returning jsonb based on the tagid that was given to the function (inner join note_tag on note_tag.tag_id = tagid). So why is the json not being filtered by that id when the column gets computed?
I am trying to make a query like this:
query notesByTagId {
  notes {
    edges {
      node {
        title
        id
        tags(tagid: "1")
      }
    }
  }
}
But right now, when I execute this query, I get back stringified JSON in the tags field. However, all tags are included in the JSON, whether or not the note actually has that tag.
For instance, the note with id = 1 should only have the tags with id = 1 and id = 2. Right now it returns every tag in the database:
{
  "data": {
    "notes": {
      "edges": [
        {
          "node": {
            "id": "1",
            "tags": "[{\"id\":\"1\",\"title\":\"Psychology\"},{\"id\":\"2\",\"title\":\"Logic\"},{\"id\":\"3\",\"title\":\"Charisma\"}]",
            ...
The key factor with this computed column is that the JSON must include all tags attached to the note, even though we are searching for notes by a single tagid.
Here are my simplified tables...
note:
create table note(
  id text primary key,
  title text
)
tag:
create table tag(
  id text primary key,
  title text
)
note_tag:
create table note_tag(
  note_id text references note(id),
  tag_id text references tag(id)
)
Update
I am changing up the approach a bit, and am toying with the following function:
create or replace function note_tags(n note)
returns setof tag as $$
select tag.*
from tag
inner join note_tag on (note_tag.tag_id = tag.id)
where note_tag.note_id = n.id;
$$ language sql stable;
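Before wiring this into Postgraphile, the function can be exercised directly in SQL; a set-returning function in the FROM clause may reference an earlier table, so a per-note call looks like this (a sketch, assuming the simplified tables above are populated):
-- list each note together with its tags
select n.id as note_id, t.id as tag_id, t.title
from note n, note_tags(n) t;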
I am able to retrieve all notes with the tags field populated, but now I need to be able to filter out the notes that don't belong to a particular tag, while still retaining all of the tags that belong to a given note.
So the question remains the same as above: how do we filter a table based on a related table's PK?
After a while of digging, I think I've come across a good approach. Based on this response, I have made a function that returns all notes by a given tagid.
Here it is:
create or replace function all_notes_with_tag_id(tagid text)
returns setof note as $$
select distinct note.*
from tag
inner join note_tag on (note_tag.tag_id = tag.id)
inner join note on (note_tag.note_id = note.id)
where tag.id = tagid;
$$ language sql stable;
The error in my approach was to expect the computed column to do all of the work, whereas its only job should be to get all of the data. This function all_notes_with_tag_id can now be called directly in GraphQL like so:
query MyQuery($tagid: String!) {
  allNotesWithTagId(tagid: $tagid) {
    edges {
      node {
        id
        title
        tags {
          edges {
            node {
              id
              title
            }
          }
        }
      }
    }
  }
}
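Outside of GraphQL, the same function can be sanity-checked with a direct call (a sketch, using the tag id from the query above):
select id, title from all_notes_with_tag_id('1');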
I am trying to use a map to store multiple key/value properties. The problem I see is that it doesn't keep the existing data; it overwrites the data every time I set a new property.
CREATE CLASS Person EXTENDS V;
CREATE PROPERTY Person.name STRING (MANDATORY TRUE, MIN 3, MAX 50);
Create VERTEX Person set name="test";
update ( SELECT from Person where name="test") SET mapField=
{"property1":mapField.property1+10};
Setting property1 in the map and then updating it works just fine:
update ( SELECT from Person where name="test") SET mapField=
{"property1":mapField.property1+30};
select from Person;
When I set another property, "property2", I lose property1:
update ( SELECT from Person where name="test") SET mapField=
{"property2":mapField.property2+10};
select from Person;
Is there a way I can retain the previous property and still make this work?
Thanks
Hari
This should do the trick:
update ( SELECT from Person where name="test")
SET mapField.property1 = mapField.property1 + 30;
In v2.2 there was also an UPDATE ... PUT option, i.e.
update ( SELECT from Person where name="test")
PUT mapField = property1, eval('mapField.property1 + 30');
but it's not supported anymore (and it's definitely ugly).
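For the original question of adding property2 without losing property1, the same dot notation should work for a key that does not exist yet (a sketch, untested; the literal 10 stands in for whatever starting value you need):
update ( SELECT from Person where name="test")
SET mapField.property2 = 10;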
I am trying to write a MATCH statement to pull related data.
CREATE CLASS Member EXTENDS V;
CREATE CLASS DirectVolumes EXTENDS V;
CREATE SEQUENCE dvSequence TYPE ORDERED;
CREATE CLASS GroupVolumes EXTENDS V;
CREATE SEQUENCE groupVolumesSequence TYPE ORDERED;
CREATE CLASS GenerationVolumes EXTENDS V;
CREATE SEQUENCE genGroupVolumesSequence TYPE ORDERED;
CREATE SEQUENCE memberIdSeq TYPE ORDERED;
CREATE VERTEX Member SET id=sequence('memberIdSeq').next();
CREATE VERTEX DirectVolumes set id=sequence('dvSequence').next();
CREATE VERTEX GroupVolumes set id=sequence('groupVolumesSequence').next();
CREATE VERTEX GenerationVolumes set id=sequence('genGroupVolumesSequence').next();
CREATE EDGE OWNS_DV_CURRENT FROM (SELECT FROM Member WHERE id = 1) TO (select from DirectVolumes where id = 1);
CREATE EDGE OWNS_PG_CURRENT FROM (SELECT FROM Member WHERE id = 1) TO (select from GroupVolumes where id = 1);
CREATE EDGE OWNS_GG_CURRENT FROM (SELECT FROM Member WHERE id = 1) TO (select from GenerationVolumes where id = 1);
Now I want the Member data together with the OWNS_DV_CURRENT, OWNS_PG_CURRENT and OWNS_GG_CURRENT edges' data. I have removed properties for simplicity.
What is the right way to do this with a MATCH? I couldn't figure out how to fetch multiple related classes. The query below seems OK, but it returns 0 records; I think it is looking for OWNS_PG_CURRENT on top of OWNS_DV_CURRENT and not on Member itself.
select from (MATCH {class: Member, as:member}.out("OWNS_DV_CURRENT"){as:dv}.out("OWNS_PG_CURRENT") {as: pg} RETURN member, dv,pg)
Try this:
MATCH
{class: Member, as:member}.out("OWNS_DV_CURRENT"){as:dv},
{as:member}.out("OWNS_PG_CURRENT") {as: pg}
RETURN member, dv,pg
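If you also need the OWNS_GG_CURRENT edge from the question, the same alias can be branched a third time (a sketch along the same lines, untested):
MATCH
{class: Member, as:member}.out("OWNS_DV_CURRENT"){as:dv},
{as:member}.out("OWNS_PG_CURRENT") {as: pg},
{as:member}.out("OWNS_GG_CURRENT") {as: gg}
RETURN member, dv, pg, gg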
I have a table patient_details(patient_id, first_name, last_name, address, date_of_birth, gender, contact_number, occupation). I have generated an entity class and a PersistenceUnit. I can only find an object using its ID:
PatientDetails pd = em.find(PatientDetails.class,patient_id);
I want to know how to find an object by using other column name(s) instead of just the primary key.
Have a look at JPQL or the JPA Criteria API.
https://docs.oracle.com/html/E24396_01/ejb3_langref.html
http://docs.oracle.com/javaee/6/tutorial/doc/gjitv.html
Executing queries:
http://docs.oracle.com/javaee/6/api/javax/persistence/EntityManager.html#createQuery%28javax.persistence.criteria.CriteriaQuery%29
For example:
if
class PatientDetails {
    String first_name;

    @Column(name = "date_of_birth")
    Date birth;
    ...
}
then
String sql = "SELECT p FROM PatientDetails p WHERE p.first_name = :fname AND p.birth > :generation";
Date generation = new Date(80, 0, 1); // deprecated constructor: year is offset from 1900, i.e. 1980-01-01
TypedQuery<PatientDetails> query = em.createQuery(sql, PatientDetails.class);
query.setParameter("fname", "John");
query.setParameter("generation", generation);
return query.getResultList();
returns patients called John and born after 1980.
But you should read the links recommended by @pL4Gu33.