OrientDB add/edit description attribute of classes with SQL

I am using OrientDB 3.1.1. All classes have a 'description' attribute whose value is null by default. Is there any way to add a description to a class through SQL or by other means?
I have tried ALTER CLASS <className> DESCRIPTION "some text as description", but it does not work.
It should be a simple matter to update the description, but apparently it is not that straightforward.
Below is an example of a built-in class, but the same holds for all classes.
{
  customFields: null
  defaultClusterId: 10
  strictMode: false
  description: null
  abstract: false
  clusterIds: [10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22]
  superClass: null
  name: V
  clusterSelection: round-robin
  shortName: null
  overSize: 0.0
  properties: []
  superClasses: null
}

After some experimentation, I found that the following syntax works to add or alter a class description (though it is not explicitly documented in the manual):
ALTER CLASS xClass DESCRIPTION `xClass desc1`
Note the backtick (`) marks, not single quotes; double quotes will not work either.
{
  "customFields": null,
  "defaultClusterId": 22,
  "strictMode": false,
  "description": "xClass desc1",
  "abstract": false,
  "clusterIds": [22, 23, 24, 25],
  "superClass": null,
  "name": "xClass",
  "clusterSelection": "round-robin",
  "shortName": null,
  "overSize": 0.0,
  "properties": [],
  "superClasses": null
}
With the above command the description can be set or altered, as the example shows.
NOTE: For attributes the syntax is similar, but single quotes must be used instead of backticks.
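Putting this together, a sketch of both variants (the ALTER PROPERTY line is an assumption based on the note about attributes; class and property names are illustrative):

```sql
-- class description: note the backticks around the value
ALTER CLASS xClass DESCRIPTION `xClass desc1`

-- attribute/property description: similar syntax, but with single quotes
ALTER PROPERTY xClass.xProp DESCRIPTION 'xProp desc1'
```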

Related

Need to explicitly use `cast` when using `pl.col` versus indexing seems inconsistent

In the example below, why does scores.filter(scores["zone"] == "North") require adding .cast(str) to work while scores.filter(pl.col("zone") == "North") does not need casting? Interestingly, scores.filter(scores["zone"].is_in(["North", "South"])) works without casting when global string cache is on.
Using polars 0.15.14 (conda)
import polars as pl
pl.toggle_string_cache(True)
scores = pl.DataFrame(
    {
        "zone": pl.Series([
            "North",
            "North",
            "North",
            "South",
            "South",
            "East",
            "East",
            "East",
            "East",
        ]).cast(pl.Categorical),
        "funding": pl.Series(["yes", "yes", "no", "yes", "no", "no", "no", "yes", "yes"]).cast(pl.Categorical),
        "score": [78, 39, 76, 56, 67, 89, 100, 55, 80],
    }
)
# works with or without global string cache
scores.filter(pl.col("zone") == "North")
# works without global string cache
scores.filter(pl.col("zone").cast(str).is_in(["North", "South"]))
# works with global string cache
scores.filter(pl.col("zone").is_in(["North", "South"]))
# works with global string cache
scores.filter(scores["zone"].is_in(["North", "South"]))
# requires cast(str)
scores.filter(scores["zone"] == "North")
# works with or without global string cache
scores.filter(scores["zone"].cast(str) == "North")
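Part of what is happening can be pictured without polars: a categorical column stores integer codes plus a string-to-code mapping, and two columns encoded separately get independent mappings unless a shared (global) cache is used. A toy sketch of that idea in plain Python (the Categorical class below is illustrative, not a polars internal):

```python
class Categorical:
    """Toy model of a categorical column: integer codes plus a string mapping.
    Illustrative only -- this is not how polars is implemented internally."""

    def __init__(self, values, cache=None):
        # use the shared cache if one is given, otherwise a private mapping
        self.mapping = cache if cache is not None else {}
        self.codes = [self.mapping.setdefault(v, len(self.mapping)) for v in values]

# Built independently, the same strings can get different codes:
a = Categorical(["North", "South"])
b = Categorical(["South", "North"])
print(a.codes, b.codes)  # [0, 1] [0, 1] -- "North" is code 0 in a but 1 in b

# With a shared ("global") cache, codes agree across columns:
shared = {}
c = Categorical(["North", "South"], shared)
d = Categorical(["South", "North"], shared)
print(c.codes, d.codes)  # [0, 1] [1, 0] -- "North" is code 0 in both
```

Comparing raw codes across independently built categoricals is therefore meaningless, which is why either a cast back to strings or a shared string cache is needed.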
A related question about the need to cast is shown below. Here, using pl.col requires explicit casting to a float but using [] indexing does not.
zone_count = scores.groupby("zone").agg(pl.count("zone").alias("count"))
# converts to float automatically
zone_count["count"] / 100
# does not convert to float
zone_count.select(pl.col("count")) / 100
# need to explicitly cast to float
zone_count.select(pl.col("count").cast(float)) / 100
Related question here:
Python-Polars: How to filter categorical column with string list

How to add data to an MUI table column-wise instead of row-wise?

Good day! Here is the sandbox react code that I'm using for a project involving MUI tables:
I have been racking my brain over this, but can't seem to find a solution. How can I add to this table column-wise instead of row-wise?
In lines 57-67, the rows are created first and then populated row-wise, left to right, from data.
The data given looks like this:
const data = [
  {
    name: "sample_name",
    calories: "19",
    fat: "90",
    carbs: 70,
    protein: 90
  },
  {
    name: "sample_name",
    calories: "19",
    fat: "90",
    carbs: 70,
    protein: 90
  },
]
What the lines I mentioned do is take one of the objects in data and append its values row-wise.
The data I work with looks like this:
const name = ["richard", "nixon"]
const calories = [9, 9, 0, 9, 0, 5, 8]
const fat = [10, 9, 9]
const carbs = [11, 3, 4, 5]
const protein = [1, 1]
I just want to be able to insert the name data into the name column, and so on. This should also hopefully make it easier for me to dynamically insert more data into each column using a TextField + button action.
This seems to me like a data issue, not a Material UI one. You need to provide row and column data to a table regardless of which library you use; that's just how tables are built. So if you are getting data back by columns, you need a reducer or some method to convert it into rows. Here is a super quick and dirty example:
const rawData = {
  name: ["Ice cream", "Sno cone"],
  calories: [32, 45]
};

let columns = Object.keys(rawData);
let rows = rawData.name.map((name, i) => {
  return { name, calories: rawData.calories[i] };
});
/*
columns = ["name", "calories"]
rows = [
  { name: "Ice cream", calories: 32 },
  { name: "Sno cone", calories: 45 },
];
*/
Obviously, this is a quick example and not very extensible, but it should point you in the right direction. Perhaps a reducer could build out the row data more elegantly. However, this will let you build out the table as intended:
<TableContainer component={Paper}>
  <Table>
    <TableHead>
      <TableRow>
        {columns.map((col) => (
          <TableCell key={col}>{col}</TableCell>
        ))}
      </TableRow>
    </TableHead>
    <TableBody>
      {rows.map((row) => (
        <TableRow key={row.name}>
          <TableCell>{row.name}</TableCell>
          <TableCell>{row.calories}</TableCell>
        </TableRow>
      ))}
    </TableBody>
  </Table>
</TableContainer>
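As a sketch of the "more elegant reducer" idea, the column-to-row conversion can be generalized to any number of columns, including columns of unequal length (columnsToRows is a made-up helper name, not a Material UI API):

```javascript
const rawData = {
  name: ["Ice cream", "Sno cone"],
  calories: [32, 45]
};

// Zip every column array by index into one row object per index.
// Cells from shorter columns come out as undefined.
function columnsToRows(data) {
  const columns = Object.keys(data);
  const length = Math.max(...columns.map((c) => data[c].length));
  return Array.from({ length }, (_, i) =>
    Object.fromEntries(columns.map((c) => [c, data[c][i]]))
  );
}

const rows = columnsToRows(rawData);
console.log(rows);
// → [{ name: 'Ice cream', calories: 32 }, { name: 'Sno cone', calories: 45 }]
```

The same rows array drops straight into the Table markup above, and adding a new column to rawData needs no changes to the transform.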

Return nested data from recursive query

We have a list of recipes (recipe_id) and ingredients (ingredient_id). Every recipe can become an ingredient of another recipe, and that child recipe can in turn contain more child recipes, and so on. I need a recursive query that takes a recipe id (or an array of ids, as I have it set up now) and returns a NESTED table or array of all the ingredients of the main recipes and any child recipes.
We're using Postgresql / Supabase / PostgREST.
So far I've been able to create the recursive query as an RPC function that returns a table using UNION ALL. This gives me back a flat table, and I can't definitively trace an ingredient back to a specific parent recipe (because the same recipe can appear as a child in multiple parent recipes). I'm not sure where to go from here. The only other option I've figured out so far is to have my API endpoint query each level one at a time, but that generates a lot of network requests =(
DESIRED OUTPUT
Super flexible on format, but it would be nice if I could get all the child components as a nested array like so:
[
  { id: 1,
    recipe_id: 22,
    item_id: 9,
    item: "Croissant Dough",
    ...,
    components: [
      { id: 2,
        recipe_id: 1,
        item_id: 33,
        item: "Butter",
        ...,
        components: []
      },
      { id: 3,
        recipe_id: 1,
        item_id: 71,
        item: "Wheat Flour",
        ...,
        components: []
      }
    ]
  },
  { id: 1,
    recipe_id: 29,
    item_id: 4,
    item: "Almond Filling",
    ...,
    components: [
      { id: 2,
        recipe_id: 29,
        item_id: 16,
        item: "Almond Meal",
        ...,
        components: []
      },
      { id: 3,
        recipe_id: 29,
        item_id: 42,
        item: "Pastry Cream",
        ...,
        components: [
          { id: 7,
            recipe_id: 42,
            item_id: 22,
            item: "Egg Yolks",
            ...,
            components: []
          }
        ]
      }
    ]
  },
]
CURRENT RPC FUNCTION
CREATE OR REPLACE FUNCTION recipe_components_recursive (recipeids text)
RETURNS TABLE (id int8, recipe_id int8, item_id int8, quantity numeric, unit_id int8, recipe_order int4, item text, visible bool, recipe bool, "unitName" varchar, "unitAbbreviation" varchar, "conversionFactor" float4, "metricUnit" int8, batch bool)
LANGUAGE plpgsql
AS $$
DECLARE
  transformedjson int[] := recipeids;
BEGIN
  RETURN QUERY
  WITH RECURSIVE recipe_components_rec_query AS (
    SELECT *
    FROM recipe_components_items
    WHERE recipe_components_items.recipe_id = ANY (transformedjson)
    UNION ALL
    SELECT o.id, o.recipe_id, o.item_id, o.quantity, o.unit_id, o.recipe_order, o.item, o.visible, o.recipe, o."unitName", o."unitAbbreviation", o."conversionFactor", o."metricUnit", o.batch
    FROM recipe_components_items o
    INNER JOIN recipe_components_rec_query n ON n.item_id = o.recipe_id AND n.recipe = true
  )
  SELECT *
  FROM recipe_components_rec_query;
END $$;
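One way to make each flat row traceable to its parent recipe is to carry an ancestry path through the recursion; the nested array can then be assembled from the paths client-side or with jsonb aggregation. A sketch against the same table as the question (the path column is an addition; everything else mirrors the existing function, and the two ids in the anchor are example inputs):

```sql
WITH RECURSIVE rec AS (
    -- anchor: top-level components, path starts at the parent recipe
    SELECT r.*, ARRAY[r.recipe_id] AS path
    FROM recipe_components_items r
    WHERE r.recipe_id = ANY ('{22,29}'::int8[])
    UNION ALL
    -- recursion: extend the path with the child recipe's id, so every
    -- row records exactly which chain of recipes it belongs to
    SELECT o.*, n.path || o.recipe_id
    FROM recipe_components_items o
    JOIN rec n ON n.item_id = o.recipe_id AND n.recipe = true
)
SELECT * FROM rec ORDER BY path;
```

Because a recipe that appears under two different parents now carries two different paths, the ambiguity in the flat result disappears, at the cost of one extra array column.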

Elastic/Nearest search based on document properties in MongoDB

We need to accomplish the nearest search based on document properties in MongoDB.
Let's take an example, there is a Car schema in MongoDB, information will be stored as something similar to:
{
  Make: "Hyundai",
  Model: "Creta",
  Title: "Hyundai Creta E 1.6 Petrol",
  Description: "Compact SUV",
  Feature: {
    ABS: true,
    EBD: true,
    Speakers: 4,
    Display: false
  },
  Specification: {
    Length: "4270 mm",
    Width: "1780 mm",
    Height: "1630 mm",
    Wheelbase: "2590 mm",
    Doors: 5,
    Seating: 5,
    Displacement: "1591 cc"
  },
  Safety: {
    Airbags: 2,
    SeatBeltWarning: false
  },
  Maintenance: {
    LastService: "21/06/2016",
    WashingDone: true
  }
}
Search needs to be provided based on following criteria:
1. Make
2. Model
3. ABS
4. Seating
5. Displacement
6. Airbags
Now results should contain records where 3 or more of these properties match (exact match), ordered by the number of properties that match.
What is the best way to implement this with MongoDB?
You could write something to generate a document for each triplet of fields and then combine them with $or, producing something like:
{$or: [
  {Make: "Hyundai", Model: "Creta", "Feature.ABS": true},
  {Make: "Hyundai", Model: "Creta", "Specification.Seating": 5},
  ...
]}
Sorting by relevance probably requires computing a score; note that textScore only applies to $text queries, so a hand-computed match count (e.g. in an aggregation pipeline) may be needed.
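Generating those triplet clauses by hand gets tedious for six fields; here is a small sketch that builds the $or document programmatically (plain Python dicts, so it needs no running MongoDB; a driver like pymongo would accept the resulting dict as a filter):

```python
from itertools import combinations

# the six searchable properties and the values being searched for
criteria = {
    "Make": "Hyundai",
    "Model": "Creta",
    "Feature.ABS": True,
    "Specification.Seating": 5,
    "Specification.Displacement": "1591 cc",
    "Safety.Airbags": 2,
}

def at_least_n_match(criteria, n=3):
    # one $or clause per n-field combination; a document matching any
    # clause has at least n of the properties matching exactly
    return {"$or": [dict(combo) for combo in combinations(criteria.items(), n)]}

query = at_least_n_match(criteria)
print(len(query["$or"]))  # C(6, 3) = 20 clauses
```

Note this only filters for "at least 3 matches"; ranking results by how many properties matched would still need something extra, such as an aggregation stage that counts matches per document.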

JsonMappingException: Already had POJO for id

I get an error when trying to work with the @JsonIdentityInfo Jackson annotation. When I try to deserialize the object I get the following exception:
Could not read JSON: Already had POJO for id (java.lang.Integer) [1] (through reference chain: eu.cobiz.web.domain.Site["operators"]->eu.yavix.web.domain.Account["image"]->eu.cobiz.web.domain.Image["#Image"]);nested exception is com.fasterxml.jackson.databind.JsonMappingException: Already had POJO for id (java.lang.Integer) [1] (through reference chain: eu.yavix.web.domain.Site["operators"]->eu.cobiz.web.domain.Account["image"]->eu.cobiz.web.domain.Image["#Image"])
The JSON I am trying to deserialize looks like:
{
  "#Site": 1,
  "siteId": 1,
  "name": "0",
  "address": {
    "#Address": 2,
    "addressId": 4,
    "number": "22"
  },
  "operators": [
    {
      "accountId": 1,
      "email": "user982701361@yavix.eu",
      "image": {
        "#Image": 1,
        "imageId": 1,
        "uri": "http://icons.iconarchive.com/icons/deleket/purple-monsters/128/Alien-awake-icon.png"
      }
    },
    {
      "accountId": 2,
      "email": "user174967957@yavix.eu",
      "image": {
        "#Image": 2,
        "imageId": 2,
        "uri": "http://icons.iconarchive.com/icons/deleket/purple-monsters/128/Alien-awake-icon.png"
      }
    }
  ]
}
My domain object is annotated with
@JsonIdentityInfo(generator = ObjectIdGenerators.IntSequenceGenerator.class, property = "#Image")
The problem arises from the @JsonIdentityInfo annotation: if I remove it, the problem disappears (as I did for Account). But as I understand it, this feature is meant for cyclic dependencies, which I do need in other scenarios. There shouldn't be a conflict between the two images, since they are different objects.
How can I solve this or what is the problem?
You should use the scope parameter when annotating the ids. The deserializer will then make sure the id is unique within that scope.
From Annotation Type JsonIdentityInfo:
Scope is used to define applicability of an Object Id: all ids must be unique within their scope; where scope is defined as combination of this value and generator type.
e.g.
@JsonIdentityInfo(generator = ObjectIdGenerators.IntSequenceGenerator.class, property = "#id", scope = Account.class)
To avoid id conflicts, you can also use ObjectIdGenerators.PropertyGenerator.class or ObjectIdGenerators.UUIDGenerator.class instead of ObjectIdGenerators.IntSequenceGenerator.class.