Postgres jsonb_set concatenate current value

I'm trying to use jsonb_set to update a range of JSON objects within my database. I can get a query working that updates the object with a string value; however, I cannot seem to get it to update using the current value.
UPDATE entity
SET properties = jsonb_set(properties, '{c_number}', concat('0', properties->>'c_number'))
WHERE type = 1 and length(properties->>'c_number') = 7
The above doesn't work in its current format; I think the issue is the properties->>'c_number' inside the jsonb_set. Is there a way I can access the current value and simply add a leading 0?

Found a solution:
UPDATE entity
SET properties = jsonb_set(properties, '{c_number}', concat('"0', properties->>'c_number', '"')::jsonb)
WHERE type = 1 and length(properties->>'c_number') = 7

Based on this answer I was able to prepare my solution.
My goal was to create a new property in the JSON, with a value based on the value of one of the properties the JSON already has.
For example:
I have:
{
property_root: { property_root_child: { source_property_key: "source_property_value" } }
}
I want:
{
property_root: { property_root_child: { source_property_key: "source_property_value", target_property_key: "source_property_value + my custom ending" } }
}
So my query would look like this:
UPDATE database.table_with_json
SET json_column = jsonb_set(
    json_column,
    '{ property_root, property_root_child, target_property_key }',
    concat('"', json_column->'property_root'->'property_root_child'->>'source_property_key', ' + my custom ending', '"')::jsonb)
WHERE
    json_column->'property_root'->'property_root_child'->'source_property_key' IS NOT NULL
Why does the concat look messy? Based on the answer mentioned above:
The third argument of jsonb_set() should be of jsonb type. The problem is in casting a text string to a jsonb string: you need the string wrapped in double quotes.
That is why we have to wrap the result of concat in double quotes.
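A minimal sketch of that quoting rule, runnable in any Postgres 9.5+ session (to_jsonb() arrived in the same version as jsonb_set()):

-- a bare string is not valid jsonb; this raises an input syntax error
SELECT 'hello'::jsonb;

-- wrapping the value in double quotes makes it a valid jsonb string
SELECT '"hello"'::jsonb;

-- to_jsonb() produces the same quoted jsonb string without manual quoting
SELECT to_jsonb(concat('0', '1234567'));

So the UPDATE statements above could equivalently pass to_jsonb(concat(...)) as the third argument instead of quoting by hand.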

Related

how to convert map<anydata> to json

In my CRUD REST service I do an insert into a DB and want to respond to the caller with the newly created record. I am looking for a nice way to convert the map to json.
I am running on Ballerina 0.991.0 and using PostgreSQL.
The return of the update ("INSERT ...") is a map.
I tried with convert and stamp but it did not work for me.
import ballerinax/jdbc;
...
jdbc:Client certificateDB = new({
    url: "jdbc:postgresql://localhost:5432/certificatedb",
    username: "USER",
    password: "PASS",
    poolOptions: { maximumPoolSize: 5 },
    dbOptions: { useSSL: false }
});
...
var ret = certificateDB->update("INSERT INTO certificates(certificate, typ, scope_) VALUES (?, ?, ?)", certificate, typ, scope_);
// here is the data, it is map<anydata>
ret.generatedKeys
The map should know which data type it is, right? Then it should be easy to convert it to json, like this:
{"certificate":"{certificate:
"-----BEGIN
CERTIFICATE-----\nMIIFJjCCA...tox36A7HFmlYDQ1ozh+tLI=\n-----END
CERTIFICATE-----", typ: "mqttCertificate", scope_: "QARC", id_:
223}"}
Right now I do a foreach and build the json manually. Quite ugly. Maybe somebody has some tips to do this in a nice way.
It cannot be excluded that it is due to my lack of programming skills :-)
The return value of the JDBC update remote function is sql:UpdateResult|error.
The sql:UpdateResult is a record with two fields (refer to https://ballerina.io/learn/api-docs/ballerina/sql.html#UpdateResult):
updatedRowCount of type int - the number of rows which got affected/updated by the given statement execution
generatedKeys of type map - a map of the auto generated column values resulting from the update operation (only if the corresponding table has auto generated columns). The data is given as key value pairs of column name and column value, so this map contains only the auto generated column values.
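As a small sketch of reading those two fields (assuming the client and INSERT from the question, and the field names from the linked API docs):

import ballerina/sql;
...
var ret = certificateDB->update("INSERT INTO certificates(certificate, typ, scope_) VALUES (?, ?, ?)",
    certificate, typ, scope_);
if (ret is sql:UpdateResult) {
    int count = ret.updatedRowCount;        // rows affected by the INSERT
    map<anydata> keys = ret.generatedKeys;  // only the auto generated column values
}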
But your requirement is to get the entire row which was inserted by the given update function. It can't be returned by the update operation itself. To get that you have to execute a JDBC select operation with the matching criteria. The select operation will return a table or an error. That table can be converted to json easily using the convert() function.
For example: let's say the certificates table has an auto generated primary key column named 'cert_id'. Then you can retrieve that id value using the code below.
int generatedID = <int>updateRet.generatedKeys.CERT_ID;
Then use that generated id to query the data.
var ret = certificateDB->select("SELECT certificate, typ, scope_ FROM certificates where id = ?", (), generatedID);
json convertedJson = {};
if (ret is table<record {}>) {
    var jsonConversionResult = json.convert(ret);
    if (jsonConversionResult is json) {
        convertedJson = jsonConversionResult;
    }
}
Refer to the example https://ballerina.io/learn/by-example/jdbc-client-crud-operations.html for more details.

f# Insert on MongoDB using Records

I've been trying for a while to insert on MongoDB using only records with no success.
My problem is that I want to create a simple insert function which I send a generic type and it is inserted into the database.
Like so.
let insert (value: 'a) =
    let collection = MongoClient().GetDatabase("db").GetCollection<'a> "col"
    collection.InsertOne value
From this function, I tried inserting the following records.
// Error that it can't set the Id
type t1 = {
    Id: ObjectId
    Text: string
}

// Creates the record perfectly but doesn't generate a new Id
type t2 = {
    Id: string
    Text: string
}

// Creates the record and autogenerates the Id but doesn't insert the Text, and there are two Ids (_id, Id#)
type t3 = {
    mutable Id: ObjectId
    Text: string
}

// Creates the record and autogenerates the Id but for every property it generates two on MongoDB (_id, Id#, Text, Text#)
type t4 = {
    mutable Id: ObjectId
    mutable Text: string
}
So, can anyone think of a solution for this, or am I stuck having to use a class?
// Works!!!
type t5() =
    member val Id = ObjectId.Empty with get, set
    member val Name = "" with get, set
Also, does anyone have any idea why, when the C# MongoDB library translates the mutable fields, it creates the properties with the # at the end?
I would be fine with having all my properties set as mutable, although this wouldn't be my first choice; having it create multiple properties in the DB is quite bad.
You could try annotating your records with CLIMutable (and no mutable fields).
The #s end up in the DB because MongoDB uses reflection, and F# implements mutable record fields with backing fields named fieldName#.
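A minimal sketch of that suggestion (the record name is a placeholder, and the database/collection names are the question's; untested against a live server):

open MongoDB.Bson
open MongoDB.Driver

// CLIMutable makes the compiler emit a parameterless constructor and
// property setters, so the C# driver can hydrate the record through
// reflection without F# mutable fields (and their fieldName# backing fields).
[<CLIMutable>]
type T6 = {
    Id: ObjectId
    Text: string
}

let insert (value: 'a) =
    let collection = MongoClient().GetDatabase("db").GetCollection<'a> "col"
    collection.InsertOne value

// an empty Id should let the driver generate a fresh _id on insert
insert { Id = ObjectId.Empty; Text = "hello" }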

How to update embedded column's property in orientdb

I am facing an issue while updating an embedded field's property in OrientDB.
Below are the steps to reproduce the issue:
CREATE VERTEX Foo set value = { 'abc-def-hgi':"blah blah", '1ab-2cd-3ef': "aaaaa", '345-jkl-mno':'ppppp' }, id = 1
CREATE VERTEX Foo set value = { 'abc-def-hgi':"mmmmm", '1ab-2cd-3ef': "nmnmnmn", '345-jkl-mno':'qqqq' }, id = 2
CREATE VERTEX Foo set value = { 'abc-def-hgi':"lorem ipsum", '1ab-2cd-3ef': "mmmmm", '345-jkl-mno':'llll' }, id = 3
Property "value" has been declared as of type "Embedded".
Now, I want to update the record with id "1", changing the "abc-def-hgi" property in the "value" column.
I have tried with below queries, but neither of them worked:
update Foo set value["abc-def-hgi"] = "new new" where id = 1
update Foo set value.abc-def-hgi = "new new" where id = 1
It seems that it has a problem with the hyphen ("-") in the field's property name.
I am using OrientDB version 2.2.11.
Note: I have looked through the issues in the OrientDB Git repo, where I found this. Not sure whether it is related to my issue or not, but it's not working at my end.
Any help would be greatly appreciated.
As you said yourself, the problem is with the -.
If you try with a field without a -, the following query works.
Example
update Foo set value.prop = "myprop1" where id=1
If you try to create a field with a -, you get an exception.
UPDATE
To create a property with a hyphen, you can use this command:
create property foo.`abc-def-hgi` string
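With the property declared that way, the same backtick quoting should let you address the hyphenated key in an update as well (an untested sketch against the question's schema):

update Foo set value.`abc-def-hgi` = "new new" where id = 1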
Hope it helps
Regarding the usage of '-' in property names, you can use the quoting (backticks) as Alessandro says, or disable the "Strict" setting in the database options (in that case you are rolling back to the old parser, which was a little bit less rigid).

How to query a JSON element

Let's say I have a Postgres database (9.3) and there is a table called Resources. In the Resources table I have the fields id (an int) and data (a JSON type).
Let's say I have the following records in said table.
1, {'firstname':'Dave', 'lastname':'Gallant'}
2, {'firstname':'John', 'lastname':'Doe'}
What I want to do is write a query that would return all the records in which the data column has a json element with the lastname equal to "Doe"
I tried to write something like this:
records = db_session.query(Resource).filter(Resources.data->>'lastname' == "Doe").all()
PyCharm, however, is giving me a compile error on the "->>".
Does anyone know how I would write the filter clause to do what I need?
Try using astext
records = db_session.query(Resource).filter(
    Resources.data["lastname"].astext == "Doe"
).all()
Please note that the column MUST have a type of JSONB. The regular JSON column will not work.
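For completeness, a minimal model sketch that the astext filter assumes (the question mixes Resource and Resources; the sketch picks Resources, with the column typed as JSONB per the note above):

from sqlalchemy import Column, Integer
from sqlalchemy.dialects.postgresql import JSONB
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

class Resources(Base):
    __tablename__ = "resources"
    id = Column(Integer, primary_key=True)
    data = Column(JSONB)  # astext works on JSONB; see the note above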
Also you could explicitly cast string to JSON (see Postgres JSON type doc).
from sqlalchemy.dialects.postgresql import JSON
from sqlalchemy.sql.expression import cast
db_session.query(Resource).filter(
    Resources.data["lastname"] == cast("Doe", JSON)
).all()
If you are using JSON type (not JSONB) the following worked for me:
Note the '"object"'
query = db.session.query(ProductSchema).filter(
    cast(ProductSchema.ProductJSON["type"], db.String) != '"object"'
)
I have some GeoJSON in a JSON (not JSONB) type column and none of the existing solutions worked, but as it turns out, in version 1.3.11 some new data casters were added, so now you can:
records = db_session.query(Resource).filter(Resources.data["lastname"].as_string() == "Doe").all()
Reference: https://docs.sqlalchemy.org/en/14/core/type_basics.html#sqlalchemy.types.JSON
Casting JSON Elements to Other Types

Index operations, i.e. those invoked by calling upon the expression using the Python bracket operator as in some_column['some key'], return an expression object whose type defaults to JSON by default, so that further JSON-oriented instructions may be called upon the result type. However, it is likely more common that an index operation is expected to return a specific scalar element, such as a string or integer. In order to provide access to these elements in a backend-agnostic way, a series of data casters are provided:

Comparator.as_string() - return the element as a string
Comparator.as_boolean() - return the element as a boolean
Comparator.as_float() - return the element as a float
Comparator.as_integer() - return the element as an integer

These data casters are implemented by supporting dialects in order to assure that comparisons to the above types will work as expected, such as:

# integer comparison
data_table.c.data["some_integer_key"].as_integer() == 5

# boolean comparison
data_table.c.data["some_boolean"].as_boolean() == True
According to sqlalchemy.types.JSON, you can do it like this:
from sqlalchemy import JSON
from sqlalchemy import cast
records = db_session.query(Resource).filter(Resources.data["lastname"] == cast("Doe", JSON)).all()
According to this, pre-1.3.11, the most robust way should be like this, as it works for multiple database types, e.g. SQLite, MySQL, and Postgres:
from sqlalchemy import cast, JSON, type_coerce, String
db_session.query(Resource).filter(
    cast(Resources.data["lastname"], String) == type_coerce("Doe", JSON)
).all()
From version 1.3.11 onward, type-specific casters are the new and neater way to handle this:
db_session.query(Resource).filter(
    Resources.data["lastname"].as_string() == "Doe"
).all()

Composite views in couchbase

I'm new to Couchbase and am struggling to get a composite index to do what I want it to. The use-case is this:
I have a set of "Enumerations" being stored as documents
Each has a "last_updated" field which -- as you may have guessed -- stores the last time that the field was updated
I want to be able to show only those enumerations which have been updated since some given date but still sort the list by the name of the enumeration
I've created a Couchbase View like this:
function (doc, meta) {
    var time_array;
    if (doc.doc_type === "enum") {
        if (doc.last_updated) {
            time_array = doc.last_updated.split(/[- :]/);
        } else {
            time_array = [0, 0, 0, 0, 0, 0];
        }
        for (var i = 0; i < time_array.length; i++) {
            time_array[i] = parseInt(time_array[i], 10);
        }
        time_array.unshift(meta.id);
        emit(time_array, null);
    }
}
I have one record that doesn't have the last_updated field set and therefore has its time fields all set to zero. I thought as a first test I could filter that result out, so I put in the following:
startkey = ["a",2012,0,0,0,0,0]
endkey = ["Z",2014,0,0,0,0,0]
While the list is sorted by the 'id', it isn't filtering anything! Can anyone tell me what I'm doing wrong? Is there a better composite view to achieve these results?
In Couchbase, when you query a view by startkey/endkey, you're unable to filter results by 2 or more properties. Couchbase has only one index, so it will filter your results only by the first element of the key. So your query will be identical to a query with:
startkey = ["a"]
endkey = ["Z"]
Here is a link to the complete answer by Filipe Manana on why it can't be filtered by those dates.
Here is a quote from it:
For composite keys (arrays), elements are compared from left to right and comparison finishes as soon as a element is different from the corresponding element in the other key (same as what happens when comparing strings à la memcmp() or strcmp()).
So if you want to have a view that filters by date, the date array should go first in the composite key, as in the sketch below.
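Here is the question's view with the emit reordered (same parsing logic as the original; the only change is that the date elements come first and meta.id goes last, so startkey/endkey can range-filter on the date):

function (doc, meta) {
    var time_array;
    if (doc.doc_type === "enum") {
        if (doc.last_updated) {
            time_array = doc.last_updated.split(/[- :]/);
        } else {
            time_array = [0, 0, 0, 0, 0, 0];
        }
        for (var i = 0; i < time_array.length; i++) {
            time_array[i] = parseInt(time_array[i], 10);
        }
        // date first, id last: startkey/endkey arrays of date elements
        // now select a date range; the id only breaks ties
        time_array.push(meta.id);
        emit(time_array, null);
    }
}

With this ordering the results come back sorted by date rather than by name, so the sort by enumeration name would have to be done on the client.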