PostgreSQL field type to store string, int, or json

I want to create a table for variables, with a definition like the one below:
variable
--------
id int
var_type int 0: number, 1: string, 2: json
var_name int
var_value ??? varchar or jsonb?
If I use varchar, how do I store the json-type variables, and if I use jsonb, how do I store the int and string values?
An example of a json value that would be stored:
[{"name": "Andy", "email" : "andy#mail.id"},{"name": "Cindy", "email" : "cindy#mail.id"}]
TIA
Beny

When you have data and you don't know the structure, use a single jsonb column. JSON can handle strings, numbers, and more JSON.
{
    "string": "basset hounds got long ears",
    "number": 23.42,
    "json": [1, 2, 3, 4, 5]
}
Don't try to cram them all into a single array. Put them in separate rows.
One row: {"name": "Andy", "email" : "andy#mail.id"}
Another row: {"name": "Cindy", "email" : "cindy#mail.id"}
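A minimal sketch of that idea, assuming a table named variable with a single jsonb value column (names are taken from the question, so treat them as illustrative):
-- Illustrative schema only: one jsonb column holds whatever value a variable has.
create table variable (
    id bigserial primary key,
    var_value jsonb
);

-- Each JSON object from the example goes into its own row.
insert into variable (var_value) values
    ('{"name": "Andy", "email": "andy#mail.id"}'),
    ('{"name": "Cindy", "email": "cindy#mail.id"}');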
However, your example feels like it's avoiding designing a schema. JSONB is useful, but overusing it defeats the point of a relational database.
create table people (
    id bigserial primary key,

    -- Columns for known keys, which can have constraints.
    name text not null,
    email text not null,

    -- JSONB for extra keys you can't predict.
    data jsonb
);
Use the JSON operators to query individual pairs.
select
    name, email, data->>'favorite dog breed'
from people;
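As a hedged illustration of this pattern (reusing the people table sketched above; the values and the 'favorite dog breed' key are made up):
-- Known keys go into regular columns, everything else into the jsonb column.
insert into people (name, email, data)
values ('Andy', 'andy#mail.id', '{"favorite dog breed": "basset hound"}');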

Related

How to break out jsonb array into rows for a postgresql query

My objective is to break out the results of a query on a table with a json column containing an array into individual rows, but I'm not sure about the right syntax. I'm running the following query:
SELECT
jobs.id,
templates.Id,
templates.Version,
templates.StepGroupId,
templates.PublicVersion,
templates.PlannedDataSheetIds,
templates.SnapshottedDataSheetValues
FROM jobs,
jsonb_to_recordset(jobs.source_templates) AS templates(Id, Version, StepGroupId, PublicVersion,
PlannedDataSheetIds, SnapshottedDataSheetValues)
On the following table:
create table jobs
(
    id uuid default uuid_generate_v4() not null
        constraint jobs_pkey
        primary key,
    source_templates jsonb
);
with the jsonb column containing data in this format:
[
    {
        "Id": "94729e08-7d5c-459d-9244-f66e17059fc4",
        "Version": 1,
        "StepGroupId": "0274590b-c08d-4963-b37e-8fc8f25151d2",
        "PublicVersion": 1,
        "PlannedDataSheetIds": null,
        "SnapshottedDataSheetValues": null
    },
    {
        "Id": "66791bfd-8cdb-43f7-92e6-bfb45b0f780f",
        "Version": 4,
        "StepGroupId": "126404c5-ed1e-4796-80b1-ca68ad486682",
        "PublicVersion": 1,
        "PlannedDataSheetIds": null,
        "SnapshottedDataSheetValues": null
    },
    {
        "Id": "e3b31b98-8052-40dd-9405-c316b9c62942",
        "Version": 4,
        "StepGroupId": "bc6a9dd3-d527-449e-bb36-39f03eaf87b9",
        "PublicVersion": 1,
        "PlannedDataSheetIds": null,
        "SnapshottedDataSheetValues": null
    }
]
I get an error:
[42601] ERROR: a column definition list is required for functions returning "record"
What is the right way to do this without generating the error?
You need to define the data types in the column definition list:
SELECT
jobs.id,
templates.Id,
templates.Version,
templates.StepGroupId,
templates.PublicVersion,
templates.PlannedDataSheetIds,
templates.SnapshottedDataSheetValues
FROM jobs,
jsonb_to_recordset(jobs.source_templates)
AS templates(Id UUID, Version INT, StepGroupId UUID, PublicVersion INT,
PlannedDataSheetIds INT, SnapshottedDataSheetValues INT)
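One caveat: jsonb_to_recordset matches JSON keys to column names case-sensitively, so with mixed-case keys like "Id" and "Version" you may need quoted identifiers in the column definition list. A sketch of that variant (declaring the two null-valued fields as jsonb is an assumption, since their structure isn't shown):
-- Quoted column names preserve the mixed case of the JSON keys.
SELECT
    jobs.id,
    templates."Id",
    templates."Version",
    templates."StepGroupId",
    templates."PublicVersion",
    templates."PlannedDataSheetIds",
    templates."SnapshottedDataSheetValues"
FROM jobs,
     jsonb_to_recordset(jobs.source_templates)
         AS templates("Id" UUID, "Version" INT, "StepGroupId" UUID, "PublicVersion" INT,
                      "PlannedDataSheetIds" JSONB, "SnapshottedDataSheetValues" JSONB);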

Postgresql json select from values in second layer of containment of arrays

I have a jsonb column 'data' that contains tree-like json, for example:
{
    "libraries": [
        {
            "books": [
                {
                    "name": "mybook",
                    "type": "fiction"
                },
                {
                    "name": "yourbook",
                    "type": "comedy"
                },
                {
                    "name": "hisbook",
                    "type": "fiction"
                }
            ]
        }
    ]
}
I want to be able to write an index-using query that selects values from the nested "book" objects according to their type,
so all book names that are fiction.
I was able to do this using jsonb_array_elements and a join query, but as I understand it, this would not be optimized by the GIN index.
My query is:
select books->'name'
from data,
     jsonb_array_elements(data->'libraries') libraries,
     jsonb_array_elements(libraries->'books') books
where books->>'type' = 'fiction'
If the example data you are showing is typical of what is in your JSON, I would suggest that you may be setting things up wrong.
Why not make a library table and a book table and not use JSON at all? JSON does not seem to be the right choice here.
CREATE TABLE library
(
    id serial,
    name text
);

CREATE TABLE book
(
    isbn BIGINT,
    name text,
    book_type text
);

CREATE TABLE library_books
(
    library_id integer,
    isbn BIGINT
);

select book.*
from library_books
join book on book.isbn = library_books.isbn
where library_id = 1;
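With that layout, the original goal (all book names that are fiction) becomes an ordinary relational query, for example:
-- All fiction books; join through library_books if you need a specific library.
select name
from book
where book_type = 'fiction';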

json type in Cassandra data model?

I am wondering if I could have a json data type in the column family.
My table will have a unique row key, a column named "tweets_json", and a column value holding the json content.
How would I create such a table in CQL/Cassandra, or using the Python driver or CQLengine?
import json

tweet_json = json.dumps({
    "tweet_id": tweet_id,
    "body": tweet_body,
    "user_name": this_user,
    "timestamp": timestamp
})
Just have a text column? If you're not looking to filter on the json, then that should be fine. If you are, then you'll need to store it as proper columns, rather than as json.
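A minimal CQL sketch of that suggestion (table and column names are illustrative, following the question's tweets_json idea):
-- Hypothetical table: one row per tweet, the JSON stored opaquely as text.
CREATE TABLE tweets (
    tweet_id text PRIMARY KEY,
    tweets_json text
);

-- The serialized string produced by json.dumps above is simply the column value.
INSERT INTO tweets (tweet_id, tweets_json)
VALUES ('1234', '{"body": "hello", "user_name": "andy", "timestamp": 1400000000}');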

Updating an array of objects fields in crate

I created a table with the following syntax:
create table poll(poll_id string primary key,
poll_type_id integer,
poll_rating array(object as (rating_id integer,fk_user_id string, israted_image1 integer, israted_image2 integer, updatedDate timestamp, createdDate timestamp )),
poll_question string,
poll_image1 string,
poll_image2 string
)
And I inserted a record without the "poll_rating" field, which is actually an array of objects.
Now when I try to update poll_rating with the following command:
update poll set poll_rating = [{"rating_id":1,"fk_user_id":-1,"israted_image1":1,"israted_image2":0,"createddate":1400067339.0496}] where poll_id = "f748771d7c2e4616b1865f37b7913707";
I'm getting an error message like this:
"SQLParseException[line 1:31: no viable alternative at input '[']; nested: ParsingException[line 1:31: no viable alternative at input '[']; nested: NoViableAltException;"
Can anyone tell me why I get this error when I try to update the array of objects field?
Defining arrays and objects directly in the SQL statement is currently not supported by our SQL parser; please use parameter substitution with placeholders instead, as described here:
https://crate.io/docs/current/sql/rest.html
An example using curl:
curl -sSXPOST '127.0.0.1:4200/_sql?pretty' -d@- <<- EOF
{"stmt": "update poll set poll_rating = ? where poll_id = ?",
"args": [ [{"rating_id":1,"fk_user_id":-1,"israted_image1":1,"israted_image2":0,"createddate":1400067339.0496}], "f748771d7c2e4616b1865f37b7913707" ]
}
EOF

How can I use an hstore column type with Npgsql?

I have a table with the following schema:
CREATE TABLE account
(
id serial primary key,
login varchar(40) not null,
password varchar(40) not null,
data hstore
);
I'd like to use an NpgsqlCommand object with parameters to retrieve and store the account data from my application. Which DbType do I have to use for the NpgsqlParameter? The enum NpgsqlDbType does not have a value for hstore. Can I use a Dictionary or a HashTable as the value of the NpgsqlParameter object?
When I use a JSON column, I can create a parameter of type NpgsqlDbType.Text, use a library like JSON.Net to serialize an object to a JSON string, and send an SQL statement like this:
INSERT INTO account (login, password, data) VALUES (:login, :password, :data::json)
Unfortunately this does not work with an hstore column. I get a syntax error when I try to do this:
INSERT INTO account (login, password, data) VALUES (:login, :password, :data::hstore)
The string I pass to the data parameter looks like this:
'key1 => "value1", key2 => "value2"'
Thank you, Francisco! I saw in the log that the single quotes (') at the beginning and the end of the string are escaped when they are passed to PostgreSQL. When I pass
key1 => "value1", key2 => "value2"
instead, I can insert the data into the hstore column.
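For reference, the statement that effectively reaches PostgreSQL is then the hstore cast applied to the raw string, roughly like this sketch (the login and password values are made up):
-- Illustrative only: the parameterized INSERT from above with example values inlined.
INSERT INTO account (login, password, data)
VALUES ('john', 'secret', 'key1 => "value1", key2 => "value2"'::hstore);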