In my CRUD REST service I do an insert into a DB and want to respond to the caller with the newly created record. I am looking for a nice way to convert the map to JSON.
I am running Ballerina 0.991.0 and using PostgreSQL.
The return of the update ("INSERT ...") operation is a map.
I tried convert and stamp, but it did not work for me.
import ballerinax/jdbc;
...
jdbc:Client certificateDB = new({
    url: "jdbc:postgresql://localhost:5432/certificatedb",
    username: "USER",
    password: "PASS",
    poolOptions: { maximumPoolSize: 5 },
    dbOptions: { useSSL: false }
}); ...
var ret = certificateDB->update("INSERT INTO certificates(certificate, typ, scope_) VALUES (?, ?, ?)", certificate, typ, scope_);
// here is the data, it is map<anydata>
ret.generatedKeys
The map should know which data type each value has, right?
Then it should be easy to convert it to JSON, like this:
{"certificate":"{certificate:
"-----BEGIN
CERTIFICATE-----\nMIIFJjCCA...tox36A7HFmlYDQ1ozh+tLI=\n-----END
CERTIFICATE-----", typ: "mqttCertificate", scope_: "QARC", id_:
223}"}
Right now I do a foreach and build the JSON manually, which is quite ugly. Maybe somebody has some tips on how to do this in a nicer way.
It cannot be excluded that this is due to my lack of programming skills :-)
The return value of the JDBC update remote function is sql:UpdateResult|error.
The sql:UpdateResult is a record with two fields (see https://ballerina.io/learn/api-docs/ballerina/sql.html#UpdateResult):
updatedRowCount of type int - the number of rows affected/updated by the given statement execution.
generatedKeys of type map - a map of the auto-generated column values resulting from the update operation (only if the corresponding table has auto-generated columns), given as key-value pairs of column name and column value. So this map contains only the auto-generated column values.
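Note that because the return type is the union sql:UpdateResult|error, you need a type test before touching generatedKeys. A minimal sketch, assuming the certificateDB client and parameters from the question:

import ballerina/io;
import ballerina/sql;

// certificateDB, certificate, typ and scope_ as defined in the question
var ret = certificateDB->update("INSERT INTO certificates(certificate, typ, scope_) VALUES (?, ?, ?)",
    certificate, typ, scope_);
if (ret is sql:UpdateResult) {
    // generatedKeys holds only the auto-generated column values, keyed by column name
    io:println(ret.generatedKeys);
} else {
    io:println("Insert failed: ", ret.reason());
}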
But your requirement is to get the entire row which was inserted by the given update function. It can't be returned by the update operation itself. To get it, you have to execute the JDBC select operation with matching criteria. The select operation returns a table or an error, and that table can easily be converted to JSON using the convert() function.
For example: let's say the certificates table has an auto-generated primary key column named 'cert_id'. Then you can retrieve that id value using the code below.
int generatedID = <int>updateRet.generatedKeys.CERT_ID;
Then use that generated id to query the data.
var ret = certificateDB->select("SELECT certificate, typ, scope_ FROM certificates WHERE cert_id = ?", (), generatedID);
json convertedJson = {};
if (ret is table<record {}>) {
    var jsonConversionResult = json.convert(ret);
    if (jsonConversionResult is json) {
        convertedJson = jsonConversionResult;
    }
}
Refer to the example https://ballerina.io/learn/by-example/jdbc-client-crud-operations.html for more details.
Given a table with a column of type jsonb[], how do I insert a json array into the column?
Using the provided formatters :array, :json won't work in this instance - unless I am missing the correct combination or something.
const links = [
    {
        title: 'IMDB',
        url: 'https://www.imdb.com/title/tt0076759'
    },
    {
        title: 'Rotten Tomatoes',
        url: 'https://www.rottentomatoes.com/m/star_wars'
    }
];
const result = await db.none(`INSERT INTO tests (links) VALUES ($1:json)`, [links]);
You do not need the library's :json filter in this case, as you need an array of JSON objects, and not a JSON with an array of JSON objects.
The former is formatted correctly by default, which then only needs ::json[] type casting:
await db.none(`INSERT INTO tests(links) VALUES($1::json[])`, [links]);
Other Notes
Use pg-monitor or the query event to output the queries being executed, for easier diagnostics (see the sketch after these notes).
Method none can only resolve with null, so there is no point storing the result in a variable.
Library pg-promise does not have any :array filter, see supported filters.
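As a quick illustration of the first note, attaching pg-monitor takes only a couple of lines; a sketch, where initOptions stands for whatever initialization options object you pass to pg-promise:

const initOptions = {/* pg-promise initialization options */};
const pgp = require('pg-promise')(initOptions);
const monitor = require('pg-monitor');

// From here on, every executed query is logged to the console,
// with the actual formatted values inlined.
monitor.attach(initOptions);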
I'm trying to parse a selected result from PostgreSQL in Groovy, but something goes wrong when I try to parse it.
I have tried a lot of things, but I haven't reached the goal.
I have three columns in Postgres. One of the columns (document) is of jsonb type, and I want to parse it.
Here is what I've tried:
// instance of connection to Postgres
PgAdmin pg = new PgAdmin(/* connection string */)
List<GroovyRowResult> result = pg.getSelectResult(/* select script goes here */)
for (Map oneRow in result) {
    def document = oneRow.get("document")
    println document // Well, from here it works: it prints the jsonb data stored in the document column.
    def customerName = document.get("CustomerName") // It fails with an error here; CustomerName is inside the jsonb.
}
I've also tried to use JsonSlurper, but it also fails.
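A likely cause (a sketch, assuming the standard PostgreSQL JDBC driver): a jsonb column comes back as org.postgresql.util.PGobject rather than a map, so get("CustomerName") fails, while println works because PGobject's toString() prints the raw JSON text. Extracting that text and parsing it with JsonSlurper would look like this; parseDocument is a hypothetical helper name:

import groovy.json.JsonSlurper
import org.postgresql.util.PGobject

// Pull a parsed JSON map out of one result row.
def parseDocument(Map oneRow) {
    def raw = oneRow.get("document")
    // With the PostgreSQL JDBC driver, jsonb arrives as PGobject;
    // getValue() returns the raw JSON text, which JsonSlurper can parse.
    String jsonText = (raw instanceof PGobject) ? raw.getValue() : raw.toString()
    return new JsonSlurper().parseText(jsonText)
}

def document = parseDocument(oneRow)
println document.CustomerName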
I'm trying to return a simple, scalar string value from a Postgres DB using Knex. So far, everything I do returns a JSON object with a key (the column name) and the value, so I have to reach into the object to get the value. If I return multiple rows, then I get multiple JSON objects, each one repeating the key.
I could be returning multiple columns, in which case each row would at least need to be an array. I'm not looking for a special case where specifying a single column returns the value without the array -- I'm OK reaching into the array. I want to avoid the JSON object with the repetitive listing of column names as keys.
I've scoured the Knex docs but don't see how to control the output.
My table is a simple mapping table with two string columns:
CREATE TABLE public._suite
(
    piv_id character(18) NOT NULL,
    sf_id character(18) NOT NULL,
    CONSTRAINT _suite_pkey PRIMARY KEY (piv_id)
)
When I build a query using Knex methods like
let myId = 'foo', table = '_suite';
return db(table).where('piv_id', myId).first(['sf_id'])
    .then(function (id) { return id; });
I get {"sf_id":"a4T8A0000009PsfUAE"} ; what I want is just "a4T8A0000009PsfUAE"
If I use a raw query, like
return db.raw(`select sf_id from ${table} where piv_id = '${myId}'`);
I get a much larger JSON object describing the result:
{"command":"SELECT","rowCount":1,"oid":null,"rows":[{"sf_id":"a4T8A0000009Q9HUAU"}],"fields":[{"name":"sf_id","tableID":33799,"columnID":2,"dataTypeID":1042,"dataTypeSize":-1,"dataTypeModifier":22,"format":"text"}],"_parsers":[null],"RowCtor":null,"rowAsArray":false}
What do I have to do to just get the value itself? (Again, I'm OK if it's in an array -- I just don't want the column names.)
Take a look at the pluck method.
db(table).where('piv_id', myId).pluck('sf_id'); // => will return you ["a4T8A0000009PsfUAE"]
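Since pluck resolves with a bare array of values rather than row objects, the single scalar is one index away:

// pluck strips the column-name keys and yields just the values
const ids = await db(table).where('piv_id', myId).pluck('sf_id');
const sfId = ids[0]; // "a4T8A0000009PsfUAE"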
I've been trying for a while to insert into MongoDB using only records, with no success.
My problem is that I want to create a simple insert function to which I send a generic value, and have it inserted into the database.
Like so.
let insert (value: 'a) =
    let collection = MongoClient().GetDatabase("db").GetCollection<'a> "col"
    collection.InsertOne value
From this function, I tried inserting the following records.
// Error that it can't set the Id
type t1 = {
    Id: ObjectId
    Text: string
}

// Creates the record perfectly but doesn't generate a new Id
type t2 = {
    Id: string
    Text: string
}

// Creates the record and autogenerates the Id, but doesn't insert the Text, and there are two Ids (_id, Id#)
type t3 = {
    mutable Id: ObjectId
    Text: string
}

// Creates the record and autogenerates the Id, but for every property it generates two on MongoDB (_id, Id#, Text, Text#)
type t4 = {
    mutable Id: ObjectId
    mutable Text: string
}
So, can anyone think of a solution for this, or am I stuck having to use a class?
// Works!!!
type t5() =
    member val Id = ObjectId.Empty with get, set
    member val Name = "" with get, set
Also, does anyone have any idea why, when the C# MongoDB library translates the mutable fields, it produces properties with a # at the end?
I would be fine with having all my properties set as mutable, although this wouldn't be my first choice; having it create multiple properties in the DB is quite bad.
You could try annotating your records with CLIMutable (and no mutable fields).
The #s end up in the DB because MongoDB uses reflection, and F# implements mutable record fields with backing fields named fieldName#.
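A minimal sketch of that suggestion, reusing the insert function and the Id/Text shape from the question (the type name T1 and the sample text are just for illustration):

open MongoDB.Bson
open MongoDB.Driver

// CLIMutable makes the compiler emit a parameterless constructor and
// property setters for the record, so the C# driver can hydrate it and
// assign the generated _id, while the record stays immutable in F# code
// and no fieldName# backing properties leak into the database.
[<CLIMutable>]
type T1 = {
    Id: ObjectId
    Text: string
}

let insert (value: 'a) =
    let collection = MongoClient().GetDatabase("db").GetCollection<'a> "col"
    collection.InsertOne value

// The driver should replace the empty ObjectId with a generated one.
insert { Id = ObjectId.Empty; Text = "some text" }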
How do I insert an item at the top of a table in PostgreSQL? Is that possible? In the table I have only two fields, both text. The first is the primary key.
CREATE TABLE news_table (
    title text not null primary key,
    url text not null
);
I need a simple query for my Java program.
OK, this is my code:
get("/getnews", (request, response) -> {
List<News> getNews = newsService.getNews();
List<News> getAllNews = newsService.getAllNews();
try (Connection connection = DB.sql2o.open()) {
String sql = "INSERT INTO news_table(title, url) VALUES (:title, :url)";
for (News news : getNews) {
if (!getAllNews.contains(news)) {
connection.createQuery(sql, true)
.addParameter("title", news.getTitle())
.addParameter("url", news.getUrl())
.executeUpdate()
.getKey();
}
}
}
return newsService.getNews();
}, json());
The problem is that when the getnews method is called a second time, the new news rows are added at the end of the table, so the news is no longer in chronological order. How can I resolve this? I use Sql2o + Spark Java.
Maybe I already know: do I need to reverse the getNews list before checking whether getAllNews contains each object?
There is no start or end in a table. If you want to sort your data, just use an ORDER BY in your SELECT statements. Without ORDER BY, there is no order.
Relational theory, the mathematical foundation of relational databases, lays down certain conditions that relations (represented in real databases as tables) must obey. One of them is that they have no ordering (i.e., the rows will neither be stored nor retrieved in any particular order, since they are treated as a mathematical set). It's therefore completely under the control of the RDBMS where a new row is entered into a table.
Hence there is no way to ensure a particular ordering of the data without using an ORDER BY clause when you retrieve the data.
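For example, a sketch: the created_at column is an assumption, since news_table has no timestamp to order by, but adding one makes the chronology explicit:

-- Assumed addition: a timestamp column recording when each row was inserted.
ALTER TABLE news_table ADD COLUMN created_at timestamptz NOT NULL DEFAULT now();

-- Newest news first, regardless of where the rows physically live.
SELECT title, url
FROM news_table
ORDER BY created_at DESC;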