R2DBC: Why do RowsFetchSpec<T> operators (.all(), .first(), .one()) signal onComplete although the record is stored in the database? - spring-data-r2dbc

I am using DatabaseClient to build a custom repository. After I insert or update an Item, I need that row data to return the saved/updated Item. I just can't wrap my head around why .all(), .first(), and .one() are not returning the result Map, although I can see that the data is inserted/updated in the database. They just signal onComplete. But .rowsUpdated() returns 1 row updated.
I observed this behaviour with H2 and MS SQL Server.
I'm new to R2DBC. What am I missing? Any ideas?
@Transactional
public Mono<Item> insertItem(Item entity) {
    return dbClient
        .sql("insert into items (creationdate, name, price, traceid, referenceid) VALUES (:creationDate, :name, :price, :traceId, :referenceId)")
        .bind("creationDate", entity.getCreationDate())
        .bind("name", entity.getName())
        .bind("price", entity.getPrice())
        .bind("traceId", entity.getTraceId())
        .bind("referenceId", entity.getReferenceId())
        .fetch()
        .first() //.all() //.one()
        .map(Item::new)
        .doOnNext(item -> LOGGER.info(String.format("Item: %s", item)));
}
The table looks like this:
CREATE TABLE [dbo].[items](
[creationdate] [bigint] NOT NULL,
[name] [nvarchar](32) NOT NULL,
[price] [int] NOT NULL,
[traceid] [nvarchar](64) NOT NULL,
[referenceid] [int] NOT NULL,
PRIMARY KEY (name, referenceid)
)
Thanks!

This is the behavior of an insert/update statement in the database: it does not return the inserted/updated rows.
It returns the number of inserted/updated rows.
It may also return some values generated by the database (such as an auto-increment ID or a generated UUID) if you add the following line:
.filter(statement -> statement.returnGeneratedValues())
where you may pass the specific generated columns to return as parameters. However, this has limitations depending on the database (for example, MySQL can only return the last generated ID of an auto-increment column, even if you insert multiple rows).
If you want to get the inserted/updated values from the database, you need to do a select.
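For example, here is a minimal sketch of that select-after-insert approach with DatabaseClient, re-reading the row via the question's (name, referenceid) primary key. The method name insertAndReload is hypothetical, and the Map-based Item constructor is assumed, mirroring the question's .map(Item::new):
@Transactional
public Mono<Item> insertAndReload(Item entity) { // hypothetical name, not from the question
    return dbClient
        .sql("insert into items (creationdate, name, price, traceid, referenceid) VALUES (:creationDate, :name, :price, :traceId, :referenceId)")
        .bind("creationDate", entity.getCreationDate())
        .bind("name", entity.getName())
        .bind("price", entity.getPrice())
        .bind("traceId", entity.getTraceId())
        .bind("referenceId", entity.getReferenceId())
        .fetch()
        .rowsUpdated() // the insert itself only reports how many rows were written
        .then(dbClient
            .sql("select creationdate, name, price, traceid, referenceid from items where name = :name and referenceid = :referenceId")
            .bind("name", entity.getName())
            .bind("referenceId", entity.getReferenceId())
            .fetch()
            .one()) // the select is what actually emits the row as a Map<String, Object>
        .map(Item::new)
        .doOnNext(item -> LOGGER.info(String.format("Item: %s", item)));
}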

Related

How to bulk update SQLite table entries in Flutter

My Flutter app makes use of an SQLite database with multiple tables and Firebase authentication. Some time after publishing the app, I added a User ID column to each table, so that I could ensure that results could be filtered by users and they would only see their own data, should another user log into the app on the same device.
When I created the new column in each table though, for the entries that already existed, I allowed the data in the new column to be null to prevent an error with the database. The problem with that is that users who made the initial calculations where the User ID is null will now 'lose' that data, as it won't show up when I implement filtering with User ID.
What I want to do is bulk-update the null values in that column of each SQLite database table, if a null value exists in that table. The null values should be replaced by the current logged in User ID. I'm not sure of the best way to do this but my idea is something like this, with a database update function after it has been initialised:
Future _update(Database db, int oldVersion, int newVersion) async {
  if (oldVersion < newVersion) {
    Future<int> updateDB(String value) async {
      final db = await instance.database;
      return db.update(
        values,
        value.toJson(),
        where: '${ValueFields.id} = ?',
        whereArgs: [value.id],
      );
    }
  }
}
Thing is, this doesn't seem to actually work and of course doesn't bulk update all rows in that particular column. Can someone advise on how I can build the bulk update function, to take all null User ID values in a particular table and change them to the current logged in user?
Thank you!
What you are doing never updates anything when the id is null. Even if null were passed as the id via whereArgs, it would never update any rows, because in SQL a null is considered unique/distinct and is never equal to another null. Instead you can use IS NULL. (And if the id passed in is not null, the statement would not touch the rows where the id is null anyway.)
You can update all rows in one go if you use (in SQL)
UPDATE the_table SET the_column = the_value WHERE the_column IS NULL
which I believe would be along the lines of:
return db.update(
  values,
  value.toJson(),
  where: '${ValueFields.id} IS NULL', /*<<<<< CHANGED WHERE CLAUSE */
  whereArgs: [], /*<<<<< CHANGED NO WHERE ARGS*/
);

PostgresPagingQueryProvider returns 0 columns

I have a PostgresPagingQueryProvider with a simple query:
Select id from action
This is expected to return one column. When I run the batch I get an exception saying:
PreparedStatementCallback; SQL [SELECT id FROM action WHERE 1=1 ORDER BY id ASC LIMIT 1000];
The column index is out of range: 1, number of columns: 0.; nested exception is org.postgresql.util.PSQLException:
The column index is out of range: 1, number of columns: 0.
This is the query provider:
@Bean
public PostgresPagingQueryProvider queryProvider() {
    PostgresPagingQueryProvider provider = new PostgresPagingQueryProvider();
    HashMap<String, Order> sorting = new HashMap<>();
    sorting.put("id", Order.ASCENDING);
    provider.setSelectClause("Select id");
    provider.setFromClause("from action");
    provider.setWhereClause("where 1=1");
    provider.setSortKeys(sorting);
    return provider;
}
And this is the creation of the table
CREATE TABLE action (
id int4 NOT NULL,
"timestamp" timestamp NULL,
CONSTRAINT action_pkey PRIMARY KEY (id)
);
When using the same query with ItemReader this works just fine.
OK, I have figured it out.
The problem was that, due to a copy-paste of mine, I was setting the first parameter of the query to STOPPED.
As you may notice, the query does not accept any parameters, so deleting that binding fixed the issue.
On top of that, I thought the part where I pass in the HashMap of parameter values was unrelated, so it is not included in the question, which means no one could really have answered this.
I am going to leave this here as a tribute to the monumental errors and wasted time a copy-paste can lead to.
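For reference, here is a minimal sketch of wiring the reader to this query provider without any parameter values (the reader name, page size, and row mapper below are illustrative assumptions, not from the original question):
@Bean
public JdbcPagingItemReader<Long> actionIdReader(DataSource dataSource,
                                                 PostgresPagingQueryProvider queryProvider) {
    return new JdbcPagingItemReaderBuilder<Long>()
        .name("actionIdReader")
        .dataSource(dataSource)
        .queryProvider(queryProvider)
        // no .parameterValues(...): the "where 1=1" clause has no placeholders,
        // so binding a value here is what triggers "column index is out of range"
        .pageSize(1000)
        .rowMapper((rs, rowNum) -> rs.getLong("id"))
        .build();
}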

How to return a plain value from a Knex / Postgresql query?

I'm trying to return a simple, scalar string value from a Postgres DB using Knex. So far, everything I do returns a JSON object with a key (the column name) and the value, so I have to reach into the object to get the value. If I return multiple rows, then I get multiple JSON objects, each one repeating the key.
I could be returning multiple columns, in which case each row would at least need to be an array. I'm not looking for a special case where specifying a single column returns the value without the array -- I'm OK reaching into the array. I want to avoid the JSON object with the repetitive listing of column names as keys.
I've scoured the Knex docs but don't see how to control the output.
My table is a simple mapping table with two string columns:
CREATE TABLE public._suite
(
piv_id character(18) NOT NULL,
sf_id character(18) NOT NULL,
CONSTRAINT _suite_pkey PRIMARY KEY (piv_id)
)
When I build a query using Knex methods like
let myId = 'foo', table = '_suite';
return db(table).where('piv_id', myId).first(['sf_id'])
.then( function(id) { return(id); });
I get {"sf_id":"a4T8A0000009PsfUAE"} ; what I want is just "a4T8A0000009PsfUAE"
If I use a raw query, like
return db.raw(`select sf_id from ${table} where piv_id = '${myId}'`);
I get a much larger JSON object describing the result:
{"command":"SELECT","rowCount":1,"oid":null,"rows":[{"sf_id":"a4T8A0000009Q9HUAU"}],"fields":[{"name":"sf_id","tableID":33799,"columnID":2,"dataTypeID":1042,"dataTypeSize":-1,"dataTypeModifier":22,"format":"text"}],"_parsers":[null],"RowCtor":null,"rowAsArray":false}
What do I have to do to just get the value itself? (Again, I'm OK if it's in an array -- I just don't want the column names.)
Take a look at the pluck method.
db(table).where('piv_id', myId).pluck('sf_id'); // => will return you ["a4T8A0000009PsfUAE"]

How to insert an item at the top of a table

How do I insert an item at the top of a table in PostgreSQL? Is that possible? In the table I have only two fields, both text. The first is the primary key.
CREATE TABLE news_table (
title text not null primary key,
url text not null
);
I need a simple query for a program in Java.
OK, this is my code:
get("/getnews", (request, response) -> {
List<News> getNews = newsService.getNews();
List<News> getAllNews = newsService.getAllNews();
try (Connection connection = DB.sql2o.open()) {
String sql = "INSERT INTO news_table(title, url) VALUES (:title, :url)";
for (News news : getNews) {
if (!getAllNews.contains(news)) {
connection.createQuery(sql, true)
.addParameter("title", news.getTitle())
.addParameter("url", news.getUrl())
.executeUpdate()
.getKey();
}
}
}
return newsService.getNews();
}, json());
The problem is that when the getnews method is called a second time, the new news is added at the end of the table, so the news is no longer in chronological order. How can I resolve this? I use Sql2o + Spark Java.
Maybe I already know the answer: do I need to reverse the getNews list before checking whether getAllNews contains the objects from getNews?
There is no start or end in a table. If you want to sort your data, just use an ORDER BY in your SELECT statements. Without ORDER BY, there is no order.
Relational theory, the mathematical foundation of relational databases, lays down certain conditions that relations (represented in real databases as tables) must obey. One of them is that they have no ordering (i.e., the rows will neither be stored nor retrieved in any particular order, since they are treated as a mathematical set). It's therefore completely under the control of the RDBMS where a new row is entered into a table.
Hence there is no way to ensure a particular ordering of the data without using an ORDER BY clause when you retrieve the data.
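As a sketch of that approach with Sql2o (assuming you add a column that records insertion order, e.g. a created_at timestamp, which the original news_table does not have):
String sql = "SELECT title, url FROM news_table ORDER BY created_at DESC";
try (Connection connection = DB.sql2o.open()) {
    // newest rows first; without the ORDER BY the database may return rows in any order
    return connection.createQuery(sql).executeAndFetch(News.class);
}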

EF 4.0 Entity does not pick up new values after insert (select entity after insert)

I am using Entity Framework 4.0 POCO entities.
I have mapped a custom stored procedure to the insert:
PROCEDURE [dbo].[usp_MyTable_Insert]
(
    @Value1 char(1),
    @Value2 varchar(5),
    @Value3 varchar(20)
    ....
)
AS
BEGIN TRANSACTION
INSERT INTO "dbo"."MyTable"
(
    "Value1",
    "Value2",
    "Value3"
)
VALUES
(
    @Value1,
    @Value2,
    @Value3
)
DECLARE @Id int
--Get the latest Id.
SET @Id = ( SELECT CAST(@@IDENTITY AS INT) )
--update the table with some values
UPDATE "dbo"."MyTable"
SET Value3 = ( SELECT SomeTableColumn
               FROM SomeTable
               WHERE Something = Something
             )
WHERE [Id] = @Id
COMMIT TRANSACTION
SELECT @Id AS "Id"
END
It inserts the entity into the database, then updates some of the columns, and then returns the identity. All pretty simple.
public int InsertRecord(RecEntity recEntity)
{
    context.AddObject("RecEntities", recEntity);
    context.SaveChanges();
    return recEntity.Id;
}
The insert method works well.
Then I need to refresh the current entity with the values the stored procedure inserted.
I have a method in my repository to retrieve data:
public RecEntity SingleRecEntity(Expression<Func<RecEntity, bool>> where)
{
    return context.RecEntities.Single(where);
}
When I call this method, the values inserted by the stored procedure don't come back on the entity.
id = repository.InsertRecord(recEntity);
recEntity = repository.SingleBrokerPreRegistration(x => x.Id == id); // new values didn't come back here from the database
I ran the query generated by Entity Framework in Query Analyzer, and it returns all the up-to-date values.
But for some reason the data context doesn't want to update this entity.
There should probably be some way to change this.
Maybe someone can explain this behaviour.
Need help.
Try the Refresh method with the StoreWins parameter.
EF does not refresh the values when there is already an attached object with the Entity Key specified, unless the Refresh method is called explicitly.
If you run .Load(Objects.MergeOption.OverwriteChanges) on the collection you'll get any newly added items. If you want the deleted items to be "refreshed" you'll need to detach the entities from the collection before running .Load.
Putting it all together (sorry about the VB):
For Each child In Parent.ChildCollection.ToArray()
    context.Detach(child)
Next
Parent.ChildCollection.Load(Objects.MergeOption.OverwriteChanges)
This works for me, but if there's a more elegant way I'd love to see it!