How to bulk update SQLite table entries in Flutter

My Flutter app uses an SQLite database with multiple tables and Firebase authentication. Some time after publishing the app, I added a User ID column to each table so that results could be filtered by user, and so that if another user logs into the app on the same device they only see their own data.
When I created the new column in each table, though, I allowed the existing entries to keep a null value in that column to avoid a database error. The problem is that users whose earlier calculations have a null User ID will now 'lose' that data, as it won't show up once I start filtering by User ID.
What I want to do is bulk-update the null values in that column of each table, wherever a null value exists, replacing them with the currently logged-in user's ID. I'm not sure of the best way to do this, but my idea is something like the following, with a database update function run after the database has been initialised:
Future _update(Database db, int oldVersion, int newVersion) async {
  if (oldVersion < newVersion) {
    Future<int> updateDB(String value) async {
      final db = await instance.database;
      return db.update(
        values,
        value.toJson(),
        where: '${ValueFields.id} = ?',
        whereArgs: [value.id],
      );
    }
  }
}
The thing is, this doesn't seem to actually work, and of course it doesn't bulk-update all rows in that column. Can someone advise on how I can build the bulk update function, so that it takes all null User ID values in a particular table and changes them to the currently logged-in user?
Thank you!

What you are doing never updates anything when the id is null. Even if null were passed as the id via whereArgs, it would never match any rows, because in SQL one null is never considered equal to another null, so a comparison like the_column = ? cannot find the null rows. Instead you can use IS NULL. And if the id you pass is not null, then of course it won't touch the rows where the id is null either.
You can update them all in one go if you use (in SQL):
UPDATE the_table SET the_column = the_value WHERE the_column IS NULL
which I believe would be along the lines of:-
return db.update(
  values,
  value.toJson(),
  where: '${ValueFields.id} IS NULL', /*<<<<< CHANGED WHERE CLAUSE */
  whereArgs: [], /*<<<<< CHANGED NO WHERE ARGS*/
);
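Putting that together, here is a minimal sketch of one way this could look as an upgrade step. The table name 'calculations', the column name 'userId', and reading the signed-in user's ID from FirebaseAuth are assumptions for illustration only; adjust them to your own schema and auth setup:

import 'package:firebase_auth/firebase_auth.dart';
import 'package:sqflite/sqflite.dart';

// Sketch only: fills in the current user's ID on every row that still
// has a null userId. Table and column names are placeholders.
Future<void> _update(Database db, int oldVersion, int newVersion) async {
  if (oldVersion < newVersion) {
    final currentUserId = FirebaseAuth.instance.currentUser?.uid;
    if (currentUserId != null) {
      await db.update(
        'calculations',
        {'userId': currentUserId},
        where: 'userId IS NULL',
      );
    }
  }
}

Passing this as the onUpgrade callback when opening the database, and repeating the db.update call for each table that has the new column, should cover the existing rows; note the user has to be signed in at the moment the database is opened for currentUser to be non-null.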

Related

I want to create multiple tables in sqflite by calling the same function

I want to create multiple tables in sqflite by calling the same function, because I want to create multiple playlists. The playlist name, which comes from user input, will be the table name.
For this reason, users call the database create function multiple times. But it shows this error:
Unhandled Exception: DatabaseException(no such table: sports (code 1
SQLITE_ERROR): , while compiling: INSERT INTO sports (title, link, logo,
playlistName) VALUES (?, ?, NULL, ?)) sql 'INSERT INTO sports (title, link, logo,
playlistName)
Database Create Code:
Future open(String name) async {
  Directory documentsDirectory = await getApplicationDocumentsDirectory();
  final path = join(documentsDirectory.path, 'playlist1.db');
  _database = await openDatabase(path, version: 2,
      onCreate: (Database db, int version) async {
    await db.execute('''
      create table $name (
        id INTEGER PRIMARY KEY AUTOINCREMENT NOT NULL,
        playlistName TEXT, link TEXT, title TEXT,
        logo TEXT)
      ''');
  });
}
Here, name comes from the user. The first time it works fine, but when creating a second playlist it crashes.
You should prefer to use parameterized queries:
await db.execute("
create table ? (
id INTEGER PRIMARY KEY AUTOINCREMENT NOT NULL,playlistName TEXT, link TEXT, title TEXT,
logo TEXT)
", [name]);
UPDATE
If you need to update the database schema, you need to update the version of the database, like this:
return await openDatabase(
  path,
  version: 2, // <=== Update (increase) this number
  onOpen: (db) {},
  onCreate: createDatabase,
  onUpgrade: upgradeDatabase,
);
Doing this, you ask sqflite to update the schema. Use onUpgrade to apply the new schema.
You can also uninstall the app while you are in the development stage; in that case the new schema should go in onCreate.
To summarise: onCreate is called once, when the database is created, and onUpgrade is called each time the version is increased.
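As a rough sketch of what such an upgradeDatabase handler could look like (the ALTER TABLE statement and the table/column names below are placeholders, not code from the question):

import 'package:sqflite/sqflite.dart';

// Sketch only: runs when the stored database version is lower than the
// version passed to openDatabase, and applies the schema change.
Future<void> upgradeDatabase(Database db, int oldVersion, int newVersion) async {
  if (oldVersion < 2) {
    // Replace with whatever your new schema actually needs.
    await db.execute('ALTER TABLE playlists ADD COLUMN logo TEXT');
  }
}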
UPDATE 2
For reasons I haven't dug into further, the execute method does not substitute the ? placeholder with the table name; parameter placeholders can generally only bind values, not identifiers such as table names. In any case, using a dynamic table name is not a very good design.
So to achieve this, you should not use a parameterized query as I said previously. You should use a concatenated string instead, like this:
await db.rawQuery(
    "create table " + name +
    " (id INTEGER PRIMARY KEY AUTOINCREMENT NOT NULL, playlistName TEXT, link TEXT, title TEXT, logo TEXT)",
    []);

R2DBC: Why do RowsFetchSpec&lt;T&gt; operators (.all(), .first(), .one()) signal onComplete although the record is stored in the database?

I am using DatabaseClient to build a custom repository. After I insert or update an Item, I need that row data to return the saved/updated Item. I just can't wrap my head around why .all(), .first(), .one() are not returning the result map, although I can see that the data is inserted/updated in the database. They just signal onComplete. But .rowsUpdated() returns 1 row updated.
I observed this behaviour with H2 and MS SQL Server.
I'm new to R2DBC. What am I missing? Any ideas?
@Transactional
public Mono<Item> insertItem(Item entity) {
    return dbClient
        .sql("insert into items (creationdate, name, price, traceid, referenceid) VALUES (:creationDate, :name, :price, :traceId, :referenceId)")
        .bind("creationDate", entity.getCreationDate())
        .bind("name", entity.getName())
        .bind("price", entity.getPrice())
        .bind("traceId", entity.getTraceId())
        .bind("referenceId", entity.getReferenceId())
        .fetch()
        .first() //.all() //.one()
        .map(Item::new)
        .doOnNext(item -> LOGGER.info(String.format("Item: %s", item)));
}
The table looks like this:
CREATE TABLE [dbo].[items](
    [creationdate] [bigint] NOT NULL,
    [name] [nvarchar](32) NOT NULL,
    [price] [int] NOT NULL,
    [traceid] [nvarchar](64) NOT NULL,
    [referenceid] [int] NOT NULL,
    PRIMARY KEY (name, referenceid)
)
Thanks!
This is the normal behaviour of an insert/update statement in a database: it does not return the inserted/updated rows.
It returns the number of inserted/updated rows.
It may also return some values generated by the database (such as an auto-increment ID or a generated UUID) if you add the following line:
.filter(statement -> statement.returnGeneratedValues())
where you may pass the specific generated columns as parameters. However, this has limitations depending on the database (for example, MySQL can only return the last generated ID of an auto-increment column, even if you insert multiple rows).
If you want to get the inserted/updated values back from the database, you need to do a select.

Fetching the id field of an entity within an SQL transaction

I have the following schema.
Person(pid, pname)
Beer(bid, bname)
Likes(pid,bid)
I would like to insert a Likes item. However, I am accepting the following format for new users: (pid, pname, bid, bname).
I would like to create a transaction for this to avoid conflicts (this is a highly simplified version of my real problem, but the issue is the same). In my Person table I set pid to auto-increment (or SERIAL in PostgreSQL), and the same goes for bid.
I am stuck at a point where I know the Person does not exist but the beer does. So I have to create a Person, then add an entry to the Likes relation.
As far as I know, when I use setAutoCommit(false) on the connection, the transaction won't be saved until the commit. So, should I change the db design:
Change the auto-increment field to a normal integer, not null field.
In the transaction, after the autoCommit(false) has begun, read the last entry of the person
Increment it by one while creating the new person
Then create likes relation
Or is there another way around this, or am I missing something about transactions?
Here is what I have done so far:
try {
    String add_person_sql = "INSERT INTO Person (name) VALUES(?)";
    PreparedStatement add_person_statement = mydb.prepareStatement(add_person_sql);
    String add_likes_sql = "INSERT INTO Likes (pid, bid) VALUES(?, ?)";
    PreparedStatement add_likes_statement = mydb.prepareStatement(add_likes_sql);
    mydb.setAutoCommit(false);
    add_person_statement.setString(1, pname);
    // The problem is, without saving the person I cannot know the id of the person
    // AFAIK, this execution is not finished until commit occurs
    add_person_statement.executeQuery();
    // How can I fetch person's id
    add_likes_statement.setString(1, pid);
    add_likes_statement.setString(2, bid);
    add_likes_statement.executeQuery();
    mydb.commit();
}
catch (Exception e) {
    System.out.println(e);
    mydb.rollback();
}
You can tell JDBC to return the generated ID from the insert statement, then you can use that ID to insert into the likes table:
mydb.prepareStatement(add_person_sql, new String[]{"pid"});
The second parameter tells the driver to return the generated value for the pid column.
Alternatively you can use
mydb.prepareStatement(add_person_sql, Statement.RETURN_GENERATED_KEYS);
that tells the driver to detect the auto increment columns.
Then run the insert using executeUpdate()
add_person_statement.setString(1, pname);
add_person_statement.executeUpdate();

int newPid = -1;
// Read back the key the database generated for the new person
ResultSet idResult = add_person_statement.getGeneratedKeys();
if (idResult.next()) {
    newPid = idResult.getInt(1);
}

add_likes_statement.setInt(1, newPid);
add_likes_statement.setString(2, bid);
add_likes_statement.executeUpdate();
mydb.commit();

Entity Framework Interceptor to set Id field of particular entities

In my current project I am working with a database that has a very strange table structure: the Id fields in most tables are marked as not nullable and primary, but auto-increment is not enabled for them, and those Id values still need to be unique.
Unfortunately there is no way I can modify the DB, so I have to find another way to handle my problem.
I have no issues while querying for data, but during insert what I want to do is
get the max Id from the table the entity is about to be inserted into and increment it by one, or even better, use a SELECT MAX(id) pattern during the insert.
I was hoping to use an Interceptor in EF to achieve this, but it looks too difficult for me right now, and all I managed to do is identify whether a command is an insert or not.
Can someone help me along my way with this problem? How can I achieve this and set the Ids during insert, either by selecting the max ID first or by using SELECT MAX(id) in the insert itself?
public void TreeCreated(DbCommandTreeInterceptionContext context)
{
    if (context.OriginalResult.CommandTreeKind != DbCommandTreeKind.Insert && context.OriginalResult.DataSpace != DataSpace.CSSpace) return;
    {
        var insertCommand = context.Result as DbInsertCommandTree;
        var property = insertCommand?.Target.VariableType.EdmType.MetadataProperties.FirstOrDefault(x => x.Name == "TableName");
        if (property == null) return;
        var tableName = property?.Value as ReadOnlyCollection<EdmMember>;
        var variableReference = insertCommand.Target.VariableType.Variable(insertCommand.Target.VariableName);
        var tenantProperty = variableReference.Property("ID");
        var tenantSetClause = DbExpressionBuilder.SetClause(tenantProperty, DbExpression.FromString("(SELECT MAX(ID) FROM SOMEHOWGETTABLENAME)"));
        var filteredSetClauses = insertCommand.SetClauses.Cast<DbSetClause>().Where(sc => ((DbPropertyExpression)sc.Property).Property.Name != "ID");
        var finalSetClauses = new ReadOnlyCollection<DbModificationClause>(new List<DbModificationClause>(filteredSetClauses) { tenantSetClause });

        var newInsertCommand = new DbInsertCommandTree(
            insertCommand.MetadataWorkspace,
            insertCommand.DataSpace,
            insertCommand.Target,
            finalSetClauses,
            insertCommand.Returning);

        context.Result = newInsertCommand;
    }
}
Unfortunately the Interceptor concept is a little bit new to me and I don't understand it completely.
UPDATE
I managed to dynamically build the expression so that the ID field is now included in the insert statement, but the problem is that I cannot use an SQL query inside it; whenever I try, it results in invalid SQL. So is there any way I can tweak the insert statement so that this SELECT MAX(ID) FROM TABLE_NAME is executed during the insert?
Get the next id from the context, and then set the parameter of the insert command accordingly.
void NonQueryExecuting(DbCommand command, DbCommandInterceptionContext<int> interceptionContext)
{
    var context = interceptionContext.DbContexts.First() as WhateverYourEntityContainerNameIs;

    // get the next id from the database using the context
    var theNextId = (from foo in context...)

    // update the parameter on the command
    command.Parameters["YourIdField"].Value = theNextId;
}
Just bear in mind this is not terribly thread safe; if two users update the same table at exactly the same time, they could theoretically get the same id. This is going to be a problem no matter what update method you use if you manage keys in the application instead of the database. But it looks like that decision is out of your hands.
If this is going to be a problem, you might have to do something more drastic, like altering the command.CommandText to replace the value in the values clause with a subquery, for example changing
insert into ... values (@YourIdField, ...)
to
insert into ... values ((select max(id) from...), ...)

Implementing if-not-exists-insert using Entity Framework without race conditions

Using LINQ-to-Entities 4.0, is there a correct pattern or construct for safely implementing "if not exists then insert"?
For example, I currently have a table that tracks "user favorites" - users can add or remove articles from their list of favorites.
The underlying table is not a true many-to-many relationship, but instead tracks some additional information such as the date the favorite was added.
CREATE TABLE UserFavorite
(
    FavoriteId int not null identity(1,1) primary key,
    UserId int not null,
    ArticleId int not null
);
CREATE UNIQUE INDEX IX_UserFavorite_1 ON UserFavorite (UserId, ArticleId);
Inserting two favorites with the same User/Article pair results in a duplicate key error, as desired.
I've currently implemented the "if not exists then insert" logic in the data layer using C#:
if (!entities.FavoriteArticles.Any(
    f => f.UserId == userId &&
         f.ArticleId == articleId))
{
    FavoriteArticle favorite = new FavoriteArticle();
    favorite.UserId = userId;
    favorite.ArticleId = articleId;
    favorite.DateAdded = DateTime.Now;
    Entities.AddToFavoriteArticles(favorite);
    Entities.SaveChanges();
}
The problem with this implementation is that it's susceptible to race conditions. For example, if a user double-clicks the "add to favorites" link two requests could be sent to the server. The first request succeeds, while the second request (the one the user sees) fails with an UpdateException wrapping a SqlException for the duplicate key error.
With T-SQL stored procedures I can use transactions with lock hints to ensure a race condition never occurs. Is there a clean method for avoiding the race condition in Entity Framework without resorting to stored procedures or blindly swallowing exceptions?
You can also write a stored procedure that uses some newer tricks from SQL Server 2005+.
Use your combined unique key (userID + articleID) in an update statement, then check the @@ROWCOUNT function. If it's 1 (or more), the update found a row matching your userID and ArticleID; if it's 0, you're all clear to insert.
e.g.
Update tablex
Set userID = @UserID, ArticleID = @ArticleID
    -- (you could set more columns here, as long as the WHERE uses the combined unique key)
Where userID = @UserID and ArticleID = @ArticleID

if (@@ROWCOUNT = 0)
Begin
    Insert Into tablex ...
End
Best of all, it's all done in one call, so you don't have to first compare the data and then decide whether to insert. And of course it will stop any duplicate inserts and won't throw any errors (gracefully?).
You could try to wrap it in a transaction combined with the 'famous' try/catch pattern:
using (var scope = new TransactionScope())
{
    try
    {
        // ...do your thing...
        scope.Complete();
    }
    catch (UpdateException ex)
    {
        // here the second request ends up...
    }
}