I have a list of records, and I want to perform the following tasks using Spring's JdbcTemplate:
(1) Update existing records
(2) Insert new records.
I don't know how to do this with Spring's jdbcTemplate.
Any insight?
You just use one of the various forms of batchUpdate for the update. Then you check the return value, which will contain 1 if the row was present and 0 otherwise. For the latter, you perform another batchUpdate with the insert statements.
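As an illustration, here is a minimal sketch of that two-pass approach, assuming a hypothetical Record class with getId()/getName() and a contacts(contact_id, name) table; the SQL and mappings are placeholders:

// First pass: try to update every record; each entry in the returned
// array is the affected row count (1 if the row existed, 0 otherwise).
List<Object[]> updateArgs = new ArrayList<>();
for (Record r : records) {
    updateArgs.add(new Object[] { r.getName(), r.getId() });
}
int[] updateCounts = jdbcTemplate.batchUpdate(
        "UPDATE contacts SET name = ? WHERE contact_id = ?", updateArgs);

// Second pass: insert only the records whose update touched no row.
List<Object[]> insertArgs = new ArrayList<>();
for (int i = 0; i < updateCounts.length; i++) {
    if (updateCounts[i] == 0) {
        Record r = records.get(i);
        insertArgs.add(new Object[] { r.getId(), r.getName() });
    }
}
if (!insertArgs.isEmpty()) {
    jdbcTemplate.batchUpdate(
            "INSERT INTO contacts (contact_id, name) VALUES (?, ?)", insertArgs);
}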
Is there any way to do this? Basically, I am trying to do a SQL UPDATE ... SET if a matching record for one or more key fields exists in another dataset.
I tried using joins and MERGE. Joins seem like more steps, and MERGE appends records instead of updating the correlated rows.
Is it possible to batch together commits from multiple JDBC prepared statements?
In my app the user will insert one or more records along with records in related tables. For example, we'll need to update a record in the "contacts" table, delete related records in the "tags" table, and then insert a fresh set of tags.
UPDATE contacts SET name=? WHERE contact_id=?;
DELETE FROM tags WHERE contact_id=?;
INSERT INTO tags (contact_id,tag) values (?,?);
// insert more tags as needed here...
These statements need to be part of a single transaction, and I want to do them in a single round trip to the server.
To send them in a single round trip, there are two choices: create one Statement and call .addBatch(sql) with the full SQL of each command, or create a PreparedStatement per command, set the parameter values with .setString(), .setInt(), etc., and then call .addBatch().
The problem with the first choice is that sending a full SQL string in the .addBatch() call is inefficient and you don't get the benefit of sanitized parameter inputs.
The problem with the second choice is that it may not preserve the order of the SQL statements. For example,
Connection con = ...;
PreparedStatement updateState = con.prepareStatement("UPDATE contacts SET name=? WHERE contact_id=?;");
PreparedStatement deleteState = con.prepareStatement("DELETE FROM tags WHERE contact_id=?;");
PreparedStatement insertState = con.prepareStatement("INSERT INTO tags (contact_id,tag) values (?,?);");
updateState.setString(1, "Bob");
updateState.setInt(2, 123);
updateState.addBatch();
deleteState.setInt(1, 123);
deleteState.addBatch();
... etc ...
... now add more parameters to updateState, and addBatch()...
... repeat ...
con.commit();
In the code above, are there any guarantees that all of the statements will execute in the order we called .addBatch(), even across different prepared statements? Ordering is obviously important; we need to delete tags before we insert new ones.
I haven't seen any documentation that says that ordering of statements will be preserved for a given connection.
I'm using Postgres and the default Postgres JDBC driver, if that matters.
The batch is per statement object, so a batch is executed per executeBatch() call on a Statement or PreparedStatement object. In other words, this only executes the statements (or value sets) associated with the batch of that statement object. It is not possible to 'order' execution across multiple statement objects. Within an individual batch, the order is preserved.
If you need statements executed in a specific order, then you need to explicitly execute them in that order. This means either individual calls to execute() per value set, or using a single Statement object and generating the statements on the fly. Because of the potential for SQL injection, this last approach is not recommended.
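A minimal sketch of that ordered, per-statement execution inside one transaction (con and the tag values are assumed; batching is used only within the insert statement, where the order of value sets does not matter):

con.setAutoCommit(false);
try {
    try (PreparedStatement update = con.prepareStatement(
                 "UPDATE contacts SET name=? WHERE contact_id=?");
         PreparedStatement delete = con.prepareStatement(
                 "DELETE FROM tags WHERE contact_id=?");
         PreparedStatement insert = con.prepareStatement(
                 "INSERT INTO tags (contact_id,tag) values (?,?)")) {
        update.setString(1, "Bob");
        update.setInt(2, 123);
        update.executeUpdate();      // step 1: update the contact
        delete.setInt(1, 123);
        delete.executeUpdate();      // step 2: old tags are gone before inserting
        for (String tag : newTags) { // newTags is an assumed collection
            insert.setInt(1, 123);
            insert.setString(2, tag);
            insert.addBatch();       // safe: all value sets belong to one statement
        }
        insert.executeBatch();       // step 3: insert the fresh tags
    }
    con.commit();
} catch (SQLException e) {
    con.rollback();
    throw e;
}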
I am querying data from a server to update fields in my local DB.
But I have more than 100 columns, so I just want to know which columns have new values (different from the previous ones) so that I can use only those fields to build the UPDATE command for my local DB.
EXAMPLE: I have 100 columns already in my DB. The same row on the server has been updated. I have now fetched all 100 columns from the server (stored in a list object to build a prepared statement in Java). Only 10 of the 100 columns have been updated, and I want to know which 10.
How can I do this with triggers?
Or is there any way other than triggers? For example, in Cassandra, inserting with the same PK acts as an update of the row.
If you have to do the update anyway, you can add a RETURNING clause to show the updated rows.
You can see more in the PostgreSQL documentation: https://www.postgresql.org/docs/current/dml-returning.html
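For example, with the PostgreSQL JDBC driver the RETURNING rows can be read as a result set (the table and column names here are hypothetical):

String sql = "UPDATE records SET value = ? WHERE id = ? RETURNING id, value";
try (PreparedStatement ps = con.prepareStatement(sql)) {
    ps.setString(1, "new value");
    ps.setInt(2, 42);
    try (ResultSet rs = ps.executeQuery()) { // RETURNING yields a result set
        while (rs.next()) {
            System.out.println("updated " + rs.getInt("id") + " -> " + rs.getString("value"));
        }
    }
}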
When the RowBounds class in the MyBatis API gets data from the DB, does it do a full scan and then cut out the rows set by the limit and offset parameters, or does it fetch only the data within the bounds?
If the SQL query contains OFFSET and LIMIT / FETCH FIRST n ROWS ONLY, then the resultset will contain only the data within the bounds. The bounds are applied on the DB side: OFFSET 10000 LIMIT 20 produces a resultset of at most 20 records.
This is likely what you need.
RowBounds does not alter the SQL query and operates independently: MyBatis works with the whole resultset returned by the DB.
e.g.: RowBounds(10000, 20) will skip the first 10000 records of the resultset, then fetch 20 records and stop. But the resultset returned by the DB may still contain any number of rows.
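For example (a sketch; the statement id, mapper, and session are assumed):

// MyBatis skips the first 10000 rows of the resultset client-side
// and maps only the next 20; the query itself is unchanged.
RowBounds bounds = new RowBounds(10000, 20);
List<Record> page = sqlSession.selectList("RecordMapper.selectAll", null, bounds);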
It does retrieve the full data from the database, however it returns only the requested number of records to the program. So, no worries about OutOfMemory, but the query will take long on the database side.
Hibernate and EclipseLink, on the other hand, pass the given limit on to the database and retrieve only the required number of records from it. Hibernate achieves this by using database-vendor-specific SQL constructs in its generated SQL, e.g. the LIMIT clause for MySQL or ROWNUM for Oracle.
If you want to achieve the same in MyBatis, you need to use these constructs yourself, as sketched below.
It is easy, and you can use MyBatis conditions to make the SQL specific to any database.
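For instance, with an annotated mapper you can push the paging into the SQL yourself (the mapper, table, and columns are hypothetical; the LIMIT/OFFSET syntax shown is for PostgreSQL/MySQL):

public interface RecordMapper {
    // The bounds are applied on the DB side, so only 'limit' rows cross the wire.
    @Select("SELECT id, value FROM records ORDER BY id LIMIT #{limit} OFFSET #{offset}")
    List<Record> selectPage(@Param("limit") int limit, @Param("offset") int offset);
}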
I want to save more than one record at a time, that is, save a block of records.
Is this possible in MongoDB? If yes, please can someone tell me how.
Thanks
The Java driver's DBCollection supports insert with a List<DBObject> parameter; this should be usable from Scala as well.
Edit
Casbah also supports a similar insert method that should do the trick.
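A minimal sketch with the legacy Java driver's DBCollection API (the client, database, collection, and fields are placeholders):

DBCollection collection = mongoClient.getDB("mydb").getCollection("records");
List<DBObject> docs = new ArrayList<>();
docs.add(new BasicDBObject("name", "alice").append("score", 10));
docs.add(new BasicDBObject("name", "bob").append("score", 20));
collection.insert(docs); // one call inserts the whole block of records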