I'm using Apache Ignite with Spring Data. I need a column "username" to be unique; in fact, "username" is the key of the Ignite cache. I think Ignite doesn't implement unique constraints yet.
Using the plain Ignite API, I'm not sure whether I can take a lock like:
IgniteCache<String, Integer> cache = ignite.cache("userCache");
Lock lock = cache.lock("username1");
lock.lock();
// check that the key doesn't exist yet
...
as "username1" doesn't exist yet. Is there other approach?
I believe your approach will work; however, there is a method better suited to what you are trying to achieve: IgniteCache.putIfAbsent.
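For example, a minimal sketch assuming the cache is keyed by username and stores the full User object (rather than the Integer from the snippet above); the cache and exception names are taken from the question:

IgniteCache<String, User> cache = ignite.cache("userCache");

// putIfAbsent is atomic: it only stores the entry if the key is not already present,
// and returns false if a value for that username already exists.
boolean created = cache.putIfAbsent(user.getName(), user);
if (!created) {
    throw new AlreadyExistsException(user.getName() + " already in use.");
}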
My current solution with Spring Data and SpringTransactionManager is:
#Transactional("pessimisticTransactionManager")
public void create(final User user) throws AlreadyExistsException {
logger.info("Creating {}", user);
if(userRepository.findByName(user.getName()).isPresent()) {
throw new AlreadyExistsException(user.getName() + " already in use.") ;
}
userRepository.save(user.getName(), user);
}
but an optimisticTransactionManager should also work and perform better, since the chance of a name collision is low in my system.
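For reference, this is roughly how two such transaction managers could be declared with Ignite's SpringTransactionManager; the bean names match the question, but the exact configuration (Ignite instance wiring, etc.) is an assumption about the poster's setup:

@Bean
public SpringTransactionManager pessimisticTransactionManager() {
    SpringTransactionManager mgr = new SpringTransactionManager();
    // Transactions started through this manager use PESSIMISTIC concurrency.
    mgr.setTransactionConcurrency(TransactionConcurrency.PESSIMISTIC);
    return mgr;
}

@Bean
public SpringTransactionManager optimisticTransactionManager() {
    SpringTransactionManager mgr = new SpringTransactionManager();
    // OPTIMISTIC concurrency: cheaper when conflicts are rare, fails at commit on conflict.
    mgr.setTransactionConcurrency(TransactionConcurrency.OPTIMISTIC);
    return mgr;
}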
Related
I have a use case where I need to connect to two different databases (Postgres and Oracle). Postgres is already configured with JPA. I need to add one more database (Oracle). In the Oracle database I need to choose tables at runtime for insertion and deletion (since the tables are not fixed). Currently I'm passing the tables in my properties file as a list:
oracle:
  deletion:
    table:
      - tableName: user
        primaryKey: userId
        emailField: emailId
        deleteTableName: user_delete
      - tableName: customer
        primaryKey: customerId
        emailField: emailAddress
        deleteTableName: customer_delete
I've created a bean that reads all these properties and puts them in a list:
#Bean("oracleTables")
#ConfigurationProperties("oracle.deletion.table")
List<Table> getAllTAbles(){
return new ArrayList<>();
}
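For that binding to work, the Table class needs fields matching the YAML keys; a minimal sketch (the class itself is not shown in the question, so the accessors here are assumed):

public class Table {
    private String tableName;
    private String primaryKey;
    private String emailField;
    private String deleteTableName;

    // Standard getters and setters so Spring can bind the properties.
    public String getTableName() { return tableName; }
    public void setTableName(String tableName) { this.tableName = tableName; }
    public String getPrimaryKey() { return primaryKey; }
    public void setPrimaryKey(String primaryKey) { this.primaryKey = primaryKey; }
    public String getEmailField() { return emailField; }
    public void setEmailField(String emailField) { this.emailField = emailField; }
    public String getDeleteTableName() { return deleteTableName; }
    public void setDeleteTableName(String deleteTableName) { this.deleteTableName = deleteTableName; }
}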
I have a list of email addresses. For each of these tables I need to fetch the primary key based on the email address from the parent table (the value in tableName) and insert data into the corresponding delete table (the value in deleteTableName). Once that is done, I need to delete the data from the actual table (the value in tableName) based on the email address.
I'm planning to loop through the list of tables in my bean and perform the fetch, insert and delete.
Sample snippet:
@Autowired
@Qualifier("oracleTables")
List<Table> tables;

public boolean processDelete(List<String> emails) {
    for (Table table : tables) {
        // fetch all the primary keys for the given emails from the main table (value in tableName)
        // insert into the corresponding delete table
        // delete from the main table
    }
    return true;
}
But the question I have is: should I go with JdbcTemplate or JpaRepository/Hibernate? Some help with the implementation, with a small sample or link, would also be appreciated.
The reasons for this question are:
1) The tables in my case are not fixed.
2) I need transaction management to roll back in case of failure in the fetch, insert or delete.
3) I need to configure two databases.
Should I go with JdbcTemplate or JpaRepository/Hibernate?
Most definitely JdbcTemplate. JPA does not easily allow dynamic tables.
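As an illustration only, here is a rough sketch of the loop with NamedParameterJdbcTemplate, reusing the tables list from the question; the SQL, and in particular the shape of the INSERT into the delete table, is an assumption since that schema isn't shown:

@Autowired
@Qualifier("oracleJdbcTemplate") // assumed bean name for the Oracle-bound template
private NamedParameterJdbcTemplate oracleJdbcTemplate;

public boolean processDelete(List<String> emails) {
    for (Table table : tables) {
        MapSqlParameterSource params = new MapSqlParameterSource("emails", emails);

        // fetch the primary keys for the given emails from the main table
        String select = "SELECT " + table.getPrimaryKey()
                + " FROM " + table.getTableName()
                + " WHERE " + table.getEmailField() + " IN (:emails)";
        List<Long> ids = oracleJdbcTemplate.queryForList(select, params, Long.class);
        if (ids.isEmpty()) {
            continue;
        }

        // copy the matching keys into the corresponding delete table
        String insert = "INSERT INTO " + table.getDeleteTableName()
                + " (" + table.getPrimaryKey() + ") "
                + "SELECT " + table.getPrimaryKey()
                + " FROM " + table.getTableName()
                + " WHERE " + table.getEmailField() + " IN (:emails)";
        oracleJdbcTemplate.update(insert, params);

        // delete from the main table
        String delete = "DELETE FROM " + table.getTableName()
                + " WHERE " + table.getEmailField() + " IN (:emails)";
        oracleJdbcTemplate.update(delete, params);
    }
    return true;
}

Note that the table and column names are concatenated into the SQL here; they come from your own configuration rather than from user input, which is what makes this acceptable with dynamic tables.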
I need transaction management to roll back in case of failure in the fetch, insert or delete.
If you need transactions, you'll also need to define two separate transaction managers:
@Bean
public TransactionManager oracleTransactionManager() {
    var result = new DataSourceTransactionManager();
    ...
    result.setDataSource(oracleDataSource());
    return result;
}

@Bean
public TransactionManager postgresTransactionManager() {
    ...
}
Then, if you want declarative transactions, you need to specify the manager with which to run a given method:
@Transactional(transactionManager = "oracleTransactionManager")
public void doWorkInOracleDb() {
    ...
}
I need to configure two databases
Just configure two separate DataSource beans. Of course, you will actually need two separate JdbcTemplate beans as well.
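A minimal sketch of that wiring, using NamedParameterJdbcTemplate as in the sketch above (the property prefixes and bean names are assumptions; the Postgres DataSource would typically be marked @Primary so the existing JPA setup keeps using it):

@Bean
@Primary
@ConfigurationProperties("spring.datasource.postgres")
public DataSource postgresDataSource() {
    return DataSourceBuilder.create().build();
}

@Bean
@ConfigurationProperties("spring.datasource.oracle")
public DataSource oracleDataSource() {
    return DataSourceBuilder.create().build();
}

@Bean("oracleJdbcTemplate")
public NamedParameterJdbcTemplate oracleJdbcTemplate() {
    // Bound explicitly to the Oracle DataSource so the dynamic-table queries go to Oracle.
    return new NamedParameterJdbcTemplate(oracleDataSource());
}

@Bean("postgresJdbcTemplate")
public NamedParameterJdbcTemplate postgresJdbcTemplate() {
    return new NamedParameterJdbcTemplate(postgresDataSource());
}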
I have a .NET Core Web API. The DB is PostgreSQL. I have a simple POST request that creates an entity with two fields:
public class ClientDto {
    public string Name { get; set; }
    public int ClientId { get; set; }
}
ClientId is a foreign key to the Clients table.
Some client (Postman, for example) executes the request but sends a ClientId in the data model that does not exist in the DB.
I have a global exception handler where I handle the DB exception, but the exception object doesn't include the specific details.
I would like to show the user a friendly message, something like "Client with id = 1 does not exist".
What is the best practice for handling DB exceptions?
Maybe before saving the object to the DB I should check whether a client with id = 1 exists? But that is an additional query.
Maybe before saving the object to the DB I should check whether a client with id = 1 exists? But that is an additional query.
I'd do this.
If your client doesn't give you good information in its exception, then you're probably better off doing the additional query. If you're querying on an indexed field (which I'd expect, given you are using a foreign key), it will be a very quick query.
Exception throwing and catching is fairly expensive anyway, and I'd be happy enough with the extra call.
Two (JSF + JPA + EclipseLink + MySQL) applications share the same database. One application runs scheduled tasks, while the other one creates tasks for those schedules. The tasks created by the first application are collected by queries in the second one without any issue. The second application updates fields in the task, but the changes made by the second application are not reflected when queried with JPQL.
I have added QueryHints.CACHE_USAGE with CacheUsage.DoNotCheckCache, but the latest updates are still not reflected in the query results.
The code is given below.
How can I get the latest updates done to the database from a JPQL query?
public List<T> findByJpql(String jpql, Map<String, Object> parameters, boolean withoutCache) {
    TypedQuery<T> qry = getEntityManager().createQuery(jpql, entityClass);
    for (Map.Entry<String, Object> entry : parameters.entrySet()) {
        String name = entry.getKey();
        Object value = entry.getValue();
        if (value instanceof Date) {
            // Date parameters need an explicit temporal type
            qry.setParameter(name, (Date) value, TemporalType.DATE);
        } else {
            qry.setParameter(name, value);
        }
    }
    if (withoutCache) {
        qry.setHint(QueryHints.CACHE_USAGE, CacheUsage.DoNotCheckCache);
    }
    return qry.getResultList();
}
The CacheUsage setting affects what EclipseLink can answer from what is already in memory, but not what happens after it goes to the database for results.
It seems you don't want to outright avoid the cache, but rather refresh it, so that the latest changes become visible. This is a very common situation when multiple apps and levels of caching are involved, so there are many different solutions you might want to look into, such as manual invalidation or, if both apps are JPA based, cache coordination (so one app can send an invalidation event to the other). You can also control this on specific queries with the "eclipselink.refresh" query hint, which forces the query to reload the data within the cached objects with what is returned from the database. Please take care with it: if used in a local EntityManager, any modified entities returned by the query will also be refreshed and the changes lost.
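For example, a minimal sketch of applying that hint in the findByJpql method from the question, using EclipseLink's QueryHints.REFRESH constant (which corresponds to "eclipselink.refresh"):

if (withoutCache) {
    // Refresh the cached entities with the values returned from the database.
    // Beware: unflushed changes to those entities in this persistence context are lost.
    qry.setHint(QueryHints.REFRESH, HintValues.TRUE);
}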
References for caching:
https://wiki.eclipse.org/EclipseLink/UserGuide/JPA/Basic_JPA_Development/Caching
https://www.eclipse.org/eclipselink/documentation/2.6/concepts/cache010.htm
Make the entity not depend on the cache by adding the following annotation:
@Cache(
    type = CacheType.NONE, // cache nothing
    expiry = 0,
    alwaysRefresh = true
)
I have a typical scenario where users enter data that is inserted into a SQL database using Entity Framework 6.0. However, some rows that are part of the entity need to be unique (already enforced with unique key constraints in the database).
To avoid possible concurrency or performance issues, I prefer to leave these checks to SQL Server.
When attempting to save a new entity that holds a duplicate row, a DbUpdateException is thrown by Entity Framework. The inner exception is a SqlException with its Number equal to 2627, and a message that reads:
"Violation of UNIQUE KEY constraint 'UK_MyTable_MyRule'. Cannot insert duplicate key in object 'dbo.MyTable'".
Considering that there are several tables involved, each of which may have its own unique constraints defined, is there no better way to derive a friendlier message for the user that reads:
"A MyEntity with the name 'MyEntity1' already exists."
...without having to infer this through the Number and Message properties from the SqlException?
For example:
try
{
    ...
    context.SaveChanges();
}
catch (DbUpdateException exception)
{
    var sqlException = exception.InnerException as SqlException;
    bool isDuplicateInMyTable3 =
        sqlException != null &&
        sqlException.Number == 2627 /* unique constraint violation */ &&
        sqlException.Message.Contains("'UK_MyTable3_");
    if (isDuplicateInMyTable3)
    {
        return "A MyTable3 with " + ... + " already exists.";
    }
    throw;
}
Is there a "cleaner" way to achieve the same that does not involve looking through the error message string?
You may like to look at the AddOrUpdate method.
Research it first; I have noted experts warning against over-zealous use.
Context.Set<TPoco>().AddOrUpdate(poco);
It can still throw other EF/DB exceptions, but a duplicate primary key should not be one of them. Other constraint issues remain as before.
I have seen many questions on foreign key constraint problems, and what I found is that
By default, the following constraints are not copied to the client: FOREIGN KEY constraints, UNIQUE constraints, and DEFAULT constraints
in this document: http://msdn.microsoft.com/en-us/library/bb726037.aspx
So, it appears I have to "manually" create the relationships, once the schema is created on the client.
Once the relationships have been created on the client side, what if I make changes to the tables on the server side? I would have to recreate all the relationships on the client side again and again; wouldn't that be a headache? Is there any way to write code or a script that creates the foreign key constraints on the client side and can simply be copied, so that if the server-side table schema changes, the same change could be made on the client side by editing the script?
I am using a modified version of this sample: http://code.msdn.microsoft.com/Database-Sync-SQL-Server-7e88adab#content
SQL Server Express to SQL Server over a WCF service.
I used the script from SQL Authority to generate an ALTER TABLE script that adds all the foreign keys: http://blog.sqlauthority.com/2008/04/18/sql-server-generate-foreign-key-scripts-for-database/
When the client calls the WCF service's GetScopeDescription() to get the schema for the client, I run the stored procedure above to get all the foreign key relationships to add. I put the returned SQL script into the DbSyncScopeDescription.UserComment field, which holds the script and transports it to the client along with the schema. Then, on the client side, after syncing the scope/schema, I can run the script to generate the relationships.
DbSyncScopeDescription dbSyncScopeDescription = sqlSyncProviderProxy.GetScopeDescription();
sqlSyncScopeProvisioning.PopulateFromScopeDescription(dbSyncScopeDescription);
sqlSyncScopeProvisioning.Apply();
string alterDatabaseScript = dbSyncScopeDescription.UserComment;
This is specific to a static database schema and static relationships. When schema or relationship modifications are needed, I will drop the client database first.
Sync Framework doesn't automatically pick up schema changes made to the tables being synched. Whether it's just an FK or a column name/type/length change, you will have to re-provision (unless you want to hack your way around the sync objects).
If you want full schema fidelity, I suggest you create the database objects yourself (tables, constraints, stored procedures, triggers, etc.) and not let Sync create the tables for you.
And by the way, there is no Sync Framework 4.0.
I found an easy way to add the foreign key constraints: simply make a .txt file of the foreign key SQL commands, put a ';' after each command, and use the code below. It works perfectly.
private void FunAddForeignKeys()
{
    SqlConnection clientConn = new SqlConnection(lconString);
    if (clientConn.State == ConnectionState.Closed)
        clientConn.Open();

    System.Data.SqlClient.SqlCommand Command =
        new System.Data.SqlClient.SqlCommand(GetSql("ForeignKeyQueries.txt"), clientConn);
    try
    {
        Command.ExecuteNonQuery();
        MessageBox.Show("Foreign keys added");
    }
    catch (Exception ex)
    {
        MessageBox.Show(ex.Message);
    }
    finally
    {
        // Closing the connection should be done in a finally block
        clientConn.Close();
    }
}

private string GetSql(string Name)
{
    try
    {
        // Gets the current assembly.
        Assembly Asm = Assembly.GetExecutingAssembly();
        // Resources are named using a fully qualified name.
        Stream strm = Asm.GetManifestResourceStream(Asm.GetName().Name + "." + Name);
        // Reads the contents of the embedded file.
        using (StreamReader reader = new StreamReader(strm))
        {
            return reader.ReadToEnd();
        }
    }
    catch (Exception ex)
    {
        MessageBox.Show("In GetSQL: " + ex.Message);
        // rethrow without resetting the stack trace
        throw;
    }
}
I went with a solution that creates the client-side tables via synchronization and then adds code to generate the foreign key constraints. That is easier than generating all the tables ourselves and then adding constraints to them; just copy the lines for the relations and that's all.