I'm trying to delete all database entries for a Spring Roo entity. When I look at *_Roo_Entity.aj it seems as if there is no "delete all" method. I tried to implement it myself (Licences is the name of the Roo entity; don't mind the naming, it was reverse engineered from a database and may be changed later):
public static int Licences.deleteAll() {
    return entityManager().createQuery("delete from Licences o").executeUpdate();
}
It compiles just fine but when I call Licences.deleteAll() I get the following exception:
org.springframework.dao.InvalidDataAccessApiUsageException: Executing an update/delete query;
nested exception is javax.persistence.TransactionRequiredException: Executing an update/delete query (NativeException)
Adding @Transactional doesn't make a difference.
What am I missing here?
Is this approach completely wrong, and do I need to implement it like this instead:
public static void Licences.deleteAll() {
    for (Licences licence : findAllLicenceses()) {
        licence.remove();
    }
}
This works, but is JPA smart enough to translate this into a delete from licences query or will it create n queries?
@Transactional doesn't work on static methods.
change
public static int Licences.deleteAll() {
    return entityManager().createQuery("delete from Licences o").executeUpdate();
}
to
public int Licences.deleteAll() {
    return entityManager().createQuery("delete from Licences o").executeUpdate();
}
https://jira.springsource.org/browse/SPR-5999
Bye
JPA does not have a "delete all" operation out of the box (not even with JPQL!).
There are essentially only three ways:
The loop, like you did
A JPQL bulk query (see JPQL Reference, 10.2.9: JPQL Bulk Update and Delete)
A native SQL query, but this will cause many problems with the entity manager caches!
BTW: It seems that you are using AspectJ to attach your delete method. You can do this (even if I do not know why you would not add the static method directly to the entity class), but you must not touch the Roo-generated .aj files!
public static int Licences.deleteAll() {
    // Delegate to a non-static method so that @Transactional can be applied.
    return new Licences().deleteAllTransactional();
}

@Transactional
private int Licences.deleteAllTransactional() {
    if (this.entityManager == null) this.entityManager = entityManager();
    return this.entityManager.createQuery("delete from Licences o").executeUpdate();
}
I've been having an issue with one of my .NET Core services when using a join statement.
This is the problem code:
public List<Category> GetCategoryList()
{
    var catlist = (from m in _mappingProductCategoryRepository.Table
                   join c in _categoryRepository.Table on m.CategoryId equals c.Id
                   select c).ToList();
    return catlist;
}
and the error it throws is this:
InvalidOperationException: Cannot use multiple context instances within a single query execution. Ensure the query uses a single context instance.
Here is the relevant excerpt from my repository:
public partial class PSRepository<TEntity> : IRepository<TEntity> where TEntity : class
{
    private readonly PSContext _db;

    public PSRepository(PSContext PSContext)
    {
        _db = PSContext;
    }

    public virtual IQueryable<TEntity> Table => _db.Set<TEntity>();
}
And here is my context being registered in the startup.cs file
public void ConfigureContainer(ContainerBuilder builder)
{
    builder.RegisterType<PSContext>().As<PSContext>().InstancePerDependency();
    builder.RegisterGeneric(typeof(PSRepository<>)).As(typeof(IRepository<>)).InstancePerLifetimeScope();
    builder.RegisterType<CategoryService>().As<ICategoryService>().InstancePerLifetimeScope();
}
I've changed the context registration from 'InstancePerDependency' to 'InstancePerLifetimeScope' so that it now looks like this:
builder.RegisterType<PSContext>().As<PSContext>().InstancePerLifetimeScope();
And it seems to have fixed the issue. Being somewhat new to .NET Core, my question is whether this was the correct fix, or whether I have created further problems for myself down the line.
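As far as I understand it, the change means that within a single lifetime scope every resolution hands back the same PSContext, so all repositories built in that scope query one context. Here is a minimal sketch of that behaviour (reusing the PSContext type and the Autofac registration from above; the demo class name is made up):

using System;
using Autofac;

public static class LifetimeScopeDemo
{
    public static void Run()
    {
        var builder = new ContainerBuilder();
        builder.RegisterType<PSContext>().As<PSContext>().InstancePerLifetimeScope();
        var container = builder.Build();

        using (var scope = container.BeginLifetimeScope())
        {
            var first = scope.Resolve<PSContext>();
            var second = scope.Resolve<PSContext>();

            // Prints True: the scope hands out one shared PSContext, so the
            // repositories resolved in this scope all query the same context
            // and the join executes against a single instance.
            // With InstancePerDependency each Resolve would return a new
            // PSContext, which is what triggered the
            // "multiple context instances" exception.
            Console.WriteLine(ReferenceEquals(first, second));
        }
    }
}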
I realise I could get rid of the repository and use the context directly but I don't really want to do this.
Any help gratefully received
I'm using EclipseLink JPA, and I have an entity with a Timestamp field annotated with @Version for optimistic locking.
By default, this sets the entity manager to use database time, so if I have to do a batch update it doesn't work properly, as it queries the database for the time each time it wants to do an insert.
How can I change the TimestampLockingPolicy to use LOCAL_TIME?
The class org.eclipse.persistence.descriptors.TimestampLockingPolicy has a public method useLocalTime(), but I don't know how to use it or from where I should call it.
Found the answer:
First, let's create a DescriptorCustomizer:
// Imports assume EclipseLink's usual package layout.
import org.eclipse.persistence.config.DescriptorCustomizer;
import org.eclipse.persistence.descriptors.ClassDescriptor;
import org.eclipse.persistence.descriptors.TimestampLockingPolicy;
import org.eclipse.persistence.internal.descriptors.OptimisticLockingPolicy;

public class LocalDateTimeCustomizer implements DescriptorCustomizer {

    @Override
    public void customize(ClassDescriptor descriptor) throws Exception {
        OptimisticLockingPolicy policy = descriptor.getOptimisticLockingPolicy();
        if (policy instanceof TimestampLockingPolicy) {
            TimestampLockingPolicy p = (TimestampLockingPolicy) policy;
            p.useLocalTime();
        }
    }
}
Then annotate the entity that has the @Version field with:
@Customizer(LocalDateTimeCustomizer.class)
I'm trying to migrate from EF to Dapper, and I'm looking for a better and more efficient way to migrate existing LINQ expressions (IQueryable) to Dapper.
Ex:
public class MyDbContext : DbContext
{
    public DbSet<MyEntity> MyEntity { get; set; }
    // +20 more entities..
}
// Inside a repository
using (var context = new MyDbContext())
{
    context.MyEntity.Where(x => x.Id == 1); // How can I easily migrate this LINQ to Dapper?
}
The above code is only a simple example of what I'm trying to migrate. The real queries are a mix of simple and complex. Currently, I have 20+ repositories and 20+ DbSets in MyDbContext that use that kind of approach inside the repositories.
I searched the internet and haven't found a better approach. So far the only way to do this is to convert the LINQ into query strings one by one and use them in Dapper. That is doable, but tedious and a huge effort. Performance is the biggest reason why I'm migrating to Dapper.
Does anyone have a better way to do this than what I'm currently thinking?
I found a remedy for my own problem. Instead of intercepting the query, I pass the predicate along and create a LINQ-like function matching the existing one.
public class QueryLinq
{
    private readonly IDatabase _database;

    public QueryLinq(IDatabase database)
    {
        _database = database; // Dapper implementation
    }

    public IEnumerable<T> Where<T>(Func<T, bool> predicate)
    {
        // Materialise the rows via Dapper, then filter them with the predicate.
        var enumerable = _database.Query<T>();
        return enumerable.Where(predicate);
    }
}
// Existing repository
public class MyRepository
{
    private readonly QueryLinq _queryLinq;

    public MyRepository(QueryLinq queryLinq)
    {
        _queryLinq = queryLinq;
    }

    public IEnumerable<MyEntity> SelectMyEntity()
    {
        // Before:
        // using (var context = new MyDbContext())
        // {
        //     context.MyEntity.Where(x => x.Id == 1);
        // }

        // Now:
        return _queryLinq.Where<MyEntity>(x => x.Id == 1);
    }
}
In this approach, I don't need to worry about existing queries.
Update 8/16/2018: for the complete details of this approach kindly visit here.
You can use Entity Framework Logging to output the generated SQL to the Visual Studio console. It's as simple as adding:
_context.Database.Log = x => Trace.WriteLine(x);
to your service or repository class(es).
I am doing exactly what you're doing, and for the same reason. You'll see that the SQL that LINQ generates can be sub-optimal, so directly copying the same sub-optimal SQL for Dapper to use seemed to me like a pointless exercise.
What I've done is identify the LINQ queries that perform the worst, and rewrite them in SQL - from scratch - for use with Dapper. What I've ended up with is a combination of LINQ and Dapper throughout the system, benefitting from the strengths of both approaches, i.e. LINQ's rapid development and the performance gains of Dapper and optimised SQL queries.
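To make that concrete, here is a minimal sketch of one such rewritten query, assuming SQL Server and the Dapper package; the table, columns and helper class are illustrative rather than taken from the post:

using System.Collections.Generic;
using System.Data.SqlClient;
using Dapper;

public class MyEntity
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public class MyEntityQueries
{
    private readonly string _connectionString;

    public MyEntityQueries(string connectionString)
    {
        _connectionString = connectionString;
    }

    // Hand-written SQL replacing context.MyEntity.Where(x => x.Id == 1)
    public IEnumerable<MyEntity> GetById(int id)
    {
        const string sql = "SELECT Id, Name FROM MyEntity WHERE Id = @Id";
        using (var connection = new SqlConnection(_connectionString))
        {
            // Dapper opens the connection on demand and maps columns to
            // MyEntity properties by name; the default buffered query
            // materialises the results before the connection is disposed.
            return connection.Query<MyEntity>(sql, new { Id = id });
        }
    }
}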
Okay, so at work we are developing a system using MVC, C# & MongoDB. When first developing it, we decided it would probably be a good idea to follow the repository pattern (what a pain in the ass!). Here is the code to give an idea of what is currently implemented.
The MongoRepository class:
public class MongoRepository { }
public class MongoRepository<T> : MongoRepository, IRepository<T>
    where T : IEntity
{
    private MongoClient _client;
    private IMongoDatabase _database;
    private IMongoCollection<T> _collection;

    public string StoreName {
        get {
            return typeof(T).Name;
        }
    }

    public MongoRepository() {
        _client = new MongoClient(ConfigurationManager.AppSettings["MongoDatabaseURL"]);
        _database = _client.GetDatabase(ConfigurationManager.AppSettings["MongoDatabaseName"]);
        /* misc code here */
        Init();
    }

    public void Init() {
        _collection = _database.GetCollection<T>(StoreName);
    }

    public IQueryable<T> SearchFor() {
        return _collection.AsQueryable<T>();
    }
}
The IRepository interface:
public interface IRepository { }

public interface IRepository<T> : IRepository
    where T : IEntity
{
    string StoreNamePrepend { get; set; }
    string StoreNameAppend { get; set; }
    IQueryable<T> SearchFor();
    /* misc code */
}
The repository is then instantiated using Ninject but without that it would look something like this (just to make this a simpler example):
MongoRepository<Client> clientCol = new MongoRepository<Client>();
Here is the code used for the search pages, which feeds into a controller action that outputs JSON for DataTables to read. Please note that the following uses Dynamic LINQ so that the query can be built from string input:
tmpFinalList = clientCol
    .SearchFor()
    .OrderBy(tmpOrder)   // tmpOrder = "ClientDescription DESC"
    .Skip(Start)         // Start = 99900
    .Take(PageLength)    // PageLength = 10
    .ToList();
Now the problem is that, if the collection has a lot of records (99,905 to be exact), everything works fine as long as the data in the sorted field isn't very large. For example, our Key field is a 5-character fixed-length string, and I can Skip and Take fine using this query. However, with something like ClientDescription, which can be much longer, I can 'Sort' and 'Take' fine from the front of the query (i.e. page 1), but when I page to the end with Skip = 99900 & Take = 10 it gives the following memory error:
An exception of type 'MongoDB.Driver.MongoCommandException' occurred
in MongoDB.Driver.dll but was not handled in user code
Additional information: Command aggregate failed: exception: Sort
exceeded memory limit of 104857600 bytes, but did not opt in to
external sorting. Aborting operation. Pass allowDiskUse:true to opt
in..
Okay, so that is easy enough to understand, I guess. I have had a look online, and mostly everything that is suggested is to use aggregation and "allowDiskUse:true"; however, since I use IQueryable in IRepository, I cannot start using IAggregateFluent<>, because that would mean exposing MongoDB-related classes to IRepository, which would go against IoC principles.
Is there any way to force IQueryable to use this, or does anyone know of a way for me to access IAggregateFluent without going against IoC principles?
One thing of interest to me is why the sort works for page 1 (Start = 0, Take = 10) but then fails when I skip to the end... surely everything must be sorted for me to be able to get the items in order for page 1, but shouldn't (Start = 99900, Take = 10) need just the same amount of 'sorting', with MongoDB simply sending me the last few records? Why doesn't this error happen when both sorts are done?
ANSWER
Okay, so with the help of @craig-wilson: upgrading to the newest version of the MongoDB C# driver and changing the following in MongoRepository will fix the problem:
public IQueryable<T> SearchFor() {
    return _collection.AsQueryable<T>(new AggregateOptions { AllowDiskUse = true });
}
I was getting a System.MissingMethodException, but this was caused by other copies of the MongoDB drivers needing to be updated as well.
When creating the IQueryable from an IMongoCollection, you can pass in the AggregateOptions which allow you to set AllowDiskUse.
https://github.com/mongodb/mongo-csharp-driver/blob/master/src/MongoDB.Driver/IMongoCollectionExtensions.cs#L53
I am exploring Entity Framework 7 and I would like to know if there is a way to intercept a "SELECT" query. Every time an entity is created, updated or deleted I stamp the entity with the current date and time.
SELECT *
FROM MyTable
WHERE DeletedOn IS NOT NULL
I would like all my SELECT queries to exclude deleted data (see WHERE clause above). Is there a way to do that using Entity Framework 7?
I am not sure what your underlying infrastructure looks like, or whether you have any abstraction between your application and Entity Framework. Assuming you are working with DbSet<T>, you could write an extension method to exclude data that has been deleted.
public class BaseEntity
{
    public DateTime? DeletedOn { get; set; }
}

public static class EfExtensions
{
    public static IQueryable<T> ExcludeDeleted<T>(this IDbSet<T> dbSet)
        where T : BaseEntity
    {
        return dbSet.Where(e => e.DeletedOn == null);
    }
}
// Usage
context.Set<BaseEntity>().ExcludeDeleted().Where(/* ...additional where clause */);
I have somewhat the same issue. I'm trying to intercept read queries (Select, Where, etc.) in order to look into the returned result set. In EF Core there is, unfortunately, no read-query equivalent of overriding SaveChanges.
You can, however, still hook into CommandExecuting and CommandExecuted in Entity Framework Core by using
var listener = _context.GetService<DiagnosticSource>();
(listener as DiagnosticListener).SubscribeWithAdapter(new CommandListener());
and creating a class with the following two methods:
public class CommandListener
{
    [DiagnosticName("Microsoft.EntityFrameworkCore.Database.Command.CommandExecuting")]
    public void OnCommandExecuting(DbCommand command, DbCommandMethod executeMethod, Guid commandId, Guid connectionId, bool async, DateTimeOffset startTime)
    {
        // do stuff.
    }

    [DiagnosticName("Microsoft.EntityFrameworkCore.Database.Command.CommandExecuted")]
    public void OnCommandExecuted(object result, bool async)
    {
        // do stuff.
    }
}
However, these are high-level interceptors, and hence you won't be able to view the returned result set (making them useless in your case).
I recommend two things. First, go and cast a vote for the implementation of "Hooks to intercept and modify queries on the fly at high and low level" at: https://data.uservoice.com/forums/72025-entity-framework-core-feature-suggestions/suggestions/1051569-hooks-to-intercept-and-modify-queries-on-the-fly-a
Second, you can use PostSharp (a commercial product) with interceptors such as LocationInterceptionAspect on properties or OnMethodBoundaryAspect for methods.
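For example, here is a minimal sketch of the OnMethodBoundaryAspect route, assuming the PostSharp package is installed; the aspect, service and method names are made up:

using System.Diagnostics;
using PostSharp.Aspects;
using PostSharp.Serialization;

[PSerializable]
public sealed class QueryAuditAspect : OnMethodBoundaryAspect
{
    // Runs before the decorated method body executes.
    public override void OnEntry(MethodExecutionArgs args)
    {
        Trace.WriteLine("Executing " + args.Method.Name);
    }

    // Runs after the method returns successfully; args.ReturnValue holds the
    // result, which is where the returned data could be inspected.
    public override void OnSuccess(MethodExecutionArgs args)
    {
        Trace.WriteLine(args.Method.Name + " returned " + args.ReturnValue);
    }
}

public class ReportService
{
    [QueryAuditAspect]
    public int CountActiveRows()
    {
        return 42; // placeholder for a real query
    }
}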