I am converting ADO.NET code to use EF. In my ADO.NET code I set dataReader.FetchSize = command.RowSize * 1000, which dramatically improves performance over the default fetch size.
When I convert my code to EF, performance is on par with the ADO.NET code where I didn't specify a fetch size, i.e. it is very slow over large result sets.
Is there any way to specify the fetch size for retrieving records in EF?
You can set ODP.NET FetchSize in the Registry or the .NET config files when using Entity Framework. That will standardize the FetchSize across all your ODP.NET instances (in the case of the Registry) or across your application (in the case of app/web.config).
http://docs.oracle.com/cd/E48297_01/doc/win.121/e41125/featConfig.htm
Christian Shay
Oracle
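For reference, with the managed driver the application-level setting goes in the oracle.manageddataaccess.client section of app.config/web.config. A sketch (the exact section layout and version binding depend on your driver version; FetchSize is in bytes):
<configuration>
  <oracle.manageddataaccess.client>
    <version number="*">
      <settings>
        <!-- applies to every OracleCommand/OracleDataReader in this application -->
        <setting name="FetchSize" value="1048576" />
      </settings>
    </version>
  </oracle.manageddataaccess.client>
</configuration>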
I was running into a similar problem, but I didn't want to change the overall FetchSize; instead I wanted to change the FetchSize per query.
Here is the solution I came up with; maybe it helps someone.
It basically uses the CallContext to pass arguments to a DbInterceptor. The interceptor overrides the needed properties on the query commands.
It is thread safe and supports nested scopes.
This can also be used to modify other properties of commands executed through Entity Framework queries for a defined scope.
Usage:
using (var context = new MyDbContext())
{
    using (new OracleCommandContext(fetchSize: 1024 * 128))
    {
        // your query here
    }
}
Properties to override:
public class OracleCommandProperties
{
    public long FetchSize { get; set; } = 524288; // 512 KB; ODP.NET's documented default is 131072 bytes
}
The call context:
public class OracleCommandContext : IDisposable
{
    private static readonly object sync = new object();
    private readonly OracleCommandProperties previousCommandProperties;
    private bool isDisposed;

    static OracleCommandContext()
    {
        DbInterception.Add(new OracleCommandInterceptor());
    }

    public OracleCommandContext(long fetchSize)
    {
        lock (sync)
        {
            var commandProperties = new OracleCommandProperties();
            if (TryGetProperties(out var previousProperties))
            {
                // when using nested OracleCommandContext, escalate the properties
                previousCommandProperties = previousProperties;
                commandProperties.FetchSize = Math.Max(previousProperties.FetchSize, fetchSize);
            }
            else
            {
                commandProperties.FetchSize = fetchSize;
            }
            CallContext.LogicalSetData(nameof(OracleCommandProperties), commandProperties);
        }
    }

    public void Dispose()
    {
        Dispose(true);
        GC.SuppressFinalize(this);
    }

    ~OracleCommandContext()
    {
        Dispose(false);
    }

    private void Dispose(bool disposing)
    {
        if (disposing)
        {
            if (!isDisposed)
            {
                lock (sync)
                {
                    CallContext.LogicalSetData(nameof(OracleCommandProperties), previousCommandProperties);
                }
                isDisposed = true;
            }
        }
    }

    public static bool TryGetProperties(out OracleCommandProperties properties)
    {
        lock (sync)
        {
            if (CallContext.LogicalGetData(nameof(OracleCommandProperties)) is OracleCommandProperties oracleReaderProperties)
            {
                properties = oracleReaderProperties;
                return true;
            }
            properties = null;
            return false;
        }
    }
}
The interceptor doing the actual work:
public class OracleCommandInterceptor : IDbCommandInterceptor
{
    public void NonQueryExecuted(DbCommand command, DbCommandInterceptionContext<int> interceptionContext)
    {
    }

    public void NonQueryExecuting(DbCommand command, DbCommandInterceptionContext<int> interceptionContext)
    {
        AdjustCommand(command);
    }

    public void ReaderExecuted(DbCommand command, DbCommandInterceptionContext<DbDataReader> interceptionContext)
    {
    }

    public void ReaderExecuting(DbCommand command, DbCommandInterceptionContext<DbDataReader> interceptionContext)
    {
        AdjustCommand(command);
    }

    public void ScalarExecuted(DbCommand command, DbCommandInterceptionContext<object> interceptionContext)
    {
    }

    public void ScalarExecuting(DbCommand command, DbCommandInterceptionContext<object> interceptionContext)
    {
        AdjustCommand(command);
    }

    private static void AdjustCommand(DbCommand command)
    {
        if (command is OracleCommand oracleCommand)
        {
            if (OracleCommandContext.TryGetProperties(out var properties))
            {
                oracleCommand.FetchSize = properties.FetchSize;
            }
        }
    }
}
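If you would rather derive the fetch size from the row size, as the original ADO.NET code does with command.RowSize * 1000, a variant of ReaderExecuted could set it on the reader once the statement has executed and the row metadata is known. A sketch, assuming ODP.NET's OracleCommand.RowSize and OracleDataReader.FetchSize (the multiplier is arbitrary):
public void ReaderExecuted(DbCommand command, DbCommandInterceptionContext<DbDataReader> interceptionContext)
{
    // after execution the command knows its row size, and the reader's
    // fetch buffer can still be resized before the first row is fetched
    if (command is OracleCommand oracleCommand &&
        interceptionContext.Result is OracleDataReader oracleReader)
    {
        oracleReader.FetchSize = oracleCommand.RowSize * 1000;
    }
}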
Related
I have a service class with some injected services. It handles my Azure storage requests. I need to write NUnit tests for that class.
I'm new to NUnit and I'm struggling to construct an instance of my AzureService.cs.
Below is AzureService.cs. I have used some injected services.
using System;
using System.Linq;
using System.Threading.Tasks;
using JohnMorris.Plugin.Image.Upload.Azure.Interfaces;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Blob;
using Nop.Core.Caching;
using Nop.Core.Configuration;
using Nop.Core.Domain.Media;
using Nop.Services.Logging;

namespace JohnMorris.Plugin.Image.Upload.Azure.Services
{
    public class AzureService : IAzureService
    {
        #region Constants

        private const string THUMB_EXISTS_KEY = "Nop.azure.thumb.exists-{0}";
        private const string THUMBS_PATTERN_KEY = "Nop.azure.thumb";

        #endregion

        #region Fields

        private readonly ILogger _logger;
        private static CloudBlobContainer _container;
        private readonly IStaticCacheManager _cacheManager;
        private readonly MediaSettings _mediaSettings;
        private readonly NopConfig _config;

        #endregion

        #region Ctor

        public AzureService(IStaticCacheManager cacheManager, MediaSettings mediaSettings, NopConfig config, ILogger logger)
        {
            this._cacheManager = cacheManager;
            this._mediaSettings = mediaSettings;
            this._config = config;
            this._logger = logger;
        }

        #endregion

        #region Utilities

        public string GetAzureStorageUrl()
        {
            return $"{_config.AzureBlobStorageEndPoint}{_config.AzureBlobStorageContainerName}";
        }

        public virtual async Task DeleteFileAsync(string prefix)
        {
            try
            {
                BlobContinuationToken continuationToken = null;
                do
                {
                    var resultSegment = await _container.ListBlobsSegmentedAsync(prefix, true, BlobListingDetails.All, null, continuationToken, null, null);
                    await Task.WhenAll(resultSegment.Results.Select(blobItem => ((CloudBlockBlob)blobItem).DeleteAsync()));
                    //get the continuation token.
                    continuationToken = resultSegment.ContinuationToken;
                }
                while (continuationToken != null);

                _cacheManager.RemoveByPrefix(THUMBS_PATTERN_KEY);
            }
            catch (Exception e)
            {
                _logger.Error("Azure file delete error", e);
            }
        }

        public virtual async Task<bool> CheckFileExistsAsync(string filePath)
        {
            try
            {
                var key = string.Format(THUMB_EXISTS_KEY, filePath);
                return await _cacheManager.Get(key, async () =>
                {
                    //GetBlockBlobReference doesn't need to be async since it doesn't contact the server yet
                    var blockBlob = _container.GetBlockBlobReference(filePath);
                    return await blockBlob.ExistsAsync();
                });
            }
            catch { return false; }
        }

        public virtual async Task SaveFileAsync(string filePath, string mimeType, byte[] binary)
        {
            try
            {
                var blockBlob = _container.GetBlockBlobReference(filePath);
                if (!string.IsNullOrEmpty(mimeType))
                    blockBlob.Properties.ContentType = mimeType;
                if (!string.IsNullOrEmpty(_mediaSettings.AzureCacheControlHeader))
                    blockBlob.Properties.CacheControl = _mediaSettings.AzureCacheControlHeader;
                await blockBlob.UploadFromByteArrayAsync(binary, 0, binary.Length);
                _cacheManager.RemoveByPrefix(THUMBS_PATTERN_KEY);
            }
            catch (Exception e)
            {
                _logger.Error("Azure file upload error", e);
            }
        }

        public virtual byte[] LoadFileFromAzure(string filePath)
        {
            try
            {
                var blob = _container.GetBlockBlobReference(filePath);
                if (blob.ExistsAsync().GetAwaiter().GetResult())
                {
                    blob.FetchAttributesAsync().GetAwaiter().GetResult();
                    var bytes = new byte[blob.Properties.Length];
                    blob.DownloadToByteArrayAsync(bytes, 0).GetAwaiter().GetResult();
                    return bytes;
                }
            }
            catch (Exception)
            {
            }
            return new byte[0];
        }

        #endregion
    }
}
This is my test class below. I need to create a new AzureService() from my service class, but the AzureService constructor takes several injected services.
using JohnMorris.Plugin.Image.Upload.Azure.Services;
using Nop.Core.Caching;
using Nop.Core.Domain.Media;
using Nop.Services.Tests;
using NUnit.Framework;

namespace JohnMorris.Plugin.Image.Upload.Azure.Test
{
    public class AzureServiceTest
    {
        private AzureService _azureService;

        [SetUp]
        public void Setup()
        {
            _azureService = new AzureService(cacheManager, mediaSettings, config, logger);
        }

        [Test]
        public void App_settings_has_azure_connection_details()
        {
            var url = _azureService.GetAzureStorageUrl();
            Assert.IsNotNull(url);
            Assert.IsNotEmpty(url);
        }

        [Test]
        public void Check_File_Exists_Async_test()
        {
            //To Do
        }

        [Test]
        public void Save_File_Async_Test()
        {
            //To Do
        }

        [Test]
        public void Load_File_From_Azure_Test()
        {
            //To Do
        }
    }
}
The question is: what exactly do you want to test? If you want to test whether NopConfig is properly reading values from AppSettings, then you do not have to test AzureService at all.
If you want to test that the GetAzureStorageUrl method works correctly, then you should mock your NopConfig dependency and focus on testing only the AzureService methods, like this:
using Moq;
using Nop.Core.Configuration;
using NUnit.Framework;

namespace NopTest
{
    public class AzureService
    {
        private readonly NopConfig _config;

        public AzureService(NopConfig config)
        {
            _config = config;
        }

        public string GetAzureStorageUrl()
        {
            return $"{_config.AzureBlobStorageEndPoint}{_config.AzureBlobStorageContainerName}";
        }
    }

    [TestFixture]
    public class NopTest
    {
        [Test]
        public void GetStorageUrlTest()
        {
            Mock<NopConfig> nopConfigMock = new Mock<NopConfig>();
            nopConfigMock.Setup(x => x.AzureBlobStorageEndPoint).Returns("https://www.example.com/");
            nopConfigMock.Setup(x => x.AzureBlobStorageContainerName).Returns("containername");

            AzureService azureService = new AzureService(nopConfigMock.Object);
            string azureStorageUrl = azureService.GetAzureStorageUrl();

            Assert.AreEqual("https://www.example.com/containername", azureStorageUrl);
        }
    }
}
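Applying the same idea to the original four-dependency constructor might look like the sketch below. The interfaces are mocked; MediaSettings is a plain settings class, so it is instantiated directly; and as in the example above, this assumes the NopConfig properties Moq overrides are virtual:
using JohnMorris.Plugin.Image.Upload.Azure.Services;
using Moq;
using Nop.Core.Caching;
using Nop.Core.Configuration;
using Nop.Core.Domain.Media;
using Nop.Services.Logging;
using NUnit.Framework;

public class AzureServiceTest
{
    private AzureService _azureService;

    [SetUp]
    public void Setup()
    {
        var cacheManager = new Mock<IStaticCacheManager>();
        var logger = new Mock<ILogger>();
        var config = new Mock<NopConfig>();
        // hypothetical values; only GetAzureStorageUrl reads them
        config.Setup(x => x.AzureBlobStorageEndPoint).Returns("https://www.example.com/");
        config.Setup(x => x.AzureBlobStorageContainerName).Returns("containername");

        _azureService = new AzureService(cacheManager.Object, new MediaSettings(), config.Object, logger.Object);
    }
}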
I have an Azure WebJob running continuously which does CRUD operations in my database. I'm using Entity Framework and the UnitOfWork pattern, and in my WebJob I use Autofac to inject my dependencies, the service and repository layers. I'm having some issues with stale data when running my WebJob.
Example:
I update a record on my website and my WebJob is then kicked off, but the WebJob can't see this change in the database. It sees the record prior to the change.
To fix this I tried to inject my custom context like this:
builder.RegisterType<PCContext>().As<IPCContext>().InstancePerDependency();
After doing that I can see the newest changes in the database. But now I have another issue: when I insert a new record and then read it from my WebJob, I can't see the new record. This worked fine before I injected my context (as shown in the code above).
If I create a new context in my WebJob function I can read the updates from the database, but I want to use my service layer instead, like this:
_services.UserExport.ExportUsers();
I can't figure out what I'm doing wrong here. Basically, what I want is this: every time my WebJob function is kicked off, a new context should be created so I'm sure I have the newest updates from the database, and I should be able to insert into my database and read that record back again in my WebJob through my service layer.
Can someone point me in the right direction?
Note that my WebJob is continuous, so its Autofac registration code is only executed once, when the WebJob starts, not every time a function in the WebJob is executed.
Please let me know if more description or code is necessary.
Thanks.
Based on your description, I tested a similar scenario on my side and I found I could read from and update my database. I defined my generic Repository and UnitOfWork as follows; you can refer to them:
Repository:
public interface IRepository<T>
{
    T GetById(object id);
    IQueryable<T> GetAll();
    void Edit(T entity);
    void Insert(T entity);
    void Delete(T entity);
}

public class Repository<T> : IRepository<T> where T : class
{
    public DbContext context;
    public DbSet<T> dbset;

    public Repository(DbContext context)
    {
        this.context = context;
        dbset = context.Set<T>();
    }

    public T GetById(object id)
    {
        return dbset.Find(id);
    }

    public IQueryable<T> GetAll()
    {
        return dbset;
    }

    public void Insert(T entity)
    {
        dbset.Add(entity);
    }

    public void Edit(T entity)
    {
        context.Entry(entity).State = EntityState.Modified;
    }

    public void Delete(T entity)
    {
        context.Entry(entity).State = EntityState.Deleted;
    }
}
UnitOfWork:
public class UnitOfWork : IDisposable
{
    private DbContext _context;
    private Repository<TodoItem> toDoItemRepository;

    public Repository<TodoItem> ToDoItemRepository
    {
        get
        {
            if (toDoItemRepository == null)
                toDoItemRepository = new Repository<TodoItem>(_context);
            return toDoItemRepository;
        }
    }

    public UnitOfWork() : this(new BruceDbContext()) { }

    public UnitOfWork(DbContext context)
    {
        _context = context;
    }

    public void Commit()
    {
        _context.SaveChanges();
    }

    #region Dispose
    private bool disposed = false;

    protected virtual void Dispose(bool disposing)
    {
        if (!this.disposed)
        {
            if (disposing)
            {
                _context.Dispose();
            }
        }
        this.disposed = true;
    }

    public void Dispose()
    {
        Dispose(true);
        GC.SuppressFinalize(this);
    }
    #endregion
}
For my WebJob I defined Functions.cs and initialized the JobActivator as follows:
Functions.cs
public class Functions
{
    private UnitOfWork _unitOfWork;

    public Functions(UnitOfWork unitOfWork)
    {
        _unitOfWork = unitOfWork;
    }

    public async Task CronJob([TimerTrigger("0/30 * * * * *")] TimerInfo timer, CancellationToken cancelToken)
    {
        //retrieve the latest record
        var item = _unitOfWork.ToDoItemRepository.GetAll().OrderByDescending(i => i.CreateDate).FirstOrDefault();
        Console.WriteLine($"[{item.CreateDate}] {item.Text}");

        //insert a new record
        _unitOfWork.ToDoItemRepository.Insert(new Entities.TodoItem()
        {
            Id = Guid.NewGuid().ToString(),
            CreateDate = DateTime.Now,
            Text = $"hello world -{DateTime.Now}"
        });
        _unitOfWork.Commit();

        //retrieve the previously added record
        item = _unitOfWork.ToDoItemRepository.GetAll().OrderByDescending(i => i.CreateDate).FirstOrDefault();
        Console.WriteLine($"[{item.CreateDate}] {item.Text}");
    }
}
Program.cs
var builder = new ContainerBuilder();
builder.Register<UnitOfWork>(c => new UnitOfWork(new BruceDbContext())).InstancePerDependency();
builder.RegisterType<Functions>();
var container = builder.Build();

var config = new JobHostConfiguration()
{
    JobActivator = new AutoFacJobActivator(container)
};
var host = new JobHost(config);
host.RunAndBlock(); // keep the continuous WebJob running
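The AutoFacJobActivator used above isn't shown in the original; a minimal version just resolves function classes from the container. A sketch against the WebJobs SDK's IJobActivator:
public class AutoFacJobActivator : IJobActivator
{
    private readonly IContainer _container;

    public AutoFacJobActivator(IContainer container)
    {
        _container = container;
    }

    // the JobHost calls this to create each job class instance,
    // so Functions (and its UnitOfWork) come out of the container
    public T CreateInstance<T>()
    {
        return _container.Resolve<T>();
    }
}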
My database has a different schema depending on user selections at runtime.
My code is below:
public partial class FashionContext : DbContext
{
    private string _schema;

    public FashionContext(string schema) : base()
    {
        _schema = schema;
    }

    public virtual DbSet<Style> Styles { get; set; }

    protected override void OnConfiguring(DbContextOptionsBuilder options)
    {
        options.UseSqlServer(@"Server=.\sqlexpress;Database=inforfashionplm;Trusted_Connection=True;");
    }

    protected override void OnModelCreating(ModelBuilder modelBuilder)
    {
        modelBuilder.Entity<Style>()
            .ToTable("Style", schema: _schema);
    }
}
Upon testing, I created a context instance with 'schema1'.
So far so good.
But when I create another context instance with a different schema, 'schema2', the resulting data still comes from 'schema1'.
Here is the implementation:
using (var db = new FashionContext("schema1"))
{
    foreach (var style in db.Styles)
    {
        Console.WriteLine(style.Name);
    }
}
Console.ReadLine();
Console.Clear();

using (var db = new FashionContext("schema2"))
{
    foreach (var style in db.Styles)
    {
        Console.WriteLine(style.Name);
    }
}
Console.ReadLine();
Later I noticed that OnModelCreating is called only once; it is never called again when you create a new context instance with the same connection string.
Is it possible to have a dynamic schema at runtime? Note: this is possible in EF6.
One possible way was mentioned above, but only briefly, so I will try to explain it with examples.
You need to override the default ModelCacheKeyFactory and ModelCacheKey.
ModelCacheKeyFactory.cs
internal sealed class CustomModelCacheKeyFactory<TContext> : ModelCacheKeyFactory
    where TContext : TenantDbContext<TContext>
{
    public override object Create(DbContext context)
    {
        return new CustomModelCacheKey<TContext>(context);
    }

    public CustomModelCacheKeyFactory([NotNull] ModelCacheKeyFactoryDependencies dependencies) : base(dependencies)
    {
    }
}
ModelCacheKey.cs; please review the overridden Equals and GetHashCode methods, they are not the best and should be improved.
internal sealed class CustomModelCacheKey<TContext> : ModelCacheKey where TContext : TenantDbContext<TContext>
{
    private readonly string _schema;

    public CustomModelCacheKey(DbContext context) : base(context)
    {
        _schema = (context as TContext)?.Schema;
    }

    protected override bool Equals(ModelCacheKey other)
    {
        return base.Equals(other) && (other as CustomModelCacheKey<TContext>)?._schema == _schema;
    }

    public override int GetHashCode()
    {
        var hashCode = base.GetHashCode();
        if (_schema != null)
        {
            hashCode ^= _schema.GetHashCode();
        }
        return hashCode;
    }
}
Register it in DI:
builder.UseSqlServer(dbConfiguration.Connection)
    .ReplaceService<IModelCacheKeyFactory, CustomModelCacheKeyFactory<CustomContext>>();
Context sample:
public sealed class CustomContext : TenantDbContext<CustomContext>
{
    public CustomContext(DbContextOptions<CustomContext> options, string schema) : base(options, schema)
    {
    }
}
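The TenantDbContext<TContext> base class is referenced but not shown; the essential part is that it exposes the schema so the cache key can read it. A minimal sketch, assuming the schema is simply stored and applied as the model's default:
public abstract class TenantDbContext<TContext> : DbContext where TContext : DbContext
{
    public string Schema { get; }

    protected TenantDbContext(DbContextOptions options, string schema) : base(options)
    {
        Schema = schema;
    }

    protected override void OnModelCreating(ModelBuilder modelBuilder)
    {
        // each schema produces its own model; the custom cache key
        // keeps those models from overwriting one another
        modelBuilder.HasDefaultSchema(Schema);
    }
}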
You can build the model externally and pass it into the DbContext using DbContextOptionsBuilder.UseModel()
Another (more advanced) alternative is to replace the IModelCacheKeyFactory to take schema into account.
I found a way to recreate the compiled model on each context creation.
public partial class MyModel : DbContext
{
    private static DbConnection _connection
    {
        get
        {
            //return a new db connection
        }
    }

    private static DbCompiledModel _model
    {
        get
        {
            return CreateModel("schema name");
        }
    }

    public MyModel()
        : base(_connection, _model, false)
    {
    }

    private static DbCompiledModel CreateModel(string schema)
    {
        var modelBuilder = new DbModelBuilder();
        modelBuilder.HasDefaultSchema(schema);
        modelBuilder.Entity<entity1>().ToTable(schema + ".entity1");
        var builtModel = modelBuilder.Build(_connection);
        return builtModel.Compile();
    }
}
I want a way to check whether an AsyncSubject has already fired, something like:
AsyncSubject<Unit> sub;
// stuff
if (!sub.HasFired())
    // Do stuff
My current best attempt is:
public static bool HasFired<T>(this AsyncSubject<T> sub)
{
    AsyncSubject<bool> ret = new AsyncSubject<bool>();
    sub.Timeout(TimeSpan.FromMilliseconds(20))
        .Subscribe(_ =>
        {
            ret.OnNext(true);
            ret.OnCompleted();
        },
        ex =>
        {
            ret.OnNext(false);
            ret.OnCompleted();
        });
    return ret.First();
}
But it feels very ugly and long. I suspect I'm missing something simple. Any suggestions?
It's easier to wrap around the existing AsyncSubject and add the required state.
public class AsyncSubjectEx<T> : ISubject<T>, IDisposable
{
    AsyncSubject<T> Subject = new AsyncSubject<T>();

    public bool HasValue { get; protected set; }

    public object Gate = new object();

    public void OnCompleted()
    {
        Subject.OnCompleted();
    }

    public void OnError(Exception error)
    {
        Subject.OnError(error);
    }

    public void OnNext(T value)
    {
        lock (Gate)
        {
            Subject.OnNext(value);
            HasValue = true;
        }
    }

    public IDisposable Subscribe(IObserver<T> observer)
    {
        lock (Gate)
            return Subject.Subscribe(observer);
    }

    public void Dispose()
    {
        Subject.Dispose();
    }
}
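Usage then reduces to checking the flag; a hypothetical sketch (note that HasValue is set by OnNext, while an AsyncSubject only replays its value to subscribers after OnCompleted):
var sub = new AsyncSubjectEx<Unit>();
// stuff
if (!sub.HasValue)
{
    // no value has been observed yet
}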
Ironically, inspecting the original AsyncSubject with reflection shows that there is a hasValue field, but it doesn't happen to be exposed. Consider reporting this to the Rx team; it might be useful sometime.
I'm using EF for development and I'm fairly new to it.
I'm confused about how to work with the EntityFramework context when I have to do several operations with it. Could you point me to good tutorials, and glance at my code for possible issues?
Here is my current code:
//domain.dll
class OrderDomainService
{
    public void DoWork()
    {
        foreach (var order in GetOrders())
        {
            DeleteOrder(order);
        }
    }

    public List<Order> GetOrders()
    {
        IOrderRepository orderRep = new OrderRepository();
        return orderRep.GetAll();
    }

    public void DeleteOrder(Order order)
    {
        IOrderRepository orderRep = new OrderRepository();
        orderRep.Delete(order);
    }
}

//repository.dll
public interface IOrderRepository
{
    List<Order> GetAll();
    void Delete(Order order);
    void SaveContext();
}

public class OrderRepository : IOrderRepository
{
    public OrderRepository()
    {
        if (ctx == null)
            ctx = new EntityFrameworkDataContext();
    }

    static EntityFrameworkDataContext ctx { get; set; }

    public List<Order> GetAll()
    {
        return ctx.Orders.ToList();
    }

    public void Delete(Order order)
    {
        ctx.Orders.Remove(order);
    }

    public void SaveContext()
    {
        ctx.SaveChanges();
        ctx = null;
    }
}
You need to share the same EntityFrameworkDataContext instance between several repositories (use the Unit of Work pattern: http://blogs.msdn.com/b/adonet/archive/2009/06/16/using-repository-and-unit-of-work-patterns-with-entity-framework-4-0.aspx). If you perform an operation that needs two or more repositories, each holding its own context, you will have problems.
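A minimal sketch of that idea, assuming OrderRepository is changed to take the context through its constructor instead of holding a static one:
public class UnitOfWork : IDisposable
{
    private readonly EntityFrameworkDataContext _ctx = new EntityFrameworkDataContext();

    // every repository created by this unit of work shares the one context,
    // so changes are visible across repositories and saved together
    public IOrderRepository Orders => new OrderRepository(_ctx);

    public void Commit() => _ctx.SaveChanges();

    public void Dispose() => _ctx.Dispose();
}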