Is there a DbBulkCopy class in ADO.NET?

I'm just puzzled that I didn't find it. Our solution uses enterprise library data access application block to make it database agnostic. That means we rely heavily on classes from System.Data.Common. Now I have to implement bulk copy operations and there is no common class or interface that they implement.
I have checked the implementations for our three main database vendors (SQL Server, Informix and Oracle), and they all have bulk copy classes whose interfaces are almost the same.
Do I have to make my own interfaces in our framework to do database agnostic bulk copy operations? Am I missing something here?
Something like:
public class DbBulkCopy
{
    SqlBulkCopy sqlServer;
    OracleBulkCopy oracle;
    IfxBulkCopy informix;

    public void WriteToServer(DataTable table)
    {
        if (sqlServer != null) sqlServer.WriteToServer(table);
        if (oracle != null) oracle.WriteToServer(table);
        if (informix != null) informix.WriteToServer(table);
    }
}
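One way to approach this is to define a small interface in your own framework and wrap each vendor's bulk copy class behind an adapter. The sketch below shows the idea for SQL Server only; the `IBulkCopy` and `SqlServerBulkCopy` names are illustrative, and the Oracle/Informix adapters would follow the same shape against their own provider assemblies:

```csharp
using System.Data;
using System.Data.SqlClient; // provider-specific; other adapters reference
                             // their own provider assemblies

// Framework-level abstraction: callers depend only on this interface.
public interface IBulkCopy
{
    string DestinationTableName { get; set; }
    void WriteToServer(DataTable table);
}

// One adapter per vendor; this one wraps SqlBulkCopy.
public class SqlServerBulkCopy : IBulkCopy
{
    private readonly SqlBulkCopy _inner;

    public SqlServerBulkCopy(SqlConnection connection)
    {
        _inner = new SqlBulkCopy(connection);
    }

    public string DestinationTableName
    {
        get => _inner.DestinationTableName;
        set => _inner.DestinationTableName = value;
    }

    public void WriteToServer(DataTable table) => _inner.WriteToServer(table);
}
```

A factory keyed on the provider name (as the data access application block already does for connections and commands) could then hand out the right `IBulkCopy` implementation.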

Related

What is a Combo Repository and a Service Bus?

I am learning more about NoSQL and specifically DynamoDB. I recently asked this question: Mapping database structure from SQL Server to DynamoDB
In the comments under the accepted answer, the answerer refers to a Service Bus and a Combo repository.
Q1) Is this a Service Bus? (see EventListener class): http://enterprisecraftsmanship.com/2015/05/06/combining-sql-server-and-mongodb-using-nhibernate/
Q2) What is a Combo repository? Is it a "Combination" repository i.e. some methods interface with multiple databases (SQL Server and DynamoDB).
I would usually ask the answerer directly, but the discussion had already diverged from the original post (as the answerer pointed out), so I have decided to ask a separate question.
I am thinking about using a NoSQL database to scale database reads
Good idea. It sounds like you are going down the path of Command Query Responsibility Segregation (CQRS). NoSQL databases make for excellent read stores.
The link you referenced - Combining SQL Server and MongoDB using NHibernate - describes a technique for keeping both stores up to date, and is what I meant by a "Combo" (combining) Repository. "Combo Repository" isn't a standard pattern. Quoting the author:
Of course, theoretically, we could just move all the data to some NoSQL storage, but what if we don’t want to get rid of our relational database completely? How can we combine them together?
You've tagged your original question with both a SQL Server and a NoSQL database, so at a guess you're in the polyglot persistence space.
The Repository Pattern is a very common abstraction layer around data persistence.
The "combining" link you've referenced specifically solves the problem of many-to-many relationships (often represented as junction tables in SQL), and the performance implications when there are many such relations.
In the more general sense, as an alternative to providing interception points in NHibernate, you may or may not have abstracted data access via the repository pattern.
Here's an ultra simple (and non-generic) repository interface in C#:
public interface IWidgetRepository
{
    Task<Widget> FetchWidget(string widgetKey);
    Task SaveWidget(Widget toBeSaved);
}
Suppose we already have a SqlRepository:
public class SqlWidgetRepository : IWidgetRepository
{
    public async Task<Widget> FetchWidget(string widgetKey)
    {
        // ... Code to obtain an NHibernate session, then retrieve
        // and deserialize the Widget
    }

    // ... Other methods here
}
You could also choose to provide a MongoDb implementation:
public class MongoWidgetRepository : IWidgetRepository
{
    public async Task<Widget> FetchWidget(string widgetKey)
    {
        // ... Code to connect to a MongoDb secondary, Find() the
        // widget and deserialize it into a Widget
    }

    // ... Other methods here
}
And to maintain both databases simultaneously, here's an example of how this "combo" repository may look:
public class ComboWidgetRepository : IWidgetRepository
{
    private readonly IWidgetRepository _repo1;
    private readonly IWidgetRepository _repo2;

    public ComboWidgetRepository(IWidgetRepository repo1, IWidgetRepository repo2)
    {
        _repo1 = repo1;
        _repo2 = repo2;
    }

    public async Task<Widget> FetchWidget(string widgetKey)
    {
        // Just need the one ... first one wins
        var firstCompleted = await Task.WhenAny(_repo1.FetchWidget(widgetKey),
                                                _repo2.FetchWidget(widgetKey));
        return await firstCompleted;
    }

    public async Task SaveWidget(Widget toBeSaved)
    {
        // Need both to be saved
        await Task.WhenAll(_repo1.SaveWidget(toBeSaved),
                           _repo2.SaveWidget(toBeSaved));
    }
}
The above "combo" repository may well fulfil the needs of a single system (and there are many other ways to keep two databases synchronized).
CQRS is however frequently used at enterprise scale (i.e. where you have many systems, with many databases).
My comment about an Enterprise Service Bus will only make sense if you need to distribute data across an enterprise.
The concept is simple:
1. Commands (e.g. "Add Widget") are queued across the bus to a transactional system.
2. The system handling widgets performs the transaction (e.g. inserts the widget into a database).
3. The Widget system then publishes (broadcasts) a message to the bus detailing that a new widget has been added (with all the relevant widget information).
4. Other systems in the enterprise which are interested in updates to Widgets subscribe to this message and update their own read-store representations of the Widgets (e.g. in a NoSQL database or cache, and in the format which makes most sense to them).
This way, when a user accesses any of these other systems and views a screen about 'Widgets', the system can serve the data from its own Read Store, instead of having to request the data from the Widget system itself.
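The publish/subscribe flow above can be sketched with a toy in-process bus. The `WidgetAdded` and `InMemoryBus` names are illustrative, not a real ESB API; real buses (e.g. NServiceBus, MassTransit) are durable and cross-process:

```csharp
using System;
using System.Collections.Generic;

// Illustrative message published after the Widget system commits its transaction.
public record WidgetAdded(string WidgetKey, string Name);

// Toy stand-in for an enterprise service bus.
public class InMemoryBus
{
    private readonly List<Action<WidgetAdded>> _subscribers = new();

    public void Subscribe(Action<WidgetAdded> handler) => _subscribers.Add(handler);

    public void Publish(WidgetAdded message)
    {
        foreach (var handler in _subscribers)
            handler(message);
    }
}

public static class Demo
{
    public static void Main()
    {
        var bus = new InMemoryBus();
        var readStore = new Dictionary<string, string>();

        // Another system keeps its own read store up to date.
        bus.Subscribe(m => readStore[m.WidgetKey] = m.Name);

        // The Widget system broadcasts after inserting the widget.
        bus.Publish(new WidgetAdded("W-1", "Sprocket"));

        Console.WriteLine(readStore["W-1"]); // prints "Sprocket"
    }
}
```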

Entity Framework Core 2 Save strings as uppercase

I want to save all strings to my DB in uppercase. I think the best place to do this is by overriding SaveChanges() on my DbContext. I know I need to call ToUpper() on something but I am unsure on what to call it on.
public override int SaveChanges()
{
    foreach (var entry in ChangeTracker.Entries()
        .Where(e => e.State == EntityState.Added || e.State == EntityState.Modified))
    {
        // do something
    }
    return base.SaveChanges();
}
I'm not sure it is wise to pollute your database layer with this constraint; it would limit the reusability of your database.
Quite often the definition of structure of the (tables of the) database is separated from the handling of the data in the database, which is done via a separate Database Abstraction Layer.
This separation makes it possible to reuse your database structure for databases with other constraints (for instance, one that allows lower case strings, or a special database for unit tests).
This separation of concerns is quite often implemented using the repository pattern. It makes it possible to change the functionality of the database without having to change the structure of the database.
MSDN Description entity framework and repository pattern
Stack Overflow: Repository Pattern Step by Step Explanation
You could also use an existing database that uses lower case strings, as your repository layer could convert everything to upper case before returning queried strings.
So by separating the database from the functionality that operates on the data, you make it easier to reuse the database for other purposes and to change requirements without having to change the structure of the database: improved reusability and improved maintainability.
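If you do decide to do the conversion (whether in a repository layer or inside a SaveChanges override), the core operation is uppercasing every string property of an entity. Here is a minimal reflection-based sketch; the Widget type and helper name are illustrative:

```csharp
using System;
using System.Linq;

public class Widget
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public static class UppercaseHelper
{
    // Uppercase every readable, writable string property on the entity.
    public static void UppercaseStrings(object entity)
    {
        var stringProps = entity.GetType().GetProperties()
            .Where(p => p.PropertyType == typeof(string) && p.CanRead && p.CanWrite);

        foreach (var prop in stringProps)
        {
            var value = (string)prop.GetValue(entity);
            if (value != null)
                prop.SetValue(entity, value.ToUpperInvariant());
        }
    }
}

public static class Demo
{
    public static void Main()
    {
        var w = new Widget { Id = 1, Name = "gizmo" };
        UppercaseHelper.UppercaseStrings(w);
        Console.WriteLine(w.Name); // prints "GIZMO"
    }
}
```

In the SaveChanges override from the question, you would call such a helper on `entry.Entity` for each tracked entry; in a repository layer, you would call it before handing the entity to the context.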

IQueryable and Mixing Lazy Loading and Eager Loading

I have a situation where I will be using a repository pattern and pulling objects from the database with a lazy loaded GetAll method that returns IQueryable. However I also need to build dynamic objects that will be included with the lazy loaded objects(query).
Is it possible to add built objects to a lazy loaded IQueryable and still keep the lazy loaded benefits? For instance
public override IQueryable<Foo> GetAll()
{
    return _entities; // lazy loaded
}

public override IQueryable<Foo> GetAllPlusDynamic()
{
    var entities = GetAll();
    foreach (var d in GetAllDynamic())
    {
        entities.Add(d); // eagerly loaded (pseudocode; IQueryable has no Add)
    }
    return entities;
}
I am unsure if I understand you correctly, but referring to your comment...
Yes, basically query the database for a set of objects and then query
another data source (in this case a service) and build a set of
objects.
... I would say that it's not possible.
An object of type IQueryable<T> used with Entity Framework (LINQ to Entities) is basically a description of a query which the underlying data store can execute, usually an abstract description (expression tree) which gets translated into SQL.
Every part of such a query description - where expressions, select expression, Any(...) expressions, etc. - must be translatable into the native language (SQL) of the data store. It's especially not possible to include some method calls - like a service call - in an expression that the database cannot understand and perform.
IQueryable<T> knows an underlying "provider". This provider is responsible for translating the expression tree held by the IQueryable<T> object into "something", for example the T-SQL used by SQL Server, or the SQL dialects used by MySQL or Oracle. I believe it's possible to write your own provider which might then somehow be able to perform both the service calls and the database queries, but I also believe that this is not an easy task.
Using the standard SQL providers for Entity Framework, you have to perform the database query and the service call separately, one after the other: run the query and materialize the entities in memory - then run the service call on the result collection for each entity.
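That two-step approach can be sketched as follows (the Foo type and method names are illustrative stand-ins for the real query and service call): materialize the database query first, then append the dynamically built objects in memory.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public class Foo
{
    public int Id { get; set; }
}

public static class Demo
{
    // Stand-in for the lazy-loaded database query.
    public static IQueryable<Foo> GetAll() =>
        new[] { new Foo { Id = 1 }, new Foo { Id = 2 } }.AsQueryable();

    // Stand-in for objects built from another source (e.g. a service call).
    public static IEnumerable<Foo> GetAllDynamic() =>
        new[] { new Foo { Id = 3 } };

    public static void Main()
    {
        // ToList() executes the database query and materializes the entities;
        // only then are the dynamically built objects concatenated in memory.
        List<Foo> combined = GetAll().ToList()
                                     .Concat(GetAllDynamic())
                                     .ToList();

        Console.WriteLine(combined.Count); // prints 3
    }
}
```

Note that the combined result is an in-memory IEnumerable<Foo>, not an IQueryable<Foo> backed by the database, so any further filtering happens in memory rather than in SQL.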

ObjectContext never derives from an interface?? How do you apply DI/IoC in case of multiple types of ObjectContext?

Suppose you have a system with multiple types of object contexts, e.g. BillingObjectContext, HumanResourceObjectContext, etc. All derive from ObjectContext, but the ObjectContext class does not implement any specific interface like IObjectContext. How would you apply DI/IoC in the case of multiple types of ObjectContext, say using Ninject?
If you must depend on it in a test, you have to mock it. Here's a sample; it's not much harder than implementing an interface. See also TDD improvements in EF 4.
Why can't we just create the actual context object to be used in our tests? Since we don't want our tests to affect the production database, we can always specify a connection string that points to a test database. Before running each test, construct a new context, add the data you will need in your test, proceed with the unit test, then in the test cleanup section, delete all the records that were created during the test. The only side effect here would be that the auto-increment IDs would be used up in the test database, but since it's a test database - who cares?
I know that most answers regarding this question propose using DI/IoC designs to create interfaces for data contexts etc. but the reason I am using Entity Framework is exactly to not write any interfaces for my database connections, object models, and simple CRUD transactions. To write mock interfaces for my data objects and to write complex queryable objects to support LINQ, defeats the purpose of relying on highly-tested and reliable Entity Framework.
This pattern for unit testing is not new - Ruby on Rails has been using it for a long time and it's worked out great. Just as .NET provides EF, RoR provides ActiveRecord objects and each unit test creates the objects it needs, proceeds with the tests, and then deletes all the constructed records.
How to specify connection string for test environment? Since all tests are in their own dedicated test project, adding a new App.Config file with a connection string for the test database would suffice.
Just think of how much headache and pain this will save you.
namespace ProjectNamespace
{
    [TestClass]
    public class UnitTest1
    {
        private ObjectContext objContext;

        [TestInitialize]
        public void SetUp()
        {
            // Create the object context and add all the necessary data records.
        }

        [TestMethod]
        public void TestMethod1()
        {
            // Run the tests.
        }

        [TestCleanup]
        public void CleanUp()
        {
            // Delete created records.
        }
    }
}

Is there an in-memory provider for Entity Framework?

I am unit testing code written against the ADO.NET Entity Framework. I would like to populate an in-memory database with rows, and make sure that my code retrieves them properly.
I can mock the Entity Framework using Rhino Mocks, but that would not be sufficient. I would be telling the query what entities to return to me. This would neither test the where clause nor the .Include() statements. I want to be sure that my where clause matches only the rows I intend, and no others. I want to be sure that I have asked for the entities that I need, and none that I don't.
For example:
class CustomerService
{
    ObjectQuery<Customer> _customerSource;

    public CustomerService(ObjectQuery<Customer> customerSource)
    {
        _customerSource = customerSource;
    }

    public Customer GetCustomerById(int customerId)
    {
        var customers = from c in _customerSource.Include("Order")
                        where c.CustomerID == customerId
                        select c;
        return customers.FirstOrDefault();
    }
}
If I mock the ObjectQuery to return a known customer populated with orders, how do I know that CustomerService has the right where clause and Include? I would rather insert some customer rows and some order rows, then assert that the right customer was selected and the orders are populated.
An InMemory provider is included in EF7 (pre-release).
You can use either the NuGet package, or read about it in the EF repo on GitHub (view source).
The article http://www.codeproject.com/Articles/460175/Two-strategies-for-testing-Entity-Framework-Effort describes Effort, an Entity Framework provider that runs in memory.
You can still use your DbContext or ObjectContext classes within unit tests, without having to have an actual database.
A better approach here might be to use the Repository pattern to encapsulate your EF code. When testing your services you can use mocks or fakes. When testing your repositories you will want to hit the real DB to ensure that you are getting the results you expect.
There is not currently an in-memory provider for EF, but if you take a look at Highway.Data it has a base abstraction interface and an InMemoryDataContext.
Testing Data Access and EF with Highway.Data
Yes, there is at least one such provider - SQLite. I have used it a bit and it works. You can also try SQL Server Compact; it's an embedded database and has EF providers too.
Edit:
SQLite has support for in-memory databases (link1). All you need is to specify a connection string like: "Data Source=:memory:;Version=3;New=True;". If you need an example you may look at SharpArchitecture.
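A minimal sketch of opening such an in-memory database directly (this assumes the System.Data.SQLite NuGet package; note the database lives only as long as the connection stays open):

```csharp
using System.Data.SQLite; // from the System.Data.SQLite NuGet package

public static class Demo
{
    public static void Main()
    {
        // The :memory: data source creates a fresh database in RAM;
        // it disappears when this connection is closed.
        using var connection =
            new SQLiteConnection("Data Source=:memory:;Version=3;New=True;");
        connection.Open();

        using var create = connection.CreateCommand();
        create.CommandText =
            "CREATE TABLE Customers (Id INTEGER PRIMARY KEY, Name TEXT)";
        create.ExecuteNonQuery();

        using var insert = connection.CreateCommand();
        insert.CommandText = "INSERT INTO Customers (Name) VALUES ('Alice')";
        insert.ExecuteNonQuery();
    }
}
```

Because the database vanishes when the connection closes, each test typically opens one connection in setup, seeds its rows, and simply closes the connection in cleanup instead of deleting records.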
I am not familiar with Entity Framework and the ObjectQuery class but if the Include method is virtual you can mock it like this:
// Arrange
var customerSourceStub = MockRepository.GenerateStub<ObjectQuery<Customer>>();
var customers = new Customer[]
{
// Populate your customers as if they were coming from DB
};
customerSourceStub
.Stub(x => x.Include("Order"))
.Return(customers);
var sut = new CustomerService(customerSourceStub);
// Act
var actual = sut.GetCustomerById(5);
// Assert
Assert.IsNotNull(actual);
Assert.AreEqual(5, actual.Id);
You could try SQL Server Compact but it has some quite wild limitations:
SQL Server Compact does not support SKIP expressions in paging queries when it is used with the Entity Framework
SQL Server Compact does not support entities with server-generated keys or values when it is used with the Entity Framework
No outer joins, collate, modulo on floats, aggregates
In EF Core there are two main options for doing this:
SQLite in-memory mode allows you to write efficient tests against a provider that behaves like a relational database.
The InMemory provider is a lightweight provider that has minimal dependencies, but does not always behave like a relational database.
I am using SQLite, and it supports all the queries that I need to run against the Azure SQL production database.
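In EF Core, the SQLite in-memory option is typically wired up by sharing one open connection with the context. This is a sketch; it assumes the Microsoft.Data.Sqlite and Microsoft.EntityFrameworkCore.Sqlite packages, and BloggingContext is an illustrative DbContext:

```csharp
using Microsoft.Data.Sqlite;
using Microsoft.EntityFrameworkCore;

// Keep the connection open for the lifetime of the test:
// the in-memory database is destroyed when it closes.
var connection = new SqliteConnection("DataSource=:memory:");
connection.Open();

var options = new DbContextOptionsBuilder<BloggingContext>()
    .UseSqlite(connection)
    .Options;

using (var context = new BloggingContext(options))
{
    context.Database.EnsureCreated(); // build the schema in memory

    // ... seed data and run the code under test against a provider
    //     that actually behaves like a relational database
}
```

Multiple contexts created over the same open connection see the same in-memory database, which lets setup, the code under test, and assertions each use a fresh context instance.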