We're trying to play around with RIA Services. I can't seem to figure out how to delete a record. Here's the code I'm trying to use.
SomeDomainContext _SomeDomainContext = (SomeDomainContext)(productDataSource.DomainContext);
Product luckyProduct = (Product)(TheDataGrid.SelectedItem);
_SomeDomainContext.Products.Remove(luckyProduct);
productDataSource.SubmitChanges();
Removing the object from the entity collection works fine, but it doesn't seem to do anything to the DB. Am I using the objects the way I'm supposed to, or is there a different way of saving things?
The error system is a little finicky. Try this to get the error, if there is one, and that will give you an idea. My problem was dependencies on other tables that needed deletion before the object itself could be deleted, e.g. Tasks had to be deleted before deleting the Ticket.
System.Windows.Ria.Data.SubmitOperation op = productDataSource.SubmitChanges();
op.Completed += new EventHandler(op_Completed);
void op_Completed(object sender, EventArgs e) {
System.Windows.Ria.Data.SubmitOperation op = (System.Windows.Ria.Data.SubmitOperation)sender;
if (op.Error != null) {
ErrorWindow view = new ErrorWindow(op.Error);
view.Show();
}
}
In the code snippet above, I'd suggest using the callback parameter rather than an event handler.
productDataSource.SubmitChanges(delegate(SubmitOperation operation) {
if (operation.HasError) {
MessageBox.Show(operation.Error.Message);
}
}, null);
The callback model is designed for the caller of Load/SubmitChanges, while the event is designed for other code that gets a reference to a LoadOperation/SubmitOperation.
Hope that helps...
Been using EF Core with Razor pages for a few years now, but Blazor with EF Core has me questioning myself on tasks that used to be simple. I'm creating a golf app, and I'm attempting to update a particular person's round of golf.
Having stumbled in the beginning, I have learned that using dependency injection for the dbContext in Blazor causes several errors including the one in my subject line. Instead, I'm using DI to inject an IDbContextFactory and creating a new context in each method of my services.
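For reference, that factory setup looks roughly like this (a sketch with illustrative names such as GolfContext and RoundId, not my exact code; requires Microsoft.EntityFrameworkCore and the SqlServer provider):
// Startup/Program: register a context factory instead of a scoped DbContext.
// connectionString comes from configuration; "GolfContext" is an illustrative context class.
services.AddDbContextFactory<GolfContext>(options =>
    options.UseSqlServer(connectionString));

public class RoundService
{
    private readonly IDbContextFactory<GolfContext> _dbFactory;

    public RoundService(IDbContextFactory<GolfContext> dbFactory) => _dbFactory = dbFactory;

    public async Task<RoundModel> GetRoundAsync(int roundId)
    {
        // One short-lived context per operation, so nothing stays tracked between calls.
        using var context = _dbFactory.CreateDbContext();
        return await context.Rounds
            .AsNoTracking()
            .FirstOrDefaultAsync(r => r.RoundId == roundId);
    }
}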
The following code updates a golfer's round. When editing, the user may change the course, teebox, or any of the 18 scores. I'm able to update the round once, but if I go back into the same round to edit it a second time I get the "cannot be tracked" "already tracking" error.
I've scoured the internet for possible reasons. I've tried .AsNoTracking() on my initial GetRound(), detaching the entry after SaveChangesAsync(), and using the ChangeTracker to check whether I need to attach the Round object being updated. Nothing I've done allows me to update the same round twice without doing a reload in between the first and second update.
I'll provide whatever code necessary, but I'll start with the offending code:
public async Task<bool> UpdateRoundAsync(RoundModel Round)
{
var rtnVal = false;
try
{
using (var _context = _dbFactory.CreateDbContext())
{
_context.Rounds.Attach(Round).State = EntityState.Modified;
await _context.SaveChangesAsync();
_context.Entry(Round).State = EntityState.Detached;
}
rtnVal = true;
}
catch (Exception ex)
{
Console.Write(ex.Message);
throw;
}
return rtnVal;
}
When I run the above code, I see NOTHING in the change tracker as modified until I attach to the Round. Despite nothing being tracked, despite the dbContext being created new, then disposed, I still get an error that I'm already tracking the entity.
Help? What am I doing wrong?
Danny
UPDATE:
Edited the repro as requested, but it did not change the issue - still unable to update the Round twice without a reload in between.
Caveat: I'm not happy posting this as an answer, but it does solve the problem for now. I won't mark it as THE answer until I understand more about EFCore and Blazor together.
I did find that I was making a call to get course details without telling EF that I didn't want it to track the entity; however, fixing that still didn't solve the problem.
In the end, I simply forced the page to reload programmatically: NavMgr.NavigateTo("[same page]", true) after my update call. It feels very un-Blazor-like to do it this way, but ultimately I'm still learning Blazor and not getting much feedback on this post. I'm going to forge ahead and hope that clarity comes down the road.
For anyone that may run across this post, I ran into the same issue in a completely different project, and finally found something that made sense (here on S/O).
In this line of code:
_context.Rounds.Attach(Round).State = EntityState.Modified;
It should be:
_context.Entry(Round).State = EntityState.Modified;
I never knew that these two were different, and I never had an issue using the first example's syntax before starting to code with Blazor.
If you are unaware, like I was, the first way of setting the state to Modified attaches and tracks the entity and all of its related entities, which is why I was getting the error when I tried to make additional changes to the round-related objects.
The second way of setting the state ONLY marks the entity itself as Modified and leaves the related entities alone (effectively Unchanged).
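To make that concrete, here is how I picture the difference now (a minimal sketch; the navigation names are just illustrative):
using (var context = _dbFactory.CreateDbContext())
{
    // Attach() walks the whole object graph, so the Round *and* anything reachable
    // from it (Course, TeeBox, the Score entities, ...) end up tracked by this context.
    // context.Rounds.Attach(Round).State = EntityState.Modified;

    // Entry() only touches the Round instance itself; navigation properties are not
    // traversed, so the related entities are left alone.
    context.Entry(Round).State = EntityState.Modified;

    await context.SaveChangesAsync();
}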
Thank you to @TwoFingerRightClick for his comment on the accepted answer on this post: Related post
I'm new to Blazor and trying to make a page with several separate components to handle a massive form. Each individual component covers a part of the form.
The problem I'm facing is that each of my components needs access to data from the back-end, and not every component uses the same data. When the page loads, each component makes an attempt to fetch data from the server, which causes a problem with Entity Framework.
A second operation started on this context before a previous operation
completed. This is usually caused by different threads using the same
instance of DbContext.
This is obviously caused by the fact that my components are initialized at the same time, and all make their attempt to load the data simultaneously. I was under the impression that the way DI is set up in Blazor, this wouldn't be a problem, but it is.
Here are the components in my template:
<CascadingValue Value="this">
<!-- BASE DATA -->
<CharacterBaseDataView />
<!-- SPECIAL RULES -->
<CharacterSpecialRulesView />
</CascadingValue>
Here is how my components are initialized:
protected async override Task OnInitializedAsync()
{
CharacterDetailsContext = new EditContext(PlayerCharacter);
await LoadCharacterAsync();
}
private async Task LoadCharacterAsync()
{
PlayerCharacter = await PlayerCharacterService.GetPlayerCharacterAsync(ViewBase.CharacterId.Value);
CharacterDetailsContext = new EditContext(PlayerCharacter);
}
When two components with the above code are in the same view, the mentioned error occurs. I tried using the synchronous version "OnInitialized()" and simply discarding the task, but that didn't fix the error.
Is there some other way to call the data so that this issue doesn't occur? Or am I going about this the wrong way?
You've hit a common problem in using async operations in EF - two or more operations trying to use the same context at once.
Take a look at the MS Docs article about EF DbContexts - there's a section further down specific to Blazor. It explains the use of a DbContextFactory and CreateDbContext to create contexts for units of work, i.e. one context per operation, so two async operations each have a separate context.
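In practice that pattern looks something like this (a sketch; the service and entity names are borrowed from your snippets, the rest is illustrative):
// Startup/Program: register a factory instead of a scoped DbContext
// (requires Microsoft.EntityFrameworkCore; CharacterContext is an assumed context class).
services.AddDbContextFactory<CharacterContext>(options =>
    options.UseSqlServer(connectionString));

public class PlayerCharacterService
{
    private readonly IDbContextFactory<CharacterContext> _factory;

    public PlayerCharacterService(IDbContextFactory<CharacterContext> factory) => _factory = factory;

    public async Task<PlayerCharacter> GetPlayerCharacterAsync(int characterId)
    {
        // Each call gets its own context, so two components initializing at the
        // same time never share one DbContext instance.
        using var context = _factory.CreateDbContext();
        return await context.PlayerCharacters.FindAsync(characterId);
    }
}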
Initially, to solve the threading issues, I used a DbContextFactory to create contexts for each operation - however, this resulted in database inconsistency issues across components, and I realised I needed change tracking across components.
Therefore instead, I keep my DbContext as scoped, and I don't create a new context before each operation.
I then adapted my OnInitializedAsync() methods to check if the calls to the database have completed, before making these calls through my injected services. This works really well for my app:
#code {
static Semaphore semaphore;
//code omitted for brevity
protected override async Task OnInitializedAsync()
{
try
{
//First open global semaphore
semaphore = Semaphore.OpenExisting("GlobalSemaphore");
while (!semaphore.WaitOne(TimeSpan.FromTicks(1)))
{
await Task.Delay(TimeSpan.FromSeconds(1));
}
//If while loop is exited or skipped, previous service calls are completed.
ApplicationUsers = await ApplicationUserService.Get();
}
finally
{
try
{
semaphore.Release();
}
catch (Exception ex)
{
Console.WriteLine("ex.Message");
}
}
}
We have an enterprise DB that is replicated through many sites throughout the world. We would like our app to attempt to connect to one of the local sites, and if that site is down we want it to fall back to the enterprise DB. We'd like this behavior on each of our DB operations.
We are using Entity Framework, C#, and SQL Server.
At first I hoped I could just specify a "Failover Partner" in the connection string, but that only works in a mirrored DB environment, which this is not. I also looked into writing a custom IDbExecutionStrategy, but these strategies only let you specify the pattern for retrying a failed DB operation; they don't allow you to change the operation in any way, such as directing it to a new connection.
So, do you know of any good pattern for dealing with this type of operation, other than duplicating retry logic around each of our many DB operations?
Update on 2014-05-14:
I'll elaborate in response to some of the suggestions already made.
I have many places where the code looks like this:
try
{
using(var db = new MyDBContext(ConnectionString))
{
// Database operations here.
// var myList = db.MyTable.Select(...), etc.
}
}
catch(Exception ex)
{
// Log exception here, perhaps rethrow.
}
It was suggested that I have a routine that first checks each of the connection strings and returns the first one that successfully connects. This is reasonable as far as it goes. But some of the errors I'm seeing are timeouts on the operations, where the connection works but the DB has issues that keep it from completing the operation.
What I'm looking for is a pattern I can use to encapsulate the unit of work and say, "Try this on the first database. If it fails for any reason, rollback and try it on the second DB. If that fails, try it on the third, etc. until the operation succeeds or you have no more DBs." I'm pretty sure I can roll my own (and I'll post the result if I do), but I was hoping there might be a known way to approach this.
How about using a Dependency Injection container like Autofac and registering a factory for new context objects there? The factory would hold the logic that first tries to connect to the local DB and, in case of failure, connects to the enterprise DB, then returns a ready DbContext object. The factory would be provided, via the DI container, to all objects that require it; they would use it to create contexts and dispose of them when they are no longer needed.
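A rough sketch of such a factory (the names and the reachability probe are illustrative, not a drop-in implementation):
public class FailoverContextFactory
{
    private readonly string[] _connectionStrings; // local site first, enterprise last

    public FailoverContextFactory(string localConnectionString, string enterpriseConnectionString)
    {
        _connectionStrings = new[] { localConnectionString, enterpriseConnectionString };
    }

    public MyDBContext CreateContext()
    {
        foreach (var connectionString in _connectionStrings)
        {
            var db = new MyDBContext(connectionString);
            try
            {
                db.Database.Connection.Open();   // cheap reachability probe (EF6)
                db.Database.Connection.Close();
                return db;
            }
            catch (Exception)
            {
                db.Dispose();                    // this site is down - try the next one
            }
        }
        throw new InvalidOperationException("No database site is reachable.");
    }
}

// Registered with Autofac, e.g. builder.RegisterType<FailoverContextFactory>().SingleInstance(),
// and injected wherever the code needs to create (and later dispose) a context.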
" We would like our app to attempt to connect to one of the local sites, and if that site is down we want it to fall back to the enterprise DB. We'd like this behavior on each of our DB operations."
If your app is strictly read-only on the DB and data consistency is not absolutely vital to your app/users, then it's just a matter of trying to CONNECT until an operational site has been found. As M.Ali suggested in his remark.
Otherwise, I suggest you stop thinking along these lines immediately because you're just running 90 mph down a dead end street. As Viktor Zychla suggested in his remark.
Here is what I ended up implementing, in broad brush-strokes:
Define delegates called UnitOfWorkMethod that will execute a single unit of work against the database, in a single transaction. Each takes a connection string, and one of them also returns a value:
delegate T UnitOfWorkMethod<out T>(string connectionString);
delegate void UnitOfWorkMethod(string connectionString);
Define a method called ExecuteUOW that takes a unit-of-work method and tries to execute it using the preferred connection string. If that fails, it tries to execute it with the next connection string:
protected T ExecuteUOW<T>(UnitOfWorkMethod<T> method)
{
// GET THE LIST OF CONNECTION STRINGS
IEnumerable<string> connectionStringList = ConnectionStringProvider.GetConnectionStringList();
// WHILE THERE ARE STILL DATABASES TO TRY, AND WE HAVEN'T DEFINITIVELY SUCCEEDED OR FAILED
var uowState = UOWStateEnum.InProcess;
IEnumerator<string> stringIterator = connectionStringList.GetEnumerator();
T returnVal = default(T);
Exception lastException = null;
string connectionString = null;
while ((uowState == UOWStateEnum.InProcess) && stringIterator.MoveNext())
{
try
{
// TRY TO EXECUTE THE UNIT OF WORK AGAINST THE DB.
connectionString = stringIterator.Current;
returnVal = method(connectionString);
uowState = UOWStateEnum.Success;
}
catch (Exception ex)
{
lastException = ex;
// IF IT FAILED BECAUSE OF A TRANSIENT EXCEPTION,
if (TransientChecker.IsTransient(ex))
{
// LOG THE EXCEPTION AND TRY AGAINST ANOTHER DB.
Log.TransientDBException(ex, connectionString);
}
// ELSE
else
{
// CONSIDER THE UOW FAILED.
uowState = UOWStateEnum.Failed;
}
}
}
// LOG THE FAILURE IF WE HAVE NOT SUCCEEDED.
if (uowState != UOWStateEnum.Success)
{
Log.ExceptionDuringDataAccess(lastException);
returnVal = default(T);
}
return returnVal;
}
Finally, for each operation we define our unit-of-work delegate method. Here is an example:
UnitOfWorkMethod uowMethod =
(providerConnectionString =>
{
using (var db = new MyContext(providerConnectionString))
{
// Do my DB commands here. They will roll back if exception thrown.
}
});
ExecuteUOW(uowMethod);
When ExecuteUOW is called, it tries the delegate on each database until it either succeeds or fails on all of them.
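For a unit of work that returns data, the generic overload is used the same way (the Products query here is purely illustrative):
UnitOfWorkMethod<List<Product>> getActiveProducts = providerConnectionString =>
{
    using (var db = new MyContext(providerConnectionString))
    {
        // The whole query runs against whichever DB the current connection string points to.
        return db.Products.Where(p => p.IsActive).ToList();
    }
};

List<Product> products = ExecuteUOW(getActiveProducts);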
I'm going to accept this answer since it fully addresses all of the concerns raised in the original question. However, if anyone provides an answer that is more elegant, more understandable, or corrects flaws in this one, I'll happily accept it instead.
Thanks to all who have responded.
I'm in the process of writing a query manager for a WinForms application that, among other things, needs to be able to deliver real-time search results to the user as they're entering a query (think Google's live results, though obviously in a thick client environment rather than the web). Since the results need to start arriving as the user types, the search will get more and more specific, so I'd like to be able to cancel a query if it's still executing while the user has entered more specific information (since the results would simply be discarded, anyway).
If this were ordinary ADO.NET, I could obviously just use the DbCommand.Cancel function and be done with it, but we're using EF4 for our data access and there doesn't appear to be an obvious way to cancel a query. Additionally, opening System.Data.Entity in Reflector and looking at EntityCommand.Cancel shows a discouragingly empty method body, despite the docs claiming that calling this would pass it on to the provider command's corresponding Cancel function.
I have considered simply letting the existing query run and spinning up a new context to execute the new search (and just disposing of the existing query once it finishes), but I don't like the idea of a single client having a multitude of open database connections running parallel queries when I'm only interested in the results of the most recent one.
All of this is leading me to believe that there's simply no way to cancel an EF query once it's been dispatched to the database, but I'm hoping that someone here might be able to point out something I've overlooked.
TL/DR Version: Is it possible to cancel an EF4 query that's currently executing?
Looks like you have found a bug in EF, but if you report it to MS it will probably be treated as a bug in the documentation. Anyway, I don't like the idea of interacting directly with EntityCommand. Here is my example of how to kill the current query:
var thread = new Thread((param) =>
{
var currentString = param as string;
if (currentString == null)
{
// TODO OMG exception
throw new Exception();
}
AdventureWorks2008R2Entities entities = null;
try // Don't use using because it can cause race condition
{
entities = new AdventureWorks2008R2Entities();
ObjectQuery<Person> query = entities.People
.Include("Password")
.Include("PersonPhone")
.Include("EmailAddress")
.Include("BusinessEntity")
.Include("BusinessEntityContact");
// Improves performance of readonly query where
// objects do not have to be tracked by context
// Edit: But it doesn't work for this query because of includes
// query.MergeOption = MergeOption.NoTracking;
foreach (var record in query
.Where(p => p.LastName.StartsWith(currentString)))
{
// TODO fill some buffer and invoke UI update
}
}
finally
{
if (entities != null)
{
entities.Dispose();
}
}
});
thread.Start("P");
// Just for test
Thread.Sleep(500);
thread.Abort();
It is the result of about 30 minutes of playing with it, so it is probably not something which should be considered a final solution. I'm posting it to at least get some feedback on possible problems caused by this approach. The main points are:
Context is handled inside the thread
Result is not tracked by context
If you kill the thread, the query is terminated and the context is disposed (connection released)
If you kill the thread before you start a new one, you should still be using only a single connection.
I checked in SQL Profiler that the query is started and terminated.
Edit:
Btw. another approach to simply stop the current query is from inside the enumeration:
public IEnumerable<T> ExecuteQuery<T>(IQueryable<T> query)
{
foreach (T record in query)
{
// Handle stop condition somehow
if (ShouldStop())
{
// Once you close enumerator, query is terminated
yield break;
}
yield return record;
}
}
[I am new to ADO.NET and the Entity Framework, so forgive me if this question seems odd.]
In my WPF application a user can switch between different databases at run time. When they do this I want to be able to do a quick check that the database is still available. What I have easily available is the ObjectContext. The test I am performing is getting the count of the total records of a very small table; if it returns results then it passes, and if I get an exception then it fails. I don't like this test, but it seemed the easiest to do with the ObjectContext.
I have tried setting the connection timeout in the connection string and on the ObjectContext, and neither seems to change anything for the first scenario, while the second one is already fast, so it isn't noticeable whether it changes anything.
Scenario One
If the connection was down before the first access, it takes about 30 seconds before it gives me the exception that the underlying provider failed.
Scenario Two
If the database was up when I started the application and I accessed it, and then the connection drops while in use, the test is quick and returns almost instantly.
I want the first scenario described to be as quick as the second one.
Please let me know how best to resolve this, and if there is a better way to test the connectivity to a DB quickly please advise.
There really is no easy or quick way to resolve this. The ConnectionTimeout value is ignored by the Entity Framework. The solution I used is creating a method that checks whether a context is valid by passing in the location you wish to validate and then getting the count from a known, very small table. If this throws an exception, the context is not valid; otherwise it is. Here is some sample code showing this.
public bool IsContextValid(SomeDbLocation location)
{
bool isValid = false;
try
{
var context = GetContext(location);
context.SomeSmallTable.Count();
isValid = true;
}
catch
{
isValid = false;
}
return isValid;
}
You may need to use context.Database.Connection.Open()
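If the slow failure in scenario one is the main pain point, one option (a sketch, and it assumes you can get at the underlying SQL connection string rather than the full EF entity connection string) is to probe the server directly with a short login timeout before touching the ObjectContext:
// Requires System.Data.SqlClient.
public bool CanConnect(string sqlConnectionString)
{
    // "Connect Timeout" governs how long Open() waits for the server, which is
    // where scenario one spends its ~30 seconds.
    var builder = new SqlConnectionStringBuilder(sqlConnectionString) { ConnectTimeout = 3 };

    try
    {
        using (var connection = new SqlConnection(builder.ConnectionString))
        {
            connection.Open();
            return true;
        }
    }
    catch (SqlException)
    {
        return false;
    }
}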