Get int value from database - entity-framework

How can I get an int value from the database?
The table has 4 columns: Id, Author, Like, Dislike.
I want to get the Dislike amount and add 1.
This is what I tried:
var db = new memyContext();
var amountLike = db.Memy.Where(s => s.IdMema == id).select(like);
memy.like=amountLike+1;
I know this is a bad way to do it.
Please help.

I'm not entirely sure what your question is here, but there are a few things that might help.
First, if you're retrieving something that can reasonably have only one match, or you only want one thing in any case, then you should use SingleOrDefault or FirstOrDefault, respectively - not Where. Where is reserved for scenarios where you expect multiple matches, i.e. the result will be a list of objects, not a single object. Since you're querying by an id, you clearly expect just one match. Therefore:
var memy = db.Memy.SingleOrDefault(s => s.IdMema == id);
Second, if you just need to read the value of Like, then you can use Select, but there are two problems with that here. First, Select operates on a sequence, and as already discussed, you want a single object, not a list of objects. In truth, you can sidestep this in a somewhat convoluted way:
var amountLike = db.Memy.Where(s => s.IdMema == id).Select(s => s.Like).SingleOrDefault();
However, this is still flawed, because you not only need to read this value, but also write back to it, which then needs the context of the object it belongs to. As such, your code should actually look like:
var memy = db.Memy.SingleOrDefault(s => s.IdMema == id);
memy.Like++;
In other words, you pull out the instance you want to modify, and then modify the value in place on that instance. I also took the liberty of using the increment operator here, since it makes far more sense that way.
That then only solves part of your problem, as you need to persist this value back to the database as well, of course. That also brings up the side issue of how you're getting your context. Since this is an EF context, it implements IDisposable and should therefore be disposed when you're done with it. That can be achieved simply by calling db.Dispose(), but it's far better to use using instead:
using (var db = new memyContext())
{
// do stuff with db
}
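Putting those pieces together, a minimal sketch of the whole read-modify-save flow might look something like this (the null check and the SaveChanges call are my additions; the Memy/IdMema/Like names are taken from your snippet):

using (var db = new memyContext())
{
    // Load the single entity we want to modify (null if no row matches the id).
    var memy = db.Memy.SingleOrDefault(s => s.IdMema == id);
    if (memy == null)
    {
        return; // or return NotFound(), throw, etc. - whatever fits your app
    }

    // Modify the tracked instance in place...
    memy.Like++;

    // ...and persist the change back to the database.
    db.SaveChanges();
}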
And while we're here, based on the tags of your question, you're using ASP.NET Core, which means that even this is sub-optimal. ASP.NET Core uses DI (dependency injection) heavily, and encourages you to do likewise. An EF context is generally registered as a scoped service, and should therefore be injected where it's needed. I don't have the context of where this code exists, but for illustration purposes, we'll assume it's in a controller:
public class MemyController : Controller
{
    private readonly memyContext _db;

    public MemyController(memyContext db)
    {
        _db = db;
    }

    ...
}
With that, ASP.NET Core will automatically pass in an instance of your context to the constructor, and you do not need to worry about creating the context or disposing of it. It's all handled for you.
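For completeness, the registration typically looks something like the following sketch (assuming EF Core; "DefaultConnection" is just a placeholder for whatever your connection string is called):

// In Startup.ConfigureServices (or on the builder in newer minimal-hosting templates).
services.AddDbContext<memyContext>(options =>
    options.UseSqlServer(Configuration.GetConnectionString("DefaultConnection")));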
Finally, you need to do the actual persistence, but that's where things start to get trickier, as you now most likely need to deal with concurrency. This code could be running simultaneously on multiple different threads, each one querying the database at its current state, incrementing this value, and then attempting to save it back. If you do nothing, one thread will inevitably overwrite the changes of another. For example, let's say we receive three simultaneous "likes" on this object. They all query the object from the database, and let's say the current like count is 0. They each increment that value, making it 1, and then each saves the result back to the database. The end result is that the value will be 1, but that's not correct: three likes were just added.
As such, you'll need to implement a semaphore to essentially gate this logic, allowing only one like operation through at a time for this particular object. That's a bit beyond the scope here, but there's plenty of stuff online about how to achieve that.
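Purely as an illustration (a sketch, not production-ready code), a per-id SemaphoreSlim gate could look something like the code below; the AddLikeAsync method and the _locks dictionary are hypothetical names of mine. Note that an atomic UPDATE statement in the database (SET [Like] = [Like] + 1) is another way to sidestep the read-modify-write race entirely.

// Needs System.Collections.Concurrent, System.Threading and System.Threading.Tasks.
// One gate per meme id, so only one like operation runs at a time for a given object.
// This only helps within a single process; a web farm needs a distributed lock or
// database-level concurrency handling instead.
private static readonly ConcurrentDictionary<int, SemaphoreSlim> _locks =
    new ConcurrentDictionary<int, SemaphoreSlim>();

public async Task AddLikeAsync(int id)
{
    var gate = _locks.GetOrAdd(id, _ => new SemaphoreSlim(1, 1));
    await gate.WaitAsync();
    try
    {
        var memy = _db.Memy.SingleOrDefault(s => s.IdMema == id);
        if (memy == null) return;

        memy.Like++;
        await _db.SaveChangesAsync();
    }
    finally
    {
        gate.Release();
    }
}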

Related

How to persist aggregate/read model from "EventStore" in a database?

Trying to implement Event Sourcing and CQRS for the first time, but got stuck when it came to persisting the aggregates.
This is where I'm at now
I've set up "EventStore" and a stream, "foos"
Connected to it from node-eventstore-client
I subscribe to events with catchup
This is all working fine.
With the help of the eventAppeared event handler function I can build the aggregate, whenever events occur. This is great, but what do I do with it?
Let's say I build an aggregate that is a list of Foos
[
  {
    id: 'some aggregate uuidv5 made from barId and bazId',
    barId: 'qwe',
    bazId: 'rty',
    isActive: true,
    history: [
      {
        id: 'some event uuid',
        data: {
          isActive: true,
        },
        timestamp: 123456788,
        eventType: 'IsActiveUpdated'
      },
      {
        id: 'some event uuid',
        data: {
          barId: 'qwe',
          bazId: 'rty',
        },
        timestamp: 123456789,
        eventType: 'FooCreated'
      }
    ]
  }
]
To follow CQRS I will build the above aggregate within a Read Model, right? But how do I store this aggregate in a database?
I guess a NoSQL database should be fine for this, but I definitely need a db since I will put a gRPC API in front of this and other read models / aggregates.
But how do I actually go from having built the aggregate to persisting it in the db?
I once tried following this tutorial https://blog.insiderattack.net/implementing-event-sourcing-and-cqrs-pattern-with-mongodb-66991e7b72be which was super simple, since you'd use mongodb both as the event store and just create a view for the aggregate and update that one when new events are incoming. It had its flaws and limitations (the aggregation pipeline), which is why I have now turned to "EventStore" for the event store part.
But how to persist the aggregate, which is currently just built and stored in code/memory from events in "EventStore"...?
I feel this may be a silly question but do I have to loop over each item in the array and insert each item in the db table/collection or do you somehow have a way to dump the whole array/aggregate there at once?
What happens after? Do you create a materialized view per aggregate and query against that?
I'm open to picking the best db for this, whether that is postgres/other rdbms, mongodb, cassandra, redis, table storage etc.
Last question. For now I'm just using a single stream "foos", but at this level I expect new events to happen quite frequently (every couple of seconds or so) but as I understand it you'd still persist it and update it using materialized views right?
So given that barId and bazId in combination can be used for grouping events, instead of a single stream I'd think more specialized streams such as foos-barId-bazId would be the way to go, to try and reduce the frequency of incoming new events to a point where recreating materialized views will make sense.
Is there a general rule of thumb saying not to recreate/update/refresh materialized views if the update frequency gets below a certain limit? Then the only other alternative would be querying from a normal table/collection?
Edit:
In the end I'm trying to make a gRPC api that has just 2 rpcs - one for getting a single foo by id and one for getting all foos (with optional field for filtering by status - but that is not so important). The simplified proto would look something like this:
rpc GetFoo(FooRequest) returns (Foo)
rpc GetFoos(FoosRequest) returns (FoosResponse)

message FooRequest {
  string id = 1; // uuid
}

// If the optional status field is not specified, return all foos
message FoosRequest {
  // If this field is specified, only return the Foos whose isActive matches it
  FooStatus status = 1;

  enum FooStatus {
    UNKNOWN = 0;
    ACTIVE = 1;
    INACTIVE = 2;
  }
}

message FoosResponse {
  repeated Foo foos = 1;
}

message Foo {
  string id = 1; // uuid
  string bar_id = 2; // uuid
  string baz_id = 3; // uuid
  bool is_active = 4;
  repeated Event history = 5;
  google.protobuf.Timestamp last_updated = 6;
}

message Event {
  string id = 1; // uuid
  google.protobuf.Any data = 2;
  google.protobuf.Timestamp timestamp = 3;
  string eventType = 4;
}
The incoming events would look something like this:
{
  id: 'some event uuid',
  barId: 'qwe',
  bazId: 'rty',
  timestamp: 123456789,
  eventType: 'FooCreated'
}

{
  id: 'some event uuid',
  isActive: true,
  timestamp: 123456788,
  eventType: 'IsActiveUpdated'
}
As you can see there is no uuid to make it possible to GetFoo(uuid) in the gRPC API, which is why I'll generate a uuidv5 from the barId and bazId, which, combined, will be a valid uuid. I'm generating that in the projection / aggregate you see above.
Also, the GetFoos rpc will either return all foos (if the status field is left undefined), or alternatively it'll return the foos whose isActive matches the status field (if specified).
Yet I can't figure out how to continue from the catchup subscription handler.
I have the events stored in "EventStore" (https://eventstore.com/), and using a catch-up subscription I have built an aggregate/projection with an array of Foos in the form that I want them. But to be able to get a single Foo by id from my gRPC API, I guess I'll need to store this entire aggregate/projection in a database of some sort, so I can connect and fetch the data from the gRPC API? And every time a new event comes in I'll need to add that event to the database too, or how does this work?
I think I've read every resource I can possibly find on the internet, but still I'm missing some key pieces of information to figure this out.
The gRPC part is not so important. It could be REST I guess, but my big question is how to make the aggregated/projected data available to the API service (possibly more APIs will need it as well)? I guess I will need to store the aggregated/projected data with the generated uuid and history fields in a database to be able to fetch it by uuid from the API service, but what database, and how does this storing process work, starting from the catch-up event handler where I build the aggregate?
I know exactly how you feel! This is basically what happened to me when I first tried to do CQRS and ES.
I think you have a couple of gaps in your knowledge which I'm sure you will rapidly plug. You hydrate an aggregate from the event stream as you are doing. That IS your aggregate persisted. The read model is something different. Let me explain...
Your read model is the thing you use to run queries against and to provide data for display to a UI, for example. Your aggregates are not (directly) involved in that. In fact they should be encapsulated, meaning that you can't 'see' their state from the outside, i.e. no getters and setters, with the exception of the aggregate ID, which would have a getter.
This article gives you a helpful overview of how it all fits together: CQRS + Event Sourcing – Step by Step
The idea is that when an aggregate changes state it can only do so via an event it generates. You store that event in the event store. That event is also published so that read models can be updated.
Also, looking at your aggregate, it looks more like a typical read model object or DTO. An aggregate is interested in functionality, not properties. So you would expect to see public void functions for issuing commands to the aggregate, but not public properties like isActive or history.
I hope that makes sense.
EDIT:
Here are some more practical suggestions.
"To follow CQRS I will build the above aggregate within a Read Model, right? "
You do not build aggregates in the read model. They are separate things on separate sides of the CQRS equation. Aggregates are on the command side. Queries are done against read models, which are different from aggregates.
Aggregates have public void functions and no getter or setters (with the exception of the aggregate id). They are encapsulated. They generate events when their state changes as a result of a command being issued. These events are stored in an event store and are used to recover the state of an aggregate. In other words, that is how an aggregate is stored.
The events go on to be published so that event handlers and other processes can react to them and update the read model and/or trigger new cascading commands.
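To make that a bit more concrete, here is a very rough sketch (in C#, purely for illustration; all names are invented and nothing here is tied to a specific framework) of an encapsulated aggregate whose state only changes via the events it raises:

// Encapsulated aggregate: no public state except the id, behaviour instead of setters.
public class Foo
{
    private readonly List<object> _uncommittedEvents = new List<object>();
    private bool _isActive;

    public Foo(Guid id) { Id = id; }

    public Guid Id { get; private set; }

    // A command expressed as behaviour. It validates, then raises an event.
    public void Activate()
    {
        if (_isActive) return;                // business rule: ignore redundant commands
        Raise(new IsActiveUpdated(Id, true)); // state changes only via events
    }

    private void Raise(object @event)
    {
        Apply(@event);                        // mutate in-memory state
        _uncommittedEvents.Add(@event);       // to be appended to the event store later
    }

    // Also used when rehydrating the aggregate by replaying stored events.
    private void Apply(object @event)
    {
        if (@event is IsActiveUpdated e) _isActive = e.IsActive;
    }
}

public class IsActiveUpdated
{
    public IsActiveUpdated(Guid fooId, bool isActive) { FooId = fooId; IsActive = isActive; }
    public Guid FooId { get; }
    public bool IsActive { get; }
}

Rehydrating the aggregate is then just creating an empty instance and replaying its stored events through Apply(), while the uncommitted events are what gets appended to the event store and published.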
"Last question. For now I'm just using a single stream "foos", but at this level I expect new events to happen quite frequently (every couple of seconds or so) but as I understand it you'd still persist it and update it using materialized views right?"
Every couple of seconds is very likely to be fine. I'm more concerned about the "persist and update using materialised views" part. I don't know exactly what you mean by that, but it doesn't sound like you have quite the right idea. Views should be very simple read models - no need for the complex relations you find in an RDBMS - and are therefore highly optimised for fast reads.
There can be a lot of confusion over all the terminology and jargon used in DDD, CQRS and ES. I think in this case the confusion lies in what you think an aggregate is. You mention that you would like to persist your aggregate as a read model. As @Codescribler mentioned, at the sink end of your event stream there isn't a concept of an aggregate. Concretely, in ES, commands are applied to aggregates in your domain by loading the previous events pertaining to that aggregate, rehydrating the aggregate by folding each previous event onto it, and then applying the command, which generates more events to be persisted in the event store.
Downstream, a subscribing process receives all the events in order and builds a read model based on the events and the data contained within them. The confusion here is that this read model, at this end, is not an aggregate per se. It might very well look exactly like your aggregate at the domain end, or it could be a read model that doesn't use all the events and/or all the event data.
For example, you may choose to use every bit of information and build a read model that looks exactly like the aggregate hydrated up to the newest event (likely your source of confusion). You may instead have another process that builds a read model that only tallies a specific type of event. You might even subscribe to multiple streams and "join" them into one big read model.
As for how to store it, this is really up to you. It seems to me like you are taking the events and rebuilding your aggregate plus a history of events in an in-memory structure. This, of course, doesn't scale, which is why you want to store it at rest in a database. I wouldn't use the in-memory structure, since you would need to do a lot of state diffing when you flush to the database. You should modify the database directly in response to each individual event. Ideally, you would also transactionally store the stream position with that modification so you don't process the same event again in the case of a failure.
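As a hedged illustration of that last point (Dapper-style SQL shown here, with completely made-up table names, a hypothetical DeterministicUuid helper, and a FooCreated type assumed to carry BarId/BazId), a projection handler might update the read model row and the checkpoint in one transaction:

// Applies one FooCreated event to the read model and records how far we've read,
// so a crash/restart doesn't apply the same event twice.
public void Handle(FooCreated e, long streamPosition)
{
    using (var tx = _connection.BeginTransaction())
    {
        _connection.Execute(
            @"INSERT INTO foo_read_model (id, bar_id, baz_id, is_active)
              VALUES (@Id, @BarId, @BazId, 1)",
            new { Id = DeterministicUuid(e.BarId, e.BazId), e.BarId, e.BazId },
            tx);

        _connection.Execute(
            "UPDATE projection_checkpoint SET position = @Pos WHERE name = 'foos'",
            new { Pos = streamPosition },
            tx);

        tx.Commit();
    }
}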
Hope this helps a bit.

EF database concurrency

I have two apps: one app is asp.net and another is a windows service running in background.
The windows service running in the background performs some tasks (read and update) on the database while the user can perform other operations on the database through the asp.net app. So I am worried about conflicts. For example, in the windows service I collect some records that satisfy a condition and then I iterate over them, something like:
IQueryable<EntityA> collection = context.EntitiesA.Where(<condition>);
foreach (EntityA entity in collection)
{
// do some stuff
}
So, if the user modifies a record that is used later in the loop iteration, which value for that record does EF take into account? The original one, retrieved when this was performed:
context.EntitiesA.Where(<condition>)
or the new one, modified by the user and now sitting in the database?
As far as I know, during iteration EF takes each record on demand, I mean, one by one, so when reading the next record for the next iteration, does this record correspond to the one collected from:
context.EntitiesA.Where(<condition>)
or to the one located in the database (the one the user has just modified)?
Thanks!
There are a couple of processes that come into play here in terms of how this will work in EF.
Queries are only performed on enumeration (this is sometimes referred to as query materialisation); at that point the whole query will be performed.
Lazy loading only affects navigation properties, which don't appear in your example above. The result set of the Where statement will be pulled down in one go.
So what does this mean in your case:
// Nothing happens here; you are just describing what will happen later.
// To make the query execute here instead, call .ToArray() or similar.
// To prevent further clauses being composed into the resulting SQL, use .AsEnumerable().
IQueryable<EntityA> collection = context.EntitiesA.Where(<condition>);

// When execution first hits this foreach, a
// SELECT {cols} FROM [YourTable] WHERE [YourCondition] will be performed.
foreach (EntityA entity in collection)
{
    // Data here will be from the point in time the foreach started
    // (e.g. if the database has been updated during the enumeration, you will have out-of-date data).
    // do some stuff
}
If you're truly concerned that this can happen then get a list of id's up front and process them individually with a new DbContext for each (or say after each batch of 10). Something like:
IList<int> collection = context.EntitiesA.Where(...).Select(k => k.id).ToList();
foreach (int entityId in collection)
{
    using (Context freshContext = new Context())
    {
        EntityA entity = freshContext.EntitiesA.Find(entityId);
        // do some stuff
        freshContext.SaveChanges();
    }
}
I think the answer to your question is 'it depends'. The problem you are describing is called 'non-repeatable reads' and can be prevented by setting a proper transaction isolation level. But that comes with a cost in performance and potential deadlocks.
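For example, one way to do this is to wrap the read (and any updates) in a System.Transactions scope with a stricter isolation level. A rough sketch, reusing the Context/EntitiesA placeholder names from above (RepeatableRead shown; pick whatever fits your consistency/performance trade-off):

// Requires a reference to System.Transactions.
var options = new TransactionOptions
{
    IsolationLevel = IsolationLevel.RepeatableRead,
    Timeout = TransactionManager.DefaultTimeout
};

using (var scope = new TransactionScope(TransactionScopeOption.Required, options))
using (var context = new Context())
{
    // Rows read here keep their locks until the scope completes, so another writer
    // can't change them mid-loop (but it may block or even deadlock instead).
    var entities = context.EntitiesA.Where(e => /* your condition */ true).ToList();
    foreach (var entity in entities)
    {
        // do some stuff
    }

    context.SaveChanges();
    scope.Complete();
}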
For more details you can read this

Breeze: complex graph returns only 1 collection

I have a physician graph that looks something like this:
The query I use to get data from a WebApi backend looks like this:
var query = new breeze.EntityQuery().from("Physicians")
.expand("ContactInfo")
.expand("ContactInfo.Phones")
.expand("ContactInfo.Addresses")
.expand("PhysicianNotes")
.expand("PhysicianSpecialties")
.where("ContactInfo.LastName", "startsWith", lastInitial).take(5);
(note the ContactInfo is a pseudonym of the People object)
What I find is that if I request ContactInfo.Phones to be expanded, I'll get just phones and no Notes or Specialties. If I comment out the phones, I'll get ContactInfo.Addresses and no other collections. If I comment out ContactInfo along with Phones and Addresses, I'll get Notes only, etc. Essentially, it seems like I can only get one collection at a time.
So, is this a built-in 'don't let the programmer shoot himself in the foot' safeguard, or do I have to enable something?
Or is this graph too complicated? Should I consider a NoSQL object store?
Thanks
You need to put all your expand clauses in a single one like this:
var query = new breeze.EntityQuery().from("Physicians")
.expand("ContactInfo, ContactInfo.Phones, ContactInfo.Addresses, PhysicianNotes, PhysicianSpecialties")
.where("ContactInfo.LastName", "startsWith", lastInitial).take(5);
You can see the documentation here: http://www.breezejs.com/sites/all/apidocs/classes/EntityQuery.html#method_expand
JY told you HOW. But BEWARE of performance consequences ... both on the data tier and over the wire. You can die a miserable death by grabbing too widely and deeply at once.
I saw the take(5) in his sample. That is crucial for restraining a runaway request (something you really must do also on the server). In general, I would reserve extended graph fetches of this kind for queries that pulled a single root entity. If I'm presenting a list for selection and I need data from different parts of the entity graph, I'd use a projection to get exactly what I need to display (assuming, of course, that there is no SQL View readily available for this purpose).
If any of the related items are reference lists (color, status, states, ...), consider bringing them into cache separately in a preparation step. Don't include them in the expand; Breeze will connect them on the client to your queried entities automatically.
Finally, as a matter of syntax, you don't have to repeat the name of a segment. When you write "ContactInfo.Phones", you get both ContactInfos and Phones so you don't need to specify "ContactInfo" by itself.

Does EF caching work differently for SQL Server CE 3.5?

I have been developing some single-user desktop apps using Entity Framework and SQL Server CE 3.5. I thought I had read somewhere that once records are in an EF cache for one context, if they are deleted using a different context, they are not removed from the cache for the first context, even when a new query is executed. Hence, I've been writing really inefficient and obfuscatory code so I can dispose the context and instantiate a new one whenever another method modifies the database using its own context.
I recently discovered some code where I had not re-instantiated the first context under these conditions, but it worked anyway. I wrote a simple test method to see what was going on:
using (UnitsDefinitionEntities context1 = new UnitsDefinitionEntities())
{
    List<RealmDef> rdl1 = (from RealmDef rd in context1.RealmDefs
                           select rd).ToList();

    RealmDef rd1 = RealmDef.CreateRealmDef(100, "TestRealm1", MeasurementSystem.Unknown, 0);
    context1.RealmDefs.AddObject(rd1);
    context1.SaveChanges();
    int rd1ID = rd1.RealmID;

    using (UnitsDefinitionEntities context2 = new UnitsDefinitionEntities())
    {
        RealmDef rd2 = (from RealmDef r in context2.RealmDefs
                        where r.RealmID == rd1ID
                        select r).Single();
        context2.RealmDefs.DeleteObject(rd2);
        context2.SaveChanges();
        rd2 = null;
    }

    rdl1 = (from RealmDef rd in context1.RealmDefs select rd).ToList();
Setting a breakpoint at the last line I was amazed to find that the added and deleted entity was in fact not returned by the second query on the first context!
I see several possible explanations:
1. I am totally mistaken in my understanding that the cached records are not removed upon requerying.
2. EF is capricious in its caching and it's a matter of luck.
3. Caching has changed in EF 4.1.
4. The issue does not arise when the two contexts are instantiated in the same process.
5. Caching works differently for SQL CE 3.5 than for other versions of SQL Server.
I suspect the answer may be one of the last two options. I would really rather not have to deal with all the hassles in constantly re-instantiating contexts for single-user desktop apps if I don't have to do so.
Can I rely on this discovered behavior for single-user desktop apps using SQL CE (3.5 and 4)?
When you run the 2nd query on the ObjectSet, it requeries the database, which is why it reflects the change made by your 2nd context. Before we go too far into this, are you sure you want to have 2 contexts like you're explaining? Contexts should be short lived, so it might be better if you're caching your list in memory or doing something else of that nature.
That being said, you can access the local store by calling ObjectStateManager.GetObjectStateEntries and viewing what is in the store there. However, what you're probably looking for is the .Local storage that's provided by DbSets in EF 4.2 and beyond. See this blog post for more information about that.
Judging by your class names, it looks like you're using an edmx, so you'll need to make some changes to your file to have your context inherit from DbContext instead of ObjectContext. This post can show you how.
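For what it's worth, once you are on the DbContext API, peeking at what a context is tracking locally is a one-liner; a small sketch reusing the RealmDefs set from your example:

// Local exposes the entities this context is currently tracking in memory;
// enumerating it does not send a query to the database.
var trackedRealms = context1.RealmDefs.Local.ToList();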
Apparently Explanation #1 was closer to the fact. Inserting the following statement at the end of the example:
var cached = context1.ObjectStateManager.GetObjectStateEntries(System.Data.EntityState.Unchanged);
revealed that the record was in fact still in the cache. Mark Oreta was essentially correct in that the database is actually re-queried in the above example.
However, navigational properties apparently behave differently, e.g.:
RealmDef distance = (from RealmDef rd in context1.RealmDefs
                     where rd.Name == "Distance"
                     select rd).Single();

SystemDef metric = (from SystemDef sd in context1.SystemDefs
                    where sd.Name == "Metric"
                    select sd).Single();

RealmSystem rs1 = (from RealmSystem rs in distance.RealmSystems
                   where rs.SystemID == metric.SystemID
                   select rs).Single();

UnitDef ud1 = UnitDef.CreateUnitDef(distance.RealmID, metric.SystemID, 100, "testunit");
rs1.UnitDefs.Add(ud1);
context1.SaveChanges();

using (UnitsDefinitionEntities context2 = new UnitsDefinitionEntities())
{
    UnitDef ud2 = (from UnitDef ud in context2.UnitDefs
                   where ud.Name == "testunit"
                   select ud).Single();
    context2.UnitDefs.DeleteObject(ud2);
    context2.SaveChanges();
}

udList = (from UnitDef ud in rs1.UnitDefs select ud).ToList();
In this case, breaking after the last statement reveals that the last query returns the deleted entry from the cache. This was my source of confusion.
I think I now have a better understanding of what Julia Lerman meant by "Query the model, not the database." As I understand it, in the previous example I was querying the database. In this case I am querying the model. Querying the database in the previous situation happened to do what I wanted, whereas in the latter situation querying the model would not have the desired effect. (This is clearly a problem with my understanding, not with Julia's advice.)

best practice when updating records using openJPA

I am wondering what the best practice would be for updating a record using JPA. I have currently devised my own pattern, but I suspect it is by no means the best practice. What I do is essentially look to see if the record is in the db; if I don't find it, I call the entityManager.persist(Object<T>) method, and if it does exist I call the entityManager.merge(Object<T>) method.
The reason I ask is that I found out that the merge method already looks to see if the record is in the database; if it is not in the db, it proceeds to add it, and if it is, it makes the necessary changes. Also, do you need to nest the merge call in getTransaction().begin() and getTransaction().commit()? Here is what I have so far...
try {
    launchRet = emf.find(QuickLaunch.class, launch.getQuickLaunchId());
    if (launchRet != null) {
        launchRet = emf.merge(launch);
    } else {
        emf.getTransaction().begin();
        emf.persist(launch);
        emf.getTransaction().commit();
    }
}
If the entity you're trying to save already has an ID, then it must exist in the database. If it doesn't exist, you probably don't want to blindly recreate it, because it means that someone else has deleted the entity, and updating it doesn't make much sense.
The merge() method persists an entity that is not persistent yet (doesn't have an ID or version), and updates the entity if it is persistent. You thus don't need to do anything other than calling merge() (and returning the value returned by this call to merge()).
A transaction is a functional, atomic unit of work. It should be demarcated at a higher level (in the service layer). For example, transferring money from one account to another needs both account updates to be done in the same transaction, to make sure both changes either succeed or fail together. Removing money from one account and failing to add it to the other would be a major bug.