I have a repository:
public ObservableCollection<ProjectExpenseBO> GetProjectExpenses()
{
    // Get by query
    IQueryable<ProjectExpenseBO> projectExpenseQuery =
        from p in _service.project_expense
        from e in _service.vw_employee
        where p.employee_id == e.employee_id
        select new ProjectExpenseBO()
        {
            ProjectExpenseID = p.project_expense_id
            , EmployeeID = p.employee_id
            , ProjectNumber = p.project_number
            , PurchaseTypeID = p.purchase_type_id
            , BuyerEmployeeID = p.buyer_employee_id
            , PurchaseOrderNumber = p.purchase_order_number
            , DeliveryDate = p.delivery_date
            , EmployeeName = e.first_name + " " + e.last_name
        };

    ObservableCollection<ProjectExpenseBO> projectExpenseCollection = new ObservableCollection<ProjectExpenseBO>(projectExpenseQuery);
    return projectExpenseCollection;
}
I am wondering if it is better to return an IList or IEnumerable (instead of an ObservableCollection) from my repository, since my viewmodel may end up putting the result in either an ObservableCollection or a List, depending on my need. For instance, I may return data from the repository above to a read-only datagrid or a dropdown list, or I may want the same data in an editable datagrid.
I am thinking (and could be wrong) that I want my repository to return a barebones list and then convert it to whatever suits my needs in the viewmodel. Is my thinking correct? Here is what I was thinking:
public IEnumerable<ProjectExpenseBO> GetProjectExpenses()
{
    // Get by query
    IQueryable<ProjectExpenseBO> projectExpenseQuery =
        from p in _service.project_expense
        from e in _service.vw_employee
        where p.employee_id == e.employee_id
        select new ProjectExpenseBO()
        {
            ProjectExpenseID = p.project_expense_id
            , EmployeeID = p.employee_id
            , ProjectNumber = p.project_number
            , PurchaseTypeID = p.purchase_type_id
            , BuyerEmployeeID = p.buyer_employee_id
            , PurchaseOrderNumber = p.purchase_order_number
            , DeliveryDate = p.delivery_date
            , EmployeeName = e.first_name + " " + e.last_name
        };

    return projectExpenseQuery;
}
Thanks.
I would personally return an IEnumerable<T> or IList<T> instead of an ObservableCollection<T>. There are many times when you may not need the full behavior of ObservableCollection<T>, in which case you're using more resources than necessary.
I like your second implementation - however, be aware that there is one potential downside. Some people don't like returning a deferred-execution IEnumerable<T> from a repository, since execution is deferred until the result is used. While deferral has the upside of potentially saving resources, especially if you end up not using some or all of the enumerable, it can also cause an exception to occur later (when the IEnumerable<T> is actually enumerated) instead of within the repository itself.
If this bothers you, you could just force the execution to occur (ie: call ToList() or similar).
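For example, a minimal sketch of forcing execution inside the repository, based on the question's own query (only a few of the properties are repeated here):

public IEnumerable<ProjectExpenseBO> GetProjectExpenses()
{
    IQueryable<ProjectExpenseBO> projectExpenseQuery =
        from p in _service.project_expense
        from e in _service.vw_employee
        where p.employee_id == e.employee_id
        select new ProjectExpenseBO()
        {
            ProjectExpenseID = p.project_expense_id
            , EmployeeID = p.employee_id
            , EmployeeName = e.first_name + " " + e.last_name
            // ... remaining properties as in the question
        };

    // ToList() executes the query here, so any data-access exception
    // surfaces inside the repository rather than later in the viewmodel.
    return projectExpenseQuery.ToList();
}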
Personally I'd return an IEnumerable<T> if you're simply using it to populate the UI. No need to return a List<T> if you aren't going to be adding/removing items from it.
I have a LINQ query
var age = new int[] { 1, 2, 3 };
dbContext.TA.Where(x => age.Contains(x.age)).ToList();
An online article (point #11 in https://medium.com/swlh/entity-framework-common-performance-mistakes-cdb8861cf0e7) mentions that this is not good practice, because it creates many execution plans on the SQL server.
In this case, how should the LINQ be revised so that I can do the same thing but minimize the number of execution plans generated?
(Note that I have no intention of converting it into a stored procedure and passing/joining with a UDT, as that again requires too much effort.)
That article offers some good things to keep in mind when writing expressions for EF. As a general rule, that example is something to keep in mind, not a hard "never do this" kind of rule. It is a warning about writing queries that allow multi-select criteria, and about avoiding that when possible, as it will be on the more expensive side.
In your example with something like "Ages", a hard-coded list of values does not cause a problem because every execution uses the same list (until the app is re-compiled with a new list, or you have code that changes the list for some reason). An example where it can be perfectly valid to use this is with something like statuses, where you have a status enum. If there is a small number of valid statuses that a record can have, then declaring a common array of valid statuses to use in a Contains clause is fine:
public void DeleteEnquiry(int enquiryId)
{
    var allowedStatuses = new[] { Statuses.Pending, Statuses.InProgress, Statuses.UnderReview };
    var enquiry = context.Enquiries
        .Where(x => x.EnquiryId == enquiryId && allowedStatuses.Contains(x.Status))
        .SingleOrDefault();
    try
    {
        if (enquiry != null)
        {
            enquiry.IsActive = false;
            context.SaveChanges();
        }
        else
        {
            // Enquiry not found or invalid status.
        }
    }
    catch (Exception ex) { /* handle exception */ }
}
The statuses in the list aren't going to change, so the execution plan is static for that context.
The problem is where you accept something like a parameter whose criteria include a list for a Contains clause.
It is highly unlikely that someone would want to load data where a user selects ages "2, 4, and 6"; more likely they would want something like ">= 2", "<= 6", or a range such as "2 to 6". So rather than creating a method that accepts a list of acceptable ages:
public IEnumerable<Children> GetByAges(int[] ages)
{
    return _dbContext.Children.Where(x => ages.Contains(x.Age)).ToList();
}
You would probably be better served with ranging the parameters:
private IEnumerable<Children> GetByAgeRange(int? minAge = null, int? maxAge = null)
{
    var query = _dbContext.Children.AsQueryable();
    if (minAge.HasValue)
        query = query.Where(x => x.Age >= minAge.Value);
    if (maxAge.HasValue)
        query = query.Where(x => x.Age <= maxAge.Value);
    return query.ToList();
}

private IEnumerable<Children> GetByAge(int age)
{
    return _dbContext.Children.Where(x => x.Age == age).ToList();
}
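Callers (within the same class, since the methods above are private) can then express the earlier criteria through the range parameters; each query shape stays fixed, so its plan can be reused. A hypothetical usage sketch:

// Each call produces a parameterized query with a stable shape.
var twoAndUp    = GetByAgeRange(minAge: 2);              // age >= 2
var sixAndUnder = GetByAgeRange(maxAge: 6);              // age <= 6
var twoToSix    = GetByAgeRange(minAge: 2, maxAge: 6);   // 2 <= age <= 6
var exactlyFour = GetByAge(4);                           // age == 4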
I have at least 100,000 rows in a Job_Details table and I'm using Entity Framework to map the data.
This is the code:
public GetJobsResponse GetImportJobs()
{
    GetJobsResponse getJobResponse = new GetJobsResponse();
    List<JobBO> lstJobs = new List<JobBO>();
    using (NSEXIM_V2Entities dbContext = new NSEXIM_V2Entities())
    {
        var lstJob = dbContext.Job_Details.ToList();
        foreach (var dbJob in lstJob.Where(ie => ie.IMP_EXP == "I" && ie.Job_No != null))
        {
            JobBO job = MapBEJobforSearchObj(dbJob);
            lstJobs.Add(job);
        }
    }
    getJobResponse.Jobs = lstJobs;
    return getJobResponse;
}
I found that this line takes about 2-3 minutes to execute:
var lstJob = dbContext.Job_Details.ToList();
How can I solve this issue?
To outline the performance issues with your example (see the inline comments):
public GetJobsResponse GetImportJobs()
{
    GetJobsResponse getJobResponse = new GetJobsResponse();
    List<JobBO> lstJobs = new List<JobBO>();
    using (NSEXIM_V2Entities dbContext = new NSEXIM_V2Entities())
    {
        // Loads *ALL* entities into memory. This effectively takes all fields for all rows
        // across from the database to your app server. (Even though you don't want it all.)
        var lstJob = dbContext.Job_Details.ToList();

        // Filters from the data in memory.
        foreach (var dbJob in lstJob.Where(ie => ie.IMP_EXP == "I" && ie.Job_No != null))
        {
            // Maps the entity to a DTO and adds it to the return collection.
            JobBO job = MapBEJobforSearchObj(dbJob);
            lstJobs.Add(job);
        }
    }
    // Returns the DTOs.
    getJobResponse.Jobs = lstJobs;
    return getJobResponse;
}
First: pass your Where clause to EF so it is sent to the DB server, rather than loading all entities into memory.
public GetJobsResponse GetImportJobs()
{
    GetJobsResponse getJobResponse = new GetJobsResponse();
    using (NSEXIM_V2Entities dbContext = new NSEXIM_V2Entities())
    {
        // The Where expression is passed to the DB server to be executed.
        // Note: no .ToList() yet, which leaves this as IQueryable.
        var jobs = dbContext.Job_Details.Where(ie => ie.IMP_EXP == "I" && ie.Job_No != null);
Next, use a Select to project into your DTOs. Typically these won't contain as much data as the main entity, and as long as you're working with IQueryable you can pull in related data as needed. Again, this will be sent to the DB server, so you cannot use functions like MapBEJobforSearchObj here, because the DB server does not know that function. You can select into a simple DTO object, or into an anonymous type to pass to a dynamic mapper.
        var dtos = jobs.Select(ie => new JobBO
        {
            JobId = ie.JobId,
            // ... populate remaining DTO fields here.
        }).ToList();

        getJobResponse.Jobs = dtos;
    }
    return getJobResponse;
}
Moving the .ToList() to the end materializes the data into your JobBO DTOs/ViewModels, pulling just enough data from the server to populate the desired rows with just the desired fields.
In cases where you may have a large amount of data, you should also consider supporting server-side pagination, where you pass a page number and page size and then use .Skip() and .Take() to load a single page of entries at a time.
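A rough sketch of that pagination idea, reusing the names from the example above (the page parameters and the JobId/Job_No fields are illustrative):

public GetJobsResponse GetImportJobs(int pageNumber, int pageSize)
{
    var getJobResponse = new GetJobsResponse();
    using (var dbContext = new NSEXIM_V2Entities())
    {
        getJobResponse.Jobs = dbContext.Job_Details
            .Where(ie => ie.IMP_EXP == "I" && ie.Job_No != null)
            .OrderBy(ie => ie.Job_No)            // Skip/Take needs a stable ordering
            .Select(ie => new JobBO
            {
                JobId = ie.JobId,
                // ... populate remaining DTO fields here.
            })
            .Skip((pageNumber - 1) * pageSize)   // skip the earlier pages
            .Take(pageSize)                      // take just one page of rows
            .ToList();
    }
    return getJobResponse;
}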
I am searching for an API function corresponding to the "Find in all diagrams" function (Ctrl+U) in Enterprise Architect.
The Element class provides a Diagrams attribute which should return a collection of diagrams, but in my case it always returns an empty list. Am I going about this the wrong way?
EDIT:
I would be happy with any function that returns a collection of the diagrams which include the element.
THE SOLUTION:
public List<EA.Diagram> getAllDiagramsOfElement(EA.Element element)
{
    String xmlQueryResult = repository.SQLQuery(
        "select dobj1.Diagram_ID " +
        "from t_diagramobjects dobj1 " +
        "where dobj1.Object_ID = " + element.ElementID + ";");

    XmlDocument xml = new XmlDocument();
    xml.LoadXml(xmlQueryResult);
    XmlNodeList xnList = xml.SelectNodes("/EADATA/Dataset_0/Data/Row");

    List<EA.Diagram> result = new List<EA.Diagram>();
    foreach (XmlNode xn in xnList)
    {
        result.Add(repository.GetDiagramByID(Convert.ToInt32(xn["Diagram_ID"].InnerText)));
    }
    return result;
}
With kind regards
MK
You might have to use a query. Try this:
select * from t_diagramobjects dobj1, t_diagramobjects dobj2 where dobj1.object_id=dobj2.object_id and dobj1.diagram_id!=dobj2.diagram_id;
In case you would like to stay with the API, you have to walk the packages in the model tree recursively, adding diagrams to a collection (or a Dictionary object in VBScript).
Then you look at the DiagramObjects of each diagram; the DiagramObjects relate to elements (remember, an element may be represented on more than one diagram).
Another approach could be to use the Repository.SQLQuery method, which should return an XML-formatted result set (I haven't tested that yet). But you'd need MSXML present on the machine to parse it (and keep up with the versions).
Generally, if you want to scan the whole model and you don't need parent-child relationships, SQL should be the better fit, and vice versa.
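If you do go the API-only route, a rough, untested C# sketch of that recursive walk might look like this (method and property names follow the standard EA automation interface):

// Collects all diagrams in the model that contain the given element, using only the API.
public List<EA.Diagram> GetDiagramsContainingElement(EA.Repository repository, EA.Element element)
{
    var result = new List<EA.Diagram>();
    foreach (EA.Package model in repository.Models)
    {
        WalkPackage(model, element.ElementID, result);
    }
    return result;
}

private void WalkPackage(EA.Package package, int elementId, List<EA.Diagram> result)
{
    foreach (EA.Diagram diagram in package.Diagrams)
    {
        foreach (EA.DiagramObject diagramObject in diagram.DiagramObjects)
        {
            if (diagramObject.ElementID == elementId)
            {
                result.Add(diagram);
                break; // element found on this diagram; no need to check further objects
            }
        }
    }
    // Note: diagrams owned directly by elements (Element.Diagrams) are not walked here.
    foreach (EA.Package child in package.Packages)
    {
        WalkPackage(child, elementId, result);
    }
}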
I have this same function in my Enterprise Architect Add-in Framework, implemented in the class ElementWrapper:
//returns a list of diagrams that somehow use this element.
public override HashSet<T> getUsingDiagrams<T>()
{
    string sqlGetDiagrams = @"select distinct d.Diagram_ID from t_DiagramObjects d
                              where d.Object_ID = " + this.wrappedElement.ElementID;
    List<UML.Diagrams.Diagram> allDiagrams = this.model.getDiagramsByQuery(sqlGetDiagrams).Cast<UML.Diagrams.Diagram>().ToList();
    HashSet<T> returnedDiagrams = new HashSet<T>();
    foreach (UML.Diagrams.Diagram diagram in allDiagrams)
    {
        if (diagram is T)
        {
            T typedDiagram = (T)diagram;
            if (!returnedDiagrams.Contains(typedDiagram))
            {
                returnedDiagrams.Add(typedDiagram);
            }
        }
    }
    return returnedDiagrams;
}
The function getDiagramsByQuery in the Model class looks like this
//returns a list of diagrams according to the given query.
//the given query should return a list of diagram id's
public List<Diagram> getDiagramsByQuery(string sqlGetDiagrams)
{
    // get the nodes with the name "Diagram_ID"
    XmlDocument xmlDiagramIDs = this.SQLQuery(sqlGetDiagrams);
    XmlNodeList diagramIDNodes = xmlDiagramIDs.SelectNodes(formatXPath("//Diagram_ID"));

    List<Diagram> diagrams = new List<Diagram>();
    foreach (XmlNode diagramIDNode in diagramIDNodes)
    {
        int diagramID;
        if (int.TryParse(diagramIDNode.InnerText, out diagramID))
        {
            Diagram diagram = this.getDiagramByID(diagramID);
            diagrams.Add(diagram);
        }
    }
    return diagrams;
}
Ok, I must be working too hard because I can't get my head around what it takes to use the Entity Framework correctly.
Here is what I am trying to do:
I have two tables: HeaderTable and DetailTable. The DetailTable will have 1 to many records for each row in HeaderTable. In my EDM I set up a relationship between these two tables to reflect this.
Since there is now a relationship set up between these tables, I thought that by querying all the records in HeaderTable, I would be able to access the DetailTable collection created by the EDM (I can see the property when querying, but it's null).
Here is my query (this is a Silverlight app, so I am using the DomainContext on the client):
// _myContext is instantiated with class scope
EntityQuery<Project> query = _myContext.GetHeadersQuery();
_myContext.Load<Project>(query);
Since these calls are asynchronous, I check the values after the callback has completed. When checking the value of _myContext.HeaderTable, I have all the rows expected. However, the DetailTable property within each HeaderTable entity is empty.
foreach (var h in _myContext.HeaderTable) // Has records
{
    foreach (var d in h.DetailTable) // No records
    {
        string test = d.Description;
    }
}
I'm assuming my query to return all HeaderTable objects needs to be modified to somehow return the DetailTable collection for each HeaderTable row. I just don't understand how this modeling stuff works yet.
What am I doing wrong? Any help is greatly appreciated. If you need more information, just let me know. I will be happy to provide anything you need.
Thanks,
-Scott
What you're probably missing is the Include(), which I think is out of scope of the code you provided.
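Since this is WCF RIA Services, here is a hedged sketch of what the server-side query might look like (entity and set names are taken loosely from the question and will need adjusting; the DetailTable association typically also needs an [Include] attribute on the navigation property in the entity metadata so it is serialized to the Silverlight client):

// Hypothetical server-side domain service method; names are illustrative.
public IQueryable<Project> GetHeaders()
{
    // Eager-load the related detail rows so they are populated on the client.
    return this.ObjectContext.Projects.Include("DetailTable");
}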
Check out this cool video; it explained everything about EDM and Linq-to-Entities to me:
http://msdn.microsoft.com/en-us/data/ff628210.aspx
In case you can't view the video now, check out this piece of code I have based on those videos (sorry it's not Silverlight, but it's the same basic idea, I hope).
The retrieval:
public List<Story> GetAllStories()
{
    return context.Stories.Include("User").Include("StoryComments").Where(s => s.HostID == CurrentHost.ID).ToList();
}
Loading the data:
private void LoadAllStories()
{
    lvwStories.DataSource = TEContext.GetAllStories();
    lvwStories.DataBind();
}
Using the data:
protected void lvwStories_ItemDataBound(object sender, ListViewItemEventArgs e)
{
    if (e.Item.ItemType == ListViewItemType.DataItem)
    {
        Story story = e.Item.DataItem as Story;

        // blah blah blah....
        hlStory.Text = story.Title;
        hlStory.NavigateUrl = "StoryView.aspx?id=" + story.ID;
        lblStoryCommentCount.Text = "(" + story.StoryComments.Count.ToString() + " comment" + (story.StoryComments.Count > 1 ? "s" : "") + ")";
        lblStoryBody.Text = story.Body;
        lblStoryUser.Text = story.User.Username;
        lblStoryDTS.Text = story.AddedDTS.ToShortTimeString();
    }
}
I'm currently inserting/updating fields like this (if there's a better way, please say so - we're always learning)
public void UpdateChallengeAnswers(List<ChallengeAnswerInfo> model, Decimal field_id, Decimal loggedUserId)
{
    JK_ChallengeAnswers o;
    foreach (ChallengeAnswerInfo a in model)
    {
        o = this.FindChallengeAnswerById(a.ChallengeAnswerId);
        if (o == null) o = new JK_ChallengeAnswers();

        o.answer = FilterString(a.Answer);
        o.correct = a.Correct;
        o.link_text = "";
        o.link_url = "";
        o.position = FilterInt(a.Position);
        o.updated_user = loggedUserId;
        o.updated_date = DateTime.UtcNow;

        if (o.challenge_id == 0)
        {
            // New record
            o.challenge_id = field_id; // FK
            o.created_user = loggedUserId;
            o.created_date = DateTime.UtcNow;
            db.JK_ChallengeAnswers.AddObject(o);
        }
        else
        {
            // Update record
            this.Save();
        }
    }
    this.Save(); // Commit changes
}
As you can see, this.Save() (which invokes db.SaveChanges()) is called twice.
When adding, we place the new object into a placeholder with the AddObject method; in other words, the new object is not committed right away and we can stage as many objects as we want.
But when it's an update, I currently Save before moving on to the next object. Is there a method I can use so that, let's say:
        if (o.challenge_id == 0)
        {
            // New record
            o.challenge_id = field_id;
            o.created_user = loggedUserId;
            o.created_date = DateTime.UtcNow;
            db.JK_ChallengeAnswers.AddObject(o);
        }
        else
        {
            // Update record
            db.JK_ChallengeAnswers.RetainObject(o);
        }
    }
    this.Save(); // Only save once when all objects are ready to commit
}
So if there are 5 updates, I don't need to save into the database 5 times, but only once at the end.
Thank you.
Well, if you have an object that is attached to the graph and you modify values on it, the entity is marked as Modified.
If you simply call .AddObject, the entity is marked as Added.
Nothing has hit the database yet - this is only staging changes in the graph.
Then, when you execute SaveChanges(), EF translates the entries in the ObjectStateManager into the relevant store queries.
Your code looks a bit strange. Have you debugged through it (and run a SQL trace) to see what is actually getting executed? I can't see why you need that first .Save: in line with the points above, since you're modifying the entities in the first few lines of the method, an UPDATE statement will most likely always get executed, regardless of the ID.
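To illustrate, here is a sketch of the question's method with a single Save at the end (based only on the code shown, not tested against your model):

public void UpdateChallengeAnswers(List<ChallengeAnswerInfo> model, Decimal field_id, Decimal loggedUserId)
{
    foreach (ChallengeAnswerInfo a in model)
    {
        JK_ChallengeAnswers o = this.FindChallengeAnswerById(a.ChallengeAnswerId);
        bool isNew = (o == null);
        if (isNew) o = new JK_ChallengeAnswers();

        o.answer = FilterString(a.Answer);
        o.correct = a.Correct;
        o.link_text = "";
        o.link_url = "";
        o.position = FilterInt(a.Position);
        o.updated_user = loggedUserId;
        o.updated_date = DateTime.UtcNow;

        if (isNew)
        {
            // New record: stage it as Added.
            o.challenge_id = field_id;
            o.created_user = loggedUserId;
            o.created_date = DateTime.UtcNow;
            db.JK_ChallengeAnswers.AddObject(o);
        }
        // Existing records are already attached and tracked; changing their
        // properties marks them Modified, so there is nothing extra to call.
    }

    this.Save(); // one SaveChanges() commits all Added and Modified entities
}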
I suggest you refactor your code to handle new/modified entities in separate methods (ideally via a repository).
Taken from the Employee Info Starter Kit, you can consider the code snippet below:
public void UpdateEmployee(Employee updatedEmployee)
{
    // Attaching and making the entity ready for persistence
    if (updatedEmployee.EntityState == EntityState.Detached)
        _DatabaseContext.Employees.Attach(updatedEmployee);

    _DatabaseContext.ObjectStateManager.ChangeObjectState(updatedEmployee, System.Data.EntityState.Modified);
    _DatabaseContext.SaveChanges();
}