I am learning about POCOs, and while I like a lot of the concepts, I think I am not quite getting it.
I have a problem like the following:
I have one sproc which returns multiple columns, and the set of columns is built dynamically inside the sproc based on certain conditions.
E.g. based on the input, one of the result sets below should be returned:
1)
Id -- Name  -- Age
1  -- Peter -- 25
2  -- Janit -- 53

2)
Id -- Provider Name -- Provider Type
5  -- C. A          -- hospital
I can't create a class for these dynamic columns, therefore I fetch records as dynamic objects with PetaPoco:
List<dynamic> list = db.Fetch<dynamic>(sql);
The problem occurs when somebody else calls the function with different parameters: the result keeps the column information from the first call, paired with the values of the desired one.
Id -- Name -- Age
5  -- C. A -- hospital
This discrepancy causes a runtime error.
Can you please help me resolve this issue? Or how can I define a class for this kind of scenario?
I hope I explained my problem in enough detail.
You can define a POCO class just to grab the results; I use plenty of them. PetaPoco will fill only the fields that the SP returns.
Create a POCO with all the columns you expect to be returned from the dynamic SP, in a manner like the following:
public class PocoName
{
    public int Id { get; set; }
    public string Name { get; set; }
    public int Age { get; set; }
    public string ProviderName { get; set; }
    public string ProviderType { get; set; }
    ...
}
Then call the function as follows:
List<PocoName> list = db.Fetch<PocoName>(sql);
Every time you run the sproc with different input parameters, only the columns returned by the sproc will be populated within your POCO.
Though it's a three-year-old post, I came across a similar issue recently. Hopefully this workaround will help someone who encounters it in the future.
PetaPoco keeps object definitions cached, so any db.Fetch/db.Query with different parameters against the same SP will return the first call's object definition when we are expecting a different dynamic object list as the result. I tried to bust the cache but couldn't find a way. Maybe there's an option or config for it, but I haven't dug that far. If anyone knows a way to bust the cache, please do share.
Since it is not practical to define POCO classes to fit the dynamic nature of this case (as mentioned earlier by the author), I tried returning a DataTable and then converting it to a list of dynamic objects, which works as expected; a sketch of that conversion follows.
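This is a minimal sketch of the conversion, not the poster's exact code; it assumes SQL Server (System.Data.SqlClient), and the class and method names are made up for the example:

using System.Collections.Generic;
using System.Data;
using System.Data.SqlClient;
using System.Dynamic;

public static class DynamicSprocRunner
{
    // Fill a DataTable so the column schema comes from this call's own result
    // set, then project each row into an ExpandoObject.
    public static List<dynamic> FetchDynamic(string connectionString, string sprocName)
    {
        var table = new DataTable();
        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand(sprocName, connection))
        {
            command.CommandType = CommandType.StoredProcedure;
            using (var adapter = new SqlDataAdapter(command))
            {
                adapter.Fill(table); // opens and closes the connection itself
            }
        }

        var rows = new List<dynamic>();
        foreach (DataRow row in table.Rows)
        {
            IDictionary<string, object> expando = new ExpandoObject();
            foreach (DataColumn column in table.Columns)
                expando[column.ColumnName] = row[column];
            rows.Add(expando);
        }
        return rows;
    }
}

Because the DataTable is rebuilt on every call, each result carries its own column set, so the cached-definition problem never comes up.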
Thanks
I have an application that I developed standalone and now am trying to integrate into a much larger model. Currently, on the server side, there are 11 tables and an average of three navigation properties per table. This is working well and stable.
The larger model has 55 entities and 180+ relationships and includes most of my model (less the relationships to tables in the larger model). Once integrated, a very strange thing happens: the server sends the same data, the same number of entities are returned, but the exportEntities function returns a string of about 150KB (rather than the 1.48 MB it was returning before) and all queries show a tenth of the data they were showing before.
I followed the troubleshooting information on the Breeze website. I looked through the Breeze metadata, and the entities and relationships seem defined correctly. I looked at the data that was returned, and 9 out of 10 entities did not appear as objects, but as a function: function (){return e.refMap[t]} which, when I expand it, has an 'arguments' property: Exception: TypeError: 'caller', 'callee', and 'arguments' properties may not be accessed on strict mode functions or the arguments objects for calls to them.
For reference, here are the two entities involved in the breaking change.
The Repayments Entity
public class Repayment
{
    [Key, Column(Order = 0)]
    public int DistrictId { get; set; }

    [Key, Column(Order = 1)]
    public int RepaymentId { get; set; }

    public int ClientId { get; set; }
    public int SeasonId { get; set; }
    ...

    #region Navigation Properties
    [InverseProperty("Repayments")]
    [ForeignKey("DistrictId")]
    public virtual District District { get; set; }

    // The lines below are the ones I added to break the results.
    // If I remove them again, the results are correct again.
    [InverseProperty("Repayments")]
    [ForeignKey("DistrictId,ClientId")]
    public virtual Client Client { get; set; }

    [InverseProperty("Repayments")]
    [ForeignKey("DistrictId,SeasonId,ClientId")]
    public virtual SeasonClient SeasonClient { get; set; }
    #endregion
}
The Client Entity
public class Client : IClient
{
    [Key, Column(Order = 0)]
    public int DistrictId { get; set; }

    [Key, Column(Order = 1)]
    public int ClientId { get; set; }
    ....

    // These lines were in the original (working) model
    [InverseProperty("Client")]
    public virtual ICollection<Repayment> Repayments { get; set; }
    ....
}
The relationship that I restored was simply the inverse of a relationship that was already there, which is one of the really weird things about it. I'm sure I'm doing something terribly wrong, but I'm not even sure at this point what information might be helpful in debugging this.
For defining foreign keys and inverse properties, I assume I must use either data annotations or the FluentAPI even if the tables follow all the EF conventions. Is either one better than the other? Is it necessary to consistently choose one approach and stay with it? Does the error above provide any insight as to what I might be doing wrong? Is there any other information I could post that might be helpful?
Breeze is an excellent framework and has the potential to really increase our reach providing assistance to small farmers in rural East Africa, and I'd love to get this prototype working.
Thanks
Ok, some of what you are describing can be explained by Breeze's default behavior of compressing the payload of any query results that return multiple instances of the same entity. If you are using something like the default 'json.net' assembly for serialization, then each entity is sent with an extra '$id' property, and if the same entity is seen again it gets serialized via a simple '$ref' property whose value is the previously mentioned '$id'.
On the Breeze client, these '$refs' get resolved back into full entities during deserialization. However, because deserialization may not be performed in the same order as serialization, Breeze internally creates deferred closure functions (with no arguments) that allow the compressed results to be resolved regardless of the order of serialization. This is the
function (){return e.refMap[t]}
that you are seeing.
If you are seeing this value as part of the actual top-level query result, then we have a bug; but if you are seeing it while debugging the results returned from your server, before they have been returned to the calling function, then this is completely expected (especially if you are viewing the contents of the closure before it has been executed).
So, a couple of questions and suggestions:
Are you actually seeing an error when processing the result of your query, or are you simply surprised that the results are so small? If it's just a size issue, check whether you can identify data that should have been sent to the client and is missing. It is possible that the reference compression is simply very effective in your case.
Take a look at the 'raw' data returned from your web service. It should look something like this, with '$id' and '$ref' properties:
[{
    '$id': '1',
    'Name': 'James',
    'BirthDate': '1983-03-08T00:00Z'
},
{
    '$ref': '1'
}]
If so, then look at the data and make sure that a '$id' exists corresponding to each of your '$refs'. If not, something is wrong with your server-side serialization code. If the data does not look like this, then please post back with a small example of what the 'raw' data does look like.
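If it helps, here is a small sketch of such a check; it is my own illustration (not part of Breeze) and assumes Json.NET is available, since that is what the server serializes with:

using System;
using System.Collections.Generic;
using System.Linq;
using Newtonsoft.Json.Linq;

public static class RefChecker
{
    // Walk every object in the payload, collect the '$id' values, and report
    // any '$ref' that does not point at one of them.
    public static void CheckRefs(string rawJson)
    {
        var root = JToken.Parse(rawJson) as JContainer;
        if (root == null) return; // scalar payload, nothing to check

        var ids = new HashSet<string>();
        var refs = new List<string>();
        foreach (var obj in root.DescendantsAndSelf().OfType<JObject>())
        {
            if (obj["$id"] != null) ids.Add((string)obj["$id"]);
            if (obj["$ref"] != null) refs.Add((string)obj["$ref"]);
        }

        foreach (var r in refs.Where(r => !ids.Contains(r)))
            Console.WriteLine("No '$id' found for '$ref' " + r);
    }
}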
After looking at your Gist, I think I see the issue. Your metadata is out of sync with the actual results returned by your query. In particular, if you look for the '$id' value of "17" in your actual results, you'll notice that it is first found in the 'Client' property of the 'Repayment' type, but your metadata doesn't have a 'Client' navigation property defined for the 'Repayment' type (there is a 'ClientId'). My guess is that you are reusing an 'older' version of your metadata.
The reason this produces incomplete results is that once Breeze determines it is deserializing an 'entity' (i.e. a JSON object with a $type property that maps to an actual entityType), it only attempts to deserialize the 'known' properties of that type, i.e. those found in the metadata. In your case, the 'Client' navigation property on the 'Repayment' type was never being deserialized, and any refs to the '$id' defined there are therefore not available.
We are using Entity Framework 6.0.0 and use database first (like this) to generate code from tables and stored procedures. This seems to work great, except that changes in stored procedures are not reflected when updating or refreshing the model. Adding a column to a table is reflected, but adding a field to a stored procedure is not.
Interestingly, if I go to the Model Browser, right-click the stored procedure, select Add Function Import, and click the Get Column Information button, I can see the correct columns. This means that the model knows of the columns but does not manage to update the generated code.
There is one workaround: delete the generated stored procedure before updating the model. This works as long as you have not made any edits to the stored procedure. Does anyone know of a way to avoid this workaround?
I am using Visual Studio 2013 with all the latest updates as of early December 2013.
Thanks in advance!
Update 1:
andersr's answer helped in one case, where the stored procedure used a temporary table, so I gave him +1, but it still does not solve the main problem of updating simple stored procedures.
Update 2:
shimron's comment below links to a question about the same issues in EF 3.5. It seems the same is still true for EF 6.0. Read it for an alternative approach, but my conclusion as of now is that the simplest way is to delete the generated stored procedure before updating the model. Use partial classes if you want to do something fancy; a small sketch follows.
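To illustrate the partial-class suggestion (the type and member names here are made up, not from the actual model): EF regenerates its half of the class on every model update, while your additions live in a separate file that the update never touches.

// Sketch of the generated half (EF emits something like this; don't edit it):
public partial class GetUsersResult
{
    public int UserId { get; set; }
    public string FirstName { get; set; }
}

// Your own file: members added here survive "Update Model from Database".
public partial class GetUsersResult
{
    public string Summary
    {
        get { return FirstName + " (" + UserId + ")"; }
    }
}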
Based on this answer by DaveD, these steps address the issue:
In your .edmx, right-click and select Model Browser.
Within the Model Browser (in the VS 2015 default configuration, it is a tab within the Solution Explorer), expand Function Imports under the model.
Double-click your stored procedure.
Click the Update button next to Returns a Collection Of - Complex (if not returning a scalar or entity).
Click OK, then save your .edmx to reflect field changes to your stored procedure throughout your project.
Do your stored procedures return data from temporary tables, by any chance? EF does not seem to support this; see EF4 - The selected stored procedure returns no columns for more information.
However, the stored procedure will, as you observed, be available in the Model Browser. I did a quick test featuring the scenario described above. The stored procedure was generated in my context class, but the return type was an int rather than a complex type. See the link above for potential workarounds.
I just encountered this, and my workaround (it is really nasty) was to create an if statement, with a condition that can never be true, at the top of the stored procedure; it selects the same list of outputs as the real query, with explicit casts to the data types I want to return. The casts alone come out nullable, so to mark a column as not null you wrap the cast in an ISNULL.
For example, if your output has the columns:
UserId (int, not null)
RoleId (int, nullable)
FirstName (varchar(255), nullable)
Created (datetime, not null)
You would expect this to create a POCO like:
public class SomeClass
{
    public int UserId { get; set; }
    public int? RoleId { get; set; }
    public string FirstName { get; set; }
    public DateTime Created { get; set; }
}
...But it doesn't, and that's why we're here today. To get around this, I put the following at the top of my SP (right after the AS):
if (1 = 0)
begin
    select
        UserId    = isnull(cast(0 as int), 0),
        RoleId    = cast(0 as int),
        FirstName = cast('' as varchar(255)),
        Created   = isnull(cast(0 as datetime), '')
end
It is horrible and ugly, but it works for me every time. Hopefully we get a tooling update that resolves this soon... it happened to me today with no temp tables, on SQL Server 2016 with VS 2015.
Hope this helps somebody.
A friend reported a problem with a computed column, Entity Framework, and Breeze
We have a table with a "FullName" column computed by the database. When creating a new Person, Breeze sends the FullName property value to the server, even though it’s not being set at all, and that triggers an error when trying to insert the new Person instance. The database throws this exception:
The column "FullName" cannot be modified because it is either a computed column or is the result of a UNION operator.
Here is the relevant portion of the SQL Table definition:
CREATE TABLE [dbo].[Person](
[ID] [bigint] IDENTITY(1,1) NOT NULL,
[FirstName] [varchar](100) NULL,
[MiddleName] [varchar](100) NULL,
[LastName] [varchar](100) NOT NULL,
[FullName] AS ((([Person].[LastName]+',') + isnull(' '+[Person].[FirstName],'')) + isnull(' '+[Person].[MiddleName],'')),
...
My friend tells me the corresponding "Code First" class looks something like this:
public class Person {
    public int ID { get; set; }
    public string FirstName { get; set; }
    public string MiddleName { get; set; }
    public string LastName { get; set; }
    public string FullName { get; set; }
    ...
}
The answer to this question explains the problem and offers a solution.
Design issues
Everyone looking at this wonders why there is a computed column for FullName and, secondarily, why this property is exposed to the client.
Let's just assume there is a good reason for the computed column, a good reason for the model to get the value from the table instead of calculating the value itself, and a good reason to send it to the client rather than have the client calculate it. Here's what he told me about that:
"We need to include the FullName in queries"
Life works out this way sometimes.
Consequences
Notice that the FullName property has a public setter. The EF metadata generator for the Person class cannot tell that this is a read-only property; FullName looks just like LastName. The metadata says "this is a normal read/write property."
Breeze doesn't see a difference either. The client app may not touch this property, but Breeze has to send a value for it when creating a new Person. Back on the server, the Breeze EFContextProvider thinks it should pass that value along when creating the EF entity. The stage is set for disaster.
What can you do if (a) you can't change the table and (b) you can't change the model's FullName property definition?
A Solution
EF needs your help. You should tell EF that this is actually a database-computed property. You could use the EF fluent interface (sketched further below) or use the attribute as shown here:
[DatabaseGenerated(DatabaseGeneratedOption.Computed)]
public String FullName { get; set; }
Add this attribute and EF knows this property is read-only. It will generate the appropriate metadata, and you can save a new Person cleanly. Omit it and you'll get the exception.
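For completeness, here is a sketch of the fluent-interface alternative, assuming a typical Code First DbContext (the context class itself is not shown; only the property call matters):

protected override void OnModelCreating(DbModelBuilder modelBuilder)
{
    // Same effect as the attribute: mark FullName as database-computed.
    modelBuilder.Entity<Person>()
        .Property(p => p.FullName)
        .HasDatabaseGeneratedOption(DatabaseGeneratedOption.Computed);

    base.OnModelCreating(modelBuilder);
}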
Note that this is only necessary for Code First. If he'd generated the model Database First, EF would know that the column is computed and wouldn't try to set it.
Be aware of a similar issue with store-generated keys. The default for an integer key is "store-generated", but the default for a Guid key is "client-generated". If, in your table, the database actually sets the Guid, you must mark the ID property with [DatabaseGenerated(DatabaseGeneratedOption.Identity)], as in the short illustration below.
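The entity name here is made up; the point is the attribute on a Guid key:

using System;
using System.ComponentModel.DataAnnotations.Schema;

public class AuditRecord
{
    // Without this attribute EF assumes the client generates the Guid;
    // with it, EF reads back whatever the database assigns
    // (e.g. via a NEWSEQUENTIALID() default on the column).
    [DatabaseGenerated(DatabaseGeneratedOption.Identity)]
    public Guid Id { get; set; }
}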
I would like to get the max length of a column I have defined using EF code first. I need to ensure that the value inserted does not exceed the max length:
this.Property(t => t.COMPANY_ID)
.HasMaxLength(30);
Any suggestions?
The way I understood your question, your real need is to make sure that a property of an entity (in this case COMPANY_ID) does not exceed a certain maximum length (in this case 30).
Instead of performing manual checks like that, you can consider making use of Data Annotations (System.ComponentModel.DataAnnotations and System.ComponentModel.DataAnnotations.Schema), especially since you're using code first anyway. Something like this:
public class MyEntity
{
    [MaxLength(30)]
    public string MyProperty { get; set; }

    [Column(TypeName = "Date")]
    public DateTime MyDate { get; set; }
}
You can set more than just the maximum length. As you can see above, you can also specify what data type should be reflected in your database, whether a property is required, and much more. EF will manage this for you automatically and will raise exceptions if your entities do not meet the criteria set by your data annotations. If you use MVC scaffolding, it can automatically generate validations consistent with the annotations you've specified for your entities.
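And if you really do need the configured maximum length at runtime, one option is to read it back off the attribute via reflection instead of hard-coding 30. This is a sketch; the helper name is made up:

using System.ComponentModel.DataAnnotations;
using System.Reflection;

public static class AnnotationReader
{
    // Returns the [MaxLength] value for a property, or null if none is set.
    public static int? GetMaxLength<TEntity>(string propertyName)
    {
        var property = typeof(TEntity).GetProperty(propertyName);
        if (property == null) return null;

        var attribute = property.GetCustomAttribute<MaxLengthAttribute>();
        return attribute == null ? (int?)null : attribute.Length;
    }
}

// Usage: int? max = AnnotationReader.GetMaxLength<MyEntity>("MyProperty"); // 30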
I have the following code in my context, and no explicit table-class mapping, yet my database keeps getting created (by my DropCreateDatabaseIfModelChanges initializer) with an EmployeeStatus table, not EmployeeStatuses. Is there a known issue with this, or am I going insane or what?
public DbSet<Department> Departments { get; set; }
public DbSet<EmployeeStatus> EmployeeStatuses { get; set; }
All my other tables are named exactly after their DbSet names, as I expect.
Entity Framework uses its pluralization service to infer database table names from the class names in the model: Destination becomes Destinations, Person becomes People, etc. By convention, Code First will do its best to pluralize the class name and use the result as the name of the table. However, it might not match your table naming conventions. In your case the pluralization service apparently declines to pluralize "Status", which is why EmployeeStatus is left unchanged.
You can use the Table data annotation to ensure that Code First maps your class to the correct table name, as in the sketch below.
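This is a minimal sketch using the poster's class; the key and Name property are made up for the example:

using System.ComponentModel.DataAnnotations.Schema;

[Table("EmployeeStatuses")]
public class EmployeeStatus
{
    public int EmployeeStatusId { get; set; }  // illustrative key
    public string Name { get; set; }           // illustrative property
}

With the attribute in place, the initializer will create the table as EmployeeStatuses regardless of what the pluralization service decides.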