What is the proper way to resolve a reference to a service fabric stateless service?

I have been developing a system that makes heavy use of stateless Service Fabric services. I thought I had a good idea of how things worked, but I am now doing something slightly different and it is obvious my understanding is limited.
I have a cluster of 5 nodes, and for simplicity all my services currently have an instance count of -1. With everything on every node, I can watch a single node to check basic behavioral correctness.
I just added a new service that needs an instance count of 1. However, I seem unable to resolve this service correctly. Instead, SF tries to resolve the service on each machine, which fails for every machine except the one where the single instance exists.
My assumption was that SF would automagically resolve a reference to a service anywhere in the cluster, and that if that reference failed it would automagically resolve a new reference, and so on. It appears this is not correct, at least for the way I am currently doing things.
I can find an instance using code similar to this, but what happens if that instance fails? How do I get another reference?
I could resolve for every call like this, but that seems like a terrible idea when I really only want to resolve an IXyzService and pass it along.
This is how I am resolving services since I am using the V2 custom serialization.
var _proxyFactory = new ServiceProxyFactory(c =>
{
    return new FabricTransportServiceRemotingClientFactory(
        serializationProvider: new CustomRemotingSerializationProvider(Logger)
    );
});
var location = new Uri("fabric:/xyz/abcService");
var proxy = _proxyFactory.CreateServiceProxy<TService>(location);
This does actually work, but it appears to resolve only a service on the same machine. So ServiceA resolves a reference to ServiceB on the same machine, and if ServiceB legitimately doesn't exist on that machine, the resolution fails.
Summary:
What is the correct way for ServiceA to use the V2 custom serialization ServiceProxyFactory to resolve an interface reference to ServiceB wherever ServiceA and ServiceB are in the cluster?
Update:
The evidence that it doesn't work is that the call to resolve hangs forever. According to this link that is correct behavior, because the service will eventually come up. However, only one node ever resolves it correctly, and that is the node where the single instance is active. I have tried several things, including waiting 30 seconds, just to make sure it wasn't an initialization issue.
var proxy = _proxyFactory.CreateServiceProxy<TService>(location);
// Never gets here except on the one node.
SomethingElse(proxy);
Listener code
This follows the V2 custom serialization tutorial almost exactly.
var listeners = new[]
{
    new ServiceInstanceListener((c) =>
    {
        return new FabricTransportServiceRemotingListener(c, this, null,
            new CustomRemotingSerializationProvider(Logger));
    })
};
public class CustomRemotingSerializationProvider : IServiceRemotingMessageSerializationProvider
{
#region Private Variables
private readonly ILogger _logger;
private readonly Action<RequestInfo> _requestAction;
private readonly Action<RequestInfo> _responseAction;
#endregion Private Variables
public CustomRemotingSerializationProvider(ILogger logger, Action<RequestInfo> requestAction = null, Action<RequestInfo> responseAction = null)
{
_logger = logger;
_requestAction = requestAction;
_responseAction = responseAction;
}
public IServiceRemotingRequestMessageBodySerializer CreateRequestMessageSerializer(Type serviceInterfaceType, IEnumerable<Type> requestWrappedTypes,
IEnumerable<Type> requestBodyTypes = null)
{
return new RequestMessageBodySerializer(_requestAction);
}
public IServiceRemotingResponseMessageBodySerializer CreateResponseMessageSerializer(Type serviceInterfaceType, IEnumerable<Type> responseWrappedTypes,
IEnumerable<Type> responseBodyTypes = null)
{
return new ResponseMessageBodySerializer(_responseAction);
}
public IServiceRemotingMessageBodyFactory CreateMessageBodyFactory()
{
return new MessageBodyFactory();
}
}
Connection code
_proxyFactory = new ServiceProxyFactory(c =>
{
    return new FabricTransportServiceRemotingClientFactory(
        serializationProvider: new CustomRemotingSerializationProvider(Logger)
    );
});
// Hangs here - tried different partition keys or not specifying one.
var proxy = _proxyFactory.CreateServiceProxy<TService>(location, ServicePartitionKey.Singleton);

Related

Locally cached stateManager... any risk in Service Fabric?

What seems to be common practice could be the wrong thing to do in Service Fabric. I suspect the code below, where the stateManager is saved in a local field, could cause a problem when the 'Startup' class is instantiated within the return statement of the 'CreateServiceReplicaListeners()' method in the 'SomeService' stateful service.
The problem arises if the state manager is somehow re-instantiated. I need more explanation as to whether the practice below is right. If not, what would be the best practice instead?
internal class SomeService : StatefulService
{
    protected override IEnumerable<ServiceReplicaListener> CreateServiceReplicaListeners()
    {
        return new[]
        {
            new ServiceReplicaListener(
                initParams =>
                    new OwinCommunicationListener("SomeService", new Startup(this.StateManager), initParams))
        };
    }
}
public class Startup : IOwinAppBuilder
{
private readonly IReliableStateManager stateManager;
public Startup(IReliableStateManager stateManager)
{
this.stateManager = stateManager;
}
public void Configuration(IAppBuilder appBuilder)
{
// other initialization codes..
...
...
UnityConfig.RegisterComponents(config, this.stateManager);
appBuilder.UseWebApi(config);
}
}
Whenever a stateful service changes role, it triggers a call to IStatefulServiceReplica.ChangeRoleAsync(ReplicaRole newRole, CancellationToken cancellationToken).
ChangeRoleAsync(..) ensures that the new role uses the correct communication listeners by doing the following:
Call CloseCommunicationListenersAsync(CancellationToken cancellationToken) to close any open listeners
Call OpenCommunicationListenersAsync(newRole, cancellationToken) for Primary or ActiveSecondary roles
The method OpenCommunicationListenersAsync() calls CreateServiceReplicaListeners() to get the listeners and calls CreateCommunicationListener(serviceContext) on each returned listener to open the related endpoints.
Role changes happen routinely during upgrades and load balancing, so this is a very common event.
In summary:
Every time a role change happens, CreateServiceReplicaListeners() will be called. ChangeRole does not shut down the service, so this can have side effects: for example, if you register dependencies in a DI container there, you might face duplicate registrations.
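One possible guard, as a rough sketch reusing the classes from the question (the Lazy<Startup> field and the constructor are my additions, not anything Service Fabric requires; any container built inside Startup would likewise need to be created only once): create the listener-independent state once per replica instead of inside CreateServiceReplicaListeners().
internal class SomeService : StatefulService
{
    // Created once per replica; repeated role changes (and thus repeated
    // CreateServiceReplicaListeners calls) reuse it instead of rebuilding it,
    // so DI registrations are not repeated.
    private readonly Lazy<Startup> _startup;

    public SomeService(StatefulServiceContext context)
        : base(context)
    {
        _startup = new Lazy<Startup>(() => new Startup(this.StateManager));
    }

    protected override IEnumerable<ServiceReplicaListener> CreateServiceReplicaListeners()
    {
        return new[]
        {
            new ServiceReplicaListener(
                initParams => new OwinCommunicationListener("SomeService", _startup.Value, initParams))
        };
    }
}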

Debugging Code Called by EF Core Add-Migrations

I have an Entity Framework Core database defined in a separate assembly, using the IDesignTimeDbContextFactory<> pattern (i.e., I define a class implementing IDesignTimeDbContextFactory<>, whose CreateDbContext method returns an instance of the database context).
Because the application the EF Core database belongs to uses AutoFac dependency injection, the IDesignTimeDbContextFactory<> factory class creates an AutoFac container in its constructor and then resolves the DbContextOptionsBuilder<>-derived class that is fed into the database context's constructor (I do this so I can control whether a local or an Azure-based SQL Server database is targeted, based on a config file setting, with passwords stored in an Azure KeyVault):
public class TemporaryDbContextFactory : IDesignTimeDbContextFactory<FitchTrustContext>
{
private readonly FitchTrustDBOptions _dbOptions;
public TemporaryDbContextFactory()
{
// OMG, I would >>never<< have thought to do this to eliminate the default logging by this
// deeply-buried package. Thanx to Bruce Chen via
// https://stackoverflow.com/questions/47982194/suppressing-console-logging-by-azure-keyvault/48016958#48016958
LoggerCallbackHandler.UseDefaultLogging = false;
var builder = new ContainerBuilder();
builder.RegisterModule<SerilogModule>();
builder.RegisterModule<KeyVaultModule>();
builder.RegisterModule<ConfigurationModule>();
builder.RegisterModule<FitchTrustDbModule>();
var container = builder.Build();
_dbOptions = container.Resolve<FitchTrustDBOptions>() ??
throw new NullReferenceException(
$"Could not resolve {typeof(FitchTrustDBOptions).Name}");
}
public FitchTrustContext CreateDbContext( string[] args )
{
return new FitchTrustContext( _dbOptions );
}
}
public class FitchTrustDBOptions : DbContextOptionsBuilder<FitchTrustContext>
{
public FitchTrustDBOptions(IFitchTrustNGConfigurationFactory configFactory, IKeyVaultManager kvMgr)
{
if (configFactory == null)
throw new NullReferenceException(nameof(configFactory));
if (kvMgr == null)
throw new NullReferenceException(nameof(kvMgr));
var scannerConfig = configFactory.GetFromDisk()
?? throw new NullReferenceException(
"Could not retrieve ScannerConfiguration from disk");
var dbConnection = scannerConfig.Database.Connections
.SingleOrDefault(c =>
c.Location.Equals(scannerConfig.Database.Location,
StringComparison.OrdinalIgnoreCase))
?? throw new ArgumentOutOfRangeException(
$"Cannot find database connection information for location '{scannerConfig.Database.Location}'");
var temp = kvMgr.GetSecret($"DatabaseCredentials--{dbConnection.Location}--Password");
var connString = String.IsNullOrEmpty(dbConnection.UserID) || String.IsNullOrEmpty(temp)
? dbConnection.ConnectionString
: $"{dbConnection.ConnectionString}; User ID={dbConnection.UserID}; Password={temp}";
this.UseSqlServer(connString,
optionsBuilder =>
optionsBuilder.MigrationsAssembly(typeof(FitchTrustContext).GetTypeInfo().Assembly.GetName()
.Name));
}
}
Needless to say, while this gives me a lot of flexibility (I can switch from a local to a cloud database just by changing a single config parameter, and any required passwords are stored reasonably securely in the cloud), it can trip up the add-migration cmdlet if there's a bug in the code (e.g., the wrong name of a configuration file).
To debug those kinds of problems, I've often had to resort to writing messages to the Visual Studio output window via diagnostic WriteLine calls. That strikes me as pretty primitive (not to mention time-consuming).
Is there a way to attach a debugger to the code that's called by add-migration so I can step through it, check values, etc.? I tried inserting a Launch() debugger line in my code, but it doesn't work. It seems to throw me into the add-migration codebase, for which I have no symbols loaded, and breakpoints that I try to set in my code show up as empty red circles: they'll never be hit.
Thoughts and suggestions would be most welcome!
Add Debugger.Launch() to the beginning of the constructor to launch the just-in-time debugger. This lets you attach VS to the process and debug it like normal.
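For example, a minimal sketch applied to the TemporaryDbContextFactory constructor from the question (the IsAttached guard is just an optional convenience so the prompt doesn't reappear once a debugger is attached):
public TemporaryDbContextFactory()
{
    // Prompts to attach a just-in-time debugger when Add-Migration instantiates the factory.
    if (!System.Diagnostics.Debugger.IsAttached)
    {
        System.Diagnostics.Debugger.Launch();
    }

    // ... existing container setup from the question follows ...
}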

How to resolve InstancePerLifetimeScope component from within SingleInstace component via Func?

The idea is simple and works in other containers, not limited to .NET:
A singleton component, referenced from within a request context, references a transient component, which in turn references a request-scoped component (some UnitOfWork).
I expected that Autofac would resolve the same scoped component in both cases:
- when I request it directly from request scope
- when I request it by invoking Func<>
Unfortunately, the reality is quite different: Autofac pins the SingleInstance component to the root scope and resolves the InstancePerLifetimeScope component from the root scope, introducing a memory leak (!!!), because the UnitOfWork is disposable and becomes tracked by the root scope (an attempt to use a matching web request scope would just fail to find the request scope, which is even more misleading).
Now I'm wondering whether this behavior is by design or a bug. If it is by design, I'm not sure what the use cases are and why it differs from other containers.
The example is as follows (including a working SimpleInjector case):
namespace AutofacTest
{
using System;
using System.Linq;
using System.Linq.Expressions;
using Autofac;
using NUnit.Framework;
using SimpleInjector;
using SimpleInjector.Lifestyles;
public class SingletonComponent
{
public Func<TransientComponent> Transient { get; }
public Func<ScopedComponent> Scoped { get; }
public SingletonComponent(Func<TransientComponent> transient, Func<ScopedComponent> scoped)
{
Transient = transient;
Scoped = scoped;
}
}
public class ScopedComponent : IDisposable
{
public void Dispose()
{
}
}
public class TransientComponent
{
public ScopedComponent Scoped { get; }
public TransientComponent(ScopedComponent scopedComponent)
{
this.Scoped = scopedComponent;
}
}
class Program
{
static void Main(string[] args)
{
try
{
AutofacTest();
}
catch (Exception ex)
{
Console.WriteLine(ex.Message);
}
try
{
SimpleInjectorTest();
}
catch (Exception ex)
{
Console.WriteLine(ex.Message);
}
}
private static void AutofacTest()
{
var builder = new ContainerBuilder();
builder.RegisterType<ScopedComponent>().InstancePerLifetimeScope();
builder.RegisterType<SingletonComponent>().SingleInstance();
builder.RegisterType<TransientComponent>();
var container = builder.Build();
var outerSingleton = container.Resolve<SingletonComponent>();
using (var scope = container.BeginLifetimeScope())
{
var singleton = scope.Resolve<SingletonComponent>();
Assert.That(outerSingleton, Is.SameAs(singleton));
var transient = scope.Resolve<TransientComponent>();
var scoped = scope.Resolve<ScopedComponent>();
Assert.That(singleton.Transient(), Is.Not.SameAs(transient));
// this fails
Assert.That(singleton.Transient().Scoped, Is.SameAs(scoped));
Assert.That(transient.Scoped, Is.SameAs(scoped));
Assert.That(singleton.Scoped(), Is.SameAs(scoped)); // this fails
Assert.That(singleton.Transient(), Is.Not.SameAs(transient));
}
}
private static void SimpleInjectorTest()
{
var container = new SimpleInjector.Container();
container.Options.AllowResolvingFuncFactories();
container.Options.DefaultScopedLifestyle = new AsyncScopedLifestyle();
container.Register<ScopedComponent>(Lifestyle.Scoped);
container.Register<SingletonComponent>(Lifestyle.Singleton);
container.Register<TransientComponent>(Lifestyle.Transient);
container.Verify();
var outerSingleton = container.GetInstance<SingletonComponent>();
using (var scope = AsyncScopedLifestyle.BeginScope(container))
{
var singleton = container.GetInstance<SingletonComponent>();
Assert.That(outerSingleton, Is.SameAs(singleton));
var transient = container.GetInstance<TransientComponent>();
var scoped = container.GetInstance<ScopedComponent>();
Assert.That(singleton.Transient(), Is.Not.SameAs(transient));
Assert.That(singleton.Transient().Scoped, Is.SameAs(scoped));
Assert.That(transient.Scoped, Is.SameAs(scoped));
Assert.That(singleton.Scoped(), Is.SameAs(scoped));
Assert.That(singleton.Transient(), Is.Not.SameAs(transient));
}
}
}
public static class SimpleInjectorExtensions
{
public static void AllowResolvingFuncFactories(this ContainerOptions options)
{
options.Container.ResolveUnregisteredType += (s, e) =>
{
var type = e.UnregisteredServiceType;
if (!type.IsGenericType || type.GetGenericTypeDefinition() != typeof(Func<>))
{
return;
}
Type serviceType = type.GetGenericArguments().First();
InstanceProducer registration = options.Container.GetRegistration(serviceType, true);
Type funcType = typeof(Func<>).MakeGenericType(serviceType);
var factoryDelegate = Expression.Lambda(funcType, registration.BuildExpression()).Compile();
e.Register(Expression.Constant(factoryDelegate));
};
}
}
}
The short version: what you're seeing is not a bug; you're just misunderstanding some of the finer points of lifetime scopes and captive dependencies.
First, a couple of background references from the Autofac docs:
Controlling Scope and Lifetime explains a lot about how lifetime scopes and their hierarchy work.
Captive Dependencies talks about why you generally shouldn't take an instance-per-lifetime-scope or instance-per-dependency item into a singleton.
Disposal talks about how Autofac auto-disposes IDisposable items and how you can opt out of that.
Implicit Relationship Types describes the Owned<T> relationship type used as part of the IDisposable opt-out.
Some big key takeaways from these docs that directly affect your situation:
Autofac tracks IDisposable components so they can be automatically disposed along with the lifetime scope. That means it will hold references to any resolved IDisposable objects until the parent lifetime scope is disposed.
You can opt out of IDisposable tracking either by registering the component as ExternallyOwned or by using Owned<T> in the constructor parameter being injected. (Instead of taking in an IDependency take in an Owned<IDependency>.)
Singletons live in the root lifetime scope. That means any time you resolve a singleton it will be resolved from the root lifetime scope. If it is IDisposable it will be tracked in the root lifetime scope and not released until that root scope - the container itself - is disposed.
The Func<T> dependency relationship is tied to the same lifetime scope as the object in which it's injected. If you have a singleton, that means the Func<T> will resolve things from the same lifetime scope as the singleton - the root lifetime scope. If you have something that's instance-per-dependency, the Func<T> will be attached to whatever scope the owning component is in.
Knowing that, you can see why your singleton, which takes in a Func<T>, keeps trying to resolve these things from the root lifetime scope. You can also see why you're seeing a memory leak situation - you haven't opted out of the disposal tracking for the things that are being resolved by that Func<T>.
So the question is, how do you fix it?
Option 1: Redesign
Generally speaking, it would be better to invert the relationship between the singleton and the thing you have to resolve via Func<T>; or stop using a singleton altogether and let that be a smaller lifetime scope.
For example, say you have some IDatabase service that needs an IPerformTransaction to get things done. The database connection is expensive to spin up, so you might make that a singleton. You might then have something like this:
public class DatabaseThing : IDatabase
{
public DatabaseThing(Func<IPerformTransaction> factory) { ... }
public void DoWork()
{
var transaction = this.factory();
transaction.DoSomethingWithData(this.Data);
}
}
So, like, the thing that's expensive to spin up uses a Func<T> to generate the cheap thing on the fly and work with it.
Inverting that relationship would look like this:
public class PerformsTransaction : IPerformTransaction
{
public PerformsTransaction(IDatabase database) { ... }
public void DoSomethingWithData()
{
this.DoSomething(this.Database.Data);
}
}
The idea is that you'd resolve the transaction thing and it'd take the singleton in as a dependency. The cheaper item could easily be disposed along with child lifetime scopes (i.e., per request) but the singleton would remain.
It'd be better to redesign if you can because even with the other options you'll have a rough time getting "instance per request" sorts of things into a singleton. (And that's a bad idea anyway from both a captive dependency and threading standpoint.)
Option 2: Abandon Singleton
If you can't redesign, a good second choice would be to make the lifetime of the singleton... not be a singleton. Let it be instance-per-scope or instance-per-dependency and stop using Func<T>. Let everything get resolved from a child lifetime scope and be disposed when the scope is disposed.
I recognize that's not always possible for a variety of reasons. But if it is possible, that's another way to escape the problem.
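As a rough sketch of what that registration change might look like, reusing the component names from the test above (the choice of InstancePerLifetimeScope is illustrative, not the only option):
var builder = new ContainerBuilder();
builder.RegisterType<ScopedComponent>().InstancePerLifetimeScope();
builder.RegisterType<TransientComponent>();
// Formerly SingleInstance: now each lifetime scope gets its own instance,
// so its scoped dependencies come from (and are disposed with) that scope.
builder.RegisterType<SingletonComponent>().InstancePerLifetimeScope();
var container = builder.Build();

using (var scope = container.BeginLifetimeScope())
{
    var component = scope.Resolve<SingletonComponent>();
    // component and any disposables tracked for it are released when the scope is disposed.
}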
Option 3: Use ExternallyOwned
If you can't redesign, you could register the disposable items consumed by the singleton as ExternallyOwned.
builder.RegisterType<ThingConsumedBySingleton>()
.As<IConsumedBySingleton>()
.ExternallyOwned();
Doing that will tell Autofac to not track the disposable. You won't have the memory leak. You will be responsible for disposing the resolved objects yourself. You will also still be getting them from the root lifetime scope since the singleton is getting a Func<T> injected.
public void MethodInsideSingleton()
{
using(var thing = this.ThingFactory())
{
// Do the work you need to and dispose of the
// resolved item yourself when done.
}
}
Option 4: Owned<T>
If you don't want every consumer of the service to have to dispose of it manually - you only want to deal with that inside the singleton - you could register it as normal but consume a Func<Owned<T>>. The singleton will then resolve things as expected, but the container won't track them for disposal.
public void MethodInsideSingleton()
{
using(var ownedThing = this.ThingFactory())
{
var thing = ownedThing.Value;
// Do the work you need to and dispose of the
// resolved item yourself when done.
}
}
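The injection side of that might look like this rough sketch, adapting the SingletonComponent from the question (Owned<T> lives in Autofac.Features.OwnedInstances; the field and method names are illustrative):
using Autofac.Features.OwnedInstances;

public class SingletonComponent
{
    private readonly Func<Owned<ScopedComponent>> _scopedFactory;

    public SingletonComponent(Func<Owned<ScopedComponent>> scopedFactory)
    {
        _scopedFactory = scopedFactory;
    }

    public void DoWork()
    {
        // Owned<T> transfers disposal responsibility from the container to this code.
        using (var owned = _scopedFactory())
        {
            var scoped = owned.Value;
            // ... use scoped ...
        }
    }
}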

Calling services from other application in the cluster

Is it possible to call services or actors from one application to another in a Service Fabric cluster? When I tried (using ActorProxy.Create with the proper Uri), I got a "No MethodDispatcher is found for interface" error.
Yes, it is possible. As long as you have the right Uri to the Service (or ActorService) and you have access to the assembly with the interface defining your service or actor, it should not be much different from calling the Service/Actor from within the same application. If you have enabled security for your service, then you have to set up the certificates for the exchange as well.
If I have a simple service defined as:
public interface ICalloutService : IService
{
Task<string> SayHelloAsync();
}
internal sealed class CalloutService : StatelessService, ICalloutService
{
public CalloutService(StatelessServiceContext context)
: base(context) { }
protected override IEnumerable<ServiceInstanceListener> CreateServiceInstanceListeners()
{
yield return new ServiceInstanceListener(this.CreateServiceRemotingListener);
}
public Task<string> SayHelloAsync()
{
return Task.FromResult("hello");
}
}
and a simple actor:
public interface ICalloutActor : IActor
{
Task<string> SayHelloAsync();
}
[StatePersistence(StatePersistence.None)]
internal class CalloutActor : Actor, ICalloutActor
{
public CalloutActor(ActorService actorService, ActorId actorId)
: base(actorService, actorId) {}
public Task<string> SayHelloAsync()
{
return Task.FromResult("hello");
}
}
both running in an application in the cluster, you can then call them from another application within the same cluster:
// Call the service
var calloutServiceUri = new Uri(@"fabric:/ServiceFabric.SO.Answer._41655575/CalloutService");
var calloutService = ServiceProxy.Create<ICalloutService>(calloutServiceUri);
var serviceHello = await calloutService.SayHelloAsync();
// Call the actor
var calloutActorServiceUri = new Uri(@"fabric:/ServiceFabric.SO.Answer._41655575/CalloutActorService");
var calloutActor = ActorProxy.Create<ICalloutActor>(new ActorId(DateTime.Now.Millisecond), calloutActorServiceUri);
var actorHello = await calloutActor.SayHelloAsync();
You can find the right Uri in the Service Fabric Explorer if you click the service and look at the name. By default the Uri of a service is: fabric:/{applicationName}/{serviceName}.
The only tricky part is how you get the interface from the external service into your calling service. You could simply reference the built .exe for the service you wish to call, or you could package the assembly containing the interface as a NuGet package and put it on a private feed.
If you don't do this and instead just copy the code between your Visual Studio solutions, Service Fabric will think these are two different interfaces, even if they share the exact same signature. If you do it for a Service you get a NotImplementedException saying "Interface id '{xxxxxxxx}' is not implemented by object '{service}'", and if you do it for an Actor you get a KeyNotFoundException saying "No MethodDispatcher is found for interface id '-{xxxxxxxxxx}'".
So, to fix your problem, make sure the calling application references the same interface assembly that is used by the application you want to call.

Asp.Net Web API Error: The 'ObjectContent`1' type failed to serialize the response body for content type 'application/xml; charset=utf-8'

Simplest example of this: I get a collection and try to output it via Web API:
// GET api/items
public IEnumerable<Item> Get()
{
return MyContext.Items.ToList();
}
And I get the error:
Object of type
'System.Data.Objects.ObjectQuery`1[Dcip.Ams.BO.EquipmentWarranty]'
cannot be converted to type
'System.Data.Entity.DbSet`1[Dcip.Ams.BO.EquipmentWarranty]'
This is a pretty common error to do with the new proxies, and I know that I can fix it by setting:
MyContext.Configuration.ProxyCreationEnabled = false;
But that defeats the purpose of a lot of what I am trying to do. Is there a better way?
I would suggest disabling proxy creation only in the places where you don't need it or where it is causing you trouble. You don't have to disable it globally; you can disable it just for the current DB context via code...
[HttpGet]
[WithDbContextApi]
public HttpResponseMessage Get(int take = 10, int skip = 0)
{
CurrentDbContext.Configuration.ProxyCreationEnabled = false;
var lista = CurrentDbContext.PaymentTypes
.OrderByDescending(x => x.Id)
.Skip(skip)
.Take(take)
.ToList();
var count = CurrentDbContext.PaymentTypes.Count();
return Request.CreateResponse(HttpStatusCode.OK, new { PaymentTypes = lista, TotalCount = count });
}
Here I disabled ProxyCreation only in this method, because a new DbContext is created for every request, so I disabled it only for this case.
Hope it helps
If you have navigation properties and you do not want to make them non-virtual, you should use JSON.NET and change the configuration in App_Start to use JSON instead of XML.
After installing JSON.NET from NuGet, insert this code in WebApiConfig.cs in the Register method:
var json = config.Formatters.JsonFormatter;
json.SerializerSettings.PreserveReferencesHandling = Newtonsoft.Json.PreserveReferencesHandling.Objects;
config.Formatters.Remove(config.Formatters.XmlFormatter);
If you have navigation properties, make them non-virtual. Mapping will still work, but it prevents the creation of dynamic proxy entities, which cannot be serialized.
Not having lazy loading is fine in a Web API, as you don't have a persistent connection and you ran a .ToList() anyway.
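For example, a hypothetical pair of entities (Item and Category are illustrative names, not from the question):
public class Category
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public class Item
{
    public int Id { get; set; }
    public int CategoryId { get; set; }

    // Non-virtual navigation property: EF will not generate a lazy-loading
    // proxy for it, so the serializer sees the plain entity type.
    public Category Category { get; set; }
}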
I just disabled proxy classes on an as-needed basis:
// GET: ALL Employee
public IEnumerable<DimEmployee> Get()
{
using (AdventureWorks_MBDEV_DW2008Entities entities = new AdventureWorks_MBDEV_DW2008Entities())
{
entities.Configuration.ProxyCreationEnabled = false;
return entities.DimEmployees.ToList();
}
}
Add the following code in the Application_Start function of Global.asax.cs:
GlobalConfiguration.Configuration.Formatters.JsonFormatter.SerializerSettings
.ReferenceLoopHandling = Newtonsoft.Json.ReferenceLoopHandling.Ignore;
GlobalConfiguration.Configuration.Formatters
.Remove(GlobalConfiguration.Configuration.Formatters.XmlFormatter);
This instructs the API to serialize every response as JSON and removes the XML responses.
In my case the object being returned had a property within it with a type that did not have an argumentless/default constructor. By adding a zero-argument constructor to that type the object could be serialized successfully.
I had the same problem and my DTO was missing a parameterless constructor.
public UserVM() { }
public UserVM(User U)
{
LoginId = U.LoginId;
GroupName = U.GroupName;
}
The first, parameterless constructor was the one that was missing.
I got this error message and it turns out the problem was that I had accidentally set my class to use the same serialized property name for two properties:
public class ResultDto
{
//...
[JsonProperty(PropertyName="DataCheckedBy")]
public string ActualAssociations { get; set; }
[JsonProperty(PropertyName="DataCheckedBy")]
public string ExpectedAssociations { get; set; }
//...
}
If you're getting this error and you aren't sending entities directly through your API, copy the class that's failing to serialize to LINQPad and just call JsonConvert.SerializeObject() on it and it should give you a better error message than this crap. As soon as I tried this it gave me the following error message: A member with the name 'DataCheckedBy' already exists on 'UserQuery+ResultDto'. Use the JsonPropertyAttribute to specify another name.
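A rough sketch of that check, using the ResultDto from above in LINQPad or any throwaway console app:
try
{
    // Serializing the DTO directly surfaces the underlying serializer error
    // instead of the generic ObjectContent`1 message from Web API.
    var json = Newtonsoft.Json.JsonConvert.SerializeObject(new ResultDto());
    Console.WriteLine(json);
}
catch (Exception ex)
{
    // e.g. "A member with the name 'DataCheckedBy' already exists on ... ResultDto.
    //       Use the JsonPropertyAttribute to specify another name."
    Console.WriteLine(ex.Message);
}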
After disabling proxy creation, use eager loading (Include()) to load the related entities you need.
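A minimal sketch of that combination, reusing MyContext and Item from the question and the hypothetical Category navigation property from the example above (the lambda Include requires using System.Data.Entity):
// GET api/items
public IEnumerable<Item> Get()
{
    // Proxies off, so no lazy loading; pull related data in explicitly instead.
    MyContext.Configuration.ProxyCreationEnabled = false;
    return MyContext.Items
        .Include(i => i.Category)
        .ToList();
}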
In my project, an EntityCollection was returned from the Web API action method, so Configuration.ProxyCreationEnabled = false was not applicable. The approach below worked for me:
1. Open Control Panel.
2. Turn Windows features on or off.
3. Choose Internet Information Services.
4. Check the World Wide Web components (it is better to check all the components in IIS).
5. Install the components.
6. Go to IIS (type inetmgr at a command prompt).
7. Select the published code in the virtual directory.
8. Convert it into an application.
9. Browse the application.
The answer by @Mahdi perfectly fixes the issue for me; however, I noticed that with Newtonsoft.Json version 11.0 it doesn't fix the issue, but the moment I update Newtonsoft.Json to the latest 13.0 it starts working.