.NET 7 JSON serialization: cannot handle circular references - entity-framework

I have these tables in my db:
- Turns
- Jobs
- Turns_Jobs (a turn may have multiple jobs, a single job may span across multiple turns)
- Managers (each job has a manager)
These are the corresponding classes:
- Turn
  - int TurnID
  - ...
  - virtual ICollection<Turn_Job> TurnJobs
- Job
  - int JobID
  - ...
  - int ManagerID
  - virtual Manager Manager
  - ...
  - virtual ICollection<Turn_Job> JobTurns
- Turn_Job
  - int TurnID
  - ...
  - virtual Turn Turn
  - int JobID
  - virtual Job Job
- Manager
  - ManagerID
  - Name
I'm trying to serialize the following query:
var q = await _db.Turns
.Include(x => x.TurnJobs).ThenInclude(x => x.Job).ThenInclude(x => x.Manager)
.Where(x => true /* real application has some conditions here */)
.Select(x => x)
.ToListAsync();
Usually I would do:
var json = JsonSerializer.Serialize(q, new JsonSerializerOptions(JsonSerializerDefaults.Web) { ReferenceHandler = ReferenceHandler.Preserve });
but this throws the following exception:
System.Text.Json.JsonException: The object or value could not be serialized. Path: $.TurnJobs.Job.JobTurns.Turn.TurnJobs.Job.JobTurns.Turn.TurnJobs.Job.JobTurns.Turn.TurnJobs.Job.JobTurns.Turn.TurnJobs.Job.JobTurns.Turn.TurnJobs.Job.JobTurns.Turn.TurnJobs.Job.JobTurns.Turn.TurnJobs.Job.JobTurns.
---> System.InvalidOperationException: CurrentDepth (64) is equal to or larger than the maximum allowed depth of 64. Cannot write the next JSON object or array.
It seems the circular reference is not detected and/or not handled correctly.
If I ignore cycles instead:
var json = JsonSerializer.Serialize(q, new JsonSerializerOptions(JsonSerializerDefaults.Web) { ReferenceHandler = ReferenceHandler.IgnoreCycles });
the exception becomes:
System.Text.Json.JsonException: A possible object cycle was detected. This can either be due to a cycle or if the object depth is larger than the maximum allowed depth of 64. Consider using ReferenceHandler.Preserve on JsonSerializerOptions to support cycles. Path: $.TurnJobs.Job.JobTurns.Turn.TurnJobs.Job.JobTurns.Turn.TurnJobs.Job.JobTurns.Turn.TurnJobs.Job.JobTurns.Turn.TurnJobs.Job.JobTurns.Turn.TurnJobs.Job.JobTurns.Turn.TurnJobs.Job.JobTurns.Turn.TurnJobs.Job.JobTurns.Turn.TurnJobs.Job.JobTurns.Turn.TurnJobs.Job.JobTurns.Turn.TurnJobs.TurnID.
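Not an answer to why Preserve hits the depth limit here, but one way to sidestep the cycles entirely is to project the query into a flat shape before serializing, so the serializer never sees the back-references (JobTurns, Turn, etc.). A minimal sketch, assuming the entity and property names listed above; the anonymous-type shape is only an illustration:
// No Include/ThenInclude needed: EF Core builds the joins from the projection itself.
var q = await _db.Turns
    .Where(x => true /* real application conditions here */)
    .Select(t => new
    {
        t.TurnID,
        Jobs = t.TurnJobs.Select(tj => new
        {
            tj.Job.JobID,
            Manager = new { tj.Job.Manager.ManagerID, tj.Job.Manager.Name }
        }).ToList()
    })
    .ToListAsync();
// The projection contains no navigation properties pointing back to Turn,
// so the default web options serialize it without any ReferenceHandler.
var json = JsonSerializer.Serialize(q, new JsonSerializerOptions(JsonSerializerDefaults.Web));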

Related

FMOD: Cleaning up duplicate platform warning

FMOD for Unity 2.01.07 (Unity 2019.4.18f1 - running on MacOS Catalina) seems to have broken their FMODStudioSettings class.
I can't save in the editor without getting these errors:
FMOD: Cleaning up duplicate platform: ID = playInEditor, name = 'Play In Editor Settings', type = PlatformPlayInEditor
ArgumentException: An item with the same key has already been added. Key: playInEditor
FMOD: Cleaning up duplicate platform: ID = default, name = 'Default Settings', type = PlatformDefault
ArgumentException: An item with the same key has already been added. Key: default
NullReferenceException: Object reference not set to an instance of an object
FMODUnity.SettingsEditor.DisplayPlugins (System.String title, FMODUnity.Platform platform, FMODUnity.Platform+PropertyAccessor`1[T] property, System.Collections.Generic.Dictionary`2[TKey,TValue] expandState, System.String warning) (at Assets/Plugins/FMOD/src/Editor/SettingsEditor.cs:1028)
I believe this is a regression that basically makes the Unity integration unusable. It seems to be something to do with de-duplicating platforms in the Platforms map. At runtime there is a series of NullReferenceExceptions related to platforms, so I can't actually run the game properly. Has anyone else run into this?
I'm evaluating FMOD as a middleware option for our game, and have run into at least two serious bugs in the Unity integration. See other bug here.
UPDATE:
I haven't found out why this doesn't happen to everyone, but an easy fix for anyone else running into this issue has been to apply this diff:
diff --git a/Assets/Plugins/FMOD/src/Runtime/Settings.cs b/Assets/Plugins/FMOD/src/Runtime/Settings.cs
index 2641e926..c2843145 100644
--- a/Assets/Plugins/FMOD/src/Runtime/Settings.cs
+++ b/Assets/Plugins/FMOD/src/Runtime/Settings.cs
@@ -817,6 +817,10 @@ namespace FMODUnity
private void PopulatePlatformsFromAsset()
{
+ Platforms.Clear();
+ PlatformForBuildTarget.Clear();
+ PlatformForRuntimePlatform.Clear();
+
#if UNITY_EDITOR
string assetPath = AssetDatabase.GetAssetPath(this);
UnityEngine.Object[] assets = AssetDatabase.LoadAllAssetsAtPath(assetPath);
@@ -827,36 +831,8 @@ namespace FMODUnity
foreach (Platform newPlatform in assetPlatforms)
{
- Platform existingPlatform = FindPlatform(newPlatform.Identifier);
-
- if (existingPlatform != null)
- {
- // Duplicate platform; clean one of them up
- Platform platformToDestroy;
-
- if (newPlatform.Active && !existingPlatform.Active)
- {
- Platforms.Remove(existingPlatform.Identifier);
-
- platformToDestroy = existingPlatform;
- existingPlatform = null;
- }
- else
- {
- platformToDestroy = newPlatform;
- }
-
- Debug.LogWarningFormat("FMOD: Cleaning up duplicate platform: ID = {0}, name = '{1}', type = {2}",
- platformToDestroy.Identifier, platformToDestroy.DisplayName, platformToDestroy.GetType().Name);
-
- DestroyImmediate(platformToDestroy, true);
- }
-
- if (existingPlatform == null)
- {
- newPlatform.EnsurePropertiesAreValid();
- Platforms.Add(newPlatform.Identifier, newPlatform);
- }
+ newPlatform.EnsurePropertiesAreValid();
+ Platforms.Add(newPlatform.Identifier, newPlatform);
}
#if UNITY_EDITOR
This turned out to be a bug in the integration; it was fixed in 2.01.10.

Entity Framework is too slow when mapping up to 100k rows

I have at least 100,000 rows in a Job_Details table and I'm using Entity Framework to map the data.
This is the code:
public GetJobsResponse GetImportJobs()
{
GetJobsResponse getJobResponse = new GetJobsResponse();
List<JobBO> lstJobs = new List<JobBO>();
using (NSEXIM_V2Entities dbContext = new NSEXIM_V2Entities())
{
var lstJob = dbContext.Job_Details.ToList();
foreach (var dbJob in lstJob.Where(ie => ie.IMP_EXP == "I" && ie.Job_No != null))
{
JobBO job = MapBEJobforSearchObj(dbJob);
lstJobs.Add(job);
}
}
getJobResponse.Jobs = lstJobs;
return getJobResponse;
}
I found that this line takes about 2-3 minutes to execute:
var lstJob = dbContext.Job_Details.ToList();
How can I solve this issue?
To outline the performance issues with your example: (see inline comments)
public GetJobsResponse GetImportJobs()
{
GetJobsResponse getJobResponse = new GetJobsResponse();
List<JobBO> lstJobs = new List<JobBO>();
using (NSEXIM_V2Entities dbContext = new NSEXIM_V2Entities())
{
// Loads *ALL* entities into memory. This effectively takes all fields for all rows across from the database to your app server. (Even though you don't want it all)
var lstJob = dbContext.Job_Details.ToList();
// Filters from the data in memory.
foreach (var dbJob in lstJob.Where(ie => ie.IMP_EXP == "I" && ie.Job_No != null))
{
// Maps the entity to a DTO and adds it to the return collection.
JobBO job = MapBEJobforSearchObj(dbJob);
lstJobs.Add(job);
}
}
// Returns the DTOs.
getJobResponse.Jobs = lstJobs;
return getJobResponse;
}
First, pass your Where clause to EF so it is executed on the DB server, rather than loading all entities into memory:
public GetJobsResponse GetImportJobs()
{
GetJobsResponse getJobResponse = new GetJobsResponse();
using (NSEXIM_V2Entities dbContext = new NSEXIM_V2Entities())
{
// Will pass the Where expression to the DB server to be executed. Note: no .ToList() yet, so this remains IQueryable.
var jobs = dbContext.Job_Details.Where(ie => ie.IMP_EXP == "I" && ie.Job_No != null);
Next, use Select to load your DTOs. Typically these won't contain as much data as the main entity, and as long as you're working with IQueryable you can load related data as needed. Again, this will be sent to the DB server, so you cannot use functions like MapBEJobforSearchObj here, because the DB server does not know that function. You can Select into a simple DTO object, or into an anonymous type to pass to a dynamic mapper.
var dtos = jobs.Select(ie => new JobBO
{
JobId = ie.JobId,
// ... populate remaining DTO fields here.
}).ToList();
getJobResponse.Jobs = dtos;
}
return getJobResponse;
}
Moving the .ToList() to the end materializes the data directly into your JobBO DTOs/ViewModels, pulling just enough data from the server to populate the desired rows with only the desired fields.
In cases where you may have a large amount of data, you should also consider supporting server-side pagination where you pass a page # and page size, then utilize a .Skip() + .Take() to load a single page of entries at a time.
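A minimal sketch of that pagination idea, reusing the query above; pageNumber and pageSize are illustrative parameters, and the JobId mapping is just the placeholder carried over from the earlier snippet:
public GetJobsResponse GetImportJobs(int pageNumber, int pageSize)
{
    var getJobResponse = new GetJobsResponse();
    using (var dbContext = new NSEXIM_V2Entities())
    {
        getJobResponse.Jobs = dbContext.Job_Details
            .Where(ie => ie.IMP_EXP == "I" && ie.Job_No != null)
            .OrderBy(ie => ie.Job_No)          // Skip/Take requires a deterministic ordering.
            .Skip((pageNumber - 1) * pageSize) // Page numbers start at 1 in this sketch.
            .Take(pageSize)
            .Select(ie => new JobBO
            {
                JobId = ie.JobId,
                // ... populate remaining DTO fields here.
            })
            .ToList();                         // Only one page of rows crosses the wire.
    }
    return getJobResponse;
}
The caller would typically also want a total row count (a separate .Count() on the same filtered query) so the client can render paging controls.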

Bulk removal of Edges on Titan 1.0

I have a long list of edge IDs (about 12 billion) that I want to remove from my Titan graph (which is hosted on an HBase backend).
How can I do it quickly and efficiently?
I tried removing the edges via Gremlin, but that is too slow for that number of edges.
Is it possible to directly perform Delete commands on HBase? How can I do it? (How do I assemble the Key to delete?)
Thanks
After two days of research, I came up with a solution.
The main purpose: given a very large collection of string edge IDs, implement logic that removes them from the graph.
The implementation has to support removal of billions of edges, so it must be efficient in both memory and time.
Using Titan directly is ruled out, since Titan performs a lot of unnecessary instantiations -- generally, we don't want to load the edges, we just want to remove them from HBase.
/**
* Deletes the given edge IDs, splitting them into chunks of 100,000
* @param edgeIds Collection of edge IDs to delete
* @throws IOException
*/
public static void deleteEdges(Iterator<String> edgeIds) throws IOException {
IDManager idManager = new IDManager(NumberUtil.getPowerOf2(GraphDatabaseConfiguration.CLUSTER_MAX_PARTITIONS.getDefaultValue()));
byte[] columnFamilyName = "e".getBytes(); // 'e' is your edgestore column-family name
long deletionTimestamp = System.currentTimeMillis();
int chunkSize = 100000; // Will contact HBase only once per 100,000 delete records (=> 50,000 edges, since each edge is removed once as IN and once as OUT)
org.apache.hadoop.conf.Configuration config = new org.apache.hadoop.conf.Configuration();
config.set("hbase.zookeeper.quorum", "YOUR-ZOOKEEPER-HOSTNAME");
config.set("hbase.table", "YOUR-HBASE-TABLE");
List<Delete> deletions = Lists.newArrayListWithCapacity(chunkSize);
Connection connection = ConnectionFactory.createConnection(config);
Table table = connection.getTable(TableName.valueOf(config.get("hbase.table")));
Iterators.partition(edgeIds, chunkSize)
.forEachRemaining(edgeIdsChunk -> deleteEdgesChunk(edgeIdsChunk, deletions, table, idManager,
columnFamilyName, deletionTimestamp));
}
/**
* Given a collection of edge IDs, and a list of Delete object (that is cleared on entrance),
* creates two Delete objects for each edge (one for IN and one for OUT),
* and deletes it via the given Table instance
*/
public static void deleteEdgesChunk(List<String> edgeIds, List<Delete> deletions, Table table, IDManager idManager,
byte[] columnFamilyName, long deletionTimestamp) {
deletions.clear();
for (String edgeId : edgeIds)
{
RelationIdentifier identifier = RelationIdentifier.parse(edgeId);
deletions.add(createEdgeDelete(idManager, columnFamilyName, deletionTimestamp, identifier.getRelationId(),
identifier.getTypeId(), identifier.getInVertexId(), identifier.getOutVertexId(),
IDHandler.DirectionID.EDGE_IN_DIR));
deletions.add(createEdgeDelete(idManager, columnFamilyName, deletionTimestamp, identifier.getRelationId(),
identifier.getTypeId(), identifier.getOutVertexId(), identifier.getInVertexId(),
IDHandler.DirectionID.EDGE_OUT_DIR));
}
try {
table.delete(deletions);
}
catch (IOException e)
{
logger.error("Failed to delete a chunk due to inner exception: " + e);
}
}
/**
* Creates an HBase Delete object for a specific edge
* @return HBase Delete object to be used against HBase
*/
private static Delete createEdgeDelete(IDManager idManager, byte[] columnFamilyName, long deletionTimestamp,
long relationId, long typeId, long vertexId, long otherVertexId,
IDHandler.DirectionID directionID) {
byte[] vertexKey = idManager.getKey(vertexId).getBytes(0, 8); // Size of a long
byte[] edgeQualifier = makeQualifier(relationId, otherVertexId, directionID, typeId);
return new Delete(vertexKey)
.addColumn(columnFamilyName, edgeQualifier, deletionTimestamp);
}
/**
* Cell Qualifier for a specific edge
*/
private static byte[] makeQualifier(long relationId, long otherVertexId, IDHandler.DirectionID directionID, long typeId) {
WriteBuffer out = new WriteByteBuffer(32); // Default length of array is 32, feel free to increase
IDHandler.writeRelationType(out, typeId, directionID, false);
VariableLong.writePositiveBackward(out, otherVertexId);
VariableLong.writePositiveBackward(out, relationId);
return out.getStaticBuffer().getBytes(0, out.getPosition());
}
Keep in mind that I do not handle system types -- I assume the given edge IDs are user edges.
Using this implementation I was able to remove 20 million edges in about 2 minutes.

Ehcache 2.4.2 does not write all the values in the element to a file

I'm trying to do a simple test with Ehcache: put an element into the cache, flush, and shut down the cache. Then reload all the beans with Spring (which also initializes the cacheManager), do a cache.get, and retrieve the previously written values.
The Ehcache Element's value is a serializable class called DOM, which contains a ConcurrentHashMap field.
I create 3 DOM instances: d1, d2, d3
d1 (has a map with 3 values: t1, t2, t3)
d2 (has a map with 2 values: x1, x2)
d3 (has a map with 2 values: s1, s2)
I call (cacheManager and cache are created with Spring):
cache.put(new Element(1,d1))
cache.put(new Element(2,d2))
cache.put(new Element(3,d3))
cache.flush();
cacheManager.shutdown();
cache = null
cacheManager = null
I call to load spring application context (which creates cacheManager and cache)
I call:
actualD1 = cache.get(1)
actualD2 = cache.get(2)
actualD3 = cache.get(3)
I receive the DOM objects into the actualD1, actualD2 and actualD3 variables
But the problem is that now each of them has only one value
actualD1 (has a map with 1 value: t1)
actualD2 (has a map with 1 value: x1)
actualD3 (has a map with 1 value: s1)
What could be the problem?
Here is my ehcache.xml file:
<defaultCache
maxElementsInMemory="1000000"
eternal="false"
diskSpoolBufferSizeMB="100"
overflowToDisk="true"
clearOnFlush="false"
copyOnRead="false"
copyOnWrite="false"
diskExpiryThreadIntervalSeconds="300"
diskPersistent="true">
</defaultCache>
Here is how I create the cacheManager (this method is called at startup from Spring):
protected def checkAndCreateCacheManagerIfNeeded() =
{
if (cacheManager == null)
{
synchronized
{
if (cacheManager == null)
{
cacheManager = CacheManager.create(ehCacheConfigFile);
}
}
};
};
The following code creates the cache:
protected def getOrCreateCache(cacheName : String) =
{
checkAndCreateCacheManagerIfNeeded();
var cache = cacheManager.getEhcache(cacheName);
if (cache == null)
{
cacheManager.synchronized
{
cache = cacheManager.getEhcache(cacheName);
if (cache == null)
{
cache = cacheManager.addCacheIfAbsent(cacheName);
}
}
};
cache;
};
The problem was adding t1, t2, t3 to d1 without putting the updated d1 back into the cache. After each addition of a value to the map, one must call:
cache.put(d1Element)
While this isn't necessarily required, since it depends on your setup, it is indeed considered best practice with Ehcache. In this particular setup, your elements are being serialized to disk (though not necessarily at the time of the put). As a result, once serialized, any subsequent change to the object graph won't be reflected in the DiskStore.
This would have worked with on-heap-only storage, but it is still recommended that you put the element back, which lets you change the cache configuration in the future without any code changes.

StreamInsight - Problem defining a correct window

I started using StreamInsight and I'm using it as a WCF service.
I've already sought help in "A Hitchhiker's Guide to Microsoft StreamInsight Queries" and tried the examples there as well as the examples on CodePlex.
My problem is this:
My event producer feeds the adapter with AlertEvents:
public sealed class AlertEvent
{
public DateTime Date { get; set; }
public long IDGroup { get; set; }
public bool IsToNormalize { get; set; }
public bool IsError { get; set; }
}
When an AlertEvent has IsError = false, the flag IsToNormalize is true.
The behaviour I'm trying to achieve: when I receive an event with IsError, I want to see whether any AlertEvent with IsToNormalize arrives in the next 'x' minutes. I then send to the output the IsError AlertEvent that started the search.
What I've done is, when I receive an input that matches the filter, I extend its lifetime by 'x' minutes and create a TumblingWindow to see whether another AlertEvent with the other flag arrives in that period (using an extension method to iterate through all payloads in the window).
var stream = from item in input
where item.IsError
group item by item.IDGroup into eachGroup
from window in eachGroup.AlterEventDuration(e => TimeSpan.FromMinutes(1.5)).TumblingWindow(TimeSpan.FromSeconds(15), HoppingWindowOutputPolicy.ClipToWindowEnd)
select new
{
Id = eachGroup.Key,
NormEvents = window.HasNormalizationEvents(),
Count = window.Count()
};
Then, to get the AlertEvent that triggered the TumblingWindow, I've made a join with the original input.
var resultStream = from e1 in stream
join e2 in input
on e1.Id equals e2.IDGroup
where e1.NormEvents != 0
select e2;
This isn't working at all... :/ Any ideas to help solve this issue?
Another doubt I have: will a new window with a new start date be created for each input that passes the filter, or will only one tumbling window be created?
Thanks.
Try this:
// Move all error events
// to the point where the potential timeout would occur.
var timedOut = input
.Where(e => e.IsError == true)
.ShiftEventTime(e => e.StartTime + TimeSpan.FromMinutes(5));
// Extend all events IsToNormalize by the timeout.
var following = input
.Where(e => e.IsToNormalize == true)
.AlterEventDuration(e => TimeSpan.FromMinutes(5));
// Mask away the potential timeout events by the extended events.
// - If IsToNormalize did not occur within the timeout, then the shifted error event
// will remain unchanged by the left-anti-semi-join and represent
// the desired output.
var result = from t in timedOut
where (from c in following
where t.IdGroup == c.IdGroup
select c).IsEmpty()
select t; // or some projection on t