Reversing a legacy DB in MDriven, undocumented error

What is the correct action to take to resolve the error ""? It occurred after "pressing play" with persistence towards a legacy DB (read-only access).

Tough one - very little to go on. Maybe your association end requires 2 attributes but the primary key of the target only has 1 attribute marked as key.
Try to downscale your model until it works - then slowly add things back. See if that process helps you to understand the error message better.

Related

More information on the Not Enough Spatial Data problem

After many attempts to make a version of SpectatorView work using UNet, I was able to successfully communicate between client and server using Azure Spatial Anchors to share anchor information. Many errors occurred along the way, including an unknown error that appeared to be resolved by deleting the MRTK 2.2.0 library. That has already been sorted out, but now a problem occurs at random in CloudManager.CreateAnchorAsync, which reports: "Not enough Neighborhood Spatial Data was available to complete the desired Create operation."
The problem does not happen consistently but randomly, which makes it impossible to find a pattern and reason about a solution.
Note that the CloudManager.SessionStatus.RecommendedForCreateProgress value is above 1 before the create function is called.
Could you give me more information about this problem, and what can I do to work around it?

Solution to prevent a record from being inserted in Salesforce

I need to stop a record from being inserted when a certain condition is met. I did this in the beforeInsert of my trigger with the help of addError().
I have an issue with this solution: apex addError - remove default error message.
I want to remove this default error message and keep only my customized message. I also want to make it bold and a bit bigger. I am now convinced that these things are not possible with addError().
Is there any alternative solution to this? I mean, to stop this record from being inserted?
The object in question is ObjectA, which has a lookup to ObjectB. This ObjectB lookup field on ObjectA has to be unique: no two ObjectA records can reference the same ObjectB record. That is when I need to stop the insertion.
Can someone help me with this?
"bold and bit bigger too"
Possible only if you have a custom UI (Visualforce / Aura Component / Lightning Web Component...). I wouldn't spend too much time on this. Focus on getting your logic right and making sure it also runs via the API (so that not only manual inserts but, for example, Data Loader loads are protected too).
If addError() doesn't do what you need, then consider adding a helper Text(18) field. Mark it unique and use a before insert, before update trigger (or workflow) to populate it with the value from that lookup.
Uniqueness should be handled by the database. Are you really ready to write that before insert trigger perfectly? What about update? What about undelete (restore from the recycle bin)? What if I want to load two identical records at the same time? That trigger starts to look a bit more complex. What if the user is not allowed to see the record with which the clash should be detected (sharing rules, etc.)? Your scenario sounds like the uniqueness should be "global", but you need a really good reason to write "without sharing" code in the trigger handler.
It's all certainly possible, but it's so much easier to just use a unique field and call it a day, and tell the business to deal with the not necessarily friendly error message.

Lagom | Return Values from read side processor

We are using Lagom to develop our set of microservices. The trick here is that although we are using event sourcing and persisting events into Cassandra, we also have to store the data in a graph DB, since that is what will serve most of the queries for our use case.
As per Lagom's documentation, all insertion into the graph database (or any other database) has to be done in a ReadSideProcessor after the command handler persists the events into Cassandra, following the philosophy of CQRS.
Now here is the problem we are facing. We believe that the ReadSideProcessor is a listener that gets triggered after the events are generated and persisted. What we want is to return a response from the ReadSideProcessor back to the ServiceImpl. For example, when a user is added to the system, the unique ID generated by the graph has to be returned as one of the response headers. How can that be achieved in Lagom, since the response is constructed from setCommandHandler and not the ReadSideProcessor?
Also, if there is any error on the graph side, the API should notify the client that the request has failed; but again, exceptions occurring in the ReadSideProcessor are not propagated to either the PersistentEntity or the ServiceImpl class. How can that be achieved as well?
Any help is much appreciated.
The read side processor is not a listener that is attached to the command - it is actually completely disconnected from the persistent entity, it may be running on a different node, at a different time, perhaps even years in the future if you add a new read side processor that first comes up to speed with all the old events in history. If the read side processor were connected synchronously to the command, then it would not be CQRS, there would not be segregation between the command and the query side.
Read side processors essentially poll the database for new events, processing them as they detect them. You can add a new read side processor at any time, and it will get all events from all of history, not just the new ones that are added, this is one of the great things about event sourcing, you don't need to anticipate all your query needs from the start, you can add them as the query need comes.
To further explain why you don't want a connection between the two - what happens if the event persist succeeds, but the update on the graph db fails? Perhaps the graph db has crashed. Does the command have to retry? Does the event have to be deleted? What happens if the node doing the update itself crashes before it has an opportunity to fix the problem? Now your read side is in an inconsistent state with your entities. Connecting them leads to inconsistency in many failure scenarios - it's like when you update your address with a utility company, but your bills still go to the old address; you contact them and they say "yes, your new address is updated in our system", yet the bills keep going to the old address. That's the sort of terrible user experience you are signing your users up for if you try to connect your read side and write side together. Disconnecting them allows Lagom to ensure consistency between the events you have emitted on the write side and the consumption of them on the read side.
So to address your specific concerns: ID generation should be done on the write side, or, if a subsequent ID is generated on the read side, it should also provide a way of mapping the IDs on the write side to the read side ID. And as for handling errors on the read side - all validation should be done on the write side - the write side should ensure that it never emits an event that is invalid.
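To make the first point concrete, here is a minimal sketch of generating the ID on the write side with Lagom's Java PersistentEntity API and returning it in the command reply. All class names (UserEntity, AddUser, UserAdded, and so on) are made up for illustration, and serialization concerns (Jsonable) are omitted:

```java
import com.lightbend.lagom.javadsl.persistence.PersistentEntity;

import java.util.Optional;
import java.util.UUID;

// Sketch only: in a real service the command/event classes would also
// implement Jsonable (or another serializer) for cluster messaging.
public class UserEntity extends PersistentEntity<UserEntity.Cmd, UserEntity.Evt, UserEntity.State> {

  public interface Cmd {}
  public static final class AddUser implements Cmd, PersistentEntity.ReplyType<String> {
    public final String name;
    public AddUser(String name) { this.name = name; }
  }

  public interface Evt {}
  public static final class UserAdded implements Evt {
    public final String userId;
    public final String name;
    public UserAdded(String userId, String name) { this.userId = userId; this.name = name; }
  }

  public static final class State {
    public static final State EMPTY = new State(null);
    public final String userId;
    public State(String userId) { this.userId = userId; }
  }

  @Override
  public Behavior initialBehavior(Optional<State> snapshot) {
    BehaviorBuilder b = newBehaviorBuilder(snapshot.orElse(State.EMPTY));

    b.setCommandHandler(AddUser.class, (cmd, ctx) -> {
      // The ID is generated here, on the write side, so it can be returned
      // in the command reply without waiting for any read side processing.
      String userId = UUID.randomUUID().toString();
      return ctx.thenPersist(new UserAdded(userId, cmd.name), evt -> ctx.reply(userId));
    });

    b.setEventHandler(UserAdded.class, evt -> new State(evt.userId));

    return b.build();
  }
}
```

The ServiceImpl can then return that userId (for example as a response header), and the read side processor can later store a mapping from this write-side ID to whatever ID the graph database assigns.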
Now if the read side processor encounters something that is invalid, then it has two options. One option is it could fail. In many cases, this is a good option, since if something is invalid or inconsistent, then it's likely that either you have a bug or some form of corruption. What you don't want to do is continue processing as if everything is happy, since that might make the data corruption or inconsistency even worse. Instead the read side processor stops, your monitoring should then detect the error, and you can go in and work out either what the bug is or fix the corruption. Of course, there are downsides to doing this, your read side will start lagging behind the write side while it's unable to process new events. But that's also an advantage of CQRS - the write side is able to continue working, continue enforcing consistency, etc, the failure is just isolated to the read side, and only in updating the read side. Instead of your whole system going down and refusing to accept new requests due to this bug, it's isolated to just where the problem is.
The other option that the read side has is it can store the error somewhere - eg, store the event in a dead letter table, or raise some sort of trouble ticket, and then continue processing. This way, you can go and fix the event after the fact. This ensures greater availability, but does come at the risk that if that event that it failed to process was important to the processing of subsequent events, you've potentially just got yourself into a bigger mess.
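As a rough, Lagom-agnostic sketch of those two options (GraphDbClient, DeadLetterStore and UserAdded below are hypothetical stand-ins, not part of the Lagom API), the body of a read-side event handler could look like this:

```java
// Conceptual sketch of a read-side event handler's error handling strategy.
public class UserReadSideHandler {

    private final GraphDbClient graph;
    private final DeadLetterStore deadLetters;
    private final boolean failFast; // option 1 = fail and stop, option 2 = dead-letter and continue

    public UserReadSideHandler(GraphDbClient graph, DeadLetterStore deadLetters, boolean failFast) {
        this.graph = graph;
        this.deadLetters = deadLetters;
        this.failFast = failFast;
    }

    public void handle(UserAdded event) {
        try {
            // Project the event into the graph database.
            graph.upsertUser(event.userId, event.name);
        } catch (RuntimeException e) {
            if (failFast) {
                // Option 1: rethrow. The processor stops at this offset and retries
                // later; monitoring should pick up the growing read-side lag.
                throw e;
            }
            // Option 2: park the event for later repair and keep processing.
            deadLetters.save(event, e);
        }
    }

    // Hypothetical collaborators, defined only to keep the sketch self-contained.
    public interface GraphDbClient { void upsertUser(String id, String name); }
    public interface DeadLetterStore { void save(Object event, Exception cause); }
    public static final class UserAdded {
        public final String userId;
        public final String name;
        public UserAdded(String userId, String name) { this.userId = userId; this.name = name; }
    }
}
```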
Now this does introduce specific constraints on what you can and can't do, but I can't really anticipate those without specific knowledge of your use case. A common constraint is set validation - for example, how do you ensure that email addresses are unique to a single user in your system? Greg Young (the CQRS guy) wrote this blog post about those types of problems:
http://codebetter.com/gregyoung/2010/08/12/eventual-consistency-and-set-validation/

Hazelcast 3.3 - EntryProcessor is accessing "non-local" keys

I'm using Hazelcast 3.3.
One member writes entries to an IMap and calls map.executeOnEntries(myEntryProcessor). The task of the EntryProcessor is just to print the entries to the console. However, the members (3 others plus the first one = 4 members) seem to print overlapping sets of entries.
My understanding was that EntryProcessors only get the entries corresponding to localKeySet(). However, it appears that's not the case.
Could someone please explain this behavior?
Your reasoning is correct. An EntryProcessor should only touch local keys.
What are you using as the key? Hazelcast uses the serialized version of the key as the actual key, so perhaps you have two different key instances that produce the same toString but whose binary content is different.
I have shot myself in the foot with e.g. a HashMap being part of the key; this can lead to different binary content even though the actual content is the same, and then you get strange behavior.
If you are using e.g. Long or String as the key, then I can't explain the behavior you are seeing. How difficult is it to reproduce?
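For reference, a minimal sketch of the expected behavior with the Hazelcast 3.x Java API (the map name and key/value types are made up): each member that owns a partition processes, and therefore prints, only its own keys.

```java
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.IMap;
import com.hazelcast.map.AbstractEntryProcessor;

import java.util.Map;

public class PrintEntryProcessor extends AbstractEntryProcessor<String, String> {

    public PrintEntryProcessor() {
        // Pass false so the processor is NOT also executed on backup replicas;
        // otherwise backup owners would print the same entries again.
        super(false);
    }

    @Override
    public Object process(Map.Entry<String, String> entry) {
        // Runs on the member that owns the entry's partition.
        System.out.println(entry.getKey() + " -> " + entry.getValue());
        return null; // nothing to return to the caller
    }

    public static void main(String[] args) {
        HazelcastInstance hz = Hazelcast.newHazelcastInstance();
        IMap<String, String> map = hz.getMap("demo");
        map.put("k1", "v1");
        map.put("k2", "v2");
        // The processor is sent to every partition owner; each member only
        // processes (prints) the keys it owns.
        map.executeOnEntries(new PrintEntryProcessor());
    }
}
```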
Found the issue. The problem was not with the EntryProcessors. The code that was writing data to the distributed IMap was running on more members than intended.
So, in essence, a process (launched through IExecutorService) was running on multiple instances and publishing overlapping/duplicate sets of data. The EntryProcessor was working correctly.

How to initialize EclipseLink connection

Sorry if my question is quite simple, but I really couldn't find an answer by googling. I have a project using JPA 2.0 (EclipseLink); it's working fine, but is there a way to initialize the database connection up front?
Currently the connection is established whenever the user tries to access any module that runs a query, which is quite annoying because connecting can take a few seconds and the app freezes for a moment while it connects.
I could run some random query in the main method to "turn it on", but that's an unnecessary query and not the solution I want to use.
Thanks in advance!
The problem is that the deployment process is lazy. This avoids the cost of initializing and connecting to unused/unneeded persistence units, but it means everything in a persistence unit is processed the very first time it is accessed.
This can be configured on a persistence unit by using the "eclipselink.deploy-on-startup" property:
http://www.eclipse.org/eclipselink/documentation/2.4/jpa/extensions/p_deploy_on_startup.htm
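For example, in a plain Java SE setup the property can be passed programmatically when the EntityManagerFactory is created (the persistence-unit name "my-pu" is a placeholder; the property can equally be set in persistence.xml):

```java
import java.util.HashMap;
import java.util.Map;

import javax.persistence.EntityManagerFactory;
import javax.persistence.Persistence;

public class Bootstrap {
    public static void main(String[] args) {
        // "my-pu" is a placeholder for your persistence-unit name.
        Map<String, String> props = new HashMap<>();
        props.put("eclipselink.deploy-on-startup", "true");

        // With deploy-on-startup=true, creating the factory processes the
        // metadata and connects to the database right away, instead of
        // deferring that work until the first query.
        EntityManagerFactory emf =
                Persistence.createEntityManagerFactory("my-pu", props);
    }
}
```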
Not sure if this is what you are looking for, but I found the property eclipselink.jdbc.exclusive-connection.is-lazy, which defaults to true.
According to the Javadoc, this "property specifies when write connection is acquired lazily".