ParaView crashes when loading a saved state

I have the results of two CFD simulations, which I visualize in ParaView; call them case A and case B. Since I had to analyze the same parameters for both cases, I gave the states of case A and case B the same names, but saved them in entirely different folders. There are about four states for each case (with the same names). I was able to load the states without any problem until yesterday. Today, when I tried to load the very same state that loaded fine yesterday, ParaView crashed. What could be the reason for this?
I thought the problem might have occurred because I had used the same names for the states of both cases, so I even tried loading them after renaming them, but they still would not load. I also reinstalled ParaView from scratch, yet it still crashed when I tried to load the state. The version I am using is 5.7.0.

Related

AnyLogic model stops without a message

I have created a model that generates a product which is cycled through a list of machines. Technically the product list is for a single-day run, but I run the model for long durations to stabilise the model output.
The model runs properly for months, but at around 20 months it suddenly stops without any error message. I do not know how to debug this, since I do not know where the error comes from.
Has anyone had a similar encounter and could advise on how to approach this issue? Could it be an issue of memory overload?
Without more details it's hard to pinpoint the exact reason, but this generally happens when the run is stuck in an infinite while loop or something similar. So check all your loops where such a scenario is possible; it is likely that one of them (or more) is causing the issue.
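As an illustration, here is a minimal, self-contained Java sketch of the kind of defensive guard you can put around any suspect loop (AnyLogic function bodies are plain Java, so the same pattern can be used there); the loop condition, the limit, and all names are hypothetical placeholders, not part of any AnyLogic API:

public class LoopGuardExample {
    public static void main(String[] args) {
        // Hypothetical exit condition that, due to a modelling bug, never becomes true.
        boolean productFinished = false;

        long iterations = 0;
        final long MAX_ITERATIONS = 1_000_000L; // generous upper bound for one product

        while (!productFinished) {
            // ... normal loop body, e.g. pick the next machine for the product ...
            iterations++;
            if (iterations > MAX_ITERATIONS) {
                // Fail loudly instead of hanging silently, so the stuck loop becomes visible.
                throw new IllegalStateException(
                        "Loop exceeded " + MAX_ITERATIONS + " iterations - probably stuck");
            }
        }
    }
}

Once such a guard fires, you know exactly which loop never terminates and can inspect the state that prevents its exit condition from ever becoming true.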

Core Data, CloudKit - Failed to find matching objectIDs for CKRecordID

I am using CloudKit to sync my app across devices.
At first everything seems to work as expected, but after a while CloudKit seems to get caught in an endless loop and the debug console throws tons of these messages (several thousand in a row):
CoreData: debug: CoreData+CloudKit: -[PFCloudKitSerializer
applyUpdatedRecords:deletedRecordIDs:toStore:inManagedObjectContext:onlyUpdatingAttributes:andRelationships:madeChanges:error:]_block_invoke(1018):
Failed to find matching objectIDs for <CKRecordID: 0x60000330c000;
recordName=1E0972A7-D9DD-44A7-88F9-3AD13B32A330,
zoneID=com.apple.coredata.cloudkit.zone:defaultOwner> /
<CKRecordID: 0x60000330c020;
recordName=EE02B981-E54D-486B-95A1-AC0839671C27,
zoneID=com.apple.coredata.cloudkit.zone:defaultOwner> in pending
relationship: 0xe92e2f9c5a6d27e2
x-coredata://75AFDFFD-8E35-4B9F-AA61-C477073B435B/NSCKImportPendingRelationship/p8626
I guess the most important part is
Failed to find matching objectIDs for <CKRecordID: 0x60000330c000; ...
It's just the standard CloudKit implementation without any special custom code, so I have no idea where to start investigating.
Is this normal, expected behaviour?
I feel like this is slowing down my CloudKit sync quite a lot.
I just found the issue on my side. During further investigation I realized that these messages came together with other messages referring to one specific entity. I preload Core Data from a JSON file with some data that changes from time to time.
For development purposes, each time the app launches I update my preloaded data from the JSON file.
It turned out that I created new objects during that update to run a comparison and forgot to delete the unnecessary objects right afterwards.
Since that update process creates several hundred objects, this adds up quite fast. They were floating around in Core Data without any purpose, and CloudKit had to sync them.
However, in the end it is still quite a small amount of data, and why it results in those cryptic debug messages and hours of syncing is still a mystery to me.

Problem displaying GeoServer layer when resource is updated with the REST API

I am having a weird issue when using the GeoServer REST API to update a NetCDF resource of a coverage store layer. The resource is a NetCDF containing one 3D (lon, lat, time) variable; however, the time dimension has length 1. My code runs inside a Docker container and uses curl in a .sh file to run the API commands.
I must stress that the problem occurs only once in a while, maybe 10% of the time, maybe less.
When the problem occurs, the update of the store seems to go wrong and the layer cannot be displayed. Looking at the GetCapabilities document, one of the weird things is that the time dimension is not the right date but is instead equal to 1970-01-01T00:00:00.000Z, which is the reference date used in the NetCDF for the time dimension. Also, no problems are reported in the logs.
I do know that the problem is not with the file, and probably not with the upload of the file. Indeed, when the problem occurs, I can successfully create a store and a layer with the same resource and the same parameters as the layer that is not working.
I have tried multiple things via the API to solve this issue:
Resetting the resource cache. This sometimes works, but not always.
Deleting the layer and the store and recreating them every time I need to update the resource.
Deleting the resource, the layer and the store and recreating everything on each resource update.
Nothing seems to get rid of the problem permanently. Has anyone experienced the same kind of behavior? This is not the first time I have used GeoServer's API in a data harvester, but it is the first time I have had this problem!
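For reference, here is a rough sketch of the two REST calls involved (uploading a new NetCDF into an existing coverage store, then resetting GeoServer's resource caches), written here in Java 11+ rather than the curl/.sh actually used; the base URL, credentials, workspace and store names, file path, and content type are placeholders, and the exact endpoints can vary with the GeoServer version and the NetCDF plugin:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.file.Path;
import java.util.Base64;

public class GeoServerNetcdfUpdate {
    public static void main(String[] args) throws Exception {
        String base = "http://localhost:8080/geoserver/rest";           // placeholder URL
        String auth = "Basic " + Base64.getEncoder()
                .encodeToString("admin:geoserver".getBytes());           // placeholder credentials
        HttpClient client = HttpClient.newHttpClient();

        // 1. Upload the new NetCDF into the existing coverage store
        //    (the Java equivalent of: curl -u ... -XPUT --data-binary @data.nc ...).
        HttpRequest upload = HttpRequest.newBuilder()
                .uri(URI.create(base + "/workspaces/my_ws/coveragestores/my_store/file.netcdf"))
                .header("Authorization", auth)
                .header("Content-type", "application/x-netcdf")
                .PUT(HttpRequest.BodyPublishers.ofFile(Path.of("data.nc")))
                .build();
        System.out.println("upload: "
                + client.send(upload, HttpResponse.BodyHandlers.ofString()).statusCode());

        // 2. Reset GeoServer's resource caches so the updated store is reread
        //    (the Java equivalent of: curl -u ... -XPOST .../rest/reset).
        HttpRequest reset = HttpRequest.newBuilder()
                .uri(URI.create(base + "/reset"))
                .header("Authorization", auth)
                .POST(HttpRequest.BodyPublishers.noBody())
                .build();
        System.out.println("reset: "
                + client.send(reset, HttpResponse.BodyHandlers.ofString()).statusCode());
    }
}

Because the failure is intermittent, logging the HTTP status of each call (and retrying or resetting only when it is not 2xx) at least makes it visible whether the update itself went wrong.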
EDIT
I also tried to make the NetCDF file as simple as I could, by removing the time dimension.
So now the NetCDF file only has 4 variables: lon, lat, the gridded variable, and a variable called crs that has dimension 0 and is therefore empty (I left it there for now since it comes from the outside source file).
But the same kind of issue still occurs, and again only once in a while. However, when it occurs, something suspicious does get caught in GeoServer's log:
2022-06-08 16:01:28,267 WARN [operation.projection] - Possible use of "Popular Visualisation Pseudo Mercator" projection outside its valid area.
Longitude 2147483287°00.0'W is out of range (±180°).
But again, when this happens, I can usually clear the resource cache and the layer becomes visible again.
So I still don't know what is happening. Could it be the empty crs variable that sometimes creates problems?
Thanks a lot for your help!

SSRS Error: "One or more parameters required to run the report have not been specified. (rsParametersNotSpecified)"

Okay, there are similar questions to this, but this is NOT a duplicate. This error seems to come up when you have parameters referencing a dataset that is shared. Deleting the report from the server and redeploying does not fix it in my case.
I am developing in VS 2010 Professional with Business Intelligence Development Studio (BIDS), which is under source control with Team Foundation Server, and deploying to a 2008 R2 server, which I thought might be the issue. The workaround is to change the dataset references to embedded datasets instead, which stops this error dead in its tracks, but that is pretty poor in my opinion and I would ultimately like this to work with shared datasets.
Things I have tried:
1. Ensured the naming of the dataset matches the reference, e.g. the dataset name is ClientQuery and the shared dataset is ClientQuery.
2. Ensured the naming on the server matches the references in step 1.
3. Confirmed that this is what breaks it by removing the reference to the shared dataset; the report then works right away.
4. Ensured that the shared dataset is not enabling some type of caching on the server.
5. I had a filter on a second shared dataset limiting scope; I removed that and there was still an error.
6. Removed all parameters and added only a single shared dataset; it gives the error right away.
7. Added the "Allow empty values" option to the parameter bindings, and did the same with nulls.
8. Recreated EVERYTHING in a whole brand new RDL file and copied and pasted only the elements on the body of the report, but explicitly created the parameters and the datasets, and this STILL HAPPENED.
9. UPDATED - I have done the old "delete the RDL and hope redeploying fixes it" routine, which I found suggested a lot online. That does not work in this case. It is almost as if this reference in the RDL:
<DataSet Name="ClientQuery">
  <SharedDataSet>
    <SharedDataSetReference>ClientQuery</SharedDataSetReference>
  </SharedDataSet>
  <Fields>
    <Field Name="CUSTOMER_ID">
      <DataField>CUSTOMER_ID</DataField>
      <rd:TypeName>System.String</rd:TypeName>
    </Field>
    <Field Name="CUSTOMER_NAME">
      <DataField>CUSTOMER_NAME</DataField>
      <rd:TypeName>System.String</rd:TypeName>
    </Field>
  </Fields>
</DataSet>
It appears that somehow the mere mention of this reference causes havoc. I examined the bin (environment) directory under my project (I deploy to multiple environments and have set up QA, UAT, PROD, etc. under solution configurations); each time, the RDL is updated as it should be and contains the changes I describe. I think 'rebuild' is a large part of the issue when people see their report files not updating on a server; in my case a rebuild usually gets the updates into the RDL, versus just hitting deploy first.
The hard part is that, throughout all of these changes, the report works seamlessly every time in BIDS. So the error is entirely about what the report server believes the RDL data to represent.
Any help is much appreciated. I would rate myself advanced at SSRS, but this one has me stumped as to what the error is referencing that it is not getting.
I know this is an old question, but I just ran across this and was able to resolve my issue, so an updated option seems warranted for others struggling with it. My issue had to do with the parameter settings in the Shared Dataset properties.
Specifically, make sure that you check the "Allows null value" option where needed. This instantly resolved my issue, where a dataset would not work when pointing to a shared dataset but would when the dataset was embedded.
Okay, so the answer Jeroen and others proposed is half right. My issue was that my source code was under an older SVN source control, was deployed to an SSRS 2008 server, and then we migrated the code base to TFS source control. The issue appears to be that the shared datasets were believed to have different identifiers than they actually had. The simple workaround, IN ADDITION to deleting the files, is to redeploy the shared datasets as well. In my case I went into my project settings and deployed them to an entirely different location under the report structure, to keep them in the same area: Reports/Datasets instead of just Datasets. This seems to clear up the issue in my case, so I believe this was just a perfect storm. When in doubt with SSRS, just delete everything and start from the ground up, I guess.

Entity Framework Code First - Model change breaks Seed

We've been using Entity Framework Code First 5 for a little while now, without major issue.
I've recently discovered that ANY change I make to my model (such as adding or removing a field) means that the Seed method no longer runs, leaving my database in an invalid state.
If I reverse the change, the seed method runs fine.
I have tried making changes to varying parts of my model, so it's not the specific change which is relevant.
Does anyone know how I can (a) debug what the specific issue is, or (b) has anyone come across this themselves and knows how to fix it?
UPDATE: After the model change, no matter how many times I query the database, the Seed method does not run. However, I have found that if I manually run IISRESET and then re-execute the web service that issues the query, the Seed does run! Does anyone know why this would be the case, and why I suddenly need to reset IIS between the database initialization and the Seed executing?
Many thanks Steve