Problem displaying a GeoServer layer when its resource is updated with the REST API

I am having a weird issue when using the GeoServer REST API to update a NetCDF resource of a coverage store layer. The resource is a NetCDF containing one 3D (lon, lat, time) variable. However, the time dimension is only of length 1. My code runs within a Docker container and uses curl in a .sh file to run the API commands.
I must stress that the problem occurs only once in a while, maybe 10% of the time, maybe less.
When the problem occurs, the update of the store seems to go wrong and the layer cannot be displayed. Looking at the GetCapabilities, one of the odd things is that the time dimension does not show the right date, but is instead equal to 1970-01-01T00:00:00.000Z, which is the reference date used for the time dimension in the NetCDF. Also, no problems are detected in the logs.
I do know that the problem is not with the file, and probably not with the upload of the file. Indeed, when the problem occurs, I can successfully create a store and a layer with the same resource and the same parameters as the layer that is not working.
I have tried multiple things via the API to solve this issue:
Resetting the resource cache. It sometimes works, but not always.
Deleting the layer and store and recreating them every time I need to update the resource.
Deleting the resource, layer and store and recreating everything whenever the resource is updated.
Nothing seems to get rid of the problem permanently. Has anyone experienced the same kind of behavior? This is not the first time I have used GeoServer's API in a data harvester, but it is the first time I have had this problem!
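For reference, the update and cache-reset calls in my script boil down to roughly the following. Host, credentials, workspace, store and file name are placeholders, and the exact upload path and content type may differ depending on your GeoServer version and the NetCDF plugin:
# Replace the NetCDF backing an existing coverage store
curl -u admin:geoserver -XPUT \
  -H "Content-type: application/x-netcdf" \
  --data-binary @/data/latest.nc \
  "http://localhost:8080/geoserver/rest/workspaces/my_ws/coveragestores/my_store/file.netcdf"

# Reset GeoServer's resource caches (this is the call that sometimes fixes the broken layer)
curl -u admin:geoserver -XPOST "http://localhost:8080/geoserver/rest/reset"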
EDIT
I also tried to make the NetCDF file as simple as I could, by removing the time dimension.
So now the NetCDF file only has 4 variables: lon, lat, the gridded variable, and a variable called crs that has zero dimensions and is therefore empty (I left it there for now since it comes from the outside source file).
But then again, the same kind of issue occurs, and again only once in a while. However, when it occurs, there does seem to be something caught in GeoServer's log:
2022-06-08 16:01:28,267 WARN [operation.projection] - Possible use of "Popular Visualisation Pseudo Mercator" projection outside its valid area.
Longitude 2147483287°00.0'W is out of range (±180°).
But again, when this happens, I can usually clear the resource cache and the layer becomes visible again.
So I still don't know what is happening. Could it be the empty crs variable that sometimes creates problems?
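(If the crs variable is a suspect, one way to test would be to strip the file down further with the NCO tools. For illustration only, the commands are roughly the following; I am not claiming these are exactly the ones I used:)
# Collapse the degenerate length-1 time dimension (averaging over a single value is a no-op)
ncwa -a time input.nc no_time.nc
# Drop the empty crs variable as well
ncks -x -v crs no_time.nc simplified.nc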
Thanks a lot for your help!

Related

Delete Time series in prometheus without API

I am monitoring an instance and changed its target IP. Now when I graph it in Grafana, there are two lines (with different colors), with the tail of the first line at the head of the second line.
My goal is to remove the first line and just show the updated, second line.
My first attempt was to adjust the time frame in Grafana, which works but affects all the instances that were not changed.
My second attempt was to remove the time series in Prometheus, but the admin API was not enabled, and restarting would cause a hiccup in the Prometheus system (which is not good for monitoring).
It is also said here that time series can only be deleted via the API, but that was in 2018. I was wondering whether it is now possible to remove time series without the API.
No, the only way to remove time series is by using the API.
Yes, restarting would cause a hiccup, but let's be practical: the downtime is really very small.
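If you do accept the restart, the deletion itself is just a couple of HTTP calls against the admin API; roughly like this, where the host and the label matcher for your old target are placeholders:
# Start Prometheus with the admin API enabled (this is the restart in question)
prometheus --config.file=prometheus.yml --web.enable-admin-api

# Delete the series matching the old target, then clean up the tombstones on disk
curl -X POST -g 'http://localhost:9090/api/v1/admin/tsdb/delete_series?match[]={instance="10.0.0.1:9100"}'
curl -X POST 'http://localhost:9090/api/v1/admin/tsdb/clean_tombstones'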

data processing of Dymola's result during simulation

I am working on a complex Modelica model that contains a large set of data, and I need the simulation to keep going until I terminate the simulation process, maybe even for days, so the .mat file could get very large and I am having trouble with the data processing. So I'd like to ask whether there are any methods that allow me to:
output the data I need at a fixed time step during the simulation, rather than reading the .mat file after the simulation. I am considering using the Modelica.Utilities.Streams.print function to print the data I need into a CSV file, but I would have to write a huge amount of code to print every variable I need, so I think there should be a better solution.
delete the .mat file at a fixed time step, so that the .mat file stored on my PC doesn't get too large and the normal Dymola simulation is not affected.
A long time ago I wrote a small C program that runs the Dymola executable with two threads. One of them is responsible for terminating the whole simulation once an input time limit is exceeded. I used the executable of this C program within the standard m-files provided by Dymola. I think with some hacking, one would be able to meet the mentioned requirements.
Have a look at https://github.com/Mathemodica/dymmat; however, I need to warn you that the associated m-files were written for a particular type of model and the software has not been maintained for a long time. Still, the idea of the C program should be reproducible.
I didn't fully test this, so please think of this more like "source of inspiration" than a full answer:
In Section "4.3.6 Saving periodic snapshots during simulation" of the Dymola 2021 Release Notes you'll find a description to do the following:
The simulator can be instructed to print the simulation result file “dsfinal.txt” snapshots during simulation.
This can be done periodically using the Simulation Setup options "Complete result snapshots", but I think for your case it could be more useful to trigger it from the model using the function Dymola.Simulation.TriggerResultSnapshot(). A simple example is given as well:
when x > 0 then
Dymola.Simulation.TriggerResultSnapshot();
end when;
Also one property of this function could help, as it by default creates multiple files without overwriting them:
By default, a time stamp is added to the snapshot file name, e.g.: “dsfinal_0.1.txt”.
The format of the created dsfinal_[TIMESTAMP].txt is a bit overwhelming at first, as it contains all information for initializing the model, but there should be everything you need...
So some effort is shifted to the post processing, as you will likely need to read multiple files, but I think this is an acceptable trade-off.

How do we change the "precision:ms" setting in the Grafana Query Inspector?

I have an InfluxDB database with only 11 data points in it. These data are not displaying correctly (or at least as I would expect) in Grafana when the time between them is shorter than 1 ms.
If I insert data points 1 ms apart, then everything works as expected and I see all 11 points at the correct times, as shown below:
However, if I delete these points and upload new ones but this time one point per 100 μs, then although the data displays correctly in InfluxDB, in Grafana I see only two points in my graph:
It seems like the data is being rounded/binned to the nearest millisecond, and that this is related to the "precision=ms" setting in the query here:
but I cannot find any way to change this setting. What is the correct way to fix this?
You can't configure Grafana to support a different time precision for the InfluxDB datasource. It is hardcoded in the source code: https://github.com/grafana/grafana/blob/36fd746c5df1438f27aa33fc74b24be77debc7ff/public/app/plugins/datasource/influxdb/datasource.ts#L364 (It may need to be fixed in multiple places in the source, not only this one.)
So the correct way to fix it is to code it, which is of course not in the scope of this question.
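As a sanity check that the data itself is not binned at the source, you can query the InfluxDB HTTP API directly and request a finer epoch than the ms Grafana hardcodes; something along these lines, where the database, measurement and field names are placeholders:
# Same kind of query Grafana issues, but with microsecond timestamps (epoch=u) instead of epoch=ms
curl -G 'http://localhost:8086/query' \
  --data-urlencode 'db=mydb' \
  --data-urlencode 'epoch=u' \
  --data-urlencode 'q=SELECT "value" FROM "my_measurement" WHERE time > now() - 1h'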

Can watchman send why a file changed?

Is Watchman capable of telling the configured command why it is sending a file to that command?
For example:
a file that is new to a folder would possibly come with a FILE_CREATE flag;
a file that is deleted would send the FILE_DELETE flag to the command;
a file that is modified would send a FILE_MOD flag, etc.
Perhaps even a folder that gets deleted (and therefore the files thereunder) would send a FOLDER_DELETE parameter naming the folder, as well as a FILE_DELETE for the files thereunder and a FOLDER_DELETE for the folders thereunder.
Is there such a thing?
No, it can't do that. The reasons why are pretty fundamental to its design.
The TL;DR is that it is a lot more complicated than you might think for a client to correctly process those individual events and in almost all cases you don't really want them.
Most file watching systems are abstractions that simply translate the system-specific notification information into some common form. They don't deal, either very well or at all, with the notification queue overflowing, and they don't provide their clients with a way to reliably respond to that situation.
In addition to this, the filesystem can be subject to many and varied changes in a very short amount of time, and from multiple concurrent threads or processes. This makes this area extremely prone to TOCTOU issues that are difficult to manage. For example, creating and writing to a file typically results in a series of notifications about the file and its containing directory. If the file is removed immediately after this sequence (perhaps it was an intermediate file in a build step), by the time you see the notifications about the file creation there is a good chance that it has already been deleted.
Watchman takes the input stream of notifications and feeds it into its internal model of the filesystem: an ordered list of observed files. Each time a notification is received watchman treats it as a signal that it should go and look at the file that was reported as changed and then move the entry for that file to the most recent end of the ordered list.
When you ask Watchman for information about the filesystem it is possible or even likely that there may be pending notifications still due from the kernel. To minimize TOCTOU and ensure that its state is current, watchman generates a synchronization cookie and waits for that notification to be visible before it responds to your query.
The combination of the two things above means that watchman result data has two important properties:
You are guaranteed to have observed all notifications that happened before your query
You receive the most recent information for any given file only once in your query results (the change results are coalesced together)
Let's talk about the overflow case. If your system is unable to keep up with the rate at which files are changing (e.g. you have a big project, are very quickly creating and deleting files, and the system is heavily loaded), the OS can't fit all of the pending notifications in the buffer resources allocated to the watches. When that happens, it blows those buffers and sends an overflow signal. What that means is that the client of the watching API has missed some number of events and is no longer in synchronization with the state of the filesystem. If that client maintains state about the filesystem, that state is no longer valid.
Watchman addresses this situation by re-examining the watched tree and synthetically marking all of the files as being changed. This causes the next query from the client to see everything in the tree. We call this a fresh instance result set because it is the same view you'd get when you are querying for the first time. We set a flag in the result so that the client knows that this has happened and can take appropriate steps to repair its own state. You can configure this behavior through query parameters.
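As a rough sketch (the root path and clock value are placeholders), a since query issued through the CLI and the flag that marks a fresh instance look like this:
# Ask for everything that changed since a previously saved clock value
watchman -j <<EOF
["query", "/path/to/project", {
  "since": "c:1700000000:1234:1:1",
  "fields": ["name", "exists", "new", "mtime_ms"]
}]
EOF
# If the watch was recreated or the kernel buffers overflowed, the response contains
# "is_fresh_instance": true and lists every file currently in the tree.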
In these fresh instance result sets, we don't know whether any given file really changed or not (it's possible that it changed in such a way that we can't detect via lstat) and even if we can see that its metadata changed, we don't know the cause of that change.
There can be multiple events that contribute to why a given file appears in the results delivered by watchman. We don't record them individually because we can't track them with unbounded history; imagine a file that is incrementally written to once every second all day long. Do we keep 86,400 change entries for it per day on hand and deliver those to our clients? What if there are hundreds of thousands of files like this? We'd have to truncate that data, and at that point the loss of data reduces how well you can reason about it.
At the end of all of this, it is very rare for a client to do much more than try to read a file or look at its metadata, and generally speaking, they want to do that only when the file has stopped changing. For this use case, watchman-wait, watchman-make and trigger all have the concept of a settle period that causes the change notifications to be delayed in delivery until after the filesystem has stopped changing.
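For example, a trigger combined with a settle period set in .watchmanconfig looks roughly like this (the paths, pattern and command are placeholders, and the 200 ms value is arbitrary):
# .watchmanconfig in the watched root: wait 200 ms of quiet before dispatching triggers
echo '{"settle": 200}' > /path/to/project/.watchmanconfig

# Watch the tree and register a trigger that rebuilds once *.c files have stopped changing
watchman watch /path/to/project
watchman -- trigger /path/to/project rebuild '*.c' -- make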

Symfony: getting form values before and after form handling

Hello, I want to be able to compare values before and after form handling, so that I can process them before the flush.
What I do is collect old values in an array before handlerequest.
I then compare new values to the old values in the array.
It works perfectly on simple variables, like strings for instance.
However, I want to work with uploaded files. I am able to get their full paths and names before handling the form, but when I read the values after checking that the form is valid, I am still getting the same old value.
I tried calling both $entity->getVar() and $form->getData()->getVar(), and I get the same output.
Hello, I actually found a solution. It is, however, a departure from the strategy announced in my question, which I realize was somewhat truncated with respect to my objective. The objective was to compare old file names and new names (those names actually include the full path) for changes, so that I could unlink the old names that were no longer in the new name list. Basically, to perform a cleanup after a file was uploaded to replace another, without the first one being deleted first, and to save the webmaster the hassle of having to sort uniqid-named files that are still used by the web site from those that are useless.
The problem is that my upload functions, which are very similar to the file upload examples shown on the official documentation pages, seemed to take effect at flush time.
So, since what I wanted to do with those files had nothing to do with database operations, I resorted to running the second step after the flush, which works fine.
However, I am intrigued by your solutions, as they are both strategies I hadn't thought of. Thank you for the suggestions.
However, I am not sure whether cloning the whole object will be as straightforward as comparing two arrays of file names.