How to reset filters in Fiddler?

I am trying to set up some filters in Fiddler and I do not know how to reset them if I make a mistake.
An example of a session:
Unfiltered status:
Applying a wrong filter (which will not match anything):
I end up with an empty session list (which is OK).
Rolling back
Now I would like to get back to a no-filter state. I tried to:
disable the filtering by unchecking "Use Filters"
refresh (F5) after doing the above
use filters but go back to -No Host Filter- and apply it in the Actions menu
None of these (and various combinations thereof) worked. How can I apply a no-filter status (short of restarting Fiddler)?

From the comments, we've confirmed that your confusion arises from the fact that Sessions which have been deleted by filters cannot be recovered. The point of filters, beyond helping to limit what you see, is that they reduce Fiddler's memory usage since unwanted data need not be stored.
Obviously, if you've previously saved the traffic in a SAZ file, you can simply reload that file.

Related

Filtering certain types of Requests logged by quarkus.http.access-log

I want to test a new REST client I am building, and I'd like to see the exact request that is being built, so I set the quarkus.http.access-log.enabled=true property. When starting Quarkus, however, I am bombarded with the logs of many scheduled requests that happen simultaneously. The worst of those are several Elasticsearch scroll requests, which return a lot of data that is fed directly into the log.
My idea is that everything returning a response that contains _index should be filtered out; I know by now that my ES client is working properly, but it pumps out so much data that the request I am intending to log is overwritten almost instantly.
So my question: does somebody know a working (and convenient) method to effectively filter unwanted HTTP logs?
I tried setting
quarkus.http.access-log.exclude-pattern=(_index+)
in an attempt to filter out unwanted requests, but I'm not sure where to continue from there.
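One thing worth noting: as far as I can tell, quarkus.http.access-log.exclude-pattern is a regular expression matched against the request path, not the response body, so a pattern like (_index+) will not catch responses that merely contain _index. A minimal sketch under that assumption, with a purely hypothetical path pattern standing in for the noisy Elasticsearch endpoints:

# application.properties
quarkus.http.access-log.enabled=true
# Hypothetical pattern: exclude requests whose path looks like an Elasticsearch
# search/scroll endpoint; adjust it to the real paths you see in the log.
quarkus.http.access-log.exclude-pattern=.*(_search|scroll).*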

Problem displaying a GeoServer layer when a resource is updated with the REST API

I am having a weird issue when using the GeoServer API to update a NetCDF resource of a coverage store layer. The resource is a NetCDF containing one 3D (lon, lat, time) variable. However, the time dimension is only of length 1. My code runs within a Docker container and uses curl in a .sh file to run the API commands.
I must stress that the problem occurs only once in a while, maybe 10% of the time, maybe less.
When the problem occurs, the update of the store seems to have a problem and the layer cannot be displayed. Looking at GetCapabilities, one of the weird things is that the time dimension is not the right date but is instead equal to 1970-01-01T00:00:00.000Z, which is the reference date used in the NetCDF for the time dimension. Also, no problems are detected in the logs.
I do know that the problem is not with the file, and probably not with the upload of the file. Indeed, when the problem occurs, I can successfully create a store and a layer with the same resource and the same parameters as the layer that is not working.
I have tried multiple things via the API to solve this issue:
Resetting the resource cache (a curl sketch of this call follows the list). It sometimes works, but not always.
Deleting the layer and store and recreating them every time I need to update the resource.
Deleting the resource, layer and store and recreating everything on each resource update.
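For illustration, a minimal curl sketch of a cache reset via the REST API (host and credentials are placeholders; the global /rest/reset endpoint clears GeoServer's store, raster and schema caches):

# Clear GeoServer's store/raster/schema caches (placeholder host and credentials)
curl -u admin:geoserver -X POST "http://localhost:8080/geoserver/rest/reset"
# Then check the layer again, e.g. by requesting the WMS capabilities
curl -s "http://localhost:8080/geoserver/ows?service=WMS&version=1.3.0&request=GetCapabilities"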
Nothing seems to get rid of the problem permanently. Has anyone experienced the same kind of behavior? It is not the first time I have used GeoServer's API in a data harvester, but it is the first time I have had this problem!
EDIT
I also tried to make the NetCDF file as simple as I could, by removing the time dimension.
So now the NetCDF file only has 4 variables: lon, lat, the gridded variable, and a variable called crs that is of dimension 0 and therefore empty (I left it there for now since it comes from the outside source file).
But then again, the same kind of issue occurs, and again only once in a while. However, when it occurs, something suspicious shows up in GeoServer's log:
2022-06-08 16:01:28,267 WARN [operation.projection] - Possible use of "Popular Visualisation Pseudo Mercator" projection outside its valid area.
Longitude 2147483287°00.0'W is out of range (±180°).
But again, when this happens, I can usually clear the resource cache and the layer will become visible again.
So I still don't know what is happening. Could it be the empty crs variable that sometimes creates problems?
Thanks a lot for your help!

Most performant way to implement time-dependent status

Central to a project I'm working on is a highlighting mechanic that can be applied to certain items on the website. The idea is that this highlighted status is only active for a certain amount of time.
I'm trying to find the most performant way to achieve this (in querying, setting the status, checking it and revoking it).
A first approach would be to simply set a value 'highlighted: true' on the item. This seems to be the most performant way to query for highlighted items. The drawback I see here is that a date for the highlighting action also needs to be stored, and on top of that an interval job has to check the highlighted items and potentially revoke their highlighted status. Also, the exact moment when the item stops being highlighted can't be determined precisely, since it depends on the interval of the check function.
A second approach would be to store only the date of the highlighting action and run the query against it. Querying for highlighted objects seems far less performant, since every item ever created is checked, and it is not just a boolean test but a comparison over date values to decide whether the highlight is still valid. On the upside, no external cleanup function is necessary and every highlighting period ends exactly on time.
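For illustration, a sketch of the second approach against a relational store (table and column names are hypothetical, PostgreSQL-flavoured syntax); with an index on the timestamp the query stays cheap even though every row is a candidate:

-- Hypothetical schema: record when the highlight was applied
ALTER TABLE items ADD COLUMN highlighted_at TIMESTAMP NULL;
CREATE INDEX idx_items_highlighted_at ON items (highlighted_at);

-- "Currently highlighted" = highlighted within the last 24 hours (example duration)
SELECT *
FROM items
WHERE highlighted_at > NOW() - INTERVAL '24 hours';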
Would love to have your input on this. Is there maybe a clever pattern for this?

Can watchman send why a file changed?

Is watchman capable of telling the configured command why it is sending a file to that command?
For example:
a file that is new to a folder would possibly come with a FILE_CREATE flag;
a file that is deleted would send to the command the FILE_DELETE flag;
a file that's modified would send a FILE_MOD flag etc.
Perhaps even when a folder gets deleted (and therefore the files thereunder), it would send a FOLDER_DELETE parameter naming the folder, as well as a FILE_DELETE for the files thereunder and a FOLDER_DELETE for the folders thereunder.
Is there such a thing?
No, it can't do that. The reasons why are pretty fundamental to its design.
The TL;DR is that it is a lot more complicated than you might think for a client to correctly process those individual events and in almost all cases you don't really want them.
Most file watching systems are abstractions that simply translate from the system-specific notification information into some common form. They don't deal, either very well or at all, with the notification queue overflowing, and they don't provide their clients with a way to reliably respond to that situation.
In addition to this, the filesystem can be subject to many and varied changes in a very short amount of time, and from multiple concurrent threads or processes. This makes this area extremely prone to TOCTOU issues that are difficult to manage. For example, creating and writing to a file typically results in a series of notifications about the file and its containing directory. If the file is removed immediately after this sequence (perhaps it was an intermediate file in a build step), by the time you see the notifications about the file creation there is a good chance that it has already been deleted.
Watchman takes the input stream of notifications and feeds it into its internal model of the filesystem: an ordered list of observed files. Each time a notification is received watchman treats it as a signal that it should go and look at the file that was reported as changed and then move the entry for that file to the most recent end of the ordered list.
When you ask Watchman for information about the filesystem it is possible or even likely that there may be pending notifications still due from the kernel. To minimize TOCTOU and ensure that its state is current, watchman generates a synchronization cookie and waits for that notification to be visible before it responds to your query.
The combination of the two things above mean that watchman result data has two important properties:
You are guaranteed to have observed all notifications that happened before your query
You receive the most recent information for any given file only once in your query results (the change results are coalesced together)
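To make that concrete, here is a sketch of a since query issued through the CLI (the watched root and the clock string are placeholders taken from an earlier response); each file shows up at most once in the result no matter how many times it changed:

# Ask watchman for everything that changed since a previously returned clock value.
watchman -j <<'EOT'
["query", "/path/to/root", {
  "since": "c:1700000000:1234:1:42",
  "fields": ["name", "exists", "new", "mtime_ms"]
}]
EOT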
Let's talk about the overflow case. If your system is unable to keep up with the rate at which files are changing (e.g. you have a big project, are very quickly creating and deleting files, and the system is heavily loaded), the OS can't fit all of the pending notifications in the buffer resources allocated to the watches. When that happens, it blows those buffers and sends an overflow signal. What that means is that the client of the watching API has missed some number of events and is no longer in synchronization with the state of the filesystem. If that client maintains state about the filesystem, that state is no longer valid.
Watchman addresses this situation by re-examining the watched tree and synthetically marking all of the files as being changed. This causes the next query from the client to see everything in the tree. We call this a fresh instance result set because it is the same view you'd get when you are querying for the first time. We set a flag in the result so that the client knows that this has happened and can take appropriate steps to repair its own state. You can configure this behavior through query parameters.
In these fresh instance result sets, we don't know whether any given file really changed or not (it's possible that it changed in such a way that we can't detect via lstat) and even if we can see that its metadata changed, we don't know the cause of that change.
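A trimmed sketch of what such a result can look like (field values are illustrative); a client should check the is_fresh_instance flag and rebuild its state instead of treating the entries as incremental changes:

{
  "clock": "c:1700000000:1234:1:99",
  "is_fresh_instance": true,
  "files": [
    {"name": "src/main.c", "exists": true, "new": true, "mtime_ms": 1700000123000}
  ]
}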
There can be multiple events that contribute to why a given file appears in the results delivered by watchman. We don't record them individually because we can't track them with unbounded history; imagine a file that is incrementally written once every second, all day long. Do we keep 86400 change entries for it per day on hand and deliver those to our clients? What if there are hundreds of thousands of files like this? We'd have to truncate that data, and at that point the loss in the data reduces how well you can reason about it.
At the end of all of this, it is very rare for a client to do much more than try to read a file or look at its metadata, and generally speaking, they want to do that only when the file has stopped changing. For this use case, watchman-wait, watchman-make and trigger all have the concept of a settle period that causes the change notifications to be delayed in delivery until after the filesystem has stopped changing.
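As an illustration of the settle behaviour, a sketch using the legacy trigger CLI syntax (paths, pattern and command are placeholders); the settle value lives in the .watchmanconfig at the root of the watched tree:

# .watchmanconfig in /path/to/root -- wait for 1000ms of quiet before firing triggers:
#   { "settle": 1000 }
# Register a trigger that runs make once matching files have settled:
watchman -- trigger /path/to/root build-on-change '*.c' -- make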

jQuery Auto Complete - Performance Issue

We are using the plugin https://goodies.pixabay.com/jquery/tag-editor/demo.html for our autocomplete feature. We load the source with 3500 items. The performance gets very bad once the user starts typing: the autocomplete shows the filtered result only after 6 to 8 seconds.
What alternate approaches can we take for up to 4000 items for autocomplete?
Appreciate your response!
Are you using the minLength attribute from autocomplete?
On their homepage, they have something like this:
$('#my_textarea').tagEditor({ autocomplete: { 'source': '/url/', minLength: 3 } });
This effectively means that the user has to enter at least 3 characters before autocomplete will be used. Doing so will usually reduce the number of results from the autocomplete to a more sane count (maybe 20-30).
However, this might not necessarily be your problem. First you should figure out whether it is your server that has a problem responding quickly (you can use your browser's developer tools to see how long the request takes to complete).
If the request takes 6-8 seconds, then you will have to optimize your server's code. On the other hand, if the response is quick but tagEditor needs a long time to build the suggestion list, then the problem is that it might not be optimized for so many suggestions. In that case, the ultimate solution would be to rewrite the autocompletion module yourself or patch the existing one to scale better to your needs.
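If the 3500 items are reasonably static, another option worth trying is to fetch them once and hand the autocomplete a local array, so no request happens per keystroke. A sketch under the assumption that tagEditor passes these options straight through to jQuery UI autocomplete (the URL and element id are placeholders):

// Fetch the full list once, then let the widget filter it client-side.
$.getJSON('/url/', function (items) {
  $('#my_textarea').tagEditor({
    autocomplete: {
      source: items,   // local array: no server round trip on every keystroke
      minLength: 3,    // don't start matching before 3 characters
      delay: 150       // small debounce while the user is typing
    }
  });
});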
Do you go back to the server every time the user types in something to get the matching results?
I am using Spring with Ehcache, which fetches all the items from the database and stores them in the server cache when the server is started. Whenever the user types, the cached data is used, which returns the results within a few milliseconds. Someone else recommended this to me. Below is an example of it:
http://www.mkyong.com/ehcache/ehcache-hello-world-example/
I am using the jQuery autocomplete feature with 2500 items without any issue.
Here is a link where it is being used: http://www.all4sportsonline.com