Disable Request Journaling without restarting - WireMock

I want to proxy requests and then snapshot them using WireMock, so I need request journaling enabled. But for subsequent runs, after taking the snapshot, I don't really need this functionality. Is there a way to switch the config without having to restart the WireMock instance? (Note: this is for performance reasons. I do know about the max_journal_entries parameter, which is quite useful in this scenario, but I am wondering if I can disable the journal completely.)

Ability to fail macOS endpoint extension from within the extension process

I'd like to protect against unauthorised system extension teardown triggered by the container application via this command:
self.deactivationRequest = OSSystemExtensionRequest.deactivationRequest(
    forExtensionWithIdentifier: extensionIdentifier, queue: .main)
self.deactivationRequest!.delegate = self
OSSystemExtensionManager.shared.submitRequest(self.deactivationRequest!)
Is there a callback in the endpoint extension code that is invoked upon this deactivation request and can block/allow it?
Thanks
There is no public API to control system extension deactivation, either with EndpointSecurity or from inside the sysext itself (activation and deactivation management is, I think, handled by a daemon such as sysextd).
I can suggest two approaches to try for your case:
You may still be able to deny deactivation with EndpointSecurity, just not in a direct way. To deactivate a sysext, the responsible processes do a lot of work, including opening and reading some specific files. If you are lucky, you may be able to make the deactivation fail by blocking one such operation before the extension is actually deactivated. However, the context of the operation (i.e. how you know the target is your extension) may vary and give you less information than you need.
You may intercept the OSSystemExtensionManager.shared.submitRequest call inside your application and only call the original method from the interception when some condition is met. For submitRequest, the interception would be done with method swizzling (a sketch follows below).
Or you can place a good old hook on something deeper, like the xpc_* functions, filter out your deactivation request by some unique string in the request, and again call the original method only when your condition holds.
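
As a rough illustration of the swizzling idea, here is a minimal Swift sketch. SubmitRequestGuard, shouldForward and guarded_submitRequest are made-up names, and the policy check is just a placeholder for your own condition:

import ObjectiveC
import SystemExtensions

// Placeholder policy hook (not part of any Apple API): decide per request
// whether the original submitRequest(_:) should actually be invoked.
enum SubmitRequestGuard {
    static var shouldForward: (OSSystemExtensionRequest) -> Bool = { _ in true }

    static func install() {
        guard
            let original = class_getInstanceMethod(
                OSSystemExtensionManager.self,
                #selector(OSSystemExtensionManager.submitRequest(_:))),
            let swizzled = class_getInstanceMethod(
                OSSystemExtensionManager.self,
                #selector(OSSystemExtensionManager.guarded_submitRequest(_:)))
        else { return }
        method_exchangeImplementations(original, swizzled)
    }
}

extension OSSystemExtensionManager {
    @objc fileprivate func guarded_submitRequest(_ request: OSSystemExtensionRequest) {
        // request.identifier says which extension the request targets; the public API
        // does not reveal whether the request is an activation or a deactivation.
        guard SubmitRequestGuard.shouldForward(request) else { return }
        // After method_exchangeImplementations, this call reaches the original implementation.
        guarded_submitRequest(request)
    }
}

You would call SubmitRequestGuard.install() early during app startup, before any request can be submitted.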
Neither way is bulletproof from a tampering-protection perspective, of course, but nothing really is; we are just demanding additional effort from an attacker.
If you haven't disabled library validation for your app, there are two ways of tampering with it: turning SIP off, or using some 0-day system exploit.
You can't really protect your app from such threats: a 0-day is by definition unknown in advance, and with SIP off an attacker can unload, disable, or alter practically any kind of protection.

Is there a way to disable long calls to Firestore/Listen?

A long (about 1 minute) call to Firestore/Listen is preventing our prerender solution from working properly. The provider waits for all network calls to complete, which means the prerender takes a long time.
We don't use any of the realtime features, so there is no value in listening after requests complete afaict.
First, no, it's not possible to disable it directly via some Firestore config option, for example.
What you can do is bypass it by using transactions in Firestore. Any operation run in a transaction uses its own connection, which is torn down afterwards. See here for details.
It might also work to shut down Firestore after some timeout in this case; I haven't tried it yet. In theory, though, for this particular problem I could check the userAgent and shut down Firestore.

How to persist and replay NestJS CQRS events and sagas across restarts?

I am making an application which will need to use NestJS' CQRS module, as the requirements naturally lend themselves to that pattern.
Updates to the application logic are expected to be frequent and to happen during busy hours (that's just how my management works...), so the application needs to be able to restart gracefully. However, this means that events started just before the shutdown may not finish, and even if they do, some sagas may not trigger because some of their events happened before the restart... I'd like to ensure that doesn't happen.
I'm aware of NestJS' OnApplicationShutdown and OnApplicationBootstrap hooks, which exist exactly for this purpose, but I'm not sure what I should do in them. How can I capture all events that still have unfinished handlers and sagas? Then, after a restart, how can I make the event bus aware of the events monitored by sagas without re-executing the handlers that have already run?
I guess the second part could be worked around with a random ID per event/handler combination that is looked up in a log: if it's present, the handler is skipped; if not, the handler is executed and added to the log... But even with such a workaround, I don't see how I could do the first part. There will be a lot of events, and sagas (by definition) execute commands, meaning they have side effects... Even if all commands can be made idempotent, the sheer quantity of events and the frequent restarts mean that replaying from the very first command is a no-go.
I've seen this package but I'm not sure if it solves this particular use case, or if it's really just logging the events, and pretty much nothing more.

URLSession cache only

Sometimes I want to get data from the cache only when using URLSession. For example, when quickly scrolling in a UITableView, I would like to show images that are already in the cache, but not fire any HTTP requests. Images are just an example; it could be anything.
So I'm currently looking into URLSession's CachePolicy, but it doesn't offer an option to only return valid (not expired, etc.) data from the cache.
I can look in the URLCache myself, but that of course also returns data that might be expired. Is there some API that can validate a CachedURLResponse? Then I could do it myself. Or do I have to implement the validation myself?
That's a fairly unusual request. Normally, you're either writing code to operate in an offline mode (in which case you want to pull from the cache whether the cached results are still valid or not) or you are online (in which case you want to fetch new data if it isn't valid).
I would encourage you to really think long and hard about whether you really want to force cache validation if you aren't firing network requests.
That said, if you really want that behavior, there are two ways you can do it:
Use NSURLRequestReturnCacheDataDontLoad and validate the age of the cached response yourself.
Perform the request in a custom session, use NSURLRequestUseProtocolCachePolicy, and in that session, install a custom NSURLProtocol subclass that overrides initWithTask:cachedResponse:client: and startLoading, and calls URLProtocol:didFailWithError: on the provided client at the top of its startLoading method.
The second approach is probably the best option, because you don't have to worry about knowing all the esoteric rules for cache validation. By making the actual load fail, the cache will work normally, but as soon as it actually would start making a network request, your custom protocol prevents that from happening. And because you'll register the protocol only in that specific session (via the protocolClasses array on the session configuration), it won't break networking in other sessions.
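
A minimal sketch of what that second approach might look like in Swift; the class name and the specific error code are placeholders:

import Foundation

final class CacheOnlyURLProtocol: URLProtocol {
    // Claim every request made in the session this protocol is registered in.
    override class func canInit(with request: URLRequest) -> Bool { true }
    override class func canInit(with task: URLSessionTask) -> Bool { true }
    override class func canonicalRequest(for request: URLRequest) -> URLRequest { request }

    override func startLoading() {
        // Getting here means the cache did not satisfy the request and a real
        // network load would begin now, so fail immediately instead.
        client?.urlProtocol(self, didFailWithError: URLError(.notConnectedToInternet))
    }

    override func stopLoading() { }
}

// Register the protocol only in a dedicated session so other networking is unaffected.
let configuration = URLSessionConfiguration.default
configuration.requestCachePolicy = .useProtocolCachePolicy
configuration.protocolClasses = [CacheOnlyURLProtocol.self] + (configuration.protocolClasses ?? [])
let cacheOnlySession = URLSession(configuration: configuration)

For the first option, you would instead create the URLRequest with cachePolicy: .returnCacheDataDontLoad and check the age of the stored response (e.g. its Date and Cache-Control headers) yourself.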

Is my middle-tier MSMQ queue really necessary?

My scenario is this:
I have multiple webservers that:
need to communicate with the backend (IBus.Publish/IBus.Subscribe)
need to communicate with each other (IBus.Publish/IBus.Subscribe)
Aside from the webservers, I have a number of windows services that consume the same messages.
In order to make this work, I have the webservers send messages to a central hub, whose sole responsibility is to wrap the message in a new message type and publish it to all subscribers.
Can I somehow avoid this, so I can publish the messages directly from the webservers?
EDIT (Added some code) - Current situation:
... WebServer
_bus.Send(new Message{Body="SomethingChanged"});
... Hub
public void Handle(Message message){
    _bus.Publish(new WrappedMessage{Message = message});
}
... Handlers (WebServers, WindowsServices etc)
public void Handle(WrappedMessage message){
    //Actually do important stuff
}
Wanted situation:
... WebServer
_bus.Publish(new Message{Body="SomethingChanged"});
... Handlers (WebServers, WindowsServices etc)
public void Handle(Message message){
    //Do important stuff
}
Well, there isn't anything that technically prevents you from publishing messages inside your web application, and likewise there's nothing that prevents you from subscribing to those messages in all instances of the same web application. The question is whether you should :)
Without knowing the details of your problem, my immediate feeling is that you would be better off using some kind of shared persistent storage for whatever it is that you're trying to synchronize (a cache?), possibly using some kind of read replication if you'd like to scale out and make reads really fast.
Again, without knowing the details of your problem, I'll try and suggest something, and then you can see if that could inspire you into an even better solution... here goes:
Use MongoDB (possibly as a replica set if you want to scale out your read operations) as the persistent storage of the thing you're caching
Whenever something happens in the web application, bus.Send a message to your backend
In your backend message handler, you update Mongo (which automatically will replicate to read slaves)
Whenever you need to query your data, you just query your Mongo set (using slaveOk=true whenever you can accept slightly stale values)
The reason I'm suggesting this alternative solution is that web applications (at least in .NET land) have this funny transient nature where IIS dictates their lifecycle, and at any given time you can have n instances of them. This complicates matters if you keep state in them. It makes me think of the web application as a client, not a publisher.
A simpler solution is to keep state in something that does not come & go, e.g. a database. And the reason I'm suggesting Mongo is that my guess is you're worried about being able to serve web requests quickly; since MongoDB is fairly easy to set up as a replica set where read operations are pretty fast (and, more importantly, horizontally scalable), this setup would probably make everything much simpler.
How does that sound?