The Google Guava Cache documentation states:
Refreshing is not quite the same as eviction. As specified in LoadingCache.refresh(K), refreshing a key loads a new value for the key, possibly asynchronously. The old value (if any) is still returned while the key is being refreshed, in contrast to eviction, which forces retrievals to wait until the value is loaded anew.
If an exception is thrown while refreshing, the old value is kept, and
the exception is logged and swallowed.
This logging and swallowing of exceptions is really bad in my use case, because it means that if refresh throws an exception, users of the cache will keep getting the stale data.
How can I make sure that, if an exception is thrown in refresh, the cache starts returning null or calls the load method again?
If you never want to serve the stale data, you should call invalidate(key) instead of refresh(key). This discards the cached value for key, if one exists.
Then a subsequent call to get(key) will delegate synchronously to the value loader, and will rethrow any exception thrown by the CacheLoader, wrapped in an (Unchecked)ExecutionException.
If stale data is a problem for you, then you should use expireAfterWrite to ensure that data older than the threshold is never served.
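A minimal sketch of both suggestions, assuming a hypothetical fetchFromBackend call as the downstream source (the 15-minute TTL is only illustrative):

import com.google.common.cache.CacheBuilder;
import com.google.common.cache.CacheLoader;
import com.google.common.cache.LoadingCache;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.TimeUnit;

LoadingCache<String, String> cache = CacheBuilder.newBuilder()
        .expireAfterWrite(15, TimeUnit.MINUTES) // hard upper bound on staleness
        .build(new CacheLoader<String, String>() {
            @Override
            public String load(String key) throws Exception {
                return fetchFromBackend(key); // hypothetical downstream call
            }
        });

// Instead of refresh(key): discard the entry, then reload synchronously.
cache.invalidate(key);
try {
    String value = cache.get(key); // blocks and rethrows loader failures
} catch (ExecutionException e) {
    // the load failed and nothing was cached, so no stale value is served
}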
Related
If I got it right, threads that call get(key) will be blocked until the first thread finishes loading the value. But what happens if the load fails (an exception is thrown, for example)? Will another thread that calls get(key) retry the load?
Yes. Every call to the loading cache either returns the stored value or blocks and fetches the value from the downstream source via the CacheLoader; since a failed load stores nothing, the next call to get(key) will trigger the load again.
From the Wiki page (emphasis mine):
A LoadingCache is a Cache built with an attached CacheLoader. (...) The canonical way to query a LoadingCache is with the method get(K). This will either return an already cached value, or else use the cache's CacheLoader to atomically load a new value into the cache.
Moreover:
Because CacheLoader might throw an Exception, LoadingCache.get(K) throws ExecutionException. (...) You can also choose to use getUnchecked(K), which wraps all exceptions in UncheckedExecutionException, but this may lead to surprising behavior if the underlying CacheLoader would normally throw checked exceptions.
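To make the two failure modes concrete, here is a hedged sketch, assuming userCache is a LoadingCache<Integer, User> whose loader can throw a checked exception:

import com.google.common.util.concurrent.UncheckedExecutionException;
import java.util.concurrent.ExecutionException;

try {
    User user = userCache.get(id); // checked: loader failures surface here
} catch (ExecutionException e) {
    // e.getCause() is the original exception from the CacheLoader;
    // nothing was cached, so the next get(id) invokes the loader again
}

try {
    User user = userCache.getUnchecked(id); // no checked exception to handle
} catch (UncheckedExecutionException e) {
    // same cause, but wrapped unchecked, even for checked loader exceptions
}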
I am getting this error, and I have not been able to resolve it:
System.Data.SqlClient.SqlException: 'The transaction operation cannot be performed because there are pending requests working on this transaction.'
What is going on is that a normal data operation is taking place as part of a controller action. At the same time, a filter is running that logs the action to a database:
this._orderEntryContext.ServerLog.Add(serverLog);
return this._orderEntryContext.SaveChanges() > 0;
This is where the error occurs.
So it seems to me that there are two SaveChanges calls going on at the same time, and so the transaction gets fouled up.
I am not sure how to resolve this. Both calls use the same context, obtained through DI. A workaround was to create a second context manually, but I would rather stick to the DI pattern. However, I don't know how to create a second DbContext through DI, or even whether that is a good idea.
Perhaps I should be using SaveChangesAsync() on both calls to ensure that they do not step on each other?
Turns out the answer to this was to make the Context a transient service:
services.AddDbContext<OrderEntryContext>(options =>
    options.UseSqlServer(Configuration.GetConnectionString("DefaultConnection")),
    ServiceLifetime.Transient); // a fresh context instance per resolution, so the filter and the action no longer share a transaction
Then, I changed all repositories to also be transient:
services.AddTransient<AssociateRepository, AssociateRepository>();
We are using spray-cache (can't move to akka-http yet) to cache results from a downstream service we are calling. The effect we want is, if the data is more than 15 minutes old, do the call, otherwise return the cached data.
Our problem is that, if the service call fails, spray-cache will remove the entry from the cache. What we need is to return the old cached data (even if it's stale), and retry the downstream request when the next request comes in.
It looks like Spray does not ship with a default cache implementation that does what you want. According to the spray-caching docs there are two implementations of the Cache trait: SimpleLruCache and ExpiringLruCache.
What you want is a Cache that distinguishes entry expiration (removal of the entry from the cache) from entry refresh (fetching or calculating a more recent copy of the entry).
Since both default implementations merge these two concepts into a single timeout value, I think your best bet will be to write a new Cache implementation that distinguishes refresh from expiration.
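To make that concrete, here is a minimal, library-agnostic sketch of the bookkeeping such an implementation needs, written in plain Java rather than against spray's Cache trait (every name here is illustrative, not part of any library). It serves the stale value while a background reload runs, and keeps the stale value if the reload fails, which is the behavior asked for above:

import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

final class RefreshAheadCache<K, V> {
    private static final class Entry<V> {
        final V value; final long writeNanos; volatile boolean refreshing;
        Entry(V value, long writeNanos) { this.value = value; this.writeNanos = writeNanos; }
    }
    private final ConcurrentHashMap<K, Entry<V>> map = new ConcurrentHashMap<>();
    private final long refreshNanos, expireNanos;
    private final Function<K, V> loader;

    RefreshAheadCache(long refreshNanos, long expireNanos, Function<K, V> loader) {
        this.refreshNanos = refreshNanos; this.expireNanos = expireNanos; this.loader = loader;
    }

    V get(K key) {
        Entry<V> e = map.get(key);
        long age = (e == null) ? Long.MAX_VALUE : System.nanoTime() - e.writeNanos;
        if (age > expireNanos) { // missing or expired: the caller must block
            V v = loader.apply(key);
            map.put(key, new Entry<>(v, System.nanoTime()));
            return v;
        }
        if (age > refreshNanos && !e.refreshing) { // stale but still usable
            e.refreshing = true;
            CompletableFuture.runAsync(() -> { // reload off the caller's thread
                try { map.put(key, new Entry<>(loader.apply(key), System.nanoTime())); }
                catch (RuntimeException ex) { e.refreshing = false; } // keep stale value; retry on a later read
            });
        }
        return e.value; // current value, possibly stale while refreshing
    }
}

A real implementation would also coalesce concurrent loads and bound the map's size, but the refresh-versus-expire split above is the essential part.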
This question is in regards to the jose4j JWT library. I am planning to create a single JwtConsumerBuilder instance for processing all incoming requests. I read here on Stack Overflow and in the release notes that JwtConsumerBuilder is thread-safe. I also plan to use the setVerificationKey method to validate the signature. When the key expires, I assume I will get an exception. Which type of exception will be thrown: InvalidJwtSignatureException or InvalidKeyException?
When such an exception occurs, my plan is to update my global instance of the JwtConsumerBuilder with a new instance after retrieving the updated key through the class HttpsJwksVerificationKeyResolver. Is this a sound approach, or does the resolver take care of this for me?
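For reference, a resolver-based setup looks roughly like this (the JWKS URL and issuer are placeholders). My understanding is that HttpsJwks caches the downloaded key set and the resolver refetches it when it cannot find a matching key, so rebuilding the consumer on rotation should not be necessary, though verifying this against the jose4j docs is advisable:

import org.jose4j.jwk.HttpsJwks;
import org.jose4j.jwt.consumer.JwtConsumer;
import org.jose4j.jwt.consumer.JwtConsumerBuilder;
import org.jose4j.keys.resolvers.HttpsJwksVerificationKeyResolver;

// Placeholder endpoint: replace with the issuer's JWKS URL.
HttpsJwks httpsJwks = new HttpsJwks("https://issuer.example.com/jwks.json");
HttpsJwksVerificationKeyResolver resolver =
        new HttpsJwksVerificationKeyResolver(httpsJwks);

// Built once and reused: the resolver picks the verification key per JWT
// header, refetching the JWKS when it sees an unrecognized key ID.
JwtConsumer consumer = new JwtConsumerBuilder()
        .setVerificationKeyResolver(resolver)
        .setExpectedIssuer("https://issuer.example.com") // illustrative claim check
        .build();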
I'd like to do TTL-based memoization with active asynchronous refresh in Scala.
The ScalaCache example in the documentation allows for TTL-based memoization as follows:
import scalacache._
import memoization._
import scala.concurrent.duration._ // needed for the "60 seconds" syntax

implicit val scalaCache = ScalaCache(new MyCache())

def getUser(id: Int): User = memoize(60 seconds) {
  // Do DB lookup here...
  User(id, s"user${id}")
}
I am curious whether, after the TTL expires for an existing value, the DB lookup gets triggered synchronously and lazily during the next getUser invocation, or whether the refresh happens eagerly and asynchronously, even before the next getUser call.
If the ScalaCache implementation is synchronous, is there an alternate library that provides the ability to refresh the cache actively and asynchronously?
Expiration and refresh are closely related but different mechanisms. An expired entry is considered stale and cannot be used, so it must be discarded and refetched. An entry eligible for being refreshed means that the content is still valid to use, but the data should be refetched as it may be out of date. Guava provides these TTL policies under the names expireAfterWrite and refreshAfterWrite, which may be used together if the refresh time is smaller than the expiration time.
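In Guava that combination looks roughly like this (the durations and the fetchFromSystemOfRecord call are illustrative assumptions):

import com.google.common.cache.CacheBuilder;
import com.google.common.cache.CacheLoader;
import com.google.common.cache.LoadingCache;
import java.util.concurrent.TimeUnit;

// The refresh threshold sits below the expiration threshold, leaving a
// window where reads are served the old value while a reload happens.
LoadingCache<String, String> cache = CacheBuilder.newBuilder()
        .refreshAfterWrite(15, TimeUnit.MINUTES)
        .expireAfterWrite(60, TimeUnit.MINUTES)
        .build(CacheLoader.from((String key) -> fetchFromSystemOfRecord(key)));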
The design of most caches prefers discarding unused content. An active refresh would require a dedicated thread that reloads entries regardless of whether they have been used. Therefore most caching libraries do not provide active refresh themselves, but make it easy for applications to add that customization on top.
When a read in Guava detects that the entry is eligible for refresh, that caller will perform the operation. All subsequent reads while the refresh is in progress will obtain the current value. This means that the refresh is performed synchronously on the user's thread that triggered it, and asynchronously from the perspective of other threads reading that value. A refresh may be made fully asynchronous if CacheLoader.reload is overridden to perform the work on an executor.
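Guava ships a decorator for exactly that. A sketch, with the executor sizing and the fetch call as assumptions:

import com.google.common.cache.CacheBuilder;
import com.google.common.cache.CacheLoader;
import com.google.common.cache.LoadingCache;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

ExecutorService executor = Executors.newFixedThreadPool(4); // sizing is illustrative

// asyncReloading wraps a loader so that reload() runs on the executor,
// taking the refresh entirely off the thread that triggered it.
LoadingCache<String, String> cache = CacheBuilder.newBuilder()
        .refreshAfterWrite(15, TimeUnit.MINUTES)
        .build(CacheLoader.asyncReloading(
                CacheLoader.from((String key) -> fetchFromSystemOfRecord(key)), executor));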
Caffeine is a rewrite of Guava's cache that differs slightly by always performing the refresh asynchronously to the user's thread. The cache delegates the operation to an executor, by default ForkJoinPool.commonPool(), which is a JVM-wide executor. The Policy API provides means of inspecting the runtime state of the cache, such as the age of an entry, for adding application-specific custom behavior.
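The equivalent Caffeine setup, where the refresh always runs on the configured executor (a custom pool is shown only for illustration; omitting executor(...) uses ForkJoinPool.commonPool()):

import com.github.benmanes.caffeine.cache.Caffeine;
import com.github.benmanes.caffeine.cache.LoadingCache;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

LoadingCache<String, String> cache = Caffeine.newBuilder()
        .refreshAfterWrite(15, TimeUnit.MINUTES)
        .executor(Executors.newFixedThreadPool(2)) // optional; common pool by default
        .build(key -> fetchFromSystemOfRecord(key)); // hypothetical fetch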
For other ScalaCache backends, support is mixed. Ehcache has a RefreshAheadCache decorator that refreshes entries lazily using its own thread pool. Redis and memcached do not refresh, as they are not aware of the system of record. LruMap has expiration support grafted on and does not have any refresh capabilities.