Hi, I am using Wicket and getting a page expired error whenever two pages are open and I try to submit one after the other.
Is there a way to support getPageSettings().setAutomaticMultiWindowSupport(true) in Wicket 6.8?
You have to know how big your pages will get and how many sessions you have in parallel. I think your serialized page size is bigger than maxSizePerSession, and because of that only the last page is stored on disk.
You can tweak your page store settings a bit (but this will increase your resource usage):
/**
 * Sets the maximum number of page instances which will be stored in the application scoped
 * second level cache for faster retrieval
 *
 * @param inmemoryCacheSize
 *            the maximum number of page instances which will be held in the application scoped
 *            cache
 */
void setInmemoryCacheSize(int inmemoryCacheSize);

/**
 * Sets the maximum size of the {@link File} where page instances per session are stored. After
 * reaching this size the {@link DiskDataStore} will start overriding the oldest pages at the
 * beginning of the file.
 *
 * @param maxSizePerSession
 *            the maximum size of the file where page instances are stored per session. In
 *            bytes.
 */
void setMaxSizePerSession(Bytes maxSizePerSession);
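For example, assuming Wicket 6.x, where these setters live on the IStoreSettings returned by getStoreSettings(), a minimal sketch of raising both limits in your application's init() (class names and values below are only illustrative):

import org.apache.wicket.Page;
import org.apache.wicket.protocol.http.WebApplication;
import org.apache.wicket.util.lang.Bytes;

public class MyApplication extends WebApplication {

    @Override
    public Class<? extends Page> getHomePage() {
        return HomePage.class; // your existing home page
    }

    @Override
    protected void init() {
        super.init();
        // Illustrative values: size these to your serialized page size
        // and to the number of parallel sessions you expect.
        getStoreSettings().setInmemoryCacheSize(40);
        getStoreSettings().setMaxSizePerSession(Bytes.megabytes(20));
    }
}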
I have the following definition of useQuery that I use in a couple of React components:
useQuery("myStuff", getMyStuffQuery().queryFn);
Where getMyStuffQuery looks like this:
export const getMyStuffQuery = () => {
  return {
    queryFn: () => makeSomeApiCall(),
  };
};
I would expect that although all of those components render, makeSomeApiCall() would only make an API call once, and the rest of the time the cached result from that first call would be used.
However, it seems like makeSomeApiCall() keeps being called again and again, whenever any of those components renders.
Why is React Query not using the cache? Am I doing something wrong?
React Query will cache the data of the query by default, but that does not affect whether or not it thinks that data is stale. If it thinks data is stale, it will call the query function (hit the API) every time useQuery() is called. This means it will read the data from the cache if it has it, but since it thinks that data is stale, will still hit the API in the background to fetch any updated data.
Fortunately, you have complete control over whether or not React Query considers data to be stale. You can set a staleTime config option to control how long specific data should be considered fresh. You can even set it to Infinity to say that as long as your app is open, it should only ever call the query function (hit the API) one time. By default this value is 0, which is why you are seeing the behavior you are - React Query will refetch the data in the background every time useQuery is called because it immediately thinks that data is stale (even though it's still cached).
In your example, if you truly ever only wanted an API to be called one time, you could simply set the staleTime option to Infinity.
useQuery("myStuff", getMyStuffQuery().queryFn, { staleTime: Infinity });
This option, along with all others, can be read about in the docs here https://react-query.tanstack.com/reference/useQuery
React Query has a slightly different model of request caching.
A request can have its results cached, and those results can go stale.
Cached results are returned immediately, but if stale they are re-fetched in the background and the cache is updated.
The default configuration caches results for 5 minutes and makes them stale immediately.
See: https://tanstack.com/query/v4/docs/guides/caching
The cacheTime and staleTime can be set as part of the useQuery options object, as shown here for a 5-minute cache time and a 1-minute stale time.
useQuery({ queryKey: ['todos'], queryFn: fetchTodos, staleTime: 1 * 60 * 1000, cacheTime: 5 * 60 * 1000 });
The strategy for refetching cached results can be changed with options like refetchOnWindowFocus.
See: https://tanstack.com/query/v4/docs/guides/important-defaults
You can stop this refetching of stale values on the window focus event like this:
import { QueryClient } from "@tanstack/react-query";

const client = new QueryClient({
defaultOptions: {
queries: {
refetchOnWindowFocus: false,
},
},
});
I created a Spring Batch Integration project to process multiple files and it is working like a charm.
While I'm writing this question I have four Pods running, but the behaviour isn't what I'm expecting: I expect 20 files being processed at the same time (five per Pod).
My poller setup uses the following parameters:
poller-delay: 10000
max-message-per-poll: 5
I am also using Redis to store the files and to filter them:
private CompositeFileListFilter<S3ObjectSummary> s3FileListFilter() {
    return new CompositeFileListFilter<S3ObjectSummary>()
            .addFilter(new S3PersistentAcceptOnceFileListFilter(
                    new RedisMetadataStore(redisConnectionFactory), "prefix-"))
            .addFilter(new S3RegexPatternFileListFilter(".*\\.csv$"));
}
It seems like each Pod is processing only one file, and another strange behaviour is that one of the Pods registers all the files in Redis, so the other Pods only get new files.
What is the best practice here, and how can I solve this so that multiple files are processed at the same time?
See this option on the S3InboundFileSynchronizingMessageSource:
/**
 * Set the maximum number of objects the source should fetch if it is necessary to
 * fetch objects. Setting the maxFetchSize to 0 disables remote fetching,
 * a negative value indicates no limit.
 * @param maxFetchSize the max fetch size; a negative value means unlimited.
 */
@ManagedAttribute(description = "Maximum objects to fetch")
void setMaxFetchSize(int maxFetchSize);
And here is the doc: https://docs.spring.io/spring-integration/docs/current/reference/html/ftp.html#ftp-max-fetch
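As a rough sketch, assuming a Java config with an annotation-driven inbound adapter and an AmazonS3 client bean (the channel name, bucket name and local directory below are made up, and s3FileListFilter() is the composite filter from the question, assumed to be in the same class), maxFetchSize could be wired like this:

import java.io.File;

import com.amazonaws.services.s3.AmazonS3;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.integration.annotation.InboundChannelAdapter;
import org.springframework.integration.annotation.Poller;
import org.springframework.integration.aws.inbound.S3InboundFileSynchronizer;
import org.springframework.integration.aws.inbound.S3InboundFileSynchronizingMessageSource;

@Configuration
public class S3PollingConfig {

    @Bean
    @InboundChannelAdapter(value = "s3FilesChannel",
            poller = @Poller(fixedDelay = "10000", maxMessagesPerPoll = "5"))
    public S3InboundFileSynchronizingMessageSource s3MessageSource(AmazonS3 amazonS3) {
        S3InboundFileSynchronizer synchronizer = new S3InboundFileSynchronizer(amazonS3);
        synchronizer.setRemoteDirectory("my-bucket");          // hypothetical bucket name
        synchronizer.setFilter(s3FileListFilter());            // the composite filter from the question

        S3InboundFileSynchronizingMessageSource source =
                new S3InboundFileSynchronizingMessageSource(synchronizer);
        source.setLocalDirectory(new File("/tmp/s3-files"));   // hypothetical local directory
        source.setMaxFetchSize(5);                             // fetch up to five objects per poll
        return source;
    }
}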
It appears from the documentation that JReJSON only supports JSON.* type commands. If I want to use EXISTS on a document I created with JSON.SET, do I need a Jedis instance to test for EXISTS?
Using redis-cli I verified that a document I created with JSON.SET reports 1 when tested with EXISTS, so in theory using two different clients should work, but I wondered if there was a less clunky way?
JReJSON has a constructor which takes a Jedis pool.
/**
 * Creates a client using provided Jedis pool
 *
 * @param jedis bring your own Jedis pool
 */
public JReJSON(Pool<Jedis> jedis) {
    this.client = jedis;
}
You can use the same pool for your "standard" Redis calls and to create the JReJSON client.
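A minimal sketch of that pattern (the host, key and payload class are made up):

import redis.clients.jedis.Jedis;
import redis.clients.jedis.JedisPool;

import com.redislabs.modules.rejson.JReJSON;

public class SharedPoolExample {

    public static void main(String[] args) {
        // One pool shared by both clients (host/port are just placeholders)
        JedisPool pool = new JedisPool("localhost", 6379);

        // JSON commands go through JReJSON...
        JReJSON json = new JReJSON(pool);
        json.set("doc:1", new SomePojo("hello"));

        // ...while "standard" commands such as EXISTS use a plain Jedis from the same pool
        try (Jedis jedis = pool.getResource()) {
            boolean exists = jedis.exists("doc:1");
            System.out.println("doc:1 exists: " + exists);
        }
    }

    // hypothetical payload type, just for the example
    static class SomePojo {
        String message;
        SomePojo(String message) { this.message = message; }
    }
}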
Right now it is not possible to send an entity to Orion with a payload bigger than PAYLOAD_MAX_SIZE (1 MB).
/****************************************************************************
*
*
* PAYLOAD_MAX_SIZE -
*/
#define PAYLOAD_MAX_SIZE (1 * 1024 * 1024) // 1 MB Maximum size of the payload
Source code: Orion PAYLOAD_MAX_SIZE
We have to transfer an entity (including a map/image) through the context broker and its size is > 1 MB.
Have you foreseen making this a parameter of the docker-compose file? If not, it would be really helpful.
Thanks for your help.
Are you sure you want to store an image in the broker? You should store it in an Object Storage service, not in Orion.
Orion is suited for context information, which is basically about entities (e.g. a car) and their attributes (e.g. the speed and location associated with that car). It is not suited for large binaries (such as a PNG file) directly; the usual pattern is to store the binary in an external system and keep its URL in the entity as a reference. Have a look at this post for more details.
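To illustrate that pattern, a rough sketch (the entity id, type and image URL are made up; it assumes Orion's NGSIv2 API on the default port 1026) where only the URL of the externally stored image is kept in the entity:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class CreateEntityWithImageUrl {

    public static void main(String[] args) throws Exception {
        // The image itself lives in an external object storage; Orion only keeps the URL.
        String body = """
                {
                  "id": "urn:ngsi-ld:Map:001",
                  "type": "Map",
                  "image": {
                    "type": "Text",
                    "value": "https://object-storage.example.com/maps/floor1.png"
                  }
                }
                """;

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:1026/v2/entities")) // default Orion port
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode()); // 201 Created on success
    }
}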
How do I configure a Virtual Procedure's result to never expire in cache? For example, how would I configure the ttl in this example so the cache never expires:
"/*+ cache(pref_mem ttl:70000) */
Add /*+ cache */ (without the ttl) to the procedure definition.
See https://docs.jboss.org/author/display/TEIID/Results+Caching