How do I configure a Virtual Procedure's result to never expire in cache? For example, how would I configure the ttl in this example so the cache never expires:
"/*+ cache(pref_mem ttl:70000) */
Add /*+ cache */ (with no ttl) to the procedure definition; when the hint specifies no ttl, the cached result does not expire on a timer.
See https://docs.jboss.org/author/display/TEIID/Results+Caching
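For illustration, a minimal DDL sketch (the procedure, source, and column names are hypothetical; check the linked docs for the exact hint placement in your Teiid version):

CREATE VIRTUAL PROCEDURE getPrefs() RETURNS (pref string) AS
/*+ cache(pref_mem) */          -- no ttl, so no time-based expiration
BEGIN
    SELECT pref FROM Source.Preferences;
END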
In the PostgreSQL manual it says:
If the same channel name is signaled multiple times from the same transaction with identical payload strings, the database server can decide to deliver a single notification only.
Do you know how this "decision" is made?
That's an interesting question. Perhaps the documentation is unclear, but in my experience duplicate notifications are delivered only when subtransactions are involved.
Rather than just guessing, let's open the PostgreSQL source code. The notification function contains a check for duplicates:
/* no point in making duplicate entries in the list ... */
if (AsyncExistsPendingNotify(channel, payload))
return;
OK, but that check alone does not explain how duplicates can still occur. So we can move forward and inspect the AsyncExistsPendingNotify function. Inside it, we find our answer in a comment:
/*
* As we are not checking our parents' lists, we can still get duplicates
* in combination with subtransactions, like in:
*
* begin;
* notify foo '1';
* savepoint foo;
* notify foo '1';
* commit;
*/
So, that's it: we can get duplicate notifications when using subtransactions, because a subtransaction checks only its own pending list, not its parents' lists. The documentation could be clearer, but perhaps the PostgreSQL folks left it vague intentionally; deduplication here is an optimization, not a guarantee, so avoiding duplicates is not treated as a strict requirement.
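To see it in practice, here is a minimal two-session sketch of the scenario from that comment (exact delivery behavior may vary by server version):

-- Session A: subscribe to the channel
LISTEN foo;

-- Session B: replay the example from the source comment
BEGIN;
NOTIFY foo, '1';   -- queued in the top-level transaction's pending list
SAVEPOINT s1;
NOTIFY foo, '1';   -- queued again: only the subtransaction's own list is checked
COMMIT;

-- Session A may now receive the notification twice; without the SAVEPOINT,
-- AsyncExistsPendingNotify would have deduplicated it to a single entry.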
Is it possible to get the specific TransactionalCache for a transaction and invoke clear() on it? I'm working with a Spring context, but the transactional cache manager seems to be outside of it.
It is not possible now (the current version is 3.4.6).
TransactionalCache is a private field of the TransactionalCacheManager, which is in turn a private field of the CachingExecutor.
The cache is only cleared during queries and updates, when the mapper configuration (the flushCache attribute and the statement type) instructs MyBatis to do so; a sketch of that configuration follows.
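A minimal mapper sketch of that control, with placeholder statement ids and types:

<!-- selects normally use the cache; this one bypasses it and flushes on execution -->
<select id="selectUser" resultType="User" flushCache="true" useCache="false">
  SELECT * FROM users WHERE id = #{id}
</select>

<!-- insert/update/delete statements flush the cache by default -->
<update id="updateUser">
  UPDATE users SET name = #{name} WHERE id = #{id}
</update>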
I want to monitor all queries to my PostgreSQL instance. Following these steps, I created a custom DB parameter group, set log_statement to all and log_min_duration_statement to 1, applied the parameter group to my instance, and rebooted it. Then I fired a POST request to the instance, but no record of the query was to be found in the Recent Events & Logs tab of my instance. However, a SELECT * FROM table query in psql shows that the resource was created and the POST request worked. What am I missing in order to see the logs?
Setting log_min_duration_statement to 1 tells Postgres to only log queries that take longer than 1 ms. If you set it to 0 instead, all queries will be logged (the default, -1, disables this logging entirely).
You followed the right steps; all that is left is to make sure the parameter group is properly applied to your Postgres instance. Look at the Parameter Group entry in the Configuration Details tab of your instance and make sure it shows the right group name followed by "in-sync".
A reboot is usually required when changing parameter groups on an instance; this may be what is (or was) missing in your case.
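Once the group shows in-sync and the instance has rebooted, you can sanity-check the live values from psql:

-- verify the parameters actually took effect
SHOW log_statement;                -- expected: all
SHOW log_min_duration_statement;   -- expected: 0 (log every statement)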
I have a business scenario where, whenever a new record is loaded into a DB table,
a) A notification will be sent to the client. The notification message conveys that the data is loaded and ready for querying.
b) Upon receiving the notification, the client will make an OData query to the JBoss virtual DB. OData is supported by the Teiid VDB.
The problem is that new records (inserted via a manual/automated SQL script) are not returned in the OData query response. It always returns the cached result for the first 5 minutes, because OData has a default cache time setting of 5 minutes.
We want Teiid to always return all the records, including newly inserted ones.
I tried the following options, but they are not working as expected (https://developer.jboss.org/wiki/AHowToGuideForMaterializationcachingViewsInTeiid):
1) Cache hints
/*+ cache(ttl:300000) */ select * from Source.UpdateProduct
2) OPTION NOCACHE
This works when I make a JDBC query to the DB (see the sketch after this list).
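For reference, the JDBC form that worked is a query like this (a sketch; the table name follows the example above):

SELECT * FROM Source.UpdateProduct OPTION NOCACHE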
Please suggest how to turn off this caching in the case of an OData REST query.
I think the Teiid documentation at https://docs.jboss.org/author/display/TEIID/OData+Support could help.
You don't specify which version of Teiid you use, so I'm enclosing the most current version's documentation.
When you go through the docs page, at the bottom there is a Configuration section with several configurable options.
Doesn't the skiptoken-cache-time option serve your need? Try setting it to a lower value or zero and see if this helps. Just locate the odata war, open it, and change the WEB-INF/web.xml file, as sketched below.
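A sketch only (the option name comes from the docs above, but verify how it is declared in your version's web.xml; using 0 as a disable value is an assumption):

<!-- WEB-INF/web.xml inside the odata war -->
<init-param>
    <param-name>skiptoken-cache-time</param-name>
    <param-value>0</param-value>  <!-- assumed: 0 effectively disables the 5-minute result caching -->
</init-param>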
Jan
I want to improve my site's loading speed, so I used http://gtmetrix.com/ to check what I could improve. One of my lowest ratings is for "Leverage browser caching": I found that my files (mainly images) have the problem "expiration not specified".
Okay, the problem is clear, I thought. I started googling and found that Amazon S3 prefers Cache-Control metadata over an Expires date (I lost this link; now I think maybe I misunderstood something). Anyway, I started looking for how to add Cache-Control metadata to an S3 object. I found this page:
http://www.bucketexplorer.com/documentation/amazon-s3--how-to-set-cache-control-header-for-s3-object.html
I learned that I must add a header string to my PUT request:
x-amz-meta-Cache-Control: max-age=<value in seconds> (note: there must be no space between the equals sign and the digits; I made that mistake at first).
I used the construction Cache-Control:max-age=1296000 and it works okay.
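In raw HTTP terms, the upload request then carries the header like this (a sketch; the bucket and key are placeholders, and the required Authorization/Date headers are omitted):

PUT /images/logo.png HTTP/1.1
Host: your-bucket.s3.amazonaws.com
Cache-Control: max-age=1296000
(Authorization and Date headers omitted)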
After that I read
https://developers.google.com/speed/docs/best-practices/caching
This article told me: 1) "Set Expires to a minimum of one month, and preferably up to one year, in the future."
2) "We prefer Expires over Cache-Control: max-age because it is more widely supported." (in the Recommendations section)
So I started looking for a way to set an Expires date on an S3 object.
I found this:
http://www.bucketexplorer.com/documentation/amazon-s3--set-object-expiration-on-amazon-s3-objects-put-get-delete-bucket-lifecycle.html
And what I found: "Using Amazon S3 Object Lifecycle Management, you can define the Object Expiration on Amazon S3 Objects. Once the Lifecycle defined for the S3 Object expires, Amazon S3 will delete such Objects. So, when you want to keep your data on S3 for a limited time only and you want it to be deleted automatically by Amazon S3, you can set Object Expiration."
I don't want to delete my files from S3. I just want to add cache metadata for a maximum cache time and/or a file expiry time.
I am completely confused by this. Can somebody explain which I should use: object expiration or Cache-Control?
S3 lets you specify the max-age and Expires headers for cache control; CloudFront lets you specify the Minimum TTL, Maximum TTL, and Default TTL for a cache behavior.
These headers just tell when an object's validity expires in the cache (be it the CloudFront cache or the browser cache). To read how they are related, see the following link:
http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/Expiration.html#ExpirationDownloadDist
To leverage browser caching, just specify the Cache-Control header for all the objects in S3.
Steps for adding cache control to existing objects in your bucket:
1. git clone https://github.com/s3tools/s3cmd
2. Run s3cmd --configure
(You will be asked for the two keys; copy and paste them from your confirmation email or from your Amazon account page. Be careful when copying them! They are case sensitive and must be entered accurately or you'll keep getting errors about invalid signatures or similar. Remember to add s3:ListAllMyBuckets permissions to the keys or you will get an AccessDenied error while testing access.)
3. ./s3cmd --recursive modify --add-header="Cache-Control:public, max-age=31536000" s3://your_bucket_name/
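For new uploads, the same header can be attached at put time; a sketch with placeholder file and bucket names:

./s3cmd put --add-header="Cache-Control:public, max-age=31536000" ./images/logo.png s3://your_bucket_name/images/logo.png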
Your files won't be deleted; they just won't be served from cache anymore after the expiration date.
The Amazon docs say:
After the expiration date and time in the Expires header passes, CloudFront gets the object again from the origin server every time an edge location receives a request for the object.
We recommend that you use the Cache-Control max-age directive instead of the Expires header field to control object caching. If you specify values both for Cache-Control max-age and for Expires, CloudFront uses only the value of max-age.
"Amazon S3 Object Lifecycle Management" flushs some objects from your bucket based on a rule you can define. It's only about storage.
What you want to do is set the Expires header of the HTTP request, just as you set the Cache-Control header. It works the same way: you just have to add this header to your PUT request.
Expires doesn't take the same kind of value as Cache-Control: Expires is an absolute date, for instance Thu, 31 Jan 2013 23:59:59 GMT.
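Side by side, the two headers look like this (the values are arbitrary examples):

Cache-Control: max-age=1296000
Expires: Thu, 31 Jan 2013 23:59:59 GMT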
You may read this: https://web.archive.org/web/20130531222309/http://www.newvem.com/how-to-add-caching-headers-to-your-objects-using-amazon-s3/