How to "reset" or clear cache when calling a REST service in Xpages - rest

I have written a pretty straightforward REST service in Domino 9.0.1. I have a view with about 160K records; each record has about 10 fields, and the first field is the key.
From time to time I need to make a change to the service, and when I remake the web service call (from the URL in my web browser, for instance), the data doesn't necessarily change. It seems it is cached on the server. I believe it is a server issue, as I still get the same results even if I switch to another browser. Sometimes I will change my parameter and I get DATA FOR THE PREVIOUS PARAMETER I ENTERED. This is terrible.
How can I reset the web service or flush the cache?

You have a few options (both are sketched below):
1) Add a unique parameter to the REST URL, e.g.: http://hostname/rest/api/endpoint?systemtime=...
where you compute the systemtime value using System.currentTimeMillis().
2) Use HTTP request cache-control headers; see http://en.wikipedia.org/wiki/List_of_HTTP_header_fields#Avoiding_caching
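A minimal sketch of both options from a Java client. The host, database path, service name, and key parameter below are placeholders for your own Domino REST service, not anything taken from the question:

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class FreshRestCall {
    public static void main(String[] args) throws Exception {
        // Placeholder host, database and service path; substitute your own.
        String base = "http://hostname/db.nsf/rest.xsp/myservice?key=ABC123";

        // Option 1: append a unique value so every call has a distinct URL
        // and cannot be answered from a cached copy.
        String url = base + "&systemtime=" + System.currentTimeMillis();

        HttpURLConnection conn = (HttpURLConnection) new URL(url).openConnection();

        // Option 2: ask the server and any intermediaries not to serve a cached response.
        conn.setRequestProperty("Cache-Control", "no-cache");
        conn.setRequestProperty("Pragma", "no-cache");

        BufferedReader in = new BufferedReader(
                new InputStreamReader(conn.getInputStream(), "UTF-8"));
        String line;
        while ((line = in.readLine()) != null) {
            System.out.println(line);
        }
        in.close();
    }
}

When testing from a browser instead of code, appending the systemtime parameter by hand to the URL has the same cache-busting effect.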

Related

Azure data factory pagination doesn't work

I am working on a pipeline which executes an OAuth2 flow in order to access REST API JSON data. Once I have the bearer token, I execute a request which returns the following structure:
As you can see, since the response is quite large, paging is enabled, and as part of the response I get a link to the next page. In order to get to that resource I also need to present an MS-ContinuationToken in the headers. So, this is how I basically do it in the config of the Copy activity that I use to get the data from the REST endpoint:
The issue here is that I only get the first 2000 rows; the next page(s) don't seem to be visited at all. The pipeline executes successfully, but only the first 2000 items are fetched.
NOTE: continuationToken and links.next.headers.value have the exact same values from the initial response.
Even if you fix the other issue, you'll have an issue with the "next" URL not including "v1". This is a known issue with the Partner Center API team. I've escalated it pretty high, but they don't want to break backwards compatibility by changing the "next" URI to include the v1 or to be relative. They are considering other options, but I wouldn't hold your breath.
I would ditch the idea of using Data Factory and instead write a .NET console app using the Partner Center SDK; the loop such an app would run is sketched below.
(You might think to paginate manually with loops, etc., but the Copy activity doesn't return e.g. the HTTP headers, so you would need a complex setup to somehow store the data in a data store and be able to look up the last page in order to get the continuation token. I couldn't figure it out.)
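For what it's worth, the continuation-token loop such a console app would run looks roughly like this. It is a sketch in Java rather than .NET, against a hypothetical endpoint: the URL, the JSON-reading helpers, and the store method are assumptions, and only the MS-ContinuationToken header and the links.next idea come from the question:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class PagedFetch {
    // Hypothetical first-page URL; the real endpoint and page size are assumptions.
    private static final String FIRST_PAGE = "https://api.example.com/v1/customers?size=2000";

    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        String bearer = System.getenv("BEARER_TOKEN"); // token from the OAuth2 flow
        String nextUrl = FIRST_PAGE;
        String continuationToken = null;

        while (nextUrl != null) {
            HttpRequest.Builder req = HttpRequest.newBuilder(URI.create(nextUrl))
                    .header("Authorization", "Bearer " + bearer);
            if (continuationToken != null) {
                // The next page is only served when the token from the previous
                // response is echoed back in this header.
                req.header("MS-ContinuationToken", continuationToken);
            }
            HttpResponse<String> resp =
                    client.send(req.build(), HttpResponse.BodyHandlers.ofString());
            String body = resp.body();

            store(body); // persist this page's items in your data store

            // A real app would use a JSON parser here; these helpers merely stand in
            // for reading continuationToken and links.next.uri out of the response.
            continuationToken = extractContinuationToken(body);
            nextUrl = extractNextUri(body); // null when there is no further page
        }
    }

    private static void store(String pageJson) { /* placeholder */ }
    private static String extractContinuationToken(String json) { return null; /* placeholder */ }
    private static String extractNextUri(String json) { return null; /* placeholder */ }
}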

automatic hidden redirect from one html page to another

I'm pretty new to web programming. The whole project links two shops together.
One of our customers, who owns the first shop, has provided some sort of an Event API which sends a request to one of our simple HTML/JS files when an event occurs (different ones are specified, e.g. a new customer was created which needs to be synchronised between the shop databases). Included in this GET request is a URL as a parameter.
We have to parse this given URL, send a request to it, "read" what the actual content of the event is (name of the new customer, etc.), write it into our database, and give a response to the initial request from the other shop (success or failure).
How can I do this in a simple way in an HTML/JS file without setting up web services, etc.? Don't bother about the database actions, that's taken care of; it's about the automatic redirecting.
Many Thx ..
J

Perl Catalyst; configuring session expire time and flash behaviour

I just discovered that when I configure the session plugin of a Catalyst app (Catalyst::Plugin::Session) to expire, it screws with the flash data. More specifically, I'm finding that flash data no longer carries over to a new request.
Does this sound normal? How might I cope with this?
Perfectly normal. The whole point of sessions is to be able to associate data from one request with data in another request. When you let the session for some request expire, you are saying that that request's data shouldn't have anything to do with any future request.
More specifically, the flash data is a part of the session data -- see the _save_flash method in the Catalyst/Plugin/Session.pm file, for instance. Also see the big warning for the delete_session method:
NOTE: This method will also delete your flash data.
How to cope with it? You need to persist data from a request using any scheme other than the Session plugin. Without knowing more about your app, what data you are trying to persist, and how you will associate data from an old session with a new request, I couldn't begin to make a more specific recommendation than that.
When configuring the session, for example with a database backend, you'll have to add flash_to_stash as an option:
<session>
    dbi_dbh DB
    dbi_table sessions
    dbi_id_field id
    dbi_data_field session_data
    dbi_expires_field expires
    flash_to_stash 1
    expires 3600
</session>

What is the difference between Zend_Cache_Frontend_Capture and Zend_Cache_Frontend_Page

Can someone explain the difference between these two frontends,
Zend_Cache_Frontend_Capture and Zend_Cache_Frontend_Page?
The Capture frontend is the default one for page caching ... the weird thing is, it builds the cache ID from the GET variables, but there is no option to set make_id_with_get_variables like there is in the Page frontend....
Can someone explain this?
Here is my effort to explain the differences between the two.
To start out, let's look at Zend_Cache_Frontend_Capture. The reference states that this class is designed to work only with Zend_Cache_Backend_Static.
You would use Zend_Cache_Frontend_Capture to cache entire pages that have no relation to the user accessing the site. You use this frontend when you have static data (that could change from time to time) that has no relation to the current user, that is, it is the same for all users (like an RSS feed or a dynamically created JavaScript file, for example).
Looking further into the Zend_Cache_Backend_Static, you will see that this backend is a bit special. It requires rules in your .htaccess file to assist with serving the cache. Once you have cached something with Frontend_Capture/Backend_Static, PHP and Zend Framework are NOT used in order to serve the cached data. Apache sees that the cache file exists based on your .htaccess and serves the content directly to the user without invoking PHP.
Zend_Cache_Frontend_Page, on the other hand, works differently. With it, you can cache content not only based on the request URI, but also based on information in a cookie, session, GET, or POST parameters. By default, caching based on cookie, session, GET, and POST is disabled, so for this to have any effect on a user logged into your site, you have to tell the cache if there are any pages you want to cache based on that information.
Once I create a cache and tell it I want to cache based on cookie and session, I can now cache a dynamically generated page that is specific to one user. So if person A accesses /accounts/, the page can be cached for that specific user containing the list of their accounts that was pulled from the database. Now when person B accesses /accounts/, they do not see the cache for person A, so the page is now cached separately for them with each respective user's information in their own cache.
In summation:
Use the Capture frontend when you have data you can cache that is the same for ALL users. This will be a higher-performance cache since PHP and ZF are not needed once the page is cached. The downside is having to add caching rules to .htaccess.
Use the Page frontend if you want to cache pages with dynamic output based not only on the request URI, but also on cookies, session data, or GET/POST parameters.
Hope that is clear and helps you understand the differences.
EDIT:
I believe I see what the problem is, not sure if this is classified as a bug or not though.
Zend_Controller_Action_Helper_Cache::preDispatch() generates the cache ID based on the request URI (which includes the query string). Since the jQuery ticker appends a query string to the URL, you are caching one copy of the feed for each request URI. (Look for $reqUri in the aforementioned class method).
I see a couple of options: 1) See if you can get the ticker to not append the query string (at least for that specific URL), or 2) Manually start the Capture cache and pass your own ID, rather than letting the cache helper generate it based on the request URI.

Why doesn't the GWT URL change on an event or a service call?

I have two questions:
Q: 1
I'm currently developing a GWT app. The entry point for the app is ImageViewer.java. I can access it at http://127.0.0.1:8888/ImageViewer.html?gwt.codesvr=127.0.0.1:9997. I have a service called "Search" which has the corresponding "Async" and "Impl" defined. Now, I call the service from the client side using RPC. I can call the service and obtain the return value. Everything works fine.
However, I expected the application to show a change in the URL, i.e. when a service is being accessed, I thought it would be reflected in the browser's URL as something like http://127.0.0.1:8888/search?gwt.codesvr=127.0.0.1:9997, since I've modified web.xml. However, this behavior is not realized. Any particular reason why this is not reflected?
Q:2
This one is the reverse of the previous question, i.e. I have an application running. Let's say it has an entry point class (ImageViewer.java) and another composite class (searchClass.java) which would be loaded on the ImageViewer based on an event. This searchClass invokes the "search" service mentioned in the previous question.
I could load the "searchClass" in "Imageviewer", invoke the service, and the service also returns the value needed. Everything works fine... But,
I need something like this: by just typing this query string:
http://127.0.0.1:8888/search?value=John
I want the "searchClass" to be loaded on the "ImageViewer", call the service using the value (which is "John" in this case), and display the result. Is this possible at all?
What I've tried: I have created a servlet class on the server, mapped it to the URL, and the search works. The search returns appropriate results. However, I want the results from the server to be displayed on the client. Remember, I'm directly using a servlet to read the URL, so there is no value being passed from client to server.
Thanks in advance.
A: 1. To change the URL (the hash part), you need to set a new history token via the History class. More about history management is in this article.
A: 2. For the second part, you could achieve it by changing the history token, for instance "http://127.0.0.1/search#value=John". The history service will trigger an event if the # part changes. You could also use the part with "?", as in your example, if you use Window.Location, but it will cause a reload of the application, which would put the whole idea of using GWT in question.
RPC (AJAX) calls are done via XHR and do not change the browser URL.
You can't (with the URL you presented). GWT apps normally run in one web page, i.e. the URL does not change (see how Gmail changes the browser URL bar). What you can do is enable GWT history support. Then your URL would be http://host/#search?value=query
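A minimal sketch of the history-token approach both answers describe, assuming the entry point is ImageViewer; the token format and the runSearch call on the search composite are hypothetical placeholders, while the History API itself is standard GWT:

import com.google.gwt.core.client.EntryPoint;
import com.google.gwt.event.logical.shared.ValueChangeEvent;
import com.google.gwt.event.logical.shared.ValueChangeHandler;
import com.google.gwt.user.client.History;

public class ImageViewer implements EntryPoint {

    public void onModuleLoad() {
        // React whenever the part after "#" changes, e.g. #search?value=John
        History.addValueChangeHandler(new ValueChangeHandler<String>() {
            public void onValueChange(ValueChangeEvent<String> event) {
                String token = event.getValue();
                if (token.startsWith("search?value=")) {
                    String value = token.substring("search?value=".length());
                    // Hypothetical: load the searchClass composite and fire the
                    // "Search" RPC with the extracted value.
                    // searchClass.runSearch(value);
                }
            }
        });
        // Also handle the token present when the page is first opened with a
        // "#..." URL, e.g. http://host/ImageViewer.html#search?value=John
        History.fireCurrentHistoryState();
    }
}

Elsewhere in the app, calling History.newItem("search?value=John") both updates the URL fragment and fires the same handler, so navigation, bookmarking, and direct links all go through one code path.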