Disable session timeout in just one jsp in Liferay - liferay-6

I have set the session timeout in my Liferay 6.x to ten minutes and it works great, but now I need to override it with a larger value on just one page of the site, as it has a pretty long read and my customers can't finish it.
Is there some magic JavaScript, or do I need to move that JSP into a portlet by itself, or what?
EDIT: There's an AUI().ready in a main.js, maybe there?

You can call Liferay.Session.extend() from JavaScript in a loop, say every 9 minutes, until the user leaves the page.
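A minimal sketch of that keep-alive loop. Liferay.Session.extend() is the real Liferay 6.x API; the function name, the constant, and the 9-minute interval are my own choices:

```javascript
// Extend the Liferay session every 9 minutes (the portal timeout is 10)
// so it never expires while the user stays on this one page.
var KEEPALIVE_MS = 9 * 60 * 1000;

function startSessionKeepAlive(extendFn, intervalMs) {
  // Returns the timer id so the caller can stop it with clearInterval().
  return setInterval(extendFn, intervalMs);
}

// On the long-read page only (e.g. inside the AUI().ready in main.js):
if (typeof Liferay !== 'undefined' && Liferay.Session) {
  startSessionKeepAlive(function () {
    Liferay.Session.extend();
  }, KEEPALIVE_MS);
}
```

Since the script only runs on that one JSP, the rest of the site keeps the ten-minute timeout.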

Related

How to add wait / Delay until web page is fully loaded in Automation Anywhere?

I want to know how to add a wait or delay until the web page is fully loaded in Automation Anywhere.
I used
wait for screen change
but it holds the process only for an amount of time specified by the developer, whereas I want a delay that lasts until the web page has fully loaded.
Can anyone help me?
Sorry for the bad English.
Usually, a website is "loaded" or "ready" before the actual content is loaded. Some websites even have dummy content which is replaced once the actual content is retrieved from 'somewhere'. Hence waiting for the screen to change is not a good idea.
My approach is to pick an element which you know is loaded after the element you want to interact with. For instance the navigation bar on this website is loaded before the comments are. You can either figure out which element to use by looking at the source of the website by right-clicking anywhere and selecting view source or by simply refreshing the page a couple of times and eye-balling it. The former requires some HTML knowledge, but is a better approach in my opinion.
Once you've identified your element, use Object Cloning on said element and use the built-in wait as a delay (usually set to 15 sec, depending on the website/connection). The Action should be some random get property (store whatever you retrieve in some dummy variable as we're not going to use it anyway).
Object Cloning's wait function polls every so many milliseconds and once the element is found it will almost instantaneously go to the next line in the code. This is where you interact with your target element.
This way you know your target element is loaded and the code is very optimized and robust.
On a final note: It's usually a good idea to surround this with some exception handling as automating websites is prone to errors.
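The poll-until-found behaviour that Object Cloning's wait performs can be sketched like this (findFn and the other names are hypothetical stand-ins for the element lookup; a real implementation would sleep pollMs between tries rather than loop hot):

```javascript
// Check for the element every pollMs until it appears or timeoutMs is
// exceeded; return it as soon as it is found, throw on timeout.
function waitForElement(findFn, pollMs, timeoutMs) {
  for (var waited = 0; waited <= timeoutMs; waited += pollMs) {
    var el = findFn();
    if (el) return el; // found: move to the next step almost instantly
    // a real implementation sleeps pollMs here before retrying
  }
  throw new Error('element not found within ' + timeoutMs + ' ms');
}
```

The key property is the one described above: the delay lasts exactly as long as the page needs, instead of a fixed time picked by the developer.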
A very simple alternative is to run your automation while watching it and determine how long the webpage takes to load. You can then add a Delay rather than a Wait if you know the page generally loads within 30 seconds or so.

Nutch - How to crawl only urls newly added in the last 24 hours using nutch?

I'm using Nutch 1.7 and everything seems to be working just fine. However, there is one big issue I don't know how to overcome.
How can I crawl ONLY urls newly added in the last 24 hours? Of course we could use adaptive fetching, but we hope there is a better way that we are not yet aware of.
We only need the urls that were added in the last 24 hours, as we visit our source websites every day.
Please let me know if nutch can be configured and setup to do that or if there is a written plugin for crawling only urls added in the last 24 hours.
Kind regards,
Christian
You get your new urls by parsing HTML, and there is no way to tell an anchor's lifetime just by parsing an
<a>
tag.
You have to keep a list of old urls in your DB so you can skip them.
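The skip-list idea from this answer, sketched in JavaScript (the Set stands in for the DB table of already-seen urls; in a Nutch setup this filtering would happen outside the crawl, e.g. before injecting the seed list):

```javascript
// Remember every url already crawled; after each parse, keep only the
// urls that were not present on the previous visit.
var seen = new Set();

function newUrls(parsedUrls) {
  var fresh = parsedUrls.filter(function (u) { return !seen.has(u); });
  fresh.forEach(function (u) { seen.add(u); }); // remember for next run
  return fresh; // only the urls added since the last visit
}
```

Running this once per day gives exactly the urls added in the last 24 hours, without relying on adaptive fetching.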

Save history in iPhone webapp

I made an iPhone web app that allows a user to consult a remote database. At some point the user has to enter a code which is quite long (about 17 digits). I would like to make the web app remember the last 3 or 5 codes he typed.
How can I achieve this using the cache-manifest? (I have never used it but it looks like the right solution).
Thank you for your attention.
In the end, I figured out that using cookies would be the easiest way to do it, and luckily cookies are actually kept for web apps. I will struggle with the cache manifest in another episode.
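A sketch of that cookie approach. The list logic is a pure function so it is easy to follow; the cookie name lastCodes and the five-entry cap are my own choices:

```javascript
// Keep the last few codes the user typed, newest first, no duplicates.
var MAX_CODES = 5;

function rememberCode(codes, code) {
  var next = [code].concat(codes.filter(function (c) { return c !== code; }));
  return next.slice(0, MAX_CODES); // cap the history
}

// Browser side: persist the list in a cookie for a year.
function saveCodes(codes) {
  document.cookie = 'lastCodes=' + encodeURIComponent(codes.join(',')) +
    ';max-age=' + 365 * 24 * 60 * 60;
}
```

On page load, read the cookie back, split on the comma, and offer the entries in a picker so the user never retypes a 17-digit code.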

How are real time updates in Facebook, Twitter and Google Plus performed

When I implement a real-time update system on my site, I usually make an Ajax call, say every 5 seconds, to a processing file, say getUpdates.php (not sure if that's the right way to do it), get the updates from there and display them. When I do that and look at Firebug or the developer tools in Chrome and Safari, I can see the file being called every 5 seconds in the XHR section of the tool, after which the updates are displayed.
In the case of Google Plus, Twitter and Facebook, I don't see such regular calls, although updates appear right in front of me.
How are they doing it, or am I just not noticing the regular calls?
They use "long polling", I think. Sounds like a fine excuse to dabble with Node.js if you ask me. :)
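The difference from the 5-second timer is that a long-poll request is answered only when the server has news, and the client reconnects immediately afterwards, so the dev tools show one long-lived request instead of a steady drumbeat. A minimal sketch of the client loop (requestFn stands in for your XHR/fetch call to something like getUpdates.php; all names are illustrative):

```javascript
// Open a request, wait for the server to answer with an update, display
// it, then immediately open the next request.
function longPoll(requestFn, onUpdate, shouldStop) {
  requestFn(function (update) {
    onUpdate(update);
    if (!shouldStop()) longPoll(requestFn, onUpdate, shouldStop);
  });
}
```

The server side holds the connection open until there is something to send, which is why an event-driven runtime like Node.js is a natural fit.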

Pages take forever to load while using TinyMCE

I have a few forms that use TinyMCE. I have noticed recently that the page takes forever to load (over 2 minutes); as soon as I comment out the textarea that uses TinyMCE, the page loads just fine (under 5 seconds). I have no clue what is going on, since it was working just fine on my local machine until last week. I'm using Apache 2, PHP 5, MySQL and xajax.
I have been using Xdebug to find out what is wrong, and all the code finishes running on the server side, but the browser keeps waiting for the page to finish loading, making navigation and the form impossible to use.
Any leads on what could be going on would be of great help.
Have you taken a look at Firebug's Net tab to see whether some file is taking too long to load?