Deleting page version numbers in form action URLs in Wicket for stress-testing purposes

I want to stress test a system based on Apache Wicket, using Grinder.
I used Grinder's TCPProxy tool to record a test session in my application and then fed the generated script to Grinder to stress test the system, but we found that the tests weren't carried out successfully.
After a lot of tweaking and debugging, we found that the problem lay in Wicket's URL generation, which mixes the page version number into its URLs.
So I searched and found solutions for removing that page version number from the URLs (like this), applied them, and they did remove the version numbers from the URLs used in the browser. But the tests still didn't work.
I inspected further and found that even though the URLs are clean now, the action attributes of forms still use URLs containing the page version number, like this one: ./?4-1.[wicket-path of the form]
So, is there any way to remove these version numbers from form URLs as well? If not, is there any other way to overcome this problem and stress test a Wicket web application?
Thanks in advance

I have not used Grinder, but I have successfully load-tested my Wicket application using the JMeter proxy, without changing Wicket's default versioning mechanism.
Here is the JMeter step-by-step link for your reference:
https://jmeter.apache.org/usermanual/jmeter_proxy_step_by_step.pdf
Basically, all I did was run the proxy server to accept web requests from the browser and capture the test scenarios. Once you are done collecting the samples, change the target host URL to whichever server you want to point to (other than your localhost).
Alternatively, there is another load-testing tool, BlazeMeter (compatible with JMeter). You could add its Chrome browser plugin to get a quick feel for it.
Also, you might want to consider mounting your packages to individual URLs for 'cleaner' URLs. That way, you get a known set of URLs for the pages within the same package (for example, /reports for all the report pages within the reports package); a sketch of this is shown below.
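A minimal sketch of that mounting approach; mountPackage is a standard Wicket WebApplication call, but the class and package names here are invented for illustration:

public class MyApplication extends org.apache.wicket.protocol.http.WebApplication {
    @Override
    public Class<? extends org.apache.wicket.Page> getHomePage() {
        return HomePage.class; // HomePage is a made-up page class for this sketch
    }

    @Override
    protected void init() {
        super.init();
        // Mount every page class in SalesReportPage's package under /reports,
        // so each page gets a stable, readable URL like /reports/SalesReportPage
        mountPackage("/reports", SalesReportPage.class);
    }
}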
Hope this helps!
-Mihir.

You should not ignore/remove the pageId from the URLs. If you remove it, you will request a completely new instance of the page, i.e. you will lose any state from the original page.
Instead of using the href when recording, you need to use the attribute set (by you!) with org.apache.wicket.settings.DebugSettings#setComponentPathAttributeName(String).
So Grinder/JMeter/Gatling/... should keep track of this special attribute instead of 'href', and later find the link to click by using a CSS/XPath selector.
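A minimal sketch of that setting, assuming a Wicket version that provides DebugSettings#setComponentPathAttributeName; the attribute name "data-wicket-path" is a choice made for this example, not a Wicket default:

public class MyApplication extends org.apache.wicket.protocol.http.WebApplication {
    @Override
    protected void init() {
        super.init();
        // Render each component's Wicket path as an extra markup attribute,
        // giving the load-test script a stable hook that survives page versioning
        getDebugSettings().setComponentPathAttributeName("data-wicket-path");
    }

    @Override
    public Class<? extends org.apache.wicket.Page> getHomePage() {
        return HomePage.class; // made-up home page for the sketch
    }
}

The recorded script can then locate a link with a selector such as a[data-wicket-path='form:submitLink'] (the path value is hypothetical) and follow whatever href that element currently carries, version number and all.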
P.S. If you are not afraid of writing some Scala code then you can take a look at https://github.com/vanillasource/wicket-gatling.

Related

How to reverse engineer a progressive web app?

I found this free PWA, https://www.the-qrcode-generator.com, and now wonder how I could build one like it myself.
Since I couldn't find any access to its source code, I wondered if it would be difficult to reverse engineer.
I'm interested in building a PWA with QRCode functionality.
This one was created with AngularJS v1.3.20. You can find the source in your browser's developer tools, under the Sources tab. You can easily beautify the code inside the window to make it readable.
If you want to know how they organized their REST API, the browser's Network tab helps a lot: just filter by XHR and examine all the calls from the front end to the back end.
The front end is very hard to reverse engineer, because most sites are served as minified bundles, so you can't see the original code.
You can, however, find some other information about what was used to build it. For example, in the HTML source you can see some ng-* attributes, which indicate Angular; you can also see that body has the attribute data-ng-app, meaning this is AngularJS, and so on.
As for the QR logic, you can see that there are no back-end calls, meaning it is implemented entirely on the client. I would search for already available libraries for that; an example follows below.
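To illustrate how little code an off-the-shelf library needs (a hedged sketch, not what this particular site uses): with the ZXing library in Java, generating a QR code takes a few lines. A browser PWA would use an equivalent JavaScript library, but the idea is the same.

import com.google.zxing.BarcodeFormat;
import com.google.zxing.WriterException;
import com.google.zxing.client.j2se.MatrixToImageWriter;
import com.google.zxing.common.BitMatrix;
import com.google.zxing.qrcode.QRCodeWriter;
import java.io.IOException;
import java.nio.file.Paths;

public class QrDemo {
    public static void main(String[] args) throws WriterException, IOException {
        // Encode a URL into a 300x300 QR code and write it out as a PNG
        BitMatrix matrix = new QRCodeWriter()
                .encode("https://example.com", BarcodeFormat.QR_CODE, 300, 300);
        MatrixToImageWriter.writeToPath(matrix, "PNG", Paths.get("qr.png"));
    }
}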

Perl: Parsing AJAX loaded content

This is an age-old question regarding Perl web scrapers after Web 2.0: they simply cannot parse dynamically loaded pages, because they need some sort of JavaScript engine to render the page. The issue is more involved than just rendering JavaScript, since Perl would also have to manage and maintain the DOM.
It seems WWW::Selenium and WWW::Mechanize::Firefox are able to accomplish this by using Firefox (or other browsers) to do the rendering. However, V8 has become very popular (as seen with Node.js), so I'm curious whether there are any new libraries that utilize it, or whether a browser-independent solution has since appeared that I'm not aware of.
I might usually consider this a closable question, but with so few results on Google and Stack Overflow, there can't be too many solutions (if any).
Related (older) Questions:
How can I use Perl to grab text from a web page that is dynamically generated with JavaScript?
How can I handle Javascript in a Perl web crawler?
You mentioned Selenium, but there is the later Selenium::Remote::Driver, which works with a Selenium 2.0 hub (see the sketch after the quote below).
I see you can also use it without a Selenium hub:
Without Standalone Server ( I haven't used this part)
As of v0.25, it's possible to use this module without a standalone
server - that is, you would not need the JRE or the JDK to run your
Selenium tests. See Selenium::Chrome, Selenium::PhantomJS, and
Selenium::Firefox for details. If you'd like additional browsers
besides these, give us a holler over in Github.
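Every Selenium client, Selenium::Remote::Driver included, speaks the same WebDriver wire protocol, so the render-then-scrape flow looks alike across languages. A minimal sketch using Selenium's Java client (the hub URL, page URL, and element id are placeholders; the Perl calls are analogous):

import java.net.URL;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.remote.DesiredCapabilities;
import org.openqa.selenium.remote.RemoteWebDriver;

public class AjaxScrape {
    public static void main(String[] args) throws Exception {
        // Connect to a running Selenium hub; the real browser does the JS rendering
        WebDriver driver = new RemoteWebDriver(
                new URL("http://localhost:4444/wd/hub"),
                DesiredCapabilities.firefox());
        try {
            driver.get("http://example.com/ajax-page");
            // Read content that only exists after the AJAX calls have run
            String text = driver.findElement(By.id("loaded-content")).getText();
            System.out.println(text);
        } finally {
            driver.quit();
        }
    }
}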
PhantomJS may be of interest, as it is a headless browser.
This is probably not an answer but it was too long for a comment

Practical way to use DocPad for a site with multiple landing pages

I would like to use DocPad together with its built-in server.
For a website with a single landing page I have set up the structure as recommended on the DocPad website: all page sources go into src/documents, static files into src/files, and layouts for the page sources into src/layouts. Then docpad run generates the resulting site in the out directory and launches a web server, with which I can inspect the current state in my browser.
Now, I would like to do the same with multiple landing pages. That means, I plan to deploy files to one site, http://site-one.org and other files to another site http://site-two.org. I am doing this by deploying web pages to two distinct directories /public/site-one/ and /public/site-two/ at my web hoster.
What is the most practical way to accomplish this with DocPad?
These are the things I have tried so far -- they both work (sort of), but neither is very elegant:
Let page sources go into src/documents/site-one and src/documents/site-two, and static files into src/files/site-one and src/files/site-two. This renders the entire site properly, and it can be uploaded easily. However, the DocPad infrastructure with live reload and the built-in server no longer works (since the built-in server's root directory points to out/ instead of out/site-[one|two]/, as it should).
Maintain two separate DocPad installations with duplicate docpad.coffee files, duplicate plugins, and partially duplicate src/* trees, and then upload each resulting out tree to the corresponding subdirectory on the server.
Update: When using option 1, instead of using DocPad's built-in live-reload feature, it is possible to run one's own web server, point it to out/site-one on one port and out/site-two on another, and then use the live-reload feature that is part of grunt-contrib-watch, where grunt is available as a plugin here. It requires adding a single line of code to the template file (see this link) and configuring the plugin (see this link).
Update 2: A possible solution would involve setting a custom directory for the server. By default it is set to the out directory, and while that can be changed, I have found no configuration option that lets the generate action and the watch action use separate directories.
It turns out that this is much easier than I thought. DocPad has a scarcely documented feature called "environments", which lets you override several of the regular configuration variables per environment, making it easy to pick the desired landing page on the command line.
This blog post on multiple languages demonstrates exactly how to customize both the documents path and the output path per environment. Applied to my original problem, it becomes:
docpadConfig = {
    ...
    environments:
        # NB: keys containing hyphens must be quoted in CoffeeScript
        'site-one':
            documentsPaths: ['documents_site-one']
            outPath: 'out_site-one'
        'site-two':
            documentsPaths: ['documents_site-two']
            outPath: 'out_site-two'
    ...
}
Then it is as simple as
docpad [generate|run|...] --env [site-one|site-two|...]
to run regular commands like generate etc. for one of the landing pages by picking the proper custom environment.

SilverStripe CMS times out when changing pages in the CMS

I have installed SilverStripe on several servers successfully in the past (but I'm not a SilverStripe expert). This time my SilverStripe install fails to work, and I'm at a loss as to how to fix it.
The Problem
SilverStripe 2.4.6 installed correctly on the server (AFAIK).
The front-end works as expected. (Show default theme. Pages all load correctly.)
I am able to log into the CMS admin section successfully. The CMS loads, but when changing site pages in the CMS using the browser pane on the left, the CMS shows the circular loading symbol and the new page load never completes.
Using the Firebug console in Firefox: when attempting to change pages in the CMS (by clicking on the page browser pane), the CMS tries to load two pages, and the second page request 404s.
The first GET request is from the initial page load.
The following POST+GET requests fire when clicking on the page tree to change pages.
Attempting to Find the Solution
I've tried deleting and re-installing SilverStripe twice (2.4.7 and 2.4.6). Both times the problem recurred.
A strange thing is that this server already runs two other SilverStripe sites (both of which I installed without a hitch). All three websites are accessed via different domains. I tried accessing this install via another domain, thinking there might be something wrong with how this third domain is configured, but that didn't help either.
What should I try now? I'm stumped.
Thanks in advance.
Responses to Comments
Check your root .htaccess file. Make sure RewriteBase is set to /
Checked. Full .htaccess on PasteBin
Indeed the JavaScript URL is strange. Check if there is anything unusual about what's being returned from the previous POST request. Is the site running in dev, test or live mode?
I can't see anything unusual in the POST request.
Clue found: The site is running in DEV mode. After switching to LIVE mode the problem disappears. Also, the second GET request only shows up in DEV mode.
Example POST request with response.
Example GET request with response.
This is more of a workaround than a fix, but if you'd rather be coding than bug hunting it might be worth a go! (Remember to log out of SilverStripe before applying this fix.)
In your mysite/_config.php file, change
Director::set_environment_type("dev");
to
// Stay in dev mode by default, but switch to live mode whenever the
// request carries an isDev parameter (e.g. /admin?isDev=0), so the CMS
// can be used without hitting the dev-mode bug.
if (!isset($_GET['isDev'])) {
    Director::set_environment_type("dev");
} else {
    Director::set_environment_type("live");
}
Then you can develop the website in dev mode as normal; to use the admin in live mode and avoid the bug, go to: http://{your_domain}/admin?isDev=0
N.B. I might find a proper answer when pastebin.com isn't overloaded and I can see your responses!

Detecting what &gwt.codesvr should be set to in non-GWT pages in a GWT/servlet app?

We have an application that is built exclusively in dev mode, using the embedded Jetty server that comes with GWT. We also host on Jetty.
There are a number of pages we use for development only, to do things like simulate SSO requests, view emails that were sent through the system, and check what files were uploaded.
When we try to link from these pages into a GWT page, the problem is that &gwt.codesvr=192.168.0.101:9997 is not included in the URL, and we get the error message "GWT module 'YourApp' may need to be (re)compiled". Obviously I can paste in "&gwt.codesvr=192.168.0.101:9997" manually, but that is very annoying. Does anybody know of a way to detect that you are in the embedded Jetty dev-mode server and automatically generate links with the correct "&gwt.codesvr=192.168.0.101:9997" appended?
Try this solution: https://stackoverflow.com/a/9122167/970308
I've updated this bookmarklet. It isn't perfect, but it makes things quick while developing.
I suggest you create a Filter that simply redirects you to an address with &gwt.codesvr=192.168.0.101:9997 appended as soon as you navigate to one of the "development pages". If the codesvr parameter is specific to each developer, each developer can set it in a cookie, and the filter can simply take the value from that cookie; a sketch follows below.
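A minimal sketch of such a filter using the javax.servlet API; the cookie name "gwtCodesvr" and the filter's URL mapping are made up for this example:

import java.io.IOException;
import javax.servlet.*;
import javax.servlet.http.*;

public class DevModeRedirectFilter implements Filter {
    @Override
    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        HttpServletRequest request = (HttpServletRequest) req;
        HttpServletResponse response = (HttpServletResponse) res;

        // Only rewrite when the parameter is missing and a per-developer
        // cookie holds the code server address (e.g. "192.168.0.101:9997")
        if (request.getParameter("gwt.codesvr") == null) {
            String codesvr = readCookie(request, "gwtCodesvr");
            if (codesvr != null) {
                String query = request.getQueryString();
                String url = request.getRequestURI()
                        + (query == null ? "?" : "?" + query + "&")
                        + "gwt.codesvr=" + codesvr;
                response.sendRedirect(url);
                return;
            }
        }
        chain.doFilter(req, res);
    }

    private String readCookie(HttpServletRequest request, String name) {
        if (request.getCookies() != null) {
            for (Cookie c : request.getCookies()) {
                if (name.equals(c.getName())) return c.getValue();
            }
        }
        return null;
    }

    @Override public void init(FilterConfig cfg) { }
    @Override public void destroy() { }
}

Map the filter to the development pages only (e.g. in web.xml), so production traffic never pays for the redirect; each developer stores their own host:port in the cookie once.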