How to rewrite URLs on JBoss

I am developing a Java EE project (using EJB3, JSF, and Maven) running on the JBoss AS 4.2.x.GA app server.
I want to rewrite my URLs while passing param values between pages.
For instance, when the user clicks a submit button, some params are added to the end of the URL; however, I want it to be cleaner, like:
../testApp/testPage/12 instead of ../testApp/testPage.jsf?id=..
How can I achieve that?

The most widely used solution in Java is the UrlRewriteFilter.
The newer versions also have a syntax that looks very similar to the widely used and well-known mod_rewrite one (since this is what most Apache httpd based servers use).
You can find documentation and examples there, and many solutions on the Google group too, since what you mention in your question is a very common requirement for many applications.
Also note that you might need both inbound and outbound rules for rewriting (you'll find examples there), as the UrlRewriteFilter can't automatically calculate "the inverse" of a rewrite expression.
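For instance, a minimal urlrewrite.xml for the URL from your question might look like the sketch below (the patterns and the id parameter are assumptions based on your example; the filter itself is registered in web.xml under the class org.tuckey.web.filters.urlrewrite.UrlRewriteFilter, mapped to /*):

<urlrewrite>
    <!-- Inbound: map the clean URL to the real JSF view -->
    <rule>
        <from>^/testPage/([0-9]+)$</from>
        <to>/testPage.jsf?id=$1</to>
    </rule>
    <!-- Outbound: rewrite links the application renders back to the clean form -->
    <outbound-rule>
        <from>^/testPage\.jsf\?id=([0-9]+)$</from>
        <to>/testPage/$1</to>
    </outbound-rule>
</urlrewrite>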
For rewriting solutions in general, if you are not quite fluent with regular expressions, it makes sense to install some sort of regex plug-in in your favorite IDE to try those rewrite expressions first (it saved me a lot of time in the past :) ).

Related

How to call SWRL rules from NetBeans and retrieve data in NetBeans

I really need your help regarding calling SWRL rules from NetBeans and retrieving the data in NetBeans.
I have servlet and JSP pages in my NetBeans project, and I have the OWL API as well.
I have an ontology in Protégé and two simple SWRL rules inside Protégé as well.
I'm new to this field and need to know how to call classes from the OWL API, how to send a request to Protégé, and how to return the results of the SWRL rules to NetBeans via a servlet.
Your help would be appreciated.
Sincerely,
--
Mehdi Tarabi
Getting the results of SWRL rules requires a reasoner that supports them. The results of reasoning with SWRL rules are ordinary axioms; there is no special method to obtain them. Protégé is not required for this purpose; perhaps you're planning to use the SWRLAPI project?
Update: After reading the comments below, I'm convinced your best bet is using the SWRLAPI project. See here for its documentation, and especially the section describing how to run SWRLAPI outside Protégé:
If you'd like to be able to execute SWRL rules or SQWRL queries you will need a SWRLAPI-based rule engine implementation. Currently, a Drools-based SWRL rule engine implementation is provided. This implementation is also hosted on Maven Central. Its dependency information can be found here: https://maven-badges.herokuapp.com/maven-central/edu.stanford.swrl/swrlapi-drools-engine
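As a minimal sketch, here is how a SQWRL query can be run with SWRLAPI's Drools engine (the ontology file name and the Person class are assumptions about your project; the swrlapi and swrlapi-drools-engine artifacts must be on the classpath):

import java.io.File;
import org.semanticweb.owlapi.apibinding.OWLManager;
import org.semanticweb.owlapi.model.OWLOntology;
import org.semanticweb.owlapi.model.OWLOntologyManager;
import org.swrlapi.factory.SWRLAPIFactory;
import org.swrlapi.sqwrl.SQWRLQueryEngine;
import org.swrlapi.sqwrl.SQWRLResult;

public class SwrlDemo {
    public static void main(String[] args) throws Exception {
        // Load the ontology exported from Protégé (file name is hypothetical)
        OWLOntologyManager manager = OWLManager.createOWLOntologyManager();
        OWLOntology ontology = manager.loadOntologyFromOntologyDocument(new File("myOntology.owl"));

        // Create a Drools-backed SQWRL query engine
        SQWRLQueryEngine queryEngine = SWRLAPIFactory.createSQWRLQueryEngine(ontology);

        // Run a SQWRL query; "Person" is assumed to be a class in your ontology
        SQWRLResult result = queryEngine.runSQWRLQuery("q1", "Person(?p) -> sqwrl:select(?p)");
        while (result.next()) {
            System.out.println(result.getNamedIndividual("p"));
        }
    }
}

A servlet can run the same code and write the results into its response, so nothing here depends on Protégé being installed.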

Deleting page version numbers in form action URLs in Wicket for stress testing purposes

I want to stress test a system based on Apache Wicket, using Grinder.
What I did was use Grinder's TCP proxy tool to record a test session in my application and then feed the generated test script to Grinder to stress test the system; but we found that the tests weren't carried out successfully.
After a lot of tweaking and debugging, we found that the problem was within Wicket's URL generation system, which mixes the page version number into its URLs.
So I searched and found solutions for removing that page version number from the URLs (like this), used them, and they worked: the version numbers disappeared from the URLs used in the browser. But then again, the tests didn't work.
So I inspected further and found that even though the URLs are clean now, the action attribute of forms still uses URLs mixed with the page version number, like this one: ./?4-1.[wicket-path of the form]
So is there any way to remove these version numbers from form URLs as well? If not, is there any other way to overcome this problem and be able to stress test a Wicket web application?
Thanks in advance
I have not used Grinder, but I have successfully load-tested my Wicket application using the JMeter proxy, without changing Wicket's default versioning mechanism.
Here is the JMeter step-by-step link for your reference:
https://jmeter.apache.org/usermanual/jmeter_proxy_step_by_step.pdf
Basically, all I did was run the proxy server to accept web requests from the browser and capture the test scenarios. Once you're done collecting the samples, change the target host URL to whichever server you want to point to (other than your localhost).
Alternatively, there is another load-testing tool, BlazeMeter (compatible with JMeter). You could add its Chrome browser plugin to get up to speed quickly.
Also, you might want to consider mounting your packages to individual URLs for 'cleaner' URLs. That way, you have a set of known URLs generated for the pages within the same package (for example, /reports for all the report pages within the reports package).
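A minimal sketch of mounting in a Wicket application class (the page class names are hypothetical):

import org.apache.wicket.Page;
import org.apache.wicket.protocol.http.WebApplication;

public class MyApplication extends WebApplication {

    @Override
    public Class<? extends Page> getHomePage() {
        return HomePage.class; // hypothetical home page
    }

    @Override
    protected void init() {
        super.init();
        // Mount every page in ReportsPage's package under /reports
        mountPackage("/reports", ReportsPage.class);
        // Or mount a single page at one fixed, predictable URL
        mountPage("/orders", OrdersPage.class);
    }
}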
Hope this helps!
-Mihir.
You should not ignore/remove the pageId from the URLs. If you remove it, you will request a completely new instance of the page, i.e. you will lose any state from the original page.
Instead of using the href when recording, you need to use the attribute set (by you!) with org.apache.wicket.settings.DebugSettings#setComponentPathAttributeName(String).
So Grinder/JMeter/Gatling/... should keep track of this special attribute instead of 'href' and later find the link to click by using a CSS/XPath selector.
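A minimal sketch of enabling that attribute, assuming a recent Wicket version (the attribute name is your own choice):

// In your WebApplication subclass
@Override
protected void init() {
    super.init();
    // Render each component's Wicket path into a stable markup attribute,
    // e.g. <a data-wicket-path="form:submit" ...>, so the load-test script
    // can locate elements without depending on the versioned href
    getDebugSettings().setComponentPathAttributeName("data-wicket-path");
}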
P.S. If you are not afraid of writing some Scala code then you can take a look at https://github.com/vanillasource/wicket-gatling.

Perl: Parsing AJAX-loaded content

This is an age-old question regarding Perl web scrapers after Web 2.0; they simply cannot parse dynamically loaded pages, because they need some sort of JavaScript engine in order to render the page. The issue is much more involved than simply rendering JavaScript, since Perl would also have to be able to manage and maintain the DOM.
It seems WWW::Selenium and WWW::Mechanize::Firefox are able to accomplish this by utilizing Firefox (or other browsers) to do the rendering. However, V8 has become very popular (as seen with Node.js), so I'm curious whether there are any new libraries that utilize it, or whether there has since been a browser-independent solution I'm not aware of.
I might usually consider this a closable question, but with so few results when Googling and on Stack Overflow, there shouldn't be too many solutions (if any).
Related (older) Questions:
How can I use Perl to grab text from a web page that is dynamically generated with JavaScript?
How can I handle Javascript in a Perl web crawler?
You mentioned Selenium, but there is the later Selenium::Remote::Driver, which works with a Selenium 2.0 hub.
I see you can also use it without a Selenium hub:
Without Standalone Server (I haven't used this part)
As of v0.25, it's possible to use this module without a standalone server - that is, you would not need the JRE or the JDK to run your Selenium tests. See Selenium::Chrome, Selenium::PhantomJS, and Selenium::Firefox for details. If you'd like additional browsers besides these, give us a holler over in Github.
PhantomJS may be of interest, as it is a headless browser.
This is probably not an answer, but it was too long for a comment.

Integrate Client-side Validation

EDIT
I contacted the author of play-js-validation. Bleeding-edge stuff: Play has to be compiled against Scala-Virtualized on the to-be-released 2.10, and nested case classes are not yet supported. A really impressive project; I hope it comes to fruition, as the prototype does almost exactly what I was hoping for...
Found this:
https://github.com/namin/play-js-validation
Anyone know if there are plans for built-in client-side validation in Play 2.0?
I am currently generating controller, model (with form validation), and DAO Scala files based on an existing DB schema; I would love to include client-side validation as part of that process!
Thanks for clues, insider knowledge, etc.
P.S. The Play user group is, to say the least, busy; most posts seem to be completely ignored (of course, many Stack Overflow Play-related questions go unanswered as well, so this thread may be DOA...)
There are no such plans I'm afraid, at least none I've heard of (note: I'm not a dev team member, just a Play user).
Check the tickets on Play's Lighthouse.
On the other hand, I doubt this fits Play's assumptions at all. Client-side validation is done with some external JS solution, which should not be determined by the framework; nobody said that it should use, e.g., jQuery by default.
Finally, the only thing needed to use client-side validation is to include the JS libs and add the proper attributes to your form fields; e.g., this will create a tag that you can validate with the jQuery Validation plugin:
@inputText(entrantForm("identitynumber"),
    '_label -> "Identity number",
    'class -> "required",
    'minlength -> "11",
    'maxlength -> "11")

Dynamically generated GET request to an external database

I'm asking for help with my problem. I am new to JSF, and I have a simple JSF online store demo page. I don't even use navigation rules, since I only include the page with search results beneath the search fields. The problem is I have something like 15 fields (input texts and menus) to perform a detailed search. After selecting the fields and clicking on the search button, I have to generate a long GET request for the database (which is located on a different server than my page and uses REST), receive the response (XML format), extract the search results, and publish them on the page. The search pattern is something like this:
http://serveradress/search/[x1][x2][x3]....[xn]
where x1-xn are the values for the search engine and have to be read from the page's fields, so the request has to be generated dynamically. The GET request can be very long, since there are 15 fields and each can have some additional options. The database is on a different server and responds with XML containing the search results.
I found some solutions on the internet on how to perform a GET request using params, but I don't really know how they can fit my problem, since I have to receive the results from an external database and manage them inside the Java bean for publishing (I do not want to change the URL address of my page).
I am using JSF 1.2, with the Eclipse IDE and JBoss on Ubuntu. The search request has to be a GET, since the database uses that REST interface.
I am asking for your help in this matter: if someone is able to find a solution to this problem or provide me with some link, I would strongly appreciate example code with the solution.
Use JBoss's RESTEasy RESTful APIs; for example, via the standard JAX-RS client, as in the sketch below.
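A minimal sketch using the standard JAX-RS client API (which RESTEasy implements); the host is the placeholder from the question, the field values are assumed to be appended as path segments, and the class and method names are hypothetical:

import javax.ws.rs.client.Client;
import javax.ws.rs.client.ClientBuilder;
import javax.ws.rs.client.WebTarget;
import javax.ws.rs.core.MediaType;

public class SearchClient {

    // Builds the GET request from the form values and returns the raw XML response.
    public String search(String... values) {
        Client client = ClientBuilder.newClient();
        try {
            WebTarget target = client.target("http://serveradress/search");
            // Append each non-empty field value to the URL
            for (String value : values) {
                if (value != null && !value.isEmpty()) {
                    target = target.path(value);
                }
            }
            return target.request(MediaType.APPLICATION_XML).get(String.class);
        } finally {
            client.close();
        }
    }
}

The managed bean can then parse the returned XML (for example with javax.xml.parsers.DocumentBuilder) and expose the results as a property for the results page, so the page's own URL never changes.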