ScalaCheck generator for web URLs

I'm wondering if anyone has had to do this using ScalaCheck: create a custom generator that spits out a large number of URLs. There's a caveat: I want to test a service which accepts ONLY valid/working web URLs. I'm thinking that if I collect a large number of valid external web URLs in a file and somehow feed them into the custom generator, that might be the only way to make this possible.
something like
val genUrls = for {
  url <- Gen.oneOf("URL1", "URL2", "URL3")
} yield url
Does this sound like a reasonable and, more importantly, actually doable approach?

UrlGen seems to offer exactly that, since it uses a list of top URLs, but I cannot find the artifact in any Maven repository. I've raised an issue.
PS
You can always add .suchThat(exists), where exists makes sure the URL is reachable during the test, or better yet do the check once, before the tests start, i.e. verify up front that all of these URLs exist.
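A minimal sketch of the file-backed idea, assuming a plain-text file of known-good URLs (the file path and the exists check below are illustrative, not part of the original post):

import java.net.{HttpURLConnection, URL}
import org.scalacheck.Gen
import scala.io.Source

// Load known-good URLs from a file (hypothetical path).
val urls: Seq[String] =
  Source.fromFile("src/test/resources/urls.txt").getLines().toSeq

// Pick uniformly from the loaded list.
val genUrls: Gen[String] = Gen.oneOf(urls)

// Optional reachability check: a HEAD request that treats any
// status below 400 as "exists". Running it once up front, as
// suggested above, is cheaper than filtering inside the generator.
def exists(url: String): Boolean =
  try {
    val conn = new URL(url).openConnection().asInstanceOf[HttpURLConnection]
    conn.setRequestMethod("HEAD")
    conn.getResponseCode < 400
  } catch { case _: Exception => false }

val genLiveUrls: Gen[String] = genUrls.suchThat(exists)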

Related

How do I get Gatling reports to show URLs instead of request_0 etc?

I'm new to Gatling, apologies if this is a complete noob question.
The "Details" tab of my Gatling report looks like this:
The left-hand menu contains all the requests that were made. My problem is that, in all but a few rare cases, they're just labelled "request_x" instead of the URL or filename. So where there is a bottleneck I can't tell what page or resource was causing it.
I found that if I manually edit the .scala file before running the simulation, I can change each request name by hand, e.g. if I change...
.exec(http("request_0")
.get(uri01)
.headers(headers_0)
.resources(http("request_1")
.get(uri02)
.headers(headers_1)))
...to...
.exec(http(uri01)
.get(uri01)
.headers(headers_0)
.resources(http(uri02)
.get(uri02)
.headers(headers_1)))
...it seems to have the desired effect. But I don't want to have to change hundreds of these by hand every time I have a new test to run.
Surely there's a better way?
FWIW I'm generating this scala file using Gatling's "recorder" with an HAR file exported from Chrome, as opposed to running the recorder as a proxy. But I have tried the proxy option and got the same end result.
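One way to avoid the hand-editing is to post-process the recorder's output with a small script. A sketch, assuming the recorder's usual request_N / uriNN naming and a hypothetical file path (it only handles .get calls):

import java.io.PrintWriter
import scala.io.Source
import scala.util.matching.Regex

// Rewrite http("request_N") to reuse the uri identifier of the
// .get(...) call that follows it, so reports show URLs as labels.
val path = "user-files/simulations/RecordedSimulation.scala" // hypothetical
val src = Source.fromFile(path).mkString

val pattern: Regex = """http\("request_\d+"\)(\s*\.get\((\w+)\))""".r
val rewritten = pattern.replaceAllIn(src, m =>
  Regex.quoteReplacement(s"http(${m.group(2)})${m.group(1)}"))

new PrintWriter(path) { write(rewritten); close() }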

Restlet routing

I'm trying to work out the best / most performant / easiest-to-maintain approach for handling many different URLs in Restlet.
eg, if I want to have an Item resource, is there a better way to do it than this?
router.attach("/items", ItemResource.class);
router.attach("/item/{itemid}", ItemResource.class);
router.attach("/items/list", ItemResource.ItemListResource.class);
router.attach("/items/weapons", ItemResource.WeaponListResource.class);
router.attach("/items/armours", ItemResource.ArmourListResource.class);
...
(I tried having /items/{itemid}, but then /items/weapons etc. could not be accessed.)
ItemResource then has @Get for fetching a single item, but also has @Put for saving an item when just /items is used. Something feels a bit wrong here... Is there a better way to have fetching/inserting/updating/listing for items in this case?
Also, this router.attach list is very long, 100 or so items. Since this has to be run through on every request it would probably be fairly slow. I know I can attach multiple routers together in a chain - but I can't find documentation on how to do this nicely. What's the best way to chain routers and keep them maintainable?
Just put the
router.attach("/item/{itemid}", ItemResource.class);
at the lowest part of the routing: {itemid} is a path parameter that matches anything, so route the fixed, typed paths first, before the template that catches everything. This should fix it, based on my experience.
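A minimal sketch of that ordering, plus one way to chain routers so the attach list stays short (written in Scala against Restlet's Java API; the resource classes are the ones from the question, and the layout is just one possibility):

import org.restlet.Context
import org.restlet.routing.Router

def createRoot(context: Context): Router = {
  // Child router for everything under /items: fixed paths first,
  // the {itemid} template last so it cannot swallow them.
  val itemsRouter = new Router(context)
  itemsRouter.attach("/list", classOf[ItemListResource])
  itemsRouter.attach("/weapons", classOf[WeaponListResource])
  itemsRouter.attach("/armours", classOf[ArmourListResource])
  itemsRouter.attach("/{itemid}", classOf[ItemResource])

  // The root router delegates the whole /items subtree, keeping
  // each router small and the route tables maintainable.
  val rootRouter = new Router(context)
  rootRouter.attach("/items", itemsRouter)
  rootRouter
}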

Updating a value in an XML file located on the web

I have a file on the web that looks like this. I would like to update the value of one tag, let's say "tempset", and change 150 to another number. How can I do this? NSURLConnection? NSMutableURLRequest? NSURLRequest? If possible keep it to iOS 4! Thanks!
<Courbe>
<age>45</age>
<tempdesi>150</tempdesi>
<vmininit>35</vmininit>
<tempinit>220</tempinit>
<unittemp>0</unittemp>
<te_fin_c>220,700,700,700,700,700,700,700,700,700</te_fin_c>
<vm_fin_c>50,50,50,50,50,50,50,50,50,50</vm_fin_c>
<grfan_a>1,1,1,1,1,1,1,1,1,1</grfan_a>
<ecarnuit>0</ecarnuit>
<tempset>150</tempset>
<tempsetp>700</tempsetp>
<jo_cou_t>1</jo_cou_t>
<ty_stcha>1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1</ty_stcha>
</Courbe>
You have several options:
create a "service" for this, a kind of API, so you can call it from your client in different languages, including Objective-C,
like http://myserver.com/myobject/set?tempset=1
(in the real world, use POST rather than GET for this)
of course, to do this you need to write some server-side code in your favourite language (a sketch follows below)
or provide a way to upload the file and replace it completely, a kind of "upload.php"
Which solution is best depends on your problem: how the file is generated and maintained.
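A minimal sketch of the server side of the first option, in Scala using the JDK's built-in HTTP server (the endpoint, the parameter name, and the file location are all illustrative; the original answer leaves the language and layout open):

import com.sun.net.httpserver.HttpServer
import java.net.InetSocketAddress
import java.nio.charset.StandardCharsets.UTF_8
import java.nio.file.{Files, Paths}
import scala.io.Source

object TempsetServer extends App {
  val file = Paths.get("courbe.xml") // the XML shown above, stored server-side
  val server = HttpServer.create(new InetSocketAddress(8080), 0)

  // POST /myobject/set with a body like "tempset=175"
  server.createContext("/myobject/set", exchange => {
    val body = Source.fromInputStream(exchange.getRequestBody, "UTF-8").mkString
    val value = body.stripPrefix("tempset=").trim
    val xml = new String(Files.readAllBytes(file), UTF_8)
    // Naive tag rewrite; a real service should parse the XML properly.
    val updated = xml.replaceAll("<tempset>\\d+</tempset>", s"<tempset>$value</tempset>")
    Files.write(file, updated.getBytes(UTF_8))
    exchange.sendResponseHeaders(200, -1) // -1 = no response body
    exchange.close()
  })
  server.start()
}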

What may be an efficient way to judge whether a user stays on the same domain while browsing (using a Firefox extension)?

While browsing a website, say www.example.com, it is possible that the user enters some subdomain say www.sub.example.com
But the base webdomain remains the same.
In my firefox extension, I need to develop a "session" which remains active until a user remains on the same domain.
A simple solution would be to extract the host from the URL first, then split the string on "." as the token. That gives sub and example for the second case, and just example for the first. I can then compare the parts; if any are equal, I deduce that the domains are the same.
Though this might work, this seems more like a hack and might be prone to false positives/negatives.
Is there a more efficient / cleaner solution to the same problem?
Info: I am using GWT to build the extension
You should use nsIEffectiveTLDService; it will handle things like sub.example.co.uk correctly. You can pass an nsIURI instance to nsIEffectiveTLDService.getBaseDomain(); the aAdditionalParts parameter should be zero. For http://sub.example.co.uk/foo you will get example.co.uk back - the actual "domain" part. Then you can simply compare the base domains of the two URLs.
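The XPCOM service is only reachable from extension code, but the same public-suffix idea can be sketched elsewhere, here in Scala with Guava's InternetDomainName (a stand-in for illustration, not the API the answer names):

import com.google.common.net.InternetDomainName

// Public-suffix-aware base domain, analogous to getBaseDomain():
// "sub.example.co.uk" => "example.co.uk"
def baseDomain(host: String): String =
  InternetDomainName.from(host).topPrivateDomain().toString

// Two hosts belong to the same "session" if their base domains match.
def sameSite(a: String, b: String): Boolean =
  baseDomain(a) == baseDomain(b)

// sameSite("www.example.com", "www.sub.example.com") => true
// sameSite("example.com", "example.org")             => false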

Trying to figure out what {s: ;} tags mean and where they come from

I am working on migrating posts from the RightNow infrastructure to another service called ZenDesk. I noticed that whenever users added files or even URL links, the XML data I pull from RightNow contains a lot of weird codes like this:
{s:3:""url"";s:45:""/files/56f5be6c1/MUG_presso.pdf"";s:4:""name"";s:27:""MUG presso.pdf"";s:4:""size"";s:5:""2.1MB"";}
It wasn't too hard to write something that parses them and produces normal URLs and links, but I was wondering whether this is something specific to the RightNow service, or a more widely used tag system. I tried googling this but got some odd results, so I thought Stack Overflow might have someone who has run into it.
So, anyone know what these {s ;} tags are called and if there are any particular tools to use to read them?
Any answers appreciated!
This resembles partial PHP serialized data, as returned by the serialize() call. It looks like someone may have turned each " into "", which could prevent it from parsing properly. If it's wrapped with text like this before the {s: section, it's almost definitely PHP.
a:6:{i:1;a:10:{s:
These letters/numbers mean things like "an array with six elements follows", "a string of length 20 follows", etc.
You can use any PHP instance with unserialize() to handle the data. If those double-quotes are indeed returned by the API, you might need to collapse each "" back to a single " before parsing.
Parsing modules exist for other languages like Python. You can find more information in this answer.
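If running the data through PHP isn't convenient, here is a rough sketch of the ad-hoc parsing route the asker mentions, in Scala, assuming the doubled-quote variant shown above (it only extracts string key/value pairs; it is not a real unserializer):

import scala.util.matching.Regex

// Each record is a key pair followed by a value pair:
//   s:<len>:""<key>"";s:<len>:""<value>"";
val Field: Regex = """s:\d+:""([^"]*)"";s:\d+:""([^"]*)"";""".r

def fields(raw: String): Map[String, String] =
  Field.findAllMatchIn(raw).map(m => m.group(1) -> m.group(2)).toMap

val sample =
  """{s:3:""url"";s:45:""/files/56f5be6c1/MUG_presso.pdf"";s:4:""name"";s:27:""MUG presso.pdf"";}"""

// fields(sample) => Map(url -> /files/56f5be6c1/MUG_presso.pdf, name -> MUG presso.pdf)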