I have a plug-in that I am distributing via an Eclipse update site.
I want to track how many times it is being downloaded, and preferably by whom.
For regular pages on my site I can use Google Analytics. However, Eclipse doesn't request any HTML pages when it talks to an update site.
Is there any way to do this when I don't have access to the Apache server hosting the site?
A dirty trick you can use is to add a dummy feature to your site.xml which points at a counter page instead of a .jar file, e.g.:
    <feature url="http://yourcountersite.com/counterpage.php" id="" version="">
        <category name="YourCategory"/>
    </feature>
The update manager will contact the counter page when it tries to install stuff from your update site but will otherwise ignore this feature.
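If you control some host that can serve a dynamic page, the counter page itself can be trivial. Here is a minimal sketch as a Java servlet instead of the PHP page above (the class name, URL mapping, and log file path are all made up for illustration):

    import java.io.IOException;
    import java.nio.charset.StandardCharsets;
    import java.nio.file.Files;
    import java.nio.file.Paths;
    import java.nio.file.StandardOpenOption;
    import javax.servlet.annotation.WebServlet;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    // Hypothetical counter endpoint: records one log line per hit from the
    // update manager and returns an empty 200 response.
    @WebServlet("/counterpage")
    public class DownloadCounterServlet extends HttpServlet {
        @Override
        protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                throws IOException {
            String line = System.currentTimeMillis() + "\t"
                    + req.getRemoteAddr() + "\t"
                    + req.getHeader("User-Agent") + "\n";
            // Append to a plain text file; the timestamp, IP, and user agent
            // give you the download count and a rough idea of who downloads.
            Files.write(Paths.get("/var/log/update-site-hits.log"),
                    line.getBytes(StandardCharsets.UTF_8),
                    StandardOpenOption.CREATE, StandardOpenOption.APPEND);
            resp.setStatus(HttpServletResponse.SC_OK);
        }
    }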
AFAICT, your only option is to work with the web server logs. I wrote this article about tracking update site downloads using AWStats, but it requires some server-side tweaking. That is a one-time setup, though. After that, you use AWStats through its web interface.
When you say update site, do you mean your own URL or an Eclipse repository?
I guess you really need access to the logs of the underlying server. That way you could monitor whatever type of request Eclipse initiates (I'd guess it's just a standard HTTP request).
Steve
What is the best way to get auto-reloading to work when developing my website (my website runs on Mojolicious)?
There exists a development server called morbo, and it does update what is served automatically whenever I save changes to a source file, but the website itself does not reload automatically. I must manually refresh the page to see the changes.
What is a sane way to get this behavior? I am okay with using an additional tool if necessary.
My understanding is that Mojolicious::Plugin::AutoReload can do what you want by defining an auto_reload endpoint and having the UI poll your web app to check whether the UI should reload.
The module was featured on the Mojolicious blog in 2018.
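An untested sketch, based on the module's documented synopsis: a minimal Mojolicious::Lite app that loads the plugin and includes the auto_reload helper in its template (run it under morbo as usual):

    use Mojolicious::Lite;

    # Load the plugin; it provides the reload endpoint and the
    # auto_reload template helper.
    plugin 'AutoReload';

    get '/' => 'index';

    app->start;

    __DATA__
    @@ index.html.ep
    %# Inserts the client-side script that watches the endpoint and
    %# reloads the page when the application restarts.
    %= auto_reload
    <h1>Hello, world!</h1>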
I want to stress test a system based on Apache Wicket, using Grinder.
What I did was use Grinder's TCPProxy tool to record a test session in my application and then feed the generated test script to Grinder to stress test the system; but we found that the tests weren't carried out successfully.
After a lot of tweaking and debugging, we found that the problem lay in Wicket's URL generation scheme, which mixes the page version number into its URLs.
So I searched and found solutions for removing that page version number from the URLs (like this), applied them, and they worked: the version numbers disappeared from the URLs used in the browser. But the tests still didn't work.
Inspecting further, I found that even though the URLs are now clean, the action attribute of forms still uses URLs with the page version number mixed in, like this one: ./?4-1.[wicket-path of the form]
So is there any way to remove these version numbers from form URLs as well? If not, is there any other way to overcome this problem and stress test a Wicket web application?
Thanks in advance
I have not used Grinder, but I have successfully load-tested my Wicket application using the JMeter proxy, without changing Wicket's default versioning mechanism.
Here is the JMeter step-by-step link for your reference:
https://jmeter.apache.org/usermanual/jmeter_proxy_step_by_step.pdf
Basically, all I did was run the proxy server to accept web requests from the browser and capture the test scenarios. Once you are done collecting the samples, change the target host URL to whichever server you want to point to (other than your localhost).
Alternatively, there is another load-testing tool, BlazeMeter (compatible with JMeter). You could add its Chrome browser plugin to get a quick feel for it.
Also, you might want to consider mounting your packages at individual URLs for 'cleaner' URLs. That way, you have a set of known URLs generated for the pages within a given package (for example, /reports for all the report pages within the reports package), as sketched below.
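For reference, a rough sketch of such mounting in the application class (the class and package names are made up; mountPackage is the stock Wicket API for this):

    import org.apache.wicket.Page;
    import org.apache.wicket.protocol.http.WebApplication;

    public class MyApplication extends WebApplication {
        @Override
        public Class<? extends Page> getHomePage() {
            return HomePage.class; // hypothetical home page
        }

        @Override
        protected void init() {
            super.init();
            // Mounts every bookmarkable page in ReportsPage's package under
            // /reports, giving the load-test script stable, predictable URLs.
            mountPackage("/reports", ReportsPage.class);
        }
    }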
Hope this helps!
-Mihir.
You should not ignore/remove the pageId from the URLs. If you remove it, you will request a completely new instance of the page, i.e. you will lose any state from the original page.
Instead of using the href when recording, you need to use the attribute set (by you!) with org.apache.wicket.settings.DebugSettings#setComponentPathAttributeName(String).
So Grinder/JMeter/Gatling/... should keep track of this special attribute instead of href, and later find the link to click by using a CSS/XPath selector.
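A minimal sketch of enabling that attribute in the application class (the attribute name is whatever you choose; the exact settings class name varies slightly between Wicket versions):

    // In your WebApplication subclass:
    @Override
    protected void init() {
        super.init();
        // Renders each component's Wicket path into the markup, e.g.
        // <a data-wicket-path="searchForm:save" href="./?4-1...">, so the
        // test script can locate elements by this stable attribute and then
        // follow whatever versioned href they currently carry.
        getDebugSettings().setComponentPathAttributeName("data-wicket-path");
    }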
P.S. If you are not afraid of writing some Scala code then you can take a look at https://github.com/vanillasource/wicket-gatling.
We have an application that is built exclusively in dev mode using the embedded Jetty server that comes with GWT. We also host on Jetty.
There are a number of pages we use for development only to do things like simulate SSO requests, view emails that were sent through the system, and check what files are uploaded.
When we try to link from these pages into a GWT page, the problem is that &gwt.codesvr=192.168.0.101:9997 is not included in the URL, and we get the error message "GWT module 'YourApp' may need to be (re)compiled". Obviously I can paste in "&gwt.codesvr=192.168.0.101:9997" manually, but that is very annoying. Does anybody know of a way to detect that you are in the embedded Jetty dev-mode server and auto-generate links with the correct "&gwt.codesvr=192.168.0.101:9997" added on?
Try this solution: https://stackoverflow.com/a/9122167/970308
I've updated this bookmarklet. It isn't perfect, but it makes things quick while developing.
I suggest you create a Filter which simply redirects you to an address with &gwt.codesvr=192.168.0.101:9997 appended as soon as you navigate to one of the "development pages". If the codesvr parameter is specific to each developer, each developer will have to set it in a cookie, and the filter will simply take the value from that cookie.
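A rough sketch of such a filter (the class name, cookie name, and redirect logic are illustrative; you would map it only to the development pages in web.xml):

    import java.io.IOException;
    import javax.servlet.Filter;
    import javax.servlet.FilterChain;
    import javax.servlet.FilterConfig;
    import javax.servlet.ServletException;
    import javax.servlet.ServletRequest;
    import javax.servlet.ServletResponse;
    import javax.servlet.http.Cookie;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    // Hypothetical filter: if gwt.codesvr is missing from the request,
    // redirect to the same URL with it appended, taking the host:port
    // value from a per-developer cookie.
    public class CodeServerRedirectFilter implements Filter {
        @Override
        public void doFilter(ServletRequest request, ServletResponse response,
                FilterChain chain) throws IOException, ServletException {
            HttpServletRequest req = (HttpServletRequest) request;
            HttpServletResponse resp = (HttpServletResponse) response;
            if (req.getParameter("gwt.codesvr") == null) {
                String codesvr = null;
                if (req.getCookies() != null) {
                    for (Cookie c : req.getCookies()) {
                        if ("gwt.codesvr".equals(c.getName())) {
                            codesvr = c.getValue();
                        }
                    }
                }
                if (codesvr != null) {
                    String query = req.getQueryString();
                    String sep = (query == null) ? "?" : "?" + query + "&";
                    resp.sendRedirect(req.getRequestURI() + sep
                            + "gwt.codesvr=" + codesvr);
                    return;
                }
            }
            chain.doFilter(request, response);
        }

        @Override
        public void init(FilterConfig filterConfig) {
        }

        @Override
        public void destroy() {
        }
    }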
We've been trying to work with the Liferay CMS to create Web Content (Liferay terminology). The content is versioned, in the sense that each time we change the content and publish it, the version increments.
This has an impact on the publicly exposed URL, and we face the problem that the URL changes every time the content changes.
Is there a way of getting a published URL that reflects change in content without changing the URL?
You could use friendly URLs in this case. Have a look at this post for some more info.
It doesn't appear that you are able to grab the latest journal content with any invokable URL, because a version number must be passed along with the request (otherwise it will just grab the first version, not the last).
A workaround would be to create a hook plugin that overrides the /journal/view_article_content action path with your custom implementation to return the latest article.
See Liferay's Portal Hook Plugins wiki page on how to create a hook.
Then see Mika's blog post on the specifics of overriding a struts path.
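The hook descriptor would look roughly like this (the implementation class name is hypothetical; writing that class is the part covered by the linked posts):

    <?xml version="1.0"?>
    <!DOCTYPE hook PUBLIC "-//Liferay//DTD Hook 6.0.0//EN"
        "http://www.liferay.com/dtd/liferay-hook_6_0_0.dtd">
    <hook>
        <!-- Replace the default struts action with a custom implementation
             that resolves the latest version of the article. -->
        <struts-action>
            <struts-action-path>/journal/view_article_content</struts-action-path>
            <struts-action-impl>com.example.hook.action.ViewLatestArticleAction</struts-action-impl>
        </struts-action>
    </hook>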
Good Luck!
Is there any test framework or software that can automatically go through a site and find 404 errors from links?
You could use an extension for your favourite browser, e.g. LinkChecker for Firefox.
Are you looking for a tool that does complete validation/checking of the site? Or one that does use-case testing of specific parts of the site.
For the latter I recommend TestPlan; it has the ability to check the headers of pages and work with the so-called "meta" response of the page.
The original web-site is no longer available but the project is now hosted on Launchpad.
For the former it isn't the best tool, but as part of a test framework it is easy enough to get it to scan through links on the site looking for errors.
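The core of such a scan is also small enough to script yourself if the tools don't fit. A minimal sketch in Java that checks a fixed list of links via HEAD requests (a real checker would also parse each page and recurse over the site; the URLs are placeholders):

    import java.net.HttpURLConnection;
    import java.net.URL;

    public class LinkChecker {
        // Issues a HEAD request so only headers are transferred, no body.
        static int status(String url) throws Exception {
            HttpURLConnection conn =
                    (HttpURLConnection) new URL(url).openConnection();
            conn.setRequestMethod("HEAD");
            conn.setInstanceFollowRedirects(true);
            return conn.getResponseCode();
        }

        public static void main(String[] args) throws Exception {
            String[] links = { "http://example.com/", "http://example.com/missing" };
            for (String link : links) {
                int code = status(link);
                if (code == 404) {
                    System.out.println("BROKEN (404): " + link);
                }
            }
        }
    }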
If you're running on Windows there is this one.