Maven: How to activate inactive proxies via command line? - command-line

I have a proxy server configured for Maven via the per-user settings.xml file. The documentation snippet in the default settings.xml template suggests that it is possible to influence which of the configured proxies is used via a command-line switch:
<!-- proxies
| This is a list of proxies which can be used on this machine to connect to the network.
| Unless otherwise specified (by system property or command-line switch), the first proxy
| specification in this list marked as active will be used.
|-->
However, I have found no documentation whatsoever on how this is supposed to work. The Maven documentation has very much the same template, but makes no mention of a command-line switch or anything similar.
So, suppose I have a proxy configured, but marked as <active>false</active>, like in this example:
<proxies>
<proxy>
<id>firstProxy</id>
<active>false</active>
<protocol>http</protocol>
<host>proxy.example.invalid</host>
<port>3128</port>
</proxy>
</proxies>
According to the comment there should be some way to "activate" it, possibly by giving its id or something like that. I tried the "obvious" mvn -Dproxies.proxy.firstProxy.active=true java:compile, without success.
I'm very new to Maven and cannot shake the feeling that I am barking up the wrong tree in some way. Is what I am trying to do even possible at all—if so, can anyone point me to a description on how to do it—or am I wasting my time?

As @khmarbaise suggested:
use multiple settings.xml files, e.g. put them in the project root, then
mvn package -s settings.A.xml
mvn package -s settings.B.xml
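For example, settings.B.xml could be a copy of your per-user settings.xml with the proxy simply marked active; the snippet below shows only the relevant part, and the id, host, and port just mirror the question:
<settings>
  <proxies>
    <proxy>
      <id>firstProxy</id>
      <active>true</active>
      <protocol>http</protocol>
      <host>proxy.example.invalid</host>
      <port>3128</port>
    </proxy>
  </proxies>
</settings>
Running Maven with -s settings.B.xml makes it read this file instead of ~/.m2/settings.xml, so the proxy is applied only for that invocation.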

Related

Eclipse Kepler and JBoss Wildfly hot deployment

I am trying to use Eclipse Kepler for Java EE 7. I already installed JBoss Tools and added JBoss WildFly successfully as a server. However, my changes are not automatically deployed. Is there any way the app can be deployed automatically, just as when using GlassFish?
Using Eclipse, double-click your WildFly server to edit the following properties:
Publishing: choose "Automatically publish after a build event". I like to change the publishing interval to 1 second too.
Application Reload Behavior: check the "Customize application reload ..." checkbox and edit the regex pattern to \.jar$|\.class$
That's it. Good luck!
Both @varantes and @Sean are essentially correct, but these answers are not complete.
Unfortunately, the only way to get full, zero-downtime hot deployment in a Java server environment is to use the paid JRebel tool or the free Spring Loaded tool.
But for small projects there are some ways to speed up work through partial hot deployment. Essentially:
When the option "Automatically publish when resources change" is enabled, changes inside *.html and *.xhtml files are reflected as soon as you refresh the browser.
To make hot deployment work for *.jsp files too, make the following change inside ${wildfly-home}/standalone/configuration/standalone.xml:
<jsp-config/>
replace with:
<jsp-config development="true"/>
Restart the server and enjoy hot deployment of web files.
But when modifying *.java source files, only partial hot deployment is possible. As @varantes stated in his answer, enabling Application Reload Behavior with the regex pattern set to \.jar$|\.class$ is an option, but it has a serious downside: the whole module is restarted, thus:
It takes some time (depending on how big the module is).
The whole application state is lost.
So personally, I discourage this solution. The JVM supports (in debug mode) code swapping for method bodies, so as long as you are modifying only the bodies of existing methods you are at home (zero downtime, changes are reflected immediately). But you have to disable automatic publishing in the server settings, otherwise the application's state will still be destroyed by that republish.
But if you are heavily crafting Java code (adding classes, annotations, constructors), then unfortunately I can only recommend setting publishing to "Never publish automatically" (or shutting down the server), and when you finish your work in the Java files, restarting your module by hand (or turning the server back on). Up to you.
This works for small Java projects, but for bigger ones JRebel is invaluable (or just Spring Loaded), because none of the approaches described above is sufficient. It is also because of such problems that solutions like Rails/Django/Play! Framework gained such huge popularity.
I am assuming you are using the latest version of WildFly (8.0 Beta 1 as of writing).
In the standalone.xml config file, look for <jsp-config/>. Add the attribute development="true" and it should hot-deploy. The resulting config will look like this:
<jsp-config development="true"/>
Add the attributes (development, check-interval, modification-test-interval, recompile-on-fail) to the configuration file at XPath //servlet-container/jsp-config:
<servlet-container name="default" default-buffer-cache="default" stack-trace-on-error="local-only">
<jsp-config development="true" check-interval="1" modification-test-interval="1" recompile-on-fail="true"/>
</servlet-container>
(It works in WildFly-8.0.0.Final)
Start the server in debug mode and it will track changes inside methods. For other changes it will ask you to restart the server.
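If you start WildFly from the command line, debug mode can be enabled with the standalone script's --debug switch; 8787 is the conventional default port, and the exact switch may differ slightly by WildFly version:
./bin/standalone.sh --debug 8787
Eclipse can then attach to that port with a "Remote Java Application" debug configuration, and method-body changes are hot-swapped as described above.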

Passing RAILS_ENV into Torquebox without using a Deployment Descriptor

I am wondering if there is a way to pass a value for RAILS_ENV directly into the Torquebox server without going through a deployment descriptor; similar to how I can pass properties into Java with the -D option.
I have been wrestling with various deployment issues with Torquebox over the past couple of weeks. I think a large part of the problem has to do with packaging the gems into the Knob file, which is the most practical way of managing them in a Windows environment. I have tried archive deployment and expanded deployment, with and without an external deployment descriptor.
With an external deployment descriptor, I found the packaged Gem dependencies were not properly deployed and I received errors about missing dependencies.
When expanded, I had to fudge around a lot with the dependencies and what got included in the Knob, but eventually I got it to deploy. However, certain files in the expanded Knob were marked as failed (possible duplicate dependencies?), but they did not affect the overall deployment. The problem was when the server restarted, deployment would fail the second time mentioning it could not redeploy one of the previously failed files.
The only approach I have found to work consistently for me is archive deployment without an external deployment descriptor. However, I still need a way to tell the application in which environment it is running. I have different Torquebox instances for each environment and they only run the one application, so it would be fairly reasonable to configure this at the server level.
Any assistance in this matter would be greatly appreciated. Thank you very much!
The solution I finally came to was to pass in RAILS_ENV as a Java property to the Torquebox server and then to set ENV['RAILS_ENV'] to this value in the Rails boot.rb initializer.
Step 1: Set Java Property
First, you will need to set a Rails Environment java property for your Torquebox server. To keep with standard Java conventions, I called this rails.env.
Dependent on your platform and configuration, this change will need to be made in one of the following scripts:
Using JBoss Windows Service Wrapper: service.bat
Standalone environment: standalone.conf.bat (Windows) or standalone.conf (Unix)
Domain environment: domain.conf.bat (Windows) or domain.conf (Unix)
Add the following line to the appropriate file above to set this Java property:
set JAVA_OPTS=%JAVA_OPTS% -Drails.env=staging
The -D option is used for setting Java system properties.
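On Unix, the equivalent addition to standalone.conf uses shell syntax rather than the Windows set command (staging is just an example value):
JAVA_OPTS="$JAVA_OPTS -Drails.env=staging"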
Step 2: Set ENV['RAILS_ENV'] based on Java Property
We want to set the RAILS_ENV as early as possible, since it is used by a lot of Rails initialization logic. Our first opportunity to inject application logic into the Rails Initialization Process is boot.rb.
See: http://guides.rubyonrails.org/initialization.html#config-boot-rb
The following line should be added to the top of boot.rb:
# boot.rb (top of the file)
ENV['RAILS_ENV'] = ENV_JAVA['rails.env'] if defined?(ENV_JAVA) && ENV_JAVA['rails.env']
This needs to be the first thing in the file, so Bundler can make intelligent decisions about the environment.
As you can see above, a seldom mentioned feature of JRuby is that it conveniently exposes all Java system properties via the ENV_JAVA global map (mirroring the ENV ruby map), so we can use it to access our Java system property.
We check that ENV_JAVA is defined (i.e. JRuby is being used), since we support multiple deployment environments.
I force the rails.env property to be used when present, as it appears that RAILS_ENV already has a default value at this point.

Can not create a network connection in Eclipse

This is Eclipse Juno.
for a Maven plugin, I get errors of the form:
ArtifactResolutionException: Failure to transfer org.apache.maven.plugins:maven-compiler-plugin:pom:2.3.2 from http://repo1.maven.org/maven2 ...
for Eclipse marketplace, I get:
MarketplaceDiscoveryStrategy failed with an error
Cannot complete request to ...
I have turned off the firewall both on my computer and at the router, I do not have a proxy, the Internet Options proxy box is unchecked. Putting the web addresses above in my browser (on the same box) returns the correct contents, however, Eclipse doesn't seem to want to contact external servers. What should I change? Help!
edit: my Preferences -> General -> Network Connection -> Provider is set to Direct (not that it matters, setting it to native doesn't work either)
edit2: mvn clean install from the command line works just fine and downloads everything.
Oh wow, absolute craziness. It's a Windows/IPv6 issue with JDK 7.
see:
http://bugs.sun.com/bugdatabase/view_bug.do?bug_id=7115226
and
http://bugs.sun.com/bugdatabase/view_bug.do?bug_id=7077696
which can be solved by adding -Djava.net.preferIPv4Stack=true to eclipse.ini.
As to why my command line was working: I didn't update JAVA_HOME when I installed JDK 7 and only changed the VM setting in eclipse.ini. In other words, my command line was running against JDK 6...
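A minimal sketch of where the flag belongs in eclipse.ini (VM arguments must appear after the -vmargs marker, one per line; the memory setting is only illustrative):
-vmargs
-Xmx1024m
-Djava.net.preferIPv4Stack=true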
Note: the proxy could be configured somewhere other than the Internet Options wizard.
It has something to do with the file named settings.xml.
To find the settings.xml file, open Window > Preferences > Maven > User Settings;
the file in the User Settings field determines the location of the settings file.
You mentioned you have no proxy, so make sure this file contains no proxy info; if you do use a proxy by any means, the proxy info should be declared in this file:
<proxies>
<proxy>
<id>[proxy id]</id>
<active>true</active>
<protocol>http</protocol>
<host>[host]</host>
<port>[port]</port>
<nonProxyHosts>
[urls to be skipped separated by '|']
</nonProxyHosts>
</proxy>
</proxies>
I hope this helps with your issue.

Why does tomcat replace context.xml on redeploy?

Documentation says if you have a context file here:
$CATALINA_HOME/conf/Catalina/localhost/myapp.xml
it will NOT be replaced by a context file here:
mywebapp.war/META-INF/context.xml
It is written here: http://tomcat.apache.org/tomcat-6.0-doc/config/context.html
Only if a context file does not exist for the application in the $CATALINA_BASE/conf/[enginename]/[hostname]/, in an individual file at /META-INF/context.xml inside the application files.
But every time I re-deploy the war it replaces this myapp.xml with the /META-INF/context.xml!
Why does it do it and how can I avoid it?
Thanx
The undeploy part of a redeploy deletes the app and the associated context.xml.
If you use maven tomcat plugin you can avoid deleting context.xml if you deploy your app with command like this:
mvn tomcat:deploy-only -Dmaven.tomcat.update=true
More info here: https://tomcat.apache.org/maven-plugin-2.0-beta-1/tomcat7-maven-plugin/deploy-only-mojo.html
You can use deploy-only with the mode parameter to deploy the context.xml too.
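For example, something along these lines should push the context descriptor as well; the maven.tomcat.mode property name is an assumption based on the mojo's mode parameter, so check the linked documentation for your plugin version:
mvn tomcat7:deploy-only -Dmaven.tomcat.update=true -Dmaven.tomcat.mode=both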
The short answer:
Just make the TOMCATHOME/conf/Catalina/localhost dir read-only, and keep reading for more details:
For quick deployment mode (Eclipse dynamic web project, direct Tomcat connection, etc.) on a local/non-shared Tomcat server you can just define your JDBC datasource (or any other 'web resource') using the META-INF/context.xml file inside the WAR file. Easy and fast in your local environment, but not suitable for staging, QA, or production.
For build deployment mode (usually for staging, QA, or prod), JDBC datasources and other 'web resources' details are defined by the QA/production team, not the development team anymore. Therefore, they must be specified in the Tomcat server, not inside the WAR file anymore. In this case, specify them in the file TOMCATHOME/conf/Catalina/localhost/CONTEXT.xml (replace Catalina with the engine, localhost with the host, and CONTEXT with your context accordingly). However, Tomcat will delete this file on each deployment. To prevent this deletion, just make this dir read-only; in Linux you can type:
chmod a-w TOMCATHOME/conf/Catalina/localhost
Voilà! You're welcome.
The long answer
For historical reasons Tomcat allows you to define web resources (JDBC datasources, and others) in four different places (read: four different files) with a very specific order of precedence if you happen to define the same resource multiple times. The ones named in the short answer above are the most suitable nowadays for each purpose, though you could still use the others (nah... you probably don't want to). I'm not going to discuss the other ones here unless someone asks for it.
On Tomcat 7, even with autoDeploy=false, the file will be deleted on undeploy. This is documented and not a bug (although it prevents good automated deployments with server-side fixed configuration).
I found a workaround which solved the problem for me:
create a META-INF/context.xml file in your webapp that contains
on the Server create a second context "/config-context" in server.xml and put all your server-side configuration parameters there
on the application, use context.getContext("/config-context").getInitParameter(...) to access the configuration there (see the sketch after this answer).
This allows a per-host configuration that is independent of the deployed war.
It should also be possible to add per-context configurations by adding contexts like "/config-context-MYPATH". In your app you can use the context path of the app to calculate the context path of the config app.
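A minimal Java sketch of that last lookup step, assuming a servlet in the main webapp and a hypothetical init parameter named db.url defined in the config context; note that Tomcat only hands out other contexts via getContext() when crossContext="true" is set on the calling context, so treat that as a prerequisite to verify:
import java.io.IOException;
import javax.servlet.ServletContext;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class ConfigLookupServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
        // Ask the container for the separately deployed configuration context.
        ServletContext configContext = getServletContext().getContext("/config-context");
        // db.url is a hypothetical parameter declared in that context's configuration.
        String dbUrl = (configContext != null) ? configContext.getInitParameter("db.url") : null;
        resp.getWriter().println("db.url = " + dbUrl);
    }
}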
According to the documentation (http://tomcat.apache.org/tomcat-8.0-doc/config/automatic-deployment.html#Deleted_files) upon redeploy tomcat detects the deletion (undeploy) of your application. So it will start a cleanup process deleting the directory and xml also. This is independent of auto deployment - so it will happen upon redeployment through manager and modification of war also. There are 3 exceptions:
global resources are never deleted
external resources are never deleted
if the WAR or DIR has been modified then the XML file is only deleted if copyXML is true and deployXML is true
I don't know why, but copyXML="false" deployXML="false" won't help.
Secondly: making the directory read-only just makes Tomcat throw an exception, and it won't start.
You can try merging your $CATALINA_BASE/conf/Catalina/localhost/myapp-1.xml, $CATALINA_BASE/conf/Catalina/localhost/myapp-2.xml, etc files into $CATALINA_BASE/conf/context.xml (that works only if you make sure your application won't deploy its own context configuration, like myapp-1.xml)
If someone could tell what those "external resources" are, that would generally solve the problem.
The general issue as described by the title is covered by Re-deploy from war without deleting context which is still an open issue at this time.
There is an acknowledged distinction between re-deploy which does not delete the context, and deploy after un-deploy where the un-deploy deletes the context. The documentation was out of date, and the manager GUI still does not support re-deploy.
Redeployment means two parts: undeployment and deployment.
Undeployment removes the conf/Catalina/yourhost/yourapp.xml because of the
<Host name="localhost" appBase="webapps" unpackWARs="true"
autoDeploy="true"> <!-- means autoUndeploy too!!! -->
</Host>
Change it to autoDeploy="false" and Tomcat no longer has any instruction to remove conf/Catalina/yourhost/yourapp.xml.
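The resulting Host element in server.xml would then look roughly like this (other attributes kept as in the snippet above):
<Host name="localhost" appBase="webapps" unpackWARs="true"
      autoDeploy="false">
</Host>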
There is a feature that allows us to perform those steps (undeploy/deploy) as one single step (redeploy) that does not remove the context.xml. This feature is available via the manager text interface, but the option is not available in the manager HTML interface. You might have to wait until the bug in Tomcat is fixed. You can use the method described in this answer as a workaround.

What is the best way to integrate an external build tool into Eclipse?

I've just started using Eclipse for Python development since we can make use of a lovely plugin I've found to enable distributed pair-programming. Anyway, the next step to getting Eclipse to integrate properly with our existing environment, would be finding a way to drive our current build tool (Waf) from within the IDE.
So the question is, is there a way I can set up Eclipse to drive Waf in a Make-like fashion? I see that for Make it has some quite advanced functionality, such as being able to work out what targets are available, etc. Bonus points for telling me if there is a way I could go as far as this! (I suspect the answer is that this is all built into the Make plugin for Eclipse.)
In eclipse CDT I run waf by simply changing the build program in
Project Preferences -> C/C++ Build -> Builder Settings
Choose External builder and then put in the path to waf
for example I use
/Users/mark/bin/waf -v -k -j2
Note that waf and make do not agree on the -j setting, so you have to give it explicitly and not use the Eclipse dialog.
You can use the Make Targets view to add the targets that call waf, e.g. configure, build, etc.
One issue I had is that Eclipse is hard-coded to expect Make's output format when the build changes directory, so I had to patch waf;
see waf issue
You could try and define a Custom builder, calling Waf with the appropriate options for the python compilation step.
(From eclipsejdt alcatel-lucent manual)
That picture (not related to Waf at all) illustrates the fact that a builder can be defined as an external tool (meaning any .bat or shell script you may want to call).
In that "eclipsejdt" example, the custom builder was configured like so:
To set up the builder, bring up the property dialog for project "jex1p" by selecting the project in the Package Explorer and selecting Project > Properties > Builders. Then click New..., select Program, and click OK.
Configure the builder Main tab using values:
Name : nmbldr_pre
Location : ${system_path:ksh}
Working Directory: ${build_project}
Arguments : nmbldr -p 2 -t ${build_type} -s jpre
As VonC says, the elegant way is to use a Custom builder.
Alternatively, it is less work (in the short term) to hack together an Ant script to do the heavy lifting and define an external builder to configure it onto the project. You may find the drawbacks of an external builder (e.g. no incremental support) mean it is worth investing the effort to do it "properly".
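A minimal sketch of such an Ant wrapper, assuming waf is on the PATH of the machine running the build (the target names and arguments are only illustrative):
<project name="waf-wrapper" default="build">
  <!-- Each target just shells out to the corresponding waf command. -->
  <target name="configure">
    <exec executable="waf" failonerror="true">
      <arg value="configure"/>
    </exec>
  </target>
  <target name="build">
    <exec executable="waf" failonerror="true">
      <arg value="build"/>
    </exec>
  </target>
</project>
The external builder then simply invokes ant with the desired target, and build failures are propagated through the failonerror flag.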