Setting custom service class names for different WSDL versions with wsdl2java - soap

I'm developing a SOAP client based on Apache CXF. The Java classes for accessing the web services are generated with the wsdl2java Maven plugin. There are two WSDLs which define a service (InfoService) in two different versions:
info_service_v1.wsdl
info_service_v2.wsdl
Internally, both WSDLs use identical naming, i.e. the generated web service class is in each case named InfoService.
Is it possible to specify a different name depending on which WSDL is used?
Example:
info_service_v1.wsdl --> InfoServiceV1
info_service_v2.wsdl --> InfoServiceV2

In wsdl2java, you can set the -sn service-name option to control the service name used for each version.
Another option is to generate each version's code into a different package with the -p package-name option.
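As a sketch of the second option, the cxf-codegen-plugin can run wsdl2java over both WSDLs with a separate -p mapping per file (the WSDL paths and package names below are illustrative, not from the question):

```xml
<plugin>
  <groupId>org.apache.cxf</groupId>
  <artifactId>cxf-codegen-plugin</artifactId>
  <executions>
    <execution>
      <id>generate-info-service</id>
      <phase>generate-sources</phase>
      <goals>
        <goal>wsdl2java</goal>
      </goals>
      <configuration>
        <wsdlOptions>
          <!-- v1 classes generated into com.example.info.v1 -->
          <wsdlOption>
            <wsdl>${basedir}/src/main/resources/wsdl/info_service_v1.wsdl</wsdl>
            <extraargs>
              <extraarg>-p</extraarg>
              <extraarg>com.example.info.v1</extraarg>
            </extraargs>
          </wsdlOption>
          <!-- v2 classes generated into com.example.info.v2 -->
          <wsdlOption>
            <wsdl>${basedir}/src/main/resources/wsdl/info_service_v2.wsdl</wsdl>
            <extraargs>
              <extraarg>-p</extraarg>
              <extraarg>com.example.info.v2</extraarg>
            </extraargs>
          </wsdlOption>
        </wsdlOptions>
      </configuration>
    </execution>
  </executions>
</plugin>
```

Both services are then still named InfoService, but live in distinct packages (com.example.info.v1.InfoService vs. com.example.info.v2.InfoService), so they can be used side by side.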

Related

Custom implicit module dependency in WildFly

There's a third-party application bpm.ear that contains an ancient commons-net.jar in its /bpm.war/WEB-INF/lib/.
I need to globally override it with my own version of the jar without patching the files inside the deployment. That is, make this change survive undeployment of the app.
Formerly, when we were using JBoss 4.x, this was solved by setting the $CLASSPATH environment variable before starting the server. Of course, this doesn't work in WildFly 11.
I want to create a custom module inside ${JBOSS_HOME}/modules (already done) and to create a simple rule to implicitly add this module to all apps deployed on this server.
You can use the ee subsystem's global-modules attribute (see https://wildscribe.github.io/WildFly/16.0/subsystem/ee/#attr-global-modules) to define a list of modules that should be made available to all deployments.
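As a sketch, assuming the custom module was registered under the name org.apache.commons.net (the module name here is illustrative), the ee subsystem section of standalone.xml would look like this:

```xml
<subsystem xmlns="urn:jboss:domain:ee:4.0">
    <global-modules>
        <module name="org.apache.commons.net"/>
    </global-modules>
    <!-- ... rest of the ee subsystem configuration ... -->
</subsystem>
```

The same change can be made without editing the XML by running `/subsystem=ee:write-attribute(name=global-modules,value=[{name=org.apache.commons.net}])` in jboss-cli. Note that a global module is added to every deployment's classpath; it does not by itself hide the copy bundled inside bpm.war, so the application's classloading may still need to exclude the bundled jar.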

Presto Plugins: Single JAR vs Multiple JARs

My Presto plugin has two components: some UDFs (for basic MD5/SHA1 hashing) and an EventListener (for logging queries via a Fluentd logger).
During development (single-node Presto cluster), I added them under a single Plugin class, bundled a single JAR, and faced no problem.
During deployment I found a pitfall: the UDFs must be registered on all nodes, whereas (my particular) EventListener must be registered only on the master node.
Now I have two options
1. Bundle them together in single JAR
We can control registration of UDFs / EventListeners via an external config file (different configs for master and slave nodes). As more UDFs, EventListeners and other SPIs are added, a single JAR paired with a tweaked config file will achieve the desired result.
2. Bundle them as separate JARs
We can create different Plugin classes for the UDFs and the EventListener and list the corresponding class names in the META-INF/services/com.facebook.presto.spi.Plugin file through Jenkins. We'll then have different JARs for different components: one JAR for all UDFs, one JAR for all EventListeners, etc. However, as more functionality is added in future, we might end up with lots of different JARs.
My questions are
What are the pros and cons of both techniques?
Is there an alternate approach?
I'm currently on Presto 0.194 but will soon be upgrading to Presto 0.206
Either way works. You can do whichever is easiest for you. There's actually a third option in the middle, which is to have multiple Plugin implementations in a single JAR (you would list all implementations in the META-INF/services file).
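That middle option is just a services file with more than one entry. A minimal sketch, with illustrative class names standing in for your actual plugin classes:

```java
// META-INF/services/com.facebook.presto.spi.Plugin
// One fully qualified Plugin implementation per line; the ServiceLoader
// mechanism instantiates each of them when the JAR is loaded.
com.example.presto.HashingFunctionsPlugin
com.example.presto.QueryLoggerPlugin
```

Each listed class implements com.facebook.presto.spi.Plugin and overrides only the methods relevant to it (e.g. getFunctions() in one, getEventListenerFactories() in the other), so the components stay separated in code while shipping in one JAR.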
EventListener is actually used on both the coordinator and workers. Query events happen on the coordinator and split events happen on the workers. However, if you only care about query events, you only need it on the coordinator.
You can deploy the event plugin on both coordinator and workers but only configure it on the coordinator. The code will only be used if you configure it by adding an event-listener.properties file with an event-listener.name property that matches the name you return from your EventListenerFactory.getName() method.
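A minimal sketch of that configuration file, assuming the factory's getName() returns "query-logger" (the name is illustrative):

```java
// etc/event-listener.properties — present only on the coordinator.
// event-listener.name must match EventListenerFactory.getName();
// any other properties in this file are passed to the factory's create().
event-listener.name=query-logger
```

Workers without this file simply never instantiate the listener, even though the plugin JAR is deployed there.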

Sharing Global Elements Across Mule Flows

What is the recommended way to share global elements across flows? I am using Mule Studio 3.4. By using the import tag I am able to access global elements defined in my other flow XML, but running the flow within Mule Studio generates the following error: java.lang.IllegalArgumentException: A service named ... already exists.
<spring:beans>
<spring:import resource="name of flow xml where global elements are defined"/>
</spring:beans>
Is there an alternate way to share global elements?
Bit late to the party but I just came across this issue and it was because I had imported a common mule config into another config that was in the same project/application.
In my case I had a single project set up in Anypoint Studio that had two configs with completely separate flows. On top of this I had a third config with "common" sub-flows that I use in the other two. This common config was also in the same project. I had imported the common config into the other two configs, but this is unnecessary as it's already available to both if they're all in the same project. This seems to mean the common config is brought in several times, causing conflicts with the common flow elements' names.
Long story short: try removing imports like this that are in the same project:
<spring:beans>
<spring:import resource="classpath:mule-common-config.xml" />
</spring:beans>
I encountered this issue while trying to run a Mule application from Anypoint Studio 6.4.2. I believe the error is Mule's way of complaining that the XML resource has already been imported. Checking for any duplicate imports could help in tackling this issue.
If your mule-config.xml lies in a subdirectory, e.g. "packaged/serviceframework", then use the snippet below, i.e. provide the path of the XML file instead of just the file name.
<spring:beans>
<spring:import resource="classpath:packaged/serviceframework/mule-config.xml" />
</spring:beans>

Passing RAILS_ENV into Torquebox without using a Deployment Descriptor

I am wondering if there is a way to pass a value for RAILS_ENV directly into the Torquebox server without going through a deployment descriptor; similar to how I can pass properties into Java with the -D option.
I have been wrestling with various deployment issues with Torquebox over the past couple of weeks. I think a large part of the problem has to do with packaging the gems into the Knob file, which is the most practical way of managing them in a Windows environment. I have tried archive deployment and expanded deployment, with and without an external deployment descriptor.
With an external deployment descriptor, I found the packaged Gem dependencies were not properly deployed and I received errors about missing dependencies.
When expanded, I had to fudge around a lot with the dependencies and what got included in the Knob, but eventually I got it to deploy. However, certain files in the expanded Knob were marked as failed (possible duplicate dependencies?), but they did not affect the overall deployment. The problem was when the server restarted, deployment would fail the second time mentioning it could not redeploy one of the previously failed files.
The only approach I have found to work consistently is archive deployment without an external deployment descriptor. However, I still need a way to tell the application in which environment it is running. I have different Torquebox instances for each environment and they each run only the one application, so it would be fairly reasonable to configure this at the server level.
Any assistance in this matter would be greatly appreciated. Thank you very much!
The solution I finally came to was to pass in RAILS_ENV as a Java property to the Torquebox server and then to set ENV['RAILS_ENV'] to this value in the Rails boot.rb initializer.
Step 1: Set Java Property
First, you will need to set a Rails environment Java property for your Torquebox server. To keep with standard Java conventions, I called this rails.env.
Depending on your platform and configuration, this change will need to be made in one of the following scripts:
Using JBoss Windows Service Wrapper: service.bat
Standalone environment: standalone.conf.bat (Windows) or standalone.conf (Unix)
Domain environment: domain.conf.bat (Windows) or domain.conf (Unix)
Add the following line to the appropriate file above to set this Java property:
set JAVA_OPTS=%JAVA_OPTS% -Drails.env=staging
The -D option is used for setting Java system properties.
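On Unix, the equivalent line in standalone.conf (or domain.conf) would use shell syntax instead (the staging value is just this answer's example environment):

```shell
# standalone.conf — append the system property to the JVM options
JAVA_OPTS="$JAVA_OPTS -Drails.env=staging"
```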
Step 2: Set ENV['RAILS_ENV'] based on Java Property
We want to set the RAILS_ENV as early as possible, since it is used by a lot of Rails initialization logic. Our first opportunity to inject application logic into the Rails Initialization Process is boot.rb.
See: http://guides.rubyonrails.org/initialization.html#config-boot-rb
The following line should be added to the top of boot.rb:
# boot.rb (top of the file)
ENV['RAILS_ENV'] = ENV_JAVA['rails.env'] if defined?(ENV_JAVA) && ENV_JAVA['rails.env']
This needs to be the first thing in the file, so Bundler can make intelligent decisions about the environment.
As you can see above, a seldom mentioned feature of JRuby is that it conveniently exposes all Java system properties via the ENV_JAVA global map (mirroring the ENV ruby map), so we can use it to access our Java system property.
We check that ENV_JAVA is defined (i.e. JRuby is being used), since we support multiple deployment environments.
I force the rails.env property to be used when present, as it appears that RAILS_ENV already has a default value at this point.

How to share a GWT RPC RemoteServiceServlet among multiple client modules / apps

I have several GWT modules and web apps running in Jetty. They each want to use my LoginService RPC interface, so I put this interface and its servlet implementation in a common module. I serve the servlet (LoginServiceImpl) from my root web context, and web.xml exposes it at the URL "/loginService". In another GWT module, to use this service, I had to set the entry point, like this:
LoginServiceAsync loginService = GWT.create(LoginService.class);
ServiceDefTarget t = (ServiceDefTarget)loginService;
t.setServiceEntryPoint("/loginService");
Now, the module trying to use the loginService is called discussions, and I get this error on the server.
ERROR: The serialization policy file
'/discussions/discussions/7B344C69AD493C1EC707EC98FE148AA0.gwt.rpc' was not found;
did you forget to include it in this deployment?
So the servlet is reporting an error that mentions the client (the discussions module). I'm guessing that the RPC plumbing passed the name of this .rpc file through from the client, and the servlet is now looking for it. (?) As an experiment, I copied the *.gwt.rpc files from the discussions module into the root web context, so the servlet could find them. This did stop the error. But I still get another error:
com.google.gwt.user.client.rpc.SerializationException: Type
'mystuff.web.shared.User' was not assignable to
'com.google.gwt.user.client.rpc.IsSerializable' and did not have a custom field
serializer. For security purposes, this type will not be serialized.: ...
This class is serializable; it worked before in other modules, so now I'm lost.
What is the right way to use the LoginService from multiple clients modules?
Update:
This error was showing up in hosted dev mode, and it went away after a full compile. Maybe this is related to GWT serialization policy hosted mode out of sync. I will update again if I can better reproduce the problem.
See my answer here. The short answer is: you'll need to make the mystuff.web.shared.User source available at compile time to your discussions module.
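A minimal sketch of how that usually looks, assuming the common module ships a GWT module descriptor named mystuff.web.Shared that exposes the shared package as translatable source (both module names below are illustrative):

```xml
<!-- Shared.gwt.xml, in the common module at mystuff/web/Shared.gwt.xml -->
<module>
  <!-- Make mystuff.web.shared.* (including User) visible to the GWT compiler -->
  <source path="shared"/>
</module>
```

```xml
<!-- Discussions.gwt.xml, in the discussions module -->
<module>
  <inherits name="com.google.gwt.user.User"/>
  <!-- Pull in the shared types so they compile into this module's
       client code and its serialization policy -->
  <inherits name="mystuff.web.Shared"/>
  <source path="client"/>
</module>
```

With the common module's source JAR on the discussions module's compile classpath and the inherit in place, the GWT compiler can generate a serialization policy that covers mystuff.web.shared.User.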