Spring Batch - XML or Java Config - spring-batch

I'm starting to develop some batches with Spring Batch and, looking at the docs, I need to decide which approach to choose: XML or Java config.
I have some questions:
What is the best choice?
What are the pros and cons?
Will XML Config be deprecated?

What is the best choice?
It depends on your needs and preferences.
What are the pros and cons?
This might help: Benefits of JavaConfig over XML configurations in Spring?
In the context of Spring Batch, the one advantage I see of XML config over Java config is the ability to change the job definition (like changing the order of steps) without having to recompile the app. Other than that, I would go for Java config.
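For illustration, a minimal Java-config job definition might look like the sketch below (Spring Batch 4.x-style builders; the configuration class, job, and step names are placeholders). It also shows the trade-off mentioned above: reordering the steps means editing this class and recompiling.

    import org.springframework.batch.core.Job;
    import org.springframework.batch.core.Step;
    import org.springframework.batch.core.configuration.annotation.EnableBatchProcessing;
    import org.springframework.batch.core.configuration.annotation.JobBuilderFactory;
    import org.springframework.batch.core.configuration.annotation.StepBuilderFactory;
    import org.springframework.batch.repeat.RepeatStatus;
    import org.springframework.context.annotation.Bean;
    import org.springframework.context.annotation.Configuration;

    @Configuration
    @EnableBatchProcessing
    public class SampleJobConfiguration {

        // The step order lives in compiled code; changing it requires a rebuild.
        @Bean
        public Job sampleJob(JobBuilderFactory jobs, Step stepOne, Step stepTwo) {
            return jobs.get("sampleJob")
                    .start(stepOne)
                    .next(stepTwo)
                    .build();
        }

        @Bean
        public Step stepOne(StepBuilderFactory steps) {
            return steps.get("stepOne")
                    .tasklet((contribution, chunkContext) -> RepeatStatus.FINISHED)
                    .build();
        }

        @Bean
        public Step stepTwo(StepBuilderFactory steps) {
            return steps.get("stepTwo")
                    .tasklet((contribution, chunkContext) -> RepeatStatus.FINISHED)
                    .build();
        }
    }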
Will XML Config be deprecated?
No, at least not in the short term. We follow other Spring projects for this kind of decision. If XML config were ever deprecated, we would make the same decision for consistency with the other projects in the portfolio.

Related

Build swagger2 specs via spring-rest-docs

I like the TDD approach to documenting your RESTful API with spring-rest-docs. However, I love the "API Playground" feature enabled by the Swagger specification. I wish there were a way to get the best of both worlds.
Is there a way to build Swagger 2 specs from Spring REST Docs, maybe by building custom request/response preprocessors?
Do you have any thoughts or recommendations?
There's no out-of-the-box support for this in Spring REST Docs at the moment. The issue that you opened will track the possibility of adding such functionality. In the meantime, your best bet would be to look at writing a custom Snippet implementation that generates (part of) a Swagger specification.
Typically, a Spring REST Docs snippet deals with documenting a single resource, whereas a Swagger specification describes an entire service. This means that the Swagger specification Snippet implementation will need to accumulate state somehow, before producing a complete specification at the end. There are lots of ways to do that (in memory, multiple files that are combined in a post-processing step, etc.). It's not clear to me that one approach is obviously the right one, so some experimentation would be useful. If you do some experimentation, please comment on the issue that you opened with your findings.
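As a rough sketch of the in-memory approach, a hypothetical custom Snippet could contribute one entry per documented operation to shared state, which is then serialized into a Swagger document at the end of the test run (the class and helper names below are mine, not part of Spring REST Docs):

    import java.io.IOException;
    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    import org.springframework.restdocs.operation.Operation;
    import org.springframework.restdocs.snippet.Snippet;

    // Each documented operation adds one entry to a shared map; an
    // after-suite hook could turn the accumulated entries into swagger.json.
    public class SwaggerPathSnippet implements Snippet {

        private static final Map<String, String> PATHS = new ConcurrentHashMap<>();

        @Override
        public void document(Operation operation) throws IOException {
            String method = operation.getRequest().getMethod().name();
            String path = operation.getRequest().getUri().getPath();
            PATHS.put(method + " " + path, operation.getName());
        }

        // Accessor for whatever post-processing step builds the final spec.
        public static Map<String, String> collectedPaths() {
            return PATHS;
        }
    }

A test would then pass a new SwaggerPathSnippet() alongside the regular snippets in the document(...) call, so it sees every operation that the other snippets see.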

Spring AOP: <context:load-time-weaver> Vs <aop:aspectj-autoproxy>

I was looking for the option to profile my APIs. I found Spring AOP is one of the options to profile the methods.
There are two options in Spring AOP to configure and use the aspects:
context:load-time-weaver
aop:aspectj-autoproxy
As per my understanding, the first option (load-time-weaver) performs weaving at load time without creating any proxy objects, and the second option (aspectj-autoproxy) creates proxy objects. Am I correct on this? I believe the creation of proxy objects may hurt performance, wouldn't it?
Which option is best to choose for better performance? What are the pros and cons of both approaches?
Well, Narendra, first of all there are profilers for profiling software. Maybe there is no need to code anything on your own.
As for your question: I have no idea how to configure Spring because I never use it; I am an AspectJ user. What I do know, though, is that Spring AOP always uses proxies (JDK or CGLIB, depending on whether you need to proxy interfaces or classes). This is, as you said, something you probably do not want for profiling. AspectJ, no matter whether you use compile-time or load-time weaving, does not need or use proxies and thus should be faster. If you are not already using Spring in your project anyway, I would not touch it just to satisfy your profiling needs. Furthermore, Spring AOP only works for Spring beans and just offers method interception, not much more. AspectJ is a full-blown AOP implementation and much more powerful. If you are already using Spring, you have a choice of using Spring AOP, AspectJ within Spring, or a mixture of both.
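For completeness, a minimal profiling aspect might look like the sketch below (annotation-style AspectJ; the pointcut package is a placeholder). The same @Aspect class can be woven by AspectJ at compile or load time without proxies, or applied by Spring AOP to Spring beans, where proxies are used instead:

    import org.aspectj.lang.ProceedingJoinPoint;
    import org.aspectj.lang.annotation.Around;
    import org.aspectj.lang.annotation.Aspect;

    @Aspect
    public class ProfilingAspect {

        // Measures the wall-clock time of every method in the (placeholder)
        // com.example.api package and its subpackages.
        @Around("execution(* com.example.api..*(..))")
        public Object profile(ProceedingJoinPoint pjp) throws Throwable {
            long start = System.nanoTime();
            try {
                return pjp.proceed();
            } finally {
                long elapsedMs = (System.nanoTime() - start) / 1_000_000;
                System.out.println(pjp.getSignature() + " took " + elapsedMs + " ms");
            }
        }
    }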

Running customized (non-BPMN) process definitions with Activiti

We are evaluating Activiti as a process engine to replace our existing home grown work flow engine. We are quite impressed by the capabilities of Activiti especially related to multi tenancy and REST WS.
However, one of the biggest challenges (and probably a blocker) to adopting Activiti would be: how can we run or migrate our existing work flow definitions?
As I mentioned earlier, our work flow solution is home grown and doesn't adhere to the BPMN specification. There are thousands of templates out there. We can't simply ask our customers to redefine their templates using Activiti. These definitions are stored in a proprietary XML format.
Looking at the level of customization in the templates, it would be very difficult to migrate these definitions to BPMN format.
So, does Activiti provide any hooks to run such custom templates? Alternatively, please share your thoughts about migrating the templates from the proprietary format to the BPMN format.
I suppose such a scenario would be common and other people would have faced the same.
I know I am being very vague with this query but at this stage I don't have specific problems that I can discuss.
One option is to implement your own parser and parse handlers for the proprietary XML. Look at org.activiti.engine.impl.bpmn.parser.BpmnParse and org.activiti.engine.impl.bpmn.parser.handler.AbstractBpmnParseHandler and its descendants.
We did it and it worked fine.
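Very roughly, and assuming an Activiti 5.x-era API (exact class names and signatures may differ between versions), a custom parse handler might look like the sketch below; the class name and the mapping logic are placeholders:

    import org.activiti.bpmn.model.BaseElement;
    import org.activiti.bpmn.model.UserTask;
    import org.activiti.engine.impl.bpmn.parser.BpmnParse;
    import org.activiti.engine.impl.bpmn.parser.handler.AbstractBpmnParseHandler;

    // Hooks into the parsing of user tasks; this is where data carried over
    // from a proprietary template format could be attached to the parsed activity.
    public class LegacyTemplateUserTaskParseHandler extends AbstractBpmnParseHandler<UserTask> {

        @Override
        protected Class<? extends BaseElement> getHandledType() {
            return UserTask.class;
        }

        @Override
        protected void executeParse(BpmnParse bpmnParse, UserTask userTask) {
            // Placeholder: map attributes from the proprietary XML onto the task here.
        }
    }

Such handlers are registered on the process engine configuration before the engine is built.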

OSGi: What is the best approach to wait for a declarative service component to start?

I have the following problem:
1: An OSGi bundle A (Equinox) is activated, and the activator parses an XML file
2: In the XML file, a declarative service is requested, which is present in another bundle (B)
3: Bundle B is not activated yet, so the activator of bundle A needs to wait
I know how to approach this purely in DS, but the parsing needs to be carried out in the activator. Also, I do not want to fool around with start levels and the like. Ideally, I would want to be able to register the service provided by bundle B when needed.
Is there an elegant way to achieve this?
Thanks,
Kees
OSGi services are dynamic by nature and therefore you should never depend on the availability of a service. You need to use some kind of service tracking via a ServiceTracker or, better, go for the pure DS solution, which does all the hard work for you.
Since you indicate that you must parse the XML file, I guess you decided to use some kind of external configuration describing which services to use. I would suggest reconsidering this type of architecture: you need to write a lot of code, while often the same goals can be reached by using a combination of Configuration Admin and Declarative Services/Blueprint.
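If you do stay with the activator-based design, a non-blocking way to wait for bundle B's service is a ServiceTracker. The sketch below defers the XML-driven work until the service appears; TemplateService is a hypothetical placeholder for the service interface provided by bundle B:

    import org.osgi.framework.BundleActivator;
    import org.osgi.framework.BundleContext;
    import org.osgi.framework.ServiceReference;
    import org.osgi.util.tracker.ServiceTracker;

    // Placeholder for the service interface that bundle B registers.
    interface TemplateService { }

    public class Activator implements BundleActivator {

        private ServiceTracker<TemplateService, TemplateService> tracker;

        @Override
        public void start(BundleContext context) {
            // Do not block the activator: run the parsing only once the service shows up.
            tracker = new ServiceTracker<TemplateService, TemplateService>(
                    context, TemplateService.class, null) {
                @Override
                public TemplateService addingService(ServiceReference<TemplateService> reference) {
                    TemplateService service = super.addingService(reference);
                    parseXmlWith(service); // hypothetical: the XML parsing that needs the service
                    return service;
                }
            };
            tracker.open();
        }

        @Override
        public void stop(BundleContext context) {
            tracker.close();
        }

        private void parseXmlWith(TemplateService service) {
            // Placeholder for the parsing logic carried out in the activator.
        }
    }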

Struts2 configuration and performance

I use Struts2 + Spring + Hibernate for web site development. And I am wondering about one thing: I have never used annotations in my web applications, but what is the best way to code a web application, annotations (I never understood how they work) or config files, and why? Will more complex applications run faster with one or the other, or is it a matter of principles?
This isn't definitive; it is just what I do with similar tools.
Looking at the Struts2 XML configuration vs. conventions (struts2-conventions-plugin) and annotations: the benefit of the latter is that there is a lot less work. When the conventions don't do what we want, we have a choice: use struts.xml, which will override the conventions, or use annotations, which will also override the conventions. If you go with annotations on your action class, then you can clearly see what is going on from one location. With struts.xml you often need to look at both the configuration file and the action to understand the whole picture.
Although I advocate annotations, the XML configuration is still good for some things. It is a good place to set global parameters. It is still needed for defining custom interceptors/interceptor stacks, and if you need actions defined from wildcards it makes sense to have them there too. All these examples reinforce the point that it is the more general configuration that belongs in struts.xml, because it is bigger than any single action.
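As a small illustration of the annotation style with the convention plugin (the action name and JSP path are placeholders), everything about the mapping is visible on the action class itself:

    import org.apache.struts2.convention.annotation.Action;
    import org.apache.struts2.convention.annotation.Result;
    import com.opensymphony.xwork2.ActionSupport;

    // The URL mapping and the result view are declared right on the class,
    // so there is no need to cross-reference struts.xml to see what happens.
    public class HelloAction extends ActionSupport {

        @Action(value = "hello",
                results = { @Result(name = "success", location = "/WEB-INF/jsp/hello.jsp") })
        public String execute() {
            return SUCCESS;
        }
    }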
For Hibernate it is similar. Your entity classes and meta information are all in one place, which makes it easier to understand. There was one case where XML was more useful: in a testing situation I needed to use the same entity classes but make extensive changes to the metadata, so I could simply load a different set of XML files.
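For example, a minimal annotated entity might look like this (names are placeholders); the mapping metadata sits next to the fields it describes instead of in a separate .hbm.xml file:

    import javax.persistence.Entity;
    import javax.persistence.GeneratedValue;
    import javax.persistence.Id;

    // The mapping is readable directly from the class: table-per-entity,
    // generated primary key, and a simple string column.
    @Entity
    public class Customer {

        @Id
        @GeneratedValue
        private Long id;

        private String name;

        // getters and setters omitted for brevity
    }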
With Spring I use annotations for injection but wire the beans in application.xml.
Other stackoverflow posts that may be of interest:
Xml configuration versus Annotation based configuration
Is there a good reason to configure hibernate with XML rather than via annotations?