Example Spring Integration DSL for a JPA inbound channel adapter - spring-data-jpa

I can't find a useful example of polling a JPA source for inbound data. I know how to do this in XML but can't figure out how to do it in the DSL.
In short, what I want to do is periodically poll a JPA repository for records and then feed the records into a flow that will do the usual filtering/transforming/executing.
Kind regards
David Smith

You are right: there is no JPA component support in the Spring Integration Java DSL yet. Feel free to raise a JIRA issue (JavaDSL component) on the matter and we'll take care of this demand. Feel free to contribute as well!
Meanwhile, I can help you figure out how to do that without the high-level API.
The <int-jpa:inbound-channel-adapter> is based on the JpaPollingChannelAdapter and JpaExecutor objects (exactly those we will use with the DSL API). You just need to configure a @Bean for the JpaExecutor and use it like this:
@Bean
public JpaExecutor jpaExecutor(EntityManagerFactory entityManagerFactory) {
    JpaExecutor jpaExecutor = new JpaExecutor(entityManagerFactory);
    jpaExecutor.setJpaQuery("from Foo");
    // ... any other JpaExecutor configuration
    return jpaExecutor;
}

@Bean
public IntegrationFlow jpaFlow(JpaExecutor jpaExecutor) {
    return IntegrationFlows.from(new JpaPollingChannelAdapter(jpaExecutor))
            .split()
            .transform(...)
            // ... the rest of the flow
            .get();
}
Everything else will be done by the framework as usual, just as with the existing DSL components API.
UPDATE
How can I provide the auto-startup property when creating the JpaPollingChannelAdapter programmatically? Also, is it possible to get this bean and invoke .start() and .stop() using a control bus?
See Gary's answer. Lifecycle control is the responsibility of the endpoint, which in our case is the SourcePollingChannelAdapter. So, you should specify that second lambda argument and configure .autoStartup() and .id() there, so that you can inject the SourcePollingChannelAdapter for your JpaPollingChannelAdapter and operate on it for your purposes. That id can indeed be used from a control bus to start()/stop() the adapter at runtime.
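For example, a minimal sketch of that second lambda argument (the adapter id and polling interval here are assumptions for illustration):

@Bean
public IntegrationFlow jpaFlow(JpaExecutor jpaExecutor) {
    return IntegrationFlows.from(new JpaPollingChannelAdapter(jpaExecutor),
                    e -> e.poller(Pollers.fixedDelay(5000))
                          .autoStartup(false)        // do not start polling on context startup
                          .id("jpaInboundAdapter"))  // bean name for the SourcePollingChannelAdapter
            .split()
            .transform(...)
            .get();
}

With that id in place, sending the message "@jpaInboundAdapter.start()" to the control-bus input channel starts the adapter at runtime.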
Yes, I agree JpaPollingChannelAdapter is an unfortunate name for that class, because it is really a MessageSource implementation.

Wire up a JpaPollingChannelAdapter as a @Bean and use

IntegrationFlows.from(jpaMessageSource(),
        c -> c.poller(Pollers.fixedDelay(1000)))
    .transform(...)
    ...
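Where jpaMessageSource() might be defined like this minimal sketch (the entityManagerFactory reference and the query are assumptions for illustration):

@Bean
public JpaPollingChannelAdapter jpaMessageSource() {
    // entityManagerFactory is assumed to be injected into the surrounding configuration class
    JpaExecutor executor = new JpaExecutor(this.entityManagerFactory);
    executor.setJpaQuery("from Foo");
    return new JpaPollingChannelAdapter(executor);
}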
See the DSL Reference for configuration options.
This one's near the top (with a different message source).

Related

Profiles in Dagger

I am new to Dagger and I am searching for how to implement functionality like Spring profiles in Dagger 2.x. I want different beans for my devo and prod environments, but I am using the Dagger framework with Java.
@Provides
@Singleton
public void providesDaggerCoffeeShopClient(Stage stage) {
    DaggerCoffeeShop.builder()
            .dripCoffeeModule(new DripCoffeeModule())
            .qualifier(stage)
            .build();
}
Here, I want to skip this bean creation if the stage is "Devo". Any help will be appreciated.
Well, I met this question two days ago and have since researched the matter. I was looking for a solution that would allow me to run the application with different profiles passed as a system property, like:
java -Denv=local-dev-env -jar java-app.jar
The only appropriate solution I was able to find is to follow the official documentation's testing guide:
https://dagger.dev/dev-guide/testing
and divide my one module into different modules; in particular, I had to separate out and substitute the database dependency so that when I run my app locally it avoids connecting to the real DB and executing any commands against it.
And when I run my app, I check the system property like:
public boolean isLocalDevEnv() {
    return Environments.LOCAL_DEV.envName.equals(System.getProperty("env", Environments.PRODUCTION.envName));
}
If the system property DOES NOT contain the value I am looking for, then I create the PRODUCTION instance of my component (which is configured to use the production modules):
DaggerMyAppComponent.create()
Which approximately looks like:
@Component(modules = {MyAppModule.class, DaoModule.class})
@Singleton
public interface MyAppComponent {...}
Otherwise, I create the local-dev-env version of the component, which uses a version of the module that produces a mock of the Dao (the real module would otherwise create a real connection to the real database):
DaggerMyAppLocalDevEnvComponent.create()
Which approximately looks like:
@Component(modules = {MyAppModule.class, DaoMockModule.class})
@Singleton
public interface MyAppLocalDevEnvComponent {...}
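Putting it together, the component selection can be done programmatically at startup. A minimal sketch, assuming MyAppLocalDevEnvComponent extends MyAppComponent so the two components can be used interchangeably:

// choose the component based on the "env" system property checked above
MyAppComponent component = isLocalDevEnv()
        ? DaggerMyAppLocalDevEnvComponent.create()
        : DaggerMyAppComponent.create();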
Hope it is clear; just think of Spring Profiles for Dagger 2 from the perspective of system properties and programmatic decision making. This approach definitely requires a lot of boilerplate code in comparison to Spring's Profiles implementation, but it is the only viable approach I was able to come up with.
Hope it helps.

Apply Spring Data's ReactiveCrudRepository to Redis

I'm playing with Spring Boot 2 with WebFlux. I'm trying to use ReactiveSortingRepository to simplify Redis ops.
public interface DataProfileRepository extends ReactiveSortingRepository<DataProfileDTO, String> {
}
Simply using this interface:
Mono<DataProfileDTO> tmp = this.dataProfileRepository.findById(id);
the following exception is thrown:
org.springframework.core.convert.ConverterNotFoundException: No converter found capable of converting from type [com.tradeshift.dgps.dto.DataProfileDTO] to type [reactor.core.publisher.Mono<?>]
at org.springframework.core.convert.support.GenericConversionService.handleConverterNotFound(GenericConversionService.java:321) ~[spring-core-5.0.2.RELEASE.jar:5.0.2.RELEASE]
at org.springframework.core.convert.support.GenericConversionService.convert(GenericConversionService.java:194) ~[spring-core-5.0.2.RELEASE.jar:5.0.2.RELEASE]
at org.springframework.core.convert.support.GenericConversionService.convert(GenericConversionService.java:174) ~[spring-core-5.0.2.RELEASE.jar:5.0.2.RELEASE]
at org.springframework.data.repository.util.ReactiveWrapperConverters.toWrapper(ReactiveWrapperConverters.java:197) ~[spring-data-commons-2.0.2.RELEASE.jar:2.0.2.RELEASE]
at org.springframework.data.repository.core.support.QueryExecutionResultHandler.postProcessInvocationResult(QueryExecutionResultHandler.java:104) ~[spring-data-commons-2.0.2.RELEASE.jar:2.0.2.RELEASE]
at org.springframework.data.repository.core.support.RepositoryFactorySupport$QueryExecutorMethodInterceptor.invoke(RepositoryFactorySupport.java:587) ~[spring-data-commons-2.0.2.RELEASE.jar:2.0.2.RELEASE]
The behavior of this repository doesn't match Reactor: in debug mode I can see that an actual DataProfileDTO was fetched from Redis, and it then failed when trying to call:
GENERIC_CONVERSION_SERVICE.convert(reactiveObject, targetWrapperType);
in ReactiveWrapperConverters.toWrapper.
I went googling; it seems the Spring Data Redis 2.0 documentation doesn't mention reactive repository support. I'm wondering whether I did anything wrong in my code or whether Spring Data Redis 2.0 just doesn't support ReactiveCrudRepository yet.
According to Spring's documentation for Reactive Redis Support, the highest level of abstraction for interfacing with Redis reactively is ReactiveRedisTemplate. ReactiveRedisConnection is a lower-level abstraction that works with binary values (ByteBuffer) as input and output.
There's no mention of support of reactive repositories.
You can also consult the official reactive examples in the spring-data github repo.
In order for all this to work, you need reactive support in the driver you're using - currently that would be Lettuce.
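For completeness, a minimal sketch of wiring a ReactiveRedisTemplate for the DTO in question (the serializer choice and the key format in the usage line are assumptions, not the only way to do it):

import org.springframework.context.annotation.Bean;
import org.springframework.data.redis.connection.ReactiveRedisConnectionFactory;
import org.springframework.data.redis.core.ReactiveRedisTemplate;
import org.springframework.data.redis.serializer.Jackson2JsonRedisSerializer;
import org.springframework.data.redis.serializer.RedisSerializationContext;
import org.springframework.data.redis.serializer.StringRedisSerializer;

@Bean
public ReactiveRedisTemplate<String, DataProfileDTO> reactiveRedisTemplate(
        ReactiveRedisConnectionFactory factory) {
    // JSON-serialize the DTO values; keys are plain strings
    Jackson2JsonRedisSerializer<DataProfileDTO> valueSerializer =
            new Jackson2JsonRedisSerializer<>(DataProfileDTO.class);
    RedisSerializationContext<String, DataProfileDTO> context =
            RedisSerializationContext.<String, DataProfileDTO>newSerializationContext(
                    new StringRedisSerializer())
            .value(valueSerializer)
            .build();
    return new ReactiveRedisTemplate<>(factory, context);
}

// usage, e.g. in a service (key format assumed):
Mono<DataProfileDTO> profile = reactiveRedisTemplate.opsForValue().get("dataProfile:" + id);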
Although not ideal, an alternative is to use a blocking repository and handle its results in a reactive way, e.g. via Mono.justOrEmpty() or Flux.fromIterable().
public interface DataProfileRepository extends CrudRepository<DataProfileDTO, String> {
}
And wrap the calls (in Spring Data 2.0, findById returns an Optional, so Mono.justOrEmpty fits single results, while Flux.fromIterable fits collection results such as findAll()):

Mono<DataProfileDTO> tmp = Mono.justOrEmpty(dataProfileRepository.findById(id));
Flux<DataProfileDTO> all = Flux.fromIterable(dataProfileRepository.findAll());
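Note that the repository call still blocks the calling thread, so it is worth keeping it off the event loop. A minimal sketch using Reactor's elastic scheduler (the scheduler choice is an assumption; any blocking-friendly scheduler works):

Mono<DataProfileDTO> tmp = Mono.fromCallable(() -> dataProfileRepository.findById(id))
        .flatMap(Mono::justOrEmpty)        // unwrap the Optional returned by the blocking repository
        .subscribeOn(Schedulers.elastic()); // run the blocking call on a dedicated thread pool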

Can PMD be used for dataflow analysis on Java?

I'd like to know if I can use PMD to perform some basic data flow analysis actions. It's an assignment, so it doesn't matter if it's trivial.
I can't find any code examples online.
Is the DFA module working? Should I go the reverse-engineering way to see what's going on?
Thanks a lot
PMD's Data Flow Analysis module is operational. There are rules shipping with PMD that use it, for instance DataflowAnomalyAnalysis.
However, it is true that the PMD team plans to revamp that implementation at some point in the future.
DFA is only usable through Java rules (XPath rules can't be used). Writing a DFA rule consists of:
Writing a visitor in which you get the DFA node for the method / constructor you want to analyze:
@Override
public Object visit(ASTMethodDeclaration methodDeclaration, Object data) {
    // First node of the method's data-flow graph
    final DataFlowNode node = methodDeclaration.getDataFlowNode().getFlow().get(0);
    // 'executable' is the Executable shown below; MAX_PATH_DESCRIPTOR bounds the path search
    final DAAPathFinder pathFinder = new DAAPathFinder(node, executable, getProperty(MAX_PATH_DESCRIPTOR));
    pathFinder.run();
    return data;
}
Writing a proper Executable to enforce your rule:
@Override
public void execute(CurrentPath path) {
    // your code here to analyze the current execution path
}
A working example can be found here.

Do not allow Expands for specific EntityTypes in Breeze

I'm searching for an elegant way to disallow specific EntityTypes from being expanded in BreezeJS. We have a (somewhat) public web service that we are exposing, and there are some tables that we don't want to be visible to some consumers of that service. Although we can expose Web API methods for only the permitted tables, consumers of the service could still access the other tables by expanding from related tables.
Note: I've posted an answer to this question giving a work-around. However, I'm interested in whether anyone out there knows a more elegant way of skinning this particular cat.
On the UserVoice page for requesting this feature to be formally added to Breeze, Ward Bell suggests a decent work-around:
Meanwhile, in your controller you can examine the query string from the request for presence of $select and $expand and throw an exception if you see it.
I'm guessing this would look something like this:
[HttpGet]
public IQueryable<Widget> Widgets() {
    if (!string.IsNullOrEmpty(HttpContext.Current.Request.QueryString["$expand"])) {
        throw new Exception("Ah ah ah, you didn't say the magic word!");
    }
    return _contextProvider.Context.Widgets;
}
...to block all expands, or something more specific to block the expand of Features itself. This isn't too shabby, but not quite "elegant".
(Yes, that is a Jurassic Park reference.)

Can I use RequestFactory without getId() and getVersion() methods?

We are trying to use RequestFactory with an existing Java entity model. Our Java entities all implement a DomainObject interface and expose a getObjectId() method (this name was chosen because getId() can be ambiguous and conflict with the domain object's actual ID from the domain being modeled).
The ServiceLayerDecorator class allows for customization of ID and version property lookup strategies:
public class MyServiceLayerDecorator extends ServiceLayerDecorator {
    @Override
    public Object getId(Object object) {
        DomainObject domainObject = (DomainObject) object;
        return domainObject.getObjectId();
    }
}
So far, so good. However, trying to deploy this solution yields runtime errors. In particular, RequestFactoryInterfaceValidator complains:
[ERROR] There is no getId() method in type com.mycompany.server.MyEntity
Then later on:
[ERROR] Type type com.mycompany.client.MyEntityProxy was previously marked as bad
[ERROR] The type com.mycompany.client.MyEntityProxy did not pass RequestFactory validation
[ERROR] Unexpected error
com.google.web.bindery.requestfactory.server.UnexpectedException: The type com.mycompany.client.MyEntityProxy did not pass RequestFactory validation
at com.google.web.bindery.requestfactory.server.ServiceLayerDecorator.die(ServiceLayerDecorator.java:212) ~[gwt-servlet.jar:na]
My question is: why does ServiceLayerDecorator allow for customized ID and version lookup strategies if RequestFactoryInterfaceValidator hardcodes the getId() and getVersion() convention?
I guess I could override ServiceLayerDecorator.resolveClass() to ignore "poisoned" proxy classes, but at this point it seems like I'm fighting the framework too much...
A couple of options, some of which have already been mentioned:
Locator. I like to make a single Locator for the entire project, or at least for groups of related objects that have similar key types (see the sketch after this list). The getId() call will be able to invoke your DomainObject.getObjectId() method and return that value. Note that the getDomainType() method is currently unused, and can return null or throw an exception.
ValueProxy. Instead of having your objects map to something RF can understand as an entity, map them to plain value objects - no id or version required. RF misses out on a lot of clever things it can do, especially with regard to avoiding sending redundant data to the server.
ServiceLayerDecorator. This worked pre 2.4, but with the annotation processing that goes on now, it works less well, since it tries to do some of the work for you. It seems ServiceLayerDecorator has lost a lot of its teeth in the last few months - in theory, you could use it to rebuild getters to talk directly to your persistence mechanism, but now that the annotation processing verifies your code, that is no longer an option.
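For the Locator option above, a minimal sketch of a single Locator for all DomainObject entities (the Long key type, the persistence lookup, and the getVersion() accessor are assumptions for illustration):

public class DomainObjectLocator extends Locator<DomainObject, Long> {
    @Override
    public DomainObject create(Class<? extends DomainObject> clazz) {
        try {
            return clazz.newInstance();
        } catch (InstantiationException | IllegalAccessException e) {
            throw new RuntimeException(e);
        }
    }

    @Override
    public DomainObject find(Class<? extends DomainObject> clazz, Long id) {
        return persistence.find(clazz, id); // hypothetical persistence lookup
    }

    @Override
    public Class<DomainObject> getDomainType() {
        return null; // currently unused, as noted above
    }

    @Override
    public Long getId(DomainObject domainObject) {
        return domainObject.getObjectId(); // delegates to the existing method
    }

    @Override
    public Class<Long> getIdType() {
        return Long.class;
    }

    @Override
    public Object getVersion(DomainObject domainObject) {
        return domainObject.getVersion(); // assumes a version accessor exists
    }
}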
Big issue in all of this is that RequestFactory is designed to solve a single problem, and solve it well: Allow developers to use POJOs mapped to some persistence mechanism, and refer to those objects from the client, following certain conventions to avoid writing extra code or configuration.
As a result, it solves its own problem pretty well, and ends up being a bad fit for many other problems/use-cases. You might be finding that it isn't worth it: if so, a few thoughts you might consider:
RPC. It isn't perfect for much, but it does an okay job for a lot.
AutoBeans (which RF is based on) is still a pretty fast, lightweight way to send data over the wire and get it into the app. You could build your own wrapper around it, like RF has done, and slim down the problem it is trying to solve to just your use-case.