Teiid Spring Boot: How to expose specific MongoDB collections as OData entities? - mongodb

I created a project following the steps of the sample project at https://github.com/teiid/teiid-spring-boot/tree/master/samples/mongodb and added the org.teiid:spring-odata dependency for OData exposure.
I found that it exposes all collections in the MongoDB database as OData entities by default. Would it be possible to configure it to expose only specific collections?

Updated:
You can add the following to application.properties:
spring.teiid.data.mongodb.accounts.remoteServerList=localhost:27017
spring.teiid.data.mongodb.accounts.database=sampledb
spring.teiid.data.mongodb.accounts.user=admin
spring.teiid.data.mongodb.accounts.password=admin
spring.teiid.data.mongodb.accounts.authDatabase=admin
spring.teiid.data.mongodb.accounts.importer.excludeTables=.*
where "accounts" is the bean name. See "importer properties"
http://teiid.github.io/teiid-documents/master/content/reference/MongoDB_Translator.html
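The .* pattern above excludes every collection. To keep only specific ones, a negative-lookahead pattern might work; this is a hypothetical sketch (the collection names customers and orders are placeholders, and excludeTables is documented as a regular expression matched against the table name, so adjust it to the qualified names in your model):
spring.teiid.data.mongodb.accounts.importer.excludeTables=^(?!.*(customers|orders)).*$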
Then configure the data source:
@Configuration
public class DataSources {

    @Bean
    public MongoDBConnectionFactory accounts(@Qualifier("config") @Autowired MongoDBConfiguration config) {
        return new MongoDBConnectionFactory(new MongoDBTemplate(config));
    }

    @ConfigurationProperties("spring.teiid.data.mongodb.accounts")
    @Bean("config")
    public MongoDBConfiguration mongoConfig() {
        return new MongoDBConfiguration();
    }
}
The above applies when exposing MongoDB alone, without any other changes.

Related

Role of PlatformTransactionManager with Spring batch 5

The documentation is not very clear about the role of PlatformTransactionManager in step configuration.
First, stepBuilder.tasklet and stepBuilder.chunk require a PlatformTransactionManager as their second parameter, while the migration guide says: "it is now required to manually configure the transaction manager on any tasklet step definition (...) This is only required for tasklet steps, other step types do not require a transaction manager by design."
Moreover, in the documentation the transactionManager is injected via a method parameter:
/**
* Note the TransactionManager is typically autowired in and not needed to be explicitly
* configured
*/
But the transactionManager created by Spring Boot is linked to the DataSource that Spring Boot creates from spring.datasource.url. So with auto-configuration, the following beans work together: dataSource, platformTransactionManager, jobRepository. That makes sense for managing job and step execution metadata.
But unless the readers, writers and tasklets work with this default DataSource used by the JobOperator, the auto-configured transactionManager must not be used for the step configuration. Am I right?
Tasklets or chunk-oriented steps will often need another PlatformTransactionManager:
if a step writes data to a specific database, it needs a specific DataSource (not necessarily declared as a bean, otherwise the JobRepository will use it) and a specific PlatformTransactionManager linked to this DataSource
if a step writes data to a file or sends messages to a MOM, the ResourcelessTransactionManager is more appropriate. This useful implementation is not mentioned in the documentation.
As far as I understand, the implementation of PlatformTransactionManager for a step depends on where the data are written and has nothing to do with the transactionManager bean used by the JobOperator. Am I right?
Example:
var builder = new StepBuilder("step-1", jobRepository);
PlatformTransactionManager txManager = new ResourcelessTransactionManager();
return builder.<Input, Output>chunk(10, txManager)
        .reader(reader())
        .processor(processor())
        .writer(writer() /* a FlatFileItemWriter */)
        .build();
or
@SpringBootApplication
@EnableBatchProcessing
public class MyJobConfiguration {

    private DataSource dsForStep1Writer;

    public MyJobConfiguration(@Value("${ds.for.step1.writer.url}") String url) {
        this.dsForStep1Writer = new DriverManagerDataSource(url);
    }

    // reader() method, processor() method

    JdbcBatchItemWriter<Output> writer() {
        return new JdbcBatchItemWriterBuilder<Output>()
                .dataSource(this.dsForStep1Writer)
                .sql("...")
                .itemPreparedStatementSetter((item, ps) -> {/* code */})
                .build();
    }

    @Bean
    Step step1(JobRepository jobRepository) {
        var builder = new StepBuilder("step-1", jobRepository);
        var txManager = new JdbcTransactionManager(this.dsForStep1Writer);
        return builder.<Input, Output>chunk(10, txManager)
                .reader(reader())
                .processor(processor())
                .writer(writer())
                .build();
    }

    // other methods
}
Is that correct?
The role of the transaction manager did not change between v4 and v5. I wrote an answer about this a couple of years ago for v4, so I will update it for v5 here:
In Spring Batch, there are two places where a transaction manager is used:
In the proxies created around the JobRepository/JobExplorer to create transactional methods when interacting with the job repository/explorer
In each step definition to drive the step's transaction
Typically, the same transaction manager is used in both places, but this is not a requirement. It is perfectly fine to use a ResourcelessTransactionManager with the job repository to not store any meta-data and a JpaTransactionManager in the step to persist data in a database.
Now in v5, #EnableBatchProcessing does not register a transaction manager bean in the application context anymore. You either need to manually configure one in the application context, or use the one auto-configured by Spring Boot (if you are a Spring Boot user).
What #EnableBatchProcessing will do though is look for a bean named transactionManager in the application context and set it on the auto-configured JobRepository and JobExplorer beans (this is configurable with the transactionManagerRef attribute). Again, this transaction manager bean could be manually configured or auto-configured by Boot.
Once that is in place, it is up to you to set that transaction manager on your steps or not.
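As a minimal sketch of that split (assuming Spring Batch 5; the Input/Output types and the reader(), processor(), writer() methods are the placeholders from the question, and the JPA step is just one possible choice), the job repository can run on the transactionManager bean while a step uses its own:
import javax.sql.DataSource;
import jakarta.persistence.EntityManagerFactory;
import org.springframework.batch.core.Step;
import org.springframework.batch.core.configuration.annotation.EnableBatchProcessing;
import org.springframework.batch.core.repository.JobRepository;
import org.springframework.batch.core.step.builder.StepBuilder;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.jdbc.support.JdbcTransactionManager;
import org.springframework.orm.jpa.JpaTransactionManager;
import org.springframework.transaction.PlatformTransactionManager;

@Configuration
@EnableBatchProcessing
public class BatchConfig {

    // Picked up by @EnableBatchProcessing and used by the JobRepository/JobExplorer proxies
    @Bean
    public PlatformTransactionManager transactionManager(DataSource dataSource) {
        return new JdbcTransactionManager(dataSource);
    }

    // The step drives its own transactions with a different transaction manager
    @Bean
    public Step step1(JobRepository jobRepository, EntityManagerFactory emf) {
        JpaTransactionManager stepTxManager = new JpaTransactionManager(emf);
        return new StepBuilder("step-1", jobRepository)
                .<Input, Output>chunk(10, stepTxManager)
                .reader(reader())
                .processor(processor())
                .writer(writer())
                .build();
    }
}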

Changes to mongo collection not visible across sessions

This is my Mongo config:
@Configuration
public class MongoConfig {

    @Bean
    public MongoCustomConversions customConversions() {
        return new MongoCustomConversions(Arrays.asList(new OffsetDateTimeReadConverter(), new OffsetDateTimeWriteConverter()));
    }

    @Bean
    public ValidatingMongoEventListener validatingMongoEventListener() {
        return new ValidatingMongoEventListener(validator());
    }

    @Bean
    public LocalValidatorFactoryBean validator() {
        return new LocalValidatorFactoryBean();
    }
}
and:
spring:
  data:
    mongodb:
      uri: mongodb://localhost:27017/my-database
I have noticed that whatever changes I make to my collection in the Spring Boot service, whether through a repository or MongoOperations, save or find, are visible only during the lifetime of the Spring Boot service and are NOT visible from the mongo command-line interface. Likewise, documents that I add from the mongo command line are NOT visible to the Spring Boot service.
To my knowledge I have only one instance of mongodb running; only one is visible in Task Manager. I double-checked the names of the database and collection, too.
What could be the reason?
I found that the problem was caused by embedded MongoDB:
<dependency>
    <groupId>de.flapdoodle.embed</groupId>
    <artifactId>de.flapdoodle.embed.mongo</artifactId>
    <scope>test</scope>
</dependency>
Despite the scope being test, the embedded server was also started for the main configuration. I could observe this behaviour only in Eclipse, not in IntelliJ.
I could solve, or at least circumvent, the problem by excluding the embedded auto-configuration:
@SpringBootApplication(exclude = EmbeddedMongoAutoConfiguration.class)
public class MyApplication
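An equivalent approach, assuming the standard auto-configuration class name, is to exclude it via application.properties instead of the annotation:
spring.autoconfigure.exclude=org.springframework.boot.autoconfigure.mongo.embedded.EmbeddedMongoAutoConfiguration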

Spring Boot with application managed persistence context

I am trying to migrate an application from EJB3 + JTA + JPA (EclipseLink). Currently, this application uses an application-managed persistence context because the number of databases is unknown at design time.
The application-managed persistence context allows us to control how the EntityManager is created (e.g. supply different datasource JNDI names to create the proper EntityManager for a specific DB at runtime).
E.g.
Map<String, Object> properties = new HashMap<>();
properties.put(PersistenceUnitProperties.TRANSACTION_TYPE, "JTA");
// the datasource JNDI name comes from configuration, with no prior knowledge of the number of databases
// currently, the DB JNDI names are stored in an externalized file
// the datasources are set up by the operations team
properties.put(PersistenceUnitProperties.JTA_DATASOURCE, "datasource-jndi");
properties.put(PersistenceUnitProperties.CACHE_SHARED_DEFAULT, "false");
properties.put(PersistenceUnitProperties.SESSION_NAME, "xxx");
// create the proper EntityManager to connect to a database decided at runtime
EntityManager em = Persistence.createEntityManagerFactory("PU1", properties).createEntityManager();
// query or update the DB
em.persist(entity);
em.createQuery(...).executeUpdate();
When deployed in an EJB container (e.g. WebLogic), with a proper TransactionAttribute (e.g. TransactionAttributeType.REQUIRED), the container takes care of the transaction start/end/rollback.
Now, I am trying to migrate this application to Spring Boot.
The problem I encounter is that no transaction is started even after I annotate the method with @Transactional(propagation = Propagation.REQUIRED).
The Spring application is packaged as an executable JAR file and runs with embedded Tomcat.
When I try to execute the update APIs, e.g. EntityManager.persist(..), EclipseLink always complains:
javax.persistence.TransactionRequiredException: 'No transaction is currently active'
Sample code below:
// for data persistence
@Service
class DynamicServiceImpl implements DynamicService {

    // attempt to start a transaction
    @Transactional(propagation = Propagation.REQUIRED)
    public void saveData(String dbJndi, EntityA entity) {
        // this returns false - no transaction started
        TransactionSynchronizationManager.isActualTransactionActive();

        // create an EntityManager based on the input JNDI name to dynamically
        // determine which DB to save the data to
        EntityManager em = createEm(dbJndi);

        // save the data
        em.persist(entity);
    }
}

// restful service
@RestController
class MyRestController {

    @Autowired
    DynamicService service;

    @RequestMapping(value = "/saveRecord", method = RequestMethod.POST)
    public @ResponseBody String saveRecord() {
        // save data
        service.saveData(...)
    }
}

// startup application
@SpringBootApplication
class TestApp {
    public static void main(String[] args) {
        SpringApplication.run(TestApp.class, args);
    }
}
persistence.xml
-------------------------------------------
<persistence-unit name="PU1" transaction-type="JTA">
    <properties>
        <!-- comment for spring to handle transaction??? -->
        <!-- property name="eclipselink.target-server" value="WebLogic_10"/ -->
    </properties>
</persistence-unit>
-------------------------------------------
application.properties (just 3 lines of config)
-------------------------------------------
spring.jta.enabled=true
spring.jta.log-dir=spring-test # Transaction logs directory.
spring.jta.transaction-manager-id=spring-test
-------------------------------------------
My usage pattern does not follow the most typical use cases (e.g. with a known number of DBs - Spring + JPA + multiple persistence units: Injecting EntityManager).
Can anybody give me advice on how to solve this issue?
Is there anybody who has ever hit this situation where the DBs are not known at design time?
Thank you.
I finally got it to work with:
Enabling Tomcat JNDI and creating the datasource JNDI entries for each DS programmatically
Adding the transaction dependencies:
com.atomikos:transactions-eclipselink:3.9.3 (my project uses EclipseLink instead of Hibernate)
org.springframework.boot:spring-boot-starter-jta-atomikos
org.springframework:spring-tx
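For the JNDI step, a minimal sketch of enabling naming and registering a datasource programmatically might look like this (assuming Spring Boot 1.x with its embedded Tomcat 8 API; the resource name "jdbc/myDs" and the JDBC driver/url properties are placeholders):
import javax.sql.DataSource;
import org.apache.catalina.Context;
import org.apache.catalina.startup.Tomcat;
import org.apache.tomcat.util.descriptor.web.ContextResource;
import org.springframework.boot.context.embedded.tomcat.TomcatEmbeddedServletContainer;
import org.springframework.boot.context.embedded.tomcat.TomcatEmbeddedServletContainerFactory;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class TomcatJndiConfig {

    @Bean
    public TomcatEmbeddedServletContainerFactory tomcatFactory() {
        return new TomcatEmbeddedServletContainerFactory() {

            @Override
            protected TomcatEmbeddedServletContainer getTomcatEmbeddedServletContainer(Tomcat tomcat) {
                // JNDI is disabled by default in embedded Tomcat
                tomcat.enableNaming();
                return super.getTomcatEmbeddedServletContainer(tomcat);
            }

            @Override
            protected void postProcessContext(Context context) {
                // register a JNDI datasource; name, driver and url are placeholders
                ContextResource resource = new ContextResource();
                resource.setName("jdbc/myDs");
                resource.setType(DataSource.class.getName());
                resource.setProperty("driverClassName", "org.h2.Driver");
                resource.setProperty("url", "jdbc:h2:mem:testdb");
                context.getNamingResources().addResource(resource);
            }
        };
    }
}
The JNDI name passed to PersistenceUnitProperties.JTA_DATASOURCE would then resolve under java:comp/env/jdbc/myDs (again, a placeholder name).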
You have pretty much answered the question yourself: "When deployed in an EJB container (e.g. WebLogic), with a proper TransactionAttribute (e.g. TransactionAttributeType.REQUIRED), the container will take care of the transaction start/end/rollback".
WebLogic is compliant with the Java Enterprise Edition specification, which is probably why it worked before, but now you are using Tomcat (in embedded mode), which is NOT.
So you simply cannot do what you are trying to do.
This statement in your persistence.xml file:
<persistence-unit name="PU1" transaction-type="JTA">
requires an Enterprise Server (WebLogic, Glassfish, JBoss etc.)
With Tomcat you can only do this:
<persistence-unit name="PU1" transaction-type="RESOURCE_LOCAL">
And you need to handle transactions yourself:
myEntityManager.getTransaction().begin();
... // do your transaction stuff
myEntityManager.getTransaction().commit();

Unable to create Spring AOP aspect on Spring Data JPA Repository when CGLIB proxies are used

I'm trying to apply an aspect to a Spring Data JPA repository, and it works fine with the default Spring AOP config
@EnableAspectJAutoProxy
(when Spring uses standard Java interface-based proxies).
However, when I switch to CGLIB proxies:
@EnableAspectJAutoProxy(proxyTargetClass = true)
I get this exception:
Caused by: org.springframework.aop.framework.AopConfigException: Could not generate CGLIB subclass of class [class com.sun.proxy.$Proxy59]:
It looks like Spring tries to create a CGLIB proxy over the repository, which is already a proxy (generated by Spring Data), and fails.
Any ideas how to make it work?
My Spring Data Repository:
import org.springframework.data.jpa.repository.JpaRepository;
public interface DummyEntityRepository extends JpaRepository<DummyEntity, Integer> {
}
and the aspect:
@Aspect
public class DummyCrudRepositoryAspect {

    @After("this(org.springframework.data.repository.CrudRepository)")
    public void onCrud(JoinPoint pjp) {
        System.out.println("I'm there!");
    }
}

Autowiring issues with a class in Spring data Mongo repository

For a variety of reasons, I ended up using Spring Boot 1.2.0 RC2.
So a Spring Data Mongo application that worked fine with Spring Boot 1.1.8 is now having issues. No code was changed except for the bump to Spring Boot 1.2.0 RC2; this was due to the snapshot version of Spring Cloud moving to that Spring Boot version.
The repository class is as follows:
@Repository
public interface OAuth2AccessTokenRepository extends MongoRepository<OAuth2AuthenticationAccessToken, String> {
    public OAuth2AuthenticationAccessToken findByTokenId(String tokenId);
    public OAuth2AuthenticationAccessToken findByRefreshToken(String refreshToken);
    public OAuth2AuthenticationAccessToken findByAuthenticationId(String authenticationId);
    public List<OAuth2AuthenticationAccessToken> findByClientIdAndUserName(String clientId, String userName);
    public List<OAuth2AuthenticationAccessToken> findByClientId(String clientId);
}
This worked quite well before the version bump, and now I see this in the log:
19:04:35.510 [main] DEBUG o.s.c.a.ClassPathBeanDefinitionScanner - Ignored because not a concrete top-level class: file [/Users/larrymitchell/rpilprojects/corerpilservicescomponents/channelMap/target/classes/com/cisco/services/rpil/mongo/repository/oauth2/OAuth2AccessTokenRepository.class]
I do have another Mongo repository that is recognized, but it was defined as a class implementation:
@Component
public class ChannelMapRepository { ... }
This one is recognized (I defined it as an implementation class as a workaround for another problem I had) and seems to work fine.
19:04:35.513 [main] DEBUG o.s.c.a.ClassPathBeanDefinitionScanner - Identified candidate component class: file [/Users/larrymitchell/rpilprojects/corerpilservicescomponents/channelMap/target/classes/com/cisco/services/rpil/services/Microservice.class]
Anyone have an idea why? I looked up the various reasons why component scanning might not work, and nothing matches my issue.
Try removing the @Repository annotation? Worked for me. This was reported as an issue on GitHub as well.
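In other words, a sketch of the same repository declared without the stereotype annotation (Spring Data registers the repository bean itself, so @Repository is not needed on the interface):
public interface OAuth2AccessTokenRepository extends MongoRepository<OAuth2AuthenticationAccessToken, String> {
    OAuth2AuthenticationAccessToken findByTokenId(String tokenId);
    // ... remaining finder methods unchanged
}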