JPAMapStore missing in latest version of Hazelcast-Spring - spring-data-jpa

JPAMapStore is missing in the latest version of Hazelcast-Spring.
It is available in hazelcast-spring-3.2.4.jar, but I am not able to find it in the latest version of Hazelcast-Spring.
I am trying to use a JPA-based MapStore for my Spring Boot application.

It was deleted a long time ago and moved to the Hazelcast Code Samples. It's not strictly related to core Hazelcast, so you can write it on your own or just copy it from the Code Samples.
Some other related resources:
Hazelcast JPA Code Sample
Hazelcast JPA Example
Hazelcast Spring Data Module
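If you do end up writing it yourself, here is a minimal sketch of a JPA-backed MapStore (the Person entity, its Long key, and the injected EntityManagerFactory are illustrative assumptions; in Hazelcast 3.x the interface lives in com.hazelcast.core):

import com.hazelcast.core.MapStore;

import javax.persistence.EntityManager;
import javax.persistence.EntityManagerFactory;

import java.util.Collection;
import java.util.HashMap;
import java.util.Map;

public class PersonJpaMapStore implements MapStore<Long, Person> {

    private final EntityManagerFactory emf;

    public PersonJpaMapStore(EntityManagerFactory emf) {
        this.emf = emf;
    }

    @Override
    public Person load(Long key) {
        EntityManager em = emf.createEntityManager();
        try {
            return em.find(Person.class, key);
        } finally {
            em.close();
        }
    }

    @Override
    public Map<Long, Person> loadAll(Collection<Long> keys) {
        Map<Long, Person> result = new HashMap<>();
        for (Long key : keys) {
            Person value = load(key);
            if (value != null) {
                result.put(key, value);
            }
        }
        return result;
    }

    @Override
    public Iterable<Long> loadAllKeys() {
        // Returning null tells Hazelcast not to pre-load any keys.
        return null;
    }

    @Override
    public void store(Long key, Person value) {
        EntityManager em = emf.createEntityManager();
        try {
            em.getTransaction().begin();
            em.merge(value);
            em.getTransaction().commit();
        } finally {
            em.close();
        }
    }

    @Override
    public void storeAll(Map<Long, Person> map) {
        map.forEach(this::store);
    }

    @Override
    public void delete(Long key) {
        EntityManager em = emf.createEntityManager();
        try {
            em.getTransaction().begin();
            Person managed = em.find(Person.class, key);
            if (managed != null) {
                em.remove(managed);
            }
            em.getTransaction().commit();
        } finally {
            em.close();
        }
    }

    @Override
    public void deleteAll(Collection<Long> keys) {
        keys.forEach(this::delete);
    }
}

The store can then be attached to a map through a MapStoreConfig entry in the Hazelcast configuration, either programmatically or in hazelcast.xml.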

Related

Need to upgrade to Spring Framework 5.x.x while remaining on Spring Data GemFire 1.3.4; is there any way to ensure a smooth transition?

Our project used to depend on Spring 3.x.x and 4.x.x, but we have to upgrade to 5.3.18 because of vulnerabilities our security department requires us to handle.
We depend heavily on GemFire 8.2.5 and cannot upgrade it for many reasons. We use GemFire with spring-data-gemfire 1.3.4, which depends on Spring 3.x.x (4.x.x is also compatible).
When we upgrade to Spring 5.x.x, compatibility problems occur; most importantly, BeanFactoryLocator, which we rely on, has been removed in Spring 5.x.x. There may be other compatibility problems as well.
So I want to ask whether there is any way to keep Spring 5.x.x and spring-data-gemfire 1.3.4 together and still solve the incompatibilities, for example via some bridge dependency that would allow a smooth Spring transition.
In short... NO.
There is no simple way to reconcile the differences between Spring Data GemFire 1.3.4 (which has long been out of Spring OSS support, BTW), which was based on Spring Framework 3.2.x (specifically 3.2.8.RELEASE), and the now-current Spring Framework 5.x.
This is a significant version gap, across two major generations no less, and many things have changed in between and since then.
Spring Framework 3.x and 4.x have also long been out of OSS as well as commercial support.
The specific problem you mention involving the Spring Framework's BeanFactoryLocator was addressed in SGF-587 over 5 years ago. Spring (Data) major.minor versions are only supported for ~1.5 years; an entire major generation is only supported for roughly 3-4 years.

Evaluation context not correctly set for @Document in Spring 5.x / Spring Boot 2.1.6

I am writing a Spring Boot application with the latest Spring Boot 2.1.6 release. There was an issue earlier, discussed both on SO and in Spring's bug tracker, where the Spring EL context did not have access to beans.
This was supposed to have been resolved in the Spring 4.x releases. However, I am facing the same problem.
References:
SO thread 1
SO thread 2
Spring Data Bug 1043
Spring Data Bug 525
Spring Data Bug 1874
I have also tried most of the solutions that were suggested before the actual fix was put in.
Right now my code has the annotation like this:
@Document("#{mongoCollectionNameResolver.getCollectionName('BANK')}")
//@Document("BANK")
public class Bank {
}
I have verified that the bean is getting correctly created with the name mentioned in the expression.
I just wanted to ask the community whether there is anything more I am supposed to do for Spring 5.x before I reopen the bug or open a new one with Spring Data MongoDB.
When referring to beans by name in SpEL, the name needs to be prefixed with # (see the Spring Reference Guide). That means your SpEL expression is wrong.
It should be #{#mongoCollectionNameResolver.getCollectionName('BANK')}.
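Applied to the class from the question, the corrected mapping would look roughly like this (a sketch; it assumes the mongoCollectionNameResolver bean from the question is visible to the SpEL evaluation context):

import org.springframework.data.mongodb.core.mapping.Document;

@Document("#{#mongoCollectionNameResolver.getCollectionName('BANK')}")
public class Bank {
    // fields omitted
}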

How to set up Apache Sling to use a relational DB

I am on Sling 11, which uses Jackrabbit Oak as the content repository. I was wondering how to set up Sling to store the JCR repository in an RDBMS (DB2, to be specific).
I found this link on Jackrabbit persistence, but it looks like it does not apply to Oak, and the Oak documentation is mostly about MongoDB.
I also found an implementation of a Cassandra Resource Provider, although that seems designed to access specific paths mapped to Cassandra without using Oak.
Thanks,
Answering here, but credit goes to the Sling users mailing list.
Package the DB driver in an OSGi bundle
Download Sling's starter project
In boot.txt, add a new run mode (in my case oak_db2):
[settings]
sling.run.mode.install.options=oak_tar,oak_mongo,oak_db2
Download Sling's datasource project and compile it.
In oak.txt, configure the run mode (this will load the bundles for you in Felix):
[artifacts startLevel=15 runModes=oak_db2]
com.h2database/h2-mvstore/1.4.196
com.ibm.db2/jcc4/11.1
org.apache.sling/org.apache.sling.datasource/1.0.3-SNAPSHOT
And set up the services that will manage persistence:
[configurations runModes=oak_db2]
org.apache.jackrabbit.oak.plugins.document.DocumentNodeStoreService
documentStoreType="RDB"
org.apache.sling.datasource.DataSourceFactory
url="jdbc:db2://10.1.2.3:50000/sling"
driverClassName="com.ibm.db2.jcc.DB2Driver"
username="****"
password="****"
datasource.name="oak"
Create a database named 'sling'.
Run with: java -jar -Dsling.run.modes=oak_db2 sling-starter.jar

Configuring Spring Cloud Stream in Camden.SR5 with Spring boot 1.5.1

First off, thanks to the Spring team for all their work pushing this work forward!
Now that Camden.SR5 is official, I have some questions about how to correctly configure the Spring Cloud Stream Kafka binder when using Spring Boot 1.5.1.
Spring Boot 1.5.1 has auto-configuration for Kafka, and those configuration options seem to be redundant with the ones in the Spring Cloud Stream Kafka binder.
Do we use the core Spring Boot properties (spring.kafka.*) or do we use the binder properties (spring.cloud.stream.kafka.binder.*)?
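For illustration, here is a minimal sketch contrasting the two namespaces (the broker address is just a placeholder):

# Spring Boot's own Kafka auto-configuration namespace
spring.kafka.bootstrap-servers=localhost:9092

# Spring Cloud Stream Kafka binder namespace
spring.cloud.stream.kafka.binder.brokers=localhost
spring.cloud.stream.kafka.binder.defaultBrokerPort=9092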
I did find this issue, and I am curious whether this work will be included in the next Camden release:
https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/73
Additionally, I saw this issue reported on Stack Overflow, and I believe it will also be an issue with Camden.SR5:
Failed to start bean 'inputBindingLifecycle' when using spring-boot:1.5.1 and spring-cloud-stream
Thanks
Supporting the Boot 1.5 configuration options is an issue in progress. Since dedicated 1.5 support is coming only with the Spring Cloud Stream Chelsea release train (which is included in the Dalston release of Spring Cloud), it will be available only there.
Also, when using Spring Cloud Camden with Boot 1.5, you will need to override the Kafka dependencies as described in "Failed to start bean 'inputBindingLifecycle' when using spring-boot:1.5.1 and spring-cloud-stream". This will be avoided in future versions of Spring Cloud Stream (and Spring Cloud), but only starting with the Chelsea release train of Spring Cloud Stream (and the Dalston release of Spring Cloud); see https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/88 for details.

Using Cassandra and MySQL together with JPA in Play framework

I would like to use the Cassandra NoSQL server together with an RDBMS in Play 2.3.0!
I started to build it up using Kundera, following this tutorial:
http://recipes4geeks.com/2013/07/06/play-nosql-building-nosql-applications-with-play-framework/
It works fine, and I can use it with a pure MySQL JDBC connection; it also works if I use JDBC for the Cassandra connection and JPA for MySQL...
...but the goal is to use a persistence framework, without handling basic JDBC stuff!
It looks like this problem is mentioned in the link above:
Caution: javaJdbc app dependency downloads hibernate-entitymanager jar file that interferes with Kundera. Make sure you remove this app dependency which is by default present.
If I remove hibernate-entitymanager from the dependencies, the project runs, but when it comes to calling the Persistence.createEntityManagerFactory("mysql") method, Play says "No Persistence provider...", as expected.
If I keep hibernate-entitymanager in the dependency list, beside the Kundera client, the Play server simply shuts down.
Is there a way to make it work, or do I have to replace Kundera?
DataNucleus JPA supports persistence to all RDBMSs (via JDBC), as well as to Cassandra, MongoDB, Neo4j, LDAP, HBase and many others. Its Cassandra support covers the latest versions and uses the native Cassandra driver (not JDBC), so there is no chance of conflicts like the one above. You can read up on it at
http://www.datanucleus.org
Caution: javaJdbc app dependency downloads hibernate-entitymanager jar file that interferes with Kundera. Make sure you remove this app dependency which is by default present.
This should not be an issue with the latest Kundera releases. You can also email a sample project to kundera@impetus.co.in if you are looking for quick support.
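For reference, a minimal sketch of bootstrapping the two persistence units side by side (the "cassandra_pu" unit name and the entity handling are illustrative assumptions; "mysql" is the unit name used in the question, and both units must be declared in persistence.xml with their respective providers):

import javax.persistence.EntityManager;
import javax.persistence.EntityManagerFactory;
import javax.persistence.Persistence;

public class MultiStorePersistence {

    public static void main(String[] args) {
        // Kundera handles the Cassandra-backed unit via its own provider.
        EntityManagerFactory cassandraEmf =
                Persistence.createEntityManagerFactory("cassandra_pu");

        // The relational unit is handled by whichever JPA provider
        // persistence.xml declares for it (e.g. Hibernate or DataNucleus).
        EntityManagerFactory mysqlEmf =
                Persistence.createEntityManagerFactory("mysql");

        EntityManager cassandraEm = cassandraEmf.createEntityManager();
        EntityManager mysqlEm = mysqlEmf.createEntityManager();

        // ... persist each entity through the EntityManager that matches its store ...

        cassandraEm.close();
        mysqlEm.close();
        cassandraEmf.close();
        mysqlEmf.close();
    }
}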