I am developing a web app that uses DataNucleus as the JPA provider and Neo4j as the NoSQL database.
My dependencies are
<dependency>
<groupId>org.datanucleus</groupId>
<artifactId>datanucleus-core</artifactId>
<version>[5.0.0-m1, 5.9)</version>
</dependency>
<dependency>
<groupId>org.datanucleus</groupId>
<artifactId>datanucleus-api-jpa</artifactId>
<version>[5.0.0-m1, 5.9)</version>
</dependency>
<dependency>
<groupId>org.datanucleus</groupId>
<artifactId>javax.persistence</artifactId>
<version>2.1.0</version>
</dependency>
<dependency>
<groupId>log4j</groupId>
<artifactId>log4j</artifactId>
<version>[1.2, 1.3)</version>
</dependency>
<dependency>
<groupId>org.datanucleus</groupId>
<artifactId>datanucleus-neo4j</artifactId>
<version>[5.0.0-m1, 5.9)</version>
</dependency>
<dependency>
<groupId>org.neo4j</groupId>
<artifactId>neo4j</artifactId>
<version>2.3.0</version>
</dependency>
My persistence.xml looks like
<persistence-unit name="neo4j">
<provider>org.datanucleus.api.jpa.PersistenceProviderImpl</provider>
<class>entities.Person</class>
<class>entities.Car</class>
<exclude-unlisted-classes/>
<properties>
<property name="javax.persistence.jdbc.url" value="neo4j:C:\Users\phe\Documents\Neo4j\sampledatanucleus"/>
<property name="datanucleus.storeManagerType" value="neo4j"/>
</properties>
</persistence-unit>
I am persisting the entities like this
Person person = new Person(personName, personAge);
Car car = new Car(carName);
car.setOwner(person);
person.getCars().add(car);
em.persist(person);
When persisting the entities I get an exception:
Caused by: org.neo4j.kernel.StoreLockException: Unable to obtain lock on store lock file: C:\Users\phe\Documents\Neo4j\sampledatanucleus\store_lock. Please ensure no other process is using this database, and that the directory is writable (required even for read-only access)
at org.neo4j.kernel.StoreLocker.storeLockException(StoreLocker.java:93)
at org.neo4j.kernel.StoreLocker.checkLock(StoreLocker.java:85)
at org.neo4j.kernel.StoreLockerLifecycleAdapter.start(StoreLockerLifecycleAdapter.java:44)
at org.neo4j.kernel.lifecycle.LifeSupport$LifecycleInstance.start(LifeSupport.java:528)
... 153 more
This is driving me crazy. The directory I defined in the persistence.xml is empty when I start the application, so it cannot be locked by another process. DataNucleus then creates the database, and I can access it with the Neo4j Browser. But persisting does not work.
Any ideas?
EDIT:
This is the class I use for querying
@Stateless
public class Manager {
@PersistenceContext(unitName = "neo4j")
private EntityManager em;
public List<Person> queryCache() {
String statement = "Select o from Person o";
Query query = em.createQuery(statement, Person.class);
List<Person> list = query.getResultList();
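// note: em is container-managed (@PersistenceContext), so the em.close() below is not allowed by the JPA spec; the container owns its lifecycle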
em.close();
return list;
}
public void save(String personName, Double personAge, String carName) {
Person person = new Person(personName, personAge);
Car car = new Car(carName);
car.setOwner(person);
person.getCars().add(car);
em.persist(person);
}
}
These two methods are called by the controller when the corresponding buttons in the view are clicked.
I'm using Shiro 1.7.1 and Guice 4.2.3; below is a snippet of my POM file,
<properties>
<shiro.version>1.7.1</shiro.version>
<guice.version>4.2.3</guice.version>
</properties>
<dependencies>
<dependency>
<groupId>org.apache.shiro</groupId>
<artifactId>shiro-web</artifactId>
<version>${shiro.version}</version>
</dependency>
<dependency>
<groupId>org.apache.shiro</groupId>
<artifactId>shiro-guice</artifactId>
<version>${shiro.version}</version>
</dependency>
<dependency>
<groupId>org.apache.shiro</groupId>
<artifactId>shiro-ehcache</artifactId>
<version>${shiro.version}</version>
</dependency>
<dependency>
<groupId>com.google.inject</groupId>
<artifactId>guice</artifactId>
<version>${guice.version}</version>
</dependency>
<dependency>
<groupId>com.google.inject.extensions</groupId>
<artifactId>guice-servlet</artifactId>
<version>${guice.version}</version>
</dependency>
<dependency>
<groupId>javax.servlet</groupId>
<artifactId>javax.servlet-api</artifactId>
<version>3.1.0</version>
<scope>provided</scope>
</dependency>
...
</dependencies>
I'm customizing Shiro's LogoutFilter by creating a new class,
package com.myshiro.myshiro;
import org.apache.shiro.web.filter.authc.LogoutFilter;
public class MyLogoutFilter extends LogoutFilter {
}
and bind org.apache.shiro.web.filter.authc.LogoutFilter to the above customized MyLogoutFilter,
package com.myshiro.myshiro;
public class MyShiroModule extends ShiroWebModule {
public MyShiroModule(ServletContext servletContext) {
super(servletContext);
}
protected void configureShiroWeb() {
try {
bindRealm().toConstructor(IniRealm.class.getConstructor(Ini.class));
} catch (NoSuchMethodException e) {
addError(e);
}
bind(org.apache.shiro.web.filter.authc.LogoutFilter.class).to(MyLogoutFilter.class).in(Scopes.SINGLETON);
addFilterChain("/logout", LOGOUT);
}
}
and I try to create the Guice injector in the unit test class like this,
public class MyShiroModuleTest {
@Mock
private ServletContext servletContext;
@Test
public void test() {
Guice.createInjector(new MyShiroModule(servletContext));
}
}
and it failed with the following errors,
1) Binding to null instances is not allowed. Use toProvider(Providers.of(null)) if this is your intended behaviour.
at org.apache.shiro.guice.web.ShiroWebModule.configureShiro(ShiroWebModule.java:136)
2) A binding to org.apache.shiro.web.filter.authc.LogoutFilter was already configured at com.myshiro.myshiro.MyShiroModule.configureShiroWeb(MyShiroModule.java:25).
at org.apache.shiro.guice.web.ShiroWebModule.setupFilterChainConfigs(ShiroWebModule.java:209)
The second error above explains that the binding to org.apache.shiro.web.filter.authc.LogoutFilter is already configured in both MyShiroModule and ShiroWebModule. Do you have any idea how to bind to my customized LogoutFilter?
This issue did not happen in Shiro 1.3.x.
My sample project is available here; you can see the error simply by running mvn clean install.
Sounds like your problem is related to Guice 4 rather than to Shiro. Instead of re-using the same binding key, define a new one, something like:
bind(MyLogoutFilter.class).in(Scopes.SINGLETON);
addFilterChain("/logout", Key.get(MyLogoutFilter.class));
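For completeness, here is a sketch of the full module with that change applied (untested; it simply merges the two lines above into the module from the question):
package com.myshiro.myshiro;
import javax.servlet.ServletContext;
import org.apache.shiro.config.Ini;
import org.apache.shiro.guice.web.ShiroWebModule;
import org.apache.shiro.realm.text.IniRealm;
import com.google.inject.Key;
import com.google.inject.Scopes;
public class MyShiroModule extends ShiroWebModule {
public MyShiroModule(ServletContext servletContext) {
super(servletContext);
}
protected void configureShiroWeb() {
try {
bindRealm().toConstructor(IniRealm.class.getConstructor(Ini.class));
} catch (NoSuchMethodException e) {
addError(e);
}
// bind the custom filter under its own key instead of re-binding LogoutFilter
bind(MyLogoutFilter.class).in(Scopes.SINGLETON);
// reference the custom filter's key in the chain instead of the built-in LOGOUT key
addFilterChain("/logout", Key.get(MyLogoutFilter.class));
}
}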
I'm trying to fetch a list of objects from the database using Spring Boot WebFlux with the Postgres R2DBC driver, but I get an error saying:
value ignored org.springframework.transaction.reactive.TransactionContextManager$NoTransactionInContextException: No transaction in context Context1{reactor.onNextError.localStrategy=reactor.core.publisher.OnNextFailureStrategy$ResumeStrategy#7c18c255}
It seems all DatabaseClient operations need to be wrapped in a transaction.
I tried different combinations of the Spring Data and R2DBC dependency versions, but nothing really worked.
Version:
<spring-boot.version>2.2.0.RC1</spring-boot.version>
<spring-data-r2dbc.version>1.0.0.BUILD-SNAPSHOT</spring-data-r2dbc.version>
<r2dbc-releasetrain.version>Arabba-M8</r2dbc-releasetrain.version>
Dependencies:
<dependency>
<groupId>org.springframework.data</groupId>
<artifactId>spring-data-r2dbc</artifactId>
<version>${spring-data-r2dbc.version}</version>
</dependency>
<dependency>
<groupId>io.r2dbc</groupId>
<artifactId>r2dbc-postgresql</artifactId>
</dependency>
<dependency>
<groupId>org.postgresql</groupId>
<artifactId>postgresql</artifactId>
</dependency>
<dependencyManagement>
<dependencies>
<dependency>
<groupId>io.r2dbc</groupId>
<artifactId>r2dbc-bom</artifactId>
<version>${r2dbc-releasetrain.version}</version>
<type>pom</type>
<scope>import</scope>
</dependency>
</dependencies>
</dependencyManagement>
fun findAll(): Flux<Game> {
val games = client
.select()
.from(Game::class.java)
.fetch()
.all()
.onErrorContinue{ throwable, o -> System.out.println("value ignored $throwable $o") }
games.subscribe()
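// returning Flux.empty() after the manual subscribe() above is only to surface the error; normally the games publisher itself would be returned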
return Flux.empty()
}
#Table("game")
data class Game(#Id val id: UUID = UUID.randomUUID(),
#Column("guess") val guess: Int = Random.nextInt(500))
Github repo: https://github.com/odfsoft/spring-boot-guess-game/tree/r2dbc-issue
I expect read operations not to require @Transactional, or to be able to run the query without wrapping it in a transactional context manually.
UPDATE:
After a few tries with multiple versions, I managed to find a combination that works:
<spring-data-r2dbc.version>1.0.0.BUILD-SNAPSHOT</spring-data-r2dbc.version>
<r2dbc-postgres.version>0.8.0.RC2</r2dbc-postgres.version>
Dependencies:
<dependency>
<groupId>org.springframework.data</groupId>
<artifactId>spring-data-r2dbc</artifactId>
<version>${spring-data-r2dbc.version}</version>
</dependency>
<dependency>
<groupId>io.r2dbc</groupId>
<artifactId>r2dbc-postgresql</artifactId>
<version>${r2dbc-postgres.version}</version>
</dependency>
<dependencyManagement>
<dependencies>
<dependency>
<groupId>io.r2dbc</groupId>
<artifactId>r2dbc-bom</artifactId>
<version>Arabba-RC2</version>
<type>pom</type>
<scope>import</scope>
</dependency>
</dependencies>
</dependencyManagement>
It seems the version notation for r2dbc went from 1.0.0.M7 to 0.8.x, as explained in the following:
https://r2dbc.io/2019/05/13/r2dbc-0-8-milestone-8-released
https://r2dbc.io/2019/10/07/r2dbc-0-8-rc2-released
But after updating to the latest version, a new problem appeared: a transaction is required to run queries, as shown below.
Updated configuration:
@Configuration
class PostgresConfig : AbstractR2dbcConfiguration() {
@Bean
override fun connectionFactory(): ConnectionFactory {
return PostgresqlConnectionFactory(
PostgresqlConnectionConfiguration.builder()
.host("localhost")
.port(5432)
.username("root")
.password("secret")
.database("game")
.build())
}
@Bean
fun reactiveTransactionManager(connectionFactory: ConnectionFactory): ReactiveTransactionManager {
return R2dbcTransactionManager(connectionFactory)
}
@Bean
fun transactionalOperator(reactiveTransactionManager: ReactiveTransactionManager) =
TransactionalOperator.create(reactiveTransactionManager)
}
Query:
fun findAll(): Flux<Game> {
return client
.execute("select id, guess from game")
.`as`(Game::class.java)
.fetch()
.all()
.`as`(to::transactional)
.onErrorContinue{ throwable, o -> System.out.println("value ignored $throwable $o") }
.log()
}
In the query above, to is the injected TransactionalOperator bean from the configuration. Disclaimer: this is not meant to be used in production; these artifacts are still pre-GA.
I'm trying to use Spring Boot Admin Server, but my application is not listed in the dashboard.
Here is what I did. In my pom.xml I added the following dependencies:
<!-- https://mvnrepository.com/artifact/de.codecentric/spring-boot-admin-server -->
<dependency>
<groupId>de.codecentric</groupId>
<artifactId>spring-boot-admin-server</artifactId>
<version>2.0.2</version>
</dependency>
<!-- https://mvnrepository.com/artifact/de.codecentric/spring-boot-admin-server-ui -->
<dependency>
<groupId>de.codecentric</groupId>
<artifactId>spring-boot-admin-server-ui</artifactId>
<version>2.0.2</version>
</dependency>
<!-- https://mvnrepository.com/artifact/de.codecentric/spring-boot-admin-starter-client -->
<dependency>
<groupId>de.codecentric</groupId>
<artifactId>spring-boot-admin-starter-client</artifactId>
<version>2.0.2</version>
</dependency>
In my ServidorApplication class:
@Configuration
@EnableAdminServer
@SpringBootApplication
@EnableAutoConfiguration(exclude = { JacksonAutoConfiguration.class })
public class ServidorApplication {
public static void main(String[] args) {
SpringApplication.run(ServidorApplication.class, args);
System.err.println("Sem parametros de inicializacao");
}
GsonHttpMessageConverter gsonHttpMessageConverter() {
return new GsonHttpMessageConverter(new Gson());
}
}
And in my application.properties:
spring.boot.admin.url=http://localhost:8084
management.security.enabled=false
But when I open the Spring Boot Admin dashboard, no application is listed.
(Screenshot of the empty dashboard.)
Can someone help me?
Instead of spring.boot.admin.url, use spring.boot.admin.client.url:
spring.boot.admin.client.url=http://localhost:8084
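For reference, a minimal client-side application.properties for Spring Boot Admin 2.x might look like the sketch below. The actuator lines are an assumption about your setup: in Spring Boot 2.x, most actuator endpoints are not exposed over HTTP by default, and Spring Boot Admin needs them to show details.
spring.boot.admin.client.url=http://localhost:8084
# expose the actuator endpoints that Spring Boot Admin polls (assumption: Spring Boot 2.x defaults)
management.endpoints.web.exposure.include=*
management.endpoint.health.show-details=always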
I have two applications that talk to each other using a REST API.
I would like to know if I can use Apache Camel as a proxy that could "persist" the API calls, for example storing them as messages in ActiveMQ, and then later route the requests to the actual API endpoint.
Practically, I would like to use Apache Camel to "enhance" the API endpoints by adding persistence, throttling of requests, and so on.
What component do you suggest to use?
You can always try to bridge your HTTP requests into a queue while making the calling thread wait, by forcing the exchange pattern to InOut: the HTTP client then blocks until the backend's reply comes back through the queue.
See this example:
import org.apache.activemq.broker.BrokerService;
import org.apache.camel.LoggingLevel;
import org.apache.camel.builder.RouteBuilder;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
public class Main {
private static final Logger logger = LoggerFactory.getLogger(Main.class);
public static void main(String[] args) throws Exception {
org.apache.camel.main.Main main = new org.apache.camel.main.Main();
main.addRouteBuilder(new SimpleRouteBuilder());
logger.info("Next call is blocking, ctrl-c to exit\n");
main.run();
}
}
class SimpleRouteBuilder extends RouteBuilder {
private static final Logger logger = LoggerFactory.getLogger(SimpleRouteBuilder.class);
public void configure() throws Exception {
// launching an activemq in background
final BrokerService broker = new BrokerService();
broker.setBrokerName("activemq");
broker.addConnector("tcp://localhost:61616");
Runnable runnable = () -> {
try {
broker.start();
} catch (Exception e) {
e.printStackTrace();
}
};
new Thread(runnable).start(); // start the broker on a background thread, as the comment above intends
// receiving http request but queuing them
from("jetty:http://127.0.0.1:10000/input")
.log(LoggingLevel.INFO, logger, "received request")
.to("activemq:queue:persist?exchangePattern=InOut"); // InOut has to be forced with JMS
// dequeuing and calling backend
from("activemq:queue:persist")
.log(LoggingLevel.INFO, logger,"requesting to destination")
.removeHeaders("CamelHttp*")
.setHeader("Cache-Control",constant("private, max-age=0,no-store"))
.to("jetty:http://perdu.com?httpMethod=GET");
}
}
To test it, send a GET request to http://127.0.0.1:10000/input; the response comes from the backend call at the end of the second route. If you are using Maven, here is the pom.xml:
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<groupId>be.jschoreels.camel</groupId>
<artifactId>camel-simple</artifactId>
<version>1.0-SNAPSHOT</version>
<build>
<plugins>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-compiler-plugin</artifactId>
<configuration>
<source>1.8</source>
<target>1.8</target>
</configuration>
</plugin>
</plugins>
</build>
<dependencies>
<dependency>
<groupId>org.apache.camel</groupId>
<artifactId>camel-core</artifactId>
<version>2.19.2</version>
</dependency>
<dependency>
<groupId>org.apache.camel</groupId>
<artifactId>camel-jms</artifactId>
<version>2.19.2</version>
</dependency>
<dependency>
<groupId>org.apache.camel</groupId>
<artifactId>camel-jetty</artifactId>
<version>2.19.2</version>
</dependency>
<dependency>
<groupId>org.apache.activemq</groupId>
<artifactId>activemq-camel</artifactId>
<version>5.15.3</version>
</dependency>
<dependency>
<groupId>org.apache.activemq</groupId>
<artifactId>activemq-all</artifactId>
<version>5.15.3</version>
</dependency>
<!-- https://mvnrepository.com/artifact/org.apache.activemq/activemq-kahadb-store -->
<dependency>
<groupId>org.apache.activemq</groupId>
<artifactId>activemq-kahadb-store</artifactId>
<version>5.15.3</version>
</dependency>
<dependency>
<groupId>org.slf4j</groupId>
<artifactId>slf4j-api</artifactId>
<version>1.7.25</version>
</dependency>
<dependency>
<groupId>org.slf4j</groupId>
<artifactId>slf4j-simple</artifactId>
<version>1.7.25</version>
</dependency>
<dependency>
<groupId>log4j</groupId>
<artifactId>log4j</artifactId>
<version>1.2.17</version>
</dependency>
</dependencies>
</project>
I know Druid has Scala/PHP clients, but I'm using Java and I don't want to query Druid over HTTP directly. Is there any Java Druid client available?
Please check out Druidry. It's a Java-based utility library for querying Druid.
Druidry provides support for building query JSON and querying the broker node through a Java API. It currently does not support a few complex JSON constructs, and it does not support ingestion specs; however, it does support the most common operations such as select, scan, group by, etc.
Your project needs to run on Java 8 or greater in order to work with the Druidry client.
Here is a simple Spring Boot Java application which queries Druid data using the Avatica JDBC driver and prints the first row from the query.
This assumes that Druid is running locally and that you already have data in a table named druid_table which has a column sourceIP.
FlinkDruidApplication.java
@SpringBootApplication
public class FlinkDruidApplication {
public static void main(String[] args) {
SpringApplication.run(FlinkDruidApplication.class, args);
Logger log = LoggerFactory.getLogger("FlinkDruidApplication");
ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
DataSet<Row> dbData =
env.createInput(
JDBCInputFormat
.buildJDBCInputFormat()
.setDrivername("org.apache.calcite.avatica.remote.Driver")
.setDBUrl("jdbc:avatica:remote:url=http://localhost:8082/druid/v2/sql/avatica/")
.setUsername("null")
.setPassword("null")
.setQuery(
"SELECT sourceIP FROM druid_table"
)
.setRowTypeInfo((RowTypeInfo) Types.ROW(Types.STRING))
.finish()
);
try {
log.info("Printing first IP :: {} " + dbData.collect().iterator().next());
} catch (Exception e) {
log.error(e.getMessage());
}
}
}
pom.xml
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://maven.apache.org/POM/4.0.0"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<parent>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-parent</artifactId>
<version>2.1.8.RELEASE</version>
<relativePath/> <!-- lookup parent from repository -->
</parent>
<groupId>com.shashank</groupId>
<artifactId>FlinkDruid</artifactId>
<version>0.0.1-SNAPSHOT</version>
<name>FlinkDruid</name>
<description>Flink Druid Connection</description>
<properties>
<java.version>1.8</java.version>
</properties>
<dependencies>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter</artifactId>
</dependency>
<!-- https://mvnrepository.com/artifact/org.apache.flink/flink-core -->
<dependency>
<groupId>org.apache.flink</groupId>
<artifactId>flink-core</artifactId>
<version>1.9.0</version>
</dependency>
<!-- https://mvnrepository.com/artifact/org.apache.flink/flink-streaming-java -->
<dependency>
<groupId>org.apache.flink</groupId>
<artifactId>flink-streaming-java_2.12</artifactId>
<version>1.9.0</version>
<scope>provided</scope>
</dependency>
<!-- https://mvnrepository.com/artifact/org.apache.flink/flink-java -->
<dependency>
<groupId>org.apache.flink</groupId>
<artifactId>flink-java</artifactId>
<version>1.9.0</version>
</dependency>
<!-- https://mvnrepository.com/artifact/org.apache.flink/flink-clients -->
<dependency>
<groupId>org.apache.flink</groupId>
<artifactId>flink-clients_2.12</artifactId>
<version>1.9.0</version>
</dependency>
<!-- https://mvnrepository.com/artifact/org.apache.flink/flink-jdbc -->
<dependency>
<groupId>org.apache.flink</groupId>
<artifactId>flink-jdbc_2.12</artifactId>
<version>1.8.1</version>
</dependency>
<!-- https://mvnrepository.com/artifact/org.apache.calcite.avatica/avatica-core -->
<dependency>
<groupId>org.apache.calcite.avatica</groupId>
<artifactId>avatica-core</artifactId>
<version>1.15.0</version>
</dependency>
</dependencies>
<build>
<plugins>
<plugin>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-maven-plugin</artifactId>
</plugin>
</plugins>
</build>
</project>
You can also use Druidry, a Java-based utility library for Druid.
The simplest thing is probably Druid SQL over JDBC.
Here's how to query Druid using JDBC and Calcite Avatica:
Add Calcite Avatica
Add Calcite Avatica to your POM:
...
<dependency>
<groupId>org.apache.calcite.avatica</groupId>
<artifactId>avatica</artifactId>
<version>1.21.0</version>
</dependency>
...
Example Code
// ---------- Set up the Connection -------- //
String url = "jdbc:avatica:remote:url=https://example.com:8888/druid/v2/sql/avatica/";
Properties properties = new Properties();
properties.setProperty("user", "myusername");
properties.setProperty("password", "mypassword");
Connection conn = DriverManager.getConnection(url, properties);
// --------------- Query Druid ------------- //
String sql =
"SELECT page, COUNT(*) AS Edits \n"
+ "FROM \"wikipedia\" \n"
+ "WHERE \"__time\" BETWEEN TIMESTAMP '2015-09-12 00:00:00' AND TIMESTAMP '2015-09-13 00:00:00' \n"
+ "GROUP BY page \n"
+ "ORDER BY Edits DESC \n"
+ "LIMIT 10";
Statement stmt = conn.createStatement();
ResultSet rs = stmt.executeQuery(sql);
// Do something with the query results here.
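// For example, iterate over the rows and print each one
// (the column labels match the aliases in the SQL above):
while (rs.next()) {
System.out.println(rs.getString("page") + ": " + rs.getLong("Edits"));
}
rs.close();
stmt.close();
conn.close();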
See the docs here: https://druid.apache.org/docs/latest/querying/sql.html#jdbc