@NotNull annotation is not checking null query parameter in Jersey REST resource

I am trying to use javax.validation.validation-api for validating @QueryParam parameters. I have followed these steps:
Added dependency:
<dependency>
    <groupId>javax.validation</groupId>
    <artifactId>validation-api</artifactId>
    <version>1.1.0.Final</version>
    <scope>provided</scope>
</dependency>
<dependency>
    <groupId>org.glassfish.jersey.ext</groupId>
    <artifactId>jersey-bean-validation</artifactId>
    <version>2.12</version>
    <exclusions>
        <exclusion>
            <groupId>org.hibernate</groupId>
            <artifactId>hibernate-validator</artifactId>
        </exclusion>
    </exclusions>
</dependency>
Jersey resource:
@GET
@Produces(MediaType.APPLICATION_JSON)
@Path("/{param1}")
@ValidateOnExecution
public SomeObject get(@PathParam("param1") String param1,
                      @NotNull @QueryParam("q1") String q1,
                      @NotNull @QueryParam("q2") String q2) {
    try {
        // Assuming q1 and q2 are NOT null here
        ...
    } catch (Exception exception) {
        throw new WebApplicationException(exception,
                Response.Status.INTERNAL_SERVER_ERROR);
    }
    return someObject;
}
I tried various combinations of URLs: both q1 and q2 absent, only q1 absent, and only q2 absent. Each time, @NotNull is not enforced, meaning the try block executes regardless of q1 and q2 being null.
What else needs to be done?
I referred to this link - https://jersey.java.net/documentation/latest/bean-validation.html#d0e11956.
How can I check whether the auto-discoverable feature is enabled in my environment?
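One way to take auto-discovery out of the equation is to register Jersey's bean-validation feature explicitly. A minimal sketch, assuming your application subclasses ResourceConfig (the package name below is illustrative):

import org.glassfish.jersey.server.ResourceConfig;
import org.glassfish.jersey.server.validation.ValidationFeature;

public class MyApplication extends ResourceConfig {
    public MyApplication() {
        // Illustrative resource package
        packages("com.example.resources");
        // Explicit registration removes the dependency on auto-discovery
        register(ValidationFeature.class);
    }
}

As a quick check, auto-discovery is off when the property jersey.config.server.disableAutoDiscovery (ServerProperties.FEATURE_AUTO_DISCOVERY_DISABLE) is set to true in your application configuration.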

Related

Spring Boot Data R2DBC requires transaction for read operations

I'm trying to fetch a list of objects from the database using Spring Boot WebFlux with the Postgres R2DBC driver, but I get an error saying:
value ignored org.springframework.transaction.reactive.TransactionContextManager$NoTransactionInContextException: No transaction in context Context1{reactor.onNextError.localStrategy=reactor.core.publisher.OnNextFailureStrategy$ResumeStrategy#7c18c255}
It seems all DatabaseClient operations need to be wrapped in a transaction.
I tried different combinations of the dependencies between spring-data-r2dbc and r2dbc, but none of them worked.
Version:
<spring-boot.version>2.2.0.RC1</spring-boot.version>
<spring-data-r2dbc.version>1.0.0.BUILD-SNAPSHOT</spring-data-r2dbc.version>
<r2dbc-releasetrain.version>Arabba-M8</r2dbc-releasetrain.version>
Dependencies:
<dependency>
    <groupId>org.springframework.data</groupId>
    <artifactId>spring-data-r2dbc</artifactId>
    <version>${spring-data-r2dbc.version}</version>
</dependency>
<dependency>
    <groupId>io.r2dbc</groupId>
    <artifactId>r2dbc-postgresql</artifactId>
</dependency>
<dependency>
    <groupId>org.postgresql</groupId>
    <artifactId>postgresql</artifactId>
</dependency>
<dependencyManagement>
    <dependencies>
        <dependency>
            <groupId>io.r2dbc</groupId>
            <artifactId>r2dbc-bom</artifactId>
            <version>${r2dbc-releasetrain.version}</version>
            <type>pom</type>
            <scope>import</scope>
        </dependency>
    </dependencies>
</dependencyManagement>
fun findAll(): Flux<Game> {
    val games = client
        .select()
        .from(Game::class.java)
        .fetch()
        .all()
        .onErrorContinue { throwable, o -> System.out.println("value ignored $throwable $o") }
    games.subscribe()
    return Flux.empty()
}

@Table("game")
data class Game(@Id val id: UUID = UUID.randomUUID(),
                @Column("guess") val guess: Int = Random.nextInt(500))
Github repo: https://github.com/odfsoft/spring-boot-guess-game/tree/r2dbc-issue
I expect read operations not to require @Transactional, or to be able to run the query without wrapping it in the transactional context manually.
UPDATE:
After a few tries with multiple versions, I managed to find a combination that works:
<spring-data-r2dbc.version>1.0.0.BUILD-SNAPSHOT</spring-data-r2dbc.version>
<r2dbc-postgres.version>0.8.0.RC2</r2dbc-postgres.version>
Dependencies:
<dependency>
    <groupId>org.springframework.data</groupId>
    <artifactId>spring-data-r2dbc</artifactId>
    <version>${spring-data-r2dbc.version}</version>
</dependency>
<dependency>
    <groupId>io.r2dbc</groupId>
    <artifactId>r2dbc-postgresql</artifactId>
    <version>${r2dbc-postgres.version}</version>
</dependency>
<dependencyManagement>
    <dependencies>
        <dependency>
            <groupId>io.r2dbc</groupId>
            <artifactId>r2dbc-bom</artifactId>
            <version>Arabba-RC2</version>
            <type>pom</type>
            <scope>import</scope>
        </dependency>
    </dependencies>
</dependencyManagement>
It seems the r2dbc version notation went from 1.0.0.M7 to 0.8.x, as explained in:
https://r2dbc.io/2019/05/13/r2dbc-0-8-milestone-8-released
https://r2dbc.io/2019/10/07/r2dbc-0-8-rc2-released
But after updating to the latest version, a new problem appeared: a transaction is required to run queries, as follows.
Updated configuration:
@Configuration
class PostgresConfig : AbstractR2dbcConfiguration() {

    @Bean
    override fun connectionFactory(): ConnectionFactory {
        return PostgresqlConnectionFactory(
            PostgresqlConnectionConfiguration.builder()
                .host("localhost")
                .port(5432)
                .username("root")
                .password("secret")
                .database("game")
                .build())
    }

    @Bean
    fun reactiveTransactionManager(connectionFactory: ConnectionFactory): ReactiveTransactionManager {
        return R2dbcTransactionManager(connectionFactory)
    }

    @Bean
    fun transactionalOperator(reactiveTransactionManager: ReactiveTransactionManager) =
        TransactionalOperator.create(reactiveTransactionManager)
}
Query:
fun findAll(): Flux<Game> {
    return client
        .execute("select id, guess from game")
        .`as`(Game::class.java)
        .fetch()
        .all()
        .`as`(to::transactional)
        .onErrorContinue { throwable, o -> System.out.println("value ignored $throwable $o") }
        .log()
}
Disclaimer: this is not meant to be used in production! It is still before GA.
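If wrapping every pipeline in the TransactionalOperator gets noisy, the conventional alternative is to annotate the reading method and let the ReactiveTransactionManager bean above drive the transaction. A minimal sketch, shown in Java; the service and repository names are illustrative, not taken from the repo above:

import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;
import reactor.core.publisher.Flux;

@Service
public class GameService {

    private final GameRepository repository; // hypothetical reactive repository

    public GameService(GameRepository repository) {
        this.repository = repository;
    }

    // Backed by the ReactiveTransactionManager bean defined in the configuration
    @Transactional(readOnly = true)
    public Flux<Game> findAll() {
        return repository.findAll();
    }
}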

@Before cucumber hook does not accept Scenario as an argument

I'm trying to get the name of the current cucumber scenario.
I'm using JUnit 4.10. When I add @Before without any arguments, the method is successfully called. However, if I include the Scenario argument, I get:
cucumber.runtime.CucumberException: Can't invoke
stepDefinitions.beforeScenarios(Scenario)
import cucumber.annotation.Before;
import gherkin.formatter.model.Scenario;

public class stepDefinitions {
    public Scenario scenario = null;

    @Before
    public void beforeScenarios(Scenario scenario) {
        System.out.println("Method called");
    }
    ...
Any ideas what I'm doing wrong?
I updated my pom.xml with:
<dependency>
    <groupId>org.seleniumhq.selenium</groupId>
    <artifactId>selenium-java</artifactId>
    <version>3.141.5</version>
</dependency>
<dependency>
    <groupId>io.cucumber</groupId>
    <artifactId>cucumber-java</artifactId>
    <version>4.3.1</version>
    <scope>test</scope>
</dependency>
<dependency>
    <groupId>io.cucumber</groupId>
    <artifactId>cucumber-junit</artifactId>
    <version>4.3.1</version>
    <scope>test</scope>
</dependency>
<dependency>
    <groupId>junit</groupId>
    <artifactId>junit</artifactId>
    <version>4.12</version>
    <scope>test</scope>
</dependency>
Below are the right APIs to import:
@Before - import cucumber.api.java.Before;
Scenario - import cucumber.api.Scenario;
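Putting that together, a minimal working hook would look like this (the class name is illustrative); with the correct imports, Cucumber injects the Scenario when it is declared as a parameter:

import cucumber.api.Scenario;
import cucumber.api.java.Before;

public class StepDefinitions {

    @Before
    public void beforeScenarios(Scenario scenario) {
        // Scenario is injected by Cucumber, so its name is available here
        System.out.println("Starting scenario: " + scenario.getName());
    }
}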

gRPC spring-boot-starter cannot bind @GRpcGlobalInterceptor?

I am using
<dependency>
    <groupId>org.lognet</groupId>
    <artifactId>grpc-spring-boot-starter</artifactId>
    <version>2.1.4</version>
    <exclusions>
        <exclusion>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter</artifactId>
        </exclusion>
    </exclusions>
</dependency>
<dependency>
    <groupId>io.zipkin.brave</groupId>
    <artifactId>brave-instrumentation-grpc</artifactId>
    <version>4.13.1</version>
</dependency>
<dependency>
    <groupId>io.zipkin.brave</groupId>
    <artifactId>brave</artifactId>
    <version>4.13.1</version>
</dependency>
I want to use brave-instrumentation-grpc to monitor my gRPC server application, so I followed the advice below:
https://github.com/LogNet/grpc-spring-boot-starter#interceptors-support
https://github.com/openzipkin/brave/tree/master/instrumentation/grpc
@Configuration
public class GrpcFilterConfig {

    @GRpcGlobalInterceptor
    @Bean
    public ServerInterceptor globalInterceptor() {
        Tracing tracing = Tracing.newBuilder().build();
        GrpcTracing grpcTracing = GrpcTracing.create(tracing);
        return grpcTracing.newServerInterceptor();
    }
}
The question is: the global interceptor I defined does not get bound to the gRPC server.
Debugging deep inside GRpcServerRunner.class, it seems that this call does not return the annotated beans:
Map<String, Object> beansWithAnnotation = this.applicationContext.getBeansWithAnnotation(annotationType);
beansWithAnnotation is null.
Is there something wrong with my usage?
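One common cause of getBeansWithAnnotation returning nothing is that the @Configuration class sits outside the component-scan path. A minimal sketch that makes the scan explicit; the package name is illustrative and must be adapted to wherever GrpcFilterConfig actually lives:

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

// scanBasePackages must cover the package containing GrpcFilterConfig
@SpringBootApplication(scanBasePackages = {"com.example.grpc"})
public class Application {
    public static void main(String[] args) {
        SpringApplication.run(Application.class, args);
    }
}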

MongoDB Scala - query document for a specific field value

So I know that in the Mongo shell, you use dot notation to get the field you want in any document.
How is dot notation achieved in MongoDB Scala? I'm confused as to how it works. Here is the code that fetches a document from a collection:
val record = collection.find().projection(fields(include("offset"), excludeId())).limit(1)
EDIT:
I'm trying to build a mechanism to re-consume Kafka records from the point where the consumer was shut down. To do this, I store my Kafka records in an external database and then try to fetch the most recent offset from there and start consuming from that point. Here is my Scala method that should do that:
def getLatestCommitOffsetFromDB(collectionName: String): Long = {
  import com.mongodb.Block
  import org.bson.Document

  val printBlock = new Block[Document]() {
    override def apply(document: Document): Unit = {
      println(document.toJson)
    }
  }

  import com.mongodb.async.SingleResultCallback

  val callbackWhenFinished = new SingleResultCallback[Void]() {
    override def onResult(result: Void, t: Throwable): Unit = {
      System.out.println("Latest offset fetched from database.")
    }
  }

  var obj: String = " "
  try {
    val record = collection.find().projection(fields(include("offset"), excludeId())).limit(1)
    // TODO FIND A WAY TO GET THE VALUE AND STORE IT IN A VARIABLE
  } catch {
    case e: RuntimeException =>
      logger.error(s"MongoDB Server Error : Unable to fetch data from collection : $collection")
      logger.error(e.printStackTrace().toString())
  }
  obj.toLong
}
The problem isn't fetching documents from Mongo; it's accessing a particular field in them. The document has four fields: topic, partition, message, and offset. I want to get the "offset" field and store it in a variable, so I can use it as the restarting point to re-consume Kafka records.
Where do I go from there?
POM.xml
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <groupId>OffsetManagementPoC</groupId>
    <artifactId>OffsetManagementPoC</artifactId>
    <version>1.0-SNAPSHOT</version>
    <dependencies>
        <dependency>
            <groupId>org.apache.kafka</groupId>
            <artifactId>kafka-clients</artifactId>
            <version>1.0.0</version>
        </dependency>
        <dependency>
            <groupId>org.apache.kafka</groupId>
            <artifactId>kafka_2.12</artifactId>
            <version>1.0.0</version>
        </dependency>
        <dependency>
            <groupId>org.scala-lang</groupId>
            <artifactId>scala-compiler</artifactId>
            <version>2.11.8</version>
        </dependency>
        <dependency>
            <groupId>org.slf4j</groupId>
            <artifactId>slf4j-api</artifactId>
            <version>1.7.25</version>
        </dependency>
        <dependency>
            <groupId>org.apache.kafka</groupId>
            <artifactId>kafka-streams</artifactId>
            <version>0.10.0.1</version>
        </dependency>
        <dependency>
            <groupId>org.apache.spark</groupId>
            <artifactId>spark-streaming_2.11</artifactId>
            <version>2.2.0</version>
        </dependency>
        <dependency>
            <groupId>org.apache.spark</groupId>
            <artifactId>spark-streaming-kafka-0-10_2.11</artifactId>
            <version>2.2.0</version>
        </dependency>
        <!-- https://mvnrepository.com/artifact/org.apache.spark/spark-streaming-kafka-0-10 -->
        <dependency>
            <groupId>org.apache.spark</groupId>
            <artifactId>spark-streaming-kafka-0-10_2.11</artifactId>
            <version>2.2.0</version>
        </dependency>
        <!-- https://mvnrepository.com/artifact/com.fasterxml.jackson.core/jackson-core -->
        <dependency>
            <groupId>com.fasterxml.jackson.core</groupId>
            <artifactId>jackson-core</artifactId>
            <version>2.6.5</version>
        </dependency>
        <dependency>
            <groupId>com.fasterxml.jackson.core</groupId>
            <artifactId>jackson-databind</artifactId>
            <version>2.6.5</version>
        </dependency>
        <!-- https://mvnrepository.com/artifact/com.fasterxml.jackson.core/jackson-annotations -->
        <dependency>
            <groupId>com.fasterxml.jackson.core</groupId>
            <artifactId>jackson-annotations</artifactId>
            <version>2.6.5</version>
        </dependency>
        <dependency>
            <groupId>org.mongodb</groupId>
            <artifactId>casbah_2.12</artifactId>
            <version>3.1.1</version>
            <type>pom</type>
        </dependency>
        <dependency>
            <groupId>com.typesafe</groupId>
            <artifactId>config</artifactId>
            <version>1.2.1</version>
        </dependency>
        <dependency>
            <groupId>org.mongodb.scala</groupId>
            <artifactId>mongo-scala-driver_2.12</artifactId>
            <version>2.1.0</version>
        </dependency>
        <dependency>
            <groupId>org.scala-lang</groupId>
            <artifactId>scala-compiler</artifactId>
            <version>2.11.8</version>
        </dependency>
        <dependency>
            <groupId>org.mongodb</groupId>
            <artifactId>mongo-java-driver</artifactId>
            <version>3.4.2</version>
        </dependency>
        <dependency>
            <groupId>org.mongodb.scala</groupId>
            <artifactId>mongo-scala-driver_2.11</artifactId>
            <version>2.1.0</version>
        </dependency>
        <dependency>
            <groupId>org.mongodb</groupId>
            <artifactId>bson</artifactId>
            <version>3.3.0</version>
        </dependency>
        <dependency>
            <groupId>org.mongodb</groupId>
            <artifactId>mongodb-driver-async</artifactId>
            <version>3.4.3</version>
        </dependency>
        <dependency>
            <groupId>org.mongodb.scala</groupId>
            <artifactId>mongo-scala-bson_2.11</artifactId>
            <version>2.1.0</version>
        </dependency>
    </dependencies>
</project>
You can modify your query this way:
import com.mongodb.MongoClient
import com.mongodb.client.MongoCollection
import com.mongodb.client.model.Projections

def getLatestCommitOffsetFromDB(
  databaseName: String,
  collectionName: String
): Long = {
  val mongoClient = new MongoClient("localhost", 27017)
  val collection =
    mongoClient.getDatabase(databaseName).getCollection(collectionName)
  val record = collection
    .find()
    .projection(
      Projections
        .fields(Projections.include("offset"), Projections.excludeId()))
    .first
  record.get("offset").asInstanceOf[Double].toLong
}
I think you were missing the com.mongodb.client.model.Projections import needed to use fields, include, and excludeId.
I used first instead of limit(1) to make it easier to extract the result.
first returns a Document object on which you can call get to retrieve the value of the requested field.
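As an illustration, here is a minimal sketch of that extraction, assuming the offset field is stored as a numeric BSON type (the helper name is hypothetical):

import com.mongodb.client.MongoCollection;
import org.bson.Document;

static long latestOffset(MongoCollection<Document> collection) {
    Document doc = collection.find().first();
    // Document.get returns Object; casting through Number tolerates
    // Int32, Int64, or Double storage of the field
    return doc == null ? -1L : ((Number) doc.get("offset")).longValue();
}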
But in fact, since you just want one record and one field, you can remove the projection:
val record = collection.find().first
According to the documentation, collection.find() accepts a com.mongodb.DBObject.
One of the implementations of that interface you can use is BasicDBObject, which is basically like a mutable.Map[String, Object]. You can use the constructor which accepts a map:
val query = new com.mongodb.BasicDBObject(Map(
  "foo.bar" -> "value1",
  "bar.foo" -> "value2"
))
val record = collection.find(query)....

Storm-Kafka-client from Storm 1.0.1

Based on the Storm documentation, the supported implementation of KafkaSpout is based on the old consumer API. I noticed the external package has another implementation named storm-kafka-client.
https://github.com/apache/storm/tree/master/external/storm-kafka-client
It is unclear whether the new client released in 1.0.1 is production ready. Does anyone have experience running it?
I posted the same question to the Storm mailing list.
The new API is production ready; we should use the 1.x branch.
I plan to test with
<!-- https://mvnrepository.com/artifact/org.apache.storm/storm-kafka-client -->
<dependency>
    <groupId>org.apache.storm</groupId>
    <artifactId>storm-kafka-client</artifactId>
    <version>1.0.1</version>
</dependency>
Will update on the progress.
The code below works fine for me:
public TopologyBuilder myTopology() {
    TopologyBuilder builder = new TopologyBuilder();
    try {
        KafkaSpoutConfig<String, String> kafkaSpoutConfig = getKafkaSpoutConfig("KAFKA_IP:9092", KAFKA_TOPIC);
        KafkaSpout kafkaSpout = new KafkaSpout<>(kafkaSpoutConfig);
        builder.setSpout("kafkaSpout", kafkaSpout, 2 * 2);
        builder.setBolt("Bolt-1", new TestBolt(), parallelism).shuffleGrouping("kafkaSpout", KAFKA_TOPIC);
    } catch (Exception ex) {
    }
    return builder;
}
Configure the spout:
protected KafkaSpoutConfig<String, String> getKafkaSpoutConfig(String bootstrapServers, String topic) {
    ByTopicRecordTranslator<String, String> trans = new ByTopicRecordTranslator<>(
            (r) -> new Values(r.topic(), r.partition(), r.offset(), r.key(), r.value()),
            new Fields("topic", "partition", "offset", "key", "value"), topic);
    Builder<String, String> builder = KafkaSpoutConfig.builder(bootstrapServers, new String[]{topic});
    return builder.setProp(ConsumerConfig.GROUP_ID_CONFIG, topic)
            .setProcessingGuarantee(ProcessingGuarantee.AT_LEAST_ONCE)
            .setRetry(getRetryService())
            .setRecordTranslator(trans)
            .setOffsetCommitPeriodMs(10_000)
            .setFirstPollOffsetStrategy(UNCOMMITTED_EARLIEST)
            .setMaxUncommittedOffsets(1000)
            .build();
}
To configure the retry logic for failed messages:
protected KafkaSpoutRetryService getRetryService() {
    return new KafkaSpoutRetryExponentialBackoff(TimeInterval.microSeconds(500),
            TimeInterval.milliSeconds(2), Integer.MAX_VALUE, TimeInterval.seconds(10));
}
You can use the following Maven dependencies for Storm 1.1.0:
<dependency>
    <groupId>org.apache.storm</groupId>
    <artifactId>storm-core</artifactId>
    <version>1.1.0</version>
    <scope>provided</scope>
    <exclusions>
        <exclusion>
            <groupId>org.slf4j</groupId>
            <artifactId>log4j-over-slf4j</artifactId>
        </exclusion>
    </exclusions>
</dependency>
<dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka-clients</artifactId>
    <version>0.10.0.0</version>
</dependency>
<dependency>
    <groupId>org.apache.storm</groupId>
    <artifactId>storm-kafka</artifactId>
    <version>1.0.0</version>
</dependency>
<dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka_2.10</artifactId>
    <version>0.9.0.0</version>
    <exclusions>
        <exclusion>
            <groupId>org.apache.zookeeper</groupId>
            <artifactId>zookeeper</artifactId>
        </exclusion>
        <exclusion>
            <groupId>log4j</groupId>
            <artifactId>log4j</artifactId>
        </exclusion>
    </exclusions>
</dependency>
You may face some more dependency issues, which you can resolve by adding the required jars.
Also, the package in your Java code will change from backtype.storm.XXXXX to org.apache.storm.XXXXX.
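For example, a typical import changes like this (TopologyBuilder chosen purely as an illustration):

// Storm 0.x (old package)
import backtype.storm.topology.TopologyBuilder;

// Storm 1.x (new package)
import org.apache.storm.topology.TopologyBuilder;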