R2DBC - MssqlNonTransientException causing onErrorDropped - spring-data-r2dbc

I am quite new to the reactive world and recently came across a rather odd problem. Context:
using Spring Data with R2DBC, on MS SQL
running on Spring Boot 2.3, OpenJDK 14 (dependencies below)
using @Transactional via annotations
using "repositories" built purely on DatabaseClient (code sample below)
While trying to execute a delete from master.abc.DEF, followed by an intentionally invalid insert, I get the following exception:
2020-06-04 00:46:16.345 ERROR [....] 56223 --- [actor-tcp-nio-2] reactor.core.publisher.Operators : Operator called default onErrorDropped
io.r2dbc.mssql.ExceptionFactory$MssqlNonTransientException: Cannot insert the value NULL into column 'some_id', table 'master.abc.DEF'; column does not allow nulls. INSERT fails.
at io.r2dbc.mssql.ExceptionFactory.createException(ExceptionFactory.java:152)
Suppressed: reactor.core.publisher.FluxOnAssembly$OnAssemblyException:
Assembly trace from producer [reactor.core.publisher.FluxLift] :
reactor.core.publisher.Flux.doOnComplete
io.r2dbc.mssql.RpcQueryMessageFlow.exchange(RpcQueryMessageFlow.java:154)
Error has been observed at the following site(s):
|_ Flux.doOnComplete ⇢ at io.r2dbc.mssql.RpcQueryMessageFlow.exchange(RpcQueryMessageFlow.java:154)
|_ Flux.filter ⇢ at io.r2dbc.mssql.RpcQueryMessageFlow.exchange(RpcQueryMessageFlow.java:160)
|_ Flux.doOnCancel ⇢ at io.r2dbc.mssql.RpcQueryMessageFlow.exchange(RpcQueryMessageFlow.java:161)
|_ Flux.doOnSubscribe ⇢ at io.r2dbc.mssql.RpcQueryMessageFlow.exchange(RpcQueryMessageFlow.java:163)
Stack trace:
at io.r2dbc.mssql.ExceptionFactory.createException(ExceptionFactory.java:152)
at io.r2dbc.mssql.ExceptionFactory.createException(ExceptionFactory.java:181)
at io.r2dbc.mssql.RpcQueryMessageFlow.lambda$exchange$1(RpcQueryMessageFlow.java:148)
at reactor.core.publisher.FluxHandle$HandleSubscriber.onNext(FluxHandle.java:96)
at reactor.core.publisher.EmitterProcessor.drain(EmitterProcessor.java:426)
at reactor.core.publisher.EmitterProcessor.onNext(EmitterProcessor.java:268)
at org.springframework.cloud.sleuth.instrument.reactor.ScopePassingSpanSubscriber.onNext(ScopePassingSpanSubscriber.java:90)
at reactor.core.publisher.FluxPeekFuseable$PeekFuseableSubscriber.onNext(FluxPeekFuseable.java:203)
at org.springframework.cloud.sleuth.instrument.reactor.ScopePassingSpanSubscriber.onNext(ScopePassingSpanSubscriber.java:90)
at reactor.core.publisher.FluxPeekFuseable$PeekFuseableSubscriber.onNext(FluxPeekFuseable.java:203)
at org.springframework.cloud.sleuth.instrument.reactor.ScopePassingSpanSubscriber.onNext(ScopePassingSpanSubscriber.java:90)
at reactor.core.publisher.FluxPeekFuseable$PeekFuseableSubscriber.onNext(FluxPeekFuseable.java:203)
at org.springframework.cloud.sleuth.instrument.reactor.ScopePassingSpanSubscriber.onNext(ScopePassingSpanSubscriber.java:90)
at reactor.core.publisher.FluxHandleFuseable$HandleFuseableSubscriber.onNext(FluxHandleFuseable.java:178)
at org.springframework.cloud.sleuth.instrument.reactor.ScopePassingSpanSubscriber.onNext(ScopePassingSpanSubscriber.java:90)
at reactor.core.publisher.MonoFlatMapMany$FlatMapManyInner.onNext(MonoFlatMapMany.java:242)
at org.springframework.cloud.sleuth.instrument.reactor.ScopePassingSpanSubscriber.onNext(ScopePassingSpanSubscriber.java:90)
at reactor.core.publisher.FluxPeek$PeekSubscriber.onNext(FluxPeek.java:192)
at reactor.core.publisher.EmitterProcessor.drain(EmitterProcessor.java:426)
at reactor.core.publisher.EmitterProcessor.onNext(EmitterProcessor.java:268)
at io.r2dbc.mssql.client.ReactorNettyClient$1.next(ReactorNettyClient.java:237)
at io.r2dbc.mssql.client.ReactorNettyClient$1.next(ReactorNettyClient.java:197)
at io.r2dbc.mssql.message.token.Tabular$TabularDecoder.decode(Tabular.java:425)
at io.r2dbc.mssql.client.ConnectionState$4$1.decode(ConnectionState.java:206)
at io.r2dbc.mssql.client.StreamDecoder.withState(StreamDecoder.java:137)
at io.r2dbc.mssql.client.StreamDecoder.decode(StreamDecoder.java:109)
at io.r2dbc.mssql.client.ReactorNettyClient.lambda$new$6(ReactorNettyClient.java:247)
at reactor.core.publisher.FluxPeekFuseable$PeekFuseableSubscriber.onNext(FluxPeekFuseable.java:189)
at reactor.netty.channel.FluxReceive.drainReceiver(FluxReceive.java:220)
at reactor.netty.channel.FluxReceive.onInboundNext(FluxReceive.java:354)
at reactor.netty.channel.ChannelOperations.onInboundNext(ChannelOperations.java:352)
at reactor.netty.channel.ChannelOperationsHandler.channelRead(ChannelOperationsHandler.java:96)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
at io.netty.channel.ChannelInboundHandlerAdapter.channelRead(ChannelInboundHandlerAdapter.java:93)
at io.r2dbc.mssql.client.ssl.TdsSslHandler.channelRead(TdsSslHandler.java:402)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919)
at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:163)
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:714)
at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:650)
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:576)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:493)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
at java.base/java.lang.Thread.run(Thread.java:832)
Because of that, the transaction is not rolled back and the underlying table stays locked :/
The flow (simplified) is:
@Transactional
public Mono<X> replaceX(final String someX) {
    return someRepository.deleteByX(someX)
        .then(...making a webClient-based-call)
        .flatMap(input -> someRepository.create(detailsFrom(input)))
        ....
Adding Hooks.onErrorDropped(error -> ...) solves the problem in a way, but that does not seem like a proper solution, and I am still wondering what the actual root cause could be - I assume it is my code/approach rather than r2dbc.
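For context, the workaround referred to above is just a one-time global Reactor hook. A minimal sketch of it (the class and method names here are illustrative, not from the actual project):

import javax.annotation.PostConstruct;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.context.annotation.Configuration;
import reactor.core.publisher.Hooks;

// Registers a global hook once at startup so the dropped MssqlNonTransientException
// is at least logged instead of only surfacing via Operators.onErrorDropped.
// This is purely the workaround described above; it does not fix the missing rollback.
@Configuration
public class ReactorHooksConfig {

    private static final Logger log = LoggerFactory.getLogger(ReactorHooksConfig.class);

    @PostConstruct
    void registerDroppedErrorHook() {
        Hooks.onErrorDropped(error -> log.warn("Dropped error", error));
    }
}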
pom snippet
<parent>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-parent</artifactId>
<version>2.3.0.RELEASE</version>
<relativePath/>
</parent>
...
<java.version>14</java.version>
<!-- dependencies version -->
<spring-sleuth.version>2.2.2.RELEASE</spring-sleuth.version>
<reactor-tools.version>3.3.5.RELEASE</reactor-tools.version>
<reactor-tools-blockhound.version>1.0.3.RELEASE</reactor-tools-blockhound.version>
<mssql-jdbc.version>7.4.1.jre11</mssql-jdbc.version> <!-- override spring's default -->
<hibernate-validator.version>6.1.5.Final</hibernate-validator.version>
<springdoc-webflux.version>1.3.2</springdoc-webflux.version>
<logstash-logback.version>6.3</logstash-logback.version>
...
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-data-r2dbc</artifactId>
</dependency>
<dependency>
<groupId>com.h2database</groupId>
<artifactId>h2</artifactId>
<scope>runtime</scope>
</dependency>
<dependency>
<groupId>io.r2dbc</groupId>
<artifactId>r2dbc-h2</artifactId>
<scope>runtime</scope>
</dependency>
<dependency>
<groupId>com.microsoft.sqlserver</groupId>
<artifactId>mssql-jdbc</artifactId>
<scope>runtime</scope>
</dependency>
<dependency>
<groupId>io.r2dbc</groupId>
<artifactId>r2dbc-mssql</artifactId>
<scope>runtime</scope>
</dependency>
repository methods (conceptual):
public Mono<Void> deleteByX(final String x) {
    return this.databaseClient.delete()
        .from(X.class)
        .matching(where("x").is(x))
        .then();
}

public Mono<Long> create(final @NonNull X x) {
    return this.databaseClient.insert()
        .into(X.class)
        .using(x)
        .map(row -> row.get(0, Long.class))
        .first();
}

Related

trouble with google text-to-speech and mysql-connector-java 8.0.19

I'm a newbie using Google Text-to-Speech. The API works fine with Java 1.8, but when I added the MySQL connector driver to my pom.xml file I got a warning and an error just by executing the QuickStart demo. Here is my Eclipse console:
WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by com.google.protobuf.UnsafeUtil (file:/C:/Users/LOGISPORT/.m2/repository/com/google/protobuf/protobuf-java/3.6.1/protobuf-java-3.6.1.jar) to field java.nio.Buffer.address
WARNING: Please consider reporting this to the maintainers of com.google.protobuf.UnsafeUtil
WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
WARNING: All illegal access operations will be denied in a future release
Exception in thread "grpc-default-executor-0" java.lang.NoSuchMethodError: 'boolean com.google.protobuf.GeneratedMessageV3.isStringEmpty(java.lang.Object)'
at com.google.cloud.texttospeech.v1.VoiceSelectionParams.getSerializedSize(VoiceSelectionParams.java:328)
at com.google.protobuf.CodedOutputStream.computeMessageSizeNoTag(CodedOutputStream.java:916)
at com.google.protobuf.CodedOutputStream.computeMessageSize(CodedOutputStream.java:668)
at com.google.cloud.texttospeech.v1.SynthesizeSpeechRequest.getSerializedSize(SynthesizeSpeechRequest.java:352)
at io.grpc.protobuf.lite.ProtoInputStream.available(ProtoInputStream.java:108)
at io.grpc.internal.MessageFramer.getKnownLength(MessageFramer.java:205)
at io.grpc.internal.MessageFramer.writePayload(MessageFramer.java:137)
at io.grpc.internal.AbstractStream.writeMessage(AbstractStream.java:65)
at io.grpc.internal.ForwardingClientStream.writeMessage(ForwardingClientStream.java:37)
at io.grpc.internal.DelayedStream$6.run(DelayedStream.java:283)
at io.grpc.internal.DelayedStream.drainPendingCalls(DelayedStream.java:182)
at io.grpc.internal.DelayedStream.access$100(DelayedStream.java:44)
at io.grpc.internal.DelayedStream$4.run(DelayedStream.java:148)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:630)
at java.base/java.lang.Thread.run(Thread.java:832)
Is there an incompatibility between the MySQL connector and Google Text-to-Speech?
Here is a brief view of my code:
public static void main(String... args) throws Exception {
    String jsonPath = "accueil-mamoudzou.json";
    ConnectBase();
    String num = String.valueOf(Integer.valueOf("00013"));
    CredentialsProvider credentialsProvider = FixedCredentialsProvider.create(ServiceAccountCredentials.fromStream(new FileInputStream(jsonPath)));
    TextToSpeechSettings settings = TextToSpeechSettings.newBuilder().setCredentialsProvider(credentialsProvider).build();
    System.out.println("Settings créer ... Lancement de la traduction.");
    try (TextToSpeechClient textToSpeechClient = TextToSpeechClient.create(settings)) {
        SynthesisInput input = SynthesisInput.newBuilder().setText("Le numéro " + num + " est demandé à la porte 45. Merci").build();
        VoiceSelectionParams voice =
            VoiceSelectionParams.newBuilder()
                .setName("fr-FR-Wavenet-E")
                .setLanguageCode("fr-FR")
                .setSsmlGender(SsmlVoiceGender.FEMALE)
                .build();
        AudioConfig audioConfig =
            AudioConfig.newBuilder().setAudioEncoding(AudioEncoding.LINEAR16).build();
        SynthesizeSpeechResponse response =
            textToSpeechClient.synthesizeSpeech(input, voice, audioConfig);
        ByteString audioContents = response.getAudioContent();
        // Write the response to the output file.
        try (OutputStream out = new FileOutputStream("output.wav")) {
            out.write(audioContents.toByteArray());
            System.out.println("Audio content written to file output.wav");
            out.close();
        }
        playSound();
    }
}
and my pom.xml
<dependencies>
<!-- https://mvnrepository.com/artifact/mysql/mysql-connector-java -->
<dependency>
<groupId>mysql</groupId>
<artifactId>mysql-connector-java</artifactId>
<version>8.0.19</version>
</dependency>
<dependency>
<groupId>com.google.cloud</groupId>
<artifactId>google-cloud-texttospeech</artifactId>
<version>2.1.5</version>
</dependency>
<!-- https://mvnrepository.com/artifact/com.google.auth/google-auth-library-appengine -->
<dependency>
<groupId>com.google.auth</groupId>
<artifactId>google-auth-library-appengine</artifactId>
<version>1.6.0</version>
</dependency>
<!-- https://mvnrepository.com/artifact/com.google.cloud/google-cloud-storage -->
<dependency>
<groupId>com.google.cloud</groupId>
<artifactId>google-cloud-storage</artifactId>
<version>2.4.5</version>
</dependency>
</dependencies>
Try adding this to your pom.xml (as a BOM import, it goes inside the <dependencyManagement><dependencies> section):
<dependency>
<groupId>com.google.cloud</groupId>
<artifactId>libraries-bom</artifactId>
<version>25.0.0</version>
<type>pom</type>
<scope>import</scope>
</dependency>
It's said to address errors such as the NoSuchMethodError above, and it manages the protobuf version directly (among other things).
Source: https://cloud.google.com/java/docs/bom

Embedded mongodb to spring-boot application java exception

I need to set up embedded MongoDB in my Spring Boot project, but it shows endless error logs. Can someone help me?
I use these dependencies:
<!-- https://mvnrepository.com/artifact/de.flapdoodle.embed/de.flapdoodle.embed.mongo -->
<dependency>
<groupId>de.flapdoodle.embed</groupId>
<artifactId>de.flapdoodle.embed.mongo</artifactId>
<version>2.2.0</version>
</dependency>
<!-- https://mvnrepository.com/artifact/cz.jirutka.spring/embedmongo-spring -->
<dependency>
<groupId>org.mongodb</groupId>
<artifactId>bson</artifactId>
<version>3.8.0</version>
</dependency>
<dependency>
<groupId>org.mongodb</groupId>
<artifactId>mongodb-driver</artifactId>
<version>3.8.0</version>
<exclusions>
<exclusion>
<groupId>org.mongodb</groupId> <!-- Exclude Project-E from Project-B -->
<artifactId>bson</artifactId>
</exclusion>
</exclusions>
</dependency>
<dependency>
<groupId>org.mongodb</groupId>
<artifactId>mongodb-driver-core</artifactId>
<version>3.8.0</version>
</dependency>
<dependency>
<groupId>cz.jirutka.spring</groupId>
<artifactId>embedmongo-spring</artifactId>
<version>1.3.1</version>
</dependency>
Then I configure the MongoTemplate with this method in a configuration class:
private static final String MONGO_DB_URL = "localhost";
private static final String MONGO_DB_NAME = "embedded_db";

@Bean
public MongoTemplate mongoTemplate() throws IOException {
    EmbeddedMongoFactoryBean mongo = new EmbeddedMongoFactoryBean();
    mongo.setBindIp(MONGO_DB_URL);
    MongoClient mongoClient = (MongoClient) mongo.getObject();
    return new MongoTemplate(mongoClient, MONGO_DB_NAME);
}
But when I run my application it shows this error:
Exception in thread "main" 11:56:00.710 [Thread-0] DEBUG de.flapdoodle.embed.process.store.CachingArtifactStore - force delete for PRODUCTION:Windows:B64 and de.flapdoodle.embed.process.extract.ImmutableExtractedFileSet#545997b1
11:56:00.710 [Thread-1] DEBUG de.flapdoodle.embed.mongo.AbstractMongoProcess - try to stop mongod
java.lang.NoSuchMethodError: 'void com.mongodb.client.internal.MongoClientDelegate.<init>(com.mongodb.connection.Cluster, java.util.List, java.lang.Object)'
at com.mongodb.Mongo.<init>(Mongo.java:319)
at com.mongodb.Mongo.<init>(Mongo.java:291)
at com.mongodb.Mongo.<init>(Mongo.java:286)
at com.mongodb.Mongo.<init>(Mongo.java:282)
at com.mongodb.MongoClient.<init>(MongoClient.java:180)
at com.mongodb.MongoClient.<init>(MongoClient.java:155)
at com.mongodb.MongoClient.<init>(MongoClient.java:145)
at cz.jirutka.spring.embedmongo.EmbeddedMongoBuilder.build(EmbeddedMongoBuilder.java:104)
at cz.jirutka.spring.embedmongo.EmbeddedMongoFactoryBean.getObject(EmbeddedMongoFactoryBean.java:52)
at com.nextage.arcacrmconnector.commons.EmbeddedMongoDb.mongoTemplate(EmbeddedMongoDb.java:20)
at com.nextage.arcacrmconnector.commons.MongoTemplateSingleton.setMongoTemplate(MongoTemplateSingleton.java:20)
at com.nextage.arcacrmconnector.commons.MongoTemplateSingleton.getMongoTemplate(MongoTemplateSingleton.java:13)
at com.nextage.arcacrmconnector.services.CommonMongoService.<init>(CommonMongoService.java:12)
at com.nextage.arcacrmconnector.services.LogService.<init>(LogService.java:18)
at com.nextage.arcacrmconnector.consumer.QueueConsumerTimerTask.<init>(QueueConsumerTimerTask.java:23)
at com.nextage.arcacrmconnector.application.Application.<clinit>(Application.java:30)
11:56:00.716 [Thread-0] WARN de.flapdoodle.embed.process.io.file.Files - could not delete C:\Users\DONATE~1\AppData\Local\Temp\extract-70ac2cd1-bb5b-4276-9243-cdf6b52db3famongod.exe. Will try to delete it again when program exits.
Exception in thread "Thread-0" java.lang.IllegalStateException: Shutdown in progress
at java.base/java.lang.ApplicationShutdownHooks.add(ApplicationShutdownHooks.java:66)
at java.base/java.lang.Runtime.addShutdownHook(Runtime.java:213)
at de.flapdoodle.embed.process.io.file.FileCleaner.forceDeleteOnExit(FileCleaner.java:51)
at de.flapdoodle.embed.process.io.file.Files.forceDelete(Files.java:128)
at de.flapdoodle.embed.process.extract.ExtractedFileSets.delete(ExtractedFileSets.java:77)
at de.flapdoodle.embed.process.store.ArtifactStore.removeFileSet(ArtifactStore.java:90)
at de.flapdoodle.embed.process.store.CachingArtifactStore$FilesWithCounter.forceDelete(CachingArtifactStore.java:176)
at de.flapdoodle.embed.process.store.CachingArtifactStore.removeAll(CachingArtifactStore.java:100)
at de.flapdoodle.embed.process.store.CachingArtifactStore$CacheCleaner.run(CachingArtifactStore.java:196)
at java.base/java.lang.Thread.run(Thread.java:830)
How can I fix it? Is it a dependency version error?
String tempFile = System.getenv("temp") + File.separator + "extract-" + System.getenv("USERNAME") + "-extractmongod";
String executable;
if (System.getenv("OS") != null && System.getenv("OS").contains("Windows")) {
    executable = tempFile + ".exe";
} else {
    executable = tempFile + ".sh";
}
Files.deleteIfExists(new File(executable).toPath());
Files.deleteIfExists(new File(tempFile + ".pid").toPath());
Please try this temporary solution: put this in your Mongo config class. It should be executed before the embedded MongoDB starts (one possible placement is sketched below).
The link is: https://github.com/flapdoodle-oss/de.flapdoodle.embed.mongo/issues/171
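A hedged sketch of that placement, combining the question's configuration class with the cleanup snippet above (the class name and the exact import set are assumptions, not from the original project):

import java.io.File;
import java.io.IOException;
import java.nio.file.Files;

import com.mongodb.MongoClient;
import cz.jirutka.spring.embedmongo.EmbeddedMongoFactoryBean;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.mongodb.core.MongoTemplate;

@Configuration
public class EmbeddedMongoConfig {

    private static final String MONGO_DB_URL = "localhost";
    private static final String MONGO_DB_NAME = "embedded_db";

    @Bean
    public MongoTemplate mongoTemplate() throws IOException {
        cleanUpMongoTempFiles(); // must run before EmbeddedMongoFactoryBean extracts mongod
        EmbeddedMongoFactoryBean mongo = new EmbeddedMongoFactoryBean();
        mongo.setBindIp(MONGO_DB_URL);
        MongoClient mongoClient = (MongoClient) mongo.getObject();
        return new MongoTemplate(mongoClient, MONGO_DB_NAME);
    }

    // Temporary workaround from the linked flapdoodle issue: remove leftover
    // extract-<user>-extractmongod files from the temp directory before startup.
    private void cleanUpMongoTempFiles() throws IOException {
        String tempFile = System.getenv("temp") + File.separator
                + "extract-" + System.getenv("USERNAME") + "-extractmongod";
        String executable = System.getenv("OS") != null && System.getenv("OS").contains("Windows")
                ? tempFile + ".exe"
                : tempFile + ".sh";
        Files.deleteIfExists(new File(executable).toPath());
        Files.deleteIfExists(new File(tempFile + ".pid").toPath());
    }
}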

Spring boot data R2DBC requires transaction for read operations

I'm trying to fetch a list of objects from the database using Spring Boot WebFlux with the Postgres R2DBC driver, but I get an error saying:
value ignored org.springframework.transaction.reactive.TransactionContextManager$NoTransactionInContextException: No transaction in context Context1{reactor.onNextError.localStrategy=reactor.core.publisher.OnNextFailureStrategy$ResumeStrategy#7c18c255}
It seems all DatabaseClient operations require being wrapped in a transaction.
I tried different combinations of the dependencies between spring-boot-data and r2dbc, but that didn't really work.
Version:
<spring-boot.version>2.2.0.RC1</spring-boot.version>
<spring-data-r2dbc.version>1.0.0.BUILD-SNAPSHOT</spring-data-r2dbc.version>
<r2dbc-releasetrain.version>Arabba-M8</r2dbc-releasetrain.version>
Dependencies:
<dependency>
<groupId>org.springframework.data</groupId>
<artifactId>spring-data-r2dbc</artifactId>
<version>${spring-data-r2dbc.version}</version>
</dependency>
<dependency>
<groupId>io.r2dbc</groupId>
<artifactId>r2dbc-postgresql</artifactId>
</dependency>
<dependency>
<groupId>org.postgresql</groupId>
<artifactId>postgresql</artifactId>
</dependency>
<dependencyManagement>
<dependencies>
<dependency>
<groupId>io.r2dbc</groupId>
<artifactId>r2dbc-bom</artifactId>
<version>${r2dbc-releasetrain.version}</version>
<type>pom</type>
<scope>import</scope>
</dependency>
</dependencies>
</dependencyManagement>
fun findAll(): Flux<Game> {
    val games = client
        .select()
        .from(Game::class.java)
        .fetch()
        .all()
        .onErrorContinue { throwable, o -> System.out.println("value ignored $throwable $o") }
    games.subscribe()
    return Flux.empty()
}

@Table("game")
data class Game(@Id val id: UUID = UUID.randomUUID(),
                @Column("guess") val guess: Int = Random.nextInt(500))
Github repo: https://github.com/odfsoft/spring-boot-guess-game/tree/r2dbc-issue
I expect read operations not to require @Transactional, or to be able to run the query without manually wrapping it in the transactional context.
UPDATE:
After a few tries with multiple versions I managed to find a combination that works:
<spring-data-r2dbc.version>1.0.0.BUILD-SNAPSHOT</spring-data-r2dbc.version>
<r2dbc-postgres.version>0.8.0.RC2</r2dbc-postgres.version>
Dependencies:
<dependency>
<groupId>org.springframework.data</groupId>
<artifactId>spring-data-r2dbc</artifactId>
<version>${spring-data-r2dbc.version}</version>
</dependency>
<dependency>
<groupId>io.r2dbc</groupId>
<artifactId>r2dbc-postgresql</artifactId>
<version>${r2dbc-postgres.version}</version>
</dependency>
<dependencyManagement>
<dependencies>
<dependency>
<groupId>io.r2dbc</groupId>
<artifactId>r2dbc-bom</artifactId>
<version>Arabba-RC2</version>
<type>pom</type>
<scope>import</scope>
</dependency>
</dependencies>
</dependencyManagement>
It seems the version notation for r2dbc went from 1.0.0.M7 to 0.8.x, as described here:
https://r2dbc.io/2019/05/13/r2dbc-0-8-milestone-8-released
https://r2dbc.io/2019/10/07/r2dbc-0-8-rc2-released
But after updating to the latest version a new problem appeared: a transaction is required to run queries, as follows.
Updated configuration:
@Configuration
class PostgresConfig : AbstractR2dbcConfiguration() {

    @Bean
    override fun connectionFactory(): ConnectionFactory {
        return PostgresqlConnectionFactory(
            PostgresqlConnectionConfiguration.builder()
                .host("localhost")
                .port(5432)
                .username("root")
                .password("secret")
                .database("game")
                .build())
    }

    @Bean
    fun reactiveTransactionManager(connectionFactory: ConnectionFactory): ReactiveTransactionManager {
        return R2dbcTransactionManager(connectionFactory)
    }

    @Bean
    fun transactionalOperator(reactiveTransactionManager: ReactiveTransactionManager) =
        TransactionalOperator.create(reactiveTransactionManager)
}
Query:
fun findAll(): Flux<Game> {
    return client
        .execute("select id, guess from game")
        .`as`(Game::class.java)
        .fetch()
        .all()
        .`as`(to::transactional)
        .onErrorContinue { throwable, o -> System.out.println("value ignored $throwable $o") }
        .log()
}
Disclaimer: this is not meant to be used in production!! It is still before GA.

Spark Scala read csv file using s3a

I am trying to read a CSV (native) file from an S3 bucket using a locally running Spark with Scala. I am able to read the file using the HTTP protocol, but I intend to use the s3a protocol.
Below is the configuration setup before the call.
val awsId = System.getenv("AWS_ACCESS_KEY_ID")
val awsKey = System.getenv("AWS_SECRET_ACCESS_KEY")
sc.hadoopConfiguration.set("fs.s3a.access.key", awsId)
sc.hadoopConfiguration.set("fs.s3a.secret.key", awsKey)
sc.hadoopConfiguration.set("fs.s3a.impl", "org.apache.hadoop.fs.s3a.S3AFileSystem")
sc.hadoopConfiguration.set("fs.s3a.aws.credentials.provider","org.apache.hadoop.fs.s3a.BasicAWSCredentialsProvider");
sc.hadoopConfiguration.set("com.amazonaws.services.s3.enableV4", "true")
sc.hadoopConfiguration.set("fs.s3a.endpoint", "us-east-1.amazonaws.com")
sc.hadoopConfiguration.set("fs.s3a.impl.disable.cache", "true")
Read the file and print the first 5 rows from the RDD/DataFrame:
val fileAPath = Files.s3aPath(Files.input);
println("reading file s3", fileAPath)
// s3a://bucket-name/dataSets/policyoutput.csv
val df = sc.textFile(fileAPath);
df.take(5).foreach(println);
I am getting the below exception
Exception in thread "main" com.amazonaws.services.s3.model.AmazonS3Exception: Status Code: 400, AWS Service: Amazon S3, AWS Request ID: FD92FDC175C64AA2, AWS Error Code: null, AWS Error Message: Bad Request, S3 Extended Request ID: IuloUEASgqnY4lrSMpbyJpwgFfCFbttxuxmJ9hGHMUgZTbO/UR/YyDgjix+3rBe0Y4MQHPzNvhA=
at com.amazonaws.http.AmazonHttpClient.handleErrorResponse(AmazonHttpClient.java:798)
at com.amazonaws.http.AmazonHttpClient.executeHelper(AmazonHttpClient.java:421)
at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:232)
at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:3528)
at com.amazonaws.services.s3.AmazonS3Client.headBucket(AmazonS3Client.java:1031)
at com.amazonaws.services.s3.AmazonS3Client.doesBucketExist(AmazonS3Client.java:994)
at org.apache.hadoop.fs.s3a.S3AFileSystem.initialize(S3AFileSystem.java:154)
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2669)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:370)
at org.apache.hadoop.fs.Path.getFileSystem(Path.java:295)
at org.apache.hadoop.mapred.FileInputFormat.singleThreadedListStatus(FileInputFormat.java:258)
at org.apache.hadoop.mapred.FileInputFormat.listStatus(FileInputFormat.java:229)
at org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:315)
at org.apache.spark.rdd.HadoopRDD.getPartitions(HadoopRDD.scala:194)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:252)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:250)
at scala.Option.getOrElse(Option.scala:121)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:250)
at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:252)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:250)
at scala.Option.getOrElse(Option.scala:121)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:250)
at org.apache.spark.rdd.RDD$$anonfun$take$1.apply(RDD.scala:1333)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
at org.apache.spark.rdd.RDD.withScope(RDD.scala:362)
at org.apache.spark.rdd.RDD.take(RDD.scala:1327)
Any help / direction for further investigation will be much appreciated.
Thanks
For anyone else struggling with this: I had to update the version of hadoop-client.
Additionally, the links below were quite helpful:
https://hadoop.apache.org/docs/current/hadoop-aws/tools/hadoop-aws/index.html
https://disqus.com/by/cfeduke/?utm_source=reply&utm_medium=email&utm_content=comment_author
http://docs.aws.amazon.com/general/latest/gr/rande.html#s3_region
pom details below
<properties>
<spark.version>2.2.0</spark.version>
<hadoop.version>2.8.0</hadoop.version>
</properties>
<dependencies>
<!-- https://mvnrepository.com/artifact/org.apache.spark/spark-core_2.11 -->
<dependency>
<groupId>org.apache.spark</groupId>
<artifactId>spark-core_2.11</artifactId>
<version>${spark.version}</version>
</dependency>
<dependency>
<groupId>org.apache.spark</groupId>
<artifactId>spark-sql_2.11</artifactId>
<version>${spark.version}</version>
</dependency>
<dependency>
<groupId>org.apache.hadoop</groupId>
<artifactId>hadoop-client</artifactId>
<version>${hadoop.version}</version>
</dependency>
<dependency>
<groupId>org.apache.hadoop</groupId>
<artifactId>hadoop-aws</artifactId>
<version>${hadoop.version}</version>
</dependency>
</dependencies>

kafka-apache flink execution log4j error

I'm trying to run a simple Apache Flink script with Kafka integration, but I keep having problems with the execution.
The script should read messages coming from a Kafka producer, process them, and then send the result of the processing back to another topic.
I got this example from here:
http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/Simple-Flink-Kafka-Test-td4828.html
The error I have is:
Exception in thread "main" java.lang.NoSuchFieldError: ALL
at org.apache.flink.streaming.api.graph.StreamingJobGraphGenerator.createJobGraph(StreamingJobGraphGenerator.java:86)
at org.apache.flink.streaming.api.graph.StreamGraph.getJobGraph(StreamGraph.java:429)
at org.apache.flink.streaming.api.environment.LocalStreamEnvironment.execute(LocalStreamEnvironment.java:46)
at org.apache.flink.streaming.api.environment.LocalStreamEnvironment.execute(LocalStreamEnvironment.java:33)
This is my code:
public class App {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        Properties properties = new Properties();
        properties.setProperty("bootstrap.servers", "localhost:9092");
        //properties.setProperty("zookeeper.connect", "localhost:2181");
        properties.setProperty("group.id", "javaflink");
        DataStream<String> messageStream = env.addSource(new FlinkKafkaConsumer010<String>("test", new SimpleStringSchema(), properties));
        System.out.println("Step D");
        messageStream.map(new MapFunction<String, String>() {
            public String map(String value) throws Exception {
                // TODO Auto-generated method stub
                return "Blablabla " + value;
            }
        }).addSink(new FlinkKafkaProducer010("localhost:9092", "demo2", new SimpleStringSchema()));
        env.execute();
    }
}
These are the pom.xml dependencies:
<dependency>
<groupId>org.apache.flink</groupId>
<artifactId>flink-core</artifactId>
<version>1.3.1</version>
</dependency>
<dependency>
<groupId>org.apache.flink</groupId>
<artifactId>flink-java_2.11</artifactId>
<version>0.10.2</version>
</dependency>
<dependency>
<groupId>org.apache.flink</groupId>
<artifactId>flink-clients_2.11</artifactId>
<version>1.3.1</version>
</dependency>
<dependency>
<groupId>org.apache.flink</groupId>
<artifactId>flink-streaming-core</artifactId>
<version>0.9.1</version>
</dependency>
<dependency>
<groupId>org.apache.flink</groupId>
<artifactId>flink-streaming-java_2.11</artifactId>
<version>1.3.1</version>
<scope>provided</scope>
</dependency>
<dependency>
<groupId>org.apache.flink</groupId>
<artifactId>flink-connector-kafka-0.10_2.11</artifactId>
<version>1.3.1</version>
</dependency>
What could cause this kind of error?
Thanks
Luca
The problem is most likely caused by the mixture of different Flink versions you have defined in your pom.xml. In order to run this program, it should be enough to include the following dependencies:
<!-- Streaming API -->
<dependency>
<groupId>org.apache.flink</groupId>
<artifactId>flink-streaming-java_2.11</artifactId>
<version>1.3.1</version>
</dependency>
<!-- In order to execute the program from within your IDE -->
<dependency>
<groupId>org.apache.flink</groupId>
<artifactId>flink-clients_2.11</artifactId>
<version>1.3.1</version>
</dependency>
<!-- Kafka connector dependency -->
<dependency>
<groupId>org.apache.flink</groupId>
<artifactId>flink-connector-kafka-0.10_2.11</artifactId>
<version>1.3.1</version>
</dependency>