redis4cats Scala redis.set not setting any records

I'm using the Scala redis4cats library, with
redis: RedisCommands[IO, String, String]
The following line of Scala code does not set any record in Redis:
redis.set("a", "x")
Setting the key via the terminal works, though:
127.0.0.1:6379[2]> set a b
OK
Retrieving from Scala the record that was set from the terminal also works, which verifies that the Redis connection from Scala is set up properly.
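A likely explanation, assuming the returned effect is never run (the question doesn't show how the IO is executed): in cats-effect, redis.set("a", "x") returns a lazy IO[Unit] that merely describes the write. If that value is discarded, no command is ever sent to Redis. A minimal sketch of sequencing it into a program that actually runs (cats-effect 3 syntax assumed):

import cats.effect.IO

// redis.set returns IO[Unit]: a description of the write, not the write itself.
// Nothing reaches Redis until the IO is part of the program that gets executed.
val program: IO[Unit] =
  for {
    _ <- redis.set("a", "x")   // now composed into the program
    v <- redis.get("a")        // should yield Some("x") once the program runs
    _ <- IO.println(v)
  } yield ()

// At the edge of the application, e.g.:
// import cats.effect.unsafe.implicits.global
// program.unsafeRunSync()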

Related

API method OClass.setCustom fails in distributed environment when trying to write text containing spaces

I've implemented an OrientDB client using the Java API (version 3.0.30). My code makes heavy use of custom fields in order to store metadata on schema classes.
In the development and integration-test environments everything works fine. In the production environment my initialization method fails at the first setCustom call that writes text containing spaces to a custom field. An OCommandSQLParsingException is thrown (see below).
In the dev/test environments I'm using all sorts of databases (embedded, plocal, and remote access to a standalone OrientDB instance running on Docker).
In the production environment the application connects to a distributed cluster of several OrientDB instances (all Docker containers). That's the only difference I can spot.
The initialization code looks like this:
try (ODatabaseSession session = orientDbService.getSession()) {
    final String className = "mySuperClass";
    OClass oClass = session.getMetadata().getSchema().getClass(className);
    if (oClass == null) {
        oClass = session.createVertexClass(className);
        oClass.setAbstract(true);
        oClass.setCustom("myDescription", "my superclass"); // <- fails here in production
        oClass.setCustom("myCreationDate", ...
The error occurs when the text to be written to the custom field contains spaces. Text without spaces is processed without problems.
This exception is thrown:
OCommandSQLParsingException: Error parsing query:
alter class `mySuperClass` custom `myDescription`=my superclass
                                                     ^
Encountered " <IDENTIFIER> "superclass "" at line 1, column 53.
Was expecting one of:
    <EOF>
    <UNSAFE> ...
    ";" ...
    <UNSAFE> ...
    <UNSAFE> ...
DB name="myDb"
Error Code="1"
at com.orientechnologies.orient.core.sql.parser.OStatementCache.throwParsingException(OStatementCache.java:149)
at com.orientechnologies.orient.core.sql.parser.OStatementCache.parse(OStatementCache.java:141)
at com.orientechnologies.orient.core.sql.parser.OStatementCache.get(OStatementCache.java:90)
at com.orientechnologies.orient.core.sql.parser.OStatementCache.get(OStatementCache.java:68)
at com.orientechnologies.orient.core.sql.OCommandExecutorSQLAbstract.preParse(OCommandExecutorSQLAbstract.java:228)
at com.orientechnologies.orient.core.sql.OCommandExecutorSQLAlterClass.parse(OCommandExecutorSQLAlterClass.java:60)
at com.orientechnologies.orient.core.sql.OCommandExecutorSQLAlterClass.parse(OCommandExecutorSQLAlterClass.java:44)
at com.orientechnologies.orient.core.sql.OCommandExecutorSQLDelegate.parse(OCommandExecutorSQLDelegate.java:58)
at com.orientechnologies.orient.core.sql.OCommandExecutorSQLDelegate.parse(OCommandExecutorSQLDelegate.java:39)
at com.orientechnologies.orient.server.distributed.impl.ODistributedStorage.command(ODistributedStorage.java:240)
at com.orientechnologies.orient.core.command.OCommandRequestTextAbstract.execute(OCommandRequestTextAbstract.java:68)
at com.orientechnologies.orient.core.metadata.schema.OClassEmbedded.setCustom(OClassEmbedded.java:198)
at com.orientechnologies.orient.core.metadata.schema.OClassEmbedded.setCustom(OClassEmbedded.java:24)
at com.orientechnologies.orient.core.sql.parser.OAlterClassStatement.executeDDL(OAlterClassStatement.java:318)
at com.orientechnologies.orient.core.sql.executor.ODDLExecutionPlan.executeInternal(ODDLExecutionPlan.java:55)
at com.orientechnologies.orient.core.sql.parser.ODDLStatement.execute(ODDLStatement.java:42)
at com.orientechnologies.orient.core.sql.parser.OStatement.execute(OStatement.java:79)
at com.orientechnologies.orient.core.db.document.ODatabaseDocumentEmbedded.command(ODatabaseDocumentEmbedded.java:567)
at com.orientechnologies.orient.server.OConnectionBinaryExecutor.executeQuery(OConnectionBinaryExecutor.java:1188)
at com.orientechnologies.orient.client.remote.message.OQueryRequest.execute(OQueryRequest.java:136)
at com.orientechnologies.orient.server.network.protocol.binary.ONetworkProtocolBinary.sessionRequest(ONetworkProtocolBinary.java:310)
at com.orientechnologies.orient.server.network.protocol.binary.ONetworkProtocolBinary.execute(ONetworkProtocolBinary.java:212)
at com.orientechnologies.common.thread.OSoftThread.run(OSoftThread.java:69)
The following parameters are identical in the dev, test, and prod environments:
Version: OrientDB 3.0.30 (Java API, servers)
Java Runtime: OpenJDK 11
OS: Linux
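The stack trace shows that on distributed storage setCustom is rewritten into an ALTER CLASS statement whose value is not quoted, so the parser stops at the first space. A possible workaround (a sketch, in Scala like the rest of this page, not verified against a 3.0.30 cluster) is to issue the statement yourself with the value quoted:

// Run the ALTER CLASS statement directly, quoting the value so the SQL parser
// accepts the embedded space. `session` is the ODatabaseSession from the question.
val result = session.command("ALTER CLASS `mySuperClass` CUSTOM `myDescription` = \"my superclass\"")
result.close() // command() returns an OResultSet that should be closed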

Authenticate with ECE Elasticsearch Sink from Apache Flink (Scala code)

I get a compiler error when using the example provided in the Flink documentation. The Flink documentation provides sample Scala code for setting the REST client factory parameters when talking to Elasticsearch: https://ci.apache.org/projects/flink/flink-docs-stable/dev/connectors/elasticsearch.html
When trying out this code, I get a compiler error in IntelliJ which says "Cannot resolve symbol restClientBuilder".
I found the following SO question, which is exactly my problem, except that it is in Java and I am doing this in Scala:
Apache Flink (v1.6.0) authenticate Elasticsearch Sink (v6.4)
I tried copy-pasting the solution code from that question into IntelliJ; the auto-converted code also has compiler errors.
// provide a RestClientFactory for custom configuration on the internally created REST client
// I only show setMaxRetryTimeoutMillis for illustration purposes; the actual code will use an HTTP custom callback
esSinkBuilder.setRestClientFactory(
  restClientBuilder -> {
    restClientBuilder.setMaxRetryTimeoutMillis(10)
  }
)
Then I tried the following (Java code auto-converted to Scala by IntelliJ):
import org.apache.http.auth.AuthScope
import org.apache.http.auth.UsernamePasswordCredentials
import org.apache.http.client.CredentialsProvider
import org.apache.http.impl.client.BasicCredentialsProvider
import org.apache.http.impl.nio.client.HttpAsyncClientBuilder
import org.elasticsearch.client.RestClientBuilder
// provide a RestClientFactory for custom configuration on the internally created REST client
esSinkBuilder.setRestClientFactory((restClientBuilder) => {
  def foo(restClientBuilder) = restClientBuilder.setHttpClientConfigCallback(new RestClientBuilder.HttpClientConfigCallback() {
    override def customizeHttpClient(httpClientBuilder: HttpAsyncClientBuilder): HttpAsyncClientBuilder = {
      // elasticsearch username and password
      val credentialsProvider = new BasicCredentialsProvider
      credentialsProvider.setCredentials(AuthScope.ANY, new UsernamePasswordCredentials(es_user, es_password))
      httpClientBuilder.setDefaultCredentialsProvider(credentialsProvider)
    }
  })
  foo(restClientBuilder)
})
The original code snippet produces the error "cannot resolve RestClientFactory", and the Java-to-Scala conversion shows several other errors.
So basically I need to find a Scala version of the solution described in Apache Flink (v1.6.0) authenticate Elasticsearch Sink (v6.4).
Update 1: I was able to make some progress with some help from IntelliJ. The following code compiles and runs, but there is another problem.
esSinkBuilder.setRestClientFactory(
  new RestClientFactory {
    override def configureRestClientBuilder(restClientBuilder: RestClientBuilder): Unit = {
      restClientBuilder.setHttpClientConfigCallback(new RestClientBuilder.HttpClientConfigCallback() {
        override def customizeHttpClient(httpClientBuilder: HttpAsyncClientBuilder): HttpAsyncClientBuilder = {
          // elasticsearch username and password
          val credentialsProvider = new BasicCredentialsProvider
          credentialsProvider.setCredentials(AuthScope.ANY, new UsernamePasswordCredentials(es_user, es_password))
          httpClientBuilder.setDefaultCredentialsProvider(credentialsProvider)
          httpClientBuilder.setSSLContext(trustfulSslContext)
        }
      })
    }
  }
)
The problem is that I am not sure whether I should be creating the RestClientFactory with new. What happens is that the application connects to the Elasticsearch cluster but then discovers that the SSL cert is not valid, so I had to put in the trustfulSslContext (as described here: https://gist.github.com/iRevive/4a3c7cb96374da5da80d4538f3da17cb). This got me past the SSL issue, but now the ES REST client does a ping test, the ping fails, it throws an exception, and the app shuts down. I suspect the ping fails because of the SSL error, and maybe it is not using the trustfulSslContext I set up as part of new RestClientFactory. This makes me suspect that I should not have used new and that there should be a simple way to update the existing RestClientFactory object; basically, all of this is happening because of my lack of Scala knowledge.
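For what it's worth, the new is fine here: setRestClientFactory needs a RestClientFactory instance, and an anonymous class is one way to provide one. On Scala 2.12+ a function literal also works, since RestClientFactory has a single abstract method (a sketch under that assumption, with es_user and es_password as in the question):

esSinkBuilder.setRestClientFactory { restClientBuilder: RestClientBuilder =>
  // SAM conversion: this function literal becomes a RestClientFactory,
  // equivalent to the anonymous class in Update 1.
  restClientBuilder.setHttpClientConfigCallback(
    (httpClientBuilder: HttpAsyncClientBuilder) => {
      val credentialsProvider = new BasicCredentialsProvider
      credentialsProvider.setCredentials(
        AuthScope.ANY, new UsernamePasswordCredentials(es_user, es_password))
      httpClientBuilder.setDefaultCredentialsProvider(credentialsProvider)
    }
  )
}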
Happy to report that this is resolved. The code I posted in Update 1 is correct. The ping to ECE was not working for two reasons:
1. The certificate needs to include the complete chain: the root CA, the intermediate CA, and the cert for the ECE. This helped get rid of the whole trustfulSslContext stuff.
2. The ECE was sitting behind an ha-proxy, and the proxy mapped the hostname in the HTTP request to the actual deployment cluster name in ECE. This mapping logic did not take into account that the Java REST high-level client uses the org.apache.http.HttpHost class, which renders the hostname as hostname:port_number even when the port number is 443. Since the mapping was not found because of the 443, the ECE returned a 404 error instead of 200 OK (the only way to find this was to look at unencrypted packets at the ha-proxy). Once the mapping logic in ha-proxy was fixed, the mapping was found and the pings are now successful.
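A quick illustration of the host-string behaviour described in point 2 (my own snippet, not from the original post; the hostname is made up):

import org.apache.http.HttpHost

val host = new HttpHost("ece.example.com", 443, "https")
// The port is kept even though it is the default for https, so any proxy
// matching on the bare hostname will miss this value.
println(host.toHostString) // prints "ece.example.com:443"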

Starting KsqlRestApplication from Scala and getting NoSuchMethodError org.apache.kafka.streams.StreamsConfig.getConsumerConfigs

I am trying to write a program that enables me to run predefined KSQL operations on Kafka topics in Scala, but I don't want to open the KSQL CLI every time. Therefore I want to start the KSQL "server" from within my Scala program. If I understand the KSQL source code correctly, I have to build and start a KsqlRestApplication:
def restServer = KsqlRestApplication.buildApplication(
  new KsqlRestConfig(defaultServerProperties),
  true,
  new VersionCheckerAgent {
    override def start(ksqlModuleType: KsqlModuleType, properties: Properties): Unit = ???
  })
But when I try doing that, I get the following error:
Exception in thread "main" java.lang.NoSuchMethodError: org.apache.kafka.streams.StreamsConfig.getConsumerConfigs(Ljava/lang/String;Ljava/lang/String;)Ljava/util/Map;
at io.confluent.ksql.rest.server.BrokerCompatibilityCheck.create(BrokerCompatibilityCheck.java:62)
at io.confluent.ksql.rest.server.KsqlRestApplication.buildApplication(KsqlRestApplication.java:241)
I looked into BrokerCompatibilityCheck: its create function calls StreamsConfig.getConsumerConfigs() with two Strings as parameters, instead of the parameters defined in
https://kafka.apache.org/0102/javadoc/org/apache/kafka/streams/StreamsConfig.html#getConsumerConfigs(StreamThread,%20java.lang.String,%20java.lang.String)
Are my KSQL and Kafka versions simply incompatible, or am I doing something wrong?
I am using KSQL version 4.1.0-SNAPSHOT and Kafka version 1.0.0.
Yes, a NoSuchMethodError typically indicates a version incompatibility between libraries.
The link you posted is to the javadoc for Kafka 0.10.2. The method hasn't changed in 1.0, but in the upcoming 1.1 it indeed takes only two Strings:
https://kafka.apache.org/11/javadoc/org/apache/kafka/streams/StreamsConfig.html#getConsumerConfigs(java.lang.String,%20java.lang.String)
That suggests the version of KSQL you're using (4.1.0-SNAPSHOT) depends on version 1.1 of Kafka Streams, which is currently in the release-candidate phase and should, I believe, be out soon:
https://lists.apache.org/thread.html/780c4458b16590e99261b69d7b41b9ec374a3226d72c8d38885a008a#%3Cusers.kafka.apache.org%3E
As per that email, you can find the latest (1.1.0-rc2) artifacts in the Apache staging repo:
https://repository.apache.org/content/groups/staging/
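Until 1.1.0 is final, one way to line the versions up (a build.sbt sketch; the coordinates are assumed, not taken from the question) is to pull kafka-streams from that staging repo, so the two-String getConsumerConfigs overload is on the classpath:

// build.sbt: add the Apache staging resolver and pin kafka-streams to the staged 1.1.0
resolvers += "Apache Staging" at "https://repository.apache.org/content/groups/staging/"

libraryDependencies += "org.apache.kafka" % "kafka-streams" % "1.1.0"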

Code coverage on Play! project

I have a Play! project to which I would like to add some code-coverage information. So far I have tried JaCoCo and scct. The former has the problem that it is based on bytecode, so it gives warnings about missing tests for methods that are autogenerated by the Scala compiler, such as copy or canEqual. scct seems a better option, but with both I get many errors during tests.
Let me stick with scct. I essentially get errors for every test that tries to connect to the database. Many of my tests load fixtures into an in-memory H2 database and then make some assertions. My Global.scala contains
override def onStart(app: Application) {
  SessionFactory.concreteFactory = Some(() => connection)

  def connection() = {
    Session.create(DB.getConnection()(app), new MySQLInnoDBAdapter)
  }
}
while the tests are usually enclosed in a block like
class MySpec extends Specification {
  def app = FakeApplication(additionalConfiguration = inMemoryDatabase())

  "The models" should {
    "be five" in running(app) {
      Fixtures.load()
      MyModels.all.size should be_==(5)
    }
  }
}
The running(app) line lets me run a test in the context of a working application connected to an in-memory database, at least usually. But when I run code-coverage tasks, such as scct coverage:doc, I get a lot of errors related to connecting to the database.
What is even weirder is that there are at least 4 different errors, such as:
ObjectExistsException: Cache play already exists
SQLException: Attempting to obtain a connection from a pool that has already been shutdown
Configuration error [Cannot connect to database [default]]
No suitable driver found for jdbc:h2:mem:play-test--410454547
Why is it that launching tests in the default configuration can connect to the database, while running in the context of scct (or JaCoCo) fails to initialize the cache and the DB?
specs2 tests run in parallel by default. Play disables parallel execution for the standard unit-test configuration, but scct uses a different configuration, so it doesn't know not to run in parallel.
Try adding this to your Build.scala:
.settings(parallelExecution in ScctPlugin.ScctTest := false)
Alternatively, you can add sequential at the beginning of your test classes to force all possible run configurations to run sequentially. I've still got both in my files, as I think I had some problems with the Build.scala solution at one point, when I was using an early release candidate of Play.
A better option for Scala code coverage is Scoverage, which gives statement-level coverage.
https://github.com/scoverage/scalac-scoverage-plugin
Add to project/plugins.sbt:
addSbtPlugin("com.sksamuel.scoverage" % "sbt-scoverage" % "1.0.1")
Then run SBT with
sbt clean coverage test
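Depending on the plugin version, report generation may be a separate task; if no report appears after the test run, this sequence (based on later sbt-scoverage releases, so treat it as an assumption for 1.0.1) generates it explicitly:

sbt clean coverage test coverageReport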
You need to add sequential at the beginning of your Specification:
class MySpec extends Specification {
  sequential

  "MyApp" should {
    //...//
  }
}

How do I turn off the Scala Fast Compilation server's (FSC) timeout?

I am using a Scala compilation server. This is probably not related to my IDE, IntelliJ IDEA, but I will just mention that I start the Scala compilation server through a special run configuration in that IDE.
After some time goes by without compiling anything, the compilation server terminates without any message. Usually I only notice this when I try to compile something and compilation fails. Then I need to start the compilation server again, and of course the next compilation takes a long time, because it's once more the first compilation since the server was started.
How do I turn that timeout off? I looked at the man page for scalac, and there seems to be no option for it. I can add VM options to that run configuration.
Pass -max-idle 0 as a parameter. It works on a very (very!) recent nightly, and it should be available in Scala 2.9.0 when it comes out. However, there's no guarantee the name won't change before then.
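For example, on a build that has the flag (the source file name is just an illustration):

fsc -max-idle 0 MyApp.scala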
I don't think you can. Here is a code snippet from the compilation server:
object SocketServer {
  // After 30 idle minutes, politely exit.
  // Should the port file disappear, and the clients
  // therefore unable to contact this server instance,
  // the process will just eventually terminate by itself.
  val IdleTimeout = 1800000
  val BufferSize = 10240

  def bufferedReader(s: Socket) = new BufferedReader(new InputStreamReader(s.getInputStream()))
  def bufferedOutput(s: Socket) = new BufferedOutputStream(s.getOutputStream, BufferSize)
}
I think you should open a feature request at scala-lang.org.