Scala: no configuration setting found for key

I can extract the value for
val ranking_score_path = cf.getString(stg + ".input.path.ranking_score")
.replaceAll("_replace_date_", this_date)
and
val output_path = cf.getString(stg + ".output.path.hdfs") + tomz_date + "/"
but not
val AS_HOST = cf.getString(stg + ".output.path.aerospike.host")
println("AS_HOST = " + AS_HOST)
I have tried replacing . with _ and adding commas, but it didn't work.
Error log:
Exception in thread "main" com.typesafe.config.ConfigException$Missing: No configuration setting found for key 'production.output.path.aerospike'
at com.typesafe.config.impl.SimpleConfig.findKeyOrNull(SimpleConfig.java:152)
at ...
application.conf
production {
  input {
    path {
      local = "/home/aduser/tmp/"
      hdfs = "/user/aduser/tmp_vincent/CPA/_replace_date_/intermediate/l1/"
      ranking_score = "/home/aduser/plt/item_performance/pipeline/cpa/output/_replace_date_/predict_output/ranking_score.csv"
    }
  }
  output {
    path {
      local = "/home/aduser/tmp/"
      hdfs = "/user/aduser/dyson/display/"
      aerospike {
        host = "0.0.0.0"
        port = 3000
        namespace = "test"
        set = "spark-test2"
      }
    }
  }
}
Reply to Comment #1
The cf config is very long, but the important part is as follows:
... ore.csv"}},"output":{"path":{"hdfs":"/user/aduser/dyson/display/","local":"/home/aduser/tmp/"}}},"sun":{"arch": ...
Effort #1: I replaced part of the application.conf with
path {
  local = "/home/aduser/tmp/"
  hdfs = "/user/aduser/dyson/display/"
  ae_host = "0.0.0.0"
  ae_port = 3000
  ae_namespace = "test"
  ae_set = "spark-test2"
}
and changed the calling code accordingly:
val AS_HOST = cf.getString(stg + ".output.path.ae_host")
println("AS_HOST = " + AS_HOST)
but I am still getting an error:
Exception in thread "main" com.typesafe.config.ConfigException$Missing: No configuration setting found for key 'production.output.path.ae_host'
at com.typesafe.config.impl.SimpleConfig.findKeyOrNull(SimpleConfig.java:152)

I found the problem after several hours: my application.conf was at the same level as src/main/scala, and it worked only partially for reasons I don't understand. It worked perfectly after creating src/main/resources and putting application.conf inside it.
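For completeness, a minimal sketch of the loading path that now works. The object name is made up for illustration; the stg prefix is assumed to be "production", matching the error message, and the keys match the application.conf above:

import com.typesafe.config.{Config, ConfigFactory}

object ConfigSmokeTest {
  def main(args: Array[String]): Unit = {
    // ConfigFactory.load() reads application.conf from the classpath,
    // i.e. from src/main/resources once the file is placed there.
    val cf: Config = ConfigFactory.load()
    val stg = "production" // assumed stage prefix, as in the error message

    val asHost = cf.getString(stg + ".output.path.aerospike.host")
    val asPort = cf.getInt(stg + ".output.path.aerospike.port")
    println(s"AS_HOST = $asHost, AS_PORT = $asPort")
  }
}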

Related

Flink SQL Client connect to secured kafka cluster

I want to execute a query on a Flink SQL table backed by a Kafka topic in a secured Kafka cluster. I'm able to execute the query programmatically, but unable to do the same through the Flink SQL client. I'm not sure how to pass the JAAS config (java.security.auth.login.config) and other system properties through the Flink SQL client.
Flink SQL query programmatically
private static void simpleExec_auth() {
    // Create the execution environment.
    final EnvironmentSettings settings = EnvironmentSettings.newInstance()
            .inStreamingMode()
            .withBuiltInCatalogName("default_catalog")
            .withBuiltInDatabaseName("default_database")
            .build();

    System.setProperty("java.security.auth.login.config", "client_jaas.conf");
    System.setProperty("sun.security.jgss.native", "true");
    System.setProperty("sun.security.jgss.lib", "/usr/libexec/libgsswrap.so");
    System.setProperty("javax.security.auth.useSubjectCredsOnly", "false");

    TableEnvironment tableEnvironment = TableEnvironment.create(settings);
    String createQuery = "CREATE TABLE test_flink11 ( "
            + "`keyid` STRING, "
            + "`id` STRING, "
            + "`name` STRING, "
            + "`age` INT, "
            + "`color` STRING, "
            + "`rowtime` TIMESTAMP(3) METADATA FROM 'timestamp', "
            + "`proctime` AS PROCTIME(), "
            + "`address` STRING) "
            + "WITH ( "
            + "'connector' = 'kafka', "
            + "'topic' = 'test_flink10', "
            + "'scan.startup.mode' = 'latest-offset', "
            + "'properties.bootstrap.servers' = 'kafka01.nyc.com:9092', "
            + "'value.format' = 'avro-confluent', "
            + "'key.format' = 'avro-confluent', "
            + "'key.fields' = 'keyid', "
            + "'value.fields-include' = 'EXCEPT_KEY', "
            + "'properties.security.protocol' = 'SASL_PLAINTEXT', "
            + "'properties.sasl.kerberos.service.name' = 'kafka', "
            + "'properties.sasl.kerberos.kinit.cmd' = '/usr/local/bin/skinit --quiet', "
            + "'properties.sasl.mechanism' = 'GSSAPI', "
            + "'key.avro-confluent.schema-registry.url' = 'http://kafka-schema-registry:5037', "
            + "'key.avro-confluent.schema-registry.subject' = 'test_flink6', "
            + "'value.avro-confluent.schema-registry.url' = 'http://kafka-schema-registry:5037', "
            + "'value.avro-confluent.schema-registry.subject' = 'test_flink4')";
    System.out.println(createQuery);
    tableEnvironment.executeSql(createQuery);

    TableResult result = tableEnvironment
            .executeSql("SELECT name,rowtime FROM test_flink11");
    result.print();
}
This is working fine.
Flink SQL query through SQL client
Running this gives the following error.
Flink SQL> CREATE TABLE test_flink11 (`keyid` STRING,`id` STRING,`name` STRING,`address` STRING,`age` INT,`color` STRING) WITH('connector' = 'kafka', 'topic' = 'test_flink10','scan.startup.mode' = 'earliest-offset','properties.bootstrap.servers' = 'kafka01.nyc.com:9092','value.format' = 'avro-confluent','key.format' = 'avro-confluent','key.fields' = 'keyid', 'value.avro-confluent.schema-registry.url' = 'http://kafka-schema-registry:5037', 'value.avro-confluent.schema-registry.subject' = 'test_flink4', 'value.fields-include' = 'EXCEPT_KEY', 'key.avro-confluent.schema-registry.url' = 'http://kafka-schema-registry:5037', 'key.avro-confluent.schema-registry.subject' = 'test_flink6', 'properties.security.protocol' = 'SASL_PLAINTEXT', 'properties.sasl.kerberos.service.name' = 'kafka', 'properties.sasl.kerberos.kinit.cmd' = '/usr/local/bin/skinit --quiet', 'properties.sasl.mechanism' = 'GSSAPI');
Flink SQL> select * from test_flink11;
[ERROR] Could not execute SQL statement. Reason:
java.lang.IllegalArgumentException: Could not find a 'KafkaClient' entry in the JAAS configuration. System property 'java.security.auth.login.config' is /tmp/jaas-6309821891889949793.conf
There is nothing in /tmp/jaas-6309821891889949793.conf except the following comment
# We are using this file as an workaround for the Kafka and ZK SASL implementation
# since they explicitly look for java.security.auth.login.config property
# Please do not edit/delete this file - See FLINK-3929
SQL client run command
bin/sql-client.sh embedded --jar flink-sql-connector-kafka_2.11-1.12.0.jar --jar flink-sql-avro-confluent-registry-1.12.0.jar
Flink cluster command
bin/start-cluster.sh
How do I pass java.security.auth.login.config and the other system properties (that I'm setting in the above Java code snippet) to the SQL client?
flink-conf.yaml
# Option 1: use the Kerberos ticket cache
security.kerberos.login.use-ticket-cache: true
security.kerberos.login.principal: XXXXX#HADOOP.COM

# Option 2: use a keytab instead
security.kerberos.login.use-ticket-cache: false
security.kerberos.login.keytab: /path/to/kafka.keytab
security.kerberos.login.principal: XXXX#HADOOP.COM

security.kerberos.login.contexts: Client,KafkaClient
I haven't actually tested whether this solution works, but you can try it out; I hope it helps.

java.lang.NoClassDefFoundError: org/apache/flink/streaming/api/scala/StreamExecutionEnvironment

package com.knoldus

import org.apache.flink.api.java.utils.ParameterTool
import org.apache.flink.streaming.api.scala._
import org.apache.flink.streaming.api.windowing.time.Time

object SocketWindowWordCount {

  def main(args: Array[String]): Unit = {

    var hostname: String = "localhost"
    var port: Int = 9000

    try {
      val params = ParameterTool.fromArgs(args)
      hostname = if (params.has("hostname")) params.get("hostname") else "localhost"
      port = params.getInt("port")
    } catch {
      case e: Exception => {
        System.err.println("No port specified. Please run 'SocketWindowWordCount " +
          "--hostname <hostname> --port <port>', where hostname (localhost by default) and port " +
          "is the address of the text server")
        System.err.println("To start a simple text server, run 'netcat -l <port>' " +
          "and type the input text into the command line")
        return
      }
    }

    // get the execution environment
    val env: StreamExecutionEnvironment = StreamExecutionEnvironment.getExecutionEnvironment

    // get input data by connecting to the socket
    val text: DataStream[String] = env.socketTextStream(hostname, port, '\n')

    // parse the data, group it, window it, and aggregate the counts
    val windowCounts = text
      .flatMap { w => w.split("\\s") }
      .map { w => WordWithCount(w, 1) }
      .keyBy("word")
      .timeWindow(Time.seconds(5))
      .sum("count")

    // print the results with a single thread, rather than in parallel
    windowCounts.print().setParallelism(1)

    env.execute("Socket Window WordCount")
  }

  /** Data type for words with count */
  case class WordWithCount(word: String, count: Long)
}
When running this code in IntelliJ, I get this error:
Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/flink/streaming/api/scala/StreamExecutionEnvironment$
at com.knoldus.SocketWindowWordCount$.main(SocketWindowWordCount.scala:43)
at com.knoldus.SocketWindowWordCount.main(SocketWindowWordCount.scala)
Caused by: java.lang.ClassNotFoundException: org.apache.flink.streaming.api.scala.StreamExecutionEnvironment$
at java.base/jdk.internal.loader.BuiltinClassLoader.loadClass(BuiltinClassLoader.java:581)
at java.base/jdk.internal.loader.ClassLoaders$AppClassLoader.loadClass(ClassLoaders.java:178)
at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:521)
... 2 more
Select the menu item "Run" => "Edit Configurations...",
then in the "Build and run" section select "Modify options" => Java => "Add dependencies with 'Provided' scope to classpath" in your local run configuration.
This way you don't have to remove the <scope>provided</scope>.
I resolved this by removing the <scope>provided</scope> that was present in the Maven dependency.
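If the project is built with sbt rather than Maven, a similar trick is to keep the Flink dependencies as Provided for the fat jar but put them back on the classpath for local runs. A minimal sketch (the dependency name and version are illustrative, not taken from the question):

// build.sbt (sketch): keep Flink deps Provided for packaging,
// but include them when running from sbt or the IDE.
libraryDependencies ++= Seq(
  "org.apache.flink" %% "flink-streaming-scala" % "1.12.0" % Provided
)

// Make `sbt run` use the full classpath, including Provided dependencies.
Compile / run := Defaults.runTask(
  Compile / fullClasspath,
  Compile / run / mainClass,
  Compile / run / runner
).evaluated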

SSLHandshakeException happens during file upload to AWS S3 via Alpakka

I'm trying to set up Alpakka S3 for file uploads. Here are my configs:
Alpakka S3 dependency:
...
"com.lightbend.akka" %% "akka-stream-alpakka-s3" % "0.20"
...
Here is application.conf:
akka.stream.alpakka.s3 {
  buffer = "memory"
  proxy {
    host = ""
    port = 8000
    secure = true
  }
  aws {
    credentials {
      provider = default
    }
  }
  path-style-access = false
  list-bucket-api-version = 2
}
File upload code example:
private val awsCredentials = new BasicAWSCredentials("my_key", "my_secret_key")
private val awsCredentialsProvider = new AWSStaticCredentialsProvider(awsCredentials)
private val regionProvider = new AwsRegionProvider { def getRegion: String = "us-east-1" }
private val settings = new S3Settings(MemoryBufferType, None, awsCredentialsProvider, regionProvider, false, None, ListBucketVersion2)
private val s3Client = new S3Client(settings)(system, materializer)

val fileSource = Source.single(ByteString("ololo blabla bla"))
val fileName = UUID.randomUUID().toString
val s3Sink: Sink[ByteString, Future[MultipartUploadResult]] = s3Client.multipartUpload("my_basket", fileName)

fileSource.runWith(s3Sink)
  .map { result =>
    println(s"${result.location}")
  } recover {
    case ex: Exception => println(s"$ex")
  }
When I run this code I get:
javax.net.ssl.SSLHandshakeException: General SSLEngine problem
What could be the reason?
The certificate problem arises for bucket names containing dots.
You may switch to akka.stream.alpakka.s3.path-style-access = true to get rid of this.
We're considering making it the default: https://github.com/akka/alpakka/issues/1152
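If you need to set this in code rather than in application.conf, a minimal sketch, assuming the Boolean argument in the question's S3Settings constructor is pathStyleAccess (it is false in the snippet above), would be:

// Sketch: same settings as in the question, but with path-style access enabled,
// so the bucket name ends up in the URL path instead of the TLS hostname.
private val settings = new S3Settings(
  MemoryBufferType,
  None,                   // proxy
  awsCredentialsProvider, // from the question's snippet
  regionProvider,         // from the question's snippet
  true,                   // pathStyleAccess (was false)
  None,                   // endpoint URL
  ListBucketVersion2
)
private val s3Client = new S3Client(settings)(system, materializer)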

How do I reference a config file in Scala WorkSheet and Akka?

I'm trying to create a simple Akka system in a Scala worksheet, but each time I try, I get this error:
com.typesafe.config.ConfigException$Missing: No configuration setting found for key 'akka'
at com.typesafe.config.impl.SimpleConfig.findKeyOrNull(tsets.sc:148)
at com.typesafe.config.impl.SimpleConfig.findKey(tsets.sc:141)
at com.typesafe.config.impl.SimpleConfig.findOrNull(tsets.sc:168)
at com.typesafe.config.impl.SimpleConfig.find(tsets.sc:180)
at com.typesafe.config.impl.SimpleConfig.find(tsets.sc:185)
at com.typesafe.config.impl.SimpleConfig.getString(tsets.sc:242)
at akka.actor.ActorSystem$Settings.<init>(tsets.sc:166)
at akka.actor.ActorSystemImpl.<init>(tsets.sc:533)
at akka.actor.ActorSystem$.apply(tsets.sc:139)
at akka.actor.ActorSystem$.apply(tsets.sc:116)
at #worksheet#.system$lzycompute(tsets.sc:9)
at #worksheet#.system(tsets.sc:9)
at #worksheet#.get$$instance$$system(tsets.sc:9)
at A$A9$.main(tsets.sc:35)
at #worksheet#.#worksheet#(tsets.sc)
This happens even though the Scala worksheet is in the exact same directory as the application.conf file.
The akka entry in the file looks something like this:
akka {
  loglevel = ??
  actor {
    provider = ??
  }
  remote {
    log-remote-lifecycle-events = ??
    enabled-transports = ??
    netty.tcp {
      hostname = ??
      port = ??
      maximum-frame-size = ??
      send-buffer-size = ??
      receive-buffer-size = ??
    }
  }
  cluster {
    roles = ["frontend"]
    seed-nodes = ??
    use-dispatcher = ??
    failure-detector {
      threshold = ??
      acceptable-heartbeat-pause = ??
      heartbeat-interval = ??
      heartbeat-request {
        expected-response-after = ??
      }
    }
  }
}
I have even tried adding this to build.sbt:
assemblyMergeStrategy in assembly := {
  case PathList("application.conf") => MergeStrategy.concat
}
One possible approach is calling ConfigFactory.parseFile followed by resolve() (needed if you have substitutions like receive-buffer-size = ${send-buffer-size} in your config file):
import java.io.File
import com.typesafe.config.ConfigFactory

val confPath = getClass.getResource("/application.conf")
val config = ConfigFactory.parseFile(new File(confPath.getPath)).resolve()
config.getString("akka.loglevel")
Your application.conf file should be in the resources folder, and your worksheet can be placed in the scala folder.
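Putting the two together, a minimal sketch (assuming application.conf is on the classpath under src/main/resources) that feeds the loaded config into an ActorSystem looks like this; the system name is illustrative:

import akka.actor.ActorSystem
import com.typesafe.config.ConfigFactory

// load() reads application.conf from the classpath (src/main/resources)
val config = ConfigFactory.load()
println(config.getString("akka.loglevel"))

// Pass the config explicitly so the worksheet does not depend on defaults.
val system = ActorSystem("worksheet-system", config)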
Update
If you use IntelliJ IDEA, there is a workaround for this:
Go to File > Settings > Languages & Frameworks > Scala > Worksheet,
untick "Run config in the compiler process" and press "OK".
Then you can use ConfigFactory.load() freely.
I used IDEA version 2018.1.5.

com.typesafe.config.ConfigException$NotResolved: has not been resolved,

I am trying to read the following config file using Typesafe Config:
common = {
  jdbcDriver = "com.mysql.jdbc.Driver"
  slickDriver = "slick.driver.MySQLDriver"
  port = 3306
  db = "foo"
  user = "bar"
  password = "baz"
}
source = ${common} {server = "remoteserver"}
target = ${common} {server = "localserver"}
When I try to read my config using this code
val conf = ConfigFactory.parseFile(new File("src/main/resources/application.conf"))
val username = conf.getString("source.user")
I get an error
com.typesafe.config.ConfigException$NotResolved: source.user has not been resolved, you need to call Config#resolve(), see API docs for Config#resolve()
I don't get any error if I put everything inside the "source" or "target" blocks; I get errors only when I try to use "common".
I solved it myself: calling resolve() after parseFile fixes it.
ConfigFactory.parseFile(new File("src/main/resources/application.conf")).resolve()
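A slightly fuller sketch of the same fix, with imports, reading both the source and target blocks from the config shown above:

import java.io.File
import com.typesafe.config.ConfigFactory

// parseFile does not resolve ${common} substitutions on its own; resolve() does.
val conf = ConfigFactory.parseFile(new File("src/main/resources/application.conf")).resolve()

val username = conf.getString("source.user")       // "bar", inherited from common
val sourceServer = conf.getString("source.server") // "remoteserver"
val targetServer = conf.getString("target.server") // "localserver"
println(s"$username @ $sourceServer -> $targetServer")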
I solved it with:
Config confSwitchEnv = ConfigFactory.load("env.conf");
The env.conf file is in the resources directory. Unlike parseFile, ConfigFactory.load resolves substitutions automatically.
Reference: https://nicedoc.io/lightbend/config