I'm trying to produce some messages to a Kafka topic using the library zio-kafka, version 0.15.0.
Clearly, my comprehension of the ZIO ecosystem is still suboptimal, because I cannot even produce a simple message. My program is the following:
object KafkaProducerExample extends zio.App {
  val producerSettings: ProducerSettings = ProducerSettings(List("localhost:9092"))

  val producer: ZLayer[Blocking, Throwable, Producer[Nothing, String, String]] =
    ZLayer.fromManaged(Producer.make(producerSettings, Serde.string, Serde.string))

  val effect: RIO[Nothing with Producer[Nothing, String, String], RecordMetadata] =
    Producer.produce("topic", "key", "value")

  override def run(args: List[String]): URIO[zio.ZEnv, ExitCode] = {
    effect.provideSomeLayer(producer).exitCode
  }
}
The compiler gives me the following error:
[error] KafkaProducerExample.scala:19:28: Cannot prove that zio.blocking.Blocking with zio.Has[zio.kafka.producer.Producer.Service[Nothing,String,String]] <:< Nothing with zio.kafka.producer.Producer[Nothing,String,String].
[error] effect.provideSomeLayer(producer).exitCode
[error] ^
[error] one error found
Can anyone help me in understanding what's going on?
OK, it turned out that ZIO requires some hints about the types during the creation of the producer layer:
val producer: ZLayer[Blocking, Throwable, Producer[Any, String, String]] =
  ZLayer.fromManaged(Producer.make[Any, String, String](producerSettings, Serde.string, Serde.string))
When calling the make smart constructor, we have to give it the types we want to use. The first represents the environment needed to build the key and value serializers, while the last two are the types of the messages' keys and values.
In this case, we need no environment at all to build the two serializers, so we pass Any.
Finally, the Producer.produce function also requires some type hints:
val effect: RIO[Producer[Any, String, String], RecordMetadata] =
  Producer.produce[Any, String, String]("topic", "key", "value")
After doing the above changes, the types perfectly align, and the compiler is happy again.
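Putting it all together, the complete program then looks like this (a minimal sketch; the import list assumes zio-kafka 0.15.0 and ZIO 1.x, as in the question):

import org.apache.kafka.clients.producer.RecordMetadata
import zio._
import zio.blocking.Blocking
import zio.kafka.producer.{Producer, ProducerSettings}
import zio.kafka.serde.Serde

object KafkaProducerExample extends zio.App {
  val producerSettings: ProducerSettings = ProducerSettings(List("localhost:9092"))

  // Any: no environment is needed to build the two string serdes
  val producer: ZLayer[Blocking, Throwable, Producer[Any, String, String]] =
    ZLayer.fromManaged(Producer.make[Any, String, String](producerSettings, Serde.string, Serde.string))

  val effect: RIO[Producer[Any, String, String], RecordMetadata] =
    Producer.produce[Any, String, String]("topic", "key", "value")

  override def run(args: List[String]): URIO[zio.ZEnv, ExitCode] =
    effect.provideSomeLayer(producer).exitCode
}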
Related
I just started learning the window functions in Flink.
I have a custom source that produces numbers, and my objective is to calculate the sum of the even numbers and the odd numbers separately.
Below is the code (Flink 1.12, Scala 2.11.8).
object ProcessWindowFunc {
  def main(args: Array[String]): Unit = {
    val env = StreamExecutionEnvironment.getExecutionEnvironment
    val source = env.addSource(new CustomSource())
    source
      .keyBy(x => x % 2)
      .window(TumblingProcessingTimeWindows.of(Time.seconds(5))) // error message here
      .reduce(_ + _)
      .print()
    env.execute("sum")
  }
}
class CustomSource extends SourceFunction[Int] {
  var running = true
  var count = 0

  override def run(ctx: SourceFunction.SourceContext[Int]): Unit = {
    while (running) {
      ctx.collect(count)
      count += 1
      Thread.sleep(800)
    }
  }

  override def cancel(): Unit = {
    this.running = false
  }
}
It fails to build and the console output is below.
type mismatch;
found: org.apache.flink.streaming.api.windowing.assigners.TumblingProcessingTimeWindows
required: org.apache.flink.streaming.api.windowing.assigners.WindowAssigner[_ >: Int, ?]
Note: Object <: Any (and org.apache.flink.streaming.api.windowing.assigners.TumblingProcessingTimeWindows <: org.apache.flink.streaming.api.windowing.assigners.WindowAssigner[Object,org.apache.flink.streaming.api.windowing.windows.TimeWindow]),
but Java-defined class WindowAssigner is invariant in type T.
You may wish to investigate a wildcard type such as `_ <: Any`. (SLS 3.2.10)
.window(TumblingProcessingTimeWindows.of(Time.seconds(5)))
I don't quite understand the error message here. I tried to implement the same logic in Java and it works.
Thus, I guess there may be a conflict between Java and Scala generics, but I still don't know how to solve the problem.
Any help and tips are appreciated! This question has confused me for a whole day.
My first guess is that you don't have
import org.apache.flink.streaming.api.scala._
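That wildcard import brings in the Scala DataStream API and the implicit TypeInformation instances it relies on, and forgetting it is a very common source of confusing type errors in Flink Scala programs. As a sketch, the import block of the program would then look roughly like this (the extra imports are inferred from the class names used in your snippet, assuming Flink 1.12):

  import org.apache.flink.streaming.api.scala._
  import org.apache.flink.streaming.api.functions.source.SourceFunction
  import org.apache.flink.streaming.api.windowing.assigners.TumblingProcessingTimeWindows
  import org.apache.flink.streaming.api.windowing.time.Time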
I am working on a Kafka Streams implementation of a word counter in Scala, in which I extended Transformer:
class WordCounter extends Transformer[String, String, (String, Long)]
It is then called in the stream as follows:
val counter: KStream[String, Long] = filtered_record.transform(new WordCounter, "count")
However, I am getting the error below when running my program via sbt:
[error] required: org.apache.kafka.streams.kstream.TransformerSupplier[String,String,org.apache.kafka.streams.KeyValue[String,Long]]
I can't seem to figure out how to fix it, and could not find any appropriate Kafka example of a similar implementation.
Anyone got any idea of what I am doing wrong?
The signature of transform() is:
def transform[K1, V1](transformerSupplier: TransformerSupplier[K, V, KeyValue[K1, V1]],
                      stateStoreNames: String*): KStream[K1, V1]
Thus, transform() takes a TransformerSupplier as its first argument, not a Transformer.
See also the Javadocs.
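A minimal sketch of what the call could look like, assuming WordCounter is adjusted to implement Transformer[String, String, KeyValue[String, Long]] (a KeyValue rather than a Scala tuple, which is what the error message also asks for):

import org.apache.kafka.streams.KeyValue
import org.apache.kafka.streams.kstream.{KStream, Transformer, TransformerSupplier}

// Supply a fresh Transformer instance per task, as the API expects
val wordCounterSupplier: TransformerSupplier[String, String, KeyValue[String, Long]] =
  new TransformerSupplier[String, String, KeyValue[String, Long]] {
    override def get(): Transformer[String, String, KeyValue[String, Long]] = new WordCounter
  }

val counter: KStream[String, Long] =
  filtered_record.transform(wordCounterSupplier, "count")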
I'm using the latest SJS version (master) and the application extends SparkHiveJob. In the runJob implementation, I have the following:
val eDF1 = hive.applySchema(rowRDD1, schema)
I would like to persist eDF1, so I tried the following:
val rdd_topersist = namedObjects.getOrElseCreate("cleanedDF1", {
  NamedDataFrame(eDF1, true, StorageLevel.MEMORY_ONLY)
})
which produces the following compile errors:
could not find implicit value for parameter persister: spark.jobserver.NamedObjectPersister[spark.jobserver.NamedDataFrame]
not enough arguments for method getOrElseCreate: (implicit timeout:scala.concurrent.duration.FiniteDuration, implicit persister:spark.jobserver.NamedObjectPersister[spark.jobserver.NamedDataFrame])spark.jobserver.NamedDataFrame. Unspecified value parameter persister.
Obviously something is wrong, but I can't figure out what. I'm fairly new to Scala.
Can someone help me understand this syntax from NamedObjectSupport?
def getOrElseCreate[O <: NamedObject](name: String, objGen: => O)
                                     (implicit timeout: FiniteDuration = defaultTimeout,
                                      persister: NamedObjectPersister[O]): O
I think you need to define an implicit persister. Looking at the test code, I see something like this:
https://github.com/spark-jobserver/spark-jobserver/blob/ea34a8f3e3c90af27aa87a165934d5eb4ea94dee/job-server-extras/test/spark.jobserver/NamedObjectsSpec.scala#L20
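Adapted to your snippet, a minimal sketch could look like the following. The DataFramePersister class name and its package are taken from that linked test in job-server-extras and may differ in your SJS version:

import org.apache.spark.storage.StorageLevel
import spark.jobserver.{DataFramePersister, NamedDataFrame, NamedObjectPersister}

// Bring an implicit persister for NamedDataFrame into scope so that
// getOrElseCreate can resolve its implicit `persister` parameter
implicit val dataFramePersister: NamedObjectPersister[NamedDataFrame] = new DataFramePersister

val rdd_topersist = namedObjects.getOrElseCreate("cleanedDF1", {
  NamedDataFrame(eDF1, true, StorageLevel.MEMORY_ONLY)
})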
I'm using the 0.9 Kafka Java client in Scala.
scala> val kafkaProducer = new KafkaProducer[String, String](props)
ProducerRecord has several constructors that allow you to include or not include a key and/or partition.
scala> val keyedRecord = new ProducerRecord("topic", "key", "value")
scala> kafkaProducer.send(keyedRecord)
This works with no problem.
However, an unkeyed ProducerRecord gives a type error.
scala> val unkeyedRecord = new ProducerRecord("topic", "value")
res8: org.apache.kafka.clients.producer.ProducerRecord[Nothing,String] =
ProducerRecord(topic=topic, partition=null, key=null, value=value)
scala> kafkaProducer.send(res8)
<console>:17: error: type mismatch;
found : org.apache.kafka.clients.producer.ProducerRecord[Nothing,String]
required: org.apache.kafka.clients.producer.ProducerRecord[String,String]
Note: Nothing <: String, but Java-defined class ProducerRecord is invariant in type K.
You may wish to investigate a wildcard type such as `_ <: String`. (SLS 3.2.10)
kafkaProducer.send(res8)
^
Is this against Kafka's rules or could it be an unnecessary precaution that has come from using this Java API in Scala?
More fundamentally, is it poor form to put keyed and unkeyed messages in the same Kafka topic?
Thank you
Javadoc: http://kafka.apache.org/090/javadoc/org/apache/kafka/clients/producer/package-summary.html
Edit
Could changing the variance of parameter K in KafkaProducer fix this?
It looks like the answer is in the comments, but to spell it out, Scala uses type inference when types are not explicitly provided. Since you wrote:
val unkeyedRecord = new ProducerRecord("topic", "value")
The key is not provided, so nothing constrains the key type parameter K, and Scala infers it as Nothing (the bottom type). To fix that, declare the types explicitly:
val unkeyedRecord = new ProducerRecord[String,String]("topic", "value")
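For completeness, a minimal sketch of the whole flow with the explicit type parameters. The broker address and serializer settings here are placeholder assumptions, not taken from your setup:

import java.util.Properties
import org.apache.kafka.clients.producer.{KafkaProducer, ProducerRecord}

val props = new Properties()
props.put("bootstrap.servers", "localhost:9092")
props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer")
props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer")

val kafkaProducer = new KafkaProducer[String, String](props)

// K stays pinned to String even though the key itself is absent (null)
val unkeyedRecord = new ProducerRecord[String, String]("topic", "value")
kafkaProducer.send(unkeyedRecord)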
Using Play 2.4 ScalaWS. I've defined a method that takes a type parameter T (with a Manifest) and performs a GET request to an external API. The problem is that it won't compile, because there isn't an implicit Reads for parsing the JSON.
Here's the code:
def myGet[T](path: String)(implicit m: Manifest[T]): Future[Either[model.MyError, T]] = {
  val url = MY_HOST + "/" + path
  ws
    .url(url)
    .withHeaders(myHeaders: _*)
    .get()
    .map { response =>
      try {
        Right(response.json.as[T])
      } catch {
        // check if this response was an error instead
        case _: Exception => Left(response.json.as[model.MyError])
      }
    }
}
The compilation error is specifically:
Compilation error[No Json deserializer found for type T. Try to implement an implicit Reads or Format for this type.]
I'm not sure of the simplest way to do this. Thanks for your help.
Edit
I also tried (implicit m: Manifest[T], reads: Reads[T]) with no luck.
It turns out that using (implicit m: Manifest[T], readsT: Reads[T]) and taking the Reads as an implicit parameter was the correct way of doing this. I also had to run sbt clean, since something was improperly cached in the incremental compiler.
It now works just fine.
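For reference, a minimal sketch of the corrected method: same body as above, just with the extra implicit Reads[T] parameter. Callers then need an implicit Reads for their concrete type in scope. The call-site type below is a hypothetical example, not part of the original code:

def myGet[T](path: String)(implicit m: Manifest[T], readsT: Reads[T]): Future[Either[model.MyError, T]] = {
  val url = MY_HOST + "/" + path
  ws.url(url)
    .withHeaders(myHeaders: _*)
    .get()
    .map { response =>
      try {
        Right(response.json.as[T])
      } catch {
        case _: Exception => Left(response.json.as[model.MyError])
      }
    }
}

// Call site: needs an implicit Reads[Foo] in scope, e.g. defined via Json.reads[Foo]
// myGet[Foo]("some/path")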