I am trying to use Camel's sql: component and have googled myself into this:
import org.apache.commons.dbcp2.BasicDataSource
import org.apache.camel.impl.{SimpleRegistry, DefaultCamelContext}
object CamelApplication {
  val jdbcUrl = "jdbc:mysql://host:3306"
  val user = "test"
  val password = "secret"
  val driverClass = "com.mysql.jdbc.Driver"

  // code to create data source here
  val ds = new BasicDataSource
  ds.setUrl(jdbcUrl)
  ds.setUsername(user)
  ds.setPassword(password)
  ds.setDriverClassName(driverClass)

  val registry = new SimpleRegistry
  registry.put("dataSource", ds)

  def main(args: Array[String]) = {
    val context = new DefaultCamelContext(registry)
    context.setUseMDCLogging(true)
    context.addRoutes(new DlrToDb)
    context.start()
    Thread.currentThread.join()
  }
}
and my DlrToDb route is this:
import org.apache.camel.scala.dsl.builder.RouteBuilder
class DlrToDb extends RouteBuilder {
  """netty:tcp://localhost:12000?textline=true""" ==> {
    id("DlrToDb")
    log("sql insert coming up")
    to("sql:insert into camel_test (msgid, dlr_body) VALUES ('some_id','test')")
  }
}
That is, when I telnet to localhost and press Enter, I would like some data to be added to my database. However, it is a BasicDataSource and not a DataSource, so I get an error:
Failed to create route DlrToDb .....
.... due to: Property 'dataSource' is required
Do I need to change/convert from the BasicDataSource, or do I need to do something to the registry to make it work?
You need to append the query option "dataSource" to the URI:
....
to("sql:insert into camel_test (msgid, dlr_body) VALUES ('some_id','test')?dataSource=dataSource")
....
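So the route from the question becomes (the only change is the query option at the end of the sql URI; the bean name matches the "dataSource" key put into the SimpleRegistry):

class DlrToDb extends RouteBuilder {
  """netty:tcp://localhost:12000?textline=true""" ==> {
    id("DlrToDb")
    log("sql insert coming up")
    // resolves the BasicDataSource registered as "dataSource" in the registry
    to("sql:insert into camel_test (msgid, dlr_body) VALUES ('some_id','test')?dataSource=dataSource")
  }
}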
I'm trying to name my source processor using the Consumed.as() method (full code below):
val usersOrdersStreams: KStream[UserId, Order] = builder
  .stream[UserId, Order](ordersByUserTopic)(Consumed.as("topic-name"))
However when I'm running the application I'm getting the following exception:
org.apache.kafka.common.config.ConfigException: Please specify a value serde or set one through StreamsConfig#DEFAULT_VALUE_SERDE_CLASS_CONFIG
When I looked at the definition of .as() I saw this:
public static <K, V> Consumed<K, V> as(final String processorName) {
    return new Consumed<>(null, null, null, null, processorName);
}
So I guessed the issue was that the key/value serdes were set to null.
I tried to solve it by adding a call to withValueSerde():
val orderSerde = ...
val usersOrdersStreams: KStream[UserId, Order] = builder
  .stream[UserId, Order](ordersByUserTopic)(Consumed.as("topic-name").withValueSerde(orderSerde))
But got the same error. What am I doing wrong?
Note: if I remove the Consumed.as() part, the code works and the exception is not thrown.
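That is, this variant works (the serdes are then picked up from the implicit conversions shown in the full code below):

val usersOrdersStreams: KStream[UserId, Order] =
  builder.stream[UserId, Order](ordersByUserTopic)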
Following is the full code (some imports were removed for readability reasons):
import org.apache.kafka.common.serialization.Serde
import org.apache.kafka.streams.kstream.{GlobalKTable, JoinWindows, TimeWindows, Windowed}
import org.apache.kafka.streams.scala.ImplicitConversions._
import org.apache.kafka.streams.scala.serialization.Serdes
import org.apache.kafka.streams.scala.serialization.Serdes._
import scala.concurrent.duration._
object KafkaStreamsApp {
  implicit def serde[A >: Null : Decoder : Encoder]: Serde[A] = {
    val serializer = (a: A) => a.asJson.noSpaces.getBytes
    val deserializer = (aAsBytes: Array[Byte]) => {
      val aAsString = new String(aAsBytes)
      val aOrError = decode[A](aAsString)
      aOrError match {
        case Right(a) => Option(a)
        case Left(error) => Option.empty
      }
    }
    Serdes.fromFn[A](serializer, deserializer)
  }

  implicit val orderSerde: Serde[Order] = serde[Order]

  // Topics
  final val ordersByUserTopic = "orders-by-user"
  final val filterOrders = "filter-low-orders"
  final val applyMapValues = "mapValues-apply-discount"
  final val payedOrdersTopic = "filtered-orders"

  type UserId = String
  case class Order(user: UserId, amount: Double)

  val builder = new StreamsBuilder

  val usersOrdersStreams: KStream[UserId, Order] =
    builder.stream[UserId, Order](ordersByUserTopic)(Consumed.as("vvv").withValueSerde(orderSerde))

  def paidOrdersTopology(): Unit = {
    usersOrdersStreams
      .filter((_, v) => v.amount > 1000.0, named = Named.as(filterOrders))
      .mapValues(v => v.copy(amount = v.amount * 0.85), named = Named.as(applyMapValues))
      .to(payedOrdersTopic)
  }

  def main(args: Array[String]): Unit = {
    val props = new Properties
    props.put(StreamsConfig.APPLICATION_ID_CONFIG, "orders-application")
    props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092")
    props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.stringSerde.getClass)

    paidOrdersTopology()

    val topology: Topology = builder.build()
    println(topology.describe())

    val application: KafkaStreams = new KafkaStreams(topology, props)
    application.start()
  }
}
So... after some digging, I managed to find the issue: the key serde was missing. The following code sets only the value serde, which creates a Consumed object with a null key serde:
val orderSerde = ...
val usersOrdersStreams: KStream[UserId, Order] = builder
  .stream[UserId, Order](ordersByUserTopic)(Consumed.as("topic-name").withValueSerde(orderSerde))
When I added the key serde as well:
val orderSerde = ...
val consumed = Consumed.as("topic-name")
  .withKeySerde(Serdes.stringSerde) // the missing key serde
  .withValueSerde(orderSerde)

val usersOrdersStreams: KStream[UserId, Order] =
  builder.stream[UserId, Order](ordersByUserTopic)(consumed)
The code started working.
The only thing I'm not sure about is why the error stated that the value serde was missing, when it was actually the key serde that was missing.
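For what it's worth, a variant that leans on the implicit conversions already imported in the question should also work (a sketch: consumedFromSerde from ImplicitConversions derives a Consumed carrying both implicit serdes, and withName then names the source processor; the processor name is illustrative):

val consumed: Consumed[UserId, Order] =
  implicitly[Consumed[UserId, Order]].withName("topic-name") // serdes come from the implicit scope

val usersOrdersStreams: KStream[UserId, Order] =
  builder.stream[UserId, Order](ordersByUserTopic)(consumed)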
I am trying to write a Scala JDBC program which will run an analyze statement on tables present in our database. To do that, I wrote the code below.
object Trip {
  def main(args: Array[String]): Unit = {
    val gs = new GetStats(args(0))
    gs.run_analyze()
  }
}
-----------------------------------------------------------------
class GetStats {
  var tables = ""

  def this(tables: String) {
    this()
    this.tables = tables
  }

  def run_analyze(): Unit = {
    val tabList = tables.split(",")
    val gpc = new GpConnection()
    val con = gpc.getGpCon()
    val statement = con.get.createStatement()
    try {
      for (t <- tabList) {
        val rs = statement.execute(s"analyze ${t}")
        if (rs.equals(true)) println(s"Analyzed ${t}")
        else println(s"Analyze failed ${t}")
      }
    } catch {
      case pse: PSQLException => pse.printStackTrace()
      case e: Exception => e.printStackTrace()
    }
  }
}
-----------------------------------------------------------------
class GpConnection {
  var gpCon: Option[Connection] = None

  def getGpCon(): Option[Connection] = {
    val url = "jdbc:postgresql://.."
    val driver = "org.postgresql.Driver"
    val username = "user"
    val password = "1239876"
    Class.forName(driver)
    if (gpCon == None || gpCon.get.isClosed) {
      gpCon = DriverManager.getConnection(url, username, password).asInstanceOf[Option[Connection]]
      gpCon
    } else gpCon
  }
}
I create a jar file in my IDE (IntelliJ) and submit the jar as below.
scala -cp /home/username/jars/postgresql-42.1.4.jar analyzetables_2.11-0.1.jar schema.table
When I submit the jar file, I see a ClassCastException as given below.
java.lang.ClassCastException: org.postgresql.jdbc.PgConnection cannot be cast to scala.Option
at com.db.manager.GpConnection.getGpCon(GpConnection.scala:15)
at com.gp.analyze.GetStats.run_analyze(GetStats.scala:19)
at com.runstats.Trip$.main(Trip.scala:8)
at com.runstats.Trip.main(Trip.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at scala.reflect.internal.util.ScalaClassLoader.$anonfun$run$2(ScalaClassLoader.scala:98)
at scala.reflect.internal.util.ScalaClassLoader.asContext$(ScalaClassLoader.scala:32)
at scala.reflect.internal.util.ScalaClassLoader.asContext(ScalaClassLoader.scala:30)
at scala.reflect.internal.util.ScalaClassLoader.run$(ScalaClassLoader.scala:98)
at scala.reflect.internal.util.ScalaClassLoader.run(ScalaClassLoader.scala:90)
at scala.tools.nsc.CommonRunner.run$(ObjectRunner.scala:22)
The exception says that the connection cannot be cast to scala.Option, but if I don't use Option, I have to initialize the connection object with null, and then I see a NullPointerException when I run the code.
Could anyone let me know what mistake I am making here and how I can fix it?
asInstanceOf[] doesn't work that way. It won't just create an Option[] for you.
val x:Option[Int] = 5.asInstanceOf[Option[Int]] //not gonna happen
You have to create the Option[] explicitly.
val x:Option[Int] = Option(5)
You can use an uninitialized var as:
var gpCon: Connection = _
But since you are using scala.Option, which is a better thing to do, do it in a functional way and don't write imperative Java code in Scala, like:
import java.sql.{Connection, DriverManager}

// a singleton object (Scala provided)
object GpConnection {
  private var gpCon: Option[Connection] = None

  // returns a Connection (no Option - as we need it!)
  def getOrCreateCon(): Connection = gpCon match {
    case conOpt if conOpt.isEmpty || conOpt.get.isClosed =>
      // connection not present or is closed
      val url = "jdbc:postgresql://.."
      val driver = "org.postgresql.Driver"
      val username = "user"
      val password = "1239876"
      // may throw an exception - you can even handle this
      Class.forName(driver)
      // getConnection already returns a java.sql.Connection; wrap it in Option explicitly
      gpCon = Option(DriverManager.getConnection(url, username, password))
      gpCon.getOrElse(throw new RuntimeException("Cannot create connection"))
    case Some(con) => con
  }
}
use it like:
val con = GpConnection.getOrCreateCon
I have an object like this:
object DatabaseFactory {
  import slick.jdbc.PostgresProfile.api._
  private val db = Database.forConfig("database.postgresql")
  def getDatabase = db
}
and a configuration like this:
database {
  postgresql {
    connectionPool = "HikariCP"
    dataSourceClass = "org.postgresql.ds.PGSimpleDataSource"
    properties = {
      serverName = "localhost"
      portNumber = "5432"
      databaseName = "myProject"
      user = "user"
      password = "userPass"
    }
    numThreads = 10
  }
}
Is there any way to get a javax.sql.DataSource from Slick?
I need an instance of PGSimpleDataSource from Slick.
I want to use it in the Flyway configuration:
Flyway.configure()
  .baselineOnMigrate(true)
  .locations("filesystem:/etc/myProject/db-scripts")
  .dataSource(??? Need DataSource ???)
I've just stumbled upon this and seen the comment by https://stackoverflow.com/users/337134/knows-not-much.
Basically, you'll need to implement your own instance of DataSource:
package slick.migration.api.flyway

import java.io.PrintWriter
import java.sql.{DriverManager, SQLException, SQLFeatureNotSupportedException}

import slick.jdbc.JdbcBackend
import javax.sql.DataSource

class DatabaseDatasource(database: JdbcBackend#Database) extends DataSource {
  override def getConnection = database.createSession().conn

  override def getConnection(username: String, password: String) = throw new SQLFeatureNotSupportedException()

  override def unwrap[T](iface: Class[T]) =
    if (iface.isInstance(this)) this.asInstanceOf[T]
    else throw new SQLException(getClass.getName + " is not a wrapper for " + iface)

  override def isWrapperFor(iface: Class[_]) = iface.isInstance(this)

  override def getLogWriter = throw new SQLFeatureNotSupportedException()

  override def setLogWriter(out: PrintWriter): Unit = throw new SQLFeatureNotSupportedException()

  override def setLoginTimeout(seconds: Int): Unit = DriverManager.setLoginTimeout(seconds)

  override def getLoginTimeout = DriverManager.getLoginTimeout

  override def getParentLogger = throw new SQLFeatureNotSupportedException()
}
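Wiring it into the Flyway configuration from the question might then look like this (a sketch, assuming the DatabaseFactory object above):

val flyway = Flyway.configure()
  .baselineOnMigrate(true)
  .locations("filesystem:/etc/myProject/db-scripts")
  .dataSource(new DatabaseDatasource(DatabaseFactory.getDatabase))
  .load()
flyway.migrate()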
Is there any way to get a javax.sql.DataSource from Slick?
I don't think so, but I was able to get an instance of slick.jdbc.JdbcDataSource, which I then used the exact same way I always used javax.sql.DataSource: creating a connection, then a PreparedStatement, then handling a ResultSet. All the same.
db.source
I know this is old, but I wanted to give an option to try that isn't as drastic as the other answer's approach of creating your own instance of DataSource.
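For illustration, a minimal sketch of that usage (assuming the DatabaseFactory object from the question; createConnection() on slick.jdbc.JdbcDataSource hands back a plain java.sql.Connection):

import java.sql.Connection

val source = DatabaseFactory.getDatabase.source
val con: Connection = source.createConnection()
try {
  // ordinary JDBC from here on
  val ps = con.prepareStatement("select 1")
  val rs = ps.executeQuery()
  while (rs.next()) println(rs.getInt(1))
} finally con.close()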
I was trying out this Slick example, and when I create an entry and then fetch it right after, I don't get the record. I modified the test case, which is here, as below.
val response = create(BankProduct("car loan", 1)).flatMap(getById)

whenReady(response) { p =>
  assert(p.get === BankProduct("car loan", 1))
}
The above fails because the created BankProduct cannot be fetched immediately.
It is using an H2 database for this, and below is the configuration.
trait H2DBComponent extends DBComponent {
  val logger = LoggerFactory.getLogger(this.getClass)
  val driver = slick.driver.H2Driver
  import driver.api._

  val randomDB = "jdbc:h2:mem:test" + UUID.randomUUID().toString() + ";"
  val h2Url = randomDB + "MODE=MySql;DATABASE_TO_UPPER=false;INIT=runscript from 'src/test/resources/schema.sql'\\;runscript from 'src/test/resources/schemadata.sql'"

  val db: Database = {
    logger.info("Creating test connection")
    Database.forURL(url = h2Url, driver = "org.h2.Driver")
  }
}
private[repo] trait BankProductTable extends BankTable { this: DBComponent =>
  import driver.api._

  private[BankProductTable] class BankProductTable(tag: Tag) extends Table[BankProduct](tag, "bankproduct") {
    val id = column[Int]("id", O.PrimaryKey, O.AutoInc)
    val name = column[String]("name")
    val bankId = column[Int]("bank_id")

    def bank = foreignKey("bank_product_fk", bankId, bankTableQuery)(_.id)

    def * = (name, bankId, id.?) <> (BankProduct.tupled, BankProduct.unapply)
  }

  protected val bankProductTableQuery = TableQuery[BankProductTable]

  protected def bankProductTableAutoInc = bankProductTableQuery returning bankProductTableQuery.map(_.id)
}
I don't understand why this is happening or how to avoid it.
I tried adding the property autoCommit as well, but it didn't work either.
I'd appreciate any help clarifying this ambiguity.
This might be due to the in-memory database content being lost after the create call closes its connection. According to the docs:
By default, closing the last connection to a database closes the database. For an in-memory database, this means the content is lost. To keep the database open, add ;DB_CLOSE_DELAY=-1 to the database URL. To keep the content of an in-memory database as long as the virtual machine is alive, use jdbc:h2:mem:test;DB_CLOSE_DELAY=-1.
However, after adding DB_CLOSE_DELAY=-1 there will be errors due to
runscript from 'src/test/resources/schemadata.sql'
which is executed on each connection, so refactoring is necessary such that the database is populated only once on initialization.
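One possible refactoring (a sketch, assuming the H2DBComponent from the question; imports and error handling kept minimal): move DB_CLOSE_DELAY=-1 into the URL, drop the INIT clause, and run the scripts exactly once when the Database is created.

trait H2DBComponent extends DBComponent {
  val driver = slick.driver.H2Driver
  import driver.api._
  import scala.concurrent.Await
  import scala.concurrent.duration._

  // Keep the in-memory database alive for the lifetime of the JVM.
  val h2Url = "jdbc:h2:mem:test" + UUID.randomUUID().toString() +
    ";MODE=MySql;DATABASE_TO_UPPER=false;DB_CLOSE_DELAY=-1"

  val db: Database = {
    val database = Database.forURL(url = h2Url, driver = "org.h2.Driver")
    // Populate the schema exactly once, instead of via INIT on every connection.
    val init = sqlu"runscript from 'src/test/resources/schema.sql'" andThen
      sqlu"runscript from 'src/test/resources/schemadata.sql'"
    Await.result(database.run(init), 10.seconds)
    database
  }
}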
I am trying the Play Framework template "play-hbase".
It's a template, so I expect it to work in most cases.
But in my case HBase is running with boot2docker on Windows 7 x64.
So I added some config details to the template:
object Application extends Controller {
  val barsTableName = "bars"
  val family = Bytes.toBytes("all")
  val qualifier = Bytes.toBytes("json")

  lazy val hbaseConfig = {
    val conf = HBaseConfiguration.create()
    // ADDED: point the client at the boot2docker VM
    conf.set("hbase.zookeeper.quorum", "192.168.59.103")
    conf.set("hbase.zookeeper.property.clientPort", "2181")
    conf.set("hbase.master", "192.168.59.103:60000")
    val hbaseAdmin = new HBaseAdmin(conf)

    // create a table in HBase if it doesn't exist
    if (!hbaseAdmin.tableExists(barsTableName)) {
      val desc = new HTableDescriptor(barsTableName)
      desc.addFamily(new HColumnDescriptor(family))
      hbaseAdmin.createTable(desc)
      Logger.info("bars table created")
    }

    // return the HBase config
    conf
  }
It compiles and runs, but shows a "bad request" error when this code executes:
def addBar() = Action(parse.json) { request =>
  // create a new row in the table that contains the JSON sent from the client
  val table = new HTable(hbaseConfig, barsTableName)
  val put = new Put(Bytes.toBytes(UUID.randomUUID().toString))
  put.add(family, qualifier, Bytes.toBytes(request.body.toString()))
  table.put(put)
  table.close()
  Ok
}