ActiveMQ with HBase persistence

Is it possible to use HBase as the persistence database for ActiveMQ? Has anyone done something similar?

According to what I have read on ActiveMQ's homepage http://activemq.apache.org/persistence.html and in this PDF ( http://fusesource.com/docs/broker/5.0/persistence/persistence.pdf ), it should be possible using a generic JDBC connection.
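What I had in mind is something along these lines (untested sketch; it assumes Apache Phoenix as the JDBC layer over HBase, and Phoenix's SQL dialect, e.g. UPSERT instead of INSERT, may still require a custom ActiveMQ JDBC adapter):

    import org.apache.activemq.broker.BrokerService
    import org.apache.activemq.store.jdbc.JDBCPersistenceAdapter
    import org.apache.commons.dbcp2.BasicDataSource

    object HBasePersistedBroker extends App {
      // Assumption: Phoenix provides the JDBC driver in front of HBase.
      val ds = new BasicDataSource()
      ds.setDriverClassName("org.apache.phoenix.jdbc.PhoenixDriver")
      ds.setUrl("jdbc:phoenix:zookeeper-host:2181") // placeholder ZooKeeper quorum

      // Plug ActiveMQ's generic JDBC persistence adapter into the broker.
      val adapter = new JDBCPersistenceAdapter()
      adapter.setDataSource(ds)

      val broker = new BrokerService()
      broker.setPersistenceAdapter(adapter)
      broker.start()
      broker.waitUntilStopped()
    }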

Related

Lock annotation in Spring-data-jdbc

This reference documentation from spring.io claims that Spring Data JDBC supports an @Lock annotation.
Spring Data JDBC supports locking on derived query methods. To enable locking on a given derived query method inside a repository, you annotate it with @Lock.
However, I am unable to find such an annotation in the spring-data-jdbc library. There is one in spring-data-jpa, but we use spring-data-jdbc.
Is there a mistake in the documentation or am I missing something?
It's org.springframework.data.relational.repository.Lock.
As you can see, it is in Spring Data Relational, which is the basis for both Spring Data R2DBC and Spring Data JDBC.
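A minimal sketch of how it is used on a derived query method (Scala here, but the same idea applies in Java/Kotlin; the entity, the repository, and the location of the LockMode enum are my assumptions, so check them against your Spring Data Relational version):

    import org.springframework.data.relational.core.sql.LockMode
    import org.springframework.data.relational.repository.Lock
    import org.springframework.data.repository.CrudRepository

    // Minimal entity stub; the real column/id mapping is elided for brevity.
    case class Customer(id: java.lang.Long, lastname: String)

    trait CustomerRepository extends CrudRepository[Customer, java.lang.Long] {
      // Typically rendered by the dialect as SELECT ... FOR UPDATE.
      @Lock(LockMode.PESSIMISTIC_WRITE)
      def findByLastname(lastname: String): java.util.List[Customer]
    }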

Using Slick with Kudu/Impala

Kudu tables can be accessed via Impala and thus its JDBC driver. Thanks to that, they are accessible via the standard Java/Scala JDBC API. I was wondering whether it is possible to use Slick for this. If not, is there any other high-level Scala DB framework that supports Impala/Kudu?
Slick can be used with any JDBC database
http://slick.lightbend.com/doc/3.3.0/database.html
At least for me, Slick is not fully compatible with Impala/Kudu. Using Slick, I cannot modify DB entities: I cannot create, update, or delete any item. It only works for reading data.
There are two ways you could use Slick with an arbitrary JDBC driver (and SQL dialect).
The first is to use low-level JDBC calls. The SimpleDBIO class gives you access to a JDBC connection:
val getAutoCommit = SimpleDBIO[Boolean](_.connection.getAutoCommit)
That example is from the Slick manual.
However, I think you're more interested in working at a higher level than that. In that case, for Slick, you'd need to implement a custom Profile. If Impala is similar enough to an existing database profile, you may be able to extend an existing profile and adjust it to account for any differences. For example, this would allow you to customize how SQL is formatted for Impala, how timestamps are represented, and how column names are quoted. The documentation on Porting SQL from Other Database Systems to Impala would give you an idea of what needs to change in a driver.
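A stub of that approach might look like this (sketch only; MySQLProfile is an arbitrary starting point, and the capability tweak is just an illustration of the kind of override involved):

    import slick.basic.Capability
    import slick.jdbc.{JdbcCapabilities, MySQLProfile}

    trait ImpalaProfile extends MySQLProfile {
      // Example: drop capabilities Impala presumably can't honour,
      // such as returning generated keys on insert.
      override protected def computeCapabilities: Set[Capability] =
        super.computeCapabilities - JdbcCapabilities.returnInsertKey

      // Further overrides would go here: identifier quoting, type mappings,
      // how inserts/updates are rendered, and so on.
    }
    object ImpalaProfile extends ImpalaProfile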
If not, is there any other high-level Scala DB framework that supports Impala/Kudu?
None of the mainstream libraries seem to support Impala as a feature. Having said that, the Doobie documentation mentions customising connections for Hive, so it may be worth quickly trying Doobie to see if you can query and insert, for example.
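A quick experiment could be as small as this (sketch assuming doobie 0.13.x with cats-effect 2; the driver class, URL, and table name are placeholders for whichever Impala/Hive JDBC driver you actually have on the classpath):

    import cats.effect.{ContextShift, IO}
    import doobie._
    import doobie.implicits._
    import scala.concurrent.ExecutionContext

    implicit val cs: ContextShift[IO] = IO.contextShift(ExecutionContext.global)

    val xa = Transactor.fromDriverManager[IO](
      "com.cloudera.impala.jdbc41.Driver",       // assumed driver class
      "jdbc:impala://impala-host:21050/default", // placeholder URL
      "", ""                                     // user / password if required
    )

    val rowCount: IO[Int] =
      sql"SELECT count(*) FROM some_kudu_table".query[Int].unique.transact(xa)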

How to access a Hive ACID table in Spark SQL?

How can you access a Hive ACID table in Spark SQL?
We have worked on and open sourced a datasource that will enable users to work on their Hive ACID Transactional tables using Spark.
Github: https://github.com/qubole/spark-acid
It is available as a Spark package and instructions to use it are on the Github page. Currently the datasource supports only reading from Hive ACID tables, and we are working on adding the ability to write into these tables via Spark as well.
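Reading a table looks roughly like this (the exact format name and options are documented in the README; the table name below is a placeholder):

    import org.apache.spark.sql.SparkSession

    val spark = SparkSession.builder()
      .appName("hive-acid-read")
      .enableHiveSupport()
      .getOrCreate()

    // "default.acid_tbl" is a placeholder; see the README for all supported options.
    val df = spark.read
      .format("HiveAcid")
      .option("table", "default.acid_tbl")
      .load()

    df.show()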
Feedback and suggestions are welcome!
@aniket Spark doesn't support reading Hive ACID tables directly. (https://issues.apache.org/jira/browse/SPARK-15348, https://issues.apache.org/jira/browse/SPARK-16996)
The data layout for transactional tables requires special logic to decide which directories to read and how to combine them correctly. Some data files may represent updates of previously written rows, for example. Also, if you are reading while something is writing to this table, your read may fail (without the special logic) because it will try to read incomplete ORC files. Compaction may (again, without the special logic) make it look like your data is duplicated.
It can be done (WIP) via LLAP - tracked in https://issues.apache.org/jira/browse/HIVE-12991
I faced the same issue (Spark for Hive ACID tables) and was able to manage with a JDBC call from Spark. Maybe I can use this JDBC call from Spark until we get native ACID support in Spark.
https://github.com/Gowthamsb12/Spark/blob/master/Spark_ACID
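Not taken from that repo, but the kind of JDBC read meant here looks roughly like this (host, port, table, and credentials are placeholders; the Hive JDBC driver has to be on the classpath):

    import org.apache.spark.sql.SparkSession

    val spark = SparkSession.builder().appName("acid-over-jdbc").getOrCreate()

    // Spark pushes the read through HiveServer2, which handles the ACID
    // directory layout and compaction details on its side.
    val acidDf = spark.read
      .format("jdbc")
      .option("driver", "org.apache.hive.jdbc.HiveDriver")
      .option("url", "jdbc:hive2://hiveserver2-host:10000/default")
      .option("dbtable", "my_acid_table")
      .option("user", "hive")
      .option("password", "")
      .load()

    acidDf.show()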
Spark can read ACID tables directly at least since Spark 2.3.2. But I can also confirm it can't read ACID tables in Spark 2.2.0.

Mule ESB implementation guide

I have two databases, one on the MySQL engine and one on SQL Server, that I want to connect with Mule ESB. The desired result: there is a table with fields (MACC, tencc, ngaysinh) on MySQL and a table with fields (ID, NAME, ADDRESS) on SQL Server, and when I perform an insert (NAME, ADDRESS) on MySQL, the data should also change on SQL Server.
Thanks.
Mule's JDBC connectivity suite provides very good connectors for MySQL and SQL Server databases.
For your requirement, kindly go through the official MuleSoft documentation here and learn how to connect to databases.
There is a good tutorial for SQL Server connectivity in MuleSoft here.
Based on the above tutorials, you can design your Mule flow to connect to the MySQL and SQL Server databases using a Mule timer component; the timer triggers an event that reads data from the MySQL table and populates the SQL Server table as needed.
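Outside of Mule, the data movement on each timer tick boils down to something like this plain-JDBC sketch (connection strings, credentials, and the column mapping are assumptions for illustration only):

    import java.sql.DriverManager

    val mysql = DriverManager.getConnection(
      "jdbc:mysql://mysql-host:3306/source_db", "user", "password")
    val sqlServer = DriverManager.getConnection(
      "jdbc:sqlserver://mssql-host:1433;databaseName=target_db", "user", "password")

    // Read the MySQL rows and copy them into the SQL Server table.
    val rs = mysql.createStatement()
      .executeQuery("SELECT MACC, tencc, ngaysinh FROM source_table")
    val insert = sqlServer.prepareStatement(
      "INSERT INTO target_table (ID, NAME, ADDRESS) VALUES (?, ?, ?)")
    while (rs.next()) {
      insert.setObject(1, rs.getObject("MACC"))
      insert.setObject(2, rs.getObject("tencc"))
      insert.setObject(3, rs.getObject("ngaysinh"))
      insert.executeUpdate()
    }
    mysql.close()
    sqlServer.close()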
Note: In my opinion, replicating data in this manner is not good design. If it's for a PoC or for learning purposes, it's fine. If possible, can you please share your use case?

Is it possible to load Phoenix tables from HDFS?

I am new to Phoenix. Is there any way to load tables into Phoenix from the Hadoop filesystem?
Yes...
Phoenix is a wrapper on HBase...
So you can create a Phoenix table pointing to the HDFS data and use it.
Please let me know if you are facing any specific issue related to that...
You can't do it straight away. You need to import the data into HBase first. There are pre-built importers (CSV format):
https://phoenix.apache.org/bulk_dataload.html
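For reference, the MapReduce-based CSV loader described there is driven like this (equivalent to the documented hadoop jar ... CsvBulkLoadTool command; the table name, HDFS path, and ZooKeeper quorum are placeholders):

    import org.apache.hadoop.util.ToolRunner
    import org.apache.phoenix.mapreduce.CsvBulkLoadTool

    // Loads /data/example.csv from HDFS into the Phoenix table EXAMPLE.
    val exitCode = ToolRunner.run(new CsvBulkLoadTool(), Array(
      "--table", "EXAMPLE",
      "--input", "/data/example.csv",
      "--zookeeper", "zk-host:2181"
    ))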