How can I connect to Snowflake using Scala Slick JDBC - scala

I am using Scala and Akka Streams in my application and ultimately want to insert the records into Snowflake.
Is it possible to connect to Snowflake using Slick JDBC or Alpakka Slick?
Please assist.

You can't; Snowflake is not in the list of supported databases:
https://scala-slick.org/doc/3.3.2/supported-databases.html
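That said, nothing stops you from writing to Snowflake with plain JDBC from the same Akka Streams pipeline (for example inside a `Sink.foreach` stage). A minimal sketch, assuming the Snowflake JDBC driver (`net.snowflake:snowflake-jdbc`) is on the classpath; the account, table, and column names are placeholders:

```scala
import java.sql.DriverManager

// Sketch only: assumes the Snowflake JDBC driver (net.snowflake:snowflake-jdbc)
// is on the classpath. Account, table, and column names are placeholders.
object SnowflakeJdbcSketch {

  // Build the Snowflake JDBC URL from an account identifier, database, and schema.
  def snowflakeUrl(account: String, db: String, schema: String): String =
    s"jdbc:snowflake://$account.snowflakecomputing.com/?db=$db&schema=$schema"

  // Insert one record; call this e.g. from a Sink.foreach stage of an Akka stream.
  def insertRecord(url: String, user: String, password: String, value: String): Unit = {
    val conn = DriverManager.getConnection(url, user, password)
    try {
      val ps = conn.prepareStatement("INSERT INTO my_table (col) VALUES (?)")
      ps.setString(1, value)
      ps.executeUpdate()
      ps.close()
    } finally conn.close()
  }
}
```

Opening a connection per element is wasteful; in a real stream you would hold one connection (or a pool) open for the life of the stage.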

Related

kafka-connect JDBC PostgreSQL Sink Connector explicitly define the PostgreSQL schema (namespace)

I am using the JDBC sink connector to write data to PostgreSQL.
The connector works fine, but it seems it can only write data to the default PostgreSQL schema called public.
This is the common JDBC URL format for PostgreSQL:
jdbc:postgresql://<host>:<port>/<database>
Is it possible to explicitly define the schema name to which the PostgreSQL sink connector should write?
UPDATE:
Thanks, @Laurenz Albe, for the hint. I can define search_path in the JDBC connection URL in either of these ways:
jdbc:postgresql://<host>:<port>/<database>?options=-c%20search_path=myschema,public
jdbc:postgresql://<host>:<port>/<database>?currentSchema=myschema
Use the options connection parameter to set the search_path:
jdbc:postgresql://<host>:<port>/<database>?options=-c%20search_path=myschema,public
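For the Kafka Connect sink specifically, that URL goes in the connector's `connection.url` property. A hedged example connector config (connector name, topic, schema, and credentials are placeholders):

```json
{
  "name": "postgres-sink",
  "config": {
    "connector.class": "io.confluent.connect.jdbc.JdbcSinkConnector",
    "topics": "my_topic",
    "connection.url": "jdbc:postgresql://localhost:5432/mydb?currentSchema=myschema",
    "connection.user": "postgres",
    "connection.password": "secret",
    "auto.create": "true"
  }
}
```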

Hive streaming not working

I am trying to enable Hive streaming by following https://cwiki.apache.org/confluence/display/Hive/Streaming+Data+Ingest#StreamingDataIngest-StreamingRequirements
I changed all the configuration properties to enable Hive streaming, but the Hive metastore service fails with the error below:
18/02/09 12:22:51 ERROR compactor.Initiator: Caught an exception in the main loop of compactor initiator, exiting MetaException(message:Unable to connect to transaction database org.postgresql.util.PSQLException: ERROR: relation "compaction_queue" does not exist
Note: I am using PostgreSQL for the JDBC metastore, and the Hive version is 2.0.1.
Help me solve this error and get Hive streaming working.
The definition of this table (and others related to ACID tables/streaming ingest) can be found in https://github.com/apache/hive/blob/branch-2.0/metastore/scripts/upgrade/postgres/hive-txn-schema-2.0.0.postgres.sql. All of these are required for streaming to function properly.
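The schema script fixes the missing tables, but streaming also needs the transactional settings enabled in hive-site.xml. A sketch of the commonly required properties per the Hive streaming requirements page (the worker thread count is a placeholder to tune for your cluster):

```xml
<property>
  <name>hive.txn.manager</name>
  <value>org.apache.hadoop.hive.ql.lockmgr.DbTxnManager</value>
</property>
<property>
  <name>hive.support.concurrency</name>
  <value>true</value>
</property>
<property>
  <name>hive.compactor.initiator.on</name>
  <value>true</value>
</property>
<property>
  <name>hive.compactor.worker.threads</name>
  <value>1</value>
</property>
```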

What is the way to connect to Hive using Scala code and execute a query against Hive?

I checked out this link but did not find anything useful:
HiveClient Documentation
From raw Scala you can use the Hive JDBC connector: https://cwiki.apache.org/confluence/display/Hive/HiveServer2+Clients#HiveServer2Clients-JDBC.
One more option is to use Spark Hive context.
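A minimal sketch of the JDBC route, assuming the `org.apache.hive:hive-jdbc` driver is on the classpath; host, port, database, and credentials are placeholders:

```scala
import java.sql.DriverManager

// Sketch only: assumes org.apache.hive:hive-jdbc is on the classpath.
// Host, port, database, and credentials are placeholders.
object HiveJdbcExample {
  val driverClass = "org.apache.hive.jdbc.HiveDriver"

  // Build the HiveServer2 JDBC URL.
  def hiveUrl(host: String, port: Int, db: String): String =
    s"jdbc:hive2://$host:$port/$db"

  // Run a query against HiveServer2 and print the first column of each row.
  def runQuery(url: String, user: String, sql: String): Unit = {
    Class.forName(driverClass) // load the Hive driver at runtime
    val conn = DriverManager.getConnection(url, user, "")
    try {
      val stmt = conn.createStatement()
      val rs   = stmt.executeQuery(sql)
      while (rs.next()) println(rs.getString(1))
    } finally conn.close()
  }
}
```

Usage would look like `HiveJdbcExample.runQuery(HiveJdbcExample.hiveUrl("localhost", 10000, "default"), "hive", "SELECT * FROM my_table LIMIT 10")`, with 10000 being HiveServer2's default port.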

Connect icCube with Redshift

In icCube 5.1, Redshift is not in the list of supported JDBC connections.
How do you create a data source in icCube on Amazon Redshift?
A first solution is using the Postgres JDBC driver. Redshift is based on Postgres, so it also works (for how long is a good question).
The second is a bit more complicated, as you need to add the Redshift JDBC driver to icCube. First download the JDBC driver from Amazon here, then follow these instructions to add a library to icCube.
Once done, you have to configure a new data source:
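Either way, the data source boils down to a driver class plus a JDBC URL. A sketch of the two variants (cluster endpoint and database name are placeholders; 5439 is Redshift's default port):

```properties
# Option 1: Postgres driver
driver = org.postgresql.Driver
url    = jdbc:postgresql://my-cluster.abc123.us-east-1.redshift.amazonaws.com:5439/mydb

# Option 2: Amazon Redshift driver
driver = com.amazon.redshift.jdbc.Driver
url    = jdbc:redshift://my-cluster.abc123.us-east-1.redshift.amazonaws.com:5439/mydb
```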

Spring batch connectivity to vertica DB

I am trying to connect to Vertica using Spring Batch but am getting an exception from the Vertica DB.
Can I connect to Vertica DB using Spring Batch? I tried searching the net but didn't find any example.
You need to initialize your database using jdbc:initialize-database: use a custom Vertica DB script containing the Spring Batch metadata table creation statements, and all should be fine.
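A hedged sketch of that initializer in a Spring XML context (the script path is a placeholder; the script itself should contain the Spring Batch metadata DDL adapted to Vertica's SQL dialect):

```xml
<jdbc:initialize-database data-source="dataSource">
    <jdbc:script location="classpath:schema-vertica-springbatch.sql"/>
</jdbc:initialize-database>
```

This assumes the `jdbc` namespace (`http://www.springframework.org/schema/jdbc`) is declared in the context file and that `dataSource` points at your Vertica connection.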