Snowflake-kafka connect -> Error: Caused by: java.lang.ClassNotFoundException: org.bouncycastle.jcajce.provider.BouncyCastleFipsProvider

I am getting this error message when I send a POST request to start my Kafka Snowflake connector:
[ec2-user@ip-10-0-64-123 tmp]$ curl -X POST -H "Content-Type: application/json" --data @snowflake.json http://internal-test-dev-alb-39351xyz.eu-central-1.elb.amazonaws.com:80/connectors
<html>
<head>
<meta http-equiv="Content-Type" content="text/html;charset=utf-8"/>
<title>Error 500 Request failed.</title>
</head>
<body><h2>HTTP ERROR 500</h2>
<p>Problem accessing /connectors. Reason:
<pre> Request failed.</pre></p><hr>Powered by Jetty:// 9.4.18.v20190429<hr/>
</body>
</html>
When I look in the logs:
Caused by: java.lang.ClassNotFoundException: org.bouncycastle.jcajce.provider.BouncyCastleFipsProvider
at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
at java.lang.ClassLoader.loadClass(ClassLoader.java:418)
at org.apache.kafka.connect.runtime.isolation.PluginClassLoader.loadClass(PluginClassLoader.java:104)
at java.lang.ClassLoader.loadClass(ClassLoader.java:351)
... 13 more
[2020-09-28 17:21:32,308] WARN /connectors (org.eclipse.jetty.server.HttpChannel:597)
javax.servlet.ServletException: org.glassfish.jersey.server.ContainerException: java.lang.NoClassDefFoundError: org/bouncycastle/jcajce/provider/BouncyCastleFipsProvider
at org.glassfish.jersey.servlet.WebComponent.serviceImpl(WebComponent.java:408)
at org.glassfish.jersey.servlet.WebComponent.service(WebComponent.java:346)
at org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:365)
at org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:318)
at org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:205)
at org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:873)
at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:542)
at org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:255)
at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1700)
at org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:255)
at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1345)
at org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:203)
at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:480)
at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:1667)
at org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:201)
at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1247)
at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:144)
at org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:220)
at org.eclipse.jetty.server.handler.StatisticsHandler.handle(StatisticsHandler.java:174)
at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)
at org.eclipse.jetty.server.Server.handle(Server.java:505)
at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:370)
at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:267)
at org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:305)
at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:103)
at org.eclipse.jetty.io.ChannelEndPoint$2.run(ChannelEndPoint.java:117)
at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.runTask(EatWhatYouKill.java:333)
at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:310)
at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:168)
at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.run(EatWhatYouKill.java:126)
at org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:366)
at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:698)
at org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:804)
at java.lang.Thread.run(Thread.java:748)
Caused by: org.glassfish.jersey.server.ContainerException: java.lang.NoClassDefFoundError: org/bouncycastle/jcajce/provider/BouncyCastleFipsProvider
at org.glassfish.jersey.servlet.internal.ResponseWriter.rethrow(ResponseWriter.java:254)
at org.glassfish.jersey.servlet.internal.ResponseWriter.failure(ResponseWriter.java:236)
at org.glassfish.jersey.server.ServerRuntime$Responder.process(ServerRuntime.java:436)
at org.glassfish.jersey.server.ServerRuntime$1.run(ServerRuntime.java:261)
at org.glassfish.jersey.internal.Errors$1.call(Errors.java:248)
at org.glassfish.jersey.internal.Errors$1.call(Errors.java:244)
at org.glassfish.jersey.internal.Errors.process(Errors.java:292)
at org.glassfish.jersey.internal.Errors.process(Errors.java:274)
at org.glassfish.jersey.internal.Errors.process(Errors.java:244)
at org.glassfish.jersey.process.internal.RequestScope.runInScope(RequestScope.java:265)
at org.glassfish.jersey.server.ServerRuntime.process(ServerRuntime.java:232)
at org.glassfish.jersey.server.ApplicationHandler.handle(ApplicationHandler.java:679)
at org.glassfish.jersey.servlet.WebComponent.serviceImpl(WebComponent.java:392)
... 33 more
Caused by: java.lang.NoClassDefFoundError: org/bouncycastle/jcajce/provider/BouncyCastleFipsProvider
...
Here is my Dockerfile to build the connector:
FROM openjdk:8-jre
# Add Confluent Repository and install Confluent Platform
RUN wget -qO - http://packages.confluent.io/deb/5.3/archive.key | apt-key add -
RUN echo "deb [arch=amd64] http://packages.confluent.io/deb/5.3 stable main" > /etc/apt/sources.list.d/confluent.list
RUN apt-get update && apt-get install -y --no-install-recommends confluent-kafka-connect-* confluent-schema-registry gettext confluent-kafka-2.12
RUN mkdir -p /usr/share/java/kafka-connect-pubsub/
RUN mkdir -p /etc/kafka-connect-pubsub/
# Script to configure properties in various config files.
COPY config-templates/configure.sh /configure.sh
# RUN wget -q -P /usr/share/java/kafka-connect-jdbc/ https://s3.amazonaws.com/datahub-public-repo/ojdbc10.jar
# RUN wget -q -P /usr/share/java/kafka-connect-jdbc/ https://s3.amazonaws.com/datahub-public-repo/mysql-connector-java-5.1.41-bin.jar
# RUN wget -q -P /usr/share/java/kafka-connect-jdbc/ https://s3.amazonaws.com/datahub-public-repo/terajdbc4.jar
# RUN wget -q -P /usr/share/java/kafka-connect-jdbc/ https://s3.amazonaws.com/datahub-public-repo/tdgssconfig.jar
RUN wget -q -P /usr/share/java/ https://repo1.maven.org/maven2/com/snowflake/snowflake-kafka-connector/1.4.4/snowflake-kafka-connector-1.4.4.jar
RUN wget -q -P /usr/share/java/ https://repo1.maven.org/maven2/org/bouncycastle/bc-fips/1.0.2/bc-fips-1.0.2.jar
RUN wget -q -P /usr/share/java/ https://repo1.maven.org/maven2/org/bouncycastle/bcpkix-fips/1.0.4/bcpkix-fips-1.0.4.jar
COPY config-templates/connect-standalone.properties.template /etc/kafka/connect-standalone.properties.template
COPY config-templates/snowflake.properties.template /etc/kafka/snowflake.properties.template
COPY config-templates/connect-distributed.properties.template /etc/kafka/connect-distributed.properties.template
# COPY config-templates/jdbc-source.properties.template /etc/kafka-connect-jdbc/jdbc-source.properties.template
# COPY config-templates/jdbc-sink.properties.template /etc/kafka-connect-jdbc/jdbc-sink.properties.template
# COPY config-templates/pubsub-sink-connector.properties.template /etc/kafka-connect-pubsub/pubsub-sink-connector.properties.template
COPY config-templates/kafka-run-class /usr/bin/kafka-run-class
# Modify these lines to reflect your client Keystore and Truststore.
# COPY replicantSuperUser.kafka.client.keystore.jks /replicantSuperUser.kafka.client.keystore.jks
# COPY kafka.client.truststore.jks /tmp/kafka.client.truststore.jks
RUN chmod 755 configure.sh /usr/bin/kafka-run-class
ENTRYPOINT /configure.sh && $KC_CMD && bash
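A quick way to double-check where the jars end up inside the image (the image name here is just a placeholder for whatever this Dockerfile is built as):
docker run --rm --entrypoint ls my-connect-image /usr/share/java | grep -i -e snowflake -e fips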
Here is my snowflake.json
{
  "name": "Snowflaketest",
  "config": {
    "connector.class": "com.snowflake.kafka.connector.SnowflakeSinkConnector",
    "tasks.max": "8",
    "topics": "dat.slt.isc.incoming.json",
    "buffer.count.records": "10000",
    "buffer.flush.time": "60",
    "buffer.size.bytes": "5000000",
    "snowflake.url.name": "https://t1234.eu-central-1.snowflakecomputing.com:443",
    "snowflake.user.name": "asdf_CONNECT",
    "snowflake.private.key": "MIIFLTBXBgkqhkiG9w0BBQ0wSjApBgkqhkiG9w0BBQwwHAQIWi2iAjGL9JsCAggAMAw********",
    "snowflake.private.key.passphrase": "hchajvdzSvcmqamIWe1jvrF***",
    "snowflake.database.name": "db_SANDBOX",
    "snowflake.schema.name": "LOADDB",
    "key.converter": "org.apache.kafka.connect.storage.StringConverter",
    "value.converter": "com.snowflake.kafka.connector.records.SnowflakeJsonConverter"
  }
}
Any pointers?
My guess is that the BouncyCastle jars are not placed in the correct location in the openjdk image. Any idea where they should be placed, or is there another way to fix the problem?
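For example, something along these lines in the Dockerfile (instead of the three wget lines above) is what I have in mind; the kafka-connect-snowflake directory name is just a guess, and plugin.path would have to include /usr/share/java:
# Hypothetical layout: keep the connector jar and the BouncyCastle FIPS jars together in one plugin folder
RUN mkdir -p /usr/share/java/kafka-connect-snowflake/
RUN wget -q -P /usr/share/java/kafka-connect-snowflake/ https://repo1.maven.org/maven2/com/snowflake/snowflake-kafka-connector/1.4.4/snowflake-kafka-connector-1.4.4.jar
RUN wget -q -P /usr/share/java/kafka-connect-snowflake/ https://repo1.maven.org/maven2/org/bouncycastle/bc-fips/1.0.2/bc-fips-1.0.2.jar
RUN wget -q -P /usr/share/java/kafka-connect-snowflake/ https://repo1.maven.org/maven2/org/bouncycastle/bcpkix-fips/1.0.4/bcpkix-fips-1.0.4.jar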
Any help would be highly appreciated.

Related

symfony postgres connection with custom schema does not work

When using database_name.my_schema on a remote database it tells me that the db does not exist, but if I use only the database_name it seems to connect. However, when I run
RUN php bin/console doctrine:migrations:migrate --no-interaction
in the Dockerfile, it fails to connect.
In /config/packages/doctrine.yaml:
doctrine:
    dbal:
        dbname: 'my_db' # or dbname.myschema
        user: 'user'
        password: 'xxxxxxxx'
        host: 'xxxxxxxx'
        port: 1520
        driver: 'pdo_pgsql'
        server_version: '13'
The Dockerfile
FROM php:8.1-apache AS symfony_php
RUN a2enmod rewrite
RUN apt-get update \
&& apt-get install -y libpq-dev libzip-dev git libxml2-dev wget --no-install-recommends \
&& docker-php-ext-configure pgsql --with-pgsql=/usr/local/pgsql \
&& docker-php-ext-install pdo pdo_pgsql pgsql zip \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
RUN wget https://getcomposer.org/download/2.3.5/composer.phar \
&& mv composer.phar /usr/bin/composer && chmod +x /usr/bin/composer
COPY docker/php/apache.conf /etc/apache2/sites-enabled/000-default.conf
COPY docker/php/php.ini /usr/local/etc/php/conf.d/app.ini
COPY . /var/www
WORKDIR /var/www
RUN composer install
RUN php bin/console doctrine:migrations:migrate --no-interaction
RUN mkdir ./var/cache/pfv && mkdir ./var/log/pfv && chmod -R 777 ./var/cache && chmod -R 777 ./var/lo
So, the questions are: what is the correct setup for Postgres with a custom schema, and why does the migration command fail when it apparently connects with just the database name?
EDIT
Error thrown:
[critical] Error thrown while running command "doctrine:migrations:migrate --no-interaction". Message: "An exception occurred in the driver: SQLSTATE[08006] [7] timeout expired"
In ExceptionConverter.php line 81:
An exception occurred in the driver: SQLSTATE[08006] [7] timeout expired
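One thing worth noting: during docker build there is normally no network route to the runtime database, so a RUN php bin/console doctrine:migrations:migrate step can time out regardless of the schema setting. A minimal sketch of deferring the migration to container start instead (the docker-entrypoint.sh name and path are illustrative, not part of the original setup):
# In the Dockerfile, replace the RUN migration step with an entrypoint script
COPY docker/php/docker-entrypoint.sh /usr/local/bin/docker-entrypoint.sh
RUN chmod +x /usr/local/bin/docker-entrypoint.sh
ENTRYPOINT ["docker-entrypoint.sh"]
CMD ["apache2-foreground"]
And docker-entrypoint.sh itself:
#!/bin/sh
# Run migrations against the now-reachable database, then hand off to Apache
set -e
php bin/console doctrine:migrations:migrate --no-interaction
exec "$@"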

pyspark container- spark-submitting a pyspark script throws file not found error

Solution:
Add the following environment variables to the container:
export PYSPARK_PYTHON=/usr/bin/python3.9
export PYSPARK_DRIVER_PYTHON=/usr/bin/python3.9
Trying to create a spark container and spark-submit a pyspark script.
I am able to create the container but running the pyspark script throws the following error:
Exception in thread "main" java.io.IOException: Cannot run program
"python": error=2, No such file or directory
Questions :
Any idea why this error is occurring?
Do I need to install Python separately, or does it come bundled with the Spark download?
Do I need to install PySpark separately, or does it come bundled with the Spark download?
What is preferable regarding the Python installation: download it and put it under /opt/python, or use apt-get?
pyspark script:
from pyspark import SparkContext

sc = SparkContext("local", "count app")
words = sc.parallelize(
    ["scala",
     "java",
     "hadoop",
     "spark",
     "akka",
     "spark vs hadoop",
     "pyspark",
     "pyspark and spark"]
)
counts = words.count()
print("Number of elements in RDD -> %i" % counts)
output of spark-submit:
newuser@c1f28230da16:~$ spark-submit count.py
WARNING: Illegal reflective access by org.apache.spark.unsafe.Platform
(file:/opt/spark/jars/spark-unsafe_2.12-3.0.1.jar) to constructor
java.nio.DirectByteBuffer(long,int) WARNING: Please consider reporting
this to the maintainers of org.apache.spark.unsafe.Platform WARNING:
Use --illegal-access=warn to enable warnings of further illegal
reflective access operations WARNING: All illegal access operations
will be denied in a future release 21/02/01 19:58:35 WARN
NativeCodeLoader: Unable to load native-hadoop library for your
platform... using builtin-java classes where applicable Exception in
thread "main" java.io.IOException: Cannot run program "python":
error=2, No such file or directory at
java.base/java.lang.ProcessBuilder.start(ProcessBuilder.java:1128) at
java.base/java.lang.ProcessBuilder.start(ProcessBuilder.java:1071) at
org.apache.spark.deploy.PythonRunner$.main(PythonRunner.scala:97) at
org.apache.spark.deploy.PythonRunner.main(PythonRunner.scala) at
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native
Method) at
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564) at
org.apache.spark.deploy.JavaMainApplication.start(SparkApplication.scala:52)
at
org.apache.spark.deploy.SparkSubmit.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:928)
at
org.apache.spark.deploy.SparkSubmit.doRunMain$1(SparkSubmit.scala:180)
at org.apache.spark.deploy.SparkSubmit.submit(SparkSubmit.scala:203)
at org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:90)
at
org.apache.spark.deploy.SparkSubmit$$anon$2.doSubmit(SparkSubmit.scala:1007)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:1016)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala) Caused
by: java.io.IOException: error=2, No such file or directory at
java.base/java.lang.ProcessImpl.forkAndExec(Native Method) at
java.base/java.lang.ProcessImpl.<init>(ProcessImpl.java:319) at
java.base/java.lang.ProcessImpl.start(ProcessImpl.java:250) at
java.base/java.lang.ProcessBuilder.start(ProcessBuilder.java:1107)
... 15 more log4j:WARN No appenders could be found for logger
(org.apache.spark.util.ShutdownHookManager). log4j:WARN Please
initialize the log4j system properly. log4j:WARN See
http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
output of printenv:
newuser@c1f28230da16:~$ printenv
HOME=/home/newuser
LS_COLORS=rs=0:di=01;34:ln=01;36:mh=00:pi=40;33:so=01;35:do=01;35:bd=40;33;01:cd=40;33;01:or=40;31;01:mi=00:su=37;41:sg=30;43:ca=30;41:tw=30;42:ow=34;42:st=37;44:ex=01;32:*.tar=01;31:*.tgz=01;31:*.arc=01;31:*.arj=01;31:*.taz=01;31:*.lha=01;31:*.lz4=01;31:*.lzh=01;31:*.lzma=01;31:*.tlz=01;31:*.txz=01;31:*.tzo=01;31:*.t7z=01;31:*.zip=01;31:*.z=01;31:*.dz=01;31:*.gz=01;31:*.lrz=01;31:*.lz=01;31:*.lzo=01;31:*.xz=01;31:*.zst=01;31:*.tzst=01;31:*.bz2=01;31:*.bz=01;31:*.tbz=01;31:*.tbz2=01;31:*.tz=01;31:*.deb=01;31:*.rpm=01;31:*.jar=01;31:*.war=01;31:*.ear=01;31:*.sar=01;31:*.rar=01;31:*.alz=01;31:*.ace=01;31:*.zoo=01;31:*.cpio=01;31:*.7z=01;31:*.rz=01;31:*.cab=01;31:*.wim=01;31:*.swm=01;31:*.dwm=01;31:*.esd=01;31:*.jpg=01;35:*.jpeg=01;35:*.mjpg=01;35:*.mjpeg=01;35:*.gif=01;35:*.bmp=01;35:*.pbm=01;35:*.pgm=01;35:*.ppm=01;35:*.tga=01;35:*.xbm=01;35:*.xpm=01;35:*.tif=01;35:*.tiff=01;35:*.png=01;35:*.svg=01;35:*.svgz=01;35:*.mng=01;35:*.pcx=01;35:*.mov=01;35:*.mpg=01;35:*.mpeg=01;35:*.m2v=01;35:*.mkv=01;35:*.webm=01;35:*.ogm=01;35:*.mp4=01;35:*.m4v=01;35:*.mp4v=01;35:*.vob=01;35:*.qt=01;35:*.nuv=01;35:*.wmv=01;35:*.asf=01;35:*.rm=01;35:*.rmvb=01;35:*.flc=01;35:*.avi=01;35:*.fli=01;35:*.flv=01;35:*.gl=01;35:*.dl=01;35:*.xcf=01;35:*.xwd=01;35:*.yuv=01;35:*.cgm=01;35:*.emf=01;35:*.ogv=01;35:*.ogx=01;35:*.aac=00;36:*.au=00;36:*.flac=00;36:*.m4a=00;36:*.mid=00;36:*.midi=00;36:*.mka=00;36:*.mp3=00;36:*.mpc=00;36:*.ogg=00;36:*.ra=00;36:*.wav=00;36:*.oga=00;36:*.opus=00;36:*.spx=00;36:*.xspf=00;36:
PYTHONPATH=:/opt/spark/python:/opt/spark/python/lib/py4j-0.10.4-src.zip
TERM=xterm
SHLVL=1
SPARK_HOME=/opt/spark
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/opt/java/bin:/opt/spark/bin
_=/usr/bin/printenv
myspark dockerfile:
ARG JDK_PACKAGE=openjdk-14.0.2_linux-x64_bin.tar.gz
ARG SPARK_HOME=/opt/spark
ARG SPARK_PACKAGE=spark-3.0.1-bin-hadoop3.2.tgz
#MAINTAINER demo@gmail.com
#LABEL maintainer="demo@foo.com"
############################################
### Install openjava
############################################
# Base image stage 1
FROM ubuntu as jdk
ARG JAVA_HOME
ARG JDK_PACKAGE
WORKDIR /opt/
## download open java
# ADD https://download.java.net/java/GA/jdk14.0.2/205943a0976c4ed48cb16f1043c5c647/12/GPL/$JDK_PACKAGE /
# ADD $JDK_PACKAGE /
COPY $JDK_PACKAGE .
RUN mkdir -p $JAVA_HOME/ && \
    tar -zxf $JDK_PACKAGE --strip-components 1 -C $JAVA_HOME && \
    rm -f $JDK_PACKAGE
############################################
### Install spark search
############################################
# Base image stage 2
FROM ubuntu as spark
#ARG JAVA_HOME
ARG SPARK_HOME
ARG SPARK_PACKAGE
WORKDIR /opt/
## download spark
COPY $SPARK_PACKAGE .
RUN mkdir -p $SPARK_HOME/ && \
    tar -zxf $SPARK_PACKAGE --strip-components 1 -C $SPARK_HOME && \
    rm -f $SPARK_PACKAGE
# Mount elasticsearch.yml config
### ADD config/elasticsearch.yml /opt/elasticsearch/config/elasticsearch.yml
############################################
### final
############################################
FROM ubuntu as finalbuild
ARG JAVA_HOME
ARG SPARK_HOME
ARG SPARK_PACKAGE
WORKDIR /opt/
# get artifacts from previous stages
COPY --from=jdk $JAVA_HOME $JAVA_HOME
COPY --from=spark $SPARK_HOME $SPARK_HOME
# Setup JAVA_HOME, this is useful for docker commandline
ENV JAVA_HOME $JAVA_HOME
ENV SPARK_HOME $SPARK_HOME
# setup paths
ENV PATH $PATH:$JAVA_HOME/bin
ENV PATH $PATH:$SPARK_HOME/bin
ENV PYTHONPATH $PYTHONPATH:$SPARK_HOME/python:$SPARK_HOME/python/lib/py4j-0.10.4-src.zip
# Expose ports
# EXPOSE 9200
# EXPOSE 9300
# Define mountable directories.
#VOLUME ["/data"]
## give permission to entire setup directory
RUN useradd newuser --create-home --shell /bin/bash && \
    echo 'newuser:newpassword' | chpasswd && \
    chown -R newuser $SPARK_HOME $JAVA_HOME && \
    chown -R newuser:newuser /home/newuser && \
    chmod 755 /home/newuser
#chown -R newuser:newuser /home/newuser
#chown -R newuser /home/newuser && \
# Install Python
RUN apt-get update && \
    apt-get install -yq curl && \
    apt-get install -yq vim && \
    apt-get install -yq python3.9
## Install PySpark and Numpy
#RUN \
#  pip install --upgrade pip && \
#  pip install numpy && \
#  pip install pyspark
#
USER newuser
WORKDIR /home/newuser
# RUN chown -R newuser /home/newuser
Added the following environment variables to the container and it works:
export PYSPARK_PYTHON=/usr/bin/python3.9
export PYSPARK_DRIVER_PYTHON=/usr/bin/python3.9
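If you want these set for every container built from the image, they could also be baked into the final stage of the Dockerfile instead of exporting them by hand, e.g.:
# In the final build stage, after python3.9 is installed
ENV PYSPARK_PYTHON=/usr/bin/python3.9
ENV PYSPARK_DRIVER_PYTHON=/usr/bin/python3.9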

How to connect to Confluent Cloud from a Docker container

I have set up a Kafka topic on Confluent Cloud (https://confluent.cloud/) and can connect/send messages to the topic using the configuration below:
kafka-config.properties:
# Kafka
ssl.endpoint.identification.algorithm=
bootstrap.servers=pkc-4yyd6.us-east1.gcp.confluent.cloud:9092
security.protocol=SASL_SSL
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="uname" password="pwd";
sasl.mechanism=PLAIN
Connecting from a docker container I receive:
Failed to produce: org.apache.kafka.common.errors.SslAuthenticationException: SSL handshake failed
Searching for the above error suggests that ssl.endpoint.identification.algorithm= should fix it.
Here is my Dockerfile:
FROM ysihaoy/scala-play:2.12.2-2.6.0-sbt-0.13.15
COPY ["build.sbt", "/tmp/build/"]
COPY ["project/plugins.sbt", "project/build.properties", "/tmp/build/project/"]
COPY . /root/app/
WORKDIR /root/app
CMD ["sbt" , "run"]
I build and run the container using:
docker build -t kafkatest .
docker run -it kafkatest
Is there extra config required to allow connecting to Confluent Kafka?
I do not receive this issue when building locally (not using Docker).
Update:
Here is the Scala src I use to build the properties:
def buildProperties(): Properties = {
val kafkaPropertiesFile = Source.fromResource("kafka-config.properties")
val properties: Properties = new Properties
properties.load(kafkaPropertiesFile.bufferedReader())
properties.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringSerializer")
properties.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, "org.apache.kafka.connect.json.JsonSerializer")
properties
}
Update 2:
def buildProperties(): Properties = {
val kafkaPropertiesFile = Source.fromResource("kafka-config.properties")
val properties: Properties = new Properties
properties.load(kafkaPropertiesFile.bufferedReader())
println("bootstrap.servers:"+properties.get("bootstrap.servers"))
properties.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringSerializer")
properties.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, "org.apache.kafka.connect.json.JsonSerializer")
properties
}
The property bootstrap.servers is found, therefore the file is added to the container.
Update 3:
sasl.jaas.config:org.apache.kafka.common.security.plain.PlainLoginModule required username="Q763KBPRI" password="bFehkfL/J6m8L2aukX+A/L59LAYb/bWr"
Update 4:
docker run -it kafkatest --network host
returns error:
docker: Error response from daemon: OCI runtime create failed: container_linux.go:349: starting container process caused "exec: \"--network\": executable file not found in $PATH": unknown.
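Note that docker run options must come before the image name, so the host-network attempt would be written like this instead:
docker run -it --network host kafkatest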
Using a different base image resolved the issue, but I'm not sure which difference actually fixed it. Here is my updated Dockerfile:
ARG OPENJDK_TAG=8u232
FROM openjdk:${OPENJDK_TAG}
ARG SBT_VERSION=1.4.1
# Install sbt
RUN \
mkdir /working/ && \
cd /working/ && \
curl -L -o sbt-$SBT_VERSION.deb https://dl.bintray.com/sbt/debian/sbt-$SBT_VERSION.deb && \
dpkg -i sbt-$SBT_VERSION.deb && \
rm sbt-$SBT_VERSION.deb && \
apt-get update && \
apt-get install sbt && \
cd && \
rm -r /working/ && \
sbt sbtVersion
COPY ["build.sbt", "/tmp/build/"]
COPY ["project/plugins.sbt", "project/build.properties", "/tmp/build/project/"]
COPY . /root/app/
WORKDIR /root/app
CMD ["sbt" , "run"]

Getting error in configuring s3 Sink connector

I have cloned the landoop fast-data-dev docker repo from this GitHub repo and built the image using the command docker build --tag=landoop .
After building the image, I ran it using:
docker run --rm -p 2181:2181 -p 3030:3030 -p 8081-8083:8081-8083 -p 9581-9585:9581-9585 -p 9092:9092 -e ADV_HOST=10.10.X.X -e DEBUG=1 -e AWS_ACCESS_KEY_ID=XXX -e AWS_SECRET_ACCESS_KEY=XXX landoop
Once the UI was up, I tried to create an s3 sink connection but it failed saying:
Caused by: java.io.FileNotFoundException: /usr/lib/libnss3.so
Also, I don't see the libnss3.so file at that location. However, if I run the docker container directly using the command below, I can see the file and there is no error when creating the s3 sink connector.
docker run --rm --net=host landoop/fast-data-dev
Has anyone faced this error?
Answering my own question so that others can benefit; if it's not appropriate please leave a comment and I will make it a comment. I figured out that the libnss3 library was missing from the debian image and had to be installed while building the image. For this I edited setup-and-run.sh and added libnss3; the script looks like:
FROM debian as compile-lkd
RUN apt-get update \
&& apt-get install -y \
unzip \
wget \
libnss3 \
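With libnss3 added, the image can be rebuilt and started with the same commands as before (repeated from above):
docker build --tag=landoop .
docker run --rm -p 2181:2181 -p 3030:3030 -p 8081-8083:8081-8083 -p 9581-9585:9581-9585 -p 9092:9092 -e ADV_HOST=10.10.X.X -e DEBUG=1 -e AWS_ACCESS_KEY_ID=XXX -e AWS_SECRET_ACCESS_KEY=XXX landoop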

Eclipse Che Google Cloud Compute Engine

Having challenges running Eclipse Che multiuser mode on a Google Cloud Compute Engine instance.
Environment
(che cli): 6.1.0 - using docker 17.03.2-ce / native
Input:
docker run -it -e CHE_MULTIUSER=true -e CHE_HOST={server-ip} -v /var/run/docker.sock:/var/run/docker.sock -v {localuserfolder}:/data eclipse/che start
Output:
INFO: (che start): Starting containers...
docker_compose --file="/data/instance/docker-compose-container.yml" -p="che" up -d >> "/data/cli.log" 2>&1
che_postgres_1 is up-to-date
ERROR: for che Container "4a245b40b556" is unhealthy.
ERROR: for keycloak Container "4a245b40b556" is unhealthy.
Encountered errors while bringing up the project.
ERROR: Error during 'compose up' - printing 30 line tail of {localuserfolder}/cli.log:
Noticed the issue has something to do with postgres not having permission to run some scripts:
docker container logs che_postgres_1
/usr/bin/container-entrypoint: line 3: exec: /var/lib/pgsql/init-che-user-and-run.sh: cannot execute: Permission denied
/usr/bin/container-entrypoint: line 3: /var/lib/pgsql/init-che-user-and-run.sh: Permission denied
The documented fix doesn't work; :/data is already mounted to a readable/writable directory.
Probably Google Cloud overrides file system permissions somehow. Have you tried mounting different dirs into :/data?
I've tested your command and it works in my case using a GCP instance with Debian 9 and docker 17.12; here are the steps that I followed:
1) install docker
$ sudo apt-get install apt-transport-https ca-certificates curl gnupg2 software-properties-common
$ curl -fsSL https://download.docker.com/linux/$(. /etc/os-release; echo "$ID")/gpg | sudo apt-key add -
$ sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/$(. /etc/os-release; echo "$ID") $(lsb_release -cs) stable"
$ sudo apt-get install docker-ce
2) run che. I used /tmp as the directory; I recommend creating another directory to store the files.
$ sudo docker run -it -e CHE_MULTIUSER=true -e CHE_HOST=INTERNAL-IP -v /var/run/docker.sock:/var/run/docker.sock -v /DIRECTORY_WITH_PERMISSIONS:/data eclipse/che start
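A concrete variant of that, using a dedicated data directory with explicit permissions (the /opt/che-data path is just an example):
$ sudo mkdir -p /opt/che-data && sudo chmod -R 777 /opt/che-data
$ sudo docker run -it -e CHE_MULTIUSER=true -e CHE_HOST=INTERNAL-IP -v /var/run/docker.sock:/var/run/docker.sock -v /opt/che-data:/data eclipse/che start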