When you use Kafka (in a Docker container), where exactly is the plugin path? - apache-kafka

----- I use Kafka and Kafka Connect (image: confluentinc/cp-kafka-connect).
When you use Kafka in a Docker container, if you want to operate Kafka you have to go into the container (like 'docker exec -it kafka' or 'docker exec -it kafka-connect'; this is part of what I want to ask), right?
I tried putting some connectors (JDBC connector, MySQL connector) into the kafka-connect container, but it didn't work.
So my questions are:
After docker-compose up, if I want to run Connect with some connectors ('./bin/connect-distributed.sh ./etc/kafka/connect-distributed.properties'), which container do I have to go into?
And when I set the plugin path, where should I write it? (kafka? kafka-connect?)
Sorry if this is difficult to read.

No, you don't need to exec into anything unless you cannot download Kafka onto your host machine to get the CLI scripts. Even then, you'd only exec for kafka-topics, the console producer/consumer, kafka-consumer-groups, etc., not for any of the Connect scripts.
The Connect container automatically runs the distributed Connect script, and you simply set CONNECT_PLUGIN_PATH as an environment variable to whatever directory in the container you want to use for plugins (I like /opt/connectors if I mount a volume, but that's not where confluent-hub installs to for that image). That variable does nothing for the broker image, only for Connect.
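For example, a minimal sketch of the relevant part of a docker-compose service (the image tag and host directory are illustrative, and the other required CONNECT_* variables are omitted here):
kafka-connect:
  image: confluentinc/cp-kafka-connect:7.1.1
  environment:
    # comma-separated directories that Connect scans for plugin JARs
    CONNECT_PLUGIN_PATH: /usr/share/java,/opt/connectors
  volumes:
    # put connector JARs here on the host so Connect can load them
    - ./connectors:/opt/connectors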
Related: How to install connectors to the docker image of apache kafka connect

If your requirement is to start up Kafka Connect, you can use the basic guide published by Confluent, "Build Your Own Apache Kafka® Demos".
Basically you need to execute the following instructions:
git clone https://github.com/confluentinc/cp-all-in-one.git
cd cp-all-in-one/cp-all-in-one
git checkout 7.1.1-post
docker-compose up -d
This brings up Control Center at http://localhost:9021
If you need to install a connector, you can go to https://www.confluent.io/hub and select your specific connector.
Then you can create your own Docker image of a specific Kafka Connect server.
1.- Write a Dockerfile.
vim Dockerfile
2.- Add a connector (for example, JDBC) from Confluent Hub.
FROM confluentinc/cp-kafka-connect
ENV MYSQL_DRIVER_VERSION 5.1.39
# install the JDBC connector from Confluent Hub
RUN confluent-hub install --no-prompt confluentinc/kafka-connect-jdbc:10.5.0
# add the MySQL JDBC driver JAR into the connector's lib directory
RUN curl -k -SL "https://dev.mysql.com/get/Downloads/Connector-J/mysql-connector-java-${MYSQL_DRIVER_VERSION}.tar.gz" \
  | tar -xzf - -C /usr/share/confluent-hub-components/confluentinc-kafka-connect-jdbc/lib \
  --strip-components=1 mysql-connector-java-${MYSQL_DRIVER_VERSION}/mysql-connector-java-${MYSQL_DRIVER_VERSION}-bin.jar
3.- Build the docker image.
docker build . -t my-kafka-connect-jdbc:1.0.0
4.- Then edit your docker-compose.yml and change line 57
from:
image: cnfldemos/cp-server-connect-datagen:0.5.3-7.1.0
to:
image: my-kafka-connect-jdbc:1.0.0
5.- Finally, stop and start your Confluent Platform local environment:
docker-compose down
docker-compose up
Verify your containers are running:
docker ps
Test your Connect server:
curl --location --request GET 'http://localhost:8083/connectors'
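From there, a connector instance is created through the same REST API. A minimal sketch of a JDBC source config (the connection URL, credentials, and topic prefix are placeholders):
curl -X POST http://localhost:8083/connectors \
  -H 'Content-Type: application/json' \
  -d '{
    "name": "jdbc-source-example",
    "config": {
      "connector.class": "io.confluent.connect.jdbc.JdbcSourceConnector",
      "connection.url": "jdbc:mysql://mysql:3306/mydb",
      "connection.user": "user",
      "connection.password": "password",
      "mode": "incrementing",
      "incrementing.column.name": "id",
      "topic.prefix": "mysql-"
    }
  }'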


Docker Postgres data host volume mapping

I'm trying to containerize a PostgreSQL server, and this container will run many other applications as well. The requirement is that the PostgreSQL server data should be mapped to a host volume so that we don't lose the data when the container is stopped, and so that the next time we start the container the same directory can be mapped again and Postgres can use the old data. Below is the Dockerfile. Note that I'm using Ubuntu 22.04 on the host.
FROM ubuntu:22.04
ENV DEBIAN_FRONTEND noninteractive
# apt update is needed first, or the install fails in a fresh image
RUN apt update && apt install -y postgresql
ENTRYPOINT ["tail", "-f", "/dev/null"]
Docker image is built using the command
docker build -t pg_test .
and the container is run using the command
docker run --name test -v /home/me/data:/var/lib/postgresql/14/main pg_test
'/home/me/data' is an empty host directory where I want to map the Postgres server data. '/var/lib/postgresql/14/main' is the directory inside the container where Postgres is supposed to store its data.
Once the docker container starts, I enter the docker container using the command
docker exec -it test bash
and once I'm inside, I'm trying to start the PostgreSQL service. But PostgreSQL fails to start as there is no data in '/var/lib/postgresql/14/main' directory. I understand that since I have mapped an empty host directory to '/var/lib/postgresql/14/main' directory, postgres doesn't have the files required to start.
I understand that I'm doing it the wrong way, but I couldn't find a way around it. Can anyone please help me to do this the right way, if there is one?
Any help would be appreciated.
You should use the official postgres Docker image; it will set up the database for you when you start the container. You can find instructions at https://hub.docker.com/_/postgres
If you must use a custom image, you will need to initialize the database yourself, usually by running initdb or whatever your system provides.
But really, you should use the appropriate Docker image, and if you need more services, start them in their own containers and connect them to the Postgres one.
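For example, a minimal sketch with the official image (the password and host path are placeholders; /var/lib/postgresql/data is the data directory documented for that image):
docker run --name test \
  -e POSTGRES_PASSWORD=secret \
  -v /home/me/data:/var/lib/postgresql/data \
  -d postgres:14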

Enable GridGain Control Center for an Ignite cluster deployed in Google Kubernetes Engine

I deployed Ignite 2.10.0 on Google Cloud Kubernetes and it is working properly. Now I need to enable Control Center for it. [1] provides instructions to deploy the control agent backend and frontend. After deploying them I created an account and need to connect the cluster to it. [2] gives a download option but does not provide information on configuring the libraries with Kubernetes.
Please help.
[1]. https://www.gridgain.com/docs/control-center/latest/installation/kubernetes
[2]. https://www.gridgain.com/docs/control-center/latest/connect-ignite-cluster
The default Ignite Docker image doesn't contain the Control Center agent files, so I created a new Docker image based on it.
Here is the Dockerfile:
FROM apacheignite/ignite:2.10.0
WORKDIR /opt/ignite
# download and unpack the Control Center agent
RUN wget https://www.gridgain.com/media/control-center-agent/control-center-agent-2.9.0.1.zip
RUN unzip control-center-agent-2.9.0.1.zip
# copy the agent libraries and scripts into the Ignite installation
RUN cp -r libs/control-center-agent/ apache-ignite/libs/control-center-agent/
RUN cp bin/* apache-ignite/bin/
# remove the unpacked archive contents
RUN rm -r bin/
RUN rm -r libs/
RUN rm control-center-agent-2.9.0.1.zip
Or take the prebuilt Docker image from [1].
[1]. https://hub.docker.com/r/nuwansameera/ignite-with-control-center/
Use this Docker image to create a Deployment or StatefulSet.
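A minimal Deployment sketch using that image (the name, replica count, and ports are illustrative; the ports shown are the standard Ignite thin client, communication, and discovery ports):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ignite-cluster
spec:
  replicas: 2
  selector:
    matchLabels:
      app: ignite
  template:
    metadata:
      labels:
        app: ignite
    spec:
      containers:
      - name: ignite
        image: nuwansameera/ignite-with-control-center
        ports:
        - containerPort: 10800   # thin client
        - containerPort: 47100   # communication SPI
        - containerPort: 47500   # discovery SPI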

How to use Confluent CLI on docker

I have started Confluent Platform on my Windows 10 machine using Docker with the help of https://docs.confluent.io/current/quickstart/ce-docker-quickstart.html. Now I want to try the Confluent CLI, but I don't see any documentation on how to use it with Docker. Can you please suggest how I can do this?
Confluent does not provide a Docker image for the CLI at this time (that I'm aware of). Until then, you can build a simple image locally to package up the CLI for experimenting with the command.
Create Dockerfile:
FROM ubuntu:latest
RUN apt update && apt upgrade -y
RUN apt install -y curl
# install the Confluent CLI to /usr/local/bin
RUN curl -L --http1.1 https://cnfl.io/cli | sh -s -- -b /usr/local/bin
Then build with:
docker build -t confluent-cli:latest .
Then run on the cp-all-in-one network with:
$ docker run -it --rm --network="cp-all-in-one_default" confluent-cli:latest bash
Then, from the container's shell, experiment with the command:
root@421e53d4a04a:/# confluent
Manage your Confluent Platform.
Usage:
  confluent [command]

Available Commands:
  cluster      Retrieve metadata about Confluent clusters.
  completion   Print shell completion code.
  help         Help about any command
  iam          Manage RBAC, ACL and IAM permissions.
  local        Manage a local Confluent Platform development environment.
  login        Log in to Confluent Platform (required for RBAC).
  logout       Logout of Confluent Platform.
  secret       Manage secrets for Confluent Platform.
  update       Update the confluent CLI.
  version      Print the confluent CLI version.

Flags:
  -h, --help            help for confluent
  -v, --verbose count   Increase verbosity (-v for warn, -vv for info, -vvv for debug, -vvvv for trace).
      --version         version for confluent

Use "confluent [command] --help" for more information about a command.
Here is the image:
https://hub.docker.com/r/confluentinc/confluent-cli
Basically run the following commands:
devbox1@devbox1:~/onibex/wa$ docker pull confluentinc/confluent-cli
devbox1@devbox1:~/onibex/wa$ docker run confluentinc/confluent-cli
To check if the image was added:
devbox1@devbox1:~/onibex/wa$ docker ps -a | grep confluent-cli
a5ecf9223d35 confluentinc/confluent-cli
Add "sudo" if it is needed.

Is there an equivalent Debezium command to start Kafka Connect without a Docker container

The Debezium Kafka Connect command is:
docker run -it --rm --name connect -p 8083:8083 -e GROUP_ID=1 -e CONFIG_STORAGE_TOPIC=my_connect_configs -e OFFSET_STORAGE_TOPIC=my_connect_offsets -e STATUS_STORAGE_TOPIC=my_connect_statuses --link zookeeper:zookeeper --link kafka:kafka --link mysql:mysql debezium/connect:0.9
Is there an equivalent way to run this outside a Docker container, with flags to specify the ZooKeeper instance and Kafka bootstrap servers/broker? I have Kafka and ZooKeeper running locally on my Mac, not inside Docker containers.
Thanks
There are no "flags", just properties files. The docker image is just using variable substitution inside of those files.
You can refer to the Debezium installation documentation, and it is just a plugin to Kafka Connect, which is included with your Kafka installation.
Find connect-standalone.properties in your Kafka install to get started. One important property you will want to edit is plugin.path, which must be the full parent path to where you put the Debezium JAR files. The Kafka bootstrap servers are configured in that file as well.
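A minimal sketch of the relevant entries (the host and path are placeholders):
# connect-standalone.properties (excerpt)
bootstrap.servers=localhost:9092
# parent directory containing the extracted Debezium connector folder
plugin.path=/opt/connectors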
Then you would run this to start a single node
connect-standalone.sh connect-standalone.properties your-debezium-config.properties
(The Docker image runs connect-distributed.sh, but you don't need to run a cluster just on your Mac.)

JHipster - Using docker-compose on remote server

I would like to set up my JHipster project on a remote server utilising docker-compose as per here.
Am I right in thinking that (for the simplest approach) these are the steps I might follow:
Install docker on remote system.
Install docker-compose on remote system.
On laptop (with app src code) run ./mvnw package -Pprod docker:build to produce a docker image of the application.
Copy the image produced by this to remote server like this.
Install this image on remote system.
On laptop copy relevant yml files from src/main/docker to a directory (e.g. dir/on/remote) on the remote server.
Run docker-compose -f dir/on/remote/app.yml up on the remote server.
Thanks for your help.
Also any suggestions on how this process may be improved would be appreciated.
Assuming that your server is Ubuntu:
SSH to your server.
Install docker and docker-compose, install Java, and set JAVA_HOME.
There are two approaches:
create a docker image and push it to Docker Hub, if you have a Docker Hub account
create the docker image on the server itself
The second approach is better, as it reduces confusion.
Clone your repo to the server:
cd <APPLICATION_FOLDER>
Do
./mvnw package -Pprod docker:build -DskipTests
List the images created
docker images
You can leave out -DskipTests if you have tests you want to run.
Do
docker-compose -f src/main/docker/app.yml up -d
List the running containers:
docker ps -a
View the container logs:
docker logs <CONTAINER_ID>
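If you do want the first approach instead (build locally, then ship the image without a registry), a minimal sketch using docker save/load (the image name myapp is hypothetical):
# on the laptop: export the image built by ./mvnw package -Pprod docker:build
docker save myapp | gzip > myapp.tar.gz
scp myapp.tar.gz user@remote:/tmp/
# on the server: load it, then run docker-compose as above
ssh user@remote 'gunzip -c /tmp/myapp.tar.gz | docker load'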