We tried to connect to Kafka using the command below; however, we are not able to reach the broker. Could anyone help us here?
sudo docker run --rm --name=jaeger1 \
  -e SPAN_STORAGE_TYPE=kafka \
  -p 14267:14267/tcp -p 14268:14268/tcp -p 9411:9411/tcp \
  jaegertracing/jaeger-collector \
  --kafka.producer.topic="test Span" \
  --kafka.producer.brokers=<broker>:9092 \
  --kafka.producer.plaintext.password="" \
  --kafka.producer.plaintext.username="<username>"
{"level":"fatal","ts":1585232844.0954845,"caller":"collector/main.go:70","msg":"Failed to init storage factory","error":"kafka: client has run out of available brokers to talk to (Is your cluster reachable?)"
Please let us know what we are missing.
I use Kafka and Kafka Connect (image: confluentinc/cp-kafka-connect).
When you use Kafka in a Docker container, if you want to operate Kafka you have to go into the container (like 'docker exec -it kafka' or 'docker exec -it kafka-connect'; that part is another thing I want to ask about), right?
I tried putting some connectors (a JDBC connector, a MySQL connector) into the kafka-connect container, but it didn't work.
So my question is:
after docker-compose up, if I want to run Connect with some connectors ('./bin/connect-distributed.sh ./etc/kafka/connect-distributed.properties'),
which container do I have to go into?
And if I set the plugin path, where should I write it? (kafka? kafka-connect?)
Sorry if this is difficult to read.
No, you don't need to exec anywhere, unless you cannot download Kafka on your host machine to get the CLI scripts. Even then, you'd only exec for kafka-topics, the console producer/consumer, kafka-consumer-groups, etc., not for any of the Connect scripts.
The Connect container automatically runs the distributed script, and you simply provide CONNECT_PLUGIN_PATH as an environment variable pointing to any directory in the container you want to use for plugins (I like /opt/connectors if I mount a volume, but that's not where confluent-hub installs to in that image). That variable doesn't do anything for the broker image, only for Connect.
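For illustration, a minimal sketch of wiring that up with docker run. The broker address broker:9092, the topic names, the group id, and the ./connectors host path are all placeholders to replace with your own:

docker run -d --name=kafka-connect \
  -p 8083:8083 \
  -e CONNECT_BOOTSTRAP_SERVERS=broker:9092 \
  -e CONNECT_REST_ADVERTISED_HOST_NAME=localhost \
  -e CONNECT_GROUP_ID=connect-cluster \
  -e CONNECT_CONFIG_STORAGE_TOPIC=connect-configs \
  -e CONNECT_OFFSET_STORAGE_TOPIC=connect-offsets \
  -e CONNECT_STATUS_STORAGE_TOPIC=connect-status \
  -e CONNECT_CONFIG_STORAGE_REPLICATION_FACTOR=1 \
  -e CONNECT_OFFSET_STORAGE_REPLICATION_FACTOR=1 \
  -e CONNECT_STATUS_STORAGE_REPLICATION_FACTOR=1 \
  -e CONNECT_KEY_CONVERTER=org.apache.kafka.connect.json.JsonConverter \
  -e CONNECT_VALUE_CONVERTER=org.apache.kafka.connect.json.JsonConverter \
  -e CONNECT_PLUGIN_PATH=/opt/connectors \
  -v $(pwd)/connectors:/opt/connectors \
  confluentinc/cp-kafka-connect

Any connector JARs you drop under ./connectors on the host then show up on the plugin path inside the container, with no exec required.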
Related: How to install connectors to the docker image of apache kafka connect
If your requirement is to start up Kafka Connect,
you can use the basic guide published by Confluent, "Build Your Own Apache Kafka® Demos".
Basically, you need to execute the following instructions:
git clone https://github.com/confluentinc/cp-all-in-one.git
cd cp-all-in-one/cp-all-in-one
git checkout 7.1.1-post
docker-compose up -d
This brings up Control Center at http://localhost:9021
If you need to install a connector, you can go to https://www.confluent.io/hub and select your specific connector.
Then you can create your own Docker image of a specific Kafka Connect server.
1.- Write a Dockerfile.
vim Dockerfile
2.- Add a connector from Confluent Hub (JDBC in this example).
FROM confluentinc/cp-kafka-connect
ENV MYSQL_DRIVER_VERSION 5.1.39
RUN confluent-hub install --no-prompt confluentinc/kafka-connect-jdbc:10.5.0
RUN curl -k -SL "https://dev.mysql.com/get/Downloads/Connector-J/mysql-connector-java-${MYSQL_DRIVER_VERSION}.tar.gz" \
| tar -xzf - -C /usr/share/confluent-hub-components/confluentinc-kafka-connect-jdbc/lib \
--strip-components=1 mysql-connector-java-${MYSQL_DRIVER_VERSION}/mysql-connector-java-${MYSQL_DRIVER_VERSION}-bin.jar
3.- Build the docker image.
docker build . -t my-kafka-connect-jdbc:1.0.0
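Optionally, you can sanity-check that the connector landed in the image. This just lists the default confluent-hub install directory inside the freshly built image:

docker run --rm --entrypoint ls my-kafka-connect-jdbc:1.0.0 /usr/share/confluent-hub-components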
4.- Then edit your docker-compose.yml and change line 57
from:
image: cnfldemos/cp-server-connect-datagen:0.5.3-7.1.0
to:
image: my-kafka-connect-jdbc:1.0.0
5.- Finally, stop and start your Confluent Platform local environment:
docker-compose down
docker-compose up
Verify your containers are running:
docker ps
Test your Connect server:
curl --location --request GET 'http://localhost:8083/connectors'
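From there, registering the JDBC connector is a POST to the same REST API. A minimal sketch, assuming a MySQL database reachable at mysql:3306 with a database named demo; the connector name, credentials, and incrementing column are placeholders:

curl --location --request POST 'http://localhost:8083/connectors' \
  --header 'Content-Type: application/json' \
  --data '{
    "name": "jdbc-source-demo",
    "config": {
      "connector.class": "io.confluent.connect.jdbc.JdbcSourceConnector",
      "connection.url": "jdbc:mysql://mysql:3306/demo",
      "connection.user": "user",
      "connection.password": "password",
      "mode": "incrementing",
      "incrementing.column.name": "id",
      "topic.prefix": "demo-"
    }
  }'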
I have been trying since yesterday to connect to a ChainLink node and have not been able to.
I followed the steps at this website
I am having a problem with "Set the Remote DATABASE_URL Config" (I think this is my only mistake, given the [ERROR] below; I do not know if I am doing something else wrong, since every other command executed without error).
I am using the Docker option to create the database listed here.
I am always having this error:
"[ERROR] unable to lock ORM: failed to connect to host=localhost user=some-postgres database=postgres: dial error (dial tcp [::1]:5432: connect: cannot assign requested address) logger/default.go:155 stacktrace=github.com/smartcontractkit/chainlink/core/logger.Errorf
/chainlink/core/logger/default.go:155"
After running this in my Ubuntu terminal (on Windows 10):
"cd ~/.chainlink-kovan && docker run -p 6688:6688 -v ~/.chainlink-kovan:/chainlink -it --env-file=.env smartcontract/chainlink:0.10.1 local n"
I do not know how to connect to the database or what to write for the attributes. All of the other steps and installs I have completed successfully.
I just want to know how to create a database in PostgreSQL and connect it to Docker as explained on the ChainLink website, and what to write in the Ubuntu terminal for the "Remote DATABASE_URL Config PostgreSQL" step, so that I can run my node.
Thanks! (PS: I am a beginner and your help is much appreciated; if I forgot to mention any important information, please let me know so that I can add it.)
A comprehensive 101 for docker-postgres can be found here: https://hackernoon.com/dont-install-postgres-docker-pull-postgres-bee20e200198
Basically, you need to deploy a Postgres DB with Docker.
Pre-reqs:
Create a dir for your docker/postgres data:
mkdir -p $HOME/docker/volumes/postgres
Example:
docker run --rm --name pg-docker -e POSTGRES_USER=<any_desired_name> -e POSTGRES_PASSWORD=docker -e POSTGRES_DB=<any_db_name> -d -p 5432:5432 -v $HOME/docker/volumes/postgres:/var/lib/postgresql/data postgres
For the Postgres username, it can be anything, like "super_chain".
For the Postgres db, it can be "chainlink".
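To confirm the database is up, a quick optional check using the example names above:

docker exec -it pg-docker psql -U super_chain -d chainlink -c '\conninfo'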
After that, Docker is up and running. Just follow the docs tutorial, where you need to write the DB URL into the .env file.
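For illustration, with the example values above (user super_chain, password docker, database chainlink) and the container publishing port 5432 on the host, the .env entry would look something like this. host.docker.internal is how a container on Docker Desktop for Windows reaches a port published on the host, and is an assumption here:

DATABASE_URL=postgresql://super_chain:docker@host.docker.internal:5432/chainlink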
Cheers
Currently I am using PostgreSQL 12 in my WSL2 environment, and I wish to implement CDC with Debezium and Kafka. As this is my first time doing this, every tutorial I found shows the process with Docker.
In my case there is no issue with Docker as long as it is not about Postgres; I don't want to run Postgres in Docker.
I simply want to connect Debezium and Kafka to my existing on-disk Postgres. Please suggest a tutorial or a way I can connect. It would be a huge help. Thanks.
I did these two steps:
step 1
docker run -it --rm --name zookeeper_debezium -p 2181:2181 -p 2888:2888 -p 3888:3888 debezium/zookeeper:1.2
step 2
docker run -it --rm --name kafka -p 9092:9092 --link zookeeper_debezium:zookeeper debezium/kafka:1.2
Please follow this tutorial: https://debezium.io/documentation/reference/tutorial.html
There are a few differences in your situation:
You will not start any database in a container, neither MySQL nor PostgreSQL.
Your registration request (https://debezium.io/documentation/reference/tutorial.html#registering-connector-monitor-inventory-database) will be modified for the PostgreSQL connector; see the sketch after the Connect command below.
You must set up your database following https://debezium.io/documentation/reference/1.2/connectors/postgresql.html#postgresql-server-configuration, as sketched right after this list.
As you use PostgreSQL 12, I recommend using the pgoutput plugin (https://debezium.io/documentation/reference/1.2/connectors/postgresql.html#postgresql-pgoutput). That way you can skip the step about installing the logical decoding libraries.
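In practice, that server configuration step mostly comes down to enabling logical replication in postgresql.conf and restarting Postgres. A minimal sketch; the sender and slot counts are assumptions you can tune:

wal_level = logical
max_wal_senders = 4
max_replication_slots = 4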
The Debezium Kafka Connect command is shown below, adjusted for your setup: the link uses your zookeeper_debezium container name, the image version matches the 1.2 images above, and the tutorial's mysql link is dropped since your database runs on the host:
docker run -it --rm --name connect -p 8083:8083 -e GROUP_ID=1 -e CONFIG_STORAGE_TOPIC=my_connect_configs -e OFFSET_STORAGE_TOPIC=my_connect_offsets -e STATUS_STORAGE_TOPIC=my_connect_statuses --link zookeeper_debezium:zookeeper --link kafka:kafka debezium/connect:1.2
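Once Connect is up, the modified registration request might look like this. A sketch only: host.docker.internal (how a container reaches the host's Postgres on Docker Desktop/WSL2) and the user, password, and database names are assumptions to replace with your own:

curl -X POST -H "Content-Type: application/json" http://localhost:8083/connectors -d '{
  "name": "inventory-connector",
  "config": {
    "connector.class": "io.debezium.connector.postgresql.PostgresConnector",
    "plugin.name": "pgoutput",
    "database.hostname": "host.docker.internal",
    "database.port": "5432",
    "database.user": "postgres",
    "database.password": "postgres",
    "database.dbname": "mydb",
    "database.server.name": "dbserver1"
  }
}'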
Is there an equivalent way to run this not inside a Docker container, with flags to specify the ZooKeeper instance and the Kafka bootstrap servers/broker? I have Kafka and ZooKeeper running locally on my Mac, but not inside a Docker container.
Thanks
There are no "flags", just properties files. The Docker image is just doing variable substitution inside of those files.
You can refer to the Debezium installation documentation; it is just a plugin for Kafka Connect, which is included with your Kafka installation.
Find connect-standalone.properties in your Kafka install to get started. One important property you will want to edit is plugin.path, which must be the full parent path to the directory where you put the Debezium JAR files. The Kafka bootstrap servers are configured in that file as well.
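For illustration, the relevant lines in connect-standalone.properties might look like this. The plugin path is an assumption; point it at the parent directory that contains the debezium-connector-postgres folder:

bootstrap.servers=localhost:9092
key.converter=org.apache.kafka.connect.json.JsonConverter
value.converter=org.apache.kafka.connect.json.JsonConverter
offset.storage.file.filename=/tmp/connect.offsets
plugin.path=/opt/connectors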
Then you would run this to start a single node:
connect-standalone.sh connect-standalone.properties your-debezium-config.properties
(The Docker image runs connect-distributed.sh, but you wouldn't need to run a cluster just on your Mac.)
I have a single machine with Windows 10. I installed Docker Toolbox and started my Kafka container using the command below.
docker run -it \
  -p 2181:2181 -p 3030:3030 -p 8081:8081 \
  -p 8082:8082 -p 8083:8083 -p 9092:9092 \
  -e ADV_HOST=192.168.99.100 \
  landoop/fast-data-dev
I then created topics and added data to them, but after a restart my topics are no longer available. I tried again, and the behaviour is the same.
Please advise.
I do not think the landoop image supports this, as it is set up from scratch every time you run it. As they state in the readme:
Hit control+c to stop and remove everything
It is supposed to work as a dev and test tool
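As a workaround based on generic Docker behaviour (nothing specific to this image): data in a container's writable layer survives a stop/start of the same container, so if you reuse one named container instead of running a fresh one each time, topics persist across restarts. The container name fast-data is a placeholder:

docker run -d --name fast-data \
  -p 2181:2181 -p 3030:3030 -p 8081:8081 \
  -p 8082:8082 -p 8083:8083 -p 9092:9092 \
  -e ADV_HOST=192.168.99.100 \
  landoop/fast-data-dev
docker stop fast-data    # stop without removing the container
docker start fast-data   # topics created earlier are still there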