I have a single machine with Windows 10. I installed Docker Toolbox and started my Kafka container using the command below.
docker run -it \
  -p 2181:2181 -p 3030:3030 -p 8081:8081 \
  -p 8082:8082 -p 8083:8083 -p 9092:9092 \
  -e ADV_HOST=192.168.99.100 \
  landoop/fast-data-dev
I then created topics and added data to them, but after a restart my topics are no longer available. I tried to reproduce this, and the behaviour is the same.
Please advise.
I do not think the landoop image supports this, as it is set up from scratch every time you run it. As they state in the readme:
Hit control+c to stop and remove everything
It is supposed to work as a dev and test tool.
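That said, a possible workaround (it is not covered by the landoop readme, so treat it as an untested sketch): give the container a name, do not remove it after stopping, and resume the very same container later with docker start, so whatever was written inside the container's filesystem survives the restart.

docker run -it --name kafka-dev \
  -p 2181:2181 -p 3030:3030 -p 8081:8081 \
  -p 8082:8082 -p 8083:8083 -p 9092:9092 \
  -e ADV_HOST=192.168.99.100 \
  landoop/fast-data-dev

# later, instead of a fresh docker run, restart the stopped container
docker start -ai kafka-dev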
I am trying to install a K8s cluster with Rancher. After installing Docker successfully, I ran the following command to install the Rancher container:
$ sudo docker run -d --restart=unless-stopped -p 8088:8088 rancher/server:stable
The console output I got showed the image pull timing out (output omitted), so I am not able to download the Rancher image. What could I do to make it work?
First of all, this Rancher installation is based on what is proposed at https://phoenixnap.com/kb/install-rancher-on-ubuntu. According to Docker Hub, the Rancher version installed by the script in that link is 1.x.
So I don't recommend the command proposed in the script:
sudo docker run -d --restart=unless-stopped -p 8080:8080 rancher/server:stable
The image rancher/server must be replaced by rancher/rancher:stable; that way you will install the latest Rancher version, 2.x.
Also, to avoid the timeout problem, use "docker pull" first and only then "docker run"; in other words, once the pull has finished, go ahead with the run. I recommend the following commands, which are faster and more reliable:
sudo docker pull rancher/rancher:latest
sudo docker run -d --restart=unless-stopped -p 8088:8088 -p 8443:8443 --privileged rancher/rancher:latest
After running those, everything is in good shape and you can use your Rancher. I hope this helps someone; I lost a day figuring it out, since each attempt spends hours downloading before it times out and fails.
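To verify everything came up, you can check that the container stays running and follow its startup logs (the container ID placeholder below comes from docker ps; adding --name rancher to the run command above would also work):

docker ps --filter "ancestor=rancher/rancher:latest"
docker logs -f <container-id>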
I'm running Docker for a project on GitHub (https://github.com/mydexchain/mydexchain).
I'm running the command below, which pulls the image and creates the first container.
docker run -d --rm -p 2020:2020 -v mydexchain:/var/lib/postgresql/ --privileged --log-driver=none --name mydexchain mydexchain/mydexchain:latest
I set my tracker address on port 2020.
I attach to the container with "docker attach mydexchain".
I then run the command below to create a second container.
docker run -d --rm -p 2021:2020 -v mydexchain1:/var/lib/postgresql/ --privileged --log-driver=none --name mydexchain1 mydexchain/mydexchain:latest
I set my tracker address on port 2021.
I attach to the container with "docker attach mydexchain1".
-So far everything is normal-
I then run the command below to create a third container.
docker run -d --rm -p 2022:2020 -v mydexchain2:/var/lib/postgresql/ --privileged --log-driver=none --name mydexchain2 mydexchain/mydexchain:latest
I check the containers with the docker ps command, and the output shows the container (screenshot omitted).
But as soon as I try to do anything with this container, like attach or set the tracker, the container disappears from the docker ps list (screenshot omitted).
When I check the logs, I see the following (log output omitted).
When I went through this procedure the first time, I did not encounter any errors.
I would be glad if you could help; I have been working on this for a week and cannot find a solution.
Have you ensured that all the volumes are clean and set up the same way? Usually the path /var/lib/postgresql/data is what gets mounted into the container.
It might also be related to pausing/closing your previous attach command, which kills that container, since you are missing the -i and -t flags when launching it. Those should be used to prevent it from closing; see the documentation of the attach command for more.
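A minimal sketch of that suggestion, reusing the image, port, and volume from the question: keep -i and -t so the TTY stays allocated, and detach with Ctrl-p Ctrl-q, which leaves the container running. Note that with --rm any exit removes the container entirely, which would explain it vanishing from docker ps, so --rm is dropped here.

docker run -dit -p 2022:2020 -v mydexchain2:/var/lib/postgresql/ \
  --privileged --log-driver=none --name mydexchain2 \
  mydexchain/mydexchain:latest

# attach, then detach with Ctrl-p Ctrl-q instead of Ctrl-c
docker attach mydexchain2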
Currently I am using PostgreSQL 12 in my WSL2 environment, and I wish to implement CDC with Debezium and Kafka. As this is my first time doing this, all the tutorials I found show the process with Docker.
In my case Docker is not an issue, except when it comes to Postgres: I don't want to run Postgres in Docker.
I simply want to connect Debezium and Kafka to my existing Postgres on disk. Please suggest a tutorial or a way I can make this connection; it would be a huge help. Thanks.
I did these two steps:
step 1
docker run -it --rm --name zookeeper_debezium -p 2181:2181 -p 2888:2888 -p 3888:3888 debezium/zookeeper:1.2
step 2
docker run -it --rm --name kafka -p 9092:9092 --link zookeeper_debezium:zookeeper debezium/kafka:1.2
Please follow this tutorial: https://debezium.io/documentation/reference/tutorial.html
There are a few differences in your situation:
You will not start any database in a container, neither MySQL nor PostgreSQL.
Your registration request (https://debezium.io/documentation/reference/tutorial.html#registering-connector-monitor-inventory-database) will be modified for the PostgreSQL connector; see the sketch after this list.
You must set up your database following https://debezium.io/documentation/reference/1.2/connectors/postgresql.html#postgresql-server-configuration
As you use PostgreSQL 12, I recommend the pgoutput plugin (https://debezium.io/documentation/reference/1.2/connectors/postgresql.html#postgresql-pgoutput), so you can skip the step about installing the decoding libraries.
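A sketch of what that modified registration request could look like, assuming the tutorial's Connect container is running and published on port 8083; the hostname, credentials, and database name below are placeholders you must adapt (host.docker.internal typically resolves to the host machine from inside a Docker Desktop container, which is where your Postgres listens):

curl -X POST -H "Content-Type: application/json" localhost:8083/connectors -d '{
  "name": "inventory-connector",
  "config": {
    "connector.class": "io.debezium.connector.postgresql.PostgresConnector",
    "plugin.name": "pgoutput",
    "database.hostname": "host.docker.internal",
    "database.port": "5432",
    "database.user": "postgres",
    "database.password": "postgres",
    "database.dbname": "mydb",
    "database.server.name": "dbserver1"
  }
}'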
We tried to connect to Kafka using the command below; however, we are not able to reach the broker. Could anyone help us here?
sudo docker run --rm --name=jaeger1 -e SPAN_STORAGE_TYPE=kafka -p 14267:14267/tcp -p 14268:14268/tcp -p 9411:9411/tcp jaegertracing/jaeger-collector --kafka.producer.topic="test Span" --kafka.producer.brokers=<broker>:9092 --kafka.producer.plaintext.password="" --kafka.producer.plaintext.username="<username>"
{"level":"fatal","ts":1585232844.0954845,"caller":"collector/main.go:70","msg":"Failed to init storage factory","error":"kafka: client has run out of available brokers to talk to (Is your cluster reachable?)"
Please let us know what we are missing.
The Debezium Kafka Connect command is:
docker run -it --rm --name connect -p 8083:8083 -e GROUP_ID=1 -e CONFIG_STORAGE_TOPIC=my_connect_configs -e OFFSET_STORAGE_TOPIC=my_connect_offsets -e STATUS_STORAGE_TOPIC=my_connect_statuses --link zookeeper:zookeeper --link kafka:kafka --link mysql:mysql debezium/connect:0.9
Is there an equivalent way to run this outside of a Docker container, with flags to specify the ZooKeeper instance and the Kafka bootstrap servers/broker? I have Kafka and ZooKeeper running locally on my Mac, not inside a Docker container.
Thanks
There are no "flags", just properties files. The Docker image is just doing variable substitution inside of those files.
You can refer to the Debezium installation documentation; Debezium is just a plugin for Kafka Connect, which is included with your Kafka installation.
Find connect-standalone.properties in your Kafka install to get started. One important property you will want to edit is plugin.path, which must be the full parent path to where you put the Debezium JAR files. The Kafka connection is configured in that file as well.
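For example, the relevant lines in connect-standalone.properties might look like the following; the plugin directory is a placeholder and should be the parent folder that contains your extracted debezium-connector-* directory:

bootstrap.servers=localhost:9092
key.converter=org.apache.kafka.connect.json.JsonConverter
value.converter=org.apache.kafka.connect.json.JsonConverter
offset.storage.file.filename=/tmp/connect.offsets
plugin.path=/opt/connect-plugins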
Then you would run this to start a single node:
connect-standalone.sh connect-standalone.properties your-debezium-config.properties
(The Docker image runs connect-distributed.sh, but you wouldn't need to run a cluster just on your Mac.)
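As for your-debezium-config.properties, it holds the connector settings in properties form. A sketch for the MySQL connector matching the Docker command above, with placeholder credentials and server id (property names per the Debezium MySQL connector docs):

name=inventory-connector
connector.class=io.debezium.connector.mysql.MySqlConnector
database.hostname=localhost
database.port=3306
database.user=debezium
database.password=dbz
database.server.id=184054
database.server.name=dbserver1
database.history.kafka.bootstrap.servers=localhost:9092
database.history.kafka.topic=schema-changes.inventory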