How to run Kafka REST proxy on Windows - apache-kafka

How do I run the Kafka REST proxy on Windows?
I downloaded confluent-2.0.1-2.11.7.tar.gz, but in the windows folder I cannot see kafka-rest-start.

Windows isn't currently a supported platform. However, it should work fine if you adapt the script. Even just running java io.confluent.kafkarest.KafkaRestMain with the appropriate classpath should work.

Here's an example of the command they actually execute at the end of the bash script (broken across lines here with cmd's ^ continuation character for readability):
java -Xmx256M -server -XX:+UseG1GC -XX:MaxGCPauseMillis=20 -XX:InitiatingHeapOccupancyPercent=35 ^
  -XX:+DisableExplicitGC -Djava.awt.headless=true ^
  -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false ^
  -Dlog4j.configuration=file:C:/Dev/kafka/confluent-4.0.0/etc/kafka-rest/log4j.properties ^
  -cp .;C:/Dev/kafka/confluent-4.0.0/target/kafka-rest-*-development/share/java/kafka-rest/*;C:/Dev/kafka/confluent-4.0.0/share/java/confluent-common/*;C:/Dev/kafka/confluent-4.0.0/share/java/rest-utils/*;C:/Dev/kafka/confluent-4.0.0/share/java/kafka-rest/* ^
  io.confluent.kafkarest.KafkaRestMain C:/Dev/kafka/confluent-4.0.0/etc/kafka-rest/kafka-rest.properties
Make sure you change the paths to yours if you want to try it out.
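If you want something reusable, a minimal sketch of a kafka-rest-start.bat is below, derived from the command above. It is hypothetical (no such script ships in the Windows folder), the CONFLUENT_HOME value is an assumption, and the GC/JMX flags are dropped for brevity:
@echo off
rem Hypothetical kafka-rest-start.bat; adjust CONFLUENT_HOME to your install.
set CONFLUENT_HOME=C:/Dev/kafka/confluent-4.0.0
set CP=%CONFLUENT_HOME%/share/java/kafka-rest/*;%CONFLUENT_HOME%/share/java/rest-utils/*;%CONFLUENT_HOME%/share/java/confluent-common/*
java -Xmx256M -Dlog4j.configuration=file:%CONFLUENT_HOME%/etc/kafka-rest/log4j.properties -cp "%CP%" io.confluent.kafkarest.KafkaRestMain %CONFLUENT_HOME%/etc/kafka-rest/kafka-rest.properties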

Perhaps this answer will help anybody who is new to Kafka and stumbles upon this situation, like me :).
I was looking for an answer to the very same question a week ago, came across the official suggestion to run the jar files (under confluent-x.x.x\share\java\kafka-rest) on Windows, and was NOT successful in doing so.
I always ran into the error no main attribute found, with or without specifying the proper classpath and io.confluent.kafkarest.KafkaRestMain.
I even tried running the shell scripts packaged for Linux distributions using babun (http://babun.github.io/), but that resulted in an error like Error: Could not find or load main class io.confluent.kafkarest.KafkaRestMain.
Eventually, a Docker image built with zookeeper, kafka, schema-registry and kafka-rest worked like a charm.
Here is the official page with info about the image name, and a further reference to its docs: https://hub.docker.com/r/confluentinc/cp-kafka-rest/
Upon pulling this image, a new virtual machine gets created with four more images inside it (one for each service: zookeeper, kafka, schema-registry and kafka-rest). Running each image starts a separate Docker container.
This guide should get you started quickly:
http://docs.confluent.io/current/cp-docker-images/docs/quickstart.html
And finally, if you would like to expose the Kafka REST proxy server running as a Docker container to an outside network (like a Windows machine on a separate network from these containers), just put the Docker host IP (find it by running docker-machine ip <hostname>) in KAFKA_REST_LISTENERS and expose the port with the -p option, like this:
docker run -d \
--net=host \
--name=kafka-rest \
-p 8082:8082 \
-e KAFKA_REST_ZOOKEEPER_CONNECT=localhost:32181 \
-e KAFKA_REST_LISTENERS=http://192.168.99.100:8082 \
-e KAFKA_REST_SCHEMA_REGISTRY_URL=http://localhost:8081 \
-e KAFKA_REST_HOST_NAME=localhost \
confluentinc/cp-kafka-rest:3.2.1
If everything is OK, you will be able to access the REST proxy at http://<Docker_host_IP>:8082 from the Windows machine.
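As a quick sanity check from the Windows machine (assuming 192.168.99.100 is your Docker host IP, as in the example above), the proxy's topic listing endpoint should answer:
curl http://192.168.99.100:8082/topics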

I was able to run the command that @lexler mentioned above, but outside of cygwin (directly with the Windows command prompt).

Related

When you use Kafka (in a Docker container), where exactly is the plugin path?

I use Kafka and Kafka Connect (image: confluentinc/cp-kafka-connect).
When you run Kafka in Docker containers, do you have to go into a container to operate Kafka (something like docker exec -it kafka or docker exec -it kafka-connect; that is another question I want to ask), right?
I tried putting some connectors (JDBC connector, MySQL connector) into the kafka-connect container, but it didn't work.
So my question is:
after docker-compose up, if I want to run Connect with some connectors (./bin/connect-distributed.sh ./etc/kafka/connect-distributed.properties), which container do I have to go into?
And when I set the plugin path, where should I write it (kafka? kafka-connect?)
Sorry if this is difficult to read.
No, you don't need to exec into anything unless you cannot download Kafka on your host machine to get the CLI scripts. And you'd only exec for kafka-topics, the console producer/consumer, kafka-consumer-groups, etc., not for any of the Connect scripts.
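For example, listing topics by exec'ing into the broker container would look like this (the container name kafka and the internal listener localhost:9092 are assumptions; match them to your compose file):
docker exec -it kafka kafka-topics --bootstrap-server localhost:9092 --list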
The Connect container automatically runs the distributed script, and you simply provide CONNECT_PLUGIN_PATH as an environment variable pointing at any directory in the container you want to use for plugins (I like /opt/connectors if I mount a volume, but that's not where confluent-hub installs to for that image). That variable doesn't do anything for the broker image, only for Connect.
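As a sketch, the relevant part of a docker-compose.yml might look like this, assuming a host directory ./connectors holds your plugin jars; the many other CONNECT_* variables the image requires (bootstrap servers, group id, storage topics, converters) are omitted here:
kafka-connect:
  image: confluentinc/cp-kafka-connect
  ports:
    - "8083:8083"
  volumes:
    - ./connectors:/opt/connectors
  environment:
    CONNECT_PLUGIN_PATH: /usr/share/java,/opt/connectors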
Related: How to install connectors to the docker image of apache kafka connect
If your requirement is to start up Kafka Connect, you can use the basic guide published by Confluent, "Build Your Own Apache Kafka® Demos".
Basically you need to execute the following instructions:
git clone https://github.com/confluentinc/cp-all-in-one.git
cd cp-all-in-one/cp-all-in-one
git checkout 7.1.1-post
docker-compose up -d
This brings up Control Center at http://localhost:9021
If you need to install a connector, you can go to https://www.confluent.io/hub and select your specific connector.
Then you can create your own Docker image of a specific Kafka Connect server.
1.- Write a Dockerfile:
vim Dockerfile
2.- Add a connector (for example, the JDBC connector) from Confluent Hub:
FROM confluentinc/cp-kafka-connect
ENV MYSQL_DRIVER_VERSION 5.1.39
RUN confluent-hub install --no-prompt confluentinc/kafka-connect-jdbc:10.5.0
RUN curl -k -SL "https://dev.mysql.com/get/Downloads/Connector-J/mysql-connector-java-${MYSQL_DRIVER_VERSION}.tar.gz" \
| tar -xzf - -C /usr/share/confluent-hub-components/confluentinc-kafka-connect-jdbc/lib \
--strip-components=1 mysql-connector-java-${MYSQL_DRIVER_VERSION}/mysql-connector-java-${MYSQL_DRIVER_VERSION}-bin.jar
3.- Build the docker image.
docker build . -t my-kafka-connect-jdbc:1.0.0
4.- Then edit your docker-compose.yml and change line 57 (the image of the connect service)
from:
image: cnfldemos/cp-server-connect-datagen:0.5.3-7.1.0
to:
image: my-kafka-connect-jdbc:1.0.0
5.- Finally, stop and start your Confluent Platform local environment:
docker-compose down
docker-compose up
Verify your containers are running:
docker ps
Test your Connect server:
curl --location --request GET 'http://localhost:8083/connectors'
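If that returns an empty list, you can register a connector through the same REST API. A hypothetical JDBC source configuration (the connector name, database URL and credentials are placeholders for your own):
curl -X POST -H "Content-Type: application/json" \
  --data '{
    "name": "jdbc-source-example",
    "config": {
      "connector.class": "io.confluent.connect.jdbc.JdbcSourceConnector",
      "connection.url": "jdbc:mysql://mysql:3306/testdb",
      "connection.user": "user",
      "connection.password": "password",
      "mode": "incrementing",
      "incrementing.column.name": "id",
      "topic.prefix": "jdbc-"
    }
  }' \
  http://localhost:8083/connectors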

Unable to debug Java app through Stackdriver in Google Kubernetes cluster

I am trying to debug a Java app on a GKE cluster through Stackdriver.
I have created a GKE cluster with "Allow full access to all Cloud APIs".
I am following the documentation: https://cloud.google.com/debugger/docs/setup/java
Here is my Dockerfile:
FROM openjdk:8-jdk-alpine
VOLUME /tmp
ARG JAR_FILE
COPY ${JAR_FILE} alnt-watchlist-microservice.jar
ENTRYPOINT ["java","-Djava.security.egd=file:/dev/./urandom","-jar","/alnt-watchlist-microservice.jar"]
The documentation says to add the following lines to the Dockerfile:
RUN mkdir /opt/cdbg && \
wget -qO- https://storage.googleapis.com/cloud-debugger/compute-java/debian-wheezy/cdbg_java_agent_gce.tar.gz | \
tar xvz -C /opt/cdbg
RUN java -agentpath:/opt/cdbg/cdbg_java_agent.so \
    -Dcom.google.cdbg.module=tpm-watchlist \
    -Dcom.google.cdbg.version=v1 \
    -jar /alnt-watchlist-microservice.jar
When I build the Dockerfile, it fails saying tar: invalid magic, tar: short read.
In the Stackdriver debug console, it always shows 'No deployed application found'. Which application will it show? I already have 2 services deployed on my Kubernetes cluster.
I have already executed
gcloud debug source gen-repo-info-file --output-directory="WEB-INF/classes/"
in my project's directory.
It generated source-context.json. After its creation, I tried building the Docker image and it is failing.
The debugger will be ready for use when you deploy your containerized app. You are getting the No deployed application found error because the debugger agent is failing to download or unzip in the Dockerfile.
Please check this discussion to resolve the tar: invalid magic, tar: short read error.
Unfortunately it looks like Alpine isn't regularly tested with Debugger. There's a sample setup here that might help you: https://github.com/GoogleCloudPlatform/cloud-debug-java#alpine-linux
I resolved the issue.
Firstly, you have to use the Java image gcr.io/google-appengine/openjdk instead of the Alpine one.
Secondly, I was writing the ENTRYPOINT arguments without comma separation (basically in the wrong format). The corrected ENTRYPOINT:
ENTRYPOINT ["java","-agentpath:/opt/cdbg/cdbg_java_agent.so", "-Djava.security.egd=file:/dev/./urandom" ,"-Dcom.google.cdbg.module=watchlist"]

Starting JBPM demo

I am trying to start the demo version of jBPM 7.26.0 (Windows).
After a successful "ant start.demo", the WildFly server log fills up with
WARN [org.kie.server.common.KeyStoreHelperUtil] (Thread-149) Unable to load key store. Using password from configuration
http://localhost:8080/jbpm-casemgmt/jbpm-cm.html never loads after logging in (spins indefinitely).
Any suggestions on how to troubleshoot?
thanks!
Besides the single-zip distribution, you can also try the provided Docker builds.
Just install Docker, run docker run -p 8080:8080 -p 8001:8001 -d --name jbpm-server-full jboss/jbpm-server-full:7.26.0.Final, and browse to http://localhost:8080/business-central.
This works fine for me.

systemctl from inside docker container fails with D-Bus connection error

I have set up a Docker container based on OpenSuse 12, installed some additional files, and copied some installer binaries into the container. So far everything is fine.
From inside a running image of the container I now need to run the aforementioned setup program, but this needs uuid.socket up and running; uuid.socket in turn needs systemctl to work correctly, and this causes an error like this:
hxehost:/usr/sap/SRCFiles # systemctl
Failed to get D-Bus connection: Unknown error -1
I started the docker container like this:
docker run -h hxehost -i -t f3096b0aa964 /bin/bash
Which, according to some postings, should start a machine container as opposed to an application container.
Can anyone tell me what I'm doing wrong here? How do I get systemctl to work inside a Docker container?
I tried to start the container with this command, which according to the linked hints should work, but to no avail:
docker run --privileged --rm -ti -e 'container=docker' -h hxehost --network="bridge" --tmpfs /run --tmpfs /tmp -v /sys/fs/cgroup:/sys/fs/cgroup:ro siliconchris/hxe:v0.0.2 /bin/bash
If I do this, systemctl still gives the exact same error.
If I start /sbin/init instead of /bin/bash, I can see that quite a lot of services are started (some, like wicked, login and module, fail). In the end, the container presents me with a login. After logging in, I can execute systemctl and it shows all services with their respective states.
Now my next question is: is this approach feasible at all?
Best regards,
Chris
You can find the repo for this image at SAP HANA Express Edition inside docker.
Most current Linux systems depend on systemd running, and systemctl will send requests to it. However, most applications install easily when the systemctl binary is replaced with a script that just interprets the start/stop/status/enable commands. As another benefit, you no longer need those complicated startup commands just to get systemd mapped into the container. Maybe that would help you? Have a look at the docker-systemctl-replacement.
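A minimal sketch of wiring it in, assuming you have downloaded systemctl3.py from the docker-systemctl-replacement repository next to your Dockerfile:
FROM opensuse/leap
# Shadow the real systemctl so installer calls like
# "systemctl start uuidd" succeed without a running systemd.
COPY systemctl3.py /usr/bin/systemctl
RUN chmod +x /usr/bin/systemctl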

strace to monitor Dockerized application activity

My goal is to monitor which ports are opened and closed by a multi-process application.
My plan is to run the application in a Docker container, in order to isolate it, and then use strace to report the application's activity.
I've tried with a dockerized Apache server:
strace -f -o /tmp/docker.out docker run -D -P apache
I don't see any line in the report file showing that the application accepted a connection on a socket.
Can strace report the activity of processes inside the container?
The issue with your command+strace combination is that Docker has a client/server model, and your docker run represents the client side of a REST API transaction asking the Docker daemon to run the Apache container on your behalf. Depending on how your client is configured, that container may not even run on the same system on which you type your docker run command.
However, to take the simplest case, where the Docker client and daemon are on the same system, you can use ps to find the PID of the running Apache server and use strace to attach to and trace the already-started process, as long as that is sufficient for your tracing needs.
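For example, on the Docker host (a sketch; the process name httpd and the network-only filter are assumptions, adjust them to your container's actual process and what you want to observe):
# find the oldest (parent) httpd process and attach to it and its children
PID=$(pgrep -o httpd)
sudo strace -f -p "$PID" -e trace=network -o /tmp/apache.strace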
Given that I had to debug several early-start issues with runc (the executor for containers in Docker version 1.11 and above), I also created a small wrapper for docker-runc which straces the container process from the start (from the outside system, so strace is not required in the container filesystem). You can find it here on GitHub, although fair warning: it is somewhat buggy for regular use, as I believe the shell+strace invocation gets in the way of some signaling between containerd and the real docker-runc and associated processes. A more elegant solution might be a variant of runc that knows how to prepend the actual start of the contained process with an strace wrapper, rather than intercepting the entire invocation of runc in an strace.
Take a look at the solution described at https://medium.com/@rothgar/how-to-debug-a-running-docker-container-from-a-separate-container-983f11740dc6 which tells you how to fire up a container with strace installed, running in the same pid and network namespace as the container/process you wish to run strace against.
This is nice since it means you don't need to install strace in the container you wish to debug.
The guts of it is that when debugging a container (caddy in the example below), you run a Docker container called strace with the appropriate tools installed:
docker run -t --pid=container:caddy \
--net=container:caddy \
--cap-add sys_admin \
--cap-add sys_ptrace \
strace
Assuming you set that up when building your strace container, you'll now have a shell with the appropriate tools, from which you can run ps and see the process in the caddy container, and you can run strace against it.
You will be in a different container, with a different file system, but you can see the file-space of the target container at /proc/$PID/root.
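For instance, from inside the strace container (caddy as in the example above; the PID value is an assumption, read it off ps first):
# the shared pid namespace makes the target's processes visible here
ps aux | grep caddy
# attach to the target, following forked children
strace -f -p 1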
I just managed to strace a docker container using these steps:
Work out what distro image the container is based on, then obtain the strace binary from that distro, e.g. by installing the corresponding distro package from a container created for that purpose.
Copy the strace binary into a volume you can mount into the container.
Also create a small wrapper shell script called entry.sh which contains your strace invocation. In my case I wrote it like this:
#!/bin/sh
exec /path/to/strace -ff -o /path/to/dumps /bin/bash /original/entrypoint
This is assuming that the original entry point, which you read from the Dockerfile of the image you want to debug, started with #!/bin/bash. Make sure you set the execute bit of this script, and place it where you also placed the strace binary.
Launch docker using a command like this:
docker run -v $PWD/shared:/path/to \
--entrypoint="/path/to/entry.sh" \
--cap-add SYS_PTRACE \
image-name
The mounted volume will make strace and entry.sh available inside the container. The entry point will do the strace invocation before calling the actual entry point. This might potentially cause some trouble with strace itself becoming pid 1 in the container and failing to reap children. If that's a problem, a different approach like the one Phil suggested would be better. Finally, the added capability tells Docker that it's OK to start tracing processes. Otherwise you'd get error messages like
strace: …: PTRACE_TRACEME doesn't work: Operation not permitted
Actually, pointing out this capability setting is the reason I'm writing my answer. I had already done all the steps except for this flag, and while searching for a solution I found both this question and a blog post by John Goulah containing that bit of information. For the sake of completeness I think the flag should be mentioned here, too. I haven't tried Phil's approach yet, so I definitely don't claim my approach is superior to what he suggested. I guess it might work more easily on systems where you don't want to mess with the Docker daemon.
You can add the parameter
--security-opt=seccomp:unconfined
I have tried it and it works well:
docker run -it --security-opt=seccomp:unconfined centos:7 /bin/bash
yum install strace
strace ls
execve("/usr/bin/ls", ["ls"], [/* 8 vars */]) = 0
brk(NULL) = 0x1d0a000
mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1,
0) = 0x7ffb588da000
access("/etc/ld.so.preload", R_OK) = -1 ENOENT (No such file or
directory)
open("/etc/ld.so.cache", O_RDONLY|O_CLOEXEC) = 3
Refer to: http://johntellsall.blogspot.com/2016/10/tip-use-strace-to-debug-issues-inside.html
Try launching Apache (docker run -d -P apache), connect inside the container (docker exec -it <container_id> bash), and then strace your Apache process from there.