How to properly query Kafka REST Proxy? - apache-kafka

I'm running a dockerized distribution of Confluent platform:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
e6963904b485 confluentinc/cp-enterprise-control-center:7.0.1 "/etc/confluent/dock…" 11 hours ago Up 11 hours 0.0.0.0:9021->9021/tcp, :::9021->9021/tcp control-center
49ade0e752b4 confluentinc/cp-ksqldb-cli:7.0.1 "/bin/sh" 11 hours ago Up 11 hours ksqldb-cli
95b0982c0159 confluentinc/ksqldb-examples:7.0.1 "bash -c 'echo Waiti…" 11 hours ago Up 11 hours ksql-datagen
e28e3b937f6e confluentinc/cp-ksqldb-server:7.0.1 "/etc/confluent/dock…" 11 hours ago Up 11 hours 0.0.0.0:8088->8088/tcp, :::8088->8088/tcp ksqldb-server
af92bfb84cb1 confluentinc/cp-kafka-rest:7.0.1 "/etc/confluent/dock…" 11 hours ago Up 11 hours 0.0.0.0:8082->8082/tcp, :::8082->8082/tcp rest-proxy
318a999e76dc cnfldemos/cp-server-connect-datagen:0.5.0-6.2.0 "/etc/confluent/dock…" 11 hours ago Up 11 hours 0.0.0.0:8083->8083/tcp, :::8083->8083/tcp, 9092/tcp connect
0c299fbda7c5 confluentinc/cp-schema-registry:7.0.1 "/etc/confluent/dock…" 11 hours ago Up 11 hours 0.0.0.0:8081->8081/tcp, :::8081->8081/tcp schema-registry
a33075002386 confluentinc/cp-server:7.0.1 "/etc/confluent/dock…" 11 hours ago Up 11 hours 0.0.0.0:9092->9092/tcp, :::9092->9092/tcp, 0.0.0.0:9101->9101/tcp, :::9101->9101/tcp broker
135f832fbccb confluentinc/cp-zookeeper:7.0.1 "/etc/confluent/dock…" 11 hours ago Up 11 hours 2888/tcp, 0.0.0.0:2181->2181/tcp, :::2181->2181/tcp, 3888/tcp zookeeper
Kafka REST Proxy is running on port 8082.
When I issue an HTTP GET call against the REST Proxy:
curl --silent -X GET http://10.0.0.253:8082/kafka/clusters/ | jq
All I get is:
{
"error_code": 404,
"message": "HTTP 404 Not Found"
}
Given my configuration, what can I change to actually get some useful information out of Kafka REST Proxy?
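No answer is shown here, but one likely cause (an assumption based on Confluent's documented URL layout): the standalone REST Proxy serves the v3 API under /v3, while the /kafka/v3 prefix belongs to the REST API embedded in the Confluent Server broker (port 8090). A sketch of the corrected call, using the host and port from the question:

```shell
# The standalone REST Proxy (port 8082) serves the v3 API under /v3,
# not /kafka — /kafka/v3 is the prefix of the broker-embedded REST API.
curl --silent http://10.0.0.253:8082/v3/clusters | jq
```

If this returns a cluster listing, the rest of the v3 API (topics, brokers, consumer groups) hangs off the returned cluster id.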

Related

Kafka REST proxy: how to retrieve and deserialize Kafka data based on AVRO schema stored in schema-registry

I am new to Kafka. I run a Docker-based Kafka ecosystem on my local machine, including broker, ZooKeeper, Schema Registry, and REST Proxy. I also have an external producer (temp-service), which sends Avro-serialized data to the topic temp-topic on the broker.
$ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
411564d10c06 confluentinc/cp-kafka-rest:latest "/etc/confluent/dock…" 27 seconds ago Up 25 seconds 0.0.0.0:8082->8082/tcp kafka_kafka-rest_1
38c4e3ea008c confluentinc/cp-schema-registry:latest "/etc/confluent/dock…" 30 seconds ago Up 27 seconds 0.0.0.0:8081->8081/tcp kafka_schema-registry_1
7abe6cf9a7a0 confluentinc/cp-kafka:latest "/etc/confluent/dock…" 30 minutes ago Up 30 seconds 0.0.0.0:9092->9092/tcp kafka
bdffd9e03088 confluentinc/cp-zookeeper:latest "/etc/confluent/dock…" 30 minutes ago Up 30 seconds 2888/tcp, 0.0.0.0:2181->2181/tcp, 3888/tcp zookeeper
d1909c6877c5 temp-service:latest "node /home/tempserv…" 3 hours ago Up 2 hours (healthy) 0.0.0.0:8107->8107/tcp, 0.0.0.0:9107->9107/tcp, 0.0.0.0:9229->9229/tcp temp-service
I have also posted the Avro schema used by temp-service to the Schema Registry, where it is stored under id 1.
I created a consumer group temp_consumers and a consumer instance temp_consumer_instance:
$ curl -X POST -H "Content-Type: application/vnd.kafka.v2+json" --data '{"name": "temp_consumer_instance", "format": "avro", "auto.offset.reset": "earliest"}' http://localhost:8082/consumers/temp_consumers
{"instance_id":"temp_consumer_instance","base_uri":"http://kafka-rest:8082/consumers/temp_consumers/instances/temp_consumer_instance"}
Checked the topics in Kafka:
$ curl -X GET http://localhost:8082/topics
["temp-topic","_schemas"]
Subscribed to the topic temp-topic:
$ curl -X POST -H "Content-Type: application/vnd.kafka.v2+json" --data '{"topics":["temp-topic"]}' http://localhost:8082/consumers/temp_consumers/instances/temp_consumer_instance/subscription
Tried to consume the records in the topic, but failed:
$ curl -X GET -H "Accept: application/vnd.kafka.binary.v2+json" http://localhost:8082/consumers/temp_consumers/instances/temp_consumer_instance/records
{"error_code":40601,"message":"The requested embedded data format does not match the deserializer for this consumer instance"}
Is there a way to deserialize the Kafka data posted by the producer, based on the Avro schema stored in the Schema Registry?
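Error 40601 points at a mismatch between how the consumer instance was created and how the records are fetched: the instance was created with "format": "avro", but the records request used the binary Accept header. A sketch of the matching fetch, reusing the consumer names from the question:

```shell
# Fetch records with the Avro embedded format so it matches the consumer's
# "format": "avro" — the proxy then deserializes values via Schema Registry.
curl -X GET -H "Accept: application/vnd.kafka.avro.v2+json" \
  http://localhost:8082/consumers/temp_consumers/instances/temp_consumer_instance/records
```

With the matching header, the response should contain the decoded JSON form of the Avro records rather than a 40601 error.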

How to install collections in an AWX container

Is it possible to install collections in an AWX container?
docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
c4d255148cc8 netboxcommunity/netbox:latest "/opt/netbox/docker-…" 5 days ago Up 5 days 0.0.0.0:8000->8080/tcp, :::8000->8080/tcp netbox-docker_netbox_1
ac0784c16861 netboxcommunity/netbox:latest "/opt/netbox/venv/bi…" 5 days ago Up 5 days netbox-docker_netbox-worker_1
31b850bf8d12 redis:6-alpine "docker-entrypoint.s…" 5 days ago Up 5 days 6379/tcp netbox-docker_redis-cache_1
df0977f446f4 postgres:12-alpine "docker-entrypoint.s…" 5 days ago Up 5 days 5432/tcp netbox-docker_postgres_1
983b698274af redis:6-alpine "docker-entrypoint.s…" 5 days ago Up 5 days 6379/tcp netbox-docker_redis_1
4150e6ae71cc ansible/awx:17.1.0 "/usr/bin/tini -- /u…" 6 days ago Up 6 days 8052/tcp awx_task
5583bbf60d45 ansible/awx:17.1.0 "/usr/bin/tini -- /b…" 6 days ago Up 6 days 0.0.0.0:80->8052/tcp, :::80->8052/tcp awx_web
c9d92412d1cd redis "docker-entrypoint.s…" 6 days ago Up 6 days 6379/tcp awx_redis
71b99bde5d41 postgres:12 "docker-entrypoint.s…" 6 days ago Up 6 days 5432/tcp awx_postgres
Got the solution:
Create a folder: collections
Create a file: requirements.yml
File content:
collections:
- name: community.general
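For context (an assumption about the intent of this solution): when a collections/requirements.yml sits in an AWX project, AWX installs the listed collections during the project update. The same file can also be installed manually inside a container; a sketch, assuming docker exec access to the awx_task container from the listing above and a hypothetical path for the file:

```shell
# Install the collections from requirements.yml inside the AWX task container.
# The container name comes from the question; the file path is hypothetical.
docker exec awx_task ansible-galaxy collection install \
  -r /path/to/collections/requirements.yml
```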

Keycloak LetsEncrypt Nginx Reverse Proxy Docker Compose

I am trying to set up a Keycloak instance with an SSL connection over an nginx proxy. My docker ps output:
d7fd473cc77b jboss/keycloak "/opt/jboss/tools/do…" 34 minutes ago Up 8 minutes 0.0.0.0:8080->8080/tcp, 8443/tcp auth
76e757bbe129 mariadb "sh -c ' echo 'CREAT…" 34 minutes ago Up 8 minutes 0.0.0.0:3306->3306/tcp backend-database
d99e23470955 stilliard/pure-ftpd:hardened-latest "/bin/sh -c '/run.sh…" 34 minutes ago Up 8 minutes 0.0.0.0:21->21/tcp, 0.0.0.0:30000->30000/tcp, 30001-30009/tcp ftp-server
95f4fbdea0de wordpress:latest "docker-entrypoint.s…" 35 minutes ago Up 35 minutes 80/tcp wordpress
b3e40ca6de48 mariadb:latest "docker-entrypoint.s…" 35 minutes ago Up 35 minutes 3306/tcp database
e5c12bb5ba52 nginx "/docker-entrypoint.…" 37 minutes ago Up 37 minutes 0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp nginx-web
c0ac90a6c408 jrcs/letsencrypt-nginx-proxy-companion "/bin/bash /app/entr…" 37 minutes ago Up 37 minutes nginx-letsencrypt
33ae7de5f598 jwilder/docker-gen "/usr/local/bin/dock…" 37 minutes ago Up 37 minutes nginx-gen
As you can see in the console output above, I am also running an instance of WordPress in a Docker container, and it works like a charm: no problems with unsigned or invalid SSL certificates, everything is fine. But when I try to call the web interface of Keycloak over the domain with the corresponding port (in my case: 8080), I get the following error:
Error code: SSL_ERROR_RX_RECORD_TOO_LONG
And when I try to call the web interface over the IP address, also with the corresponding port, I get a message that the connection isn't secure.
Hopefully this is enough information to figure out what I've done wrong.
So far,
Daniel
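No answer is shown here, but SSL_ERROR_RX_RECORD_TOO_LONG usually means the browser spoke HTTPS to a port serving plain HTTP, in this case Keycloak's 8080. A hedged sketch of one fix under the setup above: let the nginx-proxy/letsencrypt companion terminate TLS and forward to Keycloak's HTTP port, instead of hitting 8080 directly. The environment variable names follow the jboss/keycloak and nginx-proxy images; the domain is hypothetical:

```shell
# Sketch: run Keycloak behind the existing nginx-proxy + letsencrypt companion.
docker run -d --name auth \
  -e PROXY_ADDRESS_FORWARDING=true \
  -e VIRTUAL_HOST=auth.example.com \
  -e VIRTUAL_PORT=8080 \
  -e LETSENCRYPT_HOST=auth.example.com \
  jboss/keycloak
# PROXY_ADDRESS_FORWARDING makes Keycloak honor X-Forwarded-* headers;
# VIRTUAL_HOST/VIRTUAL_PORT tell nginx-proxy to route the domain to port 8080.
```

The browser then connects to https://auth.example.com on 443, and nginx forwards plain HTTP to Keycloak internally.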

Kubeadm - no port 6443 after cluster creation

I'm trying to create a Kubernetes HA cluster using kubeadm.
Kubeadm version: v1.11.1
I'm following these instructions: kubeadm ha
Everything passed OK, except the final point: nodes can't see each other on port 6443.
sudo netstat -an | grep 6443
Shows nothing.
In journalctl -u kubelet I see the following error:
reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: Get https://<LB>:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-19-111-200.ec2.internal&limit=500&resourceVersion=0: dial tcp 172.19.111.200:6443: connect: connection refused
List of docker runs on instance:
sudo docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
e3eabb527a92 0e4a34a3b0e6 "kube-scheduler --ad…" 19 hours ago Up 19 hours k8s_kube-scheduler_kube-scheduler-ip-172-19-111-200.ec2.internal_kube-system_31eabaff7d89a40d8f7e05dfc971cdbd_1
123e78fa73c7 55b70b420785 "kube-controller-man…" 19 hours ago Up 19 hours k8s_kube-controller-manager_kube-controller-manager-ip-172-19-111-200.ec2.internal_kube-system_85384ca66dd4dc0adddc63923e2425a8_1
e0aa05e74fb9 1d3d7afd77d1 "/usr/local/bin/kube…" 19 hours ago Up 19 hours k8s_kube-proxy_kube-proxy-xh5dg_kube-system_f6bc49bc-959e-11e8-be29-0eaa4481e274_0
f5eac0b8fe7b k8s.gcr.io/pause:3.1 "/pause" 19 hours ago Up 19 hours k8s_POD_kube-proxy-xh5dg_kube-system_f6bc49bc-959e-11e8-be29-0eaa4481e274_0
541011b3e83a k8s.gcr.io/pause:3.1 "/pause" 19 hours ago Up 19 hours k8s_POD_etcd-ip-172-19-111-200.ec2.internal_kube-system_84d934eebaace20c70e0f268eb100028_0
a5e203947686 k8s.gcr.io/pause:3.1 "/pause" 19 hours ago Up 19 hours k8s_POD_kube-scheduler-ip-172-19-111-200.ec2.internal_kube-system_31eabaff7d89a40d8f7e05dfc971cdbd_0
89dbcdda659c k8s.gcr.io/pause:3.1 "/pause" 19 hours ago Up 19 hours k8s_POD_kube-apiserver-ip-172-19-111-200.ec2.internal_kube-system_4202bb793950ae679b2a433ea8711d18_0
5948e629d90e k8s.gcr.io/pause:3.1 "/pause" 19 hours ago Up 19 hours k8s_POD_kube-controller-manager-ip-172-19-111-200.ec2.internal_kube-system_85384ca66dd4dc0adddc63923e2425a8_0
Forwarding in sysctl exists:
sudo sysctl -p
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.tcp_syncookies = 1
net.ipv4.icmp_echo_ignore_broadcasts = 1
net.ipv4.conf.all.accept_redirects = 0
net.ipv4.ip_forward = 1
Nodes can't see each other on port 6443.
It seems like your API server is not running.
The fact that you get an error stating :6443: connect: connection refused points towards your API server not running.
This is further confirmed by your list of running Docker containers on the instance: the API server container is missing. Note that you have the related "/pause" container, but no container running "kube-apiserver --…". Your scheduler and controller-manager appear to run correctly, but the API server does not.
Now you have to dig in and see what prevented your API server from starting properly. Check the kubelet logs on all control-plane nodes.
This can also happen if your Linux kernel is not configured to handle IPv4/IPv6 transparently: an IPv4 address configured while the kube-apiserver listens on an IPv6 interface breaks the connection.
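The advice above can be turned into concrete checks; a sketch, assuming shell access to a control-plane node and the standard kubeadm file layout:

```shell
# Is a kube-apiserver container present at all? -a includes exited ones,
# which is where a crash-looping apiserver would show up.
sudo docker ps -a | grep kube-apiserver

# What does kubelet report about the apiserver static pod?
sudo journalctl -u kubelet --no-pager | grep -i apiserver | tail -n 20

# The static pod manifest kubelet tries to launch (standard kubeadm path):
sudo cat /etc/kubernetes/manifests/kube-apiserver.yaml
```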

Deploying to GCE through Jenkins creates 20 octet-stream files every time

I have just gotten my Jenkins VM working in GCE so that I can deploy through a URL, and it's working nicely.
However, every time I deploy something, 20 files are saved in my bucket, all named some gibberish, with the type application/octet-stream:
009d705c4df3f1dad977db3848777703330f221b 641.43 KB application/octet-stream 6 minutes ago
0f29dadc1db1c1e3bd68b5e87c87030b28ff737e 51 B application/octet-stream 6 minutes ago
12179b7c0898cca08d1f2724b0e789ae77b539f4 3.55 KB application/octet-stream 6 minutes ago
14876ba2bedf347151e0268c8dde15e71c88b388 6.12 KB application/octet-stream 6 minutes ago
3a4948f3c6b79bf9240222609241f881c408d04d 1.63 KB application/octet-stream 6 minutes ago
3bc3db8eb76aaced6191a7dcf812796a6fa5057a 2.5 KB application/octet-stream 6 minutes ago
6b4646e0ae099703f738053bfaeeede3a1f8a67e 46.1 KB application/octet-stream 6 minutes ago
6d77bca129e58bbf053bbabc86c23b9103bdea0d 194 B application/octet-stream 6 minutes ago
8059e3541a420a5a2f60d99c46d8cc4a5bba3b8f 48.14 KB application/octet-stream 6 minutes ago
92b798df48237e525d34018efdb7f2aace4fdbb4 523.43 KB application/octet-stream 6 minutes ago
b1f22e54252cdb6a84e92414709340d668c33d3a 1,022 B application/octet-stream 6 minutes ago
bad8545d6a001b02f6225c2aade36b2100581d0d 2.83 KB application/octet-stream 6 minutes ago
bd80270cee4f7e90baed299f1d6ae1be55e7b4a5 10.45 KB application/octet-stream 6 minutes ago
c626a57d3f004800b634679fa1963d7c09ae585b 2.19 KB application/octet-stream 6 minutes ago
ce093c434a0f35df34034e6fc58d1889364cfdc2 1.66 KB application/octet-stream 6 minutes ago
d7a8d655f068e92a18971d74b410963e35251c8a 422 B application/octet-stream 6 minutes ago
d8dfb95d6e41de19f4112d99d1485a628096848d 185 B application/octet-stream 6 minutes ago
e06cd18064609994d27a83438b0e3dcbfebc5c67 1.39 KB application/octet-stream 6 minutes ago
f20ec52431df2c411bcb60965b3a2c212405f747 7.72 KB application/octet-stream 6 minutes ago
fb51d1d331190dfc3f2b6756bea79ff3ff92755d 815 B application/octet-stream 6 minutes ago
Does anyone know what's causing this in Jenkins?
I opened one of the files and there was some text about Linotype and VeriSign.
When Jenkins runs the build, test, and deploy steps for your application, it uploads the logs and any build artifacts to Google Cloud Storage, where you can view them, as mentioned in this Help Center article [1].
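The hash-like names are consistent with content-addressed artifact uploads. One way to confirm what the objects actually are (a sketch — the bucket name is hypothetical, the object name is taken from the listing above) is to inspect one with gsutil and identify its file type locally:

```shell
# Show the stored metadata (size, content type) of one uploaded object.
gsutil stat gs://my-jenkins-bucket/009d705c4df3f1dad977db3848777703330f221b

# Download it and let `file` identify the real format — e.g. font files
# could possibly explain the Linotype/VeriSign text the asker saw.
gsutil cp gs://my-jenkins-bucket/009d705c4df3f1dad977db3848777703330f221b /tmp/obj
file /tmp/obj
```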