Sending and reading from Kafka to MQTT exactly once - apache-kafka

For transferring MQTT messages to and from Kafka I use an MQTT Connect worker,
similar to this one. The difference is that I both read from and write to the MQTT broker. This works quite well.
┌────────────┐ ┌────────────┐
│ │ │ │
│ KAFKA │ │ MQTT │
│ ┌────────│ │ BROKER │
│ │MQTT-Con│<──────>│ │
└───┴────────┘ └────────────┘
As a next step, I want to switch from a single Kafka broker to a Kafka cluster (for now, 2 Kafka brokers).
┌─────────────────────┐
│ KAFKA CLUSTER │
│ ┌────────────┐ │
│ │ │ │
│ │ KAFKA A │ │
│ │ ┌────────│ │ ┌────────────┐
│ │ │MQTT-Con│<────│─>│ │
│ └───┴────────┘ │ │ MQTT │
│ │ │ BROKER │
│ ┌────────────┐ ┌──│─>│ │
│ │ │ │ │ └────────────┘
│ │ KAFKA B │ │ │
│ │ ┌────────│ │ │
│ │ │MQTT-Con│<─┘ │
│ └───┴────────┘ │
└─────────────────────┘
My question is: what can I do so that I don't end up with the same message duplicated in Kafka A and Kafka B?
My MQTT messages should be QoS 2, exactly once.
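One way to avoid each broker pulling the same MQTT messages independently is to run a single Kafka Connect cluster in distributed mode spanning both brokers, so there is only one logical MQTT connector whose tasks are coordinated through a shared group, while the Kafka cluster itself takes care of replicating the topic. A minimal sketch of a distributed worker config, with host names and the group id as placeholder assumptions (note that MQTT QoS 2 between the broker and the connector does not by itself give end-to-end exactly-once into Kafka):

```properties
# connect-distributed.properties (sketch; host names and ids are placeholders)
bootstrap.servers=kafka-a:9092,kafka-b:9092   # all brokers of the one cluster
group.id=mqtt-connect-cluster                 # workers sharing this group.id form one Connect cluster
key.converter=org.apache.kafka.connect.json.JsonConverter
value.converter=org.apache.kafka.connect.json.JsonConverter
# internal topics the workers use to share connector state
offset.storage.topic=connect-offsets
config.storage.topic=connect-configs
status.storage.topic=connect-status
```

With this layout you can start one worker per machine, but the MQTT source runs as a single connector instance: a message is written to the topic once and then replicated by Kafka, rather than being ingested twice.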

Related

Constant difference between producer and consumer Kafka stream metrics

Using the Kafka Streams metrics
sum(irate(kafka_producer_producer_metrics_record_send_total{}[1m]))
and
sum(irate(kafka_consumer_consumer_fetch_manager_metrics_records_consumed_total{}[1m]))
I have noticed that for every Kafka Streams app, the consumer rate is always twice as high as the producer rate. What is the reason for this behavior?
If the consumer rate is always higher, I would expect some kind of buffer to exist and the producer rate to sometimes exceed the consumer rate; but it never does, and memory doesn't explode.
Other info:
The topic has 1 partition with replication factor 2.
The Kafka Streams app from the image is a simple map.
Kafka:
  Config:
    default.replication.factor: 3
    inter.broker.protocol.version: 3.3
    min.insync.replicas: 2
    offsets.topic.replication.factor: 3
    transaction.state.log.min.isr: 2
    transaction.state.log.replication.factor: 3
Kafka Streams config:
props.put(StreamsConfig.TOPOLOGY_OPTIMIZATION_CONFIG, StreamsConfig.OPTIMIZE);
props.put(StreamsConfig.COMMIT_INTERVAL_MS_CONFIG, 1000);
props.put(StreamsConfig.NUM_STREAM_THREADS_CONFIG, 1);
props.put(StreamsConfig.REPLICATION_FACTOR_CONFIG, 2);
props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");

CloudSQL auth proxy as a service instead of sidecar in Kubernetes cluster

I am trying to move away from running the Cloud SQL proxy as a sidecar and run it as a separate service instead. I followed all the steps mentioned here: https://github.com/GoogleCloudPlatform/cloud-sql-proxy/tree/main/examples/k8s-service. After deploying to the Kubernetes cluster I am getting the error below:
┌─────────────────────────────────────────────────────────── Logs(pygeno/pgbouncer-xxxxxxx-xxxxx:pgbouncer)[1m] ───────────────────────────────────────────────────────────┐
│ Autoscroll:On FullScreen:Off Timestamps:Off Wrap:Off │
│ pgbouncer 04:58:23.60 │
│ pgbouncer 04:58:23.61 Welcome to the Bitnami pgbouncer container │
│ pgbouncer 04:58:23.61 Subscribe to project updates by watching https://github.com/bitnami/containers │
│ pgbouncer 04:58:23.62 Submit issues and feature requests at https://github.com/bitnami/containers/issues │
│ pgbouncer 04:58:23.62 │
│ pgbouncer 04:58:23.65 INFO ==> ** Starting PgBouncer setup ** │
│ pgbouncer 04:58:23.67 INFO ==> Validating settings in PGBOUNCER_* env vars... │
│ pgbouncer 04:58:23.69 INFO ==> Initializing PgBouncer... │
│ pgbouncer 04:58:23.74 INFO ==> Waiting for PostgreSQL backend to be accessible │
│ cannot resolve host "postgresql": lookup postgresql on xx.0.xx.xx:xx: no such host │
│ cannot resolve host "postgresql": lookup postgresql on xx.0.xx.xx:xx: no such host │
I got stuck here and was unable to move further.
I resolved it by adding the environment variable POSTGRESQL_HOST with the value 127.0.0.1.
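For reference, the fix expressed in the pgbouncer Deployment manifest looks roughly like this (a sketch; the image matches the Bitnami log output above, but the names are otherwise assumptions):

```yaml
# Excerpt from the pgbouncer Deployment: point PgBouncer at the
# Cloud SQL proxy explicitly instead of the non-existent "postgresql" host.
containers:
  - name: pgbouncer
    image: bitnami/pgbouncer
    env:
      - name: POSTGRESQL_HOST
        value: "127.0.0.1"   # or the DNS name of your cloud-sql-proxy Service
```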

Cannot connect to the process running Strapi from URL on Google App Engine

After deploying Strapi to Google App Engine, I can see it running due to the following log messages:
2022-09-28 04:38:51 default[20220928t143613] ┌────────────────────┬──────────────────────────────────────────────────┐
2022-09-28 04:38:51 default[20220928t143613] │ Time │ Wed Sep 28 2022 04:38:51 GMT+0000 (Coordinated … │
2022-09-28 04:38:51 default[20220928t143613] │ Launched in │ 4328 ms │
2022-09-28 04:38:51 default[20220928t143613] │ Environment │ production │
2022-09-28 04:38:51 default[20220928t143613] │ Process PID │ 11 │
2022-09-28 04:38:51 default[20220928t143613] │ Version │ 4.3.9 (node v16.17.0) │
2022-09-28 04:38:51 default[20220928t143613] │ Edition │ Community │
2022-09-28 04:38:51 default[20220928t143613] └────────────────────┴──────────────────────────────────────────────────┘
2022-09-28 04:38:51 default[20220928t143613] Actions available
2022-09-28 04:38:51 default[20220928t143613] Welcome back!
2022-09-28 04:38:51 default[20220928t143613] To manage your project 🚀, go to the administration panel at:
2022-09-28 04:38:51 default[20220928t143613] http://0.0.0.0:8081/admin
2022-09-28 04:38:51 default[20220928t143613] To access the server ⚡️, go to:
2022-09-28 04:38:51 default[20220928t143613] http://0.0.0.0:8081
I also have Cron Jobs running successfully and appearing in the log messages:
2022-09-28 05:08:10 default[20220928t150116] CRON_JOB_PRODUCT_IMPORT_START
2022-09-28 05:08:10 default[20220928t150116] CRON_JOB_PRODUCT_IMPORT_END
After the deployment, I attempt to open the website using gcloud app browse; the request eventually times out with a 404 Not Found. I see no activity in the log messages for requests to the URL.
Other useful information:
app.yaml
runtime: nodejs16
instance_class: B1
basic_scaling:
  idle_timeout: 5m
  max_instances: 1
build_env_variables:
  API_HOST: project-id.uc.r.appspot.com
  NODE_ENV: 'production'
env_variables:
  ADMIN_JWT_SECRET: 'XXXXXXXXXXXXX'
  API_TOKEN_SALT: 'XXXXXXXXXXXXX'
  APP_KEYS: 'XXXXXXXXXXXXX'
  DATABASE_HOST: '/cloudsql/project-id:us-central1:database-id'
  DATABASE_PORT: '5432'
  DATABASE_NAME: 'database-id'
  DATABASE_USERNAME: 'postgres'
  DATABASE_PASSWORD: 'XXXXXXXXXXXXX'
  DATABASE_SSL: 'false'
  HOST: '0.0.0.0'
  JWT_SECRET: 'XXXXXXXXXXXXX'
  NODE_ENV: 'production'
  PORT: '1337'
beta_settings:
  cloud_sql_instances: 'project-id:us-central1:database-id'
Dockerfile
FROM node:16
# Install libvips-dev for sharp compatibility
RUN apt-get update && apt-get install libvips-dev -y
# Node Environment (development/production)
ARG NODE_ENV=development
ENV NODE_ENV=${NODE_ENV}
RUN echo "NODE_ENV: $NODE_ENV"
ARG API_HOST=todo
ENV API_HOST=${API_HOST}
RUN echo "API_HOST: $API_HOST"
# Root directory
WORKDIR /opt/
COPY ./ .
ENV PATH /opt/node_modules/.bin:$PATH
RUN yarn config set network-timeout 600000 -g && yarn ci
WORKDIR /opt/app
RUN yarn build
EXPOSE 1337
CMD ["yarn", "start"]
I can see the service starting multiple times in the logs, whilst max_instances is set to 1. Is there a /health status check that could be failing?
I got to the end of writing this question and noticed something in the logs:
No entrypoint specified, using default entrypoint: /serve
That led me to someone's comment about exposing their Node.js application on port 8080. I opened the application on port 8080, and now Strapi is loading.
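In app.yaml terms, the change boils down to making Strapi listen on the port that App Engine's default /serve entrypoint routes to. A sketch, assuming the rest of the env_variables stay as shown above:

```yaml
env_variables:
  HOST: '0.0.0.0'
  PORT: '8080'   # App Engine forwards external traffic to this port by default
```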

How to compile frontend assets in Adonis app while it's running in a Docker container?

This Adonis application fails to compile frontend assets while running inside a Docker container. The app was scaffolded using the yarn utility container defined in the docker-compose.yaml file, by running the command docker compose run --rm yarn create adonis-ts-app app and selecting true for the ESLint, Prettier and Encore choices.
% What Actually Happens %
The application fails to compile assets, with no sign of an error from the running container:
$ docker compose logs app
docker-bind-mount-issue-app-1 | yarn run v1.22.17
docker-bind-mount-issue-app-1 | $ node ace serve --watch
docker-bind-mount-issue-app-1 | [ info ] building project...
docker-bind-mount-issue-app-1 | [ info ] starting http server...
docker-bind-mount-issue-app-1 | [ encore ] Running webpack-dev-server ...
docker-bind-mount-issue-app-1 | [ info ] watching file system for changes
docker-bind-mount-issue-app-1 | [1645110239969] INFO (app/39 on b105e817d618): started server on 0.0.0.0:3333
docker-bind-mount-issue-app-1 | ╭────────────────────────────────────────────────────────╮
docker-bind-mount-issue-app-1 | │ │
docker-bind-mount-issue-app-1 | │ Server address: http://127.0.0.1:3333 │
docker-bind-mount-issue-app-1 | │ Watching filesystem for changes: YES │
docker-bind-mount-issue-app-1 | │ Encore server address: http://localhost:8080 │
docker-bind-mount-issue-app-1 | │ │
docker-bind-mount-issue-app-1 | ╰────────────────────────────────────────────────────────╯
docker-bind-mount-issue-app-1 | [ encore ] DONE Compiled successfully in 14380ms3:04:08 PM
docker-bind-mount-issue-app-1 | [ encore ] webpack compiled successfully
% What I Expect %
Assets should be compiled by Webpack Encore and I should get a confirmation like the following:
yarn run v1.22.15
$ node ace serve --watch
[ info ] building project...
[ info ] starting http server...
[ encore ] Running webpack-dev-server ...
[ info ] watching file system for changes
[1645110644372] INFO (app/9208 on msrumon): started server on 0.0.0.0:3333
╭────────────────────────────────────────────────────────╮
│ │
│ Server address: http://127.0.0.1:3333 │
│ Watching filesystem for changes: YES │
│ Encore server address: http://localhost:8080 │
│ │
╰────────────────────────────────────────────────────────╯
UPDATE: public\assets\manifest.json
UPDATE: public\assets\entrypoints.json
[ encore ] DONE Compiled successfully in 1449ms9:10:46 PM
[ encore ] webpack compiled successfully
% Reproduction Steps %
Clone the above repository.
Install dependencies by running docker compose run --rm yarn install.
Start the development server by running docker compose up --detach app.
Browse http://localhost:3333 from any browser.
% Extra Note %
When I install the dependencies using the yarn utility container (docker compose run --rm yarn install) and then start the server directly from the host machine (yarn run dev), I get the 'encore' is not recognized as an internal or external command, operable program or batch file. error:
$ docker compose run --rm yarn install
[+] Running 1/0
- Network docker-bind-mount-issue_default Created 0.0s
yarn install v1.22.17
[1/4] Resolving packages...
[2/4] Fetching packages...
[3/4] Linking dependencies...
[4/4] Building fresh packages...
Done in 159.92s.
$ yarn run dev
yarn run v1.22.15
$ node ace serve --watch
[ info ] building project...
[ info ] starting http server...
[ encore ] 'encore' is not recognized as an internal or external command,
operable program or batch file.
[ warn ] Underlying encore dev server died with "1 code"
[1645111433640] INFO (app/27364 on msrumon): started server on 0.0.0.0:3333
[ info ] watching file system for changes
╭────────────────────────────────────────────────────────╮
│ │
│ Server address: http://127.0.0.1:3333 │
│ Watching filesystem for changes: YES │
│ Encore server address: http://localhost:8080 │
│ │
╰────────────────────────────────────────────────────────╯
Apparently node_modules/.bin was empty, but it wasn't when I ran yarn install directly on the host machine. So, out of curiosity, I tried to replicate the exact same issue with an Express app and the nodemon binary, to see whether the aforementioned directory would stay empty or not. I found that it wasn't empty, so I couldn't figure out where the problem was. I'd appreciate it if anybody could help me.
I have tried multiple combinations of different solutions, and so far this seems to be the most consistent "workaround" I've got. I'm still open to a proper solution.
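For anyone hitting the same empty node_modules/.bin symptom with a bind mount, one common pattern (not necessarily the workaround referred to above; the service and path names here are assumptions) is to mask the bind-mounted node_modules with a named volume, so the packages installed inside the Linux container are never shadowed by the host directory:

```yaml
# docker-compose.yaml excerpt: source code comes from the host,
# node_modules lives only in a container-managed volume.
services:
  app:
    build: .
    volumes:
      - ./:/home/node/app
      - node_modules:/home/node/app/node_modules
volumes:
  node_modules:
```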

Using NATS Transport Layer in HELM

I wonder if any Helm gurus can shed some light / point me in the right direction...
I'm testing a PoC where we're using Moleculer to build an app that has a few services talking in the back-end (ticket, notification). We're using NATS as the transport layer and have managed to get our services to talk to each other when running the environment either with docker-compose or with a simple Kubernetes manifest run via minikube.
Now I'm trying to bring this to our Kubernetes cluster using Helm and am struggling to get NATS to talk to the services. I've tried setting the transporter env var on the services to nats://nats:4222 and have advertised 4222 as the container port in the NATS chart.
However, once I run my helm upgrade --install and check the NATS pod status, it fails with the following output:
│ [1] 2021/03/19 14:12:21.843799 [INF] STREAM: Streaming Server is ready │
│ [1] 2021/03/19 14:12:23.569211 [ERR] 127.0.0.1:42594 - cid:5 - Client parser ERROR, state=0, i=0: proto='"GET / HTTP/1.1\r\nhost: 10.202.211"...' │
│ [1] 2021/03/19 14:12:32.679854 [ERR] 127.0.0.1:42800 - cid:6 - Client parser ERROR, state=0, i=0: proto='"GET / HTTP/1.1\r\nhost: 10.202.211"...' │
│ [1] 2021/03/19 14:12:33.568306 [ERR] 127.0.0.1:42830 - cid:7 - Client parser ERROR, state=0, i=0: proto='"GET / HTTP/1.1\r\nhost: 10.202.211"...' │
│ [1] 2021/03/19 14:12:42.679902 [ERR] 127.0.0.1:43040 - cid:8 - Client parser ERROR, state=0, i=0: proto='"GET / HTTP/1.1\r\nhost: 10.202.211"...' │
│ [1] 2021/03/19 14:12:43.568204 [ERR] 127.0.0.1:43066 - cid:9 - Client parser ERROR, state=0, i=0: proto='"GET / HTTP/1.1\r\nhost: 10.202.211"...' │
│ [1] 2021/03/19 14:12:52.680184 [ERR] 127.0.0.1:43270 - cid:10 - Client parser ERROR, state=0, i=0: proto='"GET / HTTP/1.1\r\nhost: 10.202.211"...' │
│ [1] 2021/03/19 14:12:53.570613 [ERR] 127.0.0.1:43288 - cid:11 - Client parser ERROR, state=0, i=0: proto='"GET / HTTP/1.1\r\nhost: 10.202.211"...' │
│ [1] 2021/03/19 14:12:55.655059 [INF] STREAM: Shutting down. │
│ [1] 2021/03/19 14:12:55.655252 [INF] Initiating Shutdown... │
│ [1] 2021/03/19 14:12:55.655536 [INF] Server Exiting.. │
│ stream closed
Not sure what I'm missing; should I be advertising the NATS address as an ingress?
Any guidance greatly appreciated.
Kubernetes version: Client v1.17.2, Server v1.16.13
Helm version: v3.1.1
Image: nats-streaming:latest (I'm not using the full streaming functionality, so I could downgrade to regular NATS if that's easier)
If you already have a working docker-compose YAML, I recommend using the Kompose tool to convert it to a Helm chart using the command below.
Documentation link: https://kubernetes.io/docs/tasks/configure-pod-container/translate-compose-kubernetes/
Command:
kompose convert -c
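On the pod failure itself: the repeated Client parser ERROR lines showing GET / HTTP/1.1 arriving from 127.0.0.1 usually mean something is speaking HTTP to the NATS client port (4222), most often a Kubernetes HTTP probe. If that is what is happening here, pointing the probe at the NATS HTTP monitoring port (8222 by default) instead should stop the restarts; a sketch, with the probe timing as an assumption:

```yaml
# Container spec excerpt: probe the monitoring endpoint, not the client port.
ports:
  - containerPort: 4222   # NATS client protocol (not HTTP)
  - containerPort: 8222   # HTTP monitoring endpoint
livenessProbe:
  httpGet:
    path: /
    port: 8222
  initialDelaySeconds: 10
```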