Sidekiq failing to connect to PostgreSQL database

I am attempting to deploy Sidekiq as a sidecar container alongside Discourse, and I am receiving the following error:
2022-05-31T02:57:01.242Z pid=1 tid=cd1 WARN:
ActiveRecord::ConnectionNotEstablished: could not connect to server:
No such file or directory
Is the server running locally and accepting
connections on Unix domain socket "/var/run/postgresql/.s.PGSQL.5432"?
Both Sidekiq and Discourse use the same Bitnami Docker image; the only difference is that the Sidekiq container has a run file that is executed to start Sidekiq. The PostgreSQL server I am connecting to is an existing server, and Discourse itself doesn't seem to have any issues connecting to it. I have looked at the run file for Sidekiq and I don't think it's pulling the env variables properly; I have tried various variable notations thinking it was a syntax issue. Below is the deployment I am using. Any insight would be greatly appreciated.
containers:
  - name: discourse
    image: bitnami/discourse
    livenessProbe:
      tcpSocket:
        port: 3000
      initialDelaySeconds: 90
      periodSeconds: 90
    env:
      - name: DISCOURSE_HOST
        value: "xxx"
      - name: DISCOURSE_DATABASE_HOST
        value: "my-release-postgresql.default"
      - name: DISCOURSE_DATABASE_PORT_NUMBER
        value: "5432"
      - name: DISCOURSE_DATABASE_USER
        value: "postgres"
      - name: DISCOURSE_DATABASE_PASSWORD
        value: "xxx"
      - name: DISCOURSE_DATABASE_NAME
        value: "bitnami_discourse"
      - name: DISCOURSE_REDIS_HOST
        value: "redis.redis"
      - name: DISCOURSE_REDIS_PORT_NUMBER
        value: "6379"
      - name: POSTGRESQL_CLIENT_DATABASE_HOST
        value: "my-release-postgresql.default"
      - name: POSTGRESQL_CLIENT_DATABASE_PORT_NUMBER
        value: "5432"
      - name: POSTGRESQL_CLIENT_POSTGRES_USER
        value: "postgres"
      - name: POSTGRESQL_CLIENT_POSTGRES_PASSWORD
        value: "xxx"
      - name: POSTGRESQL_CLIENT_CREATE_DATABASE_NAME
        value: "bitnami_discourse"
      - name: POSTGRESQL_HOST
        value: "my-release-postgresql.default"
      - name: POSTGRESQL_PORT_NUMBER
        value: "5432"
      - name: DISCOURSE_POSTGRESQL_USERNAME
        value: "postgres"
      - name: DISCOURSE_POSTGRESQL_PASSWORD
        value: "xxx"
      - name: DISCOURSE_POSTGRESQL_NAME
        value: "bitnami_discourse"
      - name: DISCOURSE_SMTP_HOST
        value: "smtp.mailgun.com"
      - name: DISCOURSE_SMTP_PORT
        value: "587"
      - name: DISCOURSE_SMTP_USER
        value: "xxx"
      - name: DISCOURSE_SMTP_PASSWORD
        value: "xxx"
      - name: DISCOURSE_SMTP_PROTOCOL
        value: "tls"
    ports:
      - name: portone
        containerPort: 3000
      - name: porttwo
        containerPort: 5432
      - name: portthree
        containerPort: 6379
    volumeMounts:
      - mountPath: "/bitnami/discourse"
        name: discourse
  - name: sidekiq
    image: docker.io/bitnami/discourse
    command: ["/opt/bitnami/scripts/discourse-sidekiq/run.sh"]
    env:
      - name: DISCOURSE_HOST
        value: "xxx"
      - name: DISCOURSE_DATABASE_HOST
        value: "my-release-postgresql.default"
      - name: DISCOURSE_DATABASE_PORT_NUMBER
        value: "5432"
      - name: DISCOURSE_DATABASE_USER
        value: "postgres"
      - name: DISCOURSE_DATABASE_PASSWORD
        value: "xxx"
      - name: DISCOURSE_DATABASE_NAME
        value: "bitnami_discourse"
      - name: DISCOURSE_REDIS_HOST
        value: "redis.redis"
      - name: DISCOURSE_REDIS_PORT_NUMBER
        value: "6379"
      - name: DISCOURSE_SMTP_HOST
        value: "smtp.mailgun.com"
      - name: DISCOURSE_SMTP_PORT
        value: "587"
      - name: DISCOURSE_SMTP_USER
        value: "xxx"
      - name: DISCOURSE_SMTP_PASSWORD
        value: "xxx"
      - name: DISCOURSE_SMTP_PROTOCOL
        value: "tls"
      - name: POSTGRESQL_HOST
        value: "my-release-postgresql.default"
      - name: POSTGRESQL_PORT_NUMBER
        value: "5432"
      - name: DISCOURSE_POSTGRESQL_USERNAME
        value: "postgres"
      - name: DISCOURSE_POSTGRESQL_PASSWORD
        value: "xxx"
      - name: DISCOURSE_POSTGRESQL_NAME
        value: "bitnami_discourse"
      - name: POSTGRESQL_CLIENT_DATABASE_HOST
        value: "my-release-postgresql.default"
      - name: POSTGRESQL_CLIENT_DATABASE_PORT_NUMBER
        value: "5432"
      - name: POSTGRESQL_CLIENT_POSTGRES_USER
        value: "postgres"
      - name: POSTGRESQL_CLIENT_POSTGRES_PASSWORD
        value: "xxx"

Hello, you need to add one more command, /opt/bitnami/scripts/discourse-sidekiq/setup.sh, to the sidekiq container's command so that it runs before the run script.
e.g.
containers:
  - name: discourse
    image: bitnami/discourse
    livenessProbe:
      tcpSocket:
        port: 3000
      initialDelaySeconds: 90
      periodSeconds: 90
    env:
      - name: DISCOURSE_HOST
        value: "xxx"
      - name: DISCOURSE_DATABASE_HOST
        value: "my-release-postgresql.default"
      - name: DISCOURSE_DATABASE_PORT_NUMBER
        value: "5432"
      - name: DISCOURSE_DATABASE_USER
        value: "postgres"
      - name: DISCOURSE_DATABASE_PASSWORD
        value: "xxx"
      - name: DISCOURSE_DATABASE_NAME
        value: "bitnami_discourse"
      - name: DISCOURSE_REDIS_HOST
        value: "redis.redis"
      - name: DISCOURSE_REDIS_PORT_NUMBER
        value: "6379"
      - name: POSTGRESQL_CLIENT_DATABASE_HOST
        value: "my-release-postgresql.default"
      - name: POSTGRESQL_CLIENT_DATABASE_PORT_NUMBER
        value: "5432"
      - name: POSTGRESQL_CLIENT_POSTGRES_USER
        value: "postgres"
      - name: POSTGRESQL_CLIENT_POSTGRES_PASSWORD
        value: "xxx"
      - name: POSTGRESQL_CLIENT_CREATE_DATABASE_NAME
        value: "bitnami_discourse"
      - name: POSTGRESQL_HOST
        value: "my-release-postgresql.default"
      - name: POSTGRESQL_PORT_NUMBER
        value: "5432"
      - name: DISCOURSE_POSTGRESQL_USERNAME
        value: "postgres"
      - name: DISCOURSE_POSTGRESQL_PASSWORD
        value: "xxx"
      - name: DISCOURSE_POSTGRESQL_NAME
        value: "bitnami_discourse"
      - name: DISCOURSE_SMTP_HOST
        value: "smtp.mailgun.com"
      - name: DISCOURSE_SMTP_PORT
        value: "587"
      - name: DISCOURSE_SMTP_USER
        value: "xxx"
      - name: DISCOURSE_SMTP_PASSWORD
        value: "xxx"
      - name: DISCOURSE_SMTP_PROTOCOL
        value: "tls"
    ports:
      - name: portone
        containerPort: 3000
      - name: porttwo
        containerPort: 5432
      - name: portthree
        containerPort: 6379
    volumeMounts:
      - mountPath: "/bitnami/discourse"
        name: discourse
  - name: sidekiq
    image: docker.io/bitnami/discourse
    command:
      - bash
      - -c
      - |
        /opt/bitnami/scripts/discourse-sidekiq/setup.sh
        /opt/bitnami/scripts/discourse-sidekiq/run.sh
    env:
      - name: DISCOURSE_HOST
        value: "xxx"
      - name: DISCOURSE_DATABASE_HOST
        value: "my-release-postgresql.default"
      - name: DISCOURSE_DATABASE_PORT_NUMBER
        value: "5432"
      - name: DISCOURSE_DATABASE_USER
        value: "postgres"
      - name: DISCOURSE_DATABASE_PASSWORD
        value: "xxx"
      - name: DISCOURSE_DATABASE_NAME
        value: "bitnami_discourse"
      - name: DISCOURSE_REDIS_HOST
        value: "redis.redis"
      - name: DISCOURSE_REDIS_PORT_NUMBER
        value: "6379"
      - name: DISCOURSE_SMTP_HOST
        value: "smtp.mailgun.com"
      - name: DISCOURSE_SMTP_PORT
        value: "587"
      - name: DISCOURSE_SMTP_USER
        value: "xxx"
      - name: DISCOURSE_SMTP_PASSWORD
        value: "xxx"
      - name: DISCOURSE_SMTP_PROTOCOL
        value: "tls"
      - name: POSTGRESQL_HOST
        value: "my-release-postgresql.default"
      - name: POSTGRESQL_PORT_NUMBER
        value: "5432"
      - name: DISCOURSE_POSTGRESQL_USERNAME
        value: "postgres"
      - name: DISCOURSE_POSTGRESQL_PASSWORD
        value: "xxx"
      - name: DISCOURSE_POSTGRESQL_NAME
        value: "bitnami_discourse"
      - name: POSTGRESQL_CLIENT_DATABASE_HOST
        value: "my-release-postgresql.default"
      - name: POSTGRESQL_CLIENT_DATABASE_PORT_NUMBER
        value: "5432"
      - name: POSTGRESQL_CLIENT_POSTGRES_USER
        value: "postgres"
      - name: POSTGRESQL_CLIENT_POSTGRES_PASSWORD
        value: "xxx"
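If Sidekiq still fails after this change, it is worth confirming what the sidekiq container actually sees. A quick sanity check (a sketch; <discourse-pod> is a placeholder for your actual pod name):
# dump the environment the sidekiq container received (pod name is a placeholder)
kubectl exec <discourse-pod> -c sidekiq -- env | grep -E 'DISCOURSE_(DATABASE|REDIS)'
# tail sidekiq's startup output for any remaining connection errors
kubectl logs <discourse-pod> -c sidekiq --tail=50
If DISCOURSE_DATABASE_HOST shows up correctly here, the env vars are being injected and the problem lies in how the run script consumes them.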

Related

Create PostgreSQL users with Ansible from a nested list

I am trying to find the best YAML structure to maintain databases and roles/users for Postgres using Ansible. One of the structures I tested is:
---
- databases:
    - name: database1
      owner: postrgres
      users:
        - name: user1
          pass: secret
          priv: CONNECT,REPLICATION
        - name: user2
          pass: secret
          priv: CONNECT
    - name: database2
      owner: postgres
      users:
        - name: user3
          pass: secret
          priv: CONNECT
        - name: user2   # user already created above; implies users need to be created first
          pass: secret
          priv: CONNECT
But how could I loop and get only a list of users so that I could use them in:
- name: Create users
  postgresql_user:
    name: '{{ item.name }}'
    password: '{{ item.pass }}'
I may split the YAML and have something like:
---
- postgres_users:
    - user: user1
      pass: secret
    - name: user2
      pass: secret
- postgres_databases:
    - name: db1
      owner: <user> | default('postgres')
      users:
        - user: user1
          priv: XXX.YYY
        - user: user2
    - name: db2
      owner: <user> | default('postgres')
      users:
        - user: user1
          priv: ZZZ
        - user: user2
          priv: XXX
But I am still wondering how to loop over postgres_databases and, from there, use only the users.
Any ideas/tips?
Given the first structure -- and assuming that there's a typo and that databases is not actually a member of a list -- you could write:
- name: create users
  postgresql_user:
    name: "{{ item.1.name }}"
    password: "{{ item.1.pass }}"
  loop: "{{ databases|subelements('users') }}"
  loop_control:
    label: "{{ item.1.name }}"
Here's a complete reproducer; I've wrapped the postgres_user call in a debug task so that I can run it locally:
- hosts: localhost
  gather_facts: false
  vars:
    databases:
      - name: database1
        owner: postrgres
        users:
          - name: user1
            pass: secret
            priv: CONNECT,REPLICATION
          - name: user2
            pass: secret
            priv: CONNECT
      - name: database2
        owner: postgres
        users:
          - name: user3
            pass: secret
            priv: CONNECT
          - name: user2
            pass: secret
            priv: CONNECT
  tasks:
    - name: create users
      debug:
        msg:
          postgresql_user:
            name: "{{ item.1.name }}"
            password: "{{ item.1.pass }}"
      loop: "{{ databases|subelements('users') }}"
      loop_control:
        label: "{{ item.1.name }}"
This outputs:
TASK [create users] *********************************************************************************
ok: [localhost] => (item=user1) => {
    "msg": {
        "postgresql_user": {
            "name": "user1",
            "password": "secret"
        }
    }
}
ok: [localhost] => (item=user2) => {
    "msg": {
        "postgresql_user": {
            "name": "user2",
            "password": "secret"
        }
    }
}
ok: [localhost] => (item=user3) => {
    "msg": {
        "postgresql_user": {
            "name": "user3",
            "password": "secret"
        }
    }
}
ok: [localhost] => (item=user2) => {
    "msg": {
        "postgresql_user": {
            "name": "user2",
            "password": "secret"
        }
    }
}
The above will attempt to create user2 twice, but that should be okay; the second attempt won't make any changes because the user already exists. If you wanted a unique list of users you could do something like this:
- name: get unique list of users
  set_fact:
    all_users: "{{ databases|json_query('[].users[]')|unique }}"

- name: create users
  debug:
    msg:
      postgresql_user:
        name: "{{ item.name }}"
        password: "{{ item.pass }}"
  loop: "{{ all_users }}"
  loop_control:
    label: "{{ item.name }}"
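A side note on the last variant: the json_query filter depends on the jmespath Python library being installed wherever Ansible runs. Assuming the reproducer above is saved as users.yml (an assumed file name), a quick run looks like:
# json_query needs the jmespath library installed alongside Ansible
pip install jmespath
# users.yml is an assumed name for the playbook above
ansible-playbook users.yml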

MongoDB credentials are not working with StatefulSet

I have this sts:
apiVersion: "apps/v1"
kind: "StatefulSet"
metadata:
  name: "mongo-benchmark"
spec:
  serviceName: mongo-benchmark-headless
  replicas: 1
  selector:
    matchLabels:
      app: "mongo-benchmark"
  template:
    metadata:
      labels:
        app: "mongo-benchmark"
    spec:
      containers:
        - name: "mongo-benchmark"
          image: "mongo:5"
          imagePullPolicy: "IfNotPresent"
          env:
            - name: "MONGO_INITDB_ROOT_USERNAME"
              value: "admin"
            - name: "MONGO_INITDB_ROOT_PASSWORD"
              value: "admin"
          ports:
            - containerPort: 27017
              name: "mongo-port"
          volumeMounts:
            - name: "mongo-benchmark-data"
              mountPath: "/data/db"
      volumes:
        - name: "mongo-benchmark-data"
          persistentVolumeClaim:
            claimName: "mongo-benchmark-pvc"
Everything is deployed, and the root user's username and password are both admin.
But when I go to the pod terminal and execute these commands, I get:
$ mongo
> use admin
> db.auth("admin", "admin")
Error: Authentication failed.
0
I can't even read/write from/to other databases.
For example:
$ mongo
> use test
> db.col.findOne({})
uncaught exception: Error: error: {
    "ok" : 0,
    "errmsg" : "not authorized on test to execute command { find: \"col\", filter: {}, limit: 1.0, singleBatch: true, lsid: { id: UUID(\"30788b3e-48f0-4ff0-aaec-f17e20c67bde\") }, $db: \"test\" }",
    "code" : 13,
    "codeName" : "Unauthorized"
}
I don't know where I'm going wrong. Does anyone know how to authenticate?
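One thing worth ruling out (a guess, not a confirmed diagnosis): the mongo image only creates the MONGO_INITDB_ROOT_USERNAME / MONGO_INITDB_ROOT_PASSWORD user while initializing an empty /data/db. If mongo-benchmark-pvc already holds data from an earlier deployment, those env vars are silently ignored. To check whether the data directory predates this StatefulSet:
# look for database files older than the StatefulSet (pod name from the spec above)
kubectl exec mongo-benchmark-0 -- ls -lA /data/db
If old files are there, deleting the PVC (which destroys its data) and recreating the StatefulSet lets the entrypoint run its first-time initialization with the configured credentials.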

Using Pulumi and Azure, is there any API to create a SecretProviderClass without using yaml?

I'm trying to find a better way to solve this scenario than resorting to inline YAML inside a pulumi.apply call (which apparently has problems with preview).
The idea here is (using Azure Kubernetes) to create a secret and then make it available inside a pod (nginx pod here just for test purposes).
The current code works, but is there an API that I'm missing?
I started to mess around with:
const foobar = new k8s.storage.v1beta1.CSIDriver("testCSI", { ...
but I'm not really sure whether it is the right path and, if it is, what goes where to get the same effect.
Side note: no, I do not want to put secrets into environment variables. Although convenient, they leak into the GUI, logs, and possibly more places.
const provider = new k8s.Provider("provider", {
  kubeconfig: config.kubeconfig,
  namespace: "default",
});

const secret = new keyvault.Secret("mysecret", {
  resourceGroupName: environmentResourceGroupName,
  vaultName: keyVaultName,
  secretName: "just-some-secret",
  properties: {
    value: administratorLogin,
  },
});

pulumi.all([environmentTenantId, keyVaultName, clusterManagedIdentityClientId])
  .apply(([environmentTenantId, keyVaultName, clusterManagedIdentityClientId]) => {
    let yammie = `apiVersion: secrets-store.csi.x-k8s.io/v1
kind: SecretProviderClass
metadata:
  name: azure-kvname-system-msi
spec:
  provider: azure
  parameters:
    usePodIdentity: "false"
    useVMManagedIdentity: "true"
    userAssignedIdentityID: "${clusterManagedIdentityClientId}"
    keyvaultName: ${keyVaultName}
    cloudName: ""
    objects: |
      array:
        - |
          objectName: just-some-secret
          objectType: secret
    tenantId: ${environmentTenantId}`;
    const yamlConfigGroup = new k8s.yaml.ConfigGroup(
      "test-secret",
      { yaml: yammie },
      { provider: provider, dependsOn: [secret] }
    );
  });

const deployment = new k8s.apps.v1.Deployment(
  name,
  {
    metadata: {
      labels: appLabels,
    },
    spec: {
      replicas: 1,
      selector: { matchLabels: appLabels },
      template: {
        metadata: {
          labels: appLabels,
        },
        spec: {
          containers: [
            {
              name: name,
              image: "nginx:latest",
              ports: [{ name: "http", containerPort: 80 }],
              volumeMounts: [
                {
                  name: "secrets-store01-inline",
                  mountPath: "/mnt/secrets-store",
                  readOnly: true,
                },
              ],
            },
          ],
          volumes: [
            {
              name: "secrets-store01-inline",
              csi: {
                driver: "secrets-store.csi.k8s.io",
                readOnly: true,
                volumeAttributes: { secretProviderClass: "azure-kvname-system-msi" },
              },
            },
          ],
        },
      },
    },
  },
  {
    provider: provider,
  }
);
SecretProviderClass is a CustomResource, which isn't typed because the fields can be anything you want.
const secret = new k8s.apiextensions.CustomResource("cert", {
  apiVersion: "secrets-store.csi.x-k8s.io/v1",
  kind: "SecretProviderClass",
  metadata: {
    namespace: "kube-system",
  },
  spec: {
    provider: "azure",
    secretObjects: [{
      data: [{
        objectName: cert.certificate.name,
        key: "tls.key",
      }, {
        objectName: cert.certificate.name,
        key: "tls.crt",
      }],
      secretName: "ingress-tls-csi",
      type: "kubernetes.io/tls",
    }],
    parameters: {
      usePodIdentity: "true",
      keyvaultName: cert.keyvault.name,
      objects: pulumi.interpolate`array:\n  - |\n    objectName: ${cert.certificate.name}\n    objectType: secret\n`,
      tenantId: current.then(config => config.tenantId),
    },
  },
}, { provider: k8sCluster.k8sProvider });
Note: the objects array might work with JSON.stringify, but I haven't yet tried that.
If you want to get strong typing for a CRD, you can use crd2pulumi.
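For example (a sketch; the CRD manifest file name and output directory are placeholders), generating Node.js types from the SecretProviderClass CRD:
# generate typed Pulumi classes from the CRD manifest (paths are placeholders)
crd2pulumi --nodejsPath ./crds secretproviderclass-crd.yaml
The generated classes then give compile-time checking of the spec instead of the untyped CustomResource above.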

CloudFormation refuses to create subnet

I have a CloudFormation template that creates a new VPC, along with subnets, security groups, an IGW, and a route table with associations.
Everything works! Except: I'm asking CloudFormation to create 4 subnets (A, B, C, D). Instead it only creates 3 (A, B, C). It doesn't produce any errors. It just creates the VPC and everything but subnet D and says 'have a nice day'.
Here's my CF template.
---
AWSTemplateFormatVersion: 2010-09-09
Resources:
  VPC:
    Type: AWS::EC2::VPC
    Properties:
      CidrBlock: 172.16.64.0/16
      EnableDnsSupport: true
      EnableDnsHostnames: true
      InstanceTenancy: default
      Tags:
        - Key: Name
          Value: JF-Staging-VPC
  InternetGateway:
    Type: AWS::EC2::InternetGateway
    Properties:
      Tags:
        - Key: Name
          Value: JF-Staging-IGW
  VPCGatewayAttachment:
    Type: AWS::EC2::VPCGatewayAttachment
    Properties:
      VpcId: !Ref VPC
      InternetGatewayId: !Ref InternetGateway
  SubnetA:
    Type: AWS::EC2::Subnet
    Properties:
      AvailabilityZone: us-east-1a
      VpcId: !Ref VPC
      CidrBlock: 172.16.16.0/24
      MapPublicIpOnLaunch: False
      Tags:
        - Key: Name
          Value: JF-Staging-Web-Subnet-A
  SubnetB:
    Type: AWS::EC2::Subnet
    Properties:
      AvailabilityZone: us-east-1b
      VpcId: !Ref VPC
      CidrBlock: 172.16.24.0/24
      MapPublicIpOnLaunch: False
      Tags:
        - Key: Name
          Value: JF-Staging-Web-Subnet-B
  SubnetC:
    Type: AWS::EC2::Subnet
    Properties:
      AvailabilityZone: us-east-1c
      VpcId: !Ref VPC
      CidrBlock: 172.16.32.0/24
      MapPublicIpOnLaunch: False
      Tags:
        - Key: Name
          Value: JF-Staging-RDS-Subnet-C
  SubnetD:
    Type: AWS::EC2::Subnet
    Properties:
      AvailabilityZone: us-east-1d
      VpcId: !Ref VPC
      CidrBlock: 172.16.40.0/24
      MapPublicIpOnLaunch: False
      Tags:
        - Key: Name
          Value: JF-Staging-RDS-Subnet-D
  RouteTable:
    Type: AWS::EC2::RouteTable
    Properties:
      VpcId: !Ref VPC
      Tags:
        - Key: Name
          Value: JF-Staging-Default-Route-Table
  DHCPOpts:
    Type: "AWS::EC2::DHCPOptions"
    Properties:
      DomainName: stg.jokefire.com
      Tags:
        - Key: Name
          Value: JF-Staging-Default-DHCPOpts
  InternetRoute:
    Type: AWS::EC2::Route
    DependsOn: InternetGateway
    Properties:
      DestinationCidrBlock: 0.0.0.0/0
      GatewayId: !Ref InternetGateway
      RouteTableId: !Ref RouteTable
  SubnetARouteTableAssociation:
    Type: AWS::EC2::SubnetRouteTableAssociation
    Properties:
      RouteTableId: !Ref RouteTable
      SubnetId: !Ref SubnetA
  SubnetBRouteTableAssociation:
    Type: AWS::EC2::SubnetRouteTableAssociation
    Properties:
      RouteTableId: !Ref RouteTable
      SubnetId: !Ref SubnetB
  SecurityGroupSSH:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupName: "SSH Group"
      GroupDescription: "SSH traffic in, all traffic out."
      VpcId: !Ref VPC
      SecurityGroupIngress:
        - IpProtocol: tcp
          FromPort: '22'
          ToPort: '22'
          CidrIp: 0.0.0.0/0
      SecurityGroupEgress:
        - IpProtocol: -1
          CidrIp: 0.0.0.0/0
      Tags:
        - Key: Name
          Value: SSH-Access
  SecurityGroupWeb:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupName: "Web Group"
      GroupDescription: "Web traffic in, all traffic out."
      VpcId: !Ref VPC
      # HTTP and HTTPS rules merged into one list; repeating the
      # SecurityGroupIngress/SecurityGroupEgress keys would be a duplicate-key error
      SecurityGroupIngress:
        - IpProtocol: tcp
          FromPort: '80'
          ToPort: '80'
          CidrIp: 0.0.0.0/0
        - IpProtocol: tcp
          FromPort: '443'
          ToPort: '443'
          CidrIp: 0.0.0.0/0
      SecurityGroupEgress:
        - IpProtocol: -1
          CidrIp: 0.0.0.0/0
      Tags:
        - Key: Name
          Value: Web-Server-Access
  SecurityGroupDB:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupName: "DB Group"
      GroupDescription: "DB traffic in from web group, out to web group."
      VpcId: !Ref VPC
      SecurityGroupIngress:
        - IpProtocol: tcp
          FromPort: '3306'
          ToPort: '3306'
          SourceSecurityGroupId:
            Ref: SecurityGroupWeb
      SecurityGroupEgress:
        - IpProtocol: tcp
          FromPort: '3306'
          ToPort: '3306'
          SourceSecurityGroupId:
            Ref: SecurityGroupWeb
      Tags:
        - Key: Name
          Value: DB-Server-Access
What's going wrong and how do I correct this?
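One way to narrow this down is to ask CloudFormation itself what it recorded for SubnetD and which template it actually received (the stack name below is a placeholder):
# show every event CloudFormation logged for the SubnetD logical resource
aws cloudformation describe-stack-events --stack-name <stack-name> --query "StackEvents[?LogicalResourceId=='SubnetD']"
# fetch the template the service is actually using and look for SubnetD in it
aws cloudformation get-template --stack-name <stack-name>
If the first command returns nothing and SubnetD is absent from the second, the stack was created or updated from a stale copy of the template rather than the version shown above.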

Kafka monitoring via JMX

I'm using Prometheus JMX Exporter to monitor Kafka. I've defined the following pattern rules in the JMX config file:
- pattern: kafka.server<type=(.+), name=(.+)PerSec\w*, topic=(.+)><>Count
  name: kafka_server_$1_$2_total
  labels:
    topic: "$3"
- pattern: kafka.server<type=(.+), name=(.+)PerSec\w*><>Count
  name: kafka_server_$1_$2_total
  type: COUNTER
- pattern: kafka.server<type=(.+), name=(.+), clientId=(.+), topic=(.+), partition=(.*)><>(Count|Value)
  name: kafka_server_$1_$2
  labels:
    clientId: "$3"
    topic: "$4"
    partition: "$5"
- pattern: kafka.server<type=(.+), name=(.+), topic=(.+), partition=(.*)><>(Count|Value)
  name: kafka_server_$1_$2
  labels:
    topic: "$3"
    partition: "$4"
- pattern: kafka.server<type=(.+), name=(.+), topic=(.+)><>(Count|Value)
  name: kafka_server_$1_$2
  labels:
    topic: "$3"
  type: COUNTER
- pattern: kafka.server<type=(.+), name=(.+), clientId=(.+), brokerHost=(.+), brokerPort=(.+)><>(Count|Value)
  name: kafka_server_$1_$2
  labels:
    clientId: "$3"
    broker: "$4:$5"
- pattern: kafka.server<type=(.+), name=(.+), clientId=(.+)><>(Count|Value)
  name: kafka_server_$1_$2
  labels:
    clientId: "$3"
- pattern: kafka.server<type=(.+), name=(.+)><>(Count|Value)
  name: kafka_server_$1_$2
Now I'm having the following issue. When I send data to the topic in this way:
/bin/kafka-console-producer.sh --broker-list kafka-hostname:9092 --topic test1
The counter of the metric kafka_server_brokertopicmetrics_bytesin_total increases correctly.
When I try to send data by using the following code:
"use strict";
const envs = process.env;
const options = {
  "metadata.broker.list": "kafka-hostname:9092",
  "group.id": "kafka1",
  topic: "test1",
  key: "testKey"
};
const kafkesque = require("untubo")(options);
let count = 0;
const interval = setInterval(function() {
  kafkesque.push({ hello: "world", count });
  console.log("sent", count);
  count++;
}, 500);
process.once("SIGINT", function() {
  clearInterval(interval);
  console.log("closing");
  kafkesque.stop(() => {
    console.log("closed");
  });
});
In this case the metric doesn't change at all, but I can receive the messages in the consumer. I think something is not configured properly in the patterns. Do you have any idea?
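A useful first check is to dump what the exporter actually exposes while the Node.js producer is running (the port is an assumption; use whichever HTTP port your JMX exporter javaagent listens on):
# 7071 is a placeholder for the exporter's configured port
curl -s http://localhost:7071/metrics | grep -i bytesin
If the counter shows up under an unexpected name or labels, one of the earlier, more specific patterns is capturing the bean; if it doesn't move at all, the broker isn't reporting BytesInPerSec growth for that topic, which points at the producer path rather than the rewrite rules.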