I am trying to implement the dynamic security plugin in the Mosquitto broker using the following documentation:
https://mosquitto.org/documentation/dynamic-security/#installation
My mosquitto.conf file looks like this:
listener 1883
allow_anonymous false
per_listener_settings false
persistence true
persistence_location /var/lib/mosquitto/
plugin /usr/lib/x86_64-linux-gnu/mosquitto_dynamic_security.so
plugin_opt_config_file /var/lib/mosquitto/dynamic-security.json
log_dest file /var/log/mosquitto/mosquitto.log
include_dir /etc/mosquitto/conf.d
After making the configuration changes, the mosquitto service fails to start, and when I check sudo journalctl -xu mosquitto I can see an error:
Mosquitto Error: Unknown configuration variable "plugin"
The plugin is available at this path: /usr/lib/x86_64-linux-gnu/mosquitto_dynamic_security.so
and it has 777 permissions. Am I missing something? The same configuration works on Windows.
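Worth noting: as far as I know, the generic plugin keyword (and the plugin_opt_* options) was only introduced in Mosquitto 2.0; older brokers only understand auth_plugin and reject plugin with exactly this "Unknown configuration variable" error. A quick version check on Debian/Ubuntu (a sketch, not from the original post):

mosquitto -h | head -n 1                           # first line prints "mosquitto version X.Y.Z"
apt list --installed 2>/dev/null | grep mosquitto  # shows the packaged version

If the installed broker is older than 2.0, it needs to be upgraded before the dynamic security plugin can load.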
I downloaded the tarball (I am on macOS) for Confluent version 7.0.0 from the official Confluent site and was following the LOCAL (single-node) setup. Kafka and ZooKeeper are starting fine, but the Schema Registry keeps failing. (Note: I am behind a corporate VPN.)
The exception message in the Schema Registry logs is:
[2021-11-04 00:34:22,492] INFO Logging initialized #1403ms to org.eclipse.jetty.util.log.Slf4jLog (org.eclipse.jetty.util.log)
[2021-11-04 00:34:22,543] INFO Initial capacity 128, increased by 64, maximum capacity 2147483647. (io.confluent.rest.ApplicationServer)
[2021-11-04 00:34:22,614] INFO Adding listener: http://0.0.0.0:8081 (io.confluent.rest.ApplicationServer)
[2021-11-04 00:35:23,007] ERROR Error starting the schema registry (io.confluent.kafka.schemaregistry.rest.SchemaRegistryRestApplication)
io.confluent.kafka.schemaregistry.exceptions.SchemaRegistryException: Failed to get Kafka cluster ID
at io.confluent.kafka.schemaregistry.storage.KafkaSchemaRegistry.kafkaClusterId(KafkaSchemaRegistry.java:1488)
at io.confluent.kafka.schemaregistry.storage.KafkaSchemaRegistry.<init>(KafkaSchemaRegistry.java:166)
at io.confluent.kafka.schemaregistry.rest.SchemaRegistryRestApplication.initSchemaRegistry(SchemaRegistryRestApplication.java:71)
at io.confluent.kafka.schemaregistry.rest.SchemaRegistryRestApplication.configureBaseApplication(SchemaRegistryRestApplication.java:90)
at io.confluent.rest.Application.configureHandler(Application.java:271)
at io.confluent.rest.ApplicationServer.doStart(ApplicationServer.java:245)
at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:73)
at io.confluent.kafka.schemaregistry.rest.SchemaRegistryMain.main(SchemaRegistryMain.java:44)
Caused by: java.util.concurrent.TimeoutException
at java.util.concurrent.CompletableFuture.timedGet(CompletableFuture.java:1784)
at java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1928)
at org.apache.kafka.common.internals.KafkaFutureImpl.get(KafkaFutureImpl.java:180)
at io.confluent.kafka.schemaregistry.storage.KafkaSchemaRegistry.kafkaClusterId(KafkaSchemaRegistry.java:1486)
... 7 more
My schema-registry.properties file has the bootstrap URL set to:
kafkastore.bootstrap.servers=PLAINTEXT://localhost:9092
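To rule out basic connectivity before blaming the VPN, it is worth checking that a broker is actually reachable at that address (commands assumed available on macOS and in the Confluent tarball's bin directory; this is a suggestion, not part of the original setup):

nc -vz localhost 9092                                       # does anything accept TCP connections?
bin/kafka-topics --bootstrap-server localhost:9092 --list   # does the broker answer the admin API?

If the topic listing also hangs and times out, the problem is the broker connection itself rather than the Schema Registry.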
I saw some posts saying it's the Schema Registry being unable to connect to the Kafka cluster URL, potentially because of the localhost address. I am fairly new to Kafka and basically just need this local setup to run a git repo that uses some Kafka topics, so my questions are:
How can I fix this? (I am behind a corporate VPN, but I figured that shouldn't affect this.)
Do I even need the Schema Registry?
I ended up just going with the local Docker setup instead, and the only change I had to make to the Docker Compose YAML was the schema-registry port (I changed it to 8082 or 8084, I don't remember exactly, but just an unused port not taken by some other Confluent service listed in the docker-compose.yaml), and my local setup is working fine now.
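For reference, the change amounts to remapping the host port of the schema-registry service in the Compose file. A minimal sketch based on Confluent's cp-all-in-one example (the image tag, service names, and environment values here are assumptions, not copied from my setup):

  schema-registry:
    image: confluentinc/cp-schema-registry:7.0.0
    depends_on:
      - broker
    ports:
      - "8082:8081"   # host port moved to 8082; the container port stays 8081
    environment:
      SCHEMA_REGISTRY_HOST_NAME: schema-registry
      SCHEMA_REGISTRY_KAFKASTORE_BOOTSTRAP_SERVERS: 'broker:29092'
      SCHEMA_REGISTRY_LISTENERS: http://0.0.0.0:8081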
Currently I'm trying to upgrade our cluster with the Confluent Playbooks. I set up a local environment with Vagrant where I simulate our production environment.
I must be honest: I'm experimenting with quite an unusual thing, because our current setup was not installed with the Confluent Playbooks. I know the documentation says I should only use the Upgrade Playbooks if I used the Install Playbooks with the same hosts.yml.
Anyway, I'm trying to find out whether it is possible to use the official Confluent Upgrade Playbooks. It would save us a lot of time if I didn't have to create my own upgrade playbooks.
The ZooKeeper upgrade was successful, and now I'm trying to upgrade the brokers.
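The invocation I use is roughly the following (the playbook file name follows the cp-ansible repository convention; exact names vary between cp-ansible versions, so treat this as a sketch):

ansible-playbook -i hosts.yml upgrade_kafka_broker.yml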
During the upgrade, the broker goes to "failed" status. If I try to restart the broker service, I get the following error:
[2021-07-12 08:59:52,144] ERROR [KafkaServer id=1] Fatal error during KafkaServer startup. Prepare to shutdown (kafka.server.KafkaServer)
java.lang.NullPointerException
at java.util.Objects.requireNonNull(Objects.java:203)
at io.confluent.rbacapi.resources.base.AuditLogConfigResource.<init>(AuditLogConfigResource.java:78)
at io.confluent.rbacapi.resources.v1.V1AuditLogConfigResource.<init>(V1AuditLogConfigResource.java:47)
at io.confluent.rbacapi.app.RbacApiApplication.setupResources(RbacApiApplication.java:339)
at io.confluent.rest.Application.configureHandler(Application.java:258)
at io.confluent.rest.ApplicationServer.doStart(ApplicationServer.java:227)
at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:73)
at io.confluent.http.server.KafkaHttpServerImpl.doStart(KafkaHttpServerImpl.java:105)
at java.lang.Thread.run(Thread.java:748)
[2021-07-12 08:59:52,145] INFO [KafkaServer id=1] shutting down (kafka.server.KafkaServer)
[2021-07-12 08:59:52,141] WARN KafkaHttpServer transitioned from STARTING to FAILED.: null. (io.confluent.http.server.KafkaHttpServerImpl)
java.lang.NullPointerException
at java.util.Objects.requireNonNull(Objects.java:203)
at io.confluent.rbacapi.resources.base.AuditLogConfigResource.<init>(AuditLogConfigResource.java:78)
at io.confluent.rbacapi.resources.v1.V1AuditLogConfigResource.<init>(V1AuditLogConfigResource.java:47)
at io.confluent.rbacapi.app.RbacApiApplication.setupResources(RbacApiApplication.java:339)
at io.confluent.rest.Application.configureHandler(Application.java:258)
at io.confluent.rest.ApplicationServer.doStart(ApplicationServer.java:227)
at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:73)
at io.confluent.http.server.KafkaHttpServerImpl.doStart(KafkaHttpServerImpl.java:105)
at java.lang.Thread.run(Thread.java:748)
Since I see a reference to the RBAC API in the error logs above, I installed the confluent-security package, but it didn't help.
My hosts.yml file in my cp-ansible root directory looks like this:
---
all:
  vars:
    ansible_ssh_common_args: '-o StrictHostKeyChecking=no'
    ansible_connection: ssh
    ansible_user: vagrant
    ansible_become: true
    ansible_port: 22
    ansible_ssh_private_key_file: ~/.ssh/id_rsa
    sasl_protocol: scram
    ssl_enabled: true
    ssl_provided_keystore_and_truststore: true
    ssl_keystore_filepath: "/vagrant/ssl/kafka.server.keystore.jks"
    ssl_keystore_key_password: pass_key
    ssl_keystore_store_password: pass_key
    ssl_truststore_filepath: "/vagrant/ssl/kafka.server.truststore.jks"
    ssl_truststore_password: pass_trust
    rbac_enabled: false
    mds_super_user: mds
    mds_super_user_password: password
    kafka_broker_ldap_user: mds
    kafka_broker_ldap_password: password
    schema_registry_ldap_user: mds
    schema_registry_ldap_password: password
    ksql_ldap_user: mds
    ksql_ldap_password: password
    control_center_ldap_user: mds
    control_center_ldap_password: password
    create_mds_certs: false
    token_services_public_pem_file: /vagrant/ssl/mds.publickey.pem
    token_services_private_pem_file: /vagrant/ssl/mds.tokenkeypair.pem
    kafka_broker_cluster_name: broker-cluster
    schema_registry_cluster_name: schema-registry-cluster
    kafka_broker_principal: User:mds
    confluent_server_enabled: true
    kafka_broker_schema_validation_enabled: true
    kafka_broker_custom_listeners:
      broker:
        name: SSL
        port: 9093
        ssl_enabled: true
        ssl_mutual_auth_enabled: true
        sasl_protocol: none
zookeeper:
  hosts:
    bro1:
    bro2:
    bro3:
kafka_broker:
  vars:
    kafka_broker_custom_properties:
      ldap.java.naming.factory.initial: com.sun.jndi.ldap.LdapCtxFactory
      ldap.com.sun.jndi.ldap.read.timeout: 3000
      ldap.java.naming.provider.url: ldap://192.168.0.198:10389
      ldap.user.search.base: ou=TECH,ou=SPEZ-USER,o=VISA
      ldap.group.search.base: ou=TECH,ou=SPEZ-USER,o=VISA
      ldap.user.name.attribute: cn
      ldap.user.memberof.attribute.pattern: cn=(.*),ou=TECH,ou=SPEZ-USER,o=TEST
      ldap.group.name.attribute: cn
      ldap.group.member.attribute.pattern: cn=(.*),ou=TECH,ou=SPEZ-USER,o=TEST
      ldap.user.object.class: person
  hosts:
    bro1:
    bro2:
    bro3:
schema_registry:
  hosts:
    reg1:
control_center:
  hosts:
    cc1:
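As a sanity check, the inventory structure can be confirmed with standard Ansible tooling before running any playbook:

ansible-inventory -i hosts.yml --graph   # prints the resolved group/host tree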
Do you have any hints on where I should look for the issue?
My team and I are trying to move our microservices, which run on Kubernetes, to OpenJ9. However, we have run into a problem with the JMX configuration (openjdk8-openj9).
We get a connection refused when we try to connect with jvisualvm (through a Kubernetes port-forward).
We haven't changed our configuration, except for switching from HotSpot to OpenJ9.
The error:
E0312 17:09:46.286374 17160 portforward.go:400] an error occurred forwarding 1099 -> 1099: error forwarding port 1099 to pod XXXXXXX, uid : exit status 1: 2020/03/12 16:09:45 socat[31284] E connect(5, AF=2 127.0.0.1:1099, 16): Connection refused
The Java options that we use:
-Dcom.sun.management.jmxremote
-Dcom.sun.management.jmxremote.authenticate=false
-Dcom.sun.management.jmxremote.ssl=false
-Dcom.sun.management.jmxremote.local.only=false
-Dcom.sun.management.jmxremote.port=1099
-Dcom.sun.management.jmxremote.rmi.port=1099
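For context, the connection attempt looks like this (the pod name is a placeholder, as in the error above):

kubectl port-forward pod/XXXXXXX 1099:1099   # forward the JMX/RMI port locally
# then point jvisualvm at localhost:1099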
We are using the latest adoptopenjdk/openjdk8-openj9 Docker image.
Do you have any ideas?
Thank you!
Regards.
I managed to figure out why it wasn't working.
It turns out that to pass the JMX options to the service, we were using the Kubernetes YAML descriptor of the service. It looks like this:
- name: _JAVA_OPTIONS
  value: -Dzipkinserver.listOfServers=http://zipkin:9411 -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.local.only=false -Dcom.sun.management.jmxremote.port=1099 -Dcom.sun.management.jmxremote.rmi.port=1099
I realized that the JMX properties were not picked up from _JAVA_OPTIONS when the application is not launched via ENTRYPOINT in the Docker container.
So I passed the properties directly in the Dockerfile like this, and it works:
CMD ["java", "-Dcom.sun.management.jmxremote", "-Dcom.sun.management.jmxremote.authenticate=false", "-Dcom.sun.management.jmxremote.ssl=false", "-Dcom.sun.management.jmxremote.local.only=false", "-Dcom.sun.management.jmxremote.port=1099", "-Dcom.sun.management.jmxremote.rmi.port=1099", "-Djava.rmi.server.hostname=127.0.0.1", "-cp","app:app/lib/*","OurMainClass"]
It's also possible to keep _JAVA_OPTIONS and set up an ENTRYPOINT in the Dockerfile; a sketch of that alternative follows.
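A minimal sketch of that alternative (base image as in the question; paths and the main class are placeholders):

FROM adoptopenjdk/openjdk8-openj9
COPY app /app
# An exec-form ENTRYPOINT starts the JVM directly, so _JAVA_OPTIONS set in the
# Kubernetes manifest is read by the java process at startup.
ENTRYPOINT ["java", "-cp", "app:app/lib/*", "OurMainClass"]

One detail worth keeping either way is the -Djava.rmi.server.hostname=127.0.0.1 flag from the CMD above: with a port-forward, the RMI stub has to advertise an address the local client can reach.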
Thanks!
I am trying to publish messages from rsyslog to Kafka on a remote machine using the omkafka module.
My omkafka action is configured as:
if $HOSTNAME == 'localhost' then {
    action(type="omkafka"
           name="log_kafka"
           broker="192.168.100.50:9092"
           topic="rsyslog_kafka"
           errorfile="/var/log/omkafka/log_kafka_failures.log"
           template="hostipFormat"
    )
}
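As a basic reachability check from the rsyslog host, plain netcat can be used (a suggestion, not part of my configuration); if this connects while rsyslog is still denied, the block is per-process rather than network-level:

nc -vz 192.168.100.50 9092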
My Kafka instance is running fine, and I am able to publish data using the kafka-producer.bat file from another Windows machine.
But when I start my rsyslog service, I get the following error:
Feb 17 16:42:01 localhost rsyslogd: [origin software="rsyslogd" swVersion="8.24.0" x-pid="1764" x-info="http://www.rsyslog.com"] start
Feb 17 16:42:05 localhost rsyslogd: omkafka: kafka message 192.168.100.50:9092/bootstrap: Failed to connect to broker at 192.168.100.50:9092: Permission denied [v8.24.0 try http://www.rsyslog.com/e/2422 ]
Feb 17 16:42:05 localhost rsyslogd: omkafka: kafka message 1/1 brokers are down [v8.24.0 try http://www.rsyslog.com/e/2422 ]
Feb 17 16:42:05 localhost rsyslogd: omkafka: kafka message 192.168.100.50:9092/bootstrap: Failed to connect to broker at 192.168.100.50:9092: Permission denied [v8.24.0 try http://www.rsyslog.com/e/2422 ]
Feb 17 16:42:05 localhost rsyslogd: omkafka: kafka message 1/1 brokers are down [v8.24.0 try http://www.rsyslog.com/e/2422 ]
I am not sure whether this is related to omkafka or librdkafka.
Need help.
I had the same issue. Instead of disabling SELinux, and thus opening yourself up to a world of hurt, I used audit2why, which tells you exactly why something is being denied in the AVC denials. It is helpful as well in that it can tell you just what you need to do to fix the problem.
audit2why reads /var/log/audit/audit.log and then tells you why something is being denied, and sometimes it can tell you what you need to do to fix the issue. In my case it was:
type=AVC msg=audit(1492149030.280:296487): avc: denied { name_connect } for pid=2277 comm=72733A6D61696E20513A526567 dest=9092 scontext=system_u:system_r:syslogd_t:s0 tcontext=system_u:object_r:unreserved_port_t:s0 tclass=tcp_socket
Was caused by:
The boolean nis_enabled was set incorrectly.
Description:
Allow nis to enabled
Allow access by executing:
# setsebool -P nis_enabled 1
The final commented line there is exactly what needs to be typed to allow this to execute, and it can be done without disabling proper security controls. After running sudo setsebool -P nis_enabled 1, I restarted rsyslog, and Kafka was able to consume my messages just fine.
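For reference, the whole workflow boils down to a few standard policycoreutils commands (systemd service name assumed):

sudo audit2why < /var/log/audit/audit.log   # explain recent AVC denials
sudo setsebool -P nis_enabled 1             # apply the boolean audit2why suggested
sudo systemctl restart rsyslog              # restart so omkafka reconnects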
I found the reason for this issue. It happened because of SELinux on CentOS. Once I disabled SELinux, the configuration worked fine.
This is definitely related to SELinux, but because disabling SELinux is not the best choice, I corrected the problem with this:
sudo semanage port -d -t unreserved_port_t -p tcp 9092
sudo semanage port -a -t http_port_t -p tcp 9092
And then restart rsyslog.
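To confirm the new label afterwards (standard SELinux tooling):

sudo semanage port -l | grep 9092   # 9092 should now be listed under http_port_t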
I'm trying to deploy contextBroker using the command /etc/init.d/contextBroker and I get the following error:
Starting...
contextBroker is stopped
Starting contextBroker... su: user orion does not exist
cat: /var/log/contextBroker/contextBroker.pid: No such file or directory
pidfile not found [FAILED]
Using the following command I can start contextBroker:
/usr/bin/contextBroker -port 10026 -logDir /var/log/contextBroker
-pidpath /var/log/contextBroker/contextBroker.pid -dbhost localhost -db orion
What could be the cause of the problem?
There was a bug in the Orion RPM, fixed in 0.16.0, that causes the removal of the "orion" user when updating the RPM package. The "orion" user is the one used by default by the /etc/init.d/contextBroker script, thus causing the error message su: user orion does not exist.
Note that although the bug has been fixed in 0.16.0, updating from 0.15.0 (for instance) to 0.16.0 will still be problematic, as the version being updated (0.15.0) is still "buggy". Updating from 0.16.0 to any newer version (e.g. the upcoming 0.17.0) should work without problems.
Fortunately, the problem has an easy solution: instead of updating the package, remove it and install it again, typically with:
yum remove contextBroker
yum install contextBroker
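If the init script still reports the missing user after reinstalling, recreating the account by hand should also unblock it (a guess based on the error message; the exact user and group settings the package expects may differ):

sudo useradd -r orion   # system account; the init script only needs "su orion" to succeed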