Setting up an HLF v1.4 network with TLS enabled and Kafka-based ordering

I am creating an HLF v1.4 network with TLS enabled and Kafka-based ordering, but when I try to create a channel it throws an error saying:

and when I look at the orderer logs, they show:

TLS configs for the network:
Peer Configs
- CORE_PEER_TLS_ENABLED=true
- CORE_PEER_GOSSIP_SKIPHANDSHAKE=true
- CORE_PEER_TLS_CERT_FILE=/etc/hyperledger/crypto/peer/tls/server.crt
- CORE_PEER_TLS_KEY_FILE=/etc/hyperledger/crypto/peer/tls/server.key
- CORE_PEER_TLS_ROOTCERT_FILE=/etc/hyperledger/crypto/peer/tls/ca.crt
Orderer Configs
# enable TLS
- ORDERER_GENERAL_TLS_ENABLED=true
- ORDERER_GENERAL_TLS_PRIVATEKEY=/etc/hyperledger/crypto/orderer/tls/server.key
- ORDERER_GENERAL_TLS_CERTIFICATE=/etc/hyperledger/crypto/orderer/tls/server.crt
- ORDERER_GENERAL_TLS_ROOTCAS=[/etc/hyperledger/crypto/orderer/tls/ca.crt, /etc/hyperledger/crypto/peer/tls/ca.crt]
CLI Configs
- CORE_PEER_TLS_ENABLED=true
- CORE_PEER_TLS_CERT_FILE=/etc/hyperledger/peer/peers/peer0.org1/tls/server.crt
- CORE_PEER_TLS_KEY_FILE=/etc/hyperledger/peer/peers/peer0.org1/tls/server.key
- CORE_PEER_TLS_ROOTCERT_FILE=/etc/hyperledger/peer/peers/peer0.org1/tls/ca.crt
Can anyone help me in this regard?

As the error says, there is a bad certificate while creating the channel: the orderer's certificate cannot be found or validated, which is why you get "bad certificate".
In the compose.yaml file, set the environment variable FABRIC_LOGGING_SPEC=DEBUG to see exactly what the error is.
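For reference, a minimal sketch of where that variable goes in a compose service definition (the service name and image tag here are illustrative, not taken from the question):

orderer.example.com:
  image: hyperledger/fabric-orderer:1.4
  environment:
    # DEBUG logging surfaces the TLS handshake details around the failure
    - FABRIC_LOGGING_SPEC=DEBUG
    - ORDERER_GENERAL_TLS_ENABLED=true

Once the debug output confirms a certificate validation failure, check that the channel creation command passes the orderer's TLS CA certificate (the --tls --cafile flags of peer channel create) and that ORDERER_GENERAL_TLS_ROOTCAS includes the CA that issued the client's TLS certificate.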

Related

Passing JAAS config to bitnami/kafka helm chart for SASL/PLAIN

Goal:
I want to use the bitnami/kafka helm chart with SASL enabled with the PLAIN mechanism for the external client only (the client-broker, broker-broker, and broker-zookeeper connections can use the PLAINTEXT mechanism).
What I have done:
I've set the following parameters in the values.yaml file:
superUsers: User:adminuser
auth.externalClientProtocol: sasl
auth.sasl.jaas.clientUsers:
- adminuser
- otheruser
auth.sasl.jaas.clientPasswords:
- adminuserpass
- otheruserpass
auth.sasl.jaas.interBrokerUser: adminuser
And left the other parameters as they are, but it doesn't seem to be enough: the broker container goes into a BackOff state when I try to install the chart.
Question#1: Aren't these configuration parameters enough for setting up what I'm trying to achieve? Won't these create a JAAS config file for me?
From the Kafka documentation (Kafka_SASL), I have to pass a JAAS config for the broker. It can be done with the sasl.jaas.config configuration parameter. For me it should be something like this:
listener.name.EXTERNAL.plain.sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \
username="adminuser" \
password="adminuserpass" \
user_adminuser="adminuserpass" \
user_otheruser="otheruserpass";
But there doesn't seem to be any sasl.jaas.config available in bitnami/kafka's values.yaml.
Question#2: How can I pass these JAAS config values if the answer to question #1 is NO? Should I use config or extraEnvVars for this?
Thanks!
This works for me:
...
authorizerClassName: "kafka.security.authorizer.AclAuthorizer"
auth:
  clientProtocol: sasl
  externalClientProtocol: sasl
  interBrokerProtocol: plaintext
  sasl:
    mechanisms: plain,scram-sha-256,scram-sha-512
    interBrokerMechanism: plain
    jaas:
      clientUsers:
        - yourusername
      clientPasswords:
        - yourpassword
...
I don't know why, but I have to set clientProtocol: sasl;
otherwise I get java.io.IOException: /opt/bitnami/kafka/conf/kafka_jaas.conf (No such file or directory).
Also, I didn't see the error until I set image.debug: true.
Note: as you can probably see, this will also force authentication between the clients inside the cluster.
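With those values in place, an external client authenticates with standard Kafka SASL/PLAIN settings. A minimal client.properties sketch (credentials are the ones from the values above; this assumes TLS is not enabled on the external listener, hence SASL_PLAINTEXT):

security.protocol=SASL_PLAINTEXT
sasl.mechanism=PLAIN
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \
  username="yourusername" \
  password="yourpassword";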

Docker Compose Config Server unreachable by Spring Cloud Data Flow Microservices

The config server is reachable at localhost:8888, but when I deploy my applications on SCDF the following error occurs:
Fetching config from server at : http://localhost:8888
2021-07-30 14:58:53.535 INFO 143 --- [ main] o.s.b.context.config.ConfigDataLoader : Connect Timeout Exception on Url - http://localhost:8888. Will be trying the next url if available
2021-07-30 14:58:53.535 WARN 143 --- [ main] o.s.b.context.config.ConfigDataLoader : Could not locate PropertySource ([ConfigServerConfigDataResource#3de88f64 uris = array<String>['http://localhost:8888'], optional = true, profiles = list['default']]): I/O error on GET request for "http://localhost:8888/backend-service/default": Connection refused (Connection refused); nested exception is java.net.ConnectException: Connection refused (Connection refused)
The application(s) deploy successfully on SCDF apart from the config server connection. The only property I specify in SCDF is the Docker network. I'm using spring.config.import and am not using any bootstrap configuration. This all works correctly when deployed locally, but the microservices can't connect to the config server when deployed on SCDF.
Spring Boot Version: 2.5.1
app properties
spring.application.name=backend-service
spring.cloud.config.fail-fast=true
spring.cloud.config.retry.max-attempts=6
spring.cloud.config.retry.max-interval=11000
spring.config.import=optional:configserver:http://localhost:8888
config server properties
spring.cloud.config.server.git.uri=...
management.endpoints.web.exposure.include=*
spring.cloud.config.fail-fast=true
spring.cloud.config.retry.max-attempts=6
spring.cloud.config.retry.max-interval=11000
spring.cloud.bus.id=my-config-server
spring.cloud.stream.rabbit.bindings.springCloudBus.consumer.declareExchange=false
spring.rabbitmq.host=127.0.0.1
spring.rabbitmq.port=5672
spring.rabbitmq.username=guest
spring.rabbitmq.password=guest
spring.cloud.bus.enabled=true
spring.cloud.bus.refresh.enabled=true
spring.cloud.bus.env.enabled=true
server.port=8888
docker-compose.yml
version: '3.1'
services:
  h2:
    ...
  rabbitmq-container:
    image: rabbitmq:3.7.14-management
    hostname: dataflow-rabbitmq
    expose:
      - '5672'
    ports:
      - "5672:5672"
      - "15672:15672"
    networks:
      - scdfnet
  dataflow-server:
    ...
    networks:
      - scdfnet
  app-import:
    ...
    networks:
      - scdfnet
  skipper-server:
    ...
    networks:
      - scdfnet
  configserver-container:
    image: ...
    ports:
      - "8888:8888"
    expose:
      - '8888'
    environment:
      - spring_rabbitmq_host=rabbitmq-container
      - spring_rabbitmq_port=5672
      - spring_rabbitmq_username=guest
      - spring_rabbitmq_password=guest
    depends_on:
      - rabbitmq-container
    networks:
      - scdfnet
networks:
  scdfnet:
    external:
      name: scdfnet
volumes:
  h2-data:
For anyone else having this problem, I found two ways of solving it. The problem is that once the Spring Boot application is containerized, the localhost referred to in the properties file resolves inside the application container's virtual network, not to your local machine.
There are numerous Stack Overflow answers for this same error, but they all center on corrections to bootstrap properties. However, bootstrap context initialization is deprecated since Spring Boot 2.4.
The first solution is to use your IPv4 address instead of localhost.
spring.config.import=configserver:http://<insert IPv4 address>:8888
For Example:
spring.config.import=configserver:http://10.6.39.148:8888
A much better solution than hard-coding addresses is to reference the config server container running in Docker Compose:
spring.config.import=optional:configserver:http://configserver-container:8888
Make sure that all of the Docker Compose services are running on the same network (scdfnet in my case); this address will only resolve when running under Docker Compose. So if you are building the Maven project in Eclipse, you may need to remove or disable your tests to build successfully. That might be unnecessary; it could just be that some property I failed to copy to my local application.properties file is causing the context tests to fail. According to the documentation, the optional: label should allow the config client to start even if it cannot contact the config server.
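If you prefer not to bake the address into application.properties, the same setting can be supplied through the environment instead (standard Spring Boot relaxed binding); a hypothetical fragment for a container attached to the same network:

environment:
  - SPRING_CONFIG_IMPORT=optional:configserver:http://configserver-container:8888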

Kafka Upgrade from 5.4.1 to 6.1.2 with Confluent Playbooks

Currently I'm trying to upgrade our cluster with the Confluent Playbooks. I set up a local environment with Vagrant in which I simulate our production environment.
I must be honest: I'm experimenting with quite an unusual thing, because our current setup was not installed with the Confluent Playbooks. I know the documentation says I should only use the Upgrade Playbooks if I used the Install Playbooks with the same hosts.yml.
Anyway, I'm trying to find out whether it is possible to use the official Confluent Upgrade Playbooks, since it would save us a lot of time if I don't have to create my own upgrade playbooks.
The ZooKeeper upgrade was successful, and after upgrading ZooKeeper I'm now trying to upgrade the brokers.
During the upgrade the broker goes into "failed" status. If I try to restart the broker service, I get the following error:
[2021-07-12 08:59:52,144] ERROR [KafkaServer id=1] Fatal error during KafkaServer startup. Prepare to shutdown (kafka.server.KafkaServer)
java.lang.NullPointerException
at java.util.Objects.requireNonNull(Objects.java:203)
at io.confluent.rbacapi.resources.base.AuditLogConfigResource.<init>(AuditLogConfigResource.java:78)
at io.confluent.rbacapi.resources.v1.V1AuditLogConfigResource.<init>(V1AuditLogConfigResource.java:47)
at io.confluent.rbacapi.app.RbacApiApplication.setupResources(RbacApiApplication.java:339)
at io.confluent.rest.Application.configureHandler(Application.java:258)
at io.confluent.rest.ApplicationServer.doStart(ApplicationServer.java:227)
at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:73)
at io.confluent.http.server.KafkaHttpServerImpl.doStart(KafkaHttpServerImpl.java:105)
at java.lang.Thread.run(Thread.java:748)
[2021-07-12 08:59:52,145] INFO [KafkaServer id=1] shutting down (kafka.server.KafkaServer)
[2021-07-12 08:59:52,141] WARN KafkaHttpServer transitioned from STARTING to FAILED.: null. (io.confluent.http.server.KafkaHttpServerImpl)
java.lang.NullPointerException
at java.util.Objects.requireNonNull(Objects.java:203)
at io.confluent.rbacapi.resources.base.AuditLogConfigResource.<init>(AuditLogConfigResource.java:78)
at io.confluent.rbacapi.resources.v1.V1AuditLogConfigResource.<init>(V1AuditLogConfigResource.java:47)
at io.confluent.rbacapi.app.RbacApiApplication.setupResources(RbacApiApplication.java:339)
at io.confluent.rest.Application.configureHandler(Application.java:258)
at io.confluent.rest.ApplicationServer.doStart(ApplicationServer.java:227)
at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:73)
at io.confluent.http.server.KafkaHttpServerImpl.doStart(KafkaHttpServerImpl.java:105)
at java.lang.Thread.run(Thread.java:748)
Since I see a reference to the RBAC API in the error logs above, I installed the confluent-security package, but it didn't help.
The hosts.yml file in my cp-ansible root directory looks like this:
---
all:
  vars:
    ansible_ssh_common_args: '-o StrictHostKeyChecking=no'
    ansible_connection: ssh
    ansible_user: vagrant
    ansible_become: true
    ansible_port: 22
    ansible_ssh_private_key_file: ~/.ssh/id_rsa
    sasl_protocol: scram
    ssl_enabled: true
    ssl_provided_keystore_and_truststore: true
    ssl_keystore_filepath: "/vagrant/ssl/kafka.server.keystore.jks"
    ssl_keystore_key_password: pass_key
    ssl_keystore_store_password: pass_key
    ssl_truststore_filepath: "/vagrant/ssl/kafka.server.truststore.jks"
    ssl_truststore_password: pass_trust
    rbac_enabled: false
    mds_super_user: mds
    mds_super_user_password: password
    kafka_broker_ldap_user: mds
    kafka_broker_ldap_password: password
    schema_registry_ldap_user: mds
    schema_registry_ldap_password: password
    ksql_ldap_user: mds
    ksql_ldap_password: password
    control_center_ldap_user: mds
    control_center_ldap_password: password
    create_mds_certs: false
    token_services_public_pem_file: /vagrant/ssl/mds.publickey.pem
    token_services_private_pem_file: /vagrant/ssl/mds.tokenkeypair.pem
    kafka_broker_cluster_name: broker-cluster
    schema_registry_cluster_name: schema-registry-cluster
    kafka_broker_principal: User:mds
    confluent_server_enabled: true
    kafka_broker_schema_validation_enabled: true
    kafka_broker_custom_listeners:
      broker:
        name: SSL
        port: 9093
        ssl_enabled: true
        ssl_mutual_auth_enabled: true
        sasl_protocol: none

zookeeper:
  hosts:
    bro1:
    bro2:
    bro3:

kafka_broker:
  vars:
    kafka_broker_custom_properties:
      ldap.java.naming.factory.initial: com.sun.jndi.ldap.LdapCtxFactory
      ldap.com.sun.jndi.ldap.read.timeout: 3000
      ldap.java.naming.provider.url: ldap://192.168.0.198:10389
      ldap.user.search.base: ou=TECH,ou=SPEZ-USER,o=VISA
      ldap.group.search.base: ou=TECH,ou=SPEZ-USER,o=VISA
      ldap.user.name.attribute: cn
      ldap.user.memberof.attribute.pattern: cn=(.*),ou=TECH,ou=SPEZ-USER,o=TEST
      ldap.group.name.attribute: cn
      ldap.group.member.attribute.pattern: cn=(.*),ou=TECH,ou=SPEZ-USER,o=TEST
      ldap.user.object.class: person
  hosts:
    bro1:
    bro2:
    bro3:

schema_registry:
  hosts:
    reg1:

control_center:
  hosts:
    cc1:
Do you have any hints as to where I should look for the issue?

Received AliveMessage from a peer with the same PKI-ID as myself

I am attempting to port the Hyperledger Fabric Getting Started tutorial to Kubernetes but am struggling to get the peer1s to deploy. If I enable CORE_PEER_GOSSIP_BOOTSTRAP, I receive the error "Received AliveMessage from a peer with the same PKI-ID as myself".
How can I debug a peer reportedly having the same PKI-ID as another?
Using this as a starting point:
https://hyperledger-fabric.readthedocs.io/en/latest/getting_started.html
I am able to create:
- orderer and cli pods in the default namespace
- peer0s, one in each of the org1 and org2 namespaces
- peer1s, but only if I disable (comment out) CORE_PEER_GOSSIP_BOOTSTRAP
If I enable CORE_PEER_GOSSIP_BOOTSTRAP for the peer1s, I receive the following warning and error:
[gossip/gossip#10.0.0.10:7051] NewGossipService -> WARN 01c External endpoint is empty, peer will not be accessible outside of its organization
...
[gossip/discovery#10.0.0.10:7051] handleAliveMessage -> ERRO 02a Bad configuration detected: Received AliveMessage from a peer with the same PKI-ID as myself: tag:EMPTY alive_msg:<membership:<pki_id:"[[REDACTED]]" > timestamp:<inc_number:1495468533769417608 seq_num:416 > >
In order to better map the orderer and peers to DNS names, I'm using Kubernetes namespaces and this configuration:
OrdererOrgs:
  - Name: Orderer
    Domain: default.svc.cluster.local
    Specs:
      - Hostname: orderer
PeerOrgs:
  - Name: Org1
    Domain: org1.svc.cluster.local
    Template:
      Count: 2
    Users:
      Count: 2
  - Name: Org2
    Domain: org2.svc.cluster.local
    Template:
      Count: 2
    Users:
      Count: 2
In order to expose the peer0s to the other peers in the org and to expose the orderer, I have ClusterIP services for the peer0s (selecting only the peer0s) and the orderer. It's inelegant, but I'm trying to get it to work before I make it work more beautifully.
I am able to resolve orderer.default.svc.cluster.local, peer0.org1.svc.cluster.local, and peer0.org2.svc.cluster.local using nslookup from within a pod deployed to default on the cluster.
Absent a curl-like tool for gRPC, I am able to open sockets against these endpoints on 7051 and 7053.
First, make sure you are using the right certificates.
Second, verify that your environment/configuration for gossip is set correctly:
environment:
  - CORE_PEER_GOSSIP_EXTERNALENDPOINT=peer1.org1.example.com:8051
  - CORE_PEER_GOSSIP_BOOTSTRAP=peer0.org1.example.com:7051
  - CORE_PEER_GOSSIP_ENDPOINT=peer0.org1.example.com:7051
Or in core.yaml:
peer:
  gossip:
    bootstrap: peer0.org1.example.com:7051
    externalEndpoint: peer1.org1.example.com:8051
    endpoint: peer0.org1.example.com:7051
Edit: also make sure that you have properly set up your CA.
Hope this helps; it worked for me, and I was successfully able to connect the peers.
If the peers are started from the same node, it's possible that you are mounting the same crypto material (the path to the mspconfig directory) for both peers. If that is the case, separate the directory structures for the two peers, keep their respective certificates in them, update the respective msp paths in the docker-compose file, and try again.
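For illustration, a sketch of what separate mounts could look like in the compose file, assuming the standard cryptogen output layout (service names and paths here are assumptions, not taken from the question):

peer0.org1.example.com:
  volumes:
    # each peer mounts only its own MSP material
    - ./crypto-config/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/msp:/etc/hyperledger/fabric/msp
peer1.org1.example.com:
  volumes:
    - ./crypto-config/peerOrganizations/org1.example.com/peers/peer1.org1.example.com/msp:/etc/hyperledger/fabric/msp

With distinct certificates, gossip no longer sees two peers presenting the same PKI-ID.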

Apache Storm: Could not find leader nimbus from seed hosts

I installed Apache Storm 1.0 by following this tutorial, but I am not able to access the Storm UI from the Internet. Accessing localhost:8080 gives the following error:
org.apache.storm.utils.NimbusLeaderNotFoundException: Could not find leader nimbus from seed hosts [localhost]. Did you specify a valid list of nimbus hosts for config nimbus.seeds?
at org.apache.storm.utils.NimbusClient.getConfiguredClientAs(NimbusClient.java:90)
at org.apache.storm.ui.core$cluster_configuration.invoke(core.clj:343)
at org.apache.storm.ui.core$fn__12106.invoke(core.clj:929)
at org.apache.storm.shade.compojure.core$make_route$fn__2467.invoke(core.clj:93)
at org.apache.storm.shade.compojure.core$if_route$fn__2455.invoke(core.clj:39)
at org.apache.storm.shade.compojure.core$if_method$fn__2448.invoke(core.clj:24)
at org.apache.storm.shade.compojure.core$routing$fn__2473.invoke(core.clj:106)
at clojure.core$some.invoke(core.clj:2570)
at org.apache.storm.shade.compojure.core$routing.doInvoke(core.clj:106)
at clojure.lang.RestFn.applyTo(RestFn.java:139)
at clojure.core$apply.invoke(core.clj:632)
at org.apache.storm.shade.compojure.core$routes$fn__2477.invoke(core.clj:111)
at org.apache.storm.shade.ring.middleware.json$wrap_json_params$fn__11576.invoke(json.clj:56)
at org.apache.storm.shade.ring.middleware.multipart_params$wrap_multipart_params$fn__3543.invoke(multipart_params.clj:103)
at org.apache.storm.shade.ring.middleware.reload$wrap_reload$fn__4286.invoke(reload.clj:22)
at org.apache.storm.ui.helpers$requests_middleware$fn__3770.invoke(helpers.clj:46)
at org.apache.storm.ui.core$catch_errors$fn__12301.invoke(core.clj:1230)
at org.apache.storm.shade.ring.middleware.keyword_params$wrap_keyword_params$fn__3474.invoke(keyword_params.clj:27)
at org.apache.storm.shade.ring.middleware.nested_params$wrap_nested_params$fn__3514.invoke(nested_params.clj:65)
at org.apache.storm.shade.ring.middleware.params$wrap_params$fn__3445.invoke(params.clj:55)
at org.apache.storm.shade.ring.middleware.multipart_params$wrap_multipart_params$fn__3543.invoke(multipart_params.clj:103)
at org.apache.storm.shade.ring.middleware.flash$wrap_flash$fn__3729.invoke(flash.clj:14)
at org.apache.storm.shade.ring.middleware.session$wrap_session$fn__3717.invoke(session.clj:43)
at org.apache.storm.shade.ring.middleware.cookies$wrap_cookies$fn__3645.invoke(cookies.clj:160)
at org.apache.storm.shade.ring.util.servlet$make_service_method$fn__3351.invoke(servlet.clj:127)
at org.apache.storm.shade.ring.util.servlet$servlet$fn__3355.invoke(servlet.clj:136)
at org.apache.storm.shade.ring.util.servlet.proxy$javax.servlet.http.HttpServlet$ff19274a.service(Unknown Source)
at org.apache.storm.shade.org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:654)
at org.apache.storm.shade.org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1320)
at org.apache.storm.logging.filters.AccessLoggingFilter.handle(AccessLoggingFilter.java:47)
at org.apache.storm.logging.filters.AccessLoggingFilter.doFilter(AccessLoggingFilter.java:39)
at org.apache.storm.shade.org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1291)
at org.apache.storm.shade.org.eclipse.jetty.servlets.CrossOriginFilter.handle(CrossOriginFilter.java:247)
at org.apache.storm.shade.org.eclipse.jetty.servlets.CrossOriginFilter.doFilter(CrossOriginFilter.java:210)
at org.apache.storm.shade.org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1291)
at org.apache.storm.shade.org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:443)
at org.apache.storm.shade.org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1044)
at org.apache.storm.shade.org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:372)
at org.apache.storm.shade.org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:978)
at org.apache.storm.shade.org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:135)
at org.apache.storm.shade.org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:116)
at org.apache.storm.shade.org.eclipse.jetty.server.Server.handle(Server.java:369)
at org.apache.storm.shade.org.eclipse.jetty.server.AbstractHttpConnection.handleRequest(AbstractHttpConnection.java:486)
at org.apache.storm.shade.org.eclipse.jetty.server.AbstractHttpConnection.headerComplete(AbstractHttpConnection.java:933)
at org.apache.storm.shade.org.eclipse.jetty.server.AbstractHttpConnection$RequestHandler.headerComplete(AbstractHttpConnection.java:995)
at org.apache.storm.shade.org.eclipse.jetty.http.HttpParser.parseNext(HttpParser.java:644)
at org.apache.storm.shade.org.eclipse.jetty.http.HttpParser.parseAvailable(HttpParser.java:235)
at org.apache.storm.shade.org.eclipse.jetty.server.AsyncHttpConnection.handle(AsyncHttpConnection.java:82)
at org.apache.storm.shade.org.eclipse.jetty.io.nio.SelectChannelEndPoint.handle(SelectChannelEndPoint.java:668)
at org.apache.storm.shade.org.eclipse.jetty.io.nio.SelectChannelEndPoint$1.run(SelectChannelEndPoint.java:52)
at org.apache.storm.shade.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:608)
at org.apache.storm.shade.org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:543)
at java.lang.Thread.run(Thread.java:745)
Content of storm.yaml:
########### These MUST be filled in for a storm configuration
storm.zookeeper.servers:
  - "localhost"
storm.local.dir: "/var/storm"
nimbus.host: "localhost"
supervisor.slots.ports:
  - 6700
  - 6701
  - 6702
  - 6703
I resolved the two problems by myself.
For the first problem: I had to restart ZooKeeper after installing Apache Storm.
For the second problem: it was not a problem with Storm. The cause was the Azure platform, where port 8080 is closed by default.
So, I thank myself for this effort. If it were allowed, I would give you (myself) +1M points.
I had the same error and the answer I needed is not here, so here I am.
Before going further: I am using Storm version 2.1.0. In this version nimbus.host has been replaced by the array option nimbus.seeds.
The log says Did you specify a valid list of nimbus hosts for config nimbus.seeds? Since nimbus.seeds is a mandatory option, to fix the problem I simply added the host's IP address as the only element in this list:
nimbus.seeds: ["HOST IP"]
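Putting it together, a minimal storm.yaml sketch for Storm 2.x on a single node (HOST IP is a placeholder for the machine's address; the remaining keys are carried over from the configuration shown above):

storm.zookeeper.servers:
  - "HOST IP"
storm.local.dir: "/var/storm"
nimbus.seeds: ["HOST IP"]
supervisor.slots.ports:
  - 6700
  - 6701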