I'm new to ZooKeeper. I have created a node on a ZooKeeper server running in standalone mode. Here is the code snippet for that.
public Connect(String hostPort, String znode, String filename)
        throws KeeperException, IOException, InterruptedException {
    this.filename = filename;
    zk = new ZooKeeper(hostPort, 3000, this);
    zk.create(znode, new byte[0],
            ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT);
}
Now I want to add an authentication requirement using SASL in DIGEST-MD5 mode when I create the node (in the code above). I have completed the required configuration on the hosted ZooKeeper server, but have not configured anything on the client.
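For reference, client-side SASL in ZooKeeper is normally wired up through a JAAS login configuration rather than code. A minimal sketch, assuming DIGEST-MD5 (the file path and credentials are placeholders):

// client_jaas.conf (hypothetical path and credentials)
Client {
    org.apache.zookeeper.server.auth.DigestLoginModule required
    username="admin"
    password="admin";
};

The client JVM is then started with -Djava.security.auth.login.config=/path/to/client_jaas.conf so the ZooKeeper client library picks it up.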
Thanks in advance.
I found a way to enable SASL authentication for ZooKeeper nodes.
Here is the code I used.
zk.addAuthInfo("digest", "admin:admin".getBytes());
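For context, a minimal sketch of how that call fits into the connect/create flow from the question (the admin:admin credentials mirror the snippet above; CREATOR_ALL_ACL is one common choice for restricting the node to the authenticated identity):

zk = new ZooKeeper(hostPort, 3000, this);
// Attach the digest credentials to the session before touching protected nodes.
zk.addAuthInfo("digest", "admin:admin".getBytes());
// CREATOR_ALL_ACL grants all permissions only to the identity that created the node.
zk.create(znode, new byte[0],
        ZooDefs.Ids.CREATOR_ALL_ACL, CreateMode.PERSISTENT);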
I'm using Spring Cloud Stream alongside Aiven's schema registry, which is based on Confluent's schema registry. Aiven's schema registry is secured with a password. Based on these instructions, these two config parameters need to be set to successfully access the schema registry server.
props.put("basic.auth.credentials.source", "USER_INFO");
props.put("basic.auth.user.info", "avnadmin:schema-reg-password");
Everything is fine when I only use the vanilla Java Kafka drivers, but with Spring Cloud Stream I don't know how to inject these two parameters. At the moment, I'm putting "basic.auth.user.info" and "basic.auth.credentials.source" under "spring.cloud.stream.kafka.binder.configuration" in the application.yml file.
Doing this, I get a "401 Unauthorized" on the line where the schema gets registered.
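For illustration, the failing attempt looked roughly like this in application.yml (property names from the question; the credential value is a placeholder):

spring:
  cloud:
    stream:
      kafka:
        binder:
          configuration:
            basic.auth.credentials.source: USER_INFO
            basic.auth.user.info: avnadmin:schema-reg-password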
Update 1:
Based on 'Ali n's suggestion, I updated the way the SchemaRegistryClient bean was configured so that it becomes aware of the SSL context.
@Bean
public SchemaRegistryClient schemaRegistryClient(
        @Value("${spring.cloud.stream.schemaRegistryClient.endpoint}") String endpoint) {
    try {
        final KeyStore keyStore = KeyStore.getInstance("PKCS12");
        keyStore.load(new FileInputStream(
                new File("path/to/client.keystore.p12")),
                "secret".toCharArray());

        final KeyStore trustStore = KeyStore.getInstance("JKS");
        trustStore.load(new FileInputStream(
                new File("path/to/client.truststore.jks")),
                "secret".toCharArray());

        // Accept any certificate chain presented by the server (lenient; fine for testing).
        TrustStrategy acceptingTrustStrategy = (X509Certificate[] chain, String authType) -> true;

        SSLContext sslContext = SSLContextBuilder
                .create()
                .loadKeyMaterial(keyStore, "secret".toCharArray())
                .loadTrustMaterial(trustStore, acceptingTrustStrategy)
                .build();

        HttpClient httpClient = HttpClients.custom().setSSLContext(sslContext).build();
        ClientHttpRequestFactory requestFactory = new HttpComponentsClientHttpRequestFactory(httpClient);

        ConfluentSchemaRegistryClient schemaRegistryClient = new ConfluentSchemaRegistryClient(
                new RestTemplate(requestFactory));
        schemaRegistryClient.setEndpoint(endpoint);
        return schemaRegistryClient;
    } catch (Exception ex) {
        ex.printStackTrace();
        return null;
    }
}
This got rid of the error on the app's startup, and the schema was registered. However, whenever the app wanted to push a message to Kafka, a new error was thrown. That was finally fixed by mmelsen's answer.
I ran into the same problem: my situation was connecting to a secured schema registry hosted by Aiven and protected by basic auth. To make it work I had to configure the following properties:
spring.kafka.properties.schema.registry.url=https://***.aiven***.com:port
spring.kafka.properties.basic.auth.credentials.source=USER_INFO
spring.kafka.properties.basic.auth.user.info=username:password
The other properties for my binder are:
spring.cloud.stream.binders.input.type=kafka
spring.cloud.stream.binders.input.environment.spring.cloud.stream.kafka.binder.brokers=https://***.aiven***.com:port <-- a different port from the one mentioned above
spring.cloud.stream.binders.input.environment.spring.cloud.stream.kafka.binder.configuration.security.protocol=SSL
spring.cloud.stream.binders.input.environment.spring.cloud.stream.kafka.binder.configuration.ssl.truststore.location=truststore.jks
spring.cloud.stream.binders.input.environment.spring.cloud.stream.kafka.binder.configuration.ssl.truststore.password=secret
spring.cloud.stream.binders.input.environment.spring.cloud.stream.kafka.binder.configuration.ssl.keystore.type=PKCS12
spring.cloud.stream.binders.input.environment.spring.cloud.stream.kafka.binder.configuration.ssl.keystore.location=clientkeystore.p12
spring.cloud.stream.binders.input.environment.spring.cloud.stream.kafka.binder.configuration.ssl.keystore.password=secret
spring.cloud.stream.binders.input.environment.spring.cloud.stream.kafka.binder.configuration.ssl.key.password=secret
spring.cloud.stream.binders.input.environment.spring.cloud.stream.kafka.binder.configuration.value.deserializer=io.confluent.kafka.serializers.KafkaAvroDeserializer
spring.cloud.stream.binders.input.environment.spring.cloud.stream.kafka.streams.binder.autoCreateTopics=false
What actually happens is that Spring Cloud Stream adds the spring.kafka.properties.basic.* entries to the DefaultKafkaConsumerFactory, which passes the config on to the KafkaConsumer. At some point during the initialization of Spring Kafka, a CachedSchemaRegistryClient is created and provisioned with these properties. This client contains a method called configureRestService that checks whether the map of properties contains "basic.auth.credentials.source". Since we provide it through spring.kafka.properties, it finds the property and takes care of creating the appropriate headers when accessing the schema registry's endpoint.
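To illustrate the mechanism, this is roughly what happens under the hood (a sketch, not the exact Spring Kafka code; the URL and credentials are placeholders, and the three-argument constructor is the one in Confluent's 5.x client):

Map<String, Object> configs = new HashMap<>();
configs.put("basic.auth.credentials.source", "USER_INFO");
configs.put("basic.auth.user.info", "username:password");
// The client inspects the config map and builds the Authorization header from it.
SchemaRegistryClient client = new CachedSchemaRegistryClient(
        "https://your-registry:port", 100, configs);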
Hope this works out for you as well.
I'm using Spring Cloud version Greenwich.SR1, spring-boot-starter 2.1.4.RELEASE, Avro 1.8.2 and confluent.version 5.2.1.
The binder configuration only handles well-known consumer and producer properties.
You can set arbitrary properties at the binding level.
spring.cloud.stream.kafka.bindings.<binding>.consumer.configuration.basic.auth...
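For example, applying the two property names from the question (where <binding> stands for the actual binding name, e.g. input):

spring.cloud.stream.kafka.bindings.<binding>.consumer.configuration.basic.auth.credentials.source=USER_INFO
spring.cloud.stream.kafka.bindings.<binding>.consumer.configuration.basic.auth.user.info=username:password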
Since Aiven uses SSL as the Kafka security protocol, certificates are required for authentication.
You can follow this page to understand how it works. In a nutshell, you need to run the following commands to generate the certificates and import them:
openssl pkcs12 -export -inkey service.key -in service.cert -out client.keystore.p12 -name service_key
keytool -import -file ca.pem -alias CA -keystore client.truststore.jks
Then you can use the following properties to make use of the certificates:
spring.cloud.stream.kafka.streams.binder:
  configuration:
    security.protocol: SSL
    ssl.truststore.location: client.truststore.jks
    ssl.truststore.password: secret
    ssl.keystore.type: PKCS12
    ssl.keystore.location: client.keystore.p12
    ssl.keystore.password: secret
    ssl.key.password: secret
    key.serializer: org.apache.kafka.common.serialization.StringSerializer
    value.serializer: org.apache.kafka.common.serialization.StringSerializer
I am trying to connect to a Solace VMR server and deliver messages from a Java client using the Vert.x AMQP Bridge.
I am able to connect to the Solace VMR server, but after connecting I am not able to send messages to it.
I am using the sender code below from the Vert.x client.
public class Sender extends AbstractVerticle {

    private int count = 1;

    // Convenience method so you can run it in your IDE
    public static void main(String[] args) {
        Runner.runExample(Sender.class);
    }

    @Override
    public void start() throws Exception {
        AmqpBridge bridge = AmqpBridge.create(vertx);
        // Start the bridge, then use the event loop thread to process things thereafter.
        bridge.start("13.229.207.85", 21196, "UserName", "Password", res -> {
            if (!res.succeeded()) {
                System.out.println("Bridge startup failed: " + res.cause());
                return;
            }
            // Set up a producer using the bridge, send a message with it.
            MessageProducer<JsonObject> producer =
                    bridge.createProducer("T/GettingStarted/pubsub");
            // Schedule sending of a message every second
            System.out.println("Producer created, scheduling sends.");
            vertx.setPeriodic(1000, v -> {
                JsonObject amqpMsgPayload = new JsonObject();
                amqpMsgPayload.put(AmqpConstants.BODY, "myStringContent" + count);
                producer.send(amqpMsgPayload);
                System.out.println("Sent message: " + count++);
            });
        });
    }
}
I am getting the error below:
Bridge startup failed: io.vertx.core.impl.NoStackTraceThrowable:
Error{condition=amqp:not-found, description='SMF AD bind response
error', info={solace.response_code=503, solace.response_text=Unknown
Queue}} Apr 27, 2018 3:07:29 PM io.vertx.proton.impl.ProtonSessionImpl
WARNING: Receiver closed with error
io.vertx.core.impl.NoStackTraceThrowable:
Error{condition=amqp:not-found, description='SMF AD bind response
error', info={solace.response_code=503, solace.response_text=Unknown
Queue}}
I have created the queue and the topic correctly in the Solace VMR, but I am not able to send/receive messages. Am I missing any configuration on the Solace VMR server side? Is any code change required in the Vert.x sender code above? I get the error trace above when delivering a message. Can someone help with this?
Vert.x AMQP Bridge Java client: https://vertx.io/docs/vertx-amqp-bridge/java/
There are a few different reasons why you may be encountering this error.
It could be that the client is not authorized to publish guaranteed messages. To fix this, you need to enable "guaranteed endpoint create" in the client-profile on the Solace router side.
It may also be that the application is using reply handling. This is not currently supported with the Solace router; support for it will be added in the 8.11 release of the Solace VMR. A workaround is to disable it when creating the bridge:
AmqpBridgeOptions options = new AmqpBridgeOptions().setReplyHandlingSupport(false);
AmqpBridge bridge = AmqpBridge.create(vertx, options);
There is also a known issue in the Solace VMR which causes this error when unsubscribing from a durable topic endpoint. A fix for this issue will also be in the 8.11 release of the Solace VMR. A workaround for this is to disconnect the client without first unsubscribing.
I am trying to connect to Spring Vault using role-based authentication (Spring Boot project).
As per the documentation, I should be able to connect to Spring Vault using only the AppRole (pull mode). However, I am getting a secret-id missing exception on application startup.
http://cloud.spring.io/spring-cloud-vault/single/spring-cloud-vault.html#_approle_authentication
When I also pass the secret-id, I am able to connect, and properties/values are correctly autowired.
Is there any way I can connect to Vault using "token + role/role-id" and have Spring generate the secret-id for me automatically at runtime using the mentioned info?
spring.cloud.vault:
  scheme: http
  host: <host url>
  port: 80
  token: <token>
  generic.application-name: vault/abc/pqr/test
  generic.backend: <some value>
  generic.default-context: vault/abc/pqr/test
  token: <security token>
  authentication: approle
  app-role:
    role-id: <role-id>
POM:
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-vault-starter-config</artifactId>
    <version>1.0.0.BUILD-SNAPSHOT</version>
</dependency>
Please let me know in case any other info is required.
Update
@mp911de, I tried your suggestion; however, spring-cloud-vault picks up the properties set in bootstrap.yml and not the ones set inside "onApplicationEvent", so the solution is not working. I also tried setting the properties via "System.setProperty", but that didn't work either.
However, if I set the properties in main before the run method, it works as expected. But I need to load application.properties first (I need to pick some configuration from there) and thus don't want to put the logic there.
Is there anything wrong in my approach?
@Component
public class LoadVaultProperties implements ApplicationListener<ApplicationEnvironmentPreparedEvent> {

    private RestTemplate restTemplate = new RestTemplate();

    @Override
    public void onApplicationEvent(ApplicationEnvironmentPreparedEvent event) {
        try {
            String roleId = getRoleIdForRole(event);       // helper method
            String secretId = getSecretIdForRoleId(event); // helper method

            Properties properties = new Properties();
            properties.put("spring.cloud.vault.app-role.secret-id", secretId);
            properties.put("spring.cloud.vault.app-role.role-id", roleId);

            event.getEnvironment().getPropertySources().addFirst(new PropertiesPropertySource(
                    PropertySourceBootstrapConfiguration.BOOTSTRAP_PROPERTY_SOURCE_NAME, properties));
        } catch (Exception ex) {
            throw new IllegalStateException(ex);
        }
    }
}
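As a side note, an ApplicationListener for ApplicationEnvironmentPreparedEvent registered via @Component is generally picked up too late, because component scanning happens after the environment is prepared; that would explain why only the bootstrap.yml values take effect. A sketch of registering it programmatically instead (reusing the LoadVaultProperties class above):

public static void main(String[] args) {
    SpringApplication app = new SpringApplication(MyApplication.class);
    // Register the listener explicitly so it runs before the environment is frozen.
    app.addListeners(new LoadVaultProperties());
    app.run(args);
}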
Spring Vault's AppRole authentication supports two modes, but not pull mode:
Push mode, in which you need to supply the secret_id
Authenticating without a secret_id by passing just the role_id. This mode requires the role to be created without requiring the secret_id, by setting bind_secret_id=false on role creation
Pull mode, as mentioned in the Vault documentation, requires the client to know the secret_id, obtained from a wrapped response. Spring Vault does not fetch a wrapped secret_id, but I think that would be a decent enhancement.
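For completeness, generating a secret_id in pull mode boils down to a call like the following against Vault's HTTP API (a sketch; the role name, host, and token variable are placeholders, while the endpoint is Vault's standard AppRole secret-id endpoint):

// POST /v1/auth/approle/role/<role-name>/secret-id generates a new secret_id (pull mode).
HttpHeaders headers = new HttpHeaders();
headers.set("X-Vault-Token", vaultToken);
ResponseEntity<Map> response = new RestTemplate().exchange(
        "http://vault-host:8200/v1/auth/approle/role/my-role/secret-id",
        HttpMethod.POST, new HttpEntity<>(headers), Map.class);
String secretId = (String) ((Map<?, ?>) response.getBody().get("data")).get("secret_id");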
Update: Setting system properties before application start:
@SpringBootApplication
public class MyApplication {

    public static void main(String[] args) {
        System.setProperty("spring.cloud.vault.app-role.role-id", "…");
        System.setProperty("spring.cloud.vault.app-role.secret-id", "…");
        SpringApplication.run(MyApplication.class, args);
    }
}
References:
Vault documentation on AppRole creation
Spring Cloud Vault documentation on AppRole authentication.
I have a concourse-web and a concourse-worker instance, but am having issues getting the worker to successfully connect to the web server.
Apr 21 15:42:26 concourse-worker concourse[24460]: {"timestamp":"1492789346.467736244","source":"worker","message":"worker.beacon.restarting","log_level":2,"data":{"error":"failed to dial: failed to construct client connection:%!(EXTRA *errors.errorString=ssh: handshake failed: remote host public key mismatch)","session":"3"}}
I have added the worker's public key (id_worker_rsa.pub) to the authorized_worker_keys file on the web server, but the issue remains. Is there any documentation on how to do this?
concourse:
  worker:
    config:
      garden-dns-server: 10.x.y.z
      tsa-host: web.concourse.service.consul
      tsa-public-key: /etc/concourse/.ssh/id_web_rsa.pub
      tsa-worker-private-key: /etc/concourse/.ssh/id_worker_rsa
      work-dir: /var/concourse/worker
    service: True
When you start concourse-web, you need to provide --tsa-host-key with the path to your TSA server key and --tsa-authorized-keys with the path to a file containing the worker's public key.
When you start the worker, you need to provide --tsa-public-key with the path to your TSA server's public key and --tsa-worker-private-key with the path to the worker's private key.
See here: https://concourse-ci.org/binaries.html
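For illustration, a minimal sketch of the two invocations using the paths from the question's config (flag names as described above; other required flags, e.g. for the database, are omitted, and the authorized_worker_keys path is an assumption):

# Web node: the TSA host key plus the file of authorized worker public keys
concourse web \
  --tsa-host-key /etc/concourse/.ssh/id_web_rsa \
  --tsa-authorized-keys /etc/concourse/.ssh/authorized_worker_keys

# Worker node: the TSA's public key plus this worker's private key
concourse worker \
  --work-dir /var/concourse/worker \
  --tsa-host web.concourse.service.consul \
  --tsa-public-key /etc/concourse/.ssh/id_web_rsa.pub \
  --tsa-worker-private-key /etc/concourse/.ssh/id_worker_rsa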
Good Day
I would just like to ask for some help/clarification, since I am new to implementing JMS. I've already been to the following links:
"Failed to create session factory" from client connecting to jboss-as7 hornetq
[HORNETQ-952] When binding AS to 0.0.0.0 remote HornetQ clients fail - JBoss Issue Tracker
https://issues.jboss.org/browse/JBPAPP-9393
May I ask how you do JMS queue remoting on JBoss 7.1.1.Final?
I'm using RemoteConnectionFactory in my connection.
Do I need to bind my remote (172.45.45.45) JBoss server to a specific IP address in order to connect to it remotely, like passing -b IPAddress when starting JBoss?
Or by configuring it in standalone.conf like this, in the remote IP's /jboss/bin/standalone.conf (do I really need this?):
# set this value so we can send JMS messages from remote clients
JAVA_OPTS="$JAVA_OPTS -Djboss.bind.address=172.45.45.45"
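For reference, the command-line equivalent of that standalone.conf setting would be something like:

./standalone.sh -b 172.45.45.45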
Here is the scenario: I want to send a JMS message to my queue from my local laptop. I have a publisher for my queue; a web.war is deployed on 172.45.45.45 (the remote IP), and my consumer there is configured to this queue:
<jms-queue name="testqueue">
    <entry name="java:jboss/queue/mss/testqueue"/>
    <durable>true</durable>
</jms-queue>
I have configured my JBoss application user with the guest role.
I have my queue config in standalone.xml (a copy of standalone-full.xml).
I have my JMS publisher/consumer.
With the config below, sending JMS messages remotely works fine. Without it, I get an error=2 cannot connect to server(s).
My local etc/hosts config for that IP (do I really need this?):
172.45.45.45 ip-172-45-45-45.ec2.internal
Remote JBoss server config, in the remote IP's /jboss/bin/standalone.conf (do I really need this?):
# set this value so we can send JMS messages from remote clients
JAVA_OPTS="$JAVA_OPTS -Djboss.bind.address=172.45.45.45"
BTW, I followed this tutorial:
http://theopentutorials.com/examples/java-ee/ejb3/remote-jms-client-and-ejb3-mdb-consumer-eclipse-jboss-7-1/
I need help implementing the JMS queue remotely without the two pieces of configuration above, i.e. without the local etc/hosts entry (172.45.45.45 ip-172-45-45-45.ec2.internal) and without the jboss.bind.address setting in the remote server's standalone.conf.
Here is the code snippet for my JMS publisher:

private static String username = "testuser";
private static String password = "testpass";
private static String host = "remote://172.45.45.45:4447";

Properties jndiProps = new Properties();
try {
    jndiProps.put(Context.INITIAL_CONTEXT_FACTORY, "org.jboss.naming.remote.client.InitialContextFactory");
    jndiProps.put(Context.PROVIDER_URL, host);
    jndiProps.put(Context.SECURITY_PRINCIPAL, username);
    jndiProps.put(Context.SECURITY_CREDENTIALS, password);

    Context context = new InitialContext(jndiProps);
    ConnectionFactory connectionFactory = (ConnectionFactory) context.lookup("jms/RemoteConnectionFactory");
    connection = connectionFactory.createConnection(username, password);
    topic = (Topic) context.lookup("jms/topic/testTopic");
    System.out.println("Connected to JMS");
    connection.start();
} catch (NamingException | JMSException e) {
    e.printStackTrace();
}
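For completeness, once the connection is started, publishing would continue roughly like this inside the same try block (a sketch; JMSMessage is the question's own class, and the constructor used here is an assumption):

// ...continuing inside the try block above
Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
MessageProducer producer = session.createProducer(topic);
// An ObjectMessage payload mirrors the consumer below, which casts to ObjectMessage.
ObjectMessage message = session.createObjectMessage(new JMSMessage("testAction")); // constructor assumed
producer.send(message);
session.close();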
Sample code snippet from my consumer:
/**
 * Message-driven bean implementation class for MDBSample - this consumes the queue.
 */
@MessageDriven(activationConfig = {
        @ActivationConfigProperty(propertyName = "destinationType", propertyValue = "javax.jms.Queue"),
        @ActivationConfigProperty(propertyName = "destination", propertyValue = "java:/queue/test"),
        @ActivationConfigProperty(propertyName = "acknowledgeMode", propertyValue = "Auto-acknowledge")})
public class MessageConsumer implements MessageListener {

    /**
     * Default constructor.
     */
    public MessageConsumer() {
    }

    /**
     * @see MessageListener#onMessage(Message)
     */
    public void onMessage(Message message) {
        try {
            ObjectMessage msg = (ObjectMessage) message;
            JMSMessage jmsMessage = (JMSMessage) msg.getObject();
            System.out.println("Received message is ==========> " + jmsMessage.getAction());
        } catch (JMSException e) {
            e.printStackTrace();
        }
    }
}
Any ideas or suggestions?