Unable to Connect OpenShift Kafka Instance to Spring Boot Application - apache-kafka

This is the application.properties file configuration in the Spring Boot Application:
spring.kafka.bootstrap-servers: test-cevr------------:443
topics.hydra.name: aayush
topics.hydra.partitions: 1
topics.hydra.replica:1
spring.kafka.jaas.enabled=true
spring.kafka.properties.security.protocol=SASL_PLAINTEXT
spring.kafka.properties.sasl.mechanism=PLAIN
# username is the client ID and password is the client secret
spring.kafka.properties.sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="b7e41-----------cf-fcdd34d6bcca" password="H0cZS8------4a4YVAf36";
After executing this, I get the following error:
What is the reason behind this?
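One general observation on the configuration above: a managed OpenShift Kafka instance exposed on port 443 is normally TLS-terminated, so such brokers usually expect SASL over TLS (SASL_SSL) rather than SASL_PLAINTEXT. A hedged sketch of that variant, with placeholders instead of the real client ID and secret:
# Sketch only, assuming the broker endpoint on port 443 is TLS-terminated
spring.kafka.properties.security.protocol=SASL_SSL
spring.kafka.properties.sasl.mechanism=PLAIN
spring.kafka.properties.sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="<client-id>" password="<client-secret>";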

Related

Not able to post MQ message from WebSphere Application Server image using Docker image

I have a legacy application that uses WebSphere Application Server 9.0.0. I want to containerize it using Docker, but when I put a message on MQ it gives me the error below:
Unwrapping Non-DiagnosticException:
Exception Type : com.ibm.msg.client.jms.DetailedJMSException
Exception Message: JMSWMQ0018: Failed to connect to queue manager 'MQGWD2' with connection mode 'Client' and host name 'mqgwd2.sdde.deere.com(2171)'.
Begin Stack Trace:
com.ibm.msg.client.jms.DetailedJMSException: JMSWMQ0018: Failed to connect to queue manager 'MQGWD2' with connection mode 'Client' and host name 'mqgwd2.sdde.deere.com(2171)'.
Check the queue manager is started and if running in client mode, check there is a listener running. Please see the linked exception for more information.
at com.ibm.msg.client.wmq.common.internal.Reason.reasonToException(Reason.java:595)
at com.ibm.msg.client.wmq.common.internal.Reason.createException(Reason.java:215)
at com.ibm.msg.client.wmq.internal.WMQConnection.<init>(WMQConnection.java:424)
at com.ibm.msg.client.wmq.internal.WMQXAConnection.<init>(WMQXAConnection.java:67)
at com.ibm.msg.client.wmq.factories.WMQXAConnectionFactory.createV7ProviderConnection(WMQXAConnectionFactory.java:187)
at com.ibm.msg.client.wmq.factories.WMQConnectionFactory.createProviderConnection(WMQConnectionFactory.java:7810)
at com.ibm.msg.client.wmq.factories.WMQXAConnectionFactory.createProviderXAConnection(WMQXAConnectionFactory.java:98)
at com.ibm.msg.client.jms.admin.JmsConnectionFactoryImpl.createXAConnectionInternal(JmsConnectionFactoryImpl.java:390)
at com.ibm.mq.jms.MQXAQueueConnectionFactory.createXAQueueConnection(MQXAQueueConnectionFactory.java:154)
My docker-compose.yml looks like:
version: "3.9"
services:
  consolidated:
    build: .
    ports:
      - "9043:9043"
      - "9443:9443"
      - "9083:9083"
To run this application without a container, I didn't install MQ separately; I am just using the one that is already there in the WebSphere server. The same doesn't work with the containerized image. I have compared all the connection factories through the admin console and they look OK. The same configuration works on the non-containerized version of this application.
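Since the same configuration works outside the container, a first sanity check is whether the queue manager's listener is reachable from inside the container at all. A minimal TCP probe using only the JDK, with the host and port taken from the error message (the class name is just a placeholder), could look like this:
import java.net.InetSocketAddress;
import java.net.Socket;

// Run inside the container: confirms plain TCP reachability of the MQ listener
// before digging into the JMS connection factory settings.
public class MqListenerProbe {
    public static void main(String[] args) throws Exception {
        try (Socket socket = new Socket()) {
            socket.connect(new InetSocketAddress("mqgwd2.sdde.deere.com", 2171), 5000);
            System.out.println("TCP connection to mqgwd2.sdde.deere.com:2171 succeeded");
        }
    }
}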

kafka-connect-fs to connect SFTP using Key File or Passwordless Entry

I'm trying to integrate the Kafka Connect FS source connector with SFTP using a username and passwordless entry (private key), but I'm getting an auth failure with the settings below.
It works completely fine with the username:password#hostname:port format for a test SFTP location, but the actual source doesn't allow password-based authentication.
I even tried "fs.sftp.keyfile", but no luck.
Here is my Property file:
name=SourceConnector
connector.class=com.github.mmolimar.kafka.connect.fs.FsSourceConnector
tasks.max=1
policy.fs.fs.sftp.impl=org.apache.hadoop.fs.sftp.SFTPFileSystem
fs.uris=sftp://username:#hostname:22/home/user/output/
fs.sftp.keyfile=/home/user/.ssh/id_rsa
topic=sampletopic
policy.class=com.github.mmolimar.kafka.connect.fs.policy.CronPolicy
policy.recursive=true
file_reader.delimited.settings.data_type_mapping_error=false
file_reader.delimited.settings.allow_nulls=true
policy.regexp=^SOURCE_1.*.gz$
policy.batch_size=0
policy.cleanup=none
file_reader.class=com.github.mmolimar.kafka.connect.fs.file.reader.CsvFileReader
file_reader.batch_size=3000
policy.cron.expression=0/30 * * ? * * *
file_reader.delimited.compression.type=gzip
Please help me connect with the private key. Thanks.
ERROR FsSourceTask Cannot retrieve files to process from the FS: [[]]. There was an error executing the policy but the task tolerates this and continues: com.jcraft.jsch.JSchException: Auth fail
I was able to resolve this error with the config below:
policy.fs.fs.sftp.keyfile=/home/user/.ssh/id_rsa
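For context, a hedged reading of why the prefixed property works: the connector appears to pass only policy.fs.-prefixed options through to the underlying Hadoop SFTPFileSystem (the configuration above already relies on that prefix for fs.sftp.impl), so a bare fs.sftp.keyfile entry is ignored. The relevant pair then looks like:
# Options for org.apache.hadoop.fs.sftp.SFTPFileSystem carry the policy.fs. prefix
policy.fs.fs.sftp.impl=org.apache.hadoop.fs.sftp.SFTPFileSystem
policy.fs.fs.sftp.keyfile=/home/user/.ssh/id_rsa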

How to register an app from a private repo in Spring Cloud Data Flow 2.6.1

I'm using SCDF 2.6.1 on OpenShift 3, and I'm getting an error while registering an app. The error log is below:
java.lang.NullPointerException: null
at org.springframework.cloud.dataflow.configuration.metadata.container.DefaultContainerImageMetadataResolver.getRegistryRequest(DefaultContainerImageMetadataResolver.java:162)
at org.springframework.cloud.dataflow.configuration.metadata.container.DefaultContainerImageMetadataResolver.getImageLabels(DefaultContainerImageMetadataResolver.java:110)
at org.springframework.cloud.dataflow.configuration.metadata.BootApplicationConfigurationMetadataResolver.resolvePortNamesFromContainerImage(BootApplicationConfigurationMetadataResolver.java:215)
at org.springframework.cloud.dataflow.configuration.metadata.BootApplicationConfigurationMetadataResolver.listPortNames(BootApplicationConfigurationMetadataResolver.java:163)
at org.springframework.cloud.dataflow.server.controller.AppRegistryController.getInfo(AppRegistryController.java:193)
at org.springframework.cloud.dataflow.server.controller.AppRegistryController.info(AppRegistryController.java:162)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
I checked the line of code at DefaultContainerImageMetadataResolver.java:162:
// Convert the image name into a well-formed ContainerImage
ContainerImage containerImage = this.containerImageParser.parse(imageName);
// Find a registry configuration that matches the image's registry host
RegistryConfiguration registryConf = this.registryConfigurationMap.get(containerImage.getRegistryHost());
// Retrieve a registry authorizer that supports the configured authorization type.
RegistryAuthorizer registryAuthorizer = this.registryAuthorizerMap.get(registryConf.getAuthorizationType());
I'm pretty sure the error occurs because registryConf is null as a result of:
RegistryConfiguration registryConf = this.registryConfigurationMap.get(containerImage.getRegistryHost());
How do I put my private repo URI into registryConfigurationMap?
I have tried putting an imagePullSecret registered with the private repo in the deployment.yml, but it doesn't seem to work, because in the startup log I still see:
2020-09-03 04:55:24.111 INFO 1 --- [ main] urationMetadataResolverAutoConfiguration :
Final Registry Configurations: {registry-1.docker.io=RegistryConfiguration{registryHost='registry-1.docker.io', user='null', secret='****'', authorizationType=dockeroauth2, manifestMediaType='application/vnd.docker.distribution.manifest.v2+json', disableSslVerification='false',
extra={registryAuthUri=https://auth.docker.io/token?service=registry.docker.io&scope=repository:{repository}:pull&offline_token=1&client_id=shell }}}
The only place where the SCDF server downloads a container image layer is when it looks up the app metadata.
Currently, this is configured to use the Docker Hub registry host (as this is where all the out-of-the-box applications are hosted).
If you want to override this, you can set the corresponding property values at server startup and proceed.
Keep in mind that this configuration is only needed to download the app metadata layer of the image, not to download the entire container image on the SCDF server side.
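Since the startup log above only lists the default registry-1.docker.io entry, here is a hedged sketch of how an additional registry could be declared at server startup. The property names mirror the RegistryConfiguration fields visible in that log, and the registry host, user, and secret are placeholders rather than values from the question:
# Sketch only: declare an extra registry configuration for the metadata lookup
spring.cloud.dataflow.container.registry-configurations.myregistry.registry-host=myregistry.example.com
spring.cloud.dataflow.container.registry-configurations.myregistry.authorization-type=dockeroauth2
spring.cloud.dataflow.container.registry-configurations.myregistry.user=<registry-user>
spring.cloud.dataflow.container.registry-configurations.myregistry.secret=<registry-secret>
spring.cloud.dataflow.container.registry-configurations.myregistry.disable-ssl-verification=false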

Spring Boot Admin not resolving URLs for health and manage

Spring Version: 5.0.8.RELEASE
Spring Boot Dependencies Version: 2.0.4.RELEASE
Java Version: 1.8.0_131
Spring Boot Admin reports that a client is down. However, I can see that the client is running by navigating to it in the browser. On the details view for the client in Spring Boot Admin, the message under the health section is "Fetching health failed, Network Error". There are three URLs shown in the header of the details page:
http://localhost:8090/WorkOrderPrinting
http://localhost:8090/WorkOrderPrinting/manage
http://localhost:8090/WorkOrderPrinting/manage/health
Clicking them opens the respective views. Here is the output from the health view:
{"status":"UP","details":{"diskSpace":{"status":"UP","details":{"total":80482930688,"free":77726302208,"threshold":10485760}},"db":{"status":"UP","details":{"tenantRoutingDataSource":{"status":"UP","details":{"database":"Informix Dynamic Server","hello":104}},"userSetTenantRoutingDS":{"status":"UP","details":{"database":"Informix Dynamic Server","hello":104}},"dataSourceCbCommon":{"status":"UP","details":{"database":"Informix Dynamic Server","hello":104}},"dataSourceCbOrg":{"status":"UP","details":{"database":"Informix Dynamic Server","hello":104}}}}}}
It seems to be telling me that the client application is up and running, so I am not sure why the Spring Boot Admin UI is not reflecting that.
I have a second application running as a client, and it's working as expected. The result of its health URL is the same:
{"status":"UP","details":{"diskSpace":{"status":"UP","details":{"total":80482930688,"free":77726248960,"threshold":10485760}},"db":{"status":"UP","details":{"tenantRoutingDataSource":{"status":"UP","details":{"database":"Informix Dynamic Server","hello":104}},"userSetTenantRoutingDS":{"status":"UP","details":{"database":"Informix Dynamic Server","hello":104}},"dataSourceCbCommon":{"status":"UP","details":{"database":"Informix Dynamic Server","hello":104}},"dataSourceCbOrg":{"status":"UP","details":{"database":"Informix Dynamic Server","hello":104}}}}}}
The Spring Boot Admin server log shows this error for Work Order Printing. Again, I'm not sure why the URLs work in the browser while the log shows an error.
2018-09-13 09:19:19.071 DEBUG 5208 --- [ parallel-2] d.c.b.a.server.services.StatusUpdater : Update status for Instance(id=6212ad7c5ab4, version=1, registration=Registration(name=Work Order Printing Development, managementUrl=http://localhost:8090/WorkOrderPrinting/manage, healthUrl=http://localhost:8090/WorkOrderPrinting/manage/health, serviceUrl=http://localhost:8090/WorkOrderPrinting, source=http-api), registered=true, statusInfo=StatusInfo(status=DOWN, details={error=Found, status=302}), statusTimestamp=2018-09-13T13:15:08.169Z, info=Info(values={}), endpoints=Endpoints(endpoints={health=Endpoint(id=health, url=http://localhost:8090/WorkOrderPrinting/manage/health)}), buildVersion=null)
Spring Boot Admin Server Config
server.servlet.context-path=/adminserver
logging.file=/var/log/eti/webui/adminserver.log
logging.level.de.codecentric.boot.admin.server=INFO
Failing Client Config
#Admin Panel config
#
#This is the URL for the admin panel that this application will send its information to
spring.boot.admin.client.url=http://localhost:8080/adminserver
#This is required when deploying to Tomcat because the Admin panel cant seem to determine what the URL will be on its own
spring.boot.admin.client.instance.service-base-url=http://localhost:8090
#This is the name that will be displayed in the admin panel for this application
spring.boot.admin.client.instance.name=Work Order Printing
#
spring.boot.admin.auto-registration=true
#
#Actuator config needed to expose endpoints to admin panel
#
management.endpoints.web.base-path=/manage
management.endpoints.web.exposure.include:*
management.endpoint.health.show-details=always
Working Client Config
#Admin Panel config
#
#This is the URL for the admin panel that this application will send its information to
spring.boot.admin.client.url=http://localhost:8080/adminserver
#This is required when deploying to Tomcat because the Admin panel cant seem to determine what the URL will be on its own
spring.boot.admin.client.instance.service-base-url=http://localhost:8085
#This is the name that will be displayed in the admin panel for this application
spring.boot.admin.client.instance.name=LaunchPad
spring.boot.admin.auto-registration=true
#
#Actuator config needed to expose endpoints to admin panel
#
management.endpoints.web.base-path=/manage
management.endpoints.web.exposure.include:*
management.endpoint.health.show-details=always
So the problem turned out to be a configuration issue on my end. The Admin Server log shows the following:
2018-09-13 09:19:19.071 DEBUG 5208 --- [ parallel-2] d.c.b.a.server.services.StatusUpdater : Update status for Instance(id=6212ad7c5ab4, version=1, registration=Registration(name=Work Order Printing Development, managementUrl=http://localhost:8090/WorkOrderPrinting/manage, healthUrl=http://localhost:8090/WorkOrderPrinting/manage/health, serviceUrl=http://localhost:8090/WorkOrderPrinting, source=http-api), registered=true, statusInfo=StatusInfo(status=DOWN, details={error=Found, status=302}), statusTimestamp=2018-09-13T13:15:08.169Z, info=Info(values={}), endpoints=Endpoints(endpoints={health=Endpoint(id=health, url=http://localhost:8090/WorkOrderPrinting/manage/health)}), buildVersion=null)
It's basically saying that there is a 302 (redirect) happening, so it can't reach the URL. The reason for this is that I forgot to allow access to the URLs in the Spring Security config. I could get to them in the browser because I was logged in; Spring Boot Admin could not, because it was not logged in.
I added a rule to allow access to the /manage/** URLs:
@Override
public void configure(WebSecurity web) throws Exception {
    web.ignoring().antMatchers("/css/**", "/fonts/**", "/img/**", "/js/**", "/close", "/webjars/**", "/manage/**");
}
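As an aside, a hedged alternative sketch (not part of the original fix): instead of excluding the paths from the security filter chain entirely, the same endpoints could be permitted inside the HttpSecurity configuration, which keeps the rest of the filter chain in place for those requests:
@Override
protected void configure(HttpSecurity http) throws Exception {
    http.authorizeRequests()
            // allow Spring Boot Admin to poll the actuator endpoints without a login
            .antMatchers("/manage/**").permitAll()
            .anyRequest().authenticated()
            .and()
            .formLogin();
}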

Failed to open native connection to Cassandra from a jar produced by IntelliJ

I've developed a Spark-based application in IntelliJ that gets data from Kafka and saves it in a Cassandra DB.
Connection code in Scala:
val cluster = Cluster.builder().addContactPoint("192.168.0.253").withPort(9042).build();
val session = cluster.connect()
The code works fine when I run it from IntelliJ, but I get this error when I try to run it from the jar on the command line:
Exception in thread "main" java.io.IOException:
Failed to open native connection to Cassandra at {192.168.0.253}:9042 at .... ....
Caused by: com.datastax.driver.core.exceptions.NoHostAvailableException:
All host(s) tried for query failed
(tried: /192.168.0.253:9042 (com.datastax.driver.core.exceptions.TransportException:
[/192.168.0.253] Error writing)) at
com.datastax.driver.core.ControlConnection.reconnectInternal(
ControlConnection.java:233) at
com.datastax.driver.core.ControlConnection.connect(ControlConnection.java:79)
at com.datastax.driver.core.Cluster$Manager.init(Cluster.java:1424)
at com.datastax.driver.core.Cluster.getMetadata(Cluster.java:403)
at com.datastax.spark.connector.cql.CassandraConnector$.
com$datastax$spark$connector$cql$CassandraConnector$$createSession(
CassandraConnector.scala:155) ... 13 more
I produced the jar file from IntelliJ and built it with dependencies, using the [copy to the output directory and link via manifest] option.
cassandra.yaml file:
#Whether to start the native transport server.
start_native_transport: true
#port for the CQL native transport to listen for clients on
native_transport_port: 9042
Why is this error raised, and how can I fix it?
Thanks in advance