How to use JConsole to connect remotely to Resin 4 - jconsole

I want to use JConsole to connect remotely to Resin 4, but it doesn't work when I modify resin.properties:
#Jconsole config
-Dcom.sun.management.jmxremote.port : 8080
-Dcom.sun.management.jmxremote.ssl : false
-Dcom.sun.management.jmxremote.authenticate : false
-Djava.rmi.server.hostname : host_ip
I think the settings in resin.properties aren't taking effect, but I don't know how to configure this correctly now.

From 4.0 onward it has to be configured in resin.xml; below is the documentation link. However, I am still unable to get the JMX port up and running.
http://caucho.com/resin-4.0/admin/resin-admin-console.xtp#JMXConsole
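For reference, the JMX flags are normally passed to the Resin JVM as arguments rather than as standalone keys in resin.properties. A minimal sketch of both places they are commonly set (the element/key names should be verified against the linked documentation; the port below is an arbitrary free one, since 8080 is usually already taken by the HTTP listener):

# resin.properties (sketch): append the flags to the jvm_args entry
jvm_args : -Dcom.sun.management.jmxremote.port=9999 -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.authenticate=false -Djava.rmi.server.hostname=host_ip

<!-- resin.xml (sketch): the same flags as <jvm-arg> elements under the server defaults -->
<cluster id="app">
  <server-default>
    <jvm-arg>-Dcom.sun.management.jmxremote.port=9999</jvm-arg>
    <jvm-arg>-Dcom.sun.management.jmxremote.ssl=false</jvm-arg>
    <jvm-arg>-Dcom.sun.management.jmxremote.authenticate=false</jvm-arg>
    <jvm-arg>-Djava.rmi.server.hostname=host_ip</jvm-arg>
  </server-default>
</cluster>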

Related

"SchemaRegistryException: Failed to get Kafka cluster ID" for LOCAL setup

I downloaded the tarball (I am on Mac) for Confluent version 7.0.0 from the official Confluent site and was following the LOCAL (1 node) setup. Kafka/ZooKeeper start fine, but the Schema Registry keeps failing (note: I am behind a corporate VPN).
The exception message in the SchemaRegistry logs is:
[2021-11-04 00:34:22,492] INFO Logging initialized #1403ms to org.eclipse.jetty.util.log.Slf4jLog (org.eclipse.jetty.util.log)
[2021-11-04 00:34:22,543] INFO Initial capacity 128, increased by 64, maximum capacity 2147483647. (io.confluent.rest.ApplicationServer)
[2021-11-04 00:34:22,614] INFO Adding listener: http://0.0.0.0:8081 (io.confluent.rest.ApplicationServer)
[2021-11-04 00:35:23,007] ERROR Error starting the schema registry (io.confluent.kafka.schemaregistry.rest.SchemaRegistryRestApplication)
io.confluent.kafka.schemaregistry.exceptions.SchemaRegistryException: Failed to get Kafka cluster ID
at io.confluent.kafka.schemaregistry.storage.KafkaSchemaRegistry.kafkaClusterId(KafkaSchemaRegistry.java:1488)
at io.confluent.kafka.schemaregistry.storage.KafkaSchemaRegistry.<init>(KafkaSchemaRegistry.java:166)
at io.confluent.kafka.schemaregistry.rest.SchemaRegistryRestApplication.initSchemaRegistry(SchemaRegistryRestApplication.java:71)
at io.confluent.kafka.schemaregistry.rest.SchemaRegistryRestApplication.configureBaseApplication(SchemaRegistryRestApplication.java:90)
at io.confluent.rest.Application.configureHandler(Application.java:271)
at io.confluent.rest.ApplicationServer.doStart(ApplicationServer.java:245)
at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:73)
at io.confluent.kafka.schemaregistry.rest.SchemaRegistryMain.main(SchemaRegistryMain.java:44)
Caused by: java.util.concurrent.TimeoutException
at java.util.concurrent.CompletableFuture.timedGet(CompletableFuture.java:1784)
at java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1928)
at org.apache.kafka.common.internals.KafkaFutureImpl.get(KafkaFutureImpl.java:180)
at io.confluent.kafka.schemaregistry.storage.KafkaSchemaRegistry.kafkaClusterId(KafkaSchemaRegistry.java:1486)
... 7 more
My schema-registry.properties file has bootstrap URL set to
kafkastore.bootstrap.servers=PLAINTEXT://localhost:9092
I saw some posts saying it may be the Schema Registry failing to connect to the Kafka cluster URL, potentially because of the localhost address. I am fairly new to Kafka and basically just need this local setup to run a git repo that uses some topics/Kafka, so my questions:
How can I fix this? (I am behind a corporate VPN, but I figured that shouldn't affect this.)
Do I even need the Schema Registry?
I ended up just going with the local Docker setup instead, and the only change I had to make to the Docker Compose YAML was the schema-registry port (I changed it to 8082 or 8084, I don't remember exactly, just an unused port not already taken by another Confluent service listed in docker-compose.yaml), and my local setup is working fine now.
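For reference, that port change amounts to remapping only the host side of the schema-registry port mapping in docker-compose.yml. A minimal sketch, assuming a cp-all-in-one style compose file (the service names, image tag, and internal broker listener are taken from a typical Confluent compose file and may differ in yours):

schema-registry:
  image: confluentinc/cp-schema-registry:7.0.0
  depends_on:
    - broker
  ports:
    # host:container - only the host side changes, to a free port such as 8082
    - "8082:8081"
  environment:
    SCHEMA_REGISTRY_HOST_NAME: schema-registry
    SCHEMA_REGISTRY_KAFKASTORE_BOOTSTRAP_SERVERS: PLAINTEXT://broker:29092
    SCHEMA_REGISTRY_LISTENERS: http://0.0.0.0:8081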

JMX Connection refused on Kubernetes with AdoptOpenJDK OpenJ9

My team and I are trying to move our microservices to OpenJ9; they run on Kubernetes. However, we have run into a problem with the JMX configuration. (openjdk8-openj9)
We get a connection refused when we try to connect with JVisualVM (through a Kubernetes port-forward).
We haven't changed our configuration, except for switching from HotSpot to OpenJ9.
The error :
E0312 17:09:46.286374 17160 portforward.go:400] an error occurred forwarding 1099 -> 1099: error forwarding port 1099 to pod XXXXXXX, uid : exit status 1: 2020/03/12 16:09:45 socat[31284] E connect(5, AF=2 127.0.0.1:1099, 16): Connection refused
The java options that we use :
-Dcom.sun.management.jmxremote
-Dcom.sun.management.jmxremote.authenticate=false
-Dcom.sun.management.jmxremote.ssl=false
-Dcom.sun.management.jmxremote.local.only=false
-Dcom.sun.management.jmxremote.port=1099
-Dcom.sun.management.jmxremote.rmi.port=1099
We are using the latest adoptopenjdk/openjdk8-openj9 Docker image.
Do you have any ideas?
Thank you !
Regards.
I managed to figure out why it wasn't working.
It turns out that we were passing the JMX options to the service through the Kubernetes service descriptor in YAML. It looks like this:
- name: _JAVA_OPTIONS
  value: -Dzipkinserver.listOfServers=http://zipkin:9411 -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.local.only=false -Dcom.sun.management.jmxremote.port=1099 -Dcom.sun.management.jmxremote.rmi.port=1099
I realized that the JMX properties from _JAVA_OPTIONS were not taken into account when the application is not launched via an ENTRYPOINT in the Docker container.
So I passed the properties directly in the Dockerfile like this, and it works.
CMD ["java", "-Dcom.sun.management.jmxremote", "-Dcom.sun.management.jmxremote.authenticate=false", "-Dcom.sun.management.jmxremote.ssl=false", "-Dcom.sun.management.jmxremote.local.only=false", "-Dcom.sun.management.jmxremote.port=1099", "-Dcom.sun.management.jmxremote.rmi.port=1099", "-Djava.rmi.server.hostname=127.0.0.1", "-cp","app:app/lib/*","OurMainClass"]
It's also possible to keep _JAVA_OPTIONS and set up an ENTRYPOINT in the Dockerfile, as sketched below.
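A minimal sketch of that alternative, assuming the same main class and classpath as in the CMD above (check the container logs to confirm the JVM actually picks up _JAVA_OPTIONS):

# Dockerfile (sketch): exec-form ENTRYPOINT; the JMX flags then come from _JAVA_OPTIONS in the pod spec
ENTRYPOINT ["java", "-cp", "app:app/lib/*", "OurMainClass"]

Then forward the JMX/RMI port before attaching JVisualVM (the pod name is a placeholder):

kubectl port-forward pod/<pod-name> 1099:1099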
Thanks!

WLST - Cannot connect() with HTTPS - T3S Protocol - Port 9002

We changed the configuration of our WebLogic servers to use HTTPS and T3S for connections, using the secure encrypted port 9002 instead of the cleartext port 7001. However, when using the WebLogic Scripting Tool (WLST)'s connect() function, errors are thrown. One such error is as follows:
WLSTException: Error occurred while performing connect : Cannot connect via t3s or https. If using demo certs, verify that the -Dweblogic.security.TrustKeyStore=DemoTrust system property is set. : t3s://DatServer:9002: Destination 10.10.100.3, 9002 unreachable; nested exception is:
javax.net.ssl.SSLHandshakeException: General SSLEngine problem; No available router to destination
Use dumpStack() to view the full stacktrace :
The syntax of the connect function is: connect('user', 'password', 't3s://host:9002')
This connect() function worked fine before the switch from HTTP to HTTPS. Now we cannot connect to the remote admin server using the connect command. Does anyone have any idea how to fix this?
I read some interesting help options but none of them seemed to work. These help suggestions and tips are located here: https://community.oracle.com/thread/1036828
We were able to connect to the remote host and port via telnet. We saw that the port is open and listening for connections on the loop back address with netstat. We tried adding these options to the script invocation: java -cp /path/to/weblogic.jar weblogic.WLST -Dweblogic.security.TrustKeyStore=DemoTrust -Dssl.debug=true Dweblogic.security.SSL.ignoreHostnameVerification=true -Djava.security.egd=file:/dev/./urandom but this also did not work.
We enabled tunneling in the General tab of WebLogic but not in the HTTP tab. I am not the one in control of the server so I just have to suggest things and hope that the instructions are followed.
I got it running in 12.2 by adding the following lines at the end of
../oracle_common/common/bin/setWlstEnv_internal.sh
(you need to customize lines 5 and 6, the values in brackets):
JAVA_OPTIONS="-Dweblogic.ssl.JSSEEnabled=true ${JAVA_OPTIONS}"
JAVA_OPTIONS="-Dweblogic.security.SSL.enableJSSE="true" ${JAVA_OPTIONS}"
JAVA_OPTIONS="-Dweblogic.security.SSL.ignoreHostnameVerification=true ${JAVA_OPTIONS}"
JAVA_OPTIONS="-Dweblogic.security.TrustKeyStore=CustomTrust ${JAVA_OPTIONS}"
JAVA_OPTIONS="-Dweblogic.security.CustomTrustKeyStoreFileName= ${JAVA_OPTIONS}"
JAVA_OPTIONS="-Dweblogic.security.CustomTrustKeyStorePassPhrase= ${JAVA_OPTIONS}"
JAVA_OPTIONS="-Dweblogic.security.CustomTrustKeyStoreType=JKS ${JAVA_OPTIONS}"
export JAVA_OPTIONS
and modifying in
../oracle_common/common/bin/wlst_internal.sh
the line starting with
eval '"${JAVA_HOME}/bin/java"' ${JVM_ARGS} ...
by adding ${JAVA_OPTIONS}
so that it looks as follows:
eval '"${JAVA_HOME}/bin/java"' ${JVM_ARGS} ${JAVA_OPTIONS} weblogic.WLST '"$#"'
Hope this helps, allthough modifying scripts that are named "..internal.." doesn´t give me a good feeling
Export this before running wlst.sh:
export WLST_PROPERTIES=" -Dweblogic.security.TrustKeyStore=CustomTrust -Dweblogic.security.CustomTrustKeyStoreFileName=/u01/oracle/properties/truststore.jks -Dweblogic.security.CustomTrustKeyStoreType=jks -Dweblogic.security.CustomTrustKeyStorePassPhrase=qaz#1234 " ;
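With those properties exported, the connect call from the question should then work unchanged; a minimal usage sketch (the credentials are placeholders, host and port as in the question):

./wlst.sh
# inside the WLST shell:
connect('admin_user', 'admin_password', 't3s://DatServer:9002')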

failed to find free socket port for process dispatcher when trying remote debug

Highlights:
windows 10 host machine
ubuntu vagrant box (virtualbox) as guest vm
using Vagrant port forwarding like this: config.vm.network "forwarded_port", guest: 1234, host: 12340
IDE: IntelliJ IDEA with Ruby plugin
The Issue:
I've tried to set up remote Ruby debugging following this guide and am getting an error in the IDE: "failed to find free socket port for process dispatcher". It looks like this issue is not IntelliJ-specific; I was able to reproduce it with the latest RubyMine as well.
From IDEA's log
2017-07-07 21:53:03,515 [8879188] INFO - tion.impl.ExecutionManagerImpl - Failed to find free socket port for process dispatcher
com.intellij.execution.ExecutionException: Failed to find free socket port for process dispatcher
at org.jetbrains.plugins.ruby.ruby.debugger.RubyProcessDispatcher.<init>(RubyProcessDispatcher.java:46)
at org.jetbrains.plugins.ruby.ruby.debugger.RubyRemoteDebugRunner.doExecute(RubyRemoteDebugRunner.java:62)
...
Caused by: java.net.BindException: Address already in use: JVM_Bind
at java.net.TwoStacksPlainSocketImpl.socketBind(Native Method)
at java.net.TwoStacksPlainSocketImpl.socketBind(TwoStacksPlainSocketImpl.java:137)
...
I can understand that it says Address already in use: JVM_Bind, but how is remote debugging supposed to work at all then? (I mean, is there any way to reach the guest VM port without forwarding it first? Clearly not.) Any help to solve this issue is much appreciated.
For me the issue was due to another debug session that was open in the background. To prevent that from happening again (and to close all other currently open sessions once you run the configuration again), select "Single instance only" in the Debug Configuration.
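If it is not a leftover debug session, it can also help to check what is already bound on the forwarded host port; a quick sketch on the Windows host (12340 is the host port from the Vagrantfile line above, <pid> is whatever PID netstat reports):

netstat -ano | findstr :12340
REM look up the owning process by its PID
tasklist /FI "PID eq <pid>"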

Run two instances of JBoss Fuse on the same box

Which configuration files/values need to be changed in order to run a second instance of JBoss Fuse on the same box?
Second instance properties after configuration:
Installation home: c:\jboss-fuse-6.2.1.redhat-084-2 (/usr/app/jboss-fuse-6.2.1.redhat-084-2)
Remote debug port: 5006
Jetty/CXF port: 8282
RMI registry port: 2099
RMI server port: 54444
SSH port: 8202
ActiveMQ port: 62616
HawtIO console: http://localhost:8282/hawtio/login
Installation home:
$JBOSS_FUSE_HOME\bin\setenv
----
KARAF_HOME=/usr/app/jboss-fuse-6.2.1.redhat-084-2
KARAF_DATA=/usr/app/jboss-fuse-6.2.1.redhat-084-2/data
KARAF_ETC=/usr/app/jboss-fuse-6.2.1.redhat-084-2/etc
export KARAF_HOME
export KARAF_DATA
export KARAF_ETC
%JBOSS_FUSE_HOME%\bin\setenv.bat
----
SET KARAF_HOME=c:\jboss-fuse-6.2.1.redhat-084-2
SET KARAF_DATA=c:\jboss-fuse-6.2.1.redhat-084-2\data
SET KARAF_ETC=c:\jboss-fuse-6.2.1.redhat-084-2\etc
Remote debug port
$JBOSS_FUSE_HOME\bin\admin
$JBOSS_FUSE_HOME\bin\karaf
$JBOSS_FUSE_HOME\bin\patch
----
DEFAULT_JAVA_DEBUG_OPTS="-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=5006"
%JBOSS_FUSE_HOME%\bin\admin.bat
%JBOSS_FUSE_HOME%\bin\karaf.bat
%JBOSS_FUSE_HOME%\bin\patch.bat
----
set DEFAULT_JAVA_DEBUG_OPTS=-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=5006
Jetty/CXF port
JBOSS_FUSE_HOME\etc\jetty.xml
---
<Property name="jetty.port" default="8282"/>
JBOSS_FUSE_HOME\etc\org.ops4j.pax.web.cfg
---
org.osgi.service.http.port=8282
JBOSS_FUSE_HOME\etc\system.properties
---
org.osgi.service.http.port=8282
RMI registry port/RMI server port
JBOSS_FUSE_HOME\etc\org.apache.karaf.management.cfg
---
rmiRegistryPort = 2099
rmiServerPort = 54444
SSH port
JBOSS_FUSE_HOME\etc\org.apache.karaf.shell.cfg
---
sshPort = 8202
ActiveMQ port
JBOSS_FUSE_HOME\etc\system.properties
---
activemq.port = 62616
activemq.host = localhost
This depends on the applications installed, so let's stick with vanilla JBoss Fuse 6.2+
There are 3 components that need a change in configuration:
ActiveMQ broker
Hawtio web interface
sshd
Conflicts happen while binding on TCP/IP ports. Use two sets of ports and you're done.
Configuration files are located in the $KARAF_ETC folder, usually etc/ inside the JBoss Fuse installation folder.
ActiveMQ
Change property activemq.port inside etc/system.properties.
Default value is 61616.
Hawtio / OSGi HTTP
Change property org.osgi.service.http.port inside etc/system.properties. Default is 8181.
This is also defined in etc/org.ops4j.pax.web.cfg.
SSH
Change property sshPort inside etc/org.apache.karaf.shell.cfg. Default is 8101. A consolidated sketch for a second instance follows.
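Putting the changes together for the second instance from the question (a sketch using the ports listed above; the RMI ports from etc/org.apache.karaf.management.cfg are included for completeness):

# etc/system.properties
activemq.port = 62616
org.osgi.service.http.port = 8282

# etc/org.ops4j.pax.web.cfg
org.osgi.service.http.port = 8282

# etc/org.apache.karaf.shell.cfg
sshPort = 8202

# etc/org.apache.karaf.management.cfg
rmiRegistryPort = 2099
rmiServerPort = 54444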
Another way: Create a Fabric with two child containers. Each container is just like a regular instance. The infrastructure is just a bit more complex than the standalone one.