We are running Jenkins in OpenShift and it is fully up and running. Now, when trying to add a static agent, we are getting a 404 Not Found error.
Agent startup script:
java -jar remoting_dslave.jar -jnlpUrl http://xxx-xxx-xxx.apps.ocp1.uat.dbs.com/computer/xxxxxxxx2a/jenkins-agent.jnlp -secret xxxxxxxxxxxxxxxxxxxxxx -workDir "/dcifent/JenkinsSlaves/ci3_dynamicSlave"
Getting the below error:
WARNING: Connection refused (Connection refused)
Jun 07, 2022 11:17:23 AM hudson.remoting.jnlp.Main$CuiListener error
SEVERE: http://xxxxxxxxx.apps.ocp1.uat.dbs.com/ provided port:8080 is not reachable
java.io.IOException: http://xxxxxxxxx.apps.ocp1.uat.dbs.com/ provided port:8080 is not reachable
at org.jenkinsci.remoting.engine.JnlpAgentEndpointResolver.resolve(JnlpAgentEndpointResolver.java:311)
at hudson.remoting.Engine.innerRun(Engine.java:689)
at hudson.remoting.Engine.run(Engine.java:514)
Created a new route in OpenShift for port 8080 and updated the startup script as below:
java -jar agent.jar -jnlpUrl http://routefor8080.apps.ocp1.uat.dbs.com/computer/xxxxxxxx2a/jenkins-agent.jnlp -secret xxxxxxxxxxxxxxxxx -workDir "/dcifent/JenkinsSlaves/ci3_dynamicSlave"
Now getting a different error:
Failed to obtain http://routefor8080.apps.ocp1.uat.dbs.com/computer/xxxxxxxx2a/jenkins-agent.jnlp?encrypt=true
java.io.IOException: Failed to load http://routefor8080.apps.ocp1.uat.dbs.com/computer/xxxxxxxx2a/jenkins-agent.jnlp?encrypt=true: 404 Not Found
at hudson.remoting.Launcher.parseJnlpArguments(Launcher.java:517)
at hudson.remoting.Launcher.run(Launcher.java:345)
at hudson.remoting.Launcher.main(Launcher.java:296)
Waiting 10 seconds before retry
How can I connect static agents to the Jenkins instance running in OpenShift? Can someone please help?
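For what it's worth, OpenShift routes only carry HTTP(S), so exposing the inbound agent TCP port through a second route is unlikely to work. Recent remoting versions (Jenkins 2.217+) can instead tunnel the agent connection over the existing HTTP route via WebSocket; a minimal sketch, assuming agent.jar is new enough and the WebSocket option is selected on the node's configuration page, keeping the original placeholders:
java -jar agent.jar \
  -jnlpUrl http://xxx-xxx-xxx.apps.ocp1.uat.dbs.com/computer/xxxxxxxx2a/jenkins-agent.jnlp \
  -secret xxxxxxxxxxxxxxxxxxxxxx \
  -webSocket \
  -workDir "/dcifent/JenkinsSlaves/ci3_dynamicSlave"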
Related
On running the command $EAP_HOME/bin/standalone.sh -c standalone-full.xml -b I'm getting errors like:
12:06:15,197 INFO  [org.kie.server.controller.websocket.client.WebSocketKieServerControllerImpl] (KieServer-ControllerConnect) Kie Server points to non Web Socket controller 'http://localhost:8080/business-central/rest/controller', using default REST mechanism
12:06:15,198 WARN  [org.kie.server.services.impl.controller.DefaultRestControllerImpl] (KieServer-ControllerConnect) Exception encountered while syncing with controller at http://localhost:8080/business-central/rest/controller/server/default-kieserver error Connection refused (Connection refused)
12:06:19,805 WARN  [org.kie.server.client.impl.AbstractKieServicesClientImpl] (Thread-125) Marking endpoint 'http://localhost:8080/kie-server/services/rest/server' as failed due to Connection refused (Connection refused)
12:06:19,805 WARN  [org.kie.server.client.impl.AbstractKieServicesClientImpl] (Thread-125) Cannot invoke request - 'No available endpoints found'
12:06:24,812 WARN  [org.kie.server.client.impl.AbstractKieServicesClientImpl] (Thread-125) Marking endpoint 'http://localhost:8080/kie-server/services/rest/server' as failed due to Connection refused (Connection refused)
12:06:24,812 WARN  [org.kie.server.client.impl.AbstractKieServicesClientImpl] (Thread-125) Cannot invoke request - 'No available endpoints found'
With the bind address, Business Central is running but I cannot find any execution server,
but when I run the same command without the bind address, like
./standalone.sh -c standalone-full.xml
everything works properly.
What could be the issue when using the bind address?
I'm using:
RHPAM 7.12.0
JBoss EAP 7.4.0
I've kept the default configuration and haven't changed anything.
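The log above suggests the KIE client endpoints still point at localhost even when the server is bound to an external address, so connections to localhost:8080 are refused. A hedged sketch, assuming the default RHPAM system properties and using <bind-address> as a placeholder for whatever address you pass to -b:
./standalone.sh -c standalone-full.xml -b <bind-address> \
  -Dorg.kie.server.location=http://<bind-address>:8080/kie-server/services/rest/server \
  -Dorg.kie.server.controller=http://<bind-address>:8080/business-central/rest/controller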
I am trying to connect to an NFS server, where I am seeing the below issue.
mount.nfs: timeout set for Mon May 11 19:27:01 2020
mount.nfs: trying text-based options 'nfsvers=3,addr=IP'
mount.nfs: prog 100003, trying vers=3, prot=6
mount.nfs: portmap query retrying: RPC: Timed out
mount.nfs: prog 100003, trying vers=3, prot=17
mount.nfs: portmap query failed: RPC: Unable to receive - Connection refused
nfs-utils is installed and running on the client.
rpcbind is running on the client machine.
Ports 111, 2049 and 892 are open to the NFS server.
However, I am not sure if I am missing anything to resolve this issue.
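For reference, in the trace above prot=6 is TCP and prot=17 is UDP, and the portmapper (port 111) query fails on both, so it may be worth confirming that UDP 111 is open as well as TCP. A minimal sketch for narrowing it down, assuming a placeholder server name nfs-server.example.com and export path /export:
# Ask the server's portmapper which RPC services and ports it advertises
rpcinfo -p nfs-server.example.com
# Retry the mount forcing TCP, to sidestep the refused UDP portmap query
sudo mount -t nfs -o nfsvers=3,proto=tcp nfs-server.example.com:/export /mnt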
I am trying to back up and restore the Rancher server (single node install), as described here.
After the backup, I turned off the Rancher server node and ran a new Rancher container on a new node (in the same network, but with another IP address), then restored using the backup file.
After restoring, I logged in to the Rancher UI and it showed an error, so I checked the logs of the Rancher server, which showed the following:
2019-10-05 16:41:32.197641 I | http: TLS handshake error from 127.0.0.1:38388: EOF
2019-10-05 16:41:32.202442 I | http: TLS handshake error from 127.0.0.1:38380: EOF
2019-10-05 16:41:32.210378 I | http: TLS handshake error from 127.0.0.1:38376: EOF
2019-10-05 16:41:32.211106 I | http: TLS handshake error from 127.0.0.1:38386: EOF
2019/10/05 16:42:26 [ERROR] ClusterController c-4pgjl [user-controllers-controller] failed with : failed to start user controllers for cluster c-4pgjl: failed to contact server: Get https://192.168.94.154:6443/api/v1/namespaces/kube-system?timeout=30s: waiting for cluster agent to connect
2019/10/05 16:44:34 [ERROR] ClusterController c-4pgjl [user-controllers-controller] failed with : failed to start user controllers for cluster c-4pgjl: failed to contact server: Get https://192.168.94.154:6443/api/v1/namespaces/kube-system?timeout=30s: waiting for cluster agent to connect
2019/10/05 16:48:50 [ERROR] ClusterController c-4pgjl [user-controllers-controller] failed with : failed to start user controllers for cluster c-4pgjl: failed to contact server: Get https://192.168.94.154:6443/api/v1/namespaces/kube-system?timeout=30s: waiting for cluster agent to connect
2019-10-05 16:50:19.114475 I | mvcc: store.index: compact 75951
2019-10-05 16:50:19.137825 I | mvcc: finished scheduled compaction at 75951 (took 22.527694ms)
2019-10-05 16:55:19.120803 I | mvcc: store.index: compact 76282
2019-10-05 16:55:19.124813 I | mvcc: finished scheduled compaction at 76282 (took 2.746382ms)
After that, I checked the logs of the master nodes and found that the Rancher agent still tries to connect to the old Rancher server (the old IP address) rather than the new one, which makes the cluster unavailable.
How can I fix this?
You need to re-register the node in Rancher using the following steps.
Update the server-url in Rancher by going to Global -> Settings -> server-url
This should be the full URL, including https://.
Then use this script to re-register the node in Rancher: https://github.com/mattmattox/cluster-agent-tool
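If the script is not an option, re-running the node registration command generated by the new Rancher UI on each node achieves the same re-registration. A sketch with purely hypothetical values (the exact image tag, token, checksum and role flags come from your own cluster's Edit -> Registration command page):
docker run -d --privileged --restart=unless-stopped --net=host \
  -v /etc/kubernetes:/etc/kubernetes -v /var/run:/var/run \
  rancher/rancher-agent:v2.2.8 \
  --server https://rancher.example.com \
  --token <registration-token> \
  --ca-checksum <checksum> \
  --etcd --controlplane --worker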
I have WildFly 10 running on a Docker Swarm cluster. All HTTP requests go to a load balancer (traefik), configured via labels in the docker stack YML (which works perfectly): app.wildfly.my.swarm on port 80 routes to the WildFly container's port 8080, and admin.wildfly.my.swarm on port 80 routes to port 9990. In my browser everything works fine.
But if I try to use the maven wildfly plugin for remote deployment it fails with:
[ERROR] Failed to execute goal org.wildfly.plugins:wildfly-maven-plugin:1.2.1.Final:deploy (default-cli) on project automat: Failed to execute goal deploy. java.net.ConnectException: WFLYPRT0053: Could not connect to remote+http://admin.wildfly.my.swarm:80. The connection failed: XNIO000816: Redirect encountered establishing connection -> [Help 1]
It only works if I open the management port directly.
Is there any configuration needed to be able to deploy remotely with the Maven WildFly plugin to a WildFly instance behind a proxy?
EDIT:
When trying to connect with the CLI:
./bin/jboss-cli.sh
You are disconnected at the moment. Type 'connect' to connect to the server or 'help' for the list of supported commands.
[disconnected /] connect admin.wildfly.my.swarm
The controller is not available at admin.wildfly.my.swarm:9990: java.net.ConnectException: WFLYPRT0053: Could not connect to http-remoting://admin.wildfly.my.swarm:9990. The connection failed: WFLYPRT0053: Could not connect to http-remoting://admin.wildfly.my.swarm:9990. The connection failed: Connection refused
[disconnected /] connect admin.wildfly.my.swarm:30000
Authenticating against security realm: ManagementRealm
Username: admin
Password:
Warning! There were errors trying to load extensions. For more details, please, execute 'extension-commands --errors'
[standalone@admin.wildfly.my.swarm:30000 /]
Port 30000 is auto-assigned by swarm.
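For context, XNIO000816 means the remote+http management client hit an HTTP redirect it cannot follow, which is consistent with deployment only working against the management port directly. A minimal sketch of deploying past the proxy, assuming the plugin's wildfly.hostname / wildfly.port user properties and the swarm-assigned port 30000 from the CLI session above:
mvn wildfly:deploy -Dwildfly.hostname=admin.wildfly.my.swarm -Dwildfly.port=30000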
I am not able to shut down the JBoss server with the below command.
Command :
$JBOSS_HOME/bin/jboss-cli.sh --connect controller=$SERVER_IP_ADDRESS:$SERVER_PORT command=:shutdown
Each time, I have been killing the server process to restart it, which is not a good practice; as we are moving to a PROD environment, we should use the shutdown command to stop the server instead of killing it.
I am getting the below error. Please help.
Server Log :
jboss#devkopmdmh01.corp.ybusa.net::/usr/local/prod/jboss/jboss-eap-6.1/jboss-as/bin > ./shutdownMDM.sh
org.jboss.as.cli.CliInitializationException: Failed to connect to the controller
at org.jboss.as.cli.impl.CliLauncher.initCommandContext(CliLauncher.java:280)
Caused by: org.jboss.as.cli.CommandLineException: The controller is not available at 10.0.15.162:8080
at org.jboss.as.cli.impl.CommandContextImpl.tryConnection(CommandContextImpl.java:951)
... 8 more
Caused by: java.io.IOException: java.net.ConnectException: JBAS012144: Could not connect to remote://10.0.15.162:8080. The connection timed out
at org.jboss.as.controller.client.impl.AbstractModelControllerClient.executeForResult(AbstractModelControllerClient.java:129)
... 11 more
Caused by: java.net.ConnectException: JBAS012144: Could not connect to remote://10.0.15.162:8080. The connection timed out
at org.jboss.as.protocol.ProtocolConnectionUtils.connectSync(ProtocolConnectionUtils.java:131)
... 13 more
The JBoss CLI is attempting to connect to the native management endpoint of the running JBoss instance and send a shutdown command. It looks like it's trying to connect to 10.0.15.162:8080, which is most likely not the right port.
Take a look at your bin/jboss-cli.xml file, which should contain the host and port to connect to. For example:
<default-controller>
<host>localhost</host>
<port>9999</port>
</default-controller>
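With that pointing at the native management interface, the shutdown command from the question would become the following (9999 is the default native management port on EAP 6.x, not the 8080 HTTP port used above):
$JBOSS_HOME/bin/jboss-cli.sh --connect controller=$SERVER_IP_ADDRESS:9999 command=:shutdown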