HTTP ERROR 503 Service Unavailable (Powered by Jetty:// 10.0.6)

I ran a Java project but got the above error. Please help me resolve it. Here are the console log and the content of the error page:
HTTP ERROR 503 Service Unavailable
URI: /PES/
STATUS: 503
MESSAGE: Service Unavailable
SERVLET: -
Powered by Jetty:// 10.0.6
Starting preview server on port 8080
Modules:
PES (/PES)
SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.

Related

ERROR Exiting Kafka due to fatal exception (kafka.Kafka$)

~/kafka$ bin/kafka-server-start.sh config/server.properties
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/home/boitran/hive/lib/log4j-slf4j-impl-2.17.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/home/boitran/kafka/libs/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/home/boitran/kafka/libs/slf4j-reload4j-1.7.36.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
[2022-12-29 13:46:12,977] INFO Registered kafka:type=kafka.Log4jController MBean (kafka.utils.Log4jControllerRegistration$)
[2022-12-29 13:46:13,359] ERROR Exiting Kafka due to fatal exception (kafka.Kafka$)
java.lang.NoSuchMethodError: scala.Predef$.refArrayOps([Ljava/lang/Object;)Ljava/lang/Object;
at kafka.Kafka$.getPropsFromArgs(Kafka.scala:43) ~[kafka_2.13-3.3.1.jar:?]
at kafka.Kafka$.main(Kafka.scala:86) [kafka_2.13-3.3.1.jar:?]
at kafka.Kafka.main(Kafka.scala) [kafka_2.13-3.3.1.jar:?]
Exception in thread "main" java.lang.NoSuchMethodError: scala.Option.orNull(Lscala/$less$colon$less;)Ljava/lang/Object;
at kafka.utils.Exit$.exit(Exit.scala:28)
at kafka.Kafka$.main(Kafka.scala:122)
at kafka.Kafka.main(Kafka.scala)
I have installed and started Kafka exactly following the Apache site, but I get this error.
This error is common when the Scala version on your machine does not match the one your Kafka build was compiled against. Either download the Kafka build matching your installed Scala version (e.g. kafka_2.12-3.3.1 for Scala 2.12), or upgrade Scala to 2.13.
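The NoSuchMethodError above is the classic symptom of a Scala binary-compatibility mismatch, and the jar name in the stack trace (kafka_2.13-3.3.1.jar) tells you which Scala version that Kafka build expects. A small sketch of reading the version out of the jar name, assuming the standard kafka_&lt;scala-version&gt;-&lt;kafka-version&gt;.jar naming:

```shell
# The Scala version Kafka was built against is encoded in the jar name:
# kafka_<scala-version>-<kafka-version>.jar
jar="kafka_2.13-3.3.1.jar"       # taken from the stack trace above
scala_ver="${jar#kafka_}"        # strip the "kafka_" prefix -> 2.13-3.3.1.jar
scala_ver="${scala_ver%%-*}"     # keep everything before the first "-" -> 2.13
echo "Kafka was built against Scala $scala_ver"
```

If `scala -version` on your machine reports something different, download the matching distribution instead (e.g. kafka_2.12-3.3.1.tgz for Scala 2.12) rather than mixing the two.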

KIE Server in JBoss EAP 7.4.0 marks 'http://localhost:8080/kie-server/services/rest/server' as failed due to Connection refused (Connection refused)

On running the command $EAP_HOME/bin/standalone.sh -c standalone-full.xml -b (with a bind address), I get errors like:
12:06:15,197 INFO [org.kie.server.controller.websocket.client.WebSocketKieServerControllerImpl] (KieServer-ControllerConnect) Kie Server points to non Web Socket controller 'http://localhost:8080/business-central/rest/controller', using default REST mechanism
12:06:15,198 WARN [org.kie.server.services.impl.controller.DefaultRestControllerImpl] (KieServer-ControllerConnect) Exception encountered while syncing with controller at http://localhost:8080/business-central/rest/controller/server/default-kieserver error Connection refused (Connection refused)
12:06:19,805 WARN [org.kie.server.client.impl.AbstractKieServicesClientImpl] (Thread-125) Marking endpoint 'http://localhost:8080/kie-server/services/rest/server' as failed due to Connection refused (Connection refused)
12:06:19,805 WARN [org.kie.server.client.impl.AbstractKieServicesClientImpl] (Thread-125) Cannot invoke request - 'No available endpoints found'
12:06:24,812 WARN [org.kie.server.client.impl.AbstractKieServicesClientImpl] (Thread-125) Marking endpoint 'http://localhost:8080/kie-server/services/rest/server' as failed due to Connection refused (Connection refused)
12:06:24,812 WARN [org.kie.server.client.impl.AbstractKieServicesClientImpl] (Thread-125) Cannot invoke request - 'No available endpoints found'
With the bind address, Business Central is running but I cannot find any execution server.
But when I run the same command without a bind address, like
./standalone.sh -c standalone-full.xml
everything works properly.
What could be the issue when using a bind address?
I'm using:
rhpam 7.12.0
jboss eap 7.4.0
I've kept the default configuration and haven't changed anything.
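One plausible explanation (an assumption, since the question leaves the bind address unspecified): with -b the server binds to a non-loopback interface, while KIE Server's default controller and self-location URLs still point at http://localhost:8080/..., which then refuses connections. A hedged sketch of overriding those URLs via the standard org.kie.server.* system properties so they match the bind address:

```shell
# Sketch under the assumption above; 192.168.1.10 is a hypothetical bind address.
BIND_ADDR=192.168.1.10
$EAP_HOME/bin/standalone.sh -c standalone-full.xml -b "$BIND_ADDR" \
  -Dorg.kie.server.location="http://$BIND_ADDR:8080/kie-server/services/rest/server" \
  -Dorg.kie.server.controller="http://$BIND_ADDR:8080/business-central/rest/controller"
```

Without the bind address the defaults resolve to localhost on both sides, which is why the plain ./standalone.sh invocation works.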

Unable to resolve address for Kubernetes service

I have installed Kafka single-node using Confluent. There is an error in Kafka pod :
[WARN] 2022-04-26 14:29:47,008 [main-SendThread(zookeeper.confluent.svc.cluster.local:2181)] org.apache.zookeeper.ClientCnxn run - Session 0x0 for sever zookeeper.confluent.svc.cluster.local:2181, Closing socket connection. Attempting reconnect except it is a SessionExpiredException.
java.lang.IllegalArgumentException: Unable to canonicalize address zookeeper.confluent.svc.cluster.local:2181 because it's not resolvable
at org.apache.zookeeper.SaslServerPrincipal.getServerPrincipal(SaslServerPrincipal.java:78)
at org.apache.zookeeper.SaslServerPrincipal.getServerPrincipal(SaslServerPrincipal.java:41)
at org.apache.zookeeper.ClientCnxn$SendThread.startConnect(ClientCnxn.java:1161)
at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1210)
[INFO] 2022-04-26 14:29:47,273 [main] kafka.zookeeper.ZooKeeperClient info - [ZooKeeperClient Kafka server] Closing.
[ERROR] 2022-04-26 14:29:48,112 [main-SendThread(zookeeper.confluent.svc.cluster.local:2181)] org.apache.zookeeper.client.StaticHostProvider resolve - Unable to resolve address: zookeeper.confluent.svc.cluster.local:2181
java.net.UnknownHostException: zookeeper.confluent.svc.cluster.local
Error messages :
Unable to canonicalize address zookeeper.confluent.svc.cluster.local:2181 because it's not resolvable
Unable to resolve address: zookeeper.confluent.svc.cluster.local:2181
I checked my ZooKeeper; it's healthy and works without a problem. I also checked DNS using dnsutils:
$ kubectl -n default exec -it dnsutils -- nslookup zookeeper.confluent.svc.cluster.local
Server: 10.96.0.10
Address: 10.96.0.10#53
Name: zookeeper.confluent.svc.cluster.local
Address: 192.168.0.111
What can I do? Is this a k8s-related problem?
This happened to me too, but on a docker-compose project.
Finally, I found that the server running out of disk space was causing this issue.
Freeing up some space fixed it.
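A quick way to check for the "no space left" condition the answer above describes (a sketch; the 90% threshold is an arbitrary choice):

```shell
# Report root-filesystem usage and warn when it crosses a threshold.
# df -P gives POSIX output; column 5 is the "Use%" value.
usage=$(df -P / | awk 'NR==2 { sub(/%/, "", $5); print $5 }')
if [ "$usage" -ge 90 ]; then
  echo "WARNING: / is ${usage}% full - services may start failing in odd ways"
else
  echo "/ is ${usage}% full - disk space is probably not the problem"
fi
```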

Weavescope on Microk8s doesn't recognize containers

I'm running a MicroK8s single-node cluster and just installed Weavescope; however, it doesn't recognize any running containers. I can see my pods and services fine, but each pod simply states "0 containers" underneath.
The logs from the Weavescope agent and app pods indicate that something is very wrong, but I'm not adept enough with Kubernetes to know how to deal with the errors.
Logs from Weavescope agent:
microk8s.kubectl logs -n weave weave-scope-cluster-agent-7944c858c9-bszjw
time="2020-05-23T14:56:10Z" level=info msg="publishing to: weave-scope-app.weave.svc.cluster.local:80"
<probe> INFO: 2020/05/23 14:56:10.378586 Basic authentication disabled
<probe> INFO: 2020/05/23 14:56:10.439179 command line args: --mode=probe --probe-only=true --probe.http.listen=:4041 --probe.kubernetes.role=cluster --probe.publish.interval=4.5s --probe.spy.interval=2s weave-scope-app.weave.svc.cluster.local:80
<probe> INFO: 2020/05/23 14:56:10.439215 probe starting, version 1.13.1, ID 6336ff46bcd86913
<probe> ERRO: 2020/05/23 14:56:10.439261 Error getting docker bridge ip: route ip+net: no such network interface
<probe> INFO: 2020/05/23 14:56:10.439487 kubernetes: targeting api server https://10.152.183.1:443
<probe> ERRO: 2020/05/23 14:56:10.440206 plugins: problem loading: no such file or directory
<probe> INFO: 2020/05/23 14:56:10.444345 Profiling data being exported to :4041
<probe> INFO: 2020/05/23 14:56:10.444355 go tool pprof http://:4041/debug/pprof/{profile,heap,block}
<probe> WARN: 2020/05/23 14:56:10.444505 Error collecting weave status, backing off 10s: Get http://127.0.0.1:6784/report: dial tcp 127.0.0.1:6784: connect: connection refused. If you are not running Weave Net, you may wish to suppress this warning by launching scope with the `--weave=false` option.
<probe> INFO: 2020/05/23 14:56:10.506596 volumesnapshotdatas are not supported by this Kubernetes version
<probe> INFO: 2020/05/23 14:56:10.506950 volumesnapshots are not supported by this Kubernetes version
<probe> INFO: 2020/05/23 14:56:11.559811 Control connection to weave-scope-app.weave.svc.cluster.local starting
<probe> INFO: 2020/05/23 14:56:14.948382 Publish loop for weave-scope-app.weave.svc.cluster.local starting
<probe> WARN: 2020/05/23 14:56:20.447578 Error collecting weave status, backing off 20s: Get http://127.0.0.1:6784/report: dial tcp 127.0.0.1:6784: connect: connection refused. If you are not running Weave Net, you may wish to suppress this warning by launching scope with the `--weave=false` option.
<probe> WARN: 2020/05/23 14:56:40.451421 Error collecting weave status, backing off 40s: Get http://127.0.0.1:6784/report: dial tcp 127.0.0.1:6784: connect: connection refused. If you are not running Weave Net, you may wish to suppress this warning by launching scope with the `--weave=false` option.
<probe> INFO: 2020/05/23 15:19:12.825869 Pipe pipe-7287306037502507515 connection to weave-scope-app.weave.svc.cluster.local starting
<probe> INFO: 2020/05/23 15:19:16.509232 Pipe pipe-7287306037502507515 connection to weave-scope-app.weave.svc.cluster.local exiting
Logs from the Weavescope app:
microk8s.kubectl logs -n weave weave-scope-app-bc7444d59-csxjd
<app> INFO: 2020/05/23 14:56:11.221084 app starting, version 1.13.1, ID 5e3953d1209f7147
<app> INFO: 2020/05/23 14:56:11.221114 command line args: --mode=app
<app> INFO: 2020/05/23 14:56:11.275231 Basic authentication disabled
<app> INFO: 2020/05/23 14:56:11.290717 listening on :4040
<app> WARN: 2020/05/23 14:56:11.340182 Error updating weaveDNS, backing off 20s: Error running weave ps: exit status 1: "Link not found\n". If you are not running Weave Net, you may wish to suppress this warning by launching scope with the `--weave=false` option.
<app> WARN: 2020/05/23 14:56:31.457702 Error updating weaveDNS, backing off 40s: Error running weave ps: exit status 1: "Link not found\n". If you are not running Weave Net, you may wish to suppress this warning by launching scope with the `--weave=false` option.
<app> ERRO: 2020/05/23 15:19:16.504169 Error copying to pipe pipe-7287306037502507515 (1) websocket: io: read/write on closed pipe
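A likely cause (an assumption, suggested by the "Error getting docker bridge ip" line in the agent log): Scope's probe discovers containers through the Docker daemon, but MicroK8s runs its containers under containerd, so there is no Docker socket for the probe to read, even though pods and services (fetched from the Kubernetes API) show up fine. A quick check on the host:

```shell
# If the Docker socket is absent, Scope's container view will stay empty
# while the Kubernetes-level objects (pods, services) still appear.
if [ -S /var/run/docker.sock ]; then
  status="present"
else
  status="missing"
fi
echo "Docker socket: $status"
```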

Can't produce messages to Apache Kafka from outside Hortonworks HDP sandbox 2.5

I have a problem trying to produce messages to an Apache Kafka topic from outside the Hortonworks HDP Sandbox. Right now I am using version 2.5 deployed to Azure, but I experienced similar behaviour using HDP 2.6 on a local VirtualBox VM. I was able to open port 6667 and confirm that the TCP connection gets through to the VM. I am also able to access the list of topics.
Using listeners=PLAINTEXT://0.0.0.0:6667
C:\GIT\kafka\bin\windows>kafka-console-producer.bat --broker-list MY_PUBLIC_IP:6667 --topic test1
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/C:/GIT/kafka/core/build/dependant-libs-2.11.11/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/C:/GIT/kafka/tools/build/dependant-libs-2.11.11/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/C:/GIT/kafka/connect/api/build/dependant-libs/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/C:/GIT/kafka/connect/runtime/build/dependant-libs/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/C:/GIT/kafka/connect/file/build/dependant-libs/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/C:/GIT/kafka/connect/json/build/dependant-libs/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
>sadasdas
>[2017-11-04 23:06:14,672] WARN [Producer clientId=console-producer] Connection to node 1001 could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
[2017-11-04 23:06:15,174] ERROR Error when sending message to topic test1 with key: null, value: 8 bytes with error: (org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
org.apache.kafka.common.errors.TimeoutException: Expiring 1 record(s) for test1-0: 1508 ms has passed since batch creation plus linger time
[2017-11-04 23:06:15,724] WARN [Producer clientId=console-producer] Connection to node 1001 could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
[2017-11-04 23:06:16,875] WARN [Producer clientId=console-producer] Connection to node 1001 could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
With the following configuration entries added:
advertised.port=6667
advertised.listeners=PLAINTEXT://MY_PUBLIC_IP:6667
advertised.host.name=MY_PUBLIC_IP
result:
C:\GIT\kafka\bin\windows>kafka-console-producer.bat --broker-list MY_PUBLIC_IP:6667 --topic test1
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/C:/GIT/kafka/core/build/dependant-libs-2.11.11/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/C:/GIT/kafka/tools/build/dependant-libs-2.11.11/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/C:/GIT/kafka/connect/api/build/dependant-libs/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/C:/GIT/kafka/connect/runtime/build/dependant-libs/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/C:/GIT/kafka/connect/file/build/dependant-libs/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/C:/GIT/kafka/connect/json/build/dependant-libs/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
>asdasd
[2017-11-04 22:37:11,713] WARN [Producer clientId=console-producer] Error while fetching metadata with correlation id 2 : {test1=LEADER_NOT_AVAILABLE} (org.apache.kafka.clients.NetworkClient)
Could you give me any advice as to what may be wrong? I'd like to stick with the sandbox for now.
UPDATE
I finally got it working. This was the solution:
advertised.listeners=PLAINTEXT://sandbox.hortonworks.com:6667
It seems my Kafka client can only talk to the broker when a hostname is given, not an IP...
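This behaviour follows from how Kafka bootstrapping works: the address in --broker-list is only used for the initial metadata request, after which the client reconnects to whatever advertised.listeners says. Since the sandbox now advertises sandbox.hortonworks.com, the client machine must be able to resolve that hostname. A sketch of the usual workaround, a hosts-file entry (MY_PUBLIC_IP stands in for the real address, as in the question):

```shell
# On the client machine, map the advertised hostname to the sandbox's
# public IP so the post-metadata connections succeed. On Windows the
# equivalent file is C:\Windows\System32\drivers\etc\hosts.
echo "MY_PUBLIC_IP  sandbox.hortonworks.com" | sudo tee -a /etc/hosts
```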