I've installed Cloudera Manager on a three-machine cluster. When I added CDH5 services (including Hive, Spark, YARN, ...) to the cluster, it failed with "ArithmeticException: / by zero".
Version: Cloudera Enterprise Data Hub Edition Trial 5.4.5 (#5 built by jenkins on 20150728-0320 git: ...)
org.drools.runtime.rule.ConsequenceException: Exception executing consequence for rule "Compute hiveserver2_spark_executor_cores" in com.cloudera.cmf.rules: org.drools.RuntimeDroolsException: java.lang.ArithmeticException: / by zero
    at org.drools.runtime.rule.impl.DefaultConsequenceExceptionHandler.handleException(DefaultConsequenceExceptionHandler.java:39)
    ...
Caused by: java.lang.ArithmeticException: / by zero
    at com.cloudera.cmf.rules.ComputationFunctions.calculateHiveSparkExecutorMemoryTotal(ComputationFunctions.java:278)
    at com.cloudera.cmf.rules.Rule_Compute_HiveServer2_executor_and_driver_memory_and_overehead_25364fb53f7d4bf08ec8a11bca01bcf8.accumulateExpression1(Rule_Compute_HiveServer2_executor_and_driver_memory_and_overehead_25364fb53f7d4bf08ec8a11bca01bcf8.java:16)
    at com.cloudera.cmf.rules.Rule_Compute_HiveServer2_executor_and_driver_memory_and_overehead_25364fb53f7d4bf08ec8a11bca01bcf8AccumulateExpression1Invoker.evaluate(Rule_Compute_HiveServer2_executor_and_driver_memory_and_overehead_25364fb53f7d4bf08ec8a11bca01bcf8AccumulateExpression1Invoker.java:31)
    at org.drools.base.accumulators.JavaAccumulatorFunctionExecutor.accumulate(JavaAccumulatorFunctionExecutor.java:107)
    at org.drools.rule.Accumulate.accumulate(Accumulate.java:173)
    ...
Can anybody help? Or can someone tell me where compiled rule classes such as "Rule_Compute_HiveServer2_executor_and_driver_memory_and_overehead_25364fb53f7d4bf08ec8a11bca01bcf8.java" are placed? I'd like to replace this class with one that emits more debug messages.
Many thanks,
I'm trying to configure Oracle APEX to use SAML with ForgeRock as the IdP. I'm running APEX 21.2.0 on Enterprise DB 21.3.0.0 and ORDS 21.4.1 (all images from the Oracle Container Registry). I worked through the docs here.
I think I'm just about there: I have the SAML config done in APEX, I've created a remote SP in ForgeRock, and the app redirects as expected. Once I authenticate with ForgeRock IDM, I get redirected back to the apex_authentication.saml_callback endpoint and then get an error page. The APEX logs have the following error:
- ora_sqlerrm: ORA-19032: Expected XML tag , got no content
ORA-06512: at "SYS.XMLTYPE", line 310
ORA-06512: at line 1
ORA-06512: at "APEX_210200.WWV_FLOW_XML_SECURITY", line 1096
ORA-06512: at "APEX_210200.WWV_FLOW_XML_SECURITY", line 1307
ORA-06512: at "APEX_210200.WWV_FLOW_AUTHENTICATION_SAML", line 462
ORA-06512: at "APEX_210200.WWV_FLOW_AUTHENTICATION_NATIVE", line 1268
ORA-06512: at "APEX_210200.WWV_FLOW_PLUGIN", line 3500
ORA-06512: at "APEX_210200.WWV_FLOW_PLUGIN", line 4097
ORA-06512: at "APEX_210200.WWV_FLOW_AUTHENTICATION", line 1688
I can't seem to find anything useful about this error in a SAML authentication context. I'm guessing there's an issue processing the assertion. I double-checked the certs and the assertion looks good in SAML Tracer, so I'm stuck. Any ideas what I'm missing? Are there additional logs somewhere that might be more useful?
You'll need to apply the latest patch set for APEX 21.2 to get beyond this issue. It was fixed in APEX 21.2.2, and the patch set is now up to 21.2.6. Even once you get past this error it may not be all plain sailing, depending on the IdP you are using.
Some useful hints and help can be found in this thread.
We have a running TwinCAT project on a PC. After a restart of the machine, the following errors occur when I try to run the project in "Run Mode" or try an Online Reset.
Errors:
Type Server (Port) Timestamp Message
Error (65535) 'Term 29 (EK1100)' (1006): state change aborted (requested 'PREOP', back to 'INIT').
Error (65535) 'Term 29 (EK1100)' (1006): 'INIT to PREOP' failed! Error: 'check product code'. Device 'EL1014-XXXX-XXXX' found and 'EK1100-0000-0018' expected.
Warning (65535) 'Term 33 (EL1014) (1010) - Term 34 (EL1014) (1011)' Communication interrupted
Warning (65535) 'Term 35 (EK1100) (1012) - Term 43 (EL1014) (1020)' Communication interrupted
[Screenshot: tree structure]
[Screenshot: online state]
The EK1100 and the EL1014 have been replaced, but the error stays. Can you suggest a fix for this problem?
Very likely the problem is that the configured EtherCAT tree structure does not match the one that is actually found.
As you can see, the EtherCAT master expects 'EK1100-0000-0018' but finds 'EL1014-XXXX-XXXX'.
Rescan your EtherCAT tree from the System Manager and check whether the configured and the found hardware configurations match.
Correct your EtherCAT configuration if you notice errors and reactivate the project.
Specifically, take a look at Term 29; that is probably where the source of your error is.
Either the wrong card was swapped in there or the EtherCAT connection is faulty at that point.
I am trying to implement the GitHub project (https://github.com/tomatoTomahto/CDH-Sensor-Analytics) on our internal Hadoop cluster via Cloudera Data Science Workbench.
When running the project in Cloudera Data Science Workbench, I get the error "NoBrokersAvailable" when trying to connect to Kafka through the Python API KafkaProducer(bootstrap_servers='broker1:9092') [the code can be found in https://github.com/tomatoTomahto/CDH-Sensor-Analytics/blob/master/datagenerator/KafkaConnection.py].
I have authenticated using Kerberos. I have tried giving the broker node without the port number, and also as a list, but nothing has worked so far.
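For reference, the connection code in the linked KafkaConnection.py boils down to roughly the following (reconstructed from the traceback below; the broker string comes from the project's config file):

    from kafka import KafkaProducer

    class KafkaConnection():
        def __init__(self, brokers, topic):
            # brokers comes from config['kafka_brokers'], e.g. 'broker1:9092'
            self._kafka_producer = KafkaProducer(bootstrap_servers=brokers)
            self._topic = topic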
Below is the stack trace.
NoBrokersAvailable                        Traceback (most recent call last)
in engine
----> 1 dgen = DataGenerator(config)

/home/cdsw/datagenerator/DataGenerator.py in __init__(self, config)
     39
     40         self._kudu = KuduConnection(self._config['kudu_master'], self._config['kudu_port'], spark)
---> 41         self._kafka = KafkaConnection(self._config['kafka_brokers'], self._config['kafka_topic'])
     42
     43         #self._kafka

/home/cdsw/datagenerator/KafkaConnection.py in __init__(self, brokers, topic)
      4 class KafkaConnection():
      5     def __init__(self, brokers, topic):
----> 6         self._kafka_producer = KafkaProducer(bootstrap_servers=brokers)
      7         self._topic = topic
      8

/home/cdsw/.local/lib/python3.6/site-packages/kafka/producer/kafka.py in __init__(self, **configs)
    333
    334         client = KafkaClient(metrics=self._metrics, metric_group_prefix='producer',
--> 335                              **self.config)
    336
    337         # Get auto-discovered version from client if necessary

/home/cdsw/.local/lib/python3.6/site-packages/kafka/client_async.py in __init__(self, **configs)
    208         if self.config['api_version'] is None:
    209             check_timeout = self.config['api_version_auto_timeout_ms'] / 1000
--> 210             self.config['api_version'] = self.check_version(timeout=check_timeout)
    211
    212     def _bootstrap(self, hosts):

/home/cdsw/.local/lib/python3.6/site-packages/kafka/client_async.py in check_version(self, node_id, timeout, strict)
    806         try_node = node_id or self.least_loaded_node()
    807         if try_node is None:
--> 808             raise Errors.NoBrokersAvailable()
    809         self._maybe_connect(try_node)
    810         conn = self._conns[try_node]

NoBrokersAvailable: NoBrokersAvailable
I also tried connecting from outside the Workbench, via the CLI over a VPN connection, and got the same error.
Any pointers on what I am missing? Thanks in advance!
The first step is establishing whether the network route is open and the broker is up and listening on that port. After that you can check authentication, etc.
Did you try telnet <broker host> 9092?
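If telnet isn't available inside the CDSW session, a quick check along the same lines can be done from Python (broker1 and 9092 are placeholders for your broker host and port):

    # Rough equivalent of "telnet broker1 9092": just verifies the TCP route is open.
    import socket

    sock = socket.create_connection(("broker1", 9092), timeout=5)
    print("TCP connection to the broker succeeded")
    sock.close()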
You may need to explicitly set advertised.listeners in addition to listeners. I've occasionally seen a weird quirk with Java where the broker wasn't binding to the expected network interface (or at least the one I expected!) and I had to force it using advertised.listeners.
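One more thing worth checking, since you mention Kerberos: if the CDH brokers only expose a SASL listener, kafka-python can also fail its initial version probe with NoBrokersAvailable when no SASL options are passed. A minimal sketch, assuming a SASL_PLAINTEXT listener on port 9092 and that the gssapi dependency for kafka-python is installed (host, port and topic are placeholders):

    from kafka import KafkaProducer

    producer = KafkaProducer(
        bootstrap_servers=['broker1:9092'],
        security_protocol='SASL_PLAINTEXT',   # or SASL_SSL if TLS is enabled on the broker
        sasl_mechanism='GSSAPI',              # Kerberos
        sasl_kerberos_service_name='kafka',   # service part of the broker's Kerberos principal
    )
    producer.send('test-topic', b'hello')
    producer.flush()

If a plain-text port works but the SASL one doesn't, that narrows it down to the authentication setup rather than the network.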
I've installed Bugzilla on my local machine (Windows 7) and it's working well. But when I try to create a new account it says:
Traceback:
at Bugzilla/Mailer.pm line 179.
Bugzilla::Mailer::MessageToMTA(...) called at Bugzilla/Token.pm line 89
Bugzilla::Token::issue_new_user_account_token(...) called at Bugzilla/User.pm line 2423
Bugzilla::User::check_and_send_account_creation_confirmation(...) called at C:/bugzilla/createaccount.cgi line 39
I followed the documentation provided with Bugzilla, but I'm unable to resolve this issue. Can anyone help with this?
I'm trying to configure Eclipse running on Windows to remote-debug a Java application I have running on a Unix box. The remote debugger connects, but the Launcher fails with the following stack trace:
Thread [main] (Suspended (exception ClassNotFoundException))
URLClassLoader$1.run() line: 200 [local variables unavailable]
AccessController.doPrivileged(PrivilegedExceptionAction<T>, AccessControlContext) line: not available [native method]
Launcher$AppClassLoader(URLClassLoader).findClass(String) line: 188
Launcher$AppClassLoader(ClassLoader).loadClass(String, boolean) line: 306
Launcher$AppClassLoader.loadClass(String, boolean) line: 268
Launcher$AppClassLoader(ClassLoader).loadClass(String) line: 251
Launcher$AppClassLoader(ClassLoader).loadClassInternal(String) line: 319
I have the project src referenced in the Source tab of the debug config, the default dir contains the jars I need, and I checked 'Search for duplicate source files on the path' in case that made any difference... it didn't.
When stepping through, I noticed that the AppClassLoader has a URLClassPath member called ucp whose path ArrayList contains items from the Unix classpath (i.e. Unix paths like /home/example.jar). I'm wondering whether these are being resolved on Windows (the debug session I'm running in Eclipse), and whether that is what's causing the error?
I've been searching the web for answers all day without luck. Has anyone dealt with this before, or got any suggestions on how to resolve it?
Thanks in advance...