Unable to start the daemon process. This problem might be caused by incorrect configuration of the daemon - flutter

I have tried almost every solution I could find on the internet, but my problem is still not solved.
I opened gradle.properties and added the line below, but it didn't help:
org.gradle.jvmargs=-Xmx1024m -XX:MaxPermSize=512m
I also deleted the .gradle directory and tried building again, but the problem still exists. Can anyone help?
FAILURE: Build failed with an exception.
* What went wrong:
Unable to start the daemon process.
This problem might be caused by incorrect configuration of the daemon.
For example, an unrecognized jvm option is used.
Please refer to the User Manual chapter on the daemon at https://docs.gradle.org/6.7/userguide/gradle_daemon.html
Process command line: C:\Program Files\Java\jdk1.8.0_291\bin\java.exe -Xmx1024M -Dfile.encoding=windows-1252 -Duser.country=US -Duser.language=en -Duser.variant -cp C:\Users\likec\.gradle\wrapper\dists\gradle-6.7-all\cuy9mc7upwgwgeb72wkcrupxe\gradle-6.7\lib\gradle-launcher-6.7.jar org.gradle.launcher.daemon.bootstrap.GradleDaemon 6.7
Please read the following process output to find out more:
-----------------------
#
# There is insufficient memory for the Java Runtime Environment to continue.
# Native memory allocation (malloc) failed to allocate 1048576 bytes for AllocateHeap
# An error report file with more information is saved as:
# C:\Users\likec\.gradle\daemon\6.7\hs_err_pid14492.log
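Given the malloc failure at the end of the log, one variant worth trying (a sketch, not a guaranteed fix: -XX:MaxPermSize was removed in Java 8 and is ignored by this JDK, and a smaller heap is more likely to fit in the available native memory) is to simplify gradle.properties to:
# keep the daemon heap small enough for the available native memory
org.gradle.jvmargs=-Xmx512m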

Related

Failed to read postmaster.pid file while running embedded-postgres

My Spring application uses yandex-qatools/postgresql-embedded for running unit tests.
While executing them, I constantly get the error below:
ERROR 75847 --- [ Test worker] r.y.q.embed.postgresql.PostgresProcess : Failed to read PID file (File '/var/folders/sh/xr6l_7bs1_z9v1jfsyctc45w0000gp/T/postgresql-embed-b05c213f-7416-4200-a586-a3afb3263478/db-content-4f285249-22ea-4625-b771-156adbf5851f/postmaster.pid' does not exist)
java.io.FileNotFoundException: File '/var/folders/sh/xr6l_7bs1_z9v1jfsyctc45w0000gp/T/postgresql-embed-b05c213f-7416-4200-a586-a3afb3263478/db-content-4f285249-22ea-4625-b771-156adbf5851f/postmaster.pid' does not exist
A warning pops up before the exception, but for now, let's ignore it.
WARN 75847 --- [ Test worker] r.y.q.embed.postgresql.PostgresProcess : Possibly failed to run initdb:
no data was returned by command ""/private/var/folders/sh/xr6l_7bs1_z9v1jfsyctc45w0000gp/T/postgresql-embed-b05c213f-7416-4200-a586-a3afb3263478/pgsql-10.3-1/pgsql/bin/postgres" -V"
The program "postgres" is needed by initdb but was not found in the
same directory as "/private/var/folders/sh/xr6l_7bs1_z9v1jfsyctc45w0000gp/T/postgresql-embed-b05c213f-7416-4200-a586-a3afb3263478/pgsql-10.3-1/pgsql/bin/initdb".
Check your installation.
I verified that no other instance of Postgres is running on my local machine using
ps -ef|grep postgres
I followed this thread as well, but it didn't help.
I've run out of options to fix this; can anyone please suggest how to resolve it?
OSX version: 12.1
Thanks in advance
In my case, besides your error, I could also see the following:
r.y.q.embed.postgresql.PostgresProcess : Possibly failed to run initdb:
initdb: invalid locale settings; check LANG and LC_* environment variables
This message led me to the solution. I just added the environment variables below to my .zshrc file:
export LANG="en_US.UTF-8"
export LC_ALL="en_US.UTF-8"
export LC_CTYPE="en_US.UTF-8"
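After reloading the shell, you can verify that the settings took effect (a quick sanity check, not specific to this library):
source ~/.zshrc
locale   # LANG, LC_ALL and LC_CTYPE should all report en_US.UTF-8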

No assemblies found

I am getting the following exception. I ensured that the jars are in that location, and I ran the job with root permissions.
ERROR [ExecutorRunner for app-20170509111035-0004/19686] 2017-05-09 11:11:19,267 SPARK-WORKER Logging.scala:95 - Error running executor
java.lang.IllegalStateException: No assemblies found in '/usr/apps/cassandra/dse/resources/spark/lib'.
at org.apache.spark.launcher.CommandBuilderUtils.checkState(CommandBuilderUtils.java:249) ~[spark-launcher_2.10-1.6.3.3.jar:1.6.3.3]
at org.apache.spark.launcher.AbstractCommandBuilder.findAssembly(AbstractCommandBuilder.java:342) ~[spark-launcher_2.10-1.6.3.3.jar:1.6.3.3]
at org.apache.spark.launcher.AbstractCommandBuilder.buildClassPath(AbstractCommandBuilder.java:187) ~[spark-launcher_2.10-1.6.3.3.jar:1.6.3.3]
at org.apache.spark.launcher.AbstractCommandBuilder.buildJavaCommand(AbstractCommandBuilder.java:119) ~[spark-launcher_2.10-1.6.3.3.jar:1.6.3.3]
at org.apache.spark.launcher.WorkerCommandBuilder.buildCommand(WorkerCommandBuilder.scala:39) ~[spark-core_2.10-1.6.3.3.jar:1.6.3.3]
at org.apache.spark.launcher.WorkerCommandBuilder.buildCommand(WorkerCommandBuilder.scala:48) ~[spark-core_2.10-1.6.3.3.jar:1.6.3.3]
at org.apache.spark.deploy.worker.CommandUtils$.buildCommandSeq(CommandUtils.scala:63) ~[spark-core_2.10-1.6.3.3.jar:5.0.8]
at org.apache.spark.deploy.worker.CommandUtils$.buildProcessBuilder(CommandUtils.scala:51) ~[spark-core_2.10-1.6.3.3.jar:5.0.8]
at org.apache.spark.deploy.worker.ExecutorRunner.org$apache$spark$deploy$worker$ExecutorRunner$$fetchAndRunExecutor(ExecutorRunner.scala:143) ~[spark-core_2.10-1.6.3.3.jar:5.0.8]
at org.apache.spark.deploy.worker.ExecutorRunner$$anon$1.run(ExecutorRunner.scala:71) [spark-core_2.10-1.6.3.3.jar:5.0.8]
INFO [dispatcher-event-loop-0] 2017-05-09 11:11:19,268 SPARK-WORKER Logging.scala:58 - Executor app-20170509111035-0004/19686 finished with state FAILED message java.lang.IllegalStateException: No assemblies found in '/usr/apps/cassandra/dse/resources/spark/lib'.
Any help would be appreciated.
I found the root cause. I'm running a DSE Analytics cluster, and on one of my seed nodes the JRE version had accidentally been changed. After pointing back to the correct JRE, this issue disappeared.
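If you want to rule this out quickly, a sketch for checking which JRE each node actually resolves (run it on every node, including the seeds; paths will vary per installation):
java -version                   # compare the reported version across nodes
readlink -f "$(which java)"     # the JRE binary that PATH actually points to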

FAILURE: Build failed with an exception. Ionic

I am trying to build an Ionic project, but I am facing this exception and have not been able to find a solution. I have tried a lot of suggestions, but none of them worked. I'm waiting for your help.
ionic start test4 tabs
cd test4
ionic platform add android
ionic build android
ERROR
C:\Users\onurr\test4>ionic build android
Running command: "C:\Program Files\nodejs\node.exe"
C:\Users\onurr\test4\hooks\after_prepare\010_add_platform_class.js
C:\Users\onurr\test4
add to body class: platform-android
ANDROID_HOME=C:\Users\onurr\AppData\Local\Android\sdk
JAVA_HOME=C:\Program Files (x86)\Java\jdk1.8.0_112
Subproject Path: CordovaLib
Starting a new Gradle Daemon for this build (subsequent builds will be faster).
FAILURE: Build failed with an exception.
* What went wrong:
Unable to start the daemon process.
This problem might be caused by incorrect configuration of the daemon.
For example, an unrecognized jvm option is used.
Please refer to the user guide chapter on the daemon at https://docs.gradle.org/2.14.1/userguide/gradle_daemon.html
Please read the following process output to find out more:
-----------------------
Error occurred during initialization of VM
Could not reserve enough space for 2097152KB object heap
* Try:
Run with --stacktrace option to get the stack trace. Run with --info or --debug option to get more log output.
Error: cmd: Command failed with exit code 1 Error output:
FAILURE: Build failed with an exception.
* What went wrong:
Unable to start the daemon process.
This problem might be caused by incorrect configuration of the daemon.
For example, an unrecognized jvm option is used.
Please refer to the user guide chapter on the daemon at https://docs.gradle.org/2.14.1/userguide/gradle_daemon.html
Please read the following process output to find out more:
-----------------------
Error occurred during initialization of VM
Could not reserve enough space for 2097152KB object heap
* Try:
Run with --stacktrace option to get the stack trace. Run with --info or --debug option to get more log output.
Judging by your output, I'm assuming you're using Windows.
Pay attention to this message:
Error occurred during initialization of VM
Could not reserve enough space for 2097152KB object heap
This normally happens when you have a 32-bit JVM. If your OS is 64-bit, replace the 32-bit JVM with a 64-bit one; it has a much higher heap limit and will solve the problem. This is the recommended solution.
Read this Oracle documentation for more info.
If your OS is 32-bit, try setting the following system environment variable and then reissue the ionic build android command (you might need to restart the Command Prompt):
Go to Start → Control Panel → System → Advanced System Settings → Advanced (tab) → Environment Variables → System Variables → New:
Variable name: _JAVA_OPTIONS
Variable value: -Xmx512M
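Equivalently, a sketch for doing the same from the Command Prompt (setx persists the variable for new sessions only, so reopen the prompt afterwards):
:: persist the JVM option for future Command Prompt sessions
setx _JAVA_OPTIONS "-Xmx512M"
:: a 64-bit JVM reports "64-Bit Server VM" in the last line
java -version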

Cloudera Kafka can not run

I installed ZooKeeper and tried to install Kafka 0.8.0 on Cloudera 5.4.4. It deployed successfully, but when I ran it, it failed. The error log is as follows:
[Errno 2] No such file or directory: '/var/log/kafka/server.log'
I really have no idea what is wrong.
Thanks in advance!
I ran into this problem before: the default maximum broker memory was set to 0 MB by Cloudera (it seems to be a Cloudera bug), which prevented Kafka from starting, and the parameter fetch.message.max.bytes was also set too low by default. Check the stderr installation log and search for the keyword ERROR; otherwise the log is too messy to read through. There you will find the root error message. The message above, [Errno 2] No such file or directory: '/var/log/kafka/server.log', is not the root exception.
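As a sketch, assuming the Cloudera agent keeps per-process logs under /var/run/cloudera-scm-agent/process (the exact path can differ per installation):
# search the Kafka broker's stderr log for the root error message
grep -n "ERROR" /var/run/cloudera-scm-agent/process/*kafka*/logs/stderr.log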

Error running hadoop application in Eclipse on Windows

I'm trying to set up an Eclipse environment for developing and debugging Hadoop. I'm following Tom White's Hadoop: The Definitive Guide, 3rd ed. What I would like to do is get the MaxTemperature app working locally on Windows within Eclipse before moving it to my Hortonworks sandbox VM. The comment on page 158 about using the local job runner seems to be what I want. I don't want to set up a full Hadoop installation on Windows. I'm hoping that with the right config params I can convince it to run as a Java application inside Eclipse.
Windows: 7
Eclipse: Luna
Hadoop: 2.4.0
JDK: 7
When I set the Run configuration for MaxTemperatureDriver (Source code on page 157) to
inputfile outputdir foo (deliberate bogus 3rd parameter)
I get the usage message so I know I'm running my program with those params.
If I remove the bogus third param I get
Exception in thread "main" java.io.IOException: Cannot initialize Cluster. Please check your configuration for mapreduce.framework.name and the correspond server addresses.
at org.apache.hadoop.mapreduce.Cluster.initialize(Cluster.java:120)
at org.apache.hadoop.mapreduce.Cluster.<init>(Cluster.java:82)
at org.apache.hadoop.mapreduce.Cluster.<init>(Cluster.java:75)
at org.apache.hadoop.mapreduce.Job$9.run(Job.java:1255)
at org.apache.hadoop.mapreduce.Job$9.run(Job.java:1251)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
at org.apache.hadoop.mapreduce.Job.connect(Job.java:1250)
at org.apache.hadoop.mapreduce.Job.submit(Job.java:1279)
at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1303)
at mark.MaxTemperatureDriver.run(MaxTemperatureDriver.java:52)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
at mark.MaxTemperatureDriver.main(MaxTemperatureDriver.java:56)
I've tried inserting -conf but it seems to be ignored. There is no error message if I specify a nonexistent path.
I've tried inserting -fs file:/// -jt local, but it makes no difference
I've tried inserting -D mapreduce.framework.name=local
I've tried specifying the input and output with the file: format
Note: I'm not asking how to configure Eclipse to connect to a remote Hadoop installation. I want the application to run within Eclipse.
Is this possible? Any ideas?
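For reference, a sketch of how the options above can be combined, assuming a local-runner config file passed via -conf (the file name and the input/output paths are placeholders):
<!-- hadoop-local.xml: placeholder config enabling the local job runner -->
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>file:///</value>
  </property>
  <property>
    <name>mapreduce.framework.name</name>
    <value>local</value>
  </property>
</configuration>
with the Eclipse program arguments set to:
-conf hadoop-local.xml input/sample.txt output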
Additional info:
I turned on debugging. I saw:
582 [main] DEBUG org.apache.hadoop.mapreduce.Cluster - Trying ClientProtocolProvider : org.apache.hadoop.mapred.YarnClientProtocolProvider
583 [main] DEBUG org.apache.hadoop.mapreduce.Cluster - Cannot pick org.apache.hadoop.mapred.YarnClientProtocolProvider as the ClientProtocolProvider - returned null protocol
I'm wondering not why YarnClientProtocolProvider failed, but why it didn't try LocalClientProtocolProvider.
New info:
It seems that this is an issue with Hadoop 2.4.0. I recreated my environment with Hadoop 1.2.1, followed the instructions in
http://gerrymcnicol.com/index.php/2014/01/02/hadoop-and-cassandra-part-4-writing-your-first-mapreduce-job/
added the Windows hack from
http://bigdatanerd.wordpress.com/2013/11/14/mapreduce-running-mapreduce-in-windows-file-system-debug-mapreduce-in-eclipse
and it all started working.
The following blog will be useful:
Running mapreduce in Windows filesystem