Supervisord log file is not created in the given path

I have the following in my supervisord configuration:
[program:nodejs]
command=node server.js
autostart=true
autorestart=true
stderr_logfile=/home/user/logs/nodejs.err.log
stderr_outfile=/home/user/logs/nodejs.out.log
I don't see any log files being created in the logs directory. Instead, temporary log files are created as /tmp/nodejs_some_random_string.log. What am I missing in order to have the log files created as specified in the configuration?

supervisord has no configuration key called stderr_outfile - check out the child-process-logs section of the docs.
Use stdout_logfile or stderr_logfile instead.
Try it like this:
[program:nodejs]
command=node server.js
autostart=true
autorestart=true
stderr_logfile=/home/user/logs/nodejs.err.log
stdout_logfile=/home/user/logs/nodejs.out.log
See the example configuration section of the supervisord docs.
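After fixing the key, make sure the target directory exists and tell supervisord to re-read its configuration. A minimal sketch, assuming supervisorctl is available and that /home/user/logs may not exist yet (supervisord creates the log files but not missing parent directories):
mkdir -p /home/user/logs
supervisorctl reread
supervisorctl update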

Related

Unable to load database on disk

I got this error after I stopped my ZooKeeper instance, copied all of the ZooKeeper data to another path, and changed dataDir=/data/zookeeper-data in zookeeper.properties.
ERROR Unable to load database on disk (org.apache.zookeeper.server.quorum.QuorumPeer)
java.io.IOException: Unreasonable length = 198238896
at org.apache.jute.BinaryInputArchive.checkLength(BinaryInputArchive.java:127)
at org.apache.jute.BinaryInputArchive.readBuffer(BinaryInputArchive.java:92)
at org.apache.zookeeper.server.persistence.Util.readTxnBytes(Util.java:233)
at org.apache.zookeeper.server.persistence.FileTxnLog$FileTxnIterator.next(FileTxnLog.java:629)
at org.apache.zookeeper.server.persistence.FileTxnSnapLog.restore(FileTxnSnapLog.java:166)
at org.apache.zookeeper.server.ZKDatabase.loadDataBase(ZKDatabase.java:223)
at org.apache.zookeeper.server.quorum.QuorumPeer.loadDataBase(QuorumPeer.java:601)
at org.apache.zookeeper.server.quorum.QuorumPeer.start(QuorumPeer.java:591)
at org.apache.zookeeper.server.quorum.QuorumPeerMain.runFromConfig(QuorumPeerMain.java:164)
at org.apache.zookeeper.server.quorum.QuorumPeerMain.initializeAndRun(QuorumPeerMain.java:111)
at org.apache.zookeeper.server.quorum.QuorumPeerMain.main(QuorumPeerMain.java:78)
It seems that some snapshot files under /opt/confluent/zookeeper/data/version-2 are corrupted, or the process doesn't have permission to read them, because I only get this error when I use systemctl start confluent-zookeeper; if I start ZooKeeper manually I don't have this problem.
The cause was that the systemd service tried to write logs to a path owned by root:root, which it did not have permission to do. After I ran chown -R kafka:kafka dir/to/log/path and fixed the permissions, my problem was solved.
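For reference, a rough sketch of the commands involved; the log directory here is a placeholder, so substitute the path your systemd unit actually writes to:
ls -ld /path/to/log/dir                      # check the current owner (e.g. root:root)
sudo chown -R kafka:kafka /path/to/log/dir   # give the service user ownership
sudo systemctl restart confluent-zookeeper   # restart the service under systemd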

Kafka Server - Could not find a 'KafkaServer' in JAAS

I have a standalone kafka broker that I'm trying to configure SASL for. Configurations are below. I'm trying to set up SASL_PLAIN authentication on the broker.
My understanding is that with the listener.name... configuration in the server.properties, I shouldn't need the jaas file. But I've experimented with one to see if that might be a better approach.
I have experimented with each of these commands, but both result in the same exception.
sudo bin/kafka-server-start etc/kafka/server.properties
sudo -Djava.security.auth.login.config=etc/kafka/kafka_server_jaas.conf bin/kafka-server-start etc/kafka/server.properties
The exception displayed is:
Fatal error during KafkaServer startup. Prepare to shutdown... Could
not find a 'KafkaServer' or 'sasl_plaintext.KafkaServer' entry in the
JAAS configuration. System property 'java.security.auth.login.config'
is not set
server.properties:
listeners=SASL_PLAINTEXT://0.0.0.0:9092
listener.security.protocol.map: SASL_PLAINTEXT:SASL_PLAINTEXT
listener.name.SASL_PLAINTEXT.plain.sasl.jaas.config:
org.apache.kafka.common.security.plain.PlainLoginModule required /
username="username" /
password="Password" /
user_username="Password";
advertised.listeners=SASL_PLAINTEXT://[ipaddress]:9092
sasl.enabled.mechanisms=PLAIN
sasl.mechanism.inter.broker.protocol=PLAIN
secutiy.inter.broker.protocol=SASL_PLAINTEXT
kafka_server_jaas.conf:
KafkaServer {
org.apache.kafka.common.security.plain.PlainLoginModule required
username="username"
password="Password"
user_username="Password";
};
I've spent a day looking at this already - has anyone else had experience with this problem?
You need to export an environment variable rather than passing the -D flag inline to kafka-server-start (or to sudo).
export KAFKA_OPTS="-Djava.security.auth.login.config=/path/to/kafka_server_jaas.conf"
bin/kafka-server-start /path/to/server.properties
See Confluent's documentation sections on Kafka security.
Putting my mistakes here for posterity:
Don't run your startup commands directly from the CLI; put them in a .sh file and run it from there:
For example, something like this:
zkstart:
export KAFKA_OPTS="-Djava.security.auth.login.config=etc/kafka/zookeeper_jaas.conf"
bin/zookeeper-server-start etc/kafka/zookeeper.properties &
kafkastart:
export KAFKA_OPTS="-Djava.security.auth.login.config=etc/kafka/kafka_server_jaas.conf"
bin/kafka-server-start etc/kafka/server.properties
If you still encounter an error related to the configs, check your _jaas files to make sure every configuration section named in the error message is present. If they are, the format is probably slightly off: check for the two semicolons in each section, and if that fails, try recreating the file entirely from scratch (or from a copy-paste out of the documentation).
Edit:
So, the final solution for me was to add the export.... lines to the beginning of the corresponding kafka-server-start and zookeeper-server-start scripts. It took me a while before 'everything is a file' finally clicked and I realized those script files were the actual basis for the services.
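For illustration, the top of the modified kafka-server-start script might look roughly like this; the JAAS path is an example, and the rest of the script is left untouched:
#!/bin/bash
# added so the broker always picks up the JAAS config, regardless of how it is launched
export KAFKA_OPTS="-Djava.security.auth.login.config=/etc/kafka/kafka_server_jaas.conf"
# ... original kafka-server-start contents continue below ...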

Voltdb init encountered an unrecoverable error and is exiting

I followed the official VoltDB documentation, but I encountered an error when using
voltdb init --config=deployment.xml
to initialize the VoltDB configuration file.
The error is:
ERROR: Deployment information could not be obtained from cluster node or locally
VoltDB has encountered an unrecoverable error and is exiting
The log may contain additional information.
My VoltDB version is voltdb-community-8.0.
The log file volt.log contains:
2018-05-02 08:52:25,048 INFO [main] HOST: PID of this Volt process is 15950
2018-05-02 08:52:25,062 INFO [main] HOST: Command line arguments: org.voltdb.VoltDB initialize deployment deployment.xml
2018-05-02 08:52:25,063 INFO [main] HOST: Command line JVM arguments: -Xmx2048m -Xms2048m -XX:+AlwaysPreTouch -Djava.awt.headless=true -Djavax.security.auth.useSubjectCredsOnly=false -Dsun.net.inetaddr.ttl=300 -Dsun.net.inetaddr.negative.ttl=3600 -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/tmp -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:+CMSParallelRemarkEnabled -XX:+UseTLAB -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -XX:+UseCondCardMark -Dsun.rmi.dgc.server.gcInterval=9223372036854775807 -Dsun.rmi.dgc.client.gcInterval=9223372036854775807 -XX:CMSWaitDuration=120000 -XX:CMSMaxAbortablePrecleanTime=120000 -XX:+ExplicitGCInvokesConcurrent -XX:+CMSScavengeBeforeRemark -XX:+CMSClassUnloadingEnabled -Dlog4j.configuration=file:///usr/local/voltdb-community-8.0/voltdb/log4j.xml -Djava.library.path=default
2018-05-02 08:52:25,064 INFO [main] HOST: Command line JVM classpath: /usr/local/voltdb-community-8.0/voltdb/voltdb-8.0.jar:/usr/local/voltdb-community-8.0/lib/vmetrics.jar:/usr/local/voltdb-community-8.0/lib/commons-logging-1.1.3.jar:/usr/local/voltdb-community-8.0/lib/log4j-1.2.16.jar:/usr/local/voltdb-community-8.0/lib/jetty-io-9.3.21.v20170918.jar:/usr/local/voltdb-community-8.0/lib/avro-1.7.7.jar:/usr/local/voltdb-community-8.0/lib/lz4-1.2.0.jar:/usr/local/voltdb-community-8.0/lib/jetty-server-9.3.21.v20170918.jar:/usr/local/voltdb-community-8.0/lib/jline-2.10.jar:/usr/local/voltdb-community-8.0/lib/tomcat-juli.jar:/usr/local/voltdb-community-8.0/lib/jsch-0.1.51.jar:/usr/local/voltdb-community-8.0/lib/slf4j-nop-1.6.2.jar:/usr/local/voltdb-community-8.0/lib/kafka-clients-0.8.2.2.jar:/usr/local/voltdb-community-8.0/lib/httpcore-4.3.3.jar:/usr/local/voltdb-community-8.0/lib/super-csv-2.1.0.jar:/usr/local/voltdb-community-8.0/lib/felix.jar:/usr/local/voltdb-community-8.0/lib/commons-codec-1.6.jar:/usr/local/voltdb-community-8.0/lib/scala-xml_2.11-1.0.2.jar:/usr/local/voltdb-community-8.0/lib/jetty-util-9.3.21.v20170918.jar:/usr/local/voltdb-community-8.0/lib/slf4j-api-1.6.2.jar:/usr/local/voltdb-community-8.0/lib/jetty-servlet-9.3.21.v20170918.jar:/usr/local/voltdb-community-8.0/lib/snappy-java-1.1.1.7.jar:/usr/local/voltdb-community-8.0/lib/jackson-mapper-asl-1.9.13.jar:/usr/local/voltdb-community-8.0/lib/kafka_2.11-0.8.2.2.jar:/usr/local/voltdb-community-8.0/lib/jetty-security-9.3.21.v20170918.jar:/usr/local/voltdb-community-8.0/lib/scala-library-2.11.5.jar:/usr/local/voltdb-community-8.0/lib/owner-1.0.9.jar:/usr/local/voltdb-community-8.0/lib/owner-java8-1.0.9.jar:/usr/local/voltdb-community-8.0/lib/snmp4j-2.5.2.jar:/usr/local/voltdb-community-8.0/lib/jetty-continuation-9.3.21.v20170918.jar:/usr/local/voltdb-community-8.0/lib/httpclient-4.3.6.jar:/usr/local/voltdb-community-8.0/lib/servlet-api-3.1.jar:/usr/local/voltdb-community-8.0/lib/jna.jar:/usr/local/voltdb-community-8.0/lib/jetty-http-9.3.21.v20170918.jar:/usr/local/voltdb-community-8.0/lib/metrics-core-2.2.0.jar:/usr/local/voltdb-community-8.0/lib/tomcat-jdbc.jar:/usr/local/voltdb-community-8.0/lib/httpasyncclient-4.0.2.jar:/usr/local/voltdb-community-8.0/lib/httpcore-nio-4.3.2.jar:/usr/local/voltdb-community-8.0/lib/protobuf-java-3.4.0.jar:/usr/local/voltdb-community-8.0/lib/scala-parser-combinators_2.11-1.0.2.jar:/usr/local/voltdb-community-8.0/lib/jackson-core-asl-1.9.13.jar:/usr/local/voltdb-community-8.0/lib/commons-lang3-3.0.jar:/usr/local/voltdb-community-8.0/lib/extension/voltdb-rabbitmq.jar
2018-05-02 08:52:25,064 ERROR [main] HOST: Deployment information could not be obtained from cluster node or locally
So it fails to generate the configuration file. Please tell me what "Deployment information could not be obtained from cluster node or locally" means.
This error means that it could not find the specified deployment.xml file in the local directory. You can omit --config=deployment.xml and just run "voltdb init"; it will generate a default deployment.xml file for you. Then you can proceed to "voltdb start" if you just want a simple standalone instance with the default settings.
Or, if you want to modify the configuration settings, you could run "voltdb init" to get a default configuration, then run "voltdb get deployment" to retrieve the generated deployment.xml file from the voltdbroot directory to the local directory. Then you could delete the voltdbroot directory, modify this deployment.xml file and start over. You could also start over using a deployment file you generate manually or one copied from the examples/HOWTOs/deployment-file-examples folder.
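Put together, that sequence might look something like this (a sketch; run it from the directory where you want voltdbroot created):
voltdb init                          # creates voltdbroot with a default deployment
voltdb get deployment                # copies the generated deployment.xml into the local directory
rm -rf voltdbroot                    # start over
# edit deployment.xml as needed, then re-initialize and start
voltdb init --config=deployment.xml
voltdb start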
(Disclosure: I work at VoltDB)

How do I enable remote JMX with a port in ZooKeeper's zkServer.cmd

Here is my zkServer.cmd file:
@echo off
setlocal
call "%~dp0zkEnv.cmd"
set ZOOMAIN=org.apache.zookeeper.server.quorum.QuorumPeerMain
echo on
call %JAVA% "-Dzookeeper.log.dir=%ZOO_LOG_DIR%" "-Dzookeeper.root.logger=%ZOO_LOG4J_PROP%" -cp "%CLASSPATH%" %ZOOMAIN% "%ZOOCFG%" %*
endlocal
The zkServer.sh script will run the zkEnv.sh script, which in turn will look for a script at ../conf/zookeeper-env.sh.
Create a file in the conf folder called zookeeper-env.sh.
Paste this into the file and restart Zookeeper:
JMXLOCALONLY=false
JMXDISABLE=false
JMXPORT=4048
JMXAUTH=false
JMXSSL=false
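To confirm the JMX agent is actually listening on the chosen port (4048 is the example value above), you can check for the listener on the ZooKeeper host or point jconsole at it; a quick sketch on a Linux host, with the hostname as a placeholder:
ss -ltn | grep 4048            # or: netstat -an | grep 4048
jconsole <zookeeper-host>:4048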
First, obtain the hostname (or a reachable IP, e.g. a LAN/public/NAT address):
hostname -i
# or find ip
ip a
Next, add the following options to ZOOMAIN (assuming the hostname is my.remoteconsole.org and the desired port is 8989):
-Dcom.sun.management.jmxremote
-Dcom.sun.management.jmxremote.authenticate=false
-Dcom.sun.management.jmxremote.ssl=false
-Dcom.sun.management.jmxremote.port=8989
-Djava.rmi.server.hostname=my.remoteconsole.org
More details about the available options are in the Java docs (http://docs.oracle.com/javase/8/docs/technotes/guides/management/agent.html). Applied to the zkServer.cmd shown in the question, the call line might end up roughly like the sketch below.
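This is only a sketch using the example hostname and port from above; the JMX flags simply go in front of %ZOOMAIN%:
call %JAVA% "-Dzookeeper.log.dir=%ZOO_LOG_DIR%" "-Dzookeeper.root.logger=%ZOO_LOG4J_PROP%" -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.port=8989 -Djava.rmi.server.hostname=my.remoteconsole.org -cp "%CLASSPATH%" %ZOOMAIN% "%ZOOCFG%" %*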
Add org.apache.zookeeper.server.quorum.QuorumPeerMain in server-start.
The class org.apache.zookeeper.server.quorum.QuorumPeerMain will start a JMX-manageable ZooKeeper server. This class registers the proper MBeans during initialization to support JMX monitoring and management of the instance.
In addition to the above answer by Marcell du Plessis, if you are running ZooKeeper as a systemd service, you can specify the JMX port in an environment variable in the unit file:
[Unit]
Description=Apache Kafka Zookeeper
Requires=network.target
After=network.target
[Service]
Type=simple
User=user
Group=users
ExecStart=/your-zookeeper-install-path/bin/zkServer.sh start
ExecStop=/your-zookeeper-install-path/bin/zkServer.sh stop
TimeoutStopSec=180
Restart=on-failure
Environment="JMX_PORT=9999"
[Install]
WantedBy=multi-user.target
Alias=zookeeper.service
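After editing the unit file, reload systemd and restart the service so the new environment variable takes effect (the unit name zookeeper comes from the Alias above):
sudo systemctl daemon-reload
sudo systemctl restart zookeeper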

ZooKeeper issue when setting up Kafka

To install Kafka, I downloaded the Kafka tar archive. To start the server, I tried this command:
bin/zookeeper-server-start.sh config/zookeeper.properties
The following error occurred on entering the above command:
INFO Reading configuration from: config/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
[2014-08-21 11:53:55,748] FATAL Invalid config, exiting abnormally (org.apache.zookeeper.server.quorum.QuorumPeerMain)
org.apache.zookeeper.server.quorum.QuorumPeerConfig$ConfigException: Error processing config/zookeeper.properties
at org.apache.zookeeper.server.quorum.QuorumPeerConfig.parse(QuorumPeerConfig.java:110)
at org.apache.zookeeper.server.quorum.QuorumPeerMain.initializeAndRun(QuorumPeerMain.java:99)
at org.apache.zookeeper.server.quorum.QuorumPeerMain.main(QuorumPeerMain.java:76)
Caused by: java.lang.IllegalArgumentException: config/zookeeper.properties file is missing
at org.apache.zookeeper.server.quorum.QuorumPeerConfig.parse(QuorumPeerConfig.java:94)
... 2 more
Invalid config, exiting abnormally
Do I need to set up ZooKeeper separately? How can I resolve this?
For Windows:
Go to kafka_2.11-2.0.0\bin\windows folder
Then run zookeeper-server-start.bat ../../config/zookeeper.properties
This is basically because of this
java.lang.IllegalArgumentException: config/zookeeper.properties file is missing
It would be really useful if you could share what exactly you have done so far. Also check that the file exists at that location and that you are running the command from the correct location: it is supposed to be run from your $KAFKA_HOME folder (where you've extracted the tar file), for example as shown below.
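For example, assuming the tar file was extracted to /opt/kafka_2.11-2.0.0 (adjust the path to your own setup):
cd /opt/kafka_2.11-2.0.0
bin/zookeeper-server-start.sh config/zookeeper.properties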
I too faced the same issue when I installed Kafka from Homebrew on a MacBook.
This happens because the zookeeper.properties file is not under a config folder next to bin; with Homebrew it lives under libexec.
Follow these steps:
Enter the command cd /usr/local/Cellar/kafka/2.3.0
Enter the command cd libexec
Now enter the command zookeeper-server-start config/zookeeper.properties
You should then get the message: INFO binding to port 0.0.0.0/0.0.0.0:2181 (org.apache.zookeeper.server.NIOServerCnxnFactory).
Earlier I was getting this error:
$ zookeeper-server-start config/zookeeper.properties
[2019-10-02 14:35:20,159] INFO Reading configuration from: config/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
[2019-10-02 14:35:20,160] ERROR Invalid config, exiting abnormally (org.apache.zookeeper.server.quorum.QuorumPeerMain)
org.apache.zookeeper.server.quorum.QuorumPeerConfig$ConfigException: Error processing config/zookeeper.properties
at org.apache.zookeeper.server.quorum.QuorumPeerConfig.parse(QuorumPeerConfig.java:156)
at org.apache.zookeeper.server.quorum.QuorumPeerMain.initializeAndRun(QuorumPeerMain.java:104)
at org.apache.zookeeper.server.quorum.QuorumPeerMain.main(QuorumPeerMain.java:81)
Caused by: java.lang.IllegalArgumentException: config/zookeeper.properties file is missing
at org.apache.zookeeper.server.quorum.QuorumPeerConfig.parse(QuorumPeerConfig.java:140)
... 2 more
Invalid config, exiting abnormally
I saw that when you run the above command, it doesn't pick up the config file. So if you put the complete path, like c:\Kafka\config\zookeeper.properties, it works; see the example below.
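For example, assuming Kafka is extracted to c:\Kafka (an illustrative path):
cd c:\Kafka
bin\windows\zookeeper-server-start.bat c:\Kafka\config\zookeeper.properties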
I faced the exact same error, and after a while I realized the reason was that the zookeeper.properties file could not be found because the path wasn't correct. I installed Kafka through brew, so the config folder was created inside libexec; find where the config directory is, check for zookeeper.properties inside it, and give that path.
Had the same issue.
I was following this guide, and step 2 says to run this command:
bin/zookeeper-server-start.sh config/zookeeper.properties
I had two problems: the first was that I wasn't inside the root directory of the untarred archive, and the second was that I didn't copy the complete command. Make sure both are correct and try again.
Just check whether the /config folder exists.
Try passing the properties file directly, e.g. zookeeper-server-start zookeeper.properties
I installed it with Homebrew, and this works.
This happens because bin/windows is added to the path but kafka/config is not.
Just navigate to your Kafka folder and then try to run it.
I am adding before/after screenshots in case they help.
You can use PowerShell as an alternative to CMD.
Suppose myKafka is your Kafka home directory; extract your Kafka tar file there.
The extracted folder (kafkaDir) will contain ./bin, ./config, and other internal folders.
Now open a PowerShell prompt and go to the myKafka folder.
Run the command below:
.\kafkaDir\bin\windows\zookeeper-server-start.bat .\kafkaDir\config\zookeeper.properties
ZooKeeper will start.
You need to fix the absolute path to:
$KAFKA_HOME/config/zookeeper.properties
For me I used:
$KAFKA_HOME = /usr/local/kafka
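So the full command would be something like this (using the path above; adjust it to your own installation):
export KAFKA_HOME=/usr/local/kafka
$KAFKA_HOME/bin/zookeeper-server-start.sh $KAFKA_HOME/config/zookeeper.properties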
In \bin\windows\kafka-run-class.bat, add the section marked below to the classpath setup:
rem Classpath addition for release
for %%i in ("%BASE_DIR%\libs\*") do (
call :concat "%%i"
)
rem ----- begin section to add -----
rem Classpath addition for LSB style path
if exist %BASE_DIR%\share\java\kafka\* (
call :concat %BASE_DIR%\share\java\kafka\*
)
rem ----- end of section to add -----
rem Classpath addition for core
for %%i in ("%BASE_DIR%\core\build\libs\kafka_%SCALA_BINARY_VERSION%*.jar") do (
call :concat "%%i"
You have to run it from the Kafka home directory, but you are running it from bin.