How to purge zookeeper logs with PurgeTxnLog? - apache-zookeeper

ZooKeeper is rapidly filling our production environment with its internal binary files (transaction logs and snapshots).
According to http://zookeeper.apache.org/doc/r3.3.3/zookeeperAdmin.html
and
http://dougchang333.blogspot.com/2013/02/zookeeper-cleaning-logs-snapshots.html
this is expected behavior, and you must call org.apache.zookeeper.server.PurgeTxnLog
regularly to clean them up.
So:
% ls -l1rt /tmp/zookeeper/version-2/
total 314432
-rw-r--r-- 1 root root 67108880 Jun 26 18:00 log.1
-rw-r--r-- 1 root root 947092 Jun 26 18:00 snapshot.e99b
-rw-r--r-- 1 root root 67108880 Jun 27 05:00 log.e99d
-rw-r--r-- 1 root root 1620918 Jun 27 05:00 snapshot.1e266
... many more
% sudo java -cp zookeeper-3.4.6.jar::lib/jline-0.9.94.jar:lib/log4j-1.2.16.jar:lib/netty-3.7.0.Final.jar:lib/slf4j-api-1.6.1.jar:lib/slf4j-log4j12-1.6.1.jar:conf \
org.apache.zookeeper.server.PurgeTxnLog \
/tmp/zookeeper/version-2 /tmp/zookeeper/version-2 -n 3
but I get:
% ls -l1rt /tmp/zookeeper/version-2/
... all the existing logs plus a new directory
/tmp/zookeeper/version-2/version-2
Am I doing something wrong?

ZooKeeper now has an Autopurge feature as of 3.4.0. Take a look at https://zookeeper.apache.org/doc/trunk/zookeeperAdmin.html
It says you can use autopurge.snapRetainCount and autopurge.purgeInterval
autopurge.snapRetainCount
New in 3.4.0: When enabled, ZooKeeper auto purge feature retains the autopurge.snapRetainCount most recent snapshots and the corresponding transaction logs in the dataDir and dataLogDir respectively and deletes the rest. Defaults to 3. Minimum value is 3.
autopurge.purgeInterval
New in 3.4.0: The time interval in hours for which the purge task has to be triggered. Set to a positive integer (1 and above) to enable the auto purging. Defaults to 0.
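For example, to keep the three most recent snapshots and run the purge task once a day, the zoo.cfg entries would look like this (an illustrative snippet; restart ZooKeeper after changing the config):
autopurge.snapRetainCount=3
autopurge.purgeInterval=24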

Since I'm not hearing of a fix from ZooKeeper itself, this was an easy workaround:
COUNT=6
DATADIR=/tmp/zookeeper/version-2/
ls -1drt ${DATADIR}/* | head --lines=-${COUNT} | xargs sudo rm -f
Run this once a day from a cron job or Jenkins to prevent ZooKeeper from filling the disk.
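For instance, a root crontab entry that runs the cleanup every day at 03:00 could look like this (illustrative; the path and retention count are the ones used above):
0 3 * * * ls -1drt /tmp/zookeeper/version-2/* | head --lines=-6 | xargs rm -f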

You need to specify the parameters dataDir and snapDir with the value that is configured as dataDir in your ZooKeeper .properties file.
If your configuration looks like the following:
dataDir=/data/zookeeper
you need to call PurgeTxnLog (version 3.5.9) like the following if you want to keep the last 10 logs/snapshots:
java -cp zookeeper.jar:lib/slf4j-api-1.7.5.jar:lib/slf4j-log4j12-1.7.5.jar:lib/log4j-1.2.17.jar:conf org.apache.zookeeper.server.PurgeTxnLog /data/zookeeper /data/zookeeper -n 10
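The ZooKeeper distribution also ships a small wrapper, bin/zkCleanup.sh, which builds the classpath and calls PurgeTxnLog using the dataDir/dataLogDir from conf/zoo.cfg, so something like the following should be roughly equivalent (a sketch; check that the script exists in your install):
./bin/zkCleanup.sh -n 10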

Related

MongoDB fails to rotate logs, keeping deleted files open

I have a similar problem to what was reported in [1], in that the MongoDB log files are being kept open as deleted when a log rotation happens. However, I believe the underlying causes are quite different, so I created a new question. The long and short of it is that when this happens, which is not all the time, I end up with no Mongo logs at all; and sometimes the deleted log file is kept for so long it becomes an issue as its size becomes too big.
Unlike [1], I have set up log rotation directly in Mongo [2]. It is done as follows:
systemLog:
  verbosity: 0
  destination: file
  path: "/SOME_PATH/mongo.log"
  logAppend: true
  timeStampFormat: iso8601-utc
  logRotate: reopen
In terms of the setup: I am running MongoDB 4.2.9 (WiredTiger) on RHEL 7.4. The database sits on an XFS filesystem. The mount options we have for XFS are as follows:
rw,nodev,relatime,seclabel,attr2,inode64,noquota
Any pointers as to what could be causing this behaviour would be greatly appreciated. Thanks in advance for your time.
Update 1
Thanks to everyone for all the pointers. I now understand the configuration better, but I still think there is something amiss. To recap, in addition to the settings telling MongoDB to reopen the file on log rotate, I am also using the logrotate command. The configuration is fairly similar to what is suggested in [3]:
# rotate log every day
daily
# or if size limit exceeded
size 100M
# number of rotations to keep
rotate 5
# don't error if log is missing
missingok
# don't rotate if empty
notifempty
# compress rotated log file
compress
# permissions of rotated logs
create 644 SOMEUSER SOMEGROUP
# run post-rotate once per rotation, not once per file (see 'man logrotate')
sharedscripts
# 1. Signal to MongoDB to start a new log file.
# 2. Delete the empty 0-byte files left from compression.
postrotate
/bin/kill -SIGUSR1 $(cat /SOMEDIR/PIDFILE.pid 2> /dev/null) > /dev/null 2>&1
find /SOMEDIR/ -size 0c -delete
endscript
The main difference really is the slightly more complex postrotate command, though it does seem semantically to do the same as in [3], e.g.:
kill -USR1 $(/usr/sbin/pidof mongod)
At any rate, what seems to be happening with the present log rotate configuration is that, very infrequently, MongoDB appears to get the SIGUSR1 but does not create a new log file. I stress "seems/appears" as I do not have any hard evidence of this, since it's a very tricky problem to replicate under a controlled environment. But we can compare the two scenarios. I see that the log rolling is working in the majority of cases:
-rw-r--r--. 1 SOMEUSER SOMEGROUP 34M May 5 10:51 mongo.log
-rw-------. 1 SOMEUSER SOMEGROUP 9.2M Feb 25 2020 mongo.log-20200225.gz
-rw-------. 1 SOMEUSER SOMEGROUP 8.3M Nov 17 03:39 mongo.log-20201117.gz
-rw-r--r--. 1 SOMEUSER SOMEGROUP 8.6M Jan 30 03:19 mongo.log-20210130.gz
-rw-------. 1 SOMEUSER SOMEGROUP 8.6M Feb 27 03:31 mongo.log-20210227.gz
...
However, on occasion it seems that instead of creating a new log file, MongoDB keeps hold of the deleted file handle (note the missing mongo.log):
$ ls -lh
total 74M
-rw-r--r--. 1 SOMEUSER SOMEGROUP 18M Feb 17 03:29 mongo.log-20210217.gz
-rw-r--r--. 1 SOMEUSER SOMEGROUP 18M Feb 18 03:11 mongo.log-20210218.gz
-rw-r--r--. 1 SOMEUSER SOMEGROUP 18M Feb 19 03:41 mongo.log-20210219.gz
-rw-r--r--. 1 SOMEUSER SOMEGROUP 15M Feb 20 03:07 mongo.log-20210220.gz
-rw-r--r--. 1 SOMEUSER SOMEGROUP 6.5M Mar 13 03:41 mongo.log-20210313.gz
$ lsof -p SOMEPID | grep deleted | numfmt --field=7 --to=iec
mongod SOMEPID SOMEUSER 679w REG 253,5 106M 1191182408 /SOMEDIR/mongo.log (deleted)
It's not entirely clear to me how one would get more information on what MongoDB is doing upon receiving the SIGUSR1 signal. I also noticed that I get a lot of successful rotations before hitting the issue - it may just be a coincidence, but I wonder if it's the final rotation that is causing the problem (e.g. rotate 5). I'll keep on investigating but any additional pointers are most welcome. Thanks in advance.
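One rough way to observe whether mongod actually reopens its log on SIGUSR1 is to compare where its log file descriptor points before and after sending the signal manually (the PID below is a placeholder):
ls -l /proc/SOMEPID/fd | grep mongo.log
kill -SIGUSR1 SOMEPID
ls -l /proc/SOMEPID/fd | grep mongo.log
If the descriptor still points at the deleted file after the signal, the reopen did not happen.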
[1] SO: MongoDB keeps deleted log files open after rotation
[2] MongoDB docs: Rotate Log Files
[3] How can I disable the logging of MongoDB?

kafka + which files should created under kafka-logs

Usually after a Kafka cluster scratch installation I see these files under /data/kafka-logs (the Kafka broker log directory, where all topics should be located):
ls -ltr
-rw-r--r-- 1 kafka hadoop 0 Jan 9 10:07 cleaner-offset-checkpoint
-rw-r--r-- 1 kafka hadoop 57 Jan 9 10:07 meta.properties
drwxr-xr-x 2 kafka hadoop 4096 Jan 9 10:51 _schemas-0
-rw-r--r-- 1 kafka hadoop 17 Jan 10 07:39 recovery-point-offset-checkpoint
-rw-r--r-- 1 kafka hadoop 17 Jan 10 07:39 replication-offset-checkpoint
But on some other Kafka scratch installations we saw that the folder /data/kafka-logs is empty.
Does this indicate a problem?
Note - we have not created the topics yet.
I'm not sure when each checkpoint file is created (though they track log cleaner and replication offsets), but I assume the meta.properties file is created at broker startup.
Otherwise, you would see one folder per topic-partition; for example, it looks like you had one topic created, _schemas.
If you only see that partition folder on one of multiple brokers, then the replication factor for that topic is set to 1.
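To sanity-check this, you can create a topic and watch its partition folders appear in the log dir (a sketch using the ZooKeeper-based tooling that matches this setup; adjust the address, partition count and replication factor for your cluster):
kafka-topics.sh --zookeeper localhost:2181 --create --topic test --partitions 3 --replication-factor 1
ls /data/kafka-logs
With multiple brokers, each broker's log dir only holds the partitions assigned to it, e.g. test-0 and test-2 on one broker.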

Dynamically configuring retention.ms not working for a kafka topic

I have a Kafka topic called retention, and below is the server configuration related to retention:
log.retention.hours=168
log.segment.bytes=1073741824
log.retention.check.interval.ms=3600000 (~ 1 hour)
log.cleaner.enable=true
And below is the topic specific config:
retention.ms=2592000000,retention.bytes=3298534883328
where retention.ms ~ 30d and retention.bytes = ~ 3.29 TB
I configured retention.ms and retention.bytes recently (on 14th Jan 2019) using commands like the one below:
./bin/kafka-topics.sh --zookeeper localhost:2181 --alter --topic retentions --config retention.bytes=219902325555
Here the configuration for retention.bytes seems to be working, while retention.ms does not seem to be working. Here is the evidence that I could collect:
cd log_dir/retentions-0/
ls -lrt 00000000000000000000.*
-rw-r--r-- 1 root root 294387381 Nov 26 22:37 00000000000000000000.log
-rw-r--r-- 1 root root 3912 Jan 14 18:06 00000000000000000000.index
-rw-r--r-- 1 root root 5868 Jan 14 18:06 00000000000000000000.timeindex
If we look at the older segments, these are nearly 2 months old.
Can anybody tell which of these two configurations takes priority, or whether both apply, whichever crosses its configured threshold first?
My assumption is that both configurations work in conjunction. Please let me know if this is not the case.
Both work in conjunction.
From Kafka: The Definitive Guide book
If you have specified a value for both log.retention.bytes and log.retention.ms ... messages may be removed when either criteria is met.
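You can also confirm what the broker actually stores as topic-level overrides with kafka-configs.sh (a sketch; the topic name and ZooKeeper address are the ones from the question):
./bin/kafka-configs.sh --zookeeper localhost:2181 --entity-type topics --entity-name retentions --describe
It should list both retention.ms and retention.bytes if the overrides were applied.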

How to find the kafka version in linux

How to find the kafka version in linux?
whether there is a way to find the installed kafka version other than mentioning the version while downloading it?
Not sure if there's a convenient way, but you can just inspect your kafka/libs folder. You should see files like kafka_2.10-0.8.2-beta.jar, where 2.10 is Scala version and 0.8.2-beta is Kafka version.
Kafka 2.0 has the fix (KIP-278) for it:
kafka-topics.sh --version
Or
kafka-topics --version
Using confluent utility:
A Kafka version check can be done with the confluent utility, which comes by default with the Confluent Platform (the confluent utility can be added to a cluster separately as well - credits cricket_007).
${confluent.home}/bin/confluent version kafka
Checking the version of other Confluent Platform components like ksql, schema-registry and connect:
[confluent-4.1.0]$ ./bin/confluent version kafka
1.1.0-cp1
[confluent-4.1.0]$ ./bin/confluent version connect
4.1.0
[confluent-4.1.0]$ ./bin/confluent version schema-registry
4.1.0
[confluent-4.1.0]$ ./bin/confluent version ksql-server
4.1.0
There is nothing like kafka --version at this point. So you should either check the version from your kafka/libs/ folder or you can run
find ./libs/ -name \*kafka_\* | head -1 | grep -o '\kafka[^\n]*'
from your kafka folder (and it will do the same for you). It will return you something like kafka_2.9.2-0.8.1.1.jar.asc where 0.8.1.1 is your kafka version.
There are several methods to find the Kafka version.
Method 1 (simple):
ps -ef|grep kafka
It will display all running Kafka processes in the console.
Ex:- /usr/hdp/current/kafka-broker/bin/../libs/kafka-clients-0.10.0.2.5.3.0-37.jar
We are using the 0.10.0.2.5.3.0-37 version of Kafka here.
Method 2:
Go to
cd /usr/hdp/current/kafka-broker/libs
ll |grep kafka
Ex:- kafka_2.10-0.10.0.2.5.3.0-37.jar
kafka-clients-0.10.0.2.5.3.0-37.jar
Same result as method 1: we can find the Kafka version from the jars in the kafka libs directory.
You can grep the logs to see the version. Let's say kafka is installed under /usr/local/kafka, then:
$ grep "Kafka version" /usr/local/kafka/logs/*
/usr/local/kafka/logs/kafkaServer.out: INFO Kafka version : 0.9.0.1 (org.apache.kafka.common.utils.AppInfoParser)
will reveal the version
If you want to check the version of a specific Kafka broker, run this CLI on the broker*
kafka-broker-api-versions.sh --bootstrap-server localhost:9092 --version
where localhost:9092 is the accessible <hostname|IP Address>:<port> this API will check (localhost can be used if it's the same host you're running this command on). Example of output:
2.4.0 (Commit:77a89fcf8d7fa018)
* Apache Kafka comes with a variety of console tools in the ./bin sub-directory of your Kafka download; e.g. ~/kafka/bin/
Simple way on macOS e.g. installed via homebrew
$ ls -l $(which kafka-topics)
/usr/local/bin/kafka-topics -> ../Cellar/kafka/0.11.0.1/bin/kafka-topics
You can use for Debian/Ubuntu:
dpkg -l|grep kafka
Expected result should to be like:
ii confluent-kafka-2.11 0.11.0.1-1 all publish-subscribe messaging rethought as a distributed commit log
ii confluent-kafka-connect-elasticsearch 3.3.1-1 all Kafka Connect connector for copying data between Kafka and Elasticsearch
ii confluent-kafka-connect-hdfs 3.3.1-1 all Kafka Connect connector for copying data between Kafka and Hadoop HDFS
ii confluent-kafka-connect-jdbc 3.3.1-1 all Kafka Connect connector for JDBC-compatible databases
ii confluent-kafka-connect-replicator 3.3.1-1 all Kafka Connect connector for replicating topics between Kafka clusters
ii confluent-kafka-connect-s3 3.3.1-1 all Kafka Connect S3 connector for copying data between Kafka and
ii confluent-kafka-connect-storage-common 3.3.1-1 all Kafka Connect Storage Common contains packages used by storage
ii confluent-kafka-rest 3.3.1-1 all A REST proxy for Kafka
go to kafka/libs folder
we can see multiple jars search for something similar kafka_2.11-0.10.1.1.jar.asc in this case the kafka version is 0.10.1.1
I found an easy way to do this without searching directories or log files:
kafka-dump-log --version
Output looks like this:
5.3.0-ccs (Commit:6481debc2be778ee)
cd kafka
./bin/kafka-topics.sh --version
When you install Kafka on CentOS 7 with Confluent:
yum install confluent-platform-oss-2.11
You can see the version of Kafka with :
yum deplist confluent-platform-oss-2.11
You can read : confluent-kafka-2.11 >= 0.10.2.1
To find the Kafka version, we can use the jps command, which shows all the Java processes running on the machine.
Step 1: Let's say you are running Kafka as the root user, so log in to your machine as root and use jps -m. It will show a result like:
4979 Jps -m
9434 Kafka config/server.properties
Step 2: From the above result, you can take the PID of the Kafka application and use pwdx 9434, which reports the current working directory of the process. The result will be like:
9434: /apps/kafka_2.12-2.4.0
Here you can see the version: 2.12-2.4.0 means Scala 2.12 and Kafka 2.4.0.
cd confluent-7.2.0/share/java/kafka
then
$ ls -lha | grep kafka
-rw-r--r-- 1 root root 5.3M Jul 5 09:45 kafka_2.13-7.2.0-ccs.jar
-rw-r--r-- 1 root root 4.8M Jul 5 09:45 kafka-clients-7.2.0-ccs.jar
lrwxrwxrwx 1 root root 26 Jul 23 10:10 kafka.jar -> ./kafka_2.13-7.2.0-ccs.jar
-rw-r--r-- 1 root root 9.4K Jul 5 09:45 kafka-log4j-appender-7.2.0-ccs.jar
-rw-r--r-- 1 root root 458K Jul 5 09:45 kafka-metadata-7.2.0-ccs.jar
-rw-r--r-- 1 root root 182K Jul 5 09:45 kafka-raft-7.2.0-ccs.jar
-rw-r--r-- 1 root root 36K Jul 5 09:45 kafka-server-common-7.2.0-ccs.jar
-rw-r--r-- 1 root root 84K Jul 5 09:45 kafka-shell-7.2.0-ccs.jar
-rw-r--r-- 1 root root 151K Jul 5 09:45 kafka-storage-7.2.0-ccs.jar
-rw-r--r-- 1 root root 23K Jul 5 09:45 kafka-storage-api-7.2.0-ccs.jar
-rw-r--r-- 1 root root 1.6M Jul 5 09:45 kafka-streams-7.2.0-ccs.jar
-rw-r--r-- 1 root root 41K Jul 5 09:45 kafka-streams-examples-7.2.0-ccs.jar
-rw-r--r-- 1 root root 161K Jul 5 09:45 kafka-streams-scala_2.13-7.2.0-ccs.jar
-rw-r--r-- 1 root root 52K Jul 5 09:45 kafka-streams-test-utils-7.2.0-ccs.jar
-rw-r--r-- 1 root root 127K Jul 5 09:45 kafka-tools-7.2.0-ccs.jar
You can also type
cat /build.info
This will give you an output like this
BUILD_BRANCH=master
BUILD_COMMIT=434160726dacc4a1a592fe6036891d6e646a3a4a
BUILD_TIME=2017-05-12T16:02:04Z
DOCKER_REPO=index.docker.io/landoop/fast-data-dev
KAFKA_VERSION=0.10.2.1
CP_VERSION=3.2.1
To check the Kafka version:
cd /usr/hdp/current/kafka-broker/libs
ls kafka_*.jar

FAILED TO WRITE PID installing Zookeeper

I am new to ZooKeeper and it has been a real struggle to install and run it. I am not sure what is wrong here, but I will explain what I've been doing to make it clearer:
1.- I've followed the installation guide provided by Apache. This means downloading the ZooKeeper distribution (stable release), extracting the file and moving it into the home directory.
2.- As I am using Ubuntu 12.04 I've modified the .bashrc file including this:
export ZOOKEEPER_INSTALL=/home/myusername/zookeeper-3.4.5
export PATH=$PATH:$ZOOKEEPER_INSTALL/bin
3.- Create a config file on conf/zoo.cfg
tickTime=2000
dataDir=/var/zookeeper
clientPort=2181
and also tried with:
dataDir=/var/log/zookeeper
and
dataDir=/var/bin/zookeeper
4.- When running the start command
zkServer.sh start or `bin/zkServer.sh start`, nothing happens and it always returns this:
JMX enabled by default
Using config: /home/sasuke/zookeeper-3.4.5/bin/../conf/zoo.cfg
mkdir: cannot create directory `/var/zookeeper': Permission denied
Starting zookeeper ... /home/sasuke/zookeeper-3.4.5/bin/zkServer.sh: line 113: /var/zookeeper/zookeeper_server.pid: No such file or directory
FAILED TO WRITE PID
I have Java installed, and inside the zookeeper directory there is a zookeeper.jar file that I think is not running.
Checking here on Stack Overflow, there was someone who said he could run ZooKeeper after typing
ssh localhost
But when I try to do it I get this error
ssh: connect to host localhost port 22: Connection refused
Please help. I've been trying to solve this for too long.
Getting started guide of zookeeper:
http://zookeeper.apache.org/doc/r3.1.2/zookeeperStarted.html
Previous case solved with ssh localhost:
Zookeeper: FAILED TO WRITE PID
UPDATE:
The permissions for log are:
drwxr-xr-x 19 root root 4096 Oct 10 07:52 log
and for zookeeper:
drwxr-xr-x 2 zookeeper zookeeper 4096 Mar 23 2012 zookeeper
Should I change any of these?
I have had the same problem. In my case it helped to start ZooKeeper and directly specify a configuration file:
/bin/zkServer.sh start conf/zoo.conf
It seems you do not have the required permissions. The owner of /var/log is going to be root. ZooKeeper stores the process id and a snapshot of the data in that directory. The process id of the spawned ZooKeeper server is stored in the file zookeeper_server.pid (as of 3.3.6).
If you have root privileges, you could start ZooKeeper with sudo (root) privileges; it should work, but that is definitely not recommended. Make sure you start ZooKeeper as a user with the same (or higher) permissions as the owner of the directory.
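If you want to keep the default dataDir, a minimal sketch of fixing the ownership instead (assuming the server runs as a dedicated zookeeper user; adjust the user/group to whatever account actually starts the process):
sudo mkdir -p /var/zookeeper
sudo chown -R zookeeper:zookeeper /var/zookeeper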
Create a new directory in your home folder like /home/username/zookeeper-data.
Let dataDir point to that directory and it should work.
The default ZooKeeper installation (tar extract) comes with the conf file named conf/zoo_sample.cfg, while the same extract's bin/zkServer.sh expects the conf file to be called zoo.cfg, thereby resulting in a "No such file or dir" and the "failed to write pid" error. So before running zkServer.sh to start or stop a ZooKeeper instance, either:
rename the zoo_sample.cfg in the conf dir to zoo.cfg, or
give the name (and path) to the conf file (as suggested by Ilya Lapitan), or, of course
edit zkServer.sh ;-)
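For the first option, a minimal sketch from the extracted ZooKeeper directory:
cp conf/zoo_sample.cfg conf/zoo.cfg
bin/zkServer.sh start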
When you create the directory for dataDir, make sure to use the -p option so that any missing parent directories are created as well.
mkdir -p /var/log/zookeeperData
Then set:
dataDir=/var/log/zookeeperData
Seems there's all kinds of reasons this can happen. So many helpful answers here!
For me, I had improper line endings in my zoo.cfg file, and possibly invisible characters, so ZooKeeper was trying to create directories like /var/zookeeper? and /var/zookeeper\r. Reworking my zoo.cfg a bit fixed it for me, along with deleting zoo_sample.cfg.
This happened to me due to low disk space, because ZooKeeper couldn't create the pid file inside the ZooKeeper data folder.
I have faced the same issue while starting the zookeeper with this command:
hadoop#ubuntu:~/hadoop/zookeeper/zookeeper-3.4.8$ bin/zkServer.sh
start
ERROR [main] client.ConnectionManager$HConnectionImplementation:
The node /hbase is not in ZooKeeper.
It should have been written by the master. Check the value configured in zookeeper.znode.parent. There could be a mismatch with the one configured in the master.
But running the script as su rectified the issue:
hadoop#ubuntu:~/hadoop/zookeeper/zookeeper-3.4.8$ sudo bin/zkServer.sh
start
ZooKeeper JMX enabled by default Using config:
/home/hadoop/hadoop/zookeeper/zookeeper-3.4.8/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
Go to /usr/local/etc/.
You will find a zookeeper directory there.
Delete the directory
and restart the server - zkServer start.
Change the path to dataDir=/tmp/zookeeper. If it works, then it's clearly an access issue.
But it's generally not advisable to use the tmp directory.
This seems to be an ownership issue; running the following solved this for me.
$ sudo chown -R $USER /var/lib/zookeeper
N.B.
I've outlined my steps below which show the error I was getting (the same as the error in this SO question) and the attempt at trying the solution proposed by a user above, which advised to provide zoo.cfg as an argument.
13:01:29 ✔ ~ :: $ZK/bin/zkServer.sh start
ZooKeeper JMX enabled by default
Using config: /usr/local/Cellar/zookeeper/3.4.14/libexec/bin/../conf/zoo.cfg
Starting zookeeper ... /usr/local/Cellar/zookeeper/3.4.14/libexec/bin/zkServer.sh: line 149: /var/lib/zookeeper/zookeeper_server.pid: Permission denied
FAILED TO WRITE PID
13:01:32 ✘ ~ :: $ZK/bin/zkServer.sh start $ZK/conf/zoo.cfg
ZooKeeper JMX enabled by default
Using config: /usr/local/Cellar/zookeeper/3.4.14/libexec/conf/zoo.cfg
Starting zookeeper ... /usr/local/Cellar/zookeeper/3.4.14/libexec/bin/zkServer.sh: line 149: /var/lib/zookeeper/zookeeper_server.pid: Permission denied
FAILED TO WRITE PID
13:04:45 ✔ /var/lib :: ls -la
total 0
drwxr-xr-x 4 root wheel 128 Apr 19 18:55 .
drwxr-xr-x 27 root wheel 864 Apr 19 18:55 ..
drwxr--r-- 3 root wheel 96 Mar 24 15:07 zookeeper
13:04:48 ✔ /var/lib :: echo $USER
tallamjr
13:06:03 ✔ /var/lib :: sudo chown -R $USER zookeeper
Password:
13:06:44 ✔ /var/lib :: ls -la
total 0
drwxr-xr-x 4 root wheel 128 Apr 19 18:55 .
drwxr-xr-x 27 root wheel 864 Apr 19 18:55 ..
drwxr--r-- 3 tallamjr wheel 96 Mar 24 15:07 zookeeper
13:06:48 ✔ ~ :: $ZK/bin/zkServer.sh start
ZooKeeper JMX enabled by default
Using config: /usr/local/Cellar/zookeeper/3.4.14/libexec/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
REF:
- https://askubuntu.com/questions/6723/change-folder-permissions-and-ownership
For me this solution worked:
I granted read, write and execute permissions for everyone using $ sudo chmod 777 foldername on the zookeeper directory, by going into the /var directory (/var/zookeeper).
After executing this command, try running ZooKeeper. It ran in my case.
Try using sudo -E bin/zkServer.sh start.