I was trying to run Kafka on a Windows machine, and when I try to start ZooKeeper I am facing this weird error:
classpath is empty. please build the project first e.g. by running 'gradlew jarall'
If anyone else is facing this issue:
Note: do not download the source files from Apache Kafka; download the binary distribution instead.
Download Kafka from here: Link
Also follow this link for any additional information.
This group also has some additional information.
I had the exact same problem, and I finally solved it.
The problem is that you have a space character in your path (inside folder names), which causes the "dirname" command to receive more than one argument.
Therefore, to solve it, you only need to remove the spaces from the folder names within your Kafka folder path.
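To see why a space breaks things, here is a minimal shell sketch (the path is hypothetical): an unquoted variable expansion is split on the space, so dirname receives two arguments instead of one.

```shell
# Hypothetical Kafka path containing a space
path="/opt/my kafka/bin/kafka-run-class.sh"

# Unquoted expansion splits on the space, so dirname receives TWO arguments
# ("/opt/my" and "kafka/bin/kafka-run-class.sh"); GNU dirname then prints two
# wrong results, while stricter implementations abort with "extra operand".
dirname $path

# Quoted expansion keeps the path as one argument and yields the right answer:
dirname "$path"    # prints: /opt/my kafka/bin
```

Removing the space from the folder names sidesteps the problem entirely, which is presumably why the rename fixes the Kafka scripts.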
Follow the steps below for Windows and Kafka 0.9.0.0 (the same steps apply to lower versions of Kafka).
First, download the binary from:
https://www.apache.org/dyn/closer.cgi?path=/kafka/0.9.0.0/kafka_2.11-0.9.0.0.tgz
Extract it to your chosen folder, and then:
Step 1: create new directories in your kafka directory
- kafka-logs
- zookeeper
Your directory after step 1 will be:
- bin
- config
- kafka-logs
- libs
- site-docs
- zookeeper
Step 2: Open config/server.properties and change the property below
- log.dirs={fullpath}/kafka-logs
Step 3: Open config/zookeeper.properties and change the property below
- dataDir={fullpath}/zookeeper
Step 4: Create a run.bat file under the bin/windows folder with the following script:
start zookeeper-server-start.bat ..\..\config\zookeeper.properties
TIMEOUT 10
start kafka-server-start.bat ..\..\config\server.properties
exit
You can change timeout for your convenience.
I think you downloaded the Kafka source here; you need to download the binary:
https://www.apache.org/dyn/closer.cgi?path=/kafka/0.9.0.0/kafka_2.11-0.9.0.0.tgz
Follow the steps below to resolve this error.
Step 1: Change into the downloaded Kafka source folder
cd kafka-2.5.0-src
Step 2: Run Gradle
./gradlew jar
Step 3: Once the build is successful, start ZooKeeper and then the Kafka server
bin/zookeeper-server-start.sh config/zookeeper.properties
bin/kafka-server-start.sh config/server.properties
Kafka will now be running on localhost:9092.
Had the same problem, and it was because I had downloaded the source file instead of the binary file.
Simply ensure there are no white spaces in your folder hierarchy,
for example:
instead of -> "c:\desktop\work files\kafka_2.12-2.7.0"
use this -> "c:\desktop\work-files\kafka_2.12-2.7.0"
this worked for me!
If you are using the Kafka source to run the Kafka server on a Windows 10 machine, you need to build the source first using the steps below.
Please note: you need to have the Gradle build tool installed and the path variable set before following these steps.
Open the command prompt and navigate to the Kafka home directory
C:\kafka-1.1.1-src>
Enter the command 'gradle' and press Enter
C:\kafka-1.1.1-src>gradle
Once the build is successful enter the below command
C:\kafka-1.1.1-src>gradlew jar
Now enter the below command to start the server
C:\kafka-1.1.1-src>.\bin\windows\kafka-server-start.bat .\config\server.properties
If everything went fine, the server will start and log output will appear in your command prompt.
Ensure that you have no white space or special characters in the path.
Step 1: Navigate to the \confluent-community-5.5.0-2.12\confluent-5.5.0\bin\windows folder.
Step 2: Open the kafka-run-class.bat file.
Step 3: Search for "rem Classpath addition for core" in this bat file.
Step 4: Now just add the code below, just above the "rem Classpath addition for core" line.
rem classpath addition for LSB style path
if exist %BASE_DIR%\share\java\kafka\* (
call :concat %BASE_DIR%\share\java\kafka\*
)
Using Windows 10:
Download and extract the Kafka binary and change config/server.properties; for me it changed from
log.dirs=/tmp/kafka-logs
to
log.dirs=D:\Elastic_search\kafka_2.11-0.9.0.0\kafka-logs
Create the new directory, kafka-logs.
Run
.\bin\windows\kafka-server-start.bat .\config\server.properties
in your root kafka_2.11-0.9.0.0 folder from CMD again.
I found that the bit of code below, which adds to the classpath, was missing from \bin\windows\kafka-run-class.bat in the previous version I was using (Confluent 4.0.0 vs. 5.3.1).
rem Classpath addition for LSB style path
if exist %BASE_DIR%\share\java\kafka\* (
call :concat %BASE_DIR%\share\java\kafka\*
)
I followed the link https://janschulte.wordpress.com/2013/10/13/apache-kafka-0-8-on-windows/ to configure Kafka, and it worked. But I used the same version as mentioned in the post (which is an old version). Since I need Kafka for my project now, I decided to proceed with that version.
A few things the author missed in the explanation; please find them below:
1) After downloading the sbt Windows installer, you need to restart the system, not only the shell, for the changes to take effect.
2) Add the following at lines 66-67 of kafka-run-class.sh:
JAVA="java"
$JAVA $KAFKA_OPTS $KAFKA_JMX_OPTS -cp `cygpath -wp $CLASSPATH` "$@"
(Make sure your Java is configured in the environment variables.)
3) Go to the appropriate path to run the ZooKeeper command:
bin/zookeeper-server-start.sh config/zookeeper.properties
Tag me if you have any doubts! Happy to help!
I suffered the same issue. Download the ZooKeeper tar file as well.
Downloading ZooKeeper into the same folder and then typing the same commands worked for me.
Make sure that you are using the right path to the zookeeper.properties file. In my case I was using the full path for the .bat file and a wrong relative path for the .properties file.
Having a wrong path to zookeeper.properties will produce the error that you mentioned.
Note that I used the binary, not the Kafka source.
For me the issue was with unzipping the files. I had moved them to another folder, and something went wrong. I unzipped them again, keeping the directory structure, and it worked.
Thanks to Orlando Mendez for the advice!
https://www.youtube.com/watch?v=7F9tBwTUSeY
This happened to me when the Kafka folder path was too long. Try a shorter path like "D:\kafka_2.12-2.7.0".
Please download the binary package, not the source code.
I faced the same issue; this is what worked for me:
I downloaded the binary version and created the directory as follows: C:/kafka
Then I changed the properties files.
Changes in zookeeper.properties:
dataDir=C:/kafka/zookeeper-data
Changes in server.properties:
log.dirs=C:/kafka/kafka-logs
All the directories got created automatically.
This should work.
Video for reference -
https://www.youtube.com/watch?v=3XjfYH5Z0f0
Download the Kafka binaries, not the source, or make sure there are no whitespace characters in the file paths.
This site describes a solution that worked for me.
The solution was to modify a bat file so that Java knows the path of several JAR libs.
Of course I downloaded the binary, not the source files, from Confluent.
During setup, a temporary log file is written to the directory %TEMP%. This file is moved as installation.log to ${installer:sys.installationDir}/.install4j after setup finishes.
Is there a way to let install4j always write this temporary log file directly to the .install4j directory? Having it there would make it much easier to find in case the setup crashes.
We're still using install4j 5.0.11.
Thanks in advance!
Frank
The problem is that the installation directory may not exist at startup and may be changed in the installer.
However, you can pass the VM parameter
-Dinstall4j.alternativeLogfile=[path to log file]
to the installer to specify an alternative log file.
I've got a bit of a problem with deployments on my project and after hours of searching the web I can't find an answer to this.
Situation:
I am working on a Web application that lives of uploads and other files that get generated during use.
To keep things simple I store these into: .../mywebapp/web/some subfolders/*
So far, so good.
My Problem:
Every time I redeploy my project on the actual server (after updating classes/JSPs),
GlassFish deletes the entire content of .../mywebapp/ during redeployment.
My procedure so far:
Export the latest version of my webapp as a .war.
Add the changed files into the .war file on the server (rename to .zip, then back to .war).
Redeploy the .war on my server using the admin console (localhost:4848).
My question:
This current procedure is very prone to data loss (I could lose the files!).
Is there a straightforward way to upload changes to my server without the risk of losing all the files that have been added during runtime?
I see two choices:
move the data 'out of harm's way' (find some place for it that isn't in the deployment directory; like a database)
Switch to directory deployment instead of archive deployment.
The better of these two choices is the first one... It is more portable than the other; every server out there supports deploying archives. A lot of servers support directory based deployment... but they all do it a bit differently... so a directory structure that deploys on A may not deploy on B.
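A minimal sketch of the first choice (all paths and the property name here are hypothetical): keep the runtime files in a directory outside the deployment tree and have the webapp resolve that location at startup, so a redeploy can never touch them.

```shell
# Pick a data directory that is NOT under .../domains/.../applications
UPLOAD_DIR=/tmp/mywebapp-data/uploads
mkdir -p "$UPLOAD_DIR"

# The application would read this location from a system property instead of
# writing under its own deployment directory, e.g. (GlassFish asadmin, sketch):
#   asadmin create-jvm-options -Dmywebapp.upload.dir=$UPLOAD_DIR
echo "uploaded file survives redeploys" > "$UPLOAD_DIR/demo.txt"
cat "$UPLOAD_DIR/demo.txt"
```

Because the files never live under the webapp's directory, redeploying the archive cannot delete them.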
I had this same issue; I solved it using XCOPY and Task Scheduler.
Effectively, you are continuously syncing two folders.
Run a scheduled task for the following batch file every X minutes
sync.bat:
xcopy "domain1\applications\%YOUR_APP_NAME%\path\to\folder" "D:\folder\to\sync" /D /I /Y
xcopy "D:\folder\to\sync" "domain1\applications\%YOUR_APP_NAME%\path\to\folder" /D /I /Y
Switches:
/D - Only copy newer files if the destination file exists
/I - If the destination does not exist, and you are copying more than one file, this switch assumes that the destination is a folder.
/Y - Overwrite without prompting
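For reference, the same two-way "copy only if newer" sync can be sketched on Linux with GNU cp's -u flag (which mirrors xcopy's /D switch); the paths here are throwaway placeholders:

```shell
# Set up two throwaway folders standing in for the app dir and the sync target
mkdir -p /tmp/appdir /tmp/syncdir
echo "runtime upload" > /tmp/appdir/upload.txt

# -u copies only when the source is newer than the destination (or the
# destination is missing); -R recurses. Running both directions keeps the
# two folders converged, just like the xcopy pair above.
cp -Ru /tmp/appdir/. /tmp/syncdir/
cp -Ru /tmp/syncdir/. /tmp/appdir/
```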
Sometimes we have a huge number of JAR files in the jboss/server/web/tmp/vfs-nested.tmp directory.
For example, today this directory contained over 350k JAR files.
But on other hosts there are only 2 JAR files in this directory.
What can be the root cause of this problem?
We use JBoss 5.1
UPDATE:
I found following information in release notes for JBoss 5.1.0.GA:
JBoss VFS provides a set of different switches to control it's internal behavior. JBoss AS sets jboss.vfs.forceCopy=true by default. To see all the provided VFS flags check out the code of the VFSUtils.java class.
So I do not understand: what should I set?
Should I set -Djboss.vfs.forceNoCopy=true or -Djboss.vfs.forceCopy=false?
Or should I set both of them?
UPDATE 1:
I have read the entire thread http://community.jboss.org/thread/2148?start=0&tstart=0
and now I am not sure that I should change either jboss.vfs.forceCopy or jboss.vfs.forceNoCopy.
According to this thread, I would get an OutOfMemory error instead of a huge number of files in the tmp dir.
From here: http://sourceforge.net/project/shownotes.php?release_id=575410
"Excessive nestedjarNNN.tmp files in the tmp directory. The VFS unwraps nested jars by extracting the nested jar into a tmp file in the java tmp directory. This can result in a large number of files that fill up the tmp directory. You can disable this behavior by setting -Djboss.vfs.forceNoCopy=true on command line used to start jboss. This will be enabled by default in a future release, JBAS-4389."
jskaggz has a good answer. In addition, I have this in the beginning of my run.bat file:
rmdir /s /q c:\apps\jboss-5.1.0.ga\server\default\tmp
rmdir /s /q c:\apps\jboss-5.1.0.ga\server\default\work
rmdir /s /q c:\apps\jboss-5.1.0.ga\server\default\log
mkdir c:\apps\jboss-5.1.0.ga\server\default\tmp
mkdir c:\apps\jboss-5.1.0.ga\server\default\work
mkdir c:\apps\jboss-5.1.0.ga\server\default\log
echo --- Cleared temp folders ---
I've had problems with old copies of classes hanging around, so this seems to help.
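The same "wipe and recreate the temp folders on startup" idea can be sketched as a portable shell loop (the base path is hypothetical; the Windows rmdir/mkdir pairs above do the same thing):

```shell
# Base server directory (placeholder standing in for the JBoss default server dir)
BASE=/tmp/jboss-demo/server/default

for d in tmp work log; do
  rm -rf "$BASE/$d"     # drop any stale compiled classes/logs from the previous run
  mkdir -p "$BASE/$d"   # recreate the empty folder so the server starts cleanly
done
echo "--- Cleared temp folders ---"
```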
We solved this problem with exploded deployment (works for WAR and EAR), as described in the JBoss documentation: http://docs.jboss.org/jbossas/docs/Administration_And_Configuration_Guide/5/html/ch03s01.html
That way VFS is not used.
I had the same issue described above in production and resolved it with the following solution.
Added java options
-Djboss.vfs.cache=org.jboss.virtual.plugins.cache.IterableTimedVFSCache
-Djboss.vfs.cache.TimedPolicyCaching.lifetime=1440
My setup also defines additional deployment directories so I needed to add these additional directories to vfs.xml file located in $JBOSS_SERVER_HOME/conf/bootstrap/ in order to see the benefit.
The lifetime setting is, I think, in minutes, so I set it to a day, as I have a scheduled restart of the server overnight.
Prior to finding this solution I had also tried using -Djboss.vfs.forceNoCopy=true and -Djboss.vfs.forceCopy=false
This appeared to work, but I noticed the application ran a lot slower, presumably because these settings turn VFS caching off.
My JBoss version is jboss-5.1.0.GA, and my application runs in a cluster in production.
I found a lot of others having the same problem running in cluster (or farm) environments.
https://issues.jboss.org/browse/JBAS-7126 describes solving the problem by having a farm directory as the deployment directory.
I had the same problem using a 2nd deploy directory.
The JAR files of my applications coming from this 2nd deploy directory got copied until the disk was full.
I tried adding the 2nd deploy directory the same way as described at https://issues.jboss.org/browse/JBAS-7126 for the farm directory.
It works well!
We were facing the same issue and were able to circumvent the issue by using a farm directory as deployment directory.
After putting that process in place, we faced one more issue due to the nature of our DEV environment (we have a clustered environment and many developers deploying on the shared DEV environment): we were not getting consistent results when deploying the EARs and WARs that way. We circumvented this by making sure that the EARs and JARs being deployed are touched (http://en.wikipedia.org/wiki/Touch_(Unix)) on the servers, so that inconsistencies are avoided.
I've been desperately looking for the answer to this and I feel I'm missing something obvious.
I need to copy a folder full of data files into the TARGETDIR of my deployment project at compile time. I can see how I would add individual files (ie. right click in File System and go to Add->File) but I have a folder full of data files which constantly get added to. I'd prefer not to have to add the new files each time I compile.
I have tried using a PreBuildEvent to copy the files:
copy $(ProjectDir)..\Data\*.* $(TargetDir)Data\
which fails with error code 1 when I build. I can't help but feel I'm missing the point here though. Any suggestions?
Thanks in advance.
Graeme
I went this route:
Created a new project (deleted the default source file Class1).
Added the necessary files/folders to the project.
Added the project as project output in the installer, choosing the "content files" option.
This removes the complexity of having to zip/unzip the files as suggested earlier.
Try
xcopy $(ProjectDir)..\Data\*.* $(TargetDir)Data /e /c /i [/f] [/r] /y
/e to ensure tree structure fulfilment (use /s if you want to bypass empty folders)
/c to continue on error (let the build process finish)
/i necessary to create the destination folder if none exists
/y assume "yes" for overwrite in case of previously existing files
[optional]
/f if you want to see the full paths resulting from the copy
/r if you want to overwrite even previously copied read-only files
The method is simpler on the project than on files, yes. Besides, with files it copies only the modified/missing files on each build, but it forces you to maintain the project on each data pack modification. It depends on the whole data size and the variability of your data pack.
Also beware of files remaining in the target folder if you remove some from your data pack and rebuild without emptying the target folder first.
Good luck.
I solved the problem by a workaround:
Add a build action of packaging entire directory (could be filtered) to a ZIP file.
Add a reference to an empty ZIP file to deployment project.
Add a custom action to deployment project to extract the ZIP to destination folder.
It's simple and stable.
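The three steps above can be sketched from the command line (using tar instead of ZIP for brevity; the paths are throwaway examples, and in the real setup the extract step would run inside the deployment project's custom action):

```shell
# Build action: package the whole data directory into one archive
mkdir -p /tmp/datapack /tmp/installdir
echo "sample data" > /tmp/datapack/a.dat
tar -C /tmp/datapack -cf /tmp/data.tar .

# Custom action at install time: extract the archive to the destination folder
tar -C /tmp/installdir -xf /tmp/data.tar
```

Because only the single archive is referenced by the deployment project, new data files are picked up automatically at the packaging step instead of having to be added to the project one by one.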
Your error is probably because your path has spaces in it and you don't have the paths in quotes,
e.g. copy "$(ProjectDir)..\Data\*.*" "$(TargetDir)Data\"
I need to do a similar thing. Thinking a custom action...
I found a different workaround for this. I added a web project to my solution that points at the data directory I want included in the deployment project. The web project automatically picks up any new files in the data directory and you can refer to the project content in the deployment project.