I was trying to run Kafka on a Windows machine, and when I try to start ZooKeeper I am facing this weird error:
classpath is empty. please build the project first e.g. by running 'gradlew jarall'
If anyone else is facing this issue:
Note: Do not download the source files from Apache Kafka; download the binary distribution.
Download Kafka from here: Link
Also follow this link for any additional information
Also this group has some additional information
I had the exact same problem and I finally solved it.
The problem is that you have a space character in your path (inside folder names), which causes the "dirname" command to receive more than one argument.
Therefore, to solve it, you only need to remove the spaces from the folder names in your Kafka folder path.
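A minimal shell sketch of what goes wrong (the path here is made up): without quoting, the shell splits the path at the space and dirname receives two operands instead of one, which is what happens inside Kafka's startup scripts.

```shell
path="/c/kafka folder/bin"   # example path containing a space

# Unquoted: word splitting hands dirname two operands,
# "/c/kafka" and "folder/bin"
dirname $path

# Quoted: dirname sees a single operand and prints the parent directory
dirname "$path"   # -> /c/kafka folder
```

Since the scripts don't quote every expansion, removing the space from the folder name is the practical fix.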
Follow the steps below for Windows and Kafka 0.9.0.0 (the same steps will work with lower versions of Kafka).
First, download the binary from:
https://www.apache.org/dyn/closer.cgi?path=/kafka/0.9.0.0/kafka_2.11-0.9.0.0.tgz
Extract it to a folder of your choice, and then:
Step 1: Create new directories in your Kafka directory:
- kafka-logs
- zookeeper
Your directory after step 1 will be:
- bin
- config
- kafka-logs
- libs
- site-docs
- zookeeper
Step 2: Open config/server.properties and change the property below:
- log.dirs={fullpath}/kafka-logs
Step 3: Open config/zookeeper.properties and change the property below:
- dataDir={fullpath}/zookeeper
Step 4: Create a run.bat file under the bin/windows folder with the following script:
start zookeeper-server-start.bat ..\..\config\zookeeper.properties
TIMEOUT 10
start kafka-server-start.bat ..\..\config\server.properties
exit
You can change the timeout for your convenience.
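Steps 1-3 above can be sketched in shell terms like this (the scratch directory and example paths are illustrative; the property edits belong in the config files, so they appear only as comments):

```shell
cd "$(mktemp -d)"   # scratch dir standing in for the extracted Kafka root

# Step 1: create the two data directories
mkdir -p kafka-logs zookeeper

# Steps 2 and 3: point the configs at them using the full path, e.g.
#   config/server.properties    ->  log.dirs=C:/kafka_2.11-0.9.0.0/kafka-logs
#   config/zookeeper.properties ->  dataDir=C:/kafka_2.11-0.9.0.0/zookeeper

ls -d kafka-logs zookeeper   # both directories now exist
```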
I think you downloaded the Kafka source here; you need to download the binary:
https://www.apache.org/dyn/closer.cgi?path=/kafka/0.9.0.0/kafka_2.11-0.9.0.0.tgz
Follow the steps below to resolve this error.
Step 1: Go into the downloaded Kafka folder
cd kafka-2.5.0-src
Step 2: Run Gradle
./gradlew jar
Step 3: Once the build is successful, start ZooKeeper and the Kafka server
bin/zookeeper-server-start.sh config/zookeeper.properties
bin/kafka-server-start.sh config/server.properties
Kafka will now be running on localhost:9092.
I had the same problem, and it was because I downloaded the source file instead of the binary file.
Simply ensure there are no white spaces in your folder hierarchy.
for example:
instead of -> "c:\desktop\work files\kafka_2.12-2.7.0"
use this -> "c:\desktop\work-files\kafka_2.12-2.7.0"
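A quick way to check a candidate path before using it; a small sketch (the path below is just the example from above):

```shell
KAFKA_HOME="c:/desktop/work files/kafka_2.12-2.7.0"   # example path with a space

case "$KAFKA_HOME" in
  *" "*) echo "path contains a space - rename the folder" ;;
  *)     echo "path is space-free" ;;
esac
# -> path contains a space - rename the folder
```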
this worked for me!
If you are using the Kafka source to run the Kafka server on a Windows 10 machine, you need to build the source first using the steps below.
Please note: you need to have the Gradle build tool installed and the path variable set before following these steps.
Open the command prompt and navigate to the Kafka home directory
C:\kafka-1.1.1-src>
Enter the command 'gradle' and press Enter
C:\kafka-1.1.1-src>gradle
Once the build is successful, enter the command below:
C:\kafka-1.1.1-src>gradlew jar
Now enter the command below to start the server:
C:\kafka-1.1.1-src>.\bin\windows\kafka-server-start.bat .\config\server.properties
If everything went fine, your command prompt will look like this one
Ensure that you have no white space or special characters in your path.
Step 1: Navigate to the \confluent-community-5.5.0-2.12\confluent-5.5.0\bin\windows folder.
Step 2: Open kafka-run-class.bat file.
Step 3: Search for "rem Classpath addition for core" in this .bat file.
Step 4: Now add the code below just above that "rem Classpath addition for core" line.
rem Classpath addition for LSB style path
if exist %BASE_DIR%\share\java\kafka\* (
call :concat %BASE_DIR%\share\java\kafka\*
)
Using Windows 10:
Download and extract the Kafka binary and change config/server.properties; for me it changed from
log.dirs=/tmp/kafka-logs
to
log.dirs=D:\Elastic_search\kafka_2.11-0.9.0.0\kafka-logs
Create the new directory, kafka-logs.
Run
.\bin\windows\kafka-server-start.bat .\config\server.properties
from your root kafka_2.11-0.9.0.0 folder in CMD.
I found that the bit of code below, which adds to the classpath, was missing from \bin\windows\kafka-run-class.bat in the previous version I was using (Confluent 4.0.0 vs. 5.3.1).
rem Classpath addition for LSB style path
if exist %BASE_DIR%\share\java\kafka\* (
call :concat %BASE_DIR%\share\java\kafka\*
)
I followed the link https://janschulte.wordpress.com/2013/10/13/apache-kafka-0-8-on-windows/ to configure Kafka and it worked, but I used the same version as mentioned in the post (which is an old one). I needed Kafka for my project, so I decided to proceed with that version.
A few things the author missed in the explanation; please find them below.
1) After downloading the sbt Windows installer, you need to restart the system, not only the shell, for the necessary changes to take effect.
2) Add the following at lines 66-67 of kafka-run-class.sh:
JAVA="java"
$JAVA $KAFKA_OPTS $KAFKA_JMX_OPTS -cp `cygpath -wp $CLASSPATH` "$@"
(Make sure your java is configured in environment variables)
3) Traverse to the appropriate path, to run the zookeeper command
bin/zookeeper-server-start.sh config/zookeeper.properties
Tag me if you have any doubts! Happy to Help!
I suffered the same issue. Download the ZooKeeper tar file as well.
Downloading ZooKeeper into the same folder and then typing the same commands worked for me.
Be sure that you are using the right path to the zookeeper.properties file. In my case I was using the full path for the .bat file and a wrong relative path for the .properties file.
Having a wrong path to zookeeper.properties will produce the error you mentioned.
Notice that I have used the binary, not the kafka source.
For me the issue arose when unzipping the files. I moved them to another folder and something went wrong. I unzipped them again, keeping the directory structure, and it worked.
Thanks to Orlando Mendez for the advice!
https://www.youtube.com/watch?v=7F9tBwTUSeY
This happened to me when the Kafka folder path was too long. Try it with a shorter path like "D:\kafka_2.12-2.7.0".
Please download the binary package, not the source code.
I faced the same issue; this is what worked for me.
I downloaded the binary version and created the directory as follows: C:/kafka
Changed the properties files
Changes in zookeeper.properties -
dataDir=C:/kafka/zookeeper-data
Changes in server.properties - log.dirs=C:/kafka/kafka-logs
All the directories got created automatically.
This should work.
Video for reference -
https://www.youtube.com/watch?v=3XjfYH5Z0f0
Download the Kafka binaries, not the source, or make sure there are no spaces in your file paths.
This site describes a solution that worked for me.
The solution was to modify a bat file so that java knows the path of several jar libs.
Of course I downloaded the binary and not the source files from Confluent.
One of my colleagues is trying to install Rundeck on a Windows 10 OS. We have both followed the documentation guide on Rundeck's website. I was able to run and install Rundeck as a service, but my colleague is stuck on the part where we need to run start_rundeck.bat, because his Rundeck is not generating service.log. He can access the Rundeck UI, but the logs are still not being generated. Does anyone know how to fix this issue? Thanks!
Your colleague just needs to follow the instructions carefully.
Let me clarify the first steps, which focus on redirecting the standard output to the service.log file. I recreated the Rundeck installation on my Windows virtual machine:
First, check the minimal requirements. Rundeck needs OpenJDK 11 as its main dependency. You can obtain it from here; this implementation also works.
Create a new directory where the Rundeck .war file must be saved, e.g: C:\rundeck.
Go to this page and download the .war file into the directory created in the previous step.
Open a new PowerShell terminal and set the Rundeck base directory: set RDECK_BASE=C:\rundeck.
Launch Rundeck for the first time to generate all files: java -jar rundeck-3.4.6-20211110.war (inside the directory created in the second step), wait until the "Grails application running at http://localhost:4440" message appears, then stop the instance with the Ctrl+C key combination.
With any text editor, create a new file at C:\rundeck\ path called start_rundeck.bat with the following content:
set CURDIR=%~dp0
call %CURDIR%etc\profile.bat
java %RDECK_CLI_OPTS% %RDECK_SSL_OPTS% -jar rundeck-3.4.6-20211110.war --skipinstall -d >> %CURDIR%\var\logs\service.log 2>&1
And save it.
Go to the PowerShell terminal and just run the start_rundeck.bat file with .\start_rundeck.bat.
Now check the C:\rundeck\var\logs\service.log file.
At this point, it's possible to continue with the next steps: Configure Rundeck as a service using the nssm.exe program.
I have downloaded the s3-source connector zip file as given in the confluent web page. I am not getting where to place the extracted files. I am getting the following error
Please guide me. To load the connector, I am using this command:
confluent local load s3-source -- -d /etc/kafka-connect-s3/confluentinc-kafka-connect-s3-source-1.3.2/etc/quickstart-s3-source.properties
I am not getting where to place the extracted files
If you used confluent-hub install, it would put them in the correct place for you.
Otherwise, you can put them wherever you like, as long as you update plugin.path in the Connect worker properties to include the parent directory of the connector's JARs.
Extract the zip file (whether it's a source connector or a sink connector) and place the whole folder, with all the jars inside it, under the plugin.path you have set.
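For example, if the connector folder were extracted under a directory such as /opt/connectors (an illustrative path, not a required one), the Connect worker properties would contain something along these lines:

```
# connect-standalone.properties or connect-distributed.properties
# plugin.path lists parent directories; Connect scans each one for connector folders
plugin.path=/opt/connectors
```

After changing plugin.path, restart the Connect worker so it rescans the plugin directories.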
I need to change some configuration information to use SpringToolSuite4, but when I downloaded SpringToolSuite4 4.1.2 and unzipped it, there was no SpringToolSuite4.ini file. So I created one, but SpringToolSuite4 didn't reference the SpringToolSuite4.ini file when starting.
I solved the problem with these steps:
Open the command terminal;
go to the location where you put the jar file;
run "java -jar [****.jar]".
PS: Make sure you have a Java runtime installed on your machine and the correct environment settings.
Hope this could help you.
I guess this is an admin rights problem:
I have downloaded the self extracting .jar file:
spring-tool-suite-4-4.8.1.RELEASE-e4.17.0-win32.win32.x86_64.self-extracting.jar
I saved it in C:\Program Files (which needs admin rights).
Double-clicked it (not as admin) -> same error (cannot find the SpringToolSuite4.ini file).
I started TotalCommander (you may use a different explorer) with admin rights -> problem fixed.
Open the jar with WinRAR and open "contents.zip". Move the "sts-4.8.0.RELEASE" folder to Documents. After that, run the program.
jar:spring-tool-suite-4-4.8.0.RELEASE-e4.17.0-win32.win32.x86_64.self-extracting
You can solve the problem like this:
When I downloaded it for the second time I got this error (SpringToolSuite4.ini) again, and downloading it yet again didn't work. This is how I got past it instead:
Right-click the STS icon inside the extracted sts-bundle package and select "show content".
I set up an Ubuntu environment and used bzr to get the 3 trunks: addons, server, web.
Everything works and the server starts fine.
I then loaded the project in Eclipse and tried to run openerp-server. I got this error (module web: module not found). I then copied the entries (addons_path) from the openerp-server.conf in /etc/ to the conf file in the server/install folder. I also made a copy of this file and pasted it into the server folder, in the hope that Eclipse would pick it up.
But I am still getting the same error. Three questions please:
Which conf file should I add this path to for Eclipse to use? Where does this file reside?
If I must use a command-line switch to specify the web/addons path, how do I do that in Eclipse?
There used to be a file with a lot of different variables, such as pg_path, rpc, etc. Is that file still around? Maybe that is where I need to make this entry?
Thanks
I had to add a "-c /etc/openerp-server.conf" argument. Right-click on the openerp-server file, open the Properties window, select Run/Debug Settings, select the only launch configuration available, click Edit, and select the Arguments tab.
I added them manually in Eclipse under Run Configurations > Arguments.
e.g.
--db_host={host}
--db_port={port}
--db_user={user}
--db_password={password}
--addons=../addons,../server/addons
Sometimes we have a huge number of JAR files in the jboss/server/web/tmp/vfs-nested.tmp directory.
For example today this directory contained over 350k jar files.
But on other hosts there are only 2 jar files in this directory.
What can be the root cause of this problem?
We use JBoss 5.1
UPDATE:
I found the following information in the release notes for JBoss 5.1.0.GA:
JBoss VFS provides a set of different switches to control its internal behavior. JBoss AS sets jboss.vfs.forceCopy=true by default. To see all the provided VFS flags, check out the code of the VFSUtils.java class.
So I do not understand what I should set.
Should I set -Djboss.vfs.forceNoCopy=true or -Djboss.vfs.forceCopy=false?
Or should I set both of them?
UPDATE 1:
I have read the entire thread http://community.jboss.org/thread/2148?start=0&tstart=0
and now I am not sure that I should change either jboss.vfs.forceCopy or jboss.vfs.forceNoCopy.
According to this thread, I will get an OutOfMemory error instead of a huge number of files in the tmp dir.
From here: http://sourceforge.net/project/shownotes.php?release_id=575410
"Excessive nestedjarNNN.tmp files in the tmp directory. The VFS unwraps nested jars by extracting the nested jar into a tmp file in the java tmp directory. This can result in a large number of files that fill up the tmp directory. You can disable this behavior by setting -Djboss.vfs.forceNoCopy=true on command line used to start jboss. This will be enabled by default in a future release, JBAS-4389."
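For reference, the flag is an ordinary JVM system property; a sketch of how it might be appended to the options JBoss's run scripts pass to the JVM (JAVA_OPTS is the variable name used by run.sh/run.conf):

```shell
# Append the VFS flag to the JVM options picked up by JBoss's run script
JAVA_OPTS="$JAVA_OPTS -Djboss.vfs.forceNoCopy=true"
echo "$JAVA_OPTS"
```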
jskaggz has a good answer. In addition, I have this at the beginning of my run.bat file:
rmdir /s /q c:\apps\jboss-5.1.0.ga\server\default\tmp
rmdir /s /q c:\apps\jboss-5.1.0.ga\server\default\work
rmdir /s /q c:\apps\jboss-5.1.0.ga\server\default\log
mkdir c:\apps\jboss-5.1.0.ga\server\default\tmp
mkdir c:\apps\jboss-5.1.0.ga\server\default\work
mkdir c:\apps\jboss-5.1.0.ga\server\default\log
echo --- Cleared temp folders ---
I've had problems with old copies of classes hanging around, so this seems to help.
We solved this problem by using exploded deployment (works for war and ear files), as described in the JBoss documentation: http://docs.jboss.org/jbossas/docs/Administration_And_Configuration_Guide/5/html/ch03s01.html
That way VFS is not used.
I had the same issue described above in production and resolved it with the following solution.
Added java options
-Djboss.vfs.cache=org.jboss.virtual.plugins.cache.IterableTimedVFSCache
-Djboss.vfs.cache.TimedPolicyCaching.lifetime=1440
My setup also defines additional deployment directories so I needed to add these additional directories to vfs.xml file located in $JBOSS_SERVER_HOME/conf/bootstrap/ in order to see the benefit.
The lifetime setting I think is in minutes so I set it to a day as I have a scheduled restart of the server overnight.
Prior to finding this solution I had also tried using -Djboss.vfs.forceNoCopy=true and -Djboss.vfs.forceCopy=false.
This appeared to work, but I noticed the application ran a lot slower, presumably because these settings turn VFS caching off.
My JBoss version is jboss-5.1.0.GA, and my application runs in a cluster in production.
I found a lot of others having the same problem when running in cluster (or farm) environments.
https://issues.jboss.org/browse/JBAS-7126 describes solving the problem by using a farm directory as the deployment directory.
I had the same problem using a 2nd deploy directory. The jar files from my applications in this 2nd deploy directory kept getting copied until the disk was full.
I tried adding the 2nd deploy directory the same way as described at https://issues.jboss.org/browse/JBAS-7126 for the farm directory.
It works well!
We were facing the same issue and were able to circumvent it by using a farm directory as the deployment directory.
After putting that process in place, we faced one more issue due to the nature of our DEV environment (a clustered environment with many developers deploying to the shared DEV environment): we were not getting consistent results when deploying EARs and WARs that way. We circumvented this by making sure that the EARs and JARs being deployed are touched (http://en.wikipedia.org/wiki/Touch_(Unix)) on the servers, so that inconsistencies are avoided.