I'm setting up a CI environment using Jenkins and WildFly. I used the following command:
jboss-cli.bat --connect --controller=ip:adminPort --user=admin --password=*** --commands="deploy test.war --force"
It works fine, and I can successfully access the application after the command finishes, but if I execute this command around five times through an exec call, WildFly eventually stops responding...
I'm wondering if this way of deploying is incorrect. I need a way to deploy a WAR file many times. Any help would be appreciated.
I changed the command slightly, as below, and the slowness/server hang hasn't happened since (it's been several months).
One big difference is that the new command doesn't use --commands, but I'm not sure why the result is different.
/opt/wildfly-8.2.0.Final/bin/jboss-cli.sh --connect "deploy --force /home/wildfly/test/build/libs/test.war"
My WildFly version is wildfly-8.2.0.Final.
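For what it's worth, when driving jboss-cli from a CI job it can help to enforce a timeout, so a wedged CLI invocation fails the build visibly instead of hanging it. A minimal Python sketch, reusing the paths from the question (the helper name and timeout value are my own, not part of any tool):

```python
import subprocess

# CLI path from the question; adjust for your installation.
CLI = "/opt/wildfly-8.2.0.Final/bin/jboss-cli.sh"

def deploy(war_path, cli=CLI, timeout=120):
    # Pass the whole operation as one positional argument, mirroring the
    # working command above. The timeout turns a hung CLI call into a
    # TimeoutExpired exception rather than a stuck pipeline.
    cmd = [cli, "--connect", "deploy --force %s" % war_path]
    result = subprocess.run(cmd, capture_output=True, text=True, timeout=timeout)
    if result.returncode != 0:
        raise RuntimeError("deploy failed: %s" % result.stderr)
    return result.stdout
```

Running this once per WAR keeps each deployment sequential and bounded, which is closer to what the working command above does than firing several exec calls in a row.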
Before explaining my problem, please know that I have looked for solutions on similar topics, but none of them seems to work or even to correspond to my problem.
What I am trying to do:
I have this python code on multiple files that I run with flask with the following command:
python -m flask run --host=0.0.0.0
So far, everything works, but I would like this code to run automatically every time the computer boots. In the future this will be used on mini PCs without any graphical interface or human intervention.
Since I need to do some configuration checks before running the web server, I've created a PowerShell script that ends by starting Flask (using the previous command).
So far, everything works too. Now we're coming to the problem:
I'd like this script to run when I boot the machine. Specifics: everything needs to run with Administrator privileges, on the local system, without any interaction.
I've tried scheduled tasks, but Flask won't run even though the rest of the script works (creating folders and other things).
OK, it's not a big deal; I have other ways to do it, so I've created a Windows service in C# to run the script at startup on the local system.
The script works, I've checked the privileges too, and everything's fine, but when it reaches the flask command that is supposed to start the server, nothing happens.
It's the same thing if I run flask using "pythonw" which is supposed to run python as a background process.
What the problem seems to be:
Well, as long as I run Flask and I have either a command prompt or a PowerShell terminal, everything works great. But if, one way or another, I run the script as a background process, it won't work.
Normally it takes around 30 seconds for Flask to start up. Here, if I create a folder right after the Flask start-up line (as a test), I can see the folder is created almost instantly, which means the process is killed immediately.
The problem doesn't seem to come from the service itself but really from Windows killing the process, and I don't know why.
I'm running out of ideas, so if you have anything I could try, it would really help me.
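For reference, one common cause of exactly this symptom: under pythonw or a service there is no console, so sys.stdout/sys.stderr can be None, and the first write to them (for example a framework's startup banner) can fail and take the process down. This is an assumption about the cause, not a confirmed diagnosis; a minimal guard sketch (the log path and app module are placeholders):

```python
import sys

def ensure_streams(log_path="flask-service.log"):
    # Under pythonw / a Windows service, sys.stdout and sys.stderr may be
    # None; give them a real file so writes cannot crash the process.
    if sys.stdout is None or sys.stderr is None:
        log = open(log_path, "a", buffering=1)  # line-buffered text log
        sys.stdout = log
        sys.stderr = log

# Usage before starting the server (module name is hypothetical):
# ensure_streams()
# from myapp import app
# app.run(host="0.0.0.0")
```

If the server stays up with the streams redirected, the console-less environment was the killer; if not, the cause lies elsewhere (permissions, working directory, or the service wrapper itself).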
I am not sure if I can get help for this here, but I thought it was worth a try.
I have a 3-node cluster on AWS running MapR M3, and I installed Storm, Kafka, Divolte Collector, and Cassandra. I would like to try some of the clickstream examples, and I am running into an issue with the tcp-consumer example. Being quite new to Java and distributed processing, I also have some clarification questions. I am not quite sure where to post this because it feels Divolte-Collector-specific, and I also have some gaps in my understanding of the Javadoc concept and of building and running JAR files; I figured someone could point me to some resources or help with some clarifications. I can't get the JSON string to appear in the console running the netcat socket listening for clicks:
Divolte tcp-kafka-consumer example
Everything works until the netcat part (step 7), and my knowledge gap is at step 6.
Step 1: install and configure Divolte Collector
The install works, and the hello-world click collection is promising :-)
Step 2: download, unpack and run Kafka
# In one terminal session
cd kafka_2.10-0.8.1.1/bin
./zookeeper-server-start.sh ../config/zookeeper.properties
# Leave Zookeeper running and in another terminal session, do:
cd kafka_2.10-0.8.1.1/bin
./kafka-server-start.sh ../config/server.properties
No errors, plus I tested the Kafka examples, so that seems to be working as well.
Step 3: start Divolte Collector
Go into the bin directory of your installation and run:
cd divolte-collector-0.2/bin
./divolte-collector
Step 3 went off without a hitch; I can load the default Divolte Collector test page.
Step 4: host your Javadoc files
Set up an HTTP server that serves the Javadoc files that you generated or downloaded for the examples. If you have Python installed, you can use this (on Python 3, the equivalent is python3 -m http.server):
cd <your-javadoc-directory>
python -m SimpleHTTPServer
OK, so I can reach the Javadoc pages.
Step 5: listen on TCP port 1234
nc -kl 1234
Note: when using netcat (nc) as TCP server, make sure that you configure the Kafka consumer to use only 1 thread, because nc won't handle multiple incoming connections.
I tested netcat by opening the port and sending messages, so I figured I don't have any port issues on AWS.
Step 6: run the example
cd divolte-examples/tcp-kafka-consumer
mvn clean package
java -jar target/tcp-kafka-consumer-*-jar-with-dependencies.jar
Note: for this to work, you need to have the avro-schema project installed into your local Maven repository.
I installed the avro-schema project with mvn clean install in the avro project that comes with the examples, as per the instructions.
Step 7: click around and check that you see events being flushed to the console where you run netcat
When you click around the Javadoc pages, your console should show events in JSON format similar to this:
I don't see the clicks in my netcat window :(
Investigating the issue, I viewed the console and network tabs using Chrome developer tools; it seems Divolte is running, but I am not sure how to dig further. This is the console view. Any ideas or pointers?
Thanks anyway.
Initializing Divolte.
divolte.js:140 Divolte base URL detected http://ec2-x-x-x-x.us-west-x.compute.amazonaws.com:8290/
divolte.js:280 Divolte party/session/pageview identifiers ["0:i6i3g0jy:nxGMDVdU9~f1wF3RGqwmCKKICn4d1Sb9", "0:i6qx4rmi:IXc1i6Qcr17pespL5lIlQZql956XOqzk", "0:6ZIHf9BHzVt_vVNj76KFjKmknXJixquh"]
divolte.js:307 Module initialized. Object {partyId: "0:i6i3g0jy:nxGMDVdU9~f1wF3RGqwmCKKICn4d1Sb9", sessionId: "0:i6qx4rmi:IXc1i6Qcr17pespL5lIlQZql956XOqzk", pageViewId: "0:6ZIHf9BHzVt_vVNj76KFjKmknXJixquh", isNewPartyId: false, isFirstInSession: false…}
divolte.js:21 Signalling event: pageView 0:6ZIHf9BHzVt_vVNj76KFjKmknXJixquh0
allclasses-frame.html:9 GET http://ec2-x-x-x-x.us-west-x.compute.amazonaws.com:8000/resources/fonts/dejavu.css
overview-summary.html:200 GET http://localhost:8290/divolte.js net::ERR_CONNECTION_REFUSED
(Intro: I work on Divolte Collector)
It seems that you are running the example on an AWS instance. If you are using the pre-packaged Javadoc files that come with the examples, they hard-code the Divolte location as http://localhost:8290/divolte.js. So if you are running anywhere other than localhost, you should generate your own Javadoc for the example, using the correct hostname for the Divolte Collector server.
You can do so using the command below. Be sure to run it from the directory where your source tree is rooted, and of course change localhost to the hostname where you are running the collector.
javadoc -d YOUR_OUTPUT_DIRECTORY \
-bottom '<script src="//localhost:8290/divolte.js" defer async></script>' \
-subpackages .
As an alternative, you could also just try to run the examples locally first (possibly in a virtual machine, if you are on a Windows machine).
There doesn't seem to be anything MapR-specific about the issue you are seeing so far. The Kafka-based examples and pipeline should work in any environment that has the required components installed; this doesn't touch MapR-FS or anything else MapR-specific. Writing to the distributed filesystem is another story.
We don't currently compile Divolte Collector against MapR Hadoop, but incidentally I have given it a run on the MapR sandbox VM. When installing from the RPM distribution, create a /etc/divolte/divolte-env.sh with the following environment variable setting:
HADOOP_CONF_DIR=/usr/share/divolte/lib/guava-18.0.jar:/usr/share/divolte/lib/avro-1.7.7.jar:$(hadoop classpath)
Obviously this is a bit of a hack to get around classpath peculiarities and we hope to provide a distribution compiled against MapR that works out of the box in the future.
Also, you need Java 8 to run Divolte. If you install this from the Oracle RPM, add the proper JAVA_HOME to divolte-env.sh as well, e.g.:
JAVA_HOME=/usr/java/jdk1.8.0_31
With these settings I'm able to run the server, collect Avro files on MapR-FS, create an external Hive table on those files, and run a query.
I have a Jenkins job where I want to
- build my application
- start the jboss via batch
- sleep some time to wait for the jboss
- do some junit tests
- stop the jboss
The problem I have is that the job does not proceed after the JBoss start. It shows the complete JBoss log and just keeps refreshing it, so the sleep and the JUnit tests are never executed.
The batch call I'm using:
cmd.exe /C F:\jboss-5.1.0.GA-jdk6\bin\run.bat -c Servername -Djboss.service.binding.set=ports-05 -Djboss.bind.address=0.0.0.0
I can't use the Jenkins JBoss Management plugin because I have to set JAVA_OPTS for this specific job.
Any idea how to start JBoss without showing the log in the Jenkins console?
EDIT :
Thanks for your answer, but call/start didn't work for me either.
My working solution:
(Not nice, but it works; I just thought I should share it.)
I created a second Jenkins job which starts JBoss with the batch call from above.
Then I changed this job so it can be triggered remotely ("Trigger builds remotely").
Then I changed my first job to trigger the second one in an "Execute batch command" build step:
wget --spider build_trigger_url
So my job is doing this now:
- build my application
- trigger the JBoss Jenkins job via wget (this second job then keeps running on Jenkins until it is manually shut down)
- sleep some time until JBoss is started
- execute the JUnit tests
- stop JBoss via the JBoss Management plugin, which kills the second job
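The fixed sleep in the job above can be made less fragile by polling the port JBoss listens on instead of sleeping a set amount; a minimal sketch (the host and port are placeholders, and the function name is my own):

```python
import socket
import time

def wait_for_port(host, port, timeout=120):
    # A fixed sleep either wastes time or fires too early; polling until
    # something accepts TCP connections on host:port is a more reliable
    # "server is up" check before running the JUnit tests.
    deadline = time.time() + timeout
    while time.time() < deadline:
        try:
            with socket.create_connection((host, port), timeout=2):
                return True
        except OSError:
            time.sleep(1)
    return False
```

For the command line in the question, the port to poll would be whichever HTTP port the ports-05 binding set maps to on the target server.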
You should change it to cmd.exe /C start F:\jboss-5.1.0.GA-jdk6\bin\run.bat <whatever params>
When you invoke a .bat directly, control passes to it and the step runs until the .bat terminates. Instead, you need to spawn off another process to run the .bat; that is what the start command does (call, by contrast, still waits for the script to finish).
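If the job step can go through a small script rather than raw batch, the same spawn-and-continue behavior can be sketched in Python (the JBoss path is the one from the question; the helper name is my own):

```python
import subprocess

def spawn(cmd, logfile):
    # Popen returns immediately instead of blocking until the command
    # exits, and redirecting output to a file keeps the JBoss log out of
    # the Jenkins console.
    log = open(logfile, "w")
    return subprocess.Popen(cmd, stdout=log, stderr=subprocess.STDOUT)

# On the Jenkins machine this would be something like:
# spawn(["cmd.exe", "/C", r"F:\jboss-5.1.0.GA-jdk6\bin\run.bat",
#        "-c", "Servername"], "jboss.log")
```

Note that Jenkins may still reap spawned children when the build ends, which is one reason the separate-job workaround above is attractive.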
Problem
I need to integrate AspectJ code into an existing application running on Tomcat, but I think I am not setting JAVA_OPTS correctly.
Our vendor has created some AspectJ code that passes the logged-in user id to CONTEXT_INFO() on the SQL Server connection, so that an audit database trigger we created can capture the user id that made the change.
What I have done
1. Added the following code to our database trigger:
DECLARE @appUserID INT
SET @appUserID = ISNULL(REPLACE(CONVERT(VarChar(128), CONTEXT_INFO()), CHAR(0), ''), '0');
2. Added aspectjrt.jar to the web application's WEB-INF\lib folder.
3. Added vendorAspectJCode.jar to the web application's WEB-INF\lib folder.
4. Added aspectjweaver.jar to Tomcat's lib folder (\tomcat7.0.27\lib).
5. Edited catalina.bat as follows:
there is a line of code that looks like this:
set JAVA_OPTS=%JAVA_OPTS% %LOGGING_CONFIG%
I have changed that to
set JAVA_OPTS="%JAVA_OPTS% %LOGGING_CONFIG% -javaagent:D:\tomcat\tomcat7.0.27\lib\aspectjweaver.jar"
but it did not seem to work.
So then I tried setting it like this, adding a new set JAVA_OPTS line:
set JAVA_OPTS=%JAVA_OPTS% %LOGGING_CONFIG%
set JAVA_OPTS="-javaagent:D:\tomcat\tomcat7.0.27\lib\aspectjweaver.jar"
but that did not do the trick either.
After making these changes and running a test through the web application front end, the user id inserted into the database was 0, which tells me something has not been done right; the part I feel least comfortable with among the steps above is Step 5.
Does anybody know if the syntax for setting JAVA_OPTS is correct, or whether there is another place to put it?
After a lot of trial and error I found out how to integrate AspectJ into Tomcat running as a service on a Windows server. I do not know why, but the registry step below was the cause of my problems.
Of course, as I mentioned in my question above, you need the following prerequisites:
Add aspectjrt.jar to the web application WEB-INF\lib folder.
Add vendorAspectJCode.jar to the web application WEB-INF\lib folder.
Add aspectjweaver.jar to tomcat's lib folder \tomcat7.0.27\lib
Setting -javaagent:PathToMyAspectjweaver\aspectjweaver.jar in service.bat did not work, so I had to set it in the registry and uninstall/reinstall the Tomcat service for the changes to be picked up, as follows:
1. First, I recommend turning UAC off and making sure that you are an Administrator.
2. Stop the Tomcat service if it is running.
3. Delete the Tomcat service.
4. Verify in Windows Services that the service is no longer there.
5. Verify in the Windows registry that everything related to the service got deleted. If not, do so manually.
6. Install the Tomcat service.
7. Verify in Windows Services that the service got created.
8. Find the service in the registry and edit the Options variable, appending the following:
-javaagent:PathToMyAspectjweaver\aspectjweaver.jar
I have created a couple of .bat files for these steps. Steps 2 and 3 would look something like this (TomcatServiceUninstall.bat):
echo OFF
ECHO Removing Tomcat Service...
sc stop YourServiceName
sc delete YourServiceName
ECHO Removing Registry Key containing config data for Tomcat7
REG DELETE "HKLM\SOFTWARE\Wow6432Node\Apache Software Foundation\Procrun 2.0\YourServiceName" /f
REG DELETE "HKLM\SOFTWARE\Wow6432Node\Apache Software Foundation\Tomcat\7.0" /f
ECHO Uninstall Complete - File Directories remain intact.
Step 6 would look like this (TomcatServiceInstall.bat):
ECHO OFF
ECHO Running Service.bat to install the Tomcat 7 - YourServiceName - Service
cd "C:\Path to your tomcat\tomcat7.0.27\bin"
service.bat install
I need to deploy an EAR file located on server A to a WebSphere server located on server B, and I need to do it from the command line. I have searched the web but found results only for WAS 6 (I have WAS 7).
Does anyone know how to deploy an EAR to WAS (on a different server) through the command line?
I assume both servers are standalone. If so, use WAS_HOME/bin/wsadmin on server A and specify the RMI host/port for server B. If not, specify the host/port of server B's deployment manager.
wsadmin -host serverB.host.com -port serverBRMIPortNumber -c '$AdminApp install /path/to/localfile.ear {...options...}'
Note: this is UNIX syntax; for Windows, use "double quotes". Alternatively, you can omit -c and use interactive mode, or use -f file.jacl. Jython scripting is available with -lang jython. See the following for the AdminApp install options (e.g., -appname or -usedefaultbindings):
http://publib.boulder.ibm.com/infocenter/wasinfo/fep/topic/com.ibm.websphere.nd.multiplatform.doc/info/ae/ae/rxml_taskoptions.html
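For instance, the same install as a Jython script (run with wsadmin -lang jython -f deploy.py) might look like the sketch below; the application name and EAR path are placeholders, and this only runs inside wsadmin, where AdminApp and AdminConfig are predefined scripting objects:

```python
# deploy.py -- executed by wsadmin, not a plain Python interpreter.
AdminApp.install('/path/to/localfile.ear',
                 '[-appname MyApp -usedefaultbindings]')
AdminConfig.save()  # persist the configuration change
```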
You should really consider a node agent; that would make all of this go away. I'm assuming you're not in a clustered environment; otherwise a simple push to and sync of a node agent would do the trick.
The answer above is correct, but you could also simply FTP the package to server B and use wsadmin to install it locally.