JConsole can't find process - jconsole

I tried to run JConsole to analyze the memory used by a running process, but JConsole doesn't show any processes even though I am absolutely sure one is running (it should also show JConsole itself in the process list, but it doesn't).
Does anyone have an idea why it doesn't show any processes?
Cheers

At a Windows prompt, run echo %TMP%; it will give you the default temp directory. Go to that directory and find the directory named hsperfdata_user, where user is your login. This is the directory where process IDs are stored: any new Java process you create gets a new file there named after its process ID, and JConsole picks up the process IDs from this directory. If you cannot create a file in this directory, you need to change the permissions to allow writing. Once that's done, start a new Java application and check that a new process ID file appears in the directory. Once confirmed, start JConsole.
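For example, a quick check on Windows might look like this (a sketch, assuming your login matches %USERNAME%):
echo %TMP%
dir %TMP%\hsperfdata_%USERNAME%
If a file named after your application's PID shows up in that directory after you start the application, JConsole should be able to see the process.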

I have the same problem. But if I explicitly specify the PID, as in jconsole 1234, jconsole is able to analyze the process.
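For example, you can list the local JVM PIDs with jps (shipped with the JDK) and pass one to jconsole explicitly:
jps -l
jconsole 1234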

If you are running jconsole on Windows, simply:
Find jconsole.exe
Right click it
Select run as administrator.

In my case, removing the hsperfdata_USERNAME directory (in the %TMP% directory) and closing all the JVMs helped.
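On Windows that can be done roughly like so (a sketch, assuming your login matches %USERNAME%; close all JVMs first):
rmdir /s /q %TMP%\hsperfdata_%USERNAME%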

This happens when the %TMP% value is different for the monitored JVM and the monitoring tool (JConsole/JMC/Java Mission Control, maybe even VisualVM).
This may be the standard scenario with Cygwin (at least in my case: Cygwin+Babun).
The easiest solution is to set the TMP environment variable to the default value used by Windows, at least in the scope of the shell launching the JVM.
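For example, in the Cygwin shell that launches the JVM, something like the following (a sketch that assumes the usual per-user Windows temp location under %LOCALAPPDATA%; yourapp.jar is just a placeholder):
export TMP="$LOCALAPPDATA\\Temp"
java -jar yourapp.jar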

You have to start jconsole as the same user that started the process you want to analyze.
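For example, if the application runs as appuser (a hypothetical account name), starting jconsole as that user might look like:
sudo -u appuser jconsole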

Just ran into this issue.
If you are using multiple JDKs by any chance (e.g. via SDKMAN), then make sure that jconsole is run using the same JDK as the application.
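For example, with SDKMAN you can check the active JDK with sdk current java, and then launch jconsole from that same JDK (a sketch assuming JAVA_HOME points at it):
"$JAVA_HOME/bin/jconsole"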

8 years later... I had the same problem. I could only see certain processes but couldn't see or monitor any Java processes running in a Docker container on Linux.
Inspired by the Windows solution by RoyalBigMack:
Solution 1. Run terminal as super user (su command) and run jconsole
Solution 2. Run solution 1 as one command, sudo jconsole
Only the first solution worked for me, and once the jconsole UI popped up, all the hidden processes were visible.

Related

Kafka does not start blank output

I'm working on installing Kafka and Zookeeper.
I have already run Zookeeper and it is currently running.
I set up everything as in https://dzone.com/articles/running-apache-kafka-on-windows-os
When I finally run, in my cmd,
.\bin\windows\kafka-server-start.bat .\config\server.properties
there is no output; it just moves on to the next command prompt.
Please help me out.
Finally I find someone with the same issue I had! Zookeeper running, but Kafka not doing anything at all except returning to the next line with no log, error, or anything. I don't know if the cause is the same, but the solution for me, oddly enough, was to download and open Cygwin and run the command exactly as you have it, except flipping all the \s to /s, and it worked.
After a lot of searching, this is how I solved it.
You have to add the following to your User PATH environment variable (and not to the system PATH environment variable):
%SystemRoot%\System32\Wbem;%SystemRoot%\System32\;%SystemRoot%
This question was already answered on this page:
Kafka server not returning anything
Solution that worked for me:
Create the logs folder and mention it in server.properties; it will not create the folder automatically.
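For example, the relevant setting in server.properties is usually log.dirs (the path below is just an illustration):
log.dirs=D:/kafka/logs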
Go to your cmd and run kafka-server-start.bat D:\<pathofkafka>\config\server.properties
Thanks!

Accessing Wildfly AS remotely using jconsole from a windows machine

I have WildFly running on a Linux machine, and I'm trying to access it from a Windows machine on the same network using jconsole, without success.
I can access the management console of the WildFly instance from the browser using:
http://192.168.1.6:9990/
I've read that I have to add jboss-client.jar to the jconsole classpath, but I can't seem to get it to work; this is my attempt:
jconsole -J-Djava.class.path=C:\Progra~1\Java\jdk1.8.0_73\bin\jconsole.jar:C:\Progra~1\Java\wildfly\jboss-client.jar
I'm running the command from PowerShell. I have my PATH environment variable set so I can use jconsole without a problem, so I think my syntax is the problem here.
In $WILDFLY_HOME\bin there are jconsole scripts (jconsole.sh, jconsole.bat, jconsole.ps1) which set up the classpath for you, so setting it up manually should not be necessary. If you really need to do that, analyzing the scripts will probably help you figure out how to escape it correctly.
Also, on Windows the classpath entries are separated with a semicolon ; but on Unix it is a colon :
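For example, on the Windows side it may be easier to use the bundled script instead of setting the classpath by hand (a sketch from PowerShell, assuming a WILDFLY_HOME environment variable pointing at your WildFly installation):
cd $env:WILDFLY_HOME\bin
.\jconsole.bat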

Divolte-collector with MAPR, Storm, Kafka and Cassandra

I am not sure if I can get help for this on here, but I thought it was worth a try.
I have 3 node cluster on AWS, I am running MAPR M3 , I installed Storm, Kafka and Divolte-collector and Cassandra. I would like try some of the clickstream examples and I am running into an issue with the tcp-consumer example. Also being quite new to java and distributed processing I have some clarification questions. Again I am not quite sure where to post this because I feel like this is divolte-collector specific and I also have some gaps in my understanding of the javadoc concept and the building and running of jar files; but I figured someone could point me to some resources or help with some clarifications. I can't get the json string to appear in the console running netcat socket listening for clicks:
Divolte tcp-kafka-consumer example
Everything works until the netcat part in step 7, and my knowledge gap is with step 6.
Step 1: install and configure Divolte Collector
The install works and the hello world click collection is promising :-)
Step 2: download, unpack and run Kafka
# In one terminal session
cd kafka_2.10-0.8.1.1/bin
./zookeeper-server-start.sh ../config/zookeeper.properties
# Leave Zookeeper running and in another terminal session, do:
cd kafka_2.10-0.8.1.1/bin
./kafka-server-start.sh ../config/server.properties
No errors, plus I tested the Kafka examples, so that seems to be working as well.
Step 3: start Divolte Collector
Go into the bin directory of your installation and run:
cd divolte-collector-0.2/bin
./divolte-collector
Step 3 went off without a hitch; I can load the default divolte-collector test page.
Step 4: host your Javadoc files
Set up an HTTP server that serves the Javadoc files that you generated or downloaded for the examples. If you have Python installed, you can use this:
cd <your-javadoc-directory>
python -m SimpleHTTPServer
Ok so I can reach the javadoc pages
Step 5: listen on TCP port 1234
nc -kl 1234
Note: when using netcat (nc) as TCP server, make sure that you configure the Kafka consumer to use only 1 thread, because nc won't handle multiple incoming connections.
I tested netcat by opening the port and sending messages, so I figured I don't have any port issues on AWS.
Step 6: run the example
cd divolte-examples/tcp-kafka-consumer
mvn clean package
java -jar target/tcp-kafka-consumer-*-jar-with-dependencies.jar
Note: for this to work, you need to have the avro-schema project installed into your local Maven repository.
I installed the avro-schema with mvn clean install in the avro project that comes with the examples, as per the instructions here.
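For reference, that step looked roughly like this (a sketch, assuming the schema project sits in divolte-examples/avro-schema):
cd divolte-examples/avro-schema
mvn clean install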
Step 7: click around and check that you see events being flushed to the console where you run netcat
When you click around the Javadoc pages, your console should show events in JSON format similar to this:
I don't see the clicks in my netcat window :(
Investigating the issue, I viewed the console and network tabs using the Chrome developer tools; it seems Divolte is running, but I am not sure how to dig further. This is the console view. Any ideas or pointers?
Thanks anyways
Initializing Divolte.
divolte.js:140 Divolte base URL detected http://ec2-x-x-x-x.us-west-x.compute.amazonaws.com:8290/
divolte.js:280 Divolte party/session/pageview identifiers ["0:i6i3g0jy:nxGMDVdU9~f1wF3RGqwmCKKICn4d1Sb9", "0:i6qx4rmi:IXc1i6Qcr17pespL5lIlQZql956XOqzk", "0:6ZIHf9BHzVt_vVNj76KFjKmknXJixquh"]
divolte.js:307 Module initialized. Object {partyId: "0:i6i3g0jy:nxGMDVdU9~f1wF3RGqwmCKKICn4d1Sb9", sessionId: "0:i6qx4rmi:IXc1i6Qcr17pespL5lIlQZql956XOqzk", pageViewId: "0:6ZIHf9BHzVt_vVNj76KFjKmknXJixquh", isNewPartyId: false, isFirstInSession: false…}
divolte.js:21 Signalling event: pageView 0:6ZIHf9BHzVt_vVNj76KFjKmknXJixquh0
allclasses-frame.html:9 GET http://ec2-x-x-x-x.us-west-x.compute.amazonaws.com:8000/resources/fonts/dejavu.css
overview-summary.html:200 GET http://localhost:8290/divolte.js net::ERR_CONNECTION_REFUSED
(Intro: I work on Divolte Collector)
It seems that you are running the example on an AWS instance somewhere. If you are using the pre-packaged JavaDoc files that come with the examples, they have hard-coded the divolte location as http://localhost:8290/divolte.js. So if you are running somewhere other than localhost, you should probably create your own JavaDoc for the example, using the correct hostname for the Divolte Collector server.
You can do so using this command. Be sure to run it from the directory where your source tree is rooted, and of course change localhost to the hostname where you are running the collector.
javadoc -d YOUR_OUTPUT_DIRECTORY \
-bottom '<script src="//localhost:8290/divolte.js" defer async></script>' \
-subpackages .
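After regenerating, you can serve the output directory the same way as in step 4:
cd YOUR_OUTPUT_DIRECTORY
python -m SimpleHTTPServer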
As an alternative, you could also just try to run the examples locally first (possibly in a virtual machine, if you are on a Windows machine).
It doesn't seem there is anything MapR-specific about the issue that you are seeing so far. The Kafka-based examples and pipeline should work in any environment that has the required components installed; this doesn't touch MapR-FS or anything else MapR-specific. Writing to the distributed filesystem is another story.
We don't compile Divolte Collector against MapR Hadoop currently, but incidentally I have given it a run on the MapR sandbox VM. When installing from the RPM distribution, create a /etc/divolte/divolte-env.sh with the following env var setting:
HADOOP_CONF_DIR=/usr/share/divolte/lib/guava-18.0.jar:/usr/share/divolte/lib/avro-1.7.7.jar:$(hadoop classpath)
Obviously this is a bit of a hack to get around classpath peculiarities and we hope to provide a distribution compiled against MapR that works out of the box in the future.
Also, you need Java 8 to run Divolte. If you install this from the Oracle RPM, add the proper JAVA_HOME to divolte-env.sh as well, e.g.:
JAVA_HOME=/usr/java/jdk1.8.0_31
With these settings I'm able to run the server and collect Avro files on MapR FS, create an external Hive table on those files and run a query.

How to check user ".profile" exist or not before running crontab in Solaris 10

I am using Solaris 10.
I have another user apart from root, say testuser, whose home directory is mounted on a NAS file system.
I have some scripts which need to run as testuser, so I added them to testuser's crontab.
As long as the NAS is up, all the cron jobs run properly, but when the NAS goes down, cron itself fails with this error:
! could not obtain latest contract for PID 15621: No such process
I searched this issue and came to know that it happens because the user's .profile file is not accessible. So is there any way to check whether the user-specific .profile file exists before running any scheduled job?
Any help on this will be appreciated.
I think a better solution would be to actively monitor the NAS share and report an error (however errors are reported at your location) if it isn't available. You can use tools like nfsstat to get statistics on the NAS share (assuming this NAS share is mounted via NFS). It seems a better approach than checking to see if it's working before running cron: check to make sure the share is available, because if it isn't, attention is needed.
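For example, on Solaris something as simple as the following lists per-mount statistics for the NFS-mounted file systems, which a monitoring check could wrap:
nfsstat -m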
Cron doesn't depend on anything but time, so it will run regardless of whether or not the user's home directory is available. If the script that the cron job is running is local, then you could prepend a check to make sure the home directory is available before running, otherwise just exit with an error code.
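A minimal sketch of such a wrapper, assuming the real script lives on local disk (the names below are hypothetical):
#!/bin/sh
# cron_wrapper.sh: skip the job if the NAS-mounted home (and .profile) is unreachable
if [ ! -r "$HOME/.profile" ]; then
    echo "home directory unavailable, skipping job" >&2
    exit 1
fi
exec /usr/local/bin/real_job.sh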
If the script that cron is attempting to run is in the user's home directory, you're out of luck, because an error will occur even in trying to run the script that does the check. You will need to check the status of the NAS share before attempting to run the cron job, but the cron job will run regardless. See where I'm going?
Again, I would suggest monitoring the NAS and reporting when it is failing.

Change site configuration without restarting G-WAN

I'm looking at hosting a number of small, static websites and have been looking at a few alternatives including G-WAN. At the moment I'm just trying to get a feel for how well each server suits my needs before picking one.
G-WAN seems to do exactly what I want, though I'm running into problems with updating the configuration (by adding new folders) after the server's started. I can't find anything in the documentation or online about this, so I don't know if I'm doing anything dumb, running an unsupported configuration, or whether it's a feature that doesn't exist in G-WAN.
Here's my setup:
G-WAN 3.3.28 64-bit on Ubuntu 12.04.1 LTS.
I have what I think is the required minimal folder structure:
0.0.0.0_80
#0.0.0.0
www
$site.com
www
$othersite.com
www
I start up gwan via (I'm still messing around, so hopefully this is OK):
sudo ./gwan -d
Everything works brilliantly. I add $thirdsite.com/, $thirdsite.com/www/, and $thirdsite.com/www/index.html; then when I try to visit thirdsite.com it gives me the root host (i.e. it doesn't seem to pick up the changes).
To reload the modified configuration, I have to either do:
sudo ./gwan -k; sudo ./gwan -d
or kill the non-angel process (kill -s 15) to restart the child process.
Can G-WAN reload the host definitions another way? If so, is it something that works out of the box or is there a command that can cycle the server without dropping requests made to other hosts (/is it safe to kill -s 15 on the non-angel process + if so, is there a reliable way to identify the process)? Thanks in advance!
G-WAN loads the host definitions at startup and does not re-check them over time to reload them dynamically.
To force a reload, you have to stop the child process (when in daemon mode); v3.9+ keeps the old child alive long enough to process any pending requests while the new child accepts new connections.
Since stopping the child can also be done from the maintenance script, from a handler, or from a servlet by just calling exit(0), there is no need for a dedicated command.
Note that when you use kill, you can pick the PID from the .pid files in the gwan directory (see the example after the list below):
the parent process starts with a capital letter: Gwan_xxxx.pid
the child process starts with a lowercase letter: gwan_xxxx.pid
That will make your life easier.
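For example, restarting just the child might look like this (a sketch, assuming G-WAN was unpacked under /opt/gwan and that only one child PID file is present):
kill -s 15 "$(cat /opt/gwan/gwan_*.pid)"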