Jetty stops responding after some period of time - webserver

I have a project running on the Jetty web server.
The app comes up and works fine, but after some period of time (I don't know exactly how long), when I try to access the app I receive:
"The page you are looking for is temporarily unavailable. Please try
again later."
The app also has schedulers configured. I would suspect a timeout, but that's not the case. While analyzing the logs I noticed: 2017-12-08
12:45:24.566:WARN:oejsh.ErrorHandler:qtp1555845260-195: Error page
loop /error/not-found.faces.
I don't see any other log entries that could explain the issue. Any suggestions for a solution?

The likely cause is that your Jetty deployment is using the default work/temp directory and a process on your machine periodically cleans up the system temp directory.
I would suggest you explicitly specify the work directory or the temp directory (either the JVM temp, the jetty.base temp, or the webapp temp).
See previous answers on how to configure the work / temp directory.
How to change the temporary directory in jetty9?
Jetty: Starts in C:\Temp
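As a rough sketch of what that configuration can look like (the paths here are just examples): either create a persistent work directory next to jetty.base, which Jetty will use for exploded webapps instead of the system temp dir, or set the temp directory explicitly in the webapp's context XML:

mkdir $JETTY_BASE/work

or

<Configure class="org.eclipse.jetty.webapp.WebAppContext">
  <Set name="tempDirectory">/opt/myapp/work</Set>
</Configure>

Either way the directory lives outside the system temp path, so a periodic temp cleaner can no longer delete the exploded webapp out from under the running server.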

Related

LocalDeployer: app working directory

I have an app that temporarily creates a file and does not delete it. I was hoping to see the contents of the file while it is running.
The app is deployed using the local deployer; does anybody know where it would create the file?
I tried the temp path, and also the working directory where the out and error logs are... nothing. The app doesn't seem to be erroring either; that would show up in my normal console log.
Running on Unix; temp is at /tmp.
thanks
You can control this location via the local deployer properties workingDirectoriesRoot and deleteFilesOnExit.
For more information, you can refer to this doc:
https://docs.spring.io/spring-cloud-dataflow/docs/current/reference/htmlsingle/#configuration-deployer
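As a rough sketch, assuming these are set in the Data Flow server's configuration (the path below is just an example), you could pin the working directory and keep its contents around after the app stops:

spring.cloud.deployer.local.working-directories-root=/var/dataflow/work
spring.cloud.deployer.local.delete-files-on-exit=false

Each deployed app then gets its own subdirectory under that root, which is also where its stdout/stderr logs and any files written to the working directory should end up.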
Actually, looking at the code of the local deployer, it seems the location it defaults to is the system temp path (System.getProperty("java.io.tmpdir")) plus the stream id, plus the app id, etc. It is the same folder that the console and error streams are written to.
thanks!

robocopy error with ERROR 32 (0x00000020)

I have two drives, A and B. Using a Python script I create some files on drive A, and I run a PowerShell script that copies all the files from drive A to drive B at an interval of 1 second.
I am getting this error in PowerShell:
2015/03/10 23:55:35 ERROR 32 (0x00000020) Time-Stamping Destination
File \\x.x.x.x\share1\source\Dummy_100.txt The process cannot access
the file because it is being used by another process. Waiting 30
seconds...
How will I overcome this error?
This happens because the file is locked by a running process. To fix it, download Process Explorer, then use Find > Find Handle or DLL to find out which process has locked the file. Use 'taskkill' to kill that process from the command line. You will be fine.
If you want robocopy to give up on such files sooner, you can use /r:n, where n is the number of retries;
for example /w:3 /r:5 will retry 5 times, waiting 3 seconds between attempts.
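For example (the source and destination paths here are placeholders), a run that retries a locked file 5 times, waiting 3 seconds between attempts, before moving on:

robocopy A:\outgoing \\x.x.x.x\share1\source /R:5 /W:3

Without these switches robocopy falls back to its defaults of 1,000,000 retries with a 30-second wait, which is why a single locked file can appear to hang the whole copy.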
How will I overcome this error?
If backup is what you have in mind and you encounter in-use files frequently, look into Volume Shadow Copies (VSS), which allow files to be copied even while they are 'in use'. It's not a product, but a Windows technology used by various backup tools.
Sadly, it's not built into robocopy, but it can be used in conjunction with it. See
➝ https://superuser.com/a/602833/75914
and especially:
➝ https://github.com/candera/shadowspawn
There could be many reasons.
In my case, I was running a CMD script to copy a heap of SQL Server backups and transaction logs from one server to another. I had the same problem because the script appeared to be trying to write into a log file that was supposedly opened by another process. It was not.
I ran so many IP checks and process ID checkers that I ran out of ideas about what was hogging the log file. Event Viewer said nothing.
I found out it was not even the log file that was being locked; I was able to delete it by logging into the server as a normal user with no admin privileges!
It was the backup files themselves that were locked by the SQL Server Agent. As @Oseack said, another tool may have been needed while the backup files themselves were still being used or locked by the SQL Server Agent.
The way I got around it was to force ROBOCOPY to wait.
/W:5
did it.

JConsole can't find process

I tried to run JConsole to analyze the memory used by a running process, but JConsole doesn't show me any processes even though I am absolutely sure one is running (besides, it should show JConsole itself in the process list, but it doesn't).
Does anyone have an idea why it doesn't show any processes?
Cheers
At a Windows prompt, run echo %TMP%; it will give you the default temp dir. Go to that directory and find the directory named hsperfdata_user, where user is your login. This is the directory where process IDs are stored: any new process you create, such as a Java application, gets a new file named after its process ID, and JConsole picks up the process IDs from this directory. If you cannot create a file in this directory, you need to change the permissions to allow writing. Once that's done, start a new Java application and check that a new process ID file appears in the directory. Once confirmed, start JConsole.
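For example, at a Windows command prompt (%USERNAME% expands to your login):

echo %TMP%
dir %TMP%\hsperfdata_%USERNAME%

If the second command lists one file per running Java process (each named by its PID), discovery should work; if the directory is missing or not writable, fix the permissions first as described above.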
I have the same problem. But if I explicitly specify the PID, as in jconsole 1234, jconsole is able to analyze the process.
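If you only need the PID, a quick way to find it is jps, which ships with the JDK (1234 below is just a placeholder):

jps -l
jconsole 1234

jps -l lists the running JVMs with their PIDs and main classes, and passing the PID to jconsole sidesteps the broken process list entirely.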
If you are running jconsole on Windows, simply:
Find jconsole.exe
Right-click it
Select "Run as administrator".
In my case, removing the hsperfdata_USERNAME directory (in the %TMP% directory) and closing all the JVMs helped.
This happens when the %TMP% value differs between the monitored JVM and the monitoring tool (JConsole/JMC/Java Mission Control, maybe even VisualVM).
This may be the standard scenario with Cygwin (at least in my case: Cygwin+Babun).
The easiest solution is to set the TMP environment variable to the default value used by Windows, at least in the scope of the shell launching the JVM.
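For example, in the Cygwin/Babun shell that launches the JVM (the path below is the usual Windows default; adjust it to your Windows user):

export TMP='C:\Users\<your user>\AppData\Local\Temp'

so that the monitored JVM and JConsole both resolve %TMP% to the same hsperfdata location.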
You have to start jconsole as the same user that started the process you want to analyze.
Just ran into this issue.
If you happen to be using multiple JDKs (e.g. via SDKMAN), make sure that jconsole is run using the same JDK as the application.
8 years later... I had the same problem. I could only see certain processes but couldn't see or monitor any Java processes running in a Docker container on Linux.
Inspired by the Windows solution by RoyalBigMack:
Solution 1. Run the terminal as the super user (su command) and run jconsole.
Solution 2. Run solution 1 as one command: sudo jconsole.
Only the first solution worked for me, and once the jconsole UI popped up, all the hidden processes were visible.

Rack: Bundler::GemNotFound errors during `bundle install --deployment`

So I have a few machines in production that are running a Sinatra app on top of Rack. Usually everything is hunky dory until Puppet - which we're using to sync changes to our servers - notices that the Gemfile.lock for the project has changed, and as a result, needs to issue the bundle install --binstubs --deployment command so we get the new gems. When this happens, ANY http request will cause a 500 error when it calls into Bundler to require our gems, because the new gems haven't been installed yet.
We usually have at least one Rack process hanging around due to another process that periodically makes an http request to ensure the server is alive, but when this happens, there are no Rack processes alive. It seems like the PassengerMinInstances directive might help if the problem were with new instances, but we also have a process that periodically fetches a page to test that the server is still up, so there still should be at least one Rack process alive to handle the request.
I should probably note that Puppet doesn't actually restart Rack (by touching the restart.txt file) until after the bundle install has completed, so it doesn't make sense that our Rack processes would go away at this point. Has anyone encountered anything like this? Is there some Rack option that I've overlooked to avoid reloading the entire environment on every request?
I know this doesn't directly answer your question, but what I've done in the past to get around this kind of thing is to deploy the app into version-numbered dirs, with a soft link pointing at the current one and an (Nginx) proxy server routing requests through the link. At the end of the deployment the deploy script points the link at the new version.
It seemed to work well enough for me, and if things really go wrong you can always manually repoint the link back to the previous version.
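A rough sketch of that flow (the paths and release name are made up; the restart step assumes the Passenger restart.txt convention mentioned in the question):

# prepare the new release next to the old one and bundle it there
cd /srv/myapp/releases/20171208 && bundle install --binstubs --deployment
# atomically repoint the link the proxy serves from, then restart
ln -sfn /srv/myapp/releases/20171208 /srv/myapp/current
touch /srv/myapp/current/tmp/restart.txt

Requests keep hitting the old, fully installed release until the link flips, so there is never a window where the running app's Gemfile.lock refers to gems that aren't installed yet.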
For posterity's sake, I'll answer this question. As part of the deployment, all of the files were touched with chown -R, which updates the ctime (but not the mtime) of the file. There is also an interesting bug/feature in Passenger where they will restart the server whenever the mtime or ctime of the /tmp/restart.txt file changes.
Solution: stop chowning the directory during a deployment.
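You can see that effect for yourself with GNU stat (the file path and the deploy:deploy owner are just examples):

stat -c 'mtime=%y ctime=%z' tmp/restart.txt
chown deploy:deploy tmp/restart.txt
stat -c 'mtime=%y ctime=%z' tmp/restart.txt

The second stat shows an updated ctime even though the contents and mtime are untouched, which is exactly the change Passenger's restart check reacts to.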

Aggregation of IIS logs

We have an IIS .Net application deployed across several machines. We use IIS log information to do reporting of performance of the web application and navigation by the user. Currently the reporting is only required infrequently (once a day, for the previous day), so we just roll the logs every 24 hours, and move the old logs to our reporting server.
We have a new requirement that means we need much faster turnaround on the IIS log information, say every minute for the sake of the discussion.
There exist Apache tools like Facebook's Scribe to scalably move Apache web server logs across a network of servers.
Are there any similar tools available for IIS?
Is this the right question to ask?
Should we be doing something different, if the timing requirements have changed so much?
I've looked at this question and the answers, and the only one that seems to come close is this one.
Pointers appreciated!
Snare is a little old but worth mentioning.
Snare Agent for IIS Servers
http://www.intersectalliance.com/projects/SnareIIS/index.html
I used this old version a long time ago and it worked well by forwarding/sending/replicating IIS logs over a network via syslog.
Today, they have a newer version called Snare Epilog
http://www.intersectalliance.com/projects/EpilogWindows/index.html
The code is also open source; perhaps you might find it useful.
You might also want to try ...
http://nxlog.org
http://www.syslogserver.com/syslogagent.html
I tend to write a .bat file in conjunction with Log Parser 2.2. The .bat file determines the appropriate file dates and pulls the corresponding logs from multiple IIS server log locations into a single local directory. Once the files are copied across, I run a Log Parser command to query the log content over all the log files and produce a single output file in .csv format. Finally, I run an SSIS job to import the new .csv file into a running log table which I can then query on an ongoing basis.
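For illustration, a trimmed-down version of that approach (the server names, paths, and selected fields are placeholders; Log Parser's IISW3C input format does the parsing):

robocopy \\web01\c$\inetpub\logs\LogFiles\W3SVC1 C:\IISLogs\web01 u_ex*.log
robocopy \\web02\c$\inetpub\logs\LogFiles\W3SVC1 C:\IISLogs\web02 u_ex*.log
LogParser.exe "SELECT date, time, cs-uri-stem, sc-status, time-taken INTO C:\IISLogs\report.csv FROM C:\IISLogs\web01\u_ex*.log, C:\IISLogs\web02\u_ex*.log" -i:IISW3C -o:CSV

The resulting report.csv is the single flat file that the SSIS job imports into the running log table.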