Scala Play app always times out on first request - scala

After starting my app for the first time, the first request always times out. If I tail the logs when this request comes in, Play appears to be doing some kind of required post-compilation work: resolving the same list of dependencies that were resolved on startup and initiating the database connection. Is there any way to force this extra work to happen on startup?

When you run in prod mode this will not happen.
Even if you're not building for production yet, you can run a test instance.
You will need to be sure to set an application secret.
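For example (a sketch: "my-app" stands for whatever name sbt stages your binary under, and the secret key property assumes Play 2.6+; the property name differs in older versions):

# Build once, then start the staged binary in prod mode:
sbt stage
./target/universal/stage/bin/my-app -Dplay.http.secret.key='<your secret>'

# Or spin up a throwaway prod-mode test instance straight from sbt:
sbt testProd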

Robot Framework: Is there a way of checking the report.html even though the run paused?

Situation: Visual Studio Code (Browser library) runs a couple of .robot files (manually started).
Then it pauses because of an error...
At that point the process breaks and there is no final report.html.
If you stop the run, it doesn't generate a report.html, which is not what you want. You actually want the results up to that point (or, better described: you still want the usual output.xml, log.html and report.html files).
You should be able to generate log.html and report.html using the rebot command; however, you need output.xml for this. output.xml is created when you run the tests. If you break the run, you will probably not have all the resources you need.
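For example, assuming Robot Framework is on your PATH and an output.xml survived the aborted run:

rebot --log log.html --report report.html output.xml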
I would suggest assigning a test timeout to the test that causes the pause. When the timeout is reached, the test will be stopped automatically and you should get all the reports. You can also set it globally for all tests, e.g.:
*** Settings ***
Test Timeout    2 minutes

Query builds failed because of timeout - Azure DevOps Server

In our dev environment we have lots of repos, lots of builds and lots of build servers, and most of the time things work just like they should. However, we are seeing an increase in builds that fail because of timeouts.
These timeouts are not happening because we are getting close to the limit, but because something "gets stuck/blocked" in the pipeline and stays on that step until the timeout kills the build.
To better debug why that happens, we need to be able to query which builds fail because of this timeout, so we can, for instance, see whether it is a particular build server or agent that has this problem.
We cannot find anything in the API that would give us the timeout error, but we can see that the UI is able to deduce it somehow.
So far we have narrowed it down to querying all builds with completed status (through this API), but we get no completion reason, and build times are never exactly the same as the timeout in the build definition, so "guessing" it from the execution plan will also be a bit shaky.
How can we filter our builds down to only the builds that have timed out?
We can use the below API to get details for a build.
Note: do not add the timelineId parameter; we want to list all the timeline records.
GET https://dev.azure.com/{organization}/{project}/_apis/build/builds/{buildId}/timeline?api-version=6.1-preview.2
If the build is canceled because of the timeout setting, we can get the message: "The job running on agent Hosted Agent ran longer than the maximum time of xxx minutes. For more information, see https://go.microsoft.com/fwlink/?linkid=2077134".
By the way, we can use the Builds - List API to filter all failed builds: if a build is canceled due to a timeout setting, its result is failed instead of canceled.
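To automate the check, you can loop over the build IDs from Builds - List and grep each build's timeline for that message. A rough sketch with curl and jq; $AZURE_DEVOPS_PAT and the {organization}/{project}/{buildId} placeholders are assumptions:

# Fetch one build's timeline and print any timeout messages from its records.
curl -s -u ":$AZURE_DEVOPS_PAT" \
  "https://dev.azure.com/{organization}/{project}/_apis/build/builds/{buildId}/timeline?api-version=6.1-preview.2" \
  | jq '.records[].issues[]?.message | select(test("ran longer than the maximum time"))'

If the filter prints anything, that build was killed by its timeout.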

OPA5: How to make sure that every test starts in a fresh environment?

I have to refactor a module of OPA5 tests, because most of the test cases currently fail.
While trying to find the reason for the failures, I found out that most of the tests aren't actually erroneous.
When you run them in isolation they work just fine. The problem occurs when you run them as a module, i.e. as a group, one test after the other.
The problem occurs when one test fails. Normally you execute iTeardownMyAppFrame() as the very last method of the test to remove the used iframe, so that the following test finds an untouched environment in which it can run.
Now when a test fails at some line, the test stops and the following invocations aren't executed. iTeardownMyAppFrame() is never executed, and the following test starts in the environment of the previous (failed) test, so it might fail too because the environment isn't as expected.
Is there a way to make sure that every test starts in a new iframe? Something like "try-finally", with iTeardownMyAppFrame in the finally block, so that it is executed in any case, no matter whether the test passed or failed.
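For reference, a minimal sketch of the journey structure described above (onTheMainPage and its methods are made-up page-object names), showing why the teardown is skipped on failure:

// opaTest comes from sap/ui/test/opaQunit
opaTest("Should show the result", function (Given, When, Then) {
    Given.iStartMyAppInAFrame("../index.html"); // fresh iframe for this test

    When.onTheMainPage.iPressTheButton();

    Then.onTheMainPage.iShouldSeeTheResult();   // if this check fails, OPA stops here...
    Then.iTeardownMyAppFrame();                 // ...so the teardown never runs and the
                                                // next test inherits the stale iframe
});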

ServiceFabric not closing CommunicationListener on update?

We have a custom ICommunicationListener implementation, and in our tests we found out that during an application update the listener's CloseAsync method is not invoked. When we close the app in VS, it is invoked. This is important for us because we suspect it is causing our troubles during updates of the application: the update sometimes fails, so we have to run a script that deletes the service and type from the cluster, after which the installation does not fail. The rollback on a failed installation always happened on a node where a service with the ICommunicationListener existed.
Is this a known problem? How can we force closing on update?

Rack: Bundler::GemNotFound errors during `bundle install --deployment`

So I have a few machines in production that are running a Sinatra app on top of Rack. Usually everything is hunky dory until Puppet - which we're using to sync changes to our servers - notices that the Gemfile.lock for the project has changed, and as a result, needs to issue the bundle install --binstubs --deployment command so we get the new gems. When this happens, ANY http request will cause a 500 error when it calls into Bundler to require our gems, because the new gems haven't been installed yet.
We usually have at least one Rack process hanging around, because another process periodically makes an HTTP request to ensure the server is alive, but when this happens there are no Rack processes alive. The PassengerMinInstances directive seems like it might help if the problem were with new instances, but given those periodic keep-alive requests, there should always be at least one Rack process around to handle them.
I should probably note that Puppet doesn't actually restart Rack (by touching the restart.txt file) until after the bundle install has completed, so it makes no sense that our Rack processes would go away at this point. Has anyone encountered anything like this? Is there some Rack option I've overlooked to not reload the entire environment on every request?
I know this doesn't directly answer your question, but what I've done in the past to get around this kind of thing is to deploy the app into version-numbered directories, with a soft link pointing to the current one and an (Nginx) proxy server routing requests through the link. At the end of the deployment, the deploy script points the link at the new version.
It worked well enough for me, and if things really go wrong you can always manually repoint the link to the previous version.
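A sketch of the idea (all paths are hypothetical):

# Each deploy gets its own directory; "current" is what the proxy serves.
/srv/app/releases/42/
/srv/app/current -> /srv/app/releases/42

# Deploy script: install gems into the new release, then atomically repoint
# the link and tell Passenger to restart.
cd /srv/app/releases/43 && bundle install --binstubs --deployment
ln -sfn /srv/app/releases/43 /srv/app/current
touch /srv/app/current/tmp/restart.txt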
For posterity's sake, I'll answer this question. As part of the deployment, all of the files were touched with chown -R, which updates the ctime (but not the mtime) of each file. There is also an interesting bug/feature in Passenger: it restarts the server whenever the mtime or ctime of the app's tmp/restart.txt file changes.
Solution: stop chowning the directory during a deployment.