How to display "Application is down for upgrade" message when redeploying Glassfish application? - deployment

I tried to access a web application while it was in the process of redeploying or reloading, and I just got a 404 error. This is likely to result in time-wasting helpdesk calls if a user happens to see it. How can I replace the 404 message with something more helpful, like "This application is being upgraded - check back in a minute or two"?

You may want to consider looking at the Application Versioning feature to "pre-deploy" an application and minimize the impact.
Deploy your app:
$ asadmin deploy myapp.war
Deploy version2 in "disabled" mode, meaning the old version is still active:
$ asadmin deploy --enabled=false --name myapp:version2 myapp.war
(version2 is an arbitrary name)
When ready to activate version2:
$ asadmin enable myapp:version2
The nice thing about this approach is that if you run into issues with version2, you can always fall back to the original version:
$ asadmin enable myapp
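If you want to check which versions are deployed and which one is currently active, asadmin can report that as well; a small sketch, using the same example names as above:
# list deployed applications (all versions show up here)
$ asadmin list-applications
# show whether a particular version is currently enabled
$ asadmin show-component-status myapp:version2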

I normally deploy my webapps behind an Apache proxy. When the appserver goes down, Apache returns a 503 response.
This can be customised with an alternative "I'm sorry, we're doing maintenance" message.
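As a rough sketch of that setup (the backend address and file paths below are placeholders, not taken from the question), the Apache side could look like this:
# don't proxy the maintenance page itself
ProxyPass        /maintenance.html !
# everything else is forwarded to the appserver
ProxyPass        /  http://localhost:8080/
ProxyPassReverse /  http://localhost:8080/
# when the backend is unreachable Apache answers 503; serve a friendly page instead
ErrorDocument 503 /maintenance.html
Alias /maintenance.html /var/www/maintenance.html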

You can also customize the standard response codes (403, 404, etc.) in the server configuration. The simplest change is to alter the message text, which isn't as elegant as what you are looking for. However, there will always be a point where the environment returns 404, 503, etc., so you might consider adding this in addition to the "behind the proxy" answer provided by Mark O'Connor.
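If I remember correctly, GlassFish exposes this through a send-error property on the virtual server; treat the following as a best-effort sketch to verify against the admin guide for your version (the dotted path and file location are assumptions):
# map 404 responses on the default virtual server to a static page (syntax to be double-checked)
$ asadmin set server-config.http-service.virtual-server.server.property.send-error_1="code=404 path=/var/www/maintenance.html reason=Unavailable"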

Related

Nginx Proxy Manager redirects hosted website to 502 Bad Gateway

I have a website running and I use Nginx Proxy Manager to redirect traffic to it. However, as soon as I hit my website I get a 502 Bad Gateway error.
Does anyone have a clue what is happening here?
Finally, I came to the following conclusion. From my experience it can mean one of two things:
Either your website/docker container is not running,
Or Nginx cannot find an index.html file at the web root of 'example.com'.
However, hostinger.com points out the following:
Unresolved domain name
Server overload
Browser issues
Home-network equipment error
Firewall blocks
So make sure that index.html is present for your website, and troubleshoot your container until you are sure it has no exceptions or errors and runs fine. Try using something like 'docker-compose logs' from the directory where docker-compose.yml is located (the container still has to exist for this to show anything).
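As a quick, hedged checklist (the service name and port below are placeholders), something like this can confirm whether the container behind the proxy is actually up and serving:
# is the container up?
docker-compose ps
# look for exceptions or crash loops in the recent output
docker-compose logs --tail=100 your_service_name
# can the app be reached directly, bypassing Nginx Proxy Manager?
curl -I http://localhost:3000/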

Show maintenance page during Wildfly startup

I have a WildFly installation which takes some time during startup due to the count and size of the deployments. So I would like to show a maintenance page until the full application is ready.
In one of the previous WildFly versions I used the default-web-module configuration option and registered a small WAR file, which was visible right away and was replaced as soon as the big application was available.
Unfortunately this is no longer possible with WildFly 22; instead, an exception is thrown as soon as the real root application is deployed:
org.jboss.msc.service.DuplicateServiceException: Service jboss.undertow.deployment.default-server.default-host./.UndertowDeploymentInfoService is already registered
I know that I could put a small web server (nginx or similar) in front of WildFly to return my maintenance page as long as WildFly returns a 503 error. The only thing which prevents that is the fact that the maintenance page still contains some logic which I would need to emulate in nginx.
Is there any other option which ensures that my maintenance page is delivered immediately while the other apps are still starting?
If I understand your question correctly, you want to show an error page for status 503 when the server is down, or for 404 while the server is replacing deployments.
But since your environment doesn't have a web server in front of the application server, we only need to consider the 404 situation.
503 is what a web server returns when the application server behind it is down, hence 503 Service Unavailable.
First: Console -> Configuration tab
Head over to the WildFly management console. Depending on which mode you are running (domain or standalone), the Configuration tab will look slightly different.
Second: Configuration -> Web -> Filters
If you are using domain mode, choose the profile you are using.
Then head over to Subsystem -> Web (Undertow) -> Filters -> click 'View'.
I used the 'full' profile on my local machine in domain mode, so this is what my console looks like.
(screenshot: path to the Filters view)
Third: Choose the Error Page tab inside Filters
Set code and path, where code is the status code for which you want the static page to be shown, and path is the file location of your static page.
(screenshot: setting code and path on the error-page filter)
See the WildFly documentation for the error-page settings.
Fourth: Configuration tab -> Web (Undertow) -> Server -> default-server (or whichever server you use)
(screenshot: navigating to the server)
Fifth: Choose the Hosts tab inside Servers
Choose the 'Hosts' tab and click the add-filter button.
Select the filter we just created in step three and set the predicate to true.
I'm not sure what the effect is if you don't set it to true or leave it empty, since it isn't a required field.
Last: Restart the server so that your configuration takes effect. A hedged jboss-cli equivalent of steps three to five is sketched below.
Now you should be able to see your static page when you undeploy or re-deploy your application.
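If you prefer the CLI over the console, a hedged jboss-cli equivalent of steps three to five would look roughly like this (the filter name and file path are placeholders):
# step three: create the error-page filter
/subsystem=undertow/configuration=filter/error-page=maintenance-404:add(code=404, path="/opt/wildfly/maintenance/index.html")
# step five: reference it from the default host, with the predicate set to true
/subsystem=undertow/server=default-server/host=default-host/filter-ref=maintenance-404:add(predicate=true)
# reload so the change takes effect
reload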
Sorry for the rough answer format; I haven't answered any questions before.
** You can also configure your standalone.xml or domain.xml to get the same result, along the lines of the sketch that follows.
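A sketch of what the corresponding section inside the undertow subsystem of standalone.xml/domain.xml might look like (the subsystem namespace version, filter name and path depend on your installation):
<!-- inside the undertow subsystem -->
<server name="default-server">
    <host name="default-host" alias="localhost">
        <!-- apply the error-page filter unconditionally -->
        <filter-ref name="maintenance-404" predicate="true"/>
    </host>
</server>
<filters>
    <!-- serve a static page whenever a 404 would otherwise be returned -->
    <error-page name="maintenance-404" code="404" path="/opt/wildfly/maintenance/index.html"/>
</filters>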

Setting up load-balancer based on authenticated users

I'm trying to set up a load balancer that would redirect certain users to a specific version of an application. So far I have been using a Blue/Green deployment strategy (once I made a new version of an app I created a new environment and redirected traffic there). Now I would like to change this approach. I want to be able to specify users (more experienced, or whatever) that would see the new site after authentication, while the others would still be redirected to the old one. If something goes wrong with the new version, all users should see the old version. Currently my load balancing is done in Apache and authentication is done at the application level. So is this even possible? I know I could hardcode it in the application, but what if there is a bug in the new feature and new users are still being redirected there? I would then need to stop the application for all users and roll back to the old version, and that's bad I guess. I was thinking about using an external CAS, however I didn't find any information on whether that would make it possible. So I would like to ask: is it possible, and are there any tools (maybe some Apache plugin) for that purpose?
Here's a working solution with nginx
create conf.d/balancer.conf
put the code into it (see below)
docker run -p8080:8080 -v ~/your_path/conf.d:/etc/nginx/conf.d openresty/openresty:alpine
use curl to play with it
balancer.conf:
map $cookie_is_special_user $upstream {
    default http://example.com;
    ~^1$    http://scooterlabs.com/echo;
}

server {
    listen 8080;
    resolver 8.8.8.8;

    location / {
        proxy_pass $upstream;
    }
}
testing
curl --cookie "is_special_user=1" http://localhost:8080
returns the contents of scooterlabs.com/echo, which dumps the request it receives
curl http://localhost:8080
returns the contents of example.com
explanation
the idea is that the backend app sets a special cookie for the users you treat as special, after they authenticate as usual
of course this only works if both app versions are served on the same domain, so that the cookie is visible to both versions
after that you balance them to the desired server depending on the cookie value
you can easily disable such routing by tweaking your nginx config file
with this approach you can come up with even more complex scenarios, like setting random cookie values in the range 1-10 and then gradually switching over some of the special users in your config file, i.e. start with those having value 1, then 1-2, etc.
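Since the question mentions that the load balancing currently lives in Apache, the same cookie-based routing can be sketched there with mod_rewrite's proxy flag (the backend hostnames and the cookie name are placeholders, not from the question):
# requires mod_rewrite plus mod_proxy/mod_proxy_http
RewriteEngine On
# users carrying the is_special_user=1 cookie go to the new version
RewriteCond %{HTTP_COOKIE} (^|;\s*)is_special_user=1($|;)
RewriteRule ^/(.*)$ http://new-backend.internal:8080/$1 [P,L]
# everyone else stays on the old version
RewriteRule ^/(.*)$ http://old-backend.internal:8080/$1 [P,L]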

Google cloud datalab deployment unsuccessful - sort of

This is a different scenario from other questions on this topic. My deployment almost succeeded, and I can see the following lines at the end of my log:
[datalab].../#015Updating module [datalab]...done.
Jul 25 16:22:36 datalab-deploy-main-20160725-16-19-55 startupscript: Deployed module [datalab] to [https://main-dot-datalab-dot-.appspot.com]
Jul 25 16:22:36 datalab-deploy-main-20160725-16-19-55 startupscript: Step deploy datalab module succeeded.
Jul 25 16:22:36 datalab-deploy-main-20160725-16-19-55 startupscript: Deleting VM instance...
The landing page keeps showing a wait bar indicating the deployment is still in progress. I have tried deploying several times in the last couple of days.
About the additions described on the landing page:
An App Engine "datalab" module is added. - When I click on the pop-out URL "https://datalab-dot-.appspot.com/" it shows an error page with "404 page not found".
A "datalab" Compute Engine network is added. - Under "Compute Engine > Operations" I can see a create-instance operation for the datalab deployment with my ID and a delete-instance operation with the *******-ompute#developer.gserviceaccount.com ID. I'm not sure what that means.
A Datalab branch is added to the git repo - Yes, and with all the components.
I think the deployment is partially successful. When I visit the landing page again, the only option I see is to deploy Datalab again, not to start it. Can someone spot the problem? Appreciate the help.
I read the other posts on this topic and tried to verify my deployment using "https://console.developers.google.com/apis/api/source/overview?project=", and I get the following message:
The API doesn't exist or you don't have permission to access it
You can try looking at the App Engine dashboard here, to verify that there is a "datalab" service deployed.
If that is missing, then you need to redeploy again (or switch to the new locally-run version).
If that is present, then you should also be able to see a "datalab" network here, and a VM instance named something like "gae-datalab-main-..." here. If either of those are missing, then try going back to the App Engine console, deleting the "datalab" service, and redeploying.
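If you prefer the command line over the console, I believe the same checks can be done with gcloud (the project ID is a placeholder):
# a "datalab" service should be listed once the deployment has finished
gcloud app services list --project=YOUR_PROJECT_ID
# look for a VM instance named something like gae-datalab-main-...
gcloud compute instances list --project=YOUR_PROJECT_ID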

Stop framework from using HTTP Proxy

After nearly drowning in tears of frustration I have to ask you a question.
My Play (2.0.3, Scala) application is consuming a WSDL, which works perfectly fine if I run the dev version of my webservice on localhost, which makes the WSDL URL something like http://localhost:8080/Service/Service?wsdl.
When I try to consume the WSDL from the remote test server, with a URL like http://testserver.company.net:8084/Service/Service?wsdl, I get:
[WebServiceException: Failed to access the WSDL at: http://testserver.company.net:8084/Service/Service?wsdl. It failed with: Got Server returned HTTP response code: 502 for URL: http://testserver.company.net:8084/Service/Service?wsdl while opening stream from http://testserver.company.net:8084/Service/Service?wsdl.]
My company uses an HTTP proxy for internet access, which is the reason for the 502 error. So I want Play to stop using the proxy.
So far I have tried (all together):
deleted the proxy from Internet Explorer
set _JAVA_OPTIONS=-Dhttp.noProxyHosts="testserver.company.net"
set JAVA_OPTIONS=-Dhttp.noProxyHosts="testserver.company.net"
play run -Dhttp.noProxyHosts="testserver.company.net"
None of this worked. Any ideas? How can I stop play from using the HttpProxy?
EDIT:
I found out it has something to do with the Java web services API / JAX-WS libraries.
Any ideas?
EDIT 2012-10-17:
It seems to depend on the system proxy settings. I still don't know why it didn't work that day, although I deleted the whole proxy from IE and restarted everything. Is there any way to make my Play app independent from the system settings?
Try:
play -Dhttp.noProxyHosts="testserver.company.net" run
I noticed a typo in your property: the correct property is http.nonProxyHosts, so add an extra n after no.
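Putting both fixes together: the standard Java http.nonProxyHosts property also accepts several patterns separated by |, so something like this should work (the extra hostnames are just examples):
# -D before run, with the corrected property name; wildcards and multiple hosts are allowed
play -Dhttp.nonProxyHosts="testserver.company.net|*.company.net|localhost" run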