How to define standard port numbers for different apps - deployment

We have several enterprise applications to deploy on a WebLogic server. As you know, for each domain we can define a specific port, deploy an application on it, and clients can then access the server application through that port. My question is about port number standards.
Is there any standard for assigning server port numbers to different applications? If not, what would you suggest? Is a simple counting scheme (for example) good enough, or is there a better approach?
Thanks for your replies

You can deploy your applications on different managed servers. Create one managed server per application under your WebLogic domain; during the configuration of each managed server you will be asked for a port number. Once the managed servers are set up, your applications can be deployed on them and will be reachable on the ports you specified earlier. More information about WebLogic managed servers can be found in the WebLogic administration guide.
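For illustration, each managed server then shows up as its own <server> entry in the domain's config.xml, with its own listen port (the server names and ports below are just examples):

    <!-- fragment of DOMAIN_HOME/config/config.xml; one <server> element per managed server -->
    <server>
      <name>app1_managed</name>
      <listen-port>7003</listen-port>
    </server>
    <server>
      <name>app2_managed</name>
      <listen-port>7004</listen-port>
    </server>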

Related

How can I create a Managed Server on a different machine?

I am trying to create a second managed server in my OSB domain, but with the setup described below I have not been able to do it.
I have installed the following packages on Machine 2:
fmw_12.2.1.4.0_infrastructure.jar
fmw_12.2.1.4.0_osb.jar
During domain creation I selected a per-domain Node Manager.
Since my domain is installed on a shared file system, what steps should I follow to create a managed server on Machine 1 for my OSB domain?
Best Regards.

Web sockets and REST API in the same Tomcat-based application

I have read up on WebSockets, which provide full-duplex connections over TCP and can be used where long polling was previously used to push live updates from the server to the client. I now have a Tomcat-based application that serves multiple REST web service responses, and I would like a couple of APIs to be implemented with WebSockets, for example to render a dashboard with the latest data while multiple users work on it concurrently. Is that possible? My concern is that even if the HTTP connection is upgraded to the WebSocket protocol, wouldn't the WebSocket require a separate port from the default Tomcat port 8080? In that case, should I host the WebSocket endpoints separately from the Tomcat application that is already running? Please correct me if any of the above is wrong.
A couple of months ago I wrote a small Spring Boot webapp with embedded Tomcat that provides both REST endpoints and WebSocket support over the same port. So yes, that works. If you want to take a look: https://github.com/tommybrettschneider/pinterest-boot
Besides that, this post should also clarify things:
Shall I use WebSocket on ports other than 80?
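As an illustration, in a plain Tomcat webapp (Tomcat 7.0.47+ implements JSR-356) a WebSocket endpoint is served by the same connector, and therefore the same port 8080, as the servlets/REST resources in the same WAR. A minimal sketch, with a made-up path and class name:

    import javax.websocket.OnMessage;
    import javax.websocket.server.ServerEndpoint;

    // Picked up automatically by Tomcat's JSR-356 implementation when the WAR is deployed;
    // no extra port or separate server is needed.
    @ServerEndpoint("/ws/dashboard")
    public class DashboardSocket {

        @OnMessage
        public String onMessage(String message) {
            // Reply on the same full-duplex connection; a real dashboard would push
            // updates to the open sessions instead of just echoing.
            return "echo: " + message;
        }
    }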

Deploy application to wildfly on a certain port

By default, applications in WildFly are deployed to localhost:8080/app. How can I deploy an application on a dedicated port, i.e. reach it at localhost:8282 without the application name at the end?
I need to change the port for a specific application, not the default port.
I have not tried this, but AFAICT it should be possible to:
Run a single WildFly instance listening on multiple HTTP ports. This is, in theory at least, possible (ref: https://developer.jboss.org/thread/233414?start=0&tstart=0); see the CLI sketch after this answer.
Configure the Undertow subsystem as a reverse proxy and proxy your app to the other port/location (ref: http://www.mastertheboss.com/jboss-server/wildfly-8/configuring-a-reverse-proxy-with-undertow). That said, I have never used Undertow as a reverse proxy, so I cannot say whether this really works.
Once you have done this, you have effectively turned your WildFly instance into an overly complex application server and reverse proxy in one. Ultimately, the app in question would still be reachable on both ports, but the proxy redirects the traffic the way you would like.
The same proxy configuration in Apache (ref: https://httpd.apache.org/docs/current/mod/mod_proxy.html#forwardreverse or https://www.leaseweb.com/labs/2014/12/tutorial-apache-2-4-transparent-reverse-proxy/) or NGINX (ref: https://www.nginx.com/resources/admin-guide/reverse-proxy/) would IMHO be less complex and is better tested in countless production scenarios.
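For the first option, a jboss-cli sketch along these lines should add an extra HTTP listener on 8282 (the binding and listener names are made up, and I have not verified this against a running instance):

    # add a socket binding for the extra port
    /socket-binding-group=standard-sockets/socket-binding=app-http:add(port=8282)
    # add an Undertow listener that uses it
    /subsystem=undertow/server=default-server/http-listener=app-http:add(socket-binding=app-http)
    reload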

Application Server and Web Server on Two Different Machines

Today I'm hosting a Laravel v4 web application on a MacMini. Why a Mac? Because I created the application logic in Objective-C (leveraging my experience with iOS dev). Whether or not this was the right choice isn't the point of the question.
What I'm interested in knowing is how I can separate my web server and application server. For instance, if I put my web server on Linode (or wherever), how do I go about communicating back and forth between the web server and the application server? Is there some resource I can consult to understand how to do this?
Assumptions
Here are some assumptions I'm making:
I'm guessing Laravel and the Objective-C application are part of the same "system", so I'm going to treat this as if you need a web server to send requests to a PHP application.
The Linode server will be a web server which sends requests to the PHP application (Laravel).
Hosting PHP Applications
There are three moving parts:
The web server (Apache, Nginx)
The application gateway (PHP-FPM)
The application
The gateway and the code must live on the same computer/server. The web server can live on a separate computer/server.
This means you'll need your Macintosh to run PHP-FPM, which can then listen for remote connections and hand them to the PHP application.
Macintosh
Install PHP-FPM on your Mac and make sure it can listen for remote network connections. This is usually done with the listen directive in the www.conf file; configure it to listen on the machine's network interface (whatever IP address the computer is assigned) rather than only on localhost or a local socket.
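For example, the relevant lines in www.conf might look like this (the IP address is a placeholder for your Linode's address, and 9000 is just the conventional PHP-FPM port):

    ; www.conf (pool configuration) on the Mac
    listen = 0.0.0.0:9000                 ; accept FastCGI connections over the network, not a local socket
    listen.allowed_clients = 203.0.113.10 ; optional: only accept connections from the Linode web server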
Linode
Install Nginx or Apache and have it proxy FastCGI requests to your Macintosh at the IP address you configured PHP-FPM to listen on in the step above.
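An illustrative Nginx server block for that, assuming a Laravel-style front controller; the IP and paths are placeholders for your own values:

    server {
        listen 80;
        root /var/www/app/public;            # static assets, if any, served from the web server

        location / {
            try_files $uri /index.php?$query_string;
        }

        location ~ \.php$ {
            include fastcgi_params;
            fastcgi_pass 198.51.100.20:9000; # the Mac's IP and PHP-FPM port
            # the script path must be the path as it exists on the Mac, where PHP-FPM runs
            fastcgi_param SCRIPT_FILENAME /path/to/laravel/public$fastcgi_script_name;
        }
    }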
Firewalls
You may need to ensure the firewalls at both ends allow incoming/outgoing connections on the port being used, so the two machines can talk to each other.

Highly available standalone java server built using J2SE

What is the best way to make a standalone Java server built with the J2SE Socket API highly available? Using an HTTP server would have been a good choice, especially for the built-in features (security, clustering, transactions, etc.), but the server must be able to accept TCP/IP socket connections from Java and non-Java clients (mainly legacy). Tomcat does not accept non-HTTP TCP/IP requests, does it? Moreover, this post points out that using a servlet to implement a socket connection is not good practice. What would be a good approach?
After exploring online, this is what I have come up with. A standalone Java application can be made highly available using a combination of the following:
Two VMs deployed with HAProxy and keepalived to form the highly available load-balancing layer (a minimal HAProxy sketch is shown below).
Keepalived keeps the load balancers in active-passive mode, and HAProxy forwards requests to a cluster of backend socket-based Java server apps.
At least two VMs deployed with the custom socket-based Java server apps; the HAProxy servers distribute requests across these VMs.
At least two Terracotta servers shared by the Java server apps; Terracotta provides the shared in-memory state and helps the custom Java servers scale.
MySQL NDB Cluster for the database.
Any suggestions?
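To make the load-balancing layer concrete, here is a minimal HAProxy TCP-mode sketch (IPs, port, and names are placeholders); keepalived would then float a single virtual IP between the two HAProxy VMs so clients always connect to whichever one is active:

    # haproxy.cfg fragment -- plain TCP pass-through, so non-HTTP legacy clients work unchanged
    frontend legacy_clients
        bind *:9400
        mode tcp
        default_backend java_servers

    backend java_servers
        mode tcp
        balance roundrobin
        server app1 10.0.0.11:9400 check   # VM 1 running the socket-based Java server
        server app2 10.0.0.12:9400 check   # VM 2 running the socket-based Java server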