We are running a Spark Thrift Server with the configuration below.
The Thrift driver and the application master are separated by a firewall, and all the ports between the two are open.
The issue is that after 2 hours 11 minutes the application master dies because it is not able to connect to the Thrift driver.
So which ports are needed for communication between the Thrift driver and the application master?
I know Thrift is based on an RPC protocol, but is it TCP or UDP?
2 hours 11 minutes matches the value of net.ipv4.tcp_keepalive_time=7200, which is the default on Linux.
I could raise this value, but it would impact other applications as well.
So if I get a clear view of how the Thrift Server communicates, it would be easier for me to configure the firewall.
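For context, the kernel keepalive timer only applies to connections on which the application has enabled keepalive; in Java that is a per-socket option, roughly like this (a generic sketch with a placeholder host and port, not Spark's actual code):

import java.net.Socket;

public class KeepAliveSketch {
    public static void main(String[] args) throws Exception {
        // Placeholder endpoint; in our setup this would be the driver side of the connection.
        Socket socket = new Socket("driver-host", 12345);
        // Ask the OS to send TCP keepalive probes once this connection goes idle;
        // the first probe fires after net.ipv4.tcp_keepalive_time seconds (7200 by default).
        socket.setKeepAlive(true);
        System.out.println("keepalive enabled: " + socket.getKeepAlive());
        socket.close();
    }
}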
I have Jupyter running on a Debian server on port 8888.
I want to make it easier to connect to my server, so I have a Node.js
app running that forwards requests to jupyter.mydomain.com:80 on to
port 8888, and other domains to other ports.
This way I don't have to remember the ports of the different apps, and instead
can refer to the server by different DNS names. All the different names
are set up as aliases in the DNS server.
Jupyter itself works this way, but the WebSockets that report the results of
calculations do not, due to a security error.
Is there any setting that would get this to work?
Regards
Andreas
node-http-proxy is a node proxy that supports websockets. Your node app that's proxying requests must also proxy websocket connections.
JupyterHub is a multi-user server for spawning and authenticating single-user notebook servers, and it uses configurable-http-proxy, a subclass of node-http-proxy that adds some live configuration, to relay connections to notebooks. If you use NHP or CHP for your proxy app, the websockets should work.
From the node-http-proxy readme:
You can activate the websocket support for the proxy using ws:true
in the options.
var httpProxy = require('http-proxy');

//
// Create a proxy server for websockets
//
httpProxy.createServer({
  target: 'ws://localhost:9014',
  ws: true
}).listen(8014);
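In your setup that would mean pointing the jupyter.mydomain.com route at localhost:8888 with ws: true, so the notebook's websocket upgrade requests are proxied along with the normal HTTP traffic.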
I have learned the basic ZooKeeper concepts and did a sample project, but only on my local PC, i.e. a single computer.
I understand ZooKeeper, but I am still confused about how the client connects to the ZooKeeper server if they are not on the same computer. For instance, if we start a ZooKeeper server on my own computer, we can use something like connect 2181 to connect to it; that makes sense, since everything is on one machine and is associated at a lower layer. But what if the ZooKeeper server and the client are on two separate computers? How do we handle that?
I'm not sure what language you're using for the client, so this will have to be a generic answer.
The client and server communicate over TCP. This requires that the client simply know the server's host and port. In general, your ZooKeeper servers bind to some private network interface. For instance, your zoo.cfg configuration file might contain lines like the following:
clientPort=2181
server.1=123.456.789.1:2888:3888
The first portion of the server.1 entry, 123.456.789.1, is the host to which the ZooKeeper server will bind. As long as this host is not the loopback interface (i.e. localhost or 127.0.0.1), you should be able to connect to that host from another machine on the client port 2181. So, for instance, in Java I create a new ZkClient that points to that host and port:
ZkClient client = new ZkClient("123.456.789.1:2181");
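Equivalently, with the stock ZooKeeper client library the connection string works the same way; a minimal sketch (the IP and timeout are placeholders):

import org.apache.zookeeper.WatchedEvent;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooKeeper;

public class ZkConnectExample {
    public static void main(String[] args) throws Exception {
        // Connect to the remote ZooKeeper server over TCP: host + client port.
        ZooKeeper zk = new ZooKeeper("123.456.789.1:2181", 30000, new Watcher() {
            public void process(WatchedEvent event) {
                System.out.println("Event: " + event.getState());
            }
        });

        // List the children of the root znode to verify the connection works.
        System.out.println(zk.getChildren("/", false));
        zk.close();
    }
}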
I have a semantic web/RDF application which runs on Tomcat and is backed by MySQL. Apache HTTPD sits in front as a reverse proxy and the OS is Red Hat Linux 6. I am seeing a lot of connections to port 80 per IP. What I want to know is what determines whether the include for a CSS or .js file is served over the existing TCP socket, or whether a new one is created for each GET that occurs while the web page is being built. Is that exclusively the application itself, or is it the Apache web server, or the Linux kernel as well?
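For context, these are the Apache-side keep-alive directives I am aware of that govern whether a connection is reused for subsequent requests (shown with their default values, not my actual config):

KeepAlive On
MaxKeepAliveRequests 100
KeepAliveTimeout 5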
I was trying to monitor requests using the Eclipse TCP/IP Monitor.
But I see there are two ports in use: one is the application port [8080] and the other is the monitoring port [9833].
Can anybody tell me why there are two different ports?
When I launch the application it comes up at 9833 instead of 8080. Why the change?
Eclipse monitoring works by capturing all the requests sent to an application (a host and a port), dumping them on the Monitor console for you, and then forwarding the original request to the application.
The monitored application returns its responses to Eclipse (which, from the application's perspective, is the client), and Eclipse dumps them on the monitoring console too.
Now, how does Eclipse capture the requests sent to the monitored application in the first place? It simply runs a service that accepts these requests on behalf of the application and forwards them; this service also returns the application's responses to the original requester.
Based on the above, in the Eclipse TCP/IP Monitor screen, the local monitoring port is the port of the Eclipse service (you can use any available port number for it), and the other port is the monitored application's port number.
So, in your case, the application you are monitoring is running on port 8080 and the Eclipse service is using port 9833 (an arbitrary port that you can change).
Your application's port has not changed; it still runs on 8080 and you can hit that directly, but no data will be captured by the Eclipse TCP/IP Monitor unless you go through port 9833.
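As a quick way to see the difference, you can send the same request to both ports; only the one that goes through the monitor port shows up in the console (this assumes the monitor and the application are both on localhost, and the path is a placeholder):

import java.net.HttpURLConnection;
import java.net.URL;

public class MonitorPortCheck {
    public static void main(String[] args) throws Exception {
        // Goes through the Eclipse TCP/IP Monitor, so it shows up in the console.
        System.out.println(get("http://localhost:9833/"));
        // Hits the application directly; nothing is captured by the monitor.
        System.out.println(get("http://localhost:8080/"));
    }

    static int get(String url) throws Exception {
        HttpURLConnection conn = (HttpURLConnection) new URL(url).openConnection();
        conn.setRequestMethod("GET");
        int code = conn.getResponseCode();
        conn.disconnect();
        return code;
    }
}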
What is the best way to make a standalone Java server built using the J2SE Socket API highly available? Using an HTTP server would have been a good choice, especially for the built-in features, e.g. security, clustering, transactions, etc., but the server must be able to accept TCP/IP socket connections from Java and non-Java clients (mainly legacy). Tomcat does not accept non-HTTP TCP/IP requests, does it? Moreover, this post points out that using a servlet to implement a socket connection is not good practice. What would be a good approach?
After exploring online, this is what I have come up with. A standalone Java application can be made highly available by using a combination of the following:
2 VMs deployed with HAProxy and keepalived to form the highly available load-balancing layer.
keepalived keeps the load balancers in active-passive mode, and HAProxy forwards the requests to a cluster of backend socket-based Java server apps.
At least 2 VMs deployed with the custom socket-based Java server apps (a minimal sketch of such a server is shown after this list). The HAProxy servers will distribute the requests over these 2 VMs.
At least 2 Terracotta servers to share state between the Java server apps. Terracotta provides shared memory and helps the custom Java servers scale.
Use MySQL NDB Cluster for the database.
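For illustration, here is a minimal sketch of the kind of standalone socket-based server the HAProxy layer would forward to (the port and the echo-style handling are placeholders, not the actual legacy protocol):

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.ServerSocket;
import java.net.Socket;

public class LegacySocketServer {
    public static void main(String[] args) throws Exception {
        // Each backend VM runs one of these; HAProxy (in TCP mode) balances across them.
        try (ServerSocket server = new ServerSocket(9090)) {
            while (true) {
                Socket client = server.accept();
                new Thread(() -> handle(client)).start();
            }
        }
    }

    private static void handle(Socket client) {
        try (BufferedReader in = new BufferedReader(new InputStreamReader(client.getInputStream()));
             PrintWriter out = new PrintWriter(client.getOutputStream(), true)) {
            String line;
            while ((line = in.readLine()) != null) {
                out.println("ack: " + line); // placeholder for the real legacy protocol
            }
        } catch (Exception e) {
            // log and drop the connection
        }
    }
}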
Any suggestions?