Detecting Port Utilized by WebBrowser - c#-3.0

When the WebBrowser control issues an HTTP request to a URL, it is assigned a port, which is used for the lifetime of that connection.
Is there a way to find out which port is being used for each connection the WebBrowser control establishes?

Every request is potentially using a different port. Since most requests are resolved in a couple of seconds and then closed, having the port information on the client isn't going to be very helpful.
If you're interested from a historical perspective, you can add the port number to the logs that many web servers generate.
In order to view this information live, you can use a tool such as TCPView.
Now for the real question. What are you trying to do? There may be an easier way.

You can run the following in the background:
netstat -bn
and parse its output to get information about your application (ports, IPs, etc.).
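A minimal sketch of that parsing approach, written in C for consistency with the other examples in this collection (from the question's C# context, System.Diagnostics.Process.Start would launch netstat the same way). It uses "netstat -ano" rather than -bn, since -o reports the owning PID without requiring elevation; the PID value is a placeholder, and the line matching is deliberately naive:

/* Minimal sketch: list the TCP connections owned by one process by
 * parsing "netstat -ano" output. The PID below is a placeholder, and
 * the suffix match is intentionally simplistic. The Windows CRT
 * provides _popen/_pclose (declared in stdio.h). */
#include <stdio.h>
#include <string.h>

int main(void)
{
    const char *pid = "1234";                 /* hypothetical process ID */
    FILE *out = _popen("netstat -ano", "r");
    if (!out) { perror("_popen"); return 1; }

    char line[512];
    while (fgets(line, sizeof line, out)) {
        /* netstat -ano ends each connection line with the owning PID */
        char *last = strrchr(line, ' ');
        if (strstr(line, "TCP") && last && strncmp(last + 1, pid, strlen(pid)) == 0)
            fputs(line, stdout);              /* proto, local/remote endpoint, state, PID */
    }
    _pclose(out);
    return 0;
}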

Related

Why do outgoing sockets need port numbers?

I understand why a server would need sockets for incoming data, but I do not understand why a socket connecting to another computer needs a source port.
While others have mentioned the exact reason why, let me illustrate the point by giving you an example:
Say you want to ssh to your server. OK, you ssh in and do some stuff. Then you tail a log file. So now you don't have access to the console anymore. No problem you think, I'll ssh again...
With only one port number, if you ssh again, that second connection will be a mirror of the first, since the server has no way to know there are two connections (no source port number to tell them apart), so you're out of luck.
With two port numbers you can ssh a second time to get a second console.
Say you browse a website, say Stack Overflow. You're reading a question but you think you've seen it before, so you open a new tab in your browser to Stack Overflow to do a search.
With only one port number, the server has no way of knowing which packet belongs to which socket on the client, so opening a second page would not be possible (or worse, both pages would receive data mixed together).
With two port numbers the server will see two different connections from the client and send the correct data to the correct tab.
So you need port numbers on both ends: the client uses them to tell which data is coming from which server, and the server uses them to tell which data belongs to which socket on the client.
A TCP connection is defined in terms of the source and destination IP addresses and port numbers.
Otherwise, for example, you could never distinguish between two connections to the same server from the same client host.
Check out this link:
http://compnetworking.about.com/od/basiccomputerarchitecture/g/computer-ports.htm
Ultimately, they allow different applications and services to share the same networking resources. For example, your browser talks to web servers on port 80, while your email application talks to a mail server on port 25; on your own machine, each of them uses its own ephemeral source port.
TCP communication is two-way. A segment being sent from the server, even if it is in response to a segment from the client, is an incoming segment as seen from the client. If a client opens multiple connections to the same port on the server (such as when you load multiple StackOverflow pages at once), both the server and the client need to be able to tell the TCP segments from the different connections apart; this is done by looking at the combination of source port and destination port.
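To make that combination of source port and destination port concrete, here is a small sketch in C using POSIX sockets. It opens two connections to the same destination address and port and prints the ephemeral source port the kernel assigns to each; the hard-coded address is just an illustrative stand-in for any reachable web server:

/* Sketch: open two TCP connections to the same server:port and print the
 * source port the OS picked for each. The source port is the only part
 * of the (src IP, src port, dst IP, dst port) tuple that differs between
 * the two connections. */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

static int connect_once(const struct sockaddr_in *dst)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0)
        return -1;
    if (connect(fd, (const struct sockaddr *)dst, sizeof *dst) < 0) {
        close(fd);
        return -1;
    }
    struct sockaddr_in local;
    socklen_t len = sizeof local;
    getsockname(fd, (struct sockaddr *)&local, &len);   /* which source port did the kernel pick? */
    printf("fd %d: source port %u\n", fd, (unsigned)ntohs(local.sin_port));
    return fd;
}

int main(void)
{
    struct sockaddr_in dst;
    memset(&dst, 0, sizeof dst);
    dst.sin_family = AF_INET;
    dst.sin_port = htons(80);
    inet_pton(AF_INET, "93.184.216.34", &dst.sin_addr); /* illustrative address only */

    int a = connect_once(&dst);
    int b = connect_once(&dst);   /* same destination tuple, different source port */
    if (a >= 0) close(a);
    if (b >= 0) close(b);
    return 0;
}

Running it shows two different source ports, which is exactly what lets both ends keep the two streams apart.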

Multiple service connections vs internal routing in MMO

The server consists of several services with which a user interacts: profiles, game logics, physics.
I heard that it's a bad practice to have multiple client connections to the same server.
I'm not sure whether I will use UDP or TCP.
The services are realtime; they should reply as fast as possible, so I don't want to introduce any additional rerouting unless there are really important reasons. So are there any reasons to reroute traffic through one external endpoint service to specific internal services in my case?
This seems to be multiple questions in one package. I will try to answer the ones I can identify as separate...
UDP vs TCP: You say "real-time", which usually means UDP is the right choice. However, that means having to deal with lost packets and possible re-ordering of packets. On the other hand, using UDP leaves a couple of possible delay-decreasing tricks open.
Multiple connections from a single client to a single server: This consumes resources (end-points, as it were) on both the client (probably ignorable) and on the server (possibly a problem, possibly ignorable). The advantage of using separate connections for separate concerns (profiles, physics, ...) is that when you need to split these onto separate servers (or server farms), you don't need to update the clients; they just need to connect to other end-points, using code that's already tested.
"Re-router" (or "load balancer") needed: Probably not going to be an issue initially. However, it will probably become an issue later. Depending on your overall design and server OS, using UDP may actually become an asset here. UDP packet arrives at the load balancer, dispatched to the right backend and that could then in theory send back a reply with the source IP of the load balancer.
An alternative would be to have a "session broker". The client makes an initial connection to a well-known endpoint and says "I am a client, tell me where my profile, physics, what-have-you servers are"; the broker considers the current load, possibly the location of the client, and anything else that makes sense, and the client then connects to the relevant backends on its own. The downside is that it's harder (not impossible, but harder) to silently migrate an ongoing session to a new backend; with a load balancer in the way, that can be done essentially transparently.
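As a rough illustration of that broker handshake, here is a sketch of the client side in C with POSIX sockets. The broker port (4000), the "WHERE service" request, and the one-line "host port" reply are all invented for this example; a real protocol would need proper framing and error handling:

/* Sketch: client side of a "session broker" handshake. Connect to the
 * well-known broker port, ask where a service lives, read back a
 * "host port\n" answer, then connect to that backend directly.
 * The wire format here is invented for illustration. */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

static int connect_to(const char *host, unsigned short port)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0)
        return -1;
    struct sockaddr_in addr;
    memset(&addr, 0, sizeof addr);
    addr.sin_family = AF_INET;
    addr.sin_port = htons(port);
    inet_pton(AF_INET, host, &addr.sin_addr);
    if (connect(fd, (struct sockaddr *)&addr, sizeof addr) < 0) {
        close(fd);
        return -1;
    }
    return fd;
}

int main(void)
{
    int broker = connect_to("127.0.0.1", 4000);   /* hypothetical well-known broker */
    if (broker < 0) { perror("broker"); return 1; }

    /* Ask the broker where the "physics" service currently lives. */
    const char req[] = "WHERE physics\n";
    write(broker, req, sizeof req - 1);

    char reply[128] = { 0 };
    read(broker, reply, sizeof reply - 1);        /* e.g. "10.0.0.7 5001\n" */
    close(broker);

    char host[64];
    unsigned port;
    if (sscanf(reply, "%63s %u", host, &port) != 2) return 1;

    int backend = connect_to(host, (unsigned short)port);
    if (backend < 0) { perror("backend"); return 1; }
    /* ... long-lived game traffic happens on 'backend' from here on ... */
    close(backend);
    return 0;
}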

Random sessions using Fiddler

Hi all,
I use Fiddler for developing and debugging Web apps, and I find that random stuff shows up in the list of "sessions" after I turn off Capture Traffic. It's perhaps a few items every ten or fifteen minutes. I know this is a really broad question, but is there some way to figure out why these things are showing up and what they are?
Thanks!
The Capture Traffic setting controls whether or not Fiddler is registered as the system's proxy server. Most clients (Internet Explorer, etc) will react to the system proxy setting at runtime, so that when you disable the Capture traffic setting, they'll stop sending traffic to Fiddler.
However, some clients (particularly .NET applications) do not react to proxy setting changes and always use whatever proxy was set when the client was started; they'll continue to send traffic to Fiddler until the client is restarted.
You can examine the Process column in Fiddler to see what client isn't properly reacting to changes in the system's proxy setting.
You likely have web pages open that periodically hit the server from within JavaScript (AJAX calls); Fiddler captures that traffic, and that's what you're seeing.

Confusion over Sockets and Ports

I am trying to write a program that will 'listen' to an application that is running on a port over TCP/IP.
When I point my browser to localhost:30003 , I get the output stream from the application printed to the screen. It would appear that the browser successfully 'listens' to the port.
What is happening here? Is my browser polling the application, or is the application pushing TCP data which the browser picks up?
I am not sure whether to get this data I need to create a client or server instance.
One of the best ways to find out what is actually happening is to fire up Wireshark and follow the tcp stream.
http://www.wireshark.org/
Alternatively, you can use something like tcpmon if you only care about the text and none of the networking details.
http://ws.apache.org/commons/tcpmon/download.cgi
Based on the limited information in your question, the most likely thing is that the browser makes the TCP connection, and you send back a malformed response. The browser assumes you are a broken site and does its best to adjust. If you aren't sending the correct HTTP header, it doesn't know what else to do, so it probably just puts the text on the screen.
The best way to know the details is with Wireshark or tcpmon.
Pointing the browser to localhost:30003 will cause it to open a connection to port 30003 on localhost and send the string "GET /" to request a web page from what it thinks is a web host. Whatever text is sent by your app upon receiving a connection is simply displayed by the web browser, as if it had received the contents of a text file on a web server.
When you write "localhost:30003" in your browser, a connection is established to some program that listens on port 30003 on your computer. The prefix in the URL (HTTP by default) determines the protocol used by server and client; in this case the browser is the client, connecting to your PC, the server.
If you want to do the same with your program you can set up a socket connection to your localhost using the same port 30003. Your program then becomes the client. Depending on the program (which you don't mention anything about) you may have more protocol options and would need to handle the protocol in your program.
An alternative is to use telnet to connect to your program but it depends on available protocols.
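If you want your program to take the client role the browser was playing, a minimal C sketch (POSIX sockets; port 30003 comes from the question) looks like this; it connects and prints whatever the application pushes until the connection closes:

/* Sketch: act as the client yourself instead of the browser. Connect to
 * 127.0.0.1:30003 and print whatever the application writes on the
 * connection. */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) { perror("socket"); return 1; }

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof addr);
    addr.sin_family = AF_INET;
    addr.sin_port = htons(30003);
    inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof addr) < 0) {
        perror("connect");
        return 1;
    }

    /* The app pushes data on its own; just read until it closes. */
    char buf[4096];
    ssize_t n;
    while ((n = read(fd, buf, sizeof buf)) > 0)
        fwrite(buf, 1, (size_t)n, stdout);

    close(fd);
    return 0;
}

Whether you need a client or a server follows from this: the application listening on port 30003 is the server, so your program, like the browser, should be a client.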

Multiple TCP/IP servers and sharing the same "well known port" ... somehow?

I apologize for the weird question wording... here's the design problem:
I am developing a server (on Linux using C++, FWIW) that provides a service to many instances of a client application running on consumer PCs.
I want the following:
1) All clients first identify themselves to a "gatekeeper" server application. Consider this a login procedure, with credentials like a user name and password being passed in. Call the gatekeeper program "gserver" (for "gatekeeper").
2) Once each client has been validated, it is then placed into a long-term connection with one of several instances of a different server application running on the same physical server box, bound to the same server address. Call any of these instances "wserver" (for "working" server).
So, what the client sees is that a "gatekeeper" application gives it passworded access to one of several "working" servers running on the same box.
Here is the "real" challenge: we want to exclusively use a "well known" port number for the inbound server connections (like port 80 or 443, say.) Or, our own "well known" port.
We would prefer not to have to make the client talk to a second port on the server for the long term connection phase with wserver(n). The problem with this, of course, is that only one server process at a time can be bound to the same port and server address.
This implies that a connection made by the client with gserver must also fill the role of the long term connection. The only way I see to accomplish this is that gserver must, after login, act like a proxy and copy traffic between itself and the client to the particular wserver(n) that the client is bound to logically.
It would be ideal if a TCP/IP connection first made between client(n) and gserver could be somehow "transported" to another application on the same server, intact, and could then be sustained by one of the wserver(n) instances for the long term connection.
I know that web servers do something like this for spreading out server loads. "Load balancing". The main difference here is that the "balancing" is the allocation of a particular user to a particular wserver(n) instance. But I also have the impression that load balancing is a kind of proxying - which I am trying to avoid (since it complicates the architecture and adds overhead as well as a single point of failure.)
This is a conceptual and design question. Don't worry about source code examples, unless they are absolutely essential to get the ideas across. If we pin down an approach, I can code it up.
Thanks!
What you are looking for is file descriptor passing. See UNP 15.7. One well-known heavy user of this facility is postfix.
I developed such an application a long time ago. Since multiple servers can't listen on the same port, what you need is to have gserver listen on the well-known port. Once a connection is established, pass it to one of the other servers via a Unix socket. Once the connection has been passed to the other server, gserver is out of the picture; it can die and the other server will still be serving the connection.
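For reference, the sending side of that descriptor pass looks roughly like the following in C on Linux. It is a sketch of the SCM_RIGHTS mechanism described in UNP 15.7, not production code; the receiving wserver performs the matching recvmsg() with the same CMSG_* macros to pick the descriptor up:

/* Sketch of the descriptor-passing step (UNP 15.7): gserver hands an
 * accepted client socket to a wserver over an already-connected
 * Unix-domain socket. SCM_RIGHTS tells the kernel to install a
 * duplicate of the descriptor in the receiving process. */
#include <string.h>
#include <sys/socket.h>
#include <sys/uio.h>

int send_fd(int unix_sock, int fd_to_pass)
{
    char byte = 0;                        /* at least one real data byte is required */
    struct iovec iov = { .iov_base = &byte, .iov_len = 1 };

    union {                               /* correctly aligned ancillary buffer */
        struct cmsghdr hdr;
        char buf[CMSG_SPACE(sizeof(int))];
    } ctrl;
    memset(&ctrl, 0, sizeof ctrl);

    struct msghdr msg;
    memset(&msg, 0, sizeof msg);
    msg.msg_iov = &iov;
    msg.msg_iovlen = 1;
    msg.msg_control = ctrl.buf;
    msg.msg_controllen = sizeof ctrl.buf;

    struct cmsghdr *cm = CMSG_FIRSTHDR(&msg);
    cm->cmsg_level = SOL_SOCKET;
    cm->cmsg_type = SCM_RIGHTS;           /* "this message carries descriptors" */
    cm->cmsg_len = CMSG_LEN(sizeof(int));
    memcpy(CMSG_DATA(cm), &fd_to_pass, sizeof(int));

    return sendmsg(unix_sock, &msg, 0) == 1 ? 0 : -1;
}

Because the kernel duplicates the descriptor, the TCP connection itself is untouched; the client never notices that a different process is now servicing it.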
I don't know if this applies to your design, but the usual solution (as implemented by the xinetd daemon) is to fork() and then exec() the process. For example, xinetd may serve services like rlogin, rsh, tftp, telnet, etc., which are actually served by different programs. This will not be useful to you if your wservers are processes already running in the system.