Forwarding to different IP addresses depending on which application I use - github

Is there any way to route a single domain either to a server or to a website, depending on whether the connection comes from PuTTY or from a browser? I have one domain registered with GoDaddy, and I want it to reach my server when I connect with PuTTY but serve a website when it's visited from a browser.
What I have so far: I'm using GitHub Pages for the browser side of things.

HTTP (ports 80 and 443) is an end-user protocol; SSH is an admin protocol.
The semantics (the meaning rules) behind the Domain Name System are that a name represents an IP address, which in turn represents a physical server.
While it is technically possible to break the semantic rules of a protocol, it is not convenient to do so, as it makes maintenance harder by making the design more opaque. If these were both end-user protocols, I could understand sacrificing ease of maintenance for the sake of user convenience, but you are trying to create an abstraction for admins, which will only make debugging harder.
The solution is to use a different name for each server. You can use a subdomain, for example ssh.domain.com, if the endpoint is a bastion host that tunnels connections to domain.com, or just use a different domain name altogether if it's a completely different service.
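In practice that is just a couple of DNS records at GoDaddy; a rough sketch (203.0.113.10 is a placeholder for your own server's IP, youruser.github.io stands in for your GitHub Pages address, and the apex A record shown is one of the IPs GitHub documents for Pages, so check their current docs):

domain.com.       A      185.199.108.153    ; GitHub Pages apex IP (see GitHub's docs)
www.domain.com.   CNAME  youruser.github.io.
ssh.domain.com.   A      203.0.113.10       ; your own box, reachable over SSH

Browsers hitting domain.com get the GitHub Pages site, while you point PuTTY at ssh.domain.com and land on your server's port 22.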

Simplest server to server authentication

I have a microservice on a new server/VPS that will only ever be called via REST by a monolith app to perform some heavy lifting, and then post the results back to the monolith a few minutes later.
How should I protect these two endpoints? My main goal, for now, is just to prevent someone who finds the server's address from being able to do anything.
Almost every solution I find by googling seems like overkill/premature optimization.
Is it sufficient that I generate a long random token once on each machine and then just pass it in the headers and check its presence on the other end?
Do I even need SSL for this? As far as I understand, we need SSL encryption for clients that are trying to send sensitive data over wireless or unsafe shared networks.
What are the chances (is it even possible?) that somebody is going to eavesdrop between two DigitalOcean VPSs sending data via HTTP? Has it ever happened before?
Q: Is it sufficient that I generate a long random token once on each machine and then just pass it in the headers and check its presence on the other end?
A: Generally microservices sit behind a gatekeeper/gateway (nginx, HAProxy) so you can expose only the endpoints you want. In your case I would recommend creating a private network between the two VPSs and exposing your microservice only on that internal IP.
Q: Do I even need SSL for this? As far as I understand, we need SSL encryption for clients that are trying to send sensitive data over wireless or unsafe shared networks.
A: No. If you use internal networks and don't expose the service to the public, then there is no need for SSL/TLS. If you were doing something at Tier 3/4, then you would need encryption for cross-datacenter communication.
Q: What are the chances (is it even possible?) that somebody is going to eavesdrop between two DigitalOcean VPSs sending data via HTTP? Has it ever happened before?
A: There are bots that scan for open ports on servers/computers and try to break in with exploits. In all cases, always use a firewall like UFW/firewalld.
So let's say you have two servers with these microservices using the internal private network from your favorite provider:
VPS1 (IP = 10.0.1.50)
    FooBarService:1337
    BarFooService:7331
VPS2 (IP = 10.0.1.51)
    AnotherMicroService:9999
Now both VPSs can reach each other's services simply by calling the internal IP + port.
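For example, a call from VPS1 to AnotherMicroService on VPS2 over the private network is just an HTTP request to that internal IP and port; a minimal sketch with Python's standard library (the path and payload are hypothetical):

import json
from urllib import request

# VPS1 -> VPS2 over the provider's private network; never exposed publicly
req = request.Request(
    "http://10.0.1.51:9999/process",
    data=json.dumps({"job_id": 42}).encode(),
    headers={"Content-Type": "application/json"},
    method="POST",
)
with request.urlopen(req, timeout=30) as resp:
    result = json.load(resp)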
Good luck.
There are a few simple solutions you could use to authenticate the two servers to each other. The one I would recommend if you want to keep it simple, as you say, is Basic Auth. As long as you use it over an SSL/HTTPS connection, it suffices as a very simple way to authenticate each end.
You state that your main goal is to protect these endpoints, but then ask whether SSL/HTTPS is even needed. If these servers are reachable from the web in any way, then yes, your endpoints need to be protected, and if you're transmitting sensitive data, you need to send it over a secure channel.
If you believe the data you're sending is not very sensitive, and that nobody who finds these two endpoints will know how to manipulate your data by sending fake requests, then sure, you don't need any of this; but then you assume the risk and responsibility for if and when it is exposed. Basic Auth is super easy, and with Let's Encrypt it's incredibly easy to obtain an SSL certificate for free. It's good experience, so you may as well try it out, protect these endpoints, and ensure they're safe.
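As a rough sketch of that approach (the URL, username, and secret are placeholders; the requests library is assumed on the client side):

# Client (the monolith) calling the microservice over HTTPS with Basic Auth
import requests

resp = requests.post(
    "https://worker.example.com/api/heavy-task",
    json={"job_id": 42},
    auth=("monolith", "LONG-RANDOM-SECRET"),
    timeout=30,
)
resp.raise_for_status()

# Server side: framework-agnostic check of the Authorization header
import base64, hmac

def is_authorized(auth_header):
    # Expect "Basic <base64(user:password)>"; compare in constant time
    if not auth_header.startswith("Basic "):
        return False
    expected = base64.b64encode(b"monolith:LONG-RANDOM-SECRET").decode()
    return hmac.compare_digest(auth_header[6:], expected)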

What possibilities are there to connect to a postgres database

I was wondering: what possibilities are there to connect to a Postgres database?
Off the top of my head I know of at least two.
The first possibility is a crude one: open a port and let users make changes anonymously.
The second is to create a website that communicates with Postgres using SQL commands.
I couldn't find any more options on the internet, so I'm curious whether others exist, because maybe one of them is the best way to communicate with Postgres over the internet.
This is more of a networking/security type question, I think.
You can have your database fully exposed to the internet, which is generally a bad idea unless you are just screwing around for fun and don't mind it being completely hosed at some point. I assume this is what you mean by option 1 of your question.
You can have a firewall in front that only exposes it to certain incoming IPs. This is a little better, but still feels rather exposed for a database, especially if there is sensitive data on it.
If you have a limited number of people who need to interact with the DB, you can have it completely firewalled but allow SSH connections to the internal network (possibly to the same server), and then port-forward through the SSH tunnel. This is generally the best approach if you need to give full DB access to people outside the DB's network, since SSH can be made much more secure than a direct DB connection by using a public/private key pair for each incoming connection. You can also allow SSH only from specific IPs through your firewall as an added layer of security.
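As a sketch of that tunnel setup, assuming a Python client with psycopg2 and placeholder hosts and credentials:

# First, forward a local port through SSH to the database host (run in a shell):
#   ssh -N -L 5433:localhost:5432 dbadmin@db.example.com
# Then connect to the local end of the tunnel as if the database were local:
import psycopg2

conn = psycopg2.connect(
    host="localhost", port=5433,          # local end of the SSH tunnel
    dbname="appdb", user="app", password="placeholder",
)
with conn, conn.cursor() as cur:
    cur.execute("SELECT version();")
    print(cur.fetchone())
conn.close()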
Similar to SSH, you could stand up a VPN and allow access to the LAN upon which the DB sits and control access through the VPN.
If you have a wider audience, you can allow no external access to the database at all (except for you or a DBA/administrator through SSH tunneling or a VPN). Then build access through a website, where all communication with the DB happens in server-side scripting (PHP, Node.js, Rails, .NET, what have you). This is the usual setup for every website with a database behind it, and I assume it's what you mean by option 2 of your question.
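A minimal sketch of that setup, again assuming a Python back end with psycopg2 (the host, table, and credentials are placeholders): the browser only ever speaks HTTP to the web tier, and only the server-side code can reach the firewalled database.

import json
from http.server import BaseHTTPRequestHandler, HTTPServer
import psycopg2

def fetch_items():
    # Only this server-side code talks to Postgres; the DB is not reachable from outside.
    conn = psycopg2.connect(host="10.0.0.5", dbname="appdb", user="web", password="placeholder")
    with conn, conn.cursor() as cur:
        cur.execute("SELECT id, name FROM items ORDER BY id LIMIT 50;")
        rows = cur.fetchall()
    conn.close()
    return [{"id": r[0], "name": r[1]} for r in rows]

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = json.dumps(fetch_items()).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

HTTPServer(("0.0.0.0", 8080), Handler).serve_forever()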

Peer 2 Peer call using PJSIP and PJSUA

I am still learning about SIP and all its protocols, specifically trying to integrate PJSIP into an iPhone application to make p2p calls.
I have a question about peer-to-peer connections using PJSUA. I am able to make calls perfectly to other clients on my local network by calling directly with a URI such as:
sip:192...*:5060
I am curious whether this will work for making direct calls to other SIP URIs that are not on the local network, without any server configuration. If not this way, is there another way of making p2p calls without server configuration?
Thanks in advance.
You can make calls without server configuration, as a general principle, but something needs configuring. As mattjgalloway points out in the comments below your question, the most robust solution is a can of worms involving ICE which provides a kind of "umbrella" protocol for things like STUN.
Last time I touched this issue, I had the requirement that I couldn't use internet-based SIP servers to help. I came up with the idea of a registry of sorts: your client can define a bunch of "address spaces" with particular routing requirements. For SIP URIs in your LAN, you define no routing; for URIs in your company's VPN-accessed network, you define a route passing through your VPN connection; for everything else you define a route through your internet router.
By "define a route", I mean that when you place a call to a URI in some particular address space, you store what IP will go into a Contact header, what Route headers you might need, and so on.
Thus, the process of making a call becomes:
Look up in the set of address spaces for a match.
Ask that address space for the suitable bits needed to make a workable INVITE (appropriate Contact header details, Route headers, etc.)
Construct a normal INVITE, mutating as necessary for the previous step.
Send the INVITE as normal.
This essentially reproduces half of what ICE would give you, in a manually administered form. "Half", because it ensures that one SIP agent can make calls such that the SIP routing all works. The missing half is that you still need some kind of registrar somewhere, and each agent in your contact list needs the necessary setup to receive incoming calls. (If an agent is behind a NATting internet router, the router would need either to run a SIP proxy or to forward ports 5060 and 5061 to a particular machine, which might be an agent or a proxy serving the LAN's agents.)
It is, indeed, a large can of worms.
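A minimal sketch of the "address spaces" registry described above, in Python; the networks, contact IPs, and outbound proxy are all hypothetical, and entries are ordered most specific first since the catch-all matches everything:

import ipaddress

# Each entry: (network, contact IP to advertise, optional outbound proxy for a Route header)
ADDRESS_SPACES = [
    (ipaddress.ip_network("192.168.1.0/24"), "192.168.1.20", None),   # LAN: no special routing
    (ipaddress.ip_network("10.8.0.0/16"),    "10.8.0.5",     None),   # company network over VPN
    (ipaddress.ip_network("0.0.0.0/0"),      "203.0.113.7",  "<sip:proxy.example.com;lr>"),  # everything else
]

def routing_for(target_ip):
    """Return (contact_ip, route) for the first address space containing target_ip."""
    addr = ipaddress.ip_address(target_ip)
    for network, contact_ip, route in ADDRESS_SPACES:
        if addr in network:
            return contact_ip, route
    raise ValueError("no address space matches " + target_ip)

# e.g. routing_for("10.8.3.4") -> ("10.8.0.5", None); the caller would then build the
# INVITE's Contact header from the first element and add a Route header if the second is set.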
The basic issue is getting usable transport addresses and ports for multimedia traffic between any two points on the internet.
Many companies and experts have tried to solve this. A possible way out is to buy a domain, set up a basic registrar using YATE or Asterisk on an address accessible from the internet, and configure it to use ICE as needed. Your iPhone application at both ends could register with it automatically on startup, and then make p2p calls.

Multiple TCP/IP servers and sharing the same "well known port" ... somehow?

I apologize for the weird question wording... here's the design problem:
I am developing a server (on Linux using C++, FWIW) that provides a service to many instances of a client application running on consumer PCs.
I want the following:
1) All clients first identify themselves to a "gatekeeper" server application. Consider this a login procedure, with credentials like a user name and password being passed in. Call the gatekeeper program "gserver" (for "gatekeeper server").
2) Once each client has been validated, it is placed into a long-term connection with one of several instances of a different server application running on the same physical server box, bound to the same server address. Call any of these instances "wserver" (for "working server").
So, what the client sees is that a "gatekeeper" application gives it passworded access to one of several "working" servers running on the same box.
Here is the "real" challenge: we want to exclusively use a "well known" port number for the inbound server connections (like port 80 or 443, say), or our own "well known" port.
We would prefer not to have to make the client talk to a second port on the server for the long term connection phase with wserver(n). The problem with this, of course, is that only one server process at a time can be bound to the same port and server address.
This implies that a connection made by the client with gserver must also fill the role of the long term connection. The only way I see to accomplish this is that gserver must, after login, act like a proxy and copy traffic between itself and the client to the particular wserver(n) that the client is bound to logically.
It would be ideal if a TCP/IP connection first made between client(n) and gserver could be somehow "transported" to another application on the same server, intact, and could then be sustained by one of the wserver(n) instances for the long term connection.
I know that web servers do something like this to spread out server load ("load balancing"). The main difference here is that the "balancing" is the allocation of a particular user to a particular wserver(n) instance. But I also have the impression that load balancing is a kind of proxying, which I am trying to avoid (since it complicates the architecture and adds overhead as well as a single point of failure).
This is a conceptual and design question. Don't worry about source code examples, unless they are absolutely essential to get the ideas across. If we pin down an approach, I can code it up.
Thanks!
What you are looking for is file descriptor passing. See UNP (Unix Network Programming) 15.7. One well-known heavy user of this facility is Postfix.
I developed such an application a long time ago. Since multiple servers can't listen on the same port, what you need is to have gserver listening on the well-known port. Once a connection is established, pass it to the other server via a Unix domain socket. Once the connection has been passed, gserver is out of the picture; it can even die and the other server will still be serving the connection.
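Your servers are in C++, but the mechanism is the same SCM_RIGHTS ancillary message over a Unix domain socket; here is a minimal sketch in Python 3.9+ (the socket path and names are hypothetical) just to show the shape of it:

import socket

# gserver side: after the login handshake, hand the connected client socket
# to a worker over a Unix domain socket, then forget about it.
def hand_off(client_sock, worker_unix_path):
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as ctrl:
        ctrl.connect(worker_unix_path)                        # wserver listens here
        socket.send_fds(ctrl, [b"takeover"], [client_sock.fileno()])
    client_sock.close()                                       # gserver is now out of the picture

# wserver side: receive the descriptor and keep serving the same TCP connection.
def receive_client(ctrl_conn):
    msg, fds, flags, addr = socket.recv_fds(ctrl_conn, 1024, 1)
    return socket.socket(fileno=fds[0])                       # wraps the passed descriptor

In C++ the equivalent is sendmsg()/recvmsg() with a cmsghdr of type SCM_RIGHTS, which is exactly what UNP 15.7 walks through.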
I don't know whether this applies to your design, but the usual solution (as implemented by the xinetd daemon) is to fork() and then exec() the handling process. For example, xinetd may serve services like rlogin, rsh, tftp, telnet, etc., which are actually served by different programs. This will not be useful to you if your wservers are processes already running on the system.
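A tiny sketch of that inetd/xinetd-style pattern, again in Python only to illustrate the idea (the /usr/local/bin/wserver path is hypothetical):

import os, signal, socket

signal.signal(signal.SIGCHLD, signal.SIG_IGN)   # don't leave zombie children behind

srv = socket.socket()
srv.bind(("0.0.0.0", 5000))
srv.listen()
while True:
    conn, _ = srv.accept()
    if os.fork() == 0:                      # child becomes the worker for this client
        os.dup2(conn.fileno(), 0)           # the accepted connection becomes stdin...
        os.dup2(conn.fileno(), 1)           # ...and stdout, as inetd-style daemons expect
        os.execv("/usr/local/bin/wserver", ["wserver"])
    conn.close()                            # parent just keeps accepting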

How to create a SaaS application?

I don't know how else to say it so I'm just going to explain my ideal scenario and hopefully you can explain to me how to implement it...
I'm creating an application with the Zend Framework that will be hosted with DreamHost. The application will be hosted on its own domain (i.e. example-app.com). Basically, a user should be able to sign up and get their own address, sampleuser.example-app.com or example-app.com/sampleuser, which points to what looks like their own instance of the app but is really a single instance serving up different content based on the URL.
Eventually, I want my users to be able to point their own domain (like foobar.com) at sampleuser.example-app.com, such that visitors to foobar.com don't notice that the site is really being served from example-app.com.
I don't know how to do most of that stuff. How does this process work? Do I need to do some funky stuff with Apache or can this be done with a third party host, like DreamHost?
Update: Thanks for the advice! I've decided to bite the bullet and upgrade my hosting plan to utilize wildcard subdomains. It's cheaper than I was expecting! I also found out about domain reseller programs, like opensrs.com, that have their own API. I think using one of these APIs will be the solution to my domain registration issue.
Subdomains are easy. In hosting environments, Apache is in most cases configured to catch all subdomain calls below the main domain. You just need a wildcard DNS record defined, so that *.example-app.com points to your server's IP. Then your website will catch all calls to those subdomain names.
Other domains are harder. They need to be configured as virtual hosts in Apache - see http://httpd.apache.org/docs/1.3/vhosts/name-based.html - which means it is difficult to automate, especially in a hosting environment, unless your host gives you an API to do just that. An easier and more feasible scenario is to have a dedicated IP assigned to your website, so that your Apache catches everything arriving at that IP (this is probably configurable through your hosting control panel, or works out of the box), and then your users just point their DNS at your IP.
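For the wildcard-subdomain part, the vhost is roughly this (a sketch for a typical Apache 2.x name-based setup; the DocumentRoot path is a placeholder):

<VirtualHost *:80>
    ServerName example-app.com
    ServerAlias *.example-app.com
    DocumentRoot /var/www/example-app/public
</VirtualHost>

With a dedicated IP you can instead make this the default (first) vhost for that address, so requests for customers' own domains land in the same application even though Apache has never heard of those hostnames.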
Then, after you have configured Apache to route all the necessary calls to your website, you can differentiate application partitions per subdomain this way:
get the Host header from the HTTP request
have a database table containing all the subdomain names you're serving
look the host up in that table to determine the instance (or user) ID, and use it later for filtering data, or for selecting a database if you go with a "database per application instance" scheme.
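The app in the question is PHP/Zend, but as a language-neutral sketch of those three steps (the base domain and table name are placeholders):

BASE_DOMAIN = "example-app.com"

def tenant_from_host(host_header):
    # Strip any port, normalise case, and check we're under the base domain.
    host = host_header.split(":")[0].lower()
    if host == BASE_DOMAIN or not host.endswith("." + BASE_DOMAIN):
        return None                              # main site, or a custom domain handled elsewhere
    return host[: -len("." + BASE_DOMAIN)]       # "sampleuser.example-app.com" -> "sampleuser"

# tenant_from_host("sampleuser.example-app.com") returns "sampleuser"; that slug is then
# looked up in the subdomains table, e.g. SELECT user_id FROM tenants WHERE subdomain = %s,
# and the resulting ID used to filter every query for the rest of the request.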
Good luck :)