So far, none of the answers I've found address my question, so I'm stuck...
Basically, I made a program (connected to my PostgreSQL database) which, depending on user input, changes table contents in the database. It's a sort of register/login system (click here if you want to see the script). When I run it on my PC (Windows 10 x64) it works like a charm. But when a friend of mine (also Windows 10 x64) tries to run it on a different network, he gets this error:
Could not connect to server: Connection refused
Is the server running on host "192.168.1.113" and accepting TCP/IP connections on port 5432?
(If it helps: I tried MySQL too, but got the same result... my friend cannot access my database!)
So I was asking myself: is it even possible to allow devices on other networks to access my database? If yes, how can I do it?
192.168.*.* is for local network addresses. You would not expect it to be reachable from another network. You would have to figure out what your real address is. For example, by going to https://whatismyipaddress.com/ or just Googling "what is my IP address".
Then you have the question of how often that address changes (which is up to your ISP) and how to get the connection past your home router, which will probably either block it, or at least not route it to your database server, without special configuration to do port-forwarding. This is a basic networking task and at the moment is not specific to PostgreSQL.
Your ISP may also block the connection, as hosting servers on a standard home ISP plan is likely against the terms of service, although most ISPs tolerate it as long as the traffic never comes to their attention through heavy usage or abuse complaints.
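Once the networking part works, there is also PostgreSQL-side configuration to do: by default the server listens only on localhost, and pg_hba.conf only admits local connections. A minimal sketch, assuming a stock install (mydb, myuser, and the 203.0.113.0/24 address range are placeholders for your own values; scram-sha-256 assumes a reasonably recent PostgreSQL, older versions used md5):

    # postgresql.conf -- listen on all interfaces, not only localhost
    listen_addresses = '*'

    # pg_hba.conf -- admit one user/database from a placeholder remote range,
    # over TLS with password authentication
    hostssl  mydb  myuser  203.0.113.0/24  scram-sha-256

After changing these, restart the server, and remember that the router still has to forward the external port to port 5432 on the machine running PostgreSQL.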
I'd advise against that. We use databases to store metadata: text-based data, data that you need to run queries on. For storing images, what services like Instagram and Pinterest do is store them on services like S3, which is inexpensive (databases are comparatively expensive). Plus, the number of images generated per second, should you ever receive that kind of traffic, would be astronomical.
What they do is store the image on S3 (think of it as a hard disk hosted on the internet), store its path in the database, and when someone asks for that image, serve it from S3.
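As a small illustration of that pattern, a hypothetical table might hold only the S3 key next to the metadata (the table and column names here are made up):

    -- The image bytes live in S3; only the key/path is stored in the database.
    CREATE TABLE photos (
        id         bigserial PRIMARY KEY,
        owner_id   bigint NOT NULL,
        s3_key     text NOT NULL,              -- e.g. 'uploads/abc123.jpg'
        created_at timestamptz NOT NULL DEFAULT now()
    );

Serving a photo is then a cheap lookup of s3_key followed by a fetch from (or a redirect to) S3, and the database stays small and queryable.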
I have a Postgres database installed on my Raspberry Pi that works fine locally within my home network. I would like to be able to access it from outside my home network. I've done some research, and from what I've seen, port forwarding on my router or using a service like localtunnel or ngrok seem like viable solutions.
However, my question is whether these open up any security risks on my home network. If not, then great, I can move forward with setting this up (I was leaning towards port forwarding on my router). But if there are concerns, what exactly are they, and what steps can I take to have a secure setup?
If you expose your database to the world with a weak password for a database superuser, that will definitely lower your security in a substantial way. Hackers routinely patrol for such weak settings and exploit them, mostly for cryptocurrency mining but also to add you to botnets. In those cases they don't care about your database itself, it is just a way in to get at your CPU/network connection. They might also probe for valuable information appearing in your database, in which case they don't even need to be a superuser.
If you always run the latest bugfix version, use a strong password (like the output of pwgen 20 -sy -1), and use SSL, or if you correctly use some other method of authentication and encryption, then it will lower security by only a minimal amount.
If you personally control every password, ensure they are strong, and test that they are actually required for logon (e.g. intentionally enter one wrong once to make sure you get rejected), I wouldn't worry too much about the port forwarding giving bad guys access to the machine. If you care about people being able to eavesdrop on the data being sent back and forth, then you also need SSL.
Encrypted tunnels of course are another solution which I am not addressing.
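To make the "test it" advice concrete, a quick check from another machine might look like this (the host and names are placeholders):

    # Require TLS from the client side, then deliberately type a wrong password:
    psql "host=db.example.com dbname=mydb user=myuser sslmode=require"

A correctly configured server should answer the wrong password with a "password authentication failed" error, and, if pg_hba.conf only has hostssl entries for remote addresses, should reject any non-TLS connection attempt outright.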
I was wondering: what possibilities are there to connect to a Postgres database?
I know off the top of my head that there are at least two.
The first is a brute-force one: open a port and let users make changes anonymously.
The second is to create a website that communicates with Postgres using SQL commands.
I couldn't find any more options on the internet, so I'm curious whether others exist, because maybe one of them is the best way to communicate with Postgres over the internet.
This is more of a networking/security type question, I think.
You can have your database fully exposed to the internet which is generally a bad idea unless you are just screwing around for fun and don't mind it being completely hosed at some point. I assume this is what you mean by option 1 of your question.
You can have a firewall in front that only exposes it to certain incoming IPs. This is somewhat better, but still feels a little exposed for a database, especially if there is sensitive data in it.
If you have a limited number of folks that need to interact with the DB, you can have it completely firewalled, but allow SSH connections to the internal network (possibly the same server) and then port forward through the SSH tunnel (there's a sketch of such a tunnel after these options). This is generally the best way if you need to give full DB access to folks who are external to the DB's network, since SSH can be made much more secure than a direct DB connection by using a public/private keypair for each incoming connection. You can also allow SSH only from specific IPs through your firewall as an added level of security.
Similar to SSH, you could stand up a VPN and allow access to the LAN upon which the DB sits and control access through the VPN.
If you have a wider audience, you can allow no external access to the database (except for you or a DBA/administrator type person through SSH tunneling or VPN). Then build access through a website where communication with the DB is done in server-side scripting (PHP, Node.js, Rails, .NET, what have you). This is the usual setup that every site with a database behind it uses. I assume that's what you mean by option 2 of your question.
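The SSH tunnel from the third option above might look like this (the host names and local port 5433 are placeholders):

    # Forward local port 5433 to port 5432 as seen from the DB host,
    # authenticated with your SSH key; -N means "run no remote command".
    ssh -N -L 5433:localhost:5432 dbuser@db.example.com

    # In another terminal, connect as if the database were local:
    psql "host=localhost port=5433 dbname=mydb user=myuser"

The database port itself never has to be opened on the firewall; only SSH is exposed, and that can be locked down to key-only authentication.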
OK, let's say I want to create a connection between my iPhone app and my server (I'd like to try using GoDaddy servers for this) to serve real-time location data to users.
I've seen plenty of good stuff online about using sockets, streams, ASIHttpmessage, CFHTTPMessageRef, etc., but what I'm unclear about is how to set up a server that continuously serves real-time data to users (I believe you'd need a stream of data going to the user for this, not just a single HTTP request and response). How does one take a host like GoDaddy and run server code on it? I know you can set up a server like this using the terminal, but as far as I know I don't have command-line access or the ability to run this "server program" from my web host. Is there software I can download through my cPanel for this? Do I need a virtual private server and different hosting from GoDaddy, maybe?
Does anyone know how I can do this, or whether my understanding of this whole thing is wrong? Please keep in mind I need this in real time (or close to it). Please, educate me. I really just need a better understanding of how this works.
I was just downloading a new Linux distro using uTorrent, and started to wonder how uTorrent (and other BitTorrent clients) send files to each other through NAT routers. They obviously use the trackers to get introduced, but how do they pass data to each other?
Is there a whitepaper on this? I couldn't find one :/
Thanks
Most of the time, they don't. I have a restricted network, and every time I run my torrent program it warns me that some of the required ports/functionality are not available to me.
If one party has a restricted network and the other has an open network, the restricted client will always connect to the open client. If you have two restricted clients, they will not be able to connect to each other. The reason it works at all is that most (enough) of the people on the torrent network do have some kind of port forwarding or UPnP (Universal Plug and Play) to facilitate this.
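That connectivity rule is simple enough to state as a toy model (this is just an illustration, not anything from a real client):

    // Toy model of the rule above: a TCP connection between two peers is
    // possible as long as at least one side can accept inbound connections.
    struct Peer {
        bool accepts_inbound;   // open network, port-forwarded, or UPnP-mapped
    };

    bool can_connect(const Peer& a, const Peer& b) {
        // The side that accepts inbound listens; the other dials out. NATs
        // allow outbound connections and their return traffic, so two
        // restricted peers (false, false) are the only failing combination.
        return a.accepts_inbound || b.accepts_inbound;
    }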
Torrent clients work on the basis of what are known as distributed hash tables (DHTs). They start off with a set of known roots and branch out looking for other connected nodes (i.e., neighbours). They establish connections to those, and keep this up, up to a set limit. Since the client is initiating the connection, all the remote end has to do is feed the data back, and it gets through the NAT just fine; that's simply how network traffic works.
I apologize for the weird question wording... here's the design problem:
I am developing a server (on Linux using C++, FWIW) that provides a service to many instances of a client application running on consumer PCs.
I want the following:
1) All clients first identify themselves to a "gatekeeper" server application. Consider this a login procedure, with credentials like a user name and password being passed in. Call the gatekeeper program "gserver" (for "gatekeeper server").
2) Once each client has been validated, it is placed into a long-term connection with one of several instances of a different server application running on the same physical server box, bound to the same server address. Call any of these instances "wserver" (for "working server").
So, what the client sees is that a "gatekeeper" application gives it passworded access to one of several "working" servers running on the same box.
Here is the "real" challenge: we want to exclusively use a "well-known" port number for the inbound server connections (port 80 or 443, say), or our own "well-known" port.
We would prefer not to make the client talk to a second port on the server for the long-term connection phase with wserver(n). The problem with this, of course, is that only one server process at a time can be bound to a given port and server address.
This implies that a connection made by the client with gserver must also fill the role of the long term connection. The only way I see to accomplish this is that gserver must, after login, act like a proxy and copy traffic between itself and the client to the particular wserver(n) that the client is bound to logically.
It would be ideal if a TCP/IP connection first made between client(n) and gserver could be somehow "transported" to another application on the same server, intact, and could then be sustained by one of the wserver(n) instances for the long term connection.
I know that web servers do something like this to spread out server load ("load balancing"). The main difference here is that the "balancing" is the allocation of a particular user to a particular wserver(n) instance. But I also have the impression that load balancing is a kind of proxying, which I am trying to avoid (since it complicates the architecture and adds overhead, as well as a single point of failure).
This is a conceptual and design question. Don't worry about source code examples, unless they are absolutely essential to get the ideas across. If we pin down an approach, I can code it up.
Thanks!
What you are looking for is file descriptor passing. See UNP (Stevens, Unix Network Programming) section 15.7. One well-known heavy user of this facility is Postfix.
I developed such an application a long time ago. Since multiple servers can't listen on the same port, what you need is to have gserver listen on the well-known port. Once a connection is established, pass it to one of the other servers via a Unix socket. Once the connection has been passed, gserver is out of the picture; it can even die, and the other server will still be serving the connection.
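For reference, the core of that trick is sendmsg()/recvmsg() with an SCM_RIGHTS control message over the Unix socket, as described in UNP 15.7. A minimal sketch in C++ (error handling trimmed; unix_sock is assumed to be an already-connected AF_UNIX stream socket between gserver and the chosen wserver):

    #include <sys/socket.h>
    #include <cstring>

    // gserver side: hand an accepted client connection to a wserver instance.
    int send_fd(int unix_sock, int client_fd) {
        char byte = 0;                        // at least 1 byte of normal data must go along
        iovec iov{&byte, 1};
        alignas(cmsghdr) char ctrl[CMSG_SPACE(sizeof(int))] = {};

        msghdr msg = {};
        msg.msg_iov = &iov;
        msg.msg_iovlen = 1;
        msg.msg_control = ctrl;
        msg.msg_controllen = sizeof(ctrl);

        cmsghdr* cm = CMSG_FIRSTHDR(&msg);
        cm->cmsg_level = SOL_SOCKET;
        cm->cmsg_type  = SCM_RIGHTS;          // "this message carries file descriptors"
        cm->cmsg_len   = CMSG_LEN(sizeof(int));
        std::memcpy(CMSG_DATA(cm), &client_fd, sizeof(int));

        return sendmsg(unix_sock, &msg, 0);   // kernel duplicates the fd into the peer
    }

    // wserver side: receive the connection; returns the new fd, or -1 on error.
    int recv_fd(int unix_sock) {
        char byte;
        iovec iov{&byte, 1};
        alignas(cmsghdr) char ctrl[CMSG_SPACE(sizeof(int))] = {};

        msghdr msg = {};
        msg.msg_iov = &iov;
        msg.msg_iovlen = 1;
        msg.msg_control = ctrl;
        msg.msg_controllen = sizeof(ctrl);

        if (recvmsg(unix_sock, &msg, 0) <= 0) return -1;
        cmsghdr* cm = CMSG_FIRSTHDR(&msg);
        if (!cm || cm->cmsg_level != SOL_SOCKET || cm->cmsg_type != SCM_RIGHTS) return -1;

        int fd;
        std::memcpy(&fd, CMSG_DATA(cm), sizeof(int));
        return fd;                            // a live socket to the original client
    }

After a successful sendmsg(), gserver close()s its own copy of the descriptor; the connection stays alive through the duplicate now owned by the wserver, which is why gserver can die without dropping the client.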
I don't know if this applies to your design, but the usual solution (as implemented by the xinetd daemon) is to fork() and then exec() the handler process. For example, xinetd may serve services like rlogin, rsh, tftp, telnet, etc., which are actually served by different programs. This will not be useful to you if your wservers are processes already running in the system.
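For completeness, the xinetd-style pattern looks roughly like this sketch ("./wserver" is a placeholder path; error handling and child reaping are mostly omitted):

    #include <sys/socket.h>
    #include <unistd.h>

    // Accept loop in the xinetd style: one child process per connection.
    void serve(int listen_fd) {
        for (;;) {
            int conn = accept(listen_fd, nullptr, nullptr);
            if (conn < 0) continue;            // real code: check errno
            if (fork() == 0) {                 // child becomes the worker
                dup2(conn, STDIN_FILENO);      // worker reads the socket via stdin
                dup2(conn, STDOUT_FILENO);     // ...and writes it via stdout
                close(conn);
                execl("./wserver", "wserver", (char*)nullptr);
                _exit(1);                      // only reached if exec failed
            }
            close(conn);                       // parent: the child owns it now
            // real code would also reap children (SIGCHLD) to avoid zombies
        }
    }

The trade-off versus descriptor passing is that every connection gets a fresh process: simple, but it rules out long-lived wserver instances holding shared state, which is exactly the caveat in the last sentence above.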