How to set up pg_hba.conf - PostgreSQL

I need your help to correctly set up pg_hba.conf for two specific PostgreSQL servers on different networks. The first server is on a local network and the second is on a cloud server.
Since I will have to set up synchronization between them, I must make sure that both can communicate.
'listen_addresses' is already set to '*' in postgresql.conf.
My question is, if I add:
host all all 0.0.0.0/0 trust
...to the pg_hba.conf file on both servers, will they communicate free of errors?
Perhaps this is not the best way to do it, but since this is for testing purposes, it may solve my problem for now. Is there a better and safer solution?
Thank you all
Regards
Paulo Matos

Since you do not have a static IP for the system connecting to the database, you should use some method other than "trust". You can use md5 and put the password into a .pgpass file on the client.
You could put the client's host name, rather than its IP address, in the fourth field. But that requires reverse DNS to work correctly (I don't know if dyndns.org supports that), and I've found it overly fiddly and unreliable.
You probably also want to use SSL ("hostssl"). Using md5 will kind of protect your password, but an eavesdropper can still see all the queries you send and all the responses to them.
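For example, a sketch of what that could look like (the 0.0.0.0/0 range is a placeholder; narrow it if you can):
hostssl all all 0.0.0.0/0 md5
and on the client, a ~/.pgpass file (which must be chmod 600), one line per server, in the standard hostname:port:database:username:password format:
db.example.com:5432:mydb:myuser:mypassword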

Related

Localtunnel is not setting up the requested subdomain from the command 'lt --port 4000 --subdomain xyz'

I have been trying to set the subdomain in localtunnel, but it keeps giving me different subdomains.
The port number is 4000 and the server is running.
The command which I used:
lt --port 4000 --subdomain xyz (I changed the subdomain name for security reasons).
Where am I going wrong?
I know this is a very late answer, but I am writing it for other searchers who land on this link and cannot find a valid answer.
The command which I used: lt --port 4000 --subdomain xyz (I changed the subdomain name for security reasons).
The first thing is that the command is fine, but before localtunnel can assign you a subdomain, that subdomain must be available.
You may be thinking that you used a very unique, private subdomain name that should be available. You're right, but remember that localtunnel keeps a record of the subdomains you request and builds its own database, which gives it a large enough pool for its random-subdomain-assignment feature.
That means that after one, two, or even more (non-consecutive) attempts, your subdomain may be assigned to someone else, and for that period you obviously cannot use it. Whenever the subdomain is freed again, though, your request for it will succeed.
I'm not familiar enough with localtunnel to tell you what's wrong there, but I can tell you how to accomplish your same goal using Telebit:
(p.s. Did you figure this out? If so, I'd love to hear how you did it and I'm sure others would too)
Install
curl https://get.telebit.io | bash
You can also install via npm... but that isn't the preferred install method at this time. There may be some caveats.
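(If you do go the npm route, it is presumably the usual global install, i.e. npm install -g telebit, but check the project's README first; the package name is an assumption on my part.)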
The random domain you get is attached to your account (hence the need for email) and it's encrypted end-to-end with Greenlock via Let's Encrypt.
Configure
./telebit http 4000 xyz
The general format is
./telebit <protocol> <port> [subdomain]
It's not just https, you can use it to tunnel anything over tls/ssl (plain tcp, ssh, openvpn, etc).
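For example, following the same format, forwarding a raw TCP service would look something like this (a sketch; the exact protocol keywords may vary by version):
./telebit tcp 5432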
Custom domains are not yet a generally available feature, but they're on the horizon.

How can I reach my localhost over the web from outside my local network? i.e., IP/page?

I installed USBWebserver.
Everything is running, and I am trying to reach the root page, index.php.
I have read everything I possibly can, and sorry, but I still can't figure out how to reach my localhost.
I reach my page with localhost:8080 and the page I want shows up, but if I replace localhost with my IP (IP:8080) it does not.
I am trying to reach this page from outside my local network.
I'm sorry, I need to provide you a separate answer for your reformatted question for the "down the street" scenario. I can troubleshoot a few of the issues you're probably having.
ISPs don't typically allow residential internet connections to serve resources over port 8080 or 80. Even if you were to configure your computer as needed, a standard internet service provider is probably blocking you in the middle, even if you have punched holes in all your local security in an attempt to serve assets over port 8080/80.
Assuming they don't allow that, you're first going to have to configure your server stack (PHP in your case) to listen for calls to your IP on a different port. You can do this in your C:\WAMP\ folder, in the "wampserver" configuration file; there's a good walkthrough here: http://forum.wampserver.com/read.php?2,13744
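For instance, on a stock Apache/WAMP setup the relevant change is the Listen directive in httpd.conf (the port number here is just an example):
Listen 8081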
Now, you're going to have to drop any firewall rules Windows/Ubuntu/macOS are enforcing on that port. (This is the part where you've rolled out the red carpet for hackers to get into your box(es), so be careful!) Here's a short and sweet explanation for Windows: http://yourbusiness.azcentral.com/turn-off-windows-firewall-19396.html. Note that you can open individual ports; you don't have to drop your entire firewall.
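On Windows, for example, you can open just the one port from an elevated command prompt instead of disabling the firewall (rule name and port are examples):
netsh advfirewall firewall add rule name="Web 8081" dir=in action=allow protocol=TCP localport=8081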
Make sure you have opened up access to any folders/MySQL DBs/resources to outside requests as well (seriously, this is a REALLY bad idea for an at-home server if you don't know what you're doing).
Then figure out the correct IP and the correct port and give it a go! If it still doesn't work, you can download a program like Wireshark (https://www.wireshark.org/download.html) or Fiddler (http://www.telerik.com/download/fiddler/fiddler2) to debug your inbound/outbound traffic and see what the machine is seeing before your browser/server gives you any user-visible information.
One thing to note: if you are an amateur web developer, your homepage is called "index.html", not "home.html". "home.html" works fine locally, but browser engines look for "index.html" by default.
Lastly, and I really can't stress this enough: don't host through your personal ISP and serve files from your own machine. Hosting through FatCow, HostGator, or any of the other hosts is honestly dirt cheap, and they know far better than you or I do about security.
That said, I hope very much that you succeed in using my answer, or at the very least learning something from it. Happy Coding!
You can check whether your port is actually reachable from the internet at http://www.canyouseeme.org/.
--
Read the Background section there first.
Then go to a command line, type "ipconfig", and hit Enter.
Under "Ethernet adapter Ethernet:", it should be the third line down, which reads:
IPv4 Address . . . : 192.168.1.xxx
where "xxx" completes your IP address.
USE "//" + "the ip address shown for (ipv4)" plus ":8080" and your default page
should show just fine.
For example, if your cmd "ipconfig" for this process reads: "192.168.1.12"
your total URL in your browser will be "//192.168.1.12:8080"
Note that I used 2 forward slashes prior to using an IP address on your
local network. That let's your computer know it's using your network, not
the actual internet. The slashes alone may solve your problem. Also note, if you're accessing a database through your webapp, you will also need to properly configure your db settings to allow access.
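For example, if it's MySQL behind your webapp, allowing a specific user from your local subnet could look like this (names and password are placeholders; this syntax is for the older MySQL 5.x bundled with packages like USBWebserver):
GRANT ALL ON mydb.* TO 'webuser'@'192.168.1.%' IDENTIFIED BY 'secret';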
First, find your outside (public) IP address, not your local one. Then go into your router's admin panel and forward that port to the machine running the Apache server (for example, forward external TCP port 8080 to 192.168.1.12:8080). Anyone will then be able to reach that port, so you can connect to your local website from outside. If it doesn't work, check the settings and try again.

Redirect/rewrite to different internal IIS sites using query string

EDIT: Ugh I forgot to put this on Server Fault...
I have an Azure VM that is hosting a web application.
The application will be accessible via the VM's IP address:
http://191.238.112.62
I want to be able to use query strings to redirect to completely different sites that are within the local IIS. For example:
http://191.238.112.62/?site=1
would redirect to
www.site1.com
The way I have structured IIS can be seen below:
Each site has an entry in the systems host file.
127.0.0.1 www.site1.com
127.0.0.1 www.site2.com
127.0.0.1 www.site3.com
There is likely a better way to achieve what I am going for here so any pointers would be greatly appreciated.
Thanks.
Here is how I would do it. I'm not sure why you want to use query strings for this, as IIS is made to do exactly that if you configure it properly.
In your DNS server, register all your websites to point to that IP. This is for when you go live; for development, the hosts file is a good solution.
When you create the websites, add a host header, as in the sketch below.
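If you'd rather script it than click through IIS Manager, a binding with a host header can be added with appcmd, roughly like this (the site name is an example):
appcmd set site /site.name:"Site1" /+bindings.[protocol='http',bindingInformation='*:80:www.site1.com']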
Now try loading any website by their full name
http://www.site1.com
http://www.site2.com
http://www.site3.com
Here is more info about IIS host headers.
Again, when you go live make sure you have the DNS set up for all the websites to point to the IP address of your server.
Hope this helps.
Edit based on comment:
Right, here is how I solved this in the past.
You can do all this with the hosts file but it's less painful if you have a proper DNS server to resolve the names.
The basic idea is to use slightly different URLs for development on the local machine.
All devs would have www.site1.com point to the IP of the shared server and www.site1.com.local point to 127.0.0.1. So the hosts file on a developer machine would look something like:
191.238.112.62 www.site1.com
127.0.0.1 www.site1.com.local
On all development machines you need to make sure you have the .local host header for all sites.
On the shared server you just need to add the right host headers and no hosts file changes. It's actually a bad idea to change the server hosts file.

Is PostgreSQL peer authentication safe for production?

PostgreSQL peer authentication is a source of many questions on this website, but once you understand how it works, it looks pretty awesome.
For example, I can have my application connecting to the development database without supplying username and password.
So, my question is, can I use peer authentication on a production server? Is it safe enough?
Thank you very much.
peer is very useful for many kinds of deployments - e.g. when you want to allow people to log in with local unix user accounts and get quick DB access as a matching PostgreSQL user.
It's not great for webapps, because you generally want each webapp to have its own user. So you usually use md5 for them.
I often combine the two. For webapps, allow md5 to their private DB only: over local (Unix socket) connections if the driver supports it, otherwise over host connections from localhost. Allow peer for local users to any DB, including the webapp DBs. If you want to have only one user in each DB so you can ignore permissions (which I don't recommend, but I know some people do), you can use a pg_ident.conf mapping to allow people to authenticate via peer as users other than their default user name.
Then you may add hostssl connections from the outside world via md5 or gssapi (Kerberos), or sspi if it's a Windows DB host.
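As a rough pg_hba.conf sketch of that combination (DB, user, and address values are placeholders; order matters, since the first matching line wins):
# webapp to its own DB, with a password, over local sockets
local   webapp_db   webapp_user               md5
# local unix users to any DB via OS identity
local   all         all                       peer
# outside world, TLS only
hostssl all         all          0.0.0.0/0    md5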
Authentication methods aren't an all-or-nothing thing. There's a reason it's easy to provide a list of alternatives and have the first matching one picked.

Get Azure public IP address from deployed app

I'm implementing PASV mode in an FTP server, and I send the client the IP address and port of the data endpoint. This is stupid because the IP is actually the one the client is already connecting to, so there are two options:
How could I get the public IP address from a given instance? Not the VIP, but the public one.
How could I get the original target IP address that the user used from a Socket object, considering routers and load balancers in the middle? :P
An answer to either of these questions would do, although there is another way that could work... may I get the public IP address by doing a DNS lookup of myapp.cloudapp.net?
A fourth option would be to use the Azure Management API library... but that's too much trouble :P.
Cheers.
Not sure if you ever figured this out, but here's my take on it. The individual role instances are all behind the Windows Azure load balancer and have no idea what the original, outward-facing IP address is. Also, there's no Management API call that returns the IP address: Get Deployment returns the URL but not the IP address. I think the only option is going to be a DNS lookup.
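(For what it's worth, the lookup itself is trivial; from a console it's just:
nslookup myapp.cloudapp.net
and the platform resolver in your code will return the same address.)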
Having said that: I don't think you can host a passive FTP server in your role instance (at least not elegantly). You may open up to 25 input endpoints on your role (up from 5; see my recent blog post about this update), but there's manual work involved in the configuration. I don't know if your FTP application lets you limit its port range to such a small number of ports. Also:
You'd have to define each port as its own input endpoint (this is the manual-labor part I mentioned); input endpoints don't allow a port range to be specified, unlike internal endpoints.
You'd have to specify the port number that's used internally, and the port numbers would need to be sequential (see the sketch below).
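Each endpoint entry in ServiceDefinition.csdef would look something like this (the name and ports are made up for illustration):
<InputEndpoint name="FtpPasv5001" protocol="tcp" port="5001" localPort="5001" />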
One last thing on FTP: you should be able to host an SFTP server with no trouble, since all traffic comes through one port.
The hack that I'm contemplating right now is to retrieve http://www.icanhazip.com/. It isn't elegant and is subject to the availability of that service, but it gets the job done. A better solution would be appreciated!
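(That retrieval is a one-liner from a console, e.g.:
curl -s http://www.icanhazip.com/
which returns your outward-facing IP as plain text.)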