How to turn off logging for certain Postgres function calls - postgresql

I have a couple of operations (login, reset-password) implemented as functions. Is it possible to make sure that these functions are not logged, either by Postgres itself or by any extensions that may be installed?

You cannot absolutely eliminate the possibility of logging parameters. Your log files should only be readable by people you really trust (generally the same people who have superuser on the database). Also, you might consider reading the salt and hash from the database and doing the rehashing and comparison in the app server, rather than sending the password in the clear to the database.
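A sketch of that approach, assuming a hypothetical users table with password_salt and password_hash columns; the clear-text password then never appears in any SQL statement:

-- hypothetical schema; $1 is the username supplied at login
SELECT password_salt, password_hash
FROM users
WHERE username = $1;
-- the app server then recomputes hash(salt || supplied_password)
-- and compares it with password_hash; the password itself never
-- travels to the database, so it cannot show up in its logs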

Related

Does PostgreSQL have any default password policy?

I've looked around and haven't found the basic password requirements, if any, for PostgreSQL. That is, characters allowed, length, casing, etc.
Will someone please point me to these, if they exist?
Thanks
The best you can do on the server is to use the passwordcheck contrib module. You will probably have to hack it up to fit your needs.
However, that won't be able to enforce password policies in general, because the server never sees the clear text password unless you change it with
ALTER ROLE xy PASSWORD 'clear_text';
which is not recommended. Changing the password with tools like psql's \password command will hash the password before it is sent to the server, so the server cannot enforce any password rules.
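To illustrate the difference with the role from the example above:

-- the clear-text password reaches the server and may end up in its logs
ALTER ROLE xy PASSWORD 'clear_text';
-- psql's \password prompts for the password and hashes it client-side,
-- so the server (and its logs) only ever see the hash
\password xy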
You would have to check the password on the client, but naturally the client is not under your control, unless you severely restrict what people can do on their machines (and people usually find ways around such restrictions).
So there is really no way to do what you want.
What you can do is enable cracklib support in the passwordcheck module and test clear-text passwords against a dictionary that way.
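Loading the module itself is a one-line change (shared_preload_libraries is a standard postgresql.conf parameter; the cracklib check is a compile-time option of passwordcheck):

# in postgresql.conf; requires a server restart
shared_preload_libraries = 'passwordcheck'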
For good security, use something like LDAP or Kerberos authentication and implement your password policy there.

PostgreSQL keeps clearing the postgres user's password

I am having a big problem that is quite difficult to search for.
I have an Ubuntu server, on which I have installed:
GitLab (hosts all the projects)
PostgreSQL (independent of the GitLab database; used for a personal project)
Tomcat with a web app (Spring Boot, which uses PostgreSQL)
This server is still for testing; it is used for a few specific things (I mean, its use and access is limited and controlled).
I am having the following problem:
Very frequently, almost every day, the password of the postgres user on the PostgreSQL server gets "erased". Nobody does it manually; it just happens on its own. I notice because the application stops responding, and when I access PostgreSQL I find that the postgres user has no password.
I have looked in many places and can't find anything. I really don't know where else to look. If this has happened to you, or you have any information about it, I would be grateful if you could share it.
------More information added----------
I was looking at the postgres logs from before authentication stopped working, and I see this:
There are times when no one could have been using the Spring Boot server, for example:
--2020-01-17 00:30:21.286
And also the two log entries that appear just before that moment. Could something be deleting my password?
Thank you.
PostgreSQL does not randomly delete its own passwords, and I really doubt Tomcat or GitLab do either. Indeed, they shouldn't even have access to the server as the postgres user or any other superuser, so they couldn't do this even if they wanted to.
It seems likely that there is an intruder in your system. After gaining access, intruders often create their own user with their own password; disabling your normal superuser from logging on is then a common way to try to prevent you from regaining control and kicking them out. Do any users exist that you do not recognize?
The bit of the log file you posted clearly shows someone trying to guess your password, starting at 2:58. You aren't logging IP addresses (%h), so it doesn't show where they are coming from. It doesn't show that they succeeded, but unless you have log_connections = on, it wouldn't show successes anyway.
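As a first check, you could look for roles you did not create and turn on enough logging to see where connections come from; pg_roles is a standard system view, and log_connections and log_line_prefix are ordinary postgresql.conf settings:

-- list all roles; look for any you do not recognize
SELECT rolname, rolsuper, rolcanlogin FROM pg_roles;

# in postgresql.conf: log successful connections and client IPs
log_connections = on
log_line_prefix = '%m [%p] %h %u %d '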

Does pgcrypto leave the private key visible in the logs?

Somebody told me that if I use pgcrypto's pgp_pub_decrypt() function in my queries, the logs will reveal the secret key used to decrypt my data:
SELECT pgp_pub_decrypt(string_to_decrypt, private_key)
I cannot check that because I don't have access to the logs, but if it is true, then from my point of view it should be considered a security issue. Is this true?
Yes, this is true, assuming you are logging statements.
It is also probably visible in pg_stat_activity to a superuser who is looking at the right moment.
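To illustrate: any superuser who looks at the right moment can see the full statement, key included, through the standard pg_stat_activity view:

-- run as superuser; shows the current query text of every session
SELECT pid, usename, query
FROM pg_stat_activity
WHERE query ILIKE '%pgp_pub_decrypt%';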
Of course, a superuser could also install an extension to secretly log just crypto-related function calls; that's the nature of having superuser access.
If you don't control the server your database is running on and don't trust the people who do, then you shouldn't store sensitive information in it.

Unprotected MongoDB server?

I apologize beforehand if this is a stupid or a silly question in any way. Let's just say that I stumbled upon an unprotected MongoDB server belonging to a big company. I tried using a client to connect to the server, without entering a username and password and it connected successfully. Now, I'm not sure if I have access to the data inside the databases, but I can see that there are a few databases on it, and I believe that it's possible for me to create and drop databases on it (haven't tried). How big of a security flaw does this constitute? Please note that I haven't tampered or messed around with anything, I'm just asking so I can discern if this is indeed a security flaw that I should report, or a false positive. Shouldn't such access be limited to database administrators?
I see where this is going; there may be several cases:
It might be a development server and the data is fake.
It may be abandoned.
They may be running some maintenance during which some lazy devs opened up the ports and security.
Most production databases are locked down well enough; since you call it a "big" company, they have most probably done so.
Whatever the case may be, depending on the company you could even be slapped with criminal notices; not every company takes bug reports from third parties the right way. If they have a proper bug bounty program, though, they may offer you a reward. Tread with caution.

Are "best practices" regarding connection handle re-use and database user design mutually exclusive?

SO says this may be subjective. I'm hoping not; I just can't seem to understand how this works in practice, and it seems like a specific enough technical question with, I hope, a definitive answer.
Context: LAPP stack.
I've read that using a single database user as the login for all connections to the database, and handling security yourself from there, is a bad idea. Databases have sufficient security models and it makes sense to use them.
Database handles have some resource cost associated with them, hence the existence of Apache::DBI, DBIx::Connector, and DBI::connect_cached(), to re-use a recent connection to a database. Making use of them should make a web app faster by avoiding the cost of connecting to a database.
The reason these seem to be mutually exclusive best practices is that, in my understanding, the first implies that every database connection is made with separate per-user credentials, which implies (as the Apache::DBI documentation notes) that re-using such connections will likely cause your database backend to quickly run out of connections.
The default maximum number of connections for PostgreSQL is 100.
The default number of servers, multiplied by the number of subprocesses allowed for each, for Apache 2 running with the prefork MPM far exceeds that, so it seems Apache::DBI's docs are right.
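To make the arithmetic concrete (these are stock Apache 2.4 prefork defaults; your distribution's may differ, so treat the numbers as illustrative):

# Apache 2 prefork MPM defaults
StartServers        5
MaxRequestWorkers 256   # each worker can hold its own cached DB handle

# postgresql.conf default
max_connections = 100   # 256 potential handles > 100 allowed connections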
Thus the question: What do people do then, in practice?
Does this mean people using a LAPP stack generally connect using a single database user, and implement their own security/permissions model? Or does it mean they don't pool connections? Or do they choose between these two strategies based on speed vs security needs if they go with a LAPP stack, and if they need both, go with a desktop app or some other connection model?
Or if these are not, in fact, mutually exclusive strategies, what am I missing in my understanding here?
I've read that using a single database user as the login for all connections to the database, and handling security yourself from there, is a bad idea. Databases have sufficient security models and it makes sense to use them.
You probably misread this, or read it in a highly biased location. A more balanced view is (hopefully) this:
Managing perms (ACL or RBAC or other) yourself, in your own tables within the database, is a bloody mess and hard to get right. It can cripple performance, too, if done improperly (think: "select * from table join perms where convoluted_permission_scenario"). Depending on who you ask, you'll get more or less extreme viewpoints, e.g. here's (the very controversial) Zed Shaw: http://vimeo.com/2723800.
Managing perms at the DB level, using the engine's native permission system, is just as much of a bloody mess. Not all engines implement row-level permissions, and even then there occasionally are leaks. For instance, calling a function in a WHERE clause could (can?) leak rows in Postgres (until a recent version?) if RAISE gets called. And frankly, if you go past a superficial analysis of what is going on, it basically amounts to the former, just standardized and (usually) implemented in C.
Managing perms at the app level without a database is also a bloody mess. It'll cripple performance no matter what you do from the moment you need to join outside of SQL, unless you're dealing with trivial amounts of data. If you try it, you'll do fine… until your database grows too large and you basically don't.
So, in short: it's a bloody mess no matter where you manage it. Because permissions are a mess. In addition to the casual and idealistic "Joe needs write access to this set of nodes", you also need to cope with more down to earth scenarios such as "John is going off on vacation for Christmas and needs to temporarily delegate his write permissions on this set of nodes to his assistant Jane". Moreover, whichever scenario you do pick, you need to manage read access (which is usually the most frequent) in such a way that it's fast so you can scale. There's no silver bullet.
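For what it's worth, the vacation-delegation case maps fairly naturally onto PostgreSQL role membership; the role names here are hypothetical:

-- write access to the node set is held by a group role
GRANT node_writers TO jane;      -- delegate while John is away
REVOKE node_writers FROM jane;   -- take it back when he returns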
Moreover, even in the first and last of the above scenarios, it's ideal to have three DB users: one for reads, one for reads/writes, and one for schema changes. Most apps don't, because it's yet another bloody mess to configure your ORM that way; hence the typical one DB user per app.
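A minimal sketch of that three-user split, with illustrative role names and grants:

-- substitute real names and strong passwords
CREATE ROLE app_read  LOGIN PASSWORD 'read-secret';
CREATE ROLE app_write LOGIN PASSWORD 'write-secret';
CREATE ROLE app_ddl   LOGIN PASSWORD 'ddl-secret';

GRANT USAGE ON SCHEMA public TO app_read, app_write;
GRANT SELECT ON ALL TABLES IN SCHEMA public TO app_read;
GRANT SELECT, INSERT, UPDATE, DELETE ON ALL TABLES IN SCHEMA public TO app_write;
GRANT CREATE ON SCHEMA public TO app_ddl;   -- the user that runs migrations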
Anyway, getting back to your question: what people do in practice is use one or two database users (read vs read/write/modify), implement RBAC or ACL within the database itself, and avoid access-restriction logic like the plague on public-facing pages, for performance reasons.