SQL injection problem to retrieve tables from a database - sql-injection

Hello, why do I receive this kind of error when I test a website for this vulnerability: "it seem information_schema table does not exist! trying to guess tables! total tables_found 0"? The information_schema database does exist, and so do the other databases.

Related

Question: Access view from Oracle DB in PostgreSQL and insert into table in Oracle DB from PostgreSQL

For a long time I have been working only with Oracle Databases and I haven't had much contact with PostgreSQL.
So now, I have a few questions for people who are closer to Postgres.
Is it possible to create a connection from Postgres to Oracle (oracle_fdw?) and perform selects on views in a different schema than the one you connected to?
Is it possible to create a connection from Postgres to Oracle (oracle_fdw?) and perform inserts on tables in the same schema as the one you connected to?
Ad 1:
Yes, certainly. Just define the foreign table as
CREATE FOREIGN TABLE view_1_r (...) SERVER ...
OPTIONS (table 'VIEW_1', schema 'USERB');
Ad 2:
Yes, certainly. Just define a foreign table on the Oracle table and insert into it. Note that bulk inserts work, but won't perform well, since there will be a round trip between PostgreSQL and Oracle for each row inserted.
Both questions indicate a general confusion between a) the Oracle user that you use to establish the connection and b) the schema of the table or view that you want to access. These things are independent: The latter is determined by the schema option of the foreign table definition, while the former is determined by the user mapping.
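To make the two parts concrete, here is a minimal sketch; the server name, connection string, user names, column lists and Oracle object names are assumptions, not taken from the question:

CREATE EXTENSION oracle_fdw;

-- The connection itself (dbserver and credentials are placeholders).
CREATE SERVER oracle_srv FOREIGN DATA WRAPPER oracle_fdw
  OPTIONS (dbserver '//oracle-host:1521/ORCL');

-- a) the Oracle user used to establish the connection
CREATE USER MAPPING FOR CURRENT_USER SERVER oracle_srv
  OPTIONS (user 'USERA', password 'secret');

-- b) the schema of the object you access, set per foreign table
CREATE FOREIGN TABLE view_1_r (id integer, name text)
  SERVER oracle_srv
  OPTIONS (table 'VIEW_1', schema 'USERB');   -- ad 1: select from a view in another schema

CREATE FOREIGN TABLE tab_1_w (id integer, name text)
  SERVER oracle_srv
  OPTIONS (table 'TAB_1', schema 'USERA');    -- ad 2: insert into a table in "your" schema

SELECT * FROM view_1_r;
INSERT INTO tab_1_w VALUES (1, 'x');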

postgresql / postgres grant permissions to everything in the current schema

I know I can run this:
grant select on all tables in schema whatever to "my_dev_group";
but the way I've built our CI system, we use lots of different schemas, so the schema is only known at the time of running the tests, not when the script is written. I.e. we run the same script against lots of different test schemas and we set the schema name at the start...
so I am trying to achieve the equivalent of the above grant statement - but to operate on the current schema - without any success.
i.e. all of these were spectacularly unsuccessful:
grant select on all tables in schema current to "my_dev_group";
grant select on all tables in schema current_schema() to "my_dev_group";
grant select on all tables to "my_dev_group";
Is it possible to do this without going into the world of executing dynamic SQL, which I'd prefer not to do?
SQL doesn't have a notion of a “current schema”. Consequently, there is no syntax to change the permissions of all tables in the current schema.
You could loop through all schemas in the search_path or all schemas in the database and run a GRANT statement for each one. Make sure to skip system schemas like information_schema and pg_catalog.
Alternatively, you could use the current_schema() function to retrieve the first existing schema on your search_path and change permissions for that schema.
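A minimal sketch of that approach, reusing the "my_dev_group" role from the question; note that it still resorts to dynamic SQL inside a DO block, because GRANT itself will not accept current_schema():

-- Grants SELECT on all tables in the first existing schema on the search_path.
-- Dynamic SQL is unavoidable here because GRANT won't accept a function call.
DO $$
BEGIN
   EXECUTE format(
      'GRANT SELECT ON ALL TABLES IN SCHEMA %I TO "my_dev_group"',
      current_schema()
   );
END
$$;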

After numerous (error-free) inserts to Aurora (PostgreSQL) RDS serverless cluster with SQLAlchemy I can't see the table. What happened to my data?

After changes to some Terraform code, I can no longer access the data I've added into an Aurora (PostgreSQL) database. The data gets added into the database as expected without errors in the logs but I can't find the data after connecting to the database with AWS RDS Query Editor.
I have added thousands of rows with Python code that uses the SQLAlchemy/PostgreSQL engine object to insert a batch of rows from a mappings dictionary, like so:
if (count % batch_size) == 0:
    self.engine.execute(Building.__table__.insert(), mappings)
    self.session.commit()
The logs from this data ingest show no errors; the commits all appear to have completed successfully. So the data was inserted someplace, I just can't work out where, as it's not showing up in the AWS Console RDS Query Editor. I run the SQL below to find the table, and it returns zero rows:
SELECT * FROM information_schema.tables WHERE table_name = 'buildings'
This has worked as expected before (i.e. I could see the data in the Aurora database via the Query Editor) so I'm trying to work out which of the recently modified Terraform settings have caused the issue.
Where else can I look to find where the data was inserted, assuming that it was actually inserted somewhere? If I can work that out it may help reveal the culprit.
I suspect misleading capitalization. Like "Buildings". Search again with:
SELECT * FROM information_schema.tables WHERE table_name ~* 'building';
Or:
SELECT * FROM pg_catalog.pg_tables WHERE tablename ~* 'building';
Or maybe your target wasn't a table? You can "write" to simple views. Check with:
SELECT * FROM pg_catalog.pg_class WHERE relname ~* 'building';
None of this is specific to RDS. It's the same in plain Postgres.
If the last query returns nothing, you are in the wrong database. (You are aware that there can be multiple databases in one DB cluster?) Or you have a serious problem.
See:
How to check if a table exists in a given schema
Are PostgreSQL column names case-sensitive?
Once I logged more information about the connection, I discovered that the database name being used was incorrect, so I had been querying the Aurora instance using the wrong database name. Once I worked this out and used the correct database name, the SELECT statements in AWS RDS Query Editor worked as expected.
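A quick sanity check that would have surfaced this, run over the same connection the ingest used:

-- Which database and schema is this connection actually pointed at?
SELECT current_database(), current_schema();

-- Which databases exist on this cluster?
SELECT datname FROM pg_database WHERE NOT datistemplate;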

Linking MS Access table to PG Admin schema

I would like to link an MS Access table to a table in pgAdmin, if that is possible, for use in a Postgres query. I have searched for an answer, but all I can find are answers for listing Postgres tables in Access, which is almost the opposite of what I want to do.
I want to be able to access the data entered in an Access form without having to continually import the data into a table in pgAdmin.
I'm not even sure that is possible, but any method that is easier than importing the table into pgAdmin every day would be useful.
Thanks
Gary
Try the PostgreSQL OGR Foreign Data Wrapper. It's built for spatial data, but it works perfectly well with non-spatial tables. If you have the PostGIS extension installed, you will already have it.
https://github.com/pramsey/pgsql-ogr-fdw
There are several examples on that page, but the command
ogr_fdw_info -s <pathToAccessFile> -l <tablename>
will return a CREATE SERVER and a CREATE FOREIGN TABLE statement, which you can edit as required and then run in pgAdmin.
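For orientation, the generated statements look roughly like the following; the server name, file path, GDAL format and column list here are assumptions, so use whatever ogr_fdw_info prints for your file:

-- Hypothetical ogr_fdw_info output for an Access file, edited to taste.
CREATE SERVER access_server
  FOREIGN DATA WRAPPER ogr_fdw
  OPTIONS (datasource 'C:\data\entries.mdb', format 'ODBC');

CREATE FOREIGN TABLE form_entries (
  fid bigint,
  entry_date date,
  notes varchar
)
  SERVER access_server
  OPTIONS (layer 'FormEntries');

-- The Access data is then available to any Postgres query:
SELECT * FROM form_entries;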

Joining Results from Two Separate Databases

Is it possible to JOIN rows from two separate postgres databases?
I am working with a system with a couple of databases on one server, and sometimes I really need such a feature.
According to http://wiki.postgresql.org/wiki/FAQ
There is no way to query a database other than the current one.
Because PostgreSQL loads database-specific system catalogs, it is
uncertain how a cross-database query should even behave.
contrib/dblink allows cross-database queries using function calls. Of
course, a client can also make simultaneous connections to different
databases and merge the results on the client side.
EDIT: 3 years later (March 2014), this FAQ entry has been revised and is more helpful:
How do I perform queries using multiple databases?
There is no way to directly query a database other than the current
one. Because PostgreSQL loads database-specific system catalogs, it is
uncertain how a cross-database query should even behave.
The SQL/MED support in PostgreSQL allows a "foreign data wrapper" to
be created, linking tables in a remote database to the local database.
The remote database might be another database on the same PostgreSQL
instance, or a database half way around the world, it doesn't matter.
postgres_fdw is built-in to PostgreSQL 9.3 and includes read/write
support; a read-only version for 9.2 can be compiled and installed as
a contrib module.
contrib/dblink allows cross-database queries using function calls and
is available for much older PostgreSQL versions. Unlike postgres_fdw
it can't "push down" conditions to the remote server, so it'll often
land up fetching a lot more data than you need.
Of course, a client can also make simultaneous connections to
different databases and merge the results on the client side.
Forget about dblink!
Say hello to Postgres_FDW:
To prepare for remote access using postgres_fdw:
1. Install the postgres_fdw extension using CREATE EXTENSION.
2. Create a foreign server object, using CREATE SERVER, to represent each remote database you want to connect to. Specify connection information, except user and password, as options of the server object.
3. Create a user mapping, using CREATE USER MAPPING, for each database user you want to allow to access each foreign server. Specify the remote user name and password to use as user and password options of the user mapping.
4. Create a foreign table, using CREATE FOREIGN TABLE or IMPORT FOREIGN SCHEMA, for each remote table you want to access. The columns of the foreign table must match the referenced remote table. You can, however, use table and/or column names different from the remote table's, if you specify the correct remote names as options of the foreign table object.
Now you need only SELECT from a foreign table to access the data stored in its underlying remote table.
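A condensed sketch of those four steps; the server, user and table names are placeholders:

-- 1. Install the extension in the local database.
CREATE EXTENSION postgres_fdw;

-- 2. One server object per remote database.
CREATE SERVER remote_db FOREIGN DATA WRAPPER postgres_fdw
  OPTIONS (host 'localhost', port '5432', dbname 'db2');

-- 3. Map the local user to a remote user and password.
CREATE USER MAPPING FOR CURRENT_USER SERVER remote_db
  OPTIONS (user 'remote_user', password 'secret');

-- 4. Define (or import) the foreign tables you need.
CREATE FOREIGN TABLE table2 (id integer, code text)
  SERVER remote_db
  OPTIONS (schema_name 'public', table_name 'table2');
-- alternatively: IMPORT FOREIGN SCHEMA public FROM SERVER remote_db INTO public;

-- Then join it like any local table.
SELECT *
FROM table1 t1
JOIN table2 t2 ON t2.id = t1.id;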
It's really useful even on large data.
Yes, it is possible to do this using dblink albeit with significant performance considerations.
The following example will require the current SQL user to have permissions on both databases. If db2 is not located on the same cluster, then you will need to replace dbname=db2 with the full connection string defined in the dblink documentation.
SELECT *
FROM table1 tb1
LEFT JOIN (
    SELECT *
    FROM dblink('dbname=db2', 'SELECT id, code FROM table2')
         AS tb2(id int, code text)
) AS tb2 ON tb2.id = tb1.id;
If table2 is very large, you could have performance issues because the sub-query loads up the entire table2 before performing the join.
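One way to soften that, sketched here with assumed join column names and an arbitrary predicate, is to push the restriction into the query text sent through dblink, so only the rows you need cross the wire:

-- Push the filter into the remote query (the id/code columns and the
-- predicate are assumptions for illustration).
SELECT tb1.*, tb2.code
FROM table1 tb1
LEFT JOIN dblink('dbname=db2',
                 'SELECT id, code FROM table2 WHERE id < 1000')
       AS tb2(id int, code text)
       ON tb2.id = tb1.id;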
No, you can't. You could use dblink to connect from one database to another, but that won't help if you're looking for JOINs.
Can't you use different schemas within a single database to store all your data?
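If consolidating into one database with several schemas is an option, the join needs nothing special; the schema and table names below are purely illustrative:

-- Tables in different schemas of the same database join directly.
SELECT a.id, a.name, b.amount
FROM app_a.customers a
JOIN app_b.invoices  b ON b.customer_id = a.id;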
Just a few steps and you can reach the goal: follow this reference step by step.
We are connected to db2, which has table tbl2 with column col2; there is also db1 with table tbl1 and column col1.
While connected to the second db, i.e. db2, just copy and paste steps 1-7 (make sure you use the correct username, password and, of course, database name):
1. CREATE EXTENSION dblink;
2. SELECT pg_namespace.nspname, pg_proc.proname
   FROM pg_proc, pg_namespace
   WHERE pg_proc.pronamespace = pg_namespace.oid
     AND pg_proc.proname LIKE '%dblink%';
3. SELECT dblink_connect('host=localhost user=postgres password=postgres dbname=db1');
4. CREATE FOREIGN DATA WRAPPER postgres VALIDATOR postgresql_fdw_validator;
5. CREATE SERVER postgres2 FOREIGN DATA WRAPPER postgres OPTIONS (hostaddr '127.0.0.1', dbname 'db1');
6. CREATE USER MAPPING FOR postgres SERVER postgres2 OPTIONS (user 'postgres', password 'postgres');
7. SELECT dblink_connect('postgres2');
Now you can SELECT the data of database one from database two and even join the results of both databases:
SELECT *
FROM dblink('postgres2', 'SELECT col1, um_name FROM public.tbl1')
     AS data(col1 integer, um_name text), tbl2
WHERE data.col1 = tbl2.col2;
You can also check this: How to join two tables of different databases together in PostgreSQL (working in version 9.4).
You need to use dblink. As araqnid mentioned above, something like this works fine:
SELECT ST.Table_Name, ST.Column_Name, DV.Table_Name, DV.Column_Name, *
FROM information_schema.Columns ST
FULL OUTER JOIN dblink('dbname=otherdatabase',
                       'SELECT Table_Name, Column_Name FROM information_schema.Columns')
     AS DV(Table_Name text, Column_Name text)
  ON ST.Table_Name = DV.Table_Name
 AND ST.Column_Name = DV.Column_Name
WHERE ST.Column_Name IS NULL OR DV.Column_Name IS NULL
You have to use the dblink extension of PostgreSQL.
Reference taken from this article:
The dblink extension of PostgreSQL is used to connect one database to another database.
Install the dblink extension:
CREATE EXTENSION dblink;
Verify DbLink:
SELECT pg_namespace.nspname, pg_proc.proname
FROM pg_proc, pg_namespace
WHERE pg_proc.pronamespace=pg_namespace.oid
AND pg_proc.proname LIKE '%dblink%';
I have already prepared a full demonstration of this. Please visit my post to learn, step by step, how to execute a cross-database query in PostgreSQL.
Cannot be done? Of course it can, without special extensions. In our case, we had to compare two tables from different database servers, e.g. ACC and PROD, hence an even harder case than in most answers, especially because ACC and PROD are deliberately on different servers to create a barrier, so you will not easily gain enough rights to perform a GRANT USAGE ON FOREIGN SERVER.
The obvious solution is to export both tables and import them into the same database, e.g. DEV, or your own local db, under appropriate names, e.g. table1_acc and table1_prod, or in schemas like acc and prod. Then you can JOIN those with no special problems.
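A rough sketch of that route using COPY; the file paths, table names and column list are assumptions, and server-side COPY needs file access on the database host (psql's \copy is the client-side equivalent):

-- On ACC: dump the table to a file.
COPY table1 TO '/tmp/table1_acc.csv' WITH (FORMAT csv, HEADER);

-- On DEV (or your local db): recreate it under a distinguishing name and load it.
CREATE TABLE table1_acc (id integer, payload text);
COPY table1_acc FROM '/tmp/table1_acc.csv' WITH (FORMAT csv, HEADER);

-- Repeat for PROD as table1_prod, then compare the two locally.
SELECT a.id AS acc_id, p.id AS prod_id
FROM table1_acc a
FULL JOIN table1_prod p USING (id)
WHERE a.id IS NULL OR p.id IS NULL;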