How to load Postgres QgsVectorLayer

I have a QGIS script in which I am trying to load a vector layer that is stored in a Postgres database. When I print the result of the layer's isValid() method I get False. Here is my code:
from qgis.core import *
db_client = 'postgres'
db_host = 'localhost'
db_port = '5432'
db_name = 'database'
db_user = 'user'
db_password = 'pass123'
db_schema = 'public'
tablename = 'Geo_Layer'
geometrycol = 'geom'
tract_number_index = 3
QgsApplication.setPrefixPath('/usr/bin/qgis', True)
qgs = QgsApplication([], False)
qgs.initQgis()
geo_uri = QgsDataSourceUri()
geo_uri.setConnection(db_host, db_port, db_name, db_user, db_password)
geo_uri.setDataSource(db_schema, tablename, geometrycol, '', 'id')
geo_layer = QgsVectorLayer(geo_uri.uri(False), "Test", "postgres")
# Other configurations I have tried
# geo_layer = QgsVectorLayer(geo_uri.uri(), "Test", "postgres")
# geo_layer = QgsVectorLayer(geo_uri.uri(), "Test", "ogr")
# geo_layer = QgsVectorLayer(geo_uri.uri(False), "Test", "ogr")
print(geo_layer.isValid())
qgs.exitQgis()
I have provided the other QgsVectorLayer configurations I have tried. All print that the layer is not valid.
QGIS Version: 3.16.3-Hannover
Python Version: 3.8.5
Ubuntu Version: 20.04.02 LTS
I have checked my credentials with DBeaver and I am able to connect.
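A quick way to get more detail than a bare False is to print the layer's error after constructing it; a minimal sketch, assuming the geo_layer object from the code above:

# QgsMapLayer.error() returns a QgsError; summary() gives a short
# description of why the provider failed to load the layer.
if not geo_layer.isValid():
    print(geo_layer.error().summary())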

I once faced this issue when my geometry column in PostGIS contained multiple geometry types. In this case you can first query the column for its distinct geometry types, and then construct a QGIS layer for each geometry type:
for geom in geometry_types:
    uri.setDataSource(schema, table, column, "GeometryType(%s) = '%s'" % (column, geom))
    vlayer = QgsVectorLayer(uri.uri(), layer_name, "postgres")
    print(vlayer.isValid())
You can check for the geometry types in PostGIS using the following query:
query = """SELECT DISTINCT(GeometryType("%s"::geometry)) FROM "%s";""" % (column, table)
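Putting the two pieces together, a minimal sketch that queries the distinct geometry types with psycopg2 and builds one layer per type (assuming psycopg2 is installed and reusing the connection variables and geo_uri from the question; the layer names are illustrative):

import psycopg2
from qgis.core import QgsVectorLayer

# Fetch the distinct geometry types stored in the column.
conn = psycopg2.connect(host=db_host, port=db_port, dbname=db_name,
                        user=db_user, password=db_password)
cur = conn.cursor()
cur.execute('SELECT DISTINCT GeometryType("%s"::geometry) FROM "%s";'
            % (geometrycol, tablename))
geometry_types = [row[0] for row in cur.fetchall()]
conn.close()

# Build one layer per geometry type, filtering the source on that type.
for geom in geometry_types:
    geo_uri.setDataSource(db_schema, tablename, geometrycol,
                          "GeometryType(%s) = '%s'" % (geometrycol, geom), 'id')
    vlayer = QgsVectorLayer(geo_uri.uri(False), "Test_%s" % geom, "postgres")
    print(geom, vlayer.isValid())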

Related

Error saying PostgreSQL database table doesn't exist when it is created in the entrypoint script - DOCKER COMPOSE and POSTGRESQL

I am completely new to Docker and Docker Compose, and I have encountered an issue which I don't know how to solve.
I am trying to make a simple app which shows some items from my existing Postgres database on localhost:8080. Everything works fine locally, so the next step is to put this app into Docker containers using Docker Compose. I used pg_dump to get an apartments.sql file, which I am trying to load into the Postgres Docker container using volumes, but I am getting an error saying that the table I am trying to access doesn't exist. My app code is:
import psycopg2
from flask import Flask, render_template

app = Flask(__name__)

def get_db_connection():
    conn = psycopg2.connect(host='db',
                            user='postgres',
                            password='pass',
                            port='5432')
    return conn

@app.route('/')
def index():
    conn = get_db_connection()
    cur = conn.cursor()
    cur.execute('SELECT * FROM maki;')
    # save the data into the variable called items
    items = cur.fetchall()
    new_items = tuples_to_list(items)  # helper defined elsewhere in the app
    cur.close()
    conn.close()
    return render_template('index.html', new_items=new_items)

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=8080, debug=True)
When I run it locally, it connects to my local database and works normally.
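One detail worth noting: psycopg2.connect is not given a dbname here, and libpq then defaults to a database named after the user (postgres), while the Compose file below loads the dump into a database called apartments. A minimal sketch of the same helper with an explicit database name (the dbname value comes from POSTGRES_DB below):

import psycopg2

def get_db_connection():
    # Name the target database explicitly instead of relying on libpq's
    # default, which is a database named after the connecting user.
    conn = psycopg2.connect(host='db',
                            dbname='apartments',
                            user='postgres',
                            password='pass',
                            port='5432')
    return conn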
My docker-compose.yml file is:
version: '3.8'

services:
  web:
    build: ./flask_app
    ports:
      - 8080:8080
    volumes:
      - ./:/app
    depends_on:
      - db
  db:
    image: postgres
    ports:
      - 5432:5432
    environment:
      - POSTGRES_PASSWORD=pass
      - POSTGRES_USER=postgres
      - POSTGRES_DB=apartments
    volumes:
      - data:/var/lib/postgresql/data
      - ./data/apartments.sql:/docker-entrypoint-initdb.d/apartments.sql
    container_name: postgresdb

volumes:
  data:
    driver: local
where ./data/apartments.sql is the dumped file, which contains a table named maki. I get the following error:
psycopg2.errors.UndefinedTable: relation "maki" does not exist
LINE 1: SELECT * FROM maki;
I am not sure if this is the right approach. In general, how should I do this so that Docker Compose works properly and shows me the items on localhost:8080? And what am I supposed to do with the index.html file in Docker?
My dumped apartments.sql file:
--
-- PostgreSQL database dump
--
-- Dumped from database version 13.7
-- Dumped by pg_dump version 13.7
SET statement_timeout = 0;
SET lock_timeout = 0;
SET idle_in_transaction_session_timeout = 0;
SET client_encoding = 'UTF8';
SET standard_conforming_strings = on;
SELECT pg_catalog.set_config('search_path', '', false);
SET check_function_bodies = false;
SET xmloption = content;
SET client_min_messages = warning;
SET row_security = off;
SET default_tablespace = '';
SET default_table_access_method = heap;
--
-- Name: maki; Type: TABLE; Schema: public; Owner: postgres
--
CREATE TABLE public.maki (
name character varying(255),
image text
);
ALTER TABLE public.maki OWNER TO postgres;
--
-- Data for Name: maki; Type: TABLE DATA; Schema: public; Owner: postgres
--
COPY public.maki (name, image) FROM stdin;
Prodej bytu 3+kk 79\\\\u00a0m\\\\u00b2\\ https://d18-a.sdn.cz/d_18/c_img_gY_m/AQF296.jpeg?fl=res,400,300,3|shr,,20|jpg,90
Prodej bytu 3+kk 60\\\\u00a0m\\\\u00b2\\ https://d18-a.sdn.cz/d_18/c_img_gV_p/VBhduS.jpeg?fl=res,400,300,3|shr,,20|jpg,90
Prodej bytu 3+kk 98\\\\u00a0m\\\\u00b2\\ https://d18-a.sdn.cz/d_18/c_img_gT_o/kPyBHP.jpeg?fl=res,400,300,3|shr,,20|jpg,90
Prodej bytu 3+kk 120\\\\u00a0m\\\\u00b2 (Mezonet)\\ https://d18-a.sdn.cz/d_18/c_img_gX_l/r7TKmQ.jpeg?fl=res,400,300,3|shr,,20|jpg,90
Prodej bytu 3+kk 81\\\\u00a0m\\\\u00b2\\ https://d18-a.sdn.cz/d_18/c_img_gT_p/7XDIio.jpeg?fl=res,400,300,3|shr,,20|jpg,90
Prodej bytu 3+kk 55\\\\u00a0m\\\\u00b2\\ https://d18-a.sdn.cz/d_18/c_img_gW_m/eBNBuKo.jpeg?fl=res,400,300,3|shr,,20|jpg,90
Prodej bytu 3+kk 75\\\\u00a0m\\\\u00b2\\ https://d18-a.sdn.cz/d_18/c_img_gS_n/xcgCeK.jpeg?fl=res,400,300,3|shr,,20|jpg,90
\.
--
-- PostgreSQL database dump complete
--
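One way to check whether the init script actually created the table is to connect to the apartments database from the host (the db service publishes port 5432) and list the public tables; a minimal sketch with psycopg2, reusing the credentials from the Compose file:

import psycopg2

# Connect to the database that docker-entrypoint-initdb.d should have
# populated and list the tables in the public schema.
conn = psycopg2.connect(host='localhost', port='5432', dbname='apartments',
                        user='postgres', password='pass')
cur = conn.cursor()
cur.execute("""SELECT table_name FROM information_schema.tables
               WHERE table_schema = 'public';""")
print(cur.fetchall())
conn.close()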

Error creating DB instance: InvalidParameterValue: Invalid DB engine for PostgreSQL DB

I am trying to create a PostgreSQL RDS instance using Terraform.
Here is how my configuration looks:
resource "aws_db_subnet_group" "postgres" {
name = "postgres-subnets"
subnet_ids = ["mysub1","mysub2"]
}
resource "aws_db_instance" "myrds" {
engine = "postgresql"
engine_version = "12.4"
instance_class = "db.t2.micro"
identifier = "myrds"
username = "myuser"
password = "*******"
allocated_storage = 10
storage_type = "gp2"
db_subnet_group_name = "${aws_db_subnet_group.postgres.id}"
}
It fails with the following error:
Error: Error creating DB Instance: InvalidParameterValue: Invalid DB engine
The Terraform documentation should list the engine names that are supported:
engine = "postgresql" is incorrect; the supported value is engine = "postgres".

AWS Terraform postgresql provider: SSL is not enabled on the server

I am trying to create a database in the newly created Postgres RDS instance in AWS with the postgresql provider. The Terraform script I have created is as follows:
resource "aws_db_instance" "test_rds" {
allocated_storage = "" # gigabytes
backup_retention_period = 7 # in days
engine = ""
engine_version = ""
identifier = ""
instance_class = ""
multi_az = ""
name = ""
username = ""
password = ""
port = ""
publicly_accessible = "false"
storage_encrypted = "false"
storage_type = ""
vpc_security_group_ids = ["${aws_security_group.test_sg.id}"]
db_subnet_group_name = "${aws_db_subnet_group.rds_subnet_group.name}"
}
The postgresql provider is as follows:
# Create databases in rds
provider "postgresql" {
  alias    = "alias"
  host     = "${aws_db_instance.test_rds.address}"
  port     = 5432
  username =
  password =
  database =
  sslmode  = "disable"
}

# Create user in rds
resource "postgresql_role" "test_role" {
  name        =
  replication = true
  login       = true
  password    =
}

# Create database rds
resource "postgresql_database" "test_db" {
  name              = testdb
  owner             = "${postgresql_role.test_role.name}"
  lc_collate        = "C"
  allow_connections = true
  provider          = "postgresql.alias"
}
Anyway, I keep getting:
Error: Error initializing PostgreSQL client: error detecting capabilities: error PostgreSQL version: pq: SSL is not enabled on the server
Note: the empty fields are actually filled in, and the RDS instance is created successfully; the problem arises when trying to create the database in RDS with the postgresql provider.
We ran into this issue as well, and the problem was that the password was not defined. It seems that you get the "SSL is not enabled" error whenever the provider has problems connecting. We also had the same problem when the DB host name was missing. You will need to make sure you define all of the fields needed to connect in Terraform (probably database and username too).
Ensuring there was a password set for the postgres user and disabling sslmode did it for me:
sslmode = "disable"

Creating postgres schemas using psycopg cur.execute

My Python application allows users to create schemas with names of their choosing. I need a way to protect the application from SQL injection.
The SQL to be executed reads
CREATE SCHEMA schema_name AUTHORIZATION user_name;
The psycopg documentation (generally) recommends passing parameters to execute like so
conn = psycopg2.connect("dbname=test user=postgres")
cur = conn.cursor()
query = 'CREATE SCHEMA IF NOT EXISTS %s AUTHORIZATION %s;'
params = ('schema_name', 'user_name')
cur.execute(query, params)
But this results in a query with single quotes, which fails:
CREATE SCHEMA 'schema_name' AUTHORIZATION 'user_name';
> fail
Is there a way to remove the quotes, or should I just settle for stripping non-alphanumeric characters from the schema name and call it a day? The latter seems kind of ugly, but should still work.
To pass identifiers use AsIs. But that exposes you to SQL injection:
import psycopg2
from psycopg2.extensions import AsIs

conn = psycopg2.connect(database='cpn')
cursor = conn.cursor()
query = """CREATE SCHEMA %s AUTHORIZATION %s;"""
param = (AsIs('u1'), AsIs('u1; select * from user_table'))
print(cursor.mogrify(query, param).decode())
Output:
CREATE SCHEMA u1 AUTHORIZATION u1; select * from user_table;
Here's a boilerplate that might help. I've used environment variables but you can use a .conf or whatever you like.
Store your connection variables in a .env file:
db_host = "localhost"
db_port = "5432"
db_database = "postgres"
db_user = "postgres"
db_password = "postgres"
db_schema = "schema2"
Load params in your app.py and assign them to variables, then use the variables where required:
import os

import psycopg2
from dotenv import load_dotenv

# Load your environment variables here:
load_dotenv()
db_host = os.environ["db_host"]
db_port = os.environ["db_port"]
db_database = os.environ["db_database"]
db_user = os.environ["db_user"]
db_password = os.environ["db_password"]
db_schema = os.environ["db_schema"]

# Build Connection:
connection = psycopg2.connect(host=db_host,
                              port=db_port,
                              database=db_database,
                              user=db_user,
                              password=db_password
                              )

# Build Query Strings:
CREATE_SCHEMA = f"CREATE SCHEMA IF NOT EXISTS {db_schema};"
CREATE_TABLE1 = f"CREATE TABLE IF NOT EXISTS {db_schema}.table1 (...);"
CREATE_TABLE2 = f"CREATE TABLE IF NOT EXISTS {db_schema}.table2 (...);"
# Create Schema and Tables:
with connection:
    with connection.cursor() as cursor:
        cursor.execute(CREATE_SCHEMA)
        cursor.execute(CREATE_TABLE1)
        cursor.execute(CREATE_TABLE2)
As of psycopg2 >= 2.7, psycopg2.sql can be used to compose dynamic statements, which also guards against SQL injection.
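A minimal sketch of that approach, applied to the schema-creation statement from the question (connection parameters are illustrative):

import psycopg2
from psycopg2 import sql

conn = psycopg2.connect("dbname=test user=postgres")
cur = conn.cursor()

# sql.Identifier renders the names as quoted identifiers rather than
# string literals, so user-supplied values cannot break out of the statement.
query = sql.SQL("CREATE SCHEMA IF NOT EXISTS {} AUTHORIZATION {};").format(
    sql.Identifier('schema_name'),
    sql.Identifier('user_name'))
cur.execute(query)
conn.commit()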

How to know from what table a record is retrieved in Sphinx?

from sphinx.conf:
source src0 {
    type            = pgsql
    sql_host        = localhost
    sql_user        = <db user>
    sql_pass        = <pwd>
    sql_db          = <db name>
    sql_port        = 5432
    sql_query       = \
        SELECT id, header, text, 'app_main' as table_name \
        FROM app_main
    sql_query_info  = SELECT * FROM app_main WHERE id=$id
    sql_attr_string = table_name
}

source src1 {
    type            = pgsql
    sql_host        = localhost
    sql_user        = <db user>
    sql_pass        = <pwd>
    sql_db          = <db name>
    sql_port        = 5432
    sql_query       = \
        SELECT id, header, text, 'app_product' as table_name \
        FROM app_product
    sql_query_info  = SELECT * FROM app_product WHERE id=$id
    sql_attr_string = table_name
}

index global_index {
    source       = src0
    source       = src1
    path         = D:/blizzard/Projects/Python/Web/moz455/app/sphinx/data/global_index
    docinfo      = extern
    charset_type = utf-8
}
Command
client.Query(S, '*')
returns
{'status': 0, 'matches': [{'id': 5, 'weight': 30, 'attrs': {}}], 'fields': ['header', 'text'], 'time': '0.000', 'total_found': 1, 'warning': '', 'attrs': [], 'words': [{'docs': 1, 'hits': 2, 'word': 'styless'}], 'error': '', 'total': 1}
Why is the attrs dict empty? Is this the right way to get the table name, and if not, what is?
Make sure you rebuild the index after changing the config file.
It is best to restart Sphinx after changing the config.
Specify the actual index name(s) in the query rather than just using '*'; all indexes queried should have the required attribute(s). See the sketch below.
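A minimal sketch of that last point, querying the named index with the standard sphinxapi Python client (the server host/port are assumptions; 9312 is the default searchd API port):

import sphinxapi

client = sphinxapi.SphinxClient()
client.SetServer('localhost', 9312)

# Query global_index directly instead of '*'; every source in it declares
# table_name via sql_attr_string, so each match carries the attribute.
result = client.Query('styless', 'global_index')
if result:
    for match in result['matches']:
        print(match['id'], match['attrs'].get('table_name'))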