I have imported a shapefile named tc_bf25 using QGIS, and the following is my Python script, written in PyScripter:
import sys
import psycopg2
conn = psycopg2.connect("dbname = 'routing_template' user = 'postgres' host = 'localhost' password = '****'")
cur = conn.cursor()
query = """
ALTER TABLE tc_bf25 ADD COLUMN source integer;
ALTER TABLE tc_bf25 ADD COLUMN target integer;
SELECT assign_vertex_id('tc_bf25', 0.0001, 'the_geom', 'gid')
;"""
cur.execute(query)
query = """
CREATE OR REPLACE VIEW tc_bf25_ext AS
SELECT *, startpoint(the_geom), endpoint(the_geom)
FROM tc_bf25
;"""
cur.execute(query)
query = """
CREATE TABLE node1 AS
SELECT row_number() OVER (ORDER BY foo.p)::integer AS id,
foo.p AS the_geom
FROM (
SELECT DISTINCT tc_bf25_ext.startpoint AS p FROM tc_bf25_ext
UNION
SELECT DISTINCT tc_bf25_ext.endpoint AS p FROM tc_bf25_ext
) foo
GROUP BY foo.p
;"""
cur.execute(query)
query = """
CREATE TABLE network1 AS
SELECT a.*, b.id as start_id, c.id as end_id
FROM tc_bf25_ext AS a
JOIN node1 AS b ON a.startpoint = b.the_geom
JOIN node1 AS c ON a.endpoint = c.the_geom
;"""
cur.execute(query)
query = """
ALTER TABLE network1 ADD COLUMN shape_leng double precision;
UPDATE network1 SET shape_leng = length(the_geom)
;"""
cur.execute(query)
I got an error at the second cur.execute(query). When I went to pgAdmin to check the result, the first cur.execute(query) had not added the new columns to my table, even though no error had occurred there.
What mistake did I make, and how can I fix it?
I am working with PostgreSQL 8.4 and Python 2.7.6 on Windows 8.1 x64.
When using psycopg2, autocommit is False by default. The first two statements both refer to the table tc_bf25, but the first statement makes an uncommitted change to that table. So try running conn.commit() between statements to see if this resolves the issue.
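For example, a minimal sketch of that suggestion, reusing the question's conn and cur (statements abbreviated):
cur.execute(query)  # ALTER TABLE ... / SELECT assign_vertex_id(...)
conn.commit()       # commit so the change is actually persisted

cur.execute(next_query)  # e.g. the CREATE OR REPLACE VIEW statement
conn.commit()
Here next_query is just an illustrative name for the next SQL string in the script.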
You should run each statement individually. Do not combine multiple statements into a semicolon-separated series and run them all at once; it makes error handling and fetching of results much harder.
If you still have the problem once you've made that change, show the exact statement you're having the problem with.
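For example, a minimal sketch of that pattern, reusing the question's conn and cur, with the first batch split into individual statements:
statements = [
    "ALTER TABLE tc_bf25 ADD COLUMN source integer",
    "ALTER TABLE tc_bf25 ADD COLUMN target integer",
    "SELECT assign_vertex_id('tc_bf25', 0.0001, 'the_geom', 'gid')",
]
for stmt in statements:
    cur.execute(stmt)  # an error now points at exactly one statement
conn.commit()          # commit once the whole batch has succeeded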
Just to add to @Talvalin's answer: you can enable autocommit by adding
conn = psycopg2.connect("dbname='mydb' user='postgres' host='localhost' password='****'")
conn.autocommit = True
after you connect to your database using psycopg2
I'm trying to write an UPDATE with a self join and can't seem to get the SQL query right.
Before running the UPDATE, I execute this query to see the existing values:
SELECT DISTINCT ON (purpose) purpose FROM user_assigned_customer
sales_manager
main_contact
representative
administrator
However, when I run this query, it overwrites the purpose value of every row:
UPDATE user_assigned_customer SET purpose = (
SELECT 'main_supervisor' AS purpose FROM user_assigned_customer AS assigned_user
LEFT JOIN app_user ON app_user.id = assigned_user.app_user_id
WHERE app_user.role = 'supervisor'
AND user_assigned_customer.purpose IS NULL
AND assigned_user.id = user_assigned_customer.id
)
Now the first query returns only one purpose:
main_supervisor
Is there a way to write this UPDATE as a self join that sets a custom value?
I think I got it, with the help of a friend.
UPDATE user_assigned_customer SET purpose = 'main_supervisor'
FROM user_assigned_customer AS assigned_user
LEFT JOIN app_user ON app_user.id = assigned_user.app_user_id
WHERE app_user.role = 'supervisor'
AND user_assigned_customer.purpose IS NULL
AND assigned_user.id = user_assigned_customer.id
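This works because PostgreSQL's UPDATE ... FROM joins the target table against the tables in the FROM list and updates only the rows matching the WHERE clause, so rows whose purpose is already set are left untouched.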
I have this Python 3 code:
conn = psycopg2.connect( ... )
curr = conn.cursor()
curr.execute(code)
rows = curr.fetchall()
where code holds the SELECT statement.
After executing this, rows will be a list of tuples containing only the selected row values. How do I run curr.execute in such a way that I also get the corresponding column headers?
Meaning, if I have, say,
Select col1, col2 from table Where some_condition;
I want my rows list to look something like [['col1', 'col2'], [some_val_for_col1, some_val_for_col2], ...]. Any other way of getting these column headers is also fine, but the SELECT query in code shouldn't change.
You have to execute two commands:
curr.execute("Select * FROM people LIMIT 0")
colnames = [desc[0] for desc in curr.description]
curr.execute(code)
You can also follow the steps mentioned in https://kb.objectrocket.com/postgresql/get-the-column-names-from-a-postgresql-table-with-the-psycopg2-python-adapter-756
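In fact, cursor.description is populated by the SELECT itself, so a single execute is enough. A minimal sketch that prepends the headers to the result, reusing the question's curr and code:
curr.execute(code)
colnames = [desc[0] for desc in curr.description]  # names of the result columns
rows = [colnames] + [list(r) for r in curr.fetchall()]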
We connect to our PostgreSQL (RDS) server from our Django backend as well as from Lambda. Sometimes the Django backend queries time out, and I run the following query to see the locks:
SELECT
    pg_stat_activity.client_addr,
    pg_stat_activity.query
FROM pg_class
JOIN pg_locks ON pg_locks.relation = pg_class.oid
JOIN pg_stat_activity ON pg_locks.pid = pg_stat_activity.pid
WHERE pg_locks.granted = 't'
  AND pg_class.relname = 'accounts_user'
This gives me 30 rows of simple SELECT queries executed from Lambda, like this one:
SELECT first_name, picture, username FROM accounts_user WHERE id = $1
Why does this query hold a lock? Should I be worried?
I'm using the pg8000 library to connect from Lambda:
with pgsql.cursor() as cursor:
    cursor.execute(
        """
        SELECT first_name, picture, username
        FROM accounts_user
        WHERE id = %s
        """,
        (author_user_id,),
    )
    row = cursor.fetchone()
    # use the row ...
I opened an issue on GitHub, in case I'm using the library wrong: https://github.com/tlocke/pg8000/issues/16
You can also try reusing the database connection; see https://docs.djangoproject.com/en/2.2/ref/settings/#conn-max-age
DATABASES = {
'default': {
...
'CONN_MAX_AGE': 600, # reuse database connection
}
}
I created a temporary table with SQLAlchemy (on top of a PostgreSQL database) that is going to be joined with a database table. However, in some cases, when a value is the empty string '', Postgres throws the error:
failed to find conversion function from unknown to text
SQLAlchemy assembles everything into the following statement:
[SQL: 'WITH temp_table AS \n(SELECT %(param_1)s AS id, %(param_2)s AS email, %(param_3)s AS phone)\n SELECT campaigns_contact.id, campaigns_contact.email, campaigns_contact.phone \nFROM campaigns_contact JOIN temp_table ON temp_table.id = campaigns_contact.id AND temp_table.email = campaigns_contact.email AND temp_table.phone = campaigns_contact.phone'] [parameters: {'param_1': 83, 'param_2': '', 'param_3': '+1234567890'}]
I assemble the temporary table as follows:
stmts = []
for row in import_data:
    row_values = [literal(row[value]).label(value) for value in values]
    stmts.append(select(row_values))

subquery = union_all(*stmts)
subquery = subquery.cte(name="temp_table")
The problem seems to be this part:
...%(param_2)s AS email...
which, after param_2 is substituted, becomes
...'' AS email...
and causes the error mentioned above.
One way to solve the issue would be to perform a cast:
...''::text AS email...
However, I don't know how to express the ::text cast in SQLAlchemy.
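One possible approach (a sketch, not a tested answer): SQLAlchemy's cast() renders as CAST(... AS TEXT), which is what ''::text means in Postgres. Adapted to the question's loop, with import_data and values as defined there:
from sqlalchemy import Text, cast, literal, select, union_all

stmts = []
for row in import_data:
    # cast(literal(...), Text) renders CAST('' AS TEXT), i.e. ''::text,
    # so the empty string no longer has the unknown type
    row_values = [cast(literal(row[value]), Text).label(value) for value in values]
    stmts.append(select(row_values))

subquery = union_all(*stmts).cte(name="temp_table")
In practice you may want to apply the cast only to the string columns (email and phone here), since casting the integer id to text could break the join on campaigns_contact.id.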
I have a database in PostgreSQL. I want to read some data from it, but I get an error (column anganridref does not exist) when I execute my command.
Here is my NpgsqlCommand:
cmd.CommandText = "select * from angebot,angebotstatus,anrede where anrid=anganridref and anstaid=anganstaidref";
My three tables are angebot, angebotstatus, and anrede. The column names are correct, so I don't understand why that error comes up. Can someone explain why it fails? It is not a problem of upper- and lowercase.
You are not prefixing your column names in the WHERE clause:
select *
from angebot,
     angebotstatus,
     anrede
where anrid = anganridref       -- missing table names for these columns
  and anstaid = anganstaidref
It's also recommended to use an explicit JOIN instead of the old SQL-89 implicit join syntax:
select *
from angebot
  join angebotstatus on angebotstatus.anstaid = angebot.anganstaidref
  join anrede on anrede.anrid = angebot.anganridref