Pyodbc connection with amazon rds postgres database produces error when executing SQL commands (syntax error) - postgresql

I have set up a connection between pyodbc and an AWS RDS PostgreSQL database and have installed psqlodbc (the package that provides the PostgreSQL Unicode(x64) ODBC driver). Everything looks fine until I run a SQL query: it returns a syntax error, but there is nothing wrong with my syntax, and I'm not sure what the issue could be.
This is Python 3.7 by the way.
import pyodbc
mypw = 'skjhaf234234dkjhkjx'
string = 'Driver={PostgreSQL Unicode(x64)};Server=myfakeserveraddress.rds.amazonaws.com;Database=mydb;UID=myusername;PWD='+mypw+';'
connection = pyodbc.connect(string)
c = connection.cursor()
c.execute("SELECT * FROM schema_table.test_table;")
Error Message:
Traceback (most recent call last):
File "", line 1, in
pyodbc.ProgrammingError: ('42601', '[42601] ERROR: syntax error at or near "'schema_table.test_table'";\nError while executing the query (1) (SQLExecDirectW)')
Without the single quotation marks (') around the table name, I get this error:
c.execute("SELECT * from schema_table.test_table")
Traceback (most recent call last):
File "", line 1, in
pyodbc.ProgrammingError: ('25P02', '[25P02] ERROR: current transaction is aborted, commands ignored until end of transaction block;\nError while executing the query (1) (SQLExecDirectW)')
PS: My company has disabled pip installs, so I cannot upgrade my packages and am limited to using only a few (including this one).
How can I execute my commands without errors?

It seems I have figured it out: I added autocommit=False to the connection initialization and it works fine now. Perhaps it has something to do with the underlying parsing of the SQL commands. I'm keeping the question up in case it helps someone.
import pyodbc
mypw = 'skjhaf234234dkjhkjx'
string = 'Driver={PostgreSQL Unicode(x64)};Server=myfakeserveraddress.rds.amazonaws.com;Database=mydb;UID=myusername;PWD='+mypw+';'
connection = pyodbc.connect(string, autocommit=False)
c = connection.cursor()
c.execute("SELECT * FROM schema_table.test_table;")
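For anyone hitting the second error ('25P02') on its own: in PostgreSQL, once any statement inside a transaction fails, the server refuses every later statement with "current transaction is aborted" until a ROLLBACK runs, so 25P02 is usually a symptom of an earlier failed statement that was never rolled back. A minimal sketch of a rollback-on-failure wrapper (execute_safely is a hypothetical name, not part of pyodbc):

```python
# Sketch: after a failed statement, PostgreSQL rejects everything else in
# the transaction until a rollback. Rolling back in the error path keeps
# the connection usable for the next query.

def execute_safely(connection, sql):
    """Run sql on a fresh cursor; on failure, roll back so later
    statements are not refused with '25P02'."""
    cursor = connection.cursor()
    try:
        cursor.execute(sql)
        return cursor
    except Exception:
        connection.rollback()  # clears the aborted-transaction state
        raise
```

This works with any DB-API connection object (pyodbc included), since it only relies on cursor(), execute(), and rollback().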

Related

Error from Google Authentication in cloud sql connection

[LATEST UPDATE] Thanks to Jack's enormous help!!! I managed to connect to the Cloud SQL Postgres DB and read/write my dataframes to the database. However, I am still experiencing the same error that I experienced previously, which is...
struct.error: 'h' format requires -32768 <= number <= 32767
This error doesn't happen when the dataframes are small and compact and the columns do not have too many NaN values in them. However, when there are many NaN values in the columns, the program throws the error below.
Separately, I have tried using df = df.fillna(0) to fill the NaN values with 0, but that did not work either; the same error surfaced. Please help!
Traceback (most recent call last):
File "...\falcon_vbackup\STEP5_SavetoDB_and_SendEmail.py", line 81, in <module>
main_SavetoDB_and_SendEmail(
File "...\falcon_vbackup\STEP5_SavetoDB_and_SendEmail.py", line 37, in main_SavetoDB_and_SendEmail
Write_Dataframe_to_SQLTable(
File "...\falcon_vbackup\APPENDIX_Database_ReadWrite_v2.py", line 143, in Write_Dataframe_to_SQLTable
df_Output.to_sql(sql_tablename, con=conn, schema='public', index=False, if_exists=if_exists, method='multi', chunksize=1000)
File "c:\Users\ng_yj\.conda\envs\venv_falcon\lib\site-packages\pandas\core\generic.py", line 2963, in to_sql
return sql.to_sql(
File "c:\Users\ng_yj\.conda\envs\venv_falcon\lib\site-packages\pandas\io\sql.py", line 697, in to_sql
return pandas_sql.to_sql(
File "c:\Users\ng_yj\.conda\envs\venv_falcon\lib\site-packages\pandas\io\sql.py", line 1739, in to_sql
total_inserted = sql_engine.insert_records(
File "c:\Users\ng_yj\.conda\envs\venv_falcon\lib\site-packages\pandas\io\sql.py", line 1322, in insert_records
return table.insert(chunksize=chunksize, method=method)
File "c:\Users\ng_yj\.conda\envs\venv_falcon\lib\site-packages\pandas\io\sql.py", line 950, in insert
num_inserted = exec_insert(conn, keys, chunk_iter)
File "c:\Users\ng_yj\.conda\envs\venv_falcon\lib\site-packages\pandas\io\sql.py", line 873, in _execute_insert_multi
result = conn.execute(stmt)
File "c:\Users\ng_yj\.conda\envs\venv_falcon\lib\site-packages\sqlalchemy\engine\base.py", line 1289, in execute
return meth(self, multiparams, params, _EMPTY_EXECUTION_OPTS)
File "c:\Users\ng_yj\.conda\envs\venv_falcon\lib\site-packages\sqlalchemy\sql\elements.py", line 325, in _execute_on_connection
return connection._execute_clauseelement(
File "c:\Users\ng_yj\.conda\envs\venv_falcon\lib\site-packages\sqlalchemy\engine\base.py", line 1481, in _execute_clauseelement
ret = self._execute_context(
File "c:\Users\ng_yj\.conda\envs\venv_falcon\lib\site-packages\sqlalchemy\engine\base.py", line 1845, in _execute_context
self._handle_dbapi_exception(
File "c:\Users\ng_yj\.conda\envs\venv_falcon\lib\site-packages\sqlalchemy\engine\base.py", line 2030, in _handle_dbapi_exception
util.raise_(exc_info[1], with_traceback=exc_info[2])
File "c:\Users\ng_yj\.conda\envs\venv_falcon\lib\site-packages\sqlalchemy\util\compat.py", line 207, in raise_
raise exception
File "c:\Users\ng_yj\.conda\envs\venv_falcon\lib\site-packages\sqlalchemy\engine\base.py", line 1802, in _execute_context
self.dialect.do_execute(
File "c:\Users\ng_yj\.conda\envs\venv_falcon\lib\site-packages\sqlalchemy\engine\default.py", line 732, in do_execute
cursor.execute(statement, parameters)
File "c:\Users\ng_yj\.conda\envs\venv_falcon\lib\site-packages\pg8000\dbapi.py", line 455, in execute
self._context = self._c.execute_unnamed(
File "c:\Users\ng_yj\.conda\envs\venv_falcon\lib\site-packages\pg8000\core.py", line 627, in execute_unnamed
self.send_PARSE(NULL_BYTE, statement, oids)
File "c:\Users\ng_yj\.conda\envs\venv_falcon\lib\site-packages\pg8000\core.py", line 601, in send_PARSE
val.extend(h_pack(len(oids)))
struct.error: 'h' format requires -32768 <= number <= 32767
Exception ignored in: <function Connector.__del__ at 0x00000213190D8700>
Traceback (most recent call last):
File "c:\Users\ng_yj\.conda\envs\venv_falcon\lib\site-packages\google\cloud\sql\connector\connector.py", line 167, in __del__
File "c:\Users\ng_yj\.conda\envs\venv_falcon\lib\concurrent\futures\_base.py", line 447, in result
concurrent.futures._base.TimeoutError:
I have set up a PostgreSQL instance in GCP's Cloud SQL. I am trying to connect to it using google.cloud.sql.connector. I have created a Service Account from the GCP Console and downloaded the JSON keys.
I want to use the service account credentials/keys (read from a .json file placed in the same directory as my main.py code) to authenticate access to Cloud SQL.
I am trying to authenticate, but I keep getting an error saying that the service account JSON file was not found.
Can anyone help figure out how to fix this error? Thank you!
import os
import pandas as pd
import sqlalchemy
from google.cloud.sql.connector import connector
# configure Cloud SQL Python Connector properties
def getconn():
    conn = connector.connect(
        os.environ['LL_DB_INSTANCE_CONNECTION_NAME'],
        "pg8000",
        user=os.environ['LL_DB_USER'],
        password=os.environ['LL_DB_PASSWORD'],
        db=os.environ['LL_DB_NAME'])
    return conn
# Show existing SQL tables within the database
def Show_SQLTables_in_Database(conn):
    if conn != None:
        # Show what tables remain in database
        results = conn.execute("""SELECT table_name FROM information_schema.tables
            WHERE table_schema = 'public'""").fetchall()
        for table in results:
            print(table)
if __name__ == "__main__":
    # Set the Google Application Credentials as environment variable
    os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = os.path.join(os.getcwd(), "Google-Credentials-LL-tech2.json")
    # create connection pool to re-use connections
    pool = sqlalchemy.create_engine("postgresql+pg8000://", creator=getconn)
    with pool.connect() as db_conn:
        # Show what tables remain in database
        results = db_conn.execute("""SELECT table_name FROM information_schema.tables
            WHERE table_schema = 'public'""").fetchall()
        for table in results:
            print(table)
The error you are seeing means that the .json file is not being found. This is most likely being caused by os.getcwd() which gets the path of the current working directory from where main.py is being called. This leads to errors if you are calling the file from anywhere other than the parent directory.
Working case: python main.py
Error case: python folder/main.py
Change the line where you set credentials to the following:
os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = os.path.join(os.path.dirname(os.path.abspath(__file__)),"Google-Credentials-LL-tech2.json")
This will allow the credentials path to be properly set for all cases of where your main.py is called from.
Responding to your latest update of the error:
First, make sure that your service account has the Cloud SQL Client role applied to it.
Second, try executing the following basic script before your custom configuration; this will help isolate the error to either the Python Connector or the service account/implementation.
The following should just connect to your database and print the time.
from google.cloud.sql.connector import connector
import sqlalchemy
import os
os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = os.path.join(os.path.dirname(os.path.abspath(__file__)),"GSheet-Credentials-LL-tech2.json")
# build connection for db using Python Connector
def getconn():
    conn = connector.connect(
        os.environ['LL_DB_INSTANCE_CONNECTION_NAME'],
        "pg8000",
        user=os.environ['LL_DB_USER'],
        password=os.environ['LL_DB_PASSWORD'],
        db=os.environ['LL_DB_NAME'],
    )
    return conn

# create connection pool
pool = sqlalchemy.create_engine("postgresql+pg8000://", creator=getconn)

def db_connect():
    with pool.connect() as conn:
        current_time = conn.execute("SELECT NOW()").fetchone()
        print(f"Time: {str(current_time[0])}")

db_connect()
If that still gives the error, please provide the full stack trace of the error so that I can try to debug it further with more info.
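On the struct.error itself, a hedged reading of the traceback in the update: it ends in pg8000's send_PARSE packing len(oids) with the 'h' struct format, a signed 16-bit integer, so a single prepared statement can describe at most 32767 bind parameters. With to_sql(method='multi'), each chunk binds roughly chunksize × n_columns parameters, which would explain why wide dataframes fail while small ones don't. A sketch of capping the chunk size accordingly (safe_chunksize is a made-up helper, not a pandas API):

```python
# pg8000 packs the per-statement parameter count as a signed 16-bit int
# ('h'), so chunksize * n_columns must stay at or below 32767 when using
# to_sql(method='multi').

def safe_chunksize(n_columns, limit=32767):
    """Largest to_sql chunksize whose multi-row INSERT stays under
    pg8000's 16-bit parameter-count limit."""
    return max(1, limit // max(1, n_columns))

# hedged usage, with the dataframe and connection from the question:
# df.to_sql(name, con=conn, method='multi',
#           chunksize=safe_chunksize(len(df.columns)))
```

If this is the cause, dropping chunksize (or omitting method='multi') should make the error disappear regardless of NaN handling.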

Postgresql 11.1: "Problem running post-install step. Installation may not complete correctly. Error with configuration or permissions."

I checked the log file and I think this is the part that caused the problem:
Setting up database
[15:30:54] Configuring pg11 to point to existing data dir D:\John's Files\My Documents\Code\Databases\PostgreSQL\data\pg11
Setting PostgreSQL port
= 5432
Executing C:\Installed Software\Developer Software\PostgreSQLv11.1/pgc config pg11 --datadir "D:\John's Files\My Documents\Code\Databases\PostgreSQL\data\pg11"
Script exit code: 1
Script output: ################################################
# FATAL SQL Error in check_release
# SQL Message = near "s": syntax error
# SQL Statement = SELECT r.component FROM releases r, versions v
WHERE r.component = v.component
AND r.component LIKE '%D:\John's Files\My Documents\Code\Databases\PostgreSQL\data\pg11%' AND v.is_current = 1
################################################
Script stderr: Program ended with an error exit code
Error with configuration or permissions. Please see log file for more information.
Problem running post-install step. Installation may not complete correctly
Error with configuration or permissions. Please see log file for more information.
I think the problem is the apostrophe in "John's". Does anyone know if that's right? Is there a fix for this problem? I don't want to rename my directory just because PostgreSQL can't handle apostrophes.
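That reading looks right: in SQL, a single quote inside a '...' literal must be doubled, and the installer built its LIKE pattern by plain string interpolation, so the apostrophe in John's terminated the literal early (the near "s" in the log is the s of John's). A quick illustration using Python's bundled sqlite3, since the quoting rule is standard SQL:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
path = "D:\\John's Files\\My Documents"

# Naive interpolation reproduces the installer's failure: the apostrophe
# in "John's" closes the string literal early, leaving a syntax error.
try:
    conn.execute("SELECT '%s'" % path)
except sqlite3.OperationalError:
    pass  # syntax error near "s", just like the install log

# Doubling the apostrophe is the standard SQL escape...
escaped = path.replace("'", "''")
assert conn.execute("SELECT '%s'" % escaped).fetchone()[0] == path

# ...but a bound parameter sidesteps quoting entirely.
assert conn.execute("SELECT ?", (path,)).fetchone()[0] == path
```

So the bug is in the installer's statement construction, not in your directory name; short of a fix from the vendor, though, pointing the data directory at a path without an apostrophe is the practical workaround.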

Error writing to database in Moodle in Find and Replace tool

I have moved my Moodle database from an old server (URLold = https://example1.com) to a new server (URLnew = https://example2.com). Now I want to replace URLold with URLnew in the database tables using the Find and Replace tool provided by Moodle, but when I perform the operation I get the error below. What should I do? Please help.
The error I am getting:
Debug info: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'table = REPLACE(table, 'https://example1.com', 'https://example2.com')' at line 1
UPDATE mdl_pma_history SET table = REPLACE(table, ?, ?)
[array (
0 => 'https://example1.com',
1 => 'https://example2.com',
)]
Error code: dmlwriteexception
Stack trace:
line 426 of /lib/dml/moodle_database.php: dml_write_exception thrown
line 895 of /lib/dml/mysqli_native_moodle_database.php: call to moodle_database->query_end()
line 6787 of /lib/adminlib.php: call to mysqli_native_moodle_database->execute()
line 74 of /admin/tool/replace/index.php: call to db_replace()
So I got the answer on my own: I had to delete the mdl_pma_history table that was causing the error. The steps I followed are as follows.
Exported the table to a .sql file
Deleted the table, because it was not allowing the script to run
Once the Find and Replace script ran successfully, imported the table back
Done.
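For context on why this one table broke the tool (a hedged guess from the error text): mdl_pma_history is a phpMyAdmin history table with a column literally named table, and table is a reserved word in MySQL, so the generated SET table = REPLACE(table, ?, ?) is a syntax error unless the identifier is backtick-quoted. A sketch of quoting the identifier before building such a statement (quote_mysql_identifier is a made-up helper, not a Moodle API):

```python
def quote_mysql_identifier(name):
    """Backtick-quote a MySQL identifier so reserved words such as
    `table` can be used as column names; embedded backticks are doubled."""
    return "`%s`" % name.replace("`", "``")

col = quote_mysql_identifier("table")
stmt = "UPDATE mdl_pma_history SET %s = REPLACE(%s, ?, ?)" % (col, col)
print(stmt)  # UPDATE mdl_pma_history SET `table` = REPLACE(`table`, ?, ?)
```

With the identifier quoted, the statement parses; dropping and re-importing the table, as done above, works too because the tool then never generates the unquoted statement.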

error: not authorized on cuckoo (DB name) to execute command (cuckoo with MongoDB)

I'm trying to use MongoDB with Cuckoo, but I get this error message:
2016-12-16 06:58:01,632 [lib.cuckoo.core.plugins] ERROR: Failed to run the reporting module "MongoDB":
Traceback (most recent call last):
File "/home/ziv/Documents/cuckoo/lib/cuckoo/core/plugins.py", line 533, in process
current.run(self.results)
File "/home/ziv/Documents/cuckoo/modules/reporting/mongodb.py", line 89, in run
if "cuckoo_schema" in self.db.collection_names():
File "/usr/lib/python2.7/dist-packages/pymongo/database.py", line 520, in collection_names
results = self._list_collections(sock_info, slave_okay)
File "/usr/lib/python2.7/dist-packages/pymongo/database.py", line 492, in _list_collections
cursor = self._command(sock_info, cmd, slave_okay)["cursor"]
File "/usr/lib/python2.7/dist-packages/pymongo/database.py", line 393, in _command
allowable_errors)
File "/usr/lib/python2.7/dist-packages/pymongo/pool.py", line 211, in command
read_concern)
File "/usr/lib/python2.7/dist-packages/pymongo/network.py", line 100, in command
helpers._check_command_response(response_doc, msg, allowable_errors)
File "/usr/lib/python2.7/dist-packages/pymongo/helpers.py", line 196, in _check_command_response
raise OperationFailure(msg % errmsg, code, response)
OperationFailure: command SON([('listCollections', 1), ('cursor', {})]) on namespace cuckoo.$cmd failed: not authorized on cuckoo to execute command { listCollections: 1, cursor: {} }
These are the DBs I have:
show dbs
admin 0.078GB
cuckoo 0.078GB
local 0.078GB
I used this guide to install MongoDB: https://www.howtoforge.com/tutorial/install-mongodb-on-ubuntu-16.04/
and this guide to install Cuckoo: http://mostlyaboutsecurity.com/?p=15&i=1
Update
I think I don't have permissions, but I don't know how to set up what I need.
This is the Cuckoo code that uses MongoDB (it fails on the line "self.db.collection_names():"):
def run(self, results):
    """Writes report.
    @param results: analysis results dictionary.
    @raise CuckooReportError: if fails to connect or write to MongoDB.
    """
    if not HAVE_MONGO:
        raise CuckooDependencyError(
            "Unable to import pymongo (install with "
            "`pip install pymongo`)"
        )
    self.connect()
    # Set mongo schema version.
    # TODO: This is not optimal because it runs on each analysis. Need to
    # run only one time at startup.
    if "cuckoo_schema" in self.db.collection_names():
        if self.db.cuckoo_schema.find_one()["version"] != self.SCHEMA_VERSION:
            CuckooReportError("Mongo schema version not expected, check data migration tool")
    else:
        self.db.cuckoo_schema.save({"version": self.SCHEMA_VERSION})

def connect(self):
    """Connects to Mongo database, loads options and set connectors.
    @raise CuckooReportError: if unable to connect.
    """
    host = self.options.get("host", "127.0.0.1")
    port = int(self.options.get("port", 27017))
    db = self.options.get("db", "cuckoo")
    try:
        self.conn = MongoClient(host, port)
        self.db = self.conn[db]
        self.fs = GridFS(self.db)
    except TypeError:
        raise CuckooReportError("Mongo connection port must be integer")
    except ConnectionFailure:
        raise CuckooReportError("Cannot connect to MongoDB")
I don't want to edit this code (i.e., add a connection string to it).
I have a clean installation of MongoDB; how do I create a DB named cuckoo that this code can access and use?
I couldn't find any reference in all the guides I read. It's like it should work automatically, but it doesn't.
The error message is:
OperationFailure: command SON([('listCollections', 1), ('cursor', {})]) on namespace cuckoo.$cmd failed: not authorized on cuckoo to execute command { listCollections: 1, cursor: {} }
This indicates that your app is attempting to execute a command in the MongoDB database for which it does not have permissions.
Does your connection string to the database include the authentication credentials (username/password)?
Does this user have the necessary permissions to execute this command?
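If mongod was installed with authentication enabled, the reporting module's plain MongoClient(host, port) connection carries no credentials, so listCollections is refused. The two usual options are to disable authorization in mongod.conf, or to create a user with rights on the cuckoo database and supply credentials when connecting. A hedged sketch of the second option (user name and password are placeholders, and mongo_uri is a made-up helper):

```python
# Sketch: first create a dedicated user for the "cuckoo" database with an
# account that has user-admin rights, e.g. in the mongo shell:
#   use cuckoo
#   db.createUser({user: "cuckoo", pwd: "<password>",
#                  roles: [{role: "readWrite", db: "cuckoo"}]})
# Then connect with a URI that carries those credentials:

def mongo_uri(user, password, host="127.0.0.1", port=27017, db="cuckoo"):
    """MongoDB connection string with credentials for the given database."""
    return "mongodb://%s:%s@%s:%d/%s" % (user, password, host, port, db)

print(mongo_uri("cuckoo", "secret"))
# mongodb://cuckoo:secret@127.0.0.1:27017/cuckoo
```

Since the code above only passes host and port to MongoClient, using credentials would mean either editing it after all or leaving authorization disabled for a local-only mongod.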

Web2py - Auth with MongoDB

Good day,
I'm trying to use MongoDB with web2py, and I started with authentication, but it produced some errors that I do not understand.
With a relational database, web2py creates the authentication tables automatically; with MongoDB, the collections are not created automatically.
Below are the code and the error raised when I try to log in:
db.py
db = DAL("mongodb://localhost/primer", check_reserved=["mongodb_nonreserved",], adapter_args={"safe":False})
from gluon.tools import Auth, Service, PluginManager
auth = Auth(db)
service = Service()
plugins = PluginManager()
auth.settings.remember_me_form = False
auth.settings.actions_disabled=['register','change_password','request_reset_password','retrieve_username','profile']
auth.define_tables(username=True)
from gluon.contrib.login_methods.ldap_auth import ldap_auth
auth.settings.login_methods = [ldap_auth(server='localhost', port='10389', base_dn='ou=people,o=empresa,dc=com,dc=br')]
The authentication is via LDAP and works perfectly with a relational database, which has the AUTH_USER table.
However, when logging in with MongoDB, the following error appears:
Traceback (most recent call last):
File "C:\Users\Rafa\Desktop\web2py-10-06-2015p4\applications\contrato\controllers/appadmin.py", line 249, in select
nrows = db(query, ignore_common_filters=True).count()
File "C:\Users\Rafa\Desktop\web2py-10-06-2015p4\gluon\packages\dal\pydal\objects.py", line 2016, in count
return db._adapter.count(self.query,distinct)
File "C:\Users\Rafa\Desktop\web2py-10-06-2015p4\gluon\packages\dal\pydal\adapters\mongo.py", line 200, in count
count=True,snapshot=snapshot)['count'])
File "C:\Users\Rafa\Desktop\web2py-10-06-2015p4\gluon\packages\dal\pydal\adapters\mongo.py", line 319, in select
sort=mongosort_list, snapshot=snapshot).count()}
File "C:\Python27\lib\site-packages\pymongo\collection.py", line 929, in find
return Cursor(self, *args, **kwargs)
TypeError: __init__() got an unexpected keyword argument 'snapshot'
The database "primer" was created and only has two collections, "posts" and "system.indexes".
Could someone help me with this error so I can use MongoDB with web2py?
Thank you!
Found it.
From pymongo's changelog, there are a lot of breaking changes in pymongo 3.0 compared to 2.8:
The following find/find_one options have been removed:
snapshot (use the new modifiers option instead)
So uninstall pymongo and try the latest release before 3.0:
pip install pymongo==2.8.1
Here's my attempt:
>>> from pydal import *
No handlers could be found for logger "web2py"
>>> db = DAL('mongodb://localhost/connect_test')
>>> db.define_table('some',Field('key'),Field('value'))
<Table some (id,key,value)>
>>> db.define_table('some2',Field('ref','reference some'),Field('value'))
<Table some2 (id,ref,value)>
>>> db(db.some).select()
<Rows (1)>
>>> db(db.some).select().first()
<Row {'value': 'pir', 'key': 'bla', 'id': 26563964102769618087622556519L}>
>>>
[edit]
There's more to it. The above worked at least with pydal 15.03. Looking at the code, I found the following in the mongo.py adapter:
from pymongo import version
if 'fake_version' in driver_args:
    version = driver_args['fake_version']
if int(version.split('.')[0]) < 3:
    raise Exception(
        "pydal requires pymongo version >= 3.0, found '%s'"
        % version)
Which was like good soil for a big frown...
After updating pydal to 15.07, it indeed appears to break:
RuntimeError: Failure to connect, tried 5 times:
Traceback (most recent call last):
File "C:\Python27\lib\site-packages\pydal\base.py", line 437, in __init__
self._adapter = ADAPTERS[self._dbname](**kwargs)
File "C:\Python27\lib\site-packages\pydal\adapters\base.py", line 57, in __call__
obj = super(AdapterMeta, cls).__call__(*args, **kwargs)
File "C:\Python27\lib\site-packages\pydal\adapters\mongo.py", line 82, in __init__
% version)
Exception: pydal requires pymongo version >= 3.0, found '2.8.1'
So it's back to upgrading pymongo :)
With pymongo at 3.0.3 and pydal at 15.07 it works like a charm again.
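The takeaway is that pydal and pymongo have to be upgraded in lockstep: the transcript above worked with pydal 15.03 and pymongo 2.8.1, while pydal 15.07 enforces pymongo >= 3.0. pydal's guard quoted above just parses the major component of the version string; the same check can be done up front in your own setup (a sketch mirroring that guard):

```python
def pymongo_major(version_string):
    """Major component of a version string, parsed the same way as the
    pydal guard quoted above: int(version.split('.')[0])."""
    return int(version_string.split(".")[0])

# pydal 15.07's mongo adapter raises unless the major version is >= 3:
assert pymongo_major("2.8.1") < 3   # rejected by pydal 15.07
assert pymongo_major("3.0.3") >= 3  # accepted
```

Running this against pymongo.version at startup gives a clearer failure message than hitting the adapter's exception mid-request.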