Set PostgreSQL configuration parameter in SQLAlchemy

I'm using Pyramid and SQLAlchemy for my REST server. For logging purposes I need a way to determine the user name inside a PostgreSQL AFTER UPDATE trigger function. One way is to set a runtime configuration parameter before executing the update and read that value in the trigger function.
In the psql terminal this can be done with
set myapp.user_name = 'User Name';
The parameter can then be read in the PostgreSQL trigger via current_setting('myapp.user_name').
Is it possible to set this parameter from a Pyramid / SQLAlchemy application?
I suppose I can use SQLAlchemy events, but I'm not sure which event is right for this case.

You can register a Pyramid event handler which would set the parameter on any new request:
from pyramid import events

def on_new_request(event):
    """
    You can access the request as event.request and the current
    registry settings as event.request.registry.settings.
    """
    session = DBSession()
    session.execute("SET myapp.user_name TO 'User Name'")

def app(global_config, **settings):
    config = Configurator(...)
    config.add_subscriber(on_new_request, events.NewRequest)
    return config.make_wsgi_app()
See Deployment Settings and events.NewRequest for more details.
One thing to watch out for is to make sure you're always using the same session object: SQLAlchemy maintains a pool of connections, and if you commit your session, all subsequent operations will use a different connection from the pool which is not pre-configured with your settings. In other words, you should not call session.commit(), session.rollback(), transaction.commit(), etc. in your code; instead, rely on ZopeTransactionExtension to commit or roll back the transaction at the end of the request/response cycle, which is a recommended practice anyway.
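Since the question mentions SQLAlchemy events, here is a minimal sketch of that route (the connection URL and the fixed parameter value are placeholders): listening for the engine's "connect" event sets the parameter on every new DBAPI connection in the pool, so it survives pool recycling and reconnects.
from sqlalchemy import create_engine, event

engine = create_engine("postgresql://user:pass@localhost/mydb")  # placeholder URL

@event.listens_for(engine, "connect")
def set_user_name(dbapi_connection, connection_record):
    # runs once for each new DBAPI connection the pool creates
    cursor = dbapi_connection.cursor()
    cursor.execute("SET myapp.user_name TO 'User Name'")
    cursor.close()
Note that this sets one fixed value per connection; for a per-request user name you would still issue the SET at request time, as shown above.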

I'm not sure what exact command you're running with set myapp.user_name = 'User Name'.
When you say "In psql this can be done using...", I assume you are running some sort of SQL command, although the command you listed is not a typical one.
In any event, arbitrary SQL can be executed in SQLAlchemy using the .execute() method.
session = Session()
session.execute("select 1;")
http://docs.sqlalchemy.org/en/rel_0_9/orm/session.html?highlight=execute#sqlalchemy.orm.session.Session.execute
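One caveat: SET does not accept bind parameters, so if the user name comes from request data, a safer sketch (assuming the value arrives in a Python variable named user_name) is to call PostgreSQL's set_config() function with a bound parameter:
from sqlalchemy import text

# set_config(name, value, is_local): passing is_local=false keeps the
# setting for the rest of the database session, not just the transaction
session.execute(
    text("SELECT set_config('myapp.user_name', :name, false)"),
    {"name": user_name},
)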

Related

How to change default value for parameter globally for redshift cluster

I am using the new SUPER data type and I found that you can't access camelCase fields unless you set downcase_delimited_identifier to false.
It's true by default.
I want to set it to false globally on the cluster (i.e. persistently).
But it seems this is not possible?
This page indicates you can use parameter groups for this purpose.
But that does not appear to be the case. There are 12 parameters set by default, and you can modify their values, but you can't add any new parameters.
I tried modifying the group using aws cli, but this didn't work either:
$ aws redshift modify-cluster-parameter-group --parameter-group-name my-redshift-parameter-group --parameters ParameterName=downcase_delimited_identifier,ParameterValue=false
An error occurred (InvalidParameterValue) when calling the ModifyClusterParameterGroup operation: Could not find parameter with name: downcase_delimited_identifier
Is it really true that you can't change a default value for a parameter for a cluster?
The parameters in the parameter group are cluster parameters. The parameter you are trying to set is a connection / session parameter (i.e. one you SET during your connection). If you want a parameter set every time you log in, you can configure this with ALTER USER - https://docs.aws.amazon.com/redshift/latest/dg/r_ALTER_USER.html. If you want this to affect all users, I believe you have to run ALTER USER for each user individually; scripting this is fairly easy to do. Once the user has been altered, the parameter will be set every time that user connects.
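That "script it" suggestion might look like the following hedged sketch, using psycopg2 (the connection string is a placeholder, and the filter that skips Redshift's internal rdsdb user is an assumption about which users you want to alter):
import psycopg2

conn = psycopg2.connect("dbname=mydb host=my-cluster.example.com user=admin password=...")  # placeholder
conn.autocommit = True
with conn.cursor() as cur:
    cur.execute("SELECT usename FROM pg_user WHERE usename <> 'rdsdb'")
    for (usename,) in cur.fetchall():
        # identifiers cannot be bound as query parameters, so quote and splice the name
        cur.execute('ALTER USER "{0}" SET downcase_delimited_identifier TO false'.format(usename))
conn.close()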

How to match a list of values from Database1 with a column in Database2 using JDBC Request in JMeter?

I am quite new to JMeter, so I am looking for the best approach to do this: I want to get a list of messageIDs from Database1, check whether these messageID values can be found in Database2, and then check the ErrorMessage column for these IDs against what I expect.
I have the JDBC Request working for extracting the list of messageIDs from Database1. JMeter returns the list to me, but now I'm stuck: I am not sure how to handle the Variable names and Result variable names fields in the JDBC Request, and how to use them in the next throughput controller loop for the JDBC Request against Database2.
My JDBC request looks like this (PostgreSQL):
SELECT messageID FROM database1
ORDER BY created DESC
FETCH FIRST 20 ROWS ONLY
Variable names: messageid
Result variable names: resultDB1
Then I use the BeanShell Assertion to see whether the connection to the database is present, or whether the response is empty.
But now I have to connect to a different database, so I need to make a new throughput controller with a new JDBC configuration, request, etc. in there, but I don't know how to pass the messageid list on to this new request.
What I thought about was writing the list of results from Database1 into a file and then reading the values from that file for Database2, but that seems unnecessarily complicated to me; it feels like JMeter should already have a solution for this. Also, I am running my JMeter tests on a remote Linux server, so I don't want to complicate things by creating and saving new files.
You can convert your resultDB1 into a JMeter Property like:
props.put("resultDB1", vars.getObject("resultDB1"));
As per JMeter Documentation:
Properties are not the same as variables. Variables are local to a thread; properties are common to all threads
So basically JMeter Properties are an extension of Java Properties, and they are global for the whole JVM
Once done you will be able to access the value in other Thread Groups like:
ArrayList resultDB1 = (ArrayList)props.get("resultDB1");
ArrayList resultDB2 = (ArrayList)vars.getObject("resultDB2");
// e.g. fail the BeanShell Assertion if Database2 lacks any of Database1's rows
if (!resultDB2.containsAll(resultDB1)) {
    Failure = true; FailureMessage = "messageIDs from Database1 missing in Database2";
}
Also be aware that since JMeter 3.1 you should be using JSR223 Test Elements and the Groovy language for scripting, so consider migrating to the JSR223 Assertion at the next available opportunity.

Go database/sql - Issue Commands on Reconnect

I have a small application written in Go that connects to a PostgreSQL database on another server, utilizing database/sql and lib/pq. When I start the application, it goes through and establishes that all the database tables and indexes exist. As part of this process, it issues a SET search_path TO preferredschema,public command. Then, for the remainder of the database access, I do not have to specify the schema.
From what I've determined by debugging it, when database/sql reconnects (no network is perfect), the application begins failing because the search path isn't set. Is there a way to specify commands that should be executed whenever it reconnects? I've searched for an event that could be leveraged, but have come up empty so far.
Thanks!
From the fine manual:
Connection String Parameters
[...]
In addition to the parameters listed above, any run-time parameter that can be set at backend start time can be set in the connection string. For more information, see http://www.postgresql.org/docs/current/static/runtime-config.html.
Then if we go over to the PostgreSQL documentation, you'll see various ways of setting connection parameters such as config files, SET commands, command line switches, ...
While the desired behavior isn't exactly spelled out, it is suggested that you can put anything you'd SET right into the connection string:
// search_path added as a run-time parameter in the connection string
connStr := "dbname=... user=... search_path=preferredschema,public"
db, err := sql.Open("postgres", connStr)
and since that's all there is for configuring the connection, it should be used for every connection (including reconnects).
The Connection String Parameters section of the pq documentation also tells you how to quote and escape things if whatever preferredschema really is needs it or if you have to grab a value at runtime and add it to the connection string.

How to combine py.test fixtures with Flask-SQLAlchemy and PostgreSQL?

I'm struggling to write py.test fixtures for managing my app's database that maximize speed, support pytest-xdist parallelization of tests, and isolate the tests from each other.
I'm using Flask-SQLAlchemy 2.1 against a PostgreSQL 9.4 database.
Here's the general outline of what I'm trying to accomplish:
$ py.test -n 3 spins up three test sessions for running tests.
Within each session, a py.test fixture runs once to setup a transaction, create the database tables, and then at the end of the session it rolls back the transaction. Creating the database tables needs to happen within a PostgreSQL transaction that's only visible to that particular test-session, otherwise the parallelized test sessions created by pytest-xdist cause conflicts with each other.
A second py.test fixture that runs for every test connects to the existing transaction in order to see the created tables, creates a nested savepoint, runs the test, then rolls back to the nested savepoint.
Ideally, these pytest fixtures support tests that call db.session.rollback(). There's a potential recipe for accomplishing this at the bottom of this SQLAlchemy doc.
Ideally the pytest fixtures should yield the db object, not just the session, so that folks can write tests without having to remember to use a session that's different from the standard db.session they use throughout the app.
Here's what I have so far:
import pytest

# create_app() is my Flask application factory
# db is just 'db = SQLAlchemy()' + 'db.init_app(app)' within the create_app() function
from app import create_app, db as _db

@pytest.yield_fixture(scope='session', autouse=True)
def app():
    '''Session-wide test application'''
    a = create_app('testing')
    with a.app_context():
        yield a

@pytest.yield_fixture(scope='session')
def db_tables(app):
    '''Session-wide test database'''
    connection = _db.engine.connect()
    trans = connection.begin()  # begin a non-ORM transaction

    # Theoretically this creates the tables within the transaction
    _db.create_all()

    yield _db

    trans.rollback()
    connection.close()

@pytest.yield_fixture(scope='function')
def db(db_tables):
    '''db session that is joined to existing transaction'''
    # I am quite sure this is broken, but it's the general idea

    # bind an individual Session to the existing transaction
    db_tables.session = db_tables.Session(bind=db_tables.connection)

    # start the session in a SAVEPOINT...
    db_tables.session.begin_nested()

    # yield the db object, not just the session, so that tests
    # can be written transparently using the db object
    # without requiring someone to understand the intricacies of these
    # py.test fixtures or having to remember when to use a session that's
    # different than db.session
    yield db_tables

    # rollback to the savepoint before the test ran
    db_tables.session.rollback()
    db_tables.session.remove()  # not sure this is needed
Here are the most useful references I've found while googling:
http://docs.sqlalchemy.org/en/latest/orm/session_transaction.html#joining-a-session-into-an-external-transaction-such-as-for-test-suites
http://koo.fi/blog/2015/10/22/flask-sqlalchemy-and-postgresql-unit-testing-with-transaction-savepoints/
https://github.com/mitsuhiko/flask-sqlalchemy/pull/249
I'm a couple years late here, but you might be interested in pytest-flask-sqlalchemy, a plugin I wrote to help address this exact problem.
The plugin provides two fixtures, db_session and db_engine, which you can use like regular Session and Engine objects to run updates that will get rolled back at the end of the test. It also exposes a few configuration directives (mocked-engines and mocked-sessions) that will mock out connectables in your app and replace them with these fixtures so that you can run methods and be sure that any state changes will get cleaned up when the test exits.
The plugin should work with a variety of databases, but it's been tested most heavily against Postgres 9.6 and is in production in the test suite for https://dedupe.io. You can find some examples in the documentation that should help you get started, but if you're willing to provide some code I'd be happy to demonstrate how to use the plugin, too.
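For a flavor of the API, here is a hedged usage sketch of the plugin's db_session fixture (the User model is a stand-in for one of your app's models; everything the test does is rolled back when it exits):
def test_create_user(db_session):
    user = User(name='alice')  # hypothetical model
    db_session.add(user)
    db_session.commit()  # commits only into the enclosing nested transaction
    assert db_session.query(User).filter_by(name='alice').count() == 1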
I had a similar issue trying to combine yield fixtures. Unfortunately, according to the docs you are not able to combine more than one yield level.
But you might be able to find a workaround using request.addfinalizer:
@pytest.fixture(scope='session', autouse=True)
def app(request):
    '''Session-wide test application'''
    a = create_app('testing')
    # push the context explicitly so it outlives this function
    # (returning from inside 'with a.app_context():' would pop it immediately)
    ctx = a.app_context()
    ctx.push()
    request.addfinalizer(ctx.pop)
    return a

@pytest.fixture(scope='session')
def db_tables(request, app):
    '''Session-wide test database'''
    connection = _db.engine.connect()
    trans = connection.begin()  # begin a non-ORM transaction

    # Theoretically this creates the tables within the transaction
    _db.create_all()

    def close_db_session():
        trans.rollback()
        connection.close()

    request.addfinalizer(close_db_session)
    return _db
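The question's per-test fixture can be written in the same addfinalizer style; a hedged sketch (keeping the question's assumption that _db.session can run inside a SAVEPOINT on the shared connection):
@pytest.fixture(scope='function')
def db(request, db_tables):
    '''db object whose session runs inside a SAVEPOINT'''
    db_tables.session.begin_nested()  # start a SAVEPOINT for this test

    def rollback_savepoint():
        db_tables.session.rollback()  # undo everything the test changed

    request.addfinalizer(rollback_savepoint)
    return db_tables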

Re-using sessions in ScalaQuery?

I need to perform small (but frequent) operations on my database from one of my API methods. When I wrap each of them in withSession, I get terrible performance.
db withSession {
  SomeTable.insert(a,b)
}
Running the above example 100 times takes 22 seconds. Running them all in a single session is instantaneous.
Is there a way to re-use the session in subsequent function invocations?
Do you have some type of connection pooling in place (see JDBC Connection Pooling: Connection Reuse?)? If not, you'll be opening a new connection for every withSession(...), and that is a very slow approach. See http://groups.google.com/group/scalaquery/browse_thread/thread/9c32a2211aa8cea9 for a description of how to use c3p0 with ScalaQuery.
If you use a managed resource from an application server you'll usually get this for "free", but in stand-alone servers (for example Jetty) you'll have to configure this yourself.
I'm probably stating the obvious, but you could just put more calls inside the withSession block:
db withSession {
  SomeTable.insert(a,b)
  SomeOtherTable.insert(a,b)
}
Alternatively, you can create an implicit session, do your business, then close it when you're done:
implicit val session = db.createSession
try {
  SomeTable.insert(a,b)
  SomeOtherTable.insert(a,b)
} finally {
  session.close
}