How can I performance test a Postgres Database with Gatling?

I'm using Gatling to performance test an application, but the application has a Postgres database, which relies on JDBC to connect. Is it possible to run a scenario that tests the database performance, and if so, how?
I found the jdbcFeeder, but I do not know how to execute the scenario I've set up, since exec doesn't accept the URL I provide it... jdbc:postgresql://host:port/database?user=<user>&password=<password>. It returns a java.lang.IllegalArgumentException: could not be parsed into a proper Uri, missing host.
Example:
val sqlQueryFeeder: RecordSeqFeederBuilder[Any] =
  jdbcFeeder(connectionUrl, user, password, s"SELECT * FROM $schema.$table")

val scn: ScenarioBuilder = scenario("Test 1")
  .feed(sqlQueryFeeder)
  .exec(http.baseUrl("jdbc:postgresql://host:port/database?user=<user>&password=<password>"))

setUp(scn.inject(atOnceUsers(100)))

Not out of the box.
jdbcFeeder is about driving performance tests with data read from a database (i.e. it's about "feeding" a test with data, not about testing a database).
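For contrast, here is roughly how jdbcFeeder is meant to be used: pulling rows out of Postgres to parameterize requests against the application itself. A minimal sketch — the hosts, credentials, table, and endpoint are all placeholders:

import io.gatling.core.Predef._
import io.gatling.http.Predef._
import io.gatling.jdbc.Predef._

class FeedFromPostgresSimulation extends Simulation {
  // each virtual user picks up one row; its 'id' column fills ${id} below
  val feeder = jdbcFeeder(
    "jdbc:postgresql://host:5432/database", "user", "password",
    "SELECT id FROM my_schema.my_table")

  val scn = scenario("Feed from Postgres")
    .feed(feeder)
    .exec(http("get item").get("/items/${id}")) // hypothetical endpoint

  setUp(scn.inject(atOnceUsers(100)))
    .protocols(http.baseUrl("http://app-under-test:8080"))
}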
The gatling-jdbc project appears to be intended to do what you want, but it didn't work when I tried it out (Gatling 3.0-RC4). I'm pretty new to Gatling and Scala, so the problem might be me, or it might even be the Gatling release candidate; you might have better luck if you try it out.
There's also gatling-sql, but I've not tried it myself and it hasn't been touched in over a year.
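If neither plugin works out, one fallback — not Gatling at all, but sometimes enough for a rough number — is a plain JDBC timing loop. A sketch, with connection details and query as placeholders:

import java.sql.DriverManager

object PgQueryTiming extends App {
  // Placeholders: adjust the URL, credentials and query to your setup.
  val conn = DriverManager.getConnection(
    "jdbc:postgresql://host:5432/database", "user", "password")
  val stmt = conn.prepareStatement("SELECT * FROM my_schema.my_table")

  val iterations = 100
  val start = System.nanoTime()
  for (_ <- 1 to iterations) {
    val rs = stmt.executeQuery()
    while (rs.next()) {} // drain the result set so the work is actually done
    rs.close()
  }
  val totalMs = (System.nanoTime() - start) / 1e6
  println(f"$iterations%d queries in $totalMs%.1f ms (avg ${totalMs / iterations}%.2f ms)")

  conn.close()
}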

Related

Unit testing strategy

I have to admit I'm quite new to unit testing, and there are a lot of questions for me.
It's hard to name, but I think it's a behaviour test; anyway, let me go straight to the example:
I need to test the user-roles listing to make sure that my endpoint works correctly and returns all user roles assigned to the user.
That means:
I need to create a user
I need to create a role
I need to assign the created role to the created user
As you can see, there are three operations that must be executed before the actual test, and I believe that in larger applications such a list can grow to a much larger number of operations and become even more complex.
The question is how I should test such endpoints: should I just insert raw data into the DB, or write some code that performs these preparations?
It's probably best to test the individual units of your service without hitting the service itself; otherwise you're also unit testing the WebApi framework.
This will also allow you to mock your database, so you don't have to rely on any stored data to run your tests, or on authorization to your service.
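The idea is language-agnostic; here is a minimal sketch (in Scala, with every name hypothetical) of testing the role-listing logic against a hand-rolled stub instead of a real database:

case class Role(name: String)

trait RoleRepository {
  def rolesFor(userId: Long): Seq[Role]
}

class RoleService(repo: RoleRepository) {
  def listRoles(userId: Long): Seq[String] = repo.rolesFor(userId).map(_.name)
}

object RoleServiceTest extends App {
  // The stub stands in for the database: no user or role rows are ever created.
  val stub = new RoleRepository {
    def rolesFor(userId: Long): Seq[Role] =
      if (userId == 42L) Seq(Role("admin"), Role("editor")) else Seq.empty
  }
  assert(new RoleService(stub).listRoles(42L) == Seq("admin", "editor"))
  println("ok")
}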

Unable to practice Sql Injection (blind) on DVWA 1.10

I am practicing security testing. I came across DVWA and started practicing SQL injection. I was doing fine until I started on SQL Injection (Blind): no matter which query I try, I do not get the desired result.
For example:
1' and 1=0 union select null,table_name from information_schema.tables#
simply returns "User ID exists in the database."
I have set the DVWA security level to Low, and I have made sure there are no errors on the application's Setup page under the Setup Check section.
Following are environment details:
Operating system: Windows
Backend database: MySQL
PHP version: 5.6.16
I think the answer is here, and the behavior is expected:
https://github.com/ethicalhack3r/DVWA/issues/12
Someone complained of the opposite behavior, the developer agreed, and a contributor named g0tm1lk fixed it. He made the exercise genuinely "blind", so you have to use blind injection methods to test the vulnerability.
Showing SQL error messages to the user would be a SQL injection vulnerability plus a misconfiguration issue.
A blind SQL injection might occur when the columns of the results returned by a query are not shown to the user. However, the user can tell somehow if the query returned any records or none.
E.g., suppose the URL "http://www.example.com/user?id=USER_ID" returns:
200 if USER_ID exists
404 if USER_ID does not exist
but shows no information from the query results (e.g. username, address, phone, etc.).
If the page is vulnerable to blind SQLi, an attacker won't be able to get info from the DB printed on the result page, but they might be able to infer it by asking yes/no questions, as sketched below.
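For instance, on DVWA's blind page the usual boolean-based approach is to send one payload whose condition is true and one whose condition is false, then compare the responses (the exact messages depend on the DVWA version; these are illustrative):
1' AND 1=1 #   (condition true: the page reports the user ID exists)
1' AND 1=2 #   (condition false: the page reports the user ID is missing)
Swapping the false condition for a test such as substring(version(),1,1)='5' then extracts information from the database one yes/no answer at a time.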

Using the best practice in API test setup - DB communication needed (TDD)

I am going to write a new endpoint to unlock the domain object, something like:
../domainObject/{id}/unlock
As I apply TDD, I have started to write an API test first. When the test fails, I am going to start writing Integration and Unit tests and implement the real code.
For the API test, I need a locked domain object as the test fixture for the unlock endpoint that will be created. However, there is no endpoint for locking the domain object in the system (our Quartz jobs lock the data), so I would have to create the data by using the database directly.
I know that using the database directly in an API test is not best practice: if you need test data, you should create it through the API too, e.g.
../domainObject/{id}/lock
Should this scenario be an exception in this case, or is there another practice I should follow?
Thanks.
There is no universally good or bad practice here; it's all about how much you value end-to-end testing of the system, including the database.
Testing the DB part will require a little more infrastructure, because you'll have to either use an in-memory database for faster test runs, or set up a full-fledged permanent test DB in your dev environment. When doing the latter, it might be a good idea to have a separate test suite for end-to-end tests that runs less frequently than your normal test suite, because it will inevitably be slower.
In that scenario, you'll have preexisting test data always present in the DB and a locked object can be one of them.
If you don't care about all this, you can stub the data store abstraction (repository, DAO or whatever) to return a canned locked object.
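A minimal sketch of that stubbing option (in Scala, with all names hypothetical, assuming the unlock handler depends on a repository abstraction):

case class DomainObject(id: Long, locked: Boolean)

trait DomainObjectRepository {
  def find(id: Long): Option[DomainObject]
  def save(obj: DomainObject): Unit
}

class UnlockHandler(repo: DomainObjectRepository) {
  // true when the object existed and has been unlocked
  def unlock(id: Long): Boolean = repo.find(id) match {
    case Some(obj) => repo.save(obj.copy(locked = false)); true
    case None      => false
  }
}

object UnlockHandlerTest extends App {
  // The canned locked object replaces fixture rows inserted straight into the DB.
  var store = Map(1L -> DomainObject(1L, locked = true))
  val stubRepo = new DomainObjectRepository {
    def find(id: Long) = store.get(id)
    def save(obj: DomainObject) = store += (obj.id -> obj)
  }
  assert(new UnlockHandler(stubRepo).unlock(1L))
  assert(!store(1L).locked)
  println("ok")
}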

How to perform an "I can reach my database" healthcheck?

I have a classic spray+slick HTTP server which is my database access layer, and I'd like to have a healthcheck route to ensure my server is still able to reach my DB.
I could do it by doing a generic sql query, but I was wondering if there was a better way to just check the connection is alive and usable without actually adding load on the database (or at least the minimum possible load).
So, pretty much:
val db = Database.forConfig("app.mydb")
[...]
db.???? // Do the check here
Why do you want to avoid executing a query against the database?
I think the best health check is to actually use the database as your application would (actually connecting and running a query). With that in mind, you can perform a SELECT 1 against your DB, and verify that it responds accordingly.
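With Slick (which the question uses), that check might look roughly like this — a sketch assuming Slick 3 with PostgresProfile; the object and method names are made up:

import scala.concurrent.{ExecutionContext, Future}
import slick.jdbc.PostgresProfile.api._

object DbHealthCheck {
  // Runs SELECT 1 and maps any failure (pool exhausted, network down,
  // bad credentials...) to "unhealthy" instead of letting the route blow up.
  def isReachable(db: Database)(implicit ec: ExecutionContext): Future[Boolean] =
    db.run(sql"SELECT 1".as[Int].head)
      .map(_ == 1)
      .recover { case _ => false }
}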

How to combine py.test fixtures with Flask-SQLAlchemy and PostgreSQL?

I'm struggling to write py.test fixtures for managing my app's database that maximize speed, support pytest-xdist parallelization of tests, and isolate the tests from each other.
I'm using Flask-SQLAlchemy 2.1 against a PostgreSQL 9.4 database.
Here's the general outline of what I'm trying to accomplish:
$ py.test -n 3 spins up three test sessions for running tests.
Within each session, a py.test fixture runs once to set up a transaction and create the database tables, and then at the end of the session it rolls back the transaction. Creating the database tables needs to happen within a PostgreSQL transaction that's visible only to that particular test session, as otherwise the parallelized test sessions created by pytest-xdist conflict with each other.
A second py.test fixture that runs for every test connects to the existing transaction in order to see the created tables, creates a nested savepoint, runs the test, then rolls back to the nested savepoint.
Ideally, these pytest fixtures support tests that call db.session.rollback(). There's a potential recipe for accomplishing this at the bottom of this SQLAlchemy doc.
Ideally, the pytest fixtures should yield the db object, not just the session, so that folks can write tests without having to remember to use a session that's different than the standard db.session they use throughout the app.
Here's what I have so far:
import pytest

# create_app() is my Flask application factory
# db is just 'db = SQLAlchemy()' + 'db.init_app(app)' within the create_app() function
from app import create_app, db as _db


@pytest.yield_fixture(scope='session', autouse=True)
def app():
    '''Session-wide test application'''
    a = create_app('testing')
    with a.app_context():
        yield a


@pytest.yield_fixture(scope='session')
def db_tables(app):
    '''Session-wide test database'''
    connection = _db.engine.connect()
    trans = connection.begin()  # begin a non-ORM transaction

    # Theoretically this creates the tables within the transaction
    _db.create_all()

    yield _db

    trans.rollback()
    connection.close()


@pytest.yield_fixture(scope='function')
def db(db_tables):
    '''db session that is joined to existing transaction'''
    # I am quite sure this is broken, but it's the general idea

    # bind an individual Session to the existing transaction
    db_tables.session = db_tables.Session(bind=db_tables.connection)
    # start the session in a SAVEPOINT...
    db_tables.session.begin_nested()

    # yield the db object, not just the session, so that tests
    # can be written transparently using the db object
    # without requiring someone to understand the intricacies of these
    # py.test fixtures or having to remember when to use a session that's
    # different than db.session
    yield db_tables

    # rollback to the savepoint before the test ran
    db_tables.session.rollback()
    db_tables.session.remove()  # not sure this is needed
Here are the most useful references I found while googling:
http://docs.sqlalchemy.org/en/latest/orm/session_transaction.html#joining-a-session-into-an-external-transaction-such-as-for-test-suites
http://koo.fi/blog/2015/10/22/flask-sqlalchemy-and-postgresql-unit-testing-with-transaction-savepoints/
https://github.com/mitsuhiko/flask-sqlalchemy/pull/249
I'm a couple years late here, but you might be interested in pytest-flask-sqlalchemy, a plugin I wrote to help address this exact problem.
The plugin provides two fixtures, db_session and db_engine, which you can use like regular Session and Engine objects to run updates that will get rolled back at the end of the test. It also exposes a few configuration directives (mocked-engines and mocked-sessions) that will mock out connectables in your app and replace them with these fixtures so that you can run methods and be sure that any state changes will get cleaned up when the test exits.
The plugin should work with a variety of databases, but it's been tested most heavily against Postgres 9.6 and is in production in the test suite for https://dedupe.io. You can find some examples in the documentation that should help you get started, but if you're willing to provide some code I'd be happy to demonstrate how to use the plugin, too.
I had a similar issue trying to combine yield fixtures. Unfortunately, according to the docs, you are not able to combine more than one yield level.
But you might be able to find a workaround using request.finalizer:
@pytest.fixture(scope='session', autouse=True)
def app():
    '''Session-wide test application'''
    a = create_app('testing')
    with a.app_context():
        return a


@pytest.fixture(scope='session')
def db_tables(request, app):
    '''Session-wide test database'''
    connection = _db.engine.connect()
    trans = connection.begin()  # begin a non-ORM transaction

    # Theoretically this creates the tables within the transaction
    _db.create_all()

    def close_db_session():
        trans.rollback()
        connection.close()

    request.addfinalizer(close_db_session)

    return _db