Mocking PostgreSQL using Python & pytest - postgresql

import psycopg2.extras

import dbconnect  # your own connection helper module

def fetch_holidays_in_date_range(src):
    # parameterized query; the original used str.format with no {} placeholder
    query = "SELECT * FROM holiday_tab WHERE id = %s"
    db = dbconnect.connect()
    # define the cursor so rows come back as dictionaries
    cursor = db.cursor(cursor_factory=psycopg2.extras.RealDictCursor)
    # query the database
    cursor.execute(query, (src,))
    rows = cursor.fetchall()
    dbconnect.destroy(cursor, db)
    return rows
Could someone help me with how to mock this code in pytest or unittest? I've googled for mocking a DB and found hardly anything.

pytest will not run your tests against the production/main DB if you are using pytest-django.
There is a better approach to solving this issue:
pytest-django's DB resolution
Whenever you run a test marked with @pytest.mark.django_db, the tests run against a newly created DB named test_<your_production_db_name>.
So if your DB name is hello, pytest will create a new DB called test_hello and run the tests on it.
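If you are not tied to Django, you can also mock the psycopg2 layer directly with unittest.mock, so the test never touches a real database. A minimal sketch, assuming the function above lives in a module named holidays and that dbconnect is imported there (the module name and row contents are placeholders):
from unittest import mock

from holidays import fetch_holidays_in_date_range  # hypothetical module name

@mock.patch("holidays.dbconnect")
def test_fetch_holidays_in_date_range(mock_dbconnect):
    # canned rows the fake cursor returns instead of hitting PostgreSQL
    fake_rows = [{"id": 1, "name": "New Year"}]
    mock_cursor = mock.MagicMock()
    mock_cursor.fetchall.return_value = fake_rows
    # connect() returns a fake db whose cursor() returns our fake cursor
    mock_dbconnect.connect.return_value.cursor.return_value = mock_cursor

    rows = fetch_holidays_in_date_range(1)

    assert rows == fake_rows
    mock_cursor.execute.assert_called_once()
    mock_dbconnect.destroy.assert_called_once()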

How to use pytest reuse-db correctly

I have been racking my brain trying to figure out how --reuse-db works. I have a super-simple Django project with one model, Student, and the following test:
import pytest
from main.models import Student

@pytest.mark.django_db
def test_1():
    Student.objects.create(name=1)
    assert Student.objects.all().count() == 1
When I run it for the first time with the command pytest --reuse-db, the test passes, and I am not surprised.
But when I run pytest --reuse-db for the second time, I expect that the DB is not destroyed and the test fails, because I expect that Student.objects.all().count() == 2.
Am I misunderstanding the --reuse-db flag?
--reuse-db only controls whether the test database is dropped and re-created between runs; it does not preserve your test data.
Each test marked with @pytest.mark.django_db runs inside a transaction that is rolled back when the test finishes, so the Student row never survives the test, and the count is 1 on every run.
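For completeness, pytest-django pairs --reuse-db with a --create-db flag that forces the test database to be re-created, which is what you want after schema changes (a usage sketch):
pytest --reuse-db   # keep the test database between runs
pytest --create-db  # drop and re-create it, e.g. after new migrations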

How to "restart" Cloud MongoDB Atlas Database

Today I was stress testing multiprocessing in Python against our cloud MongoDB (Atlas).
It's currently running at 100%, and I'd like to do something like a "restart".
I have found a "Shutdown" command, but I can't find a command to start it back up after it has shut down, so I'm afraid to run just the "Shutdown".
I have tried killing processes one at a time in the lower right section of the Atlas monitoring screen, but after refreshing the page the same process numbers are still there, and I think there are more at the bottom of the list. I think they are all backed up.
An insert of a large document does not return to the Python program within 5 minutes. I need to get that working again (it should complete in 10-15 seconds, as it has in the past).
I am able to open a command window and connect to that server; I'm just unclear on what commands to run.
[Screenshot: an example of how I tried to kill some of the processes]
Also note that the "Performance Advisor" page is not recommending any new indexes.
Update 1:
Alternatively, can I kill all running, hung, or locked processes?
I was reading about killOp here (https://docs.mongodb.com/manual/tutorial/terminate-running-operations/), but found it confusing because of the version differences and the fact that I'm using Atlas.
I'm not sure if there is an easier way, but this is what I did.
First, I ran a Python program to extract all the desired operation IDs based on my database and collection name. You have to look at the file it creates to understand the if statements in the code below. NOTE: PyMongo says that db.current_op is deprecated, and I haven't found a way to do this without that command.
Note that the docs page warns against killing certain types of operations, so I was careful to pick ones that were doing inserts on one specific collection. (Do not attempt to kill all processes in the returned JSON.)
import json
import datetime
from uuid import UUID

from pymongo import MongoClient

import configHandler  # local helper that loads config.json

def uuid_convert(o):
    # json.dump cannot serialize UUIDs; convert them to hex strings
    if isinstance(o, UUID):
        return o.hex

# This gets all my config from a config.json file; not including that code here.
config_dict = configHandler.getConfigVariables()
cluster = MongoClient(config_dict['MONGODB_CONNECTION_STRING_ADMIN'])
db = cluster[config_dict['MONGODB_CLUSTER']]
current_ops = db.current_op(True)

count_ops = 0
for op in current_ops["inprog"]:
    count_ops += 1
    # db.kill - no such command
    if op["type"] == "op" and "op" in op:
        if op["op"] == "insert" and op["command"]["insert"] == "TestCollectionName":
            # print(op["opid"], op["command"]["insert"])
            print('db.adminCommand({"killOp": 1, "op": ' + str(op["opid"]) + '})')

print("\n\ncount_ops=", count_ops)

currDateTime = datetime.datetime.now()
print("type(current_ops) = ", type(current_ops))
# this dictionary has nested fields
# https://docs.python.org/3/library/datetime.html#strftime-strptime-behavior
filename = "./data/currents_ops" + currDateTime.strftime("%y_%m_%d__%H_%M_%S") + ".json"
with open(filename, "w") as file1:
    json.dump(current_ops, file1, indent=4, default=uuid_convert)
print("Wrote to filename=", filename)
It writes the full ops file to disk, but I did a copy/paste from the command window into a file. Then from the command line, I ran something like this:
mongo "mongodb+srv://mycluster0.otwxp.mongodb.net/mydbame" --username myuser --password abc1234 <kill_opid_script.js
The kill_opid_script.js looked like this. I added the print(db) calls because the first time I ran it, it didn't seem to do anything.
print(db)
db.adminCommand({"killOp": 1, "op": 648685})
db.adminCommand({"killOp": 1, "op": 667396})
db.adminCommand({"killOp": 1, "op": 557439})
etc... for 400+ times...
print(db)
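As an alternative to generating shell commands, the same thing can be done entirely from PyMongo. The non-deprecated replacement for db.current_op is the $currentOp aggregation stage on the admin database; a minimal sketch, assuming PyMongo 3.9+, numeric opids, and an Atlas user privileged to run killOp (the connection string and collection name are placeholders):
from pymongo import MongoClient

client = MongoClient("mongodb+srv://...")  # your Atlas admin connection string
admin = client.admin

# $currentOp replaces the deprecated current_op() helper (MongoDB 3.6+)
for op in admin.aggregate([{"$currentOp": {}}]):
    if op.get("op") == "insert" and op.get("command", {}).get("insert") == "TestCollectionName":
        # killOp takes the numeric opid reported by $currentOp
        admin.command("killOp", op=op["opid"])
        print("killed opid", op["opid"])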

MongoDB - A Script to Create Indexes

New to MongoDB - at the moment I'm creating indexes directly from my web app, however I want to instead have some sort of script I can run (and more easily maintain) that creates my various text indexes for me.
Wanted to check: is this possible? I'm unsure how I would actually execute it if so - namely, I have MongoDB running in a Docker container - so do I have to bash into that and then run the .sh? Or would I just specify the DB and collection in the script itself and run it from the terminal as usual?
Any pointers would be appreciated.
Thanks.
You can do it using JavaScript, run through the mongo shell:
var createIndexes = function() {
    conn = new Mongo();
    db = conn.getDB("yourDatabaseName");  // put your database name here
    setMapInd = db.testMappings.createIndex( { 'testId': 1 }, { unique: true } );
    getMapInd = db.testMappings.getIndexes();
    printjson("---------------------Below indexes created in Mappings collection-----------------------");
    printjson(getMapInd);
};
createIndexes();
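To actually run it, save the script to a file and pass it to the mongo shell from your terminal; if the shell only exists inside your Docker container, copy the script in and run it there. A sketch (the file, container, and connection-string names are placeholders):
mongo "mongodb://localhost:27017/mydb" create_indexes.js
# or, when the shell lives inside the container:
docker cp create_indexes.js mymongo:/tmp/create_indexes.js
docker exec mymongo mongo /tmp/create_indexes.js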

How to aggregate test results to publish to testrail after executing pytest tests with xdist?

I'm running into a problem like this: I'm currently using pytest to run test cases, reducing execution time with xdist to run tests in parallel, and publishing test results to TestRail. The issue is that when using xdist, the pytest-testrail plugin creates a test run for each xdist worker and then publishes the test cases as Untested.
I tried the pytest_terminal_summary hook to prevent the pytest_sessionfinish plugin hook from being called multiple times.
I expect only one test run to be created, but multiple test runs are still created.
I ran into the same problem, but found a kind of duct-tape workaround.
I found that all results are collected properly into a single test run if we run the tests with the --tr-run-id key.
If you are using Jenkins jobs to automate the process, you can do the following:
1) create a test run using the TestRail API
2) get the ID of this test run
3) run the tests with --tr-run-id=$TEST_RUN_ID
I used these docs:
http://docs.gurock.com/testrail-api2/bindings-python
http://docs.gurock.com/testrail-api2/reference-runs
from testrail import *
import sys

client = APIClient('URL')
client.user = 'login'
client.password = 'password'
# add_run/1 creates a run in project 1; sys.argv[1] becomes the run name
result = client.send_post('add_run/1', {"name": sys.argv[1], "assignedto_id": 1}).get("id")
print(result)
then in the Jenkins shell:
RUN_ID=`python3 testrail_run.py $BUILD_TAG`
and then
python3 -m pytest -n 3 --testrail --tr-run-id=$RUN_ID --tr-config=testrail.cfg ...
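Optionally, you can close the run at the end of the job using the same bindings, so no further results can be posted to it (close_run is part of the TestRail v2 API; result is the run ID created above):
# close the finished run; 'result' holds the run ID from add_run
client.send_post('close_run/' + str(result), {})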

Lumen testing environment with MongoDB

In a service provider I set the Mongo database name I am using within the application like this:
$this->app->bind('MongoDB', function() {
    $client = new MongoClient();
    return $client->selectDB('myproductiondatabase');
});
When running phpunit to run my tests, I want to use a different database that gets recreated on every test. What I've done so far is:
$db = $this->app->environment('production') ? 'myproductiondatabase' : 'mytestingdatabase';
$this->app->bind('MongoDB', function() use ($db) {  // $db must be imported into the closure
    $client = new MongoClient();
    return $client->selectDB($db);
});
This doesn't seem quite right. I understand I can make multiple .env files for testing and such, but I'm not sure how, when running phpunit from the command line, it will know which .env file to load.
What's the best way?