Mocking MongoDB for Testing a REST API designed in Flask

I have a Flask application where the REST APIs are built using flask_restful with a MongoDB backend. I want to write functional tests using pytest, with mongomock mocking MongoDB to test the APIs, but I am not able to configure that. Can anyone guide me with an example of how to achieve this?
Here is the fixture I am using in the conftest.py file:
@pytest.fixture(scope='module')
def test_client():
    # flask_app = Flask(__name__)
    # Flask provides a way to test your application by exposing the Werkzeug
    # test client and handling the context locals for you.
    testing_client = app.test_client()
    # Establish an application context before running the tests.
    ctx = app.app_context()
    ctx.push()
    yield testing_client  # this is where the testing happens!
    ctx.pop()

@pytest.fixture(autouse=True)
def patch_mongo():
    db = connect('testdb', host='mongomock://localhost')
    yield db
    db.drop_database('testdb')
    disconnect()
    db.close()
and here is the test function for testing a POST request for the creation of the user:
def test_mongo(test_client, patch_mongo):
    headers = {
        "Content-Type": "application/json",
        "Authorization": "token"
    }
    response = test_client.post('/users', headers=headers, data=json.dumps(data))
    print(response.get_json())
    assert response.status_code == 200
The issue is that instead of using testdb, pytest is creating the user in the production database. Is there something missing in the configuration?
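One thing I suspect is ordering: if the application registers its real mongoengine connection at import time, registering the mongomock connection afterwards may be too late. A sketch of what I think the fixture should look like, assuming the app uses mongoengine's default connection alias:

import pytest
from mongoengine import connect, disconnect

@pytest.fixture(autouse=True)
def patch_mongo():
    # drop whatever connection the app registered at import time,
    # then register the mock one under the same default alias
    disconnect()
    db = connect('testdb', host='mongomock://localhost')
    yield db
    db.drop_database('testdb')
    disconnect()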

Related

How to create a postgres async mock up in python with asyncpg?

We have a basic FastAPI server with some HTTP and WebSocket endpoints.
We're using Postgres with asyncpg to do some basic CRUD operations.
One example is a POST endpoint that creates an item in the DB and notifies the WebSocket listeners:
import asyncio

from fastapi import Request

async def notify_todo_listeners():
    todos = await db_handler.fetch_todos()
    notify_all(todos)

@app.post("/todo")
async def post_todo(request: Request):
    todo = await db_handler.insert(await request.json())
    asyncio.create_task(notify_todo_listeners())
    return todo
And we want to test that endpoint in pytest, so we create a temporary Postgres DB using Docker, and we also patch the db_handler to use a mock connection that we create in the test environment.
That connection is set up so that it is rolled back once the test is finished:
@pytest_asyncio.fixture(scope="function")
async def session(monkeypatch):
    connection = await asyncpg.connect(CONNECTION_STRING)
    transaction = connection.transaction()
    await transaction.start()

    async def mock_get_connection():
        return connection

    monkeypatch.setattr(database_handler, "get_connection", mock_get_connection)
    yield connection
    await transaction.rollback()
async def test_post_todo(session):
    async with AsyncClient(app=app, base_url="http://test") as client:
        response = await client.post("/todo", json=some_todo_object)
        assert response.status_code == 200
        # some other assertions ...
The problem is that when we try to test that endpoint, the part where we create a new task to notify subscribers and use the DB to fetch the todo list raises this error:
exception=InterfaceError('cannot perform operation: another operation is in progress')
My understanding is that a single asyncpg connection cannot be used concurrently across different coroutines; otherwise we get that error.
Question is, how can we properly mock this database while rolling back all changes made during each test run, while accounting for the possibility of having multiple co-routine tasks running?
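One idea, sketched below and not verified, reusing the names from the code above (database_handler, get_connection, CONNECTION_STRING): hand the application a proxy that serializes every query on the single test connection with an asyncio.Lock, so concurrent tasks never touch the connection at the same time, while everything still runs inside the one transaction that gets rolled back.

import asyncio
import asyncpg
import pytest_asyncio

@pytest_asyncio.fixture(scope="function")
async def session(monkeypatch):
    connection = await asyncpg.connect(CONNECTION_STRING)
    transaction = connection.transaction()
    await transaction.start()
    lock = asyncio.Lock()

    class LockedConnection:
        # proxy that serializes coroutine calls on the shared connection
        def __getattr__(self, name):
            method = getattr(connection, name)

            async def locked(*args, **kwargs):
                async with lock:
                    return await method(*args, **kwargs)

            return locked

    async def mock_get_connection():
        return LockedConnection()

    monkeypatch.setattr(database_handler, "get_connection", mock_get_connection)
    yield connection
    await transaction.rollback()
    await connection.close()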

Writing log to gcloud Vertex AI Endpoint using gcloud client fails with google.api_core.exceptions.MethodNotImplemented: 501

Trying to use the Google logging client library for writing logs to gcloud. Specifically, I'm interested in writing logs that will be attached to a managed resource, in this case a Vertex AI endpoint:
Code sample:
import json
import logging

from google.api_core.client_options import ClientOptions
import google.cloud.logging_v2 as logging_v2
from google.cloud.logging_v2.resource import Resource
from google.oauth2 import service_account

# settings, SA_KEY_JSON and ENRICHED_FORMATTER are defined elsewhere in the module

def init_module_logger(module_name: str) -> logging.Logger:
    module_logger = logging.getLogger(module_name)
    module_logger.setLevel(settings.LOG_LEVEL)
    credentials = service_account.Credentials.from_service_account_info(
        json.loads(SA_KEY_JSON)
    )
    client = logging_v2.client.Client(
        credentials=credentials,
        client_options=ClientOptions(api_endpoint="us-east1-aiplatform.googleapis.com"),
    )
    handler = client.get_default_handler(
        resource=Resource(
            type="aiplatform.googleapis.com/Endpoint",
            labels={"endpoint_id": "ENDPOINT_NUMBER_ID",
                    "location": "us-east1"},
        )
    )
    # Assume we have the formatter
    handler.setFormatter(ENRICHED_FORMATTER)
    module_logger.addHandler(handler)
    return module_logger

logger = init_module_logger(__name__)
logger.info("This fails with 501")
And I am getting:
google.api_core.exceptions.MethodNotImplemented: 501 The GRPC target is not implemented on the server, host: us-east1-aiplatform.googleapis.com, method: /google.logging.v2.LoggingServiceV2/WriteLogEntries. Sent all pending logs.
I thought we needed to enable an API, but I was told it's enabled and that we have the https://www.googleapis.com/auth/logging.write scope.
What could be causing the error?
As mentioned by @DazWilkin in the comments, the error occurs because the API endpoint us-east1-aiplatform.googleapis.com does not have a method called WriteLogEntries.
That endpoint is used to send requests to Vertex AI services, not to Cloud Logging. The API endpoint to be used is logging.googleapis.com, as shown in the entries.write method. Refer to this documentation for more info.
ClientOptions() should have logging.googleapis.com as the api_endpoint parameter. If the client_options parameter is not specified, logging.googleapis.com is used by default.
After changing the api_endpoint parameter, I was able to successfully write the log entries. The ClientOptions() is as follows:
client = logging_v2.client.Client(
    credentials=credentials,
    client_options=ClientOptions(api_endpoint="logging.googleapis.com"),
)
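Alternatively, since logging.googleapis.com is the default, you can simply omit client_options altogether; a minimal sketch:

# client_options omitted: the client defaults to logging.googleapis.com
client = logging_v2.client.Client(credentials=credentials)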

How do I configure my Postman mock server response to return a date always two days in the past?

In my Postman Mock Server, I have set up a GET request to return JSON, in which the following is returned
"due_date": "2021-10-10"
What I would like is to adjust the response so that the date returned is two days in the past. So if today is "2021-10-10", I would like the response to contain
"due_date": "2021-10-08"
And if today is "2022-01-01", I would like the response to contain
"due_date": "2021-12-30"
And so on. How do I set up my Postman mock server request to return such data?
I think it's a good question, and I was curious, so I did some research and found a workaround. It's a bit complex; I'm not sure whether it's worth it or not.
First of all, the Postman Mock Server (Mock Server for short) cannot execute any tests or pre-request scripts, so it is not capable of computing things. You need a calculation here, so what can you do? Well, you can define an environment for the Mock Server, which gives you the ability to use dynamic values in mock responses.
I will continue step by step to show the process.
1 - Open a Mock Server with an environment:
1.1 - Create a collection for the new Mock Server:
Your mock response will look like the one below:
{"due_date": "{{date}}"}
1.2 - Create an environment:
1.3 - Finish creating the Mock Server:
1.4 - When you finish, Postman creates a collection like the one below:
1.5 - You can test your Mock Server from this collection:
As you can see, the Mock Server uses the environment variable in its response.
Now we have to figure out how to update the environment variable.
You have to use an external service to update your environment variable. You can use a Postman Monitor for this job, because it can execute tests (meaning arbitrary code) and works like a cron job, which means you can set a Postman Monitor to update a specific environment variable every 24 hours.
2 - Open a Postman Monitor to update your environment:
2.1 - This step is pretty straightforward: create a Postman Monitor with the configuration below:
2.2 - Write a test to update the environment:
The test will look like the one below:
// you have to use pm.test(), otherwise Postman Monitor will not execute the test
const moment = require("moment");

pm.test("update date", () => {
    // set the date 2 days in the past
    let startdate = moment();
    const dayCount = 2;
    startdate = startdate.subtract(dayCount, "days");
    startdate = startdate.format("YYYY-MM-DD");

    // this does not work on Postman Monitor; use the Postman API as below
    // pm.environment.set('date', startdate);

    const data = JSON.stringify({
        environment: {
            values: [
                {
                    key: "date",
                    value: startdate,
                },
            ],
        },
    });

    const environmentID = "<your-environment-id>";

    // set the environment variable with the Postman API
    const postRequest = {
        url: `https://api.getpostman.com/environments/${environmentID}`,
        method: "PUT",
        header: {
            "Content-Type": "application/json",
            "X-API-Key": "<your-postman-api-key>",
        },
        body: {
            mode: "raw",
            raw: data,
        },
    };

    pm.sendRequest(postRequest, (error, response) => {
        console.log(error ? error : response.json());
        // force the test to fail if any error occurs
        if (error) pm.expect(true).to.equal(false);
    });
});
You cannot change an environment variable with pm.environment when you are using a Postman Monitor. You should use the Postman API with pm.sendRequest in your test.
You need to get a Postman API key, and you need to find your environment ID. You can get the environment ID from the Postman API.
To learn your Environment ID, use this endpoint: https://www.postman.com/postman/workspace/postman-public-workspace/request/12959542-b7ace502-4a5a-4f1c-8164-158811bbf236
To learn how to get a Postman API key: https://learning.postman.com/docs/developer/intro-api/#generating-a-postman-api-key
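If you prefer to look the environment ID up programmatically, here is a small Python sketch against the Postman API's GET /environments endpoint (the API key placeholder is yours to fill in):

import requests

# list every environment visible to this API key, with its ID
resp = requests.get(
    "https://api.getpostman.com/environments",
    headers={"X-API-Key": "<your-postman-api-key>"},
)
for env in resp.json()["environments"]:
    print(env["id"], env["name"])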
2.3 - Run the Postman Monitor manually to make sure the tests are working:
2.4 - As you can see, the Postman Monitor executes the script:
2.5 - When I check the environment, I can see the result:
You can test from the browser to see the results:
I answered this question earlier, but I have another solution.
You can deploy a server to update the variable in your mock environment. If you want to do it for free, just use Heroku.
I wrote a Flask app in Python and deployed it to Heroku; check the code below:
from flask import Flask
import os
import json
import requests
from datetime import datetime, timedelta

app = Flask(__name__)

# the port is randomly assigned and then mapped to port 80 by Heroku
port = int(os.environ.get("PORT", 5000))
# debug mode
debug = False

@app.route('/')
def hello_world():
    N_DAYS_AGO = 2

    # calculate the date
    today = datetime.now()
    n_days_ago = today - timedelta(days=N_DAYS_AGO)
    n_days_ago_formatted = n_days_ago.strftime("%Y-%m-%d")

    # update the environment variable via the Postman API
    payload = json.dumps({
        "environment": {
            "values": [
                {
                    "key": "date",  # must match the variable used in the mock response
                    "value": n_days_ago_formatted
                }
            ]
        }
    })
    postman_api_key = "<your-postman-api-key>"
    headers = {
        'Content-Type': 'application/json',
        'X-API-Key': postman_api_key
    }
    environment_id = "<your-environment-id>"
    url = "https://api.getpostman.com/environments/" + environment_id
    r = requests.put(url, data=payload, headers=headers)

    # return the Postman API response
    return r.content

if __name__ == '__main__':
    app.run(debug=debug, host='0.0.0.0', port=port)
The code calculates the new date and sends it to the mock environment. It works; I tested it on Heroku before posting this answer.
When you visit your Heroku app's page, the code runs and the date environment variable updates automatically; use that environment variable in your mock server to solve the problem.
You need to automate this code execution, so I suggest you use UptimeRobot to ping your Heroku app once a day. On every ping, your environment variable will be updated. Don't overuse it, because Heroku has a usage quota on the free plan.
To use this code you need to know how to deploy a Flask app on Heroku. By the way, Flask is just one option here; you could use Node.js instead of Python, and the logic would stay the same.

How to use flask babel gettext in celery?

I have a Flask-Celery setup with Flask-Babel translating my texts. I can't translate in Celery tasks. I believe this is because the task doesn't know the current language (and I'm not sure it could, even if it did), which in turn is because Celery doesn't have access to the request context (from what I understood)...
What would be the solution to be able to translate?
You rightly pointed out the issue: Celery doesn't have access to the request context, which means flask_babelex.get_locale returns None. You can use the force_locale context manager available in Flask-Babel, which provides a dummy request context:
from contextlib import contextmanager
from flask import current_app
from babel import Locale
from ..config import SERVER_NAME, PREFERRED_URL_SCHEME

@contextmanager
def force_locale(locale=None):
    if not locale:
        yield
        return
    env = {
        'wsgi.url_scheme': PREFERRED_URL_SCHEME,
        'SERVER_NAME': SERVER_NAME,
        'SERVER_PORT': '',
        'REQUEST_METHOD': ''
    }
    with current_app.request_context(env) as ctx:
        ctx.babel_locale = Locale.parse(locale)
        yield
Sample Celery task:
@celery.task()
def some_task(user_id):
    user = User.objects.get(id=user_id)
    with force_locale(user.locale):
        ...gettext('TranslationKey')...
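For completeness, a hedged sketch of the calling side (the route and current_user are placeholders, assuming Flask-Login): only the user ID crosses to the worker, and the worker resolves the locale from the user record itself, so no request context needs to survive the process boundary.

@app.route('/notify')
def notify():
    # only the ID is serialized to the broker; the locale is re-read in the worker
    some_task.delay(current_user.id)
    return '', 202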

Authenticate with ECE ElasticSearch Sink from Apache Fink (Scala code)

Compiler error when using the example provided in the Flink documentation. The Flink documentation provides sample Scala code to set the REST client factory parameters when talking to Elasticsearch: https://ci.apache.org/projects/flink/flink-docs-stable/dev/connectors/elasticsearch.html.
When trying out this code, I get a compiler error in IntelliJ which says "Cannot resolve symbol restClientBuilder".
I found the following SO question, which is EXACTLY my problem, except that it is in Java and I am doing this in Scala:
Apache Flink (v1.6.0) authenticate Elasticsearch Sink (v6.4)
I tried copy-pasting the solution code from the above SO question into IntelliJ; the auto-converted code also has compiler errors.
// provide a RestClientFactory for custom configuration on the internally created REST client
// I only show setMaxRetryTimeoutMillis for illustration purposes; the actual code will use an HTTP custom callback
esSinkBuilder.setRestClientFactory(
    restClientBuilder -> {
        restClientBuilder.setMaxRetryTimeoutMillis(10)
    }
)
Then I tried the following (Java code auto-converted to Scala by IntelliJ):
// provide a RestClientFactory for custom configuration on the internally created REST client
import org.apache.http.auth.AuthScope
import org.apache.http.auth.UsernamePasswordCredentials
import org.apache.http.client.CredentialsProvider
import org.apache.http.impl.client.BasicCredentialsProvider
import org.apache.http.impl.nio.client.HttpAsyncClientBuilder
import org.elasticsearch.client.RestClientBuilder

esSinkBuilder.setRestClientFactory((restClientBuilder) => {
  def foo(restClientBuilder) = restClientBuilder.setHttpClientConfigCallback(new RestClientBuilder.HttpClientConfigCallback() {
    override def customizeHttpClient(httpClientBuilder: HttpAsyncClientBuilder): HttpAsyncClientBuilder = {
      // elasticsearch username and password
      val credentialsProvider = new BasicCredentialsProvider
      credentialsProvider.setCredentials(AuthScope.ANY, new UsernamePasswordCredentials(es_user, es_password))
      httpClientBuilder.setDefaultCredentialsProvider(credentialsProvider)
    }
  })
  foo(restClientBuilder)
})
The original code snippet produces the error "cannot resolve RestClientFactory", and the Java-to-Scala conversion shows several other errors.
So basically I need to find a Scala version of the solution described in Apache Flink (v1.6.0) authenticate Elasticsearch Sink (v6.4).
Update 1: I was able to make some progress with some help from IntelliJ. The following code compiles and runs but there is another problem.
esSinkBuilder.setRestClientFactory(
  new RestClientFactory {
    override def configureRestClientBuilder(restClientBuilder: RestClientBuilder): Unit = {
      restClientBuilder.setHttpClientConfigCallback(new RestClientBuilder.HttpClientConfigCallback() {
        override def customizeHttpClient(httpClientBuilder: HttpAsyncClientBuilder): HttpAsyncClientBuilder = {
          // elasticsearch username and password
          val credentialsProvider = new BasicCredentialsProvider
          credentialsProvider.setCredentials(AuthScope.ANY, new UsernamePasswordCredentials(es_user, es_password))
          httpClientBuilder.setDefaultCredentialsProvider(credentialsProvider)
          httpClientBuilder.setSSLContext(trustfulSslContext)
        }
      })
    }
  }
)
The problem is that I am not sure whether I should be creating the RestClientFactory object with new. What happens is that the application connects to the Elasticsearch cluster but then discovers that the SSL cert is not valid, so I had to put in the trustfulSslContext (as described here: https://gist.github.com/iRevive/4a3c7cb96374da5da80d4538f3da17cb). This got me past the SSL issue, but now the ES REST client does a ping test, the ping fails, it throws an exception, and the app shuts down. I suspect that the ping fails because of the SSL error, and maybe it is not using the trustfulSslContext I set up as part of the new RestClientFactory, which makes me suspect that I should not have used new; there should be a simple way to update the existing RestClientFactory object. Basically, this is all happening because of my lack of Scala knowledge.
Happy to report that this is resolved. The code I posted in Update 1 is correct. The ping to ECE was not working for two reasons:
1. The certificate needs to include the complete chain: the root CA, the intermediate CA, and the cert for the ECE. This got rid of the whole trustfulSslContext workaround.
2. The ECE was sitting behind an ha-proxy, and the proxy mapped the hostname in the HTTP request to the actual deployment cluster name in ECE. This mapping logic did not take into account that the Java REST high-level client uses the org.apache.http.HttpHost class, which renders the hostname as hostname:port_number even when the port number is 443. Since the mapping was not found because of the :443 suffix, the ECE returned a 404 error instead of 200 OK (the only way to find this was to look at unencrypted packets at the ha-proxy). Once the mapping logic in the ha-proxy was fixed, the mapping was found and the pings are now successful.