I'm trying to set up a mock service call to Kinesis Firehose. I'm importing mock_firehose from moto and applying it as the @mock_firehose decorator. In the test method I've created a client using boto3.
@mock_firehose
def test_push_to_stream(push_record, stream):
    ret = app.push_to_stream(push_record, stream)
    client = boto3.client('firehose', region_name='us-west-2')
I've exported the AWS_PROFILE I want to use and checked the credentials are correct. The error I encounter is:
botocore.errorfactory.ResourceNotFoundException: An error occurred (ResourceNotFoundException) when calling the PutRecord operation: Firehose sample-name under account 123456789012 not found.
Apparently the dummy account 123456789012 is the default test account for running tests against mocked AWS services. I'm not sure if I need to create a stream for the test account, but that would make sense. It still fails if I comment out the boto3.client line and just have @mock_firehose above the method. Is there a setup step I'm missing that requires me to initialize a stream before using @mock_firehose?
Moto is used to intercept any calls to AWS. Moto decodes the requests, figures out what you're trying to do and keeps an in-memory copy of the required infrastructure. So using Moto should make it seem like you're talking to AWS, but without the cost associated with it.
The dummy 123456789012 account is used to indicate that we're running against Moto, and to make sure it does not accidentally mutate any real infrastructure.
The ResourceNotFoundException actually comes from Moto here. It knows that you have access to this test account, but it does not know about the stream yet - because nothing has been created.
(AWS would probably respond with an AccessDenied error, saying that you do not have access to the dummy account number.)
With that in mind:
I've exported the AWS_PROFILE I want to use and checked the credentials are correct. The error I encounter is:
The credentials do not have to be correct, as AWS will never be reached.
To go one step further: credentials should not be correct, to ensure that AWS will never be reached.
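One common way to enforce that is the pattern shown in the Moto documentation: point the credential environment variables at dummy values before any test runs. The fixture below is a sketch for a pytest setup (the question's fixtures suggest pytest is in use):
import os
import pytest

@pytest.fixture(autouse=True)
def aws_credentials():
    """Dummy AWS credentials so that no test can accidentally reach real AWS."""
    os.environ['AWS_ACCESS_KEY_ID'] = 'testing'
    os.environ['AWS_SECRET_ACCESS_KEY'] = 'testing'
    os.environ['AWS_SECURITY_TOKEN'] = 'testing'
    os.environ['AWS_SESSION_TOKEN'] = 'testing'
    os.environ['AWS_DEFAULT_REGION'] = 'us-west-2'
    # Drop the exported profile so boto3 cannot fall back to real credentials
    os.environ.pop('AWS_PROFILE', None)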
I'm not sure if I need to create a stream for the test account
Yes, you should.
The general flow of a unit test using Moto looks like this:
Set up, in Moto, the equivalent infrastructure that you expect to exist in AWS
Run the business logic against Moto
Assert as required
So for your use case, it would roughly look like this:
@mock_firehose
def test_push_to_stream(push_record, stream):
    # Set up the architecture that exists in Prod
    client = boto3.client('firehose', region_name='us-west-2')
    client.create_delivery_stream(...)

    # Run the business logic
    ret = app.push_to_stream(push_record, stream)

    # Verify that the records exist
    ...
As an implementation tip:
At the moment, Moto only sends Firehose records through if the endpoint is S3 or HTTP. Records to other endpoints, such as Elasticsearch and Redshift, are not yet processed.
So it would make sense to set up your delivery stream to deliver to S3, as that makes it easy to verify whether the business logic sent the correct records.
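For illustration, a fuller sketch of that S3-backed approach could look like the following. It assumes the stream fixture holds the delivery stream name and push_record is the record your app sends; the bucket name and role ARN are made up (Moto should accept any well-formed role ARN):
import boto3
from moto import mock_firehose, mock_s3

@mock_s3
@mock_firehose
def test_push_to_stream(push_record, stream):
    region = 'us-west-2'

    # The destination bucket has to exist in Moto as well
    s3 = boto3.client('s3', region_name=region)
    s3.create_bucket(
        Bucket='test-firehose-bucket',
        CreateBucketConfiguration={'LocationConstraint': region},
    )

    # Create the delivery stream that the business logic expects to exist
    firehose = boto3.client('firehose', region_name=region)
    firehose.create_delivery_stream(
        DeliveryStreamName=stream,
        ExtendedS3DestinationConfiguration={
            'RoleARN': 'arn:aws:iam::123456789012:role/firehose-test-role',
            'BucketARN': 'arn:aws:s3:::test-firehose-bucket',
        },
    )

    # Run the business logic (app is the module under test from the question)
    ret = app.push_to_stream(push_record, stream)

    # Verify the record ended up in the bucket
    objects = s3.list_objects_v2(Bucket='test-firehose-bucket')['Contents']
    body = s3.get_object(Bucket='test-firehose-bucket',
                         Key=objects[0]['Key'])['Body'].read()
    assert body  # e.g. assert the serialized push_record is in there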
The documentation can be helpful to see more usage examples/ideas on how to use Moto correctly and safely: http://docs.getmoto.org/en/latest/docs/getting_started.html#moto-usage
I recorded my JMeter script on server X and made it dynamic, then ran the same script against server Y. It fetches all the data via post-processors and gives no errors, but the data is not added on the frontend. How can I solve this, and what could be the reason? (The website is the same; only the server was changed for testing.)
Expected: data should be added on the frontend, e.g. a lead is created on server Y (it is successfully created on server X).
Actual: data is not added on server Y.
Most probably you need to correlate your script as it is not doing what it is supposed to be doing.
You can run your test with 1 virtual user and 1 iteration configured in the Thread Group and inspect the request and response details using the View Results Tree listener.
My expectation is that you are either not getting logged in (you have added an HTTP Cookie Manager to your Test Plan, haven't you?) or failing to provide valid dynamic parameters. Modern web applications widely use dynamic parameters, for example for client-side state tracking or for CSRF protection.
You can easily detect dynamic parameters by recording the same scenario one more time and comparing the generated scripts. All the values which differ need to be correlated, i.e. extracted from the previous response using a suitable Post-Processor and stored into a JMeter Variable. Once done, you will need to replace the recorded hard-coded values with the aforementioned JMeter Variables.
Check out the How to Handle Correlation in JMeter article for comprehensive information with examples.
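To illustrate what correlation boils down to outside of JMeter, here is a small Python sketch using the requests library (the URL and form field names are invented). The point is the same: the dynamic value is extracted from the previous response and reused, instead of replaying the hard-coded value that was recorded:
import re
import requests

session = requests.Session()  # keeps cookies, like the HTTP Cookie Manager

# The first response contains a dynamic value (here: a CSRF token)
login_page = session.get('https://example.com/login')
token = re.search(r'name="csrf_token" value="([^"]+)"', login_page.text).group(1)

# The extracted value is sent with the next request, like a JMeter Variable
response = session.post('https://example.com/login',
                        data={'username': 'user',
                              'password': 'secret',
                              'csrf_token': token})
assert response.status_code == 200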
I have situation which is almost identical to the one described here: Play framework resource starvation after a few days
My application is simple, Play 2.6 + PostgreSQL + Slick 3.
Also, DB retrieval operations are Slick only and simple.
The usage scenario is that data comes in through one endpoint, gets stored in the DB (there are some actors storing data asynchronously, which can fail with the default supervision strategy), and is served through REST endpoints.
So far so good.
After a few days, every endpoint that has anything to do with the database stops responding. The application is served on a single t3.medium instance connected to an RDS instance. The connection count to RDS is always the same and stable, mostly idling.
What I have also noticed is that the database actually gets called and the query gets executed, but the request never completes or returns any data.
The simplest endpoint (POST) is for posting feedback - basically a one-liner:
feedbackService.storeFeedback(feedback.deviceId, feedback.message).map(_ => Success)
This Success is a wrapper around Ok("something"), so no magic there.
The feedback service stores one record in the DB in the Slick-preferred way, nothing crazy there either.
Once the feedback POST is called, I can see in the psql client that the INSERT query has been executed and the data really ends up in the database, but the HTTP request never finishes and no success response is returned. In parallel, calls to non-DB-related endpoints which do return values, like the status endpoint, go through without problems.
Production logs don't show anything and restarting helps for a day or two.
I suppose some kind of resource starvation is happening, but which and where is currently beyond me.
I have to admit I'm quite new to unit testing and there are a lot of questions for me.
It's hard to name it, but I think it's a behaviour test; anyway, let me go straight to the example:
I need to test the user roles listing to make sure that my endpoint is working correctly and returns all roles assigned to the user.
That means:
I need to create user
I need to create role
I need to assign created role to created user
As we can see, there are three operations that must be executed before the actual test, and I believe that in larger applications such a list can grow to a much larger number of operations and become even more complex.
The question is how I should test such endpoints: should I just insert raw data into the DB, or write some code that would do such preparations?
It's probably best if you test the individual units of your service without hitting the service itself, otherwise you're also unit testing the WebApi framework itself.
This will also allow you to mock your database, so you don't have to rely on any stored data, or on authorization to your service, to run your tests.
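As a rough sketch of what that can look like in Python (the service and repository names below are invented for illustration, not taken from your code):
from unittest.mock import Mock

class RoleService:
    """Hypothetical unit under test: returns the roles assigned to a user."""
    def __init__(self, repository):
        self.repository = repository

    def roles_for_user(self, user_id):
        return [role['name'] for role in self.repository.find_roles(user_id)]

def test_roles_for_user_returns_assigned_roles():
    # Stub the data layer instead of creating the user and role in a real DB
    repository = Mock()
    repository.find_roles.return_value = [{'name': 'admin'}, {'name': 'editor'}]

    service = RoleService(repository)

    assert service.roles_for_user(42) == ['admin', 'editor']
    repository.find_roles.assert_called_once_with(42)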
I am going to write a new endpoint to unlock the domain object, something like:
../domainObject/{id}/unlock
As I apply TDD, I have started to write an API test first. When the test fails, I am going to start writing Integration and Unit tests and implement the real code.
In the API test, I need a locked domain object as a test fixture to test the unlock endpoint that will be created. However, there is no endpoint for locking the domain object in the system (our Quartz jobs lock the data), which means I need to create the data by using the database directly.
I know that using the database directly in an API test is not best practice; if you need test data, you should create it through the API too, e.g.
../domainObject/{id}/lock
Should this scenario be an exception to that rule? Or is there another practice I should follow?
Thanks.
There is no good or bad practice here; it's all about how much you value end-to-end testing of the system, including the database.
Testing the DB part will require a little more infrastructure, because you'll have to either use an in-memory database for faster test runs, or set up a full-fledged permanent test DB in your dev environment. When doing the latter, it might be a good idea to have a separate test suite for end-to-end tests that runs less frequently than your normal test suite, because it will inevitably be slower.
In that scenario, you'll have preexisting test data always present in the DB and a locked object can be one of them.
If you don't care about all this, you can stub the data store abstraction (repository, DAO or whatever) to return a canned locked object.
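As a rough sketch of the stubbing option (the repository shape and the unlock function below are invented to illustrate the idea; they are not your API):
class FakeDomainObjectRepository:
    """In-memory stand-in for the real data store, pre-seeded with test data."""
    def __init__(self, objects):
        self._objects = dict(objects)

    def get(self, object_id):
        return self._objects[object_id]

    def save(self, obj):
        self._objects[obj['id']] = obj

def unlock_domain_object(repository, object_id):
    # Stand-in for the handler behind ../domainObject/{id}/unlock
    obj = repository.get(object_id)
    obj['locked'] = False
    repository.save(obj)
    return obj

def test_unlock_releases_a_locked_object():
    # The canned locked object replaces what the Quartz job would have created
    repository = FakeDomainObjectRepository({1: {'id': 1, 'locked': True}})

    result = unlock_domain_object(repository, 1)

    assert result['locked'] is False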
I feel like this should be a lot easier than it's been on me.
copy table
from 's3://s3-us-west-1.amazonaws.com/bucketname/filename.csv'
CREDENTIALS 'aws_access_key_id=my-access;aws_secret_access_key=my-secret'
REGION 'us-west-1';
Note: I added the REGION clause after running into the problem, but it did nothing.
What confuses me is that in the bucket properties there is only the https://path/to/the/file.csv. Since all the documentation I have read calls for the path to start with s3://, I could only assume I should just change https to s3, as shown in my example.
However I get this error:
"Error : ERROR: S3ServiceException:
The bucket you are attempting to access must be addressed using the specified endpoint.
Please send all future requests to this endpoint.,Status 301,Error PermanentRedirect,Rid"
I am using Navicat for PostgreSQL to connect to Redshift, and I'm running on a Mac.
The S3 path should be 's3://bucketname/filename.csv'. Try this.
Yes, It should be a lot easier :-)
I have only seen this error when the S3 bucket is not in US Standard. In such cases you need to use an endpoint-based address, e.g. http://s3-eu-west-1.amazonaws.com/mybucket/myfile.txt.
You can find the endpoint for your region on this documentation page: http://docs.aws.amazon.com/general/latest/gr/rande.html#s3_region