Crafting the request body does not work concurrently - Scala

I would like to send simultaneous requests through Gatling for some duration.
Below is a snippet of my code where I craft the requests.
The jsonFileContents function is used for crafting the JSON; it is called from the main request.
TestDevice_dev.csv has a list of 30 devices; after the 30th, I reuse them:
TestDevice1
TestDevice2
TestDevice3
.
.
.
val dFeeder = csv("TestDevice_dev.csv").circular

val trip_dte_tunnel_1 = scenario("TripSimulation")
  .feed(dFeeder)
  .exec(session => {
    val key = conf.getString("config.env.sign_key")
    val bodyTrip = CannedRequests.jsonFileContents("${deviceID}") // deviceID comes from the feeder
    // Session is immutable: chain the sets so both attributes survive
    session.set("trip_sign", SignatureGeneration.getSignature(key, bodyTrip))
      .set("tripBody", bodyTrip)
  })
  .exec(http("trip")
    .post(trip_url)
    .headers(trip_Headers_withsign)
    .body(StringBody("${tripBody}")).asJSON
    .check(status.is(201)))
  .exec(flushSessionCookies)
The scenario is started as below:

val scn_trip = scenario("trip simulation")
  .repeat(1) {
    exec(DataExchange.trip_dte_tunnel_1)
  }

setUp(scn_trip.inject(constantUsersPerSec(5) during (5 seconds)))
It runs fine if there is 1 user for 5 seconds, but not with simultaneous users.
The JSON template (trip-data.json) that the request is crafted from looks like this:
"events":[
{
"deviceDetailsDataModel":{
"deviceId":"<deviceID>"
},
"eventDateTime":"<timeStamp>",
"tripInfoDataModel":{
"ignitionStatus":"ON",
"ignitionONTime":"<onTimeStamp>"
}
},
{
"deviceDetailsDataModel":{
"deviceId":"<deviceID>"
},
"eventDateTime":"<timeStamp>",
"tripInfoDataModel":{
"ignitionStatus":"ON",
"ignitionONTime":"<onTimeStamp>"
}
},
{
"deviceDetailsDataModel":{
"deviceId":"<deviceID>"
},
"eventDateTime":"<timeStamp>",
"tripInfoDataModel":{
"ignitionOFFTime":"<onTimeStamp>",
"ignitionStatus":"OFF"
}
}
]
}`
def jsonFileContents(deviceId: String): String = {
  // requires: import scala.io.Source and import java.time.{ZonedDateTime, ZoneId}
  val fileName = "trip-data.json"
  var stringBuilder = ""
  var timeStamp1: Long = ZonedDateTime.now(ZoneId.of("America/Chicago")).toInstant.toEpochMilli - 10000L
  for (line <- Source.fromFile(fileName).getLines) {
    if (line.contains("eventDateTime")) {
      stringBuilder += line.replaceAll("<timeStamp>", timeStamp1.toString)
      timeStamp1 += 1000L
    } else if (line.contains("onTimeStamp")) {
      stringBuilder += line.replaceAll("<onTimeStamp>", timeStamp1.toString)
    } else if (line.contains("deviceID")) {
      stringBuilder += line.replace("<deviceID>", deviceId)
    } else {
      stringBuilder += line
    }
  }
  stringBuilder
}

Best guess: your feeder contains a single entry and you're using the default queue strategy. Either add more entries to your feeder file to match the number of users, or use a different strategy.
This really is explained in the documentation, including the tutorials. I recommend you take some time to read the documentation before rushing into the code; you'll save lots of time in the end.
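For illustration, a minimal sketch of the feeder strategies the answer refers to (names as in the Gatling feeder documentation; the CSV file is the asker's own):

// queue (the default): each entry is consumed once; the run fails when entries run out
val queued = csv("TestDevice_dev.csv")
// circular: wraps back to the first entry, so any number of users works
val circularFeeder = csv("TestDevice_dev.csv").circular
// random: picks a random entry on every feed call
val randomFeeder = csv("TestDevice_dev.csv").random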

You don't need to do your own parameter substitution of values in the JSON file: Gatling supports passing an ELFileBody as the body, where you can have a JSON file with Gatling EL expressions like ${deviceId}.
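A hedged sketch of that approach (assuming Gatling 2.x, to match the .asJSON syntax above; the timeStamp/onTimeStamp attributes are illustrative and would need the same incrementing logic the asker's function applies):

// trip-data.json keeps Gatling EL placeholders, e.g. "deviceId": "${deviceID}"
val trip_with_el = scenario("TripSimulation")
  .feed(dFeeder)
  .exec(session => {
    // compute per-user values and expose them as session attributes
    val base = System.currentTimeMillis() - 10000L
    session.set("timeStamp", base).set("onTimeStamp", base)
  })
  .exec(http("trip")
    .post(trip_url)
    .headers(trip_Headers_withsign)
    .body(ELFileBody("trip-data.json")).asJSON // EL resolved per virtual user, so concurrent users don't clash
    .check(status.is(201)))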

Related

API JSON Schema Validation with Optional Element using Pydantic

I am using FastAPI and BaseModel from pydantic to validate and document the JSON schema for an API return.
This works well for a fixed return, but I have optional parameters that change the return, so I would like to include them in the validation without it failing when the parameter is missing and the field is not returned by the API.
For example: I have an optional boolean parameter called transparency; when it is set to true, I return a block called search_transparency containing the Elasticsearch query that was run:
{
  "info": {
    "totalrecords": 52
  },
  "records": [],
  "search_transparency": {"full_query": "blah blah"}
}
If the parameter transparency=true is not set, I want the return to be:
{
  "info": {
    "totalrecords": 52
  },
  "records": []
}
However, when I set that element to be Optional in pydantic, I get this returned instead:
{
  "info": {
    "totalrecords": 52
  },
  "records": [],
  "search_transparency": None
}
I have something similar for the fields under records: the default is a minimal set of fields, but if you set the parameter full=true you get many more fields returned. I would like to handle this the same way, with the fields simply being absent rather than present with a value of None.
This is how I am handling it with pydantic:

from typing import List, Optional

from pydantic import BaseModel

class Info(BaseModel):
    totalrecords: int

class Transparency(BaseModel):
    full_query: str

class V1Place(BaseModel):
    name: str

class V1PlaceAPI(BaseModel):
    info: Info
    records: List[V1Place] = []
    search_transparency: Optional[Transparency]

and this is how I am enforcing the validation with FastAPI:
@app.get("/api/v1/place/search", response_model=V1PlaceAPI, tags=["v1_api"])
I have a suspicion that what I am trying to achieve may be poor API practice; maybe I am not supposed to have variable returns.
Should I instead be creating multiple separate endpoints to handle this?
E.g. api/v1/place/search?q=test vs api/v1/place/full/transparent/search?q=test
EDIT
More detail of my API function:
@app.get("/api/v1/place/search", response_model=V1PlaceAPI, tags=["v1_api"])
def v1_place_search(q: str = Query(None, min_length=3, max_length=500, title="search through all place fields"),
                    transparency: Optional[bool] = None,
                    offset: Optional[int] = Query(0),
                    limit: Optional[int] = Query(15)):
    search_limit = offset + limit
    results, transparency_query = ESQuery(client=es_client,
                                          index='places',
                                          transparency=transparency,
                                          track_hits=True,
                                          offset=offset,
                                          limit=search_limit)
    return v1_place_parse(results.to_dict(),
                          show_transparency=transparency_query)
where ESQuery just returns an Elasticsearch response.
And this is my parse function:
def v1_place_parse(resp, show_transparency=None):
    """This takes a response from elasticsearch and parses it for our legacy V1 elasticapi

    Args:
        resp (dict): This is the response from Search.execute after passing to_dict()

    Returns:
        dict: A dictionary that is passed to API
    """
    new_resp = {}
    total_records = resp['hits']['total']['value']
    query_records = len(resp.get('hits', {}).get('hits', []))
    new_resp['info'] = {'totalrecords': total_records,
                        'totalrecords_relation': resp['hits']['total']['relation'],
                        'totalrecordsperquery': query_records,
                        }
    if show_transparency is not None:
        search_string = show_transparency.get('query', '')
        new_resp['search_transparency'] = {'full_query': str(search_string),
                                           'components': {}}
    new_resp['records'] = []
    for hit in resp.get('hits', {}).get('hits', []):
        new_record = hit['_source']
        new_resp['records'].append(new_record)
    return new_resp
Excluding that field when it is None can probably get the job done.
Just add response_model_exclude_none=True as a path operation decorator parameter:
@app.get(
    "/api/v1/place/search",
    response_model=V1PlaceAPI,
    tags=["v1_api"],
    response_model_exclude_none=True,
)
You can customize your response model even more; here is a well-explained answer of mine that I really suggest you check out.

Execute Gatling scenarios based on a boolean flag

Is it possible in Gatling to execute scenarios based on a boolean flag from a properties file?
application.conf
config {
  isDummyTesting = true,
  Test {
    baseUrl = "testUrl"
    userCount = 1
    testUser {
      CustomerLoginFeeder = "CustomerLogin.getLogin()"
      Navigation = "Navigation.navigation"
    }
  },
  performance {
    baseUrl = "testUrl"
    userCount = 100
    testUser {
      CustomerLoginFeeder = "CustomerLogin.getLogin()"
    }
  }
}
and in my simulation file:

var flowToTest = ConfigFactory.load().getObject("config.performance.testUser").toConfig
if (ConfigFactory.load().getBoolean("config.isDummyTesting")) {
  var flowToTest = ConfigFactory.load().getObject("config.Test.testUser").toConfig
}

While executing the flow, I am running the code below:

scenario("Customer Login").exec(flowToTest)

and facing this error:

ERROR: io.gatling.core.structure.ScenarioBuilder cannot be applied to (com.typesafe.config.Config)

I want that, if the flag is true, both scenarios execute; otherwise only the other one.
I think you're making a mistake in trying to have the flow defined in the config, rather than just the flag. You can load the value of the isDummyTesting flag and have it passed into a session variable. From there, you can use the standard Gatling doIf construct to include Navigation.navigation if specified.
so in your simulation file, you can have
private val isDummyTesting = java.lang.Boolean.getBoolean("isDummyTesting")
and then in your scenario
.exec(session => session.set("isDummyTesting", isDummyTesting))
...
.exec(CustomerLogin.getLogin())
.doIf("${isDummyTesting}") {
  exec(Navigation.navigation)
}
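Putting the pieces together, a hedged sketch of a complete simulation (note the snippet above reads a JVM system property via java.lang.Boolean.getBoolean; to read the flag from application.conf as in the question, Typesafe Config can be used instead, as below — CustomerLogin and Navigation are the asker's own objects):

import com.typesafe.config.ConfigFactory
import io.gatling.core.Predef._

class LoginSimulation extends Simulation {
  // read the flag from application.conf (config.isDummyTesting in the question)
  private val isDummyTesting = ConfigFactory.load().getBoolean("config.isDummyTesting")

  val scn = scenario("Customer Login")
    .exec(session => session.set("isDummyTesting", isDummyTesting))
    .exec(CustomerLogin.getLogin())
    .doIf("${isDummyTesting}") {
      exec(Navigation.navigation) // only runs in the dummy-testing flow
    }

  setUp(scn.inject(atOnceUsers(1)))
}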

How to provide the output result of a function as a PUT request in Scala?

I have Scala code that converts a layer to a GeoTIFF file. Now I want this GeoTIFF file to be passed in a PUT request to a REST service. How can I do that?
Here is a section of the code:
val labeled_layerstack = {
  // Labeled Layerstack
  //val layers_input = Array(layer_dop) ++ layers_sat
  val layers_labeled_input = Array(layer_label) ++ Array(output_layerstack) //++ layers_input
  ManyLayersToMultibandLayer(layers_labeled_input, output_labeled_layerstack)
  output_labeled_layerstack
}

if (useCleanup) {
  DeleteLayer(layer_label)
  if (useDOP)
    DeleteLayer(layer_dop)
  for (layer_x <- layers_sat)
    DeleteLayer(layer_x)
}
labeled_layerstack
}
else output_labeled_layerstack // if reusing an existing layerstack (processing steps w/o "layerstack")

if (processingSteps.isEmpty || processingSteps.get.steps.exists(step => step == "classification")) {
  if (useRandomForest) {
    ClusterTestRandomForest(labeled_layerstack, fileNameClassifier, layerResult, Some(output_layerstack))
    if (useExportResult) {
      LayerToGeotiff(layerResult, fileNameResult, useStitching = useExportStitching)
    }
  } else if (useSVM) {
    ClusterTestSVM(labeled_layerstack, fileNameClassifier, layerResult, Some(output_layerstack))
    if (useExportResult) {
      LayerToGeotiff(layerResult, fileNameResult, useStitching = useExportStitching)
    }
  }
}
The original code is quite long and not shareable, so I am sharing the part relevant to the problem. The output of LayerToGeotiff should be passed in a PUT request. How can I create such a request?
I suggest the Play framework (its WS client) for sending a PUT request.
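For illustration, a minimal hedged sketch of such a PUT with Play's standalone WS client (the endpoint URL and file name are placeholders; exact imports and APIs vary by play-ws version):

import java.io.File
import akka.actor.ActorSystem
import akka.stream.ActorMaterializer
import play.api.libs.ws.DefaultBodyWritables._
import play.api.libs.ws.ahc.StandaloneAhcWSClient
import scala.concurrent.ExecutionContext.Implicits.global

object GeotiffUpload extends App {
  implicit val system = ActorSystem()
  implicit val materializer = ActorMaterializer()
  val ws = StandaloneAhcWSClient()

  val fileNameResult = "result.tif" // placeholder: the GeoTIFF written by LayerToGeotiff

  ws.url("http://example.com/api/geotiff") // placeholder endpoint
    .addHttpHeaders("Content-Type" -> "image/tiff")
    .put(new File(fileNameResult)) // streams the file as the PUT body
    .map(resp => println(s"PUT completed with status ${resp.status}"))
    .andThen { case _ => ws.close(); system.terminate() }
}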

Difference between RoundRobinRouter and RoundRobinRoutingLogic

So I was reading a tutorial about Akka and came across this: http://manuel.bernhardt.io/2014/04/23/a-handful-akka-techniques/ and I think he explained it pretty well. I just picked up Scala recently and am having difficulties with the tutorial above.
I wonder what the difference is between RoundRobinRouter and the current RoundRobinRoutingLogic? Obviously the implementation is quite different.
Previously, the implementation using RoundRobinRouter was:
val workers = context.actorOf(Props[ItemProcessingWorker].withRouter(RoundRobinRouter(100)))
with processBatch
def processBatch(batch: List[BatchItem]) = {
  if (batch.isEmpty) {
    log.info(s"Done migrating all items for data set $dataSetId. $totalItems processed items, we had ${allProcessingErrors.size} errors in total")
  } else {
    // reset processing state for the current batch
    currentBatchSize = batch.size
    allProcessedItemsCount = currentProcessedItemsCount + allProcessedItemsCount
    currentProcessedItemsCount = 0
    allProcessingErrors = currentProcessingErrors ::: allProcessingErrors
    currentProcessingErrors = List.empty

    // distribute the work
    batch foreach { item =>
      workers ! item
    }
  }
}
Here's my implementation with RoundRobinRoutingLogic:
var mappings: Option[ActorRef] = None

var router = {
  val routees = Vector.fill(100) {
    mappings = Some(context.actorOf(Props[Application3]))
    context watch mappings.get
    ActorRefRoutee(mappings.get)
  }
  Router(RoundRobinRoutingLogic(), routees)
}
and I treated processBatch as follows:
def processBatch(batch: List[BatchItem]) = {
  if (batch.isEmpty) {
    println(s"Done migrating all items for data set $dataSetId. $totalItems processed items, we had ${allProcessingErrors.size} errors in total")
  } else {
    // reset processing state for the current batch
    currentBatchSize = batch.size
    allProcessedItemsCount = currentProcessedItemsCount + allProcessedItemsCount
    currentProcessedItemsCount = 0
    allProcessingErrors = currentProcessingErrors ::: allProcessingErrors
    currentProcessingErrors = List.empty

    // distribute the work
    batch foreach { item =>
      // println(item.id)
      mappings.get ! item
    }
  }
}
I somehow cannot get this tutorial running; it gets stuck at the point where it iterates over the batch list. I wonder what I did wrong.
Thanks
First of all, you have to distinguish between them:
RoundRobinRouter is a Router that uses round-robin to select a connection,
while
RoundRobinRoutingLogic uses round-robin to select a routee.
You can provide your own RoutingLogic (writing one helped me understand how Akka works under the hood):
import scala.collection.immutable
import akka.routing.{Routee, RoutingLogic, RoundRobinRoutingLogic, SeveralRoutees}

class RedundancyRoutingLogic(nbrCopies: Int) extends RoutingLogic {
  val roundRobin = RoundRobinRoutingLogic()
  def select(message: Any, routees: immutable.IndexedSeq[Routee]): Routee = {
    val targets = (1 to nbrCopies).map(_ => roundRobin.select(message, routees))
    SeveralRoutees(targets)
  }
}
Link to the docs: http://doc.akka.io/docs/akka/2.3.3/scala/routing.html
P.S. This doc is very clear and it has helped me the most.
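As an aside, here is a hedged sketch of how the Router/RoutingLogic pair is typically driven, modeled on the example in the Akka routing docs linked above (ItemProcessingWorker is the asker's actor). Note that messages go through router.route rather than to a single routee, which is likely why the batch loop above stalls:

import akka.actor.{Actor, Props, Terminated}
import akka.routing.{ActorRefRoutee, RoundRobinRoutingLogic, Router}

class WorkMaster extends Actor {
  var router = {
    val routees = Vector.fill(100) {
      val r = context.actorOf(Props[ItemProcessingWorker])
      context watch r
      ActorRefRoutee(r)
    }
    Router(RoundRobinRoutingLogic(), routees)
  }

  def receive = {
    case Terminated(a) =>
      // replace a stopped routee to keep the pool at full size
      router = router.removeRoutee(a)
      val r = context.actorOf(Props[ItemProcessingWorker])
      context watch r
      router = router.addRoutee(r)
    case item =>
      router.route(item, sender()) // round-robins across all 100 routees
  }
}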
Actually I misunderstood the method, and found out the solution was to use RoundRobinPool, as stated in http://doc.akka.io/docs/akka/2.3-M2/project/migration-guide-2.2.x-2.3.x.html:
For example RoundRobinRouter has been renamed to RoundRobinPool or
RoundRobinGroup depending on which type you are actually using.
from
val workers = context.actorOf(Props[ItemProcessingWorker].withRouter(RoundRobinRouter(100)))
to
val workers = context.actorOf(RoundRobinPool(100).props(Props[ItemProcessingWorker]), "router2")

Uploading multiple large files and other form data in Play Framework 2/Scala

I am trying to upload multiple large files in the Play Framework using Scala. I'm still a Scala and Play noob.
I got some great code from here, which got me 90% of the way, but now I'm stuck again.
The main issue I have now is that I can only read the file data, not any other data that's been uploaded, and after poking around the Play docs I'm unclear how to get at that from here. Any suggestions appreciated!
def directUpload(projectId: String) = Secured(parse.multipartFormData(myFilePartHandler)) { implicit request =>
  Ok("Done")
}

def myFilePartHandler: BodyParsers.parse.Multipart.PartHandler[MultipartFormData.FilePart[Result]] = {
  parse.Multipart.handleFilePart {
    case parse.Multipart.FileInfo(partName, filename, contentType) =>
      println("Handling Streaming Upload: " + filename + "/" + partName + ", " + contentType)

      // Set up the PipedOutputStream here, give the input stream to a worker thread
      val pos: PipedOutputStream = new PipedOutputStream()
      val pis: PipedInputStream = new PipedInputStream(pos)
      val worker: UploadFileWorker = new UploadFileWorker(pis, contentType.get)
      worker.start()

      // Read content to the POS
      play.api.libs.iteratee.Iteratee.fold[Array[Byte], PipedOutputStream](pos) { (os, data) =>
        os.write(data)
        os
      }.mapDone { os =>
        os.close()
        worker.join()
        if (worker.success)
          Ok("upload done. Size: " + worker.size)
        else
          Status(503)("Upload Failed")
      }
  }
}
You have to handle the data part. As you can guess (or look up in the documentation), the function to handle the data part is called handleDataPart. A PartHandler is just a partial function of the part's headers, so the file handler and the data handler can be combined, roughly like:

def myPartHandler = {
  parse.Multipart.handleFilePart {
    // ... handle the file parts, as above
  }.orElse(parse.Multipart.handleDataPart)
}

Another way would be the handlePart method. Check the documentation for more details.