JMeter - Can I change a variable halfway through runtime? - mongodb

I am fairly new to JMeter so please bear with me.
I need to understand whether, whilst running a JMeter script, I can change the variable holding the details of "DB1" so that it then points at "DB2".
The reason for this is that I want to throw load at one MongoDB database and then switch to another database at a certain time (hotdb/colddb).

The easiest way is to define two MongoDB Source Config elements pointing to the separate database instances and give them two different MongoDB Source names.
Then in your script you will be able to manipulate the MongoDB Source parameter value in the MongoDB Script test element or in JSR223 Samplers, so your queries will hit either hotdb or colddb.
See the How to Load Test MongoDB with JMeter article for detailed information.
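For illustration, a minimal JSR223 sketch of the switching idea (the hotdb_source / colddb_source names, the mongoSource variable, and the 30-minute cut-over are placeholders, not from the answer):
    // JSR223 PreProcessor, Java-compatible syntax: pick the MongoDB Source name
    // based on elapsed test time and expose it as ${mongoSource}, which the
    // MongoDB Script sampler's "MongoDB Source" field can then reference.
    long elapsedMs = System.currentTimeMillis() - Long.parseLong(props.getProperty("TESTSTART.MS"));
    String source = (elapsedMs < 30L * 60L * 1000L) ? "hotdb_source" : "colddb_source";
    vars.put("mongoSource", source);
The same flip could just as well be driven by a JMeter property set from the command line rather than by elapsed time.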

How about reading the value from a file in a Beanshell/JavaScript sampler on each iteration and storing it in a variable, then editing/saving the file when you want to switch? It's ugly, but it would work.
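A rough sketch of that file-polling idea in a JSR223/Beanshell element (the file path and the activeDb variable name are placeholders; commons-io ships with JMeter):
    import org.apache.commons.io.FileUtils;
    import java.io.File;

    // Re-read the control file on every iteration; editing and saving the file
    // mid-test switches all subsequent requests to the other database.
    String db = FileUtils.readFileToString(new File("/path/to/active_db.txt"), "UTF-8").trim();
    vars.put("activeDb", db);  // use it elsewhere as ${activeDb}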

Related

Spring Batch: create several system calls

I am starting with Spring Batch and I have a code structure with a list of objects; for each object I need to call an external Ant process. I understand I can use SystemCommandTasklet for this; however, I am not sure how to set it up for multiple values (so far it seems it can only make one execution?). Thanks in advance.
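For context, a minimal sketch of what the question describes, where a single SystemCommandTasklet wraps exactly one external command (the Ant invocation and timeout are placeholders); the open question is how to repeat this for every object in the list, e.g. by building one step with its own tasklet per object:
    import org.springframework.batch.core.step.tasklet.SystemCommandTasklet;

    class AntTaskletFactory {
        // One SystemCommandTasklet wraps exactly one external command execution.
        // "ant -f <buildFile>" is a placeholder; note that newer Spring Batch
        // versions take the command as separate arguments rather than one string.
        SystemCommandTasklet antTasklet(String buildFile) {
            SystemCommandTasklet tasklet = new SystemCommandTasklet();
            tasklet.setCommand("ant -f " + buildFile);
            tasklet.setTimeout(600000L);  // required: max runtime in ms
            return tasklet;
        }
    }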

How to validate a ksql script?

I would like to know if there is a way to check whether a .ksql script is syntactically correct.
I know that you can send a POST request to the server; this, however, would also execute the contained ksql commands. I would love to have some kind of endpoint where you can pass your statement and it returns either an error code or an OK, like:
curl -XPOST <ksqldb-server>/ksql/validate -d '{"ksql": "<ksql-statement>"}'
My question aims for a way to check the syntax in an automated fashion, without the need to clean up everything afterwards.
Thanks for your help!
Note: I am also well aware that I could run everything separately using, e.g., a docker-compose file and tear everything down again. This, however, is quite resource-heavy and harder to maintain.
One option could be to use the ksql test runner (see here: https://docs.ksqldb.io/en/latest/how-to-guides/test-an-app/) and look at the errors to check if the statement is valid. Let me know if it works for your scenario.
By now I've found a way to test this for my use case. I had a ksqldb cluster already in place, with all the other systems needed for the Kafka ecosystem (Zookeeper, Broker, ...). I had to compromise a little bit and go through the effort of deploying everything, but here is my approach:
1. Use proper naming (let it be prefixed with test or whatever suits your use case) for your streams, tables, etc. The queries' sink property should include the prefixed topic so you can find it easily, or you can simply assign a QUERY_ID (https://github.com/confluentinc/ksql/pull/6553).
2. Deploy the streams, tables, etc. to your ksqldb server using its REST API. Since I was programming in Python, I made use of the ksql package from pip (https://github.com/bryanyang0528/ksql-python).
3. Clean up the ksqldb server by filtering for the naming that you assigned to the ksql resources and running the corresponding DROP or TERMINATE statements. Consider also that you will have dependencies that result in multiple streams/tables reusing a topic. The statements can be looked up in the official developer guide (https://docs.ksqldb.io/en/latest/developer-guide/ksqldb-reference/quick-reference/).
If you had errors in step 2, step 3 should have cleaned up the leftovers so that you can adjust your ksql scripts until they run through smoothly.
Note: I could not make any assumptions about what the streams, etc. look like. If you can, I would prefer the ksql-test-runner.
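Not the Python ksql client used above, but a rough sketch of the same deploy-then-clean-up flow against the plain /ksql REST endpoint (the server URL, the test_ prefix, and the statements are all placeholders):
    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class KsqlSmokeTest {
        // Placeholder ksqldb server address.
        static final String KSQL_ENDPOINT = "http://localhost:8088/ksql";

        // POST a single statement to /ksql; a non-200 status (or an error entity
        // in the response body) means the statement did not deploy cleanly.
        static int run(HttpClient client, String ksql) throws Exception {
            String body = "{\"ksql\": \"" + ksql.replace("\"", "\\\"") + "\", \"streamsProperties\": {}}";
            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create(KSQL_ENDPOINT))
                    .header("Content-Type", "application/vnd.ksql.v1+json; charset=utf-8")
                    .POST(HttpRequest.BodyPublishers.ofString(body))
                    .build();
            HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
            System.out.println(response.statusCode() + " " + response.body());
            return response.statusCode();
        }

        public static void main(String[] args) throws Exception {
            HttpClient client = HttpClient.newHttpClient();
            // Step 2: deploy a prefixed test resource.
            run(client, "CREATE STREAM test_orders (id INT) WITH (KAFKA_TOPIC='test_orders', VALUE_FORMAT='JSON', PARTITIONS=1);");
            // Step 3: clean up everything carrying the test_ prefix again.
            run(client, "DROP STREAM IF EXISTS test_orders DELETE TOPIC;");
        }
    }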

Azure Data Factory - Can I set a variable from within a CopyData task or by using the output?

I have a simple pipeline that has a Copy activity to populate a table. That task is based on a query and will only ever return 1 row.
The problem I am having is that I want to reuse the value from one of the columns (batch number) to set a variable so that at the end of the pipeline I can use a Stored Procedure to log that the batch was processed. I would rather avoid running the query a second time in a Lookup task, so can I make use of the data already being returned?
I have tried duplicating the column in the Copy activity and then mapping that to something like #BatchNo, but that fails. I have even tried to add a Set Variable task but can't figure out how to take a single column; #{activity('Populate Aleprstw').output} does not error, but I am not sure what that will actually do in this case.
Thanks, and sorry if it's a silly question.
Cheers
Mark
I always do it like this:
Generate a batch number (usually with a proc)
Use a lookup to grab it into a variable
Use the batch number in all activities (might be multiple copies, procs, etc.)
Write the batch completion
From your description it seems you have the batch number embedded in the data copy from the start, which is not typical.
If you must do it this way, is there really an issue with running a lookup again?
Copy activity doesn't return data like that, so you won't be able to capture the results that way. With this design, running the query again in a Lookup is the best option.
Is the query in the Source running on the same Server as the Sink? If so, you could collapse the entire operation into a Stored Procedure that returns the data point you are trying to capture.
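For reference, once a Lookup has run (with "First row only"), a Set Variable activity can usually pull the column out with an expression along these lines; the activity and column names here are placeholders, not taken from the question:
    @activity('Lookup BatchNo').output.firstRow.BatchNo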

Deploying DB2 user-defined functions in sequence of dependency

We have about 200 user-defined functions in DB2. These UDFs are generated by Data Studio into a single script file.
When we create a new DB, we need to run the script file several times because some UDFs are dependent on other UDFs and cannot be created until the functions they depend on have been created first.
Is there a way to generate a script file so that the order in which they are deployed takes this dependency into account? Or is there some other technique to arrange the order efficiently?
Many thanks in advance.
That problem should only happen if the setting of auto_reval is not correct. See "Creating and maintaining database objects" for details.
Db2 allows objects to be created in an "unsorted" order. Only when an object is used (accessed) are the object and its dependent objects checked. This behavior was introduced a long time ago. Only some old, migrated databases keep auto_reval=disabled, and some environments might set it based on configuration scripts.
If you still run into issues, try setting auto_reval=DEFERRED_FORCE.
The db2look system command can generate DDL ordered by object creation time with the -ct option, so that can help if you don't want to use the auto_reval method.
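If it helps, the two suggestions above would typically translate into commands along these lines (MYDB and the output file name are placeholders):
    db2 UPDATE DB CFG FOR MYDB USING auto_reval DEFERRED_FORCE
    db2look -d MYDB -e -ct -o udf_ddl_by_creation_time.sql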

Should Command / Handlers hold the full aggregate or only its id?

I'm trying to play around with DDD and CQRS.
And I've got these two solutions:
Add the AggregateId to my command / event. It's nice because I can use my command as my web service's parameter, and I can also return some instances of my commands to my forms to say "you can do this command, this one and this one".
Add my full Aggregate to my command / event. It's nice because I'm sure that I won't load my aggregate 100 times if there are a lot of events going on; I'll just pass my reference around (for instance I won't load it both in my command's validator and in my command handler). But I'd have to create a parameter class for each command with only the id.
For now I have the id in the commands and the full model in the events (I trust my unit of work to cache the Load(aggregateId) call, so I won't execute the same request 100 times for one command).
Is there a right / better way?
Yes, your current approach is correct: reference the aggregate with an identity value on the command. A command is meant to be serialized and sent across process boundaries. Also, a command is normally constructed by a client, who may not have enough information to create an entire aggregate instance. This is also why an identity should be used. And yes, your unit of work should take care of caching an aggregate for the duration of a unit of work, if need be.
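A minimal sketch of that shape (the class, repository, and method names are illustrative only, not from the question): the command stays flat and carries the identity plus the input data, while the handler loads the aggregate through the repository / unit of work.
    import java.util.UUID;

    // Command: flat and serializable, carries only the aggregate's identity plus input data.
    final class RenameCustomerCommand {
        final UUID customerId;   // identity of the aggregate to act on
        final String newName;
        RenameCustomerCommand(UUID customerId, String newName) {
            this.customerId = customerId;
            this.newName = newName;
        }
    }

    // Aggregate and repository kept deliberately minimal for the sketch.
    class Customer {
        private final UUID id;
        private String name;
        Customer(UUID id, String name) { this.id = id; this.name = name; }
        void rename(String newName) { this.name = newName; }
    }

    interface CustomerRepository {
        Customer load(UUID id);      // the unit of work may cache this per request
        void save(Customer customer);
    }

    // Handler: resolves the aggregate by id, invokes behavior, persists the result.
    final class RenameCustomerHandler {
        private final CustomerRepository repository;
        RenameCustomerHandler(CustomerRepository repository) { this.repository = repository; }

        void handle(RenameCustomerCommand command) {
            Customer customer = repository.load(command.customerId);
            customer.rename(command.newName);
            repository.save(customer);
        }
    }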