Pass control from one gen_fsm to another - sockets

I'm creating a generic Erlang server that should be able to handle hundreds of client connections concurrently. For simplicity, let's suppose that the server performs some basic computation for every client, e.g., the addition or subtraction of every two values the client provides.
As a starting point, I'm using this tutorial for basic TCP client-server interaction. An excerpt that represents the supervision tree:
+----------------+
| tcp_server_app |
+--------+-------+
         | (one_for_one)
    +----+----------------+
    |                     |
+---+----------+  +-------+--------+
| tcp_listener |  | tcp_client_sup |
+--------------+  +-------+--------+
                          | (simple_one_for_one)
                  +-------+--------+
                  |  tcp_echo_fsm  | (one per client)
                  +----------------+
I would like to extend this code and allow tcp_echo_fsm to pass control of the socket to one of two modules: tcp_echo_addition (to compute the addition of every two client values) or tcp_echo_subtraction (to compute the subtraction of every two client values).
tcp_echo_fsm would choose which module handles a socket based on the first message from the client; e.g., if the client sends <<"start_addition">>, it would pass control to tcp_echo_addition.
The previous diagram becomes:
+----------------+
| tcp_server_app |
+--------+-------+
         | (one_for_one)
    +----+----------------+
    |                     |
+---+----------+  +-------+--------+
| tcp_listener |  | tcp_client_sup |
+--------------+  +-------+--------+
                          | (simple_one_for_one)
                  +-------+--------+
                  |  tcp_echo_fsm  | (one per client)
                  +----------------+
                          |
              +-----------+-----------+
              |                       |
    +---------+---------+  +----------+-----------+
    | tcp_echo_addition |  | tcp_echo_subtraction |
    +-------------------+  +----------------------+
My questions are:
Am I on the right path? Is the tutorial which I'm using a good starting point for a scalable TCP server design?
How can I pass control from one gen_fsm (namely, tcp_echo_fsm) to another gen_fsm (either tcp_echo_addition or tcp_echo_subtraction)? Or better yet: is this a correct/clean way to design the server?
This related question suggests that passing control between a gen_fsm and another module is not trivial and there might be something wrong with this approach.

For 2, you can use gen_tcp:controlling_process/2 to pass control of the TCP socket to another process: http://erlang.org/doc/man/gen_tcp.html#controlling_process-2.
For 1, I am not sure of the value of spawning a separate process from a new module as opposed to handling the subtraction and addition logic as part of the defined states in your finite state machine. Doing so creates code which now runs outside of your supervision tree, so it's harder to handle errors and restarts. Why not define addition and subtraction as different states within your state machine and handle that logic within those two states?
You can create tcp_echo_fsm:subtraction_state/2,3 and tcp_echo_fsm:addition_state/2,3 to handle this logic and use the first message to transition to the appropriate state, rather than adding complexity to your application.
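A minimal sketch of that approach (the #state{} record and the shape of the incoming events are hypothetical here, not taken from the tutorial; adapt them to however your protocol layer delivers socket data):

```erlang
%% Sketch: both operations live in the same gen_fsm, so every client
%% process stays under tcp_client_sup and its restart strategy.

%% The first client message picks the state to transition to.
wait_for_mode({data, <<"start_addition">>}, State) ->
    {next_state, addition_state, State};
wait_for_mode({data, <<"start_subtraction">>}, State) ->
    {next_state, subtraction_state, State}.

%% Each later event carries a pair of values to combine.
addition_state({data, {A, B}}, State = #state{socket = Socket}) ->
    gen_tcp:send(Socket, integer_to_binary(A + B)),
    {next_state, addition_state, State}.

subtraction_state({data, {A, B}}, State = #state{socket = Socket}) ->
    gen_tcp:send(Socket, integer_to_binary(A - B)),
    {next_state, subtraction_state, State}.
```

This keeps the socket owned by a single supervised process for its whole lifetime, so you never need to hand it off with controlling_process/2 at all.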


Concurrency in Apache JMeter load testing has strange behaviour

I'm doing some load testing of an API using a somewhat basic setup in JMeter.
The idea is that the Thread Group spawns a bunch of clients/threads, and each of these clients has a bunch of loops which run in parallel (using the bzm - Parallel Controller).
Each loop represents some kind of action a user can perform, and each loop has a Uniform Random Timer to adjust how often a given action is performed for each client.
One of the actions consists of two calls: the first one (1) fetches some IDs, which are then extracted with a JSON Extractor and modified a bit with a BeanShell PostProcessor. The result from the post processor is then used as a parameter for the next call (2).
The issue I'm facing is that in my Summary Report there are a lot more results from the first HTTP request (1) than from the second one (2). I would expect them to always be called the same number of times.
My guess is that it all comes down to me lacking some basic understanding of flow and concurrency (and maybe timers) in JMeter, but I have been unable to figure it out, so I need help.
This is the setup, imagine there being multiple loops.
Thread group
+-- Summary Report
+-- Parallel controller
    +-- Loop (one per action; imagine several of these)
        +-- Transaction
            +-- Uniform random timer
            +-- (1) HTTP request
            |   +-- JSON extractor
            |   +-- BeanShell Post processor
            +-- (2) HTTP request
Ok, so I figured it out. It all comes down to understanding the structure of the tests; diving into the docs really helped, as they are very detailed.
This is the relevant part:
Note that timers are processed before each sampler in the scope in which they are found; if there are several timers in the same scope, all the timers will be processed before each sampler. Timers are only processed in conjunction with a sampler. A timer which is not in the same scope as a sampler will not be processed at all. To apply a timer to a single sampler, add the timer as a child element of the sampler. The timer will be applied before the sampler is executed. To apply a timer after a sampler, either add it to the next sampler, or add it as the child of a Flow Control Action Sampler.
https://jmeter.apache.org/usermanual/component_reference.html#timers
Another extremely important thing to understand is that some building blocks (in relation to the tree structure) are hierarchical, some are ordered, and some are both. This is described in detail here: https://jmeter.apache.org/usermanual/test_plan.html#scoping_rules
All in all, my issue could be fixed either by putting the Uniform Random Timer as a child of the first HTTP call (1), causing it to affect only that call, or by adding a Flow Control Action sampler as a sibling after the second call (2) and adding the Uniform Random Timer as a child of that.

How to nicely decouple paging and heap (dynamic memory management) functionalities in OS development?

Background
I am following James Molloy's OS tutorial to implement a toy operating system, in which I found the paging and heap code very involved because of their interdependence. For example, paging uses kmalloc, provided by the heap, because it needs to dynamically allocate space for the page table data structures and must have their virtual addresses. This is done in the following function call in paging.c:
dir->tables[table_idx] = (page_table_t*)kmalloc_ap(sizeof(page_table_t), &tmp);
Meanwhile, the heap relies on paging to allocate physical frames when it needs to grow, which can be seen in kheap.c:
alloc_frame( get_page(heap->start_address+i, 1, kernel_directory), (heap->supervisor)?1:0, (heap->readonly)?0:1);
They are tightly coupled together within the memory management module, like the following:
                other modules
                  ^   ^   ^
                  |   |   |
                  v   v   v
+-----------------------------------------+
|            Memory Management            |
|                                         |
|   +------------------+                  |
|   |      paging      |                  |
|   |         +--------+---------+        |
|   |         |        |         |        |
|   +---------+--------+         |        |
|             |       heap       |        |
|             +------------------+        |
+-----------------------------------------+
Question
I am wondering whether it is possible to completely decouple paging and the heap. I expect it to be, because conceptually, I think:
Paging can be thought of as an address mapping/translation mechanism (probably plus physical frame allocation?).
The heap is about dynamic memory management.
Each seems pretty self-contained. Can they be implemented in a decoupled fashion, with only a unidirectional dependency, for example like the stacked TCP/IP protocols?
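One common way to get that unidirectional layering is to pull physical frame allocation out of paging into its own bottom layer, so the stack becomes frame allocator <- paging <- heap (with the tutorial's placement allocator covering kmalloc before the heap exists). A minimal sketch of such a bottom layer follows; all the pmm_* names and the bitmap sizing are hypothetical, not from the tutorial:

```c
#include <stdint.h>

#define FRAME_SIZE 0x1000u                      /* 4 KiB frames */
#define MAX_FRAMES 1024u                        /* 4 MiB of managed RAM */
static uint32_t frame_bitmap[MAX_FRAMES / 32];  /* 1 bit per frame */

/* Layer 0: physical memory manager. It knows nothing about paging or
   the heap; both may depend on it, but it depends on neither. */
uint32_t pmm_alloc_frame(void) {
    for (uint32_t i = 0; i < MAX_FRAMES; i++) {
        if (!(frame_bitmap[i / 32] & (1u << (i % 32)))) {
            frame_bitmap[i / 32] |= (1u << (i % 32));
            return i * FRAME_SIZE;              /* physical frame address */
        }
    }
    return UINT32_MAX;                          /* out of physical memory */
}

void pmm_free_frame(uint32_t phys) {
    uint32_t i = phys / FRAME_SIZE;
    frame_bitmap[i / 32] &= ~(1u << (i % 32));
}
```

Paging's alloc_frame would then call pmm_alloc_frame and install the mapping, and the heap would grow by asking paging for mapped pages, so each layer only ever looks downward.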

Correct place for accessing repositories in MVVM

My UWP project follows the MVVM pattern (or at least is supposed to), and I'm currently refactoring it because it suffers from the "god viewmodel" problem: the page viewmodels do basically everything, and this is really hard to maintain.
Currently a typical page viewmodel does these things:
It handles initialization when a page is navigated to: it fetches the models from the repositories, creates viewmodels that "wrap around" the models of my business layer, and exposes them to the view with properties.
It exposes ICommand properties that are implemented with DelegateCommand and call methods on my page viewmodel. These methods change some stuff in the sub-viewmodels and persist the changes by calling the repositories.
It listens to events that are published on an EventAggregator. It might receive events like ModelChangedEvent or ModelAddedEvent, which are created by the persistence layer when a model was changed by a third party. If the page viewmodel receives such an event, it updates the correct sub-viewmodel.
So I have this architecture:
+-------------------+
|       Page        |
+-------------------+
      | binds to properties
      | calls commands
      v
+-------------------+   gets models,    +--------------------+
|   PageViewModel   |------------------>| IContentRepository |
+-------------------+   stores models   +--------------------+
  ^       | creates,                      |             |
  |       | updates                       | loads,      | notifies of
  |       v                               | stores      | changes
  |  +-------------------+                |             v
  |  | ContentViewModel  |                |   +------------------+
  |  +-------------------+                |   | IEventAggregator |
  |       | wraps around                  |   +------------------+
  |       v                               |             |
  |  +-------------------+                |             |
  |  |   ContentModel    |<---------------+             |
  |  +-------------------+                              |
  |                                                     |
  +-------- subscribes to changes from external --------+
I want to refactor the "god-like" PageViewModels by putting the commands into their own files and by changing how updates received from the EventAggregator are handled. But I have a few questions about this:
Let's imagine there is an AddContentCommand that is supposed to create a new ContentModel entity and immediately store it in the database. Should AddContentCommand be injected with IContentRepository and persist the new object by calling the repository, or should this be done in PageViewModel? How should the PageViewModel be notified of the addition: directly, by calling some method on PageViewModel from AddContentCommand, or through the EventAggregator? Should AddContentCommand even have a reference to PageViewModel at all?
Would there be issues if each ContentViewModel subscribed to changes on IEventAggregator and updated itself accordingly? Or is it better if PageViewModel subscribes to it and tells the dependent view models to update, as it is done right now?

Linking multiple custom variables in Omniture

I have an implementation in SiteCatalyst where I have to track categories and the multiple tags associated with them. How should I go about it? Which variables should be defined for this in Omniture?
For example:
|---------------------|
| MOOD | // Main Category
|---------------------|
| Uplifting | // Sub Category
|---------------------|
| Fun | // Sub Category
|---------------------|
| Proud | // Sub Category
|---------------------|
| Fun | // Sub Category
There are three options:
Listprop
You could create a listprop, which you can enable in the report suite settings by configuring a delimiter for the traffic variable. This is especially handy if you may end up with multiple levels. You can then implement the traffic variable like so:
s.prop1 = "MOOD|Uplifting|Fun|Proud|Fun";
Adobe Analytics will automatically split the values based on the pipe character. Please note that listprops don't allow correlations or pathing.
Multiple traffic variables/props
The other option would be to create multiple traffic variables and define all of them on every page where they're required.
s.prop1 = "MOOD";
s.prop2 = "Uplifting";
s.prop3 = "Fun";
s.prop4 = "Proud";
s.prop5 = "Fun";
However, this option will consume a lot of traffic variables of which you only have 75.
Classification
The third option would look the same as the listprop, however, you don't configure the traffic variable as a listprop, you configure it as a normal traffic variable and classify it later on using the Classification Rule Builder.
Using the classification rule builder you can split the incoming data by the pipe character (using regular expression) and create new dimensions resembling the categories.
s.prop1 = "MOOD|Uplifting|Fun|Proud|Fun";
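To make clear what the classification rule will effectively do with that value, here is the equivalent split expressed in plain JavaScript (illustration only; the Classification Rule Builder itself is configured in the Adobe admin UI with a regular expression, not in page code):

```javascript
// Illustration of the split the classification rule performs on the prop value.
var raw = "MOOD|Uplifting|Fun|Proud|Fun";
var parts = raw.split("|");
var mainCategory = parts[0];         // becomes the main category dimension
var subCategories = parts.slice(1);  // each becomes a classified sub category
```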
I would personally go for the third option, as it doesn't require a lot of props and it allows for a future-proof approach to measuring the categories, even when you add more levels.
Good luck with your implementation!

Using DBFit variables for other Fixtures

Is it possible to use a DbFit variable in another fixture? For example, if I create a query in DbFit with a variable <<firstname, as follows:
| SELECT * FROM System.Columns WHERE ColumnName = 'FirstName' | <<firstname |
Is it possible to include a reference to that page in another suite of tests that use a column fixture or RestFixture and call the variable name to check against the database?
In the RestFixture, I'm checking a database column name against a node so can I do something like:
| GET | /resources/6 | | //${firstname} |
Where I'm checking if an xml node named FirstName exists in the database.
With fitSharp (.NET), it is. With Java, I'm not sure.
I was struggling with the same issue; in my opinion, it's hard to find any good documentation. I stumbled upon the answer while skimming some arbitrary tests somewhere.
You can access the symbol in RESTFixture using %variablename%.
RestFixture
| GET | /resources/6 | | //%firstname% |