My UWP project follows the MVVM pattern (or at least it is supposed to) and I'm currently refactoring it because it suffers from the "god viewmodel" problem: the page viewmodels do basically everything, which makes them really hard to maintain.
Currently a typical page viewmodel does these things:
It handles initialization when a page is navigated to: it fetches the models from the repositories, creates viewmodels that "wrap around" the models of my business layer, and exposes them to the view through properties.
It exposes ICommand properties, implemented with DelegateCommand, which call methods on the page viewmodel. These methods change state in the sub-viewmodels and persist the changes by calling the repositories.
It listens to events published on an EventAggregator. It might receive events like ModelChangedEvent or ModelAddedEvent, which are raised by the persistence layer when a model was changed by a third party. When the page viewmodel receives such an event, it updates the corresponding sub-viewmodel.
So I have this architecture:
+-------------------------+
| Page |
+-------------------------+
| binds to properties
| calls commands
\|/ subscribes to changes from external
+-------------------------+---------------------------------------+
| PageViewModel | ----------------------- |
+-------------------------+ | |
| creates | |
\|/ updates gets models | |
+-------------------------+ stores models | |
| ContentViewModel | | |
+-------------------------+ | |
| wraps around | |
\|/ \|/ |
+-------------------------+ loads +----------------------+ |
| ContentModel | <--------- | IContentRepository | |
+-------------------------+ stores +----------------------+ |
| notifies of |
| changes |
\|/ |
+----------------------+ |
| IEventAggregator |<-+
+----------------------+
I want to refactor the "god-like" PageViewModels by moving the commands into their own files and by changing how updates received from the EventAggregator are handled. But I have a few questions about this:
Let's imagine there is an AddContentCommand that is supposed to create a new ContentModel entity and immediately store it in the database. Should AddContentCommand be injected with IContentRepository and persist the new object by calling the repository? Or should this be done in PageViewModel? How should the PageViewModel be notified of the addition? Should this happen directly, by AddContentCommand calling some method on PageViewModel, or should it go through the EventAggregator? Should AddContentCommand even have a reference to PageViewModel at all?
Would there be issues if each ContentViewModel subscribed to changes on the IEventAggregator and updated itself accordingly? Or is it better if the PageViewModel subscribes and tells the dependent viewmodels to update, as is done right now?
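To make the first question above concrete, here is a sketch of the variant I have in mind, where the command is injected with the repository and the event aggregator and publishes a ModelAddedEvent instead of holding a reference to PageViewModel. This is not working code; IContentRepository, IEventAggregator, ModelAddedEvent, ContentModel and Store are placeholders for my actual types.

```csharp
using System;
using System.Windows.Input;

// Sketch only: the command persists via the repository and announces the
// change on the event aggregator; it has no back-reference to PageViewModel.
public class AddContentCommand : ICommand
{
    private readonly IContentRepository _repository;
    private readonly IEventAggregator _eventAggregator;

    public AddContentCommand(IContentRepository repository,
                             IEventAggregator eventAggregator)
    {
        _repository = repository;
        _eventAggregator = eventAggregator;
    }

    public event EventHandler CanExecuteChanged;

    public bool CanExecute(object parameter) => true;

    public void Execute(object parameter)
    {
        var model = new ContentModel();
        _repository.Store(model);

        // PageViewModel would react to this exactly the way it already
        // reacts to ModelAddedEvents raised by the persistence layer.
        _eventAggregator.GetEvent<ModelAddedEvent>().Publish(model);
    }
}
```

With this shape the command is symmetric with third-party changes: additions made locally and additions made externally both reach the PageViewModel through the same event.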
I'm doing some load testing of an API using a somewhat basic setup in JMeter.
The idea here is that the Thread Group spawns a bunch of clients/threads, and each of these clients has a bunch of loops which run in parallel (using the bzm - Parallel Controller).
Each loop represents some kind of action that a user can perform, and each loop has a Uniform Random Timer to adjust how often a given action is performed for each client.
One of the actions consists of two calls: the first one (1) fetches some IDs, which are then extracted with a JSON Extractor and modified a bit with a BeanShell PostProcessor. The result from the post processor is then used as a parameter for the next call (2).
The issue I'm facing is that in my Summary Report there are a lot more results from the first HTTP request (1) than from the second one (2). I would expect them to always be called the same number of times.
My guess is that it all comes down to me lacking some basic understanding of flow and concurrency (and maybe timers) in JMeter, but I have been unable to figure it out, so I need help.
This is the setup; imagine there being multiple loops.

Thread Group
 +-- Summary Report
 +-- Parallel Controller
      +-- Loop
           +-- Transaction
                +-- Uniform Random Timer
                +-- (1) HTTP Request
                |    +-- JSON Extractor
                |    +-- BeanShell PostProcessor
                +-- (2) HTTP Request
Ok, so I figured it out. It all comes down to understanding the structure of the tests; diving into the docs really helped, as they are very detailed.
This is the relevant part:
Note that timers are processed before each sampler in the scope in
which they are found; if there are several timers in the same scope,
all the timers will be processed before each sampler. Timers are only
processed in conjunction with a sampler. A timer which is not in the
same scope as a sampler will not be processed at all. To apply a
timer to a single sampler, add the timer as a child element of the
sampler. The timer will be applied before the sampler is executed. To
apply a timer after a sampler, either add it to the next sampler, or
add it as the child of a Flow Control Action Sampler.
https://jmeter.apache.org/usermanual/component_reference.html#timers
Another extremely important thing to understand is that some building blocks (in relation to the tree structure) are hierarchical, some are ordered, and some are both. This is described in detail here: https://jmeter.apache.org/usermanual/test_plan.html#scoping_rules
All in all, my issue could be fixed either by making the Uniform Random Timer a child of the first HTTP request (1), so that it only affects that call, or by adding a Flow Control Action as a sibling after the second call (2) and making the Uniform Random Timer a child of that.
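For reference, the first fix looks like this in the test plan tree (only the relevant branch shown; the timer is now a child of sampler (1), so it no longer delays and multiplies into sampler (2)):

```
Thread Group
 +-- Parallel Controller
      +-- Loop
           +-- Transaction
                +-- (1) HTTP Request
                |    +-- Uniform Random Timer   (child: applies only to (1))
                |    +-- JSON Extractor
                |    +-- BeanShell PostProcessor
                +-- (2) HTTP Request
```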
Background
I am following James Molloy's OS tutorial to implement a toy operating system, and I found the paging and heap code very involved because of their interdependence. For example, paging uses kmalloc, provided by the heap, because it needs to dynamically allocate space for the page table data structures and must know their virtual addresses. This is done in the following function call in paging.c:
dir->tables[table_idx] = (page_table_t*)kmalloc_ap(sizeof(page_table_t), &tmp);
Meanwhile, the heap relies on paging to allocate physical frames when it needs to grow, which can be seen in kheap.c:
alloc_frame( get_page(heap->start_address+i, 1, kernel_directory), (heap->supervisor)?1:0, (heap->readonly)?0:1);
They are tightly coupled together within the memory management module, like the following:
other modules
^ ^ ^
| | |
v v v
+-----------------------------------------+
| Memory Management |
| |
| +------------------+ |
| | paging | |
| | +-----+------------+ |
| | | | | |
| +------------+-----+ | |
| | heap | |
| +------------------+ |
+-----------------------------------------+
Question
I am wondering whether it is possible to completely decouple paging and the heap. I expect it to be possible because, conceptually:
Paging can be thought of as an address mapping/translation mechanism (probably plus physical frame allocation?).
The heap is about dynamic memory management.
Each seems pretty self-contained. Can they be implemented in a decoupled fashion, with only unidirectional dependencies, like the layered TCP/IP protocol stack?
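To illustrate the layering I have in mind, here is a minimal user-space sketch (all names are hypothetical, not from the tutorial, and a static arena stands in for physical memory): the heap only calls down into the frame allocator, and nothing calls back up. The chicken-and-egg problem on the paging side would be solved by a boot-time placement allocator rather than by the heap.

```c
#include <stddef.h>
#include <stdint.h>

#define FRAME_SIZE 4096
#define NUM_FRAMES 16

/* Layer 0: physical frame allocator. A trivial bump allocator over a
 * static arena; it knows nothing about paging or the heap. */
static uint8_t arena[NUM_FRAMES * FRAME_SIZE];
static size_t next_frame;

void *alloc_frame(void) {
    if (next_frame >= NUM_FRAMES)
        return NULL;
    return &arena[FRAME_SIZE * next_frame++];
}

/* Layer 2: the heap grows only by calling downward for more frames.
 * (This bump heap relies on alloc_frame handing out contiguous frames;
 * a real heap would instead ask paging to map the new frame at its
 * current end address.) */
struct heap {
    uint8_t *cur, *end;
};

static int heap_grow(struct heap *h) {
    uint8_t *f = alloc_frame();   /* downward call only */
    if (f == NULL)
        return -1;
    if (h->cur == NULL)
        h->cur = f;
    h->end = f + FRAME_SIZE;
    return 0;
}

void *kmalloc(struct heap *h, size_t n) {
    if (h->cur == NULL && heap_grow(h) != 0)
        return NULL;
    while ((size_t)(h->end - h->cur) < n)
        if (heap_grow(h) != 0)
            return NULL;
    void *p = h->cur;
    h->cur += n;
    return p;
}
```

The dependency arrow runs one way (heap -> paging -> frames), which is what I mean by "stacked" like TCP/IP.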
Much like in the example from this question, I see many code snippets on the web using magic numbers when creating an ExtendedPropertyDefinition. Example:
Dim PR_DELETED_ON As New ExtendedPropertyDefinition(26255, MapiPropertyType.SystemTime)
Dim PR_SEARCH_KEY As New ExtendedPropertyDefinition(12299, MapiPropertyType.Binary)
I have sort of found a reference location for these on MSDN, though I can only look them up individually as opposed to in one large table. Here is the one for PR_SEARCH_KEY from the example above:
+------------------------+---------------+
| Associated properties: | PR_SEARCH_KEY |
+------------------------+---------------+
| Identifier: | 0x300B |
+------------------------+---------------+
| Data type: | PT_BINARY |
+------------------------+---------------+
| Area: | ID properties |
+------------------------+---------------+
(0x300B is 12299 in decimal.)
I hate magic numbers, so I was looking for an enum for these in the EWS Managed API. I wrote this snippet to (hopefully) show me all the enums it exposes:
$obj = [Reflection.Assembly]::LoadFile("C:\Program Files (x86)\EWSManagedAPI\Microsoft.Exchange.WebServices.dll")
$obj.GetTypes() | Where-Object { $_.IsEnum -and ($_.IsPublic -or $_.IsNestedPublic) } | ForEach-Object {
    $props = @{ Name = $_.FullName }
    [enum]::GetValues($_) | ForEach-Object {
        $props.Integer = [int64]$_
        $props.Text = $_
        [pscustomobject]$props
    }
}
I didn't see anything in the output that matched what I was looking at above. Does anyone know if there is a preexisting enum for these properties? If not, that is fine; I just assumed there would be something out there.
Not the end of the world, but I couldn't find them myself, which might explain why code snippets keep referencing them by raw number.
No, there is nothing in the EWS Managed API for this, and AFAIK there is no master list maintained by Microsoft. There are also different types of properties, e.g. tagged and named properties, and to use an extended property in EWS you need to first define it and tell Exchange to either return or set that property, so EWS doesn't allow you to enumerate all the extended properties on an item the way MAPI does. The closest list that I know of is the one from the EWSEditor, which is pretty comprehensive: https://ewseditor.codeplex.com/SourceControl/latest#EWSEditor/PropertyInformation/KnownExtendedPropertiesData.cs . The MAPI include files also have a good list, e.g. https://github.com/openchange/openchange/blob/master/properties_enum.h (but these are only tagged properties).
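Since no enum ships with the API, one workaround is to define the tags you actually use as named constants in one place. A PowerShell sketch (the hashtable is hypothetical; the values are the ones from the question's MSDN lookups):

```powershell
# Hypothetical constant table; extend it with the tags you need.
# Values from the MSDN property pages cited in the question:
#   PR_DELETED_ON = 0x668F (26255), PT_SYSTIME
#   PR_SEARCH_KEY = 0x300B (12299), PT_BINARY
$PropTag = @{
    PR_DELETED_ON = 0x668F
    PR_SEARCH_KEY = 0x300B
}

$PR_SEARCH_KEY = New-Object Microsoft.Exchange.WebServices.Data.ExtendedPropertyDefinition(
    $PropTag.PR_SEARCH_KEY,
    [Microsoft.Exchange.WebServices.Data.MapiPropertyType]::Binary)
```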
I'm creating a generic Erlang server that should be able to handle hundreds of client connections concurrently. For simplicity, let's suppose that the server performs some basic computation for every client, e.g., addition or subtraction of every two values that the client provides.
As a starting point, I'm using this tutorial for basic TCP client-server interaction. An excerpt that represents the supervision tree:
+----------------+
| tcp_server_app |
+--------+-------+
| (one_for_one)
+----------------+---------+
| |
+-------+------+ +-------+--------+
| tcp_listener | + tcp_client_sup |
+--------------+ +-------+--------+
| (simple_one_for_one)
+-----|---------+
+-------|--------+|
+--------+-------+|+
| tcp_echo_fsm |+
+----------------+
I would like to extend this code and allow tcp_echo_fsm to pass the control over the socket to one out of two modules: tcp_echo_addition (to compute the addition of every two client values), or tcp_echo_subtraction (to compute the subtraction between every two client values).
The tcp_echo_fsm would choose which module to handle a socket based on the first message from the client, e.g., if the client sends <<start_addition>>, then it would pass control to tcp_echo_addition.
The previous diagram becomes:
+----------------+
| tcp_server_app |
+--------+-------+
| (one_for_one)
+----------------+---------+
| |
+-------+------+ +-------+--------+
| tcp_listener | + tcp_client_sup |
+--------------+ +-------+--------+
| (simple_one_for_one)
+-----|---------+
+-------|--------+|
+--------+-------+|+
| tcp_echo_fsm |+
+----------------+
|
|
+----------------+---------+
+-------+-----------+ +-------+--------------+
| tcp_echo_addition | + tcp_echo_subtraction |
+-------------------+ +-------+--------------+
My questions are:
Am I on the right path? Is the tutorial which I'm using a good starting point for a scalable TCP server design?
How can I pass control from one gen_fsm (namely, tcp_echo_fsm) to another gen_fsm (either tcp_echo_addition or tcp_echo_subtraction)? Or better yet: is this a correct/clean way to design the server?
This related question suggests that passing control between a gen_fsm and another module is not trivial and there might be something wrong with this approach.
For 2, you can use gen_tcp:controlling_process/2 to pass control of the TCP connection: http://erlang.org/doc/man/gen_tcp.html#controlling_process-2.
For 1, I am not sure of the value of handing off to a new module as opposed to handling the subtraction and addition logic as part of the defined states in your finite state machine. Doing so creates code which now runs outside of your supervision tree, so it's harder to handle errors and restarts. Why not define addition and subtraction as different states within your state machine and handle that logic within those two states?
You can create tcp_echo_fsm:subtraction_state/2,3 and tcp_echo_fsm:addition_state/2,3 to handle this logic and use your first message to transition to the appropriate state rather than adding complexity to your application.
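A sketch of what that could look like (the message and record formats are made up; error handling and the remaining gen_fsm callbacks are omitted):

```erlang
%% The first client message selects the mode; addition and subtraction
%% are just states of the same gen_fsm, so the socket never changes hands.
wait_for_mode({data, <<"start_addition">>}, StateData) ->
    {next_state, addition_state, StateData};
wait_for_mode({data, <<"start_subtraction">>}, StateData) ->
    {next_state, subtraction_state, StateData}.

addition_state({data, {A, B}}, #state{socket = Socket} = StateData) ->
    gen_tcp:send(Socket, integer_to_binary(A + B)),
    {next_state, addition_state, StateData}.

subtraction_state({data, {A, B}}, #state{socket = Socket} = StateData) ->
    gen_tcp:send(Socket, integer_to_binary(A - B)),
    {next_state, subtraction_state, StateData}.
```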
Is it possible to use a DbFit variable in another fixture? For example, if I create a query in dbfit with a variable <<firstname as follows:
SELECT * FROM System.Columns Where ColumnName= 'FirstName' | <<firstname |
Is it possible to include a reference to that page in another suite of tests that uses a column fixture or RestFixture, and use the variable name to check against the database?
In the RestFixture, I'm checking a database column name against a node, so can I do something like:
| GET | /resources/6 | | //${firstname} |
where I'm checking whether an XML node named FirstName exists in the database.
With fitSharp (.NET), it is. With Java, I'm not sure.
I was struggling with the same issue; imo it's hard to find any good documentation. I stumbled upon the answer while skimming some arbitrary tests somewhere.
You can access the symbol in RESTFixture using %variablename%.
RestFixture
| GET | /resources/6 | | //%firstname% |