How to display a simple message using WPF and MVVM

I've looked online at several different resources to find a simple example of commands in C# using MVVM and WPF. All I
want to know is how to display a message after a button has been clicked. I couldn't find anything this simple online,
so I'm asking here.

It really depends on the nature of the message, where it comes from, and when it should be displayed. While displaying it is certainly a view-related operation, the message may originate in business logic that belongs in the view-model for testability.
Generally speaking, I would choose one of two approaches:
Use normal dependency injection when building the view-model, and inject a service (via its interface) that will be in charge of displaying all messages. The real implementation may be as simple as a call to MessageBox.Show, or it may do more complicated things at the view level.
Give the view-model an event for it to raise, containing all the message data in its EventArgs parameter. The view will subscribe to that event and display the notification as it pleases.
In both cases, the view-model is unaware of view-specific logic, while still being able to encapsulate the business logic for message generation and remaining fully unit-testable. A rough sketch of the second approach follows.
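To illustrate the second (event-based) approach: the shape of it, sketched here in plain Java with made-up names to keep it framework-neutral, is a view-model that raises an event carrying the message data and a view that subscribes and decides how to display it. In WPF the subscription would typically live in the view's code-behind and call MessageBox.Show.

    // A minimal, framework-neutral sketch of the "view-model raises an event" approach.
    // All names are illustrative; a WPF view-model would expose a C# event instead.
    import java.util.ArrayList;
    import java.util.List;
    import java.util.function.Consumer;

    class MessageRequestedEvent {
        final String text;
        MessageRequestedEvent(String text) { this.text = text; }
    }

    class GreetingViewModel {
        private final List<Consumer<MessageRequestedEvent>> listeners = new ArrayList<>();

        // The view subscribes here.
        void onMessageRequested(Consumer<MessageRequestedEvent> listener) {
            listeners.add(listener);
        }

        // Bound to the button's command; contains only business logic, no UI calls.
        void greetCommand() {
            MessageRequestedEvent event = new MessageRequestedEvent("Hello from the view-model!");
            for (Consumer<MessageRequestedEvent> listener : listeners) {
                listener.accept(event);
            }
        }
    }

    public class Demo {
        public static void main(String[] args) {
            GreetingViewModel vm = new GreetingViewModel();
            // The "view": in WPF this subscription would show a MessageBox instead of printing.
            vm.onMessageRequested(e -> System.out.println("VIEW DISPLAYS: " + e.text));
            vm.greetCommand(); // simulates the button click
        }
    }

The view-model can be tested by subscribing a fake listener and asserting on the event it receives, without any UI framework present.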

Unfortunately, "simple" things such as displaying a message in a Window tend to violate MVVM from a purist standpoint, or require a fair amount of code to build a system where this works cleanly.
That being said, in most cases showing a message is a pure view concern. As such, it's something I'll often just do in the code-behind of the View. At first this may seem to violate MVVM, but since it's a purely View-related operation, it's not really a problem in practice.

You can do it in two ways:
1. Show a MessageBox from the view-model in the Execute method of the ICommand; when you click the button, the message box will pop up, or
2. Keep the message element hidden in the XAML and make it visible when the button is clicked.

MVVM vs Bloc patterns

I'm creating a new app with Flutter, and I'm trying to design it, separating the business logic from the view.
I've read about Bloc and MVVM (I know there are other patterns but these were the ones I preferred), but I don't understand the differences between them. They look pretty much the same to me.
Can anyone help me understand them?
Looking at a typical illustration of MVVM, you can see that there are separate data and business logic models. Using BLoC, however, there is not really a distinction like that: the classes that handle the business logic also handle the data, which can also apply to MVVM.
To be fair, there really is not much of a difference. The key takeaway is the same for both: isolating the business logic from the UI. Hence, the implementation of either of the two will look very similar, i.e. using Streams and StreamBuilders.
Additionally, there are packages that make working with Streams easier, e.g. rxdart, which is what the Flutter team uses as far as I know.
BLoC and MVVM seemed to be different when BLoC was introduced, but those differences faded away as BLoC implementations changed over time. Right now the only real difference is that BLoC doesn't separate presentation logic from business logic, or at least it doesn't do so in an obvious manner. Presentation logic is the layer that understands the interactions between UI elements and the business part of the application (the Presenter's job in MVP). Some BLoC implementations put presentation logic into BLoCs, others into the UI.
The new thing in BLoC was that it should not expose any methods. Instead, it would only accept events through its exposed sink or sinks. This was for the sake of code reuse between Angular Dart web apps and Flutter mobile apps. This concept was recently abandoned because we don't really write Angular Dart web apps and it is less convenient than regular methods. Right now BLoCs in the official BLoC package expose methods just like a good ol' view-model.
Some would say that a BLoC should expose one Stream of complete state objects, while a VM can expose multiple Streams, but this is not true. Exposing one Stream of states is a good practice in both approaches. At first, official Google BLoC presentations showed BLoCs implemented with multiple output Streams as well.
One interesting difference that seemed to be a thing was that a BLoC should communicate via events not only with the UI but also with other parts of the application; for example, it should receive an event after a Firebase notification arrives or when Repository data changes. While this seems interesting, I've never seen an implementation like that. It would be odd from a technical point of view (would the Repository have to know about all the BLoCs that are using it?). Although I am thinking about trying out such an implementation based on an EventBus, that's completely off topic :)
They are not quite the same, actually... MVVM implies data bindings between the view and the view-model, which means, in practice, the view objects are mostly the ones commanding the view-model. MVVM seems to me a simplification of MVC, to show the model "as is" behind the scenes. For example, Xamarin largely uses MVVM, and the controls on the screen like checkboxes, text inputs, etc. all modify the view-model behind the scenes.
You may already be starting to see a problem here: if you change the UI, you may have to change the view-model as well. Suppose you have an entry number that must be between 0 and 255; where do you put this logic? Well, in MVVM you put this logic in the view. But you must put these checks in the view-model as well to guarantee data safety. That means a lot of code rewriting to do the same thing, and if you decide to change this range, you have to change it in two places, which makes your code more prone to errors. Disclaimer: there are workarounds for this, but they are far more complicated than they should be.
On the other hand, BLoC works by receiving events and emitting states. It doesn't care (although it may) where the event came from. Using the same example as above, the view would signal an event to the bloc/controller saying "hey, my number changed!"; the bloc would then process this new event and, if appropriate, emit a signal to the UI: "hey UI! You should change! I have a new state for you!". Then the UI rebuilds itself to present those changes.
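Flutter implementations typically express this with Dart Streams; the following is only a minimal sketch of the same events-in, states-out idea in plain Java (all names invented), reusing the 0-255 example from above:

    import java.util.function.Consumer;

    // Events flow in, states flow out; the component never touches the UI directly.
    class NumberChanged {
        final int value;
        NumberChanged(int value) { this.value = value; }
    }

    class NumberState {
        final int value;
        final boolean valid;
        NumberState(int value, boolean valid) { this.value = value; this.valid = valid; }
    }

    class NumberBloc {
        private Consumer<NumberState> stateListener = s -> {};

        void listen(Consumer<NumberState> listener) { this.stateListener = listener; }

        // "hey, my number changed!" -> validate -> emit a new state for the UI to render
        void add(NumberChanged event) {
            boolean valid = event.value >= 0 && event.value <= 255;
            stateListener.accept(new NumberState(event.value, valid));
        }
    }

    public class BlocSketch {
        public static void main(String[] args) {
            NumberBloc bloc = new NumberBloc();
            bloc.listen(state -> System.out.println("UI rebuilds: value=" + state.value + " valid=" + state.valid));
            bloc.add(new NumberChanged(42));   // in range
            bloc.add(new NumberChanged(300));  // out of range -> invalid state
        }
    }

The point of the sketch is that the 0-255 rule lives in exactly one place, and any UI (or test) can feed events in and observe the resulting states.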
For me, the advantage of BLoC over MVVM is that the business logic can be entirely decoupled from the view, which is overall a better way to do things. As modern software development requires more and more changes in the UI (different screen sizes, densities, platforms, etc.), having the UI side decoupled from the models is a fantastic feature for code reusability.

Where in an Eclipse plugin do I place an algorithm that utilises several views

I've got an Eclipse plugin that contains 4 views. A piece of programming functionality must sit 'above' these views, tying them together. I'll call it the 'master'.
Can anybody advise on the best location for this functionality? Really, I want the 'master' to start once the application is open and the views have been initialised.
In my generated RCP application plugin I have an Activator, a Client, a Perspective, an ApplicationActionBarAdvisor, an ApplicationWorkbenchAdvisor, and an ApplicationWorkbenchWindowAdvisor. None of these seem suitable for hosting the 'master'.
Edit: after a little further investigation I suspect the ApplicationWorkbenchWindowAdvisor holds my answer. It has a number of methods that may be overridden to hook into application lifecycle stages. The ones that appear to relate to this problem are postStartup, postWindowOpen, and postWindowCreate.
I'd appreciate any pointers on which method is called after all the views have been created/initialised.
Edit 2: more googling has turned up the org.eclipse.ui.startup extension point, as IStartup.earlyStartup() is also run after the workbench has completely started.
cheers,
Ian
Maybe you could define an OSGi service (see the tutorial by Lars Vogel for a detailed how-to: http://www.vogella.de/articles/OSGi/article.html).
This service could be initialized either declaratively, or by using your plug-in activator; then each view could connect to this service.
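A minimal sketch of the activator-based variant, with a made-up MasterService interface standing in for the 'master' (declarative services would work just as well):

    import org.osgi.framework.BundleActivator;
    import org.osgi.framework.BundleContext;

    // Hypothetical "master" service that ties the four views together (illustrative name).
    interface MasterService {
        void viewReady(String viewId);
    }

    public class Activator implements BundleActivator {

        @Override
        public void start(BundleContext context) {
            // Register the master as an OSGi service when the plug-in starts.
            MasterService master = viewId -> System.out.println("view ready: " + viewId);
            context.registerService(MasterService.class.getName(), master, null);
        }

        @Override
        public void stop(BundleContext context) {
            // Nothing to clean up in this sketch; the framework unregisters services on stop.
        }
    }

    // Each view could then look the service up, e.g. in createPartControl():
    //   BundleContext ctx = FrameworkUtil.getBundle(getClass()).getBundleContext();
    //   MasterService master = (MasterService) ctx.getService(
    //           ctx.getServiceReference(MasterService.class.getName()));
    //   master.viewReady("com.example.views.MyView");

Having each view report in to the service also sidesteps the "which lifecycle hook runs after all views exist?" question: the master simply reacts as the views come up.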
On the other hand, if you want to communicate between the views, you could simply use the workbench selection service - in this case, all views operate somewhat independently of each other, without central control.
Edit Responding to the changes in the question: none of the proposed methods has anything to do with the opening (or closing) of the views. postStartup is executed after the application starts; postWindowOpen is executed after the window is opened; while postWindowCreate is executed after the window is created, but before it is opened.
The earlyStartup() method makes it possible to execute code after the workbench has started, but it still does not ensure that the corresponding views are opened - views have a different life-cycle than windows.
Globally, you have to provide some common service that can be used by each of the views; this can be registered at most points of the application lifecycle - you should choose the one that fits your needs best.
I think you're mixing concepts. Algorithms don't work on views but on the models which the views show.
A view is nothing but a window which transforms bits in memory to something a user can find useful.
So what you really want is to separate the models in your views from the views themselves. That will also make testing much simpler. Have them publish events when they change.
Your algorithm should subscribe to the events of all four models and do its work, putting the result in another model where the same or other views can pick it up.
In the views, also subscribe to the appropriate events which your models emit and update them accordingly.
This way, you can separate the model from the views. You won't get into trouble when the user rearranges views or closes them.
I think the best you can do is set up a Perspective with those views and lock them, so the user may not close them.
I do not remember exactly how you can do this, but I think setting the perspective as 'fixed' on the extension point declaration might do the trick.
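For what it's worth, a sketch of what that might look like in the perspective factory (the view IDs are placeholders); IPageLayout.setFixed pins the layout and IViewLayout.setCloseable keeps individual views from being closed:

    import org.eclipse.ui.IPageLayout;
    import org.eclipse.ui.IPerspectiveFactory;

    // Sketch of a perspective factory that pins the views; the view IDs are illustrative.
    public class MasterPerspective implements IPerspectiveFactory {

        @Override
        public void createInitialLayout(IPageLayout layout) {
            String editorArea = layout.getEditorArea();

            layout.addView("com.example.views.ViewA", IPageLayout.LEFT, 0.25f, editorArea);
            layout.addView("com.example.views.ViewB", IPageLayout.BOTTOM, 0.75f, "com.example.views.ViewA");

            // Prevent the user from closing the individual views...
            layout.getViewLayout("com.example.views.ViewA").setCloseable(false);
            layout.getViewLayout("com.example.views.ViewB").setCloseable(false);

            // ...and fix the whole layout so parts cannot be moved or closed.
            layout.setFixed(true);
        }
    }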

UI First or logic first?

While working on projects, I often come to the dilemma of working on the UI first or on the logic first. Having the UI first gives a nice overview of how the end product is going to look, while having the logic first uncovers any possible roadblocks in the technology.
However, it is not always that crystal clear... sometimes the UI may need data to be populated to really show what it means, and simulating the data could be more difficult than implementing the logic. What is your preferred approach for development, and why? Which is more efficient and effective?
(I am seeing this issue more and more with iPhone projects.)
Neither.
You will have to do some thinking at the beginning of the project anyway, deciding the general approach you will take and what operations you will support.
Do that reasonably well and you will have defined the interface between the view and the underlying logic. Look at the Model-View-Controller approach for some inspiration.
What you want to have early on is an idea of the basic operations your logic code needs to perform in order to achieve a purpose. Usually it will be a simple function call, but sometimes it may involve more than that. Have that clear first.
Next, a complex system that works is based on a simple system that works.
Which means you will need to have a basic UI you'll use to test a basic logic implementation first.
A simple form with a button which presents a message is basic enough. Then, it can grow, you implement a piece of functionality and then you add a simple UI you can test it with.
It is easier to do both piece by piece as the logic and UI for a small piece of the logic are conceptually similar and it will be easy to keep track of both while you implement and test.
The most important part is to keep the UI and logic decoupled, making them talk through a common interface. This will allow you to make quick edits when needed and improve the looks of the GUI towards the end.
You will be better able to just scrap the UI if you don't like it. All you'll need to do is use the same interface, one which you know how to do because you wrote it and you already implemented it.
If at some point you realize you've made a big mistake you can still salvage parts of the code, again because the UI and logic are decoupled and, hopefully, the logic part is also modular enough.
In short: think first, do both the UI and the logic in small increments and keep things modular.
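For instance, a trivial sketch of such a seam (names invented): the UI only ever talks to the interface, so a throwaway console front end, a GUI, and the tests can all drive the same logic, and any of them can be scrapped without touching the others.

    // Hypothetical seam between UI and logic; any UI (console, GUI, test) talks to this only.
    interface OrderService {
        double totalFor(int quantity, double unitPrice);
    }

    class DefaultOrderService implements OrderService {
        @Override
        public double totalFor(int quantity, double unitPrice) {
            return quantity * unitPrice; // the "logic", kept free of UI concerns
        }
    }

    // A throwaway console "UI" used while the real one is still being iterated on.
    public class ConsoleUi {
        public static void main(String[] args) {
            OrderService service = new DefaultOrderService();
            System.out.println("Total: " + service.totalFor(3, 9.99));
        }
    }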
Iterate between both (logic and UI). Back and forth. The UI may change as you understand how and what you can do with the logic and whatever constraints that may present. The logic may change as you change the features, behavior, or performance requirements of an easy-to-use, responsive, and decent looking UI.
I usually do the minimum possible of each until I have some barely working mockups. Then I use each mockup to test where my assumptions about the proper UI and/or logic might be right or wrong. Pick the most flawed, and start iterating.
Apple suggests mocking up the UI on paper first. (Have an eraser handy...)
If possible, in parallel.
But personally, I advocate logic first.
I start with the fundamentals first and that means getting the logic coded and working first. There are two reasons for this:
1. If I can't get the logic working correctly, having a pretty UI is useless and a waste of my time.
2. You will most likely change the UI as you work on the logic, making the UI process longer and more expensive.
I usually get my UI in order first. The reason? As I prototype different designs, sometimes my idea for the app changes. If it does, then it's of no consequence - there is no code to rewrite.
However, sometimes it is helpful to get the fundamentals first in order to determine if the app is going to work or not. If it's not going to function, then why waste time making interfaces?
I like to start by laying out the different parts of my project in something like Visio.
I make boxes for the different views I expect to have, and I fill them with the information I expect them to contain.
I make another set of boxes for my expected model objects (logic). I fill them with the information I expect they will work with, and I draw lines between models and views where I think it will be necessary.
I do the same thing for object graphs (if I plan on using CoreData), and for database tables if I am going to have an external database.
Laying everything out visually helps me decide if I am missing any important features or interactions between project components. It also gives me something to quickly refer to if I later lose track of what I was doing.
From that point, I tend to work on a model until it has enough done to fill out part of a view, then I work on a view until it can interact with the model.
I also try to identify views or models that could be reused for multiple purposes so that I can reduce the overall amount of work I have to do.
Take the agile approach and work on small amounts of both in iterations, slowly building each functional piece of the program so as not to build any monolithic piece at once.

Should I use interface builder or not?

I'd like to know more about the pros and cons of using interface builder when developing iPhone/iPad apps.
I've written a fairly complex and customized app that's on the app store right now, but all of the interfaces are hand coded as they are fairly complex. I've customised the navigation and tab bars with backgrounds, table view cells are manually drawn for speed, and some views are complex and scalable with many subviews.
I'm pondering whether or not to start using interface builder but I'm not sure to what extent I'll use it and whether it's worth it at all. Is it quicker? Can things still be easily customised?
Any advice would be most welcome!
There is absolutely no reason not to use it. One thing that scares people off is their experience with other GUI tools, things that generated code for them or made some other mess. The problem then becomes that it is hard to round-trip the interface: you cannot easily modify things once they are generated, because of the complexity of pushing those changes back into the emitted code.
Interface Builder does not generate code, it uses NSArchiver to read and write an actual object graph for the GUI. This has many benefits, starting with the fact that you can easily round-trip the interface and make incremental changes. It really is all good, use it. :-)
Personally I've found Interface Builder quite tricky to ramp up on, and sometimes it doesn't expose all the properties I want to edit (although this may have changed in the newer versions), so generally I've tended to create my UIs in code.
If you do use Interface Builder, make sure to consider localization. Apple's iPhone Developer docs recommend that the NIB be a localized resource that gets translated. That way the translator can see if the new text fits in the view. Unfortunately this means the translator needs to be capable of opening NIB files and editing them (or a developer needs to get involved in the translation process).
Personally, I prefer providing localized text resources and setting the text to the UI in code. I then provide comments in the Localizable.strings file saying how long the text can be, and providing any context the translator might need.
There are no cons.
Why not use it? It makes everything easier :)

How do I test-drive GWT development?

Just googling 'TDD' and 'GWT' easily leads to this article, where the author explains how he can test a GWT application without a container. However, I think his example is not test-driven, as he has all the design first and then writes the tests afterwards, not 'test-first'.
This leads me to ask: is it possible to have 'test-first' development on a UI like GWT? Some people say UI code is not suitable for TDD, but I think that by adopting the MVC pattern we can at least test-drive the M and C parts (so V is the UI part which cannot be developed test-first).
What will be the first failing test we would write on the article example?
Test driving UI is problematic because you often don't know what you want on the screen until you see it on the screen. For that reason, GUI development tends to be massively iterative and therefore very difficult to drive with tests.
This does not mean that we just abandon TDD for GUIs. Rather, we push as much code as we possibly can out of the GUI, leaving behind only simple wiring code. That wiring allows us to make the massively iterative changes we need, without affecting the essence of the problem.
This technique was probably best described by Michael Feathers some years ago in an article entitled "The Humble Dialog Box". It is also the fundamental idea behind the Model-View-Presenter pattern that caused such a stir four years ago and has now been split into the Passive View and Supervising Controller patterns. The article linked in this question takes advantage of these ideas, but in a test-after rather than a test-driven way.
The idea is to test-drive everything except the view. Indeed, we don't even need to write the view for a good long time. In fact, the View is so absurdly simple that it probably doesn't need any kind of unit tests at all; or if it does, they can be written last.
To test drive the Supervising Controller you simply make sure you understand how the data will be presented on the screen. You don't need to know where the data is, or what the font is, or what color it is, or any of the other cosmetic issues that cause the massive iteration of GUIs. Rather, you know one data item will be some kind of text field. Another will be a menu, still another will be a button or a check box. And then you make sure that the View can ask all the questions it needs to ask to get these items rendered correctly.
For example the text box may have a default value. The View should be able to ask for it. The menu may have some items greyed-out. The View should be able to ask for this information. The questions that the view asks are all about presentation, and are devoid of business rules.
By the same token, the view will tell the Supervising Controller when anything changes. The controller will modify the data appropriately, including any kind of validation and error recovery, and then the View can ask how that data should be presented.
All of this can be test driven because it's all decoupled from the visual display. It's all about how the data is manipulated and presented, and not about what it looks like. So it doesn't need to be massively iterated.
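A sketch of what test-driving such a controller might look like, using JUnit and made-up names (a presenter for a single quantity field); note that no widget or GWT code is involved:

    import static org.junit.Assert.*;
    import org.junit.Test;

    // Hypothetical presenter; validation and defaults live here, not in the view.
    class QuantityPresenter {
        private int quantity = 1;          // default value the view will ask for
        private String error = null;

        String quantityText()  { return Integer.toString(quantity); }
        boolean saveEnabled()  { return error == null; }
        String errorText()     { return error; }

        // The view reports raw input; the presenter decides what that means.
        void quantityChanged(String text) {
            try {
                int value = Integer.parseInt(text);
                if (value < 1) throw new NumberFormatException();
                quantity = value;
                error = null;
            } catch (NumberFormatException e) {
                error = "Quantity must be a positive number";
            }
        }
    }

    public class QuantityPresenterTest {

        @Test
        public void viewCanAskForTheDefaultValue() {
            assertEquals("1", new QuantityPresenter().quantityText());
        }

        @Test
        public void invalidInputDisablesSaveAndExposesAnErrorMessage() {
            QuantityPresenter presenter = new QuantityPresenter();
            presenter.quantityChanged("abc");
            assertFalse(presenter.saveEnabled());
            assertEquals("Quantity must be a positive number", presenter.errorText());
        }

        @Test
        public void validInputIsAcceptedAndClearsTheError() {
            QuantityPresenter presenter = new QuantityPresenter();
            presenter.quantityChanged("7");
            assertTrue(presenter.saveEnabled());
            assertEquals("7", presenter.quantityText());
        }
    }

The eventual View does nothing but call quantityText(), errorText(), and saveEnabled() to render itself and forward keystrokes to quantityChanged(), which is why it stays too simple to be worth unit testing.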
I've successfully test-driven the development of Swing and GWT applications through the GUI.
Testing "just behind the GUI" ignores the integration between the model code and the GUI components. The application needs to hook up event handlers to display data in the GUI when the model changes and receive input from the GUI and update the model. Testing that all those event handlers have been hooked up correctly is very tedious if done manually.
The big problem to overcome when testing through the GUI is how to cope with changes to the GUI during development.
GWT has hooks to help with this. You need to set debug IDs on the GWT widgets and import the DebugID module into your application. Your tests can then interact with the application by controlling a web browser, finding elements by their IDs and clicking on them or entering text into them. WebDriver is a very good API for doing this.
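For example, a sketch of the test side (the URL, IDs, and browser choice are placeholders); in the GWT view code the widget would have been tagged with something like loginButton.ensureDebugId("loginButton"), which renders a DOM id of "gwt-debug-loginButton":

    import org.openqa.selenium.By;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.firefox.FirefoxDriver;

    // Drives the running GWT app through a real browser, finding widgets by their debug IDs.
    public class LoginThroughTheGui {
        public static void main(String[] args) {
            WebDriver driver = new FirefoxDriver();
            try {
                driver.get("http://localhost:8888/MyApp.html");   // placeholder URL
                driver.findElement(By.id("gwt-debug-userName")).sendKeys("ian");
                driver.findElement(By.id("gwt-debug-loginButton")).click();
            } finally {
                driver.quit();
            }
        }
    }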
That's only the start, however. You also need to decouple your tests from the structure of the GUI: how the user navigates through the UI to get work done. This is the case whether you test through the GUI or behind the GUI against the controller. If you test against the controller, the controller dictates the way that the user navigates through the application's different views, and so your test is coupled to that navigation structure because it is coupled to the controller.
To address this, our tests control the application through a hierarchy of "drivers". The tests interact with drivers that let them perform user-focused activities, such as logging in, entering an order, and making a payment. The driver captures the knowledge of how those tasks are performed by navigating around and entering data into the GUI. It does this by using lower-level drivers that capture how navigation and data entry are performed as "gestures", such as clicking on a button or entering text into an input field. You end up with a hierarchy like:
User Goals: the tests verify that the user can achieve their goals with the system and demonstrate how those goals are achieved by a sequence of...
User Activities: things the user does through the GUI, represented as drivers that perform...
Gestures: low level mouse and keyboard input to control the GUI.
This hierarchy is often used in the user-centered design literature (although with different terminology). A compressed sketch of the layering is below.
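All names here are invented for illustration: the test speaks in terms of user goals, the activity-level driver knows how a task is navigated, and only the gesture-level driver knows about WebDriver, IDs, and clicks.

    import org.openqa.selenium.By;
    import org.openqa.selenium.WebDriver;

    // Gestures: the only layer that knows about WebDriver, debug IDs, clicks and keystrokes.
    class GestureDriver {
        private final WebDriver driver;

        GestureDriver(WebDriver driver) {
            this.driver = driver;
        }

        void enterText(String debugId, String text) {
            driver.findElement(By.id("gwt-debug-" + debugId)).sendKeys(text);
        }

        void click(String debugId) {
            driver.findElement(By.id("gwt-debug-" + debugId)).click();
        }
    }

    // User activities: knows how a task is performed, not which widgets or pixels are involved.
    class OrderEntryDriver {
        private final GestureDriver gestures;

        OrderEntryDriver(GestureDriver gestures) {
            this.gestures = gestures;
        }

        void placeOrder(String item, int quantity) {
            gestures.enterText("itemName", item);
            gestures.enterText("quantity", Integer.toString(quantity));
            gestures.click("submitOrder");
        }
    }

    // User goals: the test itself then reads as a sequence of activities, e.g.
    //   new OrderEntryDriver(new GestureDriver(driver)).placeOrder("Widget", 3);

If the navigation or the widget IDs change, only the driver layers need updating; the tests that state the user's goals stay the same.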