In DialogFlow, we can trigger functions and perform tasks based on either Intent Name or Actions.
Which one should we use to decide? What is better practice?
I've asked a similar question in the past and have experimented with both the function-by-intent and function-by-action patterns, and I have come to view actions as almost completely useless. Their only benefit seems to be that you can attach the same action to multiple intents, but if anything I would want the same intent to be handled by multiple functions, based on the specific contexts and parameters. This stems from a design goal of preferring fewer multi-purpose intents over many simpler ones, though the latter pattern is arguably the one implied by much of the Google documentation.
Conceptually I think it is useful to think of the fulfillment functions as the transition functions of a finite state machine, where the state is defined by the incoming webhook request, i.e. a tuple of (intent, contexts, parameters, event) plus maybe other metadata such as locale and surface capabilities. I haven't found the actions to add anything to this model.
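To make that model concrete, here is a minimal sketch in C# (the request type is a hypothetical simplification that drops the event field; the real webhook payload depends on your client library) of dispatching on the whole state tuple rather than on the action:

using System.Collections.Generic;

// Hypothetical, simplified webhook request: the conversation "state".
public record WebhookRequest(string Intent, ISet<string> Contexts,
                             IDictionary<string, string> Parameters);

public record WebhookResponse(string FulfillmentText);

public static class FulfillmentRouter
{
    // The transition function: the same intent can be routed to different
    // handlers depending on the active contexts and parameters.
    public static WebhookResponse Handle(WebhookRequest req) => req.Intent switch
    {
        "order.pizza" when req.Contexts.Contains("awaiting_size")
            => new WebhookResponse($"A {req.Parameters["size"]} pizza, got it."),
        "order.pizza"
            => new WebhookResponse("What size would you like?"),
        _   => new WebhookResponse("Sorry, I didn't get that."),
    };
}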
In most cases, it won't matter. The approach that @gmolau describes is a good one, and I think they're pretty much spot on.
Certainly the documentation is increasingly moving towards using the Intent Name for everything rather than the Action.
The case where I think it makes the most sense to use the action name instead of an Intent name is when you have different Intents that may respond to the same phrase, and ultimately do the same thing, but only in certain contexts. This lets you do the logic for what gets called on the Dialogflow side and not have to register the same Handler against multiple Intents.
Related
I've been a heavy Windsor user for the last several years. Prior to the Fluent Registration API, I would toggle between XML Registration and huge piles of AddComponent() code. We've been happily using the Fluent Registration API, and Installers specifically, for quite some time now. I've gotten the impression from various writings like this:
http://docs.castleproject.org/Windsor.XML-Registration-Reference.ashx
That the XML Registration approach has fallen out of favor, and it wouldn't surprise me if it were marked for deprecation at some point in the near future.
Now, for my question: the Fluent Registration API and Installers work swimmingly for auto-wiring scenarios (i.e. when I want Windsor to just figure out how to construct my object graphs). Auto-wiring covers the vast majority of IoC use cases out there, but what about when auto-wiring isn't possible? In other words, I have multiple implementations of a service and I need to tell Windsor how to construct parts of my object graph. I've done this many times with the XML Registration approach, but is there a more preferred approach these days? I'm hesitant to go the XML Registration route as its future seems uncertain, but I don't know how else to accomplish this with Windsor.
My use cases are:
The system needs to be able to swap implementations at QA-test time (i.e. credit checks and fraud detection processing, where we want to test without a dependency on a credit bureau API)
Provider patterns in our system where we need to conditionally turn different implementations on and off at deploy-time.
This all seems very well suited for IoC and we have all the building blocks in place, but I want to make sure I'm taking the most future-proof approach with Windsor.
UPDATE:
While I like the feature toggle approach, I recently discovered a Windsor feature that is very useful on this front - Fallback Components. I'm leaving this edit here for anyone who might stumble across this later.
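For later readers, a minimal sketch of what that looks like (assuming Windsor 3.x's IsFallback() registration; the bureau types are invented stand-ins for the credit-check scenario from the question):

using Castle.MicroKernel.Registration;
using Castle.Windsor;

public interface ICreditBureau { }                 // invented example service
public class FakeCreditBureau : ICreditBureau { }
public class RealCreditBureau : ICreditBureau { }

public static class Bootstrap
{
    public static IWindsorContainer Configure()
    {
        var container = new WindsorContainer();

        // The fallback is resolved only while no non-fallback registration
        // for ICreditBureau exists.
        container.Register(
            Component.For<ICreditBureau>()
                     .ImplementedBy<FakeCreditBureau>()
                     .IsFallback());

        // Registering the real implementation simply wins over the fallback:
        container.Register(
            Component.For<ICreditBureau>()
                     .ImplementedBy<RealCreditBureau>());

        return container;
    }
}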
Configuring your DI container completely through XML is error prone, verbose, and just too painful. The XML configuration possibilities are always a subset of what you can do with code based configuration; code is always more expressive.
Sometimes though your DI configuration depends on deploy-time configurations, but since the number of knobs you need are often fairly small, using a configuration flag is often a much better approach than polluting your configuration file with fully qualified type names.
Or let me put it differently: when you have large amounts of your DI configuration placed in your configuration file because you might want to change them at deploy time, please think again. Many of those changes need testing (by a developer) anyway, so there is no way you want someone from your operations team to fiddle around with that. And when you need a developer to look at it and verify it, what's the advantage of not having to recompile the project? Is this actually any quicker? A developer would still have to start the application anyway.
It is a false sense of flexibility and, in fact, poor interface design (XML is the interface for your maintenance and operations department). BTW, are you the person who needs to document how the configuration file should be changed?
Instead of describing the list of valid fully qualified type names somewhere in the middle of the XML file, wouldn't it be much easier if all you have to write is "place 'false' in this field to disable ..."?
Here is an example of how to use a configuration switch:
bool detectFraud =
    ConfigurationManager.AppSettings["DetectFraud"] != "false";

container.Register(
    Component.For(typeof(IFraudDetector)).ImplementedBy(
        detectFraud ? typeof(RealDetector) : typeof(FakeDetector)));
See how the configuration switch is now simply a boolean flag. This makes the configuration file much more maintainable, since the configuration is now a simple boolean switch instead of a complete type name (that can be misspelled).
Of course doing the ["DetectFraud"] != "false" isn't that nice by itself, but this can simply be solved by creating a strongly-typed configuration helper.
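For example, a tiny illustrative helper (the AppConfig name is made up) keeps the string parsing in one place:

using System.Configuration;

// Illustrative strongly-typed wrapper so flag parsing lives in one place.
public static class AppConfig
{
    public static bool DetectFraud =>
        ConfigurationManager.AppSettings["DetectFraud"] != "false";
}

The registration then reads naturally as AppConfig.DetectFraud ? typeof(RealDetector) : typeof(FakeDetector).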
This answer might help as well. It allows you to dynamically, at runtime, provide an implementation. Though it sounds like you don't need it that dynamically, and it's a little less obvious what's going on.
There are no plans to obsolete or remove the XML config support in Windsor.
Yes, you are right, it isn't a preferred approach due to its numerous drawbacks.
Anything you can do in XML can be done in code (note that the inverse is not true).
Also keep in mind XML is not all-or-nothing. There are many ways to achieve the scenarios you gave as examples without resorting to registration in XML.
Feature toggles
Conditional compilation
if/else in your installer based on an appSettings flag (see the sketch below)
others...
I've used each of them in different projects in the past.
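As a sketch of the third option (an illustrative installer, reusing the invented detector types from the configuration-switch answer above; the toggle logic stays in one composition-root class):

using System.Configuration;
using Castle.MicroKernel.Registration;
using Castle.MicroKernel.SubSystems.Configuration;
using Castle.Windsor;

// Invented installer that picks the implementation at startup
// based on a plain appSettings flag.
public class FraudDetectionInstaller : IWindsorInstaller
{
    public void Install(IWindsorContainer container, IConfigurationStore store)
    {
        bool detectFraud =
            ConfigurationManager.AppSettings["DetectFraud"] != "false";

        if (detectFraud)
            container.Register(
                Component.For<IFraudDetector>().ImplementedBy<RealDetector>());
        else
            container.Register(
                Component.For<IFraudDetector>().ImplementedBy<FakeDetector>());
    }
}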
In web development there is a lot of focus on REST-style architectures, with the objective of minimizing (or eliminating) state. The web frameworks that I have seen all emphasize this style (Django, Rails, Flask, etc.).
While I agree that this is a good fit for the web in general, there are also many cases where it is inadequate. In particular I am thinking of the case where you want the user to follow a process, i.e. you want to offer a number of steps that should be completed in a certain order (possibly with optional steps, deviating paths, etc.).
A good example of this might be a shopping cart: First you have to make your selection, then enter your address, choose shipment type, enter your payment details, finish. You don't want the user to skip any of these steps and the process can become a lot more complex. Ideally I would want this process to be defined in a separate place to separate this logic from the rest of the implementation.
Now my questions:
Are finite state machines the way to go here? Do they still work well if these processes become complex and need to change a lot (e.g. this step should go here, this step should go into this process instead, etc)?
What options are offered by/for web frameworks (not any in particular I am interested in the best solutions)?
What are interesting / good examples of where such processes occur? Shopping carts are an obvious example but I am sure there are lots more.
Yes, they are. Using state machines (workflows) is an appropriate solution for the problem you describe. If designed well, it can make your code cleaner and remove mess from it. The logic of each state and the transition logic are encapsulated within a State class object, so the code looks cleaner and is more maintainable. Implementations may vary (say, in where you keep your transition logic - within the state, or in a separate transition manager) and need not match the canonical description of a state machine from discrete math, so experiment to find what works best for you.
For Ruby you can check workflow: https://github.com/geekq/workflow or stonepath: https://github.com/bokmann/stonepath. The state machine pattern can also be found in JavaScript frameworks (SproutCore). It's not difficult to implement your own small state machine engine.
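For instance, a minimal engine can be little more than a transition table. Here is a hedged C# sketch (C# to match the other code in this thread; the step names are invented) of the shopping-cart checkout from the question:

using System;
using System.Collections.Generic;

// Usage: a checkout where steps must happen in order.
var checkout = new Workflow("cart")
    .Allow("cart", "address")
    .Allow("address", "shipment")
    .Allow("shipment", "payment")
    .Allow("payment", "done");

checkout.MoveTo("address");    // fine
// checkout.MoveTo("payment"); // would throw: a step was skipped

// Minimal engine: current state plus an allowed-transition table.
public class Workflow
{
    private readonly Dictionary<string, HashSet<string>> _transitions = new();
    public string Current { get; private set; }

    public Workflow(string initial) => Current = initial;

    public Workflow Allow(string from, string to)
    {
        if (!_transitions.TryGetValue(from, out var targets))
            _transitions[from] = targets = new HashSet<string>();
        targets.Add(to);
        return this;
    }

    public void MoveTo(string next)
    {
        if (!_transitions.TryGetValue(Current, out var targets)
            || !targets.Contains(next))
            throw new InvalidOperationException(
                $"Cannot move from '{Current}' to '{next}'.");
        Current = next;
    }
}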
Interesting examples? Lots of them: order processing, banking operations, games. I used a state machine when creating a behaviour correction module that included psychological tests, games, and video. There, the transitions from state to state depended on whether tests were answered correctly, whether a game was played successfully, etc.
PS. I used the terms state machine and workflow as synonyms, but they are not the same; this was discussed here: http://jmettraux.wordpress.com/2009/07/03/state-machine-workflow-engine/ . You can also find some Ruby code and links there.
Besides missing some of the benefits of Event Sourcing, are there any other drawbacks to adapting an existing architecture to CQRS without the Event Sourcing piece?
I'm working on a large application, and the developers should be able to handle separating the existing architecture into Commands and Queries over the next few months, but asking them to also add Event Sourcing at this stage would be a HUGE problem from a resourcing perspective. Am I committing sacrilege by not including Event Sourcing?
Event Sourcing is optional and in most cases complicates things more than it helps if introduced too early. Especially when transitioning from a legacy architecture and even more when the team has no experience with CQRS.
Most of the advantages attributed to ES can be obtained by storing your events in a simple Event Log. You don't have to drop your state-based persistence (though in the long run you probably will, because at some point it will become the logical next step).
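A sketch of what that can mean in practice (the interface and event names are invented): the log is purely additive and sits next to the persistence you already have:

using System.Collections.Generic;

// Invented minimal event log, kept alongside state-based persistence.
public interface IEventLog
{
    void Append(object domainEvent);
    IEnumerable<object> ReadAll();
}

public class InMemoryEventLog : IEventLog
{
    private readonly List<object> _events = new();
    public void Append(object domainEvent) => _events.Add(domainEvent);
    public IEnumerable<object> ReadAll() => _events;
}

// In a command handler, state-based persistence stays unchanged;
// the handler additionally appends the event:
//     repository.Save(order);                     // as before
//     eventLog.Append(new OrderShipped(orderId)); // purely additive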
My recommendation: Simplicity is the key. Do one step at a time, especially when introducing such a dramatic paradigm shift. Start with simple CQRS, then introduce an Event Log when you (and your team) have become used to the new concepts. Then, if at all required, change your persistence to Event Sourcing and fire the DBA ;-)
I completely agree with Dennis, ES is no precondition for CQRS, in fact CQRS on its own is pretty easy to implement and has the potential to really simplify your design.
You can find a smooth introduction to it here
Secondly, what benefits does CQRS on its own bring to the table?
Simplifies your domain objects, by sucking out all the query concerns
Makes code scalable; your queries are separated and can be easily tuned
As you iterate over your product design you can add/remove/change individual commands/queries, instead of dealing with larger structures as a whole (e.g. entities, aggregates, modules)
Commands and queries produce a well-known vocabulary for talking with domain experts. Other architectural patterns (e.g. pipes and filters, actors) use terms and concepts that may be harder for non-programmers to grasp.
Limits the use of an ORM (if you use one). I feel ORMs bring in unwarranted complexity if you try to use them for querying; the abstractions are leaky and heavy, and trying to tune them is a nightmare :). Using an ORM only on the command side makes things much easier; plain old SQL is best for queries, and a simple library to convert result sets into DTOs is probably the most you need (see the sketch after this list).
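A minimal illustration of that split (all names invented): the command side goes through the domain model, while the query side maps plain SQL straight into a flat DTO:

using System.Collections.Generic;

// Command side: goes through the domain model (an ORM fits here if you use one).
public class Order { public void Ship() { /* domain behaviour lives here */ } }
public interface IOrderRepository { Order Get(int id); void Save(Order order); }

public record ShipOrder(int OrderId);

public class ShipOrderHandler
{
    private readonly IOrderRepository _orders;
    public ShipOrderHandler(IOrderRepository orders) => _orders = orders;

    public void Handle(ShipOrder command)
    {
        var order = _orders.Get(command.OrderId);
        order.Ship();
        _orders.Save(order);
    }
}

// Query side: no domain objects, just a DTO filled from plain SQL,
// e.g. "SELECT OrderId, Status, Total FROM Orders WHERE Status = 'Pending'".
public record OrderSummaryDto(int OrderId, string Status, decimal Total);

public interface IOrderSummaryQuery
{
    IReadOnlyList<OrderSummaryDto> PendingOrders();
}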
More on how CQRS benefits design can be found here
Also, do not forget about the intangible benefits of CQRS.
If you still have your doubts, you may want to read this
We currently use CQRS for projects of medium complexity and have found it to be very suitable. We started out using custom bootstrap code and have now moved on to using the Axon Framework to provide some of the infrastructure components.
Feel free to PM me in case you want to know anything more specific.
There is a fundamental problem with the Event Sourcing pattern: the events in the Event Store may no longer be compatible with the event handlers in your application after a code change.
That is, whenever you modify an event handler while adding new features, you need to think about backward compatibility. You have to make sure your code can always handle the same event as created by every past version of your code.
When your application becomes more complex, you will find it a real pain in the ass to keep it backward compatible.
I think Event Sourcing is what makes people afraid of CQRS. And that's for a reason: it's not natural - when you interact with something in the real world, you don't need the whole history of that object.
“Event sourcing is a completely orthogonal concept to CQRS” (source) - technically, if you don't use ES you lose nothing of CQRS's features.
I have no idea why Event Sourcing is considered the only foundation for solving certain "messaging"-related problems like duplication/missing of messages, reordering of messages, data collisions, etc. It's not true that if you don't use Event Sourcing you can't create encapsulated means of solving such problems in another way.
You can read here how I see alternative ways of implementing CQRS messaging using another data-organizing principle.
I propose a "signed documents" approach, where you treat your data not as a composition of modification events, but as a composition of immutable parts signed by responsible users. I'm sure there can be a lot of other solutions for implementing message flow and data storage, and you need to take your business model into account when selecting the one you'd like to use.
The best CQRS-pattern-based framework, in my opinion, is MediatR by Jimmy Bogard. If you don't need Event Sourcing at the beginning of your application development, MediatR is the right choice. Here is the repository: https://github.com/jbogard/MediatR
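The core of MediatR usage looks roughly like this (a minimal sketch; the request and handler names are invented):

using System.Threading;
using System.Threading.Tasks;
using MediatR;

// A command and its single handler, dispatched through the mediator.
public record CreateUser(string Name) : IRequest<int>;

public class CreateUserHandler : IRequestHandler<CreateUser, int>
{
    public Task<int> Handle(CreateUser request, CancellationToken cancellationToken)
    {
        // ... persist the user and return its id (stubbed here) ...
        return Task.FromResult(42);
    }
}

// Callers depend only on IMediator:
//     int id = await mediator.Send(new CreateUser("Alice"));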
In my opinion, you're making a big mistake by not using event sourcing with CQRS.
First up, you'll almost certainly have issues synchronising your Query model with the Command model. With an event store, if the query side ever gets out of sync, you simply replay your events to correct it. That's the theory, anyway!
But with Event Sourcing, you also get to store the complete history of all entity transactions. And this means you can decide to create new queries and views after implementation. These are very often views that would not be possible with non-Event Sourced CQRS. I've heard Greg Young give the example of querying items that have been added, and then removed, from a shopping cart. With Event Sourcing this is possible. Without ES it's not possible because you only store the final state of the cart.
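A sketch of that kind of query over an event stream (the event types are invented; the point is that it is a simple pass over history, which final-state storage cannot answer):

using System.Collections.Generic;
using System.Linq;

// Invented cart events; with ES the whole history is stored.
public record ItemAdded(string Sku);
public record ItemRemoved(string Sku);

public static class CartQueries
{
    // Which items appear as both added and removed in the history?
    // (Order-sensitivity is omitted for brevity.) Trivial from the event
    // stream, unanswerable from the final cart state alone.
    public static IEnumerable<string> AddedAndRemoved(IEnumerable<object> events)
    {
        var history = events.ToList();
        var removed = history.OfType<ItemRemoved>()
                             .Select(e => e.Sku)
                             .ToHashSet();
        return history.OfType<ItemAdded>()
                      .Select(e => e.Sku)
                      .Where(removed.Contains)
                      .Distinct();
    }
}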
I have this grand idea to basically employ some brute force attack to test/verify that my web application doesn't crash.
Don't get me started on unit testing and IoC stuff; this is something else entirely.
What I'm doing, and what I'm asking for help with, is creating an intelligent exhaustive search that explores parts of the program state space.
What I have is a web page with things I can do: clicking is one thing, text input is another, and some inputs like radio buttons and drop-down lists are constrained to certain values. Pretty basic things. What I end up with is a finite set of events and values, and what I want to model is a progression of state. Maybe this is FSM optimization in a way, but the goal is to systematically go through arbitrary permutations of events and values and see what happens.
When a problem is found, I want to try to provoke that error with as little effort as possible, so I can present a clear test case.
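To make the idea concrete, here's roughly what I have in mind (a sketch in C#; the system under test is abstracted to a predicate that says whether a given event sequence fails):

using System;
using System.Collections.Generic;

public static class Explorer
{
    // Breadth-first over all event sequences up to maxDepth; returns the
    // first failing sequence, minimized, or null if none is found.
    public static IList<string>? FindFailure(
        IReadOnlyList<string> events,
        Func<IList<string>, bool> fails,
        int maxDepth)
    {
        var queue = new Queue<List<string>>();
        queue.Enqueue(new List<string>());
        while (queue.Count > 0)
        {
            var seq = queue.Dequeue();
            if (seq.Count > 0 && fails(seq))
                return Shrink(seq, fails);
            if (seq.Count == maxDepth) continue;
            foreach (var e in events)
                queue.Enqueue(new List<string>(seq) { e });
        }
        return null;
    }

    // Greedy shrink: keep dropping single events while the sequence still
    // fails, so the reported test case is as small as possible.
    private static IList<string> Shrink(
        List<string> seq, Func<IList<string>, bool> fails)
    {
        for (int i = 0; i < seq.Count; )
        {
            var candidate = new List<string>(seq);
            candidate.RemoveAt(i);
            if (candidate.Count > 0 && fails(candidate))
                seq = candidate;  // still fails without event i: keep dropping
            else
                i++;              // event i was needed: move on
        }
        return seq;
    }
}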
This relates to formal verification methods and I'm asking for help or insight from people with experience.
What you want to do sounds a little like model-checking on the one hand, and automated test case generation on the other (in the latter category, check out concolic testing, a technique for avoiding wasted time on infeasible execution paths).
Model-checking would be the preferred method if you assume your web application is correct and want to prove that it is. But in the case of a warning, you may have to work to understand whether the problem is real or not. Test case generation is oriented towards bug-finding: it does not prove that your app is correct, but if it finds a problem, it gives you an input vector to reproduce it, so you don't need to wonder whether the problem is real.
I am not aware of any existing tools for web apps, but that doesn't mean that they don't exist.
It sounds like you want a fuzzer. Peach is one such tool.
Exhaustive search can be a non-trivial task given limited resources (memory, space), but many techniques can reduce the problem, such as abstracting your code (e.g. replacing database driver classes with stubs). One such experience is presented in this paper: Abstract Model Checking of Web Applications Using Java PathFinder (Vinh Cuong Tran, Yoshinori Tanabe, Masami Hagiya, University of Tokyo).
If you are looking for formal verification of FSM-like models, Java PathFinder has an extension for verifying UML state charts written in Java plus annotations (it depends on the Java PathFinder VM):
http://babelfish.arc.nasa.gov/trac/jpf/wiki/projects/jpf-statechart
I think the title speaks for itself guys - why should I write an interface and then implement a concrete class if there is only ever going to be 1 concrete implementation of that interface?
I think you shouldn't ;)
There's no need to shadow all your classes with corresponding interfaces.
Even if you're going to make more implementations later, you can always extract the interface when it becomes necessary.
This is a question of granularity. You shouldn't clutter your code with unnecessary interfaces, but they are useful at the boundaries between layers.
Someday you may try to test a class that depends on this interface. Then it's nice that you can mock it.
I'm constantly creating and removing interfaces. Some were not worth the effort and some are really needed. My intuition is mostly right but some refactorings are necessary.
The question is, if there is only going to ever be one concrete implementation, should there be an interface?
YAGNI - You Ain't Gonna Need It (from Wikipedia)
According to those who advocate the YAGNI approach, the temptation to write code that is not necessary at the moment, but might be in the future, has the following disadvantages:
* The time spent is taken from adding, testing or improving necessary functionality.
* The new features must be debugged, documented, and supported.
* Any new feature imposes constraints on what can be done in the future, so an unnecessary feature now may prevent implementing a necessary feature later.
* Until the feature is actually needed, it is difficult to fully define what it should do and to test it. If the new feature is not properly defined and tested, it may not work right, even if it eventually is needed.
* It leads to code bloat; the software becomes larger and more complicated.
* Unless there are specifications and some kind of revision control, the feature may not be known to programmers who could make use of it.
* Adding the new feature may suggest other new features. If these new features are implemented as well, this may result in a snowball effect towards creeping featurism.
Two somewhat conflicting answers to your question:
You do not need to extract an interface from every single concrete class you construct, and
Most Java programmers don't build as many interfaces as they should.
Most systems (even "throwaway code") evolve and change far beyond what their original design intended. Interfaces help them grow flexibly by reducing coupling. In general, here are the warning signs that you ought to be coding to an interface:
Do you even suspect that another concrete class might need the same interface (like, if you suspect your data access objects might need XML representation down the road -- something that I've experienced)?
Do you suspect that your code might need to live on the other side of a Web Services layer?
Does your code form a service layer for some outside client?
If you can honestly answer "no" to all these questions, then an interface might be overkill. Might. But again, unforeseen consequences are the name of the game in programming.
You need to decide what the programming interface is by specifying the public functions. If you don't do a good job of that, the class will be difficult to use.
Therefore, if you decide later you need to create a formal interface, you should have the design ready to go.
So, you do need to design an interface, but you don't need to write it as an interface and then implement it.
I use a test driven approach to creating my code. This will often lead me to create interfaces where I want to supply a mock or dummy implementation as part of my test fixture.
I would not normally create any code unless it has some relevance to my tests, and since you cannot easily test an interface, only an implementation, that leads me to create interfaces if I need them when supplying dependencies for a test case.
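For example (a small illustration, names invented): the interface exists only because the test needs to substitute a deterministic implementation:

using System;

// The abstraction exists so a test can control time (names invented).
public interface IClock { DateTime Now { get; } }

public class SystemClock : IClock            // the one "real" implementation
{
    public DateTime Now => DateTime.Now;
}

public class FixedClock : IClock             // test double from the fixture
{
    private readonly DateTime _now;
    public FixedClock(DateTime now) => _now = now;
    public DateTime Now => _now;
}

// The class under test depends on the abstraction, not on DateTime.Now:
public class GreetingService
{
    private readonly IClock _clock;
    public GreetingService(IClock clock) => _clock = clock;
    public string Greet() => _clock.Now.Hour < 12 ? "Good morning" : "Hello";
}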
I will also sometimes create interfaces when refactoring, to remove duplication or improve code readability.
You can always refactor your code to introduce an interface if you find out you need one later.
The only exception to this would be if I were designing an API for release to a third party - where the cost of making API changes is high. In this case I might try to predict the type of changes I might need to do in the future and work out ways of creating my API to minimise future incompatible changes.
One thing which no one has mentioned yet is that sometimes an interface is necessary in order to avoid dependency issues. You can have the interface in a common project with few dependencies and the implementation in a separate project with lots of dependencies.
"Only Ever going to have One implementation" == famous last words
It doesn't cost much to make an interface and then derive a concrete class from it. The process of doing it can make you rethink your design and often leads to a better end product. And once you've done it, if you ever find yourself eating those words - as frequently happens - you won't have to worry about it. You're already set. Whereas otherwise you have a pile of refactoring to do and it's gonna be a pain.
Edited to clarify: I'm working on the assumption that this class is going to be spread relatively far and wide. If it's a tiny utility class used by one or two other classes in a single package, then yeah, don't worry about it. If it's a class that's going to be used in multiple packages by multiple other classes, then my previous answer applies.
The question should be: "How can you ever be sure that there is only ever going to be one concrete implementation?"
How can you be totally sure?
By the time you've thought that through, you could already have created the interface and been on your way, without assumptions that might turn out to be wrong.
With today's coding tools (like Resharper), it really doesn't take much time at all to create and maintain interfaces alongside your classes, whereas discovering that now you need an extra implementation and to replace all concrete references can take a long time and is no fun at all - believe me.
A lot of this is taken from a Rainsberger talk on InfoQ: http://www.infoq.com/presentations/integration-tests-scam
There are 3 reasons to have a class:
It holds some Value
It helps Persist some entity
It performs some Service
The majority of services should have interfaces. An interface creates a boundary, hides the implementation, and you already have a second client: all of the tests that interact with that service.
Basically if you would ever want to Mock it out in a unit test it should have an interface.