For the IBM Watson chatbot, can I tell the chatbot that a word in a conversation is an entity, or do I have to make it an intent?
For example:
Question: "What are your interests?"
Answer: "Sports"
Can I add "sports" as an entity right from the menu?
Entities by themselves will work if you know what the person is going to respond with, and the responses don't deviate much.
Where you don't know the exact values, but you do know the structure of how they will be phrased, you can use contextual entities.
The last option is to shape your message to the end user to change their behavior.
For example "What are you interests" is very broad. Two examples:
"Do you like to play sports?" = Gives a yes/no answer, which you can drill down on.
"What kind of sports do you like?" = Allows you to make a narrow entity to catch the answer.
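As a rough sketch (the structure follows the Watson Assistant workspace JSON export, but the entity name and values here are only examples), a narrow @sports entity ends up looking something like this:

{
  "entities": [
    {
      "entity": "sports",
      "values": [
        { "value": "soccer", "synonyms": ["football", "footy"] },
        { "value": "basketball", "synonyms": ["hoops"] },
        { "value": "tennis", "synonyms": [] }
      ]
    }
  ]
}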
I would recommend making interests an entity, because it makes more sense.
This quote from the IBM developer blog talks about the use case for entities:
In some scenarios, it makes sense to define the entities of interest in your Conversation workspace; for example, when defining a “toppings” entity for a pizza ordering bot. However, in other scenarios, it is impractical to define all possible variations for an entity and it is best to rely on a service to extract the entity; for example, when extracting a “city” entity for a weather bot.
Also, How to build a Chatbot with Watson Assistant (free chatbot course) is a great beginner course.
I'm building a chatbot, but I don't know how to add a complex machine-learning entity. What I'm trying to do is request a full name and extract the first name, second name, middle name, etc. I know how to do it using the LUIS interface, but with Bot Framework Composer I feel a little confused.
Something like this, but with Bot Framework Composer:
Language Understanding (LU) is a core component of Composer, allowing developers and conversation designers to train language understanding directly in the context of editing a dialog.
As dialogs are edited in Composer developers can continuously add to their bots' natural language capabilities through a simple markdown-like format that makes it easy to define new intents and provide sample utterances.
Here is the documentation sample for machine-learned entities in the .lu file format:
https://learn.microsoft.com/en-us/azure/bot-service/file-format/bot-builder-lu-file-format?view=azure-bot-service-4.0#machine-learned-entity
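For reference, the simple form of a machine-learned entity in a .lu file looks roughly like this (the intent name, utterances and entity name are only illustrative); the same document also describes how to break a parent machine-learned entity down into child entities such as a first and last name:

# GetUserName
- my name is {@fullName=Jane Marie Doe}
- call me {@fullName=John Smith}
- i go by {@fullName=Maria Garcia Lopez}

@ ml fullName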
I have a requirement to develop a test framework using Cucumber.
Requirements:
There is a SOAP WS that was developed for an existing project a few years ago.
There is a new REST WS developed for the same project.
I have to validate the responses from the SOAP WS against the responses of the REST WS and check whether both are the same, using a feature file (Cucumber).
Basically, I have to check whether the field values in SOAP and REST are equal.
How do I write the scenario, and how do I map the fields in the SOAP WS to those in the REST WS?
I am very new to BDD and Cucumber; elaborate answers are very much appreciated.
How to create the scenario
In BDD, we create a scenario with contexts, events and outcomes.
The context is the state in which the scenario starts. So in your case, you have two services, and probably something created in those services that might change the responses. Those become your "Given"s.
Let's say your services are dealing with user details. (Adapt these accordingly for whatever information matters to your scenario.)
Given the REST service has a user 'Chandra Prakash'
And the SOAP service has a user 'Chandra Prakash'
Most of the time in BDD we only have one event happening in each scenario. There can be two events though if there's interaction or something like time passing, and I think this counts too. So we make a call to both services, and the events become your 'When'.
When we ask the REST service for all users with family name 'Prakash'
And we ask the SOAP service for all users with family name 'Prakash'
And the last bit is the outcome, where we look to see if we got the value that's important to us.
Then both services should have found a user 'Chandra Prakash'.
Those lines, together, become the scenario in your feature file.
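Put together, the feature file might look something like this (the feature name and exact wording are just illustrative):

Feature: REST and SOAP user services return the same data

  Scenario: Both services find the same user by family name
    Given the REST service has a user 'Chandra Prakash'
    And the SOAP service has a user 'Chandra Prakash'
    When we ask the REST service for all users with family name 'Prakash'
    And we ask the SOAP service for all users with family name 'Prakash'
    Then both services should have found a user 'Chandra Prakash'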
Rather than just having one scenario to verify that both are the same, I'd look for examples of what the systems do, and use one scenario for each example. But if you want that last call to just check they're both identical, that would be OK.
Wiring up the scenario to automation
Most examples of Cucumber that you find will use something like Selenium to automate over a web page, but you don't have that. Instead of a UI, you have an API. You're coming up with examples of how one system uses another system.
Your automation will be setting up data in the Givens, then calling the REST service or the SOAP service as appropriate in the Whens.
If you work through Cucumber's tutorial, and then use their techniques to wire the scenario up to the two services, you'll get what you're after.
Note that Cucumber has different flavours for different languages, so pick the right flavour for yourself.
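Since your stack sounds like Java, a sketch of the matching step definitions with Cucumber-JVM might look like the following; RestUsersClient and SoapUsersClient are hypothetical placeholders for however you actually call the two services:

import io.cucumber.java.en.Given;
import io.cucumber.java.en.Then;
import io.cucumber.java.en.When;
import java.util.List;
import static org.junit.jupiter.api.Assertions.assertTrue;

public class UserComparisonSteps {

    // Hypothetical thin wrappers around the two services.
    private final RestUsersClient restClient = new RestUsersClient();
    private final SoapUsersClient soapClient = new SoapUsersClient();
    private List<String> restUsers;
    private List<String> soapUsers;

    @Given("the REST service has a user {string}")
    public void theRestServiceHasAUser(String fullName) {
        restClient.createUser(fullName);
    }

    @Given("the SOAP service has a user {string}")
    public void theSoapServiceHasAUser(String fullName) {
        soapClient.createUser(fullName);
    }

    @When("we ask the REST service for all users with family name {string}")
    public void weAskTheRestServiceFor(String familyName) {
        restUsers = restClient.findByFamilyName(familyName);
    }

    @When("we ask the SOAP service for all users with family name {string}")
    public void weAskTheSoapServiceFor(String familyName) {
        soapUsers = soapClient.findByFamilyName(familyName);
    }

    @Then("both services should have found a user {string}")
    public void bothServicesShouldHaveFoundAUser(String fullName) {
        assertTrue(restUsers.contains(fullName), "REST service did not return " + fullName);
        assertTrue(soapUsers.contains(fullName), "SOAP service did not return " + fullName);
    }
}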
Scenarios vs. Acceptance Criteria
In general, I like to ask, "Can you give me an example?" So, can you give me an example of what the REST service and the SOAP service do? If the answer is something of interest to the business and non-technical people, then you can write that example down. Try to use specific details, as I've done here with a user and their name, rather than being generic like
Given the REST service has a user with a particular family name
When we ask the REST service for users with that family name
Then it should retrieve them
This, here, isn't a scenario, even if it's good acceptance criteria. We want our scenarios to be specific and memorable. Try to use realistic data that the business would recognize.
Cucumber vs. plain code
There's one small problem that I can see.
It sounds as if what you're doing might be a technical integration test, rather than an example of the systems' behaviour. If that's the case, and there's no non-technical stakeholder who's interested in the scenario, then Cucumber may be overkill. It's enough to create something in the unit-level framework for your language (so JUnit for Java, etc.). I normally create a little DSL, like this:
GivenThePetshop.IsRunning();
WhenTheAccessories.AreSelected("Rubber bone", "Dog Collar (Large)");
ThenTheBasket.ShouldContain("Rubber bone", 1.50);
ThenTheBasket.ShouldContain("Dog Collar (Large)", 10.00)
ThenTheBasket.ShouldHaveTotal(11.50);
But even that's not needed if you're not reusing the steps anywhere else; you can just put the "Given, When, Then" in comments and call straight to the code.
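For instance, a bare JUnit test along those lines, aimed at your SOAP-vs-REST comparison, might look roughly like this (RestUsersClient, SoapUsersClient and UserRecord are hypothetical stand-ins for your own code):

import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

class SoapRestParityTest {

    @Test
    void bothServicesReturnTheSameUserDetails() {
        // Given: both services know about the same user
        RestUsersClient rest = new RestUsersClient();
        SoapUsersClient soap = new SoapUsersClient();

        // When: we fetch the same user from each service
        UserRecord fromRest = rest.getUser("Chandra Prakash");
        UserRecord fromSoap = soap.getUser("Chandra Prakash");

        // Then: the field values should be equal
        assertEquals(fromSoap.getFirstName(), fromRest.getFirstName());
        assertEquals(fromSoap.getFamilyName(), fromRest.getFamilyName());
        assertEquals(fromSoap.getEmail(), fromRest.getEmail());
    }
}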
Here are some arguments I'd use against Cucumber in this instance:
It's harder to refactor plain English than code
It introduces another layer of abstraction, making the code harder to understand
It's another framework for developers to learn
And it takes time to set it up in CI systems etc. too.
If you can't push back, though, using Cucumber won't be the end of the world.
Have some conversations!
The best way to get examples out is to ask someone, "Can you give me an example?" So I'd find the person who knows the business processes best, and ask them.
Please do this even if it's just an integration test, as they'll be able to help you think of other examples in which the systems behave differently. Each of those becomes another scenario.
And there you go! If you do need help that StackOverflow can't provide, there are mailing lists for both BDD and Cucumber, and plenty of examples out there too.
OK, so I'm working on a health-related app.
So far, we have our custom database with REST API endpoints, a Java Spring app, and an Oracle database.
Now they are considering moving to the HL7 FHIR specifications. I know pretty much nothing about this framework.
One of our requirements is some sort of audit module recording all sorts of events such as "this patient file got modified by that doctor".
The thing is, the framework seems to include an AuditEvent resource:
https://www.hl7.org/fhir/auditevent.html
Ideally, when a PUT REST call occurs on a Patient resource, we would create and save a new AuditEvent resource.
The problem I face is: how do I know the author of the PUT, i.e. the staff member who triggered the patient record update?
There is nothing in their REST recommendations that specifies how we are supposed to cover that aspect: the "author" of a PUT.
https://www.hl7.org/fhir/http.html#vread
Is it specific to how we implement the specification, e.g. some sort of session/security-related user ID?
Many Thanks
PS: there would be other types of events apart from just recording REST calls.
The typical mechanism for identifying users in FHIR is OAuth. There's a bit of discussion on this in the specification here: http://www.hl7.org/fhir/security.html
It makes reference to the SMART on FHIR work, which gives some additional guidance.
As well, you may want to look at the HEART work: http://openid.net/wg/heart
The high-level gist is that the authentication happens at the HTTP layer via redirects which then results in a token that gets included in the HTTP header for the PUT and other RESTful operations.
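To make that concrete, here is a rough sketch using the HAPI FHIR R4 model, assuming a Spring-based server where Spring Security exposes the user behind the OAuth token; the resource IDs and the surrounding wiring are illustrative, not the only way to do it:

import java.util.Date;
import org.hl7.fhir.r4.model.AuditEvent;
import org.hl7.fhir.r4.model.Coding;
import org.hl7.fhir.r4.model.Reference;
import org.springframework.security.core.context.SecurityContextHolder;

public class AuditEventFactory {

    // Build an AuditEvent for an update (PUT) on a Patient resource.
    public AuditEvent auditPatientUpdate(String patientId) {
        // The "author" of the PUT comes from the OAuth token, surfaced here
        // through Spring Security's authentication context.
        String userId = SecurityContextHolder.getContext().getAuthentication().getName();

        AuditEvent audit = new AuditEvent();
        audit.setType(new Coding()
                .setSystem("http://terminology.hl7.org/CodeSystem/audit-event-type")
                .setCode("rest"));
        audit.setAction(AuditEvent.AuditEventAction.U); // U = update
        audit.setRecorded(new Date());

        // Who did it: the staff member behind the token.
        audit.addAgent()
                .setWho(new Reference("Practitioner/" + userId))
                .setRequestor(true);

        // What was touched: the patient record that was PUT.
        audit.addEntity()
                .setWhat(new Reference("Patient/" + patientId));

        return audit;
    }
}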
I'm completely new to REST. I helped to implement something that was called REST at work but it breaks so many of the rules that it's hard to qualify it as REST. I want to follow the HATEOAS guideline and the remaining question I have is regarding documentation of media types and their specification. Namely when one media type is really an extension of another.
For example, I've decided on 'application/hal+json' for the base media type. Everything that a user would receive is going to be a HAL blob with some added fields. I don't want to call my media types just 'application/hal+json'; it seems to me more information should be available than that, but I want it to be clear that this is what they are, in addition to the extra fields that are my data. Furthermore, my system is going to end up having some of these fields inherited and such in both the request formats (which won't be HAL blobs) and the response formats. A generic "User" type might have just a user id and a name, for example, while an extension like "Student" or "Teacher" will have different additional fields.
Does it make sense to represent this extension somewhere in the media type itself? Do people generally document the relationship in their HATEOAS documentation links? If so, what's the general trend here? I want my API to be easy to use and thus figure it should follow norms that are available.
A couple of things I'd like to point out about your move to a true RESTful architecture.
First, RESTful APIs must perform content negotiation. To say your base type is hal+json seems strange. It sounds like you want to have types like parent+hal+json, or maybe hal+json;type=parent. This would mean your client would have to understand these types specifically, and that would be not very RESTful because it's just a local implementation. This is fine in the real world; you can do this, and almost everyone does stuff like this.
To be a truly RESTful API you'd have to offer similar support for other content types, and that could get messy.
Now, specifically to HAL, there are two things available to you so that your client can "discover" what data types it is getting back. One is CURIEs and the other is Profiles (https://datatracker.ietf.org/doc/html/draft-kelly-json-hal-06#section-5.6). I think Profiles are more what you're after here, as they allow you to document conventions and constraints for the resource that will be retrieved.
But don't count out CURIEs either. There are lots of defined semantics out there already. Your model might fit one of the http://schema.org vocabularies, and then you can just use their link relations and the client should know what's going on.
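For example, a HAL response that advertises both a CURIE and a profile might look roughly like this (the URIs and fields are purely illustrative):

{
  "_links": {
    "self": { "href": "/students/42" },
    "profile": { "href": "https://api.example.com/profiles/student" },
    "curies": [
      { "name": "ex", "href": "https://api.example.com/rels/{rel}", "templated": true }
    ],
    "ex:enrollments": { "href": "/students/42/enrollments" }
  },
  "id": 42,
  "name": "Jane Doe"
}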
If you really want a lot of control over the semantics of your resources, you may want to look at http://json-ld.org/; its @context concept would be a good reference.
In my opinion this is an area where the examples are very thin, especially for HAL. I've yet to see a client smart enough to parse and care about semantics at run time. What I think is important is that, when someone is building the client, the info is available so they can figure out that a Student is a Person. One day the thing building the clients will be client-generator code, and it'll use that info to build a nice client-side object model for you.
TL;DR: if you stick with HAL, use CURIEs and Profiles to get what you want.
This question is an extremely open-ended discussion, and it really depends on how different engineers interpret REST standards and best practices. Nonetheless, as a fellow software engineer with enough experience in REST services development (who has faced the same questions professionally as you have), I'll add my input here.
REST service development rules are heavily dependent on the URL definitions. It's very important that you expose your APIs in such a way that your clients can understand exactly what's happening per API just by looking at the URL definition.
That being said, different clients (and, for that matter, different engineers) view the best practices differently. For example, if you are trying to search for a User by email, there are at least a few approaches:
1) GET /users/emails/{email}  // A client can interpret this as "getting a user by email"
2) GET /users?email={email}   // A client can interpret this as "searching for a user by email", because of the query param
3) GET /users/email={email}   // This can be interpreted just like #1
It depends on the developer how they want to expose this API and how they document it for the clients. All the approaches are correct from different perspectives.
Now, getting specific to your questions, here's what my approach would look like in terms of "User", "Student" and "Teacher".
I look at each of these three as separate resources. Why? Because they are separate types, even though two of them extend the third. Now, what would my APIs look like for these?
For Student:
1) Retrieving a list of students: GET /students
2) Retrieving a student by id: GET /students/{id}
3) Creating a student: POST /students
4) Updating a student: PUT /students/{id}
5) Deleting a student: DELETE /students/{id}
6) Searching for students: GET /students?{whateverQueryParamsYouWantForSearch}
The same will be applied for Teacher as well.
Now here's the one for User:
1) GET /users : retrieve a list of all the users (Students and Teachers)
2) GET /users?type={type} : here's the kicker. You can specify the type to be student OR teacher, and you will return data of that specific type (properly documented, of course)
3) POST /users?type={type} : create a specific TYPE of user (student or teacher)
... and so on.
The main difference is that the APIs with root URL /users can be used for BOTH types of users (provided the type is always specified and documented for the clients), while the Student and Teacher APIs are specific to those types.
My money has always been on the specific types, with the generic type for searching (meaning, to search across both types of users, use /users?params). It's the easiest way for the clients to understand what's going on, and even documenting them is much easier.
Finally, talking about HATEOAS: yes, it's part of the standards, and the best practice is to ALWAYS provide a URL/link to the resource you are returning, including when your return object is complex and contains other resources which might themselves be exposed through APIs. For example,
/users?type=student&email=abc@abc.com
will return all users of that type with that email, and it's better to follow HATEOAS here and provide a URL for each returned user, such that the URL looks like /students/{id}. This is how we have normally handled HATEOAS.
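If you happen to be on a Spring stack, Spring HATEOAS makes adding those links fairly painless. A minimal sketch, with the controller, the lookup and the Student type being purely illustrative:

import org.springframework.hateoas.EntityModel;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RestController;
import static org.springframework.hateoas.server.mvc.WebMvcLinkBuilder.linkTo;
import static org.springframework.hateoas.server.mvc.WebMvcLinkBuilder.methodOn;

@RestController
public class StudentController {

    public record Student(long id, String name) {}

    // Placeholder lookup; replace with your repository/service call.
    private Student findStudent(long id) {
        return new Student(id, "Jane Doe");
    }

    @GetMapping("/students/{id}")
    public EntityModel<Student> getStudent(@PathVariable long id) {
        Student student = findStudent(id);
        // Wrap the resource and attach a self link pointing at /students/{id}
        return EntityModel.of(student,
                linkTo(methodOn(StudentController.class).getStudent(id)).withSelfRel());
    }
}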
That is all I have to add. As I said earlier, it's a very open-ended discussion. Each engineer interprets the standards differently, and there is no ONE WAY to handle all use cases. There are some base rules, though, and the clients and other developers will applaud you if you follow them :)
In my project, I have a workflow which operates on multiple entities to accomplish a business transaction. What is the best place to represent the workflow logic? Currently I just create an "XXXManager" which is responsible for collaborating with entity objects to conclude the business transaction. Are there other options?
I'd say you're doing the right thing having something collaborate with multiple entities to get something done. The important thing is that each entity (and indeed each service) should have a single responsibility.
The overarching workflow you're talking about is something that you can consider as a part of your Application Layer.
According to Paul Gielens (paraphrased), the Application Layer's responsibility is to digest coarse-grained requests (messages/commands) in order to achieve a certain overarching goal. It does this by sending a message to Domain Services for fulfilment. It then also (optionally) decides to send notifications to the Infrastructure Service.
But then what is a 'Service'?! It's an overloaded term, but one that's described well (again, by Paul Gielens).
You might also want to read about Onion Architecture for more ideas...
Usually there is a domain object that should actually handle the control, but which gets mistaken for a mere "entity".
Is there an instance of something that gets created as a result of this workflow? If so, the workflow logic probably belongs in there. Consider "Order" in the model below.
(Diagram of the Order model; image originally hosted at http://img685.imageshack.us/img685/4383/order.png)
"Order" is both a noun and a verb. The "Order" is the object that is created as a result of "ordering". Like any good class, it has both data and behavior (and I don't mean getters and setters). The behavior is the dynamic process that goes with the data, i.e., the ordering workflow. "Order" is the controller.
This is why OO was invented.
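A minimal sketch of that idea in Java (the class, states and rules here are purely illustrative):

import java.math.BigDecimal;
import java.util.ArrayList;
import java.util.List;

// The Order owns its own workflow: it knows what state it is in and
// which transitions are legal, instead of a manager pushing its data around.
public class Order {

    public enum Status { DRAFT, PLACED, SHIPPED }

    public record OrderLine(String product, int quantity, BigDecimal unitPrice) {}

    private final List<OrderLine> lines = new ArrayList<>();
    private Status status = Status.DRAFT;

    public void addLine(String product, int quantity, BigDecimal unitPrice) {
        if (status != Status.DRAFT) {
            throw new IllegalStateException("Lines can only be added while the order is a draft");
        }
        lines.add(new OrderLine(product, quantity, unitPrice));
    }

    public void place() {
        if (lines.isEmpty()) {
            throw new IllegalStateException("An empty order cannot be placed");
        }
        status = Status.PLACED;
    }

    public void ship() {
        if (status != Status.PLACED) {
            throw new IllegalStateException("Only placed orders can be shipped");
        }
        status = Status.SHIPPED;
    }

    public Status status() {
        return status;
    }
}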
DDD might not be exactly about this sort of thing, so I would suggest taking a look at the Service Layer architectural pattern. Martin Fowler's book Patterns of Enterprise Application Architecture explains it well. You can find the description of the pattern on Fowler's web site as well.
Creating workflow systems can be a daunting prospect. Have you considered using workflow engines?
If I understand you correctly, you will need to create a manager which keeps track of the different transactions in the workflow, related to the user. There are probably other ways of doing it, but I've always used engines.
To the great answers, I'd like to add "domain events" (the link is just to one possible implementation), which is something Evans himself has come to put more focus on ("increased emphasis on events").
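A minimal sketch of the pattern in Java, just to show its shape (the event, dispatcher and handler here are illustrative; real implementations vary):

import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// A domain event records something that happened in the domain...
record OrderPlaced(String orderId) {}

// ...and interested parties subscribe to it, which keeps the follow-up
// workflow steps decoupled from the entity that raised the event.
class DomainEvents {

    private static final List<Consumer<OrderPlaced>> handlers = new ArrayList<>();

    static void register(Consumer<OrderPlaced> handler) {
        handlers.add(handler);
    }

    static void raise(OrderPlaced event) {
        handlers.forEach(handler -> handler.accept(event));
    }
}

class Example {
    public static void main(String[] args) {
        DomainEvents.register(event -> System.out.println("Notify warehouse about " + event.orderId()));
        DomainEvents.raise(new OrderPlaced("order-42"));
    }
}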