Hi, I'm new to Watson Conversation and Unity3D.
What I want to make is a simple screen that can talk with a chatbot.
To do this I added the Watson Conversation Unity SDK to my project
and did what this page said. However, I can't turn the red dot green like this. I have barely any experience in coding :(
How do I get started? I want to make a simple scene that can talk with a chatbot.
You will have to create a workspace within the Conversation service on Bluemix. Click Launch Tool in the Manage page in your Conversation instance on Bluemix. Here you will be able to create a workspace. Once you have created a workspace, from the Workspaces tab you can click the three dots on the upper right of your workspace and View Details. From there you can copy your Workspace ID.
Back in Unity, in the ConfigEditor under the Watson menu, create a variable called ConversationV1_ID and paste your Workspace ID as its value.
In your calls to the Conversation service you will need to add a reference to this Workspace ID. You should be able to access the variable from the config file using
m_WorkspaceID = Config.Instance.GetVariableValue("ConversationV1_ID");
First of all, thanks for taking the time to read through our issue.
So, we have a Dialogflow project connected to an already existing Google project. When we try to test our Action on Google from the Integrations tab, it displays the error 'Precondition check failed' without any further information, even though it still updates and uses our Dialogflow intents as it should.
The problem comes when we update anything in the Actions console or try to make an alpha deploy of our Action. The moment we change anything, it reverts to the default configuration, with the message 'Start building your action by defining the main invocation.' on the main invocation.
We have no clue how to handle this problem or if we have to configure something special on either of the systems to make it work. Any ideas are welcome.
If you want to integrate the Google Assistant with Dialogflow, I strongly recommend you check out the new Google Assistant development platform with a built-in conversation builder, here. Furthermore, there is a quick start guide and a conversational actions guide.
As I mentioned in the first comment, you need appropriate permissions to create an interaction within Dialogflow; you can check the pre-defined roles here. In addition, since you are starting out with Dialogflow, I would advise you to begin with the available quick starts and the setup tutorial, which explain how to get started with Dialogflow.
In Watson Dialog, <folder label="Global"> could be used to handle objections.
If, in the middle of a dialog, the user typed an objection, the Global folder could answer it and then keep the dialog at the same point.
I'm trying to do the same with Watson Conversation, but I'm lost. Apparently it is not possible, or at least not easy. The everything_else node doesn't solve the problem; it breaks the conversation.
Is Watson Conversation an evolution of Watson Dialog or not? Does it have fewer features?
Conversation and Dialog are two different systems. Dialog would maintain state, while in Conversation you are expected to maintain it.
There is no global feature at this time, but you can simulate it in two different ways.
1. Two workspaces.
This option is probably the easiest. You have your second workspace with all your global terms. In your process flow of the first workspace at the end of a check area you have a keyword. This keyword triggers your application layer to search the second workspace for the global answer.
This way you can maintain your position in the first workspace easily.
This example uses the return text "SearchGlobal" to trigger it. Once it completes, it will return to asking for a yes/no.
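The application-layer routing for the two-workspace pattern can be sketched as follows. This is a minimal sketch: `send_message` is a stub standing in for your actual Conversation SDK call, and the workspace IDs, the sample replies, and the "SearchGlobal" trigger text are assumptions for illustration.

```python
# Sketch of the two-workspace pattern: the application layer watches the main
# workspace's reply for the "SearchGlobal" trigger text and, when it sees it,
# answers from a second "global" workspace while keeping the main workspace's
# context untouched so the flow can resume.

MAIN_WS = "main-workspace-id"      # hypothetical workspace IDs
GLOBAL_WS = "global-workspace-id"

def send_message(workspace_id, text, context=None):
    # Stubbed service call; a real one would hit the Conversation API.
    if workspace_id == MAIN_WS:
        if text == "how much does it cost?":        # off-topic for the main flow
            return "SearchGlobal", context
        return "Do you want to continue? (yes/no)", context
    return "Pricing starts at $10/month.", None     # global workspace answers

def handle_user_input(text, main_context):
    reply, new_context = send_message(MAIN_WS, text, main_context)
    if reply == "SearchGlobal":
        # Answer from the global workspace, but return the main workspace's
        # context unchanged so the dialog resumes at the same point.
        global_reply, _ = send_message(GLOBAL_WS, text)
        return global_reply, new_context
    return reply, new_context

reply, ctx = handle_user_input("how much does it cost?", {"node": "confirm_order"})
```

The key design point is that the main workspace's context object is never overwritten by the global lookup, so the next user turn continues from the saved position.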
2. One workspace, with a global folder.
In this case, when you see the "SearchGlobal" text, you store the context object from the response. Then you send the user's input again, only with the context object set to jump to the related branch.
You can do this either by loading a context variable, or by storing a pre-existing context object to jump to a branch. The latter is a little trickier.
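The one-workspace variant can be sketched the same way. Again `send_message` is a stub for your Conversation SDK call, and the `jump_to` context variable, sample replies, and trigger text are naming assumptions, not part of the service:

```python
# Sketch of the one-workspace pattern: when the main flow returns the
# "SearchGlobal" trigger, save the context, resend the same input with the
# context pointed at the global branch, then restore the saved context so the
# dialog resumes where it left off.

def send_message(text, context):
    # Stubbed service call; a real one would return the service's response
    # and an updated context object.
    if context.get("jump_to") == "global":
        return "Pricing starts at $10/month."
    if text == "how much does it cost?":
        return "SearchGlobal"
    return "Do you want to continue? (yes/no)"

def handle_user_input(text, context):
    reply = send_message(text, context)
    if reply == "SearchGlobal":
        saved = dict(context)                              # remember our position
        reply = send_message(text, dict(context, jump_to="global"))
        context = saved                                    # resume at the same node
    return reply, context

reply, ctx = handle_user_input("how much does it cost?", {"node": "confirm_order"})
```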
I asked this question on the Project Tango Google+ page and it was suggested that I post it here.
Something that I'm very confused about is area learning. Apparently, how it works is that you scan a room, save the ADF file, then later you can visit the same room, load the ADF file, and it will know your position in the room, correct?
Does anyone have any experience doing this in Unity? There's a "Save ADF" button in the example, but no way to load it afterward? How do you use ADFs you've previously saved? It's all very confusing to me right now. Can anyone help explain things a bit better?
DEPRECATED
Your understanding and explanation about the working of Area Learning and Area Description Files is right.
There is an example called "AreaLearningUnity" in Project Tango Unity Examples repo showing the usage of this functionality.
In this example you can save the ADF by clicking the SaveADF button, and when you restart the app, it automatically loads the last saved ADF. This functionality is implemented by the following code in the example:
if (m_useADF)
{
    // Query the full ADF list.
    PoseProvider.RefreshADFList();
    // Load the most recently recorded ADF.
    string uuid = PoseProvider.GetLatestADFUUID().GetStringDataUUID();
    m_tangoApplication.InitProviders(uuid);
}
To choose a specific UUID instead of the latest one, you can use the GetCachedADFList() call, which returns a list of the ADFs saved on your device; from it you can pick the ADF you want to load.
I encourage you to take a look at PoseProvider Class in Project Tango Unity SDK.
EDIT: The SDK has changed so much that this answer should be considered deprecated.
I would like to add functionality to the AtTask system by "adding a layer".
What I want to know is whether this can be achieved with a plug-in for Internet Explorer.
To give a concrete example:
This extra layer would allow users to click "Online Edit" on a document (which is not available right now). The linked application would open, and when the user clicks save, the file would be uploaded back to AtTask.
All this happens in the background via the AtTask API, and is transparent to the user.
The question is: is it possible to add functionality to a site by somehow adding layers?
Last comment: this plug-in (or whatever needs to be installed into the browser) will only be visible/active when accessing the AtTask website.
Thanks in advance for your responses.
Within the confines of AtTask, your best bet is to use an "External Page" and create a service that handles the data in the manner you need.
The Dashboard that contains your External Page can be added as a tab via Layout Templates.
Most of the heavy lifting would have to be handled by your application. You would have to link the document(s) you wish to edit.
Some sort of referrer would be necessary to place the revised document back into AtTask. The method in which the client can do this would be determined by your preference and requirements. I am sure you can find some sort of Wiz-Bang jQuery uploader.
Depending on the level of control you have over your user base, you could register a custom application URI scheme:
Windows : Registering an Application to a URI Scheme
OS X : Launching Scripts from Webpage Links
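On Windows, the registration that article describes boils down to a small registry file. As a sketch (the myapp scheme name and the executable path are placeholders, not AtTask-specific values):

```
Windows Registry Editor Version 5.00

[HKEY_CLASSES_ROOT\myapp]
@="URL:MyApp Protocol"
"URL Protocol"=""

[HKEY_CLASSES_ROOT\myapp\shell\open\command]
@="\"C:\\Program Files\\MyApp\\myapp.exe\" \"%1\""
```

Once imported, a link such as myapp://edit?doc=123 on your External Page will launch the registered executable with the full URL passed as its argument.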
I do not know of any other way to handle this other than what Steve suggested.
Having said that, a possible solution is to create a new application and embed it in AtTask via an iframe.
At the top of the page (or wherever), your application could have a link for "Online Edit". You would then use JavaScript to extract the task ID, project ID, or any other information the API needs to fetch the content to edit, and save it back using the same API.
I have not tried this type of method but theoretically it could work.
I don't know where I need to register my app after creating my first project. I already followed their instructions on app registration.
https://cloud.google.com/console
To register a new application, do the following:
Go to the Google Developers Console.
Select a project, or create a new one.
In the sidebar on the left, select APIs & auth. In the displayed list of APIs, make sure all the APIs you are using show a status of ON.
In the sidebar on the left, select Registered apps.
At the top of the page, select Register App.
Fill out the form and select Register.
Thanks a lot in advance.
The instructions you pasted refer to the older API Console.
You can either figure out their Cloud Console equivalents (Create Project, etc), or go to the previous version at https://code.google.com/apis/console/b/0/?noredirect
It's worth pointing out that Google's terminology is somewhat muddled. Sometimes "app" refers to a "project", other times it refers to a "client" within that project.
For example, you might have a project called "My Multi-Device Task List". That may have a web client, an Android client and an iOS client. Sometimes the word "app" refers to the project, other times it refers to one of its clients.