I am building an application in which I am trying to use the IBM Watson Question & Answer API.
Currently I see only the Healthcare and Travel corpora, but I would like to ingest a custom dataset suited to my needs. Can anyone point me in the right direction: either to the exact API that does this, or to an existing IBM-built explorer I can use to upload the data files directly?
Thanks for the help
At this time you cannot ingest corpus data into IBM Watson. That capability is coming in the future.
I am a newbie to IBM Watson. I went through videos on creating a virtual assistant/chatbot where you define intents/entities and answer accordingly. This seems fine when I have a limited number of intents/entities. But say I have an eBook and I want to train Watson to answer questions from this eBook. How do I achieve this? Any high-level approach or direction would be really helpful.
There are different approaches.
You could use the integrated search skill, which links Watson Assistant to Watson Discovery. You would upload your eBook to Watson Discovery, which converts and indexes it so that it can be searched.
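For the Discovery route, uploading a document programmatically might look roughly like this with the ibm-watson Python SDK (a minimal sketch; the API key, service URL, environment ID, collection ID, and file name are placeholders you would replace with your own):

```python
import json
from ibm_watson import DiscoveryV1
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator

# Placeholder credentials -- replace with the values from your own service instance
authenticator = IAMAuthenticator('YOUR_APIKEY')
discovery = DiscoveryV1(version='2019-04-30', authenticator=authenticator)
discovery.set_service_url('YOUR_SERVICE_URL')

# Upload the eBook into a Discovery collection; Discovery converts and indexes it
with open('my_ebook.pdf', 'rb') as ebook:
    result = discovery.add_document(
        environment_id='YOUR_ENVIRONMENT_ID',
        collection_id='YOUR_COLLECTION_ID',
        file=ebook,
        filename='my_ebook.pdf',
        file_content_type='application/pdf',
    ).get_result()

print(json.dumps(result, indent=2))
```

Once the document is indexed, the search skill in Watson Assistant can query the collection, or you can query it directly with discovery.query(...).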
Another approach is to use a database or some other backend. Based on the input, which identifies the search term and scopes which eBook to search, the answer would be retrieved from the backend database. This tutorial features a Db2 database from which Watson Assistant retrieves the answer. A similar approach is taken in this sample, which shows how to retrieve excerpts from Wikipedia.
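As an illustration of the database approach (not the exact code from that tutorial), retrieving an answer from Db2 with the ibm_db Python driver could look like this; the connection string, table, and column names are made up for the example:

```python
import ibm_db

# Placeholder connection string -- substitute your Db2 host, credentials, and database
conn = ibm_db.connect(
    'DATABASE=BLUDB;HOSTNAME=your-host;PORT=50000;PROTOCOL=TCPIP;'
    'UID=your-user;PWD=your-password;',
    '', ''
)

# Hypothetical schema: look up an excerpt by search term and eBook title
sql = 'SELECT excerpt FROM ebook_answers WHERE term = ? AND book = ?'
stmt = ibm_db.prepare(conn, sql)
ibm_db.execute(stmt, ('machine learning', 'my_ebook'))

row = ibm_db.fetch_assoc(stmt)
if row:
    print(row['EXCERPT'])  # Db2 returns column names in upper case by default

ibm_db.close(conn)
```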
I'm creating a pipeline in Google Cloud Data Fusion to export my Bing Ads data into BigQuery using my Bing Ads developer token. I couldn't find any suitable data source to add to my pipeline in Data Fusion. Is fetching data from API calls even supported in Google Cloud Data Fusion, and if it is, how can it be done?
HTTP-based sources for Cloud Data Fusion are currently in development and will be released by Q3. Could you elaborate on your use case a little more, so we can make sure that those plugins will cover your requirements? For example, are you looking to build a batch or a real-time pipeline?
In the meantime, you have the following two more immediate options/workarounds:
If you are OK with storing the data in a staging area in GCS before loading it into BigQuery, you can use the HTTPToHDFS plugin that is available in the Hub. Use a path that starts with gs:///path/to/file. (A sketch of the subsequent GCS-to-BigQuery load appears after this list.)
Alternatively, we welcome contributions, so you can also build the plugin yourself using the Cloud Data Fusion APIs. We are happy to guide you, and can point you to documentation and samples.
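For the first workaround: inside Data Fusion you would normally wire a GCS source to a BigQuery sink, but the equivalent load done directly with the google-cloud-bigquery Python client looks roughly like this (a sketch; the bucket path and table ID are hypothetical):

```python
from google.cloud import bigquery

client = bigquery.Client()

# Hypothetical staging path and destination table -- replace with your own
uri = 'gs://my-staging-bucket/bing_ads/export.csv'
table_id = 'my-project.my_dataset.bing_ads'

job_config = bigquery.LoadJobConfig(
    source_format=bigquery.SourceFormat.CSV,
    skip_leading_rows=1,   # skip the header row
    autodetect=True,       # infer the schema from the file
)

load_job = client.load_table_from_uri(uri, table_id, job_config=job_config)
load_job.result()  # block until the load job completes

table = client.get_table(table_id)
print(f'Loaded {table.num_rows} rows into {table_id}')
```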
I’m trying to analyze customer feedback data (unstructured data) using IBM Data Science Experience (DSX) and Bluemix services. The objective is to do sentiment analysis.
Is it possible to call the Bluemix service instances from DSX for this exercise? If yes, I'm looking for a sample Watson Machine Learning flow.
Any alternative ideas?
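For reference, calling a Watson sentiment service from a Python notebook in DSX might look roughly like this. The sketch uses Natural Language Understanding, the successor to the early Bluemix text-analysis services; the API key, service URL, and sample text are placeholders:

```python
from ibm_watson import NaturalLanguageUnderstandingV1
from ibm_watson.natural_language_understanding_v1 import Features, SentimentOptions
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator

# Placeholder credentials -- replace with those of your own NLU instance
authenticator = IAMAuthenticator('YOUR_APIKEY')
nlu = NaturalLanguageUnderstandingV1(version='2021-08-01', authenticator=authenticator)
nlu.set_service_url('YOUR_SERVICE_URL')

# Sample feedback text -- in practice, loop over your customer feedback records
response = nlu.analyze(
    text='The support team resolved my issue quickly. Great experience!',
    features=Features(sentiment=SentimentOptions()),
).get_result()

print(response['sentiment']['document'])  # e.g. {'score': 0.97, 'label': 'positive'}
```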
The list of transformations supported in IBM's Bluemix ETL service, Data Connect, is here: https://console.ng.bluemix.net/docs/services/dataworks1/using_operations.html#concept_h4k_5tf_xw
I have looked and looked, but with no luck: what if I want to transform some of my data with an operation that is not included here? For example, running custom code on a column to produce some specific output?
Data Connect does not currently support refine operations outside of those provided with the service. We are adding new features and functionality weekly, but if you have a specific operation in mind, please let us know.
I will find out for you whether the ability to execute custom code is on our roadmap.
Regards,
Wesley - IBM Bluemix Data Connect Engineering
As Wes mentions above, in the short term we will continue to add new data preparation and transformation capabilities to the service. Currently there is no extensibility mechanism that allows you to code new transformations.
In the longer term we are considering allowing users to edit/extend pipelines using languages like Scala and Python. We don't have a defined date for these new capabilities.
Regards,
Hernando Borda
IBM Bluemix Data Connect Product Manager
How can I extract data using Talend from websites such as the ones below to do some data analysis:
Airbnb
change.org
monster.com
eBay
I am new to Talend Open Studio (TOS) and not familiar with its internet components. I think I may be confused about which connectors to use (tREST, tSOAP, ...). If anyone could help me understand which kind of connectors are needed, that would be great.
You can use the following architecture:
tREST --> tExtractJSONFields or tExtractXMLFields, depending on your requirements.
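To make the flow concrete outside of Talend, the same two steps (call a REST endpoint, then extract fields from the JSON response) look like this in plain Python; the endpoint and field names are hypothetical, and for the sites listed you would use their official developer APIs rather than scraping:

```python
import requests

# Hypothetical REST endpoint -- sites like eBay expose their own developer APIs
url = 'https://api.example.com/listings'
response = requests.get(url, params={'q': 'laptop'}, timeout=30)
response.raise_for_status()

# The tExtractJSONFields step: pull specific fields out of the JSON payload
for item in response.json().get('items', []):
    print(item.get('title'), item.get('price'))
```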