How to extract two slot values having time in RASA - chatbot

I have a FormAction for Applying Leave which requires 3 fields
Leave_Type
Start_Date
End_Date
Now how do I extract the date and set the appropriate slot? The user input can simply be a date value like "12/09/2017", "12 July 2007" or "Sept 21 2016". The form will prompt for each of the slots.
Duckling provides a way to extract a range, but for that the user query should be like "I want to apply leave from 12/2/2018 to 13/2/2018". My bot prompts the user with a FormAction for each slot, so when the bot asks for Start_Date, the input date should be mapped to the Start_Date slot.

From what I understand, you want to extract both the start and end date slots from the same sentence if the user says something like "I want a leave from 21st September to 23rd September", and not have the bot ask for the end date again.
Since you are parsing dates and date-ranges, I recommend you include Duckling as a component in your NLU pipeline. It returns a plain string if it's a single date and a dict with from and to fields if it's a date range. So, in your action code, you could check the type of the returned entity and either fill both of the slots or just one of them.
Also, like Mukul mentioned, you will have to use slot mappings to map from the "time" entity returned by Duckling to your slots.
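If Duckling is not already in your pipeline, the NLU config would need an entry roughly like this (a sketch assuming a Duckling server running locally on port 8000; the URL and the rest of the pipeline are assumptions to adjust for your setup):

```yaml
language: en
pipeline:
  - name: WhitespaceTokenizer
  - name: RegexFeaturizer
  - name: DucklingHTTPExtractor
    url: "http://localhost:8000"
    dimensions: ["time"]
```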
Your final solution will probably look something like this (I haven't included the leave type slot).
from typing import Any, Dict, List, Text, Union

from rasa_sdk import Tracker
from rasa_sdk.executor import CollectingDispatcher
from rasa_sdk.forms import FormAction


class LeaveForm(FormAction):
    def name(self) -> Text:
        return "leave_form"

    @staticmethod
    def required_slots(tracker: Tracker) -> List[Text]:
        return ["start_date", "end_date"]

    def validate_start_date(
        self,
        value: Text,
        dispatcher: CollectingDispatcher,
        tracker: Tracker,
        domain: Dict[Text, Any],
    ) -> Dict[Text, Any]:
        # Check if value is a Duckling date-range.
        if isinstance(value, dict):
            # Since both fields are populated, the form will no
            # longer prompt the user separately for end_date.
            return {
                "start_date": value["from"],
                "end_date": value["to"],
            }
        else:
            return {"start_date": value}

    def slot_mappings(self) -> Dict[Text, Union[Dict, List[Dict]]]:
        return {
            "start_date": self.from_entity(entity="time"),
            "end_date": self.from_entity(entity="time"),
        }

    def submit(
        self,
        dispatcher: CollectingDispatcher,
        tracker: Tracker,
        domain: Dict[Text, Any],
    ) -> List[Dict]:
        dispatcher.utter_template("utter_submit", tracker)
        return []

ag-grid: How to sort set-filter values after translation while keeping the raw (untranslated) filter values

I have a problem using ag-grid and set filters.
The values of the set-filter are a list of enums (strings) received from the backend
We translate these enum values to different languages using valueFormatter
Now we want the set filter to be sorted according to the translated texts
Here is a snapshot of our filter params:
filterParams: {
    values: this.filterValuesFromBackend,
    valueFormatter: (params: ValueFormatterParams) => this.translate(params.value)
}
As far as I can see sorting is done using unformatted values (enum values) and value formatting (here: translation) happens afterwards.
Important: Our table is a server side table where we use the filters in database queries.
Just using already translated texts as filter set values is no solution: We need the raw enum values from the set filter for database filtering afterwards.
Our current workaround is to use a comparator, which performs many additional translations just for sorting:
filterParams: {
    ...
    comparator: (a: string, b: string) => {
        const textA = this.translate(a);
        const textB = this.translate(b);
        return textA.localeCompare(textB);
    }
},
Here is a simple example:
Filter values: bike, car, bus
Frontend displays translated and sorted values:
in English: bike, bus, car (alphabetical order)
in German: Auto, Bus, Fahrrad (alphabetical order; English: car, bus, bike)
in French: bus, vélo, voiture (alphabetical order; English: bus, bike, car)
Backend receives the raw values "bike", "car" and "bus" for database filtering
Am I missing something? Is using valueFormatter correct for translation tasks?

How to set a default value to server generated time?

Is it possible to have a datetime field which defaults to the server-generated time during save?
class ADoc(Document):
    ...
    created_at = me.DateTimeField(default=datetime.datetime.utcnow)
This model has two issues:
created_at will get a value during model instance creation, not the time when the document is saved into the database.
Client time is used, which might differ from the server time -- I'd like to always use server time as the time source.
Maybe you can use the $currentDate operator?
UPD:
I use pymongo, but I think the same approach should work for MongoEngine; you can try:
collection = Animal._get_collection()
collection.update({}, {"$currentDate": {"date": 1}}, upsert=True)
example here
By default the _id field contains the timestamp of creation.
You can access it like this: ObjectId("507c7f79bcf86cd7994f6c0e").getTimestamp().
There are also $currentDate and new Date().
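The creation time can also be recovered in plain Python without the driver: the first four bytes of an ObjectId are a big-endian Unix timestamp in seconds (the remaining bytes are machine, process, and counter data). A small stdlib-only sketch:

```python
import datetime

def objectid_timestamp(oid_hex: str) -> datetime.datetime:
    # The first 4 bytes (8 hex chars) of an ObjectId encode
    # seconds since the Unix epoch, big-endian.
    seconds = int(oid_hex[:8], 16)
    return datetime.datetime.fromtimestamp(seconds, tz=datetime.timezone.utc)

print(objectid_timestamp("507c7f79bcf86cd7994f6c0e"))  # 2012-10-15 21:26:17+00:00
```

With pymongo, the equivalent is the ObjectId's generation_time attribute.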
I would define my own insert method in my ADoc class:
from datetime import datetime

class ADoc:
    def __init__(self, db, config):
        self.docs = db['docs']
        self.config = config

    def insert(self, document):
        item = {
            'name': document['name'],
            'created': datetime.utcnow()
        }
        self.docs.insert_one(item)
then simply execute the method when needed:
client = MongoClient(DB_URL)
db = client['yourDb']
doc = {'name': 'your new doc'}
adoc = ADoc(db, app.config)
adoc.insert(doc)

MongoDB Query: How to add some days to a date field before comparing it with a given date

The following code snippet retrieves all the users that have been modified before a given date:
val date = DateTime(...).getMillis
users.find(
    Json.obj("lastUpdate" -> Json.obj("$lt" -> Json.obj("$date" -> date))),
    None, None, page, perPage
)
How do I retrieve all the users that have been modified within a period starting from lastUpdate? In other words, I need to add some days to lastUpdate and compare it with a given date:
users.find(
    Json.obj("lastUpdate" -> /* how do I add N days to lastUpdate before comparing it with date? */
        Json.obj("$lt" -> Json.obj("$date" -> date))
    ),
    None, None, page, perPage
)
You can't. A simple query in MongoDB can't use values from the document it's querying; you can only use values that are constant throughout the query.
Valid query:
{
    FirstName: "bar"
}
Invalid query:
{
    FirstName: LastName
}
You can, however, achieve that by other means, for example with MongoDB's aggregation framework.
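On modern MongoDB (3.6+), $expr also lets a plain find compare per-document computed values, so the date shift can happen in the query itself. A sketch in pymongo-style Python (N_DAYS, the cutoff date and the collection name are illustrative; $add on a BSON date plus a number shifts the date by that many milliseconds):

```python
from datetime import datetime, timezone

N_DAYS = 7  # illustrative number of days to add to lastUpdate
cutoff = datetime(2018, 1, 1, tzinfo=timezone.utc)  # illustrative comparison date

# Matches documents where lastUpdate + N_DAYS < cutoff.
query = {
    "$expr": {
        "$lt": [
            {"$add": ["$lastUpdate", N_DAYS * 24 * 60 * 60 * 1000]},
            cutoff,
        ]
    }
}

# With pymongo this would be passed as: users.find(query)
```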

Using MapReduce/Aggregation to create search facets

I have the documents of the following prototype:
{
    title: "HD8200 DLP Projector",
    normal_price: 4999.99,
    specifications: [
        {
            ov: "HD (1920 x 1080)",
            fn: "Resolution (Native / Max)",
            o: 7,
            f: 211
        },
        {
            ov: "20000",
            fn: "Contrast Ratio",
            o: 15,
            f: 225
        }
    ]
}
I'm looking to create a list of filters for this product database, based on the specifications.
How can I get a list of option IDs (o) mapped to their product counter, for each field (f)?
Let's assume I need to achieve this for a specific list of field IDs (say, 211 and 225).
Write a mapper to take one document per map() call and write out a record for each option. The record key would be the field ID and the value would be a concatenation of a flag "1", the product title and option ID.
Write another mapper to read the list of id's and to write out a record with the same format: the key is the field ID and the value is a concatenation of a flag "0" and two empty strings.
Write a Reducer to read in the records written by the two mappers. Each call to reducer() will pass in all records written for a given field. If one of those records has a "0" flag then that field was one of the fields you're interested in. Only then would you write out a record for each record with a "1" flag. The key is the product title and the value is the option id.
Define your first job driver class to use the two mappers and the one reducer. This job will produce product-option pairs for the field IDs you specify. You'll need a second job to gather the options by product, or vice versa.
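If the data lives in MongoDB, the same facet counts can be computed without Hadoop via the aggregation framework. As a runnable illustration, here is the pipeline alongside the equivalent logic in plain Python (the sample documents are made up to match the prototype above):

```python
from collections import Counter

# Hypothetical sample documents following the prototype in the question.
docs = [
    {"title": "HD8200 DLP Projector",
     "specifications": [
         {"ov": "HD (1920 x 1080)", "fn": "Resolution (Native / Max)", "o": 7, "f": 211},
         {"ov": "20000", "fn": "Contrast Ratio", "o": 15, "f": 225},
     ]},
    {"title": "Another Projector",
     "specifications": [
         {"ov": "HD (1920 x 1080)", "fn": "Resolution (Native / Max)", "o": 7, "f": 211},
     ]},
]

wanted_fields = {211, 225}

# Sketch of the MongoDB aggregation pipeline for the same result:
pipeline = [
    {"$unwind": "$specifications"},
    {"$match": {"specifications.f": {"$in": list(wanted_fields)}}},
    {"$group": {"_id": {"f": "$specifications.f", "o": "$specifications.o"},
                "count": {"$sum": 1}}},
]

# The same logic in plain Python: product count per (field, option) pair.
counts = Counter(
    (spec["f"], spec["o"])
    for doc in docs
    for spec in doc["specifications"]
    if spec["f"] in wanted_fields
)
print(counts)  # Counter({(211, 7): 2, (225, 15): 1})
```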

Golang/mgo: Cannot retrieve value of int field from MongoDB document

I am querying a collection that includes an integer value among its fields, and loading the resulting documents into this struct:
type Subscription struct {
    Id             bson.ObjectId "_id,omitempty"
    Listen         string
    Job            string
    TimeoutSeconds int
    Data           string
}
var subscription Subscription
subscriptions := subscriptionsCol.Find(bson.M{"listen": "example_channel"}).Iter()
for subscriptions.Next(&subscription) {
    log("Pending job: %s?%s (timeout: %d)\n",
        subscription.Job,
        subscription.Data,
        subscription.TimeoutSeconds)
}
This is what phpMoAdmin shows me:
[_id] => MongoId Object (
[$id] => 502ed8d84eaead30a1351ea7
)
[job] => partus_test_job_a
[TimeoutSeconds] => 30
[listen] => partus.test
[data] => a=1&b=9
It puzzles me that subscription.TimeoutSeconds contains always 0, when I'm positive I have 30 in the document I inserted in the collection.
All other values are retrieved OK.
What's wrong with the int type?
Have you tried setting the "key" value for that field?
Unmarshal
The lowercased field name is used as the key for each exported field,
but this behavior may be changed using the respective field tag.
type Subscription struct {
    Id             bson.ObjectId "_id,omitempty"
    Listen         string
    Job            string
    TimeoutSeconds int "TimeoutSeconds"
    Data           string
}
The other fields are working fine because their lowercased names match the field names in your Mongo collection, whereas TimeoutSeconds uses TitleCase. What is happening is that the int field is being left at its zero value, since Unmarshal can't map a document field to it.
When unmarshalling data, there are multiple tag keys and flags that are supported.
Below are some examples:
type T struct {
    A bool
    B int    "myb"
    C string "myc,omitempty"
    D string `bson:",omitempty" json:"jsonkey"`
    E int64  ",minsize"
    F int64  "myf,omitempty,minsize"
}
The general spec for one key-value pair during marshalling is:
"[<key>][,<flag1>[,<flag2>]]"
`(...) bson:"[<key>][,<flag1>[,<flag2>]]" (...)`
Go supports particular tag keywords like bson (for Mongo keys) and json (for setting the JSON key in a response).
Check the Go reference for Marshal for more information.
Similarly, some frameworks provide further options for defining keys before parsing. For example, jinzhu's GORM library for SQL supports setting default values, mapping column names, etc.
Anyone can use the tag feature to provide customized support.