Can a slot take entity values without an action function or forms in Rasa? - chatbot

Is it possible to pass entity values to slots without a form or a custom action function?
nlu.yml
nlu:
- intent: place_order
  examples: |
    - wanna [large](size) shoes for husky
    - need a [small](size) [green](color) boots for pupps
    - have [blue](color) socks
    - would like to place an order
- lookup: size
  examples: |
    - small
    - medium
    - large
- synonym: small
  examples: |
    - small
    - s
    - tiny
- synonym: large
  examples: |
    - large
    - l
    - big
- lookup: color
  examples: |
    - white
    - red
    - green
domain.yml
version: "2.0"
intents:
  - greet
  - goodbye
  - affirm
  - deny
  - mood_great
  - mood_unhappy
  - bot_challenge
  - place_order
entities:
  - size
  - color
slot:
  size:
    type: text
  color:
    type: text
responses:
  utter_greet:
  - text: "Hey! can I assist you ?"
  utter_order_list:
  - text: "your order is {size} [color} boots. right?"
stories.yml
version: "2.0"
stories:
- story: place_order
  steps:
  - intent: greet
  - action: utter_greet
  - intent: place_order
  - action: utter_order_list
Debug output: the entities are recognized, but the values are not passed to the slots.
Hey! can I assist you ?
Your input -> I would like to place an order for large blue shoes for my puppy
Received user message 'I would like to place an order for large blue shoes for my puppy' with intent '{'id': -2557752933293854887, 'name': 'place_order', 'confidence': 0.9996021389961243}' and entities '[{'entity': 'size', 'start': 35, 'end': 40, 'confidence_entity': 0.9921159148216248, 'value': 'large', 'extractor': 'DIETClassifier'}, {'entity': 'color', 'start': 41, 'end': 45, 'confidence_entity': 0.9969255328178406, 'value': 'blue', 'extractor': 'DIETClassifier'}]'
Failed to replace placeholders in response 'your order is {size} [color} boots. right?'. Tried to replace 'size' but could not find a value for it. There is no slot with this name nor did you pass the value explicitly when calling the response. Return response without filling the response

"slot" is an unknown keyword. you should write "slots" instead of "slot" in the domain file and it will work.

Related

How to split a column based on first occurrence of a delimiter in MongoDB

I have a column like this, and it should be split based on the first "-"; an example is below:
MGESAD :
"6095 - NCAM - US - GIUTCB - US Consumer Bank - USRB"
"6595 - NBAM - US - UDAS - Consumer Bank - USRB"
"0595 - NWWAM - US - GWCB - US BANK Bank - USRB - TBL"
I need to split this column into:
Col1 Col2
6095 NCAM - US - GIUTCB - US Consumer Bank - USRB
6595 NBAM - US - UDAS - Consumer Bank - USRB
0595 NWWAM - US - GWCB - US BANK Bank - USRB - TBL
Tried so far:
db.getCollection("arTes").aggregate([
  {
    $addFields: {
      MGE_ID: { $arrayElemAt: [{ "$split": ["$MGESAD", "-"] }, 0] },
      MGE_DESC: { $arrayElemAt: [{ "$split": ["$MGESAD", "-"] }, 2] }
    }
  }
])
MGE_DESC is giving only one element; I need the entire remaining string after the first split. Let me know if there is an easier way to do this.
Query
A pipeline update requires MongoDB >= 4.2.
Because you want to split on the first occurrence of "-", you can do it without splitting on all "-" occurrences. The query below finds the index of " - "; the left part becomes MGESAD and the right part becomes MGE_DESC.
*If you only want to aggregate, use the same ["$set" ...] pipeline in an aggregation.
*If you wanted to do this for an occurrence other than the first or last "-", you could split, then $concat, and maybe $reduce depending on your needs; but this case is simpler, so those weren't used.
PlayMongo
update({},
  [{"$set":
      {"MGESAD":
          {"$substrCP": ["$MGESAD", 0, {"$indexOfCP": ["$MGESAD", " - "]}]},
       "MGE_DESC":
          {"$substrCP":
              ["$MGESAD",
               {"$add": [{"$indexOfCP": ["$MGESAD", " - "]}, 3]},
               {"$strLenCP": "$MGESAD"}]}}}],
  {"multi": true})

Get shadow copies older than 5 days using PowerShell

I would like to get the shadow copies that were created more than 5 days ago. How can I do this using PowerShell?
cmd> Diskshadow
Diskshadow> List shadows all
* Shadow copy ID = {49fb469b-4940-45f7-98bd-08441e9e353c}
<No Alias>
- Shadow copy set: {32224b82-e802-4eab-a903-fb5dc6558800}
<No Alias>
- Original count of shadow copies = 11
- Original volume name: \\?\Volume{bba82744-b690-4b68-9180-c0d817c5a38f}\ [G:\]
- Creation time: 4/13/2021 6:03:34 PM
- Shadow copy device name: \\?\GLOBALROOT\Device\HarddiskVolumeShadowCopy313
- Originating machine: app.contoso.local
- Service machine: app.contoso.local
- Not exposed
- Provider ID: {b5946137-7b9f-4925-af80-51abd60b20d5}
- Attributes: No_Auto_Release Persistent Differential
* Shadow copy ID = {8ac42987-5f9a-4535-aef0-c6d64d7a658b}
* Shadow copy ID = {d9be01ee-c1e6-424f-ac9a-cf82ef4e5e58}
<No Alias>
- Shadow copy set: {32224b82-e802-4eab-a903-fb5dc6558800}
<No Alias>
- Original count of shadow copies = 11
- Original volume name: \\?\Volume{1120d149-97e5-4b8d-af19-bb24338626ef}\ [H:\]
- Creation time: 4/13/2021 6:03:34 PM
- Shadow copy device name: \\?\GLOBALROOT\Device\HarddiskVolumeShadowCopy271
- Originating machine: app.contoso.local
- Service machine: app.contoso.local
- Not exposed
- Provider ID: {b5946137-7b9f-4925-af80-51abd60b20d5}
- Attributes: No_Auto_Release Persistent Differential
There is a module that can do this:
https://www.powershellgallery.com/packages/CPolydorou.ShadowCopy/1.1.2/Content/ShadowCopy.psm1
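If you prefer not to install a module, a minimal sketch using the built-in Win32_ShadowCopy CIM class may also work (assumes an elevated Windows PowerShell session; InstallDate is the shadow copy's creation time):

```powershell
# Select shadow copies created more than 5 days ago
$cutoff = (Get-Date).AddDays(-5)
Get-CimInstance -ClassName Win32_ShadowCopy |
    Where-Object { $_.InstallDate -lt $cutoff } |
    Select-Object -Property ID, VolumeName, InstallDate
```

Each returned object's ID matches the "Shadow copy ID" shown by diskshadow, so the result can be cross-checked against the listing above.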

How to define a CloudWatch Alarm on the sum of two metrics with CloudFormation?

I need to trigger an alarm when the sum of the same metric (ApproximateNumberOfMessagesVisible) on two different queues exceeds the value of 100.
In September '17, this answer stated that the only way to do it was with a Lambda function getting the two values and summing them up via CloudWatch API.
At the time of writing (Feb. '19), it is possible to use "Metric Math", so there is no need for a Lambda function or an EC2 instance. Is it possible to use Metric Math to define an alarm directly in CloudFormation?
It is actually possible to implement the Alarm logic directly in CloudFormation.
Assuming you have two scaling policies, ECSScaleUp and ECSScaleDown, the alarm definition will look like:
ECSWorkerSQSCumulativeAlarm:
  Type: AWS::CloudWatch::Alarm
  Properties:
    AlarmName: !Join ['-', [!Ref 'MyService', 'SQSCumulativeAlarm']]
    AlarmDescription: "Trigger ECS Service Scaling based on TWO SQS queues"
    Metrics:
      - Id: e1
        Expression: "fq + sq"
        Label: "Sum of the two Metrics"
      - Id: fq
        MetricStat:
          Metric:
            MetricName: ApproximateNumberOfMessagesVisible
            Namespace: AWS/SQS
            Dimensions:
              - Name: QueueName
                Value: !GetAtt [FirstQueue, QueueName]
          Period: 60
          Stat: Average
          Unit: Count
        ReturnData: false
      - Id: sq
        MetricStat:
          Metric:
            MetricName: ApproximateNumberOfMessagesVisible
            Namespace: AWS/SQS
            Dimensions:
              - Name: QueueName
                Value: !GetAtt [SecondQueue, QueueName]
          Period: 60
          Stat: Average
          Unit: Count
        ReturnData: false
    EvaluationPeriods: 2
    Threshold: 100
    ComparisonOperator: GreaterThanThreshold
    AlarmActions:
      - !Ref ECSScaleUp
      - !Ref ECSScaleDown
    OKActions:
      - !Ref ECSScaleUp
      - !Ref ECSScaleDown

How to restrict indexed_search to show only results of a subtree (pages and their subpages) in TYPO3 8 LTS

Search Page Domain 1 should not list results of Domain 2 and Domain 3
Search Page Domain 2 should only list results of Domain 2
Search Page Domain 3 should only list results of Domain 3
- - Domain 1 (uid:1)
- - Lvl1 Page Domain 1 (uid:2)
- - - Lvl2 Page 1 Domain 1 (uid:3)
- - Lvl1 Page Domain 1 (uid:4)
- - Lvl1 Page Domain 1 (uid:5)
- - Search Page Domain 1
- - Microsites (uid:5)
- - - Domain 2 (uid:6)
- - - - Unterseite 1 Domain 2 (uid:7)
- - - - Unterseite 2 Domain 2 (uid:8)
- - - - Unterseite 3 Domain 2 (uid:9)
- - - - Search Page Domain 2
- - - Domain 3 (uid:10)
- - - - Unterseite 1 Domain 3 (uid:11)
- - - - Unterseite 2 Domain 3 (uid:12)
- - - - Unterseite 3 Domain 3 (uid:13)
- - - - Search Page Domain 3
TS Setup for Pages:
UID:1
plugin.tx_indexedsearch.settings.defaultOptions.sections = rl1_2, 4, 5
Declaration:
rlx_y = Level x, Page y
4,5 = PageUid 4, PageUid 5
Search results:
Search Page Domain 1: shows content of page UIDs 2, 3, 4, 5
UID:6 (Domain 2 or Subpage)
plugin.tx_indexedsearch.settings.defaultOptions.sections = rl1_6
Search results:
Search Page Domain 2: shows content of page UIDs 7, 8, 9
UID:10 (Domain 3 or Subpage)
plugin.tx_indexedsearch.settings.defaultOptions.sections = rl1_10
Search results:
Search Page Domain 3: shows content of page UIDs 11, 12, 13

How to comment on a specific line number on a PR on GitHub

I am trying to write a small script that can comment on github PRs using eslint output.
The problem is eslint gives me the absolute line numbers for each error.
But github API wants the line number relative to the diff.
From the github API docs: https://developer.github.com/v3/pulls/comments/#create-a-comment
To comment on a specific line in a file, you will need to first determine the position in the diff. GitHub offers an application/vnd.github.v3.diff media type which you can use in a preceding request to view the pull request's diff. The diff needs to be interpreted to translate from the line in the file to a position in the diff. The position value is the number of lines down from the first "@@" hunk header in the file you would like to comment on. The line just below the "@@" line is position 1, the next line is position 2, and so on. The position in the file's diff continues to increase through lines of whitespace and additional hunks until a new file is reached.
So if I want to add a comment on new line number 5 in the above image, I would need to pass 12 to the API.
My question is: how can I easily map between the new line numbers which eslint gives in its error messages and the relative line numbers required by the GitHub API?
What I have tried so far
I am using parse-diff to convert the diff provided by github API into json object
[{
  "chunks": [{
    "content": "@@ -OLD_STARTING_LINE_NUMBER,OLD_TOTAL_LINES +NEW_STARTING_LINE_NUMBER,NEW_TOTAL_LINES @@",
    "changes": [
      {
        "type": STRING("normal"|"add"|"del"),
        "normal": BOOLEAN,
        "add": BOOLEAN,
        "del": BOOLEAN,
        "ln1": OLD_LINE_NUMBER,
        "ln2": NEW_LINE_NUMBER,
        "content": STRING
      }
    ],
    "oldStart": NUMBER,
    "oldLines": NUMBER,
    "newStart": NUMBER,
    "newLines": NUMBER
  }]
}]
I am thinking of the following algorithm:
1. Make an array of new line numbers, from NEW_STARTING_LINE_NUMBER to NEW_STARTING_LINE_NUMBER + NEW_TOTAL_LINES, for each file.
2. Subtract newStart from each number to get another array, relativeLineNumbers.
3. Traverse the array and, for each deleted line (type === 'del'), increment the corresponding remaining relativeLineNumbers.
4. For each further hunk (a line containing "@@"), decrement the corresponding remaining relativeLineNumbers.
I have found a solution. I didn't put it here earlier because it involves simple looping and nothing special, but I am answering now to help others.
I have opened a pull request to recreate the situation shown in the question:
https://github.com/harryi3t/5134/pull/7/files
Using the GitHub API one can get the diff data:
diff --git a/test.js b/test.js
index 2aa9a08..066fc99 100644
--- a/test.js
+++ b/test.js
@@ -2,14 +2,7 @@
var hello = require('./hello.js');
-var names = [
- 'harry',
- 'barry',
- 'garry',
- 'harry',
- 'barry',
- 'marry',
-];
+var names = ['harry', 'barry', 'garry', 'harry', 'barry', 'marry'];
var names2 = [
'harry',
@@ -23,9 +16,7 @@ var names2 = [
// after this line new chunk will be created
var names3 = [
'harry',
- 'barry',
- 'garry',
'harry',
'barry',
- 'marry',
+ 'marry', 'garry',
];
Now just pass this data to the parse-diff module and do the computation.
var parseDiff = require('parse-diff');

var parsedFiles = parseDiff(data); // data = the diff output shown above
parsedFiles.forEach(function (file) {
  var relativeLine = 0;
  file.chunks.forEach(function (chunk, index) {
    if (index !== 0)   // the relative line number increments for each chunk
      relativeLine++;  // except the first one (see rel-line 16 in the image)
    chunk.changes.forEach(function (change) {
      relativeLine++;
      console.log(
        change.type,
        change.ln1 ? change.ln1 : '-',
        change.ln2 ? change.ln2 : '-',
        change.ln ? change.ln : '-',
        relativeLine
      );
    });
  });
});
This would print
type    (ln1) old line    (ln2) new line    (ln) added/deleted line    relative line
normal 2 2 - 1
normal 3 3 - 2
normal 4 4 - 3
del - - 5 4
del - - 6 5
del - - 7 6
del - - 8 7
del - - 9 8
del - - 10 9
del - - 11 10
del - - 12 11
add - - 5 12
normal 13 6 - 13
normal 14 7 - 14
normal 15 8 - 15
normal 23 16 - 17
normal 24 17 - 18
normal 25 18 - 19
del - - 26 20
del - - 27 21
normal 28 19 - 22
normal 29 20 - 23
del - - 30 24
add - - 21 25
normal 31 22 - 26
Now you can use the relative line number to post a comment using github api.
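With the relative position in hand, the create-comment endpoint documented at the link in the question (POST /repos/:owner/:repo/pulls/:number/comments) takes just body, commit_id, path, and position. A minimal sketch in Python (build_review_comment is a hypothetical helper; the commit SHA is a placeholder):

```python
import json

def build_review_comment(body, commit_id, path, position):
    # Payload shape per the "create a comment" endpoint in the linked v3 docs;
    # position is the diff-relative line number computed above.
    return json.dumps({
        "body": body,
        "commit_id": commit_id,
        "path": path,
        "position": position,
    })

# Position 12 corresponds to new line 5 in the worked example above.
payload = build_review_comment("eslint: unexpected var", "<commit-sha>", "test.js", 12)
```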
For my purpose I only needed the relative line numbers for the newly added lines, but using the table above one can get them for deleted lines as well.
Here's the link to the linting project in which I used this: https://github.com/harryi3t/lint-github-pr