Autocomplete custom data in Slack commands

Just playing with my first Slack command. Is there any way of adding custom data from an external API for autocomplete? What works perfectly right now is calling the command /assign plus a Slack user (both will be autocompleted, nice!). What I want/need is a list of items I would fetch from a remote endpoint, which can be selected by autocomplete.
Is this possible at all?
/assign #userX to [data_by_autocomplete]
Or do I need to solve that with a full conversation like:
=> /assign user #userX
=> BOT: Which task? Here is a list: ...
=> /assign taskY
=> BOT: Assigned TaskY to #userX
But this feels very cumbersome (and wrong). So basically what I want is a remotely fetched list for autocomplete in the same command.
PS: The command and functionality are a simplified example to illustrate the point.

No, you cannot use custom autocomplete within the command line, but you can use custom autocomplete with the new interactive message menus.
So I would suggest breaking it up into two steps (a sketch of step 2 follows below):
Enter the slash command and provide the username
Show an interactive menu with autocomplete
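By way of illustration, here is a rough sketch of step 2 using the interactive message menus with an external data source and the @slack/web-api client; the callback_id and action names are made up, and the Options Load URL that serves the task list has to be configured separately in your Slack app settings.

import { WebClient } from '@slack/web-api';

const slack = new WebClient(process.env.SLACK_BOT_TOKEN);

// Step 2: after handling "/assign @userX", post a message with a menu whose
// options Slack fetches from your app's Options Load URL as the user types.
async function askForTask(channel: string, userId: string): Promise<void> {
  await slack.chat.postMessage({
    channel,
    text: `Which task should be assigned to <@${userId}>?`,
    attachments: [
      {
        text: 'Pick a task',
        callback_id: 'assign_task', // illustrative id
        actions: [
          {
            name: 'task',
            text: 'Start typing to search',
            type: 'select',
            data_source: 'external', // tells Slack to query your Options Load URL
            min_query_length: 2,
          },
        ],
      },
    ],
  });
}

As the user types in the menu, Slack sends the partial query to the Options Load URL, and your app responds with an options list (roughly {"options": [{"text": "Task A", "value": "task_a"}]}) built from the remote endpoint.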

Related

Using API POST command to 'Type' to console in PythonAnywhere. I can successfully 'Type' to the console, but how do I actually submit the command?

I am currently using the PythonAnywhere API to try to run a Python script I wrote that is hosted in a virtual environment on PythonAnywhere. I am using the Bubble API Connector, if that matters.
I figured out how to use a POST request in combination with "/api/v0/user/{username}/consoles/{id}/send_input/" to successfully send the text "python HelloWorld.py" to my PA console. However, I don't know how to get the API to actually have the console execute that command/text. Is there some text that represents hitting the 'Enter' button, or something of that nature?
Sorry if this is a dumb question or has an obvious solution but I am pretty new to this.
Thanks in advance.
I tried the newline tag "/n" as well as combing the PA forums, but had no luck finding this specific topic.
I expected my text to end up in the console (which is happening) and for it to execute (which is not happening).
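For reference, here is a sketch of the send_input call described above written with fetch (any HTTP client works the same way); the username, console id, and token are placeholders, and the trailing "\n" (backslash-n, not "/n") is typically what stands in for pressing Enter.

// Placeholders - fill in your own values.
const USERNAME = 'yourUsername';
const CONSOLE_ID = 12345;
const API_TOKEN = 'yourApiToken';

async function runInConsole(command: string): Promise<void> {
  const url = `https://www.pythonanywhere.com/api/v0/user/${USERNAME}/consoles/${CONSOLE_ID}/send_input/`;
  const res = await fetch(url, {
    method: 'POST',
    headers: { Authorization: `Token ${API_TOKEN}` },
    // The trailing "\n" is usually what makes the console treat the text as a
    // submitted line (i.e. pressing Enter) rather than just typed characters.
    body: new URLSearchParams({ input: `${command}\n` }),
  });
  if (!res.ok) {
    throw new Error(`send_input failed: ${res.status} ${await res.text()}`);
  }
}

runInConsole('python HelloWorld.py').catch(console.error);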

Write a VSCode extension visual test

I created a VSCode extension and now I need to write some tests.
I am using Mocha and Chai.
I wrote a few tests and I don't have any issue with that part. My problem is with the scenario below:
I have a button; when I press that button, an input box appears, and then I need to key in a value in the input box and press the OK button.
Can you help me with how I can simulate this scenario in a test? I can simulate pressing the first button by calling it from the Command Palette, but how do I key in a value in the input box?
Please note that I have already written the functional tests, but the user wants the UI tested as well.
Can you help me find an example related to my problem?
There is something like vscode-extension-tester, which lets you test the GUI very comfortably. You can find all the information on its GitHub main page:
https://github.com/redhat-developer/vscode-extension-tester/wiki
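As a rough sketch of what such a test can look like with vscode-extension-tester (the command title "My Extension: Do Thing" and the notification assertion are placeholders for whatever your extension actually does):

import { expect } from 'chai';
import { Workbench, InputBox } from 'vscode-extension-tester';

describe('My extension UI', function () {
  this.timeout(60000); // UI tests drive a real VS Code instance, so they are slow

  it('fills the input box and confirms', async () => {
    // Trigger the command behind the button via the Command Palette
    await new Workbench().executeCommand('My Extension: Do Thing');

    // Wait for the input box to appear, type a value, and press Enter
    const input = await InputBox.create();
    await input.setText('my value');
    await input.confirm();

    // Assert on whatever the extension does with the value, e.g. a notification
    const notifications = await new Workbench().getNotifications();
    expect(notifications).to.not.be.empty;
  });
});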

Google Actions CLI 3.1.0 version and actions.intent.TEXT

I want to be able to talk with Google Assistant, but connect the Actions project directly to an NLP service I already have running on my server. In other words, NOT use Dialogflow.
All the following examples show how to do this.
With Rasa
https://blog.rasa.com/going-beyond-hey-google-building-a-rasa-powered-google-assistant/
With LUIS
https://www.grokkingandroid.com/using-the-actions-sdk/
https://dzone.com/articles/using-the-actions-sdk-for-google-assistant-develop
With Watson
https://www.youtube.com/watch?v=no0R0bSkHXc
They use actions.intent.MAIN as the invocation and actions.intent.TEXT for all other utterances from the talker.
This is what I need. I don’t want to create a load of intents, with utterance phrases, inside the Action because I just want all the phrases spoken by the talker to be passed to my server, and for my NLP service to deal with them.
So I set up a new Actions project, installed the Actions CLI, and then spent three days trying all possible combinations without success, because all these examples use gactions CLI 2.1.3, and Google has now moved on to gactions CLI 3.1.0.
Not only have the commands changed, but so too have the file formats and structure.
It appears there is also a new Google Actions Console, and actions.intent.TEXT is no longer available.
My Action is connected to my server via webhook, but I cannot figure out how to get actions.intent.TEXT included and working.
Everything I find, even here
Publishing Actions on google without Dialogflow
predates the version update and follows the same pattern.
Can anyone point to an up-to-date, v3.1.0, discussion, tutorial or example about how to send all talker phrases through to an NLP that isn’t dialogflow, or has Google closed that avenue?
Is it possible to somehow go back and use the 2.1 CLI, either with the new Console or by reverting the console? (I have both CLI versions; I can see how different their commands are.)
Is it possible to go back and use 2.1?
There is no way to go back to AoG 2. You probably also don't want to do so - newer features aren't available with v2 and are only available with v3.
Can I use my own NLP with v3?
Yes, although it isn't as obvious, and there are some changes in semantics.
As an overview, what you'll need to do is:
Create a Type that can accept "Free form text". I usually call this type "Any".
(In the console, this is configured on the Type's page.)
Create a Custom Intent that has a single parameter of this Any Type and at least one phrase that captures everything for this parameter. (So you should add one training phrase, highlight the entire phrase, and set it for the parameter. Sometimes I also add additional phrases that include words that I don't want to capture.) I usually call the Intent "matchAny" and the parameter "any".
(In the console, this is configured on the Intent's page.)
Finally, you'll have a Scene that you transition to from the Main invocation. When it matches the "matchAny" Intent, it should call your webhook with a handler name. Your webhook will be called with the "any" parameter set to the user utterance. (Note that the JSON has also changed.)
(In the console, this is configured on the Scene's page.)
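To make that last step concrete, here is a minimal sketch of a fulfillment handler using the @assistant/conversation Node library; the handler name "matchAnyHandler", the NLP endpoint URL, and its response shape are placeholders, while the intent and parameter names follow the "matchAny"/"any" convention above.

import express from 'express';
import { conversation } from '@assistant/conversation';

const app = conversation();

// Handler name must match the webhookHandler configured in the scene.
app.handle('matchAnyHandler', async (conv) => {
  // The whole user utterance arrives as the resolved value of the "any" parameter.
  const utterance = String(conv.intent.params?.any?.resolved ?? '');

  // Forward the raw phrase to your own NLP service (URL and response shape assumed).
  const res = await fetch('https://nlp.example.com/parse', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ text: utterance }),
  });
  const { reply } = await res.json();

  conv.add(reply ?? "Sorry, I didn't catch that.");
});

// The conversation() app is a plain request handler, so it can be mounted on Express.
express().use(express.json()).post('/fulfillment', app).listen(3000);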
That seems like a lot of work. Isn't there just some way to do all that from the command line?
Yes. You can do all of that in the configuration files that the CLI accesses and then upload it. (You can then also use the console to review the configuration, if necessary, to make sure they're configured as you expect. You can shift back and forth between them as appropriate.)
Google also has a GitHub repository that contains most of the files pre-configured for this sort of setup.
You will need to update the configuration from the repository to handle the webhook correctly (it includes code to illustrate what is happening using the inline code editor) and to add your project ID.
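For orientation only, the SDK configuration files for this setup might look roughly like the sketch below; the file paths, scene name, and handler name are illustrative, and the exact keys should be checked against Google's repository and the files the CLI pulls down.

# sdk/custom/types/any.yaml - a type that accepts free-form text
freeText: {}

# sdk/custom/intents/matchAny.yaml - one parameter of that type, with a
# training phrase captured entirely by the parameter
parameters:
- name: any
  type:
    name: any
trainingPhrases:
- ($any 'hello how are you' auto=true)

# sdk/custom/global/actions.intent.MAIN.yaml - hand off to a scene on invocation
transitionToScene: ConversationScene

# sdk/custom/scenes/ConversationScene.yaml - route the matched intent to the webhook
intentEvents:
- intent: matchAny
  handler:
    webhookHandler: matchAnyHandler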

Build pages not showing on admin panel?

As you can see here: https://i.stack.imgur.com/oVfts.png, I have built some pages from templates, but they are not showing in the admin panel. When I go, for example, to http://sulu-dev.lo/contact, it opens that page.
Are you using the doctrine-dbal transport layer, and do you have some kind of special character (dot, dash, ...) in your webspace configuration? There is currently an issue in Jackalope Doctrine DBAL which causes this behavior. Simply change the webspace key and use bin/adminconsole sulu:build --destroy to initialize Sulu again.
The --destroy option deletes all the existing data. If you don't want to do that you should move the /cmf/<webspace> node to match the new key using something like the PHPCR Shell on your own.

Predictive text inside the terminal in Dart

I'm trying to create a command-line tool for use in a web framework and want to have predictive text inside the terminal to save typing. So far I've been using:
import 'dart:io';

stdout.writeln('Text to output'); // To output to terminal
stdin.readLineSync(); // For reading responses from the terminal
However, using the writeln function doesn't output to the user input area in the terminal, but rather as a system response. What I need is to output to the user input area in the terminal, like how the Symfony framework operates in PHP, so that while typing a guess can be given and the user can tab to complete (again, see the Symfony CRUD or Doctrine entity generator for an idea of the desired functionality).
I do see that using readLineSync might also be a problem (as the script will need to listen for user input and not be blocked).
Thank you for reading!