How to get the provideTasks method of registerTaskProvider working - visual-studio-code

I am trying to get the new registerTaskProvider method in the VSCode Task API working within my extension, but so far I have been unable to make it work.
I have used the npm extension as a basis. Here are the steps that I have followed:
Used yo to create a new extension
Updated package.json to include a new activationEvent onCommand:workbench.action.tasks.runTask
Updated package.json to include configuration and taskDefinitions sections in the contributes section
Added task provider registration code to the extension.ts file
NOTE: I realise that the linked code won't actually provide any additional tasks, I am just using this code as a basis for testing.
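Roughly, the registration looks like this (a minimal sketch rather than the exact code from the repository; 'myTaskType' is a placeholder that must match the type declared under taskDefinitions in package.json, and note that current VS Code exposes the call as vscode.tasks.registerTaskProvider, while some older builds had it on vscode.workspace):

import * as vscode from 'vscode';

export function activate(context: vscode.ExtensionContext) {
    // 'myTaskType' is a placeholder; it must match the "type" declared in the
    // taskDefinitions section of package.json
    const provider = vscode.tasks.registerTaskProvider('myTaskType', {
        provideTasks(): vscode.Task[] {
            console.log('provideTasks was called');
            return []; // no tasks yet; just checking whether the callback fires
        },
        resolveTask(_task: vscode.Task): vscode.Task | undefined {
            return undefined;
        }
    });
    context.subscriptions.push(provider);
}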
Now, when I try to debug the extension, the provideTasks method is never invoked. What am I missing?
Also, the only way to have the activate method called is when I invoke the Hello World command. However, I might not have a command associated with the extension. How can I force the activation of the extension?
Is there any additional documentation on how to get started with the registerTaskProvider API?
I have added a sample repository that has the current work to date.

Turns out that this was a case of PEBKAC.
In a Twitter discussion, Erich Gamma showed me that the provideTasks method is only consulted when you begin running a task, as shown here:
https://twitter.com/ErichGamma/status/885823516293177346
I had assumed that the provideTasks method would be consulted on activation of the extension.

Related

An assertion on the test case file - Webdriverio

I need some help. I'm starting out with this automation stuff; I like it, but I'm still learning. I recently created a test case that basically goes to a certain page and clicks a button to upgrade the account on a specific sale. I did that, but when my PR got reviewed, the devops engineer asked if I could add an assertion.
So, this code is in the spec file, not in the page objects file. Does the devops engineer mean I have to put the code in the page object file and then call it from the spec file? Any tip would be great, and thanks!
You can do it either way: write the assertion in the spec file, or write it in the page objects file and call it from the spec file. If the latter is your framework's code convention, you may want to do it that way for consistency, but both approaches work.
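For illustration, the page-object variant might look something like this (a sketch assuming WebdriverIO with the built-in expect-webdriverio matchers; the selectors, names, and expected text are all made up):

// upgrade.page.ts - hypothetical page object; $ and expect are WebdriverIO globals
class UpgradePage {
    get upgradeButton() { return $('#upgrade-account'); }
    get statusBadge() { return $('.account-status'); }

    async upgradeAccount(): Promise<void> {
        await this.upgradeButton.click();
    }

    // the assertion lives in the page object, so every spec reuses the same check
    async expectAccountUpgraded(): Promise<void> {
        await expect(this.statusBadge).toHaveText('Upgraded');
    }
}
export default new UpgradePage();

// upgrade.spec.ts - the spec just drives the page object and calls its assertion
import UpgradePage from './upgrade.page';

describe('account upgrade', () => {
    it('upgrades the account on a specific sale', async () => {
        await UpgradePage.upgradeAccount();
        await UpgradePage.expectAccountUpgraded();
    });
});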

Google Actions CLI 3.1.0 version and actions.intent.TEXT

I want to be able to talk with Google Assistant, but connect the Actions project directly to an NLP service I already have running on my server. In other words, NOT use dialogflow.
All the following examples show how to do this.
With Rasa
https://blog.rasa.com/going-beyond-hey-google-building-a-rasa-powered-google-assistant/
With LUIS
https://www.grokkingandroid.com/using-the-actions-sdk/
https://dzone.com/articles/using-the-actions-sdk-for-google-assistant-develop
With Watson
https://www.youtube.com/watch?v=no0R0bSkHXc
They use actions.intent.MAIN as the invocation and actions.intent.TEXT for all other utterances from the talker.
This is what I need. I don’t want to create a load of intents, with utterance phrases, inside the Action because I just want all the phrases spoken by the talker to be passed to my server, and for my NLP service to deal with them.
So I set up a new Actions project, installed the Actions CLI, and then spent 3 days trying all possible combinations without success, because all these examples use gactions CLI 2.1.3 and Google has now moved on to gactions CLI 3.1.0.
Not only have the commands changed, but so too have the file formats and structure.
It appears there is also a new Google Actions Console, and actions.intent.TEXT is no longer available.
My Action is connected to my server via webhook, but I cannot figure out how to get actions.intent.TEXT included and working.
Everything I find, even here:
Publishing Actions on google without Dialogflow
predates the version update and follows the same pattern.
Can anyone point to an up-to-date, v3.1.0, discussion, tutorial or example about how to send all talker phrases through to an NLP that isn’t dialogflow, or has Google closed that avenue?
Is it possible to somehow go back and use the 2.1 CLI, either with the new Console or by reverting the Console? (I have both CLI versions, and I can see how different their commands are.)
Is it possible to go back and use 2.1?
There is no way to go back to AoG 2, and you probably don't want to anyway: the newer features are only available with v3.
Can I use my own NLP with v3?
Yes, although it isn't as obvious, and there are some changes in semantics.
As an overview, what you'll need to do is:
Create a Type that can accept "Free form text". I usually call this type "Any".
In the console, it looks something like this: [screenshot of the Any type configured to accept free-form text]
Create a Custom Intent that has a single parameter of this Any Type and at least one training phrase that captures everything into that parameter. (So you should add one training phrase, highlight the entire phrase, and assign it to the parameter. Sometimes I also add additional phrases that include words I don't want to capture.) I usually call the Intent "matchAny" and the parameter "any".
In the console, it could be something like this: [screenshot of the matchAny intent with its training phrase mapped to the any parameter]
Finally, you'll have a Scene that you transition to from the Main invocation. When it matches the "matchAny" Intent, it should call your webhook with a handler name. Your webhook will be called with the "any" parameter set to the user utterance. (Note that the JSON format has also changed.)
Again, the console might have it looking something like this: [screenshot of the scene routing the matchAny intent to the webhook handler]
That seems like a lot of work. Isn't there just some way to do all that from the command line?
Yes. You can do all of that in the configuration files that the CLI accesses and then upload it. (You can then also use the console to review the configuration, if necessary, to make sure they're configured as you expect. You can shift back and forth between them as appropriate.)
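For orientation, a project pulled with the 3.x CLI is laid out roughly like this (directory names as I recall them from the v3 SDK file format, so treat the exact layout as approximate):

manifest.yaml
settings/settings.yaml
custom/types/any.yaml        (the free-form text type)
custom/intents/matchAny.yaml (the catch-all intent with its "any" parameter)
custom/scenes/               (the scene that routes matchAny to the webhook)
webhooks/                    (the webhook handler declarations)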
Google also has a github repository that contains most of the files pre-configured for this sort of setup.
You will need to update the configuration from the repository to handle the webhook correctly (it includes code to illustrate what is happening using the inline code editor) and to add your project ID.
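As a sketch of the webhook side (this uses the Node.js @assistant/conversation library; the handler name "matchAny" and parameter "any" match the names above, and myNlpService is a stand-in for whatever NLP call you already have):

import { conversation } from '@assistant/conversation';

// stand-in for your own NLP service
async function myNlpService(text: string): Promise<string> {
    return `You said: ${text}`;
}

const app = conversation();

// the handler name must match the one configured on the scene
app.handle('matchAny', async (conv) => {
    // the raw utterance arrives as the "any" intent parameter
    const utterance = conv.intent.params?.any?.resolved as string;
    conv.add(await myNlpService(utterance));
});

export { app };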

Ansible CallbackBase result object content

Our dev team provides us with an Ansible package to work with. I noticed they developed a custom stdout_callback, and I'm trying to understand it.
I'm looking at the code of the CallbackBase class, available here, and I noticed the result variable, but I can't find a description of its contents.
Is there a place I can find such information?
Next question: how does Ansible call such a callback? CallbackBase contains several methods, but I'd like to know where those methods are called.
Thanks for your feedback.

Create new work item type using VSTS Extension

Based on the documentation https://learn.microsoft.com/en-us/vsts/extend/overview?view=vsts#what-makes-up-an-extension, a VSTS extension can be used to extend the work item form.
However, I would like my extension to automatically create a new work item type once it is installed. Is this something that is possible? I can't find any documentation online that suggests how to do it.
Theoretically this is possible: the extension has a "first load" call which you can use to invoke the REST API to create a custom process or update an existing custom process. The REST API to change processes isn't public yet, so you'll have to work it out by using Fiddler to watch how the web UI does it.
Due to the way processes are linked to projects, all projects with that process will get the new work item type.
I could not find a lot of documentation online for this, but the VSS web extensions SDK (https://www.npmjs.com/package/vss-web-extension-sdk) has a REST client called 'ProcessDefinitionsRestClient' declared in the typings/tfs.d.ts file. This client has a createWorkItemType method that looks like this:
createWorkItemType(workItemType: ProcessDefinitionsContracts.WorkItemTypeModel, processId: string): IPromise<ProcessDefinitionsContracts.WorkItemTypeModel>;
The 'ProcessRestClient' client has methods to create a new/inherited process to which the new WIT can be added.
I have not tried it out yet, and these APIs are still in preview, but maybe they can get you started on the right path.
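An untested sketch of what that could look like from extension code (the module paths follow the declarations in typings/tfs.d.ts, the field values are invented, and since these APIs are still in preview the shapes may change):

import ProcessDefinitionsClient = require("TFS/WorkItemTracking/ProcessDefinitionsRestClient");
import ProcessDefinitionsContracts = require("TFS/WorkItemTracking/ProcessDefinitionsContracts");

const client = ProcessDefinitionsClient.getClient();

// hypothetical work item type; processId must identify an inherited (custom) process
const newType = {
    name: "Incident",
    description: "Created by the extension on first load",
    color: "f6546a",
    icon: "icon_clipboard"
} as ProcessDefinitionsContracts.WorkItemTypeModel;

client.createWorkItemType(newType, "<process-id>").then(created => {
    console.log("Created work item type: " + created.name);
});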

How to access per project / per preset configuration data in vue-cli 3.x?

I'm looking to create a vue-cli 3.x plugin that generates extra files in the project when invoked and templates these based on configuration info entered by the user.
It's rather easy to collect this information from user prompts but it'd be much nicer if users could put this info into some sort of configuration file and have the plugin read it from there.
I understand vue-cli now uses a vue.config.js file on the project level and ~/.vuerc on a more global (preset) level. However, it doesn't look like my generator function would have access to either one of those files.
The documentation says "the entire preset" should be passed as the third argument to the generator function (module.exports = (api, options, rootOptions) => {} in ./generator/index.js), but when my plugin is invoked, rootOptions is undefined.
Similarly, the use of vue.config.js is discussed in the documentation when talking about service plugins, but there's no mention of how to use it in a generator function.
Am I missing something obvious here or is there really no officially sanctioned way to do this? Nothing stops me from reading vue.config.js in my function, but this feels hacky. And I wouldn't even know how to find out which preset was used to create the project.
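To illustrate the workaround I mean (a sketch only; the (api, options, rootOptions) signature plus api.resolve and api.render come from the plugin development docs, while the config-reading part is my own hack rather than an officially sanctioned API):

// generator/index.js
const fs = require('fs');

module.exports = (api, options, rootOptions) => {
    // resolve vue.config.js inside the target project and read it directly
    const configPath = api.resolve('vue.config.js');
    const projectConfig = fs.existsSync(configPath) ? require(configPath) : {};

    // use e.g. projectConfig.pluginOptions to drive the templating
    api.render('./template', { projectConfig });
};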
So your problem is that you can't find ~/.vuerc?
I had the same issue and I found the solution here.
The solution is to use the command line
vue config
to see .vuerc, and then use
vue config --delete presets.yourPresetName
to delete your preset.
As well, if you can't type your preset name in the terminal because it contains a space or an apostrophe, you can use
vue config --edit
to open .vuerc in a text editor and just edit it there.