How to get a list of built-in modules programmatically - MicroPython

I would like to get a list of the built-in (and frozen) modules using MicroPython, running on the board (ESP32) itself.
help('modules') will give you a list of all modules, but this is only sent to the terminal. I have not been able to capture it in any other way.
See also the MicroPython forum, where the only option seems to be to use dupterm, which unfortunately is not available on ESP32.
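For reference, the dupterm trick from the forum would look roughly like this on a port where os.dupterm() is available (a sketch only; the Tee stream class and names are my own, and on an ESP32 build without dupterm it will not work):

import io
import os

# Sketch: mirror the REPL to an in-memory stream so the output of
# help('modules') can be captured. os.dupterm() expects a stream object
# with write() and readinto(); write() receives the bytes being printed.
class Tee(io.IOBase):
    def __init__(self):
        self.captured = b""
    def write(self, data):
        self.captured += data
        return len(data)
    def readinto(self, buf):
        return None          # nothing to feed back into the REPL

tee = Tee()
prev = os.dupterm(tee)       # start mirroring terminal output
help('modules')              # the listing now also lands in tee.captured
os.dupterm(prev)             # restore the original terminal

print(sorted(set(tee.captured.decode().split())))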


Google Actions CLI 3.1.0 version and actions.intent.TEXT

I want to be able to talk with Google Assistant, but connect the Actions project directly to an NLP service I already have running on my server. In other words, NOT use dialogflow.
All the following examples show how to do this.
With Rasa
https://blog.rasa.com/going-beyond-hey-google-building-a-rasa-powered-google-assistant/
With LUIS
https://www.grokkingandroid.com/using-the-actions-sdk/
https://dzone.com/articles/using-the-actions-sdk-for-google-assistant-develop
With Watson
https://www.youtube.com/watch?v=no0R0bSkHXc
They use the actions.intent.MAIN as the invocation and actions.intent.TEXT for all other utterances from the talker.
This is what I need. I don’t want to create a load of intents, with utterance phrases, inside the Action because I just want all the phrases spoken by the talker to be passed to my server, and for my NLP service to deal with them.
So I set up a new Action project, installed the Actions CLI, and then spent 3 days trying all possible combinations without success, because all these examples use gactions CLI 2.1.3 and Google has now moved on to gactions CLI 3.1.0.
Not only have the commands changed, but so too have the file formats and structure.
It appears there is also a new Google Actions Console, and actions.intent.TEXT is no longer available.
My Action is connected to my server via a webhook, but I cannot figure out how to get actions.intent.TEXT included and working.
Everything I find, even here
Publishing Actions on google without Dialogflow
is pre version update and follows the same pattern.
Can anyone point to an up-to-date, v3.1.0, discussion, tutorial or example about how to send all talker phrases through to an NLP that isn’t dialogflow, or has Google closed that avenue?
Is it possible to somehow go back and use the 2.1 CLI, either with the new Console or by reverting the console back? (I have both CLI versions, and I can see how different their commands are.)
Is it possible to go back and use 2.1?
There is no way to go back to AoG 2. You probably don't want to do so anyway - the newer features are only available with v3.
Can I use my own NLP with v3?
Yes, although it isn't as obvious, and there are some changes in semantics.
As an overview, what you'll need to do is:
Create a Type that can accept "Free form text". I usually call this type "Any".
In the console, it looks something like this:
Create a Custom Intent that has a single parameter of this Any Type and at least one phrase that captures everything for this parameter. (So you should add one training phrase, highlight the entire phrase, and set it for the parameter. Sometimes I also add additional phrases that include words that I don't want to capture.) I usually call the Intent "matchAny" and the parameter "any".
In the console, it could be something like this:
Finally, you'll have a Scene that you transition to from the Main invocation. When it matches the "matchAny" Intent, it should call your webhook with a handler name. Your webhook will be called with the "any" parameter set with the user utterance. (Note that the JSON format has also changed.)
Again, the console might have it looking something like this:
That seems like a lot of work. Isn't there just some way to do all that from the command line?
Yes. You can do all of that in the configuration files that the CLI accesses and then upload it. (You can then also use the console to review the configuration, if necessary, to make sure they're configured as you expect. You can shift back and forth between them as appropriate.)
Google also has a github repository that contains most of the files pre-configured for this sort of setup.
You will need to update the configuration from the repository to handle the webhook correctly (it includes code to illustrate what is happening using the inline code editor) and to add your project ID.
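On the webhook side, a rough sketch of a fulfillment endpoint that receives the "any" parameter might look like the following. This is Python/Flask purely for illustration; the field names (intent.params.any.resolved, prompt.firstSimple) are my reading of the v3 conversational webhook JSON and should be checked against the current documentation, and forward_to_nlp() is a placeholder for your own NLP service.

from flask import Flask, request, jsonify

app = Flask(__name__)

def forward_to_nlp(utterance):
    # Placeholder: call your own NLP service here and return its reply.
    return "You said: " + utterance

@app.route("/webhook", methods=["POST"])
def webhook():
    body = request.get_json()
    # The "matchAny" intent delivers the raw user utterance in its "any" parameter.
    params = body.get("intent", {}).get("params", {})
    utterance = params.get("any", {}).get("resolved", "")
    reply = forward_to_nlp(utterance)
    return jsonify({
        "session": {"id": body.get("session", {}).get("id", "")},
        "prompt": {
            "override": False,
            "firstSimple": {"speech": reply, "text": reply},
        },
    })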

Ansible CallbackBase result object content

Our dev team provides us with an Ansible package to work with. I noticed they developed a custom stdout_callback and I'm trying to understand it.
I'm looking at the code of the class CallbackBase, available here, and I noticed the result variable, but I can't find a description of its content.
Is there a place I can find such information?
Next question: how does Ansible call such a callback? CallbackBase contains several methods, but I'd like to know where those methods are called.
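For context, the callback boils down to something like this stripped-down sketch (my own simplification based on reading the CallbackBase source; the v2_runner_on_* methods and result._result are exactly the pieces I am asking about):

from ansible.plugins.callback import CallbackBase

class CallbackModule(CallbackBase):
    # The plugin is selected via ansible.cfg (stdout_callback = my_stdout)
    CALLBACK_VERSION = 2.0
    CALLBACK_TYPE = 'stdout'
    CALLBACK_NAME = 'my_stdout'

    def v2_runner_on_ok(self, result):
        # "result" is the object I am asking about; its raw data seems to
        # live in the result._result dict
        host = result._host.get_name()
        self._display.display("%s | ok: %s" % (host, result._result))

    def v2_runner_on_failed(self, result, ignore_errors=False):
        host = result._host.get_name()
        self._display.display("%s | failed: %s" % (host, result._result))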
Thanks for your feedback.

How to change GPIO.BOARD to GPIO.BCM for sensor SHT10

I want to integrate two scripts into one script.
The scripts are for the sensors SHT10 and MAX31855. Both make use of software SPI.
The SHT10 uses GPIO.BOARD and the MAX31855 uses GPIO.BCM.
The problem is that I get the error "ValueError: A different mode has already been set." I don't know how to resolve this, because the two sensors use different libraries. I think the problem is in those libraries.
Is there an easy solution for this problem?
Running the scripts separately, there is no problem.
You can try using GPIO.setmode(GPIO.BCM) and then in the other program using GPIO.setmode(GPIO.BOARD).
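If both reads really have to live in one script, a rough sketch of switching modes might look like this. Calling GPIO.cleanup() in between releases the pins and clears the mode so setmode() can be called again; whether this actually works depends on each sensor library re-initialising its pins on every read, so treat it as an idea to try rather than a guaranteed fix.

import RPi.GPIO as GPIO

# First block: BOARD numbering (what the SHT10 library uses)
GPIO.setmode(GPIO.BOARD)
GPIO.setup(11, GPIO.OUT)     # physical pin 11
# ... read the SHT10 here ...
GPIO.cleanup()               # releases the pins AND clears the mode

# Second block: BCM numbering (what the MAX31855 library uses)
GPIO.setmode(GPIO.BCM)       # no ValueError now, because of cleanup()
GPIO.setup(17, GPIO.OUT)     # BCM GPIO17 is the same physical pin 11
# ... read the MAX31855 here ...
GPIO.cleanup()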

Include runtime type definitions using VSCode extension

I'm working on a library that lets users run Node processes from inside another application. The library is called "max-api"; functions for sending data to the host application are exposed through a Node module and are loaded in the expected way:
const maxAPI = require("max-api");
However, the user never interacts with this module directly. Rather, when the host application launches the Node process, it intercepts the call to require, checks if the name of the module is "max-api", and provides the module if so.
This works great; the only issue is that we have no way to provide type definitions for this module. So the user doesn't get any autocomplete or validation for functions in the "max-api" module. I was thinking of writing a VSCode extension to provide these, but I'm not 100% sure how to get started. Thanks in advance for any advice.
You could write a TS typings file (see Definitely Typed). This would be installed in node_modules/@types and VS Code will automatically pick it up to provide code completion for your module.

printk in driver

I am really new to Linux module programming.
I need to somehow make some tweaks to the ath9k driver in Linux.
I finally got the compat-wireless source code of ath9k to compile on Ubuntu 11.04 and was trying to play around with the code.
I tried using printk to see what happens.
First I put printk in the init.c file, and the message I printed shows up when I use dmesg in the terminal. However, when I try to use the same printk in another file like rc.c, it does not show up at all.
I am wondering why that is.
Also, is there some other way that I could log some information from the code, similar to fprintf? What I need is to somehow extract the packet header from the driver.
Thank you
Best Regards.
Read about the proc fs; it's a great framework for extracting data from device drivers.
Once you have registered a proc fs node for your device, you can read from it.
When the read function is called, a callback function you defined creates the output. This is an excellent way to retrieve data from a device.
There are also two other methods. One is sysfs, which you can google for. The other, if the device is a char device, is to implement an ioctl function which returns the info you need.
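Once the driver has registered such a proc entry, reading it from user space is straightforward, for example from Python (the /proc/ath9k_pkthdr name below is made up for illustration; use whatever name the driver registers):

# Userspace side only: print whatever the driver's proc read callback emits.
with open("/proc/ath9k_pkthdr") as f:
    for line in f:
        print(line.rstrip())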