Deserializing With Dill on Remote Pyro5 Object Yields Error - dill

So I'm trying to have a remote Pyro5 object receive serialized arbitrary functions and execute them. The remote objects run on a separate machine, registered on a Pyro5 name server.
The built-in serializer (serpent) in Pyro5 does not support function serialization, so I manually serialize the function with Dill, yielding a bytes object, which I send over when calling the remote object through Pyro5. I then deserialize the function on the remote side, which yields the error:
ImportError: cannot import name 'Annotated' from 'typing' (/usr/lib/python3.8/typing.py)
I tested a separate function that sends the serialized data back to the client from the remote object, and on the client I deserialized the data into a function successfully. This implies that using Dill on the remote end, rather than Pyro5, is the culprit.

I'm the dill author. It would help to know what versions of python, dill, etc you are using -- both on the local and remote machines. However, I'm going to guess that you are using python 3.9+ locally, and the remote machine is using python 3.8 (see the traceback you posted). The Annotated class was added to the typing module in python 3.9... hence the error when dill expects to find it in typing.
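A quick way to confirm is to print the versions on both machines and try a minimal round trip. A sketch, with the Pyro5 call stubbed out since the proxy and method names aren't in the question:

import sys
import dill

print(sys.version_info, dill.__version__)  # run on both machines and compare

# Local side (Python 3.9+ per the guess above):
def job(x: int) -> int:
    return x * 2

payload = dill.dumps(job, recurse=True)  # bytes, safe to pass through a Pyro5 call
# result = remote_proxy.run(payload)     # hypothetical Pyro5 proxy method

# Remote side (Python 3.8): dill.loads(payload) can raise
#   ImportError: cannot import name 'Annotated' from 'typing'
# because the pickled payload references typing.Annotated, which only
# exists from Python 3.9 onward. With matching versions this succeeds:
func = dill.loads(payload)
print(func(21))  # 42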

Related

Why do I get the error "Server Creation Failed: Class not registered"?

I am creating a COM server (using actxserver) for CSTStudio in MATLAB, but I'm getting the error
Class not registered
from feval, which is called inside actxserver. When I create COM servers for other applications such as Word and PowerPoint it works fine, but here the invoke function shows an error.
Here is the MATLAB Code:
addpath(genpath('G:\MATLAB\CST-MATLAB-API-master'));
cst = actxserver('CSTStudio.application');
mws = cst.invoke('NewMWS');
This is the error:
MicrostripAntenna
Error using feval
Server Creation Failed: Class not registered
Error in actxserver (line 89)
h=feval(['COM.' convertedProgID], 'server', machinename, interface);
Error in MicrostripAntenna (line 32)
cst = actxserver('CSTStudio.application');
It's not clear what you mean by saying it is "working for other applications." Do you mean you can create those objects, or that from within those programs you can create CSTStudio.application? Is CSTStudio installed on that computer? Is there a registry entry for the ProgID on the computer? Is it an in-process server (.dll) or a local server (.exe)?
My first suggestion is that you use VB Script to try and diagnose whether it is a 32/64-bit mismatch and whether the server is even on the machine.
Take this VB Script and save it to a file called CSTStudioTest.vbs
dim app
set app = CreateObject("CSTStudio.application")
MsgBox TypeName(app)
Change directory to where the newly created file exists.
Then, execute the script in two different ways:
c:\windows\system32\wscript.exe CSTStudioTest.vbs
and also as
c:\windows\syswow64\wscript.exe CSTStudioTest.vbs
If both succeed, that means CSTStudio is a local server (.exe).
If only one succeeds, CSTStudio is an in-process server (.dll): if only the first script succeeded it is a 64-bit in-process server, and if only the second succeeded it is a 32-bit in-process server. An in-process server can only be called directly from a process of the same bitness (64-bit from 64-bit, 32-bit from 32-bit).
If both scripts fail, that means CSTStudio is not installed correctly on your computer (if at all).
If the bitness of MATLAB and CSTStudio differ, the easiest remedy is to get a version of MATLAB or CSTStudio that matches the bitness of the other.
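If you'd rather script the registry check than run the VBScript, the same 64-bit/32-bit diagnosis can be done from Python's standard winreg module. A sketch only; the ProgID string is taken from the question, and an exception on the first lookup means the ProgID isn't registered at all:

import winreg

PROGID = "CSTStudio.application"  # from the question

def server_kind(clsid, view):
    """Return 'LocalServer32' (.exe) or 'InprocServer32' (.dll) if the
    CLSID has a server registered in the given registry view, else None."""
    for kind in ("LocalServer32", "InprocServer32"):
        try:
            winreg.CloseKey(winreg.OpenKey(
                winreg.HKEY_CLASSES_ROOT, r"CLSID\%s\%s" % (clsid, kind),
                0, winreg.KEY_READ | view))
            return kind
        except OSError:
            pass
    return None

# Raises FileNotFoundError if the ProgID isn't registered at all.
clsid = winreg.QueryValue(winreg.HKEY_CLASSES_ROOT, PROGID + r"\CLSID")
print("64-bit view:", server_kind(clsid, winreg.KEY_WOW64_64KEY))
print("32-bit view:", server_kind(clsid, winreg.KEY_WOW64_32KEY))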

kdb - persisting functions on kdb server & Context management

I see a lot of info regarding serializing tables on kdb, but is there a suggested best practice for getting functions to persist on a kdb server? At present, I am loading a number of .q files in my startup q.q on my local machine and have duplicated those .q files on the server for when it reboots.
As I edit, add, and change functions, I do so on my local dev machine in a number of .q files, all referencing the same context. I then push them one by one to the server using code similar to the below, which works for now, but I am pushing the functions to the server, then manually copying each .q file, and then manually editing the q.q file on the server.
\p YYYY;
h:hopen `:XXX.XXX.XX.XX:YYYY;
funcs:raze read0 hsym `$"./funcs/funcsAAA.q";
funcs,:raze read0 hsym `$"./funcs/funcsBBB.q";  / append rather than overwrite
funcs,:raze read0 hsym `$"./funcs/funcsCCC.q";
h funcs;
I'd like to serialize them on the server (and conversely fetch them when the system reboots). I've dabbled with this on my local machine and it seems to work when I put these in my startup q.q:
`.AAA set get `:/q/AAAfuncs
`.BBB set get `:/q/BBBfuncs
`.CCC set get `:/q/CCCfuncs
My questions are:
Is there a more elegant solution to serialize and call the functions on the server?
Is there a clever way to edit the q.q on the server to add the `.AAA set get `:/q/AAAfuncs lines?
Am I thinking about this correctly? I recognize this could be dangerous in a prod environment.
References: KDB Workspace Organization
In my opinion (and experience) all q functions should be in scripts that the (production) kdb instance can load directly using either \l /path/to/script.q or system"l /path/to/script.q", either from local disk or from some shared mount. All scripts/functions should ideally be loaded on startup of that instance. Functions should never have to be defined on the fly, or defined over IPC, or written serialised and loaded back in, in a production instance.
Who runs this kdb instance you're interacting with? Who is the admin? You should reach out to the admins of the instance to have them set up a mechanism for having your scripts loaded into the instance on startup.
An alternative, if you really can't have your function defined server side, is to define your functions in your local instance on startup and then you send the function calls over IPC, e.g.
system"l /path/to/myscript.q"; /make this load every time on startup
/to have your function executed on the server without it being defined on the server
h:hopen `:XXX.XXX.XX.XX:YYYY;
res:h(myfunc1;`abc);
This loads the functions in your local instance but sends myfunc1 to the remote server for evaluation, along with the input parameter `abc.
Edit: Some common methods for "loading every time on startup" include:
Loading a script from the startup command line, aka
q myscript.q -p 1234 -w 10000
You could have a master script which loads subscripts.
Load a database or a directory that contains scripts from the startup command line, aka
q /path/to/db -p 1234 -w 10000
Jeff Borror mentions this here: https://code.kx.com/q4m3/14_Introduction_to_Kdb%2B/#14623-scripts and here: https://code.kx.com/q4m3/14_Introduction_to_Kdb%2B/#14636-scripts
Like you say, you can have a q.q script in your QHOME.

Hyperledger Sawtooth Supply Chain transaction example in python

I successfully built and ran the transaction processor for supply chain on Ubuntu 16.04. Now I would like to create a client transaction using the Python SDK. I referred to
https://sawtooth.hyperledger.org/docs/core/nightly/1-2/_autogen/sdk_submit_tutorial_python.html
and
https://sawtooth.hyperledger.org/docs/supply-chain/nightly/master/family_specification.html#transactions
as references.
But so far the validator always rejects my transaction and calls it invalid. My TP is running correctly and is receiving the transaction but is unable to deserialize the payload.
Does anyone have an example script in python for creating a transaction? For example creating a new agent or fish?
Now it works. I was able to generate the .proto files from https://github.com/hyperledger/sawtooth-supply-chain/tree/master/protos for Python. After installing the supply-rest-api, the validator accepts my payload.
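For anyone landing here, the rough shape of such a script follows the submit tutorial linked above, with the payload swapped for the supply-chain protobufs. A sketch, assuming payload_pb2 was generated with protoc from the repo's payload.proto; verify the family name/version, namespace, and field names against the family specification and your generated module:

import hashlib
import time
import urllib.request

from sawtooth_signing import create_context, CryptoFactory
from sawtooth_sdk.protobuf.transaction_pb2 import Transaction, TransactionHeader
from sawtooth_sdk.protobuf.batch_pb2 import Batch, BatchHeader, BatchList

import payload_pb2  # generated: protoc --python_out=. payload.proto

FAMILY, VERSION = 'supply_chain', '1.1'  # check against the family spec
# First 6 hex chars of sha512 of the family name, per common convention:
NAMESPACE = hashlib.sha512(FAMILY.encode()).hexdigest()[:6]

context = create_context('secp256k1')
signer = CryptoFactory(context).new_signer(context.new_random_private_key())
pub = signer.get_public_key().as_hex()

# CREATE_AGENT payload; enum and field names assumed from payload.proto.
payload = payload_pb2.SCPayload(
    action=payload_pb2.SCPayload.CREATE_AGENT,
    timestamp=int(time.time()),
    create_agent=payload_pb2.CreateAgentAction(name='agent007'),
).SerializeToString()

header = TransactionHeader(
    family_name=FAMILY, family_version=VERSION,
    inputs=[NAMESPACE], outputs=[NAMESPACE], dependencies=[],
    signer_public_key=pub, batcher_public_key=pub,
    nonce=str(time.time()),
    payload_sha512=hashlib.sha512(payload).hexdigest(),
).SerializeToString()

txn = Transaction(header=header, header_signature=signer.sign(header),
                  payload=payload)

batch_header = BatchHeader(signer_public_key=pub,
                           transaction_ids=[txn.header_signature],
                           ).SerializeToString()
batch = Batch(header=batch_header,
              header_signature=signer.sign(batch_header),
              transactions=[txn])

# Submit to the REST API (default port 8008).
req = urllib.request.Request(
    'http://localhost:8008/batches',
    data=BatchList(batches=[batch]).SerializeToString(),
    headers={'Content-Type': 'application/octet-stream'})
print(urllib.request.urlopen(req).read())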

What events (.net, WMI, etc.) can I hook to take an action when a PowerShell module is imported?

I want to create a listener in PowerShell that can take an action when an arbitrary PowerShell module is imported.
Is there any .net event or WMI event that is triggered during module import (manually or automatically) that I can hook and then take an action if the module being imported matches some criteria?
Things that I have found so far that might be components of a solution:
Module event logging
Runspace pool state changed
Triggering PowerShell when an event log entry is created. Maybe not directly useful, but if we could hook the same event from within a running PowerShell process, that might help.
Use a PowerShell profile to load the PowerShellConfiguration module.
Create a proxy function for Import-Module to check whether the module being imported matches one that needs configuration loaded for it. In testing, Import-Module isn't called when autoloading imports a module, so this doesn't catch every imported module.
Context
I want to push the limits of aspect-oriented programming/separation of concerns/DRY in PowerShell, where module state (API keys, API root URLs, credentials, database connection strings, etc.) can all be set via Set functions that only change in-memory, module-scoped internal variables. An external system could then pull those values from any arbitrary means of persistence (psd1, PSCustomObject, registry, environment variables, JSON, YAML, database query, etcd, web service call, or anything else appropriate to your specific environment).
The problem keeps coming up in the modules we write, and it is made even more painful when trying to support PowerShell Core cross-platform, where different means of persistence might not be available (like the registry) but may be the best option for some people in their environment (Group Policy pushing registry keys).
Supporting an infinitely variable means of persisting configuration within each module is the wrong way to handle this, but it is what is done across many modules today. The result is varying levels of compatibility, not because the core functionality doesn't work, but simply because of how each module persists and retrieves configuration information.
The method of persisting and then loading some arbitrary module configuration should be independent of the module's implementation. But to do that, I need a way to know when the module is loaded, so that I can trigger pulling the appropriate values from whatever the right persistence mechanism is in the particular environment and then configure the module with the appropriate state.
An example of how I think this might work: maybe there is a .NET event on the runspace object that is triggered when a module is loaded. This might have to be tied to a WMI event that executes each time a PowerShell runspace is instantiated. If we had a PowerShellConfiguration module that knew which modules it had been set up to load configuration into, then the WMI event could trigger the import of the PowerShellConfiguration module, which on import would start listening to the .NET event for module imports and call the various configuration-related Set methods of a module when it sees that module imported.

Creating a simple command line interface (CLI) using a python server (TCP sock) and few scripts

I have a Linux box and I want to be able to telnet into it (port 77557) and run a few required commands without giving access to the whole Linux box. So I have a server listening on that port, which echoes the entered command back to the screen (for now):
telnet 192.168.1.100 77557
Trying 192.168.1.100...
Connected to 192.168.1.100.
Escape character is '^]'.
hello
You typed: "hello"
NOW:
I want to create a lot of commands that each take some args and have error codes.
Has anyone done this before?
It would be great if I could have the server, upon initialization, go through each directory and execute its __init__.py file; in turn, the __init__.py file of each command would call into a main template lib API (e.g. RegisterMe()) and register itself with the server as a function callback.
At least this is how I would do it in C/C++, but I want the best Pythonic way of doing this.
/cmd/
/cmd/myreboot/
/cmd/myreboot/__init__.py
/cmd/mylist/
/cmd/mylist/__init__.py
... etc.
In /cmd/myreboot/__init__.py:
from myMainCommand import RegisterMe
RegisterMe(name="reboot",args=Arglist, usage="Use this to reboot the box", desc="blabla")
So, repeating this creates a list of commands; when you enter a command in the telnet session, the server goes through the list, matches the command, and passes the args to it. The command does the job and prints success or failure to stdout.
Thx
I would build this app using a combination of the cmd2 and RPyC modules.
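For what it's worth, a minimal cmd2 skeleton for the command side might look like the sketch below (the class and command bodies are placeholders based on the question's examples); RPyC would then handle the remote-execution half:

import cmd2

class BoxShell(cmd2.Cmd):
    """Hypothetical shell exposing a couple of whitelisted box commands."""
    prompt = 'box> '

    def do_reboot(self, args):
        """Use this to reboot the box."""
        self.poutput('rebooting...')

    def do_mylist(self, args):
        """List something on the box."""
        self.poutput('listing...')

if __name__ == '__main__':
    BoxShell().cmdloop()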
Twisted's web server does something kinda-sorta like what you're looking to do. The general approach used is to have a loadable python file define an object of a specific name in the loaded module's global namespace. Upon loading the module, the server checks for this object, makes sure that it derives from the proper type (and hence has the needed interface) then uses it to handle the requested URL. In your case, the same approach would probably work pretty well.
Upon seeing a command name, import the module on the fly (see the importlib docs or the built-in __import__ function's documentation for how to do this), look for an instance of "command", and then use it to parse your argument list, do the processing, and return the result code.
There likely wouldn't be much need to pre-process the directory on startup, though you certainly could do this if you prefer it to on-the-fly loading.
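A sketch of that approach under the question's directory layout; the module-level name "command" and the error-code convention are inventions for illustration, not an established API:

import importlib.util
import pathlib
import shlex
import socketserver

COMMAND_DIR = pathlib.Path('/cmd')  # layout from the question

def load_command(name):
    """Import /cmd/<name>/__init__.py on the fly and return its
    module-level `command` object, or None if missing."""
    init = COMMAND_DIR / name / '__init__.py'
    if not init.is_file():
        return None
    spec = importlib.util.spec_from_file_location(name, init)
    module = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(module)
    return getattr(module, 'command', None)

class Handler(socketserver.StreamRequestHandler):
    def handle(self):
        for line in self.rfile:
            parts = shlex.split(line.decode(errors='replace'))
            if not parts:
                continue
            cmd = load_command(parts[0])
            if cmd is None:
                self.wfile.write(b'ERR: unknown command\n')
                continue
            code = cmd(parts[1:])  # command parses its args, returns an error code
            self.wfile.write(('EXIT %d\n' % code).encode())

if __name__ == '__main__':
    socketserver.TCPServer(('0.0.0.0', 77557), Handler).serve_forever()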