Stagefright: in which process context are the OMX subsystem in Stagefright and the OMX core running?

I am facing some issues with the stagefright command-line utility, where I am unable to understand whether the OMX subsystem (OMX, OMXMaster) in Stagefright and the OMX core run in the current application's process or in a different process. Which part of the Stagefright code implements the communication between OMXCodec and the OMX subsystem? There is not much information on Google. I kindly request the readers to explain these concepts.

When an AwesomePlayer object is created, mClient.connect() is called, which invokes OMXClient's connect method.
In the implementation of OMXClient::connect, one can observe that the media.player service is retrieved, through which mOMX is initialized.
MediaPlayerService is registered through the instantiation invoked by MediaServer.
In other words, the native OMX implementation runs in the MediaServer process, whereas the proxy runs in the caller's context, which in the case of the stagefright command-line utility is the shell.
When a new component is allocated, the component can be either a SoftOMXComponent or a hardware-accelerated component. A SoftOMXComponent is created in the caller's context, whereas a hardware-accelerated component is created in MediaServer. This is managed through the two variables mLocalOMX and mRemoteOMX.

Debugging Azure IoT Edge modules using Visual Studio Code

I can't get local debugging of IoT Edge modules working in VS Code, but part of the problem could be that I don't understand what I'm doing in the steps.
I'm following the Microsoft guide. Can anyone explain why, when I run the command "Azure IoT Edge: Start IoT Edge Hub Simulator for Single Module" in VS Code, I need to pass an "input name"? Why does the simulator need to know this? I've got multiple input commands on my edge module, and the fact that I need to pass it makes me question what the simulator actually does. I want to be able to debug multiple inputs.
Also, in the same documentation, I can't see how it defines which module I want to run in the simulator. Am I missing something, or is the process confusing?
When you Start the IoT Edge Hub Simulator for a Single Module, you spawn two Docker containers. One is the edgeHub and the other is a testing utility. The testing utility acts as a server that you can send HTTP requests to, the requests specify the input name and the data. You can use this to send messages to various inputs on your module. Just looking at that, I understand why it is confusing to supply the input name to the simulator. But when you inspect the edgeHub container, you'll see the following environment values being passed:
"routes__output=FROM /messages/modules/target/outputs/* INTO BrokeredEndpoint(\"/modules/input/inputs/print\")",
"routes__r1=FROM /messages/modules/input/outputs/input2 INTO BrokeredEndpoint(\"/modules/target/inputs/input2\")",
"routes__r2=FROM /messages/modules/input/outputs/foo INTO BrokeredEndpoint(\"/modules/target/inputs/foo\")",
"routes__r3=FROM /messages/modules/input/outputs/input1 INTO BrokeredEndpoint(\"/modules/target/inputs/input1\")"
Just like on a real device, you need routes to talk to your module. The edgeHub container registers these routes with the values you supplied during the starting of the simulator. That input can be a comma-separated list. So if you are using more inputs, feel free to supply them when you start the simulator. Under the covers, that command runs:
iotedgehubdev start -i "input1,input2,foo"
Note: when I was testing this with the latest VS Code Extension, the first time I ran it, the textbox contained: "input1,input2".
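If you want to see for yourself how the input name is used, you can post a test message to the testing utility directly. The following is a minimal C# sketch; the endpoint (http://localhost:53000/api/v1/messages) and the payload shape are the defaults shown in the iotedgehubdev documentation, so treat them as assumptions and adjust to your setup:
// Minimal sketch: POST a test message to the simulator's testing utility.
// Port, route, and payload shape follow the iotedgehubdev docs; adjust as needed.
using System;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

class SimulatorClient
{
    static async Task Main()
    {
        using (var client = new HttpClient())
        {
            // "inputName" selects which of your module's inputs receives the message;
            // it must be one of the names you supplied when starting the simulator.
            var payload = "{\"inputName\": \"input1\", \"data\": \"hello world\"}";
            var content = new StringContent(payload, Encoding.UTF8, "application/json");
            var response = await client.PostAsync("http://localhost:53000/api/v1/messages", content);
            Console.WriteLine(response.StatusCode);
        }
    }
}
The edgeHub routes shown above then deliver the message to the matching input on your module.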

What events (.net, WMI, etc.) can I hook to take an action when a PowerShell module is imported?

I want to create a listener in PowerShell that can take an action when an arbitrary PowerShell module is imported.
Is there any .net event or WMI event that is triggered during module import (manually or automatically) that I can hook and then take an action if the module being imported matches some criteria?
Things that I have found so far that might be components of a solution:
- Module event logging
- Runspace pool state changed
- Triggering PowerShell when an event log entry is created (maybe not directly useful, but if we could hook the same event from within a running PowerShell process, that might help)
- Using a PowerShell profile to load a PowerShellConfiguration module
- Creating a proxy function for Import-Module to check whether the module being imported matches one that needs configuration loaded for it (in testing, Import-Module isn't called when module autoloading imports a module, so this doesn't catch every imported module)
Context
I want to push the limits of aspect-oriented programming, separation of concerns, and DRY in PowerShell. Module state (API keys, API root URLs, credentials, database connection strings, etc.) should all be settable via Set functions that only change in-memory, module-scoped internal variables, so that an external system can pull those values from any arbitrary means of persistence (psd1, PSCustomObject, registry, environment variables, JSON, YAML, database query, etcd, web service call, or anything else that is appropriate to your specific environment).
This problem keeps coming up in the modules we write, and it becomes even more painful when trying to support PowerShell Core cross-platform, where a given means of persistence might not be available (like the registry) even though it may be the best option for some people in their environment (Group Policy pushing registry keys).
Supporting an infinitely variable means of persisting configuration within each module that is written is the wrong way to handle this, yet it is what is done across many modules today. The result is varying levels of compatibility, not because the core functionality doesn't work, but simply because of how each module persists and retrieves configuration information.
The method of persisting and then loading some arbitrary module configuration should be independent of the module's implementation. But to do that, I need a way to know when the module is loaded, so that I can trigger pulling the appropriate values from whatever the right persistence mechanism is in the particular environment and then configure the module with the appropriate state.
An example of how I think this might work: maybe there is a .NET event on the runspace object that is triggered when a module is loaded. This might have to be tied to a WMI event that fires each time a PowerShell runspace is instantiated. If we had a PowerShellConfiguration module that knew which modules it had been set up to load configuration into, then the WMI event could trigger the import of the PowerShellConfiguration module, which on import would start listening to the .NET event for module imports into the runspace and call a module's various configuration-related Set methods when it sees that module imported.
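On the WMI half of that idea: as far as I know there is no WMI event for runspace creation (let alone module import), but process creation can be watched. Here is a hedged C# sketch using Win32_ProcessStartTrace as a rough stand-in for "a PowerShell process, and hence a runspace, was started"; it requires administrative rights and will not see runspaces hosted inside other processes:
using System;
using System.Management; // reference System.Management.dll

class PowerShellStartWatcher
{
    static void Main()
    {
        // Fires whenever a powershell.exe process starts (use pwsh.exe for PowerShell Core).
        // This approximates runspace creation at process granularity only.
        var query = new WqlEventQuery(
            "SELECT * FROM Win32_ProcessStartTrace WHERE ProcessName = 'powershell.exe'");
        using (var watcher = new ManagementEventWatcher(query))
        {
            watcher.EventArrived += (sender, e) =>
                Console.WriteLine("PowerShell started, PID {0}",
                    e.NewEvent.Properties["ProcessID"].Value);
            watcher.Start();
            Console.WriteLine("Watching for PowerShell process starts. Press Enter to stop.");
            Console.ReadLine();
            watcher.Stop();
        }
    }
}
A watcher like this could import-and-configure on process start, but detecting individual module imports inside that process would still need something like the Import-Module proxy or module event logging mentioned above.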

Borland C++: How can a Windows service shut itself down?

I have an old Windows service written in Borland C++ Builder that I need to extend so that it can shut itself down under certain conditions.
If I shutdown the service manually via the service control manager, it shuts down properly without any problems. So I thought, calling this->DoShutdown(); would be sufficient (this being an instance derived from TService). But this leaves the service in the state "Shutting down...". I could call ExitProcess afterwards, but this creates an entry in the event log that the service has been shut down unexpectedly.
So what is the proper way to make a Borland C++ Windows service shut down itself?
DoShutdown() is called by TService when it receives a SERVICE_CONTROL_SHUTDOWN request from the SCM while Windows is being shut down. DoShutdown() is not intended to be called directly in user code.
The easiest way to have your service terminate itself is to call its Controller() method (either directly, or via the global ServiceController() function), passing either SERVICE_CONTROL_STOP or SERVICE_CONTROL_SHUTDOWN in the CtrlCode parameter. Let the service handle the request as if it had come from the SCM, so it can act accordingly.

Sitecore commands with Autofac

I have created a Sitecore command which triggers an index rebuild.
I would like to be able to inject services with Autofac.
Therefore I have followed this tutorial: http://maze-dev.blogspot.be/2014/03/dependency-injection-in-custom-sitecore.html
After putting everything in place, it seems like the Sitecore scheduling task tries to create a new instance of this command, even though the dependencies are already injected in the command configuration class.
Is there anything else that needs to be done?
The problem is that a Sitecore scheduled task runs in a separate thread, and since the command is registered as InstancePerLifetimeScope (if following the example in the linked blog post), Autofac will inject a new instance in the scheduled task.
Instead, in your scheduled task you should probably get the command from the CommandManager, using something like:
var command = CommandManager.GetCommand("mynamespace:mycategory:mycommand");
and then call Execute on the command.
Now, since the CommandConfigurator at bootstrap time registers the resolved command instance in the static CommandManager, the instance can effectively be seen as a singleton, and it should be available fully injected in the scheduled task (if the command is retrieved through the CommandManager, that is.) If the command is also executed from elsewhere in your Sitecore solution, it will most likely be on another thread. In that case it is probably a good idea to consider if your command implementation is thread safe.
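To make that concrete, here is a minimal sketch of a scheduled-task class that resolves the command through the CommandManager rather than constructing a new one. The (Item[], CommandItem, ScheduleItem) signature is the conventional Sitecore task entry point; the class name, command name, and CommandContext usage are illustrative assumptions:
using Sitecore.Data.Items;
using Sitecore.Shell.Framework.Commands;
using Sitecore.Tasks;

namespace MyProject.Tasks
{
    public class RebuildIndexTask
    {
        // Conventional Sitecore scheduled-task entry point.
        public void Execute(Item[] items, CommandItem command, ScheduleItem schedule)
        {
            // Fetch the instance that CommandConfigurator registered at bootstrap,
            // instead of constructing a fresh, un-injected one.
            var indexCommand = CommandManager.GetCommand("mynamespace:mycategory:mycommand");
            if (indexCommand != null)
            {
                // Hypothetical context; pass whatever items your command expects.
                indexCommand.Execute(new CommandContext(items));
            }
        }
    }
}
Whether the CommandContext needs items or can be empty depends on what your command's Execute override actually reads from it.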

Scripting the Windows Azure command-line utilities

There is a Java-based server component responsible for remote management of Amazon virtual machines. I need to write an Azure adapter for this component.
I thought I would be better off using the Node.js-based command-line utilities for Azure management.
I want to know how to invoke these scripts from C# or Java and then process the output so that I can pass it to the calling server component.
For example, an instruction to create a new VM should return the instance ID to the calling method.
Basically, I need to script this logic into the adapter methods.
Any directions will be of great help.
-Sharath
Depending on the technology you choose, you have a few options:
Using the System.Management.Automation assembly, you can call any PowerShell script from a C#/.NET application (see the sketch after this list)
In Java, you can call a batch file that runs a PowerShell script (where you would invoke the Azure cmdlets). There's an interesting discussion about this on the MSDN forum.
And why not use the Service Management API? This is a REST API, which makes it possible to call it from .NET, Java, Node.js, ...
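For the first option, here is a minimal self-contained sketch of invoking PowerShell from C# via System.Management.Automation; the Get-Date script body is a placeholder for the Azure script you would actually run:
using System;
using System.Management.Automation; // reference System.Management.Automation.dll

class AzureScriptRunner
{
    static void Main()
    {
        using (PowerShell ps = PowerShell.Create())
        {
            // Placeholder script: substitute the Azure commands you need,
            // e.g. the command that creates a VM and emits its instance ID.
            ps.AddScript("Get-Date");

            // Invoke() runs the pipeline and returns the output objects,
            // which you can serialize and hand back to the calling component.
            foreach (PSObject result in ps.Invoke())
            {
                Console.WriteLine(result);
            }

            // Surface script errors to the caller instead of swallowing them.
            foreach (ErrorRecord error in ps.Streams.Error)
            {
                Console.Error.WriteLine(error);
            }
        }
    }
}
The same pattern works with ps.AddCommand/AddParameter if you prefer structured invocation over a script string.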