PowerShell Azure Function: How to fix "Failed to start a new language worker for runtime: powershell"?

In an Azure Function App containing one PowerShell function I get the following log message regularly:
"Failed to start a new language worker for runtime: powershell."
It is logged at "Error" level and thus triggers our error alert notifications. I'm not entirely sure when this message appears. It might appear around restarts of the Function App, which would explain it somewhat. I think I remember it appearing during normal function operation, but I might be mistaken.
There is a rather involved thread over here about a similar message for the dotnet runtime that suggests there are configuration options that may help: Azure Function - Failed to start a new language worker for runtime: dotnet-isolated
My Function App runtime version is ~4, PowerShell Core version is 7.0, and the platform is 64-bit Windows.
What is the error message trying to tell me? Can I ignore it? Is there a configuration setting I can add to fix it?

After monitoring this for a while over the course of multiple deployments and restarts of PowerShell-based Function Apps, my conclusion is this:
The error "Failed to start a new language worker for runtime: powershell." only appears when restarting the Function App. My takeaway is therefore that it can be ignored.

Related

Rundeck jobs fail when powershell script hits any error. NonZeroResultCode

Rundeck 4.8.0 community version on Red Hat 9 Linux with a Windows node.
My Rundeck jobs call PowerShell (.ps1) scripts on the Windows node.
If any error is encountered in the script, the Rundeck job dies.
The Rundeck output gives the NonZeroResultCode message:
NonZeroResultCode: [WinRMPython] Result code: 1
There's more code that needs to run after the point where the error occurred, but Rundeck just dies and doesn't continue the rest of the .ps1.
I previously used Rundeck version 3.something, I think it was 3.9.
If there was an error in the script, such as a failed get or set, the Rundeck console would just display the text of the error in red and continue.
Now I know I can change my code and add try/catch statements, -ErrorAction SilentlyContinue, and so on. However, it makes no sense to me that Rundeck takes it upon itself to kill my script because a get or a set failed.
I want to be the one to decide whether to exit the script or not; I don't want Rundeck making that decision.
Can this behavior be changed?
thanks in advance.
That's the default Rundeck behavior.
You can attach an error handler to that script step (to any step, actually); for example, the error handler could run recovery script code when your step fails.
The error handler feature is designed for exactly this kind of scenario; take a look at this.
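The mechanism behind the NonZeroResultCode message is simply the exit code: Rundeck marks a script step failed when the script exits non-zero. A minimal sketch of keeping that decision in your own hands, written in Python for illustration (the node scripts in the question are PowerShell, where the analogous pattern is try/catch plus an explicit exit):

```python
import sys

def run_step(tasks):
    """Run every task, log failures, and return the exit code WE choose.
    Rundeck only sees the final exit code, so the step keeps running
    past individual errors instead of dying at the first one."""
    failures = []
    for name in tasks:
        try:
            if name.startswith("bad"):  # stand-in for a get/set that fails
                raise RuntimeError(f"{name} failed")
            print(f"{name} ok")
        except RuntimeError as err:
            print(f"warning: {err}, continuing", file=sys.stderr)
            failures.append(name)
    # You, not the runner, decide what counts as step failure:
    # here the step only fails if every task failed.
    return 0 if len(failures) < len(tasks) else 1

exit_code = run_step(["get-config", "bad-get", "set-config"])
```

In a real step you would finish with `sys.exit(exit_code)`; the key point is that a zero exit keeps the Rundeck job going no matter what happened inside.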

Debugging JavaScript in Edge & VS Code causes DCOM 10016 Event / Access violation

Environment: Windows 10
IDE: Visual Studio Code
Extensions: Live Server v5.7.5 by Ritwick Dey and Microsoft Edge Tools for VS Code v2.1.0
When I am debugging JavaScript files, if I put a breakpoint in an exported class, I get the error shown in the image below.
I cleared the Windows System log, and right after I start debugging and get the error, a new entry is in the Windows system log. This happens every time without fail. The error in the Windows System log is:
The application-specific permission settings do not grant Local Activation permission
for the COM Server application with CLSID
{2593F8B9-4EAF-457C-B68A-50F6B8EA6B54}
and APPID
{15C20B67-12E7-4BB6-92BB-7AFF07997402}
to the user DOMAIN\local_user SID (S-1-5-21-2158192427-3696246665-2163083460-1135) from
address LocalHost (Using LRPC) running in the application container Unavailable SID
(Unavailable). This security permission can be modified using the Component Services
administrative tool.
My question is how do I fix this issue?
Update 7/26/2022:
If I remove the breakpoint from the constructor of the class and put it elsewhere in the class, it works without any errors. The error occurs if the breakpoint is in the constructor.
I found the answer, and it is not anything above.
Well, I finally solved the problem. I am updating this answer so that someone else can find it without going down all the wrong paths that I went down. The problem was not any of the tools; it was the code. While technically the code was correct, executing it with a breakpoint caused the error described above. I was able to fix the problem by moving all the class member variables to the top of the class, before all member functions. The error only occurs when you add a breakpoint before the member variables are defined. Code analyzers say there is nothing wrong with the code. The error message could be more informative!
If you want to see example code associated with this problem, see this post.

Running scalapbc command from a thread pool

I am trying to run the scalapbc command from a thread pool, with each thread running the scalapbc command.
When I do that, I get an error of the form:
# A fatal error has been detected by the Java Runtime Environment:
#
# SIGBUS (0x7) at pc=0x00007f32dd71fd70, pid=8346, tid=0x00007f32e0018700
As per my Google search, this issue occurs when the /tmp folder is full or is being accessed by multiple processes simultaneously.
My question is: is there a way to issue scalapbc commands using threading without getting the above error? How can I make sure that the temp folders used by the individual threads don't interfere with each other?
This issue occurs most of the time when I run the code, but sometimes the build passes as well.
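One way to keep the threads from stepping on each other in /tmp is to give every invocation its own private temporary directory. The sketch below assumes scalapbc and the JVM it launches honor TMPDIR / java.io.tmpdir, which is worth verifying for your installation; the demo runs a stand-in command instead of scalapbc so the isolation is visible:

```python
import os
import subprocess
import sys
import tempfile
from concurrent.futures import ThreadPoolExecutor

def run_with_private_tmp(argv):
    """Run one command with a throwaway temp dir of its own, pointing
    both TMPDIR and java.io.tmpdir at it so parallel invocations never
    share /tmp state. Real use would look something like:
    run_with_private_tmp(["scalapbc", "--scala_out=gen", "a.proto"])"""
    with tempfile.TemporaryDirectory(prefix="scalapbc-") as tmp:
        env = dict(os.environ,
                   TMPDIR=tmp,
                   JAVA_TOOL_OPTIONS=f"-Djava.io.tmpdir={tmp}")
        proc = subprocess.run(argv, env=env, capture_output=True, text=True)
        return proc.returncode, proc.stdout

# Demo with a stand-in command that just reports its temp dir;
# each thread sees a different, private directory.
probe = [sys.executable, "-c", "import tempfile; print(tempfile.gettempdir())"]
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(run_with_private_tmp, [probe] * 4))
```

Because each temp dir is created and deleted per call, a full or contended /tmp stops being shared state between the threads.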

Can I trap the Informatica Amazon S3 bucket "name doesn't match standards" error?

In Informatica we have mapping source qualifiers connecting to Amazon Web Services (AWS).
We often, and erratically, get a failure saying that our S3 bucket names do not comply with naming standards. When we restart the workflows, they continue on successfully every time.
Is there a way to trap for this error specifically and then maybe call a command object to restart the workflow via pmcmd?
How are you starting the workflows in regular runs?
If you are using a shell script, you can add logic to restart the workflow when you see a particular error. I created a script a while ago to restart workflows on a particular error.
In a nutshell it works like this:
start workflow (with pmcmd)
#in case of an error
check repository db and get the error
if the error is specific to s3 bucket name
restart the workflow
Well... It's possible for example to have workflow one (W1):
your_session --> cmd_touch_file_if_session_failed
and another workflow (W2), running continuously:
event_wait_for_W1_file --> pmcmd_restart_W1 --> delete_watch_file
Although it would be a lot better to nail down the cause of your failures and get it resolved.

failed using cuda-gdb to launch program with CUPTI calls

I'm having this weird issue: I have a program that uses the CUPTI callbackAPI to monitor the kernels in the program. It runs well when launched directly; but when I run it under cuda-gdb, it fails with the following error:
error: function cuptiSubscribe(&subscriber, (CUpti_CallbackFunc)my_callback, NULL) failed with error CUPTI_ERROR_NOT_INITIALIZED
I've tried all the examples in CUPTI/samples and concluded that programs using the callbackAPI or activityAPI will fail under cuda-gdb (they are all well-behaved without cuda-gdb), but the failure mode differs:
If I have calls from the activityAPI, then once I run it under cuda-gdb, it hangs for a minute and then exits with this error:
The CUDA driver has hit an internal error. Error code: 0x100ff00000001c Further execution or debugging is unreliable. Please ensure that your temporary directory is mounted with write and exec permissions.
If I have calls from the callbackAPI, like my own program, then it fails much sooner with the same error:
CUPTI_ERROR_NOT_INITIALIZED
Any experience with this kind of issue? I'd really appreciate any help!
According to an NVIDIA forum posting here, also referred to here, the CUDA "tools" must be used one at a time. These tools include:
CUPTI
any profiler
cuda-memcheck
a debugger
Only one of these can be "in use" on a code at a time. It is fairly easy for developers to use a profiler, or cuda-memcheck, or a debugger independently; but a possible takeaway for those using CUPTI who also wish to use another CUDA "tool" on the same code is to provide a coding method to disable CUPTI in their application when they wish to use the other tool.