VS Code taking an exorbitant amount of CPU

I'm consistently having problems with VS Code taking up tons of CPU:
Now maybe this is just what VS Code needs, but what is it processing? Is there a way to break this down by extension? Is there anything else I should be checking?
Note: I have already set "terminal.integrated.cursorBlinking": false, which at least used to cause perf problems in VS Code [issue]
The macOS process explorer shows these VS Code processes:
This is after I restarted and reduced down to a single session, so CPU isn't a problem at the moment. I'm just showing this because I'd really like to map these "helper" processes to the various extensions, but I can't find any way to do it.
On closer examination, it's apparent that the extensions are not 1:1 with helper processes, so this mapping is not proving very useful yet.
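Short of an official mapping, one coarse diagnostic is to aggregate CPU by command name so the helper processes group together. This is a generic Unix sketch, not VS Code-specific; VS Code's own `code --status` command also prints a per-process breakdown, which can help here:

```shell
# Sum %CPU per full command name; repeated "Code Helper" style processes
# collapse into one line each. Portable-ish across macOS and Linux.
ps -Ao pcpu=,comm= | awk '{
  name = $2
  for (i = 3; i <= NF; i++) name = name " " $i  # command names may contain spaces
  cpu[name] += $1
}
END { for (c in cpu) printf "%6.1f  %s\n", cpu[c], c }' | sort -rn | head -15
```

This won't attribute CPU to individual extensions (they share extension-host processes), but it does show which process family is actually hot.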

Related

VS Code rg process taking all the CPU

My VS Code is behaving strangely. When I check out or pull with many changed files, it spawns many processes called rg that drive CPU usage to 100%. The problem persists even after I quit VS Code; I have to kill the processes manually.
I found some old threads about disabling symlink following with "search.followSymlinks": false, but it didn't help. Might it be some indexing problem?
I have also noticed that "Initializing JS/TS language features" spins but never completes, and the whole UI lags. Happy to provide more details (extensions, etc.).
I couldn't find a thread describing the same problem from around 2021/22, so sorry if this is a duplicate.
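For what it's worth, rg here is ripgrep, which VS Code bundles as its search backend. A sketch for confirming and cleaning up orphaned rg processes by hand (exact-name matching assumed; adjust if your process list shows a different name):

```shell
# List any ripgrep processes still alive (exact name match).
pgrep -x rg || echo "no rg processes found"

# If they survive after VS Code exits, terminate them explicitly.
pkill -x rg || true
```

This is only a workaround; if new rg processes keep spawning after every pull, the underlying indexing trigger is still there.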

Meaning of SIGQUIT in Swift 3, Xcode 8.2.1

I am trying to create a custom keyboard for iOS 10 that acts like a T9 keyboard. When the user switches to my custom keyboard, the app extension reads a list of about 10,000 words from a txt file and builds a trie out of them.
However, I keep getting a "SIGQUIT" error the first time I try to use the keyboard. Rerunning the keyboard right after it fails usually works. Xcode doesn't give me any explanation for the failure other than the SIGQUIT error on some assembly line.
So, my question is: for what reasons might Xcode throw a SIGQUIT error? I have tried debugging to no avail, and googling SIGQUIT does not return any useful information. I considered that my keyboard might be using too many resources or taking too long on startup, but I checked the CPU usage and it peaked at less than 1%. Similarly, the memory used was something like 25 MB, which doesn't seem terrible.
Keyboard extensions have a much lower memory limit than apps. Your extension was likely killed by the operating system.
See: https://developer.apple.com/library/content/documentation/General/Conceptual/ExtensibilityPG/ExtensionCreation.html
Memory limits for running app extensions are significantly lower than the memory limits imposed on a foreground app. On both platforms, the system may aggressively terminate extensions because users want to return to their main goal in the host app. Some extensions may have lower memory limits than others: For example, widgets must be especially efficient because users are likely to have several widgets open at the same time.
Yeah, it seems like you have to Run, then Stop, and then it'll run fine on the simulator or device.

Why shouldn't babel-node be used in production?

The babel-node docs carry a stern warning:
Not meant for production use
You should not be using babel-node in production. It is unnecessarily heavy, with high memory usage due to the cache being stored in memory. You will also always experience a startup performance penalty as the entire app needs to be compiled on the fly.
Let's break this down:
Memory usage – huh? All modules are 'cached' in memory for the lifetime of your application anyway. What are they getting at here?
Startup penalty – how is this a performance problem? Deploying a web app already takes several seconds (or minutes if you're testing in CI). Adding half a second to startup means nothing. In fact if startup time matters anywhere, it matters more in development than production. If you're restarting your web server frequently enough that the startup time is an issue, you've got much bigger problems.
Also, there is no such warning about using Babel's require hook (require('babel-register')) in production, even though this presumably does pretty much exactly the same thing as babel-node. For example, you can do node -r babel-register server.js and get the same behaviour as babel-node server.js. (My company does exactly this in hundreds of microservices, with no problems.)
Is Babel's warning just FUD, or am I missing something? And if the warning is valid, why doesn't it also apply to the Babel require hook?
Related: Is it okay to use babel-node in production
– but that question just asks if production use is recommended, and the answers just quote the official advice, i.e. "No". In contrast, I am questioning the reasoning behind the official advice.
babel-node
The production warning was added to resolve this issue:
Without the kexec module, you can get into a really ugly situation where the child_process dies but its death or error never bubbles up. For more info see https://github.com/babel/babel/issues/2137.
It would be great if the docs on babel-node explained that it is not aimed for production and that without kexec installed that it has bad behaviour.
(emphasis mine)
The link for the original issue #2137 is dead, but you can find it here.
So there seem to be two problems here:
"very high memory usage on large apps"
"without kexec installed that it has bad behaviour"
These problems lead to the production warning.
babel-register
Also, there is no such warning about using Babel's require hook (require('babel-register')) in production
There may be no warning, but it is not recommended either. See this issue:
babel-register is primarily recommended for simple cases. If you're running into issues with it, it seems like changing your workflow to one built around a file watcher would be ideal. Note that we also never recommend babel-register for production cases.
I don't know enough about Babel's and Node's internals to give a full answer; some of this is speculation, but the caching babel-node does is not the same thing as the caching Node does.
babel-node's cache is another cache on top of Node's require cache, and at best it caches the resulting transpiled source code (before it's fed to Node).
Node's require cache, by contrast, holds the evaluated module: after evaluation, only things reachable from the exports stay referenced, and anything no longer reachable is eventually GCed.
The startup penalty will depend on the contents of your .babelrc, but you're forcing Babel to do the legwork of translating your entire source code every time the app is executed. Even with a persistent cache, babel-node would still need to do a cache fetch and validation for each file of your app.
In development, more appropriate tools like webpack in watch mode can, after the cold start, re-translate only modified files, which is much faster than even a babel-node with a perfectly optimized cache.
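The usual alternative for production is compiling ahead of time with the Babel CLI and running the output with plain node, sketched here as a small deploy function. The src/dist paths and the presence of @babel/cli in the project are assumptions:

```shell
# Hypothetical deploy step: transpile once at build time, then run plain node.
deploy() {
  # Build: compile src/ into dist/ with Babel (requires @babel/cli installed).
  npx babel src --out-dir dist --copy-files
  # Run: plain node, no on-the-fly compilation, no babel-node in production.
  node dist/server.js
}
```

This moves the transpile cost to deploy time, so the production process pays neither babel-node's startup compile nor its in-memory cache.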

Unusual spikes in CPU utilization in CentOS 6.6 while starting pycharm

My system has been behaving strangely for the last couple of days. I am a regular user of the PyCharm IDE, and it used to run on my system very smoothly, with no hiccups at all. But for the last couple of days, whenever I start PyCharm, my CPU utilization behaves strangely, as in the image: Unusual CPU util
I am confused because when I look at the process list or run ps/top in a terminal, no process is using more than 1 or 2% CPU. So I am not sure where these resources are being consumed.
By unusual CPU utilization I mean that CPU1 is pegged at 100% for a couple of minutes, then CPU2; that is, only one CPU's utilization goes to 100% for a while, followed by another's. This goes on for 10 to 20 minutes, then the system returns to normal.
P.S.: I don't think this problem is specific to PyCharm, as I see similar behaviour while doing other work too; it's just that PyCharm triggers it every time.
POSSIBLE CAUSE: I suspect you have a thrashing problem. The CPU usage of your applications is low because none of them is actually getting much useful work done; all the processing time is being spent moving memory pages to and from the disk. Your CPU usage probably settles down after a while because your application has entered a state where its working set has shrunk to the point where it can all be held in memory at once.
This has probably happened because one of the apps on your machine is handling a larger data set than before, and so requires more addressable memory. Another possibility is that, for some reason, many more apps are running on your machine.
POTENTIAL SOLUTION: There are several ways to address this. The simplest is to put more RAM in your machine. If that doesn't work or isn't possible, you'll have to figure out which app is the memory hog. You may simply have to work with smaller problems/data sets, or offload some of the apps onto a different box.
MIGRATING CPU LOAD: Operating systems move tasks (user apps, kernel threads) around for many different reasons, ranging from plain randomness to certain apps having more of their addressable memory in one bank than another. Given that you are probably thrashing heavily, I'm not surprised that the processor your app runs on changes over time.
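The thrashing hypothesis is easy to check on a Linux box like this CentOS machine by reading the kernel's swap counters directly (a diagnostic sketch; the /proc paths are Linux-specific):

```shell
# Configured vs free swap; a big used gap means pages are being swapped out.
grep -i '^Swap' /proc/meminfo

# Cumulative pages swapped in/out since boot. Run this twice during a CPU
# spike; rapidly growing pswpin/pswpout counters are the thrashing signature.
grep -E '^pswp(in|out)' /proc/vmstat
```

If the counters barely move during a spike, the cause is something other than paging.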

Powershell memory usage - expensive?

I am new to PowerShell but have written a few scripts that run on a Windows 2003 server. It's definitely more powerful than cmd scripting (maybe because I have a programming background). However, as I delve further, I have noticed that:
Each script launched runs under its own powershell process, i.e. you see a new powershell process for each script.
The scripts I tested for memory are really simple, say, build a string or query an environment variable, then Start-Sleep for 60 seconds. So nothing needy (as far as memory usage goes). But each process takes around >30 MB. Call me stingy, but as there are memory-intensive applications scheduled to run every day, and I need to schedule a few PowerShell scripts to run regularly, and maybe some scripts running continuously as a service, I'd certainly try to keep memory consumption as low as possible. This is because we recently experienced a large application failure due to lack of memory.
I have not touched C# yet, but would anyone reckon it may sometimes be better to write the task in C#?
Meanwhile, I've seen posts about memory leaks in PowerShell. Am I right to think that any memory allocated by a script lives within the powershell process's address space, so that when the script terminates and powershell exits, that memory is reclaimed?
My PowerShell.exe 2.0 by itself (not running a script) is ~30 MB on XP. This shouldn't worry you much given the average memory per machine these days. Regarding memory leaks: there have been cases where people use third-party libraries that leak memory when objects aren't properly disposed of. To address those you have to manually invoke the garbage collector using [gc]::Collect(), but this is rare. Other times I've seen people use Get-Content to read a very large file and assign it to a variable before using it; this takes a lot of memory as well. In that case you can use the pipeline to read the file a portion at a time and reduce your memory footprint.
1 - Yes, a new process is created. The same is true when running a cmd script, vb script, or C# compiled executable.
2 - Loading the powershell host and runtime will take some non-trivial amount of memory, which will vary from system to system and version to version. It will generally be a heavier-weight process than a cmd shell or a dedicated C# exe. For those MB, you are getting the rich runtime and library support that makes Powershell so powerful.
General comments:
The OS allocates memory per-process. Once a process terminates, all of its memory is reclaimed. This is the general design of any modern OS, and is not specific to Powershell or even Windows.
If your team is running business-critical applications on hardware such that a handful of 30MB processes can cause a catastrophic failure, you have bigger problems. Opening a browser and going to Facebook will eat more memory than that.
In the time it takes you to figure out some arcane batch script solution, you could probably create a better solution in Powershell, and your company could afford new dedicated hardware with the savings in billable hours :-)
You should use the tool which is most appropriate for the job. Powershell is often the right tool, but not always. It's great for automating administrative tasks in a Windows environment (file processing, working with AD, scheduled tasks, setting permissions, etc, etc). It's less great for high-performance, heavily algorithmic tasks, or for complex coding against raw .NET APIs. For these tasks, C# would make more sense.
Powershell has huge backing/support from Microsoft (and a big user community!), and it's been made very clear that it is the preferred scripting environment for Windows going forward. All new server-side tech for Windows has powershell support. If you are working in admin/IT, it would be a wise investment to build up some skills in Powershell. I would never discourage someone from learning C#, but if your role is more IT than dev then Powershell will be the right tool much more often, and your colleagues are more likely to also understand it.
Powershell requires (much) more resources (RAM) than cmd, so if all you need is something quick and simple, it makes more sense to use cmd.
CMD uses native Win32 calls and Powershell uses the .Net framework. Powershell takes longer to load, and can consume a lot more RAM than CMD.
"I monitored a Powershell session executing Get-ChildItem. It grew to
2.5GB (all of it private memory) after a few minutes and was no way nearly finished. CMD “dir /o-d” with a small scrollback buffer
finished in about 2 minutes, and never took more than 300MB of
memory."
https://qr.ae/pGmwoe