Wait for integrity test to finish (Do Silent^Integrity("/tmp/logfile")) - intersystems-cache

I would like to know how it's possible to run an integrity test without starting it in the background. I want to run it in the foreground and wait until it has finished.
The following runs in the background (http://docs.intersystems.com/cache20071/csp/docbook/DocBook.UI.Page.cls?KEY=GSA_manage):
Do Silent^Integrity("/tmp/logfile")
I also can't find the ^Integrity routine (in %SYS). How can I view its code?
Using InterSystems Caché 2008.
Thanks in advance,

In the %SYS namespace, you can run ^Integrity directly without providing a tag name, e.g.:
> Do ^Integrity
You should be able to view the source code in Caché Studio in your version, assuming you are in the %SYS namespace. I can pull it up fine in Caché 2010, though I understand that InterSystems has stopped shipping the underlying source for much of its standard codebase in more recent versions. If, in fact, the source for ^Integrity is not available on your system, you'll simply have to contact them for any information you need beyond what the documentation provides.
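If the routine source is installed, you can also list it from a %SYS terminal session by loading it into the routine buffer and printing it, e.g.:
> ZLOAD Integrity
> ZPRINT
(This is only a sketch; whether the INT source is present at all depends on your release, as noted above.)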

Related

How to investigate VS Code taking 30% of CPU although it is supposed to do nothing

My CPU usage oscillates between 20 and 30% according to the Windows Task Manager, and has been doing so for several hours now.
I expect this VS Code instance to do nothing.
How can I investigate what is going wrong?
I tried opening "Developer: Toggle Developer Tools", then going to the Performance tab and recording. Unfortunately it reports that most of the time is spent "idle" (which is what I would expect).
(I also tried to ask on Twitter without success https://twitter.com/apupier/status/1100348567926071296)
regards,
Based on the comments, it seems that what the Task Manager reports is VS Code's total CPU and memory usage.
A broad range of reasons could explain the observations you made:
1. Increased CPU and memory usage by VS Code.
2. Increased fan speed.
3. Your code being idle.
It can be the case that VS Code or one of its plugins is actually doing something even if you are not actively using it. Certainly, if it is open, the program will use some memory even when idle.
You can find more information on the CPU usage per VS Code extension by running code --status on the command line. You can also try executing code --disable-extensions to run VS Code without any extensions and see whether the CPU/memory usage is reduced.
The output of code --status includes version details, a process list with per-process CPU and memory usage, and workspace statistics.
There are some related issues on GitHub you could also look at; I checked them before writing this answer:
100% core CPU usage without apparent reason
Excess CPU usage
Excess CPU usage editing C file
It is usually an extension, e.g. Python IntelliSense. It is perhaps outsourcing processing for some scientific project aimed at the good of humanity. Fingers crossed.
Update 2022:
Earlier you could find them easily with VS Code's built-in Process Explorer: Help > "Open Process Explorer".
But the newer versions are very sneaky. They seem to have evolved to be difficult to catch while stealing your CPU. Disclaimer: the behavior may very well be an unintentional glitch, although it does not appear so.
Can you catch it in action?
It's as tough as catching a fly. As of Feb 2022, the moment you attempt to probe the CPU usage, either via VS Code's Help > Open Process Explorer or sometimes even the Windows Task Manager, it stops and vanishes like a fly. Then it stays inactive for some hours or a day. You forget about it and get busy coding, only to find the fans going crazy because it has sneaked back in and become active again. The newer version of the bug is perhaps programmed that way.
Nonetheless, with a lot of patience, you can sometimes catch them. Here is one instance, and yet it vanished before I could scroll to catch the name.
[Screenshot: VS Code Process Explorer]
Solution:
I don't have a reason to probe it beyond a limit, but a small monitoring script should be able to catch the culprit.
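For example, a minimal sketch of such a watchdog in Python; the 20% threshold, the process-name match, and the reliance on the third-party psutil package are all my assumptions:

# Hypothetical watchdog: poll VS Code helper processes and log any that
# show sustained CPU use, capturing the command line before it "vanishes".
import time
import psutil  # third-party; pip install psutil

THRESHOLD = 20.0  # percent of one core; tune to taste

while True:
    for proc in psutil.process_iter(["name", "cmdline"]):
        try:
            if "code" not in (proc.info["name"] or "").lower():
                continue
            cpu = proc.cpu_percent(interval=0.5)  # sample this process
            if cpu > THRESHOLD:
                cmd = " ".join(proc.info["cmdline"] or [])
                print(f"{time.strftime('%H:%M:%S')} pid={proc.pid} cpu={cpu:.0f}% {cmd}")
        except (psutil.NoSuchProcess, psutil.AccessDenied):
            pass  # process exited or is protected; skip it
    time.sleep(5)

Leave it running in a terminal; when the fans spin up, the log should already contain the offending process's command line.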
Personally, I just had to remove the "Python extension for Visual Studio Code (Python IntelliSense - Pylance)" and that was enough to resolve the issue.
IDEs are notoriously expensive to run. As soon as you open VS Code, it loads the program from your hard drive into RAM, which acts as a staging point for all the processes VS Code uses to manage its environment. Things like:
Overhead of the Electron framework upon which it is built
Checking for external file changes that need to be synchronized to the editor
Render pipeline
Child processes to support any extensions you have running
Terminal instances (and by extension anything running in those terminals)
Here's a nifty little extension I found after some quick Googling. It will show you the subprocesses running in VS Code and may help you identify exactly what is consuming the most resources. Do keep in mind that by killing some of those processes you may lose the associated functionality, and possibly even cause VS Code to crash. The only sure-fire way to keep it from taxing your CPU is to shut it down completely when you're not using it.
Perhaps you could try out another editor like Sublime, IntelliJ, or Atom and see if they act more as you expect when idle. Personally, I really love the features of JetBrains' IntelliJ (and its siblings: WebStorm, PhpStorm, etc.).
I got the same problem. It might have something to do with git operations. You might have deleted many projects from your current folder while git didn't register the deletions.
When you do something with the changes, git operations will use a lot of CPU.
The simplest solution is to create a new folder and start running VS Code in it. You can delete the whole old folder or leave it alone; it's up to you.

Start an application at system start without login

We have a new server running and some new programs doing import routines. So far so good... But there is one program that is placed in the autostart folder, so it doesn't run until an admin logs in, and it stops when we log out.
I'd like to put this one into a separate session so it can work without any interaction, by simply starting it with the Task Scheduler at startup. Is this the right way to do it? Is it safe if I log in later and log out?
Many thanks!
Edit: The application shows an icon in the task bar when running and can be configured through it. Is there anything I should know about this if I make the change?
Edit: It is not my application, I cannot rewrite it as a service.
I successfully added the application using the Task Scheduler at startup. Logging in and out does not quit the application, but no icon is shown. Please add details addressing my side questions and I'll mark your answer as the accepted one.
Edit: Ended up using this one. If I have to configure the application, I stop it in the Task Manager and start it again from its link. Afterwards I quit it and restart it via a manual start in the Task Scheduler.
You need to run your program as a Windows Service. One way of doing it is using the sc.exe program:
> sc create <new_service_name> binPath= "c:\myapp\myapp.exe"
You can read about it here.
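For example, to also have it start automatically at boot (the service name and path here are placeholders; note that sc requires the space after each parameter's equals sign):
> sc create ImportService binPath= "c:\myapp\myapp.exe" start= auto
> sc description ImportService "Import routine, runs without an interactive login"
Bear in mind that sc create only registers the program; it will only run correctly if the executable actually implements the service control interface (see the srvany answers below for wrapping an ordinary program).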
You need to separate your application into two parts.
To allow it to run without a user session, you need a Windows service. That should handle all the background work. You can then register the service and set it to start when the system starts.
To allow it to have a UI and show up in the notification area, you need a Windows application. This will be in autostart as usual, and will communicate with the service, for example over named pipes.
While it is still (barely) possible to run a UI application without a user session, this is only maintained for backwards compatibility and already shows a lot of problems. It will likely be removed altogether in the future, because it breaks quite a few contracts. Do not rely on hacks like this.
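A minimal sketch of that split, using a localhost socket as a stand-in for the named pipe (all names and the port number are hypothetical):

# "Service" half: background worker with no UI, reachable over local IPC.
import socket
import threading
import time

def service_loop(port=50007):
    # Runs in the service process; no user session required.
    with socket.socket() as srv:
        srv.bind(("127.0.0.1", port))
        srv.listen()
        while True:
            conn, _ = srv.accept()
            with conn:
                conn.sendall(b"status: import running\n")

# "UI" half: autostarted per user; would drive the notification-area icon.
def query_status(port=50007):
    with socket.create_connection(("127.0.0.1", port)) as cli:
        return cli.recv(1024).decode()

threading.Thread(target=service_loop, daemon=True).start()
time.sleep(0.5)  # give the listener a moment to come up
print(query_status())

In the real setup the two halves live in separate processes, so stopping the UI (or logging out) never touches the background work.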
I also used the Task Scheduler to start the application at system startup. Note that if you want to use it for long-running jobs (e.g. mining), you have to disable the option in the task's Settings tab that stops the task if it runs for more than three days in a row.
It really works wonderfully!
It is an old question, but I recently solved it in another way...
(Before, I was using a scheduled task for startup, but this gave me diverse problems with lots of software...)
Some programs, for various reasons, must run at user level... or even inside a specific user session...
So the best way I found was to use a tool like Sysinternals Autoruns to set up auto-logon for a specific user (it is a registry setting)... and in the startup folder of that user (or via any other autorun/autolaunch task)... run a script that first locks the screen... and then runs the other intended programs... which will run under that user profile...
This way you can choose a standard user or an administrator... or even launch programs from a standard user in administrator mode...
I hope this helps...
This "hack" solved many startup-app problems for me...
I could not get the "sc create" command to work. Instead I manually edited the registry using regedit, adding a new key under HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services.
I used the following page to figure out the required parameters and their values. Note that the names on that page do not map directly to the registry value names.
https://learn.microsoft.com/en-us/windows-hardware/drivers/install/inf-addservice-directive
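For reference, a hypothetical equivalent from the command line (the service name and path are placeholders; Type 16 = own process, Start 2 = automatic):
> reg add HKLM\SYSTEM\CurrentControlSet\Services\ImportService /v ImagePath /t REG_EXPAND_SZ /d "c:\myapp\myapp.exe"
> reg add HKLM\SYSTEM\CurrentControlSet\Services\ImportService /v Type /t REG_DWORD /d 16
> reg add HKLM\SYSTEM\CurrentControlSet\Services\ImportService /v Start /t REG_DWORD /d 2
> reg add HKLM\SYSTEM\CurrentControlSet\Services\ImportService /v ErrorControl /t REG_DWORD /d 1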
Old question, but for anyone who stumbles here: use srvany to set the program up as a custom service.
Note that when you do this with, for example, Dropbox, Google Drive, etc., you will need to stop the service and then open the program normally to make changes like passwords, updates, etc.
Below is a good enough intro:
https://www.iceflatline.com/2015/12/run-a-windows-application-as-a-service-with-srvany/
Download the tool kit here
https://www.microsoft.com/en-us/download/details.aspx?id=17657
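In short, the setup from that intro boils down to (paths here are examples):
> instsrv MyAppService "C:\Program Files\Windows Resource Kits\Tools\srvany.exe"
Then, under HKLM\SYSTEM\CurrentControlSet\Services\MyAppService, create a Parameters subkey with a string value named Application that points at the real executable, e.g. c:\myapp\myapp.exe.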
Convert the user application to a service and register it using Regsvr32 or installutil.exe. The service will start under the SYSTEM user account, which is a high-privilege account.
Note: you can't run any window-based application this way, not even a message-only window.
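For a .NET executable, the installutil step would look something like this (the assembly name is a placeholder):
> installutil.exe MyService.exe
> installutil.exe /u MyService.exe   (to uninstall)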

Changing Code At Runtime While Debugging

I am using Eclipse Kepler Service Release 2, EPIC 0.5.46 and Strawberry Perl 5 version 18 for Perl programming. For debugging I am using the Eclipse debugger and PadWalker.
I have an interactive Perl program that writes to files based on answers provided by the user to multiple prompts. While debugging, every time I change a single line of code I have to rerun the whole program and provide input to every prompt, which is really time consuming.
Is there a way to make changes to the code in a subroutine in the middle of a debugging session, such that the instruction pointer resets itself to the first line of that subroutine? That way I would not have to restart the session to recompile the new code.
Appreciate your inputs and suggestions. Thank You!!!
What you want to do can be done, and I've done it many times in Perl myself. For example, see this.
However, although what you describe may work, it is a bit dangerous, and the way this is generally done is a bit different and safer.
First, one has to assume a regular kind of command structure, like a command processor or, say, a web server.
In a command processor or web server, you read a command (or get a web request), perform an action, then read another command, perform another action, and so on. From your description, it sounds like you have such a structure.
In my case, I have each debugger command stored in its own Perl file. This is helpful not only for facilitating this task, but also for understanding, testing and changing the code.
Given this kind of program structure, instead of trying to change the program counter, you complete the current command, and at the level where you are about to read a new command, you make the change and then reload the file, which changes the code.
The specific Perl construct to do this is called do. Don't use require or use, which load a Perl file or module only if it hasn't been loaded before. In your situation, you want to reload the file even if it has been loaded before.
So how do you get to a point where you can issue a do command? As you suggest, you could do it through the debugger. Assuming you have the overall program structure described above, you put the breakpoint at a common point in the caller that loops over the things to process, rather than trying to change things in individual commands.
And you don't even need a debugger to do this! Many web frameworks, like Ruby on Rails, have a "development" mode in which they track timestamps on the files that implement functionality. If a file has changed, they issue the equivalent of do before running the request.
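A minimal sketch of such a reload loop (the file name handler.pl and the run_command entry point are made up for illustration):

use strict;
use warnings;

# Hypothetical command loop that hot-reloads its handler between commands.
# handler.pl must end in a true value and define run_command().
while (1) {
    print "cmd> ";
    my $cmd = <STDIN>;
    last unless defined $cmd;
    chomp $cmd;

    # Unlike require/use, do re-evaluates the file every time,
    # so edits made while the program is running take effect.
    do "./handler.pl" or warn "reload failed: " . ($@ || $!) . "\n";

    run_command($cmd) if defined &run_command;
}

Edit handler.pl while the loop is waiting at the prompt; the next command already runs the new code.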

Talend studio tWaitForFile issue

I am using a tWaitForFile component in a Talend Studio project, and I want to know whether there is a way to make sure a file triggers the event only when it has been fully written to disk.
I tried setting the advanced property "Wait the file to be released",
but it seems to be useless: the file triggers the component even when it has not finished being transmitted.
Does anybody have the same behaviour and a solution to fix that?
The version of TOS is 4.2.3.
The advanced setting "Wait for file to be released" only works on Windows. It has no effect on Unix, which probably explains why it did not work for you.
It is generally difficult, or even impossible, for a Unix process to figure out if a file has been written completely or not. Consequently, there is no easy way to do this in Talend, either.
(For example, if you wanted to wait until the file size does not change anymore -- how long do you wait?)
A common solution involves the process that writes the file: create the file under a different name first, and when it has been written completely, rename it to the name the other process expects. That way, it shows up at its full size immediately.
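A sketch of the producer side (the paths are made up; the key point is the atomic rename):

# Write under a temporary name, then rename; the rename is atomic on the
# same filesystem, so the watcher only ever sees complete files.
import os

tmp = "/data/in/report.csv.part"
final = "/data/in/report.csv"

with open(tmp, "w") as f:
    f.write("col1;col2\n1;2\n")  # stand-in for the real payload
os.rename(tmp, final)

On the Talend side, set the tWaitForFile file mask so it matches only the final name (e.g. *.csv) and never the temporary one.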

Hybrid version control & sync system?

Is anyone aware of a hybrid version control and synchronising system?
I'm currently a happy Mercurial user, but my projects usually contain a mixture of files.
Most of these (code, documentation, ...) I want version-controlled. This is why I use Mercurial.
However, on rare occasions I have files that I would like to synchronise between my working copies, but not version control.
For example, I version control the code I write to do image processing. This code can produce a whole bunch of output images which I'd like to have synchronised, so I don't have to remember to shuffle them around between my various computers, but there's no point having these version controlled.
To clarify: I am aware of Mercurial extensions such as bfiles and bigfiles, which are handy for my image example, but I was just wondering if anyone out there knows of alternative ways to handle this. I just want one system that I can tell: "version control all files except those ones, which should be synced but have no history".
cheers!
EDIT: I could do something like adding an hg marksync <filename> command that adds <filename> to a list of files to be synced, and then adding a hook to hg push/hg pull that would (say) run rsync (or whichever sync tool) in the background, but I wondered if there was a less hacky solution (I think bfiles/bigfiles do something along these lines anyway).
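For what it's worth, the hook half of that idea is only a few lines of hgrc; the directory name and remote host here are placeholders:

[hooks]
# after every push/pull, mirror the untracked output directory
post-push = rsync -av output-images/ backup-host:project/output-images/
post-pull = rsync -av backup-host:project/output-images/ output-images/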
A version control system (any of them) doesn't care about synchronising unversioned data beyond its default paths.
If you want to sync arbitrary files, use a tool specially designed for the task, e.g. rsync.
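For instance (paths invented):
> rsync -av --delete output-images/ otherhost:project/output-images/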
This code can produce a whole bunch of output images which I'd like to have synchronised
Is this DATA or part of your CODE?
If it is data: keep it out of your versioning system, just don't go there. If it is part of your code (like layout images), check it in. Those are the only generally accepted ways.
A nice solution for the data would be syncing OR generating them. So you might add a step after deployment to a server: GenerateImages().
Edit: in addition to the comment made by the thread starter:
If the images are data and you need to process them on a different system, don't think about the version control for your code; it is unrelated. The steps that would make sense to me, in order of processing:
Start by updating your image code and checking it into version control. Then deploy (yes, this is deployment) the updated code to the cruncher computer. Now the code is done.
Then you have tasks which the number cruncher should handle, like processing the images. So start that processing either from the cruncher itself (probably some queue is involved there) or from a central dispatcher.
Then you have the results locally on the cruncher. Now something has to happen with that data, so that's also part of your software. Decide whether you want the cruncher to send the results to some central storage, your workstation or another location, and let the software handle that. This is the hardest part, as I read your question. Many solutions are possible, from plain FTP/network transfers to specific storage solutions. Willing to help, but I need more info about the real issues, amounts, sizes etc. on these parts.
If a new, updated version of the image processor makes the old generated images obsolete, implement that in your code as well, for example by attaching an attribute to the generated files, using a separate folder, or some other indication. That way you could ask the cruncher, after an update, to re-generate any obsolete files.