Can I open and run from multiple command line prompts in the same directory?

I want to open two command line prompts (I am using CMDer) from the same directory and run different commands at the same time.
Would those two commands interrupt each other?
One is for compiling a web application I am building (takes like 7 minutes to compile), and the other is to see the history of the commands I ran (this one should be done quickly).
Thank you!

Assuming that CMDer does nothing more than issue the same commands to the operating system as a standard cmd.exe console would, the answer is: "Yes, they can interfere, but it depends." :D
Breakdown:
The first part, "opening multiple consoles", is certainly possible. You can open up N console windows and switch each of them to the same directory without any problems (except maybe RAM restrictions).
The second part, "running commands which do or do not interfere", is the tricky one. If your idea is that a console window presents you with an isolated environment where you can do things as you like, and that closing the window puts everything back to normal as if you had never touched anything (think of a virtual machine snapshot which is lost/reverted when closing the VM), then the answer is: this is not the case. Cross-console effects will be observable.
Think about deleting a file in one console window and then opening that file in a second console window: it would not be very intuitive if the file had not vanished in the second console window as well.
However, sometimes there are delays until changes to the file system are visible to another console window. You might delete the file in one console, run dir on its directory in another console, and still see the file in the listing; but if you try to access it, the operating system will certainly answer with an error message of the kind "File not found".
Generally you should consider a console window to be a "View" of your system. If you do something in one window, the effect will be present in the others, because you changed the underlying system, which exists only once (the system is the "Model", as in the "Model-View-Controller" design pattern you may have heard of).
An exception to this is environment variables. These are copied from the current state when a console window is started, so if you change the value of such a variable in one window, the other console windows stay unaffected.
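To illustrate that copy-at-start behaviour, here is a minimal sketch in Java (not something CMDer itself does; DEMO_VAR is a made-up variable name). Each child process receives its own copy of the environment when it is created, just like each new console window does:

    public class EnvCopyDemo {
        public static void main(String[] args) throws Exception {
            // Every new process gets a copy of the parent's environment at creation time.
            ProcessBuilder first  = new ProcessBuilder("cmd", "/c", "echo %DEMO_VAR%");
            ProcessBuilder second = new ProcessBuilder("cmd", "/c", "echo %DEMO_VAR%");

            // Change the variable only in the copy handed to the first child;
            // the second child's copy (and the parent's own environment) stay untouched.
            first.environment().put("DEMO_VAR", "changed-for-first-window");

            first.inheritIO().start().waitFor();   // prints: changed-for-first-window
            second.inheritIO().start().waitFor();  // prints the literal %DEMO_VAR% (variable not set)
        }
    }

The same thing happens when you set a variable with set FOO=bar in one console window: an echo %FOO% in another, already open window will not see it.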
So, in your scenario: if you let a build/compile operation run, and during this process some files on your file system are created, read (and locked), altered or deleted, then there is a possible conflict whenever the other console window tries to access the same files. This is a so-called "race condition": it is non-deterministic which state of a file the second console window will see (or both windows, if the second one also changes files which the first one wants to work with).
If there is no interference on the file level (reading the same files is allowed, writing to the same file is not), then there should be no problem letting both tasks run at the same time.
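As a minimal sketch of such a race (in Java, with two threads standing in for the two console windows; build-output.txt is a made-up file name):

    import java.io.IOException;
    import java.nio.file.*;

    public class RaceDemo {
        public static void main(String[] args) throws Exception {
            Path shared = Paths.get("build-output.txt");   // a file both "consoles" touch

            Thread build = new Thread(() -> append(shared, "line written by the build\n"));
            Thread other = new Thread(() -> append(shared, "line written by the other task\n"));
            build.start();
            other.start();
            build.join();
            other.join();

            // The order of the two lines is not deterministic; without APPEND mode,
            // one write could even overwrite the other. That is the race condition.
            System.out.print(Files.readString(shared));
        }

        static void append(Path p, String line) {
            try {
                Files.writeString(p, line, StandardOpenOption.CREATE, StandardOpenOption.APPEND);
            } catch (IOException e) {
                e.printStackTrace();
            }
        }
    }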
However, on a very detailed level, both processes do interfere in that they need the same limited CPU and RAM resources of your system. With today's PC computing power (several separate cores, 16 GB of RAM, terabytes of hard drive storage or fast SSDs, and so on) this should not pose any problems.
Unless, for example, there is a very demanding, highly parallelizable, high-priority task that eats up 98% of the CPU time; then there might be a considerable slowdown of other processes.
Normally, the operating system's scheduler does a good job of giving each user process enough CPU time to finish as quickly as possible, while still presenting a responsive mouse cursor, playing some music in the background, letting Chrome run with more than 2 tabs ;) and uploading the newest telemetry data to some servers on the internet, all at the same time.
There are also techniques which make a file available as a snapshot for a given point in time; the keyword under Windows is "Shadow Copy". Without going into details, this technique allows, for example, defragmenting a file while it is being edited in some application, or a backup copying a (large) file while a delete operation runs on the same file. The operating system takes the access time into account when a process requests access to a file, so it could let the backup finish first before it schedules the delete operation (which, in this example, was started after the backup), or do even more sophisticated things to present a consistent file system state, even if it is actually changing at that moment.

Related

TI-84 corrupted programs

So I got a TI-84 calculator a few months ago. As of this morning, I had 30 programs that I wrote myself stored on it. The largest program was slightly over 200 bytes, with the vast majority being under 100. The RAM Free was about 14900 bytes, and the ARC Free has always been 1919K.
This evening, when I went to check the Memory on it, I noticed that one of my programs (for the surface area of a rectangular pyramid) showed that it had a size of 200+. I took a look at the program, and its commands were scrambled, and had commands from other programs in it. I went back to the Memory management section and deleted the program, thinking that if it was corrupted, then deleting it would be the wisest choice.
I looked through the rest of my programs, and, to my horror, I saw that my program for the volume of a cylinder (the first program I ever wrote) had a size of 17000+. I decided to delete it too, but when I pushed the ENTER button to select the program, the TI-84 froze and the contents on the screen slowly faded into an all white screen. The calculator was completely unresponsive at this point. So, after some research, I pushed the reset button on the back of the TI-84, and that seemed to solve the problem, despite erasing all of my programs, except for the one that was at 17000+ (which I immediately deleted).
I have no idea why this occurred, as my research did not find any similar instances. I know my programs became corrupted, but I want to know what happened and why so I can prevent this from happening again. I already plan on backing up any future programs I write.
Sometimes programs can be corrupted by faulty assembly code in assembly programs and in apps. However, if you have only been using TI-Basic, it is unlikely to be the code. Also, the hardware can sometimes get messed up by dropping or hitting the calculator. My calculator has also behaved very strangely while operating with low batteries and batteries of different ages (some more charged than others). It is also good to have plenty of free RAM and Archive memory (although that doesn't seem to be your problem).
As far as solutions/preventative measures go, back your programs up, make sure you only download/use correct assembly (or none at all), and take good care of the calculator (batteries, jolts, etc.).

What happens to a running program when my computer switches to Hibernate?

My laptop goes into hibernation while I'm running MATLAB code, because of overheating. When I turn it on again, MATLAB continues running my code. I'm worried about the results of my code! What do you think? Is there any problem with hibernating and resuming the MATLAB code?
Thank you.
I recommend looking at this: http://www.mathworks.com/matlabcentral/answers/46569-what-happens-to-a-running-program-when-my-computer-switches-to-hibernate
Theoretically, when the computer hibernates, the state of memory and disk is saved. However, as is pointed out in the link I provided, this is not very reliable and can lead to corruption of files and/or data.
Instead, I recommend that your program saves necessary variables from time to time using checkpoints, so that your program can run reliably even when your code is paused or your computer hibernates. Take a look at this link to see how to implement checkpoints: http://www.walkingrandomly.com/?p=5343.
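The walkingrandomly.com link shows how to do this in MATLAB; purely as an illustration of the pattern (sketched here in Java rather than MATLAB, with a made-up checkpoint.txt file and a placeholder computation), the idea is: persist the loop state every so often, and on startup resume from the last saved state instead of from zero:

    import java.io.IOException;
    import java.nio.file.*;

    public class CheckpointDemo {
        public static void main(String[] args) throws IOException {
            Path checkpoint = Paths.get("checkpoint.txt");

            // Resume from the last checkpoint if one exists, otherwise start fresh.
            long start = 0, sum = 0;
            if (Files.exists(checkpoint)) {
                String[] parts = Files.readString(checkpoint).trim().split(",");
                start = Long.parseLong(parts[0]);
                sum = Long.parseLong(parts[1]);
            }

            for (long i = start; i < 1_000_000; i++) {
                sum += expensiveStep(i);

                // Save progress every 10,000 iterations, so a hibernation, crash
                // or restart only loses the work done since the last checkpoint.
                if (i % 10_000 == 0) {
                    Files.writeString(checkpoint, (i + 1) + "," + sum);
                }
            }
            System.out.println("done, sum = " + sum);
        }

        static long expensiveStep(long i) {
            return i;   // placeholder for the real (long-running) computation
        }
    }

The same idea applies in MATLAB: save the variables you need every N iterations and load them again at startup if the checkpoint file exists.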

Does ProfileOptimization actually work?

One of the new performance enhancements in .NET 4.5 is the introduction of 'Multicore JIT'.
See here for more details.
I have tried this, but it seems to have no effect on my application.
The reason why I am interested is that my app (IronScheme) takes a good long time to start up if not NGEN'd, which implies a fair amount of JIT'ing is involved at startup (1.4 sec vs 0.1 sec when NGEN'd).
I have followed the instructions on how to enable this, and I can see that a 'small' (4-12 KB) file is created. But on subsequent startups it seems to have absolutely no effect on the startup time. It is still 1.4 sec.
Has anyone actually seen (or made) this work in practice?
Also, are there any limitations on which code will be 'tracked'? E.g. assembly loading contexts, transient assemblies, etc. I ask this as the created file never seems to grow, but I am in fact generating a fair amount of code (in a transient assembly).
One bug that I did encounter was that SetProfileRoot does not seem to understand / as a path separator; make sure to use \.
The rule of thumb we use at Microsoft is that Multicore JIT gets you about half way towards NGEN startup performance. Thus if your app starts in 0.1 seconds with NGEN and 1.4 seconds without NGEN, we would expect Multicore JIT startup to take about 0.75 seconds.
That being said, we had to put some limitations in place to guarantee that program execution order is the same with and without MCJ. MCJ will sometimes pause the background thread waiting for modules to be loaded by the foreground thread, and will abort background compilation if there is an assembly resolve or module resolve event.
If you want to find out what's happening in your case, we have ETW (Event Tracing for Windows) instrumentation of the MCJ feature, and we will be releasing a version of PerfView soon which will be able to collect these events if you take a trace of your app startup.
Update: PerfView has been updated to be able to show background JIT information. Here are the steps to diagnosing with the latest version (1.2.2.0):
Collect a trace using PerfView of your application startup, either using Collect->Run or Collect->Collect from the main PerfView menu.
Assuming you used Collect->Run, put the name of your .exe in the Command text box, pick a filename (e.g. IronScheme.etl), select Background JIT from Advanced Options, and click Run Command.
Close your application and double click on the IronScheme.etl file that gets generated.
Double click on the JIT Stats view in the list underneath IronScheme.etl; you should see something like this in the view that pops up:
This process uses Background JIT compilation (System.Runtime.ProfileOptimize)
Methods Background JITTed : 2,951
Percent # Methods Background JITTed : 52.9%
MSec Background JITTing : 3,901
Percent Time JITTing is Background : 50.9%
Background JIT Thread : 11308
You can click on "View Raw Background Jit Diagnostics" to see all of the MCJ events in Excel. One question I forgot to ask: are you running this on a multicore machine or a multicore VM? It is a common mistake to test out MCJ in a VM that only has a single logical processor.
Calling Activator.CreateInstance during startup seems to kill MCJ? Or rather, that triggered an assembly resolve event, which seems to stop MCJ completely, and it never works after that. Maybe the MSDN docs should mention this.

How can I call two install4j launchers in sequence from one user app launch?

I have a tiny remote incremental update program (https://github.com/HitTheSticks/alamode), written in Java, that needs to be launched before every execution of my main software (which is also written in Java). It would be a bad idea to run the updater in the same process space as the main software, given that it's going to be stomping on all sorts of files that are pulled into the classpath by the main program.
Since the updater has basically no dependencies, I can simply generate a launcher with a safe context for it and run it before the main program.
But Java's exec() facilities don't follow fork()/exec() semantics, which means that having the updater exec() the launcher will result in both JVMs running simultaneously, with all logging IO filtering through the updater. Yuck.
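(To make the behaviour I'm after concrete, here is a minimal plain-JDK sketch of the "run the updater, wait for it, then start the main program" flow; updater.jar and main-app.jar are made-up names, and install4j's generated launchers may well do things differently:)

    import java.io.File;

    public class Bootstrap {
        public static void main(String[] args) throws Exception {
            // 1. Run the updater in its own JVM and wait for it to finish.
            //    inheritIO() hands the child the real console, so its logging does
            //    not get funnelled through this process.
            int updaterExit = new ProcessBuilder("java", "-jar", "updater.jar")
                    .directory(new File("."))
                    .inheritIO()
                    .start()
                    .waitFor();

            if (updaterExit != 0) {
                System.err.println("Updater failed with exit code " + updaterExit);
                System.exit(updaterExit);
            }

            // 2. Only now start the main application; this process can exit while
            //    the main application keeps running.
            new ProcessBuilder("java", "-jar", "main-app.jar").inheritIO().start();
        }
    }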
How can I make sure that when the user clicks [my start menu item], it always seamlessly runs [my updater]->[my program]?
I see 'com.install4j.api.launcher.ApplicationLauncher', but it doesn't specify whether or not the calling application can exit without also taking down the launched application. And preferably, I wouldn't nest the applications at all... I'd like them launched sequentially.
Oh, and it'd be cool if the solution were at least mostly portable.

Monitoring a directory in Cocoa/Cocoa Touch

I am trying to find a way to monitor the contents of a directory for changes. I have tried two approaches.
Use kqueue to monitor the directory
Use GCD to monitor the directory
The problem I am encountering is that I can't find a way to detect which file has changed. I am attempting to monitor a directory with potentially thousands of files in it and I do not want to call stat on every one of them to find out which ones changed. I also do not want to set up a separate dispatch source for every file in that directory. Is this currently possible?
Note: I have documented my experiments monitoring files with kqueue and GCD
My advice is to just bite the bullet and do a directory scan in another thread, even if you're talking about thousands of files. But if you insist, here's the answer:
There's no way to do this without rolling up your sleeves and going kernel-diving.
Your first option is to use the FSEvents framework, which sends out notifications when a file is created, edited or deleted (as well as things to do with attributes). Overview is here, and someone wrote an Objective C wrapper around the API, although I haven't tried it. But the overview doesn't mention the part about events surrounding file changes, just directories (like with kqueue). I ended up using the code from here along with the header file here to compile my own logger which I could use to get events at the individual file level. You'd have to write some code in your app to run the logger in the background and monitor it.
Alternatively, take a look at the "fs_usage" command, which constantly monitors all filesystem activity (and I do mean all). This comes with Darwin already, so you don't have to compile it yourself. You can use kqueue to listen for directory changes, while at the same time monitoring the output from "fs_usage". If you get a notification from kqueue that a directory has changed, you can look at the output from fs_usage, see which files were written to, and check the filenames against the directory that was modified. fs_usage is a firehose, so be prepared to use some options along with Grep to tame it.
To make things more fun, both your FSEvents logger and fs_usage require root access, so you'll have to get authorization from the user before you can use them in your OS X app (check out the Authorization Services Programming Guide for info on how to do it).
If this all sounds horribly complicated, that's because it is. But at least you didn't have to find out the hard way!