What happens to a running program when my computer switches to Hibernate? - matlab

My laptop goes into hibernation while I'm running MATLAB code, because of overheating. When I turn it on again, MATLAB continues running my code. I'm worried about the results of my code! What do you think? Is there any problem with hibernating and resuming the MATLAB code?
Thank you.

I recommend looking at this: http://www.mathworks.com/matlabcentral/answers/46569-what-happens-to-a-running-program-when-my-computer-switches-to-hibernate
Theoretically, when the computer hibernates, the state of memory and disk is saved. However, as the link I provided points out, this is not very reliable and can lead to corruption of files and/or data.
Instead, I recommend that your program save the necessary variables from time to time using checkpoints, so that it can recover reliably even when your code is paused or your computer hibernates. Take a look at this link to see how to implement checkpoints: http://www.walkingrandomly.com/?p=5343. A sketch of the pattern follows below.
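The checkpoint idea is simple: write partial results to disk every so often, and at startup look for an earlier checkpoint to resume from. The linked article shows the MATLAB version; here is a minimal sketch of the same pattern in C, where the file name state.chk, the loop bounds, and the toy computation are all made up for illustration:

    #include <stdio.h>

    #define TOTAL      1000000L
    #define EVERY      10000L
    #define CHECKPOINT "state.chk"

    int main(void) {
        long   i   = 0;
        double sum = 0.0;

        /* Resume from an earlier checkpoint if one exists. */
        FILE *f = fopen(CHECKPOINT, "rb");
        if (f) {
            if (fread(&i, sizeof i, 1, f) != 1 ||
                fread(&sum, sizeof sum, 1, f) != 1) {
                i = 0;
                sum = 0.0;  /* unreadable checkpoint: start over */
            }
            fclose(f);
        }

        for (; i < TOTAL; i++) {
            sum += (double)i;  /* stand-in for the real computation */

            /* Persist progress every EVERY iterations. */
            if ((i + 1) % EVERY == 0) {
                long next = i + 1;  /* resume at the NEXT iteration */
                f = fopen(CHECKPOINT, "wb");
                if (f) {
                    fwrite(&next, sizeof next, 1, f);
                    fwrite(&sum, sizeof sum, 1, f);
                    fclose(f);
                }
            }
        }

        printf("done, sum = %.0f\n", sum);
        remove(CHECKPOINT);  /* a finished run needs no checkpoint */
        return 0;
    }

A more robust version would write the checkpoint to a temporary file and rename it afterwards, so that an interruption in the middle of the write cannot destroy the previous good checkpoint.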

Related

Can I open and run from multiple command line prompts in the same directory?

I want to open two command line prompts (I am using CMDer) from the same directory and run different commands at the same time.
Would those two commands interrupt each other?
One is for compiling a web application I am building (takes like 7 minutes to compile), and the other is to see the history of the commands I ran (this one should be done quickly).
Thank you!
Assuming that CMDer does nothing more than issue the same commands to the operating system as a standard cmd.exe console would, the answer is a clear "Yes, they do interfere, but it depends" :D
Breakdown:
The first part, "opening multiple consoles", is certainly possible. You can open N console windows and switch each of them to the same directory without any problems (except maybe RAM restrictions).
The second part, "run commands which do or do not interfere", is the tricky one. If your idea is that a console window presents something like an isolated environment, where you can do as you like and closing the window puts everything back to normal as if you had never touched anything (think of a virtual machine snapshot which is lost/reverted when the VM is closed), then the answer is: this is not the case. Cross-console effects will be observable.
Think about deleting a file in one console window and then opening it in a second console window: it would not be very intuitive if the file had not vanished in the second console window as well.
However, sometimes there are delays until changes to the file system become visible to another console window. It could be that you delete the file in one console, run dir on its directory in another console, and still see the file in the listing. But if you try to access it, the operating system will certainly quit with an error message of the kind "File not found".
Generally, you should consider a console window to be a "view" of your system. If you do something in one window, the effect will be present in the other, because you changed the underlying system, which exists only once (the system is the "model", as in the "Model-View-Controller" design pattern you may have heard of).
An exception to this is the environment variables. They are copied from the current state when a console window is started, and if you change the value of such a variable, the other console windows stay unaffected, as the small sketch below shows.
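A console window is just a process, and every process receives a private copy of the environment when it starts. Here is a minimal C sketch of that copy semantics, assuming a made-up variable name MYVAR:

    #include <stdio.h>
    #include <stdlib.h>

    int main(void) {
        /* The environment this process sees was copied when it started. */
        const char *before = getenv("MYVAR");
        printf("MYVAR at startup: %s\n", before ? before : "(not set)");

        /* Changing it here only alters THIS process's private copy;
           the console that launched the program is unaffected. */
    #ifdef _WIN32
        _putenv("MYVAR=changed-inside-child");
    #else
        setenv("MYVAR", "changed-inside-child", 1);
    #endif
        printf("MYVAR inside the process: %s\n", getenv("MYVAR"));
        return 0;
    }

Set MYVAR in one console, leave it unset in another, and run the program in both: each run sees only the snapshot its own console passed along at launch.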
So, in your scenario: if you let a build/compile operation run, and during this process some files on your file system are created, read (locked), altered or deleted, then this is a possible conflict situation if the other console window tries to access the same files. It would be a so-called "race condition": it is non-deterministic which state of a file the second console window will see (or both windows, if the second one also changes files which the first one wants to work with).
If there is no interference at the file level (reading the same files is fine; writing to the same file is not), then there should be no problem letting both tasks run at the same time. The sketch below makes such a race concrete.
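Here is a small C sketch of a file-level race, where counter.txt and the round count are made up for illustration. Each round performs an unlocked read-modify-write of a counter file; compile it and start it simultaneously in two console windows, and the final value will usually be less than the expected 20000, because the two processes overwrite each other's updates:

    #include <stdio.h>

    #define ROUNDS 10000

    int main(void) {
        for (int i = 0; i < ROUNDS; i++) {
            long value = 0;

            /* Read the current value (0 if the file does not exist yet). */
            FILE *f = fopen("counter.txt", "r");
            if (f) {
                fscanf(f, "%ld", &value);
                fclose(f);
            }

            value++;  /* modify */

            /* Write it back. Nothing stops the other process from having
               updated the file between our read and this write, so one
               of the two increments is silently lost. */
            f = fopen("counter.txt", "w");
            if (f) {
                fprintf(f, "%ld\n", value);
                fclose(f);
            }
        }
        return 0;
    }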
However, on a very detailed view, both processes do interfere, in that they compete for the same finite CPU and RAM resources of your system. This should not pose any problems with today's PC computing power, considering features like multiple separate cores, 16 GB of RAM, terabytes of hard drive storage or fast SSDs, and so on.
Unless, that is, there is a very demanding, highly parallelizable, high-priority task which eats up 98% of the CPU time, for example. Then there might be a considerable slowdown of other processes.
Normally, the operating system's scheduler does a good job of giving each user process enough CPU time to finish as quickly as possible, while still presenting a responsive mouse cursor, playing some music in the background, allowing a Chrome to run with more than 2 tabs ;), and uploading the newest telemetry data to some servers on the internet, all at the same time.
There are also techniques which make it possible for a file to be available as a snapshot from a given timestamp. The keyword under Windows is "Shadow Copy". Without going into details, this technique allows, for example, defragmenting a file while it is being edited in some application, or a backup copying a (large) file while a delete operation runs on the same file. The operating system takes the access time into account when a process requests access to a file. So the OS could let the backup finish first and only then schedule the delete operation, since the delete was started after the backup (in this example), or it could do even more sophisticated things to present a consistent file system state, even if it is actually changing at that moment.

List the four steps that are necessary to run a program on a completely dedicated machine—a computer that is running only that program

In my OS class, we use the textbook "Operating System Concepts" by Silberschatz.
I ran into this question and answer in a practice exercise and wanted some further explanation.
Q. List the four steps that are necessary to run a program on a completely dedicated machine—a computer that is running only that program.
A.
1. Reserve machine time
2. Manually load program into memory
3. Load starting address and begin execution
4. Monitor and control execution of program from console
Actually, I don't understand the first step, "Reserve machine time". Could you explain what each step means here?
Thank you in advance.
If the computer can run only a single program, but the computer is shared between multiple people, then you have to agree on a time at which you get to use the computer to run your program. This was common up through the 1960s. It is still common in some contexts, such as very expensive supercomputers. Time-sharing became popular during the 1970s, enabling multiple people to appear to share a computer at the same time, when in fact the computer quickly switched from one person's program to another's.
In my opinion, teaching about old batch systems in today's OS classes is not very helpful. You should use a text which is more relevant to contemporary OS design, such as the Minix book.
Apart from that, if you really want to learn about old systems, Wikipedia has a pretty good explanation:
Early computers were capable of running only one program at a time. Each user had sole control of the machine for a scheduled period of time. They would arrive at the computer with program and data, often on punched paper cards and magnetic or paper tape, and would load their program, run and debug it, and carry off their output when done.

Program execution is too slow in Eclipse and was fast just yesterday for the same program

I am executing a Java program via Eclipse, and I was executing the exact same program yesterday. Yesterday the run took only 10 minutes; today the same program is taking more than an hour, and I did not change a single thing in my code. Could you please give me a solution to get back to the execution time I had yesterday?
If you did not change anything in your source code, I see the following possible reasons for this:
Side effects on the machine you are running the program on, e.g. other (maybe hidden) processes soaking up CPU time and slowing down your program.
This could also be the machine itself being slower (thermal throttling from too much heat, etc.).
Your code is doing some "random" things that sometimes require longer runs (sounds unlikely, though).
Somehow Eclipse is causing the issue (try running your program without it).
Your Java runtime might be causing the problem (sounds unlikely as well, but updating it to the newest version may help).

exe stops execution after a couple of hours

I have an exe which collects some information and, once the information is collected, saves it on the local machine. I have arranged the loop so that it performs the same task indefinitely.
But the exe stops execution after a couple of hours (approx. 5-6 hours); it neither crashes nor gives an exception.
I tried to find the reason in WinDbg, but I didn't get any exception there.
Now, could anyone help me track down the problem?
Should I go for a Sysinternals tool or something else? Which debugging tool should I use?
A few things jump to mind that could be killing your program:
Out of memory condition
Stack overflow
Integer wrap in loop counter
Programs that run forever are notoriously difficult to write correctly, because your memory management must be perfect. Without more information, though, it's impossible to answer this question definitively. A sketch of the loop-counter failure mode is below.
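To illustrate the third item in the list, here is a small C sketch of how an "infinite" loop can end after hours. The counter and the placeholder work are invented; in real code the wrap is usually buried in a timestamp or an iteration count:

    #include <stdio.h>
    #include <limits.h>

    int main(void) {
        int rounds = 0;  /* a 32-bit signed int on most platforms */
        for (;;) {
            /* ... collect and save the information here ... */

            if (rounds == INT_MAX) {
                /* One more rounds++ would be signed overflow, which is
                   undefined behavior in C: the value may wrap to a large
                   negative number, a loop condition may suddenly become
                   false, or the optimizer may do something stranger, all
                   of it hours after the program started. */
                printf("counter exhausted after %d iterations\n", rounds);
                break;
            }
            rounds++;
        }
        return 0;
    }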
If the executable is not yours and is native C/C++ code, you may want to capture first-chance exception dumps or monitor the exe using the Windows debugging tools (such as DebugDiag or ADPlus).
Alternatively, if you have access to the developer of the executable, they may be able to add more tracing to the exe (ETW or otherwise) to understand the possible failure points in the code.

Thread 1: Program received signal : "EXC_BAD_ACCESS"

I am currently programming an iPhone app for my maturity research. But there is a behavior I don't understand: sometimes when I build and run my project, I get:
Thread 1: Program received signal : "EXC_BAD_ACCESS".
But when I run the same code a second or a third time, it just runs fine, and I can't work out why. I use a Monte Carlo simulation; when it fails, it fails while executing one of the first 100 simulations, but when everything runs fine, it executes 1000000 simulations without an error. Really strange, isn't it?
Do you have any idea? Can this be an issue with Xcode or ARC?
All other things just work perfectly.
Do you need any further information? I can also send you my code by email.
This usually means you're trying to access an object that has already been deallocated.
In order to debug these things, Objective-C has something called "NSZombie" that will keep those objects around, so you can at least see what is being messaged after deallocation. See this question for some details on how to use it.
This is typically caused by accessing memory that has been corrupted; chances are you have a reference to an object which has been deleted. A lot of the time, the memory where the object was located has not yet been overwritten, so when you access that memory your data is still intact and there is no problem, hence it working some of the time.
Another scenario would be that you've got some code writing into memory through a bad reference, so you're writing into an area you shouldn't be. Depending on the memory layout when the program starts, this could have no effect some of the time but cause something catastrophic at other times. The sketch below shows the first pattern.
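Here is a minimal C sketch of why such a bug only bites sometimes; the buffer and its contents are invented for illustration. After free, the old bytes often survive until the allocator reuses the block, so the bad access can look perfectly fine on one run and crash with a bad-access error on the next:

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    int main(void) {
        char *name = malloc(16);
        if (!name) return 1;
        strcpy(name, "simulation");

        free(name);  /* 'name' is now a dangling pointer */

        /* Undefined behavior: depending on what the allocator has done
           with the freed block in the meantime, this may still print
           "simulation", print garbage, or crash, which is exactly the
           "fails only sometimes" pattern described above. */
        printf("%s\n", name);
        return 0;
    }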