Can we find out the list of processes that are currently running on iOS programmatically?
Somewhat similar to what is shown on the Processes tab here:
http://www.techet.net/sysstat/
Suggestions are always welcome.
Thanks
See this other answer I literally just posted.
Take a look at the modified Darwin C-code I posted:
darwin.c
darwin.h
If you look in there, inside OS_get_table(), you'll find a number of commented-out printf statements. If you uncomment and adapt those, storing the data in some kind of usable data structure, you can collect all of this type of information.
Note: don't just uncomment all of the printf statements and expect that code to work, though. iOS limits the rate at which apps can write to stdout, so you'll get throttled if you emit a large number of printfs in a short period of time.
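For reference, the core of what OS_get_table() does is a sysctl() call that returns an array of kinfo_proc structures. Here is a minimal, self-contained C sketch of that approach (the error handling is simplified, and newer iOS versions restrict what an app is allowed to see through this interface):

#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/sysctl.h>

/* Sketch: list PIDs and process names via sysctl(), the same mechanism
   OS_get_table() uses. May return only your own process on recent iOS. */
int main(void)
{
    int mib[4] = { CTL_KERN, KERN_PROC, KERN_PROC_ALL, 0 };
    size_t size = 0;

    /* First call: ask how large the process table buffer needs to be. */
    if (sysctl(mib, 4, NULL, &size, NULL, 0) != 0)
        return 1;

    struct kinfo_proc *procs = malloc(size);
    if (procs == NULL)
        return 1;

    /* Second call: fetch the actual table. */
    if (sysctl(mib, 4, procs, &size, NULL, 0) != 0) {
        free(procs);
        return 1;
    }

    size_t count = size / sizeof(struct kinfo_proc);
    for (size_t i = 0; i < count; i++)
        printf("pid %d  name %s\n",
               procs[i].kp_proc.p_pid,
               procs[i].kp_proc.p_comm);

    free(procs);
    return 0;
}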
I have a problem. I want to execute some commands on the Linux command line. I tried TProcess (so I am using Lazarus), but when I start the program, nothing happens.
Here is my code:
uses [...], unix, process;
[...]
var LE_Path: TLabeledEdit;
[...]
Pro1:=TProcess.Create(nil);
Pro1.CommandLine:=(('sudo open'+LE_Path.Text));
Pro1.Options := Pro1.Options; // here I set some Options before
Pro1.Execute;
With this program, I want to open files with sudo (the program runs with a user interface).
Sorry for my bad English and for any mistakes in the question: this is my first time using Stack Overflow.
I guess the solution was a missing space char?
Change
Pro1.CommandLine:=(('sudo open'+LE_Path.Text));
to
Pro1.CommandLine:=(('sudo open '+LE_Path.Text));
# ----------------------------^--- added this space char.
But if you're a beginner programmer, my other comments are still worth considering:
Trying to use sudo in your first bit of code may be adding a whole extra set of problems, so get something easier working first, maybe
/bin/ls -l /path/to/some/dir/that/has/only/a/few/files.
Find out how to print the command that will be executed. This is the most basic form of debugging, and any language should support it; a minimal sketch follows below.
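Something along these lines, for example (a console sketch only; the paths are illustrative, and it uses the Executable/Parameters properties rather than CommandLine):

uses Classes, SysUtils, Process;
var
  Proc: TProcess;
begin
  Proc := TProcess.Create(nil);
  try
    Proc.Executable := '/bin/ls';
    Proc.Parameters.Add('-l');
    Proc.Parameters.Add('/path/to/some/dir');
    // Most basic debugging: print exactly what is about to run.
    WriteLn('Running: ', Proc.Executable, ' ', Proc.Parameters.CommaText);
    Proc.Options := Proc.Options + [poWaitOnExit];
    Proc.Execute;
  finally
    Proc.Free;
  end;
end.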
Your English communicated your problem well enough, and by including sample code and a reasonable (not perfect) problem description, "we" were able to help you. In general, a good question contains the fewest number of steps needed to re-create the problem, or, if you're trying to manipulate data:
a. small sample input,
b. sample output from that same input
c. your "best" code you have tried
d. your current output
e. your thoughts about why it is not working
and comments indicating, in general terms, other things you have tried.
I have a lot of if statements throughout my code and was wondering if there is any way in MATLAB to see which if statements are being used when I run my code. I know I could put variables throughout my code and see which ones are being triggered, but I was wondering if there is an easier way, maybe a built-in MATLAB function or something.
Thanks
Type profile viewer on the MATLAB command line and execute your code from there. In the profile report you can see how many times each line is called, as well as how long each line of code takes to execute.
More information:
http://www.mathworks.nl/help/matlab/ref/profile.html
http://www.mathworks.nl/help/matlab/matlab_prog/profiling-for-improving-performance.html
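For example, a minimal run looks like this (myScript is just a placeholder for your own script or function):

profile on       % start collecting line-by-line statistics
myScript         % run your code as usual
profile viewer   % open the report with per-line call counts and timings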
To answer your question precisely, there is a command to log every line the execution goes through, and if you're familiar with Unix-like platforms, it has the same name: echo. See the MATLAB help for echo to see how you can use it. For example, echo on all sets echoing on for all function files.
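As a rough illustration (myFunction and x are placeholders for your own function and input):

echo on all     % echo every executed line in all function files
myFunction(x)   % each line of myFunction is printed as it runs
echo off all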
Besides that, I suggest two things that are better than analysing the output of echoing a whole script:
look at every warning in the code editor, and apply meaningful corrections.
use the MATLAB profiler, as described in EJG89's answer; it is indeed a powerful tool!
In some code coverage tools you can "hide" certain lines of code from the coverage tool, so that those lines do not count towards the coverage totals. For example, some code might be run only in circumstances that are hard or impossible to test (such as certain hardware failures). Thus, you might get 100% coverage reported even though some code was not exercised.
Setting aside for the moment whether this is wise, is this sort of thing possible with Perl's Devel::Cover?
(Devel::Cover can ignore entire files, but I am interested in ignoring just a few lines in a single file.)
A lot of the uncoverable-code features have been implemented, but they are not documented because I wasn't sure of the interface. However, it's been a few years since anything changed in that area.
Probably the easiest way to see how to use the features is to look at tests/uncoverable in the distribution (see https://github.com/pjcj/Devel--Cover/blob/master/test/uncoverable). If you can't or don't want to change your code, you can use the .uncoverable file (see https://github.com/pjcj/Devel--Cover/blob/master/tests/.uncoverable) and the cover options mentioned by toolic.
If you do this, be sure to use the basic_html report, which will mark a construct as an error if you tag it as uncoverable but it gets executed anyway.
I really should get around to tidying everything up and documenting it.
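For anyone looking at that test file, the markers are special comments placed immediately before the construct they refer to. A rough sketch based on the test file (double-check the exact spelling there, since it is undocumented):

if ($disk_on_fire) {
    # uncoverable statement
    die "cannot happen in the test environment";
}

Similar markers exist for other constructs, e.g. # uncoverable branch true before a conditional whose true branch cannot be reached.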
According to the TODO file on CPAN, this capability is not currently supported, but the developers see it as a valuable addition:
Enhancements:
Marking of unreachable code - commandline tool and gui.
The cover script mentions promising options: -add_uncoverable_point and -delete_uncoverable_point.
I frequently use the Scala console to evaluate and test code before I actually write it down in my project. If I want to know the contents of a variable, I can just enter it and Scala evaluates it. But is there also a way to show the code of the methods I entered?
I know the Up arrow key shows single lines, but what I'm looking for is a way to show all the code at once.
There's a file in your home directory named .scala_history that contains all of your recent REPL history. I regularly copy and paste code from this file into project source files. It's not exactly the same as showing the code for individual methods in the REPL, but it might help you accomplish the same goals.
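If you only need a quick look without leaving the session, the REPL's :history command prints recent entries (the 20 is arbitrary), and the file itself can be inspected from a shell with cat ~/.scala_history:

scala> :history 20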
See the comments by Paul Phillips in this issue for a discussion of some related functionality in the REPL (grouping statements in the history):
At some point I implemented the logic for this, but the real obstacle
is jline. It has enough trouble figuring out where the cursor is under
the simplest conditions. Start throwing big multiline blocks into the
history and it breaks down in tears. Would love to see this and
SI-2547 addressed by the community.
...
I expect to fix this soon too, but it depends on how well the recent
jline work goes. I implemented it long ago, and display issues are the
impediment.
Both of these comments are over two years old, so I wouldn't hold your breath.
I don't know of a command to load all the code from the command line.
What you can do is :load path/to/my/file.scala to load some complex code, and re-:load it when you have changed the code in the file.
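A typical round trip looks like this (Scratch.scala is just an illustrative file name):

scala> :load /path/to/Scratch.scala   // compiles and evaluates the file in the current session
// ... edit Scratch.scala in your editor ...
scala> :load /path/to/Scratch.scala   // reload to pick up the changes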
I'm about to rewrite a large portion of a project that I have developed over the last 10 years while learning Perl. There is a lot of optimisation that can be gained.
A key part of the code is a large if/elsif block that requires xxx.cgi files depending on a POST value, e.g.:
if($FORM{'action'} eq "1"){require "1.cgi";}
elsif($FORM{'action'} eq "2"){require "2.cgi";}
elsif($FORM{'action'} eq "3"){require "3.cgi";}
elsif($FORM{'action'} eq "4"){require "4.cgi";}
It has many more irritations, but just how expensive is using "require" in Perl?
require itself has a relatively low cost in any case and, if you require the same file more than once within a single run of your program, it will detect that the file has already been loaded and not attempt to load it a second time. However, if you have a long and highly populated search path (@INC) and you require (or use) a lot of files, it's possible that all of the directory searches could add up; this isn't common (and doesn't sound likely in your case), but it can be improved by reorganizing your module directories so that the things you're loading show up earlier in @INC.
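If the search path does turn out to matter, a one-line way to push your own module directory to the front of @INC is use lib (the path here is only illustrative):

use lib '/path/to/my/modules';   # prepended to @INC, so these files are found first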
The potentially major performance hit referred to by earlier answers is the cost of compiling the code in the files you require. Getting rid of the require by moving the code into your main program will not help with this, as the code will still need to be compiled. In your case, it would probably make things worse, as it would cause the code for all the options to be compiled on every run rather than only compiling the code used by the one action selected by the user.
As has been said, it really depends on the actual code in those files. Your best bet would be to do tests using Devel::NYTProf and/or Benchmark to see where the most time is being spent in your code if you are unhappy with its performance.
You can also read Profiling Perl on perl.com, but it is a bit outdated as it uses Devel::DProf.
Not an answer to your primary question, but still a good idea for a code refactor that I read recently on Ovid's blog.
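The idea (without claiming this is exactly what the blog post does) is to replace the if/elsif chain with a lookup, roughly like this:

# Sketch: whitelist the known actions and build the file name once,
# instead of writing one elsif branch per action.
my %actions = map { $_ => "$_.cgi" } 1 .. 4;
my $file = $actions{ $FORM{'action'} }
    or die "Unknown action: $FORM{'action'}\n";
require $file;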
The first time, possibly expensive; Perl has to search a path to find the file and load it up. Subsequent times, it's cheap -- a table is consulted and the file isn't actually loaded a second time. If this is in a CGI that is run once per request and then exited, then this is not too good.
It's really going to depend on the size of the files you're requiring. If you have massive CGI files, then it might hurt the performance of your software. If we're talking 6 or 7 lines of code each, then there's no issue. Try benchmarking your program's performance with and without them, and make your own judgement.