I am required to print huge amounts of data (on the order of a few hundred MB) to the console. Using println for this is failing miserably in IntelliJ.
Is there any alternative, like console.log, that can handle and display this data without lagging and slowing everything down?
Thanks in advance!
You can buffer (sketched below), and perhaps bypass character encoding. It's also worth looking at the IntelliJ settings, particularly if you don't see this problem when running from the command line -- perhaps IntelliJ is offering some functionality in its console (e.g. highlighting errors, linking stack traces to line numbers) that involves scanning every line. You might also look at word wrapping in your console.
(That said, I'm not sure how you expect to understand anything from 100MB of printing -- if it's something you need to skim for an overall pattern, try writing code that detects the pattern the same way you would by eye.)
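If it helps to see the buffering idea concretely: this is presumably a JVM program, so the real fix is along the lines of wrapping System.out in a BufferedWriter (or a PrintStream over a large BufferedOutputStream) and flushing once, but the principle is the same in any language. Here's a rough Python sketch of "accumulate, then write in big chunks instead of per line" -- the chunk size is arbitrary:

import sys

chunks = []
for i in range(1_000_000):              # stand-in for whatever produces your output
    chunks.append(f"line {i}\n")
    if len(chunks) >= 10_000:           # write in large chunks, not per line
        sys.stdout.write("".join(chunks))
        chunks.clear()
sys.stdout.write("".join(chunks))       # write whatever is left over

The point is that every individual write to the console (and every line the IDE console has to scan) costs something, so you want far fewer, far bigger writes.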
I would very much appreciate help with the following annoying problem:
I'm using PyDev in Eclipse on my Ubuntu 14.04 machine, and every time I run my code in debug mode, it takes around 3-4 minutes to start.
My research showed that each "import" statement takes a very long time to run (without the import statements, the problem vanishes).
Can anyone tell me how I can overcome this problem?
Thanks!
I'm attaching:
1) my import statements.
2) my file tree (the file I'm running is in the folder "Gil").
3) and the debug window (during these 3-4 minutes, Eclipse adds more and more lines there that just say "light.py" (this is the file I'm running)).
I'm only guessing here, but from your output in PyDev it seems you're using multiprocessing or something else that creates Python subprocesses (which is why I think you keep getting a new light.py entry in the debugger).
Without looking at your code it's a bit hard to guess what's actually happening, but I can give you some suggestions here:
Make your imports lazier. If you're always launching a new process that has to re-execute all the imports, that can indeed add quite a bit of time -- imports in Python are usually slow, and even more so with a debugger in place. Consider profiling in regular (non-debug) mode to find out what's actually going on; if your project is open source or you can afford the price, http://www.pyvmmonitor.com/ can probably help you quite a bit here. If you haven't profiled your code before, there are probably low-hanging fruits that will give you a nice speedup.
Use only programmatic breakpoints with the remote debugger (see: http://pydev.org/manual_adv_remote_debugger.html) -- this will make your code run at regular speed until it hits the programmatic breakpoint (there's a rough sketch of this and the lazy-import idea after these suggestions).
If none of those help, please add more details on your code (are you using stackless, greenlets, threads, multiple processes, etc.?). Also, 3-4 minutes may or may not be a lot; without knowing how long the run takes outside the debugger, it's hard to say.
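Here's a rough Python sketch of the first two suggestions. The file and module names are made up for illustration; pydevd is the module that ships with PyDev's remote debugger, and the sketch assumes PyDev's pysrc directory is on your PYTHONPATH:

def process_file(path):
    # Lazy import: the heavy module is only imported when this function runs,
    # not at start-up (and not again in every subprocess that re-imports the script).
    import numpy as np                  # numpy is just a stand-in for any slow import
    with open(path, "rb") as f:
        return np.frombuffer(f.read(), dtype=np.uint8)

def main():
    data = process_file("light.raw")    # hypothetical input file

    # Programmatic breakpoint for the remote debugger: the script runs at full
    # speed until this call, which attaches to the debug server listening in
    # Eclipse (see the manual_adv_remote_debugger page linked above).
    import pydevd
    pydevd.settrace('localhost')

    print(data[:10])

if __name__ == "__main__":
    main()

With that in place you run the script normally (not under "Debug") and only pay the debugger overhead from the settrace() call onwards.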
Clearly there is something wrong with my understanding of brainfuck, or there's something wrong with the bf interpreter on ideone.com.
By entering code as simple as ,.,. (which reads two characters and prints them), I get the error "bff: out of memory (871638280)". Why do I get this?
NOTE: The real problem is that I'm trying to solve a problem on SPOJ, and some code that works on brainfuck interpreters I found across the internet doesn't work on SPOJ and ideone.com.
It appears to work fine; my BF torture test runs properly:
ideone.com/9fQ2Ej
I am NOT going to try to fight this UI to make the BF look correct!
It's here:
https://github.com/rdebath/Brainfuck/blob/master/bitwidth.b
It does appear to have a large cell size, though, and it isn't fast enough to offset this.
EDIT: Anyway, Daniel Cristofani's end-of-input test:
,>+++++++++,>+++++++++++[<++++++<++++++<+>>>-]<<.>.<<-.>.>.<<.
It gives 'LA', showing that the program accepts input successfully, gives the correct character for a newline, and gives -1 for end of file. As it's a big-cell interpreter, this is perfectly acceptable.
However, I do see your point: there's something weird going on. I suggest you try one of the JavaScript implementations below (they run in your browser), or the small local interpreter sketched after them.
http://t-monster.com/apps/brainfuck_IDE
http://www.iwriteiam.nl/Ha_bf_online.html
http://brainfuck.devbar.de/
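If you'd rather test locally, a Brainfuck interpreter is only a few dozen lines. Here's a minimal Python sketch (not ideone's bff, just a reference implementation for sanity-checking programs) that uses unbounded cells and stores -1 on end of input, i.e. the conventions the end-of-input test above accepts:

def run_bf(code, data=""):
    # Precompute matching bracket positions for [ and ].
    stack, jumps = [], {}
    for i, c in enumerate(code):
        if c == '[':
            stack.append(i)
        elif c == ']':
            j = stack.pop()
            jumps[i], jumps[j] = j, i

    tape = [0] * 30000                  # cells are plain Python ints ("big cells")
    ptr = pc = pos = 0
    out = []
    while pc < len(code):
        c = code[pc]
        if c == '>':
            ptr += 1
        elif c == '<':
            ptr -= 1
        elif c == '+':
            tape[ptr] += 1              # no wrap-around at 256
        elif c == '-':
            tape[ptr] -= 1
        elif c == '.':
            out.append(chr(tape[ptr] % 256))
        elif c == ',':
            if pos < len(data):
                tape[ptr] = ord(data[pos])
                pos += 1
            else:
                tape[ptr] = -1          # end of input stores -1 (the 'LA' behaviour)
        elif c == '[' and tape[ptr] == 0:
            pc = jumps[pc]
        elif c == ']' and tape[ptr] != 0:
            pc = jumps[pc]
        pc += 1
    return ''.join(out)

# The ,.,. program from the question: read two characters, print them.
print(run_bf(",.,.", "hi"))             # prints "hi", with no out-of-memory error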
I frequently use the Scala console to evaluate and test code before I actually write it down in my project. If I want to know the contents of a variable, I can just enter it and Scala evaluates it. But is there also a way to show the code of the methods I've entered?
I know the UP key shows single lines, but what I'm looking for is a way to show all the code at once.
There's a file in your home directory named .scala_history that contains all of your recent REPL history. I regularly copy and paste code from this file into project source files. It's not exactly the same as showing the code for individual methods in the REPL, but it might help you accomplish the same goals.
See the comments by Paul Phillips in this issue for a discussion of some related functionality in the REPL (grouping statements in the history):
At some point I implemented the logic for this, but the real obstacle is jline. It has enough trouble figuring out where the cursor is under the simplest conditions. Start throwing big multiline blocks into the history and it breaks down in tears. Would love to see this and SI-2547 addressed by the community.
...
I expect to fix this soon too, but it depends on how well the recent jline work goes. I implemented it long ago, and display issues are the impediment.
Both of these comments are over two years old, so I wouldn't hold your breath.
I don't know of a command to load all the code from the command line.
What you can do is use :load path/to/my/file.scala to load some complex code, and re-:load it when you've changed the code in the file.
I know I risk asking a speculative question; however, inspired by this recent question, I wonder which editor does the best job of syntax highlighting Perl. Being well aware of the difficulties (impossibilities) of parsing Perl, I know there will not be a perfect case. Still, I wonder if there is a clear leader in faithful representation.
N.B. I use gedit and it works fine, but with known issues.
Komodo Edit does a good job and also scans your modules (including those installed via CPAN) and does well at generating autocomplete data for them.
I'm a loyal vim user and rarely encounter anything odd with the native syntax.vim, except for these cases (I'll edit in more if/when I find them; others please feel free also):
!!expression is better written !!!!expression (everything after two ! is rendered as a comment/quoted string; four ! brings everything back to normal)
m## or s### renders everything after the # as a comment; I usually use {} as a delimiter when avoiding / for leaning toothpick syndrome
some edge cases for $hash{key} where key is not a simple alphanumeric string - although it's safer to enclose such key names in '' anyway so as to not have to look up the exact cases for when a bareword is treated as a key name
I haven't used it, but Padre should be good since it's written in Perl. IIRC it uses PPI for parsing.
jEdit...with the tweaks that I have amassed over the years. It's got the most customizable syntax highlighting I've ever seen.
I use Emacs in CPerl mode. I think it does a terrific job, although similar to Ether's answer, it's not perfect. What's more, I usually use Htmlize to publish highlighted code to the web. It's kind of annoying to use fancier forums like this one that do their own syntax highlighting, since it's not really any easier and the results aren't as good.
SOLVED (see Edit 2)
Hello,
I've been writing a Perl program to handle automatic upgrading of local (proprietary) programs (for the company I work for).
Basically, it runs via cron, and unfortunately has a memory leak (or something similar). The problem is that the leak only happens when I'm not looking (aka when run via cron, not via command line).
My code does not contain any circular (or other) references, so the commonly cited tools will not help me (Devel::Cycle, Devel::Peek).
How would I go about figuring out what is using so much memory that the kernel kills it?
Basically, the code SFTPs into a server (shelling out via backticks to `sftp ...`), calls OpenSSL to verify the file, then SFTPs more files if more are needed and installs them (untars them).
I have seen delays (~15 sec) before the first SFTP session, but it has never used so much memory as to be killed (in my presence).
If I can't sort this out, I'll need to re-write in a different language, and that will take precious time.
Edit: The following message is printed out by the kernel which led me to believe it was a memory leak:
[100023.123] Out of memory: kill process 9568 (update.pl) score 325406 or a child
[100023.123] Killed Process 9568 (update.pl)
I don't believe it is an issue with cron, because of the stalling (for ~15 sec, sometimes) when running it via the command line. Also, there are no environment variables used (at least not by what I've written; maybe underlying things use them?).
Edit 2: I found the issue myself, with help from the comment below by mobrule (in response to this question). It turns out the script was called from a non-root user's crontab just once a day, and the lack of root privileges caused a special infinite-loop situation.
Sorry guys, I feel kinda stupid for not finding this before, but thanks.
mobrule, if you submit your comment as an answer, I will accept it, as it led to me finding the problem.
End Edits
Thanks,
Brian
P.S. I may be able to post small snippets of code, but not the whole thing due to company policy.
You could try using Devel::Size to profile some of your objects. For example, in the main:: scope (the .pl file itself), do something like this:
use Devel::Size qw(total_size);

# varname1, varname2, ... are placeholders for the names of your own package variables.
# Note that $$varname is a symbolic reference, so this won't run under "use strict 'refs'".
foreach my $varname (qw(varname1 varname2)) {
    print "size used for variable $varname: " . total_size($$varname) . "\n";
}
Compare the actual size used to what you think is a reasonable value for each object. Something suspicious might pop out immediately (e.g. a cache that is massively bloated beyond anything that sounds reasonable).
Other things to try:
Eliminate bits of functionality one at a time to see if suddenly things get a lot better; I'd start with the use of any external libraries
Is the bad behaviour localized to just one particular machine, or one particular operating system? Move the program to other systems to see how its behaviour changes.
(In a separate installation) try upgrading to the latest Perl (5.10.1), and also upgrade all your CPAN modules
How do you know that it's a memory leak? I can think of many other reasons why the OS would kill a program.
The first question I would ask is "Does this program always work correctly from the command line?". If the answer is "No" then I'd fix these issues first.
On the other hand if the answer is "Yes", I would investigate all the differences between having the program executed under cron and from the command line to find out why it is misbehaving.
If it is run by cron, shouldn't it die after each iteration? If that's the case, it's hard for me to see how a memory leak would be a big deal...
Are you sure it is the script itself, and not the child processes, that is using the memory? Perhaps it ends up creating a whole lot of ssh sessions, instead of doing a bunch of stuff in one session?