Perl improve performance of generating 1 pixel image [closed] - perl

Closed. This question needs details or clarity. It is not currently accepting answers.
Closed 7 years ago.
Hi, I need to generate a 1x1 pixel image in Perl. What is the fastest way to generate this, assuming my web server will be getting 10K connections per second?
Currently I am using this:
print MIME::Base64::decode("iVBORw0KGgoAAAANSUhEUgAAAAEAAAABAQMAAAAl21bKAAAABGdBTUEAALGPC/xhBQAAAAZQTFRF////AAAAVcLTfgAAAAF0Uk5TAEDm2GYAAAABYktHRACIBR1IAAAACXBIWXMAAAsSAAALEgHS3X78AAAAB3RJTUUH0gQCEx05cqKA8gAAAApJREFUeJxjYAAAAAIAAUivpHEAAAAASUVORK5CYII=");
I cannot host a static file, as we need to process the request for some data.
Thanks
Kathiresh Nadar

First off, for high-performance Perl you should be using FastCGI (via the FCGI module directly, or the CGI::Fast wrapper), mod_perl, or some other technology that keeps your script around as a persistent process in memory.
Secondly, if you're processing the request for some other data first and that involves anything like writing to a file or talking to a database or something like that, your time will be dominated by that processing. Generating the image is not the slow part.
But let's answer your question anyway: assuming that you are using some keep-the-script-in-memory technology, then the first thing you can do is move your MIME::Base64::decode call to a BEGIN block, store the result in a variable, and use that variable.
But also, sending the image over the wire is likely going to take longer than the processing on the server, so why are you sending 167 bytes of PNG when you could be sending 42 bytes of GIF? Put both of those pieces of advice together, and you get:
my $gifdata;
BEGIN {
    $gifdata = MIME::Base64::decode(
        "R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7");
}
print $gifdata;

Related

how to filter by data using BPF [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
Closed 1 year ago.
I am trying to filter a pcap file using BPF syntax. I need it to return only the HTTP GET requests that contain a certain word in the request; is it possible to do that? I managed to match the GET requests, but I can't find how to filter by the data of the packet.
What you've been asked to do is tricky, difficult, and impractical unless Wireshark or TCPDump do not have a protocol parser for some weird protocol that you are using.
A way to grab GET requests using BPF only would be as follows:
dst port 80 and tcp[(tcp[12]>>2):4]=0x47455420
The reason it must be done in this way is that you must account for the possibility of changing TCP options and, as a result, changing locations for where the data offset begins. This figures out where the data starts and examines the first four bytes for the string "GET ".
You may also note that I am taking a shortcut for the TCP data offset value in byte 12. It would be better practice to do this:
(tcp[12]>>4)*4
or this:
((tcp[12]&0xf0)>>2)
This would account for any bits in the reserved lower nibble being enabled.
It is late for an answer, but anyway:
You can filter GET or any other HTTP requests with BPF.
The following example from bpfcc-tools shows an implementation of a similar task. It is supposed to work on a live network interface, not a pcap file, but I hope you can adapt it to be applied to a file.

Why use a wildcard (_) in Swift? [closed]

Closed. This question needs details or clarity. It is not currently accepting answers.
Closed 6 years ago.
Going through the Apple documentation for Swift, I designed a for loop that creates 5 buttons to act as "stars" in a 5-star rating feature.
I noticed that the for loop is constructed as follows:
for _ in 0..<5
And in the explanation for this, Apple mentions that you can use a wildcard _ operator when you don't need to know which iteration of the loop is currently executing.
But what's the upside to not knowing the iteration? Is it a memory-saving optimization? Is there ever a scenario where you don't want to know the iteration?
In general, unused variables add semantic overhead to code. When someone reads your code, they need to make an effort to understand the function of each line and identifier, so that they can accurately anticipate the impact of changing the code. A wildcard operator allows you to communicate that a value that is required by the syntax of the language is not relevant to the code that follows (and is not, in fact, even referenced).
In your specific example, a loop might well need executing a certain number of times, but if you want to do the exact same thing on each iteration, the iteration count is irrelevant. It's not especially common, but it does happen. The semantic overhead in this case is low, but it's meaningful (you're making it clear from the start that you intend to do the same thing every time), and it's a good habit to get in, broadly.
The for loop index needs to be stored in memory, so this won't serve as any kind of memory optimization.
I think most importantly it easily conveys to readers of the code that the loop index is not important.
Additionally, it prevents you from having to come up with some arbitrary dummy variable name. It also declutters your name space when you're debugging inside the loop, since you won't be shown this dummy variable.
Is there ever a scenario where you don't want to know the iteration?
Certainly. Consider:
func echo(_ s: String, times: Int) -> String {
    var result = ""
    for _ in 1...times { result += s }
    return result
}
How would you write it?
This is not necessarily about knowing the iteration count. Sometimes you don't need the element you are iterating over inside the loop, and the underscore is the placeholder for it, just as the underscore in a declaration is the symbol that suppresses externalization of a parameter name. Check this SO thread for more detail about this.

Get bogus value when execute break point in a variable [closed]

Closed. This question needs details or clarity. It is not currently accepting answers.
Closed 6 years ago.
I got a bogus value of finalScore, what happened?
I want to know why finalScore is 4339953456 rather than a correct number 520.
Your breakpoint is on the finalScore = line, which means the program is stopped before this value has been computed.
It should probably show no value instead of a bogus one, but this is not something you have to worry about: set your breakpoint one line later and finalScore will have a proper value.

What is the issue of select() using so much CPU power? [closed]

Closed. This question needs details or clarity. It is not currently accepting answers.
Closed 7 years ago.
I am writing a network communication program in C/C++ using non-blocking sockets and select(). The program is pretty big, so I cannot upload the source code. In a very aggressive testing session, I use testing code to open and close both TCP and UDP connections frequently. It always ends up that one end stops responding, with CPU usage over 98 or 99%. Then I attach with gdb; "bt" shows the following:
0x00007f1b71b59ac3 in __select_nocancel () at ../sysdeps/unix/syscall-template.S:82
82 ../sysdeps/unix/syscall-template.S: No such file or directory.
in ../sysdeps/unix/syscall-template.S
What type of error could it be?
$ uname -a
Linux kiosk2 2.6.32-34-generic #77-Ubuntu SMP Tue Sep 13 19:39:17 UTC 2011 x86_64 GNU/Linux
It's impossible to say without looking at the code, but often when a select-based loop starts spinning at ~100% CPU usage, it's because one or more of the sockets you told select() to watch are ready-for-read (and/or ready-for-write) so that select() returns right away instead of blocking... but then the code neglects to actually recv() (or send()) any data on that socket. After failing to read/write anything, your event loop would try to go back to sleep by calling select() again, but of course the socket's data (or buffer space, in the ready-for-write case) is still there waiting to be handled, so select() returns immediately again, the buggy code neglects to do the recv() (or send()) again, and around and around we go at top speed :)
Another possibility would be that you are passing in a timeout value to select() that is either zero or near-zero, causing select() to return very quickly even when no sockets are ready-for-anything... that often happens when people forget to re-initialize the timeval struct before each call to select(). You need to re-initialize the timeval struct each time because some implementations of select() will modify it before returning.
My suggestion is to put some printf's (or your favorite equivalent) immediately before and immediately after your call to select(), and watch that output as you reproduce the fault. That will show you whether the spinning is happening inside of a single call to select(), or if something is causing select() to return immediately over and over again.

Scala actors - worst practices? [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 3 years ago.
I feel a bit insecure about using actors in Scala. I have read documentation about how to do stuff, but I guess I would also need some DON'T rules in order to feel free to use them.
I think I am afraid that I will use them in a wrong way, and I will not even notice it.
Can you think of something, that, if applied, would result in breaking the benefits that Scala actors bring, or even erroneous results?
Avoid !? wherever possible. You will get a locked system!
Always send a message from an Actor-subsystem thread. If this means creating a transient Actor via the Actor.actor method then so be it:
case ButtonClicked(src) => Actor.actor { controller ! SaveTrade(trdFld.text) }
Add an "any other message" handler to your actor's reactions. Otherwise it is impossible to figure out if you are sending a message to the wrong actor:
case other => log.warning(this + " has received unexpected message " + other)
Don't use Actor.actor for your primary actors, subclass Actor instead. The reason for this is that it is only by subclassing that you can provide a sensible toString method. Again, debugging actors is very difficult if your logs are littered with statements like:
12:03 [INFO] Sending RequestTrades(2009-10-12) to scala.actors.Actor$anonfun$1
Document the actors in your system, explicitly stating what messages they will receive and precisely how they should calculate the response. Using actors results in the conversion of a standard procedure (normally encapsulated within a method) to become logic spread across multiple actor's reactions. It is easy to get lost without good documentation.
Always make sure you can communicate with your actor outside of its react loop to find its state. For example, I always declare a method to be invoked via an MBean which looks like the following code snippet. It can otherwise be very difficult to tell if your actor is running, has shut down, has a large queue of messages etc.
def reportState = {
  val _this = this
  synchronized {
    val msg = "%s Received request to report state with %d items in mailbox".format(
      _this, mailboxSize)
    log.info(msg)
  }
  Actor.actor { _this ! ReportState }
}
Link your actors together and use trapExit = true - otherwise they can fail silently, meaning your program is not doing what you think it is and will probably run out of memory as messages remain in the actor's mailbox.
I think that some other interesting choices around design-decisions to be made using actors have been highlighted here and here
I know this doesn't really answer the question, but you should at least take heart in the fact that message-based concurrency is much less prone to weird errors than shared-memory-thread-based concurrency.
I presume you have seen the actor guidelines in Programming in Scala, but for the record:
Actors should not block while processing a message. Where you might want to block try to arrange to get a message later instead.
Use react {} rather than receive {} when possible.
Communicate with actors only via messages.
Prefer immutable messages.
Make messages self-contained.