Pytesseract is too slow - high disk I/O from Tesseract

I'm creating a bot for a video game. Everything is working well (thanks to some Stack Overflow members), but pytesseract's response time is too high.
I have to read a picture of this kind every second (after converting it to black text on a white background, a very quick step that takes negligible time).
What I'm doing is dividing the picture into nine pieces, one for each line of text, and then calling pytesseract.image_to_string(img) on each.
This process takes about 3 seconds, and I think it can be faster, given that the text is short.
I noticed high disk I/O in Process Hacker; see the following screenshot: Disk I/O
Last thing: I have the feeling that it's a bit better when executing the Python script as administrator, but I'm not sure, and it's not enough.
Do you have a solution that I can implement to make it faster?

You need to use the Tesseract API instead of pytesseract, which initializes Tesseract (e.g. reads the traineddata) every time you run OCR (and also writes the input image to disk and reads the OCR result back from disk...). For example, have a look at https://github.com/zdenop/SimpleTesseractPythonWrapper/blob/master/SimpleTesseractPythonWrapper.ipynb
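For instance, a minimal sketch using the tesserocr package (my assumption; the notebook above wraps the C API through ctypes instead, and the file names here are hypothetical) keeps one API instance alive so the traineddata is read only once and images stay in memory:

from PIL import Image
import tesserocr  # pip install tesserocr

# Hypothetical input: the nine black-on-white line crops from your preprocessing.
line_images = [Image.open(f"line_{i}.png") for i in range(9)]

# Initialize once: the traineddata is loaded here, not on every call, and the
# images are handed over in memory instead of through temporary files.
with tesserocr.PyTessBaseAPI(lang="eng", psm=tesserocr.PSM.SINGLE_LINE) as api:
    for piece in line_images:
        api.SetImage(piece)
        print(api.GetUTF8Text().strip())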


Stop execution when RAM is filled (i.e. avoid writing to Disk)

I have this problem:
I run some large calculations before going to sleep (or work).
When I return, sometimes RAM is already full and the program has started writing to disk, which is a problem because the computer then becomes almost unresponsive; also, the "Interrupt the current operation" button doesn't stop mserver.exe from executing its task.
This is what I saw 10 minutes after I pressed "Interrupt the current operation":
Not to mention that calculations are probably 100 or even 1000 times slower once it starts using disk instead of RAM (so it's pointless anyway).
Another problem is that I was unable to save some variables to a file: I couldn't type anything in Maple while mserver.exe was executing a task, and after I killed mserver.exe I still couldn't save those variables, since Maple commands don't work once the connection to the kernel is lost.
So, my question: can I make it so that mserver.exe won't use the disk at all (I mean from Maple alone, not by disabling the page file in Windows) and just stops execution automatically when RAM is full (just like Classic Maple does when it hits its 2 GB limit)?
It would also be nice to be able to keep Maple from using too much of the processor, for example limiting it to 75% or so, so that I could work on that computer without problems.
You might experiment with a few of the options available for specifying limits on the Maple (kernel, mserver) engine.
In particular,
--init-reserve-mem=memorysize
(or, possibly, the -T option). See here for some more detail:
https://www.maplesoft.com/support/help/MapleSim/view.aspx?path=maple
On Linux/OSX you could pass that in a call to the maple script that launches Maple. On MS-Windows you could add that to the command string/Property in the launcher (icon).
You might try setting it to a fraction of your total RAM, e.g. 50-75%, and see how it goes. Presumably you'll have some other processes running.
As far as restricting CPU use goes, that's more of an OS issue. On Linux/OSX you could use the system's nice facility. I don't know what's available on MS-Windows (built-in or 3rd party). You might be able to set the priority of the running mserver process from the Task Manager. Or you might look at something like the START facility:
https://learn.microsoft.com/en-us/windows-server/administration/windows-commands/start
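As a rough illustration only (the install path below is hypothetical, and the exact memorysize syntax is described on the Maple help page linked above), a Windows launcher command string could combine both ideas:

start "" /belownormal /affinity 0x7 "C:\Program Files\Maple\bin.X86_64_WINDOWS\maplew.exe" --init-reserve-mem=memorysize

Here /belownormal lowers the process priority and /affinity 0x7 restricts it to three cores, which is roughly the 75% you asked about on a quad-core machine.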

Is there a way to get a SIMPLE report using perl NYTProf?

I installed NYTProf and ran my code using it. I'm trying to get a simple listing of lines and the time spent on them. Good Lord, all this profiler has is HTML file reports or data dumps designed for import into data analysis tools. I'm working on a remote system, and firing up a browser to load file:/// URLs requires setting up tunnels and remote (slow) X servers etc.; it's a pain in the butt. All I want is a simple:
Function:Line percentage-time-spent (sorted with longest time spent lines at the top)
openlogs:27 40%
readlogs:124 30%
closelogs:1243 20%
profile:67 10%
You know, a profiler.
This is perl. It's not UX demonstration time. I'm not preparing a report for a Congressional sub-committee.
The documentation doesn't seem to show a way to get a simple report designed for developers to show which slowness to attack in their code. Am I missing something here? Anyone know a way to do this quickly?
The way to get simple profiler output is to use Devel::Profile:
$ perl -d:Profile my_script.pl
$ cat prof.out

compression algorithm that allows random reads/writes in a file?

Does anyone have a better compression algorithm that would allow random reads/writes?
I think you could use any compression algorithm if you write it in blocks, but ideally I would not like to have to decompress a whole block at a time. If you have suggestions on an easy way to do this, and on how to know where the block boundaries are, please let me know. If this is part of your solution, please also let me know what you do when the data you want to read crosses a block boundary.
In the context of your answers, please assume the file in question is 100 GB, and sometimes I'll want to read the first 10 bytes, sometimes the last 19 bytes, and sometimes 17 bytes in the middle.
Have these people never heard of "compressed file systems", which have been around since before Microsoft was sued in 1993 by Stac Electronics over compressed file system technology?
I hear that LZS and LZJB are popular algorithms for people implementing compressed file systems, which necessarily require both random-access reads and random-access writes.
Perhaps the simplest and best thing to do is to turn on file system compression for that file, and let the OS deal with the details. But if you insist on handling it manually, perhaps you can pick up some tips by reading about NTFS transparent file compression.
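If you do handle it manually, a minimal in-memory sketch of the block approach mentioned in the question might look like this (the block size, function names and index layout are my own choices, and writes are not handled; a random write would mean recompressing the affected block and rebuilding the index from that point on):

import zlib

BLOCK_SIZE = 64 * 1024  # uncompressed bytes per block (arbitrary choice)

def compress_to_blocks(data: bytes):
    """Compress data in fixed-size blocks; return (blob, index) where
    index[i] is the offset of block i inside the concatenated blob."""
    parts, index, offset = [], [], 0
    for start in range(0, len(data), BLOCK_SIZE):
        comp = zlib.compress(data[start:start + BLOCK_SIZE])
        index.append(offset)
        parts.append(comp)
        offset += len(comp)
    return b"".join(parts), index

def read_range(blob: bytes, index, start: int, length: int) -> bytes:
    """Read an arbitrary byte range, decompressing only the blocks it
    touches -- including a read that crosses a block boundary."""
    first = start // BLOCK_SIZE
    last = (start + length - 1) // BLOCK_SIZE
    out = bytearray()
    for i in range(first, last + 1):
        end = index[i + 1] if i + 1 < len(index) else len(blob)
        out += zlib.decompress(blob[index[i]:end])
    skip = start - first * BLOCK_SIZE
    return bytes(out[skip:skip + length])

The index itself would be stored alongside the compressed file (or at its end), so a reader can seek straight to the block containing any byte offset.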

Difference between machine language, binary code and a binary file

I'm studying programming, and in many sources I see the concepts "machine language", "binary code" and "binary file". The distinction between these three is unclear to me because, according to my understanding, machine language means the raw language that a computer can understand, i.e. sequences of 0s and 1s.
Now if machine language is a sequence of 0s and 1s and binary code is also a sequence of 0s and 1s then does machine language = binary code?
What about a binary file? What really is a binary file? To me the term "binary file" means a file that consists of binary code. So for example, if my file was:
010010101010010
010010100110100
010101100111010
010101010101011
010101010100101
010101010010111
Would this be a binary file? If I google "binary file" and look at Wikipedia, I see this example picture of a binary file, which confuses me (it's not in binary?...)
Where is my confusion happening? Am I mixing up file encoding here or what? If I were to ask someone to SHOW me what machine language, binary code and a binary file are, what would they be? =) I guess the distinction is too abstract for me.
Thnx for any help! =)
UPDATE:
In Python for example, there is one phrase in a file I/O tutorial, which I don't understand: Opens a file for reading only in binary format. What does reading a file in binary format mean?
Machine code and binary are the same - a number system with base 2 - either a 1 or a 0. But machine code can also be expressed in hex format (hexadecimal) - a number system with base 16. The binary system and hex are closely related, and it's easy to convert from binary to hex and back from hex to binary. And because hex is much more readable and useful than binary, it's often what gets used and shown. For instance, the picture in your question uses hex numbers!
Let's say you have the binary sequence 1001111000001010 - it can easily be converted to hex by grouping it into blocks of four bits each.
1001 1110 0000 1010 => 9 14 0 10, which in hex becomes: 9E0A.
One can agree that 9E0A is much more readable than the binary - and hex is what you see in the image.
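You can check the same conversion with a couple of lines of Python, just as an illustration of the grouping above:

bits = "1001111000001010"
value = int(bits, 2)           # parse the string as a base-2 number
print(hex(value))              # -> 0x9e0a
print(format(value, "016b"))   # and back to 16 binary digits: 1001111000001010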
I'm honestly surprised not to see the information I was looking for. Looking back, though, I guess the title of this thread isn't fully appropriate to the question the OP was asking.
You guys all say "Machine Code is a bunch of numbers".
Sure, the "CODE" is a bunch of numbers, but what people are wondering (I'm guessing) is "what actually is happening physically?"
I'm quite a novice when it comes to programming, but I understand enough to feel confident in 'roughly' answering this question.
Machine code, to the actual circuitry, isn't numbers or values.
Machine code is a bunch of voltage gates that are either open or closed, and depending on what they're connected to, a certain light will flicker at a certain time etc.
I'm guessing that the "machine code" dictates the pathway and timing for specific electrical signals that will travel to reach their overall destination.
So for 010101, 3 voltage gates are closed (The 0's), 3 are open (The 1's)
I know I'm close to the right answer here, but I also know it's much more sophisticated - because I can imagine that which I don't know.
010101 would be easy instructions for a simple circuit, but what I can't begin to fathom is how a complex computer processes all of the information.
So I guess let's break it down?
An "x-bit" processor designation tells how many bits the processor can process at once.
A bit is either 1 or 0, "On" or "Off", "Open" or "Closed"
so 32-bit processors process "10101010 10101010 10101010 10101010" - this many bits at once.
A processor is an "integrated circuit", which is like a compact circuit board, containing resistors/capacitors/transistors and some memory. I'm not sure if processors have resistors, but I know you'll usually find a ton of them located around the actual processor on the circuit board.
Anyways, a transistor is a switch so if it receives a 1, it sends current in one direction, or if it receives a 0, it'll send current in a different direction... (or something like that)
So I imagine that as machine code goes... the segment of code the processor receives changes the voltage channels in such a way that it sends a signal to another part of the computer (why do you think processors have so many pins?), probably another integrated circuit more specialized to a specific task.
That integrated circuit then receives a chunk of code, let's say 2 to 4 bits 01 or 1100 or something, which further defines where the final destination of the signal will end up, which might be straight back to the processor, or possibly to some output device.
Machine code is a very efficient way of taking a circuit and connecting it to a lightbulb, and then taking that lightbulb out of the circuit and switching the circuit over to a different lightbulb.
Memory in a computer is highly necessary because otherwise, to get your computer to do anything, you would need to type out everything (in machine code). Instead, all of the 1's and 0's are stored inside some storage device, either a spinning hard disk with a magnetic head pin that 'reads' 1's or 0's based on the charge of the disk, or a flash memory device that uses a series of transistors, where sending a voltage through elicits 1's and 0's (I'm not fully aware how flash memory works).
Fortunately, someone took the time to think up a different base number system for programming (hex), and a way to compile those numbers (translate them) back into binary. And then all software programs have branched out from there.
Each key on the keyboard creates a specific signal in binary that translates to a bunch of switches being turned on or off using certain voltages, so that a current could be run through the specific individual pixels on your screen that create "1" or "0" or "F", or all the characters of this post.
So I wonder, how does a program 'program', or 'make' the computer 'do' something... Rather, how does a compiler compile a program of a code different from binary?
It's hard to think about now because I'm extremely tired (so I won't try) but also because EVERYTHING you do on a computer is because of some program.
There are actively running programs (processes) in task manager. These keep your computer screen looking the way you've become accustomed, and also allow for the screen to be manipulated as if to say the pictures on the screen were real-life objects. (They aren't, they're just pictures, even your mouse cursor)
(Ok I'm done. enough editing and elongating my thoughts, it's time for bed)
Also, what I don't really get is how 0's are 'read' by the computer.
It seems that a '0' must not be a 'lack of voltage'; rather, it must be some other type of signal.
Where perhaps something like 1 volt = 1, and 0.5 volts = 0. Some distinguishable difference between currents in a circuit that would still send a signal, but could be the difference between opening and closing a specific circuit.
If I'm close to right about any of this, serious props to the computer engineers of the world, the level of sophistication is mouthwatering. I hope to know everything about technology someday. For now I'm just trying to get through arduino.
Lastly... something I've wondered about... would it even be possible to program today's computers without the use of another computer?
Machine language is a low-level programming language that generally consists entirely of numbers. Because they are just numbers, they can be viewed in binary, octal, decimal, hexadecimal, or any other way. Dave4723 gave a more thorough explanation in his answer.
Binary code isn't a very well-defined technical term, but it could mean any information represented by a sequence of 1s and 0s, or it could mean code in a machine language, or it could mean something else depending on context.
Technically, all files are stored in binary; we just don't usually look at the binary when we view a file. However, the term binary file is usually used to refer to any non-text file; e.g. an .exe, a .png, etc.
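To tie this back to the Python phrase in the question's update: opening a file "in binary format" just means you get the raw bytes instead of text decoded from them. A tiny sketch (the file name is hypothetical):

# 'rb' = read binary: f.read() returns bytes, with no text decoding applied.
with open("example.png", "rb") as f:
    header = f.read(8)
print(header)   # raw bytes, e.g. b'\x89PNG\r\n\x1a\n' for a PNG file

# open("example.png", "r") would instead try to decode those bytes as text,
# which fails or produces garbage for a non-text file like a .png.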
You have to understand how a computer works at the level of its basic principles, and this will clear things up for you... Therefore I recommend reading up on things like the von Neumann architecture.
Basically, in a very simple computer you only have one memory, like an array,
which holds the instructions for your processor as well as the data; everything is binary numbers.
Your program starts at a certain place in your memory and reads the first number...
so here comes the twist: these numbers can be instructions or data.
Your processor reads these numbers and interprets them as instructions
Example: the start address is 0
at address 0 is an instruction like "read value from address 120 into the ALU (math unit)"
then it steps to address 1
"read value from address 121 into ALU"
then it steps to address 2
"subtract numbers in ALU"
then it steps to address 3
"if ALU-Value is smaller than zero go to address 10"
it is not smaller than zero so it steps to address 4
"go to address 20"
you see that this is a basic if (a < b)
You can write these instructions as numbers and they can be run by your processor, but because nobody wants to do this work (that is what was done with punch cards in the 60s),
assembler was invented...
It looks like this:
add 10, 11, 20 // load the values from addresses 10 and 11, add them and store the result at address 20
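To make the walkthrough above concrete, here is a tiny Python sketch of that fetch-and-execute loop (the opcode names, addresses and values are invented for illustration; a real processor does this in hardware):

# Memory holds both instructions and data, as in the example above.
memory = {
    0: ("LOAD_A", 120),   # read value from address 120 into the ALU
    1: ("LOAD_B", 121),   # read value from address 121 into the ALU
    2: ("SUB", None),     # subtract the numbers in the ALU
    3: ("JLZ", 10),       # if the ALU value is smaller than zero, go to address 10
    4: ("JMP", 20),       # go to address 20
    120: 7,               # the data: a = 7
    121: 3,               #           b = 3
}

pc = 0                    # program counter: where the processor reads next
a = b = alu = 0
while pc in memory and isinstance(memory[pc], tuple):
    op, arg = memory[pc]
    if op == "LOAD_A":
        a = memory[arg]
    elif op == "LOAD_B":
        b = memory[arg]
    elif op == "SUB":
        alu = a - b
    elif op == "JLZ" and alu < 0:
        pc = arg
        continue
    elif op == "JMP":
        pc = arg
        continue
    pc += 1

print(pc)  # ends at 20, because 7 - 3 is not smaller than zero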
In Conclusion:
Assembler (processor instructions) can be called binary because it is stored as plain numbers.
But everything else can be a binary file, too.
In reality, if you have a simple .exe file, it is both... If you have variables in there like a = 10 and b = 20, these values can be stored somewhere between the if clauses and for loops... It depends on the compiler where it puts them.
But if you have a complex 3D-model it can be stored in a separate file with no executable code in it...
I hope it helps to clear things up a little.

Why can IDLE be slower than running a program from the command line?

I was recently having problems with IDLE becoming unresponsive once it hit a certain point in my code when processing very long strings, as reflected here: What's an efficient way to encode a (very long) string from a dictionary? (Python).
That has since been resolved; the same code that caused IDLE to freeze up ran in a second or two from the command line. Now, out of curiosity, why would this be?
(And, yes, I know I should probably use another IDE. However, at the moment I'm only working on a small project and I like how lean and simple IDLE is.)
IDLE chokes when presenting large results. I entered 'x'*100000, and IDLE took three seconds to display the result. It also froze when I tried to cursor up. The regular Python shell displayed the result practically instantaneously.