I'm using the CAM::PDF Perl module to parse PDFs. The module works well except for one issue: it seems to split words randomly. Is there any way of fixing this via settings, or some algorithmic way to put the words back together?
For example:
"has offices located in New Yor k and Dublin."
(should be "New York")
"price competit ion"
(should be "price competition")
The section of code is below:
use CAM::PDF;

my $pdf  = CAM::PDF->new($pdf_name);
my $text = $pdf->getPageText($page);
print "$text\n";
In general it's not always possible to reconstruct the original text from a PDF: the physical order of the text in the file often doesn't match the logical reading order.
In this case you are quite possibly being affected by manual kerning, i.e. the producer splits words at certain character pairs and adjusts the spacing between the fragments to give a more pleasing result - see http://en.wikipedia.org/wiki/Kerning.
So it breaks within words and outputs smaller chunks, which CAM::PDF recognises as separate words.
If you have some control over how the PDF is produced, you could experiment with fonts and kerning settings - but this might also compromise output quality.
PDF::OCR2 is likely to handle kerning more robustly and might do a better overall job of recognizing the original text.
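If you can't change the producer, you can also try to repair the text after extraction. Below is a minimal dictionary-based sketch; the wordlist path is an assumption (adjust for your system), and a greedy merge like this will occasionally join fragments it shouldn't:

use strict;
use warnings;

# Build a lookup of known words (path is an assumption - adjust as needed).
my %dict;
open my $fh, '<', '/usr/share/dict/words' or die "wordlist: $!";
while (my $w = <$fh>) { chomp $w; $dict{lc $w} = 1 }
close $fh;

# Merge an unknown fragment into its neighbour when the joined form is a
# known word - e.g. "Yor" + "k" => "York", "competit" + "ion" => "competition".
sub rejoin_fragments {
    my @tokens = split ' ', shift;
    my @out;
    while (@tokens) {
        my $word = shift @tokens;
        while (@tokens
            && $dict{ lc($word . $tokens[0]) }
            && !( $dict{ lc $word } && $dict{ lc $tokens[0] } )) {
            $word .= shift @tokens;
        }
        push @out, $word;
    }
    return join ' ', @out;
}

print rejoin_fragments("has offices located in New Yor k and Dublin."), "\n";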
Related
I have an issue where, when extracting data from a database, spaces are sometimes (quite often) added within strings of text where they should not be.
What I'm trying to do is create a small script that will look at these strings and remove the spaces.
The problem is that the spaces can be in any position in the string, and the string is a variable that changes.
Example:
"StaffID": "0000 25" <- The space in the number should not be there.
Is there a way to have the script look at this particular line and, if it finds spaces, remove them?
Or: "DateOfBirth": "23-10-199 0" <- It would also need to look at these spaces and remove them.
The problem is that the same data also has lines such as:
"Address": " 91 Broad street" <- The spaces should be here obviously.
I've tried using TRIM, but that only removes spaces from start/end.
It's worth mentioning that the extracted data is in JSON format and is then imported into the new system via an API.
You should think about the logic of what you want to do, and whether it's even programmatically possible to teach your script where it is and is not appropriate to put spaces. Deciding that in the general case is one of the biggest problems facing AI research right now, so unfortunately you're probably going to have to do some of this by hand.
If it were me, I'd specify the kind of data format I expect from each column and do my best to parse the strings accordingly. For example, if you know that StaffID never contains spaces, you can have a rule that simply deletes them:
$staffid =~ s/\s+//g;   # remove all whitespace from the value
There are more complicated things you can do with forced formatting along the same lines, but again, that requires some expectation of exactly what data is going to come out of which column.
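As a concrete illustration of the per-column approach in Perl (the field rules below are assumptions based on your examples, not a complete schema):

use strict;
use warnings;
use JSON::PP qw(decode_json encode_json);

# Per-field cleanup rules: strip all spaces from fields that can never
# legitimately contain them, and only trim the free-text fields.
my %rules = (
    StaffID     => sub { (my $v = shift) =~ s/\s+//g;        return $v },
    DateOfBirth => sub { (my $v = shift) =~ s/\s+//g;        return $v },
    Address     => sub { (my $v = shift) =~ s/^\s+|\s+$//g;  return $v },
);

my $json_string = '{"StaffID":"0000 25","DateOfBirth":"23-10-199 0","Address":" 91 Broad street"}';
my $record = decode_json($json_string);

for my $field (keys %rules) {
    $record->{$field} = $rules{$field}->($record->{$field})
        if defined $record->{$field};
}
print encode_json($record), "\n";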
You might want to look more closely at where those spaces are coming from, rather than process the output like this. Is the retrieval script doing it? Maybe you can optimize the database that you're drawing from?
What does FC_WEIGHT refer to? I entered the command:
ps2ascii /Users/dwstclair/Desktop/untitled3/stmt_20181130.pdf a.txt
The result was:
DEBUG: FC_WEIGHT didn't match
Although a text file was produced, it is large and consists largely of numbers, which makes it hard to proofread, and I need reasonably good confidence that the output matches the input. On the off chance that a default font was missing on my system, I added DroidSansFallback.ttf (no joy). If there is a fix, please point me to it and bring joy to my dull, drab existence.
Basically, I wouldn't use ps2ascii. It's long been deprecated and doesn't even ship in more recent versions of Ghostscript.
Instead, consider using the txtwrite device. It works with a wider range of input (in particular, it can use ToUnicode CMaps in PDF files, which ps2ascii cannot) and is capable of producing output in encodings other than ASCII, which is quite useful. Even if you aren't working with non-Latin languages, the ability to preserve ligatures (e.g. fi, ffi, ffl) is convenient.
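For example, something along these lines (using the path from your command) should produce a text file:

gs -sDEVICE=txtwrite -o a.txt /Users/dwstclair/Desktop/untitled3/stmt_20181130.pdf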
The actual answer to your question is 'don't worry about it'.
FC_WEIGHT refers to the weight of a font (light, regular, bold, extra-bold, etc.). This message can only arise when you are using FontConfig: Ghostscript is enumerating the available fonts from FontConfig, trying to find a match for a font that is missing from the input, and a candidate font did not match the target font's weight.
Since you aren't going to use the font, it doesn't affect you.
I need to write a file that will probably be interpreted by something like RPG IV on an AS/400 (but I don't know that for certain). The file will be created by reading data from our MySQL database and then writing it out in the specified fixed-width format. It could be quite large (potentially measured in GB, though I haven't determined that yet). Right now I'm thinking Perl's built-in format might actually be my best bet, because tools like Xslate and Template Toolkit are designed more for output that isn't fixed width (HTML). My only concern is that format doesn't appear to have conditionals, and it looks like I may need them (the format includes a field that is left-justified if field A is set, and right-justified and padded if not).
Other possibilities that come to mind are pack and the sprintf family of functions.
I don't think pack supports right-justified text, so that wouldn't be an option.
That leaves (s)printf. You can build format specifiers programmatically to support your conditional justification logic.
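A minimal sketch of that idea; the $field_a condition, width, and value are placeholders, and the padding rule (spaces vs. zeros) would come from your spec:

use strict;
use warnings;

# Placeholders standing in for the real layout spec.
my $field_a = 'X';      # whatever "field A is set" means in your format
my $width   = 10;
my $value   = 'ABC';

# Left-justify when field A is set; right-justify (space-padded) otherwise.
my $fmt  = defined $field_a ? "%-${width}s" : "%${width}s";
my $line = sprintf $fmt, $value;
print "[$line]\n";      # prints "[ABC       ]"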
Template Toolkit can do a serviceable job of creating fixed-width formatted files. The trick is to use the templates to describe the file and record structure, but have a Perl function format the data for each field.
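A minimal sketch of that trick, assuming Template Toolkit is installed; the record layout and the pad/num helper names are invented for illustration:

use strict;
use warnings;
use Template;

# The template describes the record layout; Perl subs do the field formatting.
my $layout = "[% pad(name, 20) %][% num(amount, 12) %]\n";

my $tt   = Template->new;
my $vars = {
    name   => 'ACME LTD',
    amount => 1234.5,
    pad    => sub { sprintf '%-*s',  $_[1], $_[0] },   # left-justify text
    num    => sub { sprintf '%*.2f', $_[1], $_[0] },   # right-justify number
};
$tt->process(\$layout, $vars) or die $tt->error;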
It may be easier to skip the templates and do all the formatting in Perl. Either way, you need to consider how your fields must be formatted. In my experience sprintf is better at handling the formatting cases required by fixed-width files. You will probably still need to implement a few helper functions to handle the oddities (like EBCDIC/COBOL signed numbers encoded in ASCII, if you're unlucky enough to hit them).
There are a thousand odd special cases in legacy fixed-width formatted files - almost enough to make me like XML data files. Typically it's the oddest special case that ends up determining the best method for formatting the file.
Using iReport 4.5.0, I'm setting these two properties and values:
net.sf.jasperreports.text.truncate.at.char=true
net.sf.jasperreports.text.truncate.suffix=...
The intent is to add "..." to the end of text fields whenever they must be truncated, with the truncation decision made at the character level rather than at the word level. This works as expected when exporting to PDF. However, when exporting to HTML, the last truncated token (with the suffix appended) will often, though not always, wrap incorrectly. (It does this even though StretchType is set to No Stretch.)
If I change net.sf.jasperreports.text.truncate.at.char=false (so that it breaks on words instead of characters) it seems to work more often, but only because word breaks usually leave more space for the suffix. The unexpected line wrapping still occurs with word breaks, especially if I increase the length of the given suffix.
My best guess is that the HTML exporter measurement isn't precisely calculating the width required by the given suffix (if it's calculating it at all).
Can anyone confirm?
Any suggestions as to a workaround?
It seems like, with StretchType set to No Stretch, the HTML exporter should probably also set white-space:nowrap. However, although that would prevent the line from wrapping, the end of the suffix would be partially hidden (due to the overflow:hidden styling).
"My best guess is that the HTML exporter measurement isn't precisely calculating the width required by the given suffix (if it's calculating it at all)."
I can confirm that this is indeed the reason.
But there's not really a simple workaround. Your PDF is good, so you're doing something right. Well... you're doing lots of things right. ;-)
In HTML you don't know, in a very fundamental way, the precise details of the font that will render the text. You can certainly specify the font, but the client machine might not have it. Or it might have one that is almost the same... but not quite. Or the client might choose a different font or size via various client-side override mechanisms.
If you try different fonts, you should notice slightly different results. You may be able to find one that works better more often. (Clearly, this isn't 100% perfect.)
If you aren't using Font Extensions, then you should. If you are using Font Extensions, then you can specify the list of fonts in descending preference that ought to be used in the HTML. This should give you enough control to get behavior that is good in a large number of cases. Often you can make it perfect in all of the cases that you care about.
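If I remember the extension format correctly, the HTML font list is declared per family in the Font Extensions XML, something along these lines (the family name and file paths are placeholders):

<fontFamilies>
  <fontFamily name="MyReportFont">
    <normal>fonts/MyReportFont.ttf</normal>
    <bold>fonts/MyReportFont-Bold.ttf</bold>
    <pdfEncoding>Identity-H</pdfEncoding>
    <pdfEmbedded>true</pdfEmbedded>
    <exportFonts>
      <!-- Descending-preference list used by the HTML exporter -->
      <export key="net.sf.jasperreports.html">'MyReportFont', Arial, Helvetica, sans-serif</export>
    </exportFonts>
  </fontFamily>
</fontFamilies>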
I am looking for an online service (or collection of images) that can return an image for any unicode code point.
Unicode.org does not have an image for every code point; consider, for example:
http://www.unicode.org/cgi-bin/GetUnihanData.pl?codepoint=31cf
EDIT: I need to use these images programmatically, so the code chart PDFs provided at unicode.org are not useful.
The images in the PDF are copyrighted, so there are legal issues around extracting them. (I am not a lawyer.) I suspect that those legal issues prevent a simple solution from being provided, unless someone wants to go to the trouble of drawing all of those images. It might happen, but seems unlikely.
Your best bet is to download a selection of fonts that collectively cover the entire range of characters, and display the characters using those fonts. There are two difficulties with this approach: combining characters and invisible characters.
The combining characters can easily be detected from the Unicode database, and you can supply a base character (such as NBSP) to use for displaying them. (There is a special code point intended for this purpose, but I can't find it at the moment.)
Invisible characters could be displayed with a dotted square box containing the abbreviation for the character. Those you may have to locate manually and construct the necessary abbreviations. I am not aware of any shortcuts for that.
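As a sketch of the rendering step in Perl, assuming the GD module is built with FreeType support and you have a font file covering the code point (the font path here is an assumption):

use strict;
use warnings;
use GD;
use Encode qw(encode);

my $codepoint = 0x31CF;   # the example code point from the question
my $fontfile  = '/usr/share/fonts/truetype/unifont/unifont.ttf';

my $img   = GD::Image->new(64, 64, 1);        # truecolor canvas
my $white = $img->colorAllocate(255, 255, 255);
my $black = $img->colorAllocate(0, 0, 0);
$img->filledRectangle(0, 0, 63, 63, $white);

# Combining marks need a visible base character to sit on (NBSP, as above).
my $text = chr($codepoint);
$text = "\x{00A0}$text" if $text =~ /\p{M}/;

$img->stringFT($black, $fontfile, 40, 0, 8, 52, encode('UTF-8', $text))
    or die "render failed: $@";

open my $out, '>:raw', sprintf('U+%04X.png', $codepoint) or die $!;
print {$out} $img->png;
close $out;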