I'm trying to translate some formatting specifications from a .docx template to LaTeX,
and I am trying to get it exact, as an academic challenge.
Consider the following Office Open XML snippet:
<w:pPr>
<w:widowControl/>
<w:spacing w:line="312" w:lineRule="auto"/>
<w:jc w:val="both"/>
</w:pPr>
<w:rPr>
<w:rFonts w:eastAsia="Times New Roman"/>
<w:sz w:val="20"/>
</w:rPr>
The line spacing is specified as 312 dxa (or 15.6 [big] points), and the font size is 20 half-points (or 10 [big] points). LibreOffice Writer interprets the spacing as a proportional spacing of 130%.
What does 312 dxa correspond to? Is it the distance from baseline to baseline? How does LibreOffice arrive at 130%?
15.6 is 130% of 12, which is 1.2 times 10. (For single spacing in LaTeX, a good rule of thumb is to set \baselineskip to 1.2 times the font size.) That's my guess.
According to section 17.3.1.33 of ISO/IEC 29500-1, the semantics of the line attribute are modified by the lineRule attribute:
When lineRule has the value exact or atLeast, line expresses a distance between lines as a signed measure in twips (twentieths of a PostScript point).
When lineRule has the value auto, line expresses a distance in units of 240ths of a line.
In the OP's example, the latter applies, and the space between lines is 312/240 = 130%.
The OP's original hunch about how line spacing is measured is a good one: the divisor of 240 for the auto option is probably based on a default case in which the baselines of two lines of 10pt type are separated by 1.2 times the font size, i.e., 240 twips. In that default case, auto and exact/atLeast have the same behavior for a given value of line.
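A small Python sketch (my own helper, not anything from the standard) of the two readings of w:line, using the 1.2 × font size rule of thumb for the height of a single line:
def line_spacing_pt(line, line_rule, font_size_pt=10, single_line_factor=1.2):
    # Illustrative only: the names and the 1.2 factor are assumptions, not part of ISO/IEC 29500-1.
    if line_rule in ("exact", "atLeast"):
        return line / 20                                            # twips -> points
    if line_rule == "auto":
        return (line / 240) * single_line_factor * font_size_pt     # 240ths of a single line
    raise ValueError(f"unknown lineRule: {line_rule}")
print(line_spacing_pt(312, "auto"))    # ~15.6, a candidate \baselineskip for 10pt text
print(line_spacing_pt(312, "exact"))   # 15.6 here too, matching the default-case remark above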
But if the titular question is taken literally, it is not the right question to ask: the ISO standard specifies exactly the syntax of an OOXML (WML) document. It specifies the semantics of the document in a good amount of detail, but there is eventually a level of granularity beyond which the standard cannot give answers, because those are design decisions that are instead reflected in the vendor-specific implementation.
Related
I'm currently working on fastText unsupervised learning, and I wanted to clarify something about the context window described in the fastText documentation.
In the description of the fastText library for Python (https://github.com/facebookresearch/fastText/tree/master/python), there are several arguments for training a fastText model; one of them is:
ws: size of the context window
My input file contains lines with 2 - 3 tokens.
E.g.,
Senior Database Administrator
Senior DotNet programmer
Network administrator
Head Programmer (Mainframe)
The default window size is 5. In the above example, I have lines whose token count is less than the window size. What will happen if the window size is bigger than the document length?
FastText (& related algorithms like word2vec) will simply use as much of the context window as is possible.
For example, assume a window-size of 5 and the input tokens:
['Senior', 'Database', 'Administrator']
When training with the 'center' word 'Senior', the algorithm would be ready to consult up to 5 words in either direction.
But, there are 0 words preceding 'Senior', and only 2 words succeeding 'Senior', so only those 2 following words will be considered as neighbors.
(No 'plug values' will be used as if they were blank neighbors, nor will any 'bleed-through' to neighboring texts occur.)
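To make the clipping concrete, here is a small Python sketch (my own illustration, not fastText's code) of which neighbors each center word gets with a window of 5:
tokens = ['Senior', 'Database', 'Administrator']
window = 5
for center, word in enumerate(tokens):
    start = max(0, center - window)                  # no padding before the first token
    stop = min(len(tokens), center + window + 1)     # no bleed-through past the last token
    neighbors = tokens[start:center] + tokens[center + 1:stop]
    print(word, '->', neighbors)
# Senior -> ['Database', 'Administrator']
# Database -> ['Senior', 'Administrator']
# Administrator -> ['Senior', 'Database']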
Two other related notes to keep in mind:
These algorithms do need neighboring words for any training to occur, so any texts with just a single word are essentially no-ops. (If there happens to be a word that only ever appears alone, you might still see a vector for it at the end of training, but in the implementations with which I am familiar, that will just be a randomly-initialized starting vector, completely untrained by real usage examples.)
Most implementations will simulate a weighting of neighboring words by not always using exactly your declared window-size, but rather, for each pass over a specific target center word, choosing a random window-size from 1 to your chosen window-size. In this way, immediate neighbors are always part of training, while words further away are more often skipped.
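A tiny sketch of that second point, assuming the usual word2vec-style trick of drawing the effective window uniformly from 1 to the declared window size for every center word:
import random
from collections import Counter
window = 5
times_offset_used = Counter()
for _ in range(100_000):                      # one draw per pass over a center word
    effective = random.randint(1, window)     # uniform in 1..window
    for offset in range(1, effective + 1):
        times_offset_used[offset] += 1
# Offset 1 is counted every time; offset 5 only when effective == 5 (about 1 pass in 5).
print(sorted(times_offset_used.items()))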
I totally understand the necessity of the integral and bracket pieces (U+2320, U+2321, U+239B-U+23AE), since they help with building large notation.
But for the large summation pieces U+23B2 and U+23B3, if one stretches these two, they will still lose their shapes. I do not understand the logic behind splitting this character into pieces, or why there is no corresponding pair for the product symbol U+220F. Furthermore, I wonder in which cases these two symbols should be used.
This doesn't answer the question fully, but the following paragraph from Unicode Technical Report #28: Unicode 3.2 is probably the most authoritative explanation you can find:
Symbol Pieces. [to follow “APL Functional Symbols”] The characters in the range U+239B..U+23B3, plus U+23B7, comprise a set of bracket and other symbol fragments for use in mathematical typesetting. These pieces originated in older font standards, but have been used in past mathematical processing as characters in their own right to make up extra-tall glyphs for enclosing multi-line mathematical formulae. Mathematical fences are ordinarily sized to the content that they enclose. However, in creating a large fence, the glyph is not scaled proportionally; in particular the displayed stem weights must remain compatible with the accompanying smaller characters. Thus, simple scaling of font outlines cannot be used to create tall brackets. Instead, a common technique is to build up the symbol from pieces. In particular, the characters U+239B LEFT PARENTHESIS UPPER HOOK through U+23B3 SUMMATION BOTTOM represent a set of glyph pieces for building up large versions of the fences (, ), [, ], {, and }, and of the large operators ∑ and ∫. These brace and operator pieces are compatibility characters. They should not be used in stored mathematical text, but are often used in the data stream created by display and print drivers.
This is followed by a table showing how pieces are intended to be used together to create specific symbols.
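For illustration only (the report explicitly says these pieces should not be used in stored text), here is a small Python sketch that assembles tall symbols from the pieces; whether the parts join up visually depends entirely on the font:
TOP_INTEGRAL, INTEGRAL_EXT, BOTTOM_INTEGRAL = '\u2320', '\u23AE', '\u2321'   # ⌠ ⎮ ⌡
SUMMATION_TOP, SUMMATION_BOTTOM = '\u23B2', '\u23B3'                         # ⎲ ⎳
def tall_integral(height=4):
    # An integral sign spanning `height` lines: top hook, extension pieces, bottom hook.
    return '\n'.join([TOP_INTEGRAL] + [INTEGRAL_EXT] * (height - 2) + [BOTTOM_INTEGRAL])
print(tall_integral(5))
print(SUMMATION_TOP + '\n' + SUMMATION_BOTTOM)   # the two-line summation sign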
Unicode mostly standardises existing character repertoires, and keeps their peculiarities so that conversions round-trip properly. A corresponding product symbol is not part of Unicode because the originating character repertoire did not have one. Ask on https://www.unicode.org/consortium/distlist-unicode.html about the provenance of the summation top/bottom.
I am looking for large symbols in unicode like these:
∏ ∐ ∑ ∫
⨀ ⨁ ⨂
⊕ ⊖ ⊗ ⊘ ⊙
⎲
⎳
⌠
⌡
The only one I found is made by combining two Unicode symbols, ⎲ and ⎳. Not sure why those exist, but not a large product symbol. That's all I am really looking for (a ∏ spanning multiple lines, like the sigma). If any of the other ones exist over 2 lines, that would be great to know as well. Perhaps there is some way to manually make the large ∏ symbol out of smaller primitives.
⎲ and ⎳. Not sure why those exist
When a collection of existing glyphs is added to Unicode, it is desirable to make encoding between character sets round-trip safe. So glyphs that are duplicates or variants of each other are kept anyway.
As of Unicode 10, these are the available forms of the Greek letter pi (and its compatibility decompositions): ∏ Π π ϖ ᴨ ℼ ℿ. There are no top and bottom halves like there are for the integral and summation signs.
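For reference, a quick Python check of those code points and their names:
import unicodedata
for ch in '∏Ππϖᴨℼℿ':
    print(f'U+{ord(ch):04X}  {unicodedata.name(ch)}')
# U+220F  N-ARY PRODUCT
# U+03A0  GREEK CAPITAL LETTER PI
# U+03C0  GREEK SMALL LETTER PI
# U+03D6  GREEK PI SYMBOL
# U+1D28  GREEK LETTER SMALL CAPITAL PI
# U+213C  DOUBLE-STRUCK SMALL PI
# U+213F  DOUBLE-STRUCK CAPITAL PI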
You should not attempt to build a glyph piecewise from other glyphs shifted into position. (You said "primitives", but Unicode does not work that way.) The result is not accessible and somewhat likely to break in rendering on systems other than yours.
The correct solution is to use the ∏ glyph and simply scale up its font size. Look into MathML if you are using only ad-hoc notation so far.
If I apply Unicode Normalization Form C to a string, will the number of code points in the string ever increase?
Yes, there are code points that expand to multiple code points after applying NFC normalization. Within the Basic Multilingual Plane, for example, there are 70 code points that expand to 2 code points after applying NFC normalization, and there are 2 code points (U+FB2C and U+FB2D within the Alphabetic Presentation Forms block) that expand to 3 code points.
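For example, Python's unicodedata module shows U+FB2C expanding to three code points under NFC:
import unicodedata
ch = '\uFB2C'                              # HEBREW LETTER SHIN WITH DAGESH AND SHIN DOT
nfc = unicodedata.normalize('NFC', ch)
print(len(ch), len(nfc))                   # 1 3
print([f'U+{ord(c):04X}' for c in nfc])    # ['U+05E9', 'U+05BC', 'U+05C1']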
One guarantee that you have for this so-called "expansion factor" is that no string will ever expand more than 3 times in length (in terms of number of code units) after NFC normalization is applied:
There is also a Unicode Consortium stability policy that canonical mappings are always limited in all versions of Unicode, so that no string when decomposed with NFC expands to more than 3× in length (measured in code units). This is true whether the text is in UTF-8, UTF-16, or UTF-32. This guarantee also allows for certain optimizations in processing, especially in determining buffer sizes.
Section 9, Detecting Normalization Forms. UAX #15: Unicode Normalization Forms.
I have written a Java program to determine which code points within a Unicode block expand to multiple code points: http://ideone.com/9PUOCb
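A rough Python counterpart of the same check (the exact count depends on the Unicode version bundled with your Python):
import unicodedata
expanding = [
    cp for cp in range(0x10000)                          # Basic Multilingual Plane
    if not 0xD800 <= cp <= 0xDFFF                        # skip surrogate code points
    and len(unicodedata.normalize('NFC', chr(cp))) > 1
]
print(len(expanding))                                    # how many BMP code points expand
for cp in expanding[:5]:
    print(f'U+{cp:04X}', unicodedata.name(chr(cp), ''))  # a few examples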
Alternatively, Tom Christiansen's unichars utility, part of the Unicode::Tussle CPAN module, can be used. (Note: Mac users may see an error at the make test installation step saying that the Perl version is too old. If you see this error, you can install the module by running notest install Unicode::Tussle within a CPAN shell.)
Examples:
Print the code points in the BMP that expand to 3 code points:
unichars 'length(NFC) == 3'
שּׁ U+FB2C HEBREW LETTER SHIN WITH DAGESH AND SHIN DOT
שּׂ U+FB2D HEBREW LETTER SHIN WITH DAGESH AND SIN DOT
Count the number of code points in all planes that expand to more than one code point:
unichars -a 'length(NFC) > 1' | wc -l
85
See also the frequently asked question What are the maximum expansion factors for the different normalization forms?
It's part of the OCR process:
How do I segment sentences into words, and then words into characters?
What are candidate algorithms for this task?
As a first pass:
process the text into lines
process a line into segments (connected parts)
find the largest white band that can be placed between each pair of segments.
look at the sequence of widths and select "large" widths as white space.
everything between white space is a word.
Now all you need is a good enough definition of "large".
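As a concrete sketch of steps 3-5 (my own illustration, assuming a binarized image of a single text line with ink pixels equal to 1), a column projection profile does the job; min_gap plays the role of "large":
import numpy as np
def split_line_into_words(line_img, min_gap=8):
    blank = line_img.sum(axis=0) == 0              # columns containing no ink
    words, start, run = [], None, 0                # run = length of the current white band
    for x, is_blank in enumerate(blank):
        if is_blank:
            run += 1
            if start is not None and run >= min_gap:
                words.append(line_img[:, start:x - run + 1])   # cut at the start of the band
                start = None
        else:
            if start is None:
                start = x
            run = 0
    if start is not None:
        words.append(line_img[:, start:])          # flush the last word
    return words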
First, NIST (the National Institute of Standards and Technology) published a protocol known as the NIST Form-Based Handwriting Recognition System about 15 years ago for this exact question--i.e., extracting and preparing text-as-image data for input to machine learning algorithms for OCR. Members of this group at NIST also published a number of papers on this System.
The performance of their classifier was demonstrated by data also published with the algorithm (the "NIST Handwriting Sample Forms.")
Each of the half-dozen or so OCR data sets I have downloaded and used has referenced the data extraction/preparation protocol used by NIST to prepare the data for input to their algorithm. In particular, I am pretty sure this is the methodology relied on to prepare the Boston University Handwritten Digit Database, which is regarded as benchmark reference data for OCR.
So if the NIST protocol is not a genuine standard at least it's a proven methodology to prepare text-as-image for input to an OCR algorithm. I would suggest starting there, and using that protocol to prepare your data unless you have a good reason not to.
In sum, the NIST data was prepared by extracting 32 x 32 normalized binary bitmaps directly from a pre-printed form.
Here's an example:
00000000000001100111100000000000
00000000000111111111111111000000
00000000011111111111111111110000
00000000011111111111111111110000
00000000011111111101000001100000
00000000011111110000000000000000
00000000111100000000000000000000
00000001111100000000000000000000
00000001111100011110000000000000
00000001111100011111000000000000
00000001111111111111111000000000
00000001111111111111111000000000
00000001111111111111111110000000
00000001111111111111111100000000
00000001111111100011111110000000
00000001111110000001111110000000
00000001111100000000111110000000
00000001111000000000111110000000
00000000000000000000001111000000
00000000000000000000001111000000
00000000000000000000011110000000
00000000000000000000011110000000
00000000000000000000111110000000
00000000000000000001111100000000
00000000001110000001111100000000
00000000001110000011111100000000
00000000001111101111111000000000
00000000011111111111100000000000
00000000011111111111000000000000
00000000011111111110000000000000
00000000001111111000000000000000
00000000000010000000000000000000
I believe that the BU data-prep technique subsumes the NIST technique but added a few steps at the end, not with higher fidelity in mind but to reduce file size. In particular, the BU group:
began with the 32 x 32 bitmaps; then
divided each 32 x 32 bitmap into non-overlapping 4 x 4 blocks;
counted the number of activated pixels in each block ("1" is activated, "0" is not);
the result is an 8 x 8 input matrix in which each element is an integer (0-16).
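A minimal NumPy sketch of that block-counting step (my reconstruction of the description above, not the BU group's code):
import numpy as np
def block_counts(bitmap_32x32):
    # Reduce a 32 x 32 binary bitmap to an 8 x 8 matrix of 4 x 4 block counts (0-16).
    blocks = np.asarray(bitmap_32x32, dtype=int).reshape(8, 4, 8, 4)
    return blocks.sum(axis=(1, 3))
bitmap = np.random.randint(0, 2, size=(32, 32))    # stand-in for one extracted character
print(block_counts(bitmap).shape)                  # (8, 8), each entry between 0 and 16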
For finding a binary sequence like 101000000000000000010000001, detect the sequences 0000, 0001, 001, 01, 1.
I am assuming you are using the Image Processing Toolbox in MATLAB.
To distinguish text in an image, you might want to follow these steps:
Grayscale (speeds things up greatly).
Contrast enhancement.
Erode the image lightly to remove noise (scratches/blips)
Dilation (heavy).
Edge detection (or ROI calculation).
With trial and error, you'll get the proper coefficients such that the image you obtain after the 5th step will contain convex regions surrounding each letter/word/line/paragraph.
NOTE:
Essentially, the more you dilate, the larger the elements you get; i.e., light dilation is useful for identifying letters, whereas comparatively heavy dilation is needed to identify lines and paragraphs.
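If MATLAB is not a hard requirement, here is a rough Python/OpenCV sketch of the same pipeline (the file name, kernel sizes, and iteration counts are illustrative, not tuned values):
import cv2
import numpy as np
img = cv2.imread('page.png')                              # hypothetical input image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)              # 1. grayscale
gray = cv2.equalizeHist(gray)                             # 2. contrast enhancement
_, binary = cv2.threshold(gray, 0, 255,
                          cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
binary = cv2.erode(binary, np.ones((2, 2), np.uint8))     # 3. light erosion (remove blips)
blobs = cv2.dilate(binary, np.ones((5, 25), np.uint8))    # 4. heavy dilation (merge letters)
contours = cv2.findContours(blobs, cv2.RETR_EXTERNAL,     # 5. region (ROI) extraction
                            cv2.CHAIN_APPROX_SIMPLE)[-2]
boxes = [cv2.boundingRect(c) for c in contours]           # one box per merged word/line blob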
Online ImgProc MATLAB docs
Check out the "Examples in Documentation" section in the online docs, or refer to the Image Processing Toolbox documentation in the MATLAB Help menu.
The examples given there will guide you to the proper functions to call and their various formats.
Sample CODE (not mine)