PdfReaderContentParser.ProcessContent returns whitespace for clear text - itext

I'd like to parse a PDF for text where the content contains both binary and clear text data. When I try to do it with PdfReaderContentParser, the GetResultantText method returns the right text for the binary content but whitespace for the clear text content. Here is the code I use:
byte[] binaryPdf = File.ReadAllBytes(this.fileName);
reader = new PdfReader(binaryPdf);
PdfReaderContentParser parser = new PdfReaderContentParser(reader);
for (int i = 1; i <= reader.NumberOfPages; i++)
{
    SimpleTextExtractionStrategy simpleStrategy = parser.ProcessContent(i, new SimpleTextExtractionStrategy());
    string contentText = simpleStrategy.GetResultantText();
    // Do something with the contentText
    // ...
}
Any idea how to get all content?

Overview
In a comment the OP clarified which texts he was missing in his extracted text:
Basically for all descriptions on the left-hand side (e.g. Lifting moment) I get whitespaces instead of the actual text.
The reason for this is fairly simple: In the page content there are only spaces (if anything at all) on most of the left side. The labels you see actually are read-only form fields.
For example the "Lifting moment" is the value of the form field 13B141032.
If you want text extraction to include these fields, too, you should consider flattening the document in a first step (moving the field appearances into the regular page content stream) and extracting text from this flattened document.
Document analysis
It looks like the major part of the internationalization of the specification labels has been done using form fields.
For an overview I separated the original document into its regular page content and the form fields (the illustrating images are not reproduced here).
There indeed are several strings of spaces in the page content under the form fields.
I would assume that there once was an earlier version of that document (or a template for it) which contained those labels (maybe in only one language or probably two) as page content.
Then there was a task of more dynamic internationalization, so someone replaced the existing labels in the page content with spaces and added new internationalized labels as read-only form fields, probably because form fields are easier to manipulate.
Considering that the original labels seem to have been replaced by an equal number of spaces, though, one might speculate that there is yet another program manipulating the page stream of this and similar documents at hard-coded offsets, and that, to not break this program in the course of internationalization, the actual labels had to be created outside the page content. Stranger things have happened...
Flatten and extract
As mentioned above, if you want text extraction to include these fields, too, you should consider flattening the document in a first step (moving the field appearances into the regular page content stream) and extracting text from this flattened document. This can be done like this:
[Test]
public void ExtractFlattenedTextTestSeeb()
{
    FileInfo file = new FileInfo(@"PATH_TO_FILE\41851208.pdf");
    Console.Out.Write("41851208.pdf, flattened before extraction\n\n");

    using (MemoryStream memStream = new MemoryStream())
    {
        using (PdfReader readerOrig = new PdfReader(file.FullName))
        using (PdfStamper stamper = new PdfStamper(readerOrig, memStream))
        {
            stamper.Writer.CloseStream = false;
            stamper.FormFlattening = true;
        }
        memStream.Position = 0;
        using (PdfReader readerFlat = new PdfReader(memStream))
        {
            PdfReaderContentParser parser = new PdfReaderContentParser(readerFlat);
            for (int i = 1; i <= readerFlat.NumberOfPages; i++)
            {
                SimpleTextExtractionStrategy simpleStrategy = parser.ProcessContent(i, new SimpleTextExtractionStrategy());
                string contentText = simpleStrategy.GetResultantText();
                Console.Write("Page {0}:\n\n{1}\n\n", i, contentText);
            }
        }
    }
}
The resulting standard output:
41851208.pdf, flattened before extraction
Page 1:
90–120 l/min
(23.8–31.7 US gal./min)
60 kg
(132 lbs)
115 kg
(254 lbs)
350 l
(92.5 US gal.)
100 kg 105 kg
(220 lbs) (231 kg)
100 kg
(220 lbs)
250 l 300 l
(66.0 US gal.) (79.3 US gal.)
90 kg
(198 lbs)
180 l
(47.6 US gal.)
5305kg
(11695 lbs)
5265kg
(11607 lbs)
5395kg
(11894 lbs)
5205kg
(11475 lbs)
5010kg
(11045 lbs)
4780kg
(10538 lbs)
4470kg
(9854 lbs)
4190kg
(9237 lbs)
3930kg
(8664 lbs)
5215kg
(11497 lbs)
5045kg
(11122 lbs)
4860kg
(10714 lbs)
4650kg
(10251 lbs)
4350kg
(9590 lbs)
4100kg
(9039 lbs)
3850kg
(8488 lbs)
25.2 m
(82’ 8")
23.2 m
(76’ 1")
21.0 m
(68’ 11")
18.7 m
(61’ 4")
16.4 m
(53’ 10")
14.1 m
(46’ 3")
11.8 m
(38’ 9")
9.7 m
(31’ 10")
7.7 m
(25’ 3")
36.5 MPa (365 bar)
(5293 psi)
endlos
endless
sans finite
25.2 m
31.2 m
(82’ 8")
(102’ 4")
21.0 m
(68’ 11")
14900kg
(32848 lbs)
403.2 kNm (41.1 mt)
(297270 ft.lbs)
49.1 kNm (5.0 mt)
PK 42002–SH A–G
(36210 ft.lbs)
37.3 kNm (3.8 mt)
PK 42002–SH A–C
(27510 ft.lbs)
1GETR 2GETR
PK 42002–SH A – C
KT250 KT300 KT350 KT180
2GETR STZY
+V1
+V2
+2/4
7(F) 8(G) 6(E) 5(D) 4(C) 3(B) 2(A)
+V1
+V2
(S410–SK–D)
DTS410SHC/03
0100
11/2010
PK 42002–SH
Type Model Modell
Page Page Seite
Chapitre Chapter Kapitel
Edition Edition Ausgabe
Öltank
Mehrgewicht:
Alle Gewichtsangaben ohne Aufbauzubehör,Zusatzgeräte und Öl.
Hydr. Ausschübe:
Max. Reichweite + Fly-Jib:
Max. Reichweite:
Fördermenge der Pumpe:
Betriebsdruck:
Schwenkmoment:
Schwenkbereich:
Max. Reichweite:
Max. hydraulische Reichweite:
Max. Hubkraft:
Max. Hubmoment:
Gewicht +V ohne 2/4
Krangewicht (R3X,STZS):
Technische Daten
Konstruktionsänderungen vorbehalten, fertigungstechn. Toleranzen müssen berücksichtigt werden.
Oil tank
Excess weight:
All weights given without assembly accessory,additional devices and oil.
Hydr. boom extensions:
Max. outreach + Fly-Jib:
Max. outreach:
Pump capacity:
Operating pressure:
Slewing torque:
Slewing angle:
Max. outreach:
Max. hydraulic outreach:
Max. lifting capacity:
Lifting moment:
Weight +V without 2/4
Crane weight (R3X,STZS):
Specifications
Subject to change, production tolerances have to be taken into account.
Réservoir
Excessif poids:
Tous les poids sans huile ni accessoire de montage ni appareils accessoires
Extensions hydrauliques:
Portee maximale + Fly-Jib:
Max. portee:
Debit de pompe:
Pression d' utilisation:
Couple de rotation:
Angle de rotation:
Max. portee:
Portee hydraulique maximale:
Capacite maxi de levage:
Couple de levage:
Poids +V sans 2/4
Poids grue (R3X,STZS):
Données Techniques
Sous reserve de modifications de conception. Les tolerances relatives a la technique de production doivent etre prises en consideration.
As you see, "Lifting moment" and all the other missing labels are there now.


Marginal Means accounting for the random effect uncertainty

When we have repeated measurements on an experimental unit, typically these units cannot be considered 'independent' and need to be modeled in a way that gives valid estimates for our standard errors.
When I compare the intervals obtained by computing the marginal means for the treatment using a mixed model (treating the unit as a random effect) with those from first averaging over the unit and THEN running a simple linear model on the averaged responses, I get the exact same uncertainty intervals.
How do we incorporate the uncertainty of the measurements of the unit into the uncertainty of what we think our treatments look like?
In order to really propagate all the uncertainty, shouldn't we see what the treatment looks like, averaged over "all possible measurements" on a unit?
``` r
library(dplyr)
#>
#> Attaching package: 'dplyr'
#> The following objects are masked from 'package:stats':
#>
#> filter, lag
#> The following objects are masked from 'package:base':
#>
#> intersect, setdiff, setequal, union
library(emmeans)
library(lme4)
#> Loading required package: Matrix
library(ggplot2)
tmp <- structure(list(treatment = c("A", "A", "A", "A", "A", "A", "A",
"A", "A", "A", "A", "A", "B", "B", "B", "B", "B", "B", "B", "B",
"B", "B", "B", "B"), response = c(151.27333548, 162.3933313,
159.2199999, 159.16666725, 210.82, 204.18666667, 196.97333333,
194.54666667, 154.18666667, 194.99333333, 193.48, 191.71333333,
124.1, 109.32666667, 105.32, 102.22, 110.83333333, 114.66666667,
110.54, 107.82, 105.62000069, 79.79999821, 77.58666557, 75.78666928
), experimental_unit = c("A-1", "A-1", "A-1", "A-1", "A-2", "A-2",
"A-2", "A-2", "A-3", "A-3", "A-3", "A-3", "B-1", "B-1", "B-1",
"B-1", "B-2", "B-2", "B-2", "B-2", "B-3", "B-3", "B-3", "B-3"
)), row.names = c(NA, -24L), class = c("tbl_df", "tbl", "data.frame"
))
### Option 1 - Treat the experimental unit as a random effect since there are
### 4 repeat observations for the same unit
lme4::lmer(response ~ treatment + (1 | experimental_unit), data = tmp) %>%
  emmeans::emmeans(., ~ treatment) %>%
  as.data.frame()
#> treatment emmean SE df lower.CL upper.CL
#> 1 A 181.0794 10.83359 4 151.00058 211.1583
#> 2 B 101.9683 10.83359 4 71.88947 132.0472
#ggplot(.,aes(treatment, emmean)) +
#geom_pointrange(aes(ymin = lower.CL, ymax = upper.CL))
### Option 2 - instead of treating the unit as random effect, we average over the
### 4 repeat observations, and run a simple linear model
tmp %>%
  group_by(experimental_unit) %>%
  summarise(mean_response = mean(response)) %>%
  mutate(treatment = c(rep("A", 3), rep("B", 3))) %>%
  lm(mean_response ~ treatment, data = .) %>%
  emmeans::emmeans(., ~ treatment) %>%
  as.data.frame()
#> treatment emmean SE df lower.CL upper.CL
#> 1 A 181.0794 10.83359 4 151.00058 211.1583
#> 2 B 101.9683 10.83359 4 71.88947 132.0472
#ggplot(., aes(treatment, emmean)) +
#geom_pointrange(aes(ymin = lower.CL, ymax = upper.CL))
### Whether we include a random effect for the unit, or average over it and THEN model it,
### we find no difference in the marginal means for the treatments.
### How do we incorporate the variation of the repeat measurements into the marginal means of the treatments?
### Do we then ignore the variation in the 'subsamples' and simply average over them PRIOR to modeling?
```
<sup>Created on 2021-07-31 by the [reprex package](https://reprex.tidyverse.org) (v2.0.0)</sup>
emmeans() does take into account the errors of random effects. This is what I get when I remove the complex sequences of pipes:
> mmod = lme4::lmer(response ~ treatment + (1 | experimental_unit), data = tmp)
> emmeans(mmod, "treatment")
treatment emmean SE df lower.CL upper.CL
A 181 10.8 4 151.0 211
B 102 10.8 4 71.9 132
Degrees-of-freedom method: kenward-roger
Confidence level used: 0.95
This agrees with the results you show. If instead I fit a model that accounts for experimental units as fixed effects, I get:
> fmod = lm(response ~ treatment + experimental_unit, data = tmp)
> emmeans(fmod, "treatment")
NOTE: A nesting structure was detected in the fitted model:
experimental_unit %in% treatment
treatment emmean SE df lower.CL upper.CL
A 181 3.25 18 174.2 188
B 102 3.25 18 95.1 109
Results are averaged over the levels of: experimental_unit
Confidence level used: 0.95
The SEs of the latter results are considerably lower, and that is because the random variations in experimental_unit are modeled as fixed variations.
Apparently the piping you did accounts for the variation of the random effects and includes those in the EMMs. I think that is because you did things separately for each experimental unit and somehow combined those results. I'm not very comfortable with a sequence of pipes that is 7 steps long, and I don't understand why that results in just one set of means.
I recommend against the as.data.frame() at the end. That zaps out annotations that can be helpful in understanding what you have. If you are doing it to get more digits of precision, I'll claim those are digits you don't need; they just exaggerate the precision you are entitled to claim.
Notes on some follow-up comments
Subsequently, I am convinced that the piped operations in the second part of the OP do indeed amount to computing the mean of each EU, then analyzing those means.
Let's look at that in the context of the formal model. We have (sorry MathJax doesn't work on stackoverflow, but I'll leave the markup there anyway)
$$ Y_{ijk} = \mu + \tau_i + U_{ij} + E_{ijk} $$
where $Y_{ijk}$ is the kth response measurement on the jth EU in the ith treatment, and the rhs terms represent, respectively, the overall mean, the (fixed) treatment effects, the (random) EU effects, and the (random) error effects. We assume the random effects are all mutually independent. With a balanced design, the EMMs are just the marginal means:
$$ \bar Y_{i..} = \mu + \tau_i + \bar U_{i.} + \bar E_{i..} $$
where a '.' subscript means we averaged over that subscript. If there are n EUs per treatment and m measurements on each EU, we get that
$$ Var(\bar Y_{i..}) = \sigma^2_U / n + \sigma^2_E / (mn) $$
Now, if we aggregate the data on EUs ahead of time, we are starting with
$$ \bar Y_{ij.} = \mu + \tau_i + U_{ij} + \bar E_{ij.} $$
However, if we then compute marginal means by averaging over j, we get exactly the same thing as we did before with $\bar Y_{i..}$, and the variance is exactly as already shown. That is why it doesn't matter if we aggregated first or not.
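To spell out that last step: each aggregated unit mean has variance
$$ Var(\bar Y_{ij.}) = \sigma^2_U + \sigma^2_E / m $$
and the simple linear model then averages the $n$ independent unit means within each treatment, so
$$ Var(\bar Y_{i..}) = \frac{\sigma^2_U + \sigma^2_E / m}{n} = \frac{\sigma^2_U}{n} + \frac{\sigma^2_E}{mn} $$
which is the same expression as for the mixed model, hence the identical standard errors.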

Encoding Spotify URI to Spotify Codes

Spotify Codes are little barcodes that allow you to share songs, artists, users, playlists, etc.
They encode information in the different heights of the "bars". There are 8 discrete heights that the 23 bars can be, which means 8^23 different possible barcodes.
Spotify generates barcodes based on their URI schema. The URI spotify:playlist:37i9dQZF1DXcBWIGoYBM5M, for example, gets mapped to one particular barcode (the image is not reproduced here).
The URI has a lot more information (62^22) in it than the code. How would you map the URI to the barcode? It seems like you can't simply encode the URI directly. For more background, see my "answer" to this question: https://stackoverflow.com/a/62120952/10703868
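For a rough sense of those sizes (plain arithmetic, nothing Spotify-specific):
print(8**23)   # 590295810358705651712 -> about 5.9e20 possible barcodes
print(62**22)  # about 2.7e39 possible 22-character base-62 IDs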
The patent explains the general process; this is what I have found. There is also a more recent patent.
When using the Spotify code generator the website makes a request to https://scannables.scdn.co/uri/plain/[format]/[background-color-in-hex]/[code-color-in-text]/[size]/[spotify-URI].
Using Burp Suite, when scanning a code through Spotify the app sends a request to Spotify's API: https://spclient.wg.spotify.com/scannable-id/id/[CODE]?format=json where [CODE] is the media reference that you were looking for. This request can be made through Python, but only with the [TOKEN] that was generated through the app, as this is the only way to get the correct scope. The app token expires in about half an hour.
import requests

head = {
    "X-Client-Id": "58bd3c95768941ea9eb4350aaa033eb3",
    "Accept-Encoding": "gzip, deflate",
    "Connection": "close",
    "App-Platform": "iOS",
    "Accept": "*/*",
    "User-Agent": "Spotify/8.5.68 iOS/13.4 (iPhone9,3)",
    "Accept-Language": "en",
    "Authorization": "Bearer [TOKEN]",
    "Spotify-App-Version": "8.5.68"}
response = requests.get('https://spclient.wg.spotify.com:443/scannable-id/id/26560102031?format=json', headers=head)
print(response)
print(response.json())
Which returns:
<Response [200]>
{'target': 'spotify:playlist:37i9dQZF1DXcBWIGoYBM5M'}
So 26560102031 is the media reference for your playlist.
The patent states that the code is first detected and then possibly converted into 63 bits using a Gray table. For example 361354354471425226605 is encoded into 010 101 001 010 111 110 010 111 110 110 100 001 110 011 111 011 011 101 101 000 111.
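To make the gray table concrete, here is a small sketch of my own (the digit-to-bits mapping is read off the worked example above, and matches the gray_code_inv table in the decoder further down):
# bar height digit (0-7) -> 3-bit gray code, as in the worked example
gray_bits = {'0': '000', '1': '001', '2': '011', '3': '010',
             '4': '110', '5': '111', '6': '101', '7': '100'}

code = '361354354471425226605'
print(' '.join(gray_bits[d] for d in code))
# 010 101 001 010 111 110 010 111 110 110 100 001 110 011 111 011 011 101 101 000 111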
However the code sent to the API is 6875667268; I'm unsure how the media reference is generated, but this is the number used in the lookup table.
The reference contains the integers 0-9, compared to the gray table's 0-7, implying that an algorithm using normal binary has been used. The patent talks about using a convolutional code and then the Viterbi algorithm for error correction, so this may be the output of that; something that is impossible to recreate without the states, I believe. However, I'd be interested if you can interpret the patent any better.
This media reference is 10 digits, however others have 11 or 12.
Here are two more examples of the raw distances, the gray table binary and then the media reference:
1.
022673352171662032460
000 011 011 101 100 010 010 111 011 001 100 001 101 101 011 000 010 011 110 101 000
67775490487
2.
574146602473467556050
111 100 110 001 110 101 101 000 011 110 100 010 110 101 100 111 111 101 000 111 000
57639171874
edit:
Some extra info:
There are some posts online describing how you can encode any text, such as spotify:playlist:HelloWorld, into a code; however, this no longer works.
I also discovered through the proxy that you can use the domain to fetch the album art of a track above the code. This suggests a closer integration of Spotify's API and this scannables URL than previously thought, as it not only stores the URIs and their codes but can also validate URIs and return updated album art.
https://scannables.scdn.co/uri/800/spotify%3Atrack%3A0J8oh5MAMyUPRIgflnjwmB
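A minimal sketch of fetching that (my own example; I'm assuming the endpoint simply returns an image that can be saved as-is, and the output filename is arbitrary):
import requests

url = 'https://scannables.scdn.co/uri/800/spotify%3Atrack%3A0J8oh5MAMyUPRIgflnjwmB'
resp = requests.get(url)
with open('scannable.jpg', 'wb') as f:  # image format assumed
    f.write(resp.content)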
Your suspicion was correct - they're using a lookup table. For all of the fun technical details, the relevant patent is available here: https://data.epo.org/publication-server/rest/v1.0/publication-dates/20190220/patents/EP3444755NWA1/document.pdf
Very interesting discussion. Always been attracted to barcodes so I had to take a look. I did some analysis of the barcodes alone (didn't access the API for the media refs) and think I have the basic encoding process figured out. However, based on the two examples above, I'm not convinced I have the mapping from media ref to 37-bit vector correct (i.e. it works in case 2 but not case 1). At any rate, if you have a few more pairs, that last part should be simple to work out. Let me know.
For those who want to figure this out, don't read the spoilers below!
It turns out that the basic process outlined in the patent is correct, but lacking in details. I'll summarize below using the example above. I actually analyzed this in reverse, which is why I think the code description is basically correct except for step (1): I generated 45 barcodes and all of them matched this encoding.
1. Map the media reference as integer to 37 bit vector.
Something like write number in base 2, with lowest significant bit
on the left and zero-padding on right if necessary.
57639171874 -> 0100010011101111111100011101011010110
2. Calculate CRC-8-CCITT, i.e. generator x^8 + x^2 + x + 1
The following steps are needed to calculate the 8 CRC bits:
Pad with 3 bits on the right:
01000100 11101111 11110001 11010110 10110000
Reverse bytes:
00100010 11110111 10001111 01101011 00001101
Calculate CRC as normal (highest order degree on the left):
-> 11001100
Reverse CRC:
-> 00110011
Invert check:
-> 11001100
Finally append to step 1 result:
01000100 11101111 11110001 11010110 10110110 01100
3. Convolutionally encode the 45 bits using the common generator
polynomials (1011011, 1111001) in binary with puncture pattern
110110 (or 101, 110 on each stream). The result of step 2 is
encoded using tail-biting, meaning we begin the shift register
in the state of the last 6 bits of the 45 long input vector.
Prepend stream with last 6 bits of data:
001100 01000100 11101111 11110001 11010110 10110110 01100
Encode using first generator:
(a) 100011100111110100110011110100000010001001011
Encode using 2nd generator:
(b) 110011100010110110110100101101011100110011011
Interleave bits (abab...):
11010000111111000010111011110011010011110001...
1010111001110001000101011000010110000111001111
Puncture every third bit:
111000111100101111101110111001011100110000100100011100110011
4. Permute data by choosing indices 0, 7, 14, 21, 28, 35, 42, 49,
56, 3, 10..., i.e. incrementing 7 modulo 60. (Note: unpermute by
incrementing 43 mod 60).
The encoded sequence after permuting is
111100110001110101101000011110010110101100111111101000111000
5. The final step is to map back to bar lengths 0 to 7 using the
gray map (000,001,011,010,110,111,101,100). This gives the 20 bar
encoding. As noted before, add three bars: short one on each end
and a long one in the middle.
UPDATE: I've added a barcode (levels) decoder (assuming no errors) and an alternate encoder that follows the description above rather than the equivalent linear algebra method. Hopefully that is a bit more clear.
UPDATE 2: Got rid of most of the hard-coded arrays to illustrate how they are generated.
The linear algebra method defines the linear transformation (spotify_generator) and mask to map the 37-bit input into the 60-bit convolutionally encoded data. The mask is the result of the 8-bit inverted CRC being convolutionally encoded. The spotify_generator is a 37x60 matrix that implements the product of generators for the CRC (a 37x45 matrix) and convolutional codes (a 45x60 matrix). You can create the generator matrix from an encoding function by applying the function to each row of an appropriately sized identity matrix; for example, a CRC function that adds 8 bits to each 37-bit data vector, applied to each row of a 37x37 identity matrix.
import numpy as np
import crccheck

# Utils for conversion between int, array of binary
# and array of bytes (as ints)
def int_to_bin(num, length, endian):
    if endian == 'l':
        return [num >> i & 1 for i in range(0, length)]
    elif endian == 'b':
        return [num >> i & 1 for i in range(length-1, -1, -1)]

def bin_to_int(bin, length):
    return int("".join([str(bin[i]) for i in range(length-1, -1, -1)]), 2)

def bin_to_bytes(bin, length):
    b = bin[0:length] + [0] * (-length % 8)
    return [(b[i]<<7) + (b[i+1]<<6) + (b[i+2]<<5) + (b[i+3]<<4) +
            (b[i+4]<<3) + (b[i+5]<<2) + (b[i+6]<<1) + b[i+7] for i in range(0, len(b), 8)]

# Return the circular right shift of an array by 'n' positions
def shift_right(arr, n):
    return arr[-n % len(arr):len(arr):] + arr[0:-n % len(arr)]

gray_code = [0,1,3,2,7,6,4,5]
gray_code_inv = [[0,0,0],[0,0,1],[0,1,1],[0,1,0],
                 [1,1,0],[1,1,1],[1,0,1],[1,0,0]]

# CRC using Rocksoft model:
# NOTE: this is not quite any of their predefined CRC's
# 8: number of check bits (degree of poly)
# 0x7: representation of poly without high term (x^8+x^2+x+1)
# 0x0: initial fill of register
# True: byte reverse data
# True: byte reverse check
# 0xff: Mask check (i.e. invert)
spotify_crc = crccheck.crc.Crc(8, 0x7, 0x0, True, True, 0xff)

def calc_spotify_crc(bin37):
    bytes = bin_to_bytes(bin37, 37)
    return int_to_bin(spotify_crc.calc(bytes), 8, 'b')

def check_spotify_crc(bin45):
    data = bin_to_bytes(bin45, 37)
    return spotify_crc.calc(data) == bin_to_bytes(bin45[37:], 8)[0]

# Simple convolutional encoder
def encode_cc(dat):
    gen1 = [1,0,1,1,0,1,1]
    gen2 = [1,1,1,1,0,0,1]
    punct = [1,1,0]
    dat_pad = dat[-6:] + dat  # 6 bits are needed to initialize
                              # register for tail-biting
    stream1 = np.convolve(dat_pad, gen1, mode='valid') % 2
    stream2 = np.convolve(dat_pad, gen2, mode='valid') % 2
    enc = [val for pair in zip(stream1, stream2) for val in pair]
    return [enc[i] for i in range(len(enc)) if punct[i % 3]]

# To create a generator matrix for a code, we encode each row
# of the identity matrix. Note that the CRC is not quite linear
# because of the check mask so we apply the lambda function to
# invert it. Given a 37 bit media reference we can encode by
# ref * spotify_generator + spotify_mask (mod 2)
_i37 = np.identity(37, dtype=bool)
crc_generator = [_i37[r].tolist() +
                 list(map(lambda x: 1-x, calc_spotify_crc(_i37[r].tolist())))
                 for r in range(37)]
spotify_generator = 1*np.array([encode_cc(crc_generator[r]) for r in range(37)], dtype=bool)
del _i37
spotify_mask = 1*np.array(encode_cc(37*[0] + 8*[1]), dtype=bool)

# The following matrix is used to "invert" the convolutional code.
# In particular, we choose a 45 vector basis for the columns of the
# generator matrix (by deleting those in positions equal to 2 mod 4)
# and then inverting the matrix. By selecting the corresponding 45
# elements of the convolutionally encoded vector and multiplying
# on the right by this matrix, we get back to the unencoded data,
# assuming there are no errors.
# Note: numpy does not invert binary matrices, i.e. GF(2), so we
# hard code the following 3 row vectors to generate the matrix.
conv_gen = [[0,1,0,1,1,1,1,0,1,1,0,0,0,1]+31*[0],
            [1,0,1,0,1,0,1,0,0,0,1,1,1] + 32*[0],
            [0,0,1,0,1,1,1,1,1,1,0,0,1] + 32*[0]]
conv_generator_inv = 1*np.array([shift_right(conv_gen[(s-27) % 3], s) for s in range(27, 72)], dtype=bool)

# Given an integer media reference, returns list of 20 barcode levels
def spotify_bar_code(ref):
    bin37 = np.array([int_to_bin(ref, 37, 'l')], dtype=bool)
    enc = (np.add(1*np.dot(bin37, spotify_generator), spotify_mask) % 2).flatten()
    perm = [enc[7*i % 60] for i in range(60)]
    return [gray_code[4*perm[i]+2*perm[i+1]+perm[i+2]] for i in range(0, len(perm), 3)]

# Equivalent function but using CRC and CC encoders.
def spotify_bar_code2(ref):
    bin37 = int_to_bin(ref, 37, 'l')
    enc_crc = bin37 + calc_spotify_crc(bin37)
    enc_cc = encode_cc(enc_crc)
    perm = [enc_cc[7*i % 60] for i in range(60)]
    return [gray_code[4*perm[i]+2*perm[i+1]+perm[i+2]] for i in range(0, len(perm), 3)]

# Given 20 (clean) barcode levels, returns media reference
def spotify_bar_decode(levels):
    level_bits = np.array([gray_code_inv[levels[i]] for i in range(20)], dtype=bool).flatten()
    conv_bits = [level_bits[43*i % 60] for i in range(60)]
    cols = [i for i in range(60) if i % 4 != 2]  # columns to invert
    conv_bits45 = np.array([conv_bits[c] for c in cols], dtype=bool)
    bin45 = (1*np.dot(conv_bits45, conv_generator_inv) % 2).tolist()
    if check_spotify_crc(bin45):
        return bin_to_int(bin45, 37)
    else:
        print('Error in levels; Use real decoder!!!')
        return -1
An example:
>>> levels = [5,7,4,1,4,6,6,0,2,4,3,4,6,7,5,5,6,0,5,0]
>>> spotify_bar_decode(levels)
57639171874
>>> spotify_bar_code(57639171874)
[5, 7, 4, 1, 4, 6, 6, 0, 2, 4, 3, 4, 6, 7, 5, 5, 6, 0, 5, 0]

htmlTable in Rmd - conversion to Word docx

I have the following Rmd file, which produces an HTML file that I then copy-paste into a docx file (for collaborators). Here are some things I'd like to know how to do with the tables, but I can't find the answers in the vignettes:
A. I want to know how to remove the blank column that gets inserted in Word in between Cgroup 1 and Cgroup 2.
B. I want to know how to set the width of the column with the row names ("1st row",...)
C. How can I change the font and font size? I tried following this, but using output: word_document with htmlTable() doesn't work.
D. To ease the conversion to Word, is there a way to specify page breaks? Landscape orientation?
Thank you so much!
---
title: "Example"
output:
  Gmisc::docx_document:
    fig_caption: TRUE
    force_captions: TRUE
---
Results
=======
```{r, echo = FALSE}
library(htmlTable)
library(Gmisc)
library(knitr)
mx <- matrix(ncol=6, nrow=8)
rownames(mx) <- paste(c("1st", "2nd", "3rd",
                        paste0(4:8, "th")),
                      "row")
colnames(mx) <- paste(c("1st", "2nd", "3rd",
                        paste0(4:6, "th")),
                      "hdr")
for (nr in 1:nrow(mx)){
  for (nc in 1:ncol(mx)){
    mx[nr, nc] <- paste0(nr, ":", nc)
  }
}
htmlTable(mx,
          cgroup = c("Cgroup 1", "Cgroup 2"),
          n.cgroup = c(2,4))
```
The styling seemed to be off for the row names; this is now fixed in version 1.10.1, which you can download using the devtools package: devtools::install_github("gforge/htmlTable", ref="develop")
Regarding the styling, the function allows almost any CSS style you could imagine. Unfortunately it requires copy-pasting into Word, and this functionality hasn't been Microsoft's highest priority. You can easily adapt your example to accommodate the required changes using the css.cell argument:
library(htmlTable)
library(knitr)
mx <- matrix(ncol=6, nrow=8)
rownames(mx) <- paste(c("1st", "2nd", "3rd",
                        paste0(4:8, "th")),
                      "row")
colnames(mx) <- paste(c("1st", "2nd", "3rd",
                        paste0(4:6, "th")),
                      "hdr")
for (nr in 1:nrow(mx)){
  for (nc in 1:ncol(mx)){
    mx[nr, nc] <- paste0(nr, ":", nc)
  }
}
css.cell = rep("font-size: 1.5em;", times = ncol(mx) + 1)
css.cell[1] = "width: 4cm; font-size: 2em;"
htmlTable(mx,
          css.cell = css.cell,
          css.cgroup = "color: red",
          css.table = "color: blue",
          cgroup = c("Cgroup 1", "Cgroup 2"),
          n.cgroup = c(2,4))
There is no way to remove the empty column generated by cgroups. This was required for the table to look nice and is a conscious design choice.
Regarding page breaks, I doubt there is any elegant way of doing that. An alternative could possibly be the ReporteRs package. I haven't used it myself, but it's more closely integrated with Word and could possibly be a solution.

Indentation misplaced while creating PDF using perl module PDF::API2

I have data in an array and I can write the data into PDF format using PDF::API2. The problem is that during the writing process the indentation (spaces) is not exactly the same as in the array.
In array format:
ATOM 1 N MET A 0 24.277 8.374 -9.854 1.00 38.41 N 0.174
ATOM 38 OE2 GLU A 4 37.711 19.692 -12.684 1.00 28.70 O 0.150
In PDF format:
ATOM 1 N MET A 0 24.277 8.374-9.8541.0038.41 N 0.174
ATOM 38 OE2 GLU A 4 37.71119.692-12.684 1.00 28.70 O 0.150
My code:
my $pdf = PDF::API2->new(-file => "/home/httpd/cgi-bin/new.pdf");
$pdf->mediabox("A4");
my $page = $pdf->page;
my $fnt = $pdf->corefont('Arial', -encoding => 'latin1');
my $txt = $page->text;
$txt->textstart;
$txt->font($fnt, 8);
$txt->translate(100,800);
$j1 = 0;
for ($i = 0; $i < scalar(@ar_velz); $i++) {    # data input to write in PDF
    $txt->lead(10);
    $txt->section("$ar_velz[$i]", 500, 800);   # writing each array index
    if ($j1 == 75) {                           # start a new page every 75 lines
        $page = $pdf->page;
        $fnt = $pdf->corefont('Arial', -encoding => 'latin1');
        $txt = $page->text;
        $txt->textstart;
        $txt->font($fnt, 8);
        $txt->lead(10);
        $txt->translate(100,800);
        $j1 = 0;
    }
    $j1++;
}
$txt->textend;
$pdf->save;
$pdf->end();
That happens because Arial is not a mono-spaced font. The characters all have different widths; in particular, a blank space is usually not very wide. If you want the spacing to stay intact, you need to use a mono-spaced font, such as Courier.
$fnt = $pdf->corefont('Courier',-encoding => 'latin1');
That fact is also why PDF::API2 includes a method advancewidth in its PDF::API2::Content class. You can use that to check if a block of text is too wide to fit into a line, and manually wrap it if needed. Of course for your table, that doesn't help.
An alternative to the mono-spaced font might be to use PDF::Table, which can create tables inside a PDF::API2 document.

Matlab Code for Reading Text file with inconsistent rows

I am new to Matlab and have been working my way through using Google. But now it seems I have hit a wall.
I have a text file which looks like following:
Information is for illustration reasons only
Aggregated Results
Date;$/Val1;Total $;Exp. Val1;Act. Val1
01-Oct-2008; -5.20; -1717; 330; 323
02-Oct-2008; -1.79; -595; 333; 324
03-Oct-2008; -2.29; -765; 334; 321
04-Oct-2008; -2.74; -917; 335; 317
Total Period; -0.80; -8612; 10748; 10276
Aggregated Results for location State PA
Date;$/Val1;Total $;Exp. Val1;Act. Val1
01-Oct-2008; -5.20; -1717; 330; 323
02-Oct-2008; -1.79; -595; 333; 324
03-Oct-2008; -2.29; -765; 334; 321
Total Period; -0.80; -8612; 10748; 10276
Results for account A1
Date;$/Val1;Total $;Exp. Val1;Act. Val1
01-Oct-2008; -7.59; -372; 49; 51
Total Period; -0.84; -1262; 1502; 1431
Results for account A2
Date;$/MWh;Total $;Exp. MWh;Act. MWh
01-Oct-2008; -8.00; -392; 49; 51
02-Oct-2008; 0.96; 47; 49; 51
03-Oct-2008; -0.75; -37; 50; 48
04-Oct-2008; 1.28; 53; 41; 40
Total Period; -0.36; -534; 1502; 1431
I want to extract the following information in a cell/matrix format so that I can use it later to selectively do operations like averaging accounts A1 and A2, or averaging PA and A1, etc.
PA -0.80
A1 -0.84
A2 -0.36
I'd go this way:
fid = fopen(filename, 'r');
A = textscan(fid, '%s', 'delimiter', '\r');
fclose(fid);
A = A{:};
str_i = 'Total Period';
ix = find(strncmp(A, str_i, length(str_i)));
res = arrayfun(@(i) str2num(A{ix(i)}(length(str_i)+2:end)), 1:numel(ix), 'UniformOutput', false);
res = cat(2, res{:});
This way you'll get all the numerical values after a string 'Total Period' in a matrix, so that you may pick the values you need.
Similarly you may operate with strings PA, A1 and A2.
Matlab is not that nice when it comes to dealing with messy data. You may want to preprocess it a bit first.
However, here is an easy general way to import mixed numeric and non-numeric data in Matlab for a limited number of normal sized files.
Step 1: Copy the contents of the file into Excel and save it as xls or xlsx
Step 2: Use xlsread
[NUM,TXT,RAW]=xlsread('test.xlsx')
From there the parsing should be manageable.
Hopefully they will add non-numeric support to csvread or dlmread in the future.