This may sound like a dumb question, but I'd like to know where the .tex file is saved when I compile a PDF document from an Rmd file using RStudio Server.
I added the keep_tex option, so the YAML header of the Rmd looks like this:
---
output:
  pdf_document:
    keep_tex: yes
---
Then when I compiled, the console output looked like this:
|...................... | 33%
ordinary text without R code
|........................................... | 67%
label: plot
processing file: test.Rmd
cropping /tmp/Rtmpb1x3Q0/preview-3bfe24922427.dir/test_files/figure-latex/plot-1.pdf
PDFCROP 1.33, 2012/02/01 - Copyright (c) 2002-2012 by Heiko Oberdiek.
==> 1 page written on `/tmp/Rtmpb1x3Q0/preview-3bfe24922427.dir/test_files/figure-latex/plot-1.pdf'.
|.................................................................| 100%
ordinary text without R code
/usr/lib/rstudio-server/bin/pandoc/pandoc test.utf8.md --to latex --from markdown+autolink_bare_uris+ascii_identifiers+tex_math_single_backslash-implicit_figures --output /tmp/Rtmpb1x3Q0/preview-3bfe24922427.dir/test.tex --template /home/myusername/R/x86_64-pc-linux-gnu-library/3.1/rmarkdown/rmd/latex/default.tex --highlight-style tango --latex-engine pdflatex --variable 'geometry:margin=1in'
output file: test.knit.md
/usr/lib/rstudio-server/bin/pandoc/pandoc test.utf8.md --to latex --from markdown+autolink_bare_uris+ascii_identifiers+tex_math_single_backslash-implicit_figures --output /tmp/Rtmpb1x3Q0/preview-3bfe24922427.dir/test.pdf --template /home/myusername/R/x86_64-pc-linux-gnu-library/3.1/rmarkdown/rmd/latex/default.tex --highlight-style tango --latex-engine pdflatex --variable 'geometry:margin=1in'
Output created: /tmp/Rtmpb1x3Q0/preview-3bfe24922427.dir/test.pdf
I'd like to find the intermediate .tex file (or test.knit.md) and do a bit of editing, except it is nowhere to be found: not in the working directory, nor in /home/myusername/, /, or /tmp/Rtmpb1x3Q0/.
I'd really appreciate it if someone has the answer.
Actually, this line in the console output
/usr/lib/rstudio-server/bin/pandoc/pandoc test.utf8.md --to latex --from markdown+autolink_bare_uris+ascii_identifiers+tex_math_single_backslash-implicit_figures --output /tmp/Rtmpb1x3Q0/preview-3bfe24922427.dir/test.tex --template /home/myusername/R/x86_64-pc-linux-gnu-library/3.1/rmarkdown/rmd/latex/default.tex --highlight-style tango --latex-engine pdflatex --variable 'geometry:margin=1in'
tells us that the output .tex file is in /tmp/Rtmpb1x3Q0/preview-3bfe24922427.dir/test.tex.
Somehow I did not find the file last time, but on a recent attempt the .tex file was actually there, so that answers the question.
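If you want to keep that file before the temporary directory gets cleaned up, one option (using the path from the console output above) is simply to copy it somewhere permanent:
cp /tmp/Rtmpb1x3Q0/preview-3bfe24922427.dir/test.tex ~/test.tex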
The initial YAML header from the question
---
output:
  pdf_document:
    keep_tex: yes
---
Throws an error for me, while the following does not:
---
output:
  pdf_document: default
keep_tex: T
---
However, I was still unable to find the .tex file by following the console output; it appears to still not be saved. What worked easily instead was running the following lines in the RStudio console:
# install.packages("rmarkdown")
rmarkdown::render("FileName.Rmd", output_format = rmarkdown::latex_document())
The file "FileName.Rmd" needs to be in the current working directory - which is where the .tex file will be saved.
In my loop, I run a dbt command and save the output to a .yml file. The following command works and generates a schema in my .yml file accurately:
for file in models/l30_mart/*.sql; do
    table=$(basename "$file" .sql)
    dbt run-operation generate_model_yaml --args "{\"model_name\": \"$table\"}" > test.yml
done
However, in the example above, I am saving the test.yml file in the root directory. When I try to save the file in another path, for example models/l30_mart/test.yml like this, it doesn't work:
for file in models/l30_mart/*.sql; do
    table=$(basename "$file" .sql)
    dbt run-operation generate_model_yaml --args "{\"model_name\": \"$table\"}" > models/l30_mart/test.yml
done
In this case, when I open the test.yml file, I see this:
12:06:42 Running with dbt=1.0.1
12:06:43 Encountered an error:
Compilation Error
The schema file at models/l30_mart/test.yml is invalid because no version is specified. Please consult the documentation for more information on schema.yml syntax:
https://docs.getdbt.com/docs/schemayml-files
What am I missing out on?
If I try something like this to save separate files, with the extracted table name variable as the filename, it also doesn't work:
for file in models/l30_mart/*.sql; do
    table=$(basename "$file" .sql)
    dbt run-operation generate_model_yaml --args "{\"model_name\": \"$table\"}" > models/l30_mart/$table.yml
done
In this case, the files either have this output:
20:39:44 Running with dbt=1.0.1
20:39:45 Encountered an error:
Compilation Error
The schema file at models/l30_mart/firsttable.yml is invalid because no version is specified. Please consult the documentation for more information on schema.yml syntax:
https://docs.getdbt.com/docs/schemayml-files
or this (e.g. in the secondtablename.yml file):
20:39:48 Running with dbt=1.0.1
20:39:49 Encountered an error:
Parsing Error
Error reading dbt_4flow: l30_mart/firstablename.yml - Runtime Error
Syntax error near line 2
------------------------------
1 | 20:39:44 Running with dbt=1.0.1
2 | 20:39:45 Encountered an error:
3 | Compilation Error
4 | The schema file at models/l30_mart/firsttablename.yml is invalid because no version is specified. Please consult the documentation for more information on schema.yml syntax:
5 |
Raw Error:
------------------------------
mapping values are not allowed in this context
in "<unicode string>", line 2, column 31
Note that the secondtablename.yml mentions the firsttablename.yml.
I don't know dbt, but the likely explanation is that dbt parses all *.yml files in that target directory when you call it. Since the shell opens the pipe to the *.yml file before dbt is called, the file already exists (but is initially empty) when dbt runs. Since dbt expects the file to contain a version, you get an error.
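You can see that shell behaviour in isolation: the redirection target is created (empty) before the command on the left even runs (empty.yml below is just a throwaway name for illustration):
false > models/l30_mart/empty.yml
ls -l models/l30_mart/empty.yml    # the file already exists, with size 0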
To check whether this assessment is correct, write into a temporary file:
for file in models/l30_mart/*.sql; do
    target_file=$(mktemp)
    table=$(basename "$file" .sql)
    dbt run-operation generate_model_yaml --args "{\"model_name\": \"$table\"}" > "$target_file"
    mv "$target_file" models/l30_mart/test.yml
done
(Be aware of mktemp shenanigans if you're using macOS)
Edit: Since dbt seems to be affected by the files already existing, you can also try generating all files first and moving them into the correct directory afterwards:
target_dir=$(mktemp -d)
for file in models/l30_mart/*.sql; do
    table=$(basename "$file" .sql)
    dbt run-operation generate_model_yaml --args "{\"model_name\": \"$table\"}" > "$target_dir/$table.yml"
done
mv "$target_dir"/*.yml models/l30_mart/
rmdir "$target_dir"
I have generated a box file from a PNG image, then I followed this tutorial to generate a custom traineddata file:
https://pretius.com/how-to-prepare-training-files-for-tesseract-ocr-and-improve-characters-recognition/
I encountered an error when I tried to use the generated traineddata alongside pytesseract, and I got this kind of error:
raise TesseractError(proc.returncode, get_errors(error_string))
pytesseract.pytesseract.TesseractError: (-4, "read_params_file:
Can't open txt read_params_file: Can't open txt read_params_file: Can't open txt read_params_file: Can't open txt Error: LSTM requested, but not present!! Loading tesseract. mgr->GetComponent(TESSDATA_NORMPROTO, &fp)
:Error:Assert failed:in file adaptmatch.cpp, line 552")
I'm using Tesseract version 5.0
These are my config options:
traineddata = f'+eng+lav+lav2'
config = f'-l {traineddata} --oem 1 --psm 3 {tessdata_dir}'
I followed the same tutorial and encountered the exact same error. On my first tries the *.traineddata didn't generate well, and I found out that one file was missing (normproto). So I just cleaned all the generated files (except the corrected .box files) and reran the training process, and everything worked fine on the second attempt.
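If you want to check whether a given .traineddata actually contains the normproto component before retraining, one way (a sketch, assuming Tesseract's training tools are installed; lav.traineddata is the language file from the question's config) is to unpack it with combine_tessdata:
combine_tessdata -u lav.traineddata lav.
ls lav.*    # lav.normproto should be among the unpacked components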
I want rmarkdown/knitr to create PDF output using a custom tex template.
It is possible to override pandoc defaults with:
---
output:
  pdf_document:
    pandoc_args: '--pdf-engine=xelatex'
---
However, it does not work when I add a further option:
'--pdf-engine=xelatex --template template.tex'
The error:
Unknown option --template template.tex .
How can I set a custom tex template using rmarkdown?
Much easier solution:
---
mainfont: Minion Pro
output:
  pdf_document:
    latex_engine: xelatex
---
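To address the template part of the question as well: pdf_document also accepts a template option, so a custom tex template can be set directly in the header (a sketch, assuming template.tex sits next to the .Rmd file):
---
output:
  pdf_document:
    latex_engine: xelatex
    template: template.tex
---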
I was making a math class note with some Unicode characters (Simplified Chinese, in my case) in it. When I was trying to convert it into a PDF file, it popped up a 500 error. The error message reads:
...
*************************************************
("E:\Program Files (x86)\MiKTeX 2.9\tex\latex\fontspec\fontspec.sty"
("E:\Program Files (x86)\MiKTeX 2.9\tex\latex\fontspec\fontspec-xetex.sty"
("E:\Program Files (x86)\MiKTeX 2.9\tex\latex\base\fontenc.sty"
("E:\Program Files (x86)\MiKTeX 2.9\tex\latex\base\tuenc.def"))
("E:\Program Files (x86)\MiKTeX 2.9\tex\latex\fontspec\fontspec.cfg")
! Undefined control sequence.
<argument> \LaTeX3 error:
Erroneous variable \c__fontspec_shape_n_n_tl used!
l.3806 \emfontdeclare{ \emshape, \eminnershape }
?
! Emergency stop.
<argument> \LaTeX3 error:
Erroneous variable \c__fontspec_shape_n_n_tl used!
l.3806 \emfontdeclare{ \emshape, \eminnershape }
No pages of output.
Transcript written on notebook.log.
I guess the fontspec part went wrong, but I don't know how to solve it.
For your information, here is what I had done before I got the 500 error:
1. I've installed pandoc, and I already had MiKTeX before;
2. I've changed the file
...\nbconvert\templates\latex\article.tplx
rewriting the article class to be ctexart;
3. I've changed the file
...\nbconvert\templates\latex\exporters\pdf.py
rewriting the latex command to be
latex_command = List([u"xelatex", u"{filename}"], config=True,
    help="Shell command used to compile latex."
)
4. I've also tried this: https://github.com/ipython/ipython/issues/7150, which converts the ipynb file into a LaTeX file first, then into PDF. This didn't work for me either; the main reason is that the config file can't be found by the jupyter nbconvert command.
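For reference, the two-step conversion mentioned in point 4 would look roughly like this (notebook.ipynb is a placeholder name for the actual notebook):
jupyter nbconvert --to latex notebook.ipynb
xelatex notebook.tex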
For your information, my OS is Win7 Ultimate x64, using Chrome with the Anaconda3 Jupyter notebook.
Thanks in advance for anyone who takes time to read my post. Any help would be appreciated.
I am using unoconv to convert an ods spreadsheet to a csv file.
Here is the command:
unoconv -vvv --doctype=spreadsheet --format=csv --output= ~/Dropbox/mariners_site/textFiles/expenses.csv ~/Dropbox/Aldeburgh/expenses/expenses.ods
It saves the output file in the same directory as the source file, not in the specified directory. The error message is:
Output file: /home/richard/Dropbox/mariners_site/textFiles/expenses.csv
unoconv: UnoException during export phase:
Unable to store document to file:///home/richard/Dropbox/mariners_site/textFiles/expenses.csv (ErrCode 19468)
I'm sure that this worked initially, but it has since stopped.
I have checked for permissions and they are identical for both directories.
I translated ErrCode 19468 for you and it boils down to meaning ERRCODE_SFX_DOCUMENTREADONLY.
You can find more information about the specific meaning of LibreOffice ErrCode numbers from the unoconv documentation at: https://github.com/dagwieers/unoconv/blob/master/doc/errcode.adoc
The clue here is that you have a whitespace character between --output= and the filename (--output= ~/Dropbox/mariners_site/textFiles/expenses.csv). Because of that, unoconv gets an empty output value (which means the current directory) and is given two input files, which explains why you get this specific error, in my opinion.
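So removing that stray space should make unoconv honour the output path (the absolute path below is taken from the error output above):
unoconv -vvv --doctype=spreadsheet --format=csv --output=/home/richard/Dropbox/mariners_site/textFiles/expenses.csv ~/Dropbox/Aldeburgh/expenses/expenses.ods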