Label to reference a table generated by xtable in LyX - knitr

In order to be able to use a label like tab:my_title, I would like to place a table generated by xtable inside a LyX table float, but I cannot get the table to be generated inside the float. This chunk produces correct output in LyX:
<<cars, echo=FALSE, warning=FALSE, results='asis', message=FALSE>>=
library(xtable)
print.xtable(xtable(head(cars), digits = 0), include.rownames=FALSE,
type = "latex", floating=FALSE)
@
But when I place the above chunk in a table float, LyX doesn't generate any output. I specified floating=FALSE so that xtable doesn't produce a floating environment for the table itself.
An answer to a similar question looks like a hack; I would prefer to properly insert the xtable output into a float.
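For reference, the LaTeX I would like LyX to end up producing is an ordinary table float around the xtable output, roughly like this (a hand-written sketch; the caption text is illustrative and the tabular body abbreviates head(cars)):
\begin{table}
\centering
\begin{tabular}{rr}
  \hline
  speed & dist \\
  \hline
  4 & 2 \\
  4 & 10 \\
  7 & 4 \\
  \hline
\end{tabular}
\caption{My title}
\label{tab:my_title}
\end{table}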
Minimal LyX file, as requested by @scottkosty:
#LyX 2.1 created this file. For more info see http://www.lyx.org/
\lyxformat 474
\begin_document
\begin_header
\textclass article
\use_default_options true
\begin_modules
knitr
\end_modules
\maintain_unincluded_children false
\language english
\language_package default
\inputencoding auto
\fontencoding global
\font_roman default
\font_sans default
\font_typewriter default
\font_math auto
\font_default_family default
\use_non_tex_fonts false
\font_sc false
\font_osf false
\font_sf_scale 100
\font_tt_scale 100
\graphics default
\default_output_format default
\output_sync 0
\bibtex_command default
\index_command default
\paperfontsize default
\spacing single
\use_hyperref false
\papersize default
\use_geometry false
\use_package amsmath 1
\use_package amssymb 1
\use_package cancel 1
\use_package esint 1
\use_package mathdots 1
\use_package mathtools 1
\use_package mhchem 1
\use_package stackrel 1
\use_package stmaryrd 1
\use_package undertilde 1
\cite_engine basic
\cite_engine_type default
\biblio_style plain
\use_bibtopic false
\use_indices false
\paperorientation portrait
\suppress_date false
\justification true
\use_refstyle 1
\index Index
\shortcut idx
\color #008000
\end_index
\secnumdepth 3
\tocdepth 3
\paragraph_separation indent
\paragraph_indentation default
\quotes_language english
\papercolumns 1
\papersides 1
\paperpagestyle default
\tracking_changes false
\output_changes false
\html_math_output 0
\html_css_as_file 0
\html_be_strict false
\end_header
\begin_body
\begin_layout Standard
\begin_inset ERT
status open
\begin_layout Plain Layout
<<cars, echo=FALSE, warning=FALSE, results='asis', message=FALSE>>=
\end_layout
\begin_layout Plain Layout
library(xtable)
\end_layout
\begin_layout Plain Layout
print.xtable(xtable(head(cars), digits = 0), include.rownames=FALSE,
\end_layout
\begin_layout Plain Layout
type = "latex", floating=FALSE)
\end_layout
\begin_layout Plain Layout
@
\end_layout
\end_inset
\end_layout
\end_body
\end_document
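For completeness, this is roughly what the body contains when I wrap the same ERT inset in a table float created via Insert > Float > Table (a hand-written sketch, not generated by LyX, so the inset layout is approximate):
\begin_layout Standard
\begin_inset Float table
wide false
sideways false
status open

\begin_layout Plain Layout
<the ERT inset with the knitr chunk, exactly as above>
\end_layout

\begin_layout Plain Layout
\begin_inset Caption Standard

\begin_layout Plain Layout
My title
\begin_inset CommandInset label
LatexCommand label
name "tab:my_title"

\end_inset

\end_layout

\end_inset

\end_layout

\end_inset

\end_layout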

If you place an xtable object into a LyX table float, your table is declared twice, once by the table float and once by xtable, and you get a LaTeX error. The trick is to use xtable's parameter only.contents=TRUE: then xtable returns only the contents, without any table declaration, but the tabular declaration is then lacking too. So we can build a little function that adds the necessary code so that everything works well.
The new function:
<<include=FALSE>>=
library(xtable)
# Wrap the bare rows produced by only.contents=TRUE in a tabular
# environment, so the result can be placed inside a LyX table float.
lyxTab <- function(tab){
  ncols <- dim(tab)[2]                      # one alignment letter per column
  col <- paste(rep("r", ncols), collapse = '')
  col <- paste('{', col, '}', sep = '')
  cat('\\begin{tabular}', col, sep = '')
  cat('\\hline')
  print(tab, only.contents = TRUE, include.rownames = FALSE)
  cat('\\end{tabular}')
}
@
The new function lyxTab() receives an xtable object as its parameter, and we can use it inside a table float environment.
<<echo=FALSE, results='asis'>>=
# code placed inside the table float environment
library(xtable)
data(iris)
lyxTab(xtable(head(iris)))
@


PostgreSQL absolute over relative xpath location

Consider the following xml document that is stored in a PostgreSQL field:
<E_sProcedure xmlns="http://www.minushabens.com/2008/FMSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" modelCodeScheme="Emo_ex" modelCodeSchemeVersion="01" modelCodeValue="EMO_E_PROCEDURA" modelCodeMeaning="Section" sectionID="11">
<tCatSnVsn_Pmax modelCodeScheme="Emodinamica_referto" modelCodeSchemeVersion="01" modelCodeValue="tCat4" modelCodeMeaning="My text"><![CDATA[1]]></tCatSnVsn_Pmax>
</E_sProcedure>
If I run the following query I get the correct result for Line 1, while Line 2 returns nothing:
SELECT
--Line 1
TRIM(BOTH FROM array_to_string((xpath('//child::*[@modelCodeValue="tCat4"]/text()', t.xml_element)),'')) as tCatSnVsn_Pmax_MEANING
--Line 2
,TRIM(BOTH FROM array_to_string((xpath('/tCatSnVsn_Pmax/text()', t.xml_element)),'')) as tCatSnVsn_Pmax
FROM (
SELECT unnest(xpath('//x:E_sProcedure', s.XMLDATA::xml, ARRAY[ARRAY['x', 'http://www.minushabens.com/2008/FMSchema']])) AS xml_element
FROM sr_data as s)t;
What's wrong in the xpath of Line 2?
Your second xpath() doesn't return anything because of two problems. First, you need to use //tCatSnVsn_Pmax, as the xml_element still starts with <E_sProcedure>; the path /tCatSnVsn_Pmax tries to select a top-level element with that name.
But even then it won't return anything, because of the namespace. You need to pass the same namespace definition to xpath(), so you need something like this:
SELECT (xpath('//x:tCatSnVsn_Pmax/text()', t.xml_element, ARRAY[ARRAY['x', 'http://www.minushabens.com/2008/FMSchema']]))[1] as tCatSnVsn_Pmax
FROM (
SELECT unnest(xpath('//x:E_sProcedure', s.XMLDATA::xml, ARRAY[ARRAY['x', 'http://www.minushabens.com/2008/FMSchema']])) AS xml_element
FROM sr_data as s
)t;
With modern Postgres versions (>= 10) I prefer using xmltable() for anything nontrivial; it makes it easier to pass namespaces and to access multiple attributes or elements.
SELECT xt.*
FROM sr_data
cross join
xmltable(xmlnamespaces ('http://www.minushabens.com/2008/FMSchema' as x),
'/x:E_sProcedure'
passing (xmldata::xml)
columns
sectionid text path '@sectionID',
pmax text path 'x:tCatSnVsn_Pmax',
model_code_value text path 'x:tCatSnVsn_Pmax/@modelCodeValue') as xt
For your sample XML, the above returns:
 sectionid | pmax | model_code_value
-----------+------+------------------
 11        | 1    | tCat4

Very large fields in AS400 iSeries database

I would like to save a large XML string (possibly longer than 32K or 64K) into an AS400 file field. Either DDS or SQL files would be OK. Example of SQL file below.
CREATE TABLE MYLIB/PRODUCT
(PRODCODE DEC (5 ) NOT NULL WITH DEFAULT,
PRODDESC CHAR (30 ) NOT NULL WITH DEFAULT,
LONGDESC CLOB (70K ) ALLOCATE(1000) NOT NULL WITH DEFAULT)
We would use RPGLE to read and write to fields.
The goal is to then pull out data via ODBC connection on a client side.
AS400 character fields seem to have a 32K limit, so this is not a great option.
What options do I have? I have been reading up on CLOBs, but there appear to be restrictions on writing large strings to CLOBs and on reading CLOB fields remotely. Note that the client is (still) on V5R4 of the AS400 OS.
thanks!
Charles' answer below shows how to extract data. I would like to insert data. This code runs, but throws a '22501' SQL error.
D wLongDesc s 65531a varying
D longdesc s sqltype(CLOB:65531)
/free
//eval longdesc = *ALL'123';
eval Wlongdesc = '123';
exec SQL
INSERT INTO PRODUCT (PRODCODE, PRODDESC, LONGDESC)
VALUES (123, 'Product Description', :LongDesc );
if %subst(sqlstt:1:2) <> '00';
// an error occurred.
endif;
// get the length explicitly; these variables are set up by the pre-processor
longdesc_len = %len(%trim(longdesc_data));
wLongDesc = %subst(longdesc_data:1:longdesc_len);
/end-free
C Eval *INLR = *on
C Return
Additional question: Is this technique suitable for storing data which I want to extract via ODBC connection later? Does ODBC read CLOB as pointer or can it pull out text?
At v5r4, RPGLE actually supports 64K character variables.
However, the DB is limited to 32K for regular char/varchar fields.
You'd need to use a CLOB for anything bigger than 32K.
If you can live with 64K (or so):
CREATE TABLE MYLIB/PRODUCT
(PRODCODE DEC (5 ) NOT NULL WITH DEFAULT,
PRODDESC CHAR (30 ) NOT NULL WITH DEFAULT,
LONGDESC CLOB (65531) ALLOCATE(1000) NOT NULL WITH DEFAULT)
You can use RPGLE's SQLTYPE support:
D code S 5s 0
d wLongDesc s 65531a varying
D longdesc s sqltype(CLOB:65531)
/free
exec SQL
select prodcode, longdesc
into :code, :longdesc
from mylib/product
where prodcode = :mykey;
wLongDesc = %subst(longdesc_data:1:longdesc_len);
DoSomething(wLongDesc);
The pre-compiler will replace longdesc with a DS defined like so:
D longdesc ds
D longdesc_len 10u 0
D longdesc_data 65531a
You could simply use it directly, making sure to use only up to longdesc_len characters, or convert it to a VARYING as I've done above.
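For instance, a minimal sketch of using the generated fields directly (DoSomething is a placeholder for whatever procedure consumes the text):
/free
// pass along only the bytes that were actually returned
DoSomething(%subst(longdesc_data : 1 : longdesc_len));
/end-free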
If you absolutely must handle larger than 64K...
Upgrade to a supported version of the OS (16MB variables supported)
Access the CLOB contents via an IFS file using a file reference
Option 2 is one I've never seen used, and I can't find any examples; I've only seen it mentioned in this old article:
http://www.ibmsystemsmag.com/ibmi/developer/general/BLOBs,-CLOBs-and-RPG/?page=2
This example shows how to write to a CLOB field in a Db2 database, with help from Charles and Mr Murphy's feedback.
* ----------------------------------------------------------------------
* Create table with CLOB:
* CREATE TABLE MYLIB/PRODUCT
* (MYDEC DEC (5 ) NOT NULL WITH DEFAULT,
* MYCHAR CHAR (30 ) NOT NULL WITH DEFAULT,
* MYCLOB CLOB (65531) ALLOCATE(1000) NOT NULL WITH DEFAULT)
* ----------------------------------------------------------------------
D PRODCODE S 5i 0
D PRODDESC S 30a
D i S 10i 0
D wLongDesc s 65531a varying
D longdesc s sqltype(CLOB:65531)
D* Note that the variables longdesc_data and longdesc_len are
D* created automatically by the SQL pre-processor; both must be
D* set before the INSERT (leaving longdesc_len unset is what
D* raised the 22501 error earlier).
/free
eval wLongdesc = '123';
longdesc_data = wLongDesc;
longdesc_len = %len(%trim(wLongDesc));
exec SQL set option commit = *none;
exec SQL
INSERT INTO PRODUCT (MYDEC, MYCHAR, MYCLOB)
VALUES (123, 'Product Description',:longDesc);
if %subst(sqlstt:1:2)<>'00' ;
// an error occurred.
endif;
Eval *INLR = *on;
Return;
/end-free

psql expanded display - avoid dashes

When I have a very wide column (like a json document) and I am using expanded display to make the contents at least partly readable, I still see extremely ugly record separators that seem to want to be as wide as the widest column, like the output below. Is there a way to avoid the "Sea of Dashes"?
-[ RECORD 1 ]--+---------------------------------------------------------------------------------------------------...
id | 18
description | {json data xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx}
parameter | {json data xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx}
name | Foo
-[ RECORD 2 ]--+---------------------------------------------------------------------------------------------------...
id | 19
description | {}
parameter | {json data xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx}
name | CustomerRequestEventType
To avoid the sea of dashes, use \pset format unaligned, e.g.:
t=# \x
Expanded display is on.
t=# \pset format unaligned
Output format is unaligned.
t=# with ts as (select generate_series('2010-01-01'::timestamp,'2010-01-10'::timestamp,'1 day'::interval) s) select array_agg(s) from ts;
array_agg|{"2010-01-01 00:00:00","2010-01-02 00:00:00","2010-01-03 00:00:00","2010-01-04 00:00:00","2010-01-05 00:00:00","2010-01-06 00:00:00","2010-01-07 00:00:00","2010-01-08 00:00:00","2010-01-09 00:00:00","2010-01-10 00:00:00"}
Time: 0.250 ms
As you can see there are no dashes, but the long string is still wrapped across lines at the width of the window (or not wrapped at all). For an unformatted string this is the solution; but you mentioned json, which can be divided up in a pretty way. To do that, instead of relying on the unaligned format alone, use the jsonb_pretty function or the pretty flag of other functions, e.g. array_to_json(..., true):
t=# with ts as (select generate_series('2010-01-01'::timestamp,'2010-01-31'::timestamp,'1 day'::interval) s) select array_to_json(array_agg(s),true) from ts;
array_to_json|["2010-01-01T00:00:00",
"2010-01-02T00:00:00",
"2010-01-03T00:00:00",
"2010-01-04T00:00:00",
"2010-01-05T00:00:00",
"2010-01-06T00:00:00",
"2010-01-07T00:00:00",
"2010-01-08T00:00:00",
"2010-01-09T00:00:00",
"2010-01-10T00:00:00",
"2010-01-11T00:00:00",
"2010-01-12T00:00:00",
"2010-01-13T00:00:00",
"2010-01-14T00:00:00",
"2010-01-15T00:00:00",
"2010-01-16T00:00:00",
"2010-01-17T00:00:00",
"2010-01-18T00:00:00",
"2010-01-19T00:00:00",
"2010-01-20T00:00:00",
"2010-01-21T00:00:00",
"2010-01-22T00:00:00",
"2010-01-23T00:00:00",
"2010-01-24T00:00:00",
"2010-01-25T00:00:00",
"2010-01-26T00:00:00",
"2010-01-27T00:00:00",
"2010-01-28T00:00:00",
"2010-01-29T00:00:00",
"2010-01-30T00:00:00",
"2010-01-31T00:00:00"]
Time: 0.291 ms
Note that I still use the unaligned format to avoid the "+" continuation characters, though...
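For the json columns from the question, a minimal sketch combining this with jsonb_pretty (mytable is a placeholder name; drop the cast if the column is already jsonb):
t=# \pset format unaligned
Output format is unaligned.
t=# select id, jsonb_pretty(description::jsonb) from mytable;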

Set value of boolean (type = bit(1)) in Mysql Workbench

How can I edit the value of a boolean (type = bit(1)) in Mysql Workbench's result grid of a table?
If I set it to 1 or 0 I get the error :
ERROR 1406: 1406: Data too long for column 'enabled' at row 1
Write b'1' for true, and b'0' for false.
To avoid this well-known bug, you can also write 01 for true/1 and 00 for false/0.
For example, entering 01 in the result grid produces an UPDATE statement that MySQL accepts as valid syntax.
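The same values work when writing the SQL by hand; a sketch, with my_table as a placeholder and the enabled column taken from the error message:
UPDATE my_table SET enabled = b'1' WHERE id = 1; -- true
UPDATE my_table SET enabled = b'0' WHERE id = 2; -- false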
You can also use the true and false keywords, where true = 1 and false = 0.
MySQL Workbench version: 6.3.4.0
UPDATE
MySQL Server version: 5.6.27-log

Postgres-pgloader-transformation in columns

I am loading a flat file into a Postgres table and need to do a few transformations while reading the file and loading it, such as:
-->Check for characters; if any are present, substitute a default value. REGEXP functions can be used in Oracle; how can such functions be called in the syntax below?
-->A TO_DATE-style conversion from a text format
-->Check for NULL and substitute a default value
-->Trim functions
-->Load only a few of the columns from the source file
-->Default values: say, for instance, the source file has only 3 columns but we need to load 4; one column should be filled with a default value
LOAD CSV
FROM 'filename'
INTO postgresql://role@host:port/database_name?tablename
TARGET COLUMNS
(
alphanm,alphnumnn,nmrc,dte
)
WITH truncate,
skip header = 0,
fields optionally enclosed by '"',
fields escaped by double-quote,
fields terminated by '|',
batch rows = 100,
batch size = 1MB,
batch concurrency = 64
SET work_mem to '32 MB', maintenance_work_mem to '64 MB';
Kindly help me: how can this be accomplished using pgloader?
Thanks
Here's a self-contained test case for pgloader that reproduces your use-case, as best as I could understand it:
/*
Sorry pgloader version "3.3.2" compiled with SBCL 1.2.8-1.el7 Doing kind
of POC, to implement in real time work. Sample data from file:
raj|B|0.5|20170101|ABCD Need to load only first,second,third and fourth
column; Table has three column, third column should be defaulted with some
value. Table structure: A B C-numeric D-date E-(Need to add default value)
*/
LOAD CSV
FROM inline
(
alphanm,
alphnumnn,
nmrc,
dte [date format 'YYYYMMDD'],
other
)
INTO postgresql:///pgloader?so.raja
(
alphanm,
alphnumnn,
nmrc,
dte,
col text using "constant value"
)
WITH truncate,
fields optionally enclosed by '"',
fields escaped by double-quote,
fields terminated by '|'
SET work_mem to '12MB',
standard_conforming_strings to 'on'
BEFORE LOAD DO
$$ drop table if exists so.raja; $$,
$$ create table so.raja (
alphanm text,
alphnumnn text,
nmrc numeric,
dte date,
col text
);
$$;
raj|B|0.5|20170101|ABCD
Now here's the extract from running the pgloader command:
$ pgloader 41287414.load
2017-08-15T12:35:10.258000+02:00 LOG Main logs in '/private/tmp/pgloader/pgloader.log'
2017-08-15T12:35:10.261000+02:00 LOG Data errors in '/private/tmp/pgloader/'
2017-08-15T12:35:10.261000+02:00 LOG Parsing commands from file #P"/Users/dim/dev/temp/pgloader-issues/stackoverflow/41287414.load"
2017-08-15T12:35:10.422000+02:00 LOG report summary reset
table name read imported errors total time
----------------------- --------- --------- --------- --------------
fetch 0 0 0 0.007s
before load 2 2 0 0.016s
----------------------- --------- --------- --------- --------------
so.raja 1 1 0 0.019s
----------------------- --------- --------- --------- --------------
Files Processed 1 1 0 0.021s
COPY Threads Completion 2 2 0 0.038s
----------------------- --------- --------- --------- --------------
Total import time 1 1 0 0.426s
And here's the content of the target table when the command is done:
$ psql -q -d pgloader -c 'table so.raja'
alphanm │ alphnumnn │ nmrc │ dte │ col
═════════╪═══════════╪══════╪════════════╪════════════════
raj │ B │ 0.5 │ 2017-01-01 │ constant value
(1 row)