How can I obtain a memory pointer in Progress using GET-POINTER-VALUE?
On Windows this works fine:
DEFINE VARIABLE vUNO AS MEMPTR.
DEFINE VARIABLE vDOS AS MEMPTR.
DEFINE VARIABLE vTRES AS MEMPTR.
DEFINE VARIABLE sUNO AS CHARACTER.
DEFINE VARIABLE sDOS AS CHARACTER.
DEFINE VARIABLE sTRES AS CHARACTER.
DEFINE VARIABLE rUno AS MEMPTR.
/* For testing, put something into UNO, DOS and TRES. */
DEFINE VARIABLE PTR AS MEMPTR.
ASSIGN sUNO = "Uno"
sDOS = "Dos"
sTRES = "Tres"
SET-SIZE(vUNO ) = LENGTH(sUNO ) * 2
SET-SIZE(vDOS ) = LENGTH(sDOS ) * 2
SET-SIZE(vTRES ) = LENGTH(sTRES) * 2
PUT-STRING(vUNO , 1) = sUNO
PUT-STRING(vDOS , 1) = sDOS
PUT-STRING(vTRES, 1) = sTRES.
SET-SIZE(PTR) = 4 /* Pointer to vUNO -> 1 */
+ 4 /* Pointer to vDOS -> 5 */
+ 4. /* Pointer to vTRES -> 9 */
/*
NOTE:
4 because on 32-bit architectures pointers are 4 bytes.
Check on Unix, because HP-UX (at Axa) is 64-bit (8-byte pointers).
*/
/* [1] 2 3 4 [5] 6 7 8 [9] 10 11 12 */
MESSAGE PROGRAM-NAME(1) SKIP
GET-STRING(vUNO,1 ) "/" GET-POINTER-VALUE(vUNO) SKIP
GET-STRING(vDOS,1 ) "/" GET-POINTER-VALUE(vDOS) SKIP
GET-STRING(vTRES,1) "/" GET-POINTER-VALUE(vTRES) SKIP
VIEW-AS ALERT-BOX INFO BUTTONS OK.
/******************************************/
It returns:
---------------------------
Información
---------------------------
C:\GMM2000\Temp\p19350.cmp
Uno / 87066920
Dos / 85914720
Tres / 85914744
---------------------------
Aceptar
---------------------------
but with the same code, Unix returns:
---------------------------
Información
---------------------------
/gmm2000/p13659.cmp
Uno / ?
Dos / ?
Tres / ?
---------------------------
Aceptar
---------------------------
Please help!
The HP-UX 64-bit AVM doesn't use 64-bit pointers in some releases, since the external interfaces only handle 32 bits. I forget which version implemented full 64-bit pointers, though; that would be a question for PSC tech support.
I just tried your code on 10.2B Linux. It seems to work:
┌────────── Information ──────────┐
│ /home/tom/p04012_Untitled1.ped │
│ Uno / 16817200 │
│ Dos / 16992512 │
│ Tres / 16992544 │
│ ─────────────────────────────── │
│ <OK> │
└─────────────────────────────────┘
Now that we know that you are running 64-bit Progress v9 on HP-UX...
In a 64-bit environment GET-POINTER-VALUE() returns a 64-bit result, but Progress v9 does not have an INT64 data type. Try assigning the result to a DECIMAL variable; that should be able to hold the value.
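For example, a minimal sketch of that workaround (untested; the variable name dPtrVal is illustrative):

```
DEFINE VARIABLE dPtrVal AS DECIMAL NO-UNDO.

/* DECIMAL has enough precision to hold a 64-bit pointer value
   that a v9 INTEGER would overflow into the unknown value (?) */
dPtrVal = GET-POINTER-VALUE(vUNO).

MESSAGE dPtrVal VIEW-AS ALERT-BOX INFO BUTTONS OK.
```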
I'm downloading (with a proc download in SAS) a dataset from a server with wlatin1 encoding to a server with UTF-8 encoding.
I've got the error
ERROR: Some character data was lost during transcoding in the data set
libref.datasetname. Either the data contains characters that are not
representable in the new encoding or truncation occurred during transcoding.
I tried setting inencoding='utf8' or inencoding='asciiany' on the input dataset but it doesn't work (maybe because the wlatin1 server has SAS 9.3 whereas the UTF-8 server has SAS 9.4).
Rewriting the file like in the following code (and then doing the proc download of myoutput) works, but I was wondering if there is a more elegant way to do the same thing.
data myoutput;
set pathin.myinput;
/* Transliterate accented I as plain I */
des_nome = tranwrd(des_nome,'CD'x,'I');
des_nome = tranwrd(des_nome,'ED'x,'i');
/* Transliterate accented A as plain A */
des_nome = tranwrd(des_nome,'C1'x,'A');
des_nome = tranwrd(des_nome,'E1'x,'a');
/* Transliterate accented E as plain E */
des_nome = tranwrd(des_nome,'C9'x,'E');
des_nome = tranwrd(des_nome,'E9'x,'e');
/* Transliterate accented O as plain O */
des_nome = tranwrd(des_nome,'D2'x,'O');
des_nome = tranwrd(des_nome,'D3'x,'O');
des_nome = tranwrd(des_nome,'D6'x,'O');
des_nome = tranwrd(des_nome,'F3'x,'o');
/* Transliterate accented U as plain U */
des_nome = tranwrd(des_nome,'DC'x,'U');
des_nome = tranwrd(des_nome,'F9'x,'u');
/* Transliterate accented Y as plain Y */
des_nome = tranwrd(des_nome,'DD'x,'Y');
des_nome = tranwrd(des_nome,'FD'x,'y');
/* Transliterate odd accents as ' */
des_nome = tranwrd(des_nome,'B4'x,"'");
/* Transliterate odd symbols as spaces */
des_nome = tranwrd(des_nome,'A7'x,' '); /* § in NOME */
des_nome = tranwrd(des_nome,'A3'x,' '); /* £ in NOME */
cod_cap_res = tranwrd(cod_cap_res,'A3'x,' '); /* £ in CAP */
run;
Check out the SAS macro %copy_to_utf8, which will convert your data to UTF-8 automatically:
https://go.documentation.sas.com/doc/en/pgmsascdc/9.4_3.5/nlsref/p1f9ghftk3fgrin16t57ub6svmet.htm
%copy_to_utf8(pathin.myinput, myoutput)
The issue is that the storage length of at least one of the variables in the original dataset is too short to fit the expanded length required by the UTF-8 representation of at least one of the strings.
Here is a simple way to demonstrate the problem: create a dataset containing all 256 possible characters of the WLATIN1 (or LATIN1) encoding.
340 %put %sysfunc(getoption(encoding,keyword));
ENCODING=WLATIN1
341 data 'c:\downloads\wlatin1.sas7bdat';
342 string = collate(0,256);
343 run;
NOTE: The data set c:\downloads\wlatin1.sas7bdat has 1 observations and 1 variables.
NOTE: DATA statement used (Total process time):
real time 0.00 seconds
cpu time 0.00 seconds
Now try to read it in a session using UTF-8 encoding. Even if you make the variable longer in the output dataset the conversion fails.
38 data test1;
39 length string $1024;
40 set 'c:\downloads\wlatin1.sas7bdat';
NOTE: Data file WC000001.WLATIN1.DATA is in a format that is native to another host, or the file encoding does not match the
session encoding. Cross Environment Data Access will be used, which might require additional CPU resources and might reduce
performance.
41 run;
ERROR: Some character data was lost during transcoding in the dataset WC000001.WLATIN1. Either the data contains characters that
are not representable in the new encoding or truncation occurred during transcoding.
NOTE: The DATA step has been abnormally terminated.
NOTE: The SAS System stopped processing this step because of errors.
WARNING: The data set WORK.TEST1 may be incomplete. When this step was stopped there were 0 observations and 1 variables.
WARNING: Data set WORK.TEST1 was not replaced because this step was stopped.
NOTE: DATA statement used (Total process time):
real time 0.01 seconds
cpu time 0.00 seconds
So to fix it, read the file using ENCODING='ANY' and then convert the strings from WLATIN1 to UTF-8 with KCVT():
42 data test;
43 length string $1024;
44 set 'c:\downloads\wlatin1.sas7bdat' (encoding='any');
45 string = kcvt(string,'wlatin1','utf-8');
46 run;
NOTE: There were 1 observations read from the data set c:\downloads\wlatin1.sas7bdat.
NOTE: The data set WORK.TEST has 1 observations and 1 variables.
NOTE: DATA statement used (Total process time):
real time 0.01 seconds
cpu time 0.01 seconds
You might have better luck letting the newer SAS session that is running with UTF-8 encoding do the transcoding, instead of forcing the remote libref engine to deal with it.
So if you want to download the dataset MYLIB.MYDATA, you can first copy the physical dataset file and then transcode it:
%syslput lwork=%qsysfunc(pathname(work));
rsubmit;
proc download binary infile="%sysfunc(pathname(MYLIB))/mydata.sas7bdat"
outfile="&lwork/mydata.sas7bdat" ;
run;
endrsubmit;
data mydata;
set mydata(encoding='wlatin1');
run;
I am currently using radare2 to extract opcodes from PE files. I am attempting to use the "pd" command, which the API describes as: "pd n @ offset: Print n opcodes disassembled". I am wondering if there is a way to calculate/find out exactly what "n" is for each file I process. Thanks.
ENVIRONMENT
radare2: radare2 4.2.0-git 23519 # linux-x86-64 git.4.1.1-84-g0c46c3e1e commit: 0c46c3e1e30bb272a5a05fc367d874af32b41fe4 build: 2020-01-08__09:49:0
system: Ubuntu 18.04.3 LTS
SOLUTION
This example shows 4 different options to view / print disassembly or opcodes.
View disassembly in radare2 via visual mode:
Command one: aaaa # Analyze the file
Command two: Vp # Open disassembly in visual mode
Print disassembly of all functions in r2 or r2pipe:
Command one: aaaa # Analyze the file
Command two: pdf @@f > out
pdf # Print disassembly of a function
@@f # Repeat the command for every function
> out # Redirect the output to the file named out
Print only the instructions in r2 or r2pipe:
Command one: aaaa # Analyze the file
Command two: pif @@f ~[0] > out
pif # Print instructions of a function
@@f # Repeat the command for every function
~[0] # Only print the first column (the instruction)
> out # Redirect the output to the file named out
Obtain detailed information for each opcode using r2 or r2pipe:
Command one: aaaa # Analyze the file
Command two: aoj @@=`pid @@f ~[0]` > out
aoj # Display opcode analysis information in JSON
@@= # Repeat the command for every offset returned by the sub-query
pid @@f ~[0] # The sub-query
pid # Print disassembly with offset and bytes
@@f # Repeat the command for every function
~[0] # Only print the first column (the offset)
> out # Redirect the output to the file named out
EXAMPLE
Replace the commands here with any option from above.
Example using radare2 shell
user@host:~$ r2 /bin/ls
[0x00005850]> aaaa
...
[0x00005850]> pdf @@f > out
[0x00005850]> q
user@host:~$ cat out
...
┌ 38: fcn.00014840 ();
│ ; var int64_t var_38h @ rsp+0xffffffd0
│ 0x00014840 53 push rbx
│ 0x00014841 31f6 xor esi, esi
│ 0x00014843 31ff xor edi, edi
│ 0x00014845 e846f2feff call sym.imp.getcwd
│ 0x0001484a 4885c0 test rax, rax
│ 0x0001484d 4889c3 mov rbx, rax
│ ┌─< 0x00014850 740e je 0x14860
│ │ ; CODE XREF from fcn.00014840 @ 0x14868
│ ┌──> 0x00014852 4889d8 mov rax, rbx
│ ╎│ 0x00014855 5b pop rbx
│ ╎│ 0x00014856 c3 ret
..
│ ╎│ ; CODE XREF from fcn.00014840 @ 0x14850
│ ╎└─> 0x00014860 e88beffeff call sym.imp.__errno_location
│ ╎ 0x00014865 83380c cmp dword [rax], 0xc
│ └──< 0x00014868 75e8 jne 0x14852
└ 0x0001486a e861feffff call fcn.000146d0
; CALL XREFS from fcn.00013d00 @ 0x13d9d, 0x13da8
...
Example using Python with r2pipe
import r2pipe
R2 = r2pipe.open('/bin/ls') # Open r2 with file
R2.cmd('aaaa') # Analyze file
R2.cmd('pdf @@f > out') # Write disassembly for each function to out file
R2.quit() # Quit r2
This is a small example of a pyspark column (String) in my dataframe.
column | new_column
------------------------------------------------------------------------------------------------- |--------------------------------------------------
Hoy es día de ABC/KE98789T983456 clase. | 98789
------------------------------------------------------------------------------------------------- |--------------------------------------------------
Como ABC/KE 34562Z845673 todas las mañanas | 34562
------------------------------------------------------------------------------------------------- |--------------------------------------------------
Hoy tiene ABC/KE 110330/L63868 clase de matemáticas, | 110330
------------------------------------------------------------------------------------------------- |--------------------------------------------------
Marcos se ABC 898456/L56784 levanta con sueño. | 898456
------------------------------------------------------------------------------------------------- |--------------------------------------------------
Marcos se ABC898456 levanta con sueño. | 898456
------------------------------------------------------------------------------------------------- |--------------------------------------------------
comienza ABC - KE 60014 -T60058 | 60014
------------------------------------------------------------------------------------------------- |--------------------------------------------------
inglés y FOR 102658/L61144 ciencia. Se viste, desayuna | 102658
------------------------------------------------------------------------------------------------- |--------------------------------------------------
y comienza FOR ABC- 72981 / KE T79581: el camino hacia la | 72981
------------------------------------------------------------------------------------------------- |--------------------------------------------------
escuela. Se FOR ABC 101665 - 103035 - 101926 - 105484 - 103036 - 103247 - encuentra con su | [101665,103035,101926,105484,103036,103247]
------------------------------------------------------------------------------------------------- |--------------------------------------------------
escuela ABCS 206048/206049/206050/206051/205225-FG-matemáticas- | [206048,206049,206050,206051,205225]
------------------------------------------------------------------------------------------------- |--------------------------------------------------
encuentra ABCS 111553/L00847 & 111558/L00895 - matemáticas | [111553, 111558]
------------------------------------------------------------------------------------------------- |--------------------------------------------------
ciencia ABC 163278/P20447 AND RETROFIT ABCS 164567/P21000 - 164568/P21001 - desayuna | [163278,164567,164568 ]
------------------------------------------------------------------------------------------------- |--------------------------------------------------
ABC/KE 71729/T81672 - 71781/T81674 71782/T81676 71730/T81673 71783/T81677 71784/T | [71729,71781,71782,71730,71783,71784]
------------------------------------------------------------------------------------------------- |--------------------------------------------------
ciencia ABC/KE2646/L61175:E/F-levanta con sueño L61/62LAV AT Z5CTR/XC D3-1593 | [2646]
-----------------------------------------------------------------------------------------------------------------------------------------------------
escuela ABCS 6048/206049/6050/206051/205225-FG-matemáticas- MSN 2345 | [6048,206049,6050,206051,205225]
-----------------------------------------------------------------------------------------------------------------------------------------------------
FOR ABC/KE 109038_L35674_DEFINE AND DESIGN IMPROVEMENTS OF 1618 FROM 118(PDS4 BRACKETS) | [109038]
-----------------------------------------------------------------------------------------------------------------------------------------------------
y comienza FOR ABC- 2981 / KE T79581: el camino hacia la 9856 | [2981]
I want to extract all numbers of 4, 5 or 6 digits from this text.
Conditions and cases for extracting them:
- Attached to ABC/KE (first line in the example above).
- after ABC/KE + space (second and third line).
- after ABC + space (line 4)
- after ABC without space (line 5)
- after ABC - KE + space
- after the word FOR
- after ABC- + space
- after ABC + space
- after ABCS (line 10 and 11)
Example of failed cases:
Column | new_column
------------------------------------------------------------------------------------------------------------------------
FOR ABC/KE 109038_L35674_DEFINE AND DESIGN IMPROVEMENTS OF 1618 FROM 118(PDS4 BRACKETS) | [1618] ==> should be [109038]
------------------------------------------------------------------------------------------------------------------------
ciencia ABC/KE2646/L61175:E/F-levanta con sueño L61/62LAV AT Z5CTR/XC D3-1593 | [1593] ==> should be [2646]
------------------------------------------------------------------------------------------------------------------------
escuela ABCS 6048/206049/6050/206051/205225-FG-matemáticas- MSN 2345 | [6048,206049,6050,206051,205225, 2345] ==> should be [6048,206049,6050,206051,205225]
I hope I have summarized the cases; see my example above and the expected output.
How can I do it?
Thank you
One way is to use regexes to clean up the data and set up a lone anchor with the value ABC to identify the start of a potential match. After str.split(), iterate through the resulting array to flag and retrieve consecutive matching numbers that follow this anchor.
Edit: added underscore _ to the data pattern (\b(\d{4,6})(?=[A-Z/_]|$)) so that an underscore is now also allowed to follow the matched 4-6 digit substring. This fixes the first failed case; the second and third should already work with the existing regex patterns.
import re
from pyspark.sql.types import ArrayType, StringType
from pyspark.sql.functions import udf
(1) Use regex patterns to clean the raw data so that we have only one anchor, ABC, identifying the start of a potential match:
clean1: use [-&\s]+ to convert '&', '-' and whitespaces to a SPACE ' ', they are used to connect a chain of numbers
example: `ABC - KE` --> `ABC KE`
`103035 - 101926 - 105484` -> `103035 101926 105484`
`111553/L00847 & 111558/L00895` -> `111553/L00847 111558/L00895`
clean2: convert text matching the following three sub-patterns into 'ABC '
+ ABCS?(?:[/\s]+KE|(?=\s*\d))
+ ABC followed by an optional `S`
+ followed by at least one slash or whitespace and then `KE` --> `[/\s]+KE`
example: `ABC/KE 110330/L63868` to `ABC 110330/L63868`
+ or followed by optional whitespaces and then at least one digit --> (?=\s*\d)
example: ABC898456 -> `ABC 898456`
+ \bFOR\s+(?:[A-Z]+\s+)*
+ `FOR` words
example: `FOR DEF HJK 12345` -> `ABC 12345`
data: \b(\d{4,6})(?=[A-Z/_]|$) is a regex to match the actual numbers: 4-6 digits followed by [A-Z/_] or end-of-string
(2) Create a dict to save all 3 patterns:
ptns = {
'clean1': re.compile(r'[-&\s]+', re.UNICODE)
, 'clean2': re.compile(r'\bABCS?(?:[/\s-]+KE|(?=\s*\d))|\bFOR\s+(?:[A-Z]+\s+)*', re.UNICODE)
, 'data' : re.compile(r'\b(\d{4,6})(?=[A-Z/_]|$)', re.UNICODE)
}
(3) Create a function to find matched numbers and save them into an array
def find_number(s_t_r, ptns, is_debug=0):
try:
arr = re.sub(ptns['clean2'], 'ABC ', re.sub(ptns['clean1'], ' ', s_t_r.upper())).split()
if is_debug: return arr
# f: flag to identify if a chain of matches is started, default is 0(false)
f = 0
new_arr = []
# iterate through the above arr and start checking numbers when anchor is detected and set f=1
for x in arr:
if x == 'ABC':
f = 1
elif f:
new = re.findall(ptns['data'], x)
# if find any matches, else reset the flag
if new:
new_arr.extend(new)
else:
f = 0
return new_arr
except Exception as e:
# only use print in local debugging
print('ERROR:{}:\n [{}]\n'.format(s_t_r, e))
return []
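As a quick standalone sanity check, the same logic can be run in plain Python without Spark (this condenses the patterns and function above, dropping the debug flag and error handling):

```python
import re

# Same three patterns as in the answer above
ptns = {
    'clean1': re.compile(r'[-&\s]+', re.UNICODE),
    'clean2': re.compile(r'\bABCS?(?:[/\s-]+KE|(?=\s*\d))|\bFOR\s+(?:[A-Z]+\s+)*', re.UNICODE),
    'data':   re.compile(r'\b(\d{4,6})(?=[A-Z/_]|$)', re.UNICODE),
}

def find_number(s, ptns):
    # normalize connectors to spaces, rewrite anchor variants to 'ABC ', then tokenize
    arr = re.sub(ptns['clean2'], 'ABC ', re.sub(ptns['clean1'], ' ', s.upper())).split()
    f, out = 0, []          # f: flag set while we are inside a chain that follows an anchor
    for x in arr:
        if x == 'ABC':
            f = 1
        elif f:
            new = re.findall(ptns['data'], x)
            if new:
                out.extend(new)
            else:
                f = 0       # chain broken: reset the flag
    return out

print(find_number('Hoy es día de ABC/KE98789T983456 clase.', ptns))   # ['98789']
print(find_number('Marcos se ABC898456 levanta con sueño.', ptns))    # ['898456']
```

This makes it easy to iterate on the patterns locally before wrapping the function in a udf.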
(4) Define the udf function:
udf_find_number = udf(lambda x: find_number(x, ptns), ArrayType(StringType()))
(5) Get the new_column:
df.withColumn('new_column', udf_find_number('column')).show(truncate=False)
+------------------------------------------------------------------------------------------+------------------------------------------------+
|column |new_column |
+------------------------------------------------------------------------------------------+------------------------------------------------+
|Hoy es día de ABC/KE98789T983456 clase.                                                   |[98789]                                         |
|Como ABC/KE 34562Z845673 todas las mañanas                                                |[34562]                                         |
|Hoy tiene ABC/KE 110330/L63868 clase de matemáticas,                                      |[110330]                                        |
|Marcos se ABC 898456/L56784 levanta con sueño.                                            |[898456]                                        |
|Marcos se ABC898456 levanta con sueño.                                                    |[898456]                                        |
|comienza ABC - KE 60014 -T60058 |[60014] |
|inglés y FOR 102658/L61144 ciencia. Se viste, desayuna                                    |[102658]                                        |
|y comienza FOR ABC- 72981 / KE T79581: el camino hacia la |[72981] |
|escuela. Se FOR ABC 101665 - 103035 - 101926 - 105484 - 103036 - 103247 - encuentra con su|[101665, 103035, 101926, 105484, 103036, 103247]|
|escuela ABCS 206048/206049/206050/206051/205225-FG-matemáticas-                           |[206048, 206049, 206050, 206051, 205225]        |
|encuentra ABCS 111553/L00847 & 111558/L00895 - matemáticas                                |[111553, 111558]                                |
|ciencia ABC 163278/P20447 AND RETROFIT ABCS 164567/P21000 - 164568/P21001 - desayuna |[163278, 164567, 164568] |
|ABC/KE 71729/T81672 - 71781/T81674 71782/T81676 71730/T81673 71783/T81677 71784/T |[71729, 71781, 71782, 71730, 71783, 71784] |
+------------------------------------------------------------------------------------------+------------------------------------------------+
(6) Code for debugging; use find_number(row.column, ptns, 1) to check how/whether the first two regex patterns work as expected:
for row in df.limit(10).collect():
    print('{}:\n  {}\n'.format(row.column, find_number(row.column, ptns, 1)))
Some Notes:
in the clean2 pattern, ABCS and ABC are treated the same way. If they should differ, just remove the 'S' and add a new alternative ABCS\s*(?=\d) at the end of the pattern:
re.compile(r'\bABC(?:[/\s-]+KE|(?=\s*\d))|\bFOR\s+(?:[A-Z]+\s+)*|ABCS\s*(?=\d)')
the current clean1 pattern only treats '-', '&' and whitespace as connectors between consecutive numbers; you might add more characters or words such as 'AND' or 'OR', for example:
re.compile(r'[-&\s]+|\b(?:AND|OR)\b')
the FOR-words pattern is \bFOR\s+(?:[A-Z]+\s+)*; this might need adjusting depending on whether digits are allowed in those words, etc.
This was tested on Python 3. With Python 2 there might be Unicode issues, which you can fix using the method in the first answer of the reference.
I have built a Yocto core-image-minimal (with meta-sunxi) for an Orange Pi Zero board (a cheap Chinese board that I use for my studies):
https://github.com/linux-sunxi/meta-sunxi
It successfully boots on my board, but in the /dev directory I don't have access to the SPI NOR memory. After some searching on the Orange Pi wiki, I found that I need to add some lines to my device tree: https://linux-sunxi.org/Orange_Pi_Zero#Installing_from_linux
&spi0 {
status = "okay";
flash: m25p80@0 {
#address-cells = <1>;
#size-cells = <1>;
compatible = "winbond,w25q128";
reg = <0>;
spi-max-frequency = <40000000>;
};
};
But I don't really understand how to proceed, because I can't find which files I need to edit. And maybe this is not a good idea; I think it would be better to create a .bbappend recipe, no?
The information that I have gathered by searching the meta-sunxi directories:
the machine configuration for orange-pi-zero sets KERNEL_DEVICETREE = "sun8i-h2-plus-orangepi-zero.dtb"
but there is no "sun8i-h2-plus-orangepi-zero.dts" file in the meta-sunxi directories?
The "sun8i-h2-plus-orangepi-zero.dtb" file is present in build/tmp/deploy/images/orange-pi-zero/, so I don't really know how it is generated. Is it only downloaded by Yocto (no device tree compilation)?
By searching on the net I was able to find sun8i-h2-plus-orangepi-zero.dts
at: https://github.com/torvalds/linux/blob/master/arch/arm/boot/dts/sun8i-h2-plus-orangepi-zero.dts
and it contains these interesting lines:
&spi0 {
/* Disable SPI NOR by default: it optional on Orange Pi Zero boards */
status = "disabled";
flash@0 {
#address-cells = <1>;
#size-cells = <1>;
compatible = "mxicy,mx25l1606e", "winbond,w25q128";
reg = <0>;
spi-max-frequency = <40000000>;
};
};
So maybe someone can give me some advice on adding SPI NOR support for my board? What is the best way: a .bbappend, or creating my own meta layer by copying meta-sunxi and editing it? And which files do I need to edit?
Thanks in advance for your time.
Pierre.
Compiling the image with Yocto and the meta BSP layer pulls the kernel (checked out into tmp/work-shared/<MACHINE>/kernel-source/), compiles it, and produces the final output image, which you can flash from tmp/deploy/images/<MACHINE>/. In your case, though, the mainline kernel doesn't enable this SPI flash by default, so you need to enable it in the Linux kernel source.
If you already have the Yocto build set up, you can edit the device tree and prepare a patch. Move into tmp/work-shared/orange-pi-zero/kernel-source/, edit the kernel source, and change the status property to
status = "okay";
and prepare a git patch using the usual sequence:
git add arch/arm/boot/dts/sun8i-h2-plus-orangepi-zero.dts
git commit -s -m "Enable SPI by default"
git format-patch HEAD~
Then you can apply this patch in one of two ways:
Edit recipes-kernel/linux/linux-mainline_git.bb and add your patch file to SRC_URI, copying the patch file into recipes-kernel/linux/linux-mainline.
If you don't want to edit the meta-sunxi layer, create a linux-mainline_%.bbappend in your own meta layer and do the same.
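For the .bbappend route, a minimal sketch (the layer name meta-mylayer is illustrative; FILESEXTRAPATHS lets BitBake find the patch next to your append file):

```
# meta-mylayer/recipes-kernel/linux/linux-mainline_%.bbappend
FILESEXTRAPATHS_prepend := "${THISDIR}/files:"

SRC_URI += "file://0001-arm-dts-enable-SPI-for-orange-pi-zero.patch"
```

with the patch file placed in meta-mylayer/recipes-kernel/linux/files/.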
The below patch can be directly applied to meta-sunxi to fix this case. You can find the same here.
From 3a1a3515d33facdf8ec9ab9735fb9244c65521be Mon Sep 17 00:00:00 2001
From: Parthiban Nallathambi <parthiban@linumiz.com>
Date: Sat, 10 Nov 2018 12:20:41 +0100
Subject: [PATCH] orange pi zero: Add SPI support by default
Signed-off-by: Parthiban Nallathambi <parthiban@linumiz.com>
---
...rm-dts-enable-SPI-for-orange-pi-zero.patch | 26 +++++++++++++++++++
recipes-kernel/linux/linux-mainline_git.bb | 1 +
2 files changed, 27 insertions(+)
create mode 100644 recipes-kernel/linux/linux-mainline/0001-arm-dts-enable-SPI-for-orange-pi-zero.patch
diff --git a/recipes-kernel/linux/linux-mainline/0001-arm-dts-enable-SPI-for-orange-pi-zero.patch b/recipes-kernel/linux/linux-mainline/0001-arm-dts-enable-SPI-for-orange-pi-zero.patch
new file mode 100644
index 0000000..e6d7933
--- /dev/null
+++ b/recipes-kernel/linux/linux-mainline/0001-arm-dts-enable-SPI-for-orange-pi-zero.patch
@@ -0,0 +1,26 @@
+From 1676d9767686404211c769de40e6aa55642b63d5 Mon Sep 17 00:00:00 2001
+From: Parthiban Nallathambi <parthiban@linumiz.com>
+Date: Sat, 10 Nov 2018 12:16:36 +0100
+Subject: [PATCH] arm: dts: enable SPI for orange pi zero
+
+Signed-off-by: Parthiban Nallathambi <parthiban@linumiz.com>
+---
+ arch/arm/boot/dts/sun8i-h2-plus-orangepi-zero.dts | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+diff --git a/arch/arm/boot/dts/sun8i-h2-plus-orangepi-zero.dts b/arch/arm/boot/dts/sun8i-h2-plus-orangepi-zero.dts
+index 0bc031fe4c56..0036065da81c 100644
+--- a/arch/arm/boot/dts/sun8i-h2-plus-orangepi-zero.dts
++++ b/arch/arm/boot/dts/sun8i-h2-plus-orangepi-zero.dts
+@@ -144,7 +144,7 @@
+
+ &spi0 {
+ /* Disable SPI NOR by default: it optional on Orange Pi Zero boards */
+- status = "disabled";
++ status = "okay";
+
+ flash@0 {
+ #address-cells = <1>;
+--
+2.17.2
+
diff --git a/recipes-kernel/linux/linux-mainline_git.bb b/recipes-kernel/linux/linux-mainline_git.bb
index 5b8e321..9b2bcbe 100644
--- a/recipes-kernel/linux/linux-mainline_git.bb
+++ b/recipes-kernel/linux/linux-mainline_git.bb
@@ -27,5 +27,6 @@ SRCREV_pn-${PN} = "b04e217704b7f879c6b91222b066983a44a7a09f"
SRC_URI = "git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git;protocol=git;branch=master \
file://defconfig \
+ file://0001-arm-dts-enable-SPI-for-orange-pi-zero.patch \
"
S = "${WORKDIR}/git"
--
2.17.2
I'm trying to print the outcome of an ANOVA like so:
library(pander)
m.aov = aov(Sepal.Width ~ Species * Sepal.Length, iris)
pander(m.aov, split.table=Inf)
and I get this as expected if I type it into the console:
----------------------------------------------------------------------
Df Sum Sq Mean Sq F value Pr(>F)
-------------------------- ---- -------- --------- --------- ---------
**Species** 2 11.34 5.672 76.48 2.329e-23
**Sepal.Length** 1 4.769 4.769 64.3 3.368e-13
**Species:Sepal.Length** 2 1.513 0.7566 10.2 7.19e-05
**Residuals** 144 10.68 0.07417 NA NA
----------------------------------------------------------------------
Table: Analysis of Variance Model
However, if I embed this into a knitr chunk, I don't get the table:
```{r, results='asis'}
library(pander)
m.aov = aov(Sepal.Width ~ Species * Sepal.Length, iris)
pander(m.aov, split.table=Inf)
```
Knit the above and one obtains
```r
pander(m.aov, split.table=Inf)
```
i.e., the code chunk with no output.
Question: Is this a bug (in knitr? pander?) or something I've overlooked? How can I work around it?
> sessionInfo()
R version 3.0.2 (2013-09-25)
Platform: x86_64-pc-linux-gnu (64-bit)
locale:
[1] LC_CTYPE=en_AU.UTF-8 LC_NUMERIC=C LC_TIME=en_AU.UTF-8
[4] LC_COLLATE=en_AU.UTF-8 LC_MONETARY=en_AU.UTF-8 LC_MESSAGES=en_AU.UTF-8
[7] LC_PAPER=en_AU.UTF-8 LC_NAME=C LC_ADDRESS=C
[10] LC_TELEPHONE=C LC_MEASUREMENT=en_AU.UTF-8 LC_IDENTIFICATION=C
attached base packages:
[1] stats graphics grDevices utils datasets methods base
other attached packages:
[1] knitr_1.8 pander_0.5.1 vimcom_1.0-0 setwidth_1.0-3 colorout_1.0-3
loaded via a namespace (and not attached):
[1] digest_0.6.4 evaluate_0.5.5 formatR_1.0 Rcpp_0.11.2 stringr_0.6.2 tools_3.0.2