Converting integer to binary for SPI transfer - raspberry-pi

I am trying to convert an integer to 3 bytes to send via the SPI protocol to control a MAX6921 serial chip. I am using spi.xfer2() but cannot get it to work as expected.
If I call spi.xfer2([0b00000000, 0b11101100, 0b10000000]), it displays the letter "H" on my display, but when I try to convert the corresponding int value, 79616, myself, I don't get the correct output:
val = 79616
spi.xfer2(val.to_bytes(3, 'little'))
My full code so far is on GitHub, and for comparison, this is my working code for Arduino.
More details
I have an IV-18 VFD tube driver module, which came with some example code for an ESP32 board. The driver module has a 20-output MAX6921 serial chip to drive the vacuum tube.
To send "H" to the second grid position (the first grid only displays a dot or a dash), the bits are sent to the MAX6921 in the order OUT19 --> OUT0, i.e. using the LSB in my table below. The letter "H" has the int value 79616.
I can successfully send this, manually, via SPI using:
spi.xfer2([0b00000000, 0b11101100, 0b10000000])
The problem I have is trying to convert other letters in a string to the correct bits. I can retrieve the integer value for any character (0-9, A-Z) in a string, but can't then work out how to convert it to the right format for spi.xfer() / spi.xfer2()
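To make the mismatch concrete, here is a small Python sketch comparing what to_bytes() produces with the frame that works (the bit-reversal check at the end is just an observation I made from the two values, not necessarily the intended encoding):

```python
val = 79616  # table value for "H", i.e. 0x013700

print(list(val.to_bytes(3, 'little')))  # [0, 55, 1]  -> 0x00 0x37 0x01
print(list(val.to_bytes(3, 'big')))     # [1, 55, 0]  -> 0x01 0x37 0x00

working = [0b00000000, 0b11101100, 0b10000000]  # 0x00 0xEC 0x80

# Neither byte order matches. Notably, the working frame is the
# 24-bit bit reversal of val, consistent with the OUT19 --> OUT0 order:
reversed24 = int(format(val, '024b')[::-1], 2)
print(reversed24 == int.from_bytes(bytes(working), 'big'))  # True
```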
My Code
def display_write(val):
    spi.xfer2(val.to_bytes(3, 'little'))

# Loops over the grid positions
def update_display():
    global grid_index
    val = grids[grid_index] | char_to_code(message[grid_index:grid_index+1])
    display_write(val)
    grid_index += 1
    if grid_index >= 9:
        grid_index = 0
The full source for my code so far is on GitHub
Map for MAX6921 data out pins to the IV-18 tube pins (trailing blank columns are unused):
BIT       24  23  22  21  20  19  18  17  16  15  14  13  12  11  10   9   8   7   6   5   4   3   2   1
IV-18     G9  G8  G7  G6  G5  G4  G3  G2  G1   A   B   C   D   E   F   G  DP
MAX6921    0   1   2   3   4   5   6   7   8   9  10  11  12  13  14  15  16  17  18  19
IV-18 Pinout diagram

Related

Error in post hoc test for lmer(): both multcomp() and emmeans()

I have a dataset of measurements of "Y" at different locations, and I am trying to determine how variable Y is influenced by variables A, B, and D by running a lmer() model and analyzing the results. However, when I reach the post hoc step, I receive an error when trying to analyze.
Here is an example of my data:
table <- " ID location A B C D Y
1 1 AA 0 0.6181587 -29.67 14.14 168.041
2 2 AA 1 0.5816176 -29.42 14.21 200.991
3 3 AA 2 0.4289670 -28.57 13.55 200.343
4 4 AA 3 0.4158891 -28.59 12.68 215.638
5 5 AA 4 0.3172721 -28.74 12.28 173.299
6 6 AA 5 0.1540603 -27.86 14.01 104.246
7 7 AA 6 0.1219355 -27.18 14.43 128.141
8 8 AA 7 0.1016643 -26.86 13.75 179.330
9 9 BB 0 0.6831649 -28.93 17.03 210.066
10 10 BB 1 0.6796935 -28.54 18.31 280.249
11 11 BB 2 0.5497743 -27.88 17.33 134.023
12 12 BB 3 0.3631052 -27.48 16.79 142.383
13 13 BB 4 0.3875498 -26.98 17.81 136.647
14 14 BB 5 0.3883785 -26.71 17.56 142.179
15 15 BB 6 0.4058061 -26.72 17.71 109.826
16 16 CC 0 0.8647298 -28.53 11.93 220.464
17 17 CC 1 0.8664036 -28.39 11.59 326.868
18 18 CC 2 0.7480748 -27.61 11.75 322.745
19 19 CC 3 0.5959143 -26.81 13.27 170.064
20 20 CC 4 0.4849077 -26.77 14.68 118.092
21 21 CC 5 0.3584687 -26.65 15.65 95.512
22 22 CC 6 0.3018285 -26.33 16.11 71.717
23 23 CC 7 0.2629121 -26.39 16.16 60.052
24 24 DD 0 0.8673077 -27.93 12.09 234.244
25 25 DD 1 0.8226558 -27.96 12.13 244.903
26 26 DD 2 0.7826429 -27.44 12.38 252.485
27 27 DD 3 0.6620447 -27.23 13.84 150.886
28 28 DD 4 0.4453213 -27.03 15.73 102.787
29 29 DD 5 0.3720257 -27.13 16.27 109.201
30 30 DD 6 0.6040217 -27.79 16.41 101.509
31 31 EE 0 0.8770987 -28.62 12.72 239.036
32 32 EE 1 0.8504547 -28.47 12.92 220.600
33 33 EE 2 0.8329484 -28.45 12.94 174.979
34 34 EE 3 0.8181102 -28.37 13.17 138.412
35 35 EE 4 0.7942685 -28.32 13.69 121.330
36 36 EE 5 0.7319724 -28.22 14.62 111.851
37 37 EE 6 0.7014828 -28.24 15.04 110.447
38 38 EE 7 0.7286984 -28.15 15.18 121.831"
#Create a dataframe with the above table
df <- read.table(text=table, header = TRUE)
df
# Make sure location is a factor
df$location<-as.factor(df$location)
Here is my model:
# Load libraries
library(ggplot2)
library(pscl)
library(lmtest)
library(lme4)
library(car)
mod = lmer(Y ~ A * B * poly(D, 2) * (1|location), data = df)
summary(mod)
plot(mod)
I now need to determine which variables significantly influence Y. Thus I ran Anova() from the car package (output below).
Anova(mod)
# Analysis of Deviance Table (Type II Wald chisquare tests)
#
# Response: Y
# Chisq Df Pr(>Chisq)
# A 8.2754 1 0.004019 **
# B 0.0053 1 0.941974
# poly(D, 2) 40.4618 2 1.636e-09 ***
# A:B 0.1709 1 0.679348
# A:poly(D, 2) 1.6460 2 0.439117
# B:poly(D, 2) 5.2601 2 0.072076 .
# A:B:poly(D, 2) 0.6372 2 0.727175
# Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
This suggests that:
A significantly influences Y
B does not significantly influence Y
D significantly influences Y
So next I would run a post hoc test for each of these variables, but this is where I run into issues. I have tried using both emmeans and multcomp packages below:
library(emmeans)
emmeans(mod, list(pairwise ~ A), adjust = "tukey")
# NOTE: Results may be misleading due to involvement in interactions
# Error in if ((misc$estType == "pairs") && (paste(c("", by), collapse = ",") != :
# missing value where TRUE/FALSE needed
pairs(emmeans(mod, "A"))
# NOTE: Results may be misleading due to involvement in interactions
# Error in if ((misc$estType == "pairs") && (paste(c("", by), collapse = ",") != :
# missing value where TRUE/FALSE needed
library(multcomp)
summary(glht(mod, linfct = mcp(A = "Tukey")), test = adjusted("fdr"))
# Error in h(simpleError(msg, call)) :
# error in evaluating the argument 'object' in selecting a method for function 'summary': Variable(s) ‘depth’ of class ‘integer’ is/are not contained as a factor in ‘model’.
This is the first time I've run an ANOVA/post hoc test on a lmer() model, and though I've read a few introductory sites for this model, I'm not sure I am testing it correctly. Any help would be appreciated.
If I am looking at the data correctly, A is the variable that has values of 0, 1, ..., 7. Now look at your anova table: A has only 1 d.f., not 7 as it should for a factor with 8 levels. That means your model is treating A as a numerical predictor -- which is rather meaningless here. Make A a factor and re-fit the model. You'll have better luck.
I also think you meant to have + (1|location) at the end of the model formula, rather than having the random effects interacting with some of the polynomial effects.
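In code, the suggested fix would look something like this (a sketch, assuming the df and libraries defined above; whether the full three-way interaction is estimable with only 38 rows is a separate question):

```r
# Treat A as a factor and keep the random intercept additive
df$A <- factor(df$A)
mod <- lmer(Y ~ A * B * poly(D, 2) + (1|location), data = df)
emmeans(mod, pairwise ~ A, adjust = "tukey")
```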

KDB/Q: How to write a loop and append output table?

Disclaimer: I am very new to the Q language so please excuse my silly question.
I have a function that currently is taking on 2 parameters (date;sym).It runs fine for 1 sym and 1 day. however, I need to perform this on multiple syms and dates which will take forever.
How do I create a loop that run the function on every sym, and on every date?
In python, it is straighforward as :
for date in datelist:
    for sym in symlist:
        func(date, sym)
How can I do something similar in Q? And how can I dynamically change the output table names and append them to a single table?
Currently, I am using the following:
output: raze .[function] peach paralist
where paralist is a list of parameter pairs: ((2020.06.01;ABC);(2020.06.01;XYZ)) but imho this is nowhere near efficient.
What would be the best way to achieve this in Q?
I'll generalize everything: say you have a function foo which operates on an atom dt with a vector s
q)foo:{[dt;s] dt +\: s}
q)dt:10?10
q)s:100?10
q)dt
8 1 9 5 4 6 6 1 8 5
q)s
4 9 2 7 0 1 9 2 1 8 8 1 7 2 4 5 4 2 7 8 5 6 4 1 3 3 7 8 2 1 4 2 8 0 5 8 5 2 8..
q)foo[;s] each dt
12 17 10 15 8 9 17 10 9 16 16 9 15 10 12 13 12 10 15 16 13 14 12 9 11 11 ..
5 10 3 8 1 2 10 3 2 9 9 2 8 3 5 6 5 3 8 9 6 7 5 2 4 4 ..
13 18 11 16 9 10 18 11 10 17 17 10 16 11 13 14 13 11 16 17 14 15 13 10 12 12 ..
9 14 7 12 5 6 14 7 6 13 13 6 12 7 9 10 9 7 12 13 10 11 9 6 8 8 ..
The solution is to project the symList over the function in question, then use each (or peach) for the date variable.
If your function requires an atomic date and sym, then you can just create a new function to implement this
q)bar:{[x;y] foo[x;] each y};
datelist:`date$10?10
symlist:10?`IBM`MSFT`GOOG
function:{0N!(x;y)}
{.[function;x]} peach datelist cross symlist
cross will return all combinations of sym and date
Is this what you need?
Try using two ' ("each-both") adverbs:
raze function'[datelist]'[symlist]
peach or each won't work here. They are not operators, but anonymous functions with two parameters: each is k){x'y}. That is why the statement function each list1 each list2 is invalid, but function'[list1]'[list2] works.
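As a minimal illustration of the each-both behaviour (a toy function, not from the question):

```q
q)f:{x+y}
q)f'[1 2 3;10 20 30]    / pairwise: f[1;10], f[2;20], f[3;30]
11 22 33
```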
From reading your response to another answer you are looking to save the results with unique names yes? Take a look at this solution using set to save and get to retrieve.
q)t:flip enlist each `colA`colB!(100;`name)
q)t
colA colB
---------
100 name
q)f:{[date;sym]tblName:`$string[date],string sym;tblName set update date:date,sym:sym from t}
q)newTbls:f'[.z.d+til 3;`AAA`BBB`CCC]
q)newTbls
`2020.09.02AAA`2020.09.03BBB`2020.09.04CCC
q)get each newTbls
+`colA`colB`date`sym!(,100;,`name;,2020.09.02;,`AAA)
+`colA`colB`date`sym!(,100;,`name;,2020.09.03;,`BBB)
+`colA`colB`date`sym!(,100;,`name;,2020.09.04;,`CCC)
q)get first newTbls
colA colB date sym
------------------------
100 name 2020.09.02 AAA
Does this meet your needs?
This could be a stab in the dark, but why not create an HDB rather than all these variables (output20191005ABC, output20191006ABC, etc.), given you want to append them to 1 table?
Below I have outlined how to create a date partitioned hdb called outputHDB which has one table outputTbl. I created the hdb by running a function by date and sym and then upserting those rows to disk.
C:\Users\Matthew Moore>mkdir outputHDB
C:\Users\Matthew Moore>cd outputHDB
// can change the outputHDB as desired
// start q
h:hopen `::6789; // as a demo I connected to another hdb process and extracted some data per sym / date over IPC
hdbLoc:hsym `$"C:/Users/Matthew Moore/outputHDB";
{[d;sl]
    {[d;s]
        // output:yourFunc[date;sym];
        // my func as a demo: I'm grabbing rows where price = max price by date and by sym from another hdb using the handle h
        output:{[d;s]
            h({[d;s] select from trades where date = d, sym = s, price = max price};d;s)
            }[d;s];
        // HDB part
        path:` sv (hdbLoc;`$string d;`outputTbl;`);
        // change `outputTbl to the desired table name
        // dynamically creates the save location, then upserts one sym's data directly to disk
        // e.g. `:C:/Users/Matthew Moore/outputHDB/2014.04.21/outputTbl/
        // the extra / at the end saves the table splayed, i.e. each column is its own file within the outputTbl directory
        path upsert .Q.en[`:.;output];
        // .Q.en enumerates syms in a table, which is required when saving a table splayed
        }[d;] each sl;
    // apply the parted attribute to the sym column on disk; this speeds up querying the on-disk data
    #[` sv (hdbLoc;`$string d;`outputTbl;`);`sym;`p#];
    }[;`AAPL`CSCO`DELL`GOOG`IBM`MSFT`NOK`ORCL`YHOO] each dateList:2014.04.21 2014.04.22 2014.04.23 2014.04.24 2014.04.25;
Now that the hdb has been created, you can load it from disk and query with qSQL
q)\l .
q)select from outputTbl where date = 2014.04.24, sym = `GOOG
date sym time src price size
------------------------------------------------------------
2014.04.24 GOOG 2014.04.24D13:53:59.182000000 O 46.43 2453

KDB+/Q: Custom min max scaler

I'm trying to implement a custom min-max scaler in kdb+/q. I have taken note of the implementation in the ml package; however, I'd like to be able to scale data between a custom range, e.g. 0 and 255. What would be an efficient implementation of min-max scaling in kdb+/q?
Thanks
Looking at the GitHub link on the page you referenced, it looks like you may be able to define a function like so:
minmax255:{[sf;x]sf*(x-mnx)%max[x]-mnx:min x}[255]
Where sf is your scaling factor (here given by 255).
q)minmax255 til 10
0 28.33333 56.66667 85 113.3333 141.6667 170 198.3333 226.6667 255
If you don't like decimals you could round to the nearest whole number like:
q)minmax255round:{[sf;x]floor 0.5+sf*(x-mnx)%max[x]-mnx:min x}[255]
q)minmax255round til 10
0 28 57 85 113 142 170 198 227 255
(logic here is if I have a number like 1.7, add .5, and floor I'll wind up with 2, whereas if I had a number like 1.2, add .5, and floor I'll end up with 1)
If you don't want to start at 0 you could use |, which takes the max of its left and right arguments
q)minmax255roundlb:{[sf;lb;x]lb|floor sf*(x-mnx)%max[x]-mnx:min x}[255;10]
q)minmax255roundlb til 10
10 28 56 85 113 141 170 198 226 255
Where I'm using lb to mean 'lower bound'
If you want to apply this to a table you could use
q)show testtab:([]a:til 10;b:til 10)
a b
---
0 0
1 1
2 2
3 3
4 4
5 5
6 6
7 7
8 8
9 9
q)update minmax255 a from testtab
a b
----------
0 0
28.33333 1
56.66667 2
85 3
113.3333 4
141.6667 5
170 6
198.3333 7
226.6667 8
255 9
The following will work nicely
minmaxCustom:{[l;u;x]l + (u - l) * (x-mnx)%max[x]-mnx:min x}
As petty as it sounds, it is my strong recommendation that you do not follow through with Shehir94's solution for a custom minimum value. Applying a maximum to impose a starting range messes with the original distribution. A custom min-max scaling should be a simple linear transformation of the standard 0-1 min-max transformation.
X' = a + bX
For example, to get a custom scaling of 10-255, that would be b=245 and a=10 (applied to the 0-1 transform). We would expect the new mean to follow this formula, and the standard deviation to change only by the multiplicative factor; applying a lower bound instead breaks this. For example:
q)dummyData:10000?100.0
q)stats:{`transform`minVal`maxVal`avgVal`stdDev!(x;min y;max y; avg y; dev y)}
q)minmax255roundlb:{[sf;lb;x]lb|sf*(x-mnx)%max[x]-mnx:min x}[255;10]
q)minmaxCustom:{[l;u;x]l + (u - l) * (x-mnx)%max[x]-mnx:min x}
q)res:stats'[`orig`lb`linear;(dummyData;minmax255roundlb dummyData;minmaxCustom[10;255;dummyData])]
q)res
transform minVal maxVal avgVal stdDev
-----------------------------------------------
orig 0.02741043 99.98293 50.21896 28.92852
lb 10 255 128.2518 73.45999
linear 10 255 133.024 70.9064
// The transformed average should roughly be
q)10 + ((255-10)%100)*49.97936
132.4494
// The transformed std devaition should roughly be
q)2.45*28.92852
70.87487
To answer the comment, this could be applied over a large number of columns of a table in the following manner:
q)n:10000
q)tab:([]sym:n?`3;col1:n?100.0;col2:n?100.0;col3:n?1000.0)
q)multiColApply:{[tab;scaler;colList]flip ft,((),colList)!((),scaler each (ft:flip tab)[colList])}
q)multiColApply[tab;minmaxCustom[10;20];`col1`col2]
sym col1 col2 col3
------------------------------
cag 13.78461 10.60606 392.7524
goo 15.26201 16.76768 517.0911
eoh 14.05111 19.59596 515.9796
kbc 13.37695 19.49495 406.6642
mdc 10.65973 12.52525 178.0839
odn 16.24697 17.37374 301.7723
ioj 15.08372 15.05051 785.033
mbc 16.7268 20 534.7096
bhj 12.95134 18.38384 711.1716
gnf 19.36005 15.35354 411.597
gnd 13.21948 18.08081 493.1835
khi 12.11997 17.27273 578.5203

formatting an AVC/H.264 mdat

Can anyone tell me, or point me to a section of the specification(s) that clearly demonstrates, how a series of NALUs from an elementary stream should be written into an ISO BMFF mdat?
I can see looking at samples and other code that I should have something like: AUD, SPS, PPS, SEI, VideoSlice, AUD etc etc
Things that are not entirely clear to me:
If the SPS and PPS are also stored out of band in the AVCC are they required in the mdat?
If they are required in the mdat when/where should they be written? e.g. just prior to an IDR?
What is the requirement for AUDs?
If I am generating sample sizes for the trun, what is the calculation for this? In the example I am working to recreate, the first sample in the trun has a size of 22817; however, if I look at the first sample in the mdat, the NALU size prefix is 22678. The value in the trun appears to be the size of all the NALUs + size prefixes up to and including the first sample (see my example below).
1 0016E405 (1500165) - box.Size
2 6D646174 (mdat) - box.Type
3 00000002 (2) NAL Size
4 0910 - (2) AUD # 5187
5 00000025 (37)
6 27640020 AC248C0F 0117EF01 10000003 00100000 078E2800 0F424001 E84EF7B8 0F844229 C0 (37) # 5193 SPS
7 00000004 (4)
8 28DEBCB0 (4) PPS
9 0000000B (11)
10 06000781 36288029 67C080 (? SEI ?)
11 0000000C (12)
12 06010700 00F00000 03020480 (? SEI is type 6)
13 0000002D (45) # 5269
14 060429B5 00314741 393403CA FFFC8080 FA0000FA 0000FA00 00FA0000 FA0000FA 0000FA00 00FA0000 FA0000FF 80 (SEI ??)
15 00005896 (22678)
16 25888010 02047843 00580010 08410410 0002….. 22678 bytes video # 5322
If the SPS and PPS are also stored out of band in the AVCC are they required in the mdat?
No
If they are required in the mdat when/where should they be written? e.g. just prior to an IDR?
Yes, if you choose to include them, but there is no reason to
What is the requirement for AUDs?
They are optional
If I am generating sample sizes for the trun, what is the calculation for this?
The number of bytes in the access unit (AU, aka frame), which may contain more than one NALU. SPS/PPS/SEI/AUD all count toward the AU size. The 4-byte size prefixed to each NALU is also counted in the AU size recorded in the trun.
bytes
4 | 3 00000002 (2) NAL Size
2 | 4 0910 - (2) AUD # 5187
4 | 5 00000025 (37)
37 | 6 27640020 AC248C0F 0117EF01 10000003 00100000 078E2800 0F424001 E84EF7B8 0F844229 C0 (37) # 5193 SPS
4 | 7 00000004 (4)
4 | 8 28DEBCB0 (4) PPS
4 | 9 0000000B (11)
11 | 10 06000781 36288029 67C080 (? SEI ?)
4 | 11 0000000C (12)
12 | 12 06010700 00F00000 03020480 (? SEI is type 6)
4 | 13 0000002D (45) # 5269
45 | 14 060429B5 00314741 393403CA FFFC8080 FA0000FA 0000FA00 00FA0000 FA0000FA 0000FA00 00FA0000 FA0000FF 80 (SEI ??)
4 | 15 00005896 (22678)
22678 | 16 25888010 02047843 00580010 08410410 0002….. 22678 bytes video # 5322
------|
22817 | <- bytes total
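In other words, the trun sample size can be reproduced by summing each NALU's 4-byte length prefix plus its payload length. A quick check using the sizes from the dump above:

```python
# NALU payload lengths from the mdat dump: AUD, SPS, PPS, SEI x3, IDR slice
nalu_lengths = [2, 37, 4, 11, 12, 45, 22678]

# Each NALU contributes its 4-byte length prefix plus its payload
sample_size = sum(4 + n for n in nalu_lengths)
print(sample_size)  # 22817, the value recorded in the trun
```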

how to get value at an address with radare

If I'm using radare2 and I run, let's say, dr while debugging, it'll print pointers for some of the registers.
Let's say esp resolves to 0x04084308 or something similar.
If I want to get the value that esp is pointing to, how could I do that?
Thanks in advance.
print rsp register value
[0x560207c7275a]> dr?rsp
0x7fffa5e429c8
print 4 bytes hex at 0x7fffa5e429c8
[0x560207c7275a]> px 4 @ rsp
- offset - 0 1 2 3 4 5 6 7 8 9 A B C D E F 0123456789ABCDEF
0x7fffa5e429c8 9b00 dae7 ....
print 8 bytes hex at 0x7fffa5e429c8 (the command px == x)
[0x560207c7275a]> x 8 @ rsp
- offset - 0 1 2 3 4 5 6 7 8 9 A B C D E F 0123456789ABCDEF
0x7fffa5e429c8 9b00 dae7 347f 0000 ....4...
[0x560207c7275a]>
This can be solved with drr, which will show more information about the registers, such as where they point :).
Otherwise, if you want to get a value in the programs memory, you can s 0xaddr and then V to show information near there.