Reading multi-channel serial USB in Pure Data

I have multichannel serial data from an 8-channel ADC chip that I'm connecting to my computer via a serial-to-USB cable. I want to use these separate channels in Pure Data, but the Pd [comport] object doesn't read multi-channel serial data. I've scoured the Pd discussion fora, but there's no mention of how to do this.
Any thoughts on how I can go about it?

by definition, a serial connection is only single-channel. if you have multiple (synchronized) channels, it's called parallel.
so your problem is basically one of the two following:
parallel serial streams
if you are transmitting the 8 ADC channels via different serial connections, your (special) cable should register 8 different devices (e.g. /dev/ttyUSB5, /dev/ttyUSB6, ..., /dev/ttyUSB12).
in this case, simply use multiple [comport] objects (one for each serial device you want to interface with)
single multiplex stream
in the (more likely) case that your ADC transmits its 8 channels over a single serial connection by multiplexing the data, you will have to demultiplex the serial stream yourself. how to do this depends very much on the actual format of the data.
assuming (for simplicity) that your ADCs are only 8bit and you have only 4 channels, you might receive a serial stream like ... A1 B1 C1 D1 A2 B2 C2 D2 A3 B3 ... (with A, B, ... being the samples for the 4 channels, and 1, 2, ... being the sample frames). you could then demultiplex the signal into 4 streams with something like:
 |
[t b f]
 |     \
[i ]--[+ 1]--[% 4]
 |
[pack 0 0]
 |
[route 0 1 2 3]
 |    |    |    |

(connections: the bang from [t b f] bumps the [i ]--[+ 1]--[% 4] counter, whose [% 4] output is fed back into the right inlet of [i ] so the count wraps around 0..3; the float from the right outlet of [t b f] goes to the right inlet of [pack 0 0], and the counter value from [i ] goes to its left inlet)
in practice your protocol will likely look slightly different. e.g. there ought to be a way to mark frame boundaries: just by looking at the numbers there is no way to tell whether you are actually seeing A1 B1 C1 D1 A2 B2 or B1 C1 D1 A2 B2 C2, so it's unclear whether the 1st sample belongs to channel A or channel B.
thus you really must get your hands on the protocol definition and interpret the data you get from [comport] accordingly.
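
for illustration, here is a minimal Python sketch of the same demultiplexing done outside of Pd (a sketch of mine, not part of the original answer). it assumes a hypothetical framing where every 8-channel frame is prefixed by a 0xFF sync byte that never occurs in the sample data; the port name and baud rate are placeholders, and the pyserial library is used:

import serial  # pyserial

SYNC = 0xFF        # assumed frame marker; a real protocol may differ
N_CHANNELS = 8

ser = serial.Serial('/dev/ttyUSB0', 9600)  # placeholder port and baud rate

def read_frame():
    # resynchronize: skip bytes until the sync marker is seen
    while ser.read(1)[0] != SYNC:
        pass
    # then read one 8-bit sample per channel
    return list(ser.read(N_CHANNELS))

while True:
    for channel, value in enumerate(read_frame()):
        print(channel, value)  # hand each channel's sample to its consumer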


Do proposal numbers need to be unique in Paxos?

What can happen if two proposals are issued with the same ProposalId? It shouldn't create any problem if they have the same value. But what about different values? I can work out a scenario where it risks liveness at the time of failure, but would it create any problem for the protocol's safety?
It's a nice idea because it would ease an annoying design requirement for a practical system: designing a scheme to ensure proposers have different, yet increasing round numbers.
It doesn't work, unfortunately.
Let's define PaxosPrime to be a variation of Paxos where different proposers are allowed to have the same round numbers. We show through contradiction that PaxosPrime is not a consensus algorithm.
Proof sketch: Assume PaxosPrime is a consensus algorithm. Consider a state where each acceptor has accepted a different value, all with the same round number (w.l.o.g. we'll pick 3). Each has also promised round 3. We then have a pair of proposers interact with the system.
 A1    | A2    | A3
 v   p | v   p | v   p
-------+-------+-------
 x#3 3 | y#3 3 | z#3 3
P1 prepares for round 4 (i.e. Phase 1.A) and receives all three values x#3, y#3, and z#3. There is no provision in Paxos for tie-breaking, so we'll have it choose x.
P2 also prepares for round 4 and receives y#3 and z#3 from A2 and A3, respectively. We'll have it choose y.
P1 sends accepts for x#4 (i.e. Phase 2.A) and the acceptors all process it. At this point all the acceptors hold x#4 with promises for 4. The system has consensus on the value x.
P2 also sends accepts, for y#4, and acceptors A2 and A3 successfully process them. A majority of the acceptors now hold y#4; the system has consensus on the value y.
Here is the trace from the acceptors' point of view:
 A1    | A2    | A3
 v   p | v   p | v   p
-------+-------+-------
 x#3 3 | y#3 3 | z#3 3   # Start
 x#3 4 | y#3 4 | z#3 4   # Process P1's prepare command
 x#4 4 | x#4 4 | x#4 4   # Process P1's accept command
 x#4 4 | y#4 4 | y#4 4   # Process P2's accept command
The system first had consensus on the value x, then consensus on the value y. This contradicts the definition of consensus; thus PaxosPrime is not a consensus algorithm.
∎
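To make the interleaving concrete, here is a minimal Python sketch (my own illustration, not part of the proof) that replays the trace above with PaxosPrime-style acceptors; the class name and the n >= promised test are assumptions of the sketch:

class Acceptor:
    def __init__(self, value, rnd):
        self.value, self.accepted, self.promised = value, rnd, rnd

    def prepare(self, n):
        # PaxosPrime: a promise is granted when n is at least as high as
        # any promise so far, so two proposers may share the same number
        if n >= self.promised:
            self.promised = n
            return (self.value, self.accepted)  # report last accepted value
        return None

    def accept(self, n, v):
        if n >= self.promised:
            self.promised, self.accepted, self.value = n, n, v
            return True
        return False

A1, A2, A3 = Acceptor('x', 3), Acceptor('y', 3), Acceptor('z', 3)

for a in (A1, A2, A3):  # P1 prepares round 4 and (no tie-break) picks x
    a.prepare(4)
for a in (A2, A3):      # P2 also prepares round 4 at A2, A3 and picks y
    a.prepare(4)

for a in (A1, A2, A3):  # P1's accepts all succeed: consensus on x
    a.accept(4, 'x')
print([a.value for a in (A1, A2, A3)])  # ['x', 'x', 'x']

for a in (A2, A3):      # P2's accepts succeed at a majority: now y
    a.accept(4, 'y')
print([a.value for a in (A1, A2, A3)])  # ['x', 'y', 'y']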
The Synod algorithm technically does not require proposal numbers to be unique among all processes, and the monotonically-increasing requirement is only an optimization: we could technically choose a random proposal number and still have a correct algorithm.
The algorithm starts off with a proposer sending prepare(proposal number) to all acceptors. The proposer may then only use that proposal number in a proposal if a majority of acceptors haven't yet seen a proposal number as high as or higher than it and successfully send back a promise, which can happen at most once per proposal number. After this stage, if the proposer is able to continue, then that proposal number is effectively unique. So the algorithm already enforces the uniqueness of the proposal numbers that are actually used in a proposal.
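As a minimal sketch of that enforcement (my own illustration; the names are assumptions): with a strict comparison in the promise rule, an acceptor can hand out a promise for a given number at most once, so at most one proposer can gather a majority of promises for it.

class StandardAcceptor:
    def __init__(self):
        self.promised = -1

    def prepare(self, n):
        if n > self.promised:  # strict: a repeated number is refused
            self.promised = n
            return True        # promise granted
        return False           # proposer must retry with a higher number

a = StandardAcceptor()
print(a.prepare(7))  # True  -- the first proposer to use 7 gets the promise
print(a.prepare(7))  # False -- any later prepare with 7 is refused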

Calculating the Q value in DQN with experience replay

Consider the Deep Q-Learning algorithm:
1 initialize replay memory D
2 initialize action-value function Q with random weights
3 observe initial state s
4 repeat
5 select an action a
6 with probability ε select a random action
7 otherwise select a = argmax_a' Q(s, a')
8 carry out action a
9 observe reward r and new state s’
10 store experience <s, a, r, s’> in replay memory D
11
12 sample random transitions <ss, aa, rr, ss’> from replay memory D
13 calculate target for each minibatch transition
14 if ss’ is terminal state then tt = rr
15 otherwise tt = rr + γ max_aa' Q(ss', aa')
16 train the Q network using (tt - Q(ss, aa))^2 as loss
17
18 s = s'
19 until terminated
In step 16 the value of Q(ss, aa) is used to calculate the loss. When is this Q value calculated: at the time the action was taken, or during the training itself?
Since the replay memory only stores <s, a, r, s'> and not the Q-value, is it safe to assume the Q-value will be calculated at training time?
Yes. In step 16, when training the network, you use the loss function (tt - Q(ss, aa))^2 because you want to update the network weights to approximate the most recent Q-values, computed as rr + γ max_aa' Q(ss', aa') and used as the target. Q(ss, aa) is the current estimate, which is computed at training time with the network's current weights.
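To make this concrete, here is a minimal PyTorch sketch of step 16 (my own illustration, not from the original post; the layer sizes, optimizer, and hyperparameters are assumptions). Both the target tt and the estimate Q(ss, aa) are computed at training time from the sampled batch:

import torch
import torch.nn as nn

q_net = nn.Sequential(nn.Linear(4, 64), nn.ReLU(), nn.Linear(64, 2))  # assumed sizes
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)
gamma = 0.99

def train_step(ss, aa, rr, ss_next, done):
    # target tt = rr + gamma * max_a' Q(ss', a'), computed now with the
    # current weights; the replay memory stored only (s, a, r, s')
    with torch.no_grad():
        tt = rr + gamma * q_net(ss_next).max(dim=1).values * (1.0 - done)
    # current estimate Q(ss, aa), also computed now at training time
    q_sa = q_net(ss).gather(1, aa.unsqueeze(1)).squeeze(1)
    loss = nn.functional.mse_loss(q_sa, tt)  # (tt - Q(ss, aa))^2
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()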
Here you can find a Jupyter Notebook with a simple Deep Q-learning implementation that may be helpful.

Using constraints across multiple edges in OrientDB

I have a use case where I have 3 nodes connected by 2 edges.
In the example below, how can I constrain the relationships between the nodes so that I can only traverse the graph from A1 to C1 and from A2 to C2, but not from A1 to C2 or from A2 to C1?
A1 <--edge--> B1 <--edge--> C1
A2 <--edge--> B1 <--edge--> C2
Example use case:
Character(A) PlayedBy(edge) Actor(B) In(edge) Movie(C)
where multiple characters can be played by a single actor across multiple movies, but not every character appeared in every movie linked to the actor. It's a many-to-one-to-many relationship where A is also linked to C.

Using variable length data inputs with EM algorithm clustering

We have a set of sequences of taxi positions. We want to cluster the data by considering the sequential patterns in it.
For example:
Let T1, T2, T3, T4 be the trips and a, b, c, d, e be the set of places.
The data we have is like,
T1 b c b a d
T2 a
T3 a b a b a b c e d
T4 b c d c b d c a
But the problem is that the lengths of the sequences are not fixed. How can we cluster this type of data using EM? Since it does not accept variable-length data, is there a way we can customize it?
EM is a general principle. You can use it with very different models.
Probably the most popular model for EM is Gaussian Mixture Modeling, GMM.
Naturally, if you use covariances, GMM requires a fixed dimensionality.
But if you use other models, there is no reason it cannot work with variable-length data. For example, there are EM variants that process text data, and texts usually do have different lengths.
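
As a concrete illustration of that last point, here is a minimal NumPy sketch (my own, not from the original answer) of EM for a mixture of categorical distributions: each trip is reduced to a fixed-size vector of place counts, so trips of different lengths pose no problem. The trip data, the number of clusters, and the smoothing constant are assumptions:

import numpy as np

# Trips of different lengths; places a..e are mapped to symbols 0..4.
trips = [[1, 2, 1, 0, 3], [0], [0, 1, 0, 1, 0, 1, 2, 4, 3], [1, 2, 3, 2, 1, 3, 2, 0]]
V, K = 5, 2  # vocabulary size, number of clusters

# One fixed-size row of symbol counts per trip, whatever the trip length.
X = np.zeros((len(trips), V))
for i, t in enumerate(trips):
    np.add.at(X[i], t, 1)

rng = np.random.default_rng(0)
pi = np.full(K, 1.0 / K)                   # mixture weights
theta = rng.dirichlet(np.ones(V), size=K)  # per-cluster place distributions

for _ in range(50):
    # E-step: responsibilities from each trip's log-likelihood per cluster
    log_r = np.log(pi) + X @ np.log(theta).T
    log_r -= log_r.max(axis=1, keepdims=True)
    r = np.exp(log_r)
    r /= r.sum(axis=1, keepdims=True)
    # M-step: re-estimate weights and categorical parameters
    pi = r.mean(axis=0)
    theta = (r.T @ X) + 1e-9  # small smoothing to avoid log(0)
    theta /= theta.sum(axis=1, keepdims=True)

print(r.argmax(axis=1))  # hard cluster assignment per trip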

Training LIBSVM with multivariate data in MATLAB

How does LIBSVM perform multivariate regression? That is my general question.
In detail, I have some data for a certain number of links (for example, 3 links). Each link has 3 explanatory variables which, when used in a model, give output Y. I have data collected on these links at some interval.
LinkId | var1 | var2 | var3 | var4 (OUTPUT)
     1 |   10 | 12.1 |  2.2 | 3
     2 |   11 | 11.2 |  2.3 | 3.1
     3 |   12 | 12.4 |  4.1 | 1
     1 |   13 | 11.8 |  2.2 | 4
     2 |   14 | 12.7 |  2.3 | 2
     3 |   15 | 10.7 |  4.1 | 6
     1 |   16 |  8.6 |  2.2 | 6.6
     2 |   17 | 14.2 |  2.3 | 4
     3 |   18 |  9.8 |  4.1 | 5
I need to predict the output for
(2, 19, 10.2, 2.3).
How can I do that using the above data for training in MATLAB with LIBSVM? Can I feed the whole data to svmtrain to create a single model, or do I need to train each link separately and use the models created for prediction? Does it make any difference?
NOTE: Notice that each link with the same ID has the same var3 value.
This is not really a MATLAB or LIBSVM question, but rather a generic SVM-related one.
How does LIBSVM perform multivariate regression? That is my general question.
LibSVM is just a library which, in particular, implements the Support Vector Regression (SVR) model for regression tasks. In short, in the linear case SVR tries to find a hyperplane such that your data points are placed within some margin around it (which is quite a dual approach to the classical SVM, which tries to separate data with as big a margin as possible).
In the non-linear case the kernel trick is used (in the same fashion as in SVM), so it is still looking for a hyperplane, but in a feature space induced by the particular kernel, which results in non-linear regression in the input space.
A quite nice introduction to SVR can be found here:
http://alex.smola.org/papers/2003/SmoSch03b.pdf
How can I do that using the above data for training in MATLAB with LIBSVM? Can I feed the whole data to svmtrain to create a single model, or do I need to train each link separately and use the models created for prediction? Does it make any difference? NOTE: Notice that each link with the same ID has the same var3 value.
You could train an SVR (as it is a regression problem) with the whole data, but:
it seems that var3 and LinkId encode the same variable (1 -> 2.2, 2 -> 2.3, 3 -> 4.1); if this is the case, you should remove the LinkId column,
are the values of var1 unique ascending integers? If so, they are probably also a useless feature (they do not seem to carry any information; they look like id numbers),
you should preprocess your data before applying SVM so that, e.g., each column contains values from the [0,1] interval; otherwise some features may become more important than others just because of their scale (see the sketch below).
Now, if you were to create a separate model for each link and follow the clues above, you would end up with 1 input variable (var2) and 1 output variable (var4), so I would not recommend such a step. In general it seems that you have a very limited feature set; it would be valuable to gather more informative features.
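
The question is about MATLAB's LIBSVM interface, but to make the preprocessing-plus-regression pipeline concrete, here is an analogous sketch in Python using scikit-learn's SVR instead of LIBSVM's svmtrain (a deliberate substitution; the kernel and parameters are assumptions):

import numpy as np
from sklearn.preprocessing import MinMaxScaler
from sklearn.svm import SVR

# columns: LinkId, var1, var2, var3, var4 (the table from the question)
data = np.array([
    [1, 10, 12.1, 2.2, 3.0],
    [2, 11, 11.2, 2.3, 3.1],
    [3, 12, 12.4, 4.1, 1.0],
    [1, 13, 11.8, 2.2, 4.0],
    [2, 14, 12.7, 2.3, 2.0],
    [3, 15, 10.7, 4.1, 6.0],
    [1, 16,  8.6, 2.2, 6.6],
    [2, 17, 14.2, 2.3, 4.0],
    [3, 18,  9.8, 4.1, 5.0],
])

# drop LinkId (redundant with var3) and var1 (an id-like ascending column),
# as suggested above, leaving var2 and var3 as features
X, y = data[:, [2, 3]], data[:, 4]

# scale each feature to [0, 1] so no column dominates by scale alone
scaler = MinMaxScaler()
X = scaler.fit_transform(X)

model = SVR(kernel='rbf', C=1.0, epsilon=0.1).fit(X, y)

# predict the query (LinkId=2, var1=19, var2=10.2, var3=2.3) -> use (10.2, 2.3)
query = scaler.transform([[10.2, 2.3]])
print(model.predict(query))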