I'm having trouble encoding classical data into a quantum state in the Qiskit IBM Quantum Lab. Let me explain the problem clearly: for a university assignment I have to encode two 4-dimensional vectors with amplitude encoding (i.e. using a QRAM-style circuit) and compare them with the fidelity distance. I wrote something like this:
# qram
import numpy as np
from qiskit import QuantumCircuit, QuantumRegister, ClassicalRegister

def encodeVector(circuit, data, i, controls, rotationQubit, ancillaQubits):
    # mcry(angle, controls, target, ancilla)
    # |00>
    circuit.x(i)
    circuit.mcry(np.arcsin(data[0]), controls, rotationQubit, ancillaQubits)
    circuit.x(i)
    circuit.barrier()
    # |01>
    circuit.x(i[1])
    circuit.mcry(np.arcsin(data[1]), controls, rotationQubit, ancillaQubits)
    circuit.x(i[1])
    circuit.barrier()
    # |10>
    circuit.x(i[0])
    circuit.mcry(np.arcsin(data[2]), controls, rotationQubit, ancillaQubits)
    circuit.x(i[0])
    circuit.barrier()
    # |11>
    circuit.mcry(np.arcsin(data[3]), controls, rotationQubit, ancillaQubits)
I know I need log2(4) = 2 qubits to encode the 4 components of one vector. So the circuit is something like this: 2 qubits |i> for the first vector, 2 qubits |j> for the second one, and finally 2 qubits |r> for the rotations. The theta angle can be found by applying arcsin to the components, I think. First of all I need to create the states 00, 01, 10, 11 with a Hadamard gate on the qubits of i and j (superposition). Here is the code:
psi_norm = [1, 0, 0, 0]
phi_norm = [0, 0, 0, 1]

prova = QuantumRegister(1, "p")   # ancilla for the fidelity test
i = QuantumRegister(2, "i")       # first vector
j = QuantumRegister(2, "j")       # second vector
r = QuantumRegister(2, "r")       # rotation qubits
b = ClassicalRegister(1, "b")     # for measurement
circuit = QuantumCircuit(prova, i, j, r, b)

circuit.h(i)
circuit.h(j)
circuit.barrier()
encodeVector(circuit, psi_norm, i, i[:], r[0], None)   # encode first vector
circuit.barrier()
encodeVector(circuit, phi_norm, j, j[:], r[1], None)   # encode second one
Now I compute the fidelity distance with a swap test, which is the following circuit:
#fidelity
circuit.h(prova[0])
circuit.cswap(prova[0],i[0],j[0])
circuit.cswap(prova[0],i[1],j[1])
circuit.cswap(prova[0],r[0],r[1])
circuit.h(prova[0])
circuit.measure(prova[0],b[0])
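If I understand the swap test correctly, the probability of measuring 0 on the test qubit should be P(0) = 1/2 + 1/2 * |<psi|phi>|^2 (and P(1) = 1 - P(0)), so the overlap of the two encoded states is what ends up in the histogram.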
The vectors are orthogonal, so both measurement probabilities should be equal to 1/2, but in my case they come out wrong and I don't know where I am making a mistake. Here is the histogram:
https://i.stack.imgur.com/eqr47.png
Background: I am working on a problem similar to the nonlinear logistic regression described in [1] (my problem is more complicated, but [1] is enough for the next sections of this post). Comparing my results with those obtained in parallel with an R package, I got similar results for the coefficients, but (very approximately) an opposite logLikelihood.
Hypothesis: The logLikelihood given by fitnlm in MATLAB is in fact the negative logLikelihood. (Note that this would consequently impair the BIC and AIC computations by MATLAB.)
Reasoning: in [1], the same problem is solved through two different approaches. ML approach: by defining the negative logLikelihood and optimizing it with fminsearch. GLS approach: by using fitnlm.
The negative logLikelihood after the ML approach is: 380
The negative logLikelihood after the GLS approach is: -406
I imagine the second one should be at least multiplied by (-1)?
Questions: Did I miss something? Is multiplying by (-1) enough, or would this simple correction not be sufficient?
Self-contained code:
%copy-pasting code from [1]
myf = @(beta,x) beta(1)*x./(beta(2) + x);
mymodelfun = @(beta,x) 1./(1 + exp(-myf(beta,x)));
rng(300,'twister');
x = linspace(-1,1,200)';
beta = [10;2];
beta0=[3;3];
mu = mymodelfun(beta,x);
n = 50;
z = binornd(n,mu);
y = z./n;
%ML Approach
mynegloglik = @(beta) -sum(log(binopdf(z,n,mymodelfun(beta,x))));
opts = optimset('fminsearch');
opts.MaxFunEvals = Inf;
opts.MaxIter = 10000;
betaHatML = fminsearch(mynegloglik,beta0,opts)
neglogLH_MLApproach = mynegloglik(betaHatML);
%GLS Approach
wfun = @(xx) n./(xx.*(1-xx));
nlm = fitnlm(x,y,mymodelfun,beta0,'Weights',wfun)
neglogLH_GLSApproach = - nlm.LogLikelihood;
Source:
[1] https://uk.mathworks.com/help/stats/examples/nonlinear-logistic-regression.html
This answer (now) only details which code is used. Please see Tom Lane's answer below for a substantive answer.
Basically, fitnlm.m is a call to NonLinearModel.fit.
When opening NonLinearModel.m, one gets in line 1209:
model.LogLikelihood = getlogLikelihood(model);
getlogLikelihood is itself described between lines 1234-1251.
For instance:
function L = getlogLikelihood(model)
(...)
L = -(model.DFE + model.NumObservations*log(2*pi) + (...) )/2;
(...)
Please also note that this notably impacts ModelCriterion.AIC and ModelCriterion.BIC, as they are computed using model.LogLikelihood ("thinking" it is the logLikelihood).
To get the corresponding formula for BIC/AIC/..., type:
edit classreg.regr.modelutils.modelcriterion
This is Tom from MathWorks. Take another look at the formula quoted:
L = -(model.DFE + model.NumObservations*log(2*pi) + (...) )/2;
Remember the normal distribution has a factor (1/sqrt(2*pi)), so taking logs of that gives us -log(2*pi)/2. So the minus sign comes from that and it is part of the log likelihood. The property value is not the negative log likelihood.
One reason for the difference in the two log likelihood values is that the "ML approach" value is computed from the discrete probabilities of the binomial distribution. Those are all between 0 and 1, and they add up to 1. The "GLS approach" value is computed from the probability density of the continuous normal distribution. In this example, the standard deviation of the residuals is about 0.0462, which leads to density values that are much higher than 1 at the peak. So the two things are not really comparable. You would need to convert the normal densities to probabilities on the same discrete intervals that correspond to the individual outcomes of the binomial distribution.
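To see the scale difference concretely, here is a small Python/SciPy sketch (not from the original posts; the numbers are made up apart from the 0.0462 residual standard deviation mentioned above):

import numpy as np
from scipy.stats import binom, norm

# made-up numbers, just to illustrate the scale difference described above
n = 50                     # binomial trials per observation
p = 0.7                    # a success probability at one design point
sigma = 0.0462             # residual standard deviation quoted above

z = np.arange(0, n + 1)
binom_probs = binom.pmf(z, n, p)                    # discrete probabilities, all <= 1
print(binom_probs.max())                            # about 0.12 -> its log is negative

peak_density = norm.pdf(0.0, loc=0.0, scale=sigma)  # continuous density at the peak
print(peak_density)                                 # about 8.6 -> its log is positive

# a sum of log densities can therefore be positive while a sum of log
# probabilities is always negative; to compare them, the densities would have
# to be converted to probabilities over the discrete outcome widths (1/n)
print(np.log(peak_density / n))                     # density * interval width 1/n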
I am very new to MATLAB, hidden Markov models and machine learning, and am trying to classify a given sequence of signals. Please let me know if the approach I have followed is correct:
create an N-by-N transition matrix and fill it with random values which sum to 1 for each row (N will be the number of states)
create an N-by-M emission/observation matrix and fill it with random values which sum to 1 for each row
convert the different instances of the sequence (i.e. each instance will be saying the word 'hello') into one long stream and feed each stream to the hmmtrain function such that:
[new_transition_matrix, new_emission_matrix] = hmmtrain(sequence, old_transition_matrix, old_emission_matrix)
give the final transition and emission matrices to hmmdecode, along with an unknown sequence, to get the probability, i.e.
[posterior_states, logarithmic_probability] = hmmdecode(sequence, final_transition_matrix, final_emission_matrix)
1. and 2. are correct. You have to be careful that your initial transition and emission matrices are not completely uniform; they should be slightly randomized for the training to work.
3. I would just feed in the 'Hello' sequences separately rather than concatenating them to form a single long sequence.
Let's say this is the sequence for Hello: [1,0,1,1,0,0]. If you form one long sequence from 3 'Hello' sequences, you would get:
data = [1,0,1,1,0,0,1,0,1,1,0,0,1,0,1,1,0,0]
This is not ideal, instead you should feed the sequences in separately like:
data = [1,0,1,1,0,0; 1,0,1,1,0,0; 1,0,1,1,0,0].
Since you are using MATLAB, I would recommend using the HMM toolbox by Murphy. It has a demo on how you can train an HMM with multiple observation sequences:
M = 3;
N = 2;

% "true" parameters
prior0 = normalise(rand(N,1));
transmat0 = mk_stochastic(rand(N,N));
obsmat0 = mk_stochastic(rand(N,M));

% training data: a 5*6 matrix, e.g. 5 different 'Hello' sequences of length 6
number_of_seq = 5;
seq_len = 6;
data = dhmm_sample(prior0, transmat0, obsmat0, number_of_seq, seq_len);

% initial guess of parameters
prior1 = normalise(rand(N,1));
transmat1 = mk_stochastic(rand(N,N));
obsmat1 = mk_stochastic(rand(N,M));

% improve guess of parameters using EM
[LL, prior2, transmat2, obsmat2] = dhmm_em(data, prior1, transmat1, obsmat1, 'max_iter', 5);
LL
4. What you say is correct; below is how you calculate the log probability in the HMM toolbox:
% use model to compute log[P(Obs|model)]
loglik = dhmm_logprob(data, prior2, transmat2, obsmat2)
Finally: Have a look at this paper by Rabiner on how the mathematics work if anything is unclear.
Hope this helps.
The final goal I am trying to achieve is the generation of a ten-minute time series: to achieve this I have to perform an FFT operation, and that is the point I have been stumbling on.
Generally, the target time series is given as the sum of two terms: a steady component U(t) and a fluctuating component u'(t). That is
u(t) = U(t) + u'(t);
So generally, my code follows this procedure:
1) Given data
time = 600;             % [s]
Nfft = 4096;
L = 340.2;              % [m]
U = 10;                 % [m/s]
df = 1/time;            % = 0.00167 Hz
fn = Nfft/(2*time);     % = 3.4133 Hz
This means that my frequency array should be laid out as follows:
f = (-fn+df):df:fn;
But, instead of using the whole f array, I am only making use of the positive half:
fpos = df:df:fn;        % 0.00167 : 3.4133 Hz
2) Spectrum Definition
I define a certain spectrum shape, applying the following relationship
Su = (6*L*U)./((1 + 6.*fpos.*(L/U)).^(5/3));
3) Random phase generation
I then have to generate a set of complex samples with a given distribution: in my case, the random phase follows a standard Gaussian distribution (mu = 0, sigma = 1).
In MATLAB I call
nn = complex(normrnd(0,1,Nfft/2),normrnd(0,1,Nfft/2));
4) Apply random phase
To apply the random phase, I just do this
Hu = Su*nn;
At this point my pains start!
So far I have only generated Nfft/2 = 2048 complex samples, accounting for the fpos content. Therefore, the content accounting for the negative half of f is still missing. To overcome this issue, I was thinking of merging the real and imaginary parts of Hu in order to get a signal Huu with Nfft = 4096 samples and all real values.
However, with this merging process the 0-th frequency order would not be represented, since the imaginary part of Hu is defined only for fpos.
Thus, how do I account for the 0-th order while keeping a procedure like the one I have proposed so far?
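For reference, this is roughly the construction I have in mind, written as a NumPy sketch rather than MATLAB (variable names are mine, and I use numpy.fft.irfft, which rebuilds the negative-frequency half by Hermitian symmetry; the 0-th bin is exactly the part I am unsure how to fill):

import numpy as np

Nfft = 4096
time = 600.0
df = 1.0 / time
fpos = np.arange(1, Nfft // 2 + 1) * df              # positive frequencies df ... fn

L = 340.2
U = 10.0
Su = (6 * L * U) / (1 + 6 * fpos * (L / U)) ** (5.0 / 3.0)   # spectrum shape from step 2

rng = np.random.default_rng()
nn = rng.standard_normal(Nfft // 2) + 1j * rng.standard_normal(Nfft // 2)

Hu = np.zeros(Nfft // 2 + 1, dtype=complex)          # bins 0 .. Nfft/2
Hu[0] = 0.0            # 0-th (mean) bin: must be real -- this is the open question
Hu[1:] = Su * nn       # apply the random phase, as in step 4
Hu[-1] = Hu[-1].real   # Nyquist bin must also be real for a real signal

u_fluct = np.fft.irfft(Hu, n=Nfft)                   # real-valued series, Nfft samples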
I have the transition matrix, emission matrix and starting state for a hidden Markov model. I want to generate a sequence of observations (emissions). However, I'm stuck on one thing.
I understand how to choose among two states (or emissions). If Event A occurs with probability x, then Event B (or, really, not-A) occurs with probability 1-x. To generate a sequence of A's and B's, with a random number rand, you do the following:
for iteration in iterations:
observation[iteration] <- A if rand < x else B
I don't understand how to extend this to more than two variables. For example, if three events occur such that Event A occurs with probability x1, Event B with x2 and Event C with 1-(x1+x2), then how do I extend the above pseudocode?
I didn't find the answer Googling. In fact I get the impression that I'm missing a basic fact that many of the notes online assume. :-/
One way would be
x<-rand()
if x < x1 observation is A
else if x < x1 + x2 observation is B
else observation is C
Of course if you have a large number of alternatives it might be better to build a cumulative probability table (holding x1, x1+x2, x1+x2+x3 ...) and then do a binary search in that table given the random number. If you are willing to do more preprocessing, there is an even more efficient way, see here for example
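As an illustration, here is a small Python sketch of the cumulative-table approach (the probabilities and labels are made up):

import numpy as np

probs = [0.2, 0.5, 0.3]          # x1, x2, x3 (made-up values), must sum to 1
labels = ['A', 'B', 'C']
cum = np.cumsum(probs)           # cumulative table: [0.2, 0.7, 1.0]

def draw():
    x = np.random.rand()
    idx = np.searchsorted(cum, x)    # binary search: first index with cum[idx] >= x
    return labels[idx]

print([draw() for _ in range(10)])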
The two-value case is a binomial distribution, and you generate random draws from it (a series of coin flips, essentially).
For more than 2 variables, you need to draw samples from a multinomial distribution, which is simply a generalisation of the binomial distribution for n > 2.
Regardless of what language you use, there should most likely be built-in functions to accomplish this task. Below is some code in Python, which simulates a set of observations and states given your HMM model object:
import numpy as np

def random_MN_draw(n, probs):
    """ get a random draw from the multinomial distribution whose probabilities are given by 'probs' """
    mn_draw = np.random.multinomial(n, probs)  # do 1 multinomial experiment with the given probs; with probs = [0.5, 0.5] this is a coin flip
    return np.where(mn_draw == 1)[0][0]        # get the index of the state drawn, e.g. 0, 1, etc.

def simulate(self, nSteps):
    """ given an HMM = (A, B1, B2, pi), simulate state and observation sequences """
    lenB = len(self.emission)
    observations = np.zeros((lenB, nSteps), dtype=int)  # array of zeros
    states = np.zeros(nSteps, dtype=int)
    states[0] = self.random_MN_draw(1, self.priors)     # appoint the first state from the prior dist
    for i in range(0, lenB):                             # initialise observations[i,0] for all observed variables
        observations[i, 0] = self.random_MN_draw(1, self.emission[i][states[0], :])  # ith variable array, states[0]th row
    for t in range(1, nSteps):                           # loop through t
        states[t] = self.random_MN_draw(1, self.transition[states[t-1], :])  # given prev state (t-1), pick which row of the A matrix to use
        for i in range(0, lenB):                         # loop through the observed variables for each t
            observations[i, t] = self.random_MN_draw(1, self.emission[i][states[t], :])  # given current state t, pick which row of the B matrix to use
    return observations, states
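The model object itself is not shown above, so here is a hypothetical minimal container (names are mine, reusing the code above) just to show how the two functions could be wired together and called:

class HMM:
    def __init__(self, transition, emission, priors):
        self.transition = transition   # A: state-to-state matrix
        self.emission = emission       # list of B matrices, one per observed variable
        self.priors = priors           # pi: initial state distribution
    def random_MN_draw(self, n, probs):
        return random_MN_draw(n, probs)   # delegate to the helper above
    simulate = simulate                   # reuse simulate(self, nSteps) as a method

model = HMM(
    transition=np.array([[0.9, 0.1],
                         [0.2, 0.8]]),
    emission=[np.array([[0.7, 0.2, 0.1],
                        [0.1, 0.3, 0.6]])],   # one observed variable with 3 symbols
    priors=np.array([0.5, 0.5]),
)
obs, sts = model.simulate(10)
print(obs)
print(sts)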
In pretty much every language, you can find equivalents of
np.random.multinomial()
for multinomial and other discrete or continuous distributions as built-in functions.
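For a single draw, numpy.random.choice does the same index bookkeeping in one call (an alternative to the random_MN_draw helper above; the probabilities are made up):

import numpy as np

probs = [0.2, 0.5, 0.3]                         # made-up state probabilities
state = np.random.choice(len(probs), p=probs)   # draws 0, 1 or 2 directly
print(state)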
I have a physical instrument of measurement (a force platform with load cells) which gives me three values, A, B and C. It happens, though, that these values, which should be orthogonal, are actually somewhat coupled due to physical characteristics of the measuring device, which causes cross-talk between applied and returned values of force and torque.
Then, it is recommended that a calibration matrix be used to transform the measured values into a better estimate of the actual values, roughly like this: actual(Fz, Mx, My) ≈ C * measured(Fz, Mx, My), where C is a 3x3 calibration matrix.
The problem is that it is necessary to perform a SET of measurements, so that different measured(Fz, Mx, My) and actual(Fz, Mx, My) are least-squared to get some C matrix that works best for the system as a whole.
I can solve Ax = B problems with scipy.linalg.lstsq, or even scipy.linalg.solve (giving an exact solution) for ONE measurement, but how should I proceed to consider a set of different measurements, each one with its own equation giving a potentially different 3x3 matrix?
Any help is much appreciated, thanks for reading.
I posted a similar question containing just the mathematical part of this at math.stackexchange.com, and this answer solved the problem:
math.stackexchange.com/a/232124/27435
In case anyone has a similar problem in the future, here is an almost literal SciPy implementation of that answer (the first lines are initialization boilerplate code):
import numpy
import scipy.linalg

### Origin of the coordinate system: upper left corner!
"""
1----------2
|          |
|          |
4----------3
"""

platform_width = 600
platform_height = 400

# positions of each load cell (one per corner)
loadcell_positions = numpy.array([[0, 0],
                                  [platform_width, 0],
                                  [platform_width, platform_height],
                                  [0, platform_height]])

platform_origin = numpy.array([platform_width, platform_height]) * 0.5

# applying a known force at known positions and taking the measurements
measurements_per_axis = 5
total_load = 50

results = []
for x in numpy.linspace(0, platform_width, measurements_per_axis):
    for y in numpy.linspace(0, platform_height, measurements_per_axis):
        position = numpy.array([x, y])
        for loadpos in loadcell_positions:
            moments = platform_origin - loadpos * total_load
            load = numpy.array([total_load])
            result = numpy.hstack([load, moments])
            results.append(result)
results = numpy.array(results)

noise = numpy.random.rand(*results.shape) - 0.5
measurements = results + noise

# rearrange ("stuff") each measured triple into a 3x9 block, so that the nine
# unknown entries of the 3x3 matrix can be solved in a single least-squares problem
expands = []
for n in range(measurements.shape[0]):
    k = results[n, :]
    m = measurements[n, :]
    expand = numpy.zeros((3, 9))
    expand[0, 0:3] = m
    expand[1, 3:6] = m
    expand[2, 6:9] = m
    expands.append(expand)
expands = numpy.vstack(expands)

# perform the actual regression
C = scipy.linalg.lstsq(expands, measurements.reshape((-1, 1)))
C = numpy.array(C[0]).reshape((3, 3))

# the result with pure noise (not actual coupling) should be
# very close to a 3x3 identity matrix (and is!)
print(C)
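As a hypothetical follow-up (not in the original answer), the fitted matrix would then be applied to each raw reading like this:

# hypothetical usage: apply the fitted calibration matrix to one raw reading
raw = measurements[0]        # one measured (Fz, Mx, My) triple
corrected = C.dot(raw)       # calibrated estimate of the actual values
print(raw, corrected)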
Hope this helps someone!