Amplitude Encoding with TensorFlow Quantum

I have been using TensorFlow Quantum (TFQ) for quantum machine learning recently, but I have run into a problem. I want to use amplitude encoding to encode classical data such as MNIST images into a quantum circuit, but I don't know how to do it. TFQ does not seem to have a function that performs amplitude encoding directly.
I would appreciate any advice on implementing amplitude encoding with TFQ. Thank you.
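As far as I know TFQ has no built-in amplitude encoder, so a first step is just preparing the data: flatten the image, zero-pad it to a power-of-two length, and L2-normalize it so it is a valid state vector; the length then fixes the qubit count. Below is a minimal numpy sketch of that preprocessing (the function name image_to_amplitudes is mine, not a TFQ API); the resulting amplitudes still have to be compiled into a gate-level state-preparation circuit (e.g. a Mottonen-style decomposition) built from gates TFQ can serialize, or you can first shrink the image (downsampling/PCA) so that circuit stays shallow.
# Minimal sketch (not a TFQ built-in): turn a classical vector into a valid
# amplitude vector. The gate-level state-preparation circuit still has to be
# built separately from gates TFQ can serialize.
import numpy as np
def image_to_amplitudes(image):
    """Flatten, zero-pad to the next power of two, and L2-normalize."""
    x = np.asarray(image, dtype=np.float64).ravel()   # 28*28 = 784 values for MNIST
    n_qubits = int(np.ceil(np.log2(len(x))))          # 10 qubits for 784 amplitudes
    padded = np.zeros(2 ** n_qubits)
    padded[:len(x)] = x
    norm = np.linalg.norm(padded)
    if norm == 0:
        raise ValueError("an all-zero input cannot be amplitude encoded")
    return padded / norm, n_qubits
# Example with a random stand-in "image"
amps, n_qubits = image_to_amplitudes(np.random.rand(28, 28))
print(n_qubits, np.isclose(np.sum(amps ** 2), 1.0))   # 10 True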

Related

How to input a stream of time-series data into a deep learning network in MATLAB?

I am a new MATLAB user and I would be grateful for your help. I have converted a set of time series into pictorial representations using the CWT (continuous wavelet transform) and trained a deep learning network with quite reasonable accuracy. I have used classify to check the trained network's performance on a single image. Now I want to apply it to a series of images generated consecutively from the main time series, so how should I use classify in this case?
Regards
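One common way to do this is a sliding window: slice the time series into consecutive windows, compute the CWT scalogram of each window exactly as you did for training, and call the classifier on each image in a loop; in MATLAB that would be classify(net, scalogramImage) inside the same loop. Here is a rough sketch of the idea in Python, assuming the PyWavelets package and a generic trained model (both 'model' and its predict call are stand-ins, not specific to your network).
# Illustrative sketch of classifying consecutive windows of a time series.
# 'model' stands in for any trained image classifier; in MATLAB the same
# loop would call classify(net, img) on each scalogram.
import numpy as np
import pywt
def classify_stream(signal, model, win_len=1024, hop=512, scales=np.arange(1, 65)):
    labels = []
    for start in range(0, len(signal) - win_len + 1, hop):
        window = signal[start:start + win_len]
        coeffs, _ = pywt.cwt(window, scales, 'morl')   # scalogram: scales x time
        img = np.abs(coeffs)                           # same kind of "image" used for training
        labels.append(model.predict(img[np.newaxis, ..., np.newaxis]))
    return labels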

How to sample a signal from the continuous time domain into a digital design input port (Simulink to HDL integration)

I am doing Simulink-based hardware/software co-simulation. I have a Simulink block that outputs fixed-point 32-bit data in the continuous domain. I want to send this data to an HDL design, again in fixed-point 32-bit format. Whenever I integrate the two blocks, I get an error. I tried adding a quantizer, but it only works on uint/double data types, which the HDL block does not accept. How can I discretize the data so that it is acceptable in the RTL domain? If I add a unit delay it works, but then the data is delayed, which is not acceptable.
Thanks a lot for your help.
To sample a continuous signal and convert it for digital electronics, you usually need an ADC: an analog-to-digital converter.
You should start by googling ADC integration.
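Which Simulink block to use is a tool question, but numerically the conversion itself is just two steps: sample-and-hold the continuous signal at the HDL clock rate, then re-express each sample as a signed 32-bit fixed-point word. The sketch below only illustrates that arithmetic (the Q16.16 format is an arbitrary example of mine, not something taken from your model), so you can see what the RTL port ultimately expects.
# Illustration of the numeric conversion only, not Simulink code:
# sample at the HDL rate, then quantize to signed 32-bit fixed point
# (Q16.16 chosen here purely as an example format).
import numpy as np
def to_fixed_point_q16_16(samples):
    scaled = np.round(samples * 2**16)              # shift the fraction into integer bits
    clipped = np.clip(scaled, -2**31, 2**31 - 1)    # saturate to the int32 range
    return clipped.astype(np.int32)
t_cont = np.linspace(0, 1, 100000)                  # "continuous" source signal
x_cont = np.sin(2 * np.pi * 5 * t_cont)
fs_hdl = 1000                                       # HDL-side sample rate (Hz)
idx = (np.arange(0, 1, 1 / fs_hdl) * (len(t_cont) - 1)).astype(int)
x_fixed = to_fixed_point_q16_16(x_cont[idx])        # what the RTL port would receive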

Estimation of Pitch from Speech Signals Using Autocorrelation Algorithm

I want to detect the pitch frequency of speech signals using the autocorrelation algorithm. I have MATLAB code, but the results are wrong. I would be grateful if you could find the mistake in my code.
[y,Fs]=audioread('Sample1.wav');          % load the audio and its sampling rate
y=y(:,1);                                 % keep the first channel only
auto_corr_y=xcorr(y);                     % full autocorrelation (negative and positive lags)
subplot(2,1,1);plot(y)                    % plot the waveform
subplot(2,1,2);plot(auto_corr_y)          % plot the autocorrelation
[pks,locs] = findpeaks(auto_corr_y);      % all local maxima of the autocorrelation
[mm,peak1_ind]=max(pks);                  % index of the strongest peak
period=locs(peak1_ind+1)-locs(peak1_ind); % lag between the strongest peak and the next one
pitch_Hz=Fs/period                        % convert the lag (in samples) to Hz
Thank you for your help in this matter.
It seems your code does not work because Sample1.wav must contain only a short quasi-periodic segment of the voiced recording. Also note that the pitch frequency is not constant over time, so your estimation must take this into account.
If you just want to estimate the frequency, you can take the RAPT method from the Speech Filling System (see the sfs_rapt.m wrapper for Windows).
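If you want to stay with plain autocorrelation, the usual fixes are: keep only the non-negative lags, restrict the lag search to a plausible pitch range (say 50-400 Hz) instead of taking adjacent peaks, and estimate frame by frame because the pitch changes over time. Here is a small sketch of one frame's estimate (written in Python/numpy for brevity; each step maps one-to-one onto the MATLAB code above).
# Sketch of autocorrelation pitch estimation on one short voiced frame.
# Key differences from the code above: use only non-negative lags and
# search a plausible pitch range rather than taking adjacent peaks.
import numpy as np
def pitch_autocorr(frame, fs, fmin=50.0, fmax=400.0):
    frame = frame - np.mean(frame)
    ac = np.correlate(frame, frame, mode='full')[len(frame) - 1:]  # lags 0..N-1
    lag_min = int(fs / fmax)                                       # shortest period allowed
    lag_max = int(fs / fmin)                                       # longest period allowed
    lag = lag_min + np.argmax(ac[lag_min:lag_max])
    return fs / lag
# Pitch varies over time, so call this per short frame (e.g. 40 ms frames of y).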

Feature Extraction for Machine Learning

Looking for some advice. I am playing around with an accelerometer, combined with the machine learning app in MATLAB. Clearly there are many ways to extract features from the recorded data, in both the time and frequency domains. However, I have recently come across time-frequency analysis, specifically using wavelets.
Has anyone got any advice on using wavelet analysis for classifying accelerometer (or similar) data, and the benefits of using it? Or whether this would indeed be a valid way of extracting features? I'm not too sure what sort of features I should be extracting with this method.
Thanks in advance.
A few points to note:
1) You can transform a number of samples (it should be a dyadic number, and it depends on your sampling frequency) into the wavelet domain and classify that data (e.g. if you transform 64 accelerometer samples, you also get 64 points in the wavelet domain); a small sketch of this follows below.
2) Apart from the time-frequency information, the wavelet transform has a sparsity property (https://en.wikipedia.org/wiki/Sparse_approximation) that can be useful for your classification model.
3) You can also try different wavelet basis functions (mother wavelets) and figure out which basis is most suitable for your data. You could start with the Haar basis, as it is well suited to capturing the singular behaviour of your data.
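Here is a rough sketch of point 1), using the PyWavelets package (the MATLAB Wavelet Toolbox has the equivalent wavedec): a dyadic window of 64 accelerometer samples is mapped to 64 Haar coefficients, which you can feed to the classifier directly or summarize as per-level energies for a more compact feature vector.
# Sketch of point 1): a 64-sample (dyadic) accelerometer window becomes
# 64 Haar wavelet coefficients, used directly or summarized per level.
import numpy as np
import pywt
window = np.random.randn(64)                         # stand-in for 64 accelerometer samples
coeffs = pywt.wavedec(window, 'haar')                # full decomposition: 64 coefficients total
flat = np.concatenate(coeffs)                        # same length as the input window
energies = [float(np.sum(c ** 2)) for c in coeffs]   # compact per-level feature vector
print(len(flat), len(energies))                      # 64, 7 (levels for a 64-sample window)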

Feature Extraction for Digit Speech Recognition

I'm looking for a way to extract features from audio recordings in which I say a digit, for speech recognition of the digits 1-10 using backpropagation with a neural network (10 samples for each digit, and 5 samples of each digit for testing).
I tried using the raw audio data, and I also tried feeding the data after an FFT, and then feeding only the ten strongest frequencies, but both attempts failed.
Can you suggest a way to extract features from the audio that will help the neural network reach reasonable results? It's a simple project, so I'm not aiming for extremely high performance, just reasonable performance to demonstrate the ability of such a network to learn.
Why don't you try MFCCs? MFCCs are the de facto standard in ASR.
They weren't designed with DNNs in mind, but they have proven to work with several other ASR implementations (most notably, HMMs).
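As a concrete starting point, here is a minimal sketch of MFCC extraction using the librosa package (my choice for illustration; MATLAB's Audio Toolbox mfcc function or python_speech_features work along the same lines). Computing a handful of coefficients per frame and summarizing each utterance by their mean and standard deviation gives every spoken digit a fixed-length input vector for the network.
# Sketch: MFCCs per frame, then a fixed-length summary per utterance
# so each spoken digit becomes one feature vector for the network.
# Assumes the librosa package is installed.
import numpy as np
import librosa
def digit_features(wav_path, n_mfcc=13):
    y, sr = librosa.load(wav_path, sr=None)                 # keep the file's sample rate
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)  # shape: (n_mfcc, n_frames)
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])  # length 2*n_mfcc
# feats = digit_features('one_01.wav')   # hypothetical file: 26-dimensional network input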