I am trying to generate a PN sequence using five shift registers. The shift register should be composed of 10 states, giving a period of 1023 bits. I have been trying hard to get it done, but I am completely confused as to how to generate 1023 bits using 5 shift registers. I need to submit it by tomorrow and I am feeling really tense. Can someone please help me with this MATLAB code?
P.S. I am not allowed to use MATLAB's built-in functions to generate the sequence.
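For reference, here is a minimal sketch of one way to do it with a single 10-stage linear-feedback shift register, using only basic MATLAB operations. The tap positions assume the primitive polynomial 1 + x^3 + x^10 (one common choice, used for example in the GPS C/A code generator); any primitive degree-10 polynomial gives the full 1023-bit period.
% Minimal sketch: 10-stage Fibonacci LFSR, no built-in PN-sequence functions.
% Assumed taps: stages 10 and 3 (primitive polynomial 1 + x^3 + x^10).
reg = ones(1, 10);                    % any non-zero initial state
pn  = zeros(1, 1023);
for k = 1:1023
    pn(k)    = reg(10);               % output bit is the last stage
    feedback = xor(reg(10), reg(3));  % XOR of the tapped stages
    reg      = [feedback, reg(1:9)];  % shift and insert the feedback bit
end
[sum(pn), sum(pn == 0)]               % an m-sequence has 512 ones, 511 zeros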
While reading about Transfer Learning with MATLAB I came across a piece of code which says...
rng(2016) % For reproducibility
convnet = trainNetwork(trainDigitData,layers,options);
...before training the network, so that anyone who runs that code can reproduce the results exactly as given in the example. I would like to know how seeding the pseudo-random number generator with the rng(seed_value) function can help with reproducibility of the entire range of results?
It is not generating a random number; rng(seed_value) sets the random number generator's seed.
There is no such thing as random numbers on a computer, just pseudo-random numbers: numbers that behave almost as if they were random, generally produced by some deterministic mathematical function that requires an initial value. Often, computers take this initial value from a clock register in your PC, thus "ensuring" randomness.
However, if you have an algorithm that is based on random numbers (e.g. a NN), reproducibility may be a problem when you want to share your results. Someone who re-runs your code is practically guaranteed to get different results, as randomness is part of the algorithm. But you can tell the random number generator to start from a fixed seed instead of a seed chosen arbitrarily (e.g. from the clock). That ensures that while the numbers generated still behave randomly with respect to one another, they are the same each time (e.g. [3 84 12 21 43 6] could be the random output, but it will always be the same).
By setting a seed for your NN, you ensure that for the same data it will output the same result; thus your code becomes "reproducible", i.e. someone else can run it and get EXACTLY the same results.
As a test I suggest you try the following:
rand(1,10)
rand(1,10)
and then try
rng(42)
rand(1,10)
rng(42)
rand(1,10)
Wikipedia for Pseudo-random number generator
Because sometimes it is good to use the same random numbers; this is what MATLAB says about that:
Set the seed and generator type together when you want to:
Ensure that the behavior of code you write today returns the same results when you run that code in a future MATLAB® release.
Ensure that the behavior of code you wrote in a previous MATLAB release returns the same results using the current release.
Repeat random numbers in your code after running someone else's random number code
This is the point of repeating the seed: generating the same random numbers. MATLAB points it out in two good articles, one on repeating random numbers and one on generating different ones.
You don't want to start with all the weights equal to zero, so in the initialization stage you give the weights some random values. There may be other random values involved later in the learning process, in searching for the minimum, or in the way you feed your data.
So the real inputs to the whole neural network learning process are your data and the random number generator.
If they are the same, then everything is going to be the same.
And the 'rng' command puts the random number generator into a predefined state, so it will generate the same sequence of numbers.
anquegi's answer pretty much answers your question, so this post is just to elaborate a bit more.
Whenever you ask for a random number, what MATLAB really does is generate a pseudo-random number with distribution U(0,1) (that is, uniform on [0,1]). This is done via some deterministic formula, typically something like the following (see Linear congruential generator):
X_{n+1} = (a X_{n} + b) mod M
then a uniform number is obtained by U = X_{n+1}/M.
There is, however, a problem: if you want X_{1}, then you need X_{0}. You need to initialise the generator, and this initial value is the seed. It also means that once X_{0} is specified, you will draw the same sequence of random numbers every time. Try it: open a new MATLAB instance, run randn, close MATLAB, open it again and run randn again. It will be the same number. That is because MATLAB always starts from the same seed whenever it is opened.
So what you do with rng(2016) is "reset" the generator and put X_{0} = 2016, so that you know all the numbers you will draw and can thus reproduce the results.
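To make the formula concrete, here is a toy generator along these lines. It is purely illustrative: the constants are the Numerical Recipes LCG constants, not what MATLAB actually uses (MATLAB's default generator is the Mersenne twister, not an LCG).
% Toy linear congruential generator, only to illustrate the formula above.
a = 1664525; b = 1013904223; M = 2^32;  % illustrative constants, not MATLAB's
lcg = @(x) mod(a*x + b, M);             % X_{n+1} = (a*X_n + b) mod M
X = zeros(1, 5);
X(1) = lcg(2016);                       % the seed plays the role of X_0
for n = 2:5
    X(n) = lcg(X(n-1));
end
U = X / M                               % uniform numbers in [0, 1)
Re-running these lines with the same "seed" of 2016 reproduces exactly the same U, which is all that rng(2016) is doing for MATLAB's real generator.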
I want to avoid repetitions as much as possible. I've run the same Matlab program that uses "random" numbers multiple times and gotten exactly the same results. I learned that I should put the command rng('shuffle') at the beginning, and then the program will replace the default seed with a new one based on the time on the clock. But the sequence of outputs from the pseudo-random number generator will still contain a pattern.
I recently learned about a quantum box random number generator (this or something like it), and in the process of looking it up online I found a couple web servers that deliver random numbers that are continuously generated by quantum mechanical means: ANU Photonics and ANU QRNG.
A quantum box looks too expensive for me to buy, so how might I integrate one of the online servers into Matlab?
On http://qrng.anu.edu.au, click the "download" link in the text, and it takes you to a FAQ page which tells you what to download to use the random number generator in different ways. The last on the list is Matlab, for which it gives a link to directly download some code that accesses the random numbers, and a link to Matlab Central to download the JSON Parser, which is necessary for it to work.
The code is very simple, and as a script only displays the values it fetches, but can easily be turned into a function. I unzipped the contents of parse_json.zip into C:/Program Files/MATLAB/[version]/toolbox/JSONparser, a new folder in the Toolboxes, navigated to the Toolboxes in the Current Folder in Matlab, right clicked JSONparser, and clicked Add to Path.
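As an aside, if you would rather not install the JSON Parser, newer MATLAB releases (R2014b and later) can fetch and parse JSON on their own via webread. The sketch below is my guess at the same ANU API the downloaded script uses; the endpoint and field names are assumptions from memory, and the service has changed over the years, so check the FAQ before relying on it.
% Hedged sketch: fetch quantum random numbers with webread instead of the
% downloaded script. Endpoint and field names are assumptions; verify them.
url  = 'https://qrng.anu.edu.au/API/jsonI.php';
resp = webread(url, 'length', 10, 'type', 'uint16');  % request 10 numbers
if resp.success
    q = resp.data                     % 16-bit non-negative integers
end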
Read the comments on the Matlab Central page for JSON Parser to get an idea of the limits of how many random numbers you can pull down at a time.
The random numbers are 16-bit non-negative integers; to create a random integer with more bits, say, 32 bits, I'd recommend taking two of these integers, multiplying one by 2^16, and adding them. If you want a number between 0 and 1, divide the sum by 2^32.
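For example, something along these lines (the variable names and values are just illustrative):
% Combine two 16-bit draws into one 32-bit value, then scale to [0, 1).
lo  = 12345;  hi = 54321;             % two 16-bit integers from the server
r32 = hi * 2^16 + lo;                 % 32-bit non-negative integer
u   = r32 / 2^32                      % uniform value in [0, 1)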
I'm working on some fixed point coding these days.
If I have a bunch of 16-bit samples from an ADC and I multiply one by a 16-bit filter coefficient, the result could be a 32-bit fixed-point number, right? That's fine because I'm targeting a 32-bit fixed-point DSP. However, if I want to multiply that by another 16-bit fixed-point coefficient or something, then I get overflow, right? So does that mean I need to do intermediate truncation? Eventually I'll be truncating anyway, because I need to send the result to a 16-bit DAC.
Does anyone have experience with doing this in MATLAB?
EDIT: I do have the Fixed-Point Toolbox. What I don't understand is that right now, if I set up a number with a 16-bit word length, set the max product length to 16, and then multiply it by another 16-bit word, it gives me an error. If I have to perform all the truncations myself to prevent an error, how does the Fixed-Point Toolbox even really help me? I guess I'm looking for an example of how to use the Fixed-Point Toolbox to ensure the best possible rounding/overflow behaviour, given that my inputs are 16 bits and I have 32-bit registers.
Thanks
As you noted, a 16-bit multiply can produce a 32-bit result. In what follows, I'm assuming your fixed-point notation is 16.16.
In order to perform your second multiplication, you should first shift the initial mul's result back down by 16 bits. Since the result is now back into the desired 16.16 format, you may proceed with the second mul ("...if I want to multiply that by another 16 bit fixed point coefficient..."). After this second multiplication, shift the result down by 16 bits to restore the 16.16 notation.
Before shipping the value out to the DAC, I would expect that you need to leave fixed-point notation and revert to integer form. To do this, simply shift the value down by 16 bits. Before leaving fixed-point notation, you might consider rounding the result. Assuming a positive fixed-point number, this can be accomplished by adding 0.5f to the result prior to the final right shift. (In 16.16, 0.5f is 2^15.)
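In MATLAB, a rough sketch of the above with plain integers and shifts (not the toolbox) might look like this; the sample and coefficient values are made up:
% Sketch of 16.16 fixed-point multiplies using int64 and bitshift.
FRAC = 16;                            % fractional bits in 16.16
toFix   = @(x) int64(round(x * 2^FRAC));
fromFix = @(x) double(x) / 2^FRAC;
a  = toFix(1.25);                     % sample in 16.16
c1 = toFix(0.5);  c2 = toFix(0.75);   % two coefficients in 16.16
p1 = bitshift(a  * c1, -FRAC);        % shift the product back down to 16.16
p2 = bitshift(p1 * c2, -FRAC);        % second multiply, same renormalisation
dac = bitshift(p2 + 2^(FRAC-1), -FRAC);  % add 0.5 (2^15), then final shift
fromFix(p2)                           % 0.46875 = 1.25 * 0.5 * 0.75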
As always, sequential fixed-point arithmetic operations should be studied closely to avoid overflowing the left-hand side. The operations may be re-ordered or factored to prevent overflow. There are a number of good tutorials on the web that can help; see this tutorial, for example.
As for performing fixed-point math in MATLAB, the bitshift functions are easy enough to use (reference). Of course, the Fixed-Point Toolbox makes this all the easier.
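On the toolbox side, the trick the edit above is asking about is to tell the toolbox the precision of products and sums explicitly through a fimath object, rather than letting a 16-bit product setting overflow. A hedged sketch (property names as I recall them; check the documentation for your release):
% Hedged sketch with the Fixed-Point Toolbox: fi objects carrying a fimath
% that allows full-precision 32-bit products, then an explicit requantize.
F = fimath('ProductMode', 'SpecifyPrecision', ...
           'ProductWordLength', 32, 'ProductFractionLength', 30, ...
           'SumMode', 'SpecifyPrecision', ...
           'SumWordLength', 32, 'SumFractionLength', 30, ...
           'RoundingMethod', 'Nearest', 'OverflowAction', 'Saturate');
x = fi(0.123, 1, 16, 15, F);          % 16-bit ADC sample, Q1.15
c = fi(0.5,   1, 16, 15, F);          % 16-bit coefficient, Q1.15
y = x * c                             % held in 32 bits per the fimath
z = fi(y, 1, 16, 15)                  % requantize to 16 bits for the DAC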
This is more of a computer science / information theory question than a straightforward programming one, so if anyone knows of a better site to post this, please let me know.
Let's say I have an N-bit piece of data that will be sent redundantly in M messages, where at least M-1 of those messages will be received successfully. I am interested in different ways of encoding the N-bit piece of data in fewer bits per message. (this is similar to RAID but at a much smaller level, where N = 8 or 16 or 32)
Example: suppose N = 16 and M = 4. Then I could use the following algorithm:
1st and 3rd message: send "0" + bits 0-7
2nd and 4th message: send "1" + bits 8-15
If I can guarantee that 3 of the 4 messages will get through, then at least one message from each group will get through. Thus I can make this work with 9 bits or less per message; there's probably a way to do this with fewer total bits, but I'm not sure how.
Are there some simple encoding/decoding algorithms to do this kind of thing? Does this problem have a name? (if I know what it's called, I can google it!)
note: in my particular case, the messages either arrive correctly or do not arrive at all (no messages arrive with errors).
(edit: moved 2nd part to a separate question)
(Incomplete answer follows. I may add more later.)
The term you may be interested in is channel coding: adding redundancy to a source in order to make it robust during transmission over a noisy channel. In information theory, the complementary problem to channel coding is source coding: reducing the redundancy in a source to represent it using fewer bits. (The combination of these two problems is called joint source-channel coding.)
Your first question asks to find a channel code. The simple example you give is similar to a repetition code, i.e., you send the same message more than twice (usually an odd number of times), and then the message which is received most often is accepted as the original message.
This code is inefficient. To use standard notation, let k = number of bits in original message, and n = number of bits in the transmitted message. For your example, k = 16 and n = 36. A measure of coding efficiency is k/n, where higher means more efficient. In your case, k/n = 0.44. This is low.
The repetition code is a simple kind of block code, i.e., redundancy is added to each block of k bits to create a codeword of n bits. So are the Hamming and Reed-Solomon codes as others mentioned. Hamming codes are relatively easy to understand with some basic linear algebra.
These should be enough terms for you to search on your own. Good luck.
I'm not sure if I understood all the details of your question correctly, but your problem is definitely about designing some kind of error-correcting code. This is a vast area of computer science, and thick tomes have been written about it. Start with Wikipedia and see if you can get any simple schemes (like Hamming or Reed-Solomon codes) to work in your case.
If you want to deal not only with symbol corruption but also with deletion of symbols, you should look at erasure codes; this is definitely a more difficult task, but good methods exist for many cases.
EDIT: This material from hackersdelight.org seems a nice introduction.
See erasure codes.
You're looking for a packet erasure code. There are only two useful packet erasure codes that are not totally encumbered by patents, and there's only one open-source library to implement those. Find it here: http://planete-bcast.inrialpes.fr/rubrique.php3?id_rubrique=5
Here's a trivially simple scheme that's almost twice as efficient as your example.
You chopped the message into blocks of (N/M)*2 bits. Instead, chop it into M-1 blocks of N/(M-1) bits each. (Round up if necessary.) The first encoded block is the first source block as-is: enc[0]=src[0]. Likewise the last encoded block is the last source block: enc[M-1]=src[M-2]. Each of the other encoded blocks is its source block XORed with its left neighbor: enc[i]=src[i-1]^src[i].
Prefix each encoded block with a log(M)-bit sequence number, essentially as you did, so the receiver can tell which was dropped. (If you can be sure that whichever blocks arrive will arrive in order, then a 1-bit sequence number will do. Just alternate 0 and 1.)
To decode, successively XOR from the left and the right until you hit the dropped block. E.g. src[1] == enc[0]^enc[1]. (Dropping one of the endpoint blocks isn't a special case -- e.g. if the first block is dropped, the scan from the right recovers it, and the scan from the left is of length 0.)
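A quick MATLAB sketch of this scheme with M = 4 and made-up 6-bit block values, just to show the encode/decode bookkeeping (MATLAB indices are 1-based, so src(1) is src[0] and so on):
% Encode: M-1 source blocks become M transmitted blocks.
M   = 4;
src = uint8([5 42 17]);               % made-up 6-bit source blocks
enc = zeros(1, M, 'uint8');
enc(1) = src(1);                      % first block sent as-is
for i = 2:M-1
    enc(i) = bitxor(src(i-1), src(i));  % XOR with left neighbour
end
enc(M) = src(M-1);                    % last source block sent as-is
% Decode with one block dropped: scan in from both ends toward the gap.
dropped = 2;                          % pretend message 2 never arrived
rec = zeros(1, M-1, 'uint8');
if dropped > 1
    rec(1) = enc(1);
    for i = 2:dropped-1
        rec(i) = bitxor(rec(i-1), enc(i));
    end
end
if dropped < M
    rec(M-1) = enc(M);
    for i = M-2:-1:dropped
        rec(i) = bitxor(rec(i+1), enc(i+1));
    end
end
isequal(rec, src)                     % 1: the original blocks are recovered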
I am simulating a digital filter, which is 4-stage.
The stages include a CIC filter and a half-band filter, and the OSR is 128.
The input is 4 bits and the output is 24 bits. I am confused about the 24-bit output.
I use MATLAB to generate a 4-bit signed sinusoid input (using the SD tool) and simulate the design with ModelSim. So the output should also be a sinusoid. The issue is that the output only contains 4 different values.
For a 24-bit output, shouldn't we get 2^24 - 1 different values?
What's the reason for this? Is it due to internal bit width?
I'm not familiar with ModelSim, and I don't understand the filter terminology you used, but... are your filters linear systems? If so, an input at a given frequency will cause an output at the same frequency, though possibly with a different amplitude and phase. If your input signal is a single tone, sampled such that there are four values per cycle, the output will still have four values per cycle. Unless one of the stages performs sample rate conversion, the system is behaving as expected. As Donnie DeBoer pointed out, the word width of the calculation doesn't matter as long as it can represent the four values of the input.
Again, I am not familiar with the particulars of your system so if one of the stages does indeed perform sample rate conversion, this doesn't apply.
Forgive my lack of filter knowledge, but does one of the filter stages interpolate between the input values? If not, then you're only going to get a maximum of 2^4 output values (based on the input resolution), regardless of your output resolution. Just because you output 24 bits doesn't mean you're going to have 2^24 distinct values... imagine running a digital square wave into a D->A converter. You have all the output resolution in the world, but you still only have 2 values.
It's actually pretty simple:
Even though you have 4 bits of input, your filter coefficients may be more than 4 bits.
Every math stage you do adds bits. If you add two 4-bit values, the answer is a 5-bit number, so that adding 0xf and 0xf doesn't overflow. When you multiply two 4-bit values, you actually need 8 bits of output to hold the answer without the possibility of overflow. By the time all the math is done, your 4-bit input apparently needs 24 bits to hold the maximum possible output.
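A quick way to see that bit growth in MATLAB (the numbers here are just the 4-bit worst case):
% Worst-case bit growth for 4-bit unsigned operands.
a = 15; b = 15;                       % largest 4-bit unsigned values
floor(log2(a + b)) + 1                % 5 bits needed to hold 30
floor(log2(a * b)) + 1                % 8 bits needed to hold 225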