I'm relatively new to Simulink and I am looking for a way to extract 1-3 specific bits from one byte.
As far as I know, the input format (bin, dec, hex) of the constant is irrelevant for what follows, right? But how can I tell Simulink that the constant "1234" is hex and not dec?
In my model I use the Constant block as my source (it will be parametrised by a MATLAB variable that comes from an m-file).
Further processing with the Extract Bits block causes an error about incompatible data types.
Can someone help me to deal with this issue?
Greets, poeschlorn
You should probably do the conversion hex->dec in your .m initialization file and use this value in Simulink.
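For example, in the initialization .m file (the variable name myConst is just a placeholder):

    myConst = hex2dec('1234');   % hex string -> 4660 in decimal
    % reference myConst in the Constant block's Value field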
Maybe this is not the most elegant solution, but I converted my input to decimal and then created a BCD representation of it via OR and AND logic blocks for further use.
If you have the Communications Toolbox/Blockset then you can use the Integer to Bit Converter block to do a conversion to a vector of binary digits then just extract the "bits" that you want. The Bit to Integer Converter block will do the reverse transformation.
If you don't have the Communications Blockset, then it wouldn't be hard to do something similar using a plain MATLAB Function block; see the sketch below.
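A rough sketch of that approach (bit positions 2:4 are just an example; bitget counts from the LSB as bit 1):

    function bits = fcn(u)
    % Body of a MATLAB Function block: u is assumed to be a uint8 input.
    % Returns the selected bits as a vector of 0s and 1s.
    bits = bitget(u, uint8(2:4));
    end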
I am working on a compression project, and I used the default save() function in MATLAB for lossless (entropy) encoding. The transform module is all figured out.
I used the save() function to encode a 3-D array that includes a bunch of zeros. I am sure that MATLAB applies some kind of lossless compression in save(), since when I save that array it ends up taking far less space than an array, say, containing no zeros at all. I had no success finding out what type of entropy encoding schemes are behind the function. Because it is a core part of the algorithm, I think I must at least know what is behind it.
Plus, if you know any other type of entropy encoder that would do a better job in compressing a 3d array that contains zeros, I would really appreciate you sharing. Or, if you think I could easily write the code for that myself, then please let me know.
The v7 format uses deflate.
The v7.3 format uses the HDF5 format, which supports gzip (deflate) and szip compression. It also has an option to not compress.
The MATLAB save function supports compression for some of the formats that are available. Specifically, -v7 (default format) and -v7.3 support compression. The details of the compression are not documented.
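A quick way to see the compression at work (file names are arbitrary):

    A = zeros(64, 64, 64);            % highly compressible 3-D array
    save('a_v7.mat',  'A', '-v7');    % deflate-compressed MAT-file
    save('a_v73.mat', 'A', '-v7.3');  % HDF5-based MAT-file (gzip by default)
    d7 = dir('a_v7.mat'); d73 = dir('a_v73.mat');
    fprintf('v7: %d bytes, v7.3: %d bytes\n', d7.bytes, d73.bytes);

Both files come out far smaller than the 2 MB of raw doubles, confirming that save applies lossless compression.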
I've been trying to implement a JPEG compression algorithm on Matlab.
The only part I'm having trouble implementing is the Huffman encoding. I understand the DCT, quantization, and zig-zag scanning of the 8x8 matrix. I also understand how Huffman encoding works in general.
What I do not understand is: after I have an output bitstream and a dictionary that translates consecutive bits back to their original form, what do I do with the output? How do I tell a computer to translate that output bitstream using the dictionary I created for it?
In addition, each 8x8 matrix will have its own output and dictionary. How can all these outputs be combined into one? Because at the end of the day, the result is supposed to be an image.
I might have misunderstood some of the steps, in which case my apologies for any confusion caused by this.
Any help would be extremely appreciated!
EDIT: I'm sorry, my question apparently hasn't been clear enough. Say I use MATLAB's built-in Huffman functions (huffmanenco and huffmandict); what am I supposed to do with the value huffmanenco returns?
What to do with the output string of bits hasn't been clear to me as far as Huffman encoding goes in other IDEs and programming languages as well.
You have two choices with the Huffman coding:

1. Use a pre-canned Huffman table.
2. Make two passes over the data, where the first pass generates the Huffman tables and the second pass encodes.
You cannot have a different dictionary for each MCU.
You say you have the run-length encoded values. You Huffman encode those and write them to the output stream.
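If you're experimenting with the Communications Toolbox functions from the question, a minimal round trip looks like this (the symbols and probabilities are made up; see the EDIT below for the JPEG-compatibility caveat):

    symbols = 0:4;                            % example alphabet
    p = [0.4 0.25 0.15 0.12 0.08];            % assumed probabilities (sum to 1)
    dict = huffmandict(symbols, p);           % build the code table
    code = huffmanenco([0 1 0 3 2], dict);    % vector of bits (0s and 1s)
    back = huffmandeco(code, dict);           % recovers [0 1 0 3 2]

The bit vector returned by huffmanenco is what gets packed into bytes and written to the file.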
EDIT:
You need to be sure that the MATLAB Huffman encoder is JPEG-compatible. There are different ways to Huffman encode.
You need to write the bits from the encoder to the JPEG stream. This means you need a bit-level I/O routine. Plus, you need to convert FF values in the compressed data into FF00 values in the JPEG stream (byte stuffing).
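A hedged sketch of that byte stuffing in MATLAB (the function name is mine):

    function out = stuffFF(in)
    % JPEG byte stuffing: every 0xFF byte of entropy-coded data must be
    % followed by a 0x00 so a decoder doesn't mistake it for a marker.
    out = uint8([]);
    for b = in(:).'                       % iterate over the input bytes
        if b == 255
            out = [out, b, uint8(0)];     %#ok<AGROW> append stuffed zero
        else
            out = [out, b];               %#ok<AGROW>
        end
    end
    end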
I suggest getting a copy of Compressed Image File Formats (http://www.amazon.com/Compressed-Image-File-Formats-JPEG/dp/0201604434) to see how the encoding is done.
I'm trying to send and receive data through a serial port using Simulink (MATLAB 7.1) and dSPACE. The values I want to send and receive are doubles. Unfortunately for me, the send and receive blocks use uint8 values. My question is: how can I convert doubles into an array of uint8 values and vice versa? Are there Simulink blocks for this, or should I use Embedded MATLAB functions?
Use the aptly named Data Type Conversion block, which does just that.
EDIT following discussion in the comments
Regarding scaling, here's a snapshot of something I did a long time ago. It's using CAN rather than serial, but the principle is the same. Here it's slightly easier in that the signals are always positive, so I don't have to worry about scaling a negative number. 65535 is the max value for a uint16, and I would do the reverse scaling on the receiving end. When converting to uint16 (or uint8 in your case), the value is automatically rounded, and you can specify that behaviour in the block mask.
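The scaling itself is just a multiply before the conversion and its inverse on the other side; a sketch with made-up numbers (xmax is the assumed full-scale value of the signal):

    xmax = 10;                          % assumed maximum of the signal
    x = 7.25;                           % example positive double
    tx = uint16(x / xmax * 65535);      % sending side: scale, then round
    rx = double(tx) / 65535 * xmax;     % receiving side: reverse the scaling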
There are Pack and Unpack blocks in Simulink; search for them in the Simulink Library Browser. You may need some additional product, I'm not sure which.
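For reference, the byte-level reinterpretation those blocks perform can be reproduced in plain MATLAB with typecast, e.g. inside a MATLAB Function block:

    d = 3.14159;                        % a double occupies 8 bytes
    bytes = typecast(d, 'uint8');       % 1x8 uint8 vector
    d2 = typecast(bytes, 'double');     % exact round trip, no scaling needed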
I know that we need to convert decimal, octal, and hexadecimal into binary, but I am confused about converting decimal to octal, octal to hexadecimal, or decimal to hexadecimal.
Why and where do we need these types of conversions?
Different bases are good for different purposes.
Decimal is obviously what most people know how to deal with, so is good for output of real quantities to end users.
Hex is short and has an even ratio of exactly 2 characters per byte, so it's good for expressing large numbers like SHA1 hashes or private keys and the like in a type-able format, particularly since those numbers don't really represent a quantity, so users don't need to be able to understand them as numbers.
Octal is mostly for legacy reasons -- UNIX file permission codes are traditionally expressed as octal numbers, for example, because three bits per digit corresponds nicely to the three bits per user-category of the UNIX permission encoding scheme.
One sometimes wants to use numbers in one base for a purpose where another base is desired; hence the various conversion functions available. In truth, however, my experience is that in practice you almost never convert from one base to another, except to convert numbers from some non-binary base into binary (in the form of your language of choice's native integral type) and back out into whatever base you need for output. Most of the time you go from one non-binary base to another is when learning about bases and getting a feel for what numbers in different bases look like, or when debugging using hexadecimal output. Even then, if a computer does it, the main method is to convert to binary and then back out, because current computers are just inherently good at dealing with base-2 numbers and not so good at anything else.
One important place you see numbers actually stored and operated on in decimal is in some financial applications, or others where it's important that "number-of-decimal-places" level precision be preserved. Sometimes fixed-point arithmetic can work for currency, but not always, and if it doesn't, using binary floating point is a bad idea. Older systems actually had built-in support for this in the form of binary-coded-decimal (BCD) arithmetic. In BCD, each 4 bits acts as a decimal digit, so you give up a chunk of every 4 bits of storage in exchange for maintaining your level of precision in the base of choice of the non-computing world.
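A tiny illustration of the BCD idea, packing the digits of 1234 one per nibble:

    digits = [1 2 3 4];                    % decimal digits of 1234
    bcd = sum(digits .* 16.^(3:-1:0));     % one digit per 4-bit nibble
    fprintf('0x%X\n', bcd);                % prints 0x1234 (i.e. 4660)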
Oddly enough, there is one common use case for other bases that's a bit hidden. Modern languages with large-number support (e.g. Python 2.x's long type or Java's BigInteger and BigDecimal types) will usually store the numbers internally in an array with each element being a digit in some base. Then they implement the math they support on strings of digits of that base. Really efficient bigint implementations may actually use a base approaching 2^(bits in machine native word size); a base 2^64 number is obviously impossible to usefully output in that form, but doing the calculations in chunks of that size ends up making the best use of space and the CPU. (I don't know if that's the best base; it may be best to use a base of half that number of bits to simplify overflow handling from one digit to the next. It's been awhile since I wrote my own bigint and I never implemented the faster/more-complicated versions of multiplication and division.)
MIME uses the hexadecimal system for Quoted-Printable encoding (e.g. mail subjects in Unicode) and a base-64 system for Base64 encoding.
If your workplace is stuck in IPv4 CIDR, you'll be doing quite a lot of bin -> hex -> decimal conversions managing most of the networking equipment until you get them memorized (or just use some random, simple tool).
Even that usage is few and far between; most businesses just adopt the lazy "/24 everything" approach.
If you do a lot of graphics work, there's a chance you'll want to convert colors between systems and need to convert from hex -> dec... most tools have this built into the color picker, though.
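For example, splitting a hex color string into its decimal RGB components (the color value is arbitrary):

    hexColor = '1E90FF';                   % dodger blue
    rgb = [hex2dec(hexColor(1:2)), ...
           hex2dec(hexColor(3:4)), ...
           hex2dec(hexColor(5:6))]         % [30 144 255]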
I suppose there's no practical reason to be able to do it, other than that it's really simple and there's no point in not learning how. :)
... unless, for some reason, you're trying to do mantissa binary math in your head.
All of these bases have their uses. Hexadecimal in particular is useful as a shorthand for binary. Every hexadecimal digit is equivalent to 4 bits, so you can write a full 32-bit value as a string of 8 hex digits. Likewise, octal digits are equivalent to 3 bits, and are used frequently as a shorthand for things like Unix file permissions (777 = set read, write, execute bits for user/group/other).
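A quick illustration of that 4-bits-per-hex-digit correspondence:

    n = uint32(3735928559);     % 0xDEADBEEF
    dec2hex(n)                  % 'DEADBEEF'  (8 hex digits)
    dec2bin(n, 32)              % 32 binary digits, 4 per hex digit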
No one base is special--they all have their (obscure) uses. Decimal is special to us because it reflects human experience (10 fingers) but that's really the only reason.
A real-world use case: a program prints an error code in decimal; to get info from a database or the internet, you need the hexadecimal format; and because the bits of the error 'number' convey extra info, you may need to look at it in binary.
I'm sure there are occasional uses for this. One use case would be a little app that lets a user convert decimal to octal... like you can with lots of calculators.
But I'm not sure I understand the point of the question. Standard libraries typically don't provide methods like String toOctal(String decimal). Instead, you would normally convert from a decimal String to a primitive integer and then from the primitive integer to (say) an octal String.
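That two-step pipeline is a one-liner in most languages; in MATLAB, for instance (the value '100' is arbitrary):

    n = str2double('100');      % decimal string -> number
    oct = dec2base(n, 8)        % '144'
    hx  = dec2base(n, 16)       % '64'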
I know it's a very basic question, but I am still struggling to convert binary to integer and vice versa in Simulink.
I could use a MATLAB Function block and inbuilt MATLAB functions to do it, but I intend to use Simulink blocks to convert a binary number to a decimal number.
Please suggest how to do it; any pointers on the internet would be helpful.
You can use a Data Type Conversion block to convert back and forth between binary (i.e. boolean) types and various integer (int8, uint8, int16, etc.) or floating-point (single or double) types.
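If a MATLAB Function block turns out to be acceptable after all, a bit vector (MSB first) collapses to an integer with a weighted sum; a sketch, assuming a fixed-length input:

    function y = fcn(b)
    % b: row vector of 0s/1s, most significant bit first
    y = uint32(sum(double(b) .* 2.^(numel(b)-1:-1:0)));
    end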
I think this is what you're looking for:
How do I visualize the fixed-point data in binary or hex format using Simulink?