I was reviewing this NEC document for programming a projector. It shows RS232 command examples using number formats such as 20h + 81h + 01h + 60h + 01h + 00h = 103h. From other sections in the document it would seem that h = 15, though I could be wrong.
I'm a bit embarrassed to ask, but what number format is this (20h or 103h)?
It's hexadecimal
20h == 0x20 == 32
I hadn't seen this kind of notation in a while. I remember it being used for old PC BIOS/DOS interrupt tables: http://spike.scu.edu.au/~barry/interrupts.html
It's hex.
20h = 32
01h = 1
Similar to the 0x notation. E.g. 0x20 = 20h = 32.
In Section 2.1 of the document you linked:
Command/response A series of strings enclosed in a frame represents a
command or response (in hexadecimal notation).
It's hexadecimal.
That means the number base is 16 instead of 10.
So the digits are 0 1 2 3 4 5 6 7 8 9 A B C D E F.
Adding 20h + 81h equals A1h
or 32 + 129 = 161.
Another notation is the 0x prefix used in the C family of languages, e.g. 0x20.
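Since the other examples in this thread use MATLAB, here is a quick illustrative check with the standard hex2dec and dec2hex functions; it reproduces the sum from the question:
>> hex2dec('20') + hex2dec('81')
ans =
161
>> dec2hex(161)
ans =
A1
>> dec2hex(sum(hex2dec({'20'; '81'; '01'; '60'; '01'; '00'})))
ans =
103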
When I apply the function dwt2() on an image, I get the four subband coefficients. By choosing any of the four subbands, I work with a 2D matrix of signed numbers.
In each value of this matrix I want to embed 3 bits of information, i.e., the numbers 0 to 7 in decimal, in the 3 least significant bits. However, I don't know how to do that when I deal with negative numbers. How can I modify the coefficients?
First of all, you want to use an Integer Wavelet Transform, so you only have to deal with integers. This gives you a lossless transformation between the two spaces without having to round floating-point numbers.
Embedding bits in integers is a straightforward problem for binary operations. Generally, you want to use the pattern
(number AND mask) OR bits
The bitwise AND operation clears out the desired bits of number, which are specified by mask. For example, if number is an 8-bit number and we want to zero out the last 3 bits, we'll use the mask 11111000. After the desired bits of our number have been cleared, we can set them to the bits we want to embed using the bitwise OR operation.
Next, you need to know how signed numbers are represented in binary; make sure you understand two's complement. We can see that if we want to clear out the last 3 bits, we want to use the mask ...11111000, which is always -8. This is regardless of whether we're using 8, 16, 32 or 64 bits to represent our signed numbers. Generally, if you want to clear the last k bits of a signed number, your mask must be -2^k.
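You can see this in MATLAB by viewing the bit pattern of -8 as a 16-bit integer (just an illustration; typecast reinterprets the bytes as unsigned so that dec2bin can display them):
>> dec2bin(typecast(int16(-8), 'uint16'), 16)
ans =
1111111111111000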
Let's put everything together with a simple example. First, we generate some numbers for our coefficient subband and embedding bitstream. Since the coefficient values can take any value in [-510, 510], we'll use 'int16' for the operations. The bitstream is an array of numbers in the range [0, 7], since that's [000, 111] in binary.
>> rng(4)
>> coeffs = randi(1021, [4 4]) - 511
coeffs =
477 202 -252 371
48 -290 -67 494
483 486 285 -343
219 -504 -309 99
>> bitstream = randi(8, [1 10]) - 1
bitstream =
0 3 0 7 3 7 6 6 1 0
We embed our bitstream by overwriting the necessary coefficients.
>> coeffs(1:numel(bitstream)) = bitor(bitand(coeffs(1:numel(bitstream)), -8, 'int16'), bitstream, 'int16')
coeffs =
472 203 -255 371
51 -289 -72 494
480 486 285 -343
223 -498 -309 99
We can then extract our bitstream by using the simple mask ...00000111 = 7.
>> bitand(coeffs(1:numel(bitstream)), 7, 'int16')
ans =
0 3 0 7 3 7 6 6 1 0
I'm trying to make a linear algebra-based algorithm for the shift (Caesar) cipher. Suppose I have the string 'hello'. To convert it into an integer matrix I do this:
'hello' - 'a'
And the result is
ans =
7 4 11 11 14
This is the desired result. But if I subtract the character 'g' the result is
ans =
1 -2 5 5 8
I'd like to ask what happens in MATLAB (or Octave) when I subtract a character and get the results above.
As Mohit Jain wrote, the results you get are based on a conversion to ASCII, which is the most widely accepted way to numerically encode textual information. ASCII is also included as a subset in the current Unicode standard, and on supporting platforms MATLAB actually uses a 16-bit Unicode encoding. This enables it to represent not only the 95 printable characters of ASCII that support English text, but also a large number of international scripts and special characters for applications in mathematics, typography and many other fields. Explicit conversion between numeric and character data in MATLAB is done through char and double:
>> double('aAΔ')
ans =
97 65 916
The lowercase Latin letter 'a' has the ASCII code 97, the capital Latin letter 'A' the ASCII code 65, and the capital Greek letter Delta has the Unicode code point 916. Since the Latin letters are encoded in sequence, with codes 97 to 122 for lowercase letters and 65 to 90 for capitals, you can generate the English alphabet e.g. like this:
>> char(65 : 90)
ans =
ABCDEFGHIJKLMNOPQRSTUVWXYZ
When you apply an arithmetic operator like - to character strings, the characters are implicitly converted to numbers, as if you had used double:
>> double('hello')
ans =
104 101 108 108 111
>> double('g')
ans =
103
and therefore 'hello' - 'g' is the same as
>> [104 101 108 108 111] - 103
ans =
1 -2 5 5 8
It changes the characters of the string to their ASCII values and then subtracts each value.
'hello' - 'a' = 7 4 11 11 14, because h - a = 8 - 1 = 7
(these should be ASCII values, but I'm using the alphabet positions for simplicity because it's all relative),
e - a = 5 - 1 = 4,
l - a = 12 - 1 = 11, and so on.
For 'hello' - 'g':
h - g = 8 - 7 = 1,
e - g = 5 - 7 = -2, and so on.
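To tie this back to the original shift-cipher question, here is a minimal sketch using exactly this arithmetic (my own illustration, assuming lowercase letters only and a shift of 3; mod wraps the result around the alphabet):
>> shift = 3;
>> cipher = char(mod('hello' - 'a' + shift, 26) + 'a')
cipher =
khoor
>> char(mod(cipher - 'a' - shift, 26) + 'a')
ans =
hello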
I need some help to understand how DEFLATE encoding works. I know that it is a combination of the LZSS algorithm and Huffman coding.
So let's encode, for example, "Deflate late". Params: [search buffer: 8 KB and look-ahead buffer: 4 KB]. Well, the output of the LZSS algorithm is "Deflate <5, 4>". The next step uses static Huffman coding to reduce the redundancy. Here is my problem: I don't know how I should encode this pair <5, 4> with Huffman.
[Edited]
D 000
f 001
l 010
a 011
t 100
_ 101
e 11
So, according to this table, the string "Deflate " is written as 000 11 001 010 011 100 11 101. As a next step, let's encode the pair (5, 4). The fixed prefix code of the length 4 according to the book "Data Compression - The Complete Reference" is 258, followed by the fixed prefix code of the distance 5 (code 4 + 1 extra bit).
That can be summarized as:
length 4 -> 258 -> 0000010
distance 5 -> 4 + 1 extra bit -> 00100|0
So, the encoded string is written as [header: 1 01] 000 11 001 010 011 100 11 101 0000010 001000 [end-of-block: 0000000]. BUT if I create a Huffman tree, it is not a static Huffman anymore, right?
Good day
D 000
f 001
l 010
a 011
t 100
_ 101
e 11
That table is not the Deflate static code. The static literal/length codes are all 7, 8, or 9 bits, and the distance codes are all 5 bits. You asked about the static codes.
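For reference, the static code lengths from RFC 1951 can be written down directly, e.g. in MATLAB (a sketch of the table, nothing more):
>> litlen_bits = [repmat(8,1,144) repmat(9,1,112) repmat(7,1,24) repmat(8,1,8)];
>> litlen_bits([1 145 257 281])   % lengths for literal/length values 0, 144, 256, 280
ans =
8 9 7 8
>> dist_bits = repmat(5,1,30);    % all 30 distance codes are 5 bits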
'Deflate late', encoded in the static deflate format as the literals 'Deflate ' followed by a length 4, distance 5 match, is in hex:
73 49 4d cb 49 2c 49 55 00 11 00
That is broken down as follows (bits are read from the least significant part of each byte first):
011 - 01 means fixed code, 1 means last block
00101110 - D
10101001 - e
01101001 - f
00111001 - l
10001001 - a
00100101 - t
10101001 - e
00001010 - space
0100000 - length 4
00100 - distance 5 or 6 depending on one extra bit
0 - extra bit -> distance 5
0000000 - end code
0 - fill bit to byte boundary
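If you want to check the bit stream yourself, here is a small MATLAB illustration (my own sketch) that prints each byte with its bits in the order a deflate decoder reads them, least significant bit first:
>> bytes = sscanf('73 49 4d cb 49 2c 49 55 00 11 00', '%x');
>> for k = 1:numel(bytes)
       fprintf('%02x -> %s\n', bytes(k), fliplr(dec2bin(bytes(k), 8)));
   end
73 -> 11001110
49 -> 10010010
...
The first bits read from 73 are 1 (last block) followed by 1, 0 (block type 01, fixed codes), matching the breakdown above.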
I'm working on a program that converts between number bases. For example, octal is base 8 and decimal is base 10. The letters A to Z could be considered as base 26.
I want to convert a number like "A" into 0, Z into 25, "AA" into 27 and "BA" into 53.
Before I start coding I'm doing it on paper so I understand the process. To start out I'm trying to convert 533 to base 26.
What algorithm is best for doing this?
You need to assign a "digit" to each letter, like:
A = 0 N = 13
B = 1 O = 14
C = 2 P = 15
D = 3 Q = 16
E = 4 R = 17
F = 5 S = 18
G = 6 T = 19
H = 7 U = 20
I = 8 V = 21
J = 9 W = 22
K = 10 X = 23
L = 11 Y = 24
M = 12 Z = 25
Then, your 533 = {20,13} becomes UN.
Converting back is UN -> {20,13} -> (20 * 26 + 13) -> 533.
By way of further example, let's try the number 10163, just plucked out of the air at random.
Divide that by 26 until you get a number less than 26 (i.e., twice), and you get 15 with a fractional part of 0.03402366.
Multiply that by 26 and you get 0 with a fractional part of 0.88461516.
Multiply that by 26 and you get 23 (actually 22.99999416 on my calculator but, since the initial division was only two steps, we stop here - the very slight inaccuracy is due to the fact that the floating point numbers are being rounded).
So the "digits" are {15,0,23} which is the "number" PAX. Wow, what a coincidence?
To convert PAX back into decimal, it's
P * 26^2 + A * 26^1 + X * 26^0
or
(15 * 676) + (0 * 26) + 23
= 10140 + 0 + 23
= 10163
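Here is that algorithm as a minimal MATLAB/Octave sketch (my own illustration, assuming A = 0 through Z = 25 as in the table above):
>> n = 10163; s = '';
>> while true
       s = [char(mod(n, 26) + 'A'), s];   % take the next digit, prepend its letter
       n = floor(n / 26);
       if n == 0, break; end
   end
>> s
s =
PAX
>> d = s - 'A';                           % PAX -> [15 0 23]
>> sum(d .* 26.^(numel(d)-1:-1:0))        % and back to decimal
ans =
10163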
Let's take a step back for a second, and look at decimal.
What does a number like "147" mean? Or rather, what do the characters '1', '4' and '7', when arranged like that, indicate?
There are ten digits in decimal, and after that, we add another digit to the left of the first, and so on as our number increases. So after "9" = 9*1, we get "10" = 1*10 + 0*1. So "147" is 1*10^2 + 4*10 + 7*1 = 147. Similarly, we can go backwards - 147/10^2 = 1, which maps to the character '1'. (147 % 10^2) / 10 = 4, which maps to the character '4'. And 147 % 10 = 7, which maps to the character '7'.
This works for any base N - if we get the number 0, that maps to the first character in our set. The number 1 maps to the second character, and so on until the number N-1 maps to the last character in our set of digits.
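In MATLAB terms, the digit extraction for 147 described above looks like this (illustration only):
>> n = 147;
>> [floor(n/10^2), mod(floor(n/10), 10), mod(n, 10)]
ans =
1 4 7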
You convert 20 and 13 to the symbols that represent 20 and 13 in your base-26 notation. It sounds like you are using the letters of the alphabet, so that would be UN (where A is 0 and Z is 25).
What language are you writing this in? If you're doing this in Perl you can use the CPAN module Math::Fleximal that I wrote many years ago while I was bored. If you're using a language with infinite precision integers, then life becomes much easier. All you have to do is take characters, convert them into an array of integers, then do the calculation to turn that into a number.
I currently have an instrument that sends 4 bytes representing a 32-bit floating-point number in little-endian format. The data looks like:
Gz*=
<«�=
N×e=
or this
à|ƒ=
Is there a conversion for this in MATLAB, in Agilent VEE, and done manually?
To convert an array of char to single, you can use typecast (after converting the chars to their byte values, since typecast expects numeric input):
c = 'Gz*=';
f = typecast(uint8(c), 'single')
f = 0.041621
Just implicitly!
>> data = ['Gz*=';'<«�=';'N×e=']
data =
Gz*=
<«�=
N×e=
>> data+0
ans =
71 122 42 61
60 171 65533 61
78 215 101 61
data+0 forces it to be interpreted as a number, which is fine.
If it's interpreted backwards (I'm not sure whether MATLAB is big- or little-endian), just use the swapbytes function.
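Putting both answers together, a minimal sketch for decoding one 4-byte record (assuming the bytes really are a little-endian IEEE 754 single, as the question states; swapbytes is only needed if the values come out byte-swapped):
>> raw = uint8('Gz*=');          % the 4 raw bytes from the instrument
>> f = typecast(raw, 'single')   % reinterpret the bytes as a 32-bit float
f = 0.041621
>> swapbytes(f);                 % use this if the byte order turns out reversed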