Does anyone know an efficient method to insert the ASCII value of a character into the 8 least significant bits (LSB) of a 16-bit number?
The only idea that comes to my mind is to convert both numbers to binary strings, then replace the last 8 characters of the 16-bit number's string with the 8-bit ASCII value. But as far as I know, string operations are very expensive in computational time.
Thanks
I don't know Matlab syntax, but in C, it would be something like this:
short x; // a 16-bit integer in many implementations
... do whatever you need to do to x ...
char a = 'a'; // some character
x = (x & 0xFF00) | (short)(a & 0x00FF);
The & operator is the bitwise "and" operator. The | operator is the bitwise "or" operator. Numbers beginning with 0x are in hexadecimal for easy readability.
Here is a MATLAB implementation of @user1118321's idea:
% 16-bit integer number
x = uint16(30000);
% character
c = 'a';
% replace the lower 8 bits
y = bitand(x, hex2dec('FF00'), class(x)) + cast(c-0, class(x))
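As a quick sanity check of the expression above: x is 30000 (0x7530), so the upper byte 0x75 should be kept and the lower byte should become 0x61, the ASCII code of 'a':
>> dec2hex(x)
ans =
7530
>> dec2hex(y)
ans =
7561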
I am new to Matlab and am having trouble using the mod function.
I am given a scrambled vector of lowercase characters and a shift value (which can be positive or negative) that I am supposed to add to the vector. I am supposed to use the mod function to wrap around the lowercase letters of the alphabet. For example, if the letter is 'a' and the shift amount is 4, the letter becomes 'e'. A negative shift means shifting towards 'a' in the alphabet. The shift should 'wrap' around the alphabet: 'x' shifted by 7 should become 'e'.
I have tried writing conditionals using if and elseif statements but I am supposed to use the mod function instead of conditionals.
mod(x,y) is the remainder of the division of x by y, and the result has the same sign as y. Thus, for positive y, the result is non-negative even when x is negative. This is different from how mod is defined in some other languages.
Obviously, y must be the number of characters in the range a-z. x is the 0-based index of the shifted character, which should be 0 for “a” and y-1 for “z”. You can get the index of a letter by simply subtracting the ASCII value of “a” (the shift is then added to this index):
letter - 'a'
Note that 'a' is a char that implicitly converts to the ASCII value of the letter in arithmetic operations.
The result of the mod operation then again returns one such index, which you can turn into a character by adding the ASCII value of “a”:
char(index + 'a')
Putting it all together, with the shift added before taking the mod:
char(mod(letter - 'a' + shift, 'z' - 'a' + 1) + 'a')
Instead of letter you can use a vector of letters (char array) in that expression.
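For example, applying the wrap-around case from the question to a small vector of letters (shift is just the shift amount from the problem statement):
letters = 'xyz';
shift = 7;
shifted = char(mod(letters - 'a' + shift, 'z' - 'a' + 1) + 'a')   % gives 'efg'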
I'd like to know how regexp is used for floating-point numbers, or whether there is any other function to do this.
For example, the following returns {'2', '5'} rather than {'2.5'}.
nums= regexp('2.5','\d+','match')
Regular expressions are a tool for low-level text parsing, and they have no concept of numeric datatypes. If you want to parse decimal numbers, you need to consider which characters make up a decimal number and design a regexp that explicitly matches all of them.
Your example only returns the '2' and '5' because your pattern only matches characters that are digits (\d). To handle decimal numbers, you need to explicitly include the . in your pattern. The following will match one or more digits, followed by 0 or 1 radix points, and 0 or more digits after the radix point.
regexp('2.5', '\d+\.?\d*', 'match')
This assumes that you'll always have a leading digit (i.e. not '.5')
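If you also need to match numbers without a leading digit (like '.5'), one possible variation, and certainly not the only way to write it, is to add an alternation for the leading-dot case:
regexp('.5 and 2.5', '\d+\.?\d*|\.\d+', 'match')   % returns {'.5', '2.5'}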
Alternatively, you may consider using something like textscan or sscanf to parse your string; these will be more robust than a custom regexp since they are aware of different numeric datatypes.
C = textscan('2.5', '%f');
C = sscanf('2.5', '%f');
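For example, on a string containing several numbers (the values here are just illustrative):
vals = sscanf('2.5 3.75 10', '%f');   % returns the column vector [2.5; 3.75; 10]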
If your string only contains this floating point number, you can just use str2double
val = str2double('2.5');
@Suever's answer has already been accepted; anyway, here is a more complete one that should accept all sorts of floating-point syntaxes (including NaN and +/-Inf by default):
% Regular expression for capturing a double value
function [s] = cdouble(supportPositiveInfinity, supportNegativeInfinity, supportNotANumber)
%[
if (nargin < 3), supportNotANumber = true; end
if (nargin < 2), supportNegativeInfinity = true; end
if (nargin < 1), supportPositiveInfinity = true; end
% A double
s = '[+\-]?(?:(?:\d+\.\d*)|(?:\.\d+)|(?:\d+))(?:[eE][+\-]?\d+)?'; %% This means a numeric double
% Extra for nan or [+/-]inf
extra = '';
if (supportNotANumber), extra = ['nan|' extra]; end
if (supportNegativeInfinity), extra = ['-inf|' extra]; end
if (supportPositiveInfinity), extra = ['inf|\+inf|' extra]; end
% Adding capture
if (~isempty(extra))
s = ['((?i)(?:', extra, s, '))']; % (?i) => Locally case-insensitive for captured group
else
s = ['(', s, ')'];
end
%]
end
Basically the above pattern says:
Optionally start with a '+' or '-' sign
Then either:
one or more digits, followed by a dot and then zero or more digits
a dot followed by one or more digits
one or more digits only
Then optionally an exponent, that is:
'e' or 'E', optionally followed by '+' or '-', then one or more digits
The pattern is then completed with support for Inf, +Inf, -Inf and NaN in a case-insensitive way. Finally, everything is enclosed between ( and ) for capturing purposes.
Here is some online test example: https://regex101.com/r/K87z6e/1
Or you can test in matlab:
>> regexp('Hello 1.235 how .2e-7 are INF you +.7 doing ?', cdouble(), 'match')
ans =
'1.235' '.2e-7' 'INF' '+.7'
I'd like to have a function generate(n) that generates the first n lowercase characters of the alphabet appended in a string (therefore: 1<=n<=26)
For example:
generate(3) --> 'abc'
generate(5) --> 'abcde'
generate(9) --> 'abcdefghi'
I'm new to Matlab and I'd be happy if someone could show me an approach to writing the function. For sure this will involve doing arithmetic with the ASCII codes of the characters, but I've no idea how to do this or which types Matlab provides for it.
I would rely on ASCII codes for this. You can convert an integer to a character using char.
So for example if we want an "e", we could look up the ASCII code for "e" (101) and write:
char(101)
'e'
This also works for arrays:
char([101, 102])
'ef'
The nice thing in your case is that in ASCII, the lowercase letters are all the numbers between 97 ("a") and 122 ("z"). Thus the following code works by taking ASCII "a" (97) and creating an array of length n starting at 97. These numbers are then converted using char to strings. As an added bonus, the version below ensures that the array can only go to 122 (ASCII for "z").
function output = generate(n)
output = char(97:min(96 + n, 122));
end
Note: For the upper limit we use 96 + n because if n were 1, then we want 97:97 rather than 97:98, as the latter would return "ab". This could be written as 97:(97 + n - 1), but the way I've written it, I've simply folded the "-1" into the constant.
You could also make this a simple anonymous function.
generate = @(n) char(97:min(96 + n, 122));
generate(3)
'abc'
To write the most portable and robust code, I would probably not want those hard-coded ASCII codes, so I would use something like the following:
output = 'a':char(min('a' + n - 1, 'z'));
...or, you can just generate the entire alphabet and take the part you want:
function str = generate(n)
alphabet = 'a':'z';
str = alphabet(1:n);
end
Note that this will fail with an index out of bounds error for n > 26, so you might want to check for that.
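If you do want that check, a minimal sketch of adding it (the error message text is just illustrative) could look like:
function str = generate(n)
if n < 1 || n > 26
    error('n must be between 1 and 26.');
end
alphabet = 'a':'z';
str = alphabet(1:n);
end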
You can use the char built-in function, which converts an integer value (or array) into a character array.
EDIT
Bug fixed (ref. Suever's comment)
function [str]=generate(n)
a=97;
% str=char(a:a+n)
str=char(a:a+n-1)
Hope this helps.
Qapla'
I am looking for a way to convert an array of 16-bit unsigned integer into ASCII char array. I am using char to do the conversion
D=[65 65 65 65];
char(D)
which will display 'AAAA'. However, since each number in D is 16-bit, I expect it to convert each number into 2 chars. For example, if I have
D=[16707]
char(D)
I expect it to give me two chars, 'A' and 'C'. But char always returns one character per element. Is there any way to force char to convert in the way I described? Thanks.
For this, you need to write your own function.
You can use char() to convert the most significant byte and the least significant byte separately.
k = 16707;
first = char(bitand(bitshift(k, -8), 255));
second = char(bitand(k, 255));
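With k = 16707, first is 'A' and second is 'C'. For a whole vector you could apply the same idea element-wise and interleave the results, for example (just one way to write it):
D = [16707 16708];
hi = char(bitand(bitshift(D, -8), 255));   % 'AA' (most significant bytes)
lo = char(bitand(D, 255));                 % 'CD' (least significant bytes)
out = reshape([hi; lo], 1, [])             % 'ACAD'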
Have a look at
http://www.mathworks.com/help/matlab/ref/char.html
As it states, char converts each numeric value into a single character; it will not split a 16-bit value into two 8-bit characters. You can instead convert the two bytes of each array element separately and concatenate the results for each pair.
Use typecast to convert each uint16 into two uint8 values, and then apply char. Make sure that the input to typecast is really of type uint16.
If you need to reverse char order, use swapbytes on the uint16 vector.
>> D = [16707 16708];
>> char(typecast(uint16(D),'uint8'))
ans =
CADA
>> char(typecast(swapbytes(uint16(D)),'uint8'))
ans =
ACAD
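The same typecast trick works in the other direction, in case you ever need to go back from the characters to uint16 (same little-endian byte order as above):
>> typecast(uint8('CADA'), 'uint16')
ans =
  16707  16708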
I don't know why there is an error in this code:
hex_str1 = '5'
bin_str1 = dec2bin(hex2dec(hex_str1))
hex_str2 = '4'
bin_str2 = dec2bin(hex2dec(hex_str2))
c=xor(bin_str1,bin_str2)
The value of c is not correct when I convert the hex values to binary and use the xor function. But when I use arrays, the value of c is correct. That code is:
e=[1 1 1 0];
f=[1 0 1 0];
g=xor(e,f)
What is the mistake in my first code when XORing the hex values converted to binary? Can anyone help me find the solution?
Your mistake is applying xor on two strings instead of actual numerical arrays.
For the xor command, logical "0"s are represented by actual zero elements. Any non-zero elements are interpreted as logical "1"s.
When you apply xor to two strings, the numerical value of each character (element) is its ASCII value. From xor's point of view, the zeros in your strings are not really zeros, but simply non-zero values (equal to the ASCII code of the character '0'), which are interpreted as logical "1"s. The bottom line is that in your example you're effectively xor-ing [1 1 1] and [1 1 1], and so the result is all zeros.
The solution is to convert your strings to logical arrays:
num1 = (bin_str1 == '1');
num2 = (bin_str2 == '1');
c = xor(num1, num2);
To convert the result back into a string (of a binary number), use this:
bin_str3 = sprintf('%d', c);
... and to a hexadecimal string, add this:
hex_str3 = dec2hex(bin2dec(bin_str3));
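Putting the whole thing together for the values in the question (both binary strings happen to have the same length here, 3 characters):
bin_str1 = dec2bin(hex2dec('5'));              % '101'
bin_str2 = dec2bin(hex2dec('4'));              % '100'
c = xor(bin_str1 == '1', bin_str2 == '1');     % [0 0 1]
bin_str3 = sprintf('%d', c);                   % '001'
hex_str3 = dec2hex(bin2dec(bin_str3))          % '1'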
It is really helpful, and gives me the correct conversion when forming an HMAC value in MATLAB...
But note that in MATLAB you cannot convert a binary string longer than 52 characters using the bin2dec() function, and similarly hex2dec() cannot take a hexadecimal string longer than 13 characters.
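One possible way around that limit, sketched here as a suggestion rather than a definitive solution, is to convert the hex string one digit at a time (each hex digit maps to exactly 4 bits), so the 52-bit restriction never applies:
hex_str = 'ABCDEF0123456789ABCD';                       % longer than 13 hex digits
% convert each hex digit to its 4-bit binary form, then join the rows
bin_str = reshape(dec2bin(hex2dec(num2cell(hex_str)), 4).', 1, []);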