I am trying to follow this tutorial: https://cylab.be/blog/92/measure-ambient-temperature-with-temper-and-linux to get the TEMPer USB sensor to measure ambient temperature, so that I can incorporate it into a Perl script that alerts me about a room's temperature. In the tutorial's example they convert the following bytes of data from the device:
Response from device (8 bytes):
80 80 0b 92 4e 20 00 00
to:
In the response, bytes 3 and 4 (so 0b 92) indicate the ambient temperature:
0b 92 converted into decimal is 2962
2962 divided by 100 is 29.62 C
Does anyone know how I can use Perl to translate such bytes of data to a decimal number and thus to a Celsius temperature?
Perl's hex function can translate hexadecimal numbers in text to Perl numbers, which you can then represent any way that you like:
my $string = '0b92';
my $number = hex($string);
print $number; # 2962
But, it sounds like you may be reading raw data from a device, and that the number you want is in two octets. Read those and turn them into a Perl number with unpack (with the appropriate format that respects the octet order):
my $buffer;
read $fh, $buffer, 2;
my $number = unpack 'S>', $buffer;
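Putting those together for the TEMPer response quoted in the question, here's a minimal sketch (assuming $fh is a filehandle already opened on the device; for sub-zero temperatures a signed s> would presumably be the right format instead):
my $buffer;
read $fh, $buffer, 8;                  # e.g. "\x80\x80\x0b\x92\x4e\x20\x00\x00"
my ($raw) = unpack 'x2 S>', $buffer;   # skip 2 bytes, then unsigned big-endian 16-bit
printf "%.2f C\n", $raw / 100;         # prints 29.62 C for the example bytes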
When I read The Swift Programming Language's Strings and Characters chapter, I didn't understand how U+203C (the double exclamation mark, ‼) can be represented by (226, 128, 188) in UTF-8.
How did that happen?
I hope you already know how UTF-8 reserves certain bits to indicate that the Unicode character occupies several bytes. (This website can help).
First, write 0x203C in binary:
0x203C = 10000000111100
So this character takes 14 bits to represent. Due to the "header bits" in the UTF-8 encoding scheme, which leave only 11 payload bits in a 2-byte sequence, it takes 3 bytes (carrying 16 payload bits) to encode it:
0x203C = 10 000000 111100, zero-padded to 16 bits: 0010 000000 111100

              1st byte   2nd byte   3rd byte
              --------   --------   --------
header        1110....   10......   10......
actual data       0010     000000     111100
              --------   --------   --------
full byte     11100010   10000000   10111100
decimal            226        128        188
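You can verify this with Perl's core Encode module (any language with a UTF-8 encoder will do):
use Encode qw(encode);
my $bytes = encode('UTF-8', "\x{203C}");        # encode U+203C as UTF-8
print join(', ', unpack 'C*', $bytes), "\n";    # 226, 128, 188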
I'm trying to translate the following MIPS instruction to hex:
beq $s1,$t3,label
It's also given that the command address is 0x1500, and the label address is 0x1000.
So far I know that beq's opcode equals 4 (hex), and I know the binary values of the registers.
I know that at first I need to convert to binary and then to hex, but I can't understand what to do with the label address. Do I need to divide it by 4 to get the value?
BEQ opcode is 000100 (binary).
The instruction format for BEQ is:
OpCode|SR|DR|Offset
where
OpCode (6 bits) is 000100
SR (5 bits) is 10001 for $s1 (register 17)
DR (5 bits) is 01011 for $t3 (register 11)
Offset (16 bits) is a signed word offset (the byte offset shifted right by 2), taken relative to PC+4, i.e. the instruction following the branch. It should be (0x1000 - 0x1504) >> 2 = -0x141, which written in 16-bit two's complement is 1111111010111111
You can now concatenate the bit fields and write them in hexadecimal if you wish:
0001 0010 0010 1011 1111 1110 1011 1111 which is 0x122BFEBF
[edit: added explanation of how to compute the offset]
To compute the offset you subtract the value of PC+4 (where PC stands for the address of the branch instruction) from the address of the target location. Then divide that difference by 4 (or shift it right two bits). As the offset is encoded in two's complement, if the result of the operation is negative, the two's-complement bit pattern of that negative value is what goes into the field.
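Here's a minimal Perl sketch of that computation, using the values from the question, as a cross-check of the result above:
my $pc     = 0x1500;                      # address of the beq instruction
my $target = 0x1000;                      # address of the label
my $offset = ($target - ($pc + 4)) / 4;   # word offset relative to PC+4: -321 = -0x141
my $insn   = (0b000100 << 26)             # opcode for beq
           | (17 << 21)                   # SR: $s1 is register 17
           | (11 << 16)                   # DR: $t3 is register 11
           | ($offset & 0xFFFF);          # 16-bit two's-complement offset field
printf "0x%08X\n", $insn;                 # 0x122BFEBF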
I am trying to represent the maximum 64-bit unsigned value in different bases.
For base 2 (binary) it would be 64 1's:
1111111111111111111111111111111111111111111111111111111111111111
For base 16 (hex) it would be 16 F's
FFFFFFFFFFFFFFFF
For base 10 (decimal) it would be:
18446744073709551615
I'm trying to get the representation of this value in base 36 (it uses 0-9 and A-Z). There are many online base converters, but they all fail to produce the correct representation because they are limited by 64-bit math.
Does anyone know how to use dc (an extremely hard-to-use calculator that can handle numbers of unlimited magnitude) to do this conversion? Either that, or can anyone tell me how I can perform this conversion with a calculator that won't fail due to integer roll-over?
I made a quick test with Ruby:
i = 'FFFFFFFFFFFFFFFF'.to_i(16)
puts i #18446744073709551615
puts i.to_s(36) #3w5e11264sgsf
You may also use larger numbers:
i = 'FFFFFFFFFFFFFFFF'.to_i(16) ** 16
puts i
puts i.to_s(36)
result:
179769313486231590617005494896502488139538923424507473845653439431848569886227202866765261632299351819569917639009010788373365912036255753178371299382143631760131695224907130882552454362167933328609537509415576609030163673758148226168953269623548572115351901405836315903312675793605327103910016259918212890625
1a1e4vngailcqaj6ud31s2kk9s94o3tyofvllrg4rx6mxa0pt2sc06ngjzleciz7lzgdt55aedc9x92w0w2gclhijdmj7le6osfi1w9gvybbfq04b6fm705brjo535po1axacun6f7013c4944wa7j0yyg93uzeknjphiegfat0ojki1g5pt5se1ylx93knpzbedn29
A short explanation of what happens with big numbers:
Normal numbers are Fixnums. When a number gets too large, it becomes a Bignum:
small = 'FFFFFFF'.to_i(16)
big = 'FFFFFFFFFFFFFFFF'.to_i(16) ** 16
puts "%i is a %s" % [ small, small.class ]
puts "%i\n is a %s" % [ big, big.class ]
puts "%i^2 is a %s" % [ small, (small ** 2).class ]
Result:
268435455 is a Fixnum
179769313486231590617005494896502488139538923424507473845653439431848569886227202866765261632299351819569917639009010788373365912036255753178371299382143631760131695224907130882552454362167933328609537509415576609030163673758148226168953269623548572115351901405836315903312675793605327103910016259918212890625
is a Bignum
268435455^2 is a Bignum
From the documentation of Bignum:
Bignum objects hold integers outside the range of Fixnum. Bignum objects are created automatically when integer calculations would otherwise overflow a Fixnum. When a calculation involving Bignum objects returns a result that will fit in a Fixnum, the result is automatically converted.
It can be done with dc, but the output is not extremely useful.
$ dc
36
o
16
i
FFFFFFFFFFFFFFFF
p
03 32 05 14 01 01 02 06 04 28 16 28 15
Here's the explanation:
Entering a number by itself pushes that number onto the stack.
o pops the stack and sets the output radix.
i pops the stack and sets the input radix.
p prints the top number on the stack in the current output radix. However, for any output radix above 16, dc prints each digit as a separate decimal value rather than as a single character.
In dc, the commands may all be put on the same line, like so:
$ dc
36o16iFFFFFFFFFFFFFFFFp
03 32 05 14 01 01 02 06 04 28 16 28 15
Get any language that can handle arbitrarily large integers. Ruby, Python, Haskell, you name it.
Implement the basic step: the number modulo 36 gives you the next digit, and division by 36 gives you the number with the last digit cut off (see the sketch after this list).
Map the digits to characters the way you like. For instance, '0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ'[digit] is fine by me. Note that the digits come out least-significant first, so prepend each one to the result (or append and reverse at the end).
???
Return the concatenated string of digits. Profit!
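Here's a minimal sketch of those steps in Perl, using the core Math::BigInt module so the arithmetic never overflows (lowercase digits, to match the Ruby output above):
use Math::BigInt;
my $n      = Math::BigInt->from_hex('FFFFFFFFFFFFFFFF');
my @digits = ('0' .. '9', 'a' .. 'z');
my $out    = '';
while ($n->is_positive) {
    my ($q, $r) = $n->copy->bdiv(36);   # quotient and remainder in one call
    $out = $digits[$r] . $out;          # prepend: digits come out least-significant first
    $n   = $q;
}
print $out, "\n";                       # 3w5e11264sgsf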
Why do we have Base64 encoding? I am a beginner and I really don't understand why you would obfuscate the bytes into something else (unless it is encryption). In one of the books I read, Base64 encoding is useful when binary transmission is not possible, e.g. when we post a form it is encoded. But why do we convert bytes into letters? Couldn't we just convert bytes into string format with a space in between? For example, 00000001 00000004? Or simply 0000000100000004 without any space, because bytes always come in groups of 8 bits?
Base64 is a way to encode binary data into an ASCII character set known to pretty much every computer system, in order to transmit the data without loss or modification of the contents itself.
For example, mail systems cannot deal with binary data because they expect ASCII (textual) data. So if you want to transfer an image or another file, it will get corrupted because of the way those systems deal with the data.
Note: Base64 encoding is NOT a way of encrypting, nor a way of compacting data. In fact, a Base64-encoded piece of data is 4/3 (about 1.33) times the size of the original. It is only a way to be sure that no data is lost or modified during the transfer.
Base64 is a mechanism to represent and transfer binary data over mediums that allow only printable characters. It is the most popular form of “Base Encoding”; the others in common use are Base16 and Base32.
The need for Base64 arose from the need to attach binary content to emails, like images, videos, or arbitrary binary files. Since SMTP [RFC 5321] only allowed 7-bit US-ASCII characters within messages, there was a need to represent these binary octet streams using the seven-bit ASCII characters.
Hope this answers the question.
Base64 is a more or less compact way of transmitting (encoding, in fact, but with the goal of transmitting) any kind of binary data.
See http://en.wikipedia.org/wiki/Base64
"The general rule is to choose a set of 64 characters that is both part of a subset common to most encodings, and also printable."
That serves a very general purpose, and the common need is not to waste more space than needed.
Historically, it's based on the fact that there is a subset of characters common to (almost) all encodings used to store characters in bytes, and that many of the 2^8 possible byte values risk loss or transformation during a simple data transfer (for example, a copy-paste-email-send-email-receive-copy-paste sequence).
(Please redirect upvotes to Brian's comment; I just made it more complete and hopefully clearer.)
For data transmission, data can be textual or non-text (binary), like images, video, or files.
During transmission, many channels can only carry a stream of textual/printable characters safely, hence we need a way to encode non-text data like images, video, and files.
The binary representation of non-text data (image, video, file) is easily obtainable.
That binary representation is then encoded into textual form, where each output character is drawn from a set of sixty-four possible characters (A-Z, a-z, 0-9, + and /).
Table 1: The Base 64 Alphabet
Value Encoding    Value Encoding    Value Encoding    Value Encoding
    0 A              17 R              34 i              51 z
    1 B              18 S              35 j              52 0
    2 C              19 T              36 k              53 1
    3 D              20 U              37 l              54 2
    4 E              21 V              38 m              55 3
    5 F              22 W              39 n              56 4
    6 G              23 X              40 o              57 5
    7 H              24 Y              41 p              58 6
    8 I              25 Z              42 q              59 7
    9 J              26 a              43 r              60 8
   10 K              27 b              44 s              61 9
   11 L              28 c              45 t              62 +
   12 M              29 d              46 u              63 /
   13 N              30 e              47 v
   14 O              31 f              48 w           (pad) =
   15 P              32 g              49 x
   16 Q              33 h              50 y
This sixty-four-character set is called Base64, and encoding data into this set of sixty-four allowed characters is called Base64 encoding.
Let us take a few examples of ASCII strings encoded to Base64:
1 ==> MQ==
12 ==> MTI=
123 ==> MTIz
1234 ==> MTIzNA==
12345 ==> MTIzNDU=
123456 ==> MTIzNDU2
Here a few points are to be noted:
Base64 encoding is produced in blocks of 4 characters: each group of 3 input bytes (24 bits) is split into four 6-bit values, and each 6-bit value maps to one character from the table above. If the input is shorter than a full 3-byte group, the remaining output characters are padded with =.
= is not part of the Base64 character set. It is used just for padding.
Hence, one can see that Base64 encoding is not encryption, but just a way to transform any given data into a stream of printable characters which can be transmitted over a network.
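For completeness, the examples above can be reproduced in Perl with the core MIME::Base64 module:
use MIME::Base64 qw(encode_base64 decode_base64);
for my $s (qw(1 12 123 1234 12345 123456)) {
    # The second argument '' suppresses the newline encode_base64 appends by default.
    printf "%-6s ==> %s\n", $s, encode_base64($s, '');
}
print decode_base64('MTIzNDU='), "\n";   # round-trips back to 12345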
I am trying to unpack a variable containing a string received from a spectrum analyzer:
#42404?û¢-+Ä¢-VÄ¢-oÆ¢-8æ¢-bÉ¢-ôÿ¢-+Ä¢-?Ö¢-sÉ¢-ÜÖ¢-¦ö¢-=Æ¢-8æ¢-uô¢-=Æ¢-\Å¢-uô¢-?ü¢-}¦¢-=Æ¢-)...
The format is Real 32, which uses four bytes to store each value. In the header #42404, the first 4 says that the next four digits (2404) give the byte count, and 2404/4 = 601 points were collected. The data starts after #42404. Now, when I receive this into a string variable,
$lp = ibqry($ud,":TRAC:DATA? TRACE1;*WAI;");
I am not sure how to convert this into an array of numbers :(... Should I use something like the following?
@dec = unpack("d", $lp);
I know this is not working, because I am not getting the right values and the number of data points for sure is not 601...
First, you have to strip the #42404 off and hope none of the following binary data happens to be an ASCII number.
$lp =~ s{^#\d+}{};
I'm not sure what format "Real 32" is, but I'm going to guess that it's a single-precision floating point, which is 32 bits long. Looking at the pack docs, d is "double-precision float"; that's 64 bits. So I'd try f, which is "single precision".
#dec = unpack("f*", $lp);
Whether your data is big- or little-endian is a problem. d and f use your computer's native endianness. You may have to force endianness using the > and < modifiers (which go right after the type letter, before the repeat count):
@dec = unpack("f>*", $lp); # big endian
@dec = unpack("f<*", $lp); # little endian
If the first 4 encodes the number of remaining digits (2404) before the floats, then something like this might work:
my @dec = unpack "x a/x f>*", $lp;
The x skips the leading #, the a/x reads one digit and skips that many characters after it, and the f>* parses the remaining string as a sequence of 32-bit big-endian floats. (If the output looks weird, try using f<* instead.)
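Putting the pieces together, here's a minimal sketch assuming the instrument always sends a definite-length block in that format ('#', one digit giving the number of count digits, the byte count, then the raw big-endian singles):
sub parse_block {
    my ($block) = @_;
    my ($n_digits) = $block =~ /\A#(\d)/
        or die "not a definite-length block";
    my $byte_count = substr $block, 2, $n_digits;            # e.g. "2404"
    my $payload    = substr $block, 2 + $n_digits, $byte_count;
    return unpack 'f>*', $payload;                           # try f<* if the values look wrong
}
my @dec = parse_block($lp);           # $lp as read from the instrument
print scalar(@dec), " points\n";      # expect 601 for a 2404-byte payload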