I have a function in a Google Apps spreadsheet that takes a dollar figure, multiplies it by 100, and then returns the length of the resulting string...
function checkWrite(amount) {
  amount = amount * 100;
  amount = amount + '';        // convert the number to a string
  var aLength = amount.length;
  return aLength;
}
Calling it gives the following results:
amount -- length
4.00 -- 3
4.01 -- 3
4.02 -- 18
4.03 -- 3
4.04 -- 3
4.05 -- 3
4.06 -- 18
... and so on. I get many correct answers (3) and some seemingly random wrong ones (18).
Can anyone shed some light on what's going on? I'm not great at coding, but I'm pretty sure the results should be consistent regardless.
Most likely the numbers are not rounded the way you expect. Floating-point values are not represented exactly the way you think they are. To check this, return the generated string representation as well, e.g. change the last line to return aLength + ' ' + amount;
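As a quick check, here is the same arithmetic reproduced outside the spreadsheet as a minimal C sketch (Apps Script numbers are the same IEEE 754 doubles). Printing the products at full precision shows that some of them are not exactly whole hundreds, which is why the string representation, and hence its length, blows up for those inputs:

#include <stdio.h>

// Multiply the sample amounts by 100 and print the results at full
// precision. The long decimal expansions for some inputs are what make
// the string length jump from 3 to 18 in the spreadsheet function.
int main(void) {
    double amounts[] = { 4.00, 4.01, 4.02, 4.03, 4.04, 4.05, 4.06 };
    for (int i = 0; i < 7; i++) {
        printf("%.2f * 100 = %.17g\n", amounts[i], amounts[i] * 100);
    }
    return 0;
}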
Anyway, what is this function used for? If I had an idea of what you use it for, I could suggest an alternative solution.
I was trying to add a few system calls to the xv6 source code developed at MIT, and upon reading this resource (https://pdos.csail.mit.edu/6.S081/2020/xv6/book-riscv-rev1.pdf), on page 26, they discuss the maximum possible virtual address that xv6 RISC-V supports.
When I tried to look at the source code for the definition of MAXVA (MAXimum Virtual Address), I came across the following code snippet.
// one beyond the highest possible virtual address.
// MAXVA is actually one bit less than the max allowed by
// Sv39, to avoid having to sign-extend virtual addresses
// that have the high bit set.
#define MAXVA (1L << (9 + 9 + 9 + 12 - 1))
I was intrigued by the expression '9 + 9 + 9 + 12 - 1' instead of simply writing '38'. I tried to look up the underlying reasoning for this but did not find anything related. Is this some kind of optimization? If so, at what level is this relevant and where else could this be relevant?
I have some experience in assembly language programming and understand the basics of how C code is translated to assembly and what the final assembly for bit shifting might look like (using x86 salq and sarq, or RISC-V slli). Any hints or thoughts would be appreciated as well.
The answer can be inferred from vm.c:
// The risc-v Sv39 scheme has three levels of page-table
// pages. A page-table page contains 512 64-bit PTEs.
// A 64-bit virtual address is split into five fields:
// 39..63 -- must be zero.
// 30..38 -- 9 bits of level-2 index.
// 21..29 -- 9 bits of level-1 index.
// 12..20 -- 9 bits of level-0 index.
// 0..11 -- 12 bits of byte offset within the page.
The three 9s correspond to the three 9-bit page-table indexes, one per level.
The 12 corresponds to the 12-bit byte offset within the page.
To the compiler, writing 9 + 9 + 9 + 12 - 1 is the same as writing 38, but to a human reader who is aware of the bit fields, the first form is much clearer.
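To make the field layout concrete, here is a small illustrative C program; the helper names are mine, not xv6's. It splits an Sv39 virtual address into the three 9-bit indexes and the 12-bit offset, and shows that the macro is just 1L << 38:

#include <stdio.h>
#include <stdint.h>

// Illustrative only (helper names are not from xv6): split an Sv39 virtual
// address into the fields described in the vm.c comment above.
#define PGSHIFT 12                               // 12-bit byte offset within a page
#define MAXVA   (1L << (9 + 9 + 9 + 12 - 1))     // same value as 1L << 38

static uint64_t level_index(uint64_t va, int level) {
    // each of the three page-table indexes is 9 bits wide
    return (va >> (PGSHIFT + 9 * level)) & 0x1FF;
}

int main(void) {
    uint64_t va = MAXVA - 1;                     // highest usable address
    printf("MAXVA   = 0x%llx\n", (unsigned long long)MAXVA);
    printf("level-2 = %llu\n", (unsigned long long)level_index(va, 2));
    printf("level-1 = %llu\n", (unsigned long long)level_index(va, 1));
    printf("level-0 = %llu\n", (unsigned long long)level_index(va, 0));
    printf("offset  = 0x%llx\n", (unsigned long long)(va & 0xFFF));
    return 0;
}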
Hello everyone.
I found an issue: a latitude or longitude with, for example, 2 minutes and 59 seconds converts to the decimal value "0.049722", but
2 minutes and 60 seconds converts to "0.35", even though I thought it should be equal to
3 minutes and 00 seconds, which converts to "0.05".
But then
2 minutes and 61 seconds converts to the expected value "0.050278".
Is this a general geocoordinate issue or an issue with the online converter?
I use http://the-mostly.ru/konverter_geograficheskikh_koordinat.html
When looking at the source of the respective website, you notice the following line:
if (LAsec==60) {LAsec = 0;LAminutes = LAminutes+1;}
Since LAminutes is still a string at that point, the + performs string concatenation, so "2" becomes "21" instead of 3.
See: javascript (+) sign concatenates instead of giving sum?
In short, the website is very wrong!
Maybe you should use WolframAlpha for this:
https://www.wolframalpha.com/input/?i=0+deg+2%27+60%22+N,+0+deg+E
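For reference, here is a minimal sketch in C (function and variable names are my own) of the conversion the site is attempting, done with numbers instead of strings so that the seconds == 60 carry works:

#include <stdio.h>

// Convert degrees/minutes/seconds to decimal degrees, normalizing a
// seconds value of 60 into the minutes field first.
static double dms_to_decimal(int degrees, int minutes, double seconds) {
    if (seconds >= 60.0) {        // normalize: 2' 60" becomes 3' 00"
        seconds -= 60.0;
        minutes += 1;
    }
    return degrees + minutes / 60.0 + seconds / 3600.0;
}

int main(void) {
    printf("%f\n", dms_to_decimal(0, 2, 59));  // 0.049722
    printf("%f\n", dms_to_decimal(0, 2, 60));  // 0.050000, same as 3' 00"
    printf("%f\n", dms_to_decimal(0, 3, 0));   // 0.050000
    printf("%f\n", dms_to_decimal(0, 2, 61));  // 0.050278
    return 0;
}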
I am trying to program an ISL12022M RTC and am having trouble interpreting the register map (self-taught, with little experience). The documentation says that the RTC registers (SC, MN, HR, DT, MO, YR, DW) are BCD representations. In order to allow writes to the RTC registers, the WRTC bit (bit 6 of address 08h) is set to '1'. The map looks like this:
The FAQ example from the Intersil site tells me that to set the WRTC bit I need to send DEh (slave address), 08h (register address), and 41 (enable WRTC bit, other bits remain at default). Why not hex? Why 41 and not 40? And what do SC22 (in SC bit 6), SC21 (in bit 5), etc. mean?
Datasheet
Example
I've read the documentation until I can't see anymore and I've searched until I am just getting more confused. Any help is appreciated.
Well, it looks like these values in the map are nibbles. The range for the first register (seconds) is 0 - 59. When represented in BCD, 4 bits are needed for the ones digit and 3 bits are needed for the tens digit. So bits 0 - 3 belong to the first nibble; bit 0 = SC10: SC (register name), 1 (first nibble, the ones digit), 0 (first bit of that nibble). Bits 4, 5 and 6 belong to the second nibble; bit 4 = SC20: SC (register name), 2 (second nibble, the tens digit), 0 (first bit). Bit 7 is not needed.
The example sheet from Intersil has a typo; the WRTC value needs to be 40h or 41h.
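To make that concrete, here is a small illustrative sketch of the BCD packing and the WRTC value; the helper names are my own, not from the datasheet or any vendor code:

#include <stdio.h>
#include <stdint.h>

// Each RTC register holds two nibbles: bits 0-3 are the ones digit,
// bits 4-6 (bit 7 unused for seconds) are the tens digit.
static uint8_t to_bcd(uint8_t value) { return (uint8_t)(((value / 10) << 4) | (value % 10)); }
static uint8_t from_bcd(uint8_t bcd) { return (uint8_t)(((bcd >> 4) * 10) + (bcd & 0x0F)); }

#define WRTC_BIT 0x40   // bit 6 of the register at address 08h

int main(void) {
    // 37 seconds -> 0x37: tens nibble = 3 (SC22..SC20), ones nibble = 7 (SC13..SC10)
    printf("37 seconds in BCD: 0x%02X\n", to_bcd(37));
    printf("0x59 in BCD is %d seconds\n", from_bcd(0x59));
    printf("WRTC enable value: 0x%02X\n", WRTC_BIT);  // 40h; 41h additionally sets bit 0
    return 0;
}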
I understand how to define an array range in CoffeeScript
lng[1..10]
However if I have
data = 10
What's the best way to find if 10 is within a range of 1 and 11?
if data is between(1..11)
return true
There is no "between" keyword, but you can utilize a normal array-range:
if data in [1..11]
  alert 'yay'
But that's a bit of overkill, so in simple cases I'd recommend a normal comparison:
if 1 <= data <= 11
  alert 'yay'
If you don't mind polluting the native prototypes, you can add a between method to the Number objects:
Number::between = (min, max) ->
  min <= this <= max

if 10.between(1, 11)
  alert 'yay'
Although I personally wouldn't use it. if 1 <= something <= 11 is more direct, and everyone will understand it. The between method, on the other hand, has to be looked up if you want to know what it does (or you'd have to guess), and I don't think it adds that much.
I am working on a QR code encoding/decoding project.
I have read through ISO/IEC 18004 (2006) and some tutorials:
http://www.thonky.com/guides/
http://www.matchadesign.com/_blog/Matcha_Design_Blog/post/QR_Code_Demystified_-_Part_1/
http://www.swetake.com/qr/qr1_en.html
The ISO documentation and those very nice tutorials helped me a lot, but there's still one thing I can't understand: how to calculate the number of data/error correction blocks when creating a QR code at version 3 or higher.
The image below is from the ISO/IEC 18004 – 2006:
A version 7-H (H is the error correction level) symbol has 66 data codewords and 130 error correction codewords. Both are split into 5 blocks.
The document says that the number of blocks n (in this case n = 5) can be obtained from Table 9 (ISO 18004) according to the version and error correction level, but I can't seem to arrive at that number. Please show me how to calculate it.
Now I've got it. All the information needed for block splitting is actually in Table 9 of the ISO/IEC 18004 document; I had just read it carelessly.
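For anyone else who lands here, below is a small C sketch of how the 7-H numbers quoted in the question fall into blocks, assuming the layout Table 9 uses: the error correction codewords are spread evenly across the blocks, and the remainder of the data codewords means the last block(s) carry one extra data codeword each:

#include <stdio.h>

// Split the version 7-H totals from the question into blocks.
int main(void) {
    int data_codewords = 66;    // version 7-H, from the question
    int ec_codewords   = 130;
    int blocks         = 5;

    int short_data   = data_codewords / blocks;   // 13 data codewords per short block
    int long_blocks  = data_codewords % blocks;   // 1 block gets one extra data codeword
    int ec_per_block = ec_codewords / blocks;     // 26 error correction codewords each

    for (int i = 0; i < blocks; i++) {
        int data = (i < blocks - long_blocks) ? short_data : short_data + 1;
        printf("block %d: %d data + %d ec codewords\n", i + 1, data, ec_per_block);
    }
    return 0;
}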