What do < or > mean in LDA/STA command? - 6502

I understand the basics of the 6502 instruction set but have come across this code which is confusing me.
Can't find any reference to these in the 6502 manuals I have.
What do the < and > signify?
CLBASE = $100
BPTR = $25
ARM .BYT $1,$2
LDA #<ARM
STA BPTR
LDA #>ARM
STA BPTR+1
LDA #>CLBASE

The < operator selects the low byte of its operand and > selects the high byte; the # in front makes it an immediate value, so LDA #<ARM loads the low byte of ARM's address.
E.g.
LDA #>CLBASE ;This will be #$01
LDA #<CLBASE ;This will be #$00
This is an assembler convention supported by most assemblers across the range of 6502-derived devices (ACME, for instance). Here's the relevant section from WDC's W65C816S 8/16-bit Microprocessor datasheet:
| Operand | One Byte Result | Two Byte Result |
|-------------|-----------------|-----------------|
| #$01020304 | 04 | 0403 |
| #<$01020304 | 04 | 0403 |
| #>$01020304 | 03 | 0302 |
| #^$01020304 | 02 | 0201 |
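If it helps to see the arithmetic behind these operators outside of assembler syntax, here is a small Scala sketch (not from the original answer) that computes the same bytes for the operand $01020304:

object OperandBytes {
  def main(args: Array[String]): Unit = {
    val operand = 0x01020304
    val low  = operand & 0xFF          // what #< selects: 0x04
    val high = (operand >> 8) & 0xFF   // what #> selects: 0x03
    val bank = (operand >> 16) & 0xFF  // what #^ selects: 0x02 (65816 only)
    printf("low=%02X high=%02X bank=%02X%n", low, high, bank)
  }
}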


Why do some characters become ? and others become ☐ (␇) when encoding into a code page?

Short version
What's the reasoning behind the mapper sometimes using ? and other times using ☐?
Unicode: €‚„…†‡ˆ‰Š‹ŒŽ‘’“”•–—˜™š›œžŸ
In 850:  ?'".┼╬^%S<OZ''""☐--~Ts>ozY
The ? (under €) and the ☐ (under •) are replacements; every other character got a best fit.
CS Fiddle
Long Version
I was encoding some text to code page 850, and while a lot of the characters that users use exist perfectly in the 850 code page, there are some that don't match exactly. Instead, the mapper (e.g. .NET System.Text.Encoding, or the Winapi WideCharToMultiByte) provides a best fit:
| Character | In code-page 850 |
|-----------|------------------|
| | |
| | |
| ‚ U+201A | ' |
| | |
| „ U+201E | " |
| … U+2026 | . |
| † U+2020 | ┼ |
| ‡ U+2021 | ╬ |
| ˆ U+02C6 | ^ |
| ‰ U+2030 | % |
| Š U+0160 | S |
| ‹ U+2039 | < |
| Œ U+0152 | O |
| | |
| Ž U+017D | Z |
| | |
| | |
| ‘ U+2018 | ' |
| ’ U+2019 | ' |
| “ U+201C | " |
| ” U+201D | " |
| | |
| – U+2013 | - |
| — U+2014 | - |
| ˜ U+02DC | ~ |
| ™ U+2122 | T |
| š U+0161 | s |
| › U+203A | > |
| œ U+0153 | o |
| | |
| ž U+017E | z |
| Ÿ U+0178 | Y |
These best fits are right, good, appropriate, wanted, and entirely reasonable.
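To make the behaviour concrete, here is a conceptual Scala sketch of what a best-fit encoder does. The table is a hand-written fragment based on the rows above, for illustration only; it is not the real WideCharToMultiByte data:

object BestFitSketch {
  // A hand-written fragment of the best-fit data above, for illustration only.
  val bestFit: Map[Char, Char] = Map(
    '\u201A' -> '\'',  // ‚ -> '
    '\u201E' -> '"',   // „ -> "
    '\u2030' -> '%',   // ‰ -> %
    '\u0160' -> 'S'    // Š -> S
  )

  // Encode one character: use the code page's exact mapping if there is one,
  // otherwise try the best-fit table, otherwise fall back to '?'.
  def encodeChar(c: Char, exact: Char => Option[Byte]): Byte =
    exact(c)
      .orElse(bestFit.get(c).flatMap(exact))
      .getOrElse('?'.toByte)
}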
But some characters do not map:
| Character | In code-page 850 |
|-----------|------------------|
| € U+20AC | ? 0x3F (literally 0x3F, i.e. U+003F QUESTION MARK) |
| • U+2022 | ☐ 0x07 (literally 0x07, i.e. U+0007 BELL) |
What's the deal?
Why is it sometimes a question mark, and other times a ␇?
Note: This lack of mapping isn't terribly important to me. If the federal government doesn't support a reasonable encoding, then they'll take the garbage I give them. So I'm fine with it.
A problem comes later when I try to call MultiByteToWideChar to reverse the mapping, and the function fails due to invalid characters. And while I can try to figure out the issue with reverse-encoding back into characters later, I'm curious what the encoding mapper is trying to tell me.
Bonus fun
The careful observer will understand why I chose the characters I did, in the order I did, and why there are gaps. I didn't want to mention it so as not to confuse readers of the question.
The answer is both subtle, and obvious.
When performing a mapping, the encoder tries to perform a best fit. So while a lot of the characters don't exist in the target code page, they can be approximated well enough.
Some characters have no equivalent and no best-fit mapping, and so are simply replaced with ?:
U+003F QUESTION MARK
So the text:
The price of petrol is € 1.56 in Germany.
Will unfortunately become:
The price of petrol is ? 1.56 in Germany.
The question mark means that the character has no equivalent and was just lost.
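The JVM's built-in IBM850 encoder (its name for code page 850) does no Windows-style best fit at all, but it does show this plain replacement path: anything it cannot map becomes the substitution byte 0x3F, i.e. ?. A small Scala sketch:

import java.nio.CharBuffer
import java.nio.charset.{Charset, CodingErrorAction}

object EuroTo850 {
  def main(args: Array[String]): Unit = {
    val encoder = Charset.forName("IBM850").newEncoder()
      .onUnmappableCharacter(CodingErrorAction.REPLACE) // substitute instead of throwing

    val bytes = encoder.encode(CharBuffer.wrap("€ 1.56"))
    while (bytes.hasRemaining) printf("%02X ", bytes.get())
    println()
    // prints: 3F 20 31 2E 35 36  (the euro sign is gone, replaced by '?' = 0x3F)
  }
}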
The other character is more subtle
In ASCII, the first 32 characters are control characters, e.g.
13: Carriage Return (␍)
10: Line Feed (␊)
9: Horizontal Tab (␉)
11: Vertical Tab (␋)
7: Bell (␇)
27: Escape (␛)
30: Record Separator (␞)
These control codes are generally unprintable. But code page 437 did something unique: it defined printable characters for those first 32 codes:
13: Eighth note (♪)
10: Inverse white circle (◙)
9: White circle (○)
11: Male Sign (♂)
7: Bullet (•)
27: Left Arrow (←)
30: Black up-pointing triangle (▲)
This has interesting implications if you had some text such as:
The price of petrol␍␊
• Germany: €1.56␍␊
• France: €1.49
When encoded in code page 850, it becomes:
The price of petrol♪◙• Germany: ?1.56♪◙• France: ?1.49
Notice 3 things:
The € symbol was lost; replaced with ?
The • symbol was retained
The CR LF symbols were lost; replaced with ♪ and ◙
Trying to decode the code page 437/850 back to real characters presents a problem:
If I want to retain my CRLF, I have to assume that the characters in the 1..32 range actually are ASCII control characters:
The price of petrol␍␊
␇ Germany: ?1.56␍␊
␇ France: ?1.49
If I want to retain my characters (e.g. ¶, •, §), I have to permanently lose my CRLF, and assume that the characters in 1..32 are actually characters:
The price of petrol♪◙• Germany: ?1.56♪◙• France: ?1.49
There's no good way out of this.
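To see the two poisons side by side from the JVM, here is a Scala sketch (not from the original answer). The standard IBM850 decoder takes the control-character interpretation, which is exactly why the question sees ␇; the glyph interpretation needs a table you supply yourself:

object ControlOrGlyph {
  def main(args: Array[String]): Unit = {
    // Bytes as they might come back from a best-fit encode to 850:
    // CR, LF, then 0x07 where a bullet used to be, then plain ASCII.
    val bytes = Array[Byte](0x0D, 0x0A, 0x07, 'H'.toByte, 'i'.toByte)

    // Choice 1: treat 0x00..0x1F as ASCII control characters.
    // The JVM's IBM850 decoder does this, so 0x07 comes back as U+0007 BELL.
    val asControls = new String(bytes, java.nio.charset.Charset.forName("IBM850"))
    println(asControls.map(c => f"U+${c.toInt}%04X").mkString(" "))
    // U+000D U+000A U+0007 U+0048 U+0069

    // Choice 2: treat 0x01..0x1F as the code page 437 picture glyphs.
    // No standard decoder does this; the fragment below is hand-written.
    val glyphs: Map[Byte, Char] = Map(
      0x07.toByte -> '\u2022', // •
      0x0A.toByte -> '\u25D9', // ◙
      0x0D.toByte -> '\u266A'  // ♪
    )
    // The ASCII passthrough fallback is good enough for this sketch.
    val asGlyphs = bytes.map(b => glyphs.getOrElse(b, (b & 0xFF).toChar)).mkString
    println(asGlyphs) // ♪◙•Hi  (the real CR/LF is gone for good)
  }
}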
Ideally, code page 437 would not have done this to the first 32 code points and would have kept them as control characters. And ideally, anyone trying to convert text such as:
• The price of petrol is € 1.56 in Germany ♪sad song♪
would come back with
? The price of petrol is ? 1.56 in Germany ?sad song?
But that's not what the 437 code page is.
It's a horrible mess where you have to pick your poison and die slowly.
Rest in Peace Michael Kaplan
This answer brought to you by "☼" (U+263c, a.k.a. WHITE SUN WITH RAYS)
A proud member of the glyph chars collection for more years than Seattle has seen sun
See Michael Kaplan's archived blog entry (🕗):
What the &%#$ does MB_USEGLYPHCHARS do?
I'm still angry at the Microsoft PM who shut down his blog out of spite.

Error in org-table-sum org-mode?

I am just getting started with Emacs org-mode and I am already getting really confused about a simple column sum (org-table-sum). I start with
| date | sum |
|------+-------|
| | 16.2 |
| | 6.16 |
| | 6.16 |
| | |
When I hit C-c + (org-table-sum) below the second column I get the correct sum 28.52. If I add another line to make it
| date | sum |
|------+-------|
| | 16.2 |
| | 6.16 |
| | 6.16 |
| | 13.11 |
| | |
C-c + gives me 41.629999999999995. ???
If I change the last line from 13.11 to 13.12, C-c + will give me (the correct) 41.64.
WTF?
Any explanation appreciated! Thanks!
Most decimal numbers cannot be represented exactly in binary floating point encoding (either single or double precision).
Test 13.11 here, to see that it has no exact binary representation: the nearest single-precision number is 13.109999656677246, and the nearest double-precision number (which is what Emacs uses) is 13.1099999999999994315658113919…
This problem is not Emacs-related; it is a fundamental issue of representing decimal fractions in a binary floating point format.
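The effect is easy to reproduce outside Emacs with ordinary IEEE-754 doubles; a small Scala sketch:

object FloatSum {
  def main(args: Array[String]): Unit = {
    println(16.2 + 6.16 + 6.16)          // 28.52 (this one happens to round back nicely)
    println(16.2 + 6.16 + 6.16 + 13.11)  // 41.629999999999995
    // The literal 13.11 is stored as the nearest double, which is
    // 13.1099999999999994315658113919... :
    println(new java.math.BigDecimal(13.11))
  }
}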
Using calc's vsum, the result is OK:
| date | sum |
|------+-------|
| | 16.2 |
| | 6.16 |
| | 6.16 |
| | 13.11 |
|------+-------|
| | 41.63 |
#+TBLFM: #6$2=vsum(#I..#II)
This works because calc works with arbitrary precision and will not encode the numbers in a binary floating point format.
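calc keeps the numbers as exact decimals; the same idea, sketched here with Scala's BigDecimal rather than calc itself:

object ExactSum {
  def main(args: Array[String]): Unit = {
    // Parsing the decimals from strings means no binary rounding ever happens.
    val xs = List("16.2", "6.16", "6.16", "13.11").map(BigDecimal(_))
    println(xs.sum) // 41.63
  }
}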

Scala find missing values in a range

For a given range, for instance
val range = (1 to 5).toArray
val ready = Array(2,4)
the missing values (not ready) are
val missing = range.toSet diff ready.toSet
Set(5, 1, 3)
The real use case includes thousands of range instances with (possibly) thousands of missing or not ready values. Is there a more time-efficient approach in Scala?
Scala implements the diff operation as a foldLeft with the left operand as the accumulator, from which each element of the right operand is removed. Let's assume that the left and right operands have m and n elements, respectively.
Calling toSet on an Array or Range object returns a HashTrieSet, which is a HashSet implementation and thus offers a remove operation with a complexity of almost O(1). Building the two hash sets costs O(m) and O(n), and the n removals cost O(n); since ready is drawn from range (n ≤ m), the overall diff-based computation is O(m).
To see that this is actually quite good, consider a different approach: sort both collections and then traverse them once, merge-sort style, eliminating all elements which occur in both. That gives you a complexity of O(max(m, n) * log(max(m, n))), because you have to sort both collections first.
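For comparison, here is a sketch of that sort-and-sweep idea (not from the original answer). Since range is already ordered, only ready needs sorting, and the sweep itself is linear:

object MissingBySweep {
  // All values of `range` that do not occur in `ready`.
  // Assumes `range` is sorted ascending; sorts `ready` itself.
  def missing(range: Array[Int], ready: Array[Int]): Array[Int] = {
    val sortedReady = ready.sorted
    val out = Array.newBuilder[Int]
    var j = 0
    for (x <- range) {
      while (j < sortedReady.length && sortedReady(j) < x) j += 1
      if (j >= sortedReady.length || sortedReady(j) != x) out += x
    }
    out.result()
  }

  def main(args: Array[String]): Unit =
    println(missing((1 to 5).toArray, Array(2, 4)).mkString(", ")) // 1, 3, 5
}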
Update
I ran some experiments to investigate whether you can speed up the computation by using mutable hash sets instead of immutable ones. The result, shown in the tables below, is that it depends on the size ratio of range and ready.
It seems as if using immutable hash sets is more efficient while ready.size/range.size < 0.2; above this ratio, the mutable hash sets outperform the immutable ones.
For my experiments, I set range = (1 to n), with n being the number of elements in range. For ready I selected a random subset of range with the respective number of elements. I repeated each run 20 times and summed up the times measured with System.currentTimeMillis().
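The benchmark code itself isn't shown here; the mutable-set variant being compared is presumably something along these lines (a sketch with hypothetical names):

import scala.collection.mutable

object MutableDiff {
  // Values of `range` that are not in `ready`, using a mutable HashSet.
  def missing(range: Array[Int], ready: Array[Int]): mutable.HashSet[Int] = {
    val set = mutable.HashSet.empty[Int]
    set ++= range // m inserts
    set --= ready // n removals
    set
  }
}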
range.size == 100000
+-----------+-----------+---------+
| Fraction | Immutable | Mutable |
+-----------+-----------+---------+
| 0.01 | 28 | 111 |
| 0.02 | 23 | 124 |
| 0.05 | 39 | 115 |
| 0.1 | 113 | 129 |
| 0.2 | 174 | 140 |
| 0.5 | 472 | 200 |
| 0.75 | 722 | 203 |
| 0.9 | 786 | 202 |
| 1.0 | 743 | 212 |
+-----------+-----------+---------+
range.size == 500000
+-----------+-----------+---------+
| Fraction | Immutable | Mutable |
+-----------+-----------+---------+
| 0.01 | 73 | 717 |
| 0.02 | 140 | 771 |
| 0.05 | 328 | 722 |
| 0.1 | 538 | 706 |
| 0.2 | 1053 | 836 |
| 0.5 | 2543 | 1149 |
| 0.75 | 3539 | 1260 |
| 0.9 | 4171 | 1305 |
| 1.0 | 4403 | 1522 |
+-----------+-----------+---------+

scala specialization - using object instead of class causes slowdown?

I've done some benchmarks and have results that I don't know how to explain.
The situation in a nutshell:
I have two classes doing the same (computation-heavy) thing with generic arrays; both of them use specialization (@specialized, abbreviated below to @spec). One class is defined like:
class A[@spec T] {
  def a(d: Array[T], c: Whatever[T], ...) = ...
  ...
}
The second is a singleton object:
object B {
  def a[@spec T](d: Array[T], c: Whatever[T], ...) = ...
  ...
}
In the second case I get a huge performance hit. Why can this happen? (Note: at the moment I don't understand Java bytecode or Scala compiler internals very well.)
More details:
Full code is here: https://github.com/magicgoose/trashbox/tree/master/sorting_tests/src/magicgoose/sorting
It is a sorting algorithm ported from Java, (almost) automatically converted to Scala, with the comparison operations changed to generic ones to allow custom comparisons with primitive types without boxing, plus a simple benchmark (tests on different lengths, with JVM warm-up and averaging).
The results look like this (the left column is the original Java Arrays.sort(int[])):
JavaSort | JavaSortGen$mcI$sp | JavaSortGenSingleton$mcI$sp
length 2 | time 0.00003ms | length 2 | time 0.00004ms | length 2 | time 0.00006ms
length 3 | time 0.00003ms | length 3 | time 0.00005ms | length 3 | time 0.00011ms
length 4 | time 0.00005ms | length 4 | time 0.00006ms | length 4 | time 0.00017ms
length 6 | time 0.00008ms | length 6 | time 0.00010ms | length 6 | time 0.00036ms
length 9 | time 0.00013ms | length 9 | time 0.00015ms | length 9 | time 0.00069ms
length 13 | time 0.00022ms | length 13 | time 0.00028ms | length 13 | time 0.00135ms
length 19 | time 0.00037ms | length 19 | time 0.00040ms | length 19 | time 0.00245ms
length 28 | time 0.00072ms | length 28 | time 0.00060ms | length 28 | time 0.00490ms
length 42 | time 0.00127ms | length 42 | time 0.00096ms | length 42 | time 0.01018ms
length 63 | time 0.00173ms | length 63 | time 0.00179ms | length 63 | time 0.01052ms
length 94 | time 0.00280ms | length 94 | time 0.00280ms | length 94 | time 0.01522ms
length 141 | time 0.00458ms | length 141 | time 0.00479ms | length 141 | time 0.02376ms
length 211 | time 0.00731ms | length 211 | time 0.00763ms | length 211 | time 0.03648ms
length 316 | time 0.01310ms | length 316 | time 0.01436ms | length 316 | time 0.06333ms
length 474 | time 0.02116ms | length 474 | time 0.02158ms | length 474 | time 0.09121ms
length 711 | time 0.03250ms | length 711 | time 0.03387ms | length 711 | time 0.14341ms
length 1066 | time 0.05099ms | length 1066 | time 0.05305ms | length 1066 | time 0.21971ms
length 1599 | time 0.08040ms | length 1599 | time 0.08349ms | length 1599 | time 0.33692ms
length 2398 | time 0.12971ms | length 2398 | time 0.13084ms | length 2398 | time 0.51396ms
length 3597 | time 0.20300ms | length 3597 | time 0.20893ms | length 3597 | time 0.79176ms
length 5395 | time 0.32087ms | length 5395 | time 0.32491ms | length 5395 | time 1.30021ms
The last one is the version defined inside the object, and it's awful (about 4 times slower).
Update 1
I've run the benchmark with and without the scalac -optimise option, and there is no noticeable difference (only slower compilation with -optimise).
It's just one of many bugs in specialization; I'm not sure whether this one has been reported on the bug tracker or not. If you throw an exception from your sort, you'll see that it calls the generic version, not the specialized version, of the second sort:
java.lang.Exception: Boom!
at magicgoose.sorting.DualPivotQuicksortGenSingleton$.magicgoose$sorting$DualPivotQuicksortGenSingleton$$sort(DualPivotQuicksortGenSingleton.scala:33)
at magicgoose.sorting.DualPivotQuicksortGenSingleton$.sort$mFc$sp(DualPivotQuicksortGenSingleton.scala:13)
Note how the top thing on the stack is DualPivotQuicksortGenSingleton$$sort(...) instead of ...sort$mFc$sp(...)? Bad compiler, bad!
As a workaround, you can wrap your private methods inside a final helper object, e.g.
def sort[@spec T](a: Array[T]) { Helper.sort(a, 0, a.length) }

private final object Helper {
  def sort[@spec T](a: Array[T], i0: Int, i1: Int) { ... }
}
For whatever reason, the compiler then realizes that it ought to call the specialized variant. I haven't tested whether every specialized method that is called by another needs to be inside its own object; I'll leave that to you via the exception-throwing trick.
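If you want to try the exception-throwing trick on your own code, a minimal sketch (names here are hypothetical) looks like this: call the Int version and check whether the top stack frame is the plain method name or a specialized one ending in something like $mIc$sp.

object SpecProbe {
  def sort[@specialized(Int) T](a: Array[T]): Unit = inner(a, 0, a.length)

  // Whether this call stays specialized is exactly what you are checking.
  private def inner[@specialized(Int) T](a: Array[T], i0: Int, i1: Int): Unit =
    throw new Exception("Boom!")

  def main(args: Array[String]): Unit =
    try sort(Array(3, 1, 2))
    catch { case e: Exception => e.printStackTrace() }
}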

Emacs: Org-mode spreadsheet: targeting via hline

I'm trying to log my hours in org mode:
** Bob Johnson, Bob's SEO
| subject | time | minutes | total hours |
|-----------------------------------------------+---------------------------+---------+-------------|
| optimization report | 2011/07/11 8-10:00 PM PST | 120 | 2 |
| phone call to discuss report and plan of action | 2011/07/13 5:41 PM | 43 | 0.71666667 |
|-----------------------------------------------+---------------------------+---------+-------------|
| | | 249 | 4.15 |
#+TBLFM: $4=#0$-1/60::#6$3=vsum(#I..#II)
The main problem is the expression above that sums up total hours:
#6$3=vsum(#I..#II)
Should be something like:
#||+1$3=vsum(#I..#II)
So that as the spreadsheet grows, the last item in the second-to-last column will always sum the total. It doesn't work when I set it this way (or with +II, and other ways), however.
Any ideas?
Thanks!
The closest you could come to relative references would be to change your formula to:
#+TBLFM: $4=$-1/60::#>$3=vsum(#I..#II)
The #0 is implied, so $-1 on its own is enough.
For the second formula, #> means the last row, so as long as you don't add additional rows below your total row the results will be in the right place. If you add additional rows below it, you will simply have to adjust the number of > signs.
EDIT:
You can also name the cell in question, so that the formula keeps pointing at it no matter how the table changes:
Org-Manual
|-----------------------------------------------+---------------------------+---------+-------------|
| | | 163 | 2.7166667 |
| ^ | | total | total |
#+TBLFM: $4=$-1/60::$total=vsum(#I..#II)
You need the total name on both total cells; otherwise your minutes won't add up.
Use the M-S-<up/down/left/right> family of commands to manipulate the table (insert/delete rows and columns), and the formulas will be adjusted automatically.