How does Swift know whether memory holds an address or the actual value that I assigned - swift

struct ValueType {
    var member: Int
}

class ReferenceType {
    var member: Int
    init(member: Int) {
        self.member = member
    }
}
var valueTypeObject = ValueType(member: 3)
var referenceTypeObject = ReferenceType(member: 4)
withUnsafePointer(to: &referenceTypeObject) {
    print("referenceTypeObject address: \($0)")
}
withUnsafePointer(to: &valueTypeObject) {
    print("valueTypeObject address: \($0)")
}
When executing the above code, the address of each object appears like this.
valueTypeObject address: 0x0000000100008218
referenceTypeObject address: 0x0000000100008220
First, if I view the memory at the valueTypeObject address (0x0000000100008218), I can see the value 3 that I actually assigned stored in those 64 bits (03 00 00 00 00 00 00 00 00 00 00 00 00 00; presumably the data is stored little-endian).
Next, if I view the memory at the referenceTypeObject address (0x0000000100008220), I can see that 0x000000010172f8b0 is stored in those 64 bits. (I don't know why the ..r..... on the right side is also highlighted, or what it is 🤔)
I know that referenceTypeObject is a reference type, so the actual value lives in the heap area. So I can guess that 0x000000010172f8b0 is the address where the actual value that I assigned (in this case, 4) is stored.
But how does Swift know that this is an address pointing into the heap area, rather than a 0x000000010172f8b0 value that could have been assigned by me?
In addition, if I view the memory at address 0x000000010172f8b0, where the actual value is stored, there are some 32 bytes of values in front of the value that I assigned (in this case, 4). What are those?

Related

Sending Midi Sysex messages in C#

I'm trying to figure out how to make a simple WinForms app that sends a SysEx MIDI message when I click a button, so button 1 sends:
F0 7F 22 02 50 01 31 00 31 00 31 F7
Button 2 sends:
F0 7F 22 02 50 01 32 00 31 00 31 F7
and so on...
I was able to find lots of info about sending notes and instrument data, but nothing really about SysEx. I have played with the Sanford library, which seemed to get me closer, but still nothing about SysEx usage.
There are various MIDI libraries available, and most support SysEx in one form or another. I've mostly used the managed-midi package although I've used the Sanford package before.
In managed-midi at least, sending a SysEx message is as simple as obtaining an IMidiOutput (usually from a MidiAccessManager) and then calling the Send method. For example:
// You'd do this part just once, of course...
var access = MidiAccessManager.Default;
var portId = access.Outputs.First().Id;
var port = await access.OpenOutputAsync(portId);

// You'd put this part in your button click handler.
// The data is copied from the question, so I'm assuming it's okay...
var message = new byte[] {
    0xF0, 0x7F, 0x22, 0x02, 0x50, 0x01,
    0x31, 0x00, 0x31, 0x00, 0x31, 0xF7 };
port.Send(message, 0, message.Length, timestamp: 0);

Two-byte report count for HID report descriptor

I'm trying to create an HID report descriptor for USB 3.0 with a report count of 1024 bytes.
The documentation at usb.org for HID does not seem to mention a two byte report count. Nonetheless, I have seen some people use 0x96 (instead of 0x95) to enter a two byte count, such as:
0x96, 0x00, 0x02, // REPORT_COUNT (512)
which was taken from here:
Custom HID device HID report descriptor
Likewise, from this same example, 0x26 is used for a two byte logical maximum.
Where did these 0x96 and 0x26 items come from? I don't see any documentation for them.
REPORT_COUNT is defined in the Device Class Definition for HID 1.11 document in section 6.2.2.7 Global Items on page 36 as:
Report Count 1001 01 nn Unsigned integer specifying the number of data
fields for the item; determines how many fields are included in the
report for this particular item (and consequently how many bits are
added to the report).
The nn in the above code is the item length indicator (bSize) and is defined earlier in section 6.2.2.2 Short Items as:
bSize Numeric expression specifying size of data:
0 = 0 bytes
1 = 1 byte
2 = 2 bytes
3 = 4 bytes
Rather confusingly, the valid values of bSize are listed in decimal. So, in binary, the bits for nn would be:
00 = 0 bytes (i.e. there is no data associated with this item)
01 = 1 byte
10 = 2 bytes
11 = 4 bytes
Putting it all together for REPORT_COUNT, which is an unsigned integer, the following alternatives could be specified:
1001 01 00 = 0x94 = REPORT_COUNT with no length (can only have value 0?)
1001 01 01 = 0x95 = 1-byte REPORT_COUNT (can have a value from 0 to 255)
1001 01 10 = 0x96 = 2-byte REPORT_COUNT (can have a value from 0 to 65535)
1001 01 11 = 0x97 = 4-byte REPORT_COUNT (can have a value from 0 to 4294967295)
Similarly, for LOGICAL_MAXIMUM, which is a signed integer (usually, there is an exception):
0010 01 00 = 0x24 = LOGICAL_MAXIMUM with no length (can only have value 0?)
0010 01 01 = 0x25 = 1-byte LOGICAL_MAXIMUM (can have a value from -128 to 127)
0010 01 10 = 0x26 = 2-byte LOGICAL_MAXIMUM (can have a value from -32768 to 32767)
0010 01 11 = 0x27 = 4-byte LOGICAL_MAXIMUM (can have a value from -2147483648 to 2147483647)
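Putting those encodings into practice, a descriptor fragment for the 1024-count report from the question could look like the sketch below (a C/C++ byte array; the REPORT_SIZE and LOGICAL_MAXIMUM values here are illustrative assumptions, not taken from the question). Multi-byte item data is stored little-endian, low byte first:
#include <cstdint>

const uint8_t report_items[] = {
    0x26, 0xFF, 0x00,   // LOGICAL_MAXIMUM (255): needs the 2-byte form 0x26,
                        //   because the 1-byte form is signed and tops out at 127
    0x75, 0x08,         // REPORT_SIZE (8): 8 bits per field, 1-byte form 0x75
    0x96, 0x00, 0x04    // REPORT_COUNT (1024): 2-byte form 0x96, data 0x00 0x04 = 0x0400
};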
The specification is unclear on what value a zero-length item defaults to in general. It only mentions, at the end of section 6.2.2.4 Main Items, that MAIN item types and, within that type, INPUT item tags, have a default value of 0:
Remarks - The default data value for all Main items is zero (0).
- An Input item could have a data size of zero (0) bytes. In this case the value of
each data bit for the item can be assumed to be zero. This is functionally
identical to using a item tag that specifies a 4-byte data item followed by four
zero bytes.
It would be reasonable to assume 0 as the default for other item types too, but for REPORT_COUNT (a GLOBAL item) a value of 0 is not really a sensible default (IMHO). The specification doesn't really say.

Why does a Groovy file write with UTF-16LE produce a BOM char?

Do you have any idea why the first and second lines below do not write a BOM to the file, while the third line does? I thought UTF-16LE was the correct encoding name and that this encoding does not automatically add a BOM at the beginning of the file.
new File("foo-wo-bom.txt").withPrintWriter("utf-16le") {it << "test"}
new File("foo-bom1.txt").withPrintWriter("UnicodeLittleUnmarked") {it << "test"}
new File("foo-bom.txt").withPrintWriter("UTF-16LE") {it << "test"}
Another sample:
new File("foo-bom.txt").withPrintWriter("UTF-16LE") {it << "test"}
new File("foo-bom.txt").getBytes().each {System.out.format("%02x ", it)}
prints
ff fe 74 00 65 00 73 00 74 00
and with Java
PrintWriter w = new PrintWriter("foo.txt","UTF-16LE");
w.print("test");
w.close();
FileInputStream r = new FileInputStream("foo.txt");
int c;
while ((c = r.read()) != -1) {
    System.out.format("%02x ", c);
}
r.close();
prints
74 00 65 00 73 00 74 00
With Java it does not produce a BOM, but with Groovy there is a BOM.
There appears to be a difference in behavior with withPrintWriter. Try this out in your GroovyConsole
File file = new File("tmp.txt")
try {
    String text = " "
    String charset = "UTF-16LE"

    file.withPrintWriter(charset) { it << text }
    println "withPrintWriter"
    file.getBytes().each { System.out.format("%02x ", it) }

    PrintWriter w = new PrintWriter(file, charset)
    w.print(text)
    w.close()
    println "\n\nnew PrintWriter"
    file.getBytes().each { System.out.format("%02x ", it) }
} finally {
    file.delete()
}
It outputs
withPrintWriter
ff fe 20 00
new PrintWriter
20 00
This is because calling new PrintWriter calls the Java constructor, but calling withPrintWriter eventually calls org.codehaus.groovy.runtime.ResourceGroovyMethods.writeUTF16BomIfRequired(), which writes the BOM.
I'm uncertain whether this difference in behavior is intentional. I was curious about this, so I asked on the mailing list. Someone there should know the history behind the design.
Edit: GROOVY-7465 was created out of the aforementioned discussion.

Having trouble returning the best possible interface from a set of routing entries

So I am trying to return the best possible matching interface from the routing entries. However, it is not exactly working the way I want it to. I got 5 out of 6 values returned the way they should be, but I am pretty sure that if I had a million entries in the routing table my algorithm would not work.
I am using binary search to solve this problem. But if, for example, the interface that I want to return has an IP address which is smaller than the IP address I am passing as an argument, then the binary search algorithm does not work. The structure looks like this:
struct routeEntry_t
{
    uint32_t ipAddr;
    uint32_t netMask;
    int interface;
};

routeEntry_t routing_table[100000];
let's say the routing table looks like this:
{ 0x00000000, 0x00000000, 1 }, // [0]
{ 0x0A000000, 0xFF000000, 2 }, // [1]
{ 0x0A010000, 0xFFFF0000, 10 }, // [2]
{ 0x0D010100, 0xFFFFFF00, 4 }, // [3]
{ 0x0B100000, 0xFFFF0000, 3 }, // [4]
{ 0x0A010101, 0xFFFFFFFF, 5 }, // [5]
Example input/output:
Regular search
Input: 0x0D010101 Output: 4 (entry [3])
Input: 0x0B100505 Output: 3 (entry [4])
To find an arbitrary address, it should go to the default interface.
Input: 0x0F0F0F0F Output: 1 (entry [0])
To find an address that matches multiple entries, take the best-match.
Input: 0x0A010200 Output: 10 (entry [2])
Input: 0x0A050001 Output: 2 (entry [1])
Input: 0x0A010101 Output: 5 (entry [5])
But my output looks like 2, 3, 1, 10, 1, 5. I don't understand where I am messing things up. Could you please explain what I am doing wrong? Any help would be great; thanks in advance. This is what my algorithm looks like (assuming the entries are sorted):
int interface(uint32_t ipAddr)
{
    int start = 0;
    int end = SIZE-1;
    int mid = 0;
    vector<int> matched_entries;
    vector<int>::iterator it;
    matched_entries.reserve(SIZE);
    it = matched_entries.begin();

    if (start > end)
        return -1;

    while (start <= end)
    {
        mid = start + ((end-start)/2);
        if (routing_table[mid].ipAddr == ipAddr)
            return routing_table[mid].interface;
        else if (routing_table[mid].ipAddr > ipAddr)
        {
            uint32_t result = routing_table[mid].netMask & ipAddr;
            if (result == routing_table[mid].ipAddr)
            {
                matched_entries.push_back(mid);
            }
            end = mid-1;
        }
        else
        {
            uint32_t result = routing_table[mid].netMask & ipAddr;
            if (result == routing_table[mid].ipAddr)
            {
                matched_entries.insert(it, mid);
            }
            start = mid+1;
        }
    }

    int matched_ip = matched_entries.back();
    if (routing_table[matched_ip].netMask & ipAddr)
        return routing_table[matched_ip].interface;
    else
        return routing_table[0].interface;
}
The "right" interface is the entry with the most specific netmask whose IP address is on the same subnet as your input.
Let's look at what netmasks are, and how they work, in more detail.
Notation
Although netmasks are usually written in dotted-decimal or hex notation, the binary representation of an IPv4 netmask is always 32 bits; that is, it's exactly the same length as an IP address. The netmask always starts with zero or more 1 bits and is padded with 0 bits to complete its 32-bit length. When a netmask is applied to an IP address, they're "lined up" bit by bit. The bits in the IP address that correspond to the 1 bits in the netmask determine the network number of the IP address; those corresponding to the 0 bits in the netmask determine the device number.
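To make that bit-level view concrete, here is a minimal sketch (using the 192.168.0.1 / 255.255.255.0 pair that also appears in the Calculations section below) of how a netmask splits an address into its network and device parts:
#include <cstdint>
#include <cstdio>

int main()
{
    uint32_t ip   = 0xC0A80001;      // 192.168.0.1
    uint32_t mask = 0xFFFFFF00;      // 255.255.255.0

    uint32_t network = ip & mask;    // bits under the 1s: the network number
    uint32_t device  = ip & ~mask;   // bits under the 0s: the device number

    // prints "network: C0A80000  device: 00000001"
    std::printf("network: %08X  device: %08X\n", (unsigned)network, (unsigned)device);
    return 0;
}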
Purpose
Netmasks are used to divide an address space into smaller subnets. Devices on the same subnet can communicate with each other directly using the TCP/IP protocol stack. Devices on different subnets must use one or more routers to forward data between them. Because they isolate subnets from each other, netmasks are a natural way to create logical groupings of devices. For example, each location or department within a company may have its own subnet, or each type of device (printers, PCs, etc.) may have its own subnet.
Example netmasks:
255.255.255.128 → FF FF FF 80 → 1111 1111 1111 1111 1111 1111 1000 0000
This netmask specifies that the first 25 bits of an IP address determine the network number; the final 7 bits determine the device number. This means there can be 2^25 different subnets, each with 2^7 = 128 devices.*
255.255.255.0 → FF FF FF 00 → 1111 1111 1111 1111 1111 1111 0000 0000
This netmask specifies an address space with 2^24 subnets, each with 2^8 = 256 individual addresses. This is a very common configuration, so common that it's known simply as a "Class C" network.
255.255.252.0 → FF FF FC 00 → 1111 1111 1111 1111 1111 1100 0000 0000
This netmask specifies 2^22 subnets, each with 2^10 = 1024 addresses. It might be used inside a large corporation, where each department has several hundred devices that should be logically grouped together.
An invalid netmask (note the internal zeroes):
255.128.255.0 → FF 80 FF 00 → 1111 1111 1000 0000 1111 1111 0000 0000
Calculations
Here are a few examples that show how a netmask determines the network number and the device number of an IP address.
IP Address: 192.168.0.1 → C0 A8 00 01
Netmask: 255.255.255.0 → FF FF FF 00
This device is on the subnet 192.168.0.0. It can communicate directly with other devices whose IP addresses are of the form 192.168.0.x
IP Address: 192.168.0.1 → C0 A8 00 01
IP Address: 192.168.0.130 → C0 A8 00 82
Netmask: 255.255.255.128 → FF FF FF 80
These two devices are on different subnets and cannot communicate with each other without a router.
IP Address: 10.10.195.27 → 0A 0A C3 1B
Netmask: 255.255.0.0 → FF FF 00 00
This is an address on a "Class B" network that can communicate with the 2^16 addresses on the 10.10.0.0 network.
You can see that the more 1 bits at the beginning of a netmask, the more specific it is. That is, more 1 bits create a "smaller" subnet that consists of fewer devices.
Putting it all together
A routing table, like yours, contains triplets of netmasks, IP addresses, and interfaces. (It may also contain a "cost" metric, which indicates which of several interfaces is the "cheapest" to use, if they are both capable of routing data to a particular destination. For example, one may use an expensive dedicated line.)
In order to route a packet, the router finds the interface with the most specific match for the packet's destination. An entry with an address addr and a netmask mask matches a destination IP address dest if (addr & mask) == (dest & mask), where & indicates a bitwise AND operation. In English, we want the smallest subnet that is common to both addresses.
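In code, that match test is a single masked comparison; a minimal sketch (not the routine from the question):
#include <cstdint>

// True if dest lies on the subnet described by this entry's address and mask.
bool matches(uint32_t addr, uint32_t mask, uint32_t dest)
{
    return (addr & mask) == (dest & mask);
}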
Why? Suppose you and a colleague are in a hotel that's part of a huge chain with both a corporate wired network and a wireless network. You've also connected to your company's VPN. Your routing table might look something like this:
Destination     Netmask    Interface   Notes
-----------     -------    ---------   -----
Company email   FFFFFF00   VPN         Route ALL company traffic thru VPN
Wired network   FFFF0000   Wired       Traffic to other hotel addresses worldwide
Default         00000000   Wireless    All other traffic
The most specific rule will route your company email safely through the VPN, even if the address happens to match the wired network also. All traffic to other addresses within the hotel chain will be routed through the wired network. And everything else will be sent through the wireless network.
* Actually, in every subnet, two of the addresses (the highest and the lowest) are reserved. The all-ones address is the broadcast address: it sends data to every device on the subnet. And the all-zeroes address is used by a device to refer to itself when it doesn't yet have its own IP address. I've ignored those for simplicity.
So the algorithm would be something like this:
initialize:
    Sort routing table by netmask from most-specific to least-specific.
    Within each netmask, sort by IP address.

search:
    foreach netmask {
        Search IP addresses for (input & netmask) == (IP address & netmask)
        Return corresponding interface if found
    }
    Return default interface
OK, so this is what my structure and algorithm look like now. It works and gives me the results that I want; however, I still don't know how to sort the IP addresses within each netmask (one possible comparator is sketched after the code below). I used STL sort to sort the netmasks.
struct routeEntry_t
{
    uint32_t ipAddr;
    uint32_t netMask;
    int interface;

    bool operator<(const routeEntry_t& lhs) const
    {
        return lhs.netMask < netMask;
    }
};

const int SIZE = 6;
routeEntry_t routing_table[SIZE];

void sorting()
{
    // using STL sort from <algorithm>
    sort(routing_table, routing_table + SIZE);
}

int interface(uint32_t ipAddr)
{
    for (int i = 0; i < SIZE; ++i)
    {
        if ((routing_table[i].ipAddr & routing_table[i].netMask) == (ipAddr & routing_table[i].netMask))
            return routing_table[i].interface;
    }
    return routing_table[SIZE-1].interface;
}
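For the part the poster says is still missing (sorting by IP address within each netmask), one possible comparator is sketched below. It is not code from either post, byMaskThenAddr is a made-up name, and it reuses the routeEntry_t, routing_table, and SIZE defined above; it orders by netmask first and falls back to the IP address on ties:
#include <algorithm>   // for std::sort

// Most-specific netmask first (a valid mask with more leading 1 bits is
// numerically larger), then ascending IP address within equal netmasks.
bool byMaskThenAddr(const routeEntry_t& a, const routeEntry_t& b)
{
    if (a.netMask != b.netMask)
        return a.netMask > b.netMask;   // larger mask = more specific
    return a.ipAddr < b.ipAddr;
}

// e.g. instead of relying on operator<:
// std::sort(routing_table, routing_table + SIZE, byMaskThenAddr);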

Windows debugging - WinDbg

I got the following error while debugging a process with its core dump.
0:000> !lmi test.exe
Loaded Module Info: [test.exe]
Module: test
Base Address: 00400000
Image Name: test.exe
Machine Type: 332 (I386)
Time Stamp: 4a3a38ec Thu Jun 18 07:54:04 2009
Size: 27000
CheckSum: 54c30
Characteristics: 10f
Debug Data Dirs: Type Size VA Pointer
MISC 110, 0, 21000 [Debug data not mapped]
FPO 50, 0, 21110 [Debug data not mapped]
CODEVIEW 31820, 0, 21160 [Debug data not mapped] - Can't validate symbols, if present.
Image Type: FILE - Image read successfully from debugger.
test.exe
Symbol Type: CV - Symbols loaded successfully from image path.
Load Report: cv symbols & lines
Does anybody know what the error "CODEVIEW 31820, 0, 21160 [Debug data not mapped] - Can't validate symbols, if present." really means?
Does this error mean that I can't read public/private symbols from the executable?
If not, why does the WinDbg debugger throw this type of error?
Thanks in advance,
Santhosh.
Debug data not mapped can mean the section of the executable that holds the debug information hasn't been mapped into memory. If this is a crash dump, your options are limited, but if it's a live debug session, you can use the WinDbg .pagein command to retrieve the data. To do that you need to know the address to page in. If you use the !dh command on the module start address (which you can see with lm - in my case, lm mmsvcr90 for msvcr90.dll), you may see something like this (scrolling down a ways):
Debug Directories(1)
Type Size Address Pointer
cv 29 217d0 20bd0 Can't read debug data cb=0
This shows you that the debug data is at offset 217d0 from the module start and is length 29. If you attempt to dump those bytes you'll see (78520000 is the module's start address):
kd> db 78520000+217d0 l29
785417d0 ?? ?? ?? ?? ?? ?? ?? ??-?? ?? ?? ?? ?? ?? ?? ?? ????????????????
785417e0 ?? ?? ?? ?? ?? ?? ?? ??-?? ?? ?? ?? ?? ?? ?? ?? ????????????????
785417f0 ?? ?? ?? ?? ?? ?? ?? ??-?? ?????????
If you execute .pagein /p 82218b90 785417d0, then F5, when the debugger breaks back in you'll see (82218b90 is the _EPROCESS address of the process that I'm debugging):
kd> db 78520000+217d0 l29
785417d0 52 53 44 53 3f d4 6e 7a-e8 62 44 48 b2 54 ec 49 RSDS?.nz.bDH.T.I
785417e0 ae f1 07 8c 01 00 00 00-6d 73 76 63 72 39 30 2e ........msvcr90.
785417f0 69 33 38 36 2e 70 64 62-00 i386.pdb.
Now executing .reload /f msvcr90.dll will load the symbols. For a crash dump, if you can find the 0x29 bytes you're missing (from another dump maybe), you may be able to insert them and get the symbols loaded that way.
Have you set your symbol path for WinDbg (see Step 2 at http://blogs.msdn.com/iliast/archive/2006/12/10/windbg-tutorials.aspx), and are your PDB files in the symbol path?
I assume you're testing an executable built in debug mode which generates the necessary PDB files.