AppKit/CoreServices KeyCode and keyboard mapping (without NSEvent) [duplicate] - swift

I have some code I've been using to get the current keyboard layout and convert a virtual key code into a string. This works great in most situations, but I'm having trouble with some specific cases. The one that brought this to light is the accent key next to the backspace key on German QWERTZ keyboards: http://en.wikipedia.org/wiki/File:KB_Germany.svg
That key generates the virtual key code I'd expect (kVK_ANSI_Equal), but when using a QWERTZ keyboard layout I get no description back. It's ending up as a dead key because it's supposed to be composed with another key. Is there any way to catch these cases and do the proper conversion?
My current code is below.
TISInputSourceRef currentKeyboard = TISCopyCurrentKeyboardInputSource();
CFDataRef uchr = (CFDataRef)TISGetInputSourceProperty(currentKeyboard, kTISPropertyUnicodeKeyLayoutData);
const UCKeyboardLayout *keyboardLayout = (const UCKeyboardLayout *)CFDataGetBytePtr(uchr);

if (keyboardLayout)
{
    UInt32 deadKeyState = 0;
    UniCharCount maxStringLength = 255;
    UniCharCount actualStringLength = 0;
    UniChar unicodeString[maxStringLength];

    OSStatus status = UCKeyTranslate(keyboardLayout,
                                     keyCode, kUCKeyActionDown, 0,
                                     LMGetKbdType(), kUCKeyTranslateNoDeadKeysBit,
                                     &deadKeyState,
                                     maxStringLength,
                                     &actualStringLength, unicodeString);

    if (actualStringLength > 0 && status == noErr)
        return [[NSString stringWithCharacters:unicodeString length:(NSInteger)actualStringLength] uppercaseString];
}

That key is a dead key, as you can see if you try it yourself or look at the Keyboard Viewer with the German layout active.
On the Mac, the way to enter a dead key's actual character, without composing it with another character, is to press a space after it. So try that: Turn off kUCKeyTranslateNoDeadKeysBit, and if UCKeyTranslate sets the dead-key state, translate a space after it.
EDIT (added by asker)
For future readers, here is the corrected code implementing that solution.
TISInputSourceRef currentKeyboard = TISCopyCurrentKeyboardInputSource();
CFDataRef uchr = (CFDataRef)TISGetInputSourceProperty(currentKeyboard, kTISPropertyUnicodeKeyLayoutData);
const UCKeyboardLayout *keyboardLayout = (const UCKeyboardLayout *)CFDataGetBytePtr(uchr);

if (keyboardLayout)
{
    UInt32 deadKeyState = 0;
    UniCharCount maxStringLength = 255;
    UniCharCount actualStringLength = 0;
    UniChar unicodeString[maxStringLength];

    // Translate with dead-key processing enabled (option bits = 0 instead of
    // kUCKeyTranslateNoDeadKeysBit).
    OSStatus status = UCKeyTranslate(keyboardLayout,
                                     keyCode, kUCKeyActionDown, 0,
                                     LMGetKbdType(), 0,
                                     &deadKeyState,
                                     maxStringLength,
                                     &actualStringLength, unicodeString);

    // If the key turned out to be a dead key, translate a following space
    // to get the key's standalone character.
    if (actualStringLength == 0 && deadKeyState)
    {
        status = UCKeyTranslate(keyboardLayout,
                                kVK_Space, kUCKeyActionDown, 0,
                                LMGetKbdType(), 0,
                                &deadKeyState,
                                maxStringLength,
                                &actualStringLength, unicodeString);
    }
    if (actualStringLength > 0 && status == noErr)
        return [[NSString stringWithCharacters:unicodeString length:(NSUInteger)actualStringLength] uppercaseString];
}
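One small note: TISCopyCurrentKeyboardInputSource follows the Core Foundation Create/Copy rule, so the currentKeyboard reference it returns should be CFRelease'd once you no longer need it.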

Related

unity custom editor with arrays

I'm in the process of creating a game and the scripts are getting pretty big. For more clarity I want to work with a custom editor, but I have problems with types like classes, GameObjects and especially arrays.
As you can hopefully see in my picture, it is very easy to make basic types like string and float visible. But how does it work with an array, especially if the element is a class, a Rigidbody or a GameObject? If possible, I need a good explanation or a direct solution. I would be very grateful for your help.
(screenshot of the inspector showing the script's fields)
Note: If you're just looking to beautify/improve your inspector, rather than doing it as an actual project/learning experience, you're better off looking for plugins.
The custom-editor API that Unity provides is a bunch of tools, not a house. You will end up pouring a lot of effort into making your inspector 'look neater'.
If you just want to create a game, use plugins to decorate your inspector.
MyBox is one of the Plugins I use, and recommend.
Now back to the question
I managed to pull it off by combining EditorGUILayout.Foldout with a loop over the array size that draws an EditorGUILayout.IntField for each element.
// This code lives inside a custom Editor class for the target component
// (e.g. one marked [CustomEditor(typeof(N_USE))] and deriving from Editor).

// True if the user 'opened' up the array in the inspector.
private bool countIntsOpened;

public override void OnInspectorGUI() {
    var myTarget = (N_USE)target;

    myTarget.myTest.name = EditorGUILayout.TextField("see name", myTarget.myTest.name);
    myTarget.myTest.countOnly = EditorGUILayout.FloatField("see float", myTarget.myTest.countOnly);

    // Create the int array field
    myTarget.myTest.countInts = IntArrayField("see int[]", ref countIntsOpened, myTarget.myTest.countInts);
}

public int[] IntArrayField(string label, ref bool open, int[] array) {
    // Create a foldout
    open = EditorGUILayout.Foldout(open, label);
    int newSize = array.Length;

    // Show values if the foldout was opened.
    if (open) {
        // Int field to set the array size
        newSize = EditorGUILayout.IntField("Size", newSize);
        newSize = newSize < 0 ? 0 : newSize;

        // Creates spacing between the input for the array size and the array values.
        EditorGUILayout.Space();

        // Resize if the user entered a new array length
        if (newSize != array.Length) {
            array = ResizeArray(array, newSize);
        }

        // Draw one int field per element
        for (var i = 0; i < newSize; ++i) {
            array[i] = EditorGUILayout.IntField($"Value-{i}", array[i]);
        }
    }
    return array;
}

private static T[] ResizeArray<T>(T[] array, int size) {
    T[] newArray = new T[size];
    for (var i = 0; i < size; i++) {
        if (i < array.Length) {
            newArray[i] = array[i];
        }
    }
    return newArray;
}
Not as nice or neat as Unity's default. But gets the job done.
P.S: You can copy-paste your code when asking questions, rather than posting it as an image. It helps a lot.

switch off caps lock programmatically macos [duplicate]

I have seen many posts on this topic, but haven't found a clear answer anywhere.
Is there a way to toggle CAPS LOCK in Objective-C or C code? I am not looking for a solution using X11 libs. I am not bothered about the LED on/off status, just the functionality of CAPS LOCK (changing the case of letters and printing the special characters on number keys).
Why is CGEvent not supporting this the way it does for other keys?
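One approach is to read the current caps-lock modifier state through the IOHIDSystem user client and toggle it, as in the following Swift snippet: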
var ioConnect: io_connect_t = .init(0)
let ioService = IOServiceGetMatchingService(kIOMasterPortDefault, IOServiceMatching(kIOHIDSystemClass))
IOServiceOpen(ioService, mach_task_self_, UInt32(kIOHIDParamConnectType), &ioConnect)
var modifierLockState = false
IOHIDGetModifierLockState(ioConnect, Int32(kIOHIDCapsLockState), &modifierLockState)
modifierLockState.toggle()
IOHIDSetModifierLockState(ioConnect, Int32(kIOHIDCapsLockState), modifierLockState)
IOServiceClose(ioConnect)
The following method also works when you want CapsLock to toggle the current keyboard language (if you have the CapsLock key configured to do that).
CFMutableDictionaryRef mdict = IOServiceMatching(kIOHIDSystemClass);
io_service_t ios = IOServiceGetMatchingService(kIOMasterPortDefault, (CFDictionaryRef)mdict);
if (ios) {
    io_connect_t ioc = 0;
    IOServiceOpen(ios, mach_task_self(), kIOHIDParamConnectType, &ioc);
    if (ioc) {
        NXEventData event{};
        IOGPoint loc{};

        // press CapsLock key
        UInt32 evtInfo = NX_KEYTYPE_CAPS_LOCK << 16 | NX_KEYDOWN << 8;
        event.compound.subType = NX_SUBTYPE_AUX_CONTROL_BUTTONS;
        event.compound.misc.L[0] = evtInfo;
        IOHIDPostEvent(ioc, NX_SYSDEFINED, loc, &event, kNXEventDataVersion, 0, FALSE);

        // release CapsLock key
        evtInfo = NX_KEYTYPE_CAPS_LOCK << 16 | NX_KEYUP << 8;
        event.compound.subType = NX_SUBTYPE_AUX_CONTROL_BUTTONS;
        event.compound.misc.L[0] = evtInfo;
        IOHIDPostEvent(ioc, NX_SYSDEFINED, loc, &event, kNXEventDataVersion, 0, FALSE);

        IOServiceClose(ioc);
    }
}
I got this working after a long struggle.
Invoke the method given below twice: once for the up event and once for the down event. For example, to simulate a capital 'A', we need to do the following.
[self handleKeyEventWithCapsOn:0 andKeyDown:NO];
[self handleKeyEventWithCapsOn:0 andKeyDown:YES];
0 is the key code for 'a' (kVK_ANSI_A).
- (void)handleKeyEventWithCapsOn:(int)keyCode andKeyDown:(BOOL)keyDown
{
    if (keyDown)
    {
        CGEventRef eventDown = CGEventCreateKeyboardEvent(NULL, (CGKeyCode)keyCode, true);
        CGEventSetFlags(eventDown, kCGEventFlagMaskShift);
        CGEventPost(kCGSessionEventTap, eventDown);
        CFRelease(eventDown);
    }
    else
    {
        CGEventRef eventUp = CGEventCreateKeyboardEvent(NULL, (CGKeyCode)keyCode, false);
        CGEventSetFlags(eventUp, kCGEventFlagMaskShift);
        CGEventPost(kCGSessionEventTap, eventUp);

        // SHIFT up event
        CGEventRef eShiftUp = CGEventCreateKeyboardEvent(NULL, (CGKeyCode)56, false);
        CGEventPost(kCGSessionEventTap, eShiftUp);

        CFRelease(eventUp);
        CFRelease(eShiftUp);
    }
}

How to simulate a click on home screen of (jailbroken) iPhone?

I want to simulate a tap on the icon of an app on the home screen of a jailbroken iPhone. I used the code below, but it has no effect. Did I do something wrong?
Is my getFrontMostAppPort() method right? Even when I tried GSSendSystemEvent(), it had no effect.
To be clear, I mean a jailbroken device. Could anyone help me out? I'd very much appreciate it.
// Framework paths
#define SBSERVPATH "/System/Library/PrivateFrameworks/SpringBoardServices.framework/SpringBoardServices"

static mach_port_t getFrontMostAppPort() {
    bool locked;
    bool passcode;
    mach_port_t *port;

    void *lib = dlopen(SBSERVPATH, RTLD_LAZY);
    int (*SBSSpringBoardServerPort)() = dlsym(lib, "SBSSpringBoardServerPort");
    void* (*SBGetScreenLockStatus)(mach_port_t *port, bool *lockStatus, bool *passcodeEnabled) = dlsym(lib, "SBGetScreenLockStatus");
    void* (*SBFrontmostApplicationDisplayIdentifier)(mach_port_t *port, char *result) = dlsym(lib, "SBFrontmostApplicationDisplayIdentifier");

    port = (mach_port_t *)SBSSpringBoardServerPort();
    SBGetScreenLockStatus(port, &locked, &passcode);

    char appId[256];
    memset(appId, 0, sizeof(appId));
    SBFrontmostApplicationDisplayIdentifier(port, appId);
    NSString *frontmostApp = [NSString stringWithFormat:@"%s", appId];

    // close the library only after the last dlsym'd function has been used
    dlclose(lib);

    if ([frontmostApp length] == 0 || locked)
        return GSGetPurpleSystemEventPort();
    else
        return GSCopyPurpleNamedPort(appId);
}
static void sendTouchEvent(GSHandInfoType handInfoType, CGPoint point) {
    uint8_t touchEvent[sizeof(GSEventRecord) + sizeof(GSHandInfo) + sizeof(GSPathInfo)];

    // structure of touch GSEvent
    struct GSTouchEvent {
        GSEventRecord record;
        GSHandInfo handInfo;
    } *event = (struct GSTouchEvent *)&touchEvent;
    bzero(touchEvent, sizeof(touchEvent));

    // set up GSEvent
    event->record.type = kGSEventHand;
    event->record.subtype = kGSEventSubTypeUnknown;
    event->record.windowLocation = point;
    event->record.timestamp = GSCurrentEventTimestamp();
    event->record.infoSize = sizeof(GSHandInfo) + sizeof(GSPathInfo);
    event->handInfo.type = handInfoType;
    event->handInfo.x52 = 1;
    bzero(&event->handInfo.pathInfos[0], sizeof(GSPathInfo));
    event->handInfo.pathInfos[0].pathIndex = 1;
    event->handInfo.pathInfos[0].pathIdentity = 2;
    event->handInfo.pathInfos[0].pathProximity = (handInfoType == kGSHandInfoTypeTouchDown ||
                                                  handInfoType == kGSHandInfoTypeTouchDragged ||
                                                  handInfoType == kGSHandInfoTypeTouchMoved) ? 0x03 : 0x00;
    event->handInfo.pathInfos[0].pathLocation = point;

    mach_port_t port = (mach_port_t)getFrontMostAppPort();
    GSSendEvent((GSEventRecord *)event, port);
}

// why does nothing happen?
static void clickOnHome() {
    sendTouchEvent(kGSHandInfoTypeTouchDown, CGPointMake(100, 200));
    sleep(1);
    sendTouchEvent(kGSHandInfoTypeTouchUp, CGPointMake(100, 200));
}
Yes, I think your getFrontMostAppPort() method is fine, if you got it from here.
I'm trying to understand what you want:
If you simply want to open up a particular app (by bundleId), then I would recommend using the command-line open utility available on Cydia. You can call this programmatically with system(), or an exec call. Or, if you want to build this "open" capability into your own app, see my answer here.
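For example, something along these lines would do it (a minimal sketch; the /usr/bin/open path, the bundle-ID argument syntax, and the launchAppWithBundleId helper name are my assumptions, not part of the original answer):

#include <stdio.h>
#include <stdlib.h>

// Hypothetical helper: launch an app by bundle ID via the Cydia "open" tool.
// Assumes the tool is installed at /usr/bin/open and accepts a bundle
// identifier as its only argument.
static void launchAppWithBundleId(const char *bundleId)
{
    char command[256];
    snprintf(command, sizeof(command), "/usr/bin/open %s", bundleId);
    system(command); // or use fork()/exec*() if you need more control
}

// Usage: launchAppWithBundleId("com.apple.mobilesafari");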
Now, maybe you are writing code for an app or tweak that's in the background, not the foreground app. And, maybe you really do need to touch a specific {x,y} coordinate, not open an app by its bundleId. If you really want that, this code works for me (based on this answer ... with corrections):
static void sendTouchEvent(GSHandInfoType handInfoType, CGPoint point) {
    uint8_t gsTouchEvent[sizeof(GSEventRecord) + sizeof(GSHandInfo) + sizeof(GSPathInfo)];

    // structure of touch GSEvent
    struct GSTouchEvent {
        GSEventRecord record;
        GSHandInfo handInfo;
    } *touchEvent = (struct GSTouchEvent *)&gsTouchEvent;
    bzero(gsTouchEvent, sizeof(gsTouchEvent));

    touchEvent->record.type = kGSEventHand;
    touchEvent->record.subtype = kGSEventSubTypeUnknown;
    touchEvent->record.location = point;
    touchEvent->record.windowLocation = point;
    touchEvent->record.infoSize = sizeof(GSHandInfo) + sizeof(GSPathInfo);
    touchEvent->record.timestamp = GSCurrentEventTimestamp();
    //touchEvent->record.window = winRef;
    //touchEvent->record.senderPID = 919;
    bzero(&touchEvent->handInfo, sizeof(GSHandInfo));
    bzero(&touchEvent->handInfo.pathInfos[0], sizeof(GSPathInfo));

    GSHandInfo touchEventHandInfo;
    touchEventHandInfo._0x5C = 0;
    touchEventHandInfo.deltaX = 0;
    touchEventHandInfo.deltaY = 0;
    touchEventHandInfo.height = 0;
    touchEventHandInfo.width = 0;

    touchEvent->handInfo = touchEventHandInfo;
    touchEvent->handInfo.type = (handInfoType == kGSHandInfoTypeTouchDown) ? 2 : 1;
    touchEvent->handInfo.deltaX = 1;
    touchEvent->handInfo.deltaY = 1;
    touchEvent->handInfo.pathInfosCount = 0;
    touchEvent->handInfo.pathInfos[0].pathIndex = 1;
    touchEvent->handInfo.pathInfos[0].pathIdentity = 2;
    touchEvent->handInfo.pathInfos[0].pathProximity = (handInfoType == kGSHandInfoTypeTouchDown ||
                                                       handInfoType == kGSHandInfoTypeTouchDragged ||
                                                       handInfoType == kGSHandInfoTypeTouchMoved) ? 0x03 : 0x00;
    touchEvent->handInfo.x52 = 1; // iOS 5+, I believe
    touchEvent->handInfo.pathInfos[0].pathLocation = point;
    //touchEvent->handInfo.pathInfos[0].pathWindow = winRef;

    GSEventRecord *record = (GSEventRecord *)touchEvent;
    record->timestamp = GSCurrentEventTimestamp();
    GSSendEvent(record, getFrontMostAppPort());
}
- (void)simulateHomeScreenTouch {
    CGPoint location = CGPointMake(50, 50);
    sendTouchEvent(kGSHandInfoTypeTouchDown, location);

    double delayInSeconds = 0.1;
    dispatch_time_t popTime = dispatch_time(DISPATCH_TIME_NOW, (int64_t)(delayInSeconds * NSEC_PER_SEC));
    dispatch_after(popTime, dispatch_get_main_queue(), ^(void) {
        sendTouchEvent(kGSHandInfoTypeTouchUp, location);
    });
}
This will send a touch down/up event pair to SpringBoard, as long as SpringBoard is the frontmost app (no other app open). Also, you can see where the handInfoType value is mapped to 2 or 1 for touch down and up events. The code above assumes that those are the only two types of events you're generating.
Note: I first tried with your coordinates of x=100, y=200, and nothing happened. I think that coordinate is between two home screen icons. Using other coordinates (e.g. x=50, y=50) works fine.
Note II: I don't believe this solution actually requires jailbreaking. It uses Private APIs, so it can't be submitted to the App Store, but this should work for jailed phones in an enterprise, or hobbyist deployment.

IOSurfaces - Artefacts in video and unable to grab video surfaces

This is a two-part question. I have the following code working, which grabs the current display surface and creates a video out of the surfaces (everything happens in the background).
for (int i = 0; i < 100; i++) {
    IOMobileFramebufferConnection connect;
    kern_return_t result;
    IOSurfaceRef screenSurface = NULL;

    io_service_t framebufferService = IOServiceGetMatchingService(kIOMasterPortDefault, IOServiceMatching("AppleH1CLCD"));
    if (!framebufferService)
        framebufferService = IOServiceGetMatchingService(kIOMasterPortDefault, IOServiceMatching("AppleM2CLCD"));
    if (!framebufferService)
        framebufferService = IOServiceGetMatchingService(kIOMasterPortDefault, IOServiceMatching("AppleCLCD"));

    result = IOMobileFramebufferOpen(framebufferService, mach_task_self(), 0, &connect);
    result = IOMobileFramebufferGetLayerDefaultSurface(connect, 0, &screenSurface);

    uint32_t aseed;
    IOSurfaceLock(screenSurface, kIOSurfaceLockReadOnly, &aseed);
    uint32_t width = IOSurfaceGetWidth(screenSurface);
    uint32_t height = IOSurfaceGetHeight(screenSurface);
    m_width = width;
    m_height = height;

    CFMutableDictionaryRef dict;
    int pitch = width * 4, size = width * height * 4;
    int bPE = 4;
    char pixelFormat[4] = {'A', 'R', 'G', 'B'};
    dict = CFDictionaryCreateMutable(kCFAllocatorDefault, 0, &kCFTypeDictionaryKeyCallBacks, &kCFTypeDictionaryValueCallBacks);
    CFDictionarySetValue(dict, kIOSurfaceIsGlobal, kCFBooleanTrue);
    CFDictionarySetValue(dict, kIOSurfaceBytesPerRow, CFNumberCreate(kCFAllocatorDefault, kCFNumberSInt32Type, &pitch));
    CFDictionarySetValue(dict, kIOSurfaceBytesPerElement, CFNumberCreate(kCFAllocatorDefault, kCFNumberSInt32Type, &bPE));
    CFDictionarySetValue(dict, kIOSurfaceWidth, CFNumberCreate(kCFAllocatorDefault, kCFNumberSInt32Type, &width));
    CFDictionarySetValue(dict, kIOSurfaceHeight, CFNumberCreate(kCFAllocatorDefault, kCFNumberSInt32Type, &height));
    CFDictionarySetValue(dict, kIOSurfacePixelFormat, CFNumberCreate(kCFAllocatorDefault, kCFNumberSInt32Type, pixelFormat));
    CFDictionarySetValue(dict, kIOSurfaceAllocSize, CFNumberCreate(kCFAllocatorDefault, kCFNumberSInt32Type, &size));
    IOSurfaceRef destSurf = IOSurfaceCreate(dict);

    IOSurfaceAcceleratorRef outAcc;
    IOSurfaceAcceleratorCreate(NULL, 0, &outAcc);
    IOSurfaceAcceleratorTransferSurface(outAcc, screenSurface, destSurf, dict, NULL);
    IOSurfaceUnlock(screenSurface, kIOSurfaceLockReadOnly, &aseed);
    CFRelease(outAcc);

    // MOST RELEVANT PART OF CODE
    CVPixelBufferCreateWithBytes(NULL, width, height, kCVPixelFormatType_32BGRA, IOSurfaceGetBaseAddress(destSurf), IOSurfaceGetBytesPerRow(destSurf), NULL, NULL, NULL, &sampleBuffer);
    CMTime frameTime = CMTimeMake(frameCount, (int32_t)5);
    [adaptor appendPixelBuffer:sampleBuffer withPresentationTime:frameTime];
    CFRelease(sampleBuffer);
    CFRelease(destSurf);
    frameCount++;
}
P.S.: The last 4-5 lines of code are the most relevant (if you need to filter).
1) The video that is produced has artefacts. I have worked with video before and have encountered this kind of issue. I suppose there can be two reasons for this:
i. The pixel buffer that is passed to the adaptor is getting modified or released before the processing (encoding + writing) is complete. This could be due to asynchronous calls, but I am not sure whether this is actually the problem, or how to resolve it.
ii. The timestamps that are passed are inaccurate (e.g. two frames having the same timestamp, or a frame having a lower timestamp than the previous frame). I logged the timestamp values and this doesn't seem to be the problem.
2) The code above is not able to grab surfaces when a video is playing or when we play games. All I get is a blank screen in the output. This might be due to the hardware-accelerated decoding that happens in those cases.
Any input on either part of the question will be really helpful. Also, if you have any good links to read about IOSurfaces in general, please do post them here.
I did a bit of experimentation and concluded that the screen surface from which the content is copied is changing even before the transfer of contents is complete (the call to IOSurfaceAcceleratorTransferSurface()). I am using a lock (I tried both asynchronous and read-only), but it is being overridden by iOS. I reduced the code between the lock/unlock calls to the following minimum:
IOSurfaceLock(screenSurface, kIOSurfaceLockReadOnly, &aseed);
aseed1 = IOSurfaceGetSeed(screenSurface);
IOSurfaceAcceleratorTransferSurface(outAcc, screenSurface, destSurf, dict, NULL);
aseed2 = IOSurfaceGetSeed(screenSurface);
IOSurfaceUnlock(screenSurface, kIOSurfaceLockReadOnly, &aseed);
IOSurfaceGetSeed() tells you whether the contents of the surface have changed. I logged a count of the frames for which the seed changed, and the count was non-zero. So the following code resolved the problem:
if (aseed1 != aseed2) {
    // Release the created surface
    continue; // Do not use this surface/frame since it has artefacts
}
This however does affect performance since many frames/surfaces are rejected due to artefacts.
Any additions/corrections to this will be helpful.

ClearCanvas DICOM - How to create a Tag with a 'VR' of 'OW'

OK, so what I am doing is adding a new overlay to an existing DICOM file and saving it (the DICOM file then has two overlays). Everything saves without errors and both DICOM viewers, Sante and ClearCanvas Workstation, open the file, but only Sante displays both overlays.
When I look at the tags within the DICOM file, the OverlayData (6000) VR is OW and the OverlayData (6002) VR is OB.
So my problem is how to create a new tag with a VR of OW, because that is the correct one to use for OverlayData.
Here is the code I'm using to add the new overlay to the DicomFile.DataSet:
Note: after I create the overlay, I do write visible pixel data into it.
void AddOverlay()
{
    // Find the first free overlay group (6000, 6002, 6004, ...)
    int newOverlayIndex = 0;
    for (int i = 0; i != 16; ++i)
    {
        if (!DicomFile.DataSet.Contains(GetOverlayTag(i, 0x3000)))
        {
            newOverlayIndex = i;
            break;
        }
    }

    // Columns
    uint columnsTag = GetOverlayTag(newOverlayIndex, 0x0011);
    DicomFile.DataSet[columnsTag].SetUInt16(0, (ushort)CurrentData.Width);

    // Rows
    uint rowTag = GetOverlayTag(newOverlayIndex, 0x0010);
    DicomFile.DataSet[rowTag].SetUInt16(0, (ushort)CurrentData.Height);

    // Type
    uint typeTag = GetOverlayTag(newOverlayIndex, 0x0040);
    DicomFile.DataSet[typeTag].SetString(0, "G");

    // Origin
    uint originTag = GetOverlayTag(newOverlayIndex, 0x0050);
    DicomFile.DataSet[originTag].SetUInt16(0, 1);
    DicomFile.DataSet[originTag].SetUInt16(1, 1);

    // Bits Allocated
    uint bitsAllocatedTag = GetOverlayTag(newOverlayIndex, 0x0100);
    DicomFile.DataSet[bitsAllocatedTag].SetUInt16(0, 1);

    // Bit Position
    uint bitPositionTag = GetOverlayTag(newOverlayIndex, 0x0100);
    DicomFile.DataSet[bitPositionTag].SetUInt16(0, 0);

    // Data
    uint dataTag = GetOverlayTag(newOverlayIndex, 0x3000);
    DicomFile.DataSet[dataTag].SetNullValue(); // <<< Needs to be something else
    byte[] bits = new byte[(CurrentData.Width * CurrentData.Height) / 8];
    for (int i = 0; i != bits.Length; ++i) bits[i] = 0;
    DicomFile.DataSet[dataTag].Values = bits;
}

public static uint GetOverlayTag(int overlayIndex, short element)
{
    short group = (short)(0x6000 + (overlayIndex * 2));
    byte[] groupBits = BitConverter.GetBytes(group);
    byte[] elementBits = BitConverter.GetBytes(element);
    return BitConverter.ToUInt32(new byte[] { elementBits[0], elementBits[1], groupBits[0], groupBits[1] }, 0);
}
So it would seem to me there should be some method like DicomFile.DataSet[dataTag].SetNullValue(); to create the tag with a VR of OW. Or maybe there's a totally different way to add an overlay in ClearCanvas, I don't know...
OK, my confusion was actually caused by a bug in my program.
I was trying to create the "Bit Position" tag using element 0x0100 instead of 0x0102, so I was overwriting Overlay Bits Allocated (60xx,0100) instead of setting Overlay Bit Position (60xx,0102).
OW vs OB is irrelevant.
Sorry about that...