Cocoa NSButton with underlined title - swift

I'm trying to create a NSButton with an underlined title.
@IBOutlet var myButton: NSButton!
let underlineAttribute = [NSAttributedString.Key.underlineStyle: NSUnderlineStyle.single]
let underlineAttributedString = NSAttributedString(string: "Button Title", attributes: underlineAttribute)
myButton.attributedTitle = underlineAttributedString
But I get the error:
<NSATSTypesetter: 0x600003719500>: Exception -[__SwiftValue _getValue:forType:]: unrecognized selector sent to instance 0x600000ca4060 raised during typesetting layout manager <NSLayoutManager: 0x100e1a140>
1 containers, text backing has 5 characters
selected character range {0, 0} affinity: upstream granularity: character
marked character range {0, 0}
Currently holding 5 glyphs.
Glyph tree contents: 5 characters, 5 glyphs, 1 nodes, 64 node bytes, 256 storage bytes, 320 total bytes, 64.00 bytes per character, 64.00 bytes per glyph
Layout tree contents: 5 characters, 5 glyphs, 0 laid glyphs, 0 laid line fragments, 1 nodes, 64 node bytes, 0 storage bytes, 64 total bytes, 12.80 bytes per character, 12.80 bytes per glyph, 0.00 laid glyphs per laid line fragment, 0.00 bytes per laid line fragment
, glyph range {0 5}. Ignoring...
Is it possible to create an underlined title in general?
Thanks!

Looks like you figured it out, but for documentation purposes: NSUnderlineStyle.single -> NSUnderlineStyle.single.rawValue
@IBOutlet var myButton: NSButton!
let underlineAttribute = [NSAttributedString.Key.underlineStyle: NSUnderlineStyle.single.rawValue]
let underlineAttributedString = NSAttributedString(string: "Button Title", attributes: underlineAttribute)
myButton.attributedTitle = underlineAttributedString
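The reason the raw value is needed: NSUnderlineStyle is a Swift OptionSet struct and does not bridge to the NSNumber the Cocoa text system expects, which is what raises the __SwiftValue unrecognized-selector exception during typesetting; the raw Int value does bridge. A minimal sketch of the same fix with an explicit attribute dictionary type (reusing the outlet from the question):
@IBOutlet var myButton: NSButton!

let attributes: [NSAttributedString.Key: Any] = [
    .underlineStyle: NSUnderlineStyle.single.rawValue  // pass the Int, not the OptionSet value
]
myButton.attributedTitle = NSAttributedString(string: "Button Title", attributes: attributes)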

Does UTF-16 encoding handle data compression by default?

I have the Unicode character த.
When I convert it to data:
UTF 8 -> Size: 3 bytes Array: [224, 174, 164]
UTF 16 -> Size: 4 bytes Array: [2980]
Seems pretty simple: UTF-8 takes 3 bytes and UTF-16 takes 4 bytes for this character. But if I encode "தததத" using Swift on macOS,
let tamil = "தததத"
let utf8Data = tamil.data(using: .utf8)!
let utf16Data = tamil.data(using: .utf16)!
print("UTF 8 -> Size: \(utf8Data.count) bytes Array: \(tamil.utf8.map({$0}))")
print("UTF 16 -> Size: \(utf16Data.count) bytes Array: \(tamil.utf16.map({$0}))")
Then the output is
UTF 8 -> Size: 12 bytes Array: [224, 174, 164, 224, 174, 164, 224, 174, 164, 224, 174, 164]
UTF 16 -> Size: 10 bytes Array: [2980, 2980, 2980, 2980]
I expected the UTF-16 data for "தததத" to be 4 × 4 = 16 bytes, but it is only 10 bytes and the array still has 4 code units. Why is that? Where did the other 6 bytes go?
The actual byte representation of those strings is this:
UTF-8:
e0ae a4e0 aea4 e0ae a4e0 aea4
UTF-16:
feff 0ba4 0ba4 0ba4 0ba4
The UTF-8 representation is e0aea4 times four.
The UTF-16 representation is 0ba4 times four plus one leading BOM feff.
UTF-16 text should start with a BOM, but the BOM is required only once at the start of the string, not once per character, so the total is 2 bytes (BOM) + 4 × 2 bytes = 10 bytes.
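If the BOM is not wanted, Swift can encode with an explicit byte order, in which case no BOM is emitted. A small sketch reusing the tamil string from the question (each த is one UTF-16 code unit, so 4 × 2 = 8 bytes):
let tamil = "தததத"
// An explicit byte order means no BOM is prepended.
let bigEndian = tamil.data(using: .utf16BigEndian)!
let littleEndian = tamil.data(using: .utf16LittleEndian)!
print(bigEndian.count, littleEndian.count) // 8 8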

How to get correct RGB color values from PNG with PLTE palette using NSBitmapImageRep in Swift?

Given a PNG image, I would like to get the RGB color at a given pixel using Swift on macOS.
This is simply done by:
let d = Data(base64Encoded: "iVBORw0KGgoAAAANSUhEUgAAAAEAAAABCAIAAACQd1PeAAAADElEQVQImWP4z8AAAAMBAQCc479ZAAAAAElFTkSuQmCC")!
let nsImage = NSImage(data: d)!
let bitmapImageRep = NSBitmapImageRep(cgImage: nsImage.cgImage(forProposedRect: nil, context: nil, hints: [:])!)
let color = bitmapImageRep.colorAt(x: 0, y: 0)
print(color!.redComponent, color!.greenComponent, color!.blueComponent, separator: " ")
// 1.0 0.0 0.0
This minimal example contains a base64-encoded one-pixel PNG in RGB encoding, with the single pixel being red, to make it easily reproducible.
printf iVBORw0KGgoAAAANSUhEUgAAAAEAAAABCAIAAACQd1PeAAAADElEQVQImWP4z8AAAAMBAQCc479ZAAAAAElFTkSuQmCC | base64 --decode | pngcheck -v
File: stdin (140703128616967 bytes)
chunk IHDR at offset 0xfffffffffffffffb, length 13
1 x 1 image, 24-bit RGB, non-interlaced
chunk IDAT at offset 0xfffffffffffffffb, length 12
zlib: deflated, 256-byte window, default compression
chunk IEND at offset 0xfffffffffffffffb, length 0
No errors detected in stdin (3 chunks, -133.3% compression).
When changing this PNG to an indexed PNG with a PLTE Palette
printf iVBORw0KGgoAAAANSUhEUgAAAAEAAAABCAMAAAAoyzS7AAAAA1BMVEX/AAAZ4gk3AAAACklEQVQImWNgAAAAAgAB9HFkpgAAAABJRU5ErkJggg== | base64 --decode | pngcheck -v
File: stdin (140703128616967 bytes)
chunk IHDR at offset 0xfffffffffffffffb, length 13
1 x 1 image, 8-bit palette, non-interlaced
chunk PLTE at offset 0xfffffffffffffffb, length 3: 1 palette entry
chunk IDAT at offset 0xfffffffffffffffb, length 10
zlib: deflated, 256-byte window, default compression
chunk IEND at offset 0xfffffffffffffffb, length 0
No errors detected in stdin (4 chunks, -600.0% compression).
the snippet above changes to
let d = Data(base64Encoded: "iVBORw0KGgoAAAANSUhEUgAAAAEAAAABCAMAAAAoyzS7AAAAA1BMVEX/AAAZ4gk3AAAACklEQVQImWNgAAAAAgAB9HFkpgAAAABJRU5ErkJggg==")!
let nsImage = NSImage(data: d)!
let bitmapImageRep = NSBitmapImageRep(cgImage: nsImage.cgImage(forProposedRect: nil, context: nil, hints: [:])!)
let color = bitmapImageRep.colorAt(x: 0, y: 0)
print(color!.redComponent, color!.greenComponent, color!.blueComponent, separator: " ")
// 0.984313725490196 0.0 0.027450980392156862
This produces unexpected output, with a red component close to one and a non-zero blue component. Where does this change in color come from?
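To narrow down where the shift is introduced, one can inspect what NSBitmapImageRep actually holds for the indexed PNG, i.e. the color space the pixels were converted into and the raw pixel bytes (a diagnostic sketch using the bitmapImageRep from the snippet above):
print(bitmapImageRep.colorSpace, bitmapImageRep.bitsPerPixel, bitmapImageRep.samplesPerPixel)
if let bytes = bitmapImageRep.bitmapData {
    print(bytes[0], bytes[1], bytes[2]) // raw first pixel after decoding/conversion
}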

Two byte report count for hid report descriptor

I'm trying to create an HID report descriptor for USB 3.0 with a report count of 1024 bytes.
The documentation at usb.org for HID does not seem to mention a two-byte report count. Nonetheless, I have seen some people use 0x96 (instead of 0x95) to enter a two-byte count, such as:
0x96, 0x00, 0x02, // REPORT_COUNT (512)
which was taken from here:
Custom HID device HID report descriptor
Likewise, in the same example, 0x26 is used for a two-byte logical maximum.
Where do these 0x96 and 0x26 prefixes come from? I don't see any documentation for them.
REPORT_COUNT is defined in the Device Class Definition for HID 1.11 document in section 6.2.2.7 Global Items on page 36 as:
Report Count 1001 01 nn Unsigned integer specifying the number of data
fields for the item; determines how many fields are included in the
report for this particular item (and consequently how many bits are
added to the report).
The nn in the above code is the item length indicator (bSize) and is defined earlier in section 6.2.2.2 Short Items as:
bSize Numeric expression specifying size of data:
0 = 0 bytes
1 = 1 byte
2 = 2 bytes
3 = 4 bytes
Rather confusingly, the valid values of bSize are listed in decimal. So, in binary, the bits for nn would be:
00 = 0 bytes (i.e. there is no data associated with this item)
01 = 1 byte
10 = 2 bytes
11 = 4 bytes
Putting it all together for REPORT_COUNT, which is an unsigned integer, the following alternatives could be specified:
1001 01 00 = 0x94 = REPORT_COUNT with no length (can only have value 0?)
1001 01 01 = 0x95 = 1-byte REPORT_COUNT (can have a value from 0 to 255)
1001 01 10 = 0x96 = 2-byte REPORT_COUNT (can have a value from 0 to 65535)
1001 01 11 = 0x97 = 4-byte REPORT_COUNT (can have a value from 0 to 4294967295)
Similarly, for LOGICAL_MAXIMUM, which is a signed integer (usually, there is an exception):
0010 01 00 = 0x24 = LOGICAL_MAXIMUM with no length (can only have value 0?)
0010 01 01 = 0x25 = 1-byte LOGICAL_MAXIMUM (can have a value from -128 to 127)
0010 01 10 = 0x26 = 2-byte LOGICAL_MAXIMUM (can have a value from -32768 to 32767)
0010 01 11 = 0x27 = 4-byte LOGICAL_MAXIMUM (can have a value from -2147483648 to 2147483647)
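Putting this to use for the 1024-field report count from the question (a sketch in the same style as the fragment above, not taken from the linked example): multi-byte item data in a report descriptor is little-endian, so 1024 = 0x0400 is encoded low byte first:
0x96, 0x00, 0x04,              // REPORT_COUNT (1024), two-byte value
0x26, 0xFF, 0x00,              // LOGICAL_MAXIMUM (255), two-byte value so 255 stays positive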
The specification is unclear on what value a zero-length item defaults to in general. It only mentions, at the end of section 6.2.2.4 Main Items, that MAIN item types and, within that type, INPUT item tags, have a default value of 0:
Remarks - The default data value for all Main items is zero (0).
- An Input item could have a data size of zero (0) bytes. In this case the value of
each data bit for the item can be assumed to be zero. This is functionally
identical to using an item tag that specifies a 4-byte data item followed by four
zero bytes.
It would be reasonable to assume 0 as the default for other item types too, but for REPORT_COUNT (a GLOBAL item) a value of 0 is not really a sensible default (IMHO). The specification doesn't really say.

Fiji get a 16bit tiff stack with unknown tags after importing sequences

In Fiji, I import sequences to get a TIFF stack over 5 GB. I cannot see the detailed information in the properties, such as width, height, and bit depth. The original depth is 16-bit. When I use "imfinfo" in Matlab, it always shows 1 rather than the length of the stack. Can anyone help me solve this problem? Any help would be appreciated.
Below is the feedback from Matlab when I use imfinfo.
info_red=imfinfo('C:\Users\MyDoc\Desktop\Background Subtraction\FluoRed.tif')
info_red=
Filename: 'C:\Users\MyDoc\Desktop\Background Subtraction\FluoRed.tif'
FileModDate: '02-Sep-2014 07:09:51'
FileSize: 5.3701e+09
Format: 'tif'
FormatVersion: []
Width: 1388
Height: 1040
BitDepth: 16
ColorType: 'grayscale'
FormatSignature: [73 73 42 0]
ByteOrder: 'little-endian'
NewSubFileType: 0
BitsPerSample: 16
Compression: 'Uncompressed'
PhotometricInterpretation: 'BlackIsZero'
StripOffsets: 230904
SamplesPerPixel: 1
RowsPerStrip: 1040
StripByteCounts: 2887040
XResolution: []
YResolution: []
ResolutionUnit: 'Inch'
Colormap: []
PlanarConfiguration: 'Chunky'
TileWidth: []
TileLength: []
TileOffsets: []
TileByteCounts: []
Orientation: 1
FillOrder: 1
GrayResponseUnit: 0.0100
MaxSampleValue: 65535
MinSampleValue: 0
Thresholding: 1
Offset: 8
ImageDescription: 'ImageJ=1.49b
images=1860
frames=1860
finterval=3
tunit=min
loop=false
min=30...'
UnknownTags: [2x1 struct]
When importing TIFF stacks with Matlab, you can use the Tiff class to get information about your stack, as well as to actually open it.
For example, if you use this:
t = Tiff('FluoRed.tif','r')
you will get a description of your stack that looks somewhat similar to what you get with imfinfo; that is, you get a structure whose fields you can access using standard dot notation. Here is an example with a stack named 'OriginalStack.tif' on my computer:
t =
TIFF File: '/Documents/MATLAB/OriginalStack.tif'
Mode: 'r'
Current Image Directory: 1
Number Of Strips: 1
SubFileType: Tiff.SubFileType.Default
Photometric: Tiff.Photometric.RGB
ImageLength: 364
ImageWidth: 460
RowsPerStrip: 364
BitsPerSample: 8
Compression: Tiff.Compression.None
SampleFormat: Tiff.SampleFormat.UInt
SamplesPerPixel: 3
PlanarConfiguration: Tiff.PlanarConfiguration.Chunky
ImageDescription: ImageJ=1.48v
images=20
slices=20
loop=false
Orientation: Tiff.Orientation.TopLeft
For instance, you can get the image width like so:
Width = t.getTag('ImageWidth')
As you can see, there is a field in the structure called 'ImageDescription', which tells you that there are 20 images in your stack (well, my stack actually :). You could get this information with structure indexing, but it's cumbersome, since using
t.getTag('ImageDescription')
returns a character array and you would need to play around with regular expressions, for instance, to get the actual number of images.
EDIT: here is how you could retrieve the number of slices in your stack:
1) Assign a variable name to the 'ImageDescription' tag from the tiff class object:
ImageDes = t.getTag('ImageDescription');
2) Then use regular expressions to look for numbers present in the character array:
NumberSlices = regexp(ImageDes,'\d*','match')
In my case (20 slices), I get the following:
NumberSlices =
'1' '48' '20' '20'
The first two numbers come from "ImageJ=1.48", so we don't want them; you can fetch either of the last two and you're good to go:
NumberSlices = str2double(NumberSlices{3})
Alternatively, the simplest solution in my opinion is to use the output from imfinfo like so:
NumberImages = length(info_red);
which in your case should give 1860.
Sorry for the long answer; anyhow, I think you will find the information about the Tiff class useful if you are going to work with stacks in Matlab :)
Hope that helps!

Qt Embedded exchanged colors bits: Red and Blue

I'm using Qt Embedded (4.8.0) on an ARM display device with 16-bit color depth on the framebuffer (/dev/fb0). In this scenario, the RED and BLUE color bits are swapped.
We are using the following compile flags:
./configure -embedded arm -xplatform qws/linux-arm-gnueabi-g++ -prefix /home/rchaves/Toolchain -release -opensource -shared -fast -depths 16 -largefile -no-exceptions -no-accessibility -stl -no-sql-mysql -no-sql-psql -no-sql-oci -no-sql-odbc -no-sql-tds -no-sql-db2 -no-sql-sqlite -no-sql-sqlite2 -no-sql-ibase -no-qt3support -no-xmlpatterns -no-multimedia -no-audio-backend -no-phonon-backend -no-svg -no-webkit -no-javascript-jit -no-script -no-scripttools -no-declarative -no-declarative-debug -qt-zlib -qt-libtiff -qt-libpng -qt-libmng -qt-libjpeg -no-openssl -no-nis -no-cups -iconv -no-pch -no-dbus -qt-freetype -no-opengl -qt-gfx-linuxfb -qt-kbd-linuxinput -qt-mouse-tslib -nomake demos -nomake examples
And the following parameters to execute the application:
QWS_DISPLAY=LinuxFb:/dev/fb0:depth=16 ./app -qws
Here is the application framebuffer log (from the samples):
The framebuffer device was opened successfully.
Fixed screen info:
id: DISP3 BG
smem_start: 0x93800000
smem_len: 7864320
type: 0
type_aux: 0
visual: 2
xpanstep: 1
ypanstep: 1
ywrapstep: 0
line_length: 2048
mmio_start: 0x0
mmio_len: 0
accel: 0
The framebuffer device was mapped to memory successfully.
Successfully switched to graphics mode.
Variable screen info:
xres: 1024
yres: 768
xres_virtual: 1024
yres_virtual: 3840
yoffset: 0
xoffset: 0
bits_per_pixel: 16
grayscale: 0
red: offset: 0, length: 5, msb_right: 0
green: offset: 5, length: 6, msb_right: 0
blue: offset: 11, length: 5, msb_right: 0
transp: offset: 0, length: 0, msb_right: 0
nonstd: 0
activate: 64
height: -1
width: -1
accel_flags: 0x0
pixclock: 15385
left_margin: 157
right_margin: 157
upper_margin: 16
lower_margin: 15
hsync_len: 5
vsync_len: 1
sync: 0
vmode: 0
Frame Buffer Performance test...
Average: 43020 usecs
Bandwidth: 174.338 MByte/Sec
Max. FPS: 23.245 fps
Will draw 3 rectangles on the screen,
they should be colored red, green and blue (in that order).
Done.
Better late than never. I had this exact problem with a SAM5 processor using Qt 5.5.1 and the linuxfb plugin. Reconfiguring or recompiling the Qt5 framework will NOT solve the problem.
Apparently the linuxfb plugin does not support the BGR format. There is an open bug tracking this issue. Check the determineFormat function in ../src/plugins/platforms/linuxfb/qlinuxfbscreen.cpp, where you will find that the image formats are hard-coded to RGB variants no matter what framebuffer info was provided.
Applying the patch attached to that bug may resolve the issue.
I said "may" because my framebuffer driver was falsely reporting that it was in the RGB format, so watch out for that. If that is the case, just hardcode the swapRgb flag until you fix your framebuffer driver.
Update: Try setting -depths generic in ./configure and run with -display linuxfb:genericcolors. This is as per this thread which discusses the problem.
Old answer: It sounds like the endianness of the display is swapped.
As per the documentation, you can try passing the littleendian option in the display string. The other option is to consult the Linux framebuffer documentation about performing endian swaps.
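As a concrete illustration of that old suggestion (an assumption about the exact option syntax, combining the run command from the question with the littleendian option mentioned above; whether it helps depends on the driver):
QWS_DISPLAY="LinuxFb:/dev/fb0:littleendian:depth=16" ./app -qws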