ARKit - 3D text - state of the art

The request is to be able to render, in ARKit, text input from the keyboard.
So far I have used SCNText, which is a 3D render of flat 2D text. I am not satisfied with that result, as the text is clearly a projection into 3D space of 2D text in the X/Y plane. Applying effects like "neon" is also not effective.
The only approach that comes to mind at the moment is to:
a- Create a .DAE object representing each character
b- Import the .DAEs and map them to the characters
This approach is very time-consuming, and I don't even know what complications it may involve. Considering the different languages, it would also require further work to adapt to each one.
Questions:
1- Is anyone familiar with this and has implemented it already?
2- Are there better solutions to this problem?
3- Is there any way to get already-built .DAEs for the single characters?
Thanks.

You can set the extrusionDepth parameter of SCNText to make it look more '3D':
Apple: For example, if its extrusionDepth property is 1.0, the geometry extends from -0.5 to 0.5 along the z-axis. An extrusion depth of zero creates a flat, one-sided shape—the geometry is confined to the plane whose z-coordinate is 0.0, and viewable only from its front unless its material’s isDoubleSided property is true.
You can also use an NSAttributedString, which would add some extra effects.
And finally, you say that you want the SCNText to appear dynamically based on keyboard input?
Create a UITextField and add it to your view, e.g.:
// Invisible text field that captures keyboard input
let textField = UITextField(frame: self.view.bounds)
self.view.addSubview(textField)
textField.backgroundColor = .clear
textField.becomeFirstResponder()
// Get notified whenever the text changes
textField.addTarget(self, action: #selector(textFieldDidChange(_:)), for: .editingChanged)
Respond to the input:
@objc func textFieldDidChange(_ textField: UITextField) {
    // Update the SCNText geometry here
    displayLabel.string = textField.text
}
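
Putting the pieces together, here is a minimal sketch of driving an SCNText geometry from the text field; it assumes an existing ARSCNView outlet called sceneView, and the node and geometry names are illustrative:

import UIKit
import SceneKit
import ARKit

class TextViewController: UIViewController {
    @IBOutlet var sceneView: ARSCNView! // assumed to be wired up elsewhere
    var textGeometry: SCNText!

    override func viewDidLoad() {
        super.viewDidLoad()
        // extrusionDepth gives the glyphs real depth along the z-axis.
        textGeometry = SCNText(string: "", extrusionDepth: 0.02)
        textGeometry.font = UIFont.systemFont(ofSize: 0.1)
        textGeometry.firstMaterial?.diffuse.contents = UIColor.cyan

        let textNode = SCNNode(geometry: textGeometry)
        textNode.position = SCNVector3(0, 0, -0.5) // half a metre in front of the camera
        sceneView.scene.rootNode.addChildNode(textNode)
    }

    @objc func textFieldDidChange(_ textField: UITextField) {
        textGeometry.string = textField.text
    }
}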


iOS Charts: Determine the positioning and color of the highlighted entry

I have a pie chart implemented with the Charts library. Now when an entry is tapped and becomes highlighted, I would like to show a tooltip that shows the value of the selected entry, positioned in the middle of the entry and outlined in the same color as the selected piece of the pie.
In the delegate function chartValueSelected(_ chartView: ChartViewBase, entry: ChartDataEntry, highlight: Highlight) I can obtain the highlight's xPx and yPx, but this is the touch point where the user tapped to highlight this entry, not the center point of the entry itself, so centering the tooltip on these values results in the tooltip jumping around depending on where you tap the entry. And I don't see any way to get the color of the highlighted entry.
I am trying to obtain the following, and outline it in that pink color:
:: 1 :: Position of the Popup
Hmm... I've been looking and I don't think the ChartDataEntry has an available frame to use as a reference point for setting your popup's position.
I think doing it based on the tap location is a decent alternative, though. That at least gets the popup to show in the same general region as the pie segment. It should still be a good user experience, in my opinion. :)
:: 2 :: Color of the Currently Selected Segment
PieChartDataSet has a values property that contains your ChartDataEntry objects. So you could find the index of the matching one, since the chartValueSelected delegate method passes you the current ChartDataEntry object (entry).
PieChartView has a marker property which you can use to display a marker when a value is selected on the chart. In my opinion, using the marker property and creating a custom marker class is the preferred way to solve your task.
The custom marker class must conform to the IMarker protocol; the easiest route is to subclass MarkerImage. Pay attention to two functions:
draw(context: CGContext, point: CGPoint): its two input parameters pass the drawing context and the position, so you have the information needed to place your balloon.
refreshContent(entry: ChartDataEntry, highlight: Highlight): in this function you can choose a color for the balloon. The MarkerImage class has a chartView property, so you have access to the data sets. If you use more than one data set, you can use the value of highlight.dataSetIndex to determine which data set was tapped. Once the data set is chosen, you can get a color from its colors property using the index obtained from dataSet.entryIndex(entry: entry).
Example of choosing the color:
open override func refreshContent(entry: ChartDataEntry, highlight: Highlight)
{
    // Find the tapped data set, then pick the color of the matching slice
    let dataSet = chartView!.data!.dataSets[highlight.dataSetIndex]
    let colorIndex = dataSet.entryIndex(entry: entry) % dataSet.colors.count
    color = dataSet.colors[colorIndex]
    super.refreshContent(entry: entry, highlight: highlight)
}
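
For completeness, here is a minimal sketch of wiring such a marker up to the chart; the BalloonMarker class name is illustrative, assuming it subclasses MarkerImage as described above:

let marker = BalloonMarker()     // your MarkerImage subclass
marker.chartView = pieChartView  // gives the marker access to the data sets
pieChartView.marker = marker     // drawn automatically when a value is selected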

Using .roundedRect vs. cornerRadius of the layer for rounded corners

I was experimenting with creating a simple text field in code and have been using UITextField.layer.cornerRadius to create rounded corners, rather than the .roundedRect value of borderStyle, which I thought looked more restrictive.
So I just came back to wondering about it, and would like to know: is there any advantage to using .roundedRect?
It seems to display a standard default roundness of the corners - can this be adjusted, or is it just there to be available off the shelf?
You can programmatically tune the border width and corner radius of the text field (and any other view, for that matter) by accessing its layer properties:
textField.layer.cornerRadius = 5.0
textField.layer.borderWidth = 3.0
On top of that, UITextField has a borderStyle property which you might want to play with. It has four possible values: .none, .line, .bezel, and .roundedRect.
For more, check the Apple documentation for .roundedRect:
Displays a rounded-style border for the text field.
The advantage of .roundedRect is that it gives you the standard rounded style and border width off the shelf, whereas with layer.cornerRadius you can programmatically tune the border width and corner radius yourself.
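
A short sketch contrasting the two approaches (the field names are illustrative):

import UIKit

// Off the shelf: the standard system rounded border, not adjustable.
let standardField = UITextField(frame: CGRect(x: 20, y: 40, width: 200, height: 32))
standardField.borderStyle = .roundedRect

// Custom: tune the radius and width yourself via the layer.
let customField = UITextField(frame: CGRect(x: 20, y: 90, width: 200, height: 32))
customField.borderStyle = .none
customField.layer.cornerRadius = 8.0
customField.layer.borderWidth = 2.0
customField.layer.borderColor = UIColor.systemBlue.cgColor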

Auto layout constraints not taking effect on child controls until parent container is resized (swift)

I have a program with an NSSplitView (containing two panes) filling the window. The left pane is an NSStackView and it contains three NSBoxes. The top two each contain an NSTextField for displaying a string of weather information that can change length every time it is updated by code in the background. I have added the relevant constraints to the top two NSBoxes (and their child NSTextFields) to make them expand or shrink to fit the size of the text fields inside them as the text changes length. The third NSBox fills up the remaining space in the stack view.
The problem is that when the window loads, the NSBoxes (as expected) display the correct size as designed in interface builder to fit the default string in the text fields. However, when the update code kicks in and changes the text of the text fields to the downloaded data, which is longer than the default string, the NSBoxes do not adjust their height to fit the larger text. They only change their height when the splitter of the split view is moved. This is annoying because I have to move the splitter every time the number of lines in the text fields changes. Why is this happening and how can I make the boxes update their height to fit the text when it is longer or shorter than before?
It is much like this question (the only source of information I found on my problem): NSScrollView with auto layout not resizing until first manual window resize; however, the solution did not work for me.
Below is an image of the interface (again, the top two boxes should resize to fit their text but this only happens when the splitter is moved). It shows longer text than the boxes can display and they haven't resized:
One option is to create a subclass of NSBox and override the intrinsicContentSize property. In your implementation you'd measure the height of the text contained in the box's text field, take account of any vertical padding that exists between the vertical edges of the text field and the corresponding edges of the box, and return the result of your calculation as the height part of the NSSize return value. Then, any time you want the boxes to resize, you can call invalidateIntrinsicContentSize.
I created a sample app to see if I could get this to work which I've posted here. There are a couple of tricky parts which you should be aware of:
Calculating Text Height
The most involved bit of coding in this approach is calculating the height of the text. Fortunately, Apple has documentation that tells you exactly how to do it; here's how that looks in Swift:
// Properties of your NSBox subclass
var layoutManager: NSLayoutManager {
    return textStorage.layoutManagers.first!
}
var textContainer: NSTextContainer {
    return layoutManager.textContainers.first!
}
var typesetter: NSTypesetter {
    return layoutManager.typesetter
}

// The textStorage object lies at the heart of the text stack. Create this
// object first; the rest follows.
lazy var textStorage: NSTextStorage! = {
    let ts = NSTextStorage(attributedString: self.textField.attributedStringValue)
    let lm = NSLayoutManager()
    lm.typesetterBehavior = NSTypesetterBehavior.Behavior_10_2_WithCompatibility
    let tc = NSTextContainer()
    lm.addTextContainer(tc)
    ts.addLayoutManager(lm)
    return ts
}()
Setting the hugging and compression-resistance priorities for your NSBox subclass
To get the demo working correctly I found that I needed to set the vertical hugging priority and vertical compression resistance values of the NSBox objects to 900.
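
To make the approach concrete, here is a minimal sketch of what the override itself might look like, building on the text stack above; the verticalPadding value and the textDidUpdate name are assumptions you would replace with your own layout constants and call sites:

override var intrinsicContentSize: NSSize {
    // Match the container width to the field, force layout, then measure
    // the rectangle the glyphs actually occupy.
    textContainer.size.width = textField.bounds.width
    layoutManager.ensureLayout(for: textContainer)
    let textHeight = layoutManager.usedRect(for: textContainer).height
    let verticalPadding: CGFloat = 16 // assumed gap between field and box edges
    return NSSize(width: NSView.noIntrinsicMetric, height: textHeight + verticalPadding)
}

// Call whenever the text changes so the box re-measures itself.
func textDidUpdate() {
    textStorage.setAttributedString(textField.attributedStringValue)
    invalidateIntrinsicContentSize()
}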

Displaying a 500x500 pixel image in a 640x852 pixel NSImageView without any kind of blurriness (Swift on OS X)

I have spent hours on Google searching for an answer to this and trying pieces of code but I have just not been able to find one. I also recognise that this is a question that has been asked lots of times, however I do not know what else to do now.
I have access to 500x500 pixel rainfall radar images from the Met Office's DataPoint API, covering the UK. They must be displayed in a 640x852 pixel area (an NSImageView, which currently has its scaling property set to axis independent) because this is the correct size of the map generated for the boundaries covered by the imagery. I want to display them at the enlarged size of 640x852 using the nearest-neighbour algorithm and in an aliased format. This can be achieved in Photoshop by going to Image > Image Size... and setting resample to nearest neighbour (hard edges). The source images should remain at 500x500 pixels; I just want to display them in a larger view.
I have tried setting the magnificationFilter of the NSImageView.layer to all three of the different kCAFilter... options but this has made no difference. I have also tried setting the shouldRasterize property of the NSImageView.layer to true, which also had no effect. The images always end up being smoothed or anti-aliased, which I do not want.
Having recently come from C#, there could be something I have missed as I have not been programming in Swift for very long. In C# (using WPF), I was able to get what I want by setting the BitmapScalingOptions of the image element to NearestNeighbour.
To summarise, I want to display a 500x500 pixel image in a 640x852 pixel NSImageView in a pixelated form, without any kind of smoothing (irrespective of whether the display is retina or not) using Swift. Thanks for any help you can give me.
Below is the image source:
Below is the actual result (screenshot from a 5K iMac):
This was created by simply setting the image property on an NSImageView, in the tableViewSelectionDidChange event of the NSTableView I use to select the times to show the image for, using:
let selected = times[timesTable.selectedRow]
let formatter = NSDateFormatter()
formatter.dateFormat = "d/M/yyyy 'at' HH:mm"
let date = formatter.dateFromString(selected)
formatter.dateFormat = "yyyyMMdd'T'HHmmss"
imageData.image = NSImage(contentsOfFile: basePathStr +
"RainObs_" + formatter.stringFromDate(date!) + ".png")
Below is what I want it to look like (ignoring the background and cropped out parts). If you save the image yourself you will see it is pixellated and aliased:
Below is the map that the source is displayed over (the source is just in an NSImageView laid on top of another NSImageView containing the map):
Try using a custom subclass of NSView instead of an NSImageView. It will need an image property with a didSet observer that sets needsDisplay. In the drawRect() method, either:
use the drawInRect(_:fromRect:operation:fraction:respectFlipped:hints:) method of the NSImage with a hints dictionary of [NSImageHintInterpolation:NSImageInterpolation.None], or
save the current value of NSGraphicsContext.currentContext.imageInterpolation, change it to .None, draw the NSImage with any of the draw...(...) methods, and then restore the context's original imageInterpolation value
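
A minimal sketch of the first option in modern Swift (the class name is illustrative):

import AppKit

class PixelatedImageView: NSView {
    var image: NSImage? {
        didSet { needsDisplay = true }
    }

    override func draw(_ dirtyRect: NSRect) {
        guard let image = image else { return }
        // Disabling interpolation makes the upscale use nearest neighbour,
        // so the pixels stay hard-edged instead of being smoothed.
        image.draw(in: bounds,
                   from: .zero, // .zero means use the whole image
                   operation: .sourceOver,
                   fraction: 1.0,
                   respectFlipped: true,
                   hints: [.interpolation: NSImageInterpolation.none.rawValue])
    }
}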

How to efficiently find the rect at some x,y point with the iPhone SDK

I am looking for an efficient way to handle image/frame detection from touch methods. Let's say I am building a keyboard or something similar to this. I have n images placed on the UI. When someone touches a letter (which is an image), I can do the following to detect the corresponding letter:
1) CGRectIntersectsRect(..,..): if I use this, then I need to check each and every letter to find out which letter exists at the touch point (let's say 100,100). This is O(n). If I move my finger across the screen, I will get m points, and all the corresponding image detection becomes O(n*m), which is not good.
2) The other way is building a hash for each and every x,y position so that the lookup is simply O(1). But this becomes a memory constraint, as I need to store 300*300 entries (assuming a 300*300 screen size). If I reshuffle my letters, then everything needs to be calculated again, so this is not good either.
In other words, given a point (x,y), I need some way of efficiently finding which rectangle covers that point.
Sorry for the long post; any help would be appreciated.
Thanks
If they are on a regular grid, then integer division by the grid size gives you the cell directly. Assuming you have a small, fixed screen size, a bucket array (a 2D grid where each entry is a list of the rectangles that intersect that part of the grid) is very fast if tuned correctly so the lists only have a few members. For unbounded or large spaces, KD-trees can be used.
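A rough Swift sketch of the integer-division idea (the KeyGrid name and the row-major key layout are assumptions):

import UIKit

struct KeyGrid {
    let cellWidth: CGFloat
    let cellHeight: CGFloat
    let columns: Int
    let keys: [String] // one key per cell, stored row-major

    // O(1): integer division maps a point straight to its grid cell.
    func key(at point: CGPoint) -> String? {
        let column = Int(point.x / cellWidth)
        let row = Int(point.y / cellHeight)
        let index = row * columns + column
        return keys.indices.contains(index) ? keys[index] : nil
    }
}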
It's useful to have the rects you want as final targets set up as subviews of a larger UIView (or subclass) where you expect all these related hits to occur. For example, if you're building your own keyboard, you could add a bunch of UIButton objects as subviews and hit test those.
So, the easy and traditional way of hit testing a bunch of subviews is to simply have code triggered by someone hitting those buttons. For example, you could add the subviews as UIControl objects (which is a subclass of UIView that adds some useful methods for catching user touch events), and call addTarget:action:forControlEvents: to specify some method to be triggered when the user does something in the rect of that UIControl. For example, you can catch things like UIControlEventTouchDown or UIControlEventTouchDragEnter. You can read the complete list in the UIControl class reference.
Now, it sounds like you might be going for something even more customized. If you really want to start with a random (x,y) coordinate and know which rect it's in, you can also use the hitTest:withEvent: method of UIView. This method takes a point in a view, and finds the most detailed (lowest in the hierarchy) subview which contains that point.
If you want to use these subviews purely for hit testing and not for displaying, you can set their background color to [UIColor clearColor]. But don't hide them (i.e., don't set the hidden property to YES), don't disable user interaction with them (via the userInteractionEnabled BOOL property), and don't set the alpha below 0.01, since any of those things will cause the hitTest:withEvent: method to skip over that subview. You can still use an invisible subview with this method call, as long as it meets these criteria.
Look at UIView's tag property. If your images are in UIViews or subviews of UIView, then you can set each tag and use the tag to look up in an array.
If not, then you can improve the speed by dividing your array of rectangles into sets that fit into larger rectangles. Test the outer rectangles first, then the inner rectangles. 25 rectangles would need only 10 tests in the worst case, as 5 sets of 5.
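
A small Swift sketch of the tag idea; the letters array and asset names are illustrative:

import UIKit

let letters = ["Q", "W", "E", "R", "T", "Y"] // illustrative
var letterImageViews: [UIImageView] = []

// Tag each image view with its index into the letters array.
for (index, letter) in letters.enumerated() {
    let imageView = UIImageView(image: UIImage(named: letter)) // hypothetical assets
    imageView.tag = index
    letterImageViews.append(imageView)
}

// Later, given a view found by hit testing, recover its letter in O(1).
func letter(for view: UIView) -> String? {
    return letters.indices.contains(view.tag) ? letters[view.tag] : nil
}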
Thanks Pete Kirkham and Tyler for your answers, which are really helpful. Let's say I don't want to use buttons, as I am mainly displaying images as small rectangles. To check for the rect at (x,y), I can do that easily by making my grid square and computing:
gridColumn = floor(pos.x / cellWidth)
gridRow = floor(pos.y / cellHeight)
But my problem is with touchesMoved. Let's say I started at grid 1 and dragged to grid 9 (in a 3*3 matrix); in this case I am assuming I will get 100-300 (x,y) positions, so every time I need to run the above formula to determine the corresponding grid. This results in up to 300 calculations, which might affect performance.
So when I display an image as a rect, can I associate some ID with that image? That way I could simply save the IDs in a list (from grid 1 to grid 9) and avoid the above calculation.
Thanks for your help
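
One common way to keep touchesMoved cheap is to cache the last grid index and only act when it changes; a sketch under the same grid assumptions as above (cellWidth, cellHeight, and columns are assumed properties):

var lastGridIndex = -1

override func touchesMoved(_ touches: Set<UITouch>, with event: UIEvent?) {
    guard let point = touches.first?.location(in: view) else { return }
    let column = Int(point.x / cellWidth)
    let row = Int(point.y / cellHeight)
    let index = row * columns + column

    // The division itself is cheap; the real saving is skipping repeated
    // handling while the finger stays inside the same cell.
    if index != lastGridIndex {
        lastGridIndex = index
        // handle entering a new grid cell here
    }
}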