Using a vector image with TVMonogramView - swift

I've looked around and unfortunately there's very little on the web about the latest addition to tvOS 12, TVUIKit, and its new controls. I have a UICollectionView with UICollectionViewCells of the following type:
import UIKit
import TVUIKit
class AirportsCollectionViewCell: UICollectionViewCell {
    @IBOutlet weak var imageView: UIImageView!
    var airportView: TVMonogramView!

    override func awakeFromNib() {
        super.awakeFromNib()
        airportView = TVMonogramView(frame: self.contentView.frame)
        airportView.autoresizingMask = [.flexibleWidth, .flexibleHeight]
        addSubview(airportView)
    }
}
I have some images in my asset catalog that are all PDF files, and I've naturally checked "Preserve Vector Data"; they work absolutely fine when I assign them to a UIImageView at any given size. However, when I assign them to the image property of my TVMonogramView, the quality is terrible. They seem to have been rendered at a very low resolution.
Here's how I use them:
if let flag = UIImage(named: countryCode) {
    cell.airportView.image = flag
} else {
    cell.airportView.image = nil
}
How can I fix this behaviour and what is the alternative?
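One possible workaround, sketched here only as an assumption (it is not from the original thread): TVMonogramView may be rasterizing the vector image at a small intrinsic size, so you can pre-render the PDF asset into a bitmap at the size you actually need before assigning it, for example with UIGraphicsImageRenderer:

import UIKit

/// Re-renders a (vector-backed) UIImage at an explicit target size, so the
/// receiving view gets a bitmap that is already at the right resolution.
func rasterized(_ image: UIImage, at size: CGSize) -> UIImage {
    let renderer = UIGraphicsImageRenderer(size: size)
    return renderer.image { _ in
        image.draw(in: CGRect(origin: .zero, size: size))
    }
}

// Hypothetical usage inside the cell-configuration code above:
// if let flag = UIImage(named: countryCode) {
//     cell.airportView.image = rasterized(flag, at: cell.airportView.bounds.size)
// }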

Related

How can I reduce the opacity of the shadows in RealityKit?

I composed a scene in Reality Composer and added 3 objects in it. The problem is that the shadows are too intense (dark).
I tried using the Directional Light in RealityKit from this answer rather than a default light from Reality Composer (since you don't have an option to adjust lighting there).
Update
I implemented the spotlight Lighting as explained by @AndyFedo in the answer. The shadow is still so dark.
In case you need soft and semi-transparent shadows in your scene, use a SpotLight lighting fixture, which is available when you use the SpotLight class or implement the HasSpotLight protocol. By default, the SpotLight is north-oriented. At the moment there's no opacity instance property for shadows in RealityKit.
The outerAngleInDegrees instance property must be no more than 179 degrees.
import RealityKit

class Lighting: Entity, HasSpotLight {
    required init() {
        super.init()
        self.light = SpotLightComponent(color: .yellow,
                                        intensity: 50000,
                                        innerAngleInDegrees: 90,
                                        outerAngleInDegrees: 179, // greater angle – softer shadows
                                        attenuationRadius: 10)    // can't be zero
    }
}
Then create a shadow instance:
class ViewController: NSViewController {
    @IBOutlet var arView: ARView!

    override func awakeFromNib() {
        arView.environment.background = .color(.black)

        let spotLight = Lighting().light
        let shadow = Lighting().shadow

        let boxAndCurlAnchor = try! Experience.loadBoxAndCurl()
        boxAndCurlAnchor.components.set(shadow!)
        boxAndCurlAnchor.components.set(spotLight)
        arView.scene.anchors.append(boxAndCurlAnchor)
    }
}
Here's an image produced without this line: boxAndCurlAnchor.components.set(shadow!).
Here's an image produced with the following value outerAngleInDegrees = 140:
Here's an image produced with the following value outerAngleInDegrees = 179:
In a room, keep the SpotLight fixture at a height of 2...4 meters from the model.
For bigger objects you must use higher values for intensity and attenuationRadius:
self.light = SpotLightComponent(color: .white,
                                intensity: 625000,
                                innerAngleInDegrees: 10,
                                outerAngleInDegrees: 120,
                                attenuationRadius: 10000)
Also, you can read my story about RealityKit lights on Medium.
The shadows appear darker when I use the "Hide" action sequence on "Scene Start" and post a notification to call the "Show" action sequence on a tap gesture.
The shadows were fixed when I scaled the object to 0% and posted a notification to call the "Move, Rotate, Scale to" action sequence on a tap gesture.
(Images: the scaled object, the unhidden object, and the object difference between the hidden and scaled actions.)
import UIKit
import RealityKit
import ARKit

class Lighting: Entity, HasDirectionalLight {
    required init() {
        super.init()
        self.light = DirectionalLightComponent(color: .red,
                                               intensity: 1000,
                                               isRealWorldProxy: true)
    }
}

class SpotLight: Entity, HasSpotLight {
    required init() {
        super.init()
        self.light = SpotLightComponent(color: .yellow,
                                        intensity: 50000,
                                        innerAngleInDegrees: 90,
                                        outerAngleInDegrees: 179, // greater angle – softer shadows
                                        attenuationRadius: 10)    // can't be zero
    }
}

class ViewController: UIViewController {
    @IBOutlet var arView: ARView!

    enum TapObjects {
        case None
        case HiddenChair
        case ScaledChair
    }

    var furnitureAnchor: Furniture._Furniture!
    var tapObjects: TapObjects = .None

    override func viewDidLoad() {
        super.viewDidLoad()
        furnitureAnchor = try! Furniture.load_Furniture()
        arView.scene.anchors.append(furnitureAnchor)
        addTapGesture()
    }

    func addTapGesture() {
        let tapGesture = UITapGestureRecognizer(target: self, action: #selector(onTap))
        arView.addGestureRecognizer(tapGesture)
    }

    @objc func onTap(_ sender: UITapGestureRecognizer) {
        switch tapObjects {
        case .None:
            furnitureAnchor.notifications.unhideChair.post()
            tapObjects = .HiddenChair
        case .HiddenChair:
            furnitureAnchor.notifications.scaleChair.post()
            tapObjects = .ScaledChair
        default:
            break
        }
    }
}

Using TVPosterImage in TVUIKit for tvOS 12

tvOS 12 has a new framework, TVUIKit, which introduces the lockup views. The class I am interested in is TVPosterView, which is basically declared as follows:
Swift 4.2
open class TVPosterView : TVLockupView { // One may use UIControl to implement it in iOS
    public init(image: UIImage?)
    open var image: UIImage?   // default is nil
    open var title: String?
    open var subtitle: String?
}
In my storyboard I have added a UIView element and (rightly, at least I hope so; yesterday the IB agent kept crashing!) changed its class to TVPosterView in the Identity Inspector. Then in my UIViewController I have defined:
@IBOutlet var aPosterView: TVPosterView!

override func viewDidLoad() {
    super.viewDidLoad()
    if let myIcon = UIImage(named: "icon.png") {
        self.aPosterView = TVPosterView(image: myIcon)
        self.aPosterView.title = "My Title"
        self.aPosterView.subtitle = "A Sub Title"
    } else {
        print("ERROR: I couldn't load the icon!")
    }
}
It compiles without any warning. When I run it, it just shows the UIView's white background; nothing changes. I did some tests trying to add a UIImageView, but they were all inconclusive (apart, of course, from when I set the UIImageView's image to self.aPosterView.image).
I am far from being an expert; I just started learning a couple of weeks ago. Any idea what I am missing? I thought the concept was that once I instantiated the class, the framework would take care of displaying the poster with the (optional) title and subtitle, and of all the nice animations!
Eventually I managed to make it work programmatically. I tested both TVPosterView and TVMonogramView (basically a round button). The following code works with Xcode 10 beta 5 on tvOS 12 beta 5:
Swift 4.2
import UIKit
import TVUIKit

class SecondViewController: UIViewController {
    var myPoster = TVPosterView()
    var myMonogram = TVMonogramView()

    override func viewDidLoad() {
        super.viewDidLoad()

        myPoster.frame = CGRect(x: 100, y: 100, width: 550, height: 625)
        myPoster.image = UIImage(named: "image1.png")
        myPoster.imageView.masksFocusEffectToContents = true
        myPoster.title = "Poster"
        myPoster.subtitle = "This is the poster subtitle"
        self.view.addSubview(myPoster)

        var myName = PersonNameComponents()
        myName.givenName = "Michele"
        myName.familyName = "Dall'Agata"

        myMonogram.frame = CGRect(x: 700, y: 100, width: 500, height: 475)
        myMonogram.image = UIImage(named: "image2.png")
        myMonogram.title = "Monogram"
        myMonogram.subtitle = "This is the Monogram subtitle"
        myMonogram.personNameComponents = myName
        self.view.addSubview(myMonogram)

        print(myMonogram.personNameComponents)
    }
}
I didn't manage to scale the poster's image with scaleAspectFit, though. Also, my first image was rounded with transparency, and only by choosing a frame size that perfectly fit the squared image plus the titles (so no aspect fit was needed) did the glow effect become transparent at the corners; otherwise the whole image was opaque, with its own (quite small) rounded corners.

UIImage(named: String) or UIImage swift 4

Which way is more convenient for setting an image from my app's assets on an image view? I have two options: the function UIImage(named:) or a UIImage (image literal). Both work for me, but I want to know which one is better so I can stick with it in the future.
Here are two examples:
// First: look the image up by its asset name
let myImages1 = ["dice1", "dice2", "dice3", "dice4", "dice5", "dice6"]
@IBOutlet weak var diceImageView1: UIImageView!
diceImageView1.image = UIImage(named: myImages1[index1])

// Second: an array of image literals
let myImages2 = [image1, image2, image3, image4, image5, image6]
@IBOutlet weak var diceImageView2: UIImageView!
diceImageView2.image = myImages2[index2]
We use images from the asset catalog rather than plain string names; it's a safer way to set a UIImage. Since Swift 4.2 the suggested approach is an image literal rather than a raw asset name.
Here's the example:
let logoImageView: UIImageView = {
    let iv = UIImageView()
    // Set the image with an image literal
    iv.image = #imageLiteral(resourceName: "feedback")
    return iv
}()
After you've typed "Image Literal" you can double-click it and choose your image from the asset catalog.
Lately I've been using the R.swift library instead: you get strongly typed, autocompleted resources (images, fonts, segues) in Swift projects.
Here's how it looks in code:
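A minimal sketch, assuming R.swift is set up for the project and has generated an accessor for an asset named "dice1" (the name here is only illustrative):

import UIKit

// R.swift generates a typed accessor per asset; the generated function
// returns an optional UIImage, so no string names are needed.
let imageView = UIImageView()
imageView.image = R.image.dice1()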
You can create an extension for UIImage and have static variables holding your images.
extension UIImage {
    static var dice1: UIImage? {
        return UIImage(named: "dice1")
    }
}

let imageView = UIImageView()
imageView.image = UIImage.dice1
Or you can use image literals in Xcode, which are nice because you can see the actual image inline.

Loading many UIImages from disk blocks main thread

I have a group of local UIImages that I need to load and present in successive order when their respective cell is tapped. For example, I have 20 images of a hot dog that combine to form an animation. When the user taps the hot dog cell, the cell's UIImageView should animate the images.
I know how to use UIImageView's animationImages to achieve the animation. My problem is that retrieving all of these images from disk takes ~1.5 seconds and blocks the main thread.
I could instantiate a helper class in application(_:didFinishLaunchingWithOptions:) that loads these images from disk on a background thread so that they'll be in memory when needed, but this seems hacky.
Are there any better ways of quickly loading many images from disk?
Edit: These images are illustrations and thus are .png.
Edit2: Assume the sum of each image sequence is 1 MB. The image dimensions I'm testing with are 33-60% larger than the UIImageView's @3x requirements. I am waiting to confirm final UIImageView size before getting correct image sets from our designers, so the time should be cut significantly with properly sized assets, but I'm also testing on a physical iPhone X.
class ViewModel {
    func getImages() -> [UIImage] {
        var images: [UIImage] = []
        for i in 0..<44 {
            if let image = UIImage(named: "hotDog\(i).png") {
                images.append(image)
            }
        }
        return images
    }
}
class ViewController: UIViewController {
    private var viewModel: ViewModel!

    func tableView(_ tableView: UITableView, didSelectRowAt indexPath: IndexPath) {
        let cell = tableView.cellForRow(at: indexPath) as! CustomCell
        let images = viewModel.getImages()
        cell.animateImageView(withImages: images)
    }
}
class CustomCell: UITableViewCell {
    @IBOutlet weak var imageView: UIImageView!

    func animateImageView(withImages images: [UIImage]) {
        imageView.image = images.last
        imageView.animationImages = images
        imageView.animationDuration = TimeInterval(images.count / 20)
        imageView.animationRepeatCount = 1
        imageView.startAnimating()
    }
}
I would suggest you try UIImage(contentsOfFile:) instead of UIImage(named:). In my quick test I found it to be more than an order of magnitude faster. That's somewhat understandable, because UIImage(named:) is doing a lot more (searching for the asset, caching the asset, etc.).
// slow
@IBAction func didTapNamed(_ sender: Any) {
    let start = CFAbsoluteTimeGetCurrent()
    imageView.animationImages = (0 ..< 20).map {
        UIImage(named: filename(for: $0))!
    }
    imageView.animationDuration = 1.0
    imageView.animationRepeatCount = 1
    imageView.startAnimating()
    print(CFAbsoluteTimeGetCurrent() - start)
}

// faster
@IBAction func didTapBundle(_ sender: Any) {
    let start = CFAbsoluteTimeGetCurrent()
    let url = Bundle.main.resourceURL!
    imageView.animationImages = (0 ..< 20).map {
        UIImage(contentsOfFile: url.appendingPathComponent(filename(for: $0)).path)!
    }
    imageView.animationDuration = 1.0
    imageView.animationRepeatCount = 1
    imageView.startAnimating()
    print(CFAbsoluteTimeGetCurrent() - start)
}
Note, this presumes that you have the files in the resource directory; you may have to modify the path accordingly depending upon where they are in your project. Also note that I avoided calling Bundle.main.url(forResource:withExtension:) within the loop, because even that had an observable impact on performance.
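The question also mentions preloading the images on a background thread. That isn't part of this answer, but a minimal sketch of that approach (the type and method names here are hypothetical, and it reuses the bundle-path loading shown above) could look like this:

import UIKit

final class ImagePreloader {
    private(set) var images: [UIImage] = []

    /// Loads the frames off the main thread and hands them back on the main queue.
    func preloadFrames(named prefix: String, count: Int,
                       completion: @escaping ([UIImage]) -> Void) {
        DispatchQueue.global(qos: .userInitiated).async {
            let url = Bundle.main.resourceURL!
            let loaded = (0 ..< count).compactMap {
                UIImage(contentsOfFile: url.appendingPathComponent("\(prefix)\($0).png").path)
            }
            DispatchQueue.main.async {
                self.images = loaded
                completion(loaded)
            }
        }
    }
}

You could kick this off early (for example in application(_:didFinishLaunchingWithOptions:)) so the frames are already loaded by the time the cell is tapped.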

Check text field Live

I have found this answer: How to check text field input at real time?
This is what I am looking for; however, I am having trouble actually implementing the code. Also, my current geographical location makes googling almost impossible.
I want to be able to change the background color of the next text field when the correct number is entered into the previous one: textFieldTwo's background color should turn green if the correct value is entered in textFieldOne. If the value is incorrect, nothing happens. Please help me out. I have two text fields, textFieldOne and textFieldTwo, and nothing else in the code.
Just pop this into your main view controller in an empty project (try using an iPhone 6 on the simulator).
import UIKit

class ViewController: UIViewController {
    var txtField: UITextField!
    var txtFieldTwo: UITextField!
    var rightNumber = 10

    override func viewDidLoad() {
        super.viewDidLoad()

        // txtFieldOne
        let txtField = UITextField()
        txtField.frame = CGRect(x: 100, y: 100, width: 200, height: 40)
        txtField.borderStyle = .none
        txtField.backgroundColor = .blue
        txtField.layer.cornerRadius = 5
        self.view.addSubview(txtField)

        // txtFieldTwo
        let txtFieldTwo = UITextField()
        txtFieldTwo.frame = CGRect(x: 100, y: 150, width: 200, height: 40)
        txtFieldTwo.borderStyle = .none
        txtFieldTwo.backgroundColor = .blue
        txtFieldTwo.layer.cornerRadius = 5
        self.view.addSubview(txtFieldTwo)

        txtField.addTarget(self, action: #selector(checkForRightNumber), for: .allEditingEvents)

        self.txtField = txtField
        self.txtFieldTwo = txtFieldTwo
    }

    @objc func checkForRightNumber() {
        let number = Int(self.txtField.text ?? "")
        if number == rightNumber {
            self.txtFieldTwo.backgroundColor = .green
        } else {
            self.txtFieldTwo.backgroundColor = .blue
        }
    }
}
EDIT: Adding a version with IBOutlets and IBActions
Note that in this example the IBAction is connected to txtFieldOne under Sent Events / Editing Changed.
Also, make sure your text fields' border style is set to None. In the storyboard, the way to do this is to choose the leftmost option (the one with the dashed border around it); that's so you can color the backgrounds. You can use layer.cornerRadius to set the roundness of the corners.
import UIKit

class ViewController: UIViewController {
    @IBOutlet weak var txtField: UITextField!
    @IBOutlet weak var txtFieldTwo: UITextField!
    var rightNumber = 10

    override func viewDidLoad() {
        super.viewDidLoad()
    }

    @IBAction func checkForRightNumber(_ sender: AnyObject) {
        let number = Int(self.txtField.text ?? "")
        if number == rightNumber {
            self.txtFieldTwo.backgroundColor = .green
        } else {
            self.txtFieldTwo.backgroundColor = .blue
        }
    }
}
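Not part of the original answer, but another common way to validate live is UITextFieldDelegate's textField(_:shouldChangeCharactersIn:replacementString:), which sees the prospective text before it is committed. A minimal sketch, assuming the view controller above is set as the first field's delegate (txtField.delegate = self):

extension ViewController: UITextFieldDelegate {
    func textField(_ textField: UITextField,
                   shouldChangeCharactersIn range: NSRange,
                   replacementString string: String) -> Bool {
        guard textField === txtField else { return true }
        // Build the text as it will look after this edit is applied.
        let current = (textField.text ?? "") as NSString
        let updated = current.replacingCharacters(in: range, with: string)
        txtFieldTwo.backgroundColor = (Int(updated) == rightNumber) ? .green : .blue
        return true
    }
}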