Animated view in a MonoTouch iPhone project with C# - iphone

public override void ViewDidLoad ()
{
    base.ViewDidLoad ();
    // Perform any additional setup after loading the view, typically from a nib.
    List<UIImage> myImages = new List<UIImage>();
    myImages.Add(UIImage.FromFile("image/1.jpeg"));
    myImages.Add(UIImage.FromFile("image/2.jpeg"));

    var myAnimatedView = new UIImageView(this.hatchAgeSubView.Bounds);
    myAnimatedView.AnimationImages = myImages.ToArray();
    myAnimatedView.AnimationDuration = 3; // seconds per loop
    myAnimatedView.AnimationRepeatCount = 4; // 0 = loops forever
    myAnimatedView.StartAnimating();
    hatchAgeSubView.AddSubview(myAnimatedView);
}
As you can see above, there is an animation with two images that runs for 12 seconds in total (3 seconds per loop, repeated 4 times).
I want to start another animation just after those 12 seconds, i.e. at the end of the animation above.
Is there any callback method, or any other way to make the next animation play automatically?

If you're targeting iOS 4 or later, you can use the Animate method and get notified upon completion of the first animation sequence:
var duration = 3.0f; // note: Animate's first parameter is the duration, not a delay
UIView.Animate(duration, () => {
    UIView.SetAnimationRepeatCount(4);
    // Regular animation code goes here.
}, () => {
    // This fires when the first animation block above has completed.
    // Second animation sequence code goes here.
});
UPDATE: Please do not start animations from ViewDidLoad; it will make the experience laggy and your users unhappy. Try calling them asynchronously from ViewDidAppear instead. For more information on view events, check out my blog post on the subject:
http://blog.devnos.com/uiviewcontroller-y-u-no-fire-view-events

Related

UIView replicate CAAnimation of another view, in real time?

So I've got a background view with a gradient sublayer, animating continuously to change the colors slowly. I'm doing it with a CATransaction, because I need to animate other properties as well:
CATransaction.begin()
CATransaction.setCompletionBlock({
    // start animation again, loop forever
})
gradientLayer.add(colorAnimation, forKey: "colors")
// other animations
CATransaction.commit()
Now I want to replicate this gradient animation on, let's say, the title of a button.
Note 1: I can't just "make a hole" in the button, if such a thing were even possible, because there might be other opaque views between the button and the background.
Note 2: The gradient position on the button is not important. I don't want the text gradient to replicate the exact colors underneath, but rather to mimic the "mood" of the background.
So when the button is created, I add its gradient sublayer to a list of registered layers, that the background manager will update as well:
func register(layer: CAGradientLayer) {
    let pointer = Unmanaged.passUnretained(layer).toOpaque()
    registeredLayers.addPointer(pointer)
}
While it would be easy to animate the text gradient at the next iteration of the animation, I would prefer that the button start animating as soon as it is added, since the animation usually takes a few seconds. How can I copy the background animation, i.e. set the text gradient to the current state of the background animation, and animate it with the remaining duration and the right timing function?
The solution was indeed to use the beginTime property, as suggested by @Shivam Gaur's comment. I implemented it as follows:
// The background layer, with the original animation
var backgroundLayer: CAGradientLayer!

// The animation
var colorAnimation: CABasicAnimation!

// Variable to store the animation begin time
var animationBeginTime: CFTimeInterval!

// Registered layers replicating the animation
private var registeredLayers: NSPointerArray = NSPointerArray.weakObjects()

...

// Somewhere in our code, the setup function
func setup() {
    colorAnimation = CABasicAnimation(keyPath: "colors")
    // do the animation setup here
    ...
}

...

// Called by an external class when we add a view that should replicate the background animation
func register(layer: CAGradientLayer) {
    // Store a pointer to the layer in our array
    let pointer = Unmanaged.passUnretained(layer).toOpaque()
    registeredLayers.addPointer(pointer)
    layer.colors = colorAnimation.toValue as! [Any]?

    // HERE'S THE KEY: compute the time elapsed since the beginning of the
    // animation and start the new layer's animation offset by that amount.
    // A negative beginTime tells Core Animation the animation started that
    // many seconds in the past, so the new layer picks it up mid-flight.
    let timeElapsed = CACurrentMediaTime() - animationBeginTime
    colorAnimation.beginTime = -timeElapsed
    layer.add(colorAnimation, forKey: "colors")
    colorAnimation.beginTime = 0
}
// The function called recursively for an endless animation
func animate() {
    // Destination layer
    let toLayer = newGradient() // some function to create a new color gradient
    toLayer.frame = UIScreen.main.bounds

    // Set up the animation
    colorAnimation.fromValue = backgroundLayer.colors
    colorAnimation.toValue = toLayer.colors

    // Update the background layer
    backgroundLayer.colors = toLayer.colors

    // Update registered layers ('iterate' is a custom function I declared
    // as an extension of NSPointerArray; a sketch of it follows below)
    registeredLayers.iterate() { obj in
        guard let layer = obj as? CAGradientLayer else { return }
        layer.colors = toLayer.colors
    }

    CATransaction.begin()
    CATransaction.setCompletionBlock({
        animate()
    })

    // Add the animation to the background
    backgroundLayer.add(colorAnimation, forKey: "colors")

    // Store the starting time
    animationBeginTime = CACurrentMediaTime()

    // Add the animation to the registered layers
    registeredLayers.iterate() { obj in
        guard let layer = obj as? CAGradientLayer else { return }
        layer.add(colorAnimation, forKey: "colors")
    }

    CATransaction.commit()
}
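For reference, a minimal sketch of what that iterate helper could look like (an assumption; the author's actual implementation isn't shown):
extension NSPointerArray {
    // Call 'body' once per stored pointer, handing each back as an object
    func iterate(_ body: (AnyObject) -> Void) {
        for index in 0..<count {
            guard let raw = pointer(at: index) else { continue }
            body(Unmanaged<AnyObject>.fromOpaque(raw).takeUnretainedValue())
        }
    }
}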

Take a video with ARKit

Hello community,
I'm trying to build an app with Swift 4 and the great upcoming ARKit framework, but I'm stuck: I need to record a video with the framework, or at least capture a UIImage sequence, but I don't know how.
This is what I've tried:
In ARKit you have a session which tracks your world. This session has a capturedImage property where you can get the current image. So I created a Timer which appends the capturedImage to a list every 0.1 s. This would work for me, but if I start the Timer by tapping a "start" button, the camera starts to lag. It's not the Timer itself, I guess, because if I invalidate the Timer by tapping a "stop" button the camera is fluent again.
Is there a way to solve the lag, or is there a better approach altogether?
Thanks
I was able to use ReplayKit to do exactly that.
To see what ReplayKit is like
On your iOS device, go to Settings -> Control Center -> Customize Controls. Move "Screen Recording" to the "Include" section, and swipe up to bring up Control Center. You should now see the round Screen Recording icon, and you'll notice that when you press it, iOS starts to record your screen. Tapping the blue bar will end recording and save the video to Photos.
Using ReplayKit, you can make your app invoke the screen recorder and capture your ARKit content.
How-to
To start recording:
RPScreenRecorder.shared().startRecording { error in
    // Handle error, if any
}
To stop recording:
RPScreenRecorder.shared().stopRecording(handler: { (previewVc, error) in
    // Do things
})
After you're done recording, .stopRecording gives you an optional RPPreviewViewController, which is
An object that displays a user interface where users preview and edit a screen recording created with ReplayKit.
So in our example, you can present previewVc if it isn't nil:
RPScreenRecorder.shared().stopRecording(handler: { (previewVc, error) in
    if let previewVc = previewVc {
        previewVc.delegate = self
        self.present(previewVc, animated: true, completion: nil)
    }
})
You'll be able to edit and save the video right from the previewVc, but you might want to make self (or someone) the RPPreviewViewControllerDelegate, so you can easily dismiss the previewVc when you're finished.
extension MyViewController: RPPreviewViewControllerDelegate {
    func previewControllerDidFinish(_ previewController: RPPreviewViewController) {
        // Called when the preview vc is ready to be dismissed
        previewController.dismiss(animated: true, completion: nil)
    }
}
Caveats
You'll notice that startRecording records "the app display", so any views you have (buttons, labels, etc.) will be recorded as well.
I found it useful to hide the controls while recording and let my users know that tapping the screen stops recording, but I've also read about others having success putting their essential controls on a separate UIWindow.
Excluding views from recording
The separate UIWindow trick works. I was able to make an overlay window holding a record button and a timer, and these weren't recorded.
let overlayWindow = UIWindow(frame: view.frame)
let recordButton = UIButton( ... )
overlayWindow.backgroundColor = UIColor.clear
The UIWindow will be hidden by default. So when you want to show your controls, you must set isHidden to false.
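For completeness, a rough sketch of wiring the overlay window up (the rootViewController line is an assumption about what a working setup needs, not part of the original answer):
overlayWindow.rootViewController = UIViewController()
overlayWindow.rootViewController?.view.addSubview(recordButton)
overlayWindow.isHidden = false // UIWindow is hidden by default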
Best of luck to you!
Use a custom renderer.
Render the scene with the custom renderer, then get the texture from it, and finally convert that to a CVPixelBufferRef:
- (void)viewDidLoad {
    [super viewDidLoad];

    self.rgbColorSpace = CGColorSpaceCreateDeviceRGB();
    self.bytesPerPixel = 4;
    self.bitsPerComponent = 8;
    self.bitsPerPixel = 32;
    self.textureSizeX = 640;
    self.textureSizeY = 960;

    // Set the view's delegate
    self.sceneView.delegate = self;

    // Show statistics such as fps and timing information
    self.sceneView.showsStatistics = YES;

    // Create a new scene
    SCNScene *scene = [SCNScene scene]; // [SCNScene sceneNamed:@"art.scnassets/ship.scn"];

    // Set the scene to the view
    self.sceneView.scene = scene;
    self.sceneView.preferredFramesPerSecond = 30;

    [self setupMetal];
    [self setupTexture];
    self.renderer.scene = self.sceneView.scene;
}

- (void)setupMetal
{
    if (self.sceneView.renderingAPI == SCNRenderingAPIMetal) {
        self.device = self.sceneView.device;
        self.commandQueue = [self.device newCommandQueue];
        self.renderer = [SCNRenderer rendererWithDevice:self.device options:nil];
    }
    else {
        NSAssert(NO, @"Only Metal is supported");
    }
}

- (void)setupTexture
{
    MTLTextureDescriptor *descriptor = [MTLTextureDescriptor texture2DDescriptorWithPixelFormat:MTLPixelFormatBGRA8Unorm_sRGB width:self.textureSizeX height:self.textureSizeY mipmapped:NO];
    descriptor.usage = MTLTextureUsageShaderRead | MTLTextureUsageRenderTarget;
    id<MTLTexture> textureA = [self.device newTextureWithDescriptor:descriptor];
    self.offscreenTexture = textureA;
}

- (void)renderer:(id<SCNSceneRenderer>)renderer willRenderScene:(SCNScene *)scene atTime:(NSTimeInterval)time
{
    [self doRender];
}

- (void)doRender
{
    if (self.rendering) {
        return;
    }
    self.rendering = YES;

    CGRect viewport = CGRectMake(0, 0, self.textureSizeX, self.textureSizeY);
    id<MTLTexture> texture = self.offscreenTexture;

    MTLRenderPassDescriptor *renderPassDescriptor = [MTLRenderPassDescriptor new];
    renderPassDescriptor.colorAttachments[0].texture = texture;
    renderPassDescriptor.colorAttachments[0].loadAction = MTLLoadActionClear;
    renderPassDescriptor.colorAttachments[0].clearColor = MTLClearColorMake(0, 1, 0, 1.0);
    renderPassDescriptor.colorAttachments[0].storeAction = MTLStoreActionStore;

    id<MTLCommandBuffer> commandBuffer = [self.commandQueue commandBuffer];
    self.renderer.pointOfView = self.sceneView.pointOfView;
    [self.renderer renderAtTime:0 viewport:viewport commandBuffer:commandBuffer passDescriptor:renderPassDescriptor];

    [commandBuffer addCompletedHandler:^(id<MTLCommandBuffer> _Nonnull bf) {
        [self.recorder writeFrameForTexture:texture];
        self.rendering = NO;
    }];
    [commandBuffer commit];
}
Then, in the recorder, set up an AVAssetWriterInputPixelBufferAdaptor with an AVAssetWriter, and convert the texture to a CVPixelBufferRef:
- (void)writeFrameForTexture:(id<MTLTexture>)texture {
    CVPixelBufferPoolRef pixelBufferPool = self.assetWriterPixelBufferInput.pixelBufferPool;
    CVPixelBufferRef pixelBuffer;
    CVReturn status = CVPixelBufferPoolCreatePixelBuffer(nil, pixelBufferPool, &pixelBuffer);

    CVPixelBufferLockBaseAddress(pixelBuffer, 0);
    void *pixelBufferBytes = CVPixelBufferGetBaseAddress(pixelBuffer);
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(pixelBuffer);
    MTLRegion region = MTLRegionMake2D(0, 0, texture.width, texture.height);
    [texture getBytes:pixelBufferBytes bytesPerRow:bytesPerRow fromRegion:region mipmapLevel:0];

    // 'presentationTime' is assumed to be tracked elsewhere,
    // e.g. derived from the frame count and the frame rate
    [self.assetWriterPixelBufferInput appendPixelBuffer:pixelBuffer withPresentationTime:presentationTime];

    CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
    CVPixelBufferRelease(pixelBuffer);
}
Make sure the custom renderer and the adaptor share the same pixel encoding.
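For illustration, here is a minimal sketch of the writer/adaptor setup assumed above, written in Swift; the output URL, dimensions, and function name are illustrative, not the answer's actual code:
import AVFoundation

func makeWriter(outputURL: URL) throws -> (AVAssetWriter, AVAssetWriterInputPixelBufferAdaptor) {
    let writer = try AVAssetWriter(outputURL: outputURL, fileType: .mov)
    let input = AVAssetWriterInput(mediaType: .video, outputSettings: [
        AVVideoCodecKey: AVVideoCodecType.h264,
        AVVideoWidthKey: 640,
        AVVideoHeightKey: 960
    ])
    input.expectsMediaDataInRealTime = true
    let adaptor = AVAssetWriterInputPixelBufferAdaptor(
        assetWriterInput: input,
        sourcePixelBufferAttributes: [
            // BGRA here matches MTLPixelFormatBGRA8Unorm_sRGB in the renderer
            kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA
        ])
    writer.add(input)
    return (writer, adaptor)
}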
I tested this with the default ship.scn: it consumes only about 30% CPU, compared to almost 90% when using the snapshot method for every frame. And it will not pop up a permission dialog.
I have released an open-source framework taking care of this: https://github.com/svtek/SceneKitVideoRecorder
It works by getting the drawables from the scene view's Metal layer.
You can attach a display link to have your renderer called as the screen refreshes:
displayLink = CADisplayLink(target: self, selector: #selector(updateDisplayLink))
displayLink?.add(to: .main, forMode: .commonModes)
And then grab the drawable from the Metal layer:
let metalLayer = sceneView.layer as! CAMetalLayer
let nextDrawable = metalLayer.nextDrawable()
Be aware that calling nextDrawable() consumes a drawable. You should call it as rarely as possible, and do so inside an autoreleasepool {} block so the drawable is released properly and replaced with a new one.
Then you should read the MTLTexture from the drawable into a pixel buffer, which you can append to an AVAssetWriter to create a video.
let destinationTexture = currentDrawable.texture
destinationTexture.getBytes(...)
With these in mind, the rest is pretty straightforward video recording on iOS/Cocoa.
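For illustration, a minimal sketch of such a display-link callback (sceneView and a writeFrame(texture:) helper are assumed to exist elsewhere):
@objc func updateDisplayLink() {
    autoreleasepool {
        // Scope the drawable tightly so it is released and replaced promptly
        guard let metalLayer = sceneView.layer as? CAMetalLayer,
              let drawable = metalLayer.nextDrawable() else { return }
        writeFrame(texture: drawable.texture)
    }
}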
You can find all these implemented in the repo I've shared above.
I had a similar need: I wanted to record the ARSCNView internally in the app, without ReplayKit, so that I could manipulate the resulting video. I ended up using this project: https://github.com/lacyrhoades/SceneKit2Video. The project is made to render an SCNView to a video, but you can configure it to accept ARSCNViews. It works pretty well, and you can choose to receive an image feed instead of a video via a delegate function if you like.

swift 3: Watch app: if watch goes to sleep there's a delay between interface controllers

I have a countdown timer interface controller that will, once the timer gets down to 00:00, launch another interface controller. If I keep the watch active until the timer reaches 00:00, then the second interface controller launches as it should. However, if the watch goes to sleep, even if it's active right before the timer reaches 00:00, there will be a delay of several seconds to over a minute before the second interface controller launches.
This defect doesn't appear when running in the watch simulator, just when I'm running on the actual device.
I'm using Xcode 8 and Swift 3.
Here's my code from the first interface controller:
// this func will update the countdown timer
@objc private func updateTimer() {
    totalNumberOfSeconds += 1
    numberOfSeconds += 1
    if (numberOfSeconds == numSecondsInMinute) {
        numberOfSeconds = 0
    }
    // only attempt to open the RacingTimer interface if this IC is visible
    if (isStillVisible) {
        // change to the Racing Timer if the countdown timer hits 00:00
        if (totalNumberOfSeconds > originalSecondsTimeInterval) {
            // the watch must have gone to sleep when the countdown timer
            // hit 00:00, so the total num secs is past the orig timer.
            // set numberOfSeconds to total - original to pass to RacingTimer
            numberOfSeconds = totalNumberOfSeconds - originalSecondsTimeInterval
            // launch the racing timer
            WKInterfaceController.reloadRootControllers(withNames: ["RacingTimer"], contexts: [numberOfSeconds])
            // destroy the timer and reset the vars
            countdownClock.invalidate()
            numberOfSeconds = 0
            totalNumberOfSeconds = 0
        } else if (totalNumberOfSeconds == originalSecondsTimeInterval) {
            // launch the racing timer
            WKInterfaceController.reloadRootControllers(withNames: ["RacingTimer"], contexts: nil)
            // destroy the timer and reset the vars
            countdownClock.invalidate()
            numberOfSeconds = 0
            totalNumberOfSeconds = 0
        }
    }
}
override func awake(withContext context: Any?) {
    super.awake(withContext: context)
    // get race and timer data
    let numSecs = raceDS.timer * 60
    originalSecondsTimeInterval = numSecs
    cdt = NSDate(timeIntervalSinceNow: TimeInterval(numSecs))
    countdownTimer.setDate(cdt as Date)
    countdownClock = Timer.scheduledTimer(timeInterval: 1, target: self, selector: #selector(updateTimer), userInfo: nil, repeats: true)
    countdownTimer.start()
}
override func willActivate() {
    // This method is called when the watch view controller is about to be visible to the user
    super.willActivate()
    nearestMinuteButtonOutlet.setTitle("will activate") // debug only
    didAppear()
}

// set the visible boolean to true
override func didAppear() {
    super.didAppear()
    isStillVisible = true
    nearestMinuteButtonOutlet.setTitle("did appear") // debug only
}

// set the boolean to false
override func didDeactivate() {
    // This method is called when the watch view controller is no longer visible
    super.didDeactivate()
    isStillVisible = false
    nearestMinuteButtonOutlet.setTitle("sleeping") // debug only
}
I'm really at a loss as to why there's a delay if the watch goes to sleep. Any help would be greatly appreciated. TIA.
As of watchOS 3 there is no way to keep a Timer running in the background. Timer objects shouldn't be used for precise time measurement on iOS either; there you at least have CADisplayLink as an alternative for accurate timing, but that isn't available on watchOS 3/4.
For measuring time in the background, you should save the current date before the app goes to the background and calculate the elapsed time when the app is launched again.
If you simply need the other InterfaceController to be visible by the time the user opens your app again, you can use the date-based method just described and navigate to the other InterfaceController as soon as the app becomes active; a sketch of this follows below.
If you need code to execute the moment the countdown finishes, you should instead schedule a background task; background tasks are currently the only way to run code in the background on watchOS.
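For illustration, a minimal sketch of the date-based approach grafted onto the question's controller (the deadline property is new and assumed; the other names come from the question):
var deadline: Date?

override func didDeactivate() {
    super.didDeactivate()
    // Remember when the countdown should end; the Timer won't fire reliably while asleep
    let remaining = originalSecondsTimeInterval - totalNumberOfSeconds
    deadline = Date().addingTimeInterval(TimeInterval(remaining))
}

override func willActivate() {
    super.willActivate()
    // If the countdown expired while the watch was asleep, switch immediately
    if let deadline = deadline, deadline.timeIntervalSinceNow <= 0 {
        countdownClock.invalidate()
        WKInterfaceController.reloadRootControllers(withNames: ["RacingTimer"], contexts: nil)
    }
}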

Lock/Unlock critical section of code in UIView animation completion block - Swift

How do I lock and unlock a critical section of code in a UIView animation completion block using Swift (in a subclassed UIView)?
func MoveCard(sourcePile: Pile, destPile: Pile) {
    // Temporarily disable user interaction
    disableUserInteraction()

    // Move card from source pile to destination pile
    UIView.animateWithDuration(0.1) { () -> Void in
        // Move card center
        self.center = destPile.center

        // CRITICAL SECTION OF CODE

        // Add card to destination pile array
        destPile.cards.append(sourcePile.cards.last!)

        // Remove card from source pile array
        sourcePile.cards.removeLast()

        // Reenable user interaction
        enableUserInteraction()
    }
}
Try this:
UIView.animateWithDuration(0.1, animations: {
    // your animation code
}, completion: { (complete: Bool) in
    // your completion code
})
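Applied to the MoveCard function from the question, the critical section moves into the completion block so it runs only after the animation has finished; a sketch using the question's own names:
func MoveCard(sourcePile: Pile, destPile: Pile) {
    // Temporarily disable user interaction
    disableUserInteraction()

    UIView.animateWithDuration(0.1, animations: {
        // Move card center
        self.center = destPile.center
    }, completion: { _ in
        // CRITICAL SECTION: mutate the piles only after the animation completes
        destPile.cards.append(sourcePile.cards.last!)
        sourcePile.cards.removeLast()

        // Reenable user interaction
        self.enableUserInteraction()
    })
}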

Swift - Stopping an animation after the last image has appeared?

I'm trying to learn animation in Swift. I have an explosion composed of 77 images, and I've stumbled upon a couple of issues.
1) I'm trying to make the animation stop automatically once 77.png has appeared. Here is what I have so far. Obviously, it is currently in a continuous animation loop.
2) There is a ~1 second delay before the animation starts. However, after it has animated once, and I click animate again, it is instant from then on. How can I make the first animation instant as well?
@IBOutlet var explosionSequence: UIImageView

var imgListArray: [UIImage] = []
for countValue in 1...77 {
    let strImageName = "\(countValue).png"
    if let image = UIImage(named: strImageName) {
        imgListArray.append(image)
    }
}
explosionSequence.animationImages = imgListArray
explosionSequence.startAnimating()
// I want to stop the animation here, after all 77 .pngs have appeared
Thank you in advance!
You can use the UIImageView property animationRepeatCount to limit your animation to a single loop.
The default value is 0, which repeats the animation indefinitely:
explosionSequence.animationRepeatCount = 1
You can also use animationDuration to adjust the timing of your animation.
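Putting it together with the question's code (the duration value is illustrative, and setting the static image to the last frame is one way to keep it on screen after the animation ends):
explosionSequence.animationImages = imgListArray
explosionSequence.animationDuration = 2.0   // seconds for one pass over all 77 frames
explosionSequence.animationRepeatCount = 1  // play once instead of looping
explosionSequence.image = imgListArray.last // the frame shown once the animation ends
explosionSequence.startAnimating()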