Sharing screenshots in share sheets - Swift

I am making an arcade-style game, and when the player loses I give them the option to share their score via the iOS share sheet. What I want to know is: how can I have them share a screenshot taken right when they die, along with some text? I already know how to let them share text, but I want the screenshot as well. I set it up so that the game takes a screenshot right when the player dies:
func screenShotMethod() {
    // Create the UIImage
    UIGraphicsBeginImageContext(view!.frame.size)
    view!.layer.renderInContext(UIGraphicsGetCurrentContext())
    let image = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()

    // Save it to the camera roll
    UIImageWriteToSavedPhotosAlbum(image, nil, nil, nil)
    println("screenshot")
}
Then I call this function in the game-over sequence, like this:
if gameOver == 0 {
    gameOver = 1
    screenShotMethod() // <-- screenshot taken here
    movingObjects.speed = 0
    movingObjects.removeFromParent()
    backgroundMusicPlayer.stop()
Now what I want to be able to do is access this screenshot so that it can be used when sharing, but have it deleted as soon as the player hits replay if they don't share that score. Right now I have sharing set up like this:
if shareButton.containsPoint(location) {
    UIGraphicsBeginImageContext(view!.frame.size)
    view!.layer.renderInContext(UIGraphicsGetCurrentContext())
    let image = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()

    // Save it to the camera roll
    UIImageWriteToSavedPhotosAlbum(image, nil, nil, nil)
    println("screenshot")

    var postImage = UIImage(named: "\(image)")
    socialShare(sharingText: "I just got \(score) points in Deez Nuts! Bet you can't beat that! #DeezNuts", sharingImage: UIImage(named: "\(postImage)"), sharingURL: NSURL(string: "http://itunes.apple.com/app/"))
}
Please be specific and straightforward, because I am new to developing apps. Also, I am using Swift, if you didn't already notice. Thank you very much.

You just need to delete this line:
var postImage = UIImage(named: "\(image)")
because image is already a UIImage, so just pass it directly as sharingImage: image:
socialShare(sharingText: "I just got \(score) points in Deez Nuts! Bet you can't beat that! #DeezNuts", sharingImage: image, sharingURL: NSURL(string: "http://itunes.apple.com/app/")!)
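For completeness: the socialShare helper itself isn't shown in the question, so here is a minimal sketch of what such a helper could look like using UIActivityViewController (written in current Swift syntax; the parameter names are assumptions based on the call site above):

func socialShare(sharingText: String?, sharingImage: UIImage?, sharingURL: URL?) {
    // Collect only the items that were actually provided.
    var sharingItems: [Any] = []
    if let text = sharingText { sharingItems.append(text) }
    if let image = sharingImage { sharingItems.append(image) }
    if let url = sharingURL { sharingItems.append(url) }

    let activityViewController = UIActivityViewController(activityItems: sharingItems, applicationActivities: nil)

    // In a SpriteKit game, present from the view controller that owns the SKView.
    view?.window?.rootViewController?.present(activityViewController, animated: true)
}

Note that because the screenshot is passed along as an in-memory UIImage, there is nothing that needs deleting when the player hits replay, unless you also keep the UIImageWriteToSavedPhotosAlbum call, which writes a copy to the camera roll.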

Related

ARVideoNode in RealityKit

I was experimenting with OpenAI's ChatGPT, and when I asked it for code to play a video in a RealityKit AR scene when a reference image is tracked, it used ARVideoNode instead of the AVPlayer and VideoMaterial solution I expected. It even gave me an answer as to why ARVideoNode is better than AVPlayer on a VideoMaterial when I asked, but I have never heard of ARVideoNode in RealityKit.
Am I missing something, or is it just a flaw in the AI?
import RealityKit

// Set up image tracking
let configuration = ARImageTrackingConfiguration()
configuration.detectionImages = ["reference-image-1", "reference-image-2"]
arView.session.run(configuration)

// Create a dictionary to map reference image names to video file names
let videoFileNames = ["reference-image-1": "video-1.mp4",
                      "reference-image-2": "video-2.mp4"]

// Track the reference images and display the corresponding
// videos on top of them
var videoNodes = [String: ARVideoNode]()
arView.scene.subscribe(to: ARImageAnchor.self) { (anchor: ARImageAnchor) in
    // Get the video file name for the tracked image
    guard let videoFileName = videoFileNames[anchor.name] else { return }

    // Load the video file
    let videoURL = URL(fileURLWithPath: "path/to/\(videoFileName)")
    let videoAsset = VideoAsset(url: videoURL)

    // Create an ARVideoNode and add it to the scene
    let videoNode = ARVideoNode(asset: videoAsset)
    arView.scene.anchors.append(videoNode)
    videoNodes[anchor.name] = videoNode

    // Position the video node on top of the image
    videoNode.transform = anchor.transform

    // Play the video
    videoNode.play()
}

// Monitor the tracking status of the reference images
// and pause/resume the videos as needed
arView.scene.subscribe(to: ARImageAnchor.self) { (anchor: ARImageAnchor) in
    // Get the video node for the tracked image
    guard let videoNode = videoNodes[anchor.name] else { return }

    if anchor.isTracked {
        // Resume playing the video if the image is being tracked
        videoNode.play()
    } else {
        // Pause the video if the image is not being tracked
        videoNode.pause()
    }
}
It's not a flaw so much as simply a wrong answer from ChatGPT. As far as I know, ARVideoNode comes from Kudan: it is a subclass of the ARNode parent class that is used to render video content. It has nothing to do with RealityKit, even in terms of programming language: the KudanAR SDK natively uses Objective-C for iOS and Java for Android.
Here's SO temporary policy regarding ChatGPT:
Overall, because the average rate of getting correct answers from ChatGPT is too low, the posting of answers created by ChatGPT is substantially harmful to the site and to users who are asking and looking for correct answers.
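For contrast, here is a rough sketch of the approach the question expected, AVPlayer plus VideoMaterial, both of which do exist in RealityKit (iOS 14+). The resource group, image, and video file names are placeholders:

import ARKit
import RealityKit
import AVFoundation

// Run image tracking with reference images from an AR resource group.
let configuration = ARImageTrackingConfiguration()
if let referenceImages = ARReferenceImage.referenceImages(inGroupNamed: "AR Resources", bundle: nil) {
    configuration.trackingImages = referenceImages
}
arView.session.run(configuration)

// Call this when an ARImageAnchor is added, e.g. from ARSessionDelegate's session(_:didAdd:).
func attachVideo(to imageAnchor: ARImageAnchor, in arView: ARView) {
    guard let videoURL = Bundle.main.url(forResource: "video-1", withExtension: "mp4") else { return }

    // A plane the size of the tracked image, textured with the playing video.
    let player = AVPlayer(url: videoURL)
    let videoMaterial = VideoMaterial(avPlayer: player)
    let size = imageAnchor.referenceImage.physicalSize
    let plane = ModelEntity(mesh: .generatePlane(width: Float(size.width), depth: Float(size.height)),
                            materials: [videoMaterial])

    // Anchor the plane to the detected image and start playback.
    let anchorEntity = AnchorEntity(anchor: imageAnchor)
    anchorEntity.addChild(plane)
    arView.scene.addAnchor(anchorEntity)
    player.play()
}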

How do you add an overlay while recording a video in Swift?

I am trying to record, and then save, a video in Swift using AVFoundation. This works. I am also trying to add an overlay, such as a text label containing the date, to the video.
For example: the video saved is not only what the camera sees, but the timestamp as well.
Here is how I am saving the video:
func fileOutput(_ output: AVCaptureFileOutput, didFinishRecordingTo outputFileURL: URL, from connections: [AVCaptureConnection], error: Error?) {
    saveVideo(toURL: movieURL!)
}

private func saveVideo(toURL url: URL) {
    PHPhotoLibrary.shared().performChanges({
        PHAssetChangeRequest.creationRequestForAssetFromVideo(atFileURL: url)
    }) { (success, error) in
        if success {
            print("Video saved to Camera Roll.")
        } else {
            print("Video failed to save.")
        }
    }
}
I have a movieOuput that is an AVCaptureMovieFileOutput. My preview layer does not contain any sublayers. I tried adding the timestamp label's layer to the previewLayer, but this did not succeed.
I have tried Ray Wenderlich's example as well as this Stack Overflow question. Lastly, I also tried this tutorial, all to no avail.
How can I add an overlay to my video that is in the saved video in the camera roll?
Without more information, it sounds like what you are asking for is a watermark, not an overlay.
A watermark is a markup baked into the video itself and saved with it.
An overlay is generally shown as subviews on the preview layer and will not be saved with the video.
Check this out here: https://stackoverflow.com/a/47742108/8272698
func addWatermark(inputURL: URL, outputURL: URL, handler: @escaping (_ exportSession: AVAssetExportSession?) -> Void) {
    let mixComposition = AVMutableComposition()
    let asset = AVAsset(url: inputURL)
    let videoTrack = asset.tracks(withMediaType: AVMediaType.video)[0]
    let timerange = CMTimeRangeMake(kCMTimeZero, asset.duration)

    let compositionVideoTrack: AVMutableCompositionTrack = mixComposition.addMutableTrack(withMediaType: AVMediaType.video, preferredTrackID: CMPersistentTrackID(kCMPersistentTrackID_Invalid))!

    do {
        try compositionVideoTrack.insertTimeRange(timerange, of: videoTrack, at: kCMTimeZero)
        compositionVideoTrack.preferredTransform = videoTrack.preferredTransform
    } catch {
        print(error)
    }

    let watermarkFilter = CIFilter(name: "CISourceOverCompositing")!
    let watermarkImage = CIImage(image: UIImage(named: "waterMark")!)
    let videoComposition = AVVideoComposition(asset: asset) { (filteringRequest) in
        let source = filteringRequest.sourceImage.clampedToExtent()
        watermarkFilter.setValue(source, forKey: "inputBackgroundImage")
        let transform = CGAffineTransform(translationX: filteringRequest.sourceImage.extent.width - (watermarkImage?.extent.width)! - 2, y: 0)
        watermarkFilter.setValue(watermarkImage?.transformed(by: transform), forKey: "inputImage")
        filteringRequest.finish(with: watermarkFilter.outputImage!, context: nil)
    }

    guard let exportSession = AVAssetExportSession(asset: asset, presetName: AVAssetExportPreset640x480) else {
        handler(nil)
        return
    }

    exportSession.outputURL = outputURL
    exportSession.outputFileType = AVFileType.mp4
    exportSession.shouldOptimizeForNetworkUse = true
    exportSession.videoComposition = videoComposition
    exportSession.exportAsynchronously { () -> Void in
        handler(exportSession)
    }
}
And here's how to call the function:
let outputURL = NSURL.fileURL(withPath: "TempPath")
let inputURL = NSURL.fileURL(withPath: "VideoWithWatermarkPath")
addWatermark(inputURL: inputURL, outputURL: outputURL, handler: { (exportSession) in
    guard let session = exportSession else {
        // Error
        return
    }
    switch session.status {
    case .completed:
        guard NSData(contentsOf: outputURL) != nil else {
            // Error
            return
        }
        // Now you can find the video with the watermark in the location outputURL
    default:
        // Error
        break
    }
})
Let me know if this code works for you. It is in Swift 3, so some changes will be needed. I currently use this code in an app of mine; I have not updated it to Swift 5 yet.
I do not have an actual development environment for Swift that can utilize AVFoundation. Thus, I can't provide you with any example code.
To add metadata (date, location, timestamp, watermark, frame rate, etc.) as an overlay to the video while recording, you would have to process the video feed frame by frame, live, while recording. Most likely you would have to store the frames in a buffer and process them before actually recording them.
Now, when it comes to the metadata, there are two types: static and dynamic. Static metadata, such as a watermark, should be easy enough, since every frame gets the same thing.
However, for dynamic metadata such as a timestamp or GPS location, there are a few things to take into consideration. It takes computational power and time to process video frames, so depending on the type of dynamic data and how you obtain it, the processed value may sometimes not be the correct one. For example, suppose you get a frame at 1:00:01, process it, and add a timestamp to it, and pretend that processing the timestamp took 2 seconds. The next frame arrives at 1:00:02, but you can't process it until 1:00:03 because processing the previous frame took 2 seconds. Thus, depending on how you obtain the timestamp for the new frame, that value may not be the one you wanted.
For dynamic metadata you should also take hardware lag into consideration. For example, suppose the software is supposed to add live GPS location data to each frame, and there were no lags in development or testing. In real life, however, a user runs the software in an area with a bad connection, and his phone lags while obtaining the GPS location, sometimes for as long as 5 seconds. What do you do in that situation? Do you set a timeout for the GPS location and use the last good position? Do you report an error? Do you defer that frame to be processed later when the GPS data becomes available (which may ruin live recording), or use an expensive algorithm to try to predict the user's location for that frame?
Beyond those considerations, I have some references here that I think may help you. The one from medium.com looks pretty good.
https://medium.com/ios-os-x-development/ios-camera-frames-extraction-d2c0f80ed05a
Adding watermark to currently recording video and save with watermark
Render dynamic text onto CVPixelBufferRef while recording video
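To make the frame-by-frame idea above a little more concrete, here is a rough sketch of one possible pipeline using AVCaptureVideoDataOutput and Core Image. The asset-writer side is omitted and the names are illustrative, so treat it as a starting point rather than a working recorder:

import AVFoundation
import CoreImage
import UIKit

// Receives frames from the capture session and composites an overlay image over
// each frame with Core Image. The processed frames would then be appended to an
// AVAssetWriterInput / pixel buffer adaptor (not shown here).
final class FrameProcessor: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
    private let ciContext = CIContext()
    private let overlayImage: CIImage   // e.g. a pre-rendered timestamp label

    init(overlay: UIImage) {
        self.overlayImage = CIImage(image: overlay) ?? CIImage.empty()
        super.init()
    }

    // Set this object as the delegate of an AVCaptureVideoDataOutput
    // via setSampleBufferDelegate(_:queue:).
    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }

        // Composite the overlay over the camera frame.
        let frame = CIImage(cvPixelBuffer: pixelBuffer)
        let composited = overlayImage.composited(over: frame)

        // Render the composited image back into the pixel buffer, then hand it to
        // the asset writer (omitted for brevity).
        ciContext.render(composited, to: pixelBuffer)
    }
}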
Adding on to @Kevin Ng's answer, you can do an overlay on video frames with a UIViewController and a UIView.
The UIViewController will have:
a property to work with the video stream
private var videoSession: AVCaptureSession?
a property to work with the overlay (the UIView subclass)
private var myOverlay: MyUIView { view as! MyUIView }
a property to work with the video output queue
private let videoOutputQueue = DispatchQueue(label: "outputQueue", qos: .userInteractive)
a method to create the video session
a method to process and display the overlay
The UIView will have the task-specific helper methods needed to act as the overlay. For example, if you are doing hand detection, this overlay class can have helper methods to draw points at given coordinates (the view controller detects the coordinates of hand features, does the necessary coordinate conversions, then passes the coordinates to the UIView class to display as an overlay).
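A skeletal sketch of the structure described above could look like the following (class and method names are illustrative, and the view controller's root view is assumed to be the overlay view):

import UIKit
import AVFoundation

// Task-specific overlay view, e.g. for drawing detected points.
final class MyUIView: UIView {
    func draw(points: [CGPoint]) {
        // Custom drawing or CAShapeLayer updates would go here.
    }
}

final class CameraViewController: UIViewController {
    // Property to work with the video stream.
    private var videoSession: AVCaptureSession?

    // Property to work with the overlay (the UIView subclass).
    private var myOverlay: MyUIView { view as! MyUIView }

    // Property to work with the video output queue.
    private let videoOutputQueue = DispatchQueue(label: "outputQueue", qos: .userInteractive)

    override func viewDidLoad() {
        super.viewDidLoad()
        setupVideoSession()
    }

    // Method to create the video session.
    private func setupVideoSession() {
        // Configure the AVCaptureSession, its inputs, and an AVCaptureVideoDataOutput
        // that delivers frames on videoOutputQueue.
    }

    // Method to process a frame and display the overlay.
    private func processFrame(/* sample buffer */) {
        // Detect features, convert coordinates, then hand them to the overlay:
        // DispatchQueue.main.async { self.myOverlay.draw(points: convertedPoints) }
    }
}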

Failed to get video thumbnail from AVPlayer using Fairplay HLS

I'm trying to build a custom progress bar for a video player app in tvOS, and would like to show thumbnails of the video while the user scans the video.
I'm using AVPlayer and Fairplay HLS to play remote video files. I've tried to do this using 2 methods. One with AVAssetImageGenerator's copyCGImage, and the other with AVPlayerItemVideoOutput's copyPixelBuffer method. Both return nil.
When I tried with a local video file, the first method worked.
Method 1:
let imageGenerator = AVAssetImageGenerator(asset: playerItem.asset)
let progressSeconds = playerItem.duration.seconds * Double(progress)
let time = CMTime(seconds: progressSeconds, preferredTimescale: 5)
if let imageRef = try? imageGenerator.copyCGImage(at: time, actualTime: nil) {
    image = UIImage(cgImage: imageRef)
}
Method 2:
let videoThumbnailsOutput = AVPlayerItemVideoOutput(pixelBufferAttributes: [String(kCVPixelBufferPixelFormatTypeKey): NSNumber(value: kCVPixelFormatType_32BGRA)])
player?.currentItem?.add(videoThumbnailsOutput)

if let pixelBuffer = videoThumbnailsOutput.copyPixelBuffer(forItemTime: time, itemTimeForDisplay: nil) {
    let ciImage = CIImage(cvPixelBuffer: pixelBuffer)
    // ...
}
Any ideas what I'm doing wrong or is there any other way?
Thanks!
This is usually done by making use of the trick play stream associated with your actual stream.
https://en.wikipedia.org/wiki/Trick_mode
You can find it declared with the key EXT-X-I-FRAME-STREAM-INF in the manifest of your HLS stream. A regex might be needed in order to parse its value.
"#EXT-X-I-FRAME-STREAM-INF[^#]*URI=[^#]*"
Once you have the URL of the trick play stream, you can use a paused instance of AVPlayer as a thumbnail. Then, when the user swipes left and right, you seek the thumbnail player to show the right frame.
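As a rough sketch of that approach (the regex is adapted slightly so that it captures the quoted URI value, and the helper names are illustrative):

import Foundation
import AVFoundation

// Pull the I-frame ("trick play") playlist URI out of the master manifest.
// Real manifests may need more robust attribute parsing than this.
func iFramePlaylistURL(fromMasterPlaylist manifest: String) -> URL? {
    let pattern = "#EXT-X-I-FRAME-STREAM-INF[^#]*URI=\"([^\"]*)\""
    guard let regex = try? NSRegularExpression(pattern: pattern),
          let match = regex.firstMatch(in: manifest, range: NSRange(manifest.startIndex..., in: manifest)),
          let uriRange = Range(match.range(at: 1), in: manifest) else { return nil }
    return URL(string: String(manifest[uriRange]))
}

// Keep a second, paused AVPlayer on the trick play stream and seek it as the user scrubs.
func updateThumbnail(player thumbnailPlayer: AVPlayer, to progress: Double) {
    guard let duration = thumbnailPlayer.currentItem?.duration, duration.isNumeric else { return }
    let target = CMTimeMultiplyByFloat64(duration, multiplier: progress)
    thumbnailPlayer.seek(to: target, toleranceBefore: .zero, toleranceAfter: .zero)
}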

iOS 11: [ImageManager] Unable to load image data

After update to iOS 11, photo assets now load slowly and I get this message in console:
[ImageManager] Unable to load image data,
/var/mobile/Media/DCIM/103APPLE/IMG_3064.JPG
I use a static function to load the image:
class func getAssetImage(asset: PHAsset, size: CGSize = CGSize.zero) -> UIImage? {
    let manager = PHImageManager.default()
    let option = PHImageRequestOptions()
    option.isSynchronous = true
    var assetImage: UIImage!
    var scaleSize = size
    if size == CGSize.zero {
        scaleSize = CGSize(width: asset.pixelWidth, height: asset.pixelHeight)
    }
    manager.requestImage(for: asset, targetSize: scaleSize, contentMode: .aspectFit, options: option) { (image, _) in
        if let image = image {
            assetImage = image
        }
    }
    if assetImage == nil {
        manager.requestImageData(for: asset, options: option, resultHandler: { (data, _, orientation, _) in
            if let data = data {
                if let image = UIImage.init(data: data) {
                    assetImage = image
                }
            }
        })
    }
    return assetImage
}
Requesting the image for an asset usually succeeds, but it still prints this message. If I use only the requestImageData function, there is no such message, but photos taken with the Apple camera lose their orientation, and I get even more issues when loading a large number of images (I use an image slideshow in my app).
Apple always sucks when it comes to updates; maybe someone has a solution for how to fix this? It even fails to load an asset when there is a big list of them in the user's camera roll. Switching to requestImageData is not an option for me, as it frequently returns nil data now.
I would like to point out that I call this function only once. It is not used in a UITableView or similar. I use other code for thumbnails, with a globally initialised manager and options, so the assets are definitely not nil.
I call this function only when the user taps a certain thumbnail.
When the gallery has around 5,000 photos, maybe the connection to the assets is just overloaded and it later can't handle the request and crashes?
So many questions.
Hey, I was having the warning as well, and here is what worked for me: replacing
CGSize(width: asset.pixelWidth, height: asset.pixelHeight)
with
PHImageManagerMaximumSize
in the requestImage call removed the warning log 🎉
Hope this helps.
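In context, the adjusted call from the question's code might look roughly like this:

// Request the full-size image rather than a pixel-dimension target size.
manager.requestImage(for: asset,
                     targetSize: PHImageManagerMaximumSize,
                     contentMode: .aspectFit,
                     options: option) { (image, _) in
    if let image = image {
        assetImage = image
    }
}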
I had the same problem. Though this did not completely solve it, it definitely helped:
option.isNetworkAccessAllowed = true
This helps only on devices where the Optimise iPhone Storage option for the Photos app has been turned on.
Your code has some serious issues. You are saying .isSynchronous = true without stepping into a background thread to do the fetch. That is illegal and is what is causing the slowness. Plus, you are asking for a targetSize without also saying .resizeMode = .exact, which means you are getting much bigger images than you are asking for.
However, the warning you're seeing is irrelevant and can be ignored. It in no way signals a failure of image delivery; it seems to be just some internal message that has trickled up to the console by mistake.
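A sketch of those two corrections together (asset, targetSize, and imageView here are assumptions standing in for your own code):

let options = PHImageRequestOptions()
options.isSynchronous = true
options.resizeMode = .exact   // get the size you actually ask for

// Do the synchronous fetch off the main thread.
DispatchQueue.global(qos: .userInitiated).async {
    PHImageManager.default().requestImage(for: asset,
                                          targetSize: targetSize,
                                          contentMode: .aspectFit,
                                          options: options) { (image, _) in
        DispatchQueue.main.async {
            // Hand the result back to the UI.
            imageView.image = image
        }
    }
}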
This seems to be a bug in iOS 11, but I found I could work around it by setting the synchronous option to false. I reworked my code to deal with the asynchronous delivery. You can probably use sync(execute:) as a quick fix.
Also, I believe the problem only occurred with photos delivered by iCloud sharing.
You can try the requestImageData method with the following options. This worked for me in iOS 11.2 (both on device and in the simulator).
let options = PHImageRequestOptions()
options.deliveryMode = .highQualityFormat
options.resizeMode = .exact
options.isSynchronous = true

PHImageManager.default().requestImageData(for: asset, options: options, resultHandler: { (data, dataUTI, orientation, info) in
    // Use the returned image data here.
})

Changing JUST .scale in UIImage?

Here, I'm creating a typical graphic (it's full-screen size, on all devices) on the fly...
func buildImage() -> UIImage {
    let wrapperA: UIView = // say, a picture
    let wrapperB: UIView = // say, some text to go on top

    let mainSize = basicImage.bounds.size
    UIGraphicsBeginImageContextWithOptions(mainSize, false, 0.0)

    basicImage.drawHierarchy(in: basicImage.bounds, afterScreenUpdates: true)
    wrapperA.drawHierarchy(in: wrapperA.bounds, afterScreenUpdates: true)
    wrapperB.drawHierarchy(in: wrapperB.bounds, afterScreenUpdates: true)

    let result: UIImage? = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()

    // so we've just created a nice big image for some reason,
    // no problem so far
    print(result?.scale)

    // I want to change that image to have a scale of 1.
    // I don't know how to do that, so I actually just
    // make a new identical one, with scale of 1
    let resultFixed: UIImage = UIImage(cgImage: result!.cgImage!,
                                       scale: 1.0,
                                       orientation: result!.imageOrientation)

    print(resultFixed.scale)
    print("Let's use only '1-scale' images to upload to things like Instagram")

    // return result
    return resultFixed

    // be sure to ask on SO if there's a way to
    // just "change the scale" rather than make new.
}
I need the final image to have a .scale of 1, but .scale is a read-only property.
The only thing I know how to do is make a whole new image copy ... but set the scale to 1 as it's being created.
Is there a better way?
Handy tip -
This was motivated by the following: say you're saving a large image to the user's album, and also allowing a UIActivityViewController so as to post to (for example) Instagram. As a general rule, it seems to be best to make the scale 1 before sending to Instagram; if the scale is, say, 3, you actually just get the top-left 1/3 of the image in your Instagram post. In terms of saving it to the iOS photo album, it seems to be harmless (perhaps even better in some ways) to set the scale to 1. (I only say "better" because, if the image is, for example, ultimately emailed to a friend on a PC, it can cause less confusion if the scale is 1.) Interestingly though, if you just use the iOS Photos album, take a scale 2 or 3 image, and share it to Instagram, it does in fact appear properly on Instagram! (Perhaps Apple's Photos indeed knows it is best to make it scale 1 before sending it to somewhere like Instagram.)
As you say, the scale property of UIImage is read-only – therefore you cannot change it directly.
However, using UIImage's init(cgImage:scale:orientation) initialiser doesn't really copy the image – the underlying CGImage that it's wrapping (which contains the actual bitmap data) is still the same instance. It's only a new UIImage wrapper that is created.
Although that being said, you could cut out the intermediate UIImage wrapper in this case by getting the CGImage from the context directly through CGContext's makeImage() method. For example:
func buildImage() -> UIImage? {
    // ...
    let mainSize = basicImage.bounds.size
    UIGraphicsBeginImageContextWithOptions(mainSize, false, 0.0)

    defer {
        UIGraphicsEndImageContext()
    }

    // get the current context
    guard let context = UIGraphicsGetCurrentContext() else { return nil }

    // -- do drawing here --

    // get the CGImage from the context by calling makeImage() – then wrap in a UIImage
    // through using Optional's map(_:) (as makeImage() can return nil)
    // by default, the scale of the UIImage is 1.
    return context.makeImage().map(UIImage.init(cgImage:))
}
By the way, you can change the scale of the resulting image by creating a new image:
let newScaleImage = UIImage(cgImage: oldScaleImage.cgImage!, scale: 1.0, orientation: oldScaleImage.imageOrientation)