I have a video layer I want to render onto an SCNPlane. It works on the Simulator, but not on the device.
Here's a visual:
Here's the code:
//DISPLAY PLANE
SCNPlane * displayPlane = [SCNPlane planeWithWidth:displayWidth height:displayHeight];
displayPlane.cornerRadius = cornerRadius;
SCNNode * displayNode = [SCNNode nodeWithGeometry:displayPlane];
[scene.rootNode addChildNode:displayNode];
//apply material
SCNMaterial * displayMaterial = [SCNMaterial material];
displayMaterial.diffuse.contents = [[UIColor greenColor] colorWithAlphaComponent:1.0f];
[displayNode.geometry setMaterials:@[displayMaterial]];
//move to front + position for rim
displayNode.position = SCNVector3Make(0, rimTop - 0.08, /*0.2*/ 1);
//create video item
NSBundle * bundle = [NSBundle mainBundle];
NSString * path = [bundle pathForResource:@"tv_preview" ofType:@"mp4"];
NSURL * url = [NSURL fileURLWithPath:path];
AVAsset * asset = [AVAsset assetWithURL:url];
AVPlayerItem * item = [AVPlayerItem playerItemWithAsset:asset];
queue = [AVQueuePlayer playerWithPlayerItem:item];
looper = [AVPlayerLooper playerLooperWithPlayer:queue templateItem:item];
queue.muted = true;
layer = [AVPlayerLayer playerLayerWithPlayer:queue];
layer.frame = CGRectMake(0, 0, w, h);
layer.videoGravity = AVLayerVideoGravityResizeAspectFill;
displayMaterial.diffuse.contents = layer;
displayMaterial.doubleSided = true;
[queue play];
//[self.view.layer addSublayer:layer];
I can confirm that the plane itself exists (it shows up green in the first image above when the AVPlayerLayer isn't applied to it). If the video layer is added directly to the parent view's layer instead (the commented-out last line above), it plays fine - final image above. I thought it might be a file system issue, but then I imagine (?) the video wouldn't play in the final image either.
EDIT: setting queue (the AVPlayer) directly as the material contents works on the Simulator, albeit ugly as hell, but crashes on the device with the following error log:
Error: Could not get pixel buffer (CVPixelBufferRef)
validateFunctionArguments:3797: failed assertion `Fragment Function(commonprofile_frag): incorrect type of texture (MTLTextureType2D) bound at texture binding at index 4 (expect MTLTextureTypeCube) for u_radianceTexture[0].'
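One detail that may matter here (an assumption on my part, not something confirmed in this thread): u_radianceTexture is the cube map SceneKit binds for its lit shading / lighting-environment path, so rendering the video surface with an unlit (constant) lighting model - which the answer below also does - is worth trying before anything else, e.g.:
//Possible tweak (untested sketch): render the display plane unlit so the
//fragment shader never samples the radiance cube texture.
displayMaterial.lightingModelName = SCNLightingModelConstant;
//The EDIT above notes that assigning the AVPlayer itself renders on the
//Simulator; combined with the constant model it may behave better on device.
displayMaterial.diffuse.contents = queue;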
I think my solution will suit you - only a few small changes are needed here:
#import <SceneKit/SceneKit.h>
#import <AVKit/AVKit.h>
#import <SpriteKit/SpriteKit.h>
@interface ViewController : UIViewController
@end
It works on the Simulator (Xcode 13.2.1) and on a device (I ran it on an iPhone X with iOS 15.3). I used a 640x360 H.265 video. The only problem here is that the video loop stutters (one possible mitigation is sketched after the code below)...
#import "ViewController.h"
@implementation ViewController
AVQueuePlayer *queue;
AVPlayerLooper *looper;
SKVideoNode *videoNode;
- (void)viewWillAppear:(BOOL)animated
{
[super viewWillAppear:animated];
SCNView *sceneView = (SCNView *)self.view;
sceneView.backgroundColor = [UIColor blackColor];
sceneView.autoenablesDefaultLighting = YES;
sceneView.allowsCameraControl = YES;
sceneView.playing = YES;
sceneView.loops = YES;
SCNScene *scene = [SCNScene scene];
sceneView.scene = scene;
// Display
SCNPlane *displayPlane = [SCNPlane planeWithWidth:1.6 height:0.9];
SCNNode *displayNode = [SCNNode nodeWithGeometry:displayPlane];
SCNMaterial *displayMaterial = [SCNMaterial material];
displayMaterial.lightingModelName = SCNLightingModelConstant;
displayMaterial.doubleSided = YES;
// Video
NSBundle *bundle = [NSBundle mainBundle];
NSString *path = [bundle pathForResource:@"art.scnassets/hevc"
                                  ofType:@"mp4"];
NSURL *url = [NSURL fileURLWithPath:path];
AVAsset *asset = [AVAsset assetWithURL:url];
AVPlayerItem *item = [AVPlayerItem playerItemWithAsset:asset];
queue = [AVQueuePlayer playerWithPlayerItem:item];
queue.muted = NO;
looper = [AVPlayerLooper playerLooperWithPlayer:queue templateItem:item];
// SpriteKit video node
videoNode = [SKVideoNode videoNodeWithAVPlayer:queue];
videoNode.zRotation = -M_PI;
videoNode.xScale = -1;
SKScene *skScene = [SKScene sceneWithSize: CGSizeMake(
displayPlane.width * 1000,
displayPlane.height * 1000)];
videoNode.position = CGPointMake(skScene.size.width / 2,
skScene.size.height / 2);
videoNode.size = skScene.size;
[videoNode play];
[skScene addChild:videoNode];
displayMaterial.diffuse.contents = skScene;
[displayNode.geometry setMaterials:@[displayMaterial]];
[scene.rootNode addChildNode:displayNode];
}
@end
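Regarding the stutter at the loop point: one possible mitigation (a sketch under my own assumptions, not part of the tested code above) is to drop AVPlayerLooper and rewind manually when the item finishes:
//Sketch: manual looping instead of AVPlayerLooper; `queue` is the AVQueuePlayer ivar above.
queue.actionAtItemEnd = AVPlayerActionAtItemEndNone;
[[NSNotificationCenter defaultCenter] addObserverForName:AVPlayerItemDidPlayToEndTimeNotification
                                                  object:queue.currentItem
                                                   queue:[NSOperationQueue mainQueue]
                                              usingBlock:^(NSNotification *note) {
    //Seek back to the start with zero tolerance and resume playback.
    [queue seekToTime:kCMTimeZero
      toleranceBefore:kCMTimeZero
       toleranceAfter:kCMTimeZero
    completionHandler:^(BOOL finished) { [queue play]; }];
}];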
I want to be able to visualize the planes that my ARKit app detects. How do I do that?
This is what I want to be able to do
Create a new AR project in Xcode with SceneKit and Obj-C, then add these to ViewController.m:
//as a class or global variable:
NSMapTable *planes;
//add to viewWillAppear (a full viewWillAppear sketch follows the delegate methods below):
configuration.planeDetection = ARPlaneDetectionHorizontal;
//to viewDidLoad:
planes = [NSMapTable mapTableWithKeyOptions:NSMapTableStrongMemory
valueOptions:NSMapTableWeakMemory];
//new functions:
- (void)renderer:(id<SCNSceneRenderer>)renderer didAddNode:(SCNNode *)node forAnchor:(ARAnchor *)anchor {
if( [anchor isKindOfClass:[ARPlaneAnchor class]] ){
[planes setObject:anchor forKey:node];
ARPlaneAnchor *pa = (ARPlaneAnchor *)anchor;
SCNNode *pn = [SCNNode node];
[node addChildNode:pn];
pn.geometry = [SCNPlane planeWithWidth:pa.extent.x height:pa.extent.z];
SCNMaterial *m = [SCNMaterial material];
m.emission.contents = UIColor.blueColor;
m.transparency = 0.1;
pn.geometry.materials = @[m];
//set the rotation via transform first; assigning transform afterwards would overwrite the position
pn.transform = SCNMatrix4MakeRotation(-M_PI / 2.0, 1, 0, 0);
pn.position = SCNVector3Make(pa.center.x, -0.002, pa.center.z);
}
}
- (void)renderer:(id<SCNSceneRenderer>)renderer didUpdateNode:(SCNNode *)node forAnchor:(ARAnchor *)anchor {
if( [anchor isKindOfClass:[ARPlaneAnchor class]] ){
[planes setObject:anchor forKey:node];
ARPlaneAnchor *pa = (ARPlaneAnchor *)anchor;
SCNNode *pn = [node childNodes][0];
SCNPlane *pg = (SCNPlane *)pn.geometry;
pg.width = pa.extent.x;
pg.height = pa.extent.z;
pn.position = SCNVector3Make(pa.center.x, -0.002, pa.center.z);
}
}
- (void)renderer:(id<SCNSceneRenderer>)renderer didRemoveNode:(nonnull SCNNode *)node forAnchor:(nonnull ARAnchor *)anchor{
[planes removeObjectForKey:node];
}
You'll see translucent planes; give m.emission.contents a texture if you like.
Alternatively, get the example app from Apple (it's in Swift).
I have a problem printing a photo using AirPrint. I printed a 4 x 6 inch image, but the printed image is far too large! How can I resolve this problem?
Can I specify the paper size and photo size programmatically?
Here is a screenshot URL:
https://www.dropbox.com/s/1f6wa0waao56zqk/IMG_0532.jpg
Here is my code:
-(void)printPhotoWithImage:(UIImage *)image
{
NSData *myData = UIImageJPEGRepresentation(image, 1.f);
UIPrintInteractionController *pic = [UIPrintInteractionController sharedPrintController];
if (pic && [UIPrintInteractionController canPrintData:myData]) {
pic.delegate = self;
UIPrintInfo *pinfo = [UIPrintInfo printInfo];
pinfo.outputType = UIPrintInfoOutputPhoto;
pinfo.jobName = @"My Photo";
pinfo.duplex = UIPrintInfoDuplexLongEdge;
pic.printInfo = pinfo;
pic.showsPageRange = YES;
pic.printingItem = myData;
pic.printFormatter = format;
[format release];
void(^completionHandler)(UIPrintInteractionController *, BOOL, NSError *) = ^(UIPrintInteractionController *print, BOOL completed, NSError *error) {
[self resignFirstResponder];
if (!completed && error) {
NSLog(#"--- print error! ---");
}
};
[pic presentFromRect:CGRectMake((self.view.bounds.size.width - 64) + 27, (self.view.bounds.size.height - 16) + 55, 0, 0) inView:self.view animated:YES completionHandler:completionHandler];
}
}
- (UIPrintPaper *)printInteractionController:(UIPrintInteractionController *)printInteractionController choosePaper:(NSArray *)paperList
{
CGSize pageSize = CGSizeMake(6 * 72, 4 * 72);
return [UIPrintPaper bestPaperForPageSize:pageSize withPapersFromArray:paperList];
}
That is all of my code. Should I use a UIPrintPageRenderer to define the drawing area?
First you should set a page renderer on the print controller:
PrintPhotoPageRenderer *pageRenderer = [[PrintPhotoPageRenderer alloc] init];
pageRenderer.imageToPrint = image;
pic.printPageRenderer = pageRenderer;
- (void)printImage {
// Obtain the shared UIPrintInteractionController
UIPrintInteractionController *controller = [UIPrintInteractionController sharedPrintController];
controller.delegate = self;
if(!controller){
NSLog(#"Couldn't get shared UIPrintInteractionController!");
return;
}
// We need a completion handler block for printing.
UIPrintInteractionCompletionHandler completionHandler = ^(UIPrintInteractionController *printController, BOOL completed, NSError *error) {
if(completed && error)
NSLog(#"FAILED! due to error in domain %# with error code %u", error.domain, error.code);
};
// Obtain a printInfo so that we can set our printing defaults.
UIPrintInfo *printInfo = [UIPrintInfo printInfo];
UIImage *image = ((UIImageView *)self.view).image;
[controller setDelegate:self];
printInfo.outputType = UIPrintInfoOutputPhoto;
if(!controller.printingItem && image.size.width > image.size.height)
printInfo.orientation = UIPrintInfoOrientationLandscape;
// Use this printInfo for this print job.
controller.printInfo = printInfo;
// Since the code below relies on printingItem being zero if it hasn't
// already been set, this code sets it to nil.
controller.printingItem = nil;
#if DIRECT_SUBMISSION
// Use the URL of the image asset.
if(self.imageURL && [UIPrintInteractionController canPrintURL:self.imageURL])
controller.printingItem = self.imageURL;
#endif
// If we aren't doing direct submission of the image or for some reason we don't
// have an ALAsset or URL for our image, we'll draw it instead.
if(!controller.printingItem){
// Create an instance of our PrintPhotoPageRenderer class for use as the
// printPageRenderer for the print job.
PrintPhotoPageRenderer *pageRenderer = [[PrintPhotoPageRenderer alloc]init];
// The PrintPhotoPageRenderer subclass needs the image to draw. If we were taking
// this path we use the original image and not the fullScreenImage we obtained from
// the ALAssetRepresentation.
//pageRenderer.imageToPrint = ((UIImageView *)self.view).image;
pageRenderer.imageToPrint =image;
controller.printPageRenderer = pageRenderer;
}
// The method we use presenting the printing UI depends on the type of
// UI idiom that is currently executing. Once we invoke one of these methods
// to present the printing UI, our application's direct involvement in printing
// is complete. Our delegate methods (if any) and page renderer methods (if any)
// are invoked by UIKit.
if (UI_USER_INTERFACE_IDIOM() == UIUserInterfaceIdiomPad) {
//[controller presentFromBarButtonItem:self.printButton animated:YES completionHandler:completionHandler]; // iPad
[controller presentFromRect:CGRectMake(0, 0, 50, 50) inView:_btnPrint animated:YES completionHandler:completionHandler];
}else
[controller presentAnimated:YES completionHandler:completionHandler]; // iPhone
}
Then you should set up PrintPhotoPageRenderer:
PrintPhotoPageRenderer.h
#import <UIKit/UIKit.h>
@interface PrintPhotoPageRenderer : UIPrintPageRenderer {
    UIImage *imageToPrint;
}
@property (readwrite, retain) UIImage *imageToPrint;
@end
PrintPhotoPageRenderer.m
#import "PrintPhotoPageRenderer.h"
@implementation PrintPhotoPageRenderer
@synthesize imageToPrint;
// This code always draws one image at print time.
-(NSInteger)numberOfPages { return 1; }
/* When using this UIPrintPageRenderer subclass to draw a photo at print time,
   the app explicitly draws all the content and need only override
   drawPageAtIndex:inRect: to accomplish that.
   The following scaling algorithm is implemented here:
   1) On borderless paper, users expect to see their content scaled so that there
      is no whitespace at the edge of the paper. So this code scales the content
      to fill the paper at the expense of clipping any content that lies off the paper.
   2) On paper which is not borderless, this code scales the content so that it
      fits the paper. This reduces the size of the photo but does not clip any content.
*/
- (void)drawPageAtIndex:(NSInteger)pageIndex inRect:(CGRect)printableRect {
    if (self.imageToPrint) {
        CGSize finalSize = CGSizeMake(560, 431); //set the width and height yourself
        int x = 20;
        int y = (printableRect.size.height - finalSize.height);
        CGRect finalRect = CGRectMake(x, y, finalSize.width, finalSize.height);
        [self.imageToPrint drawInRect:finalRect];
    } else {
        NSLog(@"%s No image to draw!", __func__);
    }
}
@end
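If you would rather have the drawn photo track the printable area instead of the hard-coded 560 x 431, an aspect-fit variant of drawPageAtIndex:inRect: could look like this (a sketch, not part of the original answer):
//Sketch: scale the image to fit the printable rect, centred, preserving aspect ratio.
- (void)drawPageAtIndex:(NSInteger)pageIndex inRect:(CGRect)printableRect {
    if (!self.imageToPrint) {
        NSLog(@"%s No image to draw!", __func__);
        return;
    }
    CGSize imageSize = self.imageToPrint.size;
    CGFloat scale = MIN(printableRect.size.width / imageSize.width,
                        printableRect.size.height / imageSize.height);
    CGSize drawSize = CGSizeMake(imageSize.width * scale, imageSize.height * scale);
    CGRect drawRect = CGRectMake(CGRectGetMidX(printableRect) - drawSize.width / 2.0,
                                 CGRectGetMidY(printableRect) - drawSize.height / 2.0,
                                 drawSize.width, drawSize.height);
    [self.imageToPrint drawInRect:drawRect];
}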
I'm trying to get a basic filter working with GPUImage, but I'm not sure how to properly set it up to display the crosshairs over the live video feed when detecting corners. I tried adding the crosshairs to the blend filter along with the video, then adding that to the GPUImageView, but all I get is a white screen. Any ideas?
- (void)viewDidLoad
{
[super viewDidLoad];
// Do any additional setup after loading the view from its nib.
videoCamera = [[GPUImageVideoCamera alloc] initWithSessionPreset:AVCaptureSessionPreset640x480 cameraPosition:AVCaptureDevicePositionBack];
videoCamera.outputImageOrientation = UIInterfaceOrientationPortrait;
GPUImageView *filteredVideoView = (GPUImageView *)self.view;
GPUImageCrosshairGenerator *crosshairGenerator = [[GPUImageCrosshairGenerator alloc] init];
crosshairGenerator.crosshairWidth = 15.0;
[crosshairGenerator forceProcessingAtSize:CGSizeMake(480.0, 640.0)];
customFilter = [[GPUImageHarrisCornerDetectionFilter alloc] init];
[customFilter setCornersDetectedBlock:^(GLfloat* cornerArray, NSUInteger cornersDetected, CMTime frameTime){
[crosshairGenerator renderCrosshairsFromArray:cornerArray count:cornersDetected frameTime:frameTime];
NSLog(#"corners: %u", cornersDetected);
}];
GPUImageAlphaBlendFilter *blendFilter = [[GPUImageAlphaBlendFilter alloc] init];
[blendFilter forceProcessingAtSize:CGSizeMake(480.0, 640.0)];
[videoCamera addTarget:blendFilter];
[crosshairGenerator addTarget:blendFilter];
[blendFilter addTarget:filteredVideoView];
[videoCamera startCameraCapture];
}
This is how your code should look in Swift 2.0:
var liveCam:GPUImageVideoCamera!
var edgesDetector:GPUImageHarrisCornerDetectionFilter!
var crosshairGenerator:GPUImageCrosshairGenerator!
@IBOutlet weak var camView: GPUImageView!
override func viewDidLoad() {
super.viewDidLoad()
liveCam = GPUImageVideoCamera(sessionPreset: AVCaptureSessionPreset640x480, cameraPosition: .Back)
liveCam.outputImageOrientation = .Portrait
crosshairGenerator = GPUImageCrosshairGenerator()
crosshairGenerator.crosshairWidth = 15
crosshairGenerator.forceProcessingAtSize(CGSize(width: 480, height: 640))
edgesDetector = GPUImageHarrisCornerDetectionFilter()
edgesDetector.blurRadiusInPixels = 2 //Default value
edgesDetector.threshold = 0.2 //Default value
edgesDetector.cornersDetectedBlock = {(cornerArray:UnsafeMutablePointer<GLfloat>,cornersDetected:UInt,frameTime:CMTime) -> Void in
self.crosshairGenerator.renderCrosshairsFromArray(cornerArray, count: cornersDetected, frameTime: frameTime)
print("\(cornerArray) =-= \(cornersDetected) =-= \(frameTime)")
}
liveCam.addTarget(edgesDetector)
edgesDetector.addTarget(crosshairGenerator)
crosshairGenerator.addTarget(camView)
liveCam.startCameraCapture()
}
The result:
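If you prefer to stay in Objective-C and keep the blend filter from the question, the key change is the same one the Swift version makes: the camera was never connected to the corner detector, so the crosshair generator (and therefore the blend) never produced output. A sketch reusing the question's variables:
//Feed the camera into the Harris detector so corners are actually computed,
//then blend the generated crosshairs over the live feed.
[videoCamera addTarget:customFilter];
[videoCamera addTarget:blendFilter];
[crosshairGenerator addTarget:blendFilter];
[blendFilter addTarget:filteredVideoView];
[videoCamera startCameraCapture];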
Got this code:
videoSize = [[AVPlayerItem playerItemWithAsset:asset] presentationSize];
// nslogs -> height: 000 width 000
And this deprecated:
videoSize = [asset naturalSize];
// nslogs -> height: 360 width 480
Why is this happening? I don't get it.
Solved:
NSArray* allVideoTracks = [movieAsset tracksWithMediaType:AVMediaTypeVideo];
if ([allVideoTracks count] > 0) {
AVAssetTrack* track = [[movieAsset tracksWithMediaType:AVMediaTypeVideo]objectAtIndex:0];
CGSize size = [track naturalSize];
}
this made my day, hope it works for someone else...
presentationSize on AVPlayerItem may return CGSizeZero when the player item is not ready to play (see the docs).
naturalSize on AVAsset is deprecated (see the docs).
As in your solved code, it is recommended to use naturalSize and
preferredTransform on AVAssetTrack instead.
CGSize size = [[[movieAsset tracksWithMediaType:AVMediaTypeVideo] firstObject] naturalSize];
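Since the answer points at preferredTransform, here is a small follow-on sketch (my addition, not from the original answer) that folds the transform in, so a portrait-recorded clip reports its rotated, on-screen size:
//Apply the track's preferredTransform so rotated (e.g. portrait) recordings
//report width/height as they will actually be displayed.
AVAssetTrack *track = [[movieAsset tracksWithMediaType:AVMediaTypeVideo] firstObject];
CGSize naturalSize = track.naturalSize;
CGSize displaySize = CGSizeApplyAffineTransform(naturalSize, track.preferredTransform);
displaySize = CGSizeMake(fabs(displaySize.width), fabs(displaySize.height));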
I recently started game programming on the iPhone using Cocos2d and Box2d. So here's my problem:
I've got a Player class which inherits from CCSprite, and within that class, there's this method:
-(void) createBox2dObject:(Player *)sender
forWorld:(b2World*)world {
b2BodyDef playerBodyDef;
playerBodyDef.type = b2_dynamicBody;
playerBodyDef.position.Set(sender.position.x/PTM_RATIO, sender.position.y/PTM_RATIO);
playerBodyDef.userData = sender;
body = world->CreateBody(&playerBodyDef);
b2PolygonShape dynamicBox;
dynamicBox.SetAsBox(sender.contentSize.width/PTM_RATIO,
sender.contentSize.height/PTM_RATIO);
b2FixtureDef polygonShapeDef;
polygonShapeDef.shape = &dynamicBox;
polygonShapeDef.density = 1.0f;
polygonShapeDef.friction = 1.0f;
polygonShapeDef.restitution = 0;
body->CreateFixture(&polygonShapeDef);
}
and here's how I call this:
self.player = [Player spriteWithSpriteFrameName:@"runningrupol-1.png"];
_player.position = ccp(_player.boundingBox.size.width/2 + 32, _player.boundingBox.size.height/2 + 32);
self.walkAction = [CCRepeatForever actionWithAction:
[CCAnimate actionWithAnimation:walkAnim restoreOriginalFrame:NO]];
[_player runAction:_walkAction];
[spriteSheet addChild:_player];
[_player createBox2dObject:_player forWorld:_world];
Obviously, I'm using a spritesheet which is animated.
Here's how I update the world:
- (void)tick:(ccTime) dt {
_world->Step(dt, 8, 10);
for(b2Body *b = _world->GetBodyList(); b; b=b->GetNext()) {
if (b->GetUserData() != NULL) {
CCSprite *playerData = (CCSprite *)b->GetUserData();
playerData.position = ccp(b->GetPosition().x * PTM_RATIO,
b->GetPosition().y * PTM_RATIO);
playerData.rotation = -1 * CC_RADIANS_TO_DEGREES(b->GetAngle());
}
}
}
And here's how I call it in the init method:
[self schedule:@selector(tick:)];
This is what I see:
Please help. And if you need additional info, just tell me.
SetAsBox() takes a half-width and half-height (quirky, I know), so divide the parameters by 2:
dynamicBox.SetAsBox(sender.contentSize.width/PTM_RATIO/2,
sender.contentSize.height/PTM_RATIO/2);
When you try this, leave the anchor point as-is (if you don't set it explicitly, the default is ccp(0.5, 0.5), the center of your sprite, which is what you want).
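To make the half-extent point concrete, a quick worked example with hypothetical numbers: for a 64 x 64 pt sprite and a PTM_RATIO of 32, passing contentSize/PTM_RATIO gives half-extents of 2, i.e. a 4 m x 4 m body (twice the sprite), while dividing by 2 gives half-extents of 1 and the intended 2 m x 2 m body:
//Hypothetical numbers: 64 x 64 pt sprite, PTM_RATIO = 32.
//SetAsBox takes HALF extents, so 64/32/2 = 1 -> a 2 m x 2 m box matching the sprite.
dynamicBox.SetAsBox(sender.contentSize.width / PTM_RATIO / 2.0f,
                    sender.contentSize.height / PTM_RATIO / 2.0f);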
You can change the anchorpoint for the sprite. Here is a good tutorial:
http://www.qcmat.com/understanding-anchorpoint-in-cocos2d/