How to get the height of the video shown in a SurfaceViewRenderer? - surfaceview

surfaceView.getHeight() will return the surfaceView's height, not the video's height.
I can get the VideoFrame in onFrame(VideoFrame frame), but the frame's height is the height of the source video, not the final height shown in the surfaceView.
For example: the surfaceView's height is 1280 and the frame's height is 1080, but the final height of the video shown in the surfaceView is 405. I don't know how to get the 405.
I set the scaling type: surfaceView.setScalingType(ScalingType.SCALE_ASPECT_BALANCED);
I tried all of these; none of them worked.
Display display = this.getDisplay();
SurfaceHolder sh = this.getHolder();
Rect rect = sh.getSurfaceFrame();
this.getLocalVisibleRect(rect);
//rect = this.getClipBounds();
this.getGlobalVisibleRect(rect);
this.getWindowVisibleDisplayFrame(rect);
int h3 = getResources().getDisplayMetrics().heightPixels;
this.measure(View.MeasureSpec.UNSPECIFIED, View.MeasureSpec.UNSPECIFIED);
this.post(new Runnable() {
    @Override
    public void run() {
        int measuredHeight = ZoomSurfaceViewRenderer.this.getMeasuredHeight();
        int measuredWidth = ZoomSurfaceViewRenderer.this.getMeasuredWidth();
        int w1 = ZoomSurfaceViewRenderer.this.getWidth();
        int h1 = ZoomSurfaceViewRenderer.this.getHeight();
        int x = h1;
    }
});
FrameLayout.LayoutParams layoutParams = (FrameLayout.LayoutParams) this.getLayoutParams();
int h2 = layoutParams.height;
int w2 = layoutParams.width;
Matrix matrix = this.getMatrix();
int measuredHeight = this.getMeasuredHeight();
int measuredWidth = this.getMeasuredWidth();
int paddingTop = this.getPaddingTop();
float pivotY = this.getPivotY();
float scaleY = this.getScaleY();
float translationX = this.getTranslationX();
float translationY = this.getTranslationY();
float translationZ = this.getTranslationZ();
int w = ZoomSurfaceViewRenderer.this.getWidth();
int h = ZoomSurfaceViewRenderer.this.getHeight();
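None of those View getters can work here, because the letterboxed video size is computed from the frame's aspect ratio at draw time rather than stored on the View, so you can compute it yourself. Below is a minimal sketch, assuming ZoomSurfaceViewRenderer extends SurfaceViewRenderer and that the effective behavior is aspect-fit; SCALE_ASPECT_BALANCED actually sits between FIT and FILL, but plain aspect-fit already reproduces the numbers above (a 720x1280 view with a rotated 1920x1080 frame gives min(720/1920, 1280/1080) = 0.375, and 1080 * 0.375 = 405).
@Override
public void onFrame(VideoFrame frame) {
    super.onFrame(frame);
    // Use the rotated dimensions so the frame's rotation metadata is accounted for.
    final int frameWidth = frame.getRotatedWidth();
    final int frameHeight = frame.getRotatedHeight();
    // onFrame is called on a render thread; read View sizes on the UI thread.
    post(() -> {
        int viewWidth = getWidth();
        int viewHeight = getHeight();
        if (viewWidth == 0 || viewHeight == 0 || frameWidth == 0 || frameHeight == 0) {
            return;
        }
        // Aspect-fit: scale the whole frame so it fits inside the view.
        float scale = Math.min((float) viewWidth / frameWidth,
                (float) viewHeight / frameHeight);
        int displayedWidth = Math.round(frameWidth * scale);
        int displayedHeight = Math.round(frameHeight * scale); // 405 in the example above
        Log.d("ZoomRenderer", "displayed size: " + displayedWidth + "x" + displayedHeight);
    });
}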

Related

What is a good way to read the camera frame and make it a texture?

I use the ARCore Unity SDK on a Pixel 3.
I need to read the current frame of the camera at runtime and make it a texture I can use in my scene.
I see in the documentation that there are different ways (Frame.CameraImage.Texture, Frame.CameraImage.AcquireCameraImageBytes(), TextureReaderApi), but I can't find which ones are deprecated.
I checked the GitHub issues and tried some solutions.
With Frame.CameraImage.Texture:
MyObject.GetComponent<Renderer>().material.mainTexture = Frame.CameraImage.Texture;
This works, but the texture is updated every frame, so I tried this:
yield return new WaitForEndOfFrame();
Texture2D texture2D = (Texture2D)Frame.CameraImage.Texture;
var pix = texture2D.GetPixels32();
var destTex = new Texture2D(texture2D.width, texture2D.height);
destTex.SetPixels32(pix);
destTex.Apply();
MyObject.GetComponent<Renderer>().material.mainTexture = destTex;
This works only in instant preview mode, not when I build the app and run it on the device (I get a white texture).
I tried the TextureReaderApi as follows:
void Start()
{
    textureReader.OnImageAvailableCallback += OnImageAvailable;
}

public void OnImageAvailable(TextureReaderApi.ImageFormatType format, int width, int height, IntPtr pixelBuffer, int bufferSize)
{
    Texture2D TextureToRender = new Texture2D(width, height, TextureFormat.RGBA32, false, false);
    byte[] Texture_Raw = new byte[width * height * 4];
    System.Runtime.InteropServices.Marshal.Copy(pixelBuffer, Texture_Raw, 0, bufferSize);
    TextureToRender.LoadRawTextureData(Texture_Raw);
    TextureToRender.Apply();
    GetComponent<Renderer>().material.mainTexture = TextureToRender;
}
But I get this error: DllNotFoundException: arcore_camera_utility at GoogleARCore.Examples.ComputerVision.TextureReaderApi.Create, so the callback is never fired.
I also tried the AcquireCameraImageBytes function and then converted the result from YUV to RGB:
private void Update()
{
    using (var image = Frame.CameraImage.AcquireCameraImageBytes())
    {
        if (!image.IsAvailable)
        {
            return;
        }
        _OnImageAvailable(image.Width, image.Height, image.Y, 0);
    }
}

private void _OnImageAvailable(int width, int height, IntPtr pixelBuffer, int bufferSize)
{
    Debug.Log("UPDATE_Image");
    Texture2D m_TextureRender = new Texture2D(width, height, TextureFormat.RGBA32, false, false);
    bufferSize = width * height * 3 / 2;
    byte[] bufferYUV = new byte[bufferSize];
    System.Runtime.InteropServices.Marshal.Copy(pixelBuffer, bufferYUV, 0, bufferSize);
    Color color = new Color();
    for (int y = 0; y < height; y++)
    {
        for (int x = 0; x < width; x++)
        {
            float Yvalue = bufferYUV[y * width + x];
            float Uvalue = bufferYUV[(y / 2) * (width / 2) + x / 2 + (width * height)];
            float Vvalue = bufferYUV[(y / 2) * (width / 2) + x / 2 + (width * height) + (width * height) / 4];
            color.r = Yvalue + (float)(1.37705 * (Vvalue - 128.0f));
            color.g = Yvalue - (float)(0.698001 * (Vvalue - 128.0f)) - (float)(0.337633 * (Uvalue - 128.0f));
            color.b = Yvalue + (float)(1.732446 * (Uvalue - 128.0f));
            color.r /= 255.0f;
            color.g /= 255.0f;
            color.b /= 255.0f;
            if (color.r < 0.0f)
                color.r = 0.0f;
            if (color.g < 0.0f)
                color.g = 0.0f;
            if (color.b < 0.0f)
                color.b = 0.0f;
            if (color.r > 1.0f)
                color.r = 1.0f;
            if (color.g > 1.0f)
                color.g = 1.0f;
            if (color.b > 1.0f)
                color.b = 1.0f;
            color.a = 1.0f;
            m_TextureRender.SetPixel(width - 1 - x, y, color);
        }
    }
    m_TextureRender.Apply();
    MyObject.GetComponent<Renderer>().material.mainTexture = m_TextureRender;
}
In instant preview I get this error: EntryPointNotFoundException: AImage_getPlaneData.
It almost works in a build, but I guess something is wrong in my conversion from YUV to RGB, because the resulting image comes out wrong.
I can't figure out what is wrong.
I'm running out of solutions and don't know what is supposed to work or where I'm wrong. Any advice is welcome :)
Thanks in advance for your help.
What you're trying to do is just turn the camera's view into a texture?
Make a render texture asset and put it in the "Target Texture" of said camera. If you're looking to make a minimap system (as I'm guessing that's the most common use), you can feed that texture into a raw image, since raw images take textures.
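A minimal sketch of that suggestion, done from script rather than the Inspector; CameraToTexture, sourceCamera, and targetRenderer are placeholder names, not ARCore API. Note that a camera with a target texture set no longer renders to the screen.
using UnityEngine;

public class CameraToTexture : MonoBehaviour
{
    public Camera sourceCamera;     // the Camera whose view you want captured
    public Renderer targetRenderer; // e.g. the quad or minimap surface showing it

    void Start()
    {
        // Render the camera into an offscreen texture; this is the scripted
        // equivalent of assigning a RenderTexture asset to the camera's
        // "Target Texture" field in the Inspector.
        var rt = new RenderTexture(1024, 1024, 16);
        sourceCamera.targetTexture = rt;
        targetRenderer.material.mainTexture = rt; // a RawImage's texture works too
    }
}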

MotionEvent getX() and getY(), getRawX() and getRawY() return wrong values in a TouchImageView

My TouchImageView class is below:
public class TouchImageView extends ImageView {
    Matrix matrix;

    // We can be in one of these 3 states
    static final int NONE = 0;
    static final int DRAG = 1;
    static final int ZOOM = 2;
    int mode = NONE;

    // Remember some things for zooming
    PointF last = new PointF();
    PointF start = new PointF();
    float minScale = -3f;
    float maxScale = 3f;
    float[] m;
    int viewWidth, viewHeight;
    static final int CLICK = 3;
    float saveScale = 1f;
    protected float origWidth, origHeight;
    int oldMeasuredWidth, oldMeasuredHeight;
    ScaleGestureDetector mScaleDetector;
    Context context;

    public TouchImageView(Context context) {
        super(context);
        sharedConstructing(context);
    }

    public TouchImageView(Context context, AttributeSet attrs) {
        super(context, attrs);
        sharedConstructing(context);
    }

    private void sharedConstructing(final Context context) {
        super.setClickable(true);
        this.context = context;
        mScaleDetector = new ScaleGestureDetector(context, new ScaleListener());
        matrix = new Matrix();
        m = new float[9];
        setImageMatrix(matrix);
        setScaleType(ScaleType.MATRIX);
        setOnTouchListener(new OnTouchListener() {
            @Override
            public boolean onTouch(View v, MotionEvent event) {
                mScaleDetector.onTouchEvent(event);
                PointF curr = new PointF(event.getX(), event.getY());
                switch (event.getAction()) {
                    case MotionEvent.ACTION_DOWN:
                        float x = event.getRawX();
                        float y = event.getRawY();
                        Log.d("X & Y", "" + x + " " + y);
                        TouchImageView view = new TouchImageView(context);
                        ImagePresize imagePresize = new ImagePresize(context);
                        imagePresize.getXYCordinates(view, x, y);
                        last.set(curr);
                        start.set(last);
                        mode = DRAG;
                        break;
                    case MotionEvent.ACTION_MOVE:
                        if (mode == DRAG) {
                            float deltaX = curr.x - last.x;
                            float deltaY = curr.y - last.y;
                            float fixTransX = getFixDragTrans(deltaX, viewWidth, origWidth * saveScale);
                            float fixTransY = getFixDragTrans(deltaY, viewHeight, origHeight * saveScale);
                            matrix.postTranslate(fixTransX, fixTransY);
                            fixTrans();
                            last.set(curr.x, curr.y);
                        }
                        break;
                    case MotionEvent.ACTION_UP:
                        mode = NONE;
                        int xDiff = (int) Math.abs(curr.x - start.x);
                        int yDiff = (int) Math.abs(curr.y - start.y);
                        if (xDiff < CLICK && yDiff < CLICK)
                            performClick();
                        break;
                    case MotionEvent.ACTION_POINTER_UP:
                        mode = NONE;
                        break;
                }
                setImageMatrix(matrix);
                invalidate();
                return true; // indicate event was handled
            }
        });
    }

    public void setMaxZoom(float x) {
        maxScale = x;
    }
    private class ScaleListener extends ScaleGestureDetector.SimpleOnScaleGestureListener {
        @Override
        public boolean onScaleBegin(ScaleGestureDetector detector) {
            mode = ZOOM;
            return true;
        }

        @Override
        public boolean onScale(ScaleGestureDetector detector) {
            float mScaleFactor = detector.getScaleFactor();
            float origScale = saveScale;
            saveScale *= mScaleFactor;
            if (saveScale > maxScale) {
                saveScale = maxScale;
                mScaleFactor = maxScale / origScale;
            } else if (saveScale < minScale) {
                saveScale = minScale;
                mScaleFactor = minScale / origScale;
            }
            if (origWidth * saveScale <= viewWidth || origHeight * saveScale <= viewHeight)
                matrix.postScale(mScaleFactor, mScaleFactor, viewWidth / 2, viewHeight / 2);
            else
                matrix.postScale(mScaleFactor, mScaleFactor, detector.getFocusX(), detector.getFocusY());
            fixTrans();
            return true;
        }
    }
    void fixTrans() {
        matrix.getValues(m);
        float transX = m[Matrix.MTRANS_X];
        float transY = m[Matrix.MTRANS_Y];
        float fixTransX = getFixTrans(transX, viewWidth, origWidth * saveScale);
        float fixTransY = getFixTrans(transY, viewHeight, origHeight * saveScale);
        if (fixTransX != 0 || fixTransY != 0)
            matrix.postTranslate(fixTransX, fixTransY);
    }

    float getFixTrans(float trans, float viewSize, float contentSize) {
        float minTrans, maxTrans;
        if (contentSize <= viewSize) {
            minTrans = 0;
            maxTrans = viewSize - contentSize;
        } else {
            minTrans = viewSize - contentSize;
            maxTrans = 0;
        }
        if (trans < minTrans)
            return -trans + minTrans;
        if (trans > maxTrans)
            return -trans + maxTrans;
        return 0;
    }

    float getFixDragTrans(float delta, float viewSize, float contentSize) {
        if (contentSize <= viewSize) {
            return 0;
        }
        return delta;
    }
    @Override
    protected void onMeasure(int widthMeasureSpec, int heightMeasureSpec) {
        super.onMeasure(widthMeasureSpec, heightMeasureSpec);
        viewWidth = MeasureSpec.getSize(widthMeasureSpec);
        viewHeight = MeasureSpec.getSize(heightMeasureSpec);
        //
        // Rescales image on rotation
        // (note: the widely-copied original compares oldMeasuredHeight twice;
        // the width comparison is fixed here)
        //
        if (oldMeasuredWidth == viewWidth && oldMeasuredHeight == viewHeight
                || viewWidth == 0 || viewHeight == 0)
            return;
        oldMeasuredHeight = viewHeight;
        oldMeasuredWidth = viewWidth;
        if (saveScale == 1) {
            // Fit to screen.
            float scale;
            Drawable drawable = getDrawable();
            if (drawable == null || drawable.getIntrinsicWidth() == 0 || drawable.getIntrinsicHeight() == 0)
                return;
            int bmWidth = drawable.getIntrinsicWidth();
            int bmHeight = drawable.getIntrinsicHeight();
            Log.d("bmSize", "bmWidth: " + bmWidth + " bmHeight : " + bmHeight);
            float scaleX = (float) viewWidth / (float) bmWidth;
            float scaleY = (float) viewHeight / (float) bmHeight;
            scale = Math.min(scaleX, scaleY);
            matrix.setScale(scale, scale);
            // Center the image
            float redundantYSpace = (float) viewHeight - (scale * (float) bmHeight);
            float redundantXSpace = (float) viewWidth - (scale * (float) bmWidth);
            redundantYSpace /= (float) 2;
            redundantXSpace /= (float) 2;
            matrix.postTranslate(redundantXSpace, redundantYSpace);
            origWidth = viewWidth - 2 * redundantXSpace;
            origHeight = viewHeight - 2 * redundantYSpace;
            setImageMatrix(matrix);
        }
        fixTrans();
    }
}
getX(), getY(), getRawX(), and getRawY() of MotionEvent are returning wrong values: the (0,0) coordinate is always shifted towards the bottom-left from where it should be.
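One thing worth checking (a sketch, not a confirmed diagnosis): getX()/getY() are relative to the view and getRawX()/getRawY() are relative to the screen, so once the matrix has scaled or translated the image, none of them map directly to bitmap coordinates. Running the touch point through the inverse of the image matrix gives coordinates in the drawable's space; ImagePresize and getXYCordinates are the asker's own helpers, so the snippet below only shows the matrix part.
// Inside onTouch: convert a view-space touch point to bitmap coordinates.
float[] touchPoint = new float[] { event.getX(), event.getY() };
Matrix inverse = new Matrix();
if (matrix.invert(inverse)) {
    // After mapping, touchPoint[0]/touchPoint[1] are in the drawable's
    // coordinate system, regardless of the current zoom and pan.
    inverse.mapPoints(touchPoint);
    Log.d("TouchImageView", "bitmap coords: " + touchPoint[0] + ", " + touchPoint[1]);
}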

Convert for loop from Objective-C

I am trying to convert some old Objective-C code into Swift, but I cannot find a way to make this for loop work without errors. How could this be converted into Swift?
for var y = 0; y < columns; y++ { //C-style for statement is deprecated and will be removed in a future version of Swift
    xPos = 0.0
    for var x = 0; x < rows; x++ { //C-style for statement is deprecated and will be removed in a future version of Swift
        var rect: CGRect = CGRectMake(xPos, yPos, width, height)
        var cImage: CGImageRef = CGImageCreateWithImageInRect(image.CGImage, rect)!
        var dImage: UIImage = UIImage(CGImage: cImage)
        var imageView: UIImageView = UIImageView(frame: CGRectMake(x * width, y * height, width, height))
        imageView.image = dImage
        imageView.layer.borderColor = UIColor.blackColor().CGColor
        imageView.layer.borderWidth = 1.0
        //self.view!.addSubview(imageView)
        arrayImages.append(dImage)
        xPos += width
    }
    yPos += height
}
In your case, this should do it.
for y in 0..<columns {
    for x in 0..<rows {
        // do what you want with x and y
    }
}
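For completeness, a sketch of the full conversion using the same variables as the question. One caveat: in the range-based loop, x and y are Int, so they need explicit CGFloat casts in the frame math; everything else carries over unchanged.
for y in 0..<columns {
    xPos = 0.0
    for x in 0..<rows {
        let rect = CGRectMake(xPos, yPos, width, height)
        let cImage = CGImageCreateWithImageInRect(image.CGImage, rect)!
        let dImage = UIImage(CGImage: cImage)
        // x and y are Int here, so cast them before multiplying by CGFloat sizes
        let imageView = UIImageView(frame: CGRectMake(CGFloat(x) * width, CGFloat(y) * height, width, height))
        imageView.image = dImage
        imageView.layer.borderColor = UIColor.blackColor().CGColor
        imageView.layer.borderWidth = 1.0
        arrayImages.append(dImage)
        xPos += width
    }
    yPos += height
}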

MKMapRect zooms too much

I use this code to show all my annotations on my map:
MKMapRect zoomRect = MKMapRectNull;
for (id <MKAnnotation> annotation in mapView.annotations)
{
    MKMapPoint annotationPoint = MKMapPointForCoordinate(annotation.coordinate);
    MKMapRect pointRect = MKMapRectMake(annotationPoint.x, annotationPoint.y, 0, 1000);
    if (MKMapRectIsNull(zoomRect)) {
        zoomRect = pointRect;
    } else {
        zoomRect = MKMapRectUnion(zoomRect, pointRect);
    }
}
[mapView setVisibleMapRect:zoomRect animated:YES];
But my problem is that when the annotations are close to each other, the map zooms in too much because the resulting rectangle is small.
Any way to fix this?
In my code, I add extra spacing around the rect, so the zoom level is adjusted automatically to make everything fit.
[aMapView setVisibleMapRect:zoomRect edgePadding:UIEdgeInsetsMake(-100, -50, -50, -50) animated:YES];
Ariel's answer didn't work for me, but I made a few small changes to it and it's working great now (especially with maps with a single pin):
double minimumZoom = 6000; // for my purposes the width/height have the same min zoom
BOOL needChange = NO;
double x = MKMapRectGetMinX(zoomRect);
double y = MKMapRectGetMinY(zoomRect);
double w = MKMapRectGetWidth(zoomRect);
double h = MKMapRectGetHeight(zoomRect);
double centerX = MKMapRectGetMidX(zoomRect);
double centerY = MKMapRectGetMidY(zoomRect);
if (h < minimumZoom) { // no need to call MKMapRectGetHeight again; we just got its value!
    // get the multiplicative factor used to scale old height to new,
    // then apply it to the old width to get a proportionate new width
    double factor = minimumZoom / h;
    h = minimumZoom;
    w *= factor;
    x = centerX - w/2;
    y = centerY - h/2;
    needChange = YES;
}
if (w < minimumZoom) {
    // since we've already adjusted the width, there's a chance this
    // won't even need to execute
    double factor = minimumZoom / w;
    w = minimumZoom;
    h *= factor;
    x = centerX - w/2;
    y = centerY - h/2;
    needChange = YES;
}
if (needChange) {
    zoomRect = MKMapRectMake(x, y, w, h);
}
[mapView setVisibleMapRect:zoomRect animated:YES];
Ariel's solution wasn't working for me either, but n00neimp0rtant's was. I have rewritten it in Swift as a function rectWithMinimumZoom(_ minimumZoom: Double) in an extension of MKMapRect. It returns the adjusted rect (adjusted only if needed). I also added an extra safeguard so that the factor computation can never divide by zero.
SWIFT
extension MKMapRect {
    func rectWithMinimumZoom(_ minimumZoom: Double = 750) -> MKMapRect {
        var needChange = false
        var x = MKMapRectGetMinX(self)
        var y = MKMapRectGetMinY(self)
        var w = MKMapRectGetWidth(self)
        var h = MKMapRectGetHeight(self)
        let centerX = MKMapRectGetMidX(self)
        let centerY = MKMapRectGetMidY(self)
        if h < minimumZoom {
            let factor = minimumZoom / max(h, 1)
            h = minimumZoom
            w *= factor
            x = centerX - w / 2
            y = centerY - h / 2
            needChange = true
        }
        if w < minimumZoom {
            let factor = minimumZoom / max(w, 1)
            w = minimumZoom
            h *= factor
            x = centerX - w / 2
            y = centerY - h / 2
            needChange = true
        }
        if needChange {
            return MKMapRectMake(x, y, w, h)
        }
        return self
    }
}
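Usage would then look something like this (assuming zoomRect was built up as in the question):
mapView.setVisibleMapRect(zoomRect.rectWithMinimumZoom(), animated: true)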
For those of us who like one-liners:
let minRectSize: Double = 5000
zoomRect = MKMapRect(
    x: zoomRect.minX - max(0, minRectSize - zoomRect.width) / 2,
    y: zoomRect.minY - max(0, minRectSize - zoomRect.height) / 2,
    width: max(zoomRect.width, minRectSize),
    height: max(zoomRect.height, minRectSize))
Try this code:
//here comes your for loop...
double minMapHeight = 10; //choose some value that fits your needs
double minMapWidth = 10; //the same as above
BOOL needChange = NO;
double x = MKMapRectGetMinX(zoomRect);
double y = MKMapRectGetMinY(zoomRect);
double w = MKMapRectGetWidth(zoomRect);
double h = MKMapRectGetHeight(zoomRect); //here was an error!!
if (MKMapRectGetHeight(zoomRect) < minMapHeight) {
    x -= minMapWidth/2;
    w += minMapWidth/2;
    needChange = YES;
}
if (MKMapRectGetWidth(zoomRect) < minMapWidth) {
    y -= minMapHeight/2;
    h += minMapHeight/2;
    needChange = YES;
}
if (needChange) {
    zoomRect = MKMapRectMake(x, y, w, h);
}
[mapView setVisibleMapRect:zoomRect animated:YES];
EDITED ->
double minMapHeight = 250; //choose some value that fits your needs
double minMapWidth = 250; //the same as above
BOOL needChange = NO;
double x = MKMapRectGetMinX(zoomRect);
double y = MKMapRectGetMinY(zoomRect);
double w = MKMapRectGetWidth(zoomRect);
double h = MKMapRectGetHeight(zoomRect);
double centerX = MKMapRectGetMidX(zoomRect);
double centerY = MKMapRectGetMidY(zoomRect);
if (MKMapRectGetHeight(zoomRect) < minMapHeight) {
    //x -= minMapWidth/2;
    //w += minMapWidth/2;
    x = centerX - w/2;
    needChange = YES;
}
if (MKMapRectGetWidth(zoomRect) < minMapWidth) {
    //y -= minMapHeight/2;
    //h += minMapHeight/2;
    y = centerY - h/2;
    needChange = YES;
}
if (needChange) {
    zoomRect = MKMapRectMake(x, y, w, h);
}
[mapView setVisibleMapRect:zoomRect animated:YES];

Cocos2d Scrolling game TileMap Collision

I have different tile layers that I move at different speeds; my player is fixed in place (horizontally). Now how do I detect that my player is colliding with a tile? How do I know the coordinates of the tiles in the different layers?
I figured this out with the help of this tutorial: http://paulsonapps.wordpress.com/2010/03/12/tutorial-1-tilemap-with-collision-game-cocos2d/
-(void)handleCollision
{
    for (CCTMXLayer *lr in layers) {
        // Determine the four corners of my player
        int tlXright = floor(player.playerSprite.position.x+player.playerSprite.contentSize.width-lr.position.x);
        int tlXleft = floor(player.playerSprite.position.x-lr.position.x);
        int tlYup = floor(player.playerSprite.position.y+player.playerSprite.contentSize.height);
        int tlYdown = floor(player.playerSprite.position.y);
        //Convert our Map points
        CGPoint nodeSpace1 = [tileMap convertToNodeSpace:ccp(tlXright,tlYdown)];
        pos1X = floor(nodeSpace1.x / tileMap.tileSize.width);
        pos1Y = floor(tileMap.mapSize.height - (nodeSpace1.y / tileMap.tileSize.height));
        CGPoint nodeSpace2 = [tileMap convertToNodeSpace:ccp(tlXright,tlYup)];
        pos2X = floor(nodeSpace2.x / tileMap.tileSize.width);
        pos2Y = floor(tileMap.mapSize.height - (nodeSpace2.y / tileMap.tileSize.height));
        CGPoint nodeSpace3 = [tileMap convertToNodeSpace:ccp(tlXleft,tlYdown)];
        pos3X = floor(nodeSpace3.x / tileMap.tileSize.width);
        pos3Y = floor(tileMap.mapSize.height - (nodeSpace3.y / tileMap.tileSize.height));
        CGPoint nodeSpace4 = [tileMap convertToNodeSpace:ccp(tlXleft,tlYup)];
        pos4X = floor(nodeSpace4.x / tileMap.tileSize.width);
        pos4Y = floor(tileMap.mapSize.height - (nodeSpace4.y / tileMap.tileSize.height));
        unsigned int gid5 = [lr tileGIDAt:ccp(pos1X,pos1Y)];
        unsigned int gid6 = [lr tileGIDAt:ccp(pos2X,pos2Y)];
        unsigned int gid7 = [lr tileGIDAt:ccp(pos3X,pos3Y)];
        unsigned int gid8 = [lr tileGIDAt:ccp(pos4X,pos4Y)];
        //NSLog(@"gid5:%d gid6:%d gid7:%d gid8:%d",gid5,gid6,gid7,gid8);
        //NSLog(@"pos1x:%d pos1y:%d pos4x:%d pos4y:%d",pos1X,pos1Y,pos4X,pos4Y);
        if (gid5 == 5) {
            [lr removeTileAt:ccp(pos1X,pos1Y)];
        }
        if (gid6 == 5) {
            [lr removeTileAt:ccp(pos2X,pos2Y)];
        }
        if (gid7 == 5) {
            [lr removeTileAt:ccp(pos3X,pos3Y)];
        }
        if (gid8 == 5) {
            [lr removeTileAt:ccp(pos4X,pos4Y)];
        }
    }
}
To get the coordinates of the tile, I subtracted the layer's position from the player's position:
int tlXright = floor(player.playerSprite.position.x+player.playerSprite.contentSize.width-lr.position.x);
and converted it as in the tutorial:
CGPoint nodeSpace1 = [tileMap convertToNodeSpace:ccp(tlXright,tlYdown)];
pos1X = floor(nodeSpace1.x / tileMap.tileSize.width);
pos1Y = floor(tileMap.mapSize.height - (nodeSpace1.y / tileMap.tileSize.height));
...
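Since the same point-to-tile conversion is repeated for all four corners, it could be factored into a helper. This is just a refactoring sketch of the math above, nothing new:
- (CGPoint)tileCoordForWorldPoint:(CGPoint)worldPoint {
    // Convert a point into the tile map's node space, then into tile indices
    // (the map's y axis is flipped relative to node space, hence the subtraction).
    CGPoint nodeSpace = [tileMap convertToNodeSpace:worldPoint];
    int tx = floor(nodeSpace.x / tileMap.tileSize.width);
    int ty = floor(tileMap.mapSize.height - (nodeSpace.y / tileMap.tileSize.height));
    return ccp(tx, ty);
}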