How to get picture metadata on Windows Phone?

I choose a picture with a PhotoChooserTask and I would like to get the metadata (longitude, latitude, device, etc.) from this picture. How can I do that?
My code:
void photochoosertask_Completed(object sender, PhotoResult e)
{
    BitmapImage bmp = new BitmapImage();
    bmp.SetSource(e.ChosenPhoto);
    im.Source = bmp;
    // get metadata?
}

If by chance you happen to be uploading the image to a service that has access to the full .NET Framework, you can also use the BitmapMetadata class - http://msdn.microsoft.com/en-us/library/ms619225.aspx.
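For what it's worth, here is a minimal sketch of what that could look like on the server side, assuming the uploaded image has been saved to disk. This needs the full .NET Framework (PresentationCore/WPF), not the phone runtime, and the GPS query paths are common JPEG/EXIF paths that may need adjusting for your images:

using System;
using System.IO;
using System.Windows.Media.Imaging; // full .NET Framework only (PresentationCore)

static void DumpMetadata(string path)
{
    using (FileStream fs = File.OpenRead(path))
    {
        BitmapDecoder decoder = BitmapDecoder.Create(fs, BitmapCreateOptions.None, BitmapCacheOption.OnLoad);
        BitmapMetadata meta = decoder.Frames[0].Metadata as BitmapMetadata;
        if (meta == null) return;

        Console.WriteLine(meta.CameraManufacturer); // device make
        Console.WriteLine(meta.CameraModel);        // device model
        Console.WriteLine(meta.DateTaken);

        // GPS values live in the EXIF GPS IFD; {ushort=2} is latitude, {ushort=4} is longitude.
        // Treat these query paths as a starting point - they vary by container format.
        object latitude  = meta.GetQuery("/app1/ifd/gps/{ushort=2}");
        object longitude = meta.GetQuery("/app1/ifd/gps/{ushort=4}");
    }
}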

Related

AndroidX Camera Core ImageAnalysis.Analyser results in distorted image

I am using the ImageAnalysis library to extract live preview frames to then run barcode scanning and OCR on.
I'm not having any issues with barcode scanning at all, but OCR is giving some weak results. I'm sure there could be a few reasons for this. My current attempt at working on a solution is to send the frames to GCP - Storage before I run OCR (or barcode scanning) on them, in order to look at them in bulk. All of the uploaded frames look very similar (distorted).
My best guess is that the way I'm processing the frames could be causing the pixels to be organized in the buffer incorrectly (I'm inexperienced with Android - sorry). Meaning that rather than organizing them 0,0 then 0,1 and so on, it's taking pixels and putting them in random areas. I can't figure out where this is happening, though. Once I can look at the image quality, I'll be able to analyze what the issue is with OCR, but this is my current blocker, unfortunately.
Extra note: I am uploading the image to GCP - Storage prior to even running OCR, so for the sake of looking at this we can ignore the OCR statement I made - I just wanted to give some background.
Below is the code where I initiate the camera and the analyzer and then observe the frames:
private void startCamera() {
    // make sure there isn't another camera instance running before starting
    CameraX.unbindAll();

    /* start preview */
    int aspRatioW = txView.getWidth();   // get width of screen
    int aspRatioH = txView.getHeight();  // get height
    Rational asp = new Rational(aspRatioW, aspRatioH); // aspect ratio
    Size screen = new Size(aspRatioW, aspRatioH);      // size of the screen

    // config obj for preview/viewfinder
    PreviewConfig pConfig = new PreviewConfig.Builder().setTargetResolution(screen).build();
    Preview preview = new Preview(pConfig); // let's build it

    preview.setOnPreviewOutputUpdateListener(
        new Preview.OnPreviewOutputUpdateListener() {
            // to update the surface texture we have to destroy it first, then re-add it
            @Override
            public void onUpdated(Preview.PreviewOutput output) {
                ViewGroup parent = (ViewGroup) txView.getParent();
                parent.removeView(txView);
                parent.addView(txView, 0);
                txView.setSurfaceTexture(output.getSurfaceTexture());
                updateTransform();
            }
        });

    /* image capture */
    // config obj, selected capture mode
    ImageCaptureConfig imgCapConfig = new ImageCaptureConfig.Builder()
            .setCaptureMode(ImageCapture.CaptureMode.MAX_QUALITY)
            .setTargetRotation(getWindowManager().getDefaultDisplay().getRotation())
            .build();
    final ImageCapture imgCap = new ImageCapture(imgCapConfig);

    findViewById(R.id.imgCapture).setOnClickListener(new View.OnClickListener() {
        @Override
        public void onClick(View v) {
            Log.d("image taken", "image taken");
        }
    });

    /* image analyser */
    ImageAnalysisConfig imgAConfig = new ImageAnalysisConfig.Builder()
            .setImageReaderMode(ImageAnalysis.ImageReaderMode.ACQUIRE_LATEST_IMAGE)
            .build();
    ImageAnalysis analysis = new ImageAnalysis(imgAConfig);

    analysis.setAnalyzer(
        Executors.newSingleThreadExecutor(), new ImageAnalysis.Analyzer() {
            @Override
            public void analyze(ImageProxy imageProxy, int degrees) {
                Log.d("analyze", "just analyzing");
                if (imageProxy == null || imageProxy.getImage() == null) {
                    return;
                }
                Image mediaImage = imageProxy.getImage();
                int rotation = degreesToFirebaseRotation(degrees);
                FirebaseVisionImage image = FirebaseVisionImage.fromBitmap(toBitmap(mediaImage));
                if (!isMachineLearning) {
                    Log.d("analyze", "isMachineLearning is about to be true");
                    isMachineLearning = true;
                    String haha = MediaStore.Images.Media.insertImage(getContentResolver(),
                            toBitmap(mediaImage), "image", "theImageDescription");
                    Log.d("uploadingimage: ", haha);
                    extractBarcode(image, toBitmap(mediaImage));
                }
            }
        });

    // bind to lifecycle:
    CameraX.bindToLifecycle(this, analysis, imgCap, preview);
}
Below is how I structure my detection (pretty straightforward and simple):
FirebaseVisionBarcodeDetectorOptions options = new FirebaseVisionBarcodeDetectorOptions.Builder()
.setBarcodeFormats(FirebaseVisionBarcode.FORMAT_ALL_FORMATS)
.build();
FirebaseVisionBarcodeDetector detector = FirebaseVision.getInstance().getVisionBarcodeDetector(options);
detector.detectInImage(firebaseVisionImage)
Finally, when I'm uploading the image to GCP - Storage, this is what it looks like:
ByteArrayOutputStream baos = new ByteArrayOutputStream();
bmp.compress(Bitmap.CompressFormat.JPEG, 100, baos); //bmp being the image that I ran barcode scanning on - as well as OCR
byte[] data = baos.toByteArray();
UploadTask uploadTask = storageRef.putBytes(data);
Thank you all for your kind help (:
My problem was that I was trying to convert to a bitmap after barcode scanning. The conversion wasn't properly written, but I found a way around it without having to write my own bitmap conversion function (though I plan on going back to it, as I can see myself needing it, and genuine curiosity makes me want to figure it out).

How to get the path and file size of some Unity built-in assets?

Background
I am developing a Unity editor plugin that enables users to send a selected image file to a REST API endpoint in the cloud for processing (e.g. adding transforms and optimizations). The plugin also shows a comparison of the selected image's details before and after processing (e.g. width/height/size before vs after).
The user selects the desired image through the following piece of code:
selected_texture = (Texture2D) EditorGUI.ObjectField(drawing_rect, selected_texture, typeof(Texture2D), false);
Once it's selected, I can then get the respective file size by doing this:
file_size = new FileInfo(AssetDatabase.GetAssetPath(selected_texture)).Length;
Problem
This works for most textures selected, but I encounter an error when I choose a built-in Unity texture. Any guidance would be greatly appreciated.
FileNotFoundException: Could not find file 'Resources/unity_builtin_extra'
There are two built-in asset libraries in Unity:
The built-in library at "Resources/unity_builtin_extra": contains the UGUI sprites, the Default-Material, built-in shaders, and so on.
The built-in library at "Library/unity default resources": contains the built-in 3D meshes and OnGUI assets.
If you call AssetDatabase.GetAssetPath on a built-in asset, you will always get one of the two paths above.
To solve the problem, you need to do something like the code below:
public const string BuiltinResources = "Resources/unity_builtin_extra";
public const string BuiltinExtraResources = "Library/unity default resources";

public static bool IsBuiltInAsset(string assetPath)
{
    return assetPath.Equals(BuiltinResources) || assetPath.Equals(BuiltinExtraResources);
}

public static long GetTextureFileLength(Texture texture)
{
    string texturePath = AssetDatabase.GetAssetPath(texture);
    if (IsBuiltInAsset(texturePath))
    {
        /*
         * You can get all built-in assets this way:
         *
         * var allAssets = AssetDatabase.LoadAllAssetsAtPath(BuiltinResources);
         * var allExtraAssets = AssetDatabase.LoadAllAssetsAtPath(BuiltinExtraResources);
         */

        // Not supported as a file on disk:
        // return -1;
        // Using the runtime memory size instead:
        return Profiler.GetRuntimeMemorySizeLong(texture);
    }
    else
    {
        return new FileInfo(texturePath).Length;
    }
}
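For completeness, a rough usage sketch in the context of the question's editor code (the selected_texture and file_size names come from the question; treating the runtime memory size as the "file size" for built-in textures is just one reasonable choice):

selected_texture = (Texture2D) EditorGUI.ObjectField(drawing_rect, selected_texture, typeof(Texture2D), false);
if (selected_texture != null)
{
    // Assumes GetTextureFileLength from the code above is accessible here.
    // Bytes on disk for regular assets, runtime memory size for built-in textures.
    file_size = GetTextureFileLength(selected_texture);
}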

Unity3d Issue NGUI UITexture/Facebook www Profile Pic

Having a small issue here with Unity and NGUI. NGUI has UITexture as its main texture type, just as Unity has GUITexture.
I sent a request to Facebook to get the user's profile image, which sends back a perfectly good URL that works fine if I put it in the browser.
My issue is taking that Texture2D (the Facebook API returns it as a Texture2D) and putting it on my UITexture. For some reason it just does not take it correctly; I keep getting a null value for it. I am also using Mobile Social from the Asset Store. Any help helps.
Here is my snippet of code.
private void UserDataLoaded(FB_Result result)
{
    SPFacebook.OnUserDataRequestCompleteAction -= UserDataLoaded;

    if (result.IsSucceeded)
    {
        Debug.Log("User Data Loaded!");
        IsUserInfoLoaded = true;

        string FBNametxt = SPFacebook.Instance.userInfo.Name;
        UILabel Name = GameObject.Find("FBName").GetComponent<UILabel>();
        Name.text = FBNametxt;

        Texture2D FBLoadTex = SPFacebook.Instance.userInfo.GetProfileImage(FB_ProfileImageSize.normal);
        FBGetProfile.GetComponent<UITexture>().mainTexture = FBLoadTex != null ? FBLoadTex : PlaceHolderImg;
    }
    else
    {
        Debug.Log("User data load failed, something was wrong");
    }
}
The placeholder image is just an already-selected image that is used if the Facebook profile pic is null, which is what I keep getting...
It's possible you're looking for RawImage, which exists for this purpose:
http://docs.unity3d.com/Manual/script-RawImage.html
Since the Raw Image does not require a sprite texture, you can use it to display any texture available to the Unity player. For example, you might show an image downloaded from a URL using the WWW class or a texture from an object in a game.
Use Unity UI (UnityEngine.UI) and take advantage of that feature.
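As a rough, untested sketch of that approach (the RawImage reference and the URL are placeholders; you would pass in the profile picture URL you got back from Facebook), downloading the texture yourself with the WWW class and assigning it to a Raw Image could look like this:

using UnityEngine;
using UnityEngine.UI;
using System.Collections;

public class ProfilePicLoader : MonoBehaviour
{
    public RawImage target; // assign the Raw Image in the Inspector

    public IEnumerator LoadProfilePic(string url)
    {
        WWW www = new WWW(url); // the WWW class mentioned in the docs quote above
        yield return www;

        if (string.IsNullOrEmpty(www.error))
            target.texture = www.texture; // a RawImage can display any Texture
        else
            Debug.Log("Profile picture download failed: " + www.error);
    }
}

Start it with StartCoroutine(LoadProfilePic(url)) from your Facebook callback.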
Add "Ngui Unity2d Sprite" component instead of UiTexture to your profile picture in editor and use below code in your Fb callback
private void ProfilePicCallBack(FBResult result)
{
    if (result.Error != null)
    {
        Debug.Log("Problem with getting Profile Picture");
        return;
    }

    ProfileImage.GetComponent<UI2DSprite>().sprite2D =
        Sprite.Create(result.Texture, new Rect(0, 0, 128, 128), new Vector2(0, 0));
}
There might be another way to do this, but this worked for me.

Can I take a photo in Unity using the device's camera?

I'm entirely unfamiliar with Unity3D's more complex feature set and am curious if it has the capability to take a picture and then manipulate it. Specifically my desire is to have the user take a selfie and then have them trace around their face to create a PNG that would then be texture mapped onto a model.
I know that the face mapping onto a model is simple, but I'm wondering if I need to write the photo/carving functionality into the encompassing Chrome app, or if it can all be done from within Unity. I don't need a tutorial on how to do it, just asking if it's something that is possible.
Yes, this is possible. You will want to look at the WebCamTexture functionality.
You create a WebCamTexture and call its Play() function, which starts the camera. WebCamTexture, like any Texture, allows you to get the pixels via a GetPixels() call. This allows you to take a snapshot whenever you like, and you can save it in a Texture2D. A call to EncodeToPNG() and a subsequent write to file should get you there.
Do note that the code below is a quick write-up based on the documentation. I have not tested it. You might have to select a correct device if there are more than one available.
using UnityEngine;
using System.Collections;
using System.IO;

public class WebCamPhotoCamera : MonoBehaviour
{
    WebCamTexture webCamTexture;

    void Start()
    {
        webCamTexture = new WebCamTexture();
        // Add a Mesh Renderer to the GameObject this script is attached to.
        GetComponent<Renderer>().material.mainTexture = webCamTexture;
        webCamTexture.Play();
    }

    IEnumerator TakePhoto() // Start this coroutine on some button click
    {
        // NOTE - you almost certainly have to do this here:
        yield return new WaitForEndOfFrame();
        // It's a rare case where the Unity doco is pretty clear:
        // http://docs.unity3d.com/ScriptReference/WaitForEndOfFrame.html
        // Be sure to scroll down to the SECOND long example on that doco page.

        Texture2D photo = new Texture2D(webCamTexture.width, webCamTexture.height);
        photo.SetPixels(webCamTexture.GetPixels());
        photo.Apply();

        // Encode to a PNG
        byte[] bytes = photo.EncodeToPNG();

        // Write out the PNG. Of course you have to substitute your_path for something sensible.
        File.WriteAllBytes(your_path + "photo.png", bytes);
    }
}
For those trying to get the camera to render live feed, here's how I managed to pull it off. First, I edited Bart's answer so the texture would be assigned on Update rather than just on Start:
void Start()
{
    webCamTexture = new WebCamTexture();
    webCamTexture.Play();
}

void Update()
{
    GetComponent<RawImage>().texture = webCamTexture;
}
Then I attached the script to a GameObject with a RawImage component. You can easily create one by Right Click -> UI -> RawImage in the Hierarchy in the Unity Editor (this requires Unity 4.6 and above). Running it should show a live feed of the camera in your view. As of this writing, Unity 5 supports the use of webcams in the free personal edition of Unity 5.
I hope this helps anyone looking for a good way to capture live camera feed in Unity.
It is possible. I highly recommend you look at the WebCamTexture Unity API. It has some useful functions:
GetPixel() -- Returns pixel color at coordinates (x, y).
GetPixels() -- Get a block of pixel colors.
GetPixels32() -- Returns the pixels data in raw format.
MarkNonReadable() -- Marks WebCamTexture as unreadable
Pause() -- Pauses the camera.
Play() -- Starts the camera.
Stop() -- Stops the camera.
Bart's answer needs one required modification. I used his code and the pic I was getting was black. The required modification is that we have to convert TakePhoto to a coroutine and add
yield return new WaitForEndOfFrame();
at the start of the coroutine. (Courtesy of @fafase)
For more details see
http://docs.unity3d.com/ScriptReference/WaitForEndOfFrame.html
You can also refer to
Take photo using webcam is giving black output[Unity3D]
Yes, you can. I created an Android native camera plugin that can open your Android device's camera, capture an image, record video, and save it to the desired location on your device with just a few lines of code.
You need to find your webcam device index by searching for it in the devices list and selecting it for the WebCamTexture to play.
You can use this code:
using UnityEngine;
using System.Collections;
using System.IO;
using UnityEngine.UI;
using System.Collections.Generic;

public class GetCam : MonoBehaviour
{
    WebCamTexture webCam;
    string your_path = "C:\\Users\\Jay\\Desktop"; // any path you want to save your image to
    public RawImage display;
    public AspectRatioFitter fit;

    public void Start()
    {
        if (WebCamTexture.devices.Length == 0)
        {
            Debug.LogError("cannot find any camera!");
            return;
        }

        int index = -1;
        for (int i = 0; i < WebCamTexture.devices.Length; i++)
        {
            if (WebCamTexture.devices[i].name.ToLower().Contains("your webcam name"))
            {
                Debug.LogError("WebCam Name:" + WebCamTexture.devices[i].name + " Webcam Index:" + i);
                index = i;
            }
        }

        if (index == -1)
        {
            Debug.LogError("cannot find your camera name!");
            return;
        }

        WebCamDevice device = WebCamTexture.devices[index];
        webCam = new WebCamTexture(device.name);
        webCam.Play();
        StartCoroutine(TakePhoto());
        display.texture = webCam;
    }

    public void Update()
    {
        float ratio = (float)webCam.width / (float)webCam.height;
        fit.aspectRatio = ratio;

        float scaleY = webCam.videoVerticallyMirrored ? -1f : 1f;
        display.rectTransform.localScale = new Vector3(1f, scaleY, 1f);

        int orient = -webCam.videoRotationAngle;
        display.rectTransform.localEulerAngles = new Vector3(0, 0, orient);
    }

    public void callTakePhoto() // call this function in a button click event
    {
        StartCoroutine(TakePhoto());
    }

    IEnumerator TakePhoto() // Start this coroutine on some button click
    {
        // NOTE - you almost certainly have to do this here:
        yield return new WaitForEndOfFrame();
        // It's a rare case where the Unity doco is pretty clear:
        // http://docs.unity3d.com/ScriptReference/WaitForEndOfFrame.html
        // Be sure to scroll down to the SECOND long example on that doco page.

        Texture2D photo = new Texture2D(webCam.width, webCam.height);
        photo.SetPixels(webCam.GetPixels());
        photo.Apply();

        // Encode to a PNG
        byte[] bytes = photo.EncodeToPNG();

        // Write out the PNG. Of course you have to substitute your_path for something sensible.
        File.WriteAllBytes(your_path + "\\photo.png", bytes);
    }
}
There is a plugin available for this type of functionality called Camera Capture Kit - https://www.assetstore.unity3d.com/en/#!/content/56673 and while the functionality provided is geared towards mobile it contains a demo of how you can use the WebCamTexture to take a still image.
If you want to do that without using a third-party plugin, then @FuntionR's solution will help you. But if you want to save the captured photo to the gallery (Android & iOS), that's not possible within Unity; you have to write native code to transfer the photo to the gallery and then call it from Unity.
Here is a summarised blog post which will guide you to achieve your goal:
http://unitydevelopers.blogspot.com/2018/07/pick-image-from-gallery-in-unity3d.html
Edit: Note that the above post describes picking an image from the gallery, but the same process applies to saving an image to the gallery.

Google Plus Profile Picture Google Play Game Services

I've already seen all the posts in this forum but I haven't found anything.
How can I show the Google+ profile picture of the person who logs in to my app in an ImageView? With ImageManager?
Can you show some code, please?
The method below does what you want.
public class AvatarImage extends ImageView {

    Bitmap image;
    public static final String TAG = "AvatarImage";

    public AvatarImage(Context context) {
        super(context);
        setImageResource(R.drawable.avatar);
    }

    /**
     * This method takes the participant object and attempts to
     * download the avatar of the participant from Google+.
     * If this is unsuccessful, the default avatar is shown.
     * @param p
     */
    public void setImageFromParticipant(Participant p) {
        ImageManager im = ImageManager.create(getContext());
        im.loadImage(this, p.getIconImageUri(), R.drawable.avatar);
        try {
            Bitmap bitmap = ((BitmapDrawable) this.getDrawable()).getBitmap();
            setImageBitmap(bitmap);
        } catch (NullPointerException e) {
            e.printStackTrace();
        }
    }
}
The code shown in the other answer from John is correct; however, there is currently an issue with the Google Play Services library, as shown here: Google Issue Tracker, for anyone using version 10 of the library (and perhaps version 9; version 8 is fine). Unfortunately, because of this issue, if you even attempt to get the URI decoded by ImageManager, Google Play Services quits working. Either wait for the next version of the library, or try to get hold of version 8.
(I couldn't put this in as a comment on the answer above, so had to go with a new answer...)