AudioClip.Length is incorrect when loading from UnityWebRequestMultimedia GetAudioClip - unity3d

When I download a Unity AudioClip from Firebase by URL:
var request = UnityWebRequestMultimedia.GetAudioClip(url, AudioType.MPEG);
yield return request.SendWebRequest();
var clip = DownloadHandlerAudioClip.GetContent(request);
print("success..." + clip.length);
The audio clip's real length is 4.86 seconds, but after downloading, clip.length is 8.1.

My uploaded AudioClip was recorded with Unity's Microphone class and converted to MP3 data using "SavWav".
The incorrect length of the downloaded clip is an unfixed Unity bug.
I used SavWav.TrimSilence(clip, 0, (clip_) => { ... }) to get a clip with the correct length.
Hope that helps
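For reference, a minimal sketch of that workaround (assuming a SavWav helper with a TrimSilence overload that takes a completion callback, as in the snippet above; the class and URL names are made up for illustration):

```csharp
using System.Collections;
using UnityEngine;
using UnityEngine.Networking;

public class ClipLengthWorkaround : MonoBehaviour
{
    // Hypothetical field; substitute your own Firebase download URL.
    public string url;

    IEnumerator DownloadClip()
    {
        using (var request = UnityWebRequestMultimedia.GetAudioClip(url, AudioType.MPEG))
        {
            yield return request.SendWebRequest();
            var clip = DownloadHandlerAudioClip.GetContent(request);
            Debug.Log("Reported length: " + clip.length); // may be longer than the real audio

            // TrimSilence strips leading/trailing silence and hands back a new
            // clip whose length matches the audible audio.
            SavWav.TrimSilence(clip, 0f, trimmed =>
            {
                Debug.Log("Trimmed length: " + trimmed.length);
            });
        }
    }
}
```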

Related

Is there an AR SDK for creation of Image Targets during runtime?

For a demo application I need a reliable AR SDK that allows the creation of image targets at runtime.
The SDK has to run on a mobile device, and the targets should not be created by a cloud server or during development. In this scenario, users would photograph their own markers (e.g. magazine covers), which are then cropped and warped for use as markers (3D objects have to be assigned to these markers at random). Neither Vuforia nor ARToolkit allows this scenario. Some other SDKs that might support it: Kudan, EasyAR or MAXST.
If this is not possible at all, is there an AR SDK that allows using the exact same image target (or marker of any kind) multiple times for rendering the same 3D object? Again, Vuforia and ARToolkit do not support this.
Kudan does not seem to support this feature in Unity, though I think it's supported in the native SDKs:
Unlike the native SDKs, the Unity Plugin can't just get a raw image file from the assets and load it into the tracker. This is a feature we will be adding to the plugin in the future.
Source: https://kudan.readme.io/docs/markers-1
EasyAR, on the other hand, supports creating an ImageTarget from a .png or .jpg (one image at a time), or from a .json file to add multiple images in one batch.
This is demonstrated in the example projects in EasyAR_SDK_2.2.0_Basic_Samples_Unity here.
Note: to run the example project you need to
1 - sign up on their site https://www.easyar.com/
2 - create an SDK license key from here.
3 - follow this guide to place the key and run in Unity.
4 - your goal is achieved in the "HelloARTarget" project
And here is the example project script, loading an AR experience from .jpg images:
using UnityEngine;
using System.Linq;
using EasyAR;

namespace Sample
{
    public class HelloARTarget : MonoBehaviour
    {
        private const string title = "Please enter KEY first!";
        private const string boxtitle = "===PLEASE ENTER YOUR KEY HERE===";
        private const string keyMessage = ""
            + "Steps to create the key for this sample:\n"
            + "  1. login www.easyar.com\n"
            + "  2. create app with\n"
            + "      Name: HelloARTarget (Unity)\n"
            + "      Bundle ID: cn.easyar.samples.unity.helloartarget\n"
            + "  3. find the created item in the list and show key\n"
            + "  4. replace all text in TextArea with your key";

        private void Awake()
        {
            if (FindObjectOfType<EasyARBehaviour>().Key.Contains(boxtitle))
            {
#if UNITY_EDITOR
                UnityEditor.EditorUtility.DisplayDialog(title, keyMessage, "OK");
#endif
                Debug.LogError(title + " " + keyMessage);
            }
        }

        void CreateTarget(string targetName, out SampleImageTargetBehaviour targetBehaviour)
        {
            GameObject Target = new GameObject(targetName);
            Target.transform.localPosition = Vector3.zero;
            targetBehaviour = Target.AddComponent<SampleImageTargetBehaviour>();
        }

        void Start()
        {
            SampleImageTargetBehaviour targetBehaviour;
            ImageTrackerBehaviour tracker = FindObjectOfType<ImageTrackerBehaviour>();

            // dynamically load from image (*.jpg, *.png)
            CreateTarget("argame01", out targetBehaviour);
            targetBehaviour.Bind(tracker);
            targetBehaviour.SetupWithImage("sightplus/argame01.jpg", StorageType.Assets, "argame01", new Vector2());
            GameObject duck02_1 = Instantiate(Resources.Load("duck02")) as GameObject;
            duck02_1.transform.parent = targetBehaviour.gameObject.transform;

            // dynamically load from json file
            CreateTarget("argame00", out targetBehaviour);
            targetBehaviour.Bind(tracker);
            targetBehaviour.SetupWithJsonFile("targets.json", StorageType.Assets, "argame");
            GameObject duck02_2 = Instantiate(Resources.Load("duck02")) as GameObject;
            duck02_2.transform.parent = targetBehaviour.gameObject.transform;

            // dynamically load from json string (note the @ verbatim string)
            string jsonString = @"
            {
                ""images"" :
                [
                    {
                        ""image"" : ""sightplus/argame02.jpg"",
                        ""name"" : ""argame02""
                    }
                ]
            }
            ";
            CreateTarget("argame02", out targetBehaviour);
            targetBehaviour.Bind(tracker);
            targetBehaviour.SetupWithJsonString(jsonString, StorageType.Assets, "argame02");
            GameObject duck02_3 = Instantiate(Resources.Load("duck02")) as GameObject;
            duck02_3.transform.parent = targetBehaviour.gameObject.transform;

            // dynamically load all targets from json file
            var targetList = ImageTargetBaseBehaviour.LoadListFromJsonFile("targets2.json", StorageType.Assets);
            foreach (var target in targetList.Where(t => t.IsValid).OfType<ImageTarget>())
            {
                CreateTarget("argame03", out targetBehaviour);
                targetBehaviour.Bind(tracker);
                targetBehaviour.SetupWithTarget(target);
                GameObject duck03 = Instantiate(Resources.Load("duck03")) as GameObject;
                duck03.transform.parent = targetBehaviour.gameObject.transform;
            }

            targetBehaviour = null;
        }
    }
}
Edit
Although it's easy to make an ImageTarget from a .png, I wonder how to check whether an image contains sufficient features to be detected in EasyAR?
Google ARCore supports this feature, but it has a limited number of supported devices:
https://developers.google.com/ar/develop/java/augmented-images/guide
Edit 2
It looks like Vuforia supports creating the image target at runtime. You can also drag and drop the image as a texture in the editor, without having to generate a dataset from the portal, although you still need the API key from the portal.
You can definitely do that with Vuforia and UserDefinedTargetBuildingBehaviour:
https://library.vuforia.com/articles/Training/User-Defined-Targets-Guide
Kudan and EasyAR seem to offer this option. I will try to integrate them with Google Cardboard.
I have seen an OpenSpace3D video doing that. I believe they integrated ARToolKit5 into OpenSpace3D and made it work somehow. OpenSpace3D seems to be open source, so you might be able to look into their solution.
This is the link to the video:
https://www.youtube.com/watch?v=vSF1ZH1CwQI
Look at around minute 8:50 to 9:50.

Unity 5.1 Distorted image after download from web

When I load my PNGs after compressing them with TinyPNG, they get distorted (all purple and transparent):
http://s22.postimg.org/b39g0bhn5/Screen_Shot_2015_06_28_at_10_39_50_AM.png
The background, for example, should be blue:
http://postimg.org/image/fez234o6d/
This only happens when I use pictures that were compressed by tinypng.com, and only after I updated to Unity 5.1.
I'm downloading the image with the WWW class and loading the texture using Texture2D.
Is this problem known to anyone?
I had exactly the same issue. I was able to solve it using the following code:
mat.mainTexture = new Texture2D(32, 32, TextureFormat.DXT5, false);
Texture2D newTexture = new Texture2D(32, 32, TextureFormat.DXT5, false);
WWW stringWWW = new WWW(texture1URL);
yield return stringWWW;
if (stringWWW.error == null)
{
    stringWWW.LoadImageIntoTexture(newTexture);
    mat.mainTexture = newTexture;
}
The key seemed to be using DXT5 as the texture format, together with the LoadImageIntoTexture(...) method.

MovieTexture won't play audio

I'm trying to dynamically load and play a video file. No matter what I do, I cannot seem to figure out why the audio does not play.
var www = new WWW("http://unity3d.com/files/docs/sample.ogg");
var movieTexture = www.movie;
var movieAudio = www.movie.audioClip;
while (!movieTexture.isReadyToPlay) yield return 0;
// Assign movie texture and audio
var videoAnimation = videoAnimationPrefab.GetComponent<VideoAnimation>();
var videoRenderer = videoAnimation.GetVideoRenderer();
var audioSource = videoAnimation.GetAudioSource();
videoRenderer.material.mainTexture = movieTexture;
audioSource.clip = movieAudio;
// Play the movie and sound
movieTexture.Play();
audioSource.Play();
// Double check audio is playing...
Debug.Log("Audio playing: " + audioSource.isPlaying);
Every time I receive Audio playing: False
I've also tried using a GUITexture using this as a guide, but no dice. There are no errors displayed in the console.
What am I doing wrong that makes the audio never work?
Thanks in advance for any help!
Changed to:
while (!movieTexture.isReadyToPlay) yield return 0;
var movieAudio = movieTexture.audioClip;
Even though AudioClip inherits from Object, a call to movieTexture.audioClip seems to return a copy rather than a reference to the underlying object. So at the time I was assigning it, the clip had not been created yet; I had to wait until the movie was "ready to play" before fetching the audioClip.
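Putting the fix in context, the corrected coroutine looks roughly like this (a sketch; videoAnimationPrefab and the VideoAnimation component are the asker's own objects):

```csharp
IEnumerator PlayMovie()
{
    var www = new WWW("http://unity3d.com/files/docs/sample.ogg");
    var movieTexture = www.movie;

    // Wait until the movie is ready BEFORE fetching its audio clip.
    while (!movieTexture.isReadyToPlay) yield return 0;
    var movieAudio = movieTexture.audioClip;

    var videoAnimation = videoAnimationPrefab.GetComponent<VideoAnimation>();
    videoAnimation.GetVideoRenderer().material.mainTexture = movieTexture;

    var audioSource = videoAnimation.GetAudioSource();
    audioSource.clip = movieAudio;

    movieTexture.Play();
    audioSource.Play();
}
```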

Can I take a photo in Unity using the device's camera?

I'm entirely unfamiliar with Unity3D's more complex feature set and am curious if it has the capability to take a picture and then manipulate it. Specifically, my desire is to have the user take a selfie and then trace around their face to create a PNG that would then be texture-mapped onto a model.
I know that the face mapping onto a model is simple, but I'm wondering if I need to write the photo/carving functionality into the encompassing Chrome app, or if it can all be done from within Unity. I don't need a tutorial on how to do it, just asking if it's something that is possible.
Yes, this is possible. You will want to look at the WebCamTexture functionality.
You create a WebCamTexture and call its Play() function, which starts the camera. WebCamTexture, like any Texture, allows you to get the pixels via a GetPixels() call. This allows you to take a snapshot whenever you like, and you can save it in a Texture2D. A call to EncodeToPNG() and a subsequent write to file should get you there.
Do note that the code below is a quick write-up based on the documentation. I have not tested it. You might have to select a correct device if there are more than one available.
using UnityEngine;
using System.Collections;
using System.IO;

public class WebCamPhotoCamera : MonoBehaviour
{
    WebCamTexture webCamTexture;

    void Start()
    {
        webCamTexture = new WebCamTexture();
        // Add a Renderer (e.g. a MeshRenderer) to the GameObject this script is attached to
        GetComponent<Renderer>().material.mainTexture = webCamTexture;
        webCamTexture.Play();
    }

    IEnumerator TakePhoto() // Start this coroutine on some button click
    {
        // NOTE - you almost certainly have to do this here:
        yield return new WaitForEndOfFrame();
        // It's a rare case where the Unity doco is pretty clear:
        // http://docs.unity3d.com/ScriptReference/WaitForEndOfFrame.html
        // Be sure to scroll down to the SECOND long example on that doco page.

        Texture2D photo = new Texture2D(webCamTexture.width, webCamTexture.height);
        photo.SetPixels(webCamTexture.GetPixels());
        photo.Apply();

        // Encode to a PNG
        byte[] bytes = photo.EncodeToPNG();

        // Write out the PNG. Of course you have to substitute your_path for something sensible
        File.WriteAllBytes(your_path + "photo.png", bytes);
    }
}
For those trying to get the camera to render live feed, here's how I managed to pull it off. First, I edited Bart's answer so the texture would be assigned on Update rather than just on Start:
void Start()
{
    webCamTexture = new WebCamTexture();
    webCamTexture.Play();
}

void Update()
{
    GetComponent<RawImage>().texture = webCamTexture;
}
Then I attached the script to a GameObject with a RawImage component. You can easily create one by Right Click -> UI -> RawImage in the Hierarchy in the Unity Editor (this requires Unity 4.6 and above). Running it should show a live feed of the camera in your view. As of this writing, Unity 5 supports the use of webcams in the free personal edition of Unity 5.
I hope this helps anyone looking for a good way to capture live camera feed in Unity.
It is possible. I highly recommend you look at the WebCamTexture Unity API. It has some useful functions:
GetPixel() -- Returns pixel color at coordinates (x, y).
GetPixels() -- Get a block of pixel colors.
GetPixels32() -- Returns the pixel data in raw format.
MarkNonReadable() -- Marks the WebCamTexture as unreadable.
Pause() -- Pauses the camera.
Play() -- Starts the camera.
Stop() -- Stops the camera.
Bart's answer needs one modification: I used his code and the picture I was getting was black. The required modification is to convert TakePhoto to a coroutine and add
yield return new WaitForEndOfFrame();
at the start of the coroutine. (Courtesy @fafase)
For more details see
http://docs.unity3d.com/ScriptReference/WaitForEndOfFrame.html
You can also refer to
Take photo using webcam is giving black output[Unity3D]
Yes, you can. I created an Android native camera plugin that can open your Android device's camera, capture an image, record video, and save it to a desired location on your device with just a few lines of code.
You need to find your webcam device index by searching for it in the devices list, and select it for the WebCamTexture to play.
You can use this code:
using UnityEngine;
using System.Collections;
using System.IO;
using UnityEngine.UI;
using System.Collections.Generic;

public class GetCam : MonoBehaviour
{
    WebCamTexture webCam;
    string your_path = "C:\\Users\\Jay\\Desktop"; // any path you want to save your image to
    public RawImage display;
    public AspectRatioFitter fit;

    public void Start()
    {
        if (WebCamTexture.devices.Length == 0)
        {
            Debug.LogError("Cannot find any camera!");
            return;
        }
        int index = -1;
        for (int i = 0; i < WebCamTexture.devices.Length; i++)
        {
            if (WebCamTexture.devices[i].name.ToLower().Contains("your webcam name"))
            {
                Debug.Log("WebCam Name: " + WebCamTexture.devices[i].name + " WebCam Index: " + i);
                index = i;
            }
        }
        if (index == -1)
        {
            Debug.LogError("Cannot find a camera with that name!");
            return;
        }
        WebCamDevice device = WebCamTexture.devices[index];
        webCam = new WebCamTexture(device.name);
        webCam.Play();
        StartCoroutine(TakePhoto());
        display.texture = webCam;
    }

    public void Update()
    {
        if (webCam == null) return; // no camera was found in Start()
        float ratio = (float)webCam.width / (float)webCam.height;
        fit.aspectRatio = ratio;
        float scaleY = webCam.videoVerticallyMirrored ? -1f : 1f;
        display.rectTransform.localScale = new Vector3(1f, scaleY, 1f);
        int orient = -webCam.videoRotationAngle;
        display.rectTransform.localEulerAngles = new Vector3(0, 0, orient);
    }

    public void callTakePhoto() // call this function in a button click event
    {
        StartCoroutine(TakePhoto());
    }

    IEnumerator TakePhoto()
    {
        // Wait for the end of the frame so the camera image has been rendered:
        // http://docs.unity3d.com/ScriptReference/WaitForEndOfFrame.html
        yield return new WaitForEndOfFrame();

        Texture2D photo = new Texture2D(webCam.width, webCam.height);
        photo.SetPixels(webCam.GetPixels());
        photo.Apply();

        // Encode to a PNG
        byte[] bytes = photo.EncodeToPNG();

        // Write out the PNG. Substitute your_path for something sensible.
        File.WriteAllBytes(your_path + "\\photo.png", bytes);
    }
}
There is a plugin available for this type of functionality called Camera Capture Kit - https://www.assetstore.unity3d.com/en/#!/content/56673 - and while the functionality provided is geared towards mobile, it contains a demo of how you can use the WebCamTexture to take a still image.
If you want to do that without using a third-party plugin, then @FuntionR's solution will help you. But if you want to save the captured photo to the gallery (Android & iOS), it's not possible within Unity; you have to write native code to transfer the photo to the gallery and then call it from Unity.
Here is a summarised blog post which will guide you to achieve your goal:
http://unitydevelopers.blogspot.com/2018/07/pick-image-from-gallery-in-unity3d.html
Edit: Note that the above thread describes picking an image from the gallery, but the same process applies to saving an image to the gallery.

Posting screenshot through Facebook SDK for Unity

I have a problem with posting a screenshot to FB. The connection is not the problem, because I collect, deserialize and show my FB data. Posting a score and a message to a friend works fine, but I cannot manage to post a shot from my phone. I am using Eclipse for debugging, but I don't get any errors. My code looks like this:
IEnumerator TakeScreenShot()
{
    // Coroutine, because I want to wait until the end of the frame for the shot
    yield return new WaitForEndOfFrame();

    // Create a new texture that will be my screenshot
    Texture2D tex = new Texture2D(Screen.width, Screen.height, TextureFormat.RGB24, false);

    // Read pixels from the screen and apply them to the texture
    tex.ReadPixels(new Rect(0, 0, Screen.width, Screen.height), 0, 0);
    tex.Apply();

    // Now we have the screenshot; encode the texture to an array of bytes
    byte[] buffer = tex.EncodeToPNG();

    // ...so we can save it to storage
    // System.IO.File.WriteAllBytes(Application.dataPath + "/screenshots/screen/screen" + ".png", buffer);

    // ...or convert it to a string for posting to Facebook
    string s = Convert.ToBase64String(buffer);

    // ????? maybe my string format is not correct
    var query = new Dictionary<string, string>();
    query["photos"] = s;
    FB.API("me?fields=photos", Facebook.HttpMethod.POST, delegate(string r) { ; }, query);
}
When I go to the Graph Explorer and paste the command "me?fields=photos", I get nothing, which means the function did nothing. I forgot to say that I granted the user_photos permission. This is frustrating: I have lost three days on a problem that looked trivial, and yet there is no solution in sight. I will appreciate any suggestions.
Right now there's no easy way to handle this through FB.API, since it takes in a Dictionary<string, string>.
However, FB.API() is really just a wrapper around Unity's WWW class. You can call the Graph endpoints at graph.facebook.com and pass in the byte[] buffer = tex.EncodeToPNG() as a parameter in a WWWForm.
UPDATE: FB.API() has been updated in version 4.3.3 to support WWWForm as well. You can now upload photos via that method.
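As a sketch of the WWWForm approach (the "source" field name and the access-token handling are assumptions on my part; check the Graph API photo-publishing docs for your SDK version):

```csharp
IEnumerator PostScreenshot(byte[] pngBytes, string accessToken)
{
    var form = new WWWForm();
    // "source" is the Graph API field for the raw image payload (assumed here).
    form.AddBinaryData("source", pngBytes, "screenshot.png", "image/png");
    form.AddField("message", "My screenshot");

    // Post to the user's photos edge; requires publish permissions.
    var www = new WWW("https://graph.facebook.com/me/photos?access_token=" + accessToken, form);
    yield return www;

    if (www.error != null) Debug.LogError(www.error);
    else Debug.Log("Uploaded: " + www.text);
}
```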