I'm building an app with ActionScript 3.0 in Flash Builder. This is a follow-up to this question.
I need to upload the ByteArray to my server, but the function I use to convert the BitmapData to a ByteArray is extremely slow, so slow that it freezes up my mobile device. My code is as follows:
var jpgenc:JPEGEncoder = new JPEGEncoder(50);
trace('encode');
//encode the bitmapdata object and keep the encoded ByteArray
var imgByteArray:ByteArray = jpgenc.encode(bitmap);
temp2 = File.applicationStorageDirectory.resolvePath("snapshot.jpg");
var fs:FileStream = new FileStream();
trace('fs');
try{
//open file in write mode
fs.open(temp2,FileMode.WRITE);
//write bytes from the byte array
fs.writeBytes(imgByteArray);
//close the file
fs.close();
}catch(e:Error){
    //handle the write error
    trace(e.message);
}
Is there a different way to convert it to a ByteArray? Is there a better, faster way?
Try the blooddy library: http://www.blooddy.by. I haven't tested it on mobile devices, though; comment if it works for you.
Use BitmapData.encode(); it's faster by orders of magnitude on mobile: http://help.adobe.com/en_US/FlashPlatform/reference/actionscript/3/flash/display/BitmapData.html#encode%28%29
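For reference, a minimal sketch of what that call looks like (assuming AIR 3.3 / Flash Player 11.3 or newer, where flash.display.JPEGEncoderOptions is available; bitmap is the BitmapData you are already encoding):
import flash.display.JPEGEncoderOptions;
import flash.utils.ByteArray;

// native, synchronous JPEG encode of the whole bitmap at quality 50
var options:JPEGEncoderOptions = new JPEGEncoderOptions(50);
var imgByteArray:ByteArray = bitmap.encode(bitmap.rect, options);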
You should try to find a JPEG encoder that is capable of encoding asynchronously. That way the app can still be used while the image is being compressed. I haven't tried any of the libraries, but this one looks promising:
http://segfaultlabs.com/devlogs/alchemy-asynchronous-jpeg-encoding-2
It uses Alchemy, which should make it faster than the JPEGEncoder from as3corelib (which I guess is the one you're using at the moment).
A native JPEG encoder would be ideal; an asynchronous one would be good too, but possibly still slow (just not blocking). Another option:
var pixels:ByteArray = bitmapData.getPixels(bitmapData.rect);
pixels.compress();
I'm not sure of native performance, and performance definitely depends on what kind of images you have.
The answer from Ilya was what did it for me. I downloaded the library, and there is an example of how to use it inside. I have been working on getting the CameraUI in Flash Builder to take a picture, encode/compress it, and then send it via a web service to my server (the data is sent as a compressed byte array). I did this:
by.blooddy.crypto.image.JPEGEncoder.encode( bmp, 30 );
Where bmp is my BitmapData. The encode took under 3 seconds and fit easily into my synchronous flow of control. I tried async methods, but they ultimately took a really long time and were difficult to track when, for example, a user moved from cell service to Wi-Fi or from tower to tower while an upload was in progress.
Comment here if you need more details.
Related
I need to load/save the number of coins a user has earned in my Unity game with saved games for Play Games Services.
There is an example on how to save an image on this page: https://developer.android.com/games/pgs/unity/saved-games#write_a_saved_game
Can someone tell me how I can load/save a number instead of an image?
To be precise, I wouldn't say you are exactly saving an image. I mean, you do save it, but only as a cover image for your save file, and I don't know whether you can retrieve it later (maybe you can; I haven't really checked).
Probably the most important part of using the Google Play save system is the byte[] savedData argument. It's just a byte array, and it's up to you what you pass there and how you interpret that data on game load.
There are a lot of ways you could approach it. I personally create a custom GameSave object with all my data that I want to save, then I serialize it using JsonUtility.
string json = JsonUtility.ToJson(gameSave);
After that, I use MemoryStream and BinaryFormatter to convert the JSON string to a byte array:
MemoryStream memoryStream = new MemoryStream();
BinaryFormatter binaryFormatter = new BinaryFormatter();
binaryFormatter.Serialize(memoryStream, json);
// ToArray() copies just the bytes written; GetBuffer() would include unused buffer padding
byte[] data = memoryStream.ToArray();
Then you pass that data to the savedGameClient.CommitUpdate method as an argument.
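If it helps, here is a rough sketch of that call based on the plugin documentation (metadata is the ISavedGameMetadata you got from the open callback; the description text is just an example):
using GooglePlayGames;
using GooglePlayGames.BasicApi.SavedGame;
using UnityEngine;

void SaveData(ISavedGameMetadata metadata, byte[] data)
{
    ISavedGameClient savedGameClient = PlayGamesPlatform.Instance.SavedGame;
    SavedGameMetadataUpdate update = new SavedGameMetadataUpdate.Builder()
        .WithUpdatedDescription("Saved at " + System.DateTime.Now)
        .Build();
    // CommitUpdate writes the byte array plus the metadata update, then fires the callback
    savedGameClient.CommitUpdate(metadata, update, data,
        (status, updated) => Debug.Log("Commit status: " + status));
}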
Of course, that's just one way of doing it; you can send something other than JSON serialized from a class object.
Everything else is pretty well documented, so once you handle that part, you should be able to manage the rest.
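For loading, a rough sketch of the reverse path (again based on the plugin docs; GameSave and its coins field are just my example class from above):
// needs System.IO and System.Runtime.Serialization.Formatters.Binary, as in the save code
savedGameClient.ReadBinaryData(metadata, (status, data) =>
{
    if (status == SavedGameRequestStatus.Success)
    {
        BinaryFormatter binaryFormatter = new BinaryFormatter();
        string json = (string)binaryFormatter.Deserialize(new MemoryStream(data));
        GameSave gameSave = JsonUtility.FromJson<GameSave>(json);
        Debug.Log("Loaded coins: " + gameSave.coins); // coins is a hypothetical field
    }
});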
I would like to fill an array from an audio stream for around ten seconds (I wish to do some processing on the data). So far I can:
(a) obtain the microphone stream using mediaRecorder
(b) use an analyser and analyser.getFloatTimeDomainData(dataArray) to obtain an array, but it is size-limited to only a little over half a second of data. I can also successfully output the data after processing back onto a stream and to outDestination.
(c) I have also experimented with obtaining a 'chunks' array from MediaRecorder directly, but the problem then is that I can't find any MIME type that would give me a simple array of values, i.e. an uncompressed, sample-by-sample, single-channel set of values (in other words, a longer version of the 'dataArray' in (b)).
I am wondering if I am missing a simple way round this problem?
Solutions I have seen tend to use step (b) and do regular polls, then reassemble a longer array; however, the timing seems a bit tricky.
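To make it concrete, the sort of thing I have been sketching looks roughly like this (a ScriptProcessorNode that copies each block of raw samples until about ten seconds are collected; it's deprecated in favour of audio worklets, and all names here are my own placeholders):
// collect ~10 s of mono Float32 samples, then stitch the blocks together
async function captureTenSeconds() {
  const audioCtx = new AudioContext();
  const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
  const source = audioCtx.createMediaStreamSource(stream);
  const processor = audioCtx.createScriptProcessor(4096, 1, 1);

  const blocks = [];
  const targetSamples = audioCtx.sampleRate * 10;
  let collected = 0;

  processor.onaudioprocess = (e) => {
    if (collected >= targetSamples) return;
    const input = e.inputBuffer.getChannelData(0);
    blocks.push(new Float32Array(input)); // copy, because the buffer is reused
    collected += input.length;
  };

  source.connect(processor);
  processor.connect(audioCtx.destination); // some browsers need this for the callback to fire
}

function concatBlocks(blocks, total) {
  const out = new Float32Array(total);
  let offset = 0;
  for (const b of blocks) { out.set(b, offset); offset += b.length; }
  return out;
}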
I've also seen suggestions to use audio worklets. I might have to do this, but would prefer a simpler solution!
Or again, if someone knows how to drive MediaRecorder to output the chunks array in a simple array format (Float32, one channel), that would do the trick.
Or maybe I'm missing something simpler?
I have code showing those steps that have been successful and will upload if anyone requests.
I want to be able to send BufferedImages generated by my Java program over the local network in real time, so my second application can show them.
I have been looking through a lot of websites over the last two days, but I wasn't able to find anything. The only thing I found was this:
Can I use Xuggler to encode video/audio to a byte array?
I tried implementing the URL protocol handler, but the problem is that MediaWriter still wants a URL, and as soon as I add a video stream, it opens the container a second time with the URL and then it crashes.
I hope you can help me and thanks in advance.
Code I have right now:
val clientSocket = serverSocket.accept()
connectedClients.add(clientSocket)
val container = IContainer.make()
val writer = ToolFactory.makeWriter("localhost", container)
container.open(VTURLProtocolHandler(clientSocket.getOutputStream()), IContainer.Type.WRITE, IContainerFormat.make())
writer.addVideoStream(0, 0, ICodec.ID.CODEC_ID_H264, width, height)
I'm trying to grab a screenshot with renderer.domElement.toDataURL("image/png"), and save it to a file.
The image is the right size, but it's black.
I have preserveDrawingBuffer turned on.
I think I'm decoding and saving the file correctly, because when I hexdump it I can see the correct initial characters for the PNG format, as well as the IHDR and IDAT chunk headers. However the closing IEND is missing.
Any known issues here? Hints? Windows 7/Firefox up to date if it matters.
Thanks... (Sorry if this is dumb, I'm very new to three.js)
I had somewhat similar problems with Windows 7/Firefox. PNG data URLs would be randomly truncated or something, much shorter than a successful PNG export. Trying to set that data URL as an image src resulted in an "Image corrupt" exception or something in FF. As little sense as it makes, setting a small window.setTimeout (10 ms) between rendering and getting the data URL helped in my case. Maybe Firefox needs a rest from the JS engine before it refreshes some canvas internal state or something... weird.
I switched to JPG format (smaller files => truncation less of an issue?) and still saw it not working, then I tried this tip, which I found here:
If you want to save data that is derived from a Javascript
canvas.toDataURL() function, you have to convert blanks into plusses.
If you do not do that, the decoded data is corrupted:
<?php
$encodedData = str_replace(' ','+',$encodedData);
$decodedData = base64_decode($encodedData);
?>
This worked. Thanks, Mekal.
This tip seems to apply to JPGs only. I saw PNGs decoding correctly without the + replacement, and getting corrupted with it. I can use JPGs, so my personal problem is solved. However, I never saw a PNG that wasn't black, even when decoded correctly and not truncated.
Kind of a lousy situation either way, I feel like. What is up with the +'s?
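My best guess about the +'s: base64 output legitimately contains '+' characters, and application/x-www-form-urlencoded decoding turns '+' into a space, so either the server puts them back (the str_replace above) or the client URL-encodes the payload before sending. A rough client-side sketch (the /upload endpoint and the img field name are just placeholders):
var dataURL = renderer.domElement.toDataURL("image/jpeg");
var base64 = dataURL.split(",")[1]; // strip the "data:image/jpeg;base64," prefix

var xhr = new XMLHttpRequest();
xhr.open("POST", "/upload");
xhr.setRequestHeader("Content-Type", "application/x-www-form-urlencoded");
xhr.send("img=" + encodeURIComponent(base64)); // '+' survives the round trip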
A black texture is a sign that you did not indicate the texture needs to be updated.
Also, you do not need to use canvas.toDataURL(). You can pass in the canvas reference to the THREE.Texture object.
var canvas = document.getElementById('myCanvas'); // note: no '#' with getElementById
var texture = new THREE.Texture(canvas);
texture.needsUpdate = true;
// Now render the scene
Obj-C or Monotouch.Net C# answers are fine.
I have a Base64 string that is a PDF document received over a web service. I can get the NSData.
How do I take the NSData and save it as a PDF?
-- I get the NSData this way --
byte[] encodedDataAsBytes = System.Convert.FromBase64String (myBase64String);
string decoded = System.Text.Encoding.Unicode.GetString (encodedDataAsBytes);
NSData data = NSData.FromString (decoded, NSStringEncoding.ASCIIStringEncoding);
The simplest way to save it is probably to use NSData's writeToFile:options:error: method.
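Something along these lines (a minimal Obj-C sketch; the file name is arbitrary, and data is assumed to already hold the decoded PDF bytes):
NSString *docsDir = [NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES) objectAtIndex:0];
NSString *path = [docsDir stringByAppendingPathComponent:@"document.pdf"];
NSError *error = nil;
// write atomically so a half-written file never replaces a good one
if (![data writeToFile:path options:NSDataWritingAtomic error:&error]) {
    NSLog(@"Failed to save PDF: %@", error);
}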
I found that using the .NET framework works better than trying to use the iOS framework for this problem. This will take any file, convert it back to its original bytes, and save it to the iPhone/iPad device. "path" is just a file path on the device.
using (var f = System.IO.File.Create (path))
{
byte[] encodedDataAsBytes = System.Convert.FromBase64String (Base64PDFString);
f.Write (encodedDataAsBytes, 0, encodedDataAsBytes.Length);
}
I'm working on a project where I recently had to accomplish the same thing you are describing. I get base64-encoded PDF files as strings from a .NET web service, which need to be decoded back to their original form and saved as PDF files in the application's documents directory.
My solution was:
Use ASIHTTPRequest to communicate with the web service.
I then use TBXML to parse the incoming XML and get the base64 content as an NSString.
To decode the string I use a method from QSUtilities library called decodeBase64WithString.
Finally I save the result with NSData's writeToFile.
I have tested and successfully used this method with PDF files up to 25 MB. I also had a couple of test runs with a 48 MB file, but that file made the decodeBase64WithString method take up too much memory and my app crashed. I haven't found a solution to this yet.
If you are working with multiple large files, be sure to free up your memory once in a while. I processed all my files in one loop, in which I had to use my own NSAutoreleasePool and drain it at the end of each iteration to free up any autoreleased objects.
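In pre-ARC terms, the loop looked roughly like this (just the pattern; the decode and write steps are the ones described above):
for (NSString *encodedString in encodedStrings) {
    NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
    // ...decode with decodeBase64WithString and write the resulting NSData to disk...
    [pool drain]; // releases any autoreleased objects created in this iteration
}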