Setting Pixels in Cocos2d-js - how to? - texture2d

My goal is to write directly to pixels in a texture/sprite, to be able to update the game's details.
I was successful in doing this in Cocos2d-x with C++ (sample code below). The question is, "How can I do the equivalent in Javascript?"
// C++ -- WORKS:
board_texture = *sprite->getTexture();

// initialize pixels (array of GLubyte, 4 bytes per pixel: RGBA)
for (int i = 0; i < num_pix; i++) {
    pixels[i] = (GLubyte) 0;
}

// editing the color at a specific position,
// where i = (y * board_w + x) * 4 for pixel (x, y):
pixels[i]   = (GLubyte) 255; // r
pixels[i+1] = (GLubyte) 0;   // g
pixels[i+2] = (GLubyte) 0;   // b
pixels[i+3] = (GLubyte) 255; // a

board_texture.updateWithData(pixels, 0, 0, board_w, board_h);
sprite->setTexture(&board_texture);
This runs extremely fast on an iPad updating a 400x400 texture easily at 60FPS.
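To make the index arithmetic explicit, here is a minimal language-neutral sketch (Python; make_buffer and set_pixel are hypothetical helpers for illustration, not Cocos2d API) of the same flat RGBA buffer math:

```python
# Sketch of the flat RGBA buffer math used in the C++ snippet above.
# The buffer holds 4 bytes (r, g, b, a) per pixel, row-major.

def make_buffer(width, height):
    """Allocate a width*height RGBA buffer, initialized to transparent black."""
    return bytearray(width * height * 4)

def set_pixel(buf, width, x, y, r, g, b, a=255):
    """Write one RGBA pixel at (x, y); i is the byte offset of its red channel."""
    i = (y * width + x) * 4
    buf[i]     = r
    buf[i + 1] = g
    buf[i + 2] = b
    buf[i + 3] = a

board_w, board_h = 400, 400
pixels = make_buffer(board_w, board_h)
set_pixel(pixels, board_w, 10, 5, 255, 0, 0)  # opaque red at (10, 5)
```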
I've looked at Cocos2d-JS, however, and can find no equivalent working function for "updateWithData". I was able to get the Texture2D into a local variable, but I have found no way to update its pixels (yet).
// JS -- does NOT work:
var t = this.map_sprite.getTexture(); // OK
// ...but there is no updateWithData equivalent
t.setTexture(pixels); // fails with "Invalid Native Format" -- so what is the proper format?
Why JS? Cocos2d's developers have said that C++ will not be supported in the future as strongly as JavaScript (Cocos2d-JS), as evidenced by the lack of C++ support in the Cocos IDE. So, since I'm just starting out, I figured I should do it in JS, not C++.


Can Flutter render images from raw pixel data?

Setup
I am using a custom RenderBox to draw.
The canvas object in the code below comes from the PaintingContext in the paint method.
Drawing
I am trying to render pixels individually by using Canvas.drawRect.
I should point out that these are sometimes larger and sometimes smaller than the pixels on screen they actually occupy.
for (int i = 0; i < width * height; i++) {
  // each rect is 1x1; pixel (x, y) = (i % width, i ~/ width)
  canvas.drawRect(
      Rect.fromLTWH((i % width).toDouble(), (i ~/ width).toDouble(), 1, 1),
      Paint()..color = colors[i]);
}
Storage
I am storing the pixels as a List<List<Color>> (colors in the code above). I tried differently nested lists previously, but they did not cause any noticeable differences in performance.
The memory on my Android Emulator test device increases by 282.7MB when populating the list with a 999x999 image. Note that it only temporarily increases by 282.7MB. After about half a minute, the increase drops to 153.6MB and stays there (without any user interaction).
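As a rough sanity check on those numbers (a sketch, not a Flutter measurement): raw RGBA data for a 999x999 image would be only about 4 MB, so nearly all of the observed increase has to be per-object overhead from the nested lists of boxed Color objects:

```python
# Rough arithmetic comparing raw pixel storage with the observed memory growth.
width = height = 999
raw_bytes = width * height * 4               # 4 bytes per RGBA pixel
raw_mb = raw_bytes / (1024 * 1024)

observed_mb = 282.7                          # reported increase on the emulator
overhead_per_pixel = observed_mb * 1024 * 1024 / (width * height)

print(f"raw pixel data: {raw_mb:.1f} MB")    # about 3.8 MB
print(f"observed: {observed_mb} MB -> ~{overhead_per_pixel:.0f} bytes/pixel")
```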
Rendering
With a resolution of 999x999, the code above causes a GPU max of 250.1 ms/frame and a UI max of 1835.9 ms/frame, which is obviously unacceptable. The UI freezes for two seconds when trying to draw a 999x999 image, which should be a piece of cake (I would guess) considering that 4k video runs smoothly on the same device.
CPU
I am not exactly sure how to track this properly using the Android profiler, but while populating or changing the list, i.e. drawing the pixels (which is the case for the above metrics as well), CPU usage goes from 0% up to 60%.
Cause
I have no idea where to start since I am not even sure what part of my code causes the freezing. Is it the memory usage? Or the drawing itself?
How would I go about this in general? What am I doing wrong? How should I store these pixels instead?
Efforts
I have tried so much that did not help at all that I will try to only point out the most notable ones:
I tried converting the List<List<Color>> to an Image from the dart:ui library, hoping to use Canvas.drawImage. In order to do that, I tried encoding my own PNG, but I have not been able to render more than a single row. However, it did not look like that would boost performance anyway. When trying to convert a 9999x9999 image, I ran into an out-of-memory exception. Now, I am wondering how video is rendered at all, as a few seconds of 4k video will easily take up more memory than a 9999x9999 image.
I tried implementing the image package. However, I stopped before completing it as I noticed that it is not meant to be used in Flutter but rather in HTML. I would not have gained anything using that.
This one is pretty important for the following conclusion I will draw: I tried to just draw without storing the pixels, i.e. using Random.nextInt to generate random colors. When trying to randomly generate a 999x999 image, this resulted in a GPU max of 1824.7 ms/frame and a UI max of 2362.7 ms/frame, which is even worse, especially in the GPU department.
Conclusion
This is the conclusion I reached before trying my failed attempt at rendering using Canvas.drawImage: Canvas.drawRect is not made for this task as it cannot even draw simple images.
How do you do this in Flutter?
Notes
This is basically what I tried to ask over two months ago (yes, I have been trying to resolve this issue for that long), but I think that I did not express myself properly back then and that I knew even less what the actual problem was.
The highest resolution I can properly render is around 10k pixels; I need at least 1M.
I am thinking that abandoning Flutter and going native might be my only option. However, I would like to believe that I am just approaching this problem completely wrong. I have spent about three months trying to figure this out and I did not find anything that led me anywhere.
Solution
dart:ui has a function that converts pixels to an Image easily: decodeImageFromPixels
- Example implementation
- Issue on performance
- Does not work in the current master channel
I was simply not aware of this back when I created this answer, which is why I wrote the "Alternative" section.
Alternative
Thanks to @pslink for reminding me of BMP after I wrote that I had failed to encode my own PNG.
I had looked into it previously, but I thought it looked too complicated without sufficient documentation. Now, I found a nice article explaining the necessary BMP headers and implemented 32-bit BGRA (ARGB, but BGRA is the order of the default mask) by copying Example 2 from the "BMP file format" Wikipedia article. I went through all its sources but could not find an original source for this example. Maybe the authors of the Wikipedia article wrote it themselves.
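To illustrate those headers, here is a minimal sketch in Python (not the Dart used elsewhere in this answer). One assumption to note: it uses the plain 40-byte BITMAPINFOHEADER rather than the larger BITMAPV4HEADER from Wikipedia's Example 2, so some decoders may ignore the alpha channel:

```python
import struct

def bmp_from_pixels(width, height, pixels):
    """Build a minimal 32-bit BMP (BGRA on disk, rows bottom-up) from a
    row-major list of (r, g, b, a) tuples. Uses the plain 40-byte
    BITMAPINFOHEADER, which is an assumption -- decoders that want explicit
    channel masks need the bigger BITMAPV4HEADER instead."""
    data_offset = 14 + 40                    # file header + info header
    data_size = width * height * 4           # no row padding at 32 bpp
    file_size = data_offset + data_size

    file_header = struct.pack('<2sIHHI', b'BM', file_size, 0, 0, data_offset)
    info_header = struct.pack('<IiiHHIIiiII',
                              40, width, height,  # header size, width, height
                              1, 32,              # planes, bits per pixel
                              0, data_size,       # BI_RGB (uncompressed), data size
                              2835, 2835,         # ~72 DPI in pixels/metre
                              0, 0)               # palette colours (none used)

    body = bytearray()
    for y in reversed(range(height)):        # BMP rows are stored bottom-up
        for x in range(width):
            r, g, b, a = pixels[y * width + x]
            body += bytes((b, g, r, a))      # channel order on disk is BGRA
    return file_header + info_header + bytes(body)

# a 2x2 test image: red, green / blue, white
img = bmp_from_pixels(2, 2, [(255, 0, 0, 255), (0, 255, 0, 255),
                             (0, 0, 255, 255), (255, 255, 255, 255)])
```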
Results
Using Canvas.drawImage and my 999x999 pixels converted to an image from a BMP byte list, I get a GPU max of 9.9 ms/frame and a UI max of 7.1 ms/frame, which is awesome!
| ms/frame | Before (Canvas.drawRect) | After (Canvas.drawImage) |
|-----------|---------------------------|--------------------------|
| GPU max | 1824.7 | 9.9 |
| UI max | 2362.7 | 7.1 |
Conclusion
Canvas operations like Canvas.drawRect are not meant to be used like that.
Instructions
First off, this is quite straightforward; however, you need to correctly populate the byte list. Otherwise, you will get an error that your data is not correctly formatted and see no results, which can be quite frustrating.
You will need to prepare your image before drawing as you cannot use async operations in the paint call.
In code, you need to use a Codec to transform your list of bytes into an image.
final list = [
  0x42, 0x4d, // 'B', 'M'
  ...
];
// make sure that you either know the file size, data size and data offset
// beforehand, or that you edit these bytes afterwards
final Uint8List bytes = Uint8List.fromList(list);
final Codec codec = await instantiateImageCodec(bytes);
final Image image = (await codec.getNextFrame()).image;
You need to pass this image to your drawing widget, e.g. using a FutureBuilder.
Now, you can just use Canvas.drawImage in your draw call.

Can Photoshop guide lines' coordinates be programmatically retrieved from a psd file?

I am curious if guide line coordinates (x coordinates actually) can be retrieved from a psd file.
Basically, let's say, I want to read the file from any programming language environment or open it using something that will provide me with the guide coordinates.
For clarity, I am talking about the guide lines you can drag out from the rulers in the Photoshop document window.
Don't know about Java, but you can do that easily with JavaScript. This script will alert the coordinates of all the guides in the active document, in default Photoshop units:
var myGuides = [];
for (var i = 0; i < activeDocument.guides.length; i++) {
    myGuides.push(activeDocument.guides[i].coordinate);
}
alert(myGuides);
So you can use values from myGuides for whatever reasons you need.

VL_SLIC MATLAB Output for VL_FEAT

I am using the VL_SLIC function in MATLAB and I am following the tutorial for the function here: http://www.vlfeat.org/overview/slic.html
This is the code I have written so far:
im = imread('slic_image.jpg');
regionSize = 10 ;
regularizer = 10;
vl_setup;
segments = vl_slic(single(im), regionSize, regularizer);
imshow(segments);
I just get a black image and I am not able to see the segmented image with the superpixels. Is there a way that I can view the result as shown in the webpage?
The reason is that segments is actually a map that tells you which regions of your image are superpixels: if a pixel in this map has the ID k, that pixel belongs to superpixel k. Also, the map is of type uint32, so when you try doing imshow(segments); it really doesn't show anything meaningful. For the image seen on the website, there are 1023 segments given your selected parameters, so the map spans from 0 to 1023. If you want to see what the segments look like, you could do imshow(segments,[]);. The region with ID 1023 will get mapped to white, while the pixels that don't belong to any superpixel region (ID 0) get mapped to black.
Not very meaningful! Now, to get what you see on the webpage, you're going to have to do a bit more work. From what I know, VLFeat doesn't have built-in functionality that shows you the results like what is seen on their webpage. As such, you will have to write code to do it yourself. You can do this by following these steps:
1. Create a logical map, initialized to true, that is the same size as the image.
2. For each superpixel region k:
   - Create another map that is true for any pixel belonging to region k, and false otherwise.
   - Find the perimeter of this region.
   - Set these perimeter pixels to false in the map created in Step #1.
3. Repeat Step #2 until we have finished going through all of the regions.
4. Use this map to zero out the perimeter pixels in the original image to get what you see on the website.
Let's go through that code now. Below is the setup that you have established:
vl_setup;
im = imread('slic_image.jpg');
regionSize = 10 ;
regularizer = 10 ;
segments = vl_slic(single(im), regionSize, regularizer);
Now let's go through that algorithm that I just mentioned:
% Step #1: map that is true everywhere, same size as the image
perim = true(size(im,1), size(im,2));

% Step #2: knock each region's perimeter out of the map
for k = 1 : max(segments(:))
    regionK = segments == k;        % pixels belonging to superpixel k
    perimK = bwperim(regionK, 8);   % perimeter of this region
    perim(perimK) = false;
end

% Replicate the mask over the three colour channels and apply it
perim = uint8(cat(3,perim,perim,perim));
finalImage = im .* perim;
imshow(finalImage);
We thus get an image with each superpixel region outlined in black.
Bear in mind that this is not exactly the same as what you get on the website. I simply went to the website and saved that image, then proceeded with the code I just showed you. This is probably because the slic_image.jpg image is not the exact original that was given in their example: there seem to be superpixels in areas with some bad quantization artifacts. Also, I'm using a relatively old version of VLFeat, version 0.9.16; there may have been improvements to the algorithm since then. In any case, this is something you can start with.
Hope this helps!
I found these lines in vl_demo_slic.m, which may be useful:
segments = vl_slic(im, regionSize, regularizer, 'verbose') ;
% overlay segmentation
[sx,sy] = vl_grad(double(segments), 'type', 'forward') ;
s = find(sx | sy) ;
imp = im ;
imp([s s+numel(im(:,:,1)) s+2*numel(im(:,:,1))]) = 0 ;
It generates edges from the gradient of the superpixel map (segments).
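The same trick in a language-neutral sketch (Python, pure lists): a pixel lies on a superpixel edge wherever its label differs from the right or lower neighbour, which is exactly what the forward differences from vl_grad detect:

```python
def superpixel_edges(labels):
    """Given a 2D list of superpixel IDs, return a same-sized boolean map that
    is True where the label differs from the right or lower neighbour --
    the forward-difference idea used by vl_grad in the demo."""
    h, w = len(labels), len(labels[0])
    edge = [[False] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            right_diff = x + 1 < w and labels[y][x] != labels[y][x + 1]
            down_diff = y + 1 < h and labels[y][x] != labels[y + 1][x]
            if right_diff or down_diff:
                edge[y][x] = True
    return edge

# two superpixels side by side in a 2x4 label map;
# only the column touching the boundary is marked
labels = [[0, 0, 1, 1],
          [0, 0, 1, 1]]
```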
While I want to take nothing away from rayryeng's beautiful answer, this could also help.
http://www.vlfeat.org/matlab/demo/vl_demo_slic.html
Available in: toolbox/demo

Optimisation of zxing.net QR decode

I am having great success with zxing on .NET and am trying to get the best speed for decoding QR barcodes (I have a lot to do -- 1.8M of them). The code I am using (well, bits of it):
// Create Barcode decoder
BarcodeReader q = new BarcodeReader();
q.PossibleFormats = new List<BarcodeFormat>();
q.PossibleFormats.Add(BarcodeFormat.QR_CODE);
q.AutoRotate = true; // Not necessary for QR?
q.TryHarder = false;
// Decode result
Result[] r = q.DecodeMultiple(imageFile);
My code is a little smarter in that it is in a loop and tries harder if it doesn't find it the first time.
Is there a way to add a zone, ROI or smaller area to speed up the detection?
Any other recommendations to improve performance?
The fastest way with ZXing.Net for QR codes is the following:
// Create Barcode decoder
BarcodeReader q = new BarcodeReader();
q.PossibleFormats = new List<BarcodeFormat>();
q.PossibleFormats.Add(BarcodeFormat.QR_CODE);
q.AutoRotate = false;
q.TryHarder = false;
// Decode result
Result r = q.Decode(imageFile);
But it decodes only the first QR code which is found.
Avoid DecodeMultiple if you don't need it.
All other options should only be used if really necessary.
AutoRotate isn't necessary for QR code decoding.
If your images are really big, shrink them before decoding.
For most cases there is no need for images with a bigger resolution than 1000 pixels.
The only exceptions are really tiny QR codes.
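The shrink-before-decode step can be sketched like this (Python on a bare grayscale buffer; ZXing.Net itself consumes a Bitmap, so this only illustrates the idea):

```python
def shrink_nearest(pixels, w, h, max_dim=1000):
    """Nearest-neighbour downscale of a row-major grayscale buffer so that the
    longer side is at most max_dim, as a pre-decoding step. Images that are
    already small enough pass through unchanged."""
    if max(w, h) <= max_dim:
        return pixels, w, h
    scale = max(w, h) / max_dim
    new_w, new_h = int(w / scale), int(h / scale)
    out = [pixels[int(y * scale) * w + int(x * scale)]
           for y in range(new_h) for x in range(new_w)]
    return out, new_w, new_h

# a 2000x1000 image shrinks to 1000x500
buf, nw, nh = shrink_nearest([0] * (2000 * 1000), 2000, 1000)
```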
Another good optimization is to use an image source that gives you grayscale images. A lot of CPU cycles are spent calculating luminance values from RGB images; the fastest option is 8-bit grayscale images.
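To make that cost concrete, the per-pixel luminance conversion can be sketched as follows (Python; the BT.601 weights are an assumption here, as ZXing.Net's exact coefficients may differ):

```python
def rgb_to_gray(rgb_pixels):
    """Convert a list of (r, g, b) tuples to 8-bit luminance values using the
    common BT.601 weights. Done for every pixel of every frame, this is the
    work a grayscale image source avoids entirely."""
    return [min(255, round(0.299 * r + 0.587 * g + 0.114 * b))
            for r, g, b in rgb_pixels]

gray = rgb_to_gray([(255, 255, 255), (0, 0, 0), (255, 0, 0)])
# white -> 255, black -> 0, pure red -> 76
```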

MATLAB: Making a video using writeVideo

I am currently trying to make a video using the writeVideo function in MATLAB. I have made a GUI using GUIDE which includes a slider, a few checkboxes, and a single axes (tagged as axes1). When I move the slider, the axes will plot certain shapes that change according to the slider value.
What I am trying to do is record a video of the GUI being used, to show its functionality in a presentation. However, when I play back the video (after making it using writeVideo), it shows the slider value moving and the checkboxes being checked correctly, but the plot never changes (i.e. it only shows the original shape). This seems to be some refresh error; however, nothing I have tried has worked (refresh, drawnow, etc.).
Any idea why this is happening? The following is the code I am trying to implement:
vidObj = VideoWriter('test.avi','Motion JPEG AVI');
open(vidObj);
flag = 0;
if flag < 12 % movie will be 12 frames long
    flag = flag + 1;
    if slider < 1
        % plot something...
    elseif slider >= 1 && slider < 2
        % plot something else...
    % etc...
    elseif slider <= 5
        % plot something else...
    end
    hFigure = findobj('Name','gui');
    currFrame = getframe(hFigure);
    writeVideo(vidObj,currFrame);
    clear hFigure currFrame image;
else
    fprintf('done\n')
    close(vidObj);
end
As stated, I can then use implay to play back the test.avi file, however, the plot never updates.
Thanks in advance
Note: I am using MATLAB R2012b
EDIT:
The following is how I ended up creating my video: maybe this will help someone who was facing similar issues to the one stated above.
I basically gave up on using getframe and decided to 1) take screenshots, then 2) turn the screenshots into a movie. To get the screenshots, I first ran my program and then, in the command window, invoked the following code using the Java AWT toolkit:
i = 1;
while true
    robo = java.awt.Robot;
    t = java.awt.Toolkit.getDefaultToolkit();
    % Set screen size
    rectangle = java.awt.Rectangle(0,0,1000,640);
    % Get the capture
    image = robo.createScreenCapture(rectangle);
    % Save it to file
    filehandle = java.io.File(sprintf('capture%d.jpg', i));
    javax.imageio.ImageIO.write(image,'jpg',filehandle);
    pause(.4) % wait 0.4 seconds between captures
    i = i + 1;
end
This ran continually in the background, taking snapshots of the screen and storing them in the current directory. To stop it, just use Ctrl+C. Once I had the screenshots, I used the following code to create the movie:
vidObj = VideoWriter('test.avi','Motion JPEG AVI');
open(vidObj);
for i = 7:87 % these are the frames I wanted in my movie
    x = num2str(i);
    im = horzcat('capture', x);
    im1 = horzcat(im, '.jpg');
    imdata = imread(im1);
    writeVideo(vidObj, imdata);
end
close(vidObj);
getframe is sometimes problematic. I'm not sure I can give an answer, and I can't simply comment because of my reputation, but this link might be of help. After you get the figure from the GUI, turn it into an image and then into a frame. Worth a shot.
If you change your monitor settings to 16-bit color, it will solve the problem you are having; this has been documented at the link below. My previous answer was deleted because I only supplied the link and told you how to solve the problem (sorry), but if you actually follow the link, or simply switch to 16-bit color, everything will work. From the link you can see that people have had this problem since 2009, but the thread was updated in April 2013, so it is still a problem, and changing your monitor settings to 16-bit color is still a solution.
Hope this helps!
http://www.mathworks.com/matlabcentral/newsreader/view_thread/257389