Is there a maximum value for the volume depth of a render texture? - unity3d

I'm using a render texture as an array of textures by specifying the size of the array in the volume depth property. But sometimes, when I exceed some value (e.g. 45 for 128x128 textures), I get an error: D3D11: Failed to create RenderTexture (128 x 128 fmt 39 aa 1), error 0x80070057, which isn't very clear. So I suppose this property has a maximum value? But I did not find it in the Unity manual or anywhere on the internet.
Does anyone know this value, or could you tell me where I can find it?

The width, height, and depth must be equal to or less than D3D11_REQ_TEXTURE3D_U_V_OR_W_DIMENSION (2048).
Likely you are having issues with some other parameter; 0x80070057 is E_INVALIDARG, which only says that one of the creation arguments was invalid. Try enabling the Direct3D Debug Device for better information by running with -force-d3d11-debug. On Windows 10 or Windows 11, you have to install it by enabling the "Graphics Tools" optional Windows feature.
See Microsoft Docs.

Related

Decoding Keyence LJ-X8000 Bitmap-Height Image

I have a Keyence line laser system LJ-X8000 that I use to scan the surface of different objects.
The controller saves the height information as a bitmap, with each pixel representing one height value. After a lot of tinkering, I found out that Keyence is not using the actual colors, but rather uses the 24-bit RGB triplets as a form of binary storage. However, no combination of these bytes seemed to work for me. Are there any common storage methods for 24-bit integers?
To decode the values, I did a scan covering the whole measurement range of the scanner, including some out-of-range values at the beginning and the end. If you look at the distribution of the values in each color plane, you can see that the first and third planes only use values up to 8 and 16, i.e. only 3 and 4 bits. This is also visible in the image itself, as it mainly shows a green color.
I concluded that Keyence uses the full byte of the green color plane, 3 bits of the first plane and 4 bits of the last plane to store the height information, i.e. some unusual 15-bit integer format.
With a little bit-shifting, and knowing that the scanner has a valid range of [-2.2, 2.2], I was able to build the following simple little (Matlab) script to calculate the height for each pixel:
% pack green (8 bits), red (3 bits) and blue (4 bits) into a 15-bit value
scanBinVal = bitshift(scanIm(:,:,2), 7, 'uint16') ...
    + bitshift(scanIm(:,:,1), 4, 'uint16') ...
    + bitshift(scanIm(:,:,3), 0, 'uint16');
% map the 15-bit range linearly onto the scanner's [-2.2, 2.2] range
scanBinValScaled = interp1([0, 2^15], [-2.2, 2.2], double(scanBinVal));
Keyence offers software to convert those .bmp files into .csv files, but without an API to automate the process. As I will have to deal with a lot of these files, I needed to automate it myself.
The values calculated from the RGB triplets are actually more precise than the exported CSV, as the CSV only shows 4 digits after the decimal point.
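If you end up porting the decoding to another language, the unpacking is just a few bit operations. Here is a minimal sketch in Dart (the language used elsewhere on this page); the bit layout and the [-2.2, 2.2] range are taken from the description above, while the function name and the assumption of 8-bit r/g/b inputs are mine:
/// Unpacks one height value from an 8-bit (r, g, b) triplet, assuming the
/// 15-bit layout described above: green in bits 7..14, red in bits 4..6,
/// blue in bits 0..3.
double keyenceHeight(int r, int g, int b) {
  final int raw = (g << 7) | ((r & 0x07) << 4) | (b & 0x0F);
  // Linear map from [0, 2^15] to [-2.2, 2.2], like the interp1 call above.
  return -2.2 + raw / 32768.0 * 4.4;
}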

Can Flutter render images from raw pixel data? [duplicate]

Setup
I am using a custom RenderBox to draw.
The canvas object in the code below comes from the PaintingContext in the paint method.
Drawing
I am trying to render pixels individually by using Canvas.drawRect.
I should point out that these are sometimes larger and sometimes smaller than the pixels on screen they actually occupy.
for (int i = 0; i < width * height; i++) {
  final int x = i % width, y = i ~/ width;
  // in this case the rect size is 1
  canvas.drawRect(Rect.fromLTWH(x.toDouble(), y.toDouble(), 1, 1),
      Paint()..color = colors[y][x]);
}
Storage
I am storing the pixels as a List<List<Color>> (colors in the code above). I tried differently nested lists previously, but they did not make any noticeable difference in performance.
The memory on my Android Emulator test device increases by 282.7MB when populating the list with a 999x999 image. Note that it only temporarily increases by 282.7MB. After about half a minute, the increase drops to 153.6MB and stays there (without any user interaction).
Rendering
With a resolution of 999x999, the code above causes a GPU max of 250.1 ms/frame and a UI max of 1835.9 ms/frame, which is obviously unacceptable. The UI freezes for two seconds when trying to draw a 999x999 image, which should be a piece of cake (I would guess) considering that 4k video runs smoothly on the same device.
CPU
I am not exactly sure how to track this properly using the Android profiler, but while populating or changing the list, i.e. drawing the pixels (which is the case for the above metrics as well), CPU usage goes from 0% up to 60%.
Cause
I have no idea where to start since I am not even sure what part of my code causes the freezing. Is it the memory usage? Or the drawing itself?
How would I go about this in general? What am I doing wrong? How should I store these pixels instead?
Efforts
I have tried many things that did not help at all, so I will only point out the most notable ones:
I tried converting the List<List<Color>> to an Image from the dart:ui library, hoping to use Canvas.drawImage. To do that, I tried encoding my own PNG, but I have not been able to render more than a single row. It also did not look like that would boost performance. When trying to convert a 9999x9999 image, I ran into an out-of-memory exception. Now I am wondering how video is rendered at all, as any 4k video will easily take up more memory than a 9999x9999 image if a few seconds of it are held in memory.
I tried implementing the image package. However, I stopped before completing it as I noticed that it is not meant to be used in Flutter but rather in HTML. I would not have gained anything using that.
This one is pretty important for the conclusion I will draw: I tried to just draw without storing the pixels, i.e. using Random.nextInt to generate random colors. When trying to randomly generate a 999x999 image, this resulted in a GPU max of 1824.7 ms/frame and a UI max of 2362.7 ms/frame, which is even worse, especially in the GPU department.
Conclusion
This is the conclusion I reached before trying my failed attempt at rendering using Canvas.drawImage: Canvas.drawRect is not made for this task as it cannot even draw simple images.
How do you do this in Flutter?
Notes
This is basically what I tried to ask over two months ago (yes, I have been trying to resolve this issue for that long), but I think that I did not express myself properly back then and that I knew even less what the actual problem was.
The highest resolution I can properly render is around 10k pixels in total. I need at least 1M.
I am thinking that abandoning Flutter and going native might be my only option. However, I would like to believe that I am just approaching this problem completely wrong. I have spent about three months trying to figure this out and have not found anything that led me anywhere.
Solution
dart:ui has a function that converts raw pixels into an Image directly: decodeImageFromPixels.
Caveats (at the time of writing): there was an open issue about its performance, and it did not work on the current master channel.
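A minimal sketch of how it can be called; imageFromPixels is my own helper name, and this assumes the pixels are already in a flat, row-major RGBA8888 Uint8List:
import 'dart:async';
import 'dart:typed_data';
import 'dart:ui';

// Converts a flat RGBA8888 buffer (4 bytes per pixel, row-major) into an
// Image. decodeImageFromPixels reports its result through a callback, so a
// Completer is used to expose it as a Future.
Future<Image> imageFromPixels(Uint8List rgbaPixels, int width, int height) {
  final Completer<Image> completer = Completer<Image>();
  decodeImageFromPixels(
    rgbaPixels,
    width,
    height,
    PixelFormat.rgba8888,
    completer.complete,
  );
  return completer.future;
}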
I was simply not aware of this back when I created this answer, which is why I wrote the "Alternative" section.
Alternative
Thanks to @pslink for reminding me of BMP after I wrote that I had failed to encode my own PNG.
I had looked into it previously, but I thought that it looked too complicated without sufficient documentation. Then I found a nice article explaining the necessary BMP headers and implemented 32-bit BGRA (ARGB, but BGRA is the order of the default mask) by copying Example 2 from the "BMP file format" Wikipedia article. I went through all of its sources but could not find an original source for this example; maybe the authors of the Wikipedia article wrote it themselves.
Results
Using Canvas.drawImage and my 999x999 pixels converted to an image from a BMP byte list, I get a GPU max of 9.9 ms/frame and a UI max of 7.1 ms/frame, which is awesome!
| ms/frame | Before (Canvas.drawRect) | After (Canvas.drawImage) |
|-----------|---------------------------|--------------------------|
| GPU max | 1824.7 | 9.9 |
| UI max | 2362.7 | 7.1 |
Conclusion
Canvas operations like Canvas.drawRect are not meant to be used like that.
Instructions
First off, this is quite straightforward. However, you need to correctly populate the byte list; otherwise, you will get an error that your data is not correctly formatted and see no results, which can be quite frustrating.
You will need to prepare your image before drawing, as you cannot use async operations in the paint call.
In code, you use a Codec to transform your list of bytes into an image:
final list = [
  0x42, 0x4d, // 'B', 'M'
  ...];
// make sure that you either know the file size, data size and data offset beforehand,
// or that you edit these bytes afterwards
final Uint8List bytes = Uint8List.fromList(list);
final Codec codec = await instantiateImageCodec(bytes);
final Image image = (await codec.getNextFrame()).image;
You need to pass this image to your drawing widget, e.g. using a FutureBuilder.
Now, you can just use Canvas.drawImage in your draw call.
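For illustration, a minimal painter doing that could look like the sketch below. Note that my actual setup uses a custom RenderBox; a CustomPainter is shown here only because it is the shortest self-contained way to demonstrate the call, and the class and field names are mine:
import 'dart:ui' as ui;
import 'package:flutter/widgets.dart';

// Draws a pre-decoded ui.Image in a single drawImage call instead of one
// drawRect per pixel.
class PixelImagePainter extends CustomPainter {
  PixelImagePainter(this.image);

  final ui.Image image;

  @override
  void paint(Canvas canvas, Size size) {
    canvas.drawImage(image, Offset.zero, Paint());
  }

  @override
  bool shouldRepaint(PixelImagePainter oldDelegate) =>
      oldDelegate.image != image;
}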

Caffe bvlc_googlenet minimum accepted dimensions

What is the minimum image input size accepted by the bvlc_googlenet model implemented in Caffe?
I'm using 50 x 50 images with crop_size = 36, and I get the following error when running the solver:
caffe::Blob<>::Reshape() - Floating point exception
I have to resize my images to 256 x 256 (the default input size of the bvlc_googlenet model) with crop_size = 224 to avoid the error.
Does this model only accept its default sizes, or do I have to hack around a bit to make it happen?
Thanks!!
After several hours of trying to fix the problem, I figured out why I was facing it.
GoogleNet accepts 224*224 images as input by default. Because the network is so deep, each set of convolution and pooling layers shrinks the spatial size further, so a 50*50 image (or 36*36 after cropping) ends up, after passing through a few layers, with an output smaller than the kernel size of the next layer. This causes a Reshape exception like the one I faced here.
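To make the shrinking concrete, the spatial output size of a convolution or pooling layer is floor((input + 2*pad - kernel) / stride) + 1. A small sketch of the arithmetic (in Dart, to keep one language for the examples on this page; the 7x7/2 and 3x3/2 parameters are the usual first two GoogleNet layers and should be treated as an assumption, and Caffe actually rounds pooling outputs up, so real sizes can be one pixel larger):
// Spatial output size of a conv/pool layer: floor((in + 2*pad - k) / s) + 1.
int outSize(int input, int kernel, int stride, int pad) =>
    (input + 2 * pad - kernel) ~/ stride + 1;

void main() {
  for (final input in [224, 36]) {
    final conv1 = outSize(input, 7, 2, 3); // conv1: 7x7, stride 2, pad 3
    final pool1 = outSize(conv1, 3, 2, 0); // pool1: 3x3, stride 2
    print('$input -> conv1: $conv1 -> pool1: $pool1');
  }
}
The 36-pixel input has already shrunk to single digits after two layers, and keeps shrinking until it is smaller than a later kernel.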
Solution:
Although it is not preferred to edit the kernel_size param of the layer causing the exception (to keep the network to its original specification), this will fix the problem: choose a smaller kernel size and test the results until it works.
Alternatively, follow the default GoogleNet specification by resizing your input images to 256*256 (keeping crop_size at 224), or resize them directly to 224*224 and remove the crop_size param.

BMP image header - biXPelsPerMeter

I have read a lot about the BMP file format structure, but I still cannot work out the real meaning of the fields "biXPelsPerMeter" and "biYPelsPerMeter". I mean in a practical way: how are they used, or how can they be utilized? Any example or experience? Thanks a lot.
biXPelsPermeter
Specifies the horizontal print resolution, in pixels per meter, of the target device for the bitmap.
biYPelsPermeter
Specifies the vertical print resolution.
It's not very important. You can leave them at 2835; it's not going to ruin the image.
(72 DPI × 39.3701 inches per meter yields 2834.6472)
Think of it this way: The image bits within the BMP structure define the shape of the image using that much data (that much information describes the image), but that information must then be translated to a target device using a measuring system to indicate its applied resolution in practical use.
For example, if the BMP is 10,000 pixels wide, and 4,000 pixels high, that explains how much raw detail exists within the image bits. However, that image information must then be applied to some target. It uses the relationship to the dpi and its target to derive the applied resolution.
If it were printed at 1000 dpi then it's only going to give you an image with 10" x 4" but one with extremely high detail to the naked eye (more pixels per square inch). By contrast, if it's printed at only 100 dpi, then you'll get an image that's 100" x 40" with low detail (fewer pixels per square inch), but both of them have the same overall number of bits within. You can actually scale an image without scaling any of its internal image data by merely changing the dpi to non-standard values.
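In code, the relationships above boil down to two one-line conversions (Dart, matching the other examples on this page; the helper names are mine):
// BMP stores resolution in pixels per meter; 1 inch = 0.0254 m.
int dpiToPelsPerMeter(double dpi) => (dpi / 0.0254).round(); // 72 dpi -> 2835

// Physical print width/height in inches for a pixel count at a given dpi.
double printInches(int pixels, double dpi) => pixels / dpi; // 10000 px at 1000 dpi -> 10 inches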
Also, using 72 dpi is a throwback to old printing conventions (https://en.wikipedia.org/wiki/Twip) which are not really relevant moving forward (except to maintain compatibility with standards), as modern hardware devices often use other values for their fundamental relationships to image data. For video screens, for example, Macs use 72 dpi as the default, Windows uses 96 dpi, and others are similar. In theory you can set it to whatever you want, but be warned that not all software honors the internal settings; some will instead assume a particular size. This can affect the way images are scaled within an app, even though the actual image data within hasn't changed.

iOS 256 Colors (VGA) to RGB

I'd like to convert a VGA color (256 colors; 8-bit) to an RGB color on iOS.
Is it possible to compute this, or do I have to use color tables (using CGColorSpaceCreateIndexed)?
UIColor does not support 256 colors.
Thanks :)
Somewhere, the title you're porting should have set the palette. On the VGA, the 256 colours are mapped through a table that the programmer has previously set to convert them into 18 bit RGB colour (at a uniform 6 bits per channel). If you're running the original title through emulation then watch for writes to ports 0x3c6, 0x3c8 and 0x3c9 or calls to the BIOS via int 10h, with ax = 0x1010 (to set a single colour) or 0x1012 (to set a range). If you have the original code, obviously look for the source of the palette table.
In drawing terms, you can keep the palette yourself, for example as a C-style array of 256 CGColorRefs, or use CGColorSpaceCreateIndexed as you suggest (ignore Apple's slight documentation error; the colour table can contain up to 256 entries, not up to 255) probably with a bitmap context to just pass your buffer off to CoreGraphics and forget about it.
I expect the remapping will be performed on the CPU, so if that gets a bit too costly then consider using GL ES 2.x and writing a suitable pixel shader — you'd upload your actual image as, say, a luminance (ie, single channel) texture, plus a 256x1 texture where the colour at each spot is a palette entry, then write a shader that reads from the first texture for the current texture coordinates and uses that value to index the second.
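If you keep the palette on the CPU, the remap itself is tiny. A sketch in Dart (to keep one language across the examples on this page; on iOS the same 256-entry table would feed CGColorSpaceCreateIndexed or the shader described above, and the names here are mine). The only VGA-specific detail is that each DAC channel is 6 bits, so 63 maps to 255:
import 'dart:typed_data';

// Expands a 6-bit VGA DAC value (0..63) to 8 bits (0..255).
int dacTo8Bit(int v) => (v * 255) ~/ 63;

// Remaps an indexed image to RGBA bytes using a 256-entry palette of
// 6-bit-per-channel DAC triplets, e.g. captured from ports 0x3c8/0x3c9.
Uint8List indexedToRgba(Uint8List indices, List<List<int>> dacPalette) {
  final out = Uint8List(indices.length * 4);
  for (var i = 0; i < indices.length; i++) {
    final entry = dacPalette[indices[i]];
    out[i * 4] = dacTo8Bit(entry[0]);     // R
    out[i * 4 + 1] = dacTo8Bit(entry[1]); // G
    out[i * 4 + 2] = dacTo8Bit(entry[2]); // B
    out[i * 4 + 3] = 255;                 // A (opaque)
  }
  return out;
}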