How to convert DIB to Bitmap in .NET Core 3.1?

How do I properly retrieve a Device Independent Bitmap (DIB) pointer as a Bitmap in C# ASP.NET Core 3.1?
Issue: when I attempt to save a Bitmap created from a DIB pointer, I receive: System.AccessViolationException: 'Attempted to read or write protected memory. This is often an indication that other memory is corrupt.'
I'm using the Bitmap(Int32, Int32, Int32, PixelFormat, IntPtr) constructor from System.Drawing.Common, on the assumption that the final parameter (scan0) can take the DIB pointer. The code:
public class Example
{
    [DllImport("kernel32.dll", SetLastError = true)]
    public static extern IntPtr GlobalLock(IntPtr hMem);

    // https://stackoverflow.com/questions/2185944/why-must-stride-in-the-system-drawing-bitmap-constructor-be-a-multiple-of-4
    public void GetStride(int width, PixelFormat format, ref int stride, ref int bytesPerPixel)
    {
        int bitsPerPixel = System.Drawing.Image.GetPixelFormatSize(format);
        bytesPerPixel = (bitsPerPixel + 7) / 8;
        stride = 4 * ((width * bytesPerPixel + 3) / 4);
    }

    public Bitmap ConvertDibToBitmap(IntPtr dib)
    {
        IntPtr dibPtr = GlobalLock(dib);
        BITMAPINFOHEADER bmi = (BITMAPINFOHEADER)Marshal.PtrToStructure(dibPtr, typeof(BITMAPINFOHEADER));
        /* Values observed in the debugger:
           biBitCount: 24
           biClrImportant: 0
           biClrUsed: 0
           biCompression: 0
           biHeight: 2219
           biPlanes: 1
           biSize: 40
           biSizeImage: 12058624
           biWidth: 1704
           biXPelsPerMeter: 7874
           biYPelsPerMeter: 7874
        */
        int stride = 0, bytesPerPixel = 0;
        GetStride(bmi.biWidth, PixelFormat.Format32bppArgb, ref stride, ref bytesPerPixel);

        // We're able to initialize the object here:
        Bitmap img = new Bitmap(bmi.biWidth, bmi.biHeight, stride, PixelFormat.Format32bppArgb, dibPtr);

        /* This line throws the exception "System.AccessViolationException:
           'Attempted to read or write protected memory. This is often an
           indication that other memory is corrupt.'" */
        img.Save("OutputTest.bmp");
        return img;
    }
}
I'm assuming the issue is in the calculation of the stride and/or the pixel format. I'm unsure how to determine the PixelFormat from the BITMAPINFOHEADER.

Related

How to read a byte in Dart?

I have the below code in Java. How can I implement this in Dart? Can someone help with this?
// byte array
private byte[] data;
private int offset = 0;

// read a byte
public byte readByte() {
    byte x = data[offset];
    offset++;
    return x;
}
You can use the ByteData class as follows. This creates a byte buffer of 8 bytes, from which you can read values by offset, as in the next line:
var bdata = new ByteData(8);
int huh = bdata.getInt32(0);

How to create a screenshot of a transparent Unity window + OS

So, for an app we are trying to create, we would like to make the Unity window of a standalone app transparent (everything besides a couple of buttons) and have the user take a screenshot of their view/OS plus the Unity layer together.
For example: the user opens our application and clicks a button, and the whole Unity window except for a couple of buttons turns transparent. The user can then use their OS as normal, while the buttons mentioned earlier stay on top. The user can then click one of the buttons to create a screenshot of their OS, which will be saved to their system. This way we can, for example, show anything from within Unity (3D models, images) on top of the user's OS, via a screenshot.
Currently, we can make the window transparent with a setup similar to this one: https://alastaira.wordpress.com/2015/06/15/creating-windowless-unity-applications/
That works fine, and so does clicking through to the OS, etc. However, we would now like to create a screenshot and save it somewhere. We tried multiple things for this, and a couple of years back, before we put this project aside, we got it working through a custom DLL using System.Drawing that we called from inside Unity. See an example of this DLL's code below.
using System;
using System.Drawing;

namespace ScreenShotDll
{
    public class ScreenShotClass
    {
        // both full-screen screenshot and cropped-rectangle screenshot
        public static void TakeScreenShotRect(int srcX, int srcY, int dstX, int dstY)
        {
            int width = Math.Abs(srcX - dstX);
            int height = Math.Abs(srcY - dstY);
            Bitmap memoryImage = new Bitmap(width, height);
            Size s = new Size(memoryImage.Width, memoryImage.Height);
            Graphics memoryGraphics = Graphics.FromImage(memoryImage);
            memoryGraphics.CopyFromScreen(srcX, srcY, 0, 0, s);
            string str = "";
            try
            {
                str = AppDomain.CurrentDomain.BaseDirectory + @"Screenshot.png";
            }
            catch (Exception er)
            {
                Console.WriteLine("Sorry, there was an error: " + er.Message);
                Console.WriteLine();
            }
            memoryImage.Save(str);
        }
    }
}
However, this no longer seems to work. We are on the IL2CPP backend in Unity and get the error: NotSupportedException: System.Drawing.Bitmap
We also tried using user32.dll from within Unity with its GetPixel, ReleaseDC, and GetActiveWindow functions, as posted on a couple of forums, but all we get there is a white image.
Any way to adjust our custom DLL, or any other way to do this, would be highly appreciated. Please let me know if you need more information.
After a couple of days, I managed to resolve this from within Unity.
I used the following code to take a screenshot of the window, save it to a bitmap, and then write it to disk as a .png.
[DllImport("user32.dll", SetLastError = true)] static extern int GetSystemMetrics(int smIndex);
[DllImport("user32.dll", SetLastError = false)] static extern IntPtr GetDesktopWindow();
[DllImport("user32.dll")] static extern IntPtr GetDC(IntPtr hWnd);
[DllImport("user32.dll")] static extern int ReleaseDC(IntPtr hWnd, IntPtr hDC);
[DllImport("gdi32.dll", EntryPoint = "CreateCompatibleDC", SetLastError = true)] static extern IntPtr CreateCompatibleDC([In] IntPtr hdc);
[DllImport("gdi32.dll", EntryPoint = "DeleteDC")] public static extern bool DeleteDC([In] IntPtr hdc);
[DllImport("gdi32.dll", EntryPoint = "DeleteObject")] public static extern bool DeleteObject([In] IntPtr hObject);
[DllImport("gdi32.dll", EntryPoint = "CreateCompatibleBitmap")] static extern IntPtr CreateCompatibleBitmap([In] IntPtr hdc, int nWidth, int nHeight);
[DllImport("gdi32.dll", EntryPoint = "SelectObject")] public static extern IntPtr SelectObject([In] IntPtr hdc, [In] IntPtr hgdiobj);
[DllImport("gdi32.dll", EntryPoint = "BitBlt", SetLastError = true)] static extern bool BitBlt([In] IntPtr hdc, int nXDest, int nYDest, int nWidth, int nHeight, [In] IntPtr hdcSrc, int nXSrc, int nYSrc, uint dwRop);

private void screenShot()
{
    Debug.Log("In screenShot!");
    int nScreenWidth = GetSystemMetrics(0);   // SM_CXSCREEN
    int nScreenHeight = GetSystemMetrics(1);  // SM_CYSCREEN
    IntPtr hDesktopWnd = GetDesktopWindow();
    IntPtr hDesktopDC = GetDC(hDesktopWnd);
    IntPtr hCaptureDC = CreateCompatibleDC(hDesktopDC);
    IntPtr hCaptureBitmap = CreateCompatibleBitmap(hDesktopDC, nScreenWidth, nScreenHeight);
    SelectObject(hCaptureDC, hCaptureBitmap);
    BitBlt(hCaptureDC, 0, 0, nScreenWidth, nScreenHeight, hDesktopDC, 0, 0,
           0x00CC0020 | 0x40000000); // SRCCOPY | CAPTUREBLT
    Bitmap bmp = Image.FromHbitmap(hCaptureBitmap);
    ImageConverter converter = new ImageConverter();
    byte[] bytes = (byte[])converter.ConvertTo(bmp, typeof(byte[]));
    string path = Application.dataPath;
    if (Application.platform == RuntimePlatform.OSXPlayer) {
        path += "/../../";
    }
    else if (Application.platform == RuntimePlatform.WindowsPlayer) {
        path += "/../";
    }
    File.WriteAllBytes(path + "Screenshot" + ".png", bytes);
    ReleaseDC(hDesktopWnd, hDesktopDC);
    DeleteDC(hCaptureDC);
    DeleteObject(hCaptureBitmap);
}
I was able to get screenshots working by creating a class library. I set up my project as a .NET 4.7.2 library, so this might not work for you (IL2CPP). I set my Unity project's scripting version to .NET 4.x; I'm not sure if that was necessary. I just copy my library's DLL file into the Assets folder.
These are the using statements that I needed; CapturePng takes in a memory stream from Unity.
using System.Drawing;
using System.Drawing.Imaging;
using System.IO;
public static void CapturePng(MemoryStream memoryStream, int width, int height, int resolutionX, int resolutionY)
{
    // Create a new bitmap.
    using (var bmpScreenshot = new Bitmap(width, height, PixelFormat.Format32bppArgb))
    {
        // Create a graphics object from the bitmap.
        using (var gfxScreenshot = Graphics.FromImage(bmpScreenshot))
        {
            // Take the screenshot from the upper-left corner to the bottom-right corner.
            gfxScreenshot.CopyFromScreen(
                new Point(0, 0),
                new Point(0, 0),
                new Size(width, height),
                CopyPixelOperation.SourceCopy);
            // Save the screenshot, scaled to the requested resolution, to the stream.
            new Bitmap(bmpScreenshot, new Size(resolutionX, resolutionY)).Save(memoryStream, ImageFormat.Png);
        }
    }
}
This is what my Unity code looks like.
Texture2D tex = new Texture2D(200, 300, TextureFormat.RGB24, false);
MemoryStream ms = new MemoryStream();
Screenshot.CapturePng(ms, Screen.currentResolution.width, Screen.currentResolution.height, 200, 300);
ms.Seek(0, SeekOrigin.Begin);
tex.LoadImage(ms.ToArray());
GetComponent<MeshRenderer>().material.mainTexture = tex;
I hope this helps you somehow; at least it doesn't require any user32.dll or gdi32.dll imports.

Modify WebCamTexture from a Unity native plugin

I just want to pass a WebCamTexture to a native plugin (using GetNativeTexturePtr() for performance), draw something on it, and render the result onto a plane on the screen.
I guess some Unity thread updates the WebCamTexture contents every frame, so writing to that texture could produce flickering, as two threads would be updating its contents. Because of that, I'm using another texture, drawTexture, to render the result.
The algorithm is easy, on every render event:
Read camTexture
Copy camTexture contents to drawTexture
Draw things on drawTexture
Render drawTexture (OpenGL bindTexture)
I'm following the NativeRenderingPlugin example, but every time I try to copy contents from camTexture to drawTexture (even just one pixel), the main thread freezes.
I think this may be happening because I'm trying to read camTexture while it is being modified by an external thread.
Here it is some code:
C# plugin source
[DllImport ("plugin")]
private static extern void setTexture (IntPtr cam, IntPtr draw, int width, int height);
[DllImport ("plugin")]
private static extern IntPtr GetRenderEventFunc();

WebCamTexture webcamTexture;
Texture2D drawTexture;
GameObject plane;
bool initialized = false;

IEnumerator Start () {
    // PlaneTexture
    // Create a texture
    drawTexture = new Texture2D(1280, 720, TextureFormat.BGRA32, true);
    drawTexture.filterMode = FilterMode.Point;
    drawTexture.Apply ();
    plane = GameObject.Find("Plane");
    plane.GetComponent<MeshRenderer> ().material.mainTexture = drawTexture;

    // Webcam texture
    webcamTexture = new WebCamTexture();
    webcamTexture.filterMode = FilterMode.Point;
    webcamTexture.Play();

    yield return StartCoroutine("CallPluginAtEndOfFrames");
}

private IEnumerator CallPluginAtEndOfFrames()
{
    while (true) {
        // Wait until all frame rendering is done
        yield return new WaitForEndOfFrame();
        // wait for webcam and initialize
        if (!initialized && webcamTexture.width > 16) {
            setTexture (webcamTexture.GetNativeTexturePtr(), drawTexture.GetNativeTexturePtr(), webcamTexture.width, webcamTexture.height);
            initialized = true;
        } else if (initialized) {
            // Issue a plugin event with an arbitrary integer identifier.
            // The plugin can distinguish between different things it needs
            // to do based on this ID. For our simple plugin it does not
            // matter which ID we pass here.
            GL.IssuePluginEvent (GetRenderEventFunc (), 1);
        }
    }
}
C++ plugin (draw texture method)
static void ModifyTexturePixels()
{
    void* textureHandle = drawTexturePointer;
    int width = textureWidth;
    int height = textureHeight;
    if (!textureHandle)
        return;

    int textureRowPitch;
    void* newEmptyTexturePointer = (unsigned char*)s_CurrentAPI->BeginModifyTexture(textureHandle, width, height, &textureRowPitch);
    memcpy(newEmptyTexturePointer, webcamTexturePointer, width * height * 4);
    s_CurrentAPI->EndModifyTexture(textureHandle, width, height, textureRowPitch, newEmptyTexturePointer);
}
Any help would be appreciated.

X11 Equivalent of gdk_pixbuf_add_alpha

In X11, I have obtained a binary blob by using XGetImage.
In GDK, there is a function that adds an alpha of 255 to a Pixbuf: https://developer.gnome.org/gdk-pixbuf/stable/gdk-pixbuf-Utilities.html#gdk-pixbuf-add-alpha
I was wondering if there is a convenient function like this in X11.
So I have a struct like this:
typedef struct _XImage {
    int width, height;      /* size of image */
    int xoffset;            /* number of pixels offset in X direction */
    int format;             /* XYBitmap, XYPixmap, ZPixmap */
    char *data;             /* pointer to image data */
    int byte_order;         /* data byte order, LSBFirst, MSBFirst */
    int bitmap_unit;        /* quant. of scanline 8, 16, 32 */
    int bitmap_bit_order;   /* LSBFirst, MSBFirst */
    int bitmap_pad;         /* 8, 16, 32 either XY or ZPixmap */
    int depth;              /* depth of image */
    int bytes_per_line;     /* accelerator to next scanline */
    int bits_per_pixel;     /* bits per pixel (ZPixmap) */
    unsigned long red_mask; /* bits in z arrangement */
    unsigned long green_mask;
    unsigned long blue_mask;
    XPointer obdata;        /* hook for the object routines to hang on */
    struct funcs {          /* image manipulation routines */
        struct _XImage *(*create_image)();
        int (*destroy_image)();
        unsigned long (*get_pixel)();
        int (*put_pixel)();
        struct _XImage *(*sub_image)();
        int (*add_pixel)();
    } f;
} XImage;
And in the ximage.data field I have RGB binary data. I'm helping a friend, and I need to turn that into RGBA binary data.
Thanks
Unfortunately, there is no such abstraction in X11 as in GDK.
You have to do it manually by iterating over the char *data member.

How do I save an UIImage as BMP?

Can I save (write to file) a UIImage object as a bitmap file (.bmp extension) in the iPhone's Documents directory?
Thanks in advance.
I don't think BMP is supported on iPhone. Maybe somebody has written a category for UIImage that saves to BMP, but I don't know of any. I guess you'll have to get the bitmap data from the UIImage and write it yourself; BMP is quite a simple file format. All you have to do is write out the header and then the uncompressed data. The header is a structure called BITMAPINFOHEADER; see MSDN. Getting the bitmap data of a UIImage is described in Apple's Technical Q&A 1509.
Right now I am not concerned about the size. I just want to know whether I can write the image data as a .bmp file.
I realize this is an old post, but in case someone finds it like I did while looking for a solution:
I basically needed to FTP a UIImage as a small BMP, so I hacked together this crude class in MonoTouch.
I borrowed from zxing.Bitmap.cs and from Example 1 of the Wikipedia BMP article. It appears to work. It might have been slicker to write an extension method like AsBMP().
(I don't know what the Objective-C equivalent is, but hopefully this is helpful to someone.)
using System;
using System.Drawing;
using System.Runtime.InteropServices;
using MonoTouch.Foundation;
using MonoTouch.UIKit;
using MonoTouch.CoreGraphics;

public class BitmapFileRGBA8888
{
    public byte[] Data; // data needs to be BGRA
    public const int PixelDataOffset = 54;

    public BitmapFileRGBA8888(UIImage image)
    {
        CGImage imageRef = image.CGImage;
        int width = imageRef.Width;
        int height = imageRef.Height;
        Initialize((uint)width, (uint)height);

        CGColorSpace colorSpace = CGColorSpace.CreateDeviceRGB();
        IntPtr rawData = Marshal.AllocHGlobal(height * width * 4);
        CGContext context = new CGBitmapContext(
            rawData, width, height, 8, 4 * width, colorSpace, CGImageAlphaInfo.PremultipliedLast);
        context.DrawImage(new RectangleF(0.0f, 0.0f, (float)width, (float)height), imageRef); // RGBA

        byte[] pixelData = new byte[height * width * 4];
        Marshal.Copy(rawData, pixelData, 0, pixelData.Length);
        Marshal.FreeHGlobal(rawData);

        int di = PixelDataOffset;
        int si;
        for (int y = 0; y < height; y++)
        {
            si = (height - y - 1) * 4 * width; // BMP rows are stored bottom-up
            for (int x = 0; x < width; x++)
            {
                CopyFlipPixel(pixelData, si, Data, di);
                di += 4; // destination marches forward
                si += 4;
            }
        }
    }

    private void CopyFlipPixel(byte[] Src, int Src_offset, byte[] Dst, int Dst_offset)
    {
        int S = Src_offset;
        int D = Dst_offset + 2;
        Dst[D--] = Src[S++]; // R
        Dst[D--] = Src[S++]; // G
        Dst[D--] = Src[S++]; // B
        Dst[Dst_offset + 3] = Src[S]; // alpha
    }

    private void Initialize(uint W, uint H)
    {
        uint RawPixelDataSize = W * H * 4;
        uint Size = RawPixelDataSize + 14 + 40;
        Data = new byte[Size];
        Data[0] = 0x42; Data[1] = 0x4D;  // BITMAPFILEHEADER "BM"
        SetLong(0x2, Size);              // file size
        SetLong(0xA, PixelDataOffset);   // offset to pixel data
        SetLong(0xE, 40);                // bytes in DIB header (BITMAPINFOHEADER)
        SetLong(0x12, W);
        SetLong(0x16, H);
        SetShort(0x1A, 1);               // 1 plane
        SetShort(0x1C, 32);              // 32 bits
        SetLong(0x22, RawPixelDataSize);
        SetLong(0x26, 2835);             // h/v pixels per meter device resolution
        SetLong(0x2A, 2835);
    }

    private void SetShort(int Offset, UInt16 V)
    {
        var byts = BitConverter.GetBytes(V);
        if (!BitConverter.IsLittleEndian) Array.Reverse(byts);
        Array.Copy(byts, 0, Data, Offset, byts.Length);
    }

    private void SetLong(int Offset, UInt32 V)
    {
        var byts = BitConverter.GetBytes(V);
        if (!BitConverter.IsLittleEndian) Array.Reverse(byts);
        Array.Copy(byts, 0, Data, Offset, byts.Length);
    }
} // END CLASS
Usage is basically:
var Bmp = new BitmapFileRGBA8888(TempImage);
FTP.UploadBin(Bmp.Data, "test.bmp"); // or just write as binary file
Since BMP is not a compressed format, is this a good idea?
Presumably, image size is even more important on portable devices.