OpenGL DSA and FBO - framebuffer

I upgraded my FBO code to use the DSA (Direct State Access) features from OpenGL 4.5.
Everything works, but I still need to call glBindFramebuffer() before drawing. Is there something I missed?
I was thinking I could use this call before drawing to my FBO:
glNamedFramebufferDrawBuffer(m_FBO, GL_COLOR_ATTACHMENT0);
Then use this one to revert to the default framebuffer:
glDrawBuffer(GL_BACK);
But it doesn't work. Should I still use glBindFramebuffer()? And if so, what is the use of glNamedFramebufferDrawBuffer()?
I could hardly find any clear information about this.

glNamedFramebufferDrawBuffer does not bind the framebuffer to the target. It only selects the color buffer that fragment color zero is written to for a named framebuffer object, so you still need to bind the framebuffer with glBindFramebuffer() before drawing (a short sketch follows the quoted spec text).
See OpenGL 4.6 core profile specification - 17.4.1 Selecting Buffers for Writing, p. 513:
17.4.1 Selecting Buffers for Writing
The first such operation is controlling the color buffers into which each of the fragment color values is written. This is accomplished with either DrawBuffer or DrawBuffers commands described below.
The set of buffers of a framebuffer object to which fragment color zero is written is controlled with the commands
void DrawBuffer( enum buf );
void NamedFramebufferDrawBuffer( uint framebuffer, enum buf );
For DrawBuffer, the framebuffer object is that bound to the DRAW_FRAMEBUFFER binding. For NamedFramebufferDrawBuffer, framebuffer is zero or the name of a framebuffer object. If framebuffer is zero, then the default draw framebuffer is affected.
See OpenGL 4.6 core profile specification - 9.2 Binding and Managing Framebuffer Objects, p. 297:
A framebuffer object is created by binding a name returned by GenFramebuffers (see below) to DRAW_FRAMEBUFFER or READ_FRAMEBUFFER. The binding is effected by calling
void BindFramebuffer( enum target, uint framebuffer );
with target set to the desired framebuffer target and framebuffer set to the framebuffer object name. The resulting framebuffer object is a new state vector ...
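To make the distinction concrete, here is a minimal sketch of how the two calls work together, assuming an OpenGL 4.5+ context; m_FBO and m_ColorTex are placeholder names for an existing framebuffer object and its color texture:
/* DSA calls configure the named framebuffer object without binding it. */
glNamedFramebufferTexture(m_FBO, GL_COLOR_ATTACHMENT0, m_ColorTex, 0);
glNamedFramebufferDrawBuffer(m_FBO, GL_COLOR_ATTACHMENT0);  /* select color buffer 0 of m_FBO */

/* Rendering still targets whatever is bound to the draw framebuffer,
   so the bind is still required before drawing into the FBO. */
glBindFramebuffer(GL_FRAMEBUFFER, m_FBO);
/* ... draw calls into the FBO ... */

/* Switch back to the default framebuffer for the final pass. */
glBindFramebuffer(GL_FRAMEBUFFER, 0);
glDrawBuffer(GL_BACK);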

How to release a RenderTexture with antialiasing in Unity?

I'm struggling with multisampling. I allocated a RenderTexture with the 'antiAliasing' property set, but it cannot be fully released and I don't know why.
I did find a way to allocate a RenderTexture that can be released, as follows:
// allocate a temporary RenderTexture
multisampleRT = RenderTexture.GetTemporary(Camera.pixelWidth, Camera.pixelHeight, 32,
                                           RenderTextureFormat.Depth,
                                           RenderTextureReadWrite.Default,
                                           multisampleCount);
// release the temporary RenderTexture
RenderTexture.ReleaseTemporary(multisampleRT);
But this approach is followed by a default resolve pass, which interferes with my later operations.
Then I tried another property, 'bindTextureMS':
multisampleRT.bindTextureMS = true;
However, this makes memory usage explode again.
To sum up, I would appreciate a solution that:
Allocates and releases the RenderTexture correctly.
Allocates the RenderTexture with RenderTextureFormat, not GraphicsFormat (I must use this).
Allocates the RenderTexture without it being resolved by default.
Thanks a lot :)

GamePlayKit GKObstacleGraph save and load

Can GKObstacleGraph be saved to a file and loaded from there?
I can't find anything on this.
I would love to save and load precalculated graphs for my levels.
This is what I have tried so far:
NSArray *obstacles = [SKNode obstaclesFromNodePhysicsBodies:arrayOfBodies];
_graph = [GKObstacleGraph graphWithObstacles:obstacles
                                bufferRadius:[(BaseUnit *)[_units firstObject] size].width / 2];
[NSKeyedArchiver archiveRootObject:_graph toFile:@"/Users/roma/Desktop/myGraph.graph"];
But this is what I got:
-[GKObstacleGraph encodeWithCoder:]: unrecognized selector sent to instance 0x6180000432d0
GKObstacleGraph is a subclass of GKGraph, which (as of macOS 10.12, iOS 10, and tvOS 10) declares conformance to the NSCoding protocol. That means you can serialize one to data or a file (and deserialize to create an instance from a file) using NSKeyedArchiver (and NSKeyedUnarchiver) just like you can for any other object that supports NSCoding.
For general info on archiving (which applies to any NSCoding-compatible class), see Apple's Archives and Serializations Programming Guide.
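For example, on those OS versions the round trip is plain keyed archiving; a minimal sketch (the file path here is a placeholder):
// Archive the precalculated graph to disk.
NSString *graphPath = [NSTemporaryDirectory() stringByAppendingPathComponent:@"myGraph.graph"];
[NSKeyedArchiver archiveRootObject:_graph toFile:graphPath];

// ... later, restore it instead of rebuilding the obstacles:
GKObstacleGraph *restored = [NSKeyedUnarchiver unarchiveObjectWithFile:graphPath];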
Also, in Xcode 8 (when deploying to macOS 10.12, iOS 10, or tvOS 10), you can create GKGraphs in a visual editor to go along with your SpriteKit scenes. When you do that, you use the GKScene class to load both the SpriteKit scene and the GK objects (which can include not just pathfinding graphs, but also entity/component info) from the .sks file Xcode writes.
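If you go that route, loading looks roughly like this ("Level1" is a placeholder for the name of your .sks file):
GKScene *scene = [GKScene sceneWithFileNamed:@"Level1"];
SKScene *skScene = (SKScene *)scene.rootNode;  // the SpriteKit scene built in the editor
// The GameplayKit objects created in the editor (graphs, entities) are also exposed on 'scene'.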
In older OS versions, the GKGraph family doesn't support NSCoding. However, all the information you need to reconstruct a GKObstacleGraph is publicly accessible. So you could implement your own serialization by reading a graph's buffer radius and list of obstacles, and reading each obstacle's list of vertices. Write that info to a file however you like... then when you want to reconstruct a graph, create GKPolygonObstacles from the vertices you saved, and create a new GKObstacleGraph from those obstacles and your saved buffer radius.
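A rough sketch of that manual approach; the nested-NSArray container used here is just an example format, but the GameplayKit calls are the real API:
// Save: capture everything needed to rebuild the graph.
NSMutableArray *savedObstacles = [NSMutableArray array];
for (GKPolygonObstacle *obstacle in _graph.obstacles) {
    NSMutableArray *vertices = [NSMutableArray array];
    for (NSUInteger i = 0; i < obstacle.vertexCount; i++) {
        vector_float2 v = [obstacle vertexAtIndex:i];
        [vertices addObject:@[@(v.x), @(v.y)]];
    }
    [savedObstacles addObject:vertices];
}
float savedRadius = _graph.bufferRadius;
// ... write savedObstacles and savedRadius to disk however you like (plist, JSON, ...).

// Load: rebuild the obstacles, then the graph.
NSMutableArray<GKPolygonObstacle *> *obstacles = [NSMutableArray array];
for (NSArray *vertices in savedObstacles) {
    vector_float2 *points = malloc(sizeof(vector_float2) * vertices.count);
    for (NSUInteger i = 0; i < vertices.count; i++) {
        points[i] = (vector_float2){[vertices[i][0] floatValue], [vertices[i][1] floatValue]};
    }
    [obstacles addObject:[[GKPolygonObstacle alloc] initWithPoints:points count:vertices.count]];
    free(points);  // the obstacle keeps its own copy of the points
}
GKObstacleGraph *rebuilt = [GKObstacleGraph graphWithObstacles:obstacles
                                                  bufferRadius:savedRadius];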

Using CGDataProviderCreateWithData callback

I'm using CGDataProviderCreateWithData to (eventually) create a UIImage from a malloced array of bytes. I call CGDataProviderCreateWithData like this:
provider = CGDataProviderCreateWithData(NULL, dataPtr, dataLen, callbackFunc);
where
dataPtr is the previously malloced array of data bytes for the image,
dataLen is the number of bytes in the dataPtr array, and
callbackFunc is as described in the CGDataProviderCreateWithData documentation:
void callbackFunc(void *info, const void *data, size_t size);
The callback function is called when the data provider is released so I could free() dataPtr there, but I may want to continue using it (dataPtr) and at some later stage free it. This block of code will be called multiple times, and the flow will look something like:
1. malloc(dataPtr)
2. create image (call CGDataProviderCreateWithData etc.)
3. display image
4. release image (and so release the data provider created by CGDataProviderCreateWithData)
5. continue to use dataPtr
6. free(dataPtr)
so steps 1-6 may be executed multiple times. I don't want dataPtr hanging around for the entire execution of the program (and it may change in size anyway), so I want to malloc/free it as necessary.
The problem is that I can't free(dataPtr) in the callback from CGDataProviderCreateWithData because I still want to use it, so I want to free it some time later - and I can't free it until I know that the data provider no longer needs it (as far as I can tell, CGDataProviderCreateWithData uses the array I pass; it doesn't take a copy).
I can't re-enter step 1 until I know it is OK to free and re-malloc dataPtr, so what I really want to do is block until the data provider has been freed (well, I want to know whether I can re-enter steps 1-6, which I can't do until the data provider is freed). It will be freed eventually: I create the data provider, create the image, immediately display it, and release the data provider. The trouble is that the data provider isn't actually released until the UIImage is released and finished with it.
I'm reasonably new to Objective-C and iOS. Am I missing something obvious?
If you malloc the memory for the provider's data, you really want to free it in the callback. Do not be tempted to circumvent this in any manner. It would be too easy to leak, and it would be susceptible to simple memory issues.
Having said that, there are two simple solutions that address your question. Either:
Make your own copy of the data that you'll manage separately from the provider; or
Instead of using the void * renditions of the CGDataProvider methods, use the CFData rendition (e.g. CGDataProviderCreateWithCFData) and then you can maintain your own strong reference to this data object (or if in non-ARC code, do your own retain of the data object). The object will not be deallocated until all strong references are resolved (or, in non-ARC code, all of your manual retain calls are resolved with a corresponding release or autorelease call).
With either of these approaches, you can continue to let the CGDataProviderRef manage the memory as it sees fit, but you can continue to use the data object for your own purposes, too (a sketch of the second approach follows).
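Here is a minimal sketch of the second approach, assuming ARC and that dataPtr and dataLen are the malloc'd buffer and byte count from the question:
// NSData takes ownership of dataPtr and will free() it only after the last
// strong reference to it (yours and the provider's) has gone away.
NSData *imageData = [NSData dataWithBytesNoCopy:dataPtr length:dataLen freeWhenDone:YES];

CGDataProviderRef provider = CGDataProviderCreateWithCFData((__bridge CFDataRef)imageData);
// ... create the CGImage / UIImage from 'provider' ...
CGDataProviderRelease(provider);

// You can keep reading the bytes through imageData for as long as you hold it;
// do not call free(dataPtr) yourself in this setup.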
I ran into a similar problem as well. I wanted to use CGImageCreateWithJPEGDataProvider, so I used a CGDataProviderRef with a malloc'd byte array. I was getting the same problem when trying to free the byte array at some point after creating the data provider reference.
It occurs because the data provider takes over ownership of the byte array, so the array can only be freed by the provider, when it is done with it.
I agree with @TheBlack in that if you need the data elsewhere, make a copy of it.
What I'm doing is essentially the same as what you want to achieve; I just took a different route.
This whole process is wrapped in an NSOperation, so you have control over scheduling and memory usage.
CGImageRef imageRef = [image CGImage];
NSUInteger width = CGImageGetWidth(imageRef);
NSUInteger height = CGImageGetHeight(imageRef);
CGColorSpaceRef colorSpace = CGImageGetColorSpace(imageRef);
NSUInteger bitsPerComponent = CGImageGetBitsPerComponent(imageRef);
NSUInteger bytesPerRow = CGImageGetBytesPerRow(imageRef);

// Allocate a buffer big enough for the whole bitmap (height * bytesPerRow bytes).
UInt8 *rawData = calloc(height * bytesPerRow, sizeof(UInt8));

CGContextRef context = CGBitmapContextCreate(rawData, width, height,
                                             bitsPerComponent, bytesPerRow, colorSpace,
                                             kCGImageAlphaPremultipliedFirst | kCGBitmapByteOrder32Host);
if (context)
{
    CGContextDrawImage(context, CGRectMake(0, 0, width, height), imageRef);
    // This is the place where you get access to the pixels through rawData.
    // Do whatever you want with them; when done, get an image with:
    CGImageRef newImg = CGBitmapContextCreateImage(context);
    // ... use newImg, then clean up:
    CGImageRelease(newImg);
    CGContextRelease(context);
}
free(rawData);
OR
Subclass CALayer, override drawInContext:, and use CGBitmapContextGetData to get at the raw data. I don't have much experience with this, sorry, but if you want to see instant changes to the image based on user input, I'd do it this way:
Subclass UIView and CALayer, where the layer becomes the view's backing layer and displays the image. The view receives the input, and the image data (obtained via CGBitmapContextGetData) is manipulated in the layer class based on that input.
Even CATiledLayer can be used for huge images this way. When you are done with the image, just release the UIView and replace it with a new one. I'll gladly provide more help if you need any; just ping here.

How to release this?

I'm creating a bitmap context, and in my code there is this:
bitmapData = malloc(bitmapByteCount);
context = CGBitmapContextCreate(bitmapData,
                                pixelsWidth,
                                pixelsHeight,
                                8,                  // bits per component
                                bitmapBytesPerRow,
                                colorSpace,
                                kCGImageAlphaPremultipliedFirst);
Before the method returns the CGContextRef object, I think I need to free bitmapData. Can I safely call free(bitmapData) before I return the context?
The documentation for CGBitmapContextCreate says this:
In iOS 4.0 and later, and Mac OS X v10.6 and later, you can pass NULL if you want Quartz to allocate memory for the bitmap. This frees you from managing your own memory, which reduces memory leak issues.
I would suggest you pass NULL instead of a malloc'd pointer and you will be free of worrying about its memory.
However, be mindful that CGBitmapContextCreate has 'create' in its name, so by convention you will own the object returned. You will need to release this at some point with CFRelease().
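For instance, a minimal sketch of that pattern, reusing the variable names from the question (the color space creation is added here only to make the snippet self-contained):
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef context = CGBitmapContextCreate(NULL,       // let Quartz allocate and own the pixel buffer
                                             pixelsWidth,
                                             pixelsHeight,
                                             8,           // bits per component
                                             0,           // 0 = let Quartz compute bytes per row
                                             colorSpace,
                                             kCGImageAlphaPremultipliedFirst);
CGColorSpaceRelease(colorSpace);

// ... draw into 'context' ...

CGContextRelease(context);  // you own the "Create"d context, so release it when done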
Jasarien's answer is best if you're developing for iOS version 4.0 or newer. If you want to support older versions, then keep reading.
You have to keep the bitmapData around as long as the context is being used. If you try to draw into the bitmap context and you've freed bitmapData, Bad Things will happen. The best solution is to free bitmapData after you call CFRelease on the context. If you called CGBitmapContextCreateImage to extract a CGImage from the bitmap context then don't worry... when you release the bitmap context, the CGImage will make its own copy of the bitmap data.
What this means is that making a method or function that creates and returns a bitmap context might not be the greatest idea. If you can, it would be best to create the context at the top of the method, use it in that method, and then release the context and free the bitmap at the end of the method. If you can't do that, consider storing the context and its bitmapData in ivars. If you need multiple bitmap contexts at one time, you'll probably want to create an object to track each context and its bitmapData.
This is why it's best to pass NULL for the bitmapData if you're only supporting iOS version 4.0 or newer. If you're on 4.0+ and pass NULL, you can safely ignore the stuff I said above and just make sure that the caller eventually calls CFRelease on the context you return.
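In code, the ordering described above looks roughly like this (variable names taken from the question; error handling omitted):
bitmapData = malloc(bitmapByteCount);
context = CGBitmapContextCreate(bitmapData, pixelsWidth, pixelsHeight,
                                8, bitmapBytesPerRow, colorSpace,
                                kCGImageAlphaPremultipliedFirst);

// ... draw into the context; optionally snapshot it:
CGImageRef image = CGBitmapContextCreateImage(context);  // the image gets its own copy of the bits

CGContextRelease(context);  // the context no longer needs bitmapData after this
free(bitmapData);           // now it is safe to free the buffer
// 'image' stays valid; release it with CGImageRelease() when you are finished with it.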

_NSAutoreleaseNoPool Breaking but No Helpful Stack Trace

I am getting the message:
*** _NSAutoreleaseNoPool(): Object 0x3f43660 of class UICFFont autoreleased with no pool in place - just leaking
I have placed a breakpoint on the symbol _NSAutoreleaseNoPool and the program does break; however, the stack trace does not show any of my code, only some UIView and Core Animation layer code.
(Screenshot of the stack trace: http://img.skitch.com/20100614-fw7u4qtb5bprpwrkh9rdkwn3rq.png)
Is there a better way to get to the bottom of the issue? There is apparently a thread that does not have an auto release pool, but I can't figure out where.
Thanks.
Are you using CATiledLayer instances? That is the only type of layer I know of whose drawLayer:inContext: method can be called from an arbitrary thread:
As more data is required by the renderer, the layer's drawLayer:inContext: method is called on one or more background threads to supply the drawing operations to fill in one tile of data. The clip bounds and CTM of the drawing context can be used to determine the bounds and resolution of the tile being requested.
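If that is the case, the usual remedy is to wrap the background drawing in its own autorelease pool so that autoreleased objects (like the UICFFont in the message above) have somewhere to go. A minimal sketch of a delegate's drawLayer:inContext: using the manual-retain-release pattern of that era (on ARC toolchains, an @autoreleasepool block is the equivalent):
- (void)drawLayer:(CALayer *)layer inContext:(CGContextRef)ctx
{
    NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
    // ... existing drawing code; anything autoreleased here now has a pool ...
    [pool drain];
}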