Data ownership in GTK

In almost every page of the Gtk documentation there are the following phrases:
The data is owned by the caller of the function.
The data is owned by the called function.
The data is owned by the instance.
What do they mean, and what is the implication for memory management (g_free or g_object_unref)?
(I've read Introduction to Memory Management in GTK+, but it doesn't seem to cover "owned by the instance".)

You should read this like so:
the data: The parameter, the returned value, etc.
is owned by X: X is responsible for cleaning up the data (in most cases, that means calling g_object_unref on it).
With this in mind:
The data is owned by the caller of the function:
The gtk_application_window_new function works this way (as far as the application parameter is concerned). This means that memory management (i.e. calling g_object_unref on application) is to be done by the caller of gtk_application_window_new. See the sketch after the next case: the caller of gtk_application_window_new, here main (through activate), manages the reference count by calling g_object_unref on app.
The data is owned by the called function:
The gtk_application_window_new function works this way (as far as the returned value is concerned). This means that memory management of the returned GtkWidget instance is handled by gtk_application_window_new itself, so there is no need to call g_object_unref yourself. In the sketch below, window is created by gtk_application_window_new but is never explicitly freed; this is because the called function (here gtk_application_window_new) takes care of it.
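As a stand-in for the examples originally linked above, here is a minimal sketch modeled on the canonical GtkApplication example (the application id "org.example.app" and the window title are arbitrary). It shows both cases: main owns app and therefore unrefs it, while the window returned by gtk_application_window_new is never unreffed by our code.

#include <gtk/gtk.h>

static void
activate (GtkApplication *app, gpointer user_data)
{
  /* Owned by the called function: we never call g_object_unref on "window". */
  GtkWidget *window = gtk_application_window_new (app);
  gtk_window_set_title (GTK_WINDOW (window), "Ownership demo");
  gtk_widget_show (window);
}

int
main (int argc, char **argv)
{
  /* Owned by the caller: this function created "app", so it is also
   * responsible for the final g_object_unref below. */
  GtkApplication *app = gtk_application_new ("org.example.app",
                                             G_APPLICATION_FLAGS_NONE);
  g_signal_connect (app, "activate", G_CALLBACK (activate), NULL);
  int status = g_application_run (G_APPLICATION (app), argc, argv);
  g_object_unref (app);
  return status;
}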
The data is owned by the instance:
The gtk_builder_get_object function works this way (as far as the returned value is concerned). This means that the memory management of the returned GObject* is performed by the builder instance itself, so you should not call g_object_unref on it. In the sketch below, the builder object is unreffed, but not the widgets returned by calls to gtk_builder_get_object. Even though it is written in C, GTK is object oriented, so instance here means the same as a class instance in most OO languages.
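Again as a sketch rather than the originally linked example (the UI file name "builder.ui" and the object id "window" are assumptions), the builder variant looks like this: the builder we created is unreffed, but the object returned by gtk_builder_get_object is left alone because the builder instance owns it.

#include <gtk/gtk.h>

static void
activate (GtkApplication *app, gpointer user_data)
{
  /* The builder is owned by us, the caller of gtk_builder_new_from_file. */
  GtkBuilder *builder = gtk_builder_new_from_file ("builder.ui");

  /* Owned by the instance (the builder): do not unref "window" here. */
  GtkWidget *window = GTK_WIDGET (gtk_builder_get_object (builder, "window"));
  gtk_window_set_application (GTK_WINDOW (window), app);
  gtk_widget_show (window);

  /* Release only what we own: the builder itself. */
  g_object_unref (builder);
}

This activate callback plugs into the same kind of main as in the previous sketch.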

In "cache" class, where does directory setup go?

I have a class that is used to store and retrieve image data from a cache using NSFileManager. When an instance of it is created, I want to check whether the image directory already exists, and if not, create it. Where is the most appropriate place to put this code? Is this something one would override the designated initializer for?
Thanks for reading.
The initializer (init) method would be the best place to do it from a programming perspective, because the rest of the instance methods will probably rely on having access to the directory to store/retrieve images.
You'd also want any instance you create to know whether accessing or creating the directory succeeded, so in your initializer you may wish to add some error handling that returns nil (or throws an exception) if the directory can't be accessed; the classes that use your cache instance can then handle that failure.

How to append request contexts when they have different receiver implementations

At Google I/O 2011, David Chandler mentioned that you can chain different request contexts by using the append() method, but in practice I don't know how to chain them when they have different receivers, using to() and then fire().
Please help.
There are two kinds of receivers: the ones bound to each method invocation (which you pass to the Request's to() method), and the context-level one (which you pass to the RequestContext's fire() method). The Request's fire(Receiver) method is shorthand for to(receiver).fire(), i.e. it binds the Receiver to the method.
The method-level receivers depend only on the method; their generic parameterization depends on the method's return type (the generic parameterization of the Request or InstanceRequest), so whether you append() several RequestContexts together changes absolutely nothing.
The context-level receiver is always parameterized with Void. When you append() contexts together, they actually form a single context with several interfaces, so you only call fire() once, on any one of the appended contexts.
Now let's go back to the basics: without using append(), you can only batch together calls for methods that are declared on the same context interface. If there are two distinct context interfaces you want to use, you have to call fire() twice, i.e. make two HTTP requests. The introduction of append() allows you to batch together calls for methods declared on any context interface: simply append one context to another and the calls on both contexts will be batched together in the same HTTP request, triggered by a single fire() on any one of the appended contexts.
Now into the technical details: internally, a context is nothing more than a thin wrapper around a state object. When you edit() or create() a proxy, you add it to the internal state, and when you call a service method, the method name (actually, its obfuscated token) and the arguments are captured and pushed to the state as well. When you append() a context, you're only making it share its internal state with that of the context you append it to. That way, when you call a service method on the appended context, it's pushed onto the exact same state as that of the other context, and when you fire() any one of them, the state is serialized into a single HTTP request.
Note that, to append a context, its own internal state has to be empty, otherwise an exception will be raised, as its state would otherwise be thrown away and replaced by that of the other context.
In brief, and in practice:
FirstContext first = rf.first();
SomeProxy proxy = first.create(SomeProxy.class);
...
SecondContext second = first.append(rf.second());
OtherProxy other = second.create(OtherProxy.class);
other.setSome(proxy);
...
second.saveAndReturnSelf(other).to(new Receiver<OtherProxy>() {
...
});
...
first.fire();
Note that the line that creates and appends the second context could equally be written:
SecondContext second = rf.second();
first.append(second);
The append method returns its argument as a convenience, but it's really the same value you passed as the argument. This is only to allow writing the one-liner above, instead of being forced to use the two-liner.

Correct usage of std::shared_ptr and std::auto_ptr

I know the basic definitions of the following smart pointer types and how to use them. However, I am not very sure about the places/circumstances where:
std::auto_ptr should be preferred over std::shared_ptr.
std::shared_ptr should be preferred over std::auto_ptr.
std::auto_ptr : used to ensure that the object to which it points gets destroyed automatically when control leaves a block.
std::shared_ptr : wraps a reference-counted smart pointer around a dynamically allocated object.
auto_ptr should never be used because it is deprecated as of C++11 [1].
Use
std::shared_ptr if ownership is to be shared
std::unique_ptr if there should only be a unique view of the object, i.e. only one owner
auto_ptr also cannot be used in standard containers, because its copy operations transfer ownership and therefore do not meet the containers' requirements for element types.
[1] D.10 auto_ptr: "The class template auto_ptr is deprecated. [ Note: The class template unique_ptr (20.7.1) provides a better solution. —end note ]"
In the case where there is only one owner, use the lightweight std::unique_ptr. For more complicated scenarios, use std::shared_ptr.
There is no reason to use std::auto_ptr: the new smart pointers shared_ptr, unique_ptr and weak_ptr provide all the required functionality, and the unique_ptr class supersedes auto_ptr.
You use an auto_ptr (or unique_ptr in C++11) when one distinguished pointer instance has full ownership of the pointee. That is, if you can always look at the code and point with your finger at the one instance of std::auto_ptr that owns the object the pointer points at, you have a good use case for auto_ptr.
If things are not so clear, you use a shared_ptr. If in doubt and in a single-threaded environment, use a shared_ptr.

GWT - Where should I use code splitting while using places/activities/mappers?

"core" refers to the initial piece of the application that is loaded.
In order to bind URLs to places, GWT uses PlaceTokenizer<P extends Place>. When loading the application from a URL, it calls the method P getPlace(String token) to retrieve a new instance of the place to navigate to.
Due to the asynchronous nature of code splitting, I can't create the place inside a runAsync in this method, so I have to put all the places of my app in the core.
To link places to activities, GWT calls Activity getActivity(Place place) (from com.google.gwt.activity.shared.ActivityMapper) to retrieve a new instance of the activity.
Once again, I have to put all my activities in the core.
Here's what I want to try: Write a custom com.google.gwt.place.shared.Delegate that
binds itself to PlaceChangeRequestEvent. If the AppPiece corresponding to the requestedPlace isn't loaded, it calls event.setWarning(NEED_TO_LOAD_MODULE)
in the confirm(String message) method, always returns false when the message equals NEED_TO_LOAD_MODULE (so it doesn't bother the user), and loads the module via GWT.runAsync
once the module is loaded, calls goTo(requestedPlace)
Each AppPiece of my application contains a bunch of activities and the corresponding views. Since the mappers are only called when PlaceChangeEvent is fired, I could generate a new instance of my activity via AppPiece.getSomeActivityInstance().
I'm pretty sure this will work, but what bothers me is that:
Finding which AppPiece to load depending on the requestedPlace will force me to write code that will be very similar to my mappers
I would like to have my places inside the corresponding AppPiece
Overriding Delegate for this purpose is tricky, and I'm looking for a better solution
You don't have to put all your activities in the core (as you call it): while an Activity instance is retrieved synchronously, it's allowed to start asynchronously. This is where you'd put your GWT.runAsync call.
See http://code.google.com/p/google-web-toolkit/issues/detail?id=5129 and https://groups.google.com/d/topic/google-web-toolkit/8_P_d4aT-0E/discussion

Class Design: Demeter vs. Connection Lifetimes

Okay, so here's a problem I'm running into.
I have some classes in my application that have methods that require a database connection. I am torn between two different ways to design the classes, both of which are centered around dependency injection:
Provide a property for the connection that is set by the caller prior to method invocation. This has a few drawbacks.
Every method relying on the connection property has to validate that property to ensure that it isn't null, that it's open, and that it isn't involved in a transaction if that would muck up the operation.
If the connection property is unexpectedly closed, all the methods have to either (1.) throw an exception or (2.) coerce it open. Depending on the level of robustness you want, either case is appropriate. (Note that this is different from a connection that is passed to a method in that the reference to the connection exists for the lifetime of the object, not simply for the lifetime of the method invocation. Consequently, the volatility of the connection just seems higher to me.)
Providing a Connection property seems (to me, anyway) to scream out for a corresponding Transaction property. This creates additional overhead in the documentation, since you'd have to make it fairly obvious when the transaction was being used, and when it wasn't.
On the other hand, Microsoft seems to favor the whole set-and-invoke paradigm.
Require the connection to be passed as an argument to the method. This has a few advantages and disadvantages:
The parameter list is naturally larger. This is irksome to me, primarily at the point of call.
While a connection (and a transaction) must still be validated prior to use, the reference to it exists only for the duration of the method call.
The point of call is, however, quite clear. It's very obvious that you must provide the connection, and that the method won't be creating one behind your back automagically.
If a method doesn't require a transaction (say a method that only retrieves data from the database), no transaction is required. There's no lack of clarity due to the method signature.
If a method requires a transaction, it's very clear due to the method signature. Again, there's no lack of clarity.
Because the class does not expose a Connection or a Transaction property, there's no chance of callers trying to drill down through them to their properties and methods, thus enforcing the Law of Demeter.
I know, it's a lot. But on the one hand, there's the Microsoft Way: Provide properties, let the caller set the properties, and then invoke methods. That way, you don't have to create complex constructors or factory methods and the like. Also, avoid methods with lots of arguments.
Then, there's the simple fact that if I expose these two properties on my objects, they'll tend to encourage consumers to use them in nefarious ways. (Not that I'm responsible for that, but still.) But I just don't really want to write crappy code.
If you were in my shoes, what would you do?
Here is a third pattern to consider:
Create a class called ConnectionScope, which provides access to a connection
Any class, at any time, can create a ConnectionScope
ConnectionScope has a property called Connection, which always returns a valid connection
Any (and every) ConnectionScope gives access to the same underlying connection object (within some scope, maybe within the same thread, or process)
You then are free to implement that Connection property however you want, and your classes don't have a property that needs to be set, nor is the connection a parameter, nor do they need to worry about opening or closing connections.
More details:
In C#, I'd recommend ConnectionScope implement IDisposable; that way your classes can write code like "using ( var scope = new ConnectionScope() )" and ConnectionScope can free the connection (if appropriate) when it is disposed
If you can limit yourself to one connection per thread (or process) then you can easily set the connection string in a [thread] static variable in ConnectionScope
You can then use reference counting to ensure that your single connection is re-used when it's already open and released when no one is using it
Updated: Here is some simplified sample code:
public class ConnectionScope : IDisposable
{
    private static Connection m_Connection;
    private static int m_ReferenceCount;

    public Connection Connection
    {
        get
        {
            return m_Connection;
        }
    }

    public ConnectionScope()
    {
        if ( m_Connection == null )
        {
            m_Connection = OpenConnection();
        }
        m_ReferenceCount++;
    }

    public void Dispose()
    {
        m_ReferenceCount--;
        if ( m_ReferenceCount == 0 )
        {
            m_Connection.Dispose();
            m_Connection = null;
        }
    }
}
Example code of how one (any) of your classes would use it:
using ( var scope = new ConnectionScope() )
{
    scope.Connection.ExecuteCommand( ... )
}
I would prefer the latter method. It sounds like your classes use the database connection as a conduit to the persistence layer. Making the caller pass in the database connection makes it clear that this is the case. If the connection/transaction were represented as a property of the object, then things are not so clear and all of the ownership and lifetime issues come out. Better to avoid them from the start.