I went through the implementation of SyncHashtable defined in the .NET Framework BCL.
This class provides synchronized access for multiple readers and writers.
One of the methods is implemented as
public override Object this[Object key] {
    get {
        return _table[key];
    }
    set {
        lock (_table.SyncRoot) {
            _table[key] = value;
        }
    }
}
In my opinion the getter should also take a lock on SyncRoot before accessing the table.
Consider this scenario:
Thread 1: deleting keys from the Hashtable.
Thread 2: reading objects using keys.
If a context switch occurs in thread 2 while it is reading, and thread 1 then deletes the object, the read operation could fail or return an inconsistent result.
Hence, couldn't we implement the method like this?
public override Object this[Object key] {
    get {
        lock (_table.SyncRoot) {
            return _table[key];
        }
    }
    set {
        lock (_table.SyncRoot) {
            _table[key] = value;
        }
    }
}
Thanks
Vivek
Locking the Hashtable for reading is not necessary, because it is already thread safe under these circumstances.
The documentation for Hashtable states:
Hashtable is thread safe for use by multiple reader threads and a single writing thread.
By locking the write access, there is in effect only a single writer, and it is therefore unnecessary to lock reads.
Note that while this is true for Hashtable, it is not the case for Dictionary<TKey, TValue>.
Not knowing the thread-safety guarantees of Hashtable, I believe you are correct: the lock keyword is a monitor, not an implicit reader-writer lock, so this could definitely cause undesirable effects. However, given the situation as of today, I'd look at the Reactive Extensions for .NET 3.5; it ships with a System.Threading assembly that contains the new concurrent collection classes from .NET 4.0. These classes are much better suited to multi-threaded access to collections.
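For illustration, here is roughly what the question's scenario looks like with one of those concurrent collections, ConcurrentDictionary<TKey, TValue> (a minimal sketch; the key and values are arbitrary):
using System;
using System.Collections.Concurrent;

class Demo
{
    static void Main()
    {
        var table = new ConcurrentDictionary<string, object>();

        // Writes need no explicit lock...
        table["answer"] = 42;

        // ...and neither do reads, even with concurrent removers:
        object value;
        if (table.TryGetValue("answer", out value))
        {
            Console.WriteLine(value); // 42
        }

        // Removal is atomic, so the reader/deleter race from the question
        // simply yields "not found" rather than an inconsistent result.
        object removed;
        table.TryRemove("answer", out removed);
    }
}
TryGetValue and TryRemove report success via their return values, which replaces the failure mode the question worries about with an explicit "key wasn't there" result.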
Related
I have read in various places that global variables are at best a code smell and best avoided. At the moment I am working on refactoring a big function-based PS script to classes, and thought to use a singleton, the use case being a large data structure that needs to be referenced from a lot of different classes and modules.
Then I found this, which seems to suggest that Singletons are a bad idea too.
So, what IS the right way (in PS 5.1) to create a single data structure that needs to be referenced by a lot of classes, and modified by some of them? Likely pertinent is the fact that I do NOT need this to be thread safe. By definition the queue will be processed in a very linear fashion.
FWIW, I got to the referenced link looking for information on singletons and inheritance, since my singleton is simply one of a number of classes with very similar behavior: I start with the singleton, which contains collections of the next class, which each contain collections of the next class, to create a hierarchical queue. I wanted to have a base class that handled all the common queue management, then extend that for the differing functionality of each class. Which works great, other than having that first extended class be a singleton. That seems to be impossible, correct?
EDIT: Alternatively, is it possible with this nested-classes-in-a-generic-list-property approach to identify the parent from within a child? This is how I handled it in the function-based version: a global [XML] variable formed the data structure, and I could step through that structure, using .SelectNode() to populate a variable to pass to the next function down, and using .Parent to get information from higher up, especially from the root of the data structure.
EDIT: Since I seem not to be able to paste code here right now, I have some code on GitHub. The example of where the singleton comes in is at line 121, where I need to verify whether there are any other instances of the same task that have not yet completed, so I can skip all but the last instance. This is a proof of concept for deleting common components of various Autodesk software, which is managed in a very ad hoc manner. I want to be able to install any mix of programs (packages) and uninstall on any schedule, and ensure that the last package that has a shared component is the one that uninstalls it, so as not to break other dependent programs before that last uninstall happens. Hopefully that makes sense. Autodesk installs are a fustercluck of misery. If you don't have to deal with them, consider yourself lucky. :)
To complement Mathias R. Jessen's helpful answer - which may well be the best solution to your problem - with an answer to your original question:
So, what IS the right way (in PS 5.1) to create a single data structure that needs to be referenced by a lot of classes, and modified by some of them [without concern for thread safety]?
The main reason that global variables are to be avoided is that they are session-global, meaning that code that executes after your own sees those variables too, which can have side effects.
You cannot implement a true singleton in PowerShell, because PowerShell classes do not support access modifiers; notably, you cannot make a constructor private (non-public), you can only "hide" it with the hidden keyword, which merely makes it less discoverable while still being accessible.
You can approximate a singleton with the following technique, which itself emulates a static class (which PowerShell also doesn't support, because the static keyword is only supported on class members, not the class as a whole).
A simple example:
# NOT thread-safe
class AlmostAStaticClass {
    hidden AlmostAStaticClass() { Throw "Instantiation not supported; use only static members." }
    static [string] $Message   # static property
    static [string] DoSomething() { return ([AlmostAStaticClass]::Message + '!') }
}
[AlmostAStaticClass]::<member> (e.g., [AlmostAStaticClass]::Message = 'hi') can now be used in the scope in which AlmostAStaticClass was defined and all descendant scopes (but it is not available globally, unless the defining scope happens to be the global one).
If you need access to the class across module boundaries, you can pass it as a parameter (as a type literal); note that you still need :: to access the (invariably static) members; e.g.,
& { param($staticClass) $staticClass::DoSomething() } ([AlmostAStaticClass])
Implementing a thread-safe quasi-singleton - perhaps for use
with ForEach-Object -Parallel (v7+) or Start-ThreadJob (v6+, but installable on v5.1) - requires more work:
Note:
Methods are then required to get and set what are conceptually properties, because PowerShell doesn't support code-backed property getters and setters as of 7.0 (adding this ability is the subject of this GitHub feature request).
You still need an underlying property, however, because PowerShell doesn't support fields; again, the best you can do is hide this property, but it is technically still accessible.
The following example uses System.Threading.Monitor (which C#'s lock statement is based on) to manage thread-safe access to a value; for managing concurrent adding and removing items from collections, use the thread-safe collection types from the System.Collections.Concurrent namespace.
# Thread-safe
class AlmostAStaticClass {

    static hidden [string] $_message = ''                # conceptually, a *field*
    static hidden [object] $_syncObj = [object]::new()   # sync object for [Threading.Monitor]

    hidden AlmostAStaticClass() { Throw "Instantiation not supported; use only static members." }

    static SetMessage([string] $text) {
        Write-Verbose -vb $text
        # Guard against concurrent access by multiple threads.
        # Enter/Exit is wrapped in try/finally so the lock is always
        # released, even if an exception occurs while it is held.
        [Threading.Monitor]::Enter([AlmostAStaticClass]::_syncObj)
        try {
            [AlmostAStaticClass]::_message = $text
        }
        finally {
            [Threading.Monitor]::Exit([AlmostAStaticClass]::_syncObj)
        }
    }

    static [string] GetMessage() {
        # Guard against concurrent access by multiple threads.
        # NOTE: This only works with [string] values and instances of *value types*,
        # or when returning an *element from a collection* that is only subject
        # to concurrency in terms of *adding and removing* elements.
        # For all other (reference) types - entire (non-concurrent) collections
        # or individual objects whose properties are themselves subject to
        # concurrent access - the *calling* code must perform the locking.
        [Threading.Monitor]::Enter([AlmostAStaticClass]::_syncObj)
        try {
            $msg = [AlmostAStaticClass]::_message
        }
        finally {
            [Threading.Monitor]::Exit([AlmostAStaticClass]::_syncObj)
        }
        return $msg
    }

    static [string] DoSomething() { return ([AlmostAStaticClass]::GetMessage() + '!') }
}
Note that, similar to crossing module boundaries, using threads also requires passing the class as a type object to other threads, which, however, is more conveniently done with the $using: scope specifier; a simple (contrived) example:
# !! BROKEN AS OF v7.0
$class = [AlmostAStaticClass]
1..10 | ForEach-Object -Parallel { ($using:class)::SetMessage($_) }
Note: This cross-thread use is actually broken as of v7.0, due to classes currently being tied to the defining runspace - see this GitHub issue. It remains to be seen whether a solution will be provided.
As you can see, the limitations of PowerShell classes make implementing such scenarios cumbersome; using Add-Type with ad hoc-compiled C# code is worth considering as an alternative.
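To sketch that alternative (the MessageStore name and its member are invented for illustration), you could compile a true static class on the fly by handing the following source to Add-Type -TypeDefinition as a here-string:
// C# source to pass to PowerShell via: Add-Type -TypeDefinition $csharpSource
// A genuinely static class - impossible in PowerShell itself - with no
// hidden-but-still-public constructor to worry about.
public static class MessageStore
{
    private static readonly object _syncObj = new object();
    private static string _message = "";

    // Thread-safe access via C#'s lock statement.
    public static string Message
    {
        get { lock (_syncObj) { return _message; } }
        set { lock (_syncObj) { _message = value; } }
    }
}
Script code would then use [MessageStore]::Message just as with the PowerShell class above, but with real access control; and, being a compiled .NET type, it is not tied to the defining runspace the way PowerShell classes are.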
This GitHub meta issue is a compilation of various issues relating to PowerShell classes; while they may eventually get resolved, it is unlikely that PowerShell's classes will ever reach feature parity with C#; after all, OOP is not the focus of PowerShell's scripting language (except with respect to using preexisting objects).
As mentioned in the comments, nothing in the code you linked to requires a singleton.
If you want to retain a parent-child relationship between your ProcessQueue and the related Task instances, that can be solved structurally.
Simply require injection of a ProcessQueue instance in the Task constructor:
class ProcessQueue
{
hidden [System.Collections.Generic.List[object]]$Queue = [System.Collections.Generic.List[object]]::New()
}
class Task
{
[ProcessQueue]$Parent
[string]$Id
Task([string]$id, [ProcessQueue]$parent)
{
$this.Parent = $parent
$this.Id = $id
}
}
When instantiating the object hierarchy:
$myQueue = [ProcessQueue]::new()
$myQueue.Add([Task]@{ Id = "id"; Parent = $myQueue })
... or refactor ProcessQueue.Add() to take care of constructing the task:
class ProcessQueue
{
    hidden [System.Collections.Generic.List[object]]$Queue = [System.Collections.Generic.List[object]]::New()

    [Task] Add([string]$Id) {
        $newTask = [Task]::new($Id, $this)
        $this.Queue.Add($newTask)
        return $newTask
    }
}
At which point you just use ProcessQueue.Add() as a proxy for the [Task] constructor:
$newTask = $myQueue.Add($id)
$newTask.DisplayName = "Display name goes here"
Next time you need to search related tasks from a single Task instance, you just do:
$relatedTasks = $task.Parent.Find($whatever)
I've been creating a prototype for a modern MUD engine. A MUD is a simple form of simulation and provides a good testbed for a concept I'm working on. This has led me to a couple of places in my code where things are a bit unclear and the design is coming into question (probably because it's flawed). I'm using model first (I may need to change this) and I've designed a top-down architecture of game objects. I may be doing this completely wrong.
What I've done is create a MUDObject entity. This entity is effectively a base for all of my other logical constructs, such as characters, their items, race, etc. I've also created a set of three meta classes which are used for logical purposes as well Attributes, Events, and Flags. They are fairly straightforward, and are all inherited from MUDObject.
The MUDObject class is designed to provide default data behavior for all of the objects; this includes deletion of dead objects, the automatic clearing of floors, etc. It is also designed so that this logic can be overridden virtually if needed, for example checking a room to see if an effect has ended and deleting the effect (removing the flag).
public partial class MUDObject
{
    public virtual void Update()
    {
        if (this.LifeTime.Value.CompareTo(DateTime.Now) > 0)
        {
            using (var context = new ReduxDataContext())
            {
                context.MUDObjects.DeleteObject(this);
            }
        }
    }

    public virtual void Pause()
    {
    }

    public virtual void Resume()
    {
    }

    public virtual void Stop()
    {
    }
}
I've also got a class World, derived from MUDObject, which contains the areas and rooms (which in turn contain the game objects) and handles the timer that runs the updates. (The timer will probably be moved; it's here because, if it works, it limits updates to only the objects in-world at the time.)
public partial class World
{
    private Timer ticker;

    public void Start()
    {
        this.ticker = new Timer(3000.0);
        this.ticker.Elapsed += ticker_Elapsed;
        this.ticker.Start();
    }

    private void ticker_Elapsed(object sender, ElapsedEventArgs e)
    {
        this.Update();
    }

    public override void Update()
    {
        this.CurrentTime += 3;
        // update contents
        base.Update();
    }

    public override void Pause()
    {
        this.ticker.Enabled = false;
        // update contents
        base.Pause();
    }

    public override void Resume()
    {
        this.ticker.Enabled = true;
        // update contents
        base.Resume();
    }

    public override void Stop()
    {
        this.ticker.Stop();
        // update contents
        base.Stop();
    }
}
I'm curious about two things.
1) Is there a way to recode the context so that it has separate ObjectSets for each type derived from MUDObject, i.e. context.MUDObjects.Flags or context.Flags? If not, how can I query a child type specifically?
2) Does the Update/Pause/Resume/Stop architecture I'm using work properly when placed into the EF entities directly, given that it's for data purposes only? Will locking be an issue? Does the partial class automatically commit changes when they are made? Would I be better off using a flat repository and doing this in the game engine directly?
1) Is there a way to recode the context so that it has separate ObjectSets for each type derived from MUDObject?
Yes, there is. If you decide that you want a base class for all your entities, it is common to have an abstract base class that is not part of the Entity Framework model. The model contains only the derived types, and the context contains DbSets of the derived types (if it is a DbContext), like
public DbSet<Flag> Flags { get; set; }
If appropriate you can implement inheritance between classes, but that would be to express polymorphism, not to implement common persistence-related behaviour.
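A sketch of that layout (the members shown are illustrative, not taken from the question's model):
using System.Data.Entity;

// The abstract base class is plain C# - it is NOT mapped by EF.
public abstract class MUDObject
{
    public int Id { get; set; }
}

// Only the derived types participate in the model...
public class Flag : MUDObject
{
    public string Name { get; set; }
}

// ...and the context exposes one set per derived type.
public class MudContext : DbContext
{
    public DbSet<Flag> Flags { get; set; }
    // likewise DbSet<Attribute>, DbSet<Event>, ...
}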
2) Does the Update/Pause/Resume/Stop architecture I'm using work properly when placed into the EF entities directly?
No. Entities are not supposed to know anything about persistence. The context is responsible for creating them, tracking their changes and updating/deleting them. I think that also answers your question about automatically committing changes: no.
Elaboration:
I think here it's good to bring up the single responsibility principle. A general pattern would be to
let a context populate objects from a store
let the object act according to their responsibilities (the simulation)
let a context store their state whenever necessary
I think Pause/Resume/Stop could be responsibilities of MUD objects. Update is an altogether different kind of action and responsibility.
Now I have to speculate, but take your World class. You should be able to express its responsibility in a short phrase, maybe something like "harbour other objects" or "define boundaries". I don't think it should do the timing. I think the timing should be the responsibility of some core utility which signals that a time interval has elapsed. Other objects know how to respond to that (e.g. do some state change, or, the context or repository, save changes).
Well, this is only an example of how to think about it, probably far from correct.
One other thing is that I think saving changes should be done not nearly as often as state changes of the objects that carry out the simulation. It would probably slow down the process dramatically. Maybe it should be done in longer intervals or by a user action.
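To make the timing idea concrete, here is a minimal sketch (GameClock and its members are invented names): a single core utility owns the timer and merely signals; everything else subscribes.
using System;
using System.Timers;

// One core utility owns the timer and does nothing else...
public class GameClock
{
    private readonly Timer ticker = new Timer(3000.0);

    public event EventHandler Tick;

    public GameClock()
    {
        ticker.Elapsed += delegate
        {
            EventHandler handler = Tick;
            if (handler != null) handler(this, EventArgs.Empty);
        };
    }

    public void Start() { ticker.Start(); }
    public void Stop() { ticker.Stop(); }
}

// ...while other objects merely react to it, e.g.:
//   clock.Tick += (s, e) => world.Update();
//   clock.Tick += (s, e) => repository.SaveIfDue();   // save far less often
This keeps World free of timing concerns and lets you pick a much longer interval for persistence than for simulation updates.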
First thing to say: if you are using EF 4.1 (as the question is tagged), you should really consider moving to version 5.0 (you will need a .NET 4.5 project for this).
Besides several performance improvements, you can benefit from other features as well. The code I will show you works with 5.0 (I don't know whether it works with 4.1).
Now, on to your several questions:
Is there a way to recode the context so that it has separate
ObjectSets for each type derived from MUDObject? If not how can I
query a child type specifically?
i.e. context.MUDObjects.Flags or context.Flags
Yes, you can, but calling it is a little different: you will not have Context.Worlds; only the base class can be queried that way. If you want to get the set of Worlds (which inherit from MUDObject), you call:
var worlds = context.MUDObjects.OfType<World>();
Or you can do in direct way by using generics:
var worlds = context.Set<World>();
If you define your inheritance the right way, you should have an abstract class called MUDObject, and all the others should inherit from that class. EF can work perfectly well with this; you just need to set it up correctly.
Does the Update/Pause/Resume/Stop architecture I'm using work properly
when placed into the EF entities directly? given than it's for data
purposes only?
In this case I think you should consider a design pattern called the Strategy pattern; do some research on it, as it will fit your objects (a rough sketch follows).
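One way Strategy might separate the varying behaviour from the EF entities (all names here are invented; this is not drop-in code):
// The behaviour that varies is extracted behind an interface...
public interface ILifecycleStrategy
{
    void Update(MUDObject target);
}

// ...with one concrete strategy per kind of behaviour.
public class DecayingObjectStrategy : ILifecycleStrategy
{
    public void Update(MUDObject target)
    {
        // e.g. mark the object for removal once its lifetime has elapsed
    }
}

// The entity stays a plain data carrier and simply delegates.
public partial class MUDObject
{
    public ILifecycleStrategy Lifecycle { get; set; }

    public void Tick()
    {
        if (Lifecycle != null) Lifecycle.Update(this);
    }
}
Swapping strategies at runtime (a paused object, a decaying one, an inert one) then requires no changes to the entity classes themselves.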
Will locking be an issue?
Depends on how you develop the system....
Does the partial class automatically commit changes when they are
made?
I didn't quite understand that question... Partial classes are just like regular classes; they are simply split across different files, but when compiled (or even at design time, because of vshost.exe) they are in fact just one class. Being partial has no effect on when changes are committed; see the example below.
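To illustrate (Flag is used here just as an example entity):
// Flag.Designer.cs - the half generated by Entity Framework
public partial class Flag
{
    public bool IsActive { get; set; }
}

// Flag.cs - your hand-written half, in a separate file
public partial class Flag
{
    public void Deactivate()
    {
        IsActive = false;   // an in-memory change only
    }
}

// The compiler merges both declarations into a single Flag class.
// Nothing here touches the database; changes are persisted only when
// the context's SaveChanges() is called explicitly.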
Would I be better off using a flat repository and doing this in the
game engine directly?
Hard to answer; it all depends on the requirements of the game, the deployment strategy....
I have a lookup table (LUT) of thousands of integers that I use on a fair number of requests to compute things based on what was fetched from the database.
If I simply create a standard singleton to hold the LUT, is it automatically persisted between requests or do I specifically need to push it to the Application state?
If they are automatically persisted, then what is the difference storing them with the Application state?
What would a correct singleton implementation look like? It doesn't need to be lazily initialized, but it needs to be thread-safe (thousands of theoretical users per server instance) and have good performance.
EDIT: Jon Skeet's 4th version looks promising http://csharpindepth.com/Articles/General/Singleton.aspx
public sealed class Singleton
{
    static readonly Singleton instance = new Singleton();

    // Explicit static constructor to tell C# compiler
    // not to mark type as beforefieldinit
    static Singleton()
    {
    }

    Singleton()
    {
    }

    public static Singleton Instance
    {
        get
        {
            return instance;
        }
    }

    // randomguy's specific stuff. Does this look good to you?
    private int[] lut = new int[5000];

    public int Compute(Product p) {
        return lut[p.Goo];
    }
}
Yes, static members persist (not the same thing as persisted - it's not "saved", it simply never goes away), and that includes implementations of a singleton. You get a degree of lazy initialisation for free: if it's created in a static assignment or static constructor, it won't be created until the relevant class is first used. That creation locks by default, but all other uses have to be made thread-safe yourself, as you say. Given the degree of concurrency involved, unless the singleton is going to be immutable (your look-up table doesn't change for the application lifetime) you have to be very careful about how you update it. One way is a fake singleton: on update you create a new object and then lock around assigning it to replace the current value (not strictly a singleton, though it looks like one "from the outside").
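A sketch of that fake-singleton swap (illustrative; the loader is a placeholder): readers always see a complete table, and an update replaces the whole reference rather than mutating in place:
public sealed class LookupHolder
{
    private static readonly object swapLock = new object();

    // volatile so readers promptly see a newly swapped-in table
    private static volatile int[] current = BuildTable();

    // Readers never lock: they use whatever table is currently published.
    // The array is never mutated after publication, so this is safe.
    public static int Lookup(int index)
    {
        return current[index];
    }

    // Writers build a complete replacement, then swap the reference.
    public static void Reload()
    {
        int[] fresh = BuildTable();
        lock (swapLock)   // serialise concurrent reloads
        {
            current = fresh;
        }
    }

    private static int[] BuildTable()
    {
        return new int[5000];   // placeholder: load the real values here
    }
}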
The big danger is that anything introducing global state is suspect, and especially when dealing with a stateless protocol like the web. It can be used well though, especially as an in-memory cache of permanent or near-permanent data, particularly if it involves an object graph that cannot be easily obtained quickly from a database.
The pitfalls are considerable though, so be careful. In particular, the risk of locking issues cannot be overstated.
Edit, to match the edit in the question:
My big concern would be how the array gets initialised. Clearly this example is incomplete, as it'll only ever have 0 for each item. If it gets set at initialisation and is then read-only, fine. If it's mutable, be very, very careful about your threading.
Also be aware of the negative effect of too many such look-ups on scaling. While you save on most requests by having pre-calculated values, the effect is a period of very heavy work whenever the singleton is updated. A long-ish start-up will likely be tolerable (as it won't happen very often), but arbitrary slow-downs happening afterwards can be tricky to trace to their source.
I wouldn't rely on a static being persisted between requests. [There is always the, albeit unlikely, chance that the process would be reset between requests.] I'd recommend HttpContext's Cache object for persisting shared resources between requests.
Edit: See Jon's comments about read-only locking.
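A minimal sketch of the cache approach (the key name and the loader are invented):
using System.Web;           // HttpRuntime
using System.Web.Caching;   // Cache

public static class LutCache
{
    // Fetch the LUT from the application-wide cache, rebuilding on a miss.
    // The runtime may evict cached items, so always be ready to rebuild.
    public static int[] Get()
    {
        int[] lut = (int[])HttpRuntime.Cache["lut"];
        if (lut == null)
        {
            lut = BuildLookupTable();   // hypothetical loader
            HttpRuntime.Cache.Insert("lut", lut);
        }
        return lut;
    }

    private static int[] BuildLookupTable()
    {
        return new int[5000];   // placeholder
    }
}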
It's been a while since I've dealt with singletons (I prefer letting my IoC container deal with lifetimes), but here's how you can handle the thread-safety issues. You'll need to lock around anything that mutates the state of the singleton; read-only operations, like your Compute(Product), won't need locking.
// I typically create one lock per collection, but you really need one
// per set of atomic operations; if you ever modify two collections
// together, use one lock.
private object lutLock = new object();
private int[] lut = new int[5000];

public int Compute(Product p) {
    return lut[p.Goo];
}

public void SetValue(int index, int value)
{
    // Lock as little code as possible; since this step is read-only
    // we don't lock it.
    if (index < 0 || index >= lut.Length)
    {
        throw new ArgumentException("Index not in range", "index");
    }
    // Going to mutate state, so we need a lock now.
    lock (lutLock)
    {
        lut[index] = value;
    }
}
I'm working with a database where the designers decided to mark every table with an IsHistorical bit column. There was no consideration for proper modeling, and there is no way I can change the schema.
This is causing some friction when developing CRUD screens that interact with navigation properties: I cannot simply take a Product and then edit its EntityCollection; I have to write IsHistorical checks manually all over the place, and it's driving me mad.
Additions are also horrible, because so far I've written manual checks to see whether an addition is just a soft-deleted record, so that instead of adding a duplicate entity I can simply toggle IsHistorical.
The three options I've considered are:
Modifying the t4 templates to include IsHistorical checks and synchronization.
Intercept deletions and additions in the ObjectContext, toggle the IsHistorical column, and then synch the object state.
Subscribe to the AssociationChanged event and toggle the IsHistorical column there.
Does anybody have any experience with this or could recommend the most painless approach?
Note: Yes, I know, this is bad modeling. I've read the same articles about soft deletes that you have. It stinks I have to deal with this requirement but I do. I just want the most painless method of dealing with soft deletes without writing the same code for every navigation property in my database.
Note #2: LukLed's answer is technically correct, although it forces you into a really bad, poor man's, graph-less ORM pattern. The problem lies in the fact that now I'm required to rip all the "deleted" objects out of the graph and then call the Delete method on each one. That's not really going to save me much manual ceremonial coding: instead of writing manual IsHistorical checks, now I'm gathering deleted objects and looping through them.
I am using a generic repository in my code. You could do it like:
public class Repository<T> : IRepository<T> where T : EntityObject
{
    public void Delete(T obj)
    {
        if (obj is ISoftDelete)
            ((ISoftDelete)obj).IsHistorical = true;
        else
            _ctx.DeleteObject(obj);
    }
}
Your List() method would filter by IsHistorical too.
EDIT:
ISoftDelete interface:
public interface ISoftDelete
{
bool IsHistorical { get; set; }
}
Entity classes can easily be marked as ISoftDelete, because they are partial. The partial class definition needs to be added in a separate file:
public partial class MyClass : EntityObject, ISoftDelete
{
}
As I'm sure you're aware, there is not going to be a great solution to this problem when you cannot modify the schema. Given that you don't like the Repository option (though, I wonder if you're not being just a bit hasty to dismiss it), here's the best I can come up with:
Handle ObjectContext.SavingChanges
When that event fires, trawl through the ObjectStateManager looking for objects in the deleted state. If they have an IsHistorical property, set it, and change the state of the object to modified.
This could get tricky when it comes to associations/relationships, but I think it more or less does what you want.
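A rough, untested sketch of that handler (MyEntities stands in for your generated ObjectContext; ISoftDelete is the interface from the earlier answer):
using System.Data;
using System.Data.Objects;
using System.Linq;

public partial class MyEntities   // the generated ObjectContext
{
    partial void OnContextCreated()
    {
        this.SavingChanges += (sender, e) =>
        {
            // Find the entity entries queued up for deletion...
            var deleted = this.ObjectStateManager
                .GetObjectStateEntries(EntityState.Deleted)
                .Where(entry => !entry.IsRelationship)
                .ToList();

            foreach (var entry in deleted)
            {
                var softDelete = entry.Entity as ISoftDelete;
                if (softDelete != null)
                {
                    // ...flip the flag and turn the DELETE into an UPDATE.
                    softDelete.IsHistorical = true;
                    entry.ChangeState(EntityState.Modified);
                }
            }
        };
    }
}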
I use the repository pattern too, with code similar to LukLed's, but I use reflection to see whether the IsHistorical property is there (since it's an agreed-upon naming convention):
public class Repository<TEntityModel> where TEntityModel : EntityObject, new()
{
    public void Delete(TEntityModel entity)
    {
        // See if the entity has an "IsHistorical" flag.
        if (typeof(TEntityModel).GetProperty("IsHistorical") != null)
        {
            // Perform a soft delete.
            var historicalProperty = entity.GetType().GetProperty("IsHistorical");
            historicalProperty.SetValue(entity, true, null);
        }
        else
        {
            // Perform a real delete.
            EntityContext.DeleteObject(entity);
        }
        EntityContext.SaveChanges();
    }
}
Usage is then simply:
using (var fubarRepository = new Repository<Fubar>())
{
fubarRepository.Delete(someFubar);
}
Of course, in practice, you extend this to allow deletes by passing PK instead of an instantiated entity, etc.
Okay, so here's a problem I'm running into.
I have some classes in my application that have methods that require a database connection. I am torn between two different ways to design the classes, both of which are centered around dependency injection:
Provide a property for the connection that is set by the caller prior to method invocation. This has a few drawbacks.
Every method relying on the connection property has to validate that property: ensure that it isn't null, that it's open, and that it isn't involved in a transaction if that would muck up the operation.
If the connection property is unexpectedly closed, all the methods have to either (1.) throw an exception or (2.) coerce it open. Depending on the level of robustness you want, either case is appropriate. (Note that this is different from a connection that is passed to a method in that the reference to the connection exists for the lifetime of the object, not simply for the lifetime of the method invocation. Consequently, the volatility of the connection just seems higher to me.)
Providing a Connection property seems (to me, anyway) to scream out for a corresponding Transaction property. This creates additional overhead in the documentation, since you'd have to make it fairly obvious when the transaction was being used, and when it wasn't.
On the other hand, Microsoft seems to favor the whole set-and-invoke paradigm.
Require the connection to be passed as an argument to the method. This has a few advantages and disadvantages:
The parameter list is naturally larger. This is irksome to me, primarily at the point of call.
While a connection (and a transaction) must still be validated prior to use, the reference to it exists only for the duration of the method call.
The point of call is, however, quite clear. It's very obvious that you must provide the connection, and that the method won't be creating one behind your back automagically.
If a method doesn't require a transaction (say a method that only retrieves data from the database), no transaction is required. There's no lack of clarity due to the method signature.
If a method requires a transaction, it's very clear due to the method signature. Again, there's no lack of clarity.
Because the class does not expose a Connection or a Transaction property, there's no chance of callers trying to drill down through them to their properties and methods, thus enforcing the Law of Demeter.
I know, it's a lot. But on the one hand, there's the Microsoft Way: Provide properties, let the caller set the properties, and then invoke methods. That way, you don't have to create complex constructors or factory methods and the like. Also, avoid methods with lots of arguments.
Then, there's the simple fact that if I expose these two properties on my objects, they'll tend to encourage consumers to use them in nefarious ways. (Not that I'm responsible for that, but still.) But I just don't really want to write crappy code.
If you were in my shoes, what would you do?
Here is a third pattern to consider:
Create a class called ConnectionScope, which provides access to a connection
Any class at any time, can create a ConnectionScope
ConnectionScope has a property called Connection, which always returns a valid connection
Any (and every) ConnectionScope gives access to the same underlying connection object (within some scope, maybe within the same thread, or process)
You then are free to implement that Connection property however you want, and your classes don't have a property that needs to be set, nor is the connection a parameter, nor do they need to worry about opening or closing connections.
More details:
In C#, I'd recommend that ConnectionScope implement IDisposable; that way your classes can write code like "using ( var scope = new ConnectionScope() )", and ConnectionScope can free the connection (if appropriate) when it is disposed
If you can limit yourself to one connection per thread (or process) then you can easily set the connection string in a [thread] static variable in ConnectionScope
You can then use reference counting to ensure that your single connection is re-used when it's already open and released when no one is using it
Updated: Here is some simplified sample code:
public class ConnectionScope : IDisposable
{
    private static Connection m_Connection;
    private static int m_ReferenceCount;

    public Connection Connection
    {
        get
        {
            return m_Connection;
        }
    }

    public ConnectionScope()
    {
        // Open the shared connection on first use...
        if ( m_Connection == null )
        {
            m_Connection = OpenConnection();
        }
        // ...and count every scope that borrows it.
        m_ReferenceCount++;
    }

    public void Dispose()
    {
        // Release the connection when the last scope is disposed.
        m_ReferenceCount--;
        if ( m_ReferenceCount == 0 )
        {
            m_Connection.Dispose();
            m_Connection = null;
        }
    }
}
Example code of how one (any) of your classes would use it:
using ( var scope = new ConnectionScope() )
{
    scope.Connection.ExecuteCommand( ... );
}
I would prefer the latter method. It sounds like your classes use the database connection as a conduit to the persistence layer. Making the caller pass in the database connection makes it clear that this is the case. If the connection/transaction were represented as a property of the object, then things are not so clear and all of the ownership and lifetime issues come out. Better to avoid them from the start.
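For what it's worth, the parameter-passing style favoured here looks something like the following sketch (OrderReader and the SQL are invented for illustration):
using System;
using System.Data;

public class OrderReader
{
    // The caller owns the connection; the method merely borrows it,
    // and the signature makes the dependency impossible to miss.
    public int CountOrders(IDbConnection connection)
    {
        using (IDbCommand command = connection.CreateCommand())
        {
            command.CommandText = "SELECT COUNT(*) FROM Orders";
            return Convert.ToInt32(command.ExecuteScalar());
        }
    }

    // A method that writes takes the transaction explicitly, so the
    // requirement is visible at the point of call.
    public void ArchiveOrder(IDbConnection connection,
                             IDbTransaction transaction, int orderId)
    {
        using (IDbCommand command = connection.CreateCommand())
        {
            command.Transaction = transaction;
            command.CommandText = "UPDATE Orders SET Archived = 1 WHERE Id = @id";
            IDbDataParameter p = command.CreateParameter();
            p.ParameterName = "@id";
            p.Value = orderId;
            command.Parameters.Add(p);
            command.ExecuteNonQuery();
        }
    }
}
Note how CountOrders, which only reads, needs no transaction parameter at all, which is exactly the clarity the method-argument approach buys.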