IEnumerable<T> provides two GetEnumerator methods - what is the difference between them?

When I implement the IEnumerable<T> interface I see two GetEnumerator methods: one returning IEnumerator and the other IEnumerator<T>. When would I use one or the other?

You usually implement both. One is the newer, generic version that returns a type-safe enumerator (IEnumerator<T>). The other is for compatibility with legacy code (it returns the non-generic IEnumerator). A typical implementation is:
public IEnumerator<T> GetEnumerator() {
    foreach (T item in items) {
        yield return item;
    }
}

IEnumerator IEnumerable.GetEnumerator() {
    return GetEnumerator();
}

The reason there are two methods is that IEnumerable<T> inherits from the IEnumerable interface, so you are seeing the generic method from IEnumerable<T> and the non-generic method from IEnumerable.
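You can see this in the declarations themselves (simplified from the BCL; note the new modifier hiding the non-generic method):

public interface IEnumerable
{
    IEnumerator GetEnumerator();
}

public interface IEnumerable<T> : IEnumerable
{
    // Hides IEnumerable.GetEnumerator(); an implementer must supply both.
    new IEnumerator<T> GetEnumerator();
}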
Here is how you want to implement the interface in your type:
class Foo : IEnumerable<Foo>
{
    public IEnumerator<Foo> GetEnumerator()
    {
        // do your thing here
    }

    // use an explicit interface member implementation (EIMI) here
    // and simply call the generic method
    IEnumerator IEnumerable.GetEnumerator()
    {
        return this.GetEnumerator();
    }
}

If you are implementing the generic IEnumerable<T> interface, the generic GetEnumerator method is the one that will pretty much always be used - unless you cast your object explicitly to the (non-generic) IEnumerable.
The reason is backwards compatibility with .NET 1.0/1.1, which didn't support generics.

Usually the non-generic GetEnumerator() just calls the generic GetEnumerator(), so there should not be much difference in behavior. As for why there are two methods, this is done for backwards compatibility and for use in situations where T is not of great interest (or is simply unknown).
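To see the difference from the caller's side, here is a small sketch using List<int> (any IEnumerable<T> implementation behaves the same way):

using System.Collections;
using System.Collections.Generic;

class Demo
{
    static void Main()
    {
        var items = new List<int> { 1, 2, 3 };

        // Generic path: strongly typed, Current is an int.
        IEnumerator<int> typed = items.GetEnumerator();

        // Non-generic path: only reachable through the IEnumerable interface;
        // Current comes back as object (each int is boxed).
        IEnumerator untyped = ((IEnumerable)items).GetEnumerator();
        while (untyped.MoveNext())
        {
            object current = untyped.Current;
        }
    }
}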

One is generic, the other is not. I believe the compiler prefers to use the generic overload.

Related

How can an abstract implement an interface?

I have a common interface that describes access to the output stream like this:
interface IOutput {
    function writeInteger(aValue:Int):Void;
}
And I have an abstract implementation of this interface based on standard haxe.io.BytesOutput class:
abstract COutput(BytesOutput) from BytesOutput {
    public inline function new(aData:BytesOutput) {
        this = aData;
    }

    public inline function writeInteger(aValue:Int):Void {
        this.writeInt32(aValue);
    }
}
Though this abstract truly implements the interface described above, there's no direct reference to the interface, and when I try to use it like this:
class Main {
    public static function out(aOutput:IOutput) {
        aOutput.writeInteger(0);
    }

    public static function main() {
        var output:COutput = new BytesOutput();
        out(output); // type error
    }
}
The compiler throws an error: COutput should be IOutput. I can solve this problem only by using a common class that wraps BytesOutput and implements IOutput.
My question is how to show the Haxe compiler that the abstract implements the interface.
Abstracts can't implement interfaces because they're a compile-time feature and don't exist at runtime. This conflicts with interfaces: they do exist at runtime, and dynamic runtime checks like Std.is(something, IOutput) have to work.
Haxe also has a mechanism called structural subtyping that can be used as an alternative to interfaces. With this approach, there's no need for an explicit implements declaration; it's good enough if something unifies with a structure:
typedef IOutput = {
    function writeInteger(aValue:Int):Void;
}
Unfortunately, abstracts aren't compatible with structural subtyping either due to the way they're implemented.
Have you considered using static extensions instead? At least for your simple example, that seems like the perfect solution for making a writeInteger() method available for any haxe.io.Output:
import haxe.io.Output;
import haxe.io.BytesOutput;

using Main.OutputExtensions;

class Main {
    static function main() {
        var output = new BytesOutput();
        output.writeInteger(0);
    }
}

class OutputExtensions {
    public static function writeInteger(output:Output, value:Int):Void {
        output.writeInt32(value);
    }
}
You could even combine this with structural subtyping so writeInteger() becomes available on anything that has a writeInt32() method (try.haxe link):
typedef Int32Writable = {
    function writeInt32(value:Int):Void;
}
As @Gama11 states, abstracts cannot implement interfaces. In Haxe, for a type to implement an interface, it must compile to something class-like that can be called through the interface's methods without any magic happening. That is, to use a type as its interface, there needs to be a "real" class implementing that type. Abstracts in Haxe compile down to their base type; the abstract itself is entirely invisible after compilation. Thus, at runtime, there is no instance of a class with the methods defined in your abstract that implement the interface.
However, you can make your abstract appear to implement an interface by defining an implicit conversion to the interface you are trying to implement. For your example, the following might work:
interface IOutput {
    function writeInteger(aValue:Int):Void;
}

abstract COutput(BytesOutput) from BytesOutput {
    public inline function new(aData:BytesOutput) {
        this = aData;
    }

    @:to
    public inline function toIOutput():IOutput {
        return new COutputWrapper((cast this : COutput));
    }

    public inline function writeInteger(aValue:Int):Void {
        this.writeInt32(aValue);
    }
}

class COutputWrapper implements IOutput {
    var cOutput(default, null):COutput;

    public function new(cOutput) {
        this.cOutput = cOutput;
    }

    public function writeInteger(aValue:Int) {
        cOutput.writeInteger(aValue);
    }
}

class Main {
    public static function out(aOutput:IOutput) {
        aOutput.writeInteger(0);
    }

    public static function main() {
        var output:COutput = new BytesOutput();
        out(output);
        out(output);
    }
}
Run on try.haxe.org
Note that each time an implicit conversion happens, a new instance of the wrapper will be constructed. This may have performance implications. If you only access your value through its interface, consider setting the type of your variable to the interface rather than the abstract.
This is similar to "boxing" a primitive/value type in C#. In C#, value types, defined using the struct keyword, are allowed to implement interfaces. Like an abstract in Haxe, a value type in C# is compiled (by the JITter) into untyped code which simply accesses and manipulates the value directly for certain operations. However, when a struct is implicitly cast to an implemented interface, the runtime constructs a wrapper object which stores a copy of the value and implements the interface, similar to our manually authored wrapper class (this boxing is performed by the IL box instruction as part of JITing; see M() in this example). It is conceivable that Haxe could add a feature to automatically generate such a wrapper class for you like C# does for struct types, but that is not currently a feature. You may, however, do it yourself, as exemplified above.
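For comparison, here is a minimal C# sketch of that boxing behaviour (the names mirror the Haxe example and are purely illustrative):

using System;

interface IOutput
{
    void WriteInteger(int value);
}

// A value type implementing an interface.
struct COutput : IOutput
{
    public void WriteInteger(int value)
    {
        Console.WriteLine(value);
    }
}

class Program
{
    static void Out(IOutput output)
    {
        output.WriteInteger(0);
    }

    static void Main()
    {
        var output = new COutput();
        // Each implicit conversion to IOutput boxes the struct: the runtime
        // allocates a heap wrapper, much like the hand-written COutputWrapper.
        Out(output);
        Out(output);
    }
}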

Can I keep Entity Framework context as class variable?

I'm used to working with database connections where you connect/open/close as quickly as possible in each method. I'm now working with Entity Framework, so my methods all do this type of thing:
using (var context = new FooEntities()) {
    // LINQ query here
}
I've been told that with Entity Framework I can actually have that context variable be a class level variable and not have to instantiate it in each method. Is that really the case, or should I continue this pattern in each method?
I'm using version 5.0.0 of the framework if that makes a difference.
It depends on how you are expecting it to act. The only reason you'd want it to stick around is if you wanted to use the caching feature of DbContext across multiple method calls. But since it's pulling connections from the pool anyway, disposing of a DbContext shouldn't really impact performance when creating a new one.
Personally, I create the context as close to where it's used as possible and kill it as soon as possible. Get calls should use AsNoTracking() to speed things up considerably if you don't care about updating the entities later. You could also create a DbContextFactory so each class can control that interaction as it sees fit (i.e. method A always creates a new one, but methods B and C could share if either one is called first). That could cause its own issues down the road, but then you opt into those conditions.
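A minimal sketch of that factory idea, assuming the FooEntities context from the question exposes a Foos set (the Foo entity, factory, and service names are hypothetical):

using System.Data.Entity;
using System.Linq;

public interface IDbContextFactory
{
    FooEntities Create();
}

public class DbContextFactory : IDbContextFactory
{
    public FooEntities Create()
    {
        return new FooEntities();
    }
}

public class FooService
{
    private readonly IDbContextFactory _factory;

    public FooService(IDbContextFactory factory)
    {
        _factory = factory;
    }

    public Foo GetFoo(int id)
    {
        // Create late, dispose early; AsNoTracking speeds up read-only queries.
        using (var context = _factory.Create())
        {
            return context.Foos.AsNoTracking().SingleOrDefault(f => f.Id == id);
        }
    }
}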
You can have the Context as a property of a class, but then you have to consider how to control the disposal of the Context. For example:
public class UnitOfWork : IDisposable
{
    public DbContext Context { get; set; }

    public UnitOfWork()
    {
        Context = new FooEntities(); // initialize your context here
    }

    public void DoWorkWithContext1()
    {
        // anything you need
    }

    public void DoWorkWithContext2()
    {
        // anything you need
    }

    public void Dispose()
    {
        if (Context != null)
            Context.Dispose();
    }
}
Then you'll use the class in this way:
using (var unit = new UnitOfWork())
{
    unit.DoWorkWithContext1();
    unit.DoWorkWithContext2();
}

aspectj - how to detect that a method of an annotated class is calling another method of the same class, i.e. nested calls

I have an annotation @AppManaged which is used to mark classes that need to have certain behavior woven in. One behavior woven in is converting method calls into concurrent GPars (a Groovy parallel library) calls instead.

However, I do not want nested method calls on the same class to be advised.
So...
@AppManaged
class someclass
{
    public void method1() {
        method2(); // should not be advised
    }

    public void method2() {
    }
}
But if the method call is from one @AppManaged class to another, then it is supposed to be advised, so something like !cflowbelow(@within(AppManaged)) does not help.
@AppManaged
class Someotherclass
{
    private someclass s;

    public void method3() {
        s.method2(); // should be advised
    }
}
Basically I am looking for a pointcut which will match only nested calls within the same object instance and prevent them from being advised.
Any help would be highly appreciated.
Thanks and Regards
Abraham Menacherry.
How about:
// executions of any method of an @AppManaged type, binding the executing object
pointcut appManagedExecution(Object appManaged) : execution(* (@AppManaged *).*(..)) && this(appManaged);

// calls to any method of an @AppManaged type, binding the call target
pointcut appManagedCall(Object called) : call(* (@AppManaged *).*(..)) && target(called);

// calls made below an @AppManaged execution, matched only when caller and callee are different instances
pointcut routed(Object appManaged, Object called) : appManagedCall(called) && cflowbelow(appManagedExecution(appManaged)) && if(called != appManaged);

Function Import and Repository pattern with Entity Framework 4.0

Could anyone advise me on how they've implemented the use of Function Imports when using the Repository pattern against EF 4.0?
We have a table mapped to a Candidate entity and also a Function Import off an existing sproc that maps to Candidate. This works great in EF, but we're abstracting by use of repositories which take on their constructor the IObjectSet<T>, where T is the POCO entity. However, this means I'm unable to get a reference to the function import. The only way I can see of doing this is passing a reference to the ObjectContext to the repositories that need it, but this feels like a bit of a design smell to me.
Even though several of our Repositories are extended with custom interfaces we're still faced with the same issue.
public class CandidateRepository : Repository<Candidate>, ICandidateRepository
{
    public CandidateRepository(IObjectSet<Candidate> entities)
        : base(entities)
    {
    }

    public Candidate GetByEmail(string email)
    {
        return Entities.SingleOrDefault(c => c.EmailAddress.Equals(email));
    }

    public bool CandidateExists(string candidateNumber)
    {
        return Entities.SingleOrDefault(c => c.Number.Equals(candidateNumber)) != null;
    }

    public Candidate GetByNumber(string number)
    {
        return Entities.SingleOrDefault(c => c.Number.Equals(number));
    }

    public Candidate GetMember(string number)
    {
        return new Candidate(); // This one needs to return from the Function Import
    }
}
Any advice greatly appreciated.
To solve your problem directly, you can cast Entities to ObjectSet<Candidate> and use its Context property to get the ObjectContext.
public Candidate GetMember(string number)
{
    var objectSet = Entities as ObjectSet<Candidate>;
    if (objectSet == null)
        throw new Exception("Oh, it's not an EF IObjectSet implementation");
    return objectSet.Context.MyCustomFunction(number);
}
As you can see, this code relies on a specific IObjectSet implementation, which is not good at all.
The better idea is to create repositories for aggregate roots only, rather than for each table. Then it will be more natural to pass the ObjectContext to the repository constructor.
I have gone down this route, and in my experience it is less painful to pass an interface implementation of the ObjectContext to your repository. The interface should expose some way of calling the function; then, when you pass in the concrete implementation of the ObjectContext, everything works fine.
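A hedged sketch of that interface approach (the CandidateEntities context and GetMemberByNumber function import are assumed names; substitute your generated context and import):

using System.Data.Objects;
using System.Linq;

public interface ICandidateContext
{
    IObjectSet<Candidate> Candidates { get; }
    Candidate GetMember(string number); // exposes the function import
}

// The generated context already derives from ObjectContext,
// so a partial class lets it implement the interface.
public partial class CandidateEntities : ICandidateContext
{
    IObjectSet<Candidate> ICandidateContext.Candidates
    {
        get { return Candidates; } // the generated ObjectSet<Candidate>
    }

    Candidate ICandidateContext.GetMember(string number)
    {
        // GetMemberByNumber is the function import defined in the model.
        return GetMemberByNumber(number).SingleOrDefault();
    }
}

public class CandidateRepository : Repository<Candidate>, ICandidateRepository
{
    private readonly ICandidateContext _context;

    public CandidateRepository(ICandidateContext context)
        : base(context.Candidates)
    {
        _context = context;
    }

    public Candidate GetMember(string number)
    {
        return _context.GetMember(number);
    }
}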

How do I simplify these NUnit tests?

These three tests are identical, except that they use a different static function to create a StartInfo instance. I have this pattern coming up all through my test code, and would love to be able to simplify it using [TestCase], or any other way that reduces boilerplate code. To the best of my knowledge I'm not allowed to use a delegate as a [TestCase] argument, and I'm hoping people here have creative ideas on how to make the code below more terse.
[Test]
public void ResponseHeadersWorkinPlatform1()
{
    DoResponseHeadersWorkTest(Platform1StartInfo.CreateOneRunning);
}

[Test]
public void ResponseHeadersWorkinPlatform2()
{
    DoResponseHeadersWorkTest(Platform2StartInfo.CreateOneRunning);
}

[Test]
public void ResponseHeadersWorkinPlatform3()
{
    DoResponseHeadersWorkTest(Platform3StartInfo.CreateOneRunning);
}

void DoResponseHeadersWorkTest(Func<ScriptResource, StartInfo> startInfoCreator)
{
    ScriptResource sr = ScriptResource.Default;
    var process = startInfoCreator(sr).Start();
    // assert some things here
}
Firstly, I don't think the original is too bad. It's only messy if your assertions are different from test case to test case.
Anyway, you can use a test case source, but it can't be done via a standard [TestCase] attribute because the arguments use more complicated types. Instead, you need to use a public IEnumerable<> as the data provider and then tag your test method with a [TestCaseSource] attribute.
Try something like:
public IEnumerable<Func<ScriptResource, StartInfo>> TestCases
{
    get
    {
        yield return Platform1StartInfo.CreateOneRunning;
        yield return Platform2StartInfo.CreateOneRunning;
        yield return Platform3StartInfo.CreateOneRunning;
    }
}

[TestCaseSource("TestCases")]
public void MyDataDrivenTest(Func<ScriptResource, StartInfo> startInfoCreator)
{
    ScriptResource sr = ScriptResource.Default;
    var process = startInfoCreator(sr);
    // do asserts
}
This is a more concise version of the standard pattern of yielding TestCaseData instances containing the parameters. If you yield instances of TestCaseData you can add more information and behaviours to each test (like expected exceptions, descriptions and so forth), but it is slightly more verbose.
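For reference, here is roughly what the more verbose TestCaseData form looks like (the SetName calls are optional):

public IEnumerable<TestCaseData> TestCases
{
    get
    {
        yield return new TestCaseData((Func<ScriptResource, StartInfo>)Platform1StartInfo.CreateOneRunning)
            .SetName("ResponseHeadersWorkInPlatform1");
        yield return new TestCaseData((Func<ScriptResource, StartInfo>)Platform2StartInfo.CreateOneRunning)
            .SetName("ResponseHeadersWorkInPlatform2");
        yield return new TestCaseData((Func<ScriptResource, StartInfo>)Platform3StartInfo.CreateOneRunning)
            .SetName("ResponseHeadersWorkInPlatform3");
    }
}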
Part of the reason I really like this stuff is that you can make one method for your 'act' and one method for your 'assert', then mix and match them independently. E.g. my friend was doing something yesterday where he used two Actions to say "when method Blah is called, this method on the ViewModel should be triggered". Very terse and effective!
It looks good. Are you looking to add a factory, maybe? Or you could add these methods to a List<Action> (in test setup) and call the first, second, and third action delegates, as sketched below.
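A quick sketch of that List<Action> idea, reusing the helper from the question (note this collapses the three tests into one, so a failure in one platform stops the rest):

List<Action> testActions;

[SetUp]
public void SetUp()
{
    testActions = new List<Action>
    {
        () => DoResponseHeadersWorkTest(Platform1StartInfo.CreateOneRunning),
        () => DoResponseHeadersWorkTest(Platform2StartInfo.CreateOneRunning),
        () => DoResponseHeadersWorkTest(Platform3StartInfo.CreateOneRunning)
    };
}

[Test]
public void ResponseHeadersWorkOnAllPlatforms()
{
    foreach (var action in testActions)
    {
        action();
    }
}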