NDepend rule for "Dispose objects before losing scope"

I'm evaluating NDepend as part of an effort to enforce code quality and correct framework usage, and I am looking for a way to write the equivalent of CA2000: Dispose objects before losing scope. Has anyone else tried to do this, or does anyone know how to do it?

This rule cannot be implemented through NDepend because the relevant context is inside the method body. NDepend is more about writing code rules concerning program structure, for example:
Types with disposable instance fields must be disposable
Disposable types with unmanaged resources should declare finalizer
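For illustration, this is the kind of structural pattern such rules check for: a type that owns a disposable instance field should itself implement IDisposable. The LogWriter class below is a hypothetical example, not something taken from NDepend.

using System;
using System.IO;

// A type holding a disposable instance field...
class LogWriter : IDisposable
{
    private readonly StreamWriter _writer = new StreamWriter("log.txt");

    // ...should itself be disposable and forward the call.
    public void Dispose() => _writer.Dispose();
}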


How to write Dart idiomatic utility functions or classes?

I am pondering a few different ways of writing utility classes/functions. By "utility" I mean a piece of code that is reused in many places in the project, for example a set of formatting functions for date and time handling.
I've got a Java background, where there was a tendency to write
class UtilsXyz {
  public static void doSth() {...}
  public static void doSthElse() {...}
}
which I find hard to unit test because of its static nature. The other way is to inject utility classes without static members here and there.
In Dart you can use both approaches, but I find other techniques more idiomatic:
mixins
Widely used and recommended in many articles for utility functions. But I see them as a solution to the infamous diamond problem rather than as utility classes, and they're not very readable. Although I can imagine more focused utility mixins, which pertain only to Widgets, or only to Presenters, or only to UseCases, etc.; they seem natural then.
extension functions
It feels natural to write '2023-01-29'.formatNicely(), but I'd like to be able to mock utility functions, and you cannot mock extension functions.
global functions
Last but not least, so far I find them the most natural (in terms of idiomatic Dart) way of providing utilities. I can unit test them, they're widely accessible, and they don't look weird like mixins. I can also import them with the as keyword to give the reader a hint about where the function in use actually comes from.
Does anybody have some experience with the best practices for utilities and is willing to share them? Am I missing something?
To write utility functions in an idiomatic way for Dart, your options are either extension methods or global functions.
You can see that Dart has a linter rule addressing this problem:
AVOID defining a class that contains only static members.
Creating classes with the sole purpose of providing utility or otherwise static methods is discouraged. Dart allows functions to exist outside of classes for this very reason.
https://dart-lang.github.io/linter/lints/avoid_classes_with_only_static_members.html.
Extension methods.
but I'd like to unit test some utility functions, and you cannot test extension functions, because they're static.
I did not find any resource stating that extension methods are static, either on Stack Overflow or in the Dart extension documentation, although extensions can have static members of their own. There is also an open issue about supporting static extension members.
So, I think extensions are testable as well.
To test extension methods you have 2 options:
Import the extension name and use the extension syntax inside the tests.
Write an equivalent global utility function, test it instead, and make the extension method call this global function (I do not recommend this, because if someone changes the extension method, the test will not catch it).
EDIT: as jamesdlin mentioned, the extensions themselves can be tested, but they cannot be mocked since they are resolved at compile time.
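As a minimal sketch of option 1, assuming a hypothetical formatNicely extension like the one from the question (the file names and the implementation are made up for illustration):

// date_format_ext.dart
extension DateFormatting on String {
  // Turns '2023-01-29' into '29/01/2023'.
  String formatNicely() => split('-').reversed.join('/');
}

// date_format_ext_test.dart
import 'package:test/test.dart';
import 'date_format_ext.dart';

void main() {
  test('formatNicely reorders an ISO date', () {
    expect('2023-01-29'.formatNicely(), '29/01/2023');
  });
}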
Global functions.
I think global functions are pretty straightforward: to test them, just import the function and test it.
This is the simplest, most idiomatic way to write utility functions, and it does not raise any "wtf" flags when someone reads your code (unlike mixins), even for Dart beginners.
This also takes advantage of the Dart top-level functions feature.
That's why I prefer this approach for utility functions that are not attached to any other classes.
And, if you are writing a library/package, the @visibleForTesting annotation may be helpful (it comes from https://pub.dev/packages/meta).
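A small sketch of what that can look like; the function names here are made up for illustration:

// date_utils.dart
import 'package:meta/meta.dart';

/// Top-level utility function; import it anywhere, optionally with a prefix:
///   import 'date_utils.dart' as dates;
String formatDate(DateTime d) => '${d.year}-${padTwo(d.month)}-${padTwo(d.day)}';

/// Helper that is public only so tests can exercise it directly.
@visibleForTesting
String padTwo(int value) => value.toString().padLeft(2, '0');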

coffeescript and repetition of code. Is there a solution?

So, I am really digging CoffeeScript. But I am curious how the possibility of code repetition is dealt with across a large repository of code.
For instance, let's say I create a simple class.
class Cart
  constructor: (@session, @group) ->

class Shoes extends Cart
The compiler will create __extends and __hasProp helpers.
Mind you, this is just one example -- much the same happens with loops, etc. Granted, each bit of code is usually in its own walled garden, but there could be many copies of the same helper methods throughout a code base, because the compiler just emits generic helpers that are all identical.
Does anyone else have to contend with this or deal with this possible bloat?
That depends a lot on which build tool you are using to manage a large codebase. grunt-contrib-coffee, for example, provides the ability to concatenate before compilation, which means something like the __extends helper should only get declared once. Likewise, I believe the asset pipeline in Rails makes similar optimizations through its require statements.
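A minimal Gruntfile sketch of that idea, assuming the join option of grunt-contrib-coffee and made-up file names:

// Gruntfile.js
module.exports = function (grunt) {
  grunt.initConfig({
    coffee: {
      compile: {
        // join: true concatenates the .coffee sources before compiling,
        // so helpers like __extends are emitted only once in the output.
        options: { join: true },
        files: {
          'build/app.js': ['src/cart.coffee', 'src/shoes.coffee']
        }
      }
    }
  });
  grunt.loadNpmTasks('grunt-contrib-coffee');
};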

Which is better: invoking a "function" in Drools or a static method from a Java Util class?

In Drools, we often have common logic that needs to be invoked. There are two options to achieve this:
Use a function in Drools.
Move the common logic to some Util class in Java and invoke it from Drools.
Which of the above is recommended?
Thanks.
I always recommend using imported static methods unless it is a very simple piece of logic that is local to a subset of your rules and needs to be dynamically defined. The reasons are:
it keeps DRL code clean of procedural logic, making maintenance cheaper and easier.
it is easier to write an xUnit test to test your function logic in a static method than it is to test a DRL function.
it makes the function available to all DRL files, without conflicts and without IDE error codes.
The DRL function construct is a facility to solve simple local problems, but Java classes are where you want to keep and maintain your procedural code.
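As a rough sketch of the static-method approach (the class, package, and the Order fact type are made up for illustration; DRL lets you import a static method with "import function" so rules can call it like a function):

// PricingUtils.java -- plain static helper, easy to cover with a normal JUnit test
package com.example;

public final class PricingUtils {
    private PricingUtils() {}

    public static double discounted(double price, double rate) {
        return price * (1.0 - rate);
    }
}

// pricing.drl -- import the static method and call it from the RHS
import function com.example.PricingUtils.discounted;

rule "Apply gold discount"
when
    $o : Order( customerLevel == "GOLD" )
then
    $o.setTotal( discounted( $o.getTotal(), 0.10 ) );
end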

CORBA: Can a CORBA IDL type be an attribute of another?

Before I start using CORBA I want to know something.
It would seem intuitive to me that you could use an IDL type as an attribute of another, which would then expose that attribute's methods to the client application (using ".") as well.
But is this possible?
For example (excuse my bad IDL):
interface BrakePedal {
  void press();
  //...
};

interface Car {
  attribute BrakePedal brakePedal;
  //...
};
Then in the client app, you could do: myCar.brakePedal.press();
CORBA would seem crappy if you couldn't do this kind of multi-level object interface. After all, real-world objects are multi-level, right? So can someone put my mind at ease and confirm (or try, if you already have CORBA set up) that this definitely works? None of the IDL documentation explicitly shows this in an example, which is why I'm concerned. Thanks!
Declaring an attribute is logically equivalent to declaring a pair of accessor functions, one to read the value of the attribute, and one to write it (you can also have readonly attributes, in which case you would only get the read function).
It does appear from the CORBA spec that you could use an interface as the type of an attribute. I tried feeding such IDL to omniORB's IDL-to-C++ translator, and it didn't reject it, so I think it is allowed.
But, I'm really not sure you would want to do this in practice. Most CORBA experts recommend that if you are going to use attributes you only use readonly attributes. And for something like this, I would just declare my own function that returned an interface.
Note that you can't really do the syntax you want in the C++ mapping anyway; e.g.
server->brakePedal()->press(); // major resource leak here
brakePedal() is the attribute accessor function that returns a CORBA object reference. If you immediately call press() on it, you are going to leak the object reference.
To do this without a leak you would have to do something like this:
BrakePedal_var brakePedal(server->brakePedal());
brakePedal->press();
You simply can't obtain the notational convenience you want from attributes in this scenario with the C++ mapping (perhaps you could in the Python mapping). Because of this, and my general dislike for attributes, I'd just use a regular function to return the BrakePedal interface.
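A small sketch of that alternative, reusing the interfaces from the question; the operation name is made up:

interface BrakePedal {
  void press();
};

interface Car {
  // Explicit operation instead of an attribute; the caller still manages
  // the lifetime of the returned object reference.
  BrakePedal getBrakePedal();
};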
You don't understand something important about distributed objects: remote objects (whether implemented with CORBA, RMI, .NET remoting or web services) are not the same as local objects. Calls to CORBA objects are expensive, slow, and may fail due to network problems. The object.attribute.method() syntax would make it hard to see that two different remote calls are being executed on that one line, and make it hard to handle any failures that might occur.

Encapsulation in the age of frameworks

At my old C++ job, we always took great care in encapsulating member variables, and only exposing them as properties when absolutely necessary. We'd have really specific constructors that made sure you fully constructed the object before using it.
These days, with ORM frameworks, dependency-injection, serialization, etc., it seems like you're better off just relying on the default constructor and exposing everything about your class in properties, so that you can inject things, or build and populate objects more dynamically.
In C#, it's been taken one step further with Object initializers, which give you the ability to basically define your own constructor. (I know object initializers are not really custom constructors, but I hope you get my point.)
Are there any general concerns with this direction? It seems like encapsulation is starting to become less important in favor of convenience.
EDIT: I know you can still carefully encapsulate members, but I just feel like when you're trying to crank out some classes, you either have to sit and carefully think about how to encapsulate each member, or just expose it as a property and worry about how it is initialized later. It just seems like the easiest approach these days is to expose things as properties and not be so careful. Maybe I'm just flat wrong, but that's just been my experience, especially with the new C# language features.
I disagree with your conclusion. There are many good ways of encapsulating in C# with all the above-mentioned technologies while maintaining good coding practices. I would also say that it depends on whose technology demo you're looking at, but in the end it comes down to reducing the state space of your objects so that you can make sure they hold their invariants at all times.
Take object-relational frameworks: most of them allow you to specify how they are going to hydrate the entities; NHibernate, for example, allows you to say access="property" or access="field.camelcase" and similar. This allows you to keep your properties encapsulated.
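A rough hbm.xml sketch of that idea (the Customer class and member names are made up; the point is that the mapping targets the private field rather than a public setter):

<hibernate-mapping xmlns="urn:nhibernate-mapping-2.2">
  <class name="Customer">
    <id name="Id">
      <generator class="native" />
    </id>
    <!-- Hydrates the private "name" field directly, so the public
         Name property can stay read-only. -->
    <property name="Name" access="field.camelcase" />
  </class>
</hibernate-mapping>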
Dependency injection works on the other types you have, mostly those which are not entities, even though you can combine AOP+ORM+IOC in some very nice ways to improve the state of these things. IoC is often used from layers above your domain entities if you're building a data-driven application, which I guess you are, since you're talking about ORMs.
They ("they" being application and domain services and other classes intrinsic to the program) expose their dependencies, but in fact they can be encapsulated and tested in even better isolation than before, since the design-by-contract/design-by-interface paradigms you often use when mocking dependencies in mock-based testing (in conjunction with IoC) move you towards class-as-component "semantics". I mean: every class, when built using the above, will be better encapsulated.
Updated for urig: This holds true for both exposing concrete dependencies and exposing interfaces. First, about interfaces: what I was hinting at above was that services and other application classes which have dependencies can depend on contracts/interfaces rather than specific implementations. In C/C++ and older languages there was no interface construct, and abstract classes can only go so far. Interfaces allow you to tie different runtime instances to the same interface without having to worry about leaking internal state, which is what you're trying to get away from when abstracting and encapsulating. With abstract classes you can still provide a class implementation, except that you can't instantiate it, but inheritors still need to know about the invariants in your implementation, and that can mess up state.
Secondly, about concrete classes as properties: you have to be wary about what types of types ;) you expose as properties. Say you have a List in your instance; then don't expose IList as the property; that will probably leak, and you can't guarantee that consumers of the interface don't add or remove things you depend on. Instead expose something like IEnumerable and return a copy of the List, or even better, wrap it:
public IEnumerable MyCollection { get { return _List.Enum(); } }
and you can be 100% certain to get both the performance and the encapsulation. No one can add to or remove from that IEnumerable, and you still don't have to perform a costly array copy. The corresponding helper method:
static class Ext {
    public static IEnumerable<T> Enum<T>(this IEnumerable<T> inner) {
        foreach (var item in inner) yield return item;
    }
}
So while you can't get 100% encapsulation when, say, creating overloaded equals operators/methods, you can get close with your public interfaces.
You can also use the Code Contracts feature of .NET 4.0, which builds on Spec#, to verify the contracts I talked about above.
Serialization will always be there and has been for a long time. Previously, before the Internet era, it was used for saving your object graph to disk for later retrieval; now it's used in web services, in copy semantics, and when passing data to e.g. a browser. This doesn't necessarily break encapsulation if you put a few [NonSerialized] attributes (or the equivalents) on the correct fields.
Object initializers aren't the same as constructors, they are just a way of collapsing a few lines of code. Values/instances in the {} will not be assigned until all of your constructors have run, so in principle it's just the same as not using object initializers.
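A tiny sketch of that equivalence, using a made-up Order class:

// The object initializer runs the parameterless constructor first,
// then assigns the listed properties:
var order = new Order { Customer = "Ada", Total = 42m };

// ...which is equivalent to:
var sameOrder = new Order();
sameOrder.Customer = "Ada";
sameOrder.Total = 42m;

class Order
{
    public string Customer { get; set; }
    public decimal Total { get; set; }
}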
I guess what you have to watch out for is deviating from the good principles you've learnt at your previous job; make sure you keep your domain objects filled with business logic, encapsulated behind good interfaces, and ditto for your service layer.
Private members are still incredibly important. Controlling access to internal object data is always good, and shouldn't be ignored.
Many times I've found private methods to be overkill. Most of the time, if the work you're doing is important enough to break out, you can refactor it in such a way that either a) the private method is trivial, or b) it is an integral part of other functions.
In addition, having many private methods makes unit testing very hard. There are ways around that (making test classes friends, etc.), but they add difficulties.
I wouldn't discount private methods entirely, though. Any time there are important internal algorithms that really make no sense outside of the class, there's no reason to expose those methods.
I think that encapsulation is still important; it helps more in libraries than anything else, IMHO. You can create a library that does X without everyone needing to know how X was created, and you may even deliberately hide the way you create X. When I learned about encapsulation, I was also taught to always define variables as private to protect them, so that someone can't break into your code and access variables they are not supposed to use.