Event handler behavioral difference .NET 1.1 vs 2.0 with null delegate - .net-2.0

I'm not sure exactly what is going on here, but it seems that in .NET 1.1 invoking an uninitialized (null) event delegate runs without issue, while in .NET 2.0+ it causes a NullReferenceException. Any ideas why? The code below runs fine in 1.1; in 2.0 it throws a NullReferenceException. Why does it behave differently? What changed?
Thanks.
For example:
using System.Data;

class Class1
{
    public delegate void ChartJoinedRowAddedHandler(object sender);
    public static event ChartJoinedRowAddedHandler ChartJoinedRowAdded;
    public static DataTable dt;

    public static void Main()
    {
        dt = new DataTable();
        dt.RowChanged += new DataRowChangeEventHandler(TableEventHandler);
        object[] obj = new object[] { 1, 2 };
        dt.Columns.Add("Name");
        dt.Columns.Add("Last");
        dt.NewRow();
        dt.Rows.Add(obj);
    }

    private static void TableEventHandler(object sender, DataRowChangeEventArgs e)
    {
        // ChartJoinedRowAdded has no subscribers, so it is null here and invoking it throws
        ChartJoinedRowAdded(new object());
    }
}

[updated] AFAIK, there was no change here to the fundamental delegate handling; the difference is in how DataTable behaves.
However! Be very careful using static events, especially if you are subscribing from instances (rather than static methods). This is a good way to keep huge swathes of objects alive so they are never garbage collected.
Compiling and running the code with the 1.1 csc shows that the general delegate behaviour is the same - I think the difference is that the 1.1 DataTable code that raises RowChanged swallowed the exception. For example, change the handler code to the following:
Console.WriteLine("Before");
ChartJoinedRowAdded(new object());
Console.WriteLine("After");
You'll see "Before", but no "After"; an exception was thrown and swallowed by the DataTable.

The event handler system is basically just a list of functions to call when a given event is raised.
It initializes to null, not to an empty list, so you need to check before invoking:
if (ChartJoinedRowAdded != null)
    ChartJoinedRowAdded(new object());

The way events work hasn't really changed from 1.1 to 2.0.
Although the += syntax looks like normal aggregation, it really isn't:
dt.RowChanged += TableEventHandler;
dt.RowChanged += null;
dt.RowChanged += delegate(object sender, DataRowChangeEventArgs e) {
    //anonymous handler
};
This will fire TableEventHandler and then the anonymous delegate - the null is just skipped.
You can use null to clear events, but only inside the event firing class:
this.MyEvent = null;
If nothing subscribes, your event will be null - see soraz's answer. The DataTable class will contain a similar check and won't fire the event if there are no subscribers.
The standard pattern is:
//events should just about always use this pattern: object sender, args
public static event EventHandler<MyEventArgs> ChartJoinedRowAdded;

//inheriting classes can override this event behaviour
protected virtual void OnChartJoinedRowAdded() {
    if( ChartJoinedRowAdded != null )
        ChartJoinedRowAdded( this, new MyEventArgs(...) );
}
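Applied to the handler from the question, the fix is just a null check before invoking. A minimal sketch (the local copy is optional, but avoids a race if subscribers are added or removed on another thread):
private static void TableEventHandler(object sender, DataRowChangeEventArgs e)
{
    // Copy to a local so the null check and the invocation see the same delegate
    ChartJoinedRowAddedHandler handler = ChartJoinedRowAdded;
    if (handler != null)
    {
        handler(new object());
    }
}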

Related

OOP avoid unnecessary repeated calls

I have a question on OOP class design. I have read that we should "Tell, don't ask" and not use exceptions for flow control. However, in this particular case I see some redundant code being executed!
Let's assume a Person has a list of events that he will be attending, and it must be enforced that he cannot attend an event that overlaps with his current schedule. So I have the following Java code:
public class Person {
    // this list of events must not contain overlapping events!
    ArrayList<Event> eventsToAttend = new ArrayList<>();

    // checks if a person is free to attend a new event by viewing all events he is attending
    public boolean canAttendEvent(Event newEvent) {
        for (int i = 0; i < eventsToAttend.size(); i++) {
            if (newEvent.isSameDayAndTime(eventsToAttend.get(i))) {
                return false;
            }
        }
        return true;
    }

    public void attendEvent(Event newEvent) {
        // enforce the validity of the newEvent
        if (!canAttendEvent(newEvent)) {
            // throw exception and return
        }
        eventsToAttend.add(newEvent);
    }

    public static void main(String[] args) {
        // just an example usage!
        Person person = somePersonWithEventsAlready;
        Event event = new Event();
        if (person.canAttendEvent(event)) {
            // !!!
            // Notice that canAttendEvent() is called twice!! how do you prevent this?
            // !!!
            person.attendEvent(event);
        }
        // Alternatively I could just try-catch around person.attendEvent(), but is that bad practice?
    }
}
The issue I am facing in general with this way of doing things is that canAttendEvent() ends up being called twice. Is that still good practice according to OOP design principles?
What would be a better way to do something like this? Thank you for reading.
A try-catch in main is the best way to avoid what you are trying to avoid: calling canAttendEvent twice. Let attendEvent throw when the event overlaps and catch that exception at the call site, instead of checking first.

How to make JPA EntityListeners validate the existence of an interface

I am working in J2EE 5 using JPA. I have a working solution, but I'm looking to clean up the structure.
I am using EntityListeners on some of the JPA objects I am persisting. The listeners are fairly generic but depend on the beans implementing an interface, which works great if you remember to add the interface.
I have not been able to find a way to tie the EntityListener and the interface together so that I would get an exception that points in the right direction, or even better a compile-time error.
@Entity
@EntityListeners({CreateByListener.class})
public class Note implements CreatorInterface {
    private String message;
    private String creator;
    ....
}

public interface CreatorInterface {
    public void setCreator(String creator);
}

public class CreateByListener {
    @PrePersist
    public void dataPersist(CreatorInterface data) {
        SUser user = LoginModule.getUser();
        data.setCreator(user.getName());
    }
}
This functions exactly the way I want it to, except when a new class is created that uses the CreateByListener but does not implement the CreatorInterface.
When this happens, a ClassCastException is thrown somewhere deep within the JPA engine, and only if I happen to remember this symptom can I figure out what went wrong.
I have not been able to figure out a way to require the interface, or to test for its presence before the listener is fired.
Any ideas would be appreciated.
@PrePersist
public void dataPersist(Object data) {
    if (!(data instanceof CreatorInterface)) {
        throw new IllegalArgumentException("The class "
                + data.getClass()
                + " should implement CreatorInterface");
    }
    CreatorInterface creatorInterface = (CreatorInterface) data;
    SUser user = LoginModule.getUser();
    creatorInterface.setCreator(user.getName());
}
This does basically the same thing as what you're doing, but at least you'll have a more readable error message indicating what's wrong, instead of the ClassCastException.

Can I use NUnit TestCase to test mocked repository and real repository

I would like to be able to run tests on my fake repository (that uses a list) and my real repository (that uses a database), to make sure that both my mocked-up version and my actual production repository work as expected. I thought the easiest way would be to use TestCase:
private readonly StandardKernel _kernel = new StandardKernel();
private readonly IPersonRepository fakePersonRepository;
private readonly IPersonRepository realPersonRepository;

[Inject]
public PersonRepositoryTests()
{
    realPersonRepository = _kernel.Get<IPersonRepository>();
    _kernel = new StandardKernel(new TestModule());
    fakePersonRepository = _kernel.Get<IPersonRepository>();
}

[TestCase(fakePersonRepository)]
[TestCase(realPersonRepository)]
public void CheckRepositoryIsEmptyOnStart(IPersonRepository personRepository)
{
    if (personRepository == null)
    {
        throw new NullReferenceException("Person repository never injected: is null");
    }

    var records = personRepository.GetAllPeople();
    Assert.AreEqual(0, records.Count());
}
but it asks for a constant expression.
Attributes are compile-time decorations, so anything that you put in a TestCase attribute has to be a constant that the compiler can resolve.
You can try something like this (untested):
[TestCase(typeof(FakePersonRepository))]
[TestCase(typeof(PersonRepository))]
public void CheckRepositoryIsEmptyOnStart(Type personRepoType)
{
    // do some reflection-based Activator.CreateInstance() stuff here
    // to instantiate the incoming type
}
However, this gets a bit ugly because I imagine that your two different implementations might have different constructor arguments. Plus, you really don't want all that dynamic type instantiation code cluttering the test.
A possible solution might be something like this:
[TestCase("FakePersonRepository")]
[TestCase("TestPersonRepository")]
public void CheckRepositoryIsEmptyOnStart(string repoType)
{
// Write a helper class that accepts a string and returns a properly
// instantiated repo instance.
var repo = PersonRepoTestFactory.Create(repoType);
// your test here
}
Bottom line is, the test case attribute has to take a constant expression. But you can achieve the desired result by shoving the instantiation code into a factory.
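For illustration, the helper could be as simple as a switch over the string (a sketch only; PersonRepoTestFactory and the concrete repository types are the hypothetical names used above, and the real version would pass whatever constructor arguments each implementation needs):
public static class PersonRepoTestFactory
{
    public static IPersonRepository Create(string repoType)
    {
        // Map the test-case string onto a concrete repository instance
        switch (repoType)
        {
            case "FakePersonRepository":
                return new FakePersonRepository();               // in-memory list
            case "TestPersonRepository":
                return new PersonRepository(/* db setup here */); // real database
            default:
                throw new ArgumentException("Unknown repository type: " + repoType);
        }
    }
}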
You might look at the TestCaseSource attribute, though that may fail with the same error. Otherwise, you may have to settle for two separate tests, which both call a third method to handle all of the common test logic.
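If TestCaseSource does work in your setup, it could look roughly like this (a sketch that reuses the hypothetical factory above; the source member just has to be something NUnit can find by name):
private static readonly string[] RepositoryTypes = { "FakePersonRepository", "TestPersonRepository" };

[TestCaseSource("RepositoryTypes")]
public void CheckRepositoryIsEmptyOnStart(string repoType)
{
    // The factory hides the construction details of each repository
    var personRepository = PersonRepoTestFactory.Create(repoType);

    var records = personRepository.GetAllPeople();
    Assert.AreEqual(0, records.Count());
}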

CanExecute question using delegate commands in Prism

This seems like a dumb question, but I have looked through the Prism docs and searched the internet and can't find an example... Here is the deal.
I am using a DelegateCommand in Prism. It works fine except when I assign a delegate for CanExecute. In another view model I have an event that takes a bool that I am publishing to, and I can see that the event fires and that the bool gets passed to the view model that holds the command, no problem. But here is what I don't understand: how does CanExecute know that the state has changed? Here is some code for the example.
From the view model's ctor:
eventAggregator.GetEvent<NavigationEnabledEvent>().Subscribe(OnNavigationEnabledChange, ThreadOption.UIThread);
NavigateCommand = new DelegateCommand(OnNavigate, () => nextButtonEnabled);
Now - here is the OnNavigationEnabledChange handler:
private void OnNavigationEnabledChange(bool navigationState)
{
    nextButtonEnabled = navigationState;
}
I am totally missing something here - how does the command know that nextButtonEnabled is now true?
If someone could point me to a working example that would be awesome.
OK - thanks!
This is why I don't use the implementation of DelegateCommand in Prism. I've always hated the callback-based approach for enabling/disabling commands. It's entirely unnecessary, and as far as I can tell, its only (and rather doubtful) 'benefit' is that it's consistent with how execution itself is handled. But that has always seemed pointless to me because execution and enabling/disabling are clearly very different: a button knows when it wants to execute a command but doesn't know when the command's status might have changed.
So I always end up writing something like this:
public class RelayCommand : ICommand
{
    private bool _isEnabled;
    private Action _onExecute;

    public RelayCommand(Action executeHandler)
    {
        _isEnabled = true;
        _onExecute = executeHandler;
    }

    public bool IsEnabled
    {
        get { return _isEnabled; }
        set
        {
            _isEnabled = value;
            if (CanExecuteChanged != null)
            {
                CanExecuteChanged(this, EventArgs.Empty);
            }
        }
    }

    public bool CanExecute(object parameter)
    {
        return _isEnabled;
    }

    public event EventHandler CanExecuteChanged;

    public void Execute(object parameter)
    {
        _onExecute();
    }
}
(If necessary you could modify this to hold weak references to the CanExecuteChanged handlers, like Prism does.)
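With that class, the view model from the question would simply set IsEnabled when the event arrives (a sketch, assuming NavigateCommand is changed to a RelayCommand):
private void OnNavigationEnabledChange(bool navigationState)
{
    // Setting IsEnabled raises CanExecuteChanged, so the bound button updates itself
    NavigateCommand.IsEnabled = navigationState;
}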
But to answer your question: how is the callback approach even meant to work? Prism's DelegateCommand offers a RaiseCanExecuteChanged method you can invoke to ask it to raise the event that'll cause command invokers to query your command's CanExecute. Given that you have to tell the DelegateCommand any time your enabled status changes, I don't see any meaningful benefit of a callback-based approach. (Sometimes you see a broadcast model though - arranging so that any change in status anywhere notifies all command invokers! In that case, a callback is useful because it means it doesn't matter if you don't know what actually changed. But requerying every single command seems unpleasant to me.)
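In other words, with Prism's DelegateCommand the view model has to trigger the re-query itself. Based on the code in the question, that would look something like this (RaiseCanExecuteChanged is part of Prism's DelegateCommand API):
private void OnNavigationEnabledChange(bool navigationState)
{
    nextButtonEnabled = navigationState;

    // Ask command invokers (e.g. the bound button) to call CanExecute again
    NavigateCommand.RaiseCanExecuteChanged();
}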
To answer your question of how the command knows that it is now enabled:
NavigateCommand = new DelegateCommand(OnNavigate, () => nextButtonEnabled);
This overload of the DelegateCommand constructor takes two parameters: the first is the Execute action, and the second is the CanExecute delegate, which returns a bool.
In your example, the CanExecute delegate always returns the current value of nextButtonEnabled.
eventAggregator.GetEvent<NavigationEnabledEvent>().Subscribe(OnNavigationEnabledChange, ThreadOption.UIThread);
triggers OnNavigationEnabledChange, which changes nextButtonEnabled.
That is how it works...

How do I simplify these NUnit tests?

These three tests are identical, except that they use a different static function to create a StartInfo instance. I have this pattern coming up all through my test code, and would love to be able to simplify it using [TestCase], or any other way that reduces boilerplate code. To the best of my knowledge I'm not allowed to use a delegate as a [TestCase] argument, and I'm hoping people here have creative ideas on how to make the code below more terse.
[Test]
public void ResponseHeadersWorkinPlatform1()
{
    DoResponseHeadersWorkTest(Platform1StartInfo.CreateOneRunning);
}

[Test]
public void ResponseHeadersWorkinPlatform2()
{
    DoResponseHeadersWorkTest(Platform2StartInfo.CreateOneRunning);
}

[Test]
public void ResponseHeadersWorkinPlatform3()
{
    DoResponseHeadersWorkTest(Platform3StartInfo.CreateOneRunning);
}

void DoResponseHeadersWorkTest(Func<ScriptResource, StartInfo> startInfoCreator)
{
    ScriptResource sr = ScriptResource.Default;
    var process = startInfoCreator(sr).Start();
    //assert some things here
}
Firstly, I don't think the original is too bad. It's only messy if your assertions are different from test case to test case.
Anyway, you can use a test case, but it can't be done via a standard [TestCase] attribute due to using more complicated types. Instead, you need to use a public IEnumerable<> as the data provider and then tag your test method with a [TestCaseSource] attribute.
Try something like:
public IEnumerable<Func<ScriptResource, StartInfo>> TestCases
{
    get
    {
        yield return Platform1StartInfo.CreateOneRunning;
        yield return Platform2StartInfo.CreateOneRunning;
        yield return Platform3StartInfo.CreateOneRunning;
    }
}

[TestCaseSource("TestCases")]
public void MyDataDrivenTest(Func<ScriptResource, StartInfo> startInfoCreator)
{
    ScriptResource sr = ScriptResource.Default;
    var process = startInfoCreator(sr);
    // do asserts
}
This is a more concise version of the standard pattern of yielding TestCaseData instances containing the parameters. If you yield instances of TestCaseData you can add more information and behaviours to each test (like expected exceptions, descriptions and so forth), but it is slightly more verbose.
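For example, the TestCaseData version might look roughly like this (a sketch only; SetName is one of the standard TestCaseData builder methods, and the cast is needed because TestCaseData takes object arguments):
public IEnumerable<TestCaseData> TestCases
{
    get
    {
        yield return new TestCaseData((Func<ScriptResource, StartInfo>)Platform1StartInfo.CreateOneRunning)
            .SetName("ResponseHeadersWorkinPlatform1");
        yield return new TestCaseData((Func<ScriptResource, StartInfo>)Platform2StartInfo.CreateOneRunning)
            .SetName("ResponseHeadersWorkinPlatform2");
        yield return new TestCaseData((Func<ScriptResource, StartInfo>)Platform3StartInfo.CreateOneRunning)
            .SetName("ResponseHeadersWorkinPlatform3");
    }
}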
Part of the reason I really like this stuff is that you can make one method for your 'act' and one method for your 'assert', then mix and match them independently. E.g. my friend was doing something yesterday where he used two Actions to say ("when method Blah is called, this method on the ViewModel should be triggered"). Very terse and effective!
It looks good. Are you looking to add a factory, maybe? Or you could add these methods to a list of delegates (in the test setup) and call the first, second and third delegate in turn.