Checking calls on multiple returned, wrapped fakes with FakeItEasy

I have two interfaces like this:
public interface Store { Tx BeginTx(); }
public interface Tx { void Write(); }
And I have a method like this:
void WriteALot(Store fakeStore)
{
    var tx1 = fakeStore.BeginTx();
    tx1.Write();
    var tx2 = fakeStore.BeginTx();
    tx2.Write();
}
And a test:
var fakeStore = A.Fake<Store>(x => x.Wrapping(realStore));
// A.CallTo(() => fakeStore.BeginTx()).ReturnAWrappedTx()?
WriteALot(fakeStore);
// Check that a total of two calls were made to the Tx's
Can this be done?
EDIT:
I should clarify that there will actually be several hundred transactions and multiple Write calls to each. And the implementations of the Store and Tx are complex. This is for an integration test, and I use FakeItEasy for inspection of batching behavior under different setups. It might be a little too far from the intended use case for the library, though :)
I guess what I'm asking is whether I can collect, and preferably merge, the faked transactions without doing it manually. I can imagine something like ReturnsLazily with a side effect of collecting the returned fakes, but that is pretty unmanageable and hard to read (and I couldn't get the assertion part working).

With the updated requirements, I tried this kind of thing, and it passed. I'm sure it's still overly simplistic, but it sounds like your tests will be varied and weird, and I really don't have a chance of writing something that will fit your particular use case. However, by introducing a small factory class, I achieved some level of readability (to my mind), and was able to gather up the created transactions:
private class TransactionFactory
{
    private readonly IList<Tx> allTransactions = new List<Tx>();

    public IEnumerable<Tx> AllTransactions => allTransactions;

    public Tx Create()
    {
        var realTransaction = new RealTransaction();
        var fakeTransaction = A.Fake<Tx>(options =>
            options.Wrapping(realTransaction));
        allTransactions.Add(fakeTransaction);
        return fakeTransaction;
    }
}
[Test]
public void UpdatedTests()
{
    var realStore = new RealStore();
    var fakeStore = A.Fake<Store>(x => x.Wrapping(realStore));
    var transactionFactory = new TransactionFactory();
    A.CallTo(() => fakeStore.BeginTx()).ReturnsLazily(transactionFactory.Create);

    WriteALot(fakeStore);

    Assert.That(transactionFactory.AllTransactions.SelectMany(Fake.GetCalls).Count(),
        Is.EqualTo(2));
}
This should be amenable to various modifications, but as you point out it's not exactly how FakeItEasy is expected to be used, so you're likely going to end up doing a lot of custom coding around the library.
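One refinement worth noting (my addition, not part of the original answer): Fake.GetCalls returns every recorded call on a fake, so if the transactions receive any calls besides Write, the count above includes them. Filtering on the invoked method keeps the assertion tight; a small sketch:
// Count only Write calls across the collected transactions. Fake.GetCalls
// yields ICompletedFakeObjectCall items; Method identifies the member invoked.
var writeCallCount = transactionFactory.AllTransactions
    .SelectMany(Fake.GetCalls)
    .Count(call => call.Method.Name == nameof(Tx.Write));
Assert.That(writeCallCount, Is.EqualTo(2));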

Assuming you meant to write tx1.Write and tx2.Write above, you can easily check that each transaction was called once, which is probably more useful than checking that a total of two calls was made:
[Test]
public void Test()
{
    var realStore = new RealStore();
    var fakeStore = A.Fake<Store>(x => x.Wrapping(realStore));
    var realTransaction1 = new RealTransaction();
    var realTransaction2 = new RealTransaction();
    var wrappedTransaction1 = A.Fake<Tx>(options => options.Wrapping(realTransaction1));
    var wrappedTransaction2 = A.Fake<Tx>(options => options.Wrapping(realTransaction2));
    A.CallTo(() => fakeStore.BeginTx())
        .Returns(wrappedTransaction1).Once().Then
        .Returns(wrappedTransaction2);

    WriteALot(fakeStore);

    A.CallTo(() => wrappedTransaction1.Write()).MustHaveHappenedOnceExactly();
    A.CallTo(() => wrappedTransaction2.Write()).MustHaveHappenedOnceExactly();
}
But if you really want to make sure that two calls were made in total, without checking that each transaction was responsible for one write, you could:
[Test]
public void LaxTest()
{
    int numberOfTransactionCalls = 0;
    var realStore = new RealStore();
    var fakeStore = A.Fake<Store>(x => x.Wrapping(realStore));
    var realTransaction1 = new RealTransaction();
    var realTransaction2 = new RealTransaction();
    var wrappedTransaction1 = A.Fake<Tx>(options => options.Wrapping(realTransaction1));
    var wrappedTransaction2 = A.Fake<Tx>(options => options.Wrapping(realTransaction2));
    A.CallTo(() => wrappedTransaction1.Write()).Invokes(() => ++numberOfTransactionCalls);
    A.CallTo(() => wrappedTransaction2.Write()).Invokes(() => ++numberOfTransactionCalls);
    A.CallTo(() => fakeStore.BeginTx())
        .Returns(wrappedTransaction1).Once().Then
        .Returns(wrappedTransaction2);

    WriteALot(fakeStore);

    Assert.That(numberOfTransactionCalls, Is.EqualTo(2));
}
Note that if your production method really is as simple as what you posted, there's no need to delegate to an actual implementation, and you could omit all the wrapping:
[Test]
public void UnwrappedTest()
{
    var fakeStore = A.Fake<Store>();
    var transaction1 = A.Fake<Tx>();
    var transaction2 = A.Fake<Tx>();
    A.CallTo(() => fakeStore.BeginTx())
        .Returns(transaction1).Once().Then
        .Returns(transaction2);

    WriteALot(fakeStore);

    A.CallTo(() => transaction1.Write()).MustHaveHappenedOnceExactly();
    A.CallTo(() => transaction2.Write()).MustHaveHappenedOnceExactly();
}
In my opinion it's a lot easier to understand what's going on. But maybe you just simplified for the sake of asking the question.

Related

QuickFix synchronous order filling

Can order filling be executed synchronously with the FIX protocol? Since the protocol is asynchronous by nature, I am thinking of using TaskCompletionSource. However, I'm having trouble picking a unique identifier. OrderId won't work: in the case where a required field is missing, the server will respond with BusinessMessageReject and I won't know which task to fail or cancel. I thought of using msgSeqNum as the unique identifier, but at the time of sending I don't know it, because it's handled internally by QuickFix/n. Plus, in case of a connection reset the seqNum will be reset too. There are possibly edge cases to deal with somehow, like long-running messages.
Please see the code attached below. I am omitting other methods so you can get the idea of what I am trying to achieve. Let me know if it's a waste of time.
class FixClient : IApplication
{
    private ConcurrentDictionary<string, TaskCompletionSource<ExecutionReport>> _currentOrdersUnderProcessing =
        new ConcurrentDictionary<string, TaskCompletionSource<ExecutionReport>>();

    // In case some required field is missing
    public void OnMessage(BusinessMessageReject message, SessionID sessionID)
    {
        // how can I extract the needed key, if there is no OrderId in the response?
        var orderId = ""; // how?
        if (_currentOrdersUnderProcessing.TryRemove(orderId, out var taskCompletionSource))
        {
            taskCompletionSource.SetException(new Exception("Couldn't execute order"));
        }
    }

    public void OnMessage(ExecutionReport message, SessionID sessionID)
    {
        var orderId = message.GetField(11); // ClOrdID field
        if (_currentOrdersUnderProcessing.TryRemove(orderId, out var taskCompletionSource))
        {
            taskCompletionSource.SetResult(message);
        }
    }

    public Task<ExecutionReport> SendNewBuyMarketOrderAsync(string symbol)
    {
        var orderId = Guid.NewGuid().ToString();
        var message = new NewOrderSingle(orderId, instructionsForOrderHandling, symbol, side, transactionTime, orderType);
        if (QuickFix.Session.SendToTarget(message, sessionId)) // if sent successfully
        {
            var tcs = new TaskCompletionSource<ExecutionReport>();
            _currentOrdersUnderProcessing.TryAdd(orderId, tcs);
            return tcs.Task;
        }
        else
        {
            return Task.FromException<ExecutionReport>(new Exception("Couldn't place order"));
        }
    }
}
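The question is left open above; for what it's worth, one common way to get a key out of BusinessMessageReject is to correlate on the sequence number: record which MsgSeqNum carried which ClOrdID as the message goes out in ToApp, then use RefSeqNum (tag 45) from the reject to look the order up. A hedged sketch (my addition; the tag numbers are from the FIX spec, but verify the exact QuickFix/n accessor names against the version in use):
// Sketch only: map outgoing MsgSeqNum -> ClOrdID so a BusinessMessageReject,
// which carries RefSeqNum (tag 45) but no ClOrdID, can still be correlated.
private readonly ConcurrentDictionary<int, string> _seqNumToOrderId =
    new ConcurrentDictionary<int, string>();

public void ToApp(QuickFix.Message message, SessionID sessionID)
{
    // Called just before an application-level message is sent; the engine has
    // already stamped MsgSeqNum into the header by this point.
    if (message.IsSetField(QuickFix.Fields.Tags.ClOrdID))
    {
        var seqNum = message.Header.GetInt(QuickFix.Fields.Tags.MsgSeqNum);
        _seqNumToOrderId[seqNum] = message.GetString(QuickFix.Fields.Tags.ClOrdID);
    }
}

public void OnMessage(BusinessMessageReject message, SessionID sessionID)
{
    // RefSeqNum points back at the sequence number of the rejected message.
    var refSeqNum = message.GetInt(QuickFix.Fields.Tags.RefSeqNum);
    if (_seqNumToOrderId.TryRemove(refSeqNum, out var orderId)
        && _currentOrdersUnderProcessing.TryRemove(orderId, out var tcs))
    {
        tcs.SetException(new Exception("Order rejected by server"));
    }
}
On a sequence reset (the concern raised above), the map would need to be cleared, e.g. on logon.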

Replay(1) returning result from disconnected source

I have an Observable stream (representing a network connection of data values) which I'm Replaying and RefCounting. The underlying source disconnects as expected when the subscriber count hits zero, but Replay(1) returns a value from this source to the next subscriber, which I consider stale. I've fixed this with a custom implementation of ReplaySubject which forgets the last value when the last observer disconnects, but it's a leaky abstraction and I'm wondering whether there's a more idiomatic way to solve this problem?
Example failing test:
[Test]
public async Task Replay_DoesNotReplay_AfterRefCountDisconnection()
{
    int i = 0;
    var replay = Observable.Defer(
        () =>
        {
            i++;
            return new[] { i, i }.ToObservable();
        })
        .Replay(1)
        .RefCount();

    var first = await replay.Take(1).ToTask();
    var second = await replay.Take(1).ToTask();

    Assert.AreEqual(1, first);
    Assert.AreEqual(2, second);
}
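For reference, a minimal sketch of the kind of "forgetting" subject the question describes (my illustration; the type name is invented, and error/completion replay is deliberately omitted): it replays the latest value like ReplaySubject(1), but drops it when its last observer unsubscribes, so the next RefCount connection starts clean.
// Requires System.Reactive: ISubject<T> (System.Reactive.Subjects) and
// Disposable (System.Reactive.Disposables). Simplified and illustrative only.
public sealed class ForgettingReplaySubject<T> : ISubject<T>
{
    private readonly object gate = new object();
    private readonly List<IObserver<T>> observers = new List<IObserver<T>>();
    private bool hasValue;
    private T latest;

    public void OnNext(T value)
    {
        IObserver<T>[] snapshot;
        lock (gate)
        {
            hasValue = true;
            latest = value;
            snapshot = observers.ToArray();
        }
        foreach (var o in snapshot) o.OnNext(value);
    }

    public void OnError(Exception error) { foreach (var o in Snapshot()) o.OnError(error); }

    public void OnCompleted() { foreach (var o in Snapshot()) o.OnCompleted(); }

    public IDisposable Subscribe(IObserver<T> observer)
    {
        bool replay;
        T value;
        lock (gate)
        {
            observers.Add(observer);
            replay = hasValue;
            value = latest;
        }
        if (replay) observer.OnNext(value);
        return Disposable.Create(() =>
        {
            lock (gate)
            {
                observers.Remove(observer);
                if (observers.Count == 0)
                {
                    hasValue = false;   // forget the stale value
                    latest = default(T);
                }
            }
        });
    }

    private IObserver<T>[] Snapshot() { lock (gate) return observers.ToArray(); }
}
It would slot in as source.Multicast(new ForgettingReplaySubject<int>()).RefCount() in place of .Replay(1).RefCount().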

How to use Observables as a lazy data source

I'm wrapping an API that emits events in Observables and currently my datasource code looks something like this, with db.getEventEmitter() returning an EventEmitter.
const Datasource = {
getSomeData() {
return Observable.fromEvent(db.getEventEmitter(), 'value');
}
};
However, to actually use this, I need to both memoize the function and have it return a ReplaySubject; otherwise each subsequent call to getSomeData() would reinitialize the entire sequence and recreate more event emitters, or have no data until the next update, which is undesirable. So my code looks a lot more like this for every function:
let someDataCache = null;
const Datasource = {
  getSomeData() {
    if (someDataCache) { return someDataCache; }
    const subject = new ReplaySubject(1);
    Observable.fromEvent(db.getEventEmitter(), 'value').subscribe(subject);
    someDataCache = subject;
    return subject;
  }
};
which ends up being quite a lot of boilerplate for just one single function, and becomes more of an issue when there are more parameters.
Is there a better/more elegant design pattern to accomplish this? Basically, I'd like that
Only one event emitter is created.
Callers who call the datasource later get the most recent result.
The event emitters are created when they're needed.
but right now I feel like this pattern is fighting the Observable pattern, resulting in a bunch of boilerplate.
As a followup to this question, I ended up factoring out the logic to leverage Observables in this way. publishReplay, as cartant mentioned, does get me most of the way to what I needed. I've documented what I've learned in this post, with the following tl;dr code:
let first = true
Rx.Observable.create(
  observer => {
    const callback = data => {
      first = false
      observer.next(data)
    }
    const event = first ? 'value' : 'child_changed'
    db.ref(path).on(event, callback, error => observer.error(error))
    return {event, callback}
  },
  (handler, {event, callback}) => {
    db.ref(path).off(event, callback)
  },
)
  .map(snapshot => snapshot.val())
  .publishReplay(1)
  .refCount()

Detect IsAlive on an IObservable

I'm writing a function IsAlive that takes an IObservable<T> and a timespan, and returns an IObservable<bool>. The canonical use case is to detect if a streaming server is still sending data.
I've come up with the following solution, but I feel it's not the clearest about how it works.
public static IObservable<bool> IsAlive<T>(this IObservable<T> source,
                                           TimeSpan timeout,
                                           IScheduler sched)
{
    return source.Window(timeout, sched)
        .Select(wind => wind.Any())
        .SelectMany(a => a)
        .DistinctUntilChanged();
}
Does anyone have a better approach?
FYI -
Here are the unit tests and existing approaches that I've tried: https://gist.github.com/997003
This should work:
public static IObservable<bool> IsAlive<T>(this IObservable<T> source,
                                           TimeSpan timeout,
                                           IScheduler sched)
{
    return source.Buffer(timeout, 1, sched)
        .Select(l => l.Any())
        .DistinctUntilChanged();
}
This approach makes semantic sense, too. Every time an item comes in, it fills the buffer and then true is passed along. And every timeout, an empty buffer will be created and false will be passed along.
Edit:
This is why the buffer-1 approach is better than windowing:
var sched = new TestScheduler();
var subj = new Subject<Unit>();
var timeout = TimeSpan.FromTicks(10);

subj.Buffer(timeout, 1, sched)
    .Select(Enumerable.Any)
    .Subscribe(x => Console.WriteLine("Buffer(timeout, 1): " + x));

subj.Window(timeout, sched)
    .Select(wind => wind.Any())
    .SelectMany(a => a)
    .Subscribe(x => Console.WriteLine("Window(timeout): " + x));

sched.AdvanceTo(5);
subj.OnNext(Unit.Default);
sched.AdvanceTo(16);
yields:
Buffer(timeout, 1): True
Window(timeout): True
Buffer(timeout, 1): False
To be specific, the window is open for the whole timeout and doesn't close and reset as soon as an item comes in. This is where the buffer limit of 1 comes into play. As soon as an item comes in, the buffer and its timer get restarted.
I could re-implement my buffer as a window, since buffer's implementation is a window, but (a) I think buffer makes better semantic sense, and (b) I don't have to SelectMany. Scott's Select and SelectMany could be combined into a single SelectMany(x => x.Any()), but I can avoid the lambda entirely and specify the Enumerable.Any method group, which will bind faster (trivially) anyway.
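To make that last paragraph concrete, here are the three equivalent pipelines written out side by side (a sketch; the IsAliveVariants container and method names are just illustrative):
using System;
using System.Linq;
using System.Reactive.Concurrency;
using System.Reactive.Linq;

static class IsAliveVariants
{
    // Scott's original: Select produces IObservable<IObservable<bool>>, then SelectMany flattens.
    public static IObservable<bool> ViaWindowSelectThenFlatten<T>(IObservable<T> source, TimeSpan timeout, IScheduler sched) =>
        source.Window(timeout, sched)
              .Select(w => w.Any())
              .SelectMany(a => a)
              .DistinctUntilChanged();

    // Select + SelectMany combined into a single SelectMany with a lambda.
    public static IObservable<bool> ViaWindowSingleSelectMany<T>(IObservable<T> source, TimeSpan timeout, IScheduler sched) =>
        source.Window(timeout, sched)
              .SelectMany(w => w.Any())
              .DistinctUntilChanged();

    // The Buffer version: no flattening needed, so the Enumerable.Any method group suffices.
    public static IObservable<bool> ViaBufferMethodGroup<T>(IObservable<T> source, TimeSpan timeout, IScheduler sched) =>
        source.Buffer(timeout, 1, sched)
              .Select(Enumerable.Any)
              .DistinctUntilChanged();
}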
How about:
source.Select(_ => true)
      .Timeout(timeout, sched)
      .DistinctUntilChanged()
      .Catch<bool, TimeoutException>(ex => Observable.Return(false));

ADO.NET - Bad Practice?

I was reading an article on MSDN several months ago and have recently started using the following snippet to execute ADO.NET code, but I get the feeling it could be bad. Am I overreacting, or is it perfectly acceptable?
private void Execute(Action<SqlConnection> action)
{
    SqlConnection conn = null;
    try
    {
        conn = new SqlConnection(ConnectionString);
        conn.Open();
        action.Invoke(conn);
    }
    finally
    {
        if (conn != null && conn.State == ConnectionState.Open)
        {
            try
            {
                conn.Close();
            }
            catch
            {
            }
        }
    }
}
public SomeThing GetSomethingById(int id)
{
    SomeThing aSomething = null;
    Execute(conn =>
    {
        using (SqlCommand cmd = conn.CreateCommand())
        {
            cmd.CommandText = ....
            ...
            SqlDataReader reader = cmd.ExecuteReader();
            ...
            aSomething = new SomeThing(Convert.ToString(reader["aDbField"]));
        }
    });
    return aSomething;
}
What is the point of doing that when you can do this?
public SomeThing GetSomethingById(int id)
{
    using (var con = new SqlConnection(ConnectionString))
    {
        con.Open();
        using (var cmd = con.CreateCommand())
        {
            // prepare command
            using (var rdr = cmd.ExecuteReader())
            {
                // read fields
                return new SomeThing(data);
            }
        }
    }
}
You can promote code reuse by doing something like this.
public static void ExecuteToReader(string connectionString, string commandText,
    IEnumerable<KeyValuePair<string, object>> parameters, Action<IDataReader> action)
{
    using (var con = new SqlConnection(connectionString))
    {
        con.Open();
        using (var cmd = con.CreateCommand())
        {
            cmd.CommandText = commandText;
            foreach (var pair in parameters)
            {
                var parameter = cmd.CreateParameter();
                parameter.ParameterName = pair.Key;
                parameter.Value = pair.Value;
                cmd.Parameters.Add(parameter);
            }
            using (var rdr = cmd.ExecuteReader())
            {
                action(rdr);
            }
        }
    }
}
You could use it like this:
// At the top, create an alias (the alias must be fully qualified)
using DbParams = System.Collections.Generic.Dictionary<string, object>;

ExecuteToReader(
    connectionString,
    commandText,
    new DbParams() { { "key1", 1 }, { "key2", 2 } },
    reader =>
    {
        // ...
        // No need to dispose
    });
IMHO it is indeed a bad practice, since you're creating and opening a new database connection for every statement that you execute.
Why is it bad:
performance-wise (although connection pooling helps decrease the performance hit): you should open your connection, execute the statements that have to be executed, and close the connection when you don't know when the next statement will be executed.
but certainly context-wise: how will you handle transactions? Where are your transaction boundaries? Your application layer knows when a transaction has to be started and committed, but with this way of working you're unable to span multiple statements in the same SQL transaction (see the sketch below).
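To make the transaction-boundary point concrete, here is a sketch (my illustration, not the answerer's code) of how the Execute wrapper could be extended so the caller controls the transaction and several statements commit or roll back together:
// Sketch: the delegate receives both the open connection and a transaction,
// so multiple commands can participate in one SqlTransaction.
private void ExecuteInTransaction(Action<SqlConnection, SqlTransaction> action)
{
    using (var conn = new SqlConnection(ConnectionString))
    {
        conn.Open();
        using (var tx = conn.BeginTransaction())
        {
            try
            {
                action(conn, tx);   // each command must set cmd.Transaction = tx
                tx.Commit();
            }
            catch
            {
                tx.Rollback();
                throw;
            }
        }
    }
}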
This is a very reasonable approach to use.
By wrapping your connection logic into a method which takes an Action<SqlConnection>, you're helping prevent duplicated code and the potential for introduced error. Since we can now use lambdas, this becomes an easy, safe way to handle this situation.
That's acceptable. I created a SqlUtilities class two years ago that had a similar method. You can take it one step further if you like.
EDIT: Couldn't find the code, but I typed a small example (probably with many syntax errors ;))
SQLUtilities
public delegate T CreateMethod<T>(SqlDataReader reader);

public static T CreateEntity<T>(string query, CreateMethod<T> createMethod, params SqlParameter[] parameters)
{
    // Open the Sql connection
    // Create a Sql command with the query/sp and parameters
    SqlDataReader reader = cmd.ExecuteReader();
    return createMethod(reader);
    // Probably some finally statements or using-closures etc. etc.
}
Calling code
private SomeThing Create(SqlDataReader reader)
{
    SomeThing something = new SomeThing();
    something.ID = Convert.ToInt32(reader["ID"]);
    ...
    return something;
}

public SomeThing GetSomeThingByID(int id)
{
    return SqlUtilities.CreateEntity<SomeThing>("something_getbyid", Create, ....);
}
Of course you could use a lambda expression instead of the Create method, and you could easily make a CreateCollection method and reuse the existing Create method; a sketch of that idea follows below.
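A minimal sketch of that CreateCollection idea (my addition; the connection and command plumbing that CreateEntity elides is filled in with standard ADO.NET calls):
// Sketch: loop the reader and delegate each row to the existing Create method.
public static List<T> CreateCollection<T>(string connectionString, string query,
    CreateMethod<T> createMethod, params SqlParameter[] parameters)
{
    var results = new List<T>();
    using (var conn = new SqlConnection(connectionString))
    using (var cmd = new SqlCommand(query, conn))
    {
        cmd.CommandType = CommandType.StoredProcedure; // the example passes "something_getbyid"
        cmd.Parameters.AddRange(parameters);
        conn.Open();
        using (var reader = cmd.ExecuteReader())
        {
            while (reader.Read())
            {
                results.Add(createMethod(reader));
            }
        }
    }
    return results;
}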
However, if this is a new project, check out LINQ to Entities. It's far easier and more flexible than ADO.NET.
Well, in my opinion, check what you're doing before going through with it. Something that works doesn't mean it's good programming practice; find a concrete example of the benefit of using it. But if you are considering this for big projects, it would be nice to use a framework like NHibernate, because a lot of projects and even frameworks have been developed on top of it, like http://www.cuyahoga-project.org/.