How do I combine three observables such that - system.reactive

If I have three observables, how do I combine them so that when the first emits a new value, I can combine it with the latest values emitted by the other two? It's important that we only produce a new value of the combined observables when the first observable emits a new value, regardless of whether the other two have emitted new values.
Thanks.

You could do this (note that you get default values for sources 2 and 3 if they have yet to emit, and I have allowed for you to optionally supply those default values):
public static class ObservableExtensions
{
    public static IObservable<TResult> CombineFirstWithLatestOfRest
        <TSource1, TSource2, TSource3, TResult>(
        this IObservable<TSource1> source1,
        IObservable<TSource2> source2,
        IObservable<TSource3> source3,
        Func<TSource1, TSource2, TSource3, TResult> resultSelector,
        TSource2 source2Default = default(TSource2),
        TSource3 source3Default = default(TSource3))
    {
        var latestOfRest = source2.CombineLatest(source3, Tuple.Create);
        return source1.Zip(
            latestOfRest.MostRecent(Tuple.Create(source2Default, source3Default)),
            (s1, s23) => resultSelector(s1, s23.Item1, s23.Item2));
    }
}
Example use:
var source1 = Observable.Interval(TimeSpan.FromSeconds(2));
var source2 = Observable.Interval(TimeSpan.FromSeconds(1));
var source3 = Observable.Interval(TimeSpan.FromSeconds(3.5));
var res = source1.CombineFirstWithLatestOfRest(source2, source3,
(x,y,z) => string.Format("1: {0} 2: {1} 3: {2}", x,y,z));
res.Subscribe(Console.WriteLine);
Addendum
There's a subtle problem here that might be significant in your scenario. A sometimes undesirable aspect of CombineLatest is that it does not emit until it has a value from every contributing stream. This means in the example above that the slower source3 holds up source2 and values are missed. Specifically, events returned will have the default values for both source2 and source3 until both have emitted at least one event each. Kicking off source2 and source3 with their default values is a convenient workaround for this behaviour, which we can get away with since it is source1 that drives events:
public static class ObservableExtensions
{
    public static IObservable<TResult> CombineFirstWithLatestOfRest
        <TSource1, TSource2, TSource3, TResult>(
        this IObservable<TSource1> source1,
        IObservable<TSource2> source2,
        IObservable<TSource3> source3,
        Func<TSource1, TSource2, TSource3, TResult> resultSelector,
        TSource2 source2Default = default(TSource2),
        TSource3 source3Default = default(TSource3))
    {
        source2 = source2.StartWith(source2Default); // added this
        source3 = source3.StartWith(source3Default); // and this
        var latestOfRest = source2.CombineLatest(source3, Tuple.Create);
        return source1.Zip(
            latestOfRest.MostRecent(
                Tuple.Create(source2Default, source3Default)), // now redundant
            (s1, s23) => resultSelector(s1, s23.Item1, s23.Item2));
    }
}
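As a footnote, later versions of Rx.NET (2.2 and up) ship Observable.WithLatestFrom, which has this "only the first stream drives output" behaviour built in for two sources, so you can probably get the same result by combining sources 2 and 3 first. A rough sketch, reusing the Interval sources from the example above (note that WithLatestFrom also drops source1 values until its other input has emitted, so the StartWith seeding is still useful):
using System;
using System.Reactive.Linq;

var source1 = Observable.Interval(TimeSpan.FromSeconds(2));
var source2 = Observable.Interval(TimeSpan.FromSeconds(1)).StartWith(0L);   // seed with a default
var source3 = Observable.Interval(TimeSpan.FromSeconds(3.5)).StartWith(0L); // seed with a default

var res = source1.WithLatestFrom(
    source2.CombineLatest(source3, Tuple.Create),
    (s1, s23) => string.Format("1: {0} 2: {1} 3: {2}", s1, s23.Item1, s23.Item2));

res.Subscribe(Console.WriteLine);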

Related

Differentiating an AVRO union type

I'm consuming Avro serialized messages from Kafka using the "automatic" deserializer like:
props.put(
ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
"io.confluent.kafka.serializers.KafkaAvroDeserializer"
);
props.put("schema.registry.url", "https://example.com");
This works brilliantly, and is right out of the docs at https://docs.confluent.io/current/schema-registry/serializer-formatter.html#serializer.
The problem I'm facing is that I actually just want to forward these messages, but to do the routing I need some metadata from inside. Some technical constraints mean that I can't feasibly compile in generated class files to use KafkaAvroDeserializerConfig.SPECIFIC_AVRO_READER_CONFIG => true, so I am using a regular decoder without being tied to Kafka, specifically just reading the bytes as an Array[Byte] and passing them to a manually constructed deserializer:
var maxSchemasToCache = 1000;
var schemaRegistryURL = "https://example.com/"
var specificDeserializerProps = Map(
"schema.registry.url"
-> schemaRegistryURL,
KafkaAvroDeserializerConfig.SPECIFIC_AVRO_READER_CONFIG
-> "false"
);
var client = new CachedSchemaRegistryClient(
schemaRegistryURL,
maxSchemasToCache
);
var deserializer = new KafkaAvroDeserializer(
client,
specificDeserializerProps.asJava
);
The messages are a "container" type, with the really interesting part being one of about 25 types in a union { A, B, C } msg record field:
record Event {
timestamp_ms created_at;
union {
Online,
Offline,
Available,
Unavailable,
...
...Failed,
...Updated
} msg;
}
So I'm successfully reading an Array[Byte] into a record and feeding it into the deserializer like this:
var genericRecord = deserializer.deserialize(topic, consumerRecord.value())
.asInstanceOf[GenericRecord];
var schema = genericRecord.getSchema();
var msgSchema = schema.getField("msg").schema();
The problem, however, is that I can find no way to discern, discriminate or "resolve" the "type" of the msg field through the union:
System.out.printf(
"msg.schema = %s msg.schema.getType = %s\n",
msgSchema.getFullName(),
msgSchema.getType().name());
=> msg.schema = union msg.schema.getType = union
How do I discriminate types in this scenario? The Confluent registry knows; these things have names and "types", even if I'm treating them as GenericRecords.
My goal here is to know that record.msg is of "type" Online | Offline | Available rather than just knowing it's a union.
After having looked into the implementation of the Avro Java library, I think it's safe to say that this is impossible given the current API. I've found the following way of extracting the types while parsing, using a custom GenericDatumReader subclass, but it needs a lot of polishing before I'd use something like this in production code :D
So here's the subclass:
import org.apache.avro.Schema;
import org.apache.avro.generic.GenericData;
import org.apache.avro.generic.GenericDatumReader;
import org.apache.avro.io.ResolvingDecoder;
import java.io.IOException;
import java.util.List;
public class CustomReader<D> extends GenericDatumReader<D> {
    private final GenericData data;
    private Schema actual;
    private Schema expected;
    private ResolvingDecoder creatorResolver = null;
    private final Thread creator;
    private List<Schema> unionTypes;

    // vvv This is the constructor I've modified, added a list of types
    public CustomReader(Schema schema, List<Schema> unionTypes) {
        this(schema, schema, GenericData.get());
        this.unionTypes = unionTypes;
    }

    public CustomReader(Schema writer, Schema reader, GenericData data) {
        this(data);
        this.actual = writer;
        this.expected = reader;
    }

    protected CustomReader(GenericData data) {
        this.data = data;
        this.creator = Thread.currentThread();
    }

    protected Object readWithoutConversion(Object old, Schema expected, ResolvingDecoder in) throws IOException {
        switch (expected.getType()) {
            case RECORD:
                return super.readRecord(old, expected, in);
            case ENUM:
                return super.readEnum(expected, in);
            case ARRAY:
                return super.readArray(old, expected, in);
            case MAP:
                return super.readMap(old, expected, in);
            case UNION:
                // vvv The magic happens here
                Schema type = expected.getTypes().get(in.readIndex());
                unionTypes.add(type);
                return super.read(old, type, in);
            case FIXED:
                return super.readFixed(old, expected, in);
            case STRING:
                return super.readString(old, expected, in);
            case BYTES:
                return super.readBytes(old, expected, in);
            case INT:
                return super.readInt(old, expected, in);
            case LONG:
                return in.readLong();
            case FLOAT:
                return in.readFloat();
            case DOUBLE:
                return in.readDouble();
            case BOOLEAN:
                return in.readBoolean();
            case NULL:
                in.readNull();
                return null;
            default:
                return super.readWithoutConversion(old, expected, in);
        }
    }
}
I've added comments to the code for the interesting parts, as it's mostly boilerplate.
Then you can use this custom reader like this:
List<Schema> unionTypes = new ArrayList<>();
DatumReader<GenericRecord> datumReader = new CustomReader<GenericRecord>(schema, unionTypes);
DataFileReader<GenericRecord> dataFileReader = new DataFileReader<GenericRecord>(eventFile, datumReader);
GenericRecord event = null;
while (dataFileReader.hasNext()) {
event = dataFileReader.next(event);
}
System.out.println(unionTypes);
This will print, for each union parsed, the type of that union. Note that you'll have to figure out which element of that list is interesting to you depending on how many unions you have in a record, etc.
Not pretty tbh :D
I was able to come up with a single-use solution after a lot of digging:
val records: ConsumerRecords[String, Array[Byte]] = consumer.poll(100);
for (consumerRecord <- asScalaIterator(records.iterator)) {
    var genericRecord = deserializer.deserialize(topic, consumerRecord.value()).asInstanceOf[GenericRecord];
    var msgSchema = genericRecord.get("msg").asInstanceOf[GenericRecord].getSchema();
    System.out.printf("%s \n", msgSchema.getFullName());
}
Prints com.myorg.SomeSchemaFromTheEnum and works perfectly in my use-case.
The confusing thing is that, because of the use of GenericRecord, .get("msg") returns Object, which in the general case I have no way to safely typecast. In this limited case, I know the cast is safe.
In my limited use-case the solution in the 5 lines above is suitable, but for a more general solution the answer https://stackoverflow.com/a/59844401/119669 posted by https://stackoverflow.com/users/124257/fresskoma seems more appropriate.
Whether to use a DatumReader or GenericRecord is probably a matter of preference and of whether the Kafka ecosystem is in mind; with Avro alone I'd probably prefer a DatumReader solution, but in this instance I can live with having Kafka-esque nomenclature in my code.
To retrieve the schema of the value of a field, you can use
new GenericData().induce(genericRecord.get("msg"))

Applying keyed state on top of a stream from a co-grouped stream

I have two Kafka sources and I am trying to perform word count and merge the counts from the two streams.
I have created a 1-minute window for both data streams and am applying CoGroupByKey; from the DoFn I emit <Key,Value> pairs of (word, count).
On top of this CoGroupByKey function I am applying a stateful ParDo.
Say I get (Test,2) from stream 1 and (Test,3) from stream 2 in the same window; then in the CoGroupByKey function I'll merge them as (Test,5). But if they do not fall in the same window, I will emit (Test,2) and (Test,3).
Now I apply state to merge these elements.
So as the final result I should get (Test,5), but I am not getting the expected result: all elements from stream 1 are going to one partition and elements from stream 2 to another partition, which is why I get
(Test,2)
(Test,3)
// word count stream from kafka topic 1
PCollection<KV<String,Long>> stream1 = ...
// word count stream from kafka topic 2
PCollection<KV<String,Long>> stream2 = ...
PCollection<KV<String,Long>> windowed1 =
stream1.apply(
Window
.<KV<String,Long>>into(FixedWindows.of(Duration.millis(60000)))
.triggering(Repeatedly.forever(AfterPane.elementCountAtLeast(1)))
.withAllowedLateness(Duration.millis(1000))
.discardingFiredPanes());
PCollection<KV<String,Long>> windowed2 =
stream2.apply(
Window
.<KV<String,Long>>into(FixedWindows.of(Duration.millis(60000)))
.triggering(Repeatedly.forever(AfterPane.elementCountAtLeast(1)))
.withAllowedLateness(Duration.millis(1000))
.discardingFiredPanes());
final TupleTag<Long> count1 = new TupleTag<Long>();
final TupleTag<Long> count2 = new TupleTag<Long>();
// Merge collection values into a CoGbkResult collection.
PCollection<KV<String, CoGbkResult>> joinedStream =
KeyedPCollectionTuple.of(count1, windowed1).and(count2, windowed2)
.apply(CoGroupByKey.<String>create());
// applying state operation after coGroupKey fun
PCollection<KV<String,Long>> finalCountStream =
joinedStream.apply(ParDo.of(
new DoFn<KV<String, CoGbkResult>, KV<String,Long>>() {
@StateId(stateId)
private final StateSpec<MapState<String, Long>> mapState =
StateSpecs.map();
@ProcessElement
public void processElement(
ProcessContext processContext,
@StateId(stateId) MapState<String, Long> state) {
KV<String, CoGbkResult> element = processContext.element();
Iterable<Long> counts1 = element.getValue().getAll(count1);
Iterable<Long> counts2 = element.getValue().getAll(count2);
Long sumAmount =
StreamSupport
.stream(
Iterables.concat(counts1, counts2).spliterator(), false)
.collect(Collectors.summingLong(n -> n));
System.out.println(element.getKey()+"::"+sumAmount);
// processContext.output(element.getKey()+"::"+sumAmount);
Long currCount =
state.get(element.getKey()).read() == null
? 0L
: state.get(element.getKey()).read();
Long newCount = currCount+sumAmount;
state.put(element.getKey(),newCount);
processContext.output(KV.of(element.getKey(),newCount));
}
}));
finalCountStream
.apply("finalState", ParDo.of(new DoFn<KV<String,Long>, String>() {
@StateId(myState)
private final StateSpec<MapState<String, Long>> mapState =
StateSpecs.map();
@ProcessElement
public void processElement(
ProcessContext c,
@StateId(myState) MapState<String, Long> state) {
KV<String,Long> e = c.element();
Long currCount = state.get(e.getKey()).read()==null
? 0L
: state.get(e.getKey()).read();
Long newCount = currCount+e.getValue();
state.put(e.getKey(),newCount);
c.output(e.getKey()+":"+newCount);
}
}))
.apply(KafkaIO.<Void, String>write()
.withBootstrapServers("localhost:9092")
.withTopic("test")
.withValueSerializer(StringSerializer.class)
.values());
Alternatively, you can use a Flatten + Combine approach, which should give you simpler code:
PCollection<KV<String, Long>> pc1 = ...;
PCollection<KV<String, Long>> pc2 = ...;
PCollectionList<KV<String, Long>> pcs = PCollectionList.of(pc1).and(pc2);
PCollection<KV<String, Long>> merged = pcs.apply(Flatten.<KV<String, Long>>pCollections());
merged.apply(window...).apply(Combine.perKey(Sum.ofLongs()))
You have set up both streams with the trigger Repeatedly.forever(AfterPane.elementCountAtLeast(1)) and discardingFiredPanes(). This will cause the CoGroupByKey to output as soon as possible after each input element and then reset its state each time. So it is normal behavior that it basically passes each input straight through.
Let me explain more: CoGroupByKey is executed like this:
All elements from stream1 and stream2 are tagged as you specified. So every (key, value1) from stream1 effectively becomes (key, (count1, value1)), and every (key, value2) from stream2 becomes (key, (count2, value2)).
These tagged collections are flattened together. So now there is one collection with elements like (key, (count1, value1)) and (key, (count2, value2)).
The combined collection goes through a normal GroupByKey. This is where triggers happen. So with the default trigger, you get (key, [(count1, value1), (count2, value2), ...]) with all the values for a key getting grouped. But with your trigger, you will often get separate (key, [(count1, value1)]) and (key, [(count2, value2)]) because each grouping fires right away.
The output of the GroupByKey is simply wrapped in an API, CoGbkResult. In many runners this is just a filtered view of the grouped iterable.
Of course, triggers are nondeterministic and runners are also allowed to have different implementations of CoGroupByKey. But the behavior you are seeing is expected. You probably don't want to use a trigger like that or discarding mode, or else you need to do more grouping downstream.
Generally, doing a join with CoGBK is going to require some work downstream, until Beam supports retractions.
PipelineOptions options = PipelineOptionsFactory.create();
options.as(FlinkPipelineOptions.class)
.setRunner(FlinkRunner.class);
Pipeline p = Pipeline.create(options);
PCollection<KV<String,Long>> stream1 = new KafkaWordCount("localhost:9092","test1")
.build(p);
PCollection<KV<String,Long>> stream2 = new KafkaWordCount("localhost:9092","test2")
.build(p);
PCollectionList<KV<String, Long>> pcs = PCollectionList.of(stream1).and(stream2);
PCollection<KV<String, Long>> merged = pcs.apply(Flatten.<KV<String, Long>>pCollections());
merged.apply("finalState", ParDo.of(new DoFn<KV<String,Long>, String>() {
@StateId(myState)
private final StateSpec<MapState<String, Long>> mapState = StateSpecs.map();
@ProcessElement
public void processElement(ProcessContext c, @StateId(myState) MapState<String, Long> state){
KV<String,Long> e = c.element();
System.out.println("Thread ID :"+ Thread.currentThread().getId());
Long currCount = state.get(e.getKey()).read()==null? 0L:state.get(e.getKey()).read();
Long newCount = currCount+e.getValue();
state.put(e.getKey(),newCount);
c.output(e.getKey()+":"+newCount);
}
})).apply(KafkaIO.<Void, String>write()
.withBootstrapServers("localhost:9092")
.withTopic("test")
.withValueSerializer(StringSerializer.class)
.values()
);
p.run().waitUntilFinish();

ReactiveX Self-Cancelling Timer

I want to create an extension method of the form:
IObservable<bool> CancellableTimer(this IObservable<bool> source, TimeSpan delay)
{
...
}
which produces a sequence which is always false when the source is, but will go true when the source sequence has stayed true for a period defined by a delay, t:
source: 0---1---------0--1-0-1-0-1-0-1----------0
t------> t------>
result: 0----------1--0---------------------1---0
I'm sure there must be a way to do this using Rx primitives but I'm new to Rx and having trouble getting my head round it. Any ideas please?
Okay so this is what I came up with. I also renamed the method to AsymetricDelay() as it seems like a more appropriate name:
static public IObservable<bool> AsymetricDelay(this IObservable<bool> source, TimeSpan delay, IScheduler scheduler)
{
    var distinct = source.DistinctUntilChanged();
    return distinct
        .Throttle(delay, scheduler)   // Delay both trues and falses
        .Where(x => x)                // But we only want trues to be delayed
        .Merge(                       // Merge the trues with...
            distinct.Where(x => !x)   // non delayed falses
        )
        .DistinctUntilChanged();      // Get rid of any repeated values
}
And here is a unit test to confirm its operation:
[Fact]
public static void Test_AsymetricDelay()
{
var scheduler = new TestScheduler();
var xs = scheduler.CreateHotObservable(
new Recorded<Notification<bool>>(10000000, Notification.CreateOnNext(true)),
new Recorded<Notification<bool>>(60000000, Notification.CreateOnNext(false)),
new Recorded<Notification<bool>>(70000000, Notification.CreateOnNext(true)),
new Recorded<Notification<bool>>(80000000, Notification.CreateOnNext(false)),
new Recorded<Notification<bool>>(100000000, Notification.CreateOnCompleted<bool>())
);
var dest = xs.AsymetricDelay(TimeSpan.FromSeconds(2), scheduler);
var testObserver = scheduler.Start(
() => dest,
0,
0,
TimeSpan.FromSeconds(10).Ticks);
testObserver.Messages.AssertEqual(
new Recorded<Notification<bool>>(30000000, Notification.CreateOnNext(true)),
new Recorded<Notification<bool>>(60000000, Notification.CreateOnNext(false)),
new Recorded<Notification<bool>>(100000000, Notification.CreateOnCompleted<bool>())
);
}
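As a side note, another way to express the same behaviour with Rx primitives is to project each distinct state change into an inner observable and let Switch dispose of any pending timer when the source changes. This is just a sketch of the idea and has not been run against the marble diagram above:
static public IObservable<bool> CancellableTimer(this IObservable<bool> source, TimeSpan delay, IScheduler scheduler)
{
    return source
        .DistinctUntilChanged()
        .Select(on => on
            ? Observable.Timer(delay, scheduler).Select(_ => true) // a true only surfaces after the delay elapses
            : Observable.Return(false))                            // falses pass through immediately
        .Switch()                // a newer state change disposes the pending inner timer
        .DistinctUntilChanged();
}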

Merging Observables

Here we have an Observable sequence in .NET using Rx.
var aSource = new Subject<int>();
var bSource = new Subject<int>();
var paired = Observable
.Merge(aSource, bSource)
.GroupBy(i => i).SelectMany(g => g.Buffer(2).Take(1));
paired.Subscribe(g => Console.WriteLine("{0}:{1}", g.ElementAt(0), g.ElementAt(1)));
aSource.OnNext(4);
bSource.OnNext(1);
aSource.OnNext(2);
bSource.OnNext(5);
aSource.OnNext(3);
bSource.OnNext(3);
aSource.OnNext(5);
bSource.OnNext(2);
aSource.OnNext(1);
bSource.OnNext(4);
Output:
3:3
5:5
2:2
1:1
4:4
We will get events every time a pair of numbers arrive with the same id.
Perfect! Just what I want.
Groups of two, paired by value.
Next question....
How to get a selectmany/buffer for sequences of values.
So 1,2,3,4,5 arrives at both aSource and bSource via OnNext(). Then fire Console.WriteLine() for 1-5. Then when 2,3,4,5,6 arrives, we get another Console.WriteLine(). Any clues anyone?
Immediately, the Rx forum suggests looking at .Window()
http://introtorx.com/Content/v1.0.10621.0/17_SequencesOfCoincidence.html
Which on the surface looks perfect. In my case I need a window of 4.
Where in the query sequence does it belong to get this effect?
var paired = Observable.Merge(aSource, bSource).GroupBy(i => i).SelectMany(g => g.Buffer(2).Take(1));
Output
1,2,3,4,5 : 1,2,3,4,5
2,3,4,5,6 : 2,3,4,5,6
Regards,
Daniel
Assuming events arrive randomly at the sources, use my answer to "Reordering events with Reactive Extensions" to get the events in order.
Then use Observable.Buffer to create a sliding buffer:
// get this using the OrderedCollect/Sort in the referenced question
IObservable<int> orderedSource;
// then subscribe to this
orderedSource.Buffer(5, 1);
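To illustrate what that sliding buffer produces, here is a quick sketch with a plain Range source standing in for the ordered stream (the Where drops the trailing partial buffers):
// Buffer(5, 1) emits a new 5-element window every time one element arrives,
// i.e. 1,2,3,4,5 then 2,3,4,5,6 and so on.
Observable.Range(1, 7)
    .Buffer(5, 1)
    .Where(b => b.Count == 5)
    .Subscribe(b => Console.WriteLine(string.Join(",", b)));
// Output:
// 1,2,3,4,5
// 2,3,4,5,6
// 3,4,5,6,7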
Here is an extension method that fires when it has n inputs with the same id.
public static class RxExtension
{
    public static IObservable<TSource> MergeBuffer<TSource>(
        this IObservable<TSource> source,
        Func<TSource, int> keySelector,
        Func<IList<TSource>, TSource> mergeFunction,
        int bufferCount)
    {
        return Observable.Create<TSource>(o =>
        {
            var buffer = new Dictionary<int, IList<TSource>>();
            return source.Subscribe<TSource>(i =>
            {
                var index = keySelector(i);
                if (buffer.ContainsKey(index))
                {
                    buffer[index].Add(i);
                }
                else
                {
                    buffer.Add(index, new List<TSource>() { i });
                }
                // Emit once we have bufferCount items for this particular id
                if (buffer[index].Count == bufferCount)
                {
                    o.OnNext(mergeFunction(buffer[index]));
                    buffer.Remove(index);
                }
            });
        });
    }
}
Calling the extension.
mainInput = Observable.Merge(inputNodes.ToArray()).MergeBuffer<NodeData>(x => x.id, x => MergeData(x), 1);

Testing With A Fake DbContext and Autofixture and Moq

So I followed this example of how to make a fake DbContext for testing, and my test using just that works fine:
[Test]
public void CiudadIndex()
{
var ciudades = new FakeDbSet<Ciudad>
{
new Ciudad {CiudadId = 1, EmpresaId =1, Descripcion ="Santa Cruz", FechaProceso = DateTime.Now, MarcaBaja = null, UsuarioId = 1},
new Ciudad {CiudadId = 2, EmpresaId =1, Descripcion ="La Paz", FechaProceso = DateTime.Now, MarcaBaja = null, UsuarioId = 1},
new Ciudad {CiudadId = 3, EmpresaId =1, Descripcion ="Cochabamba", FechaProceso = DateTime.Now, MarcaBaja = null, UsuarioId = 1}
};
//// Create mock unit of work
var mockData = new Mock<IContext>();
mockData.Setup(m => m.Ciudades).Returns(ciudades);
// Setup controller
var homeController = new CiudadController(mockData.Object);
// Invoke
var viewResult = homeController.Index();
var ciudades_de_la_vista = (IEnumerable<Ciudad>)viewResult.Model;
// Assert..
}
I am now trying to use AutoFixture-Moq to create "ciudades", but I can't. I tried this:
var fixture = new Fixture();
var ciudades = fixture.Build<FakeDbSet<Ciudad>>().CreateMany<FakeDbSet<Ciudad>>();
var mockData = new Mock<IContext>();
mockData.Setup(m => m.Ciudades).Returns(ciudades);
I get this error:
Can't convert System.Collections.Generic.IEnumerable<FakeDbSet<Ciudad>> to System.Data.Entity.IDbSet<Ciudad>
Implementation of IContext and FakeDbSet
public interface IContext
{
IDbSet<Ciudad> Ciudades { get; }
}
public class FakeDbSet<T> : IDbSet<T> where T : class
How can I make this work?
A minor point... In stuff like:
var ciudades_fixture = fixture.Build<Ciudad>().CreateMany<Ciudad>();
The second type arg is unnecessary and should be:
var ciudades_fixture = fixture.Build<Ciudad>().CreateMany();
I don't really understand why you need a FakeDbSet, and the article is a bit TL;DR... In general, I try to avoid faking and mucking with ORM bits and instead deal with interfaces returning POCOs to the maximum degree possible.
That aside... The reason the normal collection-initialiser syntax works is that there is an Add (and IEnumerable) on FakeDbSet. AutoFixture doesn't have a story for that pattern directly (after all, it is compiler syntactic sugar and not particularly amenable to reflection or in line with any other conventions), but you can use AddManyTo as long as there is an ICollection in play. Luckily, within the implementation of FakeDbSet as in the article, the following gives us an in:
public ObservableCollection<T> Local
{
get { return _data; }
}
As ObservableCollection<T> derives from ICollection<T>, you should be able to:
var ciudades = new FakeDbSet<Cuidad>();
fixture.AddManyTo(ciudades.Local);
var mockData = new Mock<IContext>();
mockData.Setup(m => m.Ciudades).Returns(ciudades);
It's possible to wire up a customization to make this prettier, but at least you have a way to manage it. The other option is to have something implement ICollection (or add a prop with a setter taking IEnumerable<T>) and have AF generate the parent object, causing said collection to be filled in.
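For example, such a customization might look something like this (only a sketch, assuming the FakeDbSet<T>/Ciudad/IContext types from the question and the Local property shown above; exact AutoFixture namespaces depend on the version you use):
var fixture = new Fixture();
// Whenever a FakeDbSet<Ciudad> is requested, build an empty set and
// fill its Local collection with generated Ciudad items.
fixture.Customize<FakeDbSet<Ciudad>>(c => c
    .FromFactory(() =>
    {
        var set = new FakeDbSet<Ciudad>();
        fixture.AddManyTo(set.Local);
        return set;
    })
    .OmitAutoProperties());
var ciudades = fixture.Create<FakeDbSet<Ciudad>>();
var mockData = new Mock<IContext>();
mockData.Setup(m => m.Ciudades).Returns(ciudades);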
Long superseded side note: In your initial question, you effectively have:
fixture.Build<FakeDbSet<Ciudad>>().CreateMany()
The problem becomes clearer then - you are asking AF to generate Many FakeDbSet<Ciudad>s, which is not what you want.
I haven't used AutoFixture in a while, but shouldn't it be:
var ciudades = new FakeDbSet<Ciudad>();
fixture.AddManyTo(ciudades);
For the moment I ended up doing this; I will keep reading about how to use AutoMoq, because I'm new to this:
var fixture = new Fixture();
var ciudades_fixture = fixture.Build<Ciudad>().CreateMany<Ciudad>();
var ciudades = new FakeDbSet<Ciudad>();
foreach (var item in ciudades_fixture)
{
ciudades.Add(item);
}
var mockData = new Mock<IContext>();
fixture.Create<Mock<IContext>>();
mockData.Setup(r => r.Ciudades).Returns(ciudades);