How to set the casbin conf if I want to use RBAC with resource roles and at the same time match RESTful APIs - casbin

I use casbin with the RBAC with resource roles model. This is my conf:
[request_definition]
r = sub, obj, act
[policy_definition]
p = sub, obj, act
[role_definition]
g = _, _
g2 = _, _
[policy_effect]
e = some(where (p.eft == allow))
[matchers]
m = g(r.sub, p.sub) && g2(r.obj, p.obj) && r.act == p.act
this is the policy:
p, data_group_admin, data_group, write
g, alice, data_group_admin
g2, /api/:id, data_group
this is the request:
alice, /api/1, read
alice, /api/2, write
The result is false, false.
Actually, I expect the result to be true, true. I want the resource to support the RESTful format. How should I set up the conf?

You can use AddMatchingFunc to make the default role manager know how to spread the role link, because by default it compares role names exactly.
The example code may look like:
e.GetRoleManager().(*defaultrolemanager.RoleManager).AddMatchingFunc("matcher", util.KeyMatch)
For more information you can see https://github.com/casbin/casbin/blob/master/rbac/default-role-manager/role_manager.go#L163
Updated: sample code added.
package main

import (
	"fmt"

	"github.com/casbin/casbin/v2"
	defaultrolemanager "github.com/casbin/casbin/v2/rbac/default-role-manager"
	"github.com/casbin/casbin/v2/util"
)

func main() {
	e, _ := casbin.NewEnforcer("./model.conf", "./policy.csv")
	// Tell the role manager that g2 patterns like /api/:id should match /api/1.
	e.GetRoleManager().(*defaultrolemanager.RoleManager).AddMatchingFunc("key_match", util.KeyMatch2)
	res1, _ := e.Enforce("alice", "/api/1", "write") // true
	res2, _ := e.Enforce("alice", "/api/2", "read")  // false
	fmt.Println(res1, res2)
}
The first enforce returns true because alice belongs to data_group_admin, /api/1 belongs to data_group, and data_group_admin can write data_group.
Since you didn't define a read policy, the second enforce returns false.
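If you want to sanity-check how the g2 pattern behaves on its own, you can call the matching function directly. This is just a sketch; util.KeyMatch2 is the same function the sample above registers with AddMatchingFunc:

package main

import (
	"fmt"

	"github.com/casbin/casbin/v2/util"
)

func main() {
	// KeyMatch2 treats :id as a single path segment, which is why the rule
	// "g2, /api/:id, data_group" links /api/1 and /api/2 to data_group.
	fmt.Println(util.KeyMatch2("/api/1", "/api/:id"))   // true
	fmt.Println(util.KeyMatch2("/api/2", "/api/:id"))   // true
	fmt.Println(util.KeyMatch2("/api/1/x", "/api/:id")) // false (extra segment)
}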

In my opinion:
g2, /api/:id, data_group
g2 needs two subs, not an obj and a sub.
I mean: if you want alice to be able to access /api/1,
you need:
p: alice, /api/1, read
p: alice, /api/2, write
or:
p: data_group_admin, /api/1, read
p: data_group_admin, /api/2, write
g: alice, data_group_admin
Summary: distinguish sub and obj in r, p and g.


The direction of the <> operator in Chisel?

In the 2.6 chiseltest section of the chisel-bootcamp tutorials, there is an example of creating a Queue using the Decoupled interface:
case class QueueModule[T <: Data](ioType: T, entries: Int) extends MultiIOModule {
val in = IO(Flipped(Decoupled(ioType)))
val out = IO(Decoupled(ioType))
out <> Queue(in, entries)
}
The <> operator's direction in the last line, out <> Queue(in, entries), is really confusing to me. I checked the <> operator of class DecoupledIO in the Chisel API and its definition is "Connect this data to that data bi-directionally and element-wise," which means out and the return value of Queue(in, entries) must be connected bi-directionally. However, I found the Queue source code:
object Queue {
  /** Create a queue and supply a DecoupledIO containing the product. */
  @chiselName
  def apply[T <: Data](
      enq: ReadyValidIO[T],
      entries: Int = 2,
      pipe: Boolean = false,
      flow: Boolean = false): DecoupledIO[T] = {
    if (entries == 0) {
      val deq = Wire(new DecoupledIO(chiselTypeOf(enq.bits)))
      deq.valid := enq.valid
      deq.bits := enq.bits
      enq.ready := deq.ready
      deq
    } else {
      val q = Module(new Queue(chiselTypeOf(enq.bits), entries, pipe, flow))
      q.io.enq.valid := enq.valid // not using <> so that override is allowed
      q.io.enq.bits := enq.bits
      enq.ready := q.io.enq.ready
      TransitName(q.io.deq, q)
    }
  }
}
which returns q.io.deq via the TransitName method, and q.io.deq is defined as follows:
object DeqIO {
def apply[T<:Data](gen: T): DecoupledIO[T] = Flipped(Decoupled(gen))
}
/** An I/O Bundle for Queues
* @param gen The type of data to queue
* @param entries The max number of entries in the queue.
*/
class QueueIO[T <: Data](private val gen: T, val entries: Int) extends Bundle
{ // See github.com/freechipsproject/chisel3/issues/765 for why gen is a private val and proposed replacement APIs.
/* These may look inverted, because the names (enq/deq) are from the perspective of the client,
* but internally, the queue implementation itself sits on the other side
* of the interface so uses the flipped instance.
*/
/** I/O to enqueue data (client is producer, and Queue object is consumer), is [[Chisel.DecoupledIO]] flipped. */
val enq = Flipped(EnqIO(gen))
/** I/O to dequeue data (client is consumer and Queue object is producer), is [[Chisel.DecoupledIO]]*/
val deq = Flipped(DeqIO(gen))
/** The current amount of data in the queue */
val count = Output(UInt(log2Ceil(entries + 1).W))
}
That means q.io.deq is a non-flipped DecoupledIO and has the same interface direction as out. So I really want to know how <> works in out <> Queue(in, entries).
Decoupled(data) adds a handshaking protocol to the data bundle given as a parameter.
If you declare this signal, for example:
val dec_data = IO(Decoupled(chiselTypeOf(data)))
the dec_data object will have two handshake values (ready, valid) with different directions and one data value.
myvalue := dec_data.bits
value_is_valid := dec_data.valid // boolean value in the same direction as data
dec_data.ready := sink_ready_to_receive // boolean value in the opposite direction to data
If you want to connect dec_data to another DecoupledIO bundle, you can't use the := operator on the whole bundle because it's a unidirectional operator.
You have to do the connection value by value:
val dec_data_sink = IO(Flipped(Decoupled(chiselTypeOf(data))))
dec_data_sink.bits := dec_data.bits
dec_data_sink.valid := dec_data.valid
dec_data.ready := dec_data_sink.ready
With the bulk connector <> you can avoid these painful connections:
dec_data_sink <> dec_data
Chisel will automatically connect the right signals together.
For more documentation about bulk connections and the Decoupled interface, see the documentation here.
OK, I checked the Verilog generated by this example:
module Queue(
input clock,
input reset,
output io_enq_ready,
input io_enq_valid,
input [8:0] io_enq_bits,
input io_deq_ready,
output io_deq_valid,
output [8:0] io_deq_bits
);
......
......
module QueueModule(
input clock,
input reset,
output in_ready,
input in_valid,
input [8:0] in_bits,
input out_ready,
output out_valid,
output [8:0] out_bits
);
wire q_clock; // @[Decoupled.scala 296:21]
wire q_reset; // @[Decoupled.scala 296:21]
wire q_io_enq_ready; // @[Decoupled.scala 296:21]
wire q_io_enq_valid; // @[Decoupled.scala 296:21]
wire [8:0] q_io_enq_bits; // @[Decoupled.scala 296:21]
wire q_io_deq_ready; // @[Decoupled.scala 296:21]
wire q_io_deq_valid; // @[Decoupled.scala 296:21]
wire [8:0] q_io_deq_bits; // @[Decoupled.scala 296:21]
Queue q ( // @[Decoupled.scala 296:21]
.clock(q_clock),
.reset(q_reset),
.io_enq_ready(q_io_enq_ready),
.io_enq_valid(q_io_enq_valid),
.io_enq_bits(q_io_enq_bits),
.io_deq_ready(q_io_deq_ready),
.io_deq_valid(q_io_deq_valid),
.io_deq_bits(q_io_deq_bits)
);
assign in_ready = q_io_enq_ready; // @[Decoupled.scala 299:17]
assign out_valid = q_io_deq_valid; // @[cmd2.sc 4:7]
assign out_bits = q_io_deq_bits; // @[cmd2.sc 4:7]
assign q_clock = clock;
assign q_reset = reset;
assign q_io_enq_valid = in_valid; // @[Decoupled.scala 297:22]
assign q_io_enq_bits = in_bits; // @[Decoupled.scala 298:21]
assign q_io_deq_ready = out_ready; // @[cmd2.sc 4:7]
endmodule
I found that inputs simply connect to inputs and outputs connect to outputs between Queue and QueueModule. As far as I understand it, there is an instantiation of the Queue module inside QueueModule, so QueueModule and Queue form a parent/child module pair, and the <> bulk-connect operator connects interfaces of the same gender, as the documentation describes.
So I realize that I had ignored the fact that Queue itself is also a module, and that the form of the example:
case class QueueModule[T <: Data](ioType: T, entries: Int) extends MultiIOModule {
val in = IO(Flipped(Decoupled(ioType)))
val out = IO(Decoupled(ioType))
out <> Queue(in, entries)
}
makes QueueModule/Queue a parent/child module pair, as the sketch below spells out.
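To make that concrete, here is a rough sketch of QueueModule with the bulk connect expanded by hand (the class name QueueModuleExpanded is made up for illustration; the element-wise assignments mirror the generated Verilog above):

import chisel3._
import chisel3.util._

class QueueModuleExpanded[T <: Data](ioType: T, entries: Int) extends MultiIOModule {
  val in  = IO(Flipped(Decoupled(ioType)))
  val out = IO(Decoupled(ioType))

  // Instantiate the child Queue module, as Queue.apply does internally.
  val q = Module(new Queue(ioType, entries))
  q.io.enq <> in                // parent input port feeds the child's enq side

  // What `out <> Queue(in, entries)` resolves to here:
  out.valid := q.io.deq.valid   // child output drives parent output
  out.bits  := q.io.deq.bits
  q.io.deq.ready := out.ready   // parent input drives child input
}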

Cypher: find path using Scala AnormCypher?

The AnormCypher doc provides an example of how to retrieve data using the Stream API:
http://anormcypher.org/
"The first way to access the results of a return query is to use the Stream API.
When you call apply() on any Cypher statement, you will receive a lazy Stream of CypherRow instances, where each row can be seen as a dictionary:
// Create Cypher query
val allCountries = Cypher("start n=node(*) where n.type = 'Country' return n.code as code, n.name as name")
// Transform the resulting Stream[CypherRow] to a List[(String,String)]
val countries = allCountries.apply().map(row =>
row[String]("code") -> row[String]("name")
).toList
I am trying to use the same approach to get a path with the following Cypher query:
MATCH p = (n {id: 'n5'})-[*]-(m) RETURN p;
Yet, when running this code:
Cypher("MATCH p = (n {id: 'n5'})-[*]-(m) RETURN p;")().map {row =>
println(row[Option[org.anormcypher.NeoRelationship]]("p"))
}
I get an exception (see below). How can I get the path info from a CypherRow in this case?
Exception in thread "main" java.lang.RuntimeException: TypeDoesNotMatch(Unexpected type while building a relationship)
at org.anormcypher.MayErr$$anonfun$get$1.apply(Utils.scala:21)
at org.anormcypher.MayErr$$anonfun$get$1.apply(Utils.scala:21)
at scala.util.Either.fold(Either.scala:97)
at org.anormcypher.MayErr.get(Utils.scala:21)
at org.anormcypher.CypherRow$class.apply(AnormCypher.scala:303)
at org.anormcypher.CypherResultRow.apply(AnormCypher.scala:309)
at bigdata.test.n4j.Simple$$anonfun$main$1.apply(Simple.scala:31)
at bigdata.test.n4j.Simple$$anonfun$main$1.apply(Simple.scala:29)
at scala.collection.immutable.Stream.map(Stream.scala:376)
at bigdata.test.n4j.Simple$.main(Simple.scala:29)
Paths in Cypher were changed as of 2.0, so you can't easily work with them directly, as they're not collections. There probably should be a new Path type of some sort in AnormCypher, but for now you can use paths along with relationships() or nodes().
For example, you could do this to extract the relationships:
Cypher("MATCH p = (n {id: 'n5'})-[*]-(m) RETURN relationships(p);")().map {row =>
println(row[Seq[NeoRelationship]]("relationships(p)"))
}
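If you want the nodes along the path instead, the same pattern should work with nodes(p). This is only a sketch by analogy with the snippet above; it assumes a Seq[NeoNode] column can be extracted the same way as Seq[NeoRelationship]:

Cypher("MATCH p = (n {id: 'n5'})-[*]-(m) RETURN nodes(p);")().map { row =>
  // Each row holds the list of nodes that make up one matched path.
  println(row[Seq[org.anormcypher.NeoNode]]("nodes(p)"))
}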

EF DbContext. How to avoid caching?

I've spent a lot of time on this, but still can't understand how to avoid caching in DbContext.
Below I attached the entity model of a simple case to demonstrate what I mean.
The problem is that DbContext caches results. For example, I have the following code for querying data from my database:
using (TestContext ctx = new TestContext())
{
var res = (from b in ctx.Buildings.Where(x => x.ID == 1)
select new
{
b,
flats = from f in b.Flats
select new
{
f,
people = from p in f.People
where p.Archived == false
select p
}
}).AsEnumerable().Select(x => x.b).Single();
}
In this case, everything is fine: I get what I want (only the persons with Archived == false).
But if I add another query after it, for example a query for buildings that have people with the Archived flag set to true, I see the following things that I really can't understand:
my previous result, res, gets extra data added to it (Persons with Archived == true are added as well)
the new result contains absolutely all Persons, no matter what Archived equals
The code of this query is:
using (TestContext ctx = new TestContext())
{
var res = (from b in ctx.Buildings.Where(x => x.ID == 1)
select new
{
b,
flats = from f in b.Flats
select new
{
f,
people = from p in f.People
where p.Archived == false
select p
}
}).AsEnumerable().Select(x => x.b).Single();
var newResult = (from b in ctx.Buildings.Where(x => x.ID == 1)
select new
{
b,
flats = from f in b.Flats
select new
{
f,
people = from p in f.People
where p.Archived == true
select p
}
}).AsEnumerable().Select(x => x.b).Single();
}
By the way, I set LazyLoadingEnabled to false in the constructor of TestContext.
Does anybody know how to work around this problem? How can I get from my query exactly what I wrote in my LINQ to Entities?
P.S. @Ladislav, maybe you can help?
You can use the AsNoTracking method on your query.
var res = (from b in ctx.Buildings.Where(x => x.ID == 1)
select new
{
b,
flats = from f in b.Flats
select new
{
f,
people = from p in f.People
where p.Archived == false
select p
}
}).AsNoTracking().AsEnumerable().Select(x => x.b).Single();
I also want to note that your AsEnumerable is probably doing more harm than good. If you remove it, the Select(x => x.b) will be translated to SQL. As is, you are selecting everything, then throwing away everything but x.b in memory.
Have you tried something like:
ctx.Persons.Where(x => x.Flat.Building.Id == 1 && x.Archived == false);
===== EDIT =====
In this case I think your approach is, imho, really hazardous. Indeed, you are working on the data loaded by EF as a side effect of interpreting your query rather than on the data resulting from the interpretation of your query. If one day EF changes its loading policy (for example with predictive pre-loading), your approach will send you into the wall.
For your goal, you will have to eager-load the data you need to build your "filtered" entity. That is, select the building, then for each Flat select the non-archived persons.
Another solution is to use two separate contexts in a "UnitOfWork"-like design, as sketched below.
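A minimal sketch of that two-context idea, reusing the two queries from the question (the variable names are just for illustration):

// One short-lived TestContext per query, so the entities materialized by the
// first query can never be fixed up with people loaded by the second one.
Building nonArchived;
Building archived;

using (var ctx = new TestContext())
{
    nonArchived = ctx.Buildings
        .Where(b => b.ID == 1)
        .Select(b => new
        {
            b,
            flats = b.Flats.Select(f => new
            {
                f,
                people = f.People.Where(p => p.Archived == false)
            })
        })
        .AsEnumerable()
        .Select(x => x.b)
        .Single();
}

using (var ctx = new TestContext())
{
    archived = ctx.Buildings
        .Where(b => b.ID == 1)
        .Select(b => new
        {
            b,
            flats = b.Flats.Select(f => new
            {
                f,
                people = f.People.Where(p => p.Archived == true)
            })
        })
        .AsEnumerable()
        .Select(x => x.b)
        .Single();
}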

How to write left outer join using MethodCallExpressions?

The code block below answers the question: "How do you perform a left outer join using linq extension methods?"
var qry = Foo.GroupJoin(
Bar,
foo => foo.Foo_Id,
bar => bar.Foo_Id,
(x,y) => new { Foo = x, Bars = y })
.SelectMany(
x => x.Bars.DefaultIfEmpty(),
(x,y) => new { Foo = x, Bar = y});
How do you write this GroupJoin and SelectMany as MethodCallExpressions? All of the examples that I've found are written using DynamicExpressions, translating strings into lambdas (another example). I'd like to avoid taking a dependency on that library if possible.
Can the query above be written with Expressions and associated methods?
I know how to construct basic lambda expressions like foo => foo.Foo_Id using ParameterExpression, MemberExpression, and Expression.Lambda(), but how do you construct (x, y) => new { Foo = x, Bars = y } so that you can build the necessary parameters to create both calls?
MethodCallExpression groupJoinCall =
Expression.Call(
typeof(Queryable),
"GroupJoin",
new Type[] {
typeof(Customers),
typeof(Purchases),
outerSelectorLambda.Body.Type,
resultsSelectorLambda.Body.Type
},
c.Expression,
p.Expression,
Expression.Quote(outerSelectorLambda),
Expression.Quote(innerSelectorLambda),
Expression.Quote(resultsSelectorLambda)
);
MethodCallExpression selectManyCall =
Expression.Call(typeof(Queryable),
"SelectMany", new Type[] {
groupJoinCall.ElementType,
resultType,
resultsSelectorLambda.Body.Type
}, groupJoinCall.Expression, Expression.Quote(lambda),
Expression.Quote(resultsSelectorLambda));
Ultimately, I need to create a repeatable process that will left join n Bars to Foo. Because we have a vertical data structure, a left-joined query is required to return what is represented as Bars, to allow the user to sort Foo. The requirement is to allow the user to sort by 10 Bars, but I don't expect them to ever use more than three. I tried writing a process that chained the code in the first block above up to 10 times, but once I got past 5, Visual Studio 2012 started to slow down, and around 7 it locked up.
Therefore, I'm now trying to write a method that returns the selectManyCall and calls itself recursively as many times as is requested by the user.
Based upon the query below, which works in LINQPad, the process that needs to be repeated only requires manually handling the transparent identifiers in Expression objects. The query returns Foos sorted by Bars (3 Bars in this case).
A side note: this process is significantly easier when doing the join in the OrderBy delegate; however, the query it produces includes the T-SQL "OUTER APPLY", which isn't supported by Oracle, and Oracle support is required.
I'm grateful for any ideas on how to write the projection to an anonymous type, or any other out-of-the-box idea that may work. Thank you.
var q = Foos
.GroupJoin (
Bars,
g => g.FooID,
sv => sv.FooID,
(g, v) =>
new
{
g = g,
v = v
}
)
.SelectMany (
s => s.v.DefaultIfEmpty (),
(s, v) =>
new
{
s = s,
v = v
}
)
.GroupJoin (
Bars,
g => g.s.g.FooID,
sv => sv.FooID,
(g, v) =>
new
{
g = g,
v = v
}
)
.SelectMany (
s => s.v.DefaultIfEmpty (),
(s, v) =>
new
{
s = s,
v = v
}
)
.GroupJoin (
Bars,
g => g.s.g.s.g.FooID,
sv => sv.FooID,
(g, v) =>
new
{
g = g,
v = v
}
)
.SelectMany (
s => s.v.DefaultIfEmpty (),
(s, v) =>
new
{
s = s,
v = v
}
)
.OrderBy (a => a.s.g.s.g.v.Text)
.ThenBy (a => a.s.g.v.Text)
.ThenByDescending (a => a.v.Date)
.Select (a => a.s.g.s.g.s.g);
If you're having trouble figuring out how to generate the expressions, you could always get an assist from the compiler. What you could do is declare a lambda expression with the types you are going to query with and write the lambda. The compiler will generate the expression for you, and you can examine it to see what expressions make up the expression tree.
e.g., your expression is equivalent to this using the query syntax (or you could use the method call syntax if you prefer):
Expression<Func<IQueryable<Foo>, IQueryable<Bar>, IQueryable>> expr =
(Foo, Bar) =>
from foo in Foo
join bar in Bar on foo.Foo_Id equals bar.Foo_Id into bars
from bar in bars.DefaultIfEmpty()
select new
{
Foo = foo,
Bar = bar,
};
To answer your question, you can't really generate an expression that creates an anonymous object, because the actual type isn't known at compile time. You can kind of cheat by creating a dummy object and using GetType() to get its type, which you could then use to create the appropriate new expression, but that's more of a dirty hack and I wouldn't recommend doing it. Doing so, you won't be able to generate strongly typed expressions, since you don't know the type of the anonymous type.
e.g.,
var dummyType = new
{
foo = default(Foo),
bars = default(IQueryable<Bar>),
}.GetType();
var fooExpr = Expression.Parameter(typeof(Foo), "foo");
var barsExpr = Expression.Parameter(typeof(IQueryable<Bar>), "bars");
var fooProp = dummyType.GetProperty("foo");
var barsProp = dummyType.GetProperty("bars");
var ctor = dummyType.GetConstructor(new Type[]
{
fooProp.PropertyType,
barsProp.PropertyType,
});
var newExpr = Expression.New(
ctor,
new Expression[] { fooExpr, barsExpr },
new MemberInfo[] { fooProp, barsProp }
);
// the expression type is unknown, just some lambda
var lambda = Expression.Lambda(newExpr, fooExpr, barsExpr);
Whenever you need to generate an expression that involves an anonymous object, the right thing to do would be to create a known type and use that in place of the anonymous type. It will have limited use, yes, but it's a much cleaner way to handle such a situation. Then at least you'll be able to get the type at compile time.
// use this type instead of the anonymous one
public class Dummy
{
public Foo foo { get; set; }
public IQueryable<Bar> bars { get; set; }
}
var dummyType = typeof(Dummy);
var fooExpr = Expression.Parameter(typeof(Foo), "foo");
var barsExpr = Expression.Parameter(typeof(IQueryable<Bar>), "bars");
var fooProp = dummyType.GetProperty("foo");
var barsProp = dummyType.GetProperty("bars");
var ctor = dummyType.GetConstructor(Type.EmptyTypes);
var newExpr = Expression.MemberInit(
Expression.New(ctor),
Expression.Bind(fooProp, fooExpr),
Expression.Bind(barsProp, barsExpr)
);
// lambda's type is known at compile time now
var lambda = Expression.Lambda<Func<Foo, IQueryable<Bar>, Dummy>>(
newExpr,
fooExpr,
barsExpr);
Or, instead of creating and using a dummy type, you might be able to use tuples in your expressions instead.
static Expression<Func<T1, T2, Tuple<T1, T2>>> GetExpression<T1, T2>()
{
var type1 = typeof(T1);
var type2 = typeof(T2);
var tupleType = typeof(Tuple<T1, T2>);
var arg1Expr = Expression.Parameter(type1, "arg1");
var arg2Expr = Expression.Parameter(type2, "arg2");
var arg1Prop = tupleType.GetProperty("Item1");
var arg2Prop = tupleType.GetProperty("Item2");
var ctor = tupleType.GetConstructor(new Type[]
{
arg1Prop.PropertyType,
arg2Prop.PropertyType,
});
var newExpr = Expression.New(
ctor,
new Expression[] { arg1Expr, arg2Expr },
new MemberInfo[] { arg1Prop, arg2Prop }
);
// lambda's type is known at compile time now
var lambda = Expression.Lambda<Func<T1, T2, Tuple<T1, T2>>>(
newExpr,
arg1Expr,
arg2Expr);
return lambda;
}
Then to use it:
var expr = GetExpression<Foo, IQueryable<Bar>>();

Neater way to switch between two IObservables based on a third

I have two value streams and one selector stream and I'd like to produce a result stream that alternates between the value streams based on the selector. The code below gives the right result, but I don't like it.
Does anyone have anything neater?
var valueStreamA = new BehaviorSubject<int>(0);
var valueStreamB = new BehaviorSubject<int>(100);
var selectorStream = new BehaviorSubject<bool>(true);
var filteredA = valueStreamA.CombineLatest(selectorStream, (a, c) => new { A = a, C = c })
.Where(ac => ac.C)
.Select(ac => ac.A);
var filteredB = valueStreamB.CombineLatest(selectorStream, (b, c) => new { B = b, C = c })
.Where(bc => !bc.C)
.Select(bc => bc.B);
var result = Observable.Merge(filteredA, filteredB);
result.Subscribe(Console.WriteLine);
valueStreamA.OnNext(1);
valueStreamB.OnNext(101);
selectorStream.OnNext(false);
valueStreamA.OnNext(2);
valueStreamB.OnNext(102);
selectorStream.OnNext(true);
This produces the following output:
0
1
101
102
2
I'd do something like this:
var a = new BehaviorSubject<int>(0);
var b = new BehaviorSubject<int>(100);
var c = new BehaviorSubject<bool>(true);
var valueStreamA = a as IObservable<int>;
var valueStreamB = b as IObservable<int>;
var selector = c as IObservable<bool>;
var result = selector
// for every change in the selector...
.DistinctUntilChanged()
// select one of the two value streams
.Select(change => change ? valueStreamA : valueStreamB)
// and flatten the resulting wrapped observable
.Switch();
result.Subscribe(Console.WriteLine);
a.OnNext(1);
b.OnNext(101);
c.OnNext(false);
a.OnNext(2);
b.OnNext(102);
c.OnNext(true);
You could do something like:
var xs = Observable.Interval(TimeSpan.FromSeconds(1)).Select(_ => Feeds.Xs);
var ys = Observable.Interval(TimeSpan.FromSeconds(1)).Select(_ => Feeds.Ys);
var selectorSubject = new Subject<Feeds>();
var query = from selector in selectorSubject
select from merged in xs.Merge(ys)
where merged == selector
select merged;
query.Switch().Subscribe(Console.WriteLine);
OnNext into your 'selectorSubject' to change it.
There are a few differences from your example, but they're easy to get around:
Your question involved a selector of type bool, whereas I have been lazy and reused the Feeds enum in order to allow me to do an easy equality check (where merged == selector).
You of course could simply do (where selector ? merged == Xs : merged == Ys), or something like that to evaluate each merged item and discard ones you don't care about (depending on your selector).
Specifically, you would probably want to select not just the integer, but an identifier of the feed. Consider using something like Tuple.Create(), so you get that info with each update:
{A - 1}, {B - 101} etc. Your where can then do:
where selector ? merged.Item1 == A : merged.Item1 == B //this maps 'true' to feed A
I also used a Switch, which will cause my sample streams to restart because they are not published.
You probably want to publish yours and Connect them (make them 'hot'), so a Switch like mine doesn't cause any new side effects in the subscription. You have a subject (which is hot), but the 'behaviour' part will replace the value you passed into the constructor. Publishing and connecting would prevent that.
Shout if you are still confused. This isn't a full answer, but might give you enough to think about.
Howard.
Now much closer to your original question:
void Main()
{
var valueStreamA = new BehaviorSubject<int>(0);
var valueStreamB = new BehaviorSubject<int>(100);
var selectorStreamA = valueStreamA.Select(id => Tuple.Create("A", id)).Publish();
var selectorStreamB = valueStreamB.Select(id => Tuple.Create("B", id)).Publish();
var selectorStream = new BehaviorSubject<bool>(true);
var query = from selector in selectorStream
select from merged in selectorStreamA.Merge(selectorStreamB)
where selector == true ? merged.Item1 == "A" : merged.Item1 == "B"
select merged.Item2;
query.Switch().Subscribe(Console.WriteLine);
selectorStreamA.Connect();
selectorStreamB.Connect();
//First we get 0 output (because we are already using stream A, and it has a first value)
valueStreamA.OnNext(1); //This is output, because our selector remains as 'A'
valueStreamB.OnNext(101); //This is ignored - because we don't take from B
selectorStream.OnNext(false); //Switch to B
valueStreamA.OnNext(2); //Ignored - we are now using B only
valueStreamB.OnNext(102); //This is output
selectorStream.OnNext(true); //Switch back to A.
}
Outputs:
0
1
102