In the 2.6 chiseltest section of the chisel-bootcamp tutorials, there is an example that creates a Queue using the Decoupled interface:
case class QueueModule[T <: Data](ioType: T, entries: Int) extends MultiIOModule {
val in = IO(Flipped(Decoupled(ioType)))
val out = IO(Decoupled(ioType))
out <> Queue(in, entries)
}
The direction of the <> operator in the last line, out <> Queue(in, entries), is really confusing to me. I checked the <> operator of class DecoupledIO in the Chisel API, and its definition is "Connect this data to that data bi-directionally and element-wise.", which means out and the value returned by Queue(in, entries) must be connected bi-directionally. However, I found the Queue source code:
object Queue
{
/** Create a queue and supply a DecoupledIO containing the product. */
@chiselName
def apply[T <: Data](
enq: ReadyValidIO[T],
entries: Int = 2,
pipe: Boolean = false,
flow: Boolean = false): DecoupledIO[T] = {
if (entries == 0) {
val deq = Wire(new DecoupledIO(chiselTypeOf(enq.bits)))
deq.valid := enq.valid
deq.bits := enq.bits
enq.ready := deq.ready
deq
} else {
val q = Module(new Queue(chiselTypeOf(enq.bits), entries, pipe, flow))
q.io.enq.valid := enq.valid // not using <> so that override is allowed
q.io.enq.bits := enq.bits
enq.ready := q.io.enq.ready
TransitName(q.io.deq, q)
}
}
}
which returns q.io.deq via the TransitName method, and q.io.deq is defined as follows:
object DeqIO {
def apply[T<:Data](gen: T): DecoupledIO[T] = Flipped(Decoupled(gen))
}
/** An I/O Bundle for Queues
* @param gen The type of data to queue
* @param entries The max number of entries in the queue.
*/
class QueueIO[T <: Data](private val gen: T, val entries: Int) extends Bundle
{ // See github.com/freechipsproject/chisel3/issues/765 for why gen is a private val and proposed replacement APIs.
/* These may look inverted, because the names (enq/deq) are from the perspective of the client,
* but internally, the queue implementation itself sits on the other side
* of the interface so uses the flipped instance.
*/
/** I/O to enqueue data (client is producer, and Queue object is consumer), is [[Chisel.DecoupledIO]] flipped. */
val enq = Flipped(EnqIO(gen))
/** I/O to dequeue data (client is consumer and Queue object is producer), is [[Chisel.DecoupledIO]]*/
val deq = Flipped(DeqIO(gen))
/** The current amount of data in the queue */
val count = Output(UInt(log2Ceil(entries + 1).W))
}
That means q.io.deq is a non-flipped DecoupledIO (DeqIO is Flipped(Decoupled(gen)), and QueueIO flips it again, so the two flips cancel) and has the same interface direction as out. So I really want to know how <> works in out <> Queue(in, entries)?
Decoupled(data) adds a handshaking protocol to the data bundle given as a parameter.
If you declare this signal, for example:
val dec_data = IO(Decoupled(chiselTypeOf(data)))
The dec_data object will have two handshake values (ready, valid) with different directions, plus one data value:
myvalue := dec_data.bits
value_is_valid := dec_data.valid // boolean value in the same direction as the data
dec_data.ready := sink_ready_to_receive // boolean value in the opposite direction to the data
If you want to connect dec_data to another DecoupledIO bundle, you can't use the := operator on the whole bundle, because it is a unidirectional operator.
You have to do the connection value by value:
val dec_data_sink = IO(Flipped(Decoupled(chiselTypeOf(data))))
dec_data_sink.bits := dec_data.bits
dec_data_sink.valid := dec_data.valid
dec_data.ready := dec_data_sink.ready
With the bulk connector <> you can avoid these painful connections:
dec_data_sink <> dec_data
Chisel will automatically connect the right signals together.
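To make the directions concrete, here is a minimal sketch (the 8-bit UInt payload and the module/port names are assumptions for illustration). Note that when both ports are IOs of the same module, the port you read data from is the one declared with Flipped:
import chisel3._
import chisel3.util._
class Forward extends Module {
  // upstream producer feeds this port, so it is flipped (bits/valid are inputs)
  val dec_data = IO(Flipped(Decoupled(UInt(8.W))))
  // this port feeds the downstream consumer (bits/valid are outputs)
  val dec_data_sink = IO(Decoupled(UInt(8.W)))
  // field-by-field version:
  //   dec_data_sink.bits  := dec_data.bits
  //   dec_data_sink.valid := dec_data.valid
  //   dec_data.ready      := dec_data_sink.ready
  // or equivalently, one bulk connect:
  dec_data_sink <> dec_data
}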
For more documentation about bulk connections and the Decoupled interface, see the documentation here.
OK, I checked the Verilog generated by this example:
module Queue(
input clock,
input reset,
output io_enq_ready,
input io_enq_valid,
input [8:0] io_enq_bits,
input io_deq_ready,
output io_deq_valid,
output [8:0] io_deq_bits
);
......
......
module QueueModule(
input clock,
input reset,
output in_ready,
input in_valid,
input [8:0] in_bits,
input out_ready,
output out_valid,
output [8:0] out_bits
);
wire q_clock; // @[Decoupled.scala 296:21]
wire q_reset; // @[Decoupled.scala 296:21]
wire q_io_enq_ready; // @[Decoupled.scala 296:21]
wire q_io_enq_valid; // @[Decoupled.scala 296:21]
wire [8:0] q_io_enq_bits; // @[Decoupled.scala 296:21]
wire q_io_deq_ready; // @[Decoupled.scala 296:21]
wire q_io_deq_valid; // @[Decoupled.scala 296:21]
wire [8:0] q_io_deq_bits; // @[Decoupled.scala 296:21]
Queue q ( // @[Decoupled.scala 296:21]
.clock(q_clock),
.reset(q_reset),
.io_enq_ready(q_io_enq_ready),
.io_enq_valid(q_io_enq_valid),
.io_enq_bits(q_io_enq_bits),
.io_deq_ready(q_io_deq_ready),
.io_deq_valid(q_io_deq_valid),
.io_deq_bits(q_io_deq_bits)
);
assign in_ready = q_io_enq_ready; // @[Decoupled.scala 299:17]
assign out_valid = q_io_deq_valid; // @[cmd2.sc 4:7]
assign out_bits = q_io_deq_bits; // @[cmd2.sc 4:7]
assign q_clock = clock;
assign q_reset = reset;
assign q_io_enq_valid = in_valid; // @[Decoupled.scala 297:22]
assign q_io_enq_bits = in_bits; // @[Decoupled.scala 298:21]
assign q_io_deq_ready = out_ready; // @[cmd2.sc 4:7]
endmodule
I found that inputs simply connect to inputs and outputs connect to outputs between Queue and QueueModule. As far as I understand it, there is an instantiation of the Queue module inside QueueModule, so QueueModule and Queue form a parent/child pair, and the <> bulk-connect operator connects interfaces of the same gender, as the documentation describes.
So I see that I had overlooked that Queue itself is also a module, and that the form of the example:
case class QueueModule[T <: Data](ioType: T, entries: Int) extends MultiIOModule {
val in = IO(Flipped(Decoupled(ioType)))
val out = IO(Decoupled(ioType))
out <> Queue(in, entries)
}
makes QueueModule and Queue a parent/child module pair.
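For reference, here is a sketch of roughly what the companion-object call expands to, with the child Queue instance made explicit (illustrative only; the class name is made up and the imports may vary with the Chisel version used by the bootcamp):
import chisel3._
import chisel3.util._
class QueueModuleExplicit[T <: Data](ioType: T, entries: Int) extends MultiIOModule {
  val in  = IO(Flipped(Decoupled(ioType)))
  val out = IO(Decoupled(ioType))
  // explicit child instance instead of the Queue companion object
  val q = Module(new Queue(ioType, entries))
  q.io.enq <> in   // parent's flipped port feeds the child's enq (consumer) side
  out <> q.io.deq  // child's deq (producer) side feeds the parent's out port
}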
I am trying to implement the way-prediction technique in the RocketChip core (in-order). For this, I need to access each way separately. This is how the SRAM for tags looks after modification (a separate SRAM for each way):
val tag_arrays = Seq.fill(nWays) { SeqMem(nSets, UInt(width = tECC.width(1 + tagBits)))}
val tag_rdata = Reg(Vec(nWays, UInt(width = tECC.width(1 + tagBits))))
for ((tag_array, i) <- tag_arrays zipWithIndex) {
tag_rdata(i) := tag_array.read(s0_vaddr(untagBits-1,blockOffBits), !refill_done && s0_valid)
}
And I want to access it like this:
when (refill_done) {
val enc_tag = tECC.encode(Cat(tl_out.d.bits.error, refill_tag))
tag_arrays(repl_way).write(refill_idx, enc_tag)
ccover(tl_out.d.bits.error, "D_ERROR", "I$ D-channel error")
}
where repl_way is a Chisel random UInt generated by an LFSR. But a Seq element can only be accessed by a Scala Int index, which causes a compilation error. Then I tried to access it like this:
when (refill_done) {
val enc_tag = tECC.encode(Cat(tl_out.d.bits.error, refill_tag))
for (i <- 0 until nWays) {
when (repl_way === i.U) {tag_arrays(i).write(refill_idx, enc_tag)}
}
ccover(tl_out.d.bits.error, "D_ERROR", "I$ D-channel error")
}
But then this assertion fires:
assert(PopCount(s1_tag_hit zip s1_tag_disparity map { case (h, d) => h && !d }) <= 1)
I am trying to modify the ICache.scala file. Any ideas on how to do this properly? Thanks!
I think you can just use a Vec here instead of a Seq
val tag_arrays = Vec(nWays, SeqMem(nSets, UInt(width = tECC.width(1 + tagBits))))
A Vec allows indexing with a UInt.
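With a Vec, the write from the question can then index by the hardware value directly; a sketch of the write site, reusing the identifiers from the question's ICache.scala context (not self-contained):
// repl_way is a UInt, so the way is selected in hardware at runtime
when (refill_done) {
  val enc_tag = tECC.encode(Cat(tl_out.d.bits.error, refill_tag))
  tag_arrays(repl_way).write(refill_idx, enc_tag)
}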
I'm consuming Avro-serialized messages from Kafka using the "automatic" deserializer, like this:
props.put(
ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
"io.confluent.kafka.serializers.KafkaAvroDeserializer"
);
props.put("schema.registry.url", "https://example.com");
This works brilliantly, and is right out of the docs at https://docs.confluent.io/current/schema-registry/serializer-formatter.html#serializer.
The problem I'm facing is that I actually just want to forward these messages, but to do the routing I need some metadata from inside. Some technical constraints mean that I can't feasibly compile in generated class files to set KafkaAvroDeserializerConfig.SPECIFIC_AVRO_READER_CONFIG to true, so I am using a regular decoder without being tied into Kafka, specifically just reading the bytes as an Array[Byte] and passing them to a manually constructed deserializer:
var maxSchemasToCache = 1000;
var schemaRegistryURL = "https://example.com/"
var specificDeserializerProps = Map(
"schema.registry.url"
-> schemaRegistryURL,
KafkaAvroDeserializerConfig.SPECIFIC_AVRO_READER_CONFIG
-> "false"
);
var client = new CachedSchemaRegistryClient(
schemaRegistryURL,
maxSchemasToCache
);
var deserializer = new KafkaAvroDeserializer(
client,
specificDeserializerProps.asJava
);
The messages are a "container" type, where the really interesting part is one of about 25 types in a union { A, B, C } msg record field:
record Event {
timestamp_ms created_at;
union {
Online,
Offline,
Available,
Unavailable,
...
...Failed,
...Updated
} msg;
}
So I'm successfully reading an Array[Byte] into a record and feeding it into the deserializer like this:
var genericRecord = deserializer.deserialize(topic, consumerRecord.value())
.asInstanceOf[GenericRecord];
var schema = genericRecord.getSchema();
var msgSchema = schema.getField("msg").schema();
The problem, however, is that I can find no way to discern, discriminate, or "resolve" the "type" of the msg field through the union:
System.out.printf(
"msg.schema = %s msg.schema.getType = %s\n",
msgSchema.getFullName(),
msgSchema.getType().name());
=> msg.schema = union msg.schema.getType = union
How can I discriminate the types in this scenario? The Confluent registry knows; these things have names and "types", even if I'm treating them as GenericRecords.
My goal here is to know that record.msg is of "type" Online | Offline | Available rather than just knowing it's a union.
After having looked into the implementation of the Avro Java library, I think it's safe to say that this is impossible given the current API. I've found the following way of extracting the types while parsing, using a custom GenericDatumReader subclass, but it needs a lot of polishing before I'd use something like this in production code :D
So here's the subclass:
import org.apache.avro.Schema;
import org.apache.avro.generic.GenericData;
import org.apache.avro.generic.GenericDatumReader;
import org.apache.avro.io.ResolvingDecoder;
import java.io.IOException;
import java.util.List;
public class CustomReader<D> extends GenericDatumReader<D> {
private final GenericData data;
private Schema actual;
private Schema expected;
private ResolvingDecoder creatorResolver = null;
private final Thread creator;
private List<Schema> unionTypes;
// vvv This is the constructor I've modified, added a list of types
public CustomReader(Schema schema, List<Schema> unionTypes) {
this(schema, schema, GenericData.get());
this.unionTypes = unionTypes;
}
public CustomReader(Schema writer, Schema reader, GenericData data) {
this(data);
this.actual = writer;
this.expected = reader;
}
protected CustomReader(GenericData data) {
this.data = data;
this.creator = Thread.currentThread();
}
protected Object readWithoutConversion(Object old, Schema expected, ResolvingDecoder in) throws IOException {
switch (expected.getType()) {
case RECORD:
return super.readRecord(old, expected, in);
case ENUM:
return super.readEnum(expected, in);
case ARRAY:
return super.readArray(old, expected, in);
case MAP:
return super.readMap(old, expected, in);
case UNION:
// vvv The magic happens here
Schema type = expected.getTypes().get(in.readIndex());
unionTypes.add(type);
return super.read(old, type, in);
case FIXED:
return super.readFixed(old, expected, in);
case STRING:
return super.readString(old, expected, in);
case BYTES:
return super.readBytes(old, expected, in);
case INT:
return super.readInt(old, expected, in);
case LONG:
return in.readLong();
case FLOAT:
return in.readFloat();
case DOUBLE:
return in.readDouble();
case BOOLEAN:
return in.readBoolean();
case NULL:
in.readNull();
return null;
default:
return super.readWithoutConversion(old, expected, in);
}
}
}
I've added comments to the code for the interesting parts, as it's mostly boilerplate.
Then you can use this custom reader like this:
List<Schema> unionTypes = new ArrayList<>();
DatumReader<GenericRecord> datumReader = new CustomReader<GenericRecord>(schema, unionTypes);
DataFileReader<GenericRecord> dataFileReader = new DataFileReader<GenericRecord>(eventFile, datumReader);
GenericRecord event = null;
while (dataFileReader.hasNext()) {
event = dataFileReader.next(event);
}
System.out.println(unionTypes);
This will print, for each union parsed, the type of that union. Note that you'll have to figure out which element of that list is interesting to you depending on how many unions you have in a record, etc.
Not pretty tbh :D
I was able to come up with a single-use solution after a lot of digging:
val records: ConsumerRecords[String, Array[Byte]] = consumer.poll(100);
for (consumerRecord <- asScalaIterator(records.iterator)) {
var genericRecord = deserializer.deserialize(topic, consumerRecord.value()).asInstanceOf[GenericRecord];
var msgSchema = genericRecord.get("msg").asInstanceOf[GenericRecord].getSchema();
System.out.printf("%s \n", msgSchema.getFullName());
}
This prints com.myorg.SomeSchemaFromTheEnum and works perfectly in my use case.
The confusing thing is that, because of the use of GenericRecord, .get("msg") returns Object, which in the general case I have no way to safely typecast. In this limited case, I know the cast is safe.
In my limited use-case the solution in the 5 lines above is suitable, but for a more general solution the answer https://stackoverflow.com/a/59844401/119669 posted by https://stackoverflow.com/users/124257/fresskoma seems more appropriate.
Whether to use a DatumReader or GenericRecord is probably a matter of preference, and of whether the Kafka ecosystem is in mind; with Avro alone I'd probably prefer a DatumReader solution, but in this instance I can live with having Kafka-esque nomenclature in my code.
To retrieve the schema of the value of a field, you can use
new GenericData().induce(genericRecord.get("msg"))
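For example, continuing from the question's genericRecord (a sketch; the printed name depends on your schema), induce() infers a Schema from the runtime value, so its full name identifies which branch of the union the msg field actually holds:
import org.apache.avro.generic.GenericData
// infer the schema of the value currently stored in the "msg" field
val inferredSchema = new GenericData().induce(genericRecord.get("msg"))
println(inferredSchema.getFullName()) // e.g. com.myorg.Online, depending on the message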
I want to split some values in a loop. I used the split method in a check and it works for me. But there are more than 25 values of two different types.
So I am trying to implement a loop in Scala and struggling.
Consider the following scenario:
import scala.concurrent.duration._
import io.gatling.core.Predef._
import io.gatling.http.Predef._
class testSimulation extends Simulation {
val httpProtocol = http
.baseURL("https://website.com")
.doNotTrackHeader("1")
.disableCaching
val uri1 = "https://website.com"
val scn = scenario("EditAttribute")
.exec(http("LogIn")
.post(uri1 + "/web/guest/")
.headers(headers_0)
.exec(http("getPopupData")
.post("/website/getPopupData")
.check(jsonPath("$.data[0].pid").transform(_.split('#').toSeq).saveAs("pID"))) // Saving splited value
.exec(http("Listing")
.post("/website/listing")
.check(jsonPath("$.data[*].AdId").findAll.saveAs("aID")) // All values are collected in vector
// .check(jsonPath("$.data[*].AdId").transform(_.split('#').toSeq).saveAs("aID")) // Split method Not working for batch
// .check(jsonPath("$.data[*].AdId").findAll.saveAs("aID")) // To verify the length of array (vector)
.check(jsonPath("$.data[0].RcId").findAll.saveAs("rID")))
.exec(http("UpdatedDataListing")
.post("/website/search")
.formParam("entityTypeId", "${pId(0)}") // passing splited value, perfectly done
.formParam("action_id", "${aId(0)},${aId(1)},${aId(2)},..and so on) // need to pass splitted values which is not happening
.formParam("userId", "${rID}")
// To verify values on console (What value I m getting after splitting)...
.exec( session => {
val abc = session("pID").as[Seq[String]]
val xyz = session("aID").as[Seq[String]]
println("Separated pId ===> " +abc(0)) // output - first splitted value
println("Separated pId ===> " +abc(1)) // split separater
println("Separated pId ===> " +abc(2)) // second splitted value
println("Length ===> " +abc.length) // output - 3
println("Length ===> " +xyz.length) // output - 25
session
}
)
.exec(http("logOut")
.get("https://" + uri1 + "/logout")
.headers(headers_0))
setUp(scn.inject(atOnceUsers(1))).protocols(httpProtocol)
}
I want to implement a loop which splits all (25) values in the session. I do not want to hard-code anything.
I am a newbie to Scala and Gatling as well.
Since it is a session function, the snippet below should give you a direction to continue; use split just like you would in Java:
exec { session =>
var requestIdValue = new scala.util.Random().nextInt(Integer.MAX_VALUE).toString();
var length = jobsQue.length
try {
var reportElement = jobsQue.pop()
jobData = reportElement.getData;
xml = Configuration.XML.replaceAll("requestIdValue", requestIdValue);
println(s"For Request Id : $requestIdValue .Data Value from feeder is : $jobData Current size of jobsQue : $length");
} catch {
case e: NoSuchElementException => print("Error")
}
session.setAll(
"xmlRequest" -> xml)
}
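To avoid hard-coding the 25 indices, one option is to join the saved vector inside a session function and then reference the joined value with Gatling EL. A sketch, assuming "aID", "pID" and "rID" were saved as in the question (the "allAdIds" attribute name is made up):
// build the comma-separated list once, then reuse it in the request
.exec { session =>
  val allAdIds = session("aID").as[Seq[String]].mkString(",")
  session.set("allAdIds", allAdIds)
}
.exec(http("UpdatedDataListing")
  .post("/website/search")
  .formParam("entityTypeId", "${pID(0)}")
  .formParam("action_id", "${allAdIds}")
  .formParam("userId", "${rID}"))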
I have an observable query that produces an IObservable<byte> from a stream that I want to parse inline. I want to be able to use different strategies, depending on the data source, to parse discrete messages from this sequence. Bear in mind I am still on the upward learning curve of Rx. I have come up with a solution, but am unsure if there is a way to accomplish this using out-of-the-box operators.
First, I wrote the following extension method for IObservable:
public static IObservable<IList<T>> Parse<T>(
this IObservable<T> source,
Func<IObservable<T>, IObservable<IList<T>>> parsingFunction)
{
return parsingFunction(source);
}
This allows me to specify the message-framing strategy in use by a particular data source. One data source might be delimited by one or more bytes, another might be delimited by both start and stop block patterns, while another might use a length-prefixing strategy. So here is an example of the Delimited strategy that I have defined:
public static class MessageParsingFunctions
{
public static Func<IObservable<T>, IObservable<IList<T>>> Delimited<T>(T[] delimiter)
{
if (delimiter == null) throw new ArgumentNullException("delimiter");
if (delimiter.Length < 1) throw new ArgumentException("delimiter must contain at least one element.");
Func<IObservable<T>, IObservable<IList<T>>> parser =
(source) =>
{
var shared = source.Publish().RefCount();
var windowOpen = shared.Buffer(delimiter.Length, 1)
.Where(buffer => buffer.SequenceEqual(delimiter))
.Publish()
.RefCount();
return shared.Buffer(windowOpen)
.Select(bytes =>
bytes
.Take(bytes.Count - delimiter.Length)
.ToList());
};
return parser;
}
}
So ultimately, as an example, I can use the code in the following fashion to parse discrete messages any time the byte pattern for the string '<EOF>' is encountered in the sequence:
var messages = ...operators that surface an IObservable<byte>
.Parse(MessageParsingFunctions.Delimited(Encoding.ASCII.GetBytes("<EOF>")))
...further operators to package discrete messages along with additional metadata
Questions:
Is there a more straightforward way to accomplish this using just out-of-the-box operators?
If not, would it be preferable to just define the different parsing functions (i.e. ParseDelimited, ParseLengthPrefixed, etc.) as local extensions instead of having a more generic Parse extension method that accepts a parsing function?
Thanks in advance!
Take a look at Rxx Parsers. Here's a related lab. For example:
IObservable<byte> bytes = ...;
var parsed = bytes.ParseBinary(parser =>
from next in parser
let magicNumber = parser.String(Encoding.UTF8, 3).Where(value => value == "RXX")
let header = from headerLength in parser.Int32
from header in next.Exactly(headerLength)
from headerAsString in header.Aggregate(string.Empty, (s, b) => s + " " + b)
select headerAsString
let message = parser.String(Encoding.UTF8)
let entry = from length in parser.Int32
from data in next.Exactly(length)
from value in data.Aggregate(string.Empty, (s, b) => s + " " + b)
select value
let entries = from count in parser.Int32
from entries in entry.Exactly(count).ToList()
select entries
select from _ in magicNumber.Required("The file's magic number is invalid.")
from h in header.Required("The file's header is invalid.")
from m in message.Required("The file's message is invalid.")
from e in entries.Required("The file's data is invalid.")
select new
{
Header = h,
Message = m,
Entries = e.Aggregate(string.Empty, (acc, cur) => acc + cur + Environment.NewLine)
});
I currently have a program that listens to a network stream and fires events when a new message has been deserialized.
while(true)
{
byte[] lengthBytes = new byte[10];
networkStream.Read(lengthBytes, 0, 10);
int messageLength = Int32.Parse(Encoding.UTF8.GetString(lengthBytes));
var messageBytes = new byte[messageLength + 10];
Array.Copy(lengthBytes, messageBytes, 10);
int bytesReadTotal = 10;
while (bytesReadTotal < 10 + messageLength)
bytesReadTotal += networkStream.Read(messageBytes, bytesReadTotal, messageLength - bytesReadTotal + 10);
OnNewMessage(new MessageEventArgs(messageFactory.GetMessage(messageBytes)));
}
I want to rewrite this using the Reactive Extensions so that instead of the event there is an IObservable<Message>. This could be done using
Observable.FromEvent<EventHandler<MessageEventArgs>, MessageEventArgs>(
(h) => NewMessage += h,
(h) => NewMessage -= h)
.Select( (e) => { return e.Message; });
However I would prefer to rewrite the listening process using System.Reactive instead. My starting point (from here) is
Func<byte[], int, int, IObservable<int>> read;
read = Observable.FromAsyncPattern<byte[], int, int, int>(
networkStream.BeginRead,
networkStream.EndRead);
which allows
byte[] lengthBytes = new byte[10];
read(lengthBytes, 0, lengthBytes.Length).Subscribe(
    bytesRead =>
    {
        // ...
    });
I'm struggling to see how to continue though. Does anyone have an implementation?
I came up with the following, but I feel it should be possible without creating a class and using Subject<T> (e.g. via some projection of the header packet to the body packet to the message object; but the problem with that is that EndRead() doesn't return the byte array, only the number of bytes read, so you need an object or at least a closure at some point).
class Message
{
public string Text { get; set; }
}
class MessageStream : IObservable<Message>
{
private readonly Subject<Message> messages = new Subject<Message>();
public void Start()
{
// Get your real network stream here.
var stream = Console.OpenStandardInput();
GetNextMessage( stream );
}
private void GetNextMessage(Stream stream)
{
var header = new byte[10];
var read = Observable.FromAsyncPattern<byte [], int, int, int>( stream.BeginRead, stream.EndRead );
read( header, 0, 10 ).Subscribe( b =>
{
var bodyLength = BitConverter.ToInt32( header, 0 );
var body = new byte[bodyLength];
read( body, 0, bodyLength ).Subscribe( b2 =>
{
var message = new Message() {Text = Encoding.UTF8.GetString( body )};
messages.OnNext( message );
GetNextMessage( stream );
} );
} );
}
public IDisposable Subscribe( IObserver<Message> observer )
{
return messages.Subscribe( observer );
}
}
Since Observable.FromAsyncPattern only makes the async call once, you will need to make a function that will call it multiple times instead. This should get you started, but probably has lots of room for improvement. It assumes that you can make the async calls repeatedly with the same arguments and assumes that the selector will handle any issues that arise from this.
Function FromRepeatedAsyncPattern(Of T1, T2, T3, TCallResult, TResult)(
begin As Func(Of T1, T2, T3, AsyncCallback, Object, IAsyncResult),
[end] As Func(Of IAsyncResult, TCallResult),
selector As Func(Of TCallResult, TResult),
isComplete As Func(Of TCallResult, Boolean)
) As Func(Of T1, T2, T3, IObservable(Of TResult))
Return Function(a1, a2, a3) Observable.Create(Of TResult)(
Function(obs)
Dim serial As New SerialDisposable()
Dim fac = Observable.FromAsyncPattern(begin, [end])
Dim onNext As Action(Of TCallResult) = Nothing
'this function will restart the subscription and will be
'called every time a value is found
Dim subscribe As Func(Of IDisposable) =
Function()
'note that we are REUSING the arguments, the
'selector should handle this appropriately
Return fac(a1, a2, a3).Subscribe(onNext,
Sub(ex)
obs.OnError(ex)
serial.Dispose()
End Sub)
End Function
'set up the OnNext handler to restart the observer
'every time it completes
onNext = Sub(v)
obs.OnNext(selector(v))
'subscriber disposed, do not check for completion
'or resubscribe
If serial.IsDisposed Then Exit Sub
If isComplete(v) Then
obs.OnCompleted()
serial.Dispose()
Else
'using the scheduler lets the OnNext complete before
'making the next async call.
'you could parameterize the scheduler, but it may not be
'helpful, and it won't work if Immediate is passed.
Scheduler.CurrentThread.Schedule(Sub() serial.Disposable = subscribe())
End If
End Sub
'start the first subscription
serial.Disposable = subscribe()
Return serial
End Function)
End Function
From here, you can get an IObservable(Of Byte()) like so:
Dim buffer(4096 - 1) As Byte
Dim obsFac = FromRepeatedAsyncPattern(Of Byte(), Integer, Integer, Integer, Byte())(
AddressOf stream.BeginRead, AddressOf stream.EndRead,
Function(numRead)
If numRead < 0 Then Throw New ArgumentException("Invalid number read")
Console.WriteLine("Position after read: " & stream.Position.ToString())
Dim ret(numRead - 1) As Byte
Array.Copy(buffer, ret, numRead)
Return ret
End Function,
Function(numRead) numRead <= 0)
'this will be an observable of the chunk size you specify
Dim obs = obsFac(buffer, 0, buffer.Length)
From there, you will need some sort of accumulator function that takes byte arrays and outputs complete messages when they are found. The skeleton of such a function might look like:
Public Function Accumulate(source As IObservable(Of Byte())) As IObservable(Of Message)
Return Observable.Create(Of message)(
Function(obs)
Dim accumulator As New List(Of Byte)
Return source.Subscribe(
Sub(buffer)
'do some logic to build a packet here
accumulator.AddRange(buffer)
If True Then
obs.OnNext(New message())
'reset accumulator
End If
End Sub,
AddressOf obs.OnError,
AddressOf obs.OnCompleted)
End Function)
End Function