I am using the Pymodbus serial forwarder example, which works fine for one serial device. I want to be able to poll more than one device on the bus.
As discussed here, it seems (and my tests confirm) that the ModbusServerContext does not pass down the unit id.
Is there any workaround to enable polling of more than one device (say unit ids 1 & 2) in the serial forwarder example?
I think the above should be:
store = {
    1: RemoteSlaveContext(client, unit=1),
    2: RemoteSlaveContext(client, unit=2),
}
context = ModbusServerContext(slaves=store, single=False)
Since I want to forward all normal addresses, I am doing this:
store = {unit_number: RemoteSlaveContext(client, unit=unit_number)
         for unit_number in range(1, 248)}
context = ModbusServerContext(slaves=store, single=False)
To answer my question, one could use the following:
store = {0x1: RemoteSlaveContext(client), 0x2: RemoteSlaveContext(client)}
context = ModbusServerContext(slaves=store, single=False)
With this setup, unit ids are passed down.
However, as of now there is a bug, and the response might originate from either serial unit id (1 or 2).
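For completeness, here is a minimal end-to-end sketch of the forwarder with several unit ids. It assumes the pymodbus 2.x module layout, an RTU bus on /dev/ttyUSB0 and a TCP front-end on port 5020; adjust the port name, baud rate and unit range (and the import paths, which moved in pymodbus 3.x) to your setup.
# Serial-to-TCP forwarder sketch for several unit ids (pymodbus 2.x paths assumed).
from pymodbus.client.sync import ModbusSerialClient
from pymodbus.datastore import ModbusServerContext
from pymodbus.datastore.remote import RemoteSlaveContext
from pymodbus.server.sync import StartTcpServer

client = ModbusSerialClient(method="rtu", port="/dev/ttyUSB0", baudrate=9600)
client.connect()

# One RemoteSlaveContext per unit id, so each request keeps its unit id.
store = {unit: RemoteSlaveContext(client, unit=unit) for unit in (1, 2)}
context = ModbusServerContext(slaves=store, single=False)

# Expose the forwarder as a Modbus TCP server.
StartTcpServer(context, address=("0.0.0.0", 5020))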
I'm trying to connect a PCIe device to a chipyard design using the existing edge overlay for the VCU118 (slightly modified because I'm using a different board but this should not matter).
#michael-etzkorn already posted an issue on GitHub about this, in which they explain how they only got this working using two different clocks.
I'd appreciate some pointers as to how this is done (the issue leaves out some implementation details of the configs), and also whether it would be possible to do this without adding an extra clock (#michael-etzkorn points out that an extra clock could cause some issues).
Based on the work in your gist, it looks like you've answered most of your original question, but since I already typed this out I'll include it as an answer here.
To hook up any port, you'll essentially need to do three things.
Create an IOBinder
Create a HarnessBinder
Hook up the diplomatic nodes in the TestHarness
The IOBinder takes the bundles from within the system and punches them through to ChipTop. The HarnessBinder connects the IO in ChipTop to the harness. Diplomacy negotiates the parameters for the diplomatic nodes. This last step may be optional, but many modules, like the XDMA wrapper in the PCIe overlay, are diplomatic, so it is usually required.
IOBinder
The IOBinder can take your CanHaveMasterTLMMIOPort and punch out pins for it:
class WithXDMASlaveIOPassthrough extends OverrideIOBinder({
  (system: CanHaveMasterTLMMIOPort) => {
    val io_xdma_slave_pins_temp = IO(DataMirror.internal.chiselTypeClone[HeterogeneousBag[TLBundle]](system.mmio_tl)).suggestName("tl_slave_mmio")
    io_xdma_slave_pins_temp <> system.mmio_tl
    (Seq(io_xdma_slave_pins_temp), Nil)
  }
})
This can look much the same for each port. However, I found experimentally that I had to flip the temp pin connection (<>) for CanHaveSlaveTLPort.
HarnessBinder
The HarnessBinder retrieves that port and connects it to the outer bundle. The pcieClient bundle is retrieved from the harness and connected to ports.head, which is what the IOBinder returned. This is essentially a fancy functional-programming way to clone the IO and connect it to the bundle in ChipTop.
class WithPCIeClient extends OverrideHarnessBinder({
  (system: CanHaveMasterTLMMIOPort, th: BaseModule with HasHarnessSignalReferences, ports: Seq[HeterogeneousBag[TLBundle]]) => {
    require(ports.size == 1)
    th match { case vcu118th: XDMAVCU118FPGATestHarnessImp => {
      val bundles = vcu118th.xdmavcu118Outer.pcieClient.out.map(_._1)
      val pcieClientBundle = Wire(new HeterogeneousBag(bundles.map(_.cloneType)))
      // pcieClientBundle <> DontCare
      bundles.zip(pcieClientBundle).foreach { case (bundle, io) => bundle <> io }
      pcieClientBundle <> ports.head
    } }
  }
})
Also, I should note: this isn't the ideal way to connect to the harness, as it's possible BundleMap user fields are generated, and they won't be driven unless you have that pcieClientBundle <> DontCare there. I found I had to expose AXI ports instead and modify the overlay to output AXI nodes to get Diplomacy to work between the TestHarness and ChipTop regions.
A write-up of that problem, along with some more info, is at:
What are these `a.bits.user.amba_prot` signals and why are they only uninitialized conditionally in my HarnessBinder?
All of this code is at that question.
TestHarness Diplomatic Connections
val overlayOutput = dp(PCIeOverlayKey).last.place(PCIeDesignInput(wrangler = pcieWrangler.node, corePLL = harnessSysPLL)).overlayOutput
val (pcieNode: TLNode, pcieIntNode: IntOutwardNode) = (overlayOutput.pcieNode, overlayOutput.intNode)
val (pcieSlaveTLNode: TLIdentityNode, pcieMasterTLNode: TLAsyncSinkNode) = (pcieNode.inward, pcieNode.outward)

val inParamsMMIOPeriph = topDesign match { case td: ChipTop =>
  td.lazySystem match { case lsys: CanHaveMasterTLMMIOPort =>
    lsys.mmioTLNode.edges.in(0)
  }
}
val inParamsControl = topDesign match { case td: ChipTop =>
  td.lazySystem match { case lsys: CanHaveMasterTLCtrlPort =>
    lsys.ctrlTLNode.edges.in(0)
  }
}

val pcieClient = TLClientNode(Seq(inParamsMMIOPeriph.master))
val pcieCtrlClient = TLClientNode(Seq(inParamsControl.master))
val connectorNode = TLIdentityNode()

// pcieSlaveTLNode should be driven for both the control slave and the AXI bridge slave
connectorNode := pcieClient
connectorNode := pcieCtrlClient
pcieSlaveTLNode :=* connectorNode
Clock Groups... (unsolved)
pcieWrangler was my attempt at hooking up the axi_aclk. It's not correct: it just creates a second clock with the same 250 MHz frequency as the axi_aclk, so it mostly works, but using a second clock isn't right.
val sysClk2Node = dp(ClockInputOverlayKey).last.place(ClockInputDesignInput()).overlayOutput.node
val pciePLL = dp(PLLFactoryKey)()
pciePLL := sysClk2Node
val pcieClock = ClockSinkNode(freqMHz = 250) // Is this the reference clock?
val pcieWrangler = LazyModule(new ResetWrangler)
val pcieGroup = ClockGroup()
pcieClock := pcieWrangler.node := pcieGroup := pciePLL
Perhaps you can experiment and find out how we can hook up the axi_aclk as the driver for the axi async logic :)
I'd be happy to open a question about that since I don't know the answer yet myself.
To answer some follow-up questions:
How do I know how big of an address range I should reserve for PCIe (master and control)?
For control, you can match the size used in the overlay, 0x4000000. For the master port, you'd ideally just hook a DMA engine up to it that has access to the full address range on the host. Otherwise, you have to add AXI2PCIE BAR translation logic to access different regions of host memory, which isn't fun.
How do I connect the interrupt node to the system?
I believe this is only needed if you're connecting a PCIe Root Complex. If you're connecting a device, you shouldn't need to worry about the interrupt node. If you do wish to add it, you'll have to add NExtInterrupts(3) ++ to your config. I only got as far as that and my uncommented code in the TestHarness before I realized I didn't need it. If you feel you do, we can open a new question and try answering this more fully.
Hi, I'm currently looking into gRPC and I'm curious about the usage of a repeated field vs a stream.
For example, let's say I want to implement a reservation service for movie seats. The issue I'm facing is that I need to tell the service which movie I want to reserve the seats for.
I can think of 2 solutions. First:
I send the id of the movie with every seat I want to reserve, or once at the beginning of the stream using a oneof,
like this:
rpc ReserveSeatsForShowing(stream SeatReservationRequest) returns (Reservation);

message SeatReservationRequest {
  oneof reservationOneOf {
    int32 showingId = 1;
    SeatReservation seatReservation = 2;
  }
}
Or using a repeated field, like this:
rpc ReserveSeatsForShowing(SeatReservationRequest) returns (Reservation);

message SeatReservationRequest {
  int32 showingId = 1;
  repeated SeatReservation seatReservation = 2;
}
Since I haven't really worked with gRPC before, I'm not quite sure which option to choose or whether other options are available.
Looking forward to your recommendations.
For the seat reservation, I think it would make sense to use a repeated field. Just like the real-world scenario, the request is essentially "I want seats A, B, C for movie X", which fits a single repeated field better than a stream, and the payload is very small anyway. This approach should also use fewer server resources, since the reservation is handled as one batch.
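To illustrate the repeated-field variant on the server side, here is a hedged Python sketch. The generated module names (reservation_pb2, reservation_pb2_grpc) and the service name ReservationService are placeholders, not from the question; they depend on your .proto package and service declaration.
# Server-side sketch for the repeated-field variant.
# reservation_pb2 / reservation_pb2_grpc / ReservationService are hypothetical
# placeholders for whatever protoc generates from your .proto file.
from concurrent import futures
import grpc
import reservation_pb2
import reservation_pb2_grpc

class ReservationService(reservation_pb2_grpc.ReservationServiceServicer):
    def ReserveSeatsForShowing(self, request, context):
        # All seats arrive in a single message, so they can be reserved as one batch.
        seats = list(request.seatReservation)
        print(f"Reserving {len(seats)} seats for showing {request.showingId}")
        return reservation_pb2.Reservation()  # fill in the reservation details here

server = grpc.server(futures.ThreadPoolExecutor(max_workers=4))
reservation_pb2_grpc.add_ReservationServiceServicer_to_server(ReservationService(), server)
server.add_insecure_port("[::]:50051")
server.start()
server.wait_for_termination()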
I am trying to implement the ZMQ REQ/REP model in Java.
I have a Server role, running on port 5564, which acts as the Replier:
ZMQ.Socket repSock = context.socket(ZMQ.REP);
I have a Client role, running on port 5563:
ZMQ.Socket syncclient = context.socket(ZMQ.REQ);
I have a proxy server in the middle, which passes requests and responses:
ZMQ.proxy(reqSocket, repSocket, null);
The good thing about having a proxy is that I can add multiple Servers:
repSocket.connect("tcp://" + addr.getHostAddress() + ":" + port);
Which is working fine.
Now, when I remove a Server node from the Proxy:
repSocket.disconnect("tcp://" + addr.getHostAddress() + ":" + port);
the Client gets stuck, as a request has been made and the REQ socket waits for a response.
So the process gets stuck at syncclient.recvStr():
for (int request_nbr = 0; request_nbr < (request_nbr + 1); request_nbr++) {
    syncclient.send(str.getBytes(), 0);
    System.out.println("Send Dataaaa....... ");
    String data = syncclient.recvStr(Charset.defaultCharset());
    System.out.println(" here.. " + data);
    request_nbr++;
}
I searched and couldn't find a way to track the REQ socket.
I need either one of 2 things:
A way to keep track of a Socket instance which I am about to disconnect, and wait till all its messages are processed, so that syncclient.recvStr() will not block
A way to reset the syncclient socket, so that I can keep getting REQ/REP responses without interruption
In real-world scenarios, rather avoid using the blocking mode of the ZeroMQ .send() / .recv() methods and use .poll() instead.
While this may require a few more SLOCs of code, the result leaves you in control, whereas a blocking call takes all the control away from your code and you cannot do much about it until (if at all) the next message gets delivered. That is a very poor design practice, and except for the most simplistic schoolbook examples, such calls are actually anti-patterns for the real world.
So, do not expect Question 2 to become somehow magically solved; this is not part of the ZeroMQ API (for many rather loudly evangelised reasons). Better decide between .setsockopt( ZMQ.REQ_RELAXED, 1 ), if the API version and context of use permit, or do not use the trivial REQ/REP pattern at all (due to its known risk of falling into an unsalvageable mutual deadlock; ref. my other posts on this very subject, where this phenomenon was both illustrated and explained countless times).
In a similar manner, Question 1 is reasonable to ask only if you have never read the ZeroMQ specifications, documentation and "Best Practices". Having spent some time on these, your options would be crystal clear: there are no such tools built in. One can add such an add-on, if in need of similar non-core logic for her/his own purposes. The only setting that indirectly influences the behaviour on aSocket.close() is .setsockopt( ZMQ.LINGER, 0 ), which may help prevent the system from transitioning into an effective hang-up state, where aSocket would otherwise wait infinitely for a state that will never happen whenever the message queue is still non-empty (messages still waiting to get delivered).
Going into Distributed-Systems design is like entering a new world. No sequences are guaranteed (non-serial code-execution paths happen). No means of any local control of remote entities, their states, their failures, their presence at all, or their actual ZeroMQ API version.
Indeed a challenging world to enter into.
N.b.:
You might already know that one can .connect() aSocket instance (better, an Access Point to aSocket instance) to more than one remote end without using the proxy. Some additional .setsockopt() tuning, setting ZMQ.IMMEDIATE to a value of 1, will help better manage the round-robin distribution policy, irrespective of the transport classes used for the actual message delivery ( { tcp:// | ipc:// | vmci:// | pgm:// | epgm:// | inproc:// } ). All that at your fingertips.
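To make the .poll() advice concrete, here is a minimal sketch of a non-blocking REQ client in Python with pyzmq (the same pattern exists in Java via ZMQ.Poller); the endpoint and the 1-second timeout are placeholders, not values from the question.
# REQ client that polls instead of blocking in recv(), so a removed or dead
# server cannot hang the client forever. On timeout the REQ socket is
# discarded and recreated, because a REQ socket cannot send twice in a row.
import zmq

ctx = zmq.Context()

def make_req_socket():
    s = ctx.socket(zmq.REQ)
    s.setsockopt(zmq.LINGER, 0)          # do not hang in close() on queued messages
    s.connect("tcp://localhost:5559")    # placeholder proxy endpoint
    return s

sock = make_req_socket()
poller = zmq.Poller()
poller.register(sock, zmq.POLLIN)

for i in range(10):
    sock.send_string("request %d" % i)
    events = dict(poller.poll(timeout=1000))   # wait at most 1 s for a reply
    if events.get(sock) == zmq.POLLIN:
        print(sock.recv_string())
    else:
        poller.unregister(sock)
        sock.close()
        sock = make_req_socket()               # reset and carry on
        poller.register(sock, zmq.POLLIN)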
Given the following code, if I use the first method (in the if branch) to obtain a MIDI destination, the code works correctly and MIDI data is sent. If I use the second method (from the else branch), no data is sent.
var client = MIDIClientRef()
var port = MIDIPortRef()
var dest = MIDIEndpointRef()
MIDIClientCreate("jveditor" as CFString, nil, nil, &client)
MIDIOutputPortCreate(client, "output" as CFString, &port)
if false {
    dest = MIDIGetDestination(1)
} else {
    var device = MIDIGetExternalDevice(0)
    var entity = MIDIDeviceGetEntity(device, 0)
    dest = MIDIEntityGetDestination(entity, 0)
}
var name: Unmanaged<CFString>?
MIDIObjectGetStringProperty(dest, kMIDIPropertyDisplayName, &name)
print(name?.takeUnretainedValue() as! String)
var gmOn : [UInt8] = [ 0xf0, 0x7e, 0x7f, 0x09, 0x01, 0xf7 ]
var pktlist = MIDIPacketList()
var current = MIDIPacketListInit(&pktlist)
current = MIDIPacketListAdd(&pktlist, MemoryLayout<MIDIPacketList>.stride, current, 0, gmOn.count, &gmOn)
MIDISend(port, dest, &pktlist)
In both cases the printed device name is correct, and the status of every call is noErr.
I have noticed that if I ask for the kMIDIManufacturerName property I get different results: specifically, using the first method I get Generic (from the USB MIDI interface to which the MIDI device is connected), and with the second method I get the value Roland configured via the Audio MIDI Setup app.
The reason I want to use the second method is specifically so that I can filter out devices that don't have the desired manufacturer name, but as above I can't then get working output.
Can anyone explain the difference between these two methods, why the latter doesn't work, and ideally offer a suggestion as to how I can work around that?
It sounds like you want to find only the MIDI destination endpoints to talk to a certain manufacturer's devices. Unfortunately that isn't really possible, since there is no protocol for discovering what MIDI devices exist, what their attributes are, and how they are connected to the computer.
(Remember that MIDI is primitive 1980s technology. It doesn't even require bidirectional communication. There are perfectly valid MIDI setups with MIDI devices that you can send data to, but can never receive data from, and vice versa.)
The computer knows what MIDI interfaces are connected to it (for instance, a USB-MIDI interface). CoreMIDI calls these "Devices". You can find out how many there are, how many ports each has, etc. But there is no way to find out anything about the physical MIDI devices like keyboards and synthesizers that are connected to them.
"External devices" are an attempt to get around the discovery problem. They are the things that appear in Audio MIDI Setup when you press the "Add Device" button. That's all!
Ideally your users would create an external device for each physical MIDI device in their setup, enter all the attributes of each one, and set up all the connections in a way that perfectly mirrors their physical MIDI cables.
Unfortunately, in reality:
There may not be any external devices. There is not much benefit to creating them in Audio MIDI Setup, and it's a lot of boring data entry, so most people don't bother.
If there are external devices, you can't trust any of the information that the users added. The manufacturer might not be right, or might be spelled wrong, for instance.
It's pretty unfriendly to force your users to set things up in Audio MIDI Setup before they can use your software. Therefore, no apps do that... and therefore nobody sets anything up in Audio MIDI Setup. It's a chicken-and-egg problem.
Even if there are external devices, your users might want to send MIDI to other endpoints (like virtual endpoints created by other apps) that are not apparently connected to external devices. You should let them do what they want.
The documentation for MIDIGetDevice() makes a good suggestion:
If a client iterates through the devices and entities in the system, it will not ever visit any virtual sources and destinations created by other clients. Also, a device iteration will return devices which are "offline" (were present in the past but are not currently present), while iterations through the system's sources and destinations will not include the endpoints of offline devices.
Thus clients should usually use MIDIGetNumberOfSources, MIDIGetSource, MIDIGetNumberOfDestinations and MIDIGetDestination, rather than iterating through devices and entities to locate endpoints.
In other words: use MIDIGetNumberOfDestinations and MIDIGetDestination to get the possible destinations, then let your users pick one of them. That's all.
If you really want to do more:
Given a destination endpoint, you can use MIDIEndpointGetEntity and MIDIEndpointGetDevice to get to the MIDI interface.
Given any MIDI object, you can find its connections to other objects. Use MIDIObjectGetDataProperty to get the value of property kMIDIPropertyConnectionUniqueID, which is an array of the unique IDs of connected objects. Then use MIDIObjectFindByUniqueID to get to the object. The outObjectType will tell you what kind of object it is.
But that's pretty awkward, and you're not guaranteed to find any useful information.
Based on a hint from Kurt Revis's answer, I've found the solution.
The destination that I needed to find is associated with the source of the external device, with the connection between them found using the kMIDIPropertyConnectionUniqueID property of that source.
Replacing the code in the if / else branch in the question with the code below works:
var external = MIDIGetExternalDevice(0)
var entity = MIDIDeviceGetEntity(external, 0)
var src = MIDIEntityGetSource(entity, 0)
var connID : Int32 = 0
var dest = MIDIObjectRef()
var type = MIDIObjectType.other
MIDIObjectGetIntegerProperty(src, kMIDIPropertyConnectionUniqueID, &connID)
MIDIObjectFindByUniqueID(connID, &dest, &type)
A property dump suggests that the connection unique ID property is really a data property (perhaps containing multiple IDs), but the resulting CFData appears to be in big-endian format, so reading it as an integer property instead seems to work fine.
I have not been able to find this information anywhere: how long can a string sent with the TTL version of the Redpark cable be?
The following delegate method is called twice when I print something through serial from my Arduino; an example of a string is this: 144;480,42;532,40;20e
- (void) readBytesAvailable:(UInt32)length{
When I use the new method of retrieving available data, getStringFromBytesAvailable, I only get 144;480,42;532,40;, and then the whole method is called again and the string now contains the rest of the string: 20e
The following method works for appending the two strings, but only if the rate of data transmission is 'slow' (once a second; I would prefer at least 10 times a second).
- (void)readBytesAvailable:(UInt32)length {
    if (string && [string rangeOfString:@"e"].location == NSNotFound) {
        string = [string stringByAppendingString:[rscMgr getStringFromBytesAvailable]];
        NSLog(@"%@", string);
        finishedReading = YES;
    }
    else {
        string = [rscMgr getStringFromBytesAvailable];
    }
    if (finishedReading == YES)
    {
        // do stuff
    }
    finishedReading = NO;
    string = nil;
}
But can you tell me why the method is called twice if I write a "long" string, and how to avoid this issue?
Since your program fragment runs faster than the time it takes to send a string, you need to capture the bytes and append them to a string.
If the serial data is terminated with a carriage return you can test for it to know when you have received the entire string.
Then you can allow your Arduino to send 10 times a second.
That is just how serial ports work. You can't and don't need to avoid those issues. There is no attempt at any level of the SW/HW to keep your serial data stream intact, so making any assumptions about that in your code is just wrong. Serial data is just a stream of bytes, with no concept of packetization. So you have to deal with the fact that you might have to read partial data and read the rest later.
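As a hedged illustration of that reassembly idea (shown in Python rather than Objective-C for brevity, and reusing the 'e' terminator from the question): buffer whatever arrives, and only act once a complete, terminated message is present.
# Reassemble a byte stream into complete messages. Chunks may be split at
# arbitrary points, so append them to a buffer and only process a message
# once its terminator ('e' here, as in the question) has arrived.
buffer = ""

def handle_message(message):
    # do stuff with one complete message, e.g. "144;480,42;532,40;20e"
    print(message)

def on_bytes_available(chunk):
    global buffer
    buffer += chunk
    while "e" in buffer:
        message, buffer = buffer.split("e", 1)
        handle_message(message + "e")

# The same string delivered across two callbacks still yields one message:
on_bytes_available("144;480,42;532,40;")
on_bytes_available("20e")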
The serialPortConfig within the redparkSerial header file provided by RedPark does, in fact, give you more configuration control than you may realize. The readBytesAvailable:length method is abstracted, and is only called when one of two conditions is met: rxForwardingTimeout value is exceeded with data in the primary buffer (default set to 100 ms) or rxForwardCount is reached (default set to 16 characters).
So, in your case it looks like you've still got data in your buffer after your initial read, which means that the readBytesAvailable:length method will be called again (from the main run loop) to retrieve the remaining data. I would propose playing around with the rxForwardingTimeout and rxForwardCount until it performs as you'd expect.
As already mentioned, though, I'd recommend adding a flag (doesn't have to be carriage return) to at least the end of your packet, for identification.
Also, some good advice here: How do you design a serial command protocol for an embedded system?
Good luck!