How to connect a PCIe device to a chipyard design - scala

I'm trying to connect a PCIe device to a chipyard design using the existing edge overlay for the VCU118 (slightly modified because I'm using a different board but this should not matter).
#michael-etzkorn has already posted an issue on GitHub about this, in which they explain that they only got this working using two different clocks.
I'd appreciate some pointers on how this is done (the issue leaves out some implementation details of the configs), and also on whether it would be possible to do this without adding an extra clock (#michael-etzkorn points out that an extra clock could cause some issues).

Based on the work in your gist, it looks like you've answered most of your original question, but since I already typed this out I'll include it as an answer here.
To hook up any port, you'll essentially need to do three things.
Create an IOBinder
Create a HarnessBinder
Hook up the diplomatic nodes in the TestHarness
The IOBinder takes the bundles from within the system and punches them through to ChipTop. The HarnessBinder connects the IO in ChipTop to the harness. Finally, Diplomacy negotiates the parameters for the diplomatic nodes in the TestHarness. That last step can be optional, but many modules, like the XDMA wrapper in the PCIe overlay, are diplomatic, so it's usually required.
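For orientation, both fragments below eventually get mixed into a config. This is only a sketch of how they might be composed; the config class name and the chipyard.RocketConfig base are placeholders for whatever your design actually uses:

class PCIeHarnessConfig extends Config(
  new WithPCIeClient ++              // HarnessBinder, defined below
  new WithXDMASlaveIOPassthrough ++  // IOBinder, defined below
  new chipyard.RocketConfig)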
IOBinder
The IOBinder can take your CanHaveMasterTLMMIOPort and punch out pins for it:
class WithXDMASlaveIOPassthrough extends OverrideIOBinder({
  (system: CanHaveMasterTLMMIOPort) => {
    val io_xdma_slave_pins_temp = IO(DataMirror.internal.chiselTypeClone[HeterogeneousBag[TLBundle]](system.mmio_tl)).suggestName("tl_slave_mmio")
    io_xdma_slave_pins_temp <> system.mmio_tl
    (Seq(io_xdma_slave_pins_temp), Nil)
  }
})
This looks much the same for each port. However, I found experimentally that I had to flip the direction of the temp pin connection (<>) for CanHaveSlaveTLPort, as sketched below.
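For illustration, a flipped variant might look like the following. This is only a sketch: the exact trait and the bundle name for the slave port (assumed here to be l2_frontend_bus_tl) depend on your Chipyard/rocket-chip version, so treat them as placeholders.

class WithXDMAMasterIOPassthrough extends OverrideIOBinder({
  (system: CanHaveSlaveTLPort) => {
    // l2_frontend_bus_tl is an assumed name for the slave-side TL bundle;
    // check your rocket-chip version for the actual field.
    val io_xdma_master_pins_temp = IO(DataMirror.internal.chiselTypeClone[HeterogeneousBag[TLBundle]](system.l2_frontend_bus_tl)).suggestName("tl_master")
    // Note the flipped connection direction compared to the MMIO port above.
    system.l2_frontend_bus_tl <> io_xdma_master_pins_temp
    (Seq(io_xdma_master_pins_temp), Nil)
  }
})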
HarnessBinder
The HarnessBinder retrieves that port and connects it to the outer bundle. The pcieClient bundle is retrieved from the harness and connected to ports.head, which is returned from the IOBinder. This is essentially a fancy functional-programming way to clone the IO and connect it to the bundle in ChipTop.
class WithPCIeClient extends OverrideHarnessBinder({
  (system: CanHaveMasterTLMMIOPort, th: BaseModule with HasHarnessSignalReferences, ports: Seq[HeterogeneousBag[TLBundle]]) => {
    require(ports.size == 1)
    th match { case vcu118th: XDMAVCU118FPGATestHarnessImp => {
      val bundles = vcu118th.xdmavcu118Outer.pcieClient.out.map(_._1)
      val pcieClientBundle = Wire(new HeterogeneousBag(bundles.map(_.cloneType)))
      // pcieClientBundle <> DontCare
      bundles.zip(pcieClientBundle).foreach { case (bundle, io) => bundle <> io }
      pcieClientBundle <> ports.head
    } }
  }
})
Also, I should note that this isn't the ideal way to connect to the harness: it's possible that BundleMap user fields are generated, and they won't be driven unless you keep that pcieClientBundle <> DontCare in place. I found I had to expose AXI ports instead and modify the overlay to output AXI nodes to get Diplomacy to work between the TestHarness and ChipTop regions.
A write-up of that problem, along with some more info, is at:
What are these `a.bits.user.amba_prot` signals and why are they only uninitialized conditionally in my HarnessBinder?
All of that code is included with that question.
TestHarness Diplomatic Connections
val overlayOutput = dp(PCIeOverlayKey).last.place(PCIeDesignInput(wrangler = pcieWrangler.node, corePLL = harnessSysPLL)).overlayOutput
val (pcieNode: TLNode, pcieIntNode: IntOutwardNode) = (overlayOutput.pcieNode, overlayOutput.intNode)
val (pcieSlaveTLNode: TLIdentityNode, pcieMasterTLNode: TLAsyncSinkNode) = (pcieNode.inward, pcieNode.outward)

val inParamsMMIOPeriph = topDesign match { case td: ChipTop =>
  td.lazySystem match { case lsys: CanHaveMasterTLMMIOPort =>
    lsys.mmioTLNode.edges.in(0)
  }
}
val inParamsControl = topDesign match { case td: ChipTop =>
  td.lazySystem match { case lsys: CanHaveMasterTLCtrlPort =>
    lsys.ctrlTLNode.edges.in(0)
  }
}

val pcieClient = TLClientNode(Seq(inParamsMMIOPeriph.master))
val pcieCtrlClient = TLClientNode(Seq(inParamsControl.master))
val connectorNode = TLIdentityNode()

// pcieSlaveTLNode should be driven for both the control slave and the AXI bridge slave
connectorNode := pcieClient
connectorNode := pcieCtrlClient
pcieSlaveTLNode :=* connectorNode
Clock Groups... (unsolved)
pcieWrangler was my attempt at hooking up the axi_aclk. It's not correct: it just creates a second clock with the same 250 MHz frequency as the axi_aclk, so it mostly works, but using a second clock isn't right.
val sysClk2Node = dp(ClockInputOverlayKey).last.place(ClockInputDesignInput()).overlayOutput.node
val pciePLL = dp(PLLFactoryKey)()
pciePLL := sysClk2Node
val pcieClock = ClockSinkNode(freqMHz = 250) // Is this the reference clock?
val pcieWrangler = LazyModule(new ResetWrangler)
val pcieGroup = ClockGroup()
pcieClock := pcieWrangler.node := pcieGroup := pciePLL
Perhaps you can experiment and find out how we can hook up the axi_aclk as the driver for the axi async logic :)
I'd be happy to open a question about that since I don't know the answer yet myself.
To answer some follow-up questions:
How do I know how big of an address range I should reserve for PCIe (master and control)?
For control, you can match the size used in the overlay (0x4000000). For the master port, you'd ideally hook up a DMA engine that has access to the full address range on the host. Otherwise, you have to do AXI2PCIE BAR translation logic to access different regions of host memory, which isn't fun.
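As a sketch of where that size might be set on the chip side, the TL MMIO port is typically parameterized through the ExtBus key. The base address and idBits below are placeholders, and import paths vary across rocket-chip versions:

import freechips.rocketchip.config.Config
import freechips.rocketchip.subsystem.{ExtBus, MasterPortParams, MemoryBusKey}

class WithPCIeMMIOAddressRange extends Config((site, here, up) => {
  case ExtBus => Some(MasterPortParams(
    base = BigInt(0x60000000L),   // placeholder base address
    size = BigInt(0x4000000L),    // match the 0x4000000 region used in the overlay
    beatBytes = site(MemoryBusKey).beatBytes,
    idBits = 4))
})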
How do I connect the interrupt node to the system?
I believe this is only needed if you're connecting a PCIe root complex. If you're connecting a device, you shouldn't need to worry about the interrupt node. If you do wish to add it, you'll have to add NExtInterrupts(3) ++ to your config. I only got as far as that and my uncommented code in the TestHarness before I realized I didn't need it. If you feel you do need it, we can open a new question and try to answer this more fully.

Related

Akka stream hangs when starting more than 15 external processes using ProcessBuilder

I'm building an app that has the following flow:
There is a source of items to process
Each item should be processed by external command (it'll be ffmpeg in the end but for this simple reproducible use case it is just cat to have data be passed through it)
In the end, the output of such external command is saved somewhere (again, for the sake of this example it just saves it to a local text file)
So I'm doing the following operations:
Prepare a source with items
Make an Akka graph that uses Broadcast to fan out the source items into individual flows
Each individual flow uses ProcessBuilder in conjunction with Flow.fromSinkAndSource to build a flow out of this external process execution
End the individual flows with a sink that saves the data to a file.
Complete code example:
import akka.actor.ActorSystem
import akka.stream.scaladsl.GraphDSL.Implicits._
import akka.stream.scaladsl._
import akka.stream.ClosedShape
import akka.util.ByteString
import java.io.{BufferedInputStream, BufferedOutputStream}
import java.nio.file.Paths
import scala.concurrent.duration.Duration
import scala.concurrent.{Await, ExecutionContext, Future}

object MyApp extends App {
  // When this is changed to something above 15, the graph just stops
  val PROCESSES_COUNT = Integer.parseInt(args(0))
  println(s"Running with ${PROCESSES_COUNT} processes...")

  implicit val system = ActorSystem("MyApp")
  implicit val globalContext: ExecutionContext = ExecutionContext.global

  def executeCmdOnStream(cmd: String): Flow[ByteString, ByteString, _] = {
    val convertProcess = new ProcessBuilder(cmd).start
    val pipeIn = new BufferedOutputStream(convertProcess.getOutputStream)
    val pipeOut = new BufferedInputStream(convertProcess.getInputStream)
    Flow
      .fromSinkAndSource(StreamConverters.fromOutputStream(() => pipeIn), StreamConverters.fromInputStream(() => pipeOut))
  }

  val source = Source(1 to 100)
    .map(element => {
      println(s"--emit: ${element}")
      ByteString(element)
    })

  val sinksList = (1 to PROCESSES_COUNT).map(i => {
    Flow[ByteString]
      .via(executeCmdOnStream("cat"))
      .toMat(FileIO.toPath(Paths.get(s"process-$i.txt")))(Keep.right)
  })

  val graph = GraphDSL.create(sinksList) { implicit builder => sinks =>
    val broadcast = builder.add(Broadcast[ByteString](sinks.size))
    source ~> broadcast.in
    for (i <- broadcast.outlets.indices) {
      broadcast.out(i) ~> sinks(i)
    }
    ClosedShape
  }

  Await.result(Future.sequence(RunnableGraph.fromGraph(graph).run()), Duration.Inf)
}
Run this using the following command:
sbt "run PROCESSES_COUNT"
e.g.
sbt "run 15"
This all works quite well until I raise the number of "external processes" (PROCESSES_COUNT in the code). When it's 15 or less, all goes well, but when it's 16 or more the following things happen:
The whole execution just hangs after emitting the first 16 items (this amount of 16 items is Akka's default buffer size AFAIK)
I can see that the cat processes are started in the system (all 16 of them)
When I manually kill one of these cat processes in the system, something frees up and processing continues (of course, as a result, one file is empty because I killed its processing command)
I checked that this is for sure caused by the external execution (not, e.g., a limit of Akka's Broadcast itself).
I recorded a video showing these two situations (first, 15 items working fine and then 16 items hanging and freed up by killing one process) - link to the video
Both the code and video are in this repo
I'd appreciate any help or suggestions on where to look for a solution to this one.
It is an interesting problem, and it looks like the stream is deadlocking. Increasing the number of threads may fix the symptom but not the underlying problem.
The problem is the following code:
Flow
  .fromSinkAndSource(
    StreamConverters.fromOutputStream(() => pipeIn),
    StreamConverters.fromInputStream(() => pipeOut)
  )
Both fromInputStream and fromOutputStream will be using the same default-blocking-io-dispatcher, as you correctly noticed. The reason for using a dedicated thread pool is that both perform Java API calls that block the running thread.
Here is part of a thread stack trace of fromInputStream that shows where the blocking is happening:
at java.io.FileInputStream.readBytes(java.base@11.0.13/Native Method)
at java.io.FileInputStream.read(java.base@11.0.13/FileInputStream.java:279)
at java.io.BufferedInputStream.read1(java.base@11.0.13/BufferedInputStream.java:290)
at java.io.BufferedInputStream.read(java.base@11.0.13/BufferedInputStream.java:351)
- locked <merged>(a java.lang.ProcessImpl$ProcessPipeInputStream)
at java.io.BufferedInputStream.read1(java.base@11.0.13/BufferedInputStream.java:290)
at java.io.BufferedInputStream.read(java.base@11.0.13/BufferedInputStream.java:351)
- locked <merged>(a java.io.BufferedInputStream)
at java.io.FilterInputStream.read(java.base@11.0.13/FilterInputStream.java:107)
at akka.stream.impl.io.InputStreamSource$$anon$1.onPull(InputStreamSource.scala:63)
Now, you're running 16 simultaneous Sinks that are connected to a single Source. To support back-pressure, a Source will only produce an element when all Sinks send a pull command.
What happens next is that you have 16 simultaneous calls to FileInputStream.readBytes, and they immediately block all threads of the default-blocking-io-dispatcher. There are no threads left for fromOutputStream to write any data from the Source, or to perform any other work. Thus, you have a deadlock.
The problem can be fixed if you increase the number of threads in the pool, but this just removes the symptom.
The correct solution is to run fromOutputStream and fromInputStream in two separate thread pools. Here is how you can do it:
Flow
  .fromSinkAndSource(
    StreamConverters.fromOutputStream(() => pipeIn).async("blocking-1"),
    StreamConverters.fromInputStream(() => pipeOut).async("blocking-2")
  )
with the following config:
blocking-1 {
  type = "Dispatcher"
  executor = "thread-pool-executor"
  throughput = 1
  thread-pool-executor {
    fixed-pool-size = 2
  }
}
blocking-2 {
  type = "Dispatcher"
  executor = "thread-pool-executor"
  throughput = 1
  thread-pool-executor {
    fixed-pool-size = 2
  }
}
Because they don't share the pools anymore, both fromOutputStream and fromInputStream can perform their tasks independently.
Also note that I assigned just 2 threads per pool to show that it's not about the thread count but about the pool separation.
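For reference, here is how the fix might slot back into the original executeCmdOnStream helper (a sketch, assuming the blocking-1 and blocking-2 dispatchers above are present in your application.conf):

def executeCmdOnStream(cmd: String): Flow[ByteString, ByteString, _] = {
  val convertProcess = new ProcessBuilder(cmd).start
  val pipeIn = new BufferedOutputStream(convertProcess.getOutputStream)
  val pipeOut = new BufferedInputStream(convertProcess.getInputStream)
  Flow.fromSinkAndSource(
    // each side gets its own blocking dispatcher, so they can't starve each other
    StreamConverters.fromOutputStream(() => pipeIn).async("blocking-1"),
    StreamConverters.fromInputStream(() => pipeOut).async("blocking-2"))
}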
I hope this helps you understand Akka Streams better.
Turns out this was a limit at the Akka configuration level for the blocking IO dispatcher. Changing that value to something bigger than the number of streams fixed the issue:
akka.actor.default-blocking-io-dispatcher.thread-pool-executor.fixed-pool-size = 50
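If you prefer to apply that setting when creating the ActorSystem rather than in application.conf, a small sketch of the same override done programmatically (the pool size of 50 is just the example value from above):

import akka.actor.ActorSystem
import com.typesafe.config.ConfigFactory

val config = ConfigFactory.parseString(
  "akka.actor.default-blocking-io-dispatcher.thread-pool-executor.fixed-pool-size = 50")
  .withFallback(ConfigFactory.load())

implicit val system = ActorSystem("MyApp", config)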

Pymodbus serial forwarder with multiple slaves

I am using the Pymodbus serial forwarder example, which works fine for one serial device. I want to be able to poll more than one device on the bus.
As discussed here, it seems (and my tests confirm) that the ModbusServerContext does not pass down the unit id.
Is there any workaround to enable polling of more than one device (say unit ids 1 & 2) in the serial forwarder example?
I think the store in the other answer should instead be:
store = {
    1: RemoteSlaveContext(client, unit=1),
    2: RemoteSlaveContext(client, unit=2)}
context = ModbusServerContext(slaves=store, single=False)
Since I want to forward all normal addresses, I am doing this:
store = {unit_number: RemoteSlaveContext(client, unit=unit_number)
         for unit_number in range(1, 248)}
context = ModbusServerContext(slaves=store, single=False)
To answer my own question, one could use the following:
store = {0x1: RemoteSlaveContext(client), 0x2: RemoteSlaveContext(client)}
context = ModbusServerContext(slaves=store, single=False)
With this setup, unit ids are passed down.
However, as of now there is a bug, and the response might originate from either serial unit id (1 or 2).

Why does code trying to read messages from two ZeroMQ sockets fail?

I have issues with reading messages from two ZMQ servers (one set to REQ|REP and one to PUB|SUB).
The two servers are running on another computer. When I read just the REQ|REP connection, everything works perfectly, but as soon as I also try to read the PUB|SUB connection, the program freezes (I guess it waits forever for a message).
from PyQt5 import QtCore, QtGui, QtWidgets
import zmq
import ui_mainwindow

class MainWindow(QtWidgets.QMainWindow, ui_mainwindow.Ui_MainWindow):
    def __init__(self, parent=None):
        super(MainWindow, self).__init__(parent)
        self.context = zmq.Context()
        try:
            self.stateSocket = self.context.socket(zmq.REQ)
            self.stateSocket.connect("tcp://134.105.89.197:5555")
        except zmq.ZMQError as e:
            print('States setup failed: ', e)
        try:
            self.context = zmq.Context()
            self.anglesSocket = self.context.socket(zmq.SUB)
            self.anglesSocket.connect("tcp://134.105.89.197:5556")
        except zmq.ZMQError as e:
            print('angles setup failed: ', e)

        self.timer = QtCore.QTimer()
        self.timer.timeout.connect(self.publishState)
        self.timer.setInterval(500)
        self.timer.start()

        self.timer2 = QtCore.QTimer()
        self.timer2.timeout.connect(self.publishAngles)
        self.timer2.setInterval(500)
        self.timer2.start()

        # +more variables unrelated to problem

    def publishState(self):
        request = "a string"
        try:
            self.stateSocket.send_string(request)
            self.reset = 0
            message = self.stateSocket.recv()  # flags=zmq.NOBLOCK)
            values = [float(i) for i in message.decode("UTF-8").split(',')]
            print("Status: ", message)
        except zmq.ZMQError as e:
            print('State communication: ', e)
            values = [0] * 100

    def publishAngles(self):
        try:
            message = self.anglesSocket.recv_string()  # flags=zmq.NOBLOCK)
            # values = [float(i) for i in message.decode("UTF-8").split(',')]
            print("Angles: ", message)
        except zmq.ZMQError as e:
            print('Angles communication: ', e)
            values = [0] * 100
Edit: added the full relevant code.
What I observe is that the deadlock does not come from the REQ|REP part; that part alone works perfectly fine. But it seems that the PUB|SUB part does not work in the timer function. When I make a minimal example with a while loop inside publishAngles(), it works.
So is there an elegant way to use a PUB|SUB socket in a Qt timer-connected function?
In case one has never worked with ZeroMQ, one may enjoy first looking at "ZeroMQ Principles in less than Five Seconds" before diving into further details.
Q: "Is there any stupid mistake I am overlooking?"
Yes, there are a few and all easy to refine.
1) The ZeroMQ part shown so far is incomplete, and it is unclear what subscription and other safeguarding settings were ever applied to the SUB-socket AccessPoint, or when and where. Remember that a SUB socket receives nothing until a subscription is set (e.g. anglesSocket.setsockopt_string(zmq.SUBSCRIBE, "")). The same applies to the REQ-socket AccessPoint, except for the subscription-management settings, for obvious reasons.
2) The code ignores the documented rules of the distributed finite-state automaton (dFSA) hardwired into the REQ/REP Scalable Formal Communication Archetype. Use logic that respects the mandatory REQ-REP-REQ-REP stepping, and make the REQ and SUB handling mutually independent, and you have it. In other words, a naive use of the zmq.NOBLOCK flag that ignores the dFSA rules does not solve the deadlock either.
If you are serious about becoming a distributed-computing professional, a must-read is Pieter Hintjens' fabulous book "Code Connected, Volume 1".

Clustering in AEM

I am facing a peculiar error. I am using AEM 5.6.1.
I have 2 author instances (a1 and a2), and both are in a cluster. We are performing tar optimization on the instances daily between 2 a.m. and 5 a.m. (London time zone). Now, in the error.log of a2, I am seeing the below error every day during that window:
419 ERROR [pool-6-thread-1] org.apache.sling.discovery.impl.cluster.ClusterViewServiceImpl getEstablishedView: the existing established view does not incude the local instance yet! Assuming isolated mode.
Now, I did some research on this and have come to know that AEM uses ClusterViewServiceImpl.java for clustering, and in that, the below code snippet is basically failing:
EstablishedClusterView clusterViewImpl = new EstablishedClusterView(
        config, view, getSlingId());
boolean foundLocal = false;
for (Iterator<InstanceDescription> it = clusterViewImpl
        .getInstances().iterator(); it.hasNext();) {
    InstanceDescription instance = it.next();
    if (instance.isLocal()) {
        foundLocal = true;
        break;
    }
}
if (foundLocal) {
    return clusterViewImpl;
} else {
    logger.info("getEstablishedView: the existing established view does not incude the local instance yet! Assuming isolated mode.");
    return getIsolatedClusterView();
}
Can someone help me understand this more in depth? Does it mean that the clustering is not working properly? What are the possible impacts of this error?
I think you've got a classic case of split brain.
Clustering authors is not a good approach and is discouraged in later versions of AEM, as the authors often get out of sync when they can't talk to each other for whatever reason, usually something temporary and network-related. Believe me, they are sensitive.
When communication drops, the slave thinks it no longer has a master and claims to be the master itself. When communication is re-established, the damage has already been done, as there is no recovery mechanism.
At best, only ever allow users to connect to the primary author and keep the secondary author as a high-availability server.
Better still, set up replication from the primary author that everyone writes to, and have it auto-replicate on write to the secondary backup author.
Hope that helps.

Control flow of messages in Akka actor

I have an actor using Akka which performs an action that takes some time to complete, because it has to download a file from the network.
def receive = {
  case songId: String => {
    Future {
      val futureFile = downloadFile(songId)
      for (file <- futureFile) {
        val fileName = doSomethingWith(file)
        otherActor ! fileName
      }
    }
  }
}
I would like to control the flow of messages to this actor. If I try to download too many files simultaneously, I hit a network bottleneck. The problem is that I am using a Future inside the actor's receive, so the method exits and the actor is ready to process a new message. If I remove the Future, I will download only one file at a time.
What is the best way to limit the number of messages being processed per unit of time? Is there a better way to design this code?
There is a contrib project for Akka that provides a throttle implementation (http://letitcrash.com/post/28901663062/throttling-messages-in-akka-2). If you sit this in front of the actual download actor, then you can effectively throttle the rate of messages going into that actor. It's not 100% perfect in that, if the downloads take longer than expected, you could still end up with more simultaneous downloads than desired, but it's a pretty simple implementation and we use it quite a bit to great effect.
Another option could be to use a pool of download actors, remove the Future, and allow the actors to block so that they are truly handling only one message at a time (see the sketch after the next paragraph). Because you are going to let them block, I would suggest giving them their own Dispatcher (ExecutionContext) so that this blocking does not negatively affect the main Akka dispatcher. If you do this, then the pool size itself represents your maximum allowed number of simultaneous downloads.
Both of these solutions are pretty much "out-of-the-box" solutions that don't require much custom logic to support your use case.
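To make the second option concrete, here is a minimal sketch, assuming a reasonably recent Akka 2.x. The dispatcher name "download-dispatcher" (which you would define in your config) and the pool size of 4 are placeholders, and downloadFile/doSomethingWith are the helpers from the question:

import akka.actor.{Actor, ActorRef, Props}
import akka.routing.RoundRobinPool
import scala.concurrent.Await
import scala.concurrent.duration._

// Each worker blocks until its download finishes, so it truly handles one message at a time.
class DownloadWorker(otherActor: ActorRef) extends Actor {
  def receive = {
    case songId: String =>
      val file = Await.result(downloadFile(songId), 10.minutes) // blocking on purpose
      otherActor ! doSomethingWith(file)
  }
}

// Pool size == maximum number of simultaneous downloads; the routees run on their own dispatcher.
val downloaders = system.actorOf(
  RoundRobinPool(4).props(
    Props(new DownloadWorker(otherActor)).withDispatcher("download-dispatcher")),
  "downloaders")

Sending songIds to downloaders then naturally caps concurrency at the pool size.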
Edit
I also thought it would be good to mention the Work Pulling Pattern. With this approach you could still use a pool, with a single work distributor in front. Each worker (download actor) could perform the download (still using a Future) and only request new work (pull) from the work distributor when that Future has fully completed, meaning the download is done; a rough sketch follows.
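A rough, simplified sketch of that idea, assuming downloadFile returns a Future as in the question and doSomethingWith is the question's helper (the GimmeWork message and the Distributor/Downloader classes are made up for illustration):

import akka.actor.{Actor, ActorRef}
import scala.collection.immutable.Queue

case object GimmeWork

// Hands out songIds only to workers that have asked for work.
class Distributor extends Actor {
  var pending = Queue.empty[String]
  var idle = Set.empty[ActorRef]
  def receive = {
    case songId: String =>
      if (idle.nonEmpty) { val w = idle.head; idle -= w; w ! songId }
      else pending = pending.enqueue(songId)
    case GimmeWork =>
      if (pending.nonEmpty) { val (songId, rest) = pending.dequeue; pending = rest; sender() ! songId }
      else idle += sender()
  }
}

// Pulls one songId at a time and only asks for more once the download completes.
class Downloader(distributor: ActorRef, otherActor: ActorRef) extends Actor {
  import context.dispatcher
  override def preStart(): Unit = distributor ! GimmeWork
  def receive = {
    case songId: String =>
      downloadFile(songId).foreach { file =>
        otherActor ! doSomethingWith(file)
        distributor ! GimmeWork // pull more work only after this download is done
      }
  }
}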
If you have an upper bound on the number of simultaneous downloads you want to happen, you can 'ack' back to the actor saying that a download completed, freeing up a spot to download another file:
import akka.actor.{Actor, ActorRef}
import scala.concurrent.Future
import scala.concurrent.duration._

case object AckFileRequest

class ActorExample(otherActor: ActorRef, maxFileRequests: Int = 1) extends Actor {
  import context.dispatcher

  var fileRequests = 0

  def receive = {
    case songId: String if fileRequests < maxFileRequests =>
      fileRequests += 1
      val thisActor = self
      Future {
        val futureFile = downloadFile(songId)
        // not sure if you're returning the downloaded file or a future here,
        // but you can move this to wherever the downloaded file is and ack
        thisActor ! AckFileRequest
        for (file <- futureFile) {
          val fileName = doSomethingWith(file)
          otherActor ! fileName
        }
      }
    case songId: String =>
      // Do some throttling here
      val thisActor = self
      context.system.scheduler.scheduleOnce(1.second, thisActor, songId)
    case AckFileRequest => fileRequests -= 1
  }
}
In this example, if there are too many file requests, we put the songId request on hold and queue it back up for processing one second later. You can obviously change this however you see fit: maybe send the message straight back to the actor in a tight loop, or do some other form of throttling, depending on your use case.
There is a contrib implementation of message Throttling, as described here.
The code is very simple:
// Requires the akka-contrib module
import akka.actor.{Actor, ActorSystem, Props}
import akka.contrib.throttle.TimerBasedThrottler
import akka.contrib.throttle.Throttler._
import scala.concurrent.duration._

// A simple actor that prints whatever it receives
class Printer extends Actor {
  def receive = {
    case x => println(x)
  }
}

val printer = system.actorOf(Props[Printer], "printer")

// The throttler for this example, setting the rate
val throttler = system.actorOf(Props(classOf[TimerBasedThrottler], 3 msgsPer 1.second))

// Set the target
throttler ! SetTarget(Some(printer))

// These three messages will be sent to the printer immediately
throttler ! "1"
throttler ! "2"
throttler ! "3"

// These two will wait at least until 1 second has passed
throttler ! "4"
throttler ! "5"