POX/Mininet: learning the location of hosts

My question might be a little vague as I clearly misunderstand a lot, but I'll give it a try anyway:
Suppose I have 7 switches in a Fat Tree topology, and the bottom four are each connected to two hosts. When I start the controller, it instructs the switches to send LLDP packets, and this is how I learn the topology. I also calculate a spanning tree to use when I flood packets such as ARP requests.
My problem: how do I learn which switch a certain host is connected to? If h1 sends a layer-3 packet to h3, I know how to route the packet because I have a spanning tree, but this might not be the shortest route. I use Dijkstra to compute shortest routes from each switch to all the others, but if I want to send a message to h3, I don't know which switch is directly connected to it.
Any ideas?

The component responsible for this is host_tracker. You need to listen for the host_tracker's HostEvent in your code, like this:
from pox.core import core
import pox
import pox.lib.packet as pkt
from pox.lib.revent import *
from pox.openflow.discovery import Discovery
from pox.host_tracker import host_tracker
import pox.openflow.libopenflow_01 as of

class YourController (EventMixin):
  def __init__ (self):
    def startup ():
      core.openflow.addListeners(self, priority=0)
      core.openflow_discovery.addListeners(self)
      core.host_tracker.addListeners(self)
    # This is where the listeners are registered
    core.call_when_ready(startup, ('openflow', 'openflow_discovery', 'host_tracker'))

  def _handle_HostEvent (self, event):
    # This is where the host_tracker event is handled
    print "Host, switchport and switch...", event.entry

  def _handle_PacketIn (self, event):
    # Packet processing
    pass

def launch ():
  from host_tracker import launch
  launch()
  core.registerNew(YourController)
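In case it helps, the information carried by each HostEvent can be kept in a simple lookup table so that a Dijkstra path can be terminated at the switch directly attached to the destination host. This is only a sketch with hypothetical names (`handle_host_event`, `host_location`); in real POX code you would populate it from `event.entry`'s fields inside `_handle_HostEvent`:

```python
# Sketch (hypothetical helper): keep a MAC -> (dpid, port) table updated
# from host_tracker events, so shortest-path routing knows which switch
# and port the destination host hangs off.
host_location = {}

def handle_host_event(mac, dpid, port, leave=False):
    """Record or forget where a host is attached."""
    if leave:
        host_location.pop(mac, None)
    else:
        host_location[mac] = (dpid, port)

# When h3's MAC is learned on switch 4, port 2 ...
handle_host_event("00:00:00:00:00:03", dpid=4, port=2)
# ... the routing code can look up the egress switch for the Dijkstra path:
print(host_location["00:00:00:00:00:03"])  # (4, 2)
```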


How to connect a PCIe device to a chipyard design

I'm trying to connect a PCIe device to a Chipyard design using the existing edge overlay for the VCU118 (slightly modified because I'm using a different board, but this should not matter).
#michael-etzkorn already posted an issue on GitHub about this, in which they explain that they only got this working using two different clocks.
I'd appreciate some pointers as to how this is done (the issue leaves out some implementation details of the configs), and also whether it would be possible to do this without adding an extra clock (#michael-etzkorn points out that this could cause some issues).
Based on the work in your gist, it looks like you've answered most of your original question, but since I already typed this out I'll include it as an answer here.
To hook up any port, you'll essentially need to do three things:
Create an IOBinder
Create a HarnessBinder
Hook up the diplomatic nodes in the TestHarness
The IOBinder takes the bundles from within the system and punches them through to ChipTop. The HarnessBinder connects the IO in ChipTop to the harness. Diplomacy negotiates the parameters for the diplomatic nodes. This third step may be optional, but many modules, like the XDMA wrapper in the PCIe overlay, are diplomatic, so it is usually required.
IOBinder
The IOBinder can take your CanHaveMasterTLMMIOPort and punch out pins for it:
class WithXDMASlaveIOPassthrough extends OverrideIOBinder({
  (system: CanHaveMasterTLMMIOPort) => {
    val io_xdma_slave_pins_temp = IO(DataMirror.internal.chiselTypeClone[HeterogeneousBag[TLBundle]](system.mmio_tl)).suggestName("tl_slave_mmio")
    io_xdma_slave_pins_temp <> system.mmio_tl
    (Seq(io_xdma_slave_pins_temp), Nil)
  }
})
This looks much the same for each port. However, I found experimentally that I had to flip the temp pin connection (<>) for CanHaveSlaveTLPort.
HarnessBinder
The HarnessBinder retrieves that port and connects it to the outer bundle. The pcieClient bundle is retrieved from the harness and connected to ports.head, which is returned from the IOBinder. This is essentially a fancy functional-programming way to clone the IO and connect it to the bundle in ChipTop.
class WithPCIeClient extends OverrideHarnessBinder({
  (system: CanHaveMasterTLMMIOPort, th: BaseModule with HasHarnessSignalReferences, ports: Seq[HeterogeneousBag[TLBundle]]) => {
    require(ports.size == 1)
    th match { case vcu118th: XDMAVCU118FPGATestHarnessImp => {
      val bundles = vcu118th.xdmavcu118Outer.pcieClient.out.map(_._1)
      val pcieClientBundle = Wire(new HeterogeneousBag(bundles.map(_.cloneType)))
      // pcieClientBundle <> DontCare
      bundles.zip(pcieClientBundle).foreach { case (bundle, io) => bundle <> io }
      pcieClientBundle <> ports.head
    } }
  }
})
Also, I should note: this isn't the ideal way to connect to the harness, as it's possible that BundleMap user fields are generated, and they won't be driven unless you keep that pcieClientBundle <> DontCare there. I found I had to expose AXI ports instead and modify the overlay to output AXI nodes to get Diplomacy to work between the TestHarness and ChipTop regions.
A write-up of that problem, along with some more info, is at:
What are these `a.bits.user.amba_prot` signals and why are they only uninitialized conditionally in my HarnessBinder?
All of this code is at that question.
TestHarness Diplomatic Connections
val overlayOutput = dp(PCIeOverlayKey).last.place(PCIeDesignInput(wrangler = pcieWrangler.node, corePLL = harnessSysPLL)).overlayOutput
val (pcieNode: TLNode, pcieIntNode: IntOutwardNode) = (overlayOutput.pcieNode, overlayOutput.intNode)
val (pcieSlaveTLNode: TLIdentityNode, pcieMasterTLNode: TLAsyncSinkNode) = (pcieNode.inward, pcieNode.outward)

val inParamsMMIOPeriph = topDesign match { case td: ChipTop =>
  td.lazySystem match { case lsys: CanHaveMasterTLMMIOPort =>
    lsys.mmioTLNode.edges.in(0)
  }
}
val inParamsControl = topDesign match { case td: ChipTop =>
  td.lazySystem match { case lsys: CanHaveMasterTLCtrlPort =>
    lsys.ctrlTLNode.edges.in(0)
  }
}
val pcieClient = TLClientNode(Seq(inParamsMMIOPeriph.master))
val pcieCtrlClient = TLClientNode(Seq(inParamsControl.master))
val connectorNode = TLIdentityNode()

// pcieSlaveTLNode should be driven for both the control slave and the AXI bridge slave
connectorNode := pcieClient
connectorNode := pcieCtrlClient
pcieSlaveTLNode :=* connectorNode
Clock Groups... (unsolved)
pcieWrangler was my attempt at hooking up the axi_aclk. It's not correct: it just creates a second clock with the same 250 MHz frequency as the axi_aclk, so it mostly works, but using a second clock isn't right.
val sysClk2Node = dp(ClockInputOverlayKey).last.place(ClockInputDesignInput()).overlayOutput.node
val pciePLL = dp(PLLFactoryKey)()
pciePLL := sysClk2Node
val pcieClock = ClockSinkNode(freqMHz = 250) // Is this the reference clock?
val pcieWrangler = LazyModule(new ResetWrangler)
val pcieGroup = ClockGroup()
pcieClock := pcieWrangler.node := pcieGroup := pciePLL
Perhaps you can experiment and find out how we can hook up the axi_aclk as the driver for the axi async logic :)
I'd be happy to open a question about that since I don't know the answer yet myself.
To answer some follow-up questions:
How do I know how big of an address range I should reserve for PCIe (master and control)?
For control, you can match the size used in the overlay, 0x4000000. For the master port, you'd ideally just hook up a DMA engine that has access to the full address range on the host. Otherwise, you have to add AXI-to-PCIe BAR translation logic to access different regions of host memory, which isn't fun.
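As a quick sanity check on that control-region size (plain arithmetic, nothing Chipyard-specific):

```python
# The 0x4000000-byte control window used in the overlay is 64 MiB.
control_size = 0x4000000
print(control_size // (1024 * 1024))  # 64
```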
How do I connect the interrupt node to the system?
I believe this is only needed if you're connecting a PCIe root complex. If you're connecting a device, you shouldn't need to worry about the interrupt node. If you do wish to add it, you'll have to add NExtInterrupts(3) ++ to your config. I only got as far as that and my uncommented code in the TestHarness before I realized I didn't need it. If you feel you do, we can open a new question and try to answer this more fully.

Send UDP from .pcap to a non local server via Scapy

I am trying to send a UDP packet that I captured with Wireshark to my private game server through Scapy so that I can trigger a modded event to occur. Eventually I would like to expand on this and create something that makes these events interactive or maybe do something like display the top player of the week at some point during the games, but at this point I am not quite there.
It's a very, very old game that communicates via UDP only and has taken very little modding/anti-cheat implementation into consideration, so I feel pretty confident that if I can get the packet I want to the server, my game server will react as intended (or at least I'm hoping).
The problem:
I am about as green as it gets when it comes to this sort of thing.
I have spent the past 2-3 weeks reading through Stackoverflow and the Scapy documentations and pulling my hair out.
I have tried:
Reading and editing my .pcap file for the source/destination information, and opening up a socket and sending in various different ways. Unfortunately, I just haven't come across anyone with a question similar enough to mine to go off of...
I am not sure it should even be done like I am trying, but here is my code so far:
>>> from scapy.all import *
>>> from scapy.utils import rdpcap
>>> import socket
>>> s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
>>> pkts = rdpcap("/home/Kali/Desktop/Packet.pcap")
>>> for pkt in pkts:
...     pkt[Ether].src = "xx:xx:xx:xx:xx:xx"
...     pkt[Ether].dst = "xx:xx:xx:xx:xx:xx"
...     pkt[IP].src = "192.168.1.1"
...     # I read somewhere that the ports didn't need updating from the pcap?
...     # But that was a local example, so I'm not sure.
...     pkt[IP].dst = "xxx.x.xxx.xx"
...     del pkt.chksum
...     s.send(bytes(pkt))
Any help would be greatly appreciated. Thanks for taking the time to read this!
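One observation that may help: since only the UDP payload matters to the server, a common approach is to extract bytes(pkt[UDP].payload) from the capture with Scapy and send just that through an ordinary SOCK_DGRAM socket, letting the kernel build the Ethernet/IP/UDP headers (so no MAC or checksum rewriting is needed). A minimal stdlib-only sketch of the sending half, demonstrated against a local listener standing in for the game server:

```python
import socket

def send_payload(data, addr):
    """Send raw bytes as a single UDP datagram to (host, port)."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        s.sendto(data, addr)
    finally:
        s.close()

# Demo against a local listener (stands in for the game server):
listener = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
listener.bind(("127.0.0.1", 0))  # OS picks a free port
send_payload(b"\x01\x02modded-event", listener.getsockname())
data, _ = listener.recvfrom(1024)
print(data)  # b'\x01\x02modded-event'
```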

Why does code trying to read messages from two ZeroMQ sockets fail?

I have issues with reading messages from two ZeroMQ servers (one set up as REQ|REP and one as PUB|SUB).
The two servers are running on another computer. When I read just the REQ|REP connection, everything works perfectly, but as soon as I also try to read the PUB|SUB connection, the program freezes (I guess it waits forever for a message).
from PyQt5 import QtCore, QtGui, QtWidgets
import zmq
import ui_mainwindow

class MainWindow(QtWidgets.QMainWindow, ui_mainwindow.Ui_MainWindow):
    def __init__(self, parent=None):
        super(MainWindow, self).__init__(parent)
        self.context = zmq.Context()
        try:
            self.stateSocket = self.context.socket(zmq.REQ)
            self.stateSocket.connect("tcp://134.105.89.197:5555")
        except zmq.ZMQError as e:
            print('States setup failed: ', e)
        try:
            self.context = zmq.Context()
            self.anglesSocket = self.context.socket(zmq.SUB)
            self.anglesSocket.connect("tcp://134.105.89.197:5556")
        except zmq.ZMQError as e:
            print('angles setup failed: ', e)

        self.timer = QtCore.QTimer()
        self.timer.timeout.connect(self.publishState)
        self.timer.setInterval(500)
        self.timer.start()

        self.timer2 = QtCore.QTimer()
        self.timer2.timeout.connect(self.publishAngles)
        self.timer2.setInterval(500)
        self.timer2.start()
        # + more variables unrelated to the problem

    def publishState(self):
        request = "a string"
        try:
            self.stateSocket.send_string(request)
            self.reset = 0
            message = self.stateSocket.recv()  # flags=zmq.NOBLOCK)
            values = [float(i) for i in message.decode("UTF-8").split(',')]
            print("Status: ", message)
        except zmq.ZMQError as e:
            print('State communication: ', e)
            values = [0] * 100

    def publishAngles(self):
        try:
            message = self.anglesSocket.recv_string()  # flags=zmq.NOBLOCK)
            # values = [float(i) for i in message.decode("UTF-8").split(',')]
            print("Angles: ", message)
        except zmq.ZMQError as e:
            print('Angles communication: ', e)
            values = [0] * 100
Edit: added the full relevant code.
What I observe is that the deadlock does not come from the REQ|REP part; that part alone works perfectly fine. It seems that the PUB|SUB part does not work in the timer function. When I make a minimal example with a while loop inside publishAngles(), it works.
So is there an elegant way to use a PUB|SUB socket in a function connected to a Qt timer?
In case one has never worked with ZeroMQ, one may enjoy first looking at "ZeroMQ Principles in less than Five Seconds" before diving into further details.
Q: "Is there any stupid mistake I am overlooking?"
Yes, there are a few, and all are easy to refine.
1) The (so far incomplete) visible ZeroMQ part exhibits a principal uncertainty about what type of subscription and other safe-guarding settings were, if ever, when and where, applied to the SUB-socket-Archetype AccessPoint. Without a subscription being set, a SUB socket receives nothing at all. The same applies to the REQ-socket-Archetype AccessPoint, except for the subscription-management-related kind(s) of setting(s), for obvious reasons.
2) The code ignores the documented principles of the known rules for the distributed-Finite-State-Automaton (dFSA) logic hardwired into the REQ/REP Scalable Formal Communication Archetype. Avoid this by using correct logic that does not violate the here-mandatory dFSA stepping of REQ-REP-REQ-REP-REQ-REP, plus make the REQ and SUB handling mutually independent, and you have it. In other words, a naive, dFSA-rules-ignoring use of the zmq.NOBLOCK flag does not solve the deadlock either.
If you are serious about becoming a distributed-computing professional, a must-read is Pieter Hintjens' fabulous book "Code Connected, Volume 1".
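To make point 1) concrete, here is a minimal sketch (placeholder address) of a SUB-side setup that both sets a subscription and reads without blocking, which is the shape a QTimer-driven slot needs:

```python
import zmq

ctx = zmq.Context.instance()
sub = ctx.socket(zmq.SUB)
sub.setsockopt_string(zmq.SUBSCRIBE, "")  # without this, a SUB socket receives nothing
sub.connect("tcp://127.0.0.1:5556")       # placeholder address

def poll_angles():
    """Non-blocking read, safe to call from a Qt timer slot."""
    try:
        return sub.recv_string(flags=zmq.NOBLOCK)
    except zmq.Again:
        return None  # nothing arrived yet; try again on the next tick

print(poll_angles())  # None here, since nothing is publishing yet
```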

Controlling requests per second and timeout threshold in Gatling

I am working on a Gatling simulation. For the life of me, I cannot get my code to reach 10,000 requests per second. I have read the documentation, and I keep trying different methods, but my requests per second seem capped at 5,000. I have attached the current iteration of my code. The URL and path information is blurred out; assume I have no issue with the HTTP part of my simulation.
package computerdatabase

import io.gatling.core.Predef._
import io.gatling.http.Predef._
import scala.concurrent.duration._
//import assertions._

class userSimulation extends Simulation {
  object Query {
    val feeder = csv("firstfileSHUF.txt").random
    val query = repeat(2000) {
      feed(feeder).
        exec(http("user")
          .get("/path/path/" + "${userID}" + "?fullData=true"))
    }
  }

  val baseUrl = "http:URL:7777"
  val httpConf = http
    .baseURL(baseUrl) // Here is the root for all relative URLs

  val scn = scenario("user") // A scenario is a chain of requests and pauses
    .exec(Query.query)

  setUp(scn.inject(rampUsers(1500) over (60 seconds)))
    .throttle(reachRps(10000) in (2 minutes),
      holdFor(3 minutes))
    .protocols(httpConf)
}
Additionally, I would like to set the maximum threshold for a timeout to 100 ms. I have tried to do this with assertions and also by editing the configuration files, but it never seems to show up during the tests or in my reports. How can I mark a request as KO if it took longer than 100 ms? Thank you for your help with this matter!
I ended up figuring this out. My code above is correct, and I now understand what Stephane, one of the main contributors to Gatling, was explaining: the server at the time simply could not handle my RPS threshold; it was an unreachable upper bound. After making changes to the server, we could handle this sort of load. Additionally, I found a way to time out at 100 ms in the configuration file: specifically, requestTimeout = 100 gives the timeout behavior I was looking for.
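For reference, that setting lives in gatling.conf. A sketch of the relevant fragment (the exact nesting may differ between Gatling versions, so check the gatling-defaults.conf shipped with your version):

```
gatling {
  http {
    ahc {
      requestTimeout = 100  # requests slower than 100 ms are marked KO
    }
  }
}
```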

pySerial buffer won't flush

I'm having a problem with serial IO under both Windows and Linux using pySerial. With this code the device never receives the command and the read times out:
import serial
ser = serial.Serial('/dev/ttyUSB0',9600,timeout=5)
ser.write("get")
ser.flush()
print ser.read()
This code times out the first time through, but subsequent iterations succeed:
import serial
ser = serial.Serial('/dev/ttyUSB0',9600,timeout=5)
while True:
ser.write("get")
ser.flush()
print ser.read()
Can anyone tell me what's going on? I tried to add a call to sync(), but it wouldn't take a serial object as its argument.
Thanks,
Robert
Put some delay between the write and the read, e.g.:
import serial
from time import sleep

ser = serial.Serial('/dev/ttyUSB0', 9600, timeout=5)
ser.flushInput()
ser.flushOutput()
ser.write("get")
sleep(.1)  # 100 ms delay
print ser.read()
The question is really old, but I feel this might be a relevant addition.
Some devices (such as the Agilent E3631, for example) rely on DTR. Some ultra-cheap adapters do not have a DTR line (or do not have it broken out), and with those, such devices may never behave in the expected manner (delays between reads and writes get ridiculously long).
If you find yourself wrestling with such a device, my recommendation is to get an adapter with DTR.
This is because pyserial returns from opening the port before it is actually ready. I've noticed that things like flushInput() don't actually clear the input buffer if called immediately after open(). The following code demonstrates:
import unittest
import serial
import time

"""
1) create a virtual or real connection between COM12 and COM13
2) in a terminal connected to COM12 (at 9600, N81), enter some junk text (e.g. 'sdgfdsgasdg')
3) then execute this unit test
"""

class Test_test1(unittest.TestCase):
    def test_A(self):
        with serial.Serial(port='COM13', baudrate=9600) as s:  # open serial port
            print("Read ASAP: {}".format(s.read(s.in_waiting)))
            time.sleep(0.1)  # wait 100 ms for the pyserial port to actually be ready
            print("Read after delay: {}".format(s.read(s.in_waiting)))

if __name__ == '__main__':
    unittest.main()

"""
output will be:
Read ASAP: b''
Read after delay: b'sdgfdsgasdg'
.
----------------------------------------------------------------------
Ran 1 test in 0.101s
"""
My workaround has been to implement a 100ms delay after opening before doing anything.
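That workaround can be wrapped in a tiny helper. This is only a sketch: read_after_settle is a hypothetical name, and it works with any pyserial-like object exposing in_waiting and read() (shown here with a stand-in object rather than real hardware):

```python
import time

def read_after_settle(ser, delay=0.1):
    """Give the port time to actually become ready, then read what's waiting."""
    time.sleep(delay)
    return ser.read(ser.in_waiting)

# Stand-in for a freshly opened serial.Serial, just to show the call shape:
class FakePort:
    in_waiting = 11
    def read(self, n):
        return b"sdgfdsgasdg"[:n]

print(read_after_settle(FakePort(), delay=0.0))  # b'sdgfdsgasdg'
```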
Sorry that this is old and obvious to some, but I didn't see this option mentioned here. I ended up calling a read_all() when flush wasn't doing anything with my hardware.
# Stopped reading for a while on the connection, so things build up
# Neither of these were working
conn.flush()
conn.flushInput()
# This did the trick; the return value is ignored
conn.read_all()
# Waits for the next line
conn.readline()