Difference between a CANopen device and a CANopen node

Is there a difference between a CANopen device and a CANopen node?

The word "CANopen device" is used in the CANopen specificaiton (CiA-301) to describe an entity with its own node-ID, Object Dictionary and NMT state machine.
But the word "node" is then used in the names of several services related to the CANopen device:
Node control services are used to set the state of the CANopen device.
The node guarding protocol is used to monitor the CANopen device.
The node-ID is the identifier of the CANopen device.
So the term "CANopen node" does not appear in the CANopen specification; it uses "CANopen device" throughout. But the service names and common usage use "CANopen node" to describe a CANopen device.
So they are the same.
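To make the node control service concrete, here is a minimal sketch of sending an NMT "start remote node" command using Linux SocketCAN. Per CiA 301, NMT node control uses COB-ID 0x000 with a two-byte payload: the command specifier and the target node-ID. The interface name "can0" and the node-ID 0x05 are assumptions for illustration.

```c
/* Sketch: send a CANopen NMT "start remote node" command via SocketCAN. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <net/if.h>
#include <linux/can.h>

int main(void)
{
    int s = socket(PF_CAN, SOCK_RAW, CAN_RAW);
    if (s < 0) { perror("socket"); return 1; }

    struct ifreq ifr;
    strcpy(ifr.ifr_name, "can0");              /* assumed interface name */
    if (ioctl(s, SIOCGIFINDEX, &ifr) < 0) { perror("ioctl"); return 1; }

    struct sockaddr_can addr = {0};
    addr.can_family  = AF_CAN;
    addr.can_ifindex = ifr.ifr_ifindex;
    if (bind(s, (struct sockaddr *)&addr, sizeof(addr)) < 0) { perror("bind"); return 1; }

    struct can_frame frame = {0};
    frame.can_id  = 0x000;   /* NMT node control COB-ID per CiA 301 */
    frame.can_dlc = 2;
    frame.data[0] = 0x01;    /* command specifier: start remote node */
    frame.data[1] = 0x05;    /* node-ID of the target CANopen device */

    write(s, &frame, sizeof(frame));
    close(s);
    return 0;
}
```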

A CANopen device is the CANopen node. One CANopen device has one node-ID. One physical device can contain more than one CANopen device (multiple node-IDs).
There are also logical and virtual devices, but those are outside the current scope.
See chapter 4 of the CiA 301 standard for a detailed description.

They are actually the same. There is a terminology difference, but they refer to the same thing.

A CANopen node is a CANopen device connected to a bus.

Related

How drivers work (e.g. PCIe and USB)

I am curious about how drivers in general work. I do understand basic concepts and also how a single driver operates. Where I am confused is how they work when multiple drivers are involved.
Let me explain my question through an example:
Suppose I have a PCIe and a USB interface in hardware. The primary interface to the host (where the driver, OS and applications reside) is PCIe. The USB interface is accessible to the host through PCIe.
So, in this case, I would have a driver for PCIe as well as for USB.
When an application has to transfer data through USB, it would invoke system/OS calls. These would eventually land in the USB driver.
Is this correct?
Once the USB driver has completed its processing, the PCIe driver has to be called. Who does that? Is it the OS or the USB driver itself?
I would assume it is the OS, as anything else would break the basic modular philosophy. But a driver calling into the OS seems counterintuitive, as I always assumed the flow to be from application to OS to driver to hardware.
Can anyone please throw some light on this topic?
Much like in user space code, there exist standardized APIs for accessing various types of hardware in kernel land (exact usage varies by OS). As a result, it isn't much of a stretch for one device driver to access another device's driver via these standardized APIs. (Warning: USB is a very complex protocol, and many details have been glossed over here to keep a long post shorter.)
The original question focused on PCIe-to-USB cards. In this example I think it's helpful to think of there being three "layers" of drivers. The first layer is the PCIe bus controller driver, which controls PCIe-bus-specific functions such as mapping out MMIO for PCIe devices and supporting interrupts from those PCIe devices. The second layer is the USB host controller layer, which provides the functions for issuing standardized USB transactions. Finally, the USB device driver (like a USB keyboard driver) sits on top of the stack, using the standardized USB transactions to implement the functionality of the specific USB peripheral device. Calls from the keyboard driver will call down into the USB host controller driver, which in turn may call down into the PCIe driver. All of this is done in kernel space, even though many separate drivers are involved.
Most PCIe devices do the bulk of their communication with the CPU via MMIO accesses, which appear as memory reads/writes to the processor. Generally no specific driver function is needed to perform the MMIO transfer of data from PCIe to CPU (although there may be some simple accessor functions to do endian correction or deal with cache issues).
USB host controller drivers are interesting in that they conform to a standard (such as XHCI, the USB 3.0 host controller standard, which I'll use in this example) which dictates a standard device memory map and behavior. Thus there is usually a chip-specific driver that performs non-standard initialization of the USB host controller device. Additionally, these chip-specific drivers will both retrieve the location of the XHCI-standardized MMIO region and provide a way to receive interrupts from the XHCI controller (in this example, from PCIe interrupts).
Next, this standardized memory region and interrupt mechanism are passed to a generic XHCI host controller driver. The generic XHCI code does not care whether the device is PCIe; it just cares that it gets passed a memory region that follows the XHCI standard and that it receives the correct interrupts. The XHCI driver provides the generic USB transfer functions, which the USB keyboard driver can in turn use to initiate USB transactions.
For the most part, the XHCI driver just does reads/writes to the MMIO region that was passed in. This allows the same common XHCI code to service a wide array of USB host controllers, many of which are not PCIe devices, effectively allowing the XHCI driver to abstract away the underlying hardware implementing the USB controller. Thus, for the example posed in the original question, the USB host controller standards are designed to hide the underlying hardware mechanisms and make for a more modular USB driver system.
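As a rough illustration of the chip-specific layer and of MMIO access, here is a minimal Linux PCI driver sketch. This is not the real xhci-pci driver; the vendor/device IDs are placeholders, and the single register read stands in for the controller-specific setup a real driver would do before handing the region off to the generic layer.

```c
/* Toy sketch of a "chip-specific glue" PCI driver: claim the device,
 * map BAR 0 (where an XHCI controller exposes its standardized
 * registers), and touch it via MMIO. IDs 0x1234/0x5678 are placeholders. */
#include <linux/module.h>
#include <linux/pci.h>

static int demo_probe(struct pci_dev *pdev, const struct pci_device_id *id)
{
    void __iomem *regs;
    int ret = pci_enable_device(pdev);
    if (ret)
        return ret;

    regs = pci_iomap(pdev, 0, 0);      /* map BAR 0 */
    if (!regs)
        return -ENOMEM;

    /* MMIO read: to the CPU this is just a memory load */
    pr_info("demo: first register = 0x%x\n", ioread32(regs));

    pci_iounmap(pdev, regs);
    return 0;
}

static void demo_remove(struct pci_dev *pdev)
{
    pci_disable_device(pdev);
}

static const struct pci_device_id demo_ids[] = {
    { PCI_DEVICE(0x1234, 0x5678) },    /* placeholder vendor/device IDs */
    { }
};
MODULE_DEVICE_TABLE(pci, demo_ids);

static struct pci_driver demo_driver = {
    .name     = "demo-xhci-glue",
    .id_table = demo_ids,
    .probe    = demo_probe,
    .remove   = demo_remove,
};
module_pci_driver(demo_driver);
MODULE_LICENSE("GPL");
```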

Linux device driver basics

What is the difference between a device and a driver, and how are they related?
Please explain in the context of the diagram below.
A device is a general piece of hardware, like a hard disk, a network card, etc.
A device driver is a piece of code written to interact with the device, or, more precisely, to control the device. It defines how we interact with the device.
The picture you mentioned is related to virtualization:
QEMU is an emulator: it creates virtual CPUs, NICs, etc., so that virtual machines can have their own CPU, NIC and so on. Think of it as creating the illusion of hardware you don't actually have.
As explained, QEMU creates the emulated devices; to operate them we need drivers. That is where the virtio drivers come into the picture.
Virtio drivers: these drivers are written to control the emulated devices.
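As a sketch of how that device/driver pairing happens, assuming the Linux in-kernel virtio API: a driver publishes a table of device IDs it can handle, and the virtio bus matches it against the emulated devices QEMU exposes. The driver name and log messages below are placeholders.

```c
/* Skeleton virtio driver: declares which virtio devices it controls;
 * the kernel binds it to matching emulated devices. */
#include <linux/module.h>
#include <linux/virtio.h>
#include <linux/virtio_ids.h>
#include <linux/virtio_config.h>

static const struct virtio_device_id demo_ids[] = {
    { VIRTIO_ID_NET, VIRTIO_DEV_ANY_ID },  /* match virtio network devices */
    { 0 },
};

static int demo_probe(struct virtio_device *vdev)
{
    /* called when the bus pairs this driver with a matching device */
    dev_info(&vdev->dev, "bound to emulated device\n");
    return 0;
}

static void demo_remove(struct virtio_device *vdev)
{
    dev_info(&vdev->dev, "unbound\n");
}

static struct virtio_driver demo_driver = {
    .driver.name = "demo-virtio",
    .id_table    = demo_ids,
    .probe       = demo_probe,
    .remove      = demo_remove,
};
module_virtio_driver(demo_driver);
MODULE_LICENSE("GPL");
```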

How does Bonjour over Bluetooth work

Can anyone explain how Bonjour works over Bluetooth from iPhone OS 3.0 onwards?
The documentation says the Bonjour APIs used in the application just work even if Wi-Fi is off and Bluetooth is on. It also says a Bluetooth PAN is established, and hence an IP address comes into the picture.
But Bonjour (based on mDNS) requires multicast to work, while a Bluetooth PAN (piconet) works on a master-slave concept: any data exchanged between peers has to go to the master first, and the master then forwards it to all clients. Moreover, a piconet is limited to eight active devices (one master plus up to seven slaves). Does that mean Bonjour over Bluetooth only works for a maximum of 8 devices?
Apparently, it's PANU-to-PANU communication, so the limitation is actually one-on-one communication. If you use Bluetooth Explorer, included with Xcode, you'll see the iOS device presents a service with ID 0x1115. Since there is neither a GN nor a NAP node in the connection, only two devices can participate in the connection.
Bluetooth Explorer also shows various custom fields that serve to exchange metadata about the connection. See my somewhat related question for an example of the service announcement.
I have only been able to get this service to appear when using GameKit, on both an iPhone 3G with 4.2.1 and an iPad with 5.0.1.
I know nothing about Bonjour and the iPhone... Perhaps Bonjour just sees the TCP/IP network and multicasts onto it, regardless of whether the IP network is over Bluetooth or Wi-Fi or FooBar...
IIRC, PAN just forms a point-to-point link to the PAN peer, and thus if the peer is an access point (rather than just another end node), it is the peer that will handle multicasting the packets.
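To illustrate the "Bonjour just multicasts onto whatever IP network it has" point: mDNS is plain UDP sent to 224.0.0.251 port 5353, independent of the link underneath. Below is a hedged sketch that hand-assembles a single mDNS PTR question for _services._dns-sd._udp.local and sends it; a real application would use the Bonjour/mDNSResponder APIs instead of raw packets.

```c
/* Sketch: send one hand-built mDNS query to the well-known multicast
 * group. Works the same whether the IP link is Wi-Fi or a Bluetooth PAN. */
#include <stdio.h>
#include <unistd.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>

int main(void)
{
    int s = socket(AF_INET, SOCK_DGRAM, 0);
    if (s < 0) { perror("socket"); return 1; }

    struct sockaddr_in dst = {0};
    dst.sin_family = AF_INET;
    dst.sin_port   = htons(5353);                  /* mDNS port */
    inet_pton(AF_INET, "224.0.0.251", &dst.sin_addr);

    /* DNS header (ID 0, flags 0, 1 question) followed by the question
     * _services._dns-sd._udp.local, QTYPE=PTR (12), QCLASS=IN (1) */
    unsigned char q[] = {
        0,0, 0,0, 0,1, 0,0, 0,0, 0,0,
        9,'_','s','e','r','v','i','c','e','s',
        7,'_','d','n','s','-','s','d',
        4,'_','u','d','p',
        5,'l','o','c','a','l', 0,
        0,12,
        0,1
    };
    sendto(s, q, sizeof(q), 0, (struct sockaddr *)&dst, sizeof(dst));
    close(s);
    return 0;
}
```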

Can two identical devices be present on the same bus in any PCI topology

As per the PCI standard, devices are identified on the basis of vendor ID, device ID and bus number. All devices of the same type have identical vendor and device IDs. If I put two such devices on the same bus, say bus 0, how will the PCI software subsystem distinguish between the two?
If such a case is not possible in PCI, can it be made possible through a PCI Express switch?
Yes, it's perfectly fine. The host distinguishes identical devices by slot.
PCI and PCI Express devices are identified by Bus/Device/Function, which is necessarily unique per device in the system. Vendor and Device ID are just properties of a device found at a certain Bus/Device/Function.
When enumerating the bus, a driver will typically scan PCI configuration space (iterating through all the installed PCI devices) until it finds one or more devices that match the expected vendor and device ID, and possibly also the subsystem IDs. Once it finds a match, it should record the bus/device/function as a "handle" to the open device.
Properly written software should find all vendor/device matches, put them in a table, and let you pick which one you want to use (e.g. /dev/mydevice0, /dev/mydevice1, etc). However, I have seen lazy software that simply stops at the first match.
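A user-space analogue of that scan, assuming Linux sysfs (the vendor/device IDs below are placeholders): walk /sys/bus/pci/devices, compare each device's vendor and device files, and keep the BDF directory name as the handle. A match list like this is exactly the "table" described above.

```c
/* Sketch: enumerate PCI devices via sysfs and report every BDF whose
 * vendor/device IDs match. 0x8086/0x100e are placeholder IDs. */
#include <stdio.h>
#include <dirent.h>

static unsigned read_hex(const char *path)
{
    unsigned v = 0;
    FILE *f = fopen(path, "r");
    if (f) { fscanf(f, "%x", &v); fclose(f); }
    return v;
}

int main(void)
{
    const unsigned want_vid = 0x8086, want_did = 0x100e; /* placeholders */
    DIR *d = opendir("/sys/bus/pci/devices");
    struct dirent *e;
    char path[512];

    while (d && (e = readdir(d)) != NULL) {
        if (e->d_name[0] == '.')
            continue;
        snprintf(path, sizeof(path), "/sys/bus/pci/devices/%s/vendor", e->d_name);
        unsigned vid = read_hex(path);
        snprintf(path, sizeof(path), "/sys/bus/pci/devices/%s/device", e->d_name);
        unsigned did = read_hex(path);
        if (vid == want_vid && did == want_did)
            printf("match at %s (the BDF is the unique handle)\n", e->d_name);
    }
    if (d) closedir(d);
    return 0;
}
```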
As far as I know, each PCI device can be uniquely identified by (bus, device, function). In your case (two devices with identical VID and DID installed), they could sit on different PCI buses; if they have to share the same bus, their device or function numbers must differ.

Direct communication between two PCI devices

I have a NIC and an HDD, both connected to PCIe slots in a Linux machine. Ideally, I'd like to funnel incoming packets to the HDD without involving the CPU, or involving it only minimally. Is it possible to set up direct communication along the PCI bus like that? Does anyone have pointers as to what to read up on to get started on a project like this?
Thanks all.
Not sure if you are asking about PCI or PCIe. You used both terms, and the answer is different for each.
If you are talking about a legacy PCI bus: The answer is "yes". Board to board DMA is doable. Video capture boards may DMA video frames directly into your graphics card memory for example.
In your example, the NIC could DMA directly to a storage device. However, the data would be quite "raw": your NIC has no concept of a filesystem, for example. You also need to make sure the NIC's DMA engine is programmed to stay within the confines of your SATA controller's registers. You don't want to walk off the end of the BAR!
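Purely as a sketch of that constraint, assuming a Linux kernel context: the destination handed to the DMA engine is the bus address of the peer's BAR, and every transfer must stay inside it. demo_program_dma() is a hypothetical stand-in for a NIC-specific descriptor setup routine, and the BAR number is assumed.

```c
/* Illustrative fragment only: demo_program_dma() is a made-up placeholder
 * for whatever device-specific routine writes the NIC's DMA descriptors. */
#include <linux/pci.h>

static void demo_program_dma(struct pci_dev *nic,
                             resource_size_t dst, resource_size_t len)
{
    /* device-specific register/descriptor writes would go here */
}

static void demo_p2p_setup(struct pci_dev *nic, struct pci_dev *sata)
{
    /* bus address and length of the peer's BAR 2 (BAR number assumed) */
    resource_size_t dst = pci_resource_start(sata, 2);
    resource_size_t len = pci_resource_len(sata, 2);

    /* every DMA target must stay inside [dst, dst + len):
     * do not walk off the end of the BAR */
    demo_program_dma(nic, dst, len);
}
```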
If you are talking about a modern PCIe bus: The answer is "typically no, but it depends". Peer-to-peer bus transactions are a funny thing in the PCI Express spec, and root complex devices are not required to support them.
In my testing, peer-to-peer DMA will work, if your devices are behind a PCIe switch (not directly plugged into the motherboard). However, if your devices are connected directly to the chipset (Root Complex), peer-to-peer DMA will not work, except in some special cases. The most notable special case would be the video capture example I mentioned earlier. The special cases are mentioned in the chipset datasheets.
We have tested peer-to-peer PCIe DMA with a few different Intel and AMD chipsets and found consistent behavior, though we have not tested the most recent generations of chipsets. (We have discussed the lack of peer-to-peer PCIe DMA support with Intel; I am not sure whether our feedback has had any impact on their engineering department.)
Assuming that both the NIC and the HDD are Endpoints (or Legacy Endpoints), you cannot funnel traffic between them without involving the Root Complex (CPU).
PCIe, unlike PCI or PCI-X, is not a bus but a link; thus any transaction from an endpoint device (say, the NIC) would have to travel through the Root Complex (CPU) in order to get to another branch (the HDD).