I want to set up PTP (Precision Time Protocol) now and the HSR protocol in the future. I have an STM32H743ZIT6 and a KSZ8463FRL switch.
The switch has 3 ports. According to the datasheet, ports 1 and 2 can be used as a TC (Transparent Clock: P2P/E2E) and port 3 can be used as an OC (Ordinary Clock: Master/Slave). I'm confused. Which port should connect to my MCU? Which port should be connected to the GMC (Grand Master Clock)? My board is currently a slave to another master, but can my board be used as a Master too?
Currently I have port 3 connected to the MCU, and ports 1 and 2 are free.
I don't have enough information. Please help me or point me to useful references. Thanks.
Ordinary Clock ports are uplink facing, so port 3 should be connected to the GM. In this configuration, the MCU board is a slave to the GM and can be connected to port 1 or 2.
To use the board as a Master, the MCU has to be connected to port 3 (port 3 always connects to the Master). The CPU can sync to a GPS and act as a GM this way.
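If you have a Linux host on the same network, linuxptp's ptp4l is a quick way to sanity-check either role before bringing up the MCU stack. A minimal sketch, assuming a test host with interface eth0 and software timestamping:

# Slave-only ordinary clock on eth0, software timestamping, log to stdout
ptp4l -i eth0 -S -s -m
# The same host as a candidate master: drop -s (add -2 for layer-2 transport)
ptp4l -i eth0 -S -m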
Note that the 1588v2 support on the STM32H743 may be buggy, according to reports on the STM32 forums.
I am running Node-RED on a Raspberry Pi 4 as a Docker container (along with Mosquitto, TimescaleDB and Grafana), but I fail to get data from a smart meter (SML protocol) into Node-RED. The Pi is connected to an optical sensor via a USB cable, and I do get data on the Pi (sudo cat /dev/ttyUSB0 | od -tx1).
I cannot find any parameter configuration for the smartmeter node (node-red-contrib-smartmeter) that gets any data into Node-RED. Below you see the flow (connection details: 9600 baud, 8N1 - this should be fine since it is from the manual and already worked before).
To check the serial device connection from within the Docker container, I installed serialport in Node-RED. After some adjustments, the serialport node in Node-RED could connect to /dev/ttyUSB0. Now I get values - strange ones - from my serial device into Node-RED.
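For reference, the device passthrough I mean is along these lines (the container name and port mapping here are illustrative, not my exact setup):

# Pass the USB serial adapter through to the Node-RED container
docker run -d --name nodered \
  --device=/dev/ttyUSB0:/dev/ttyUSB0 \
  -p 1880:1880 \
  nodered/node-red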
But with the smartmeter node I still get no values, even though the parameters are the same as for the serialport node. Do you have any idea? Is there an alternative to the smartmeter node that should work?
Thank you very much in advance!
I have 2 Raspberry Pi 4s; the first one runs as the master and the second one as the slave. They are connected via an Ethernet cable.
A load cell and an HX711 have been wired to the slave.
I would like to read the weight data from the master.
The GPIO Zero library (https://gpiozero.readthedocs.io/en/stable/recipes_remote_gpio.html) has a few remote-GPIO examples (LED, Button, etc.) in a master/slave setup.
I could not find an example for the HX711. Does anyone have experience with a GPIO Zero and HX711 server/client solution? A rough sketch of what I have in mind is below.
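This is an untested sketch; the slave's hostname and the BCM pin numbers are placeholders, and pigpiod must be running on the slave with remote access enabled. I realize every pi.read/pi.write is a network round trip, and the HX711 powers down if PD_SCK stays high for more than about 60 µs - so would sampling locally on the slave and serving values over a socket be the safer design?

import pigpio

DOUT, PD_SCK = 5, 6                  # placeholder BCM pins on the slave
pi = pigpio.pi("raspberrypi-slave")  # connect to pigpiod on the slave Pi

def read_hx711(pi):
    pi.set_mode(DOUT, pigpio.INPUT)
    pi.set_mode(PD_SCK, pigpio.OUTPUT)
    while pi.read(DOUT):             # DOUT goes low when a sample is ready
        pass
    value = 0
    for _ in range(24):              # clock out 24 data bits, MSB first
        pi.write(PD_SCK, 1)
        value = (value << 1) | pi.read(DOUT)
        pi.write(PD_SCK, 0)
    pi.write(PD_SCK, 1)              # 25th pulse selects channel A, gain 128
    pi.write(PD_SCK, 0)
    if value & 0x800000:             # sign-extend 24-bit two's complement
        value -= 0x1000000
    return value

print(read_hx711(pi))
pi.stop()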
Thanks
I am looking at EtherCAT.
I am using embedded Linux.
I have compiled EtherLab and SOEM and verified that EtherCAT master functionality is possible.
But I could not find anything about an EtherCAT slave (in software).
First of all, EtherLab has only the master function.
SOES also requires specific hardware (LAN9252, TWR-K60): https://github.com/OpenEtherCATsociety/SOES/tree/master/soes/hal
I think an EtherCAT slave should also be possible in software if an EtherCAT master is possible with a plain Ethernet port.
Is a physical hardware device unconditionally required for an EtherCAT slave, unlike for the EtherCAT master?
An EtherCAT slave requires a physical ESC (EtherCAT Slave Controller). A slave has to process and forward each frame on the fly as it passes through, which the ESC does in dedicated hardware; a standard Ethernet MAC driven by software cannot do this, which is why SOES targets ESC chips such as the LAN9252 instead of a plain NIC. The master, by contrast, only sends and receives ordinary Ethernet frames, so it can run on any Ethernet port.
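For contrast, the master side really does run on a plain NIC. A minimal sketch using pysoem (the Python wrapper around SOEM); the interface name is an assumption, and every device the scan finds is necessarily a hardware ESC:

import pysoem

master = pysoem.Master()
master.open("eth0")               # assumed interface; needs raw-socket privileges
if master.config_init() > 0:      # broadcast-scan the segment for slaves
    for i, slave in enumerate(master.slaves):
        print(i, slave.name)      # each entry is a physical ESC on the bus
else:
    print("no EtherCAT slaves found")
master.close()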
If we look at a Haswell architecture diagram today, we can see that there are PCIe lanes connected directly to the CPU (for graphics) as well as some routed to the platform controller hub (the southbridge replacement).
If we look at the Intel 8 Series datasheet (the specification of the C222), we find that the Intel C222 contains the I/O APIC used to route legacy INTx interrupts (Chapter 5.10). My question is: what happens if a legacy INTx interrupt request arrives directly at the CPU (over the PCIe 3.0 lanes)? Does it have to be forwarded to the C222 first, or is there another I/O APIC in the system agent that I would have to program in that case? Also, with Intel Virtualization Technology for Directed I/O there is now an additional indirection, the interrupt remapping table. Is that table in the system agent (the former northbridge) on the CPU or in the C222, and does that mean all interrupts from the PCIe 3.0 lanes need to be routed to the C222 first when remapping is enabled?
Legacy INTx interrupt requests arriving at a root port in the CPU are forwarded to the I/O APIC in the PCH.
There is a separate VT-d instance in the CPU (perhaps even a separate instance per root port), so message-signaled interrupts arriving at a root port do not go through the PCH.
We have a JBoss 5 AS cluster consisting of 2 nodes using multicast. Everything works fine and the servers are able to discover each other and form a cluster, but these servers generate heavy multicast traffic, which affects the network performance of other servers sharing the same network.
I am new to JBoss clustering. Is there any way to use unicast (point-to-point) instead of multicast, or to configure the multicast so that it is not a problem for the rest of the network? Can you refer me to some documentation, a blog post or similar that can help me get rid of this problem?
Didn't get any answers here, but this might help someone in the future. We managed to resolve it as follows.
Set the following TTL property for JBoss in the startup script:
-Djgroups.udp.ip_ttl=1
This restricts the multicast messages to 1 hop. It will not reduce the amount of network traffic between the clustered JBoss nodes, but it will prevent it from spreading outside.
If other servers in the same subnet are still affected by the flooding, you might have to switch to the TCP stack and use unicast instead of multicast:
-Djboss.default.jgroups.stack=tcp
There are also more clustering configuration files under the JBoss deploy directory that you should look at:
server/production/deploy/cluster/jboss-cache-manager.sar/META-INF/jboss-cache-manager-jboss-beans.xml
and the other files in the JGroups configuration.
If multicast is not an option, or for some reason it doesn't work due to the network topology, we can use unicast.
To use unicast clustering instead of UDP multicast, open up your profile, look into the file jgroups-channelfactory-stacks.xml, and locate the stack named "tcp". That stack still uses UDP, but only for multicast discovery. If that small amount of UDP traffic is acceptable, you don't need to change it. If it isn't, or multicast doesn't work at all, you will need to configure the TCPPING protocol and set initial_hosts to tell it where to look for cluster members.
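Roughly, the discovery section of the "tcp" stack looks like the excerpt below; the addresses and the exact property name are assumptions you should verify against your own stacks file:

<TCPPING timeout="3000"
         initial_hosts="${jgroups.tcpping.initial_hosts:192.168.0.1[7600],192.168.0.2[7600]}"
         port_range="1"
         num_initial_members="2"/>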
Afterwards, you will need to tell JBoss Cache to use this stack: open up jboss-cache-manager-jboss-beans.xml, where each cache has a stack defined. You can either change it there from udp to tcp, or simply set a property when starting the AS, just add:
-Djboss.default.jgroups.stack=tcp
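Putting it together, the startup then looks something like this (the profile name matches the path above; the host list is a placeholder, and the property name must match what your stacks file references):

./run.sh -c production -Djboss.default.jgroups.stack=tcp \
  -Djgroups.tcpping.initial_hosts=192.168.0.1[7600],192.168.0.2[7600]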