How to read Analog Output Holding Registers on an Advantech ADAM-6717 through Modbus TCP

I've been exploring the ADAM 6717 from Advantech.
This is the ModBus address table for said device:
At first I wanted to modify the value of Digital Output channel 0 (DO0); as can be seen in the address table above, its address is 0x0017.
I succeeded at this by using a Modbus tool with the following settings:
Sending either "On" or "Off" turns a LED connected to that output on and off. Everything runs smoothly, according to my expectations, up to this point.
The problem arises when I want to read Analog Input channel 6 or, equivalently, address 400431~40044.
Since that address lies in the Analog Output Holding Registers part of the address table, I thought that the following settings would accomplish the job:
However, as can be seen above, the reading shows 0.0 when there is actually 6 V at that input (from a potentiometer).
It is worth mentioning that I've made sure to enable the AI6 channel and to set it to Voltage mode instead of Current. Also, the device's web utility shows the AI6 reading correctly as I change the potentiometer's resistance.
So the problem doesn't lie in the connection from the potentiometer to AI6 but somewhere else.
On a whim, and leaving aside what I think I know about this topic, I thought of changing the function code from 0x03 to 0x04.
However, the response is exactly the same.
It bugs me that I can read and write values on the output coils but not read the Analog Output Holding Registers.
Is there any configuration that I might be missing over here?
Thanks in advance.
Device settings:
IP address: 10.0.0.1
Port on which the Modbus service is running: 5020
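For concreteness, here is what I believe the tool is doing, expressed with libmodbus. The offset 430 and unit ID 1 are my assumptions: under the usual 4xxxxx convention, holding register 400431 maps to zero-based protocol address 430, and an off-by-one there, or a wrong unit ID, would be a classic way to read zeros from a channel that the web utility shows correctly:

// Hypothetical sketch: read AI6's two holding registers over Modbus TCP
// with libmodbus. Offset 430 and unit ID 1 are assumptions, not values
// confirmed by the ADAM documentation.
#include <cerrno>
#include <cstdint>
#include <cstdio>
#include <modbus/modbus.h>

int main() {
    modbus_t *ctx = modbus_new_tcp("10.0.0.1", 5020);
    if (!ctx || modbus_connect(ctx) == -1) {
        std::fprintf(stderr, "connect failed: %s\n", modbus_strerror(errno));
        return 1;
    }
    modbus_set_slave(ctx, 1);  // unit ID: assumed to be 1

    // 400431 is a one-based "4xxxxx" reference; on the wire, function 0x03
    // (Read Holding Registers) takes the zero-based offset 430.
    uint16_t regs[2] = {0, 0};
    if (modbus_read_registers(ctx, 430, 2, regs) == -1)
        std::fprintf(stderr, "read failed: %s\n", modbus_strerror(errno));
    else
        std::printf("AI6 raw registers: %u %u\n",
                    (unsigned)regs[0], (unsigned)regs[1]);

    modbus_close(ctx);
    modbus_free(ctx);
    return 0;
}

If the raw registers come back nonzero, what remains is only scaling the raw count into volts according to the channel's configured input range.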

AudioKit: Virtual MIDI port confusion?

I'm fiddling around with creating virtual MIDI ports in AudioKit (v5-main).
It seems to me that there is some confusion between INPUT and OUTPUT ports, but then again, perhaps I'm not understanding what's going on.
I create Virtual MIDI Input and Output ports by:
let midi = MIDI()
midi.createVirtualInputPorts(numberOfPort: 1, [Int32(1000000)], names: ["MIDI Test Input Port"])
midi.createVirtualOutputPorts(numberOfPort: 1, [Int32(2000000)], names: ["MIDI Test Output Port"])
I do realise that I can do this in one go (createVirtualPorts), but I wanted control over the portIDs for further investigation.
But when I list my INPUT ports by displaying:
midi.inputUIDs
The OUTPUT port (portID 2000000) shows up in the list.
And vice versa, if I use:
midi.destinationUIDs
... I see the INPUT (PortID 1000000).
To me, it would make sense if my virtual Input port showed up in the input (inputUIDs) list and the Output port in the output (destinationUIDs) list, but it's the other way round.
Investigating further, I inspected receivedMIDINoteOn to see the portID reported.
The port UID for incoming MIDI events is the OUTPUT one (PortID 2000000).
I double-checked how physical MIDI ports behave, and indeed, I receive MIDI notes on INPUT ports, as expected.
Am I missing something with regard to terminology here (Input/Output/Destination)?

Can't receive from a USB bulk endpoint even though Windows enumerates the device and libusb reads the descriptors of an STM32 custom-class device

For a fast ADC-sampling USB device, I am using the USB 2.0 High Speed capable STM32F733 with the embedded USB-HS PHY. In USBView I can see that the device is enumerated, and my libusb code opens the device and claims the interface, but when I try to receive data with libusb_bulk_transfer, the operation times out (return code -12). Things I have tried: I have confirmed that when I request data with libusb_bulk_transfer, the device is interrupted (note: I have DMA enabled in my class configuration C file, and it is not clear to me how that is triggered). I have verified that the transfer size and packet count registers are being set correctly by the LL library function, and that when I request data from
Any tips on debugging such problems will be much appreciated - this board is my undergrad thesis, due in under two months!
Desktop sequence:
libusb_get_device_list, libusb_get_device_descriptor, libusb_open, libusb_get_string_descriptor_ascii, libusb_free_device_list, then libusb_bulk_transfer(devh, fat_EPIN_ADDR, inframe, fat_EPIN_SIZE, &gotBytes, 100), where gotBytes is an int and inframe is a large array.
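Roughly, that sequence corresponds to the following standalone reconstruction (not my actual usb.cpp; the VID/PID pair is a placeholder, not my device's real IDs):

// Minimal host-side reconstruction of the sequence above (libusb-1.0).
// 0x0483/0x5740 below is a placeholder VID/PID.
#include <cstdio>
#include <libusb-1.0/libusb.h>

constexpr unsigned char fat_EPIN_ADDR = 0x81;  // EP IN 1
constexpr int fat_EPIN_SIZE = 0x200;           // 512-byte HS bulk packets

int main() {
    libusb_context *ctx = nullptr;
    libusb_init(&ctx);

    libusb_device_handle *devh =
        libusb_open_device_with_vid_pid(ctx, 0x0483, 0x5740);  // placeholder
    if (!devh) { std::fprintf(stderr, "open failed\n"); return 1; }
    libusb_claim_interface(devh, 0);

    unsigned char inframe[10 * fat_EPIN_SIZE];
    int gotBytes = 0;
    int rc = libusb_bulk_transfer(devh, fat_EPIN_ADDR, inframe,
                                  int(sizeof inframe), &gotBytes, 100 /* ms */);
    // libusb_error_name() decodes the failure; for a bulk IN, a timeout
    // means the device never handed the core a data packet for the IN token.
    std::printf("rc=%d (%s), got %d bytes\n",
                rc, libusb_error_name(rc), gotBytes);

    libusb_release_interface(devh, 0);
    libusb_close(devh);
    libusb_exit(ctx);
    return 0;
}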
Device firmware:
MX_USB_DEVICE_Init();
uint8_t txBuffer[10 * fat_EPIN_SIZE];
while (1)
{
    USBD_LL_Transmit(&hUsbDeviceHS, Custom_fat_EPIN_ADDR, txBuffer, Custom_fat_EPIN_SIZE);
    HAL_Delay(1);
}
Custom_fat_EPIN_SIZE is 0x200 and the endpoint address is 0x81 (EP IN 1)
The installed driver for the device is WinUSB (verified in Device Manager to be winusb.sys), and I am linking libusb-1.0 into my desktop program. You can find my source code at https://gitlab.com/tywonemi-school-stuff/silicon-radar-fun; the firmware is in My SW/v1 and the desktop software is a Qt Creator project in My SW/Viewer, of note is usb.cpp. You can also compare with testing project/HIDTest, which is code that I tested with an STM32F303 Nucleo dev board, where I was able to read an array through an IN bulk endpoint with the Viewer application. However, the F3 has the USB peripheral while the F7 has OTG_USB, and I am now attempting USB 2.0 compliant HS, so there may be more protocol-based pitfalls. You can also find the output of the device descriptor etc. from USBView in My SW/USBView_broken.txt.
EDIT 1: I have finally found a concrete error in the STM32's behavior. The DMAADDR is set for EP IN 0x81 but never increments, despite the DMA being enabled. I have gone through literally every occurrence of the word "DMA" in the USB_OTG peripheral documentation.
I thought my linker script might be placing the array in DTCM or similar, where the OTG DMA can't access it, but the address of txBuffer is 0x2003EBEC, which is in SRAM2. The AHB matrix in the reference manual clearly shows that the USB OTG HS DMA is a master on a bus that SRAM2 is a slave of, and DTCM is connected too. I will look for application notes on USB OTG HS DMA - it just seems to be refusing to copy data!
I have fixed my issue by disabling the DMA setting. I have re-read the relevant portions of the reference manual and I still don't know exactly how the values propagate into the Tx FIFOs. DMA-less operation may turn out to be a major bottleneck in my project, so I might return to this later.
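In a CubeMX-generated project like mine, the switch in question should be the dma_enable field of the HAL PCD init structure in usbd_conf.c (the exact handle name may differ between projects):

/* usbd_conf.c - HAL PCD initialization, CubeMX naming assumed */
hpcd_USB_OTG_HS.Instance = USB_OTG_HS;
hpcd_USB_OTG_HS.Init.dma_enable = DISABLE;  /* CPU fills the Tx FIFOs directly
                                               instead of the OTG-internal DMA */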

I2C: Raspberry Pi (master) reads Arduino (slave)

I would like to read a block of data from my Arduino Mega (and also from an Arduino Micro in another project) with my Raspberry Pi via I2C. The code has to be in Perl, because it's sort of a plug-in for my home-automation server.
I'm using the Device::SMBus interface and the connection works; I'm able to write and read single bytes. I can even use writeBlockData with register address 0x00 - I randomly discovered that this address works.
But when I want to use readBlockData, no register address seems to work.
Does anyone know the correct register address, or is that not even the problem that causes the errors?
Thanks in advance
First off, which register(s) do you want to read? Here's an example using my RPi::I2C software (it should be exceptionally similar with the distribution you're using), along with a sketch that has a bunch of pseudo-registers configured for reading and writing.
First, the Perl code. It reads two bytes (the output of an analogRead() of pin A0, which is set up as register 80), then bit-shifts the two bytes into a 16-bit integer to get the full 0-1023 value of the pin:
use warnings;
use strict;
use RPi::I2C;
my $arduino_addr = 0x04;
my $arduino = RPi::I2C->new($arduino_addr);
my @bytes = $arduino->read_block(2, 80);
my $a0_value = ($bytes[0] << 8) | $bytes[1];
print "$a0_value\n";
Here's a full-blown Arduino sketch you can review that sets up a half dozen or so pseudo-registers; when each register is specified, the Arduino writes or reads the appropriate data. If no register is specified, it operates on the 0x00 register.
The I2C library on the Arduino always fires the onReceive() callback before the onRequest() one (when using Wire), so I set up a global variable, reg, to hold the register value. I populate it in the onReceive() interrupt, and it is then used in the onRequest() call to send you the data at the pseudo-register you've specified.
The sketch itself doesn't really do anything useful; I just presented it as an example. It's actually part of the automated unit-test platform for my RPi::WiringPi distribution.
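Stripped down to just the register-80 path, the pattern looks roughly like this (a minimal illustration, not the full test-platform sketch):

// Minimal pseudo-register example: register 80 returns analogRead(A0).
#include <Wire.h>

volatile byte reg = 0x00;            // set in onReceive(), used in onRequest()

void receiveEvent(int numBytes) {
    if (numBytes > 0)
        reg = Wire.read();           // first byte sent is the pseudo-register
    while (Wire.available())
        Wire.read();                 // drain anything else
}

void requestEvent() {
    if (reg == 80) {                 // register 80: A0 reading
        int v = analogRead(A0);      // 0-1023
        byte buf[2] = { highByte(v), lowByte(v) };
        Wire.write(buf, 2);          // high byte first, matching the
    }                                // bit-shift in the Perl code above
}

void setup() {
    Wire.begin(0x04);                // slave address, matches $arduino_addr
    Wire.onReceive(receiveEvent);
    Wire.onRequest(requestEvent);
}

void loop() {}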

NFC type B PUPI doubts

I have the Panasonic MN63Y1210 tag. I have read it with different phones and I always see that the ID is 0x00000000.
I've made a program with an Arduino and Adafruit's PN532 shield, and I get that response there too: in the ATQB, the PUPI appears as 0x00000000. But when I read ISO 14443-3, I find this:
A Pseudo-Unique Identifier (PUPI) is used to differentiate PICCs during anticollision. This 4-byte number may be either a number dynamically generated by the PICC or a diversified fixed number. The PUPI shall only be generated by a state transition from the POWER-OFF to the IDLE state.
The transition from POWER-OFF to IDLE requires a field, so when I try to read the tag it should not be in POWER-OFF, because I am applying a field; still, a PUPI of 0x00000000 seems strange to me. I've tested another tag (same Panasonic model) and I get the same PUPI...
Is this normal? Or what do you think about it?
I would suggest that you start by looking into the datasheet of the MN63Y1210:
On page 26 (table 3-13) you will find that the default value this chip uses for the PUPI is 00000000. You can configure that value, for instance, over the serial interface.

Packets lost with XBee Series 1

I have two XBee Series 1 modules. I have them as end devices working in API mode and talking to each other. The first XBee is attached to a Raspberry Pi, while the other is on my PC, where I watch the terminal tab of the XCTU program. The baud rate I use is 125000.
From the Raspberry Pi I try to send a JPG image of 30 kB. I send data frames 100 bytes long (the maximum, according to the XBee documentation). Inside a loop I create and send the packets, and I also have a cout statement that prints the loop number. Everything is fine and all bytes are sent. When I comment out the cout statement, not all bytes are sent.
From what I have understood, the cout statement works as a delay between packets, but I still cannot understand why this happens, since I am supposedly using only half of the available speed ...
I hope I was clear, and I look forward to a reply.
UPDATE
Just to summarize: I changed the baud rate to 250000, where the behavior is the same as at 125000. I also implemented hardware flow control by checking the CTS signal. When the XBees are in transparent mode, I need a delay of around 150 µs between characters; the same goes for API mode. The difference at 125000 baud in API mode was that the needed delay was enough between each data packet, but at 250000 the delay is needed between each byte that I send. If I do the above, everything goes well.
The next thing I did was to plug both XBees into my PC in transparent mode. I went to the terminal tab of the XCTU software, where I chose "assemble packet" and sent around 3000 bytes to the other XBee. The result was the same: the second XBee received about 1500 bytes, and then, each time I sent one more byte from the first to the second, the "lost" bytes were delivered in chunks of about 1000.
So, does anyone know what I am doing wrong?
You should connect the /CTS pin from the XBee module into the Raspberry Pi, and have your routine stop sending data when the XBee de-asserts it.
At higher baud rates, it's possible to stream data into the XBee module faster than it can send to the remote module. The local XBee module uses the /CTS pin to notify the host when its buffers are almost full and the host should stop sending. People refer to this as hardware flow control.
It may be necessary to modify the serial driver on the Raspberry Pi to make use of that signal -- it should pause the transmit buffer when de-asserted, and automatically resume sending when re-asserted.
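If you end up doing the serial I/O yourself instead of patching the driver, hardware flow control is mostly one termios flag. A minimal sketch, assuming the XBee's /CTS line is wired to the Pi's CTS pin and the UART's flow-control pins are enabled (on a Raspberry Pi that typically requires a device-tree overlay); /dev/ttyAMA0 stands in for whatever port the XBee is on:

// POSIX serial port with RTS/CTS hardware flow control enabled.
#include <cstdio>
#include <fcntl.h>
#include <termios.h>
#include <unistd.h>

int main() {
    int fd = open("/dev/ttyAMA0", O_RDWR | O_NOCTTY);
    if (fd < 0) { perror("open"); return 1; }

    termios tio{};
    tcgetattr(fd, &tio);
    cfmakeraw(&tio);
    // 125000 baud is non-standard on Linux; B115200 is the nearest standard
    // rate (custom divisors are possible but driver-specific).
    cfsetispeed(&tio, B115200);
    cfsetospeed(&tio, B115200);
    tio.c_cflag |= CRTSCTS | CREAD | CLOCAL;  // CRTSCTS = RTS/CTS flow control
    tcsetattr(fd, TCSANOW, &tio);

    // With CRTSCTS set, write() blocks while the XBee de-asserts /CTS, so
    // the kernel paces the stream instead of overrunning the module's buffer.
    const unsigned char frame[100] = {0};  // placeholder 100-byte API frame
    write(fd, frame, sizeof frame);

    close(fd);
    return 0;
}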