Is the inter-packet gap a field in an Ethernet packet? - ethernet

On Wikipedia, the image of the Ethernet frame includes a field called "inter-packet gap". Other sites I've looked at don't show that field. I don't understand whether the gap is predefined by the protocol or whether it can be changed using this field.

The default gap between packets is 96 "bit times" (the time taken to send 96 bits on the medium used). At 10 Mbit/s that works out to 9.6 µs, at 100 Mbit/s to 0.96 µs, and at 1 Gbit/s to 96 ns.
This is sometimes too big or too small in very specialised circumstances, so organisations are allowed to override this by specifying their own.
By including this field, you're effectively telling the recipient, "I'm not going to send anything for n bit times now, please do the same".
Apparently, it's too small for some Ethernet connections on MS Windows, where some network drivers let you change it.

Can a holding register in the middle of readable holding registers be an "IllegalDataAddress"?

While unit testing a Modbus driver I'm writing I experienced the following:
I can read holding registers 0 to 1022.
I can't read holding registers 1022 to 13000. I get an illegal data address error code.
I can read holding registers 13000 to 25000.
I would have expected devices supporting Modbus to behave in one of the following two ways:
Every device supports the full range of addresses from 0x0000 to 0xFFFF.
Every device supports a range of addresses from 0x0000 to N, where N < 0xFFFF.
Do any of you more experienced people know:
Is Assumption 1 or 2 about the expected behavior of Modbus devices correct?
Are there other reasons besides being out of bounds for an address to be an illegal data address?
Both assumptions are false. It's completely up to the device to decide which registers to support. Some devices are nice and support a wide range of registers, even if they're unused. Most that I've used don't, though. They'll use groups of registers, like your device.
Also, not all Modbus devices support all the Modbus function codes. Just because it's defined by the Modbus protocol standard doesn't necessarily mean the device will support it.
The key thing is to stick to the addresses defined in the device's manual. The manual is usually required reading; otherwise you'll just be guessing at the ranges, units, and scaling.
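For completeness, here is a minimal C sketch of how a client driver can recognise this situation. The helper name and the bare-PDU framing (no CRC or MBAP header) are my own simplifications: in a Modbus exception response the device echoes the function code with the high bit set and follows it with a one-byte exception code, where 0x02 means ILLEGAL DATA ADDRESS.
    #include <stdint.h>
    #include <stddef.h>

    /* Sketch only: classify the response PDU to a Read Holding Registers (0x03)
     * request. Exception code 0x02 (ILLEGAL DATA ADDRESS) is what you get when
     * the request touches registers the device simply does not map. */
    #define MB_FC_READ_HOLDING        0x03
    #define MB_EXC_ILLEGAL_DATA_ADDR  0x02

    typedef enum { MB_READ_OK, MB_READ_UNMAPPED, MB_READ_OTHER_ERROR } mb_read_result;

    static mb_read_result classify_read_response(const uint8_t *pdu, size_t len)
    {
        if (len < 2)
            return MB_READ_OTHER_ERROR;

        if (pdu[0] == MB_FC_READ_HOLDING)              /* normal reply          */
            return MB_READ_OK;

        if (pdu[0] == (MB_FC_READ_HOLDING | 0x80)) {   /* exception reply       */
            if (pdu[1] == MB_EXC_ILLEGAL_DATA_ADDR)
                return MB_READ_UNMAPPED;               /* unsupported registers */
            return MB_READ_OTHER_ERROR;
        }
        return MB_READ_OTHER_ERROR;
    }
In practice that means reading each documented block with its own request, rather than sweeping 0x0000 to 0xFFFF and hoping the device tolerates it.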

STM32 - How do I handle more than 14 filters?

I am currently using the STM32F103RD processor, which has 14 available filters on the CAN1 bus.
I am connecting to a J1939 bus, and I need to monitor around 20 PGNs. How do I handle setting up the 20 PGNs with only 14 filters available?
These 20 PGNs are not sequential, so I can't set up a specific range to allow. These 20 may be all over the place.
You have 14 filter banks, but each of those banks can match two distinct PGNs using identifier list mode (FBMx=1). So you can actually match up to 28 PGNs with this part! See section 24.7.4 of the STM32F10x reference manual (page 655) for details.
If you need to match more than 28 PGNs, you have two options:
Pick sets of three or more PGNs and match each of those sets with a single mask-mode filter bank. To reduce the number of unwanted messages that are matched, you will need to pick the sets of matched PGNs carefully (i.e., keep the number of "don't care" bits in the resulting mask to a minimum); a sketch of this mask derivation follows the list below. Since J1939 is relatively slow, though, filtering some unwanted messages out in software shouldn't be a huge burden.
Use a connectivity-line STM32 part, such as the STM32F107VC. These parts have double the number of CAN filter banks.
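To illustrate the first option, here is a rough C sketch (the helper name is mine) that derives a single ID/mask pair covering a group of 29-bit identifiers: the mask keeps a 1 wherever every identifier in the group agrees and a 0 ("don't care") wherever they differ, so the fewer differing bits, the fewer unwanted frames slip through.
    #include <stdint.h>
    #include <stddef.h>

    /* Sketch: build one mask-mode filter covering a whole group of 29-bit IDs.
     * Assumes count >= 1. The returned values are raw 29-bit ID/mask pairs and
     * still need to be shifted into the bxCAN register layout (see below). */
    static void make_group_filter(const uint32_t *ids, size_t count,
                                  uint32_t *filter_id, uint32_t *filter_mask)
    {
        uint32_t differing = 0;                      /* bits that differ anywhere  */
        for (size_t i = 1; i < count; i++)
            differing |= ids[0] ^ ids[i];

        *filter_id   = ids[0];                       /* any member works as the ID */
        *filter_mask = ~differing & 0x1FFFFFFFu;     /* 1 = must match, 0 = ignore */
    }
Anything that still gets through but isn't wanted can be discarded in the receive interrupt, which, as noted above, is cheap at J1939 bus rates.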
I got it working using list mode. The tricky part is that the PGN needs to be shifted around into a 32-bit header so that it matches the "CAN_RI0R" register layout and triggers an interrupt. For example, I want to receive Engine RPM, which is PGN 61444 (0xF004). This needs to be converted into the following header:
0xC7802004
How to compute this:
Start with PGN 61444 (0xF004)
Shift it left by 8 bits so that it is in the correct J1939 Frame Format location.
OR it with 0x18000000 (this puts priority 6 into bits 28:26 of the 29-bit identifier)
Shift it left by 3 bits so that it matches the STM32 register locations for STID/EXID.
OR it with 0x4 to set the "Extended" bit indicating that it is an extended header.
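As a sketch in C, the whole recipe collapses to a couple of shifts and ORs (the function and parameter names are mine; priority 6 and source address 0x00 reproduce the 0xC7802004 example):
    #include <stdint.h>

    /* Sketch: build the 32-bit value used by the bxCAN filter registers (same
     * layout as CAN_RIxR: STID/EXID in bits 31:3, IDE in bit 2) for a J1939 PGN. */
    static uint32_t j1939_pgn_to_filter(uint16_t pgn, uint8_t priority, uint8_t source_addr)
    {
        uint32_t ext_id = ((uint32_t)priority << 26)   /* priority into bits 28:26    */
                        | ((uint32_t)pgn      << 8)    /* PGN into the J1939 position */
                        |  (uint32_t)source_addr;      /* source address in bits 7:0  */

        return (ext_id << 3) | 0x4u;                   /* shift to STID/EXID, set IDE */
    }

    /* j1939_pgn_to_filter(0xF004, 6, 0x00) == 0xC7802004 */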
The only caveat with this method, versus mask mode, is that as far as I know you can only receive messages from a single ECU, since the source address is included in the identifier and must match exactly. In this example that means you can only receive messages from the Engine ECU (source address 0x00): the identifier is 0x18F004XX, where XX is the source address. When using mask mode, you can mark the source-address bits as "don't care" and receive the PGN from any ECU.

How can I limit the number of blocks written in a Write_10 command?

I have a product that is basically a USB flash drive based on an NXP LPC18xx microcontroller. I'm using a library provided by the manufacturer (LPCOpen) that handles the USB MSC and the SD card media (which is where I store data).
Here is the problem: internally, the LPC18xx has a 64 kB buffer (limited by hardware) used to cache reads/writes, which means it can only cache up to 128 blocks (512 B each). The SCSI Write-10 command has a total-blocks field that can be up to 256 blocks (128 kB). When originally testing the product on Windows 7, it never wrote more than 128 blocks at a time, but when tested on Linux it sometimes writes more than 128 blocks, which causes the microcontroller to crash.
Is there a way to tell the host OS not to request more than 128 blocks? I see references [1] to a Read-Block-Limit command (05h), but it doesn't seem to be widely supported. Also, what sense key would I return on the Write-10 command to tell Linux the write is too large? I also see references to a Block Limits VPD page in some device spec sheets but cannot find much documentation about how it is implemented.
[1]https://en.wikipedia.org/wiki/SCSI_command
Let me offer a disclaimer up front that this is what you SHOULD do, but none of this may work. A cursory search of the Linux SCSI driver didn't show me what I wanted to see. So, I'm not at all sure that "doing the right thing" will get you the results you want.
Going by the book, you've got to do two things: implement the Block Limits VPD page and handle too-large transfer sizes in WRITE and READ.
First, implement the Block Limits VPD page, which you can find in late revisions of SBC-3 floating around on the Internet (like this one: http://www.13thmonkey.org/documentation/SCSI/sbc3r25.pdf). It's probably worth going to the t10.org site, registering, and then downloading the last revision (http://www.t10.org/cgi-bin/ac.pl?t=f&f=sbc3r36.pdf).
The Block Limits VPD page has a maximum transfer length field that specifies the maximum number of blocks that can be transferred by all the READ and WRITE commands, and basically anything else that reads or writes data. Of course the downside of implementing this page is that you have to make sure that all the other fields you return are correct!
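As a rough illustration (the field offsets are from my reading of SBC-3, so check them against the spec before trusting this), the page for your case could look something like the following in C, advertising a 128-block maximum transfer length:
    #include <stdint.h>
    #include <string.h>

    /* Sketch: minimal Block Limits VPD page (page code B0h). Only the MAXIMUM
     * TRANSFER LENGTH field is filled in; the other fields are left at zero,
     * which per SBC-3 generally means "not reported". */
    #define MAX_XFER_BLOCKS 128u

    static int build_block_limits_vpd(uint8_t *buf, size_t buf_len)
    {
        if (buf_len < 64)
            return -1;

        memset(buf, 0, 64);
        buf[0] = 0x00;   /* peripheral qualifier/type: direct-access block device */
        buf[1] = 0xB0;   /* page code: Block Limits                               */
        buf[2] = 0x00;   /* page length, MSB                                      */
        buf[3] = 0x3C;   /* page length, LSB: 60 bytes of parameters follow       */

        /* MAXIMUM TRANSFER LENGTH, bytes 8..11, big-endian, in logical blocks */
        buf[8]  = (uint8_t)(MAX_XFER_BLOCKS >> 24);
        buf[9]  = (uint8_t)(MAX_XFER_BLOCKS >> 16);
        buf[10] = (uint8_t)(MAX_XFER_BLOCKS >> 8);
        buf[11] = (uint8_t)(MAX_XFER_BLOCKS);

        return 64;       /* bytes available to return for the INQUIRY             */
    }
You will also want to list B0h in the Supported VPD Pages page (00h) so the host knows to ask for it.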
Second, when handling READ and WRITE, if the command's transfer length exceeds your maximum, respond with an ILLEGAL REQUEST key, and set the additional sense code to INVALID FIELD IN CDB. This behavior is indicated by a table in the section that describes the Block Limits VPD, but only in late revisions of SBC-3 (I'm looking at 35h).
You might just start with returning INVALID FIELD IN CDB, since it's the easiest course of action. See if that's enough?
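For the second part, the check itself is small; something along these lines (the set_sense() helper is a stand-in for however the LPCOpen MSC glue queues sense data, not a real library call):
    #include <stdint.h>
    #include <stdbool.h>

    #define MAX_XFER_BLOCKS 128u

    /* Placeholder for whatever mechanism your MSC layer uses to report sense
     * data on the next REQUEST SENSE; not a real LPCOpen function. */
    static void set_sense(uint8_t key, uint8_t asc, uint8_t ascq)
    {
        (void)key; (void)asc; (void)ascq;
    }

    /* Sketch: guard a WRITE(10) CDB. The transfer length is a big-endian 16-bit
     * field in CDB bytes 7..8; anything over 128 blocks gets ILLEGAL REQUEST /
     * INVALID FIELD IN CDB and the command should finish with CHECK CONDITION. */
    static bool write10_length_ok(const uint8_t *cdb)
    {
        uint16_t blocks = (uint16_t)((cdb[7] << 8) | cdb[8]);

        if (blocks > MAX_XFER_BLOCKS) {
            set_sense(0x05, 0x24, 0x00);   /* ILLEGAL REQUEST, INVALID FIELD IN CDB */
            return false;
        }
        return true;
    }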

About operating systems, about page-table entry status bits

In the movie The Social Network, when Mark Zuckerberg was in class, the teacher asked this question:
Suppose we're given a computer with a 16-bit virtual address and a page size of 256 bytes. The system uses one-level page tables that start at address hex 400. Maybe you want DMA (Direct Memory Access) on your 16-bit system. Who knows? The first pages are reserved for hardware flags, etc. Assume page-table entries have eight status bits. The eight status bits would then be ...
Mark Zuckerberg answered:
One valid bit, one modified bit, one reference bit and five permission bits.
How did he get this?
http://chomaloma.blogspot.com.au/2011/02/social-network-inaccuracies-regarding.html
That does explain it a little.
Intel nomenclature in parentheses. The 'valid' (present), 'modified' (dirty) and 'reference' (accessed) bits are the minimum set of bits you need for a demand paging manager and MMU.
The 'valid' (present) bit is used by the MMU to know whether the page is mapped to a valid physical address.
The 'modified' (dirty) bit is used by the demand paging manager to determine if the page being evicted needs to be written to backing media. As accessing backing media can be considered an expensive operation, you really want to keep this to a minimum--especially when writing to it as that is generally slower than reading from it.
The 'reference' (accessed) bit is useful to the demand paging manager to figure out how to age the pages it controls. You don't want to evict the most frequently used pages as that would require saving and/or loading them repeatedly from backing store (which has already been stated as SLOW).
The remaining five bits are gravy. They are free to use as permission and/or option bits. For example, can the page be accessed by supervisor and/or user threads? Is the page available for write, or is it read-only? What is the caching strategy to be used on the page?
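Purely as an illustration (this is not the layout of any real MMU), a 16-bit page-table entry for the machine in the question could be sketched in C like this, with an 8-bit frame number plus the eight status bits described above:
    #include <stdint.h>

    /* Illustrative only: one plausible 16-bit PTE for 256-byte pages. */
    typedef struct {
        uint16_t frame      : 8;  /* physical frame number                       */
        uint16_t valid      : 1;  /* present: mapping refers to a real frame     */
        uint16_t modified   : 1;  /* dirty: write back to backing store on evict */
        uint16_t referenced : 1;  /* accessed: input to the page-aging policy    */
        uint16_t perm       : 5;  /* permission/option bits: user/supervisor,    */
                                  /* read-only, caching strategy, and so on      */
    } pte_t;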
Hope this helps.
Sparky
How did he get the answer? That is just movie BS.
If you take the number of bits in the address (16) and subtract the number of bits used to address a byte within a page (8, since a page is 256 bytes), you get 8. With a 16-bit page-table entry holding an 8-bit frame number, that leaves 8 bits free for the processor to use as system status bits.
With that information, he could identify the number of system status bits.
The usage of those bits is another story. The allocation of system status bits is system dependent. Maybe such machines exist, but I don't know of any 16-bit virtual-addressing system, so he's not referring to any specific type of system.
A reference bit is not used by all systems (e.g., VMS), so that's not even mandatory.
Hollywood magic.

Send TCP/IP message from PLC to PC using Ladder Program

Consider the following Ladder Program that checks if a connection is enabled (A202.00) and then sends a message from the PLC to the PC.
The documentation (Omron CX-Programmer) severely lacks any explanation of the programming conventions. What I do not understand is:
To send a message from one node to another, I should need to specify the receiver ID. It seems the function block does not have an option where I can insert an IP address. Am I supposed to MOV an IP address to a DM address (D300) and then use it? If that's the case, how (an IP address has dots between its 4 bytes)?
Can someone please explain what S (first source word), D (first destination word) and C (first control word) are. Aren't they just memory addresses, e.g. sending the contents of one memory address to another?
[EDIT]
What am I trying to do?
I am trying to interface a measuring gauge (controlled through Ethernet by a PC/C# application) to a robotic system (no RS232 or serial, no TCP/IP, only the simplest I/O points) with an Omron PLC. When the gauge completes a measurement, the C# app sends a command to the Omron PLC which, according to the command received, switches an output ON or OFF, which in turn drives a voltage to the robot's I/O port.
Should I use FINS? Which PLC functions/protocols do I need to know to do this? I do not know, so I am testing every function from the documentation. So far, zero progress.
1) All addressing information is encapsulated in the five control words (C to C+4). C, the "First Control Word", is the pointer to the first word in this table of five words, which you must have stored somewhere in your PLC to set up the communication.
2) First source word points to the first word in your PLC you wish to send. First destination word points to the first address in the PLC/device you wish to send to. In the example, the first control word specifies that 10 words should be sent: you point to the first one and it will send that one plus the next nine addresses as well.
To do this you have to use FINS communication - the PC stores a memory structure similar to the PLC's (CIO, DM, etc.) called Event Memory, and these are the addresses in the PC you are pointing to. The PC gets a FINS node number and address just like a PLC would - no IP addresses are involved (see the FINS Manual). FINS is old, however, and has been superseded by things like Sysmac Gateway.
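To make the "no IP addresses" point concrete, here is a hedged C sketch of the raw FINS command frame you would end up sending (over UDP port 9600) for a MEMORY AREA READ of a few DM words. The byte layout is from my reading of the FINS manual, so verify it there; the function and parameter names are mine.
    #include <stdint.h>
    #include <string.h>

    /* Sketch: FINS MEMORY AREA READ (command code 01 01) of word_count DM words
     * starting at DM dm_start. Note the destination is a FINS network/node/unit
     * triple, not an IP address. */
    static size_t build_fins_dm_read(uint8_t *buf,
                                     uint8_t dest_node, uint8_t src_node,
                                     uint16_t dm_start, uint16_t word_count)
    {
        uint8_t frame[] = {
            0x80,                         /* ICF: command, response requested  */
            0x00,                         /* RSV                               */
            0x02,                         /* GCT: permissible gateway count    */
            0x00, dest_node, 0x00,        /* DNA, DA1 (dest node), DA2 (CPU)   */
            0x00, src_node,  0x00,        /* SNA, SA1 (our node),  SA2         */
            0x01,                         /* SID: echoed back in the response  */
            0x01, 0x01,                   /* MRC, SRC: MEMORY AREA READ        */
            0x82,                         /* memory area code: DM, word access */
            (uint8_t)(dm_start >> 8), (uint8_t)dm_start, 0x00,   /* address, bit */
            (uint8_t)(word_count >> 8), (uint8_t)word_count      /* item count   */
        };

        memcpy(buf, frame, sizeof frame);
        return sizeof frame;
    }
The CX-Compolet route mentioned below hides all of this framing, which is a big part of its appeal from C#.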
There are much better ways of communicating between PLC/PC, however, depending on what you are trying to do. Are you trying to write an HMI? If so, what language are you using?
Edit :
If you're using C#, I highly recommend you look into Sysmac Gateway and CX-Compolet. This is probably the most flexible, simple, and extensible way to get .NET working with Omron PLCs. If at all possible, however, an even better way might be to have the measurement unit communicate directly with the PLC via hardware I/O (relays, DIO, etc.).
CX-Compolet, Sysmac Gateway link:
http://www.ia.omron.com/product/family/63/index_l_u.html