Maximum MAX_HEADER_SIZE under JBoss 7.1.1.Final

What is the maximum MAX_HEADER_SIZE value we can set in standalone.xml under JBoss 7.1.1.Final?
I checked with data of size 5 MB, and it is accepted. But is there a maximum length?

Related

Tomcat default value for socket.soKeepAlive

I am trying to debug an issue related to keep-alives/connection resets and found that the Tomcat documentation says:
socket.soKeepAlive:
(bool) Boolean value for the socket's keep alive setting (SO_KEEPALIVE). JVM default used if not set.
This is not set by the application I am debugging. Is there a way to figure out the default the jvm is using? (For instance by inspecting a system property?)
I cannot verify this by inspecting the actual keep-alive behavior, since I don't have access to the VM.
Answering my own question based on further research and experimentation.
Default value of socket.soKeepAlive: From the JVM docs for SocketOptions.SO_KEEPALIVE:
The initial value of this socket option is FALSE. The socket option may be enabled or disabled at any time.
Also to note:
When the SO_KEEPALIVE option is enabled the operating system may use a keep-alive mechanism to periodically probe the other end of a connection
From what I understood, this means that by default Tomcat will not probe clients to check whether an established connection is still active.
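If you have shell access to a similar machine, the kernel's default for a fresh TCP socket can be checked directly. A minimal C sketch, on the assumption that the JVM simply leaves SO_KEEPALIVE at the OS default unless the application sets it:

/* Query the default SO_KEEPALIVE value on a freshly created TCP socket.
 * A fresh Java socket inherits this same kernel default (normally 0, off). */
#include <stdio.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void) {
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) { perror("socket"); return 1; }

    int keepalive = -1;
    socklen_t len = sizeof(keepalive);
    if (getsockopt(fd, SOL_SOCKET, SO_KEEPALIVE, &keepalive, &len) < 0) {
        perror("getsockopt");
        close(fd);
        return 1;
    }
    printf("default SO_KEEPALIVE: %d\n", keepalive);  /* expect 0 */
    close(fd);
    return 0;
}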
Default value of keepAliveTimeout: The default is to use the value that has been set for the connectionTimeout attribute.
In my case this was not being reflected: connectionTimeout was set to 10 seconds, but the Tomcat responses still had Keep-Alive headers set to only 5 seconds.
However, I found that the application authors also set an attribute called socket.soTimeout to 5 seconds which tomcat describes as:
This is equivalent to standard attribute connectionTimeout.
I found that when both connectionTimeout and socket.soTimeout are set, socket.soTimeout takes precedence, since changing the socket.soTimeout value caused the values returned in the Keep-Alive headers to change accordingly.
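For illustration, a Connector along the lines of what the application apparently had; the values are the ones from this question and the attribute names come from the Tomcat connector documentation, but the snippet itself is hypothetical:

<!-- connectionTimeout asks for 10 s, but socket.soTimeout takes
     precedence, so the Keep-Alive response headers advertised 5 s -->
<Connector port="8080" protocol="HTTP/1.1"
           connectionTimeout="10000"
           socket.soTimeout="5000" />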

SB37 JCL error: too small a size?

The following space allocation is giving me an SB37 JCL error. The COBOL record size of the output file is 100 bytes and the LRECL is 100 bytes. What do you think is causing this error? I have tried increasing the size to 500,100 and still get the same error.
Code:
//OUTPUT1 DD DSN=A.B.C,DISP=(NEW,CATLG,DELETE),
// DCB=(LRECL=100,BLKSIZE=,RECFM=FBM),
// SPACE=(CYL,(10,5),RLSE)
Try to increase not only the space, but the volume count as well.
Include VOL=(,,,#) in your DD, where # is the number of volumes you want to allow.
Ex: SPACE=(CYL,(10,5),RLSE),VOL=(,,,3) - allows up to 3 volumes.
Additionally, you can increase the size, but try to stay within reasonable limits :)
The documentation for B37 says the application programmer should respond as indicated for message IEC030I. The documentation for IEC030I says, in part...
Probable user error. For all cases, allocate as many units as volumes
required.
...as noted in another answer. However, be advised that the documentation for the VOL parameter of the DD statement says...
If you omit the volume count or if you specify 1 through 5, the system
allows up to five volumes; if you specify 6 through 20, the system
allows 20 volumes; if you specify a count greater than 20, the system
allows 5 plus a multiple of 15 volumes. You can override the maximum
volume count in data class by using the volume-count subparameter. The
maximum volume count for an SMS-managed mountable tape data set or a
Non-managed tape data set is 255.
...so for DASD allocations you are best served specifying a volume count greater than 5 (at least).
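For example, reusing the DD from the question (the count of six is just an illustration of "greater than 5"):

//OUTPUT1 DD DSN=A.B.C,DISP=(NEW,CATLG,DELETE),
// SPACE=(CYL,(10,5),RLSE),VOL=(,,,6)

That tells the system it may extend A.B.C across up to six volumes if the primary and secondary extents fill.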
//OUTPUT1 DD DSN=A.B.C,DISP=(NEW,CATLG,DELETE),
// DCB=(LRECL=100,BLKSIZE=,RECFM=FBM),
// SPACE=(CYL,(10,5),RLSE)
Try this instead. Notice that the secondary will take advantage of DSNTYPE=LARGE, whereas without that parameter the largest secondary that makes any sense is < 300.
Oh, and if this is indeed from a COBOL program, make sure that the FD says "BLOCK 0"! If it isn't "BLOCK 0" then you might not even need to change your JCL, because the dataset wasn't really fixed blocked (machine); it was merely fixed and unblocked, so the space would almost never be enough. And finally, you may wish to revisit why you have the M in the RECFM to begin with.
Notice also that I took out the LRECL, the BLKSIZE and the RECFM: the FD in the COBOL program is all you need, and putting them in the JCL is not only redundant but dangerous, because any change would then have to be made in multiple places.
//OUTPUT1 DD DSN=A.B.C,DISP=(NEW,CATLG,DELETE),
// DSNTYPE=LARGE,UNIT=(SYSALLDA,59),
// SPACE=(CYL,(10,1000),RLSE)
There is a limit of 65,535 tracks for a dataset on one volume. So if you specify a SPACE that exceeds that limit, the system will simply ignore it.
You can raise this limit to 16,777,215 tracks by adding the DSNTYPE=LARGE parameter.
Or you can make your dataset multi-volume by adding VOL=(,,,3).
You can also use the DATACLAS=xxxx parameter here, but first you need to find one. The easy way is to contact your local Storage Team and ask for one. Or, if you are familiar with ISPF navigation, you can enter the ISMF;4 command to open a panel and
use the parameters below before hitting Enter:
CDS Name . . . . . . 'ACTIVE'
Data Class Name . . *
It should produce a list of all available data classes. Find the one that suits you (has a high enough volume count and does not limit primary and secondary space).

Why the file starting offset in mmap() must be a multiple of the page size

In the mmap() manpage:
Its prototype is:
void *mmap(void *start, size_t length, int prot, int flags, int fd, off_t offset);
and description:
The mmap() function asks to map 'length' bytes starting at offset 'offset'
from the file (or other object) specified by the file descriptor fd into
memory, preferably at address 'start'.
Specifically, for the last argument:
'offset' should be a multiple of the page size as returned by getpagesize(2).
From what I have tried, offset MUST be a multiple of the page size (for example, 4096 on my Linux); otherwise, mmap() fails with Invalid argument (EINVAL). offset is a file offset, so why must it be a multiple of the virtual memory system's page size?
The simple answer: to make it fast. The more complex answer: whenever you access a location within the mapped memory, the OS must make sure that this location is filled with the contents of the file. But the OS can only detect that you accessed a memory page, not a single location. So it creates a simple relation between offsets in the file and memory pages, and whenever you touch a page, the corresponding part of the file is loaded. To keep these calculations fast, it restricts you to starting at certain offsets.
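A minimal C sketch of the standard workaround implied by this: round the desired offset down to a page boundary, map from there, and add the remainder back when computing the pointer (assumes the file is large enough to contain the requested byte):

/* Map a file so that an arbitrary byte offset becomes accessible,
 * honoring mmap()'s requirement that 'offset' be page-aligned. */
#include <stdio.h>
#include <stdlib.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/mman.h>

int main(int argc, char **argv) {
    if (argc != 3) {
        fprintf(stderr, "usage: %s <file> <byte-offset>\n", argv[0]);
        return 1;
    }
    int fd = open(argv[1], O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    long page = sysconf(_SC_PAGESIZE);          /* e.g. 4096 */
    off_t want = (off_t)atoll(argv[2]);         /* offset we care about */
    off_t aligned = want & ~((off_t)page - 1);  /* round down to a page */
    size_t len = (size_t)(want - aligned) + 1;

    /* Passing 'want' itself would fail with EINVAL unless it happens
     * to be page-aligned; 'aligned' always satisfies the requirement. */
    char *base = mmap(NULL, len, PROT_READ, MAP_PRIVATE, fd, aligned);
    if (base == MAP_FAILED) { perror("mmap"); return 1; }

    printf("byte at offset %lld: 0x%02x\n",
           (long long)want, (unsigned)(unsigned char)base[want - aligned]);

    munmap(base, len);
    close(fd);
    return 0;
}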

What is the default size used by logrotate?

I looked at the man pages, searched here, googled, etc.
Does anyone know the default size used by logrotate?
What reference can you cite stating the default size?
I know I can set size specifically in my config files.
There isn't a default size, because size is an optional directive. logrotate rotates based on whatever criteria you give it, e.g., file size or time period (daily, weekly, etc.). So it's not that it has a default file size (except maybe something like size 0); size is simply not a precondition for a file being rotated.
So far as I know, there is no default size for logrotate; no need to waste time on this. Set the value yourself in the config file.
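For illustration, a hypothetical /etc/logrotate.d/myapp entry; here size is an explicit trigger you opt into, not something logrotate assumes:

# Rotate only once the log grows past 10 MB; without this 'size' line,
# rotation would be driven purely by the other directives and the schedule
# cron runs logrotate on; there is no implicit default size.
/var/log/myapp.log {
    size 10M
    rotate 4
    compress
    missingok
    notifempty
}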

What is the max size of AF_UNIX datagram message in Linux?

Currently I'm hitting a hard limit of 130688 bytes. If I try and send anything larger in one message I get a ENOBUFS error.
I have checked the net.core.rmem_default, net.core.wmem_default, net.core.rmem_max, net.core.wmem_max, and net.unix.max_dgram_qlen sysctl options and increased them all but they have no effect because these deal with the total buffer size not the message size.
I have also set the SO_SNDBUF and SO_RCVBUF socket options, but this has the same issue as above. The default socket buffer sizes are set based on those sysctl defaults anyway.
I've looked at the kernel source where ENOBUFS is returned in the socket stack, but it wasn't clear to me where it was coming from. The only places that seem to return this error have to do with not being able to allocate memory.
Is the max size actually 130688? If not can this be changed without recompiling the kernel?
AF_UNIX SOCK_DGRAM/SOCK_SEQPACKET datagrams need contiguous kernel memory. Contiguous physical memory is hard to find, and the allocation fails, logging something similar to this in the kernel log:
udgc: page allocation failure. order:7, mode:0x44d0
[...snip...]
DMA: 185*4kB 69*8kB 34*16kB 27*32kB 11*64kB 1*128kB 1*256kB 0*512kB 0*1024kB 0*2048kB 0*4096kB = 3788kB
Normal: 13*4kB 6*8kB 100*16kB 62*32kB 24*64kB 10*128kB 0*256kB 1*512kB 0*1024kB 0*2048kB 0*4096kB = 7012kB
[...snip...]
unix_dgram_sendmsg() calls sock_alloc_send_skb() [lxr1], which calls sock_alloc_send_pskb() with data_len = 0 and header_len = size of datagram [lxr2]. sock_alloc_send_pskb() allocates header_len from "normal" skbuff buffer space, and data_len from scatter/gather pages [lxr3]. So it looks like AF_UNIX sockets don't support scatter/gather on current Linux.
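A small C sketch to probe the limit on a given kernel; this assumes, per the analysis above, that the per-datagram cap is governed by the send buffer size (raise SO_SNDBUF / net.core.wmem_max to see where the contiguous allocation itself starts failing with ENOBUFS):

/* Double the payload until a single AF_UNIX datagram is refused:
 * typically EMSGSIZE once the size exceeds the send buffer, or
 * ENOBUFS when contiguous kernel memory cannot be allocated. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <errno.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void) {
    int sv[2];
    if (socketpair(AF_UNIX, SOCK_DGRAM, 0, sv) < 0) {
        perror("socketpair");
        return 1;
    }
    for (size_t size = 1024; ; size *= 2) {
        char *buf = malloc(size);
        if (!buf) { perror("malloc"); return 1; }
        memset(buf, 'x', size);

        if (send(sv[0], buf, size, 0) < 0) {
            printf("send of %zu bytes failed: %s\n", size, strerror(errno));
            free(buf);
            break;
        }
        /* Drain the datagram so we test the per-message limit,
         * not the receive-queue length (net.unix.max_dgram_qlen). */
        recv(sv[1], buf, size, 0);
        printf("%zu bytes: ok\n", size);
        free(buf);
    }
    close(sv[0]);
    close(sv[1]);
    return 0;
}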