SB37 JCL error: too small a size? - jcl

The following space allocation is giving me an SB37 JCL error. The COBOL record size of the output file is 100 bytes and the LRECL is 100 bytes. What do you think is causing this error? I have tried increasing the space to (500,100) and still get the same error.
Code:
//OUTPUT1 DD DSN=A.B.C,DISP=(NEW,CATLG,DELETE),
// DCB=(LRECL=100,BLKSIZE=,RECFM=FBM),
// SPACE=(CYL,(10,5),RLSE)

Try increasing not only the space, but the volume count as well.
Include VOL=(,,,#) in your DD, where # is the number of volumes you want to allow.
Ex: SPACE=(CYL,(10,5),RLSE),VOL=(,,,3) - allows 3 volumes.
Additionally, you can increase the size, but try to stay within reasonable limits :)

The documentation for B37 says the application programmer should respond as indicated for message IEC030I. The documentation for IEC030I says, in part...
Probable user error. For all cases, allocate as many units as volumes
required.
...as noted in another answer. However, be advised that the documentation for the VOL parameter of the DD statement says...
If you omit the volume count or if you specify 1 through 5, the system
allows up to five volumes; if you specify 6 through 20, the system
allows 20 volumes; if you specify a count greater than 20, the system
allows 5 plus a multiple of 15 volumes. You can override the maximum
volume count in data class by using the volume-count subparameter. The
maximum volume count for an SMS-managed mountable tape data set or a
Non-managed tape data set is 255.
...so for DASD allocations you are best served by specifying a volume count greater than 5 (at least).

//OUTPUT1 DD DSN=A.B.C,DISP=(NEW,CATLG,DELETE),
// DCB=(LRECL=100,BLKSIZE=,RECFM=FBM),
// SPACE=(CYL,(10,5),RLSE)
Try the DD statement below instead. Notice that the secondary allocation can take advantage of a large format dataset (DSNTYPE=LARGE), whereas without that parameter the largest secondary that makes any sense is less than 300.

Oh, and if the file is indeed written from a COBOL program, make sure that the FD says "BLOCK 0"! If it isn't "BLOCK 0" then you might not even need to change your JCL, because the dataset wasn't really fixed blocked: it was merely fixed and unblocked, so the space would almost never be enough. And finally, you may wish to revisit why you have the M in the RECFM to begin with.

Notice also that I took out the LRECL, the BLKSIZE and the RECFM. The FD in the COBOL program is all you need; putting them in the JCL as well is not only redundant but dangerous, because any change will now have to be made in multiple places.
//OUTPUT1 DD DSN=A.B.C,DISP=(NEW,CATLG,DELETE),
// DSNTYPE=LARGE,UNIT=(SYSALLDA,59),
// SPACE=(CYL,(10,1000),RLSE)

There is a limit of 65,535 tracks per volume. So if you specify a SPACE that exceeds that limit, the system will simply ignore it.
You can increase this limit to 16,777,215 tracks by adding the DSNTYPE=LARGE parameter.
Or you can specify that your dataset is multi-volume by adding VOL=(,,,3).
You can also use the DATACLAS=xxxx parameter here, but first you need to find a suitable one. The easy way is to contact your local Storage Team and ask for one. Or, if you are familiar with ISPF navigation, you can enter the ISMF;4 command to open a panel and use the parameters below before hitting Enter:
CDS Name . . . . . . 'ACTIVE'
Data Class Name . . *
It should produce a list of all available data classes. Find the one that suits you (has a sufficient volume count and does not limit primary and secondary space).

Related

How to use "Easy edge trace" and "edge trace distances" in ImageJ?

I have already installed both plugins but don't know how to use them for pod analysis. I need help with that, as I don't have a programming background. Also, can we use them for batch processing of images, in case I have more than 100 images?
Another approach, using a specially coded ImageJ macro, gives reasonable estimates of the widths and lengths of all pods in the sample image. You can access the macro code from here. Unzip the zip archive and drop the file "plantPodDimensions.ijm" onto the ImageJ main window. Then open the sample image and run the macro. The estimated pod dimensions appear in a table.
Specimen [right to left] Mean Pod Width [cm] Pod Length [cm]
OHiI7_pod-1 0.70±0.11 23.6
OHiI7_pod-2 0.59±0.09 22.3
OHiI7_pod-3 0.64±0.05 20.7
OHiI7_pod-4 0.41±0.04 20.5
OHiI7_pod-5 0.66±0.07 22.9
OHiI7_pod-6 0.68±0.10 24.4
OHiI7_pod-7 0.60±0.07 20.5
Of course it couldn't be tested whether the macro works as expected for images other than the sample image.
The plugin downloads include documents that explain how to use them. Batch processing is possible if the starting points of the traces are known or can somehow be determined by additional pre-processing steps (not trivial). Both plugins are macro-recordable. In any case, batch processing will require some macro code.
For the use case in question I would recommend performing the analyses via the GUI, not by batch processing. Coding a suitable macro would take more time than processing the 100+ images.

How to find the number of data mapped by mmap()?

If mmap() was used to read a file, how can I find the amount of data mapped by mmap()?
float *map = (float *)mmap(NULL, FILESIZE, PROT_READ, MAP_SHARED, fd, 0);
The mmap system call does not read data. It just maps the data into your virtual address space (by indirectly configuring your MMU), and that virtual address space is changed by a successful mmap. Later, your program will read that data (or not). In your example, your program might later read map[356] if mmap has succeeded (and you should test for failure by comparing the result against MAP_FAILED).
Read carefully the documentation of mmap(2). The second argument (in your code, FILESIZE) defines the size of the mapping (in bytes). You might check that it is a multiple of sizeof(float) and divide it by sizeof(float) to get the number of elements in map that are meaningful and obtained from the file. The size of the mapping is rounded up to a multiple of pages. The man page of mmap(2) says:
A file is mapped in multiples of the page size. For a file that is
not a multiple of the page size, the remaining memory is zeroed when
mapped, and writes to that region are not written out to the file.
Data is mapped in pages. A page is usually 4096 bytes. Read more about paging.
The page size is returned by getpagesize(2) or by sysconf(3) with _SC_PAGESIZE (which usually gives 4096).
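Putting those pieces together, here is a minimal sketch (the file name data.bin and the assumption that it contains raw floats are just illustrations) that gets the file size with fstat, maps the file, and derives both the number of meaningful elements and the page-rounded size of the mapping:

#include <stdio.h>
#include <stdlib.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/mman.h>
#include <sys/stat.h>

int main(void)
{
    const char *path = "data.bin";   /* hypothetical file of raw floats */
    int fd = open(path, O_RDONLY);
    if (fd == -1) { perror("open"); return EXIT_FAILURE; }

    struct stat st;
    if (fstat(fd, &st) == -1) { perror("fstat"); return EXIT_FAILURE; }
    size_t filesize = (size_t)st.st_size;      /* what the answer calls FILESIZE */

    if (filesize % sizeof(float) != 0)
        fprintf(stderr, "warning: size is not a multiple of sizeof(float)\n");

    float *map = mmap(NULL, filesize, PROT_READ, MAP_SHARED, fd, 0);
    if (map == MAP_FAILED) { perror("mmap"); return EXIT_FAILURE; }

    size_t nelems = filesize / sizeof(float);   /* meaningful floats from the file */
    size_t page = (size_t)sysconf(_SC_PAGESIZE);
    size_t mapped = ((filesize + page - 1) / page) * page;  /* mapping rounded up to pages */
    printf("%zu floats are meaningful; %zu bytes are mapped\n", nelems, mapped);

    munmap(map, filesize);
    close(fd);
    return EXIT_SUCCESS;
}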
Consider reading some book like Operating Systems: Three Easy Pieces (freely downloadable) to understand how virtual memory works and what is a memory mapped file.
On Linux, the /proc/ filesystem (see proc(5)) is very useful to understand the virtual address space of some process: try cat /proc/$$/maps in your terminal, and read more to understand its output. For a process of pid 1234, try also cat /proc/1234/maps
From inside your process, you could even read sequentially the /proc/self/maps pseudo-file to understand its virtual address space, like here.

How many iterations are saved by JAGS/BUGS when burnin and thinning are specified?

I have a quick question about the details of running a model in JAGS and BUGS.
Say I run a model with n.burnin=5000, n.iter=5000 and thin=2. Does this mean that the program will:
Run 5,000 iterations, and discard results; and then
Run another 10,000 iterations, only keeping every second result?
If I save these simulations as a CODA object, are all 10,000 saved, or only the thinned 5,000? I'm just trying to understand which set of iterations is used to make the ACF plot.
With JAGS, n.burnin=5000, n.iter=5000 and thin=2 means you keep nothing: you run 5000 iterations and discard all 5000 of them as burn-in, so there is nothing left to thin (thinning would keep only every second value of whatever remained).
Use, for example, n.burnin=2000, n.iter=7000, thin=50, n.chains=5: you then keep (7000-2000)/50 * 5 = 500 values.
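For what it's worth, the bookkeeping above can be written out as a tiny sketch (assuming, as in the example, that n.iter includes the burn-in and that thinning keeps every thin-th draw in each chain):

#include <stdio.h>

/* Draws kept, assuming n_iter includes the burn-in and thinning keeps
 * every thin-th post-burn-in iteration per chain. */
static int kept_samples(int n_iter, int n_burnin, int thin, int n_chains)
{
    return (n_iter - n_burnin) / thin * n_chains;
}

int main(void)
{
    printf("%d\n", kept_samples(5000, 5000, 2, 1));   /* 0   - nothing is kept */
    printf("%d\n", kept_samples(7000, 2000, 50, 5));  /* 500 - as computed above */
    return 0;
}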
Could you be more specific about which software you're talking about? It looks like you're referring to the arguments of the function bugs() in the R2WinBUGS package (except that the argument is called n.thin, not thin). Looking at help(bugs), it just says n.burnin is the "number of iterations to discard at the beginning". That doesn't specifically answer your question, but looking at the source for bugs.script() in that package suggests to me that it would run 5000 iterations of burn-in, as you suspected. You could send a suggestion to the maintainers of that package to clarify their documentation.
In your example, bugs() would then run 0 further iterations after the burn-in. Here the documentation is clearer - n.iter is the total number of iterations including the burn-in.
For your second question, the CODA output from WinBUGS (and any software which calls WinBUGS or OpenBUGS) will only include the thinned sample.

CJ1W-CT021 Card Error Omron PLC

I got this error on a CJ1W-CT021 card. It happened all of a sudden after it had been running the program for some time. I found it by going to the I/O Table and Unit Setup, clicking on the parameters for that card, and finding two settings in red:
Output Control Mode and And/Or Counter Output Patterns. They read
Output Control Mode = 0x40 No Applicable Set Data
And/Or Counter Output Patterns = 0x64 No Applicable Set Data
I have no idea how or why these would change; they should have been
Output Control Mode = Range Mode
And/Or Counter Output Patterns = Logically Or
I have added some new code, but nothing big or really even used, as I had the outputs of the new rungs jumped out. One thing I thought might cause this is that every cycle of the program it was checking the value of an encoder connected to this card. Maybe checking it too often? Anyhow, if anyone has any idea what these settings do or how they would change, please post.
Thanks
Glen
EDIT: I wanted to add the bits I used; I don't think any are part of this card's internal I/O, but I may be wrong.
Work bits 66.01 - 66.06 , 60.02 - 60.07 , 160.12, 160.01 - 160.04, 161.02, 161.03
and
Data Bits (D)20720, 20500, 20600, 20000, 20590, 20040
I would check section 4-1 through 4-2-4 of the CT021 manual - make sure you aren't writing to reserved memory locations used for configuration data of the CT021 unit.
EDIT:
1) Check page 26 of the above manual to see the location of the machine switch settings. The bottom dial sets the '1's digit and the top dial sets the '10's digit (i.e. the machine number can be 0-99);
2) Per page 94, D-Memory is allocated from D20000 + (N X 100) (400 Words) where N is equal to the machine number.
I would guess that your machine number is set to 0 (i.e. both dials at '0'), 5, or 6. In the case of machine number 0, the reserved DM range would be D20000 -> D20399. In that case (see pages 97, 105) D20000 would contain configuration data for Output Control Mode (bits 00-07) and Counter Output Patterns (bits 08-15). It looks like you are writing 0x6440 to D20000 (or to D20500 or D20600 for machine number 5 or 6, respectively) and are corrupting the configuration data.
If your machine number is 0 then stay away from D20000-D20399 unless you are deliberately trying to modify the counter's configuration state (i.e. don't use those words in your program!).
If the machine number is 1 then likewise for D20100-D20499, and so on. If you have multiple counter units their ranges can overlap, so they should always be set with machine numbers that are at least 4 apart from each other.
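If it helps to see the address arithmetic written out, here is a tiny sketch of the formula quoted from page 94 above (base = D20000 + machine number x 100, with a 400-word window):

#include <stdio.h>

/* Reserved DM window for a CJ1W-CT021 with machine (unit) number n,
 * per the formula above: base = D20000 + (n * 100), 400 words long. */
static void reserved_dm_range(int n)
{
    int base = 20000 + n * 100;
    printf("machine number %2d: D%05d - D%05d\n", n, base, base + 399);
}

int main(void)
{
    /* The machine numbers suspected in the answer above. */
    reserved_dm_range(0);   /* D20000 - D20399 */
    reserved_dm_range(5);   /* D20500 - D20899 */
    reserved_dm_range(6);   /* D20600 - D20999 */
    return 0;
}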

Getting the IO count

I am using the Xen hypervisor and am trying to get the IO count of the VMs running on top of it. Can someone suggest a way or a tool to get the IO count? I tried using xenmon and virt-top: virt-top doesn't give any value and xenmon always shows 0. Any suggestions for getting the number of read or write calls made by a VM, or the read and write (block IO) bandwidth of a particular VM? Thanks!
Regards,
Sethu
You can read this directly from sysfs on most systems. You want to open the following directory:
/sys/devices/xen-backend
And look for directories starting with vbd-
The nomenclature is:
vbd-{domain_id}-{vbd_id}/statistics
Inside, you'll find what you need, which is:
rd_req - Number of read requests
rd_sect - Number of sectors read
wr_req - Number of write requests
wr_sect - Number of sectors written
oo_req - Number of 'out of requests' occurrences (no room left in the ring to service a request)
br_req - Number of barrier requests; an aggregate count of things like write barriers, aborts, etc.
Note: for this to work, the kernel has to be told to export Xen attributes via sysfs, but most Xen packages have this enabled. Additionally, the location in sysfs might be different with earlier versions of Xen.
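As an illustration only, here is a small sketch that reads a few of those counters; the domain id (1) and vbd id (51712) in the path are made-up example values, and as noted above the sysfs location can differ between Xen versions:

#include <stdio.h>
#include <stdlib.h>

/* Read one numeric counter file from a vbd statistics directory. */
static long read_counter(const char *path)
{
    FILE *f = fopen(path, "r");
    long value = -1;
    if (f) {
        if (fscanf(f, "%ld", &value) != 1)
            value = -1;
        fclose(f);
    }
    return value;
}

int main(void)
{
    /* Example path only: vbd-{domain_id}-{vbd_id}/statistics as described above. */
    const char *base = "/sys/devices/xen-backend/vbd-1-51712/statistics";
    const char *names[] = { "rd_req", "rd_sect", "wr_req", "wr_sect", "oo_req" };
    char path[256];

    for (size_t i = 0; i < sizeof(names) / sizeof(names[0]); i++) {
        snprintf(path, sizeof(path), "%s/%s", base, names[i]);
        printf("%s = %ld\n", names[i], read_counter(path));
    }
    return 0;
}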
Have you tried xentop?
There is also bwm-ng (check your distro). It shows block utilization per disk (real/virtual). If you know the name of the virtual disk attached to the VM, then you can use bwm-ng to get those stats.