What does the second number in Pester's coverage % mean?

When I run pester I get this output
Covered 100% / 75%. 114 analyzed Commands in 1 File
What does the 75% mean? I haven't been able to find it anywhere in the documentation.

It is the value of $PesterPreference.CodeCoverage.CoveragePercentTarget.Value, i.e. the minimum amount of test coverage you want to achieve. This is set to 75% by default.
It's mentioned on the page describing New-PesterConfiguration:
https://pester-docs.netlify.app/docs/commands/New-PesterConfiguration
CoveragePercentTarget: Target percent of code coverage that you want
to achieve, default 75%. Default value: 75
But it was quite hard to figure out, and it could do with being added to the documentation page about test coverage. I ended up searching through the source code and found that the message you listed is output here:
https://github.com/pester/Pester/blob/1515194f4868f6aaae82d7d376a8a776afe0ebf4/src/functions/Output.ps1
CoverageMessage = 'Covered {2:0.##}% / {5:0.##}%. {3:N0} analyzed {0} in {4:N0} {1}.'
Which is populated with values here:
$coverageMessage = $ReportStrings.CoverageMessage -f $command, $file, $executedPercent, $totalCommandCount, $fileCount, $PesterPreference.CodeCoverage.CoveragePercentTarget.Value
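So if you want to raise or lower that threshold, you can set it through the configuration object (a minimal sketch using the documented CodeCoverage keys):

# Raise the target from the 75% default; the second number in the summary follows it
$config = New-PesterConfiguration
$config.CodeCoverage.Enabled = $true
$config.CodeCoverage.CoveragePercentTarget = 90
Invoke-Pester -Configuration $config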

Related

How do I store YOLOv5 Outputs to one csv file rather than many .txt files per detection

Hi all, I have been researching this challenge for some time now and all my efforts have been futile. What am I trying to do?
I am running YOLOv5, and this works fine in both the training and detection stages. The detection stage outputs multiple .txt files per video, one for each detection per frame. This is the command I am using:
python3 detect.py --weights /Users/YOLO2ClassOnly/yolov5/runs/train/exp11/weights/best.pt --source /Users/YOLO2ClassOnly/yolov5/data/videos --conf 0.1 --line-thickness 1 --save-txt --save-conf
This command produces multiple text files, for example vid0_walking.txt, vid1_walking.txt, vid2_walking.txt, and so on.
This is depleting my storage resources and I am trying to avoid this.
What I would like to do?
Store the detections in one .csv file, in this format, please:
# xmin ymin xmax ymax confidence class name
# 0 749.50 43.50 1148.0 704.5 0.874023 0 person
# 2 114.75 195.75 1095.0 708.0 0.624512 0 person
# 3 986.00 304.00 1028.0 420.0 0.286865 27 tie
I have been following Glenn Jocher's links here:
https://github.com/ultralytics/yolov5/issues/7499
But this is futile; the function print(results.pandas().xyxy[0]) does not generate the output above for video.
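As far as I understand, for video it would need to loop over the frames and append each result to a single file. Roughly like this, I think (my untested sketch; the weights path is from my command above, and the video filename is a placeholder):

import cv2
import torch

# Load the custom weights via torch.hub, as in the linked issue
model = torch.hub.load("ultralytics/yolov5", "custom",
                       path="/Users/YOLO2ClassOnly/yolov5/runs/train/exp11/weights/best.pt")

cap = cv2.VideoCapture("/Users/YOLO2ClassOnly/yolov5/data/videos/vid0_walking.mp4")
first = True
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    results = model(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))  # OpenCV is BGR; the model wants RGB
    df = results.pandas().xyxy[0]  # columns: xmin, ymin, xmax, ymax, confidence, class, name
    df.to_csv("detections.csv", mode="a", header=first, index=False)
    first = False
cap.release()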
Please help, this is challenging me due to my lack of understanding.
Thanks in advance for acknowledging my digital presence; I am grateful for your guidance!

tensorflow checkpoint missing input tensor node

(Please pardon my long post; I dearly appreciate your help.)
I am training the SqueezeDet model on Pascal VOC-style custom data, as per the training code from the repository HERE:
train.py
model_definition, also HERE
The saved model checkpoint performs well; I can see acceptable performance.
Now I am trying to freeze the model for deployment with Core ML, to see how it performs on a mobile platform. The authors of the script only report performance in a GPU environment in their research paper.
I follow the steps recommended by TensorFlow; my commands are below.
First,
I write the graph out from the checkpoint meta file
import tensorflow as tf

path_to_ckpt_meta = rootdir + "model.ckpt-355000.meta"
path_to_ckpt_data = rootdir + "model.ckpt-355000"

# Restore the graph structure and the trained weights from the checkpoint
sess = tf.Session(config=tf.ConfigProto(allow_soft_placement=True))
saver = tf.train.import_meta_graph(path_to_ckpt_meta)
saver.restore(sess, path_to_ckpt_data)

# Write the graph definition out as a binary .pb file
tf.train.write_graph(tf.get_default_graph().as_graph_def(), rootdir, "model_ckpt_355000_graph_V2.pb", False)
Now
I check the graph summary and see all the tensors in the model. The output summary file is HERE.
However, when I check the checkpoint file using inspect_checkpoint.py from TensorFlow, I see no image_input node. The output of the inspection is HERE.
Second
I freeze the graph using the TensorFlow freeze_graph.py tool:
python ./tensorflow/python/tools/freeze_graph.py \
--input_graph=path-to-dir/train/model_ckpt_355000_graph.pb \
--input_checkpoint=path-to-dir/train/model.ckpt-355000 \
--output_graph=path-to-dir/train/frozen_sqdt_ckpt_355000.pb \
--output_node_names=bbox/trimming/bbox,probability/score,probability/class_idx
The freeze_graph call completes without error and produces the frozen graph as per the command above.
Now,
when I check the frozen graph using the summarize_graph function call
bazel-bin/tensorflow/tools/graph_transforms/summarize_graph --in_graph=/tmp/logs/squeezeDet_NewDataset_test01_March02/train/frozen_sqdt_ckpt_355000.pb
I get the following
No inputs spotted.
No variables spotted.
Found 3 possible outputs: (name=bbox/trimming/bbox, op=Transpose) (name=probability/score, op=Max) (name=probability/class_idx, op=ArgMax)
Found 2703452 (2.70M) const parameters, 0 (0) variable parameters, and 0 control_edges
Op types used: 130 Const, 68 Identity, 32 BiasAdd, 32 Conv2D, 31 Relu, 15 Mul, 14 Add, 10 ConcatV2, 9 Sub, 5 RealDiv, 5 Reshape, 4 Maximum, 4 Minimum, 3 StridedSlice, 3 MaxPool, 2 Exp, 2 Greater, 2 Cast, 2 Select, 1 Transpose, 1 Softmax, 1 Sigmoid, 1 Unpack, 1 RandomUniform, 1 QueueDequeueManyV2, 1 Pack, 1 Max, 1 Floor, 1 FIFOQueueV2, 1 ArgMax
To use with tensorflow/tools/benchmark:benchmark_model try these arguments:
bazel run tensorflow/tools/benchmark:benchmark_model -- --graph=/tmp/logs/squeezeDet_NewDataset_test01_March02/train/frozen_sqdt_ckpt_355000.pb --show_flops --input_layer= --input_layer_type= --input_layer_shape= --output_layer=bbox/trimming/bbox,probability/score,probability/class_idx
The output above suggests that no input is detected in the frozen graph. I check the summary of the frozen graph and find no image_input tensor. HERE
When I check my original graph (written in step 1) with summarize_graph, it does show inputs.
My troubleshooting suggests there is some mix-up in the original authors' code, where image_input is not provided as an input tensor. The confusing part, though, is that I can see the input image tensor in the summary of the graph written out from the checkpoint meta file.
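To compare the two graphs directly, I list the Placeholder ops in each .pb (a quick sketch; the filename here is the frozen graph from step 2):

import tensorflow as tf

# Load a serialized GraphDef and report any Placeholder (input) ops it contains
graph_def = tf.GraphDef()
with tf.gfile.GFile("frozen_sqdt_ckpt_355000.pb", "rb") as f:
    graph_def.ParseFromString(f.read())
print([n.name for n in graph_def.node if n.op == "Placeholder"])  # empty -> no input nodes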
My questions are:
-- Why does freezing remove the input nodes, when the original graph has the inputs?
-- And what can I do to change this, so that freeze_graph completes correctly?
Is there a transformation I need to perform to make the frozen model compatible with the Core ML format?
All your help is much appreciated.
Best
Aman

PowerShell "get-childitem2" size over ~3.5GB

This link points to Get-ChildItem2, a function which allows traversing paths beyond the 260-character limit:
https://gallery.technet.microsoft.com/scriptcenter/Get-ChildItemV2-to-list-29291aae#content.
It's a great function which works really well; however, it misreports file sizes over ~3.5GB, which is the whole reason I'm running it.
I'm pained to admit I'm just not good enough to find and fix the code so it accurately reports file sizes over ~3.5GB.
I imagine the problem is somewhere here, as there doesn't appear to be an 'nFileSizeHigh' option:
} Else {
    $Object.Length = [int64]("0x{0:x}" -f $findData.nFileSizeLow)
    $Object.pstypenames.insert(0,'System.Io.FileInfo')
}
I've chosen this over Robocopy and AlphaFS as I've had various issues with both.
Example of file size issues:
get-childitem C:\Temp\Huge (correct sizes in bytes):
3166720000 - sp1_vl_build_x64_dvd_617403.iso
5653628928 - server_2016_x64_dvd_9327751.iso
4548247552 - it_English_-3_MLF_X19-53588.ISO
get-childitem2 C:\Temp\Huge:
3166720000 - sp1_vl_build_x64_dvd_617403.iso
1358661632 - server_2016_x64_dvd_9327751.iso
253280256 - it_English_-3_MLF_X19-53588.ISO
From this documentation:
The size of the file is equal to (nFileSizeHigh * (MAXDWORD+1)) + nFileSizeLow
And at the top of the script (line 92) is a mention of nFileSizeHigh, but no attempt to add it into the file length. So I can only guess the script is buggy and wasn't tested on files larger than [DWORD MAX], which is [uint32]::MaxValue, or about 4GB.
If you change the two lines up at the top to [uint32] instead of [int32]:
[void]$STRUCT_TypeBuilder.DefineField('nFileSizeHigh', [uint32], 'Public')
[void]$STRUCT_TypeBuilder.DefineField('nFileSizeLow', [uint32], 'Public')
and make the length calculation more like the documentation link:
$Object.Length = ($findData.nFileSizeHigh * ([uint32]::MaxValue+1)) + ([int64]('0x{0:x}' -f $findData.nFileSizeLow))
then it handles a file of ~7GB correctly in my quick testing.
Why the original goes through string formatting to convert to int64, and how it should be done properly, I don't know; this was mostly trial and error until it worked.
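For what it's worth, a tidier way to combine the two DWORDs, assuming both fields are defined as [uint32] as above, is a shift-and-or (my own sketch, not from the original script):

# Compose the 64-bit length from the two 32-bit halves of WIN32_FIND_DATA;
# equivalent to (high * (MAXDWORD + 1)) + low, without the string round-trip
$Object.Length = ([int64]$findData.nFileSizeHigh -shl 32) -bor [int64]$findData.nFileSizeLow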

Grib2 to PostGIS raster -- anyone get this to work?

I have an application for which I need to import U.S. National Weather Service surface analyses, which are distributed as grib2 files. I want to pull those into PostGIS 2.0 rasters, do some calculations and modeling, and display the data and model results in GeoServer.
Since grib2 is a GDAL-supported format, the supplied raster2pgsql utility should be able to slurp a grib2 right into PostGIS-compatible SQL, and once it's there, GeoServer ought to be able to handle it. However, I'm running into problems which have no obvious solutions -- not obvious to me, at any rate! Raster2pgsql runs, apparently without errors, producing SQL, and running the SQL creates what looks very much like a raster. But GeoServer can't display it -- the bounds, in particular, come out looking weird (0,0 -1,-1) and "preview layer" just throws a NullPointerException.
Has anyone been down this road already? I've got issues as basic as not knowing what the SRID should be for the data (4326, perhaps?). I don't expect anyone to debug my problems for me but if someone has already got this toolchain working, or part of it, I can plug known-good things in and see what I can discover.
TIA,
rw
Updated: Per Mike, here is the coordinate-system stuff from one of the files; I elided the other 749 bands in the output from "gdalinfo". Note that the filename is different -- by running "gdalinfo" on my original file, I found out that something was wrong with it; gdalinfo couldn't read it. New (35MB!) file here.
Gdalinfo output:
Driver: GRIB/GRIdded Binary (.grb)
Files: ruc2.t00z.bgrb13anl.grib2
Size is 451, 337
Coordinate System is:
PROJCS["unnamed",
GEOGCS["Coordinate System imported from GRIB file",
DATUM["unknown",
SPHEROID["Sphere",6371229,0]],
PRIMEM["Greenwich",0],
UNIT["degree",0.0174532925199433]],
PROJECTION["Lambert_Conformal_Conic_2SP"],
PARAMETER["standard_parallel_1",25],
PARAMETER["standard_parallel_2",25],
PARAMETER["latitude_of_origin",0],
PARAMETER["central_meridian",265],
PARAMETER["false_easting",0],
PARAMETER["false_northing",0]]
Origin = (-3332155.288903323933482,6830293.833488883450627)
Pixel Size = (13545.000000000000000,-13545.000000000000000)
Corner Coordinates:
Upper Left (-3332155.289, 6830293.833) (139d51'22.04"W, 54d10'20.71"N)
Lower Left (-3332155.289, 2265628.833) (126d 6'34.06"W, 16d 9'49.48"N)
Upper Right ( 2776639.711, 6830293.833) ( 57d12'21.76"W, 55d27'10.73"N)
Lower Right ( 2776639.711, 2265628.833) ( 68d56'16.73"W, 17d11'55.33"N)
Center ( -277757.789, 4547961.333) ( 98d 8'30.73"W, 39d54'5.40"N)
Band 1 Block=451x1 Type=Float64, ColorInterp=Undefined
Description = 1[-] HYBL="Hybrid level"
Metadata:
GRIB_UNIT=[Pa]
GRIB_COMMENT=Pressure [Pa]
GRIB_ELEMENT=PRES
[Etc., Etc., for all 750 bands]
I hope this helps, at least for those coming to this thread.
Bear in mind that while GeoServer is capable of loading raster data from PostGIS, the default PostGIS "importing" module is ONLY available for vector data; that's why you get those odd bounds (-1 -1 0 0).
You'll have to add the ImageMosaicJDBC plugin to your GeoServer installation; follow the steps here:
http://docs.geoserver.org/latest/en/user/tutorials/imagemosaic-jdbc/imagemosaic-jdbc_tutorial.html
Got an excellent answer to my problem here. Putting it in as a separate answer.
He recommended using gdalwarp to pull the GRIB2 file into a known SRID, thus:
gdalwarp -t_srs EPSG:4326 original_file.grib2 4326_file.grib2
Then, raster2pgsql works just fine, e.g.
raster2pgsql -M -a 4326_file.grib2 some_sql.sql
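If you want to skip the intermediate .sql file, the raster2pgsql output can also be piped straight into psql (a sketch; the table name weather_raster and database mydb are placeholders):

raster2pgsql -s 4326 -I -M 4326_file.grib2 weather_raster | psql -d mydb

Here -s stamps the SRID, -I builds a spatial index, and -M runs VACUUM ANALYZE on the table afterwards.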

COBOL add 0 to a Variable in COMPUTE

I ran into a strange statement when working on a COBOL program from $WORK.
We have a paragraph that opens a cursor (from DB2) and then loops over it until it hits an EOT (in pseudocode):
... working storage ...
01 I PIC S9(9) COMP VALUE ZEROS.
01 WS-SUB PIC S9(4) COMP VALUE 0.
... code area ...
PARA-ONE.
PERFORM OPEN-CURSOR
PERFORM FETCH-CURSOR
PERFORM VARYING I FROM 1 BY 1 UNTIL SQLCODE = DB2EOT
do stuff here...
END-PERFORM
COMPUTE WS-SUB = I + 0
PERFORM CLOSE-CURSOR
... do another loop using WS-SUB ...
I'm wondering why that COMPUTE WS-SUB = I + 0 line is there. My understanding is that I will always be at least 1, because of the PERFORM block above it (i.e., even if there is an EOT to start with, I will be set to one on that initial iteration).
Is that COMPUTE line even needed? Is it doing some implicit casting that I'm not aware of? Why would it be there? Why wouldn't you just MOVE I TO WS-SUB?
Call it stupid, but with some compilers (with the correct options in effect), given
01 SIGNED-NUMBER PIC S99 COMP-5 VALUE -1.
01 UNSIGNED-NUMBER PIC 99 COMP-5.
...
MOVE SIGNED-NUMBER TO UNSIGNED-NUMBER
DISPLAY UNSIGNED-NUMBER
results in: 255. But...
COMPUTE UNSIGNED-NUMBER = SIGNED-NUMBER + ZERO
results in: 1 (unsigned)
So to answer your question, this could be classified as a technique used to cast signed numbers to unsigned numbers. However, in the code example you gave, it makes no sense at all.
Note that the definition of "I" was (likely) coded by one programmer and of WS-SUB by another (naming is different, VALUE clause is different for same purpose).
Programmer 2 looks like "old school": PIC S9(4), signed and taking up all the digits which "fit" in a half-word. The S9(9) is probably "far over the top" as per range of possible values, but such things concern Programmer 1 not at all.
Probably Programmer 2 had concerns about using an S9(9) COMP for something requiring (perhaps many) fewer than 9999 "things". "I'll be 'efficient' without changing the existing code". It seems to me unlikely that the field was ever defined as unsigned.
A COMP/COMP-4 with nine digits does have a performance penalty when used for calculations. Try "ADD 1" to a 9(9) and a 9(8) and a 9(10) and compare the generated code. If you can have nine digits, define with 9(10), otherwise 9(8), if you need a fullword.
Programmer 2 knows something of this.
The COMPUTE with + 0 is probably deliberate. Why did Programmer 2 use the COMPUTE like that (the original question)?
Now it is going to get complicated.
There are two "types" of "binary" fields on the Mainframe: those which will contain values limited by the PICture clause (USAGE BINARY, COMP and COMP-4); those which contain values limited by the field size (USAGE COMP-5).
With BINARY/COMP/COMP-4, the size of the field is determined from the PICture, and so are the values that can be held. PIC 9(4) is a halfword, with a maximum value of 9999. PIC S9(4) is a halfword with values -9999 through +9999.
With COMP-5 (Native Binary), the PICture just determines the size of the field, all the bits of the field are relevant for the value of the field. PIC 9(1) to 9(4) define halfwords, pic 9(5) to 9(9) define fullwords, and 9(10) to 9(18) define doublewords. PIC 9(1) can hold a maximum of 65535, S9(1) -32,768 through +32,767.
All well and good. Then there is the compiler option TRUNC. It has three settings: STD (the default), BIN, and OPT.
BIN can be considered to have the most far-reaching effect. BIN makes BINARY/COMP/COMP-4 behave like COMP-5. Everything becomes, in effect, COMP-5. PICtures for binary fields are ignored, except in determining the size of the field (and, curiously, with ON SIZE ERROR, which "errors" when the maxima according to the PICture are exceeded). Native Binary, in IBM Enterprise COBOL, generates, in the main though not exclusively, the "slowest" code. Truncation is to field size (halfword, fullword, doubleword).
STD, the default, is "standard" truncation. This truncates to "PICture". It is therefore a "decimal" truncation.
OPT is for "performance". With OPT, the compiler truncates in whatever way is the most "performant" for a particular "code sequence". This can mean intermediate values and final values may have "bits set" which are "outside of the range" of the PICture. However, when used as a source, a binary field will always only reflect the value specified by the PICture, even if there are "excess" bits set.
It is important when using OPT that all binary fields "conform to PICture" meaning that code must never rely on bits which are set outside the PICture definition.
Note: Even though OPT has been used, the OPTimizer (OPT(STD) or OPT(FULL)) can still provide further optimisations.
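To make the STD/COMP-5 difference concrete, here is a small illustration (my own sketch, assuming IBM Enterprise COBOL compiled with TRUNC(STD); not from the program in question). Given

01 STD-NUMBER PIC 9(4) BINARY.
01 NATIVE-NUMBER PIC 9(4) COMP-5.
...
COMPUTE STD-NUMBER = 9999 + 1
DISPLAY STD-NUMBER

results in: 0000, because the result is truncated decimally, to the PICture. But

COMPUTE NATIVE-NUMBER = 9999 + 1
DISPLAY NATIVE-NUMBER

results in: 10000, because COMP-5 truncates only to the halfword, and 10000 fits, even though it exceeds PIC 9(4).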
This is all well and good.
However, a "pickle" can readily ensue if you "mix" TRUNC options, or if the binary definition in a CALLing program is not the same as in the CALLed program. The "mix" can occur if modules within the same run-unit are compiled with different TRUNC options, or if a binary field on a file is written with one TRUNC option and later read with another.
Now, I suspect Programmer 2 encountered something like this: Either, with TRUNC(OPT) they noticed "excess bits" in a field and thought there was a need to deal with them, or, through the "mix" of options in a run-unit or "across file usage" they noticed "excess bits" where there would be a need to do something about it (which was to "remove the mix").
Programmer 2 developed the COMPUTE A = B + 0 to "deal" with a particular problem (perceived or actual) and then applied it generally to their work.
This is a "guess", or, better, a "rationalisation" which works with the known information.
It is a "fake" fix. There was either no problem (the normal way that TRUNC(OPT) works) or the correct resolution was "normalisation" of the TRUNC option across modules/file use.
I do not want loads of people now rushing off and putting COMPUTE A = B + 0 in their code. For a start, they don't know why they are doing it. For a continuation it is the wrong thing to do.
Of course, do not just remove the "+ 0" from any of these that you find. If there is a "mix" of TRUNCs, a program may stop "working".
There is one situation in which I have used "ADD ZERO" for a BINARY/COMP/COMP-4. This is in a "Mickey Mouse" program, a program with no purpose but to try something out. Here I've used it as a method to "trick" the optimizer, as otherwise the optimizer could see unchanging values so would generate code to use literal results as all values were known at compile time. (A perhaps "neater" and more flexible way to do this which I picked up from PhilinOxford, is to use ACCEPT for the field). This is not the case, for certain, with the code in question.
I wonder if a testing version of the sources ever had
COMPUTE WS-SUB = I + 0
    ON SIZE ERROR
        DISPLAY "WS-SUB overflow"
        STOP RUN
END-COMPUTE
with the range test discarded when the developer was satisfied and was cleaning up? MOVE doesn't allow an ON SIZE ERROR phrase. That's as much of a reason as I can see. Or perhaps it was developer habit, using COMPUTE to move as a subtle reminder to question the need for defensive code at every step? And perhaps not knowing, as Joe pointed out, that the SIZE clause would be just as effective without the + 0? Or a maintainer struggled with off-by-one errors, and there was a corrective change from 1 to 0 after testing?