This link points to Get-ChildItem2, a function which allows traversing paths beyond the 260-character MAX_PATH limit.
https://gallery.technet.microsoft.com/scriptcenter/Get-ChildItemV2-to-list-29291aae#content
It's a great function which works really well; however, it misreports file sizes over roughly 4 GB, which is the whole reason I'm running it.
It pains me to admit I'm just not good enough to find and fix the code so that it accurately reports those file sizes.
I imagine the problem is somewhere here, as there doesn't appear to be any handling of 'nFileSizeHigh':
} Else {
    $Object.Length = [int64]("0x{0:x}" -f $findData.nFileSizeLow)
    $Object.pstypenames.insert(0,'System.Io.FileInfo')
}
I've chosen this over Robocopy and AlphaFS as I've had various issues with both.
Example of file size issues:
get-childitem C:\Temp\Huge (correct sizes in bytes):
3166720000 - sp1_vl_build_x64_dvd_617403.iso
5653628928 - server_2016_x64_dvd_9327751.iso
4548247552 - it_English_-3_MLF_X19-53588.ISO
get-childitem2 C:\Temp\Huge (misreported sizes):
3166720000 - sp1_vl_build_x64_dvd_617403.iso
1358661632 - server_2016_x64_dvd_9327751.iso
253280256 - it_English_-3_MLF_X19-53588.ISO
From this documentation:
The size of the file is equal to (nFileSizeHigh * (MAXDWORD+1)) + nFileSizeLow
And at the top of the script (line 92) there is a mention of nFileSizeHigh, but no attempt to add it into the file length. Notice that the misreported sizes above are exactly the correct sizes minus 2^32 (e.g. 5653628928 - 4294967296 = 1358661632), i.e. the high DWORD is simply dropped. So I can only guess the script is buggy and wasn't tested on files larger than [DWORD MAX], which is [uint32]::MaxValue, or about 4.3 GB.
If you change the two lines up at the top to [uint32] instead of [int32]:
[void]$STRUCT_TypeBuilder.DefineField('nFileSizeHigh', [uint32], 'Public')
[void]$STRUCT_TypeBuilder.DefineField('nFileSizeLow', [uint32], 'Public')
and make the length calculation follow the documented formula:
$Object.Length = ($findData.nFileSizeHigh * ([uint32]::MaxValue+1)) + ([int64]('0x{0:x}' -f $findData.nFileSizeLow))
then it handles a file of ~7GB correctly in my quick testing.
Why it goes through string formatting to convert to int64, and how it should be done properly, I don't know; this was mostly trial and error until it worked.
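For what it's worth, the usual way to combine the two DWORDs without the string-formatting detour is a 64-bit shift-and-OR. A minimal sketch, assuming the struct fields have been redefined as [uint32] as above (-shl needs PowerShell 3.0 or later):

# Shift the high DWORD up 32 bits, then OR in the low DWORD.
# Casting to [int64] first keeps the arithmetic in 64 bits.
$Object.Length = ([int64]$findData.nFileSizeHigh -shl 32) -bor [int64]$findData.nFileSizeLow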
When I run Pester I get this output:
Covered 100% / 75%. 114 analyzed Commands in 1 File
What does the 75% mean? I haven't been able to find it anywhere in the documentation.
It is the value of $PesterPreference.CodeCoverage.CoveragePercentTarget.Value, i.e. the minimum amount of test coverage you want to achieve. This is set to 75% by default.
It's mentioned on the page describing New-PesterConfiguration:
https://pester-docs.netlify.app/docs/commands/New-PesterConfiguration
CoveragePercentTarget: Target percent of code coverage that you want to achieve, default 75%. Default value: 75
But it was quite hard to figure out, and it could do with being added to the documentation page about test coverage. I ended up searching through the source code and found that the message you quoted is output here:
https://github.com/pester/Pester/blob/1515194f4868f6aaae82d7d376a8a776afe0ebf4/src/functions/Output.ps1
CoverageMessage = 'Covered {2:0.##}% / {5:0.##}%. {3:N0} analyzed {0} in {4:N0} {1}.'
Which is populated with values here (the achieved coverage fills slot {2}, the target fills slot {5}):
$coverageMessage = $ReportStrings.CoverageMessage -f $command, $file, $executedPercent, $totalCommandCount, $fileCount, $PesterPreference.CodeCoverage.CoveragePercentTarget.Value
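If you want to change that target rather than just understand it, it can be set through the same configuration object. A minimal sketch using the Pester v5 configuration API (the test path is a placeholder):

# Raise the coverage target from the default 75% to 90%
$config = New-PesterConfiguration
$config.Run.Path = '.\MyModule.Tests.ps1'
$config.CodeCoverage.Enabled = $true
$config.CodeCoverage.CoveragePercentTarget = 90
Invoke-Pester -Configuration $config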
DM scripting beginner here, almost no programming skills.
I would like to know the commands to access all the metadata of DM images/spectra.
I realized that all my STEM images at 80 kV taken between 2 dates (let's say 02.11.2017-05.04.2019) have the scale calibration wrong by the same factor (scale of all such images needs to be multiplied by 1.21).
I would like to write a script that multiplies the scale value by a factor, but only for images taken in scanning mode at 80 kV during that period, applied either to all images in a folder (including subfolders) or to all images open in DM, and that then saves the new scale value.
I checked this website http://digitalmicrograph-scripting.tavernmaker.de/other%20resources/Old-DMHelp/AllFunctions.html but only found how to read the scale value (ImageGetDimensionCalibration). I have a general idea of how to write the script based on other scripts, once I find out how to read the metadata.
If anyone can write the whole script for me I would greatly appreciate your effort.
All general meta-data is organized in the image tag-structure
You can see this if you open the Image Display Info of an image (via the menu, or by pressing CTRL + D) and then browse to the "Tags" section.
Everything listed there consists of image tags, organized in a hierarchical tree.
What this tree looks like, and what information is written where, is completely open and will depend on which GMS version you are using, how the hardware is configured, etc. Custom scripts might also alter this information.
So for a scripting start, open the data you want to modify and have a look in this tree.
Hint: The following mini-script can be useful. It opens a tag-browsing window for the front-most image, but as a modeless dialog (i.e. you can keep it open and still interact with other parts of DM):
GetFrontImage().ImageGetTagGroup().TagGroupOpenBrowserWindow(0)
The information you need to check against is most probably found in the Microscope Info sub-tree. Here, usually, all information gathered from the microscope during acquisition is stored. What is there will depend on your system and how it is set up.
The information of the STEM image acquisition - as far as the scanning engine and detector is concerned - is most probably in the DigiScan sub-tree.
The Data Bar sub-tree usually contains date and time of creation etc.
Calibration values are not stored in the image tag-structure
What you will not find in this tag-structure is the image calibration, i.e. the values actually used by DM to display calibrated values. These values are stored "one level up", so to speak.
This is important to know for your script, because you will need different commands for the "meta-data" from the tags and for the "calibration" you want to change.
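For the calibration itself, something along the following lines should work. This is only a hedged sketch built around the ImageGetDimensionCalibration / ImageSetDimensionCalibration pair (the trailing 0 is the calibration-format parameter; check the exact signatures in your GMS version's F1 help):

image img := GetFrontImage()
number origin, scale
string unit
// Read the current calibration of dimension 0, then write it back rescaled
img.ImageGetDimensionCalibration( 0, origin, scale, unit, 0 )
img.ImageSetDimensionCalibration( 0, origin, scale * 1.21, unit, 0 )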
Accessing meta-data by script
The script commands you need to read from the tags are all described in the F1 help documentation.
Essentially, you need a command to get the "root" TagGroup of an image, which is ImageGetTagGroup() and then you traverse within this tree.
This might seem confusing - because there are a lot of slightly different commands for the different types of stored tags - but the essential bits are easy:
All "Paths" through the tree are just the individual names (typed exactly)
For each "branch" you have to use a single colon :
The commands to set/get a tag-value all require as input the "root" tagGroup object and the "path" as a string. The get commands require a variable of matching type to store the value in, the set commands need the value which should be written.
= The get commands themeselves return true or false depending on whether or not a tag-path could be found and the value could be read.
So the following script would read the "Imaging Mode" from the tags of the example image above:
string mode
GetFrontImage().ImageGetTagGroup().TagGroupGetTagAsString( "Microscope Info:Imaging Mode", mode )
OKDialog( "Mode: " + mode )
and in a little more verbose form:
string mode // variable to hold the value
image img // variable for the image
string path // variable/constant to specify the where
TagGroup tg // variable to hold the "tagGroup" object
img := GetFrontImage() // Use the selected image
tg = img.ImageGetTagGroup() // From the image get the tags (root)
path = "Microscope Info:Imaging Mode" // specify the path
if ( tg.TagGroupGetTagAsString( path, mode ) )
OKDialog( "Mode: " + mode )
else
Throw( "Tag not found" )
If the tag is not a string but a number, you will need the corresponding commands, e.g. TagGroupGetTagAsNumber().
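For example, reading the voltage as a number might look like the following sketch (the tag path "Microscope Info:Voltage" is an assumption; check the actual path in your own tag tree):

number voltage
TagGroup tg = GetFrontImage().ImageGetTagGroup()
if ( tg.TagGroupGetTagAsNumber( "Microscope Info:Voltage", voltage ) )
OKDialog( "Voltage: " + voltage )
else
Throw( "Tag not found" )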
Using Gtk+, we introduce some of the icons into the app via gtk_image_new_from_file(). We found that if the icon file is directly in the app's directory, then it all works well with "no path", e.g.
FString255 = "Icon_Charts.png"
IconImage_Ptr = gtk_image_new_from_file( Trim(FString255)//c_Null_Char )
However, when we tried to move the icons to a sub-directory (what a surprise, called "Icons"), we could not get Gtk to recognise the PNGs. We tried every permutation we could think of using absolute and relative variations (e.g. with ".\", with "./", with double slashes, with "C:.....\Icons..." etc.) ... no joy.
Does anybody know the syntax Gtk expects for a relative path, e.g. something like:
FString255 = ".\Icons\Icon_Charts.png"
???
Or perhaps there is something "special" about gtk_image_new_from_file() and it can ONLY accept "no path" file-names?
We get the feeling it must be something super simple that we missed.
To avoid any non-obvious behavior, you should always use absolute paths.
GLib provides the g_win32_get_package_installation_directory_of_module() function to get the installation directory of the current project (assuming a standard directory layout). For example:
char *path, *package_dir;

/* Top-level directory the running module was installed under */
package_dir = g_win32_get_package_installation_directory_of_module (NULL);
g_assert (package_dir != NULL);

/* Build an absolute, platform-correct path to the icon */
path = g_build_filename (package_dir, "Icons", "Icon_Charts.png", NULL);
g_free (package_dir);
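The resulting absolute path can then be handed to gtk_image_new_from_file() and freed once the widget exists; a minimal sketch:

/* hand the absolute path to GTK, then release it */
GtkWidget *image = gtk_image_new_from_file (path);
g_free (path);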
OK, sussed it, and we are feeling extremely stupid and embarrassed. As our OP suspected:
" We get the feeling it must be something super simple that we missed."
... well, it was :-(.
Before the "actual answer", TingPing's answer would be a possibility, if the real issue was not something else entirely. However, even then, if we were to go that route, we would use something like (keeping our submissions "Fortran-consistent" with the OP):
Temp_cPtr = g_get_current_dir_utf8 ()
!
If( c_Associated(Temp_cPtr) ) Then
   !
   ! Length of the C string returned by GLib
   n = c_StrLen( Temp_cPtr )
   !
   FString255Path = ""
   !
   ! Copy the C string into the Fortran string
   Call C_F_String( Temp_cPtr, FString255Path(1:n) )
   !
End if
!
!
FString255 = FString255Path(1:n)//"\Icons\Icon_Charts.png"
IconImage_Ptr(j) = gtk_image_new_from_file( Trim(FString255)//c_Null_Char )
or set "\Icons" as var and adjust the path variable at the outset, eg.
FString255Path = FString255Path(1:n)//"\Icons\"
FString255 = Trim(FString255Path)//"Icon_Charts.png"
... yes, we could have used allocatable strings too; there is a reason why we used fixed-length strings here.
In the event, the usual "simple" thing does actually work; the desired relative-path approach really is, as initially thought:
FString255 = ".\Icons\Icon_Charts.png"
IconImage_Ptr(j) = gtk_image_new_from_file( Trim(FString255)//c_Null_Char )
OK, now for the "actual answer", and our "monument to stupidity du jour". In fact this Gtk app is quite large, with some parts of the front end created with Glade and "builder", while other parts are written explicitly in code. Some of the same icons are used by both the Glade/builder bits and the explicit code bits. As it happens, we had used the correct (relative) path at the outset ... unfortunately, we looked for the results in a part of the GUI that is generated by the Glade/builder bits, which of course has its own independent mechanism for loading icons (even if the same icons are re-used in the "code"), and so no amount of fiddling with the "code/paths" would make any difference there.
... this is rather a "bush-league" mistake on our part, and we would feel better if the entire question/post were deleted ... but perhaps we should "honour our monument to stupidity" :-).
... our apologies for wasting anybody's time.
I'm using uigetfile with a custom set of FilterSpecs. Here is the call:
[FileName,PathName,FilterIndex] = uigetfile({'*.wav';'*.mp3'},'Open Audio File');
As you can see my FilterSpec is {'*.wav';'*.mp3'} and this works perfectly fine. My problem is simple: MATLAB is always appending All Files (*.*) to my FilterSpecs. I have searched the MATLAB docs, and they literally state:
"uigetfile appends All Files (*.*) to the file types when FilterSpec is a string."
But the problem is that I don't see another way of specifying a custom FilterSpec without using strings. Sorry if this turns out to be a dumb question.
Thanks in advance
There's no way to (easily) remove the 'All Files' entry from uigetfile(), since it's always added by MATLAB.
If you really want to do it, you have to copy the uigetputfile_helper() code (to MYuigetputfile_helper(), for example) and change it, and then call that from your own MYuigetfile() - same idea there.
The change would be around lines 311 and 319 in my version of uigetputfile_helper(), i.e.
% Now add 'All Files' appropriately.
if (addAllFiles)
    % If a string, create a cell array and append '*.*'.
    if (~iscell(returned_filter))
        returned_filter = {returned_filter; '*.*'};
    % If it is a cell array without descriptors, add '*.*'.
    elseif (size(returned_filter, 2) == 1)
        returned_filter{end+1} = '*.*';
    end
end
Hope that helps... have fun!
If you back up a few lines from the previous poster's answer, you'll see a comment:
We want to add 'All Files' in all cases unless we have a cell array with descriptors.
This comment is on line 245 of uigetputfile_helper for me. Simply describe your file types at the time you call uigetfile, and you won't see All Files (*.*).
Example:
[fname,pname] = uigetfile({'*.m','MATLAB Code (*.m)';'*.mat','MATLAB Data (*.mat)'});
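Applied to the original .wav/.mp3 case that would be (the descriptor texts are just examples):

[FileName,PathName,FilterIndex] = uigetfile( ...
    {'*.wav','Wave Audio (*.wav)'; '*.mp3','MP3 Audio (*.mp3)'}, ...
    'Open Audio File');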
I ran into a strange statement when working on a COBOL program from $WORK.
We have a paragraph that opens a cursor (from DB2) and then loops over it until it hits an EOT (in pseudo-code):
... working storage ...
01 I PIC S9(9) COMP VALUE ZEROS.
01 WS-SUB PIC S9(4) COMP VALUE 0.
... code area ...
PARA-ONE.
    PERFORM OPEN-CURSOR
    PERFORM FETCH-CURSOR
    PERFORM VARYING I FROM 1 BY 1 UNTIL SQLCODE = DB2EOT
        ... do stuff here ...
    END-PERFORM
    COMPUTE WS-SUB = I + 0
    PERFORM CLOSE-CURSOR
... do another loop using WS-SUB ...
I'm wondering why that COMPUTE WS-SUB = I + 0 line is there. My understanding is that I will always be at least 1, because of the PERFORM block above it (i.e., even if there is an EOT to start with, I will be set to one before the first test of the condition).
Is that COMPUTE line even needed? Is it doing some implicit casting that I'm not aware of? Why would it be there? Why wouldn't you just MOVE I TO WS-SUB?
Call it stupid, but with some compilers (with the correct options in effect), given
01 SIGNED-NUMBER PIC S99 COMP-5 VALUE -1.
01 UNSIGNED-NUMBER PIC 99 COMP-5.
...
MOVE SIGNED-NUMBER TO UNSIGNED-NUMBER
DISPLAY UNSIGNED-NUMBER
results in: 255. But...
COMPUTE UNSIGNED-NUMBER = SIGNED-NUMBER + ZERO
results in: 1 (unsigned)
So to answer your question, this could be classified as a technique used to cast signed numbers into unsigned numbers. However, in the code example you gave, it makes no sense at all.
Note that the definition of I was (likely) coded by one programmer and that of WS-SUB by another (the naming is different, and the VALUE clause is different for the same purpose).
Programmer 2 looks like "old school": PIC S9(4), signed, and taking up all the digits which "fit" in a halfword. The S9(9) is probably "far over the top" as regards the range of values needed, but such things concern Programmer 1 not at all.
Probably Programmer 2 had concerns about using an S9(9) COMP for something requiring (perhaps many) fewer than 9999 "things": "I'll be 'efficient' without changing the existing code." It seems to me unlikely that the field was ever defined as unsigned.
A COMP/COMP-4 with nine digits does have a performance penalty when used for calculations. Try ADD 1 to a 9(9), a 9(8) and a 9(10) and compare the generated code. If you need nine digits, define the field as 9(10); otherwise use 9(8) if a fullword is what you need.
Programmer 2 knows something of this.
The COMPUTE with + 0 is probably deliberate. Why did Programmer 2 use the COMPUTE like that (the original question)?
Now it is going to get complicated.
There are two "types" of "binary" fields on the Mainframe: those which will contain values limited by the PICture clause (USAGE BINARY, COMP and COMP-4); those which contain values limited by the field size (USAGE COMP-5).
With BINARY/COMP/COMP-4, the size of the field is determined from the PICture, and so are the values that can be held. PIC 9(4) is a halfword, with a maximum value of 9999. PIC S9(4) is a halfword with values -9999 through +9999.
With COMP-5 (Native Binary), the PICture just determines the size of the field, and all the bits of the field are relevant to its value. PIC 9(1) to 9(4) define halfwords, PIC 9(5) to 9(9) define fullwords, and PIC 9(10) to 9(18) define doublewords. So a PIC 9(1) COMP-5 can hold a maximum of 65535, and an S9(1) COMP-5 can hold -32,768 through +32,767.
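As a minimal sketch of that distinction (data definitions only; the comments describe IBM Enterprise COBOL behaviour, using inline *> comments):

01 HALF-STD PIC S9(4) BINARY.
*> halfword; values limited by the PICture: -9999 through +9999
01 HALF-NAT PIC S9(4) COMP-5.
*> halfword; values limited by the field size: -32768 through +32767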
All well and good. Then there is the compiler option TRUNC, which has three settings: STD (the default), BIN and OPT.
BIN can be considered to have the most far-reaching effect. BIN makes BINARY/COMP/COMP-4 behave like COMP-5: everything becomes, in effect, COMP-5. PICtures for binary fields are ignored, except in determining the size of the field (and, curiously, with ON SIZE ERROR, which "errors" when the maxima according to the PICture are exceeded). Native Binary, in IBM Enterprise COBOL, generates, in the main though not exclusively, the "slowest" code. Truncation is to field size (halfword, fullword, doubleword).
STD, the default, is "standard" truncation. This truncates to the PICture; it is therefore a "decimal" truncation.
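To make that concrete, here is a sketch of the same MOVE under the two options seen so far, using the HALF-STD definition from the sketch above (the compiler will warn about possible truncation of the literal):

MOVE 12345 TO HALF-STD
*> TRUNC(STD): decimal truncation to the PICture, HALF-STD ends up as 2345
*> TRUNC(BIN): binary truncation to the halfword; 12345 fits, so it is kept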
OPT is for "performance". With OPT, the compiler truncates in whatever way is most "performant" for a particular code sequence. This can mean that intermediate and final values have bits set which are "outside the range" of the PICture. However, when used as a source field, a binary field will always reflect only the value specified by the PICture, even if there are "excess" bits set.
It is important when using OPT that all binary fields "conform to PICture", meaning that code must never rely on bits which are set outside the PICture definition.
Note: even with TRUNC(OPT) in effect, the OPTimizer (OPT(STD) or OPT(FULL)) can still provide further optimisations.
This is all well and good.
However, a "pickle" can readily ensue if you "mix" TRUNC options, or if the binary definition in a CALLing program is not the same as in the CALLed program. The "mix" can occur if modules within the same run-unit are compiled with different TRUNC options, or if a binary field on a file is written with one TRUNC option and later read with another.
Now, I suspect Programmer 2 encountered something like this: either, with TRUNC(OPT), they noticed "excess bits" in a field and thought there was a need to deal with them; or, through a "mix" of options in a run-unit or across file usage, they noticed "excess bits" where there genuinely was a need to do something about it (the correct something being to remove the "mix").
Programmer 2 developed the COMPUTE A = B + 0 to "deal" with a particular problem (perceived or actual) and then applied it generally to their work.
This is a "guess", or, better, a "rationalisation" which works with the known information.
It is a "fake" fix. There was either no problem (the normal way that TRUNC(OPT) works) or the correct resolution was "normalisation" of the TRUNC option across modules/file use.
I do not want loads of people now rushing off and putting COMPUTE A = B + 0 in their code. For a start, they don't know why they are doing it. For a continuation it is the wrong thing to do.
Of course, do not just remove the "+ 0" from any of these that you find. If there is a "mix" of TRUNCs, a program may stop "working".
There is one situation in which I have used ADD ZERO for a BINARY/COMP/COMP-4 field. This is in a "Mickey Mouse" program, a program with no purpose but to try something out. There I used it as a method to "trick" the optimizer, which would otherwise see unchanging values and generate code using literal results, since all values were known at compile time. (A perhaps "neater" and more flexible way to do this, which I picked up from PhilinOxford, is to use ACCEPT for the field.) This is, for certain, not the case with the code in question.
I wonder if a testing version of the source ever had
COMPUTE WS-SUB = I + 0
    ON SIZE ERROR
        DISPLAY "WS-SUB overflow"
        STOP RUN
END-COMPUTE
with the range test discarded when the developer was satisfied and cleaning up? MOVE doesn't allow an ON SIZE ERROR clause; that's as much of a reason as I can see. Or perhaps it was developer habit to use COMPUTE to move, as a subtle reminder to question the need for defensive code at every step? Or perhaps, as Joe pointed out, not knowing that the SIZE clause would be just as effective without the + 0? Or a maintainer struggled with off-by-one errors and there was a corrective change from 1 to 0 after testing?