Not sure how I can do a minimal working code example here, but I will at least try to explain what is going on.
I have a utility that processes some text files to extract data, and then provides that data back in various ways depending on command line options and some data in an XML file. There is a variety of "Processes" I could look for in the data, and I can output the final data in one or more formats (txt, csv, xml), with potentially different "Processes" being output to different file types. And since I am processing potentially hundreds of txt files, I want to multi-thread this. So I have created a function; the mix of processes to monitor and output types to emit is the same for every txt file, and I want to compile everything into a single big data structure at the end, so I have created that final data structure as a hash of hashes, but empty of data. Then I use
Set-Variable resultsContainer -option:constant -value:$results
to make a constant, which is empty of data, to hand off to the function. The idea is that I can hand the function an empty container plus the path of a txt file, and it can fill the container and return it, where I can in theory use it to fill the mutable data container. So I do a foreach on the txt files, and pass what SHOULD be an empty container each time, like this:
$journalResults = Invoke-PxParseJournal -journal "$source\$journal" -container $resultsContainer
However, instead of staying empty, as I would expect for a constant, I am effectively passing all the previous journals' data to each successive iteration of the function. I proved this to myself by initializing a counter to 1 and then running this loop after the Invoke-PxParseJournal call:
foreach ($process in $resultsContainer.Keys) {
    foreach ($output in $resultsContainer.$process.Keys) {
        foreach ($item in $resultsContainer.$process.$output) {
            Write-Host "$(Split-Path $journal -leaf) $process $output $item$('!' * $count)"
        }
    }
}
After the first Invoke the loop produces nothing, but from there everything is appended. So I see results like this:
journal.0002.txt Open_Project csv Gordon,1/20/2017 12:08:43 AM,Open an existing project,.\RFO_Benchmark - previous.rvt,0:00:22.012!!
journal.0003.txt Open_Project csv Gordon,1/20/2017 12:08:43 AM,Open an existing project,.\RFO_Benchmark - previous.rvt,0:00:22.012!!!
journal.0004.txt Open_Project csv Gordon,1/20/2017 12:08:43 AM,Open an existing project,.\RFO_Benchmark - previous.rvt,0:00:22.012!!!!
Identical repeats each time. Even odder, if I rerun the script in the console I WILL get an error saying:
Set-Variable : Cannot overwrite variable resultsContainer because it is read-only or constant.
But still the results show data being appended. Now, my first thought was that because I was using the same variable name in the function as in the root script, I was dealing with some scoping problem, so I changed the name of the variable in the function and gave it an alias, like this:
[Alias('container')]$parseJournal
I then populate and return the $parseJournal variable.
No change in behavior, which now has me wondering if I just don't understand how parameters are passed. I had thought it was ByVal, but this is acting like ByReference, so even with the name change I am in fact just adding to the same data structure in memory each time.
Is there something obvious here that I am missing? FWIW, I am on PS 2.0 at the moment. I don't have a Win10 VM I can spin up easily at the moment to test there.
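For anyone hitting the same thing, here is a minimal sketch of the reference semantics in play: the constant protects the variable binding, not the hashtable it points to, and a hashtable argument hands the function that same object, not a copy (Add-PxEntry is a hypothetical stand-in for Invoke-PxParseJournal).
$results = @{}
Set-Variable resultsContainer -Option Constant -Value $results
# Reassigning the constant fails ("read-only or constant"), but the object it
# references is still mutable, and the parameter receives that same reference
function Add-PxEntry ($container) { $container['Open_Project'] = @('csv') }   # hypothetical helper
Add-PxEntry -container $resultsContainer
$resultsContainer.Count   # 1 - the "constant" container now holds data
One fix along these lines is to build (or deep-copy) a fresh container inside the function for each journal instead of reusing the shared one.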
I have an existing email template file for Outlook with To, CC, Subject and Body prefilled.
I can replace the values I need in the subject just fine; however, when it comes to the HTMLBody part, it only replaces values outside the table. I've tested this by putting all 15 placeholders outside the table.
In PowerShell, I defined an array with the items that will be replaced and another with the values read from a JSON file, then I loop through both in order to replace the values in the HTMLBody.
This is the code in question:
$emailToreplaceValues = @(
    "[DailyReportDate]",
    "[DailyReportSuccess]",
    "[DailyReportFailure]",
    "[DailyReportFailureRate]"
)
$newValues = @(
    $valuesJSON.DailyReport.Date,
    $valuesJSON.DailyReport.Success,
    $valuesJSON.DailyReport.Failure,
    $dailyReportFailureRate
)
$reportEmail = $outlookObj.CreateItemFromTemplate("$emailTemplate")
$reportEmail.Subject = $reportEmail.Subject.Replace("[date]", $date)
for ($i = 0; $i -lt $newValues.Count; $i++) {
    $reportEmail.HTMLBody = $reportEmail.HTMLBody.Replace($emailToreplaceValues[$i], $newValues[$i])
}
There are more values, but for the sake of brevity I only included a few. From my understanding, the issue is that some of those values are inside an HTML table cell, but I don't know if I can access the table or cells directly.
Firstly, do not use the MailItem.HTMLBody property as a working variable - it is expensive to set and read, and it might not be the same HTML you set, as Outlook performs some massaging and validation. Introduce an explicit variable, set it to the value of HTMLBody, do all your string replacements in a loop using that variable, then set the MailItem.HTMLBody property once.
You can also try to output the value of that variable to make sure the old values to be replaced are really there and are not broken by HTML formatting or encoding.
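A minimal sketch of that approach, reusing the variable names from the question:
$html = $reportEmail.HTMLBody            # read the property once
for ($i = 0; $i -lt $newValues.Count; $i++) {
    $html = $html.Replace($emailToreplaceValues[$i], $newValues[$i])
}
$reportEmail.HTMLBody = $html            # write the property once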
For the sake of future reference, the only way I was able to fix this was by grabbing the HTML code off the email that I based my email template on.
I organized it so that any tags I want replaced are on their own line, without anything else other than the spaces for indentation, then defined it as a variable that goes through the replace cycle and gets assigned to the MailItem.HTMLBody property after the replace cycle.
I have installed the Import-Excel module for PowerShell by dfinke, which has great functionality, but I'm facing some trouble with the headers.
I would like to insert only the text into a string array, but instead it comes with the header even when -NoHeader is declared. According to the documentation that is not what -NoHeader is for, but I'm looking for a way to do it. So far I came up with a newbie solution of $xlsxArray | Format-Table -HideTableHeaders | Out-File C:\temp\info.txt and then removing the spaces with .Trim() so the file doesn't get written as @{P1=ContentofTheCell}.
Is there a better way to accomplish it?
Thank you so far.
You didn't give enough detail about the desired output, but I'll try to give guidance.
Import-Excel will return objects. Normally the column headers become the property names on the objects. When you use -NoHeader, the properties are simply named P1, P2, etc. An object's properties must have names. If you want just the data from those properties you may have to process it differently. You can access the properties like any other object collection:
$ExcelData = Import-Excel "C:\Temp\Some.xlsx"
$ExcelData.PropertyName
The PropertyName would be the column header from the file. So let's say I had a column named Balance in that file; then the example would look something like:
$ExcelData = Import-Excel "C:\Temp\Some.xlsx"
$ExcelData.balance
Output:
7254.74
4268.16
3051.32
64.77
323.22
146.62
14798.83
Note: these are pretty simple examples. Obviously things can get more complex.
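For the -NoHeader case in the question, a hedged sketch (assuming the text sits in the first column, so the generated property is P1) that collects just the cell text into a string array, without the Format-Table/Trim workaround:
$xlsxArray = Import-Excel "C:\Temp\Some.xlsx" -NoHeader
[string[]]$cellText = $xlsxArray | ForEach-Object { $_.P1 }   # values only, no header
$cellText | Out-File C:\temp\info.txt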
With code like:
fopen("DD:LOGLIBY(L1234567)", "w");
and JCL like:
//LOGTEST EXEC PGM=LOGTEST
//LOGLIBY DD DSN=MYUSER.LOG.LIBY,DISP=SHR
I can create PDS(E) members while at the same time browsing the PDS(E) to look at existing members, as expected with DISP=SHR.
If instead I code:
fopen("//'MYUSER.LOG.LIBY(L1234567)'", "w");
The fopen fails if I am browsing the PDS(E) at the time, or the browse of the PDS(E) fails while I have the file open. In other words, there is no DISP=SHR. According to the fopen() documentation, DISP=SHR is the default when using file modes of "r" etc., but not "w".
How can I provide DISP=SHR in the second example?
There are two possibilities...
For anyone not familiar with the internal structure of partitioned datasets (that is, PDS or PDS/E), these datasets are logically divided into two parts: a "directory" having pointers to all the individual members, and a "data" area containing the actual records for the individual members:
PDS: <DIRECTORY BLOCKS>
<MEMBER1>: ADDRESS OF DATA FOR MEMBER1 (xxx)
<MEMBER2>: ADDRESS OF DATA FOR MEMBER2 (yyy)
...
<DIRECTORY FREESPACE>
...
<EOF - END OF THE PDS DIRECTORY>
<DATA PORTION>
+xxx = DATA FOR MEMBER1
...
<EOF - END OF MEMBER1>
+yyy = DATA FOR MEMBER2
...
<EOF - END OF MEMBER2>
...
FREE SPACE (ALLOCATED, BUT UNUSED)
...
END OF PDS
Throughout the next few paragraphs, keep in mind that you can either open the entire PDS/PDSE, which enables you to read/write whatever members you like, or you can allocate and open a single member, which gets processed like any other sequential file.
First, if you actually have a DD statement coded as you show in the question, then you may simply need to change your open from fopen(dsn,...) to fopen("DD:ddname",...). If you're running under the UNIX Shell, or you do something that results in your process running in a different address space (such as fork()), then this might not work, but it may be worth a try. If you do this with the JCL you show, the challenge would be managing the PDS/E directory - you'd need to issue your own STOW when you create/update a member, since the JCL allocates the entire dataset, not just a single member. The sequence would be:
Open the DD for output.
Write your data.
Update the PDS or PDS/E directory with new member information (this is where the STOW function comes in - it updates the directory of the PDS/PDSE to reflect the member you created or updated).
Close the file.
If you also need to read members, you'd need to issue FIND (or BLDL/POINT - which can be fseek() in C) to point to the correct member, then read the member. I'm sure it sounds like a hassle, but the advantage of this approach is that you can allocate/open the file once, and process as many individual members as you like.
A second workaround might be to dynamically allocate the file yourself, and then open it using DD:ddname syntax...if you only infrequently access the file, this is probably easier to code. The gory details of dynamic allocation are fully described here: https://www.ibm.com/support/knowledgecenter/SSLTBW_2.4.0/com.ibm.zos.v2r4.ieaa800/reqsvc.htm.
There are several ways to invoke dynamic allocation: you can write a small assembler program, you can use the z/OS UNIX Services BPXWDYN callable service, or you can use the C runtime "dynalloc()" or "svc99()" functions. The dynalloc() function is easy to use, but it only exposes a subset of what dynamic allocation can do...svc99() is more cumbersome to use, but it exposes more functionality.
However you do it, dynamic allocation takes "text units" that roughly correspond to the parameters you find on JCL DD statements. What you're describing sounds like you'd just need to pass DSN and DISP text units, and maybe DDNAME (you can either pass your own DDNAME, or let the system generate one for you).
The C runtime functions make all this easy, but be aware that there are a few oddities, such as the need to pad the parameters to their maximum length. For example, a DSN needs to be 44 characters and padded on the right with blanks - not a C-style null-terminated string.
Here's a small code snippet as an example:
#include <stdio.h>
#include <dynit.h>
. . .
int allocate(char *ddn, char *dsn, char *mem)
{
    __dyn_t ip;                     // Parameters to dynalloc()
    . . .
    // Prepare the parameters to dynalloc()
    dyninit(&ip);                   // Initialize the parameters
    ip.__ddname = ddn;              // 8-char blank-padded
    ip.__dsname = dsn;              // 44-char blank-padded
    ip.__status = __DISP_SHR;       // DISP=(SHR)
    ip.__normdisp = __DISP_KEEP;    // DISP=(...,KEEP)
    ip.__misc_flags = __CLOSE;      // FREE=CLOSE
    if (*mem)                       // Optional PDS, PDS/E member
        ip.__member = mem;          // 8-char blank-padded
    // Now we can call dynalloc()...
    if (dynalloc(&ip))              // 0: Success, else error
    {
        // On error, the errcode/infocode explain why - values
        // are detailed in z/OS Authorized Services Reference
        printf("SVC99: Can't allocate %s - RC 0x%x, Info 0x%x\n",
               dsn, ip.__errcode, ip.__infocode);
        return FALSE;
    }
    // If dynalloc works, you can open the file with fopen("DD:ddname",...)
    return TRUE;
}
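As a hedged usage sketch (reusing the DDNAME and dataset from the question's JCL, and assuming the allocate() helper above), note the blank-padding the parameters require:
#include <string.h>
. . .
char ddn[9] = "LOGLIBY ";             /* 8 chars, blank-padded           */
char mem[9] = "L1234567";             /* 8 chars, already full width     */
char dsn[45];
memset(dsn, ' ', 44);                 /* 44 chars, blank-padded -        */
dsn[44] = '\0';                       /* not a C-style string            */
memcpy(dsn, "MYUSER.LOG.LIBY", 15);
if (allocate(ddn, dsn, mem))          /* DISP=SHR via dynalloc()         */
{
    FILE *fp = fopen("DD:LOGLIBY", "w");
    /* ... write the member ... */
    fclose(fp);                       /* FREE=CLOSE frees the allocation */
}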
Don't forget that when you're done with the file, you generally need to deallocate it.
The code snippet above uses "FREE=CLOSE" - this means that when the file is closed, z/OS will automatically free the allocation...if you only open and process the dataset once, this is a convenient approach. If you need to repeatedly open and close the file, then you wouldn't use FREE=CLOSE, but instead call dynamic allocation a second time after you're done with your processing and want to free the file.
If you need to concurrently access multiple files, beware that you'll need to generate multiple unique DDNAMEs. You can either do this in your own code, or you can use the form of dynamic allocation that automatically builds and returns a usable DDNAME (of the form "SYSnnnnn").
Also, don't forget that updating a dataset under DISP=SHR can be dangerous in some situations, especially if the dataset involved can be a conventional PDS as well as a PDS/E. The big danger is that two applications open the dataset for output concurrently...both will write data into the same place, and the result will likely be a damaged PDS directory.
There are some other oddities in the UNIX Services environment, particularly if you use fork() or exec() and expect file handles to work in subprocesses since allocations are generally tied to a particular z/OS address space. Services like spawn() can let the child process run in the same address space, so this is one possibility.
I'm using a function that I call from another script. It prompts a user for input until it gets back something that is not empty or null.
function GetUserInputValue($InputValue)
{
    do {
        $UserValue = Read-Host -Prompt $InputValue
        if (!$UserValue) { $InputValue + ' cannot be empty' }
    } while (!$UserValue)
    $UserValue
    return $UserValue
}
The issue is quite strange and likely a result of my lack of PowerShell experience. When I run the code and provide empty input, the messages from the if statement queue up and only display when I finally provide valid input. See my console output below.
Console Results
test:
test:
test:
test:
test:
test:
test: 1
test cannot be empty
test cannot be empty
test cannot be empty
test cannot be empty
test cannot be empty
test cannot be empty
1
I can make this work, however, in the main file with hard-coded values:
do {
    $Server = Read-Host -Prompt 'Server'
    if (!$Server) { 'Server cannot be empty' }
} while (!$Server)
I'm working in Visual Studio Code. The function lives in another file I've named functions.ps1, and I call it from my main file like this:
$test = GetUserInputValue("test")
$test
When you put a naked value in a script, like "here's a message" or 5, or even a variable by itself such as $PID, what you're implicitly doing is calling Write-Output against that value.
That returns the object to the pipeline, and it gets added to the objects that the enclosing scope returns. So in a function, it's part of the return value of the function; in a ForEach-Object block, it's part of the return value of the block; and so on. This bubbles all the way back up the stack / pipeline.
When it has nowhere higher to go, the host handles it.
The console host (powershell.exe) or ISE host (powershell_ise.exe) handle this by displaying the object on the console; this just happens to be the way they handle it. Another host (a custom C# application for example can host the powershell runtime) might handle it differently.
So what's happening here is that you are returning the message that you want to display, as part of the return value of your function, which is not what you want.
Instead, you should use Write-Host, as this writes directly to the host, skipping the pipeline. This is the correct command to use when you want to display a message to the user that must be shown (for other information you can use different commands like Write-Verbose, Write-Warning, Write-Error, etc.).
Doing this will give you the correct result, and prevent your informational message from being part of the return value of your function.
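Applied to the function from the question, a minimal sketch:
function GetUserInputValue($InputValue)
{
    do {
        $UserValue = Read-Host -Prompt $InputValue
        # Write-Host prints immediately and bypasses the pipeline, so the
        # message can no longer leak into the function's return value
        if (!$UserValue) { Write-Host "$InputValue cannot be empty" }
    } while (!$UserValue)
    $UserValue
}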
Speaking of which, you are returning the value twice. You don't need to do:
$UserValue
return $UserValue
The first one returns the value anyway (see the top of this answer); the second one does the same thing, except that it returns immediately. Since it's at the end of the function anyway, you can use either one, but only use one.
One more note: do not call PowerShell functions with parentheses:
$test = GetUserInputValue("test")
This works only because the function has a single parameter. If it had multiple parameters and you attempted to call it like a method (with parentheses and commas), it would not work correctly. You should separate arguments with spaces, and you should usually call parameters by name:
$test = GetUserInputValue "test"
# better:
$test = GetUserInputValue -InputValue "test"
I am new to MATLAB and am trying to run a loop within a loop. I define a variable ID beforehand, for example ID={'100'}. In my loop, I then want to go to the ID's directory and load the .mat file there. However, whenever I load the .mat file, the ID definition suddenly gets overridden by all possible IDs (all folders in the directory where the ID 100 also lies). Here is my code - I also tried fullfile, but no luck so far:
ID = {'100'}
for subno = 1:length(ID) % first loop
    try
        for sessno = 1:length(Session) % second loop, for each ID there are two sessions
            subj_vec_name = [ID{subno} '_' Session{sessno} '_vectors_SC.mat'];
            cd(['C:\' ID{subno} '\' Session{sessno}]);
            load(subj_vec_name) % the problem occurs here, when loading, not before
        end
    end
end
When I then check the length of ID, it is no longer 1 (one ID, namely 100); instead it contains all possible IDs within the directory where 100 also lies, and the loop iterates again for all of them (although it should stop after ID 100).
You should always specify an output for load, to prevent overwriting variables in your workspace and polluting it with all of the contents of the file. There is an extensive discussion of some of the strange potential side-effects of not doing this here.
Chances are, you have a variable named ID inside the .mat file and it overwrites the ID variable in your workspace. This can be avoided using the output of load.
The output of load is a struct which can be used to access your data.
data = load(subj_vec_name);
%// Access variables from file
id_from_file = data.ID;
%// Can still access ID from your workspace!
ID
Side Note
It is generally not ideal to change directories to access data. This is because if a user runs your script, they may start in a directory they want to be in but when your program returns, it dumps them somewhere unexpected.
Instead, you can use fullfile to construct a path to the file and not have to change folders. This also allows your code to work on both *nix and Windows systems.
subj_vec_name = [ID{subno} '_' Session{sessno} '_vectors_SC.mat'];
%// Construct the file path
filepath = fullfile('C:', ID{subno}, Session{sessno}, subj_vec_name);
%// Load the data without changing directories
data = load(filepath);
With the command load(subj_vec_name) you are loading the complete .mat file located there. If this .mat file contains the variable ID, it will overwrite your initial ID.
This should not cause your outer for-loop to execute more than once. The vector 1:length(ID) is created by the initial for-loop and should not get overwritten by subsequent changes to length(ID).
If you insert disp(ID) before and after the load command and post the output it might be easier to help.
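For example, a minimal check along those lines:
disp(ID)                      % before the load: {'100'}
load(subj_vec_name)           % bare load - may pull an ID variable from the file
disp(ID)                      % after the load: compare with the first output
With the struct form (data = load(subj_vec_name);) both disp calls should print the same {'100'}.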