Using PowerShell and HTMLBody.Replace, how do I replace values inside an existing table?

I have an existing email template file for Outlook with To, CC, Subject and Body prefilled.
I can replace the values I need in the Subject just fine; however, when it comes to the HTMLBody, only values outside the table get replaced. I've tested this by putting all 15 placeholders outside the table.
In PowerShell, I define an array with the placeholders to be replaced and another that reads the values from a JSON file, then loop through both to replace the values in the HTMLBody.
This is the code in question:
$emailToreplaceValues = @(
    "[DailyReportDate]",
    "[DailyReportSuccess]",
    "[DailyReportFailure]",
    "[DailyReportFailureRate]"
)
$newValues = @(
    $valuesJSON.DailyReport.Date,
    $valuesJSON.DailyReport.Success,
    $valuesJSON.DailyReport.Failure,
    $dailyReportFailureRate
)
$reportEmail = $outlookObj.CreateItemFromTemplate("$emailTemplate")
$reportEmail.Subject = $reportEmail.Subject.Replace("[date]", $date)
for ($i = 0; $i -lt $newValues.Count; $i++) {
    $reportEmail.HTMLBody = $reportEmail.HTMLBody.Replace($emailToreplaceValues[$i], $newValues[$i])
}
There are more values, but for the sake of brevity I only included a few. From my understanding, the issue is that some of those values are inside an HTML table cell, but I don't know whether I can access the table or its cells directly.

Firstly, do not use the MailItem.HTMLBody property as a variable: it is expensive to set and read, and the HTML you read back might not be the same HTML you set, because Outlook performs some massaging and validation. Introduce an explicit variable, set it to the value of HTMLBody, do all your string replacements in a loop on that variable, then set the MailItem.HTMLBody property once.
You can also output the value of that variable to make sure the old values to be replaced are really there and have not been broken up by HTML formatting or encoding.
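A minimal sketch of that pattern, reusing the arrays from the question (assuming the placeholder and value arrays line up one to one):
$htmlBody = $reportEmail.HTMLBody                # read the body once
for ($i = 0; $i -lt $newValues.Count; $i++) {
    $htmlBody = $htmlBody.Replace($emailToreplaceValues[$i], [string]$newValues[$i])
}
$reportEmail.HTMLBody = $htmlBody                # write it back once, after all replacements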

For future reference, the only way I was able to fix this was by grabbing the HTML code from the email that I based my email template on.
I reorganized it so that every placeholder I want replaced sits on its own line, with nothing other than the spaces used for indentation, then defined that HTML as a variable, ran the replace cycle against the variable, and only assigned it to the MailItem.HTMLBody property after the replace cycle.
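For illustration, the relevant part of such a template table ends up looking roughly like this, with each placeholder on its own line (markup simplified; placeholder names match the question):
<table>
  <tr>
    <td>
      [DailyReportSuccess]
    </td>
    <td>
      [DailyReportFailure]
    </td>
  </tr>
</table>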


How do I prevent users from using the thousands separator in FileMaker Pro?

In FileMaker Pro, when using a number field, the user can choose whether or not to use a thousands separator. For example, if I have a database with a field for the price of an item, the user can enter either 1,000 or 1000.
I am using my database to generate an XML file that needs to be uploaded. The thing is that my XML schema dictates that only a value of 1000 is allowed, not 1,000. Therefore, I want to either automatically remove the comma, or (my preference in this case) alert the user when they try to enter a value with a thousands separator.
What I tried is the following.
For the field, I am setting Validation options. For example:
Require Strict data type: Numeric Only
Validated by calculation: Position ( Self ; ","; 1 ; 1 ) = 0
Validated by calculation: Self = Substitute ( Self ; "," ; "" )
Auto-enter calculation: Filter( Self ; "0123456789." )
Unfortunately, none of these work. As the field is defined as a number (and I want to keep it like this, as I am also performing calculations based on this number), the Position function and the Substitute function apparently ignore the thousand separator!
EDIT:
Note that I am generating my XML by concatenating a string, for example:
"<Products><Product><Name>" & Name & "</Name><Price>" & Price & "</Price></Product></Product>"
The reason is that what I am exporting is dependent on the values in my database. Therefore, I am not using the [File][Export records...] function.
Auto-enter calculation will work, but you need to uncheck the box "Do not replace existing value of field" (which is checked by default).
I'd suggest using the calculation GetAsNumber ( Self ) as the auto-enter calc. If it should only contain integers, wrap that in a call to Int().
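As a minimal sketch, the auto-enter calculation could simply be (assuming the field should only ever hold whole numbers):
Int ( GetAsNumber ( Self ) )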
I am using my database to generate an XML file that needs to be uploaded. The thing is, that my XML scheme dictates that only a value of 1000 is allowed and not 1,000.
If this is only a problem when you export, why not handle it when exporting?
If you are exporting as XML using XSLT, you can add an instruction to your stylesheet to remove the comma from all number fields.
Alternatively, you can export from a layout where the field is formatted to display without the comma and select the "Apply current layout's data formatting to exported data" option when exporting.
Added:
Perhaps I should have clarified. I am not using the export function to generate the XML as there is some logic involved in how the XML should be formatted (dependent on the data that I want to export). What I do instead is that I make a string where I combine XML-tags and actual values from the database.
IMHO, you're making a mistake by not taking advantage of the built-in XML/XSLT export option. Any imaginable logic can be implemented this way, without burdening your solution with the fragile task of creating valid XML by hand.
In any case, if you're using the field in a calculation, you can replace all references to it with:
GetAsNumber ( YourField )
to get an unformatted, numeric-only value.
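Applied to the concatenation from the question's edit, that would look something like this (field names as in the question):
"<Products><Product><Name>" & Name & "</Name><Price>" & GetAsNumber ( Price ) & "</Price></Product></Products>"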
Your question puzzles me. As far as I know, FileMaker does not store the thousands separator, but rather offers it only as a display option.
That's also why those functions can't find it.
Are you sure you are exporting the raw data and not a "formatted as layout" variant?

Handle POST data sent as array

I have an html form which sends a hidden field and a radio button with the same name.
This allows people to submit the form without picking from the list (but records a zero answer).
When the user does select a radio button, the form posts BOTH the hidden value and the selected value.
I'd like to write a perl function to convert the POST data to a hash. The following works for standard text boxes etc.
#!/usr/bin/perl
use CGI qw(:standard);

sub GetForm {
    my %form;
    foreach my $p (param()) {
        $form{$p} = param($p);
    }
    return %form;
}
However, when faced with two form inputs with the same name, it just returns the first one (i.e. the hidden one).
I can see that both inputs are included in the POST data as an array, but I don't know how to process them.
I'm working with legacy code so I can't change the form unfortunately!
Is there a way to do this?
I have an html form which sends a hidden field and a radio button with
the same name.
This allows people to submit the form without picking from the list
(but records a zero answer).
That's an odd approach. It would be easier to leave the hidden input out and treat the absence of the data as a zero answer.
However, if you want to stick to your approach, read the documentation for the CGI module.
Specifically, the documentation for param:
When calling param(): if the parameter is multivalued (e.g. from multiple selections in a scrolling list), you can ask to receive an array. Otherwise the method will return the first value.
Thus:
$form{$p} = [ param($p) ];
However, you do seem to be reinventing the wheel. There is a built-in method to get a hash of all parameters:
my $form = CGI->new->Vars;
That said, the documentation also says:
CGI.pm is no longer considered good practice for developing web applications, including quick prototyping and small web scripts. There are far better, cleaner, quicker, easier, safer, more scalable, more extensible, more modern alternatives available at this point in time. These will be documented with CGI::Alternatives.
So you should migrate away from this anyway.
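A small sketch of the Vars route; note that CGI.pm packs multiple values submitted under one name into a single "\0"-separated string, so you may want to split them back apart (the radio-overrides-hidden logic below is an assumption about the form's intent):

use strict;
use warnings;
use CGI;

my $q    = CGI->new;
my %form = $q->Vars;    # multivalued parameters arrive joined with "\0"

for my $name (keys %form) {
    my @values = split /\0/, $form{$name};
    # keep the last non-empty value (the selected radio button),
    # falling back to the hidden zero-answer default otherwise
    my ($picked) = grep { length } reverse @values;
    $form{$name} = $picked if defined $picked;
}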
Replace
$form{$p} = param($p); # Value of first field named $p
with
$form{$p} = ( multi_param($p) )[-1]; # Value of last field named $p
or
$form{$p} = ( grep length, multi_param($p) )[-1]; # Value of last field named $p
# that has a non-blank value
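Folded back into the original subroutine, the second variant might look like this (a sketch; it requires a CGI.pm recent enough to provide multi_param):

use CGI qw(:standard);

sub GetForm {
    my %form;
    foreach my $p (param()) {
        # last non-blank value: a selected radio button wins over
        # the hidden zero-answer field of the same name
        $form{$p} = ( grep length, multi_param($p) )[-1];
    }
    return %form;
}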

jQuery wildcard for using attributes in selectors

I've researched this topic extensively and I'm asking as a last resort before assuming that there is no wildcard for what I want to do.
I need to pull all the text input elements from the document and add them to an array. However, I only want to add the input elements that have an id.
I know you can use the \S* wildcard with an id selector such as $(#\S*), however I can't use this because I also need to filter the results to the text type only, so I am searching by attribute.
I currently have this:
values_inputs = $("input[type='text'][id^='a']");
This works how I want it to, but it brings back only the text input elements whose id starts with an 'a'. I want to get all the text input elements with an id of anything.
I can't use:
values_inputs = $("input[type='text'][id^='']"); //or
values_inputs = $("input[type='text'][id^='*']"); //or
values_inputs = $("input[type='text'][id^='\\S*']"); //or
values_inputs = $("input[type='text'][id^=\\S*]");
//I either get no values returned or a syntax error for these
I guess I'm just looking for the equivalent of * in SQL for jQuery attribute selectors.
Is there no such thing, or am I just approaching this problem the wrong way?
Actually, it's quite simple:
var values_inputs = $("input[type=text][id]");
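If the goal is a plain array rather than a jQuery object, .get() converts the matched set (a small sketch reusing the same selector):
var values_inputs = $("input[type='text'][id]").get(); // plain array of DOM elements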
Your logic is a bit ambiguous. I believe you don't want elements with any id, but rather elements where id does not equal an empty string. Use this.
values_inputs = $("input[type='text']")
    .filter(function() {
        return this.id != '';
    });
Try changing your selector to:
$("input[type='text'][id]")
I figured out another way to use wildcards very simply. This helped me a lot, so I thought I'd share it.
You can use attribute wildcards in selectors in the following way to emulate the use of '*'. Let's say you have a dynamically generated form in which elements are created with the same naming convention except for dynamically changing digits representing the index:
id='part_x_name' //where x represents a digit
If you want to retrieve only the text input elements whose ids contain certain parts of that name, you can do the following:
var inputs = $("input[type='text'][id^='part_'][id$='_name']");
and voila, it will retrieve all the text input elements that have "part_" at the beginning of the id string and "_name" at the end of the string. If you have something like
id='part_x_name_y' // again x and y representing digits
you could do:
var inputs = $("input[type='text'][id^='part_'][id*='_name_']"); // the *= operator matches the substring anywhere in the id
Depending on what the other ids in your document are named, this may get a little trickier if other elements follow similar naming conventions. You may have to get a little more creative in specifying your wildcards, but in most common cases this will be enough to get what you need.

Text input through SSRS parameter including a Field name

I have an SSRS "statement" type report that has a general layout of text boxes and tables. For the main text box I want to let the user supply the value as a parameter so the text can be customized, i.e.
Parameters!MainText.Value = "Dear Mr.Doe, Here is your statement."
then I can set the text box value to be the value of the parameter:
=Parameters!MainText.Value
However, I need to be able to allow the incoming parameter value to include a dataset field, like so:
Parameters!MainText.Value = "Dear Mr.Doe, Here is your [Fields!RunDate.Value] statement"
so that my report output would look like:
"Dear Mr.Doe, Here is your November statement."
I know that you can define it to do this in the text box by supplying the static text and the field request, but I need SSRS to recognize that inside the parameter string there is a field request that needs to be escaped and bound.
Does anyone have any ideas for this? I am using SSRS 2008R2
Have you tried concatenating?
Parameters!MainText.Value = "Dear Mr.Doe, Here is your" & [Fields!RunDate.Value] & "statement"
There are a few dramatically different approaches. To know which is best for you will require more information:
Embedded code in the report. Probably the quickest to implement would be embedded code that returns the parameter after calling String.Replace() appropriately to substitute in the dynamic values (see the sketch after this list). You'll need to establish a convention with the user for which strings will be replaced. Embedded code also gets you access to many objects in the report. For example:
Public Function TestGlobals(ByVal s As String) As String
    Return Report.Globals.ExecutionTime.ToString
End Function
will return the execution time. Other methods of accessing parameters for the report are shown here.
1.5 If this function is getting very large, look at using a custom assembly. Then you can have a better authoring experience with Visual Studio
Modify the XML. Depending on where you use
this, you could directly modify the .rdl/.rdlc XML.
Consider other tools, such as Report Builder. If you need to give the user more flexibility over report authoring, there are many tools built specifically for this purpose, such as SSRS's Report Builder.
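To illustrate the embedded-code option above, here is a sketch of a replace helper; the [RunDate] token and the month formatting are made-up conventions, so adapt them to whatever you agree on with your users:

Public Function FillPlaceholders(ByVal template As String, ByVal runDate As Date) As String
    ' Swap the agreed-upon token for the formatted field value
    Return template.Replace("[RunDate]", runDate.ToString("MMMM"))
End Function

The text box expression would then be something like =Code.FillPlaceholders(Parameters!MainText.Value, Fields!RunDate.Value).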
Here's another approach: Display the parameter string with the dataset value already filled in.
To do so, create a parameter named RunDate, for example, set its default value to "Get values from a query", and select the appropriate dataset and value field (RunDate). Now the parameter will hold the RunDate field and you can use it elsewhere. Make this parameter hidden or internal and set the correct data type, e.g. Date/Time, so you can format its value later.
Now create the second parameter which will hold the default text you want:
Parameters!MainText.Value = "Dear Mr.Doe, Here is your [Parameters!RunDate.Value] statement"
Not sure if this syntax works but you get the idea. You can also do formatting here e.g. only the month of a Datetime:
="Dear Mr.Doe, Here is your " & Format(Parameters!RunDate.Value, "MMMM") & " statement"
This approach uses only built-in methods and avoids the need for a parser so the user doesn't have to learn the syntax for it.
There is of course one drawback: the user has complete control over the parameter contents and can supply a value that doesn't match the report content - but that is also the case with the String Replace method.
And just for the sake of completeness, there's also the simplistic option: concatenate multiple parameters. Create two parameters named MainTextBeforeRunDate and MainTextAfterRunDate.
The Textbox value expression becomes:
=Parameters!MainTextBeforeRunDate.Value & Fields!RunDate.Value & Parameters!MainTextAfterRunDate.Value.
This should explain itself. The simplest solution is often the best, but in this case I have my doubts. At least this makes sure your RunDate ends up in the final report text.

How to set null values while importing to phpmyadmin?

I'm trying to import a .csv file into phpMyAdmin where several fields are purposefully left blank. I need these fields to register as null values and not just be left as blank strings.
I know in the field properties you can select to allow "null" vs. "not null" for each field, but that still doesn't change the cell to a null value while importing. After the import I can manually go check the null box for each field on each record, but that is unrealistic considering the amount of data I'm working with.
Is there a way to get phpMyAdmin to set these blank cells to null values on import?
I've been experiencing similar issues.
If you download a phpMyAdmin CSV file with NULL values, you'll notice that NULL doesn't get encapsulated with quotes. So you'll have a line like this:
"1";"2";NULL;NULL
"2";"2";NULL;NULL
etc.
However, if you edit a CSV file in something like Open Office Calc, it might change this to put quotes around NULL, like so:
"1";"2";"NULL";"NULL"
"2";"2";"NULL";"NULL"
etc.
What should work is doing a search and replace for ["NULL" = NULL].
In your case, because you have empty (blank) fields, you'll be looking at doing a search and replace like this:
[,, = ,NULL,]
And probably a second pass for NULL values at the end of a line like so:
[,\n = ,NULL\n]
Ancient question, but in case another MySQL noob like myself comes across it.
The find/replace rigamarole jmbertucci describes is avoidable if you're in charge of the creation of the CSV file, for example when you're backing up your own databases. In phpMyAdmin, if you select "custom" export method, you will see replace NULL with: and the default is NULL. Simply change that to "NULL" and you save yourself a step.
I ran into this same problem and jmbertucci's answer worked great. I did run into one additional problem, though. In the case of a row of data like this
"hello","world",,,,,,
which has multiple consecutive empty values, doing a search and replace with [,, = ,NULL,] as jmbertucci suggested won't work as intended on the first pass. Instead you'll end up with
"hello","world",NULL,,NULL,,NULL
You should continue to do the search and replace until you end up with 0 occurrences replaced.
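If the repeated editor passes get tedious, a small script can handle every blank field in one go. A minimal sketch in Python (it assumes a comma-delimited, double-quoted file named data.csv whose values contain no embedded quotes; adjust the delimiter to match your export, e.g. ";" for the phpMyAdmin samples above):

import csv

# Rewrite empty fields as bare NULL (no quotes) so phpMyAdmin imports them as SQL NULL
with open("data.csv", newline="") as src, open("data_null.csv", "w", newline="") as dst:
    for row in csv.reader(src):
        dst.write(",".join('"%s"' % field if field != "" else "NULL" for field in row) + "\n")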