How do I prevent users from using a thousands separator in FileMaker Pro?

In FileMaker Pro, when using a number field, the user can choose whether or not to type a thousands separator. For example, if I have a database with a field for the price of an item, the user can enter either 1,000 or 1000.
I am using my database to generate an XML file that needs to be uploaded. The problem is that my XML schema dictates that only the value 1000 is allowed, not 1,000. Therefore, I want to either automatically remove the comma, or (my preference in this case) alert the user when they try to enter a value with a thousands separator.
What I tried is the following.
For the field, I am setting Validation options. For example:
Require Strict data type: Numeric Only
Validated by calculation: Position ( Self ; "," ; 1 ; 1 ) = 0
Validated by calculation: Self = Substitute ( Self ; "," ; "" )
Auto-enter calculation: Filter( Self ; "0123456789." )
Unfortunately, none of these work. As the field is defined as a number (and I want to keep it like this, as I am also performing calculations based on this number), the Position function and the Substitute function apparently ignore the thousands separator!
EDIT:
Note that I am generating my XML by concatenating a string, for example:
"<Products><Product><Name>" & Name & "</Name><Price>" & Price & "</Price></Product></Product>"
The reason is that what I am exporting depends on the values in my database. Therefore, I am not using the [File][Export records...] function.

Auto-enter calculation will work, but you need to uncheck the box "Do not replace existing value of field" (which is checked by default).
I'd suggest using the calculation GetAsNumber ( Self ) as the auto-enter calc. If it should only contain integers, wrap that in a call to Int().
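As a minimal sketch, the auto-enter calculation on the price field would be:
GetAsNumber ( Self )
// or, if the field should only ever hold whole numbers:
Int ( GetAsNumber ( Self ) )
With "Do not replace existing value of field" unchecked, entering 1,000 then stores 1000.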

I am using my database to generate an XML file that needs to be uploaded. The thing is, that my XML scheme dictates that only a value of 1000 is allowed and not 1,000.
If this is only a problem when you export, why not handle it when exporting?
If you are exporting as XML using XSLT, you can add an instruction to your stylesheet to remove the comma from all number fields (see the sketch after this list);
Alternatively, you can export from a layout where the field is formatted to display without the comma, and select the "Apply current layout's data formatting to exported data" option when exporting.
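For the XSLT route, a template along these lines would do it (a sketch - the Price element name is hypothetical and depends on your export grammar); the XPath translate() function deletes every comma from the value:
<!-- hypothetical example: copy a Price element with all commas removed -->
<xsl:template match="Price">
  <Price>
    <xsl:value-of select="translate(., ',', '')"/>
  </Price>
</xsl:template>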
Added:
Perhaps I should have clarified. I am not using the export function to generate the XML, as there is some logic involved in how the XML should be formatted (dependent on the data that I want to export). Instead, I build a string that combines XML tags and actual values from the database.
IMHO, you're making a mistake by not taking advantage of the built-in XML/XSLT export option. Any imaginable logic can be implemented this way, without burdening your solution with the fragile task of producing valid XML.
In any case, if you're using the field in a calculation, you can replace all references to it with:
GetAsNumber ( YourField )
to get an unformatted, numeric-only value.

Your question puzzles me. As far as I know, FileMaker does not store the thousands separator, but rather offers it only as a display option.
That's also why those functions can't find it.
Are you sure you are exporting the raw data and not a "formatted as layout" variant?

Related

Using Powershell and HTMLBody.Replace how do I replace values inside an existing table?

I have an existing email template file for Outlook with To, CC, Subject and Body prefilled.
I can replace the values I need on the subject just fine; however, when it comes to the HTMLBody part, it only replaces values outside the table. I've tested this by putting all 15 placeholders outside the table.
In PowerShell, I defined an array with the items that will be replaced and another that reads the values from a JSON file, then I loop through both to replace the values in the HTMLBody.
This is the code in question:
$emailToreplaceValues = @(
    "[DailyReportDate]",
    "[DailyReportSuccess]",
    "[DailyReportFailure]",
    "[DailyReportFailureRate]"
)
$newValues = @(
    $valuesJSON.DailyReport.Date,
    $valuesJSON.DailyReport.Success,
    $valuesJSON.DailyReport.Failure,
    $dailyReportFailureRate
)
$reportEmail = $outlookObj.CreateItemFromTemplate("$emailTemplate")
$reportEmail.Subject = $reportEmail.Subject.Replace("[date]", $date)
for ($i = 0; $i -lt $newValues.Count; $i++) {
    $reportEmail.HTMLBody = $reportEmail.HTMLBody.Replace($emailToreplaceValues[$i], $newValues[$i])
}
There are more values, but for the sake of brevity I only included a few. From my understanding, the issue is that some of those values are inside an HTML table cell, but I don't know if I can access the table or cells directly.
Firstly, do not use the MailItem.HTMLBody property as a variable - it is expensive to set and read, and it might not be the same HTML you set, as Outlook performs some massaging and validation. Introduce an explicit variable, set it to the value of HTMLBody, do all your string replacements in a loop using that variable, then set the MailItem.HTMLBody property once.
You can also try to output the value of that variable to make sure the old values to be replaced are really there and are not broken by HTML formatting or encoding.
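A minimal sketch of that pattern, reusing the variable names from the question:
# Read HTMLBody once, replace against the local copy, write back once.
$html = $reportEmail.HTMLBody
for ($i = 0; $i -lt $emailToreplaceValues.Count; $i++) {
    $html = $html.Replace($emailToreplaceValues[$i], [string]$newValues[$i])
}
$reportEmail.HTMLBody = $html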
For the sake of future reference, the only way I was able to fix this was by grabbing the HTML code from the email that I based my email template on.
I organized it so that any tags I want replaced are on their own line, with nothing else other than the spaces for indentation, then defined it as a variable that goes through the replace cycle and gets assigned to the MailItem.HTMLBody property after the replace cycle.

Parameter field with multiple values not working

I set up a parameter field with multiple values to be used in a SQL query command, and it does not work when more than one value is selected, but works fine with one value selected. And yes, the "Allow multiple values" flag is set to True under Options.
I am trying to go from this:
EMPBNFIT.BENEFITPLAN in ('CONSUMER CHOICE','HMO', 'HS HMO','HS NETWORK CHOICE','HS PPO BASIC NH RPN','HS PPO PLUS NH RPN','MFS CONSUMER CHOICE','NETWORK CHOICE','PPO BASIC NH RPN','PPO PLUS NH RPN','WAIVE MEDICAL')
to this:
WHERE EMPBNFIT.BENEFITPLAN in ('{?MyPlans}')
What a coincidence; I had the same problem this morning. I was able to work around it in Crystal by converting the array of multiple parameter values into a single string, then replacing the IN section with an INSTR comparison.
Make a formula called ParamFix with this logic:
REPLACE(JOIN({?MyPlans}, ","), "&", "; ")
In my case, the different values were separated by an &, but you can replace that based on what comes back from the tables. Then replace the IN comparison with:
INSTR({@ParamFix}, {EMPBNFIT.BENEFITPLAN}) > 0
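To illustrate (a sketch; the exact joined string depends on how your parameter values come back):
// With {?MyPlans} = ["HMO", "NETWORK CHOICE"]:
// {@ParamFix} evaluates to "HMO,NETWORK CHOICE"
// For a record with BENEFITPLAN = "HMO":
// INSTR("HMO,NETWORK CHOICE", "HMO") returns 1, which is > 0, so the record is kept.
One caveat: a plain INSTR match can produce false positives when one plan name is a substring of another (for example, "HMO" inside "HS HMO"); wrapping both the joined string and the field value in delimiters before comparing avoids that.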

Why does Open XML API Import Text Formatted Column Cell Rows Differently For Every Row

I am working on an ingestion feature that will take a strongly formatted .xlsx file and import the records to a temp storage table and then process the rows to create db records.
One of the columns is strictly formatted as "Text", but it seems like the Open XML API handles the column's cells differently on a row-by-row basis. Some of the values, while appearing to be numeric, are truly not (which is why we format the column as Text) -
some examples are "211377", "211727.01", "209395.388", "209395.435".
What these values represent is not important. What happens is that some values (using the Open XML API v2.5 library) are read in properly as text, whether retrieved from the Shared Strings collection or simply from the InnerXML property, while others get sucked in as numbers with what appears to be appended rounding or precision.
For example, "211377", "211727.01" and "209395.435" all come in exactly as they are in the spreadsheet, but "209395.388" is pulled in as "209395.38800000001" (this happens to other values as well).
There seems to be no rhyme or reason to which values get mangled and which ones import fine. What is really frustrating is that if I use the native Import feature in SQL Server Management Studio to ingest the same spreadsheet into a temp table, this does not happen - so how is it that the SSMS import can handle these values as purely text for all rows but the Open XML API cannot?
To begin, your main problem seems to be this:
"209395.388" value is being pulled in as "209395.38800000001"
Yes, in the .xlsx file the value is stored as 209395.38800000001 instead of 209395.388, and that is the correct way to store a floating-point number; nothing is wrong with it. You can simply confirm this with the following code snippet:
string val = "209395.38800000001"; // <= what we extract from Open XML
Console.WriteLine(double.Parse(val)); // <= simply pass it to double and print
The output is:
209395.388 // <= yes, the expected value
So there's nothing wrong with the value you extract from the .xlsx using the Open XML SDK.
Now to cells: yes, a cell can have a variety of formats - numbers, text, booleans, or shared-string text. You can also apply styles to a cell to format its value for display in Excel (e.g. date/time formats, forced strings). This is how Excel handles such a wide variety of data; the .xlsx format has to be a little complex to support it all.
My advice is to inspect each extracted value, identify what it represents (for example, whether it is a number or text), and apply the appropriate parse.
For example:
string val = "209395.38800000001";
Console.WriteLine(float.Parse(val)); // <= float.Parse will deduce a different value: 209395.4
Update:
Here's how the value is saved in the internal XML.
Try it for yourself:
Make an .xlsx file with the value 209395.388 -> change the extension to .zip -> unzip it -> go to the xl/worksheets folder -> open sheet1.xml
You will notice that the value is stored as 209395.38800000001, as seen in the attached image. So there is nothing wrong with the API extracting the stored number; it's your duty to decide what format to apply.
But if you make the whole column Text before adding data, you will see that the .xlsx file holds the data as-is; simply put, as a string.
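As a rough sketch of that advice using the Open XML SDK (the helper method is made up for illustration): resolve shared-string cells through the shared string table, and round-trip numeric cells through double so stored noise like 209395.38800000001 prints as 209395.388:
using System.Globalization;
using System.Linq;
using DocumentFormat.OpenXml.Packaging;
using DocumentFormat.OpenXml.Spreadsheet;

static string GetCellText(SpreadsheetDocument doc, Cell cell)
{
    string raw = cell.CellValue?.InnerText ?? "";
    // Shared-string cells store an index into the shared string table.
    if (cell.DataType != null && cell.DataType.Value == CellValues.SharedString)
    {
        var table = doc.WorkbookPart.SharedStringTablePart.SharedStringTable;
        return table.Elements<SharedStringItem>().ElementAt(int.Parse(raw)).InnerText;
    }
    // Otherwise try to treat the raw value as a number and let
    // double round-tripping trim the floating-point noise.
    if (double.TryParse(raw, NumberStyles.Float, CultureInfo.InvariantCulture, out var d))
        return d.ToString(CultureInfo.InvariantCulture);
    return raw;
}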

Text input through SSRS parameter including a Field name

I have an SSRS "statement" type report that has a general layout of text boxes and tables. For the main text box, I want to let the user supply the value as a parameter so the text can be customized, i.e.
Parameters!MainText.Value = "Dear Mr.Doe, Here is your statement."
then I can set the text box value to be the value of the parameter:
=Parameters!MainText.Value
However, I need to be able to allow the incoming parameter value to include a dataset field, like so:
Parameters!MainText.Value = "Dear Mr.Doe, Here is your [Fields!RunDate.Value] statement"
so that my report output would look like:
"Dear Mr.Doe, Here is your November statement."
I know that you can define it to do this in the text box by supplying the static text and the field request, but I need SSRS to recognize that inside the parameter string there is a field request that needs to be escaped and bound.
Does anyone have any ideas for this? I am using SSRS 2008R2
Have you tried concatenating?
Parameters!MainText.Value = "Dear Mr.Doe, Here is your " & Fields!RunDate.Value & " statement"
There are a few dramatically different approaches. To know which is best for you will require more information:
1. Embedded code in the report. Probably the quickest to implement would be embedded code that returns the parameter, but calls String.Replace() appropriately to substitute in dynamic values (see the sketch after this list). You'll need to establish some convention with the user for which strings will be replaced. Embedded code gives you access to many objects in the report. For example:
Public Function TestGlobals(ByVal s As String) As String
    Return Report.Globals.ExecutionTime.ToString
End Function
will return the execution time. Other methods of accessing parameters for the report are shown here.
1.5 If this function is getting very large, look at using a custom assembly. Then you'll have a better authoring experience in Visual Studio.
2. Modify the XML. Depending on where you use this, you could directly modify the .rdl/.rdlc XML.
3. Consider other tools, such as Report Builder. If you need to give the user more flexibility over report authoring, there are many tools built specifically for this purpose, such as SSRS's Report Builder.
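For example, a sketch of the embedded-code idea (the function name and the [RunDate] token are invented for illustration; you would document whatever tokens you support):
Public Function FillTokens(ByVal template As String, ByVal runDate As String) As String
    ' Replace the documented placeholder token with the value passed in.
    Return template.Replace("[RunDate]", runDate)
End Function
The text box expression then passes the field in, e.g. =Code.FillTokens(Parameters!MainText.Value, Format(Fields!RunDate.Value, "MMMM")).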
Here's another approach: display the parameter string with the dataset value already filled in.
To do so, create a parameter named RunDate (for example) and set its default value to "Get values from a query", selecting the first dataset and the value field (RunDate). Now the parameter holds the RunDate field and you can use it elsewhere. Make this parameter hidden or internal, and set the correct data type (e.g. Date/Time) so you can format its value later.
Now create the second parameter which will hold the default text you want:
Parameters!MainText.Value = "Dear Mr.Doe, Here is your [Parameters!RunDate.Value] statement"
Not sure if this syntax works but you get the idea. You can also do formatting here e.g. only the month of a Datetime:
="Dear Mr.Doe, Here is your " & Format(Parameters!RunDate.Value, "MMMM") & " statement"
This approach uses only built-in methods and avoids the need for a parser so the user doesn't have to learn the syntax for it.
There is of course one drawback: the user has complete control over the parameter contents and can supply a value that doesn't match the report content - but that is also the case with the String Replace method.
And just for the sake of completeness, there's also the simplistic option of appending multiple parameters: create 2 parameters named MainTextBeforeRunDate and MainTextAfterRunDate.
The Textbox value expression becomes:
=Parameters!MainTextBeforeRunDate.Value & Fields!RunDate.Value & Parameters!MainTextAfterRunDate.Value
This should explain itself. The simplest solution is often the best, but in this case I have my doubts. At least this makes sure your RunDate ends up in the final report text.

How to set null values while importing to phpmyadmin?

I'm trying to import a .csv file into phpMyAdmin where several fields are purposefully left blank. I need these fields to register as NULL values and not just be left as blank strings.
I know in the field properties you can select "null" vs. "not null" for each field, but that still doesn't change a blank cell to a NULL value during import. After the import I can manually check the null box for each field on each record, but that is unrealistic considering the amount of data I'm working with.
Is there a way to get phpMyAdmin to set these blank cells to NULL values on import?
I've been experiencing similar issues.
If you export a CSV file with NULL values from phpMyAdmin, you'll notice that NULL doesn't get encapsulated in quotes. So you'll have lines like this:
"1";"2";NULL;NULL
"2";"2";NULL;NULL
etc.
However, if you edit that CSV file in something like OpenOffice Calc, it might change this to put quotes around NULL, like so:
"1";"2";"NULL";"NULL"
"2";"2";"NULL";"NULL"
etc.
What should work is doing a search and replace for ["NULL" = NULL].
In your case, because you have empty (blank) fields, you'll be looking at doing a search and replace like this:
[,, = ,NULL,]
And probably a second pass for NULL values at the end of a line like so:
[,\n = ,NULL\n]
Ancient question, but in case another MySQL noob like myself comes across it.
The find/replace rigamarole jmbertucci describes is avoidable if you're in charge of creating the CSV file, for example when you're backing up your own databases. In phpMyAdmin, if you select the "custom" export method, you will see the "Replace NULL with:" option, whose default is NULL. Simply change that to "NULL" and you save yourself a step.
I ran into this same problem, and jmbertucci's answer worked great. I did run into one additional problem, though. With a row of data like this:
"hello","world",,,,,,
which has multiple consecutive empty values, doing a search and replace with [,, = ,NULL,] as jmbertucci suggested won't work as you intend on the first pass. Instead you'll end up with:
"hello","world",NULL,,NULL,,NULL
You should keep running the search and replace until it reports 0 occurrences replaced.
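For anyone scripting this instead of doing it by hand, operating on parsed fields rather than raw text avoids the overlapping-match problem entirely. A small sketch in Python (file names are placeholders; assumes a comma-delimited file like the example above):
import csv

# Rewrite empty fields as unquoted NULL; quote everything else.
with open("in.csv", newline="") as src, open("out.csv", "w", newline="") as dst:
    for row in csv.reader(src):
        fields = ['NULL' if f == "" else '"%s"' % f.replace('"', '""') for f in row]
        dst.write(",".join(fields) + "\n")
Because the csv module parses fields instead of matching text, consecutive blanks are all caught in a single pass.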