For a mail merge in Microsoft Word using data from Microsoft Excel, I have written a DATABASE field that successfully pulls in all of the fields I want and changes dynamically for each mail merge record ("many to one").
I have then tried to format the numbers within this DATABASE statement, but when I use FORMAT() the numbers change to the format I want; however, the column header is replaced with Expr1003.
Is there a way to format the numbers within the DATABASE statement shown below but without losing the header titles?
Code without formatting:
{DATABASE \d"{FILENAME \p}/../data5.xlsx" \s "SELECT [Accountable Officer], [Cost Centre Group], [Description], [Annual Budget], [Outturn Forecast], [Outturn Forecast Variance] FROM [data$] WHERE [Accountable Officer] = {Quote 39}{MERGEFIELD Accountable_Officer}{Quote 39} ORDER BY [Cost Centre Group] "\l \b "16" \h}
If I amend Annual Budget with FORMAT() as below:
{DATABASE \d"{FILENAME \p}/../data5.xlsx" \s "SELECT [Accountable Officer], [Cost Centre Group], [Description], FORMAT([Annual Budget], '£#,##0;-£#,##0'), [Outturn Forecast], [Outturn Forecast Variance] FROM [data$] WHERE [Accountable Officer] = {Quote 39}{MERGEFIELD Accountable_Officer}{Quote 39} ORDER BY [Cost Centre Group] "\l \b "16" \h}
then the figures shown in the Annual Budget column are formatted correctly, but the header title then changes to Expr1003 (or Expr1004, Expr1006 etc).
The reason for the change in the field name is that the function is "hiding" the field name. In this case, the Database field is creating an expression for the field name.
One way around this is to pre-format the data in the source; in the case of Excel, by turning the cell content into text instead of a number.
The other way is to assign an alias as the field name using AS:
FORMAT([Annual Budget], '£#,##0;-£#,##0') AS Budget
I think you should be able to use AS [Annual Budget], but test this first to be sure the basic syntax works. My quick test in Access did not allow using the same field name as the alias, but my test in Word did allow it...
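For example, the amended field code might look like this (an untested sketch; it assumes Word's SQL parser accepts re-using the original name as the alias, per the test mentioned above):
{DATABASE \d"{FILENAME \p}/../data5.xlsx" \s "SELECT [Accountable Officer], [Cost Centre Group], [Description], FORMAT([Annual Budget], '£#,##0;-£#,##0') AS [Annual Budget], [Outturn Forecast], [Outturn Forecast Variance] FROM [data$] WHERE [Accountable Officer] = {Quote 39}{MERGEFIELD Accountable_Officer}{Quote 39} ORDER BY [Cost Centre Group] "\l \b "16" \h}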
In FileMaker Pro, when using a number field, the user can choose whether to use a thousands separator or not. For example, if I have a database with a field for the price of an item, the user can enter either 1,000 or 1000.
I am using my database to generate an XML file that needs to be uploaded. The thing is that my XML schema dictates that only a value of 1000 is allowed, not 1,000. Therefore, I want to either automatically remove the comma, or (my preference in this case) alert the user when they try to enter a value with a thousands separator.
What I tried is the following.
For the field, I am setting Validation options. For example:
Require Strict data type: Numeric Only
Validated by calculation: Position ( Self ; "," ; 1 ; 1 ) = 0
Validated by calculation: Self = Substitute ( Self ; "," ; "" )
Auto-enter calculation: Filter ( Self ; "0123456789." )
Unfortunately, none of these work. As the field is defined as a number (and I want to keep it like this, as I am also performing calculations based on this number), the Position and Substitute functions apparently ignore the thousands separator!
EDIT:
Note that I am generating my XML by concatenating a string, for example:
"<Products><Product><Name>" & Name & "</Name><Price>" & Price & "</Price></Product></Product>"
The reason is that what I am exporting is dependent on the values in my database. Therefore, I am not using the [File][Export records...] function.
Auto-enter calculation will work, but you need to uncheck the box "Do not replace existing value of field" (which is checked by default).
I'd suggest using the calculation GetAsNumber ( Self ) as the auto-enter calc. If it should only contain integers, wrap that in a call to Int().
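A minimal sketch of that auto-enter calculation, assuming the field should hold whole numbers only:
Int ( GetAsNumber ( Self ) )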
I am using my database to generate an XML file that needs to be uploaded. The thing is that my XML schema dictates that only a value of 1000 is allowed and not 1,000.
If this is only a problem when you export, why not handle it when exporting?
If you are exporting as XML using XSLT, you can add an instruction to your stylesheet to remove the comma from all number fields (see the sketch below);
Alternatively, you can export from a layout where the field is formatted to display without the comma and select the Apply current layout's data formatting to exported data option when exporting.
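For the XSLT route, a stylesheet can strip the separator with XPath's translate() function. A minimal sketch, where the Price element name is hypothetical:
<xsl:value-of select="translate(Price, ',', '')"/>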
Added:
Perhaps I should have clarified: I am not using the export function to generate the XML, as there is some logic involved in how the XML should be formatted (dependent on the data that I want to export). What I do instead is build a string combining XML tags and actual values from the database.
IMHO, you're making a mistake by not taking advantage of the built-in XML/XSLT export option. Any imaginable logic can be implemented this way, without burdening your solution with the fragile task of creating valid XML.
In any case, if you're using the field in a calculation, you can replace all references to it with:
GetAsNumber ( YourField )
to get an unformatted, numeric-only value.
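Applied to the concatenation from the question, that would look like this (a sketch using the question's field names):
"<Products><Product><Name>" & Name & "</Name><Price>" & GetAsNumber ( Price ) & "</Price></Product></Products>"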
Your question puzzles me. As far as I know, FileMaker does not store the thousands separator, but rather offers it only as a display option.
That's also why those functions can't find it.
Are you sure you are exporting the raw data and not a "formatted as layout" variant?
I've got a buffer which contains a mix of date, number, and character fields. I am displaying the values of the fields, but for some reason date fields return "?" when I try to add them to a string.
I still get ? even if I do
ASSIGN lvString = lvString + STRING( hField:BUFFER-VALUE ).
I've also tried assigning the BUFFER-VALUE to a local DATE variable, and converting that to a string, but that doesn't work either - still ?.
However if I use the STRING-VALUE attribute, it works fine.
How do I get the value out as a date field, rather than just a string?
There are two ways to achieve this: one is to use the table buffer directly, and the other is to use a QUERY handle.
First, an example using a buffer directly from a table (or a TEMP-TABLE; it doesn't matter):
DEF VAR dateVar AS DATE NO-UNDO.
FIND FIRST job NO-LOCK.
dateVar = DATE(BUFFER job:BUFFER-FIELD('dt-job'):BUFFER-VALUE).
MESSAGE dateVar
VIEW-AS ALERT-BOX INFO BUTTONS OK.
Second example using a query handle:
DEF VAR dateVar AS DATE NO-UNDO.
DEF QUERY qrJob FOR job.
OPEN QUERY qrJob FOR EACH job.
QUERY qrJob:GET-FIRST().
dateVar = DATE(QUERY qrJob:GET-BUFFER-HANDLE(1):BUFFER-FIELD('dt-job'):BUFFER-VALUE).
MESSAGE dateVar
VIEW-AS ALERT-BOX INFO BUTTONS OK.
As Tim Kuehn said, you can substitute the field's position number for 'dt-job' if you know its position inside the query. I could have used BUFFER-FIELD(2) instead of BUFFER-FIELD('dt-job') because dt-job is the second field in my query. Keep in mind that using the FIELDS clause in a FOR EACH or OPEN QUERY statement changes the order of the fields in the query: generally, only the fields specified in the FIELDS list are available (for example, as browse columns), in that order.
These might work for you. It's important to note that BUFFER-VALUE always returns a CHARACTER data type, and because of this you need the DATE function for the conversion.
Hope it helps.
The standard form for getting data in the field's data type is
buffer table-name:buffer-handle:buffer-field("field-name"):buffer-value.
for arrays it's:
buffer table-name:buffer-handle:buffer-field("field-name"):buffer-value[array-element].
You can also substitute a field # for "field-name" to get the buffer field handle.
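For example, the query-handle version above with a field number instead of a field name (dt-job being the second field in that query):
dateVar = DATE(QUERY qrJob:GET-BUFFER-HANDLE(1):BUFFER-FIELD(2):BUFFER-VALUE).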
I have been stuck on this for a few days. I have a folder with hundreds of shapefiles. I want to add an attribute field to each shapefile giving the shapefile's name as a date. The shapefile name includes the Landsat path/row, year, and Julian date (e.g. '1800742003032.shp'). I want just the date, '2003032', to be added under a "Date" field.
Here's what I have so far:
arcpy.env.workspace = r"C:\Users\mkelly\Documents\Namibia\Raster_Water\1993\Polygons"
for fc in arcpy.ListFeatureClasses("*", "ALL"):
    print str("processing" + fc)
    field = "DATE"
    expression = str(fc)[6:13]
    arcpy.AddField_management(fc, field, "TEXT")
    arcpy.CalculateField_management(fc, field, "expression", "PYTHON")
Results:
processing1800742003032.shp
processing1800742009136.shp
processing1820732010289.shp
end Processing...
It runs perfectly (on a sample of 3 shapefiles), but the problem is that when I open the shapefiles in ArcMap, they all have the same date. The results show that it processed each of the 3 shapefiles, and AddField_management must have worked because all of the fields are populated. So there is an issue with either the expression or the CalculateField command.
How can I get it to populate the specific date for each shapefile, and not just have all of them be '2003032'? There are no error messages.
Thanks in advance!
I figured it out! For CalculateField_management, the expression should not be in quotes. It should be: arcpy.CalculateField_management(fc, field, expression, "PYTHON")
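Put together, the corrected loop might look like this (a sketch of the fix applied to the question's code; wrapping the slice in quotes is an extra precaution beyond the fix above, so the PYTHON calculator treats the value as a text literal rather than a bare number):

import arcpy

arcpy.env.workspace = r"C:\Users\mkelly\Documents\Namibia\Raster_Water\1993\Polygons"

for fc in arcpy.ListFeatureClasses("*", "ALL"):
    print str("processing " + fc)
    field = "DATE"
    # Slice the date portion out of the shapefile name, e.g. '2003032',
    # and quote it so the field calculator sees a string literal.
    expression = '"{0}"'.format(str(fc)[6:13])
    arcpy.AddField_management(fc, field, "TEXT")
    arcpy.CalculateField_management(fc, field, expression, "PYTHON")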
This post may have been a waste of time, but at least maybe it will help someone with a similar problem in the future.
I have an SSRS "statement"-type report that has a general layout of text boxes and tables. For the main text box I want to let the user supply the value as a parameter so the text can be customized, i.e.
Parameters!MainText.Value = "Dear Mr.Doe, Here is your statement."
then I can set the text box value to be the value of the parameter:
=Parameters!MainText.Value
However, I need to be able to allow the incoming parameter value to include a dataset field, like so:
Parameters!MainText.Value = "Dear Mr.Doe, Here is your [Fields!RunDate.Value] statement"
so that my report output would look like:
"Dear Mr.Doe, Here is your November statement."
I know that you can define it to do this in the text box by supplying the static text and the field request, but I need SSRS to recognize that inside the parameter string there is a field request that needs to be escaped and bound.
Does anyone have any ideas for this? I am using SSRS 2008 R2.
Have you tried concatenating?
="Dear Mr.Doe, Here is your " & Fields!RunDate.Value & " statement"
There are a few dramatically different approaches. To know which is best for you will require more information:
Embedded code in the report. Probably the quickest to implement would be embedded code that returns the parameter but calls String.Replace() appropriately to substitute in the dynamic values. You'll need to establish a convention with the user for which strings will be replaced (see the sketch after this list). Embedded code gives you access to many objects in the report. For example:
Public Function TestGlobals(ByVal s As String) As String
    Return Report.Globals.ExecutionTime.ToString
End Function
will return the execution time. Other methods of accessing parameters for the report are shown here.
1.5 If this function is getting very large, look at using a custom assembly. Then you'll have a better authoring experience with Visual Studio.
Modify the XML. Depending on where you use this, you could directly modify the .rdl/.rdlc XML.
Consider other tools, such as Report Builder. If you need to give the user more flexibility over report authoring, there are many tools built specifically for this purpose, such as SSRS's Report Builder.
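As a sketch of the first option, an embedded function along these lines would do the substitution (the [RunDate] token name is an assumption; establish whatever convention suits your users):

Public Function FillPlaceholders(ByVal template As String, ByVal runDate As String) As String
    ' Swap the agreed-upon token for the dynamic value.
    Return template.Replace("[RunDate]", runDate)
End Function

with a text box expression such as:
=Code.FillPlaceholders(Parameters!MainText.Value, Format(Fields!RunDate.Value, "MMMM"))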
Here's another approach: Display the parameter string with the dataset value already filled in.
To do so: create a parameter named RunDate, for example, set its default value to "Get values from a query", and select the first dataset and the value field (RunDate). Now the parameter will hold the RunDate field and you can use it elsewhere. Make this parameter hidden or internal and set the correct data type, e.g. Date/Time, so you can format its value later.
Now create the second parameter which will hold the default text you want:
Parameters!MainText.Value = "Dear Mr.Doe, Here is your [Parameters!RunDate.Value] statement"
Not sure if this syntax works but you get the idea. You can also do formatting here e.g. only the month of a Datetime:
="Dear Mr.Doe, Here is your " & Format(Parameters!RunDate.Value, "MMMM") & " statement"
This approach uses only built-in methods and avoids the need for a parser so the user doesn't have to learn the syntax for it.
There is of course one drawback: the user has complete control over the parameter contents and can supply a value that doesn't match the report content - but that is also the case with the String Replace method.
And just for the sake of completeness there's also the simplistic option: append multiple parameters: create 2 parameters named MainTextBeforeRunDate and MainTextAfterRunDate.
The Textbox value expression becomes:
=Parameters!MainTextBeforeRunDate.Value & Fields!RunDate.Value & Parameters!MainTextAfterRunDate.Value
This should explain itself. The simplest solution is often the best, but in this case I have my doubts. At least this makes sure your RunDate ends up in the final report text.
I'm trying to import a .csv file into phpMyAdmin where several fields are purposefully left blank. I need these fields to register as null values and not just be left as blank strings.
I know in the field properties you can select to allow "null" vs. "not null" for each field, but it still doesn't change the cell to a null value while importing. After the import I can manually go check the null box for each field on each record, but that is unrealistic considering the amount of data I'm working with.
Is there a way to get phpMyAdmin to set these blank cells to null values on import?
I've been experiencing similar issues.
If you download a phpMyAdmin CSV file with NULL values, you'll notice that NULL doesn't get encapsulated in quotes. So you'll have lines like this:
"1";"2";NULL;NULL
"2";"2";NULL;NULL
etc.
However, if you edit a CSV file in something like OpenOffice Calc, it might change this to put quotes around NULL, like so:
"1";"2";"NULL";"NULL"
"2";"2";"NULL";"NULL"
etc.
What should work is doing a search and replace for ["NULL" = NULL].
In your case, because you have empty (blank) fields, you'll be looking at doing a search and replace like this:
[,, = ,NULL,]
And probably a second pass for NULL values at the end of a line like so:
[,\n = ,NULL\n]
Ancient question, but in case another MySQL noob like myself comes across it:
The find/replace rigamarole jmbertucci describes is avoidable if you're in charge of creating the CSV file, for example when you're backing up your own databases. In phpMyAdmin, if you select the "custom" export method, you will see Replace NULL with: and the default is NULL. Simply change that to "NULL" and you save yourself a step.
I ran into this same problem and jmbertucci's answer worked great. I did run into one additional problem. In the case of a row of data like this:
"hello","world",,,,,,
which has multiple sets of null values in a row, doing a search and replace with [,, = ,NULL,] as jmbertucci suggested won't work as you intend on the first pass. Instead you'll end up with:
"hello","world",NULL,,NULL,,NULL
You should repeat the search and replace until you end up with 0 occurrences replaced.
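If you'd rather avoid the repeated passes, a small script can rewrite empty fields as unquoted NULL in one pass. Here is a minimal sketch in Python (the file names and the comma delimiter are assumptions; the simple re-quoting does not escape quote characters embedded in values):

import csv

# Rewrite empty CSV fields as bare, unquoted NULL so they import as null.
with open("data.csv", "rb") as src, open("data_nulls.csv", "wb") as dst:
    for row in csv.reader(src):
        # Re-quote non-empty values; emit NULL without quotes for empty ones.
        fields = ['"%s"' % value if value != "" else "NULL" for value in row]
        dst.write(",".join(fields) + "\n")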