I have just started investigating the treeline.io beta, and I could not find any existing machinepack that would do the job (sanitizing user inputs). I am wondering if I can do it in any way, ideally within Treeline.
Treeline automatically does type-checking on all incoming request parameters. If you create a route POST /foo with parameter age and give it 123 as an example, it will automatically display an error message if you try to post to /foo with age set to abc, because it's not a number.
As far as more complex validation, you can certainly do it in Treeline--just add more machines to the beginning of your route. The if machine works well for simple tasks; for example, to ensure that age is < 150, you can use if and set the left-hand value to the age parameter, the right-hand value to 150, and the comparison to "<". For more custom validations you can create your own machine using the built-in editor and add pass and fail exits like the if machine has!
The schema-inspector machinepack allows you to sanitize and validate inputs in Treeline: machinepack-schemainspector
Here is a screenshot of how I'm using it in my Treeline project:
The content of the Sanitize element:
The content of the Validate element (using the Sanitize output):
For the next parts, I'm always using the Sanitize output (email trimmed and in lowercase for this example).
In FileMaker Pro, when using a number field, the user can choose whether or not to use a thousands separator. For example, if I have a database with a field for the price of an item, the user can enter either 1,000 or 1000.
I am using my database to generate an XML file that needs to be uploaded. The thing is that my XML schema dictates that only a value of 1000 is allowed, not 1,000. Therefore, I want to either automatically remove the comma, or (my preference in this case) alert the user when they try to enter a value with a thousands separator.
What I tried is the following.
For the field, I am setting Validation options. For example:
Require Strict data type: Numeric Only
Validated by calculation: Position ( Self ; "," ; 1 ; 1 ) = 0
Validated by calculation: Self = Substitute ( Self ; "," ; "" )
Auto-enter calculation: Filter( Self ; "0123456789." )
Unfortunately, none of these work. Because the field is defined as a number (and I want to keep it that way, since I am also performing calculations based on this number), the Position and Substitute functions apparently ignore the thousands separator!
EDIT:
Note that I am generating my XML by concatenating a string, for example:
"<Products><Product><Name>" & Name & "</Name><Price>" & Price & "</Price></Product></Product>"
The reason is that what I am exporting is dependent on the values in my database. Therefore, I am not using the [File][Export records...] function.
Auto-enter calculation will work, but you need to uncheck the box "Do not replace existing value of field" (which is checked by default).
I'd suggest using the calculation GetAsNumber ( Self ) as the auto-enter calc. If it should only contain integers, wrap that in a call to Int().
I am using my database to generate an XML file that needs to be uploaded. The thing is that my XML schema dictates that only a value of 1000 is allowed and not 1,000.
If this is only a problem when you export, why not handle it when exporting?
If you are exporting as XML using XSLT, you can add an instruction to your stylesheet to remove the comma from all number fields;
Alternatively, you can export from a layout where the field is formatted to display without the comma and select the "Apply current layout's data formatting to exported data" option when exporting.
Added:
Perhaps I should have clarified. I am not using the export function to generate the XML, as there is some logic involved in how the XML should be formatted (dependent on the data that I want to export). What I do instead is build a string in which I combine XML tags and actual values from the database.
IMHO, you're making a mistake by not taking advantage of the built-in XML/XSLT export option. Any imaginable logic can be implemented this way, without burdening your solution with the fragile task of creating valid XML.
In any case, if you're using the field in a calculation, you can replace all references to it with:
GetAsNumber ( YourField )
to get an unformatted, numeric-only value.
Your question puzzles me. As far as I know, FileMaker does not store the thousands separator, but rather offers it only as a display option.
That's also why those functions can't find it.
Are you sure you are exporting the raw data and not a "formatted as layout" variant?
I have an instance of YouTrack with several custom fields, some of which are String-type. I'm implementing a module to create a new issue via the YouTrack REST API's PUT request, and then update its fields with user-submitted values by applying commands. This works great most of the time.
I know that I can apply multiple commands to an issue at the same time by concatenating them into the query string, like so:
Type Bug Priority Critical add Fix versions 5.1 tag regression
will result in
Type: Bug
Priority: Critical
Fix versions: 5.1
in their respective fields (as well as adding the regression tag). But, if I try to do the same thing with multiple String-type custom fields, then:
Foo something Example Something else Bar P0001
results in
Foo: something Example Something else Bar P0001
Example:
Bar:
The command only applies to the first field, and the rest of the query string is treated like its String value. I can apply the command individually for each field, but is there an easier way to combine these requests?
This is an expected result: everything after Foo is considered the value of that field, since spaces are also valid characters in string custom fields.
If you try to apply this command via the command window in the UI, you will see the same result.
Such a good question.
I encountered the same issue and have spent an unhealthy amount of time in frustration.
Using the command window from the YouTrack UI I noticed it leaves trailing quotations, and I was unable to find anything in the documentation which discussed finalizing or identifying the end of a string value. I was also unable to find any mention of setting string field values in the command reference, grammar documentation or examples.
For my solution I am using Python with the requests and urllib modules, though I expect you could port the solution to any language.
The REST API will accept explicit strings in the POST:
import requests
import urllib  # Python 2 urllib.urlencode; on Python 3 use urllib.parse.urlencode
from collections import OrderedDict

URL = 'http://youtrack.your.address:8000/rest/issue/{issue}/execute?'.format(issue='TEST-1234')

# Build the parameters from a list of pairs so the order is preserved
# (an OrderedDict built from a dict literal does not guarantee order).
# String-type custom fields get their values wrapped in explicit double quotes.
params = OrderedDict([
    ('State', 'New'),
    ('Priority', 'Critical'),
    ('String Field', '"Message to submit"'),
    ('Other Details', '"Fold the toilet paper to a point when you are finished."'),
])

# Join everything into a single command string: field name, value, next field name, value, ...
str_cmd = ' '.join(' '.join([k, v]) for k, v in params.items())

command_url = URL + urllib.urlencode({'command': str_cmd})
result = requests.post(command_url)

# The command result:
# http://youtrack.your.address:8000/rest/issue/TEST-1234/execute?command=State+New+Priority+Critical+String+Field+%22Message+to+submit%22+Other+Details+%22Fold+the+toilet+paper+to+a+point+when+you+are+finished.%22
I'm sad to see this one go unanswered for so long. - Hope this helps!
edit:
After continuing my work, I have concluded that sending all the field updates as a single POST is marginally better for the YouTrack server, but requires more effort than it's worth:
1) you need to know which fields in the issue hold string values
2) you have to pre-process all the string values into quoted string literals
3) if you send all your field updates as a single request and just one of them is missing, fails to set, or has an unexpected value, then the entire request fails and you potentially lose all the other information.
I wish the YouTrack documentation had some mention or discussion of these considerations.
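For reference, here is a rough sketch of the per-field alternative: one execute POST per field, so a failure on one field cannot discard the other updates. It uses the same execute endpoint as above; the issue ID and field names are just the placeholders from the earlier example.

import requests

EXECUTE_URL = 'http://youtrack.your.address:8000/rest/issue/{issue}/execute'.format(issue='TEST-1234')

# One command per field; string-type custom fields still need their values
# wrapped in explicit double quotes.
fields = [
    ('State', 'New'),
    ('Priority', 'Critical'),
    ('String Field', '"Message to submit"'),
]

for name, value in fields:
    # Each field is applied independently, so one bad value only fails its own request.
    resp = requests.post(EXECUTE_URL, params={'command': '{0} {1}'.format(name, value)})
    if resp.status_code != 200:
        print('Failed to set {0}: HTTP {1}'.format(name, resp.status_code))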
I am trying to send a search input to a REST service. In some cases the form input is a long string of numbers (for example: 1234567890000000000123456789). I am getting a 500 error, and it looks like something is trying to convert the string to a number. The data type of the field in the source database is a string.
Is there something that can be done in building the query string that will force the input to be interpreted as a string?
The service is an implementation of ArcGIS server.
More information on this issue per request.
To test, I have been using a client form provided with the service installation (see illustration below).
I have attempted to add single and double quotes, plus wildcard characters, to the form entry. The form submission does not error, but no results are found. If I shorten the number ("1234"), or add some alphanumeric characters ("1234A"), the form submission does not error.
The problem surfaced after a recent upgrade to 10.1. I have looked for information that would tie this to a known problem, but not found anything yet.
To force the input to be interpreted as a string, you enclose it in single quotes (e.g., '1234567890000000000123456789'). Though if you are querying a field of type string, then you need to enclose all search strings in single quotes, and in that case none of your queries should be working. So it's a little hard to tell from the information you've provided what exactly you are doing and what might be going wrong. Can you provide more detail and/or code?
Are you formatting a where clause that you are using in a Query object via one of Esri's client-side APIs (such as the JavaScript API)? In that case, for fields of data type string you definitely need to enclose the search text in single quotes. For example, if the field you are querying were called 'FIELD', this is how you'd format the where clause:
FIELD = '1234'
or
FIELD Like '1234%'
for a wildcard search. If you are trying to enter query criteria directly into the Query form of a published ArcGIS Server service/layer, then there too you need to enclose the search in single quotes, as in the above examples.
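For what it's worth, here is a rough sketch of the same idea done programmatically against a layer's REST query endpoint using Python and requests; the service URL and field name are placeholders, not taken from your setup.

import requests

# Placeholder query endpoint of a published map service layer.
QUERY_URL = 'http://server.example.com/arcgis/rest/services/MyService/MapServer/0/query'

search_value = '1234567890000000000123456789'

params = {
    # Enclose the value in single quotes so it is compared as a string, not a number.
    'where': "FIELD = '{0}'".format(search_value),
    'outFields': '*',
    'returnGeometry': 'false',
    'f': 'json',
}

response = requests.get(QUERY_URL, params=params)
print(response.json())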
According to an Esri help technician, this is a known bug.
I'm looking for a way of saving and later handling the arguments of a web form in SWI-Prolog, when I submit the form and call the same program to generate another form, and so on, always calling the same Prolog program from one form to the next.
The SWI-Prolog CGI library saves these arguments as a list of Name(Value) terms, i.e., [Name(Value)].
If I pass the arguments as a hidden field inside the form (TotalArguments is a list):
format('"<"input type="hidden" id="nameofform1" name="nameofform1" value="~w" />~n', TotalArguments),
I need to get rid of the id or name that gets concatenated onto my resulting list TotalArguments when I append it. Any idea how to do this so that the final list looks like [nameofform1(value1), nameofform2(value2), ...]?
I could also write this list of arguments and append it to a file, and consult that file every time the program is called again, but this would always load all of them, and I only need the arguments for the specific step and form being handled at the moment. Otherwise the file could contain undesirable info after a few executions. Any thoughts on how to do it this way?
Any other suggestions for this kind of problem?
Edit with my solution using a hidden form field
I've solved it by creating:
extract_value([], _).
extract_value([A0|_], Valor) :-
    % decompose the first Name(Value) term and return its Value
    A0 =.. [_, Value],
    Valor = Value.
and then doing:
extract_value(Arguments, Value),
and submitting the hidden value of the form like:
format('<input type="hidden" id="nameofform1" name="nameofform1" value="~w"/>~n', [Value]),
and appending it in the next form so that it looks how I wanted:
[nameofform2(value2),nameofform1(value1)]
It's a bit unclear to me what exactly you need here, but to remove the first element of a list that unifies with a given element (especially if you know for certain that the list contains such an element), use selectchk/3. For example:
selectchk(id(_), List0, List1),
selectchk(name(_), List1, List)
in order to obtain List, which is List0 without the elements id(_) and name(_). Kind of implicit in your question, as I understand it, is how to create a term like form1(Value) given the terms name(form1) and Value. You can do this, for example, with (=..)/2: you can create a term T with functor N and arguments Args with
T =.. [N|Args]
It does not seem necessary to write anything to files here, I would simply pass the info through forms just as you outline.
I am trying to set up a transformer on a Database Reader to File Writer channel. I am reading in a SQL field called MRN which I would like to send to a variable called mrn. I added a step to a channel with a variable called tmp['MSH'] mapping to a variable called msg['MSH'], but Mirth is giving me the error message:
The variable name contains invalid characters. Please enter a new variable name
What are the rules for a valid variable name in Mirth?
tmp and msg are two built-in variables containing E4X mappings of the outbound template and inbound message, respectively. You would map, via a MessageBuilder step, from inbound to outbound with tmp['MSH'][...] = msg['MSH']... where ... refers to the appropriate sections. Essentially these are pre-populated JavaScript property arrays.
If you really want to create a variable for use in multiple places, the rules are alphanumeric plus '_', I believe.
In a MessageBuilder step, you could refer to a previously created variable with ${varname}.
I would recommend investing a little time in getting familiar with the basics. Documentation is wanting, to be sure, but this blog post series is a good place to start.