Set type for variables from form data in the Lumen framework

If I submit a form in Lumen, the data can be validated via the validate method. For example, in a controller method:
$this->validate($request, [
    'id' => 'required|integer|exists:user',
]);
$user_id = $request->input('id');
But the type of $user_id is still string. Is there any method built into the framework for getting the variable in the type I declared, in this case integer?
I am using intval() now.

Unfortunately, to my knowledge there is no built-in way in Laravel/Lumen to declare what type an input value should have when it is accessed.
PHP interprets all user input as strings (or arrays of strings).
In Illuminate\Validation\Validator, the method that determines if a value is an integer uses filter_var() to test if the string value provided by the user conforms to the rules of the int type.
Here's what it's actually doing:
/**
 * Validate that an attribute is an integer.
 *
 * @param  string  $attribute
 * @param  mixed   $value
 * @return bool
 */
protected function validateInteger($attribute, $value)
{
    if (! $this->hasAttribute($attribute)) {
        return true;
    }

    return is_null($value) || filter_var($value, FILTER_VALIDATE_INT) !== false;
}
Alas, it does not then update the type of the field it checks, as you have seen.
I think your use of intval() is probably the most appropriate option if you absolutely need the value of the user_id field to be interpreted as an integer.
Really the only caveat of using intval() is that the integers it can return are limited by your platform's native integer size.
From the documentation (http://php.net/manual/en/function.intval.php):
32 bit systems have a maximum signed integer range of -2147483648 to 2147483647. So for example on such a system, intval('1000000000000') will return 2147483647. The maximum signed integer value for 64 bit systems is 9223372036854775807.
So as long as you keep your user-base to less than 2,147,483,647 users, you shouldn't have any issues with intval() if you're on a 32-bit system. Wouldn't we all be so lucky as to have to worry about having too many users?
Joking aside, I bring it up because if you're using intval() for a user-entered number that might be huge, you run the risk of hitting that cap. Probably not a huge concern though.
And of course, intval() will return 0 for non-numeric string input.
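To make both caveats concrete, here is a rough model in Python (illustrative only, not PHP; real intval() also parses leading digits and floats, which this sketch skips) of the clamping and the zero-for-non-numeric behavior described above:

```python
def intval_32bit(s):
    """Very rough model of PHP's intval() on a 32-bit build (illustration only)."""
    INT_MIN, INT_MAX = -2**31, 2**31 - 1
    try:
        n = int(s)
    except (TypeError, ValueError):
        return 0  # like intval(), non-numeric input yields 0
    return max(INT_MIN, min(INT_MAX, n))

print(intval_32bit("42"))             # 42
print(intval_32bit("1000000000000"))  # clamped to 2147483647
print(intval_32bit("abc"))            # 0
```

On a 64-bit PHP build the clamp simply moves out to the 64-bit bounds, so in practice the truncation only matters for user-entered numbers of astronomical size.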

Related

Write a struct into a DICOM header

I created a private DICOM tag, and I would like to know whether it is possible to use this tag to store a struct in a DICOM file using dicomwrite (or similar), instead of creating a field inside the DICOM header for each struct field.
(Something like saving a patient's name, but instead of char data I would use double.)
Here is an example:
headerdicom = dicominfo('Test.dcm');
a.a = 1; a.b = 2; a.c = 3;
headerdicom.Private_0011_10xx_Creator = a;
img = dicomread('Test.dcm');
dicomwrite(img, 'test_modif.dcm', 'ObjectType', 'MR Image Storage', 'WritePrivate', true, headerdicom)
This fails with:
Undefined function 'fieldnames' for input arguments of type 'double'.
Thank you all in advance,
Depending on what "struct" means, here are your options. As you want to use a private tag which means no application but yours will be able to interpret it, you can choose the solution which is technically most appropriate. Basically your question is "which Value Representation should I assign to my private attribute using the DICOM toolkit of my choice?":
Sequence:
There is a DICOM Value Representation "Sequence" (VR=SQ) which allows you to store a list of attributes of different types. This VR is closest to a struct. A sequence can contain an arbitrary number of items, each of which has the same attributes in the same order. Each attribute can have its own VR, so if your struct contains different data types (like string, integer, float), this would be my recommendation.
Multi-value attribute:
DICOM supports the concept of "Value Multiplicity". This means that a single attribute can contain multiple values which are separated by backslashes. As the VR is a property of the attribute, all values must have the same type. If I understand you correctly, you have a list of floating point numbers which could be encoded as an array of doubles in one field with VR=FD (=Floating Point Double): 0.001\0.003\1.234...
Most toolkits support an indexed access to the attributes.
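To illustrate value multiplicity with the textual notation used above (a Python sketch; note that a binary VR like FD stores the doubles directly, while the string VRs use the backslash separator on the wire, and toolkits such as pydicom expose either as a list):

```python
values = [0.001, 0.003, 1.234]

# A multi-valued DICOM attribute separates its values with backslashes.
encoded = "\\".join(str(v) for v in values)
print(encoded)  # 0.001\0.003\1.234

# "Indexed access" is then just splitting and converting back:
decoded = [float(v) for v in encoded.split("\\")]
assert decoded == values
```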
"Blob":
You can use an attribute with VR=OB (Other Byte), which is also used for encoding pixel data. It can contain up to 4 GB of binary data, and the length of the attribute tells you how many bytes the value consists of. If you just want to copy the memory from/to the struct, this would be the way to go, but it is obviously the weakest approach in terms of type safety and correctness of encoding: you lose the built-in methods of your DICOM toolkit that ensure these properties.
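The "copy the memory" approach can be sketched with Python's struct module (illustrative only; a real implementation would write the bytes through your DICOM toolkit):

```python
import struct

fields = (1.0, 2.0, 3.0)            # e.g. a.a, a.b, a.c from the question
blob = struct.pack("<3d", *fields)  # 3 little-endian doubles -> 24 raw bytes
print(len(blob))                    # 24

# Reading it back requires knowing the exact layout -- nothing in the file
# itself says "three doubles", which is the type-safety drawback noted above.
restored = struct.unpack("<3d", blob)
assert restored == fields
```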
To add a private attribute, you have to
reserve a range for the attribute by specifying an odd group number and a prefix (2 hex digits) for the element numbers; e.g. group = 0x0011, element = 0x10xx reserves the range (0x0011, 0x1000) through (0x0011, 0x10ff). This is done by specifying a Private Creator DICOM tag which holds a manufacturer name. So I suspect that instead of
headerdicom.Private_0011_10xx_Creator = a;
it should read e.g.
headerdicom.Private_0011_10xx_Creator = 'Gabs';
register your private tags in the private dictionary, most of the time by specifying the Private Creator, group, element and VR (one of the options above)
I am not sure how this can be done in MATLAB.

Why output value is cut when adParamInputOutput is used?

I have a stored procedure which has a VARCHAR(MAX) OUTPUT parameter. The parameter is used to both pass and return a large string value.
It works perfectly when executed in SQL Server Management Studio.
The issue appears only in the ASP page. Here is the code:
cmd.Parameters.Append cmd.CreateParameter("@CSV", adBStr, adParamInputOutput, -1, "some very large string goes here")
I am able to pass a large value (more than 4000 characters), but the return value is cut off. I have tried replacing adBStr with adLongVarWChar and adLongVarChar, but I get the following error:
Parameter object is improperly defined. Inconsistent or incomplete information was provided.
I guess the problem is caused by adParamInputOutput. So, generally, I am asking for a parameter type that will work in both directions with the maximum number of characters.

Sailsjs/waterline specify number of decimal places in model

How do I tell my Sails model that I want a specific number of decimal places for a type: 'float' attribute? Like decimalPlaces: 4 or something of that ilk?
The problem is that when I post a value to this attribute, the value on disk is truncated to two decimal places. Say I want 3243.2352362 to be stored just as it is; currently it is transformed into 3243.24.
If it matters, I'm using the sails-mysql adapter.
types: {
    decimal2: function(number) {
        return ((number * 100) % 1 === 0);
    }
},
attributes: {
    myNumber: {
        type: 'float',
        decimal2: true
    }
}
This is for 2 decimal places, though. I can't find a way to make it work for a dynamically changing N, as there is AFAIK no way to pass a parameter to a custom validation.
A workaround for this issue would be to check for a custom number of decimal places in the beforeValidation() function.
I would not recommend adding a constraint in the model (and I don't think it is even possible for float).
I'd suggest that you set:
migrate: "safe"
in your model and set the appropriate data type / decimal places in your tables.

Generate unique random strings

I am writing a very small URL shortener with Dancer. It uses the REST plugin to store a posted URL in a database under a six-character string, which the user then uses to access the shortened URL.
Now I am a bit unsure about my random string generation method.
sub generate_random_string {
    my $length_of_randomstring = shift;  # the length of the random string to generate

    my @chars = ('a'..'z', 'A'..'Z', '0'..'9', '_');
    my $random_string;
    for (1..$length_of_randomstring) {
        # rand @chars will generate a random
        # number between 0 and scalar @chars
        $random_string .= $chars[rand @chars];
    }

    # Start over if the string is already in the database
    return generate_random_string($length_of_randomstring)
        if database->quick_select('urls', { shortcut => $random_string });

    return $random_string;
}
This generates a six-character string and calls the function recursively if the generated string is already in the DB. I know there are 63^6 possible strings, but this will take some time once the database gathers more entries. And maybe it will become a nearly infinite recursion, which I want to prevent.
Are there ways to generate unique random strings that avoid recursion?
Thanks in advance
We don't really need to be hand-wavy about how many iterations (or recursions) of your function there will be. At every invocation, the number of iterations is geometrically distributed (i.e. the number of trials before the first success follows the geometric distribution), which has mean 1/p, where p is the probability of successfully finding an unused string. Here p is just 1 - n/63^6, where n is the number of currently stored strings. Therefore, you would need to have stored roughly 31 billion strings (~63^6/2) in your database before your function recursed, on average, more than 2 times per call (p = 0.5).
Furthermore, the variance of the geometric distribution is (1-p)/p^2, so even at that point one standard deviation is just sqrt(2). Therefore I expect the loop to take fewer than 2 + 2*sqrt(2) ≈ 5 iterations ~99% of the time. In other words, I would just not worry too much about it.
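The numbers are easy to check (a Python sketch of the estimate; 63 characters, length-6 strings, database assumed half full):

```python
import math

total = 63 ** 6                 # all possible 6-character strings
n = total // 2                  # suppose half of them are already taken
p = 1 - n / total               # chance that a fresh random string is unused
mean_tries = 1 / p              # mean of the geometric distribution
std_dev = math.sqrt(1 - p) / p  # sqrt of its variance (1-p)/p^2

print(total)                    # 62523502209
print(round(mean_tries, 3))     # 2.0
print(round(std_dev, 3))        # 1.414
```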
From an academic stance this seems like an interesting program to work on. But if you're on the clock and just need random and distinct strings I'd go with the Data::GUID module.
use strict;
use warnings;
use Data::GUID qw( guid_string );
my $guid = guid_string();
Getting rid of recursion is easy; turn your recursive call into a do-while loop. For instance, split your function into two; the "main" one and a helper. The "main" one simply calls the helper and queries the database to ensure it's unique. Assuming generate_random_string2 is the helper, here's a skeleton:
do {
$string = generate_random_string2(6);
} while (database->quick_select(...));
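For completeness, here is the same loop idea as a self-contained sketch, with an added attempt cap to address the infinite-loop worry (Python rather than Perl; the exists() callback is a hypothetical stand-in for database->quick_select):

```python
import secrets
import string

ALPHABET = string.ascii_letters + string.digits + "_"  # the 63 characters

def random_string(length=6):
    # secrets is preferable to plain rand() for anything security-adjacent
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

def unique_string(exists, length=6, max_attempts=100):
    """Retry instead of recursing; bail out rather than loop forever."""
    for _ in range(max_attempts):
        candidate = random_string(length)
        if not exists(candidate):
            return candidate
    raise RuntimeError("namespace too full; could not find an unused string")
```

In the original code, exists() would wrap the database->quick_select lookup on the urls table.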
As for limiting the number of iterations before getting a valid string, what about just saving the last generated string and always building your new string as a function of that?
For example, when you start off, you have no strings, so let's just say your string is 'a'. Then the next time you build a string, you get the last built string ('a') and apply a transformation on it, for instance incrementing the last character. This gives you 'b'. and so on. Eventually you get to the highest character you care for (say 'z') at which point you append an 'a' to get 'za', and repeat.
Now there is no database, just one persistent value that you use to generate the next value. Of course if you want truly random strings, you will have to make the algorithm more sophisticated, but the basic principle is the same:
Your current value is a function of the last stored value.
When you generate a new value, you store it.
Ensure your generation will produce a unique value (one that did not occur before).
I've got one more idea based on using MySQL.
create table string (
string_id int(10) not null auto_increment,
string varchar(6) not null default '',
primary key(string_id)
);
insert into string set string='';
update string
set string = lpad( hex( last_insert_id() ), 6, uuid() )
where string_id = last_insert_id();
select string from string
where string_id = last_insert_id();
This gives you an incremental hex value which is left padded with non-zero junk.
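What the UPDATE produces can be mimicked in Python (hypothetical short_code helper; MySQL's LPAD pads on the left and truncates to the target length when the input is already longer):

```python
import uuid

def short_code(insert_id, width=6):
    h = format(insert_id, "x")    # hex(last_insert_id())
    if len(h) >= width:
        return h[:width]          # LPAD truncates when the value is too long
    junk = uuid.uuid4().hex       # stands in for the uuid() padding
    return junk[: width - len(h)] + h

code = short_code(255)
print(code)  # four random hex chars followed by "ff"
```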

When validating a form, should I assume a field is valid or invalid?

When I write validation code for a web form, I usually assume that the content of a field is valid and attempt to prove that it is invalid. Is it a better practice to assume that the content of the field is invalid and then attempt to prove that it is valid?
A very simple example (pseudo code):
function isValid( formFieldValue, minLength, maxLength ) {
    valid = true;
    fieldLength = length( formFieldValue );
    if( fieldLength < minLength ) {
        valid = false;
    }
    if( fieldLength > maxLength ) {
        valid = false;
    }
    return valid;
}
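For comparison, the same length check written the other way around, assuming invalid until every rule passes (a Python sketch of the pseudocode above):

```python
def is_valid(value, min_length, max_length):
    # invalid-by-default: only a value that passes every rule is accepted
    if value is None:
        return False
    return min_length <= len(value) <= max_length

print(is_valid("abc", 1, 5))  # True
print(is_valid("", 1, 5))     # False
```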
Would it be better to assume that the field in question is invalid and modify my checks accordingly?
Please note - I'm not talking about XSS protection or input filtering. We can never assume that user input is safe. I am talking about validating things like minimum/maximum length or a valid e-mail address in a form field.
I think when you are just talking about things like length, it makes no big difference. But I would always assume that the input is invalid and prove that it is not, because I do the same with potentially malicious (XSS) input.
I think the better idea is to assume invalid input and prove validity. It's easier.
For JavaScript there already exist a number of libraries that solve this problem, e.g.:
Backbone.Forms https://github.com/powmedia/backbone-forms
jQuery validation plugin http://bassistance.de/jquery-plugins/jquery-plugin-validation/
The point is whether all your conditions run or not.
Case 1: Assume that a form is valid, then check (for example) 2 conditions for invalidity.
Case 2: Assume that a form is invalid, then check the same 2 conditions to prove validity.
In both cases you have to check all conditions, because you want to validate all your fields. So whether you assume valid or invalid at the start doesn't matter; in practice we mostly check for invalidity.