Is it possible to do some kind of data validation when using FirestoreDataConverter<T>?
Looking at the call signature for fromFirestore, it seems like you can only return T. But what if I look at the data and realize that there is invalid data in the database? (I know, I should guard against that in the first place, but bad things happen.)
And it is even more important to guard against errors when using toFirestore. Again, is there a recommended way to do data validation before writing to the database?
I ask because, looking at the VueFire documentation, they seem to return null on invalid data. Is that simply an error on their part?
I can see two options: include some kind of error-state type in the signature (FirestoreDataConverter<T | E>), or throw an error. Is one or the other the recommended practice?
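To make the second option concrete, here is roughly the kind of thing I mean with the modular Web SDK (the Task type and its fields are just an invented example):

import {
  DocumentData,
  FirestoreDataConverter,
  QueryDocumentSnapshot,
  SnapshotOptions,
  WithFieldValue,
} from 'firebase/firestore';

// Example model type, invented for illustration
interface Task {
  title: string;
  done: boolean;
}

const taskConverter: FirestoreDataConverter<Task> = {
  toFirestore(task: WithFieldValue<Task>): DocumentData {
    // Refuse to write data that is obviously invalid
    // (title could also be a FieldValue sentinel, hence the typeof check)
    if (typeof task.title === 'string' && task.title.trim() === '') {
      throw new Error('Task.title must not be empty');
    }
    return { title: task.title, done: task.done };
  },

  fromFirestore(snapshot: QueryDocumentSnapshot, options?: SnapshotOptions): Task {
    const data = snapshot.data(options);
    // Surface bad documents loudly instead of returning a bogus Task
    if (typeof data.title !== 'string' || typeof data.done !== 'boolean') {
      throw new Error(`Document ${snapshot.id} does not look like a Task`);
    }
    return { title: data.title, done: data.done };
  },
};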
I wish the official docs had more information on how to use FirestoreDataConverter.
I'm pretty new to QuickFIX, but according to the official docs, even when you give the message you want to send a concrete type, you still have to know in advance which fields are available. I was expecting something more programmer-friendly: creating a new OrderCancelRequest would give you an instance with a known set of member properties, so the IDE could tell you which fields must be provided.
In fact, comparing the two styles from the documentation:
Type Safe Messages And Fields
void sendOrderCancelRequest()
{
  FIX41::OrderCancelRequest message(
    FIX::OrigClOrdID("123"),
    FIX::ClOrdID("321"),
    FIX::Symbol("LNUX"),
    FIX::Side(FIX::Side_BUY));

  message.set(FIX::Text("Cancel My Order!"));

  FIX::Session::sendToTarget(message, FIX::SenderCompID("TW"), FIX::TargetCompID("TARGET"));
}
and Type Safe Fields Only
void sendOrderCancelRequest()
{
  FIX::Message message;
  FIX::Header& header = message.getHeader();

  header.setField(FIX::BeginString("FIX.4.2"));
  header.setField(FIX::SenderCompID("TW"));
  header.setField(FIX::TargetCompID("TARGET"));
  header.setField(FIX::MsgType(FIX::MsgType_OrderCancelRequest));

  message.setField(FIX::OrigClOrdID("123"));
  message.setField(FIX::ClOrdID("321"));
  message.setField(FIX::Symbol("LNUX"));
  message.setField(FIX::Side(FIX::Side_BUY));
  message.setField(FIX::Text("Cancel My Order!"));

  FIX::Session::sendToTarget(message);
}
That shows not much difference. In both cases you have to know which tags are valid. Maybe in the second example the compiler would even let you set the wrong fields, I'm not sure. But in the end you have to look up which fields are available for the kind of message you want to send (it's similar on the receiving side: cracking does not help much either, because in onMessage you still have to know which fields are available).
So, to summarize: is there a smarter way to work with QuickFIX that gives you more help from the IDE? I was thinking of some kind of direct message-to-class mapping.
Thanks a lot for your help. Maybe it's just because I'm new to QuickFIX, but I get the feeling that using it ultimately obliges you to know each message's available fields.
I would like to build an app with oral (spoken) code verification.
I could just set my code in Dialogflow beforehand and then simply verify it:
GH: "To continue, give me the code."
Me: "1 2 3 4"
GH: "Access granted." / "Access denied."
But how can I capture that input and check the code in Dialogflow?
First of all - consider if you really want to do this. Having someone say a passcode out loud isn't really very secure and adds very little additional security in a multi-user environment.
There are two stages to this - the first is setting up an Intent to handle this, specifically in the format you want, and the second would be handling and verifying this is the correct code.
Setting up the Intent
We'll need two intents - one that prompts and sets a context so we know we're expecting the validation code, and one that checks for the code.
The prompting intent might look something like this:
The notable part here is that it is setting an output context. We'll see why that matters in a moment.
The one to handle numeric input might look like this:
There is a lot more to this one. First note that we're requiring an input context that matches the output context from the last Intent. This means that this Intent should only match if that Context has been set. This lets us talk about numbers elsewhere in our conversation without triggering this validation.
Next we're looking for sequences of numbers that match the @sys.number-sequence built-in Entity type. There are other entity types that may be useful for you - see the documentation for details and pick one that makes sense, or experiment to find what works best in your case.
Finally, we're going to use a webhook for fulfillment to verify that the code is correct. Which is the next section...
Verifying the code
While there are ways to do the verification without a webhook, this is really the most straightforward way to do it. If you're using Google's library to handle input from Dialogflow, you can get the value with something like
var code = app.getArgument('number-sequence');
using whatever the parameter name is. If you're not using the library, you can find this in the JSON at result.parameters.number-sequence.
You would then verify this code, however you want, and return a message indicating if it is correct or not.
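A rough sketch of that fulfillment, assuming the same v1 actions-on-google client library as the getArgument() call above; the action name 'verify.code', the parameter name, and the hard-coded passcode are placeholders for whatever you set up in your Intent:

const { ApiAiApp } = require('actions-on-google');

const EXPECTED_CODE = '1234';  // however you actually store or derive the code

// Express / Cloud Functions style request handler
function webhook (request, response) {
  const app = new ApiAiApp({ request, response });

  function verifyCode (app) {
    // 'number-sequence' must match the parameter name in the Intent
    const spoken = app.getArgument('number-sequence') || '';
    // the entity may resolve with spaces ("1 2 3 4"), so strip them first
    if (String(spoken).replace(/\s/g, '') === EXPECTED_CODE) {
      app.ask('Access granted. What would you like to do next?');
    } else {
      app.ask('Access denied. Please say the code again.');
    }
  }

  const actionMap = new Map();
  actionMap.set('verify.code', verifyCode);
  app.handleRequest(actionMap);
}

module.exports = { webhook };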
If you want to use a sequence of numbers as your code, you can use the @sys.number-sequence entity to recognize it and then check the code in your webhook.
Another way would be to simply make a custom entity 'code' that has an entry of '1234'.
I'm attempting to parse JSON from the GitHub API with play-json, and encountering a problem with the merge_commit_sha field on Pull Requests (incidentally, I know this field is deprecated, but I don't want to discuss that in this parsing problem!). Unfortunately the merge_commit_sha field comes back as the empty string in some cases:
"merge_commit_sha": ""
This is how the field is declared in my case class:
merge_commit_sha: Option[ObjectId],
I have an implicit Format[ObjectId], which does not tolerate the empty string, because that's not a valid value for a Git hash id. I'm also using a play-json macro-generated Reads[PullRequest], which I'd like to keep on using, in preference to individually declaring reads for every single field on pull requests.
As I've declared the field to be an Option, I'd like "merge_commit_sha": "" to be read as the value None, but this is not what currently happens - a string is present, so the Format[ObjectId] is invoked, and returns a JsFailure.
One thing I tried was declaring an implicit Format[Option[ObjectId]] with the required behaviour, but it didn't seem to get used by the macro-generated Reads[PullRequest].
You can define a custom Reads and Writes yourself.
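For example, a rough sketch for just the problem field (the ObjectId type, its Format, and the trimmed-down PullRequest below are stand-ins for the real types in the question):

import play.api.libs.json._
import play.api.libs.functional.syntax._

object GitHubJson {

  // Stand-ins for the real types in the question
  case class ObjectId(sha: String)

  implicit val objectIdFormat: Format[ObjectId] = Format(
    Reads {
      case JsString(s) if s.nonEmpty => JsSuccess(ObjectId(s))
      case other                     => JsError(s"not a valid object id: $other")
    },
    Writes(id => JsString(id.sha))
  )

  case class PullRequest(number: Int, merge_commit_sha: Option[ObjectId])

  // Treat "" (and null) as None, otherwise defer to the strict Format[ObjectId]
  val emptyTolerantSha: Reads[Option[ObjectId]] = Reads {
    case JsString("") | JsNull => JsSuccess(None)
    case other                 => other.validate[ObjectId].map(Some(_))
  }

  // Hand-written Reads for the case class, using the lenient Reads for that one field
  implicit val pullRequestReads: Reads[PullRequest] = (
    (__ \ "number").read[Int] and
    (__ \ "merge_commit_sha").read[Option[ObjectId]](emptyTolerantSha)
  )(PullRequest.apply _)
}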
Using Json.format[MyType] uses a Scala macro. You may be able to hook into that. Although, 'extending' a macro for this one case class just seems wrong.
Custom Reads and Writes might be a little 'boilerplate-like' and boring, but they have their upsides.
For example, if your JSON has a bunch of new fields on it, you won't get a JsError when validating or transforming it to a case class. You only take what you need from the JSON and create objects. It also allows for a separation between your internal model and what you're consuming, which in some cases is preferred.
I hope this helps,
Rhys
EDIT
After using some other JSON libs I may have found what you are looking for.
I know the question was asking specifically after Play JSON.
If you're able to move away from Play JSON, look at spray-json-shapeless, specifically JsNullBehaviour and JsNullNotNone.
I have just started using Pyramid for one of my projects, and I have a case where I need to validate a form field input by taking that field's value and making a web-service call to assert its correctness. For example, there is a field for your bank's CUSTOMER-ID. I need to take that (alone) as input and validate it at the server level by making a web-service call (like http://someotherdomain/validate_customer_id/?customer_id=<input_value>, say).
I am using Colander for form schema management and Deform for all the form validation logic. I am confused about where I need to place my validation logic for the CUSTOMER-ID case. Is it at MySchema().bind(customer_id=<input_value>) (which has a deferred validator that queries the web-service), or somewhere around form.validate(request.POST.items())? If I take the deferred validator's path, then MySchema().bind raises a colander.Invalid error for an incorrect CUSTOMER-ID. That's fine. But that error is not at the form level but at the schema level. So how would I tell the user about it in a sane way?
I have good experience with Django forms, so I was expecting something like a clean method. A form error like form['customer_id'].error is what I am expecting at the template level. Is that possible with Pyramid's Deform or with Colander?
So I think the big problem you're having is understanding the separation of concerns between Colander and Deform. Colander is what people like to call a general schema validation library. We define a schema, where each node has a particular data type and some nodes might be required/optional. Colander is then able to validate data against that schema and tell us whether or not the data we passed to colander conforms to it. As an example, in my web apps I am often building APIs that accept GET/POST params that need to be validated. So in Pyramid, let's say I have this scenario:
request.POST = {
    'post_id': 1,
    'author_id': 1,
    'unnecessary_attr': 'stuff'
}
I can then validate it like so:
from colander import SchemaNode, Mapping, Integer

# schema
schema = SchemaNode(Mapping(),
                    SchemaNode(Integer(), name='post_id'),
                    SchemaNode(Integer(), name='author_id'))

# deserialize() raises colander.Invalid if the data does not conform
schema.deserialize(request.POST)
And it will error if it can't conform the data to the specified schema. So you can see, colander can actually be used to validate ANY set of data, whether that comes from POST/GET/JSON data. Deform on the other hand is a form library, and helps you create/validate forms. It uses colander for all of the validation needs and as you can see it pretty much just completely delegates validation to colander. So to answer your question, you would do all of your validation stuff in colander, and deform would mostly handle the rendering of your forms.
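To tie this back to your CUSTOMER-ID case, a rough sketch (the endpoint, schema and view below are made up for illustration, and I'm assuming the requests library for the web-service call): a node-level validator can make the web-service call and raise colander.Invalid, and when form.validate() fails Deform re-renders the form with that message attached to the field:

import colander
import deform
import requests  # assumption: using requests for the web-service call


def customer_id_validator(node, value):
    # Hypothetical validation endpoint from the question
    resp = requests.get(
        'http://someotherdomain/validate_customer_id/',
        params={'customer_id': value},
    )
    if resp.status_code != 200:
        # Raising colander.Invalid attaches the message to this node,
        # so Deform shows it next to the customer_id field
        raise colander.Invalid(node, 'Unknown CUSTOMER-ID')


class AccountSchema(colander.MappingSchema):
    customer_id = colander.SchemaNode(
        colander.String(),
        validator=customer_id_validator,
    )


def account_view(request):
    form = deform.Form(AccountSchema(), buttons=('submit',))
    if 'submit' in request.POST:
        try:
            appstruct = form.validate(request.POST.items())
            # appstruct['customer_id'] is now validated
        except deform.ValidationFailure as e:
            # re-render with the error message shown on the field
            return {'form': e.render()}
    return {'form': form.render()}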
To see a full Pyramid example application with Deform in action, look at todopyramid, part of the IndyPy Python Web Shootout: a todo application implemented in Pyramid, Django, Flask and Bottle. I studied the Pyramid example; it is well written, shows Deform schema validation, and uses Bootstrap to show validation messages.
In this question, someone asked how to write a validation method for Core Data. I did that, and it looks cool. But one thing doesn't happen: the validation. I can easily set any "bad" value, and the method doesn't get called automatically. What's the concept behind this? Must I always call the validation method myself before setting a value? Should I write setter methods that call the appropriate validation method first?
And if so, what's the point of following a strict convention for the validation method's signature? I assume there is also some automatic way to trigger validation, then. How do I activate it?
Validation is not "automatic", especially on iOS. On the desktop, the UI elements handle the call to validation. On iOS you should be calling validateValue:forKey:error: with the appropriate key and dealing with the error should there be one. The reason for this is the lack of a standard error display on iOS and the overhead of validating all values.
Note this comment in the documentation:
If you do implement custom validation methods, you should typically not invoke them directly. Instead you should call validateValue:forKey:error: with the appropriate key. This ensures that any constraints defined in the managed object model are also applied.
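For illustration, a minimal sketch of calling it before setting a value (the person object and the "name" key are invented for this example; note the method may also substitute a different value, which is why it takes a pointer):

#import <CoreData/CoreData.h>

static void setValidatedName(NSManagedObject *person, NSString *newName)
{
    id value = newName;   // validateValue:forKey:error: may replace this in place
    NSError *error = nil;

    if ([person validateValue:&value forKey:@"name" error:&error]) {
        [person setValue:value forKey:@"name"];
    } else {
        // There is no standard error UI on iOS, so surface the problem
        // however your app normally reports errors.
        NSLog(@"Validation failed: %@", [error localizedDescription]);
    }
}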