Invalid hashing in Firebase Cloud Storage Rules Playground - firebase-storage

I am testing hashing in the rules playground:
This returns "CRexOpCRkV1UtjNvRZCVOczkUrNmGyHzhkGKJXiDswo=", the correct hash of the string "SECRET" :
let expected = hashing.sha256("SECRET");
But this returns "SECRETpath/to/the/file.mp4", the argument itself instead of its hash:
let expected = hashing.sha256("SECRET" + request.resource.name);
Is it a bug in the rules playground?
Can hashing functions be used on dynamic values or is it intentionally prevented?
This strange rules playground behavior has been mentioned here before, that time with Firestore security rules: Firestore rules hashing returns identity

Firebaser here!
There are a few issues at play here. I think the primary source of confusion is that the hashing.sha256 function returns a rules.Bytes type. It appears that the Rules Playground in the Firebase Console incorrectly shows a string value when debugging the bytes type, but that is unrelated to behavior in production. For example, this Rule will always deny:
allow write: if hashing.sha256("SECRET" + request.resource.name) ==
"SECRET" + request.resource.name;
To get the behavior you're looking for, you need to use one of the conversion functions for the rules.Bytes type. Based on your question, you'll probably want the toBase64() function, but toHexString() is also an option. If you try these functions in your Rules, the Playground should start behaving correctly and the Rules will work as expected in production as well. So to put it all together, you'd write:
let expected = hashing.sha256("SECRET" + request.resource.name).toBase64();
For example, the rules listed below would allow you to upload a file called "foo/bar" (as Gqot1HkcleDFQ5770UsfmKDKQxt_-Jp4DRkTNmXL9m4= is the Base64 SHA-256 hash of "SECRETfoo/bar"):
allow write: if hashing.sha256("SECRET" + request.resource.name).toBase64() ==
    "Gqot1HkcleDFQ5770UsfmKDKQxt_-Jp4DRkTNmXL9m4=";
I hope this helps clear things up! Separately, we will look into addressing the incorrect debugging output in the Playground.

After trying with the emulators and the deployed app, it seems that hashing.sha256 does not work on dynamic data in any environment. The behavior is consistent, so I filed a feature request to support hashing of dynamic values in Storage security rules. This would be nice because it would allow passing signed data to the security rules for each file (for example, an upload authorization obtained via a Cloud Function).
For now, the workaround I can imagine is putting data in the user's custom token (or custom claims), so that I can pass signed data to the security rules. It is not ideal because I need to mint and sign a new custom token for every file upload.
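A rough sketch of that workaround with the Admin SDK, assuming a callable Cloud Function; the names getUploadToken and allowedPath are placeholders, not an established API:
const functions = require('firebase-functions');
const admin = require('firebase-admin');
admin.initializeApp();

// Hypothetical callable function: mints a custom token whose claims name the one
// path this user may upload to.
exports.getUploadToken = functions.https.onCall(async (data, context) => {
  const path = data.path; // e.g. "path/to/the/file.mp4"
  // ...verify here that context.auth.uid may upload to this path...
  const token = await admin.auth().createCustomToken(context.auth.uid, { allowedPath: path });
  return { token };
});
After the client signs in with that token, the claim is visible to Storage rules as request.auth.token.allowedPath, so the rule can compare it directly to request.resource.name without hashing.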

Related

Is it possible to use INTENT instead of STRING as List Title in Google Action?

Some Background:
I use Lists a lot for a Google Action with a Node.js fulfillment backend. The Action is primarily voice-based. The reason for using a List is that I can encode information in a List item's key and use it later to make a decision. Another reason is that Google Assistant will try to fuzzy-match the user's input with the titles of the List's items to find the closest matching option. This is where things get a bit hard for me. Consider the following example:
{
  [JSON.stringify(SOME_OBJECT)]: {
    title: 'Yes'
  },
  [JSON.stringify(ANOTHER_OBJECT)]: {
    title: 'No'
  }
}
Now if I say Yes / No, I can get the user's choice and do something with the information stored as stringified JSON in the choice's key.
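For reference, reading the selection back in the fulfillment looks roughly like this with the actions-on-google Node.js library (a sketch; the intent name is just a placeholder for whatever is mapped to the actions_intent_OPTION event):
// Handler for the intent mapped to the actions_intent_OPTION event
app.intent('Get Option', (conv) => {
  const option = conv.arguments.get('OPTION'); // the key of the selected list item
  const data = JSON.parse(option);             // recover the object encoded in the key
  // ...make a decision based on data...
});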
But users may say Sure or Yup or OK, which basically mean the same thing as saying Yes. Since those words don't match Yes, Google Assistant will ignore the "Yes" option. However, all of these words belong to the smalltalk.confirmation.yes built-in intent, so if I could use this intent instead of hardcoding the string Yes, I would be able to capture all of the inputs that mean Yes.
I know I could do this with a Synonyms list or Confirmation intent. But they also have some problems.
Using synonyms would require me to find every word that is similar. Besides, I would also need to localize these synonyms into all the supported languages.
With the Confirmation intent, I won't be able to show some information to the user before asking them to choose an option. Besides, it also doesn't support encoding data in the options as I can with a List item's key.
So, List is a good choice for me in this case.
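For context, the synonyms workaround mentioned above would look roughly like this with the actions-on-google Node.js library (a sketch; the keys and synonym lists are only illustrative):
const { List } = require('actions-on-google');

conv.ask('Here are your options.'); // a list must be preceded by a simple response
conv.ask(new List({
  title: 'Please choose one',
  items: {
    [JSON.stringify(SOME_OBJECT)]: {
      title: 'Yes',
      synonyms: ['Sure', 'Yup', 'OK'],
    },
    [JSON.stringify(ANOTHER_OBJECT)]: {
      title: 'No',
      synonyms: ['Nope', 'Nah'],
    },
  },
}));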
So, is there any way to leverage the built-in intents for this purpose? What do you do in this situation?

How do DuckDuckGo Spice IA secondary API calls get their parameters?

I have been looking through the spice instant answer source code. Yes, I know it is in maintenance mode, but I am still curious.
The documentation makes it fairly clear that the primary spice to API gets its numerical parameters $1, $2, etc. from the handle function.
My question: when there are secondary API calls included with spice alt_to, as in, say, the movie spice IA, where do the numerical parameters to those API calls come from?
Note, for instance, the $1 in both the movie_image and cast_image secondary API calls in spice alt_to at the preceding link. I am asking which regex capture returns those instances of $1.
I believe I see how this works now. The flow of information is still a bit murky to me, but at least I see how all of the requisite information is there.
I'll take the cryptocurrency instant answer as an example. The alt_to element in the Perl package file at that link has a key named cryptonator. The corresponding .js file constructs a matching endpoint:
var endpoint = "/js/spice/cryptonator/" + from + "/" + to;
Note the general shape of the "remainder" past /js/spice/cryptonator: from/to, where from and to will be two strings.
Back in the Perl package, the hash alt_to->{cryptonator} has a key from which receives, I think, this remainder from/to. The value corresponding to that key is a regex meant to split that string into its two constituents:
from => '([^/]+)/([^/]*)'
Applied to from/to, that regex will return $1=from and $2=to. These, then, are the $1 and $2 that go into
to => 'https://api.cryptonator.com/api/full/$1-$2'
in alt_to.
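A quick way to see those captures in action (plain JavaScript with illustrative values, not DuckDuckGo's actual code):
// "btc/usd" stands in for the remainder past /js/spice/cryptonator/
var remainder = "btc/usd";
var match = remainder.match(/([^\/]+)\/([^\/]*)/);
// match[1] === "btc"  -> $1
// match[2] === "usd"  -> $2
// so the 'to' template expands to https://api.cryptonator.com/api/full/btc-usd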
In short:
The to field of alt_to->{blah} receives its numerical parameters by having the from regex operate on the remainder past /js/spice/blah/ of the name of the corresponding endpoint constructed in the relevant .js file.

How does resource.data.size() work in firestore rules (what is being counted)?

TLDR: What is request.resource.data.size() counting in the firestore rules when writing, say, some booleans and a nested Object to a document? Not sure what the docs mean by "entries in the map" (https://firebase.google.com/docs/reference/rules/rules.firestore.Resource#data, https://firebase.google.com/docs/reference/rules/rules.Map) and my assumptions appear to be wrong when testing in the rules simulator (similar problem with request.resource.data.keys().size()).
Longer version: I'm running into a problem in Firestore rules where I'm not able to update data as expected (despite similar tests working in the rules simulator). I have narrowed the problem down to the point where I can see that the culprit is a rule checking that request.resource.data.size() equals a certain number.
An example of the data being passed to the Firestore update function looks like:
Object {
  "parentObj": Object {
    "nestedObj": Object {
      "key1": Timestamp {
        "nanoseconds": 998000000,
        "seconds": 1536498767,
      },
    },
  },
  "otherKey": true,
}
where the timestamp is generated via firebase.firestore.Timestamp.now().
This appears to work fine in the rules simulator, but not for the actual data when doing
let obj = {}
obj.otherKey = true
// since want to set object key name dynamically as nestedObj value,
// see https://stackoverflow.com/a/47296152/8236733
obj.parentObj = {} // needed for adding nested dynamic keys
obj.parentObj[nestedObj] = {
  key1: firebase.firestore.Timestamp.now()
}
firebase.firestore().collection('mycollection')
  .doc('mydoc')
  .update(obj)
Among some other rules, I use the rule request.resource.data.size() == 2, and this appears to be the rule that causes a permission-denied error (since commenting out this rule gets things working again). I would think that since the object is being passed with 2 (top-level) keys, request.resource.data.size() would be 2, but this is apparently not the case, nor is it the total number of keys in the passed object (there's a similar problem with request.resource.data.keys().size()). So that's a long example for a short question. It would be very helpful if someone could clarify for me what is going wrong here.
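For reference, the failing check is essentially this (simplified; my real rules wrap it in more conditions):
allow update: if request.resource.data.size() == 2;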
From my last communication with Firebase support around a month ago, there were issues with request.resource.data.size() and timestamp-based security rules for queries.
I was also told that request.resource.data.size() is the size of the document AFTER a successful write. So if you're writing 2 additional keys to a document with 4 keys, the value you should be checking against is 6, not 2.
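As a sketch of what that means in a rule (using the field counts from that example):
// The existing document has 4 fields and the update adds 2 new top-level fields;
// request.resource.data is the merged future document, so its size is 6.
allow update: if request.resource.data.size() == 6;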
Having said all that, I am still having problems with request.resource.data.size() and alternatives such as request.resource.size(), which seems to be used in this documentation:
https://firebase.google.com/docs/firestore/solutions/role-based-access
I also have some places in my security rules where it seems to work. I personally don't know why that is though.
I'd been struggling with this for a few hours, and I see now that the Firebase documentation is clear: "the request.resource variable contains the future state of the document". So it contains ALL the fields, not only the ones being sent.
https://firebase.google.com/docs/firestore/security/rules-conditions#data_validation.
But there is actually another way to count ONLY the fields being sent: request.writeFields.size(). The writeFields property is a list of all the incoming fields.
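For example (a sketch, with the caveats below):
// Counts only the fields included in this write, not the whole future document.
allow update: if request.writeFields.size() == 2;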
Beware: writeFields is deprecated and may stop working anytime, but I have not found any replacement.
EDIT: writeFields apparently does not work in the simulator anymore...

Azure APIM Policy Editor

I would very much like to be able to set Azure APIM policy attributes based on a user's JWT claims data. I have been able to set string values for things like counter-key and increment-condition, but I can't set all attributes. I imagined doing something like the following:
<rate-limit-by-key
    calls="#((int) context.Variables["IdentityToken"].AsJwt().Claims.GetValueOrDefault("/LimitRate/Limit", "5"))"
    renewal-period="#((int) context.Variables["IdentityToken"].AsJwt().Claims.GetValueOrDefault("/LimitRate/Duration/InSeconds", "60"))"
    counter-key="#((string)context.Variables["Subject"])"
    increment-condition="#(context.Response.StatusCode == 200)"
/>
However, there seems to be some validation happening when I save the policy, as I get the following error:
Error in element 'rate-limit-by-key' on line 98, column 10: The 'calls' attribute is invalid - The value '#((int) context.Variables["IdentityToken"].AsJwt().Claims.GetValueOrDefault("/LimitRate/Limit", "5"))' is invalid according to its datatype 'http://www.w3.org/2001/XMLSchema:int' - The string '#((int) context.Variables["IdentityToken"].AsJwt().Claims.GetValueOrDefault("/LimitRate/Limit", "5"))' is not a valid Int32 value.
I even have trouble setting a string parameter (albeit one with a strict format):
<quota-by-key
    calls="10"
    bandwidth="100"
    renewal-period="#((string) context.Variables["IdentityToken"].AsJwt().Claims.GetValueOrDefault("/Quota/RenewalPeriod", "P00Y00M01DT00H00M00S"))"
    counter-key="#((string)context.Variables["Subject"])"
/>
This gives the following when I try to save the policy:
Error in element 'quota-by-key' on line 99, column 6: #((string) context.Variables["IdentityToken"].AsJwt().Claims.GetValueOrDefault("/Quota/RenewalPeriod", "P00Y00M01DT00H00M00S")) is not in a valid format. Provide number of seconds or use 'PxYxMxDTxHxMxS' format where 'x' is a number.
I have tried a large set of variations: casting, Convert.ToInt32, claims that are not strings, #{return 5}, #(5), etc., but there seems to be some validation happening at save time that is stopping it.
Is there a way around this issue, as I think this would be a useful feature to add to my API?
The calls attribute on rate-limit-by-key and quota-by-key does not support policy expressions. Internal limitations unfortunately block us from treating it on a per-request basis. The best you can do is categorize requests into a few finite groups and apply the rate limit/quota conditionally using the choose policy.
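A rough sketch of that conditional approach, reusing the expression style from the question (the claim check and the limits are only illustrative):
<choose>
    <when condition="#(context.Variables["IdentityToken"].AsJwt().Claims.GetValueOrDefault("/LimitRate/Limit", "5") == "100")">
        <rate-limit-by-key calls="100" renewal-period="60"
                           counter-key="#((string)context.Variables["Subject"])"
                           increment-condition="#(context.Response.StatusCode == 200)" />
    </when>
    <otherwise>
        <rate-limit-by-key calls="5" renewal-period="60"
                           counter-key="#((string)context.Variables["Subject"])"
                           increment-condition="#(context.Response.StatusCode == 200)" />
    </otherwise>
</choose>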
Or try using the increment-count attribute to control by how much the counter is increased per request.

Apply Command to String-type custom fields with YouTrack Rest API

Thanks for looking!
I have an instance of YouTrack with several custom fields, some of which are String-type. I'm implementing a module to create a new issue via the YouTrack REST API's PUT request, and then update its fields with user-submitted values by applying commands. This works great, most of the time.
I know that I can apply multiple commands to an issue at the same time by concatenating them into the query string, like so:
Type Bug Priority Critical add Fix versions 5.1 tag regression
will result in
Type: Bug
Priority: Critical
Fix versions: 5.1
in their respective fields (as well as adding the regression tag). But, if I try to do the same thing with multiple String-type custom fields, then:
Foo something Example Something else Bar P0001
results in
Foo: something Example Something else Bar P0001
Example:
Bar:
The command only applies to the first field, and the rest of the query string is treated like its String value. I can apply the command individually for each field, but is there an easier way to combine these requests?
Thanks again!
This is an expected result, because the entire string after Foo is considered the value of this field; spaces are also valid symbols for string custom fields.
If you try to apply this command via the command window in the UI, you will actually see the same result.
Such a good question.
I encountered the same issue and have spent an unhealthy amount of time in frustration.
Using the command window in the YouTrack UI, I noticed it leaves trailing quotation marks, and I was unable to find anything in the documentation that discussed finalizing or identifying the end of a string value. I was also unable to find any mention of setting string field values in the command reference, grammar documentation, or examples.
For my solution I am using Python with the requests and urllib modules, though I expect you could port the solution to any language.
The REST API will accept explicitly quoted strings in the POST:
import requests
import urllib  # Python 2; on Python 3 use urllib.parse.urlencode instead
from collections import OrderedDict

URL = 'http://youtrack.your.address:8000/rest/issue/{issue}/execute?'.format(issue='TEST-1234')

# Note the explicit double quotes around the string-type field values; they tell
# the command parser where each value ends.
params = OrderedDict({
    'State': 'New',
    'Priority': 'Critical',
    'String Field': '"Message to submit"',
    'Other Details': '"Fold the toilet paper to a point when you are finished."'
})

# Build a single command string containing all field/value pairs.
# (Field order does not matter to YouTrack, which is just as well, since an
# OrderedDict built from a dict literal does not preserve order on Python 2.)
str_cmd = ' '.join(' '.join([k, v]) for k, v in params.items())
command_url = URL + urllib.urlencode({'command': str_cmd})
result = requests.post(command_url)

# The command result:
# http://youtrack.your.address:8000/rest/issue/TEST-1234/execute?command=Priority+Critical+State+New+String+Field+%22Message+to+submit%22+Other+Details+%22Fold+the+toilet+paper+to+a+point+when+you+are+finished.%22
I'm sad to see this one go unanswered for so long. Hope this helps!
Edit:
After continuing my work, I have concluded that sending all the field updates as a single POST is marginally better for the YouTrack server, but requires more effort than it's worth:
1) You need to know which fields in the issues are string-type values.
2) You need to pre-process all the string values into quoted string literals.
3) If you send all your field updates as a single request and just one of them is missing, fails to set, or has an unexpected value, then the entire request fails and you potentially lose all the other information.
I wish the YouTrack documentation had some mention or discussion of these considerations.