Firebase Storage: making all files under a specific directory public to the Internet - google-cloud-storage

I want to publish all the files under a specific directory to the Internet.
Security Rules
rules_version = '2';
service firebase.storage {
  match /b/{bucket}/o {
    match /public/{allPaths=**} {
      allow read;
      allow write: if request.auth != null;
    }
  }
}
After writing the security rules above, when I open the Cloud Storage browser, the files under the directory are still set to private.
If you look at the details of each file, you will see that no public URL has been issued.
And when I enter the Authenticated URL, I am redirected to Google's login page.
How can I fix this?

Firebase security rules only control who can get the URL of a specific object, whether through the Firebase SDK or the REST APIs. If you check an object's URL in the Firebase console, you'll see a URL of the following format, which is probably what you are looking for:
https://firebasestorage.googleapis.com/v0/b/[PROJECT].appspot.com/o/public%2Fdog.jpg?alt=media&token=fd991fdf-a33f-4321-9017-09907e1a5243
Security rules can only restrict users from getting this URL in the first place; anyone who has the URL can access the image/file (for example, if an authorized user has shared it).
That being said, the rule allow read: if true; will allow anyone to get the URL (like the one shown above), and with it anyone can access the file.
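As an aside, that download URL is just the percent-encoded object path plus a token stored in the object's metadata. A small illustrative Python helper (bucket name and token are placeholders, not real values) shows the shape:

```python
from urllib.parse import quote

def firebase_download_url(bucket: str, object_path: str, token: str) -> str:
    # The object path is percent-encoded with slashes included, which is why
    # "public/dog.jpg" shows up as "public%2Fdog.jpg" in the URL above.
    encoded = quote(object_path, safe="")
    return (
        f"https://firebasestorage.googleapis.com/v0/b/{bucket}/o/"
        f"{encoded}?alt=media&token={token}"
    )
```

Without a valid token (or a read rule that allows the request), the endpoint rejects it, which is exactly the restriction the rules enforce.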
If you want a public link from GCS as well, you will have to edit the permissions from the GCP console and grant allUsers read access.

Related

Flutter Firebase Storage Add additional parameters to end of download url

I have an issue regarding the integration of downloaded photos from Firebase storage in my Flutter app.
I am using the https://pub.dev/packages/gallery_saver package to download images to the user's device. Due to a bug (or the design of the plugin), you can only download images if the URL ends with an image extension such as .jpeg/.png/.jpg.
Here are some comments from other people who have the same issue with the package: https://github.com/CarnegieTechnologies/gallery_saver/issues/66
To use the package with my app now I am adding the filename to the end of the image url.
This works completely fine when my security rules allow all reads.
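The workaround described above can be sketched as a small Python helper (hypothetical, not from the question's code): since the plugin only checks that the URL string ends in an image extension, appending a dummy query parameter whose value is the file name is enough:

```python
from urllib.parse import urlsplit

def with_filename_suffix(download_url: str, filename: str) -> str:
    # Firebase download URLs already carry a query string (?alt=media&token=...),
    # so we append one more parameter; the URL now ends in ".jpg"/".png" etc.
    sep = "&" if urlsplit(download_url).query else "?"
    return f"{download_url}{sep}fn={filename}"
```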
As soon as I add these rules:
rules_version = '2';
service firebase.storage {
  match /b/{bucket}/o {
    match /{allPaths=**} {
      allow read: if request.auth != null;
    }
  }
}
I get a 403 Forbidden error when downloading any image.
Is there a way to make this work while using file endings at the end of the url?
Download URLs generated by Firebase for Cloud Storage are opaque URLs, and you can't modify them.
Your options are to:
Mark the file as public on Cloud Storage itself, so that you can download it without the extra token in the URL.
Fix the plugin you use to allow downloading without a filename extension, in which case it can determine the file type from its metadata.
Expand the plugin you use to allow downloading the file through the Firebase SDK, instead of through a download URL.

Firestore security rules request.auth is always null

I am unsuccessfully trying to tighten Firestore security with custom claims.
versions:
Firebase: ^9.6.9
Node: 16.14.0
Next: 11.1.0
I have verified that my custom claims are set as they work as expected for certain collections/requests, specifically ones coming from NextJs Server Side Rendered functions.
I have tried:
Refactoring every service to Firebase v9 instead of using compat
Changing the Firestore rules in production
Setting up emulators (note: I'm not using the Auth emulator)
Adding debug() to the security rules
Signing out and back in again
Using request.auth.token (which is what the debug output shows on other requests)
as well as request.resource.auth.token as per these docs (incorrect usage)
From the security rules debug - request.auth is null
Update:
I can see from the rules usage on the Firebase console that the rules were treated as errors, not denies.
I have found strange behavior.
I am successfully able to use custom claims for any collection other than 'blog'.
The rules match successfully for other collections, e.g. tags.
I'm starting to think it has something to do with NextJs Server Side Rendering using getServerSideProps()
My code to retrieve blog entries:
export async function getServerSideProps(context) {
  const posts: BlogPost[] | any = await GetBlogPosts();
  return {
    props: {posts}, // will be passed to the page component as props
  };
}
interface extendedBlogPosts extends BlogPost {
  ['key']: string;
  id: string;
}
interface BlogPageProps {
  posts: extendedBlogPosts[];
  userProps: {user: UserInfo; userData: UserData};
}
It sounds like the code in getServerSideProps runs in a trusted environment and accesses Firestore using an Admin SDK. If that is the case then the behavior you are seeing is expected, as the Admin SDKs access the database with elevated/administrative privileges and not as a specific user, and bypass all security rules.
Even if you're not using the Admin SDK, if that code runs in the server using the regular JavaScript SDK (or the one for Node.js, but not for administrative usage), your code is executing in a different environment than where the user signed in. So unless you've signed the user into the server-side code too, it will run without a signed in user, and thus the request.auth variable in your rules will be null.
The current user information is stored in request.auth, not in request.resource.auth as you are using in your first and last screenshots.

Ways to use Google Storage buckets on the frontend via a token, without user creation

I have a client-server application that stores files on the server, but I would like to integrate Google Cloud.
I should say right away that a user can log in by email or by phone (in the latter case, they will not have an email).
My main task is to let the frontend, for each specific user, upload files directly to Google Cloud Storage (without the right to delete them).
My main idea is to give a specific user the right to add new files within a specific scope. At first, I thought I would create a service account with administrator rights on the backend and grant a specific user access to a specific bucket folder through my backend. However, as far as I understand, it is better to create a separate bucket for each user and give the user the right to create new files in it.
The main problem is how to give the frontend a token that only grants the right to add files to this bucket.
Initially, I hoped that I could create some kind of token without creating a user entity on the Google Cloud side.
But after a little googling, I realized that I need to use IAM.
IAM offers several options for user identification:
Google Account email: user@gmail.com - but as I wrote earlier, I may not have an email, just a phone number
Google Group: admins@googlegroups.com - not my option
Service account: server@example.gserviceaccount.com - but as far as I understand, service accounts cannot be used on the frontend
Google Workspace domain: example.com - not my option
Perhaps there are other options. Ideally, I would like a permanent token for a specific bucket, without creating a user on the Google Storage side, since I already have the user in my own database.
I found a solution, the only one suitable for this kind of task: Signed URLs.
The logic is as follows: the client asks the server for an upload link, the server creates a signed link that is valid for a limited time, and the client makes a PUT request to upload the file to GCS.
Here is an example in many languages of how to create such a link.
Here is a simple example of how to upload a file to Google Cloud Storage on the frontend using such a link:
<div>
  <label for="photo">Choose file to upload</label>
  <input type="file" name="someFile" accept=".jpg" id="photo"/>
</div>
<script>
  document.getElementById('photo').addEventListener('change', async (event) => {
    const file = event.target.files[0];
    // your signed URL:
    const url = 'https://storage.googleapis.com/your-bucket/123.jpeg?X-Goog-Algorithm=...';
    try {
      const response = await fetch(url, {
        method: 'PUT',
        body: file,
      });
      console.log('Upload status:', response.status);
    } catch (error) {
      console.error('Error:', error);
    }
  });
</script>

Firebase Storage Code -13000 Permission Error

I have previously been able to use Firebase to store user images, but once I created a new Xcode project I've been getting this error:
Error Domain=FIRStorageErrorDomain Code=-13000 "An unknown error occurred, please
check the server response." UserInfo={object=WtLirPvwL9b7eI3zipGZkk1G4Hi2,
ResponseBody={
  "error": {
    "code": 400,
    "message": "Permission denied. Please enable Firebase Storage for your bucket by visiting the Storage tab in the Firebase Console and ensure that you have sufficient permission to properly provision resources."
  }
}}
I have been getting this error even after setting up storage for my project.
And I don't think it has to do with the usage rules--for one, I allowed unauthenticated reads and writes, and in my testing the user is already authenticated before attempting to push images to storage. In fact, the uid generated by Firebase Auth is what is being used as the reference. Also noteworthy: authentication with Firebase is working fine--I'm having no errors creating accounts.
Another interesting fact--I also received this error after setting up yet another Xcode project (and the corresponding steps in Firebase). So the same error is happening with two different Xcode projects.
EDIT: usage rules and permissions
rules_version = '2';
service firebase.storage {
  match /b/{bucket}/o {
    match /{allPaths=**} {
      allow read, write: if request.auth != null;
    }
  }
}
This is the default usage configuration. I previously changed the rules to allow read, write unconditionally to see if that would affect anything, but my error happens with both that config and the default one.
Also, someone in the comments linked to a site recommending checking IAM and Admin permissions. As I suspected, I am the owner, so I see no permission problem there.
Additionally, nothing appears to be wrong with the GoogleService-Info.plist, even though that was what I suspected was wrong. Indeed, it includes the correct string for my storage bucket.
EDIT 2: CONSOLE ERROR
Thanks to @Leopold for pointing out that this error also occurs in the console. At first I thought the console was working for me (I was able to upload images), but I now get the error when trying to view the image I uploaded through the console.
I got to this page by clicking on the image name hyperlink.
To me it looks like Google Cloud Platform is configured in a way that you do not have permission on the storage bucket.
Go to https://console.cloud.google.com
Pick your project
Navigate to Cloud Storage
Go to Permissions tab
Add the permission which best suits your needs.
For my storage I allow access for everyone; for troubleshooting, you can do the same temporarily so you can see and experiment.
In the Permissions tab, click ADD
Add the Storage Object Viewer role
Click Save
Considering that I can't find older questions about this exact issue, and that the only ones are from yesterday, I suspect it's a server-side/Google-side issue. So I went to Firebase Storage, uploaded a file through the console, and then saw this on the right side, under the upload button:
Error Loading Thumbnail/Image (Erro Carregando Visualização)
And then I clicked on the name link (Nome: Fachada.jpg) and got this:
Permission denied. Please enable Firebase Storage for your bucket by visiting the Storage tab in the Firebase Console and ensure that you have sufficient permission to properly provision resources.
{
  "error": {
    "code": 400,
    "message": "Permission denied. Please enable Firebase Storage for your bucket by visiting the Storage tab in the Firebase Console and ensure that you have sufficient permission to properly provision resources."
  }
}
So even the console is yielding this error! It's server-side; you just have to wait. I have never seen this happen before on Firebase (there are no Google search results for this exact error, except from yesterday, Saturday, November 20th, 2021).

Stop Facebook/Other Apps from crawling my web application via private sharelink

Edit: The suggested answer does not work, as the robots are not just randomly crawling from my index; they visit the specific link when it is entered in an FB message.
I've created a basic chat application in Flask on App Engine. It allows the user to invite others by adding their ID or by giving them a private sharelink that auto-adds who ever goes to it (similar to youtube or google drive).
A serious flaw I have found is that if a user posts the link into a facebook message, Facebook will crawl/visit the link and by design of my system add them as a user to the conversation. All of a sudden you'll see 3 random users join the conversation.
My chat system is completely anonymous and designed to be temporary, so there's no login or authentication other than a unique key for each user saved in their session.
So Facebook bots visit the link, get assigned an ID, and get authenticated into the conversation because they used the user's share link. Is there a way I can stop this via Flask/Python or App Engine? Could I IP-ban Facebook?
Some code for context; this runs for every new visitor:
from datetime import datetime, timezone
from functools import wraps
import hashlib

from flask import session

def requires_session(f):
    @wraps(f)
    def decorated(*args, **kwargs):
        if 'profile' not in session:
            # fs_database is the Firestore client, created elsewhere
            user_ref = fs_database.collection('users').document()
            data = {
                'id': user_ref.id,
                'date': datetime.now(timezone.utc)
            }
            # add the user to the database
            user_ref.set(data)
            # save their id to their session
            session['profile'] = data.get('id')
            # create a hash for later on to create a sharelink
            session['share'] = hashlib.sha256(data.get('id').encode('utf-8')).hexdigest()
        return f(*args, **kwargs)
    return decorated
I could maybe add a check first if Facebook-bot: return False
For your case, you can avoid this either on your side or on the Google Cloud Platform side. More precisely, you can reject certain connections in your code, or you can set firewall rules on your App Engine instance to reject connections coming from certain IPs. The public documentation has more information about firewall rules when using GAE:
Using flex environment.
Using standard environment.
Code-wise, you can look at this GitHub repo, which addresses the issue of blocking certain IPs in a Flask app.
The last possible option is authentication, but as the chat is anonymous, I guess that's not the solution you are looking for.
The accepted answer led me to this solution: I protected the route with a decorator that inspects the User-Agent of the incoming connection to see where it comes from. If it comes from Facebook, it is turned away.
from functools import wraps

from flask import request, session

def check_for_robot(f):
    @wraps(f)
    def decorated(*args, **kwargs):
        if 'not_a_robot' not in session:
            agent = request.headers.get('User-Agent', '')
            if request.headers.getlist("X-Forwarded-For"):
                ip = request.headers.getlist("X-Forwarded-For")[0]
            else:
                ip = request.remote_addr
            # Stop robots from crawling when sharing conversation links
            # Could use the IPs too
            if 'facebook' in agent or 'Slackbot' in agent:
                return 'No Robots Thanks'
            # Real people will get to here and continue on
            session['not_a_robot'] = True
        return f(*args, **kwargs)
    return decorated

@app.route('/')
@check_for_robot
def index():
    return 'hello human'
This issue also occurs with any messaging service that crawls your links to display a preview in the chat message (WhatsApp, Slack, etc.).
This also exposed a vulnerability in these messaging services: they now return incorrect metadata to the chat while still embedding the link you provided, i.e. potential phishing or clickjacking.