When you have two sites, a front end and a REST API, how/where do you store uploaded images?

I have two different sites: one is a REST API, the other is the front end, built in Vue.
Actually uploading a file isn't the issue. My question is how to have the Vue site access the files that were uploaded via the REST API.
Should I save the files to c:...VueProjectFolder\images (could use some code help if this is the case)? Or should the Vue site live inside the REST API folder for relative access? Or is it better to save them relative to the API and then move the uploaded files? Or should Vue access the files via the REST API's address?
None of these really seems like the right answer, and I'm not having much luck with Google at the moment.
Down the road, I would expect the files to be served from a mapped drive, as there will be many of them: mostly images, but also sound files.

This is probably not a Vue or 'two sites' issue. This is an architecture issue, and as with most architecture issues, the answer is: it depends. What I can do is tell you how I approached a similar situation.
1. I uploaded the files normally.
2. I renamed them and created a folder structure based on the year, month, and day the picture was uploaded, then moved the image to its permanent location. So an image uploaded today would be located at .../assets/images/2020/09/01/randomImageName.png, for instance.
3. I stored the image location, along with whatever resource came with the uploaded image, in the database.
Now, in my frontend, I make a normal API call for a particular resource and it returns everything about that resource, including the image location.
I think I should point out that my case was an ecommerce website with a REST API endpoint serving the frontend's requests. This approach is generally advisable since you can back up the image directory and the database independently, and move between servers easily if need be.
This may not exactly be your case, but I hope it gives you insight into how to approach this efficiently.
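To make that flow concrete, here is a minimal sketch in Node/Express. The route, the multer usage, and the folder layout are illustrative assumptions for the example, not the exact code I used:

```js
// Illustrative sketch of the upload flow above: land the file in a temp folder,
// build a year/month/day directory, move the file under a random name, and
// return the stored path so it can be saved in the database.
const express = require('express');
const multer = require('multer');
const path = require('path');
const fs = require('fs');
const crypto = require('crypto');

const app = express();
const upload = multer({ dest: 'tmp/' }); // temporary landing spot for uploads

app.post('/api/images', upload.single('image'), (req, res) => {
  const now = new Date();
  const dir = path.join(
    'assets', 'images',
    String(now.getFullYear()),
    String(now.getMonth() + 1).padStart(2, '0'),
    String(now.getDate()).padStart(2, '0')
  );
  fs.mkdirSync(dir, { recursive: true });

  const randomName =
    crypto.randomBytes(16).toString('hex') + path.extname(req.file.originalname);
  const finalPath = path.join(dir, randomName);
  fs.renameSync(req.file.path, finalPath); // move to the permanent location

  // Store finalPath in the database with the rest of the resource; the frontend
  // only ever sees the path/URL that the API hands back.
  res.json({ imagePath: '/' + finalPath.split(path.sep).join('/') });
});
```

The Vue site never needs direct access to the API's filesystem; it just requests the image by the path/URL the API returns (or the API can serve the assets directory statically).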

Related

What is the best way to link an image with a mongodb item?

I'm currently building my first real project that includes Express and MongoDB. Since it's one of the first backend-heavy projects I've worked on outside of my Udemy course, I've run into a lot of questions.
My project is supposed to be a mock-online store that would display items I have created inside of my MongoDB server. The problem I'm having is that I don't know the proper way of serving those image files that should be associated with each item (such as the image of a hat, for a hat item). I could add them directly into the project's public folder, but I don't know if that would be feasible in terms of the scalability that I want this project to demonstrate. But it doesn't seem like MongoDB will let me store images within each item. How would I go about doing that?
Sorry in advance if any of this is unclear; it's my first time posting as well. I'll try to provide more information if I need to. Thanks!
If you want a scalable solution for images, you typically would use a separate service like AWS S3 or Imgix.
There are several benefits to using a third-party service: you don't bog down your web server with image requests or image resizing, you get virtually unlimited space, and so on.
In your MongoDB document, you would then store a key like /item/1.jpg or whatever, rather than the image itself. Your front-end then uses the key to request the image when someone visits your website.
If you want a turn-key solution, I recommend starting with Imgix (or Cloudinary, or some similar service). It is more expensive than S3, but it is pretty cheap for a small project, and it will get you up and running a lot faster.
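To sketch what that looks like on the MongoDB side (the schema fields and the bucket URL below are placeholders for the example, not anything S3 or MongoDB requires):

```js
// Store only a key/path in the document, never the image bytes themselves.
const mongoose = require('mongoose');

const itemSchema = new mongoose.Schema({
  name: String,
  price: Number,
  imageKey: String, // e.g. "item/1.jpg" -- the object key in S3 / Imgix / Cloudinary
});

const Item = mongoose.model('Item', itemSchema);

// The frontend builds the URL from the key the API returns, for example:
//   const imageUrl = `https://my-bucket.s3.amazonaws.com/${item.imageKey}`;
```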

nginx-gridfs: control which documents in a collection can be served via nginx vs my app?

I'm doing some planning for replacing an existing solution with mongodb and potentially using the GridFS functionality for static assets. One question that I have is if I choose to front GridFS with nginx, is there a way to control which assets in a collection may be served directly from nginx versus which asset requests need to be directed to my application?
The reason I ask is that some security checks are done for certain assets and those assets have always been served out of the app itself (and will need to continue to be, at least for now).
I was thinking I could probably just add a property to the file documents stored in GridFS, something like isPublished. Could I instruct nginx-gridfs to respect this property?
Looks like there have been requests for this functionality, but the project has been largely abandoned by its contributors. The applicable ticket is here: https://github.com/mdirolf/nginx-gridfs/issues/24

uploading images to php app on GCE and storing them onto GCS

I have a php app running on several instances of Google Compute Engine (GCE). The app allows users to upload images of various sizes, resizes the images and then stores the resized images (and their thumbnails) in the storage disk and their meta data in the database.
What I've been trying to find is a method for storing the images onto Google Cloud Storage (GCS) through the php app running on GCE instances. A similar question was asked here but no clear answer was given there. Any hints or guidance on the best way for achieving this is highly appreciated.
You have several options, all with pros and cons.
Your first decision is how users upload data to your service. You might choose to have customers upload their initial data to Google Cloud Storage, where your app would then fetch it and transform it, or you could choose to have them upload it directly to your service. Let's assume you choose the second option, and you want users to stream data directly to your service.
Your service then transforms the data into a different size. Great. You now have a new file. If this was video, you might care about streaming the data to Google Cloud Storage as you encode it, but for images, let's assume you want to process the whole thing locally and then store it in GCS afterwards.
Now we have to get a file into GCS. It's a PHP app, and so as you have identified, your main three options are:
1. Invoke the GCS JSON API through the Google API PHP client.
2. Invoke either the GCS XML or JSON API via custom code.
3. Use gsutil.
Using gsutil will be the easiest solution here. On GCE, it automatically picks up appropriate credentials for your service account, and it's got several useful performance optimizations and tuning that a raw use of the API might not do without extra work (for example, multithreaded uploads). Plus it's already installed on your GCE instances.
The upside of the PHP API is that it's in-process and offers more fine-grained, programmatic control. As your logic gets more complicated, you may eventually prefer this approach. Getting it to perform as well as gsutil may take some extra work, though.
This choice is comparable to copying files via SCP with the "scp" command line application or by using the libssh2 library.
tl;dr; Using gsutil is a good idea unless you have a need to handle interactions with GCS more directly.
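As a rough illustration of the gsutil route (shown as a Node-style child process call purely for illustration; in a PHP app the equivalent would be shell_exec or proc_open, and the bucket and paths below are made up):

```js
// Illustrative only: shell out to gsutil after the resize step finishes.
// On a GCE instance, gsutil picks up the service account credentials automatically.
const { execFile } = require('child_process');

function copyToGcs(localPath, bucket, objectName, callback) {
  execFile('gsutil', ['cp', localPath, `gs://${bucket}/${objectName}`], (err) => {
    if (err) return callback(err);
    callback(null, `gs://${bucket}/${objectName}`);
  });
}

// e.g. copyToGcs('/tmp/resized/photo_800.jpg', 'my-images-bucket', 'images/photo_800.jpg', done);
```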

Packaged App: syncFileSystem / fileSystem API - For *large* files

I am looking to develop a Chrome Packaged App that will (at a very simple level) provide a dynamic form filling UI - but allow users to attach large attachments to the forms (could be upwards of 10 files of 10MB each). I would like to have the ability to save and share the form data and the attachment via Google Drive. The forms will be completed collaboratively by multiple team members who also need to all see the attachments. Imagine a form front-end/metadata that sits on top of a shared Google Drive folder...
I have read the documentation, and learnt that the syncFileSystem API is not intended for use for general and/or large files to be stored in Google Drive, but rather for small configuration data.
I then looked at the fileSystem API, hoping that I could include the app's sandboxed folder among the folders that the Google Drive client app syncs (so that the files get synced automatically), but it doesn't look like the sandbox is meant to be accessed externally.
My current thinking is to recreate a Windows Explorer type UI in the packaged app (I can use drag and drop), then store the files in the sandbox using the fileSystem API. I can reuse the code from the Google Drive sample packaged app to implement cloud syncing. Good idea?
Two questions stem from this:
1. How persistent is the fileSystem API? The documentation mentions that the user can purge all stored files. Is this done through 'clearing all browser history'? If so, they could very easily and accidentally wipe many hundreds of MB of useful information that I am storing in the packaged app.
2. I have read that you can use a 3rd-party authentication service (which I want to do). If I use a non-Google account to authenticate my users, how would the Google Drive authentication work? Would I be able to use a different Google account to perform the cloud storage (i.e. unrelated to the actual end user, who may or may not already have a Google account, which may already be signed in)?
It seems like waiting for this https://code.google.com/p/chromium/issues/detail?id=148486 (getting read access to non-sandbox directories) would be the easiest way forward.
I don't think clearing browser history deletes temporary sandbox filesystem files; they're supposed to be sort of automatically garbage-collected when space is required. It would make sense if that were another checkbox in the 'Clear browsing data' section of Chrome's options. Perhaps that would make the answer to your first question clearer :-)
On the second point, I am not sure how to do this, but it looks like you have already figured something out? At least that's what this page https://groups.google.com/a/chromium.org/forum/#!topic/chromium-apps/hOYu75Cv0AE seems to indicate.
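On the persistence question specifically, one option worth checking (this is an assumption about your setup, not something from the docs you quoted) is to request PERSISTENT rather than TEMPORARY storage, which is not subject to automatic eviction:

```js
// Sketch: request a PERSISTENT sandbox filesystem so files are not garbage-collected.
// Assumes the packaged app declares the "unlimitedStorage" permission in its manifest.
var requestedBytes = 500 * 1024 * 1024; // e.g. 500 MB

window.webkitRequestFileSystem(
  window.PERSISTENT,
  requestedBytes,
  function (fs) {
    // Files written here persist until the user or the app deletes them.
    fs.root.getFile('attachment1.pdf', { create: true }, function (entry) {
      // write the uploaded attachment into entry here
    });
  },
  function (err) { console.error(err); }
);
```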

Amazon S3 POST upload (from iPhone)

A bit of background: I am building an iPhone app with a complementary server backend (written in Rails or possibly Sinatra, but probably not relevant for this discussion). Part of the functionality involves uploading pictures from the iPhone to the server. These ultimately get stored on S3, so in order to simplify the app and conserve bandwidth, I would like to upload the pictures directly from the iPhone to S3, skipping my backend server.
Using the S3 REST API (in which case I would likely use ASIHTTPRequest) would mean storing the AWS key and secret in the iPhone app, which I don't want to do for security reasons.
For similar reasons I don't want to make my S3 bucket publicly writable.
Now it looks like S3 also has support for browser-based uploads using POST. If I understand it correctly, this works by generating a signed policy document on the server, which then allows the client app to directly POST the file to S3. It seems like in principle this should work not only for browsers, but also for iPhone apps.
However, I have a hard time figuring out the exact way of getting this working (not the iPhone specific part, just S3 POST uploads in general). What information needs to be sent to the server in order to calculate the signature (e.g. does it need the file size or any other file information)? I'll dig through the official docs some more and start experimenting with this.
When you generate the policy you can restrict what is uploaded in various ways (key name, mime-type, file size, etc.) by constructing a JSON string. These restrictions (including an expiry date) are then signed using your AWS secret key. You then POST the signed policy and your access key as form parameters to AWS, along with the key for the new resource, its content, and whatever other metadata you like.
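For what it's worth, here is a rough sketch of generating the policy and signature on the server (Node-style for illustration; the bucket name, expiration, and limits are placeholders, and the flow is the same in Rails or Sinatra):

```js
// Build and sign an S3 POST policy (legacy 2006-03-01 signing: HMAC-SHA1 over the
// base64-encoded policy JSON, using the AWS secret key).
const crypto = require('crypto');

const policy = {
  expiration: '2020-12-31T12:00:00Z',             // placeholder expiry
  conditions: [
    { bucket: 'my-upload-bucket' },                // placeholder bucket
    ['starts-with', '$key', 'uploads/'],
    { acl: 'private' },
    ['content-length-range', 0, 10 * 1024 * 1024]  // cap uploads at 10 MB, say
  ],
};

const policyBase64 = Buffer.from(JSON.stringify(policy)).toString('base64');
const signature = crypto
  .createHmac('sha1', process.env.AWS_SECRET_ACCESS_KEY)
  .update(policyBase64)
  .digest('base64');

// The iPhone app then POSTs a multipart form to the bucket URL with fields:
// key, AWSAccessKeyId, acl, policy (policyBase64), signature, and the file last.
```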
The official doco is the only reference I know of (but I haven't googled for it either...)
http://docs.amazonwebservices.com/AmazonS3/2006-03-01/dev/HTTPPOSTForms.html#HTTPPOSTConstructPolicy
is the page you're probably most interested in.