Amazon S3 POST upload (from iPhone)

A bit of background: I am building an iPhone app with a complementary server backend (written in Rails or possibly Sinatra, but probably not relevant for this discussion). Part of the functionality involves uploading pictures from the iPhone to the server. These ultimately get stored on S3, so in order to simplify the app and conserve bandwidth, I would like to upload the pictures directly from the iPhone to S3, skipping my backend server.
Using the S3 REST API (in which case I would likely use ASIHTTPRequest) would mean storing the AWS key and secret in the iPhone app, which I don't want to do for security reasons.
For similar reasons I don't want to make my S3 bucket publicly writable.
Now it looks like S3 also has support for browser-based uploads using POST. If I understand it correctly, this works by generating a signed policy document on the server, which then allows the client app to directly POST the file to S3. It seems like in principle this should work not only for browsers, but also for iPhone apps.
However, I have a hard time figuring out the exact way of getting this working (not the iPhone specific part, just S3 POST uploads in general). What information needs to be sent to the server in order to calculate the signature (e.g. does it need the file size or any other file information)? I'll dig through the official docs some more and start experimenting with this.

When you generate the policy you can restrict what is uploaded in various ways (key name, mime-type, file size etc) by constructing a JSON string. These restrictions (including an expiry date) are then signed using your AWS secret key. You then POST the signed policy and your access key as form parameters to AWS, along with the key for the new resource, its content, and whatever other metadata you like.
The official docs are the only reference I know of (but I haven't googled for alternatives either...). The page you're probably most interested in is:
http://docs.amazonwebservices.com/AmazonS3/2006-03-01/dev/HTTPPOSTForms.html#HTTPPOSTConstructPolicy
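To make the flow concrete, here is a rough sketch of the signing step. The question mentions a Rails backend; this sketch uses Python with boto3 purely for illustration, and the bucket name, key, and size limit are made-up placeholders. The point it shows is that the signature is computed from the policy constraints (key, content type, size range, expiry), not from the file itself, and that the client only needs the returned URL and signed form fields to POST directly to S3.

```python
# Illustrative sketch only: the real backend in the question is Rails, but the flow
# is the same in any language. Bucket, key, and limits below are placeholders.
import boto3
import requests  # used here only to simulate the client-side POST

s3 = boto3.client("s3")  # credentials stay on the server, never on the phone

# The server signs a policy that limits what the client may upload:
# the exact key, a content type, and a maximum file size.
presigned = s3.generate_presigned_post(
    Bucket="my-upload-bucket",
    Key="uploads/user-123/photo.jpg",
    Fields={"Content-Type": "image/jpeg"},
    Conditions=[
        {"Content-Type": "image/jpeg"},
        ["content-length-range", 0, 10 * 1024 * 1024],  # up to 10 MB
    ],
    ExpiresIn=600,  # policy valid for 10 minutes
)

# The server returns presigned["url"] and presigned["fields"] to the phone.
# The client then POSTs the file directly to S3; note that the signature does
# not require the file itself, only the constraints above.
with open("photo.jpg", "rb") as f:
    r = requests.post(presigned["url"], data=presigned["fields"], files={"file": f})
print(r.status_code)  # S3 returns 204 on success by default
```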

Related

When you have two sites, a front end and a REST API, how/where do you store uploaded images?

I have two different sites, basically: one is a REST API, the other is the front end built in Vue.
Actually uploading a file isn't the issue. My question is how to have the Vue portion access the files that were uploaded via the REST API.
Should I save the files to c:...VueProjectFolder\images (could use some code help if this is the case)? Or should the Vue site be inside the REST API folder for relative access? Or is it better to save them relative and then move the uploaded files? Or do I have Vue access the files via the REST API address?
None of these really seems like the right answer, and I'm failing with Google at the moment.
Down the road, I would expect them to be served from a mapped drive, as there will be many. Mostly images, but also sound files.
This is probably not a Vue or 'two sites' issue. This is an architecture issue, and like all such issues, it depends. What I can do is tell you how I approached a similar situation.
I uploaded the files normally
I renamed them and created a folder structure based on the year, month, and day the picture was uploaded, then moved the image to its permanent location (see the sketch below). So an image uploaded today would be located at .../assets/images/2020/09/01/randomImageName.png, for instance.
I stored the image location along with whatever resource came with the uploaded image in the database.
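A minimal sketch of steps 2 and 3, assuming a Python backend and an invented products table; the actual REST API could be in any language, and the paths and names here are only illustrative.

```python
# Hypothetical sketch: build a dated path, move the upload there, and record the
# relative location in the database alongside the rest of the resource.
import os
import shutil
import sqlite3  # stand-in for whatever database the REST API actually uses
import uuid
from datetime import date

ASSETS_ROOT = "assets/images"  # served statically, or proxied by the REST API

def store_uploaded_image(tmp_path: str, ext: str = ".png") -> str:
    today = date.today()
    rel_dir = os.path.join(ASSETS_ROOT, f"{today:%Y}", f"{today:%m}", f"{today:%d}")
    os.makedirs(rel_dir, exist_ok=True)

    # Random name so uploads never collide and never leak the original filename.
    rel_path = os.path.join(rel_dir, uuid.uuid4().hex + ext)
    shutil.move(tmp_path, rel_path)
    return rel_path

# Record the location next to the resource's other data.
conn = sqlite3.connect("app.db")
conn.execute(
    "CREATE TABLE IF NOT EXISTS products (id INTEGER PRIMARY KEY, name TEXT, image_path TEXT)"
)
path = store_uploaded_image("/tmp/upload-123.png")
conn.execute("INSERT INTO products (name, image_path) VALUES (?, ?)", ("Example product", path))
conn.commit()
```

The frontend then just renders whatever image_path the API returns for a resource, without needing to know how the files are laid out on disk.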
Now in my frontend, I do a normal api call for a particular resource and it spits out everything about that resource, including the image location.
I think I should point out that my case was an ecommerce website with a REST API endpoint serving the frontend requests. This approach is generally advisable since you can back up the image directory and the database independently, and easily move between servers if need be.
This may not exactly be your case, but I hope it gives you insight into how to approach this efficiently.

How to download a CSV file from Google Drive and show its content in a mobile app

I am totally new to mobile app development and consequently very confused about how to get going (no matter how many times I have read the Google Drive API documentation).
The way I would like to implement my (initially Android) mobile app (which I will develop using Ionic):
I will have a Google Drive account where I will have 1 CSV file. I will periodically renew the content of the file in the background (possibly twice a week).
The mobile app that I will develop will just retrieve the file from Google Drive, process the content and show it to the user in a more readable (easy to understand) format.
My app will not upload any data/file from the user device to the Google Drive. The app will only retrieve a file from Google Drive to show the content to the user.
Question 1) Does this approach make sense? I would ideally like to eliminate the need for back-end development. Or would you suggest another approach to achieve the same thing (with or without Google Drive)?
Question 2) The authorization process looks quite confusing to me as explained in Google's documentation, and I could not find information that addresses my exact scenario. Requirements: the mobile app can fetch the file (or its content) and process it to show to the end user, but the mobile app (or any other client) may not update/edit/delete the file, and cannot add a new file either. The only purpose of using Google Drive is to let the mobile app fetch the data that will be shown to the user. How can this be solved using Google's OAuth framework? A step-by-step action plan would really be appreciated.
ADDENDUM
You are also welcome to share your view on whether I should instead consider using Firebase for my problem, which I guess would be more costly.
Based on discussing the requirements with you, I would recommend against trying to do this with the Google Drive API.
There are no tutorials out there for Ionic 4 + the Google Drive API, and only a few for older versions. It will be an uphill struggle, and the resulting solution isn't going to scale well.
Instead you should start looking into using Firebase.
There are lots of tutorials which show you the basics: setting up a login system, and reading some data from the database.
The free limits are quite generous.
You can implement caching in your app so that you store a copy of the data on the device and only refresh it weekly; or, more advanced, add a second table that records the last-updated date for the main table.
Firebase charges by reads, so if you can set things up so that you only read one record (the last-updated one) instead of downloading the whole database every time, you can stretch the free tier a lot further.
If you do outgrow the free tier and the app is not generating enough to cover the costs then you have the option of investing time instead of money. There are guides in the docs about exporting the users and they provide tools so that the passwords can be put into another system without requiring the users to reset their passwords. The database can be similarly exported.
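Here is a rough sketch of that read-saving pattern. An Ionic app would use the Firebase JS SDK on the device, but the logic is identical; this version uses the Python Admin SDK, and the collection, document, and field names (meta/status, last_updated, rows) are invented for illustration.

```python
# Sketch of the "read one record first" pattern described above. Names are made up.
import json
import firebase_admin
from firebase_admin import firestore

firebase_admin.initialize_app()  # picks up GOOGLE_APPLICATION_CREDENTIALS
db = firestore.client()

CACHE_FILE = "cache.json"

def load_cache():
    try:
        with open(CACHE_FILE) as f:
            return json.load(f)
    except FileNotFoundError:
        return {"last_updated": None, "rows": []}

cache = load_cache()

# One cheap read: just the metadata document that records the last update.
status = db.collection("meta").document("status").get().to_dict() or {}
remote_stamp = str(status.get("last_updated"))

if remote_stamp != cache["last_updated"]:
    # The data actually changed: pay for the full read and refresh the local copy.
    rows = [doc.to_dict() for doc in db.collection("rows").stream()]
    cache = {"last_updated": remote_stamp, "rows": rows}
    with open(CACHE_FILE, "w") as f:
        json.dump(cache, f)

# cache["rows"] is what the UI renders, whether or not a refresh happened.
```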

Uploading images to a PHP app on GCE and storing them in GCS

I have a php app running on several instances of Google Compute Engine (GCE). The app allows users to upload images of various sizes, resizes the images and then stores the resized images (and their thumbnails) in the storage disk and their meta data in the database.
What I've been trying to find is a method for storing the images in Google Cloud Storage (GCS) through the PHP app running on the GCE instances. A similar question was asked here, but no clear answer was given. Any hints or guidance on the best way to achieve this would be highly appreciated.
You have several options, all with pros and cons.
Your first decision is how users upload data to your service. You might choose to have customers upload their initial data to Google Cloud Storage, where your app would then fetch it and transform it, or you could choose to have them upload it directly to your service. Let's assume you choose the second option, and you want users to stream data directly to your service.
Your service then transforms the data into a different size. Great. You now have a new file. If this was video, you might care about streaming the data to Google Cloud Storage as you encode it, but for images, let's assume you want to process the whole thing locally and then store it in GCS afterwards.
Now we have to get a file into GCS. It's a PHP app, and so as you have identified, your main three options are:
Invoke the GCS JSON API through the Google API PHP client.
Invoke either the GCS XML or JSON API via custom code.
Use gsutil.
Using gsutil will be the easiest solution here. On GCE, it automatically picks up appropriate credentials for your service account, and it's got several useful performance optimizations and tuning that a raw use of the API might not do without extra work (for example, multithreaded uploads). Plus it's already installed on your GCE instances.
The upside of the PHP API is that it's in-process and offers more fine-grained, programmatic control. As your logic gets more complicated, you may eventually prefer this approach. Getting it to perform as well as gsutil may take some extra work, though.
This choice is comparable to copying files via SCP with the "scp" command line application or by using the libssh2 library.
tl;dr: Using gsutil is a good idea unless you need to handle interactions with GCS more directly.
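To make the comparison concrete, here is a sketch of both options. The app in the question is PHP (where the library route would use the google/cloud-storage package), but the shape of the calls is the same from any language; this sketch is Python, and the bucket and file paths are placeholders.

```python
# Two ways to push a resized image to GCS from a GCE instance. Names are placeholders.
import subprocess
from google.cloud import storage

LOCAL = "/tmp/resized/photo_800x600.jpg"
BUCKET = "my-image-bucket"
OBJECT = "images/photo_800x600.jpg"

# Option 1: shell out to gsutil. On GCE it picks up the instance's service-account
# credentials automatically and handles retries and parallelism for you.
subprocess.run(["gsutil", "cp", LOCAL, f"gs://{BUCKET}/{OBJECT}"], check=True)

# Option 2: the in-process client library (the PHP equivalent is google/cloud-storage).
# More code, but finer-grained control and no external process per upload.
client = storage.Client()  # also uses the GCE service-account credentials
blob = client.bucket(BUCKET).blob(OBJECT)
blob.upload_from_filename(LOCAL, content_type="image/jpeg")
```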

Packaged App: syncFileSystem / fileSystem API - For *large* files

I am looking to develop a Chrome Packaged App that will (at a very simple level) provide a dynamic form filling UI - but allow users to attach large attachments to the forms (could be upwards of 10 files of 10MB each). I would like to have the ability to save and share the form data and the attachment via Google Drive. The forms will be completed collaboratively by multiple team members who also need to all see the attachments. Imagine a form front-end/metadata that sits on top of a shared Google Drive folder...
I have read the documentation, and learnt that the syncFileSystem API is not intended for use for general and/or large files to be stored in Google Drive, but rather for small configuration data.
I then looked at the fileSystem API - hoping that I could include the app's sandboxed folder among the folders that the Google Drive client app syncs (so that the files get synced automatically) - but it doesn't look like the sandbox is meant to be accessed externally.
My current thinking is to recreate a windows explorer type UI in the packaged app (can use drag and drop) - then store the files in the sandbox using the fileSystem API. I can reuse the code from the Google Drive sample packaged app to implement cloud syncing. Good idea?
Two questions stem from this:
How persistent is the fileSystem API? The documentation mentions that the user can purge all stored files - is this done through 'clearing all browser history'? In which case they could very easily and accidentally wipe many hundreds of MB of useful information that I am storing in the packaged app.
I have read that you can use 3rd-party authentication services (which I want to do). If I use a non-Google account to authenticate my users, how would the Google Drive authentication work? Would I be able to use a different Google account to perform the cloud storage (i.e. unrelated to the actual end user, who may or may not have a Google account already - which may already be signed in)?
It seems like waiting for this https://code.google.com/p/chromium/issues/detail?id=148486 (getting read access to non-sandbox directories) would be the easiest way forward.
I don't think clearing browser history deletes temporary sandbox filesystem files; they're supposed to be garbage-collected automatically when space is required. It would make sense if that were another checkbox in the "Clear browsing data" section of Chrome's options. Perhaps that would make the answer to your first question clearer :-)
As for the second point, I am not sure how to do this, but it looks like you have already figured something out? At least that's what this page https://groups.google.com/a/chromium.org/forum/#!topic/chromium-apps/hOYu75Cv0AE seems to indicate.

How do I create an iPhone app to take a picture and send it back to an Oracle database?

I'm trying to create a program with which you can take a photo with your camera and send it back, where it will then be attached as a field in an Oracle database. An existing app this is similar to (if I'm not explaining it clearly enough) would be the bank apps that let you photograph the front and back of your checks, then send them off to a different location to be processed.
From my understanding, I would need some sort of middleware and not access the database directly with the pictures taken, but I'm just trying to get the project off the ground at the moment.
So, my immediate questions are:
What sort of base project template would be the best to use for this kind of app?
What sort of code is required to send a file from one location to another? (I'm mainly used to these scenarios in .NET languages, not in Xcode.)
Expose an HTTP-based service (which can be written in any language and run on any platform, e.g. GNU/Linux).
The app itself would be native iOS, and you can certainly consume web-services.
The server itself is just your basic CRUD system backed by a persistent store, in your case an RDBMS.
[iOS] <-----/net:HTTP/---->[server]<==/LINQ/==>[RDBMS]
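For the [server] half of that diagram, a minimal sketch of such an HTTP service is below, assuming Python with Flask and python-oracledb rather than the .NET/LINQ stack implied by the diagram; the table, columns, and connection details are invented. The iOS app would simply send a standard multipart/form-data POST to this endpoint.

```python
# Minimal sketch of the [server]<==>[RDBMS] half of the diagram above.
# Table name, column names, and connection details are placeholders.
import oracledb
from flask import Flask, jsonify, request

app = Flask(__name__)

def get_connection():
    return oracledb.connect(user="app", password="secret", dsn="dbhost/orclpdb1")

@app.route("/photos", methods=["POST"])
def upload_photo():
    photo = request.files["photo"]  # the multipart form field sent by the phone
    data = photo.read()             # raw JPEG bytes

    conn = get_connection()
    try:
        cur = conn.cursor()
        # IMAGE is assumed to be a BLOB column; python-oracledb binds bytes directly.
        cur.execute(
            "INSERT INTO checks (filename, image) VALUES (:1, :2)",
            [photo.filename, data],
        )
        conn.commit()
    finally:
        conn.close()

    return jsonify({"status": "stored", "filename": photo.filename}), 201

if __name__ == "__main__":
    app.run()
```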