Dropbox app with tiered users - dropbox-api

Preface:
I'm hoping to upgrade an existing application by adding cloud backup and syncing of the customer's data. We want this to be as seamless as possible, but we also want the customer's only interface to the data to be the application's front-end.
Our application connects to the oil pipe of a machine and collects data on the oil's condition. When a test has completed, we want to push the results to the cloud. Because of the discrete, per-test nature of the data (as opposed to one big trend), most IoT platforms don't suit it very well, so we're aiming to release a slightly modified version of the application without the sensor connection, and this will be our remote front-end.
Since the existing application uses a relatively simple file structure to store its data, if we simply replicate these files in the cloud, the remote front-end version can just download them to the same location and it will work fine. This has led us to Dropbox (or any more appropriate cloud storage system you might recommend).
We hope to use the Dropbox API directly in our application to push and pull the files as necessary. We believe all of this is perfectly achievable.
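For illustration, here is a minimal sketch of that push/pull flow using the official Dropbox JavaScript SDK in TypeScript. The access token, paths and function names are placeholders, not part of our actual application:

import { Dropbox } from "dropbox";
import { readFileSync, writeFileSync } from "fs";

// Placeholder token; in practice this would belong to the vendor's account.
const dbx = new Dropbox({ accessToken: "APP_ACCESS_TOKEN" });

// Push a completed test's file up to the cloud.
async function pushTestFile(localPath: string, remotePath: string): Promise<void> {
  await dbx.filesUpload({
    path: remotePath,
    contents: readFileSync(localPath),
    mode: { ".tag": "overwrite" },
  });
}

// Pull the same file down in the remote front-end version.
async function pullTestFile(remotePath: string, localPath: string): Promise<void> {
  const res = await dbx.filesDownload({ path: remotePath });
  // In Node, the SDK exposes the downloaded bytes as `fileBinary`.
  writeFileSync(localPath, (res.result as any).fileBinary);
}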
Question: Is it possible to set up a user system with the requirements below, and if so, how would we go about it?
The user's personal Dropbox is not used
Dropbox is completely hidden from the user
The application vendor has a top-level user who has access to all data (for analytics; we do not want to store confidential or sensitive data)
When a user logs in, they only have access to their own folder, and an attacker could not disrupt the overall structure. (We understand that if an attacker got hold of the master account then all is lost, but that is an internal matter of keeping it secure. As long as the user accounts are isolated, this is okay.)
Alternative question: Is anyone aware of a storage or IoT system which would better suit this use case? We will still require backups/loss prevention as part of the service.


Can I migrate users' locally stored CoreData to a server later on?

As with many other founders and their start-ups, I'm low on cash and aiming to launch without funding. The app will be dealing with users' health data, so setting up a server with the correct encryption may be costly. I am also only familiar with JSON, SwiftUI, Swift 5 and API programming, so setting up a server is outside my area of expertise.
Therefore, I aim to launch the app with all user data stored locally in Core Data, so as to avoid these issues. With enough users, i.e. traction, I will then begin to seek funding, at which point I hope to set up an encrypted server and transfer the user data there.
I am worried that if I launch with local storage, I will not be able to transfer each individual user's data to an external server without them having to re-enter all of their information.
I was just wondering whether this is possible or not? And if you could provide details, that would be very helpful.

Angular PWA Offline Storage

I’m building a new web application which needs to work seamlessly even when there is no internet connection. I’ve selected Angular and am building a PWA as it comes with built-in functionality to make the application work offline. So far, I have the service worker working perfectly and driven by the manifest file, this very nicely caches the static content and I’ve set it to cache a bunch of API requests which I want to use whilst the application is offline.
In addition to this, I’ve used localStorage to store attempts to invoke put, post and delete API requests when the user is offline. Once the internet connection is re-established, the requests stored in localStorage are sent to the server.
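As a rough sketch of what that looks like (the names QueuedRequest and flushQueue are mine, just for illustration):

interface QueuedRequest {
  url: string;
  method: "PUT" | "POST" | "DELETE";
  body?: string;
}

const QUEUE_KEY = "offline-request-queue";

// Store a write attempt made while offline for later replay.
function enqueue(req: QueuedRequest): void {
  const queue: QueuedRequest[] = JSON.parse(localStorage.getItem(QUEUE_KEY) ?? "[]");
  queue.push(req);
  localStorage.setItem(QUEUE_KEY, JSON.stringify(queue));
}

// Replay the queued requests in order once connectivity returns.
async function flushQueue(): Promise<void> {
  const queue: QueuedRequest[] = JSON.parse(localStorage.getItem(QUEUE_KEY) ?? "[]");
  for (const req of queue) {
    await fetch(req.url, {
      method: req.method,
      headers: { "Content-Type": "application/json" },
      body: req.body,
    });
  }
  localStorage.removeItem(QUEUE_KEY);
}

window.addEventListener("online", () => void flushQueue());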
So far in my proof of concept, the user can access content whilst offline, edit data, and have the data synced with the server once the user's internet connection is re-established. This is where my quandary begins, though. There is API request data cached automatically by the service worker as defined in the manifest file, and there is a separate store of data for edits made whilst offline. This leads to a situation where the user edits some data, saves it, refreshes the page, and the old, unedited data is served from the service worker's API cache.
Is there a built-in mechanism to update API data cached automatically by the service worker? I don't fancy trying to unpick this manually, as it seems hacky, and I can't imagine it'll be future-proof as service workers evolve.
If there isn't a standard way to achieve what I need, is it common for developers to take full control of offline data by storing it all in IndexedDB/localStorage manually? I.e., I could invoke API requests and write code which caches the results in a structured format in IndexedDB to form an offline database, writes back to that database whenever the user edits data, and uploads any edits when the user is back online. I don't envisage any technical problems with this; it just seems like a lot of effort to achieve something which I was hoping would be standard functionality.
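To make that concrete, here is a minimal sketch of the manual caching idea (the database and store names are illustrative):

function openDb(): Promise<IDBDatabase> {
  return new Promise((resolve, reject) => {
    const req = indexedDB.open("offline-db", 1);
    // Create the object store on first open / version upgrade.
    req.onupgradeneeded = () => req.result.createObjectStore("records", { keyPath: "id" });
    req.onsuccess = () => resolve(req.result);
    req.onerror = () => reject(req.error);
  });
}

// Cache an API result (or a local edit) in the offline database.
async function cacheRecord(record: { id: string }): Promise<void> {
  const db = await openDb();
  const tx = db.transaction("records", "readwrite");
  tx.objectStore("records").put(record);
  await new Promise<void>((resolve, reject) => {
    tx.oncomplete = () => resolve();
    tx.onerror = () => reject(tx.error);
  });
}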
I’m fairly new to developing with Angular, but have many years of other development experience. So please forgive me if I’m asking obvious questions, I just can’t seem to find a good article on best practices for working with data storage with service workers.
Thanks
I have a project where my users can edit local data while they are offline, and I use Cloud Firestore, which keeps a cached local database available. If I understood you correctly, this is exactly your requirement.
The benefit of this solution is that, with just one line of code, you get not only a local DB, but all changes made offline are also automatically synchronised with the server once the client comes online again.
// Enable offline persistence; Firestore will then serve and sync a local cache.
firebase.firestore().enablePersistence()
  .catch(function(err) {
    // Persistence can fail, e.g. with multiple open tabs ('failed-precondition')
    // or in unsupported browsers ('unimplemented'); log it and continue online-only.
    console.error('Could not enable persistence:', err.code);
  });
// Subsequent queries will use persistence, if it was enabled successfully
If using this NoSQL database is an option for you, I would go with it; otherwise you need to implement the local updates yourself, as there is no built-in solution for that.

Recommendations for multi-user Ionic/CouchDB app

I need to add multi-user capability to my single-page mobile app developed with Ionic 1, PouchDB and CouchDB. After reading many docs, I am getting confused about what the best choice would be.
About my app:
it should be able to work offline, and then sync with the server when online (this is why I am using PouchDB and CouchDB, which have worked great so far)
it should let the user create an account with a username and password, which would then be stored within the app so that he does not have to log in again whenever he launches the app. This account will make sure his data is then synced to the server in a secure place so that other users cannot access it
currently there is no need to have shared information between users
Based on what I have read I am considering the following:
on the server, have one database per user, storing his own data
on the server, have a master database, storing all the data of all users, plus the design docs. This makes it easy to change the design docs in a single place and have them replicated to each user database (and then to the PouchDB database in the app). The synchronization of data between the master and the user DBs is done through a filter, so that only the docs belonging to one user (via some userId field) are replicated to that user's database (see the sketch after this list)
use another module/plugin (SuperLogin? nolanlawson/pouchdb-authentication?) to manage users from the app (user creation, login, logout, password reset, email notification for lost passwords, ...)
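Here is that sketch of the filtered replication. It assumes a CouchDB filter function named app/by_user in a design doc and a userId field on each doc; all names are illustrative:

import PouchDB from "pouchdb";

const localDB = new PouchDB("mydata");
const masterDB = new PouchDB("https://server.example.com/master");

// Continuous two-way sync; only docs whose userId matches are replicated.
localDB.sync(masterDB, {
  live: true,
  retry: true,
  filter: "app/by_user",
  query_params: { userId: "some-user-id" },
});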
My questions:
do you think this architecture is appropriate, or do you have something better to recommend?
which software would you recommend for user management? SuperLogin looks great but needs to run on a separate HTTP server, making the architecture more complex. Does it automatically create a new database for each new user (I don't think so)? nolanlawson/pouchdb-authentication is client-only, but does it fit well with Ionic 1? Isn't there a LOT to develop around it that comes out of the box with SuperLogin? Do you have any other module in mind?
Many thanks in advance for your help!
This is an appropriate approach. The local PouchDBs will provide the data on the client side even if a client goes offline, and the combination with a central CouchDB server is a great way to keep data synchronized between server and clients.
You want to store the user's credentials, so you will have to save this data somehow on the client side, which could be done in a separate PouchDB.
If you keep all your user data in a local PouchDB database and have one CouchDB database per user on the server, you can even omit the filter you mentioned, because the synchronization will only happen between these two user databases.
I recommend SuperLogin. Yes, you have to install Node.js and some extra libraries (namely morgan, express, http, body-parser and cors), and you will have to open at least one new port on your server to provide this service. But SuperLogin is really powerful for managing user accounts and user databases on a CouchDB server.
For example, if a user registers, you just make a call to SuperLogin via http://server_address:port/auth/register, passing the user name, password, etc., and SuperLogin not only adds this new user to the user database, it also automatically creates a new database just for this user. Each user can have multiple databases (private or shared), and SuperLogin manages the access rights to all of them. Moreover, SuperLogin can also send confirmation emails or resend forgotten passwords (an access token, respectively).
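A rough sketch of such a registration call from the client (the host, port and field values are placeholders; SuperLogin expects password and confirmPassword to match):

async function register(username: string, email: string, password: string) {
  const res = await fetch("http://server_address:port/auth/register", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      name: username,
      username,
      email,
      password,
      confirmPassword: password,
    }),
  });
  if (!res.ok) throw new Error(`Registration failed: ${res.status}`);
  return res.json();
}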
Sure, you will have to configure a lot (but hey, at least you have all these options), and maybe you will even have to write some additional API endpoints for functionality not covered by SuperLogin. But in general, SuperLogin saves a lot of pain compared with developing custom user management yourself.
But if you are unsure about the server configuration, maybe a service such as Couchbase, Firebase, etc. is a better solution. These services also have some user management capabilities, and you have to worry less about server security.

Need advice: How to share a potentially large report to remote users?

I am asking for advice on possibly better solutions for the part of the project I'm working on. I'll first give some background and then my current thoughts.
Background
Our clients can use my company's products to generate potentially large data sets for use in their industry. When the data sets are generated, the clients will file a processing request to us.
We want to send the clients a summary email which contains some statistical charts as well as sampling points from the data sets so they can do some initial quality control work. If the data sets are of bad quality, they don't need to file any request.
One problem is that the charts and sampling points can be too large to send in an email. The charts and the sampling points we want to include in the emails are pictures. Although we can use a low-quality format such as JPEG to save space, we cannot control how many data sets will be included in the summary email, so the total size could still exceed the normal email size limit.
In terms of technologies, we are mainly developing in Python on Ubuntu 14.04.
Goals of the Solution
In general, we want to present something report-like to the clients for initial QA. The report may contain external links but does not need to be very interactive; in other words, a static report should be fine.
We want to reduce the steps our clients must take to read the report. For example, if the report can just be an email, the user only needs to 1) log in and 2) open the email. If they use a client application, they may skip 1) and just open it and begin to read.
We also want to minimize the burden of maintaining extra user accounts, for both us and our clients. For example, if a solution requires registering a new user account, it is, although still acceptable, not ranked very high.
Security is important because our clients don't want their reports to be read by unauthorized third parties.
We want the process automated. The solution should provide a programming interface so that we can automate the report sending/sharing process.
Performance is NOT a critical issue. Our user base is not large, at most a few hundred, and they don't generate data frequently, at most once a week. We don't need real-time responses; even a delay of a few hours is acceptable.
My Current Thoughts on a Solution
Possible solution #1: In-house web service. I can set up a server machine and develop our own web service. We put the report into our database and the clients can then query it via the Internet.
Possible solution #2: Amazon Web Services. AWS is quite mature, but I'm not sure whether it could be expensive, because so far we just want to share a report with our remote clients, which doesn't seem like a big enough deal to warrant AWS.
Possible solution #3: Google Drive. I know Google Drive provides an API for uploading and sharing programmatically, but I think we would need to register a dedicated Google account to use it.
Any better solutions?
You could use AWS S3 and CloudFront. Files can easily be loaded into S3 using the AWS SDKs and API. You can then use the API to generate secure links to the files that can only be opened for a specific time, and optionally only from a specific IP.
Files on S3 can also be automatically cleaned up after a specific time if needed using lifecycle rules.
Storage and transfer prices are fairly cheap with AWS, and remember that the indicated S3 storage cost is per month, so if you only keep an object for a few days then you only pay for those few days.
S3: http://aws.amazon.com/s3/pricing
Cloudfront: https://aws.amazon.com/cloudfront/pricing/
Here's a list of the SDKs for AWS:
https://aws.amazon.com/tools/#sdk
Or you can use their command-line tools for Windows batch or PowerShell scripting:
https://aws.amazon.com/tools/#cli
Here's some info on how the private content URLs are created:
http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/PrivateContent.html
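For a concrete idea of the time-limited links, here is a minimal sketch using the AWS SDK for JavaScript (v3) to generate a presigned S3 URL; the bucket, key and region are placeholders. (CloudFront signed URLs, per the link above, work similarly and additionally support IP restrictions.)

import { S3Client, GetObjectCommand } from "@aws-sdk/client-s3";
import { getSignedUrl } from "@aws-sdk/s3-request-presigner";

const s3 = new S3Client({ region: "us-east-1" });

// Returns a link to the report that stops working after one hour.
async function reportLink(bucket: string, key: string): Promise<string> {
  const command = new GetObjectCommand({ Bucket: bucket, Key: key });
  return getSignedUrl(s3, command, { expiresIn: 3600 });
}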
I suggest building this service using a mix of your #1 and #2 options. You can do the processing yourself and leverage AWS S3, which is quite cheap, for transferring the data.
For example, 100 GB costs approximately $3 per month.
AWS S3 will also be beneficial because you are covered in case of a disaster in your local environment; your data will be safe in S3.
For security, you can leverage data encryption and signed URLs in AWS S3.

Keeping iPhone application in sync with GWT application

I'm working on an iPhone application that should work in offline and online modes.
In its online mode, it's supposed to feed all the information the user enters to a web service backed by GWT/GAE.
In its offline mode, it's supposed to store the information locally and, when a connection is available, sync it up to the web service.
Currently my plan is as follows:
Provide a connection between the app and the web service, using Protocol Buffers for efficient over-the-wire communication
Work with the local DB using Core Data
Poll the network status and, when it is available, sync the database, keeping some sort of local-DB-to-remote-DB key synchronization.
The question is: am I headed in the right direction? Are there standard patterns for implementing this? Maybe someone can point me to an open-source application that works in a similar fashion?
I am really new to iPhone coding, and would be very glad to hear any suggestions.
Thanks
I think you're blurring two questions together.
If you've got a question about making a GWT web interface, that's one question.
Questions about how to sync an iPhone with a web service are a different matter. For that, you don't want to use GWT's RPCs for syncing, as you'd have to fake out the 'browser side' of the serialization system in your iPhone code, which GWT normally provides for you.
About the system design direction:
First, if there is no REAL need, do not create two different apps (one GWT and one iPhone); create one well-written GWT app. It will work offline without problems and will manage your data using the HTML5 offline application cache.
If you must create two separate apps, then at least save yourself some effort and do not write the server twice: if you go with the standard GWT approach, you will almost certainly fail to talk to the server from a standalone app (it is zipped JSON over HTTP with some tricky headers...), or you will end up writing things twice. So look into the RestLet library; it is well supported on GAE.
About keeping things in sync across offline/online switching:
There are several approaches to consider, and none of them are perfect. So when you design yours, think about what the user expects... Don't be Microsoft Word; do not try to outsmart the user.
If there is at least one scenario in the use cases that demands user intervention to merge changes (and there will be, take that to the bank), then you will have to implement a UI for it, and then there is a good reason to use it often, so the user gets used to it. That is better than the user first seeing it long after starting to use the app, because the need for it is rare thanks to some super-duper merging logic that asks the user only in very special cases... Don't do that.
Balance the effort, because the mess that a bug in such code creates for the user is far more painful than all the benefit put together.
So, the HOW:
One way is the do-undo way.
While offline, keep a log of the actions the user performed on the data, in the order they performed them.
As soon as you are connected, send them to the server and execute them; the same goes from server to client.
This will work fine in most cases, as long as you are not writing Photoshop-style software with huge amounts of data per operation. This is also referred to as the Command pattern by the Gang of Four.
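A minimal sketch of such an action log (the endpoint and data shapes are illustrative, assuming a generic JSON API on the server):

interface Action {
  timestamp: number; // preserves the order in which the user made changes
  type: "create" | "update" | "delete";
  payload: unknown;
}

const log: Action[] = [];

// Record each local edit while offline (it is also applied locally right away).
function record(action: Action): void {
  log.push(action);
}

// When the connection returns, replay the log against the server in order.
async function replay(endpoint: string): Promise<void> {
  for (const action of [...log].sort((a, b) => a.timestamp - b.timestamp)) {
    await fetch(endpoint, {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify(action),
    });
  }
  log.length = 0; // clear once everything has been sent
}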
Another way is the source-control way: versions, and maybe even locks. This is very application-dependent; DBMSs sometimes use it internally to implement transactions.
And there is always the option of being read-only when offline :-)
I wonder if you have considered using a sync framework to manage the synchronization. If that interests you, you can take a look at the open-source project OpenMobster's Sync service. You can do the following sync operations:
two-way
one-way client
one-way device
bootup
Besides that, all modifications are automatically tracked and synced with the cloud. Your app can go offline when the network connection is down; it will track any changes and automatically synchronize them with the cloud in the background when the connection returns. It also provides iCloud-like synchronization across multiple devices.
Also, modifications in the cloud are synced using push notifications, so the data is always current even though it is stored locally.
Here is a link to the open source project: http://openmobster.googlecode.com
Here is a link to iPhone App Sync: http://code.google.com/p/openmobster/wiki/iPhoneSyncApp