I am asking for advice on possibly better solutions for the part of the project I'm working on. I'll first give some background and then my current thoughts.
Background
Our clients can use my company's products to generate potentially large data sets for use in their industry. Once the data sets are generated, the clients file a processing request with us.
We want to send the clients a summary email containing some statistical charts as well as sampling points from the data sets, so they can do some initial quality-control work. If the data sets are of poor quality, they don't need to file any request.
One problem is that the charts and sampling points can potentially be too large to send in an email. The charts and sampling points we want to include in the emails are pictures. Although we can use a compressed, lower-quality format such as JPEG to save space, we cannot control how many data sets will be included in a summary email, so the total size could still exceed the normal email size limit.
In terms of technologies, we are mainly developing in Python on Ubuntu 14.04.
Goals of the Solution
In general, we want to present a report-like document that the clients can use for initial QA. The report may contain external links but does not need to be very interactive; in other words, a static report is fine.
We want to reduce the steps our clients must take to read the report. For example, if the report can simply be an email, the user only needs to (1) log in and (2) open the email. If they use an email client, they can skip (1) and just open it and begin reading.
We also want to minimize the burden of maintaining extra user accounts, both for us and for our clients. For example, a solution that requires us to register a new user account would still be acceptable, but would not rank very high.
Security is important because our clients don't want their reports to be read by unauthorized third parties.
We want the process automated, so the solution should provide a programming interface that lets us automate the report sending/sharing.
Performance is NOT a critical issue. Our user base is not large, at most in the hundreds, and they don't generate data that frequently, at most once a week. We don't need real-time responses; even a delay of a few hours is acceptable.
My Current Thoughts of Solution
Possible solution #1: an in-house web service. I can set up a server machine and develop our own web service. We put the reports into our database and the clients can then query them over the Internet.
Possible solution #2: Amazon Web Services. AWS is quite mature, but I'm not sure whether it would be expensive, because so far we just want to share reports with our remote clients, which doesn't seem like a big enough job to need AWS.
Possible solution #3: Google Drive. I know Google Drive provides an API for uploading and sharing programmatically, but I think we would need to register a dedicated Google account to use it.
Are there any better solutions?
You could use AWS S3 and CloudFront. Files can easily be loaded into S3 using the AWS SDKs and API. You can then use the API to generate secure links to the files that are only valid for a specific time and, optionally, only from a specific IP.
Files on S3 can also be automatically cleaned up after a specific time if needed using lifecycle rules.
Storage and transfer prices are fairly low with AWS, and remember that the quoted S3 storage cost is per month, so if you only keep an object for a few days, you only pay for those days.
S3: http://aws.amazon.com/s3/pricing
Cloudfront: https://aws.amazon.com/cloudfront/pricing/
Here's a list of the SDKs for AWS:
https://aws.amazon.com/tools/#sdk
Or you can use their command-line tools for Windows batch or PowerShell scripting:
https://aws.amazon.com/tools/#cli
Here's some info on how the private content URLs are created:
http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/PrivateContent.html
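To make this concrete for a Python shop, here is a minimal sketch using boto3 (the bucket name and object key are placeholders, and the bucket is assumed to already exist). It uploads a report and produces a signed link that expires after 24 hours:

    import boto3

    s3 = boto3.client("s3")
    BUCKET = "client-reports"                 # hypothetical bucket name
    KEY = "acme-corp/report-2015-06.html"     # hypothetical object key

    # Upload the generated report file to S3.
    s3.upload_file("report.html", BUCKET, KEY)

    # Generate a signed URL that is only valid for 24 hours; after it
    # expires, requests for the object are denied.
    url = s3.generate_presigned_url(
        "get_object",
        Params={"Bucket": BUCKET, "Key": KEY},
        ExpiresIn=24 * 3600,
    )
    print(url)  # this is the link you would email to the client

For the IP-restriction and CloudFront-signed variants, see the private-content link above.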
I would suggest building this service using a mix of your #1 and #2 options. You can do the processing yourselves and leverage AWS S3, which is quite cheap, for transferring the data.
For example, storing 100 GB costs approximately $3 per month.
AWS S3 is also beneficial for disaster recovery: if anything happens to your local environment, your data will be safe in S3.
For security, you can leverage data encryption and signed URLs in S3.
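A sketch of both in boto3 (same hypothetical bucket as before; the 30-day expiry is just an example value): server-side encryption on upload, plus a lifecycle rule so old reports clean themselves up, as mentioned in the earlier answer:

    import boto3

    s3 = boto3.client("s3")
    BUCKET = "client-reports"  # hypothetical bucket name

    # Store the report encrypted at rest (AES-256 server-side encryption).
    with open("report.html", "rb") as f:
        s3.put_object(
            Bucket=BUCKET,
            Key="acme-corp/report-2015-06.html",
            Body=f,
            ServerSideEncryption="AES256",
        )

    # Lifecycle rule: delete objects automatically 30 days after creation.
    s3.put_bucket_lifecycle_configuration(
        Bucket=BUCKET,
        LifecycleConfiguration={
            "Rules": [{
                "ID": "expire-old-reports",
                "Filter": {"Prefix": ""},   # apply to the whole bucket
                "Status": "Enabled",
                "Expiration": {"Days": 30},
            }]
        },
    )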
I've been considering using Parse.com's service for my backend, but I'm skeptical about its scalability.
Can it really handle several thousand simultaneous users? If not, is there any good way of transitioning away from it?
I know the question may be old, but I wanted to provide my 2 cents for others out there who may be considering Parse...
Under the simplest of scenarios, Parse may work well. As soon as you need to scale up to more complex queries, I have personally found nothing but headaches.
Queries are limited to 1000 records. Initially you may think this is not an issue, until you start dealing with subqueries and realize that weird data is returned because the subquery cuts records off without warning or error (see the pagination sketch at the end of this answer). FYI, the default is 100 records unless you specify a limit of up to 1000, so the problem is even worse if you are not paying attention.
For some strange reason, there is a limit on the number of count queries you can issue per minute (and this limit appears to be really low). Be prepared to throttle your code so you don't hit it, otherwise errors are thrown.
Background jobs do not run reliably. I have had a background job set to run every 5 minutes, and there are times it takes 20+ minutes before the job kicks in.
Lots of Timeouts. This is the one that gives me the most heartburn.
A. If you have a Cloud Code function that takes a while to process, you have about 6 or 7 seconds to get it done before it cuts you off.
B. I get the feeling that there is a general instability in the system. Periodically I run into stretches, lasting about an hour, where timeouts happen more frequently (even with relatively simple functions that should return immediately).
I fully regret my decision to use Parse, and I am doing all I can to keep the app alive long enough for us to get funding so we can move off the platform. If anyone has any better alternatives to Parse, I am all ears.
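To illustrate the 1000-record cap mentioned above: the only way around it is to page through results yourself. Here is a sketch against the Parse REST API in Python (class name and credentials are placeholders); note that every page is a separate API call, which matters given the count-query rate limit above:

    import requests

    HEADERS = {
        "X-Parse-Application-Id": "YOUR_APP_ID",  # placeholder
        "X-Parse-REST-API-Key": "YOUR_REST_KEY",  # placeholder
    }

    def fetch_all(class_name):
        """Page through a Parse class 1000 records at a time."""
        results, skip = [], 0
        while True:
            resp = requests.get(
                "https://api.parse.com/1/classes/%s" % class_name,
                headers=HEADERS,
                params={"limit": 1000, "skip": skip},  # limit caps at 1000
            )
            page = resp.json()["results"]
            results.extend(page)
            if len(page) < 1000:  # a short page means we reached the end
                return results
            skip += 1000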
[Edit: after three amazing years with the team, I've decided to move on and am no longer a Parse or Facebook employee. The team is in great hands and has done amazing things. The entire backend has been rewritten to increase performance and reliability dramatically. The roadmap is amazing, and I expect great things to come from the team. At the time of my departure, Parse powered over 600,000 applications and served a mind boggling number of requests each day. Were each Parse push to be sent to a unique person, they could form the world's fourth largest country in one day. For future help with Parse, please either post questions here with the parse.com tag or post to the parse-developers Google group.]
Full disclosure: I'm a Parse engineer.
Parse already hosts thousands of apps, let alone users. When we exited beta in late March, we announced over 10,000 applications running on Parse with a 40% month-over-month growth rate. Parse is staffed by a world-class team, many of whom have years of experience in big data and high-volume traffic.
We welcome your traffic with open arms; you will be in the company of great teams like Band of the Day and Hipmunk. We are so confident in our services that we built our One Click Export system so people like you can try Parse risk free. If you feel Parse does not meet your performance expectations, we will gladly send you off with all of your data intact.
We chose Parse as the backend for our app.
Conclusion: DON'T.
Stability is a disaster, performance is a disaster too, and so is support (probably because they can't really help you, since all the issues are non-reproducible).
Running even the simplest of functions can lead to random timeouts inside Parse (I am talking about simple PFUser login calls for instance):
Error: Error Domain=NSURLErrorDomain Code=-1001 "The request timed out." UserInfo=0x17e42480 {NSErrorFailingURLStringKey=https://api.parse.com/2/client_events, NSErrorFailingURLKey=https://api.parse.com/2/client_events, NSLocalizedDescription=The request timed out., NSUnderlyingError=0x17d10e60 "The request timed out."} (Code: 100, Version: 1.2.20)
We encounter timeouts on a daily basis, and this is with an app we are testing with 10 users max!
This is the typical one we get back all the time, at completely arbitrary moments, and it is impossible to reproduce. It comes from calling a Cloud Code function that does a few queries and a few inserts:
{"code":124,"message":"Request timed out"}
Try the same 10 minutes later and it runs in less than a second. Try again 20 minutes later and it takes 30 seconds to execute.
Because there is no transactionality, it's really a lot of fun when you store, for instance, 3 objects in one Cloud Code function and Parse decides to bail out of the function randomly after saving 2 of the 3. Great for keeping your database consistent.
The "best" ones we got where these. Mind you, this is the actual data coming back from a Cloud Code function:
{"code":107,"message":"Received an error with invalid JSON from Parse: <!DOCTYPE html>\n<html>\n<head>\n <title>We're sorry, but something went wrong (500)</title>\n <style type=\"text/css\">\n body { background-color: #fff; color: #666; text-align: center; font-family: arial, sans-serif; }\n div.dialog {\n width: 25em;\n padding: 0 4em;\n margin: 4em auto 0 auto;\n border: 1px solid #ccc;\n border-right-color: #999;\n border-bottom-color: #999;\n }\n h1 { font-size: 100%; color: #f00; line-height: 1.5em; }\n </style>\n</head>\n\n<body>\n <!-- This file lives in public/500.html -->\n <div class=\"dialog\">\n <h1>We're sorry, but something went wrong.</h1>\n <p>We've been notified about this issue and we'll take a look at it shortly.</p>\n </div>\n</body>\n</html>\n"}
The stuff I describe here is not something that happens once in a blue moon in our project. Except for the 500 errors (which I encountered twice in a month), all the others are seen on a daily basis.
So yes, it's very easy to get started with, but you must take into account that you are working on an unstable platform, so make sure you have your retries and exponential backoff systems up and running, because you will need them!
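A retry wrapper can be as simple as the sketch below (Python; `call` stands in for whatever Parse request keeps timing out, so this is illustrative rather than Parse-specific):

    import random
    import time

    def with_backoff(call, max_attempts=5, base_delay=1.0):
        """Retry a flaky call with exponential backoff plus jitter."""
        for attempt in range(max_attempts):
            try:
                return call()
            except Exception:
                if attempt == max_attempts - 1:
                    raise  # out of attempts, surface the error
                # Sleep 1s, 2s, 4s, ... plus jitter so retries don't align.
                time.sleep(base_delay * (2 ** attempt) + random.random())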
What worries me the most is that I have no idea what would happen once 20,000 people start using my app on this backend.
EDIT:
Right now I get this when doing a PFUser login:
Error: Error Domain=PF_AFNetworkingErrorDomain Code=-1011 "Expected status code in (200-299), got 502" UserInfo=0x165ec090 {NSLocalizedRecoverySuggestion=<html><body><h1>502 Bad Gateway</h1>
The server returned an invalid or incomplete response.
</body></html>
, PF_AFNetworkingOperationFailingURLResponseErrorKey=<NSHTTPURLResponse: 0x16615c10> { URL: https://api.parse.com/2/get } { status code: 502, headers {
"Cache-Control" = "no-cache";
Connection = "keep-alive";
"Content-Length" = 107;
"Content-Type" = "text/html; charset=utf-8";
Date = "Mon, 08 Sep 2014 13:16:46 GMT";
Server = "nginx/1.6.0";
} }, NSErrorFailingURLKey=https://api.parse.com/2/get, NSLocalizedDescription=Expected status code in (200-299), got 502, PF_AFNetworkingOperationFailingURLRequestErrorKey=<NSMutableURLRequest: 0x166f68b0> { URL: https://api.parse.com/2/get }} (Code: 100, Version: 1.2.20)
Isn't it great?
If you're writing a small/simple app (or a throwaway prototype) with little to no logic on the backend, then go for it; but for something larger/scalable, it's best to avoid it. I can say that from first-hand experience. It all sounds good with the user management, push notifications, abstracted storage and whatnot, but in the end it's not worth the trouble. I was developing the backend for an app on Parse; the clients were very much into it because it sounded cool and promising (strong marketing, I guess), what with being bought by Facebook and all. But a few weeks into production, major issues/limitations with the platform started arising, and what should have been a simple app turned out to be a nightmare to develop and scale.
The result/conclusion of the project:
- broke the time window for a relatively simple app: it should have taken 2-3 months, it took almost a year and still isn't stable/reliable; with a custom stack it would have been done inside the time window for sure, since I built a similar demo project in 5-10 days with a custom Node stack
- lost the client's trust; they're now remaking the app with another team, who will use a custom stack
- lost loads of cash through breaking the time window and trying to make it work
- did so much overtime because of it that it started to affect my health
- will never again use a platform/solution that promises to have it all; from now on I'm always going with a custom/tried stack
At first there were stability issues and constant failures of the platform, like server downtime and random errors, but they have sorted all that out (that was in early-to-mid 2014). The following problems remain:
- you can't debug your code, at least for the time being (there are ways to make it work with an additional Node server and some obscure lib)
- the limits are ridiculous: a "scalable" platform that can do 50-60 API requests per second (or more, depending on your subscription). That doesn't sound too low until you start strain testing, and when you hit the limit your code constantly fails
- API calls are measured like this: calling a server function (a Parse job) is 1 call; querying the database is 1 call; another query is another call (they don't have an advanced/complex query system in place; if you have a more complex database schema you'll realise very soon what I mean); if you need more than 1000 records, guess what, query again; and a count, which you need to do as a separate query, is unreliable and tends to return an approximation for a few thousand entries
- creating/saving ~1000+ simple objects is a strain on the platform/database, and deleting 1000 or more objects even more so. That is ridiculously fast on a normal database, but on Parse it tends to take 5-10 minutes (if you watch closely, it deletes 20 objects per batch)
- there's no way to use most npm packages (only the pure-JS ones, by including the source directly)
- if you read the Parse forums, you'll see users constantly downvoting/roasting the Parse team for the platform's lack of features and for the hoops you must jump through to implement arbitrary logic, like fetching random entries and similar things
- they support Stripe integration, but if you want to use PayPal or another payment service (we chose PayPal because its country support is vastly superior to Stripe's), you can't make it work on Parse; for the PayPal integration I had to use a separate server to pull it off
- there's no easy way to sync users and handle concurrency issues; you have to use hacks and logic you would never use, or admit to using, anywhere else
- want 100+, let alone 1000+, simultaneous users? Good luck pulling that off
- when you want to find out the number of entries in a table, you can hit the limit on count queries, which is funny, undocumented and totally ridiculous, and in the end it returns an approximate number anyway
- modularity is foreign to the platform: the functions you call from your jobs can't run for more than a couple of seconds (7 seconds, I think), and once you factor in query time, timeouts are bound to happen a lot with more complex queries and logic
- you can have something like cron jobs, but they can't run longer than 15 minutes (very, very short, given the platform's low performance and the multiple queries involved); they are limited to 2-4 simultaneous jobs depending on your subscription fee; and the scheduling system is very limited/poor (e.g. you can't edit it from your code, so you have to use hacks just to run the same job at two exact times in a day, and it can't account for daylight saving time, etc.)
- when you get an error on the server, it can be totally misleading; check the forums for examples, I can't recall one off the top of my head
- push notifications are regularly 20-30 minutes late
- an arbitrary example: you want to fetch a random item from the database. Your app calls a job that provides it (1 API call); the job queries the database, but has to make 2 calls: first to get the count of items (1 API call), then to fetch a random item (1 API call). That's 3 API calls for one piece of functionality, and at 60 requests per second, only 20 users can invoke it at a given moment before you hit the request limit and the platform goes haywire. Add other users browsing through app screens and so on, and you can see where this leads (see the sketch below)...
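A sketch of those server-side calls via the Parse REST API (Python requests; credentials and class name are placeholders, and the count=1/limit=0 trick follows the old REST docs):

    import random
    import requests

    HEADERS = {
        "X-Parse-Application-Id": "YOUR_APP_ID",  # placeholder
        "X-Parse-REST-API-Key": "YOUR_REST_KEY",  # placeholder
    }
    URL = "https://api.parse.com/1/classes/Item"  # hypothetical class

    def random_item():
        # API call: count-only query (returns an approximate count).
        count = requests.get(
            URL, headers=HEADERS, params={"limit": 0, "count": 1},
        ).json()["count"]
        # API call: skip to a random offset and fetch a single record.
        return requests.get(
            URL, headers=HEADERS,
            params={"skip": random.randrange(count), "limit": 1},
        ).json()["results"][0]

Add the client's call into the job that wraps this, and you have the 3 calls counted above.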
If it were any good, wouldn't Facebook, who bought it, ever mention using it for even some of their apps? I'd suggest three things:
- first, don't listen to the Parse guy; it's his platform, so he has to promote it. Listen to people who have actually used it to build something
- second, if you need a serious and scalable platform and don't want to go fully custom, go for Amazon's cloud services or something similar that's tested and reliable
- third, stay away from the platform if you have any server-side experience; if you don't, then go and hire a backend dev for the project. It will be cheaper and you'll get a working solution in the end
I have spent the day looking into parse.com, and here is my current opinion based on what I've found (please bear in mind that I have only very brief experience of developing with the SDK as yet).
Parse.com clearly has some very attractive positives, which is why I found myself looking into it, but for the sake of debate I will concentrate on being critical, as the great positives are all listed on their website. (Well done, parse.com, for attempting to solve such a great problem!)
In the testimonials, Hipmunk is the biggest name, I would say. It is listed as an app that uses the data portion of the SDK. Without approaching the Hipmunk developers I can't know for sure, but I can't imagine them storing ALL their data in the parse.com cloud.
After trying and browsing most of the apps listed, none really stand out as being hugely dependent on a server backend, so I find it impossible to judge from them whether scalability has been solved using parse.com.
The website states 40,000 apps and counting. I feel (but do not know) that, based on the app gallery, this figure reflects the number of apps in their user base rather than real live production apps in the app stores. The app gallery would feature far more big names if that many apps were using parse.com.
Parse.com is a very new concept, and very different even from its closest rivals. Without concrete evidence of how scalable and stable (and all the rest) it is, it is very hard for a developer to consider committing a project to it, as there is too much at stake.
I ran tests for my own answer to a similar question, and it can be VERY, VERY FAST. However, the results you get may depend on the details of your implementation...
The test compared the Android SDK to native Android HTTP-stack calls making Parse/REST requests...
Test Details:
Test environment: the newest Android version on a 10-month-old phone over a fast Wi-Fi connection.
(Upload 63 pictures, average file size 80 KB.)
Test 1, using the Android SDK. RESULT: slow performance.
Test 2, using native REST calls on Android. RESULT: VERY FAST.
EDIT: as there is interest here...
Regarding HTTP throughput and the Parse Android SDK's performance: it may be that parse.com has not optimized the way the SDK uses Android's AsyncTask. How work that took 8 minutes through the Parse SDK could be done in 3 seconds with an optimized, DIY REST framework (see the links for implementation details), I really do not know. If Parse has not fixed their SDK implementation since these comparison tests were run, then you probably don't want their default AsyncTask code handling anything approaching a real workload on the network.
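For comparison, here is the shape of a DIY approach in Python (the file endpoint and headers follow the old Parse REST docs; credentials are placeholders). The point is simply that hand-rolled REST calls can be fired in parallel instead of being serialized the way the SDK's AsyncTask usage apparently was:

    import os
    from concurrent.futures import ThreadPoolExecutor

    import requests

    HEADERS = {
        "X-Parse-Application-Id": "YOUR_APP_ID",  # placeholder
        "X-Parse-REST-API-Key": "YOUR_REST_KEY",  # placeholder
        "Content-Type": "image/jpeg",
    }

    def upload(path):
        """Upload one picture to the Parse files endpoint."""
        with open(path, "rb") as f:
            return requests.post(
                "https://api.parse.com/1/files/" + os.path.basename(path),
                headers=HEADERS,
                data=f.read(),
            )

    # 63 pictures of ~80 KB each, eight in flight at a time.
    paths = ["pic%02d.jpg" % i for i in range(63)]
    with ThreadPoolExecutor(max_workers=8) as pool:
        responses = list(pool.map(upload, paths))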
The great attraction of Parse (and similar SaaS) is that you can save tens of thousands on backend development costs. Given that the backend is often the most expensive aspect of a web app, that headache suddenly goes poof.
The problem with Parse and most (all) SaaS is that the region, power, memory, bandwidth, scalability, thresholds, alerts and various actions are out of your control.
The same goes for Shopify. It's a great SaaS with comprehensive control over products, orders, inventory and aesthetics, but zero control over the machine. So today's SaaS is not a heck of a lot different from GoDaddy. They invariably oversell or max out their machines in order to make money, and you are stuck if you really care about ass-kicking performance. You cannot even buy that level of service.
I would like something AT LEAST as powerful and comprehensive as the AWS console. Most techies know and accept that Heroku and Parse are both hosted on AWS. Who cares? So charge more for the added service, but don't deny access to the critical low-level tools that make a site, an app and the user experience zing. Hint to those Parse employees.
At any rate, in answer to the question:
The Parse API is plain JSON, so you can pump out your data in the same JSON format that a Parse application expects.
You might even be able to utilize their PFObject (iOS). At some point, all that high-level API comes down to a common HTTP request/response. The good thing about REST's generality is that it means common off-the-shelf pieces: HTTP, URLs, strings and UTF. No funky ORB here.
Parse is great to start with, especially its helper functions/features around user management. But I started encountering issues:
Long execution/ping times, the 1000-object limit INCLUDING subqueries, no data centers in Europe (as far as I know)...
It would have been a divine platform if they could sort out the performance and stability issues. I somewhat regret developing with it, but I've put in 5000+ lines of code, so I'm going to stick with it.
Maybe they should separate their DEV and PROD app environments, and only allow PROD apps after some kind of vetting, or create a separate environment for paying customers only?
It's 2014; $20/month servers can handle unoptimized websites (60 uncached DB queries on the homepage) with 1 million visits/month. This shouldn't be that hard, come on Parse!
It's OK for prototyping apps, especially if the iOS/Android developer doesn't know how to build a DB/API backend themselves.
It's not OK at all when it comes to developing an application whose logic requires queries more complex than:
SELECT * FROM `table` WHERE `column` = 'value' LIMIT 100;
Related queries and inner joins do not exist on Parse. And good luck updating/removing 320,000 records if you need to (that's the number I'm working with now).
The only thing that is really useful is handling users through the SDK. If I could find good docs or even a tutorial on how to handle/create users from iOS/Android apps using Django and DRF/Tastypie, I would instantly convert everything being developed in our company to use that.
Can people give me examples of why they would use Core Data in an application?
I ask because most apps are just clients to a central server, where an API of some sort gives you the information you need.
In my case, I'm writing a timesheet application for a web app that has an API, and I'm debating whether there is any value in replicating the data structure from my server in Core Data (SQLite).
e.g.
A project has many timesheets.
An employee has many timesheets.
It seems to me that I can just hit the API on every call for lists of projects or existing timesheets, for example.
I realize that for some kind of offline mode you could store the data locally in Core Data, but this creates far more problems, because you now have the big problem of syncing that data back to the web server when you regain a connection, e.g. the project selected for a timesheet no longer exists.
Can any experienced developers shed some light on their experiences of when Core Data is the best-practice approach?
EDIT
I realise, of course, that there is value in local persistence, but the key-value store of user defaults seems to cover most applications I can think of.
You shouldn't think of Core Data simply as an SQLite database. It's not JUST an SQLite database. Sure, SQLite is an option, but there are other options as well, such as in-memory stores and, as of iOS 5, a whole slew of custom data stores. The biggest benefit of Core Data is persistence, obviously. But even if you use an in-memory data store, you get the benefits of a very well structured object graph, and all of the heavy lifting of pulling information out of, or putting information into, the data store is handled by Core Data for you, without you needing to concern yourself with what is backing that store.

Sure, today you don't care much about persistence, so you could use an in-memory store. What happens if tomorrow, or in a month, or a year, you decide to add a feature that would really benefit from persistence? With Core Data, you simply change or add a persistent data store, and all of your methods for getting information in and out remain unchanged. The overhead of that sort of addition is minimal compared to accessing SQLite or some other data store directly. IMHO, that's the biggest benefit: abstraction. And, in essence, abstraction is one of the most powerful things behind OOP.

Granted, building the data model just for in-memory storage could be overkill for your app, depending on how involved the app is. But, as a side note, you may want to consider what is faster: requesting information from your web service every time you want to perform some action, or requesting it once, storing it in memory, and acting on that stored value for the remainder of the session. An in-memory data store wouldn't persist beyond that particular session.
Additionally, with Core Data you get a lot of other great features, like saving, fetching, and undo/redo.
There are basically two kinds of apps: those that provide local functionality (games, professional applications, navigation systems...) and those that grant access to a remote service.
Your app seems to be in the second category. If you access remote services, your users will want to see new or real-time data (you don't want to read 2-week-old Facebook posts), but in some cases local caching makes sense (e.g. reading your mail on the train with an unstable network connection).
I assume that the value of accessing cached entries while offline is pretty low for your customers (internal or external) compared with the importance of accessing real-time data, so local storage might not be necessary at all.
If you don't have hundreds of entries in your timesheet, "normal" serialization (the NSCoding protocol) might be enough. If you only access some dashboard data, you may be able to get along with simple request/response caching (NSURLCache can do a lot of things...).
Core Data makes more sense if you have complex data structures that should be synchronized with a server. This adds a lot of synchronization logic to your project, as well as complexity from the Core Data integration itself (concurrency, thread safety, in-app conflicts...).
If you want to create a "client" app with a server-driven user experience, local storage is not necessary at all, so my suggestion is: keep it as simple as possible unless there is a real need for offline storage.
It's ideal if you want to store data locally on the phone.
Seriously though, if you can't see a need for it for your timesheet app, then don't worry about it and don't use it.
Solving the sync problems you would have with an "offline" mode comes down to the design of your app. For example: don't allow projects to be deleted. Why would you? Wouldn't you want to be able to go back in time and look at previous data for particular projects? Instead, just mark the project as inactive, with a date/time when it was made inactive. If the data being synced from the device is for that project and predates the time it was marked inactive, it's fine to sync; otherwise, display a message and let the user sort it out.
Whether you need to store some data locally depends purely on your application's design: is it a real standalone application, or a thin GUI client around your web service? Apart from an "offline" mode, the other reason to cache server data on the client side is to take traffic load off your server. Just think about what it means for your server to send the entire timesheet data set to the client every time, versus just the changes. Yes, it means more implementation on both sides, but in some cases it has serious advantages.
EDIT: example added
Say you have 1000 records per user in your timesheet application, and one record is circa 1 KB. In this case, every time a user starts your application it has to fetch ~1 MB of data from your server. If you cache the data locally, the server can tell you that, say, two records were updated since your last sync, so you only have to download 2 KB. Now scale this up to several tens of thousands of users and you will immediately see the difference in server bandwidth and CPU usage.
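A minimal sketch of that delta-sync idea (Python; the endpoint, the changed_since parameter and the response shape are all hypothetical):

    import json
    import requests

    CACHE_FILE = "timesheets.json"

    def sync(last_token):
        """Fetch only the records changed since the last sync."""
        resp = requests.get(
            "https://example.com/api/timesheets",   # hypothetical API
            params={"changed_since": last_token},
        )
        delta = resp.json()  # assumed: {"records": [...], "token": "..."}

        # Merge the small delta into the full local cache.
        try:
            with open(CACHE_FILE) as f:
                cache = {r["id"]: r for r in json.load(f)}
        except FileNotFoundError:
            cache = {}
        for record in delta["records"]:
            cache[record["id"]] = record
        with open(CACHE_FILE, "w") as f:
            json.dump(list(cache.values()), f)
        return delta["token"]  # store for the next sync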