Suppose I have the following server/software architecture for my application:
The server serves mainly as a file server. Security by obscurity is achieved by putting the data into folders like /[random A-Z0-9]/mydata001.zip.
Clients request data over a REST API. They send their secret token (access rights are checked against a database) and the tags of the desired data.
The server responds with JSON containing the URLs of the zip files for the requested data.
The client can now download the data over plain HTTP.
So the main load comes from the downloads, right? How can I scale such an architecture to, say, three servers? Only by duplicating the data?
Thanks for some advice.
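For concreteness, a minimal sketch of how the obscured folder names in such a layout could be generated; the alphabet and length are assumptions, not part of the question:

    # Hypothetical sketch: generate a random A-Z0-9 folder name per dataset
    # with a CSPRNG, matching the /[random A-Z0-9]/mydata001.zip layout.
    import secrets
    import string

    ALPHABET = string.ascii_uppercase + string.digits

    def obscure_folder(length: int = 16) -> str:
        return "".join(secrets.choice(ALPHABET) for _ in range(length))

    # e.g. f"/{obscure_folder()}/mydata001.zip"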
If security is a concern, then at least use HTTPS for the authentication. When the files need to be stored on multiple servers without duplicating the data, I can think of several options:
Store the files for a specific user always on the same server. This means each user profile records the server where that user's files are stored. This only distributes the files evenly when there are many users.
Randomly store the file on one of the file servers and save the file location in the database.
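A minimal sketch of that second option, assuming a small SQLite catalog table and a fixed set of file-server base URLs (all names here are illustrative):

    # Sketch: pick a random file server for each new file and record the
    # choice; downloads are then resolved through the catalog.
    import random
    import sqlite3

    FILE_SERVERS = ["http://files1.example.com",
                    "http://files2.example.com",
                    "http://files3.example.com"]

    db = sqlite3.connect("catalog.db")
    db.execute("CREATE TABLE IF NOT EXISTS files"
               " (id TEXT PRIMARY KEY, server TEXT, path TEXT)")

    def store_file(file_id: str, path: str) -> str:
        """Assign a new file to a random server and remember the choice."""
        server = random.choice(FILE_SERVERS)
        db.execute("INSERT INTO files (id, server, path) VALUES (?, ?, ?)",
                   (file_id, server, path))
        db.commit()
        return server

    def download_url(file_id: str) -> str:
        """Resolve a file ID to the URL the client should download from."""
        server, path = db.execute(
            "SELECT server, path FROM files WHERE id = ?",
            (file_id,)).fetchone()
        return f"{server}/{path}"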
I am implementing REST endpoints for uploading large (multiple GB) files using multipart requests. For uploading I have POST /files and PUT /files/{sha256_file_hash}. For the latter endpoint, the client can calculate the hash of the file locally before the upload and then PUT directly to that hash. The idea behind this is performance optimization: the upload does not need to happen if a file with the given hash was already uploaded to the server before.
When uploading a file that already exists, my server responds with 200 OK and then just cancels the multipart upload. Tools like curl do not like this and exit with an error complaining that they did not manage to upload all multipart data.
What would be RESTful behavior for the server? Is this performance optimization that I have implemented on the server even RESTful? Should the server always accept all the data that the client wants to upload? Should the server rely on the client to first GET /files and check on its own whether the file it wants to upload is already there?
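One possible client-side flow for the hash-addressed endpoint is to probe before uploading, so the server never has to abort a half-sent body; the HEAD probe and its status-code semantics below are assumptions, not part of the API described above:

    # Hedged sketch: hash the file locally, probe the server, and only
    # PUT the body when the hash is unknown. BASE and the HEAD/200
    # convention are illustrative assumptions.
    import hashlib
    import requests

    BASE = "https://example.com/files"

    def upload_if_missing(path: str) -> None:
        sha256 = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                sha256.update(chunk)
        digest = sha256.hexdigest()

        # Probe first so the server never has to cancel a multipart upload.
        if requests.head(f"{BASE}/{digest}").status_code == 200:
            return  # already stored, skip the upload entirely
        with open(path, "rb") as f:
            requests.put(f"{BASE}/{digest}", data=f)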
React/Next front-end + Node.js and MongoDB on the back end?
How would video storage work?
You can use POST requests to send files to your remote server. Your backend code should read the request data and then store the file on disk or in object storage like S3. Most backend web frameworks have libraries for storing files received in an HTTP request directly in S3.
Most web development frameworks can guess the mimetype, but since you already know it's video/mp4 here, you can just save it with that content type.
I must warn you: if you are trying to upload huge files, it might be a better idea to use chunked uploads. This gives you the ability to pause and resume, and it is robust against network failures.
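As a rough sketch of the simple (non-chunked) case, assuming Flask and boto3 with a made-up bucket name and route:

    # Sketch: accept a multipart upload and hand the stream to S3 with
    # boto3, without staging a temp file. Bucket and route are placeholders.
    import boto3
    from flask import Flask, request

    app = Flask(__name__)
    s3 = boto3.client("s3")

    @app.route("/videos", methods=["POST"])
    def upload_video():
        f = request.files["file"]                    # werkzeug FileStorage
        s3.upload_fileobj(
            f.stream, "my-video-bucket", f.filename,
            ExtraArgs={"ContentType": "video/mp4"})  # mimetype is known here
        return {"key": f.filename}, 201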
We have a different database per client, but all stored procedures and table schemas are the same for every client.
How do we connect an Azure Mobile Service based on the client?
Options:
1. Publish a service per client, so the number of services equals the number of clients.
2. Put all connection strings in the config file, read a header value, and pick the connection accordingly.
Do you know of any other option?
The first option is not feasible for us, because a single change would require publishing code to every site.
Please advise.
You cannot really use Azure Mobile Services for this. Azure Mobile Services is pretty much designed around a single database per service. I'd suggest switching over to Azure App Service. If you just need database access, you can set up a REST endpoint that provides the necessary access but looks up the connection string based on the authenticated user. You might want to use a schema per client instead, to reduce the number of connection strings you have.
Short version: Look at the design of your service to reduce the number of SQL connection strings you are using. An ideal number is 1.
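A minimal sketch of the single-service approach, assuming a Python endpoint, a tenant header, and connection strings kept in environment variables (the header name, env-var scheme, and table are all illustrative):

    # Sketch: map the declared tenant to a connection string held in
    # configuration; one service, many databases.
    import os
    import pyodbc
    from flask import Flask, abort, request

    app = Flask(__name__)

    # e.g. CONN_CLIENTA="Driver={ODBC Driver 18 for SQL Server};Server=..."
    def connection_for(tenant: str) -> pyodbc.Connection:
        conn_str = os.environ.get(f"CONN_{tenant.upper()}")
        if conn_str is None:
            abort(403, "unknown tenant")
        return pyodbc.connect(conn_str)

    @app.route("/items")
    def items():
        tenant = request.headers.get("X-Tenant-Id", "")
        with connection_for(tenant) as cn:
            rows = cn.execute("SELECT Id, Name FROM dbo.Items").fetchall()
        return {"items": [{"id": r.Id, "name": r.Name} for r in rows]}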
My app reads data from two sources, a local sqlite file and a remote server which is a clone of the local db but with lots of pictures. I do not write to the server database, but I do need multiple simultaneous fetch operations.
What DBMS should I use for storing information on the server?
It needs to be very easily used from an iPhone app, be reliable, etc.
Talking to a remote server should not be tied to any platform like iOS. If you have control over the remote db server, the best bet IMO is crafting a RESTful API in which you express your queries; the server processes them and sends you the pictures/records using the proper content type. If you do NOT have such control over the remote db, you'll have to stick to the API the db hoster provides. There are plenty of such "on the cloud" db hosters (including NoSQL solutions) that give you a web-services interface to your db. MongoLabs is one such provider for MongoDB (which is a NoSQL db, meaning no schemas and no bounds on the structure of a "table"). You can continue to stick to SQLite on the client side.
You seem to have two sources of data: local storage and a remote server.
This question on SO might help you decide on approaches for storing data on the server.
Once you have downloaded the data using something like the NSURLConnection class, the images can be stored in the filesystem using the - (BOOL)writeToFile:(NSString *)path atomically:(BOOL)flag method or the like.
You might like to save the rest of the data in sqlite. We used sqlite and the Core Data framework to save data for one of our applications, and it worked fine for us. Core Data allowed us to interact with the database without writing actual SQL queries.
The iPhone client resides on the phone, while on the server side we might have a database and a webservice interacting with the db. The webservice itself might be implemented in a scripting language like Python or PHP. The client interacts with the webservice, which might return data in formats like XML or JSON. Thus there is no direct communication between the client and the db; however, the client does implement network communication code to talk to the webservice.
This page shows how to consume an XML based web service.
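To make the shape of such a webservice concrete, here is a minimal sketch in Python, assuming Flask, a made-up schema, and JPEG pictures stored on disk (all names are illustrative):

    # Sketch: the client talks HTTP/JSON; only the service touches the db.
    import sqlite3
    from flask import Flask, jsonify, send_file

    app = Flask(__name__)

    @app.route("/records")
    def records():
        db = sqlite3.connect("server.db")
        rows = db.execute("SELECT id, title FROM records").fetchall()
        db.close()
        return jsonify([{"id": r[0], "title": r[1]} for r in rows])

    @app.route("/pictures/<int:pic_id>")
    def picture(pic_id: int):
        # Serve the image bytes with the proper content type.
        return send_file(f"pictures/{pic_id}.jpg", mimetype="image/jpeg")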
S3 allows you to POST directly from the browser to S3, bypassing your webserver (http://doc.s3.amazonaws.com/proposals/post.html). How can I upload files to a database in a similar fashion? I don't want to first stage the file in a temporary file on the webserver and then upload it from there to the database.
If I cannot avoid the webserver, how do I use the webserver only for streaming, rather than landing the file on the webserver before loading it into the database?
Thanks.
A handful of DBMSes provide an HTTP connection design, but this is the exception rather than the rule.
That said, you can make the HTTP server a thin layer over a more traditional database, but this is probably a bad idea, because most databases assume that anything that can access them has full privilege to execute queries, and an application (read "web server") normally acts as a gatekeeper between the database and obnoxious or malicious clients.
Basically, you're going to do best using a database engine that handles all of these things at a fine-grained level, expressly designed for this; MongoDB mostly addresses this exact use case. Otherwise, you'll just have to write an application that sits between HTTP and the raw database connection.
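A minimal sketch of such a thin in-between application, assuming Flask, pymongo, and MongoDB's GridFS (the connection details and route are placeholders); the request body is streamed straight into the database and never staged on disk:

    # Sketch: stream the HTTP request body directly into GridFS, so the
    # file never lands in a temp file on the web server.
    import gridfs
    from flask import Flask, request
    from pymongo import MongoClient

    app = Flask(__name__)
    fs = gridfs.GridFS(MongoClient("mongodb://localhost:27017")["uploads"])

    @app.route("/files/<name>", methods=["PUT"])
    def put_file(name: str):
        # request.stream is file-like; GridFS reads it in chunks.
        file_id = fs.put(request.stream, filename=name)
        return {"id": str(file_id)}, 201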