I'm a new Android developer working on a client-server app. I'm using a free online host for my PHP & MySQL. The app will involve data & images (both upload & download).
Since I'm using a free hosting service, there are limitations (namely storage). I'm thinking of using two separate free hosts: one for the data and one for the images. I understand that I'll need to get data from one host and images from the other, so there will be multiple POSTs & GETs to complete a process. My images are stored on the server as actual image files (not BLOBs), so I'll have to make an HTTP request for the images anyway once the links are provided/retrieved from the first request.
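Sketched in Python rather than Android code just to make the flow concrete (the host names, endpoint path and JSON field are made up), this is roughly what I have in mind:

```python
import requests

DATA_HOST = "https://my-free-data-host.example/api"   # hypothetical PHP/MySQL host
IMAGE_HOST = "https://my-free-image-host.example"      # hypothetical image host

def fetch_item_with_image(item_id):
    # 1st request: get the record (including the image link) from the data host
    record = requests.get(f"{DATA_HOST}/items.php", params={"id": item_id}).json()

    # 2nd request: download the actual image file from the other host
    image_url = f"{IMAGE_HOST}/{record['image_path']}"
    image_bytes = requests.get(image_url).content

    return record, image_bytes
```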
Do you think it's a bad idea?
Related
Does anyone know of best practices or common strategies in backend design for serving dynamic images and videos to client applications?
Background: I'm currently building an application that allows users to upload their own images and videos. I'm not sure how to serve these media files back to the client in the most efficient way. Do I store the files on the same VPS that my application server is running on? Do I need to save the files in different qualities / densities to better match the clients' screen resolutions? (I'll have mostly mobile clients.)
I tried googling these questions, but apparently I'm asking the wrong ones :-)
I would really appreciate a reference or the professional vocabulary for these topics.
Thanks in advance.
1) You need to split the web server and the application server.
First of all, do not try to stream media files from your backend unless you can offload the low-level work to the OS - most likely you will get it wrong.
Use a proxy server as the web server to serve such files.
nginx will do.
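As a rough sketch of what that offloading can look like: the application server only authorizes the request and then hands the actual file transfer to nginx via an internal redirect. This assumes a Flask app sitting behind nginx with an internal `/protected/` location; all names are illustrative, not a prescribed setup:

```python
from flask import Flask, Response, abort

app = Flask(__name__)

def user_may_access(filename):
    # Placeholder permission check; replace with real session/ACL logic.
    return True

@app.route("/media/<path:filename>")
def media(filename):
    if not user_may_access(filename):
        abort(403)
    # Empty response body: nginx sees the X-Accel-Redirect header, picks the
    # file up from its internal /protected/ location and streams it itself,
    # so the Python process is never tied up pushing bytes.
    resp = Response(status=200)
    resp.headers["X-Accel-Redirect"] = f"/protected/{filename}"
    return resp
```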
You also need to back up your media files the same way you back up your database.
Storing huge static media files on the same machine as the application server is the wrong move - it will not scale at all.
You can add a cron task that moves files to a CDN server - when the move is complete, you replace the URL in the database to match the new location.
So by using nginx you save precious CPU and RAM while a file is being moved to the external server.
And the CDN lets you dedicate bandwidth and CPU/RAM resources to the application server.
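That cron step can stay very small. A sketch of what I mean, assuming the CDN origin is an S3-compatible bucket reachable via boto3 and using a made-up table/column layout (your database and paths will differ):

```python
#!/usr/bin/env python3
# Hypothetical nightly cron job: push local media to the CDN origin bucket,
# then repoint the database row at the new URL (all names are illustrative).
import os
import sqlite3
import boto3

MEDIA_DIR = "/var/media/pending"
BUCKET = "my-cdn-origin-bucket"           # assumed S3-compatible CDN origin
CDN_BASE = "https://cdn.example.com"

s3 = boto3.client("s3")
db = sqlite3.connect("/var/app/app.db")   # stands in for your real database

for name in os.listdir(MEDIA_DIR):
    local_path = os.path.join(MEDIA_DIR, name)
    s3.upload_file(local_path, BUCKET, name)
    # Only after the upload succeeds do we switch the URL and drop the local copy.
    db.execute("UPDATE media SET url = ? WHERE filename = ?",
               (f"{CDN_BASE}/{name}", name))
    db.commit()
    os.remove(local_path)
```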
2) Regarding image resolution and downsampling:
Screens of modern handsets have the same or even better resolution than a typical office workstation.
Link speed has a much bigger impact on UX.
If the client has a smartphone with a huge screen but a slow link, you still have to deliver the image or video as fast as possible, even if the quality of the media does not match the resolution of the handset.
It makes sense to downsample images on demand and store the result on disk so that nginx/the CDN can serve it again.
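A minimal sketch of that "downsample once, cache on disk" idea using Pillow - the directory layout and size bucket are just examples:

```python
import os
from PIL import Image

ORIGINALS = "/var/media/originals"
CACHE = "/var/media/resized"       # served directly by nginx / the CDN

def resized_path(filename, max_width):
    # Reuse the cached copy if this size has already been produced.
    out_path = os.path.join(CACHE, f"{max_width}px_{filename}")
    if os.path.exists(out_path):
        return out_path
    img = Image.open(os.path.join(ORIGINALS, filename))
    img.thumbnail((max_width, max_width))   # keeps aspect ratio, never upscales
    img.save(out_path, quality=80)          # quality applies to JPEG output
    return out_path
```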
For videos it makes sense to produce a "bad" version with heavy compression (quality loss) for the slow-link case - the device will scale it up itself during playback.
You can also keep client statistics (screen sizes / downlink speeds) and generate optimized versions of a video file later, once you see that it is "popular".
FYI: Several years ago one of the social media giants dropped the idea of preparing every possible version of the same media file in favour of an FPGA-based on-the-fly resampler.
I do not remember the name of the company or the URL of the article; it may have been Instagram.
Some cloud providers have offers with FPGAs or CUDA GPUs on board to do the heavy lifting.
So in some cases you can trade storage for raw horsepower and do the conversion on the fly.
I'm currently using the Tableau API to embed graphs published on our server. Our server doesn't allow guest users because of company policies, but the server is internal, and one of the graphs we have is supposed to be shared with anyone willing to see it. The problem is that not everyone has access to Tableau Server, because we are charged per user, monthly.
So the question is: is it possible to embed a specific user's authentication into a webpage?
Another question: if it is possible, is that user limited to a certain number of simultaneous connections?
One solution is to export your visualizations as static images which you then make available via a traditional web page. You can easily automate the image capture on a schedule, for example by appending .png to the Tableau Server viz URL and using a script to request the image and save it to disk periodically. That way the image stays up to date as the data changes.
This produces static images without interactivity, but it respects the license agreement.
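A scheduled capture script along those lines could look like this sketch - the server URL, view path, output path and credentials are placeholders, and how you authenticate will depend on your Tableau Server setup:

```python
import requests

# Placeholder viz URL: the workbook/view path with .png appended
VIZ_PNG_URL = "https://tableau.internal.example/views/MyWorkbook/MyView.png"
OUTPUT = "/var/www/html/charts/myview.png"   # path served by the public web page

def capture():
    # Run from cron every few minutes/hours to keep the static image fresh.
    resp = requests.get(VIZ_PNG_URL, auth=("report_bot", "********"))  # placeholder creds
    resp.raise_for_status()
    with open(OUTPUT, "wb") as f:
        f.write(resp.content)

if __name__ == "__main__":
    capture()
```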
I am asking for advice on possibly better solutions for the part of the project I'm working on. I'll first give some background and then my current thoughts.
Background
Our clients can use my company's products to generate potentially large data sets for use in their industry. When the data sets are generated, the clients file a processing request with us.
We want to send the clients a summary email which contains some statistical charts as well as sampling points from the data sets so they can do some initial quality-control work. If the data sets are of bad quality, they don't need to file any request.
One problem is that the charts and sampling points can be too large to send in an email. The charts and the sampling points we want to include in the emails are pictures. Although we can use a low-quality format such as JPEG to save space, we cannot control how many data sets are included in a summary email, so the total size could still exceed the normal email size limit.
In terms of technologies, we are mainly developing in Python on Ubuntu 14.04.
Goals of the Solution
In general, we want to present a report-like document to the clients for some initial QA. The report may contain external links but does not need to be very interactive. In other words, a static report should be fine.
We want to reduce the steps our clients must take to read the report. For example, if the report can simply be an email, the user only needs to 1) log in and 2) open the email. If they use a client application, they may skip 1) and just open it and begin to read.
We also want to minimize the burden of maintaining extra user accounts, for both us and our clients. For example, if a solution requires us to register a new user account, it is still acceptable but not ranked very high.
Security is important because our clients don't want their reports to be read by unauthorized third parties.
We want the process automated. The solution should provide a programming interface so that we can automate the report sending/sharing process.
Performance is NOT a critical issue. Our user base is not large - at most in the hundreds - and they don't generate data frequently, at most once a week. We don't need real-time response; even a delay of a few hours is acceptable.
My Current Thoughts of Solution
Possible solution #1: an in-house web service. I can set up a server machine and develop our own web service. We put the reports into our database and the clients can then query them via the Internet.
Possible solution #2: Amazon Web Services. AWS is quite mature, but I'm not sure whether it would be expensive, because so far we just want to share a report with our remote clients, which doesn't seem like a big enough job to justify AWS.
Possible solution #3: Google Drive. I know Google Drive provides an API to upload and share files programmatically, but I think we would need to register a dedicated Google account to use it.
Any better solutions?
You could use AWS S3 and CloudFront. Files can easily be loaded into S3 using the AWS SDKs and API. You can then use the API to generate secure links to the files that can only be opened for a specific time and, optionally, from a specific IP.
Files on S3 can also be cleaned up automatically after a specific time, if needed, using lifecycle rules.
Storage and transfer prices are fairly cheap with AWS, and remember that the S3 storage cost quoted is per month, so if you only keep an object for a few days then you only pay for those few days.
S3: http://aws.amazon.com/s3/pricing
Cloudfront: https://aws.amazon.com/cloudfront/pricing/
Here's a list of the SDKs for AWS:
https://aws.amazon.com/tools/#sdk
Or you can use their command line tools for Windows batch or PowerShell scripting:
https://aws.amazon.com/tools/#cli
Here's some info on how the private content urls are created:
http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/PrivateContent.html
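As a rough illustration of that upload-and-share flow in Python with boto3 (the bucket and key names are placeholders; this uses an S3 presigned URL for the time-limited link - restricting by IP would additionally need CloudFront signed URLs, as described in the last link above):

```python
import boto3

s3 = boto3.client("s3")
BUCKET = "client-reports-example"        # placeholder bucket name

def share_report(local_path, key, expires_seconds=7 * 24 * 3600):
    # Upload the report, then hand back a link that stops working after a week.
    s3.upload_file(local_path, BUCKET, key)
    return s3.generate_presigned_url(
        "get_object",
        Params={"Bucket": BUCKET, "Key": key},
        ExpiresIn=expires_seconds,
    )

url = share_report("summary_2024_05.pdf", "acme/summary_2024_05.pdf")
print(url)   # e-mail this link to the client
```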
I suggest building this service using a mix of your #1 and #2 options. You can do the processing yourselves and leverage AWS S3, which is quite cheap, for transferring the data.
Example: 100 GB costs approximately $3 a month.
AWS S3 is also beneficial because you are covered against any disaster in your local environment - your data will be safe in S3.
For security you can leverage data encryption and signed URLs in AWS S3.
I have a website where users upload various files and later access them.
The files are currently stored at a specific path on the server. Now, if I need multiple servers for the website, what is the best way to make the user-uploaded files accessible across those servers? Amazon S3 is one option that has crossed my mind. What other options do I have?
First, you can try using a CDN (http://en.wikipedia.org/wiki/Content_delivery_network).
You can also build it in house by setting up specialized servers for static content. You will probably need a lookup server that knows, for each file, on which server it can be found. It will also contain the logic for deciding which server is best to save a new file to. This is more complicated, as you will have to do the load balancing yourself and take care of the geographic location of users.
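A very reduced sketch of such a lookup: one mapping that records which storage server holds each file, plus a trivial hash-based rule for placing new uploads. In practice the mapping would live in a database and placement would also consider load and geography; the server names are made up:

```python
import hashlib

STORAGE_SERVERS = ["http://static1.example.com", "http://static2.example.com"]
file_locations = {}   # filename -> server; in practice a database table

def server_for_new_file(filename):
    # Deterministic placement: hash the name onto one of the storage servers.
    index = int(hashlib.md5(filename.encode()).hexdigest(), 16) % len(STORAGE_SERVERS)
    server = STORAGE_SERVERS[index]
    file_locations[filename] = server
    return server

def url_for(filename):
    return f"{file_locations[filename]}/{filename}"
```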
Our devices (microscopes with cameras) produce images plus additional information for each image.
Now a middleware supplier wants to connect these devices to a lab automation system. They have to acquire the data and we have to provide it. What astonished me was their suggested interface - a very cryptic token-separated format (ASTM E1394-97). Unfortunately, they can't even accommodate images in their protocol and are aiming to get file paths instead.
That did not strike me as an up-to-date approach. While looking for alternatives, I came across CouchDB.
So my idea was that our devices would import the data, including images, into CouchDB, and the middleware could fetch it from there. It even seems that, using Mustache templates, we could produce the format they want (ASCII text), placing URLs as image references instead of paths.
My question is: has anyone already applied CouchDB to such a use case? It seems like a bit of a misuse of CouchDB, as the main intention here is an interface rather than data storage. Another point that bothers me is that the inventor of CouchDB has moved on to another project, Couchbase. Could that mean a lack of support for CouchDB in the future?
Thank you very much for any insights and suggestions!
It's a fine use case, and we're actually using CouchDB in exactly that way - as proxying middleware between medical laboratory analyzers and a LIS. Some of the analyzers publish images or PDF data to shared folders, and we just load those files into the related document as attachments.
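For reference, pushing an image from a shared folder into the related document is a single call against CouchDB's attachment API. A sketch with requests - the database name, document id, credentials and content type below are made up:

```python
import requests

COUCH = "http://admin:secret@localhost:5984/lab_results"   # placeholder credentials/db

def attach_image(doc_id, image_path):
    # Fetch the current revision, then PUT the file as an attachment on the document.
    rev = requests.get(f"{COUCH}/{doc_id}").json()["_rev"]
    with open(image_path, "rb") as f:
        resp = requests.put(
            f"{COUCH}/{doc_id}/{image_path.split('/')[-1]}",
            params={"rev": rev},
            data=f,
            headers={"Content-Type": "image/png"},   # assumed type for this example
        )
    resp.raise_for_status()
```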
Moreover, you may like to know that CouchDB is able to manage external processes (aka os_daemons) and take care of their lifespan: restarting them if they terminate and starting them right after you update the config options through the HTTP interface. This helps to set up ASTM client and server processes (since that protocol is different from HTTP, which is native to CouchDB); they communicate with the devices and create documents as regular CouchDB clients. In the same way you can set up daemons that monitor the shared folders for specific files. And all of this is just CouchDB with a few "loosely bound" plugins.