What is the best way to link an image with a MongoDB item?

I'm currently building my first real project that includes Express and MongoDB. Since it's one of the first backend-heavy projects I've worked on outside of my Udemy course, I've run into a lot of questions.
My project is supposed to be a mock online store that would display items I have created in my MongoDB server. The problem I'm having is that I don't know the proper way of serving the image files that should be associated with each item (such as the image of a hat, for a hat item). I could add them directly to the project's public folder, but I don't know if that would be feasible in terms of the scalability I want this project to demonstrate. And it doesn't seem like MongoDB will let me store images within each item. How would I go about doing that?
Sorry in advance if any of this is unclear; it's my first time posting as well. I'll try to provide more information if I need to. Thanks!

If you want a scalable solution for images, you typically would use a separate service like AWS S3 or Imgix.
There are several benefits to using a 3rd party service. You don't bog down your web server with image requests, or image resizing. You get virtually unlimited space. Etc.
In your MongoDB document, you would then store a key like /item/1.jpg or whatever, rather than the image itself. Your front-end then uses the key to request the image when someone visits your website.
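As a minimal sketch (assuming Mongoose and an S3-style bucket; the field names and bucket URL are made up, not something from your project), the document and URL-building could look like this:

    // Sketch only: store the object key in MongoDB, never the image bytes.
    const mongoose = require('mongoose');

    const itemSchema = new mongoose.Schema({
      name: String,
      price: Number,
      imageKey: String, // e.g. "items/hat-01.jpg" -- the key in S3/Imgix/etc.
    });

    const Item = mongoose.model('Item', itemSchema);

    // The front end (or the API) builds the public URL from the stored key.
    function imageUrl(item) {
      return `https://my-example-bucket.s3.amazonaws.com/${item.imageKey}`; // hypothetical bucket
    }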
If you want a turn-key solution, I recommend starting with Imgix (or Cloudinary, or some similar service). It is more expensive than S3, but it is pretty cheap for a small project, and it will get you up and running a lot faster.

Related

When you have 2 sites, front end and a rest api, how/where do you store uploaded images?

I have two different sites, basically: one is a REST API, the other is the front end built in Vue.
Actually uploading a file isn't the issue. My question is how to have the Vue portion access the files that were uploaded via the REST API.
Should I save the files to c:...VueProjectFolder\images (could use some code help if this is the case)? Or should the Vue site be inside the REST API folder for relative access? Or is it better to save them relative, then move the uploaded files? Or do I have Vue access the files via the REST API address?
None of these really seems like the right answer, and I'm not having any luck with Google at the moment.
Down the road, I would expect them to be served from a mapped drive as there will be many. Mostly images, but also sound files.
This is probably not a Vue or 'two sites' issue. This is an architecture issue, and like such issues, it depends. What I can do is tell you how I approached a similar situation.
I uploaded the files normally
I renamed them, created a folder structure based on the year, month, and day the picture was uploaded. Then I moved the image to the permanent location. So an image uploaded today will be located at .../assets/images/2020/09/01/randomImageName.png for instance.
I stored the image location along with whatever resource came with the uploaded image in the database.
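If it helps, here is a rough sketch of that upload step in Node terms; the helper name and directory layout are just illustrative, not the exact code I used:

    // Sketch: build a dated path like assets/images/2020/09/01/<randomName>.png,
    // move the uploaded temp file there, and return the path to store in the DB.
    const fs = require('fs');
    const path = require('path');
    const crypto = require('crypto');

    function storeUpload(tempPath, ext) {
      const now = new Date();
      const dir = path.join(
        'assets', 'images',
        String(now.getFullYear()),
        String(now.getMonth() + 1).padStart(2, '0'),
        String(now.getDate()).padStart(2, '0')
      );
      fs.mkdirSync(dir, { recursive: true });

      const fileName = crypto.randomBytes(8).toString('hex') + ext;
      const finalPath = path.join(dir, fileName);
      fs.renameSync(tempPath, finalPath); // permanent location

      return finalPath; // this is what goes into the database
    }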
Now in my frontend, I do a normal api call for a particular resource and it spits out everything about that resource, including the image location.
I should point out that my case was an ecommerce website with a REST API endpoint servicing the frontend requests. This approach is generally advised because it lets you back up the image directory and the database, and move between servers easily if need be.
This may not exactly be your case, but I hope it gives you insight into how to approach this efficiently.

Which kind of Google Cloud Platform mobile backend client is appropriate?

THE PROBLEM
I'm writing a mobile app which will allow a user to log in, save some preferences that must be stored in a database, and display congressional bills to the user.
I've only written simple RESTful services with PHP and MySQL in the past. I'd like to take advantage of newer technologies, and am a little lost on general direction.
The bill data (formatted as JSON) can be gathered by running the scrapers found here. Using Docker, I managed to set up a working directory and download the files to my local machine.
I've designed a MySQL database for holding the relevant bill and user data.
I started to mess around in Google Cloud Platform and read the doc that describes the different models. I'm thinking of a few different ideas, but I'm not familiar with GCP or what I can actually accomplish.
QUESTIONS
1) What are App Engine, Compute Engine, and Container Engine each for? I get the gist that Container Engine holds different instances of stuff you load up with Docker, and that Compute Engine sets up a VM, but I don't really understand the relationships. How should I think of them?
2) When I run those scrapers from the shell, where are the files being stored, and how can I check on them? On my computer, I set a working directory, but how do directories work in GCP? Is it just a directory in the currently selected VM, or is this what Buckets are for?
IDEAS
1) Since my bill data already comes as JSON, should I skip the entire process of building a database for the bills and insert them into Firebase somehow? Is this even possible? If so, am I stuck using Firebase's NoSQL, or can I still set up a relational database?
2) I could schedule the scrapers to run periodically, detect new files, and run a script to parse the JSON and insert new bill data into a database (PostgreSQL? MySQL?). Then I would write an API.
3) Download the JSON files to a bucket, and write an API that reads from them. Not sure how the performance would compare to using a DB.
I'm open to other suggestions as well.
For your use case (a stateless web application), App Engine is probably your best choice. The Google documentation has several comparisons of your computing options.
You can use App Engine with PHP and cloud-hosted MySQL if you want, which could be a good way to get your toes wet without going in over your head.

Fully scalable website with micro-applications

I'm in the process of designing a cloud-deployed website for a new solution my company is looking to provide. I have been attempting to answer a few questions and haven't had any luck, so, when in Rome.
First, I don't want the website to be stuck to any one particular framework. I know there is no way to completely future-proof a website, but I would rather not put all of our eggs in one basket.
Secondly, I want to have a complete separation between the front and back end. I have a list of reasons why I'm looking to do this, and don't necessarily want to get into the conversation of what they are. Server-side rendering is, for the most part, out of the question.
So where does that leave me?
My initial thoughts on the design are to have a REST API that can be accessed for any API calls (this may be turned to GraphQL in the future).
The design decisions that I'm mostly wrestling with are for the front end. The website will be a dashboard-type system, where tenants can log in and see the screens meant for them.
I was thinking that I would have a sort of shell that hooks on to the index.html. This would have its own routing and would render micro-applications that are completely separate from the shell logic.
So for example, if I load index.html, path being "/"
It has some routes that it's responsible for, let's say
"/todos"
"/account"
If I accessed the /todos route, my shell application would then render that micro app. The application would be completely separate from the shell, apart from some data that might be passed via the window once it is rendered by the shell.
So my todos route, for example, could be an independent Redux application. It could have its own routing, etc.
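To make the idea concrete, a bare-bones sketch of the shell I have in mind (framework-free; the module paths and the mount() contract are placeholders I'm imagining, not an existing library):

    // The shell owns top-level routing and mounts whichever micro-app
    // claims the current path. Each micro-app is an independently built bundle.
    const apps = {
      '/todos': () => import('./apps/todos/index.js'),     // e.g. a standalone Redux app
      '/account': () => import('./apps/account/index.js'), // could use anything else
    };

    async function route() {
      const mountPoint = document.getElementById('app');
      const prefix = '/' + window.location.pathname.split('/')[1];
      const loader = apps[prefix];
      if (!loader) {
        mountPoint.textContent = 'Not found';
        return;
      }
      const microApp = await loader();
      // The micro-app only receives what the shell chooses to pass via mount().
      microApp.mount(mountPoint, { user: window.__SHELL_USER__ });
    }

    window.addEventListener('popstate', route);
    route();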
Is this is a common architecture? Are there any examples of this? Is there a better way of going about this?
Thanks for any insight!
Sounds like you're well and truly over-engineering this beast.
You might take on such an architecture for a HUGE build with many dev teams all working separately. For a small agile team, the above would create a lot of overhead in boilerplate and headaches from context switching between each "app".
Micro-service architecture is seriously great. Just don't break it up too small; read your use case well and break your services up accordingly.
For example: we are a team of 3. We have a pretty large-ish app divided into:
PHP API
Backend management interface (Redux)
Frontend website (HTML, React, PHP)
Search service (Elasticsearch)
Cache (Redis)
Data store (MySQL)
All running in multiple Docker containers across multiple hosts. Pull down the backend? Fine, the frontend website is still up and running!

iPhone SDK & MySQL Remote Database

I've tried looking around but honestly haven't found much help. I am mostly seeking advice on how I should approach developing what I have in mind.
I want to accomplish something like this.
Imagine a website with a backend database. This database contains information fed by the users themselves. The website is fully functional; now I want users to be able to have the same functionality on their iPhones. I don't use a local database because I want all users to have access to the same database, and its contents change constantly.
What would be the best approach to:
Allow users to access all the information currently available on the website (database perspective).
Able to edit & add new entries to the database
I don't know if creating an array to hold all this data would be wise, especially with large amounts of data. I don't know how well it would scale.
Should I create a duplicate SQLite database on the phone itself, mirroring the one behind the website? What do you guys feel would be a good approach to this?
Comments, links, references would be greatly appreciated.
Thanks!
Sounds like the perfect time to create an API for your website. If your application is not very big, you can use the same database, but it would be good to run the API separately from the web server.
Essentially, such an API should allow you to make requests to certain URLs for retrieving, updating and deleting information from the database.
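Purely for illustration (no particular server platform is assumed here), those endpoints might take roughly this shape if you happened to use Node/Express; the route names and the in-memory db stand-in are made up:

    // Rough shape of a small REST API: list, create, and update entries.
    const express = require('express');
    const app = express();
    app.use(express.json());

    // Stand-in for whatever data layer sits over the existing website database.
    const db = {
      entries: [],
      async list() { return this.entries; },
      async create(e) { this.entries.push(e); return e; },
      async update(id, e) { this.entries[id] = e; return e; },
    };

    app.get('/entries', async (req, res) => res.json(await db.list()));
    app.post('/entries', async (req, res) => res.status(201).json(await db.create(req.body)));
    app.put('/entries/:id', async (req, res) => res.json(await db.update(req.params.id, req.body)));

    app.listen(3000); // the iPhone app then talks to these URLs over HTTP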
Depending on what server-side platform you are currently using, there are many options.
Client-side, your iPhone app can use http://restkit.org/ or http://allseeing-i.com/ASIHTTPRequest/ if you feel confident.

How do we share data between two different services

I am currently working on a web service which is periodically polled. It does not store its state and is instantiated every time it is queried. Essentially, it retrieves the state of other external entities, e.g. databases, and delivers it back to the requester.
Recently, the need to store state has arisen, in that:
There is the need to continuously collect data from a particular source and store the bits that are important/relevant
There is the need to collect the aggregate of a particular data source over a period of time
I came up with the following idea:
My main concern here is the fact that I am using a static class (essentially a global) to share data between the two services. Is there a better way of doing this?
Edit: Thanks for the responses so far. Apologies for the vagueness of this question: I'm just trying to work out the best way to share data across different services and am unsure as to the specifics (i.e. what is required). The platform that I am developing on is the .NET Framework, and both services are simply WCF services hosted in a Windows service.
The database route sounds like the most conventional way to go; however, I am reluctant to go down that path for now (mainly for deployment/setup reasons: it introduces the need to create new tables, etc., in addition to simply installing the software) when, at this point, only relatively small amounts of data are being transferred. This may of course change in the future, and going the database route might be the way to go at that point.
Is there any other way besides adding a database persistence layer?
If you need to collect and aggregate data, you might want to consider using a database between the two layers. Or have I misunderstood something?
You should consider enhancing your question with more requirements: pretty much all options are open here.
Sure, how about data binding? I don't have a lot of information to go on here about your platform, but most sufficiently advanced systems offer it in some form.
You could replace your static shared data with some database representation, with a caching layer (like memcached) between the database and the webservice, so that most of the time the data is available very quickly from the cache, but can be retrieved from the database as needed.
I appreciate that you want to keep the architecture simple. Depending on the number of items you have to look up and their permanence, you might just consider leveraging your file system or a message queue. It sounds like you want a file system, because that would have the least impact on your design.
If you start dealing with tens of thousands of small files, your directories can get hard to navigate and slow to do file lookups on. I typically shoot for about 1,000 - 10,000 files per directory, and concoct a routine that generates a path to the file from the file name pattern. Keeping the number of subdirectories balanced is important; some file systems have a limit on the number of subdirectories in a parent directory.
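A routine along those lines might look something like this (sketched in JavaScript for brevity, with arbitrary bucket counts; the idea carries over to any platform):

    // Spread files across two levels of subdirectories by hashing the file name,
    // so no single directory grows past a few thousand entries.
    const crypto = require('crypto');
    const path = require('path');

    function bucketedPath(rootDir, fileName, perLevel = 32) {
      const hash = crypto.createHash('md5').update(fileName).digest();
      const level1 = hash[0] % perLevel; // 32 x 32 gives roughly 1000 buckets total
      const level2 = hash[1] % perLevel;
      return path.join(rootDir, String(level1), String(level2), fileName);
    }

    // bucketedPath('/data/files', 'report-20200901.json')
    //   -> something like '/data/files/17/3/report-20200901.json',
    //      always the same path for the same file name.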