How would you set up the infrastructure for a Magento 2 eCommerce site with 10,000 users hitting the site simultaneously, some on the home page, some on checkout, and some accessing the admin?
For high scalability, you can build the infrastructure on AWS.
For simplicity, you can use a single dedicated server.
Both can handle significant load, but AWS is more scalable than a dedicated server: you can add capacity (load balancing, auto-scaling, separate cache and database tiers) as traffic grows, rather than being limited to one machine.
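As a rough sketch only (the server names, IPs, and pool sizes below are assumptions, not Magento requirements), the front tier on either setup could distribute the 10,000 concurrent users across several app servers, with cacheable pages (home, category) fronted by a cache/CDN while checkout and admin requests go straight to the pool:

```nginx
# Hypothetical pool of Magento app servers
upstream magento_app {
    least_conn;
    server 10.0.1.11:8080;
    server 10.0.1.12:8080;
}

server {
    listen 80;
    server_name shop.example.com;

    # Checkout and admin are session-bound and must bypass any page cache;
    # anonymous pages can additionally sit behind Varnish or a CDN.
    location / {
        proxy_pass http://magento_app;
        proxy_set_header Host $host;
    }
}
```

On AWS the same shape maps to an Application Load Balancer in front of an auto-scaling group; on a dedicated server you are capped at whatever that one box can serve.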
We have successfully built a Firebase application with Firestore, Functions, Hosting, and Auth. Now we are working on an Atlassian Confluence integration and a global rollout. The Confluence plugin's REST endpoints are served by an Express app.
What is the proper way to achieve a unique URL in all countries around the globe, e.g. https://myapp.com/confluence/api, with no (or at least acceptable) latency, including for health checks? Is a Hosting rewrite to the function serving the Express app enough? Do we need to manage any replication to regions around the globe ourselves?
Thanks a lot for any advice.
You can use the Firebase hosting to connect a custom domain:
use a custom domain (like example.com or app.example.com) instead
of a Firebase-generated domain for your Firebase-hosted site.
Firebase Hosting provisions an SSL certificate for each of your
domains and serves your content over a global CDN.
Note the following about connecting custom domains:
Each custom domain can only be connected to one Hosting site.
Each custom domain is limited to having 20 subdomains per apex domain, due to SSL certificate minting limits.
When Firebase verifies domain ownership, an SSL certificate is provisioned for your domain and deployed across Firebase's global CDN (content delivery network). This delivery network caches your content on the SSDs of Firebase edge servers to ensure quick content delivery and low latency globally.
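For routing /confluence/api to the Express app, a Hosting rewrite to the function is the usual approach. A minimal sketch of the firebase.json (the function name `confluenceApi` is an assumption; use whatever name you export the Express app under):

```json
{
  "hosting": {
    "rewrites": [
      { "source": "/confluence/api/**", "function": "confluenceApi" }
    ]
  }
}
```

One caveat for the latency question: the CDN serves static content from the edge, but the Cloud Function itself runs in a single region, so dynamic responses (including health checks hitting the function) still incur a round trip to that region. You don't manage replication of Hosting content yourself, but multi-region function latency is something to measure before the global rollout.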
We want to deploy our application on a cloud inside our corporate network so that it can be used to test APIs that exist within that network. We do not want to allow public access to this application nor to the internal APIs.
I've looked at deploying ICP internally onto resources (VMs) we've made available, but am wondering if IBM Cloud Dedicated is the better solution since I believe it's closer to IBM Cloud, which is where we've deployed our public-facing application.
IBM Cloud Dedicated is a single-tenant cloud environment, but it's hosted in an IBM data centre, so it might not meet your requirements. It can use VPN to securely connect to the local data centre - but that's also possible with public cloud, using the Secure Gateway. Depending on the sensitivity of the application, public cloud and secure gateway could be a good solution.
If you do want something inside the corporate network, IBM Cloud Private (ICP) is a good choice. It's a significant part of IBM's hybrid cloud guidance, so I personally wouldn't worry too much about technical differences between it and the public cloud.
I understand that Citrix NetScaler usually sits in front of citrix servers. Does it also sit in front of non-citrix servers?
>Does it also sit in front of non-citrix servers?
Yes. It is a full-blown load balancer. Or, using the newer, fancier term, an "Application Delivery Controller".
It will do all the typical work:
distributing to backend
monitoring backend (using several included service monitors)
arrange persistence to backend
offload authentication to frontend and authenticate to backend
offload SSL/TLS from backend
And also:
SSL-VPN gateway
Web cache
Web front end optimization (compression, JavaScript-minification, Sharding, etc.)
Web application firewall
There are several editions and only the most expensive one will give you all the features. Also SSL-VPN is licensed by concurrent users.
It can be used for all other servers for various purposes.
It depends on how it's configured. You can use it for Layer 4 load balancing (at Layer 4, a load balancer has visibility into network information such as application ports and protocol, TCP/UDP), as a reverse proxy, for storage, etc.
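A minimal sketch of what a non-Citrix backend looks like in the NetScaler CLI (the names and IPs here are hypothetical; any HTTP servers can sit behind it):

```
add server web1 10.0.0.11
add server web2 10.0.0.12
add service svc-web1 web1 HTTP 80
add service svc-web2 web2 HTTP 80
add lb vserver vs-web HTTP 192.0.2.10 80 -lbMethod LEASTCONNECTION -persistenceType COOKIEINSERT
bind lb vserver vs-web svc-web1
bind lb vserver vs-web svc-web2
```

This covers the first three bullets above (distribution, monitoring via the default HTTP monitor, and cookie-based persistence); SSL offload would be a separate `SSL` vserver with a bound certificate.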
I'm using Owncloud on personal server for personal data, and need to connect to business-related server for business data. Server-to-server sharing is unappealing because of wasting costly hosted storage (and some other arguments). Is there a way to make windows client sync both servers simultaneously?
Such a feature currently doesn't exist. Two possible "workarounds" are listed here:
https://forum.owncloud.org/viewtopic.php?f=17&t=20521
An implementation of this feature without workarounds is planned for 1.9:
https://github.com/owncloud/client/issues/43
This question may be a bit subjective but I think will offer some valuable concrete information and solutions to proxying to heroku and debugging latency issues.
I have an app built using Sinatra/Mongo that exposes a REST API at api.example.com. It's on Heroku Cedar. Typically I serve static files through nginx at www and proxy requests to /api through to the api subdomain to avoid cross-domain browser complaints. I have a Rackspace Cloud instance, so I put the front-end there temporarily on nginx and set up the proxy. Now latency is horrible when proxying: every 3 or 4 requests, one takes longer than a minute; otherwise it's ~150ms. When going directly to the API (browser to api.example.com) average latency is ~40ms. While I know the setup isn't ideal, I didn't expect it to be that bad.
I assume this is in part due to proxying from rackspace - server may well be on the west coast - to heroku on amazon ec2 east. My thought at the moment is that getting an amazon ec2 instance and proxying that to my heroku app would alleviate the problem, but I'd like to verify this somehow rather than guessing blindly (it's also more expensive). Is there any reasonable way to determine where the long latency is coming from? Also, any other suggestions as to how to structure this application? I know I can serve static files on Heroku, but I don't like the idea of my API serving my front-end, would rather these be able to scale independently of one another.
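One way to verify rather than guess: curl's timing variables break a request into DNS, connect, and time-to-first-byte, run from both the Rackspace box and your own machine against both the proxy and api.example.com directly. A sketch (the URL is a placeholder):

```
curl -o /dev/null -s -w \
  'dns=%{time_namelookup}  connect=%{time_connect}  ttfb=%{time_starttransfer}  total=%{time_total}\n' \
  https://api.example.com/some-endpoint
```

If `connect` is small but `ttfb` balloons on the slow requests from Rackspace only, the cross-provider hop (or an idle Heroku dyno spinning up) is the likely culprit, which would support moving the proxy onto EC2 east.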
Since you're using Heroku to run your API, what I'd suggest is putting your static files into an Amazon S3 bucket, something named 'myapp-static', and then using Amazon Cloudfront to proxy your static files via a DNS CNAME record (static.myapp.com).
What's good about using S3 over Rackspace is that:
Your files will be faster for you to upload from Heroku, since both your app and storage are on the same network (AWS).
S3 is built for serving static files directly, without the overhead of running your own server proxying requests.
What's good about using CloudFront is that it will cache your static files as long as you want (reducing repeated HTTP requests to the origin), and serve files from the endpoint closest to the user. E.g., if a user in California makes an API request and gets a static file from you, it will be served to them from the AWS California edge as opposed to your East Coast Heroku instances.
Lastly, on your application end, have the REST API send the user a LINK to the static asset (e.g. http://static.myapp.com/images/background.png). That way the client is responsible for downloading the content directly and can fetch the asset as fast as possible.
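Since the app is Sinatra, a minimal sketch of that last step in plain Ruby (the `static.myapp.com` CNAME and the `asset_url` helper are assumptions for illustration, not part of any framework):

```ruby
require 'json'

# Hypothetical CloudFront CNAME fronting the S3 bucket
STATIC_HOST = 'http://static.myapp.com'

def asset_url(path)
  "#{STATIC_HOST}/#{path}"
end

# The API response carries a link; the client downloads the asset
# straight from the CDN instead of through the API dynos.
payload = JSON.generate(image_url: asset_url('images/background.png'))
```

The API and the static assets can then scale independently, which was the original goal.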