DDoS attack on small site with shared hosting - ddos

I was informed by my hosting service that my site is getting hammered. It's a tiny out of date portfolio site and I haven't made any enemies lately so I'm guessing this is random. Any tips for protecting against attacks in the future? Is there anything I can even do when using a small shared host? I have zero experience when it comes to security and generally deal more with front end issues. Any help is appreciated.
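An added, defender-side example that is not part of the original question or any reply: most shared hosts let you download the site's access log in the standard combined format, and a short script over it shows whether the "hammering" comes from a handful of addresses. The script below flags IPs whose request rate exceeds a threshold in any sliding window and renders Apache 2.4 `.htaccess` deny lines for them. The log lines, IP addresses (from the documentation ranges) and the window/threshold values are all constructed assumptions.

```python
"""Flag client IPs whose request rate in an access log exceeds a threshold.

Added example, not from the original post. Assumes an Apache/nginx
"combined"-format access log; the sample lines below are constructed.
"""
import re
from collections import defaultdict
from datetime import datetime, timedelta

# One combined-format log line: ip ident user [time] "request" status size ...
LOG_LINE = re.compile(
    r'^(?P<ip>\S+) \S+ \S+ \[(?P<ts>[^\]]+)\] "(?P<req>[^"]*)" '
    r'(?P<status>\d{3}) (?P<size>\S+)'
)
TIME_FMT = "%d/%b/%Y:%H:%M:%S %z"


def parse_line(line):
    """Return (ip, timestamp) for a well-formed line, else None."""
    m = LOG_LINE.match(line)
    if not m:
        return None
    return m.group("ip"), datetime.strptime(m.group("ts"), TIME_FMT)


def find_floods(lines, window_seconds=10, max_requests=20):
    """Return {ip: peak_count} for every IP that made more than
    max_requests within any window_seconds span (sliding window over
    that IP's sorted timestamps). Thresholds are assumed starting points;
    tune them to the site's normal traffic."""
    per_ip = defaultdict(list)
    for line in lines:
        parsed = parse_line(line)
        if parsed:  # malformed lines are skipped, not fatal
            ip, ts = parsed
            per_ip[ip].append(ts)
    window = timedelta(seconds=window_seconds)
    flagged = {}
    for ip, times in per_ip.items():
        times.sort()
        start = 0
        peak = 0
        for end, t in enumerate(times):
            while t - times[start] > window:
                start += 1
            peak = max(peak, end - start + 1)
        if peak > max_requests:
            flagged[ip] = peak
    return flagged


def htaccess_deny_rules(flagged):
    """Render an Apache 2.4 block that keeps the site public but blocks
    the flagged addresses (paste into .htaccess on a shared host)."""
    lines = ["<RequireAll>", "    Require all granted"]
    for ip in sorted(flagged):
        lines.append(f"    Require not ip {ip}")
    lines.append("</RequireAll>")
    return "\n".join(lines)


# --- constructed sample log: one flooder, one normal visitor, one bad line ---
def _line(ip, second, path):
    return (f'{ip} - - [10/Oct/2023:13:55:{second:02d} +0000] '
            f'"GET {path} HTTP/1.1" 200 512 "-" "Mozilla/5.0"')

SAMPLE_LOG = (
    [_line("203.0.113.5", i // 5, "/") for i in range(30)]        # 30 hits in 6 s
    + [_line("198.51.100.7", s, "/portfolio.html") for s in (0, 30, 59)]
    + ["this line is not a valid access log entry"]
)

if __name__ == "__main__":
    floods = find_floods(SAMPLE_LOG)
    for ip, peak in floods.items():
        print(f"{ip}: {peak} requests inside a 10 s window")
    print(htaccess_deny_rules(floods))
```

Blocking by IP only helps against a small source set; a distributed flood needs the host's or a CDN's filtering in front of the site, but this check tells you which case you are in before you ask them.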

Related

General question on Server Side Google Tag Manager

So I've been looking into page speed and have been reading about Google's Server Side GTM. Wondering if a lot of people have transitioned to this and what the costs could be for a high traffic marketing site. If the site already has a ton of tags, is this something that can be set up by the marketing department or developers, or is it more likely that you'd need to hire someone who specializes in GTM?
If anyone has experienced this transition and has any information to share regarding page speed results that would be greatly appreciated as well.
The page speed change will vary depending on your site and the health of your container. You can measure it. Just go to your site, go to devtools, block GTM, reload the page. Compare that to how fast it loads with it.
General conclusion: Most GTM containers have zero influence on page load speed, even if the library is not cached, because it's non-blocking.
Keep in mind that best practice for implementing server-side GTM is using it in conjunction with the regular GTM. Which kinda defeats the purpose since sending GA hits is not something affecting your site speed. Unless you create real hardware load with them. Which is pretty difficult.
Now, to maintain your sGTM, you do need to increase spending, but the main expense is usually not on maintaining the instance with the container. The main expense has a more long-term nature. The maintenance of the backend instance will require more expensive experts, much tougher debugging, typically involvement of normally more expensive backend devs and so on.
Generally, going sGTM route just because it sounds like fun is a bad idea to do in production. There should be a good business case to justify it.

Is it a bad idea to host a rest api on a cdn?

I'm new to server architecture and have been reading around a lot but have not yet formed a solid opinion on whether the setup below is good practice or not, and was hoping someone with more experience can confirm whether I'm setting up my architecture correctly:
Use Angular Universal to Pre render html to CDN (e.g. Cloudflare)
Cloudinary for Image assets
One/few strong machines with nginx handling bus load and sending off to the other servers listed below (all hosted in DigitalOcean):
Rest API (Express Server)
Database MongoDB
I'm really concerned about the speed of my rest api as the regions offered in digital ocean seem significantly smaller in contrast to a cdn like cloudflare. How much does this matter when affecting my speed and is a service?
I know this might sound ridiculous but the region issue makes me wonder if hosting a REST API Express server on a CDN would be better than a place like DigitalOcean. (My instincts tell me I shouldn't do this on a CDN but I am at a loss for reasons and hope someone can provide clear reasons why I can or shouldn't host an Express REST API server there.)
From my knowledge I would do this a little differently.
A CDN is used to serve content, hence the name CDN (Content Delivery Network). The CDN itself doesn't serve the content but it routes the user to a server which serves it. For example, if you have a server in the US, France and Asia and you were from the UK and requested the website with images hosted on these servers, the CDN would direct you to the closest/best server for you. In this case that would be the server in France.
So to answer your question it isn't a bad idea to host the RESTful API on the CDN but you would need multiple servers around the world (if you are going for worldwide) and use Cloudflare CDN to direct your traffic.
This is what I would do:
If you're not expecting loads of traffic (like millions) just have 1-2 servers in each location, so 1-2 in North America, South America, France (EU), Asia and maybe Australia. This will give you decent coverage. Then when you set up your CDN that should handle who goes where. Using node and nginx will help you a lot; this will allow you to get cheaper, not as powerful servers because they are pretty lightweight.
Now for your databases you can do one of two things: have one dedicated solution somewhere which will be as little latency for all regions, somewhere like France (EU) so it's more central, or you can have multiple and have them sync. Having multiple databases which sync will be more work and will require quite a bit of research. Having the one server is a lot easier to manage.
The database will be your biggest problem deciding whether to do with one and deal with latency or multiple and have to manage them and keep them in sync. Keep in mind you could go with a cloud hosting platform to host your database this would help you with the issue because a lot of platforms will offer worldwide coverage as well as providing synchronised databases. You will however run into the cost issue when using cloud platforms.
Hope this answers your questions and provides you with the knowledge you need!
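An added example, not part of the answer above, of the routing decision it describes: given the example servers (US, France, Asia) and a client in the UK, pick the closest server by great-circle distance, and fall back to the next closest when a region is marked unhealthy, which is what "closest/best" means in practice. The coordinates are rough constructed values for the three regions, not anything a real CDN publishes.

```python
"""Pick the closest healthy region for a client, as a CDN's routing does.

Added example, not the original answer's code. Region coordinates are
constructed approximations (Washington DC, Paris, Singapore).
"""
import math

REGIONS = {
    "US": (38.9, -77.0),
    "France": (48.9, 2.4),
    "Asia": (1.35, 103.8),
}


def haversine_km(p, q):
    """Great-circle distance in km between two (lat, lon) points in degrees."""
    earth_radius_km = 6371.0
    lat1, lon1 = map(math.radians, p)
    lat2, lon2 = map(math.radians, q)
    dlat = lat2 - lat1
    dlon = lon2 - lon1
    h = (math.sin(dlat / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin(dlon / 2) ** 2)
    return 2 * earth_radius_km * math.asin(math.sqrt(h))


def nearest_region(client, regions=REGIONS, healthy=None):
    """Return the name of the closest region; if `healthy` is given, only
    regions in that set are candidates (a down region is skipped)."""
    candidates = {n: c for n, c in regions.items()
                  if healthy is None or n in healthy}
    if not candidates:
        raise ValueError("no healthy region available")
    return min(candidates, key=lambda n: haversine_km(client, candidates[n]))


if __name__ == "__main__":
    london = (51.5, -0.1)  # constructed client location in the UK
    print("all healthy   ->", nearest_region(london))
    print("France down   ->", nearest_region(london, healthy={"US", "Asia"}))
```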

High traffic site. >10 million users a day. VPS or dedicated server?

We're launching an iPhone app soon, and if everything goes well, we might reach up to tens of millions of users each day.
What server solution would you use for this? I guess a small VPS isn't enough. Is dedicated server a better choice? Is there any good hosting provider that can provide such servers?
I'm a newbie when it comes to servers, and would like some basic info about how to handle this.
Thanks in advance
Unfortunately, you are not really going to know the app's requirements until the app is launched. It all depends on how much the app needs to communicate with the server, and how often users are using the app. Depending on those variables and even more, a VPS might be enough, or you may need a dedicated box, or several. It also depends a lot on the performance of the VPS and dedicated boxes; furthermore it depends on how much access to the system you need.
Ultimately, it seems you may not even know how well the app is going to do, so I suggest you take the cheap/efficient route of using cloud computing. That way you will limit your expenses initially when your app has a small user base. Then your performance can amp up as quickly as your app requires (of course so will the price). That is the benefit of cloud computing: you will not be losing money in the beginning until you have the user base to use your server to its limit. Furthermore, you do not have downtime, etc. when/if your server is no longer enough.
Check out Google's Cloud Computing to get a hint of what is possible. I personally like Google's cloud experience, but you have many more options with varying degrees of freedom that you will have to check out. Amazon of course is another possibility.

Host at Facebook to avoid traffic or other possibilities?

Is it possible to have my own Facebook apps (not generating revenue) hosted by Facebook?
The problem is that by using the iframe-version the traffic/requests are killing the server :-(
But I need to connect to a database and print/calculate values, so I think there is no other way than hosting everything on own servers. But maybe there are things I don't know.
What is the way you would go?
I don't think Facebook has an option to host apps, at least not that I've ever heard of or was quickly able to find on their developers site.
Honestly, when it comes to hosting a high-demand website, there's no free way to do it. Resources cost money. You can pick from tons of hosting providers and see who gives you the features you need at the best rate. Maybe some will offer free hosting if you include ads in the Facebook app, maybe some will offer free hosting for other means, etc.
For a non-revenue-generating app, when it becomes popular and successful and requires real resources to keep it running, it's generally time to start thinking about how to generate revenue from it. Maybe use it as a free gateway app to other revenue-generating apps (a loss leader), maybe have ads, maybe use it to generate useful marketing data, etc. For a successful site it may involve a good bit of personal investment and risk before the profits roll in (Facebook being a good, though extreme and uncommon example of this).
You have to host the application on your own, there's no way that FB does it for you.

What to do when you've really screwed up the design of a distributed system?

Related question: What is the most efficient way to break up a centralised database?
I'm going to try and make this question fairly general so it will benefit others.
About 3 years ago, I implemented an integrated CRM and website. Because I wanted to impress the customer, I implemented the cheapest architecture I could think of, which was to host the central database and website on the web server. I created a desktop application which communicates with the web server via a web service (this application runs from their main office).
In hindsight this was rather foolish, as now that the company has grown, their internet connection becomes slower and slower each month. Now, because of the speed issues, the desktop software times out on a regular basis, the customer is left with 3 options:
Purchase a faster internet connection.
Move the database (and website) to an in-house server.
Re-design the architecture so that the CRM and web databases are separate.
The first option is the "easiest", but certainly not the cheapest long term. Second option: if we move the website to in-house hosting, the client has to combat issues like overloaded/poor/offline internet connection, loss of power, etc. And the final option: the client is loath to pay a whole whack of cash for me to re-design and re-code the architecture, and I can't afford to do this for free (I need to eat).
Is there any way to recover from when you've screwed up the design of a distributed system so bad, that none of the options work? Or is it a case of cutting your losses and just learning from the mistake? I feel terrible that there's no quick fix for this problem.
You didn't screw up. The customer wanted the cheapest option, you gave it to them, this is the cost that they put off. I hope you haven't assumed blame with your customer. If they're blaming you, it's a classic case of them paying for a Chevy while wanting a Mercedes.
Pursuant to that:
Your customer needs to make a business decision about what to do. Your job is to explain to them the consequences of each of the choices in as honest and professional a way as possible and leave the choice up to them.
Just remember, you didn't screw up! You provided for them a solution that served their needs for years, and they were happy with it until they exceeded the system's design basis. If they don't want to have to maintain the system's scalability again three years from now, they're going to have to be willing to pay for it now. Software isn't magic.
I wouldn't call it a screw up unless:
It was known how much traffic or performance requirements would grow. And
You deliberately designed the system to under-perform. And
You deliberately designed the system to be rigid and non adaptable to change.
A screw up would have been to over-engineer a highly complex system costing more than what the scale at the time demanded.
In fact it is good practice to only invest as much as can currently be leveraged by the business, using growth to fund further investment in scalability, should it be required. It is simple risk management.
Surely as the business has grown over time, presumably with the help of your software, they have also set aside something for the next level up. They should be thanking you for helping grow their business beyond expectations, and throwing money at you so you can help them carry through to the next level of growth.
All of those three options could be good. Which one is the best depends on cost benefits analysis, ROI etc. It is partially a technical decision but mostly a business one.
Congratulations on helping build a growing business up till now, and on to the future.
Are you sure that the cause of the timeouts is the internet connection, and not some performance issues in the web service / CRM system? By timeout I'm going to assume you mean something like ~30 seconds, in which case:
Either the internet connection is to blame and so you would see these sorts of timeouts to other websites (e.g. google), which is clearly unacceptable and so sorting the internet is your only real option.
Or the timeout is caused either by the desktop application, the web service, or by excessively large amounts of information being passed backwards and forwards, in which case you should either address the performance issue how you might any other bug, or look into ways of optimising the desktop application so that less information is passed backwards and forwards.
In short: the architecture that you currently have seems (fundamentally) fine to me, on the basis that (performance problems aside) access for the company to the CRM system should be comparable to access for the public to the system - as long as your customers have reasonable response times, so should the company.
Install a copy of the database on the local network. Then let the client software communicate with the local copy and let the database software do the synchronization between the local database server and the database on the webserver. It depends on which database you use, but some of them have tools to make that work. In MSSQL it is called replication.
First things first: how much of the code do you really have to throw away? What language did you use for the desktop client? Something .NET and you may be able to salvage a good chunk of the logic of the system and only need to redo the UI and some of the connections.
My thoughts are that 1 and 2 are out of the question; while 1 might be a good idea it doesn't solve the real problem, and we as engineers should try and build solutions not dependent on the client whenever possible. And 2 makes them get into something they aren't experts at, and it is better to keep the hosting elsewhere.
Also, since you mention a web service, is all you are really losing the UI? You can always reuse the web services for the web server interface.
Lastly you could look at using a framework to help provide a simple web based CRUD to start and then expand from there.
Are you sure the connection is saturated? You could be hitting all sorts of network, I/O and database problems... Unless you've already done so, use wireshark to analyze the traffic; measure the throughput and share the results with us.