How to improve cloud service performance in Windows Azure? [closed] - asp.net-mvc-2

As it currently stands, this question is not a good fit for our Q&A format. We expect answers to be supported by facts, references, or expertise, but this question will likely solicit debate, arguments, polling, or extended discussion. If you feel that this question can be improved and possibly reopened, visit the help center for guidance.
Closed 10 years ago.
Presently my application is running in a staging environment. I need to release my application to market, but before that I want to resolve these performance issues. My application details are:
Cloud Services: West US
Storage Account: West US
Database Server: North Central US
VM (Virtual Machine) Size: Small
Here my cloud service and storage are in one location and the database server is in another location. Does that have any effect on application performance?
One more thing: I am using one web role with one instance in my application.
Can you suggest what changes I should make to improve my application's performance?

You definitely should put your database and services in the same region. Database performance is most likely your performance Achilles heel, and you should be putting the database as 'close' as possible to your services. Having your database and services in the same region means that they are in the same datacentre and therefore on a high-speed backbone. Separate regions (datacentres) mean that your data has to traverse lower-speed trans-national infrastructure; both throughput and latency will suffer badly. Also, since data egress charges are per region, you will be paying for all the traffic from your database to the application; with them in the same region this will not cost a cent.
Other performance improvements can be made: look at the CPU load on your cloud service, for example, to determine whether a single instance is enough. But start with the data. Get data as close to the service as possible, starting with region affinity, but also looking at caching (where the data is held in memory on the same machine).

From the description above, a few things you would need to do first:
Make sure your database server and cloud service are in the same region (West US). Having your cloud service and database servers in different regions would create some latency issues.
In the production environment, you would need to ensure that you're running at least 2 instances of your web role. A single-instance web role is not covered by the Windows Azure SLA, and if this single instance goes down for any reason, your application is unavailable for that duration.
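As an illustration, the instance count for a classic cloud service web role lives in the service configuration file; a minimal sketch, assuming a role named "WebRole1" (the service and role names here are placeholders):

```xml
<!-- ServiceConfiguration.Cloud.cscfg sketch; service and role names are placeholders -->
<ServiceConfiguration serviceName="MyService"
    xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration">
  <Role name="WebRole1">
    <!-- At least 2 instances are needed to be covered by the Windows Azure SLA -->
    <Instances count="2" />
  </Role>
</ServiceConfiguration>
```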
After that, please follow @Alexei Levenkov's recommendation on setting up performance goals.

Related

How should Kubernetes clusters be segregated for a production environment? [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Want to improve this question? Update the question so it can be answered with facts and citations by editing this post.
Closed 1 year ago.
I'm wondering about the best practices for architecting my Kubernetes clusters.
For 1 environment (e.g. production), what organisation should I have in my clusters?
Examples: 1 cluster per technology stack, 1 cluster per exposure area (internet, private...), 1 cluster with everything ... ?
Thanks for your help
I'm not a Kubernetes expert, so I'll give you some generic guidance to help until someone who knows more weighs in.
By technology stack - no. That wouldn't provide any value I can think of.
By 'exposure' - yes. If one cluster is compromised the damage will hopefully be limited to that cluster only.
By solution - yes.
Solution vs Technology Stack
"Solution" is where you have a number of systems that exist to addresses a specific business problem or domain. This could be functional e.g. finance vs CRM vs HR.
Technology stacks in the literal sense is not likely to be relevant. True, it's not uncommon for different solutions & systems to be comprised of different technology (is that what you were meaning?) - but that's usually a by-product, not the primary driver.
Let's say you have two major solutions (e.g. the finance and CRM). It's likely that you will have situations that impacts one but shouldn't impact the other.
Planned functional changes: e.g. rolling out a major release. Object Orientated programmers and architects have had this nailed for years through designing systems that are cohesive but loosely-coupled (see: Difference Between Cohesion and Coupling), and through stuff like the Stable Dependencies Principle. Having both solutions dependent on the same cluster makes them coupled in that respect, which.
Planned infrastructure changes: e.g. patching, maintenance, resource reallocation, etc.
Unplanned changes: e.g. un-planned outage, security breaches.
Conclusion
Look at what will be running on the cluster(s), and what solutions they are part of, and consider separation along those lines.
The final answer might be a combination of both, some sort of balance between security concerns and solution (i.e. change) boundaries.
The best way would be to have one Kubernetes cluster and have the worker nodes in private subnets. You can choose to have the control plane in a public subnet with restricted access, such as your VPN CIDR, etc.
If you have multiple teams or application stacks, I'd suggest having a different namespace for each stack, as this creates a logical separation of resources.
Also, check the resource limits and quotas that you can apply on Kubernetes to prevent over-consumption of resources.
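For example (the names and numbers below are illustrative, not a recommendation), a namespace per stack with a ResourceQuota might look like:

```yaml
# Illustrative sketch: one namespace per application stack, with a quota
apiVersion: v1
kind: Namespace
metadata:
  name: billing-stack          # hypothetical stack name
---
apiVersion: v1
kind: ResourceQuota
metadata:
  name: billing-quota
  namespace: billing-stack
spec:
  hard:
    requests.cpu: "4"          # total CPU requested by all pods in the namespace
    requests.memory: 8Gi
    limits.cpu: "8"
    limits.memory: 16Gi
```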
And, as you mentioned multiple application stacks, I am assuming you would have multiple services exposed for each application or something similar. I would highly recommend using an ingress controller (nginx or anything else) as a single point of entry for each application. You can have more than one application behind one load balancer.
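To sketch the single-point-of-entry idea (hostnames and service names here are invented for illustration, and an nginx ingress controller is assumed to be installed):

```yaml
# Illustrative sketch: one ingress routing two applications behind one load balancer
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: apps-ingress
spec:
  ingressClassName: nginx        # assumes the nginx ingress controller is installed
  rules:
    - host: app1.example.com     # hypothetical hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: app1-svc   # hypothetical service
                port:
                  number: 80
    - host: app2.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: app2-svc
                port:
                  number: 80
```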
Also, have Prometheus or ELK monitoring in place, as they are great for monitoring k8s components and metrics.
And I would highly recommend using tools like Kubecost and kube-bench to improve your k8s cluster.
Kubecost provides cost analytics and reporting for k8s components, and kube-bench audits your cluster against the CIS benchmarks and gives you a report on what improvements are required and where.
Please note that the above recommendations are based on best practices and cost efficiency.

Loopback - API Connect - Licensing [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
This question does not appear to be about programming within the scope defined in the help center.
Closed 6 years ago.
My question: Can I use API Connect to develop loopback models for free, similar to how SLC ARC works?
I've been playing around with StrongLoop's ARC, and it is fairly straightforward to discover models from a database schema. IBM is pushing API Connect, and SLC ARC has a number of deficiencies (how it handles MSSQL schemas during discovery, the fact that custom connectors are not picked up and must be baked into ARC's source code, etc.) that are unlikely to be addressed since ARC is no longer supported.
The Loopback.io homepage explicitly states that:
A free version of API Connect especially for developers is available called API Connect Essentials.
However, to run apic edit I am forced to sign up for Bluemix. On the registration page, it appears that I am receiving only a trial.
Your 30-day trial is free, with no credit card required. You get access to 2 GB of runtime and container memory to run apps, unlimited IBM services and APIs, and complimentary support.
I don't care about online services or deploying my API to the cloud. I'd like to run loopback on my own servers and am simply planning to use API Connect for model generation. Any help is greatly appreciated!
The trial is for Bluemix itself, which allows you to use a variety of services for free for a period of time.
The API Connect Developer Toolkit is free to use regardless of your Bluemix account type.

Enterprise NoSQL Stack Solution for Mobile/Web [closed]

Closed 7 years ago.
I'm tasked with investigating for our firm a full-stack solution where we'll be using a NoSQL database backend. It'll most likely be fed from a data warehouse and/or operational data store of some type in near-realtime (hopefully :). It will be used mainly by our mobile and web applications via REST.
A few requirements/assumptions:
It will be read-only (in the near term) and consumed by clients in REST format
It has to be scalable
Fast response time
Enterprise support, or if lacking actual support, something industry-proven if open-source (basically management wants to hold someone accountable if something in the stack fails)
Minimal client data transformations - i.e: data should be stored in as close to ready-to-use format as possible
Service API Management of some sort will most likely be needed (eg: 3scale)
Services will be used internally, but solution shouldn't prevent us from exposing them externally as a longterm goal
Micro-services are preferable (provided sufficient API management is in place)
We have in-house expertise in Java and Grails for our mobile/portal solutions
Some of the options I was tossing around were:
CouchDB: exposes REST natively, so no translation layer is needed; as long as clients speak REST, we're all good
MongoDB: needs a REST layer between the client and the DB; I haven't found a widely used one in my investigation (the ones on Mongo's site all seem to be in their infancy, e.g. RestHeart)
Some questions I have:
Do I need an appserver, or any layer in between the client and DB for performance/caching reasons? I was thinking a reverse proxy like nginx would be a good idea for this?
Why not use CouchDB in this solution if it supports REST out of the box?
I'm struggling to decide which NoSQL DB to use, and whether I need a REST translation layer, an appserver, etc. I've read the pros and cons of each, and mostly they say go with Mongo; but for what I'm trying to do, the lack of a mature REST layer is concerning.
I'm just looking for some ideas, tips, lessons learned that anyone out there would be willing to share.
Thanks!
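For what it's worth, the translation layer in question is conceptually small. A hedged Python sketch, where an in-memory dict stands in for MongoDB (a real layer would call pymongo and add authentication, validation, and error handling; the function name is invented for illustration):

```python
# Minimal sketch of a REST-to-document-store translation layer.
# An in-memory dict stands in for MongoDB here; a real implementation
# would call pymongo and add authentication and validation.
store = {}

def handle_request(method, doc_id, body=None):
    """Map REST verbs onto document-store operations."""
    if method == "GET":
        return store.get(doc_id)
    if method == "PUT":
        store[doc_id] = body
        return body
    if method == "DELETE":
        return store.pop(doc_id, None)
    raise ValueError("unsupported method: " + method)
```

The point is that the verb-to-operation mapping itself is trivial; the maturity concern is really about everything around it (auth, pagination, query syntax), which is where a dedicated layer like RestHeart earns its keep.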
The problem with exposing the database directly to the client is that most databases do not support permission control as fine-grained as you want it to be. You often cannot allow a client to view and edit its own data while also forbidding it from viewing and editing other users' data, or even worse, the server's own data. At least not if you still want a sane database schema.
You will also often find yourself in the situation where you have a document with several fields, only some of which are supposed to be under the control of the user. I can, for example, edit the content of this answer, but I cannot edit the time it was posted, the name it was posted under, or its voting score. So far I have never seen a database system which can handle permissions for individual fields (if anyone has: feel free to post in the comments).
You might think about handling this on the client and simply not offering any user interface for editing those fields. But that will only work in a trusted environment. When you have untrusted users, they could create a clone of your client-side application which does expose this functionality. There is no way for you to tell the difference between the genuine client and a clone, especially when you don't have a smart application server (and even then it is practically impossible).
For that reason it is almost always required to have an application server between clients and database which handles authentication and permission management for the clients, and forwards to the persistence layer only those requests which are permitted.
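To make the field-level point concrete, here is a hedged sketch of what such an application server might do before persisting a client update (the field names and function are invented for illustration, not from any particular framework):

```python
# Sketch: whitelist-based field filtering in an application server.
# Only fields in ALLOWED_USER_FIELDS may be changed by the client;
# protected fields (e.g. posted_at, author, score) keep their stored values.
ALLOWED_USER_FIELDS = {"content"}

def apply_client_update(stored_doc, client_update, allowed=ALLOWED_USER_FIELDS):
    """Return a new document with only the permitted fields updated."""
    updated = dict(stored_doc)
    for field, value in client_update.items():
        if field in allowed:
            updated[field] = value
        # attempts to edit protected fields are silently ignored here;
        # a stricter server might reject the whole request instead
    return updated
```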
I totally agree with the answer from @Philipp. In the case of using CouchDB you will at a minimum want to use a proxy server in front to enable SSL.
Almost all of your requirements can be fulfilled by CouchDB. Especially the upcoming v2 will address your datacenter needs.
But it's simply very complex to say what the right tool for your purpose would be. If you get some business-model requirements on top, like, let's say, throttling, then you will definitely need application-server middleware like http://mcavage.me/node-restify/
Maybe it's a good idea to spend some money on professionals like http://www.neighbourhood.ie/couchdb-support/ ? (I'm not involved)

Load balancing web servers: Benefits, disadvantages, mainstream? [closed]

Closed 10 years ago.
The questions I have regarding load balancing are:
Why exactly would you want to load balance web servers instead of upgrading to a new server?
Is it common practice to have a load balanced setup whether it's for web servers or sql servers?
Are there any disadvantages to load balancing?
How is user information maintained across both servers? If session information were stored locally on one server, how would the other servers access it?
Or if you know of any good reference materials that answer these questions, that would be good too.
Sometimes you have a very powerful server but it fails to meet your performance requirements; you want it to work 4 times quicker. One option is to increase CPU speed, HDD speed, etc. by 4 times each, which could cost around 4*4=16 times as much. Another option is to buy 4 additional servers (for 5 in total), which would increase overall performance by roughly 4.1-4.6 times. Higher availability is also a good benefit.
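The answer's rough arithmetic, sketched out (the 16x figure and the 4.1-4.6x speedup range are the answer's own illustrative estimates, not market prices):

```python
# Illustrative cost comparison: scale up (buy faster parts) vs scale out (add servers)
base_cost = 1.0                      # cost of the current server, arbitrary unit
scale_up_cost = 4 * 4 * base_cost    # the answer's estimate: 4x-faster parts cost ~16x
scale_out_cost = 5 * base_cost       # keep the current server, add 4 identical ones
speedup_range = (4.1, 4.6)           # the answer's estimated overall speedup from 5 servers
```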
I wouldn't say it is a very common practice; it is not often that you need it. Usually it is "a little bit of an expensive toy" :). On the SQL side I would suggest using SQL clustering rather than load balancing.
There are, I would say, a lot of things you need to keep in mind when implementing load balancing. Starting from session storage and a single cache store, you will get into more complicated business processes: checking that a user doesn't overwrite information changed at the same time by another user, etc.
As with any kind of data, you need a single source of truth. For sessions you need a "session provider" that, for each server in your "farm", will decode the session id and provide the system with the session data. As Mitchel already said, ASP.NET provides an out-of-the-box solution for that.
For references, I would start with material on writing scalable applications.
Few general references:
http://loadbalancing.servers.co.uk/benefits
http://en.wikipedia.org/wiki/Load_balancing_(computing)
http://refcardz.dzone.com/refcardz/scalability
From (.NET) developer perspective a good reading would be:
"Improving .NET Application Performance and Scalability", Microsoft, Pattern&Practice.
There are times and places where you have more load than you can handle with a single server. Load balancing also gives you failover protection: if you have two servers, both would have to go down before your sites stopped responding. There are a number of other reasons as well.
Yes, load balancing is a very common practice. You will see it on the web server side, and on the database side you will typically see SQL Server clustering/mirroring or other setups to get redundancy and more processing power. However, you have to be in a situation where you either need the additional power or need the redundancy.
Load balancing introduces issues, and there are things you have to be aware of. You have more complex management scenarios, but in the end, load balancing is the only way to scale past single-machine hardware capability, and in a lot of cases the only way to get true automatic redundancy.
For this I am going to assume that you are talking about ASP.NET; this is one of the "tricky" points I mentioned earlier. With ASP.NET you can change session state from InProc (the default) to the SQL Server state service, OR you could use sticky sessions on your load balancer, OR you could simply avoid using session within your application, OR you could use a third-party state management service.
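For illustration, moving ASP.NET session state out of process is a web.config change; a minimal sketch, assuming SQL Server mode (the connection string is a placeholder):

```xml
<!-- web.config sketch: SQL Server session state shared by all farm members -->
<configuration>
  <system.web>
    <!-- mode="InProc" is the default and does not survive across servers -->
    <sessionState mode="SQLServer"
                  sqlConnectionString="Data Source=YOUR_SQL_SERVER;Integrated Security=SSPI"
                  cookieless="false"
                  timeout="20" />
  </system.web>
</configuration>
```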

How can I build something like Amazon S3 in Perl? [closed]

Closed 10 years ago.
I am looking to code a file storage application in Perl similar to Amazon S3. I already have an Amazon S3 clone that I found online called ParkPlace, but it's in Ruby, is old, and isn't built for high loads. I am not really sure which modules and programs I should use, so I'd like some help picking them out. My requirements are listed below (yes, I know there are lots, but I could start simple and add more once I get it going):
Easy API implementation for client-side apps (maybe REST?)
Centralized database server for the USERDB (maybe PostgreSQL?)
Logging of all connections, bandwidth used, pretty much everything, to a centralized server (maybe PostgreSQL again?)
Easy server-side configuration (config file(s) stored on the servers)
Web-based control panel for admin(s) and user(s) to show logs (could work by just running queries against the databases)
Fast
High uptime
Low memory usage
Some sort of load distribution/load balancer (maybe DNS-based, or Pound, or Perlbal, or something else?)
Maybe a cache of some sort (memcached or Perlbal or something else?)
Thanks in advance
Perhaps MogileFS may help?
MogileFS homepage
Contributing to MogileFS
Google code repo (however note sixapart repo in contributing link).
Also, there was a recent discussion about MogileFS performance on the Google Groups mailing list which may be of interest to you.
/I3az/
Here I found a Ruby implementation:
https://github.com/jubos/fake-s3
Hope that helps,
Mike
I have created a super simple server; see the put routine in Photo::Librarian::Server.pm. It supports the s3cmd put of a file, nothing more for now.
https://github.com/h4ck3rm1k3/photo-librarian-server
https://github.com/h4ck3rm1k3/photo-librarian-server/commit/837706542e57fbbed21549cd9e59257669d0220c