I have a fairly extensive application running on Azure.
As part of the operational management of the application, I have a set of Application Insights instances to provide monitoring, tracking and logging.
The overall application consists of three ASP.NET MVC websites and a Worker Role. Additionally, the overall application is deployed as three instances ("environments"): QA, UAT and Production.
I noticed a while back that one of the App Insights instances (for the same MVC website across all environments) was quite heavy on the number of Dependency data points being collected. Specifically, this is causing me to exceed the 5 million data points included in the monthly quota.
Noting this, I changed the Web Tests (for availability) to hit a different endpoint (one that doesn't invoke the dependencies).
However, I am still seeing the old endpoint being hit.
Digging a little further into this, I believe that I have an old rogue Web Test that is still active, and still hitting the old endpoint.
The issue is that I can't find it.
Is there a way to query the subscription to find it, even if only via the PowerShell cmdlets? I've trawled through the portal and cannot see it anywhere.
Could this be the "Proactive Detection" feature? If so, can you change the endpoint it monitors?
You should definitely open a support ticket with us. Check out the dev support options and look at either option 3 or 4. It's preferred that you open a support ticket via Azure with a support plan (option 3) if you have one, but if you don't have a support plan, check out option 4 and you can get in contact with us that way.
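That said, for the narrower question about querying the subscription: availability web tests are ordinary ARM resources of type Microsoft.Insights/webtests, so listing them across the whole subscription should surface any rogue test. A rough sketch using the Python management SDK (azure-identity + azure-mgmt-resource; the subscription ID is a placeholder):

# Sketch only -- assumes you are signed in (e.g. via `az login`).
from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import ResourceManagementClient

client = ResourceManagementClient(DefaultAzureCredential(), "<subscription-id>")

# Availability web tests live in the subscription as Microsoft.Insights/webtests resources.
for test in client.resources.list(filter="resourceType eq 'Microsoft.Insights/webtests'"):
    print(test.name, "->", test.id)

Get-AzResource -ResourceType "Microsoft.Insights/webtests" should give you the same list from PowerShell.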
My Teams app:
multi-tenant
deployed using Teams Toolkit to Azure Storage, CDN enabled with a Custom Domain
in alpha use by an internationally distributed organisation (a third party, not me), with users around the world
the app's functionality works fine, including multi-tenancy
in rapid development, so code updates are frequent; manifest updates are very rare
Problem:
I frequently update the app's code and deploy the update to Azure using Teams Toolkit
when I do this, users often report 'blank tabs' for a period of time, sometimes many hours. They see the tab menu but the tab contents are simply blank. Purging the CDN doesn't seem to help.
seems most common in the Teams desktop app, but is also reported in the browser and the mobile Teams app
I think this may be an issue of the deployed .js files (each of which gets a new filename) not being available to the installed app; I can sometimes reproduce it, but very unreliably. Other times I can access the app successfully from different locations (using a VPN to emulate location), using a user account on the client's AAD.
Previously the app's Custom Domain was managed on Cloudflare's proxy.
I disabled this and implemented Azure CDN.
Users continue to report the problem.
This is very poor user experience.
Does anyone have experience of this or hypotheses on what may be happening?
Thanks.
I would suggest testing one thing first: manually deploy a new code change to Azure Storage, with the same storage-CDN-custom domain setup (a rough sketch of such a manual deploy is below).
See if this also causes the hours-long delay symptom.
If the issue is reproducible this way, it may indicate that the Azure Storage-CDN configuration needs to be tuned.
Otherwise, please share the result; it will help narrow down the root cause.
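To be concrete, something like this is what I mean by a manual deploy (a sketch only, using the azure-storage-blob and azure-mgmt-cdn packages; the container, profile and endpoint names are placeholders, not taken from your setup). One thing worth checking while you are at it is the cache-control on index.html: if the HTML shell stays cached for hours while the hashed .js files it references have been replaced, that alone could produce the blank-tab symptom.

# Sketch only: upload a freshly built index.html and purge the CDN endpoint.
from azure.identity import DefaultAzureCredential
from azure.mgmt.cdn import CdnManagementClient
from azure.storage.blob import BlobServiceClient, ContentSettings

storage = BlobServiceClient.from_connection_string("<storage-connection-string>")
index_blob = storage.get_blob_client(container="$web", blob="index.html")

with open("build/index.html", "rb") as f:
    index_blob.upload_blob(
        f,
        overwrite=True,
        # Keep the HTML shell barely cached so it always points at the current hashed bundles.
        content_settings=ContentSettings(content_type="text/html", cache_control="no-cache"),
    )

cdn = CdnManagementClient(DefaultAzureCredential(), "<subscription-id>")
cdn.endpoints.begin_purge_content(
    "<resource-group>", "<cdn-profile>", "<cdn-endpoint>",
    {"content_paths": ["/*"]},
).result()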
I need some help with deciding on the architecture of my project (a web app for unlocking discounts). I am first planning on creating the website (React for the front-end, Django for the back-end, and a PostgreSQL database). In the future, I may create a mobile app too for Android & iOS (unsure which front-end framework yet).
So I have decided I want the front-end and back-end to be completely separate, with the back-end exposed as a REST API. This will allow me to avoid creating multiple back-ends for the mobile apps.
But after researching, I have found that this could be quite expensive in terms of server costs. This is a new business and I am the only developer, so funding isn't high. So I was thinking that I could deploy the front-end & back-end on the same server, but as separate apps that talk via nginx?
I have 4 questions about this:
If I do this, would it still be possible to reuse the back-end as a REST API for the mobile apps, or is that a no because it's linked to the web front-end?
If it is possible, would I be able to host the mobile front-end on the same server (so have everything hosted on one server)?
Is this a stupid idea - would I just be better off deploying everything onto separate servers in the long run (to reduce load)?
Should I just worry about this in the future, and for now just deploy the separated web front-end & back-end to the same server?
I have never really deployed anything into a real-life production environment, so I'm sorry if my questions seem silly. I haven't started development yet, but I want to think about scalability & future extensibility before I start. Thank you.
Nowadays I'd go with a serverless approach. Instead of having servers to maintain, you can focus on your app's functionality.
There are a lot of options. You can check, for example, AWS Amplify (https://aws.amazon.com/amplify/) or Netlify (https://www.netlify.com/) for a more "full-stack" approach.
On AWS you can also keep the projects separate, with your backend in Lambdas and your frontend served through S3 + CloudFront. Again, you don't have servers to care about.
These are only examples of how you can solve your problem without servers, but to answer your questions:
You can reuse your APIs regardless of how your app is deployed; it depends more on how you design them (see the sketch after this list);
Yes, you can host everything on a single server if you want, but I really don't recommend that;
If you don't want to pay for 24/7 servers, you can go for a serverless approach;
As I said before, you can do what you want without worrying about servers.
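To make the first point concrete, here is a minimal sketch of a reusable endpoint in plain Django (the Discount model and its fields are made up for illustration). The same URL returns JSON whether it is called by the React site, a future mobile app, or curl:

# views.py -- sketch only; the Discount model and its fields are hypothetical.
from django.http import JsonResponse

from .models import Discount


def discount_list(request):
    # The backend neither knows nor cares what kind of client is asking;
    # it just returns JSON, which is what makes it reusable for mobile later.
    data = [
        {"code": d.code, "percent_off": d.percent_off}
        for d in Discount.objects.all()
    ]
    return JsonResponse({"results": data})

# urls.py would simply map something like path("api/discounts/", discount_list).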
Your main focus is to keep costs low while still implementing a good solution. My suggestion would be to look at AWS Lightsail. Lightsail offers fixed-price VMs which you can configure yourself, starting from $3.50/month at the time of writing this answer.
My answers to your questions
If I do this, would it still be possible to reuse the back-end as a REST API for the mobile apps, or is that a no because it's linked to the web front-end?
Yes, it's possible. Keep the frontend and backend in separate repos, and deploy them as Docker containers on the same server: one frontend container and one backend container that can communicate with each other.
If it is possible, would I be able to host the mobile front-end on the same server (so have everything hosted on one server)?
For mobile, you will develop a mobile application which you publish to the Play Store / App Store or deploy directly to a smartphone. The app then calls the backend service and gets JSON in response, so you have to design your backend so that it can serve both kinds of client.
Is this a stupid idea - would I just be better off deploying everything onto separate servers in the long run (to reduce load)?
From a long-term design perspective, you need to consider factors like scalability, maintainability and security, so it's always better to have multiple servers to avoid a single point of failure.
Should I just worry about this in the future, and for now just deploy the separated web front-end & back-end to the same server?
My advice is to think carefully now so you don't get nightmares in the future. Invest your time now and design a stable solution that will help you in the long term. You mentioned that it's a small business, but your solution should still be able to handle growth easily.
My suggestion
As suggested by Paulo, S3 + CloudFront looks good for the frontend. You can get one year of free CDN using Lightsail.
For the backend, you should have at least 2 (I would suggest a minimum of 3) servers and deploy the backend Docker containers on them. You can use Docker Compose to automate the deployment, and if you want orchestration, Docker Swarm mode is the best option. With this you avoid a single point of failure. You can get very affordable servers from Amazon Lightsail.
For the database, you need to make it scalable. To ensure scalability and high availability you should have a replicated DB; a minimum of 3 DB instances is a good starting point. MongoDB is a good choice here: with simple configuration you can enable replication with 1 primary and 2 secondary instances (see the sketch after this list).
Put a load balancer in front of your servers to distribute the load. To save cost you can configure the load balancer yourself, but that adds a learning curve and you will have to spend time understanding the details. The better solution is to use a managed load balancer; Lightsail offers one for $18/month at the time of writing this answer.
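To illustrate the replication point, this is roughly how the backend would talk to such a 3-node replica set from Python with pymongo (the host names, replica-set name, database and collection are placeholders):

# Sketch only: connect to a hypothetical 3-member replica set.
from pymongo import MongoClient

client = MongoClient(
    "mongodb://db1:27017,db2:27017,db3:27017/"
    "?replicaSet=rs0&readPreference=secondaryPreferred"
)
db = client["discounts"]

# Writes always go to the primary; the driver fails over automatically if it goes down.
db.offers.insert_one({"code": "WELCOME10", "percent_off": 10})

# Reads may be served by a secondary thanks to the readPreference above.
print(db.offers.count_documents({}))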
The solution above is cost-effective, will give you long-term benefits, and lets you estimate costs based on your design.
Obviously this can still be improved, but I have tried to cover the necessary aspects of the question.
We are developing some REST APIs for internal use. To test these microservices, we are toying with the idea of giving every service a sandbox mode so we can do integration tests that are as close as possible to the real deal.
To see if this path is worth trying, we are looking for documentation / best practices on how to manage such a sandbox and how to implement it internally. When we search for the keywords Sandbox, REST API and Best Practices, we only find how to implement things as a consumer of existing sandboxes.
So does anyone have documentation / links on how to tackle this problem, and what the pros and cons of the different approaches are?
Kr,
Thomas
I'd say there are two ways to proceed:
Basic: keep a separate sandbox instance of each service. You always deploy new code to this instance first and run automated/manual tests to verify that everything works. The datastore could be a snapshot of the production data or artificial testing data. I would rather have a snapshot, but it depends on whether that is applicable in your particular case (privacy etc.).
Advanced: I spotted this technique in the Facebook Marketing API, which provides an interface to set up and launch advertising campaigns. They didn't provide a sandbox API for testing purposes (at least not last year, when the system I was working on was integrating with Facebook). However, if you use the keyword "test" in the name of a campaign or an ad set (key entities in the ad world), they will never launch and spend your money. You can try to extend this concept to your particular domain and run tests on (or very close to) your production; a rough sketch of the idea is below.
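This is how that idea could look inside one of your own services (the entity shape, the ad_platform client and the prefix are all hypothetical):

TEST_PREFIX = "test"

def launch_campaign(campaign: dict, ad_platform) -> dict:
    """Run the full production code path, but skip the irreversible /
    billable side effect for entities flagged as test data."""
    is_sandbox = campaign["name"].lower().startswith(TEST_PREFIX)
    if not is_sandbox:
        ad_platform.launch(campaign)  # the real external call
    return {"id": campaign["id"], "status": "launched", "sandbox": is_sandbox}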
Hope this helps
I've been ramping up on Azure Mobile Services over the past week. There are definitely some pros and cons to using them over a standard Azure Web Site where I can write APIs that hit a SQL DB, etc.
One of the biggest negatives I see is developing the server-side code and DB structures ON THE SERVER. I've watched lots of videos from launch and beyond, and read lots of blog posts about tips and tricks around WAMS, but nobody seems to talk about the downside of developing the code (server scripts) and database structures on the server, at your live URL.
This is all great for developing the first version of your mobile app and associated mobile services. But once it's all deployed, how do you ever build version 2? Real apps hitting real APIs and data, but now I want to develop/change/play with the server scripts and database schema?
With Azure Web Sites, I can develop locally and only publish code and DB changes to the server on my schedule.
Have any of you seen or heard of the "v2 development story" around Azure Mobile Services?
The only thing I can think of would be to create another set of tables and APIs around them, most likely "virtual tables" that allow me to write APIs against the original set of data. That seems like a huge hassle, since the client code would now have to know about both the original set of tables and the new set... and that's only for v2...
Thanks for any thoughts / insight.
You should have two services, one dev and one production, and use scripts to promote your code from dev to production (pretty similar to how most workflows go when moving from a test setup to a production one).
http://channel9.msdn.com/Events/Build/2013/3-511
Ideally, I'd like to use Azure Table storage as the provider, but SQL Azure will also work. Anything I've dug up so far is over a year old and uses deprecated approaches, i.e. outdated code samples, SDKs and IDEs.
As the title states, this would be applied to an MVC2 app running in Azure. Examples, code, links, etc. do not necessarily have to be for MVC. Anything related to a .NET 4.0 web app using Forms Authentication on Azure will do.
Microsoft originally released a set of sample providers with the PDC08 SDK - but these definitely are not recommended for commercial use.
Recently this project has produced some new ones - http://azureproviders.codeplex.com/ - I'd recommend going with that one as it is "live code" - you might also be able to contribute something back to it.
If you do use these providers, please be aware that Azure charges per transaction - at a base rate of $0.01 per 10000 transactions - and that the logic within these providers can cause "quite a few" transactions to occur. So if your site is busy and has a lot of membership activity, then it could work out quite expensive to operate.
If you are using SQL Azure membership, then the membership SQL is standard - http://support.microsoft.com/kb/2006191 - the only difference in the ASP.NET SQL scripts is in Session storage (since Session uses SQL Agent to clear expired sessions, and SQL Agent is not supported on SQL Azure).
Personally, I've used Table storage for test/demo sites - but for anything "real" I've moved towards SQL Azure - it's easier to query, to run reports against, to back up, etc.
Unfortunately, unless you roll your own provider, the only sample I have seen is the outdated one you mentioned. For user authentication (RoleProvider), it is not too bad (i.e. no bugs I have heard about). However, for Session state, it has some issues. I don't think it does any sort of encryption, so the passwords might be stored in plaintext. Worst case, you could at least use it as a starting point for your own.
A quick look around and I can't even find the 'Additional Samples' anymore. They might have been lost when Code Gallery did an update a while back. I know it is still used in http://phluffyfotos.codeplex.com, so you could pull it from the source there at least.
I would not use ATS (Azure Table Storage) Forms authentication if your site is going to have a lot of authentication requests, because of the associated transaction cost (even token authorization requires a check against ATS).
I would use Forms Authentication against SQL Azure with the standard SqlMembershipProvider.
It works just fine. I've manually migrated the necessary aspnet tables & stored procs over to SQL Azure from a local SQL Server instance without problems. Just update the aspnet_SchemaVersions table to have this content:
Feature           CompatibleSchemaVersion   IsCurrentVersion
common            1                         1
membership        1                         1
personalization   1                         1
profile           1                         1
role manager      1                         1