How to programmatically change tier of Azure SQL Database - powershell

We have a large SQL Database running on Azure which is only generally in use during normal office hours, although from time to time, overtime/weekend staff will require performant access to the database.
Currently, we run the database on the S3 Tier during office hours, and reduce it to S0 at all other times.
I know that there are a number of example PowerShell scripts that can be used together with automation tasks to automatically modify the database tiers according to a predefined timetable. However, we would like to control it from within our own .Net application. The main benefit is that this would allow us to give control to admin staff to switch up the database tier during out-of-hours as required without the need for technical staff to get involved.
There are a number of articles/videos on the Microsoft site that mention "scaling up/down" (as opposed to "scaling out/in", i.e. creating/removing additional shards), but the sample code provided seems to deal exclusively with sharding, and not with vertical "scaling up/down".
Is this possible? Can anyone point me in the direction of any relevant resources?

You can, yes. You have to use the REST API to call the Azure management endpoints and update the database.
The description and parameters of the PUT required for the update are explained here -> Update Database Details
You can change the tier programmatically from there.

Yes, you can change database tiers by using the REST API to call the Azure endpoints and update the database.
The parameters to be used for the PUT are explained on this MSDN page: Update Database
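For illustration, here is a rough sketch of the same idea against the current Azure Resource Manager endpoint, written in Python with the requests library. The linked pages describe the older Service Management API; the api-version value, the token acquisition, and all resource names below are assumptions on my part, so check the current documentation before relying on them:
import requests

# Placeholders for your own subscription and resources.
SUBSCRIPTION_ID = "00000000-0000-0000-0000-000000000000"
RESOURCE_GROUP = "my-resource-group"
SERVER_NAME = "my-sql-server"
DATABASE_NAME = "my-database"
TOKEN = "<Azure AD bearer token for https://management.azure.com/>"

url = (
    f"https://management.azure.com/subscriptions/{SUBSCRIPTION_ID}"
    f"/resourceGroups/{RESOURCE_GROUP}/providers/Microsoft.Sql"
    f"/servers/{SERVER_NAME}/databases/{DATABASE_NAME}"
)

# PATCH the database resource with the desired SKU (here: scale up to Standard S3).
# The scale operation itself completes asynchronously on the server side.
response = requests.patch(
    url,
    params={"api-version": "2021-11-01"},  # assumption; use the documented current version
    headers={"Authorization": f"Bearer {TOKEN}"},
    json={"sku": {"name": "S3", "tier": "Standard"}},
)
response.raise_for_status()
The same PATCH can be issued from a .NET application with HttpClient (or via the Azure SDKs); the endpoint and the payload shape are the relevant parts here.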

This can now be done in multiple ways besides calling the REST API directly.
https://learn.microsoft.com/en-us/azure/azure-sql/database/single-database-scale
See Azure CLI example:
az sql db update -g mygroup -s myserver -n mydb --edition Standard --capacity 10 --max-size 250GB
See PowerShell example:
Set-AzSqlDatabase -ResourceGroupName "ResourceGroup01" -DatabaseName "Database01" -ServerName "Server01" -Edition "Standard" -RequestedServiceObjectiveName "S0"

I just tried doing this via T-SQL and it worked; S0 corresponds to 10 DTUs (note that the scale operation can take a long time):
ALTER DATABASE [mydb_name] MODIFY (EDITION = 'Standard', SERVICE_OBJECTIVE = 'S0')
Reference: https://www.c-sharpcorner.com/blogs/change-the-azure-sql-tier-using-sql-query

Related

Setting up MongoDB environment requirements for Parse Server

I have my instance running and am able to connect remotely; however, I'm stuck on where to set this parameter to false, since the documentation states that the default is true:
failIndexKeyTooLong
Setting 'failIndexKeyTooLong' is a three-step process:
You need to go to the command console in the Tools menu for the admin database of your database instance. This command will only work on the admin database.
Once there, pick any command from the list and it will give you a short JSON text for that command.
Erase the command they provide (I chose 'ping') and enter the following JSON:
{
"setParameter" : 1,
"failIndexKeyTooLong" : false
}
Note if you are using a free plan at MongoLab: this will NOT work; it only works with paid plans. With the free plan you will not even see the admin database. HOWEVER, I contacted MongoLab and here is what they suggest:
Hello,
First of all, welcome to MongoLab. We'd be happy to help.
The failIndexKeyTooLong=false option is only necessary when your data
include indexed values that exceed the maximum key value length of
1024 bytes. This only occurs when Parse auto-indexes certain
collections, which can actually lead to incorrect query results. Parse
has updated their migration guide to include a bit more information
about this, here:
https://parse.com/docs/server/guide#database-why-do-i-need-to-set-failindexkeytoolong-false-
Chances are high that your migration will succeed without this
parameter being set. Can you please give that a try? If for any reason
it does fail, please let us know and we can help you on potential next
steps.
Our Dedicated and Shared Cluster plans
(https://mongolab.com/plans/pricing/) do provide the ability to toggle
this option, but because our free Sandbox plans are running on shared
server processes, with other Sandbox users, this parameter is not
configurable.
When launching your MongoDB server, you can set this parameter to false:
mongod --setParameter failIndexKeyTooLong=false
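If you are running your own mongod rather than a locked-down shared plan, the same parameter can also be set at runtime through a driver; here is a minimal sketch with pymongo, assuming a connection with admin privileges (the connection string is a placeholder, and note that failIndexKeyTooLong only exists on older MongoDB versions):
from pymongo import MongoClient

# Placeholder connection string; the user must be allowed to run
# setParameter against the admin database.
client = MongoClient("mongodb://admin:secret@localhost:27017/admin")

# Driver-side equivalent of { "setParameter": 1, "failIndexKeyTooLong": false }
client.admin.command({"setParameter": 1, "failIndexKeyTooLong": False})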
I have written an article that helps you set up Parse Server and all its dependencies on your own server:
https://medium.com/@jcminarro/run-parse-server-on-your-own-server-using-digitalocean-b2a7d66e1205

meteor: use different database for each user

I currently assign a mongodb to my meteor app using the env variable
"MONGO_URL": "mongodb://localhost:27017/dbName" when I start the meteor instance.
So all data gets written to the mongo database with the name "dbName".
I am looking for a way to individually set the dbName for each customer upon login in order to separate their data into different databases.
This is generally unsupported, as the database is defined at startup. However, this thread offers a possible solution:
https://forums.meteor.com/t/switch-database-while-meteor-is-running/4361/6
var database = new MongoInternals.RemoteCollectionDriver("<mongo url>");
MyCollection = new Mongo.Collection("collection_name", { _driver: database });
This would allow you to define the database name in the mongo url, but it would require a fair bit of extra work to redefine your collections on a customer-by-customer basis.
Here's another approach that will make your life eternally easier:
Create a generic site with no accounts at mysite.com
When they login at mysite.com, figure out what site they actually belong to and redirect them to customerName.mysite.com and log them in there
Run a separate instance of Meteor configured for a different mongo at each site
nginx might help you with the above.
It is generally good practice to run separate DBs when offering a B2B
solution.
That's a matter of opinion that depends heavily on the platform. Many SaaS providers would argue that point.

What is the best practice to handle Multitenant security in Breeze?

I'm developing an Azure application using this stack:
(Client) Angular/Breeze
(Server) Web API/Breeze Server/Entity Framework/SQL Server
With every request I want to ensure that the user actually has the authorization to execute that action using server-side code. My question is how to best implement this within the Breeze/Web API context.
Is the best strategy to:
Modify the Web API Controller and try to analyze the contents of the Breeze request before passing it further down the chain?
Modify the EFContextProvider and add an authorization test to every method exposed?
Move the security all into the database layer and make sure that a User GUID and Tenant GUID are required parameters for every query and only return relevant data?
Some other solution, or some combination of the above?
If you are using SQL Azure, then one option is to use SQL Azure Federations to do exactly that.
In very simple terms: if you have a TenantId column in a table that stores data from multiple tenants, then before you execute a query like SELECT Col1 FROM Table1, you execute a USE FEDERATION ... statement to restrict the query results to a particular TenantId only, so you don't need to add WHERE TenantId = @TenantId to your query.
USE FEDERATION example: http://msdn.microsoft.com/en-us/library/windowsazure/hh597471.aspx
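Purely to illustrate the shape of that statement (the federation name, distribution key, and connection string below are made up, and this is a sketch rather than a drop-in recipe; in a Breeze/EF application you would run the same T-SQL over your ADO.NET connection), here it is driven from Python via pyodbc:
import pyodbc  # assumes an ODBC driver for SQL Azure is installed

# Placeholder connection string pointing at the federation root database.
CONNECTION_STRING = "Driver={SQL Server Native Client 11.0};Server=tcp:myserver.database.windows.net;Database=MyRootDb;Uid=myuser;Pwd=mypassword"

conn = pyodbc.connect(CONNECTION_STRING, autocommit=True)
cursor = conn.cursor()

tenant_id = 42  # e.g. resolved from the authenticated user

# Route this connection to the federation member holding the tenant's data.
# With FILTERING = ON, queries against federated tables are automatically
# restricted to this TenantId, so no WHERE TenantId = @TenantId is needed.
cursor.execute(
    f"USE FEDERATION CustomerFederation (TenantId = {int(tenant_id)}) "
    "WITH RESET, FILTERING = ON"
)
cursor.execute("SELECT Col1 FROM Table1")
rows = cursor.fetchall()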
Note that using SQL Azure Federations comes with a lot of strings attached when it comes to building a DB schema. One of the best blog posts I have found about it is http://blogs.msdn.com/b/cbiyikoglu/archive/2011/04/16/schema-constraints-to-consider-with-federations-in-sql-azure.aspx.

stubbing data in REST apis for large system/integration tests

The Problem
Say I've got a cool REST resource /account.
I can create new accounts
POST /account
{accountName:"matt"}
which might produce some json response like:
{account:"/account/matt", accountName:"matt", created:"November 5, 2013"}
and I can look up accounts created within a date range by calling:
GET /account?created-range-start="June 01, 2013"&created-range-end="December 25, 2013"
which might also produce something like:
{accounts: [{account:"/account/matt", accountName:"matt", created:"November 5, 2013"}, {...}, ...]}
Now, let's say I want to set up some sample data and write some tests against the GET /account resource within some specified creation date range.
For example I want to somehow insert the following accounts into the system
name=account1, created=January 1, 2010
name=account2, created=January 2, 2010
name=account3, created=December 29, 2010
name=account4, created=December 30, 2010
then call
GET /account?created-range-start="January 2, 2010"&created-range-end="December 29, 2010"
and verify that only accounts 2 and 3 are returned.
How should I insert these sample accounts to write my tests?
Possible Solutions
1) I could use inversion of control and allow the user to specify the creation date for new accounts.
POST /account
{account:"matt", created="June 01, 2013"}
However, even if the created field were optional, I don't like this approach because I may not want to allow my users the ability to set the creation date of their account. I surely need to be able to do it for testing, but having that functionality as part of the public API seems wrong to me. Maybe I want to give a $5 credit to anyone who joined prior to some particular day; if they can specify their create date, users can game the system. Not good.
2) I could add one or more testing configuration resources
PUT /account/creationDateTimestampProvider
{provider="DefaultProvider"}
or
PUT /account/creationDateTimestampProvider
{provider="FixedDateProvider", date="June 01, 2013"}
This approach affords me the ability to lock down these resources with security constraints so that only my test context can call them, but it also necessarily has side effects on the system that may become a pain to manage, especially if I have a bunch of backdoor configuration resources.
3) I could interact directly with the database circumventing the REST api altogether to set my sample data.
INSERT INTO ACCOUNTS ...
GET /account?...
However, this can put the system into states that the REST API itself may not allow, and as the DB model evolves, maintaining these SQL scripts might also be a pain.
So... how do I test my GET /account resource? Is there another way I'm not thinking of that is more elegant?
There are a lot of ways to do this, and you've come up with some solid (though maybe not perfect for your situation) solutions.
In the setup for the test, I would spin up an in-memory database like HSQLDB (there are others) and do the inserts. The test configuration will inject the appropriate database configuration into your service provider class. Run the tests, and then shut the database down on teardown.
This post provides a good example at least for the persistence side of things.
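HSQLDB is a Java in-memory database; the same pattern sketched in Python, with sqlite standing in purely for illustration (seed in setUp, run the query your service would run, throw everything away in tearDown):
import sqlite3
import unittest

class AccountQueryTest(unittest.TestCase):
    def setUp(self):
        # Fresh in-memory database per test, analogous to spinning up HSQLDB.
        self.db = sqlite3.connect(":memory:")
        self.db.execute("CREATE TABLE accounts (name TEXT, created TEXT)")
        self.db.executemany(
            "INSERT INTO accounts VALUES (?, ?)",
            [("account1", "2010-01-01"),
             ("account2", "2010-01-02"),
             ("account3", "2010-12-29"),
             ("account4", "2010-12-30")],
        )
        # In the real test, the configuration for this database would be
        # injected into the service layer that backs GET /account.

    def test_created_range_returns_only_matching_accounts(self):
        rows = self.db.execute(
            "SELECT name FROM accounts WHERE created BETWEEN ? AND ?",
            ("2010-01-02", "2010-12-29"),
        ).fetchall()
        self.assertEqual([r[0] for r in rows], ["account2", "account3"])

    def tearDown(self):
        self.db.close()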
Incidentally, do not change the API of your service just to help facilitate a test. Maybe I misunderstood and you aren't anyway, but I thought I would mention just in case.
Hope that helps.
For what it's worth, these days I'm primarily using the second approach for most of my system level (black box) tests.
I create backdoor admin / test apis that have security requirements which only my system tests can access. These superpower apis allow me to seed data. I try to limit the scope of these apis as much as possible so they are not overly coupled to the specific implementation details but are flexible enough to allow specifying whatever is needed for the desired seed data.
The reason I prefer this approach to the database solution that Vidya provided is that my tests aren't coupled to the specific data storage technology. If I decide to switch from Mongo to Dynamo or something like that, using an admin API frees me from having to update all of my tests; instead, I only need to update the admin API and its implementation.
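For what it's worth, here is a hedged sketch of what such a system test looks like for the accounts example, using Python's requests. The /admin/test/accounts endpoint, its payload, and the admin token are all hypothetical; the point is the shape of the test, not the exact API:
import requests

BASE = "https://test-env.example.com"  # placeholder test environment
ADMIN_AUTH = {"Authorization": "Bearer <admin/test-only token>"}  # hypothetical

def seed_account(name, created):
    # Backdoor seeding endpoint that only the system tests can reach.
    r = requests.post(f"{BASE}/admin/test/accounts",
                      headers=ADMIN_AUTH,
                      json={"accountName": name, "created": created})
    r.raise_for_status()

def test_created_range_returns_only_matching_accounts():
    for name, created in [("account1", "January 1, 2010"),
                          ("account2", "January 2, 2010"),
                          ("account3", "December 29, 2010"),
                          ("account4", "December 30, 2010")]:
        seed_account(name, created)

    r = requests.get(f"{BASE}/account", params={
        "created-range-start": "January 2, 2010",
        "created-range-end": "December 29, 2010",
    })
    r.raise_for_status()
    names = {a["accountName"] for a in r.json()["accounts"]}
    assert names == {"account2", "account3"}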

Azure Rest Service to create OS disk

I'm trying to add a disk to a Subscription using the Add Disk REST service ( http://msdn.microsoft.com/en-us/library/windowsazure/jj157178.aspx )
I tried pretty much every combination explained but no matter what I do, the disk is listed as a Data Disk.
Trying to use Fiddler to inspect what the Azure PowerShell cmdlets (https://www.windowsazure.com/en-us/manage/downloads/) send just results in an error.
According to MS, you should specify HasOperatingSystem, but you don't supply it when using Microsoft's PS cmdlets. If you do a List Disks (http://msdn.microsoft.com/en-us/library/windowsazure/jj157176) it should return this too, but the only way to distinguish Data disks from OS disks is whether "OS" is null or contains "Windows"/"Linux". Given that information I tried creating the disk with/without OS and/or HasOperatingSystem in all combinations, and no matter what, it always ends up being a Data disk.
The Microsoft PowerShell cmdlets allow using both HTTP and HTTPS in the URI, so I tried both of those too.
Does anyone have a WORKING example of the xml to send, to create an OS disk?
<Disk xmlns="http://schemas.microsoft.com/windowsazure">
<HasOperatingSystem>true</HasOperatingSystem>
<Label>d2luZ3VuYXY3MDEtbmF2NzAxLTAtMjAxMjA4MjcxNTA5NTU=</Label>
<MediaLink>http://winguvhd.blob.core.windows.net/nav701/nav701-0-20120827150955_osdisk.vhd</MediaLink>
<Name>wingunav701-nav701-0-20120827150955</Name>
<OS>Windows</OS>
</Disk>
As I mentioned in my comment, there is indeed an issue with the documentation.
Try this as your request payload. This was provided to me by the person from Microsoft who wrote the Windows Azure PowerShell cmdlets:
<Disk xmlns:i="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://schemas.microsoft.com/windowsazure">
<OS>Windows</OS>
<Label>mydisk.vhd</Label>
<MediaLink>https://vmdemostorage.blob.core.windows.net/uploads/mydisk.vhd</MediaLink>
<Name>mydisk.vhd</Name>
</Disk>
I just tried using the XML above, and I can see an OS Disk in my subscription.
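For completeness, here is a rough sketch of actually sending that payload to the classic Service Management Add Disk endpoint from Python with requests; the x-ms-version value and the certificate file names are assumptions, and the classic API authenticates with a management certificate:
import requests

SUBSCRIPTION_ID = "00000000-0000-0000-0000-000000000000"  # placeholder

payload = """<Disk xmlns:i="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://schemas.microsoft.com/windowsazure">
  <OS>Windows</OS>
  <Label>mydisk.vhd</Label>
  <MediaLink>https://vmdemostorage.blob.core.windows.net/uploads/mydisk.vhd</MediaLink>
  <Name>mydisk.vhd</Name>
</Disk>"""

response = requests.post(
    f"https://management.core.windows.net/{SUBSCRIPTION_ID}/services/disks",
    data=payload,
    headers={"x-ms-version": "2012-08-01", "Content-Type": "application/xml"},
    # Management certificate in PEM form; file names are placeholders.
    cert=("management-cert.pem", "management-key.pem"),
)
response.raise_for_status()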