I'm trying to add a disk to a Subscription using the Add Disk REST service (http://msdn.microsoft.com/en-us/library/windowsazure/jj157178.aspx).
I tried pretty much every combination explained but no matter what I do, the disk is listed as a Data Disk.
Trying to use Fiddler to inspect what the Azure PowerShell cmdlets (https://www.windowsazure.com/en-us/manage/downloads/) actually send just results in an error.
According to Microsoft you should specify HasOperatingSystem, yet you don't supply it when using Microsoft's PowerShell cmdlets. If you do a List Disks (http://msdn.microsoft.com/en-us/library/windowsazure/jj157176) the service should return this element too, but the only way to distinguish data disks from OS disks is whether "OS" is null or contains "Windows"/"Linux". Given that information, I tried creating the disk with and without OS and/or HasOperatingSystem, in every combination, and no matter what it always ends up as a data disk.
The Microsoft PowerShell cmdlets allow both HTTP and HTTPS in the URI, so I tried both of those too.
Does anyone have a WORKING example of the xml to send, to create an OS disk?
<Disk xmlns="http://schemas.microsoft.com/windowsazure">
<HasOperatingSystem>true</HasOperatingSystem>
<Label>d2luZ3VuYXY3MDEtbmF2NzAxLTAtMjAxMjA4MjcxNTA5NTU=</Label>
<MediaLink>http://winguvhd.blob.core.windows.net/nav701/nav701-0-20120827150955_osdisk.vhd</MediaLink>
<Name>wingunav701-nav701-0-20120827150955</Name>
<OS>Windows</OS>
</Disk>
As I mentioned in my comment, there is indeed an issue with the documentation.
Try this as your request payload. It was provided to me by the person at Microsoft who wrote the Windows Azure PowerShell cmdlets:
<Disk xmlns:i="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://schemas.microsoft.com/windowsazure">
<OS>Windows</OS>
<Label>mydisk.vhd</Label>
<MediaLink>https://vmdemostorage.blob.core.windows.net/uploads/mydisk.vhd</MediaLink>
<Name>mydisk.vhd</Name>
</Disk>
I just tried using the XML above, and I can see an OS Disk in my subscription.
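For completeness, here is roughly how that payload can be posted to the Add Disk endpoint (a sketch only: the subscription ID, certificate file names, and the x-ms-version value are placeholders/assumptions to check against the Add Disk documentation):
# Sketch: POST the Add Disk payload to the Service Management API.
# Assumes a PEM-encoded management certificate and key; the subscription ID,
# file names, and x-ms-version value are placeholders to adjust.
import requests

SUBSCRIPTION_ID = "<subscription-id>"
URL = "https://management.core.windows.net/%s/services/disks" % SUBSCRIPTION_ID

PAYLOAD = """<Disk xmlns:i="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://schemas.microsoft.com/windowsazure">
<OS>Windows</OS>
<Label>mydisk.vhd</Label>
<MediaLink>https://vmdemostorage.blob.core.windows.net/uploads/mydisk.vhd</MediaLink>
<Name>mydisk.vhd</Name>
</Disk>"""

response = requests.post(
    URL,
    data=PAYLOAD,
    headers={"x-ms-version": "2012-08-01",  # version assumed; use a current one
             "Content-Type": "application/xml"},
    cert=("management-cert.pem", "management-key.pem"),  # client-certificate auth
)
print(response.status_code, response.text)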
I am trying to use magic-wormhole to receive a file.
My partner and I are in different time zones, however.
If my partner types wormhole send filename, for how long will this file persist (i.e. how much later can I type wormhole receive keyword and still get the file)?
From the "Timing" section in the docs:
The program does not have any built-in timeouts, however it is expected that both clients will be run within an hour or so of each other ... Both clients must be left running until the transfer has finished.
So... maybe? Consider using some cloud storage instead, depending on the file. You could also encrypt it before uploading it to cloud storage if the contents of the file are private.
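If you do go the cloud-storage route and the file is private, one option is to encrypt it first; a minimal sketch using the third-party cryptography package (the file names are examples, and the key still has to reach your partner some other way):
# Sketch: symmetric encryption of a file before uploading it to cloud storage.
# Requires the "cryptography" package; file names are examples.
from cryptography.fernet import Fernet

key = Fernet.generate_key()  # share this key with your partner out-of-band
with open("filename", "rb") as f:
    encrypted = Fernet(key).encrypt(f.read())
with open("filename.enc", "wb") as f:
    f.write(encrypted)
print("key (keep it private):", key.decode())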
Large files in Microsoft Office 365 SharePoint do not support the Versions request.
I've isolated the problem to the file size. This is a SharePoint deployment on Microsoft O365.
The issue seems to be with Microsoft's REST API, and it appears to be related to the size of the file. This makes no sense to me, but my test data reflects these facts.
When I request Versions I receive this response, but only for files larger than 1 GB:
Operation is not valid due to the current state of the object.
I've tried Microsoft support and they have provided no help in regards to the error message or how to change the state of the object.
Any insights from the community would be greatly appreciated as this error has stopped my migration between sites dead in its tracks.
My tests
The file is unlocked. It is visible to users and checked in. I have administrative privileges. A very simple Versions GET results in an error.
Test 1: Does the format of the file make any difference?
Step 1: I zipped the ISO file and stored it in the same location.
Step 2: When I used the REST API to explore the versions, I was presented with the same error:
Operation is not valid due to the current state of the object.
Test 2: Is the file corrupt in some way?
Step 1: Upload the .ISO file to the same location with a new name.
Step 2: When I used the REST API to explore the versions, I was presented with the same error:
Operation is not valid due to the current state of the object.
Test 3: Is the error related to size?
Step 1: I created a small program that generates a text file by repeating a short test string over and over until the file reaches the requested size in gigabytes (approximated by the sketch after these steps).
Step 2: Create a 1 GB file, test.txt.
Step 3: Create a 4 GB file, test4.txt.
Step 4: Transfer both files to the folder.
Step 5: Use the REST API to retrieve the versions of test.txt; it works:
<?xml version="1.0" encoding="utf-8"?><feed ...
Step 6: Use the REST API to retrieve the versions of test4.txt; it fails:
<m:error xmlns:m="http://schemas.microsoft.com/ado/2007/08/dataservices/metadata">
<m:code>-1, System.InvalidOperationException</m:code>
<m:message xml:lang="en-US">
Operation is not valid due to the current state of the object.
</m:message>
</m:error>
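The generator mentioned in Step 1 can be approximated by a sketch like this (the exact string doesn't matter, only the resulting file size):
# Sketch: build a text file of roughly N gigabytes by repeating a short string.
def make_test_file(path, gigabytes):
    line = b"This is a test string.\n"
    target = gigabytes * 1024 ** 3
    written = 0
    with open(path, "wb") as f:
        while written < target:
            f.write(line)
            written += len(line)

make_test_file("test.txt", 1)   # Versions request succeeds for this file
make_test_file("test4.txt", 4)  # Versions request fails for this file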
This leads me to believe that the O365 implementation of SharePoint has a size-related issue. In the past there was a problem with files over 2 GB in size, but today Microsoft claims that up to 30 GB can be stored in a single file:
https://support.office.com/en-us/article/File-size-limits-for-workbooks-in-SharePoint-Online-9E5BC6F8-018F-415A-B890-5452687B325E
Given that all of my files are within the guidance provided by Microsoft, all of the files should behave in the same manner.
From the Microsoft site
What are the current file size limits for workbooks?
Your file size limits are determined by your particular subscription to Office 365.
If your Office 365 subscription includes… | And the workbook is stored here… | Then these file size limits apply to workbooks in a browser window
SharePoint Online | A library in a site such as a team site | 0-30 MB
Outlook Web App | Attached to an email message | 0-10 MB
If you’re trying to open a workbook that is attached to an email message in Outlook Web App, a smaller file size limit applies. In this case, the workbook must be smaller than 10 MB to open in a browser window.
If you are using Excel Online and Power BI, different file size limits apply. For more information, see Data storage in Power BI and Reduce the size of a workbook for Power BI.
Here’s the REST API GET, it's very simple:
https://oceusnetworks.sharepoint.com/opp/_api/web/GetFileByServerRelativeUrl('/opp/Opportunities%202018/RPW%20Spectrum%20Prototype%20Technologies/03-Delivery/Deliverables/Software%20and%20License%20Keys/Windows/WIN2016-SESS_X64FRE_EN-US_DV9.ISO')/Versions
and the response:
<m:error xmlns:m="http://schemas.microsoft.com/ado/2007/08/dataservices/metadata">
<m:code>-1, System.InvalidOperationException</m:code>
<m:message xml:lang="en-US">
Operation is not valid due to the current state of the object.
</m:message>
</m:error>
Another file in the same directory:
https://oceusnetworks.sharepoint.com/opp/_api/web/GetFileByServerRelativeUrl('/opp/Opportunities%202018/RPW%20Spectrum%20Prototype%20Technologies/03-Delivery/Deliverables/Software%20and%20License%20Keys/Windows/WIN2016-SESS_X64FRE_EN-US_DV9.MDS')/Versions
with the expected response (XML response truncated for brevity):
<?xml version="1.0" encoding="utf-8"?><feed ... ></feed>
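For reference, the two calls can be reproduced with a short script like this (a sketch: the bearer token and the test-file folder are placeholders; any authentication that works against the _api endpoints will do):
# Sketch: issue the same Versions GET for a small and a large file and compare.
# TOKEN and the folder path are placeholders.
import requests

SITE = "https://oceusnetworks.sharepoint.com/opp"
TOKEN = "<access-token>"

def get_versions(server_relative_path):
    url = ("%s/_api/web/GetFileByServerRelativeUrl('%s')/Versions"
           % (SITE, server_relative_path))
    return requests.get(url, headers={"Authorization": "Bearer " + TOKEN,
                                      "Accept": "application/atom+xml"})

small = get_versions("/opp/<folder>/test.txt")   # returns the <feed ...> document
large = get_versions("/opp/<folder>/test4.txt")  # returns the InvalidOperationException error
print(small.status_code, large.status_code)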
I'm currently writing code to use Amazon's S3 REST API and I notice different behavior where the only difference seems to be the Amazon endpoint URI that I use, e.g., https://s3.amazonaws.com vs. https://s3-us-west-2.amazonaws.com.
Examples of different behavior for the GET Bucket (List Objects) call:
Using one endpoint, it includes the "folder" in the results, e.g.:
/path/subfolder/
/path/subfolder/file1.txt
/path/subfolder/file2.txt
and, using the other endpoint, it does not include the "folder" in the results:
/path/subfolder/file1.txt
/path/subfolder/file2.txt
Using one endpoint, it represents "folders" using a trailing / as shown above and, using the other endpoint, it uses a trailing _$folder$:
/path/subfolder_$folder$
/path/subfolder/file1.txt
/path/subfolder/file2.txt
Why the differences? How can I make it return results in a consistent manner regardless of endpoint?
Note that I get these same odd results even if I use Amazon's own command-line AWS S3 client, so it's not my code.
And the contents of the buckets should be irrelevant anyway.
Your assertion notwithstanding, your issue is exactly about the content of the buckets, and not something S3 is doing -- the S3 API has no concept of folders. None. The S3 console can display folders, but this is for convenience -- the folders are not really there -- or if there are folder-like entities, they're irrelevant and not needed.
In Amazon S3, buckets and objects are the primary resources, where objects are stored in buckets. Amazon S3 has a flat structure with no hierarchy like you would see in a typical file system. However, for the sake of organizational simplicity, the Amazon S3 console supports the folder concept as a means of grouping objects. Amazon S3 does this by using key name prefixes for objects.
http://docs.aws.amazon.com/AmazonS3/latest/UG/FolderOperations.html
So why are you seeing this?
Either you've been using EMR/Hadoop, or some other code written by someone who took a bad example and ran with it... or code that has been doing things differently than it should for quite some time.
Amazon EMR is a web service that uses a managed Hadoop framework to process, distribute, and interact with data in AWS data stores, including Amazon S3. Because S3 uses a key-value pair storage system, the Hadoop file system implements directory support in S3 by creating empty files with the <directoryname>_$folder$ suffix.
https://aws.amazon.com/premiumsupport/knowledge-center/emr-s3-empty-files/
This may have been something the S3 console did many years ago, and apparently (since you don't report seeing them in the console) it still supports displaying such objects as folders in the console... but the S3 console no longer creates them this way, if it ever did.
I've mirrored the bucket "folder" layout exactly
If you create a folder in the console, an empty object with the key "foldername/" is created. This in turn is used to display a folder that you can navigate into, and upload objects with keys beginning with that folder name as a prefix.
The Amazon S3 console treats all objects that have a forward slash "/" character as the last (trailing) character in the key name as a folder
http://docs.aws.amazon.com/AmazonS3/latest/UG/FolderOperations.html
If you just create objects using the API, then "my/object.txt" appears in the console as "object.txt" inside folder "my" even though there is no "my/" object created... so if the objects are created with the API, you'd see neither style of "folder" in the object listing.
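You can see the same thing with a quick sketch (boto3; the bucket name is a placeholder):
# Sketch: upload an object whose key contains a slash, then list the prefix.
# No separate "my/" object exists unless something explicitly creates one.
import boto3

s3 = boto3.client("s3")
BUCKET = "my-example-bucket"  # placeholder

s3.put_object(Bucket=BUCKET, Key="my/object.txt", Body=b"hello")
listing = s3.list_objects_v2(Bucket=BUCKET, Prefix="my/")
print([obj["Key"] for obj in listing.get("Contents", [])])
# -> ['my/object.txt']  (the console still displays a folder named "my")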
That is probably a bug in the API endpoint which includes the "folder" - S3 internally doesn't actually have a folder structure, but instead is just a set of keys associated with files, where keys (for convenience) can contain slash-separated paths which then show up as "folders" in the web interface. There is the option in the API to specify a prefix, which I believe can be any part of the key up to and including part of the filename.
EMR's S3 client is not the Apache one, so I can't speak accurately about it.
In ASF Hadoop releases (and HDP, CDH):
The older s3n:// client uses $folder$ as its folder delimiter.
The newer s3a:// client uses / as its folder marker, but will handle $folder$ if there. At least it used to; I can't see where in the code it does now.
The S3A clients strip out all folder markers when you list things; S3A uses them to simulate empty dirs and deletes all parent markers when you create child file/dir entries.
Whatever you have which processes the GET results should just ignore entries ending with "/" or "_$folder$" (a minimal filter is sketched below).
As to why they are different: the local EMRFS is a different codepath, using DynamoDB to implement consistency. At a guess, it doesn't need to mock empty dirs, as the DDB tables will host all directory entries.
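If it helps, a minimal version of that filter (boto3; bucket and prefix are placeholders):
# Sketch: list a prefix and drop folder-marker keys, i.e. keys ending in "/"
# or "_$folder$", so both endpoint behaviours yield the same result.
import boto3

s3 = boto3.client("s3")
BUCKET, PREFIX = "my-example-bucket", "path/"  # placeholders

keys = []
for page in s3.get_paginator("list_objects_v2").paginate(Bucket=BUCKET, Prefix=PREFIX):
    for obj in page.get("Contents", []):
        key = obj["Key"]
        if key.endswith("/") or key.endswith("_$folder$"):
            continue  # skip folder markers
        keys.append(key)
print(keys)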
I feel like this should be a lot easier than it's been on me.
copy table
from 's3://s3-us-west-1.amazonaws.com/bucketname/filename.csv'
CREDENTIALS 'aws_access_key_id=my-access;aws_secret_access_key=my-secret'
REGION 'us-west-1';
Note: I added the REGION clause after having a problem, but it made no difference.
What confuses me is that the bucket properties only show the https://path/to/the/file.csv link. Since all the documentation I have read calls for the path to start with s3://, I assumed I could just change https to s3, as shown in my example.
However I get this error:
"Error : ERROR: S3ServiceException:
The bucket you are attempting to access must be addressed using the specified endpoint.
Please send all future requests to this endpoint.,Status 301,Error PermanentRedirect,Rid"
I am using Navicat for PostgreSQL to connect to Redshift, and I'm running on a Mac.
The S3 path should be 's3://bucketname/filename.csv'. Try this.
Yes, it should be a lot easier :-)
I have only seen this error when your S3 bucket is not in US Standard. In such cases you need to use an endpoint-based address, e.g. http://s3-eu-west-1.amazonaws.com/mybucket/myfile.txt.
You can find the endpoints for your region on this documentation page:
http://docs.aws.amazon.com/general/latest/gr/rande.html#s3_region
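Putting the two answers together, the corrected COPY looks like this, issued here from Python with psycopg2 purely for illustration (the cluster endpoint, credentials, and table name are placeholders):
# Sketch: run the corrected COPY against Redshift. The s3:// path is just
# bucket/key with no endpoint host, and REGION names the bucket's region.
import psycopg2

conn = psycopg2.connect(host="<cluster>.us-west-1.redshift.amazonaws.com",
                        port=5439, dbname="<db>", user="<user>", password="<password>")
with conn.cursor() as cur:
    cur.execute("""
        COPY table
        FROM 's3://bucketname/filename.csv'
        CREDENTIALS 'aws_access_key_id=my-access;aws_secret_access_key=my-secret'
        REGION 'us-west-1';
    """)
conn.commit()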
We have a large SQL Database running on Azure which is only generally in use during normal office hours, although from time to time, overtime/weekend staff will require performant access to the database.
Currently, we run the database on the S3 Tier during office hours, and reduce it to S0 at all other times.
I know that there are a number of example PowerShell scripts that can be used together with automation tasks to automatically modify the database tiers according to a predefined timetable. However, we would like to control it from within our own .Net application. The main benefit is that this would allow us to give control to admin staff to switch up the database tier during out-of-hours as required without the need for technical staff to get involved.
There are a number of articles/videos on the Microsoft site that mention "scaling up/down" (as opposed to "scaling out/in", i.e. creating/removing additional shards), but the sample code provided seems to deal exclusively with sharding, and not with vertical "scaling up/down".
Is this possible? Can anyone point me in the direction of any relevant resources?
You can, yes. You have to use the REST API to make the call to our endpoints and update the database.
The description and the parameters of the PUT required for the update are explained here -> Update Database Details
You can change the tier programmatically from there.
Yes, you can change database tiers by using the REST API to call the Azure endpoints and update the tier.
The parameters to be used for PUT are explained on this msdn page: Update Database
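As a rough illustration of that call (a sketch only: the api-version, resource names, and token handling are assumptions to check against the current Update Database documentation; the same PUT can be sent from a .NET application with HttpClient):
# Sketch: change the service tier of an Azure SQL database via the ARM
# Update Database endpoint. All names, the token, and the api-version are placeholders.
import requests

SUB, RG, SERVER, DB = "<subscription-id>", "<resource-group>", "<server>", "<database>"
TOKEN = "<azure-ad-access-token>"  # obtained separately, e.g. via client credentials

url = ("https://management.azure.com/subscriptions/%s/resourceGroups/%s"
       "/providers/Microsoft.Sql/servers/%s/databases/%s?api-version=2014-04-01"
       % (SUB, RG, SERVER, DB))

body = {"location": "<region>",
        "properties": {"edition": "Standard",
                       "requestedServiceObjectiveName": "S3"}}  # target tier

resp = requests.put(url, json=body, headers={"Authorization": "Bearer " + TOKEN})
print(resp.status_code, resp.text)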
This can now be done using multiple methods besides using the REST API directly.
https://learn.microsoft.com/en-us/azure/azure-sql/database/single-database-scale
See the Azure CLI example:
az sql db update -g mygroup -s myserver -n mydb --edition Standard --capacity 10 --max-size 250GB
See PowerShell example:
Set-AzSqlDatabase -ResourceGroupName "ResourceGroup01" -DatabaseName "Database01" -ServerName "Server01" -Edition "Standard" -RequestedServiceObjectiveName "S0"
I just tried doing this via SQL and it worked; it set the DTUs to 10, the S0 default (this might take a loooong time):
ALTER DATABASE [mydb_name] MODIFY(EDITION='Standard' , SERVICE_OBJECTIVE='S0')
Reference: https://www.c-sharpcorner.com/blogs/change-the-azure-sql-tier-using-sql-query