I'm trying to do some testing with Cloud Data Fusion, but I'm running into connection issues when running my pipelines. I've come to understand that it is using the default network, and I would like to change my System Compute Profile over to a different network.
The problem is, I don't have the option to create a new System Compute Profile (the option doesn't show up under the Configuration tab). How can I go about getting the correct access to create a new compute profile? I have the Data Fusion Admin role.
Thank you.
Creating a new compute profile is only available in Data Fusion Enterprise edition. In the basic edition, only the default compute profile can be used. But you can customize the profile when you run the pipeline. To do that:
Go to the pipeline page
Click Configure; under Compute config, click Customize
This pops up the settings for the profile; under General Settings, you can set the value for the network.
Just an update on this thread for future viewers: custom profiles can be created in Cloud Data Fusion version 6.2.2 (Basic).
We can apply Row Level Security (RLS) in Power BI by clicking on the Manage Roles button, but I was wondering if this can also be done programmatically, for example, through a REST API?
Also, that article says there's a known issue where you'll get an error message if you try to publish a previously published report from Power BI Desktop. If it's possible to do RLS programmatically, would that issue still remain, so that we would still have to do some additional manual step?
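Not a full answer, but one documented programmatic path: for embedded reports, the REST API can apply an existing RLS role when generating an embed token. A minimal PowerShell sketch, assuming the role is already defined in the dataset, an AAD access token is already in $token, and the group/report/dataset IDs are hypothetical placeholders:

# Build an effective identity that binds a user to an RLS role defined in the dataset.
$body = @{
    accessLevel = "View"
    identities  = @(
        @{
            username = "user@contoso.com"   # identity the role is enforced for
            roles    = @("SalesRole")       # must match a role name defined in the dataset
            datasets = @("<datasetId>")
        }
    )
} | ConvertTo-Json -Depth 4

# Generate an embed token scoped to that identity/role.
Invoke-RestMethod -Method Post `
    -Uri "https://api.powerbi.com/v1.0/myorg/groups/<groupId>/reports/<reportId>/GenerateToken" `
    -Headers @{ Authorization = "Bearer $token" } `
    -ContentType "application/json" `
    -Body $body

Note this only applies a role at query time; defining or editing the roles themselves still happens in Desktop (or via the XMLA endpoint), so presumably the republish issue would still apply to role-definition changes.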
I am trying to import a cloud-enabled Debian Linux image for the Power architecture to run on the IBM public cloud, which supports this architecture.
I think I am following the instructions, but at image-import time, after filling in all the relevant information, when I hit the "import" button, the GUI just exits silently, with no apparent effect and no reported error.
I am reasonably experienced doing simple IaaS stuff on AWS, but am new to the IBM cloud, and have not deployed a custom image on any cloud provider. I'm aware of "cloud-init" and have a reasonable general knowledge of what problem it solves (mapping cloud-provider metadata to config entries in the resulting VM at start-time), but not a great deal about how it actually works.
What I have done is:
Got an IBM cloud account, and upgraded out of the free tier, for access to Power.
Activated the Power Systems Virtual Server service.
Activated the Cloud Object Storage service.
Created a bucket in the COS.
Created an HMAC-enabled service credential for this bucket.
Uploaded my image, in .tar.gz format, to the bucket (via the CLI, it's too big to upload by GUI).
The image is from here -- that page is a bit vague on which cloud providers it may be expected to work with, but AFAIK the IBM cloud is the only public cloud supporting Power?
Then, from the Power Systems Virtual Server service page, I clicked the "Boot Images" item on the left to show the empty list, then "Import Image" at the top of the list, and filled in the form. I have answers for all of the entries -- I can make up a new name, and I know the region of my COS, the image file name (the "key", in object-storage parlance), the bucket name, and the access and secret keys, which are available from the credential description in the COS panel.
Then the "import" button lights up, and I click it, and the import dialog disappears, no error is reported, and no image is imported.
There are various things that might be wrong that I'm not sure how to investigate.
It's possible the credential is not connected to the bucket in the right way; I didn't really understand the documentation about that, but in the GUI it looks like it's in the right scope and has the right data in it.
It's also possible that only certain types of images are allowed, and my image is failing some kind of validation check, but in that case I would expect an error message?
I have found the image-importing instructions for the non-Power IaaS, but they seem to be out of scope. I have also found some docs on how to prepare a custom image, but they also seem to be non-Power IaaS.
What's the right way to do this?
Edit to add: I also tried doing this via the CLI ("ibmcloud pi image-import"), where it gets a time-out, apparently on the endpoint that's supposed to receive the image. Also, the command-line tool has an --os-type flag that apparently only takes [aix | sles | redhat | ibmi] -- my first attempt used raw, which was rejected as an error.
This is perhaps additional evidence that what I want to do is actually impossible?
PowerVS supports only .ova images. Those are not the same as the ones supported by VMware, for instance.
You can get one from here: https://public.dhe.ibm.com/software/server/powervs/images/
Or you can use the images available in the regional pool of images:
ibmcloud pi image-list-catalog
Once you have your first VM up and running, you can use https://github.com/ppc64le-cloud/pvsadm to create a new .ova. Today the tool only supports RHEL, CentOS, and CoreOS.
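A minimal sketch of a pvsadm conversion, assuming the qcow2ova subcommand and flag names from the project's README at the time (check pvsadm image qcow2ova --help; the distro must be one the tool supports):

# Convert a qcow2 image into a PowerVS-compatible .ova (names are hypothetical).
pvsadm image qcow2ova --image-name rhel-83-ova --image-url ./rhel-8.3.qcow2 --image-dist rhel

The resulting .ova can then be uploaded to the COS bucket for import.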
If you want to easily play with PowerVS you can also use https://github.com/rpsene/powervs-actions.
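Once a supported .ova is in the COS bucket, the CLI import from the question should go through. A sketch only -- apart from --os-type, which the question confirms, the flag names here are assumptions, so check ibmcloud pi image-import --help for the exact spelling in your plugin version. Note also that the --os-type list has no Debian entry, which is consistent with the Debian image being unsupported:

# Hypothetical flag names except --os-type; values are placeholders.
ibmcloud pi image-import my-rhel-image --image-path my-bucket/rhel-83-ova.ova.gz --os-type redhat --access-key <HMAC access key> --secret-key <HMAC secret key> --region <COS region>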
I'm trying to create alerts in Log Analytics in the Azure portal. I need to create 6 alerts for each of 5 DBs, so I would have to create 30 alerts manually, which is time-consuming.
Hence I need an automated approach.
I tried Creating Alerts Using Azure PowerShell, but this creates the alerts under Alerts (Classic) in Monitor, which is not what is required; I need them created in Log Analytics.
The next approach was via Create a metric alert with a Resource Manager template, but this was a metric alert, not a Log Analytics alert.
Finally, I tried Create and manage alert rules in Log Analytics with REST API, but this is a tedious process: you need to get the search ID, schedule ID, threshold ID, and action ID. Even when trying to create the threshold ID or action ID, the error I'm facing is "404 - File or directory not found." (as in the image).
Could someone please suggest how to proceed, or is there another way to create these alerts apart from manual creation?
If you use Add activity log alert to add a rule, you will find it under Alerts in Log Analytics in the portal.
Please refer to the Log Analytics documentation:
Alerts are created by alert rules in Azure Monitor and can automatically run saved queries or custom log searches at regular intervals.
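For the Log Analytics (saved-search) alerts the question is actually about, the linked REST-API article's flow boils down to three PUTs against the workspace. A hedged PowerShell sketch -- the search/schedule/action IDs are arbitrary names you choose yourself, and the api-versions and property names are from that article's era, so verify them there (a wrong api-version or a mistyped ID path is a classic cause of a 404 like the one in the question):

# ARM scope of the workspace; $token is an already-acquired ARM bearer token.
$base = "https://management.azure.com/subscriptions/<sub>/resourceGroups/<rg>/providers/Microsoft.OperationalInsights/workspaces/<workspace>"
$headers = @{ Authorization = "Bearer $token" }

# 1. Saved search that the alert will run.
Invoke-RestMethod -Method Put -Headers $headers -ContentType "application/json" `
    -Uri "$base/savedSearches/mysearch?api-version=2017-03-15-preview" `
    -Body (@{ properties = @{ category = "Alerts"; displayName = "High CPU"; query = "Perf | where CounterName == '% Processor Time' and CounterValue > 90"; version = 1 } } | ConvertTo-Json -Depth 4)

# 2. Schedule on that saved search (run every 15 minutes over a 15-minute window).
Invoke-RestMethod -Method Put -Headers $headers -ContentType "application/json" `
    -Uri "$base/savedSearches/mysearch/schedules/myschedule?api-version=2015-03-20" `
    -Body (@{ properties = @{ Interval = 15; QueryTimeSpan = 15; Enabled = $true } } | ConvertTo-Json -Depth 4)

# 3. Threshold action on the schedule (alert when the query returns any results).
Invoke-RestMethod -Method Put -Headers $headers -ContentType "application/json" `
    -Uri "$base/savedSearches/mysearch/schedules/myschedule/actions/myaction?api-version=2015-03-20" `
    -Body (@{ properties = @{ Type = "Alert"; Name = "High CPU alert"; Severity = "critical"; Threshold = @{ Operator = "gt"; Value = 0 } } } | ConvertTo-Json -Depth 4)

Looping that over the 5 DBs x 6 alert definitions removes the manual step.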
Update:
Please refer to my test screenshots; I think you should check the specific resource group and related settings.
Even so, an activity log alert belongs to alerts (classic); the alerts referred to above are the new metric alert type. You can check the "new metric alert type" link in this article -- it points to the new alerts, which are not supported by PowerShell or the CLI currently.
Please refer to:
1.Use PowerShell to create alerts for Azure services
2.Use the cross-platform Azure CLI to create classic metric alerts in Azure Monitor for Azure services
As mentioned in the two articles:
This article describes how to create older classic metric alerts. Azure Monitor now supports newer, better metric alerts. These alerts can monitor multiple metrics and allow for alerting on dimensional metrics. PowerShell support for newer metric alerts is coming soon.
This article describes how to create older classic metric alerts. Azure Monitor now supports newer, better metric alerts. These alerts can monitor multiple metrics and allow for alerting on dimensional metrics. Azure CLI support for newer metric alerts is coming soon.
@Shashikiran: You can use the script published on GitHub: https://github.com/microsoft/manageability-toolkits/tree/master/Alert%20Toolkit
It creates a few sample alerts. For now it includes some sample core machine-monitoring alerts (CPU, hardware failures, SQL, etc.), and these are log alerts only. You can use it as sample code and build your own on top of it.
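For what it's worth, running that toolkit is essentially one script call. A sketch, assuming the entry script and parameter names from the repo's README at the time (check the repo, as these may have changed):

# Hypothetical subscription/workspace values; run from the cloned Alert Toolkit folder.
.\New-CoreAlerts.ps1 -SubscriptionId "<subscription-id>" -WorkspaceName "<workspace-name>" -ResourceGroup "<workspace-resource-group>" -AlertEmailAddress "ops@contoso.com"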
https://cloud.google.com/dataproc/docs/guides/dataproc-images specifies that
The custom image is saved in Cloud Compute Images, and is valid to create a Cloud Dataproc cluster for 30 days. You must re-create the custom image to reuse it after the 30-day period.
Is that limitation temporary while the custom image feature is in beta, or will it be perpetual?
This is a perpetual limitation; it will remain after custom images go to GA (General Availability).
If you have feedback on how and why this impacts your use case, you can send it to dataproc-feedback@google.com for the Dataproc team's consideration.
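Since the expiry is perpetual, the practical workaround is to script re-creation and run it on a schedule before the 30 days lapse. A sketch, assuming the generate_custom_image.py script from the linked guide, with hypothetical names and versions:

# Re-create the custom image before the 30-day validity lapses (values are placeholders).
python generate_custom_image.py --image-name custom-image-20200801 --dataproc-version 1.4.16-debian9 --customization-script my-customization.sh --zone us-central1-a --gcs-bucket gs://my-images-bucket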
I need to add relying parties in AD FS every time a new client comes on board. I would like to automate this by specifying either the URL of the federation metadata or a file picker for the admin to load the federation metadata file.
I have been following the instructions on this site: Adding a New Relying Party Trust
However, I get the following error:
ADMIN0120: The client is not authorized to access the endpoint
net.tcp://localhost:1500/policy.
The client process must be run with elevated administrative privileges.
Not sure what I am doing wrong. I guess the bigger question is: is this the best way to set up relying parties and claims using code, or should I use PowerShell commands?
This error doesn't mean you have a code issue; it is related to privileges. Test it by right-clicking the client and selecting "Run as administrator" to see if it goes through.
As per your link, there are three ways:
Using the AD FS 2.0 Management console
Using the Windows PowerShell command-line interface
Programmatically using the AD FS 2.0 application programming interface (API)
All three are equally valid -- the only difference is how much work you have to do for each; e.g., the wizard is lots of mouse clicks.
What I do is set up the RP the first time via the wizard, save the setup using PowerShell (get the RP, get the claims, etc.), and then use those to set up subsequent ones as you migrate from dev to test to staging (see the sketch below).
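To make that concrete, here is a minimal PowerShell sketch of the metadata-URL route plus copying claim rules from a wizard-built RP. Names and URLs are hypothetical; on AD FS 2.0, first load the snap-in (Add-PSSnapin Microsoft.Adfs.PowerShell) and run the session elevated, which is also the fix for the ADMIN0120 error above:

# Create the trust straight from the client's federation metadata URL.
Add-AdfsRelyingPartyTrust -Name "NewClient" -MetadataUrl "https://client.example.com/FederationMetadata/2007-06/FederationMetadata.xml" -MonitoringEnabled $true -AutoUpdateEnabled $true

# Reuse the claim rules from an RP you set up once via the wizard.
$template = Get-AdfsRelyingPartyTrust -Name "TemplateClient"
Set-AdfsRelyingPartyTrust -TargetName "NewClient" -IssuanceTransformRules $template.IssuanceTransformRules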