Custom Image under AzureBatch ImageReference class not working - azure-batch

I have a custom VHD file. I am able to create a pool with my custom image through the portal, but when I try the same with the .NET SDK it throws the error "Operation returned an invalid status code 'Forbidden'".
I am referring to this link: Azure Batch.
I am able to create a pool from Marketplace images with the same code.
Below is my code:
ImageReference imageReference = new ImageReference("/subscriptions/XXXXXXXXXXXXXXX/resourceGroups/RG-OneGolden/providers/Microsoft.Compute/images/OMGoldenImage");

VirtualMachineConfiguration virtualMachineConfiguration =
    new VirtualMachineConfiguration(
        imageReference: imageReference,
        nodeAgentSkuId: "batch.node.windows amd64");

try
{
    CloudPool pool = batchClient.PoolOperations.CreatePool(
        poolId: PoolId,
        targetDedicatedComputeNodes: PoolNodeCount,
        virtualMachineSize: PoolVMSize,
        virtualMachineConfiguration: virtualMachineConfiguration);

    pool.Commit();
}
catch (BatchException be)
{
    // Accept the specific error code PoolExists as that is expected if the pool already exists
    if (be.RequestInformation?.BatchError?.Code == BatchErrorCodeStrings.PoolExists)
    {
        Console.WriteLine("The pool {0} already existed when we tried to create it", PoolId);
    }
    else
    {
        throw; // Any other exception is unexpected
    }
}

You need to ensure you have met the prerequisites for using custom images in Azure Batch:
The ARM image is in the same subscription and region as the Batch account.
You are using Azure Active Directory (AAD) to authenticate with the Batch service.
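A "Forbidden" error from the pool create call most often points to the second prerequisite: when the pool references an ARM image, the BatchClient has to be opened with an AAD token rather than shared-key credentials. Below is a minimal sketch of that, assuming the ADAL package (Microsoft.IdentityModel.Clients.ActiveDirectory) and placeholder tenant, application, and Batch account values:

using System.Threading.Tasks;
using Microsoft.Azure.Batch;
using Microsoft.Azure.Batch.Auth;
using Microsoft.IdentityModel.Clients.ActiveDirectory;

public static class BatchAadAuth
{
    // Placeholder values; substitute your own tenant, app registration, and Batch account URL.
    private const string AuthorityUri = "https://login.microsoftonline.com/<tenant-id>";
    private const string BatchResourceUri = "https://batch.core.windows.net/";
    private const string BatchAccountUrl = "https://<account>.<region>.batch.azure.com";

    public static async Task<BatchClient> OpenClientAsync(string clientId, string clientKey)
    {
        // Acquire an AAD token for the Batch service resource.
        var authContext = new AuthenticationContext(AuthorityUri);
        AuthenticationResult authResult = await authContext.AcquireTokenAsync(
            BatchResourceUri, new ClientCredential(clientId, clientKey));

        // Open the Batch client with token credentials instead of BatchSharedKeyCredentials.
        return BatchClient.Open(new BatchTokenCredentials(BatchAccountUrl, authResult.AccessToken));
    }
}

With a client opened this way, the CreatePool call above with the custom ImageReference should be authorized, provided the image also meets the first prerequisite.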

Related

MS Dynamics 365 Online Plugin External Rest API access gives error

I am trying to access an external third party API from a Dynamics 365 Online plugin using the following code:
public void Execute(IServiceProvider serviceProvider)
{
    // Extract the tracing service for use in plug-in debugging.
    ITracingService tracingService =
        (ITracingService)serviceProvider.GetService(typeof(ITracingService));
    try
    {
        // 'webAddress' is a field of the plug-in class (not shown in this snippet).
        tracingService.Trace("Downloading the target URI: " + webAddress);
        try
        {
            //<snippetWebClientPlugin2>
            // Download the target URI using a Web client. Any .NET class that uses the
            // HTTP or HTTPS protocols and a DNS lookup should work.
            using (WebClient client = new WebClient())
            {
                byte[] responseBytes = client.DownloadData(webAddress);
                string response = Encoding.UTF8.GetString(responseBytes);
                //</snippetWebClientPlugin2>
                tracingService.Trace(response);

                // For demonstration purposes, throw an exception so that the response
                // is shown in the trace dialog of the Microsoft Dynamics CRM user interface.
                throw new InvalidPluginExecutionException("WebClientPlugin completed successfully.");
            }
        }
        catch (WebException exception)
        {
            string str = string.Empty;
            if (exception.Response != null)
            {
                using (StreamReader reader =
                    new StreamReader(exception.Response.GetResponseStream()))
                {
                    str = reader.ReadToEnd();
                }
                exception.Response.Close();
            }
            if (exception.Status == WebExceptionStatus.Timeout)
            {
                throw new InvalidPluginExecutionException(
                    "The timeout elapsed while attempting to issue the request.", exception);
            }
            throw new InvalidPluginExecutionException(String.Format(CultureInfo.InvariantCulture,
                "A Web exception occurred while attempting to issue the request. {0}: {1}",
                exception.Message, str), exception);
        }
    }
    catch (Exception e)
    {
        tracingService.Trace("Exception: {0}", e.ToString());
        throw;
    }
}
} // End of the plug-in class (class declaration not shown in this snippet).
But I am getting the error:
Request for the permission of type 'System.Security.Permissions.SecurityPermission, mscorlib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089' failed.
I have checked MS documentation but nothing suggests why I am unable to do this. I know about sandboxed plugins but according to MS I should be able to do this using their own sample code.
This is expected in CRM Online: it is SaaS and you are in a shared tenant in the cloud. You can use either a webhook or an Azure Service Bus endpoint to trigger an external endpoint with the CRM context for processing. Read more.
And if you have CRM Online, the normal solution is to offload the processing to an environment that you have more control over. The most common option is to offload the processing to Azure, using Azure Service Bus or Azure Event Hubs. The alternative, new to CRM 9, is to send the data to a webhook, which can be hosted wherever you like.
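For the webhook route, the plug-in step is registered against a webhook endpoint (via the Plugin Registration Tool), and Dynamics POSTs the execution context to it as JSON; the call to the external API then happens outside the sandbox. A minimal sketch of such a receiver, assuming a .NET 6+ ASP.NET Core minimal API with implicit usings and a hypothetical third-party URL:

var app = WebApplication.CreateBuilder(args).Build();
var httpClient = new HttpClient();

app.MapPost("/crm-webhook", async (HttpRequest request) =>
{
    // Dynamics 365 posts the serialized execution context as JSON in the request body.
    using var reader = new StreamReader(request.Body);
    string executionContextJson = await reader.ReadToEndAsync();

    // Call the external third-party API from here, outside the plug-in sandbox.
    // The URL below is a placeholder, not a real endpoint.
    HttpResponseMessage response = await httpClient.GetAsync("https://thirdparty.example.com/api/resource");
    response.EnsureSuccessStatusCode();

    return Results.Ok();
});

app.Run();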

Azure pipeline run status queued

I am new to the Azure environment. I have written some .NET Core code that starts an Azure Data Factory pipeline. When I run it from my local machine, the pipeline run status is always Success. I deployed the code to the Azure environment, and when I start the pipeline from the Azure server, the status is always Queued. What is the Queued status, and what do I have to do about it? Can someone please help? Do I need to change any settings in Azure so that the pipeline run will succeed?
AuthenticationContext context = new AuthenticationContext("https://login.windows.net/" + tenantID);
ClientCredential cc = new ClientCredential(applicationId, authenticationKey);
AuthenticationResult result = context.AcquireTokenAsync("https://management.azure.com/", cc).Result;
ServiceClientCredentials cred = new TokenCredentials(result.AccessToken);

var client = new Microsoft.Azure.Management.DataFactory.DataFactoryManagementClient(cred) { SubscriptionId = subscriptionId };

CreateRunResponse runResponse = client.Pipelines.CreateRunWithHttpMessagesAsync(resourceGroup, dataFactoryName, pipelineName).Result.Body;
string RunId = runResponse.RunId;

PipelineRun pipelineRun;
while (true)
{
    pipelineRun = client.PipelineRuns.Get(resourceGroup, dataFactoryName, runResponse.RunId);
    if (pipelineRun.Status == "InProgress")
        System.Threading.Thread.Sleep(15000);
    else
        break;
}
You should look into the diagnostics logs of the integration runtime (IR); they contain valuable information which should help.
Navigate to the IR -> diagnostics log, or open Event Viewer -> Applications and Services Logs ->
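Also note that Queued is a normal, non-terminal status: the run has been accepted but has not started executing yet (for example, it is waiting for capacity on the integration runtime or for a free concurrency slot). The polling loop in the question breaks out as soon as the status is anything other than "InProgress", so a queued run looks like it ended immediately. A small sketch of a loop that waits through both states, reusing the DataFactoryManagementClient and variables from the code above:

PipelineRun pipelineRun;
while (true)
{
    pipelineRun = client.PipelineRuns.Get(resourceGroup, dataFactoryName, runResponse.RunId);
    Console.WriteLine("Pipeline run status: " + pipelineRun.Status);

    // "Queued" and "InProgress" are both in-flight states; keep polling.
    if (pipelineRun.Status == "Queued" || pipelineRun.Status == "InProgress")
    {
        System.Threading.Thread.Sleep(15000);
    }
    else
    {
        break; // Terminal states include "Succeeded", "Failed", and "Cancelled".
    }
}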

OrientDB unable to open any kind of graph

I am new to OrientDB and have run into a major block even trying to open a simple in-memory database.
Here are my two lines of code (in Java):
OrientGraphFactory factory = new OrientGraphFactory("memory:test").setupPool(1, 10);

// EVERY TIME YOU NEED A GRAPH INSTANCE
OrientGraph g = factory.getTx();
try {
} finally {
    g.shutdown();
}
I get the following error:
Exception in thread "main" com.orientechnologies.orient.core.exception.OStorageException: Cannot open local storage 'test' with mode=rw
    at com.orientechnologies.orient.core.storage.impl.local.OAbstractPaginatedStorage.open(OAbstractPaginatedStorage.java:210)
    at com.orientechnologies.orient.core.db.document.ODatabaseDocumentTx.open(ODatabaseDocumentTx.java:223)
    at com.orientechnologies.orient.core.db.OPartitionedDatabasePool.acquire(OPartitionedDatabasePool.java:287)
    at com.tinkerpop.blueprints.impls.orient.OrientBaseGraph.<init>(OrientBaseGraph.java:163)
    at com.tinkerpop.blueprints.impls.orient.OrientTransactionalGraph.<init>(OrientTransactionalGraph.java:78)
    at com.tinkerpop.blueprints.impls.orient.OrientGraph.<init>(OrientGraph.java:128)
    at com.tinkerpop.blueprints.impls.orient.OrientGraphFactory.getTx(OrientGraphFactory.java:74)
Caused by: com.orientechnologies.orient.core.exception.OStorageException: Cannot open the storage 'test' because it does not exist in path: test
    at com.orientechnologies.orient.core.storage.impl.local.OAbstractPaginatedStorage.open(OAbstractPaginatedStorage.java:154)
    ... 7 more
What 'path' is it talking about? How is a path even relevant when trying to open a simple in-memory database? Furthermore, I have also tried this with plocal:/..... and I always get the above error.
Regards,
Bhargav.
Try to create the database first:
OrientGraphNoTx graph = new OrientGraphNoTx("memory:test");
Then use the pool:
OrientGraphFactory factory = new OrientGraphFactory("memory:test").setupPool(1, 10);
By the way, which db version are you using?
Databases created as in-memory only need to be created first, and the pool didn't allow that (fixed in the latest snapshot). Try acquiring an instance from the factory without the pool, like:
OrientGraphFactory factory = new OrientGraphFactory("memory:test");
factory.getTx().shutdown(); // AUTO-CREATE THE GRAPH IF NOT EXISTS
factory.setupPool(1, 10);

// EVERY TIME YOU NEED A GRAPH INSTANCE
OrientGraph g = factory.getTx();
try {
} finally {
    g.shutdown();
}

Handle JDBC exception in BIRT API

I have a scheduler job which is based on a standalone RunAndRenderTask. The report design connects to a remote mysql database to fetch data. The scheduler generates a PDF and emails the report as attachment to a set of people. This works as long as the database is available.
But when the database is unavailable, I can see the error in the logs, yet the RunAndRenderTask still generates a blank, useless PDF report, and this gets emailed by the scheduler. I need to be able to catch this exception and instead email another set of people who can fix the DB issue. I tried various things but couldn't figure out how to do it.
In the code below, I expect the API to return an exception, and hence print "BirtException" or "Exception", but this code prints "Success" even when there is a JDBC exception.
Any help is appreciated.
Here's the code I have.
IReportEngine engine = null;
IRunAndRenderTask runAndRenderTask = null;
try {
    EngineConfig config = new EngineConfig();
    config.setEngineHome("birt-runtime-4_4_0/RuntimeEngine");
    Platform.startup(config);

    IReportEngineFactory factory = (IReportEngineFactory) Platform
            .createFactoryObject(IReportEngineFactory.EXTENSION_REPORT_ENGINE_FACTORY);
    engine = factory.createReportEngine(config);

    IReportRunnable reportRunnable = engine.openReportDesign(DATA_PATH + "sample.rptdesign");
    runAndRenderTask = engine.createRunAndRenderTask(reportRunnable);

    PDFRenderOption option = new PDFRenderOption();
    option.setOutputFileName(DATA_PATH + "output.pdf");
    option.setOutputFormat("pdf");
    runAndRenderTask.setRenderOption(option);

    runAndRenderTask.run();
    System.out.println("Success!");
} catch (BirtException e) {
    System.out.println("BirtException");
    e.printStackTrace();
} catch (Throwable e) {
    System.out.println("Exception");
    e.printStackTrace();
} finally {
    if (runAndRenderTask != null) {
        runAndRenderTask.close();
    }
    if (engine != null) {
        engine.destroy();
    }
    Platform.shutdown();
    RegistryProviderFactory.releaseDefault();
}
This is the exception stack trace, which never gets propagated back by RunAndRenderTask.run():
INFO: Loaded JDBC driver class in class path: com.mysql.jdbc.Driver
Jun 26, 2014 9:26:43 PM org.eclipse.birt.data.engine.odaconsumer.ConnectionManager openConnection
SEVERE: Unable to open connection.
org.eclipse.birt.report.data.oda.jdbc.JDBCException: There is an error in get connection, Communications link failure
The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server..
at org.eclipse.birt.report.data.oda.jdbc.JDBCDriverManager.doConnect(JDBCDriverManager.java:336)
at org.eclipse.birt.report.data.oda.jdbc.JDBCDriverManager.getConnection(JDBCDriverManager.java:235)
at org.eclipse.birt.report.data.oda.jdbc.Connection.connectByUrl(Connection.java:252)
at org.eclipse.birt.report.data.oda.jdbc.Connection.open(Connection.java:162)
at org.eclipse.datatools.connectivity.oda.consumer.helper.OdaConnection.open(OdaConnection.java:250)
at org.eclipse.birt.data.engine.odaconsumer.ConnectionManager.openConnection(ConnectionManager.java:165)
at org.eclipse.birt.data.engine.executor.DataSource.newConnection(DataSource.java:224)
at org.eclipse.birt.data.engine.executor.DataSource.open(DataSource.java:212)
at org.eclipse.birt.data.engine.impl.DataSourceRuntime.openOdiDataSource(DataSourceRuntime.java:217)
at org.eclipse.birt.data.engine.impl.QueryExecutor.openDataSource(QueryExecutor.java:435)
at org.eclipse.birt.data.engine.impl.QueryExecutor.prepareExecution(QueryExecutor.java:322)
at org.eclipse.birt.data.engine.impl.PreparedQuery.doPrepare(PreparedQuery.java:463)
at org.eclipse.birt.data.engine.impl.PreparedDataSourceQuery.produceQueryResults(PreparedDataSourceQuery.java:190)
at org.eclipse.birt.data.engine.impl.PreparedDataSourceQuery.execute(PreparedDataSourceQuery.java:178)
at org.eclipse.birt.data.engine.impl.PreparedOdaDSQuery.execute(PreparedOdaDSQuery.java:178)
at org.eclipse.birt.report.data.adapter.impl.DataRequestSessionImpl.execute(DataRequestSessionImpl.java:637)
at org.eclipse.birt.report.engine.data.dte.DteDataEngine.doExecuteQuery(DteDataEngine.java:152)
at org.eclipse.birt.report.engine.data.dte.AbstractDataEngine.execute(AbstractDataEngine.java:275)
at org.eclipse.birt.report.engine.executor.ExtendedGenerateExecutor.executeQueries(ExtendedGenerateExecutor.java:205)
at org.eclipse.birt.report.engine.executor.ExtendedGenerateExecutor.execute(ExtendedGenerateExecutor.java:65)
at org.eclipse.birt.report.engine.executor.ExtendedItemExecutor.execute(ExtendedItemExecutor.java:62)
at org.eclipse.birt.report.engine.internal.executor.dup.SuppressDuplicateItemExecutor.execute(SuppressDuplicateItemExecutor.java:43)
at org.eclipse.birt.report.engine.internal.executor.wrap.WrappedReportItemExecutor.execute(WrappedReportItemExecutor.java:46)
at org.eclipse.birt.report.engine.internal.executor.l18n.LocalizedReportItemExecutor.execute(LocalizedReportItemExecutor.java:34)
at org.eclipse.birt.report.engine.layout.html.HTMLBlockStackingLM.layoutNodes(HTMLBlockStackingLM.java:65)
at org.eclipse.birt.report.engine.layout.html.HTMLPageLM.layout(HTMLPageLM.java:92)
at org.eclipse.birt.report.engine.layout.html.HTMLReportLayoutEngine.layout(HTMLReportLayoutEngine.java:100)
at org.eclipse.birt.report.engine.api.impl.RunAndRenderTask.doRun(RunAndRenderTask.java:181)
at org.eclipse.birt.report.engine.api.impl.RunAndRenderTask.run(RunAndRenderTask.java:77)
at test.ReportTester.test(ReportTester.java:50)
at test.ReportTester.main(ReportTester.java:19)
In addition to catching BirtException, you should be aware that the way BIRT handles JavaScript errors is, by default, browser-like. That is, BIRT tries to continue generating the report.
There are different ways to handle this for production-quality code (where task is a RunAndRenderTask or RunTask or RenderTask):
Use task.setErrorHandlingOption(CANCEL_ON_ERROR) (see BIRT docs). Personally, I have never tried this.
After task.run(...), but before task.close(), call task.getErrors(). If this list is not empty, your code should output these messages and throw an exception.
You need to add a catch block that catches EngineException, not the JDBC exception.
You can find the Javadocs at this link.

"Forbidden" error when uploading file through Google Cloud Storage API

I am using the "google-api-services-storage-v1beta2-rev5-java-1.15.0-rc.zip" Google Cloud Storage library together with the "StorageSample.java" sample program from here.
I have followed the sample program's setup instructions and have set up the "client_secrets.json" and "sample_settings.json" files. The sample program compiles OK but only partially runs OK.
I have modified the "uploadObject" method of the "StorageSample.java" program so that it uploads a test file created by me (rather than uploading a randomly generated file). The program runs OK in the following methods:
tryCreateBucket();
getBucket();
listObjects();
getObjectMetadata();
However, when running the "uploadObject(true)" method, I get the following error
================== Uploading object. ==================
Forbidden
My modified "uploadObject" method is listed below:
private static void uploadObject(boolean useCustomMetadata) throws IOException {
    View.header1("Uploading object.");

    File file = new File("My_test_upload_file.txt");
    if (!file.exists() || !file.isFile()) {
        System.out.println("File does not exist");
        System.exit(1);
    }

    InputStream inputStream = new FileInputStream(file);
    long byteCount = file.length();
    InputStreamContent mediaContent = new InputStreamContent("application/octet-stream", inputStream);
    mediaContent.setLength(byteCount);

    StorageObject objectMetadata = null;
    if (useCustomMetadata) {
        List<ObjectAccessControl> acl = Lists.newArrayList(); // empty acl (seems default acl).
        objectMetadata = new StorageObject()
                .setName("myobject")
                .setMetadata(ImmutableMap.of("key1", "value1", "key2", "value2"))
                .setAcl(acl)
                .setContentDisposition("attachment");
    }

    Storage.Objects.Insert insertObject = storage.objects().insert("mybucket", objectMetadata, mediaContent);
    if (!useCustomMetadata) {
        insertObject.setName("myobject");
    }

    if (mediaContent.getLength() > 0 && mediaContent.getLength() <= 2 * 1000 * 1000 /* 2MB */) {
        insertObject.getMediaHttpUploader().setDirectUploadEnabled(true);
    }
    insertObject.execute();
}
In the first run of the program, a bucket is created and I get the "Forbidden" error when uploading my test file. In subsequent runs, the "Forbidden" error persists.
I think that since the bucket is created by the program, the program should have enough access rights to upload a file to that bucket.
Is there any setup or operation that I have missed? Thanks for any suggestions.
Oh, what a careless mistake. I had forgotten to change the "mybucket" name to my created bucket's name.
The program now runs OK.