Starting Parpool in MATLAB

I tried starting parpool in MATLAB R2015b with the following command:
parpool('local',3);
This command should allocate 3 workers, but instead I received an error stating that the parallel pool failed to start. The error message is as follows:
Error using parpool (line 94)
Failed to start a parallel pool. (For information in addition to
the causing error, validate the profile 'local' in the Cluster Profile
Manager.)
A similar question was posted at https://nl.mathworks.com/matlabcentral/answers/196549-failed-to-start-a-parallel-pool-in-matlab2015a. I followed the same procedure and validated the local profile as suggested there. Adding distcomp.feature('LocalUseMpiexec', false); or distcomp.feature('LocalUseMpiexec', true); to startup.m brought no improvement (a sketch of my startup.m is at the end of this question). Attempting to validate the local profile still gives the following error report:
VALIDATION DETAILS
Profile: local
Scheduler Type: Local
Stage: Cluster connection test (parcluster)
Status: Passed
Description:Validation Passed
Command Line Output:(none)
Error Report:(none)
Debug Log:(none)
Stage: Job test (createJob)
Status: Failed
Description:The job errored or did not reach state finished.
Command Line Output:
Failed to determine if job 24 belongs to this cluster because: Unable to read file
'C:\Users\varad001\AppData\Roaming\MathWorks\MATLAB\local_cluster_jobs\R2015b\Job24.in.mat'.
No such file or directory.
Error Report:(none)
Debug Log:(none)
Stage: SPMD job test (createCommunicatingJob)
Status: Failed
Description:The job errored or did not reach state finished.
Command Line Output:
Failed to determine if job 25 belongs to this cluster because: Unable to read file
'C:\Users\varad001\AppData\Roaming\MathWorks\MATLAB\local_cluster_jobs\R2015b\Job25.in.mat'.
No such file or directory.
Error Report:(none)
Debug Log:(none)
Stage: Pool job test (createCommunicatingJob)
Status: Skipped
Description:Validation skipped due to previous failure.
Command Line Output:(none)
Error Report:(none)
Debug Log:(none)
Stage: Parallel pool test (parpool)
Status: Skipped
Description:Validation skipped due to previous failure.
Command Line Output:(none)
Error Report:(none)
Debug Log:(none)
I am receiving these errors only on my cluster machine; launching parpool on my standalone PC works perfectly. Is there a way to rectify this issue?
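For reference, this is a sketch of what my startup.m contains (I tried each setting separately, not both at once):
% startup.m (sketch of what was tried, one line enabled at a time)
distcomp.feature('LocalUseMpiexec', false);
% distcomp.feature('LocalUseMpiexec', true);   % the alternative setting, also tried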

Related

The program expected this account to be already initialized

I'm using the Solana package with the Jupiter token swap (via the Jupiter Swagger collection). After getting the swap result, I execute the transaction with the Solana package's sendTransaction method, and at that point I get the error below.
{accounts: null,
 err: {InstructionError: [0, {Custom: 3012}]},
 logs: [
   Program JUP3c2Uh3WA4Ng34tw6kPd2G4C5BB21Xo36Je1s32Ph invoke [1],
   Program log: Instruction: SetTokenLedger,
   Program log: AnchorError caused by account: token_account. Error Code: AccountNotInitialized. Error Number: 3012. Error Message: The program expected this account to be already initialized.,
   Program JUP3c2Uh3WA4Ng34tw6kPd2G4C5BB21Xo36Je1s32Ph consumed 5139 of 600000 compute units,
   Program JUP3c2Uh3WA4Ng34tw6kPd2G4C5BB21Xo36Je1s32Ph failed: custom program error: 0xbc4
 ],
 unitsConsumed: 0}
Jupiter can create up to 3 transactions (setupTransaction, swapTransaction, cleanupTransaction). Make sure that you execute all of them (if they are not null) in this order.
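A minimal sketch of executing them in order, assuming the legacy Jupiter swap response exposes setupTransaction, swapTransaction and cleanupTransaction as base64-encoded serialized transactions, a Node environment (for Buffer), and a hypothetical wallet object exposing signTransaction; this is an illustration, not Jupiter's official client code:

import { Connection, Transaction } from "@solana/web3.js";

// Hypothetical helper: run the up-to-three Jupiter transactions in order,
// waiting for confirmation of each before sending the next.
async function executeJupiterSwap(
  connection: Connection,
  wallet: { signTransaction(tx: Transaction): Promise<Transaction> }, // assumed wallet shape
  swapResult: {
    setupTransaction?: string;    // base64, may be absent
    swapTransaction: string;      // base64
    cleanupTransaction?: string;  // base64, may be absent
  }
): Promise<void> {
  const encodedTxs = [
    swapResult.setupTransaction,
    swapResult.swapTransaction,
    swapResult.cleanupTransaction,
  ].filter((tx): tx is string => Boolean(tx)); // skip the ones that are null/undefined

  for (const encoded of encodedTxs) {
    // Deserialize, sign, send, and confirm before moving on.
    const tx = Transaction.from(Buffer.from(encoded, "base64"));
    const signed = await wallet.signTransaction(tx);
    const signature = await connection.sendRawTransaction(signed.serialize());
    await connection.confirmTransaction(signature, "confirmed");
  }
}

Waiting for each confirmation matters here: the setup transaction is what initializes the token account that the swap instruction then expects to exist, which is exactly what the AccountNotInitialized error is complaining about.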

The slice of type 'Discovery' is 'Aborted' because of the error : System.Exception: NUnit Adapter 4.0.0.0: Test discovery complete

I tried to run Azure DevOps tests in a build pipeline. The tests are executed on a new agent, and I got the following error.
Setup Azure DevOps
##[error]The slice of type 'Discovery' is 'Aborted' because of the error : System.Exception: NUnit Adapter 4.0.0.0: Test discovery complete
Received the command : Stop
TestExecutionHost.ProcessCommand. Stop Command handled
SliceFetch Aborted. Moving to the TestHostEnd phase
Test run '1007278' is in 'Aborted' state.
##[error]Test run is aborted. Logging details of the run logs.
##[error]System.Exception: The test run was aborted, failing the task.
The problem is that after the slicing process the test case filter wasn't working. The solution was to rename "Category" to "TestCategory" in the TestCaseFilter.
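For illustration, a hypothetical snippet of the VSTest task with the corrected filter; the assembly pattern, category name and batching option are placeholders, and only the TestCategory key reflects the actual fix:

- task: VSTest@2
  inputs:
    testAssemblyVer2: |
      **\*Tests.dll
      !**\obj\**
    testFiltercriteria: 'TestCategory=Regression'   # was 'Category=Regression', which no longer matched
    distributionBatchType: basedOnTestCases         # slicing the run across agents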

SAM deployment failed with error: Waiter StackCreateComplete failed: Waiter encountered a terminal failure state

When I try to deploy a package with SAM, the first status that appears in the CloudFormation console is ROLLBACK_IN_PROGRESS, and it then changes to ROLLBACK_COMPLETE.
I have tried deleting the stack and deploying again, but the same issue occurs every time.
The error in the terminal looks like this:
Sourcing local options from ./SAMToolkit.devenv
SAM_PARAM_PKG environment variable not set
SAMToolkit will operate in legacy mode.
Please set SAM_PARAM_PKG in your .devenv file to run modern packaging.
Run 'sam help package' for more information
Runtime: java
Attempting to assume role from AWS Identity Broker using account 634668058279
Assumed role from AWS Identity Broker successfully.
Deploying stack sam-dev* from template: /home/***/1.0/runtime/sam/template.yml
sam-additional-artifacts-url.txt was not found, which is fine if there is no additional artifacts uploaded
Replacing BATS::SAM placeholders in template...
Uploading template build/private/tmp/sam-toolkit.yml to s3://***/sam-toolkit.yml
make_bucket failed: s3://sam-dev* An error occurred (BucketAlreadyOwnedByYou) when calling the CreateBucket operation: Your previous request to create the named bucket succeeded and you already own it.
upload: build/private/tmp/sam-toolkit.yml to s3://sam-dev*/sam-toolkit.yml
An error occurred (ValidationError) when calling the DescribeStacks operation: Stack with id sam-dev* does not exist
sam-dev* will be created.
Creating ChangeSet ChangeSet-2020-01-20T12-25-56Z
Deploying stack sam-dev*. Follow in console: https://aws-identity-broker.amazon.com/federation/634668058279/CloudFormation
ChangeSet ChangeSet-2020-01-20T12-25-56Z in sam-dev* succeeded
"StackStatus": "REVIEW_IN_PROGRESS",
sam-dev* reached REVIEW_IN_PROGRESS
Deploying stack sam-dev*. Follow in console: https://console.aws.amazon.com/cloudformation/home?region=us-west-2
Waiting for stack-create-complete
Waiter StackCreateComplete failed: Waiter encountered a terminal failure state
Command failed.
Please see the logs above.
I had set SQS as the event source for the Lambda, but had not provided permissions like the following in the Lambda's policies:
- Effect: Allow
  Action:
    - sqs:ReceiveMessage
    - sqs:DeleteMessage
    - sqs:GetQueueAttributes
  Resource: "*"
I found this error in the "Events" tab of the CloudFormation service.
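For context, a hypothetical SAM template fragment showing where that statement belongs; the function name, runtime, handler and queue resource are placeholders, and only the policy statement mirrors the one above:

MyQueueConsumerFunction:
  Type: AWS::Serverless::Function
  Properties:
    Runtime: java11                                 # placeholder runtime
    Handler: com.example.Handler::handleRequest     # placeholder handler
    Policies:
      - Version: '2012-10-17'
        Statement:
          - Effect: Allow
            Action:
              - sqs:ReceiveMessage
              - sqs:DeleteMessage
              - sqs:GetQueueAttributes
            Resource: "*"
    Events:
      QueueEvent:
        Type: SQS
        Properties:
          Queue: !GetAtt MyQueue.Arn                # placeholder queue resource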

Matlab fails to validate parallel environment

When I run Parallel >> Manage Configurations..., Matlab fails to pass the Distributed Job, the Parallel Job and the Matlabpool tests. My system has a dual-core Intel Core i5 CPU M520 @ 2.40 GHz, 2 GB RAM, Win7 64-bit, Matlab R2011b. After the failed validation, I get the following output:
Validation Details
Configuration: "local" Type: local
-------------------------------------- Stage: Find Resource
Status: Passed Description: Validation passed
Command Line Output: (none)
-------------------------------------- Stage: Distributed Job
Status: Failed Description: The given stage reached the default or
user-specified timeout.
Command Line Output: (none)
Error Report: (none)
Debug Log: LOG FILE OUTPUT:
-------------------------------------- Stage: Parallel Job
Status: Failed Description: The given stage reached the default or
user-specified timeout.
Command Line Output: (none)
Error Report: (none)
Debug Log: LOG FILE OUTPUT:
-------------------------------------- Stage: Matlabpool
Status: Failed Description: A MATLAB pool is already open and might
interfere with further testing. To avoid this, before the next test
run try executing "matlabpool close".
Command Line Output: (none)
Error Report: (none)
Debug Log: (none)
This is pretty much what I get if I've called matlabpool prior to running the validation checks. Did you pay attention to the advice given in the status report from the Matlabpool stage about closing an open matlabpool before the next test run?
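In R2011b-era syntax, that is roughly (a minimal sketch, assuming no jobs are still running on the pool):

% Close any open MATLAB pool before re-running Parallel >> Manage Configurations...
if matlabpool('size') > 0
    matlabpool close
end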

Inspect and retry resque jobs via redis-cli

I am unable to run resque-web on my server due to some issues I still have to work through, but I still need to check and retry failed jobs in my Resque queues.
Does anyone have experience with peeking at the failed-jobs queue to see what the error was, and then retrying those jobs, from the redis-cli command line?
Found a solution on the following link:
http://ariejan.net/2010/08/23/resque-how-to-requeue-failed-jobs
In the rails console we can use these commands to check and retry failed jobs:
1 - Get the number of failed jobs:
Resque::Failure.count
2 - Check each error's exception class and backtrace:
Resque::Failure.all(0,20).each { |job|
puts "#{job["exception"]} #{job["backtrace"]}"
}
The job object is a hash with information about the failed job; you can inspect it for more details. Also note that this only lists the first 20 failed jobs; to see the rest, vary the (0, 20) offset and limit (a paging sketch is included at the end of this answer).
3 - Retry all failed jobs:
(Resque::Failure.count-1).downto(0).each { |i| Resque::Failure.requeue(i) }
4 - Reset the failed jobs count:
Resque::Failure.clear
Retrying all the jobs does not reset the counter; we must clear it so that it goes back to zero.
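As mentioned in step 2, here is a sketch for paging through all failed jobs instead of only the first 20, using the same Resque::Failure calls as above (the batch size of 20 is arbitrary):

# Walk the failed queue in batches, printing each job's exception and backtrace
offset = 0
batch  = 20
while offset < Resque::Failure.count
  Resque::Failure.all(offset, batch).each do |job|
    puts "#{job['exception']} #{job['backtrace']}"
  end
  offset += batch
end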