MATLAB failed to validate parallel toolbox's local cluster profile

When I validate my local cluster profile, it gives me the following errors (the second picture shows the detailed information). I googled this but can't find any feasible solution. Can anyone tell me how to solve this problem?

Related

Flow does not run on Tableau Server - where can I see the actual error?

When I run a flow in Tableau Server, it fails with the following error message:
Unfortunately this error is not helpful in understanding the actual cause of the problem.
Is there a way to see the actual underlying error? Or how am I supposed to debug this?
The flow runs fine in my Tableau Prep.
(EDIT: I originally stated here that I used a different data source to test in Prep, but this is no longer true.)
Arguably that error log does give you a hint as to what the issue is: the problem is with the Output step. Since the flow works locally in Tableau Prep, this is most likely a permissions error when Tableau Server goes to publish the output.
Are credentials for your flows able to be embedded on server? This will impact whether the output will be accessible. Are all flows run using a service account? Make sure that service account has access to the output location.
If these troubleshooting steps don't work, check the logs on Tableau Server itself to see if there is a more detailed response. If you have the access, run tsm maintenance ziplogs from the command line to zip up the log files and investigate.
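For example, a minimal invocation might look like the following (the -f and -o flags name the archive and overwrite an existing one; check tsm maintenance ziplogs -h for the options available in your version):
tsm maintenance ziplogs -f logs.zip -o
Unzip the archive and search the flow-related service logs for the detailed error.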

Azure App Service ambiguous network name error on docker-compose deploy

We've created a new App Service and successfully deployed our solution's backend through docker-compose (preview), but when we update one (or more) image in the registry and restart the App Service so that docker-compose picks up the new version, we get ambiguous network name errors like this one:
InnerException: Docker.DotNet.DockerApiException, Docker API responded with status code=BadRequest, response={"message":"network my_app_service_multi_nw__0 is ambiguous (94 matches found on name)"}
In our docker-compose.yml we don't have any network name specified (since networks is not a supported option).
At the moment, the only solution we have found is to delete the App Service entirely and create a new one (with a different name, too).
What are we doing wrong? Is it possible to prune all unused networks?
We've contacted Microsoft for support; they told us that it's a known bug and that a temporary fix could be changing the App Service plan.
Apparently, changing from S1 to B1 fixed the issue for us.
The answer from @Doc is currently not working for us, as scaling the plan up or down does not seem to reset the underlying VM. We can't delete the app or change its name, therefore we are currently stuck with this issue...
EDIT: It was not sufficient to change from B2 to S2, but changing from B2 to P1V2 fixed the issue. A hint is that when you change the plan, you should see a message at the bottom, "Outgoing IP addresses for your app might change", which indicates that your App Service app is actually migrating to a different VM and therefore resetting the Docker configuration.
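For reference, the plan change can also be scripted with the Azure CLI; the plan and resource group names below are placeholders:
az appservice plan update --name <my-plan> --resource-group <my-rg> --sku P1V2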
I got the same issue locally when trying to run with Visual Studio, and fixed it with the following steps.
Step 1: List the existing networks and find the duplicate network names
docker network ls
Step 2: Remove one of the duplicate networks
docker network rm <NETWORK ID>
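To answer the pruning part of the question: all unused networks can be removed in one go with
docker network prune
(add -f to skip the confirmation prompt).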

Kubernetes error code 403: user anonymous cannot get path

I have been trying to install Kubernetes for the first time. After the initial setup, I was finally able to execute the kube-up.bash script:
kubernetes/cluster/kube-up.bash
Everything goes well and I even see the list of cluster services installed:
Then the book I am checking says:
"Go to: https//your_master_ip/ui/"
But when I try I can only see the following:
I assumed I did not perform a proper setup during the auth process, so I did:
gcloud auth list
But my active account is there with my Google email, so I am not sure what I am doing wrong.
I am able to access the Google Cloud Platform and see the project I created for this; I can also see the traffic.
Also, the kubectl commands are not working on my system; bash throws an error saying it cannot locate the command (command not found).
Can someone please assist me?
Regards
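On the "command not found" part: when the cluster is brought up with kube-up, the client binaries usually sit inside the unpacked release rather than in a system directory, so one common fix, assuming the release was unpacked under your home directory, is:
export PATH=$PATH:$HOME/kubernetes/client/bin
kubectl cluster-info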

Accessing streamsets web UI on another node in a cluster than where installed, which file system does it 'look in'?

I have a cluster of machines hosting Hadoop (MapR) and have installed StreamSets on one of the nodes (say node002) following the RPM documentation. However, I am accessing the web UI for the Data Collector from another node, node001.
My question is: when I specify file paths (e.g. an origin directory), which file system is the web UI going to be referring to? For example, if I put an origin directory as /home/myuser/mydata, will the pipeline created in the web UI be looking for that directory on node001 or node002? I'm new to using StreamSets, so a more detailed answer would be appreciated. Thanks.
** Ultimately I am asking this because I am currently getting "FileNotFound" and "permission denied" errors while trying to follow the documentation's tutorial and am trying to debug the situation.
From the StreamSets community forums: it will be the path to the local file on the machine running that particular SDC instance.
The FileNotFound and permission errors have to do with the fact that the default user for the sdc service is a user called sdc. I'm still working out how to fix this properly, but a workable prototype can be produced by setting the read and write access on the directories in question to allow public access (this still needs work, but it answers the posted question).
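As a rough sketch of that workaround on node002 (the path is the one from the question; tighten the permissions, or give the sdc user group access, once the pipeline works):
sudo chmod o+x /home/myuser              # let the sdc user traverse the home directory
sudo chmod -R a+rwX /home/myuser/mydata  # world read/write, execute on directories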

Using MPI with two RaspberryPi

I am trying to make a 'dual core' Raspberry Pi for a project I am working on. I followed this tutorial by Simon Cox. Unfortunately I could not get the two Raspberry Pis to talk to each other (this was using Hydra as the process manager).
After looking more carefully at the MPICH installer's guide, which can be found here, I tried to use the -phrase option to pass the passphrase I had created, but I could not find it among the Hydra options. So I re-installed with smpd and, after many compilation attempts, configured with:
./configure --prefix=/home/pi/mpich-install --with-pm=smpd --with-pmi=smpd
I also had to install libssl-dev to get the MD5 support that smpd requires. I also exported the path to the directory containing the mpiexec and mpicc commands. After setting the passphrase I copied the image to a second SD card and put it in a second RasPi. I then set up the SSH keys and passphrase using ssh-keygen.
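For reference, the two-node runs were launched along these lines (hostnames, process count, and paths here are only illustrative):
mpiexec -machinefile machinefile -n 2 ./examples/cpi
where machinefile lists one Pi's hostname per line.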
I was able to run the cpi program on the master Pi and the slave Pi individually, but when I tried to run multiple processes on both at the same time I got the error:
Fatal error in MPI_Init: Other MPI error, error stack:
MPIR_Init_thread(392).................:
MPID_Init(139)........................: channel initialization failed
MPIDI_CH3_Init(38)....................:
MPID_nem_init(196)....................:
MPIDI_CH3I_Seg_commit(366)............:
MPIU_SHMW_Hnd_deserialize(324)........:
MPIU_SHMW_Seg_open(863)...............:
MPIU_SHMW_Seg_create_attach_templ(637): open failed - No such file or directory
Can someone please suggest how I can either fix this problem or get the Raspberry Pis to communicate using MPICH?
Thanks
E.Lee
If anyone else has this problem, make sure your hosts don't have the same name!
You can change it by following this tutorial http://raspi.tv/2012/how-to-change-the-name-of-your-raspberry-pi-new-hostname
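In short, on Raspbian the hostname lives in two files, so a minimal version of what that tutorial does is (pick a different, unique name for each Pi):
sudo nano /etc/hostname    # replace the old name, e.g. with raspberrypi2
sudo nano /etc/hosts       # update the 127.0.1.1 entry to match
sudo reboot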