Good afternoon
I'm using Ambari, Zeppelin, the %spark2.pyspark interpreter, and a cron expression to execute the process, but sometimes RemoteInterpreterServer.java[createInterpreter] doesn't work.
I have two logs; one of them shows the job executing correctly at exactly 03:29:00.
And the other does not show this information, nor does it show an error.
This happens every two days, and the only way to make it execute again is to restart all of Zeppelin from Ambari.
What is the reason for this inconsistency?
Thanks a lot for your help
A while ago I wrote myself a nice SQL Developer plugin (back then for Oracle SQL Developer v19.x).
I haven't used it for a while, and in the meantime I migrated to SQL Developer v21.2.1.204.
When I wanted to run my plugin again, no output was displayed anywhere. Where does the output generated by a plugin and emitted by dbms_output.put_line(...) end up?
In "Messages - Log", which used to be the tab where the output ended up, the execution only emits a final "PL/SQL procedure successfully completed." but nothing else.
For my colleagues who still run Oracle SQL Developer v19 it still works - all output goes to "Messages - Log".
I also tried "Dbms Output" (View --> Dbms Output) but nothing appears there.
Thus my question: where does the output of an SQL Developer plugin go in SQL Developer v21+? Do I need to enable anything beforehand to capture or redirect its output?
Nevermind - problem solved:
While experimenting I had commented out the script's preamble:
set serveroutput on;
set wrap off;
set linesize 4000;
...
and then - of course - no script output is returned to SQL Developer.
Everything's working now...
I want to segment the pelvis in MRI scans from the SMIR dataset using the MONAI Label plugin. I have read quite a lot about this plugin; however, I can't perform the segmentation well enough yet.
These are the steps I take to do so:
connecting to the server using the Anaconda prompt
enabling the plugin in Slicer and loading one image
labeling the pelvis manually using the paint button in the Scribbles section and then updating
clicking "Submit Label"
and then repeating the process for some other images while the network is being trained.
After the training process, I actually could not see anything when opening the mask files.
I also encounter these two errors:
" AssertionError: Not a valid Label "
" TypeError: object of type ‘NoneType’ has no len() "
Is this the correct way of using MONAI Label, or am I missing something?
Which aspects should I take into account before starting the process?
I would appreciate your help and suggestions.
Thank you
My simple experiment reads from an Azure Storage Table, selects a few columns, and writes to another Azure Storage Table. The experiment runs fine on its workspace (let's call it Workspace1).
Now I need to move this experiment as-is to another workspace (call it Workspace2) using PowerShell and be able to run it there.
I am currently using this Library - https://github.com/hning86/azuremlps
Problem:
When I copy the experiment from Workspace1 to Workspace2 using 'Copy-AmlExperiment', the experiment and all its properties get copied except the Azure Table account key.
This experiment runs fine if I manually enter the account key for the Import/Export modules on studio.azureml.net.
But I am unable to do this via PowerShell. If I export the copied experiment from Workspace2 as JSON (Export-AmlExperimentGraph), insert the AccountKey into the JSON file, and import it back into Workspace2 (Import-AmlExperiment), the experiment fails to run.
In PowerShell I get an "Internal Server Error : 500".
While running on studio.azureml.net, I get the notification: "Your experiment cannot be run because it has been updated in another session. Please re-open this experiment to see the latest version."
Is there any way to move an experiment with external dependencies to another workspace and run it?
Edit: I think the problem has something to do with how the experiment handles the AccountKey. When I enter it manually, it is converted into a JSON array comprising RecordKey and IndexInRecord. But when I upload the JSON experiment with the AccountKey in it, it remains as plain text and does not get resolved into RecordKey and IndexInRecord.
For me, publishing the experiment as a private experiment to the Cortana Gallery is one of the most useful options. Only people with the link can see and add the experiment from the gallery. In the link below I've explained the steps I followed.
https://naadispeaks.wordpress.com/2017/08/14/copying-migrating-azureml-experiments/
When the experiment is copied, the password is wiped for security reasons. If you want to programmatically inject it back, you have to set another metadata field to signal that what you are setting is a plain-text password, not an encrypted one. If you export the experiment in JSON format, you can easily figure this out.
I think I found the reason you are unable to get the credentials back in.
Export the JSON graph to your local disk, then update whatever parameter has to be updated.
You will also notice that the credentials are stored as 'Placeholders' instead of 'Literals', so it makes sense to change them to Literals.
You can do this by traversing the JSON to find the relevant parameters you need to update.
Here is a brief illustration.
Changing the Placeholder to a Literal:
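As a rough sketch of what that edit can look like (in Python, since it is only JSON manipulation): the property names 'ModuleNodes', 'ModuleParameters', and 'Account Key', as well as the file names, are assumptions made for illustration only; open your own Export-AmlExperimentGraph output to check the exact names before adapting this.
import json

# Load the graph exported with Export-AmlExperimentGraph.
# All property and file names below are assumptions; verify them against your own export.
with open('experiment.json') as f:
    graph = json.load(f)

def set_literal(params, name, value):
    # Find the named parameter and store the value as a plain-text 'Literal'
    # (after Copy-AmlExperiment it typically appears as a 'Placeholder' with no value).
    for p in params:
        if p.get('Name') == name:
            p['Value'] = value
            p['ValueType'] = 'Literal'
            return True
    return False

# Patch the account key on every module that carries such a parameter.
for module in graph.get('ModuleNodes', []):
    set_literal(module.get('ModuleParameters', []), 'Account Key', '<your-storage-account-key>')

with open('experiment-patched.json', 'w') as f:
    json.dump(graph, f, indent=2)
The patched file can then be imported back with Import-AmlExperiment as described in the question.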
I want to save the details of each worker as seen in the mpiprofile summary report. profsave does not capture all the details shown in the report generated by the mpiprofile viewer option.
Is there any other way to save the report?
One way I figured out to do something similar is the following.
I am assuming you are using "pmode" because, as far as I understand, you need to run in "pmode" to use mpiprofile.
So this is how it goes.
Saving:
Save the mpiprofile info inside each worker (often called a lab, as far as I understand):
mpistruct = mpiprofile('info');
Then transfer it to the client; pmode lab2client needs the lab number, so for lab 1:
pmode lab2client mpistruct 1;
Then save mpistruct into a .mat file:
save('mpistruct.mat', 'mpistruct');
Loading:
When you are trying to view the mpiprofile result:
Load the .mat file:
load('mpistruct.mat');
Run the mpiprofile viewer:
mpiprofile('viewer', mpistruct);
This should pop up the browser.
Note: the above code was tested with R2015b.
(I wrote the exact same answer on the MATLAB community forum and copied it here for your convenience.)
Hi, I'm using a Confluence macro called 'PocketQuery' (PQ). I have connected to a server located at my client's site through PostgreSQL. I run PQ to fetch results from the database into my Confluence page. However, it is fetching an extra unwanted word, "Hallo", along with every result. I am unable to figure out where this string may be coming from and how it gets attached to my results like this. Please help me get rid of it.
For example, I run a PQ query on the database that is supposed to fetch the result "Jack London", but the result I see is "hallo Jack London".
Note: I use a VPN to connect to my client's server and Confluence.
Are you using the latest version from the Marketplace, 1.14.6? This issue shouldn't exist in the latest version.
I upgraded to version 1.14.6 of Confluence's PocketQuery macro. The issue I had is resolved; the unwanted string no longer appears in the results. The bad part is that they don't mention it anywhere in the macro's bug fixes; there are no release notes attached to this fix. Thank you, Felix, for your help.