Say I have made complex numeric calculations with SciPy in an IPython notebook. Now, I want to access the variables resulting from those calculations from JavaScript code (still within the same notebook).
Below is a simple illustration of what I am trying to accomplish:
# Get a vector of 4 normal random numbers using numpy - the variable 'rnd'
import numpy as np
mu, sig = 0.05, 0.2
rnd = np.random.normal(loc=mu, scale=sig, size=4)
Now, I want to use the variable rnd above in JavaScript, for illustrative purposes:
%%javascript
element.append(rnd);
The lines above return an error message: ReferenceError: rnd is not defined.
So, how can one use a Python variable in JavaScript code within the IPython notebook?
It may not be possible to do this with the %%javascript cell magic. However, you can use IPython.display.Javascript(...) to inject Python strings into the browser output area. Here's a modification of your code fragment that seems to answer your question.
from IPython.display import Javascript
import numpy as np
mu, sig = 0.05, 0.2
rnd = np.random.normal(loc=mu, scale=sig, size=4)
## Now, I want to use the variable rnd above in Javascript, for illustrative purpose:
javascript = 'element.append("{}");'.format(str(rnd))
Javascript(javascript)
Paste this code into an input cell; each time you execute the cell, a new and different array of random numbers will be displayed in the output cell.
(Code was tested with IPython version 2.2)
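A variant on the same idea, in case you want the JavaScript side to receive a real array rather than a printed string: serialize the value with json.dumps before injecting it. This is a minimal sketch building on the snippet above, not part of the original answer:
import json
from IPython.display import Javascript
import numpy as np

rnd = np.random.normal(loc=0.05, scale=0.2, size=4)

# json.dumps renders the Python list as a JavaScript array literal,
# so the injected code works with real numbers, not a string.
js_code = 'var rnd = {}; element.append(rnd.map(function(x) {{ return x.toFixed(3); }}).join(", "));'.format(json.dumps(rnd.tolist()))
Javascript(js_code)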
Arbitrary Python (including retrieving the value of variables) can be executed from the JavaScript side of things in IPython, although it is a bit messy. The following code works for me in IPython 3.1 and Python 2.7:
%%javascript
IPython.notebook.kernel.execute(
"<PYTHON CODE TO EXECUTE HERE>",
{
iopub: {
output: function(response) {
// Print the return value of the Python code to the console
console.log(response.content.data["text/plain"]);
}
}
},
{
silent: false,
store_history: false,
stop_on_error: true
}
)
I have a notebook which processes a file and creates a data frame in a structured format.
Now I need to import that data frame into another notebook, but the problem is that I need to run the notebook only for some scenarios, after validating a condition.
Usually, to import all the data structures, we use %run. But in my case it needs to be a combination of an if clause and then a notebook run:
if "dataset" in path": %run ntbk_path
its giving an error " path not exist"
if "dataset" in path": dbutils.notebook.run(ntbk_path)
this one I cannot get all the data structures.
Can someone help me to resolve this error?
To implement it correctly, you need to understand how these two commands work:
%run is a separate directive that should be put into a separate notebook cell; you can't mix it with Python code. Also, it can't accept the notebook name as a variable. What %run does is evaluate the code from the specified notebook in the context of the current Spark session, so everything that is defined in that notebook - variables, functions, etc. - is available in the caller notebook.
dbutils.notebook.run is a function that takes a notebook path plus parameters and executes the notebook as a separate job on the current cluster. Because it's executed as a separate job, it doesn't share the context with the current notebook, and everything defined in it won't be available in the caller notebook (you can return a simple string as the execution result, but it has a relatively small maximum length). One of the problems with dbutils.notebook.run is that scheduling a job takes several seconds, even if the code is very simple.
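For illustration, here is a minimal sketch of returning a small string result from a called notebook (the path /path/to/called is a placeholder):
# In the called notebook: finish by returning a short string result.
dbutils.notebook.exit("view_is_ready")

# In the caller notebook: run the notebook with a 300-second timeout
# and capture the returned string.
result = dbutils.notebook.run("/path/to/called", 300)
print(result)  # -> "view_is_ready"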
How can you implement what you need?
If you use dbutils.notebook.run, then in the called notebook you can register a temp view, and the caller notebook can read data from it (the examples are adapted from this demo).
Called notebook (Code1 - it requires two parameters: name for the view name and n for the number of entries to generate):
name = dbutils.widgets.get("name")
n = int(dbutils.widgets.get("n"))
df = spark.range(0, n)
df.createOrReplaceTempView(name)
Caller notebook (let's call it main):
if "dataset" in "path":
view_name = "some_name"
dbutils.notebook.run(ntbk_path, 300, {'name': view_name, 'n': "1000"})
df = spark.sql(f"select * from {view_name}")
... work with data
It's even possible to do something similar with %run, but it requires a kind of "magic". The foundation of it is the fact that you can pass arguments to the called notebook by using the $arg_name="value" syntax, and you can even refer to the values specified in widgets. But in any case, the check of the value will happen in the called notebook.
The called notebook could look as follows:
flag = dbutils.widgets.get("generate_data")
dataframe = None
if flag == "true":
    dataframe = ...  # create the dataframe
and the caller notebook could look as follows:
------ cell in Python
if "dataset" in path:  # your condition
    gen_data = "true"
else:
    gen_data = "false"
dbutils.widgets.text("gen_data", gen_data)
------ cell for %run
%run ./notebook_name $generate_data=$gen_data
------ again in Python
dbutils.widgets.remove("gen_data")  # remove the widget
if dataframe is not None:  # the dataframe is defined
    # do something with the dataframe
I am trying to convert these MATLAB scripts to Octave. However, getGroundTruthBoxes.m contains the following code:
freq = cell2mat(accumarray(inst(inst>0), segm(inst>0), [], @(x){linIt(histc(x,1:numClass))'}, {zeros(1,numClass)}));
When I try to run it with Octave, it gives a "linIt undefined" error. I googled the "linIt" function, but I cannot find any information about it. Can you give me some information about this "linIt" function?
Thanks.
The user s-gupta, whose repository you're using, seems to have another repository called utils, where he defines this function: https://github.com/s-gupta/utils/blob/master/matlab/linIt.m
Essentially, it is a tiny helper function that converts an array to its linearly indexed column vector, i.e. it returns A(:):
function a = linIt(A)
a = A(:);
end
I'm porting MATLAB code to Python and came across the code below. It looks like it creates a matrix, but I'm not sure what the shape of the matrix would be. Can anybody help me understand what this code means, especially '...' and '].^2'?
somevarialbe = [var1...
var2...
var3].^2;
.^ is the element-wise power operator: it raises each item to the power of 2. In other words, the code is equivalent to the following:
somevarialbe = [var1^2...
var2^2...
var3^2];
And ... means the statement continues on the next line. Hence, it is equivalent to the following code:
somevarialbe = [var1^2 var2^2 var3^2];
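Since the goal is a Python port, here is a minimal NumPy sketch of the same operation (the scalar values for var1, var2, and var3 are made up for illustration):
import numpy as np

var1, var2, var3 = 1.0, 2.0, 3.0  # stand-in values

# MATLAB's [var1 var2 var3].^2 builds a 1x3 row vector and squares it
# element-wise; in NumPy this is an array raised to the power 2.
somevariable = np.array([var1, var2, var3]) ** 2
print(somevariable)  # [1. 4. 9.]
Note that the result is a 1x3 row vector in MATLAB, but a one-dimensional length-3 array in NumPy.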
I want to remove instances (rows) with missing values.
It's simple to do with the Impute widget, but now I want to do it in the Python Script widget.
How do I do this?
Write this in the Python Script widget:
import numpy as np
from Orange.preprocess import impute
drop_instances = impute.DropInstances()
var = in_data.domain.attributes[0]  # choose the variable you want to check
mask = drop_instances(in_data, var)
out_data = in_data[np.logical_not(mask)]
If you need more information, you are welcome to ask in a comment below!
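If you want to drop rows that have a missing value in any attribute rather than a single one, you could combine the masks over all variables. This is a sketch that assumes drop_instances returns a boolean mask, as in the snippet above:
import numpy as np
from Orange.preprocess import impute

drop_instances = impute.DropInstances()

# Start with an all-False mask and mark a row whenever any attribute is missing.
mask = np.zeros(len(in_data), dtype=bool)
for var in in_data.domain.attributes:
    mask |= np.asarray(drop_instances(in_data, var), dtype=bool)
out_data = in_data[np.logical_not(mask)]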
I've just recently discovered that you can right-click an array in Spyder and get a quick plot of the data. With sample data like this:
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
# Some numbers in a data frame
nsample = 440
x1 = np.linspace(0, 100, nsample)
y = np.sin(x1)
dates = pd.date_range(pd.datetime(2016, 1, 1).strftime('%Y-%m-%d'), periods=nsample).tolist()
df = pd.DataFrame({'dates':dates, 'x1':x1, 'y':y})
df = df.set_index(['dates'])
df.index = pd.to_datetime(df.index)
you can go to the Variable explorer, right-click y, and get a plot of the data directly in the console.
The same option does not seem to be available for a pandas dataframe.
Sure, you could easily go for df.plot():
But I really like the right-click option for checking whether the variables and dataframes look the way I expect when I'm messing around with a lot of data. So, is there any library I'd have to import? Or maybe something in the settings? I've also noticed that what happens in the console is this little piece of magic: %varexp --plot y, but I can't seem to find an equivalent for data frames.
Thank you for any suggestions!
(Spyder developer here) This is just a bit of missing functionality for DataFrames, but it's very easy to implement.
Please open an issue in our issue tracker, so we don't forget to do it in a future release.