I want to make a progress bar that updates asynchronously within the Jupyter notebook (with an ipython kernel)
Example
In [1]: ProgressBar(...)
Out[1]: [|||||||||------------------] 35% # this keeps moving
In [2]: # even while I do other stuff
I plan to spin up a background thread to check and update progress. I'm not sure how to update the rendered output, though (or even whether this is possible).
This might help put you on the right track. The code is taken from lightning-viz, which borrowed a lot of it from matplotlib. Warning: this is all pretty under-documented.
In Python you have to instantiate a comm object:
from IPython.kernel.comm import Comm
comm = Comm('comm-target-name', {'id': self.id})
Full code here: https://github.com/lightning-viz/lightning-python/blob/master/lightning/visualization.py#L15-L19. The id is just there in case you want to manage multiple different progress bars, for example.
Then do the same in JavaScript:
var IPython = window.IPython;
IPython.notebook.kernel.comm_manager.register_target('comm-target-name', function(comm, data) {
    // the data here contains the id you set above, useful for managing
    // state w/ multiple comm objects
    // register the event handler here
    comm.on_msg(function(msg) {
    });
});
Full example here. Note the JavaScript on_msg code is untested, as I've only used comms to go from JS -> Python. To see what that handler looks like, see https://github.com/lightning-viz/lightning-python/blob/master/lightning/visualization.py#L90
Finally, to send a message in Python:
comm.send(data=data)
https://ipython.org/ipython-doc/3/api/generated/IPython.kernel.comm.comm.html#IPython.kernel.comm.comm.Comm.send
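Putting the pieces together, here is a rough, untested sketch of the Python side: a background thread does the work and pushes progress updates over the comm. The 'progress-1' id and the 'percent' field are my own assumptions and just have to match whatever your JavaScript on_msg handler expects.
import threading
import time
from IPython.kernel.comm import Comm  # on newer IPython/ipykernel: from ipykernel.comm import Comm

comm = Comm('comm-target-name', {'id': 'progress-1'})

def worker():
    # Stand-in for real work: report progress every half second
    for pct in range(0, 101, 5):
        time.sleep(0.5)
        comm.send(data={'id': 'progress-1', 'percent': pct})

threading.Thread(target=worker, daemon=True).start()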
Related
I have been using VS Code for over a year. Almost every day I search the web for ways to display the time taken (speed) and the space taken (during execution) by my program. This info is very important, but unfortunately I haven't found (or have missed) a way to display these metrics. VS Code is cool to use and lightweight, etc., but these metrics were visible by default in some other IDEs (like Code::Blocks). Perhaps there is some extension or setting I missed in the many articles I went through. If someone could help me out here, I'll be super grateful.
Thank you in advance
if __name__ == "__main__":
    '''
    Example code showing how to display the time and space
    taken by the program (creating an N-gram language model)
    '''
    import os, psutil, time

    start = time.time()
    m = create_ngram_model(3, '/content/train_corpus.txt')  # Replace with your custom function
    process = psutil.Process(os.getpid())
    print('Memory usage in Mega Bytes: ', process.memory_info().rss / (1024 ** 2))  # rss is in bytes, converted to MB here
    print(f'Time Taken: {time.time() - start}')
You can call psutil.Process(os.getpid()) inside any function whose memory usage you want to inspect (call it in the main thread to see the memory usage of the whole program) and read process.memory_info().rss at run time, e.g. Memory usage: 0.5774040222167969. The code above also shows how to measure the program's runtime.
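If you want this for more than one function, the same two pieces (time plus psutil) fit nicely into a small decorator. This is only a sketch; report_time_and_memory and the toy train function below are hypothetical names, not part of any library.
import os
import time
import functools
import psutil

def report_time_and_memory(func):
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        start = time.time()
        result = func(*args, **kwargs)
        rss_mb = psutil.Process(os.getpid()).memory_info().rss / (1024 ** 2)
        print(f'{func.__name__}: {time.time() - start:.2f} s, {rss_mb:.1f} MB resident')
        return result
    return wrapper

@report_time_and_memory
def train():  # stand-in for e.g. create_ngram_model(...)
    return sum(i * i for i in range(10 ** 6))

train()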
I am writing my code within a Jupyter notebook in VS Code. I am hoping to play some of the audio within my data set. However, when I execute the cell, the console reports no errors and produces the widget, but the widget displays 0:00 / 0:00 (see below), indicating there is no sound to play.
Below, I have listed two ways to reproduce the error.
I have acquired data from the hub data store. Looking specifically at the spoken MNIST data set, I cannot get the data from the audio tensor to play:
import hub
from IPython.display import display, Audio
from ipywidgets import interactive
# Obtain the data using the hub module
ds = hub.load("hub://activeloop/spoken_mnist")
# Create widget
sample = ds.audio[0].numpy()
display(Audio(data=sample, rate = 8000, autoplay=True))
The second example is a test (copied from another post) that I ran to see if it was something wrong with the data or something wrong with my console, environment, etc.
# Same imports as shown above, plus numpy
import numpy as np

# Toy function to play beats in the notebook
def beat_freq(f1=220.0, f2=224.0):
    max_time = 5
    rate = 8000
    times = np.linspace(0, max_time, rate * max_time)
    signal = np.sin(2 * np.pi * f1 * times) + np.sin(2 * np.pi * f2 * times)
    display(Audio(data=signal, rate=rate))
    return signal

v = interactive(beat_freq, f1=(200.0, 300.0), f2=(200.0, 300.0))
display(v)
I believe that if something is wrong with the data (this is a well-known data set, so I doubt it), then only the second one will play. If it is something to do with the IDE or something else, then neither will work, which is the case now.
Apologies for the late reply! In the future, please tag the questions with activeloop so it's easier to sort through (or hit us up directly in the community Slack -> slack.activeloop.ai).
Regarding the Free Spoken Digit Dataset, I managed to track down the error in your usage of Activeloop Hub and audio display.
Adding [:, 0] to the sample = ds.audio[0].numpy() line fixes the display on Colab, since Audio expects one-dimensional data:
%matplotlib inline
import hub
from IPython.display import display, Audio
from ipywidgets import interactive
# Obtain the data using the hub module
ds = hub.load("hub://activeloop/spoken_mnist")
# Create widget
sample = ds.audio[0].numpy()[:,0]
display(Audio(data=sample, rate = 8000, autoplay=True))
(When we uploaded the dataset, we decided to store the audio as (N, C), where C is the number of channels, which happens to be 1 for this particular dataset. The extra channel dimension isn't removed automatically.)
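For what it's worth, an equivalent way to drop the channel axis is numpy's squeeze, which avoids hard-coding the column index. This is just a sketch that assumes the same ds, display and Audio objects as in the snippet above.
import numpy as np

sample = ds.audio[0].numpy()         # shape (N, 1): samples x channels
mono = np.squeeze(sample, axis=1)    # shape (N,), the one-dimensional form Audio expects
display(Audio(data=mono, rate=8000, autoplay=True))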
Regarding VS Code... the audio, unfortunately, still won't work there (not because of us, but because of VS Code), but you can still try visualizing the Free Spoken Digit Dataset (you can play the audio there, too). Hopefully this addresses your needs!
Let us know if you have further questions.
Mikayel from Activeloop
I'm working on a vector layer where I have to merge all entity(n+i)[id] attributes into entity(n)[id] wherever entity(n+i)[id] equals entity(n)[id], and then delete all the n+i entities.
All works fine, but I call the startEditing function several times before committing changes, and my question is: does calling commitChanges close the session opened by startEditing, or does it leave it open, like a file descriptor or a pointer that we need to free after the job is done?
The code is:
olayer.startEditing()
olayer.changeAttributeValue(n, id_obj, id_obj_sum, NULL, True)
olayer.commitChanges()
olayer.startEditing()
i = i - 1
while i >= 1:
    olayer.deleteFeature(n + i)
    i = i - 1
olayer.commitChanges()
As you can see, we call olayer.startEditing several times, and even more so because all that code is in a while loop body...
So will that spawn hordes of startEditing "pointers", or will it just repeatedly set the olayer editable status to "open for editing"?
Actually the code works, but it's painfully slow. Is this the reason why?
By and large, you should not start the edit mode more than once on your layer.
You can commit once at the end of all your changes; in the meantime, your modifications stay in an edit buffer.
QgsVectorLayer.commitChanges() can leave the edit mode open if you pass False as a parameter (the parameter is called stopEditing, see the docs). Like this:
# If the commit is successful, leave the edit mode open
layer.commitChanges(False)
Also, have a look at the with edit(layer): syntax in the QGIS docs. Using that syntax you avoid starting/committing/closing the edit mode yourself; QGIS does it for you.
with edit(layer):
    # All my layer modifications go here
    ...
# When this line is reached, my layer modifications are already committed.
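For example, the loop from the question could become a single edit session. This is only a sketch, reusing the olayer, n, i, id_obj and id_obj_sum variables from the question; edit and NULL can be imported from qgis.core if they are not already in scope.
from qgis.core import NULL, edit

with edit(olayer):
    olayer.changeAttributeValue(n, id_obj, id_obj_sum, NULL, True)
    i = i - 1
    while i >= 1:
        olayer.deleteFeature(n + i)
        i = i - 1
# All of the above is committed once, here, when the block exits.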
When initializing a component using reactfire, each time I add a reactfire hook (e.g. useFirestoreDocData), it triggers a re-render and therefore repeats all previous initialization. For example:
const MyComponent = props => {
  console.log(1);
  const firestore = useFirestore();
  console.log(2);
  const ref = firestore.doc('count/counter');
  console.log(3);
  const { value } = useFirestoreDocDataOnce(ref);
  console.log(4);
  ...
  return <span>{value}</span>;
};
will output:
1
1
2
3
1
2
3
4
This seems wasteful; is there a way to avoid it?
This is particularly problematic when I need the result of one reactfire hook to create another (e.g. retrieving data from one document to determine which other document to read), as it duplicates the server calls.
See React's documentation on Suspense.
Particularly this part: Approach 3: Render-as-You-Fetch (using Suspense)
Reactfire uses this mechanism. It is not supposed to fetch more than once per call, even if the line is executed more than once: the machinery behind it "understands" that a fetch has already been done and moves on to the next one.
In your case, React tries to render your component, sees that it needs to fetch, stops rendering and shows Suspense's fallback while fetching. When the fetch is done it retries rendering your component, and once all fetches are complete it renders fully.
You can confirm in your network tab that each call is made only once.
I hope I'm clear; please don't hesitate to ask for more details if I'm not.
I have this IPython code:
import ipywidgets as widgets
from IPython.display import display
import time

w = widgets.Dropdown(
    options=['Addition', 'Multiplication', 'Subtraction'],
    value='Addition',
    description='Task:',
)

def on_change(change):
    print("changed to %s" % change['new'])

w.observe(on_change)
display(w)
It works as expected. When the value of the widget changes, the on_change function gets triggered. However, I want to run a long computation and periodically check for updates to the widget. For example:
for i in range(100):
    time.sleep(1)
    # poll for changes to w here, e.g.:
    # if w.has_changed:
    #     print(w.value)
How can I achieve this?
For reference, I seem to be able to do the desired polling with
import IPython
ipython = IPython.get_ipython()
ipython.kernel.do_one_iteration()
(I'd still love to have some feedback on whether this works by accident or design.)
I think you need to use threads and hook into the ZMQ event loop. This gist illustrates an example:
https://gist.github.com/maartenbreddels/3378e8257bf0ee18cfcbdacce6e6a77e
Also see https://github.com/jupyter-widgets/ipywidgets/issues/642.
To elaborate on the OP's self-answer, this does work. It forces the widgets to sync with the kernel at an arbitrary point in the loop, and can be done right before accessing widget.value.
So the full solution would be:
import IPython

ipython = IPython.get_ipython()

old_val = w.value  # start from the current selection
for i in range(100):
    time.sleep(1)
    ipython.kernel.do_one_iteration()  # sync widget state with the front end
    new_val = w.value
    if new_val != old_val:
        print(new_val)
        old_val = new_val
A slight improvement to the ipython.kernel.do_one_iteration call used above:
# Max iteration limit, in case I don't know what I'm doing here...
for _ in range(100):
    ipython.kernel.do_one_iteration()
    if ipython.kernel.msg_queue.empty():
        break
In my case, I had a number of UI elements that could be clicked multiple times between do_one_iteration calls, which processes them one at a time; with a 1-second delay, that could get annoying. This version processes at most 100 queued messages at a time. I tested it by mashing a button multiple times, and now they all get processed as soon as the sleep(1) ends.
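Combining the two snippets above into a small helper keeps the polling loop readable. This is only a sketch, reusing the same ipython, w and time objects as before; process_pending_ui_events is a made-up name.
def process_pending_ui_events(max_msgs=100):
    # Drain up to max_msgs queued front-end messages so widget values are fresh
    for _ in range(max_msgs):
        ipython.kernel.do_one_iteration()
        if ipython.kernel.msg_queue.empty():
            break

old_val = w.value
for i in range(100):
    time.sleep(1)
    process_pending_ui_events()
    if w.value != old_val:
        print(w.value)
        old_val = w.value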