Python REPL inside a Rich Panel? - rich

Motivation:
I'm interacting with a complex piece of HW via a Data Acquisition (DAQ) system.
I would like to send complex interactive debugging control sequences to the DUT (device under test) while also seeing status in one or more Rich Live Panels.
I would like to leverage Rich to handle the screen-update grunt-work.
Normally, I might try to run the Rich Live Panels in one Python process and the REPL in another (IPython), but the DAQ does not like sharing access with more than one process.
Is there an easy way to leverage Rich Live to keep my status information displayed in Live Panels while still having access to the power and flexibility of the Python REPL?
Alternately, is there another approach that would be as relatively easy-to-use as Rich?
from rich.live import Live
from rich.layout import Layout
from rich.panel import Panel

layout = Layout(name="root")
layout.split_column(
    Layout(name="Inputs panel"),
    Layout(name="Outputs panel"),
    Layout(name="REPL panel"),
)
layout["REPL panel"].size = 20

with Live(layout, refresh_per_second=1) as live:
    while True:
        layout["Inputs panel"].update(input_panel())
        layout["Outputs panel"].update(output_panel())
        layout["REPL panel"].???  # how do I embed an interactive REPL here?

Related

Audio widget within Jupyter notebook is **not** playing. How can I get the widget to play the audio?

I am writing my code within a Jupyter notebook in VS Code. I am hoping to play some of the audio within my data set. However, when I execute the cell, the console reports no errors and produces the widget, but the widget displays 0:00 / 0:00, indicating there is no sound to play.
Below, I have listed two ways to reproduce the error.
I have acquired data from the hub data store. Looking specifically at the spoken MNIST data set, I cannot get the data from the audio tensor to play:
import hub
from IPython.display import display, Audio
from ipywidgets import interactive
# Obtain the data using the hub module
ds = hub.load("hub://activeloop/spoken_mnist")
# Create widget
sample = ds.audio[0].numpy()
display(Audio(data=sample, rate = 8000, autoplay=True))
The second example is a test (copied from another post) that I ran to see if it was something wrong with the data or something wrong with my console, environment, etc.
# Same imports as shown above, plus numpy for generating the signal
import numpy as np

# Toy function to play beats in the notebook
def beat_freq(f1=220.0, f2=224.0):
    max_time = 5
    rate = 8000
    times = np.linspace(0, max_time, rate * max_time)
    signal = np.sin(2 * np.pi * f1 * times) + np.sin(2 * np.pi * f2 * times)
    display(Audio(data=signal, rate=rate))
    return signal

v = interactive(beat_freq, f1=(200.0, 300.0), f2=(200.0, 300.0))
display(v)
I believe that if it is something wrong with the data (this is a well-known data set, so I doubt it), then only the second one will play. If it is something to do with the IDE or something else, then neither will work, as is the case now.
Apologies for the late reply! In the future, please tag the questions with activeloop so they're easier to sort through (or hit us up directly in our community Slack: slack.activeloop.ai).
Regarding the Free Spoken Digit Dataset, I managed to track down the error in your usage of Activeloop Hub and audio display.
Adding [:,0] to the 9th line fixes the display on Colab, as Audio expects one-dimensional data:
%matplotlib inline
import hub
from IPython.display import display, Audio
from ipywidgets import interactive
# Obtain the data using the hub module
ds = hub.load("hub://activeloop/spoken_mnist")
# Create widget
sample = ds.audio[0].numpy()[:,0]
display(Audio(data=sample, rate = 8000, autoplay=True))
(When we uploaded the dataset, we decided to store the audio as (N, C), where C is the number of channels, which happens to be 1 for this particular dataset. The extra channel dimension isn't removed automatically.)
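To illustrate (numpy and the mono name are mine, not part of the original snippet), the fix is simply dropping that channel axis before handing the samples to Audio; np.squeeze works equally well:

import numpy as np

sample = ds.audio[0].numpy()   # shape (N, 1): N samples, 1 channel
mono = sample[:, 0]            # shape (N,), what Audio expects
# mono = np.squeeze(sample)    # equivalent for a single-channel tensor
display(Audio(data=mono, rate=8000, autoplay=True))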
Regarding VS Code: the audio, unfortunately, would still not work (not because of us, but because of VS Code), but you can still try visualizing the Free Spoken Digit Dataset (you can play the audio there, too). Hopefully this addresses your needs!
Let us know if you have further questions.
Mikayel from Activeloop

How do I run my synchronous search function without blocking my Flutter app?

I am building a Flutter application which searches a low-level communication bus for devices and displays them in a table. Communication over the bus is slow; the default speed is 4800 bits/s.
I want to run my search in the background so that the application is not halted for 10+ seconds every time the user performs a search. I also want to add the results to the table as the search function finds them (using the onFound argument to search).
SearchBar(
  onSearch: (selection, parametersToDisplay) {
    clearSearchResults();
    communcationBus.search(selection, parametersToDisplay,
        onFound: (device) { addToSearchResults(device); });
  },
  onUpdateSearch: (display) {}, // TODO
)
You can achieve this using Isolates.
For a short conceptual intro, take a look at this Medium post by a Dart documentation writer.
The FlutterIsolate package (pub link) can help you to abstract out some of the complicated things.
You can use it to spawn a new isolate, which performs your slow operations. You can then store it in your app or use the SendPort/ReceivePort to send the result data to your main isolate.

Create Jupyter Notebook object that changes after the cell has run

I want to make a progress bar that updates asynchronously within the Jupyter notebook (with an ipython kernel)
Example
In [1]: ProgressBar(...)
Out[1]: [|||||||||------------------] 35% # this keeps moving
In [2]: # even while I do other stuff
I plan to spin up a background thread to check and update progress. I'm not sure how to update the rendered output though (or even if this is possible.)
This might help put you on the right track; the code is taken from lightning-viz, which borrowed a lot of it from matplotlib. Warning: this is all pretty under-documented.
In Python you have to instantiate a Comm object:
from IPython.kernel.comm import Comm
comm = Comm('comm-target-name', {'id': self.id})
Full code here: https://github.com/lightning-viz/lightning-python/blob/master/lightning/visualization.py#L15-L19. The id is just there in case you want to manage multiple different progress bars, for example.
Then do the same in JavaScript:
var IPython = window.IPython;
IPython.notebook.kernel.comm_manager.register_target('comm-target-name', function(comm, data) {
    // the data here contains the id you set above, useful for managing
    // state w/ multiple comm objects
    // register the event handler here
    comm.on_msg(function(msg) {
    });
});
Full example here. Note the JavaScript on_msg code is untested, as I've only used comm to go from JS -> Python. To see what that handler looks like, see https://github.com/lightning-viz/lightning-python/blob/master/lightning/visualization.py#L90
Finally, to send a message in Python:
comm.send(data=data)
https://ipython.org/ipython-doc/3/api/generated/IPython.kernel.comm.comm.html#IPython.kernel.comm.comm.Comm.send
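Putting the pieces together, something along these lines should work (an untested sketch: the ProgressBar class, the 100-step total, and the 0.1 s polling interval are all made up for illustration, and the JavaScript side above must be registered under the same 'comm-target-name'):

import threading
import time
import uuid

from IPython.kernel.comm import Comm  # on newer installs: from ipykernel.comm import Comm

class ProgressBar:
    def __init__(self, total=100):
        self.id = uuid.uuid4().hex
        self.total = total
        self.done = 0
        # Must match the target name registered on the JavaScript side
        self.comm = Comm('comm-target-name', {'id': self.id})
        threading.Thread(target=self._poll, daemon=True).start()

    def update(self, n=1):
        self.done += n

    def _poll(self):
        # Background thread: periodically push the current percentage; the JS
        # on_msg handler is responsible for redrawing the output area
        while self.done < self.total:
            self.comm.send(data={'id': self.id, 'percent': 100.0 * self.done / self.total})
            time.sleep(0.1)
        self.comm.send(data={'id': self.id, 'percent': 100})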

Controlling light using MIDI inputs

I am currently using Max/MSP to create an interactive system between lights and sound.
I am using Philips Hue lighting, which I have hooked up to Max/MSP, and now I want to trigger an increase in brightness/saturation on the input of a note from a MIDI instrument. Does anyone have any ideas how this might be accomplished?
I have built this.
I used the shell object and then fed an array of parameters into it via a JavaScript file using the Hue API. There is a lag time of 1/6 of a second between commands.
JavaScript file:
inlets = 1;
outlets = 1;

var bridge = "192.168.0.100";
var hash = "newdeveloper";
var bulb = 1;
var brt = 200;
var satn = 250;
var hcolor = 10000;

function list(bulb, hcolor, brt, satn, tran) {
    execute('PUT', 'http://'+bridge+'/api/'+hash+'/lights/'+bulb+'/state', '"{\\\"on\\\":true,\\\"hue\\\":'+hcolor+', \\\"bri\\\":'+brt+',\\\"sat\\\":'+satn+',\\\"transitiontime\\\":'+tran+'}"');
}

function execute($method, $url, $message) {
    outlet(0, "curl --request", $method, "--data", $message, $url);
}
To control Philips Hue you need to issue calls to a RESTful, HTTP-based API, as described here: http://www.developers.meethue.com/documentation/core-concepts, using the [jweb] or [maxweb] objects: https://cycling74.com/forums/topic/making-rest-call-from-max-6-and-saving-the-return/
Generally, however, to control lights you use DMX, the standard protocol for professional lighting control. Here is a somewhat lengthy post on the topic: https://cycling74.com/forums/topic/controlling-video-and-lighting-with-max/, scroll down to my post from April 11, 2014, 3:42 AM.
How to change the bri/sat of your lights is explained in the following link (registration/login required):
http://www.developers.meethue.com/documentation/lights-api#16_set_light_state
You will need to know the IP address of your Hue bridge, which is explained here: http://www.developers.meethue.com/documentation/getting-started, as well as a valid username.
Also bear in mind the performance limitations. As a general rule you can send up to 10 lightstate commands per second. I would recommend having a 100ms gap between each one, to prevent flooding the bridge (and losing commands).
Are you interested in finding out how to map this data from a MIDI input to the Philips Hue lights within Max, or are you already familiar with Max?
Using Tommy b's JavaScript (which you could put into a js object), you could, for example, scale the MIDI messages you want to use with the midiin and borax objects and map them to the outputs you want using the scale object. Karlheinz Essl's RTC library is a good place to start with algorithmic composition if you want to transform the data at all: http://www.essl.at/software.html
+1 for DMX light control via Max. There are lots of good max-to-dmx tutorials and USB-DMX hardware is getting pretty cheap. However, as someone who previously believed in dragging a bunch of computer equipment on stage just to control a light or two with an instrument, I'd recommend researching and purchasing a simple one channel "color organ" circuit kit (e.g., Velleman MK 110). Controlling a 120/240V light bulb via audio is easier than you might think; a computer for this type of application is usually overkill. Keep it simple and good luck!

Matlab Audio conversion

I recorded my voice in Matlab. Now I want to convert that audio into strings, i.e., written sentences, in Matlab. Is there a way to convert audio into text?
I'm pretty sure MATLAB does not have native speech-to-text functionality.
A quick Google search turned up at least one project integrating speech-to-text into MATLAB.
http://www.ee.ic.ac.uk/hp/staff/dmb/voicebox/voicebox.html
Other software that can translate recorded speech into text includes Microsoft's SAPI (built into Windows Vista and Windows 7, and available as a download for Windows XP) and CMU's Sphinx project. Nuance Dragon NaturallySpeaking is an option, but it is comparatively expensive. It's not obvious to me how these could be integrated into MATLAB, though.
You can achieve somewhat limited mileage using the built-in Windows Speech API. It depends on your operating system etc., and you need to follow similar principles to those in the API documentation:
http://msdn.microsoft.com/en-us/library/ms723627(v=vs.85).aspx
Using MATLAB's ActiveX server (http://www.mathworks.co.uk/help/matlab/ref/actxserver.html):
You first need to declare a speech recogniser engine:
RC = actxserver('SAPI.SpSharedRecoContext'); %connect to speech engine
And then set up various callback functions for each state of the recogniser:
RC.registerevent({'Recognition' @CallbackFunction; 'Hypothesis' @CallbackFunction; 'FalseRecognition' @CallbackFunction})
The contents of the callback function should be along these lines:
function word = CallbackFunction(varargin)
    global word
    result = varargin{length(varargin)-2};
    word = result.Phraseinfo.GetText;
end
Then finally switch the recogniser on:
RC.Recognizer.State = 'SRSActive';
You would need to reference the documentation for which callback functions are called and when.
You will also need to set up a grammar dictionary to get meaningful results, as the engine will otherwise attempt to recognise any word.