How to control the volume of a Raspberry Pi with Python 3? - raspberry-pi

I have been searching the web for any reference on controlling the volume of a Raspberry Pi (B+) with a Python script. I came across a previously asked thread, but python-alsaaudio doesn't work with Python 3, or in the Thonny Python IDE.
So I need to know the correct way to change the volume of the Pi based on user input.

Another way is to control the volume through a command-line tool. ALSA provides a command-line mixer called amixer:
amixer sset Master 50%
Now you can create a simple Python script that runs the above command:
import subprocess

# volume as a percentage, from 0 to 100
volume = 50

# equivalent to running: amixer sset Master 50%
command = ["amixer", "sset", "Master", "{}%".format(volume)]
subprocess.run(command, check=True)
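Since the goal is to change the volume based on user input, the command can be wrapped in a small helper. A minimal sketch, where the set_volume name and the prompt are illustrative:
import subprocess

def set_volume(volume):
    # clamp to the 0-100 range that amixer expects
    volume = max(0, min(100, int(volume)))
    subprocess.run(["amixer", "sset", "Master", "{}%".format(volume)], check=True)

set_volume(input("Volume (0-100): "))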
You can replace Master with any other mixer control. You can get a list of the available controls with:
$ amixer scontrols
Simple mixer control 'Master',0
Simple mixer control 'PCM',0
Simple mixer control 'Line',0
Simple mixer control 'CD',0
Simple mixer control 'Mic',0
Simple mixer control 'Mic Boost (+20dB)',0
Simple mixer control 'Video',0
Simple mixer control 'Phone',0
Simple mixer control 'IEC958',0
Simple mixer control 'Aux',0
Simple mixer control 'Capture',0
Simple mixer control 'Mix',0
Simple mixer control 'Mix Mono',0
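As an aside, newer releases of pyalsaaudio do support Python 3 (from around version 0.8 onwards), so the approach from the linked thread may work after upgrading the package. A minimal sketch, assuming the package is installed and a Master control exists:
import alsaaudio

# open the Master mixer control on the default sound card
mixer = alsaaudio.Mixer('Master')
mixer.setvolume(50)       # set the volume to 50%
print(mixer.getvolume())  # current volume per channel, e.g. [50, 50]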

Related

Nodered - No data from smartmeter

I am running Node-RED on a Raspberry Pi 4 as a Docker container (as well as Mosquitto, TimescaleDB, and Grafana). But I fail to get data from a smart meter (SML protocol) into Node-RED. The Pi is connected to an optical sensor via a USB cable, and I do get data on the Pi itself (sudo cat /dev/ttyUSB0 | od -tx1).
I cannot find any parameter configuration for the smartmeter node (node-red-contrib-smartmeter) that gets any data into Node-RED. Below you see the flow (connection details: 9600 baud, 8N1 - this should be fine, since it is from the manual and has already worked before).
To check the serial device connection from the Docker container, I installed the serialport node in Node-RED. After some adjustments, the serialport node could connect to /dev/ttyUSB0. Now I get values - strange ones - from my serial device into Node-RED.
But the smartmeter node still returns no values, even though its parameters are the same as the serialport node's. Do you have any idea? Is there an alternative to the smartmeter node that should work?
Thank you very much in advance!

How to make node-red run 24/7 on Raspberry Pi

I want to host Node-RED 24/7 on a Raspberry Pi. Something was running it constantly; then the power went out, and when it came back, the data stopped coming in.
I want Node-RED to run constantly (at least whenever the Raspberry Pi is running). I tried making a .sh file to keep it running; here is the code:
node-red
That is the command-line invocation that runs Node-RED.
That did not work. Is there something I can do about this?
What you're looking for is a process manager; there are many process managers available, each with its own features.
Natively, many Linux distributions support this functionality via a system called systemd (also called an init system).
Docker is also an option, but it may be excessive for your use case.
Some popular process manager applications are:
supervisord
PM2
Additional reading:
A supervisord tutorial on DigitalOcean by Alex Garnett
A systemd tutorial on DigitalOcean by Justin Ellingwood
The Node-RED install instructions for Raspberry Pi cover this in the Running as a service section:
https://nodered.org/docs/getting-started/raspberrypi#running-as-a-service
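If you are not using the packaged service, a minimal systemd unit might look like the sketch below; the paths, user, and restart policy are assumptions, and the nodered.service installed by the official script is more complete:
[Unit]
Description=Node-RED
After=network.target

[Service]
# assumes node-red is on the PATH of the pi user
User=pi
ExecStart=/usr/bin/env node-red
Restart=on-failure

[Install]
WantedBy=multi-user.target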
Most likely you used node-red-start to start Node-RED as a service, but did not complete the later step of enabling the service to start on boot:
Autostart on boot
If you want Node-RED to run when the Pi is turned on, or re-booted,
you can enable the service to autostart by running the command:
sudo systemctl enable nodered.service
To disable the service, run the command:
sudo systemctl disable nodered.service

How can I find "Exec" node's commands to program a Raspberry Pi in Node-RED?

I am new to programming the Raspberry Pi; can you help me find commands for the "Exec" node?
vcgencmd measure_temp
For example, this command returns the temperature of the Raspberry Pi if you put it in an "Exec" node in Node-RED.
There is no list of commands for the exec node.
This node can be used to run any application that can be executed on the platform running Node-RED - in this case, Linux.
(These applications should be things that run without requesting further input after starting and that provide output on stdout/stderr.)
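A few one-shot commands in that spirit, all standard on Raspberry Pi OS, that work well in an exec node:
vcgencmd measure_temp    # SoC temperature
vcgencmd measure_volts   # core voltage
hostname -I              # the Pi's IP address(es)
df -h /                  # free space on the root partition
uptime                   # uptime and load averages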

Matlab/Simulink: run batch of simulations in parallel?

I have to run a series of simulations and save the results. Since MATLAB only uses one core by default, I wonder if it is possible to open multiple worker tasks and assign different simulation runs to them?
You could run each simulation in a separate MATLAB instance and let the OS handle the process-to-core assignment.
One master MATLAB instance could synchronize the child instances, for example by checking whether their simulation result files exist yet.
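As a rough sketch of that idea from the shell, where run_sim.m and the parameter values are placeholders (-nodisplay, -nosplash, and -r are standard MATLAB startup flags):
# launch one headless MATLAB per parameter and let the OS schedule them
for p in 1 2 3 4; do
  matlab -nodisplay -nosplash -r "run_sim($p); exit" > sim_$p.log 2>&1 &
done
wait  # block until all four simulations have finished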
I also have the same problem, but I did not manage to really understand how to do it in MATLAB; the MATLAB documentation is too advanced for working out how.
Since I am working on Ubuntu, I found a way to do the work by calling the unix command from MATLAB and using GNU parallel.
So I managed to run my simulations in parallel on 4 cores:
unix('parallel --progress -j4 flow > /dev/null :::: Pool.txt','-echo')
You can find more info in this link:
Shell, run four processes parallel
Details of the syntax can be found at https://www.gnu.org/software/parallel/, but briefly:
--progress shows the progress status
-j4 sets the number of jobs you want to run in parallel
flow is the name of my simulator
> /dev/null just keeps the simulator's screen output from showing up
Pool.txt is a file I made with the required simulator input, basically the path and the main simulator file
-echo I do not remember now what it was for :D
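For reference, GNU parallel can also take the argument list inline with ::: instead of reading it from a file with ::::. A generic sketch, with flow and the input names as placeholders:
# run flow once per input file, four jobs at a time
parallel --progress -j4 flow {} ::: case1.in case2.in case3.in case4.in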

Using IPython Parallel on the Sun Grid Engine

I'm trying to use IPython Parallel for a very common scenario - running simulations on a cluster managed by Sun Grid Engine - and I can't find a reliable way to do this.
Here's what I am trying to do:
I want to run numerical simulations (using NumPy arrays) with several different parameter values - the tasks are obviously/'embarrassingly' parallel. I have access (through SSH) to the head node of a cluster running Grid Engine. Until now, I was submitting shell scripts with the qsub command, but this is quite clumsy (handling node crashes, etc.), and I was looking for a way to do all of this in Python.
IPython seems ideally suited for this scenario, but it's turning out to be cumbersome to get the setup working smoothly. I start n (say 20) engines using ipcluster on the head node, and then copy the .json files to my local machine, from where I connect using IPython.parallel.Client.
I have set IPClusterStart.controller_launcher_class = 'SGEControllerLauncher'
and IPClusterEngines.engine_launcher_class = 'SGEEngineSetLauncher'.
ipcluster seems to be running fine; I get this output from the head node in the SSH terminal:
-- [IPClusterStart] Starting Controller with SGEControllerLauncher
-- [IPClusterStart] Job submitted with job id: '143396'
-- [IPClusterStart] Starting 4 Engines with SGEEngineSetLauncher
-- [IPClusterStart] Job submitted with job id: '143397'
-- [IPClusterStart] Engines appear to have started successfully
However, I have these issues:
Very often, many of the engines fail to register with the controller, even after I see the message above saying the engines have started successfully. When I start ipcluster with 20 engines, I can see 10-15 engines showing up in the Grid Engine queue. I have no idea what happens to the other engines - there are no output files. Of the 10-15 engines that do start, only some register with the controller, and I see this in their output files:
... [IPEngineApp] Using existing profile dir: .../.ipython/profile_sge'
... [IPEngineApp] Loading url_file ... .ipython/profile_sge/security/ipcontroller-engine.json'
... [IPEngineApp] Registering with controller at tcp://192.168.87.106:63615
... [IPEngineApp] Using existing profile dir: .../.ipython/profile_sge'
... [IPEngineApp] Completed registration with id 0
On others I see this:
... [IPEngineApp] Using existing profile dir: .../.ipython/profile_sge'
... [IPEngineApp] Loading url_file .../.ipython/profile_sge/security/ipcontroller-engine.json'
... [IPEngineApp] Registering with controller at tcp://192.168.87.115:64909
... [IPEngineApp] Registration timed out after 2.0 seconds
Any idea why this happens?
Sometimes the engines start and register successfully, but they start dying when I make them run something very simple, like view.execute('%pylab'), and the only exception I get back is this:
[Engine Exception]
Traceback (most recent call last):
File "/Library/Frameworks/EPD64.framework/Versions/7.3/lib/python2.7/site-packages/IPython/parallel/client/client.py", line 708, in _handle_stranded_msgs
raise error.EngineError("Engine %r died while running task %r"%(eid, msg_id))
EngineError: Engine 1 died while running task 'b9601e8a-cff5-4037-b9d9-a0b93ca2f256'
Starting the engines this way means that I occupy the nodes and the queue for as long as the engines are running, even if they aren't executing anything. Is there an easy way to start the engines so that they are spawned only when I want to run a script, and close once they return the results of their computation?
Grid Engine seems to start the controller on a different node every time, so the --reuse flag in the ipcluster config files is not useful; I have to copy the JSON files every time I use ipcluster. Is there a way to avoid this?
It would be really helpful if someone could give a simple workflow for this common scenario: using IPython Parallel to submit embarrassingly parallel jobs to an SGE cluster over an SSH connection. There should be some way of handling resubmission after engine crashes, and it would also be nice if there were a way to use the cluster resources only for the duration of the simulation.
This comes a little late, and it doesn't actually answer your specific questions. However, have you tried pythongrid?
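For completeness, the client side of the workflow described above might look something like this. A sketch against the old IPython.parallel API used in the question; run_simulation and the parameter list are placeholders, and the retries flag illustrates one way to get crashed tasks resubmitted:
from IPython.parallel import Client

# connect using the JSON file copied from the head node
rc = Client('ipcontroller-client.json')

# a load-balanced view hands tasks to whichever engines are alive
lview = rc.load_balanced_view()
lview.retries = 2  # resubmit a task up to twice if its engine dies

def run_simulation(param):
    # placeholder for the actual numerical simulation
    import numpy as np
    return np.arange(param).sum()

# map the parameter sweep across the engines and collect the results
results = lview.map_sync(run_simulation, [10, 100, 1000])
print(results)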