Turn off LEDs of Raspberry Pi - raspberry-pi

I would like to turn off the LEDs of my Raspberry Pi.
I tried running echo none >/sys/class/leds/led0/trigger but nothing changed.
Is this possible?

RaspberryMediaCenter:/sys/class/leds # echo 0 >/sys/class/leds/led1/brightness
RaspberryMediaCenter:/sys/class/leds # echo 0 >/sys/class/leds/led0/brightness
led0 is the green one; led1 is the red one.

According to the Raspberry Pi forums:
echo 1 >/sys/class/leds/led0/brightness # turn on
echo 0 >/sys/class/leds/led0/brightness # turn off
I believe this only works with the OK LED, though; controlling all of them may involve some kernel hacking.

It's 2022, and the answer today is:
The documentation is located here, but it may or may not be up to date:
on your local file system: /boot/overlays/README
online at GitHub: the README file
The README is a rather choppy document, but you can find enough to get started. The parameters of interest are act_led_*, and pwr_led_*. There are three device tree parameters (dtparam) for both act_led and pwr_led: _trigger, _activelow and _gpio, but the documentation doesn't mention all possible values for them. Through guesswork, I learned the following values will turn the activity/green and power/red LEDs OFF:
To turn both act_led and pwr_led OFF, add these two lines to the file /boot/config.txt, and then reboot:
dtparam=act_led_trigger=none
dtparam=pwr_led_trigger=none
However:
Changes made on Aug 8, 2022 to the Raspberry Pi's proprietary closed-source firmware have rendered the above configuration ineffective on some models of RPi:
Raspberry Pi 3 Model B+
Raspberry Pi 4 Model B
Raspberry Pi 400
Raspberry Pi Compute Module 4
For these models, with firmware versions issued since Aug. 8, 2022, the following configuration is needed to extinguish the Red Power LED (pwr_led):
dtparam=pwr_led_trigger=default-on # The default
dtparam=pwr_led_activelow=off
There are also parameters for extinguishing the Ethernet LEDs, but they only work for the 3B+ & 4B models: eth_led0 & eth_led1. Fortunately, the documentation does enumerate the set of valid values for the 3B+ and the 4B.
UPDATE, 3/22/22: Additional details are now posted on GitHub
UPDATE, 8/27/22: A recent software/firmware change by The RPi Organization seems to have broken the device tree configuration (dtparam) that disabled the Red Power LED. A bug report was filed on 2022/08/21. I won't attempt to characterize the maintainer's responses; you may review them & draw your own conclusions.
As of now, I feel that the answer to the OP's question is that "it depends on the Raspberry Pi model". I've edited my answer above based on the latest information, but this saga will likely have more episodes! FWIW, the sysfs interface - deprecated ~ 2 years ago - still seems to be working if the correct file & value are used; the details are presented in another Q&A on the same subject.
UPDATE, 12/27/22:
Any further updates to this answer will be posted to this GitHub repo.

On the Pi you can control the two LEDs (red and green) by editing the files located under:
/sys/class/leds/led[num]
For example, to turn off the usual blinking of the green LED when the Pi is accessing the SD card, you can run (as root):
echo none > /sys/class/leds/led0/trigger
And to turn one LED on or off, you can change the value in the brightness file (as root):
echo 1 > /sys/class/leds/led0/brightness # turn on
echo 0 > /sys/class/leds/led0/brightness # turn off
This is my very inelegant workaround in Python to actually control the status:
import time
import os
# turn off the default trigger of the green LED
os.system("sudo bash -c \"echo none > /sys/class/leds/led0/trigger\"")
# turn on the green LED
os.system("sudo bash -c \"echo 1 > /sys/class/leds/led0/brightness\"")
# keep it on 5 seconds
time.sleep(5)
# turn off the green LED on PI
os.system("sudo bash -c \"echo 0 > /sys/class/leds/led0/brightness\"")

Depending on which LED you are talking about, it looks like it is not possible.
For more information, read How can I turn the lights off on my pi? (and that's also a good place to ask RPi questions)

I realize that this is an old question. But, it was the first in the Google results for me, and it didn't work for my Raspberry Pi2 B+. For anyone else like me finding this now, the techniques at http://www.jeffgeerling.com/blogs/jeff-geerling/controlling-pwr-act-leds-raspberry-pi did work.

Related

List of all registers to be queried with `r`

I was looking for a way to get the contents of the MXCSR register in WinDbg. Looking up the help for the r command I found a lot of options. I thought I had covered all registers with the command
0:000> rM 0xfe7f
However, the MXCSR register was still not included. So I did a full search in the WinDbg help, which did not give me any results.
So I continued my search in the Internet and finally found
0:000> r mxcsr
mxcsr=00001f80
I am now wondering whether there are other registers that will not be displayed by rM 0xfe7f but are available anyway. I am especially interested in user mode and the x86 and AMD64 architectures.
I had a look at dbgeng.dll (version 10.0.20153.1000) and found a few more registers by trying some strings around offset 7DC340. Based on some of that information, I found the MSDN websites x64 registers and x86 registers.
In addition I found:
brto, brfrom, exto, exfrom
The registers zmm0 through zmm15 can be used as zmm0h, possibly for the high half.
The registers xmm0/ymm0 through xmm15/ymm15 can be used as ymm0h and ymm0l, likely for the high and low half.
and some more, which didn't work either because of my CPU model or because I tried them in user mode instead of kernel mode.

Using IPMItool to set system shutdown on upper critical temperature

I've been digging quite a bit into IPMItool commands and have yet to find a comprehensive list of raw hex commands. We have approximately 90 Dell C6220 II machines on which I need to set a trigger (Dell calls these Platform Event Filters) to have the system shut down upon reaching the Upper Critical Threshold that I set (ironically, with IPMItool) for inlet temperature. Our Dell rep tells me this isn't possible and that I'll have to pull up the web interface for all 90 machines and set this by hand. They also told me it wasn't possible to set the inlet temperature thresholds with IPMItool, but I did that, so my faith in Dell is dwindling. From what little I've been able to find on the internet, it looks like I might be able to make it happen with raw hex commands. Can anyone in the great internet wild help me?
I ended up using the freeipmi tools ipmi-sensors-config and ipmi-pef-config. First I ran ipmi-sensors-config -L | grep Inlet to find which sensor number corresponded to the inlet temp (for my C6220 II machines it was sensor 16, but for my C6320s it was 110, or sometimes 10, so be sure to do this). I then ran ipmi-sensors-config -c -e '16_Inlet_Temp:Upper_Non_Critical_Threshold=30' &&
ipmi-sensors-config -c -e '16_Inlet_Temp:Upper_Critical_Threshold=32'. This sets the temps to what you want, but we're not done. We actually have to set an event to react to these thresholds. For that I ran ipmi-pef-config -c -e 'Event_Filter_4:Event_Filter_Action_Power_Off=Yes' &&
ipmi-pef-config -c -e 'Event_Filter_5:Event_Filter_Action_Power_Off=Yes'. Events 4 and 5 in my system correspond to the Temp Non-Critical and Temp Critical events for all temp sensors. To find these I ran ipmi-pef-config -o > pefconf.txt, and then used Vim to search for "Temp".
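For what it's worth, once you know the right key/value pairs, the same freeipmi tools can be driven against all ~90 machines from a single host instead of logging into each one. This is only a sketch: the bmc-hosts.txt file name and the credentials are placeholders, the -h/-u/-p options are the standard freeipmi out-of-band options (check your version's man pages), and the sensor/filter numbers must match what you found above.
import subprocess

# One BMC hostname or IP per line; the file name is made up for this example
hosts = [line.strip() for line in open("bmc-hosts.txt") if line.strip()]

settings = [
    ["ipmi-sensors-config", "-c", "-e", "16_Inlet_Temp:Upper_Non_Critical_Threshold=30"],
    ["ipmi-sensors-config", "-c", "-e", "16_Inlet_Temp:Upper_Critical_Threshold=32"],
    ["ipmi-pef-config", "-c", "-e", "Event_Filter_4:Event_Filter_Action_Power_Off=Yes"],
    ["ipmi-pef-config", "-c", "-e", "Event_Filter_5:Event_Filter_Action_Power_Off=Yes"],
]

for host in hosts:
    for cmd in settings:
        # -h/-u/-p select the remote BMC and its credentials (placeholders here)
        subprocess.run(cmd + ["-h", host, "-u", "root", "-p", "calvin"], check=True)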

Win7-64 %windir%, %path% environment variables disappear, can't reload

I am having recurring, intermittent problems with "losing" my environment variables, most onerously %windir% and %path%. The problem occurs when I have locked the keyboard and log back in. Rebooting the system (cold- and warm-boot) does not reliably bring them back, but eventually multiple iterations of booting have (so far) brought everything back.
If I open a command window and type echo %windir% and echo %path% and find that the variables exist and are properly defined, and if I leave that command window open, I can leave my system running for days without a problem.
I have captured the results of set to list all environment variables, both when the system is broken and when it is fixed. The broken list is much shorter (%windir% is not even defined, and %path% contains the definition from the registry key HKCU\Environment, but not the one from HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\Environment).
I am guessing that the boot-up process is getting sidetracked.
Spent all morning with Geek Squad but they had no concrete suggestions. (They did suggest "taking the computer back to a previous restore point", but I fear that could cause more problems... and they didn't have confidence it would help.)
Do I have any options beyond possibly reinstalling everything?
I did finally find the answer, and although I understand this subject was closed by the commenter, I thought others might want to know. This link explains it pretty well:
https://superuser.com/questions/355594/windows-7s-path-and-environment-variables-are-corrupted
The short version is this: My system PATH exceeded a Windows maximum of 2048 bytes (it was more than 2200 bytes long). When that happens, the boot process fails to instantiate PATH and WINDIR.
The "fix" was to run c:\windows\system32\systempropertiesadvanced.exe from a command prompt (because without WINDIR, you can't open the Control Panel app for environment variables), and manually extract anything from the PATH I thought I could live without, until I whittled the PATH string down to under 2048 bytes.

Difference between machine language, binary code and a binary file

I'm studying programming and in many sources I see the concepts: "machine language", "binary code" and "binary file". The distinction between these three is unclear to me, because according to my understanding machine language means the raw language that a computer can understand i.e. sequences of 0s and 1s.
Now if machine language is a sequence of 0s and 1s and binary code is also a sequence of 0s and 1s then does machine language = binary code?
What about binary file? What really is a binary file? To me the word "binary file" means a file, which consists of binary code. So for example, if my file was:
010010101010010
010010100110100
010101100111010
010101010101011
010101010100101
010101010010111
Would this be a binary file? If I google "binary file" and look at Wikipedia, I see an example picture of a binary file, which confuses me (it's not in binary?...).
Where is my confusion coming from? Am I mixing up file encodings here or what? If I were to ask someone to SHOW me what machine language, binary code and a binary file are, what would they be? =) I guess the distinction is too abstract for me.
Thanks for any help! =)
UPDATE:
In Python, for example, there is one phrase in a file I/O tutorial which I don't understand: "Opens a file for reading only in binary format." What does reading a file in binary format mean?
Machine code and binary are the same - a number system with base 2, either a 1 or a 0. But machine code can also be expressed in hex format (hexadecimal) - a number system with base 16. Binary and hex are closely related, and it's easy to convert from binary to hex and back from hex to binary. And because hex is much more readable and useful than binary, it's often what gets used and shown. For instance, the picture in your question uses hex numbers!
Let's say you have the binary sequence 1001111000001010 - it can easily be converted to hex by grouping it into blocks, each block consisting of four bits.
1001 1110 0000 1010 => 9 14 0 10 which in hex becomes: 9E0A.
One can agree that 9E0A is much more readable than the binary - and hex is what you see in the image.
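If you want to play with the conversion yourself, here is a small Python sketch of the same group-into-four-bits idea:
bits = "1001111000001010"

# Group into 4-bit blocks and map each block to one hex digit
blocks = [bits[i:i+4] for i in range(0, len(bits), 4)]
hex_digits = "".join("%X" % int(b, 2) for b in blocks)
print(blocks)       # ['1001', '1110', '0000', '1010']
print(hex_digits)   # 9E0A

# Or let Python do it in one go
print("%04X" % int(bits, 2))  # 9E0A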
I'm honestly surprised not to see the information I was looking for. Looking back though, I guess the title of this thread isn't fully appropriate to the question the OP was asking.
You guys all say "Machine Code is a bunch of numbers".
Sure, the "CODE" is a bunch of numbers, but what people are wondering (I'm guessing) is "what actually is happening physically?"
I'm quite a novice when it comes to programming, but I understand enough to feel confident in 'roughly' answering this question.
Machine code, to the actual circuitry, isn't numbers or values.
Machine code is a bunch of voltage gates that are either open or closed, and depending on what they're connected to, a certain light will flicker at a certain time etc.
I'm guessing that the "machine code" dictates the pathway and timing for specific electrical signals that will travel to reach their overall destination.
So for 010101, 3 voltage gates are closed (The 0's), 3 are open (The 1's)
I know I'm close to the right answer here, but I also know it's much more sophisticated - because I can imagine that which I don't know.
010101 would be easy instructions for a simple circuit, but what I can't begin to fathom is how a complex computer processes all of the information.
So I guess let's break it down?
x-Bit-processors tell how many bits the processor can process at once.
A bit is either 1 or 0, "On" or "Off", "Open" or "Closed"
so 32-bit processors process "10101010 10101010 10101010 10101010" - this many bits at once.
A processor is an "integrated circuit", which is like a compact circuit board, containing resistors/capacitors/transistors and some memory. I'm not sure if processors have resistors but I know you'll usually find a ton of them located around the actual processor on the circuit board
Anyways, a transistor is a switch so if it receives a 1, it sends current in one direction, or if it receives a 0, it'll send current in a different direction... (or something like that)
So I imagine that as machine code goes... the segment of code the processor receives changes the voltage channels in such a way that it sends a signal to another part of the computer (why do you think processors have so many pins?), probably another integrated circuit more specialized to a specific task.
That integrated circuit then receives a chunk of code, let's say 2 to 4 bits 01 or 1100 or something, which further defines where the final destination of the signal will end up, which might be straight back to the processor, or possibly to some output device.
Machine code is a very efficient way of taking a circuit and connecting it to a lightbulb, and then taking that lightbulb out of the circuit and switching the circuit over to a different lightbulb
Memory in a computer is highly necessary because otherwise to get your computer to do anything, you would need to type out everything (in machine code). Instead, all of the 1's and 0's are stored inside some storage device, either a spinning hard disk with a magnetic head pin that 'reads' 1's or 0's based on the charge of the disk, or a flash memory device that uses a series of transistors, where sending a voltage through elicits 1's and 0's (I'm not fully aware how flash memory works)
Fortunately, someone took the time to think up a different base number system for programming (hex), and a way to compile those numbers (translate them) back into binary. And then all software programs have branched out from there.
Each key on the keyboard creates a specific signal in binary that translates to
a bunch of switches being turned on or off using certain voltages, so that a current could be run through the specific individual pixels on your screen that create "1" or "0" or "F", or all the characters of this post.
So I wonder, how does a program 'program', or 'make' the computer 'do' something... Rather, how does a compiler compile a program of a code different from binary?
It's hard to think about now because I'm extremely tired (so I won't try) but also because EVERYTHING you do on a computer is because of some program.
There are actively running programs (processes) in task manager. These keep your computer screen looking the way you've become accustomed, and also allow for the screen to be manipulated as if to say the pictures on the screen were real-life objects. (They aren't, they're just pictures, even your mouse cursor)
(Ok I'm done. enough editing and elongating my thoughts, it's time for bed)
Also, what I don't really get is how 0's are 'read' by the computer.
It seems that a '0' must not be a 'lack of voltage', rather, it must be some other type of signal
Where perhaps something like 1 volt = 1, and 0.5 volts = 0. Some distinguishable difference between currents in a circuit that would still send a signal, but could be the difference between opening and closing a specific circuit.
If I'm close to right about any of this, serious props to the computer engineers of the world, the level of sophistication is mouthwatering. I hope to know everything about technology someday. For now I'm just trying to get through arduino.
Lastly... something I've wondered about... would it even be possible to program today's computers without the use of another computer?
Machine language is a low-level programming language that generally consists entirely of numbers. Because they are just numbers, they can be viewed in binary, octal, decimal, hexadecimal, or any other way. Dave4723 gave a more thorough explanation in his answer.
Binary code isn't a very well-defined technical term, but it could mean any information represented by a sequence of 1s and 0s, or it could mean code in a machine language, or it could mean something else depending on context.
Technically, all files are stored in binary; we just don't usually look at the binary when we view a file. However, the term binary file is usually used to refer to any non-text file; e.g. an .exe, a .png, etc.
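Regarding the update about Python's "binary format": opening a file in binary mode just means read() gives you the raw bytes instead of decoded text. A small sketch (the file name is made up):
# Binary mode ('rb'): read() returns the raw bytes, no decoding applied
with open("example.png", "rb") as f:
    data = f.read()            # a bytes object, e.g. b'\x89PNG...'
print(type(data), data[:4])

# Text mode ('r') would try to decode those bytes as text (e.g. UTF-8),
# which fails or mangles the data for a non-text file like a PNG.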
You have to understand how a computer works at its most basic level, and this will clear things up for you... Therefore I recommend reading up on things like the von Neumann architecture.
Basically, in a very simple computer you only have one memory, like an array,
which holds the instructions for your processor as well as the data, and everything is binary numbers.
Your program starts at a certain place in your memory and reads the first number...
so here comes the twist: these numbers can be instructions or data.
Your processor reads these numbers and interprets them as instructions
Example: the start address is 0
at address 0 is an instruction like "read value from address 120 into the ALU (the math unit)"
then it steps to address 1
"read value from address 121 into ALU"
then it steps to address 2
"subtract numbers in ALU"
then it steps to address 3
"if ALU-Value is smaller than zero go to address 10"
it is not smaller than zero so it steps to address 4
"go to address 20"
you see that this is a basic if(a < b)
You can write these instructions as numbers and they can be run by your processor, but because nobody wants to do this work by hand (that is what they did with punch cards in the 60s),
assembler was invented...
that looks like:
add 10, 11, 20 // load the values from addresses 10 and 11, run the addition and store the result into address 20
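To make that fetch/decode/execute idea concrete, here is a tiny toy machine in Python that runs roughly the program described above; the instruction names and encoding are made up for illustration, not any real processor:
# Toy machine: memory maps addresses to instructions or data
memory = {
    0: ("LOAD_A", 120),        # read value from address 120 into the ALU
    1: ("LOAD_B", 121),        # read value from address 121 into the ALU
    2: ("SUB", None),          # subtract the numbers in the ALU
    3: ("JUMP_IF_NEG", 10),    # if the ALU value is smaller than zero, go to 10
    4: ("JUMP", 20),           # otherwise go to 20
    10: ("HALT", None),
    20: ("HALT", None),
    120: 7,                    # data
    121: 3,                    # data
}

pc, a, b = 0, 0, 0
while True:
    op, arg = memory[pc]                  # fetch the number at the current address
    if op == "LOAD_A":   a = memory[arg]  # decode and execute it
    elif op == "LOAD_B": b = memory[arg]
    elif op == "SUB":    a = a - b
    elif op == "JUMP_IF_NEG":
        if a < 0:
            pc = arg
            continue
    elif op == "JUMP":
        pc = arg
        continue
    elif op == "HALT":
        print("halted at address", pc, "with ALU value", a)
        break
    pc += 1                               # step to the next address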
In Conclusion:
Assembler (processor instructions) can be called binary because it's stored in plain numbers
But everything else can be a Binary file, too.
In reality, if you have a simple .exe file, it is both... If you have variables in there like a = 10 and b = 20, these values can be stored somewhere between the if clauses and for loops... It depends on the compiler where it puts these.
But if you have a complex 3D-model it can be stored in a separate file with no executable code in it...
I hope it helps to clear things up a little.

"All programs are interpreted". How?

A computer scientist will correctly explain that all programs are
interpreted and that the only question is at what level. --perlfaq
How are all programs interpreted?
A Perl program is a text file read by the perl program which causes the perl program to follow a sequence of actions.
A Java program is a text file which has been converted into a series of byte codes which are then interpreted by the java program to follow a sequence of actions.
A C program is a text file which is converted via the C compiler into an assembly program which is converted into machine code by the assembler. The machine code is loaded into memory which causes the CPU to follow a sequence of actions.
The CPU is a jumble of transistors, resistors, and other electrical bits which is laid out by hardware engineers so that when electrical impulses are applied, it will follow a sequence of actions as governed by the laws of physics.
Physicists are currently working out what makes those rules and how they are interpreted.
Essentially, every computer program is interpreted by something else which converts it into something else which eventually gets translated into how the electrons in your local neighborhood fly around.
EDIT/ADDED: I know the above is a bit tongue-in-cheek, so let me add a slightly less goofy addition:
Interpreted languages are where you can go from a text file to something running on your computer in one simple step.
Compiled languages are where you have to take an extra step in the middle to convert the language text into machine- or byte-code.
The latter can easily be converted into the former by a simple transformation:
Make a program called interpreted-c, which takes one or more C files and runs the resulting program (which must not take any arguments):
#!/bin/sh
# Compile the given C files to a temporary executable, run it, then delete it
MYEXEC=/tmp/myexec.$$
gcc -o "$MYEXEC" ${1+"$@"} && "$MYEXEC"
rm -f "$MYEXEC"
Now which definition does your C program fall into? Compare & contrast:
$ perl foo.pl
$ interpreted-c foo.c
Machine code is interpreted by the processor at runtime: the same machine code supplied to a processor of a certain arch (x86, PowerPC, etc.) should theoretically work the same regardless of the specific model's 'internal wiring'.
EDIT:
I forgot to mention that an arch may add new instructions for things like accessing new registers, in which case code written to use them won't work on older processors in the range. Much like when you use an old version of a library and then try to use capabilities only found in newer versions.
Example: many Linux distros are released as i686 only, despite the fact that it's in the 'x86 family'. This is due to the use of newer instructions.
My first thought was to look inside the CPU — see below — but that's not right. The answer is much, much simpler than that.
A high-level description of a CPU is:
1. execute the current op
2. grab the next op
3. goto 1
Compare it to Perl's interpreter:
while ((PL_op = op = op->op_ppaddr(aTHX))) {
}
(Yeah, that's the whole thing.)
There can be no doubt that the CPU is an interpreter.
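The same three-step shape is what any bytecode interpreter has. Just to show the resemblance, here is a minimal dispatch loop in Python; the ops are invented for the example:
# A minimal "VM": each op is a function, the program is a list of (op, arg) pairs
def push(stack, arg): stack.append(arg)
def add(stack, _):    stack.append(stack.pop() + stack.pop())
def show(stack, _):   print(stack[-1])

program = [(push, 2), (push, 3), (add, None), (show, None)]

stack, pc = [], 0
while pc < len(program):
    op, arg = program[pc]   # 2. grab the next op
    op(stack, arg)          # 1. execute the current op
    pc += 1                 # 3. goto 1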
It just goes to show how useless it is to classify something as interpreted or not.
Original answer:
Even at the CPU level, programs get rewritten into simpler instructions to allow the CPU to execute them more quickly. This is done by changing the order in which they are executed and by executing them in parallel. For example, Intel's Hyper-Threading.
Even deeper, each instruction is considered a program of its own, one that routes electronic signals. See microcode.
The levels of interpretation are really easy to explain:
2: runtime languages (CLR, Java Runtime...) & scripting languages (Python, Ruby...)
1: assembly
0: binary code
Edit: I changed the level of scripting languages to the same level as runtime languages. Thanks for the hint. :-)
I can write a Game Boy interpreter that works similarly to how the Java Virtual Machine works, treating the Z80 machine instructions as byte code. Assuming the original was written in C [1], does that mean C suddenly became an interpreted language just because I used it like one?
From another angle, gcc can compile C into machine code for a number of different processors. There's no reason the target machine has to be the same as the machine you're compiling on. In fact, this is a common way to compile C code for AVRs and other microcontrollers.
As a matter of abstraction, the compiler's job is to translate flat text into a structure, then translate that structure into something that can be executed somewhere. Whatever is doing the execution may have its own levels of breaking out the structure before really executing it.
A lot of power becomes available once you start thinking along these lines.
A good book on this is Structure and Interpretation of Computer Programs. Even if you only get through the first chapter (or half of the first chapter), I think you'll learn a lot.
[1] I think most Game Boy stuff was hand-coded ASM, but the principle remains.