I'm building an application that uses pty.js to open up a pseudo terminal on my computer. I'm getting responses that look like:
]0;ec2-user@ip-172-31-62-237:~[?1034h[ec2-user@ip-172-31-62-237 ~]$ ls
]0;ec2-user@ip-172-31-62-237:~[ec2-user@ip-172-31-62-237 ~]$ pwd
/home/ec2-user
I'm assuming pty.js is sending back a specific encoding, but I'm not sure what the encoding is and how to decode it. Any help would be appreciated, thanks.
Those aren't responses (a response would come from the terminal); they are control sequences sent by an application (not the terminal). I see a few instances (if the escape character were shown as ^[, OSC would print as ^[] and CSI as ^[[):
]0;ec2-user@ip-172-31-62-237:~
looks like the control sequence for setting the window title (from xterm, although several other programs support it):
OSC Ps ; Pt BEL
OSC Ps ; Pt ST
...
Ps = 0 -> Change Icon Name and Window Title to Pt.
and
[?1034h
looks like another sequence from xterm's repertoire (generally not supported by other programs):
CSI ? Pm h
DEC Private Mode Set (DECSET).
...
Ps = 1 0 3 4 -> Interpret "meta" key, sets eighth bit.
(enables the eightBitInput resource).
For the given example, encoding isn't a factor.
For capturing output from your application, the script program is useful. I use a small utility (unmap) to translate the resulting typescript files into readable form, but cat -v is often adequate for this purpose.
Further reading: XTerm Control Sequences
I have two CentOS 7.4.1708 machines, both running gnuplot 4.6 patchlevel 2, which are currently behaving differently, and I can't work out why.
Install fonts:
sudo yum install dejavu-sans-mono-fonts
Then create the following GNUplot script:
cat << EOF > test.gnuplot
set terminal pngcairo enhanced font "DejaVuSansMono,10"
set encoding utf8
set title "同"
plot sin(x)
EOF
Finally, pipe it into the application:
cat test.gnuplot | gnuplot > test.png
On one machine the title renders correctly; on the other, the 同 glyph appears only as an oversized placeholder box.
I can't work out the cause of the discrepancy. The desired character is U+540C so it's not like the second machine is interpreting the input bytes any differently; it's just not rendering the glyph.
What differences in system configuration should I be looking for?
More broadly, how can I "fix" the output in the second case? I don't even particularly care if some characters end up replaced by placeholders like this (after all, not every font implements every glyph), but those placeholders being rendered at super-size is a problem.
This post is more a collection of observations than a complete answer, but perhaps it will be useful nonetheless. (I tried your example on an almost fresh install of CentOS and it does reproduce the second plot in your post.)
judging from the charset table printed by the command
fc-match -v DejaVuSansMono
it seems that U+540C is indeed not supported. Perhaps the first machine has some additional fonts installed which are used as a fallback for this particular glyph? How does the output of fc-list differ?
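fontconfig can answer this directly: its pattern syntax accepts a charset element with a hex codepoint, so you can ask each machine which installed fonts cover U+540C and which one it would fall back to. (These are standard fontconfig commands; the resulting lists will of course differ per machine.)

```shell
# List installed font families whose charset includes the glyph U+540C.
fc-list ':charset=540c' family

# Show which font fontconfig would actually pick for that glyph.
fc-match ':charset=540c'
```

Comparing this output between the two machines should reveal which extra fallback font the first machine has.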
It's hard to say whether that list is complete, but the set of fonts supporting this glyph seems to be rather limited. Nonetheless, Google Droid, for example, is available via yum, so if I do
sudo yum install google-droid-sans-fonts google-droid-sans-mono-fonts
and rerun the Gnuplot script, the plot renders in an acceptable way.
as for the size of the "fallback" box, I first noticed that its size is directly proportional to the specified font size, i.e., it also doubles if one doubles the font size. From src/wxterminal/gp_cairo.c, it seems that Gnuplot uses by default an "oversampling" strategy to render the text, i.e., it renders everything in plot->oversampling_scale times larger resolution and then scales it back (via the transformation matrix defined in void gp_cairo_initialize_context(plot_struct*)).
For example, when rendering the text, it calls
pango_font_description_set_size(desc,
    (int) (plot->fontsize * PANGO_SCALE * plot->oversampling_scale));
However, for some reason, the "fallback" box is not scaled back and thus is plot->oversampling_scale times larger than the specified font size. The default value of plot->oversampling_scale is set to GP_CAIRO_SCALE which is defined to be 20 in src/wxterminal/gp_cairo.h.
I downloaded the source of Gnuplot 4.6.2 and replaced plot->oversampling = TRUE; with plot->oversampling = FALSE; in void gp_cairo_initialize_plot(plot_struct*) in src/wxterminal/gp_cairo.c. After recompilation, the "fallback" box is rendered at the same size as the rest of the text. Unfortunately, I haven't found a way to change this behavior directly from Gnuplot.
I would like to turn a led (character device) of an embedded linux board (BeagleBone Black) on and off with a script written in D.
Via the command line an LED can be turned on and off (e.g. for the LED "USER LEDS D2 0") with:
cd /sys/class/leds/beaglebone:green:usr0
echo none > trigger
echo 1 > brightness
echo 0 > brightness
(echo none > trigger disables the default "heartbeat" flashing)
In the D Cookbook on page 93 I found info about how to make linux system calls via the C interface like follows:
void main() {
    import core.sys.posix.unistd; // analogous to #include <unistd.h>
    string hello = "Hello, world!";
    write(1 /* stdout file descriptor */, hello.ptr, hello.length);
}
Is that a suitable way to access a character device or are there better alternatives?
The unistd calls are indeed the correct way to do it. A character device in Linux is a special kind of file and is accessed the same way: you open it by path, then read or write to it, and close it when finished.
Note that open is actually inside core.sys.posix.fcntl, while read is in core.sys.posix.unistd.
You could also use std.file.write() from the D standard library to be a bit shorter. There's also chdir in there. So your shell example would literally become:
import std.file;
chdir("/sys/class/leds/beaglebone:green:usr0");
std.file.write("trigger", "none"); // write "filename", "data string"
std.file.write("brightness", "1");
std.file.write("brightness", "0");
You don't strictly have to spell out std.file.write with the full module name after the import; I just like to, since write is such a common word that the full name makes clear which one is meant.
Anyway, this function just wraps up the unistd calls for you: it opens, writes the string, and closes all in one (just like the shell echo!).
One small difference is that shell echo appends a \n to the string, which I didn't do here. If the code doesn't work, try "1\n" and such instead; maybe the device requires it. But I doubt it.
In any case, std.file.write and core.sys.posix.unistd.write aren't that different: the former is more convenient, the latter gives you more precise control.
echo "abc" | less
less receives the 4 bytes "a", "b", "c", "\x0A" over STDIN, and displays "abc" to the user in its own special way (with the alternate screen mode, etc.).
Then the user types "n" at the keyboard, and less responds by writing "Pattern not found (press RETURN)" in reverse video at the bottom-left of the terminal. We also see it print a series of tildes along the left.
Clearly less must have received the "n" character as input in order to know to attempt a search of its search buffer.
Where did it get the "n" from? I typed it into the terminal, but is the terminal attached to less's STDIN? If so, wouldn't less just have stuck the "n" into the display buffer? Well, less could tell the difference, but not if I ran echo "abc" | tail -f, for example.
How can I mess around with this file descriptor? I am building a perl program to convert mouse escape codes to key codes so I can make a special pipeline or wrapped command/program that works like a pager but can be mouse-interactive. But I can't figure out how to get in there if less's STDIN is the input file rather than the terminal itself. I really hope it's possible for me to pipe the interactive terminal output through my mapper program.
I also know that it's possible to check if a program's STDIN is a terminal or not, but that doesn't help me to find out
whether a terminal is connected at all, or
how to redirect/pipe/munge that terminal file input that may not be STDIN
Update: Okay I did some digging, it looks like I've been searching for the magical /dev/tty. Hopefully someone can show me how to mess around with it.
Update: Refined question: do I essentially need to build and run a pty for each specific pager instance whose events I need to translate? That sounds like it will be a pain, as I'll need all of this process management and such. IO::Pty::Easy looks promising, though.
if you do
$ lsof -p <pid-of-less>
you will see at the bottom something like
less 30875 user 3r CHR 5,0 0t0 1037 /dev/tty
which shows you how less has opened the controlling terminal /dev/tty to read commands.
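You can reproduce the trick in plain shell. Below is a minimal sketch in POSIX sh where stdin is a pipe, yet keyboard input is still taken by opening /dev/tty explicitly, exactly as less does. The interactive read is guarded so the snippet is a harmless no-op when no terminal is attached:

```shell
#!/bin/sh
# stdin is a pipe here, yet the inner command group can still take
# keyboard input by opening /dev/tty explicitly, just as less does.
printf 'line1\nline2\n' | {
    cat                                  # drain the piped stdin
    if [ -t 2 ]; then                    # interactive session?
        printf 'press ENTER: ' > /dev/tty
        read -r _ < /dev/tty             # reads the keyboard, not the pipe
    fi
}
```

Run interactively, the pipe contents are printed first and then the prompt waits for the keyboard; the piped data and the keyboard travel over different file descriptors.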
I grasp the basic concept of stdin, stdout, stderr and how programs work with a command line/terminal.
However, I've always wondered how utilities like less in Linux and git log work because they are interactive.
My current thought is that the program does not quit; it writes to stdout, listens for key events, and writes more to stdout until the user quits by pressing q or sending the close signal.
Is my intuition right, or is there more to it? Do they detect the number of lines and characters per line to determine how much to output? And do they always clear the screen before producing output?
Very interesting question, even if it's a bit open ended.
As mentioned by @tripleee, ncurses is a foundational library for interactive CLI apps.
A bit of history ...
Terminal == "printer" ...
To understand POSIX "terminals", you have to consider their history ... more specifically, you need to think about what a "terminal" meant in the 1970s, when you'd have a keyboard+printer attached to a serial cable. As you type, a stream of bytes flows to the mainframe, which echoes them back to the printer, causing the printer to print the command as you type it. Then, typically after pressing ENTER, the mainframe would go off and do some work and then send output back to be printed. Since it's basically a glorified dot-matrix printer, we are talking append-only here. There was no "painting the screen" or anything fancy like that.
Try this:
echo -e "Hi there\rBye"
and you'll see it print "Byethere": "Bye" overwrites the "Hi " at the start of the line. "\r" is a carriage return with no line feed. The carriage is the printer part that moves back and forth in an old dot-matrix printer and actually does the printing. So if you return the carriage to the left side of the page and fail to advance the paper (i.e., "line feed"), then you start printing over the current line of text. "terminal" == "printer".
Monitors and software terminals ... still line-oriented
So flash forward a bit, and a revolutionary tech called "monitors" comes about, giving you a virtualized terminal display that can be rewritten. Like all good tech, we innovated incrementally by adding more and more special escape codes. For example, check out the ANSI color codes. If you are on a terminal that doesn't recognize those escape codes, you'll see a bunch of gibberish in the output from the uninterpreted codes:
"methodName": ESC[32m"newInstance"ESC[39m,
"fileName": ESC[32m"NativeConstructorAccessorImpl.java"ESC[39m,
"className": ESC[32m"sun.reflect.NativeConstructorAccessorImpl"ESC[39m,
"nativeMethod": ESC[33mfalseESC[39m,
When your terminal sees '\033' (ESC), '[', ..., 'm', it interprets it as a command to change the color. Try this:
echo -e "\033[32mgreen\033[39m"
So anyway, that's the history/legacy of the Unix terminal system, which was then inherited by Linux and the BSDs (e.g., macOS) and semi-standardized as POSIX. Check out termios.h, which defines the kernel interface for interacting with terminals. Almost certainly, Linux/BSD have a bunch of more advanced functions that aren't fully standardized into POSIX. While we're talking about "standards", there are also a bunch of de-facto terminal device protocol standards like the venerable VT100. Software "terminal emulators" like SSH clients or PuTTY know how to speak VT100 and usually a bunch of more advanced dialects as well.
Shoe-horning "interactive" onto a "line-oriented" interface ...
So ... interactive ... that doesn't really fit well with a line-printer view of the world. It's layered on top. Input is easy: instead of automatically echoing each keystroke typed and waiting for ENTER (à la "readline"), the program consumes keystrokes as they come in from the TTY. Output is more complex. Even though the fundamental abstraction is a stream of output, with enough escape codes you can repaint the screen by positioning the "caret" and writing new text on top of old text (just like my "\r" example). None of this is fun to implement yourself, especially when you want to support multiple environments with different escape codes ... thus the libraries, of which ncurses is one of the best known. To get an idea of the funky magic done to efficiently render a dynamic screen onto a line-oriented TTY, check out Output and Screen Updating from "A Hacker's Guide to NCURSES".
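To make the "repaint by escape codes" idea concrete, here is a raw sketch of the kind of primitives curses emits under the hood, assuming a VT100/ANSI-compatible terminal (CSI 2J clears the screen, CSI row;colH moves the caret, and CSI 7m / CSI 0m toggle reverse video, like less's status line):

```shell
printf '\033[2J'                  # CSI 2J: clear the whole screen
printf '\033[1;1H'                # CSI 1;1H: caret to row 1, column 1
printf 'first line'
printf '\033[3;5H'                # jump the caret to row 3, column 5
printf '\033[7mstatus\033[0m\n'   # reverse-video text, then reset
```

curses does essentially this, but diffs the old and new screen contents so it can emit as few of these codes as possible.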
I'm trying to learn about color text in a terminal window. (In case it matters I'm using Terminal.app on OS X.) I'd like to get the terminal's current foreground and background color pair. It looks like I should be able to get this info in a perl script using the Term::Cap library, but the solution eludes me.
In a perl script how would I query the terminal's current foreground and background color pair value?
The feature is outside the scope of terminfo and termcap, because it deals with terminal responses, while terminfo/termcap describe these capabilities:
how to tell the terminal to do some commonly-implemented feature (such as clearing the screen), or
what sequence of characters might some special key (such as Home) send from the keyboard.
While in principle, there is no limitation on what could be part of a terminal description, there was little commonality across terminals back in the 1980s for responses. A few terminals could report specific features, most of those were constant (e.g., version information). Most of the variable responses came after terminfo/termcap had more or less solidified in X/Open Curses. ncurses extends that, but again, most of the extensions are either features or special keys.
Terminal.app implements the most commonly-used features of xterm, but (like other imitators) omits most of the terminal responses. Among other things, xterm provides terminal responses which can tell an application what the window's colors are currently. There are a couple of command-line utilities (xtermset and xtermcontrol) which have been written to use this information (and again, they cover only a part of the repertoire). Using xtermcontrol demonstrates that Terminal.app is lacking in this area — see screenshot:
I don't think most terminals support reporting this -- and it doesn't look like termcap or terminfo have any entries for it. You're just expected to set the color pair as necessary, not to ask the terminal what it's set to right now. In the ECMA-48 standard (better known as "ANSI" after ANSI X3.64, where it used to live), the only command that makes reference to color is SGR "Set Graphic Rendition", which is purely write-only.
Dunno about Perl or Terminal.app, but xterm and friends will reply with foreground/background color control sequences on the terminal's input (where your program can read them) if you output "\033]10;?\007" or "\033]11;?\007" respectively. Check out http://invisible-island.net/xterm/ctlseqs/ctlseqs.html, http://invisible-island.net/xterm/ctlseqs/ctlseqs.html#h2-Operating-System-Controls in particular.
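A shell sketch of that handshake (the same idea works from Perl by writing to and reading from /dev/tty): send the OSC 11 query, then read the reply back from the terminal in raw mode. This assumes an xterm-compatible terminal; Terminal.app may not answer at all, per the discussion above, and the script exits quietly when stdin isn't a tty:

```shell
#!/bin/sh
# Ask the terminal for its background color (OSC 11) and print the raw
# reply, e.g. ^[]11;rgb:1e1e/1e1e/1e1e^G on xterm.
[ -t 0 ] || { echo 'stdin is not a terminal; skipping' >&2; exit 0; }
saved=$(stty -g)                       # remember terminal settings
stty raw -echo min 0 time 10           # raw mode, ~1 s read timeout
printf '\033]11;?\007' > /dev/tty      # the query
reply=$(dd bs=1 count=64 2>/dev/null)  # read back up to 64 bytes
stty "$saved"                          # restore the terminal
printf '%s\n' "$reply" | cat -v        # show escapes readably
```

Swap 11 for 10 to query the foreground color instead.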