I checked the fish documentation, but I did not find what I needed (or if I did, I didn't understand it).
I made a function in fish that suspends my PC after some time:
function ss --d 'sleeps the pc after specified time'
    sleep $argv && systemctl suspend
end
Then I type ss 20s and it suspends the PC after 20 seconds.
But what if I want to pass flags to systemctl suspend? Do I also need $argv after that command in the function?
I did try something similar, but I got a Too many arguments error.
Well, first: I think the Too many arguments error comes from passing any additional argument to systemctl suspend. As far as I know, it doesn't take any additional options, does it? Even if I call sudo systemctl suspend help (or anything else as an extra argument), I get the same error.
But as for how to structure a function to handle passing different arguments to multiple commands ...
You are correct on the second point. The documentation covers it, but I absolutely understand that it can be difficult for new users to know what they are looking for. I still find things that are "new to me" when reading the doc again, even after using fish for over a year now.
Hopefully by pointing out the relevant information, it will make more sense. Don't worry, I'll give you the "answer", too, but I want to make sure you have the background you need to understand it.
First, in the Introduction - Functions section, the key takeaway is:
$argv is a list variable
From there, we need to know how to work with list variables. There are two sections that cover that: first, Functions - Lists, and second, how to access a list variable's contents through indexing.
Note that, unlike many languages, fish indices start at 1. So your sleep argument (the first value you pass in) will be $argv[1]. The second will be $argv[2], and so on, of course. So if you just want two arguments, that's all you need.
But if you want "all the rest of the arguments" (regardless of how many) to be passed to systemctl suspend, that gets a bit more tricky. That's going to be $argv[2..-1], which means "the second argument through the last argument."
So it should be:
function ss --d 'sleeps the pc after specified time'
    # $argv[1] is the duration for sleep; everything after it goes to systemctl suspend
    sleep $argv[1] && systemctl suspend $argv[2..-1]
end
Again, that assumes that you have a valid argument to pass to suspend in the first place, of course.
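If you want to see exactly how fish splits arguments into the $argv list, a throwaway function like this (the name show_args is made up for illustration) makes it visible:

function show_args
    # count $argv reports how many elements the list holds
    echo count: (count $argv)
    echo first: $argv[1]
    echo rest: $argv[2..-1]
end

Running show_args 20s foo bar prints a count of 3, then 20s as the first element and foo bar as the rest.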
In my previous questions here on Stack Overflow, we determined that my command should run like this:
(& C:\Gyb\Gyb.exe --email $DestinationGYB --action restore --local-folder $GYBFolder --label-restored $GYBLabel --service-account)
The problem with this is that if I run that same command in a command prompt, I see a bunch of status information.
When I run the command as above, all I see in VSCode is that it ran that line and it's waiting. How can I make it show me the output like the command prompt does, without opening a new window?
Here is GYB:
https://github.com/jay0lee/got-your-back
Remove the parentheses () around your command if you want to see the output at the same time. Otherwise, this behavior is expected and is not unique to the VSCode terminal.
The group-expression operator () is used to control the order in which code is executed in PowerShell. Expressions are evaluated like the order of operations (think PEMDAS) in mathematics: the innermost parentheses are evaluated first. You can also use the group-expression operator to invoke a property or method on the result of the grouped expression.
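For example, grouping lets you call a method directly on what an expression returns:

(Get-Date).AddDays(1)    # evaluate Get-Date first, then call AddDays on the result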
The problem is, group-expressions don't write their output to the parent level directly; that only happens once the group-expression has finished executing. So when you have something that can run for several minutes or even hours, like gyb.exe, you don't see that output until the command exits and execution continues.
Contrast this with running outside of a group-expression: as STDOUT is written to the success stream, it is immediately written to the console as it comes. There is no additional mechanism you are proxying your output through.
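A quick way to see the difference for yourself, going by the behavior described above (ping.exe is just a stand-in for any long-running native command like gyb.exe):

# Buffered: no output appears until the whole group finishes
(& ping.exe -n 5 127.0.0.1)

# Streamed: each line appears as ping.exe writes it
& ping.exe -n 5 127.0.0.1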
Note: You will see nearly the same behavior with the sub-expression operator $() as well, although do not conflate sub-expressions and group-expressions, as they serve different purposes. Here is a link to the official explanation of the Grouping Operator ( ); the Subexpression Operator $( ) is explained immediately below it.
I am defining a variable at the beginning of my source code in MATLAB. Now I would like to know at which lines this variable affects something. In other words, I would like to see all lines in which that variable is read. This includes not only all accesses in the current function, but also possible accesses in sub-functions that take this variable as an input argument. That way, I can quickly see where a change to this variable has any influence.
Is there any possibility to do so in MATLAB? A graphical marking of the corresponding lines would be nice but a command line output might be even more practical.
You can always use "Find Files" to search for a certain keyword or expression. In my R2012a/Windows version it is in Edit > Find Files..., with the keyboard shortcut [CTRL] + [SHIFT] + [F].
The result will be a list of lines where the searched string is found, in all the files found in the specified folder. Please check out the options in the search dialog for more details and flexibility.
Later edit: thanks to @zinjaai, I noticed that @tc88 asked that this tool also track the effect of the variable inside functions/sub-functions. I think this is:

1. Very difficult to achieve. The problem of running through all the possible values and branching on every possible conditional expression is... well, it is hard. I think it is halting-problem-hard.

2. In 90% of the cases, the assumption that the output of a function is influenced by its input is true. And since the input and the output are part of the same statement (assigning the result of a function), looking for where the variable is used as an argument should suffice to identify which output variables are affected.

3. There are perverse cases where functions will alter arguments that are handle-type (because the argument is not copied, but referenced). This side-effect breaks assumption 2, and is one of the main reasons for point 1. Outlining the cases when these side effects take place is, again, hard, and it is better to assume that all of them are modified.

4. Some other cases are inherently undecidable, because they don't depend on the computer's state, but on the state of the "outside world". Example: suppose one calls uigetfile. The function returns a char when the user selects a file, and a double when the user chooses not to select a file. Obviously the two cases will be treated differently. How could you know which variables are created/modified before the user decides?
In conclusion: I think that human intuition, plus the MATLAB Debugger (for run time), Find Files (for quickly searching where a variable is used), and depfun (for quickly identifying function dependencies), is way cheaper. But I would like to be wrong. :-)
I am getting the above error when trying to run a script to produce a report. It is a pre-existing script that has been run successfully many times before. Research has told me that it is something to do with the stack size? I'm running 10.2B02 in WRQ Reflections. Can anyone tell me what this statement means and how I can look up the value of my -s.
Thanks,
Paul
-s is a client startup parameter. You mention "Reflections" so you are probably using a character terminal session. The -s parameter is on the command line used to start Progress (which might be inside a script). If there is a -pf somefile.pf on the command line then it is inside that "parameter file". If it is not specified the default value is 40. The maximum value is limited by available memory but setting it in the hundreds or even in the thousands is not unheard of.
You can also get the startup values by sending a SIGUSR1 to the _progres process that the session is running, i.e. kill -USR1 <pid>. That will (safely) create a "protrace.<pid>" file that includes startup parameters and a 4GL stack trace. The file will appear in either the current directory, the home directory or the temp-file directory (I forget which, just look for protrace*).
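For example (the process id 12345 is made up; use whatever you find on your system):

ps -ef | grep _progres    # find the session's process id
kill -USR1 12345          # ask that session to write a protrace file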
This error usually means that your code is manipulating a field that is too large. (Like the error says.) That might be for a lot of reasons.
One common possibility is string concatenation in a loop.
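For instance, a loop along these lines (the variable and table names are invented for illustration) keeps growing a character variable on every iteration:

/* string concatenation in a loop: the kind of pattern described above */
DEFINE VARIABLE txt AS CHARACTER NO-UNDO.
FOR EACH customer NO-LOCK:
    txt = txt + customer.name + ",".
END.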
Or you might be calling lots of sub-procedures and passing parameters around.
If "nothing has changed" in the code then it probably just means that some data structure has grown slightly larger over time and increasing -s is really no big deal so long as it solves the problem.
If you keep having to increase it then it is more likely that you have some sort of coding issue. Maybe you're passing things by value that ought to be passed by reference or maybe you have run away recursion. Or something else. You'd need to provide a lot more detail to say for sure.
It is also possible (but unlikely) that you have a corrupt data record that appears to have a field in it that is too large. You could run "proutil dbName -C dbanalys" as an initial step to see if that is true.
Part of the error message is non-standard -- I'm not certain which log file it is coming from or how it got there (applications can write their own messages) but it seems that it might have something to do with trying to send an e-mail. So I'd be suspicious that either the list of recipients got too long or that the body of the e-mail is too large.
I have some Makefiles that are flexible based on the existence of certain variables, by using ifdef to check for them. It is a bit annoying that I have to actually set the variable to something on the command line: make all DEBUG does not trigger the ifdef, but make all DEBUG=1 does. Perhaps I am just using the C preprocessor approach where it does not belong.
Q1) Is it possible to specify a variable on the command line to be empty, without even more characters?
Q2) What is the preferred approach for such boolean parameters to make?
I assume you mean make all DEBUG= here, right? Without the = make will consider DEBUG to be a target to build, not a variable assignment.
The manual specifies that a variable with a non-empty value causes ifdef to return true, while a variable that does not exist, or exists but contains the empty string, causes ifdef to return false. Note that ifdef does not expand the variable; it just tests whether the variable has any value.
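For example, a minimal sketch:

ifdef DEBUG
$(info DEBUG is defined and non-empty)
else
$(info DEBUG is undefined or empty)
endif

Running make DEBUG=1 prints the first message; both a plain make and make DEBUG= print the second.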
You can use the $(origin ...) function to test whether a variable is really not defined at all, or is defined but empty, like this:
ifeq ($(origin DEBUG),undefined)
$(info Variable DEBUG is not defined)
else
$(info Variable DEBUG is defined)
endif
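With this check, make DEBUG= prints "Variable DEBUG is defined" (the variable's origin is "command line") even though its value is empty, while a plain make prints "Variable DEBUG is not defined".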
As @MadScientist explained a few minutes ago,
make all DEBUG
adds a target DEBUG to your make. Luckily, there is a workaround:
ifneq (,$(filter DEBUG,$(MAKECMDGOALS)))
DEBUG:=1 # or do whatever you want
DEBUG: all; @echo -n
endif
It is essential to supply a dummy recipe (e.g. echo nothing, as above) for the dummy target. And either put this statement at the bottom of your makefile, or specify the prerequisite target explicitly as in the example. Otherwise, make may wrongly choose the DEBUG target instead of all.
Note that this is not a preferred approach; the convention is to use something like V=1 to turn echoing on.
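That convention usually looks something like the following sketch (the Q variable name is a common choice, not a requirement; the recipe line must start with a tab):

ifeq ($(V),1)
Q :=
else
Q := @
endif

all:
	$(Q)echo building...

Running make shows only "building...", while make V=1 also echoes the command itself.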
Another caveat is that make processes the command-line goals sequentially: make A B will first take care of the A target, then of the B target, whether the targets are independent or depend on one another. Therefore make DEBUG PERFECT and make PERFECT DEBUG could produce different results. But the order of variable assignments is irrelevant, so make PERFECT=1 DEBUG=1 and make DEBUG=1 PERFECT=1 are equivalent.
It has already been clarified why you can't use just DEBUG. But I would like to add something.
You can use a shell script before running make that sets up all the variables you need. For example, in a Linux shell it would look like this:
$ source debug_setup.sh
$ make all
Make is starting...
Debug is enabled
...
where debug_setup.sh contains all environment variables you need to set up:
export DEBUG=1
export DEBUG_OPTION=some_option
This is nice since you can put comments there, and you can comment out variables you don't need at the moment but would like to keep for the future, etc.
Then you can have several setup scripts that must/can be used as part of the standard routine. It all depends on how many variables you need to set up, how many sets of variables you would like to have, etc.
Note that it is a good idea to notify the user somehow of which set of variables is selected.
I am optimizing a very time- and memory-consuming program by running it over a dataset under multiple parameters. For each "run", I have a csv file, "setup.csv", set up with "runNumber","Command" for each run. I then import this into a Perl script that reads the command for the run number I would like, interpolates the variables, and executes the command via the system function. Should I be worried about the potential for this to be exploited (I am worried right now)? If so, what can I do to protect our server? My plan right now is to change the file permissions of "setup.csv" to read-only and its ownership to root, then go in as root whenever I need to append another run to the list.
Thank you very much for your time.
Run your code in taint mode with -T. That will force you to carefully launder your data. Only pass through strings you are expecting. Do not launder with .*; rather, check against a list of good strings.
Ideally, there is a list of known acceptable values, and you validate against that.
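A minimal sketch of that kind of laundering under -T (the run names are invented; the regex capture is what untaints the value):

#!/usr/bin/perl -T
use strict;
use warnings;

my $input = $ARGV[0] // die "usage: $0 run-name\n";

# only a regex capture untaints; match against an explicit list of good strings
my ($clean) = $input =~ /\A(run_small|run_large|run_debug)\z/
    or die "unexpected run name: $input\n";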
Either way, you want to avoid the shell by using the multi-argument form of system or by using IPC::System::Simple's systemx.
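For contrast, a sketch of the two forms of Perl's built-in system (tool and @args are placeholders):

system('tool', @args);     # multi-argument form: no shell is involved
system("tool @args");      # single string: goes through the shell, so avoid it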
If you can't avoid the shell, you must properly convert the text you pass to the command into shell literals.
Even then, you have to be careful of values that start with -. Lots of tools accept -- to denote the end of options, allowing the remaining values to be passed safely.
Finally, you might want to make sure the args don't contain the NUL character (\0).
systemx('tool', '--', @args);
Note: Passing arbitrary strings is not possible in Windows. Extra validation is required.