Skip printing to stdout when using "mongo --eval" - mongodb

I need a JS variable to be passed in through command line arguments. The --eval option does this, but it also prints the variable's value to the shell by default. I want to skip printing that value to the shell.
Example:
I use "user_name" inside script.js and pass the value with:
mongo --eval="user_name='john'" script.js
But it unnecessarily prints the value:
  john
Any ideas?

There is a --quiet flag, but evaluated items are still printed to stdout.
You could redirect stdout to /dev/null or to a file to capture it, e.g.:
mongo --eval="user_name='john'" script.js > /dev/null

For this use case you could use var in your declaration. This will not print the value:
mongo --eval="var user_name='john';" script.js
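If script.js itself prints output you do want to keep, you could combine the var form with the --quiet flag mentioned above and redirect the result to a file (the file name here is just an example):
mongo --quiet --eval="var user_name='john';" script.js > output.txt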

Related

"log=..." command-line parameter to send script output to STDOUT? [duplicate]

I'm working with a command line utility that requires passing the name of a file to write output to, e.g.
foo -o output.txt
The only thing it writes to stdout is a message that indicates that it ran successfully. I'd like to be able to pipe everything that is written to output.txt to another command line utility. My motivation is that output.txt will end up being a 40 GB file that I don't need to keep, and I'd rather pipe the streams than work on massive files in a stepwise manner.
Is there any way in this scenario to pipe the real output (i.e. output.txt) to another command? Can I somehow magically pass stdout as the file argument?
Solution 1: Using process substitution
The most convenient way of doing this is by using process substitution. In bash the syntax looks as follows:
foo -o >(other_command)
(Note that this is a bashism. There are similar solutions for other shells, but the bottom line is that it's not portable.)
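For instance, assuming the goal is to avoid materialising the 40 GB file, the output could be fed straight into a compressor (gzip here is only an illustration; any consuming command works):
foo -o >(gzip -c > output.txt.gz)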
Solution 2: Using named pipes explicitly
You can do the above explicitly / manually as follows:
Create a named pipe using the mkfifo command.
mkfifo my_buf
Launch your other command with that file as input
other_command < my_buf
Execute foo and let it write its output to my_buf
foo -o my_buf
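Once both processes have finished, the pipe can be removed again, since mkfifo leaves an entry in the filesystem:
rm my_buf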
Solution 3: Using /dev/stdout
You can also use the device file /dev/stdout as follows
foo -o /dev/stdout | other_command
Named pipes work fine, but you have a nicer, more direct syntax available via bash process substitution that has the added benefit of not using a permanent named pipe that must later be deleted (process substitution uses temporary named pipes behind the scenes):
foo -o >(other command)
Also, should you want to pipe the output to your command and also save the output to a file, you can do this:
foo -o >(tee output.txt) | other command
You can simply pass /dev/stdout as the output file:
foo -o /dev/stdout
You could use the magic of UNIX and create a named pipe :)
Create the pipe
$ mknod mypipe p
Start the process that reads from the pipe
$ second-process < mypipe
Start the process that writes into the pipe
$ foo -o mypipe
If for some reason you don't have permission to write to /dev/stdout, you can fall back to process substitution:
foo -o >(cat)
I use /dev/tty as the output filename, the same way you would use /dev/null when you want no output at all. Then pipe (|) and you are done.

How do I update a datastore variable from inside a bash function?

I have a variable set in a bbclass file like:
#some-class.bbclass
PROC ??= ""
In a recipe inheriting the class, I have a bash function where I modify that variable and immediately read its value. But the value never gets updated.
#some-bb-file.bb
inherit some-class.bbclass
some_configure() {
PROC=$(grep -r "Processor.*${cpu_id}" ... something)
bbnote "PROC is ${PROC}"
}
I always get "PROC is " in the logs. I have tried printing the output of $(grep -r "Processor.*${cpu_id}" ... something) on its own and it returns a valid string. Can someone please tell me what I am missing?
Your code snippet mixes BitBake and shell variables. Your bbnote line should omit the curly braces to access the shell variable, i.e.:
bbnote "PROC is $PROC"
Explanation: BitBake variables and local shell variables are different things. Inside the shell function, ${PROC} is expanded by BitBake to the value defined in some-class.bbclass, and that value is not changed when you do PROC="foo" in the shell. If you use $PROC instead, the shell variable set by PROC="foo" is used.
As for the question in the title: I'm not sure whether it is possible to update a datastore variable from shell. You can get and set datastore variables in Python functions (using d.getVar and d.setVar).
Datastore variables can be read from shell using:
${@d.getVar('PROC')}
If you need other operations, switch to Python.
I guess you missed backticks
PROC=`grep -r "Processor.*${cpu_id}" ... something`
bbnote "PROC is ${PROC}"

Read command output line by line in sh (no bash)

I am basically looking for a way to do this
list=$(command)
while read -r arg
do
...
done <<< "$list"
Using sh instead of bash. The code as it is doesn't run because of the last line:
syntax error: unexpected redirection
Any fixes?
Edit: I need to edit variables and access them outside the loop, so using | is not acceptable (as it creates a sub-shell with independent scope)
Edit 2: This question is NOT similar to "Why does my Bash counter reset after while loop" as I am not using | (as noted in the last edit). I am asking for another way of achieving it. (The answers to the linked question only explain why the problem happens but do not provide any solutions that work with sh, not bash.)
There's no purely syntactic way to do this in POSIX sh. You'll need to use either a temporary file for the output of the command, or a named pipe.
mkfifo output
command > output &
while read -r arg; do
...
done < output
rm output
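For completeness, here is a sketch of the temporary-file variant mentioned above (mktemp is not strictly POSIX but widely available; the loop body is only an illustration):
tmpfile=$(mktemp) || exit 1
trap 'rm -f "$tmpfile"' EXIT
command > "$tmpfile"
while read -r arg; do
  # variables assigned here remain visible after the loop
  last=$arg
done < "$tmpfile"
echo "last line read: $last"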
Any reason you can't do this? It should work, unless you are assigning variables inside the loop that you want to be visible when it's done (see the note below the loop).
command |
while read -r arg
do
...
done
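To see why the questioner rules this out: in shells that run pipeline members in subshells (dash and bash among them), assignments made inside the loop are lost once it finishes:
count=0
command | while read -r arg; do
  count=$((count+1))
done
echo "$count"   # prints 0: the loop body ran in a subshell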

How to set environment variables for a shell command

I often see this command in node.js programs: NODE_ENV=test node app.js, which sets the NODE_ENV variable to test and works. I also read here https://en.wikipedia.org/wiki/Environment_variable that this should work for any shell command, but after running some tests of my own, here is what I see:
$ HELLO="WORLD"
$ HELLO="MARS" echo "$HELLO"
WORLD
$
I would expect this to print MARS. Is there something I am missing here?
The syntax VAR=value command means that the command will be invoked with the environment variable VAR set to value, and this applies only for the scope of that command.
However, when you are using the command line:
HELLO="MARS" echo "$HELLO"
The shell first interprets the "$HELLO" parameter, determines that it is WORLD, and then what it actually does is run:
HELLO="MARS" echo "WORLD"
So the echo may have the HELLO variable set, but it doesn't affect what it prints - it has already been interpreted before.
Doing
HELLO="MARS"; echo "$HELLO"
does something else entirely. First it sets HELLO to MARS in the current shell, and then it goes on to interpret the echo command. By this time HELLO contains MARS, not WORLD. But this is an entirely different effect - the variable HELLO stays with the value MARS, which is not the case in the command without the ;.
Your problem is that echo is just a poor choice for a demonstration of this. You can do other demonstrations to prove that HELLO is changed properly:
HELLO="MARS" eval 'echo $HELLO'
In this case, the shell will not interpret $HELLO because it is within a single-quoted string. It will first put MARS in HELLO, and then call eval 'echo $HELLO' with that variable set. The eval command will then run echo $HELLO, and you'll get the output you were expecting.
This syntax is best used for things that don't use the given variable as part of the command line, but rather use it internally.
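Another way to see the same effect without eval is to let a child process do the expansion, so the current shell never touches $HELLO (sh -c is just one option; printenv HELLO would show it too):
HELLO="MARS" sh -c 'echo "$HELLO"'    # prints MARS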
Other answers are correct, but here is a refinement:
There are in fact two cases when defining a list of space-separated variable assignments in bash, depending on whether or not the list ends with a command.
VAR1=value1 VAR2=value2 ... VARn=valuen command arg1 arg2 ... argn
and
VAR1=value1 VAR2=value2 ... VARn=valuen
do not export VAR1 ... VARn the same way.
In the first case, VAR1 ... VARn will be set only for command and will not be exported to the current shell.
In the second case, VAR1 ... VARn will alter the current shell.
Then (note that ';' is the same as using a new line):
HELLO=WORLD
HELLO=MARS echo "i don't export HELLO."
echo "HELLO=$HELLO"
will display
i don't export HELLO.
HELLO=WORLD
and
HELLO=WORLD
HELLO=MARS ; echo "i did export HELLO."
echo "HELLO=$HELLO"
will display
i did export HELLO.
HELLO=MARS
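A small follow-up demonstration: printenv reads the environment rather than shell variables, so it makes the difference between the two cases visible (assuming HELLO was not already exported):
HELLO=MARS printenv HELLO     # prints MARS: set in the environment of that one command
HELLO=MARS ; printenv HELLO   # prints nothing: a plain assignment is not exported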

Printing Mongo query output to a file while in the mongo shell

I'm two days old with Mongo and I have a SQL background, so bear with me. In the MySQL command line it is very convenient to output the results of a query to a file on the machine. I am trying to understand how I can do the same with Mongo while staying in the shell.
I can easily get the output of a query I want by being outside of the shell and executing the following command:
mongo localhost:27017/dbname --eval "printjson(db.collectionName.findOne())" > sample.json
The above way is fine, but it requires me to exit the mongo shell or open a new terminal tab to execute this command. It would be very convenient if I could simply do this while still being inside the shell.
P.S.: this question is an offshoot of a question I posted on SO.
AFAIK, there is no interactive option for output to a file; there is a previous SO question related to this: Printing mongodb shell output to File.
However, you can log the whole shell session if you invoke the shell with the tee command:
$ mongo | tee file.txt
MongoDB shell version: 2.4.2
connecting to: test
> printjson({this: 'is a test'})
{ "this" : "is a test" }
> printjson({this: 'is another test'})
{ "this" : "is another test" }
> exit
bye
Then you'll get a file with this content:
MongoDB shell version: 2.4.2
connecting to: test
> printjson({this: 'is a test'})
{ "this" : "is a test" }
> printjson({this: 'is another test'})
{ "this" : "is another test" }
> exit
bye
To remove all the commands and keep only the json output, you can use a command similar to:
tail -n +3 file.txt | egrep -v "^>|^bye" > output.json
Then you'll get:
{ "this" : "is a test" }
{ "this" : "is another test" }
We can do it this way -
mongo db_name --quiet --eval 'DBQuery.shellBatchSize = 2000; db.users.find({}).limit(2000).toArray()' > users.json
The shellBatchSize argument determines how many rows the mongo client is allowed to print. Its default value is 20.
If you invoke the shell with a script file, a db address, and the --quiet argument, you can redirect the output (made with print(), for example) to a file:
mongo localhost/mydatabase --quiet myScriptFile.js > output
There are ways to do this without having to quit the CLI and pipe mongo output to a non-tty.
To save the output from a query with result x we can do the following to directly store the json output to /tmp/x.json:
> EDITOR="cat > /tmp/x.json"
> x = db.MyCollection.find(...).toArray()
> edit x
>
Note that the output isn't strictly JSON but rather the dialect that Mongo uses.
The new MongoDB shell (5.0+), mongosh, integrates the Node.js fs module, so you can simply do the following in mongosh:
fs.writeFileSync('output.json', JSON.stringify(db.collectionName.findOne()))
This also avoids problems such as ObjectId(...) being included in the tojson output, which is not a valid JSON string.
The above works as the docs describe:
The MongoDB Shell, mongosh, is a fully functional JavaScript and Node.js 14.x REPL environment for interacting with MongoDB deployments. You can use the MongoDB Shell to test queries and operations directly with your database.
The old mongo shell is already marked as legacy, so use mongosh if possible.
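If you prefer a one-liner from the terminal rather than an interactive session, the same fs call can be passed via --eval (the connection string, database, and collection names are illustrative):
mongosh "mongodb://localhost:27017/mydatabase" --quiet --eval "fs.writeFileSync('output.json', JSON.stringify(db.collectionName.findOne(), null, 2))"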
It may be useful to simply increase the number of results that get displayed. In the mongo shell:
> DBQuery.shellBatchSize = 3000
Then you can select all the results out of the terminal in one go and paste them into a text file.
That is what I am going to do :)
(from : https://stackoverflow.com/a/3705615/1290746)
Combining several conditions:
write mongo query in JS file and send it from terminal
switch/define a database programmatically
output all found records
cut initial output lines
save the output into JSON file
myScriptFile.js
// Switch current database to "mydatabase"
db = db.getSiblingDB('mydatabase');
// The mark for cutting initial output off
print("CUT_TO_HERE");
// Main output
// "toArray()" method allows to get all records
printjson( db.getCollection('jobs').find().toArray() );
Sending the query from the terminal
The -z option of sed allows treating the output as a single multi-line string:
$> mongo localhost --quiet myScriptFile.js | sed -z 's/^.*CUT_TO_HERE\n//' > output.json