How to load a file from the same directory as the script, in MongoDB shell - mongodb

I'm running a bash script that executes a javascript file in the Mongo shell:
outerscript.sh
SCRIPTPATH="$( cd "$(dirname "$0")" ; pwd -P )"
LOGDIR=~/logs
LOGFILE=$LOGDIR/outerscript.log
mongo < $SCRIPTPATH/jsscript.js >> $LOGFILE 2>&1
jsscript.js
load('myVars.js');
MongoDB is v4.0.19
The script may be run locally or on a remote server, from the command line or from a cron job, and I want to load myVars.js, which is always in the same folder as outerscript.sh.
I can't find a way to load myVars.js when running outerscript.sh from a cron job.
pwd() returns the user's home directory, not the location of the currently running script, so this doesn't work for the cron job:
load(pwd() + '/myVars.js');
This also doesn't work for the cron job, I'm guessing for the same reason:
load('./myVars.js');
Alternatively, is there another way to pass variables into my JavaScript file, that will still let me log output to the logfile? As far as I can tell, you can't combine --eval with logging output to a file, and I haven't found a way to read environment variables directly from inside the JS file.
Thanks!

You don't need a redirect, try mongo $SCRIPTPATH/jsscript.js >> $LOGFILE 2>&1
See
$ mongo --help
MongoDB shell version v4.4.1
usage: mongo [options] [db address] [file names (ending in .js)]
db address can be:
foo                             foo database on local machine
192.168.0.5/foo                 foo database on 192.168.0.5 machine
192.168.0.5:9999/foo            foo database on 192.168.0.5 machine on port 9999
mongodb://192.168.0.5:9999/foo  connection string URI can also be used
You can use cat() in your js script.
e.g.
echo '{"path": "some folder"}' > /tmp/foldername.js
Then in your js script:
var txt = cat("/tmp/foldername.js")
var folderObj = JSON.parse(txt)
Then you can use the folderObj object in your script.
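Putting the two pieces together, here is a minimal sketch; the file name /tmp/scriptdir.json and the variable name conf are illustrative, not from the original post:
outerscript.sh
#!/bin/bash
# Record this script's directory somewhere the JS can read it back
SCRIPTPATH="$( cd "$(dirname "$0")" ; pwd -P )"
echo "{\"path\": \"$SCRIPTPATH\"}" > /tmp/scriptdir.json
mongo "$SCRIPTPATH/jsscript.js" >> "$LOGFILE" 2>&1
jsscript.js
// Read the directory written by the wrapper, then load myVars.js from it
var conf = JSON.parse(cat('/tmp/scriptdir.json'));
load(conf.path + '/myVars.js');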
Why do you not simply use SCRIPTPATH="$(dirname "$0")"?

Wernfried Domscheit answered my question perfectly and I have accepted that answer.
I also worked out how to pass in environment variables as parameters and am posting it here just in case it helps anybody.
outerscript.sh
SCRIPTPATH="$( cd "$(dirname "$0")" ; pwd -P )"
ENVVARS='var mongoAddress='\'"${MONGO_ADDRESS}"\''; var databaseName='\'"${DATABASE_NAME}"\'
mongo --eval "${ENVVARS}" $SCRIPTPATH/jsscript.js >> $LOGFILE 2>&1
jsscript.js
print(mongoAddress);
print(databaseName);
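For what it's worth, if the values are guaranteed never to contain single quotes, the quoting can be flattened into a single double-quoted string (a sketch, using the same variables as above):
ENVVARS="var mongoAddress='${MONGO_ADDRESS}'; var databaseName='${DATABASE_NAME}'"
mongo --eval "${ENVVARS}" $SCRIPTPATH/jsscript.js >> $LOGFILE 2>&1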

Related

How can I make a function run every time cd successfully changes to another directory within sh on FreeBSD?

I'm using sh as my shell on FreeBSD but I want to be able to have a pretty prompt like the one bash gives me on Ubuntu. There are two things that the FreeBSD implementation of sh seems to lack as far as PS1 escape characters go:
The \w works but does not expand $HOME to ~, so this is something I have already hacked up myself
I can use PS1 to update the prompt on the terminal, but as far as I can tell it is not possible to use the PS1 variable to update the title bar as well. ESC and BEL fail to set the title as one would expect if they were using bash or ksh
Here is my .shrc file
update_prompt() {
    case "$PWD" in
        "$HOME"*)
            pretty_pwd="~${PWD#*"${HOME}"}"
            ;;
        "/usr$HOME"*)
            pretty_pwd="~${PWD#*"/usr${HOME}"}"
            ;;
        *)
            pretty_pwd="$PWD"
            ;;
    esac
    case "$TERM" in
        xterm*|rxvt*)
            PS1="[$USER@\\h $pretty_pwd]\\$ "
            ;;
        *)
            ;;
    esac
    printf "\\033]0;[%s@$(hostname -s): %s]\\007" "$USER" "$pretty_pwd"
}
update_prompt
So when I fire up a terminal or log in via ssh, it gives the pretty prompt that I like. But now I need this function to run every time that cd is executed and returns an exit status of 0.
I was going to use an alias that was something like:
alias cd="cd $1 && update_prompt"
but that was before I realized that aliases do not accept arguments. How might I go about doing something like this?
You can use a function instead of an alias:
cd() {
    command cd "$@" && update_prompt
}
Just put it into ~/.shrc. You have to use command here to let sh know that you are referring to the actual cd builtin command instead of the function you've just defined.
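Putting it together, the relevant part of ~/.shrc would look like this (update_prompt as defined in the question):
# Wrap the cd builtin so the prompt refreshes after every successful
# directory change
cd() {
    command cd "$@" && update_prompt
}
update_prompt    # set the initial prompt when the shell starts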
Refer to the sh(1) manual page for the details on how to make sh(1) source the ~/.shrc file when it starts:
Therefore, a user should place commands that are to be executed only at login
time in the .profile file, and commands that are executed for every shell
inside the ENV file. The user can set the ENV variable to some file by placing
the following line in the file .profile in the home directory, substituting for
.shrc the filename desired:
ENV=$HOME/.shrc; export ENV
I use this trick in my cd alias manager. Here's a link to the source code of the function: https://github.com/0mp/goat/blob/v2.5.0/libgoat.sh#L31-L57
You can do it with an alias plus arguments if you swap the commands, though note that the first command then runs before the directory actually changes (fine for a message like this, but update_prompt would still see the old directory):
$ alias cd="echo change; cd"
$ pwd
/nas
$ cd /
change
$ pwd
/
$ cd /etc
change
$ pwd
/etc
$

Printing Mongo query output to a file while in the mongo shell

I'm two days old with Mongo and I have a SQL background, so bear with me. In MySQL it is very convenient to be at the command line and output the results of a query to a file on the machine. I am trying to understand how I can do the same with Mongo, while being in the shell.
I can easily get the output of a query I want by being outside of the shell and executing the following command:
mongo localhost:27017/dbname --eval "printjson(db.collectionName.findOne())" > sample.json
The above way is fine, but it requires me to exit the mongo shell or open a new terminal tab to execute this command. It would be very convenient if I could simply do this while still being inside the shell.
P.S: the Question is an offshoot of a question I posted on SO
AFAIK, there is no interactive option for writing output to a file; there is a previous SO question related to this: Printing mongodb shell output to File
However, you can log the whole shell session if you invoke the shell with the tee command:
$ mongo | tee file.txt
MongoDB shell version: 2.4.2
connecting to: test
> printjson({this: 'is a test'})
{ "this" : "is a test" }
> printjson({this: 'is another test'})
{ "this" : "is another test" }
> exit
bye
Then you'll get a file with this content:
MongoDB shell version: 2.4.2
connecting to: test
> printjson({this: 'is a test'})
{ "this" : "is a test" }
> printjson({this: 'is another test'})
{ "this" : "is another test" }
> exit
bye
To remove all the commands and keep only the JSON output, you can use a command similar to:
tail -n +3 file.txt | egrep -v "^>|^bye" > output.json
Then you'll get:
{ "this" : "is a test" }
{ "this" : "is another test" }
We can do it this way -
mongo db_name --quiet --eval 'DBQuery.shellBatchSize = 2000; db.users.find({}).limit(2000).toArray()' > users.json
The shellBatchSize setting determines how many rows the mongo client is allowed to print; its default value is 20.
If you invoke the shell with script-file, db address, and --quiet arguments, you can redirect the output (made with print() for example) to a file:
mongo localhost/mydatabase --quiet myScriptFile.js > output
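The same idea works without a script file by moving the print into --eval; the collection name users here is just an example:
mongo localhost/mydatabase --quiet --eval 'printjson(db.users.findOne())' > output.json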
There are ways to do this without having to quit the CLI and pipe mongo output to a non-tty.
To save the output from a query with result x we can do the following to directly store the json output to /tmp/x.json:
> EDITOR="cat > /tmp/x.json"
> x = db.MyCollection.find(...).toArray()
> edit x
>
Note that the output isn't strictly JSON but rather the dialect that Mongo uses.
In the new MongoDB shell 5.0+, mongosh, the Node.js fs module is integrated, so you can simply do the following in the new mongosh shell:
fs.writeFileSync('output.json', JSON.stringify(db.collectionName.findOne()))
This also avoids problems such as ObjectId(...) appearing in the tojson result, which is not a valid JSON string.
The above code works as the docs describe:
The MongoDB Shell, mongosh, is a fully functional JavaScript and Node.js 14.x REPL environment for interacting with MongoDB deployments. You can use the MongoDB Shell to test queries and operations directly with your database.
The old mongo shell is already marked as legacy, so use mongosh if possible.
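The same pattern extends to whole result sets. A sketch, with the collection name and output path as placeholders:
# Serialize an entire query result from the command line via mongosh's
# built-in fs module (collection "users" and file name are illustrative)
mongosh mydatabase --quiet --eval 'fs.writeFileSync("users.json", JSON.stringify(db.users.find().toArray(), null, 2))'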
It may be useful to simply increase the number of results that get displayed.
In the mongo shell: DBQuery.shellBatchSize = 3000
Then you can select all the results out of the terminal in one go and paste them into a text file.
That is what I am going to do :)
(from : https://stackoverflow.com/a/3705615/1290746)
Combining several conditions:
write mongo query in JS file and send it from terminal
switch/define a database programmatically
output all found records
cut initial output lines
save the output into JSON file
myScriptFile.js
// Switch current database to "mydatabase"
db = db.getSiblingDB('mydatabase');
// The mark for cutting initial output off
print("CUT_TO_HERE");
// Main output
// "toArray()" method allows to get all records
printjson( db.getCollection('jobs').find().toArray() );
Sending the query from the terminal:
The -z option of sed (a GNU extension) treats the output as a single multi-line string.
$> mongo localhost --quiet myScriptFile.js | sed -z 's/^.*CUT_TO_HERE\n//' > output.json

Run javascript script (.js file) in mongodb including another file inside js

I want to write a long script for inserting and updating mongodb data.
Is it possible to call external js file that contains the script?
Is it possible to include another js file from the running js file?
Use the load() function:
load(filename)
You can directly call any .js file from the mongo shell, and mongo will execute the JavaScript.
Example : mongo localhost:27017/mydb myfile.js
This executes the myfile.js script in mongo shell connecting to mydb database with port 27017 in localhost.
For loading external js you can write
load("/data/db/scripts/myloadjs.js")
Suppose we have two js files, myFileOne.js and myFileTwo.js:
myFileOne.js
print('From file 1');
load('myFileTwo.js'); // Load the other js file.
myFileTwo.js
print('From file 2');
MongoShell
>mongo myFileOne.js
Output
From file 1
From file 2
Another way is to pass the file into mongo in your terminal prompt.
$ mongo < myjstest.js
This will start a mongo session, run the file, then exit.
Not sure about calling a 2nd file from the 1st however. I haven't tried it.
Yes you can.
The default location for script files is /data/db.
If you put any script there you can call it as
load("myjstest.js") // or
load("/data/db/myjstest.js")
For running multiple js files:
#!/bin/bash
cd /root/migrate/
ls -1 *.js | sed 's/\.js$//' | while read name; do
    start=`date +%s`
    mongo localhost:27017/wbars $name.js
    end=`date +%s`
    runtime1=$((end-start))
    runtime=$(printf '%dh:%dm:%ds\n' $((runtime1/3600)) $((runtime1%3600/60)) $((runtime1%60)))
    echo "############# $runtime $name.js completed ###########"
    echo "$name.js completed"
    sync
    echo 1 > /proc/sys/vm/drop_caches
    echo 2 > /proc/sys/vm/drop_caches
    echo 3 > /proc/sys/vm/drop_caches
done
To call an external file you can use:
load("path/to/file.js")
Example: if your file.js is in your Documents folder (on Windows), you can type:
load("C:\\Users\\user_name\\Documents\\file.js")
(Backslashes must be doubled inside a JavaScript string; forward slashes also work.)

Script response if md5sum returns FAILED

Say I had a script that checked honeypot locations using md5sum.
#!/bin/bash
#cryptocheck.sh
#Designed to check md5 CRC's of honeypot files located throughout the filesystem.
#Must develop file with specific hashes and create crypto.chk using following command:
#/opt/bin/md5sum * > crypto.chk
#After creating file, copy honeypot folder out to specific folders
locations=("/share/ConfData" "/share/ConfData/Archive" "/share/ConfData/Application"
           "/share/ConfData/Graphics")

for i in "${locations[@]}"
do
    cd "$i/aaaCryptoAudit"
    /opt/bin/md5sum -c /share/homes/admin/crypto.chk
done
And the output looked like this:
http://pastebin.com/b4AU4s6k
Where would you start to try and recognize the output and perhaps trigger some sort of response by the system if there is a 'FAILED'?
I've worked a bit with PERL trying to parse log files before but my attempts typically failed miserably for one reason or another.
This may not be the proper way to go about this, but I want to put this script into a cron job that runs every minute. Some guys told me that an inotify job or script (I'm not familiar with this) would be better than doing it this way.
Any suggestions?
--- edit
I made another script to call the script above and send the output to a file. The new script then runs a grep -q on 'FAILED' and if it picks anything up, it sounds the alarm (tbd what the alarm will be).
#!/bin/bash
#cryptocheckinit.sh
#
#rm /share/homes/admin/cryptoalert.warn
/share/homes/admin/cryptocheck.sh > /share/homes/admin/cryptoalert.warn
grep -q "FAILED" /share/homes/admin/cryptoalert.warn && echo "LIGHT THE SIGNAL FIRES"
Use:
if ! /opt/bin/md5sum -c /share/homes/admin/crypto.chk
then
    : # Do something, e.g. sound the alarm
fi
Or pipe the output of the loop:
for i in "${locations[@]}"
do
    cd "$i/aaaCryptoAudit"
    /opt/bin/md5sum -c /share/homes/admin/crypto.chk
done | grep -q FAILED && echo "LIGHT THE SIGNAL FIRES"
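If you want the whole check reduced to a single exit status for cron, the loop can accumulate failures and the caller can test it; a sketch assembled from the pieces above:
#!/bin/bash
# cryptocheck.sh reworked so the exit status reflects any failed check;
# locations is the same array as in the question
status=0
for i in "${locations[@]}"
do
    cd "$i/aaaCryptoAudit" || { status=1; continue; }
    /opt/bin/md5sum -c /share/homes/admin/crypto.chk || status=1
done
exit $status
Then cryptocheckinit.sh reduces to /share/homes/admin/cryptocheck.sh > /share/homes/admin/cryptoalert.warn 2>&1 || echo "LIGHT THE SIGNAL FIRES".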

How to concatenate two environment strings with the env stanza in Upstart?

Here is a part of my .conf file.
env SERVICE_ROOT="/data/service_root"
env LOG_DIR="$SERVICE_ROOT/logs"
and I checked all variables with the following:
echo "\n`env`" >> /tmp/listener.log 2>&1
I expect $LOG_DIR to be "/data/service_root/logs", but what I get is:
SERVICE_ROOT=/data/service_root
LOG_DIR=$SERVICE_ROOT/logs
Did I miss something?
An environment variable defined with the env stanza is not accessible within the job configuration file itself.
Upstart allows you to set environment variables which will be accessible to the jobs whose job configuration files they are defined in.
As explained in 8.2 Environment Variables:
Note that a Job Configuration File does not have access to a user's environment variables, not even the superuser. This is not possible since all job processes created are children of init which does not have a user's environment.
The defined variable $SERVICE_ROOT is accessible to the defined job:
# /etc/init/test.conf
env SERVICE_ROOT="/data/service_root"

script
    export LOG_DIR="$SERVICE_ROOT/logs"

    # prints "LOG_DIR='/data/service_root/logs'" to system log
    logger -t $0 "LOG_DIR='$LOG_DIR'"

    exec /home/vagrant/test.sh >> /tmp/test.log
end script
The variable $LOG_DIR exported in the script block is available to processes called within the same block.
#!/bin/bash -e
# /home/vagrant/test.sh
echo "running test.sh"
echo "\n`env`" | grep 'LOG_DIR\|SERVICE_ROOT'
After running sudo start test, the content of /tmp/test.log will be:
running test.sh
SERVICE_ROOT=/data/service_root
LOG_DIR=/data/service_root/logs
In syslog you will find:
Jul 16 01:39:39 vagrant-ubuntu-raring-64 /proc/self/fd/9: LOG_DIR='/data/service_root/logs'
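For completeness: if a script block is not an option, the only alternative is to repeat the literal value, since the env stanza performs no variable expansion at all:
# /etc/init/test.conf -- no expansion happens in env stanzas,
# so the derived value must be written out in full
env SERVICE_ROOT="/data/service_root"
env LOG_DIR="/data/service_root/logs"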