How can I get wget to do this:
Download a file from a given location, say x, only if the local copy has an older time stamp than the copy on x. In other words, it should download the file only if a newer version exists on x, and in that case overwrite the local copy.
Is it possible to do this?
Sounds like you're looking for wget's time-stamping functionality: http://www.gnu.org/software/wget/manual/wget.html#Time_002dStamping
Say you would like to download a file so that it keeps its date of modification.
wget -S http://www.gnu.ai.mit.edu/
A simple ls -l shows that the time stamp on the local file equals the state of the Last-Modified header, as returned by the server. As you can see, the time-stamping info is preserved locally, even without ‘-N’ (at least for HTTP).
Several days later, you would like Wget to check if the remote file has changed, and download it if it has.
wget -N http://www.gnu.ai.mit.edu/
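The newer-than comparison that -N performs can be sketched in plain shell. The file names below are hypothetical stand-ins, with a second local file playing the role of the server's copy (touch -d assumes GNU coreutils):

```shell
#!/bin/sh
# Sketch of wget -N's decision: fetch only when the remote file's
# time stamp is newer than the local copy's. Both "copies" are local
# files here so the logic can be shown without a network.
touch -d '2020-01-01 00:00' local.html    # stale local copy
touch -d '2021-01-01 00:00' remote.html   # stand-in for the server's file

if [ remote.html -nt local.html ]; then   # -nt: "newer than"
    cp remote.html local.html             # overwrite with the newer version
    status=updated
else
    status=up-to-date
fi
echo "$status"
```

wget does the same check against the server's Last-Modified header instead of a second local file.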
I'm trying to back up a database that uses MongoDB (Ubuntu 18.04 LTS).
I managed to run the mongodump command without any errors, but I still can't find the location of the backed-up database. I only ran the plain mongodump command, so it must have saved to the default location. Can someone help me?
Thanks
By default it will save your dumped data into a directory named dump, which is created in the same directory where you run the mongodump command.
You can also specify the output folder for the dump, like so:
mongodump -u"username" -p"xxxxxx" --db=dbname --out=mongodata/
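To illustrate the default layout, here is a sketch of where the files end up. The dbname and users.bson names are hypothetical placeholders, and the mongorestore line is commented out because it needs a running mongod:

```shell
#!/bin/sh
# mongodump writes BSON files under ./dump/<dbname>/ in the directory
# where the command was run. Simulate that layout to show where to look.
mkdir -p dump/dbname
: > dump/dbname/users.bson                 # placeholder for a dumped collection

find . -maxdepth 2 -type d -name dump      # locate the dump directory

# To load the dump back into a server (requires a running mongod):
# mongorestore --db=dbname dump/dbname
```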
Task:
To install MongoDB after downloading
To Note:
I can't install it from the terminal for certain reasons, and I am using Ubuntu 14.04.
I just don't know what to do after downloading, because there don't seem to be any executable files or anything like that.
This is a pretty good article on installing MongoDB step by step:
How-to-install-mongodb-on-ubuntu
Click here for MongoDB tutorials
To start your mongodb server : mongod
You may get an error that the database directory has not been created. In that case, create the database directory at the default path: mkdir -p /data/db, and then restart your server. If you did not get the error, skip this part. You can also change the database directory path; the command for that is mongod --dbpath /your/path
Open a new terminal and run: mongo
If you have any questions, feel free to comment. Good luck.
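The database-directory step above can be sketched as follows. The path is a hypothetical per-user location; keeping it under $HOME avoids the permission problems that /data/db can cause for non-root users:

```shell
#!/bin/sh
# Create a database directory MongoDB can write to, then point
# mongod at it with --dbpath.
DBPATH="$HOME/data/db"        # hypothetical per-user location
mkdir -p "$DBPATH"
# mongod --dbpath "$DBPATH"   # start the server (needs mongod on PATH)
```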
I am having a problem with the command below on Windows. I have installed the Oracle 10g server on my local machine, which I can connect to using a client IDE.
When I try to use the command below in the command prompt to import a dump file into my local DB
"imp system/ file=tms.dump log=test.log"
where the imp binary and the dump file are both located in
"C:\oraclexe\app\oracle\product\10.2.0\server\bin"
I get the error below:
error: unable to write logfile
I do not know how to create the log file.
Thanks
The most likely reason for the error is that you are running under an account which doesn't have write privileges on bin. You haven't specified a path, so, like most utilities, imp writes its log file to the current directory.
bin is traditionally the sub-directory for holding executables. It is a very bad idea to use it for storing application data such as dump files.
Instead you should be working from a different location, ideally some sub-directory which you use solely for storing dump files. Either way, it must be a directory for which your user has write privileges.
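A sketch of that workflow, written as a POSIX shell script. The directory name is hypothetical, and on Windows the same idea applies: cd into a user-writable folder before running imp. The imp line itself is commented out because it needs a reachable Oracle instance:

```shell
#!/bin/sh
# Work from a directory you own instead of Oracle's bin directory;
# imp will then write its log file somewhere you have write access.
mkdir -p "$HOME/oracle-dumps"
cd "$HOME/oracle-dumps"
# imp system/ file=tms.dump log=test.log   # log lands in this directory
: > test.log                               # stand-in for the log imp would write
```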
I've just purchased a new computer (Mac OS X) and I want to continue developing against the same database I had on the old computer. I don't want to do it remotely, because I don't want to keep the other computer on. I just want to copy the DB and put it on this computer for development. I have a USB stick I can use, but I'm not sure how to proceed. brew, rails, ruby, rvm, and pg are all installed and configured.
pg_dumpall ?
To dump all databases:
$ pg_dumpall > db.out
To reload this database use, for example:
$ psql -f db.out postgres
I had to do it with the -o option to include the OIDs.
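Putting the pieces together, a sketch of the whole transfer. The mount point is a hypothetical stand-in for the USB stick, and the pg_dumpall and psql lines are commented out because each needs a running PostgreSQL server on its machine:

```shell
#!/bin/sh
# On the old machine: dump every database, keeping OIDs (-o), onto the stick.
STICK="$HOME/usb-stick"              # stand-in for the USB stick's mount point
mkdir -p "$STICK"
# pg_dumpall -o > "$STICK/db.out"
: > "$STICK/db.out"                  # placeholder for the dump file

# On the new machine, with the stick's contents copied over:
# psql -f "$STICK/db.out" postgres
```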
I have a Samba mount located within /opt. I have a script in init.d called sysinit that is linked to from rc6.d. This gets called first on a reboot (I set it to K01sysinit), and it is supposed to unmount the /opt directory. However, on reboot I see that the commands in the rc.sysinit file are failing. When I manually run my sysinit script and then reboot, everything works fine.

Am I running into some sort of race condition, where the rc.sysinit umount command runs before my script has finished unmounting /opt, or is something else going on? Or do I not understand how run levels work? I thought that on a reboot the scripts in rc6.d run first, and then the unmounting from rc.sysinit occurs.
The solution I found was that I needed to create a lock file in /var/lock/subsys so that the rc.sysinit file knew that the service I created was "running". Without that, it would never create the KXXsysinit symlinks necessary so that my script would be run with a "stop" command on shutdown or reboot.
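A sketch of that convention (names and paths are illustrative; a real script would live in /etc/init.d and do the actual mounting): the K-link's "stop" action is only invoked for services whose subsys lock file exists, so "start" has to create it.

```shell
#!/bin/sh
# Minimal shape of an init script using the subsys lock-file convention.
# The lock file marks the service as "running", which is what makes the
# K01sysinit symlink fire with "stop" on shutdown or reboot.
LOCK=/var/lock/subsys/sysinit        # conventional location

start() {
    # mount -t cifs //server/share /opt ...   # real work would go here
    touch "$LOCK"                    # mark the service as running
}

stop() {
    # umount /opt                    # unmount before shutdown proceeds
    rm -f "$LOCK"                    # mark the service as stopped
}
```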