Moodle: PDFs are empty

Many PDFs from different courses appear to have been corrupted somehow. We first noticed when trying to view one in Chrome and getting the error "Failed to load PDF document." In Internet Explorer the page just shows up empty.
When viewing the file in the "Updating file in" area, it says the following: "Either the file does not exist or there is a permission problem." A file size is shown, but when I click Download, the file is 0 KB.
Where are the files saved? Why are they corrupted?
Update: I've narrowed it down to /moodledata/filedir having lost all its references. The folders are still there, as are the files. Is there any way to fix this without having to re-upload all the PDFs?
I am on Moodle version 3.6.3 on Windows.

The content/path hash is stored in the mdl_files table - maybe have a look in there to see if you can match up the files. The hash should match the folder/file name.
SELECT *
FROM mdl_files
WHERE filename LIKE '%pdf%'
OR mimetype LIKE '%pdf%'
OR source LIKE '%pdf%'
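If it helps, here's a rough way to cross-check a row from that query against the disk. This is just a sketch assuming the standard Moodle filedir layout, where contenthash is the SHA1 of the file content and the path is built from its first four characters (the hash below is a placeholder; substitute one from your own mdl_files rows and your real moodledata path):
HASH=2fd4e1c67a2d28fced849ee1bb76e7391b93eb12
# the file should live at filedir/<first 2 chars>/<next 2 chars>/<full hash>
ls -l /pathto/moodledata/filedir/${HASH:0:2}/${HASH:2:2}/$HASH
# its SHA1 should equal the contenthash; a 0-byte file here means the content itself is gone
sha1sum /pathto/moodledata/filedir/${HASH:0:2}/${HASH:2:2}/$HASH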
Also, check the file permissions. I don't use Windows, so I'm not sure how it works there, but on Linux the web server should have access to the data folder.
Something like:
sudo chown -R www-data:www-data /pathto/moodledata/
sudo chmod -R 02777 /pathto/moodledata/
see https://docs.moodle.org/38/en/Security_recommendations#Most_secure.2Fparanoid_file_permissions
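I can't test on Windows, but if you're serving Moodle through IIS, something along these lines should grant the web server modify rights on the data directory (this assumes the default IIS_IUSRS group and that moodledata lives at C:\moodledata; adjust both for your setup):
icacls C:\moodledata /grant "IIS_IUSRS:(OI)(CI)M" /T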

What am I screwing up trying to download particular file types with wget?

I am attempting to regularly archive a few file types hosted on a community website where our admin has been MIA for years, in case he dies or just stops paying for the hosting.
I am able to download all of the files I need using wget -r -np -nd -e robots=off -l 0 URL but this leaves me with about 60,000 extra files to waste time both downloading and deleting.
I am really only looking for files with the extensions "tbt" and "zip". When I add in -A tbt,zip to the input, wget then only downloads a single file, "index.html.tmp". It immediately deletes this file because it doesn't match the file type specified, and then the process stops entirely, with wget announcing that it is finished. It does not attempt to download any of the other files that it grabs when the -A flag is not included.
What am I doing wrong? Why does specifying file types in the way that I did cause it to finish after only looking at one file?
Possibly you're hitting the same problem I've hit when trying to do something similar. When using --accept, wget determines whether a link refers to a file or a directory based on whether or not it ends with a /.
For example, say I have a directory named files, and a web page that has:
<a href="http://localhost:8080/files">Lots o' files!</a>
If I were to request this with wget -r, then wget would happily GET /files, see that it was an HTML document containing a bunch of links, and continue to download those links.
However, if I add -A zip to my command line, and run wget with --debug, I see:
appending ‘http://localhost:8080/files’ to urlpos.
[...]
Deciding whether to enqueue "http://localhost:8080/files".
http://localhost:8080/files (files) does not match acc/rej rules.
Decided NOT to load it.
In other words, wget thinks this is a file (no trailing /) and it doesn't match our acceptance criteria, so it gets rejected.
If I modify the remote file so that it looks like...
<a href="http://localhost:8080/files/">Lots o' files!</a>
...then wget will follow the link and download files as desired.
I don't think there's a great solution to this problem if you need to use wget. As I mentioned in my comment, there are other tools available that may handle this situation more gracefully.
It's also possible you're experiencing a different issue; the output from adding --debug to your command line would clarify things in that case.
I also experienced this issue, on a page where all the download links looked something like this: filedownload.ashx?name=file.mp3. The solution was to match both the linked file and the downloaded file, so my wget accept flag looked like this: -A 'ashx,mp3'. I also used the --trust-server-names flag. This catches all the .ashx files that are linked in the web page; then, when wget does the second check, all the mp3 files that were downloaded will be kept.
As an alternative to --trust-server-names, you may also find the --content-disposition flag helpful. Both flags help rename the file that gets downloaded from filedownload.ashx?name=file.mp3 to just file.mp3.
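Putting the pieces together with the recursive options from the question, the full command would look something like this (URL stands in for the actual site):
wget -r -np -nd -e robots=off -l 0 -A 'ashx,mp3' --trust-server-names URL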

Blast+ Local Configuration: How to configure nt and nr databases?

I am configuring Blast+ on my Mac (macOS Sierra) and am having trouble configuring the nr and nt databases that I also downloaded locally. I am trying to follow NCBI's instructions here, and am getting hung up on the Configuration and Example Execution steps.
They say to change my .bash_profile so that it says:
export PATH=$PATH:$HOME/Documents/Luke/Research/Pedulla\ 17-18/blast/ncbi-blast-2.6.0+/bin
That works fine, and they say to configure a path for BLASTDB "similarly", but pointing to where my DB will be, so I have done this:
export BLASTDB=$BLASTDB:$HOME/Documents/Luke/Research/Pedulla\ 17-18/blast/blastdb/nt.00
which specifies the exact folder that I got when I unzipped the nt tar file from their FTP. With this path, if I run the command...
blastn -query test_query.fa -db nt.00 -task blastn -outfmt "7 qseqid sseqid evalue bitscore" -max_target_seqs 5
then it runs successfully and I get results, but I am worried that these are only being checked against the nt.00 section rather than the entire nt database, especially because if I run my test_query.fa sequence on the web BLAST, I get different results.
Also, their instructions say the path only needs to point to the folder that contains the database folder nt.00 from the tar I unzipped, and not to nt.00 itself; in my case that would just be "blastdb/" (as opposed to "blastdb/nt.00/", which contains nt.00.nhd, nt.00.nal, etc.). That makes sense, because when I am working I want to be able to run blastn against the nt database but also blastp against the nr one, etc., just by changing the -db flag on my command, and there shouldn't be a problem with having them all in this folder, right? But if I must specify the BLASTDB path with the nt.00 DB appended, how could I ever use nr.00 in the same folder (blastdb/)? Essentially, I want to do as the instructions say and just have this:
export BLASTDB=$BLASTDB:$HOME/Documents/Luke/Research/Pedulla\ 17-18/blast/blastdb/
And then, depending on what database I want to use, I could just specify it after the -db flag on my command. But when I set the path like that, it gives me this error:
BLAST Database error: No alias or index file found for nucleotide database [nt] in search path [/Users/LJStout::/Users/LJStout/Documents/Luke/Research/Pedulla 17-18/blast/blastdb:]
I have tried running that same blastn command from above, swapping "nt" for "nt.00", and have tried these commands with the BLASTDB path ending in "blastdb/", "blastdb/nt", and of course "blastdb/nt.00", which is the only one that runs without errors.
Here's an example of another thread I read where the OP is worried about his runs not checking the entire nt.00 folder; that turned out to be different from my problem, however.
Thanks for your help!
This whole problem came down to having the nt.00 and nr.00 folders (the original folders that result from unzipping their respective .tar.gz files) in the same parent folder, when it's their contents that should be in the same parent folder. I simply deleted the folders they came in and copied the contents over to my new, single parent folder. I was somewhat misled by the instructions; it was a simple mistake. Now I have one folder, blastdb/, that contains the contents of every database I plan on using, including nt, nr, and refseq.
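For anyone else who ends up here: each preformatted database ships with an alias file (nt.nal for nt, nr.pal for nr) that ties its volumes (nt.00, nt.01, ...) together, so once everything sits in one flat folder you can point -db at the plain database name. Roughly, my setup now works like this (paths are from my machine; adjust for yours):
export BLASTDB=$HOME/Documents/Luke/Research/Pedulla\ 17-18/blast/blastdb
# quick sanity check that the whole database is found
blastdbcmd -db nt -info
# searches all of nt, not just one volume, via the nt.nal alias
blastn -query test_query.fa -db nt -task blastn -outfmt "7 qseqid sseqid evalue bitscore" -max_target_seqs 5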

Logrotate not generating all files after run

Hello people
It's my first time using logrotate and I don't know if I'm configuring it the right way. I'm using it with the loggerhead log file on Ubuntu 11.04.
The log is under
/log/loggerhead/loggerheadd.log
My configuration file looks like this
/log/loggerhead/loggerheadd.log {
daily
rotate 7
compress
delaycompress
missingok
}
Then I ran a forced rotation
logrotate -f /etc/logrotate.d/loggerhead
and that changed the name of the log file to
/log/loggerhead/loggerheadd.log.1
It didn't create the original file (loggerheadd.log) again, so I couldn't run a new forced rotation, because "the file doesn't exist".
The application is supposed to write entries to "loggerheadd.log", but when logrotate runs, the file gets renamed, so where will the log entries be written? Am I missing something?
Hope you can help me
By default logrotate will just rename your files, so your old file will be gone.
You can either use the create option to create a new empty log file after the old one is rotated, or copytruncate to copy the original file to a new name and then truncate the original. Either option will do what you're asking for (more details on the man page here).
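For example, with copytruncate the configuration would look something like this (loggerhead keeps writing to the same file, so nothing else needs to change):
/log/loggerhead/loggerheadd.log {
daily
rotate 7
compress
delaycompress
missingok
copytruncate
}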

laravel - can't open paths.php on server

This one's a weird one. For some reason, out of the blue, every time I create a new project and upload it to my server, it won't allow me to edit the paths.php file through FTP.
I accessed the server through the command line earlier today to install a bundle and noticed the paths.php file was green and had a star next to it. Does anyone know what this means, and is it preventing me from opening this file?
regards
The permission of the file is 755, which means:
755 = rwx r-x r-x
Owner has Read, Write and Execute
Group has Read and Execute only
Other has Read and Execute only
Looking at the picture, qsradmin is the owner of the file, so that is the only user who can write to or edit it.
To change the owner of the file, use the chown command like this:
chown NameOfTheUser paths.php
For more information, check out Unix file permissions.
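For example, if your FTP login is a (hypothetical) user called ftpuser, you could run:
sudo chown ftpuser paths.php
Or keep qsradmin as the owner and make the file group-writable instead, provided your FTP user is in the file's group:
sudo chmod 664 paths.php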

parallels plesk file permission

I'm trying to install a Joomla site in Parallels Plesk Panel via Akeeba Backup, where I'm facing a file permission issue.
An error occured
Could not open /var/www/vhosts/xyz.com/httpdocs/pearl_new/jquery.min.js for writing.
I searched all over, including the Plesk forum, and found this is a very common problem. Some suggested that installing mod_suphp can solve it. I tried, but I don't know whether it installed successfully or not.
Then I created a new service plan and, in the hosting parameters, selected Run PHP as FastCGI.
After that I moved my domain to that service plan. I thought it would solve the problem, but I'm still getting the same error. Can anyone help, please?
On the SSH command line, try:
find /var/www/vhosts/xyz.com/httpdocs/ -type f -exec chmod 664 {} \;
find /var/www/vhosts/xyz.com/httpdocs/ -type d -exec chmod 775 {} \;
These will set the permissions so that files (f) and directories (d) are writable by the owner and group. You also need to make sure that apache is in the psacln and psaserv groups in the /etc/group file; the lines should look like this:
psaserv:x:504:apache,psaftp,psaadm
psacln:x:505:apache
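If apache is missing from either group, you can add it with usermod (the group names above are the Plesk defaults), then restart the web server so the new membership is picked up:
usermod -a -G psaserv,psacln apache
service httpd restart   # or apache2, depending on the distribution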
Then you can run the command:
chown -R siteusername.psacln /var/www/vhosts/xyz.com/httpdocs/*
where "siteusername" is the username of the site's files.
Hope this helps.
This is a common issue on Linux for users on shared hosting.
The fix is simple.
If you have already selected the PHP module with FastCGI, follow these steps:
Open the File Manager.
Make a new folder, "abc".
Click "All" on the right side to view all files in the tree.
Select all files and folders except "plesk-stats".
Click the Copy/Move button.
In the path field, type /httpdocs/abc/
Click Move.
Once all the files have moved, open the "abc" folder.
Select all files and folders.
Click the Copy/Move button.
In the path field, type /httpdocs/
That's it; the issue is sorted out.
I tried these steps for many clients.
I hope this helps someone.