ownCloud Calendar ICS Backup

I wanted to have a regular backup of my ownCloud calendars as ICS files, in case the server runs into a problem that I don't have time to fix right away. For this purpose I wrote a little script, which can be run as a cronjob.
Any feedback, improvements, alterations are welcome!

I have been using this script for quite a while. It was a big help in having a backup for calendars and contacts from my ownCloud installation. Thanks!
However, one thing really bugged me with envyrus's script: new calendars/addressbooks need to be shared manually with the "backup user" whose calendars will be backed up. This made the script basically useless for me, because my wife creates and deletes her calendars and task lists quite often.
There is a script which can automatically deal with additionally created/deleted calendars, since it fetches all data from the database and not via HTTP request (like the script from envyrus). It just creates a backup of every single calendar/addressbook existing in the database. Giving a username/password combination is not necessary when using this script, and there is no need to share the calendars to be backed up with a certain user. Last but not least, the script doesn't require root privileges.
From the script's README:
This Bash script exports calendars and addressbooks from
ownCloud/Nextcloud to .ics and .vcf files and saves them to a
compressed file. Additional options are available.
Starting with version 0.8.0, there is no need anymore for a file with
user credentials because all data is fetched directly from the
database. If only calendars/addressbooks of certain users shall be
backed up, list them in users.txt without any passwords.
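For example, if only the calendars of the users alice and bob should be backed up, my understanding of the README is that users.txt would simply list those names, one per line (the exact format is an assumption on my part):
alice
bob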
Maybe this is also helpful for others: calcardbackup

DISCLAIMER: I created this script for a little ownCloud instance that I run for myself and 1-2 other friends - it is not meant for any "serious business", so to speak. I used the scripts from this and this site as a starting point - thank you!
To create ICS backups of all the user calendars, I created an ownCloud user called "calendarBackup", with whom other users can share their calendars. I wrote a little script that loops through all those calendars and downloads the ICS files. They are then put into a shared folder owned by calendarBackup, and the backup is distributed across users. (An easy adjustment could be made so that each user gets their own calendar files.)
The advantage of this approach is that the script doesn't need to know all the user passwords.
Here is the code:
#!/bin/bash
#ownCloud login data for the calendar backup user
OCuser=owncloudUserName
OCpassword="owncloudUserPassword"
OCpath="/var/www/owncloud/"
OCbaseURL="https://localhost/owncloud/"
OCdatabase="owncloudDatabaseName"
#destination folder for calendar backups
dest="/var/www/owncloud/data/owncloudUserName/files/Backup/"
#MySQL user with access to the ownCloud database
MSQLuser=owncloudMysqlUser
MSQLpassword="owncloudMysqlUserPassword"
#timestamp used as backup name
timeStamp=$(date +%Y%m%d%H%M%S)
archivePassword="passwordForArchivedCalendars"
#apache user and group
apacheUser="apacheUser"
apacheGroup="apacheGroup"
#create folder for the new backup files
mkdir "$dest$timeStamp"
#create array of calendar names from an ownCloud database query
calendars=($(mysql -B -N -u "$MSQLuser" -p"$MSQLpassword" -e "SELECT uri FROM $OCdatabase.oc_calendars"))
calendarCount=${#calendars[@]}
#create array of calendar owners from an ownCloud database query
owners=($(mysql -B -N -u "$MSQLuser" -p"$MSQLpassword" -e "SELECT principaluri FROM $OCdatabase.oc_calendars"))
loopCount=0
#loop through all calendars
while [ $loopCount -lt $calendarCount ]
do
    #see if the owner starts with "principals/users/"
    #(this part of the script assumes that principaluri for normal users looks like this: principals/users/USERNAME)
    if [ "${owners[$loopCount]:0:17}" = "principals/users/" ]
    then
        #concatenate the download URL
        url="${OCbaseURL}remote.php/dav/calendars/$OCuser/${calendars[$loopCount]}_shared_by_${owners[$loopCount]:17}?export"
        #echo "$url"
        #download the ics file (if the download fails, delete the file)
        wget \
            --output-document="$dest$timeStamp/${owners[$loopCount]:17}${calendars[$loopCount]}.ics" \
            --no-check-certificate --auth-no-challenge \
            --http-user="$OCuser" --http-password="$OCpassword" \
            "$url" || rm "$dest$timeStamp/${owners[$loopCount]:17}${calendars[$loopCount]}.ics"
        #echo "${owners[$loopCount]:17}"
    fi
    #echo "${calendars[$loopCount]} ${owners[$loopCount]}"
    loopCount=$((loopCount + 1))
done
#zip the backed-up ics files and remove the folder (this could easily be left out; adjust the chown command then)
zip -r -m -j -P "$archivePassword" "$dest$timeStamp" "$dest$timeStamp"
rm -R "$dest$timeStamp"
#chown needed so ownCloud can access the backup file
chown "$apacheUser:$apacheGroup" "$dest$timeStamp.zip"
#update the ownCloud database of the calendar backup user
sudo -u "$apacheUser" php "$OCpath"occ files:scan "$OCuser"
A few notes on the script:
It is written for Bash on a Debian system.
It works with ownCloud 9.1 and MySQL.
It assumes the download URL for a shared calendar looks like this:
OwncloudURL/remote.php/dav/calendars/LoggedInOwncloudUser/CalendarName_shared_by_CalendarOwner?export
To check for the correct URL, simply download a shared calendar in the web interface and look at the download URL.
It assumes that the calendar names are stored in the column "uri" of the table "oc_calendars".
It assumes that the calendar owner is stored in the column "principaluri" of the table "oc_calendars" and that all normal users are prefixed with "principals/users/".
It needs sudo permission to update the ownCloud file structure.
It needs zip to be installed.
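Since the script is meant to be run as a cronjob, an entry along these lines in root's crontab should do; the script path and the schedule below are placeholders of my choosing:
0 3 * * * /usr/local/bin/owncloud-cal-backup.sh >/dev/null 2>&1
(i.e. run the backup every night at 03:00; root's crontab is used here because of the chown and sudo calls in the script.)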

Related

Using p4 zip and unzip to export files from one Perforce server to another

I was trying to export files along with their revision history inside my depot folder from a 2015.2 to a 2019 Perforce server. I would also want Perforce to create a new user on my new server corresponding to the committer/submitter on my original 2015 repo.
Perforce replication looked like overkill for my current task, and then I came across this read on Perforce's website that mentioned p4 zip.
This looked like it would solve my problem, but there are a few things in the article I could not understand.
Let's say I am moving data from server1_ip:port --> server2_ip:port
I am currently following these steps:
Making a zip of the folder to be copied using
p4 remote my_remote_spec, setting
Address: server1_ip:port
DepotMap: //depot/... //depot2/...
p4 -p server1_ip:port zip -o test.zip -r my_remote_spec -A //depot/.... But on this step I get a permission denied error. This is weird to me, because the user, although not super/admin, has access to the files I ask to get zipped.
Also, when I did try with a super user, I could not find test.zip even though I was not prompted with any errors.
Isn't the above command supposed to generate a zip file inside the directory which I run it from?
Is the unzip command supposed to be run after a p4 login from a user of the second server?
Lastly, from the document, why is a third port, 1667, mentioned in the transfer of files between servers running on 1666 and 1777?
on this step I get a permission denied error. This is weird to me, because the user, although not super/admin, has access to the files I ask to get zipped.
This is expected:
C:\Perforce\test>p4 help zip
zip -- Package a set of files and their history for use by p4 unzip
...
The zip command requires super permission granted by p4 protect.
Isn't the above command supposed to generate a zip file inside the directory which I run it from?
Similar to p4 admin checkpoint, the zip file is written to the server machine (relative to the server root, if you don't specify an absolute path), rather than being transferred to the local client directory. This is not explicitly stated in the documentation (which seems like an oversight), but if you look in the root directory of the server where you ran the zip, you should find your test.zip there.
Is the unzip command supposed to be run after a p4 login from a user of the second server?
Yes, any time you run a command against a particular server, you will need to be logged in to that server. In the case of p4 unzip you will need at least admin permission on the second server.
Lastly, from the document, why is a third port, 1667, mentioned in the transfer of files between servers running on 1666 and 1777?
I'm pretty sure that's a typo; whoever wrote the article started off using ports 1666 and 1777, changed their mind halfway through, and didn't proofread. :)
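For reference, here is a sketch of the whole round trip under the corrections above; the ports and paths are placeholders, and it assumes (as with zip) that unzip reads the zip file from the server machine's file system:
p4 -p server1_ip:1666 zip -o test.zip -r my_remote_spec -A //depot/...
(test.zip is written under server1's P4ROOT; copy it over to the server2 machine)
p4 -p server2_ip:1666 login
p4 -p server2_ip:1666 unzip -i test.zip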

Blast+ Local Configuration: How to configure nt and nr databases?

I am configuring BLAST+ on my Mac (OS Sierra) and am having trouble configuring the nr and nt databases that I also downloaded locally. I am trying to follow NCBI's instructions here, and am getting hung up on the Configuration and Example Execution steps.
They say to change my .bash_profile so that it says:
export PATH=$PATH:$HOME/Documents/Luke/Research/Pedulla\ 17-18/blast/ncbi-blast-2.6.0+/bin
That works fine, and they say to configure a path for BLASTDB "similarly", but pointing to the folder where my DB will be, so I have done this:
export BLASTDB=$BLASTDB:$HOME/Documents/Luke/Research/Pedulla\ 17-18/blast/blastdb/nt.00
which specifies the exact folder that I got when I unzipped the nt tar file from their FTP. With this path, if I run the command...
blastn -query test_query.fa -db nt.00 -task blastn -outfmt "7 qseqid sseqid evalue bitscore" -max_target_seqs 5
then it runs successfully and I get results, but I am worried that these are only being checked against the nt.00 section of the entire nt database, especially because if I run my test_query.fa sequence on the web BLAST, I get different results.
Also, their instructions say that the path only needs to point to the folder that contains the database from the tar I unzipped, and not to the specific nt.00 itself, which in my case would just be "blastdb/" (as opposed to "blastdb/nt.00/", which then contains nt.00.nhd, nt.00.nal, etc.). That makes sense, because when I am working I want to be able to run blastn against the nt database but also blastp against the nr one, etc., just by changing the -db flag on my command, and there shouldn't be a problem with having them all in this folder, right? But if I must specify the path for BLASTDB with the nt.00 DB added to the end, how could I ever use nr.00 in the same folder (blastdb/)? Essentially, I want to do as the instructions say and just have this:
export BLASTDB=$BLASTDB:$HOME/Documents/Luke/Research/Pedulla\ 17-18/blast/blastdb/
And then, depending on what database I want to use, I could just name it after the -db flag on my command. But when I make the path like that above, it gives me this error:
BLAST Database error: No alias or index file found for nucleotide database [nt] in search path [/Users/LJStout::/Users/LJStout/Documents/Luke/Research/Pedulla 17-18/blast/blastdb:]
I have tried running that same blastn command from above while swapping out "nt" for "nt.00", and have tried these commands with the path for BLASTDB ending in "blastdb/", "blastdb/nt", and of course "blastdb/nt.00", which is the only one that runs without errors.
Here's an example of another thread I read where the OP is worried about his executions not checking the entire nt database; that was different from my problem, however.
Thanks for your help!
This whole problem came down to having the nt.00 & nr.00 folders (the original folders that result from unzipping their respective .tar.gz's) in the same parent folder, when it should be their contents that are in the same parent folder. I simply deleted the folders they came in and copied the contents over to my new, singular parent. I was kind of misled by the instructions; it was a simple mistake. Now I have one folder, blastdb/, that contains the contents of every database I plan on using, including nt, nr, and refseq.
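So the working setup looks roughly like this (a sketch based on the paths from the question; protein_query.fa is a made-up file name):
export BLASTDB=$BLASTDB:$HOME/Documents/Luke/Research/Pedulla\ 17-18/blast/blastdb
blastn -query test_query.fa -db nt -outfmt "7 qseqid sseqid evalue bitscore"
blastp -query protein_query.fa -db nr
Here blastdb/ holds the volume files of all databases side by side (nt.00.*, nr.00.*, ...) together with the alias files (nt.nal, nr.pal) that tie the volumes of each database together, which is why -db nt searches all volumes and not just nt.00.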

Postgres ERROR: could not open file for reading: Permission denied

Computer: Mac OS X, version 10.8
Database: Postgres
Trying to import csv file into postgres.
pg> copy items_ordered from '/users/darchcruise/desktop/items_ordered.csv' with CSV;
ERROR: could not open file "/users/darchcruise/desktop/items_ordered.csv" for reading: Permission denied
Then I tried
$> chown postgres /users/darchcruise/desktop/items_ordered.csv
chown: /users/darchcruise/desktop/items_ordered.csv: Operation not permitted
Lastly, I tried
$> ls -l
-rw-r--r-- 1 darchcruise staff 1016 Oct 18 21:04 items_ordered.csv
Any help is much appreciated!
Assuming the psql command-line tool, you may use \copy instead of copy.
\copy opens the file and feeds the contents to the server, whereas copy tells the server to open the file itself and read it, which may be problematic permission-wise, or even impossible if client and server run on different machines with no file sharing in between.
Under the hood, \copy is implemented as COPY FROM stdin and accepts the same options as the server-side COPY.
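Applied to the file from the question, that would be (run inside psql as the connected user):
\copy items_ordered from '/users/darchcruise/desktop/items_ordered.csv' with csv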
Copy the CSV file to /tmp
For me this solved the issue.
chmod a+rX /users/darchcruise/ /users/darchcruise/desktop /users/darchcruise/desktop/items_ordered.csv
This will change access rights for your folder. Note that everyone will be able to read your file.
You can't use chown as a user without administrative rights.
Also consider learning umask to ease creation of shared files.
Copy your CSV file into the /tmp folder
Files named in a COPY command are read or written directly by the server, not by the client application. Therefore, they must reside on or be accessible to the database server machine, not the client. They must be accessible to and readable or writable by the PostgreSQL user (the user ID the server runs as), not the client. COPY naming a file is only allowed to database superusers, since it allows reading or writing any file that the server has privileges to access.
I had the issue when I was trying to export data from a remote server onto the local disk. I hadn't realised that SQL COPY is actually executed on the server and tries to write to a server folder. Instead, the correct thing to do was to use \copy, which is the psql command that writes to the local file system as I expected. http://www.postgresql.org/message-id/CAFjNrYsE4Za_KWzmfgN1_-MG7GTw_vpMRxPk=OEjAiLqLskxdA#mail.gmail.com
Perhaps that might be useful to someone else too.
Another way to do this, if you have pgAdmin and are comfortable using the GUI, is to go to the table in the schema, right click on the table you wish to import the file to, and select "Import". Browse your computer for the file, select the file type, choose the columns you want the data to be imported into, and then select Import.
That was done using pgAdmin III and PostgreSQL 9.4.
I resolved the same issue with a recursive chown on the parent folder:
sudo chown -R postgres:postgres /home/my_user/export_folder
(my export being in /home/my_user/export_folder/export_1.csv)
On a MacBook, I first opened a terminal and typed
open /tmp
or, in Finder, press Command+Shift+G and type /tmp in "Go to the folder".
That opens the temp folder in Finder. I then pasted the copied CSV file into this folder, went back to the postgres terminal, and typed the command below, which copied my CSV data into the DB table:
\copy recharge_operator FROM '/private/tmp/operator.csv' DELIMITER ',' CSV;
COPY your_table (Name, Latitude, Longitude) FROM 'C:\Temp\your file.csv' DELIMITER ',' CSV HEADER;
Place the file under C:\Temp, and quote the path if the file name contains spaces.
For me it worked to simply add sudo (or run as root) for the chown command:
sudo chown postgres /users/darchcruise/desktop/items_ordered.csv
You must grant the pg_read_server_files role to the user if you are not using the postgres superuser.
Example:
GRANT pg_read_server_files TO my_user WITH ADMIN OPTION;
Just in case you're facing this problem under Windows 10: add the users group "yourcomputer\Users" on the Security tab and grant it full control; that solved my issue.
I had the same error message but was using psycopg2 to communicate with PostgreSQL. I fixed the permission issues by using the functions copy_from and copy_expert, which open the file on the client side as the user running the Python script and feed the data to the database over STDIN.
Refer to this link for further information.
This answer is only for Linux Beginners.
Assuming initially the DB user didn't have file/folder (directory) permission on the client side.
Let's constrain ourselves to the following:
User: postgres
Purpose: You wanted to (write to / read from) a specific folder
Tool: psql
Connected to a specific database: YES
FILE_PATH: /home/user/training/sql/csv_example.csv
Query: \copy (SELECT * FROM table_name) TO 'FILE_PATH' DELIMITER ',' CSV HEADER;
Actual Results: After running the query you got an error : Permission Denied
Expected Results: COPY COUNT_OF_ROWS_COPIED
Here are the steps I'd follow to try and resolve it.
Confirm the FILE_PATH permissions on your file system.
Inside a terminal, to view the permissions for a file/folder, you need to list them in long format by entering the command ls -l.
The output has a section that shows something like this: drwxrwxr-x
Which is interpreted in the following way:
TYPE | OWNER RIGHTS | GROUP RIGHTS | OTHERS' RIGHTS
rwx (r: read, w: write, x: execute)
TYPE (1 char) = d: directory, -: file
OWNER RIGHTS (3 chars after TYPE)
GROUP RIGHTS (3 chars after OWNER)
OTHERS' RIGHTS (3 chars after GROUP)
If the permissions are not sufficient, ensure that the postgres user can at least enter (x) every folder in the path.
This means that for FILE_PATH, all the directories (home, user, training, sql) should have at least an x in the OTHERS' RIGHTS.
Change permissions for all parent folders that need to be entered so that they have an x. You can use chmod rights_you_want parent_folder.
Assuming /training/ didn't have an execute permission,
I'd go to the user folder and enter chmod a+x training.
Change the destination folder/directory to have a w if you want to write to it, or at least an r if you want to read from it.
Assuming /sql didn't have a write permission,
I would now run chmod a+w sql.
Restart the PostgreSQL server: sudo systemctl restart postgresql
Try again.
This should most probably get you the expected result now.
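Condensed into concrete commands for the FILE_PATH used above (a sketch; substitute your own folders):
chmod a+x /home /home/user /home/user/training /home/user/training/sql
chmod a+w /home/user/training/sql
The first line lets any user (including postgres) traverse the directories leading to the CSV; the second lets the destination directory be written to.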
On Linux you can fix this by giving the postgres user read/write/execute permissions on the target directory, e.g.:
setfacl -m u:postgres:rwx /home/hi
I just copied the source CSV file to another folder with more open permissions (C:/temp), and it worked fine.
Maybe you are using pgAdmin connected to a remote host and trying to import a file from your own system, but COPY searches for that file in the remote system's file system. That's the error I faced; maybe it's the same for you, so check it.

Google Cloud Storage upload files modified today

I am trying to figure out if I can use the cp command of gsutil on the Windows platform to upload files to Google Cloud Storage. I have 6 folders on my local computer that get new PDF documents added to them daily. Each folder contains around 2,500 files. All files are currently on Google Storage in their respective folders. Right now I mainly upload all the new files using the Google Cloud Storage Manager. Is there a way to create a batch file and schedule it to run automatically every night, so it grabs only files that have been scanned today and uploads them to Google Storage?
I tried this format:
python c:\gsutil\gsutil cp "E:\PIECE POs\64954.pdf" "gs://dompro/piece pos"
and it uploaded the file perfectly fine.
This command
python c:\gsutil\gsutil cp "E:\PIECE POs\*.pdf" "gs://dompro/piece pos"
will upload all of the files into a bucket. But how do I only grab files that were changed or generated today? Is there a way to do it?
One solution would be to use the -n parameter on the gsutil cp command:
python c:\gsutil\gsutil cp -n "E:\PIECE POs\*" "gs://dompro/piece pos/"
That will skip any objects that already exist on the server. You may also want to look at using gsutil's -m flag and see if that speeds the process up for you:
python c:\gsutil\gsutil -m cp -n "E:\PIECE POs\*" "gs://dompro/piece pos/"
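To run this automatically every night, one option is to put the copy command into a batch file and register it with the Windows Task Scheduler; the batch file name, its location, and the schedule below are placeholders of my choosing:
rem nightly_gcs_upload.bat - upload new PDFs, skipping files already in the bucket
python c:\gsutil\gsutil -m cp -n "E:\PIECE POs\*.pdf" "gs://dompro/piece pos/"
rem then register it once from a command prompt with something like:
rem schtasks /create /tn "GCS nightly upload" /tr "C:\scripts\nightly_gcs_upload.bat" /sc daily /st 23:00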
Since you have Python available to you, you could write a small Python script that finds the ctime (creation time) or mtime (modification time) of each file in a directory, checks whether that date is today, and uploads the file if so. You can see an example in this question, which could be adapted as follows:
import datetime
import os

local_path_to_storage_bucket = [
    ('<local-path-1>', 'gs://bucket1'),
    ('<local-path-2>', 'gs://bucket2'),
    # ... add more here as needed
]

today = datetime.date.today()

for local_path, storage_bucket in local_path_to_storage_bucket:
    for filename in os.listdir(local_path):
        # os.listdir() returns bare names, so build the full path first
        filepath = os.path.join(local_path, filename)
        ctime = datetime.date.fromtimestamp(os.path.getctime(filepath))
        mtime = datetime.date.fromtimestamp(os.path.getmtime(filepath))
        if today in (ctime, mtime):
            # Using the 'subprocess' library would be better, but this is
            # simpler to illustrate the example.
            os.system('gsutil cp "%s" "%s"' % (filepath, storage_bucket))
Alternatively, consider using the Google Cloud Storage Python API directly instead of shelling out to gsutil.

laravel - can't open paths.php on server

This one's a weird one. For some reason, out of the blue, every time I create a new project and upload it to my server, it won't allow me to edit the paths.php file through FTP.
I accessed the server through the command line earlier today to install a bundle and noticed the paths.php file was green and had a star next to it. Does anyone know what this means, and is it preventing me from opening this file?
Regards
The permission of the file is 755, which means:
755 = rwx r-x r-x
Owner has Read, Write and Execute
Group has Read and Execute only
Other has Read and Execute only
Viewing the picture, qsradmin is the owner of the file, so he is the only one who can write to or edit the file.
In order to change the owner of the file, use chown command like this:
chown NameOfTheUser paths.php
For more information, check out Unix file permissions.
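As a side note on the original question: in most shell colour schemes, a green file name with a trailing star just means the file has its execute bits set (ls -F marks executables with *), so it is the ownership shown above, not the colouring, that stops your FTP user from editing it. If you cannot run chown for lack of root access, an alternative sketch is to loosen the group write bit as the owning user (qsradmin) instead:
chmod g+w paths.php
Your FTP account then needs to be in the file's group for this to help.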