Moodle database too large to import in phpMyAdmin

I want to transfer a Moodle website and its database to another host (a shared host), but my database backup is too large (120 MB), while the maximum file size allowed for import in phpMyAdmin is 50 MB. Is there any way to import the whole database at once, or to split it into smaller .sql files?

You can either increase the necessary values in your php.ini file (/etc/php.ini on RedHat):
post_max_size = 250M
upload_max_filesize = 250M
; Optional if the error persists
max_execution_time = 5000
max_input_time = 5000
memory_limit = 1000M
or do as the other answers mention and import the file from the shell using
mysql -u user -p database < YourSQL.sql
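If you have to stay with phpMyAdmin, compressing the dump is often enough to get under the upload limit, since phpMyAdmin can usually import gzip-compressed dumps directly (provided the server's PHP has the zlib extension). A minimal sketch, with an illustrative file name:
gzip -c moodle_backup.sql > moodle_backup.sql.gz
Plain SQL text typically compresses to a small fraction of its original size, so a 120 MB dump will usually end up well under the 50 MB limit.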

If your hosting account has SSH access, please restore the dump from the command line (using the mysql command).

You can log in to the MySQL console, select the database, and run the source command:
use database-name
source /file-path/to-sql-file.sql
The source command runs the file in small chunks, so it should not run into packet-size problems.
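For example, a complete session might look like this (user name, database name, and path are placeholders):
mysql -u youruser -p
mysql> use moodle_db
mysql> source /home/youruser/moodle_backup.sql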

Related

Is it safe to delete archive_status log files in PostgreSQL 10

I am not a DBA, but I am running PostgreSQL 10 on a production server, installed via BigSQL. I set up replication from my production server to another server; on the replication server everything is working, but my production server has no space left. After running du on the production server, I can see that the pg_wal folder holds 17 GB of files, each 16 MB in size.
After some Google searching I changed my postgresql.conf file as follows:
wal_level = logical
archive_mode = on
archive_command = 'cp -i %p /etc/bigsql/data/pg10/pg_wal/archive_status/%f'
I installed PostgreSQL 10 from BigSQL and made the above changes.
After the changes, the pg_wal/archive_status directory held 16 GB of logs. So my question is: should I delete them manually, or do I have to wait for the system to delete them automatically?
And if I set archive_mode to on, should the WAL files get removed automatically?
Thanks for your precious time.
This depends on how you do your backups and whether you'd ever need to restore the database to some point in time.
Only a full offline filesystem backup (offline meaning with the database shut down) or an online logical backup with pg_dumpall will not need those files for a restore.
You'd need those files to restore a filesystem backup created while the database is running; without them the backup will fail to restore. There are, however, backup solutions that copy the needed WAL files automatically (like Barman).
You'd also need those files if your replica database will ever fall behind the master for some reason. Or you'd need to restore the database to some past point-in-time.
But these files compress pretty well (they should be less than 10% of their original size after compression), so you can write your archive_command to compress them automatically instead of just copying.
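For example, a compressing archive_command could look roughly like this (the archive directory is illustrative, and the matching restore_command would need to decompress the files again):
archive_command = 'test ! -f /var/lib/pgsql/wal_archive/%f.gz && gzip < %p > /var/lib/pgsql/wal_archive/%f.gz'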
And you should eventually delete them from the archive. I'd recommend not deleting them until they're at least a month old and at least two full successful backups have been taken since they were created.

Moodle with Amazon Aurora: Index column size too large. The maximum column size is 767 bytes

When performing the database creation, Aurora is throwing the following error to Moodle:
ERROR 1709 (HY000): Index column size too large. The maximum column size is 767 bytes.
It happens on every table that has a BIGINT(10) id column, like mdl_config or mdl_course.
It seems to be related to the Barracuda file format. The InnoDB file variables present in the database are:
innodb_file_format = Barracuda
innodb_file_format_check = ON
innodb_file_format_max = Antelope
innodb_file_per_table = ON
I have to say that in the Aurora Parameter Groups there's no way to change the innodb_file_format_max configuration.
The Moodle version I'm using is 3.1.6.
I found a workaround for this problem. We need to change the ROW_FORMAT to "Dynamic"; only then does it work. To change the ROW_FORMAT, open the following file in the Moodle directory:
moodle/lib/dml/mysqli_native_moodle_database.php
Edit line 420, changing { $rowformat = "ROW_FORMAT=Compressed"; } to { $rowformat = "ROW_FORMAT=Dynamic"; }
This effectively nullifies the if condition that checks whether the database supports the Compressed ROW_FORMAT and, if so, sets the ROW_FORMAT to Compressed. This is the only hack that made it work for me.
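If you prefer to script the edit, a sed one-liner like the following should be equivalent, assuming the string appears in your Moodle version exactly as shown above (back up the file first):
sed -i 's/ROW_FORMAT=Compressed/ROW_FORMAT=Dynamic/' moodle/lib/dml/mysqli_native_moodle_database.php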
I stumbled across this issue installing Moodle 3.11 on Windows/IIS/MySQL/AWS/RDS. Following the "{ $rowformat = "ROW_FORMAT=Compressed"; } to { $rowformat = "ROW_FORMAT=Dynamic"; }" hints from #Rohit and #ErrorCode67 above, I opened moodle/lib/dml/mysqli_native_moodle_database.php, but unfortunately line 420 was not what I was expecting.
Instead, on line 401 I found the following:
// All the tests passed, we can safely use ROW_FORMAT=Compressed in sql statements.
$this->compressedrowformatsupported = true;
Changing it to the following seemed to do the trick for me:
$this->compressedrowformatsupported = false;
The solution to this problem is to create a MySQL RDS instance instead of an Aurora RDS instance, proceed with the Moodle installation, and after it finishes, create a backup of the MySQL RDS instance and restore it into an Aurora RDS instance.
The problem only appears during the install phase; after that, Aurora RDS can be used with the previously created installation schema.
If you are migrating an existing on-prem Moodle to an AWS Aurora MySQL database, do the following (assuming Linux on both sides):
Ensure you have upgraded your current Moodle to the same version you will be putting on AWS (do backups first).
Perform a mysqldump, e.g.: mysqldump --allow-keywords --opt -uAdminUser -p MoodleDBName > moodle_onprem.sql
Tar up and compress the SQL file (makes the transfer smaller), e.g.: tar cvzf moodle_onprem.tgz moodle_onprem.sql
Copy the tgz file to an EC2 instance (probably your Moodle server) that has access to Aurora, using your favorite file transfer tool. (I used a simple scp, as we have a direct connection.) E.g.: scp -i /home/ec2-user/id_rsa someuser@onprembox:/var/www/html/moodle_onprem.tgz ./
Untar and uncompress the file, e.g.: tar xvzf moodle_onprem.tgz
Important step: change the row format to DYNAMIC, e.g.: sed -i "s/ ROW_FORMAT=COMPRESSED/ ROW_FORMAT=DYNAMIC/" moodle_onprem.sql
Also modify moodle/lib/dml/mysqli_native_moodle_database.php (see the answer by Rohit to "Moodle with Amazon Aurora: Index column size too large. The maximum column size is 767 bytes" above) to change the line from { $rowformat = "ROW_FORMAT=Compressed"; } to { $rowformat = "ROW_FORMAT=Dynamic"; }
Restore your Moodle DB, e.g.: mysql -h YourAuroraDBEndNode -u YourAdminUser -p YourMoodleDBName < moodle_onprem.sql
Transfer your Moodle code and moodledata over to your AWS Moodle server and you should be all set; a quick check of the restored row formats is sketched below.
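As an optional sanity check (not part of the original steps), you can confirm that no tables kept the compressed row format after the restore; the host, user, and database names below are the same placeholders as above:
mysql -h YourAuroraDBEndNode -u YourAdminUser -p -e "SELECT table_name, row_format FROM information_schema.tables WHERE table_schema='YourMoodleDBName' AND row_format='Compressed';"
An empty result means every table was created with the dynamic row format.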

Max open files on Ubuntu keeps resetting

I am migrating from a file-based cache to MongoDB, and I am having issues with the maximum number of open files. The mongo daemon keeps crashing because too many files are open. I set the max open files limit in /etc/sysctl.conf to 500,000, and I have also set it with ulimit -n 500000 and ulimit -n unlimited. When I run ulimit -n 500000 it reports that the limit has been updated.
As soon as I close the session, it goes back to the default 1024. Even if I start the mongo daemon in the session that reports the 500,000 open file limit, it still crashes at the 1,000 file limit.
What should I do to fix this? I am running Ubuntu 16.04 and MongoDB 3.3.0.
Your new session may not be getting the resource limit configuration from the pam_limits PAM module. Check your /etc/pam.d/su file to see whether pam_limits is enabled there.
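A minimal sketch of what to check and set, assuming the limits should apply to the user that starts mongod (the Ubuntu 16.04 default PAM paths are used; the user name and values are illustrative):
grep pam_limits /etc/pam.d/su /etc/pam.d/common-session
echo "mongouser soft nofile 500000" | sudo tee -a /etc/security/limits.conf
echo "mongouser hard nofile 500000" | sudo tee -a /etc/security/limits.conf
Log out, log back in, and re-run ulimit -n before starting mongod; the new limit only applies to sessions opened after the change.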

Importing large data into PostgreSQL

I recently dual-booted my system by installing Ubuntu over Windows. Now I have to import a file into PostgreSQL that is stored in the host filesystem. The host filesystem has 190 GB of space. But when I log in as the postgres user with sudo su postgres, it takes me into the root filesystem (the default postgres folder) and the query is executed there. My data set is 3 GB, and after some time the query returns 'out of disk space', as the root filesystem is only 3.5-4 GB. It would be great if anyone could suggest a solution to this. Do I need to change the default folder of postgres?
Thanks
Ravinder
I'd create a new file on the host and configure it as a second hard drive image for your Ubuntu. Then I'd create a partition on that drive and mount it where the PGDATA directory would be.
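A rough sketch of the second half of that approach, assuming the extra drive shows up as /dev/sdb with one partition and that PostgreSQL's data lives under the Ubuntu default /var/lib/postgresql (device names, mount points, and the service name are illustrative):
sudo systemctl stop postgresql
sudo mkfs.ext4 /dev/sdb1
sudo mkdir -p /mnt/pgdata
sudo mount /dev/sdb1 /mnt/pgdata
sudo rsync -a /var/lib/postgresql/ /mnt/pgdata/
sudo umount /mnt/pgdata
sudo mount /dev/sdb1 /var/lib/postgresql
sudo systemctl start postgresql
Add the new mount to /etc/fstab so it survives a reboot, and keep the old data in place until you have confirmed the server starts cleanly from the new location.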

SQL syntax error while importing mysql dump output on same server

I've backed up all of my MySQL databases using this command (I'm running MySQL 5 on Debian Linux):
mysqldump --user="root" --password="pass" --routines --triggers --events --all-databases > dbs_backup.sql
Then I shut down my MySQL server to change the InnoDB configuration according to this link. After restarting the server, when I want to import the dump output using this command:
mysql -u root -p pass < dbs_backup.sql
I get some syntax errors in the middle of this file (it executes lots of queries and some databases import successfully, but the error occurs only while creating some stored procedures). I wonder why this happens, since the server has had no major changes and the dumped databases were all fine and worked well before dumping.
What can cause this problem?