PostgreSQL pgBadger error: "can not load incompatible binary data, binary file is from version < 4.0"

I am trying to use pgBadger to generate an HTML report for my Postgres slow-query log files. My Postgres log files are in csvlog format in the pg_log folder. I transferred all the log files (80 files of 10 MB each) to my local Windows machine and am trying to generate a single HTML report for all of them. I combined them into one file like this:
type postgresql-2020-06-18_075333.csv > postgresql.csv
type postgresql-2020-06-18_080011.csv >> postgresql.csv
....
....
type postgresql-2020-06-18_094812.csv >> postgresql.csv
I downloaded pgbadger-11.2 and tried the command below, but I get an error.
D:\pgbadger-11.2>perl --version
This is perl 5, version 28, subversion 1 (v5.28.1) built for MSWin32-x64-multi-thread
D:\pgbadger-11.2>perl pgbadger "D:\June-Logs\postgresql.csv" -o postgresql.html
[========================>] Parsed 923009530 bytes of 923009530 (100.00%), queries: 1254764, events: 53
can not load incompatible binary data, binary file is from version < 4.0.
LOG: Ok, generating html report...
postgresql.html is created, but there is no data in any tab. It works when I create a separate report for each individual CSV, like below.
D:\pgbadger-11.2>perl pgbadger "D:\June-Logs\postgresql-2020-06-18_075333.csv" -o postgresql-2020-06-18_075333.html
D:\pgbadger-11.2>perl pgbadger "D:\June-Logs\postgresql-2020-06-18_080011.csv" -o postgresql-2020-06-18_080011.html
...
D:\pgbadger-11.2>perl pgbadger "D:\June-Logs\postgresql-2020-06-18_094812.csv" -o postgresql-2020-06-18_094812.html
Please suggest something to fix this issue.

I am going to say this is due to:
type postgresql-2020-06-18_075333.csv > postgresql.csv
type postgresql-2020-06-18_080011.csv >> postgresql.csv
Pretty sure that is introducing Windows line endings, and pgBadger is looking for Unix line endings. Can you do the concatenation on the server?
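For example (a sketch, with filenames taken from the question), concatenating on the server keeps the Unix line endings intact:
cat postgresql-2020-06-18_*.csv > postgresql.csv
Alternatively, pgBadger accepts several log files in one invocation, so you may be able to skip the concatenation entirely:
perl pgbadger "D:\June-Logs\postgresql-2020-06-18_075333.csv" "D:\June-Logs\postgresql-2020-06-18_080011.csv" ... -o postgresql.html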
UPDATE. Hmm. Ran across this
https://github.com/darold/pgbadger/releases
"This new release breaks backward compatibility with old binary or JSON
files. This also mean that incremental mode will not be able to read
old binary file [...] Add a warning about version and skip loading incompatible binary file.
Update code formatter to pgFormatter 4.0."
Not sure why it is failing on CSV logs; still, what version of pgBadger is generating the logs?
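If you are unsure which pgBadger version you are running, it can report it itself:
perl pgbadger --version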

Related

PHP Warning: Module 'imagick' already loaded in Unknown on line 0, need some assistance

So I checked my spam folder today and had 122 messages sent through the server stating:
PHP Warning: Module 'imagick' already loaded in Unknown on line 0
It keeps sending me this non-stop.
I googled it and still have no clue how to fix it, as I need a step-by-step answer.
I found a similar question on Stack Overflow, but it's too complicated for me to understand.
I'm on the latest Plesk Onyx and OS = Ubuntu 16.04.6 LTS.
As I'm running PHP 7.3, I hoped that uninstalling PHP 7.0 in Plesk would make it go away, but unfortunately it didn't.
It's sent from here:
Cron <root@server> [ -x /usr/lib/php/sessionclean ] && /usr/lib/php/sessionclean
And after running php --ini (I read about it on a forum), this was the result:
PHP Warning: Module 'imagick' already loaded in Unknown on line 0
Configuration File (php.ini) Path: /etc/php/7.0/cli
Loaded Configuration File: /etc/php/7.0/cli/php.ini
Scan for additional .ini files in: /etc/php/7.0/cli/conf.d
Additional .ini files parsed: /etc/php/7.0/cli/conf.d/00-ioncube-loader-7.0.ini,
/etc/php/7.0/cli/conf.d/10-mysqlnd.ini,
/etc/php/7.0/cli/conf.d/10-opcache.ini,
/etc/php/7.0/cli/conf.d/10-pdo.ini,
/etc/php/7.0/cli/conf.d/15-xml.ini,
/etc/php/7.0/cli/conf.d/20-calendar.ini,
/etc/php/7.0/cli/conf.d/20-ctype.ini,
/etc/php/7.0/cli/conf.d/20-curl.ini,
/etc/php/7.0/cli/conf.d/20-dom.ini,
/etc/php/7.0/cli/conf.d/20-exif.ini,
/etc/php/7.0/cli/conf.d/20-fileinfo.ini,
/etc/php/7.0/cli/conf.d/20-ftp.ini,
/etc/php/7.0/cli/conf.d/20-gd.ini,
/etc/php/7.0/cli/conf.d/20-gettext.ini,
/etc/php/7.0/cli/conf.d/20-iconv.ini,
/etc/php/7.0/cli/conf.d/20-imagick.ini,
/etc/php/7.0/cli/conf.d/20-imap.ini,
/etc/php/7.0/cli/conf.d/20-json.ini,
/etc/php/7.0/cli/conf.d/20-mbstring.ini,
/etc/php/7.0/cli/conf.d/20-mysqli.ini,
/etc/php/7.0/cli/conf.d/20-pdo_mysql.ini,
/etc/php/7.0/cli/conf.d/20-pdo_sqlite.ini,
/etc/php/7.0/cli/conf.d/20-phar.ini,
/etc/php/7.0/cli/conf.d/20-posix.ini,
/etc/php/7.0/cli/conf.d/20-readline.ini,
/etc/php/7.0/cli/conf.d/20-shmop.ini,
/etc/php/7.0/cli/conf.d/20-simplexml.ini,
/etc/php/7.0/cli/conf.d/20-sockets.ini,
/etc/php/7.0/cli/conf.d/20-sqlite3.ini,
/etc/php/7.0/cli/conf.d/20-sysvmsg.ini,
/etc/php/7.0/cli/conf.d/20-sysvsem.ini,
/etc/php/7.0/cli/conf.d/20-sysvshm.ini,
/etc/php/7.0/cli/conf.d/20-tokenizer.ini,
/etc/php/7.0/cli/conf.d/20-wddx.ini,
/etc/php/7.0/cli/conf.d/20-xmlreader.ini,
/etc/php/7.0/cli/conf.d/20-xmlwriter.ini,
/etc/php/7.0/cli/conf.d/20-xsl.ini,
/etc/php/7.0/cli/conf.d/20-zip.ini,
/etc/php/7.0/cli/conf.d/zend_extensions_psa.ini
I already know it's probably easily fixed when you know Linux, but I'm learning on the go and possess very little knowledge yet.
Could anyone assist me with the right commands?
Cheers
Locate your php.ini file with:
php -i | grep Configuration
Configuration File (php.ini) Path => /usr/local/etc/php/7.4
Loaded Configuration File => /usr/local/etc/php/7.4/php.ini
Open the php.ini file and remove the line:
extension=imagick.so
or change the line to a comment:
; extension=imagick.so
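If you are not sure which file loads the extension a second time, a quick grep across the paths that php --ini printed (the 7.0 CLI config in your case) should show every place imagick is pulled in:
grep -R "imagick" /etc/php/7.0/cli/php.ini /etc/php/7.0/cli/conf.d/
Leave exactly one of the matching extension lines active and comment out the rest.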

Where does Odoo 10 store a CSV binary field's value in Postgres?

I have created a binary field in Odoo 10 that is supposed to store a CSV file on the server. But when I check its table in Postgres, instead of binary data in that column I get something like this:
<memory at 0x7f1539393648>
Where is my binary file getting stored exactly?
my odoo-version is 10.
I am also trying to migrate a table from OpenERP 6 to Odoo 10. The column that stores the CSV binary has correct data in the Postgres table for version 6, but when I migrate that table, the CSV binary column again contains this "memory at 0x7f1539393648" in the version 10 table.
Where am I making the mess? Help appreciated.
Binary data storage moved out of the database and into normal storage on the filesystem by default around Odoo 7 or 8.
You can find the files under the following locations (from odoo/odoo/tools/appdirs.py):
Typical user data directories are:
Mac OS X: ~/Library/Application Support/<AppName>
Unix: ~/.local/share/<AppName> # or in $XDG_DATA_HOME, if defined
Win XP (not roaming): C:\Documents and Settings\<username>\Application Data\<AppAuthor>\<AppName>
Win XP (roaming): C:\Documents and Settings\<username>\Local Settings\Application Data\<AppAuthor>\<AppName>
Win 7 (not roaming): C:\Users\<username>\AppData\Local\<AppAuthor>\<AppName>
Win 7 (roaming): C:\Users\<username>\AppData\Roaming\<AppAuthor>\<AppName>
If you have set a value data_dir in your Odoo server config, the files can be found there.
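For example, on a typical Unix install the attachments end up in a filestore directory named after the database (a sketch; the exact path depends on your setup and database name):
ls ~/.local/share/Odoo/filestore/<your_database_name>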

unoconv fails to save in my specified directory

I am using unoconv to convert an ods spreadsheet to a csv file.
Here is the command:
unoconv -vvv --doctype=spreadsheet --format=csv --output= ~/Dropbox/mariners_site/textFiles/expenses.csv ~/Dropbox/Aldeburgh/expenses/expenses.ods
It saves the output file in the same directory as the source file, not in the specified directory. The error message is:
Output file: /home/richard/Dropbox/mariners_site/textFiles/expenses.csv
unoconv: UnoException during export phase:
Unable to store document to file:///home/richard/Dropbox/mariners_site/textFiles/expenses.csv (ErrCode 19468)
I'm sure that this worked initially, but it has since stopped.
I have checked for permissions and they are identical for both directories.
I translated ErrCode 19468 for you, and it boils down to ERRCODE_SFX_DOCUMENTREADONLY.
You can find more information about the specific meaning of LibreOffice ErrCode numbers from the unoconv documentation at: https://github.com/dagwieers/unoconv/blob/master/doc/errcode.adoc
The clue here is that you have a whitespace character between --output= and the filename (--output= ~/Dropbox/mariners_site/textFiles/expenses.csv). Because of that, unoconv gets an empty output value (which means the current directory) and is given two input files, and that explains why you get this specific error, IMO.
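So the fix should simply be to drop that space, joining the path directly to --output= (a sketch using the absolute path from the error output):
unoconv -vvv --doctype=spreadsheet --format=csv --output=/home/richard/Dropbox/mariners_site/textFiles/expenses.csv ~/Dropbox/Aldeburgh/expenses/expenses.ods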

wget files from FTP-like listings

So, a site that used to use FTP now has an HTTP front-end and won't allow FTP connections. The site in question (for an example directory) shows a page with links to different dates. Inside each date there are many files, and I typically just need some file with a clear pattern in its name, e.g. *h17v04*.hdf. I thought this could work:
wget -I "${PLATFORM}/${PRODUCT}/${YEAR}.*" -r -l 4 \
--user-agent="Mozilla/5.0 (Windows NT 5.2; rv:2.0.1) Gecko/20100101 Firefox/4.0.1" \
--verbose -c -np -nc -nd \
-A "*h17v04*.hdf" http://e4ftl01.cr.usgs.gov/$PLATFORM/$PRODUCT/
where PLATFORM=MOLT, PRODUCT=MOD09GA.005 and YEAR=2004, for example. This seems to start looking into all the useful dates, find the index.html, and then just skip to the next directory without downloading the relevant HDF file:
--2013-06-14 13:09:18-- http://e4ftl01.cr.usgs.gov/MOLT/MOD09GA.005/2004.01.01/
Reusing existing connection to e4ftl01.cr.usgs.gov:80.
HTTP request sent, awaiting response... 200 OK
Length: unspecified [text/html]
Saving to: `e4ftl01.cr.usgs.gov/MOLT/MOD09GA.005/2004.01.01/index.html'
[ <=> ] 174,182 134K/s in 1.3s
2013-06-14 13:09:20 (134 KB/s) - `e4ftl01.cr.usgs.gov/MOLT/MOD09GA.005/2004.01.01/index.html' saved [174182]
Removing e4ftl01.cr.usgs.gov/MOLT/MOD09GA.005/2004.01.01/index.html since it should be rejected.
--2013-06-14 13:09:20-- http://e4ftl01.cr.usgs.gov/MOLT/MOD09GA.005/2004.01.02/
[...]
If I omit the -A option, only the index.html file is downloaded to my system, but it appears it's not parsed and the links are not followed. I don't really know what more is required to make this work, as I can't see why it doesn't!
SOLUTION
In the end, the problem was due to an old bug in the local version of wget. However, I ended up writing my own script for downloading MODIS data from the server above. The script is pure Python, and is available from here.
Consider using pyModis instead of wget. It is a free and open source Python-based library for working with MODIS data. It offers bulk download for user-selected time ranges, mosaicking of MODIS tiles, reprojection from Sinusoidal to other projections, and conversion from HDF to other formats. See
http://www.pymodis.org/
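For example, pyModis ships a command-line downloader, so something along these lines should bulk-download just the h17v04 tiles for the product and dates from the question (a sketch from memory of the pyModis docs; the flag names are assumptions, check modis_download.py --help):
modis_download.py -s MOLT -p MOD09GA.005 -t h17v04 -f 2004-01-01 -e 2004-01-31 /path/to/destination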

Is there a way to tell django compressor to create source maps

I want to be able to debug minified, compressed JavaScript code on my production site. Our site uses django compressor to create minified and compressed JS files. I read recently that Chrome can use source maps to help debug such JavaScript. However, I don't know how (or whether) it is possible to tell django compressor to create source maps when compressing the JS files.
I don't have a good answer regarding outputting separate source map files, however I was able to get inline working.
Prior to adding source maps my settings.py file used the following precompilers
COMPRESS_PRECOMPILERS = (
    ('text/coffeescript', 'coffee --compile --stdio'),
    ('text/less', 'lessc {infile} {outfile}'),
    ('text/x-sass', 'sass {infile} {outfile}'),
    ('text/x-scss', 'sass --scss {infile} {outfile}'),
    ('text/stylus', 'stylus < {infile} > {outfile}'),
)
After a quick
$ lessc --help
you find out you can put the Less source and the map inline into the output CSS file. So my new text/less precompiler entry looks like:
('text/less', 'lessc --source-map-less-inline --source-map-map-inline {infile} {outfile}'),
Hope this helps.
Edit: Forgot to add that lessc >= 1.5.0 is required for this; to upgrade, use
$ [sudo] npm update -g less
While I couldn't get this to work with django-compressor (though it should be possible; I think I just had issues getting the app set up correctly), I was able to get it working with django-assets.
You'll need to add the appropriate command-line argument to the less filter source code as follows:
diff --git a/src/webassets/filter/less.py b/src/webassets/filter/less.py
index eb40658..a75f191 100644
--- a/src/webassets/filter/less.py
+++ b/src/webassets/filter/less.py
@@ -80,4 +80,4 @@ class Less(ExternalTool):
     def input(self, in_, out, source_path, **kw):
         # Set working directory to the source file so that includes are found
         with working_directory(filename=source_path):
-            self.subprocess([self.less or 'lessc', '-'], out, in_)
+            self.subprocess([self.less or 'lessc', '--line-numbers=mediaquery', '-'], out, in_)
Aside from that tiny addition:
make sure you've got the node -- not the ruby gem -- less compiler (>= 1.3.2, IIRC) available in your path.
turn on the Sass source-maps option buried away in Chrome's web inspector config pages. (Yes, 'sass', not less: less tweaked their debug-info format to match Sass's, since Sass had already implemented a Chrome-compatible mapping and their formats weren't that different to begin with anyway...)
Not out of the box, but you can extend CompilerFilter with a custom filter:
from compressor.filters import CompilerFilter

class UglifyJSFilter(CompilerFilter):
    # Run uglifyjs with compression (-c) and mangling (-m), emitting a
    # source map next to the minified output.
    command = ("uglifyjs -c -m "
               "--source-map-root={relroot}/ "
               "--source-map-url={name}.map.js "
               "--source-map={relpath}/{name}.map.js -o {output}")