Where does Odoo 10 store a CSV binary field's value in Postgres? - postgresql

I have created a binary field in Odoo 10 that is supposed to store a CSV file on the server. But when I check its table in Postgres, instead of binary data in that column I get something like this:
<memory at 0x7f1539393648>
Where exactly is my binary file being stored?
My Odoo version is 10.
I am also trying to migrate a table from OpenERP 6 to Odoo 10. The column that stores the CSV binary holds correct data in the Postgres table for version 6, but when I migrate the table, the CSV binary column again contains this "memory at 0x7f1539393648" in the version-10 table.
Where am I making a mess? Help appreciated.

Binary data storage moved out of the database and onto the filesystem as the default around Odoo 7 or 8.
You can find the files under the per-user data directory (from odoo/odoo/tools/appdirs.py):
Typical user data directories are:
Mac OS X: ~/Library/Application Support/<AppName>
Unix: ~/.local/share/<AppName> # or in $XDG_DATA_HOME, if defined
Win XP (not roaming): C:\Documents and Settings\<username>\Application Data\<AppAuthor>\<AppName>
Win XP (roaming): C:\Documents and Settings\<username>\Local Settings\Application Data\<AppAuthor>\<AppName>
Win 7 (not roaming): C:\Users\<username>\AppData\Local\<AppAuthor>\<AppName>
Win 7 (roaming): C:\Users\<username>\AppData\Roaming\<AppAuthor>\<AppName>
If you have set a data_dir value in your Odoo server config, the files can be found there.
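For orientation, here is a minimal Python sketch of where a given attachment usually lands on disk. It assumes the default layout <data_dir>/filestore/<db_name>/<store_fname>, with store_fname taken from the ir_attachment table; the database name 'mydb' is hypothetical. (Incidentally, "<memory at 0x...>" is just the printed form of a Python memoryview, which is how a bytea value surfaces in Python code; if that literal text ended up in your migrated column, the string representation of the buffer was written rather than its bytes.)
import os

def filestore_path(data_dir, db_name, store_fname):
    # store_fname comes from the ir_attachment row, e.g. 'ab/abcd12...'
    # (two leading hex chars of the file's checksum, then the checksum)
    return os.path.join(os.path.expanduser(data_dir),
                        'filestore', db_name, store_fname)

# With the default Unix data dir from the list above:
print(filestore_path('~/.local/share/Odoo', 'mydb', 'ab/abcd12ef'))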

Related

postgresql pgbadger error - can not load incompatible binary data, binary file is from version < 4.0

I am trying to use pgBadger to build an HTML report from Postgres slow-query log files. My Postgres log files are in csvlog format in the pg_log folder. I transferred all the log files (80 files of 10 MB each) to my local Windows machine and am trying to generate a single HTML report for all of them. I combined them into one file like this:
type postgresql-2020-06-18_075333.csv > postgresql.csv
type postgresql-2020-06-18_080011.csv >> postgresql.csv
....
....
type postgresql-2020-06-18_094812.csv >> postgresql.csv
I downloaded "pgbadger-11.2" and tried below command but getting error.
D:\pgbadger-11.2>perl --version
This is perl 5, version 28, subversion 1 (v5.28.1) built for MSWin32-x64-multi-thread
D:\pgbadger-11.2>perl pgbadger "D:\June-Logs\postgresql.csv" -o postgresql.html
[========================>] Parsed 923009530 bytes of 923009530 (100.00%), queries: 1254764, events: 53
can not load incompatible binary data, binary file is from version < 4.0.
LOG: Ok, generating html report...
postgresql.html is created but there is no data in any tab. It does work when I create a separate report for each individual CSV, like below:
D:\pgbadger-11.2>perl pgbadger "D:\June-Logs\postgresql-2020-06-18_075333.csv" -o postgresql-2020-06-18_075333.html
D:\pgbadger-11.2>perl pgbadger "D:\June-Logs\postgresql-2020-06-18_080011.csv" -o postgresql-2020-06-18_080011.html
...
D:\pgbadger-11.2>perl pgbadger "D:\June-Logs\postgresql-2020-06-18_094812.csv" -o postgresql-2020-06-18_094812.html
Please suggest something to fix this issue.
I'm going to say this is due to:
type postgresql-2020-06-18_075333.csv > postgresql.csv
type postgresql-2020-06-18_080011.csv >> postgresql.csv
Pretty sure that is introducing Windows line endings, and pgBadger is looking for Unix line endings. Can you do the concatenation on the server?
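If concatenating on the server is awkward, a byte-for-byte copy on the Windows side sidesteps any newline translation. A minimal Python sketch, using the file names from the question (pgBadger can also be given several log files in a single invocation, which avoids the concatenation entirely):
import glob

# Copy the CSV logs byte-for-byte so the original line endings survive,
# unlike `type`, which may rewrite them.
with open(r'D:\June-Logs\postgresql.csv', 'wb') as out:
    for name in sorted(glob.glob(r'D:\June-Logs\postgresql-2020-06-18_*.csv')):
        with open(name, 'rb') as src:
            out.write(src.read())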
UPDATE: Hmm. I ran across this:
https://github.com/darold/pgbadger/releases
"This new release breaks backward compatibility with old binary or JSON
files. This also mean that incremental mode will not be able to read
old binary file [...] Add a warning about version and skip loading incompatible binary file.
Update code formatter to pgFormatter 4.0."
Not sure why it is failing on CSV logs; still, what version of pgBadger was used to generate the files?

Load all records that contain a `sym value from splayed tables in a directory

I have tables called quotes, trades and sym saved as splayed tables in a directory called splay in my q directory. I cannot figure out how to load these tables using the methods described on the code.kx.com website. When I check the file properties, the file type just says "File", so I do not know what type of file to open after the filename. Once I have managed to load these files, I need to select all records that contain the symbol IBM (in the sym column of the tables). So far I have tried:
q)\cd splay
q)\l quotes
'quotes. OS reports: The system cannot find the file specified.
[0] (.Q.l)
q)\l trades
'trades. OS reports: The system cannot find the file specified.
[0] (.Q.l)
.Q )\l trades.q
'trades.q. OS reports: The system cannot find the file specified.
[2] (<load>)
))\l trades.dat
'trades.dat. OS reports: The system cannot find the file specified.
[4] (.Q.l)
to no avail. I tried the same approach for the directory itself:
q)\l splay
I have also tried to read the files directly, without loading, from inside the directory, but this has not been successful either:
q)\cd splay
q)\cd
"C:\\Users\\Lewis\\splay"
q)t:get`:trades
'trades. OS reports: The system cannot find the file specified.
[0] t:get`:trades
^
q)q:get `:quotes
'quotes. OS reports: The system cannot find the file specified.
[0] q:get `:quotes
^
q)load`quotes
'quotes. OS reports: The system cannot find the file specified.
[0] load`quotes
^
One of the ways the code.kx.com website says to do this, and one of my first approaches, was:
C:\Users\Lewis\q>q/q.exe splay
KDB+ 3.5 2017.10.11 Copyright (C) 1993-2017 Kx Systems
w32/ 4()core . . .
Welcome to kdb+ 32bit edition
For support please see http://groups.google.com/d/forum/personal-kdbplus
Tutorials can be found at http://code.kx.com/q
To exit, type \\
To remove this startup msg, edit q.q
'/q.exe. OS reports: The system cannot find the file specified.
[0] (.Q.l)
.Q )
The final approach I tried for loading these files, or the directory, was:
q)))load `splay
'splay. OS reports: Access is denied.
[6] load `splay
^
q))))\cd splay
q))))load `splay
'splay. OS reports: Access is denied.
[9] load `splay
^
Please, help me!
If you are in the directory /Users/Lewis you should be able to pass the splay as a command-line parameter, like this: q splay. There may be an issue with the path you are using to your q application (q/q.exe), which is causing the error to be flagged.
Alternatively, you should be able to open it from inside an active q session with \l splay, provided you are in the directory /Users/Lewis, OR with \l . if you are in the directory /Users/Lewis/splay, where . is a shortcut for the current directory.
Additionally, you stated that you have the tables quotes, trades and sym. It all depends on how you saved the data to disk, but the sym file should not be a table like the other two, which you will see when you load the data in.
The error OS reports: Access is denied. is probably due to the q process not having appropriate permissions to access the file. If you start the process with admin privileges you should be able to get around this error.
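Once the splay loads (e.g. q splay launched from C:\Users\Lewis, or \l . from inside the splay directory), the selection the question asks for is a plain qSQL filter. A minimal sketch, assuming the tables come up as trades and quotes with a sym column:
q)select from trades where sym=`IBM
q)select from quotes where sym=`IBM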

Loading Data from PostgreSQL into Stata

When I load data from PostgreSQL into Stata some of the data has unexpected characters appended. How can I avoid this?
Here is the Stata code I am using:
odbc query mydatabase, schema $odbc
odbc load, exec("SELECT * FROM my_table") $odbc allstring
Here is an example of the output I see:
198734/0 one/0/r April/0/0/0
893476/0 two/0/r May/0/0/0
324192/0 three/0/r June/0/0/0
In Postgres the data is:
198734 one April
893476 two May
324192 three June
I see this mostly in larger tables and with fields of all data types in PostgreSQL. If I export the data to a CSV, there are no trailing characters.
The odbc.ini file I am using looks like this:
[ODBC Data Sources]
mydatabase = PostgreSQL
[mydatabase]
Debug = 1
CommLog = 1
ReadOnly = no
Driver = /usr/lib64/psqlodbcw.so
Servername = myserver
Servertype = postgres
FetchBufferSize = 99
Port = 5432
Database = mydatabase
[Default]
Driver = /usr/lib64/psqlodbcw.so
I am using unixODBC version 2.3.1 and PostgreSQL version 9.4.9 with server encoding UTF8, and Stata version 14.1.
What is causing the unexpected characters in the data imported into Stata? I know that I can clean the data once it’s in Stata but I would like to avoid this.
I was able to fix this by adding the line
set odbcdriver ansi
to the Stata code.
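For reference, the load sequence from the question with that fix applied (same macro, DSN and table names as above):
set odbcdriver ansi
odbc query mydatabase, schema $odbc
odbc load, exec("SELECT * FROM my_table") $odbc allstring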

What corruption is indicated by WinDbg and !chkimg?

I am getting frequent BSODs, and WinDbg reports similar corruption for most of them:
4: kd> !chkimg -lo 50 -d !nt
fffff80177723e6d-fffff80177723e6e 2 bytes - nt!MiPurgeZeroList+6d
[ 80 fa:00 e9 ]
2 errors : !nt (fffff80177723e6d-fffff80177723e6e)
and
CHKIMG_EXTENSION: !chkimg -lo 50 -d !nt
fffff8021531ae6d-fffff8021531ae6e 2 bytes - nt!MiPurgeZeroList+6d
[ 80 fa:00 aa ]
2 errors : !nt (fffff8021531ae6d-fffff8021531ae6e)
What does it mean? What is compared with what, and how can the corruption be so similar? Does it explicitly indicate a RAM problem?
UPDATE
What do the numbers fffff80177723e6d and fffff8021531ae6d mean, and what does it mean that their endings coincide?
What does the following mean: nt!MiPurgeZeroList+6d?
I already answered this on superuser.com. WinDbg downloads the original EXE/DLL files from the symbol server, and the !chkimg command then detects corruption in the images of executable files by comparing them to the copies on the symbol store. The notation nt!MiPurgeZeroList+6d means the mismatch sits 0x6d bytes past the start of the MiPurgeZeroList function inside the nt module (the kernel image). The two full addresses differ between boots because the kernel is loaded at a different base address each time, which is why only their endings, the offset within the image, coincide.
All sections of the file are compared, except for sections that are
discardable, that are writeable, that are not executable, that have
"PAGE" in their name, or that are from INITKDBG. You can change this
behavior by using the -ss, -as, or -r switches.
!chkimg displays any mismatch between the image and the file as an
image error, with the following exceptions:
Addresses that are occupied by the Import Address Table (IAT) are not checked.
Certain specific addresses in Hal.dll and Ntoskrnl.exe are not checked, because certain changes occur when these sections are loaded.
To check these addresses, include the -nospec option.
If the byte value 0x90 is present in the file, and if the value 0xF0 is present in the corresponding byte of the image (or vice
versa), this situation is considered a match. Typically, the symbol
server holds one version of a binary that exists in both uniprocessor
and multiprocessor versions. On an x86-based processor, the lock
instruction is 0xF0, and this instruction corresponds to a nop (0x90)
instruction in the uniprocessor version. If you want !chkimg to
display this pair as a mismatch, set the -noplock option.
If the RAM is fine, check the HDD and HDD cables for errors (run a disk diagnostic tool and chkdsk to detect and fix NTFS issues). You can also connect the HDD to a different SATA port on the mainboard.
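If you want to look at the mismatched bytes yourself, you can dump and disassemble at the reported symbol in the same kd session; a sketch using the question's symbol (output omitted here):
4: kd> db nt!MiPurgeZeroList+6d L2
4: kd> u nt!MiPurgeZeroList+6d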

PostgreSQL error: converted to the required encoding

I am trying to update a column:
update t_references
set reference = 'Stöcker W, et al. Autoimmunity to Pancreatic Juice in Crohn’s Disease.
Results of an Autoantibody Screening in Patients With Chronic Inflammatory Bowel Disease. <i>Scand J Gastroenterol Suppl</i>. 1987;139:41-52.'
,index = 9
where reference_id = 161;
I get this error:
The query could not be converted to the required encoding.
Please advise.
I had to log in to the machine and then run the statement from the command line.
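A sketch of that command-line route in psql, assuming the database encoding can represent these characters (e.g. UTF8); SET client_encoding is standard PostgreSQL and lets the ö and the typographic apostrophe in the string through:
SET client_encoding = 'UTF8';
-- then run the UPDATE from the question unchanged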
On experiencing the same error, I found I could recreate it when opening a .SQL file saved from PGAdmin in an editor (even Notepad) and then copying and pasting the contents into PGAdmin. When I opened the file directly with PGAdmin, there was no issue.
HTH