I made changes to the mdwrite function in src/backend/storage/smgr/md.c (pasting the relevant code here, because I can't attach a screenshot):
seekpos = (off_t) BLCKSZ * (blocknum % ((BlockNumber) RELSEG_SIZE));
Assert(seekpos < (off_t) BLCKSZ * RELSEG_SIZE);
buffer[0] = 'A';    /* added: clobber the first byte of the page before the write */
nbytes = FileWrite(v->mdfd_vfd, buffer, BLCKSZ, seekpos, WAIT_EVENT_DATA_FILE_WRITE);
buffer[0] = 'B';    /* added: clobber it again after the write */
TRACE_POSTGRESQL_SMGR_MD_WRITE_DONE(forknum, blocknum,
                                    reln->smgr_rnode.node.spcNode,
                                    reln->smgr_rnode.node.dbNode,
                                    reln->smgr_rnode.node.relNode,
                                    reln->smgr_rnode.backend,
                                    nbytes,
                                    BLCKSZ);
Compilation and installation were successful, but when I initialize the data directory:
/usr/local/pgsql/bin/initdb -D /usr/local/pgsql/data
it gives me "writing block 0 of relation global/1136" on the Ubuntu console. How should I work with the source code?
What was the point of the change? It seems to be designed to cause havoc, which is what it did.
The full message should be something like this:
LOG: request to flush past end of generated WAL; request 41/28, currpos 0/1523128
CONTEXT: writing block 0 of relation global/1213
FATAL: xlog flush request 41/28 is not satisfied --- flushed only to 0/1523128
CONTEXT: writing block 0 of relation global/1213
So you corrupted the LSN in the page header of the buffer to be written, which then triggered a request for a WAL flush that is impossible to satisfy.
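To see why one stray byte does this: the page handed to mdwrite() begins with the page header, whose very first field is the page LSN. Abridged from src/include/storage/bufpage.h:

typedef struct
{
    uint32      xlogid;         /* high bits of the LSN */
    uint32      xrecoff;        /* low bits of the LSN */
} PageXLogRecPtr;

typedef struct PageHeaderData
{
    PageXLogRecPtr pd_lsn;      /* buffer[0] is the first byte of this field */
    uint16         pd_checksum;
    uint16         pd_flags;
    /* ... remaining fields omitted ... */
} PageHeaderData;

Note that 'A' is 0x41: before a dirty page is written out, WAL up to that page's LSN must be flushed, which is why the impossible flush request begins with 41.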
I am saving a table which is enumerated and splayed to my HDB, after which we load the HDB directory.
There was a corruption which caused:
'2022.01.01T00:00:01.000 part
(.Q.L) error
https://code.kx.com/q/ref/dotq/#ql-load/
This happens during load, and I tried corrupting a partition to replicate it:
Corrupted .d file
Corrupted splayed column(s)
Removed entire partition
Most of the above cases are handled by our exception handler, e.g.:
".\2022.01.1\tbl. OS reports: No such file or directory"
but I couldn't replicate the case where the .Q.l error happened, nor find out from my logs why it happened.
Can someone please suggest what kind of corruption could have caused the part error during load?
One possible cause would be a partition folder without correct read permissions:
$ mkdir -p badHDB/2001.01.01/tab1
$ mkdir -p badHDB/20011.01.01/tab2
$ mkdir -p badHDB/2002.01.01
$ chmod 000 badHDB/2002.01.01
Running it we see the error:
$ q badHDB
'part
[2] (.Q.L)
[0] \l badHDB
You could write a small function to try to narrow down the issues:
// https://code.kx.com/q/ref/system/#capture-stderr-output
q)tmp:first system"mktemp"
q)tab:flip `part`date`osError`files`error!flip {d:1_string x;{y:string y;(y;"D"$y),{r:system x;$[0~"J"$last r;(0b;-1_r;"");(1b;();first r)]} "ls ",x,"/",y," > ",tmp," 2>&1;echo $? >> ",tmp,";cat ",tmp}[d] each key x} `:badHDB
This would result in:
part           date       osError files   error
---------------------------------------------------------------------------------------------------
"2001.01.01"   2001.01.01 0       ,"tab1" ""
"20011.01.01"             0       ,"tab2" ""
"2002.01.01"   2002.01.01 1       ()      "ls: cannot open directory 'badHDB/2002.01.01': Permission denied"
For a larger HDB filter down to partitions with issues:
select from tab where or[null date;osError]
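Once the bad partitions are identified, the fix for this particular cause is simply to restore read permissions, e.g.:
$ chmod -R u+rX badHDB/2002.01.01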
As part of my build process, I'd like to get statistics on the build time and whether ccache found the item in the cache. I know about ccache -s where I can compare the previous and current cache hit counts.
However, if I have hundreds of compilation threads running in parallel, the statistics don't tell me which file caused the hit.
The return code of ccache is that of the compiler. Is there any way I can get ccache to tell me whether the result was a cache hit?
There are two options:
Enable the ccache log file: Set log_file in the configuration (or the environment variable CCACHE_LOGFILE) to a file path. Then you can figure out the result of each compilation from the log data. It can be a bit tedious if there are many parallel ccache invocations (the log file is shared between all of them, so log records from the different processes will be interleaved) but possible by taking the PID part of each log line into account.
In ccache 3.5 and newer, it's better to enable the debug mode: Set debug = true in the configuration (or the environment variable CCACHE_DEBUG=1). ccache will then store the log for each produced object file in <objectfile>.ccache-log. Read more in Cache debugging in the ccache manual.
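As a rough sketch of the second option (assuming a bash shell and object files produced under build/; adjust the path for your tree):
$ CCACHE_DEBUG=1 make -j"$(nproc)"
$ find build -name '*.ccache-log' -exec grep -H 'Result: cache' {} +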
I wrote a quick-n-dirty script that tells me which files had to be rebuilt and what the cache miss ratio was:
Sample output (truncated):
ccache hit: lib/expression/unary_minus_expression.cpp
ccache miss: lib/expression/in_expression.cpp
ccache miss: lib/expression/arithmetic_expression.cpp
=== 249 files, 248 cache misses (99.598394 %)===
Script:
#!/usr/bin/env python3
from pathlib import Path
import os
import re
import sys

# Map each source file to its ccache result ("miss", "hit (direct)", ...)
files = {}
for filename in Path('src').rglob('*.ccache-log'):
    with open(filename, 'r') as file:
        source_file = None
        for line in file:
            source_file_match = re.findall(r'Source file: (.*)', line)
            if source_file_match:
                source_file = source_file_match[0]
            result_match = re.findall(r'Result: cache (.*)', line)
            if result_match and source_file:
                files[source_file] = result_match[0]
                break
if len(files) == 0:
    print("No *.ccache-log files found. Did you compile with ccache and the environment variable CCACHE_DEBUG=1?")
    sys.exit(1)
common_path_prefix = os.path.commonprefix(list(files.keys()))
misses = 0
for file in files:
    shortened = file.replace(common_path_prefix, '')
    if files[file] == 'miss':
        misses += 1
        print("ccache miss: %s" % shortened)
    else:
        print("ccache hit: %s" % shortened)
print("\n=== %i files, %i cache misses (%f %%)===\n" % (len(files), misses, float(misses) / len(files) * 100))
Note that this takes all ccache-log files into account, not only those of the last build. If you want the latter, simply remove the log files first.
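For example, assuming the object files live under src/ as in the script above:
$ find src -name '*.ccache-log' -delete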
I checked the log file and I think this is the part that caused the problem:
Setting up database
[15:30:54] Configuring pg11 to point to existing data dir D:\John's Files\My Documents\Code\Databases\PostgreSQL\data\pg11
Setting PostgreSQL port
= 5432
Executing C:\Installed Software\Developer Software\PostgreSQLv11.1/pgc config pg11 --datadir "D:\John's Files\My Documents\Code\Databases\PostgreSQL\data\pg11"
Script exit code: 1
Script output: ################################################
# FATAL SQL Error in check_release
# SQL Message = near "s": syntax error
# SQL Statement = SELECT r.component FROM releases r, versions v
WHERE r.component = v.component
AND r.component LIKE '%D:\John's Files\My Documents\Code\Databases\PostgreSQL\data\pg11%' AND v.is_current = 1
################################################
Script stderr: Program ended with an error exit code
Error with configuration or permissions. Please see log file for more information.
Problem running post-install step. Installation may not complete correctly
Error with configuration or permissions. Please see log file for more information.
I think the problem is the apostrophe in "John's". Does anyone know if that's right? Is there a fix to this problem? I don't want to rename my directory just because PostgreSQL's installer can't handle apostrophes.
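That diagnosis fits the error message. As a minimal illustration (hypothetical statements, not the installer's actual code): a literal apostrophe inside a SQL string must be doubled, so the unescaped path ends the literal at John' and leaves the rest to be parsed as SQL, producing exactly the near "s": syntax error above:

-- Broken: the string literal ends at John', so s Files ... is parsed as SQL
AND r.component LIKE '%D:\John's Files\My Documents\Code\Databases\PostgreSQL\data\pg11%'
-- Escaped: a literal apostrophe inside a SQL string is written as two single quotes
AND r.component LIKE '%D:\John''s Files\My Documents\Code\Databases\PostgreSQL\data\pg11%'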
I have the following problem:
I am using the following command:
EXPORT TO "D:\ExportFiles\ACTIVATE_DICT.csv" OF DEL MODIFIED BY TIMESTAMPFORMAT="YYYY/MM/DD HH:MM:SS" STRIPLZEROS MESSAGES "D:\ExportFiles\FMessage.txt" SELECT * FROM DB2INST4.ACTIVATE_DICT;
In the Command Editor of the Control Center, this command successfully exported data from the ACTIVATE_DICT table to the CSV file ACTIVATE_DICT.csv.
But for a number of reasons I need to execute this command from IBM Data Studio or DataGrip, and there it cannot be executed in this form.
Therefore, I read the manual for the ADMIN_CMD procedure and based on it wrote the following command:
CALL SYSPROC.ADMIN_CMD('EXPORT to /lotus/ExportFiles/ACTIVATE_DICT.csv OF DEL MODIFIED BY TIMESTAMPFORMAT="YYYY/MM/DD HH:MM:SS" STRIPLZEROS MESSAGES /lotus/ExportFiles/FMessage.txt SELECT * FROM DB2INST4.ACTIVATE_DICT');
Here is the message on the result of the command:
[2018-10-11 15:15:23] [ ][3107] There is at least one warning message in the message file.. SQLCODE=3107, SQLSTATE= , DRIVER=4.23.42
[2018-10-11 15:15:23] 1 row retrieved starting from 1 in 75 ms (execution: 29 ms, fetching: 46 ms)
And there is no ACTIVATE_DICT.csv file and no FMessage.txt file in the /lotus/ExportFiles/ directory.
Question: how do I execute this command correctly? Maybe I'm doing something wrong?
SQLCODE 3107 is a warning message:
SQL3107W At least one warning message was encountered during LOAD processing.
Explanation
You can load data into a database from a file, tape, or named pipe using the LOAD command. You can specify that any warnings or errors from the LOAD processing be printed to a message file. If no message file is specified, the warnings or errors are printed to standard out (unless the database manager instance is configured as a partitioned-database environment.)
This message is returned when at least one warning was received during processing. If a message file is being used, the warnings and errors will be printed there.
This warning does not affect processing.
User response
Review the message file warning.
So it is telling you to read the message log in the message file you specified, in your case /lotus/ExportFiles/FMessage.txt.
Please look into the file to see what error is logged, and if you need help understanding what is logged, post the content of the file.
EXPORT command using the ADMIN_CMD procedure
See the use of the MESSAGES ON SERVER clause, and how to get these messages using the result set returned by this routine in that case.
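A sketch of that approach (the message-file token in the returned statements is generated by the server, so the ones below are placeholders):

CALL SYSPROC.ADMIN_CMD('EXPORT TO /lotus/ExportFiles/ACTIVATE_DICT.csv OF DEL
    MODIFIED BY TIMESTAMPFORMAT="YYYY/MM/DD HH:MM:SS" STRIPLZEROS
    MESSAGES ON SERVER
    SELECT * FROM DB2INST4.ACTIVATE_DICT');
-- The call returns a result set with ROWS_EXPORTED, MSG_RETRIEVAL and MSG_REMOVAL.
-- Execute the statement found in MSG_RETRIEVAL to read the warnings, e.g.
--   SELECT SQLCODE, MSG FROM TABLE(SYSPROC.ADMIN_GET_MSGS('1234567_txu')) AS MSG
-- and the one in MSG_REMOVAL afterwards to clean up, e.g.
--   CALL SYSPROC.ADMIN_REMOVE_MSGS('1234567_txu')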
I am very new to Oracle so please bear with me if this is covered elsewhere.
I have an MS SQL Server box running jobs that call batch files, which run scripts in SQL*Plus to ETL into an Oracle 10g database.
I have an intermittent issue with a script that causes the ETL to fail, which at the minute, without error logging, is something of an unknown. The current solution highlights the load failure based on row counts taken before and after the script has finished.
I'd like to be able to insert any errors encountered while running the offending script into an error log table on the same database that receives the data loads.
There's nothing too technical about the script; at a high level it performs the following steps, all via SQL code and no procedural calls:
Updates a table with Date and current row counts
Pulls data from a remote source into a staging table
Merges the Staging table into an intermediate staging table
Performs some transformational actions
Merges the intermediate staging table into the final Fact table
Updates a table with new row counts
Please advise whether it is possible to pass error messages, codes, line numbers, etc. via SQL*Plus into a database table, and if so, the easiest method to achieve this.
The first few lines of the script are shown below to give a flavour:
/*set echo off*/
set heading off
set feedback off
set sqlblanklines on
/* ID 1 BATCH START TIME */
INSERT INTO CITSDMI.CITSD_TIMETABLE_ORDERLINE TGT
(TGT.BATCH_START_TIME)
(SELECT SYSDATE FROM DUAL);
COMMIT;
insert into CITSDMI.CITSD_TIMETABLE_ALL_LOADS
(LOAD_NAME, LOAD_CRITICALITY,LOAD_TYPE,BATCH_START_TIME)
values
('ORDERLINE','HIGH','SMART',(SELECT SYSDATE FROM DUAL));
commit;
/* Clear the Staging Tables */
TRUNCATE TABLE STAGE_SMART_ORDERLINE;
Commit;
TRUNCATE TABLE TRANSF_FACT_ORDERLINE;
Commit;
and so it goes on with the rest of the steps.
Any assistance will be greatly appreciated.
Whilst not fully understanding your requirement, here are a couple of pointers.
The WHENEVER command will help you control what sqlplus should do when an error occurs, e.g.
WHENEVER SQLERROR EXIT FAILURE ROLLBACK
WHENEVER OSERROR EXIT FAILURE ROLLBACK
INSERT ...
INSERT ...
This will cause sqlplus to exit with error status 1 if any of the following statements fail.
You can also have WHENEVER SQLERROR CONTINUE ...
Since the WHENEVER ... EXIT FAILURE/SUCCESS controls the exit status, the calling script/program will know whether it worked or failed.
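Since your jobs call SQL*Plus from batch files, the equivalent check on Windows would look something like this (file names are hypothetical):

sqlplus -S %USRPWD% @etl_orderline.sql
IF ERRORLEVEL 1 (
    ECHO ETL failed with exit status %ERRORLEVEL% >> etl_orderline.log
    EXIT /B 1
)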
Logging
Use SPOOL to spool the output to a file.
Logging to a table: the best way is to wrap your statements in PL/SQL anonymous blocks and use exception handlers to log errors.
So, putting the above together, using a UNIX shell as the invoker:
sqlplus -S /nolog <<EOF
WHENEVER SQLERROR EXIT FAILURE ROLLBACK
CONNECT ${USRPWD}
SPOOL ${SPLFILE}
BEGIN
    INSERT INTO the_table ( c1, c2 ) VALUES ( '${V1}', '${V2}' );
EXCEPTION
    WHEN OTHERS THEN
        INSERT INTO the_error_tab ( name, errno, errm ) VALUES ( 'the_script', SQLCODE, SQLERRM );
        COMMIT;
        RAISE; -- re-raise so WHENEVER SQLERROR sees the failure and sets the exit status
END;
/
SPOOL OFF
QUIT
EOF
if [ ${?} -eq 0 ]
then
echo "Success!"
else
echo "Oh dear!! See ${SPLFILE}"
fi