The output of pstack applied to a core dump should contain a stack backtrace for every thread. But in some cases the output contains truncated backtraces, only a single frame per thread. Here is an example, showing an excerpt of the pstack output:
core 'core' of 8714: ./rds_rdprod.solsparc64
----------------- lwp# 1 --------------------------------
ffffffff7c2d7bd0 ???????? (ffffffff7ffff5b0, ffffffff7ffff570, 0, 4, 16cf50, ffffffff7c10d240)
----------------- lwp# 2 --------------------------------
ffffffff7c2d7bd0 ???????? (100bf12b0, 100bf12d8, 0, 4, 16cf50, ffffffff7c10e9c0)
----------------- lwp# 3 --------------------------------
ffffffff7c2d7bd0 ???????? (100bf12b0, 100bf12d8, 0, 4, 16cf50, ffffffff7c10e9c0)
----------------- lwp# 4 --------------------------------
ffffffff7c2d7bd0 ???????? (100bf12b0, 100bf12d8, 0, 4, 16cf50, ffffffff7c10e9c0)
----------------- lwp# 5 --------------------------------
ffffffff7c2d7bd0 ???????? (100bf12b0, 100bf12d8, 0, 4, 16cf50, ffffffff7c10e9c0)
... [omitting a number of threads]
----------------- lwp# 52 --------------------------------
ffffffff7c2d7bd0 ???????? (105286af0, 105286b18, 0, 4, 16cf50, ffffffff7c10f600)
----------------- lwp# 53 --------------------------------
0000000100196e80 _ZN9CAtomMap43AddEP6CAtom4S1_ (5ad245980, 19f6751f8, 19f6751f8, 19f6751f8, 25bf97af0, 28) + 60
----------------- lwp# 54 --------------------------------
00000001001a1de8 _ZN12CBasicSearch6ExistsER18ICacheItemIdentity (19b43ed78, ffffffff6faff3b8, 6, 0, 0, 2aa932388) + 28
----------------- lwp# 55 --------------------------------
ffffffff7d300838 ???????? (0, 287812ec8, 5382b3098, 45dde3558, 188ce9053, 15faa9d38)
... [omitting a number of threads]
As you can see, there is only one line per LWP where we normally get a full backtrace. You may also notice that a few threads show a valid mangled C++ symbol, probably because the processor was executing application code at the time of the core dump. But why do all the backtraces get truncated?
The last time this happened the faulting thread threw a SIGABRT, according to pflags applied to the same core dump.
There is enough disk space for the core dump, and no issue with permissions to write it. The machine is a SPARC with 32 cores, running SunOS 5.10 Generic_144488-17. The application is built with gcc 4.7.2.
Task: I need to sum up relevant values from a JSON for a specific id. How can I accomplish this in PostgreSQL?
I receive post insights from Facebook's Graph API; each row contains a cell with a JSON object listing countries, identified by their two-letter abbreviation, with the corresponding watch time in ms from that country.
post_id | date | watchtime_per_country
107_[pageID] | 2022-09-01 | ** see json below **
The second part is a table that contains the relevant countries for each page_id:
page_id | target_country
P01 | Germany (DE)
P01 | Italy (IT)
P02 | Mozambique (MZ)
P02 | Colombia (CO)
Now I would like to get the sum of:
Germany (DE): 162 and Japan (JP): 24 --> 186 for P01
Mozambique (MZ): 3 and Colombia (CO): 6 --> 9 for P02
So far I have unnested the JSON and unpacked all of the roughly 250 possible country values into their own columns, but I am not sure whether this is a good approach. After that, I am not sure how to build those sums in a flexible, efficient way, or whether it is possible at all in PostgreSQL.
Does anyone have an idea?
**** json ****
{"Brazil (BR)": 9210, "Germany (DE)": 162, "Portugal (PT)": 68, "Japan (JP)": 24, "United States (US)": 17, "Italy (IT)": 13, "France (FR)": 9, "United Kingdom (GB)": 8, "Netherlands (NL)": 6, "Belgium (BE)": 6, "Colombia (CO)": 6, "Austria (AT)": 5, "Sweden (SE)": 4, "Canada (CA)": 4, "Argentina (AR)": 3, "Mozambique (MZ)": 3, "Angola (AO)": 3, "Switzerland (CH)": 2, "Saudi Arabia (SA)": 2, "New Zealand (NZ)": 2, "Norway (NO)": 2, "Indonesia (ID)": 2, "Denmark (DK)": 2, "United Arab Emirates (AE)": 2, "Russia (RU)": 2, "Spain (ES)": 1, "China (CN)": 1, "Israel (IL)": 1, "Chile (CL)": 0, "Bulgaria (BG)": 0, "Australia (AU)": 0, "Cape Verde (CV)": 0, "Ireland (IE)": 0, "Egypt (EG)": 0, "Luxembourg (LU)": 0, "Bolivia (BO)": 0, "Paraguay (PY)": 0, "Uruguay (UY)": 0, "Czech Republic (CZ)": 0, "Hungary (HU)": 0, "Finland (FI)": 0, "Algeria (DZ)": 0, "Peru (PE)": 0, "Mexico (MX)": 0, "Guinea-Bissau (GW)": 0}
You have a couple of ways you can go. If you will do little else with the post insights, then you can get the page sums by processing the JSON directly.
Your later comment indicates there may be more to do; in that case, unpacking the JSON into a single table is the way to go. That is data normalization.
One very slight correction: the two-character code is not an MS coding for the country. It is the ISO 3166 alpha-2 code (defined in ISO 3166-1), though MS does use it.
Either way, the first step is to extract the keys from the JSON, then use those keys to extract the values. Then JOIN the relevant_countries table on the alpha-2 code.
with work_insights (jkey,country_watchtime) as
( select json_object_keys(country_watchtime), country_watchtime
from insights_data_stage
)
, watch_insights(cntry, alpha2, watchtime) as
( select trim(replace(substring(jkey, '^.*\('),'(',''))
, upper(trim(replace(replace(substring(jkey, '\(.*\)'),'(',''),')','')) )
, (country_watchtime->> jkey)::numeric
from work_insights
)
, relevant_codes (page_id, alpha2) as
( select page_id, substring(substring(target_country, '\(..\)'),2,2) alpha2
from relevant_countries
)
select rc.page_id, sum(watchtime) watchtime
from relevant_codes rc
join watch_insights wi
on (wi.alpha2 = rc.alpha2)
where rc.page_id in ('P01','P02')
group by rc.page_id
order by rc.page_id;
For the normalization process you need a country table (as you already said you have) and another table for the normalized insights data. Populating begins the same
parsing as above, but developing columns for each value. Once created you JOIN this table with relevant_countries. (See demo containing both). Note: I normalized the relevant_countries table.
select rc.page_id, sum(pi.watchtime) watchtime
from post_insights pi
join relevant_countries_rev rc on (rc.alpha2 = pi.alpha2)
group by rc.page_id
order by rc.page_id;
Update: The results for P01 do not match your expected results. Your expectations indicate to sum Germany and Japan, but your relevant_countries table indicates Germany and Italy.
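For intuition, the join-and-sum performed by the queries above can be sketched in plain Python. The sample data below is a hypothetical, trimmed subset of the question's JSON, and the sums follow the relevant_countries table (so P01 sums Germany and Italy, giving 175 rather than the 186 the question expected, as noted in the update above):

```python
import json
import re

# Hypothetical, trimmed-down versions of the question's data.
watchtime_json = ('{"Germany (DE)": 162, "Italy (IT)": 13, "Japan (JP)": 24, '
                  '"Mozambique (MZ)": 3, "Colombia (CO)": 6}')
relevant_countries = [("P01", "Germany (DE)"), ("P01", "Italy (IT)"),
                      ("P02", "Mozambique (MZ)"), ("P02", "Colombia (CO)")]

def alpha2(label):
    # Extract the ISO 3166 alpha-2 code from a label like "Germany (DE)".
    m = re.search(r"\((..)\)", label)
    return m.group(1) if m else None

# Key the watch times by alpha-2 code, then join and sum per page_id.
watchtime = {alpha2(k): v for k, v in json.loads(watchtime_json).items()}
sums = {}
for page_id, country in relevant_countries:
    sums[page_id] = sums.get(page_id, 0) + watchtime.get(alpha2(country), 0)

print(sums)  # {'P01': 175, 'P02': 9}
```

The SQL version does the same thing set-wise: the CTEs key the JSON entries by alpha-2 code, and the final JOIN plus GROUP BY replaces the explicit loop.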
The picture shows the table schema:
The following line reported an error when querying the data:
select *,bBands(close, 5, 2, 2, 0) as `high`mid`low from kline
The error message is:
The calculated column was not initialized or an exception was raised.
However, another script I tried didn't report the error:
close = 7.2 6.97 7.08 6.74 6.49 5.9 6.26 5.9 5.35 5.63 3.81 3.935 4.04 3.74 3.7 3.33 3.64 3.31 2.69 2.72
date = (2020.03.02 + 0..4 join 7..11).take(20)
symbol = take(`F,10) join take(`GPRO,10)
t = table(symbol, date, close)
select *, bBands(close, 5, 2, 2, 2) as `high`mid`low from t context by symbol
Why does the error occur, and how can I avoid it in my code?
This error is raised because the names of the columns coincide with UP and DN of the Bollinger Bands.
You can use the following code:
select *, bBands(close, 5, 2, 2, 0) as `bBand_high`mid`bBand_low from kline
I have been having fun with an issue where I need to break apart a string in SQL Server 2012 and test for values it may or may not contain. The values, when present, are separated by up to two ; delimiters.
When there is nothing, it will be blank.
When there is a single value, it will show up without the delimiter.
When there are two or more, up to 3, they will be separated by the delimiter.
As I said, if there is nothing in the record, it will be blank. Below are some examples of how the data may come across:
' ',
'1',
'24',
'15;1;24',
'10;1;22',
'5;1;7',
'12;1',
'10;12',
'1;5',
'1;1;1',
'15;20;22'
I have searched the forums and found many clues, but I have not been able to come up with a total solution given all potential data values. Essentially, I would like to break it into 3 separate values:
1. Text before the first delimiter, or, in the absence of any delimiter, just the text.
2. Text after the first delimiter and before the second, where there are two delimiters. The following has worked consistently for this part:
substring(SUBSTRING(Food_Desc, charindex(';', Food_Desc) + 1, 4), 0,
charindex(';', SUBSTRING(Food_Desc, charindex(';', Food_Desc) + 1, 4))) as [Middle]
3. Text after the second delimiter, in the event there are two delimiters and a third value.
The main challenge is that the position of the delimiter, when present, moves depending on the value in the table: values 1-9 make it the second character in the string, values 10-24 the third, and so on.
Any help would be greatly appreciated.
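Not a T-SQL answer, but to pin down the expected behavior, the three-way split described above can be sketched in Python (the function name is mine):

```python
def split3(s, delim=";"):
    # Return (first, middle, last); missing positions come back as ''.
    parts = s.strip().split(delim) if s.strip() else []
    parts += [""] * (3 - len(parts))  # pad blank and short values
    return tuple(parts[:3])

print(split3("15;1;24"))  # ('15', '1', '24')
print(split3("12;1"))     # ('12', '1', '')
print(split3("24"))       # ('24', '', '')
print(split3(" "))        # ('', '', '')
```

Any solution below should agree with this mapping for all of the sample values.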
This is simple if you have a well-written T-SQL splitter function. For this solution I'm using Jeff Moden's DelimitedSplit8K (specifically the DelimitedSplit8K_LEAD variant).
sample data and solution
DECLARE @table table (someid int identity, sometext varchar(100));
INSERT @table VALUES (' '),('1'),('24'),('15;1;24'),('10;1;22'),
('5;1;7'),('12;1'),('10;12'),('1;5'),('1;1;1'),('15;20;22');
SELECT
someid,
sometext,
ItemNumber,
Item
FROM @table
CROSS APPLY dbo.DelimitedSplit8K_LEAD(sometext, ';');
results
someid sometext ItemNumber Item
----------- ----------------- ----------- --------
1 1
2 1 1 1
3 24 1 24
4 15;1;24 1 15
4 15;1;24 2 1
4 15;1;24 3 24
5 10;1;22 1 10
5 10;1;22 2 1
5 10;1;22 3 22
6 5;1;7 1 5
6 5;1;7 2 1
6 5;1;7 3 7
7 12;1 1 12
7 12;1 2 1
8 10;12 1 10
8 10;12 2 12
9 1;5 1 1
9 1;5 2 5
10 1;1;1 1 1
10 1;1;1 2 1
10 1;1;1 3 1
11 15;20;22 1 15
11 15;20;22 2 20
11 15;20;22 3 22
Below is a modified version of an answer to a similar question, How do I split a string so I can access item x?. Changing the text value of @sample to each of the possibilities you listed seemed to work for me.
DECLARE @sample VARCHAR(200) = '15;20;22';
DECLARE @individual VARCHAR(20) = NULL;
WHILE LEN(@sample) > 0
BEGIN
IF PATINDEX('%;%', @sample) > 0
BEGIN
SET @individual = SUBSTRING(@sample, 0, PATINDEX('%;%', @sample));
SELECT @individual;
SET @sample = SUBSTRING(@sample, LEN(@individual + ';') + 1, LEN(@sample));
END
ELSE
BEGIN
SET @individual = @sample;
SET @sample = NULL;
SELECT @individual;
END;
END;
I am loading a flat file into a Postgres table. I need to do a few transformations while reading the file and loading it, such as:
--> Check for (invalid) characters; if present, default to some value. REGEXP functions can be used in Oracle; how can such functions be called in the syntax below?
--> TO_DATE conversion from text format
--> Check for NULL and default to some value
--> Trim functions
--> Load only a few of the columns from the source file
--> Defaulting values; for instance, the source file has only 3 columns, but we need to load 4, so one column should be defaulted with some value
LOAD CSV
FROM 'filename'
INTO postgresql://role@host:port/database_name?tablename
TARGET COLUMNS
(
alphanm,alphnumnn,nmrc,dte
)
WITH truncate,
skip header = 0,
fields optionally enclosed by '"',
fields escaped by double-quote,
fields terminated by '|',
batch rows = 100,
batch size = 1MB,
batch concurrency = 64
SET work_mem to '32 MB', maintenance_work_mem to '64 MB';
Kindly help me: how can this be accomplished using pgloader?
Thanks
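To make the intent concrete, the transformations listed above can be sketched as per-row preprocessing in plain Python. This is only an illustration: the column names, the character check, and the default values are assumptions, not pgloader syntax.

```python
import datetime

def clean_row(row):
    # Pad to four source columns, then apply the transformations per field.
    alphanm, alphnumnn, nmrc, dte = (row + [""] * 4)[:4]
    alphanm = alphanm.strip() or "UNKNOWN"                  # trim + NULL default
    if not nmrc.replace(".", "", 1).isdigit():              # character check
        nmrc = "0"
    dte = (datetime.datetime.strptime(dte, "%Y%m%d").date().isoformat()
           if dte else None)                                # TO_DATE equivalent
    extra = "constant value"                                # defaulted extra column
    return [alphanm, alphnumnn.strip(), nmrc, dte, extra]

print(clean_row(["raj", "B", "0.5", "20170101"]))
# ['raj', 'B', '0.5', '2017-01-01', 'constant value']
```

pgloader can express most of this natively with field casting rules and `using` expressions, which is what the answer demonstrates.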
Here's a self-contained test case for pgloader that reproduces your use-case, as best as I could understand it:
/*
Sorry pgloader version "3.3.2" compiled with SBCL 1.2.8-1.el7 Doing kind
of POC, to implement in real time work. Sample data from file:
raj|B|0.5|20170101|ABCD Need to load only first,second,third and fourth
column; Table has three column, third column should be defaulted with some
value. Table structure: A B C-numeric D-date E-(Need to add default value)
*/
LOAD CSV
FROM inline
(
alphanm,
alphnumnn,
nmrc,
dte [date format 'YYYYMMDD'],
other
)
INTO postgresql:///pgloader?so.raja
(
alphanm,
alphnumnn,
nmrc,
dte,
col text using "constant value"
)
WITH truncate,
fields optionally enclosed by '"',
fields escaped by double-quote,
fields terminated by '|'
SET work_mem to '12MB',
standard_conforming_strings to 'on'
BEFORE LOAD DO
$$ drop table if exists so.raja; $$,
$$ create table so.raja (
alphanm text,
alphnumnn text,
nmrc numeric,
dte date,
col text
);
$$;
raj|B|0.5|20170101|ABCD
Now here's the extract from running the pgloader command:
$ pgloader 41287414.load
2017-08-15T12:35:10.258000+02:00 LOG Main logs in '/private/tmp/pgloader/pgloader.log'
2017-08-15T12:35:10.261000+02:00 LOG Data errors in '/private/tmp/pgloader/'
2017-08-15T12:35:10.261000+02:00 LOG Parsing commands from file #P"/Users/dim/dev/temp/pgloader-issues/stackoverflow/41287414.load"
2017-08-15T12:35:10.422000+02:00 LOG report summary reset
table name read imported errors total time
----------------------- --------- --------- --------- --------------
fetch 0 0 0 0.007s
before load 2 2 0 0.016s
----------------------- --------- --------- --------- --------------
so.raja 1 1 0 0.019s
----------------------- --------- --------- --------- --------------
Files Processed 1 1 0 0.021s
COPY Threads Completion 2 2 0 0.038s
----------------------- --------- --------- --------- --------------
Total import time 1 1 0 0.426s
And here's the content of the target table when the command is done:
$ psql -q -d pgloader -c 'table so.raja'
alphanm │ alphnumnn │ nmrc │ dte │ col
═════════╪═══════════╪══════╪════════════╪════════════════
raj │ B │ 0.5 │ 2017-01-01 │ constant value
(1 row)
I've got the following query:
SELECT
nr
, txt = info.result
FROM
dbo.anlagen AS a
CROSS APPLY
ocAuxiliary.splitString(
ISNULL(
ocAuxiliary.parseRTF(a.notiz)
,'')
,80)
AS info
which works fine on one database, but not on another. The functions / SPROCs are created by code and are therefore identical on both.
Error on B is:
Meldung 102, Ebene 15, Status 1, Zeile 9
Falsche Syntax in der Nähe von '.'.
(Incorrect syntax near '.'.)
Calling the functions/SPROCs on their own also works fine:
On DB A
SELECT * from ocAuxiliary.splitString('1234567890', 3)
returns
iteration result
1 123
2 456
3 789
4 0
as it does on DB B.
On DB A
select ocAuxiliary.parseRTF('{\rtf1\ansi\ansicpg1252\deff0{\fonttbl{\f0\fnil\fcharset0 Arial;}}\viewkind4\uc1\pard\lang1031\fs20 12 ')
returns 12
as it does on DB B.
I simply don't see the mistake.