simics-qsp-x86-6.0.53 causes a PCIe MCFG 0xe0100000 access error that does not happen on 6.0.44 - Simics

I am porting a UEFI BIOS to QSP and recently tried the new release 2021.50, but booting failed. My BIOS uses the PCIe MCFG memory space 0xe0000000 - 0xf0000000 for PCI device enumeration, and that causes a problem on simics-qsp-x86-6.0.53. Simics stops, and the Simics log shows
4364572099 board.mb.cpu0.mem[0][0] error 0 Access (write of 4 bytes) at 0xe0100000 where nothing is mapped.
I tried turning on ignore_unmapped_writes, but it still fails:
board.mb.cpu0.mem[0][0]->ignore_unmapped_writes=TRUE
I also tried adding the MCFG range to board.mb.cpu0.mem[0][0] with
board.mb.cpu0.mem[0][0].add-map board.mb.socket[0].qpi_arch.port.mcfg 0xe0000000 0x10000000
but it still does not work. This has been troubling me a lot; please help!
The BIOS works without problems on simics-qsp-x86-6.0.44.

Analysis
Updated twice:
It looks like the issue here is how PCIe ECAM is enabled. When the QSP is first started, accesses to 0xe000_0000 go to RAM. Then the PCIe system is initialized by the UEFI (by writing the PCIEXBAR register in the QPI controller), and accesses to 0xe000_0000 hit the PCI config space. With the standard QSP UEFI, PCIEXBAR is written and the mcfg space shows up, in both QSP 6.0.44 and 6.0.53.
The implementation of PCIe in Simics has indeed changed between these two versions. A new PCIe library is used in the most recent QSP versions, which has changed the internals of PCIe. That explains the translator that shows up in the memory access path in the latest QSP, where the old model has an opaque object. Same functionality for the software, different implementation in the model.
To spot writes to PCIEXBAR in the new model, use the following Simics CLI commands:
simics> print-device-regs "board.mb.socket[0].qpi_arch"
This shows the registers of the device for socket 0, which is where all the processor cores are located by default.
To stop when UEFI is writing the register, do this:
simics> break-io device = "board.mb.socket[0].qpi_arch.bank.f1"
Before the breakpoint, the map of the pci_bus memory looks like this:
simics> board.mb.nb.pci_bus.port.mem.map
┌───────────┬───────────────────────┬──┬──────┬───────────┬───────────────────┬────┬─────┬────┐
│ Base│Object │Fn│Offset│ Length│Target │Prio│Align│Swap│
├───────────┼───────────────────────┼──┼──────┼───────────┼───────────────────┼────┼─────┼────┤
│0x000a_0000│board.mb.gpu.dmap_space│ │0x0000│0x0002_0000│ │ 0│ │ │
│0x000c_0000│board.mb.shadow │ │0x0000│0x0004_0000│board.mb.shadow_mem│ 1│ │ │
│0xfec0_0000│board.mb.sb.ioapic │ │0x0000│ 0x0020│ │ -1│ 8│ │
│0xffe0_0000│board.mb.rom │ │0x0000│0x0020_0000│ │ 0│ │ │
│ -default-│board.mb.dram_space │ │0x0000│ │ │ │ │ │
└───────────┴───────────────────────┴──┴──────┴───────────┴───────────────────┴────┴─────┴────┘
And then after the register has been written:
simics> board.mb.nb.pci_bus.port.mem.map
┌───────────┬─────────────────────────────────────┬──┬──────┬───────────┬───────────────────┬────┬─────┬────┐
│ Base│Object │Fn│Offset│ Length│Target │Prio│Align│Swap│
├───────────┼─────────────────────────────────────┼──┼──────┼───────────┼───────────────────┼────┼─────┼────┤
│0x000a_0000│board.mb.gpu.dmap_space │ │0x0000│0x0002_0000│ │ 0│ │ │
│0x000c_0000│board.mb.shadow │ │0x0000│0x0004_0000│board.mb.shadow_mem│ 1│ │ │
│0xe000_0000│board.mb.socket[0].qpi_arch.port.mcfg│ │0x0000│0x1000_0000│ │ 0│ │ │
│0xfec0_0000│board.mb.sb.ioapic │ │0x0000│ 0x0020│ │ -1│ 8│ │
│0xffe0_0000│board.mb.rom │ │0x0000│0x0020_0000│ │ 0│ │ │
│ -default-│board.mb.dram_space │ │0x0000│ │ │ │ │ │
└───────────┴─────────────────────────────────────┴──┴──────┴───────────┴───────────────────┴────┴─────┴────┘
Thus, the question is really what your UEFI writes to activate PCIEXBAR, and why that worked in the old model but not in the new one.
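If you want to experiment by hand, the register can also be poked directly from the CLI. This is only a sketch: it assumes your Simics version provides the read-device-reg and write-device-reg commands, and both the register name pciexbar in bank f1 and the written value (base 0xe000_0000 plus a hypothetical enable bit) are assumptions that should be checked against the print-device-regs output:
simics> # hypothetical register name and value - verify with print-device-regs first
simics> read-device-reg "board.mb.socket[0].qpi_arch.bank.f1.pciexbar"
simics> write-device-reg "board.mb.socket[0].qpi_arch.bank.f1.pciexbar" 0xe0000001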
After UEFI has run
After a few virtual seconds, the memory map is changed as PCIe ECAM is enabled.
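To reach this state interactively, boot the standard target and let it run for a few virtual seconds before probing. A minimal sketch, assuming your Simics version provides the run-seconds convenience command (otherwise just run and stop manually):
simics> run-seconds 5
simics> probe-address p:0xe000_0000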
New QSP:
simics> probe-address p:0xe000_0000
┌─────────────────────────────────────┬───────────┬─────┐
│ Target │ Offset │Notes│
├─────────────────────────────────────┼───────────┼─────┤
│board.mb.cpu0.mem[0][0] │0xe000_0000│ │
│board.mb.phys_mem │0xe000_0000│ │
│board.mb.nb.pci_bus.port.mem │0xe000_0000│~ │
│board.mb.nb.pci_bus.mem_space │0xe000_0000│ │
│board.mb.socket[0].qpi_arch.port.mcfg│0x0000_0000│* │
│board.mb.nb.pci_bus.port.cfg │0x0000_0000│~ │
│board.mb.nb.pci_bus.cfg_space │0x0000_0000│ │
│board.mb.nb.bridge.bank.pcie_config │0x0000_0000│ │
└─────────────────────────────────────┴───────────┴─────┘
'*' - Translator implementing 'translator' interface
'~' - Translator implementing 'transaction_translator' interface
Destination: board.mb.nb.bridge.bank.pcie_config offset 0x0
Register: vendor_id # 0x0 (2 bytes) + 0
Old QSP (with a less capable CLI probe command):
simics> probe-address p:0xe0000000
┌─────────────────────────┬───────────┬─────┐
│ Target │ Offset │Notes│
├─────────────────────────┼───────────┼─────┤
│board.mb.cpu0.mem[0][0] │0xe000_0000│ │
│board.mb.phys_mem │0xe000_0000│ │
│board.mb.nb.pci_mem │0xe000_0000│ │
│board.mb.socket_sad_f1[0]│0x0000_0000│ │
└─────────────────────────┴───────────┴─────┘
Destination: board.mb.socket_sad_f1[0] offset 0x0 - no register information available
Simulation initial state
Bring up the two versions of the target and check the physical memory map.
Looking at the new QSP, before any code is run, starting from the targets/qsp-x86/qsp-clear-linux.simics script, I see this:
simics> probe-address obj = "board.mb.cpu0.mem[0][0]" p:0xe010_0000
┌─────────────────────────────┬───────────┬─────┐
│ Target │ Offset │Notes│
├─────────────────────────────┼───────────┼─────┤
│board.mb.cpu0.mem[0][0] │0xe010_0000│ │
│board.mb.phys_mem │0xe010_0000│ │
│board.mb.nb.pci_bus.port.mem │0xe010_0000│~ │
│board.mb.nb.pci_bus.mem_space│0xe010_0000│ │
│board.mb.dram_space │0xe010_0000│ │
│board.mb.ram │0xe010_0000│ │
└─────────────────────────────┴───────────┴─────┘
'~' - Translator implementing 'transaction_translator' interface
Destination: board.mb.ram offset 0xe0100000
simics> board.mb.cpu0.mem[0][0].map
┌───────────┬────────────────────────┬──┬──────┬──────┬──────┬────┬─────┬────┐
│ Base│Object │Fn│Offset│Length│Target│Prio│Align│Swap│
├───────────┼────────────────────────┼──┼──────┼──────┼──────┼────┼─────┼────┤
│0xfee0_0000│board.mb.cpu0.apic[0][0]│ │0x0000│0x1000│ │ 0│ 8│ │
│ -default-│board.mb.phys_mem │ │0x0000│ │ │ │ │ │
└───────────┴────────────────────────┴──┴──────┴──────┴──────┴────┴─────┴────┘
simics> board.mb.phys_mem.map
┌────────────────┬────────────────┬──┬────────────────┬────────────┬──────┬────┬─────┬────┐
│ Base│Object │Fn│ Offset│ Length│Target│Prio│Align│Swap│
├────────────────┼────────────────┼──┼────────────────┼────────────┼──────┼────┼─────┼────┤
│ 0x0000│board.mb.dram_ │ │ 0x0000│ 0x000a_0000│ │ 0│ │ │
│ │space │ │ │ │ │ │ │ │
├────────────────┼────────────────┼──┼────────────────┼────────────┼──────┼────┼─────┼────┤
│ 0x0010_0000│board.mb.dram_ │ │ 0x0010_0000│ 0xdff0_0000│ │ 0│ │ │
│ │space │ │ │ │ │ │ │ │
├────────────────┼────────────────┼──┼────────────────┼────────────┼──────┼────┼─────┼────┤
│0x0001_0000_0000│board.mb.dram_ │ │0x0001_0000_0000│0x0001_0000_│ │ 0│ │ │
│ │space │ │ │ 0000│ │ │ │ │
├────────────────┼────────────────┼──┼────────────────┼────────────┼──────┼────┼─────┼────┤
│ -default-│board.mb.nb.pci_│ │ 0x0000│ │ │ │ │ │
│ │bus.port.mem │ │ │ │ │ │ │ │
└────────────────┴────────────────┴──┴────────────────┴────────────┴──────┴────┴─────┴────┘
I.e., that location should be hitting RAM.
You cannot just ignore unmapped writes. If you want to store some state there, you need memory or a device mapped at those physical addresses; the processor cannot do much reasonable work if it is told to write data to locations "in the void".
So - what does your UEFI expect to be at those addresses? How do you emulate PCIe accesses?
In Simics QSP 6.0.44, the picture looks similar:
simics> probe-address obj = "board.mb.cpu0.mem[0][0]" p:0xe010_0000
┌───────────────────────┬───────────┬─────┐
│ Target │ Offset │Notes│
├───────────────────────┼───────────┼─────┤
│board.mb.cpu0.mem[0][0]│0xe010_0000│ │
│board.mb.phys_mem │0xe010_0000│ │
│board.mb.nb.pci_mem │0xe010_0000│ │
│board.mb.dram_space │0xe010_0000│ │
│board.mb.ram │0xe010_0000│ │
└───────────────────────┴───────────┴─────┘
Destination: board.mb.ram offset 0xe0100000
simics> board.mb.cpu0.mem[0][0].map
┌───────────┬────────────────────────┬──┬──────┬──────┬──────┬────┬─────┬────┐
│ Base│Object │Fn│Offset│Length│Target│Prio│Align│Swap│
├───────────┼────────────────────────┼──┼──────┼──────┼──────┼────┼─────┼────┤
│0xfee0_0000│board.mb.cpu0.apic[0][0]│ │0x0000│0x1000│ │ 0│ 8│ │
│ -default-│board.mb.phys_mem │ │0x0000│ │ │ │ │ │
└───────────┴────────────────────────┴──┴──────┴──────┴──────┴────┴─────┴────┘
simics> board.mb.phys_mem.map
┌────────────────┬────────────────┬──┬────────────────┬────────────┬──────┬────┬─────┬────┐
│ Base│Object │Fn│ Offset│ Length│Target│Prio│Align│Swap│
├────────────────┼────────────────┼──┼────────────────┼────────────┼──────┼────┼─────┼────┤
│ 0x0000│board.mb.dram_ │ │ 0x0000│ 0x000a_0000│ │ 0│ │ │
│ │space │ │ │ │ │ │ │ │
├────────────────┼────────────────┼──┼────────────────┼────────────┼──────┼────┼─────┼────┤
│ 0x0010_0000│board.mb.dram_ │ │ 0x0010_0000│ 0xdff0_0000│ │ 0│ │ │
│ │space │ │ │ │ │ │ │ │
├────────────────┼────────────────┼──┼────────────────┼────────────┼──────┼────┼─────┼────┤
│0x0001_0000_0000│board.mb.dram_ │ │0x0001_0000_0000│0x0001_0000_│ │ 0│ │ │
│ │space │ │ │ 0000│ │ │ │ │
├────────────────┼────────────────┼──┼────────────────┼────────────┼──────┼────┼─────┼────┤
│ -default-│board.mb.nb.pci_│ │ 0x0000│ │ │ │ │ │
│ │mem │ │ │ │ │ │ │ │
└────────────────┴────────────────┴──┴────────────────┴────────────┴──────┴────┴─────┴────┘

Related

PostgreSQL: Merge two rows and add the difference to a new column

We have an app which displays a table. This is what it looks like in the database:
┌──────────┬──────────────┬─────────────┬────────────┬──────────┬──────────────────┐
│ BatchId │ ProductCode │ StageValue │ StageUnit │ StageId │ StageLineNumber │
├──────────┼──────────────┼─────────────┼────────────┼──────────┼──────────────────┤
│ 0B001 │ 150701 │ LEDI2B4015 │ │ 37222 │ 1 │
│ 0B001 │ 150701 │ 16.21 │ KG │ 37222 │ 1 │
│ 0B001 │ 150701 │ 73.5 │ │ 37222 │ 2 │
│ 0B001 │ 150701 │ LEDI2B6002 │ KG │ 37222 │ 2 │
└──────────┴──────────────┴─────────────┴────────────┴──────────┴──────────────────┘
I would like to query the database so that the output looks like this:
┌──────────┬──────────────┬────────────────────┬─────────────┬────────────┬──────────┬──────────────────┐
│ BatchId │ ProductCode │ LoadedProductCode │ StageValue │ StageUnit │ StageId │ StageLineNumber │
├──────────┼──────────────┼────────────────────┼─────────────┼────────────┼──────────┼──────────────────┤
│ 0B001 │ 150701 │ LEDI2B4015 │ 16.21 │ KG │ 37222 │ 1 │
│ 0B001 │ 150701 │ LEDI2B6002 │ 73.5 │ KG │ 37222 │ 2 │
└──────────┴──────────────┴────────────────────┴─────────────┴────────────┴──────────┴──────────────────┘
Is that even possible?
My PostgreSQL server version is 14.x.
I have looked at many threads about "merge two columns and add a new one", but none of them seem to be what I want.
DB Fiddle link
SQL Fiddle (in case) link
It's possible to get your output, but it's going to be error-prone. You should seriously rethink your data model if at all possible: storing floats as text and trying to parse them is going to lead to many problems.
That said, here's a query that works, at least for your sample data:
SELECT batchid,
       productcode,
       max(stagevalue) FILTER (WHERE stagevalue ~ '^[a-zA-Z].*') AS loadedproductcode,
       max(stagevalue::float) FILTER (WHERE stagevalue !~ '^[a-zA-Z].*') AS stagevalue,
       max(stageunit),
       stageid,
       stagelinenumber
FROM datas
GROUP BY batchid, productcode, stageid, stagelinenumber;
Note that max is used only because you need an aggregate function to combine with FILTER. You could replace it with min and get the same result, at least for these data.
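If the table can also contain values that are neither letter-prefixed codes nor clean numbers, a stricter numeric test before the cast avoids run-time cast errors. A sketch against the same sample table:
SELECT batchid,
       productcode,
       max(stagevalue) FILTER (WHERE stagevalue !~ '^[0-9]') AS loadedproductcode,
       -- cast only values that look strictly numeric
       max(stagevalue::float) FILTER (WHERE stagevalue ~ '^[0-9]+(\.[0-9]+)?$') AS stagevalue,
       max(stageunit) AS stageunit,
       stageid,
       stagelinenumber
FROM datas
GROUP BY batchid, productcode, stageid, stagelinenumber;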

Where should the size values be placed?

I am computing length and width values with the help of MediaQuery to get a responsive design. Where should I put these values? In core/constants/? Is there any example project or document I can look at to figure out this kind of thing?
├───core
│ ├───constants
│ │ ├───app
│ │ ├───color
│ │ └───textstyle
│ ├───extension
│ └───init
│ └───translations
├───product
│ ├───error
│ ├───navigator
│ │ └───guard
│ └───widget
│ ├───appbar
│ ├───button
│ └───textfield
├───providers
└───view
├───authenticate
│ ├───login
│ │ ├───model
│ │ ├───service
│ │ ├───view
│ │ └───viewmodel
│ ├───onboard
│ │ ├───model
│ │ ├───view
│ │ └───widget
│ ├───register
│ │ ├───model
│ │ ├───service
│ │ └───view
│ └───reset_password_view.dart
│ └───view
├───home
│ ├───home
│ │ └───view
│ ├───menu
│ │ └───view
│ ├───models
│ ├───more
│ │ └───view
│ ├───offers
│ │ └───view
│ └───profile
│ └───view
├───welcome
│ └───view
└───_product
└───_widgets
├───card
├───listtile
└───safearea
import 'package:flutter/widgets.dart';

extension MediaQueryExtension on BuildContext {
  double dynamicWidth(double val) => MediaQuery.of(this).size.width * val;
  double dynamicHeight(double val) => MediaQuery.of(this).size.height * val;
}
There is no written rule about this; it usually depends on developer or company preferences.
But since the extension can be used project-wide, the most reasonable place for it would be core/extension (which your tree already has).
I would not place it inside constants, since these are not constant values.
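For illustration, usage inside a build method could then look like this (a sketch; the widget and the 0.8/0.1 factors are arbitrary examples):
// A box taking 80% of the screen width and 10% of its height.
SizedBox(
  width: context.dynamicWidth(0.8),
  height: context.dynamicHeight(0.1),
  child: const Text('responsive'),
)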

Filter by all parts of a LTREE-Field

Let's say I have a table people with the following columns:
name/string, mothers_hierarchy/ltree
"josef", "maria.jenny.lisa"
How do I find all mothers of Josef in the people table?
I am looking for an expression like this one, but one that actually works:
SELECT * FROM people WHERE name IN (
    SELECT mothers_hierarchy FROM people WHERE name = 'josef'
)
You can cast the names to ltree and then use index() to see if they are contained:
# select * from people;
┌───────┬───────────────────────┐
│ name │ mothers_hierarchy │
├───────┼───────────────────────┤
│ josef │ maria.jenny.lisa │
│ maria │ maria │
│ jenny │ maria.jenny │
│ lisa │ maria.jenny.lisa │
│ kate │ maria.jenny.lisa.kate │
└───────┴───────────────────────┘
(5 rows)
# select *
from people j
join people m
on index(j.mothers_hierarchy, m.name::ltree) >= 0
where j.name = 'josef';
┌───────┬───────────────────┬───────┬───────────────────┐
│ name │ mothers_hierarchy │ name │ mothers_hierarchy │
├───────┼───────────────────┼───────┼───────────────────┤
│ josef │ maria.jenny.lisa │ maria │ maria │
│ josef │ maria.jenny.lisa │ jenny │ maria.jenny │
│ josef │ maria.jenny.lisa │ lisa │ maria.jenny.lisa │
└───────┴───────────────────┴───────┴───────────────────┘
(3 rows)
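An alternative, under the assumption (true for the sample data) that every mother's own mothers_hierarchy ends at herself: the ltree ancestor operator @> can express the same containment without index(). The extra name check excludes josef himself, whose path equals lisa's:
-- @> is true when the left path is an ancestor of (or equal to) the right one
select m.*
from people j
join people m
  on m.mothers_hierarchy @> j.mothers_hierarchy
where j.name = 'josef'
  and m.name <> j.name;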

Combine columns of text type to a jsonb column in postgresql

I have a table with the below structure in Postgres, where id is the primary key.
┌──────────────────────────────────┬──────────────────┬───────────┬──────────┬──────────────────────────────────────────────────────────────┬──────────┬──────────────┬─────────────┐
│ Column │ Type │ Collation │ Nullable │ Default │ Storage │ Stats target │ Description │
├──────────────────────────────────┼──────────────────┼───────────┼──────────┼──────────────────────────────────────────────────────────────┼──────────┼──────────────┼─────────────┤
│ id │ bigint │ │ │ │ plain │ │ │
│ requested_external_total_taxable │ bigint │ │ │ │ plain │ │ │
│ requested_external_total_tax │ bigint │ │ │ │ plain │ │ │
│ store_address.country │ text │ │ │ │ extended │ │ │
│ store_address.city │ text │ │ │ │ extended │ │ │
│ store_address.postal_code │ text │ │ │ │ extended │ │ │
└──────────────────────────────────┴──────────────────┴───────────┴──────────┴──────────────────────────────────────────────────────────────┴──────────┴──────────────┴─────────────┘
I want to convert the store_address fields to a jsonb column.
┌──────────────────────────────────┬──────────────────┬───────────┬──────────┬──────────────────────────────────────────────────────────────┬──────────┬──────────────┬─────────────┐
│ Column │ Type │ Collation │ Nullable │ Default │ Storage │ Stats target │ Description │
├──────────────────────────────────┼──────────────────┼───────────┼──────────┼──────────────────────────────────────────────────────────────┼──────────┼──────────────┼─────────────┤
│ id │ bigint │ │ │ │ plain │ │ │
│ requested_external_total_taxable │ bigint │ │ │ │ plain │ │ │
│ requested_external_total_tax │ bigint │ │ │ │ plain │ │ │
│ store_address │ jsonb │ │ │ │ extended │ │ │
└──────────────────────────────────┴──────────────────┴───────────┴──────────┴──────────────────────────────────────────────────────────────┴──────────┴──────────────┴─────────────┘
Is there any efficient way of doing this?
You will need to add a new column and UPDATE the table to populate the new jsonb column. After that you can drop the old columns:
alter table the_table
  add store_address jsonb;

update the_table
  set store_address = jsonb_build_object(
        'country', "store_address.country",
        'city', "store_address.city",
        'postal_code', "store_address.postal_code");

alter table the_table
  drop "store_address.country",
  drop "store_address.city",
  drop "store_address.postal_code";

Apache Druid: Issue while updating data in a datasource

I am currently using druid-0.16.0-incubating. As mentioned in the tutorial at https://druid.apache.org/docs/latest/tutorials/tutorial-update-data.html, we can use the combining firehose to update and merge the data for a datasource.
Step 1:
I am using the same sample data, with the following initial structure:
┌──────────────────────────┬──────────┬───────┬────────┐
│ __time │ animal │ count │ number │
├──────────────────────────┼──────────┼───────┼────────┤
│ 2018-01-01T01:01:00.000Z │ tiger │ 1 │ 100 │
│ 2018-01-01T03:01:00.000Z │ aardvark │ 1 │ 42 │
│ 2018-01-01T03:01:00.000Z │ giraffe │ 1 │ 14124 │
└──────────────────────────┴──────────┴───────┴────────┘
Step 2:
I updated the data for tiger with {"timestamp":"2018-01-01T01:01:35Z","animal":"tiger", "number":30}, using appendToExisting = false and rollUp = true, and got this result:
┌──────────────────────────┬──────────┬───────┬────────┐
│ __time │ animal │ count │ number │
├──────────────────────────┼──────────┼───────┼────────┤
│ 2018-01-01T01:01:00.000Z │ tiger │ 2 │ 130 │
│ 2018-01-01T03:01:00.000Z │ aardvark │ 1 │ 42 │
│ 2018-01-01T03:01:00.000Z │ giraffe │ 1 │ 14124 │
└──────────────────────────┴──────────┴───────┴────────┘
Step 3:
Now I update giraffe with {"timestamp":"2018-01-01T03:01:35Z","animal":"giraffe", "number":30}, using appendToExisting = false and rollUp = true, and get the following result:
┌──────────────────────────┬──────────┬───────┬────────┐
│ __time │ animal │ count │ number │
├──────────────────────────┼──────────┼───────┼────────┤
│ 2018-01-01T01:01:00.000Z │ tiger │ 1 │ 130 │
│ 2018-01-01T03:01:00.000Z │ aardvark │ 1 │ 42 │
│ 2018-01-01T03:01:00.000Z │ giraffe │ 2 │ 14154 │
└──────────────────────────┴──────────┴───────┴────────┘
My doubt is: in step 3 the count for tiger decreased by 1, but I think it should not have changed, since step 3 makes no change to tiger and no change to its number either.
FYI, count and number are in the metricsSpec, as count and longSum respectively.
Please clarify.
When using the ingestSegment firehose with initial data like
┌──────────────────────────┬──────────┬───────┬────────┐
│ __time │ animal │ count │ number │
├──────────────────────────┼──────────┼───────┼────────┤
│ 2018-01-01T00:00:00.000Z │ aardvark │ 1 │ 9999 │
│ 2018-01-01T00:00:00.000Z │ bear │ 1 │ 111 │
│ 2018-01-01T00:00:00.000Z │ lion │ 2 │ 200 │
└──────────────────────────┴──────────┴───────┴────────┘
while adding new data {"timestamp":"2018-01-01T03:01:35Z","animal":"giraffe", "number":30} with appendToExisting = true, I am getting
┌──────────────────────────┬──────────┬───────┬────────┐
│ __time │ animal │ count │ number │
├──────────────────────────┼──────────┼───────┼────────┤
│ 2018-01-01T00:00:00.000Z │ aardvark │ 1 │ 9999 │
│ 2018-01-01T00:00:00.000Z │ bear │ 1 │ 111 │
│ 2018-01-01T00:00:00.000Z │ lion │ 2 │ 200 │
│ 2018-01-01T00:00:00.000Z │ aardvark │ 1 │ 9999 │
│ 2018-01-01T00:00:00.000Z │ bear │ 1 │ 111 │
│ 2018-01-01T00:00:00.000Z │ giraffe │ 1 │ 30 │
│ 2018-01-01T00:00:00.000Z │ lion │ 1 │ 200 │
└──────────────────────────┴──────────┴───────┴────────┘
Is this the correct and expected output? Why didn't the rollup happen?
Druid actually has only two modes: overwrite or append.
With appendToExisting = true, your data is appended to the existing data, which causes the "number" field to increase (and the count as well).
With appendToExisting = false, all data in the segment is overwritten. I think this is what is happening here.
This is different from "normal" databases, where you can update specific rows.
In Druid you can update only certain rows, but this is done by re-indexing your data, which is not a very easy process.
This re-indexing is done with an ingestSegment firehose, which reads your data from a segment and then writes it to a segment (which can be the same one). During this process you can apply a transform filter, which performs a specific action, such as updating certain field values.
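For reference, the ingestSegment part of such a re-index task's ioConfig looks roughly like this. A sketch based on the Druid 0.16 native batch documentation; the datasource name and interval are placeholders, and the surrounding dataSchema (which is where a transformSpec would go) and tuningConfig are omitted:
"ioConfig": {
  "type": "index",
  "firehose": {
    "type": "ingestSegment",
    "dataSource": "updates-tutorial",
    "interval": "2018-01-01/2018-01-03"
  }
}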
We have built a PHP library to make these processes easier to work with. See this example of how to re-index a segment and apply a transformation during the re-indexing:
https://github.com/level23/druid-client#reindex