Cannot import static blocks when installing Porto theme in Magento 2

Hi, I am very new to Magento and I am trying to install the Porto theme. I have activated the theme, added the demo I want, and overwritten the CMS pages. Now, when I try to import the static blocks, it gives an error:
SQLSTATE[23000]: Integrity constraint violation: 1048 Column 'title' cannot be null, query was: INSERT INTO `cms_block` (`block_id`, `title`, `identifier`, `content`, `is_active`) VALUES (?, ?, ?, ?, ?)

You can manually export/import the CMS block and CMS page data using mysqldump or phpMyAdmin.
All the data you need should be in those tables.
The DB tables involved are (see the example dump command after this list):
cms_block
cms_block_store
cms_page
cms_page_store
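For example, a minimal mysqldump sketch (the user, database names and output file are placeholders; adjust them to your installation):
# dump only the four CMS tables from the source shop
mysqldump -u your_db_user -p your_magento_db cms_block cms_block_store cms_page cms_page_store > cms_export.sql
# load them into the target shop
mysql -u your_db_user -p your_target_db < cms_export.sql
Note that the block_id / page_id values are copied as-is, so this only works cleanly if the target tables are empty or do not already use those IDs.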

Related

How to export data from a PostgreSQL DB as plain SQL?

I am building an API that is developed by multiple developers, and we need to have the same DB for testing purposes. We are using UUIDs, so it would be ideal if we all used the same UUIDs. I can't seem to find a way to export the DB contents as plain executable SQL, preferably with INSERT statements: no DROP TABLE statements, no recreation of the database.
I would like the end result to look something like:
INSERT INTO public.bla_bla(
id, bla_bla, bla_bla1, bla_bla2, bla_bla3, bla_bla4)
VALUES (?, ?, ?, ?, ?, ?);
INSERT INTO public.bla_bla(
id, bla_bla, bla_bla1, bla_bla2, bla_bla3, bla_bla4)
VALUES (?, ?, ?, ?, ?, ?);
...
I am using pgAdmin 4 as the UI, but I also have DBeaver.
I have tried using the Backup wizard to export the data.
On the database: does not produce a result if only data is selected. If the "sections" category sliders are used instead, the result includes DROP statements (which are not wanted) and no readable INSERT statements.
On the schema: same as above.
On the table: produces a CSV file, which is not ideal.
I tried following the steps here, but they do not yield the result I need:
How to export Postgres schema/data to plain SQL in PgAdmin 4
At this point I am considering just doing it by hand.
Use pg_dump
pg_dump -Fp -d your_database -U your_db_user --column-inserts --data-only -f db_dump.sql
--data-only will skip the creation of the CREATE TABLE statements (or any other DDL).
If you want, you can add --rows-per-insert <nnnn> to only create a single INSERT statement for multiple rows.
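For example, a sketch of such a call (file name and row count are placeholders; --rows-per-insert already dumps the data as INSERT commands, so --column-inserts is not needed here):
pg_dump -Fp -d your_database -U your_db_user --data-only --rows-per-insert=1000 -f db_dump.sql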
You can also try pg_dump with the plain-text format (-Fp) and --column-inserts.
For more details, see the pg_dump documentation.

Magento 2.1 order creation error

When I place an order (even from the admin), the order is created, but I get the following error:
Order saving error: SQLSTATE[23000]: Integrity constraint violation: 1452 Cannot add or update a child row: a foreign key constraint fails (magento_db.catalog_product_entity, CONSTRAINT CAT_PRD_ENTT_ATTR_SET_ID_EAV_ATTR_SET_ATTR_SET_ID FOREIGN KEY (attribute_set_id) REFERENCES eav_attribute_set (attribute_set_id) ON ), query was: INSERT INTO catalog_product_entity (entity_id, sku, has_options, required_options) VALUES (?, ?, ?, ?)
Any idea how to solve this?
Googling didn't help.

LibreOffice Base Form Error with PostgreSQL Autogenerated UUID Primary Key

I have a PostgreSQL 9.5 back-end, and with LibreOffice Base v5.1.3.2 (x64) I am trying to create some data entry forms for various tables, all with 1:many relationships. These tables all have auto-generated UUID primary keys.
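For context, a sketch of the kind of table I mean (assuming the uuid-ossp extension; the names are illustrative, not my actual schema):
CREATE EXTENSION IF NOT EXISTS "uuid-ossp";
CREATE TABLE parent (
  parent_id uuid PRIMARY KEY DEFAULT uuid_generate_v4(),
  name text NOT NULL
);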
LibreOffice does not like these PostgreSQL auto-generated primary keys. It keeps giving me errors when I try to create a new record, sometimes when I try to edit a new record, and it won't give me access to the sub-form after I try to create a new parent record. It's like it cannot commit the records and isn't getting an "update" from PostgreSQL upon creating a new record.
I have discovered on the net that this is a known problem with all PostgreSQL auto-generated primary keys (UUID, SERIAL, etc.) and the LibreOffice native PostgreSQL drivers.
Does anyone have a solution to this problem?
Phil

ORACLE 10g : How to import without foreign key constraint error?

I have a data dump that I want to import into another DB (Oracle 10g). The destination DB already has the tables (no data) and some foreign key constraints. I plan to import the data table by table in order to avoid constraint errors. If anyone knows an easier way, please teach me. (Oracle's import tool does not have a function to recognize relations between tables automatically, does it?)
You can disable all foreign keys first:
begin
  for cnst in (select constraint_name, table_name
               from user_constraints
               where constraint_type = 'R') loop
    execute immediate 'alter table ' || cnst.table_name || ' disable constraint ' || cnst.constraint_name;
  end loop;
end;
After loading your data, do the same to enable them back (just change the ALTER TABLE command to enable instead of disable; see the sketch below).
But this is risky, because if the data doesn't meet your constraints, you'll have a problem ...
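A minimal sketch of the enabling block (same loop, only the keyword changes; note that enabling validates the loaded data, so any rows violating a constraint will make the ALTER fail):
begin
  for cnst in (select constraint_name, table_name
               from user_constraints
               where constraint_type = 'R') loop
    execute immediate 'alter table ' || cnst.table_name || ' enable constraint ' || cnst.constraint_name;
  end loop;
end;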
Assuming that you have a schema-level export, the source schema had those same foreign key constraints, and all the foreign key constraints are between tables in the same schema, the import utility should automatically take care of the foreign key constraints. You shouldn't need to do anything for that (though, of course, you'll have to ignore errors when you do the import because it will try to create the tables and get an error that they already exist).
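For a schema-level dump, a hypothetical invocation of the classic imp utility would look something like this (file and schema names are placeholders; ignore=y suppresses the "table already exists" errors and still loads the rows):
imp your_user/your_password@your_db file=your_dump.dmp fromuser=source_schema touser=dest_schema ignore=y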

Creating a "table of tables" in PostgreSQL or achieving similar functionality?

I'm just getting started with PostgreSQL, and I'm new to database design.
I'm writing software in which I have various plugins that update a database. Each plugin periodically updates its own designated table in the database. So a plugin named 'KeyboardPlugin' will update the 'KeyboardTable', and 'MousePlugin' will update the 'MouseTable'. I'd like for my database to store these 'plugin-table' relationships while enforcing referential integrity. So ideally, I'd like a configuration table with the following columns:
Plugin-Name (type 'text')
Table-Name (type ?)
My software will read from this configuration table to help the plugins determine which table to update. Originally, my idea was to have the second column (Table-Name) be of type 'text'. But then, if someone mistypes the table name, or an existing relationship becomes invalid because of someone deleting a table, we have problems. I'd like for the 'Table-Name' column to act as a reference to another table, while enforcing referential integrity.
What is the best way to do this in PostgreSQL? Feel free to suggest an entirely new way to set up my database, different from what I'm currently exploring. Also, if it helps you answer my question, I'm using the pgAdmin tool to set up my database.
I appreciate your help.
I would go with your original plan to store the name as text. Possibly enhanced by additionally storing the schema name:
addin text
,sch text
,tbl text
Tables have an OID in the system catalog (pg_catalog.pg_class). You can get those with a nifty special cast:
SELECT 'myschema.mytable'::regclass
But the OID can change over a dump / restore. So just store the names as text and verify at application time that the table is there, by casting the name as demonstrated above.
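For example, a small sketch of such a check (to_regclass() returns NULL instead of raising an error when the table is missing; available since PostgreSQL 9.4):
SELECT to_regclass('myschema.mytable') IS NOT NULL AS table_exists;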
Of course, if you use each table for multiple addins, it might pay to make a separate table:
CREATE TABLE tbl (
  tbl_id serial PRIMARY KEY
, sch    text
, name   text
);
and reference it in ...
CREATE TABLE addin (
  addin_id serial PRIMARY KEY
, addin    text
, tbl_id   integer REFERENCES tbl(tbl_id) ON UPDATE CASCADE ON DELETE CASCADE
);
Or even make it an n:m relationship if addins have multiple tables (see the sketch below). But be aware, as @OMG_Ponies commented, that a setup like this will require you to execute a lot of dynamic SQL, because you don't know the identifiers beforehand.
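A minimal sketch of the n:m variant, assuming the two tables above (the junction table name is illustrative):
CREATE TABLE addin_tbl (
  addin_id integer REFERENCES addin(addin_id) ON UPDATE CASCADE ON DELETE CASCADE
, tbl_id   integer REFERENCES tbl(tbl_id)     ON UPDATE CASCADE ON DELETE CASCADE
, PRIMARY KEY (addin_id, tbl_id)
);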
I guess all plugins have a set of basic attributes, and then each plugin will have a set of plugin-specific attributes. If this is the case, you can use a single table together with the hstore data type (a standard extension that just needs to be installed).
Something like this:
CREATE EXTENSION IF NOT EXISTS hstore;

CREATE TABLE plugins
(
  plugin_name text not null primary key,
  common_int_attribute integer not null,
  common_text_attribute text not null,
  plugin_attributes hstore
);
Then you can do something like this:
INSERT INTO plugins
(plugin_name, common_int_attribute, common_text_attribute, plugin_attributes)
VALUES
('plugin_1', 42, 'foobar', 'some_key => "the fish", other_key => 24'),
('plugin_2', 100, 'foobar', 'weird_key => 12345, more_info => "10.2.4"');
This creates two plugins named plugin_1 and plugin_2.
plugin_1 has the additional attributes "some_key" and "other_key", while plugin_2 stores the keys "weird_key" and "more_info".
You can index those hstore columns and query them very efficiently.
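For example, a GIN index on the hstore column (the index name is illustrative) supports the ? and @> operators used in the queries below:
CREATE INDEX plugins_attributes_idx ON plugins USING gin (plugin_attributes);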
The following will select all plugins that have a key "weird_key" defined.
SELECT *
FROM plugins
WHERE plugin_attributes ? 'weird_key'
The following statement will select all plugins that have a key some_key with the value the fish:
SELECT *
FROM plugins
WHERE plugin_attributes @> ('some_key => "the fish"')
Much more convenient than using an EAV model in my opinion (and most probably a lot faster as well).
The only drawback is that you lose type-safety with this approach (but usually you'd lose that with the EAV concept as well).
You don't need an application catalog. Just add the application name to the keys of the table. This of course assumes that all the tables have the same structure. If not, use the application name as a table name or, as others have suggested, as a schema name (which would also allow for multiple tables per application).
EDIT:
But the real issue is of course that you should first model your data, and then build the applications to manipulate it. The data should not serve the code; the code should serve the data.