When and how does InnoDB access its metadata or indexes?

There is little information about the read path or write path with respect to the role of metadata or indexes. I cannot really understand how InnoDB manages its metadata, or when it accesses it in order to execute a query.
Can you please name some keywords or point me to docs that would make this clear?
Thank you very much!

As of MySQL 5.6, you can extract metadata about schema objects managed by InnoDB using InnoDB INFORMATION_SCHEMA system tables.
Read more on Oracle's website: https://docs.oracle.com/cd/E17952_01/mysql-5.6-en/innodb-information-schema-system-tables.html
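For example, a query along these lines lists the tables and indexes that InnoDB tracks in its internal data dictionary, which in MySQL 5.6 is persisted in the system tablespace (the 'test/%' filter is just an assumed database name):
SELECT t.NAME AS table_name, i.NAME AS index_name, i.TYPE, i.N_FIELDS
FROM INFORMATION_SCHEMA.INNODB_SYS_TABLES t
JOIN INFORMATION_SCHEMA.INNODB_SYS_INDEXES i ON i.TABLE_ID = t.TABLE_ID
WHERE t.NAME LIKE 'test/%';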

Related

Postgres TDE capability only for specific schema

As part of a GDPR requirement we need to encrypt data at rest.
We are planning to use Postgres, and from the links below it looks like TDE can be achieved in Postgres as well.
https://www.enterprisedb.com/blog/postgres-and-transparent-data-encryption-tde
https://www.cybertec-postgresql.com/en/products/postgresql-transparent-data-encryption/
When we have multiple schemas in Postgres, is it possible to apply TDE to only a particular schema?
Unfortunately it is not possible to encrypt just a schema, because when you install PostgreSQL TDE you initialize the whole database instance with the encryption key.
There is a reason for this: if we allowed encryption at the per-table level (or per schema or per database, it doesn't matter), we would have to manage an unbounded number of keys. This is especially true during point-in-time recovery and so on. That is why we decided to do the encryption at the instance level, with one key. The core advantage is that we can easily encrypt all parts of the instance, including the WAL, temp files, and so on (basically everything but the clog).
Don't expect this to change - go for full encryption.
We can help you with that.
Cheers from Cybertec :)
I hope you like the feature :)
Hans

Can someone explain the functionality of ActiveRecord postgres pg_type?

https://github.com/rails/rails/blob/master/activerecord/lib/active_record/connection_adapters/postgresql_adapter.rb
Can anyone explain pg_type in Postgres? I cannot find anything like it in other database connection adapters such as MySQL and SQLite. What functionality and features does it provide?
PostgreSQL has a rich set of native data types available to users.
Users can add new types to PostgreSQL using the CREATE TYPE command or new domains using CREATE DOMAIN.
Also, when you create a table or a view, the corresponding composite type with the same name is automatically created.
Each database may have a different set of defined types. Information about all types and domains known in a database is stored in the system catalog pg_type.
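As a quick illustration (the type and domain names here are made up), both a user-defined type and a domain show up as rows in pg_type:
CREATE TYPE mood AS ENUM ('sad', 'ok', 'happy');
CREATE DOMAIN us_zip AS text CHECK (VALUE ~ '^[0-9]{5}$');
-- typtype is 'e' for the enum and 'd' for the domain
SELECT typname, typtype FROM pg_type WHERE typname IN ('mood', 'us_zip');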
The postgres catalog table pg_type contains information about all data types available in your database. That includes built-in datatypes like bool and text, extension datatypes like hstore, and custom datatypes that are the result of using CREATE TYPE.
There's more information available in the postgres documentation for that table, if you're interested. For most uses of the database, you don't need to access pg_type, but it can be useful. In this case, ActiveRecord is, among other things, querying pg_type to pull accurate information about the types of each column in a user-created table.
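As a rough sketch of that kind of lookup (this is not ActiveRecord's exact query, and the table name users is just an example), the column types of a table can be resolved by joining the system catalogs:
SELECT a.attname AS column_name, t.typname AS type_name
FROM pg_attribute a
JOIN pg_class c ON c.oid = a.attrelid
JOIN pg_type t ON t.oid = a.atttypid
WHERE c.relname = 'users'
AND a.attnum > 0
AND NOT a.attisdropped;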

Hibernate indexing for like expression

I am using Hibernate 3.3.1 and a PostgreSQL 9.2.2 server. The tables for my application are generated by Hibernate automatically, and now I would like to optimize a very frequently used "like" expression on a table, which looks this way:
"where path like 'RootFolder_FirstSubfolder%'"
By default, Hibernate only creates an index for the "id" column I defined via annotation.
Are there any recommendations for how I could speed up my "like" expression using more indexes?
Thanks very much in advance for helping me
Kind regards
Shannon
Hibernate can use the Index annotation to automatically create an additional index:
@org.hibernate.annotations.Index(name = "IDX_PATH")
private String path;
BUT it won't help, since the created index is not suitable for LIKE clauses.
Read the most upvoted answer here for a better solution. Unfortunately, it requires custom SQL, and AFAIK there is no easy way to integrate custom SQL into the script generated by the Hibernate schema update tool.
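For PostgreSQL, the usual fix boils down to one custom DDL statement: create an index on the column with a pattern operator class so the planner can use it for anchored LIKE patterns. A minimal sketch, assuming the Hibernate-generated table is called folder and the column is a varchar (the table and index names here are made up):
CREATE INDEX idx_path_prefix ON folder (path varchar_pattern_ops);
With that index in place, a predicate like path LIKE 'RootFolder_FirstSubfolder%' can use an index scan; patterns with a leading wildcard still cannot. For a text column, text_pattern_ops is the corresponding operator class.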
As an alternative to Hibernate auto update, you can use a tool like Liquibase to manage schema updates. It requires more setup, but it gives you full control of the schema update scripts.

Migrating a schema from one database to other

As part of a requirement, I need to migrate a schema from an existing database to a new schema in a different database. Part of it is already done, and now I need to compare the two schemas and make changes in the new schema based on the gaps found.
I am not using a tool and was trying to understand some details using the SYSCAT catalog views, but could not get much success.
Any pointers on the best way to solve this?
Regards,
Ramakant
A tool really is the best way to solve this – IBM Data Studio is free and can compare schemas between databases.
Assuming you are using DB2 for Linux/UNIX/Windows, you can do a rudimentary compare by looking at selected columns in SYSCAT.TABLES and SYSCAT.COLUMNS (for table definitions), and SYSCAT.INDEXES (for indexes). Exporting this data to files and using diff may be the easiest method. However, doing this for more complex structures (tables with range or database partitioning, foreign keys, etc) will become very complex very quickly as this information is spread across a lot of different system catalog tables.
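A minimal sketch of that approach, assuming the schema is called MYSCHEMA: run the same query against both databases, export the results to files, and diff the files:
SELECT TABNAME, COLNAME, COLNO, TYPENAME, LENGTH, SCALE, NULLS
FROM SYSCAT.COLUMNS
WHERE TABSCHEMA = 'MYSCHEMA'
ORDER BY TABNAME, COLNO;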
An alternative method would be to extract DDL using the db2look utility. However, you can't specify the order that db2look outputs objects (db2look extracts DDL based on the objects' CREATE_TIME), so you can't extract DDL for an entire schema into a file and expect to use diff to compare. You would need to extract DDL into a separate file for each table.
Use SchemaCrawler for IBM DB2, a free open-source tool designed to produce text output that can be diffed. You can get very detailed information about your schema, including view and stored procedure definitions. All of the information that you need will be output to a single file, and can be compared very easily using a standard diff tool.
Sualeh Fatehi, SchemaCrawler
Unfortunately, as per company policy, we cannot use these tools at this point in time. So I am writing a program using JDBC to get the details and do the comparison.

How do I remove CHECK PENDING state from a DB2 Tablespace on z/OS?

Maybe one of you can help me with this DB2 z/OS thingy.
I edited a foreign key on a table that was already populated. For integrity reasons (I guess), the tablespace was placed in CHECK PENDING state and I cannot perform operations on it any longer.
This IBM help page is about the exact problem
It says
Action
Perform the CHECK DATA command: CHECK DATA TABLESPACE database-name.table-space-name
I have no clue what this means (it's not an SQL statement, that's for sure) or where I can issue the command. Maybe one of you can tell me what to do. TIA
As you noted, CHECK DATA is not an SQL statement; it is a DB2 utility. See: Check Data
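For illustration, the utility control statement looks something like the following (MYDB and MYTS are placeholder database and table space names); on z/OS it is submitted as part of a batch utility job, not as SQL:
CHECK DATA TABLESPACE MYDB.MYTS
If the utility finds no constraint violations, it resets the CHECK PENDING state on the table space.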