Behavioural equivalent of NOSCALE in Oracle 12c R1

Oracle 12c R2 introduces the concept of NOSCALE to its sequence creation script options.
What is the equivalent option for NOSCALE in 12c R1? I.e., if I just leave it off my script in R1, will I get the same behaviour? Or is there some other option I need to specify? What differences will I notice?

Really, what 12c R2 introduced is the concept of SCALE, i.e. scalable sequences; NOSCALE just names the default behaviour. In 12c R1 you have no option but the default, which is effectively NOSCALE, so simply leave the clause off.
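As a minimal sketch (the sequence name is hypothetical, and each statement targets its respective database version):

-- 12c R2: NOSCALE can be spelled out explicitly, but it is also the default
CREATE SEQUENCE order_seq START WITH 1 INCREMENT BY 1 NOSCALE;

-- 12c R1: there is no SCALE/NOSCALE clause, so just leave it off;
-- the behaviour matches the R2 default (NOSCALE)
CREATE SEQUENCE order_seq START WITH 1 INCREMENT BY 1;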

Related

Is Change tracking feature available in PostgreSQL similar to that of Microsoft SQL Server?

Is a change tracking feature, similar to the one in Microsoft SQL Server, available in PostgreSQL? We are using PostgreSQL and MS SQL together and want to move changed data from PostgreSQL to MS SQL using change tracking. What is the best and most lightweight way to achieve this?
Yes, there is something like that.
It's called logical decoding and is part of the infrastructure for logical replication. While logical replication can only be used between two Postgres instances, logical decoding can be used independently of that (if you write the code to consume the messages).
Tools like Debezium make use of this.
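As a minimal sketch of what logical decoding looks like at the SQL level (the slot name here is made up, and the server must be running with wal_level = logical):

-- Create a logical replication slot using the built-in test_decoding output plugin
SELECT * FROM pg_create_logical_replication_slot('my_slot', 'test_decoding');

-- Peek at pending changes without consuming them
SELECT * FROM pg_logical_slot_peek_changes('my_slot', NULL, NULL);

-- Consume the changes (they are gone from the slot once read)
SELECT * FROM pg_logical_slot_get_changes('my_slot', NULL, NULL);

-- Drop the slot when it is no longer needed
SELECT pg_drop_replication_slot('my_slot');

A tool like Debezium does essentially this for you, continuously, and publishes the decoded changes so another system (e.g. MS SQL) can apply them.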

Is native IMS vulnerable to injections?

As shown in this article, DB2 might be vulnerable to SQL injection:
* Potential SQL injection if X, Y or Z host variables come from untrusted input
STRING "INSERT INTO TBL (a,b,c) VALUES (" X "," Y "," Z ")"
    DELIMITED BY SIZE INTO MY-SQL.
EXEC SQL PREPARE STMT FROM :MY-SQL END-EXEC.
EXEC SQL EXECUTE STMT END-EXEC.
My question is whether native IMS commands are vulnerable to this kind of (or a similar) injection, for instance by passing malicious input to the ISRT DL/I command.
It depends on how you plan to access the IMS database.
Quoting from an IBM document.
The SQL statements that you issue through the web interface or the
ISPF interface are executed as IMS application programming API in the
IMS SPUFI application program in z/OS®. You can select COBOL or Java™
for the language environment to execute SQL statements.
If you use SQL, you're possibly vulnerable to SQL injection.
If you use native IMS commands, probably not. But it's still a good idea to sanitize your inputs, even for native IMS commands.
Yes, all SQL databases that support runtime parsing of an SQL query string are susceptible to SQL injection.
SQL injection is not a flaw in the database technology, it's a flaw in the client code you write that builds the SQL query string.
I’m a member of the IBM IMS team.
IMS DL/I calls are not dynamic and for that reason are not susceptible in the way SQL calls are. There is no injection risk for CALL xxxTDLI IMS APIs. That being said, a COBOL program can open up risk by allowing input to the program to influence the SSA list or IOAREA parameters being passed to xxxTDLI. So, secure engineering practices should be followed while programming against these interfaces.
No, an IMS DL/I database doesn't parse the record at all. See it as an early version of a NoSQL database like Cassandra. The segment key is parsed as a binary value but you can't do injections like in a SQL database.
And depending on the skill of the programmers/IMS admins, the attack surface can be reduced further by limiting the CRUD actions available to the program via the PROCOPTs of the PCB in the PSB.
Most IMS systems that also use DB2 use static SQL, so the statement is already prepared and not vulnerable to SQL injection attacks.

Using H2 database only for Unit testing

I have a Spring Boot application backed by a PostgreSQL database.
Now I want to use an H2 database for unit testing alone.
Is this the right thing to do, or what is the recommendation?
Yes, and you should use H2 as an in-memory database: it lets you create a clean database quickly enough to run unit tests against, and delete it just as quickly once the test phase has finished.
Creating and deleting a physical database on each build would take much more time and slow down your local build.
That said, automated testing should not rely only on H2.
It has some limitations that can produce slightly different behaviour compared to your target DBMS (PostgreSQL).
You should also create integration tests that use the target DBMS.
Generally these integration tests should not be executed on the developer build but in a continuous integration environment.
H2 compatibility and limitations:
H2 provides compatibility modes for specific databases (PostgreSQL and many others), but these have multiple corner cases.
It does not fully support ANSI SQL or database-specific features:
Compatibility
All database engines behave a little bit different. Where possible, H2
supports the ANSI SQL standard, and tries to be compatible to other
databases. There are still a few differences however:
In MySQL text columns are case insensitive by default, while in H2
they are case sensitive. However H2 supports case insensitive columns
as well. To create the tables with case insensitive texts, append
IGNORECASE=TRUE to the database URL (example:
jdbc:h2:~/test;IGNORECASE=TRUE).
And you can find some (not very detailed) information on this page about the specific database compatibility modes:
Compatibility Modes
For certain features, this database can emulate the behavior of
specific databases. However, only a small subset of the differences
between databases are implemented in this way. Here is the list of
currently supported modes and the differences to the regular mode:
DB2 Compatibility Mode
...
MySQL Compatibility Mode
...
Oracle Compatibility Mode
...
PostgreSQL Compatibility Mode
To use the PostgreSQL mode, use the database URL
jdbc:h2:~/test;MODE=PostgreSQL or the SQL statement SET MODE
PostgreSQL.
For aliased columns, ResultSetMetaData.getColumnName() returns the
alias name and getTableName() returns null. When converting a floating
point number to an integer, the fractional digits are not
truncated, but the value is rounded. The system columns CTID and OID
are supported. LOG(x) is base 10 in this mode.
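As a rough sketch of that setup (the JDBC URL and table are assumptions, not taken from the question): an in-memory H2 database in PostgreSQL mode can be opened with a URL such as jdbc:h2:mem:testdb;MODE=PostgreSQL;DB_CLOSE_DELAY=-1, and each test can reset it to a blank slate:

-- Hypothetical schema and data created by a test
CREATE TABLE customer (id BIGINT PRIMARY KEY, name VARCHAR(100));
INSERT INTO customer (id, name) VALUES (1, 'test');

-- H2-specific statement that wipes the in-memory database between tests
DROP ALL OBJECTS;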
I can recommend that. H2 has a kind of compatibility mode for Postgres, which makes it quite similar. The only part where we had problems was the lack of common table expressions.
The biggest advantage I see is the in-memory DB. You can easily start each test with a blank slate, which is much easier than with any disk-backed DBMS.
As a live DB, especially when you need to store a lot of data, the efficiency is lacking in my opinion. We had some performance problems in tests with larger data volumes, around 1,000,000 records. Because of this you naturally cannot do any meaningful index optimization using H2.

Oracle Forms 10g/12c function SET_FIELD

Does the SET_FIELD function exist in Oracle Forms 10g or 12c? I can hardly find any documentation for this function.
Many Thanks
No it does not. I guess you are asking because it appears in a list of reserved words in the online help? That is because there was once a built-in procedure of that name in an earlier version. Use SET_ITEM_PROPERTY instead.
See http://www.oracle.com/technetwork/developer-tools/forms/264850-130496.pdf
To be precise, Tony's answer is not entirely correct. In Oracle Forms 12c:
The SET_FIELD procedure cannot be found in the Oracle Forms help; it exists only in the "PL/SQL and Oracle Forms Reserved Words" list, so SET_FIELD can be treated as deprecated.
At the same time, the SET_FIELD procedure is still supported and works fine in Oracle Forms 12c (confirmed in working code). Oracle also mentions SET_FIELD in Upgrading Oracle Forms 6i to Oracle Forms 12c:
Replace any references to obsolete logical
and GUI attributes in SET_ITEM_PROPERTY, SET_FIELD, or DISPLAY_ITEM with
an equivalent Visual Attribute.
P.S. Anyway, I am personally going to replace all existing occurrences of SET_FIELD with SET_ITEM_PROPERTY :)
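As a hedged sketch of that replacement (block, item, and visual attribute names are made up, and the exact property to set depends on what the old SET_FIELD call changed):

-- In a Forms trigger, instead of the obsolete SET_FIELD call,
-- point the item at a named Visual Attribute defined in the form
BEGIN
  SET_ITEM_PROPERTY('EMP.ENAME', VISUAL_ATTRIBUTE, 'VA_HIGHLIGHT');
END;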

Should PostgreSQL or SQL Server 2008 R2 be used with a .NET application using Entity Framework?

I have a database in PostgreSQL with millions of records and I have to develop a website that will use this database via Entity Framework (using the dotConnect for PostgreSQL provider in the case of the PostgreSQL database).
Since SQL Server and .Net are both native to the Windows platform, should I migrate the database from PostgreSQL to SQL Server 2008 R2 for performance reasons?
I have read some blogs comparing the two RDBMS' but I am still confused about which system I should use.
There is no clear answer here, as it's subjective; however, this is what I would consider:
The overhead of learning a new DBMS and its tools.
The SQL dialects each RDBMS uses and if you are using that dialect currently.
The cost (monetary and time) required to migrate from PostgreSQL to another RDBMS
Do you or your client have an ongoing budget for the new RDBMS? If not, don't make the mistake of developing an application against an RDBMS that will never see the light of day.
Personally if your current database is working well I wouldn't change. Why fix what isn't broke?
You need to find out if there is actually a problem, and if moving to SQL Server will fix it before doing any application changes.
Start by ignoring the fact that you've got .NET and are using Entity Framework. Look at the queries that your web application is going to make, and try them directly against the database. See if it's returning the information quickly enough.
Only if, after you've tuned indexes etc., you can't make the answers come back in a time you're happy with should you decide the database is a problem. At that point it makes sense to try the same tests against a SQL Server database, but don't just assume SQL Server is going to be faster. You might find out that neither can do what you need, and you need faster disks or more memory etc.
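A minimal sketch of that kind of direct testing in PostgreSQL (table, column, and index names are made up for illustration):

-- Time a representative query and inspect the plan
EXPLAIN ANALYZE
SELECT *
FROM orders
WHERE customer_id = 42
  AND created_at >= DATE '2015-01-01';

-- If the plan shows a sequential scan dominating the runtime,
-- try an index and re-run the EXPLAIN ANALYZE
CREATE INDEX idx_orders_customer_created ON orders (customer_id, created_at);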
The mechanism you're using to talk to a database (DotConnect or Microsoft drivers) will likely be a very minor performance consideration, considering the amount of information flowing (SQL statements in one direction and result sets in the other) is going to be almost identical for both technologies.