How to create multiple DB2 instances? - db2

I have a response file which creates one instance of DB2.
How can I create multiple DB2 instances using the response file for a silent installation of DB2?

In the response file you can define multiple instances. You have to define them as
INSTANCE = DB2_INS2 ** char(8) no spaces
And then, define every single element for this instance
DB2_INS2.NAME = db2inst1 ** char(8) no spaces, no upper case letters
DB2_INS2.GROUP_NAME = db2iadm1 ** char(30) no spaces
The word DB2_INS2 is just a key in the response file; it is not the name of the instance.
You can give it any name. For another instance, it could be
INSTANCE = PROD ** char(8) no spaces
PROD.NAME = db2inst1 ** char(8) no spaces, no upper case letters
PROD.GROUP_NAME = db2iadm1 ** char(30) no spaces
The response file has a section where it explains how to define multiple instances:
** 2nd (non-pureScale) Instance Creation Settings
** ----------------------------------------------
** Multiple (non-pureScale) DB2 instances can be created in the same
** installation. This section shows how to specify the 2nd instance in the rsp
** file. Note: Only a subset of the instance keywords are listed below. You can
** specify other instance related keywords similar as the 1st instance. All
** keywords in this section are commented out. By default, only one instance
** will be created during the install.
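Putting the pieces together, a two-instance response file would contain something like this (untested; the user and group values are placeholders, and the generated sample response file lists the full set of per-instance keywords):
INSTANCE = DB2_INST
DB2_INST.NAME = db2inst1
DB2_INST.GROUP_NAME = db2iadm1

INSTANCE = DB2_INS2
DB2_INS2.NAME = db2inst2
DB2_INS2.GROUP_NAME = db2iadm2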

My best guess would be to create a template, then in a recipe create the files from the template with different parameters and have each template notify its execute resource.
Something like this (untested code, just to illustrate):
Attribute.rb
default['db2']['instances']['A']['port'] = 1234
default['db2']['instances']['A']['bind_address'] = "*"
default['db2']['instances']['A']['password'] = "whatever"
default['db2']['instances']['B']['port'] = 1235
default['db2']['instances']['B']['bind_address'] = "*"
default['db2']['instances']['B']['password'] = "whatever"
default.rb (recipe)
node['db2']['instances'].each do |id, properties|
  template "/path/to/db2_#{id}_answer" do
    source "answer_file.erb"
    variables(vars: properties)
    notifies :run, "execute[install_db2_#{id}]"
  end

  execute "install_db2_#{id}" do
    command "/path/to/script_to_init /path/to/db2_#{id}_answer"
    action :nothing
  end
end
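The answer_file.erb template referenced above would then render each instance's settings from the vars hash; a minimal sketch (untested; the keyword names are placeholders for whatever your real answer file needs):
PORT = <%= @vars['port'] %>
BIND_ADDRESS = <%= @vars['bind_address'] %>
PASSWORD = <%= @vars['password'] %>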

How to get permission to change a run-time parameter?

This wonderful answer proposes the GUC pattern: use run-time parameters to detect the current user inside a trigger (as one solution). It seemed to suit me too. But the problem is: when I declare the variable in postgresql.conf, it is usable inside the trigger and I can access it from queries, but I can't change it:
# SET rkdb.current_user = 'xyzaaa';
ERROR: syntax error at or near "current_user"
LINE 1: SET rkdb.current_user = 'xyzaaa';
The error message is misleading, so I didn't dig into it for a while, but now it seems this user (the database owner) has no permission to change parameters set in the global configuration.
I can set any other params:
# SET jumala.kama = 24;
SET
And read it back:
# SHOW jumala.kama;
jumala.kama
-------------
24
(1 row)
I can't SHOW globally set params:
# SHOW rkdb.current_user;
ERROR: syntax error at or near "current_user"
LINE 1: SHOW rkdb.current_user;
^
but I can reach it with the current_setting() function:
# select current_setting('rkdb.current_user');
current_setting
-----------------
www
(1 row)
So my guess is that my database owner does not have permission to access this param. How could I:
set the needed permissions?
or, even better,
set run-time params with database-owner rights?
current_user is an SQL standard function, so using that name confuses the parser.
Either use a different name or surround it with double quotes, like this:
rkdb."current_user"

Assigning a whole DataStructure its nullind array

Some context before the question.
Imagine a file FileA with around 50 fields of different types. Instead of having all programs use the file directly, I tried having a service program, so the file could only be accessed through that service program. The programs calling the service would then receive a data structure based on the file structure, as an ExtName. I use SQL to retrieve the information, so, basically, the procedure goes like this:
Datastructure shared by service program :
D FileADS E DS ExtName(FileA) Qualified
Procedure called by programs :
P getFileADS B Export
D PI N
D PI_IDKey 9B 0 Const
D PO_DS LikeDS(FileADS)
D LocalDS E DS ExtName(FileA) Qualified
D NullInd S 5i 0 Dim(50) <-- since FileA has 50 fields
//Code
Clear LocalDS;
Clear PO_DS;
exec sql
SELECT *
INTO :LocalDS :nullind
FROM FileA
WHERE FileA.ID = :PI_IDKey;
If SqlCod <> 0;
Return *Off;
EndIf;
PO_DS = LocalDS;
Return *On;
P getFileADS E
So that procedure will return a data structure filled with a record from FileA if it finds one.
Now my question: is there any way I can assign %nullind(field) = *On without specifying EACH of the 50 fields of my file?
Something like this loop:
i = 1;
DoW (i <= 50);
  if nullind(i) = -1;
    %nullind(datastructure.field) = *On;
  endif;
  i += 1;
EndDo;
Because let's face it, it'd be a pain to list every field of every file each time.
I know a simple chain(n) could do the trick:
chain(n) PI_IDKey FileA FileADS;
but I was really looking to do it with SQL.
Thank you for your advice!
OS Version : 7.1
First, you'll be better off in the long run by eliminating SELECT * and supplying a SELECT list of the 50 field names.
Next, consider these two web pages -- Meaningful Names for Null Indicators and Embedded SQL and null indicators. The first shows an example of assigning names to each null indicator to match the associated field names. It's just a matter of declaring a based DS with names, based on the address of your null indicator array. The second points out how a null indicator array can be larger than needed, so future database changes won't affect results. (Bear in mind that the page shows a null array of 1000 elements, and the memory is actually relatively tiny even at that size. You can declare it smaller if you think it's necessary for some reason.)
You're creating a proc that you'll only write once; the effort saved by avoiding a list of 50 fields isn't worth it. Maybe if you had many programs using this proc and you had to create the list each time, SELECT * would be a slight help, but even then it's not a great idea.
A matching template DS for the 50 data fields can be defined in the /COPY member that will hold the proc prototype. The template DS will be available in any program that brings the proc prototype in. Any program that needs to call the proc can simply specify LIKEDS referencing the template to define its version in memory. The template DS should probably include the QUALIFIED keyword, and programs would then use their own DS names as the qualifying prefix. The null indicator array can be handled similarly.
However, it's not completely clear what your actual question is. You show an example loop and ask if it'll work, but you don't say if you had a problem with it. It's an array, so a loop can be used much like you show. But it depends on what you're actually trying to accomplish with it.
For old school RPG, just include the null flags in the data structure populated by the SELECT statement:
select col1, case when col1 is null then -1 else 0 end, col2, case when col2 is null then -1 else 0 end, etc. into :dsfilewithnull from FileA f where f.id = :id;
For old school RPG that can't handle nulls, remove them with the SELECT statement:
select coalesce(col1, 0), coalesce(col2, ' '), coalesce(col3, :lowdate) into :dsfile from FileA f where f.id = :id;
The second method would be easier to use in a legacy environment.
Pass the key by value to the procedure so you can use it like a built-in function.
One answer to your question would be to make the array part of a data structure, and assign *all'0' to the data structure.
dcl-ds nullIndDs;
  nullInd Ind Dim(50);
end-ds;

nullIndDs = *all'0';
The answer by jmarkmurphy is an example of assigning all zeros to an array of indicators. For the example that you show in your question, you can do it this way:
D NullInd S 5i 0 dim(50)
/free
  NullInd(*) = 1;
  NullInd(*) = 0;
  *inlr = *on;
  return;
/end-free
That's a complete program that you can compile and test. Run it in debug and stop at the first statement. Display NullInd to see the initial value of its elements. Step through the first statement and display it again to see how the elements changed. Step through the next statement to see how things changed again.
As for "how to do it in SQL", that part doesn't make sense. SQL sets the values automatically when you FETCH a row. Other than that, the array is used by the host language (RPG in this case) to communicate values back to SQL. When a SQL statement runs, it again automatically uses whatever values were set. So, it either is used automatically by SQL for input or output, or is set by your host language statements. There is nothing useful that you can do 'in SQL' with that array.

How do I rename a tSQLt test class?

I'm developing a database using the Red Gate SQL Developer tools. SQL Test, the SSMS add-in that runs tSQLt tests, lacks a way to rename test classes.
I have a test called [BackendLayerCustomerAdministrationTests].[test uspMaintainCustomerPermissions throws error when PermissionValue is missing or empty].
The name is so long it breaks Deployment Manager.
2013-12-05 18:48:40 +00:00 ERROR The specified path, file name, or both are too long. The fully qualified file name must be less than 260 characters, and the directory name must be less than 248 characters.
There are other unwieldy test names in this class, so I want to start by shortening the class name.
A more succinct class name would be CustomerTests.
sp_rename is no help here.
EXECUTE sys.sp_rename
    @objname = N'BackendLayerCustomerAdministrationTests',
    @newname = N'CustomerTests';
Msg 15225, Level 11, State 1, Procedure sp_rename, Line 374
No item by the name of 'BackendLayerCustomerAdministrationTests' could be found in the current database 'ApiServices', given that @itemtype was input as '(null)'.
How do I change it?
tSQLt test classes are schemas with a special extended property.
Cade Roux's great solution for renaming schemas is to create a new schema, transfer all the objects, then drop the old schema.
If we did that here we'd lose the extended property.
Let's adapt it for the tSQLt framework.
How to rename a tSQLt test class
Create a new test class.
EXECUTE tSQLt.NewTestClass
    @ClassName = 'CustomerTests';
You should see the old class and the new class together in the tSQLt.TestClasses view.
SELECT *
FROM tSQLt.TestClasses;
Name                                      SchemaId
----------------------------------------- --------
SQLCop                                    7
BackendLayerCustomerAdministrationTests   10
CustomerTests                             14
Cade used Chris Shaffer's select variable concatenation trick to build a list of transfer statements, and print the result.
DECLARE @sql NVARCHAR(MAX) = N'';

SELECT @sql = @sql +
    N'ALTER SCHEMA CustomerTests
    TRANSFER BackendLayerCustomerAdministrationTests.' + QUOTENAME(name) + N';' +
    CHAR(13) + CHAR(10)
FROM sys.objects
WHERE SCHEMA_NAME([schema_id]) = N'BackendLayerCustomerAdministrationTests';

PRINT @sql;
Ugly, but effective.
Copy the output and execute as a new query.
ALTER SCHEMA CustomerTests
TRANSFER BackendLayerCustomerAdministrationTests.[test uspMaintainCustomer validate merged data];
ALTER SCHEMA CustomerTests
TRANSFER BackendLayerCustomerAdministrationTests.[test uspMaintainCustomerPermissions throws error when PermissionValue is missing or empty];
I've shown only two tests here, but it should work for all of them.
Now drop the old test class.
EXECUTE tSQLt.DropClass
    @ClassName = N'BackendLayerCustomerAdministrationTests';
The old class should be gone from view.
SELECT *
FROM tSQLt.TestClasses;
Name                                      SchemaId
----------------------------------------- --------
SQLCop                                    7
CustomerTests                             14
Run all your tests again to check that it worked.
EXECUTE tSQLt.RunAll;
+----------------------+
|Test Execution Summary|
+----------------------+
|No|Test Case Name |Result |
+--+----------------------------------------------------------------------------+-------+
|1|[CustomerTests].[test uspMaintainCustomer throws error on missing APIKey] |Success|
|2|[CustomerTests].[test uspMaintainCustomerPermissions validate merged data] |Success|
|3|[SQLCop].[test Decimal Size Problem] |Success|
|4|[SQLCop].[test Procedures Named SP_] |Success|
|5|[SQLCop].[test Procedures using dynamic SQL without sp_executesql] |Success|
|6|[SQLCop].[test Procedures with @@Identity] |Success|
|7|[SQLCop].[test Procedures With SET ROWCOUNT] |Success|
-------------------------------------------------------------------------------
Test Case Summary: 7 test case(s) executed, 7 succeeded, 0 failed, 0 errored.
-------------------------------------------------------------------------------
Success!
Sorry to come into this so late! I'm a developer who's working on SQL Test.
We've just added the ability to rename test classes to the latest version of SQL Test.
http://www.red-gate.com/products/sql-development/sql-test/
It's now as simple as right-clicking on a test class, or pressing F2.
Please bear in mind that this option will not appear for old versions of tSQLt. To upgrade, right-click on the database to uninstall the framework, then use Add database... to re-add it (the right-most button in the window).
Alternatively, you could just call a new procedure in tSQLt called tSQLt.RenameClass, which is what SQL Test calls behind the scenes.
Please let us know if you have any issues with this!
David
What is your workflow like? If you have all your tests for that test class in one script that starts with exec tSQLt.NewTestClass 'BackendLayerCustomerAdministrationTests', then you can just find-and-replace the test class name and you are done.
e.g.
EXEC tSQLt.DropClass 'BackendLayerCustomerAdministrationTests'
GO
EXEC tSQLt.NewTestClass 'CustomerTests'
GO
CREATE PROC [CustomerTests].[test_Insert_AddsACustomer]
AS
etc, etc
This will work because EXEC tSQLt.NewTestClass 'CustomerTests' will drop any objects already in the test class, and they will be recreated as the rest of the script runs.
Simplest is probably:
EXEC tSQLt.RenameClass 'old test class name', 'new test class name';
See the tSQLt docs for RenameClass
It seems Red Gate have added that ability to SQL Test since this question was posted, but the raw SQL code is somewhat leaner and cleaner (whether or not you use the excellent SQL Test).
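For this question's class, that call would be:
EXEC tSQLt.RenameClass 'BackendLayerCustomerAdministrationTests', 'CustomerTests';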

SQLAlchemy, Psycopg2 and Postgresql COPY

It looks like Psycopg has a custom command for executing a COPY:
psycopg2 COPY using cursor.copy_from() freezes with large inputs
Is there a way to access this functionality from within SQLAlchemy?
The accepted answer is correct, but if you want more than just EoghanM's comment to go on, the following worked for me for COPYing a table out to CSV:
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker

eng = create_engine("postgresql://user:pwd@host:5432/db")
ses = sessionmaker(bind=eng)

dbcopy_f = open('/tmp/some_table_copy.csv', 'wb')
copy_sql = 'COPY some_table TO STDOUT WITH CSV HEADER'
fake_conn = eng.raw_connection()
fake_cur = fake_conn.cursor()
fake_cur.copy_expert(copy_sql, dbcopy_f)
The sessionmaker isn't necessary, but if you're in the habit of creating the engine and the session at the same time, you'll need to separate them to use raw_connection (unless there is some way to access the engine through the session object that I don't know of). The SQL string provided to copy_expert is also not the only way to do it; there is a basic copy_to function that you can use with a subset of the parameters that you could put into a normal COPY TO query. Overall performance of the command seems fast to me, copying out a table of ~20000 rows.
http://initd.org/psycopg/docs/cursor.html#cursor.copy_to
http://docs.sqlalchemy.org/en/latest/core/connections.html#sqlalchemy.engine.Engine.raw_connection
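For the basic copy_to mentioned above, the call on the same cursor would look roughly like this (untested; copy_to takes the file, the table name, and optional sep, null, and columns arguments):
with open('/tmp/some_table_copy.tsv', 'w') as f:
    fake_cur.copy_to(f, 'some_table', sep='\t')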
If your engine is configured with a psycopg2 connection string (which is the default, so either "postgresql://..." or "postgresql+psycopg2://..."), you can create a psycopg2 cursor from an SQL Alchemy session using
cursor = session.connection().connection.cursor()
which you can use to execute
cursor.copy_from(...)
The cursor will be active in the same transaction as your session currently is. If a commit or rollback happens, any further use of the cursor will throw a psycopg2.InterfaceError; you would have to create a new one.
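Putting that together, a minimal sketch (the connection string, table, and file names are placeholders):
from sqlalchemy import create_engine
from sqlalchemy.orm import Session

engine = create_engine('postgresql+psycopg2://user:pwd@host:5432/db')  # placeholder DSN
session = Session(bind=engine)

# psycopg2 cursor underneath the session's current connection
cursor = session.connection().connection.cursor()

# bulk-load a tab-separated file into a table (both names are placeholders)
with open('/tmp/some_table.tsv') as f:
    cursor.copy_from(f, 'some_table', sep='\t')

# commit through the session, since the cursor shares its transaction
session.commit()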
You can use:
import io

def to_sql(engine, df, table, if_exists='fail', sep='\t', encoding='utf8'):
    # Create the table from the (empty) frame's schema
    df[:0].to_sql(table, engine, if_exists=if_exists)

    # Prepare the data as an in-memory CSV
    output = io.StringIO()
    df.to_csv(output, sep=sep, header=False, encoding=encoding)
    output.seek(0)

    # Insert the data with COPY
    connection = engine.raw_connection()
    cursor = connection.cursor()
    cursor.copy_from(output, table, sep=sep, null='')
    connection.commit()
    cursor.close()
This way I insert 200,000 rows in 5 seconds instead of 4 minutes.
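Usage would be along these lines (the connection string and table name are placeholders):
import pandas as pd
from sqlalchemy import create_engine

engine = create_engine('postgresql+psycopg2://user:pwd@host:5432/db')  # placeholder
df = pd.DataFrame({'a': range(3), 'b': ['x', 'y', 'z']})
to_sql(engine, df, 'my_table', if_exists='replace')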
It doesn't look like it.
You may have to just use psycopg2 to expose this functionality and forgo the ORM capabilities. I guess I don't really see the benefit of the ORM in such an operation anyway, since it's a straight bulk insert, and dealing with individual objects à la an ORM doesn't really make a whole lot of sense.
If you're starting from SQLAlchemy, you first need to get to the connection engine (also known by the property name bind on some SQLAlchemy objects):
engine = create_engine('postgresql+psycopg2://myuser:password@localhost/mydb')
# or
engine = session.get_bind()
# or any other way you know to get to the engine
From the engine you can isolate a psycopg2 connection:
# get a psycopg2 connection
connection = engine.connect().connection
# get a cursor on that connection
cursor = connection.cursor()
Here are some templates for the COPY statement to use with cursor.copy_expert(), a more complete and flexible option than copy_from() or copy_to(), as indicated here: https://www.psycopg.org/docs/cursor.html#cursor.copy_expert.
# to dump to a file
dump_to = """
COPY mytable
TO STDOUT
WITH (
FORMAT CSV,
DELIMITER ',',
HEADER
);
"""
# to copy from a file:
copy_from = """
COPY mytable
FROM STDIN
WITH (
FORMAT CSV,
DELIMITER ',',
HEADER
);
"""
Check out what the options above mean, and others that may be of interest to your specific situation, at https://www.postgresql.org/docs/current/static/sql-copy.html.
IMPORTANT NOTE: The documentation of cursor.copy_expert() indicates using STDOUT to write out to a file and STDIN to copy from a file. But if you look at the syntax in the PostgreSQL manual, you'll notice that you can also specify the file to write to or read from directly in the COPY statement. Don't do that: a file named in the COPY statement is read or written by the database server process, not by your client, so you're likely just wasting your time if you're not running as root (and who runs Python as root during development?). Just do what's indicated in psycopg2's docs and specify STDIN or STDOUT in your statement with cursor.copy_expert(), and it should be fine.
# running the copy statement
with open('/path/to/your/data/file.csv') as f:
    cursor.copy_expert(copy_from, file=f)

# don't forget to commit the changes.
connection.commit()
You don't need to drop down to psycopg2, use raw_connection, or a cursor.
Just execute the SQL as usual; you can even use bind parameters with text():
engine.execute(
    text("copy some_table from :csv delimiter ',' csv").execution_options(autocommit=True),
    csv='/tmp/a.csv',
)
You can drop the execution_options(autocommit=True) if this PR is accepted.

Specifying a DB table dynamically? Is it possible?

I am writing a BSP and, based on user input, I need to select data from different DB tables. These tables are in different packages. Is it possible to specify the table I want to use based on its path, like this:
data: path1 type string value 'package1/DbTableName',
      path2 type string value 'package2/OtherDbTableName',
      table_to_use type string.

if some condition
  table_to_use = path1.
elseif some condition
  table_to_use = path2.
endif.

select *
  from table_to_use
  ...
endselect
I am new to ABAP & Open SQL and am aware this could be an easy/silly question :) Any help at all would be very much appreciated!
You can define the name of the table to use in a variable and then use the variable in the FROM clause of your query:
data tableName type tabname.

if <some condition>.
  tableName = 'PA0001'.
else.
  tableName = 'PA0002'.
endif.

select * from (tableName) where ...
There are a few limitations to this method: the table cannot contain fields of type RAWSTRING, STRING, or SSTRING.
As for the tables being in different packages, I don't think it matters.
Regards,