db2ilist shows the instance, but db2idrop says instance doesn't exist - db2

/opt/IBM/db2_10_01/bin/db2ilist shows that the instance db2inst1 exists. But when I try to drop it using /opt/IBM/db2_10_01/instance/db2idrop, it gives the error:
The specified instance "db2inst1" does not exist. Specify an existing instance
name.
How can I drop the instance in such a scenario?

This can happen when a Db2 instance removal was performed incorrectly, often because directories were removed manually or mount points were unavailable.
To recover, study this page until you understand it.
Become root and run
db2greg -dump
Study its output carefully and make sure you understand it (read the documentation closely).
You should see a line that identifies db2inst1; verify that each detail matches your expectations and the documentation.
Take a secure backup of the global registry file. This is a vital step.
As root, run db2greg without any arguments and study the usage instructions. Your aim is to run db2greg -delinstrec with additional options that identify the record to delete, via a comma-separated list of field=value tokens.
For example db2greg -delinstrec instancename=db2inst1,instancepath=... etc.
When db2greg -delinstrec completes successfully (it takes a couple of seconds), you can then run db2ilist and you should find that db2inst1 has disappeared.
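A minimal sketch of the whole sequence, assuming the usual Linux/UNIX registry location /var/db2/global.reg (the path varies by platform) and example field values; check db2greg's usage output for the exact field names on your system:

    # run as root
    cp -p /var/db2/global.reg /var/db2/global.reg.bak    # vital: secure backup first
    db2greg -dump                                        # inspect the instance records
    # field values below are examples only - copy them from the -dump output
    db2greg -delinstrec instancename=db2inst1,instancepath=/opt/IBM/db2_10_01
    /opt/IBM/db2_10_01/bin/db2ilist                      # db2inst1 should be gone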

db2 update dbm cfg immediate

I am looking at the following article:
https://www.ibm.com/support/knowledgecenter/en/SSEPGG_10.1.0/com.ibm.db2.luw.admin.cmd.doc/doc/r0001988.html
I would like to ask about the IMMEDIATE and DEFERRED parts. Sorry, I am still confused and do not really understand them.
In the IMMEDIATE part, it explains that IMMEDIATE is the default, but that it requires an instance attachment to be effective. What does "requires an instance attachment to be effective" mean? I thought it would simply take effect after I run the command.
For example:
db2 update dbm cfg using diaglevel 4 immediate
Does this take effect directly on my db2diag log files?
Take care to read the version of the Db2 Knowledge Center that matches your Db2 version. Perhaps you are using a more recent version of Db2, such as v10.5 or v11.1.
The DIAGLEVEL parameter can be changed on the fly, i.e. without needing to bounce the Db2 instance. The new value is effective immediately, and you can see this in the db2diag.log (which will grow quickly in size because of all the extra messages that will appear).
An "instance attachment" means that you run the db2 attach ... command before running db2 update dbm cfg ... The details are here.
However, if you are running as the Db2 instance owner and you are on the Db2 server directly (e.g. via ssh), then the instance attachment is not necessary in this specific case. The instance attachment is necessary when the instance is remote, is not the current instance, or you are not running as the instance owner.
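A minimal sketch of both cases, assuming an instance named db2inst1 (the name is only an example):

    # local case: run as the instance owner on the Db2 server itself
    db2 update dbm cfg using DIAGLEVEL 4 immediate
    db2 get dbm cfg | grep -i diaglevel    # confirm the new value

    # remote or non-current instance: attach first
    db2 attach to db2inst1
    db2 update dbm cfg using DIAGLEVEL 4 immediate
    db2 detach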

Actions on Google - What is the relationship between commands/devices/executions in the Google Home EXEC input and response?

This question concerns the Actions on Google Smart Home documentation Create a Smart Home App, specifically the action.devices.EXECUTE section.
We are somewhat confused regarding the exact relationship between the list of 'Command' objects and their associated lists of Devices and Executions, especially regarding how these are translated to a response.
Based on the documentation, we believe that the intent is for Commands to be processed in order: top to bottom. Per Command, each Execution is processed (again, top to bottom) for each device ID in the Command.
A response, if we understand the description correctly, could include up to 4 Commands per initial Command in the input (one for SUCCESS, PENDING, OFFLINE & ERROR), each with a list of device IDs for which that result is appropriate.
There is no mention of Executions in the response, however. Does this mean that if 1 execution for a device fails (out of multiple) that in the response it is listed under ERROR, despite other executions for the device succeeding?
For example, suppose a command comes in to turn on a light and set its color to blue. Turning it on succeeds, but some arbitrary error prevents the color from being set. What should the response format look like?
Thank you for reading.
A commands array will contain all of the devices that are supposed to be controlled with this command. There is an additional execution array which provides the command and its parameters.
If some devices could not successfully be controlled, there should be an error returned for that device id, as shown in the documentation.
For any particular device, it may be odd to think of a scenario where one command succeeds but another fails. In that case, you will need to pick the reason that makes the most sense, perhaps the error code protocolError or unknownError.
Every command is meant to be processed simultaneously, in parallel. If you cannot make all of the changes the user requested, it may be more consistent if no command is executed at all. So your device could be turned on/off, but if setting the color is broken, the request should fail when both commands are sent at the same time.
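For illustration, a response to the light example above might group the device under ERROR with a single error code. This is a sketch following the documented response shape; the requestId and device id are hypothetical:

    {
      "requestId": "ff36a3cc-ec34-11e6-b1a0-64510650abcf",
      "payload": {
        "commands": [
          {
            "ids": ["light-123"],
            "status": "ERROR",
            "errorCode": "protocolError"
          }
        ]
      }
    }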

Split MS Access DB Needs Compact/Repair as well as Re Link on Front and Backend, Why?

I have an ACCDB that I split a while ago. It contains many forms with subforms (based on tables), over two hundred tables in the BE (almost all are small lookup tables for vehicle objects), and 400+ queries. There is also another ACCDB with a single 6.5M-row table of basic history info that the FE links to. The two backends do not link to each other in any way. The FE is 14MB, the BE is 1.2GB, and the single-table DB is 900MB, all with primary keys and indexes set up appropriately. The DB is 100% normalized. Both BEs grow 5% every month. The DB is currently slated to be migrated to an Oracle 11g environment later this year.
Question:
I found out recently that if I compact and repair the back end or front end, none of the forms containing subforms open; the whole FE just freezes to white. Even if all three are repaired I still have issues. BUT if I compact/repair all three as well as relink the entire front end to the two backends, the forms all of a sudden start working. This behavior only began recently.
Why do I have to relink to make the forms work again?
You should not have to re-link anything here at all after a C+R.
The only thing that comes to mind is the user who is doing the C+R has some restricted rights in the folder or directory where the C+R occurs.
Remember, when the user does the C+R, a COPY of the file is created, and thus the CURRENT user's rights can be inherited WHEN the NEW file is created. So it sounds like a permissions issue exists on the folder, or the user doing the C+R has some special (different) rights (perhaps some inherited rights due to membership in some security group).
Of course one should ensure that you are using UNC path names, and of course the front end needs to be placed on each machine.
Perhaps the user doing the C+R has "different" drive mappings, and thus the links to the back-end databases are wrong due to a different drive letter. So if you are not already, as a general rule I would STRONGLY avoid drive letters and use UNC path names.
If you are using UNC path names, then the likely issue is permissions.
There is also a possibility that the new user doing the C+R is running the front end from a non-trusted location.
Also, the table of 6.5 million rows seems a bit large, and I assume the 1.2GB size is RIGHT AFTER a C+R? (But this issue is for another post.)
This suggests a drive-mapping issue, a permissions issue, or perhaps the user launching the application is messing up references. I would shift-bypass into the application and ensure that the user doing the C+R can compile the application, and from the VBA editor take CAREFUL note that, say, Office 14 references are not being hijacked to Office 15 references.
You're reaching the "hassle-free" viable (as opposed to "documented") limits of Access as a database. Remember, the queries need to be compiled, which means resolving all the table links and verifying existing indexes and other metadata. It's possible that simply overwriting this information by manually using the Linked Table Manager, as you have, is the more effective route.
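If you end up relinking often, a small VBA routine can do what the Linked Table Manager does by hand. A minimal sketch, assuming a DAO reference and that the backend paths themselves have not changed:

    ' Refresh every linked table against its existing connect string.
    Public Sub RelinkAllTables()
        Dim tdf As DAO.TableDef
        For Each tdf In CurrentDb.TableDefs
            If Len(tdf.Connect) > 0 Then    ' only linked tables have a connect string
                tdf.RefreshLink
            End If
        Next tdf
    End Sub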
Here are a few prescribed tips which might help you out:
http://office.microsoft.com/en-gb/access-help/improve-performance-of-an-access-database-HP005187453.aspx
And some more...
http://www.fmsinc.com/MicrosoftAccess/Performance.html#Linked%20Tables
And a related thread from this site:
Proper way to program a Microsoft Access Backend Database in a Multiuser Environment
Issues which may not be helping you:
queries which don't restrict the dataset sufficiently, particularly those returning a dynaset
backend database files sitting too low in the Windows folder structure (the higher the better)
As the 2nd link suggests, the truth is there are so many variables at work that resolving this will require some tinkering, with trial & error playing a major part.
All that, or you can upsize to SQL Server Express :)
http://office.microsoft.com/en-gb/access-help/move-access-data-to-a-sql-server-database-by-using-the-upsizing-wizard-HA010275537.aspx

Swiftstack - Containers not getting removed

Even after deleting containers and objects directly from the file system, Swift still lists the containers when a GET is executed on the account. However, if we try to delete a container with a DELETE command, a 404: Not Found error is returned. Is there something wrong, or is there some kind of cache?
I think the problem came from deleting the containers and/or objects directly from the file system.
Swift's methods for handling write requests for objects and containers have to be very careful to ensure all the distributed index information remains eventually consistent. Direct modification of the file system is not sufficient. It sounds like the container databases got removed before they had a chance to update the account database listings, perhaps manually unlinked before all of the object index information was removed?
Normally, after a delete request, the containers have to hang around for a while as "tombstones" to ensure the account database gets updated correctly.
As a workaround you could recreate them (with a PUT) and then re-issue the DELETE, which should successfully delete the new empty containers and update the account database listing directly.
(Note: the container databases themselves, although empty, will still exist on disk as tombstones until the reclaim_age passes.)
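A sketch of the workaround with curl; the endpoint, account, container name, and token are all placeholders:

    TOKEN="AUTH_tk_example"
    URL="https://swift.example.com/v1/AUTH_test/phantom-container"
    curl -i -X PUT    -H "X-Auth-Token: $TOKEN" "$URL"    # recreate the container
    curl -i -X DELETE -H "X-Auth-Token: $TOKEN" "$URL"    # DELETE should now succeed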

Explain the steps of the DB2-COBOL execution process if both are DB2-COBOL programs

How do I run two subprograms from a main program if both are DB2-COBOL programs?
My main program is named 'Mainpgm1', and it calls my subprograms named 'subpgm1' and 'subpgm2', which are called programs; I prefer static calls only.
Actually, I am now using a package statement instead of a plan, and one member, both in 'db2bind' (the bind program), along with one DBRMLIB that has a DSN name.
The main question is: what changes are needed in 'db2bind' when I bind both DB2-COBOL programs?
Similarly, what changes are needed in 'DB2RUN' (the run program)?
Each program (or subprogram) that contains SQL needs to be pre-processed to create a DBRM. The DBRM is then bound into a PLAN that is accessed by a LOAD module at run time to obtain the correct DB2 access paths for the SQL statements it contains.
You have gone from having all of your SQL in one program to several subprograms. The basic process remains the same: you need a PLAN to run the program.
DBAs often suggest that if you have several subprograms containing SQL, you create PACKAGES for them and then bind the PACKAGES into a PLAN. What was once a one-step process is now two (see the JCL sketch after this list):
Bind each DBRM into a PACKAGE
Bind the PACKAGES into a PLAN
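As a sketch, the two bind steps might look like the JCL below. The subsystem, library, collection, and plan names are all hypothetical; your site's standards will differ:

    //BIND    EXEC PGM=IKJEFT01
    //DBRMLIB DD DSN=MY.PROJECT.DBRMLIB,DISP=SHR
    //SYSTSPRT DD SYSOUT=*
    //SYSTSIN DD *
      DSN SYSTEM(DSN1)
      BIND PACKAGE(MYCOLL) MEMBER(SUBPGM1) ACTION(REPLACE)
      BIND PACKAGE(MYCOLL) MEMBER(SUBPGM2) ACTION(REPLACE)
      BIND PLAN(MYPLAN) PKLIST(MYCOLL.*) ACTION(REPLACE)
      END
    /*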
What is the big deal with PACKAGES?
Suppose you have 50 subprograms containing SQL. If you create a DBRM for each of them and then bind all 50 into a PLAN as a single operation, it is going to take a lot of resources to build the PLAN, because every SQL statement in every program needs to be analyzed and access paths created for it. This isn't so bad when all 50 subprograms are new or have been changed. However, if you have a relatively stable system and want to change one subprogram, you end up rebinding all 50 DBRMs to create the PLAN, even though 49 of the 50 have not changed and will end up using exactly the same access paths. This isn't a very good approach. It is analogous to compiling all 50 subprograms every time you make a change to any one of them.
However, if you create a PACKAGE for each subprogram, the PACKAGE is what takes the real work to build. It contains all the access paths for its associated DBRM. Now if you change just one subprogram, you only have to rebuild its PACKAGE by rebinding a single DBRM into the PACKAGE collection and then rebinding the PLAN. Binding a set of PACKAGES (a collection) into a PLAN is a whole lot less resource intensive than binding all the DBRMs in the system.
Once you have a PLAN containing all of the access paths used in your program, just use it. It doesn't matter if the SQL being executed is from subprogram1 or subprogram2. As long as you have associated the PLAN with the LOAD module that is being run, it should all work out.
Every installation has its own naming conventions and standards for setting up PACKAGES, COLLECTIONS, and PLANS. You should review these with your database administrator before going much further.
Here is some background information concerning program preparation in a DB2 environment:
Developing your Application