Monitoring live DB2 queries - db2

I would like to know if the DB2 database has any kind of monitoring tools.
For example, I use Navicat to manage MySQL databases, and Navicat has a monitor tool that shows me which queries are running at that moment and lets me close the processes. So... does DB2 have any kind of monitoring tool?

Use db2top.
There is a nice blog about using db2top here - http://www.thekguy.com/db2top
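As a quick sketch, db2top is launched from the command line against a specific database (the database name below is a placeholder, and the exact screens available vary by version - the blog above covers the keystrokes):

```shell
# Launch db2top against a database (assumes a configured DB2 instance
# environment; "SAMPLE" is a placeholder database name)
db2top -d SAMPLE

# Once running, single-keystroke commands switch between monitoring screens
# (sessions, dynamic SQL, locks, tablespaces, etc.) - see the blog for details
```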

In your example, you mention that you use Navicat to manage MySQL databases.
Navicat interfaces with MySQL to give you reports on the state of your databases. However, Navicat is not part of the MySQL database management subsystem; rather, it relies on information that MySQL makes available to it in order to generate the reports that give you insight into how well your databases are performing.
There are monitoring products out there that do the same with DB2, i.e. interface with DB2 and give insight into what work is being processed and into overall system health.
Which monitoring tools are available to you depends on where your DB2 subsystem runs: mainframe or Linux/Unix/Windows. Different platforms have different solutions, but common among them is that they report on the health of your DB2 subsystem and the elements contained within it.
So the basic answer to your question is, yes, there are monitoring and reporting tools and applications available to monitor a DB2 subsystem, the work it is doing, and the state of the objects within (databases, tables, indexes and the data within them).
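On Linux/Unix/Windows, even without a separate monitoring product, the plain DB2 command line processor can do the "show running work and kill a process" part of what Navicat does for MySQL (the application handle below is made up):

```shell
# List all applications currently connected to the instance, with details
# such as application handle, status, and the database being worked on
db2 list applications show detail

# Force (disconnect) a specific application by its handle, e.g. handle 1234
db2 "force application (1234)"
```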

Related

Realtime sync between Oracle db(source) and Mongodb(destination)

Is it possible to have a real-time sync between a heavy Oracle database and MongoDB? Has anyone tried this?
I saw a site - Keep MongoDB and Oracle in sync
Here they mention putting triggers on the Oracle tables. My concern is whether this will slow down the applications already running on the Oracle database. Will this replication cause the applications to slow down or degrade the Oracle database's performance?
The right solution would involve Change Data Capture (CDC) from Oracle. This does not require triggers on Oracle and thus won't affect performance. There are several tools you can use, such as Striim and Attunity. Striim supports change data capture from Oracle and writing to MongoDB.
https://striim.com
https://attunity.com
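To illustrate the idea: a CDC tool reads Oracle's redo logs and emits a stream of change events, which are then applied to the destination. The event format and helper below are a hypothetical sketch, not any tool's actual API; a real dict stands in for the MongoDB collection so the logic is self-contained.

```python
# Minimal, illustrative sketch of applying change-data-capture events to a
# MongoDB-like destination. Real CDC tools (Striim, Attunity, etc.) read
# Oracle's redo logs to produce events like these; the event shape and the
# apply_event helper are made up for illustration.

def apply_event(collection, event):
    """Apply one CDC event to a dict standing in for a MongoDB collection,
    keyed by primary key."""
    op, key, row = event["op"], event["key"], event.get("row")
    if op in ("INSERT", "UPDATE"):
        collection[key] = row          # upsert the full row image
    elif op == "DELETE":
        collection.pop(key, None)      # remove the document if present
    return collection

# Usage: replaying a small stream of events keeps the target in sync
events = [
    {"op": "INSERT", "key": 1, "row": {"id": 1, "name": "alice"}},
    {"op": "UPDATE", "key": 1, "row": {"id": 1, "name": "alicia"}},
    {"op": "DELETE", "key": 1},
]
target = {}
for e in events:
    apply_event(target, e)
```

The point is that the source applications never see any of this: the capture happens from the logs, outside the transaction path, which is why it doesn't carry the overhead that triggers would.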

Can we connect external data to k2?

I am new to K2 and have to check how similar it is to MS Access. So I need to know whether we can connect external data, for example from SQL Server, to K2.
Yes, K2 uses SmartObjects to connect to external data sources (like SQL Server).
Absolutely! Connecting with many disparate repositories of data is K2's great strength.
To connect to a SQL Server, you simply create an instance of the SQL service broker with the details of the server and database you want to read from. Then you can create a SmartObject for each table, view, or stored procedure within that SQL Server database that you need to interact with.
The following thread on K2 Community should get you started: http://community.k2.com/t5/K2-blackpearl/How-to-connect-K2-blackpearl-with-MS-SQL-R2/td-p/53993
K2 is not really similar to Access: it is a larger platform that meets enterprise workflow-automation needs, whereas Access lets you build department-level apps with limited flexibility. The comparison doesn't hold up from a feature-set, product-positioning, or pricing point of view.
K2 has three major pillars, tightly integrated with each other:
Workflow Engine (manages execution of the steps defined for the process you are automating)
SmartForms (let you build a web UI for your apps and processes)
SmartObjects - an abstraction layer that offers a set of out-of-the-box (OOB) connectors which allow you to consume or write data from a variety of external LOB systems: SQL Server, Oracle, SharePoint, and many more. Custom brokers can be created to connect to any other LOB system not covered by the OOB broker set.
So in terms of connecting to external data you won't have any problems, and the capabilities are far greater than those you may find in MS Access. Comparing the two is almost like comparing an SMB shared folder with SharePoint Server.
The product is also marketed (and built) to allow "code-less development" - it has a really gentle learning curve and allows you to start building your applications quickly.

Postgresql Multiple Database VS Multiple Schemas

We are in the process of building a cluster for our hosted services at work; the final product will be used to host multiple separate services. We are in the middle of deciding how we want to set up our databases. We are running a PostgreSQL database server which all services in the cluster will use. The debate right now is whether to give each service its own schema in a single database or to give each service its own database.
We just aren't sure which is the better solution for us. None of our services have a common structure and data does not need to be shared. What we are more concerned about is ease of use.
Here's what we care most about; we are really hoping for an objective rather than opinion-based answer.
Backups
Disaster recovery - all services vs individual
Security between services
Performance
For some additional information, the cluster is hosted within AWS with our database being an RDS instance.
This is what the official PostgreSQL docs say:
Databases are physically separated and access control is managed at the connection level. If one PostgreSQL server instance is to house projects or users that should be separate and for the most part unaware of each other, it is therefore recommendable to put them into separate databases. If the projects or users are interrelated and should be able to use each other's resources they should be put in the same database, but possibly into separate schemas. Schemas are a purely logical structure and who can access what is managed by the privilege system.
Source: http://www.postgresql.org/docs/8.0/static/managing-databases.html
Disaster recovery - all services vs individual
You can dump and restore one database at a time. You can dump and restore one schema at a time. You can also dump schemas that match a pattern.
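With standard pg_dump, the difference looks roughly like this (the database, schema, and file names are made up):

```shell
# Per-database: dump one service's database in custom format and restore it
pg_dump -Fc service_a_db > service_a.dump
pg_restore -d service_a_db service_a.dump

# Per-schema: dump only one service's schema out of the shared database
pg_dump -n service_a shared_db > service_a_schema.sql

# Dump all schemas whose names match a pattern
pg_dump -n 'service_*' shared_db > all_services.sql
```

Either way you get per-service backup granularity; the per-schema form just requires a naming convention you can select on.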
Security between services
I presume you mean isolation between databases and isolation between schemas. The isolation between databases is stronger and more "natural" for developers concerned with "ease of use". For example, if you use one database per service, every developer can just use the public schema for all development. This might seem "easier" than adding schemas to the search path, or "easier" than using schema.object when programming.
It depends in part on how you manage privileges for the roles you use for development, and on how you manage privileges in each database or schema. You can change default privileges.
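As a sketch of schema-level isolation in the one-database case (the schema, role, and database names are made up):

```shell
psql -d shared_db <<'SQL'
-- Lock a service's schema down to its own role
REVOKE ALL ON SCHEMA service_a FROM PUBLIC;
GRANT USAGE ON SCHEMA service_a TO role_a;

-- Make tables created in the schema in the future accessible
-- to that role by default
ALTER DEFAULT PRIVILEGES IN SCHEMA service_a
    GRANT SELECT, INSERT, UPDATE, DELETE ON TABLES TO role_a;
SQL
```

With one database per service you get the same effect simply by restricting CONNECT on each database, which is the "stronger and more natural" isolation mentioned above.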
Performance
I don't see a measurable difference. YMMV.

Can we store the data collected by executing a Tcl script into a database (PostgreSQL)? If yes, how?

In my project I need to simulate social network community members and their activities. I plan to represent the members as nodes and need to store the count of their postings, feedback, etc. Can we store the data collected by executing a Tcl script into a database (PostgreSQL)? If so, can anyone explain how?
You could also use SQLite3, a much lighter alternative to a full Postgres installation.
see https://www.sqlite.org/tclsqlite.html
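A minimal sketch of the Tcl interface described at that link (the file, table, and variable names are made up):

```tcl
package require sqlite3

# Open (or create) a database file; this creates a Tcl command named "db"
sqlite3 db community.db

db eval {CREATE TABLE IF NOT EXISTS members (name TEXT, postings INTEGER)}

# Store one simulated member's posting count; $name and $postings are
# bound from the Tcl variables, so no manual quoting is needed
set name "node1"
set postings 42
db eval {INSERT INTO members VALUES ($name, $postings)}

db close
```

For PostgreSQL itself the pattern is similar but goes through a Postgres client extension for Tcl instead of the sqlite3 package.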
Try OpenACS - it's social networking community software written in Tcl with PostgreSQL (or Oracle if you prefer).

OrientDB in Azure

We would like to use OrientDB Graph in an Azure environment. Does anybody have experience using it? We would also like to know whether high availability from OrientDB is required in the Azure cloud. Azure already offers high availability for Azure storage, Azure Drive, and SQL. I understand that they have replication and load balancing built in.
This is super important because we prefer not to get into the business of replications and infrastructure management.
Thanks
You can spin up two or more machines, install OrientDB on them, and then configure them together as a distributed cluster. However, I haven't been able to find any way that is simpler or easier. I am interested in this topic too.
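As a rough sketch of that setup (the paths and script name are from a typical OrientDB distribution; check the docs for your version):

```shell
# On each VM, start OrientDB in distributed mode; the nodes discover each
# other via the Hazelcast configuration shipped in config/hazelcast.xml
cd orientdb/bin
./dserver.sh

# Replication behavior (write quorum, which nodes hold which data, etc.)
# is controlled by config/default-distributed-db-config.json on each node
```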
Azure does have features such as geo-replication, which protects your data against a major data-center incident but doesn't provide any performance benefit and will not make the database highly available.
Although pretty reliable, Microsoft will occasionally reboot servers for updates, so to protect against downtime you can use affinity groups so that, of your two or more servers, one will always be online. This does, however, need to be used in conjunction with database replication and ideally load balancing.
It's also worth noting that OrientDB recommends clusters have an odd number of servers as this can prevent conflicts when synchronising data after a communication issue between the servers.
I am using it on Amazon, and I had to create a Java project to monitor HTTP requests, inserts, and queries. The queries are very fast, but inserting data takes longer.
I recommend this type of graph database model to decrease query times. Also, OrientDB handles empty fields very well compared to other databases.
If you need help with the Java project, you can reply to this post and I'll help you.
I hope it helps. Good luck.