How to determine which SSAS Cube is processing now? - sql-server-2008-r2

There is a problem where several users can process the same cube simultaneously, and as a result the cube processing fails. So I need a way to check whether a certain cube is being processed at the current moment.

I don't think you can prevent a cube from being processed if someone else is already processing it. What you can do to "help" is run an MDX query to check the last time the cube was processed:
SELECT CUBE_NAME, LAST_DATA_UPDATE FROM $System.MDSCHEMA_CUBES
or check the sysprocesses table on the related SQL Server instance to see whether anything is running:
select spid, ecid, blocked, cmd, loginame, db_name(dbid) Db, nt_username,
       net_library, hostname, physical_io, login_time, last_batch, cpu,
       status, open_tran, program_name
from master.dbo.sysprocesses
where spid > 50
  and loginame <> 'sa'
  and program_name like '%Analysis%'
order by physical_io desc
go

Use this query to list running sessions (execute it against the Analysis Services instance):
select *
from $system.discover_Sessions
where session_Status = 1
And use this command to cancel a running process. Replace the SPID with the SESSION_SPID of the session you want to cancel, as in this example:
<Cancel xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
  <SPID>92436</SPID>
  <CancelAssociated>1</CancelAssociated>
</Cancel>

Probably a better approach than the ones already listed would be to use SQL Server Profiler to watch activity on the Analysis Server. As stated already, the current popular answer has two flaws: its first option only shows the LAST time the cube was processed, and its second option only shows that something is running, not what is running. Besides, what if your cube is processed from a different data source rather than a SQL Server?
Utilizing SQL Server Profiler will tell you not only whether something is processing but also the details of what is processing. You can filter out most of the events. Watch Progress Report Current events if you want real-time information; it's usually too much of a fire hose of data to get real information out of, but you'll at least know that a process is going on. Watch only the Progress Report Begin and End events to get better information, such as what is currently being processed, even down to the partition level. Other events with good information include Command Begin/End and Query Begin/End.
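If you'd rather not leave Profiler running, a similar real-time view is available from the SSAS dynamic management views. This is a sketch, not part of the answers above; the DISCOVER_COMMANDS rowset is standard, but verify the columns on your version:
select SESSION_SPID, COMMAND_START_TIME, COMMAND_ELAPSED_TIME_MS, COMMAND_TEXT
from $system.discover_commands
A Process command that is still running should show up here with its XMLA text in COMMAND_TEXT, so you can see which database and cube it targets.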

You will see a job called "MSDARCH" running in Task Manager if a cube is processing. I'm not sure how you can tell which one, though.

IBM Datastage reports failure code 262148

I realize this is a bad question, but I don't know where else to turn.
Can someone point me to the list of "reports failure" codes for IBM? I've tried searching the IBM documentation and doing a general Google search, but this particular error is unique and I've never seen it before.
I'm trying to find out what code 262148 means.
Background:
I built a datastage job that has:
ORACLE CONNECTOR --> TRANSFORMER --> HIERARCHICAL DATA
The intent is to pull data from an ORACLE table and output the response of the select statement into a JSON file. I'm using the HIERARCHICAL DATA stage to set it up. When tested within the stage, no problems; I see the JSON output.
However, when I run the job, it squawks:
reports failure code 262148
then the job aborts. There are no warnings, no signs, no errors prior to this line.
Until I know what it is, I can't troubleshoot.
If someone can point me to where the list of failure codes is, I can proceed.
Thanks!
can someone point me to where I can find the list of reports failure codes for IBM?
Here you go:
https://www.ibm.com/support/knowledgecenter/en/ssw_ibm_i_73/rzahb/rzahbsrclist.htm
While this list does not include your specific error code, it does categorize many other codes and explains how the code breakdown works. And while the list is not specifically for DataStage, in my experience IBM's conventions are generally consistent across products. In this list, every code that starts with a 2 is a disk failure, so maybe run a disk checker. That's the best I've got as far as error codes go.
Without knowledge of the inner workings of the product, there is not much more you can do beyond checking system health in general (especially disk, network, and permissions in this case). Personally, I prefer to go after internal knowledge whenever external knowledge proves insufficient. I would start with a network capture, as I'm sure there's a socket involved in the connection between the layers. Compare a network capture taken when the select statement is run from the hierarchical stage against one captured when it is run from the job. There may be clues in there, like reset or refused connections.

Can I debug a PostgreSQL query sent from an external source, that I can't edit?

I see how to debug queries stored as functions in the database. But my problem is with an external QGIS plugin that connects to my Postgres 10.4 over the network, runs a complex query and calculations, and stores the results back into PostGIS tables:
FOR r IN c LOOP
    SELECT
        (1 - ST_LineLocatePoint(path.geom, ST_Intersection(r.geom, path.geom))) * ST_Length(path.geom)
    INTO
        station
    (continues ...)
When it errors, it just returns that line number as the failing location, with no clue where it was in the loop over hundreds of features. (And any features it has already processed are not stored to the output tables when it fails.) I don't know nearly enough about the plugin or about SQL to hack the external query, and I suspect that if it were a reasonable task the plugin author would have included more revealing debug messages.
So is there some way I could use pgAdmin4 (or anything) from the server side to watch the query process? Even being able to see if it fails the first time through the loop or later would help immensely. Knowing the loop count at failure would point me to the exact problem feature. Being able to see "station" or "r.geom" would make it even easier.
It's perfectly fine if the process is miserably slow or interferes with other queries; I'm the only user on this server.
This is not actually a way to watch the RiverGIS query in action, but it is the best I have found. It extracts the failing ST_Intersects() call from the RiverGIS code and runs it under your control, where you can display any clues you want.
When you're totally mystified where the RiverGIS problem might be, run this SQL query:
SELECT
    xs."XsecID" AS "XsecID",
    xs."ReachID" AS "ReachID",
    xs."Station" AS "Station",
    xs."RiverCode" AS "RiverCode",
    xs."ReachCode" AS "ReachCode",
    ST_Intersection(xs.geom, riv.geom) AS "Fraction"
FROM
    "<your project name>"."StreamCenterlines" AS riv,
    "<your project name>"."XSCutLines" AS xs
WHERE
    ST_Intersects(xs.geom, riv.geom)
ORDER BY xs."ReachID" ASC, xs."Station" DESC
Obviously replace <your project name> with the QGIS project name.
It also works for the BankLines step if you replace "StreamCenterlines" with "BankLines", and it could probably be adapted to other situations where ST_Intersects() fails without a clue.
You'll get a listing with shorter geometry strings for good cross sections and double-length strings for bad ones. You'll probably need to widen your display column a lot to see this.
Works for me in pgAdmin4, or in QGIS3 -> Database -> DB Manager -> (click the wrench icon). You could select only the bad lines, but I find the background info helpful.
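If you do want only the problem rows, a variant like the following should work. This is an untested sketch that assumes the "double-length strings" are multi-point intersections, which ST_GeometryType can detect:
SELECT
    xs."XsecID" AS "XsecID",
    xs."ReachID" AS "ReachID",
    xs."Station" AS "Station",
    ST_GeometryType(ST_Intersection(xs.geom, riv.geom)) AS "GeomType"
FROM
    "<your project name>"."StreamCenterlines" AS riv,
    "<your project name>"."XSCutLines" AS xs
WHERE
    ST_Intersects(xs.geom, riv.geom)
    -- keep only intersections that are not a single clean point
    AND ST_GeometryType(ST_Intersection(xs.geom, riv.geom)) <> 'ST_Point'
ORDER BY xs."ReachID" ASC, xs."Station" DESC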

NUM_LOG_SPAN usage on a DB2 database (10.1)

Is there any way to see how many transaction logs a process (agent_id) currently spans, or to list the transaction logs it's currently using/spanning? I.e., is it possible to check whether NUM_LOG_SPAN is about to be reached?
We had an issue recently whereby a long-running transaction breached NUM_LOG_SPAN. It was set to 70 when we had 105 logs. We've increased it now, but it may still not be enough. We could set NUM_LOG_SPAN to 0, but that's a last resort. What we'd like is to at least be able to monitor the situation (and not just wait until the limit is hit and causes issues): to run a command to see whether, for example, a process was now spanning, say, 90 of the logs. Then we could decide whether or not to cancel it.
We're after something similar to the following statement where you can see the percentage of transaction log usage:
select log_utilization_percent,dbpartitionnum from sysibmadm.log_utilization
Is there anything similar for monitoring processes to ensure they don't cross the NUM_LOG_SPAN threshold?
NB: This is in an SAP system (NW 7.3)... perhaps there's something in DBACOCKPIT to view this too?
As far as I can tell, you can't calculate this from the monitor functions alone, because none of the monitoring functions expose the start LSN for a unit of work.
You can do this with db2pd, though. Use db2pd -db <dbname> -logs to find the Current LSN, and use db2pd -db <dbname> -transactions to find Firstlsn for the particular unit of work.
With these two numbers, you can use the formula

                    currentLSN - firstLSN
log files spanned = ---------------------
                      logfilsiz * 4096
(You should convert the hex values for Current LSN and Firstlsn returned by db2pd to decimal before applying the formula; LOGFILSIZ is in 4 KB pages, hence the factor of 4096.)
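For example, with hypothetical values: say db2pd reports Current LSN = 0x1F400000 for the database and Firstlsn = 0x01E00000 for the unit of work, and LOGFILSIZ is 4096 pages. Then:

(0x1F400000 - 0x01E00000) = 524288000 - 31457280 = 492830720 bytes
492830720 / (4096 * 4096) = 29.4

so that unit of work spans roughly 30 log files, which you can compare directly against your NUM_LOG_SPAN setting.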

tsql query kill scenario

In our webapp, we have lots of queries running. Most of them read data, but some high-priority update queries may come in. We'd like to cancel the read queries in that case, but rather than just using KILL, I'd like the read query to return a certain dataset or execution result upon receiving the cancel.
My intention is to mimic the behavior of signals in C programs, where a signal handler is invoked upon receiving a kill signal.
Is there any method to define an asynchronous KILL signal handler for SPs?
This is not a fully tested answer, but it is a bit more than just a comment.
One option is a dirty read (WITH (NOLOCK)). This part is tested; I do this all the time. Build a large, scalable app and you will need to resort to this and manage it. A dirty read will not block an update, and that is exactly what you get here.
A lot of people think a dirty read may return corrupt data, but it won't. If you are updating smith to johnson, the select is not going to get smison. The select is going to get smith, and it will be immediately stale. But how is that worse than taking a read lock? With a read lock, the select gets smith and blocks the update; only once the read locks are cleared is the row updated. I would contend that blocking an update also gives you stale data.
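A minimal sketch of what that looks like (the table and column names here are made up for illustration):
-- dirty read: takes no shared locks, so it never blocks a concurrent update;
-- the trade-off is that the row you read may be immediately stale
select FirstName, LastName
from dbo.Customer with (nolock)
where LastName = 'Smith'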
If you are using a data reader, I think you could pass the same CancellationToken to each select and then just cancel that one token. But it may not process the CancellationToken until it reads a row, so it may not cancel a long-running query that has not yet returned any rows. See DbDataReader.ReadAsync(CancellationToken).
Or, if you are not using a reader, look at SqlCommand.Cancel.
As far as getting the cancel to return alternate data: I doubt SQL Server is going to do that.

DB2 lock timeout

We have a WebSphere cluster with four clones. Identical code runs on each of the clones. We have Quartz periodically kick off a job that runs the code.
The code tries to update a row in a table so that only one of the clones will be able to successfully update the table, and then that clone will run the rest of the job. Something like:
update <table> set status = 'RUNNING' where job_name = 'JOB1' and status = 'STOPPED'
We do not start a transaction when we execute the update statement.
What we see sometimes is that all four clones fail to update the table, and all get a lock timeout error (sql code -913).
We've also tried an alternative where we start a transaction, select to see whether the row is marked as running and, if not, perform the update and commit; otherwise we roll back. That had the same problem.
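For reference, that alternative looked roughly like this (job_control is a hypothetical name; the real table is the <table> placeholder above):
-- run with autocommit off so all statements share one transaction
select status from job_control where job_name = 'JOB1';
-- if the select returned 'STOPPED', claim the job:
update job_control set status = 'RUNNING'
where job_name = 'JOB1' and status = 'STOPPED';
commit;
-- otherwise: rollback;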
One solution we have not yet tried is to change the select into a "select for update", although from my googling I have doubts as to whether that will help.
Any suggestions?
This ended up not being a problem (that's what I get for listening to someone without checking it out myself).
I tested this out in our development environment with two clones. One of the clones would see the -913 lock timeout error occasionally while the other clone would successfully update the table. Other than the ugly log message, everything worked as it should.
Usually, however, we would not get the -913 error but rather a warning from one of the clones indicating that there was no row to update. Again, this behavior is fine.
So, as we originally thought (and as Clockwork-Muse also suggests), using UPDATE statements in this manner to enforce a lock works just fine in DB2.
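For anyone implementing the same pattern, the working version reduces to the statement from the question plus a row-count check (again, job_control stands in for the question's <table> placeholder):
-- each clone runs this; at most one succeeds, because the predicate
-- status = 'STOPPED' no longer matches once a clone has claimed the job
update job_control set status = 'RUNNING'
where job_name = 'JOB1' and status = 'STOPPED';
-- check the update count in your driver (e.g. executeUpdate() in JDBC):
--   1 -> this clone won: run the job, then reset status to 'STOPPED'
--   0 (or a -913 timeout) -> another clone got there first: skip this run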