I need to run a query against Active Directory to identify all the unique (distinct) operating systems + service packs in the domain. I can do that pretty easily via the ADsDSOObject provider and a SQL statement. But I need to also tally up how many accounts for each distinct combination. I can do this against a SQL Server or Oracle database very easily using COUNT(field) AS X and GROUP BY field. But with an AD query I can't use GROUP BY (as far as I know), so I'm funneling the recordset into a new disconnected recordset, but how can I run a COUNT() and GROUP BY statement against that? Is there a better way than this?
If you have a SQL Server available, you could insert the results into a temporary table and then do the COUNT/GROUP BY with T-SQL. Not pretty, but that is what I would try.
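A rough sketch of that approach (the temp table and column names #os_list, OperatingSystem, and ServicePack are just placeholders):

-- Stage one row per account pulled from the AD recordset, then aggregate in T-SQL.
CREATE TABLE #os_list (
    OperatingSystem nvarchar(256),
    ServicePack     nvarchar(64)
);

-- ... loop over the AD recordset and INSERT one row per account here ...

SELECT OperatingSystem,
       ServicePack,
       COUNT(*) AS AccountCount
FROM #os_list
GROUP BY OperatingSystem, ServicePack
ORDER BY AccountCount DESC;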
If I had SQL Server available I wouldn't bother with a disconnected recordset. Apparently the GROUP BY option is unavailable to ADO disconnected recordsets.
My SSIS package has an OLE DB source which joins Oracle and SQL Server to get the source data and loads it into a SQL Server OLE DB destination. Earlier we were using a linked server for this purpose, but we cannot use a linked server anymore.
So I am taking the data from SQL Server and want to feed it into the IN clause of the Oracle query, which I am keeping as the SQL command of the OLE DB source.
I tried parsing an object-type variable from SQL Server and putting it into the IN clause of the Oracle query in the OLE DB source, but I get an error that Oracle cannot have more than 1000 literals in an IN list. So basically I think I have to do something like this:
select * from oracle.db where id in (select id from sqlserver.db).
Since I cannot use a linked server, I was thinking I could have a temp table which can be used throughout the package.
I also tried using a Merge Join in SSIS, but my source data set is really large and the merge join is returning fewer rows than expected. I am badly stuck at this point. I have tried a number of things and nothing seems to be working.
Can someone please help? Any help will be greatly appreciated.
A couple of options to try.
Lookup:
My first instinct was a Lookup Task, but that might not be a great solution depending on the size of your data sets, since all of the records from both tables have to be pulled over the wire and stored in memory on the SSIS server. But if you were able to pull off a Merge Join, then a Lookup should also work, though it might be slow.
Set an OLE DB Source to pull the Oracle data, without the WHERE clause.
Set a Lookup to pull the id column from your SQL Server table.
On the General tab of the Lookup, under Specify how to handle rows with no matching entries, select Redirect rows to no-match output.
The output of the Lookup will just be the Oracle rows that found a matching row in your SQL Server query.
Working Table on the Oracle server
If you have the option of creating a table in the Oracle database, you could create a Data Flow Task to pipe the results of your SQL Server query into a working table on the Oracle box. Then, in a subsequent Data Flow, just construct your Oracle query to use that working table as a filter.
Probably follow that up with an Execute SQL Task to truncate that working table.
Although this requires write access to Oracle, it has the advantage of off-loading the heavy lifting of the query to the database machine, and only pulling the rows you care about over the wire.
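For example, the Oracle-side query in the second Data Flow might look roughly like this, assuming the working table is named SSIS_ID_FILTER and the real source table is ORDERS (both names are made up):

-- Only rows whose id appears in the working table come over the wire.
SELECT o.*
FROM   orders o
       JOIN ssis_id_filter f
         ON f.id = o.id;

-- Follow-up Execute SQL Task to clean out the working table.
TRUNCATE TABLE ssis_id_filter;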
We recently migrated a large DB2 database to a new server. It got trimmed a lot in the migration, for instance 10 years of data chopped down to 3, to name a few. But now I find that I need certain data from the old server until after tax season.
How can I run a UNION query in DBeaver that pulls data from two different connections..? What's the proper syntax of the table identifiers in the FROM and JOIN keywords..?
I use DBeaver for my regular SQL work, and I cannot determine how to span a UNION query across two different connections. However, I also use Microsoft Access, and I easily did it there with two Pass-Through queries that are fed to a native Microsoft Access union query.
But how to do it in DBeaver..? I can't understand how to use two connections at the same time.
For instance, my two connections are ASP7 (the new server) and OLD (the old one), and I need something like this...
SELECT *
FROM ASP7.F_CERTOB.LDHIST
UNION
SELECT *
FROM OLD.VIPDTAB.LDHIST
...but I get the following error, to which I say "No kidding! That's what I want!", lol... =-)
SQL Error [56023]: [SQL0512] Statement references objects in multiple databases.
How can this be done..?
This is not a feature of DBeaver. DBeaver can only access the data that the DB gives it, and this is restricted to a single connection at a time (save for import/export operations). This feature is being considered for development, so keep an eye out for this answer to be outdated sometime in 2019.
You can export data from your OLD database and import it into ASP7 using DBeaver (although vendor tools are typically more efficient for this). Then you can do your union as suggested.
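For example, if the OLD rows were imported into the ASP7 system under a schema called VIPDTAB_OLD (name invented for illustration), the union becomes an ordinary single-connection query:

SELECT *
FROM F_CERTOB.LDHIST
UNION
SELECT *
FROM VIPDTAB_OLD.LDHIST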
Many RDBMS offer a way to logically access foreign databases as if they were local, in which case DBeaver would then be able to access the data from the OLD database (as far as DBeaver is concerned in this situation, all the data is coming from a single connection). In Postgres, for example, one can use a foreign data wrapper to access foreign data.
I'm not familiar with DB2, but a quick Google search suggests that you can set up foreign connections within DB2 using nicknames or three-part-names.
If you check this github issue:
https://github.com/dbeaver/dbeaver/issues/3605
The way to solve this is to create a task and execute it in different connections:
https://github.com/dbeaver/dbeaver/issues/3605#issuecomment-590405154
My manager wants to be able to run a script/job to find the total number of databases currently on all instances/servers.
I know to use: select COUNT(*) from sys.databases
But what's the easiest way to get this to run against all instances so that when he runs it, it counts all for him as opposed to running it against each instance separately?
To query data from different databases/servers, you need Linked Servers. You can get to them in SQL Server Management Studio under
Server Objects-->Linked Servers
Once you have that, you can call data from other servers like so:
select name
from sys.databases
union all
select name
from [OtherServerName].[OtherDB].[sys].[databases]
Then build a query to cover all your instances.
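A rough sketch of such a query, assuming one linked server named [OtherServerName] as above (add one UNION ALL branch per instance):

SELECT SUM(cnt) AS TotalDatabases
FROM (
    SELECT COUNT(*) AS cnt FROM sys.databases
    UNION ALL
    SELECT COUNT(*) FROM [OtherServerName].[OtherDB].[sys].[databases]
) AS t;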
I use the iSeries Navigator (iNav) system for ad-hoc data extraction from the DB2 database. The only issue is automation. Is there a way I could schedule the SQL code to run at a specific time? I know there is the Advanced Job Scheduler, but I'm not sure how the SQL can be added to the scheduler. Can anyone help?
IBM added a Run SQL Statements (RUNSQL) CL command at v7.1.
Prior to that, you could store SQL statements in source files and run them with the Run SQL Statements (RUNSQLSTM) command.
Neither of the above allows an SQL SELECT to be run by itself. For data extraction, you'd want INSERT INTO tbl (SELECT <...> FROM <...>)
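For example, a statement like this could go in the source member run by RUNSQLSTM (the library, table, and column names are made up):

INSERT INTO MYLIB.EXTRACT1
   SELECT ORDNUM, CUSNUM, ORDAMT
     FROM MYLIB.ORDERS
    WHERE ORDDAT >= CURRENT DATE - 30 DAYS;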
For reporting SELECTs, your best bet is to create a Query Manager query (*QMQRY object) and form (*QMFORM object) via Start DB2 UDB Query Manager (STRQM); these can then be run with the Start Query Management Query (STRQMQRY) command. Query Manager (QM) is SQL-based, unlike the older Query/400 product. The QM manual is available in IBM's documentation.
One last option is the db2 utility available via QShell.
Don't waste effort creating extracts that are a day late because the job scheduler hasn't updated the file yet; that's a good way to go out of business.
Real businesses need real-time data.
Just make an SQL view on the iSeries that pulls the info you need together.
Query the view externally in real time. Even if you need the last 30 days, the last month, or year to date, these are all simple views to create.
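A minimal sketch of such a view (library, table, and column names invented for illustration):

CREATE VIEW MYLIB.SALES_LAST30 AS
   SELECT CUSNUM,
          SUM(ORDAMT) AS TOTAL_AMT
     FROM MYLIB.ORDERS
    WHERE ORDDAT >= CURRENT DATE - 30 DAYS
    GROUP BY CUSNUM;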
SQL server has an option to create proxy user accounts with the statement
CREATE USER proxyUser WITHOUT LOGIN;
I couldn't find much help on the internet on getting the DB2 (v8) equivalent of this. I'm not sure whether this is possible; if it is, please let me know how.
The scenario where I want to use this is as follows.
I have a table with ~8 million records which gets updated daily. Before the inserts happen, some records are deleted from the table, roughly 2 million of them. Since these deletes need not be logged, we decided on turning off logging during the deletes. Since our credentials do not have ALTER TABLE rights, we decided to put the ALTER and DELETE statements in a script and execute the script using the proxy account, irrespective of which user executes the SP.
I found this article, which closely describes the scenario I described above. The differences are that I need to do this on DB2 and I need to do the deletes without logging them.
http://www.mssqltips.com/sqlservertip/2583/grant-truncate-table-permissions-in-sql-server-without-alter-table/
Thanks
Arjun
It will work basically in the same manner in DB2, with a few exceptions. Firstly, there's no TRUNCATE TABLE statement in DB2 8.2 (and there's no DB2 version 8 on Linux). Secondly, there are no database users in DB2 -- all users are defined externally in the operating system, so there's no CREATE USER statement either.
All statements in a stored procedure, except dynamic SQL, are executed with the authorization of the procedure creator.
So, using the authorized ID, e.g. the database administrator's ID, create the stored procedure that does what you need (ALTER, DELETE, whatever), then grant the EXECUTE privilege on that procedure to whoever needs to run it.
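A rough sketch of what that could look like (all object names are placeholders, and ACTIVATE NOT LOGGED INITIALLY only works if the table was created with the NOT LOGGED INITIALLY attribute):

-- Created under the DBA's ID; the static statements run with the creator's authorization.
-- (Use a non-semicolon statement terminator, e.g. @, when running this in the CLP.)
CREATE PROCEDURE DBADMIN.PURGE_STAGING ()
LANGUAGE SQL
BEGIN
   -- Suspend logging for this unit of work, then remove the old rows.
   ALTER TABLE APPSCHEMA.BIG_TABLE ACTIVATE NOT LOGGED INITIALLY;
   DELETE FROM APPSCHEMA.BIG_TABLE WHERE LOAD_DT < CURRENT DATE - 1 YEAR;
   COMMIT;
END

-- Then let the application ID run it.
GRANT EXECUTE ON PROCEDURE DBADMIN.PURGE_STAGING TO USER APPUSER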