I'd like to scope some cells to particular values if the current user is an SSAS admin.
I'm not sure where I'd even begin with that kind of introspection. Any ideas would be welcome.
Note that I'm using UDM and not Tabular models
As a workaround, you can append «;EffectiveUserName=» to the connection string, then try to connect and check for an exception. Such a connection can only be established with administrative rights.
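For illustration, a hypothetical connection string with this check in place (the server, database, and account names are placeholders):

Data Source=MySsasServer;Initial Catalog=MyCube;EffectiveUserName=DOMAIN\SomeUser

If a connection with EffectiveUserName set can be opened, the current user has administrative rights on the instance; if it throws, they do not.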
See here for how to use the SCOPE function in an IF clause. Inside the clause, the USERNAME() function can be used to return the current user (see here). Hope this helps.
I am new to SSIS. I have created variables for the connection strings (both source and destination). While generating the config file, which property do I need to select? Could you please help me with this?
It's not necessary to create variables for a connection string.
There are a few things you will need to provide before we can give you an exact answer:
The type of database you are connecting to.
What type of authentication you use to connect to it.
When setting up a connection manager for an OLE DB source, you simply need to provide the server name and then the type of authentication to use.
If the connection is successful, you should be able to select the database you wish to connect to. You can also test the connection to make sure it is working.
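For reference, the connection manager's ConnectionString property usually ends up looking something like this (the server, database, and provider values below are only placeholders):

Data Source=MyServer;Initial Catalog=MyDatabase;Provider=SQLNCLI11.1;Integrated Security=SSPI;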
Let me know if you have any other issues.
Thanks
Gav
Basics - I need to return data from columns based on some variables from a different table (I return either the column or null if access is not allowed).
I have already done what I need via a custom function in Postgres, but the problem is that in Hasura a function shares the permissions of the table/view it returns a SETOF of.
So I have to allow access to the table itself, and as a result the permissions in my function are kind of meaningless, because anyone will be able to access the data simply by querying the original table directly.
My current line of thinking is that the only way to do what I need is to create a remote schema and remove access to the original table.
But maybe there is a way to not expose some of the tables as GraphQL queries? If I could do something like that, I'd just hide my table and expose only the function.
The remote schema seems like it would work.
Another option would be the allow-queries feature.
It's possible to limit queries. It's a bit tricky, it seems: you need an exact copy of every query that should be allowed (with the fields in exactly the right order), but if you do that, only your explicitly whitelisted queries will be accepted. More info in the docs.
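As a rough sketch of how that is usually switched on (assuming the environment-variable style of configuration from the Hasura docs; your deployment may configure it differently):

HASURA_GRAPHQL_ENABLE_ALLOWLIST=true

With that enabled, only queries added to the allowed-queries collection are accepted for non-admin roles.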
I'm not familiar enough with postgres permissions to offer any better ideas...
I have a small application written in Go that connects to a PostgreSQL database on another server, utilizing database/sql and lib/pq. When I start the application, it goes through and establishes that all the database tables and indexes exist. As part of this process, it issues a SET search_path TO preferredschema,public command. Then, for the remainder of the database access, I do not have to specify the schema.
From what I've determined from debugging it, when database/sql reconnects (no network is perfect), the application begins failing because the search path isn't set. Is there a way to specify commands that should be executed when it reconnects? I've searched for an event that might be able to be leveraged, but have come up empty so far.
Thanks!
From the fine manual:
Connection String Parameters
[...]
In addition to the parameters listed above, any run-time parameter that can be set at backend start time can be set in the connection string. For more information, see http://www.postgresql.org/docs/current/static/runtime-config.html.
Then if we go over to the PostgreSQL documentation, you'll see various ways of setting connection parameters such as config files, SET commands, command line switches, ...
While the desired behavior isn't exactly spelled out, it is suggested that you can put anything you'd SET right into the connection string:
connStr := "dbname=... user=... search_path=preferredschema,public"
// -----------------------------^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
and since that's all there is for configuring the connection, it should be used for every connection (including reconnects).
The Connection String Parameters section of the pq documentation also tells you how to quote and escape things if whatever preferredschema really is needs it or if you have to grab a value at runtime and add it to the connection string.
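To make that concrete, here is a minimal sketch assuming the lib/pq driver with placeholder host, credentials, database, and schema names; because search_path travels in the connection string, it applies to every connection the pool opens, including the ones created after a reconnect.

package main

import (
	"database/sql"
	"log"

	_ "github.com/lib/pq" // registers the "postgres" driver
)

func main() {
	// Placeholder host, credentials, database, and schema names.
	connStr := "host=dbhost user=appuser password=secret dbname=appdb " +
		"sslmode=disable search_path=preferredschema,public"

	db, err := sql.Open("postgres", connStr)
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// Every connection the pool opens (including after a reconnect) gets the
	// search_path from the connection string, so unqualified table names
	// resolve against preferredschema first.
	var path string
	if err := db.QueryRow("SHOW search_path").Scan(&path); err != nil {
		log.Fatal(err)
	}
	log.Println("search_path:", path)
}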
Whenever I try to apply a filter to an attribute that has ValueSelection=Dropdown, the dropdown is not populated and the error message "The requested list could not be retrieved because the query is not valid or a connection could not be made to the data source" is shown instead.
If I set ValueSelection=List, I get a different error message:
An attempt has been made to use a semantic query extension associated with the data extension 'SQL' that is not registered for this report server.
(Microsoft.ReportingServices.SemanticQueryEngine)
This happens within the BIDS environment and was observed in both SQL 2005 and SQL 2008.
I've already studied articles that discussed similar problems, but none of them applied to my case. The user account in the data source has all the necessary rights, and data can be retrieved without any problem (for example, if I try "Explore data" in the data source view). SQL Profiler shows that no query is sent to SQL Server when there is an attempt to populate the dropdown. So nothing is wrong with the query; it is simply never executed.
Your connection is not working. Test your connection by trying a simple table and query output.
This will enable you to test the connection before trying anything advanced.
I got this problem, and in my case it was caused by a wrong connection string in the data source - instead of just having a SQL Server name like "SOMESQLSERVER_MACHINE", I had for some reason "SOMESQLSERVER_MACHINE.our.corp.domain". It should have resolved to the same server, but then I realized the domain was wrong; after removing it, everything worked like a charm again. That said, it's always a good idea to start with detailed checks on your basic settings.
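For example, a minimal data source connection string of the shape that ended up working (the database name here is just a placeholder):

Data Source=SOMESQLSERVER_MACHINE;Initial Catalog=MyReportDb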
Otherwise this could be a problem with permissions to the folders on Report Manager.
I have a complex reporting application that allows clients to login and view reports for their client data. There are several sections of the application where there are database calls, using various controllers. I need to make sure that client A doesn't get client B's information via header manipulation.
The system authenticates the user and assigns them a clientID and a roleID. If your roleID > 1, that means you work for the company hosting the data and you can see all client info. I want to create a catch-all that basically works like this:
if ($roleID > 1) {
    // ...send query to database
} else {
    if (/* does this query select a record with a clientID other than $auth->clientID? */) {
        // do not execute query
    } else {
        // execute query
    }
}
The problem is, I want this to run for every query that goes to the server... how can I place this code as a "roadblock" between the application and the DB? I already use Zend_Profiler to look at queries, so I know it is somehow possible, but cannot discern this from the Profiler code...
I can always write an authentication function and pass selected queries that way, but this catch-all would be easier to implement across all of the calls and would be future proof. Any help is appreciated.
It's an application design fault.
You should use a 'service architecture' - the single entry point for queries would be a service, with all the checks inside it.
If this is something you want to run on every query, I'd suggest extending Zend_Db_Select and overriding either the query() or assemble() functions to add in your logic. You'll also want to add a way for it to be aware of your $auth object.
Another option is to extend your database adapter so you can intercept the queries directly. IMO, you should try and do this at the application level though.
Depending on your database server, you can put a trace on the DB side.
Here's an example for Oracle:
http://orafaq.com/wiki/SQL_Trace