Master Data Services: Bypass member Name and Code input?

We're on MDS 2008 R2. Does anyone know of a way to make MDS automatically generate a Name and Code through the MDS interface when a data entry person clicks the new member button, so they can just go on and enter the pertinent info? Data entry staff shouldn't have to come up with names and codes for records, at least for us.
I can't see how to do this, but maybe I'm missing something.

You can use a Default value action - http://msdn.microsoft.com/en-us/library/ee633870(v=sql.105).aspx#Actions - to default a value for the Code. Names are not mandatory; however, you can use the same Default value action to generate a Name as well. The way Code defaulting works has changed somewhat in MDS 2012, but as long as you are on 2008 R2 you shouldn't have to worry about that.

Related

Problem with connecting an ADODB.Recordset to a form's Recordset in the On Open event of the form

I have an Access project that is "linked" to a SQL database and now works like a charm. The last problem I solved was making sure any Boolean fields were turned into bits with a default of 0, and adding a TIMESTAMP column in SQL, because Access is not so much of a genius with record locking (so I was told).
Now I have tried to connect directly to SQL Server by using an ADODB.Recordset and setting the form's Recordset to that recordset in the On Open event of the form (this recordset runs a stored procedure in SQL). I get the data fine, but I get a locking error (write conflict) back.
This ADODB.Recordset's CursorLocation is set to adUseClient.
Obviously I no longer have the form's RecordSource attached or assigned to the linked SQL table.
Am I missing something? Do I need to assign anything to the form's RecordSource?
The idea is to connect directly through the use of stored procedures instead of linked tables.
Thanks so much for any help.
Adding the timestamp is a VERY good idea. And do not confuse the name timestamp with an actual date/time column; the correct term is "rowversion".
This issue has ZERO to do with locking. The REASON you want this column added is that Access will use it to determine when the record is dirty and, more important, to figure out whether the record has been changed. If you omit this column, Access reverts to a column-by-column comparison. Not only does this cause more network traffic, but it is worse for real (floating point) values: due to rounding you can get the dreaded "this record has been changed by another user" error even though nothing has been changed, and columns with floating point values will cause Access to error out with that changed-record message.
This applies to all tables, and you even see the option included in SSMA (the Access to SQL migration wizard); I believe it is on by default.
So yes, it is VERY highly recommended that you include/add a rowversion column to all tables. This will help Access in a HUGE way.
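A minimal sketch of the server-side change, with a hypothetical table name (the column type is called rowversion in current T-SQL; timestamp is the older synonym):
ALTER TABLE dbo.Invoices ADD VersionStamp rowversion;
SQL Server maintains the value automatically on every insert and update, so Access only has to compare this one column to detect a changed record. Note that a table can have only one rowversion column.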
And as noted, there is a long-standing issue with bit fields that don't have a default setting. You don't want to allow bit fields to be added/created with a null value, so ensure that there is a default value of 0 (you set this on the SQL Server side).
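Again as a sketch with made-up names: clean up any existing NULLs, then disallow them and add the default:
UPDATE dbo.Invoices SET IsPaid = 0 WHERE IsPaid IS NULL;
ALTER TABLE dbo.Invoices ALTER COLUMN IsPaid bit NOT NULL;
ALTER TABLE dbo.Invoices ADD CONSTRAINT DF_Invoices_IsPaid DEFAULT 0 FOR IsPaid;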
Ok, now that we have the above cleared up?
It is not really clear why you want, need, or are adopting a stored procedure and code to load/fill the form. You will not see any better performance than if you bind the form DIRECTLY to the linked table. Access will ONLY pull the records you tell that form to load.
So bind the form directly to the linked table. Then you can launch/open the form to a single record with this:
DoCmd.OpenForm "frmInvoices", , , "InvoiceNum = 123"
You would of course change the "123" above to some variable, or some way to prompt the user for which invoice to work on.
The invoice form will then load that ONE record. Even if the form's bound (linked) table has 2 million rows, only ONE record will come down the network pipe. So, all that extra work of a stored procedure, creating a recordset, and pulling it? You gain ZERO in terms of performance, and you are writing all kinds of code when it simply is not required; you will not achieve performance superior to the above one line of code, which automatically filters and ONLY pulls down the record that meets the given criteria (in this example, the invoice number).
So:
Yes, all tables need a PK.
Yes, all tables should have a rowversion column (SQL Server calls it a timestamp column, but it has nothing to do with the actual time).
Yes, all bit fields need a default of 0; don't allow null values.
And last but not least?
I don't see any gains in performance, or even any advantages, in attempting to code your way through this by adopting stored procedures and introducing recordset code when none is required; worse, it will not gain you any performance anyway.

Crystal Reports cannot map to new database server

I have performed this process numerous times with other reports, but this one report is not working as it should.
Essentially, I am trying to point a report at a new server where the stored procedure is EXACTLY the same as on the previous server. I am using the Verify Database functionality to do this. But when I point at the new server and enter parameters, CR prompts me to re-map the fields. This would be only slightly annoying if the Map Fields window actually displayed the returned columns from the new server.
But, as you can see from the image, even with 'Match type' unchecked, no columns from the stored procedure are displayed to be mapped. I have clicked on every field in the report, but none of them show any columns to map to.
I have also tried changing the Database Location first before trying to verify, but that doesn't make any difference.
Has anybody else seen this? Is there any sort of workaround?
I found my solution. Kinda dumb, really.
My stored procedure calls another stored procedure. I commented that call out and tried to Verify the Database and it worked.
Apparently Crystal Reports doesn't handle procedures that call other procedures very well when trying to map fields.
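For anyone hitting the same wall, a minimal sketch of the shape that caused it (all names here are made up):
-- the report is bound to this procedure, which calls a second one
CREATE PROCEDURE dbo.GetInvoiceReport @StartDate date, @EndDate date
AS
BEGIN
    EXEC dbo.LogReportRun 'InvoiceReport';  -- the nested call that broke field mapping
    SELECT InvoiceNum, CustomerName, Total
    FROM dbo.Invoices
    WHERE InvoiceDate BETWEEN @StartDate AND @EndDate;
END
The workaround was simply to comment out the EXEC line, run Verify Database in Crystal Reports, and then restore the call afterwards.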

SSRS won't allow parameters in embedded dataset based on data source

Whenever I construct a report that uses an embedded dataset and try to use a parameter (such as @StartDate and @EndDate), I receive an error stating that I must declare the scalar variable. However, this error only comes up if I set a data source that uses the "credentials stored securely in the report server" option. If I set the data source to use "Windows integrated security," I do not receive the error.
I am at a complete loss. These reports need to be accessed by a large number of people. We have granted them "Browser" privileges through an Active Directory group in SSRS, including on the data sources.
What is the best way to proceed? Is there an easy fix?
I generally deploy with the option already set by going into the Data Source, choosing the 'Log on to SQL Server' section > 'Use SQL Server Authentication', and setting the user and settings. Using a Windows user as your main user after you deploy can cause issues.
The other question would be: does this work correctly at all times in Business Intelligence Development Studio (BIDS) and just not on the server? It is very interesting that a permission issue alone would cause a scalar error to return. Generally, when users have to get to the report they may still get the error; not storing the credentials merely asks them for credentials. It would help to know the datasets and what they are returning, or are supposed to be returning. A Start and End are typically defined as 'DateTime' in SSRS and used in a predicate like 'WHERE thing BETWEEN @Start AND @End', with the dates chosen from a calendar by the user. If you are binding them to other datasets and there is the possibility of a user selecting multiple values, that could present an issue.
I took a look at the data source that had been set up by our DBA. It was set up as an ODBC connection. I changed it to Microsoft SQL Server. It works now. I do not understand why, and would appreciate it if a more seasoned individual could explain.
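A likely explanation, assuming the dataset query was written with named parameters: SSRS only understands @-style named parameters against a Microsoft SQL Server (SqlClient) data source, while an ODBC data source expects unnamed positional placeholders, so @StartDate gets passed through to the driver literally and SQL Server complains that the scalar variable was never declared. As a sketch (table and column names are made up):
-- works against a Microsoft SQL Server data source
SELECT OrderID, Total FROM dbo.Orders WHERE OrderDate BETWEEN @StartDate AND @EndDate
-- an ODBC data source wants positional markers instead
SELECT OrderID, Total FROM dbo.Orders WHERE OrderDate BETWEEN ? AND ?
Switching the data source type to Microsoft SQL Server lets the named parameters bind as expected.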

On SQL Azure, duplicate database appears in square brackets

I have a SQL Azure database called Palladium.
I just logged into the Windows Azure portal, and Palladium is now showing up twice; the duplicate is named [Palladium], with square brackets.
Any idea what this is?
I am using Code First and I've been fumbling around with different connection strings (serious problems today...), and they all actually do specify [Palladium] for some reason. When I go into the new one and click to generate a connection string, it actually says [[Palladium]]]. That's right, that's three square brackets on the end.
I am using Entity Framework with Code First, but as far as I know the part that actually drops and modifies the database is disabled.
Solution: I still have no idea what that database was. It appeared to be empty, and the portal seemed confused by it (not letting me delete it, select certain things, etc.). However, through SSMS I was able to drop it with no problem, and now everything seems fine.
I can tell you why the DB gets created: it's created by the ASP.NET universal providers.
If you had checked the tables in your [Palladium] database, you would have noticed only the membership tables used by the ASP.NET universal providers (Applications, Memberships, Profiles, Roles, Users, and UsersInRoles).
One of the features of these providers is that they can automatically create the membership tables for you in your database. This is what happened to you: the web.config had the square brackets around the database name, as indicated by Gary's answer, so the automatic creation of the membership tables took that connection string too literally and created a DB with the square brackets.
I'm interested to know whether you left the square brackets in your web.config and just deleted the DB. If it started working under those conditions, then that indicates to me that this must be an Azure bug.
Brian, even though you appear to have solved your particular issue, I want to add a message here since this is one of the first Google search results for people having the [] square bracket problem in their database name.
I have previously encountered this issue where the database name appears to have square brackets [] on Azure. In my case it was not Code First. Let's say I create a new database using the Azure portal and name it Palladium. To get the connection string for it, I go to the dashboard page of the SQL database and click the 'Show connection strings' link that appears on the right. This connection string needs to be placed in your web.config file, replacing the contents of connectionString="".
Before doing that, notice that for some reason the connection string contains the string "Database=[Palladium]". I believe you have already noticed this. This is a problem. I do not know why Azure insists on putting [] around the database name, but you must remove those square brackets from around the word Palladium before using it as your connection string. (Also don't forget to replace the string "{your_password_here}" with the actual password.)
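For illustration, with a made-up server and user name:
As copied from the portal:
Server=tcp:myserver.database.windows.net,1433;Database=[Palladium];User ID=user@myserver;Password={your_password_here};Trusted_Connection=False;Encrypt=True;
After removing the brackets (and filling in the real password):
Server=tcp:myserver.database.windows.net,1433;Database=Palladium;User ID=user@myserver;Password=...;Trusted_Connection=False;Encrypt=True;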
That fixed my problem, and I have been doing this routinely for some months now without problems. I still don't know why Azure puts the [] there in the first place.
I am thinking this solution would apply even to someone using Code First, since you would still need to create the database on Azure and get the connection string; Code First creates the tables, not the DB itself.

Oracle Global Temporary Tables and using stored procedures and functions

We recently changed one of the databases I develop on from Oracle accounts to LDAP login accounts, and all went well for the front end used by the staff who access the system. However, we have a second method of entry restricted to admin staff who load the data onto the database, and a lot of the processing is called using dbms_scheduler.
Most of the database tables have a created_by column which defaults to picking up the user name from a sys_context, but when the data loads are run from dbms_scheduler this information is not available, and hence the created_by columns all get populated with APP_GLOBAL.
I have managed to populate a global temporary table (GTT) with the sys_context value and use this to populate created_by from a stored procedure called by dbms_scheduler, so my next logical step was to put this in a function and call it so it could be used throughout the system, or even be referenced from a before-insert trigger.
The problem is that when I put the code into a function, the data from the GTT is not found. The table is set to preserve rows.
I have trawled many a site for an answer but have found nothing to help me. Can anyone here provide a solution?
The scheduler will be using a different session from the session that created the job; PRESERVE ROWS will not make the GTT data visible in a different session.
I am assuming the created_by columns have a default value like nvl(sys_context(...), 'APP_GLOBAL'). Consider passing the user name as a parameter to the job and setting the context as the first step in the job.
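As a sketch of that approach (all object names here are hypothetical; note that a context can only be set through the trusted package named in its CREATE CONTEXT statement):
CREATE CONTEXT app_ctx USING app_ctx_pkg;

CREATE OR REPLACE PACKAGE app_ctx_pkg AS
  PROCEDURE set_user(p_user IN VARCHAR2);
END app_ctx_pkg;
/
CREATE OR REPLACE PACKAGE BODY app_ctx_pkg AS
  PROCEDURE set_user(p_user IN VARCHAR2) IS
  BEGIN
    -- only this package is allowed to write to the app_ctx namespace
    DBMS_SESSION.SET_CONTEXT('app_ctx', 'user_name', p_user);
  END set_user;
END app_ctx_pkg;
/
-- the load procedure sets the context first, so the created_by
-- defaults evaluated during its inserts see the real user name
CREATE OR REPLACE PROCEDURE run_data_load(p_user IN VARCHAR2) IS
BEGIN
  app_ctx_pkg.set_user(p_user);
  -- ... existing load logic ...
END run_data_load;
/
BEGIN
  DBMS_SCHEDULER.CREATE_JOB(
    job_name            => 'DATA_LOAD_JOB',
    job_type            => 'STORED_PROCEDURE',
    job_action          => 'RUN_DATA_LOAD',
    number_of_arguments => 1,
    enabled             => FALSE);
  -- pass the interactive session's user name into the job
  DBMS_SCHEDULER.SET_JOB_ARGUMENT_VALUE('DATA_LOAD_JOB', 1,
                                        SYS_CONTEXT('USERENV', 'SESSION_USER'));
  DBMS_SCHEDULER.ENABLE('DATA_LOAD_JOB');
END;
/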
A weekend off and a closer look at my code showed a fatal flaw in my syntax whereby the selection of data from the GTT would never happen. A quick tweak and recompile, and all is well.
Jack, thanks for your help.