On SQL Azure, duplicate database appears in square brackets - entity-framework

I have a SQL Azure database called Palladium.
I just logged into the Windows Azure portal, and Palladium is showing up twice now, and the duplicate is named [Palladium] with square brackets.
Any idea what this is?
I am using Code First and I've been fumbling around with different connection strings (serious problems today...), and they all actually do specify [Palladium] for some reason. When I go into the new one and click to generate a connection string, it actually says [[Palladium]]]. That's right, that's three square brackets on the end.
I am using Entity Framework with Code First, but as far as I know the part that actually drops and modifies the database is disabled.
Solution: I still have no idea what that database was. It appeared to be empty, and the portal seemed confused by it (not letting me delete it, select certain things, etc.). However, through SSMS I was able to drop it without a problem, and now everything seems fine.
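As a side note on those three brackets: in T-SQL, a closing bracket inside a bracket-quoted identifier is escaped by doubling it, so a database that is literally named [Palladium] gets quoted as [[Palladium]]] - exactly what the portal showed. A rough sketch of what the SSMS clean-up looks like, assuming the stray database really was named with the brackets:

-- QUOTENAME doubles the inner closing bracket, which explains the three brackets:
SELECT QUOTENAME('[Palladium]');   -- returns [[Palladium]]]

-- Dropping the stray database (run from a connection to master):
DROP DATABASE [[Palladium]]];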

I can tell you why the DB gets created - it's created by the ASP.NET universal providers.
If you had checked the tables in your [Palladium] database, you would have noticed only the membership tables used by the ASP.NET universal providers (Applications, Memberships, Profiles, Roles, Users, and UsersInRoles).
One of the features of these providers is that they can automatically create the membership tables for you in your database. This is what happened to you - the web.config had the square brackets around the database name, as indicated by Gary's answer. So the automatic creation of the membership tables took that connection string too literally and created a DB with the square brackets in its name.
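If you ever want to confirm that, a quick check run while connected to the stray database should show only those provider tables - something along these lines:

-- Run while connected to the bracket-named database:
SELECT name
FROM sys.tables
ORDER BY name;
-- Expect to see only the provider tables listed above.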
I'm interested to know whether you left the square brackets in your web.config and just deleted the DB. If it started working under those conditions, then that indicates to me that this must be an Azure bug.

Brian, even though you appear to have solved your particular issue, I want to add a message here since this is one of the first Google search results for people having the [] square bracket problem in their database name.
I have previously encountered this issue where the database name appears to have square brackets [] on Azure. In my case it was not Code First. Let's say I create a new database using the Azure portal and name it Palladium. To get the connection string for it, I go to the dashboard page of the SQL database and click the 'show connection strings' link that appears on the right. This connection string needs to be placed in your web.config file, replacing the contents of connectionString="".
Before doing that, notice that for some reason the connection string contains the string "Database=[Palladium]". I believe you have already noticed this. This is a problem. I do not know why Azure insists on putting [] around the database name, but you must remove those square brackets from around the word Palladium before using it as your connection string. (Also, don't forget to replace the string "{your_password_here}" with the actual password.)
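In other words, the only part of the generated string that changes is the Database= section; the server name, user, and everything else stay exactly as the portal gave them to you:

Database=[Palladium]   <-- as generated by the portal
Database=Palladium     <-- what should actually go into web.config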
That fixed my problem, and I have been doing this routinely for some months now without issues. I still don't know why Azure puts the [] there in the first place.
I think this solution would apply even to someone using Code First, because you still need to create the database on Azure and get its connection string; Code First creates the tables, not the database itself.

Related

Problem with connecting an ADODB.Recordset to a form's Recordset in the On Open event of the form

I have an Access project that is "linked" to a SQL database and now works like a charm. The last problems I solved were making sure any Boolean fields were turned into bit columns with a default of 0, and adding a TIMESTAMP column in SQL, because Access is not so much of a genius with record locking (so I was told).
Now I have tried to connect directly to SQL Server by using an ADODB.Recordset and setting the form's Recordset to that recordset in the On Open event of the form (the recordset runs a stored procedure in SQL). I get the data fine, but I get a locking error (write conflict) back.
The ADODB.Recordset's CursorLocation is set to adUseClient.
Obviously, I no longer have the form's RecordSource assigned to the linked SQL table.
Am I missing something? Do I need to assign anything to the form's RecordSource?
The idea is to connect directly through stored procedures instead of linked tables.
Thanks so much for any help.
Adding the timestamp is a VERY good idea. And do not confuse the name timestamp with an actual date/time column - the correct term is "rowversion".
This issue has ZERO to do with locking. The REASON you want this column added is that Access will use it to determine when a record is dirty and, more important, to figure out that the record has been changed. If you omit this column, Access reverts to a column-by-column testing approach. Not only does this cause more network traffic, but worse, with real (floating point) values, rounding can trigger the dreaded "this record has been changed by another user" error even though the record has not actually been changed.
So do this for all tables - you even see this option included in SSMA (the Access to SQL migration wizard), and I believe it is on by default.
So yes, it is VERY highly recommended that you add a rowversion column to all tables - this will help Access in a HUGE way.
And as noted, there is a long-standing issue with bit fields that don't have a default setting. You don't want bit fields to be created with a null value, so ensure there is a default value of 0 (you set this on the SQL Server side).
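If it helps, this is roughly what both changes look like on the SQL Server side (the table and column names here are just placeholders):

-- Add a rowversion column (older scripts still call the data type timestamp):
ALTER TABLE dbo.tblInvoices ADD RowVer rowversion;

-- Give an existing bit column a default of 0, and clean up any NULLs already in it:
ALTER TABLE dbo.tblInvoices ADD CONSTRAINT DF_tblInvoices_IsPaid DEFAULT 0 FOR IsPaid;
UPDATE dbo.tblInvoices SET IsPaid = 0 WHERE IsPaid IS NULL;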
Ok, now that we have the above cleared up?
It's not really clear why you want or need to adopt a stored procedure and code to load/fill the form. You will not see any better performance than if you bind the form DIRECTLY to the linked table. Access will ONLY pull the records you tell that form to load.
So bind the form directly to the linked table. Then you can launch/open the form to, say, one record with this:
DoCmd.OpenForm "frmInvoices", , , "InvoiceNum = 123"
Now, you would of course change the above "123" to some variable, or some way of prompting the user for which invoice to work on.
The invoice form will then load just that ONE record. So even if the form's bound (linked) table has 2 million rows, only ONE record will come down the network pipe. So all that extra work of a stored procedure, creating a recordset, and pulling it? You will gain ZERO in terms of performance, but you are writing all kinds of code when it simply isn't required, and you will not achieve any performance superior to the above one line of code, which will automatically filter and ONLY pull down the record that meets the given criteria (in this example, the invoice number).
So:
Yes, all tables need a PK
Yes, all tables should have a rowversion (but it's called a timestamp column - nothing to do with the actual time).
Yes, all bit fields need a default of 0 - don't allow null values.
And last but not least?
I don't see any gains in performance, or even any advantages, in attempting to code your way through this by adopting stored procedures and introducing recordset code when none is required - and worse, it will not gain you any performance anyway.

Crystal Reports cannot map to new database server

I have performed this process numerous times with other Reports but this one Report is not working as it should.
Essentially, I am trying to point a report at a new server where the stored procedure is EXACTLY the same as on the previous server. I am using the Verify Database functionality to do this. But when I point at the new server and enter parameters, CR prompts me to re-map the fields. This would be only slightly annoying if the Map Fields window actually displayed the returned columns from the new server.
But, as you can see from the image, even with the 'Match type' unchecked, no columns from the stored procedure display to be mapped. I have clicked on every field in the report but none of them show any columns to map to.
I have also tried changing the Database Location first before trying to verify, but that doesn't make any difference.
Has anybody else seen this? Is there any sort of workaround?
I found my solution. Kinda dumb, really.
My stored procedure calls another stored procedure. I commented that call out and tried to Verify the Database and it worked.
Apparently Crystal Reports doesn't handle procedures that call other procedures very well when trying to map fields.
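For anyone hitting the same wall, here is roughly the shape of the situation (all names here are made up for illustration): the procedure the report points at simply EXECs a second procedure, and commenting out that inner call was what allowed Verify Database to resolve the columns (presumably it can be restored once the mapping is done).

-- Hypothetical example of the nested-procedure pattern Crystal struggled with:
CREATE PROCEDURE dbo.rpt_GetOrders
    @StartDate date,
    @EndDate   date
AS
BEGIN
    SET NOCOUNT ON;

    -- Nested call that the report does not need for its field list.
    -- Commenting this out let Verify Database map the fields.
    -- EXEC dbo.log_ReportRun @ReportName = N'rpt_GetOrders';

    SELECT o.OrderID, o.OrderDate, o.CustomerID, o.TotalDue
    FROM dbo.Orders AS o
    WHERE o.OrderDate BETWEEN @StartDate AND @EndDate;
END;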

DBeaver will not display certain schemas correctly in the Database Navigator

I'm using DBeaver 5.2.5.201811181655 with IBM DB2/400 v7r3.
I'm trying to see a schema called WRKCERTO, but Database Navigator will not show it. The schema is there and I have rights to it, and I'm able to run SQL scripts with its objects, such as SELECT * FROM WRKCERTO.DAILYT and it works.
To make matters stranger, when WRKCERTO is the only schema in the filters, the contents of a schema which I cannot identify are shown under the connection as if the connection is their parent. It doesn't show any schema as a node in the tree between the connection & Tables, Views, etc. The tables are familiar, but I cannot determine their exact schema, and as such also cannot query any of them because DBeaver doesn't know what schema to use.
The behavior of the Projects window is the same.
If I connect with SquirrelSQL 3.8.1 everything looks ok. I can see WRKCERTO along with all my other schemas as if nothing is different.
The screenshot below shows the issue. The schema I use most is F_CERTOB, which is visible under the connection ASP7, which currently has two schema filters: F_CERTOB and WRKCERTO. But as shown, WRKCERTO...isn't.
The connection TEST is an exact copy of ASP7, but its only filter is WRKCERTO. And as mentioned above, the items under the connection name cannot be identified.
I've gone through the DBeaver settings, but I cannot find any way to change this behavior. AND...this is the first time I've tried to use WRKCERTO. I tried to access it for the first time only a couple days ago, so it seems unlikely there are bad bits of information about it floating around in my system, or in DBeaver.
What information can I provide to help diagnose this issue...?
Please check the URL below. A similar issue is mentioned there with a possible solution.
You may also want to try it and let me know whether it works.
https://dbeaver.io/forum/viewtopic.php?f=2&t=911

Split MS Access DB Needs Compact/Repair as well as Re Link on Front and Backend, Why?

I have an ACCDB that I split a while ago. It contains many forms with subforms (based on tables), over two hundred tables in the BE (almost all are small lookup tables for vehicle objects), and 400+ queries. There also happens to exist another ACCDB with a single table of 6.5M rows that the FE links to, with basic history info. The two back ends do not link to each other in any way. The FE is 14MB, the BE is 1.2GB, and the single-table DB is 900MB, all with primary keys and indexes set up appropriately. The DB is 100% normalized. Both BEs grow 5% every month. The DB is currently slated to be migrated to an Oracle 11g environment later this year.
Question:
I found out recently that if I compact and repair the back end or the front end, none of the forms containing subforms open; the whole FE just freezes to white. Even if all 3 files are repaired, I still have issues. BUT if I compact/repair all 3 and also relink the entire front end to the two back ends, the forms all of a sudden start working. It was only recently that this behavior began.
Why do I have to relink to make the forms work again?
You should not have to re-link anything here at all after a C+R.
The only thing that comes to mind is that the user doing the C+R has some restricted rights in the folder or directory where the C+R occurs.
Remember, when the user does the C+R, a COPY of the file is created - and thus possible inheriting of the CURRENT user's rights can occur WHEN the NEW file is created. So it sounds like some permissions issue exists on the folder, or the user doing the C+R has some special (different) rights (perhaps some inherited rights due to membership in some security group).
Of course one should ensure that you are using UNC path names, and of course the front end needs to be placed on each machine.
Perhaps, again, the user doing the C+R has "different" drive mappings, and thus the links to the back end databases are wrong due to a different drive letter. So as a general rule, if you haven't already, I would STRONGLY avoid drive letters and use UNC path names.
If you are using UNC path names, then the likely issue is permissions.
There is also a possibility that the user doing the C+R is running the front end from a non-trusted location.
Also, the table of 6.5 million rows seems a bit large, and I assume the 1.2 gig size is RIGHT AFTER a C+R? (But that issue is for another post.)
This suggests a drive mapping issue, a permissions issue, or perhaps the user launching the application is messing up references. I would Shift-bypass into the application and ensure that the user doing the C+R can compile the application, and from the VBA editor take CAREFUL note that, say, Office 14 references are not being hijacked to Office 15 references.
You're reaching the "hassle-free" viable (as opposed to "documented") limits of Access as a database. Remember, the queries need to be compiled, which means resolving all the table links and verifying existing indexes and other metadata. It's possible that simply overwriting this information by manually using the Linked Table Manager, as you have been, may be more efficient.
Here's a few prescribed tips which might help you out:
http://office.microsoft.com/en-gb/access-help/improve-performance-of-an-access-database-HP005187453.aspx
And some more...
http://www.fmsinc.com/MicrosoftAccess/Performance.html#Linked%20Tables
And a related thread from this site:
Proper way to program a Microsoft Access Backend Database in a Multiuser Environment
Issues which may not be helping you:
queries which don't restrict the dataset sufficiently, particularly those running as a dynaset (see the sketch after this list)
back-end database files sitting too low in the Windows folder structure (the higher the better)
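To make the first point concrete, here is a hypothetical sketch against the kind of 6.5M-row history table mentioned above (the table and column names are invented for illustration):

-- Unrestricted: asks the back end for the entire history table
SELECT * FROM tblHistory;

-- Restricted: asks only for the rows the form actually needs
SELECT * FROM tblHistory WHERE VehicleID = 123;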
As the 2nd link suggests, the truth is there are so many variables at work that resolving this will require some tinkering, with trial & error playing a major part.
All that, or you can upsize to SQL Server Express :)
http://office.microsoft.com/en-gb/access-help/move-access-data-to-a-sql-server-database-by-using-the-upsizing-wizard-HA010275537.aspx

SSMS 2008 adding ALTER TABLE WITH CHECK ADD CONSTRAINT to my Procs

I have searched Google, BOL, and several forums and can't find the answer:
I have a very small database application for which I write some queries and SPs to extract data. A few days ago I opened an existing SP and found that something had added code similar to what is shown below, sometimes with multiple lines referring to every table in the database. When I set up a new simple SP like "Select * from TinyTable" and re-open it, the same code has been inserted.
The last thing I remember doing was reviewing the settings for results-to-grid in SSMS 2008 R2. I'm afraid I may have accidentally changed a setting, but I've spent hours reviewing them and can't identify what it might be.
I have considered reinstalling SSMS to get back to the defaults, but I have a linked server set up to solve a collation conflict and don't want to cause problems with that. If anyone can point me in the right direction I'd appreciate it. I may be searching using the wrong terminology, but I can find nothing. As I say, I don't know for sure that a change to the SSMS Tools > Options settings is the problem, but I suspect it could be something I have done.
Here's a sample of what gets automatically inserted at the bottom of every one of my procs:
GO
ALTER TABLE [dbo].[tblLot] WITH CHECK ADD CONSTRAINT [FK_tblLot_tblLocation] FOREIGN KEY([LocID])
REFERENCES [dbo].[tblLocation] ([LocID])
You likely have Tools > Options > SQL Server Object Explorer > Scripting > Object Scripting Options > Generate script for dependent objects set to True. Try changing the value to false.