ADO.NET SQL Server commands failing on x86 Windows Server 2008 - deployment

I am deploying a project with a developer-targeted setup built using Inno Setup. I've written some code to perform a few actions that are a bit too custom for stock Inno Setup.
One such action is connecting to SQL Server and running SQL files; thanks to Stack Overflow users I found some code for doing so. The problem is that whenever I try deploying the project on a new Windows Server 2008 machine (x86), I get this cryptic error on every SQL command apart from the first one:
"Not enough storage space is available to complete this operation ProgID: ADODB.Connection"
So: the first command fires and works, and the next ones stop with the error. If I run the code again, some of the commands might work, but at some point the installer hangs. The problem does not exist on the Windows XP machine I am developing on.
I use the code linked above for every SQL command I run (a new ADODB.Connection OLE object, a new command, execute... end) - I don't run the second query for the results of the insert (it's there just for demonstration purposes).
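For reference, each command goes through roughly this pattern (a simplified sketch of the Inno Setup [Code] section; the procedure name and connection-string handling are mine):
procedure RunSqlCommand(const ConnString, Sql: string);
var
  Connection: Variant;
begin
  { a fresh COM object per command, as described above }
  Connection := CreateOleObject('ADODB.Connection');
  Connection.Open(ConnString);
  try
    Connection.Execute(Sql);
  finally
    Connection.Close;
  end;
end;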
Any thoughts on what might be happening here, or how I can work around it?

Related

Stored procedure starting with #

I was running a trace on an application today and saw this for the first time ever, in the text data of the trace:
exec #spDMF1F848FB98D743F69BA4AF02A7C05927
I can't seem to find anything about it online. My guess is that it's some kind of temporary procedure that the application built in the background, but that is a complete guess - I had never heard that this was even possible.
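For context, SQL Server does support session-scoped temporary procedures created with a leading # - a minimal illustration (not the application's actual code):
-- temporary procedure; dropped automatically when the creating session closes
CREATE PROCEDURE #spDemo
AS
BEGIN
    SELECT 1;
END;
GO
EXEC #spDemo;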
Running SQL Server 2005, SQL Server Management Studio v15.0.18384.0
Anyone familiar with anything similar?

How to run a PowerShell script remotely using Chef?

I have a PowerShell script on my Chef server that needs to run on a remote Windows server. How can I run this script from the Chef server on the remote Windows server?
Chef doesn't do anything like this. First, Chef Server never accesses servers remotely; all it does is store data. Second, Chef doesn't really do "run a thing in a place right now". We offer workstation tools like knife ssh and knife winrm as simplistic wrappers, but they aren't made for anything complex. The Chef-y way to do this would be to make a recipe and run your script using the powershell_script resource.
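A minimal sketch of that approach (the resource name and script path are placeholders; the script must already be on the node, e.g. written there with cookbook_file):
powershell_script 'run my script' do
  code '& "C:\scripts\my_script.ps1"'
end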
Does that mean Chef is also running on the Windows server?
If yes, why not use PsExec from the Windows PsTools?
https://learn.microsoft.com/en-us/sysinternals/downloads/psexec
Here is my understanding of what you are trying to achieve. If I'm wrong then please correct me in a comment and I will update my answer.
You have a powershell script that you need to run on a specific server or set of servers.
It would be convenient to have a central management solution for running this script instead of logging into each server and running it manually.
Ergo, you either need to run this script in many places when a condition isn't met (such as a file being missing), or you need to run this script often, or you need this script to run with a certain timing relative to other processes you have going on.
Without knowing precisely what you're trying to achieve with your script, the best solution I know of is to write a cookbook and do one of the following:
If your script is complex, place it in your cookbook/files folder (assuming the script will be identical on all computers it runs on) or in your cookbook/templates folder (if you need to inject information into it at write time). You can then write the .ps1 file to the local computer during a Chef converge with one of the following code snippets. After you write it to disk you will also have to call it with one of the commands in the next bullet.
Monomorphic file:
cookbook_file '<destination>' do
source '<filename.ps1>'
<other options>
end
Options can be found at https://docs.chef.io/resource_cookbook_file.html
Polymorphic file:
template '<destination>' do
source '<template.ps1.erb>'
variables(<hash of variables and values>)
<other options>
end
Options can be found at https://docs.chef.io/resource_template.html
If your script is a simple one-liner you can instead use powershell_script, powershell_out! or execute. powershell_out! has all the same options and features as the shell_out! command, with the added advantage that your converge will pause until it receives an exit status for the command, if that is desirable. The documentation on using it is a bit spottier, though, so spend some time experimenting with it and googling.
https://docs.chef.io/resource_powershell_script.html
https://docs.chef.io/resource_execute.html
Whichever option you end up going with, you will probably want to guard your resource with conditions on when it should not run, such as when a file already exists, a registry key is set, or whatever else your script changes that you can check (see the sketch after the link below). If you truly want the script to execute on every single converge then you can skip this step, but that is a code smell and I urge you to reconsider your plans.
https://docs.chef.io/resource_common.html#guards
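A hedged example of such a guard, here keyed on a marker file (the paths and names are placeholders; not_if takes any Ruby block):
powershell_script 'configure feature' do
  code '& "C:\scripts\configure_feature.ps1"'
  not_if { ::File.exist?('C:\scripts\feature.configured') }
end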
It's important to note that this is not an exhaustive list of ways to run a PowerShell script on your nodes, just a collection of common patterns I've seen.
Hope this helped.

Why do Selenium tests behave differently on different machines?

I couldn't find much information on Google regarding this topic. Below, I have provided three results from the same Selenium tests. Why am I getting different results when running the tests from different places?
INFO:
So our architecture: Bitbucket, Bamboo Stage 1 (Build, Deploy to QA), Bamboo Stage 2 (start Amazon EC2 instance "Test", run tests from Test against recently deployed QA)
Using the Chrome WebDriver.
For all three of the variations I am using the same QA URL that our application is deployed on.
I am running all tests as Parallelizable per fixture
The EC2 instance is running Windows Server 2012 R2 with the Chrome browser installed
I have made sure that the test solution has been properly deployed to the EC2 "test" instance. It is indeed the exact same solution and builds correctly.
First, Local:
Second, from EC2 via an SSM script that invokes the tests:
Note that the PowerShell script calls nunit3-console.exe just as it would be used in my third example from the command line.
Lastly, RDP into the EC2 instance and run the tests from the command line:
This has me perplexed... Any reasons why Selenium runs differently on different machines?
This really should be a comment, but I can't comment yet so...
I don't know enough about the application you are testing to say for sure, but this seems like something I've seen testing the application I'm working on.
I have seen two issues. First, Selenium is checking for the element before it's created. Sometimes it works and sometimes it fails, it just depends on how quickly the page loads when the test runs. There's no rhyme or reason to it. Second, the app I'm testing is pretty dumb. When you touch a field, enter data and move on to the next, it, effectively, posts all editable fields back to the database and refreshes all the fields. So, Selenium enters the value, moves to the next field and pops either a stale element error or can't find element error depending on when in the post/refresh cycle it attempts to interact with the element.
The solution I have found is moderately ugly. I tried the wait-until approach, but because it's the same element name, it's already visible and is grabbed immediately, which returns a stale element. As a result, the only thing I have found is that by using explicit waits between calls I can get it to run correctly and consistently. Below is an example of what I have to do with the app I'm testing. (I am aware that I can condense the code; I am working within my company's style manual.)
// fixed pause to let the page finish its post/refresh cycle before touching the field
Thread.Sleep(2000);
By nBaseLocator = By.XPath("//*[@id='attr_seq_1240']");
IWebElement baseRate = driver.FindElement(nBaseLocator);
baseRate.SendKeys(Keys.Home + xBaseRate + Keys.Tab);
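As an aside, if the page exposes a condition you can wait on, a WebDriverWait is usually preferable to a fixed sleep. A hedged sketch of that variant (it assumes an ExpectedConditions helper, e.g. from the SeleniumExtras.WaitHelpers package, which may not match your project's setup):
// poll for up to 10 seconds until the field is clickable, instead of sleeping
var wait = new WebDriverWait(driver, TimeSpan.FromSeconds(10));
By nBaseLocator = By.XPath("//*[@id='attr_seq_1240']");
IWebElement baseRate = wait.Until(ExpectedConditions.ElementToBeClickable(nBaseLocator));
baseRate.SendKeys(Keys.Home + xBaseRate + Keys.Tab);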
If this doesn't help, please tell us more about the app and how it's functioning so we can help you find a solution.
@Florent B. Thank you!
EDIT: This ended up not working...
The tests still run differently when called remotely with a PowerShell script. But the tests run correctly locally, both on the EC2 instance and on my machine.
So the headless command switch allowed me to replicate my failed tests locally.
Next, I found out that a headless Chrome browser is used during the tests when running via script on an EC2 instance... That is automatic, so the tests were indeed running and the errors were valid.
Finally, I figured out that the screen size was indeed the culprit, as it was stuck at 600x400.
So after many tries, the only usable screen-size option for Windows, C# and ChromeDriver 2.32 is to set your WebDriver options when you initialize your driver:
ChromeOptions chromeOpt = new ChromeOptions();
chromeOpt.AddArgument("--headless");              // run Chrome without a visible window
chromeOpt.AddArgument("--window-size=1920,1080"); // force a desktop-sized viewport
chromeOpt.AddArgument("--disable-gpu");
webDriver = new ChromeDriver(chromeOpt);
FINISH EDIT:
Just to update
Screen size is large enough.
Still attempting to solve the issue. Has anyone else run into this?
AWS SSM command -> PowerShell -> run Selenium tests with Start-Process -> any test that requires an element fails with ElementNotFound or ElementNotVisible exceptions.
Using POM for the tests. The FindsBy attribute in C# is not finding elements.
Tests run locally on the EC2 instance work fine from cmd, PowerShell and PowerShell ISE.
The tests do not work correctly when executed via the AWS SSM command. I cannot find any resources to fix this problem.

SQL Server 2012 - SSAS Deployment Failed: File System Error, Access is Denied

The context is OLAP cube development. After configuring my project through SQL Server Data Tools (SSDT, the new BIDS) I am unable to deploy the project.
Every time the deployment process is started I get an error like the one below:
File system error: The following error occurred while opening the file '\\?\D:\[...]\database\mssql\tmpdb\MDTempStore_1864_9_no8wd.tmp': Access is denied.
(The [...] denotes a part of the path I omitted for brevity.)
I always get the same error, indicating that some .tmp file could not be accessed.
My environment:
OS: Windows Server 2008 R2 Standard, SP1
SQL Server: SQL Server 2012 (v11.0.2100.60), running on localhost
What I tried:
I have the file system access rights for the folder in question (at some point I even tried with admin privileges on the machine; it didn't help)
I tried deactivating the anti-virus in case it was performing on-access scanning (still didn't help)
Attempts to deploy/process individual dimensions cause the same problem
Deploying dimensions or cubes programmatically through SMO (instead of SSDT) runs into the same problem
Deploying DataSource objects as well as DataSourceView objects works fine
Maybe some of you have faced similar issues or have further suggestions/ideas?
Thanks for your help!
So, I finally figured it out.
As expected, it was a permission issue, but despite the error message hinting at missing file system permissions, the cause of the problem was the user I had configured the Data Source with.
The SQL User I specified was given the roles
db_datareader
db_datawriter
db_ddladmin
on the source database, but this doesn't seem to be enough. When I gave it the sysadmin server role, it started working.
This is probably overkill; one could fine-tune the role assignment further, but for now it works that way.
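For reference, the change that made it work amounts to the following (SQL Server 2012 syntax; 'olap_source_user' is a placeholder for the login used by the Data Source):
-- grant the Data Source login the sysadmin server role (broad; prefer narrower rights where possible)
ALTER SERVER ROLE sysadmin ADD MEMBER [olap_source_user];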
Just a suggestion here - have you tried running SSDT as an administrator? That is, right-click on SSDT and click Run As Administrator. Then try to deploy your project. It definitely sounds like a permissions issue.
The exact reason is that the SSAS service user does not have access to the folders specified in the SSAS configuration (i.e. the error states it is the temp folder). I think it is not directly related to SQL Server, because it is just a file access error; the error is thrown before anything reaches SQL Server.
Give the SSAS service user full permission to those folders.
Regards
Onur

Why does a SQL Azure DACPAC upgrade (via a PowerShell script) consistently take 30min to complete

I created a PowerShell script to upgrade a SQL Azure instance with my latest DACPAC (taken from http://msdn.microsoft.com/en-us/library/ee634742.aspx).
What I have experienced when running my PowerShell script is that it consistently takes approximately 30 minutes to execute. The script sits idle for almost half an hour waiting on $dacstore.IncrementalUpgrade($dacName, $dacType, $upgradeProperties) to return, and nothing is printed to the PowerShell console window. Only at the end of the half hour does the incremental update start spitting out console messages informing me that the upgrade is taking place (essentially the script appears to hang for 30 minutes until it finally comes back alive, and it does this consistently every time).
Does it usually take this long for IncrementalUpgrade to complete, and is there supposed to be a 30-minute period of inactivity/waiting?
Note that I am running the PowerShell script from my local machine which is external to the Azure network.
Thanks for any insight you can give on this. I am hoping to reduce this incremental upgrade process to substantially less than 30 minutes so that my continuous integration build doesn't take so long.
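In case it helps anyone measure the same thing, this is roughly how the call can be timed (a sketch; $dacstore, $dacName, $dacType and $upgradeProperties are set up as in the MSDN script linked above):
# time the upgrade call to confirm where the half hour goes
$elapsed = Measure-Command {
    $dacstore.IncrementalUpgrade($dacName, $dacType, $upgradeProperties)
}
Write-Host ("IncrementalUpgrade took {0:N1} minutes" -f $elapsed.TotalMinutes)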
According to Microsoft Support this is a known issue that will be fixed in SQL Server 2012 (code-named Denali). Here are the details from Microsoft Support:
It's a known issue that using SSMS 2008 or PowerShell to update a DAC on SQL Azure is very slow. SQL Server 2008 utilizes the old extraction engine, which runs a query for every column and small object. That approach works well on an on-premise server and meets SQL Server 2008's original design target. However, when managing a SQL Azure database the queries need to be transferred over the internet, and network latency makes the old extraction inefficient, especially when the network is not good.
Our SQL product team is aware of this issue and designed a new extraction engine to fix it. The new engine is integrated in SQL Server 2012 (code name Denali). Unfortunately, some of the engine's behavior may introduce breaking changes in SQL Server 2008. We tried different approaches, but we can't rule out regressions when applying the new engine to SQL Server 2008. Therefore, we have no plans so far to deliver the new extraction engine as a hotfix for SQL Server 2008, as that would impact current on-premise users and operations.
Further details about how I architected the PowerShell script with a continuous integration (CI) process can be found here.