I have a standard vanilla database in a folder location, e.g. MyDatabase.mdf, MyDatabases.ldf. My PowerShell script is copying these files to the data folder of SQL Server, and renaming in the process, e.g. MyProject.mdf, MyProject.ldf.
I then attach the database; however, the logical names of the original vanilla .mdf and .ldf remain. I am unable to figure out how to change these with PowerShell. I can do it with a restore, but I am wondering how to do it with an attach.
$mdfFileName = "DataFolder\MyProject.mdf"
$ldfFileName = "DataFolder\MyProject.ldf"

# $server is an SMO Server object for the target instance, e.g.:
# $server = New-Object Microsoft.SqlServer.Management.Smo.Server "."
$sc = New-Object System.Collections.Specialized.StringCollection
[void]$sc.Add($mdfFileName)
[void]$sc.Add($ldfFileName)
$server.AttachDatabase("MyProject", $sc)
As a test, I have tried
$db.LogFiles[0].Name
and this returns the logical name; however, it is only accessible as a getter.
The sample code is missing a lot of functionality. It seems you are using SMO to work with the database. Why not use T-SQL instead? It can be executed with Invoke-Sqlcmd, or by using the System.Data.SqlClient classes from .NET.
CREATE DATABASE [MyProject] ON
(FILENAME = 'some\path\MyProject.mdf'), (FILENAME = 'some\path\MyProject.ldf')
FOR ATTACH;
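Note that attaching alone does not change the logical file names, but you can rename them with plain T-SQL afterwards. A minimal sketch using Invoke-Sqlcmd, assuming a local default instance and that the vanilla logical names are MyDatabase and MyDatabase_log (verify yours with sys.database_files):

Invoke-Sqlcmd -ServerInstance "." -Query @"
ALTER DATABASE [MyProject] MODIFY FILE (NAME = 'MyDatabase', NEWNAME = 'MyProject');
ALTER DATABASE [MyProject] MODIFY FILE (NAME = 'MyDatabase_log', NEWNAME = 'MyProject_log');
"@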
You can call the Rename method on the logical file, followed by the Alter method. You'll need to refresh your SMO object with the Refresh method afterwards to see the changes.
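A minimal sketch of that approach, reusing the $server object from the question (the new logical names here are assumptions):

$db = $server.Databases["MyProject"]
$db.FileGroups["PRIMARY"].Files[0].Rename("MyProject")   # data file logical name
$db.LogFiles[0].Rename("MyProject_log")                  # log file logical name
$db.Alter()
$db.Refresh()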
Short Version
How does the shell get the properties of a file?
Long Version
The Windows Shell exposes a rich system of properties about items (e.g. files and folders) in the shell namespace.
For example:
System.Title: A Quick Guide for SQL Server Native Client OLE DB to ODBC Conversion
System.Author: George Yan (KW)
System.Document.LastAuthor: rohanl
System.Comment: To learn more about this speaker, find other TEDTalks, and subscribe to this Podcast series, visit www.TED.com Feedback: tedtalks#ted.com
System.ItemParticipants: George Yan (KW)
System.Company: Contoso
System.Language: English (United States)
System.Document.DateCreated: 6/10/2014 5∶16∶30 ᴘᴍ
System.Image.HorizontalSize: 1845 pixels
System.Image.VerticalSize: 4695 pixels
System.Image.HorizontalResolution: 71 dpi
System.Image.VerticalResolution: 71 dpi
In order for the shell to read these properties, it obviously has to use a lot of sources:
Windows Media Foundation IMFMetadata works great for images and movies
Windows Imaging Component (WIC) probably has a lot of APIs for reading metadata
I'm not sure if IFilter can retrieve Title, Author, Subject, Comments etc from Office documents
Either way, it has to read the file's contents stream and do something with them in order to produce all these fancy shell properties. In other words:
IStream \
         +--> [magic] --> IPropertyStore
   .ext /
Can I use it with my own stream?
I have items that are not in the shell namespace; they live in a data store. I do expose them to the shell through IDataObject as CF_FILEDESCRIPTOR with an IStream when it's time to perform copy-paste or drag-drop. But outside of that, they are just streamable blobs in a data store.
I'd like to be able to leverage all the existing work done by the very talented and hard-working shell team to read metadata from a "file" which, in the end, only exists as an IStream.
Is there perhaps a binding context option that lets me get a property store based on an IDataObject rather than an IShellItem2?
So rather than:
IPropertyStore ps = shellItem2.GetPropertyStore();
is there a:
IPropertyStore ps = GetShellPropertiesFromFileStream(stream);
?
How does the shell get all the properties of a file?
Bonus Chatter - IPropertyStoreFactory
This interface is typically obtained through IShellFolder::BindToObject or IShellItem::BindToHandler. It is useful for data source implementers who want to avoid the additional overhead of creating a property store through IShellItem2::GetPropertyStore. However, IShellItem2::GetPropertyStore is the recommended method to obtain a property store unless you are implementing a data source through a Shell folder extension.
Tried
IPropertyStore ps = CoCreateInstance(CLSID_PropertyStore);
IInitializeWithStream iws = ps.QueryInterface(IID_IInitializeWithStream);
But CLSID_PropertyStore does not support IInitializeWithStream.
Bonus Reading
MSDN: Initializing Property Handlers
Property handlers are a crucial part of the property system. They are invoked in-process by the indexer to read and index property values, and are also invoked by Windows Explorer in-process to read and write property values directly in the files.
MSDN: Registering and Distributing Property Handlers (spelunking the registry for fun and reading contracts from the other side)
(I have some experience with property store handlers.) Here is how I see a solution:
Get the property store handler CLSID for your file extension. You should check two registry keys:
HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\PropertySystem\PropertyHandlers\.yourext
HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\PropertySystem\SystemPropertyHandlers
Create the two handler objects with CoCreateInstance.
If you have two objects, you can combine them into a single object with PSCreateMultiplexPropertyStore.
Query for IInitializeWithStream (you can also try to query for IPersistStream).
If the property store object supports IInitializeWithStream/IPersistStream, you are lucky: just initialize your object and query the properties you need. If it does not, you still have the (dirty) fallback of creating a temporary file and then using IPersistFile.
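As a quick way to see the first step in action, you can read those keys from PowerShell (.docx is only an example extension; the CLSID you get back depends on the handlers installed on the machine):

$ext = '.docx'
$base = 'HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\PropertySystem\PropertyHandlers'
(Get-ItemProperty -Path "$base\$ext").'(default)'   # CLSID of the per-extension property handler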
I can't seem to find a good enough solution to my problem. Is there a good way of grouping variables in some kind of file so that multiple scripts can access them?
I've been doing some work with Desired State Configuration, but the work that needs to be done cannot be efficiently implemented that way. The point is to install the Azure Build Agent on a server and then configure it. Some values, such as a Personal Access Token, really should not be copy-pasted inside a script file. I want to be able to change such a value easily, without having to go inside every script that uses it. In DSC you can just make a .psd1 file and access the variables, for example AllNodes.NodeName. The config file invocation and parameters look like this:
.\config.cmd --unattended --url $myUrl --auth PAT --token $myToken --pool default --agent "$env:COMPUTERNAME" --acceptTeeEula --work $workDir
I want to make the variable $myToken accessible from an outside file, both for better security and to have a centralized place from which I can change values. Access to $myUrl is also important, because it changes with each new update to the Build Agent.
Thank you in advance for your effort. If anything is not clear, please let me know.
I have two very different answers to your question, although either one of them may miss your point.
First, it's possible to define variables inside your profile script. Most people only use the profile script to define a library of functions or classes, but a variable can be made global the same way.
I have a variable named $myps that identifies the folder where I keep my PS scripts (in subfolders).
When I start a session, I generally switch to this directory (oops, I called it a folder above).
The second way involves storing the values of variables in a CSV file, with the names stored in the CSV header. I then have a little cmdlet that steps through the CSV file, record by record, generating a different expansion of a template each time through.
These values are not quite global, but they can be used in more than one context.
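A minimal sketch of that CSV approach (settings.csv and its column names are made up for illustration):

# settings.csv:
#   Url,Pool
#   https://dev.azure.com/contoso,default
Import-Csv -Path 'C:\Scripts\settings.csv' | ForEach-Object {
    .\config.cmd --unattended --url $_.Url --pool $_.Pool
}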
Thank you for the help. Those are very useful solutions in some cases, but I dug a bit deeper and found a solution that suits my purpose. Basically, if you have a psd1 file suited for DSC use, you can also access its content from a normal ps1 file. For example:
NonNodeData = @{
    Pat = 'somePAT'
}
Let's say this section of a psd1 file called ENV.psd1 is on your local machine in C:\Configuration.
To access the content of this file, you make a variable inside your script and use Import-PowerShellDataFile, like so:
$configData = Import-PowerShellDataFile -Path "C:\Configuration\ENV.psd1"
And now you are free to use anything stored inside ENV.psd1. For example, if I want to extract my PAT from the config file and store it in a variable in the script:
$myPat = $configData.NonNodeData.Pat
Thanks to that, I can just pass $myPat as a parameter when invoking config.cmd, like so:
.\config.cmd --unattended --auth PAT --token $myPat
This keeps my code cleaner and easier to update in the future.
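For completeness, the same pattern works for the URL, assuming you add a Url key next to Pat in the NonNodeData section:

$myUrl = $configData.NonNodeData.Url
.\config.cmd --unattended --url $myUrl --auth PAT --token $myPat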
I've tried three different ways to detect whether a FileReference's original file still exists (i.e. whether the file has been deleted outside TYPO3, via SFTP or similar):
if ($fileReference instanceof \TYPO3\CMS\Extbase\Domain\Model\FileReference) {
    // 1) ask the storage for the file, then check it
    $isMissing = $fileReference->getOriginalResource()->getStorage()->getFile($fileReference->getOriginalResource()->getIdentifier())->isMissing();
    // 2) ask the original file directly
    $isMissing = $fileReference->getOriginalResource()->getOriginalFile()->isMissing();
    // 3) ask the file reference itself
    $isMissing = $fileReference->getOriginalResource()->isMissing();
}
Only the first one gives me the right isMissing() value.
The property isMissing is a database value, which is set when the storage detects a missing file. On getFile(), the storage checks whether the file is missing and sets isMissing on the file object. If you don't persist this to the database, the setting is lost again on the next call.
You can also call $isMissing = $fileReference->getOriginalResource()->getStorage()->hasFile($fileReference->getOriginalResource()->getIdentifier());
You can run the file indexer scheduler task (TYPO3\CMS\Scheduler\Task\FileStorageIndexingTask) if you want to check frequently for deleted files. This should be required if files can be changed externally (e.g. via FTP).
My simple experiment reads from an Azure Storage Table, selects a few columns, and writes to another Azure Storage Table. This experiment runs fine in its workspace (let's call it WorkSpace1).
Now I need to move this experiment as-is to another workspace (call it WorkSpace2) using PowerShell, and I need to be able to run the experiment there.
I am currently using this Library - https://github.com/hning86/azuremlps
Problem:
When I copy the experiment using Copy-AmlExperiment from WorkSpace1 to WorkSpace2, the experiment and all its properties get copied except the Azure Table account key.
Now, this experiment runs fine if I manually enter the account key for the Import/Export modules on studio.azureml.net.
But I am unable to do this via PowerShell. If I export (Export-AmlExperimentGraph) the copied experiment from WorkSpace2 as JSON, insert the account key into the JSON file, and import it (Import-AmlExperiment) back into WorkSpace2, the experiment fails to run.
In PowerShell I get an "Internal Server Error: 500".
While running it on studio.azureml.net, I get the notification "Your experiment cannot be run because it has been updated in another session. Please re-open this experiment to see the latest version."
Is there any way to move an experiment with external dependencies to another workspace and run it?
Edit: I think the problem has something to do with how the experiment handles the account key. When I enter it manually, it is converted into a JSON array comprising RecordKey and IndexInRecord. But when I upload the JSON experiment with the account key, the key remains unchanged and does not get resolved into RecordKey and IndexInRecord.
For me, publishing the experiment as a private experiment in the Cortana Intelligence Gallery is one of the most useful options. Only people with the link can see and add the experiment from the gallery. At the link below I've explained the steps I followed.
https://naadispeaks.wordpress.com/2017/08/14/copying-migrating-azureml-experiments/
When the experiment is copied, the password is wiped for security reasons. If you want to programmatically inject it back, you have to set another metadata field to signal that it is a plain-text password you are setting, not an encrypted one. If you export the experiment in JSON format, you can easily figure this out.
I think I found out why you are unable to inject the credentials back.
Export the JSON graph to your local disk, then update whatever parameter has to be updated.
You will also notice that the credentials are stored as 'Placeholders' instead of 'Literals'; hence it makes sense to change them to Literals.
You can do this by traversing the JSON to find the relevant parameters you need to update.
Here is a brief illustration of changing the Placeholder to a Literal:
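A rough sketch of that round trip, using the azuremlps cmdlets named in the question; the cmdlet parameter names and the 'Placeholder'/'Literal' property names are assumptions to verify against your own exported graph:

# Export the copied experiment's graph to disk (parameter names assumed)
Export-AmlExperimentGraph -ExperimentId $expId -OutputFile 'C:\temp\exp.json'

# Flip the credential entry from a Placeholder to a Literal (property names assumed)
$json = Get-Content 'C:\temp\exp.json' -Raw
$json = $json -replace '"Placeholder"', '"Literal"'
# ...insert the actual account key at the relevant parameter here...
Set-Content -Path 'C:\temp\exp.json' -Value $json

# Import the updated graph back into the target workspace
Import-AmlExperiment -InputFile 'C:\temp\exp.json'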
I have 'inherited' a test harness application which uses Enterprise Library for its SQL data access. In the app.config file (enterpriselibrary.configurationSettings), it references a "configurationSection" with a path to "dataConfig.config", which is encrypted. I would like to change the database connection properties, but EntLibConfig.exe will not open dataConfig.config or app.config (I do have the FileKeyAlgorithmPairStorageProviderData file).
The test harness application runs, so it's configured OK.
In code, using Microsoft.Practices.EnterpriseLibrary.Data.Configuration.ConfigurationManager.GetConfiguration("dataConfiguration"), I can read the data configuration and navigate all the instances and connection strings (security isn't an issue for this test harness). I can dump everything I need to a hand-crafted XML file (using GetType().AssemblyQualifiedName to get the full names of the classes which read the config file) and then change the app.config to read my new, unencrypted XML dataConfig file.
All is fine, I can now change my database config settings.
However... given that ConfigurationManager.GetConfiguration("dataConfiguration") returns a fully populated instance of a DatabaseSettings object, is there not a method I can call which will write the XML file (dataConfig.config) for me?
I appreciate that this is probably a really big hammer for editing the data configuration, but after half a day of trying I fell back on the old coding maxim: if you can't find the tool to do what you want, write your own!
Thanks
Well... it turns out that it's not that hard.
I added a new "configurationSection" to my app.config (dataConfiguration2), with encrypt set to false and with a path pointing to a new empty text file (dataConfiguration.config2). I then copied my encrypted dataConfiguration details using the following code:
using Entlib = Microsoft.Practices.EnterpriseLibrary.Configuration;
using Microsoft.Practices.EnterpriseLibrary.Data.Configuration;
// ...
DatabaseSettings settings = (DatabaseSettings)Entlib.ConfigurationManager.GetConfiguration("dataConfiguration");
Entlib.ConfigurationManager.WriteConfiguration("dataConfiguration2", settings);
...and it filled the empty file with the (unencrypted) configuration details.