Let's say I have added two migrations using EF Core, and these are the file names in the Migrations folder:
20210619063616_base.cs
20210619064501_removePersonModel.cs
When 20210619063616_base.cs was generated, I ran Script-Migration and it produced the SQL version of that migration.
Then, after I added another migration with Add-Migration and ran Script-Migration again, it generated all of the SQL, including the previous migration.
How can I run Script-Migration so that it generates SQL for only the last migration?
Why does Script-Migration generate a script for all migrations instead of just the last one?
The Script-Migration command has two optional parameters, -From and -To, which let you specify the starting and the ending migration respectively:
-From <String> The starting migration. Migrations may be identified by name or by ID. The number 0 is a special case that means before the first migration. Defaults to 0.
-To <String> The ending migration. Defaults to the last migration.
As you can see, the default for the ending migration is the last one, but the default for the starting migration is the special value 0, meaning the script starts before the first migration and therefore includes all of them.
How can I run Script-Migration so that it generates SQL for only the last migration?
One may consider the From default a poor design choice, but it is what it is, so you need to provide that parameter explicitly. For instance, to generate the script for the removePersonModel migration, you would use
Script-Migration -From base
if it is the last migration, or generally (regardless of which is the current last migration)
Script-Migration -From base -To removePersonModel
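Migrations can also be identified by their IDs (as noted in the parameter description above), so passing the full IDs from the file names should work just as well, for example:
Script-Migration -From 20210619063616_base -To 20210619064501_removePersonModel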
I have an SQLite database, say c:\myDb.sqlite
I have figured out how to build a query against this DB in SqlKata:
$query = New-Object SqlKata.Query("myTable")
$compiler = New-Object SqlKata.Compilers.SqliteCompiler
$query.Where("myColumn", "1")
$result = $compiler.Compile($query)
But I have no clue at all how to submit this to my Sqlite database.
Can anyone help?
Thanks,
Alex
Getting this to work from PowerShell is hampered by two difficulties:
Loading the assemblies related to NuGet packages in general and the Microsoft.Data.Sqlite NuGet package in particular often requires extra, non-obvious work in PowerShell.
PowerShell generally doesn't surface extension methods as such - e.g. .Get() on query instances - necessitating explicit calls to the static methods of [SqlKata.Execution.QueryExtensions] instead.
Specifically, using NuGet packages from PowerShell requires the following steps, which are neither convenient nor obvious:
Merely installing NuGet packages with Install-Package or trying to use them from the local cache created by .NET SDK projects in $HOME/.nuget/packages is often not enough, because any assemblies they depend on aren't then present in the same directory, which is what Add-Type requires.
They must also be unpacked in a platform-appropriate manner via an auxiliary .NET SDK project to a single target folder (per package or combined), as outlined in this answer.
Additionally, for the Microsoft.Data.Sqlite package, the platform-appropriate native library (e.g., win-x64\native\*.dll from the "runtimes" folder subtree of the .NET SDK project's publish folder) must be copied directly to the target folder in PowerShell (Core), but curiously not in Windows PowerShell, at least as of package version 5.0.9.
The following sample code uses the Add-NuGetType helper function, available from this MIT-licensed Gist, which automates all of the steps above:
Note:
Assuming you have looked at the linked code to ensure that it is safe (which I can personally assure you of, but you should always check), you can install Add-NuGetType directly as follows (instructions for how to make the function available in future sessions or to convert it to a script will be displayed):
irm https://gist.github.com/mklement0/7436c9e4b2f73d7256498f959f0d5a7c/raw/Add-NuGetType.ps1 | iex
When first run, the function downloads and installs a private copy of the .NET SDK embedded inside the folder in which NuGet packages downloaded later are cached. This initial installation takes a while, and the -Verbose switch used below reports its progress.
Add-NuGetType is not meant for production use, but for experimentation with NuGet packages; run help Add-NuGetType for more information.
# Reference the relevant namespaces.
using namespace SqlKata
using namespace SqlKata.Compilers
using namespace SqlKata.Execution
using namespace Microsoft.Data.Sqlite
# Load the SqlKata and SQLite assemblies.
# See the comments above for how to install the Add-NuGetType function.
# Note: On first call, a private copy of the .NET SDK is downloaded
# on demand, which takes a while.
Add-NuGetType -Verbose SqlKata, SqlKata.Execution, Microsoft.Data.Sqlite
# First, create sample database './sample.db' with table 'sample_table'
@'
create table sample_table (Name string, Age int);
insert into sample_table (Name, Age) values ("JDoe", 42), ("JRoe", 43);
.save ./sample.db
'@ | sqlite3
# Create a [SqliteConnection] instance...
$connection = [SqliteConnection]::new("Data Source=$PWD/sample.db")
# ... and create a query factory for it.
$sqliteDb = [QueryFactory]::new($connection, [SqliteCompiler]::new())
# Create and execute a sample query.
$query = $sqliteDb.Query("sample_table").Where("Name", "JRoe")
# Note the need to use the static methods of [SqlKata.Execution.QueryExtensions],
# because PowerShell doesn't make *extension methods* automatically available.
[SqlKata.Execution.QueryExtensions]::Get($query) # outputs [Dapper.SqlMapper+DapperRow] instances
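If you also want to see the SQL that SqlKata generates - as in the snippet in the question - you can still compile the query separately; a small sketch that assumes the $query object created above:
# Optional: inspect the SQL and bindings that SqlKata produces for the query.
$compiler = [SqliteCompiler]::new()
$compiled = $compiler.Compile($query)
$compiled.Sql        # the generated SELECT statement with parameter placeholders
$compiled.Bindings   # the bound parameter values (here: 'JRoe')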
I have a Crate.io database, running CrateDB version 3.2.7, under Windows Server 2012. (I know it's not the best, it's only for testing purposes right now and will not be the final setup.)
I have created the table dbo.snapshots.
I exported the data from SQL Server to a CSV file via the BCP command.
bcp DatabaseName.dbo.Snapshots out F:\Path\dbo_OldSnapshots.dat -S ServerName -U UserName -P Password -a65535 -c -C 65001 -t ,
Then I tried to import the data into CrateDb with the "COPY FROM" command.
COPY dbo.snapshots FROM 'file:///F:/Path/dbo_OldSnapshots.dat';
The file is about 11 GB. I know it found the file, as I could see the I/O on the drive in Task Manager.
It ran for about 13 minutes, and then said "0 rows affected". I have no idea why it didn't work, I didn't get any errors.
Any idea what I can do to make it work?
EDIT: Added additional info
Ok, so I've found out that you can specify the "RETURN SUMMARY" clause at the end of the COPY command. I tested it with a smaller file.
With that, I got an error that says that the primary key cannot be NULL. I know that it's not NULL in the data that I extract, so I'm gonna have to find out why it says that my primary key is NULL.
So I changed the BCP delimiter to a comma, since the CSV file for CrateDB must be comma-separated, and I manually edited the file to add the column headers, since CrateDB requires a header. I also edited the file in Notepad++ to save it in UTF-8 encoding, to make sure it had the right encoding. But even with all that, I still get the error saying that the primary key value must not be NULL.
Ok, so I managed to make it work. Here is what you need to check if you try to export data from SQL Server or another DBMS to CrateDB:
- File encoding in UTF-8
- Comma separated file
- The first line needs to be a header with all the columns. Be careful: the names are case-sensitive, so if the column is "MyColumn" in SQL Server but "mycolumn" in CrateDB, it must be lower case in the header or CrateDB won't be able to find it correctly
- If you have DateTime types, the values must be wrapped in double quotes in the file (ex: 1,0,"2019-05-10T16:40:00",0,0)
- If you have DateTime types, please note that you need a T between the date and time parts, so "2019-05-10T16:40:00" instead of "2019-05-10 16:40:00"
With all of that in check, I was able to import sample data into my CrateDB database.
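For illustration, a file that satisfies these rules might look like the following (hypothetical columns id, flag and snapshotdate):
id,flag,snapshotdate
1,0,"2019-05-10T16:40:00"
2,1,"2019-05-11T09:15:00"
And appending the RETURN SUMMARY clause mentioned above makes CrateDB report per-row errors instead of just "0 rows affected":
COPY dbo.snapshots FROM 'file:///F:/Path/dbo_OldSnapshots.dat' RETURN SUMMARY;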
I created a mapping that pulls data from a flat file that shows me usage data for specific SSRS reports. The file is overwritten each day with the previous day's usage data. My issue is that sometimes the report doesn't have any usage for that day, and my ETL sends me a "Failed" email because there wasn't any data in the source. Is there a way to prevent the job from running if there is no data in the source, or to prevent it from failing?
--Thanks
A simple way to solve this is to create a "Passthrough" mapping that only contains a flat file source, source qualifier, and a flat file target.
You would create a session that runs this mapping at the beginning of your workflow and have it read your flat file source. The target can just be a dummy flat file that you keep overwriting. Then you would have this condition in the link to your next session that would actually process the file:
$s_Passthrough.SrcSuccessRows > 0
Yes, there are several ways you can do this.
You can provide an empty file to the ETL job when there is no source data. To do this, use a pre-session command like touch <filename> in the Informatica workflow. This will create an empty file named <filename> if it is not present, and the workflow will run successfully with 0 rows.
If you have a script that triggers the Informatica job, then you can put a check there as well like this:
# Run the Informatica job only if the source file exists
if [ -e <filename> ]
then
    pmcmd ...
fi
This will skip the job when the file is not there.
Have another session before the actual data load. Read the file, use a FALSE filter and some dummy target. Link this one to the session you already have and set the following link condition:
$yourDummySessionName.SrcSuccessRows > 0
When I run Add-Migration ABC in the Package Manager Console, it creates a file named 201508210357164_ABC. The file name contains a datetime prefix, which I don't want.
How can I stop the datetime prefix from being added every time I run Add-Migration?
or
How can I choose the prefix myself, such as 'dbname', instead of the datetime, every time I run Add-Migration?
An important part of migrations in EF6 is their ordering, so there is no way to change this.
I am facing a peculiar problem where I need to update a particular value in the database to, say, 'Hellò'. When I run a normal update statement, the value is updated fine. But when I put it in a .sql script file and then run the update statement, the last character gets replaced by a junk value. Can someone enlighten me on this and on how Oracle processes script files?
If the update statement works, then this isn't an issue with the character encoding in the database, so there are two main culprits to look at: your software and your file encoding.
Check that your editor is UTF8 compliant - Notepad, for example, is not, but WordPad is, and so are better editors like UltraEdit. You also need to check that the file is saved as UTF8, since this is not always the default; if you edited and saved a file with Notepad, for example, it won't be UTF8 any more.
Once you have a UTF8 file, you must load it into the database via software that supports UTF8, which excludes SQL*Plus in the Windows 10g client. SQL Developer is OK. If you want to use SQL*Plus, upgrade your client to 11g. You don't say how you are loading the script, so check that whatever you are using supports UTF8.
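If you do stay with SQL*Plus, the client also has to be told that the script is UTF8-encoded; a minimal sketch on Windows, assuming an 11g client and a hypothetical script and connection:
REM Tell the Oracle client that this session's input is UTF-8 encoded
set NLS_LANG=AMERICAN_AMERICA.AL32UTF8
REM Run the script with the 11g SQL*Plus client (credentials and file name are placeholders)
sqlplus user/password@mydb @update_hello.sql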