One of my schema.prisma files is written like this:
generator client {
  provider = "prisma-client-js"
  output   = "./generated/own_database"
}

datasource db {
  provider = "postgresql"
  url      = env("DATABASE_URL")
}

model Employee {
  ...
}
And I have another one, which is like this:
generator client {
  provider = "prisma-client-js"
  output   = "./generated/another_database"
}

datasource db {
  provider = "postgresql"
  url      = env("DATABASE_URL_2")
}

model Costs {
  cod_center_cost Int      @id
  description     String?  @db.VarChar(100)
  status          Boolean?
  classification  Int?
}
...
I have a model that needs a relation to another model, but how can I refer to a model that lives in another file?
Splitting the Prisma schema into multiple files is not officially supported yet.
However, there are community-provided solutions such as Aurora, which gives you the ability to split Prisma schemas and import them.
Here's the official GitHub issue where schema file splitting is being tracked by the Prisma team.
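Note that Prisma relations can only be declared between models that share the same datasource, so merging the files only helps once both models live in one schema against one database. A minimal sketch of what the relation could then look like (the field names are illustrative, not taken from your schema):
model Employee {
  id              Int    @id
  cod_center_cost Int?
  costCenter      Costs? @relation(fields: [cod_center_cost], references: [cod_center_cost])
}

model Costs {
  cod_center_cost Int        @id
  description     String?
  employees       Employee[]
}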
The Corda documentation says we can have a custom schema as a vault extension.
However, there is not much clarity on how a vault extension should create/manage a custom database schema for the node's vault database.
Is an API for this going to be published in a future release of Corda?
Inside flows, the node exposes a JDBC connection that allows you to write native custom SQL queries (as a vault extension). You can access this JDBC connection using serviceHub.jdbcSession().
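As a rough sketch (not taken from the docs), a native query inside a flow might look like the following; the table and column names are assumptions that happen to match the Yo! schema shown further down:
// Inside a FlowLogic subclass: serviceHub.jdbcSession() hands back a plain java.sql.Connection.
val session = serviceHub.jdbcSession()
session.prepareStatement("SELECT origin, yo FROM yos WHERE target = ?").use { statement ->
    statement.setString(1, ourIdentity.name.toString())
    val results = statement.executeQuery()
    while (results.next()) {
        logger.info("Yo from ${results.getString("origin")}: ${results.getString("yo")}")
    }
}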
If your question is about how to write a custom schema then please see the existing Corda Persistence API docs.
You can then query that custom schema using the new Vault Query API - please see the existing Corda Vault Query API docs.
Just to add an example to the above, here's a custom schema for the Yo! CorDapp. See YoSchemaV1 below:
// State.
data class State(val origin: Party,
                 val target: Party,
                 val yo: String = "Yo!") : ContractState, QueryableState {
    override val participants get() = listOf(target)
    override val contract get() = Yo()
    override fun toString() = "${origin.name}: $yo"
    override fun supportedSchemas() = listOf(YoSchemaV1)
    override fun generateMappedObject(schema: MappedSchema) = YoSchemaV1.YoEntity(this)

    object YoSchemaV1 : MappedSchema(Yo.State::class.java, 1, listOf(YoEntity::class.java)) {
        @Entity @Table(name = "yos")
        class YoEntity(yo: State) : PersistentState() {
            @Column var origin: String = yo.origin.name.toString()
            @Column var target: String = yo.target.name.toString()
            @Column var yo: String = yo.yo
        }
    }
}
In short, your state object needs to implement QueryableState, as above.
The full CorDapp is available here: https://github.com/roger3cev/yo-cordapp
Cheers
I am trying to connect my Scala application to a Postgres cluster consisting of one master node and 3 slaves/read replicas. My application.conf looks like this today:
slick {
  dbs {
    default {
      driver = "com.company.division.db.ExtendedPgDriver$"
      db {
        driver = "org.postgresql.Driver"
        url = "jdbc:postgresql://"${?DB_ADDR}":"${?DB_PORT}"/"${?DB_NAME}
        user = ${?DB_USERNAME}
        password = ${?DB_PASSWORD}
      }
    }
  }
}
Based on Postgres' documentation, I can define the master and slaves all in one JDBC URL, which will give me some failover capabilities, like this:
jdbc:postgresql://host1:port1,host2:port2/database
However, if I want to separate my connections by read and write capabilities, I have to define two JDBC URLs, like this:
jdbc:postgresql://node1,node2,node3/database?targetServerType=master
jdbc:postgresql://node1,node2,node3/database?targetServerType=preferSlave&loadBalanceHosts=true
How can I define two JDBC URLs within Slick? Should I define two separate entities under slick.dbs, or can my slick.dbs.default.db entity have multiple URLs defined?
Found an answer in Daniel Westheide's blog post. To summarize, it can be done with a DB wrapper class and custom Effect types that control which database read-only queries are directed to and which one write queries go to.
Then your slick file would look like this:
slick {
  dbs {
    default {
      driver = "com.yourdomain.db.ExtendedPgDriver$"
      db {
        driver = "org.postgresql.Driver"
        url = "jdbc:postgresql://"${?DB_PORT_5432_TCP_ADDR}":"${?DB_PORT_5432_TCP_PORT}"/"${?DB_NAME}
        user = ${?DB_USERNAME}
        password = ${?DB_PASSWORD}
      }
    }
    readonly {
      driver = "com.yourdomain.db.ExtendedPgDriver$"
      db {
        driver = "org.postgresql.Driver"
        url = ${DB_READ_REPLICA_URL}
        user = ${?DB_USERNAME}
        password = ${?DB_PASSWORD}
      }
    }
  }
}
And it's up to your DB wrapper class to route queries to either 'default' or 'readonly'.
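A rough sketch of such a wrapper (the names are mine, not from the blog post, and it uses Slick's built-in Effect.Read marker rather than the custom effect types described there):
import scala.concurrent.Future
import slick.dbio.{DBIOAction, Effect, NoStream}
import slick.jdbc.JdbcBackend.Database

class RoutingDb(primary: Database, readReplica: Database) {

  // Actions whose effect type is Effect.Read can safely run on the replica.
  def runRead[R](action: DBIOAction[R, NoStream, Effect.Read]): Future[R] =
    readReplica.run(action)

  // Anything else (writes, mixed read/write, DDL) goes to the primary.
  def runWrite[R](action: DBIOAction[R, NoStream, Nothing]): Future[R] =
    primary.run(action)
}

// Usage sketch, assuming the configuration above:
// val db = new RoutingDb(
//   Database.forConfig("slick.dbs.default.db"),
//   Database.forConfig("slick.dbs.readonly.db"))
// db.runRead(users.result)       // read replica
// db.runWrite(users += newUser)  // primary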
I am attempting to use Entity Framework code-based migrations with my web site. I currently have a solution with multiple projects in it. There is a Web API project, from which I want to initialize the database, and another project called the DataLayer project. I have enabled migrations in the DataLayer project and created an initial migration that I am hoping will be used to create the database if it does not exist.
Here is the configuration I got when I enabled migrations:
public sealed class Configuration : DbMigrationsConfiguration<Harris.ResidentPortal.DataLayer.ResidentPortalContext>
{
    public Configuration()
    {
        AutomaticMigrationsEnabled = false;
    }

    protected override void Seed(Harris.ResidentPortal.DataLayer.ResidentPortalContext context)
    {
        // This method will be called after migrating to the latest version.
        // You can use the DbSet<T>.AddOrUpdate() helper extension method
        // to avoid creating duplicate seed data. E.g.
        //
        //   context.People.AddOrUpdate(
        //     p => p.FullName,
        //     new Person { FullName = "Andrew Peters" },
        //     new Person { FullName = "Brice Lambson" },
        //     new Person { FullName = "Rowan Miller" }
        //   );
        //
    }
}
The only change I made to this after it was created was to change it from internal to public so the WebAPI could see it and use it in its database initializer. Below is the code in Application_Start that I am using to try to initialize the database:
Database.SetInitializer(new MigrateDatabaseToLatestVersion<ResidentPortalContext, Configuration>());
new ResidentPortalUnitOfWork().Context.Users.ToList();
If I run this, whether or not a database exists, I get the following error:
Directory lookup for the file "C:\Users\Dave\Documents\Visual Studio 2012\Projects\ResidentPortal\Harris.ResidentPortal.WebApi\App_Data\Harris.ResidentPortal.DataLayer.ResidentPortalContext.mdf" failed with the operating system error 2(The system cannot find the file specified.).
CREATE DATABASE failed. Some file names listed could not be created. Check related errors.
It seems like it is looking in totally the wrong place for the database. It seems to have something to do with this particular way of initializing the database, because if I change the code to the following:
Database.SetInitializer(new DropCreateDatabaseAlways<ResidentPortalContext>());
new ResidentPortalUnitOfWork().Context.Users.ToList();
The database will get correctly created where it needs to go.
I am at a loss for what is causing it. Could it be that I need to add something else to the configuration class or does it have to do with the fact that all my migration information is in the DataLayer project but I am calling this from the WebAPI project?
I have figured out how to create a dynamic connection string for this process. You first need to add this line to the entityFramework entry of your Web.config or App.config, in place of the defaultConnectionFactory line that gets put there by default:
<defaultConnectionFactory type="<Namespace>.<ConnectionStringFactory>, <Assembly>"/>
This tells the program you have your own factory that will return a DbConnection. Below is the code I used to make my own factory. Part of it is a hack to get around the fact that a bunch of programmers work on the same set of code, but some of us use SQL Express while others use full-blown SQL Server. Still, this will give you an example to go by for what you need.
public sealed class ResidentPortalConnectionStringFactory : IDbConnectionFactory
{
    public DbConnection CreateConnection(string nameOrConnectionString)
    {
        SqlConnectionStringBuilder builder = new SqlConnectionStringBuilder(ConfigurationManager.ConnectionStrings["PortalDatabase"].ConnectionString);
        //save off the original catalog
        string originalCatalog = builder.InitialCatalog;
        //we're going to connect to the master db in case the database doesn't exist yet
        builder.InitialCatalog = "master";
        string masterConnectionString = builder.ToString();
        //attempt to connect to the master db on the source specified in the config file
        using (SqlConnection conn = new SqlConnection(masterConnectionString))
        {
            try
            {
                conn.Open();
            }
            catch
            {
                //if we can't connect, then append on \SQLEXPRESS to the data source
                builder.DataSource = builder.DataSource + "\\SQLEXPRESS";
            }
            finally
            {
                conn.Close();
            }
        }
        //set the connection string back to the original database instead of the master db
        builder.InitialCatalog = originalCatalog;
        DbConnection temp = SqlClientFactory.Instance.CreateConnection();
        temp.ConnectionString = builder.ToString();
        return temp;
    }
}
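With the factory above, the config entry might look like the following (the namespace and assembly name are assumptions about where you place the class, not something prescribed by EF):
<entityFramework>
  <defaultConnectionFactory type="Harris.ResidentPortal.WebApi.ResidentPortalConnectionStringFactory, Harris.ResidentPortal.WebApi" />
  <!-- leave the providers section as it was generated -->
</entityFramework>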
Once I did that I could run this code in my Global.asax with no issues:
Database.SetInitializer(new MigrateDatabaseToLatestVersion<ResidentPortalContext, Configuration>());
using (ResidentPortalUnitOfWork temp = new ResidentPortalUnitOfWork())
{
    temp.Context.Database.Initialize(true);
}
I'm a little confused as to the purpose of a data model in Entity Framework code-first. Because EF will auto-generate a database from scratch for you if it doesn't already exist using nothing more than the data model (including data annotations and Fluent API stuff in DbContext.OnModelCreating), I was assuming that the data model should fully describe your database's structure, and you wouldn't need to modify anything fundamental after that.
However, I came across this Codeplex issue in which one of the EF Triage Team members suggests that custom indexes be added in data migrations, but not as annotations to your data model fields, or Fluent API code.
But wouldn't that mean that anyone auto-generating the database from scratch would not get those custom indexes added to their DB? The assumption seems to be that once you start using data migrations, you're never going to create the database from scratch again. What if you're working in a team and a new team member comes along with a new SQL Server install? Are you expected to copy over a database from another team member? What if you want to start using a new DBMS, like Postgres? I thought one of the cool things about EF was that it was DBMS-independent, but if you're no longer able to create the database from scratch, you can no longer do things in a DBMS-independent way.
For the reasons I outlined above, wouldn't adding custom indexes in a data migration but not in the data model be a bad idea? For that matter, wouldn't adding any DB structure changes in a migration but not in the data model be a bad idea?
Are EF code-first models intended to fully describe a database's structure?
No, they don't fully describe the database structure or schema. Still, there are ways to make the generated database fully described using EF. They are as below:
You can use the ExecuteSqlCommand method on the Database class (introduced in CTP5), which allows raw SQL commands to be executed against the database.
The best place to invoke ExecuteSqlCommand for this purpose is inside a Seed method that has been overridden in a custom initializer class. For example:
protected override void Seed(EntityMappingContext context)
{
    context.Database.ExecuteSqlCommand("CREATE INDEX IX_NAME ON ...");
}
You can even add unique constraints this way.
It is not a workaround; the constraint will be enforced whenever the database is generated.
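For example (inside the same Seed method; the table and column names are only illustrative):
context.Database.ExecuteSqlCommand(
    "ALTER TABLE Employees ADD CONSTRAINT UQ_Employees_Email UNIQUE (Email)");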
OR
If you are badly in need of an attribute-based approach, then here it goes:
[AttributeUsage(AttributeTargets.Property, Inherited = false, AllowMultiple = true)]
public class IndexAttribute : Attribute
{
    public IndexAttribute(string name, bool unique = false)
    {
        this.Name = name;
        this.IsUnique = unique;
    }

    public string Name { get; private set; }

    public bool IsUnique { get; private set; }
}
After this, you will have an initializer which you will register in your OnModelCreating method as below:
public class IndexInitializer<T> : IDatabaseInitializer<T> where T : DbContext
{
    private const string CreateIndexQueryTemplate = "CREATE {unique} INDEX {indexName} ON {tableName} ({columnName});";

    public void InitializeDatabase(T context)
    {
        const BindingFlags PublicInstance = BindingFlags.Public | BindingFlags.Instance;

        Dictionary<IndexAttribute, List<string>> indexes = new Dictionary<IndexAttribute, List<string>>();
        string query = string.Empty;

        foreach (var dataSetProperty in typeof(T).GetProperties(PublicInstance).Where(p => p.PropertyType.Name == typeof(DbSet<>).Name))
        {
            var entityType = dataSetProperty.PropertyType.GetGenericArguments().Single();
            TableAttribute[] tableAttributes = (TableAttribute[])entityType.GetCustomAttributes(typeof(TableAttribute), false);

            indexes.Clear();
            string tableName = tableAttributes.Length != 0 ? tableAttributes[0].Name : dataSetProperty.Name;

            foreach (PropertyInfo property in entityType.GetProperties(PublicInstance))
            {
                IndexAttribute[] indexAttributes = (IndexAttribute[])property.GetCustomAttributes(typeof(IndexAttribute), false);
                NotMappedAttribute[] notMappedAttributes = (NotMappedAttribute[])property.GetCustomAttributes(typeof(NotMappedAttribute), false);
                if (indexAttributes.Length > 0 && notMappedAttributes.Length == 0)
                {
                    ColumnAttribute[] columnAttributes = (ColumnAttribute[])property.GetCustomAttributes(typeof(ColumnAttribute), false);
                    foreach (IndexAttribute indexAttribute in indexAttributes)
                    {
                        if (!indexes.ContainsKey(indexAttribute))
                        {
                            indexes.Add(indexAttribute, new List<string>());
                        }

                        if (property.PropertyType.IsValueType || property.PropertyType == typeof(string))
                        {
                            string columnName = columnAttributes.Length != 0 ? columnAttributes[0].Name : property.Name;
                            indexes[indexAttribute].Add(columnName);
                        }
                        else
                        {
                            indexes[indexAttribute].Add(property.PropertyType.Name + "_" + GetKeyName(property.PropertyType));
                        }
                    }
                }
            }

            foreach (IndexAttribute indexAttribute in indexes.Keys)
            {
                query += CreateIndexQueryTemplate.Replace("{indexName}", indexAttribute.Name)
                    .Replace("{tableName}", tableName)
                    .Replace("{columnName}", string.Join(", ", indexes[indexAttribute].ToArray()))
                    .Replace("{unique}", indexAttribute.IsUnique ? "UNIQUE" : string.Empty);
            }
        }

        if (context.Database.CreateIfNotExists())
        {
            context.Database.ExecuteSqlCommand(query);
        }
    }

    private string GetKeyName(Type type)
    {
        PropertyInfo[] propertyInfos = type.GetProperties(BindingFlags.FlattenHierarchy | BindingFlags.Instance | BindingFlags.Public);
        foreach (PropertyInfo propertyInfo in propertyInfos)
        {
            if (propertyInfo.GetCustomAttribute(typeof(KeyAttribute), true) != null)
                return propertyInfo.Name;
        }
        throw new Exception("No property was found with the attribute Key");
    }
}
Then override OnModelCreating in your DbContext:
protected override void OnModelCreating(DbModelBuilder modelBuilder)
{
    Database.SetInitializer(new IndexInitializer<MyContext>());
    base.OnModelCreating(modelBuilder);
}
Apply the index attribute to the properties of your entity type. With this solution you can have multiple fields in the same index; just use the same index name and unique flag.
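For illustration, usage of the custom IndexAttribute defined above could look like this (the entity and its columns are made up):
[Table("Employees")]
public class Employee
{
    [Key]
    public int Id { get; set; }

    // Two properties sharing the same index name end up in one composite index.
    [Index("IX_Employee_Name_Department", true)]
    public string Name { get; set; }

    [Index("IX_Employee_Name_Department", true)]
    public string Department { get; set; }
}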
OR
You can do the migrations later on.
Note:
I have taken a lot of this code from here.
The question seems to be whether there is value in adding migrations mid-stream, or whether those will cause problems for future database initializations on different machines.
The initial migration that is created also contains the entire data model as it exists, so by adding migrations (enable-migrations in the Package Manager Console) you are, in effect, creating the built-in mechanism for your database to be properly created down the road for other developers.
If you're doing this, I do recommend modifying the database initialization strategy to run all your existing migrations, lest EF should start up and get the next dev's database out of sync.
Something like this would work:
Database.SetInitializer(new MigrateDatabaseToLatestVersion<YourNamespace.YourDataContext, Migrations.Configuration>());
So, no, this won't inherently introduce problems for future work/developers. Remember that migrations are just turned into valid SQL that executes against the database; you can even use script mode to output the T-SQL required to make the DB modifications based on anything in the migrations you have created.
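For example, in the Package Manager Console (assuming the EF6 tooling), something like the following outputs the script instead of applying it:
Update-Database -Script -SourceMigration: $InitialDatabase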
Cheers.
Our database was designed in such a way that there are various schemas for production and equivalent schemas for test. For example, many tables rest in the MyProduction schema while the same tables live in the MyTest schema.
What I want to do is determine which schema a table is using so I know which one to change it to. So, by default, everything will be under the production schemas. In the OnModelCreating event of the DbContext, if I need to point to test (determined by some true/false configuration), I need to determine the production schema being used, then point it to its test equivalent.
I'm already aware of how to set the schema but can't find how to get it. Any ideas how I can determine the schema that a table is using?
Thank You.
Try the code below after modifying it according to your local settings:
var context = new YourDbContext("ConnectionName");
var adapter = (IObjectContextAdapter)context;
var objectContext = adapter.ObjectContext;
EntitySetBase schema = null;
if (objectContext.MetadataWorkspace != null)
{
    schema = objectContext.MetadataWorkspace
        .GetItems<EntityContainer>(DataSpace.SSpace).First()
        .BaseEntitySets
        .First(meta => meta.ElementType.Name == "ClassNameUnderYourDbContext");
}
//See the properties of schema in debug mode to understand details
Entity Framework schema mappings are expressed through System.ComponentModel.DataAnnotations.TableAttribute objects. Here are some methods you can use to get an entity's schema name and table name. Cheers!
private string GetTableName(Type type)
{
    var tableAttribute = type.GetCustomAttributes(false).OfType<System.ComponentModel.DataAnnotations.TableAttribute>().FirstOrDefault();
    if (tableAttribute != null && !string.IsNullOrEmpty(tableAttribute.Name))
    {
        return tableAttribute.Name;
    }
    else
    {
        return string.Empty;
    }
}

private string GetTableSchema(Type type)
{
    var tableAttribute = type.GetCustomAttributes(false).OfType<System.ComponentModel.DataAnnotations.TableAttribute>().FirstOrDefault();
    if (tableAttribute != null && !string.IsNullOrEmpty(tableAttribute.Schema))
    {
        return tableAttribute.Schema;
    }
    else
    {
        return string.Empty;
    }
}
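Hypothetical usage, assuming an entity decorated with [Table("Employees", Schema = "MyProduction")]:
string schema = GetTableSchema(typeof(Employee));   // "MyProduction"
string table = GetTableName(typeof(Employee));      // "Employees"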
(Note: in newer versions of Entity Framework the attribute lives in System.ComponentModel.DataAnnotations.Schema.TableAttribute.)