I have three solutions. One is a schema solution that only has a schema file in it; let's call it the SchemaSolution.
The SchemaSolution is referenced by my other two solutions, because Solution1 creates XML instances of the schema from the SchemaSolution and drops them as self-correlating messages into the MessageBox.
This works great, but if I want to update one of the solutions that references the SchemaSolution (i.e. deploy it to BizTalk), I always have to delete the other solutions first. This is horrible, and I have not been able to find a solution so far.
Is there a (non-hacky) way? I thought about merging all projects into one solution, but that is the worst-case scenario I can imagine for achieving my goal.
How can I deploy a project that is referenced in different solutions without deleting and redeploying everything?
BizTalk 2013 R2 is in use.
No, this is not supported, and it is not recommended to try to hack your way around it (you would definitely need to alter the BizTalk databases, which I don't think Microsoft even allows).
I can give you 3 options:
Make the SchemaSolution as small as possible: break it down into multiple schema solutions, per process for instance, so the chance that you need to change a given solution is smaller. Ideally, such a solution would have one assembly/project per schema, so new schemas can be added without a redeploy.
Another option would be to duplicate your schemas into your projects. This is a design choice you could make, but it requires some more work, as you need to specify the schemas in your pipelines (otherwise BizTalk doesn't know which one you mean), and you have the double work of changing the same schemas in multiple projects. The downside is that the schemas are not the same to BizTalk, so you can't use them in another project without a reference.
Your final option would be to get rid of the dependency on that schema completely by creating your own internal/generic/CDM schema, which ideally would be more robust and less prone to change. This schema would still be referenced by multiple projects, but since you're the one in charge of it, you can predict changes and mold it to your liking. Again, ideally, such a solution would have one assembly/project per schema, so new schemas can be added without a redeploy.
I have a very similar (if not the same) issue within a solution.
I have a set of integration projects dependent on a simple schema project. If I deploy one integration project, I must deploy the schema project, which means I must deploy all integration projects!
In order to deploy them independently, I simply turned the Redeploy flag from True to False in the properties of the schema project (in Visual Studio).
This allows me to redeploy as many other dependent projects as I like without having to delete or mess around. I can deploy a single integration project with no effect on the others.
The only caveat is that when you redeploy, for some reason VS flags the fact that you have set Redeploy to False on the schema project as an error and says that one of the projects was not deployed.
It is not a true error, more of a warning in my opinion.
I have been doing this in BizTalk 2016; I would assume you can do the same in 2013.
I'm building an application where each of our clients needs their own data warehouse (for security, compliance, and maintainability reasons). For each client we pull in data from multiple third-party integrations and then merge it into a unified view, which we use to perform analytics and report metrics across those integrations. The transformations and all relevant schemas are the same for all clients. We would need this to scale to thousands of clients.
From what I gather, dbt is designed so that each project corresponds to one warehouse. I see two options:
Use one project and create a separate environment target for each client (and maybe a single dev environment). Given that environments aren't designed for this, are there any catches to this? Will scheduling, orchestrating, or querying the outputs be painful or unscalable for some reason?
profiles.yml:
example_project:
  target: dev
  outputs:
    dev:
      type: redshift
      ...
    client_1:
      type: redshift
      ...
    client_2:
      type: redshift
      ...
    ...
Create multiple projects and a shared dbt package containing most of the logic. This seems very unwieldy: it means maintaining a separate repo for each client and is less developer-friendly.
profiles.yml:
client_1_project:
  target: dev
  outputs:
    client_1:
      type: redshift
      ...

client_2_project:
  target: dev
  outputs:
    client_2:
      type: redshift
      ...
Thoughts?
I think you captured both options.
If you have a single database connection, and your client data is logically separated in that connection, I would definitely pick #2 (one package, many client projects) over #1. Some reasons:
Selecting data from a different source (within a single connection) depending on the target is a bit hacky and wouldn't scale well for thousands of clients.
The developer experience for packages isn't so bad. You will want a developer data source, but depending on your business you could maybe get away with using one client's data (or an anonymized version of that). It will be good to keep this developer environment logically separate from any individual client's implementation, and packages allow you to do that.
I would consider generating the client projects programmatically, probably using a Python CLI to set up, dbt run, and tear down the required files for each client project (I'm assuming you're not going to use dbt Cloud and have another orchestrator or compute environment that you control). It's easy to write YAML from Python with pyyaml (each file is just a dict), and your individual projects probably only need separate profiles.yml, sources.yml, and (maybe) dbt_project.yml files. I wouldn't check these generated files for each client into source control -- just check in the script and generate the files you need with each invocation of dbt.
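For illustration, here is a rough Python sketch of that generation step using pyyaml, with hypothetical client names, a Redshift connection, and a profile name that would have to match the profile: entry of the generated dbt_project.yml; treat it as a sketch of the approach, not a ready implementation:

# Hypothetical sketch: write a per-client profiles.yml and run dbt against it.
# Client names, connection details, and the env-var secret are placeholders.
import subprocess
import yaml  # pyyaml

clients = ["client_1", "client_2"]  # in practice, load this list from config or an API

def write_profile(client):
    profile = {
        "client_project": {            # must match `profile:` in dbt_project.yml
            "target": client,
            "outputs": {
                client: {
                    "type": "redshift",
                    "host": f"{client}.example-redshift-host",  # placeholder host
                    "user": "dbt_user",
                    "password": "{{ env_var('DBT_PASSWORD') }}",  # resolve secrets via env vars
                    "port": 5439,
                    "dbname": "analytics",
                    "schema": "analytics",
                    "threads": 4,
                }
            },
        }
    }
    with open("profiles.yml", "w") as f:
        yaml.safe_dump(profile, f)

for client in clients:
    write_profile(client)
    # --profiles-dir . makes dbt pick up the generated profiles.yml from the working dir
    subprocess.run(["dbt", "run", "--profiles-dir", ".", "--target", client], check=True)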
On the other hand, if your clients each have their own physical database with separate connections and credentials, and those databases are absolutely identical, you could get away with #1 (one project, many profiles). The "hardest" parts of that approach would likely be managing secrets and generating/maintaining a list of targets that you could iterate over (ideally in a parallel fashion).
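For option #1, a minimal sketch of iterating over targets in parallel, assuming every client already exists as an output in profiles.yml and that a failed client should be reported rather than abort the whole run:

# Minimal sketch: run dbt against many pre-defined targets in parallel.
# The target list and worker count are placeholders.
import subprocess
from concurrent.futures import ThreadPoolExecutor

targets = ["client_1", "client_2"]  # ideally generated, not maintained by hand

def run_target(target):
    return subprocess.run(["dbt", "run", "--target", target], capture_output=True, text=True)

with ThreadPoolExecutor(max_workers=4) as pool:
    for target, result in zip(targets, pool.map(run_target, targets)):
        print(target, "ok" if result.returncode == 0 else "failed")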
I would like to change the properties of multiple diagrams together rather than clicking on them one by one. Does anyone know how this can be achieved?
You can use the scripting facility of Enterprise Architect to loop over the diagrams you would like to change and update them.
See this section of the manual to get help.
There are a bunch of example scripts included with EA, either in the local scripts or in the EAScriptLib MDG.
Another source of examples is my Github repository: https://github.com/GeertBellekens/Enterprise-Architect-VBScript-Library
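For illustration, the same kind of loop can also be driven from outside EA over its COM automation API, for example from Python with pywin32 (a sketch under assumptions, not one of the VBScript examples above; the repository path and the property being changed are placeholders):

# Hypothetical sketch: loop over all diagrams via EA's COM automation API and
# change a property on each. Requires Windows + pywin32.
import win32com.client

repo = win32com.client.Dispatch("EA.Repository")
repo.OpenFile(r"C:\path\to\model.eapx")  # hypothetical repository file

def update_diagrams(package):
    for i in range(package.Diagrams.Count):
        diagram = package.Diagrams.GetAt(i)
        diagram.ShowDetails = 0          # example property change; set whatever you need
        diagram.Update()
    for j in range(package.Packages.Count):
        update_diagrams(package.Packages.GetAt(j))  # recurse into sub-packages

for i in range(repo.Models.Count):       # root models are packages too
    update_diagrams(repo.Models.GetAt(i))

repo.CloseFile()
repo.Exit()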
You could write a SQL statement to manipulate your database directly. t_diagram.PDATA holds a long, cryptic string in which one part is ScalePI=0; (the default, meaning no scaling). You can alter that to ScalePI=1; (meaning scale to one page).
String manipulation functions vary from database to database, so you need to write your own statement, which you can execute in a script using
Repository.Execute("UPDATE t_diagram ...")
Note that you should test this in a sandbox first, since invalid SQL can easily corrupt your whole repository.
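As a hedged illustration only: run from Python over EA's COM API, and with REPLACE() written in SQL Server style (other backends need their own string functions), such a script could look roughly like this, again against a sandbox copy first:

# Hypothetical sketch: flip ScalePI=0; to ScalePI=1; ("scale to one page") for all diagrams
# via the Repository.Execute call mentioned above. Adjust the REPLACE() syntax to your backend.
import win32com.client

repo = win32com.client.Dispatch("EA.Repository")
repo.OpenFile(r"C:\path\to\sandbox_copy.eapx")  # hypothetical sandbox copy
repo.Execute("UPDATE t_diagram SET PDATA = REPLACE(PDATA, 'ScalePI=0;', 'ScalePI=1;')")
repo.CloseFile()
repo.Exit()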
I'm developing a system with database version control in Liquibase. The system is still in pre-alpha development, and there are a lot of changes that were reverted or superseded by other changes (tables removed, columns added and removed).
The current changelog reflects the whole development history, including many failed experiments, and all of it is rolled out when the database is initialized.
Because there is no released version yet, I can start from scratch and capture the actual DB state in a single XML changeset.
Is there a way to tell Liquibase to merge all changesets into one file, or is the only way to do that by hand?
Just use your existing database to generate the changelog that will be used from now on. For this you can use the generateChangeLog command from the command line; it will generate a changelog file with all the changeSets that represent the current state of the database. You can use this file in your project as the initial DB creation file, to be used on an empty database. Here's a link to the docs.
There is a page in the Liquibase docs which discusses this scenario in detail:
http://www.liquibase.org/documentation/trimming_changelogs.html
To summarise, they recommend that you don't bother since consolidating your changelogs is both risky and low-reward.
If you do want to push ahead with this, then restarting the changelog using generateChangeLog, as suggested by #veljkost, is probably the easiest way. This is documented at http://www.liquibase.org/documentation/existing_project.html
Since I didn't find an automatic solution to this problem for the case where the changelog is already deployed on several databases in different states, I will describe my solution here:
Generate a changelog of the current development state of your database using Liquibase's generateChangeLog, for example:
mvn liquibase:generateChangeLog -Dliquibase.outputChangeLogFile=current_state.yml
Audit the generated changelog and check whether it looks good (Liquibase is not perfect; it often generates clumsy statements). Also, if your schema contains some static data, like dictionaries, that was previously populated using Liquibase, you have to add it to the generated changelog as well; you can export data from your database using the generateChangeLog command mentioned above with the -Dliquibase.diffTypes=data property.
Now, to prevent the execution of the generated changelog on existing databases (it would obviously fail on prod, test, and other developers' local environments), you could use, for example, liquibase changelogSync or Liquibase contexts, but all of these options require manual work on every database. You can achieve an automatic result by adding preConditions to your changeSets.
For changesets intended to run only on empty databases (the changelog you generated in step 1 above), you can add something like this:
preConditions:
  - onFail: MARK_RAN
  - not:
      - tableExists:
          tableName: t_project
Here t_project is a table name that existed before (most likely this should be a table added in the first changeSet, so every database that has run at least one changeSet will have it). This will mark the generated changelog as already run on environments with an existing schema, and will run the generated changelog on every new database you want to migrate.
Unfortunately, you have to adjust all the legacy changesets as well (I haven't found a better solution yet; I made this change using regex and sed, and a rough sketch of the idea follows below), adding something like this:
preConditions:
  - onFail: MARK_RAN
  - tableExists:
      tableName: t_project
So, the opposite condition from the one above. With this, all databases that ran at least one changeset in the past will continue to migrate (EXECUTED status of changesets) up to the changeset generated in step 1 above, and will mark the generated changesets as MARK_RAN. For new databases, all the legacy changesets will be skipped, and the first one executed will be the changeset generated in step 1 above.
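As a rough illustration of that regex step (assuming XML-format legacy changelogs and the hypothetical file name below; the pattern is deliberately naive, so review the result by hand):

# Hypothetical sketch: insert an onFail="MARK_RAN" tableExists precondition into every
# legacy <changeSet> of an XML changelog. Assumes no changeSet already has preConditions.
import re

PRECONDITION = (
    '<preConditions onFail="MARK_RAN">'
    '<tableExists tableName="t_project"/>'
    '</preConditions>'
)

with open("legacy-changelog.xml") as f:
    xml = f.read()

# Add the precondition right after each opening <changeSet ...> tag.
patched = re.sub(r"(<changeSet\b[^>]*>)", r"\1" + PRECONDITION, xml)

with open("legacy-changelog.xml", "w") as f:
    f.write(patched)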
With this solution you can push your merged changelog at any time, and no environment or developer will have any problem with manual syncing.
I have two LLBLGen Pro 2.6 source files that I have to merge in my git repo (two different branches). Due to the "professional" work of previous programmers on this project, the two projects have diverged with changes (the fork is a year old) that are not documented anywhere.
What would be the least painful way to finalize my merge?
Thanks.
In my experience, it's easier to simply ignore the merge conflicts in the LLBLGen-generated code, re-sync the project to the database, and then regenerate the code completely post-merge.
Where this becomes a problem is when there are a lot (or even a few) customizations made to the LLBLGen project file (e.g. renaming fields, creating typed lists). There isn't much you can do about these beyond tracking them down one by one. The good news is that the compiler will complain if something is missing or renamed.
Okay, there are similar questions to this, but this is NOT a duplicate. This error seems to come up when you have parameters referencing a dataset which is shared. Deleting the report from the server and redeploying does not fix it in my case.
I am developing in VS 2010 Professional with Business Intelligence Development Studio (BIDS), which is under source control with Team Foundation Server. I am deploying to a 2008 R2 server, which I thought might be the issue. The workaround is to change the dataset references to be embedded instead, which stops this error dead in its tracks, but that is pretty poor in my opinion, and I would ultimately like this to work with shared datasets.
Things I have tried:
1. Ensured the naming of the dataset matches the reference, e.g. the name is ClientQuery and the shared dataset is ClientQuery.
2. Ensured the naming on the server matches the references in step 1.
3. Confirmed that this is what breaks it: removing the reference to the shared dataset makes it work right away.
4. Ensured that the shared dataset is not enabling some type of caching on the server.
5. I had a filter on a second shared dataset limiting scope; I removed that and there was still an error.
6. Removed all parameters and added only a single shared dataset; it gives the error right away.
7. Added the "Allow empty values" option to the parameter bindings. Did this with nulls as well.
8. Recreated EVERYTHING in a brand new RDL file, copy-and-pasting only the elements in the body of the report while explicitly recreating the parameters and the datasets, and this STILL HAPPENED.
9. UPDATED - I have tried the old "destroy the RDL and redeploy" trick, which I found suggested a lot online. That does not work in this case. It is almost as if this reference in the RDL is the problem:
<DataSet Name="ClientQuery">
  <SharedDataSet>
    <SharedDataSetReference>ClientQuery</SharedDataSetReference>
  </SharedDataSet>
  <Fields>
    <Field Name="CUSTOMER_ID">
      <DataField>CUSTOMER_ID</DataField>
      <rd:TypeName>System.String</rd:TypeName>
    </Field>
    <Field Name="CUSTOMER_NAME">
      <DataField>CUSTOMER_NAME</DataField>
      <rd:TypeName>System.String</rd:TypeName>
    </Field>
  </Fields>
</DataSet>
It appears that somehow the mention of this reference causes havoc. I have examined my bin(environment) directory under my project (I deploy for multiple environments and set up QA, UAT, PROD, etc. under solution configurations). Each time, the RDL is updated as it should be and reflects the changes I am making. I think 'rebuild' is a lot of the issue when people see their report files not updating on a server; in my case a rebuild usually gets the updates into the RDL, versus just hitting deploy first.
Through all of these changes, the hard part is that the report works seamlessly every time in BIDS. So the error is entirely about what the server believes the RDL data to represent.
Any help is much appreciated. I would rate myself advanced at SSRS, but this one has me stumped as to what the error is referencing that it is not getting.
I know this is an old question, but I just ran across this and was able to resolve my issue, so I thought an updated option was warranted for others struggling with it. My issue had to do with the parameter settings in the Shared Dataset properties.
Specifically, make sure that you check the "Allows null value" option where needed. This instantly resolved my issue where a dataset would not work when pointing to a shared dataset but worked when the dataset was embedded.
Okay, so the answer Jeroen and others proposed is half right. My issue was that my source code was under an older SVN source control and deployed to an SSRS 2008 server, and then we migrated the code base to TFS source control. The issue appears to be that the shared datasets were believed to have different identifiers than they actually had. The simple workaround, IN ADDITION to deleting the files, is to redeploy the shared datasets as well. In my case I went into my project settings and deployed them to an entirely different location under the report structure, keeping them in the same area: Reports/Datasets instead of just Datasets. This cleared up the issue in my case, so I believe this was just a perfect storm. When in doubt with SSRS, just delete everything and start from the ground up, I guess.