Business Intelligence Development Studio stuck Validating a task in a dtsx file - bids

I'm using MS Business Intelligence Development Studio.
This morning, when I open one of my dtsx files, BIDS just sits there, stuck validating one of the tasks in the particular dtsx file that I'm trying to open.
Are there things that I can do to fix this problem?
Has anyone encountered this before?
David

I would set the DelayValidation property to True on the package and on the control flow tasks, so that the package doesn't validate every task each time you open it. This speeds up opening packages, but any changes to connections or schema will then only be validated at run time. Note that if you click on a task that uses a connection or schema, it will still validate even with DelayValidation set to True.

As the other responders suggest, you want to select Work Offline from the SSIS menu, then you want to open your SSIS project.
However, when you open Business Intelligence Development Studio, there may not be an SSIS menu. If that's the case, create a throw-away SSIS project (File | New | Project | Integration Services Project). Once the throw-away SSIS project is open, there will be an SSIS menu, from which you must select 'Work Offline'.
Next, from the Solution Explorer, right-click on SSIS Packages and select Add Existing Package. Specify File System in the Package location control, then specify the file path to the DTSX file in the Package path control.
At this point, you can open your intended SSIS package without the excruciating delay.

Related

BIML Studio vs Visual Studio

We are creating a new data warehouse using SSIS and are looking at BIML Studio. I know that for BIML Express I need Visual Studio, but for BIML Studio it seems that we don't even need Visual Studio if we develop our entire ETL with BIML Studio. Is this correct or do I still need Visual Studio in some way?
Standard consulting answer: "It depends"
Your BimlStudio workflow is probably going to be a few BimlScript files that contain your core logic, and then there are the generated artifacts. As an example, here's a shot of the Logical view of my current Biml project. It's a large and still growing "export procs to fixed width files" solution for a client.
Since I need to write to a flat file, each package needs a Flat File Connection Manager, and a Flat File Connection Manager needs a Flat File Format definition. So, 1 logical entity requires 3 Biml artifacts (at least for how I'm building it).
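For illustration, a rough and incomplete sketch of what that trio can look like in Biml (element and attribute names are from memory; the column names, lengths and file path are made up rather than taken from this project, and the fixed-width specifics are omitted):

    <Biml xmlns="http://schemas.varigence.com/biml.xsd">
      <FileFormats>
        <!-- the file format definition: columns, types, widths -->
        <FlatFileFormat Name="FFF_Customer">
          <Columns>
            <Column Name="CustomerId" DataType="AnsiString" Length="10" Delimiter="" />
            <Column Name="CustomerName" DataType="AnsiString" Length="50" Delimiter="CRLF" />
          </Columns>
        </FlatFileFormat>
      </FileFormats>
      <Connections>
        <!-- the connection manager that binds the format to an output file -->
        <FlatFileConnection Name="FF_Customer" FileFormat="FFF_Customer" FilePath="C:\Exports\Customer.txt" />
      </Connections>
    </Biml>

The third artifact is the package itself, which writes to the file through that connection.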
What you see in the BimlScripts and Connections folder are what run the project (plus the custom metadata repository aka "one big table").
The black circle next to 01_FFF.biml means that's a "live" Biml so every time I make a change to my metadata or the underlying file, whoosh out comes 45 File Format entries (project view)
All of this is great but eventually, I need to translate what's in my Integration Services node (1 project, 45 Packages) into a deliverable.
What's your deliverable?
Right-clicking on the project gives me 3 options: Build, Build & Run, Build & Open in SSDT
Build - this results in a .ispac file being created. That's the quantum for pushing a project deployment model into the SSISDB
Build & Run - Honestly, I don't know what this option does. I should check the book https://link.springer.com/book/10.1007/978-1-4842-3135-7
Build & Open in SSDT - This results in everything you need to interact with the project in Visual Studio (a .dtproj file, Project.params, any Project level .conmgr files and all the associated .dtsx files)
When would I need Visual Studio?
Debugging. I'm pretty good at this stuff but even I miss some settings for things I don't have solid patterns for. For example, this project uses fixed width file formats, and in early iterations defects were opened because the files weren't correct: the default file encoding was Unicode, despite each individual column being defined as DT_STR (non-Unicode). Little stuff like that is much easier to find and fix back in BimlStudio. Otherwise, you're trying to debug the results of a package execution, but if you had known the pattern was wrong, you wouldn't have built the bug into it in the first place.

Azure Pipelines: building a C++ project with outside "Include Directories"

I tried searching with as many different terms as I could and couldn't find exactly what I'm looking for.
I have a C++ Project developed in Visual Studio 2019 and I am trying to build and deploy it in Azure Pipelines. It uses Boost and OpenCV. I skipped trying to include these in Azure Artifacts because of a rabbit hole with Azure CLI errors that took me almost half a day.
So it seems that there is a task to publish pipeline artifacts in the .yml file. How do I do this when my project needs to reference a certain directory, instead of one specific file or .dll? Here are images for how this is configured in Visual Studio:
[Image: include directory settings for Boost]
[Image: include directory settings for OpenCV]
Edit: Still trying, see my comment. Thinking about switching over to CircleCI.
I found out what to do. Hopefully no one else wastes as much time as I did.
The key was MSBuild. One first needs to find out the values of $(IncludePath) and $(LibraryPath) by doing the following in Visual Studio:
Right-click on your project, choose "Properties"
Go to the Build Events tab, and click "Pre-Build Event"
Click on and expand the Command Line row, and click "Edit"
Now click the button that says "Macros>>"
You will see a bunch of different variables and their values. Find the values for LibraryPath and IncludePath, then copy and paste them into a text file.
Now, assuming you already set up a local agent, follow these steps:
Put the text file in the root folder of where your agent is installed. For me, this was "C:\agents"
Have the first line be "LibraryPath=value" and the other line be "IncludePath=value". Use double slashes for the directory paths.
Rename the file to .env. If the agent is currently running, restart it so it can read in the environment variables it will use during your build.
In the MSBuild task of your pipeline, specify arguments (see the sketch after these steps). For my case, it was simply this: /p:IncludePath="C:\Program Files\boost_1_77_0;$(IncludePath)" /p:LibraryPath="$(LibraryPath)"
Run the pipeline. You can check your completed build on the local machine. For me, the path it kept going to was "C:\agents\_work\2\s"
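For reference, a minimal sketch of steps 1-4; the file contents, solution name and the use of the MSBuild@1 task are assumptions based on my own setup. The .env file in the agent root is just name=value lines (the values are whatever you copied from the Visual Studio macros, with double slashes):

    IncludePath=C:\\CopiedValueOfTheIncludePathMacro
    LibraryPath=C:\\CopiedValueOfTheLibraryPathMacro

And the build step in azure-pipelines.yml:

    # paths and solution name below are placeholders
    - task: MSBuild@1
      inputs:
        solution: 'MyProject.sln'
        platform: 'x64'
        configuration: 'Release'
        msbuildArguments: '/p:IncludePath="C:\Program Files\boost_1_77_0;$(IncludePath)" /p:LibraryPath="$(LibraryPath)"'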

How to upload Parent child SSIS package to server

Hi all, I am very new to SSIS. I have an SSIS package developed by someone else; this package reads data from flat files and, after mapping, stores the data in a database.
Flow:
1) The first package extracts records from a flat file and stores them in a table.
2) Then it calls a child package using an Execute Package task.
3) The child package then does some calculations and updates the database table.
The package uses an environment variable to get the database information.
Everything is working fine, but now I want to deploy this package to my client's server.
Question: Do I need to copy the files from the bin folder and paste them on the client's machine?
What I tried: I copied the files from the bin folder and placed them on my local computer. Then I created a job in MS SQL Server and ran it. The package ran perfectly. But later I changed the location of my project files and the job stopped working.
Issue: The error says the location of the child package is not available (because I changed the location of my project files).
Kindly suggest what to do.
I am going to make several assumptions here, so please correct me if I get any wrong.
My guess is that in your Package.dtsx the connection manager for the child package is currently linked to the package location within the project folder. You want to move the packages to another location, but the connection manager is still pointing to the project location.
If I were you I would do the following:
Create a string variable:
PackageFolderPath - C:\CurrentPackagePath\DBPackage.dtsx
Now go to the child package's connection manager and, under its properties, add an expression on ConnectionString with the following: @[User::PackageFolderPath]. If you evaluate the expression it should give you the location you set up in your variable.
Please note, however, that if you want this to keep working on the development system, leave the variable set to the project location there.
Now once you have those set up, copy the files across to the new server, and in the SQL Agent job go to the Set Values tab and add the following:
\Package.Variables[User::PackageFolderPath].Properties[Value]
Under the value, put wherever the child package is now located.
This should now pick up the new location of the package when it is run.
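If you want to test the override outside of the SQL Agent job, the same property path can be passed to dtexec with /SET (the paths here are hypothetical):

    dtexec /FILE "D:\Deployed\ParentPackage.dtsx" /SET "\Package.Variables[User::PackageFolderPath].Properties[Value];D:\Deployed\DBPackage.dtsx"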
A better way to do this would be to make use of the deployment utility and an XML configuration on the package. However, this way should work.

TFS Source Control Uncompressed Build

Is there a way to create a "build" but not actually compile the output of the site? Basically I want to push the files live from source control in TFS to the final IIS folder destination.
I have used CopyDirectory on my other project builds, but that requires a BuildDetail.DropLocation (compiled Build). Maybe there is another option for the CopyDirectory Source I could use that wouldn't require a build DropLocation.
To simplify, I want to copy files directly from a tfs Source control to a folder, using a Build Template, but that won't compile the files. Is that possible?
While the default .xaml build workflow for Team Foundation Build is indeed a compilation build, it does not have to be. I usually recommend that teams have at least one compile and one deploy .xaml workflow.
1) The CompileMyStuff.xaml (DefaultBuildTemplate.xaml) should take my stuff from source control and do whatever is needed to create a Build Drop folder with my output. I may or may not need to actually compile before creating the drop, and it looks like you just want to copy to the drop location.
2) The DeployMyStuff.xaml should take a build number and deploy my code to an environment of my choice.
It looks like you want to skip the "Drop" and go straight to deploy. While I would never recommend this, you do have a "BuildDetail.BuildLocation" for the local workspace where the build server has done a get of your code. You can just "CopyDirectory" from there to your server/host for the website.
If you are having a little trouble you could use the Community Build Extensions and fire up PowerShell to do your copy/deploy.
I figured out a solution to this problem. I created a new .xaml file and the only item I put in the sequence was "DownloadFiles". Then I filled out the properties of the task, ran a "build", and it worked.

How to use version control with JasperReports

We're about to start development of a number of reports using Jasper Server Reports version 3.7.0 CE.
Does anyone have any recommendations as to how best to manage version control with this development, given that the structure of the report units is managed in the database and through either iReport or the web front end?
In fact you can import/export to a directory structure using the js-import/js-export scripts, but then you can't edit these files directly with iReport.
Does anyone have any pointers?
This is problematic. I have established a subversion repository to allow standard reports delivery to be versioned but it is a real pain because jasper does not make this even a little bit easy.
I created a maven project with an assembly descriptor so that "src/main/xml/resources/Reports,adhoc,Domains, etc" can be packaged up in a zip that is pushed to our maven repository.
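A minimal assembly descriptor for that kind of packaging looks roughly like this (the id and output layout are illustrative, not the exact file):

    <assembly>
      <id>jasper-resources</id>
      <formats>
        <format>zip</format>
      </formats>
      <fileSets>
        <fileSet>
          <!-- package the exported jasper resource tree as-is -->
          <directory>src/main/xml/resources</directory>
          <outputDirectory>/</outputDirectory>
        </fileSet>
      </fileSets>
    </assembly>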
The biggest problem is that you can't develop ad hoc reports and input controls merely by modifying XML files. The developer has to import what is in source control into a working jasper server, then modify the reports or add new ones (after making sure that his organization and datasources are configured). Once he's satisfied that the reports work, he has to export the resources to a directory or zip file and manually modify all references in the exported files from datasource- and organization-specific resource locations back to "generic" before checking in his changes.
When importing into jasper, the same process has to be done in reverse. The generic paths and organization values have to be converted to the developer's organization so they can be easily imported/updated and he can prove out that the full "round trip" works properly before checking in.
To make the export/subversion checkin easier, I created an ant build file which lives in the maven project's root dir. The build prompts (or will read a properties file) to determine the exported zip location and the organization id of the exported tree. It then opens the exported zip file from jasper, explodes it, performs text replacements on the files, resets the "createdDate" and "updatedDate" elements to something standard (so that the developer does not end up checking in files that haven't actually changed, since jasper does not preserve the date values), and then copies the files into the subversion tree.
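A stripped-down sketch of that target (the property names and replacement patterns are invented for illustration; the real build prompts for them and does more):

    <target name="prepare-checkin">
      <!-- explode the zip exported from jasper -->
      <unzip src="${export.zip}" dest="${work.dir}"/>
      <!-- swap organization-specific resource paths back to a generic placeholder -->
      <replaceregexp match="organizations/${org.id}" replace="organizations/GENERIC" flags="g">
        <fileset dir="${work.dir}" includes="**/*.xml"/>
      </replaceregexp>
      <!-- reset volatile dates so unchanged files don't look modified -->
      <replaceregexp match="&lt;createdDate&gt;[^&lt;]*&lt;/createdDate&gt;"
                     replace="&lt;createdDate&gt;2010-01-01T00:00:00&lt;/createdDate&gt;" flags="g">
        <fileset dir="${work.dir}" includes="**/*.xml"/>
      </replaceregexp>
      <!-- copy the cleaned tree into the subversion working copy -->
      <copy todir="${svn.dir}">
        <fileset dir="${work.dir}"/>
      </copy>
    </target>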
For the import process (from the subversion tree into jasper) we have a script that takes as input the organization id and then modifies the versioned xml files to the appropriate values so that the entire tree can be easily imported/updated into their organization.
The reason this level of complexity is required is to allow us to create the same standard reports in a multi-tenant environment, plus jasper's notion of deploying reports is absolutely bizarre. I'm not sure it would be possible to make this process more difficult if you were intending to do so.
If I were in your position I would establish this kind of process (sketched in commands after the list):
end of development session: export all reports to a directory structure in a project under version control
commit the project
before next development session: synchronize the project with svn repository
import directory structure to Jasper Server Reports
continue development
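As a rough sketch of that loop using the import-export scripts (repository URIs and local paths are placeholders, and exact flags vary a little between releases):

    # end of development session: dump the repository subtree into the working copy
    js-export.sh --uris /reports --output-dir ./jasper-resources
    svn commit -m "report changes"

    # before the next session: refresh the working copy and push it back to the server
    svn update
    js-import.sh --input-dir ./jasper-resources --update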
Not sure if someone has already posted the solution.
This is what I have done for existing reports.
export reports from jasper server
rename the exported files from .data to .jrxml (see the sketch after these steps)
modify the subreport references to include the extension (e.g. in A.jrxml the subreport name should be B.jrxml)
add .jrxml to the dataFile, label and name elements in the report unit XML files.
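A sketch of the rename step, assuming a Unix-like shell and that the export lives under ./export (adjust to your layout):

    # rename every exported .data resource to .jrxml before committing
    find ./export -name '*.data' | while read f; do mv "$f" "${f%.data}.jrxml"; done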
If you are creating a new report on jasper server, it's simple:
give .jrxml in the name and label while adding the jrxml file. That's it.
Now you can work on the same files locally and import them back to jasper server.