What are the benefits of using a symbol server instead of simply including the PDB files in the NuGet package?
The main advantage is that the package is much smaller, so restore times, disk utilization, and potentially deployment size are all reduced by default.
There is also automatic matching of a PDB to your DLL: if you deploy an application without the PDBs, the debugger can pull the right ones from the symbol server, whereas otherwise you would have to match them up manually.
For example:
The package developer creates a package with PDB files in it.
The app developer can debug with the PDBs in the package. So far so good.
When the app developer deploys the app, they omit the PDBs (because they are large and not necessary).
Several versions of the app have been deployed.
Now the app developer (or another person using the app) hits a problem in production or on a client machine.
By adding the symbol server URL to Visual Studio, the symbols are resolved automatically on the target machine, and the app developer does not have to bring over the right set of PDBs.
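As a concrete sketch of that workflow with today's tooling (the package name and API key are placeholders; the .snupkg format and nuget.org's symbol server are one way to set this up):

# Pack the library together with a separate symbols package (.snupkg)
dotnet pack -c Release -p:IncludeSymbols=true -p:SymbolPackageFormat=snupkg

# Push the package; nuget.org picks up the matching .snupkg automatically
dotnet nuget push bin\Release\MyLibrary.1.0.0.nupkg --source https://api.nuget.org/v3/index.json --api-key $env:NUGET_API_KEY

Consumers then add https://symbols.nuget.org/download/symbols as a symbol server under Tools > Options > Debugging > Symbols in Visual Studio.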
I'm working on a Service Fabric application that is deployed to Azure. It currently consists of only 5 stateless services. The zipped archive weighs in at ~200 MB, which is already becoming problematic.
By inspecting the contents of the archive, I can see the primary problem is that many files are required by all services. An exact duplicate of those files is therefore present in each service's folder. However, the zip compression format does not do anything clever with respect to duplicate files within the archive.
As an experiment, I wrote a little script to find all duplicate files in the deployment and delete all but one copy of each. Then I zipped the result, and it comes in at a much more practical 38 MB.
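For reference, a minimal sketch of such a dedup pass in PowerShell (the package path is hypothetical; this hashes every file, groups identical ones, and deletes all but the first copy in each group):

# Hypothetical path to the unpacked deployment package
$root = 'C:\deploy\MyAppPackage'

# Hash every file, group identical files, and remove all but one copy per group
Get-ChildItem -Path $root -Recurse -File |
    Get-FileHash -Algorithm SHA256 |
    Group-Object -Property Hash |
    Where-Object { $_.Count -gt 1 } |
    ForEach-Object { $_.Group | Select-Object -Skip 1 } |
    ForEach-Object { Remove-Item -LiteralPath $_.Path }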
I also noticed that system libraries are bundled, including:
System.Private.CoreLib.dll (12 MB)
System.Private.Xml.dll (8 MB)
coreclr.dll (5 MB)
These are all big files, so I'd be interested to know if there is a way for me to bundle them only once. I've tried removing them altogether, but then Service Fabric fails to start the application.
Can anyone offer any advice as to how I can drastically reduce my deployment package size?
NOTE: I've already read the docs on compressing packages, but I am very confused as to why their compression method would help. Indeed, I tried it and it didn't help. All it does is zip each subfolder inside the primary zip; there is no de-duplication of files involved.
There is a way to reduce the size of the package. I wouldn't call it a good way, or the way things should be done, but it can be of use in some cases.
Please note: This approach requires target machines to have all prerequisites installed (including .NET Core Runtime etc.)
When building a .NET Core app, there are two deployment models: self-contained and framework-dependent.
In self-contained mode, all required framework binaries are published alongside the application binaries, while in framework-dependent mode only the application binaries are published.
By default, if the project specifies a runtime, e.g. <RuntimeIdentifier>win7-x64</RuntimeIdentifier> in the .csproj, then the publish operation is self-contained; that is why every one of your services carries its own copy of the framework.
To turn this off, simply add the <SelfContained>false</SelfContained> property to every service project you have.
Here is an example from a new .NET Core stateless service project:
<PropertyGroup>
<TargetFramework>netcoreapp2.2</TargetFramework>
<AspNetCoreHostingModel>InProcess</AspNetCoreHostingModel>
<IsServiceFabricServiceProject>True</IsServiceFabricServiceProject>
<ServerGarbageCollection>True</ServerGarbageCollection>
<RuntimeIdentifier>win7-x64</RuntimeIdentifier>
<TargetLatestRuntimePatch>False</TargetLatestRuntimePatch>
<SelfContained>false</SelfContained>
</PropertyGroup>
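If you prefer, the same effect can be had per publish invocation from the CLI rather than in the .csproj (a sketch; the project name is a placeholder):

# Framework-dependent publish: application binaries only, no bundled runtime
dotnet publish MyService.csproj -c Release -r win7-x64 --self-contained false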
I did a small test and created a new Service Fabric application with five services. The uncompressed package size in Debug was around ~500 MB. After I modified all the projects, the package size dropped to ~30 MB.
The deployed application worked well on the local cluster, which demonstrates that this approach is a viable way to reduce package size.
In closing, I will highlight the warning one more time:
Please note: This approach requires target machines to have all prerequisites installed (including .NET Core Runtime etc.)
You usually don't want to know which node runs which service, and you want to deploy service versions independently of each other, so sharing binaries between otherwise independent services creates a very unnatural run-time dependency. I'd advise against that, except for platform binaries like ASP.NET and .NET, of course.
However, did you read about creating differential packages? See https://learn.microsoft.com/en-us/azure/service-fabric/service-fabric-application-upgrade-advanced#upgrade-with-a-diff-package - that would reduce the size of upgrade packages after the initial 200 MB hit.
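A rough sketch of preparing such a diff package, assuming the standard Service Fabric PowerShell cmdlets (all paths and names below are hypothetical, and the cluster connection setup is omitted): start from a copy of the full package, delete the Code/Config folders of unchanged services while keeping every manifest, and deploy that.

# Copy the full package, then strip the packages of services that have not changed;
# every manifest must remain in place (paths and names are hypothetical)
$pkg = 'C:\deploy\MyAppDiffPkg'
Copy-Item -Path 'C:\build\MyAppPkg' -Destination $pkg -Recurse
Remove-Item -Path "$pkg\UnchangedServicePkg\Code" -Recurse
Remove-Item -Path "$pkg\UnchangedServicePkg\Config" -Recurse

# Upload and register the diff package like any other
Copy-ServiceFabricApplicationPackage -ApplicationPackagePath $pkg -ApplicationPackagePathInImageStore 'MyAppDiff'
Register-ServiceFabricApplicationType -ApplicationPathInImageStore 'MyAppDiff'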
Here's another option:
https://devblogs.microsoft.com/dotnet/app-trimming-in-net-5/
<SelfContained>True</SelfContained>
<PublishTrimmed>True</PublishTrimmed>
From a quick test just now, trimming one app reduced the package size from ~110 MB to ~70 MB (compared to ~25 MB for SelfContained=false).
The trimming process took several minutes for a single application, though, and the project I work on has 10-20 apps per Service Fabric project. I also suspect that this process isn't safe when your code relies heavily on dependency injection, since the trimmer cannot see types that are only reached via reflection.
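For reference, a sketch of the equivalent CLI invocation (the project name is a placeholder):

# Self-contained publish with IL trimming enabled (can take several minutes per app)
dotnet publish MyService.csproj -c Release -r win-x64 --self-contained true -p:PublishTrimmed=true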
For debug builds we do use SelfContained=false, though, because developers will have the required runtimes on their machines. Not for release deployments.
As a final note, since the OP mentioned file upload being a particular bottleneck:
A large proportion of the deployment time is just zipping and uploading the package
I noticed recently that we were using the deprecated Publish Build Artifacts task when uploading artifacts during our build pipeline. It was taking 20 minutes to upload 2 GB of files. I switched over to the suggested Publish Pipeline Artifact task, and it took our publish step down to 10-20 seconds. From what I can tell, this newer task uses all kinds of tricks under the hood to speed up uploads (and downloads), including file deduplication. I suspect that zipping up build artifacts yourself at that point would actually hurt your upload times.
In setting up Sitecore 7.2 at my organization for our public-facing .com site, I have run into a hiccup while trying to implement proper CI, Release Management, and Deployment Management. Using MSBuild, I am able to compile my Sitecore MVC code, compile .update packages from TDS, and package each of these into .nupkg files for Octopus Deploy. What I am running into is that once I have deployed the MVC code, I must also deploy the Sitecore structure/content, which requires me to install the .update packages. I have tried the solution provided at https://github.com/adoprog/Sitecore-Deployment-Helpers, but for a fairly lightweight site this times out at around 20 minutes within Octopus Deploy for only my System package, let alone Structure or Content. I am looking for a way to install these packages, preferably through PowerShell (not necessarily the Sitecore PowerShell Extensions, which are built into the Sitecore web interface after installing that package). Using SPE would be acceptable if, and only if, I can use SPE's cmdlets from Octopus Deploy's PowerShell workflow.
Please advise.
Jason Bert has a great series of blogs on using Octopus Deploy with TeamCity and TDS for deploying to Sitecore instances:
http://www.jasonbert.com/2013/11/03/continuous-integration-deployment-with-sitecore/
You can also use TDS itself to deploy the items in the solution, but this uses direct calls to a webservice on the target Sitecore instance which may not meet with your requirements.
Also, are you deploying the entire System tree? 20 minutes to deploy changes made to the System tree seems unusual, unless you've made a LOT of changes in there (for example, the Dictionary). Even then, you shouldn't be source-controlling author content, only the elements crucial to the solution that are owned by development.
You can install the update package via the Sitecore utility at /sitecore/admin/UpdateInstallationWizard.aspx
If installing the package this way takes a lot of time, you might want to modify the Deployment Property Manager settings for the TDS project.
You can do this by right clicking your TDS project in Visual Studio and selecting "Deployment Property Manager".
Once the Deployment Property Manager window opens up, set the Deploy property to Once for every node which does not need to be updated. For any items which are to be updated, mark them as Always.
This will drastically save you on the time required to install the package.
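Regarding driving this from Octopus: if you do go the SPE route, SPE ships a remoting module that can be called from a plain PowerShell step. A minimal sketch, assuming SPE remoting is enabled on the target instance (the URL and credentials are placeholders):

Import-Module -Name SPE
# Placeholders: instance URL and credentials; remoting must be enabled server-side
$session = New-ScriptSession -Username 'admin' -Password 'b' -ConnectionUri 'https://my-sitecore-instance'
Invoke-RemoteScript -Session $session -ScriptBlock {
    # Any SPE cmdlet runs server-side inside this block
    Get-Item -Path 'master:\content\Home'
}
Stop-ScriptSession -Session $session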
I'm developing a WPF application that I deploy with ClickOnce to a network share on the intranet from which clients can install it.
I need to make sure that the user can't modify any of the application files (especially DLLs and the main executable) on their machine. That is, if any of the application files have changed, the application should refuse to run. I was under the impression that, when using ClickOnce, this was available out of the box and that the application would refuse to start if the file hashes didn't match the manifest.
However, I tried to manually replace the executable or a DLL with a slightly different version after installation and the application still ran fine (executing the modified code).
Does ClickOnce provide what I'm looking for?
How can I enable the functionality?
I'm using a level 2 StartSSL code-signing certificate to sign the application manifest if this matters.
P.S.: just to be sure: I'm talking about the installed application files, not the installation files.
You can sign AND strong-name each one of your DLLs to prevent tampering, but doing so has its own pain points when it comes to upgrades and distribution in general. Note that even doing so doesn't entirely prevent someone from injecting code into your running process. It's a sticky subject.
I recommend going through this question, which already discusses these points in detail: Does code-signing without strong-naming leave your app open to abuse?
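For what it's worth, a minimal sketch of Authenticode signing and verification with signtool (file names and password are placeholders); note that verification here is an explicit step, so nothing re-checks the signature automatically at run time:

# Sign the binary with a code-signing certificate
signtool sign /fd SHA256 /f MyCert.pfx /p MyPassword MyApp.exe

# Verify the signature; this fails if the file was modified after signing
signtool verify /pa MyApp.exe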
I think it will be a fairly manual process.
It doesn't look like the VS2013 deployment tools handle code obfuscation, but they do support signing and app permissions. Start with that; then you might have to take the generated manifest as a starting point to build your own with the obfuscated assemblies.
The MS docs break it into three steps: 1. obfuscate, 2. build the manifests, 3. manually publish.
Here is what MS docs say...
Securing ClickOnce Applications
Deploying Obfuscated Assemblies
You might want to obfuscate your application by using Dotfuscator to prevent others from reverse engineering the code. However, assembly obfuscation is not integrated into the Visual Studio IDE or the ClickOnce deployment process. Therefore, you will have to perform the obfuscation outside of the deployment process, perhaps using a post-build step. After you build the project, you would perform the following steps manually, outside of Visual Studio:
Perform the obfuscation by using Dotfuscator.
Use Mage.exe or MageUI.exe to generate the ClickOnce manifests and sign them. For more information, see Mage.exe (Manifest Generation and Editing Tool) and MageUI.exe (Manifest Generation and Editing Tool, Graphical Client).
Manually publish (copy) the files to your deployment source location (Web server, UNC share, or CD-ROM).
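As a rough sketch of step 2 with Mage.exe (all names, the version, and the certificate are placeholders):

# Regenerate the application manifest over the obfuscated output, create a matching
# deployment manifest, then sign both
mage -New Application -FromDirectory .\obfuscated -ToFile .\obfuscated\MyApp.exe.manifest -Name "MyApp" -Version 1.0.0.0
mage -New Deployment -AppManifest .\obfuscated\MyApp.exe.manifest -ToFile .\MyApp.application
mage -Sign .\obfuscated\MyApp.exe.manifest -CertFile MyCert.pfx -Password MyPassword
mage -Sign .\MyApp.application -CertFile MyCert.pfx -Password MyPassword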
We have WCF services deployed and running in the Azure cloud. We have changes in some DLLs and want to update them on the VM, but we don't want to go through the regular deployment/redeployment process.
We are thinking of manually copying the DLLs to the approot and sitesroot folders. Will it work?
Will the new DLLs be picked up if the VM restarts at any time in the future?
To answer your questions:
Will manually copying DLLs to the approot and sitesroot folders work: Yes (make sure you do this on each instance if you have multiple instances running)
Will these DLLs survive a reboot: Yes (see Reboot Role Instance: ... Any data that is written to the local disk is persisted across reboots. ...)
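If you do go down this road, the copy itself is trivial; a sketch, repeated on each instance (for example over an RDP session), with assumed paths:

# Copy the updated assemblies into both locations the role runs from
# (the drive letter and site folder vary per deployment; these paths are assumptions)
Copy-Item -Path 'C:\updates\*.dll' -Destination 'E:\approot\bin' -Force
Copy-Item -Path 'C:\updates\*.dll' -Destination 'E:\sitesroot\0\bin' -Force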
But I would suggest only doing this if you're planning to test some things while developing your service.
Do NOT plan to use this for production deployments, because if something goes wrong with your instance, the Fabric Controller might decide to destroy that instance and deploy a new one (same could apply for Windows Updates). This new instance would go back to the initial state of your deployment (the content of the cspkg you deployed).
To make your development deployments even easier you could also activate WebDeploy on your Web Role to deploy from Visual Studio: Enabling Web Deploy for Windows Azure Web Roles with Visual Studio (again, do not use this for real deployments, this is only for when you're testing out some things).
Note: Web Deploy will not work with multiple instances.
No, and this is not the way to go. If you want to be more dynamic, you have to take the approach of the Windows Azure Accelerator for Web Roles. Although the project is no longer supported or developed, it will give you a good foundation for dynamically loading assemblies (in this case, entire sites) from Blob storage.
How do you handle the deployment of a LightSwitch application into a production environment?
i.e. the LS application has been developed, but it now needs to be installed first into Test, and then into Live.
We don't want to use the "manual" approach, i.e. the Visual Studio Build / Publish option; rather, we want to automate the deployment.
My feeling is that deployment is one of the real weak points of LightSwitch. If you are using the very simple deployment model that is built into the product, and you're doing everything within a Windows domain, the publishing wizard can do everything. But if you're deviating from that model at all, LightSwitch will fight you. I'd really like to see an "advanced" deployment option that provided some configurability.
Here's how I solved the problem you're having with LightSwitch applications that are targeting web deployment:
At the beginning of the project, deploy once to each target environment using the publish wizard. This is the easiest way to get the database set up.
As new builds are deployed, use the publish wizard to create a deployment package in a standard location on the local development machine.
The deployment package is just a zip file, so you can open it and drill down to where the actual binary release is. I use a PowerShell script to copy the binary files out of the deployment package and into a local SVN working directory (a sketch of such a script follows below). Note that you must not copy the web.config file during this step.
Check the unpacked binary files into SVN and use SVN to manage the deployment.
Manage schema changes with SQL scripts.
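A minimal sketch of the unpack-and-copy script from the third step, assuming PowerShell 5+ for Expand-Archive (all paths, and the folder layout inside the package, are assumptions):

Expand-Archive -Path 'C:\packages\MyLsApp.zip' -DestinationPath 'C:\temp\MyLsApp'
# The binary release sits somewhere under the extracted tree; adjust this as needed
$binaries = 'C:\temp\MyLsApp\Content\Website'
$workingCopy = 'C:\svn\MyLsApp'
Get-ChildItem -Path $binaries -Recurse -File |
    Where-Object { $_.Name -ne 'web.config' } |
    ForEach-Object {
        # Mirror the relative path into the SVN working copy, skipping web.config above
        $relative = $_.FullName.Substring($binaries.Length).TrimStart('\')
        $target = Join-Path $workingCopy $relative
        New-Item -ItemType Directory -Path (Split-Path $target) -Force | Out-Null
        Copy-Item -Path $_.FullName -Destination $target
    }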