What's the difference between a "release" and a "master" in VS? - unity3d

"Visual Studio Express 2013 for Windows" allows me to compile debug, release or master. While the difference between debug and release is known, I've never heard of a master compilation option. What does it do different from release?

The "master" configuration is from Unity, as you stated in your comment and it is used to submit your app to the Store and to remove profiler support.
There are three build configurations you can choose from. Debug should obviously be used to debug your scripts. Release optimizes code for better performance. Master configuration build should be used to submit your app to the Store. It has profiler support stripped out.
For more information, see Building and deploying a Unity Visual Studio solution (HoloLens)

No, the term "master" never appears in any of the project templates in VS2013 as a solution configuration name. The ones that ship with Express will only ever generate a Release and Debug configuration. Visible in the Build + Configuration dialog, upper left combobox.
It is not entirely impossible that you are using a project template created by somebody else; Visual Studio makes that very simple with File + Export Template. Adding a configuration with a different name is certainly possible, but it is unguessable what that programmer intended it to mean. It is a bit of a stretch, though, since Express does not support customizations like this.
The only place where the term "master" appears in the standard project templates is in web applications, where a "master page" is a template design from which all the other pages in the web application are generated. It is pretty useful, since it helps to create a consistent look-and-feel in the web site design. That, however, has nothing to do with solution configurations.

Related

Ionic + IBM MobileFirst

I've found a few posts on this topic but have not been able to find the best solution.
I've attempted to integrate Ionic into IBM MobileFirst (Worklight).
At the moment I have built a normal Ionic project and moved the www folder into the 'common' folder. I've also added the initOptions, main.js and messages.js files.
MobileFirst has an awful build process - I hate having to deploy to a MobileFirst development server and preview the app for any code change. I am hoping to get some type of auto reload working within MobileFirst, or at least develop with Ionic normally and have a job to bring my changes into my Worklight project... something that is better than my current situation.
Does anyone have a sample project that actually auto-builds or picks up code changes automatically?
Any and all help is greatly appreciated.
Not sure what you mean by "auto-reloading"; if you make any changes to the web resources of your project inside the Studio plug-in (in Eclipse) and reload the preview in the browser, it will show the changes.
You are not required to Run As > Run on MobileFirst Development Server for each change. As long as you work on the resources in your workspace, the "auto-reloading", as you call it, should work (make sure you are using the latest available MobileFirst Studio version from the Eclipse Marketplace).
There is also a rudimentary Starter Application that is based on Ionic.
You can download it from here.
There are also several results on the subject matter when searching in Google.
The need to rebuild in order to see changes in your web components (CSS, JavaScript, HTML) used to be an annoyance in early versions of what was then Worklight and is now MobileFirst. I forget when the need for a rebuild was removed, but certainly in Worklight 6.2 and beyond you now simply need to refresh in your browser.
UPDATE: If using MobileFirst 6.3 you need to ensure that you are on a suitable patch level. I find that a simple refresh does not work in 6.3.0.00-20150106-1717, but if I update (Help -> Check for Updates) to 6.3.0.00-20150214-1702 then edit/save/refresh works as expected.
My personal practice is always to have the Mobile Web environment in my project and then choose that from the Console. This loads the application in the browser-based Mobile Simulator, which you can tailor to fit your target form factor. It has a "Go/Refresh" button that immediately reflects your edits.
Alternatively, some folks these days do not use Studio at all; instead they use the Command Line Interface (CLI). Possibly this may be more to your taste. You can download it here.
There is a solution using tools like the ionic-cli serve command plus symbolic links that replace the common folder.
See an example here: https://www.dropbox.com/s/4pvaulo6yo47kb9/lab_7.2.mp4?dl=0
(you can just disable the sound, since I recorded it in Russian); the relevant part is at minutes 7-15 of the video.
Another option is to organize the live preview yourself using IDE features and/or Node.js.
This will work as long as you are working on the front-end (mostly non-Worklight API) part.
You need to include these lines in the index.html:
<!-- ionic bundle & css -->
<link href="www/ionic/css/ionic.css" rel="stylesheet">
<script src="www/ionic/js/ionic.bundle.js"></script>

Source control for MS Dynamics CRM

I'm undecided about CRM at the moment. It's a great tool for the business users, but so far for development it's been a bit against the grain. The next problem I need to tackle is how to easily source-control the JavaScript used within forms. We use TFS for our source control.
Anyone had an experience or have any ideas on how to do this?
Obvious choice would be to copy and paste the JS in to your source control, but it's also an obvious pain in the rear.
A couple of things that we do in our projects:
We use the Web Resource Utility included with the CRM SDK (actually a modified version of it) to deploy JavaScript web resources to a particular solution. It makes it very easy to keep script files checked into source control as normal and avoid copying and pasting.
We wrote a custom HTTP Module that we use on local deployments. It intercepts requests for JavaScript libraries and redirects them to a location on local disk. That way, we don't have to actually redeploy the web resources as we test, just the JavaScript files to disk. (Note that this would be unsupported in a production environment. We just do it in our development environments to ease the pain of JavaScript deployment).
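A minimal sketch of what such a redirect module might look like, assuming classic ASP.NET; the folder path, the .js-only filter, and the class name are illustrative assumptions rather than details of the actual module:

using System;
using System.IO;
using System.Web;

// Hypothetical module: intercept requests for JavaScript web resources and
// serve the copy on local disk instead, so script edits show up without
// redeploying the web resource (development environments only).
public class LocalScriptModule : IHttpModule
{
    private const string LocalScriptRoot = @"C:\Dev\CrmWebResources"; // assumed path

    public void Init(HttpApplication context)
    {
        context.BeginRequest += (sender, e) =>
        {
            var app = (HttpApplication)sender;
            string path = app.Request.Url.AbsolutePath;

            // Only touch requests that look like JavaScript web resources.
            if (!path.EndsWith(".js", StringComparison.OrdinalIgnoreCase))
                return;

            string localFile = Path.Combine(LocalScriptRoot, Path.GetFileName(path));
            if (File.Exists(localFile))
            {
                app.Response.ContentType = "application/javascript";
                app.Response.TransmitFile(localFile);
                app.CompleteRequest();
            }
        };
    }

    public void Dispose() { }
}

The module would be registered in web.config for local deployments only and left out of production, since this technique is unsupported there.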
I answered a very similar question here - Version Control for Visual Studio projects and MS Dynamics CRM (javascript)
My choice for source control is TFS, holding each of the CRM 2011 JScript libraries.
We try to mirror the file structure that Dynamics uses for Web resources in a basic Library project. So version control works as normal, we just don't use the output from the project.
You can also try using the new "CRM Solution" project template (installed from the SDK) and have the ability to deploy from the context menu of the project.
I've had some issues with the template but something to check out.
Hope this helps.
You can take a look at my answer to my own question here.
The MS Dynamics CRM 2011 SDK has the SolutionPackager.exe utility, which can split all CRM resources into a file tree, and you can store them in either Git or TFS.
Any web resource in CRM 2011 is a pain to manage. We just end up doing a lot of copying and pasting in and out of TFS 2010 (which has actually caused some problems with bad pastes).
Currently out of the box there isn't an easy way to do it.
Only worry about this if you really need the ability to go back to old versions of web resources. I've found that I don't often have to do this. Remember that the web resources are stored in SQL Server just like they would be if you put them in TFS, so as long as your CRM database is being backed up, you won't lose the web resources. In traditional development, it is important to keep the source in TFS because you can't easily get back to it once you compile and release. With CRM development, your web resources are mostly HTML or JavaScript, so you can always get at the source.
If you really need version control, why not build a quick little console app that downloads all customizations every night and stores that zip file in TFS? True, it wouldn't be as easy to get at older versions, but you should gain a lot of productivity by not having to manually keep TFS in sync. This also has the benefit of storing all customizations in TFS, not just web resources.
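A rough sketch of such a console app, assuming the CRM 2011+ SDK assemblies are referenced, an IOrganizationService connection is created elsewhere, and the solution to export is the unmanaged "Default" solution - those specifics are assumptions, not details from the post:

using System;
using System.IO;
using Microsoft.Crm.Sdk.Messages;
using Microsoft.Xrm.Sdk;

public static class NightlyCustomizationExport
{
    public static void Run(IOrganizationService service, string outputFolder)
    {
        // Ask CRM to export the solution that holds the customizations.
        var request = new ExportSolutionRequest
        {
            SolutionName = "Default", // hypothetical solution name
            Managed = false
        };
        var response = (ExportSolutionResponse)service.Execute(request);

        // Write the zip with a date stamp so each night's export is distinct.
        string fileName = Path.Combine(outputFolder,
            string.Format("customizations_{0:yyyyMMdd}.zip", DateTime.UtcNow));
        File.WriteAllBytes(fileName, response.ExportSolutionFile);

        // Checking the resulting zip into TFS (e.g. with tf.exe add/checkin)
        // would be a separate step in the same scheduled job.
    }
}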
Silverlight is the obvious exception here - I would definitely store Silverlight web resource source code in TFS, because it is a "compiled" web resource. You are already in Visual Studio, so TFS is a natural fit anyway.
Hope that helps!

Azure web.config per environment

I have an Azure project (Azure 1.3) in VS2010. There are two web roles: one web page project and one WCF project. In debug mode I want the web project to use a web.config for the DEV environment, and when publishing, the web.config for PROD must be used.
What is the best way to do this ?
Currently I am facing issues when using a Web.Debug.config with transform XSLT. It doesn't seem to work in Azure....
Solve your problem a different way. Think about the web.config always being static and never changing when working with Azure. What does change is your ServiceConfiguration.cscfg.
What we have done is created our own configuration provider that first checks the ServiceConfiguration.cscfg and then falls back to the web.config if the setting/connection string isn't there. This allows us to run servers in IIS/WCF directly during development and then to have different settings when deployed to Azure. There are some circumstances where you have to use web.config (yes, I'm referring to WCF here), and in those cases you have to write code and create convention instead of storing everything in web.config. I have a blog post where I show an example of how I did this when dealing with WIF (Windows Identity Foundation) and Azure.
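A minimal sketch of such a fallback provider, assuming the Microsoft.WindowsAzure.ServiceRuntime assembly is referenced; the helper name and error handling are illustrative only:

using System.Configuration;
using Microsoft.WindowsAzure.ServiceRuntime;

public static class ConfigReader
{
    public static string GetSetting(string key)
    {
        // Prefer ServiceConfiguration.cscfg when running under the Azure
        // role environment (compute emulator or cloud).
        if (RoleEnvironment.IsAvailable)
        {
            try
            {
                string value = RoleEnvironment.GetConfigurationSettingValue(key);
                if (!string.IsNullOrEmpty(value))
                    return value;
            }
            catch (RoleEnvironmentException)
            {
                // Setting not defined in the .cscfg; fall through to web.config.
            }
        }

        // Outside Azure (plain IIS/WCF during development) or setting missing:
        // fall back to the web.config / app.config value.
        return ConfigurationManager.AppSettings[key];
    }
}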
I agree with Mose, excellent question!
Visual Studio 2010 includes a solution for this type of problem, web.config transforms. If you look at your web role you'll notice it includes Web.Debug.config and Web.Release.config along with the traditional web.config. These files are used to transform the web.config during deployment.
The canonical example is "I need different database connection strings for development and release" but it also fits your situation.
There is an excellent blog post from the Visual Web Developer Team that explains how to use this feature (don't bother with the MSDN docs, I know how it works and still don't understand the docs). Check out http://blogs.msdn.com/b/webdevtools/archive/2009/05/04/web-deployment-web-config-transformation.aspx
I like this question !
For worker roles, I solved this problem by detecting the environment at runtime and launching my 'application' in a new AppDomain with a custom configuration:
bot.cloud.config
bot.dev.config
bot.win.config
This is incredibly efficient!
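A rough sketch of that technique, using two of the config files listed above; the WorkerEntryPoint type and the way the environment is detected are assumptions for illustration:

using System;

public static class BotLauncher
{
    public static void Run(bool runningInAzure)
    {
        // Point the new AppDomain at the environment-specific config file.
        var setup = new AppDomainSetup
        {
            ApplicationBase = AppDomain.CurrentDomain.BaseDirectory,
            ConfigurationFile = runningInAzure ? "bot.cloud.config" : "bot.dev.config"
        };

        AppDomain domain = AppDomain.CreateDomain("BotDomain", null, setup);

        // Run the real startup code inside the new domain so it reads the
        // selected configuration file instead of the default one.
        domain.DoCallBack(WorkerEntryPoint.Start);
    }
}

public static class WorkerEntryPoint
{
    public static void Start()
    {
        // Placeholder for the actual application startup.
        Console.WriteLine("Config in use: " +
            AppDomain.CurrentDomain.SetupInformation.ConfigurationFile);
    }
}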
I'd like to do the same with web projects, because using the Azure-specific configuration is a lot of trouble:
Both configs are not in the same place, which is time-consuming when debugging.
You have to learn a new way of writing something that should be standard.
Sometimes you'll wonder if the app fell back to web.config because of a stupid syntax error.
I'm still searching for the right way to do that, like in this post.
Another possible solution is to have two CloudService projects, each one with a specific ServiceConfiguration.cscfg (dev/prod). Develop using the Dev one, but deploy the Prod one.
"Currently I am facing issues when using a Web.Debug.config with transform XSLT. It doesn't seem to work in Azure...."
It depends on whether you want to make it work on your local machine or inside continuous integration.
For the local machine I tried to answer here: https://stackoverflow.com/a/9393533/182371
For the continuous integration it's even easier. When you build from the command line specifying the Configuration property value your configs WILL be transformed (no matter what it does when you build inside VS). So properly specifying build configurations for both cloud and web project will give you the correct output depending on build parameters.

How to get .NET 2.0 web app into production, using which tools, and why use those tools and methods over other options?

With VisualStudio Publish, CruiseControl.NET, MSBuild, aspnet_compiler.exe, and Web Deployment Projects out there, how would one know which tool to use to ultimately get a .NET 2.0 web application into a testing/production environment?
With .NET 1.1, I simply copied all files over to the server's directory and set it to a configured virtual directory in IIS. Unless I am really missing something, it seemed to work just fine. Now I'm reading about how important it is to put some good thought into 2.0 deployment, and the more I read, the more confused I get.
Please break down how to choose which tool to use, and why you would use that tool. If more than one tool is needed, please identify how they relate to this process.
CC.NET is for continuous integration; it can build your setup projects as artifacts, but that is not its main purpose. MSBuild is the Microsoft build system -- again, not related to deployment. aspnet_compiler compiles your web sites, which may make deployment easier, but is not in itself deployment.
Web Deployment Projects are what you should be looking at. Here's a decent little post that goes over some of the options for deployment, and a reference from MSDN. There are also commercial products.
In most cases, you can right-click on project in VS.NET and choose "Publish". This will give you a few options for deploying via FTP or file path.
(Screenshot: the Publish Web dialog - http://img26.imageshack.us/img26/1261/screencfl.png)
What we do is publish to an SVN repository, then run SVN UPDATE on the machines it needs to go to...
I use TeamCity, which implements:
Rebuilding the solution with devenv.exe on the command line.
Changing settings in web.config (connection strings and debug mode) with sed.exe.
Precompiling the web site with aspnet_compiler on the command line.
Copying the solution to FTP (with an internal tool).

How to version control the build tools and libraries?

What are the recommendations for including your compiler, libraries, and other tools in your source control system itself?
In the past, I've run into issues where, although we had all the source code, building an old version of the product was an exercise in scurrying around trying to get the exact correct configuration of Visual Studio, InstallShield and other tools (including the correct patch version) used to build the product. On my next project, I'd like to avoid this by checking these build tools into source control, and then build using them. This would also simplify things in terms of setting up a new build machine -- 1) install our source control tool, 2) point at the right branch, and 3) build -- that's it.
Options I've considered include:
Copying the install CD ISO to source control - although this provides the backup we need if we have to go back to an older version, it isn't a good option for "live" use (each build would need to start with an install step, which could easily turn a 1 hour build into 3 hours).
Installing the software to source control. ClearCase maps your branch to a drive letter; we could install the software under this drive. This doesn't take into account the non-file parts of installing your tools, like registry settings.
Installing all the software and setting up the build process inside a virtual machine, storing the virtual machine in source control, and figuring out how to get the VM to do a build on boot. While we capture the state of the "build machine" with ease, we get the overhead of a VM, and it doesn't help with the "make the same tools available to developers" issue.
It seems such a basic idea of configuration management, but I've been unable to track down any resources for how to do this. What are the suggestions?
I think the VM is your best solution. We always used dedicated build machines to get consistency. In the old COM DLL Hell days, there were dependencies (COMCAT.DLL, anyone?) on non-development software being installed (Office). Your first two options don't solve anything that has shared COM components. If you don't have any shared-component issues, maybe they will work.
There is no reason the developers couldn't take a copy of the same VM to be able to debug in a clean environment. Your issues would be more complex if there are a lot of physical layers in your architecture, like mail server, database server, etc.
This is something that is very specific to your environment. That's why you won't see a guide to handle all situations. All the different shops I've worked for have handled this differently. I can only give you my opinion on what I think has worked best for me.
Put everything needed to build the application on a new workstation under source control.
Keep large applications out of source control - stuff like IDEs, SDKs, and database engines. Keep these in a directory as ISO files.
Maintain a text document, with the source code, that has a list of the ISO files that will be needed to build the app.
I would definitely consider the legal/licensing issues surrounding the idea. Would it be permissible according to the various licenses of your toolchain?
Have you considered ghosting a fresh development machine that is able to build the release, if you don't like the idea of a VM image? Of course, keeping that ghosted image running as hardware changes might be more trouble than it's worth...
Just a note on the versioning of libraries in your version control system:
it is a good solution, but it implies packaging (i.e. reducing the number of files of that library to a minimum)
it does not solve the 'configuration aspect' (that is, "what specific set of libraries does my '3.2' project need?").
Do not forget that this set will evolve with each new version of your project. UCM and its 'composite baseline' might give the beginning of an answer for that.
The packaging aspect (minimum number of files) is important because:
you do not want to access your libraries through the network (like through a dynamic view), because the compilation times are much longer than when you use locally accessed library files.
you do want to get those libraries on your disk, meaning a snapshot view, meaning downloading those files... and this is where you might appreciate the packaging of your libraries: the fewer files you have to download, the better off you are ;)
My organisation has a "read-only" filesystem, where everything is put into releases and versions. Releaselinks (essentially symlinks) point to the version being used by your project. When a new version comes along it is just added to the filesystem and you can swing your symlink to it. There is full audit history of the symlinks, and you can create new symlinks for different versions.
This approach works great on Linux, but it doesn't work so well for Windows apps that tend to like to use things local to the machine such as the registry to store things like configuration.
Are you using a continuous integration (CI) setup, with a build tool like NAnt, to do your builds?
As a .Net example, you can specify specific frameworks for each build.
Perhaps the popular CI tool for whatever you're developing in has options that will allow you to avoid storing several IDEs in your version control system.
In many cases, you can force your build to use compilers and libraries checked into your source control rather than relying on global machine settings that won't be repeatable in the future. For example, with the C# compiler, you can use the /nostdlib switch and manually /reference all libraries to point to versions checked in to source control. And of course check the compilers themselves into source control as well.
Following up on my own question, I came across this posting referenced in the answer to another question. Although it is more of a discussion of the issue than an answer, it does mention the VM idea.
As for "figuring out how to build on boot": I've developed using a build farm system custom-created very quickly by one sysadmin and one developer. Build slaves query a taskmaster for suitable queued build requests. It's pretty nice.
A request is 'suitable' for a slave if its toolchain requirements match the toolchain versions on the slave - including what OS, since the product is multi-platform and a build can include automated tests. Normally this is "the current state of the art", but doesn't have to be.
When a slave is ready to build, it just starts polling the taskmaster, telling it what it's got installed. It doesn't have to know in advance what it's expected to build. It fetches a build request, which tells it to check certain tags out of SVN, then run a script from one of those tags to take it from there. Developers don't have to know how many build slaves are available, what they're called, or whether they're busy, just how to add a request to the build queue. The build queue itself is a fairly simple web app. All very modular.
Slaves needn't be VMs, but usually are. The number of slaves (and the physical machines they're running on) can be scaled to satisfy demand. Slaves can obviously be added to the system at any time, or nuked if the toolchain crashes. That's actually the main point of this scheme, rather than your problem with archiving the state of the toolchain, but I think it's applicable.
Depending how often you need an old toolchain, you might want the build queue to be capable of starting VMs as needed, since otherwise someone who wants to recreate an old build has to also arrange for a suitable slave to appear. Not that this is necessarily difficult - it might just be a question of starting the right VM on a machine of their choosing.