Configure Service Fabric actor and service logic by deployment location - azure-service-fabric

Azure Service Fabric applications have an ApplicationParameters folder with XML config files targeting different deployment locations. The settings in these files seem to deal with the number of instances/partitions of the contained actors and services; I have not seen examples of these settings affecting actor or service logic.
Additionally, reliable services and reliable actors can specify a configuration package in the ServiceManifest.xml file, which points to a folder containing a Settings.xml file. You are able to create custom configuration sections in Settings.xml and gain access to them via the service's/actor's ConfigurationPackage through ServiceInitializationParameters.CodePackageActivationContext.GetConfigurationPackageObject(). Unlike the configuration at the application level, these configuration files do not seem to easily target specific deployment locations.
What is the proper way to tailor actor/service logic via configuration files that target deployment locations? For example, if your service depends on an external API with different URLs for development vs. production environments, how can these be established easily with config files? If the ApplicationParameters files are the answer, how do you programmatically access this information from the actor or service? If custom sections within the Settings.xml file are the answer, how does the actor/service know which environment it is in?

Take a look at the "Per-environment service configuration settings" section here: Managing application parameters for multiple environments.
In short, you can create a ConfigOverride when importing the service manifest to the application manifest. Suppose that you have the following setting in the Settings.xml for the Stateful1 service:
<Section Name="MyConfigSection">
  <Parameter Name="MaxQueueSize" Value="25" />
</Section>
In the application manifest, you would specify the following:
<ServiceManifestImport>
  <ServiceManifestRef ServiceManifestName="Stateful1Pkg" ServiceManifestVersion="1.0.0" />
  <ConfigOverrides>
    <ConfigOverride Name="Config">
      <Settings>
        <Section Name="MyConfigSection">
          <Parameter Name="MaxQueueSize" Value="[Stateful1_MaxQueueSize]" />
        </Section>
      </Settings>
    </ConfigOverride>
  </ConfigOverrides>
</ServiceManifestImport>
You can then specify the application/environment-specific value for MaxQueueSize using application parameters.
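To read the (possibly overridden) value from within the service or actor, you can use the activation-context API the question already mentions. A minimal sketch (the package name "Config" matches the ConfigOverride above; section and parameter names come from the example; on newer SDKs the same activation context is reached through the service's context rather than ServiceInitializationParameters):

```csharp
// Read MyConfigSection/MaxQueueSize from the service's "Config" package.
var configPackage = ServiceInitializationParameters
    .CodePackageActivationContext
    .GetConfigurationPackageObject("Config");

string maxQueueSize = configPackage.Settings
    .Sections["MyConfigSection"]
    .Parameters["MaxQueueSize"]
    .Value;
```

The [Stateful1_MaxQueueSize] token is resolved from an application parameter, declared in ApplicationManifest.xml and overridable per environment in the ApplicationParameters files (the value 100 below is purely illustrative):

```xml
<!-- ApplicationManifest.xml -->
<Parameters>
  <Parameter Name="Stateful1_MaxQueueSize" DefaultValue="25" />
</Parameters>

<!-- ApplicationParameters\Cloud.xml -->
<Parameters>
  <Parameter Name="Stateful1_MaxQueueSize" Value="100" />
</Parameters>
```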

Related

WSDL service location base URL will be different in each environment and how to not build the same jar over and over again

I have a wsdl file, which contains service-tag and partial view of it:
<service name="EXFlowers">
<port binding="getflowers:EXFlowersGetflowers" name="EXFlowersGetflowersService">
<soap:address location="http://dev.example.com:67857/EXFlowers/getflowers" />
</port>
</service>
My problem is in this part:
<soap:address location="http://dev.example.com:67857/EXFlowers/getflowers" />
I am using wsdl2java, which generates .java files from the WSDL file (I then compile the generated files and package them into a jar for use in WebSphere).
As it is now, I can't promote a .jar file from one environment to another (for example: development -> test -> staging -> production), because the location has a different base URL in each environment. I hate running wsdl2java for each environment (basically creating the same jar 5 times with just different string values in some of the class files). I want a "build once, run anywhere" workflow, and I found this IBM support page, https://www.ibm.com/support/pages/accommodating-different-wsdl-urls-between-environments, which explains how it could be done.
So my question is:
Could some kind soul explain to me, with syntax examples, how option 2 can be done:
Use a file:// based WSDL URL. Store the WSDL file itself in the same file path in each environment, but use different contents with
custom hostnames and/or endpoint URLs therein.
I don't understand it. How would it look:
<soap:address location="file://tmp/myendpointfile.txt and what about this part -> /EXFlowers/getflowers" />
and what would the content of the file be?
cat /tmp/myendpointfile.txt
http://dev.example.com:67857
I would appreciate any help or if you guys know a better way of accomplishing this task.
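A sketch of how option 2 could look, assuming the client loads the WSDL at runtime from a file:// URL (the path and hostnames here are illustrative): the file at the fixed path is the full WSDL itself, not a text file holding a base URL, and each environment's copy carries that environment's complete endpoint in soap:address:

```xml
<!-- /opt/config/EXFlowers.wsdl on a development host -->
<service name="EXFlowers">
  <port binding="getflowers:EXFlowersGetflowers" name="EXFlowersGetflowersService">
    <soap:address location="http://dev.example.com:67857/EXFlowers/getflowers" />
  </port>
</service>

<!-- A production host keeps the same path, /opt/config/EXFlowers.wsdl,
     but its copy of the file would instead read:
     <soap:address location="http://prod.example.com:67857/EXFlowers/getflowers" /> -->
```

The jar is then built once against file:///opt/config/EXFlowers.wsdl, and the same jar picks up a different endpoint on each host because the file contents differ per environment.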

Make a sub directory public on .Net core webAPI

I have a RESTful API project on .NET Core 1 that has a directory containing some public files (e.g. images). I created a controller that retrieves files by file name, but I think it uses CPU and adds a lot of delay.
for example:
wwwroot
- refs
- runtimes
+ public
- logo.png
+ subdir
- icon1.png
- icon2.png
I want to access this directory publicly from a url like this
https://MyAPIDomain.com/public/logo.png
https://MyAPIDomain.com/public/subdir/icon1.png
I want IIS to handle these files directly, with no .NET processing. They should also be resumable on download, and browsers should be able to cache them. How can I do that?
web.config:
<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <!--
    Configure your application settings in appsettings.json. Learn more at http://go.microsoft.com/fwlink/?LinkId=786380
  -->
  <system.webServer>
    <handlers>
      <add name="aspNetCore" path="*" verb="*" modules="AspNetCoreModule" resourceType="Unspecified" />
    </handlers>
    <aspNetCore processPath="dotnet" arguments=".\MyApp.dll" stdoutLogEnabled="false" stdoutLogFile=".\logs\stdout" forwardWindowsAuthToken="false" />
  </system.webServer>
</configuration>
ASP.NET Core uses IIS as a reverse-proxy. What this means is that all requests are forwarded to your ASP.NET Core app. Period. There's no way around that. The only way you can get IIS to directly serve a file is to host it in a virtual directory in your IIS site. Then, because that particular path is now handled by IIS, it will not forward to your ASP.NET Core app. However, that means then that your ASP.NET Core app can no longer work with that path. In other words, you can't create a virtual directory like "public" and also serve files separately under wwwroot/public.
That said, the static files middleware runs relatively early in the pipeline and is also pretty lightweight. I honestly doubt you'd see much, if any, performance decline over directly hosting static files in IIS. It's also much easier with the static files middleware to handle things like setting cache headers. I'd suggest you simply leave things as they are.
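As a sketch of that last point, cache headers can be attached through the static files middleware's standard OnPrepareResponse hook (the max-age value below is illustrative; the web server also honors Range requests for static files, which is what makes downloads resumable):

```csharp
// In Startup.Configure: serve files from wwwroot and let browsers cache them.
app.UseStaticFiles(new StaticFileOptions
{
    OnPrepareResponse = ctx =>
    {
        ctx.Context.Response.Headers["Cache-Control"] = "public,max-age=86400";
    }
});
```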

aspnet static file access with authentication

In my application folder I have a virtual application, QA. It contains a folder "help" with HTML and other static files. QA is using forms authentication.
All files in the help folder are accessible without authentication (for example, www.mypage.com/QA/help/test.html). I need to change this so that if a user accesses files in the help folder (HTML files or any other static files), the user is redirected to the login page.
From googling, the only thing I have found is that this has something to do with static file handling and mapping to ASP.NET. I am using IIS 6.
I have tried adding a line like this
<add name="StaticHandler" type="System.Web.StaticFileHandler" path="*.html" verb="*" validate="true" />
to my web.config (the one in the QA folder), but it doesn't help.
Actually, I do not understand this line, and I am also new to administering web.config files.
I also tried putting all the static files from the help folder into QA, but that doesn't help either.
Make sure you have added a config file to the directory that contains the static files you want protected from anonymous users, like so (this means you will have a second web.config file in the directory you are trying to protect). The <deny users="?"/> element is what denies anonymous users.
<configuration>
  <appSettings/>
  <connectionStrings/>
  <system.web>
    <authorization>
      <deny users="?"/>
    </authorization>
  </system.web>
</configuration>
IIS is serving your static files outside of the ASP.NET pipeline. Besides the System.Web.StaticFileHandler declaration you have added, you need to map the extension in IIS to ensure that your .htm or .html files are passed through ASP.NET and therefore authenticated.
In your root web.config file, add:
<system.web>
  <httpHandlers>
    <add path="*.html" verb="*" type="System.Web.StaticFileHandler" />
  </httpHandlers>
</system.web>
Then you need to perform some operations in IIS. These directions apply to IIS 6.0:
Open IIS Manager
Right click on your website and select properties
Click Home Directory -> Configuration (displays application extensions, etc.). You will need the path from a mapped extension already in use by ASP.NET. The best way to get this is to find an already mapped ASP.NET extension in the list, like .aspx or .ascx, click Edit, and copy the Executable path. The path should end in aspnet_isapi.dll.
Click Add
Paste in the previous executable path and the extension (in your case .html).
Repeat this process for any other file types you want handled by the ASP.net runtime.

Does a local NuGet Gallery deployment require an Azure account?

I'd like to run a local NuGet Gallery to serve dependencies to my build system.
I notice in the web.config it asks for Azure details, but the code seems to suggest you can choose 'FileSystem' as a storage backend.
My questions are:
If I choose 'FileSystem' how do I configure the target folder?
Can I instead point the storage engine at an in-house instance of SQL Server?
I'm trying to avoid using a file system because that's what we are using now with NuGet Server and it's very slow. A couple of the devs like to pack and push every single successful build, so scalability is important.
I hope any answers here will help others, too. For background, here is a great link on setting up your own NuGet Gallery. Sadly, the author has omitted all details pertaining to the actual package storage: https://github.com/NuGet/NuGetGallery/wiki/Hosting-the-NuGet-Gallery-Locally-in-IIS
To configure File System Package Store:
<appSettings>
  <add key="Gallery:PackageStoreType" value="FileSystem" />
  <add key="Gallery:FileStorageDirectory" value="C:\Path\To\Packages" />
</appSettings>
To point to a different SQL Server:
<connectionStrings>
  <add name="NuGetGallery" connectionString="Data Source=SQLSERVERNAME;Initial Catalog=NuGetGallery;Integrated Security=SSPI" providerName="System.Data.SqlClient" />
</connectionStrings>
EDIT: Support SQL Server as Package Store
If you want to store your packages as BLOBs in SQL Server, you'll have to make a couple of changes to the code.
First, create a class named SqlServerFileStorageService and implement IFileStorageService. This interface has several methods. The important ones are GetFile() and SaveFile(). Combining folderName and fileName will create a unique key you can use in your database table.
You can use the same connection string as NuGetGallery or add a new one for your data access.
You then add an item to the enum PackageStoreType called SqlServer.
In ContainerBinding.cs add a case for PackageStoreType.SqlServer to bind to your SqlServerFileStorageService.
Now the NuGet Gallery should create a SqlServerFileStorageService and all gets and saves will use your class to store the blob in SQL Server.
BTW: I'm basing this on a cursory look at the code. There may be an extra step or two, but these look like the main areas you'll need to focus on.
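As a very rough, hypothetical sketch of the shape such a class could take (the real IFileStorageService interface in the NuGetGallery source has more members, and the exact signatures must be taken from the code you're building against; the table name and SQL mentioned in the comments are invented for illustration):

```csharp
public class SqlServerFileStorageService : IFileStorageService
{
    private readonly string connectionString;

    public SqlServerFileStorageService(string connectionString)
    {
        this.connectionString = connectionString;
    }

    public Stream GetFile(string folderName, string fileName)
    {
        // e.g. SELECT Content FROM PackageFiles WHERE [Key] = @key,
        // where @key is folderName + "/" + fileName; wrap the bytes
        // in a MemoryStream and return it.
        throw new NotImplementedException();
    }

    public void SaveFile(string folderName, string fileName, Stream fileStream)
    {
        // e.g. INSERT or UPDATE the PackageFiles row keyed by
        // folderName + "/" + fileName with the stream's contents.
        throw new NotImplementedException();
    }

    // ...implement the remaining IFileStorageService members similarly...
}
```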
Hope that helps.

How to modify the csdef defined in a cspkg

To deploy to different Azure environments, I modify the csdef as part of the compilation step to change the host headers. Doing so requires building the cspkg once for each environment, instead of being able to reuse the cspkg and specify different configs for deployment.
I would like to instead modify the csdef file of a cspkg after it has been created, without recompiling. Is that possible, and if so how?
I've done something similar to what you're after to differentiate between test and live environments. First of all, you need to create the new .csdef file with your alternate settings. This needs to be the complete file, as we're just going to swap it out with the original one. Now we need to add it to the cloud project. Right click on the cloud project and select Unload Project. Right click on it again and select Edit [Name of project]. There's a section that looks a bit like this:
<ItemGroup>
  <ServiceConfiguration Include="ServiceConfiguration.Test.cscfg" />
  <ServiceDefinition Include="ServiceDefinition.csdef" />
  <ServiceConfiguration Include="ServiceConfiguration.cscfg" />
</ItemGroup>
Add a new ServiceDefinition item that points to your newly created file. Now find the following line:
<Import Project="$(CloudExtensionsDir)Microsoft.WindowsAzure.targets" />
Then add this code block, editing the TargetProfile check to be the build configuration you want to use for your alternate and ensuring that it points to your new .csdef file:
<Target Name="AfterResolveServiceModel">
  <!-- This should be run after it has figured out which definition file to use
       but before it's done anything with it. This is all a bit hard coded, but
       basically it should remove everything from the SourceServiceDefinition
       item and replace it with the one we want if this is a build for test -->
  <ItemGroup>
    <!-- This is an interesting way of saying remove everything that is in me from me -->
    <SourceServiceDefinition Remove="@(SourceServiceDefinition)" />
    <TargetServiceDefinition Remove="@(TargetServiceDefinition)" />
  </ItemGroup>
  <ItemGroup Condition="'$(TargetProfile)' == 'Test'">
    <SourceServiceDefinition Include="ServiceDefinition.Test.csdef" />
  </ItemGroup>
  <ItemGroup Condition="'$(TargetProfile)' != 'Test'">
    <SourceServiceDefinition Include="ServiceDefinition.csdef" />
  </ItemGroup>
  <ItemGroup>
    <TargetServiceDefinition Include="@(SourceServiceDefinition->'%(RecursiveDirectory)%(Filename).build%(Extension)')" />
  </ItemGroup>
  <Message Text="Source Service Definition Changed To Be: @(SourceServiceDefinition)" />
</Target>
To go back to normal, right click on the project and select Reload Project. Now when you build your project, depending on which configuration you use, it will use different .csdef files. It's worth noting that the settings editor in Visual Studio is not aware of your second .csdef file, so if you add any new settings through the GUI you will need to add them manually to this alternate version.
If you just want a different CSDEF, you can do it easily by using the CSPACK command prompt directly, as below:
Open a command window and locate the folder containing your CSDEF/CSCFG files and the CSX folder related to your Windows Azure project
Create multiple CSDEF files depending on your minor changes
Be sure to have the Windows Azure SDK in your path so you can launch the CS* commands
Use the CSPACK command and pass parameters to use a different CSDEF and output CSPKG file, something similar to the below:
cspack <ProjectName>\ServiceDefinitionOne.csdef /out:ProjectNameSame.csx /out:ProjectOne.cspkg /_AddMoreParams
cspack <ProjectName>\ServiceDefinitionTwo.csdef /out:ProjectNameSame.csx /out:ProjectTwo.cspkg /_AddMoreParams
More about CSPACK: http://msdn.microsoft.com/en-us/library/windowsazure/gg432988.aspx
As far as I know, you can't easily modify the .cspkg after it is created. I guess you technically could, as the .cspkg is a zip file that follows a certain structure.
The question I'd ask is why? If it is to modify settings like VM role size (since that's defined in the .csdef file), then I think you have a couple of alternative approaches:
Create a separate Windows Azure deployment project (.csproj) for each variation. Yes, I realize this can be a pain, but it does allow the Visual Studio tooling to work well. The minor pain may be worth it for the easier-to-use tool support.
Run a configuration file transformation as part of the build process, similar to a web.config transform.
Personally, I go with the different .csproj approach, mostly because I'm not a config file transformation ninja... yet. ;) It was the path of least resistance and has worked pretty well so far.