Can someone explain the general idea behind providing ... Abstraction packages?
As an example, when I search for the word "hosting" in the Visual Studio NuGet Package Manager, the results include:
Microsoft.Extensions.Hosting
Microsoft.Extensions.Hosting.Abstractions
Microsoft.AspNetCore.Hosting.Server.Abstractions
Microsoft.AspNetCore.Hosting.Abstractions
Microsoft.AspNetCore.Hosting
Microsoft.Extensions.Hosting.WindowsServices
Are these all related? Some seem to be platform-dependent (AspNetCore) while others are not. Is there a general rule that tells me when to choose which?
Suppose I want to implement a BackgroundService in a .NET 5 class library; which of these packages should I install? (It seems that Microsoft.Extensions.Hosting.Abstractions works fine for me, but I had to find that out by trial and error.)
Thanks
The idea is that a library or package you provide should reference only the Abstractions packages, for easier compatibility.
For example, if you ship a company-wide NuGet package with some business logic or a custom client in it, you may want to use ILogger / ILogger<T> for logging but not actually depend on any logging implementation (neither the built-in loggers nor Serilog etc.), so you reference only the logging abstractions package, which provides those interfaces.
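To connect this to the BackgroundService question above, here is a minimal sketch (the class and message names are invented for illustration) of a worker that lives in a class library and compiles against the Abstractions packages only: in .NET 5, BackgroundService and IHostedService come from Microsoft.Extensions.Hosting.Abstractions, and ILogger<T> comes from Microsoft.Extensions.Logging.Abstractions.

using System;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Extensions.Hosting;   // supplied by Microsoft.Extensions.Hosting.Abstractions
using Microsoft.Extensions.Logging;   // supplied by Microsoft.Extensions.Logging.Abstractions

public class QueueMonitor : BackgroundService
{
    private readonly ILogger<QueueMonitor> _logger;

    public QueueMonitor(ILogger<QueueMonitor> logger) => _logger = logger;

    protected override async Task ExecuteAsync(CancellationToken stoppingToken)
    {
        // The library only logs through the abstraction; it never picks a logging provider.
        while (!stoppingToken.IsCancellationRequested)
        {
            _logger.LogInformation("Checking the queue...");
            await Task.Delay(TimeSpan.FromSeconds(5), stoppingToken);
        }
    }
}

The consuming application is the one that references the full Microsoft.Extensions.Hosting package (or Microsoft.Extensions.Hosting.WindowsServices when running as a Windows service), picks the concrete logging providers, and wires the worker up, roughly like this:

using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Hosting;

public static class Program
{
    public static void Main(string[] args) =>
        Host.CreateDefaultBuilder(args)
            // Register the library's worker with the generic host.
            .ConfigureServices(services => services.AddHostedService<QueueMonitor>())
            .Build()
            .Run();
}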
Suppose I have a class library which I want to target netstandard1.3, but also use BigInteger. Here's a trivial example - the sole source file is Adder.cs:
using System;
using System.Numerics;

namespace Calculator
{
    public class Adder
    {
        public static BigInteger Add(int x, int y)
            => new BigInteger(x) + new BigInteger(y);
    }
}
Back in the world of project.json, I would target netstandard1.3 in the frameworks section, and have an explicit dependency on System.Runtime.Numerics, e.g. version 4.0.1. The NuGet package I create will list just that dependency.
In the brave new world of csproj-based dotnet tooling (I'm using v1.0.1 of the command-line tools), there's an implicit package reference to the NETStandard.Library 1.6.1 metapackage when targeting netstandard1.3. This means that my project file is really small, because it doesn't need the explicit dependency:
<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <TargetFramework>netstandard1.3</TargetFramework>
  </PropertyGroup>
</Project>
... but the NuGet package produced has a dependency on NETStandard.Library, which suggests that in order to use my small library, you need everything there.
It turns out I can disable that functionality using DisableImplicitFrameworkReferences, then add in the dependency manually again:
<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <TargetFramework>netstandard1.3</TargetFramework>
    <DisableImplicitFrameworkReferences>true</DisableImplicitFrameworkReferences>
  </PropertyGroup>
  <ItemGroup>
    <PackageReference Include="System.Runtime.Numerics" Version="4.0.1" />
  </ItemGroup>
</Project>
Now my NuGet package says exactly what it depends on. Intuitively, this feels like a "leaner" package.
So what's the exact difference for a consumer of my library? If someone tries to use it in a UWP application, does the second, "trimmed" form of dependencies mean that the resulting application will be smaller?
By not documenting DisableImplicitFrameworkReferences clearly (as far as I've seen; I read about it in an issue) and by making the implicit dependency the default when creating a project, Microsoft are encouraging users to just depend on the metapackage - but how can I be sure that doesn't have disadvantages when I'm producing a class library package?
In the past, we've given developers the recommendation to not reference the meta package (NETStandard.Library) from NuGet packages, but instead reference individual packages, like System.Runtime and System.Collections. The rationale was that we thought of the meta package as a shorthand for a bunch of packages that were the actual atomic building blocks of the .NET platform. The assumption was: we might end up creating another .NET platform that only supports some of these atomic blocks but not all of them. Hence, the fewer packages you reference, the more portable you'd be. There were also concerns regarding how our tooling deals with large package graphs.
Moving forward, we'll simplify this:
.NET Standard is an atomic building block. In other words, new platforms aren't allowed to subset .NET Standard -- they have to implement all of it.
We're moving away from using packages to describe our platforms, including .NET Standard.
This means you'll not have to reference any NuGet packages for .NET Standard anymore. You express your dependency with the lib folder, which is exactly how it has worked for all other .NET platforms, in particular .NET Framework.
However, right now our tooling will still burn in the reference to NETStandard.Library. There is no harm in that either; it will just become redundant moving forward.
I'll update the FAQ on the .NET Standard repo to include this question.
Update: This question is now part of the FAQ.
The team used to recommend figuring out what the slimmest package set was. They no longer do this, and recommend people just bring in NETStandard.Library instead (in the case of an SDK-style project, this will be done automatically for you).
I've never gotten a totally straightforward answer as to why that was, so allow me to make some educated guesses.
The primary reason is likely that it allows them to hide the differences in versions of the dependent libraries that you would otherwise be required to track yourself when changing target frameworks. It's also a much more user-friendly system with the SDK-based project files, because you frankly don't need any references to get a decent chunk of the platform (just as you used to with the default references in Desktop-land, especially mscorlib).
By pushing the meta-definition of what it means to be a netstandard library, or a netcoreapp application, into the appropriate NuGet package, they don't have to build any special knowledge into the definition of those things as Visual Studio (or dotnet new) sees them.
Static analysis could be used during publishing to limit the shipped DLLs, which is something they do today when doing native compilation for UWP (albeit with some caveats). They don't do that today for .NET Core, but I presume it's an optimization they've considered (as well as supporting native code).
There's nothing stopping you from being very selective, if you so choose. I believe you'll find that you're nearly the only one doing it, which also defeats the purpose (since it'll be assumed everybody is bringing in NETStandard.Library or Microsoft.NETCore.App).
You shouldn't need to disable the implicit reference. All platforms that the library will be able to run on will already have the assemblies that the NETStandard.Library dependency would require.
The .NET Standard Library is a specification: a set of reference assemblies that you compile against, providing a set of APIs that are guaranteed to exist on a known set of platforms and versions of platforms, such as .NET Core or the .NET Framework. It is not an implementation of these assemblies, just enough of the API shape to allow the compiler to successfully build your code.
The implementations of these APIs are provided by a target platform, such as .NET Core, Mono or the .NET Framework. They ship with the platform, because they are an essential part of the platform. So there is no need to specify a smaller dependency set - everything's already there; you won't change that.
The NETStandard.Library package provides these reference assemblies. One point of confusion is the version number - the package is version 1.6.1, but this does not mean ".NET Standard 1.6". It's just the version of the package.
The version of the .NET Standard you're targeting comes from the target framework you specify in your project.
If you're creating a library and want it to run on .NET Standard 1.3, you'd reference the NETStandard.Library package, currently at version 1.6.1. But more importantly, your project file would target netstandard1.3.
The NETStandard.Library package will give you a different set of reference assemblies depending on your target framework moniker (I'm simplifying for brevity, but think lib\netstandard1.0, lib\netstandard1.1 and dependency groups). So if your project targets netstandard1.3, you'll get the 1.3 reference assemblies. If you target netstandard1.6, you'll get the 1.6 reference assemblies.
If you're creating an application, you can't target the .NET Standard. It doesn't make sense - you can't run on a specification. Instead, you target concrete platforms, such as net452 or netcoreapp1.1. NuGet knows the mapping between these platforms and the netstandard target framework monikers, so knows which lib\netstandardX.X folders are compatible with your target platform. It also knows that the dependencies of NETStandard.Library are satisfied by the target platform, so won't pull in any other assemblies.
Similarly, when creating a standalone .NET Core app, the .NET Standard implementation assemblies are copied with your app. The reference to NETStandard.Library does not bring in any additional assemblies.
Note that dotnet publish will create a standalone application, but it doesn't currently do trimming and will publish all assemblies. This will be handled automatically by tooling, so again, trimming dependencies in your library won't help here.
The only place I can imagine where it might help to remove the NETStandard.Library reference is if you are targeting a platform that doesn't support the .NET Standard, and you can find a .NET Standard package whose transitive dependencies can all run on your target platform. I suspect there aren't many packages that would fit that bill.
I am working on a Web API with OData enabled. I started by referencing (only the DLLs relevant to the question are mentioned):
Microsoft.AspNet.WebApi.OData
Microsoft.Data.OData
Microsoft.Data.Edm
System.Spatial
But when I found out that OData is case-sensitive by default, I looked for a case-insensitive solution and went through a few posts (Support OData Uri case insensitive parse & ODataLib 6.7.0 Release), finally landing on the NuGet package Microsoft.OData.Core 6.9.0, which seems to solve my problem. Here my confusion starts: it has its own collection of related libraries under different namespaces:
Microsoft.OData.Core vs. Microsoft.Data.OData
Microsoft.OData.Edm vs. Microsoft.Data.Edm
Microsoft.Spatial vs. System.Spatial
What is the difference between these similar libraries, and when should we use one over the other? In a case like mine, can Microsoft.OData.Core be used instead of Microsoft.Data.OData to solve the case-sensitivity issue?
1. About the namespaces:
Microsoft.Data.OData
Microsoft.Data.Edm
System.Spatial
are the implementation of the OData Version 3.0 spec. Their package names on NuGet end with "OData v1-3", for example: ODataLib for OData v1-3.
While,
Microsoft.OData.Core
Microsoft.OData.Edm
Microsoft.Spatial
are the implementation of the OData Version 4.0 spec. Their package names on NuGet don't contain a version, for example: ODataLib.
2. About case-insensitivity:
Web API OData supports case-insensitive URIs, unqualified function/action calls, and enum-prefix-free values as of the 5.4 release.
Here's a simple example you can refer to.
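For reference, here is a rough sketch of what enabling those 5.4 features looks like in Web API configuration. It assumes the OData v4 flavor of Web API OData (the Microsoft.AspNet.OData package, System.Web.OData namespace), and the extension method names below (EnableCaseInsensitive, EnableUnqualifiedNameCall, EnableEnumPrefixFree) are the ones described in the 5.4 release notes - verify them against the exact package version you install:

using System.Web.Http;
using System.Web.OData.Extensions;

public static class WebApiConfig
{
    public static void Register(HttpConfiguration config)
    {
        // Relax URI parsing rules (introduced in Web API OData 5.4).
        config.EnableCaseInsensitive(true);       // case-insensitive resolution of names in the URI
        config.EnableUnqualifiedNameCall(true);   // call functions/actions without the namespace prefix
        config.EnableEnumPrefixFree(true);        // allow enum values without the type prefix

        // ... then build the EDM model and map the OData route as usual.
    }
}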
The source code has moved to WebAPI OData on GitHub, and you can get the binaries from the nightly builds.
We are using SpringSource tc Server and we are considering upgrading to Java 7. (We are currently using Java 6.)
We have not seen any reports of SpringSource tc Server not working well with Java 7, but we do not know of any noteworthy projects that have migrated to such an environment.
I'm looking for answer(s) about the following:
Are there any known issues?
Are there any projects who migrated and can report on how it went?
Java 7 has been officially supported since vFabric tc Server 2.7.0:
http://www.vmware.com/support/vfabric-tcserver/doc/vfabric-tcserver-rn-2.7.0.html#whatsnew
Since you're using tc Server instead of plain Tomcat, probably for the commercial support, it's reasonable to migrate the underlying JDK to the latest version only when it is officially supported by the version of tc Server you employ. Otherwise, you'd be running an unsupported configuration, which isn't far from running a plain, unsupported open-source version of Tomcat.
Operating tc Server on Java 7 in an officially supported combination of versions gives you two advantages:
It will have been thoroughly tested by VMware for any incompatibilities, so you won't have to deal with that testing yourself.
If any problems do occur, you can always get support from VMware in resolving them.
I know this doesn't directly address your questions, as my company also hasn't upgraded yet and is only planning to do so.
I just had the impression that your approach makes little sense for a commercially supported product, and I wanted to outline the (in my opinion) reasonable approach that is in wide use.
As to known issues, Java 7 is known for its backward-incompatible changes to the XML stack, especially the migration to JAXB 2.2, which changes the handling of java.lang.Boolean objects (see the other question - What are the pitfalls when upgrading to Java 7). This can spring up in many different places; I've seen it cause problems in Apache CXF's cxf-codegen-plugin, which generates Java stubs from WSDL, since the wsdl2java tool it launches makes use of JAXB: the generated method names for Boolean elements were no longer of the form java.lang.Boolean isSomeBooleanProperty() but of the form java.lang.Boolean getSomeBooleanProperty(), which broke code depending on those stubs.
So perform thorough testing if you deal with SOAP web services or XML in general.
I'm new to version control and am reading about some of the systems available out there. I noticed the term SCC-compliant (used with ClearCase), and the definition from Wikipedia is:
SCC compliant as applied to revision control software, means that a program uses a particular interface defined by Microsoft for Visual SourceSafe.[1] The SCC denotes Source Code Control.
Is this concept of SCC-compliance huge or not that meaningful? Do most version control systems have it?
Regarding ClearCase, as mentioned in this IBM technote:
The SCC API is an interface specification, defined by Microsoft® that defines hooks for a number of common source control operations.
An application (typically an "integrated" development environment (IDE) of any kind) can provide source control functions without implementing the functions itself.
If an SCC compliant code control system is installed, the application dispatches code control operations to the source control tool (e.g. Visual Studio > ClearCase).
That being said:
if you are new to version control, try to stay away from ClearCase: it isn't the most practical one, by far ;)
The IBM Jazz protocol is a much more recent standard that other SCM tools can use to integrate into other environments.
So while the concept of tool integration is important, the SCC concept is quite old and limited to version control.
This is in contrast to an application-hub communication protocol like Jazz, which is for integrating any two applications together.