I recently upgraded Shibboleth from v4.0.1 to v4.1.0. After the upgrade, I get a deprecation warning about SAML2NameID. I am using this feature in the "attribute-resolver.xml" config file and couldn't find its replacement in the Shibboleth documentation. Can anyone suggest what I should use in place of SAML2NameID?
WARN [DEPRECATED:125] - [:] - xsi:type 'SAML2NameID', (file [conf/attribute-resolver.xml]): This will be removed in the next major version of this software; replacement is (none)
Using SAML2NameID encoders in the attribute resolver is now deprecated; instead, there is dedicated configuration for generating name identifiers per SAML protocol, per SP, per attribute, and so on.
Per the docs,
The saml-nameid.xml configuration file defines two list beans, each one an ordered list of "generator" plugins for the two different SAML versions. Each plugin is specific to an identifier Format, a SAML constant that identifies the kind of value being expressed. The generation process involves selecting a list of Formats to try and generate (see Format Selection below), and then trying each Format until an appropriate value is obtained by running each configured generator in order.
Please see this link.
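As a sketch of what that looks like in practice (the 'mail' attribute ID and the emailAddress Format below are placeholders taken from the documented example; adapt them to your deployment), you drop the SAML2NameID encoder from attribute-resolver.xml and instead add a generator to the list in conf/saml-nameid.xml:

<util:list id="shibboleth.SAML2NameIDGenerators">
    <ref bean="shibboleth.SAML2TransientGenerator" />
    <bean parent="shibboleth.SAML2AttributeSourcedGenerator"
          p:format="urn:oasis:names:tc:SAML:2.0:nameid-format:emailAddress"
          p:attributeSourceIds="#{ {'mail'} }" />
</util:list>

The Format actually issued for a given SP is then selected through the relying-party configuration (e.g. nameIDFormatPrecedence) or the SP's metadata.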
I am working on an old internal project, on Windows, with a 32-bit build of Pure Data.
Some objects such as [mtx_*~], [mtx_:], [mtx_.^] and [mtx_circular_harmonics] fail with a "couldn't create" error.
I have iemmatrix installed through "find external".
I tried older versions of Pd-extended and several versions of vanilla, but I can't create the mtx_ objects in either.
In pd/externals/iemmatrix I can find a file called "mtx_0x2a0x7e.dll", which I think decodes to "mtx_*~".
There is not much information on the internet about it anymore.
The "official" version (not the one with the 'extended' suffix) is compiled as a multi-object library. So you have to load the library first, either with a command line flag '-lib iemmatrix' or with a [declare -lib iemmatrix] object in your patch (The latter is much preferred as it makes your patch more portable). When loaded, iemmatrix prints a greeter to the Pd console window:
iemmatrix 0.3.2
objects for manipulating 2d-matrices
(c) 2001-2015 iem
IOhannes m zmölnig
Thomas Musil
Franz Zotter
compiled Sep 6 2019 : 12:07:54
After that you can create objects like [mtx_*~]
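For instance, assuming iemmatrix is installed in a path Pd already searches, you can start Pd from the command line like this (the patch name is just an example):

pd -lib iemmatrix mypatch.pd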
The version 'v0.0-extended' was added to ease the migration away from the now-retired Pd-extended. Since it is compiled as a one-object-per-file library, and many of those objects have names that cannot easily be used in filenames, Pd-extended used a trick with an additional hexloader library that translates hex-encoded filenames to the actual names of the objects. To be able to load objects from the extended version, you would have to install and load 'hexloader' first.
Having said that, it is highly recommended to use the official version, which is actively maintained; the extended version is not, and is only there for historical reasons.
While upgrading to AEM 6.4, we ran the pattern detector report and got the lines below in the ECU category. Is there any reference for fixing this issue?
Cross-boundary resource type usage of internal marked path /libs/cq/gui/components/projects/admin/projectteam referenced at
/apps/cq/core/content/projects/gadgets/xtrftranslationprojectsummary/jcr:content/content/items/form/items/fixedcolumns/items/column2/items/tabs/items/tab1/items/projectmembers
One more:
Cross-boundary resource type usage of internal marked path /libs/cq/gui/components/projects/admin/wizard/properties/thumbnail referenced at
/apps/cq/core/content/projects/wizard/xtrftranslationproject/defaultproject/items/column1/items/cover
As per the official documentation on Extraneous Content Usage, this means that your custom code uses components that are considered internal and are not part of the API. Both errors say you referenced them, so we're looking at simple use (rather than an overlay or inheritance based on sling:resourceSuperType). You just have a couple of resources whose sling:resourceType values belong to internal components, and their use in this context is not officially supported or tested.
They may break at some point when you upgrade to a newer version of AEM or try to apply a hotfix.
The best way to go forward would be to stop using them and replace them with other components that are considered public and therefore supported. If no suitable replacement is available, you should consider replacing them with custom code that you control.
I'm not familiar with either cq/gui/components/projects/admin/projectteam or cq/gui/components/projects/admin/wizard/properties/thumbnail so I can't recommend any replacements. Any potential replacement should have the mixin type of either granite:PublicArea (can be used, overlaid or inherited), granite:AbstractArea (can be inherited but not overlaid or used directly) or granite:FinalArea (can be used but not inherited).
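If you want to locate everything under /apps that still references one of these internal components, a JCR-SQL2 query such as the following can help (run it e.g. from CRXDE Lite; it is shown for the first resource type, and depending on how the property was written you may also need the /libs/-prefixed variant):

SELECT * FROM [nt:base] AS node
WHERE ISDESCENDANTNODE(node, '/apps')
AND node.[sling:resourceType] = 'cq/gui/components/projects/admin/projectteam'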
I am working on a library which follows the PKCS#11 standard.
https://www.cryptsoft.com/pkcs11doc/v220/
The library can generate an RSA key pair in the token via the function C_GenerateKeyPair and returns the appropriate object handles with the return value CKR_OK.
The token (applet) does not support loading a private/public key; it can only generate key pairs. What would be the appropriate return value when creating an RSA private/public key with C_CreateObject?
Right now I am returning CKR_GENERAL_ERROR; is that okay?
The allowed return values are:
CKR_ARGUMENTS_BAD, CKR_ATTRIBUTE_READ_ONLY,
CKR_ATTRIBUTE_TYPE_INVALID, CKR_ATTRIBUTE_VALUE_INVALID,
CKR_CRYPTOKI_NOT_INITIALIZED, CKR_DEVICE_ERROR, CKR_DEVICE_MEMORY,
CKR_DEVICE_REMOVED, CKR_DOMAIN_PARAMS_INVALID, CKR_FUNCTION_FAILED,
CKR_GENERAL_ERROR, CKR_HOST_MEMORY, CKR_OK, CKR_PIN_EXPIRED,
CKR_SESSION_CLOSED, CKR_SESSION_HANDLE_INVALID, CKR_SESSION_READ_ONLY,
CKR_TEMPLATE_INCOMPLETE, CKR_TEMPLATE_INCONSISTENT,
CKR_TOKEN_WRITE_PROTECTED, CKR_USER_NOT_LOGGED_IN.
Thanks for your help
Update
I have two types of applet: one supports loading an RSA private/public key into the token and the other does not. The only way to tell whether the token supports key loading is the response to the transmitted APDU, so I can't make the decision just by checking the class attribute passed to C_CreateObject.
If your library does not support C_CreateObject at all then the best choice IMO is CKR_FUNCTION_NOT_SUPPORTED.
Chapter 11 in PKCS#11 v2.20 states:
A Cryptoki library need not support every function in the Cryptoki API. However, even an unsupported function must have a "stub" in the library which simply returns the value CKR_FUNCTION_NOT_SUPPORTED.
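For the "not supported at all" case the stub really is just this (a minimal sketch; the cryptoki.h include and the usual PKCS#11 platform macros are assumed to be in place):

CK_RV C_CreateObject(CK_SESSION_HANDLE hSession,
                     CK_ATTRIBUTE_PTR pTemplate, CK_ULONG ulCount,
                     CK_OBJECT_HANDLE_PTR phObject)
{
    /* function is not implemented for this token */
    (void) hSession; (void) pTemplate; (void) ulCount; (void) phObject;
    return CKR_FUNCTION_NOT_SUPPORTED;
}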
If your library does support C_CreateObject for creation of other object types (e.g. certificates, data objects etc.) then the best choice IMO is CKR_ATTRIBUTE_VALUE_INVALID.
Chapter 10.1.1 in PKCS#11 v2.20 states:
If the supplied template specifies an invalid value for a valid attribute, then the attempt should fail with the error code CKR_ATTRIBUTE_VALUE_INVALID.
UPDATE
Now that you have shared more details about your library in the comments I can add more detailed explanation:
It seems I can call your implementation of C_CreateObject with a template containing CKA_CLASS=CKO_CERTIFICATE and it will create a certificate object on this particular token and return CKR_OK. If I call it with a template containing CKA_CLASS=CKO_PRIVATE_KEY, then your code will decide to return an error right after evaluating the supplied value of this attribute. IMO there is no doubt that chapter 10.1.1 of PKCS#11 v2.20 recommends returning CKR_ATTRIBUTE_VALUE_INVALID in this case.
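A minimal sketch of that decision in C (the create_certificate_object helper is hypothetical, and cryptoki.h is assumed to be the usual wrapper that pulls in the PKCS#11 types and CKR_* codes):

#include "cryptoki.h"  /* assumed wrapper defining the CK_* types and CKR_* codes */

CK_RV C_CreateObject(CK_SESSION_HANDLE hSession,
                     CK_ATTRIBUTE_PTR pTemplate, CK_ULONG ulCount,
                     CK_OBJECT_HANDLE_PTR phObject)
{
    CK_OBJECT_CLASS objClass = 0;
    CK_BBOOL classFound = CK_FALSE;
    CK_ULONG i;

    /* locate CKA_CLASS in the supplied template */
    for (i = 0; i < ulCount; i++) {
        if (pTemplate[i].type == CKA_CLASS &&
            pTemplate[i].pValue != NULL_PTR &&
            pTemplate[i].ulValueLen == sizeof(CK_OBJECT_CLASS)) {
            objClass = *(CK_OBJECT_CLASS *) pTemplate[i].pValue;
            classFound = CK_TRUE;
        }
    }
    if (!classFound)
        return CKR_TEMPLATE_INCOMPLETE;

    switch (objClass) {
    case CKO_CERTIFICATE:
        /* object classes the token actually supports are handled normally */
        return create_certificate_object(hSession, pTemplate, ulCount, phObject); /* hypothetical helper */
    case CKO_PRIVATE_KEY:
    case CKO_PUBLIC_KEY:
        /* this token can only generate key pairs, it cannot import key material */
        return CKR_ATTRIBUTE_VALUE_INVALID;
    default:
        return CKR_ATTRIBUTE_VALUE_INVALID;
    }
}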
However, if you are not willing to follow the behavior recommended by the specification and there is no predefined error code you like, you can introduce your own vendor-defined code (see my older answer for more details):
#define CKR_TOKEN_OPERATION_NOT_SUPPORTED (CKR_VENDOR_DEFINED|0x0000001)
IMO the confusion level for an inexperienced developer will be the same regardless of which error code you return. In the end, they will need to consult your documentation or the logs produced by your library to find out the real reason they received the error.
In the paper [A Software Product Line for Static Analyses (2014)], there is an illustration related to constructing a call graph (Listing 7).
In this example, line 14 is related to constructing the call graph. But when I check the source code and the API, all I can find is DefaultCHACallGraphDomain.scala, which has no implementation for constructing a call graph.
My goal is to use OPAL to construct a call graph. Is there any demo or documentation that would help me understand the existing CallGraphDomain in OPAL? Currently, I can only find some class declarations.
I'd really appreciate it if anyone could give me some suggestions on this topic.
Thanks in advance.
Jiang
The interface that was shown in the paper doesn't exist anymore, so you can totally forget about it.
The default interface to get a CallGraph class is provided by the Project object you retrieve when you load the bytecode of a Java project.
A general code example:
val project = ... // a java project
val computedCallGraph = project.get(/* Some call graph key */)
val callGraph = computedCallGraph.callGraph // the final call graph interface.
The computed call graph contains several things: the entry points, unresolved method calls, exceptions thrown when something went wrong at construction time, and the actual call graph.
OPAL provides several call graph algorithms; you can retrieve each one by passing the corresponding call graph key to the Project's get method.
Currently, the following two keys are available and can be passed to Project.get (more information is available in the documentation of these classes):
CHACallGraphKey
VTACallGraphKey
Analysis mode - Library vs Application
Which analysis mode to choose in order to construct a valid call graph depends on the kind of project. While applications provide complete information (except for incomplete projects, class loading and so on), software libraries are intended to be used by other projects. These two different scenarios have to be kept in mind when constructing call graphs. More details can be found here: org.opalj.AnalysisModes
OPAL offers the following analysis modes:
DesktopApplication (safe for application call graphs)
LibraryWithClosePackagesAssumption (safe for call graphs that are used for security-insensitive analyses)
LibraryWithOpenPackagesAssumption (very conservative/safe for security analyses)
The analysis mode can either be configured in OPAL's config file or set as a project setting at runtime. You can find the config file in the Common project under /src/main/resources/reference.conf.
All of these analysis modes are supported by CHACallGraphKey, while VTACallGraphKey only supports applications so far.
NOTE: The interface may change in upcoming versions again.
I'm wondering what a REST API version defines: the resource structure, the URI path, or both? To me, it seems that specifying the version in the URI like this can define both:
api.foo.com/v1/path/to/resource
While specifying the version as part of the MIME type:
Content-type: application/json;application,v1
Clearly defines the resource representation.
Some time ago I read that the URL of a particular resource should not change between versions. After thinking about it for a while, it makes deep sense: a URL identifies a unique resource on the internet. If you specify the version in the URL, by the definition above you end up with two different resources. Such resources may differ a little (e.g. a new field was added), yet conceptually they are the same resource. Since then I have used only header versioning.
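For example (the vendor media type name below is purely illustrative), the same URL can serve two different representation versions, negotiated through the Accept header:

GET /path/to/resource HTTP/1.1
Host: api.foo.com
Accept: application/vnd.foo.v1+json

GET /path/to/resource HTTP/1.1
Host: api.foo.com
Accept: application/vnd.foo.v2+json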
It's completely up to us how we want to represent and implement the API. To me it also seems that specifying the version in the URI defines both the "Version" and whether the "Resource" is present in the current API version or not.