PSD2, the revised Payment Services Directive of the EU.
Financial institutions in the EU need to be PSD2 compliant, and there are plenty of vendors claiming PSD2 compliance. PSD2 is supposed to be a uniform EU-wide standard, and there are a million whitepapers, video blogs, impact estimates and high-level overviews, but no technical specification.
Nothing that really says what message needs to be sent where, and what happens then. The closest thing I found is this, but even there there's no reference, nothing to indicate what exact technical spec they followed.
Does anybody know where to get the official PSD2 technical requirements?
EDIT: I tried my luck with the developers of the openbanking project.
PS: I understand that, technically, this falls under "questions asking us to recommend or find a book, tool, software library, tutorial or other off-site resource are off-topic for Stack Overflow, as they tend to attract opinionated answers and spam".
However, this question must have a unique and precise answer from a single regulator, the EC, so this is not an area for opinionated answers.
Here is the UK standard.
https://www.openbanking.org.uk
Also, there is a LinkedIn group here to connect developers working on PSD2 and Open Banking with banks, regulators and suppliers.
https://www.linkedin.com/groups/12069802
I got an answer from the "owner" of the OBP project; I'm posting it verbatim:
Regarding the current status, Open Bank Project API develop branch currently supports OBP API specs 1.2.1 through 3.0.0
We also have an ISO20022 connector (PAIN) for initiating payments.
You can read the OBP specs here:
https://apiexplorersandbox.openbankproject.com/
or use the Swagger:
https://apisandbox.openbankproject.com/obp/v1.4.0/resource-docs/v3.0.0/swagger
or Resource Docs (our own format):
https://apisandbox.openbankproject.com/obp/v1.4.0/resource-docs/v3.0.0/obp
(the Swagger / Resource Doc links can also be found at the bottom of the API Explorer)
Regarding PSD2, PSD2 doesn't explain exactly how countries should comply (e.g. it doesn't define URLs etc.). However, it does say in Article 28 point 3: "Account servicing payment service providers shall also ensure that the dedicated interface uses ISO 20022 elements, components or approved message definitions, for financial messaging".
This is why STET (the recent French standard) uses field names like "PmtTpInf", "InstrPrty", "SvcLvl" and "Cd" etc.
In addition to the OBP standards mentioned above, we aim to support:
An ISO 20022 version of OBP. This will most likely be requested using a different MIME type on the current OBP URLs and will be implemented as an automatic translation of OBP terms to ISO 20022 equivalents (where they exist). We'll probably support ISO 20022 short field names and also longer type names (which are verbose but more self-describing).
UK Open Banking standard
STET (French)
Other Country standards.
Thus OBP API will be able to surface multiple standards using one OBP instance and backend connector. It will provide easy to use REST APIs (OBP) and less easy to read ISO20022 interfaces for compliance.
Hope that helps.
p.s. here is STET: https://www.stet.eu/assets/files/PSD2/API-DSP2-STET_V1.2.2.pdf
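To make those ISO 20022 short names a bit more concrete, here is a rough, hypothetical sketch (mine, not from the STET spec itself) of how a pain.001-style payment-type block nests them; the exact payload STET expects should be checked against the PDF above.

    # Hypothetical sketch: ISO 20022 short element names as they typically
    # nest inside a payment-type block. Names follow pain.001 abbreviations;
    # verify the exact structure against the official STET spec.
    payment_fragment = {
        "PmtTpInf": {              # PaymentTypeInformation
            "InstrPrty": "NORM",   # InstructionPriority (NORM or HIGH)
            "SvcLvl": {            # ServiceLevel
                "Cd": "SEPA",      # Code identifying the scheme
            },
        }
    }

    print(payment_fragment["PmtTpInf"]["SvcLvl"]["Cd"])  # -> SEPA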
If you are looking for a technical standard that is intended to be applicable across all PSD2 countries, you should check out the Berlin Group spec.
The Open Banking spec is somewhat UK-specific; it might be sufficient if you only need to support the UK market, or you could extend it to support other products/markets (e.g. SEPA payments).
I've been looking for an answer to this question myself, hoping that I'd find a PSD2-compliant, JSON-based answer rather than having to figure out ISO 20022.
I found this brilliant article by Starling Bank saying:
As of November 2017, however, the Open Banking Implementation Entity (OBIE) announced amendments to the scope of Open Banking to broaden out the Open Banking solution to include PSD2 items “in order to deliver a fully compliant PSD2 solution” – which can be read in full here and here.
It seems to me that if Open Banking is designed to be PSD2-compliant and it already delivers detailed specs, then the safest bet here is to simply implement Open Banking specs.
I've also found that viable alternatives to this are:
The Berlin Group's NextGenPSD2 specs, published as a YAML file.
The STET specs, also published as a YAML file.
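Both appear to be OpenAPI/Swagger-style YAML documents, so a quick way to get an overview is simply to load the file and list its paths. A rough sketch, assuming a locally downloaded copy of either spec (the filename below is made up):

    # pip install pyyaml
    import yaml

    # Hypothetical local copy of the NextGenPSD2 or STET YAML spec.
    with open("psd2-api.yaml", "r", encoding="utf-8") as f:
        spec = yaml.safe_load(f)

    # List every endpoint and the HTTP methods it defines, assuming the
    # file follows the usual OpenAPI layout with a top-level "paths" map.
    for path, operations in spec.get("paths", {}).items():
        methods = [m.upper() for m in operations
                   if m in ("get", "post", "put", "delete", "patch")]
        print(path, methods)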
The text of the PSD2 regulatory technical standards (Commission Delegated Regulation (EU) 2018/389) is here: https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=CELEX:32018R0389&from=DE
I found this from here: https://raue.com/en/e-commerce-2/new-eu-regulation-for-electronic-payments-and-online-banking/ which has a helpful summary.
PSD2 is the interface requirement; I don't understand why so many of the responses are about Open Banking, which is just about how to use the interface!
The specs rely a lot on JWTs. I found this website very useful, if it helps anyone: https://openbankingsdk.com
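As an illustration of the JWT mechanics involved (not of any particular bank's profile; the production Open Banking / PSD2 security profiles generally require asymmetric signing, e.g. PS256, with registered certificates), here is a minimal PyJWT sketch with made-up claim values:

    # pip install pyjwt
    import time
    import jwt  # PyJWT

    # Hypothetical shared secret; real profiles use asymmetric keys.
    secret = "not-a-real-key"

    claims = {
        "iss": "my-tpp-client-id",           # hypothetical issuer / client id
        "aud": "https://bank.example/api",   # hypothetical audience
        "iat": int(time.time()),
        "exp": int(time.time()) + 300,
    }

    token = jwt.encode(claims, secret, algorithm="HS256")
    decoded = jwt.decode(token, secret, algorithms=["HS256"],
                         audience="https://bank.example/api")
    print(decoded["iss"])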
Related
I am working on deciding the technology stack for a health-related application. We are targeting HIPAA compliance for it.
Native is definitely a good option, but I am looking for a cost-effective option from both a development and a maintenance perspective, which is why I am looking into the Flutter framework. It satisfies most of the functional as well as technical needs.
I need answers to the following:
Is there anything inside the Flutter framework itself which is not compliant with HIPAA?
Are there any challenges that I can't see at this moment but that people have faced with compliance?
Are there popular third parties that should not be used, like Firebase, Crashlytics, etc.? Of course, at the time of adding a new package we will do an analysis before we add it.
Short answer (first bullet): Yes, you can use Flutter in a way that complies with the HIPAA Security & Privacy Rules.
Long Answer (second bullet): You can also use it in a way that violates those rules. At the risk of pedantry, you're asking the wrong question. HIPAA applies to Covered Entities and Business Associates, not to frameworks or applications. A better question is "Is my company HIPAA Compliant?" which means "Have we implemented the 54 safeguards of the Security Rule in a reasonable and appropriate fashion, and are we using and disclosing PHI in ways permissible under the Privacy Rule?"
Third Bullet: If the third party is handling ePHI, they will need to sign a Business Associate Agreement (BAA), no matter how popular they are. Google's an odd case in that they'll sign a BAA for some, but not all, services. Here's the full list.
What technical details should programmers consider while developing their own OAuth service?
I have been trying to find guidelines, but most OAuth-related articles discuss things from a consumer's point of view (i.e. how to consume someone else's service). I want to design my own OAuth system, with my own authorization service and resource service. What technical details should I follow?
You probably have read the RFCs but just in case you haven't, they're the place you want to start:
OAuth 2.0 "core" (RFCs 6749 and 6750)
Proof Key for Code Exchange (PKCE) (RFC 7636)
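For a sense of how small some of these building blocks are, PKCE (RFC 7636) with the S256 method boils down to a few lines; a minimal sketch using only the Python standard library:

    import base64
    import hashlib
    import secrets

    # The client generates a high-entropy code_verifier (43-128 URL-safe chars).
    code_verifier = base64.urlsafe_b64encode(
        secrets.token_bytes(32)
    ).rstrip(b"=").decode("ascii")

    # code_challenge = BASE64URL(SHA256(code_verifier)) without padding (S256).
    code_challenge = base64.urlsafe_b64encode(
        hashlib.sha256(code_verifier.encode("ascii")).digest()
    ).rstrip(b"=").decode("ascii")

    # The challenge goes on the authorization request; the verifier is sent
    # later with the token request so the server can check that they match.
    print(code_verifier, code_challenge)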
The best 'packaged' guidance for OAuth implementers (client or otherwise) is available via IETF Best Current Practices (BCPs). Most people know about IETF RFCs, and (confusingly) BCPs are published as RFCs with an RFC number. Despite that, they're best practices and not formal specifications:
The BCP process is similar to that for proposed standards. The BCP is submitted to the IESG for review, and the existing review process applies, including a "last call" on the IETF announcement mailing list. However, once the IESG has approved the document, the process ends and the document is published. The resulting document is viewed as having the technical approval of the IETF, but it is not, and cannot become, an official Internet Standard.
BCPs you want to review:
OAuth security (up to date as of this writing)
OAuth for browser-based apps (up to date as of this writing)
OAuth for native apps (published in 2017 as an update to the "core" OAuth 2.0 RFC, still a good read)
JSON Web Tokens for OAuth (up to date)
These documents are framed in threat-model terms: they cover attacks (or "security considerations", in diluted form) and countermeasures. You might be looking for a more straightforward, building-blocks kind of roadmap, and perhaps there should be one as an educational tool, but real-world OAuth implementations must be developed against an actual threat model.
As one samurai said: ...swordsmanship untested in battle is like the art of swimming mastered on land.
I would also be interested to hear why you want to develop your own auth solution.
But putting that aside, there is an open-source project that does exactly what you ask: IdentityServer. You can check out their source code or fork it and build something on top of it.
Also, please check identigral's answer for the various docs.
I'm designing a REST-like API over HTTP.
I need the API clients (apps, not browsers) to follow the links (HATEOAS), not to build them.
Also, I'll still use readable URLs, for reasons that can be disagreed with.
However, while pretty ways to document URL templates exist (like these ones), I don't think that is the right way, as it could clearly tempt developers into building URLs themselves and legitimize doing so.
So, how do you document an API in a way that respects HATEOAS?
We often find discoverability associated with HATEOAS. To be honest, I don't think this is enough in real life, where business concepts are numerous and subtle to understand, and client developers are not your teammates.
Meaningful names are clearly not enough.
Developers need to make their client apps:
Navigate the API from the entry URL to the relevant documents
Build valid requests (parameters and bodies) and interpret responses with no ambiguity about the semantics.
So, how do you document this?
Are there existing tools that generate documentation this way?
Would a "glossary" be enough to fill in the gap between discoverability and unambiguous interpretation?
Maybe the HTML representation of the API (Accept: text/html) could return human-readable documentation...
...any other ideas or experience with this?
Related concepts: Design with Intent, Versioning, Level 3 API
First of all, there's nothing wrong with readable URIs and with users being able to easily explore your API by building URIs by hand. As long as they are not using that to drive the actual API usage, that's not a problem at all, and it's even encouraged by Roy Fielding himself. The objection that URIs must be opaque is based on a myth. Quoting Fielding himself on that matter:
Maybe I am missing something, but since several people have said that REST implies opaqueness in the URI, my guess is that a legend has somehow begun and I need to put it to rest (no pun intended).
REST does not require that a URI be opaque. The only place where the word opaque occurs in my dissertation is where I complain about the opaqueness of cookies. In fact, RESTful applications are, at all times, encouraged to use human-meaningful, hierarchical identifiers in order to maximize the serendipitous use of the information beyond what is anticipated by the original application.
It is still necessary for the server to construct the URIs and for the client to initially discover those URIs via hypertext responses, either in the normal course of creating the resource or by some form of query that results in a hypertext list. However, once that list is provided, people can and do anticipate the names of other/future resources in that name space, just as I would often directly type URIs into the location bar rather than go through some poorly designed interactive multi-page interface for stock charts.
http://osdir.com/ml/web.services.rest/2003-01/msg00074.html
If you need your client developers to follow the hyperlinks and not build URIs by hand, from my experience I think the best way to do that is to promote it as a cultural change in your work environment. In my case I had a supportive manager, so it was much easier. You should warn them that the URI namespace is under control of the server and the URIs may change anytime. If their clients break because they failed to comply, it's not your responsibility. It also helps a lot to have some sort of workshop or presentation to explain how HATEOAS works and the benefits for everyone. I noticed how a lot of street-REST developers think it's superfluous, until they actually get it.
Now, to address your main question, you shouldn't document the API, you should focus your documentation efforts on your media-type. Quoting Fielding again:
A REST API should spend almost all of its descriptive effort in defining the media type(s) used for representing resources and driving application state, or in defining extended relation names and/or hypertext-enabled mark-up for existing standard media types. Any effort spent describing what methods to use on what URIs of interest should be entirely defined within the scope of the processing rules for a media type (and, in most cases, already defined by existing media types). [Failure here implies that out-of-band information is driving interaction instead of hypertext.]
http://roy.gbiv.com/untangled/2008/rest-apis-must-be-hypertext-driven
That means, you should have custom media-types for your representations, and instead of documenting API endpoints or URIs, you should document those media-types and the operations for the links available in them. For instance, let's say you have an API for a Q&A site like StackOverflow. Instead of having an API documentation telling them that they should POST to the rel:answers link in the representation of a question in order to answer it with their current user, your questions should have a media-type of application/vnd.yourcompany.question+xml and on the documentation for that media-type you say that a POST to a rel:answers http link will answer the question.
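As a rough sketch of what "follow the link, don't build the URI" means for a client, here is a hypothetical example (the URLs, rel names, payload and a JSON variant of the media type are all made up for brevity):

    # pip install requests
    import requests

    QUESTION_MEDIA_TYPE = "application/vnd.yourcompany.question+json"  # hypothetical

    # Fetch a question representation; the server tells us where the links are.
    resp = requests.get("https://api.example.com/questions/42",
                        headers={"Accept": QUESTION_MEDIA_TYPE})
    question = resp.json()

    # Look up the link by its documented rel name instead of constructing
    # /questions/42/answers by hand.
    answers_link = next(link["href"] for link in question["links"]
                        if link["rel"] == "answers")

    # POST to the rel:answers link to answer the question.
    requests.post(answers_link,
                  headers={"Content-Type": QUESTION_MEDIA_TYPE},
                  json={"body": "It depends."})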
I don't know of any existing tools for this, but from my experience, any tool that can be used to generate documentation from abstract models can be used for this.
I don't know what your ecosystem of APIs looks like, but what works for me is to have generic documentation with a gentle introduction to REST, addressing some of the misconceptions, plus detailed general usage of your patterns that should apply to any API. After that, each individual server should have its own documentation, focused on the media-type.
I don't like the idea of returning documentation in the text/html representation, because that's supposed to represent the resource itself, but I love the idea of having a rel:doc link pointing to your HTML documentation for that media-type.
First
No, I am not asking you to teach me hacking; I am just curious about this file and its contents.
My journey
When I dived into the new HTML5 Boilerplate I came across humans.txt. I googled it and ended up at this site: http://humanstxt.org/.
Immediately my attention went to this picture:
Do I read this correctly? Hackers.txt?
So I resumed my journey on Google and stopped at these articles.
When I started reading them I had the feeling that they are about the difference between hackers and crackers. Later I got the feeling that I might be wrong and that this hackers.txt file is a sort of guestbook for hackers?
I also found other examples of hackers.txt files here.
Some files contain code, others just hurtful information.
Now I'm really confused: guestbook, hacking tutorials or just history?
Question
What is the use of this hackers.txt file?
The way I see things:
robots.txt contains information and instructions for robots (so it should be read/used by web crawlers, spiders and other kind of bots)
humans.txt contains useful information to be consumed by humans, according to http://humanstxt.org/
hackers.txt should be targeted towards hackers, so it should contain any information the site owner might want to transmit to a hacker, as Ze'ev pointed out. I don't think this should be a place for hackers to write anything, but rather to get information from the site owner (perhaps on how to report vulnerabilities, as others suggested).
Eduardo A. Vela Nava, commonly known as Eduardo Vela (or sirdarckcat on GitHub and Twitter), has been a Security Engineer at Google since 2010. (He currently has the role of Product Security Response Team Lead.)
As other security experts before him, he pondered the issue of effectively communicating the details of a site's vulnerability reward program to white hat hackers/pen-testers.
One specific such person is Chema Alonso (also on Twitter).
He is well-known enough to warrant a Spanish Wikipedia entry.
Between 2005 and 2011 Alonso was awarded the Microsoft Most Valuable Professional Award for Enterprise Security 6 years in a row. That should tell you something about his "skillz".
On February 3rd 2011 Alonso wrote about his frustrations regarding the topic of communication between the administrators and/or developers of a site and hackers.
He proposed an initiative similar to humans.txt, but for hackers, and mentions this hackers.txt initiative in his blog post.
In April 2011 the humanstxt.org website got a new design, which includes the image that mentions the hackers.txt file.
At this point, I must sadly submit to conjecture, but... consider:
The team behind humans.txt are all from Spain (mostly Barcelona)
At this point Alonso is already quite well known in the Spanish developer community
Would it be such a far stretch to imagine that they got to know of each other's efforts?
On May 14th 2014 Vela, already working at Google, commented on a blog post by Alonso. It is most likely that they had further contact in a professional setting. Whether or not they ever shared their ideas regarding anything related to hackers.txt is unknown.
On July 6th 2017 Vela posted a question to this extent on twitter:
How about we create a /hackers.txt that says whether something is in scope or not of a vulnerability reward program and where to report it?
Subsequently, an empty git repository was created for hackerstxt.org on GitHub
and an email thread was opened at Google Groups to discuss this idea further.
On August 13th 2017, Edwin Foudil (or EdOverflow on GitHub and Twitter) created a git repository for security.txt on GitHub and responded to the mailing list:
I have published a similar project to the one being discussed in this group (https://github.com/EdOverflow/security-txt) and would love to get some of your feedback and ideas.
The project is the equivalent of robots.txt, but for defining a security policy. Companies can add a security.txt to their website and define clear guidelines of what security researchers must do when they discover a security issue. security.txt also allows bug bounty programs to add their scope there. security.txt uses a similar syntax to robots.txt, which should make it easier for machines to parse.
He was, in part, inspired by an open-source project he was working on at the time called GratiPay, which had had a SECURITY.txt file since 2013.
His inspiration also drew from the SECURITY.md files that more and more open-source projects were adding to their repositories.
On September 10th 2017, Foudil submitted a first draft for security.txt to the Internet Engineering Task Force.
On September 14th 2017 Alonso wrote a blog post with the title (translated from Spanish) "Security.TXT an IETF draft for my Hackers.TXT".
Beyond the title, Alonso does not allude to the fact that his 2011 idea was the origin of the draft but he does state his approval of the effort.
On February 3rd 2018, the mailing group was informed that the effort would concede to security.txt, and Vela tweeted that Google had already implemented one.
Further information
Details and a nifty tool to generate your own security.txt can be found at
https://securitytxt.org/
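Since the file is a simple line-based list of fields (similar in spirit to robots.txt), fetching and inspecting a live one takes only a few lines. A minimal sketch using only the standard library; it assumes the site serves the file at the /.well-known/ path from the draft, and field names such as Contact and Policy also come from the draft:

    from urllib.request import urlopen

    # Google is one of the sites listed below as already publishing a security.txt.
    url = "https://www.google.com/.well-known/security.txt"

    fields = []
    with urlopen(url) as resp:
        for raw in resp.read().decode("utf-8").splitlines():
            line = raw.strip()
            if not line or line.startswith("#"):   # skip blanks and comments
                continue
            name, _, value = line.partition(":")
            fields.append((name.strip(), value.strip()))

    for name, value in fields:
        print(f"{name}: {value}")   # e.g. Contact: mailto:security@example.org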
Adoption
Even though the RFC is still in draft, the standard is already being adopted quite well by major players on the web.
Besides the security.txt at Google, there is also one on the website of:
1password
BBC
bit.ly - http://bit.ly/security.txt (can't be linked because Stack Overflow blacklists the use of common link shorteners in posts)
CERT NZ
DailyMotion
Dropbox
Facebook
Github
haveibeenpwned
NodeJs
NPM
Open SSL
Shopify
(Feel free to add more from well-known sites if you find them.)
As with humans.txt, there also seems to be a hackers.txt site at http://www.hackerstxt.org/. I'm not sure if someone has set the site up as a joke or not, but it links to a blog post on someone's Blogger site.
The post rambles on a bit (I put it through Google Translate) about the poster's history as a 'hacker'. Anyway, towards the end the writer says:
therefore I believe we should promote a hackers.txt-type initiative, in which administrators leave a message for the potential "aliens who are good", making it clear how administrators will receive a report of a vulnerability in their site. I've been going around in circles on this; the truth is that it is difficult to finish shaping it, because perhaps some "alien who is not so good", Brainiac type, takes it as a free hand to sweep through a site, or the "good site administrator" decides to change their mind and make trouble for them, but I think we should be able to do something. I don't know, maybe having Jon Jonz, or perhaps thinking about how to write that hackers.txt file. How do you see it? Evil greetings!
So I assume that the poster wants to start a sort of hackers.txt standard in the vein of humans.txt, but hasn't finished it off, or hasn't gotten it into the English speaking world.
Digging around, the Blogger site seems to be owned by a guy called Chema Alonso, who must be fairly reputable in the world of Spanish programmers as he has about 35k Twitter followers (https://twitter.com/chemaalonso). He seems to work for a company called ElevenPaths (http://elevenpaths.com/), which says that it's driving "radical innovation in security product development". A quick Whois check shows that the hackerstxt.org domain is registered by someone in Madrid, so I would assume it's Alonso.
The .txt file over at http://www.textfiles.com/news/hackers.txt, which has been referred to by some of the other answers in this thread, doesn't seem to have anything to do with the hackers.txt reference over at http://humanstxt.org/, and neither do most other search results for 'hackers.txt'.
It's possibly a joke, but if humans.txt is for humans to read then maybe hackers.txt is a warning for hackers.
Like the notice you get when you SSH into some more public terminals. "You are being watched... we will get you if you do anything bad..." That sort of thing.
If a hacker did compromise the site, they might notice the file, read it, realise you mean business and be scared away!
Interesting idea.
As this question is somewhat open-ended, I think you are also expecting some assumptions, so I am writing my opinion here (not in a comment); if it should have been a comment, I'm sorry.
I think that the idea behind humans.txt (which I had heard of before) is to establish a new habit, a new style, or something like that. In fact, you could just have a contact page where all the data from humans.txt could go. I think hackers.txt could also be a new style like that.
I suppose that hackers.txt appeared much earlier, maybe 20 years ago, when web servers and general web knowledge were poor, when running Apache+PHP+MySQL on localhost made you "a hacker", and if someone could access a file other than index.html (and the pages linked from it), reading hackers.txt was some kind of prize, or maybe some kind of filter to show some information to "those who behold" (like this one, perhaps).
I think hackers.txt should contain notes on how the site owner would like their data to be used, e.g. "I don't mind if you scrape the movie listings, but please don't hotlink our images in your app".
I can only find and download the OSGi specs (e.g. the core spec and the enterprise spec) from its website. What about the so-called OSGi RFCs? Are they publicly accessible, and how are they related to the specs? Thanks!
From this osgi.org page:
Each Expert Group works on items defined in documents known as Requests for Proposals (RFPs), which set the requirements for the technical development.
RFPs may be created by anyone but are always reviewed by the Requirements Committee to ensure they meet real-world needs and complement the larger objectives of the OSGi Alliance.
Assuming the RFP is accepted, the relevant Expert Group develops Requests for Comments (RFCs), which define the technical solution to the RFP.
The Expert Group also develops Reference Implementations and Test Cases to support the RFC where this is appropriate.
The Member Area of the OSGi web site contains much more information and detail on specific activities, including drafts and final versions of RFPs and RFCs, final but pre-release versions of specifications and other technical documents, minutes, schedules and calendars of Expert Group meetings, and other important information. This information is only available to members.
So only the members can access those RFCs.
Regarding "draft specifications":
From time to time the expert groups of the OSGi Alliance publish some draft RFCs under a special license (the Distribution and Feedback License) for a public review in order to receive comments from non-OSGi members and other organisations.
The download page to access these draft specs is http://www.osgi.org/Specifications/Drafts.
Keeping the RFCs non-public was a decision made to protect the IPR, as well as to keep the resulting specifications as unconfusing as possible. Sometimes one or more RFCs are combined into one specification, sometimes an RFC amends an existing spec. The RFCs are basically work in progress.
There are some RFCs the OSGi Alliance decided to publish. Those are the ones you can access. One example is RFC 112 Bundle Repository. This is a stand-alone spec, which is complete in itself.