WWW::Netflix::API program and hiding the consumer_secret key - perl

I've written a Perl CLI program that uses the WWW::Netflix::API module. It's finished and I'd like to release it, but without exposing my consumer_secret key. Any ideas how this can be done?

I think you have two options:
Make the end users obtain their own Netflix key.
Proxy all the traffic through your own server and keep your secret key on your server.
You could keep casual users away from your secret key by distributing it with some obfuscation, but you won't keep it a secret from anyone with a modicum of skill.
Proxying all the traffic would pretty much mean setting up your own web service that mimics the parts of the Netflix API that you're using. If you're only using a small slice of the Netflix API then this could be pretty easy. However, you'd have to carefully check the Netflix terms of use to make sure you're playing by the rules.
I think you'd be better off making people get their own keys and then setting up your tool to read the keys from a configuration file of some sort.
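For example, here's a minimal sketch of the config-file pattern. It's shown in Python with configparser purely to illustrate the idea (in Perl you'd reach for something like Config::Tiny); the file path, section name and key names are placeholders:

```python
# Illustrative only: read API credentials from a user-supplied config file
# instead of shipping them inside the program. Path and key names are
# hypothetical placeholders.
import configparser
import os
import sys

CONFIG_PATH = os.path.expanduser("~/.netflix-cli.ini")  # assumed location

config = configparser.ConfigParser()
if not config.read(CONFIG_PATH):
    sys.exit(f"No config found at {CONFIG_PATH}; "
             "create one with your own Netflix API credentials.")

consumer_key = config["netflix"]["consumer_key"]
consumer_secret = config["netflix"]["consumer_secret"]
```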

Related

Best Practices for authorizing local scripts via oauth to access Web Services

I couldn't find information on how other people solve this, so maybe you can help me out.
What I have
Multiple Services with REST APIs that are secured using OpenID Connect. Connections between the Services work fine.
Now I have multiple developers, who sometimes need to write and execute local scripts (Python, R, Bash etc.) for quick analysis and testing.
What I want
I want to enable the developers to use the services as easily as possible, while still respecting security concerns.
What I tried
I defined the script itself as a client. Therefore I created a public client in my OIDC product, named something like 'developer-scripts'. Using a library which handles the OAuth dance, I can then execute the script connecting as that client. The first time, the browser pops up and asks the user to authenticate, thereby authorizing the client to use the REST API on behalf of the user. After that, the tokens are cached and I can easily continue working on the script.
This simplified drawing tries to summarize what I just described.
That works perfectly fine, and regarding security I'm glad that credentials are no longer saved on the local computers, as they were before with e.g. Basic Authentication. Furthermore, I'm able to control access to the different services on a per-user level.
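For reference, this is roughly what such a script looks like in Python with the requests-oauthlib library (endpoint URLs, client ID, scopes and redirect URI are placeholders; token caching and PKCE are left out for brevity):

```python
import os
from requests_oauthlib import OAuth2Session

CLIENT_ID = "developer-scripts"                        # hypothetical public client
AUTH_URL = "https://idp.example.com/oauth2/authorize"  # placeholder endpoints
TOKEN_URL = "https://idp.example.com/oauth2/token"
REDIRECT_URI = "http://localhost:8400/callback"        # registered for the client

# Only needed because the loopback redirect above is plain HTTP.
os.environ["OAUTHLIB_INSECURE_TRANSPORT"] = "1"

oauth = OAuth2Session(CLIENT_ID, redirect_uri=REDIRECT_URI,
                      scope=["openid", "service-a.read"])

# Step 1: send the user to the browser to authenticate and consent.
authorization_url, state = oauth.authorization_url(AUTH_URL)
print("Please open this URL in your browser:", authorization_url)

# Step 2: the user pastes the full redirect URL back into the script.
redirect_response = input("Paste the redirect URL here: ")

# Step 3: exchange the authorization code for tokens (no client secret: public client).
token = oauth.fetch_token(TOKEN_URL, authorization_response=redirect_response,
                          include_client_id=True)

# The session now sends the access token on every call to the protected REST API.
resp = oauth.get("https://service-a.example.com/api/items")
print(resp.status_code)
```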
Other ideas, which didn't convince me:
every web service also has a public client which can then be used by the scripts (so the scripts aren't defined as clients anymore)
token generation is done somewhere else and the developer just adds the generated access/refresh token to the script
My problem
What concerns me about my current solution is the definition of that client. In the described case it would be either a generic client used by all developers for all scripts, or a new client for every developer who wants to write a local script. The latter seems to be a lot of overhead; the former may be a security problem.
So finally I'm asking the question: Are there any known best practices for my described use case?
EDIT:
I found a small article by [Martin Fowler](https://martinfowler.com/articles/command-line-google.html) in which he basically explains how he obtains a token to use in a local script. But in his case he's using it for one specific use case, not as a general public client, so unfortunately it doesn't really answer my question.

How to structure API service app architecture

Background:
I'm building an API service app. The app is just like any other: you send an HTTP request and receive a response. This seems simple up until I start thinking about user registration, payments, authentication, logging and so on.
Application:
tl;dr simple app diagram
Endpoints listening for HTTP requests and doing all the request-related work. This is the core of the service, what the service user would use this app for. Not directly accessible to the end user (unless they somehow know the URL). Python Flask server, deployed on Google Cloud Run.
API gateway acting as a proxy and a single access point, forwarding requests to the endpoints. This is the service access point for the end users. This part will also be responsible for authentication, limitations, logging and tracking the use of the API endpoints (see the sketch after this list). Python Flask server, deployed on Google Cloud Run.
Website including documentation, a demo and a showcase of API calls through the API gateway, registration, payment (thinking of Stripe), etc. VueJS app on a NodeJS server on a Google Cloud Compute Engine VM.
Database storing credentials of registered users, payment information and auth keys. Not implemented yet.
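To make the gateway piece more concrete, here is a rough sketch of what it might look like as a small Flask proxy (matching the stack described above). The backend URL, header name and key check are placeholders, not a recommendation of a specific design:

```python
# Minimal API-gateway sketch: authenticate, then forward to the internal endpoint service.
import requests
from flask import Flask, Response, abort, request

app = Flask(__name__)
BACKEND_URL = "https://endpoints-internal-xyz.a.run.app"  # hypothetical Cloud Run URL
VALID_KEYS = {"demo-key-123"}                             # would come from the database

@app.route("/api/<path:path>", methods=["GET", "POST"])
def proxy(path):
    api_key = request.headers.get("X-Api-Key")
    if api_key not in VALID_KEYS:
        abort(401)          # authentication / rate limiting / logging would live here

    upstream = requests.request(
        method=request.method,
        url=f"{BACKEND_URL}/{path}",
        params=request.args,
        data=request.get_data(),
        headers={"Content-Type": request.headers.get("Content-Type", "")},
        timeout=10,
    )
    return Response(upstream.content, status=upstream.status_code,
                    content_type=upstream.headers.get("Content-Type"))

if __name__ == "__main__":
    app.run(port=8080)
```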
Problems:
Is this architecture proper? What could be done differently or improved? How could I further simplify the interactions between the separate parts of the app? Am I missing any essential parts?
Haven't yet implemented the database part and I'm not sure what I should use. There are plenty of options on Google Cloud. I could also go with something simple and just install a DB with an HTTP/JSON interface on a Google Cloud Compute Engine VM. How do I choose the DB? Given such an app, what would be the best choice?
Can you recommend literature/blogs/other sources of info on similar app architectures for new developers not familiar with them?
This is pretty open ended, but here are some general comments:
Think about how your UI will work. Are you setting up a static app served directly from cloud storage or do you need something rendered on the server? Personally I prefer separating UI from API when I can but you need to be aware of things like search engine optimization. Even if you need to render some content dynamically your site can still be static. Take a look at static site generators like Gatsby. I haven't had to implement a server rendered UI in years and that makes me happy.
API gateway might be fine, but you don't really need it for anything. It might be simpler to start without it and concentrate on what actually matters. If your APIs are being called by an external client you can't trust the calls anyways and any API key you might be using will be exposed. I'd say don't worry about it for a single app. That being said, if you definitely want to use a GW then use one, just be aware that it is mostly a glorified proxy and not some core part of your architecture.
Make sure your API implementations don't store any local state so you can rely on Cloud Run scaling your services up and down. Definitely don't ever store state directly inside your containers. If you need state on the server it needs to be in some external data store.
Use JWTs or an external IDM (that will generate JWTs) for authentication. Keep session data on the client side as much as possible and pass the JWT in every API call to authenticate the caller. If you are implementing login on your own the only APIs you need to expose without tokens are for auth and password recovery, which you can separate into their own service.
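As a rough illustration of the JWT point, here is a minimal Flask sketch using PyJWT. The shared secret and HS256 algorithm are simplifying assumptions; in practice you would usually validate against your IDM's signing keys (e.g. RS256 via its JWKS endpoint):

```python
# Sketch: validate a JWT on every request instead of keeping server-side sessions.
from functools import wraps

import jwt  # PyJWT
from flask import Flask, g, jsonify, request

app = Flask(__name__)
JWT_SECRET = "change-me"  # placeholder; real setups verify against the IdP's keys

def require_jwt(view):
    @wraps(view)
    def wrapper(*args, **kwargs):
        auth = request.headers.get("Authorization", "")
        if not auth.startswith("Bearer "):
            return jsonify(error="missing bearer token"), 401
        try:
            claims = jwt.decode(auth[len("Bearer "):], JWT_SECRET, algorithms=["HS256"])
        except jwt.InvalidTokenError:
            return jsonify(error="invalid or expired token"), 401
        g.claims = claims          # no session state stored on the server
        return view(*args, **kwargs)
    return wrapper

@app.route("/api/profile")
@require_jwt
def profile():
    return jsonify(user=g.claims.get("sub"))
```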
Database selection depends on how well you understand your processes, how transactional your services are and your existing skillset. Overall I would use what you are comfortable with; you can probably succeed with a lot of things. Certain NoSQL flavors can seem simple on the surface, but if you don't have a clear understanding of the types of queries you need to run they can get tedious to work with. Generally you should stick to relational databases for transactional (OLTP) style implementations and consider NoSQL for analytics (OLAP) or other specialized workloads. Personally I like MongoDB and it is very popular, probably because it sort of sits in the middle of the pack, which makes it fit a lot of applications. Using MongoDB also keeps you cloud agnostic since it is available on every platform. Using platform-specific database flavors can lock you down to a specific vendor.
Whatever you do, don't start installing things on VMs. You can be almost 100% sure you are doing it wrong if this comes up. Remember, the services you consume don't all have to be managed by Google or even run on GCP. You can get MongoDB capacity directly from MongoDB who manage it on your behalf on all of the Big3 cloud vendors.
At least think about the long term, even if you don't necessarily need to have it impact your architecture right now. If you are expecting your app to be up for years, try to make it more platform agnostic than less. This might mean staying away from some really platform-specific serverless features, which will force you to jump through a couple of extra hoops. If you are using Cloud Run you are using containers, which already makes your app pretty portable; don't lock it to one platform by using a lot of platform-specific features. That being said, don't avoid them entirely either. You should always go for the low-hanging fruit, so don't try to avoid using things like a secrets manager etc. If your app has a short lifespan and you need really fast time to market, then don't worry about it.
Just my 2c, what you are doing is very generic and can be done in a lot of different ways.

subdivide web api authorization in machine to machine scenario

I need advice coming up with the proper configuration for a scenario using IdentityServer.
The scenario is machine to machine communication. A single web api is divided into two parts. One part allows notifications to be posted into it (write). The second allows information to be queried from it (read).
I envision protecting endpoints with something like [Authorize("Write")] and [Authorize("Read")]. From what I can tell, scopes are API-wide... if they can be used to differentiate access in this way, I haven't figured it out... or it's too simple for my brain.
Suggestions?
Scopes can be used at a finer-grained level than app-wide. Just do a normal claims check in the API for the scope you require for that API.
Perhaps even something like this would work: https://github.com/IdentityModel/Thinktecture.IdentityModel/blob/master/source/WebApi.ScopeAuthorization/ScopeAuthorizeAttribute.cs
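The linked attribute is the C#/Web API way to do it. Stack aside, the underlying "claims check" amounts to something like this once the token has been validated and its claims extracted (sketched here in Python with made-up claim values, purely to show the pattern):

```python
# Given the claims from an already-validated access token, gate each endpoint
# on a specific scope value. Claim names/values are illustrative.
def require_scope(claims: dict, required: str) -> bool:
    # Scopes are typically carried as a space-delimited string or a list;
    # handle both forms.
    scopes = claims.get("scope", [])
    if isinstance(scopes, str):
        scopes = scopes.split()
    return required in scopes

claims = {"sub": "machine-client-1", "scope": "notifications.write"}
print(require_scope(claims, "notifications.write"))  # True  -> allow the POST endpoint
print(require_scope(claims, "notifications.read"))   # False -> reject the GET endpoint
```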

How to protect RESTful API

I have been looking for a way to protect my RESTful APIs. This appeared simple, but it seems to not be so simple. First off, I am writing an iOS app connecting to a Play Framework server. None of this has anything to do with Google, Facebook, Twitter, or LinkedIn (shocking, I know). Oh, and my current plans do not require custom apps to use my APIs; it's just my apps for the time being.
Basic Authentication
What appeared to be simple was a basic user/pass on a /auth method managing a cookie session. That may draw some groans as being too simple or weak, but mostly it moved identity to a session key that is quickly verified. My initial setup was to expire the sessions every day, but that led to the iOS app forcing a login daily, which proved to be an annoyance.
OAuth
I posted a question on an iOS board and received a blunt direction towards OAuth. My research of OAuth began, but holy sh*t is that complicated, and there don't seem to be any server-side examples... just plenty of people complaining about how frustrating it is. All the client examples show connecting to Google, Facebook, Twitter, and LinkedIn. Oh joy!
After watching Eran Hammer's rant about OAuth 1 and OAuth 2, it seemed fruitless to continue, and his Oz idea (which looks really clean) is only at the early stages in Node.js.
Question
So, my question to the broad StackOverflow community is... what do you do for securing your REST APIs?
I'd suggest considering the approach used by the biggest players, i.e. Amazon Web Services or Windows Azure: HMAC. Although it isn't comfortable to implement, as you can see it's a trusted technique.
The general idea is to sign parts of the request (e.g. the headers) in the iOS app with a secret key and recalculate the signature in the Play app to verify that the request is authentic and hasn't been manipulated. If the check doesn't fail, you can be (almost) sure that it was sent by somebody who holds a valid secret key.
Take a look at the Windows Azure document to get the concept (I think that for a common task you can get away with signing fewer elements).
There is also another interesting post (based on AWS authentication) which describes the whole process even better.
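To make the idea concrete, here is a minimal sketch of both sides of the signing scheme in Python. The string-to-sign layout, header names and key handling are simplified assumptions; AWS and Azure each define their own canonical format:

```python
# Client signs selected request parts with a shared secret; the server
# recomputes the HMAC and compares. The string-to-sign format is illustrative only.
import hashlib
import hmac

SECRET_KEY = b"shared-secret-from-registration"   # placeholder

def sign_request(method: str, path: str, date_header: str, body: bytes = b"") -> str:
    body_hash = hashlib.sha256(body).hexdigest()
    string_to_sign = "\n".join([method.upper(), path, date_header, body_hash])
    return hmac.new(SECRET_KEY, string_to_sign.encode(), hashlib.sha256).hexdigest()

# --- Client side (the iOS app, conceptually): compute and attach the signature ---
date = "Tue, 01 Jan 2030 12:00:00 GMT"
signature = sign_request("POST", "/api/messages", date, b'{"x":1}')
headers = {"Date": date, "X-Signature": signature}

# --- Server side (the Play app, conceptually): recompute and compare in constant time ---
expected = sign_request("POST", "/api/messages", headers["Date"], b'{"x":1}')
assert hmac.compare_digest(headers["X-Signature"], expected)
```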
Edit
Of course you should realize that authentication in iOS and securing API requests are different things. Even if you expire your session every 15 minutes, you can't be sure that somebody won't intercept it and then be able to send a fake request from the outside. Signing every request should minimize that risk.
On the other hand, if you prepare clear rules for signing the requests and write a short doc (which I recommend even just for yourself), you can deliver it to another developer and they'll be able to implement it on (almost) any platform supporting SHA-256, so you will have an API ready to be used from third-party apps, if you decide to publish it in the future.
Since Play Framework is in Java, you could use Apache Shiro.
I haven't used it yet (I am planning to, though), so I don't know if it's the best option.
Just do something simple: send the authorization code/password in a custom header over HTTPS.
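For example (sketched with the Python requests library; the endpoint and header name are made up):

```python
import requests

# Send a pre-shared token/password in a custom header; HTTPS protects it in transit.
resp = requests.get(
    "https://api.example.com/v1/movies",            # placeholder endpoint
    headers={"X-Auth-Token": "user-secret-token"},  # hypothetical header name
    timeout=10,
)
resp.raise_for_status()
```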
So the only problem with the Basic Authentication approach was that the user has to login every day? Why not offer the user an option to save his username/password on the device? That way he can choose between security and convenience.

Minimum overhead for ASP.NET MVC authentication

I want to keep things as simple as possible and I don't want a complicated security mechanism. Basically I need an ID and an e-mail address for a user, and I really don't want to bother about other things. Also, I want minimal overhead in terms of security (if there is another provider who can do it for me, that's even better).
What is the simplest way to do this? I was thinking about incorporating LiveID or OpenID, but I don't know the advantages/disadvantages.
I am working with the Azure SDK.
If you use the Windows Azure Access Control Service, you can basically outsource all identity management. Take a look at the Windows Azure Platform Training Kit - there's a lab called "Introduction to the AppFabric Access Control Service 2.0" that will get you up and running quickly. Currently, you can choose any combination of the following identity providers:
WS-Federation
Facebook
Windows Live ID
Google
Yahoo!
"Simple" for whom?
The simplest strategy for you would probably be to use ASP.NET's standard SQL-based authentication provider. You just run a script against your database to set up all the tables, and then you use ASP.NET's built-in utility methods to authenticate. Give your user-specific tables a foreign key reference to that user's ID, and you're good to go. We've done this and never had any trouble with it. It's a tried and well-used system, so you know you won't be introducing any security vulnerabilities by hacking your own solution together. (See SqlMembershipProvider vs a custom solution.)
If you want something simple for the user, then an OpenId solution would be my pick. Set up something like StackOverflow has, where you can let users choose an account from a number of trusted providers to allow them to log in. From the user's perspective, it's really nice not to have to remember one more username and password for one more site.