Goal: Use a single streaming locator for several MP3 tracks, where the client can get one token with the streaming locator's claim and play each track during the token's lifetime.
I modified my output asset job process to use the List<string> of files, but the job fails, saying the input asset does not have a primary file.
Is my approach wrong, or is there a way to say track1.mp3 is the primary?
I am trying to configure a situation like this:
1 stream locator (1 claim required for access)
several tracks in that locator (1.mp3, 2.mp3, ... 50.mp3)
customer buys 1 sku, gets that stream locator's claim
This would be my preferred approach. However, my fallback (and fear) is that AMS wants me to maintain a 1:1 mapping of 1.mp3:streamlocator.contentkey.id in a backend, and then use this sequence flow:
Client requests token from a lookup endpoint
Endpoint compares their ownership vs the requested 1.mp3
If they have access to the SKU that 1.mp3 belongs to in the backend, issue a signed token with the content key ID from the 1.mp3:streamlocator.contentkey.id mapping (sketched just after this list)
If they don't have access, return a 401 from the token request for the client app to handle
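If I do end up with that fallback, I imagine the lookup endpoint would look roughly like this (a Python sketch rather than my actual .NET code; lookup_track and user_owns_sku are hypothetical backend helpers, and the issuer, audience, key, and claim must match whatever content key policy is configured in AMS):

```python
import datetime
import jwt  # PyJWT

# Assumption: symmetric key configured in the AMS content key policy.
TOKEN_SIGNING_KEY = "replace-with-the-policy-signing-key"

def issue_playback_token(user_id: str, track: str) -> str:
    # Hypothetical backend mapping: track -> (sku, content key id) maintained in a DB.
    sku, content_key_id = lookup_track(track)

    if not user_owns_sku(user_id, sku):     # hypothetical ownership check
        raise PermissionError("no access")  # the API layer turns this into a 401

    claims = {
        "iss": "https://my-token-issuer",   # must match the issuer in the key policy
        "aud": "my-audience",               # must match the audience in the key policy
        "exp": datetime.datetime.utcnow() + datetime.timedelta(hours=1),
        # Content key identifier claim, if the policy requires it:
        "urn:microsoft:azure:mediaservices:contentkeyidentifier": content_key_id,
    }
    return jwt.encode(claims, TOKEN_SIGNING_KEY, algorithm="HS256")
```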
Let's take a step back and modify the scenario a bit: suppose it is a (single bitrate or multi-bitrate) video with 3 audio streams. And you want a 1:1 mapping for streaming_locator : content_key_id : media_asset. With a token containing the right claims you can get the decryption key/license and play any of the 3 audio streams (with video) with a video player. This scenario is supported by AMS and I don't see any problem with this.
However, your scenario is MP3 (without video). My concern is whether there is a binding between MP3 and AES-128 encryption (or any DRM), and whether such a binding can be used in AMS dynamic packaging. Without such encryption, how do you enforce the protection, and what is the purpose of the token?
I have integrated an existing streaming application with Facebook, but am facing a final hurdle. My software doesn't have an API to change the streaming key, so I use a persistent key for YouTube, and also for Facebook.
However, I want to automate the whole system, and I cannot find how to use the Persistent Streaming Key (PSK) with Facebook in combination with the Graph API.
I schedule an event, but when I start streaming to the PSK the connection is rejected. It does work, however, if I go to the Facebook Page and open the Live Producer for the scheduled stream. Straight away the stream is accepted and seems to be connected to the stream originally assigned to the LiveVideo (or at least that is how it seems).
Is there a way to allow the PSK to be accepted without the manual intervention of opening the Live Producer page? I don't seem to be able to find anything.
Is there a way to allow the PSK to be accepted without the manual intervention of opening the Live Producer page?
Unfortunately, no.
Your automation really needs to tie into the rest of the Facebook Live API so that you can create/start streams. It isn't possible to use RTMP alone, even with the persistent stream key.
Recently, usage of PERSISTENT STREAMING KEYS has been enabled by Facebook. Here is the detail: https://www.facebook.com/formedia/blog/new-live-tools-for-publishers-persistent-stream-keys-crossposting-and-live-rewind
There is no documentation anywhere, but it looks like the LiveVideo object's stream key can be updated to the PERSISTENT STREAMING KEY via a POST to the LiveVideo object.
Check out this link on Facebook for Developers; the return value of the LiveVideo object includes the PERSISTENT STREAMING KEY:
https://developers.facebook.com/docs/graph-api/reference/live-video#Updating
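For reference, scheduling a LiveVideo through the Graph API and reading back the stream URL/key it was assigned might look roughly like this (Python with requests; the field names come from the LiveVideo reference above, but since the persistent-key behaviour is undocumented, treat this as a sketch to verify against the current API version):

```python
import requests

GRAPH = "https://graph.facebook.com/v12.0"   # assumption: substitute your current API version
PAGE_ID = "YOUR_PAGE_ID"
PAGE_TOKEN = "YOUR_PAGE_ACCESS_TOKEN"        # page token with live-video permissions

# Create (schedule) a LiveVideo object on the page.
created = requests.post(
    f"{GRAPH}/{PAGE_ID}/live_videos",
    data={
        "status": "SCHEDULED_UNPUBLISHED",   # or "LIVE_NOW" to go live immediately
        "title": "My automated stream",
        "access_token": PAGE_TOKEN,
    },
).json()

live_video_id = created["id"]

# Read back the stream URL/key Facebook assigned to this LiveVideo.
info = requests.get(
    f"{GRAPH}/{live_video_id}",
    params={"fields": "stream_url,secure_stream_url,status", "access_token": PAGE_TOKEN},
).json()
print(info.get("secure_stream_url"))
```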
Use case: A customer can upload a file through our public REST API to our S3 bucket, and then we can process the file using downstream services.
After doing some research, I found three ways to do it:
Uploading using OCTET-STREAM file type
Upload the file using form-data request
Upload the file using the pre-signed URL
In the first two cases the user sends the binary file and we upload it to S3 after validating it.
In the third method the user has to hit three APIs. The first API returns an S3 pre-signed URL, which grants the user access to upload the file to S3. In the second call the user uploads the file to that pre-signed URL. After the upload completes, the user sends a request to process the file. (The client side of this flow is sketched below, after the approach details.)
Do we have any security issues with method 3, given that a user could misuse the pre-signed URL to upload a malicious file?
Which of these methods is best according to industry practice?
Details of each approach:
1. Uploading using OCTET-STREAM file type
Pros:
This method is good for uploading file types that can be opened in a specific application, such as xlsx.
One API hit; direct file upload.
Cons:
This option is not suitable for uploading multiple files. If we need to support multi-file upload in the future, this would have to change to multipart/form-data (approach 2).
No metadata can be sent as a body parameter; metadata has to be sent in headers.
2. Upload the file using form-data request
The user uploads the file with the API request by attaching it as a multipart form.
Pros
We can send multiple files at the same time.
We can send extra parameters in the body.
3. Upload the file using the pre-signed URL
Cons
The customer has to hit three APIs to upload the file (two API calls to upload, then one more to trigger processing of the file).
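For reference, the client side of the three-call flow in approach 3 would look roughly like this (the endpoint paths are hypothetical):

```python
import requests

API = "https://api.example.com"                      # hypothetical API base URL
headers = {"Authorization": "Bearer <client-token>"}

# 1. Ask the API for a pre-signed upload URL (the API can validate the request here).
resp = requests.post(f"{API}/files", json={"filename": "report.xlsx"}, headers=headers).json()
upload_url, file_id = resp["upload_url"], resp["file_id"]

# 2. Upload the file bytes directly to S3 using the pre-signed URL.
with open("report.xlsx", "rb") as f:
    requests.put(upload_url, data=f)

# 3. Tell the API the upload is done so downstream processing can start.
requests.post(f"{API}/files/{file_id}/process", headers=headers)
```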
If you want them to load data into a bucket, the best way will almost always be the pre-signed URL. This gives you complete control over how you hand out access to the bucket, but also allows them to directly upload into the bucket when they have the access.
In the first two examples the user can send malicious data to your API, potentially DOSing the server / incurring costs on you to manage the payloads as you have no control over access (it is public).
In the third case they can request a URL from you, but that is it; beyond spamming you with requests for URLs, they can't access the bucket or do anything else unless you grant them a URL. This seems much better than them spamming your upload endpoint with large junk files and having you process them before you decide you didn't want them anyway.
Finally, using the pre-signed URL is the pattern AWS expects you to use, and so it has a lot of support for managing the access, roles, logging, monitoring, etc. that you would want to put around this service. When you stand up the API yourself, all of that is up to you to manage.
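On the server side, handing out such a URL with boto3 can be as small as the following sketch (bucket name and expiry are placeholders). Because you choose the object key and the expiry, a misused URL only ever lets someone write one object for a few minutes, and you can still validate or scan that object before any downstream service touches it.

```python
import uuid
import boto3

s3 = boto3.client("s3")

def create_upload_url(filename: str) -> dict:
    # Give each upload its own key so users can't overwrite each other's objects.
    key = f"uploads/{uuid.uuid4()}/{filename}"
    url = s3.generate_presigned_url(
        "put_object",
        Params={"Bucket": "my-upload-bucket", "Key": key},  # placeholder bucket name
        ExpiresIn=300,  # the URL stops working after 5 minutes
    )
    return {"upload_url": url, "key": key}
```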
I am attempting to build a website's back-end API (I want to make the back-end independent of the front-end, so I'm only making a server-side API for now, adhering to RESTfulness as much as possible). I haven't done this before, so I'm not sure of the 'best' and most secure way to do things.
How I do it now:
Some parts of the API should only be accessible to a specific user after they login and up to 24 hours later.
To do this, I am generating a random session ID on the server side whenever a user logs in (I'm using passwordless logins, so the user is assigned that ID when they click on a link in their email), and the server responds by sending that session ID to the client once. The client then stores this session ID in localStorage (or a file on disk if the client is not a web browser).
Next, I store that ID along with the associated email in my DB (MySQL table) on the server side.
Now every time the client wants something from my API, they have to provide the email and session ID in the URL (I don't want cookies for now), which the server checks against the ones in the DB; if they exist, the server responds fully, otherwise it responds with an error.
After 24 hours, the server deletes the email/session ID pair and the user has to login again (to generate another session ID and associate it with their email).
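Roughly, the server-side part looks like this (a Python sketch of what I'm doing rather than my actual F# code; the db_* functions stand in for the MySQL queries):

```python
import datetime
import secrets

SESSION_LIFETIME = datetime.timedelta(hours=24)

def create_session(email: str) -> str:
    # Cryptographically secure random token (instead of a hand-rolled 16-char string).
    session_id = secrets.token_urlsafe(32)
    expires_at = datetime.datetime.utcnow() + SESSION_LIFETIME
    db_insert_session(session_id, email, expires_at)   # hypothetical DB helper
    return session_id

def check_session(email: str, session_id: str) -> bool:
    row = db_get_session(session_id)                   # hypothetical DB helper
    return (row is not None
            and row.email == email
            and row.expires_at > datetime.datetime.utcnow())
```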
Now the questions:
Is my method secure or does it have obvious vulnerabilities? Is there another battle-tested way I'm not aware of?
Is there a better way for the client to store the session ID (if they are a web browser)?
What is the best way to generate a unique session ID? Currently I generate a random 16-char string that I set as the primary key of the session-email table.
Is using a MySQL table the most performant/best way to store session IDs (given it will be queried with each request)?
Do I need to encrypt session IDs in any way? Is it secure for the client to send it as a 'naked' URL param?
Sorry for having too many questions in one post but I think they're related by the single scenario above. If it makes any difference, I'm using F# and I expect my client to either be an android app or a web app.
Your REST API MUST not know anything about the REST client session, not even the session ID. If you don't want to send a password with every request, all you can do is sign the user ID and the timeout, so the service can authenticate based on the signature. Use a JSON Web Token: https://en.wikipedia.org/wiki/JSON_Web_Token
You can have a server-side REST client, which can hold the session you described. The question is, is it really worth the effort to develop a REST service instead of a regular web application? I am not sure in your case, but typically the answer is no, because you won't have any 3rd-party REST clients, and your application does not have enough traffic to justify the layered architecture, or it is not big enough to split into multiple processes, etc.
If security is important, then you MUST use a true random generator algorithm or hardware (https://en.wikipedia.org/wiki/Random_number_generation#.22True.22_vs._pseudo-random_numbers). It is not safe to send anything over plain HTTP; you must use HTTPS instead. And you MUST use the standard Authorization header instead of a query param (https://en.wikipedia.org/wiki/Basic_access_authentication).
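A minimal sketch of the JWT approach with the Authorization header, assuming PyJWT and the Bearer scheme (the secret and claim names are placeholders):

```python
import datetime
import jwt  # PyJWT

SECRET = "load-this-from-secure-configuration"   # placeholder; never hard-code the real key

def issue_token(user_id: str) -> str:
    claims = {
        "sub": user_id,                                                    # signed user ID
        "exp": datetime.datetime.utcnow() + datetime.timedelta(hours=24),  # signed timeout
    }
    return jwt.encode(claims, SECRET, algorithm="HS256")

def authenticate(authorization_header: str) -> str:
    # Expects "Authorization: Bearer <token>", sent over HTTPS only.
    scheme, _, token = authorization_header.partition(" ")
    if scheme != "Bearer":
        raise ValueError("unsupported auth scheme")
    claims = jwt.decode(token, SECRET, algorithms=["HS256"])  # raises if expired or tampered with
    return claims["sub"]
```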
I have a Google Cloud Endpoints API which is using Cloud SQL to store data. I want to provide a file upload for clients, and the files should be stored in Cloud Storage, but I also want to store the file metadata and the file's storage URL in Cloud SQL.
What's the best way to do this?
Can I upload files through Cloud Endpoints or do I need an extra upload servlet?
How can I update my database entities which need a reference to the uploaded files?
Any examples on how to combine those 3 technologies?
Assuming your clients are not added to your Google Cloud project (which is typically the case), your users don't have write access to your GCS bucket. You can either submit files to your application and move them to GCS from there (not recommended, as it consumes more network and CPU), or, better, submit them to GCS directly.
To let the client write to your GCS bucket directly, you will need to either:
1. Put your access key on the client for write access (not recommended, unless the client is only used by a limited set of trusted people).
2. Generate a time-bound token and give it to the client as a signed URL to upload directly.
Endpoints APIs themselves cannot do this, but you can generate the signed GCS URL on the server and fetch it via Endpoints on the client. Then set it as the form action (on a web client; other clients have similar ways to do a signed upload) and submit the form to upload the file.
<form action="SIGNED_URL_FROM_ENDPOINTS" method="post" enctype="multipart/form-data">
I don't see open-source code out there doing exactly this, but the closest is this project, which does generate the signed URL with a timeout (the only unintuitive part).
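With the google-cloud-storage client library, generating such a time-bound signed URL on the server looks roughly like this (bucket and object names are placeholders; a form POST like the one above would use a signed POST policy instead, but the idea is the same):

```python
import datetime
from google.cloud import storage

def create_signed_upload_url(object_name: str) -> str:
    client = storage.Client()   # uses the server's service-account credentials
    blob = client.bucket("my-upload-bucket").blob(object_name)   # placeholder bucket name
    return blob.generate_signed_url(
        version="v4",
        expiration=datetime.timedelta(minutes=15),   # time-bound: the URL expires after this
        method="PUT",                                # the client uploads with an HTTP PUT
        content_type="application/octet-stream",
    )
```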
The best way to update the metadata in your database is to watch the GCS bucket using 'Object Change Notifications'. Another way is to send the metadata to your server from the client itself, which can be an Endpoints call. You can also use a mix of both, where the metadata goes to the server via Endpoints even before the file is uploaded, and the notification updates the record to confirm that the file is available to serve.
I am supposed to make web services for an app and thought I could do a nice job of practicing good practice. As I found out, that means using REST. But there is one thing about it that makes very little sense to me.
Why use URI to pass any variable?
What we did in our last project was use POST only and pass everything as raw POST data (which was JSON). That's not very RESTful, but it has some advantages: it was quite simple on the client side, since I had a general function that takes a URI and data as arguments, then wraps them up and sends them.
Now, if I used proper REST, I would have to pass some data as part of the URI (the user ID, for instance). All the other data (username, email, etc.) would have to go as raw data, like we did, I guess. That means I would have to separate the user ID from the other data at some point. That's not so bad, but still: why?
EDIT
Here is a more detailed example:
Let's say you want to access (GET) and update (POST) user data. You may have a service accessible under /user, but what a RESTful service would do is accept the user's ID as part of the URI (/user/1234). All the other data (name, email, etc.) would go in the request content (probably as JSON).
What I'm positing is that it seems useless to put the user ID in the URI. If you wanted to update user data, you would send the additional data as content anyway. If you wanted to access it, you could use the same generic method to call the web service.
I know GET gets cached by a browser but I believe you have to cache it manually anyway if you use AJAX (web) or any HTTP client library (other platforms).
From the point of view of scalability, you can always add more services.
You use the URI to identify the resource (user/document/webpage) you want to work with, and pass the related data inside the request.
It has the advantage that web infrastructure components can find out the location of the resource without having any idea how your content is represented. For example, you can use standard caches and load balancers; all they need to know is the URL and headers (which are always represented the same way). Whether you use JSON, protobuf or WAV audio to communicate with your resource is irrelevant.
This will, for example, let you keep different resources in totally different places, such as http://cloud.google.com/resource1 and http://cloud.amazon.com/resource2. If you send it all as content, you lose the advantage of being able to place resources in totally different locations.
All this will allow you to scale massively, which you won't be able to do if you put it all on http://my.url.com/rest and pass all resource info as content.
Re: Your edit
Passing the user id in the URL is the only way to identify the individual resource (user). Remember, it's the user that's the resource, not the "user store".
For example, a cache that caches http://my.url/user won't be much good, since it would return the same cached page for every user. If the cache can work with http://my.url/user/4711, it can cache every user separately. In the same way, a load balancer could know that users 1-5000 are handled by one machine, 5001-10000 by another etc. and make intelligent decisions based on the URL only.
Imagine a RESTful web service as a database.
To get or modify a specific object, you need to identify it by providing its primary key.
You identify a user by his ID, not his Name+Nickname+e-mail+mother's maiden name.
The information that identifies an object or selects a set of objects goes to the URL. The information that modifies objects should be POSTed to the corresponding URL.
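Put differently, the split described above looks like this in practice (a Flask-style sketch; load_user and save_user are hypothetical data-access helpers):

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

# The URI identifies WHICH user; nothing about the representation goes into it.
@app.get("/user/<int:user_id>")
def get_user(user_id):
    return jsonify(load_user(user_id))   # hypothetical data-access helper

# Modifications to that same resource go in the request body, not the URI.
@app.post("/user/<int:user_id>")
def update_user(user_id):
    changes = request.get_json()         # e.g. {"name": "...", "email": "..."}
    save_user(user_id, changes)          # hypothetical data-access helper
    return "", 204
```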