How do I configure Wowza Streaming Engine to use HLS AES-128 encryption with a dynamic key?
Is it possible?
It's possible with the Server Side API. There are two callbacks:
onHTTPCupertinoEncryptionKeyVODChunk
onHTTPCupertinoEncryptionKeyLiveChunk
Example from the documentation:
public void onHTTPCupertinoEncryptionKeyLiveChunk(ILiveStreamPacketizer liveStreamPacketizer, String streamName, CupertinoEncInfo encInfo, long chunkId, int mode)
{
    if (streamName.equals("myStream"))
    {
        encInfo.setEncMethod(CupertinoEncInfo.METHOD_AES_128);
        encInfo.setEncUrl("http://mycompanykeyserver.com/authenticate.aspx");
        encInfo.setEncKeyBytes(BufferUtils.decodeHexString("123456789ABCDEF123456789ABCDEF12"));
        encInfo.setEncIVBytes(BufferUtils.decodeHexString("FEDCBA9876543210FEDCBA9876543210"));
        encInfo.setEncKeyFormatVersion("1");
    }
}
Key rotation is achieved by changing the key over time. Note that you shouldn't change it for every single segment; adapt the example above and choose a rotation window that suits your use case.
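To illustrate the windowing idea only (this is not Wowza API code), here is a minimal C# sketch of how a key server might derive a deterministic per-window key from the chunk id; the window size, the HMAC-based derivation, and all names are assumptions. The Wowza module would then serve the matching key bytes and a window-specific key URL from the callback above.

using System;
using System.Security.Cryptography;
using System.Text;

public static class KeyWindow
{
    // Assumed rotation window: every 10 chunks share one key.
    public const long ChunksPerWindow = 10;

    public static byte[] KeyForChunk(long chunkId, byte[] masterSecret)
    {
        long window = chunkId / ChunksPerWindow;
        using (var hmac = new HMACSHA256(masterSecret))
        {
            // Derive a deterministic key per window; take 16 bytes for AES-128.
            var hash = hmac.ComputeHash(Encoding.UTF8.GetBytes("window:" + window));
            var key = new byte[16];
            Array.Copy(hash, key, 16);
            return key;
        }
    }
}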
See: On-the-fly encryption with Wowza server-side API
After login, when redirecting the user using context.AuthenticateResult = new AuthenticateResult(<destination>, subject, name, claims), the partial cookie gets so big that it is split into up to 4 chunks and ends up causing a "request too big" error.
The number of claims is not outrageous (in the 100 range) and I haven't been able to consistently reproduce this on other environments, even with a larger number of claims. What else might be affecting the size of this cookie payload?
Running IdSrv3 2.6.1
I assume that you are using some .NET Framework clients, because these problems are usually connected with the Microsoft.Owin middleware, whose encryption causes the cookie to get this big.
The solution is again part of this middleware: every client that uses the Identity Server as its authority needs a custom IAuthenticationSessionStore implementation.
This is an interface that is part of Microsoft.Owin.Security.Cookies.
You need to implement it against whatever store you want to use, but it has the following structure:
public interface IAuthenticationSessionStore
{
    Task RemoveAsync(string key);
    Task RenewAsync(string key, AuthenticationTicket ticket);
    Task<AuthenticationTicket> RetrieveAsync(string key);
    Task<string> StoreAsync(AuthenticationTicket ticket);
}
We ended up implementing a SQL Server store for the cookies. Here is an example Redis implementation, and here is another that uses an EF DbContext, but don't feel forced to use either of those.
Let's say that you implement MyAuthenticationSessionStore : IAuthenticationSessionStore with everything it needs.
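For illustration, here is a minimal in-memory sketch of such an implementation (the class name and the ConcurrentDictionary-based storage are assumptions; a real implementation would persist the tickets to SQL Server, Redis, etc.):

using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;
using Microsoft.Owin.Security;
using Microsoft.Owin.Security.Cookies;

public class MyAuthenticationSessionStore : IAuthenticationSessionStore
{
    // In-memory storage for illustration only; it does not survive a restart or scale out.
    private readonly ConcurrentDictionary<string, AuthenticationTicket> tickets =
        new ConcurrentDictionary<string, AuthenticationTicket>();

    public Task<string> StoreAsync(AuthenticationTicket ticket)
    {
        var key = Guid.NewGuid().ToString("N");
        tickets[key] = ticket;
        return Task.FromResult(key);
    }

    public Task RenewAsync(string key, AuthenticationTicket ticket)
    {
        tickets[key] = ticket;
        return Task.FromResult(0);
    }

    public Task<AuthenticationTicket> RetrieveAsync(string key)
    {
        AuthenticationTicket ticket;
        tickets.TryGetValue(key, out ticket);
        return Task.FromResult(ticket);
    }

    public Task RemoveAsync(string key)
    {
        AuthenticationTicket ignored;
        tickets.TryRemove(key, out ignored);
        return Task.FromResult(0);
    }
}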
Then, in your Owin Startup.cs, you pass it to the cookie middleware:
app.UseCookieAuthentication(new CookieAuthenticationOptions
{
    AuthenticationType = "Cookies",
    SessionStore = new MyAuthenticationSessionStore(),
    CookieName = cookieName
});
With this in place, as the documentation for the SessionStore property (of type IAuthenticationSessionStore) says:
// An optional container in which to store the identity across requests. When used,
// only a session identifier is sent to the client. This can be used to mitigate
// potential problems with very large identities.
In your header you will have only the session identifier; the identity itself will be read from the store that you have implemented.
I'm looking into Unity multiplayer support. From all the docs it seems like the main model is for a game to be capable of being both the server and the client, with the same binary used for both.
Would it be possible to make a game where the client and the server are two different binaries: the client being more lightweight and only doing the client part, while the server does the heavy lifting of handling the open world/gameplay/state, etc.?
As a simplified example, imagine a huge world populated by characters, where the client is a mobile app that only needs to display their health/stats and render their avatars, while on the server those characters live a complex life in a large environment.
You could use something like SmartFoxServer (it supports Unity3D) for the server operations, completely independent of the client-side logic. That means C# with Unity on the client and Java for SmartFoxServer. It is pretty easy to configure extensions, manage rooms, lobbies, user events, chats, etc. on the server and receive the events on the client side. You can build a complete MMO system and run it on mobile too.
So I believe I found a way; at least it's working for me.
What I need is possible using the NetworkClient and NetworkServer classes. So now I have two separate projects, server and client.
The server has a script which is pretty much:
public class Server : MonoBehaviour {

    public Text text;

    public class HelloMessage : MessageBase
    {
        public string helloText;
    }

    void Start () {
        NetworkServer.Listen(4444);
        NetworkServer.RegisterHandler(333, onHelloMessage);
    }

    public void onHelloMessage(NetworkMessage msg)
    {
        text.text = msg.ReadMessage<HelloMessage>().helloText;
    }
}
This listens for messages on port 4444.
Then the client side is like this:
public class NetworkManager : MonoBehaviour {

    NetworkClient client;

    public class HelloMessage : MessageBase
    {
        public string helloText;
    }

    // Use this for initialization
    void Start () {
        client = new NetworkClient();
        client.Connect("127.0.0.1", 4444);
    }

    public void SendNetworkMessage()
    {
        HelloMessage msg = new HelloMessage();
        msg.helloText = "Hello";
        client.Send(333, msg);
    }
}
Now on the server side we can hook text up to a UI label, and on the client side hook SendNetworkMessage up to a button, and we can send messages from the client that appear on the server.
Now I just need to define a protocol and off we go.
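As a starting point for such a protocol, here is a minimal sketch of message IDs and an extra message type shared between the two projects; the class, constant, and field names are assumptions:

// Shared between the server and client projects (e.g. a common .cs file or assembly).
using UnityEngine.Networking;

public static class MsgIds
{
    // Custom message ids are shorts and must not collide with the built-in MsgType values.
    public const short Hello = 333;
    public const short StatsUpdate = 334;
}

public class StatsUpdateMessage : MessageBase
{
    public string characterId;
    public int health;
}

The client would then call client.Send(MsgIds.StatsUpdate, msg) and the server would call NetworkServer.RegisterHandler(MsgIds.StatsUpdate, handler), so both sides agree on the message numbers.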
I am trying to use the Azure IoT Hub REST API to create a device by following these links:
Create a new device identity
Control access to IoT Hub
And my HTTP body is:
{
    "status": "connected",
    "authentication": {
        "symmetricKey": {
            "primaryKey": "key in shared access policies",
            "secondaryKey": "key in shared access policies"
        }
    },
    "statusReason": "reason",
    "deviceId": "test123"
}
My header is:
["Content-Type": "application/json", "Authorization": "SharedAccessSignature sig=(key in shared access policies public key)=&se=1481687791&skn=iothubowner&sr=(my iot hub name).azure-devices.net%2fdevices%2ftest123"]
But I get a 401 error:
{"Message":"ErrorCode:IotHubUnauthorizedAccess;Unauthorized","ExceptionMessage":"Tracking ID:(tracking id )-TimeStamp:12/14/2016 03:15:17"}
Does anyone know how to fix this, or how to track down the ExceptionMessage?
The 401 problem is probably in the way you are calculating the SAS token.
The full process to calculate a SAS token for the IoT Hub (in C#) is:
private static readonly DateTime epochTime = new DateTime(1970, 1, 1, 0, 0, 0, 0, DateTimeKind.Utc);

public static string SharedAccessSignature(string hostUrl, string policyName, string policyAccessKey, TimeSpan timeToLive)
{
    if (string.IsNullOrWhiteSpace(hostUrl))
    {
        throw new ArgumentNullException(nameof(hostUrl));
    }

    var expires = Convert.ToInt64(DateTime.UtcNow.Add(timeToLive).Subtract(epochTime).TotalSeconds).ToString(CultureInfo.InvariantCulture);
    var resourceUri = WebUtility.UrlEncode(hostUrl.ToLowerInvariant());
    var toSign = string.Concat(resourceUri, "\n", expires);
    var signed = Sign(toSign, policyAccessKey);

    var sb = new StringBuilder();
    sb.Append("sr=").Append(resourceUri)
      .Append("&sig=").Append(WebUtility.UrlEncode(signed))
      .Append("&se=").Append(expires);

    if (!string.IsNullOrEmpty(policyName))
    {
        sb.Append("&skn=").Append(WebUtility.UrlEncode(policyName));
    }

    return sb.ToString();
}

private static string Sign(string requestString, string key)
{
    using (var hmacshA256 = new HMACSHA256(Convert.FromBase64String(key)))
    {
        var hash = hmacshA256.ComputeHash(Encoding.UTF8.GetBytes(requestString));
        return Convert.ToBase64String(hash);
    }
}
If you want to create the device in the IoT Hub you have to use a policy with full permissions, that is:
Registry read and write, Service connect and Device connect.
If you need a fully functional example in C# of how to use the IoT Hub REST API to create a device, check whether a device exists, and send messages to the IoT Hub, I wrote this post about it (the post is in Spanish, but I imagine what you need is just the code).
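As a rough sketch of how the helper above could be wired to the device-creation endpoint (assuming the SharedAccessSignature method is in scope; the hub name, device id, and api-version value are assumptions to adjust for your hub):

using System;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

public static async Task CreateDeviceAsync()
{
    var host = "myhub.azure-devices.net"; // assumed hub host name
    // "iothubowner" has registry read/write, which device creation requires.
    var sas = SharedAccessSignature(host, "iothubowner", "<policy primary key>", TimeSpan.FromHours(1));

    using (var http = new HttpClient())
    {
        http.DefaultRequestHeaders.TryAddWithoutValidation("Authorization", "SharedAccessSignature " + sas);

        var body = new StringContent("{\"deviceId\":\"test123\"}", Encoding.UTF8, "application/json");
        var response = await http.PutAsync("https://" + host + "/devices/test123?api-version=2016-11-14", body);

        Console.WriteLine((int)response.StatusCode + " " + await response.Content.ReadAsStringAsync());
    }
}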
It looks like your SAS token is wrong: it shouldn't include the devices part at the end. If you open the IoT Hub Device Explorer you can generate a SAS token to access the IoT Hub API. You can create the SAS either at the IoT Hub level or at the device level (the device-level token includes the device id, like yours does); here you need the hub-level one.
So your SAS should look like this:
SharedAccessSignature sr={iot hub name}.azure-devices.net&sig={sig}&se={se}&skn=iothubowner
There are two edits you need to make:
1. In your HTTP body, only deviceId is required; the others are optional. You can do it like this:
{
    deviceId: "test123"
}
Note that there are no double quotes around deviceId.
2. As @shachar said, you need to remove "%2fdevices%2ftest123" from the SAS token in the header. To generate the SAS token you can use Device Explorer.
The format of the SAS token you are using is wrong. To create a device, you need to use a SAS token for the IoT Hub. You can easily use the Azure IoT Toolkit extension for Visual Studio Code to generate a SAS token for the IoT Hub.
BTW, the format of a SAS token for a device is /^SharedAccessSignature sr=iot-hub-test.azure-devices.net%2Fdevices%2Fdevice1&sig=.+&se=.+$/, while the format of a SAS token for the IoT Hub is /^SharedAccessSignature sr=iot-hub-test.azure-devices.net&sig=.+&skn=iothubowner&se=.+$/
// The following code generates the SAS token programmatically.
string sasToken = new SharedAccessSignatureBuilder()
{
    KeyName = name,
    Key = key,
    Target = target,
    TimeToLive = TimeSpan.FromDays(days)
}.ToSignature();
// Use this SAS token as the Authorization header before calling the IoT REST API.
I want to implement a VSCode extension that uses the Language Server Protocol, but I want the server component to be on an actual server (in the cloud), and not a part of the VSCode extension.
Can I set the client extension to connect to a server via websockets or HTTP?
Multiple ServerOptions are supported when you initialize a LanguageClient, according to the signature of ServerOptions.
You can use StreamInfo if you want to use a real remote server as your language server. Here is sample code that connects to your server via WebSocket and initializes a LanguageClient:
import { Duplex } from "stream";
import WebSocket = require("ws");
import { LanguageClient, StreamInfo } from "vscode-languageclient";

function connectToServer(hostname: string, path: string): Duplex {
    const ws = new WebSocket(`ws://${hostname}/${path}`);
    // Wrap the WebSocket in a Duplex stream the LanguageClient can read from and write to.
    return WebSocket.createWebSocketStream(ws);
}

const connection = connectToServer(hostname, path);
const client = new LanguageClient(
    "docfxLanguageServer",
    "Docfx Language Server",
    () => Promise.resolve<StreamInfo>({
        reader: connection,
        writer: connection,
    }),
    {});
I am not sure whether you can control the location of the language server, but there is another option. You do not need to implement the Language Server Protocol to, for example, provide parsing help. In that case you can implement your own convenient parsing-service API (tailored to the nature of the language you want to support):
Within your extension, subscribe to workspace edit events using workspace.onDidChangeTextDocument
Restart a 1-second timeout every time the file on-change event is raised
When the timeout expires without any further file modification, gather all relevant files and send them to your parsing server
In your extension, create a DiagnosticCollection using https://code.visualstudio.com/api/references/vscode-api#languages.createDiagnosticCollection and populate it with the warnings/errors/hints returned by the parsing server in the cloud
Subscribe to other workspace events, e.g. workspace.onDidOpenTextDocument or workspace.onDidCloseTextDocument, in order to keep the DiagnosticCollection content relevant
I have been working with Wowza Streaming Server, and while trying to secure Apple HTTP Live Streaming using AES-128 (external method) I am encountering the problems below:
1. The external AES-128 method of encryption is not working for .smil files present in a sub-folder of the application's source directory. I tried putting the [my-stream].key file in both [install-dir]/keys and [install-dir]/keys/[sub-folder-name], but both scenarios failed.
The playlist URL is: [wowza-server-ip]:[port]/[application-name]/[application-instance-name]/smil:[sub-folder]/demo.smil/playlist.m3u8
2. In the case of mp4s present in the application's source path, the player is not calling the key URL.
The sequence of calls made by the player is:
[wowza-server-ip]:[port]/crossdomain.xml
[wowza-server-ip]:[port]/[application-name]/[application-instance-name]/[stream-name]/playlist.m3u8
[wowza-server-ip]:[port]/[application-name]/[application-instance-name]/[stream-name]/chunklist_w[wowza-session-id].m3u8
[web-server-ip]:[port]/crossdomain.xml
After this the player does not call the "key request URI" as it is supposed to. The calls go through properly when I use the internal AES-128 method of encryption.
My chunklist_w[wowza-session-id].m3u8 is
#EXTM3U
#EXT-X-VERSION:3
#EXT-X-TARGETDURATION:12
#EXT-X-MEDIA-SEQUENCE:0
#EXT-X-KEY:METHOD=AES-128,URI="http://[web-server-ip]:[port]/SimpleWebServlet/key.jsp?wowzasessionid=[session-id]"
#EXTINF:9.52,
media_w[session-id]_0.ts
#EXTINF:10.4,
media_w[session-id]_1.ts
The [streamname].key file in the [install-dir]/keys folder is:
cupertinostreaming-aes128-key: DE51A7254739C0EDF1DCE13BBB308FF0
cupertinostreaming-aes128-url: http://[web-server-ip]:[port]/SimpleWebServlet/key.jsp
The JSP file that returns the key is key.jsp:
<%@ page import="java.util.*,java.io.*" %>
<%
boolean isValid = true;
if (!isValid)
{
    response.setStatus( 403 );
}
else
{
    response.setHeader("Content-Type", "binary/octet-stream");
    response.setHeader("Pragma", "no-cache");
    String keyStr = "DE51A7254739C0EDF1DCE13BBB308FF0";
    int len = keyStr.length()/2;
    byte[] keyBuffer = new byte[len];
    for (int i=0;i<len;i++)
        keyBuffer[i] = (byte)Integer.parseInt(keyStr.substring(i*2, (i*2)+2), 16);
    OutputStream outs = response.getOutputStream();
    outs.write(keyBuffer);
    outs.flush();
}
%>
If anybody has encountered a similar problem or has successfully implemented the external AES-128 method with Wowza, kindly shed some light on the issues mentioned above.
EDIT 1
Kindly ignore the 2nd point; after further analysis I found that there is an issue with JBoss delivering the key once it has delivered the crossdomain.xml to the player.
For reference to this problem kindly check : Can I call two crossdomain.xml from two different servers from my flash player?
EDIT 2
Apologies for the typo in my first point. It should be .smil rather than .mp4; I have corrected it in the first point.
I recently tried out HLS with AES-128 and it worked fine. My key file was in [wowzadir]/keys/mystream.key. It looks like it is your player that is not doing something right here. Which player are you using?
You can try using wget to download some chunks and inspect them with VLC, for example, to see whether the encryption was applied.