AeRender.exe After Effects - virtual-reality

I would like to automate a video render process in After Effects using aerender.exe.
I am running aerender.exe from Node.js, but I am getting the error
"unable to read VR path from registry from c:\Users\username\AppData\Local\openvr\openvrpaths.vrpath"
I am using the following code:
var spawn = require('child_process').spawn;

var ae = spawn('/Program Files/Adobe/Adobe After Effects CC 2018/Support Files/aerender.exe', [
    '-project', 'template.aep',
    '-comp', 'final',
    '-output', 'movie.mov',
    '-OMtemplate', 'h264'
]);

ae.stderr.on('data', function (data) {
    // Error occurred
    console.log('stderr: ' + data);
});

ae.on('close', function (code) {
    // Video has rendered
});
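If the openvr message turns out to be a warning rather than the actual failure, logging stdout and the exit code as well makes that easier to confirm; a minimal addition to the snippet above (attached to the same ae object):
// aerender prints its render progress on stdout
ae.stdout.on('data', function (data) {
    console.log('stdout: ' + data);
});

// A zero exit code normally indicates the render finished
ae.on('close', function (code) {
    console.log('aerender exited with code ' + code);
});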
Please help on this topic.

Related

Unity WebGL throws Error: "ReferenceError: Runtime is not defined"

I wanted to export my Unity project (Unity version 2021.2) for WebGL, but I get this error:
An error occurred running the Unity content on this page. See your browser JavaScript console for more info. The error was:
ReferenceError: Runtime is not defined unityFramework/_WebSocketConnect/instance.ws.onopen#http://localhost:55444/Build/WEbGL.framework.js:3:67866
I am using this WebSocket package (https://github.com/endel/NativeWebSocket) and everything works fine in the Unity editor and in a Windows build. When I run the WebGL build, it does connect to the websocket, but then I get the error.
The error message says more info is in my console, but the console (F12) only repeats the error:
Uncaught ReferenceError: Runtime is not defined
at WebSocket.instance.ws.onmessage (WEbGL.framework.js:3)
instance.ws.onmessage # WEbGL.framework.js:3
To give a minimal reproducible example, I just created an empty 3D Core project with Unity 2021.2 and imported the NativeWebSocket package (I downloaded the files from GitHub and installed them manually):
Copy the sources from NativeWebSocket/Assets/WebSocket into your Assets directory.
Then you have to apply the fixes posted by kentakang on https://github.com/endel/NativeWebSocket/pull/54, otherwise the build will fail.
Then I made a new C# script with the code below (also from the GitHub page) and put it on the Camera in the scene. I exported it for WebGL and got the mentioned error.
This happens when one of the methods websocket.OnOpen/OnError/OnClose/OnMessage is called, so you don't even need a running WebSocket server: websocket.OnError is called anyway and the WebGL build throws the "Runtime is not defined" error. If you do run the WebSocket server that is included in the package, you get the error as soon as websocket.OnOpen is called.
using System;
using System.Collections;
using System.Collections.Generic;
using UnityEngine;
using NativeWebSocket;

public class Connection : MonoBehaviour
{
    WebSocket websocket;

    // Start is called before the first frame update
    async void Start()
    {
        websocket = new WebSocket("ws://localhost:2567");

        websocket.OnOpen += () =>
        {
            Debug.Log("Connection open!");
        };

        websocket.OnError += (e) =>
        {
            Debug.Log("Error! " + e);
        };

        websocket.OnClose += (e) =>
        {
            Debug.Log("Connection closed!");
        };

        websocket.OnMessage += (bytes) =>
        {
            Debug.Log("OnMessage!");
            Debug.Log(bytes);

            // getting the message as a string
            // var message = System.Text.Encoding.UTF8.GetString(bytes);
            // Debug.Log("OnMessage! " + message);
        };

        // Keep sending messages at every 0.3s
        InvokeRepeating("SendWebSocketMessage", 0.0f, 0.3f);

        // waiting for messages
        await websocket.Connect();
    }

    void Update()
    {
#if !UNITY_WEBGL || UNITY_EDITOR
        websocket.DispatchMessageQueue();
#endif
    }

    async void SendWebSocketMessage()
    {
        if (websocket.State == WebSocketState.Open)
        {
            // Sending bytes
            await websocket.Send(new byte[] { 10, 20, 30 });

            // Sending plain text
            await websocket.SendText("plain text message");
        }
    }

    private async void OnApplicationQuit()
    {
        await websocket.Close();
    }
}
Does someone know how to fix this error? Help would be appreciated :)
It seems that in Unity 2021.2 the Runtime variable no longer exists and can be replaced with Module['dynCall_*'].
In WebSocket.jslib, change all Runtime.dynCall('*1', *2, [*3, *4]) to Module['dynCall_*1'](*2, *3, *4).
Example: the instance.ws.onopen function in WebSocket.jslib.
Change
Runtime.dynCall('vi', webSocketState.onOpen, [ instanceId ]);
to
Module['dynCall_vi'](webSocketState.onOpen, instanceId);
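The same one-for-one substitution applies to the other callbacks in WebSocket.jslib (onerror, onclose, onmessage); the exact argument names depend on the version of the file, but the pattern is identical. For example, a call along the lines of
Runtime.dynCall('viii', webSocketState.onMessage, [ instanceId, buffer, length ]);
becomes
Module['dynCall_viii'](webSocketState.onMessage, instanceId, buffer, length);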

How to clear the cache in Ionic v3

I would like to know how to clear the Ionic cache on the iOS platform when switching between tabs. By that I mean freeing memory inside the application, such as stored canvases, images, etc. My application's memory usage grows as it renders to the canvas, and that memory is never released.
There is an npm package available for this:
https://www.npmjs.com/package/cordova-plugin-cache
Here is example code showing how to use it:
document.addEventListener('deviceready', onDeviceReady);

function onDeviceReady() {
    var success = function (status) {
        alert('Message: ' + status);
    };

    var error = function (status) {
        alert('Error: ' + status);
    };

    window.cache.clear(success, error);
    window.cache.cleartemp(); // clear the temporary directory as well
}
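If the goal is specifically to free this memory when switching between tabs, the same calls can be wrapped in a small helper and invoked from the tab page's leave hook (for example Ionic v3's ionViewWillLeave). The helper below is only an illustration, not part of the plugin:
// Hypothetical helper: call it from ionViewWillLeave() or any other
// tab-change handler, after 'deviceready' has fired.
function clearNativeCache() {
    if (window.cache && typeof window.cache.clear === 'function') {
        window.cache.clear(
            function (status) { console.log('cache cleared: ' + status); },
            function (status) { console.log('cache clear failed: ' + status); }
        );
        window.cache.cleartemp();
    }
}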

Problems with WebAudio

I'm creating a research experiment that uses the Web Audio API to record audio files spoken by the user.
I came up with a solution for this using recorder.js and everything was working fine... until I tried it yesterday.
I am now getting this error in Chrome:
"The AudioContext was not allowed to start. It must be resumed (or
created) after a user gesture on the page."
And it refers to this link: Web Audio API policy.
This appears to be a consequence of Chrome's new policy outlined at the link above.
So I attempted to solve the problem by using resume() like this:
var gumStream; // stream from getUserMedia()
var rec;       // Recorder.js object
var input;     // MediaStreamAudioSourceNode we'll be recording

// shim for AudioContext when it's not available
var AudioContext = window.AudioContext || window.webkitAudioContext;
var audioContext = new AudioContext(); // new audio context to help us record

function startUserMedia() {
    var constraints = { audio: true, video: false };

    audioContext.resume().then(() => { // This is the new part
        console.log('context resumed successfully');
    });

    navigator.mediaDevices.getUserMedia(constraints).then(function (stream) {
        console.log("getUserMedia() success, stream created, initializing Recorder.js");
        gumStream = stream;
        input = audioContext.createMediaStreamSource(stream);
        rec = new Recorder(input, { numChannels: 1 });
        audio_recording_allowed = true;
    }).catch(function (err) {
        console.log("Error");
    });
}
Now in the console I'm getting:
Error
context resumed successfully
And the stream is not initializing.
This happens in both Firefox and Chrome.
What do I need to do?
I just had this exact same problem! And technically, you helped me to find this answer. My error message wasn't as complete as yours for some reason and the link to those policy changes had the answer :)
Instead of resuming, it's best practice to create the audio context after the user has interacted with the document (when I say best practice: if you have a look at padenot's first comment of 28 Sept 2018 on this thread, he mentions why in the first bullet point).
So instead of this:
var audioContext = new AudioContext(); // new audio context to help us record

function startUserMedia() {
    audioContext.resume().then(() => { // This is the new part
        console.log('context resumed successfully');
    });
}
Just set the audio context like this:
var audioContext;

function startUserMedia() {
    if (!audioContext) {
        audioContext = new AudioContext();
    }
}
This should work, as long as startUserMedia() is executed after some kind of user gesture.
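For completeness, a minimal sketch of that wiring (the #record-button element and the click handler are just examples; any user gesture works):
var rec; // Recorder.js object, as in the question
var audioContext;

function startUserMedia() {
    if (!audioContext) {
        // Created inside the gesture handler, so the autoplay policy allows it to run.
        audioContext = new (window.AudioContext || window.webkitAudioContext)();
    }

    navigator.mediaDevices.getUserMedia({ audio: true, video: false })
        .then(function (stream) {
            var input = audioContext.createMediaStreamSource(stream);
            rec = new Recorder(input, { numChannels: 1 });
        })
        .catch(function (err) {
            console.log('getUserMedia failed: ' + err);
        });
}

// '#record-button' is a hypothetical element; the important part is that
// startUserMedia() runs in response to a user gesture.
document.querySelector('#record-button').addEventListener('click', startUserMedia);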

Observe changes in TinyMCE from Dart

According to the TinyMCE API, the following JavaScript code observes changes in the TinyMCE editor:
tinyMCE.init({
    ...
    setup : function(ed) {
        ed.onChange.add(function(ed, l) {
            console.debug('Editor contents was modified. Contents: ' + l.content);
        });
    }
});
However, I'm unable to run this code from Dart using the js library. Help is appreciated.
UPDATE:
There is a problem in the JS code above. Alternatively, I found this working code here:
var ed = new tinymce.Editor('textarea_id', {
    init_setting_item: 1,
}, tinymce.EditorManager);

ed.on('change', function(e) {
    var content = ed.getContent();
    console.log(content);
});

ed.render();
I still need help running the code from Dart, and preferably storing its result in a Dart variable for subsequent processing.
Here's the same code called from Dart:
var ed = new js.Proxy(js.context.tinymce.Editor, 'textarea_id', js.map({
    'init_setting_item': 1
}), js.context.tinymce.EditorManager);

js.retain(ed); // retain allows to use 'ed' in the following callback

ed.on('change', new js.Callback.many((e) {
    var content = ed.getContent();
    window.console.log(content);
}));

ed.render();

Using PhoneGap to record audio to documents folder on iOS

As part of an iPhone app I'm creating using PhoneGap, I need to be able to use the microphone to record to a new file which is stored in the app's Documents folder on the phone.
I think I have the code sorted to actually capture the recording; I'm just having trouble creating a blank .wav in the Documents folder to record to. According to the PhoneGap API, iOS requires that the src file for the audio already exists.
Can anyone help me with the couple of lines of code needed to create this blank file? My code so far is:
function recordAudio() {
    var src = "BLANK WAV IN DOCUMENTS FOLDER";
    var mediaRec = new Media(src, onSuccess, onError);

    // Record audio
    mediaRec.startRecord();

    // Stop recording after 10 sec
    var recTime = 0;
    var recInterval = setInterval(function() {
        recTime = recTime + 1;
        if (recTime >= 10) {
            clearInterval(recInterval);
            mediaRec.stopRecord();
        }
    }, 1000);
}

function onSuccess() {
    console.log("recordAudio():Audio Success");
}

// onError Callback
function onError(error) {
    alert('code: ' + error.code + '\n' +
          'message: ' + error.message + '\n');
}

$('#record-button').bind('tap', function() {
    recordAudio();
});
You may need to create the file first using the File API.
document.addEventListener("deviceready", function onDeviceReady() {
window.requestFileSystem(LocalFileSystem.PERSISTENT, 0, gotFS, function fail(){});
}, false);
var gotFS = function (fileSystem) {
fileSystem.root.getFile("blank.wav",
{ create: true, exclusive: false }, //create if it does not exist
function success(entry) {
var src = entry.toURI();
console.log(src); //logs blank.wav's path starting with file://
},
function fail() {}
);
};
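Putting the two pieces together, one option is to start the recording from the getFile success callback. This is only a sketch and assumes the recordAudio function from the question is changed to accept the src as a parameter:
var gotFS = function (fileSystem) {
    fileSystem.root.getFile("blank.wav",
        { create: true, exclusive: false },
        function success(entry) {
            // Hand the freshly created file to the recorder
            // (assumes recordAudio has been modified to take src as an argument).
            recordAudio(entry.toURI());
        },
        function fail() {}
    );
};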
Have you tried using something like this?
var src = "blank.wav";
instead of "BLANK WAV IN DOCUMENTS FOLDER"?