Unity WebGL throws Error: "ReferenceError: Runtime is not defined" - unity3d

I wanted to export my Unity project (Unity version 2021.2) for WebGL, but I get this error:
An error occurred running the Unity content on this page. See your browser JavaScript console for more info. The error was:
ReferenceError: Runtime is not defined unityFramework/_WebSocketConnect/instance.ws.onopen#http://localhost:55444/Build/WEbGL.framework.js:3:67866
I am using this WebSocket package (https://github.com/endel/NativeWebSocket) and everything works fine in the Unity editor and in a Windows build. When I run the WebGL build it does connect to the websocket, but then I get the error.
The error message says there is more info in my browser JavaScript console, but the console (F12) only repeats the error:
Uncaught ReferenceError: Runtime is not defined
at WebSocket.instance.ws.onmessage (WEbGL.framework.js:3)
instance.ws.onmessage # WEbGL.framework.js:3
To give a minimal reproducible example, I created an empty 3D Core project with Unity 2021.2 and imported the NativeWebSocket package (I downloaded the files from GitHub and installed them manually):
Copy the sources from NativeWebSocket/Assets/WebSocket into your Assets directory.
Then you have to apply the fixes posted by kentakang on https://github.com/endel/NativeWebSocket/pull/54, otherwise the build will fail.
Then I made a new C# script with the code below (also from the GitHub page) and put it on the camera in the scene. I exported the project for WebGL and got the error mentioned above.
This happens whenever one of the callbacks websocket.OnOpen/OnError/OnClose/OnMessage is invoked, so you don't even need a running websocket server: without one, websocket.OnError fires and the WebGL build throws the "Runtime is not defined" error. If you do run the websocket server that is also included in the package, you get the error as soon as websocket.OnOpen is called.
using System;
using System.Collections;
using System.Collections.Generic;
using UnityEngine;
using NativeWebSocket;

public class Connection : MonoBehaviour
{
    WebSocket websocket;

    // Start is called before the first frame update
    async void Start()
    {
        websocket = new WebSocket("ws://localhost:2567");

        websocket.OnOpen += () =>
        {
            Debug.Log("Connection open!");
        };

        websocket.OnError += (e) =>
        {
            Debug.Log("Error! " + e);
        };

        websocket.OnClose += (e) =>
        {
            Debug.Log("Connection closed!");
        };

        websocket.OnMessage += (bytes) =>
        {
            Debug.Log("OnMessage!");
            Debug.Log(bytes);

            // getting the message as a string
            // var message = System.Text.Encoding.UTF8.GetString(bytes);
            // Debug.Log("OnMessage! " + message);
        };

        // Keep sending messages at every 0.3s
        InvokeRepeating("SendWebSocketMessage", 0.0f, 0.3f);

        // waiting for messages
        await websocket.Connect();
    }

    void Update()
    {
#if !UNITY_WEBGL || UNITY_EDITOR
        websocket.DispatchMessageQueue();
#endif
    }

    async void SendWebSocketMessage()
    {
        if (websocket.State == WebSocketState.Open)
        {
            // Sending bytes
            await websocket.Send(new byte[] { 10, 20, 30 });

            // Sending plain text
            await websocket.SendText("plain text message");
        }
    }

    private async void OnApplicationQuit()
    {
        await websocket.Close();
    }
}
Does someone know how to fix this error? Help would be appreciated. :)

It seems that in Unity 2021.2 the Runtime variable no longer exists and can be replaced with Module['dynCall_*'].
In WebSocket.jslib, change every Runtime.dynCall('*1', *2, [*3, *4]) to Module['dynCall_*1'](*2, *3, *4).
For example, in the instance.ws.onopen function in WebSocket.jslib, change
Runtime.dynCall('vi', webSocketState.onOpen, [ instanceId ]);
to
Module['dynCall_vi'](webSocketState.onOpen, instanceId);
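After the substitution, the onopen handler then looks roughly like this (only the dispatch line changes; the rest is the package's own code). The same pattern applies to the onmessage/onerror/onclose handlers, keeping whatever signature string each of them originally used:
instance.ws.onopen = function () {
  if (webSocketState.debug)
    console.log("[JSLIB WebSocket] Connected.");
  if (webSocketState.onOpen)
    Module['dynCall_vi'](webSocketState.onOpen, instanceId);
};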

Related

How to retrieve logcat in Flutter?

How do you get the output written to logcat back into the Flutter app that caused it? Or, put more simply: how do you read logcat in Flutter?
The problem is this:
The app uses a stack of Android plugins to communicate with some custom hardware through Bluetooth. Those Android plugins write extensively to logcat. Now, for debugging, it would be very helpful to be able to read all the messages the App (including native plugins) has written to logcat. Question is, is this somehow possible?
How would you tackle that?
Check out the plugin called logcat on pub.dev.
Sadly, it seems to be no longer maintained and isn't updated for null safety.
But you can check out the source code here and see how the plugin gets access to the Android logcat.
Because logcat is a native facility, you'll have to use a MethodChannel to call a Java/Kotlin function:
// define MethodChannel
final platform = const MethodChannel('app.channel.logcat');
// call native method
logs = await platform.invokeMethod('execLogcat');
And the native part:
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;

import io.flutter.plugin.common.MethodCall;
import io.flutter.plugin.common.MethodChannel;
import io.flutter.plugin.common.MethodChannel.MethodCallHandler;
import io.flutter.plugin.common.MethodChannel.Result;
import io.flutter.plugin.common.PluginRegistry.Registrar;

public class LogcatPlugin implements MethodCallHandler {
    public static void registerWith(Registrar registrar) {
        final MethodChannel channel = new MethodChannel(registrar.messenger(), "app.channel.logcat");
        channel.setMethodCallHandler(new LogcatPlugin());
    }

    @Override
    public void onMethodCall(MethodCall call, Result result) {
        if (call.method.equals("execLogcat")) {
            String logs = getLogs();
            if (logs != null) {
                result.success(logs);
            } else {
                result.error("UNAVAILABLE", "logs not available.", null);
            }
        } else {
            result.notImplemented();
        }
    }

    String getLogs() {
        try {
            Process process = Runtime.getRuntime().exec("logcat -d");
            BufferedReader bufferedReader = new BufferedReader(new InputStreamReader(process.getInputStream()));
            StringBuilder log = new StringBuilder();
            String line;
            while ((line = bufferedReader.readLine()) != null) {
                log.append(line);
            }
            return log.toString();
        } catch (IOException e) {
            return "EXCEPTION" + e.toString();
        }
    }
}
The code samples are from github.com/pharshdev/logcat.
Maybe you can just fork the git repo and migrate it to null safety if needed.
Check out the plugin called logcat_monitor on pub.dev.
Its biggest advantage over the other logcat plugin is that it allows continuous monitoring of logcat messages.
How to use it:
Add the dependencies:
dependencies:
  logcat_monitor: ^0.0.4
Create a function to consume the logcat messages:
void _mylistenStream(dynamic value) {
  if (value is String) {
    _logBuffer.writeln(value);
  }
}
Register your function as a listener to receive the logs, then use them in any way within your app:
LogcatMonitor.addListen(_mylistenStream);
Start the logcat monitor, passing filter parameters as defined by the logcat tool:
await LogcatMonitor.startMonitor("*.*");

Unity3D Firebase Auth not working on Android: Firebase doesn't connect or something

It works fine in the editor. I am trying to add Firebase to my project, and there it works, but when I test on Android the login window doesn't react. I am trying to do this:
Unity3D firebase AUTH not working on ANDROID
using UnityEngine;
using UnityEngine.SceneManagement;

public class FirebaseINIT : MonoBehaviour
{
    public static bool firebaseReady;

    void Start()
    {
        CheckIfReady();
    }

    void Update()
    {
        if (firebaseReady == true)
        {
            SceneManager.LoadScene("LoginScene");
        }
    }

    public static void CheckIfReady()
    {
        Firebase.FirebaseApp.CheckAndFixDependenciesAsync().ContinueWith(task => {
            Firebase.DependencyStatus dependencyStatus = task.Result;
            if (dependencyStatus == Firebase.DependencyStatus.Available)
            {
                Firebase.FirebaseApp app = Firebase.FirebaseApp.DefaultInstance;
                firebaseReady = true;
                Debug.Log("Firebase is ready for use.");
            }
            else
            {
                firebaseReady = false;
                UnityEngine.Debug.LogError(System.String.Format(
                    "Could not resolve all Firebase dependencies: {0}", dependencyStatus));
            }
        });
    }
}
But the next scene is not loaded, and I am confused about why this happens; I can't go further because I don't understand the problem. Could this come from the Android resolver? Because in order to build the project at all, I must remove the resolved libraries.
Otherwise, if I don't remove them (Assets => Android => resolve libraries), I get the following errors:
Configure project :launcher
WARNING: The option setting 'android.enableR8=false' is deprecated.
Execution failed for task ':launcher:processReleaseResources'.
A failure occurred while executing com.android.build.gradle.internal.tasks.Workers$ActionFacade
Android resource linking failed
The project also has an additional advertising plugin (Appodeal + AdMob); at the very least I would appreciate a direction for investigating the issue.
I am integrating this for the first time and I am not sure I am on the right track.

How to display a Unity WebGL template in full screen after it loads [duplicate]

The task at hand is to add support for fullscreen mode to a WebGL application written in Dart.
canvas.requestFullscreen() works for simple test cases, but fails in the full app.
Please point out a way to tell what is preventing the browser from switching to fullscreen.
The code is:
void trapFullscreenError() {
  document.onFullscreenError.listen((e) {
    log("fullscreenerror: $e");
  });
}

void toggleFullscreen(CanvasElement c) {
  log(
    "fullscreenSupport=${document.fullscreenEnabled} fullscreenElement=${document.fullscreenElement}"
  );
  if (document.fullscreenElement != null) {
    log("exiting fullscreen");
    document.exitFullscreen();
  } else {
    log("requesting fullscreen");
    c.requestFullscreen();
  }
}
In Chrome that code results in:
fullscreenSupport=true fullscreenElement=null
requesting fullscreen
fullscreenerror: Instance of 'Event'
Dartium debugger shows these fields:
Event [id=4]
_selector null
bubbles true
cancelable false
clipboardData null
currentTarget #document [id=5]
defaultPrevented false
eventPhase 3
hashCode 234642739
path NodeList[6] [id=6]
target canvas#main_canvas [id=7]
timeStamp 1398779450832
type "webkitfullscreenerror"
For security reasons, requestFullscreen can only be called from within an event handler for a keyboard or click event.
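For illustration, the same idea in plain JavaScript (the dart:html calls map directly onto these DOM APIs; the button id here is made up for the example, and #main_canvas is the canvas shown in the debugger output above):
document.querySelector('#fullscreen-btn').addEventListener('click', function () {
  var canvas = document.querySelector('#main_canvas');
  if (document.fullscreenElement != null) {
    // already fullscreen: leave it
    document.exitFullscreen();
  } else {
    // allowed here because we are inside a user-gesture handler
    canvas.requestFullscreen();
  }
});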
See also: Javascript request fullscreen is unreliable

Unity WebGL editable configuration file

How can a Unity WebGL project read some kind of configuration file (any format) that is still editable after the build has been produced from the Unity workspace?
The Build directory of the exported project contains the packaged files.
The use case is to make the backend API used by this WebGL project configurable on the hosting server, so that when the player/user browses to it, the app knows where to connect to the backend API.
The closest approach I could find so far is to implement custom JavaScript browser scripting. Any advice, or any existing API that could be used from Unity?
An update with the chosen solution for this question: the JavaScript browser scripting method was used.
A total of 3 files need to be created:
WebConfigurationManager.cs
Place it in the Assets folder. This file is the main entry point for the C# code; it decides where to get the web configuration from: either the default value from another C# class (while using the Unity editor), or the browser scripting method (while browsing the distributed build in a browser).
WebConfigurationManager.jslib
Place it in the same folder as WebConfigurationManager.cs. This file contains the JavaScript code to be loaded by the browser.
web-config.json
Your JSON configuration. The configuration file can be hosted anywhere; in the example below it is placed at the root of the distributed build folder. You have to know where to load the file from, for example https://<website>/web-config.json.
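For reference, a minimal web-config.json matching the fields used in the code below could look like this (the field names mirror the editor mock in WebConfigurationManager.cs; adapt them to your own MainConfig class):
{
  "apiEndpoint": "ws://1.1.1.1:30080/events",
  "updateInterval": 5
}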
// WebConfigurationManager.cs
using System;
using UnityEngine;
using System.Runtime.InteropServices;
using AOT;

public class ConfigurationManager : MonoBehaviour
{
#if UNITY_WEBGL && !UNITY_EDITOR
    // Load the web-config.json from the browser, and the result will be passed via EnvironmentConfigurationCallback
    public delegate void EnvironmentConfigurationCallback(System.IntPtr ptr);

    [DllImport("__Internal")]
    private static extern void GetEnvironmentConfiguration(EnvironmentConfigurationCallback callback);

    void Start()
    {
        GetEnvironmentConfiguration(Callback);
    }

    [MonoPInvokeCallback(typeof(EnvironmentConfigurationCallback))]
    public static void Callback(System.IntPtr ptr)
    {
        string value = Marshal.PtrToStringAuto(ptr);
        try
        {
            var webConfig = JsonUtility.FromJson<MainConfig>(value);
            // webConfig contains the value loaded from web-config.json. MainConfig is the data model class of your configuration.
        }
        catch (Exception e)
        {
            Debug.LogError($"Failed to read configuration. {e.Message}");
        }
    }
#else
    void Start()
    {
        GetEnvironmentConfiguration();
    }

    private void GetEnvironmentConfiguration()
    {
        // do nothing on the Unity editor other than triggering the initialized event
        // mock the configuration for the use of the Unity editor
        var testConfig = JsonUtility.FromJson<MainConfig>("{\n" +
            " \"apiEndpoint\": \"ws://1.1.1.1:30080/events\",\n" +
            " \"updateInterval\": 5\n" +
            "}");
        Debug.Log(testConfig.apiEndpoint);
        Debug.Log(testConfig.updateInterval);
    }
#endif
}
// WebConfigurationManager.jslib
mergeInto(LibraryManager.library, {
  GetEnvironmentConfiguration: function (obj) {
    function getPtrFromString(str) {
      var buffer = _malloc(lengthBytesUTF8(str) + 1);
      writeStringToMemory(str, buffer);
      return buffer;
    }

    var request = new XMLHttpRequest();
    // load the web-config.json via web request
    request.open("GET", "./web-config.json", true);
    request.onreadystatechange = function () {
      if (request.readyState === 4 && request.status === 200) {
        var buffer = getPtrFromString(request.responseText);
        Runtime.dynCall('vi', obj, [buffer]);
      }
    };
    request.send();
  }
});
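Note that this .jslib still dispatches the callback with Runtime.dynCall; on Unity 2021.2 and newer, where the Runtime object is no longer defined (see the first question above), that line would presumably need the same substitution:
Module['dynCall_vi'](obj, buffer);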

AeRender.exe After Effects

I would like to automate the video render process in After Effects using aerender.exe.
I am using Node.js to run aerender.exe, but I am getting the error:
"unable to read VR path from registry from c:\Users\username\AppData\Local\openvr\openvrpaths.vrpath"
I am using the following code:
var spawn = require('child_process').spawn;

var ae = spawn('/Program Files/Adobe/Adobe After Effects CC 2018/Support Files/aerender.exe', [
  '-project', 'template.aep',
  '-comp', 'final',
  '-output', 'movie.mov',
  '-OMtemplate', 'h264'
]);

ae.stderr.on('data', function (data) {
  // Error occurred
  console.log('stderr: ' + data);
});

ae.on('close', function (code) {
  // Video has rendered
});
Please help on this topic.
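A minimal sketch for surfacing more diagnostics, assuming the same ae handle as above: capturing stdout as well (a standard child_process event) shows aerender's own progress output, and logging code in the existing close handler shows whether it exited cleanly.
ae.stdout.on('data', function (data) {
  // aerender's own progress/messages typically appear here
  console.log('stdout: ' + data);
});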