"Destroyed texture [Texture] used in a submit." when using a video texture in chrome/webgpu - webgpu

I'm trying to use an external texture in WebGPU, but I'm running into an error:
localhost/:1 Destroyed texture [Texture] used in a submit.
at ValidateCanUseInSubmitNow (../../third_party/dawn/src/dawn_native/Texture.cpp:605)
at ValidateSubmit (../../third_party/dawn/src/dawn_native/Queue.cpp:395)
I've created a video element and imported an external texture from it:
const video = document.createElement('video');
video.loop = true;
video.autoplay = true;
video.muted = true;
video.src = '/videos/sample.webm';
await video.play();
const videoTexture = device.importExternalTexture({
  source: video,
});
I'm binding it like so:
{
  binding: 2,
  resource: videoTexture,
},
and I'm referencing it in my shader like this:
[[binding(2), group(0)]] var diffuseTexture: texture_external;
...
var diffuse = textureSampleLevel(diffuseTexture, textureSampler, in.uv).xyz;
I've stored both the video element and videoTexture in variables in case it was something to do with garbage collection, but that hasn't helped. I seem to be doing everything the same way as the video uploading sample (https://austin-eng.com/webgpu-samples/samples/videoUploading), except there's a lot more going on in my program.

It turns out that the lifetime of an external texture imported from a video is very limited: when your code returns control to the browser, the external texture is destroyed. For most 3D applications this will be when the requestAnimationFrame callback finishes.
To work around this, you have to create both the external texture and the bind group that references it in the same frame in which you render. It may be helpful to put your external texture(s) in a separate bind group, since that bind group has to be recreated every frame.
e.g.
function frame() {
  const externalTextureBindGroup = device.createBindGroup({
    layout: pipeline.getBindGroupLayout(1),
    entries: [
      {
        binding: 0,
        resource: device.importExternalTexture({
          source: video,
        }),
      },
    ],
  });
  // additional setup.
  passEncoder.setBindGroup(1, externalTextureBindGroup);
  passEncoder.drawIndexed(group.count, 1, group.start, 0);
  // additional draws
  requestAnimationFrame(frame);
}
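If you do give the external texture its own bind group, a minimal sketch of the layout side (an assumption, not taken from the question's code; sceneBindGroupLayout stands in for whatever layout holds your other resources) looks like this, with the shader declaration moved to [[binding(0), group(1)]] var diffuseTexture: texture_external; to match:
const externalTextureBindGroupLayout = device.createBindGroupLayout({
  // Bind group 1 holds only the external texture, so it is cheap to recreate.
  entries: [{ binding: 0, visibility: GPUShaderStage.FRAGMENT, externalTexture: {} }],
});
const pipelineLayout = device.createPipelineLayout({
  // sceneBindGroupLayout is hypothetical: your existing group(0) resources.
  bindGroupLayouts: [sceneBindGroupLayout, externalTextureBindGroupLayout],
});
Passing pipelineLayout as the pipeline's layout lets you use externalTextureBindGroupLayout directly in the per-frame createBindGroup call instead of pipeline.getBindGroupLayout(1).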
References:
https://github.com/toji/webgpu-best-practices/blob/main/img-textures.md#gpuexternaltexture-lifetime

Related

Importing multiple GLTF files using Three.js

I am trying to import multiple GLTF files into my Three.js scene using a LoadingManager, but I am encountering an issue where I can't access the properties of the loaded models. The loading itself seems to work, since the models show up correctly. I have stored the loaded models in an object called "loadedModels", but when I try to access them it gives me the error "Cannot read properties of undefined."
This is my current code. As I said, it loads the files correctly, but when I try, for instance, to change each model's coordinates, I get the error described above. I tried putting the forEach function inside a setTimeout to check whether the problem was that the models were not loading fast enough for me to access them, but that didn't work.
// Create the toLoad const where I will type each file url
const toLoad = [
  { name: 'monkey', group: new THREE.Group(), url: '3D/monkey.gltf' },
  { name: 'plane', group: new THREE.Group(), url: '3D/plane.gltf' }
];
// Create an empty object to store the loaded models
const loadedModels = {};
// Create a loadingManager for the progress bar
const loadingManager = new THREE.LoadingManager(() => {});
// Create a loader loop from multiple local urls
const gltfLoader = new GLTFLoader(loadingManager);
toLoad.forEach(item => {
  gltfLoader.load(item.url, (model) => {
    item.group.add(model.scene);
    scene.add(item.group);
    loadedModels[item.name] = item.group;
  });
});
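Since GLTFLoader.load is asynchronous, loadedModels is only guaranteed to be populated after every load has completed. A minimal sketch (the position values are arbitrary) is to move the code that needs the models into the LoadingManager's completion callback rather than a setTimeout:
// Sketch: the LoadingManager's onLoad callback fires only after every
// queued file has finished loading, so the groups exist by then.
const loadingManager = new THREE.LoadingManager(() => {
  loadedModels.monkey.position.set(0, 1, 0);
  loadedModels.plane.position.set(2, 0, 0);
});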

Taking screenshot of jitsi meet conference using HTML2canvas

After adding a button to my Jitsi install (via this thread), I am now trying to use html2canvas to take a screenshot of the video conference.
However, when I run the function, it returns the video as black, even though it's showing on the display.
[screenshot] (The feed on the left should show video, but it's black.)
And as you can see, the icons are also all messed up.
Is there a fix for this, or an alternative?
This is probably because you are trying to capture the screenshot from outside code while Jitsi is running the video in an iframe. The browser's security features do not allow reading iframe content, so you need to implement custom logic inside Jitsi to handle your scenario.
I looked around and found the relevant logic in ScreenshotCaptureEffect.js. It works now…
You must have the video you want to screenshot in focus, or you can change the script to send all video streams.
const storedCanvas = document.createElement('canvas');
const storedCanvasContext = storedCanvas.getContext('2d');
var vids = $('video#largeVideo');
vids[0].play();
storedCanvas.height = parseInt(vids[0].videoHeight, 10);
storedCanvas.width = parseInt(vids[0].videoWidth, 10);
storedCanvasContext.drawImage(vids[0], 0, 0, vids[0].videoWidth, vids[0].videoHeight);
storedCanvas.toBlob(
  blob => {
    console.debug(blob);
    var data = new FormData();
    data.append('file', blob);
    $.ajax({
      url: S3_API_URL,
      cache: false,
      contentType: false,
      processData: false,
      method: 'POST',
      data: data
    });
  },
  'image/png', // toBlob expects a MIME type, not 'png'
  1.0,
);

Problems with WebAudio

I'm creating a research experiment that uses WebAudio API to record audio files spoken by the user.
I came up with a solution for this using recorder.js and everything was working fine... until I tried it yesterday.
I am now getting this error in Chrome:
"The AudioContext was not allowed to start. It must be resumed (or
created) after a user gesture on the page."
And it refers to this link: Web Audio API policy.
This appears to be a consequence of Chrome's new policy outlined at the link above.
So I attempted to solve the problem by using resume() like this:
var gumStream; // stream from getUserMedia()
var rec;       // Recorder.js object
var input;     // MediaStreamAudioSourceNode we'll be recording
// shim for AudioContext when it's not available.
var AudioContext = window.AudioContext || window.webkitAudioContext;
var audioContext = new AudioContext(); // new audio context to help us record

function startUserMedia() {
  var constraints = { audio: true, video: false };
  audioContext.resume().then(() => { // This is the new part
    console.log('context resumed successfully');
  });
  navigator.mediaDevices.getUserMedia(constraints).then(function(stream) {
    console.log("getUserMedia() success, stream created, initializing Recorder.js");
    gumStream = stream;
    input = audioContext.createMediaStreamSource(stream);
    rec = new Recorder(input, { numChannels: 1 });
    audio_recording_allowed = true;
  }).catch(function(err) {
    console.log("Error");
  });
}
Now in the console I'm getting:
Error
context resumed successfully
And the stream is not initializing.
This happens in both Firefox and Chrome.
What do I need to do?
I just had this exact same problem! And technically, you helped me to find this answer. My error message wasn't as complete as yours for some reason and the link to those policy changes had the answer :)
Instead of resuming, it's best practice to create the audio context after the user has interacted with the document (when I say best practice: if you have a look at padenot's first comment of 28 Sept 2018 on this thread, he mentions why in the first bullet point).
So instead of this:
var audioContext = new AudioContext(); // new audio context to help us record

function startUserMedia() {
  audioContext.resume().then(() => { // This is the new part
    console.log('context resumed successfully');
  });
}
Just set the audio context like this:
var audioContext;

function startUserMedia() {
  if (!audioContext) {
    audioContext = new AudioContext();
  }
}
This should work, as long as startUserMedia() is executed after some kind of user gesture.
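For example, a minimal sketch (the #record button id is hypothetical; any element and any user gesture will do):
// Hypothetical button: the AudioContext is only created inside the gesture handler.
document.querySelector('#record').addEventListener('click', function () {
  startUserMedia();
});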

leafletjs (mapbox) z-index ordering not working

Another developer created our original map, but I'm tasked with making some changes. One of these is making sure the activated marker is brought to the front when clicked (where it is partially overlapped by other markers).
The developers used Mapbox 2.2.2.
I have looked at leafletjs's docs and followed the instructions in other posted solutions (e.g. solution one and solution two). Neither of these makes any difference.
Examining the marker in Chrome's console, I can see that the value of options.zIndexOffset is being set (10000 in my test case). I've even set _zIndex to an artificially high value and can see that reflected in the marker's data structure, but visually nothing changes.
This is how the map is set up initially. All features are from a single geojson feed:
L.mapbox.accessToken = '<access token here>';
var map = L.mapbox.map('map', 'map.id', {
}).setView([37.8, -96], 3);

var jsonFeed, jsonFeedURL;
var featureLayer = L.mapbox.featureLayer()
  .addTo(map)
  .setFilter(function (f) {
    return false;
  });

$.getJSON(jsonFeedURL, function (json) {
  jsonFeed = json;
  jsonFeedOld = json;
  // Load all the map features from our json file
  featureLayer.setGeoJSON(jsonFeed);
}).done(function(e) {
  // Once the json feed has loaded via AJAX, check to see if
  // we should show a default view
  mapControl.activateInitialItem();
});
Below is a snippet of how I had tried setting values to change the z-index. When a visual marker on the featureLayer is clicked, 'activateMarker' is called:
featureLayer.on('click', function (e) {
  mapControl.activateMarker(e);
});
The GeoJSON feed has URLs for the icons to show, and the active marker's icon is switched to an alternative version (which is also larger). When the active feature is a single Point, I've tried to set values on the marker (the commented-out lines are some of the various things I've tried!):
activateMarker: function (e) {
  var marker = e.layer;
  var feature = e.layer.feature;
  this.resetMarkers();
  if (feature.properties.hasOwnProperty('icon')) {
    feature.properties.icon['oldIcon'] = feature.properties.icon['iconUrl'];
    feature.properties.icon['iconUrl'] = feature.properties.icon['iconActive'];
    feature.properties.icon['oldIconSize'] = feature.properties.icon['iconSize'];
    feature.properties.icon['iconSize'] = feature.properties.icon['iconSizeActive'];
  }
  if (feature.geometry.type == 'Point') {
    marker.setZIndexOffset(10001);
    marker.addTo(featureLayer);
  }
  //featureLayer.setGeoJSON(jsonFeed);
}
Any advice would be greatly appreciated! I'm at the point where I don't know what else to try (and that's saying something).
What probably happens is that you just flush your markers with the last call to .setGeoJSON():
If the layer already has features, they are replaced with the new features.
You correctly adjust the GeoJSON data related to your icon, so that when re-created, your featureLayer can use the new values to show a new icon (depending on how you configured featureLayer).
But anything you changed directly on the marker is lost, as the marker is removed and replaced by a new one, re-built from the GeoJSON data.
The "cleanest" way would probably be to avoid re-creating all features at every click.
Another way could be to also change something else in your GeoJSON data that tells featureLayer to build your new marker (through the pointToLayer option) with a different zIndexOffset option.
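As a rough sketch of that second approach (an assumption, not tested against your setup: it relies on featureLayer passing Leaflet's standard pointToLayer option through, and on a hypothetical active flag in the feature properties):
// Sketch: derive each marker's zIndexOffset from the GeoJSON data itself,
// so the offset survives every setGeoJSON() rebuild.
// 'active' is a hypothetical property set by activateMarker before the rebuild.
var featureLayer = L.mapbox.featureLayer(null, {
  pointToLayer: function (feature, latlng) {
    return L.marker(latlng, {
      icon: L.icon(feature.properties.icon),
      zIndexOffset: feature.properties.active ? 10000 : 0
    });
  }
}).addTo(map);
activateMarker would then set feature.properties.active = true (and resetMarkers would clear it) before calling featureLayer.setGeoJSON(jsonFeed), instead of modifying the marker object directly.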

Having trouble attaching an event listener to a KML layer's polygon

Using Google Earth, I have a loaded KML layer that displays polygons of every county in the US. On click, a balloon pops up with some relevant info (name, which state, area, etc.). When a user clicks the polygon, I want the information to also pop up in a DIV element somewhere else.
This is my code so far.
var ge;
google.load("earth", "1");

function init() {
  google.earth.createInstance('map3d', initCB, failureCB);
}

function initCB(instance) {
  ge = instance;
  ge.getWindow().setVisibility(true);
  ge.getNavigationControl().setVisibility(ge.VISIBILITY_AUTO);
  ge.getNavigationControl().setStreetViewEnabled(true);
  ge.getLayerRoot().enableLayerById(ge.LAYER_ROADS, true);

  // here is where I'm loading the kml file
  google.earth.fetchKml(ge, href, function (kmlObject) {
    if (kmlObject) {
      // show it on Earth
      ge.getFeatures().appendChild(kmlObject);
    } else {
      setTimeout(function () {
        alert('Bad or null KML.');
      }, 0);
    }
  });

  function recordEvent(event) {
    alert("click");
  }

  // Listen to the click event on the globe.
  google.earth.addEventListener(ge.getGlobe(), 'click', recordEvent);
}

function failureCB(errorCode) {}

google.setOnLoadCallback(init);
My problem is that when I change ge.getGlobe() to kmlObject or ge.getFeatures() it doesn't work.
My first question is: what should I change ge.getGlobe() to in order to get a click listener when a user clicks on a KML layer's polygon?
After that I was planning on using getDescription() or getBalloonHtml() to get the polygons balloons information. Am I even on the right track?
...what should I change ge.getGlobe() to...
You don't need to change the event object from GEGlobe. Indeed it is the best option as you can use it to capture all the events and then check the target object in the handler. This means you only have to set up a single event listener in the API.
The other option would be to somehow parse the KML and attach specific event handlers to specific objects. This means you have to create an event listener for each object.
Am I even on the right track?
So, yes, you are on the right track. I would keep the generic GEGlobe event listener but extend your recordEvent method to check for the types of KML object you are interested in. You don't show your KML, so it is hard to know how you have structured it (are your <Polygon> elements nested in <Placemark> elements, for example?).
In the simple case, if your Polygons are in Placemarks, you could just do the following: essentially listening for clicks on all objects, then filtering for Placemarks (whether created via the API or loaded in via KML).
function recordEvent(event) {
  var target = event.getTarget();
  var type = target.getType();
  if (type == "KmlPolygon") {
  } else if (type == "KmlPlacemark") {
    // get the data you want from the target.
    var description = target.getDescription();
    var balloon = target.getBalloonHtml();
  } else if (type == "KmlLineString") {
    // etc...
  }
};

google.earth.addEventListener(ge.getGlobe(), 'click', recordEvent);
If you wanted to go for the other option, you would iterate over the KML DOM once it has loaded and then add events to specific objects. You can do this using something like kmldomwalk.js, although I wouldn't really recommend that approach here, as you will create a large number of event listeners in the API (one for each Placemark in this case). The upside is that the events are attached to each specific object from the KML file, so it can be useful if you have other Placemarks, etc., that shouldn't have the same 'click' behaviour.
function placeMarkClick(event) {
  var target = event.getTarget();
  // get the data you want from the target.
  var description = target.getDescription();
  var balloon = target.getBalloonHtml();
}

google.earth.fetchKml(ge, href, function (kml) {
  if (kml) {
    parseKml(kml);
  } else {
    setTimeout(function () {
      alert('Bad or null KML.');
    }, 0);
  }
});

function parseKml(kml) {
  ge.getFeatures().appendChild(kml);
  walkKmlDom(kml, function () {
    var type = this.getType();
    if (type == 'KmlPlacemark') {
      // add event listener to `this`
      google.earth.addEventListener(this, 'click', placeMarkClick);
    }
  });
};
It's been a long time since I worked with this, but I can try to help you or at least give you some leads...
Regarding your question about "google.earth.addEventListener(ge.getGlobe(), 'click', recordEvent);":
ge.getGlobe() cannot be replaced with ge.getFeatures(): if you look in the documentation (https://developers.google.com/earth/documentation/reference/interface_g_e_feature_container-members) for GEFeatureContainer (which is the return type of getFeatures()), the click event is not defined.
As for replacing ge.getGlobe() with kmlObject: what is kmlObject here?
Regarding getDescription, have a look at getTarget, getCurrentTarget, ...
(https://developers.google.com/earth/documentation/reference/interface_kml_event)
As I said, I haven't worked with this in a long time, so I'm not sure this will help you, but at least it's a first lead you can look into!
Please keep me informed! :-)