I am trying to import multiple GLTF files into my Three.js scene using a LoadingManager, but I can't access the properties of the loaded models afterwards. The loading itself seems to work, since the models appear in the scene correctly. I have stored the loaded models in an object called "loadedModels", but when I try to access them I get the error "Cannot read properties of undefined."
This is my current code. As I said, it loads the files correctly, but when I try, for instance, to change each model's coordinates, I get the error described above. I tried putting the forEach loop inside a setTimeout to check whether the problem was that the models weren't loading fast enough for me to access them, but that didn't work.
//create the toLoad const where I will type each file url
const toLoad = [
  { name: 'monkey', group: new THREE.Group(), url: '3D/monkey.gltf' },
  { name: 'plane', group: new THREE.Group(), url: '3D/plane.gltf' }
];

//Create empty object to store the loaded models
const loadedModels = {};

//Create a loadingManager for progress bar
const loadingManager = new THREE.LoadingManager(() => {});

//Create loader loop from multiple local urls
const gltfLoader = new GLTFLoader(loadingManager);
toLoad.forEach((item) => {
  gltfLoader.load(item.url, (model) => {
    item.group.add(model.scene);
    scene.add(item.group);
    loadedModels[item.name] = item.group;
  });
});
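Edit: for reference, this is the kind of deferred access I would expect to need if it is a timing problem. A minimal sketch that moves the work into the manager's onLoad callback, which as far as I can tell only fires once every queued file has finished loading (the coordinates here are placeholders):

const loadingManager = new THREE.LoadingManager(() => {
  // onLoad: every gltfLoader.load() callback has already run,
  // so loadedModels is fully populated at this point
  loadedModels.monkey.position.set(0, 2, 0); // placeholder coordinates
  loadedModels.plane.position.set(0, 0, 0);
});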
I have recently started learning SvelteKit and am working on a very basic project to practise. Here is the structure of the project:
|-routes/
|  |-nextpage/
|  |  └ +page.svelte
|  |-+page.svelte
|  └ +page.server.js
I got stuck trying to pass data from the +page.server.js to the +page.svelte located inside the nextpage/ route, and I have no idea what to do.
In the main +page.svelte there is a component with a button that, when pressed, sends FormData via a POST request to the /results endpoint, triggering a server action called results within the +page.server.js. It then redirects to /nextpage.
Component in +page.svelte:
let myObject = {
  //stuff
}

const handleSubmit = () => {
  const formData = new FormData();
  for (const name in myObject) {
    formData.append(name, myObject[name]);
  }
  let submit = fetch('?/results', {
    method: 'POST',
    body: formData
  })
    .finally(() => console.log("done"));
  window.location = "/nextpage";
}
+page.server.js:
let myObject = {};

export const load = () => {
  return {
    myObject
  }
}

export const actions = {
  results: async ({ request }) => {
    const formData = await request.formData();
    formData.forEach((value, key) => (myObject[key] = value));
    console.log(myObject);
  }
}
Now I would like to be able to show myObject in the +page.svelte in /nextpage, but the usual export let data does not work:
/nextpage +page.svelte:
<script>
  export let data;
</script>

{data.myObject} // undefined
What can I do? Thank you for your help.
OK guys, I guess all I needed was to use cookies. Since the object I was trying to pass between pages didn't need to be stored for longer than a page load, I don't think it would have made sense to save it in a database. Instead, what did it in my case was setting a cookie with cookies.set('name', JSON.stringify(obj)); inside the action function in the main +page.server.js, and then reading it back inside the load function of the /nextpage +page.server.js with const obj = cookies.get('name');. I'm not sure it was the cleanest way to do it, but it worked for me.
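For anyone who wants the full shape of it, here is a rough sketch of what I mean (the cookie name is a placeholder, and note that newer SvelteKit versions require a path option when setting a cookie):

// main +page.server.js
export const actions = {
  results: async ({ request, cookies }) => {
    const formData = await request.formData();
    const myObject = {};
    formData.forEach((value, key) => (myObject[key] = value));
    // serialize the object into a cookie; path is required in newer versions
    cookies.set('myObject', JSON.stringify(myObject), { path: '/' });
  }
};

// nextpage/+page.server.js
export const load = ({ cookies }) => {
  // read the cookie back and parse it, falling back to an empty object
  const myObject = JSON.parse(cookies.get('myObject') ?? '{}');
  return { myObject };
};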
That does not work. Pages are fully separate; you cannot load data from one page into another.
If you want to share loaded data, use a layout load function.
You can use +layout.js or SvelteKit stores.
Stores sit above the layout layer, because they don't depend on the flow of the pages, whereas the +layout.js layer does.
Cookies are processed locally.
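A rough sketch of the layout approach (the object returned here is placeholder data; in practice it would come from a cookie, a database, or an API):

// routes/+layout.server.js
export const load = () => {
  // everything returned here is merged into `data` for every
  // +page.svelte underneath this layout
  return {
    myObject: { some: 'placeholder' }
  };
};

Both routes/+page.svelte and routes/nextpage/+page.svelte would then see it through the usual export let data.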
I'm trying to add the learning_text_recognition library to my Flutter project. I was able to get the example in the API docs to work with no problems (https://pub.dev/packages/learning_text_recognition/example), but now I'm trying to add it to my own project using the information found on the Readme tab of the same website. It's slightly different from how the example worked, and I'm now receiving several errors that I didn't get with the example. Specifically, the errors are on the following line:
RecognizedText result = await textRecognition.process(image);
It says that await can only be used in an async function, but I don't know if I should make the function or the class async. It also says that the method 'process' isn't defined for the type 'TextRecognition', but I don't know what the method should be, since that part worked perfectly fine in the example. It was also complaining that image wasn't defined, but I just created a variable called image with InputCameraView, which seemed to work.
I've tried moving the code into a new function and made an image variable. This is what the code looks like now:
getInfo() {
  var image = InputCameraView(
    canSwitchMode: false,
    mode: InputCameraMode.gallery,
    title: 'Text Recognition',
    onImage: (InputImage image) {
      // now we can feed the input image into text recognition process
    },
  );

  TextRecognition textRecognition = TextRecognition();
  RecognizedText result = await textRecognition.process(image);
}
I've also included the following import statements:
import 'package:learning_input_image/learning_input_image.dart';
import 'package:learning_text_recognition/learning_text_recognition.dart';
import 'package:provider/provider.dart';
I'm not sure if I'm maybe missing a step?
Your function should have the async keyword to indicate that there are await points inside it. See the Dart async/await documentation.
Another detail: in the example, InputCameraView is a widget, so it should not be created inside the function. You are meant to use the InputCameraView's onImage callback to receive the image for recognition, and its builder to build the widget. In the docs, onImage calls the async function _startRecognition to collect the data; you need to do something similar on that line.
void getInfo() async {
  var image = InputCameraView(
    canSwitchMode: false,
    mode: InputCameraMode.gallery,
    title: 'Text Recognition',
    onImage: (InputImage image) {
      // now we can feed the input image into text recognition process
    },
  );

  var textRecognition = TextRecognition();
  var result = await textRecognition.process(image);
}
I'm trying to use an external texture in WebGPU, but am running into an error:
localhost/:1 Destroyed texture [Texture] used in a submit.
at ValidateCanUseInSubmitNow (../../third_party/dawn/src/dawn_native/Texture.cpp:605)
at ValidateSubmit (../../third_party/dawn/src/dawn_native/Queue.cpp:395)
I've created a video element and created an external texture:
const video = document.createElement('video');
video.loop = true;
video.autoplay = true;
video.muted = true;
video.src = '/videos/sample.webm';
await video.play();
const videoTexture = device.importExternalTexture({
  source: video,
});
I'm binding it like so:
{
  binding: 2,
  resource: videoTexture,
},
and am referencing it in my shader like the following:
[[binding(2), group(0)]] var diffuseTexture: texture_external;
...
var diffuse = textureSampleLevel(diffuseTexture, textureSampler, in.uv).xyz;
I've stored both the video element and videoTexture in variables just in case it was something to do with garbage collection, but it has not helped. I seem to be doing everything the same as in the video uploading sample (https://austin-eng.com/webgpu-samples/samples/videoUploading), except there's a lot more going on in my program.
It turns out that the lifetime of a video external texture is very limited. When your code returns control to the browser, the external texture is destroyed. For most 3D applications, this will most likely be when the requestAnimationFrame callback finishes.
To work around this, you have to create both the bind group and the external texture in the same frame in which you render. It may be helpful to put your external texture(s) in a separate bind group, since you will have to recreate that group every frame.
e.g.
function frame() {
  var externalTextureBindGroup = device.createBindGroup({
    layout: pipeline.getBindGroupLayout(1),
    entries: [
      {
        binding: 0,
        resource: device.importExternalTexture({
          source: video,
        }),
      },
    ],
  });

  // additional setup.

  passEncoder.setBindGroup(1, externalTextureBindGroup);
  passEncoder.drawIndexed(group.count, 1, group.start, 0);

  // additional draws

  requestAnimationFrame(frame);
}
References:
https://github.com/toji/webgpu-best-practices/blob/main/img-textures.md#gpuexternaltexture-lifetime
I'm trying to import a GeoJSON file on NextJS but it says:
Module parse failed: Unexpected token (2:8)
You may need an appropriate loader to handle this file type, currently no loaders are configured to process this file. See https://webpack.js.org/concepts#loaders
It worked fine when the project used ReactJS with create-react-app, but now that we have migrated to NextJS it doesn't.
Maybe I need to configure some loaders in next.config.js, but I don't know how to do that.
Here is my next.config.js:
const withCSS = require("@zeit/next-css");
const withLess = require("@zeit/next-less");
const withImages = require("next-images");

module.exports = withCSS(withLess({
  webpack(config) {
    config.module.rules.push({
      test: /\.svg$/,
      use: ["@svgr/webpack"]
    });
    return config;
  },
  lessLoaderOptions: {
    javascriptEnabled: true,
  },
}));
Can someone help me achieve this?
Okay guys, I managed to do it!
I will try to explain what I wanted to accomplish, what was happening, and what I did.
I wanted to load GeoJSON data from a file into the Google Maps API to draw some layers, so I wanted to use it with map.data.loadGeoJson(imported_file.geojson).
First, I needed to make NextJS load my file from the source, so I installed json-loader:
npm i --save-dev json-loader
And then added it to next.config.js
const withCSS = require("@zeit/next-css");
const withLess = require("@zeit/next-less");
const withImages = require("next-images");

module.exports = withCSS(withLess({
  webpack(config) {
    config.module.rules.push({
      test: /\.svg$/,
      use: ["@svgr/webpack"]
    });
    config.module.rules.push({
      test: /\.geojson$/,
      use: ["json-loader"]
    });
    return config;
  },
  lessLoaderOptions: {
    javascriptEnabled: true,
  },
}));
And then, no more error message when importing the geojson file!
But now, another problem: the layers didn't load! So I read the Google Maps API docs and tried another method to load the GeoJSON:
map.data.addGeoJson(imported_file.geojson)
And it worked! Hope it helps anyone who runs into the same trouble.
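If it helps anyone, my understanding of why the first method failed: loadGeoJson expects a URL that the Maps API fetches itself, while addGeoJson expects an already-parsed GeoJSON object, which is exactly what the json-loader import produces. A rough sketch (the file name is a placeholder):

import geojsonData from './layers.geojson'; // json-loader turns this into a plain object

// loadGeoJson fetches from a URL itself, so passing the imported object fails:
// map.data.loadGeoJson('/layers.geojson');

// addGeoJson takes the parsed object directly:
map.data.addGeoJson(geojsonData);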
Another developer created our original map but I'm tasked with making some changes. One of these is making sure the activated marker is brought to the front when clicked on (where it is partially overlapped by other markers).
The developers have used mapbox 2.2.2.
I have looked at Leaflet's docs and have followed some instructions from other posted solutions (e.g. solution one and solution two). Neither of these makes any difference.
Examining the marker in Chrome's console, I can see the value of options.zIndexOffset is being set (10000 in my test case). I've even set _zIndex to an artificially high value and can see that reflected in the marker's data structure. But visually nothing changes.
This is how the map is set up initially. All features are from a single geojson feed:
L.mapbox.accessToken = '<access token here>';
var map = L.mapbox.map('map', 'map.id', {
}).setView([37.8, -96], 3);

var jsonFeed, jsonFeedURL;

var featureLayer = L.mapbox.featureLayer()
  .addTo(map)
  .setFilter(function (f) {
    return false;
  });

$.getJSON(jsonFeedURL, function (json) {
  jsonFeed = json;
  jsonFeedOld = json;
  // Load all the map features from our json file
  featureLayer.setGeoJSON(jsonFeed);
}).done(function (e) {
  // Once the json feed has loaded via AJAX, check to see if
  // we should show a default view
  mapControl.activateInitialItem();
});
Below is a snippet of how I had tried setting values to change the z-index. When a visual marker on the featureLayer is clicked, 'activateMarker' is called:
featureLayer.on('click', function (e) {
  mapControl.activateMarker(e);
});
The GeoJSON feed has URLs for the icons to show, and the active marker's icon is switched to an alternative version (which is also larger). When the active feature is a single Point, I've tried to set values on the marker (lines commented out, some of the various things I've tried!):
activateMarker: function (e) {
  var marker = e.layer;
  var feature = e.layer.feature;

  this.resetMarkers();

  if (feature.properties.hasOwnProperty('icon')) {
    feature.properties.icon['oldIcon'] = feature.properties.icon['iconUrl'];
    feature.properties.icon['iconUrl'] = feature.properties.icon['iconActive'];
    feature.properties.icon['oldIconSize'] = feature.properties.icon['iconSize'];
    feature.properties.icon['iconSize'] = feature.properties.icon['iconSizeActive'];
  }

  if (feature.geometry.type == 'Point') {
    marker.setZIndexOffset(10001);
    marker.addTo(featureLayer);
  }

  //featureLayer.setGeoJSON(jsonFeed);
}
Any advice would be greatly appreciated! I'm at the point where I don't know what else to try (and that's saying something).
What probably happens is that you just flush your markers with the last call to .setGeoJSON():
If the layer already has features, they are replaced with the new features.
You correctly adjust the GeoJSON data related to your icon, so that when it is re-created, your featureLayer can use the new values to show a new icon (depending on how you configured featureLayer).
But anything you changed directly on the marker is lost, as the marker is removed and replaced by a new one, rebuilt from the GeoJSON data.
The "cleanest" way would probably be to avoid re-creating all features on every click.
Another way could be to also change something else in your GeoJSON data that tells featureLayer to build your new marker (through the pointToLayer option) with a different zIndexOffset option, as sketched below.
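A rough sketch of that last option, assuming a hypothetical active flag that you would set on the clicked feature's properties before calling setGeoJSON again, and assuming your icon properties map onto L.icon options:

var featureLayer = L.mapbox.featureLayer(null, {
  pointToLayer: function (feature, latlng) {
    return L.marker(latlng, {
      icon: L.icon(feature.properties.icon),
      // hypothetical flag: lift the active marker above its neighbours
      zIndexOffset: feature.properties.active ? 10000 : 0
    });
  }
}).addTo(map);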