Difference between detectObjectOnImage and runModelOnImage in tflite flutter

I'm trying to make a tflite multiple-object detector in Flutter.
I came across two functions that take an image path as input, which is why I'm asking this question.
The two functions are detectObjectOnImage and runModelOnImage. When I use runModelOnImage my code runs, but if I swap it with detectObjectOnImage the interpreter does initialize, yet on calling the function the app closes automatically and shows "Lost connection to device".
This is how my code goes:
classifyImage(String imgpath) async {
  var output = await Tflite.runModelOnImage(
    path: imgpath,
    imageMean: 0.0,
    imageStd: 255.0,
    threshold: 0.2,
    numResults: 1,
    asynch: true,
  );
  setState(() {
    _loading = false;
    outputs = output;
  });
  print(outputs);
  print(outputs[0]["label"]);
}
I guess my assumptions are correct, but I don't know why it's not working. Apart from that, I created a model with Teachable Machine by Google and it only detects one object at a time, so my next question is: how do I make it detect more than one object?
Thanks

The difference between the two functions is their usage:
For object detection you use Tflite.detectObjectOnImage().
For image classification (identifying what is in an image without drawing boxes around the objects) you use Tflite.runModelOnImage().
The two methods return tensors of different sizes. When a tensor can't be mapped to the expected output, the app disconnects the way you described.
Regarding your second question:
You set the parameter numResults, which limits the number of results, to 1. Increase this number to get more results.
(source: https://pub.dev/packages/tflite#Example)
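For reference, here is a minimal sketch of the detection call (parameter names follow the tflite package's documented API; the model name, threshold, and result handling are illustrative assumptions):
detectObjects(String imgpath) async {
  // detectObjectOnImage returns one entry per detected object,
  // each with a label, a confidence and a bounding box.
  var recognitions = await Tflite.detectObjectOnImage(
    path: imgpath,
    model: "SSDMobileNet", // assumed detection architecture
    imageMean: 127.5,
    imageStd: 127.5,
    threshold: 0.4,
    numResultsPerClass: 5, // allow several boxes per class
    asynch: true,
  );
  for (var r in recognitions ?? []) {
    print('${r["detectedClass"]} ${r["confidenceInClass"]} ${r["rect"]}');
  }
}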

Related

Unity - Set GUI.Box background color

I'm trying to set the background color of a GUI.Box:
void OnGUI()
{
    string LatLong;
    LatLong = map.calc.prettyCurrentLatLon;
    var mousePosition = Input.mousePosition;
    float x = mousePosition.x + 10;
    float y = Screen.height - mousePosition.y + 10;
    GUI.backgroundColor = Color.red;
    GUI.Box(new Rect(x, y, 200, 200), LatLong);
}
However, the box is showing in a semi-transparent black, and the white text is subdued, not opaque white.
You have to use a GUIStyle:
private GUIStyle currentStyle = null;

void OnGUI()
{
    InitStyles();
    GUI.Box( new Rect( 0, 0, 100, 100 ), "Hello", currentStyle );
}

private void InitStyles()
{
    if( currentStyle == null )
    {
        currentStyle = new GUIStyle( GUI.skin.box );
        currentStyle.normal.background = MakeTex( 2, 2, new Color( 0f, 1f, 0f, 0.5f ) );
    }
}

private Texture2D MakeTex( int width, int height, Color col )
{
    Color[] pix = new Color[width * height];
    for( int i = 0; i < pix.Length; ++i )
    {
        pix[ i ] = col;
    }
    Texture2D result = new Texture2D( width, height );
    result.SetPixels( pix );
    result.Apply();
    return result;
}
Taken from the Unity forum.
I'm going to slide in with a more elegant solution here before this question gets old. I saw Thomas's answer and started to wonder if there is a way to do that without the InitStyles check in the OnGUI loop, since ideally you only want to initialize the GUIStyle once, in Awake or Start or wherever, and then never check it for null again.
Anyway, after some trial and error, I came up with this:
private void Awake() {
    // these fields are stored in the class:
    // a 1-pixel image, so there is only 1 color to set
    consoleBackground = new Texture2D(1, 1, TextureFormat.RGBAFloat, false);
    consoleBackground.SetPixel(0, 0, new Color(1, 1, 1, 0.25f));
    consoleBackground.Apply(); // uploads the SetPixel change to the texture
    // basically just create a copy of the "none style"
    // and then change the properties as desired
    debugStyle = new GUIStyle(GUIStyle.none);
    debugStyle.fontSize = 24;
    debugStyle.normal.textColor = Color.white;
    debugStyle.normal.background = consoleBackground;
}
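The style built in Awake is then consumed directly each frame, for example (a hypothetical usage; debugStyle is the field set above):
void OnGUI()
{
    // no null check or re-initialization needed here
    GUI.Box(new Rect(10, 10, 300, 40), "Debug console", debugStyle);
}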
REVISION - 17 July 2022 - GUI Style Creation and Storage
Prelude
Style creation through the methods provided by others is certainly a functional way of giving your custom editors a unique look, but it has some fundamental issues I should point out, which my method doesn't outright correct, just alleviates. My method still needs to be expanded upon and is a partly experimental progression from a personal plugin.
Creating styles on every OnGUI call creates unnecessary extra instructions for your editor window. This doesn't scale well past a handful (~4) of styles.
By creating styles every time OnGUI is called, you're repeatedly creating textures for the background colour (not good). Over prolonged use of this method, memory leaks can occur, although that's unlikely.
What does my method do differently?
Creates GUIStyle and Texture2D files. GUIStyles are saved as .JSON files, which suits [JSON <-> GUIStyle] conversion and storage best.
Texture2Ds are encoded from raw data to PNG format through UnityEngine.
Checks whether a style file is null before fetching, and recreates any missing styles.
Keeps a list of all styles through a Style Manifest (struct) that stores the names of all textures and styles, to load iteratively on fetch.
Only creates styles if they are missing. Does not spend resources on recreating pre-existing styles and textures.
GUIStyles (as JSONs) and Texture2D files are stored in a Resources folder within the Plugin folder.
It should be noted that my style methods were written with the express understanding and consideration that GUISkins exist; they are simply not suitable for my UI/UX needs.
How is this done?
(Diagram: Plugin Style Handling)
I separate style loading into a unique namespace for style handling, which contains the functions as well as public variables for global plugin access. This namespace creates, loads and can send back styles on request from other scripts.
A call to a function is made when the plugin is opened: the style manifest is loaded, and subsequently all styles and textures are loaded and relinked for use.
If the styles manifest is missing, it is recreated along with all GUIStyle files. If the styles manifest is present but a style is missing, that style is recreated and the manifest is modified to reference the new style.
Textures are handled separately from GUIStyle loading and are collected in a separate array. They are independently checked to see whether they still exist; missing textures are recreated and encoded from raw data to PNG format, with the style manifest being modified when necessary.
Instead of repeatedly creating all styles, or reloading them each frame, the plugin fetches the first style from memory and checks whether the result is null. If the first style returns null, the plugin assumes all styles have been dereferenced and calls a reload or recreation of the relevant GUIStyle files (this can happen because the engine enters or exits play mode, and the check is necessary to preserve UI/UX legibility).
If the style returns a valid reference, my plugins do use it, but this is risky. It's a good idea to also check at least one texture, because textures are at risk of being dereferenced from the Texture2D array.
Once each check is done, the plugin renders the layout as normal and the next cycle begins. Overall this method requires extra processing time and extra storage space for the plugin, but in turn it is:
Quicker over a longer period of time, because styles are created or loaded only when necessary.
Easier to modify themes for plugins, allowing individuals to customize the tools to their preferred theme. This can also be expanded into custom "theme packs" for a plugin.
Scalable, to an extent, for large numbers of styles.
This method still requires experience with GUIStyles, data saving, JSON utilisation and C#.
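As a minimal sketch of the save/load round trip described above (the folder path and class name are hypothetical, and it assumes a GUIStyle survives a JsonUtility round trip, as implied above):
using System.IO;
using UnityEngine;

public static class StyleStore
{
    // hypothetical plugin resource folder
    const string Root = "Assets/MyPlugin/Resources/Styles/";

    public static void Save(string name, GUIStyle style, Texture2D background)
    {
        Directory.CreateDirectory(Root);
        // GUIStyle -> JSON file
        File.WriteAllText(Root + name + ".json", JsonUtility.ToJson(style));
        // Texture2D raw data -> PNG file
        File.WriteAllBytes(Root + name + ".png", background.EncodeToPNG());
    }

    public static GUIStyle Load(string name)
    {
        var style = new GUIStyle();
        JsonUtility.FromJsonOverwrite(File.ReadAllText(Root + name + ".json"), style);
        // the texture is stored separately and must be relinked to the style
        var tex = new Texture2D(2, 2);
        tex.LoadImage(File.ReadAllBytes(Root + name + ".png")); // decodes the PNG
        style.normal.background = tex;
        return style;
    }
}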

Using WebAudio AnalyserNode.getFloatFrequencyData() to shift the pitch of a BufferSource

I have a BufferSource, which I create thusly:
const proxyUrl = location.origin == 'file://' ? 'https://cors-anywhere.herokuapp.com/' : '';
const request = new XMLHttpRequest();
request.open('GET', proxyUrl + 'http://heliosophiclabs.com/~mad/projects/mad-music/non.mp3', true);
// request.open('GET', 'non.mp3', true);
request.responseType = 'arraybuffer';
request.onload = () => {
audioCtx.decodeAudioData(request.response, buffer => {
buff = buffer;
}, err => {
console.error(err);
});
}
request.send();
Yes, the CORS workaround is pathetic, but it's the way I found to work locally without having to run an HTTP server. Anyway...
I would like to shift the pitch of this buffer. I've tried various forms of this:
const source = audioCtx.createBufferSource();
source.buffer = buff;
const analyser = audioCtx.createAnalyser();
analyser.connect(audioCtx.destination);
analyser.minDecibels = -140;
analyser.maxDecibels = 0;
analyser.smoothingTimeConstant = 0.8;
analyser.fftSize = 2048;
const dataArray = new Float32Array(analyser.frequencyBinCount);
source.connect(analyser);
analyser.connect(audioCtx.destination);
source.start(0);
analyser.getFloatFrequencyData(dataArray);
console.log('dataArray', dataArray);
All to no avail. dataArray is always filled with -Infinity values, no matter what I try.
My idea is to get this frequency domain data and then to move all the frequencies up/down by some amount and create a new Oscillator node out of these, like this:
const wave = audioCtx.createPeriodicWave(real, waveCompnents);
oscillator.setPeriodicWave(wave);
Anyway, if anyone has a better idea of how to shift pitch, I'd love to hear it. Sadly, detune and playbackRate both seem to do basically the same thing (why are there two ways of doing the same thing?), namely speeding up or slowing down playback, so that's not it.
First, there's a small issue with the code: you connect the analyser to the destination twice. You don't actually need to connect it at all.
Second, I think the reason you're getting all -Infinity values is that you call getFloatFrequencyData right after you start the source. There's a good chance no samples have been played yet, so the analyser only has buffers of all zeros.
You need to call getFloatFrequencyData after a bit of time to see non-zero values.
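For example, something along these lines should show finite values (a sketch reusing source, analyser and dataArray from your code; the 500 ms delay is an arbitrary assumption, anything after playback has produced samples will do):
source.connect(analyser); // the analyser need not be connected onward
source.start(0);
// Wait until some samples have actually been rendered before asking
// the analyser for the FFT of its most recent window.
setTimeout(() => {
    analyser.getFloatFrequencyData(dataArray);
    console.log('dataArray', dataArray); // finite dB values now
}, 500);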
Third, I don't think this will work at all, even for shifting the pitch of an oscillator. getFloatFrequencyData only returns magnitude information. You would need the phase information of the harmonics to shift everything correctly, and currently there's no way to get the phase information.
Fourth, if you have an AudioBuffer with the data you need, consider using playbackRate to change the pitch. I'm not sure it will produce the shift you want.
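A sketch of that last suggestion (the semitone math is standard; the three-semitone shift is an arbitrary example, and note this changes duration along with pitch):
const source = audioCtx.createBufferSource();
source.buffer = buff;
source.playbackRate.value = Math.pow(2, 3 / 12); // up three semitones
source.connect(audioCtx.destination);
source.start(0);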

Initializing pallet racks with parameters in AnyLogic

Dear Stack Overflow community,
I am trying to create a pallet rack in AnyLogic (version 7.1.2, University edition), in which the number of cells, the number of levels and some other properties are set from parameters. The parameters are set up prior to model execution from the simulation page. Has anyone done that before?
In my opinion, the problem starts with the palletRack properties, which do not allow parameters as values but require a literal number ("Number of cells: 10" instead of "Number of cells: myParameter"). But there are pre-defined functions like setNumberOfPositions(int nPositions), so I thought I could avoid the problem by calling these functions at the beginning of the simulation (at time zero). I used the action field of an event for that.
This caused an exception during discrete event execution: "root: palletRack: Markup element is already initialized and cannot be modified. Please use constructor without arguments, perform setup and finally call initialize() function."
Since I could not modify anything in the Java editor, I tried to construct a pallet rack in the event's action field:
PalletRack palletRack = new PalletRack();
palletRack.setOwner(this);
[...]
palletRack.setNumberOfPositions(p_CellsInX);
palletRack.setNumberOfLevels(p_CellsInY);
palletRack.setCellWidth(p_WidthOfCell);
palletRack.setLevelHeight(p_HeightOfCell);
palletRack.initialize();
This did not throw any errors but did not build a rack either.
Additionally, I tried adding "@Override" in front of my functions.
Does anyone have ideas how I can initialize the pallet rack with parameters or override the initial values?
Obviously, I am a total beginner in AnyLogic. I would be very grateful for any advice. Thank you in advance!
It is possible but not straightforward. You need to do everything programmatically, i.e. create the pallet rack but also the line that goes through it, add them to a (new or existing) network and then initialize it all. Some dummy code to get you started is below.
Note that myNetwork is an existing network I drew manually at design time here.
Also, one tip: draw the pallet rack and the line through it manually first to easily obtain all the coordinates and ensure it would work. Then remove those and create them programmatically, but with the right settings...
PS: this might not work in AL 7 but it works in AL 8. You might need slightly different functions for adding to the presentation.
myRack = new PalletRack(this,       // Agent owner
    SHAPE_DRAW_2D3D,                // ShapeDrawMode
    true,                           // isPublic
    ground,                         // ground
    false,                          // isObstacle
    -2480,                          // x pos
    1980,                           // y pos
    0.0,                            // z pos
    35.2 * numCellsPerRackPerLevel, // length (keep a constant cell width and vary rack length accordingly)
    20.0,                           // depth
    20.0,                           // depthR (depth of the right rack; only if type is 2 racks and 1 aisle)
    50.0,                           // levelHeight
    0.,                             // rotation
    PALLET_RACK_TWO_PALLET_RACKS,   // PalletRackType
    PALLET_RACK_NO_DIRECTION,       // PalletRackDirection
    40.0,                           // aisleDepth = width
    40.0,                           // aisleRDepth (width of the right aisle; only if 1 rack and 2 aisles)
    35.2,                           // cellWidth
    numCellsPerRackPerLevel,        // nPositions
    numLevelsPerRack,               // nLevels
    1,                              // nDeep
    lavender,                       // fillColor
    dodgerBlue,                     // lineColor
    2);                             // cellsBetweenLegs
presentation.add(myRack);
// this line must cut through both racks' aisles
MarkupSegmentLine segment = new MarkupSegmentLine(
    myRack.getX() - 10, myRack.getY() + 30, 0.0,
    myRack.getX() + myRack.getLength() + 10, myRack.getY() + 30, 0.0);
Path path = new Path(this, SHAPE_DRAW_2D3D, true,
    true, true, 1.0, false, 10,
    PATH_LINE, dodgerBlue, 1.0,
    segment);
presentation.add(path);
myNetwork.add(myRack);
myNetwork.add(path);
myNetwork.initialize();

How to keep Thread.sleep() in a for loop from blocking the UI thread?

I have the following pseudocode to clarify my problem and a solution. My original posting and detailed results are on Stack Overflow at: Wait() & Sleep() Not Working As Thought.
public class PixelArtSlideShow { // called with click of Menu item.
    create List<File> of each selected pixelArtFile
    for (File pixelArtFile : List<File>) {
        call displayFiles(pixelArtFile);
        TimeUnit.SECONDS.sleep(5);
    }
}

public static void displayFiles(File pixelArtFile) {
    for (loop array rows)
        for (loop array columns)
            read in sRGB for each pixel - Circle Object
    window.setTitle(....)
}
// when the above code is used to open a pixelArtFile, it appears instantly in a 32 x 64 array
PROBLEM: As detailed extensively in the other post, each pixelArtFile displays its setTitle() correctly and pauses for about 5 seconds, but the Circles will not change to the assigned colors except for the last file, after the 5 seconds have passed. It's as if all the code around TimeUnit.SECONDS.sleep(5); is skipped EXCEPT the window.setTitle(...).
My understanding is that TimeUnit.SECONDS.sleep(5); blocks the UI thread, and I guess it must somehow be isolated to allow displayFiles(File pixelArtFile) to fully execute.
Could you please show me the most straightforward way to solve this problem, using the pseudocode, for a more complete solution?
I have tried Runnables, Platform.runLater(), FutureTask<Void>, etc., and I'm pretty confused as to how they are meant to work and how exactly they should be coded.
I also have the two UI windows posted on the web at: Virtual Art. I think the pixelArtFile shown in the Pixel Array window may clarify the problem.
THANKS
Don't sleep the UI thread. A Timeline will probably do what you want.
List<File> files;
int curFileIdx = 0;

// prereq, files have been appropriately populated.
public void runAnimation() {
    Timeline timeline = new Timeline(
        new KeyFrame(Duration.seconds(5), event -> {
            if (!files.isEmpty()) {
                displayFile(curFileIdx);
                curFileIdx = (curFileIdx + 1) % files.size();
            }
        })
    );
    timeline.setCycleCount(Timeline.INDEFINITE);
    timeline.play();
}

// prereq, files have been appropriately populated.
public void displayFile(int idx) {
    File fileToDisplay = files.get(idx);
    // do your display logic.
}
Note, in addition to the above, you probably want to run a separate task to read the file data into memory, and just have a List<ModelData>, where ModelData is some class for the data you read from a file. That way you wouldn't be continuously running IO in your animation loop. For a five-second-per-frame animation it probably doesn't matter much, but for a more frequent animation such optimizations are very important.
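A minimal sketch of that idea, assuming a hypothetical ModelData class with a static read(File) parser (both names invented for illustration):
// Background loader: parses all files off the UI thread, then hands
// the finished list back to the FX thread before starting the animation.
Task<List<ModelData>> loadTask = new Task<List<ModelData>>() {
    @Override
    protected List<ModelData> call() throws Exception {
        List<ModelData> data = new ArrayList<>();
        for (File f : files) {
            data.add(ModelData.read(f)); // hypothetical parser
        }
        return data;
    }
};
loadTask.setOnSucceeded(e -> {
    modelData = loadTask.getValue(); // field consumed by the display logic
    runAnimation();
});
new Thread(loadTask, "pixel-art-loader").start();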

How to use multiple USB webcams in Matlab simultaneously?

I would like to capture live video with two USB webcams (Philips SPC 900NC), but I found that they cannot work simultaneously on my laptop. Either of the two USB webcams works alone, or together with the webcam built into my laptop.
When I use the Simulink block 'From Video Device', Matlab gives the error message 'Multiple VIDEOINPUT objects cannot access the same device simultaneously.' When I then check the video input devices with the command 'imaqhwinfo', only one of the USB Philips webcams is detected.
I would like to know:
What is the reason for this? Is it a hardware limitation (USB bus bandwidth), or do Matlab video objects just not support multiple identical video devices?
What is the solution? Could anyone give me some suggestions?
You may be interested in this link:
http://opencv.willowgarage.com/wiki/faq#How_to_use_2_cameras_.28multiple_cameras.29_with_cvCam_library
which contains:
First, init the cvcam library and get the number of cameras:
int ncams = cvcamGetCamerasCount(); // returns the number of available cameras in the system
Show a dialog to choose which cameras to use:
int* out;
int nselected = cvcamSelectCamera(&out);
Get the selected cameras and enable them:
int cam1 = out[0];
int cam2 = out[1];
cvcamSetProperty(cam1, CVCAM_PROP_ENABLE, CVCAMTRUE);
cvcamSetProperty(cam1, CVCAM_PROP_RENDER, CVCAMTRUE); // we'll render the stream from this source
cvNamedWindow("Cam1", 1);
cvcamWindow MyWin1 = (cvcamWindow)cvGetWindowHandle("Cam1");
cvcamSetProperty(cam1, CVCAM_PROP_WINDOW, &MyWin1); // selects a window for video rendering
// Same code for camera 2
cvcamSetProperty(cam2, CVCAM_PROP_ENABLE, CVCAMTRUE);
cvcamSetProperty(cam2, CVCAM_PROP_RENDER, CVCAMTRUE);
cvNamedWindow("Cam2", 1);
cvcamWindow MyWin2 = (cvcamWindow)cvGetWindowHandle("Cam2");
cvcamSetProperty(cam2, CVCAM_PROP_WINDOW, &MyWin2); // note: &MyWin2, so each camera renders into its own window
// If you want to open the property dialog for setting the video format parameters, uncomment these lines:
//cvcamGetProperty(cam1, CVCAM_VIDEOFORMAT, NULL);
//cvcamGetProperty(cam2, CVCAM_VIDEOFORMAT, NULL);
Enable stereo mode (2 cameras working at the same time):
cvcamSetProperty(cam1, CVCAM_STEREO_CALLBACK, stereocallback); // stereocallback is the function that runs to process every frame
cvcamInit();
cvcamStart();
// Your app is working
while (1)
{
    int key = cvWaitKey(5);
    if (key == 27) break;
}
cvcamStop();
cvcamExit();
Define the stereocallback function outside of the function above:
void stereocallback(IplImage* image1, IplImage* image2) {
    // Process the 2 images here
}