Issue with Standalone in Processing - import

I have a sketch running in Processing 2.1.2, and it runs fine from the sketch window. When I try to export it to a standalone Windows application, Processing creates the application.windows folder, which contains the 'lib' and 'source' subdirectories. But when I double-click the application, it just shows a blank window.
Can anybody guide me on how to resolve this issue?
The code of the program is given below:
import toxi.geom.*;
import toxi.geom.mesh.*;
import toxi.processing.*;
import processing.serial.*;

TriangleMesh mesh;
ToxiclibsSupport gfx;
PImage img;
String input;
Serial port;
int x, y, z;

void setup() {
  size(448, 299, P3D);
  println(Serial.list());
  port = new Serial(this, Serial.list()[0], 9600);
  port.bufferUntil('\n');
  mesh = (TriangleMesh) new STLReader().loadBinary(sketchPath("check.stl"), STLReader.TRIANGLEMESH);
  gfx = new ToxiclibsSupport(this);
  img = loadImage("imagei.jpg");
}

void draw() {
  background(img);
  translate(width/2, height/2, 0);
  rotateX(radians(x)); // Pitch
  rotateY(radians(y)); // Roll
  rotateZ(radians(z)); // Yaw
  directionalLight(192, 168, 128, 0, -1000, -0.5);
  directionalLight(255, 64, 0, 0.5f, -0.5f, -0.1f);
  noStroke();
  scale(2);
  gfx.mesh(mesh, false);
}

void serialEvent(Serial port) {
  input = port.readString();
  if (input != null) {
    String[] values = split(input, " ");
    println(values[0]);
    println(values[1]);
    println(values[2]);
    x = int(values[0]);
    y = int(values[1]);
    z = int(values[2]);
  }
}

Edit this line of the program:
mesh = (TriangleMesh) new STLReader().loadBinary(sketchPath("check.stl"), STLReader.TRIANGLEMESH);
to:
mesh = (TriangleMesh) new STLReader().loadBinary(sketchPath("data/check.stl"), STLReader.TRIANGLEMESH);
The rest of the program is fine; try it and let me know if you get any errors.
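A note on why the prefix can matter (not part of the original answer): sketchPath() builds a path relative to the sketch or exported application folder, so the argument must include any subdirectory the file actually lives in. A small standalone Java sketch of that kind of relative resolution, with hypothetical folder names:

```java
import java.nio.file.Path;
import java.nio.file.Paths;

public class DataPathDemo {
    // Resolve a data file relative to an application folder, the way a
    // relative sketchPath("data/...") argument would be resolved.
    static Path dataFile(Path appDir, String name) {
        return appDir.resolve("data").resolve(name);
    }

    public static void main(String[] args) {
        // "application.windows64" is only an example export folder name.
        Path appDir = Paths.get("application.windows64");
        System.out.println(dataFile(appDir, "check.stl"));
    }
}
```

If the "data/" segment is omitted, the lookup happens directly in the application folder, where the exported file may not exist.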

Related

Unity mirror and webcam streaming WebGL

I have a project in which I use Mirror (UNet 2.0 :P) to make a simple low-poly WebGL game in Unity. The game works great. Now I want to send a webcam stream over the network, for every player.
I managed to get the cam working locally. Now I want to get it to run over the network. I made a Color32[] array which holds the image; it also works locally. But if I try to send it over the network using a [Command] function, I get the message that my packet is too large. I heard that splitting a packet adds a lot of latency, so I tried this solution sendTextures, but it doesn't play nice with Mirror. I only get one frame (if I'm lucky) when I turn my NetworkManager on.
Do you guys have any pointers for me to get it working?
This is the code I have so far:
public class WebCamScript : NetworkBehaviour
{
    static WebCamTexture camTexture;
    Color32[] data;
    private int width = 640, height = 480;
    public GameObject screen;

    void Start()
    {
        screen = GameObject.Find("screen");
        if (camTexture == null)
            camTexture = new WebCamTexture(width, height);
        data = new Color32[width * height];
        if (!camTexture.isPlaying)
            camTexture.Play();
    }

    // Update is called once per frame
    void Update()
    {
        CmdSendCam(data.Length, camTexture.GetPixels32(data));
    }

    [Command]
    void CmdSendCam(int length, Color32[] receivedImage)
    {
        /*
        Texture2D t = new Texture2D(width, height);
        t.SetPixels32(receivedImage);
        t.Apply();
        screen.GetComponent<Renderer>().material.mainTexture = t;
        */
    }
}
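One common workaround for the packet-size limit (not from this thread's code) is to compress each frame before sending it, so the payload fits in a single transport message. A self-contained Java sketch of the idea using JPEG encoding; the gradient image just stands in for a 640x480 webcam frame:

```java
import java.awt.image.BufferedImage;
import java.io.ByteArrayOutputStream;
import javax.imageio.ImageIO;

public class FrameCompressor {
    // Encode a raw RGB frame as JPEG; the resulting byte[] is what would
    // be sent over the network instead of the uncompressed pixel array.
    public static byte[] encode(BufferedImage frame) throws Exception {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        ImageIO.write(frame, "jpg", out);
        return out.toByteArray();
    }

    public static void main(String[] args) throws Exception {
        int width = 640, height = 480;
        BufferedImage frame = new BufferedImage(width, height, BufferedImage.TYPE_INT_RGB);
        // Fill with a gradient to stand in for webcam data.
        for (int y = 0; y < height; y++)
            for (int x = 0; x < width; x++)
                frame.setRGB(x, y, (x * 255 / width) << 16 | (y * 255 / height) << 8);
        byte[] jpeg = encode(frame);
        int rawBytes = width * height * 4; // a Color32 is 4 bytes per pixel
        System.out.println("raw=" + rawBytes + " jpeg=" + jpeg.length);
    }
}
```

The receiver would decode the bytes back into a texture. The trade-off is CPU time per frame, so sending at a reduced rate (rather than every Update) is usually also part of the fix.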

Take Screenshot Before any ARCore model is rendered

I'm using screenshot code to take a screenshot of the screen, which is working fine, but it's also capturing the ARCore models. Is there a way to take a screenshot before the models are rendered?
I tried to SetActive(false), take a screenshot, then SetActive(true); it does work, but there's a noticeable flicker, i.e. the model disappears and then reappears.
Update: This is a script applied on the ScreenShotCamera, and it has been updated after removing all the bugs (thanks to @Shingo). Feel free to use it; it's working properly.
using GoogleARCore;
using OpenCVForUnitySample;
using System.Collections;
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.XR;

[RequireComponent(typeof(Camera))]
public class SnapshotCamera : MonoBehaviour
{
    Camera snapCam;
    public UnityEngine.UI.Text text;
    public RenderTexture mRenderTexture;
    int resWidth = 480;
    int resHeight = 800;

    // Start is called before the first frame update
    public void initialize(ARBackgroundRenderer background, Material material)
    {
        background = new ARBackgroundRenderer();
        snapCam = GetComponent<Camera>();
        background.backgroundMaterial = material;
        background.camera = snapCam;
        background.mode = ARRenderMode.MaterialAsBackground;
        if (snapCam.targetTexture == null)
        {
            snapCam.targetTexture = new RenderTexture(resWidth, resHeight, 24);
        }
        else
        {
            snapCam.targetTexture.height = resHeight;
            snapCam.targetTexture.width = resWidth;
            //resHeight = snapCam.targetTexture.height;
            //resWidth = snapCam.targetTexture.width;
        }
        background.camera.cullingMask = LayerMask.NameToLayer("Default");
        //snapCam.CopyFrom(background.camera);
        snapCam.gameObject.SetActive(false);
    }

    public void TakeSnapShot()
    {
        snapCam.gameObject.SetActive(true);
    }

    void LateUpdate()
    {
        if (snapCam.gameObject.activeInHierarchy)
        {
            snapCam.cullingMask = LayerMask.NameToLayer("Default");
            if (ARCoreBackgroundRenderer.screenShot == null)
                ARCoreBackgroundRenderer.screenShot = new Texture2D(resWidth, resHeight, TextureFormat.RGB24, false);
            snapCam.Render();
            RenderTexture.active = snapCam.targetTexture;
            ARCoreBackgroundRenderer.screenShot.ReadPixels(new Rect(0, 0, resWidth, resHeight), 0, 0);
            ARCoreBackgroundRenderer.screenShot.Apply();
            snapCam.gameObject.SetActive(false);
            HandPoseRecognition.captureTexture = false;
            //string name = string.Format("{0}_Capture{1}_{2}.png", Application.productName, "{0}", System.DateTime.Now.ToString("yyyy-MM-dd_HH-mm-ss"));
            //UnityEngine.Debug.Log("Permission result: " + NativeGallery.SaveImageToGallery(ARCoreBackgroundRenderer.screenShot, Application.productName + " Captures", name));
        }
    }
}
Perhaps I was a little ambiguous: what you mentioned in the comment has already been resolved thanks to you, but there is still a problem.
I'll show you the images:
These are the 2 cameras I have:
This is what my Main (ARCore Camera) shows.
And this is what the ScreenShot Camera shows.
You can use layers: put every ARCore model in one layer (e.g. ARLAYER), then set the camera's culling mask to exclude those models.
Pseudo code:
// Set the models' layer
foreach (GameObject arcoreModel in arcoreModels)
    arcoreModel.layer = ARLAYER;

// Set the camera's culling mask to everything except ARLAYER
camera.cullingMask = ~(1 << ARLAYER);
camera.Render();
Create the screenshot camera from another camera:
var go = new GameObject("screenshotcamera");
// Copy the transform
go.transform.position = mainCamera.transform.position;
...
// Copy the camera
var screenshotCamera = go.AddComponent<Camera>();
screenshotCamera.CopyFrom(mainCamera);
Update with your script:
snapCam = GetComponent<Camera>();
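The culling-mask expression is plain bit arithmetic: each layer corresponds to one bit, and ~(1 << layer) is a mask with every bit set except that layer's. A small standalone Java check of the logic (the layer index 8 is only an example):

```java
public class CullingMaskDemo {
    // A camera renders objects on a layer when the layer's bit
    // is set in the camera's culling mask.
    static boolean renders(int cullingMask, int layer) {
        return (cullingMask & (1 << layer)) != 0;
    }

    public static void main(String[] args) {
        int arLayer = 8;                // hypothetical layer index for AR models
        int mask = ~(1 << arLayer);     // everything except arLayer
        System.out.println(renders(mask, arLayer)); // false: AR models excluded
        System.out.println(renders(mask, 0));       // true: default layer still drawn
    }
}
```

This is also why LayerMask.NameToLayer() alone is not a mask: it returns the layer index, which still has to be shifted into a bit before combining.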

GTK TextView auto scroll when text is added to text buffer

I am trying to create a very simple log-like GUI application that merely displays text from a log file dynamically and asynchronously. The problem is that when the log file is updated, the text view in the GUI scrolls back up to line 1. Every attempt to fix this has failed and I am wondering if I have stumbled across a bug in GTK. Here is a summary of my code:
using Cairo;
using Gtk;

namespace ServerManager {
    public class ServerManager : Window {
        public TextView text_view;
        public TextIter myIter;
        public TextMark myMark;

        public async void read_something_async (File file) {
            var text = new StringBuilder ();
            var dis = new DataInputStream (file.read ());
            string line;
            while ((line = yield dis.read_line_async (Priority.DEFAULT)) != null) {
                text.append (line);
                text.append_c ('\n');
            }
            this.text_view.buffer.text = text.str;
            text_view.buffer.get_end_iter (out myIter);
            text_view.scroll_to_iter (myIter, 0, false, 0, 0);
        }

        public static int main (string[] args) {
            Gtk.init (ref args);
            var window = new ServerManager ();

            // The read-only TextView
            window.text_view = new TextView ();
            window.text_view.editable = false;
            window.text_view.cursor_visible = false;
            window.text_view.wrap_mode = Gtk.WrapMode.WORD;

            // Add scrolling functionality to the TextView
            var scroll = new ScrolledWindow (null, null);
            scroll.set_policy (PolicyType.AUTOMATIC, PolicyType.AUTOMATIC);
            scroll.add (window.text_view);

            // Vbox so that our TextView has someplace to live
            var vbox = new Box (Orientation.VERTICAL, 0);
            vbox.pack_start (scroll, true, true, 0);
            window.add (vbox);
            window.set_border_width (12);
            window.set_position (Gtk.WindowPosition.CENTER);
            window.set_default_size (800, 600);
            window.destroy.connect (Gtk.main_quit);
            window.show_all ();

            File file = File.new_for_path ("/home/user/temp.log");
            FileMonitor monitor = file.monitor (FileMonitorFlags.NONE, null);
            stdout.printf ("Monitoring: %s\n", file.get_path ());
            monitor.changed.connect (() => {
                window.read_something_async (file);
            });

            Gtk.main ();
            return 0;
        }
    }
}
I also tried using TextMarks instead of Iters, but that had no effect.
The scroll to the first row happens because read_something_async() deletes the current contents of the buffer and then writes the new text (this is what setting the text property does). Maybe this is what you want, but unless you keep track of the scroll location you will lose it.
The reason your scroll_to_iter() didn't work as expected is probably this:
Note that this function uses the currently-computed height of the lines in the text buffer. Line heights are computed in an idle handler; so this function may not have the desired effect if it’s called before the height computations. To avoid oddness, consider using gtk_text_view_scroll_to_mark() which saves a point to be scrolled to after line validation.
Calling TextView.ScrollToMark() with a "right gravity" TextMark should work for you.

How to subclass a Clutter.Actor (involves Cairo/Clutter.Canvas)

Can anyone help me get this to run? I'm aiming for a custom Actor. (I have only just started hacking with Vala in the last few days, and Clutter is a mystery too.)
The drawme method is being run (when invalidate is called), but there doesn't seem to be any drawing happening (via the Cairo context).
ETA: I added one line in the constructor to show the fix: this.set_size.
/*
    Working from the sample code at:
    https://developer.gnome.org/clutter/stable/ClutterCanvas.html
*/
public class AnActor : Clutter.Actor {
    public Clutter.Canvas canvas;

    public AnActor () {
        canvas = new Clutter.Canvas ();
        canvas.set_size (300, 300);
        this.set_content (canvas);
        this.set_size (300, 300);
        // Connect to the draw signal.
        canvas.draw.connect (drawme);
    }

    private bool drawme (Cairo.Context ctx, int w, int h) {
        stdout.printf ("Just to test this ran at all: %d\n", w);
        ctx.scale (w, h);
        ctx.set_source_rgb (0, 0, 0);
        // Rect doesn't draw.
        //ctx.rectangle (0, 0, 200, 200);
        //ctx.fill ();
        // paint doesn't draw.
        ctx.paint ();
        return true;
    }
}

int main (string[] args) {
    // Start clutter.
    var result = Clutter.init (ref args);
    if (result != Clutter.InitError.SUCCESS) {
        stderr.printf ("Error: %s\n", result.to_string ());
        return 1;
    }
    var stage = Clutter.Stage.get_default ();
    stage.destroy.connect (Clutter.main_quit);

    // Make my custom Actor:
    var a = new AnActor ();
    // This is dodgy:
    stage.add_child (a);

    // This works:
    var r1 = new Clutter.Rectangle ();
    r1.width = 50;
    r1.height = 50;
    r1.color = Clutter.Color.from_string ("rgb(255, 0, 0)");
    stage.add_child (r1);

    a.canvas.invalidate ();
    stage.show_all ();
    Clutter.main ();
    return 0;
}
You need to assign a size to the Actor as well, not just the Canvas.
The size of the Canvas is independent of the size of the Actor to which the Canvas is assigned, since you can assign the same Canvas instance to multiple actors.
If you call:
a.set_size (300, 300);
you will see the actor and the results of the drawing.
Clutter also ships with various examples, for instance how to make a rectangle with rounded corners using Cairo: https://git.gnome.org/browse/clutter/tree/examples/rounded-rectangle.c - or how to make a simple clock: https://git.gnome.org/browse/clutter/tree/examples/canvas.c

bufferUntil('\n') won't trigger serialEvent processing with Eclipse

I want to do the simplest thing: plot a graph from the serial port of an Arduino with the Processing software. I use Eclipse.
I did as the tutorials say about the plugins. I also copied the code from the Arduino site, which is this:
import processing.serial.*;

Serial myPort;  // The serial port
int xPos = 1;   // Horizontal position of the graph

void setup () {
  // Set the window size:
  size(400, 300);
  // List all the available serial ports
  println(Serial.list());
  // I know that the first port in the serial list on my mac
  // is always my Arduino, so I open Serial.list()[0].
  // Open whatever port is the one you're using.
  myPort = new Serial(this, Serial.list()[0], 9600);
  // Don't generate a serialEvent() unless you get a newline character:
  myPort.bufferUntil('\n');
  // Set the initial background:
  background(0);
}

void draw () {
  // Everything happens in the serialEvent()
}

void serialEvent (Serial myPort) {
  // Get the ASCII string:
  String inString = myPort.readStringUntil('\n');
  if (inString != null) {
    // Trim off any whitespace:
    inString = trim(inString);
    // Convert to an int and map to the screen height:
    float inByte = float(inString);
    inByte = map(inByte, 0, 1023, 0, height);
    // Draw the line:
    stroke(127, 34, 255);
    line(xPos, height, xPos, height - inByte);
    // At the edge of the screen, go back to the beginning:
    if (xPos >= width) {
      xPos = 0;
      background(0);
    } else {
      // Increment the horizontal position:
      xPos++;
    }
  }
}
The problem is that bufferUntil('\n') does not trigger the serialEvent().
I know that there was a bug where assigning an 8-bit value to a 32-bit int causes trouble.
In the Processing IDE it works great, though; in Eclipse the event never triggers. Is there a solution?
Note that bufferUntil() takes an integer value and you're giving it a char. At the very least, try bufferUntil(10) just to see if there's some oddness going on there, but it might be worth simply printing the values coming in on myPort and seeing what arrives when you send a newline.
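For what it's worth, on the Java side a char argument widens automatically to its integer code point, so '\n' and 10 should reach an int parameter as the same value; any difference in behavior would point to a problem elsewhere. A quick check:

```java
public class NewlineCode {
    public static void main(String[] args) {
        // A char widens to its Unicode code point when an int is expected,
        // so bufferUntil('\n') and bufferUntil(10) receive the same argument.
        int fromChar = '\n';
        System.out.println(fromChar);       // 10
        System.out.println(fromChar == 10); // true
    }
}
```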