I want to create an object that works like MultiPointTouchArea (so it has a touchUpdated signal) but does not steal touches, so that objects placed beneath it receive the touch events as well.
The solution may require creating a C++ object.
Is there a simple way to create such an object? Is it possible to process (touch) events without "stealing" them? Any hint will be appreciated.
Here is an example of what I am trying to do. I want to touch the top Rectangle, but I want both MultiPointTouchAreas to process the touches at the same time:
import QtQuick 2.3
import QtQuick.Window 2.2

Window {
    visible: true
    width: 300
    height: 300

    Rectangle {
        id: rectangle1
        anchors.centerIn: parent
        width: 150
        height: width
        color: "red"

        MultiPointTouchArea {
            anchors.fill: parent
            mouseEnabled: false
            onTouchUpdated: {
                console.log("Bottom touch area contains:",
                            touchPoints.length,
                            "touches.")
            }
        }
    }

    Rectangle {
        id: rectangle2
        anchors.centerIn: parent
        width: 100
        height: width
        color: "blue"

        MultiPointTouchArea {
            anchors.fill: parent
            mouseEnabled: false
            onTouchUpdated: {
                console.log("Top touch area contains:",
                            touchPoints.length,
                            "touches.")
            }
        }
    }
}
If I find a working solution, I will post it here. For now, I will try to implement Mitch's solution.
You can subclass QQuickItem and override the touchEvent() function:
This event handler can be reimplemented in a subclass to receive touch events for an item. The event information is provided by the event parameter.
You'll probably need to explicitly set accepted to false to ensure that the item doesn't steal the events:
Setting the accept parameter indicates that the event receiver wants the event. Unwanted events might be propagated to the parent widget. By default, isAccepted() is set to true, but don't rely on this as subclasses may choose to clear it in their constructor.
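For illustration, a minimal sketch of that first approach could look like the following (the class name is mine; on Qt 5.10 and later you may also need to call setAcceptTouchEvents(true) in the constructor):
#include <QtQuick>

// Sketch only: observe the touch, then decline it so that items underneath can still receive it.
class PassThroughTouchArea : public QQuickItem
{
protected:
    void touchEvent(QTouchEvent *event) override {
        qDebug() << "saw" << event->touchPoints().count() << "touch point(s)";
        event->setAccepted(false); // don't claim the touch
    }
};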
I can verify that the above will result in the lower touch area taking all events after the press (tested on an Android phone). In that case, you'll need to filter the events somehow. One way to do this is, inside your QQuickItem subclass, declare a property that will be used to point to the lower touch area. When that property changes, install an event filter on the touch area:
main.cpp:
#include <QGuiApplication>
#include <QtQuick>

class CustomTouchArea : public QQuickItem
{
    Q_OBJECT
    Q_PROPERTY(QQuickItem *targetTouchArea READ targetTouchArea WRITE setTargetTouchArea NOTIFY targetTouchAreaChanged)
public:
    CustomTouchArea() :
        mTargetTouchArea(0) {
    }

    bool eventFilter(QObject *, QEvent *event) {
        if (event->type() == QEvent::TouchUpdate) {
            qDebug() << "processing TouchUpdate...";
        }
        // other Touch events here...
        return false;
    }

    QQuickItem *targetTouchArea() const {
        return mTargetTouchArea;
    }

    void setTargetTouchArea(QQuickItem *targetTouchArea) {
        if (targetTouchArea == mTargetTouchArea)
            return;

        if (mTargetTouchArea)
            mTargetTouchArea->removeEventFilter(this);

        mTargetTouchArea = targetTouchArea;

        if (mTargetTouchArea)
            mTargetTouchArea->installEventFilter(this);

        emit targetTouchAreaChanged();
    }

signals:
    void targetTouchAreaChanged();

private:
    QQuickItem *mTargetTouchArea;
};

int main(int argc, char *argv[])
{
    QGuiApplication::setAttribute(Qt::AA_EnableHighDpiScaling);
    QGuiApplication app(argc, argv);

    qmlRegisterType<CustomTouchArea>("App", 1, 0, "CustomTouchArea");

    QQmlApplicationEngine engine;
    engine.load(QUrl(QStringLiteral("qrc:/main.qml")));

    return app.exec();
}

#include "main.moc"
main.qml:
import QtQuick 2.3
import QtQuick.Window 2.2
import App 1.0

Window {
    visible: true
    width: 300
    height: 300

    Rectangle {
        id: rectangle1
        anchors.centerIn: parent
        width: 150
        height: width
        color: "red"

        MultiPointTouchArea {
            id: touchArea
            anchors.fill: parent
            mouseEnabled: false
            onTouchUpdated: {
                console.log("Bottom touch area contains:",
                            touchPoints.length,
                            "touches.")
            }
        }
    }

    Rectangle {
        id: rectangle2
        anchors.centerIn: parent
        width: 100
        height: width
        color: "blue"

        CustomTouchArea {
            targetTouchArea: touchArea
            anchors.fill: parent
        }
    }
}
You can read more about event filters here.
I am pretty new to AudioKit, and I was wondering: is there a way to adjust the sample rate, color, and margins for the NodeRollingView?
I am creating the view similarly to the CookBook sample project:
struct LevelView: View {
    @StateObject var conductor = TunerConductor()

    var body: some View {
        VStack {
            NodeRollingView(conductor.tappableNodeA).clipped()
        }
        .navigationBarTitle("Level One")
        .onAppear {
            conductor.start()
        }
        .onDisappear {
            conductor.stop()
        }
    }
}
To be more specific:
I am trying to use the NodeRollingView to draw which note a person is singing into the microphone.
Something like this tutorial, since AKFrequencyTracker no longer exists.
EDIT:
So basically the output looks like this, but I want to make it much smoother with reduced noise, so that you can actually understand what frequency is being played:
How it looks now
And this is what I want it to look like (see the red line),
with the large empty area at the top being filled as well:
How it should look
Looking at the code of the NodeRollingView, it has a public init, but I am not sure how to use it:
public init(_ node: Node, color: Color = .gray, bufferSize: Int = 1024) {
    metalFragment = FragmentBuilder(foregroundColor: color.cg,
                                    isCentered: false,
                                    isFilled: false)
    nodeTap = RawDataTap(node, bufferSize: UInt32(bufferSize))
}
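Based on that signature, I assume something like the following should at least compile (color and bufferSize come from the init above, while frame and padding are ordinary SwiftUI modifiers for size and margins; the concrete values are placeholders, and TunerConductor is the Cookbook class from my snippet above):
import SwiftUI
import AudioKitUI

struct LevelView: View {
    @StateObject var conductor = TunerConductor()

    var body: some View {
        VStack {
            // color and bufferSize are parameters of NodeRollingView's public init;
            // frame and padding control the size and margins of the view itself.
            NodeRollingView(conductor.tappableNodeA, color: .red, bufferSize: 4096)
                .clipped()
                .frame(height: 200)
                .padding(.horizontal, 16)
        }
        .onAppear { conductor.start() }
        .onDisappear { conductor.stop() }
    }
}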
I'm trying to implement basic drag and drop in QML. Functionally, it works: I'm able to drag a string around. However, I can't get my draggable Rectangle to follow the cursor. It sets the Rectangle's x and y properly on the frame in which it becomes visible, but then it remains stationary rather than moving with the mouse. Here is my code:
MouseArea {
    id: mouseArea
    anchors.fill: parent
    drag.target: draggable
}

Rectangle {
    id: draggable
    height: 18
    width: dragText.width + 8
    clip: true
    color: "#ff333333"
    border.width: 2
    border.color: "#ffaaaaaa"
    visible: false
    anchors.verticalCenter: parent.verticalCenter
    anchors.horizontalCenter: parent.horizontalCenter

    Drag.active: mouseArea.drag.active
    Drag.hotSpot.x: 0
    Drag.hotSpot.y: 0
    Drag.mimeData: { "text/plain": "Teststring" }
    Drag.dragType: Drag.Automatic

    Drag.onDragStarted: {
        visible = true
    }
    Drag.onDragFinished: {
        visible = false
    }

    Text {
        id: dragText
        x: 4
        text: "Teststring"
        font.weight: Font.Bold
        color: "#ffffffff"
        horizontalAlignment: Text.AlignHCenter
    }
}
Your rectangle is not moving because you set anchors on your Rectangle. Anchors are intended to keep an item stationary relative to its anchoring point.
Remove
anchors.verticalCenter: parent.verticalCenter
anchors.horizontalCenter: parent.horizontalCenter
in your QML.
If you want to place it in the center of the parent, you would need to set it like this instead:
x: parent.width / 2 - this.width / 2
y: parent.height / 2 - this.height / 2
You may also want to remove
Drag.dragType: Drag.Automatic
if the rectangle should follow your cursor rather than only moving after the drag has ended.
I ended up solving this by avoiding the QML Drag mechanism altogether and basically just using workarounds. I added the following to my MouseArea to make the rectangle move around properly:
onMouseXChanged: {
    draggable.x = mouseX - draggable.width/2
}
onMouseYChanged: {
    draggable.y = mouseY - draggable.height/2
}
To emulate dropping functionality, I programmatically calculate the position of the "drop area," compare it to the mouse position with rudimentary collision detection, and then append to the ListView that's attached to the "drop area."
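Roughly, the release handler looks something like this (a sketch; dropArea and dropModel are placeholder ids for the real drop target and the ListView's model):
MouseArea {
    id: mouseArea
    anchors.fill: parent

    onMouseXChanged: draggable.x = mouseX - draggable.width / 2
    onMouseYChanged: draggable.y = mouseY - draggable.height / 2

    onReleased: {
        // Rudimentary collision detection: map the release position into the
        // drop area's coordinate system and check that it falls inside its bounds.
        var pos = mapToItem(dropArea, mouseX, mouseY)
        if (pos.x >= 0 && pos.x <= dropArea.width &&
            pos.y >= 0 && pos.y <= dropArea.height) {
            dropModel.append({ "text": dragText.text })
        }
    }
}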
I have a simple touch/mouseclick script attached to a GameObject as a sort of "Master Script" i.e. the GameObject is invisible and doesn't do anything but hold this Touch script when the game is running.
How do I tell other named GameObjects that are generated at runtime to do things e.g. highlight when touched/clicked from this Master Script?
The script for highlighting seems to be: renderer.material.color = colorRed;
But I'm not sure how to tell the GameObject clicked on to become highlighted from the Master Script.
Please help! (am programming in C#)
Thanks!
Alright so you'll want to use a ray cast if you're not doing it in GUI. Check out Unity Ray Casting and then use
hit.transform.gameObject.renderer.material.color = Color.red;
You can have a switch that is like:
if (hit.transform.gameObject.CompareTag("tag")) {
    // turn to red;
} else {
    // turn to white;
}
Use Camera.ScreenPointToRay or Camera.ScreenToWorldPoint depending on what you're doing.
For touch, it should look like:
void Update () {
    foreach (Touch touch in Input.touches)
    {
        Ray ray = Camera.main.ScreenPointToRay(touch.position);
        RaycastHit hit;
        if (Physics.Raycast(ray, out hit, 1000.0f))
        {
            if (hit.collider.gameObject.CompareTag("tag"))
            {
                hit.collider.gameObject.renderer.material.color = Color.red;
            }
        }
    }

    // You can also add a "go back" feature in Update, but this will "go back" as soon as the touch ends or moves off.
    // It's also not ideal to search for objects in Update, but that's at your discretion.
    GameObject[] gObjs = GameObject.FindGameObjectsWithTag("tag");
    foreach (GameObject go in gObjs) {
        go.renderer.material.color = Color.white;
    }
}
To answer your question about pinging the 'manager', I would choose one of two options.
Either:
// Drop the object containing YourManager into the box in the inspector where it says "manager":
public YourManager manager;

// In Update and in the raycast function (right next to your color change line):
manager.yourCall();
// and
manager.yourString = "cool";
OR
private YourManager manager;

void Awake () {
    manager = GameObject.FindWithTag("manager").GetComponent<YourManager>();
}

// In Update and in the raycast function (right next to your color change line):
// In your manager file, have "public bool selected;" at the top so you can access that bool from this file:
manager.selected = true;
I detail this a little in another one of my answers HERE
For mouse clicks, I would check out the MonoBehaviour event functions Unity has in store, such as:
// This file would be on the game object in the scene.

// When the mouse is hovering over the GameObject:
void OnMouseEnter () {
    selected = true;
    renderer.material.color = Color.red;
}

// When the mouse has moved out:
void OnMouseExit () {
    selected = false;
    renderer.material.color = Color.white;
}

// Or you can use the same system as above with:
Input.GetMouseButtonDown(0)
Resolution:
Use a bool in your manager file: true means selected, false means not selected. Give all the objects you instantiate a tag, use the raycast from the master file to hit the game object, and when it hits the game object with that tag, swap colors and set the bool in the master file. It's probably better to do it internally from the master file.
(It all depends on what you're doing.)
If you know what the name of the GameObjects will be at runtime, you can use GameObject.Find("") and store that in a GameObject variable. You can then set the renderer of that GameObject to whatever you like (assuming a renderer is linked to that GameObject).
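For example (a sketch only; "EnemyCube" is a placeholder name, and GetComponent<Renderer>() is used instead of the older renderer shortcut so it also compiles on newer Unity versions):
using UnityEngine;

public class MasterScript : MonoBehaviour
{
    void HighlightByName()
    {
        // Find the runtime-generated object by its known name.
        GameObject target = GameObject.Find("EnemyCube");
        if (target != null)
        {
            // Highlight it, assuming it has a renderer attached.
            target.GetComponent<Renderer>().material.color = Color.red;
        }
    }
}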
The most obvious way of doing this would be to use prefabs and layers or tags.
You can add a tag to your prefab (say "Selectable") or move the prefab to some "Selectable" layer and then write your code around this, knowing that all selectable items are on this layer/have this tag.
Another way of doing this (and, in my opinion, a better way) is implementing your own 'Selectable' component. You would search for this component on a clicked item and then perform the selection if you find that component. This way is better because you can add some additional selection logic in this component which would otherwise reside in your selection master script (imagine the size of your script after you've added a couple of selectables).
You can do it by implementing a SelectableItem script (the name is arbitrary) and a couple of its derivatives:
public class SelectableItem : MonoBehaviour {
    public virtual void OnSelected() {
        renderer.material.color = Color.red;
    }
}

public class SpecificSelectable : SelectableItem {
    public override void OnSelected() {
        // You can do whatever you want in here.
        renderer.material.color = Color.green;
    }
}

// Adding new selectables is easy and doesn't require adding more code to your master script.
public class AnotherSpecificSelectable : SelectableItem {
    public override void OnSelected() {
        renderer.material.color = Color.yellow;
    }
}
And in your master script:
// Somewhere in your master selection script.
// These values are arbitrary, and this whole mask thing is optional,
// but it will improve performance when you click around a lot.
var selectablesLayer = 8;
var selectablesMask = 1 << selectablesLayer;

// Use layers and layer masks to only raycast against selectable objects
// and avoid unnecessary GetComponent() calls.
if (Physics.Raycast(ray, out hit, 1000.0f, selectablesMask))
{
    var selectable = hit.collider.gameObject.GetComponent<SelectableItem>();
    if (selectable) {
        // This turns the item red, green or yellow depending on which type of SelectableItem
        // this is (which is controlled by which component the GameObject has).
        // This is called polymorphic dispatch and is one of the reasons we love object-oriented design so much.
        selectable.OnSelected();
    }
}
So say you have a couple of different selectables and you want them to do different things upon selection. This is how you would do it: some of your selectables would have one of the selection components, and others would have another. The logic that performs the selection resides in the master script, while the actions to be performed live in the specific scripts attached to the game objects.
You can go further and add an OnUnselected() action to those selectables:
public class SelectableItem : MonoBehaviour {
    public virtual void OnSelected() {
        renderer.material.color = Color.red;
    }

    public virtual void OnUnselected() {
        renderer.material.color = Color.white;
    }
}
and then even do something like this:
// In your master script:
private SelectableItem currentSelection;

var selectable = hit.collider.gameObject.GetComponent<SelectableItem>();
if (selectable) {
    if (currentSelection) currentSelection.OnUnselected();
    selectable.OnSelected();
    currentSelection = selectable;
}
And we've just added deselection logic.
DISCLAIMER: These are just a bunch of snippets. If you copy and paste them as-is, they probably won't work right away.
I have a working collision system for sprites that I don't want the "player" to pass through. The problem is that I have no idea what I should execute on collision to keep the player from passing through the sprites.
The wallCollision() method is currently empty.
if (tmxTileProperties.containsTMXProperty("collision", "1")) {
    Rectangle rect = new Rectangle(tmxTile.getTileX(), tmxTile.getTileY(), 128, 128,
            mEngine.getVertexBufferObjectManager())
    {
        @Override
        protected void onManagedUpdate(float pSecondsElapsed)
        {
            if (player.collidesWith(this))
            {
                wallCollision();
            }
        }
    };
    rect.setVisible(false);
    mainScene.attachChild(rect);
}
The question located here addresses this. The method below creates a JBox2D body at the same position as the blocked tile. I'm not sure how this works in conjunction with the pathfinding to exclude blocked tiles, but I've seen the same approach used in other places, assuming you're using GLES2. Hope this helps.
private void createUnwalkableObjects(TMXTiledMap map) {
    // Loop through the object groups
    for (final TMXObjectGroup group : map.getTMXObjectGroups()) {
        if (group.getTMXObjectGroupProperties().containsTMXProperty("wall", "true")) {
            // This is our "wall" layer. Create the boxes from it.
            for (final TMXObject object : group.getTMXObjects()) {
                // Note: on GLES2 the Rectangle constructor also needs a VertexBufferObjectManager.
                final Rectangle rect = new Rectangle(object.getX(), object.getY(),
                        object.getWidth(), object.getHeight(),
                        this.getVertexBufferObjectManager());
                final FixtureDef boxFixtureDef = PhysicsFactory.createFixtureDef(0, 0, 1f);
                PhysicsFactory.createBoxBody(this.mPhysicsWorld, rect, BodyType.StaticBody, boxFixtureDef);
                rect.setVisible(false);
                mScene.attachChild(rect);
            }
        }
    }
}
Can anyone help me get this to run? I'm aiming for a custom Actor. (I have only just started hacking with Vala in the last few days and Clutter is a mystery too.)
The drawme method is being run (when invalidate is called) but there doesn't seem to be any drawing happening (via the Cairo context).
ETA: I added one line in the constructor to show the fix - this.set_size.
/*
    Working from the sample code at:
    https://developer.gnome.org/clutter/stable/ClutterCanvas.html
*/

public class AnActor : Clutter.Actor {
    public Clutter.Canvas canvas;

    public AnActor() {
        canvas = new Clutter.Canvas();
        canvas.set_size(300, 300);
        this.set_content( canvas );
        this.set_size(300, 300); // ETA: this is the fix.

        // Connect to the draw signal.
        canvas.draw.connect(drawme);
    }

    private bool drawme( Cairo.Context ctx, int w, int h) {
        stdout.printf("Just to test this ran at all: %d\n", w);
        ctx.scale(w, h);
        ctx.set_source_rgb(0, 0, 0);

        // Rect doesn't draw.
        //ctx.rectangle(0, 0, 200, 200);
        //ctx.fill();

        // paint doesn't draw.
        ctx.paint();
        return true;
    }
}
int main(string [] args) {
    // Start clutter.
    var result = Clutter.init(ref args);
    if (result != Clutter.InitError.SUCCESS) {
        stderr.printf("Error: %s\n", result.to_string());
        return 1;
    }

    var stage = Clutter.Stage.get_default();
    stage.destroy.connect(Clutter.main_quit);

    // Make my custom Actor:
    var a = new AnActor();
    // This is dodgy:
    stage.add_child(a);

    // This works:
    var r1 = new Clutter.Rectangle();
    r1.width = 50;
    r1.height = 50;
    r1.color = Clutter.Color.from_string("rgb(255, 0, 0)");
    stage.add_child(r1);

    a.canvas.invalidate();

    stage.show_all();
    Clutter.main();
    return 0;
}
You need to assign a size to the Actor as well, not just the Canvas.
The size of the Canvas is independent of the size of the Actor to which the Canvas is assigned, as you can assign the same Canvas instance to multiple actors.
If you call:
a.set_size(300, 300)
you will see the actor and the results of the drawing.
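For example, since the same Canvas instance can be shared, something like this should be possible (a sketch; the variable names are arbitrary):
var shared_canvas = new Clutter.Canvas();
shared_canvas.set_size(300, 300);

var big = new Clutter.Actor();
big.set_content(shared_canvas);
big.set_size(300, 300);

var small = new Clutter.Actor();
small.set_content(shared_canvas);
small.set_size(150, 150); // the actor's size is independent of the canvas size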
Clutter also ships with various examples, for instance how to make a rectangle with rounded corners using Cairo: https://git.gnome.org/browse/clutter/tree/examples/rounded-rectangle.c - or how to make a simple clock: https://git.gnome.org/browse/clutter/tree/examples/canvas.c