I am pretty new to AudioKit, and I was wondering whether there is a way to adjust the sample rate, color, and margins of the NodeRollingView?
I am creating the view similarly to the CookBook sample project:
struct LevelView: View {
    @StateObject var conductor = TunerConductor()

    var body: some View {
        VStack {
            NodeRollingView(conductor.tappableNodeA).clipped()
        }
        .navigationBarTitle("Level One")
        .onAppear {
            conductor.start()
        }
        .onDisappear {
            conductor.stop()
        }
    }
}
To be more specific:
I am trying to use the NRV to draw which note a person is singing into the microphone, something like this tutorial, since AKFrequencyTracker no longer exists.
EDIT:
So basically the output looks like this, but I want to make it much smoother with reduced noise, so that you can actually understand what frequency is being played:
How it looks now
And this is what I want it to look like (see the red line), with the large empty area at the top filled in as well:
How it should look
Looking at the code of the NRV, it has a public init, but I am not sure how to use it:
public init(_ node: Node, color: Color = .gray, bufferSize: Int = 1024) {
    metalFragment = FragmentBuilder(foregroundColor: color.cg, isCentered: false, isFilled: false)
    nodeTap = RawDataTap(node, bufferSize: UInt32(bufferSize))
}
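Based on that signature, passing a color and a different buffer size should just be a matter of extra arguments in my existing view. This is only a sketch I haven't verified against my AudioKit version; .red and 4096 are arbitrary example values:

// A minimal sketch based on the init shown above: pass the extra arguments
// directly. A larger bufferSize may smooth the trace; .red is just an example.
NodeRollingView(conductor.tappableNodeA, color: .red, bufferSize: 4096).clipped()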
I am trying to use a timer and fill the screen with color. To put it simply: I get the screen height with UIScreen.main.bounds.height and divide it by selectedTime, say 120 seconds. The problem: the screen fills up to roughly 232 instead of the full 844.0 screen height, and it fills up in 32 seconds instead of 120. I'm probably missing something. Relevant code section:
.onChange(of: secondsToMinutesAndSeconds(seconds: timerManager.secondsLeft), perform: { waveTime in
    let selectedCircularValue = availableMinutes[self.selectedCircularIndex] * 60
    let heightProgress = CGFloat(UIScreen.main.bounds.height / CGFloat(selectedCircularValue))
    if timerManager.screenHeightChanged < UIScreen.main.bounds.height {
        timerManager.screenHeightChanged += CGFloat(heightProgress)
        withAnimation(.linear) {
            self.colorSize = CGFloat(Double(timerManager.screenHeightChanged))
        }
    } else {
        timerManager.screenHeightChanged = 0
    }
})
Progress output
{ seconds 120
screenHeight 844.0
estimatedTime screen / seconds -> (7.033333333333333)
...
progress 14.066666666666666
...
progress 225.06666666666663
...
progress 232.09999999999997
}
Finally, is it possible to make the animation smooth?
Mine
My expectation
UIScreen can be quite annoying in my experience. Try using GeometryReader instead; it works much better when dealing with SwiftUI views. Wrap the view like this:
GeometryReader { geo in
    SomeView()
}
.frame(maxWidth: .infinity, maxHeight: .infinity) // may not be needed
Then whenever you need the size of the view, just use geo.size.width or geo.size.height.
For the animation, use the withAnimation function in conjunction with a modifier like .animation(.linear).
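For illustration, here is a minimal sketch of a bottom-up fill driven by GeometryReader and a single linear animation over the whole duration. The names FillView, totalSeconds, and fillHeight are placeholders I made up, not names from your code:

// A hedged sketch: the view fills from the bottom over totalSeconds,
// using the GeometryReader height rather than UIScreen.
struct FillView: View {
    let totalSeconds: Double = 120
    @State private var fillHeight: CGFloat = 0

    var body: some View {
        GeometryReader { geo in
            VStack {
                Spacer()
                Color.red
                    .frame(height: fillHeight)
            }
            .onAppear {
                // One linear animation over the whole duration avoids the
                // per-tick stepping that makes the fill look jumpy.
                withAnimation(.linear(duration: totalSeconds)) {
                    fillHeight = geo.size.height
                }
            }
        }
    }
}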
As you haven't shared a minimal reproducible example, I can't share a complete solution or test my solution for your case.
On the iPad, I'm trying to build a simple app that continuously tracks the pointer's coordinates as it moves across the screen (via an external mouse/trackpad). Basically something like this JavaScript example at 4:13, except in a SwiftUI view on the iPad.
macOS has NSEvent.mouseLocation, but there doesn't seem to be an iPadOS counterpart. Every resource I've come across online associates coordinates with a gesture (rotation, pinch, drag, click, etc.), with no way to respond to cursor movement alone. This leads me to believe that the solution for pure pointer movement is likely independent of the Gesture protocol.
I was able to get halfway there after modifying code from this SO post. My code below displays updated mouse coordinates so long as the pointer is dragging (i.e., while at least one button is pressed).
import SwiftUI

struct ContentView: View {
    @State private var pt = CGPoint()
    @State private var txt = "init"

    var body: some View {
        let myGesture = DragGesture(
            minimumDistance: 0,
            coordinateSpace: .local
        )
        .onChanged {
            self.pt = $0.location
            self.txt = "x: \(self.pt.x), y: \(self.pt.y)"
        }

        // Spacers needed to make the VStack occupy the whole screen
        return VStack {
            Spacer()
            Text(self.txt)
            Spacer()
        }
        .frame(width: 1000, height: 1000)
        .border(Color.green)
        .contentShape(Rectangle()) // Make the entire VStack tappable; otherwise only the area with text generates a gesture
        .gesture(myGesture) // Add the gesture to the VStack
    }
}
Now, how do I achieve this effect without needing to drag? Could there be some way to do this with the help of Objective-C if a pure Swift solution isn't possible?
Thanks
Edit:
The WWDC video covers this very topic:
Handle trackpad and mouse input
Add SupportsIndirectInputEvents to your Info.plist
From the video:
It is required in order to get the new touch type indirect pointer and EventType.transform.
Existing projects do not have this key set and will need to add it. Starting with the iOS 14 and macOS Big Sur SDKs, new UIKit and SwiftUI projects will have this value set to "true."
In addition, you will use UIPointerInteraction. This tutorial shows you step by step, including custom cursors:
https://pspdfkit.com/blog/2020/supporting-pointer-interactions/
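If you can target iOS 16 or later, a purely SwiftUI alternative is onContinuousHover, which reports the pointer position without any button being pressed. This is only a sketch under that deployment-target assumption, not something from the video:

// A minimal sketch, assuming iOS 16+: onContinuousHover delivers pointer
// coordinates while the cursor moves over the view, no drag required.
struct HoverTrackingView: View {
    @State private var txt = "init"

    var body: some View {
        Text(txt)
            .frame(maxWidth: .infinity, maxHeight: .infinity)
            .contentShape(Rectangle())
            .onContinuousHover { phase in
                switch phase {
                case .active(let location):
                    txt = "x: \(location.x), y: \(location.y)"
                case .ended:
                    txt = "pointer left the view"
                }
            }
    }
}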
I'm trying to set up a very simple RTS-like camera that moves around when WASD or the arrow keys are pressed, but I can't manage to make it work.
I have a player controller that handles input like so:
void AMyPlayerController::SetupInputComponent()
{
    Super::SetupInputComponent();
    InputComponent->BindAxis("CameraForward", this, &AMyPlayerController::CameraForward);
}

void AMyPlayerController::CameraForward(float Amount)
{
    MyRtsCameraReference->CameraForward(Amount);
}
My camera actor inherits from APawn, and I initialize it like this:
ARtsCamera::ARtsCamera()
{
    PrimaryActorTick.bCanEverTick = true;

    CameraComponent = CreateDefaultSubobject<UCameraComponent>(TEXT("CameraComponent"));
    SpringArm = CreateDefaultSubobject<USpringArmComponent>(TEXT("SpringArm"));
    // I tried to use a static mesh in place of this but nothing changes.
    SceneRoot = CreateDefaultSubobject<USceneComponent>(TEXT("SceneRoot"));
    MovementComponent = CreateDefaultSubobject<UFloatingPawnMovement>(TEXT("MovementComponent"));

    SpringArm->TargetArmLength = 500.f;
    SetRootComponent(SceneRoot);
    SpringArm->SetupAttachment(SceneRoot);
    CameraComponent->SetupAttachment(SpringArm);
    SpringArm->AddLocalRotation(FRotator(0.f, -50.f, 0.f));
}
I try to handle the movement like this:
void ARtsCamera::CameraForward(float Amount)
{
    if (Amount != 0)
    {
        // Here Speed is a float used to control the actual speed of this movement.
        // If Amount is printed here, I get the correct value of 1/-1 when w/s is pressed.
        MovementComponent->AddInputVector(GetActorForwardVector() * Amount * Speed);
    }
}
In my setup I then create a Blueprint that inherits from this class to expose a few parameters (such as Speed or the spring arm length) to the level design phase, but this shouldn't be the issue, since the behaviour is the same if I use the C++ class directly.
The following works as expected (except that when I then rotate the camera, the vectors get messed up), but I would like to use a movement component.
void ARtsCamera::CameraForward(float Amount)
{
    if (Amount != 0)
    {
        AddActorWorldOffset(GetActorForwardVector() * Amount * CameraMovingSpeed);
    }
}
Mobility is set to Movable, and so far everything seems simple enough: no compile errors, but the actor doesn't move. What am I missing?
Also, would you use FloatingPawnMovement for this, or would you go for another movement component?
Thanks in advance.
I'm new to Objective-C and Swift. I created a small app where small circles are rendered, and once the player collides with a circle, the game ends. I managed to get everything to work, but how do I remove the nodes after they collide? I tried removeAllChildren(), but none of them disappear. When I use removeFromParent(), only one disappears. I want a way to remove all three nodes that are rendered in the code below.
// addEvilGuys() is called first
func addEvilGuys()
{
    addEvilGuy(named: "paul", speed: 1.3, xPos: CGFloat(self.size.height/3))
    addEvilGuy(named: "boris", speed: 1.7, xPos: frame.size.width/4 + 50)
    addEvilGuy(named: "natasha", speed: 1.5, xPos: frame.size.width/4 + 150)
}

func addEvilGuy(named: String, speed: Float, xPos: CGFloat)
{
    evilGuyNode = SKSpriteNode(imageNamed: named)
    evilGuyNode.zPosition = 10
    evilGuyNode.physicsBody = SKPhysicsBody(circleOfRadius: 16)
    evilGuyNode.physicsBody!.affectedByGravity = false
    evilGuyNode.physicsBody!.categoryBitMask = ColliderType.BadGuy.rawValue
    evilGuyNode.physicsBody!.contactTestBitMask = ColliderType.Hero.rawValue
    evilGuyNode.physicsBody!.collisionBitMask = ColliderType.Hero.rawValue
    evilGuyNodeCount += 1

    var evilGuy = EvilGuy(speed: speed, eGuy: evilGuyNode)
    evilGuys.append(evilGuy)
    resetEvilGuy(evilGuyNode, xPos: xPos)
    evilGuy.xPos = evilGuyNode.position.x
    addChild(evilGuyNode)
}

func resetEvilGuy(_ evilGuyNode: SKSpriteNode, xPos: CGFloat)
{
    evilGuyNode.position.y = endOfScreenBottom
    evilGuyNode.position.x = xPos
}
It looks like in addEvilGuy you are reassigning a stored property (i.e. one that is visible to the entire class, plus whatever the access level allows) to create the SKSpriteNode that you're adding. This means that you are orphaning the previously created evil guy node.
In addEvilGuy, replace
evilGuyNode = SKSpriteNode(imageNamed: named)
with
let evilGuyNode = SKSpriteNode(imageNamed: named)
and remove the property from your class (it doesn't seem like you have a need for it in a larger scope).
It also looks like you're creating EvilGuys and storing them in an array, which is good. That way you can remove all of them from the screen with a function like:
func removeAllEvilGuys(evilGuys: [EvilGuy]) {
    for evilGuy in evilGuys {
        evilGuy.eGuy.removeFromParent()
    }
}
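For example, assuming your scene is set up as the SKPhysicsContactDelegate and evilGuys is the stored array, you could call it from the contact callback. This is only a sketch; adjust it to wherever your game-over logic lives:

// Hedged sketch: didBegin(_:) is the SKPhysicsContactDelegate callback that
// fires when the hero touches an evil guy (given the bit masks shown above).
func didBegin(_ contact: SKPhysicsContact) {
    removeAllEvilGuys(evilGuys: evilGuys)
    // ...trigger your game-over logic here.
}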
As best-practice advice, since you mentioned you're a beginner:
I'd recommend defining the characteristics of the evil guys in a .plist and then using that file to create the array of evil guys. This way you can easily make changes to the evil guys in that file without having to change anything in your code (a rough sketch follows below).
The code that creates an EvilGuy object should be separated from the one that adds the evil guy to the screen. As long as you are storing the SKNode of each one, you'll be able to add/remove without unnecessarily recreating the entire object.
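A rough sketch of that .plist idea, purely for illustration; the file name EvilGuys.plist and the keys name/speed/xPos are assumptions, not something from your project:

import UIKit

// Hedged sketch: load evil-guy definitions from a bundled plist whose root is
// an array of dictionaries with "name", "speed" and "xPos" keys (assumed names).
func loadEvilGuyDefinitions() -> [(name: String, speed: Float, xPos: CGFloat)] {
    guard let url = Bundle.main.url(forResource: "EvilGuys", withExtension: "plist"),
          let data = try? Data(contentsOf: url),
          let entries = try? PropertyListSerialization.propertyList(from: data, format: nil) as? [[String: Any]]
    else { return [] }

    return entries.compactMap { entry in
        guard let name = entry["name"] as? String,
              let speed = entry["speed"] as? Double,
              let xPos = entry["xPos"] as? Double
        else { return nil }
        return (name, Float(speed), CGFloat(xPos))
    }
}

Each tuple returned here could then be fed straight into addEvilGuy(named:speed:xPos:).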
I have a simple touch/mouse-click script attached to a GameObject as a sort of "master script", i.e. the GameObject is invisible and does nothing but hold this touch script while the game is running.
How do I tell other named GameObjects that are generated at runtime to do things, e.g. highlight when touched/clicked, from this master script?
The script for highlighting seems to be: renderer.material.color = colorRed;
But I'm not sure how to tell the GameObject clicked on to become highlighted from the Master Script.
Please help! (I'm programming in C#)
Thanks!
Alright, so you'll want to use a raycast if you're not doing it in the GUI. Check out Unity raycasting and then use:
hit.transform.gameObject.renderer.material.color = red;
You can have a switch that is like:
if (hit.transform.gameObject.CompareTag("tag")) {
    // turn to red;
} else {
    // turn to white;
}
Use ScreenPointToRay or ScreenToWorldPoint depending on what you're doing.
For touch, should look like:
void Update () {
    foreach (Touch touch in Input.touches)
    {
        Ray ray = Camera.main.ScreenPointToRay(touch.position);
        RaycastHit hit;
        if (Physics.Raycast(ray, out hit, 1000.0f))
        {
            if (hit.collider.gameObject.CompareTag("tag"))
            {
                hit.collider.gameObject.renderer.material.color = Color.red;
            }
        }
    }

    // You can also add a "go back" feature in the update, but this will "go back" when the touch ends or moves off.
    // Also not so good to search for objects in Update, but that's at your discretion.
    GameObject[] gObjs = GameObject.FindGameObjectsWithTag("tag");
    foreach (GameObject go in gObjs) {
        go.renderer.material.color = Color.white;
    }
}
To answer your question about pinging the 'manager', I would do one of two things.
Either:
// Drop the object containing YourManager into the box in the inspector where it says "manager"
public YourManager manager;

// In Update and in the raycast function (right next to your color change line):
manager.yourCall();
// and
manager.yourString = "cool";
OR
private YourManager manager;

void Awake () {
    manager = GameObject.FindGameObjectWithTag("manager").GetComponent<YourManager>();
}

// In Update and in the raycast function (right next to your color change line):
// In your manager file, have "public bool selected;" at the top so you can access that bool from this file like:
manager.selected = true;
I detail this a little in another one of my answers HERE
For mouse clicks, I would check out the MonoBehaviour functions they have in store, such as:
// This file would be on the game object in the scene

// When the mouse is hovering over the GameObject
void OnMouseEnter () {
    selected = true;
    renderer.material.color = Color.red;
}

// When the mouse moves out
void OnMouseExit () {
    selected = false;
    renderer.material.color = Color.white;
}
// Or you can use the same system as above with:
Input.GetMouseButtonDown(0)
Resolution:
Use a bool in your manager file: true means selected, false means not. Give all the objects you instantiate a tag, and use the raycast from the master file to the game object. When it hits the game object with that tag, swap colors and set the bool from the master file. It's probably better to do it internally from the master file.
(All depends on what you're doing)
If you know what the names of the GameObjects will be at runtime, you can use GameObject.Find("") and store the result in a GameObject variable. You can then set the renderer of that GameObject to whatever you like (assuming a renderer is attached to that GameObject).
The most obvious way of doing this would be to use prefabs and layers or tags.
You can add a tag to your prefab (say "Selectable") or move the prefab to some "Selectable" layer and then write your code around this, knowing that all selectable items are on this layer/have this tag.
Another way of doing this (and in my opinion a better way) is implementing your own custom 'Selectable' component. You would search for this component on a clicked item and then perform the selection if you have found that component. This way is better because you can add additional selection logic to this component, which would otherwise reside in your selection master script (imagine the size of your script after you've added a couple of selectables).
You can do it by implementing a SelectableItem script (the name is arbitrary) and a couple of its derivatives:
public class SelectableItem : MonoBehaviour {
    public virtual void OnSelected() {
        renderer.material.color = Color.red;
    }
}

public class SpecificSelectable : SelectableItem {
    public override void OnSelected() {
        // You can do whatever you want in here
        renderer.material.color = Color.green;
    }
}

// Adding new selectables is easy and doesn't require adding more code to your master script.
public class AnotherSpecificSelectable : SelectableItem {
    public override void OnSelected() {
        renderer.material.color = Color.yellow;
    }
}
And in your master script:
// Somewhere in your master selection script.
// These values are arbitrary and this whole mask thing is optional, but it would improve your performance when you click around a lot.
var selectablesLayer = 8;
var selectablesMask = 1 << selectablesLayer;

// Use layers and layer masks to only raycast against selectable objects
// and avoid unnecessary GetComponent() calls.
if (Physics.Raycast(ray, out hit, 1000.0f, selectablesMask))
{
    var selectable = hit.collider.gameObject.GetComponent<SelectableItem>();
    if (selectable) {
        // This one would turn the item red, green or yellow depending on which type of
        // SelectableItem this is (which is controlled by which component this GameObject has).
        // This is called polymorphic dispatch and is one of the reasons we love object-oriented design so much.
        selectable.OnSelected();
    }
}
So say you have a couple of different selectables and you want them to do different things upon selection. This is the way you would do this. Some of your selectables would have one of the selection components and others would have another one. The logic that performs the selection resides in the Master script while the actions that have to be performed are in those specific scripts that are attached to game objects.
You can go further and add OnUnselect() action in those Selectables:
public class SelectableItem : MonoBehaviour {
    public virtual void OnSelected() {
        renderer.material.color = Color.red;
    }

    public virtual void OnUnselected() {
        renderer.material.color = Color.white;
    }
}
and then even do something like this:
// In your master script:
private SelectableItem currentSelection;

var selectable = hit.collider.gameObject.GetComponent<SelectableItem>();
if (selectable) {
    if (currentSelection) currentSelection.OnUnselected();
    selectable.OnSelected();
    currentSelection = selectable;
}
And we've just added deselection logic.
DISCLAIMER: These are just a bunch of snippets. If you just copy and paste them, they probably won't work right away.