I'm on UE4 4.21.2.
I'm a total noob with UE4. I'm following along with a tutorial where I need to use the GetPlayerViewPoint method on the PlayerController class, but when I try to call that method, I get a compile-time error that says: class "APlayerController" has no member "GetPlayerViewPoint"
Which is weird, because I get autocomplete in Visual Studio for other methods on that class, but not for that particular method. BUT I can see the method in the docs here:
http://api.unrealengine.com/INT/API/Runtime/Engine/GameFramework/APlayerController/index.html
Could it be that my compiler and autocomplete are using a different UE4 version than the docs and the tutorial?
Anyway, here is my class.
// Copyright, 2018
#include "BryceEscapeRoomUe4.h"
#include "Grabber.h"
#include "Runtime/Engine/Classes/GameFramework/Actor.h"
#include "Engine/World.h";
#include "GameFramework/PlayerController.h"
#define OUT
// Sets default values for this component's properties
UGrabber::UGrabber()
{
    // Set this component to be initialized when the game starts, and to be ticked
    // every frame. You can turn these features off to improve performance if you
    // don't need them.
    PrimaryComponentTick.bCanEverTick = true;
}
// Called when the game starts
void UGrabber::BeginPlay()
{
    Super::BeginPlay();
    UE_LOG(LogTemp, Warning, TEXT("Grabber reporting for duty!"));
}
// Called every frame
void UGrabber::TickComponent(float DeltaTime, ELevelTick TickType, FActorComponentTickFunction* ThisTickFunction)
{
    Super::TickComponent(DeltaTime, TickType, ThisTickFunction);

    // get player view point this tick
    FVector PlayerVeiwPointLocation;
    FRotator PlayerVeiwPointRotaion;
    GetWorld()->GetFirstPlayerController()->GetPlayerVeiwPoint(
        OUT PlayerVeiwPointLocation,
        OUT PlayerVeiwPointRotaion
    );

    // log out to test
    // ray cast out to reach distance
    // see what we hit
}
This is caused by the fact that you have spelled GetPlayerViewPoint as GetPlayerVeiwPoint, which isn't a method that exists in UE4. Fixing the spelling should fix the problem, although as this is a relatively old question I'm sure you figured that out a while ago!
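For reference, here's what that call looks like with the spelling fixed (I've corrected the variable names too, though those were only cosmetic):

    // Corrected spelling: GetPlayerViewPoint (and the matching variable names)
    FVector PlayerViewPointLocation;
    FRotator PlayerViewPointRotation;
    GetWorld()->GetFirstPlayerController()->GetPlayerViewPoint(
        OUT PlayerViewPointLocation,
        OUT PlayerViewPointRotation
    );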
Even the official documentation has borderline insane recommendations for what is probably one of the most common UI/3D interaction issues:
If I click while the cursor is over a UI button, both the button (via the graphics raycaster) and the 3D world (via the physics raycaster) will receive the event.
The official manual,
https://docs.unity3d.com/Packages/com.unity.inputsystem#1.2/manual/UISupport.html#handling-ambiguities-for-pointer-type-input, essentially says "how about you design your game so you don't need 3D and UI at the same time?".
I cannot believe this is not a solved problem, but everything I've tried has failed. EventSystem.current.currentSelectedGameObject is sticky, not hover. PointerData is protected and thus not accessible (one person offered a workaround, deriving your own class from StandaloneInputModule to get around that, but that workaround apparently doesn't work anymore). The old IsPointerOverGameObject throws a warning if you query it in a callback and is always true if you query it in Update().
That's all just mental. Please someone tell me there's a simple, obvious solution to this common, trivial problem that I'm just missing. The graphics raycaster certainly stores somewhere whether it's over a UI element, right? Please?
I've looked into this a fair bit, and in the end the easiest solution seems to be to do what the manual says and put the check in Update():
// Requires: using UnityEngine.EventSystems;
bool pointerOverUI = false;

void Update()
{
    pointerOverUI = EventSystem.current.IsPointerOverGameObject();
}
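With the flag cached like that, you can safely consult it from an input callback without the warning. A minimal sketch (the OnFire handler and its action binding are hypothetical names, not from the original code; requires using UnityEngine.InputSystem):

    // Hypothetical handler bound to a "Fire" action; pointerOverUI is the field cached above.
    public void OnFire(InputAction.CallbackContext context)
    {
        if (!context.performed || pointerOverUI)
            return; // ignore clicks that started over UI

        // ...handle the 3D world click here...
    }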
Your frustration is well founded: I've found NO examples of making UI work with the new Input System. I can share a more robust version of the GraphicRaycaster workaround, from YouTube:
using System.Collections;
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.EventSystems;
using UnityEngine.InputSystem;
using UnityEngine.UI;

/* Danndx 2021 (youtube.com/danndx)
   From video: youtu.be/7h1cnGggY2M
   thanks - delete me! :) */

public class SCR_UiInteraction : MonoBehaviour
{
    public GameObject ui_canvas;

    GraphicRaycaster ui_raycaster;
    PointerEventData click_data;
    List<RaycastResult> click_results;

    void Start()
    {
        ui_raycaster = ui_canvas.GetComponent<GraphicRaycaster>();
        click_data = new PointerEventData(EventSystem.current);
        click_results = new List<RaycastResult>();
    }

    void Update()
    {
        // use isPressed if you wish to ray cast every frame:
        //if(Mouse.current.leftButton.isPressed)

        // use wasReleasedThisFrame if you wish to ray cast just once per click:
        if(Mouse.current.leftButton.wasReleasedThisFrame)
        {
            GetUiElementsClicked();
        }
    }

    void GetUiElementsClicked()
    {
        /** Get all the UI elements clicked, using the current mouse position and raycasting. **/
        click_data.position = Mouse.current.position.ReadValue();
        click_results.Clear();
        ui_raycaster.Raycast(click_data, click_results);

        foreach(RaycastResult result in click_results)
        {
            GameObject ui_element = result.gameObject;
            Debug.Log(ui_element.name);
        }
    }
}
So, just drop this into my "Menusscript.cs"?
But as a pattern, this is terrible for separating UI concerns. I'm currently rewiring EVERY separately-concerned PointerEventData click I already had working, and my question is: why? I can't even find how it's supposed to work; to your point, there IS no official guide at all around clicking UI, and it does NOT just drop on top.
Anyway, I haven't found anything yet that makes the new input system work easily with UI, and I definitely haven't found how to sensibly separate menu clicks from activity clicks while keeping the game & UI assemblies separate.
Good luck to us all.
Unity documentation for this issue with regard to Unity.InputSystem can be found at https://docs.unity3d.com/Packages/com.unity.inputsystem#1.3/manual/UISupport.html#handling-ambiguities-for-pointer-type-input.
Note that IsPointerOverGameObject() can always return true if the extent of your canvas covers the camera's entire field of view; disabling Raycast Target on elements that shouldn't block clicks (such as a full-screen background image) avoids that.
For clarity, here is the solution which I found worked best (accumulated from several other posts across the web).
Attach this script to your UI Canvas object:
public class CanvasHitDetector : MonoBehaviour {
private GraphicRaycaster _graphicRaycaster;
private void Start()
{
// This instance is needed to compare between UI interactions and
// game interactions with the mouse.
_graphicRaycaster = GetComponent<GraphicRaycaster>();
}
public bool IsPointerOverUI()
{
// Obtain the current mouse position.
var mousePosition = Mouse.current.position.ReadValue();
// Create a pointer event data structure with the current mouse position.
var pointerEventData = new PointerEventData(EventSystem.current);
pointerEventData.position = mousePosition;
// Use the GraphicRaycaster instance to determine how many UI items
// the pointer event hits. If this value is greater-than zero, skip
// further processing.
var results = new List<RaycastResult>();
_graphicRaycaster.Raycast(pointerEventData, results);
return results.Count > 0;
}
}
In the class containing the method which handles the mouse clicks, obtain a reference to the Canvas UI (either using GameObject.Find() or a publicly exposed variable) and call IsPointerOverUI() to filter out clicks that land on the UI.
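For example (a minimal sketch; canvasHitDetector is a hypothetical field you'd assign to the Canvas object's CanvasHitDetector):

    // Hypothetical wiring: canvasHitDetector assigned in the Inspector, or via
    // GameObject.Find("Canvas").GetComponent<CanvasHitDetector>() in Start().
    void Update()
    {
        if (Mouse.current.leftButton.wasPressedThisFrame && !canvasHitDetector.IsPointerOverUI())
        {
            // Handle the 3D world click; the pointer is not over any UI element.
        }
    }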
Reply to @Milad Qasemi's answer
From the docs you attached in your answer, I have tried the following to check whether the user clicked on a UI element or not:
// gets called in the Update method
if (Input.GetMouseButton(0)) {
    int layerMask = 1 << 5;
    // raycast in the UI layer
    RaycastHit2D hit = Physics2D.Raycast(Camera.main.ScreenToWorldPoint(Input.mousePosition), Vector2.zero, Mathf.Infinity, layerMask);

    // if the ray hit any UI element, return
    // and don't handle player movement
    if (hit.collider) { return; }

    Debug.Log("Touched not on UI");
    playerController.HandlePlayerMovement(x);
}
The raycast doesn't seem to detect collisions with UI elements. Below is a picture of the Graphic Raycaster component on the Canvas:
Reply to @Lowelltech
Your solution worked for me, except that instead of Mouse I used Touchscreen:
// Obtain the current touch position.
var pointerPosition = Touchscreen.current.position.ReadValue();
An InputSystem is a way to receive new inputs provided by Unity. You can't use existing scripts with it, and you'll run into problems like the original questioner's. Answers with code like if (Input.GetMouseButton(0)) are invalid because they use the old system.
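For reference, the old polling calls next to their new Input System equivalents (requires using UnityEngine.InputSystem):

    // Old input manager                // New Input System (UnityEngine.InputSystem)
    // Input.GetMouseButton(0)          Mouse.current.leftButton.isPressed
    // Input.GetMouseButtonDown(0)      Mouse.current.leftButton.wasPressedThisFrame
    // Input.GetMouseButtonUp(0)        Mouse.current.leftButton.wasReleasedThisFrame
    // Input.mousePosition              Mouse.current.position.ReadValue()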
I recently tried to develop a Flutter plugin with CameraX, but I found there was no way to simply bind the Preview to Flutter's Texture.
In the past, I only needed to call camera.setPreviewTexture(surfaceTexture.surfaceTexture()) to bind the camera and the texture; now I can't find that API.
val previewConfig = PreviewConfig.Builder().apply {
    setTargetAspectRatio(Rational(1, 1))
    setTargetResolution(Size(640, 640))
}.build()

// Build the viewfinder use case
val preview = Preview(previewConfig)

preview.setOnPreviewOutputUpdateListener {
    // it.surfaceTexture = this.surfaceTexture.surfaceTexture()
}
// how to bind the CameraX Preview surfaceTexture and flutter surfaceTexture?
I think you can bind the texture via Preview.SurfaceProvider:
final CameraSelector cameraSelector =
        new CameraSelector.Builder().requireLensFacing(CameraSelector.LENS_FACING_BACK).build();
final ListenableFuture<ProcessCameraProvider> listenableFuture =
        ProcessCameraProvider.getInstance(appCompatActivity.getBaseContext());

listenableFuture.addListener(() -> {
    try {
        ProcessCameraProvider cameraProvider = listenableFuture.get();
        Preview preview = new Preview.Builder()
                .setTargetResolution(new Size(720, 1280))
                .build();
        cameraProvider.unbindAll();
        Camera camera = cameraProvider.bindToLifecycle(appCompatActivity, cameraSelector, preview);

        Preview.SurfaceProvider surfaceProvider = request -> {
            Size resolution = request.getResolution();
            // surfaceTexture is the SurfaceTexture obtained from Flutter's TextureRegistry
            surfaceTexture.setDefaultBufferSize(resolution.getWidth(), resolution.getHeight());
            Surface surface = new Surface(surfaceTexture);
            request.provideSurface(surface, ContextCompat.getMainExecutor(appCompatActivity.getBaseContext()), result -> {
            });
        };
        preview.setSurfaceProvider(surfaceProvider);
    } catch (Exception e) {
        e.printStackTrace();
    }
}, ContextCompat.getMainExecutor(appCompatActivity.getBaseContext()));
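For the Flutter side of that binding, the surfaceTexture used above would come from Flutter's TextureRegistry. A hedged sketch of how a plugin might obtain it (textureRegistry is assumed to come from FlutterPluginBinding#getTextureRegistry()):

    // Sketch: obtain a SurfaceTexture from Flutter and get the id for the Dart side.
    TextureRegistry.SurfaceTextureEntry entry = textureRegistry.createSurfaceTexture();
    SurfaceTexture surfaceTexture = entry.surfaceTexture(); // pass this to the code above
    long textureId = entry.id(); // send to Dart; render with a Texture(textureId: ...) widget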
Update: since this answer was written, CameraX has added functionality that now allows this, but the below might still be useful to someone. See this answer for details.
It seems as though using CameraX is difficult to impossible, because it abstracts the more complicated things away and so doesn't expose things you need, like being able to pass in your own SurfaceTexture (which is normally created by Flutter).
So the simple answer is that you can't use CameraX.
That being said, with some work you may be able to get this to work, but I have no idea if it will work for sure. It's ugly and hacky so I wouldn't recommend it. YMMV.
If we're going to do this, let's first look at how the Flutter view creates a texture:
@Override
public TextureRegistry.SurfaceTextureEntry createSurfaceTexture() {
    final SurfaceTexture surfaceTexture = new SurfaceTexture(0);
    surfaceTexture.detachFromGLContext();
    final SurfaceTextureRegistryEntry entry = new SurfaceTextureRegistryEntry(nextTextureId.getAndIncrement(),
            surfaceTexture);
    mNativeView.getFlutterJNI().registerTexture(entry.id(), surfaceTexture);
    return entry;
}
Most of that is replicable, so we may be able to do it with the surface texture the camera gives us.
You can get ahold of the texture the camera creates this way:
preview.setOnPreviewOutputUpdateListener { previewOutput ->
    val texture = previewOutput.surfaceTexture
}
What you're going to have to do now is to pass a reference to your FlutterView into your plugin (I'll leave that for you to figure out). Then call flutterView.getFlutterNativeView() to get ahold of the FlutterNativeView.
Unfortunately, FlutterNativeView's getFlutterJNI() is package-private. So this is where it gets really hacky: you can create a class in that same package that calls that package-private method from a publicly accessible method. It's super ugly, and you may have to fiddle around with Gradle to get the compilation security settings to allow it, but it should be possible.
After that, it should be simple enough to create a SurfaceTextureRegistryEntry and register the texture with the Flutter JNI. I don't think you want to detach from the OpenGL context, and I really have no idea if this will actually work. But if you try it out and report back what you find, I'd be interested in hearing the result!
I am making a game using Unity, and I have run into a problem I'd like to ask about.
The Animator of an object instantiated from a prefab does not work properly; one specific event in particular is the problem. The object placed directly in the hierarchy is fine, but certain events do not work on objects instantiated from a script.
This is the code:
public Animator guestmove;

public void Jump_motion()
{
    if (tag == "Boy")
    {
        guestmove.SetTrigger("Jump");
    }
}

public void Angry_motion()
{
    guestmove.SetTrigger("Angry");
}
These events are invoked by pressing a button.
I changed the code to run only when the tags match, but now the objects I placed in the hierarchy do not work either.
This is the code that creates the instances:
if (currentlyObject > 0)
{
    boyObject = Instantiate(boy, tableObject.transform.position, tableObject.transform.rotation);
    boyObject.transform.Translate(new Vector3(0, -3, -11));
    girlObject = Instantiate(girl, tableObject.transform.position, tableObject.transform.rotation);
    girlObject.transform.Translate(new Vector3(1.5f, -3, -11));
}
I kept searching for relevant information, but I could not find any similar cases. I do not know where the problem is; please help me.
Here is a link:
(https://drive.google.com/file/d/1SKbSIfFQM4-n8l-ZBBvZb_3SuNx-kd5-/view?usp=sharing)
Use transitions from Any State to your triggers; that should work.
Also, where are your functions called from?
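One more thing worth checking (a guess, since the button wiring isn't shown): if guestmove still points at the prefab asset's Animator rather than the spawned instance, the triggers will appear to do nothing. A sketch of wiring it to the instance after Instantiate():

    // Hypothetical sketch: point 'guestmove' at the instantiated object's Animator,
    // not at the prefab asset, so SetTrigger affects the object in the scene.
    boyObject = Instantiate(boy, tableObject.transform.position, tableObject.transform.rotation);
    guestmove = boyObject.GetComponent<Animator>();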
I am working on a web browser for fun and ran into this problem.
This works:
#include <webkit/webkit.h>

void LoadChangedProxy(GtkWidget *view, GParamSpec *pspec, gpointer p);

void init() {
    GtkWidget *web_view = webkit_web_view_new();
    g_signal_connect(
        G_OBJECT(web_view),  /* was this->web_view; a plain local in this reduced example */
        "notify::progress",
        G_CALLBACK(LoadChangedProxy),
        NULL);
    webkit_web_view_load_uri(WEBKIT_WEB_VIEW(web_view), "http://google.com");
}

void LoadChangedProxy(GtkWidget *view, GParamSpec *pspec, gpointer p) {
    puts("LOADING");
}
In this case the callback is never called:
#include <webkit2/webkit2.h>

void LoadChangedProxy(GtkWidget *view, GParamSpec *pspec, gpointer p);

void init() {
    GtkWidget *web_view = webkit_web_view_new();
    g_signal_connect(
        G_OBJECT(web_view),
        "notify::estimated-load-progress",
        G_CALLBACK(LoadChangedProxy),
        NULL);
    webkit_web_view_load_uri(WEBKIT_WEB_VIEW(web_view), "http://google.com");
}

void LoadChangedProxy(GtkWidget *view, GParamSpec *pspec, gpointer p) {
    puts("LOADING");
}
I was attempting to use webkitgtk2 initially and was really hitting my head against the wall. I switched to the older webkitgtk1 header and API and it magically started working. I have no idea what would cause this; additionally, no errors are printed to stderr or stdout (e.g. for attempting to connect to a signal that an object does not have).
Any suggestions out there? There is surprisingly little documentation on g_signal_connect from GLib. All I know has come from looking at some GNOME app source code.
Edit:
I have found that when using the "notify::progress" signal identifier in the webkitgtk2 case, the callback works. However, I then cannot use either webkit_web_view_get_progress() or webkit_web_view_get_estimated_load_progress() to read the progress value to display it.
The symptoms are strange enough that I can only think of one explanation: you are still linking with webkit-gtk. With trivial code like this you didn't happen to hit linking problems but of course the new signals wouldn't be there either.
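A quick way to check is to make sure both the compile flags and the link flags come from webkit2gtk. A sketch (the exact pkg-config module name varies by distro and version; webkit2gtk-4.0 is common):

    # Build against webkit2gtk, not the old webkit-1.0 module:
    gcc -o browser main.c $(pkg-config --cflags --libs webkit2gtk-4.0)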
I'm new to game development. Following the Get Started training course (https://docs.unrealengine.com/latest/INT/Programming/QuickStart/7/index.html), I created a class AMyActorTest extending AActor:
#include "TestUProject.h"
#include "MyActorTest.h"
AMyActorTest::AMyActorTest(const class FPostConstructInitializeProperties& PCIP)
: Super(PCIP)
{
MyNumber = 12;
}
void AMyActorTest::BeginPlay()
{
Super::BeginPlay();
if (GEngine)
{
GEngine->AddOnScreenDebugMessage(-1, 5.f, FColor::Yellow, TEXT("Hello World!"));
GEngine->AddOnScreenDebugMessage(-1, 5.f, FColor::Yellow, FString::FromInt(MyNumber));
}
}
My problem is that I cannot move the actor in the editor after placing it in the viewport. I read that I am missing a RootComponent for my actor, but I do not understand how to add one (maybe I do not fully understand actors). Could you look at my source code and help me solve the problem? This is all part of the training course.
My goal: to add an actor and be able to move and rotate it.
Add this code to your constructor:
RootComponent = PCIP.CreateDefaultSubobject<USceneComponent>(this, TEXT("Root"));
That's all. If you'd like to add other components, you can use similar code (this example creates a UInstancedStaticMeshComponent):
UInstancedStaticMeshComponent* instancedComp = PCIP.CreateDefaultSubobject<UInstancedStaticMeshComponent>(RootComponent, TEXT("SubMeshInstanced"));
instancedComp->AttachTo(RootComponent); // this is important!
// this part is specific to this component
// (although all are common to other types of your Root subitems)
instancedComp->SetStaticMesh(mesh);
instancedComp->SetMaterial(0, material);
instancedComp->bOwnerNoSee = false;
instancedComp->bCastDynamicShadow = false;
instancedComp->CastShadow = false;
instancedComp->SetHiddenInGame(false);
instancedComp->SetMobility(EComponentMobility::Static);
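As a side note, on newer UE4 versions (4.6+) FPostConstructInitializeProperties was renamed, so if the snippets above don't compile, a sketch of the equivalent constructor using FObjectInitializer:

    // Equivalent on newer UE4 (FPostConstructInitializeProperties became FObjectInitializer):
    AMyActorTest::AMyActorTest(const FObjectInitializer& ObjectInitializer)
        : Super(ObjectInitializer)
    {
        MyNumber = 12;
        RootComponent = ObjectInitializer.CreateDefaultSubobject<USceneComponent>(this, TEXT("Root"));
    }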