Using nDisplay with Vive Trackers in UE4 - unreal-engine4

I have a question for anyone who's been using nDisplay with Vive trackers.
I set things up following Ben Kidd's videos on YouTube, using the nDisplay new project template, and created this blueprint by making a blueprint subclass of DisplayClusterRootActor: https://blueprintue.com/blueprint/fgg1zcub/
I put GetVRPNTrackerLocation not being available as a function down to being on a different UE version (4.26).
In VRPN, I'm getting the following data from the controller (not using a tracker atm):
Tracker openvr/controller/LHR-F7FD3B46@192.168.0.41:3884, sensor 0:
pos (-0.08, 0.78, -0.36); quat ( 0.20, 0.07, -0.15, 0.96)
Tracker openvr/controller/LHR-F7FD3B46@192.168.0.41:3884, sensor 0:
pos (-0.08, 0.78, -0.36); quat ( 0.20, 0.07, -0.16, 0.96)
...
That data is coming through in my Print String, so I know it's passing from the controller -> VRPN -> UE4 / nDisplay, and it looks similar to Ben's (values roughly from -2 to 2).
Lastly, in my nDisplay cfg I have (alongside my monitor setup):
...
[camera] id="camera_static" loc="X=0,Y=0,Z=0" parent="eye_level" eye_swap="false" eye_dist="0.064" force_offset="0" tracker_id="ViveVRPN" tracker_ch="0"
[input] id="ViveVRPN" type="tracker" addr="openvr/controller/LHR-F7FD3B46@192.168.0.41:3884" loc="X=0,Y=0,Z=0" rot="P=0,Y=0,R=0" front="-Z" right="X" up="Y"
...
However, the resulting camera movement is really tiny and not representative of the actual movement of the controller.

Finally, it's not only me with this issue.
The problem seems to be that GetVRPNTrackerLocation only takes the first axis mapping into account: it assigns it to the front axis and sets the remaining axes to positive X.
As for where exactly the underlying problem happens, I have no idea, but since I needed a quick fix I just hardcoded the values into the engine code and didn't look for a more permanent fix.
So, in case you want to apply this workaround, here's what I did:
Follow these steps to obtain a source build of Unreal 4.26 (I would recommend the 4.26 branch):
https://docs.unrealengine.com/en-US/ProductionPipelines/DevelopmentSetup/BuildingUnrealEngine/index.html
Find {UnrealSourceFolder}\Engine\Plugins\Runtime\nDisplay\Source\DisplayCluster\Private\Input\Devices\VRPN\Tracker\DisplayClusterVrpnTrackerInputDevice.cpp
Modify these two methods:

FVector FDisplayClusterVrpnTrackerInputDevice::GetMappedLocation(const FVector& Loc, const AxisMapType Front, const AxisMapType Right, const AxisMapType Up) const
{
    static TLocGetter funcs[] = { &LocGetX, &LocGetNX, &LocGetY, &LocGetNY, &LocGetZ, &LocGetNZ };
    //return FVector(funcs[Front](Loc), funcs[Right](Loc), funcs[Up](Loc));
    return FVector(funcs[AxisMapType::NZ](Loc), funcs[AxisMapType::X](Loc), funcs[AxisMapType::Y](Loc));
}

FQuat FDisplayClusterVrpnTrackerInputDevice::GetMappedQuat(const FQuat& Quat, const AxisMapType Front, const AxisMapType Right, const AxisMapType Up, const AxisMapType InAxisW) const
{
    static TRotGetter funcs[] = { &RotGetX, &RotGetNX, &RotGetY, &RotGetNY, &RotGetZ, &RotGetNZ, &RotGetW, &RotGetNW };
    //return FQuat(funcs[Front](Quat), funcs[Right](Quat), funcs[Up](Quat), -Quat.W);// funcs[axisW](quat));
    return FQuat(funcs[AxisMapType::NZ](Quat), funcs[AxisMapType::X](Quat), funcs[AxisMapType::Y](Quat), -Quat.W);// funcs[axisW](quat));
}

So I just upgraded from a working 4.25 version to 4.26 and then came across the same issue. I then built the engine and debugged my way through to find the cause and a possible solution for this problem.
It seems to be a problem with the .cfg text files: the axes get parsed incorrectly.
An easy solution is to import the .cfg file into the editor so that it gets converted to the new .json file format. There you can see the wrongly assigned axes on the tracker and can correct them in the new file. Afterwards, just use the new .json file for your nDisplay configuration and it should work correctly.

Related

Unity3D New Input System: Is it really so hard to stop UI clickthroughs (or figure out if cursor is over a UI object)?

Even the official documentation has borderline insane recommendations to solve what is probably one of the most common UI/3D interaction issues:
If I click while the cursor is over a UI button, both the button (via the graphics raycaster) and the 3D world (via the physics raycaster) will receive the event.
The official manual:
https://docs.unity3d.com/Packages/com.unity.inputsystem@1.2/manual/UISupport.html#handling-ambiguities-for-pointer-type-input essentially says "how about you design your game so you don't need 3D and UI at the same time?".
I cannot believe this is not a solved problem. But everything I've tried failed. EventSystem.current.currentSelectedGameObject is sticky, not hover. PointerData is protected and thus not accessible (and one guy offered a workaround via deriving your own class from Standalone Input Module to get around that, but that workaround apparently doesn't work anymore). The old IsPointerOverGameObject throws a warning if you query it in the callback and is always true if you query it in Update().
That's all just mental. Please someone tell me there's a simple, obvious solution to this common, trivial problem that I'm just missing. The graphics raycaster certainly stores somewhere if it's over a UI element, right? Please?
I've looked into this a fair bit and in the end, the easiest solution seems to be to do what the manual says and put it in the Update function.
bool pointerOverUI = false;

void Update()
{
    pointerOverUI = EventSystem.current.IsPointerOverGameObject();
}
Your frustration is well-founded: there are NO examples I've found of making UI work with the new Input System. I can share a more robust version of the Raycaster workaround, from YouTube:
using System.Collections;
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.EventSystems;
using UnityEngine.InputSystem;
using UnityEngine.UI;
/* Danndx 2021 (youtube.com/danndx)
From video: youtu.be/7h1cnGggY2M
thanks - delete me! :) */
public class SCR_UiInteraction : MonoBehaviour
{
    public GameObject ui_canvas;

    GraphicRaycaster ui_raycaster;
    PointerEventData click_data;
    List<RaycastResult> click_results;

    void Start()
    {
        ui_raycaster = ui_canvas.GetComponent<GraphicRaycaster>();
        click_data = new PointerEventData(EventSystem.current);
        click_results = new List<RaycastResult>();
    }

    void Update()
    {
        // use isPressed if you wish to ray cast every frame:
        //if(Mouse.current.leftButton.isPressed)
        // use wasReleasedThisFrame if you wish to ray cast just once per click:
        if(Mouse.current.leftButton.wasReleasedThisFrame)
        {
            GetUiElementsClicked();
        }
    }

    void GetUiElementsClicked()
    {
        /** Get all the UI elements clicked, using the current mouse position and raycasting. **/
        click_data.position = Mouse.current.position.ReadValue();
        click_results.Clear();
        ui_raycaster.Raycast(click_data, click_results);

        foreach(RaycastResult result in click_results)
        {
            GameObject ui_element = result.gameObject;
            Debug.Log(ui_element.name);
        }
    }
}
So, just drop into my "Menusscript.cs"?
But as a pattern, this is terrible for separating UI concerns. I'm currently rewiring EVERY separately-concerned PointerEventData click I had already working, and my question is, Why? I can't even find how it's supposed to work: to your point there IS no official guide at all around clicking UI, and it does NOT just drop-on-top.
Anyway, I haven't found anything yet which makes new input work easily on UI, and definitely not found how I'm going to sensibly separate Menuclicks from Activityclicks while keeping game & ui assemblies separate.
Good luck to us all.
Unity documentation for this issue with regard to Unity.InputSystem can be found at https://docs.unity3d.com/Packages/com.unity.inputsystem@1.3/manual/UISupport.html#handling-ambiguities-for-pointer-type-input.
IsPointerOverGameObject() can always return true if the extent of your canvas covers the camera's entire field of view.
For clarity, here is the solution which I found worked best (accumulated from several other posts across the web).
Attach this script to your UI Canvas object:
public class CanvasHitDetector : MonoBehaviour {

    private GraphicRaycaster _graphicRaycaster;

    private void Start()
    {
        // This instance is needed to compare between UI interactions and
        // game interactions with the mouse.
        _graphicRaycaster = GetComponent<GraphicRaycaster>();
    }

    public bool IsPointerOverUI()
    {
        // Obtain the current mouse position.
        var mousePosition = Mouse.current.position.ReadValue();

        // Create a pointer event data structure with the current mouse position.
        var pointerEventData = new PointerEventData(EventSystem.current);
        pointerEventData.position = mousePosition;

        // Use the GraphicRaycaster instance to determine how many UI items
        // the pointer event hits. If this value is greater-than zero, skip
        // further processing.
        var results = new List<RaycastResult>();
        _graphicRaycaster.Raycast(pointerEventData, results);
        return results.Count > 0;
    }
}
In the class containing the method that handles the mouse clicks, obtain a reference to the UI Canvas either using GameObject.Find() or a publicly exposed variable, and call IsPointerOverUI() to filter out clicks when the pointer is over the UI.
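As a rough usage sketch (the class and field names here are illustrative placeholders, not code from the original answer), the click-handling side might look like this with the new Input System:

using UnityEngine;
using UnityEngine.InputSystem;

public class WorldClickHandler : MonoBehaviour
{
    // Assign the Canvas object that has CanvasHitDetector attached, either in the
    // Inspector or by locating it at startup (e.g. with GameObject.Find()).
    [SerializeField] private CanvasHitDetector canvasHitDetector;

    void Update()
    {
        if (Mouse.current == null || !Mouse.current.leftButton.wasPressedThisFrame)
            return;

        // Skip world interaction when the click landed on a UI element.
        if (canvasHitDetector.IsPointerOverUI())
            return;

        Debug.Log("Clicked the 3D world, not the UI.");
    }
}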
Reply to @Milad Qasemi's answer
From the docs you have attached in your answer, I have tried the following to check if the user clicked on a UI element or not.
// gets called in the Update method
if (Input.GetMouseButton(0))
{
    int layerMask = 1 << 5;
    // raycast in the UI layer
    RaycastHit2D hit = Physics2D.Raycast(Camera.main.ScreenToWorldPoint(Input.mousePosition), Vector2.zero, Mathf.Infinity, layerMask);
    // if the ray hit any UI element, return
    // don't handle player movement
    if (hit.collider) { return; }
    Debug.Log("Touched not on UI");
    playerController.HandlePlayerMovement(x);
}
The raycast doesn't seem to detect collisions on UI elements. (A screenshot of the Canvas's Graphic Raycaster component was attached here.)
Reply to @Lowelltech
Your solution worked for me except that instead of Mouse I used Touchscreen
// Obtain the current touch position.
var pointerPosition = Touchscreen.current.position.ReadValue();
The Input System is Unity's new way of receiving input. You can't use existing old-input scripts with it, and you'll run into problems like the original questioner did. Answers with code like "if (Input.GetMouseButton(0))" are invalid because they use the old system.
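For reference, here is a minimal sketch of the same kind of check written against the new Input System (this assumes the com.unity.inputsystem package is installed; it is not the original poster's code):

using UnityEngine;
using UnityEngine.EventSystems;
using UnityEngine.InputSystem;

public class NewInputClickCheck : MonoBehaviour
{
    void Update()
    {
        // Mouse.current.leftButton replaces the old Input.GetMouseButton(0) polling.
        if (Mouse.current == null || !Mouse.current.leftButton.wasPressedThisFrame)
            return;

        // Queried from Update(), this reports whether the pointer is currently over UI.
        if (EventSystem.current.IsPointerOverGameObject())
            return;

        Debug.Log("Touched not on UI");
    }
}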

Unity - Set GUI.Box background color

I'm trying to set the background color of a GUI.Box:
void OnGUI()
{
    string LatLong;
    LatLong = map.calc.prettyCurrentLatLon;

    var mousePosition = Input.mousePosition;
    float x = mousePosition.x + 10;
    float y = Screen.height - mousePosition.y + 10;

    GUI.backgroundColor = Color.red;
    GUI.Box(new Rect(x, y, 200, 200), LatLong);
}
However, the box is showing in a semi-transparent black, and the white text is subdued, not opaque white.
You have to use a GUIStyle:
private GUIStyle currentStyle = null;

void OnGUI()
{
    InitStyles();
    GUI.Box( new Rect( 0, 0, 100, 100 ), "Hello", currentStyle );
}

private void InitStyles()
{
    if( currentStyle == null )
    {
        currentStyle = new GUIStyle( GUI.skin.box );
        currentStyle.normal.background = MakeTex( 2, 2, new Color( 0f, 1f, 0f, 0.5f ) );
    }
}

private Texture2D MakeTex( int width, int height, Color col )
{
    Color[] pix = new Color[width * height];
    for( int i = 0; i < pix.Length; ++i )
    {
        pix[ i ] = col;
    }
    Texture2D result = new Texture2D( width, height );
    result.SetPixels( pix );
    result.Apply();
    return result;
}
Taken from unity forum.
I'm gonna slide in with a more elegant solution here before this question gets old. I saw Thomas's answer and started to wonder if there is a way to do that without having to call "InitStyles" in the OnGUI loop, since ideally you only want to init the style once, in Awake or Start or wherever, and then never have to check whether it's null again.
Anyway, after some trial and error, I came up with this.
private void Awake() {
    // this variable is stored in the class
    // 1 pixel image, only 1 color to set
    consoleBackground = new Texture2D(1, 1, TextureFormat.RGBAFloat, false);
    consoleBackground.SetPixel(0, 0, new Color(1, 1, 1, 0.25f));
    consoleBackground.Apply(); // not sure if this is necessary

    // basically just create a copy of the "none style"
    // and then change the properties as desired
    debugStyle = new GUIStyle(GUIStyle.none);
    debugStyle.fontSize = 24;
    debugStyle.normal.textColor = Color.white;
    debugStyle.normal.background = consoleBackground;
}
REVISION - 17 July 2022 - GUI Style Creation and Storage
Prelude
Style creation through the methods provided by others is certainly a functional way of giving your custom editors a unique look. However, those approaches have some fundamental issues I should point out, which my method doesn't outright correct, only alleviates. This method still needs to be expanded upon and is still a partly experimental progression from a personal plugin.
Creating styles on every OnGUI call adds unnecessary extra work for your editor window. This doesn't scale well past a handful (~4) of styles.
By creating styles every time OnGUI is called, you are also creating the background-colour textures repeatedly (not good). Over prolonged use of this method, memory leaks can occur, although that is unlikely.
What does my method do differently?
Creates GUIStyle and Texture2D files. GUIStyles are saved as .JSON files, which works well for [JSON <-> GUIStyle] conversion and storage.
Texture2Ds are encoded from raw data to PNG format through UnityEngine.
Checks whether a style is null before fetching or recreating any missing styles.
Keeps track of all styles through a Style Manifest (struct) that stores the names of all textures and styles, which are loaded iteratively on fetch.
Only creates styles if they are missing; it does not spend resources recreating pre-existing styles or textures.
GUIStyles (as JSON) and Texture2D files are stored in a Resources folder within the Plugin folder.
It should be noted that my style methods were written with full awareness that GUISkins exist; GUISkins are simply not suitable for my UI/UX needs.
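To make the idea concrete, here is a minimal, hedged sketch of the save/load half of such a system. The StyleLibrary name, the folder path and the exact split between JSON and PNG files are illustrative assumptions, not the author's actual plugin code:

using System.IO;
using UnityEngine;

// Illustrative sketch: persists one GUIStyle as JSON plus its background texture as a PNG.
public static class StyleLibrary
{
    static readonly string Root = Path.Combine(Application.dataPath, "MyPlugin/Resources/Styles");

    public static void Save(string name, GUIStyle style)
    {
        Directory.CreateDirectory(Root);

        // Store the background texture separately; texture references do not survive JSON serialization.
        Texture2D background = style.normal.background;
        if (background != null)
            File.WriteAllBytes(Path.Combine(Root, name + ".png"), background.EncodeToPNG());

        // JsonUtility handles the plain serializable fields of GUIStyle (font size, colours, padding, ...).
        File.WriteAllText(Path.Combine(Root, name + ".json"), JsonUtility.ToJson(style, true));
    }

    public static GUIStyle Load(string name)
    {
        string jsonPath = Path.Combine(Root, name + ".json");
        if (!File.Exists(jsonPath))
            return null; // the caller decides whether to recreate the style

        var style = new GUIStyle();
        JsonUtility.FromJsonOverwrite(File.ReadAllText(jsonPath), style);

        // Re-link the background texture if a PNG was stored alongside the JSON.
        string pngPath = Path.Combine(Root, name + ".png");
        if (File.Exists(pngPath))
        {
            var tex = new Texture2D(2, 2);
            tex.LoadImage(File.ReadAllBytes(pngPath));
            style.normal.background = tex;
        }
        return style;
    }
}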
How is this done?
(Plugin style handling diagram)
I separate style loading into its own namespace for style handling, which contains the functions as well as public variables for global plugin access. This namespace creates, loads, and can send back styles on request from other scripts.
When the plugin is opened, a function is called to load the style manifest; subsequently all styles and textures are loaded and relinked for use.
If the style manifest is missing, it is recreated along with all GUIStyle files. If the manifest is present but a style is missing, that style is recreated and the manifest is modified to reference the new style.
Textures are handled separately from GUIStyle loading and are collected in a separate array. They are independently checked to see whether they still exist, and missing textures are recreated and encoded from raw data to PNG format, with the style manifest being modified when necessary.
Instead of repeatedly creating all styles or loading them each frame, the plugin requests the first style from memory and checks whether the result is null. If the first style comes back null, the plugin assumes all styles have been dereferenced and reloads or recreates the relevant GUIStyle files (this can happen when the engine enters/exits play mode, and the reload is necessary to preserve UI/UX legibility).
If the style comes back as a valid reference, my plugins use it directly, but this is risky: it's a good idea to also check at least one texture, because textures are at risk of being dereferenced from the Texture2D array.
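As a rough sketch of that fetch-and-check step (again, StyleLibrary and the style name are illustrative assumptions rather than the author's plugin code), an editor window might do something like this:

using UnityEditor;
using UnityEngine;

public class MyPluginWindow : EditorWindow
{
    GUIStyle headerStyle;

    void OnGUI()
    {
        // Entering/exiting play mode can dereference both the style and its texture,
        // so check before drawing and only rebuild when something is actually missing.
        if (headerStyle == null || headerStyle.normal.background == null)
        {
            headerStyle = StyleLibrary.Load("Header") ?? CreateFallbackHeaderStyle();
        }

        GUILayout.Label("My plugin", headerStyle);
    }

    static GUIStyle CreateFallbackHeaderStyle()
    {
        // Recreate (and re-save) the style when it cannot be loaded from disk.
        var tex = new Texture2D(1, 1);
        tex.SetPixel(0, 0, new Color(0.2f, 0.2f, 0.2f));
        tex.Apply();

        var style = new GUIStyle(GUI.skin.label) { fontSize = 18 };
        style.normal.background = tex;
        StyleLibrary.Save("Header", style);
        return style;
    }
}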
Once each check is done, the plugin renders the layout as normal and the next cycle begins. Overall, this method requires extra processing time and extra storage space for the plugin, but in turn it is:
Quicker over a longer period of time, because styles are created or loaded only when necessary.
Easier to re-theme, allowing individuals to customize the tools to their preferred theme; this can also be expanded into custom "theme packs" for a plugin.
Scalable, to an extent, to large numbers of styles.
This method still requires experience in GUI Styles, Data Saving, JSON Utilisation and C#.

How to bind Preview and texture in CameraX

I recently tried to develop a Flutter plugin with CameraX, but I found there was no way to simply bind the Preview to Flutter's Texture.
In the past, I only needed to use camera.setPreviewTexture(surfaceTexture.surfaceTexture()) to bind the camera and the texture, but now I can't find the API.
camera.setPreviewTexture(surfaceTexture.surfaceTexture())
val previewConfig = PreviewConfig.Builder().apply {
    setTargetAspectRatio(Rational(1, 1))
    setTargetResolution(Size(640, 640))
}.build()

// Build the viewfinder use case
val preview = Preview(previewConfig).also {
}

preview.setOnPreviewOutputUpdateListener {
    // it.surfaceTexture = this.surfaceTexture.surfaceTexture()
}
// how to bind the CameraX Preview surfaceTexture and flutter surfaceTexture?
I think you can bind texture by Preview.SurfaceProvider.
final CameraSelector cameraSelector = new CameraSelector.Builder().requireLensFacing(CameraSelector.LENS_FACING_BACK).build();
final ListenableFuture<ProcessCameraProvider> listenableFuture = ProcessCameraProvider.getInstance(appCompatActivity.getBaseContext());

listenableFuture.addListener(() -> {
    try {
        ProcessCameraProvider cameraProvider = listenableFuture.get();
        Preview preview = new Preview.Builder()
                .setTargetResolution(new Size(720, 1280))
                .build();
        cameraProvider.unbindAll();
        Camera camera = cameraProvider.bindToLifecycle(appCompatActivity, cameraSelector, preview);

        Preview.SurfaceProvider surfaceProvider = request -> {
            Size resolution = request.getResolution();
            surfaceTexture.setDefaultBufferSize(resolution.getWidth(), resolution.getHeight());
            Surface surface = new Surface(surfaceTexture);
            request.provideSurface(surface, ContextCompat.getMainExecutor(appCompatActivity.getBaseContext()), result -> {
            });
        };
        preview.setSurfaceProvider(surfaceProvider);
    } catch (Exception e) {
        e.printStackTrace();
    }
}, ContextCompat.getMainExecutor(appCompatActivity.getBaseContext()));
Update: CameraX has added functionality which will now allow this since this answer was written, but this might still be useful to someone. See this answer for details.
It seems as though using CameraX is difficult to impossible due to it abstracting the more complicated things away and so not exposing things you need like being able to pass in your own SurfaceTexture (which is normally created by Flutter).
So the simple answer is that you can't use CameraX.
That being said, with some work you may be able to get this to work, but I have no idea if it will work for sure. It's ugly and hacky so I wouldn't recommend it. YMMV.
If we're going to do this, let's first look at how the Flutter view creates a texture:
@Override
public TextureRegistry.SurfaceTextureEntry createSurfaceTexture() {
    final SurfaceTexture surfaceTexture = new SurfaceTexture(0);
    surfaceTexture.detachFromGLContext();
    final SurfaceTextureRegistryEntry entry = new SurfaceTextureRegistryEntry(nextTextureId.getAndIncrement(),
            surfaceTexture);
    mNativeView.getFlutterJNI().registerTexture(entry.id(), surfaceTexture);
    return entry;
}
Most of that is replicable, so we may be able to do it with the surface texture the camera gives us.
You can get ahold of the texture the camera creates this way:
preview.setOnPreviewOutputUpdateListener { previewOutput ->
    val texture: SurfaceTexture = previewOutput.surfaceTexture
}
What you're going to have to do now is to pass a reference to your FlutterView into your plugin (I'll leave that for you to figure out). Then call flutterView.getFlutterNativeView() to get ahold of the FlutterNativeView.
Unfortunately, FlutterNativeView's getFlutterJni is package private. So this is where it gets really hacky - you can create a class in that same package that calls that package-private method in a publicly accessible method. It's super ugly, and you may have to fiddle around with Gradle to get the compilation security settings to allow it, but it should be possible.
After that, it should be simple enough to create a SurfaceTextureRegistryEntry and to register the texture with the flutter jni. I don't think you want to detach from the opengl context, and I really have no idea if this will actually work. But if you want to try it out and report back what you find I would be interested in hearing the result!

Initializing pallet racks with parameters in AnyLogic

Dear stack overflow community,
I am trying to create a pallet rack in AnyLogic (version 7.1.2 university), in which the number of cells, the number of levels and some other properties are set from parameters. The parameters are set up prior to model execution from the simulation page. Has anyone done that before?
In my opinion, the problem starts with the palletRack properties, which do not allow parameters as values but require a literal number ("Number of cells: 10" instead of "Number of cells: myParameter"). But there are pre-defined functions like setNumberOfPositions(int nPositions), so I thought I could avoid the problem by calling these functions at the beginning of the simulation (at time zero). I used the action field of an event for that.
This caused an exception during discrete event execution that said "root: palletRack: Markup element is already initialized and cannot be modified. Please use constructor without arguments, perform setup and finally call initialize() function."
Since I could not modify anything in the Java editor, I tried to construct a pallet rack in the event action field:
PalletRack palletRack = new PalletRack();
palletRack.setOwner(this);
[...]
palletRack.setNumberOfPositions(p_CellsInX);
palletRack.setNumberOfLevels(p_CellsInY);
palletRack.setCellWidth(p_WidthOfCell);
palletRack.setLevelHeight(p_HeightOfCell);
palletRack.initialize();
This did not throw any errors but did not build a rack either.
Additionally, I tried adding "@Override" in front of my functions.
Has anyone any ideas how I can initialize the pallet rack with parameters or override the initial values?
Obviously, I am a total beginner in AnyLogic. I would be very grateful for any advice. Thank you in advance!
It is possible but not straightforward. You need to do everything programmatically, i.e. create the pallet rack but also the line that goes through it, add them to a (new or existing) network and then initialize it all. Some dummy code to get you started below.
Note that myNetwork is an existing network I drew manually at design time here.
Also, one tip: draw the pallet rack and line through it manually first to easily obtain all the coordinates and ensure it would work. Then, remove those and create them programmatically but with the right settings...
PS: this might not work in AL7 but it works in AL8. You might need slightly different functions for adding to presentation
myRack = new PalletRack(this, // Agent owner
SHAPE_DRAW_2D3D, // ShapeDrawMode
true, // isPublic
ground, // ground
false, // isObstacle
-2480, // x pos
1980, // y pos
0.0, // z pos
35.2*numCellsPerRackPerLevel, // length (keep constant cell width and vary rack length accordingly)
20.0, // depth
20.0, // depthR (depth of the right rack; only if type is 2 racks and 1 aisle)
50.0, // levelHeight
0., // rotation
PALLET_RACK_TWO_PALLET_RACKS, // PalletRackType
PALLET_RACK_NO_DIRECTION, // PalletRackDirection
40.0, // aisleDepth = width
40.0, // aisleRDepth (width of right aisle, only if 1 rack 2 aisles)
35.2, // cellWidth
numCellsPerRackPerLevel, // nPositions
numLevelsPerRack, // nLevels
1, // nDeep
lavender, // fillColor
dodgerBlue, // lineColor
2); // cellsBetweenLegs
presentation.add(myRack);
// this must cut through both rack's aisles
MarkupSegmentLine segment = new MarkupSegmentLine(myRack.getX()-10, myRack.getY()+30, 0.0, myRack.getX()+myRack.getLength()+10, myRack.getY()+30, 0.0);
Path path = new Path(this, SHAPE_DRAW_2D3D, true,
true, true, 1.0, false, 10,
PATH_LINE, dodgerBlue, 1.0,
segment);
presentation.add(path);
myNetwork.add( myRack);
myNetwork.add(path);
myNetwork.initialize();

Display points when there's great distance between them (GWT-Openlayers)

The case is the following: I have a layer with two points on it. The first is in Australia, the second is in the USA. The continent or the exact position of the points doesn't matter; the essential part is the great distance between them. When the application starts, the first point appears (zoom level 18). The second point isn't displayed because it is far away and the zoom level is high. Then I call the panTo function with the location of the second point. The map jumps to the right location, but the second point doesn't appear. The point appears only if I zoom in/out or resize the browser window. The GWT code:
LonLat center = new LonLat(151.304485, -33.807831);
final LonLat usaPoint = new LonLat(-106.356183, 35.842721);
MapOptions defaultMapOptions = new MapOptions();
defaultMapOptions.setNumZoomLevels(20);
// mapWidget
final MapWidget mapWidget = new MapWidget("100%", "100%", defaultMapOptions);
// google maps layer
GoogleV3Options gSatelliteOptions = new GoogleV3Options();
gSatelliteOptions.setIsBaseLayer(true);
gSatelliteOptions.setDisplayOutsideMaxExtent(true);
gSatelliteOptions.setSmoothDragPan(true);
gSatelliteOptions.setType(GoogleV3MapType.G_SATELLITE_MAP);
GoogleV3 gSatellite = new GoogleV3("Google Satellite", gSatelliteOptions);
mapWidget.getMap().addLayer(gSatellite);
// pointLayer
VectorOptions options = new VectorOptions();
options.setDisplayOutsideMaxExtent(true);
Vector vector = new Vector("layer1", options);
mapWidget.getMap().addLayer(vector);
mapWidget.getMap().addControl(new LayerSwitcher());
mapWidget.getMap().addControl(new MousePosition());
mapWidget.getMap().addControl(new ScaleLine());
mapWidget.getMap().addControl(new Scale());
// two points are added to the layer
center.transform(new Projection("EPSG:4326").getProjectionCode(), mapWidget.getMap().getProjection());
vector.addFeature(new VectorFeature(new Point(center.lon(), center.lat())));
usaPoint.transform(new Projection("EPSG:4326").getProjectionCode(), mapWidget.getMap().getProjection());
vector.addFeature(new VectorFeature(new Point(usaPoint.lon(), usaPoint.lat())));
// the center of the map is the first point
mapWidget.getMap().setCenter(center, 18);
// 3 sec later panTo second point
Timer t = new Timer() {
    @Override
    public void run() {
        mapWidget.getMap().panTo(usaPoint);
    }
};
t.schedule(3000);
I tried to reproduce this situation with pure OpenLayers, but it worked fine. Here is the link.
So I think the problem is with GWT-OpenLayers. Has anybody experienced such behaviour? Or has anybody got a solution to this problem?
What a strange problem.
For now I have only found a way around it, not a real fix. It seems to be a bug in GWT-OL, as you say, but I can't imagine where.
What you can do is add the following 3 lines to your code:
mapWidget.getMap().panTo(usaPoint);
int zoom = mapWidget.getMap().getZoom();
mapWidget.getMap().setCenter(usaPoint, 0);
mapWidget.getMap().setCenter(usaPoint, zoom);
(Note: I am a contributor to the GWT-OL project. I have also informed other contributors of this problem; maybe they can find a better solution.)
Edit: Another GWT-OL contributor looked into this but also couldn't find a real solution. Another workaround is to use zoomToExtent for the requested point:
Bounds b = new Bounds();
b.extend(new LonLat(usaPoint.getX(), usaPoint.getY()));
mapWidget.getMap().zoomToExtent(b);