I am using Unfolding Maps in Processing.
The following map works correctly:
UnfoldingMap map = new UnfoldingMap(this, new OpenStreetMap.OpenStreetMapProvider());
EventDispatcher eventDispatcher = MapUtils.createDefaultEventDispatcher(this, map);
The issue is that I want to create a custom handler for pan/zoom events to trigger other events in my application. For example, I would like to detect panning and zooming actions in the map to run queries in the background based on the current coordinates, while preserving the default panning/zooming behaviour.
In other web mapping platforms (e.g. Leaflet) this is trivial, but I can't find any tutorial or reference for Unfolding Maps.
Any pointers?
Unfolding provides a mapChanged event, similar to mouseClicked and the other event handlers in Processing's simplified event system. There, you can implement your background query logic. You can even check and react to the specific MapEvent (though that is a bit tedious).
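For example, a minimal sketch of the handler (assuming the map field from your code, plus the de.fhpotsdam.unfolding imports at the top of your sketch; runBackgroundQuery is a hypothetical helper you would implement yourself):
// called by Unfolding after pan/zoom interactions, much like mouseClicked
public void mapChanged(MapEvent mapEvent) {
  // read the current view state
  Location center = map.getCenter();
  int zoom = map.getZoomLevel();
  // hand it to your background query, e.g.:
  // runBackgroundQuery(center.getLat(), center.getLon(), zoom); // hypothetical helper
}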
Take a look at this MapChangedApp example.
You could also do it via a custom event handler, if you want, but this should be sufficient for your use case. Let me know if you plan to do something else.
I would like to create a plugin mechanism for my custom wrapper around AgGrid. Currently I am doing so with an interface like this:
export interface IGridPlugin {
  onGridReady: (e: GridReadyEvent) => void;
}
Then, in my wrapper around AgGrid, I add an onGridReady event handler that goes through the supplied list of plugins and invokes their onGridReady method one at a time. They can then perform any initialization that they require - like adding event handlers via e.api.addEventListener
gridOptions.onGridReady = (e: GridReadyEvent) => {
  plugins?.forEach(plugin => plugin.onGridReady(e));
}
Unfortunately, the onGridReady event appears to only fire after the grid initially loads and renders. One of the plugins I've created will restore the grid state from url params when onGridReady fires, and will subsequently keep the url params updated as the grid state changes.
Since onGridReady fires after the initial rendering, there's a small delay where the grid is displayed without the application of the preserved grid state in the url. I was hoping to find an earlier event that might make more sense for this.
Alternatively, I suppose I could allow the plugins to modify the gridOptions directly before the grid is even constructed... but using events feels a little safer: it prevents one plugin from setting itself up as the sole event handler for something that other plugins may want to use, which would accidentally disable some or all of their functionality.
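For reference, here is a rough sketch of what that pre-construction hook could look like (beforeGridInit is a name I'm making up for illustration, not an AgGrid API):
import { GridOptions, GridReadyEvent } from 'ag-grid-community';

export interface IGridPlugin {
  // hypothetical pre-construction hook: plugins may adjust options here
  beforeGridInit?: (options: GridOptions) => void;
  onGridReady: (e: GridReadyEvent) => void;
}

// in the wrapper, before constructing the grid:
plugins?.forEach(plugin => plugin.beforeGridInit?.(gridOptions));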
Is there a better event or alternative mechanism for adding such plugins to AgGrid? I would like to make sure that the plugins have access to the api and columnApi instances in any approach taken.
We are working on tracking a site that has components built using Shadow DOM. When we create a rule in Launch to add tagging to these components, it doesn't work.
Can you guide us on best practices for tagging components in Shadow DOM?
I found unanswered questions about Google Analytics (e.g. "Google analytics inside shadow DOM doesn't work"): is this true for Adobe Analytics also?
Best Practice
Firstly, the spirit of using Shadow DOM concepts is to provide scope/closure for web components, so that people can't just go poking at them and messing them up. In principle, it is similar to having a local scoped variable inside a function that a higher scope can't touch. In practice, it is possible to get around this "wall" and have your way with it, but it breaks the "spirit" of shadow DOM, which IMO is bad practice.
So, if I were to advise some best practice about any of this, my first advice is to respect, as much as possible, the spirit of web components that utilize shadow DOM, and treat them like the black box they strive to be. Meaning, you should go to the web developers in charge of the web component and ask them to provide an interface for you to use.
For example, Adobe Launch has the ability to listen for custom events broadcast to the (light) DOM, so the site developers can add code to their web component that creates a custom event and broadcasts it on click of the button.
Note: Launch's custom event listener will only listen for custom event broadcasts starting at document.body, not document, so make sure to create and broadcast custom events on document.body or deeper.
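The snippet you'd ask the devs to add inside the component could be as small as this (the event name and payload here are illustrative):
// inside the web component, on the button the devs own:
button.addEventListener('click', function (e) {
  // dispatch on document.body so Launch's Custom Event rule can pick it up
  document.body.dispatchEvent(new CustomEvent('MyComponentClick', {
    detail: { value: e.target.value }
  }));
});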
"But the devs won't do anything so I have to take matters into my own hands..."
Sadly, this is a reality more often than not, so you gotta do what you gotta do. If this is the case, well, Launch does not currently have any native features to really make life easier for you in this regard (for the "core" part of the below stuff, anyways), and as of this post, AFAIK there are no public extensions that offer anything for this, either. But that doesn't mean you're SoL.
But I want to state that I'm not sure I would be quick to call the rest of this answer "Best Practice" so much as "it's 'a' solution". Mostly because this largely involves just dumping a lot of pure javascript into a custom code box and calling it a day, which is more of a catch-all, last-resort solution.
Meanwhile, in general, it's best practice to avoid using custom code boxes when it comes to tag managers unless you have to. The whole point of tag managers is to abstract away the code.
I think the TL;DR here is basically me reiterating that this is something that should ideally be put on the site devs' plate. But if you still really need to do it all in Launch because ReasonsTM, keep on reading.
'A' Solution...
Note: This is a really basic example with a simple open-mode shadow DOM scenario - in reality your scenario is almost certainly a lot more complex. I expect you to know what you're doing with javascript if you're diving into this!
Let's say you have the following on the page: a simple example of a custom HTML element with a button added to its shadow DOM.
<script>
  class MyComponent extends HTMLElement {
    constructor() {
      super();
      this._shadowRoot = this.attachShadow({
        mode: 'open'
      });
      var button = document.createElement('button');
      button.id = 'myButton';
      button.value = 'my button value';
      button.innerText = 'My Button';
      this._shadowRoot.appendChild(button);
    }
  }
  customElements.define('my-component', MyComponent);
</script>
<my-component id='myComponentContainer'></my-component>
Let's say you want to trigger a rule when a visitor clicks on the button.
Quick Solution Example
At this point I should probably say that you can get away with a Launch click event rule using the query selector my-component#myComponentContainer and a custom code condition along the lines of:
return event.nativeEvent.composedPath()[0].matches('button#myButton');
(Note: composedPath() is the standards-based replacement for the non-standard event.path array, which no longer exists in Chromium.)
Something like this should work for this scenario because there are a lot of stars aligned here:
- The shadow DOM is open mode, so no hacks to overwrite things
- There are easily identifiable unique CSS selectors for both light and shadow DOM levels
- You just want to listen for the click event, which bubbles up and acts like a click happened on the shadow root's light DOM root
In practice though, your requirements probably aren't going to be this easy. Maybe you need to attach some other event listener, such as a video play event. Unfortunately, there is no "one size fits all" solution at this point; it just depends on what your actual tracking requirements are.
But in general, the goal is pretty much the same as what you would have asked the devs to do: create and broadcast a custom (light) DOM event within the context of the shadow DOM.
Better Solution Example
Using the same component example and requirement as above, you could for example create a rule to trigger on DOM Ready. Name it something like "My Component Tracking - Core". No conditions, unless you want to do something like check whether the web component's root light DOM element exists.
Overall, this is the core code for attaching the event listener to the button and dispatching a custom event for Launch to listen for. Note, this code is based on our example component and tracking requirements above. It is unique to this example. You will need to write similar code based on your own setup.
Add a custom js container with something along the lines of this:
// get the root (light dom) element of the component
var rootElement = document.querySelector('#myComponentContainer');
if (rootElement && rootElement.shadowRoot) {
  // get a reference to the component's shadow dom
  var rootElementDOM = rootElement.shadowRoot;
  // try and look for the button
  var elem = rootElementDOM.querySelector('button#myButton');
  if (elem) {
    // add a click event listener to the button
    elem.addEventListener('click', function(e) {
      // optional payload of data to send to the custom event, e.g. the button's value
      var data = {
        value: e.target.value
      };
      // create a custom event 'MyButtonClick' to broadcast
      var ev = new CustomEvent('MyButtonClick', {
        detail: data
      });
      // broadcast the event (remember: natively, Launch can only listen for custom events starting on document.body, not document!)
      document.body.dispatchEvent(ev);
    }, false);
  }
}
From here, you can create a new rule that listens for the custom event broadcast.
Custom Event Rule Example
Rule name: My Button clicks
Events
Extension: Core
Event Type: Custom Event
Name: MyButtonClick
Custom Event Type: MyButtonClick
Elements matching the CSS selector: body
Conditions
*None for this scenario*
From here, you can set whatever Actions you want (set Adobe Analytics variables, send beacon, etc.).
Note:
In this example, I sent a data payload to the custom event. You can reference the payload in any custom (javascript) code box with event.detail, e.g. event.detail.value. You can also reference them in Launch fields with the % syntax, e.g. %event.detail.value%.
I'm still pretty new to scripting in Unity3D, and I'm following along with a tutorial that uses GUI.Button() to draw a button on the screen.
I am intrigued by how this function works. Looking through the documentation, the proper use of GUI.Button is to invoke the function in an if statement and put the code to be called when the button is pushed within the if statement's block.
What I want to know is, how does Unity3D "magically" delay the code in the if statement until after the button is clicked? If it was being passed in as a callback function or something, then I could understand what was going on. Perhaps Unity is using continuations under the hood to delay the execution of the code, but then I feel like it would cause code after the if statement to be executed multiple times. I just like to understand how my code is working, and this particular function continues to remain "magical" to me.
I don't know if it's the right term, but I usually refer to such a system as an immediate mode GUI.
how does Unity3D "magically" delay the code in the if statement until after the button is clicked?
GUI.Button simply returns true if a click event happened inside the button's bounds during the last frame. By calling that function you are polling: every frame, for every button, you ask the engine whether an event concerning that button (screen area) has happened.
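For example, a minimal sketch (the class name and log message are mine, not from the question):
using UnityEngine;

public class ButtonPollingExample : MonoBehaviour {
    void OnGUI() {
        // GUI.Button draws the button on every call and returns true only
        // for the event in which it was clicked: plain polling
        if (GUI.Button(new Rect(10, 10, 150, 30), "Click me")) {
            Debug.Log("Clicked during this frame's event");
        }
    }
}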
If it was being passed in as a callback function or something, then I could understand what was going on
You are probably used to an MVC-like pattern, where you pass a controller delegate that's called when a UI event is raised from the view. This is something really different.
Perhaps Unity is using continuations under the hood to delay the execution of the code, but then I feel like it would cause code after the if statement to be executed multiple times.
No. The function returns immediately, and returns true only if an event happened. If it returns false, the code inside the if statement won't be executed at all.
Side notes:
That kind of system is hard to maintain, especially for a complex, structured GUI.
It has really serious performance implications (memory allocations, one draw call per UI element).
Unless you are writing an editor extension or custom inspector code, I'd stay away from it. If you want to build a menu, implement your own system or use an external plugin (there are several good ones: NGUI, EZGUI, ...).
Unity has already announced a new integrated UI system; it should be released soon.
Good question. The Unity3D GUI goes through several event phases, or as the documentation puts it:
Events correspond to user input (key presses, mouse actions), or are UnityGUI layout or rendering events.
For each event, OnGUI is called in the scripts; so OnGUI is potentially called multiple times per frame. Event.current corresponds to the "current" event inside the OnGUI call.
In OnGUI you can find out which event is currently happening with Event.current (see the sketch after the list below).
The following event types (UnityGUI input and processing events) are processed:
- MouseDown: A mouse button was pressed.
- MouseUp: A mouse button was released.
- MouseMove: The mouse was moved (editor views only).
- MouseDrag: The mouse was dragged.
- KeyDown: A keyboard key was pressed.
- KeyUp: A keyboard key was released.
- ScrollWheel: The scroll wheel was moved.
- Repaint: A repaint event. One is sent every frame.
- Layout: A layout event.
- DragUpdated: Editor only; drag & drop operation updated.
- DragPerform: Editor only; drag & drop operation performed.
- DragExited: Editor only; drag & drop operation exited.
- Ignore: The event should be ignored.
- Used: An already processed event.
- ValidateCommand: Validates a special command (e.g. copy & paste).
- ExecuteCommand: Executes a special command (e.g. copy & paste).
- ContextClick: The user has right-clicked (or Control-clicked on the Mac).
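Here is a minimal sketch of polling Event.current inside OnGUI (class name and log output are mine):
using UnityEngine;

public class EventProbe : MonoBehaviour {
    void OnGUI() {
        // the event currently being processed by this OnGUI call
        Event e = Event.current;
        if (e.type == EventType.MouseDown) {
            Debug.Log("Mouse down at " + e.mousePosition);
        } else if (e.type == EventType.KeyDown) {
            Debug.Log("Key pressed: " + e.keyCode);
        }
    }
}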
Unity GUI has improved a lot lately and is quite useful if you want to handle things programmatically. If you want to handle things visually, I recommend looking at the plugins heisenbug refers to.
If you decide to use Unity GUI, I recommend using only one object with OnGUI, and letting this object handle all your GUI.
I'm automating an app that shows some overlay messages anywhere on the app for several scenarios, such as app installed for the first time etc. (I'm fairly new to Robotium too.)
The overlay displays text that goes away by swiping or clicking on it. Also, there are different types of these overlays, each with different unique text on it. (Let's call this screen Activity A.)
I wanted to create a robust test case that handles this case gracefully. From the test's perspective, we won't know whether Activity A will be present at any given time, but I want to recover from the scenario if it is, by writing a method that I can call at any time. Currently, the tearDown method gets called since my expected activity name doesn't match.
Also, even if Activity A exists, there are other predefined overlay texts too. So if I use solo.waitForText("abc") to check for the text "abc", I may see a second overlay with the text "pqr" instead.
So I was looking for a way to automate this, and I can't use solo.assertCurrentActivity() or solo.waitForActivity methods as they just stop the execution after the first failure.
So any guidance is appreciated!
All the waitFor methods return a boolean. So you can use waitForActivity() exactly as you want to. If the Activity doesn't exist it will return false.
You can check which Activity is current:
Activity current = solo.getCurrentActivity();
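So a recovery helper along these lines should work (a sketch; the activity name, overlay texts, and timeouts are assumptions based on your description):
// call this whenever the overlay might be in the way
private void dismissOverlayIfPresent() {
  // waitForActivity returns false after the timeout instead of failing the test
  if (solo.waitForActivity("ActivityA", 2000)) {
    // check for either of the known overlay texts and click to dismiss
    if (solo.waitForText("abc", 1, 2000)) {
      solo.clickOnText("abc");
    } else if (solo.waitForText("pqr", 1, 2000)) {
      solo.clickOnText("pqr");
    }
  }
}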
I have a Container with many Labels added inside it. When I try to capture the pointerReleased event in this Container, I have found some problems. The event is only captured when I release in the free area of the Container, not when I release over the Labels. Is there any way to encapsulate this event? I mean, when I release over the main Container (even if I'm on a Label), the event should be fired.
Here you can take a look at my Container
You should look at the lead component feature added in LWUIT 1.5. It allows you to define a component that manages the entire hierarchy of container/components, and all events for every component in the hierarchy are sent to it.
This adds another benefit of handling the style synchronization between all the different elements (e.g. if you use a button all components will become pressed together).
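A minimal sketch (assuming LWUIT 1.5+; the component names here are illustrative):
// route all events in the hierarchy to one lead component
Container row = new Container();
Button lead = new Button("Row action");
row.addComponent(lead);
row.addComponent(new Label("Some label"));
row.addComponent(new Label("Another label"));
row.setLeadComponent(lead); // releases over the labels now reach 'lead'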
I didn't find any better solution, so finally I propagated the pointerReleased from the Labels to the Container.