Is there any way to embed a scene created with UE4 into WhatsApp and let the user interact? - unreal-engine4

I created my scene with UE and it is ready to be deployed to the end user. Which packaging option should I use if I want it to be shown to, and be interactable by, the end user on WhatsApp?
I know this may sound weird, since embedding an executable into another application is a clear violation of the other system's boundaries and may suggest malicious intent.
But my intention is simply to give the end user an easy way to interact with the scene without loading any APKs or executables. Is there any way of doing that, especially in WhatsApp?

No, there is no way to do this with the engine. If by "interact" you mean just moving around the scene, there may be tools to convert it to a file format that supports this.

Related

Where should the user-independent configuration for a multi-app system exist within the registry?

I apologise if this seems like a question with no clear-cut answer (although I'm hoping there is one), but please bear with me...
The company I work for is developing an immersive environment in Unity, which consists of a box-shaped room with multiple projectors casting images onto the walls and ceiling to produce an effect similar to VR, but without the need for a headset. We're using Unity and C# to develop the system, and I've been writing a sort of "platform" that acts as a starting point for the applications that we develop for the environment. One of the systems contained within this platform is for screen configuration; this includes the dimensions of the screens and the mapping of projectors to views (i.e. it indicates which projector is responsible for projecting the forward view, etc.).
Now, in order to make things simple, I'm going to be storing this configuration within the registry; this way, all of the separate applications will share the configuration of the immersive environment. I've implemented this, and everything works as expected. However, as I'm pedantic about things, I just want to make sure I'm using the correct location within the registry for what I'm storing.
At the moment, I'm using "HKEY_LOCAL_MACHINE\System\BlueRoom..." ("Blue Room" is the name of the environment we're developing). I know I'll want to store the configuration within HKLM as opposed to HKCU, as the setup of the Blue Room's screens is the same regardless of the user. However, beyond this I'm not sure whether I should be storing the configuration in "\System\BlueRoom..." or "\Software\BlueRoom...". Are there set guidelines pertaining to this, or is it a matter of preference?
Put it in "\Software\BlueRoom\", seeing as that's what it is: software configuration. "\System" is reserved for things that pertain to the functioning of the system itself, e.g. hardware and Windows.
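As a minimal sketch of what that looks like from C# (the key path follows the question; the value names here are made up for illustration):

    using Microsoft.Win32;

    // Minimal sketch: shared, user-independent screen configuration under
    // HKLM\Software\BlueRoom. The value names are hypothetical.
    static class BlueRoomConfig
    {
        const string KeyPath = @"SOFTWARE\BlueRoom\ScreenConfig";

        // Writing under HKLM requires an elevated (administrator) process.
        public static void Save(int screenWidth, string frontProjector)
        {
            using (RegistryKey key = Registry.LocalMachine.CreateSubKey(KeyPath))
            {
                key.SetValue("ScreenWidth", screenWidth, RegistryValueKind.DWord);
                key.SetValue("FrontProjector", frontProjector, RegistryValueKind.String);
            }
        }

        // Reading from HKLM works for standard users, so the individual
        // applications only need read access at runtime.
        public static int LoadScreenWidth()
        {
            using (RegistryKey key = Registry.LocalMachine.OpenSubKey(KeyPath))
            {
                return key != null ? (int)key.GetValue("ScreenWidth", 0) : 0;
            }
        }
    }

One thing to watch out for: on 64-bit Windows, a 32-bit process gets redirected to "SOFTWARE\WOW6432Node\...", so make sure all of your applications agree on which registry view they open.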

Creating overlays for different apps in OSX

I want to create an app for OSX that would work as an add-on to another app, displaying some overlaid information. Something like Poker Tracker, for example: it shows extra information for poker games while you play at the tables.
Just wondering, is it possible using Swift? Can you point me in some direction as to what to look for? Some libraries that help with such a case? I've never developed anything for OSX but am keen to learn.
Thanks in advance.
Just wondering is it possible using Swift?
Yes, you can use Swift to create macOS applications. It's not magic, though -- your Swift code can only do things that are actually possible.
Can you point me to some direction what to look for?
Look for an API that lets other apps interact with the host application. That API will define what your "add on" application can reasonably do.
Without some sort of API or scripting interface, it's going to be very difficult to write a program that interacts with the host application. The best option is probably the Accessibility API in macOS. Accessibility is meant as an assistive technology, but it's often repurposed for tasks like automated testing. You might be able to use it to gain some level of control over the host app.
As far as I know it doesn't expose any API, so the information would have to be scraped from screen images.
This is really a tall order, and doubly so if you're asking basic questions about language capabilities. I think you'd have much better luck creating an efficient user interface so that the user could enter the relevant information directly, e.g. what cards the other users are showing, bet sizes, etc.

How would I use a second input device in Maya to affect controls separately from the mouse?

Not sure if I'm in the right place, but I'm not having much luck finding anything out. What I want to try to do is create a plugin for Autodesk software (namely Maya) that allows a secondary input device to control things like the viewport camera. Basically the same concept as the 3Dconnexion SpaceNavigator, but using a different input device.
Any help is appreciated
The Maya API samples include an example of how to connect external devices. You can find it in the Maya application directory in `devkit/mocap`, which includes a C++ project that uses the Maya Mocap API to output continuous rotation values based on the system clock. I've seen this used to add support for joysticks and game controllers:
http://download.autodesk.com/global/docs/maya2014/en_us/index.html?url=files/Motion_Capture_Animation_Server_.htm,topicNumber=d30e260341
You'd want to replace the clock part, of course, with something that spits out controller values you care about.
The Maya side is handled by scripts that connect the incoming "mocap" data to different scene elements. There used to be a generic UI for this, but nowadays you have to do it all in script; the documentation page linked above covers that side as well.
I'm not too up on the current state of the art, but some googling should show you how to attach device inputs to the scene.

What is the best way to work with a user interface/user experience designer on an iPhone app?

I have a friend who is a graphic designer & user experience designer who will be collaborating with me to develop an iPhone app. He does not have previous iPhone experience. What is the best way to work with him on developing the user interface, i.e. custom colors for UITableViews, UIButtons, etc.? We've looked into Photoshop mock-ups, but that depends on me (the developer) implementing what he drew in Photoshop, which might get tricky.
Most of the methods I've thought of have a long turnaround time, i.e. he uses Photoshop, sends me the image, I develop, send him a test build of the app, he doesn't like it, rinse, lather, repeat.
Do you think it's feasible to set him up with Interface Builder so he can modify XIB files? Potentially, he could build and run the app in the simulator...
Does anyone have experience doing this? Any suggestions?
Thanks much,
-dan
This goes for a developer or designer: the best way, in my opinion, is to mock up designs in Photoshop, debate what is good and what is bad, then send the final mock-ups to the developer.
The reason you want to do it this way is that your designer can't do everything he wants to do by simply using Interface Builder. You need to allow your designer to express his creative freedom without the burden of figuring out how to use a piece of software correctly.
You can find plenty of templates of iPhone and iPad components on the web. Having those components will make it very simple for your friend to put together concepts. It will also keep things consistent so you can have an easier time implementing them.
A Great Collection of iPad Resources
iPhone Materials
One suggestion is to start with the elements that do not need graphic design but that you know will be there: table views, tab bars, any UI element provided by UIKit, or even custom UI elements that you make. You will probably have most of your app built with this approach, and it will look VERY plain. Once you have that basis, you should be able to work with the graphic designer to identify where and what he needs to make. It should also be pretty easy for you to integrate his work, since it will mostly be images or textures; things like animations will have to be handled by you anyway. Just a suggestion, hope it's helpful.
OmniGraffle is your best bet for quickly mocking out UIs. It produces nearly photorealistic mockups. It's easy for non-artists to use, but it can also utilize imported images of arbitrary complexity if he needs to do something fancy.
If you want my advice, keep the graphic designers away from the app until it is fully functional logically. They should only be brought in at the end of the process to tweak the UI.
They cause train wrecks if they come into the process early. Everybody in that field has been trained first and foremost to create visuals that attract attention. In a UI, that always translates into flashy, non-standard elements that become an annoyance with repeated use. A good UI is essentially invisible to the user. Ideally, they should notice it only because they notice that they don't notice it. (It's all very Zen.)
People trained to attract attention in the blizzard of competing images of a media saturated world don't make invisible interfaces. They make "in your face" and "look at me!" interfaces that get old in a hurry.
Don't get me wrong: a good graphics person can really enhance an interface by the skillful and subtle use of proportion and color. Unfortunately finding a good UI graphics person is a challenge. Be prepared for fights over what works transparently versus what looks cool and draws attention the first time you see it.

GUI Automation testing - Window handle questions

Our company is currently writing a GUI automation testing tool for Compact Framework applications. We initially evaluated many existing tools, but none of them was right for us.
With the tool you can record test cases and group them into test suites. For every test suite, an application is generated which launches the application under test and simulates user input.
In general the tool works fine, but because we are using window handles to simulate user input, there is a lot you can't do. For example, it is impossible for us to get the name of a control (we just get the caption).
Another problem with using window handles is checking for a change. At the moment we simulate a click on a control and, depending on the result, we know whether the application has gone to the next step.
Is there any other (simpler) way of doing such things (for example the message queue, or anything else)?
Interesting problem! I've not done any low-level (think Win32) Windows programming in a while, but here's what I would do.
Use a named pipe and have your application listen to it. Using this named pipe as a communication medium, implement a really simple protocol whereby you can query the application for the name of a control given its HWND, or for other things you find useful. Make sure the protocol is rich enough that sufficient information can be exchanged between your application and the test framework. Also make sure the test framework does not trigger too much "special behavior" in the app, because then you wouldn't really be testing its features, but rather your test framework.
There are probably more elegant and cooler ways to implement this, but this is what I remember off the top of my head, using only simple Win32 API calls.
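To make the idea concrete, here is a rough sketch in C# (the pipe name and the one-line "NAME <hwnd>" query protocol are invented for illustration; a real protocol would need more commands and error handling):

    using System;
    using System.IO;
    using System.IO.Pipes;

    // Sketch: the application under test hosts a named pipe and answers
    // simple text queries from the test framework, e.g. "NAME <hwnd>"
    // is answered with the name of the control that owns that handle.
    class TestAgent
    {
        public static void Serve()
        {
            using (var server = new NamedPipeServerStream("GuiTestPipe"))
            {
                server.WaitForConnection();
                using (var reader = new StreamReader(server))
                using (var writer = new StreamWriter(server) { AutoFlush = true })
                {
                    string line;
                    while ((line = reader.ReadLine()) != null)
                    {
                        if (line.StartsWith("NAME "))
                        {
                            var hwnd = new IntPtr(long.Parse(line.Substring(5)));
                            writer.WriteLine(LookupControlName(hwnd));
                        }
                    }
                }
            }
        }

        // In the real application this would map the handle back to the
        // framework's control object and return its Name property.
        static string LookupControlName(IntPtr hwnd)
        {
            return "unknown";
        }
    }

The test framework would connect with a NamedPipeClientStream and send queries line by line. Note that the Compact Framework has no System.IO.Pipes, so on-device you would likely use a TCP socket for the same protocol instead.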
Another approach, which we have implemented for our product at work, is to record user events, such as mouse clicks and key events in an event script. This should be rich enough so that you can have the application play it back, artificially injecting those events into the message queue, and have it behave the same way it did when you first recorded the script. You basically simulate the user when you play back the script.
In addition to that, you can record any important state (user's document, preferences, GUI control hierarchy, etc.), once when you record the script and once when you play it back. This gives you two sets of data you can compare, to make sure, for instance, that everything stays the same. This solution gives you tests that are not easy to modify (you have to re-record if your GUI changes), but that provide awesome regression testing.
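A rough sketch of the playback half of that idea in C# (the RecordedEvent shape is invented; a real recorder would capture these via a message hook such as SetWindowsHookEx):

    using System;
    using System.Collections.Generic;
    using System.Runtime.InteropServices;

    // Sketch: each recorded user event is stored as a raw window message
    // and injected back into the target window's message queue on playback.
    struct RecordedEvent
    {
        public IntPtr Hwnd;   // target window
        public uint Message;  // e.g. WM_LBUTTONDOWN = 0x0201
        public IntPtr WParam;
        public IntPtr LParam; // for mouse messages: packed client coordinates
    }

    static class Player
    {
        [DllImport("user32.dll")]
        static extern bool PostMessage(IntPtr hWnd, uint msg, IntPtr wParam, IntPtr lParam);

        // Replays a script by posting each message back to its target window.
        public static void Play(IEnumerable<RecordedEvent> script)
        {
            foreach (RecordedEvent e in script)
                PostMessage(e.Hwnd, e.Message, e.WParam, e.LParam);
        }
    }

Real playback also has to deal with timing and with re-finding windows whose handles have changed between runs, which is where the recorded state snapshots come in.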
(EDIT: This is also a terrific QA tool during beta testing, for instance: just have your users record their actions, and if there's a crash, you have a good chance of easily reproducing the problem by just playing back the script)
Good luck!
Carl
If the automated GUI testing tool has knowledge of the framework the application is written in, it can use that information to produce better or more advanced scripts. TestComplete, for example, knows about Borland's VCL and WinForms, and it has advanced built-in support for testing applications built with Windows Presentation Foundation.
Use NUnitForms. I've used it with great success for single- and multi-threaded apps, and you don't have to worry about handles and stuff like that; see the sketch after the links below.
Here are some posts about NUnitForms worth reading
NUnitForms and failed DragDrop registration - problem of MTA vs STA
Compiled application exe GUI testing with NUnitForms
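For a feel of what that looks like, here is a minimal sketch of an NUnitForms test (the form and control names are hypothetical; NUnitForms locates controls by their Name property rather than by window handle):

    using NUnit.Framework;
    using NUnit.Extensions.Forms;

    [TestFixture]
    public class MyFormTests : NUnitFormTest
    {
        [Test]
        public void ClickingOkCopiesTheName()
        {
            // MyForm is a hypothetical form under test whose OK handler
            // copies the textbox contents into the form's title.
            var form = new MyForm();
            form.Show();

            // Testers find controls by their Name property.
            new TextBoxTester("nameTextBox").Enter("hello");
            new ButtonTester("okButton").Click();

            Assert.AreEqual("hello", form.Text);
        }
    }
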
I finally found a solution for communicating between the testing application and the application under test: Managed Spy. It's basically a .NET application built on top of ManagedSpyLib.
ManagedSpyLib allows programmatic access to the Windows Forms controls of another process. For this it uses window hooks and memory-mapped files.
Thanks to all who helped me get to this solution!
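As a rough illustration of the ControlProxy idea (written from memory of the ManagedSpyLib API described in the MSDN "ManagedSpy" article, so treat the exact member names as assumptions):

    using System;
    using System.ComponentModel;
    using ManagedSpyLib;

    // Hedged sketch: ControlProxy wraps a Windows Forms control living in
    // another process; properties are read out-of-process by name.
    class SpyExample
    {
        static void DumpTopLevelWindows()
        {
            foreach (ControlProxy proxy in ControlProxy.GetTopLevelWindows())
            {
                // GetProperties/GetValue mirror the TypeDescriptor pattern;
                // these member names are from memory and may differ.
                PropertyDescriptor nameProp = proxy.GetProperties()["Name"];
                object name = (nameProp != null) ? proxy.GetValue(nameProp) : null;
                Console.WriteLine("{0} (hwnd {1})", name, proxy.Handle);
            }
        }
    }
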
Managed Spy does not provide a solution for Compact Framework applications.
The company Jamo Solutions (www.jamosolutions.com) meets the requirements for automation testing on mobile devices, including .NET Compact Framework applications.