I have very little experience with Unreal Engine (just a basic grasp of the menus from playing around with level creation back in the day), and I have an Oculus DK2 to test out VR. Is it possible to create a UE scene that displays a 360° picture, and then add "hotspots" to the picture that trigger voiceovers or even text messages on the screen?
Basically, the client wants to demo their room and add commentary to specific items within the scene, but they don't have the resources to hire 3D modelers. So they would settle for a static room (picture) where you can just look around (obviously not walk) and trigger some events depending on where you look.
Can anyone point me in the right direction, or let me know if this simply isn't possible?
What you suggest sounds plausible, though it is not going to be easy under a tight deadline. Unreal Engine 4 is quite a bit different from UE3 and earlier.
You can also get access to the C++ source via GitHub when you register.
Take a look at the official best practices and the samples linked from there to see if you can get something running on your development kit. That will at least give you an idea of what you are up against.
Looking for pointers/advice on using Flame (a Flutter game library) for a point & click 2D adventure game. So mainly different rooms with images, tap listeners, and some minimal animations; no physics or "real time" stuff.
Q1 - Which of the Flame starting points would be recommended: game.dart or base_game.dart?
Q2 - Any other tips/guidance for this? (e.g. don't bother using Flame, just use Flutter directly?)
This is a very broad question for the Stack Overflow format, but since I'm one of the developers of Flame, I'll try to answer as well as I can.
Q1:
Definitely go with BaseGame; if you use Game you get locked out of a lot of the features of the engine. Game is only for when you pretty much just need the game loop.
Make use of the components; they will make your development process a lot simpler. For example, use SpriteComponent instead of Sprite directly, etc.
There is also a package built on top of Flame called Bonfire (not built by the flame-engine team) that you could use too, but that is a more opinionated way of writing an RPG-style game.
Q2:
You can join our Discord chat and we'll try to answer any questions you have.
Use v1.0.0-rcX; even though it is still an RC, it is definitely the way to go if you are starting a new project now, so that you don't have to migrate it later. The final v1.0.0 should be released in a couple of months.
Have a look at the examples directory in the repository; it will give you inspiration for how to use most of the features available in the engine.
I'm having trouble describing what I'm looking for. Essentially, I'm a beginner at Unity: I've only made artwork for games, never created the core myself.
Preface: Here's the game in a nutshell:
The camera looks down at the scene with an orthographic projection.
The ground is scrolling down the screen while you walk towards the top of it. In essence, you're on a treadmill.
As you walk/progress, shapes start entering the screen, like Tetris.
You simply walk over to that shape, pick it up, and take it to one of three baskets on the left side of the screen.
Repeat until end of level
Help:
How would I start, what should I start with?
Should I seek out examples and guides on creating an endless runner, even though the game is closer to Tetris with interaction? Because of this, I'm lost as to what to search for in order to gain the knowledge to build what I'm thinking of.
Background:
I'm coming from an artistic background and work with web development daily, so I think I'll be able to grasp the basics quickly enough; I just need that finger pointing at the obvious!
Given that you're coming from a web development background, I can see why you're hoping to find pinpointed resources for your specific game. Web development generally involves finding a specific way to tackle each specific problem you encounter.
One thing to be aware of is that Unity development is a bit different, in that most things are built from a core set of basic fundamentals. Because of this, I would actually recommend tutorials that teach you those basics, rather than trying to find something specific to what you're building, as you will be able to apply the basics to any of the problems you encounter along the way. The basic tutorials provided by the Unity team do a great job of teaching them: The Unity Tutorials Page
At the very least, the Roll-a-Ball tutorial should teach you most of the basics as they pertain to how objects work, creating scripts, etc.
After you are a bit more comfortable with Unity in general, I would suggest looking into some more focused tutorials that are closer to what you want to do, as these will give you a better idea of how to apply the basics you've learned. One recommendation I have for the game you're trying to make is this runner tutorial by Catlike Coding.
While the game you'll be creating in that tutorial plays quite differently from what you're describing, it should give you a better idea of how to approach some of the challenges you'll encounter in the development of your game (things like continually creating objects).
I was wondering if anyone here has had any luck using GKMinmaxStrategist. This class/feature was shown off at WWDC, but most of the sample code was in Objective-C, which was a disappointment.
The WWDC videos for GameplayKit featured another game, Stone Flipper (Reversi/Othello), but they haven't published the code (yet?).
Has anyone had any luck with this? I was hoping to try this out with just a simple tic-tac-toe game, but am not at all sure how to start.
I agree that it's a tricky framework to learn – I just finished writing a tutorial about GameplayKit and GKMinmaxStrategist and it was no mean feat. If you follow the tutorial it builds a complete game from scratch, explaining how it all fits together. You might find it useful as a starting point, at the very least.
I'm hopeful that Apple will improve its documentation before iOS 9 is final!
If you want to dive straight in, here's the least you need to know (a short sketch in Swift follows the list):
Ensure your game model (data) and view (layouts) are kept separate.
Make your model implement the NSCopying protocol, because it will be copied many times as the AI runs.
You should also make it implement the GKGameModel protocol, which requires that you be able to enumerate the available moves, apply a move to a copy of the board (virtually, not for real), then judge each player's score afterwards.
Each "move" (for whatever that means in your game) needs to conform to the GKGameModelUpdate protocol, so it'll be a class you create that defines a particular move. You'll be given this back when you the best move has been chosen, so it will contain something like "move the knight to E4".
If your game does not have a score (in my tutorial I used Four in a Row, which has exactly this problem), then you need to come up with a heuristic estimating roughly how good a move was.
Run the AI on a background thread to ensure your UI remains responsive, then push the result back to the foreground thread when you're ready to make UI changes.
If you find the AI is running slowly, either restrict the number of moves it can consider or reduce its look-ahead depth.
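To make those points concrete, here's a minimal sketch in Swift of how the pieces fit together, using a hypothetical Four in a Row style board. Board, Move, Player and the canMove/add helpers are placeholder names of my own; only the GameplayKit types and the GKMinmaxStrategist calls are real API:

    import GameplayKit

    // Each possible move conforms to GKGameModelUpdate.
    class Move: NSObject, GKGameModelUpdate {
        var value = 0            // required; the strategist uses it to rank moves
        let column: Int          // whatever describes "one move" in your game
        init(column: Int) { self.column = column }
    }

    class Player: NSObject, GKGameModelPlayer {
        let playerId: Int        // required by GKGameModelPlayer
        init(playerId: Int) { self.playerId = playerId }
    }

    // The game model conforms to both NSCopying and GKGameModel.
    class Board: NSObject, GKGameModel {
        static let allPlayers = [Player(playerId: 0), Player(playerId: 1)]
        var slots = [Int]()                          // your board state
        var currentPlayer = Board.allPlayers[0]

        var players: [GKGameModelPlayer]? { return Board.allPlayers }
        var activePlayer: GKGameModelPlayer? { return currentPlayer }

        // NSCopying: the AI copies the board many times while it searches.
        func copy(with zone: NSZone? = nil) -> Any {
            let copy = Board()
            copy.setGameModel(self)
            return copy
        }

        func setGameModel(_ gameModel: GKGameModel) {
            if let board = gameModel as? Board {
                slots = board.slots
                currentPlayer = board.currentPlayer
            }
        }

        // Enumerate every move the given player could make right now.
        func gameModelUpdates(for player: GKGameModelPlayer) -> [GKGameModelUpdate]? {
            let moves = (0..<7).filter { canMove(in: $0) }.map { Move(column: $0) }
            return moves.isEmpty ? nil : moves
        }

        // Apply a move "virtually" -- the strategist calls this on copies.
        func apply(_ gameModelUpdate: GKGameModelUpdate) {
            if let move = gameModelUpdate as? Move {
                add(chipIn: move.column, for: currentPlayer)
                // Hand the turn over; the strategist relies on activePlayer alternating.
                currentPlayer = Board.allPlayers[1 - currentPlayer.playerId]
            }
        }

        // The heuristic: how good is this position for the given player?
        func score(for player: GKGameModelPlayer) -> Int {
            return 0 // replace with your own evaluation
        }

        // Stubs standing in for your real game logic.
        private func canMove(in column: Int) -> Bool { return true }
        private func add(chipIn column: Int, for player: Player) { }
    }

And to run it on a background thread so the UI stays responsive:

    let board = Board()
    let strategist = GKMinmaxStrategist()
    strategist.maxLookAheadDepth = 4                 // lower this if the AI is slow
    strategist.randomSource = GKARC4RandomSource()   // breaks ties between equal moves
    strategist.gameModel = board

    DispatchQueue.global(qos: .userInitiated).async {
        let move = strategist.bestMove(for: board.currentPlayer) as? Move
        DispatchQueue.main.async {
            // apply `move` to the real board and update the UI here
        }
    }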
Here's the GKMinmaxStrategist TicTacToe tutorial in Swift.
This should explain how things work and give some pointers on how to make a good AI. The strategist certainly isn't a template for creating any kind of board-game AI; it just provides a framework. 95% of the work still rests on your shoulders. ;)
The code is available here. Note that it requires not only Xcode 7 but also OS X 10.11, though it should be straightforward to adapt to iOS 9.
I'm currently developing an application whose initial goal is to obtain, in real time, a 3D model of the environment "seen" by a Kinect device. This information would later be used for projection mapping, but that's not an issue for the moment.
There are a couple of challenges to overcome, namely the fact that the Kinect will be mounted on a mobile platform (robot) and the model generation has to be in real-time (or close to it).
After long research on this topic, I came up with several possible (?) architectures:
1) Use the depth data obtained from the Kinect, convert it into a point cloud (using PCL for this step), then into a mesh, and then export it into Unity for further work.
2) Use the depth data obtained from the Kinect, convert it into a point cloud (using PCL), export it into Unity, and then convert it into a mesh there.
3) Use KinectFusion, which already has the option of creating a mesh model, and (somehow) automatically load the created mesh into Unity.
4) Use OpenNI+ZDK (+ wrapper) to obtain the depth map and generate the Mesh using Unity.
Quite honestly, I'm kind of lost here. My main issue is that the real-time requirement, combined with being forced to integrate several software components, makes this a tricky problem. I don't know which of these solutions (if any) are viable, and information/tutorials on these issues isn't nearly as abundant as it is for, say, skeleton tracking.
Any sort of help would be greatly appreciated.
Regards,
Nuno
Sorry, I might not be providing a solution for real-time mesh creation within Unity, but the process discussion here was interesting enough for me to reply.
In the hard science-fiction novel Memories with Maya there is a discussion of exactly such a scenario:
"“Point taken,” he said. “So… Satish showed me a demo of the Quad [Quad=Drone] acquiring real-time depth and texture maps.”
“Nothing new in that,” I said.
“Yeah, but look above us.”
I tilted my head up. The crude shape of the Quad came into view.
“The Quad is here, but you can't see it because the FishEye [Fisheye=Kinect 2] is on it aimed straight ahead.”
“So it's mapping video texture over live geometry? Cool,” I said.
“Yeah, the breakthrough is I can freeze a frame… freeze real life as it were, step out of the scene and study it.”
“All you do is block out the live world with the cross polarizers?”
“Yeah,” he said. “It's a big deal for AYREE to be able to use such data-sets.”
“The resolution has improved,” I said.
“Good observation,” he said. “So has the range sensing. The lens optics have also been upgraded.”
“I noticed that if I turn around I don't see the live feed, just the empty street,” I said.
“Yes, of course,” he replied. “The Quad is facing the other way around. It's why I'm standing in front of you. The whole street, however, is a 3D model done by a standard laser scan taken from the top of that high tower.”
Krish pointed to a building block at the far end of the street. I turned back to the live 3D view again. He walked in front of me.
“This is uber cool. Everyone looks so real.”
“Haha. You should see how cool it is when you're here in person with the Wizer on,” he said. “I'm here watching these real people pass by, only they have a mesh of themselves mapped onto them.”
“Ahhh! Yes.”
“Yeah, it's like they have living paint on them. I feel like reaching out and touching, just to feel the texture.”...
The work you're thinking of doing in this area, and this use of a live mesh, goes far beyond projection mapping for events, for sure!
Wishing you the best on the project, and I will be following your updates.
Some of the science behind the story is on www.dirrogate.com if the topic interests you.
Kind Regards.
I would use Kinect Fusion, as it has a sample with the ability to export to .obj, which Unity supports. You can save the model automatically and import it into Unity to generate a mesh. If you have multiple Kinects, Microsoft even has a sample showing the basics of Kinect Fusion with multiple sensors. Also, since Fusion is already pre-written, there is not much code you will have to write.
Here is an example of a mesh from Fusion with one camera:
I do want you to notice how many vertices there are though... This could cause performance problems later on.
Good luck!
I was looking at some study I have to do in the future on procedural generation techniques, and I was wondering what types of content you have:
Developed
Helped Develop
Seen implemented
Tried to develop
and what methods/techniques/procedures you used to develop it.
If you feel generous, maybe you can even go into specifics, such as the data structures and algorithms you used to develop it.
If this needs to be marked community wiki because I'm not asking for a specific problem to be solved, just let me know.
This is not a homework thread, because it's a research unit that I'm not taking yet ;)
Introversion Software, the makers of the games Defcon, Uplink and Darwinia (among others), started working about a year ago on a game that makes extensive use of PCG for city generation. Here is a video of their work, and you can read more about it in the game's development diary (start from the first part at the bottom of the page!).
This immediately got me extremely interested, and seeing the potential for games, I started researching the technology. I have amassed a folder of 18 PDFs on the subject (research papers, SIGGRAPH presentations, etc.). Here, I uploaded it for you.
The main approach is to use L-systems; however, I never got around to understanding them well enough to make something out of it. I tried other, less successful approaches, like using Voronoi diagrams, recursively splitting a rectangular area into smaller areas (shifting the boundaries a little to obtain a bit of randomness), and polygon division.
The last method I got from Mike's Code Blog's posts (here and here). The screenshots on his blog make me drool; it is my biggest programmer's dream to ever get something that looks like that. I emailed him to ask how he did it, and here is the relevant part of his reply (I'm sure he wouldn't mind me posting it here):
L-Systems is definitely one way to go, but that isn't what I'm doing. The basis of my method is polygon subdivision. I start with a simple polygon that represents the entire area of the city. Then, I split it (roughly) in half, and then split those two polygons, etc. until I get down to city-block size. At that point, the edges of all my polygons represent roads. I then use the same subdivision method to break the blocks down into building-size lots.
The devil is in the details, of course, but that is the basic method.
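To make the quoted idea concrete, here's a minimal sketch of that kind of subdivision in Swift. It uses axis-aligned rectangles rather than arbitrary polygons to stay short, and every name and number in it is illustrative rather than taken from Mike's actual code:

    import Foundation

    struct Block {
        var x, y, width, height: Double
    }

    // Recursively split a block roughly in half, with a little jitter,
    // until every piece is at most `minSize` along its longest edge.
    func subdivide(_ block: Block, minSize: Double, into blocks: inout [Block]) {
        if max(block.width, block.height) <= minSize {
            blocks.append(block)        // small enough: a city block (or a lot)
            return
        }

        // Split somewhere near the middle; the jitter keeps the grid organic.
        let t = Double.random(in: 0.4...0.6)

        if block.width >= block.height {
            let w = block.width * t
            subdivide(Block(x: block.x, y: block.y,
                            width: w, height: block.height),
                      minSize: minSize, into: &blocks)
            subdivide(Block(x: block.x + w, y: block.y,
                            width: block.width - w, height: block.height),
                      minSize: minSize, into: &blocks)
        } else {
            let h = block.height * t
            subdivide(Block(x: block.x, y: block.y,
                            width: block.width, height: h),
                      minSize: minSize, into: &blocks)
            subdivide(Block(x: block.x, y: block.y + h,
                            width: block.width, height: block.height - h),
                      minSize: minSize, into: &blocks)
        }
    }

    // The edges between the resulting blocks are the roads; run the same
    // routine again on each block with a smaller minSize to get building lots.
    var city = [Block]()
    subdivide(Block(x: 0, y: 0, width: 1024, height: 1024), minSize: 64, into: &city)
    print("\(city.count) blocks")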
I for one still haven't managed to fully implement a solution I'm satisfied with, but it remains my single biggest programmer's dream to ever achieve something like this.
Here are a few of the leaders in procedurally generated terrain (and, to a lesser extent, foliage). If you don't get a detailed answer here regarding methods and techniques, you might want to look in, or ask in, their forums. I have seen some discussions of techniques there.
TerraGen 2
World Builder
World Machine
Natural Graphics
Has no one mentioned the demoscene, which uses ONLY procedural content?
So, go search for Werkkzeug, Kkrieger and MilkyTracker to start. You can also visit the site pouet and see the wonder of well-made procedural videos (yes, procedural video clips, with music and graphics, all procedural!).
Allegorithmic's products are used in actual shipping titles. These guys focus on texture generation (both offline and at runtime).
They have some very pretty screenshots and demos.