Flutter board game without grid layout - flutter

I'm making a simple board game in Flutter.
The stations are marked with numbers and circles. I need to be able to define the position of each station dynamically (on the backend side).
The selected station should pulse.
Can anyone tell me how to position each station in Flutter on a given SVG or PNG image so that the stations end up in the same place on different-sized devices? I don't need a copy-pastable solution; rather, I need guidance in principle on how to approach this in Flutter.
I am currently calculating positions from pixels and the screen resolution, but it feels like a very hacky way to do it.
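For illustration, a minimal sketch of one possible principle: store each station's position as fractions of the board image (0..1, delivered by the backend) and convert them to pixels at layout time. The Station model, the fractional values, the asset name, and the 4:3 aspect ratio below are all assumptions for the sake of the example, not a finished solution:

import 'package:flutter/material.dart';

// Placeholder model; in the real app this data would come from the backend.
class Station {
  final int number;
  final double fx; // horizontal position as a fraction of the board width (0..1)
  final double fy; // vertical position as a fraction of the board height (0..1)
  const Station(this.number, this.fx, this.fy);
}

const stations = [
  Station(1, 0.12, 0.80),
  Station(2, 0.35, 0.62),
  Station(3, 0.58, 0.40),
];

class Board extends StatelessWidget {
  const Board({super.key});

  @override
  Widget build(BuildContext context) {
    // Keep the layout box at the board image's aspect ratio (assumed 4:3),
    // so the fractions map to the same spot on the image on every screen.
    return AspectRatio(
      aspectRatio: 4 / 3,
      child: LayoutBuilder(
        builder: (context, constraints) {
          final w = constraints.maxWidth;
          final h = constraints.maxHeight;
          return Stack(
            children: [
              Positioned.fill(
                child: Image.asset('assets/board.png', fit: BoxFit.fill),
              ),
              for (final s in stations)
                Positioned(
                  // Offset by the radius so the circle is centred on the point.
                  left: s.fx * w - 12,
                  top: s.fy * h - 12,
                  child: CircleAvatar(radius: 12, child: Text('${s.number}')),
                ),
            ],
          );
        },
      ),
    );
  }
}

The pulsing of the selected station could then presumably be handled by wrapping that one circle in a ScaleTransition driven by a repeating AnimationController, without touching the layout maths.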
Here is a sketch of the board:

Related

How to make a responsive camera for multiple resolutions?

I'm working on a 2D mobile game related to chess, and I'm having trouble displaying the board on different mobile screen resolutions.
Here you can see how it should look at a 9:16 resolution:
https://drive.google.com/open?id=1MFt-FtEtqkk7QWQC2oAtMBA0WAgOIV4h
And here is how it looks on a smaller screen:
https://drive.google.com/open?id=11WLYZwHEa9ijXbekzUbE5ZbjbEnjZ6Lb
How can I keep my chess board from being cropped?
Answer: it depends on how you want it to look.
If you just want the board to fit the screen horizontally, you need to perform three steps:
Evaluate the width of the sprite (with Sprite.bounds)
Evaluate the width of the current screen (with Screen.width, and using the main Camera; for an orthographic camera, the visible width in world units is 2 * Camera.orthographicSize * Screen.width / Screen.height)
Scale the sprite so that its width matches the screen width
If you are using UI elements, you can do the same thing, but you won't need the Camera; just scale according to the screen width (look at RectTransform.sizeDelta to get the size of a UI element).
If you are using UI, also consider using layout groups; they help you fit the content to the screen size.
In any case, on a device with a small width the board may end up too small, and you might have to think about better options for displaying it than just scaling with the screen width.

How to fit content into a specified area of screen at game startup in Unity3D

We are using Unity3D to develop an interesting medical application. To put it briefly: we have a very large touch screen hanging on a wall. This screen is fixed to the wall and cannot be moved. Patients could be adults or children, tall or short people, and so on.
Before starting the game, we perform a calibration phase that tries to work out, more or less, what the touch range is. That is, a taller person can reach the highest points on the screen, while a shorter person cannot. The calibration phase then identifies roughly which area is reachable. The result of the calibration (simplified) is a rectangle.
We would like to fit the content of the game made with Unity3D inside this rectangle. That is, is there some function in Unity3D that lets you specify, when the game starts, where the elements of the game are drawn, by defining a sort of "sub-screen"?
Absolutely yes. It is quite easy: just change the Viewport Rect of the Camera.
Also check the documentation for completeness (the section on Normalized Viewport Rectangles gives an example from games, where the screen is split between two cameras for a two-player match; you basically want the same thing, but with a single camera).
In that doc there is also an example in which the viewport is changed programmatically (which is your case). Basically:
Camera.main.rect = new Rect(x, y, width, height);
where all four values are normalized viewport coordinates between 0 and 1, measured from the bottom-left corner. For example, a reachable area spanning the middle 60% of the screen horizontally and the bottom 70% vertically would be new Rect(0.2f, 0f, 0.6f, 0.7f).

Unity large, dynamic tile map

I'm thinking of making a city transport simulation game based on a tile map, like SimCity. Because transportation is the main component, I'm not going to model the city in great detail, like drawing all the buildings. But I do need to divide the city into districts and villages.
I want them to be generated randomly, based on a tile map system. Actually, this doesn't have to be tile-based, but I don't have any better idea for generating randomly divided districts.
The problem is that there are simply too many tiles. Of course I won't turn every tile into a GameObject, but the array storing the tile information ends up around 4000x3000 (12,000,000 tiles). Is that okay? I think this will seriously slow down the game.
I have looked at many ways to generate tile maps, but those tile maps are for RPGs; they are just sprites and backgrounds.
My tiles should change dynamically (maybe their colors, and since I'm making the game in 3D, maybe also a height that can change), showing the status of each tile (a small region of the city). What would be a better way to meet these needs?
Thank you!
The short answer is that you should generate a mesh to represent your tiles. This will allow you to represent a large number of tiles as a single game object.
It is not as troublesome as it sounds. I actually went through this myself recently, and have written a step-by-step guide on how to do it, and on how to solve a host of other issues I ran into while working with tiles:
http://matthewlynch.net/2017/02/19/efficient-artefact-free-2d-tile-mapping-in-unity-5-5/
The only caveat I would mention (this is discussed in the tutorial, but not directly addressed) is that you should still divide your map up in some way.
I would not recommend exceeding 100x100 tiles in a single mesh, and in fact would recommend breaking it up even further. But once you get the basics of tile->mesh rendering up and running, this is not difficult to do.
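As an engine-agnostic illustration of that chunking bookkeeping, here is a small sketch; it is written in Dart only because this page started with a Flutter question, and every name in it is made up. The tile data lives in one flat array, and each 100x100 block of tiles maps to its own chunk (its own mesh), so a tile change only forces one chunk's mesh to be rebuilt:

import 'dart:typed_data';

const mapWidth = 4000;
const mapHeight = 3000;
const chunkSize = 100; // tiles per chunk side, per the recommendation above

// One byte per tile (terrain / district / colour code) is about 12 MB here.
final tiles = Uint8List(mapWidth * mapHeight);

int tileIndex(int x, int y) => y * mapWidth + x;
int chunkOf(int coordinate) => coordinate ~/ chunkSize;

void main() {
  tiles[tileIndex(2500, 1200)] = 3; // mark one tile as, say, district 3
  // Only the mesh for chunk (25, 12) would need rebuilding after this change.
  print('rebuild chunk (${chunkOf(2500)}, ${chunkOf(1200)})');
}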

Build an iPhone app that can recognise colour from the streaming camera

I am building an iPhone app to recognise a specific colour through the iPhone camera when the phone is placed onto a colour board.
Note that I want it to work on the streaming camera output, not just a still image or photo.
My initial thought was to scan a series of pixels (say, 4 in each corner of the camera feed) and, if the colours registered at each pixel match, display the colour (as text) to the user.
Can someone please point me in the right direction, whether that is example code, an API, or a better design for solving the problem?

Polling the iPhone Camera to Process Images

The scenario is that I want my app to process (in the background, if possible) the images being seen by the iPhone camera.
For example: the app is running, the user places the phone down on a piece of red cardboard, and I then want to display an alert view saying "Phone placed on Red Surface" (this is a simplified version of what I want to do, just to keep the question direct).
I hope this makes sense. I know there are two separate concerns here:
How to process images from the camera in the background of the app (if we can't do this, then we can initiate the process with, say, a button click if needed).
Processing the image to determine what solid colour the phone is sitting on.
Any help/guidance would be greatly appreciated.
Thanks
Generic answers to your two questions:
Background processing of the image can be triggered as a timer event. Say, for example, every 30 seconds you capture the current camera image and do the processing behind the scenes. If the processing is not compute- or time-intensive, this should work.
It is technically possible to read the colour of, say, one pixel programmatically. If you are sure that the entire image is just one colour, you can try that approach: pick a few random points and read the colour of the pixel at each one. But if the surface (in your example, the red board) contains a picture or multiple colours, that will require more detailed image processing techniques.
Hope this helps
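A minimal sketch of that timer-driven polling idea, deliberately platform-agnostic; it is written in Dart only because this page started with a Flutter question, and captureLatestFrame / isOnRedSurface are placeholders for the platform-specific capture and analysis steps:

import 'dart:async';

// Placeholder for whatever actually grabs the latest camera frame.
Object captureLatestFrame() => Object();

// Placeholder for the pixel-sampling analysis described above.
bool isOnRedSurface(Object frame) => false;

void main() {
  // Poll on a timer instead of trying to analyse every single camera frame.
  Timer.periodic(const Duration(seconds: 30), (_) {
    final frame = captureLatestFrame();
    if (isOnRedSurface(frame)) {
      print('Phone placed on Red Surface');
    }
  });
}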
1) Image Capture
There are two kinds of apps that continually take imagery from the camera: media capture apps (e.g. Camera, iMovie) and Augmented Reality apps.
Here's the iPhone SDK tutorial for media capture:
https://developer.apple.com/library/ios/documentation/AudioVideo/Conceptual/AVFoundationPG/Articles/04_MediaCapture.html#//apple_ref/doc/uid/TP40010188-CH5-SW3
Access the camera with iPhone SDK
Augmented Reality apps take continual pictures from the camera for processing/overlay. I suggest you look into some of the available AR kits and see how they get a continual stream from the camera and also analyze the pixels.
Starting an augmented reality (AR) app like Panasonic VIERA AR Setup Simulator
http://blog.bordertownlabs.com/post/157320598/customizing-the-iphone-camera-view-with
2) Image Processing
Image processing is a really big topic that's been addressed in multiple other places:
https://photo.stackexchange.com/questions/tagged/image-processing
https://dsp.stackexchange.com/questions/tagged/image-processing
https://mathematica.stackexchange.com/questions/tagged/image-processing
...but for starters, you'll need to use some heuristic analysis to determine what you're looking for. Sampling the captured pixels in a number of places (e.g. the corners plus the middle) may help, as would generating a histogram of colour intensities: if there's lots of red but little or no blue and green, it's a red card.
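To make that heuristic concrete, here is a tiny sketch of the channel-sum idea; it is in Dart only because this page started with a Flutter question, the Rgb type and the sample values are invented, and the real work of getting pixels out of a camera frame is not shown:

class Rgb {
  final int r, g, b;
  const Rgb(this.r, this.g, this.b);
}

// Sum the channels over the sampled pixels and call the card "red" when the
// red channel clearly dominates the other two.
String dominantColour(List<Rgb> samples) {
  var r = 0, g = 0, b = 0;
  for (final p in samples) {
    r += p.r;
    g += p.g;
    b += p.b;
  }
  if (r > 2 * g && r > 2 * b) return 'red';
  if (g > 2 * r && g > 2 * b) return 'green';
  if (b > 2 * r && b > 2 * g) return 'blue';
  return 'unknown';
}

void main() {
  // Corner and centre samples from a hypothetical captured frame.
  const samples = [
    Rgb(210, 40, 35),
    Rgb(200, 30, 30),
    Rgb(195, 45, 40),
    Rgb(205, 35, 38),
    Rgb(198, 42, 33),
  ];
  print(dominantColour(samples)); // prints: red
}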