Mono for Android: vertically scrolling background image (android-emulator)

I am working on a gaming application in Mono for Android. I want sample code for scrolling a background image vertically from top to bottom. I have code, but it is not working properly, so could somebody please help me?
mBGFarMoveY = mBGFarMoveY + 3;
int newFarY = mBackgroundImageFar.Height + (+ mBGFarMoveY);
if (newFarY <= 0)
{
    mBGFarMoveY = 0;
    canvas.DrawBitmap (mBackgroundImageFar, 0, mBGFarMoveY, null);
}
else
{
    canvas.DrawBitmap (mBackgroundImageFar, 0, mBGFarMoveY, null);
    canvas.DrawBitmap (mBackgroundImageFar, 0, newFarY, null);
}
Thanks & Regards,
Chakradhar.

What do you see and what do you expect? There are several issues with your code as far as I can see:
(1) The position is not calculated based upon time, so scrolling will be jumpy.
(2) The overlap code doesn't look too great and blits a lot out of bounds. I'm not sure what 'canvas' is, but if it is the one from android.graphics, you can specify the source and destination rectangles to blit rather than just the 'y' position.
So something like this (untested, and I've not written code for this platform before, but you should get the idea):
// Derive the scroll offset from elapsed time so the speed is frame-rate independent.
// time_seconds is the total elapsed time, pixels_per_second the desired scroll speed.
int y = (int)(time_seconds * pixels_per_second);
y = y % image.Height; // wrap once a full image height has scrolled past

// Source and destination rectangles. Note that Right/Bottom are exclusive in Android.Graphics.Rect.
var srcRect = new Rect(0, y, image.Width, image.Height);
var dstRect = new Rect(0, 0, image.Width, image.Height);

if (y == 0)
{
    // No wrap needed: blit the whole image once.
    canvas.DrawBitmap(image, srcRect, dstRect, null);
}
else
{
    // The lower part of the image fills the top of the destination...
    dstRect.Bottom = srcRect.Height();
    canvas.DrawBitmap(image, srcRect, dstRect, null);

    // ...and the upper part of the image fills the rest.
    srcRect.Top = 0;
    srcRect.Bottom = y;
    dstRect.Top = dstRect.Bottom;
    dstRect.Bottom = image.Height;
    canvas.DrawBitmap(image, srcRect, dstRect, null);
}

How to generate a honeycomb field in Unity?

I need to generate a field like this:
[Photo]
But I don't know how to do it. Here is what I got instead:
[My result]
My code:
[ContextMenu("Generate grid")]
public void GenerateGrid()
{
for(int x = 0; x < _gridSize.x; x++)
{
for (int z = 0; z < _gridSize.z; z++)
{
var meshSize = _cell.GetComponent<MeshRenderer>().bounds.size;
var position = new Vector3(x * (meshSize.x + _offset), 0, z * (meshSize.z + _offset));
var cell = Instantiate(_cell, position, Quaternion.Euler(_rotationOffset), _parent.transform);
cell.GridActions = GridActions;
cell.Position = new Vector2(x, z);
cell.name = $"Cell: x:{x}, z:{z}";
GridActions.AllCell.Add(cell);
}
}
}
Simply put: for every odd z value, move the cell sideways by half a cell size and pull it toward the previous row by half a cell size. I didn't test it, but here is code that might do that (again, untested):
[ContextMenu("Generate grid")]
public void GenerateGrid()
{
for(int x = 0; x < _gridSize.x; x++)
{
for (int z = 0; z < _gridSize.z; z++)
{
int xResize = 0;
int zResize = 0;
if (z % 2 == 1) {
xResize = meshSize.x / 2;
zResize = meshSize.z / 2;
}
var meshSize = _cell.GetComponent<MeshRenderer>().bounds.size;
var position = new Vector3(x * (meshSize.x + _offset - xResize), 0, z * (meshSize.z + _offset - zResize));
var cell = Instantiate(_cell, position, Quaternion.Euler(_rotationOffset), _parent.transform);
cell.GridActions = GridActions;
cell.Position = new Vector2(x, z);
cell.name = $"Cell: x:{x}, z:{z}";
GridActions.AllCell.Add(cell);
}
}
}

PCL: merging two sets of points into one cloud and visualizing it with the PCL cloud viewer

I am trying to merge two sets of points from two different views into a single point cloud and visualize it with the PCL cloud viewer.
mPtrPointCloud->points.clear();
mPtrPointCloud->points.resize(mFrameSize * 2);
auto it = mPtrPointCloud->points.begin();

received = PopReceived();
if (received != nullptr)
{
    // p_data_cloud = (float*)received->mTransformedPC.data;
    p_data_cloud = (float*)received->mCVPointCloud.data;
    index = 0;
    for (size_t i = 0; i < mFrameSize; ++i)
    {
        float X = p_data_cloud[index];
        if (!isValidMeasure(X)) // Checking if it's a valid point
        {
            it->x = it->y = it->z = it->rgb = 0;
        }
        else
        {
            it->x = X;
            it->y = p_data_cloud[index + 1];
            it->z = p_data_cloud[index + 2];
            it->rgb = convertColor(p_data_cloud[index + 3]); // Convert a 32-bit float into the pcl rgb format
        }
        index += 4;
        ++it;
    }
}

frame = PopFrame();
if (frame != nullptr)
{
    // p_data_cloud = frame->mSLPointCloud.getPtr<float>();
    p_data_cloud = (float*)frame->mCVPointCloud.data;
    index = 0;
    for (size_t i = 0; i < mFrameSize; ++i)
    {
        float X = p_data_cloud[index];
        if (!isValidMeasure(X)) // Checking if it's a valid point
        {
            it->x = it->y = it->z = it->rgb = 0;
        }
        else
        {
            it->x = X;
            it->y = p_data_cloud[index + 1];
            it->z = p_data_cloud[index + 2];
            it->rgb = convertColor(p_data_cloud[index + 3]); // Convert a 32-bit float into the pcl rgb format
        }
        index += 4;
        ++it;
    }
}

mPtrPCViewer->showCloud(mPtrPointCloud);
What I want is for the two sets of points to be fused into one frame. However, it seems the two sets are still shown separately, one after the other.
Could anyone explain how to really merge two sets of points into one cloud? Thanks.
(1) Create a new, empty point cloud which will be the merged point cloud at the end:
pcl::PointCloud<pcl::PointXYZRGB>::Ptr mPtrPointCloud(new pcl::PointCloud<pcl::PointXYZRGB>);
(2) Transform both point clouds into a common frame. The two views are expressed in their own sensor frames, which is why they still appear as two separate sets when you simply copy them into one buffer:
pcl::PointCloud<pcl::PointXYZRGB> received_transformed;
Eigen::Affine3f received_transformation = Eigen::Translation3f(received.sensor_origin_.head<3>()) * received.sensor_orientation_;
pcl::transformPointCloud(received, received_transformed, received_transformation);

pcl::PointCloud<pcl::PointXYZRGB> frame_transformed;
Eigen::Affine3f frame_transformation = Eigen::Translation3f(frame.sensor_origin_.head<3>()) * frame.sensor_orientation_;
pcl::transformPointCloud(frame, frame_transformed, frame_transformation);
(3) Use the += operator to concatenate them:
*mPtrPointCloud += received_transformed;
*mPtrPointCloud += frame_transformed;
(4) Visualize the merged point cloud:
mPtrPCViewer->showCloud(mPtrPointCloud);
That's it. See also the tutorials http://pointclouds.org/documentation/tutorials/concatenate_clouds.php and
http://pointclouds.org/documentation/tutorials/matrix_transform.php
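For reference, here is a minimal sketch of how the steps fit together (untested; MergeAndShow and its cloud arguments are placeholder names, and it assumes the two views have already been copied into PointXYZRGB clouds with their sensor poses set):
#include <pcl/point_types.h>
#include <pcl/common/transforms.h>
#include <pcl/visualization/cloud_viewer.h>

void MergeAndShow(pcl::visualization::CloudViewer& viewer,
                  pcl::PointCloud<pcl::PointXYZRGB>& receivedCloud,
                  pcl::PointCloud<pcl::PointXYZRGB>& frameCloud)
{
    pcl::PointCloud<pcl::PointXYZRGB>::Ptr merged(new pcl::PointCloud<pcl::PointXYZRGB>);

    // Bring both views into a common frame before concatenating.
    Eigen::Affine3f tReceived = Eigen::Translation3f(receivedCloud.sensor_origin_.head<3>()) * receivedCloud.sensor_orientation_;
    Eigen::Affine3f tFrame    = Eigen::Translation3f(frameCloud.sensor_origin_.head<3>()) * frameCloud.sensor_orientation_;
    pcl::transformPointCloud(receivedCloud, receivedCloud, tReceived);
    pcl::transformPointCloud(frameCloud, frameCloud, tFrame);

    // operator+= appends the right-hand cloud's points to the left-hand cloud.
    *merged += receivedCloud;
    *merged += frameCloud;

    viewer.showCloud(merged);
}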

Moving background using a single image in cocos2d-x v3

I am new to cocos2d-x programming. Can anybody please help me with moving a background sprite using a single image, with code for cocos2d-x 3.0? I have already searched for this problem, but all the code I found is for the old cocos2d-x. Thanks.
I got this example from a cocos2d-x book.
// In init(): create two copies of the background, one on screen and one just below it.
for (int i = 0; i < 2; i++)
{
    backgroundSpriteArray[i] = Sprite::create("GameScreen/Game_Screen_Background.png");
    backgroundSpriteArray[i]->setPosition(
        Point((visibleSize.width / 2) + origin.x,
              (-1 * visibleSize.height * i) + (visibleSize.height / 2) + origin.y));
    this->addChild(backgroundSpriteArray[i], -2);
}

// In update(float dt): move both copies upward every frame.
for (int i = 0; i < 2; i++)
{
    backgroundSpriteArray[i]->setPosition(
        Point(backgroundSpriteArray[i]->getPosition().x,
              backgroundSpriteArray[i]->getPosition().y + (0.75 * visibleSize.height * dt)));
}
Although the backgrounds scroll, they do not reset once they have gone above the screen. Add the following code above the previous code:
Size visibleSize = Director::getInstance()->getVisibleSize();
Point origin = Director::getInstance()->getVisibleOrigin();
for (int i = 0; i < 2; i++)
{
    if (backgroundSpriteArray[i]->getPosition().y >=
        visibleSize.height + (visibleSize.height / 2) - 1)
    {
        backgroundSpriteArray[i]->setPosition(
            Point((visibleSize.width / 2) + origin.x,
                  (-1 * visibleSize.height) + (visibleSize.height / 2)));
    }
}
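For completeness, here is a rough sketch (untested) showing where the snippets above would live: the creation loop in init(), and the reset and movement loops folded into update(float dt), which runs every frame once scheduleUpdate() has been called. GameScene, backgroundSpriteArray, visibleSize and origin are placeholder names for your own layer class and its members, not from the book:
bool GameScene::init()
{
    if (!Layer::init())
        return false;

    visibleSize = Director::getInstance()->getVisibleSize();
    origin = Director::getInstance()->getVisibleOrigin();

    for (int i = 0; i < 2; i++)
    {
        backgroundSpriteArray[i] = Sprite::create("GameScreen/Game_Screen_Background.png");
        backgroundSpriteArray[i]->setPosition(
            Point((visibleSize.width / 2) + origin.x,
                  (-1 * visibleSize.height * i) + (visibleSize.height / 2) + origin.y));
        this->addChild(backgroundSpriteArray[i], -2);
    }

    this->scheduleUpdate();   // makes update(dt) run once per frame
    return true;
}

void GameScene::update(float dt)
{
    for (int i = 0; i < 2; i++)
    {
        // Wrap a sprite back below the screen once it has scrolled fully above it.
        if (backgroundSpriteArray[i]->getPosition().y >=
            visibleSize.height + (visibleSize.height / 2) - 1)
        {
            backgroundSpriteArray[i]->setPosition(
                Point((visibleSize.width / 2) + origin.x,
                      (-1 * visibleSize.height) + (visibleSize.height / 2)));
        }

        // Move the sprite upward at 0.75 screen heights per second.
        backgroundSpriteArray[i]->setPosition(
            Point(backgroundSpriteArray[i]->getPosition().x,
                  backgroundSpriteArray[i]->getPosition().y + (0.75 * visibleSize.height * dt)));
    }
}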

Save frame from TangoService_connectOnFrameAvailable

How can I save a frame via TangoService_connectOnFrameAvailable() and display it correctly on my computer? As this reference page mentions, the pixels are stored in the HAL_PIXEL_FORMAT_YV12 format. In my callback function for TangoService_connectOnFrameAvailable, I save the frame like this:
static void onColorFrameAvailable(void* context, TangoCameraId id, const TangoImageBuffer* buffer)
{
    ...
    std::ofstream fp;
    fp.open(imagefile, std::ios::out | std::ios::binary);
    int offset = 0;
    for (int i = 0; i < buffer->height * 2 + 1; i++) {
        fp.write((char*)(buffer->data + offset), buffer->width);
        offset += buffer->stride;
    }
    fp.close();
}
Then to get rid of the meta data in the first row and to display the image I run:
$ dd if="input.raw" of="new.raw" bs=1 skip=1280
$ vooya new.raw
I was careful to make sure in vooya that the channel order is YVU, but the resulting output is still wrong. What am I doing wrong in saving the image and displaying it?
UPDATE per Mark Mullin's response:
int offset = buffer->stride; // header offset

// copy Y channel
for (int i = 0; i < buffer->height; i++) {
    fp.write((char*)(buffer->data + offset), buffer->width);
    offset += buffer->stride;
}
// copy V channel
for (int i = 0; i < buffer->height / 2; i++) {
    fp.write((char*)(buffer->data + offset), buffer->width / 2);
    offset += buffer->stride / 2;
}
// copy U channel
for (int i = 0; i < buffer->height / 2; i++) {
    fp.write((char*)(buffer->data + offset), buffer->width / 2);
    offset += buffer->stride / 2;
}
This now produces a recognizable picture, but there are still some artifacts; I wonder if they come from the Tango tablet camera or from my processing of the raw data... any thoughts?
Can't say exactly what you're doing wrong, and Tango images often have artifacts in them. Yours are new to me, but I often see baby blue as a color where glare seems to be annoying the deeper systems, and as the camera begins to lose sync with the depth system under load, you'll often see what looks like a shiny grid (it's the IR pattern, I think). In the end, any rational attempt to handle the image with OpenCV etc. failed, so I hand wrote the decoder with some help from the SO thread here.
That said, given imagebuffer contains a pointer to the raw data from Tango, and various other variables like height and stride are filled in from the data received in the callback, this logic will create an RGBA map. Yeah, I optimized the math in it, so it's a little ugly; its slower but functionally equivalent twin is listed second. My own experience says it's a horrible idea to try to do this decode right in the callback (I believe Tango is capable of losing sync with the flash for depth for purely spiteful reasons), so mine runs at the render stage.
Fast
uchar* pData = TangoData::cameraImageBuffer;
uchar* iData = TangoData::cameraImageBufferRGBA;
int size = (int)(TangoData::imageBufferStride * TangoData::imageBufferHeight);
float invByte = 0.0039215686274509803921568627451; // ( 1 / 255 )
int halfi, uvOffset, halfj, uvOffsetHalfj;
float y_scaled, v_scaled, u_scaled;
int uOffset = size / 4 + size;   // U plane starts after the Y plane (size) and the V plane (size / 4)
int halfstride = TangoData::imageBufferStride / 2;
for (int i = 0; i < TangoData::imageBufferHeight; ++i)
{
    halfi = i / 2;
    uvOffset = halfi * halfstride;
    for (int j = 0; j < TangoData::imageBufferWidth; ++j)
    {
        halfj = j / 2;
        uvOffsetHalfj = uvOffset + halfj;
        y_scaled = pData[i * TangoData::imageBufferStride + j] * invByte;
        // Umax/Vmax are the usual YUV chroma maxima (about 0.436 and 0.615), defined elsewhere.
        v_scaled = 2 * (pData[uvOffsetHalfj + size] * invByte - 0.5f) * Vmax;
        u_scaled = 2 * (pData[uvOffsetHalfj + uOffset] * invByte - 0.5f) * Umax;
        *iData++ = (uchar)((y_scaled + 1.13983f * v_scaled) * 255.0);
        *iData++ = (uchar)((y_scaled - 0.39465f * u_scaled - 0.58060f * v_scaled) * 255.0);
        *iData++ = (uchar)((y_scaled + 2.03211f * u_scaled) * 255.0);
        *iData++ = 255;
    }
}
Understandable
for (int i = 0; i < TangoData::imageBufferHeight; ++i)
{
    for (int j = 0; j < TangoData::imageBufferWidth; ++j)
    {
        uchar y = pData[i * TangoData::imageBufferStride + j];
        uchar v = pData[(i / 2) * (TangoData::imageBufferStride / 2) + (j / 2) + size];
        uchar u = pData[(i / 2) * (TangoData::imageBufferStride / 2) + (j / 2) + size + (size / 4)];
        YUV2RGB(y, u, v);   // converts the three values to R, G, B in place
        *iData++ = y;
        *iData++ = u;
        *iData++ = v;
        *iData++ = 255;
    }
}
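If you just want to inspect the decoded buffer, one option (not part of the original answer; it assumes OpenCV is available wherever the RGBA buffer ends up) is to wrap the array in a cv::Mat and write it out:
#include <opencv2/opencv.hpp>

// cameraImageBufferRGBA holds width * height * 4 bytes in R, G, B, A order (filled by the loops above).
cv::Mat rgba(TangoData::imageBufferHeight, TangoData::imageBufferWidth, CV_8UC4,
             TangoData::cameraImageBufferRGBA);
cv::Mat bgra;
cv::cvtColor(rgba, bgra, cv::COLOR_RGBA2BGRA);   // imwrite expects BGR(A) channel order
cv::imwrite("/sdcard/Pictures/decoded.png", bgra);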
I think there is a better way to do this if you can do it offline.
The best way to save the image would be something like this (don't forget to create the Pictures folder, or you won't save anything):
void onFrameAvailableRouter(void* context, TangoCameraId id, const TangoImageBuffer* buffer) {
    // To write the image to a txt file.
    std::stringstream name_stream;
    name_stream.setf(std::ios_base::fixed, std::ios_base::floatfield);
    name_stream.precision(3);
    name_stream << "/storage/emulated/0/Pictures/"
                << cur_frame_timstamp_
                << ".txt";
    std::fstream f(name_stream.str().c_str(), std::ios::out | std::ios::binary);
    // size = 1280*720*1.5 to save YUV, or 1280*720 to save grayscale
    int size = stride_ * height_ * 1.5;
    f.write((const char *) buffer->data, size * sizeof(uint8_t));
    f.close();
}
Then, to convert the .txt file to a PNG, you can do this (Python):
inputFolder = "input"
outputFolderRGB = "output/rgb"
outputFolderGray = "output/gray"
input_filename = "timestamp.txt"
output_filename = "rgb.png"
allFile = listdir(inputFolder)
numberOfFile = len(allFile)
if "input" in glob.glob("*"):
if "output/rgb" in glob.glob("output/*"):
print ""
else:
makedirs("output/rgb")
if "output/gray" in glob.glob("output/*"):
print ""
else:
makedirs("output/gray")
#The output reportories are ready
for file in allFile:
count+=1
print "current file : ",count,"/",numberOfFile
input_filename = file
output_filename = input_filename[0:(len(input_filename)-3)]+"png"
# load file into buffer
data = np.fromfile(inputFolder+"/"+input_filename, dtype=np.uint8)
#To get RGB image
# create yuv image
yuv = np.ndarray((height + height / 2, width), dtype=np.uint8, buffer=data)
# create a height x width x channels matrix with the datatype uint8 for rgb image
img = np.zeros((height, width, channels), dtype=np.uint8);
# convert yuv image to rgb image
cv2.cvtColor(yuv, cv2.COLOR_YUV2BGRA_NV21, img, channels)
cv2.imwrite(outputFolderRGB+"/"+output_filename, img)
#If u saved the image in graysacale use this part instead
#yuvReal = np.ndarray((height, width), dtype=np.uint8, buffer=data)
#cv2.imwrite(outputFolderGray+"/"+output_filename, yuvReal)
else:
print "not any input"
You just have to put your .txt files in a folder named input.
It's a Python script, but if you prefer a C++ version it's very close; a rough sketch follows.
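For example, a C++ equivalent of the conversion step might look roughly like this (untested sketch; it assumes the same 1280x720 dumps and reuses the NV21 conversion from the Python script above, and the file and variable names are placeholders):
#include <fstream>
#include <iterator>
#include <vector>
#include <opencv2/opencv.hpp>

int main(int argc, char** argv)
{
    const int width = 1280;
    const int height = 720;

    // Read one raw dump written by the callback above (pass its path as argv[1]).
    std::ifstream in(argv[1], std::ios::in | std::ios::binary);
    std::vector<uint8_t> data((std::istreambuf_iterator<char>(in)),
                              std::istreambuf_iterator<char>());

    // Wrap the buffer as a (height * 1.5) x width single-channel image, then convert.
    // Same conversion code as the Python script; switch to cv::COLOR_YUV2BGRA_YV12
    // if your dumps are planar YV12.
    cv::Mat yuv(height + height / 2, width, CV_8UC1, data.data());
    cv::Mat bgra;
    cv::cvtColor(yuv, bgra, cv::COLOR_YUV2BGRA_NV21);

    cv::imwrite("rgb.png", bgra);
    return 0;
}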

Teaching a Neural Net: Bipolar XOR

I'm trying to teach a neural net with 2 inputs, 4 hidden nodes (all in the same layer), and 1 output node. The binary representation works fine, but I have problems with the bipolar one. I can't figure out why, but the total error will sometimes converge to the same number, around 2.xx. My sigmoid is 2/(1 + exp(-x)) - 1. Perhaps I'm applying the sigmoid in the wrong place. For example, to calculate the output error, should I be comparing the sigmoided output with the expected value or with the sigmoided expected value?
I was following this website: http://galaxy.agh.edu.pl/~vlsi/AI/backp_t_en/backprop.html , but they use different functions than I was instructed to use. Even when I did try to implement their functions, I still ran into the same problem. Either way, I get stuck about half the time at the same number (a different number for different implementations). Please tell me if I have made a mistake in my code somewhere, or if this is normal (I don't see how it could be). Momentum is set to 0. Is this a common zero-momentum problem? The error functions we are supposed to be using are:
if ui is an output unit:
Error(i) = (Ci - ui) * f'(Si)
if ui is a hidden unit:
Error(i) = Error(output) * weight(i to output) * f'(Si)
public double sigmoid(double x) {
    double fBipolar, fBinary, temp;
    temp = (1 + Math.exp(-x));
    fBipolar = (2 / temp) - 1;
    fBinary = 1 / temp;
    if (bipolar) {
        return fBipolar;
    } else {
        return fBinary;
    }
}

// Initialize the weights to random values.
private void initializeWeights(double neg, double pos) {
    for (int i = 0; i < numInputs + 1; i++) {
        for (int j = 0; j < numHiddenNeurons; j++) {
            inputWeights[i][j] = Math.random() - pos;
            if (inputWeights[i][j] < neg || inputWeights[i][j] > pos) {
                print("ERROR ");
                print(inputWeights[i][j]);
            }
        }
    }
    for (int i = 0; i < numHiddenNeurons + 1; i++) {
        hiddenWeights[i] = Math.random() - pos;
        if (hiddenWeights[i] < neg || hiddenWeights[i] > pos) {
            print("ERROR ");
            print(hiddenWeights[i]);
        }
    }
}

// Computes output of the NN without training. I.e. a forward pass
public double outputFor(double[] argInputVector) {
    for (int i = 0; i < numInputs; i++) {
        inputs[i] = argInputVector[i];
    }
    double weightedSum = 0;
    for (int i = 0; i < numHiddenNeurons; i++) {
        weightedSum = 0;
        for (int j = 0; j < numInputs + 1; j++) {
            weightedSum += inputWeights[j][i] * inputs[j];
        }
        hiddenActivation[i] = sigmoid(weightedSum);
    }
    weightedSum = 0;
    for (int j = 0; j < numHiddenNeurons + 1; j++) {
        weightedSum += (hiddenActivation[j] * hiddenWeights[j]);
    }
    return sigmoid(weightedSum);
}

// Computes the derivative of f
public static double fPrime(double u) {
    double fBipolar, fBinary;
    fBipolar = 0.5 * (1 - Math.pow(u, 2));
    fBinary = u * (1 - u);
    if (bipolar) {
        return fBipolar;
    } else {
        return fBinary;
    }
}

// This method is used to update the weights of the neural net.
public double train(double[] argInputVector, double argTargetOutput) {
    double output = outputFor(argInputVector);
    double lastDelta;
    double outputError = (argTargetOutput - output) * fPrime(output);
    if (outputError != 0) {
        for (int i = 0; i < numHiddenNeurons + 1; i++) {
            hiddenError[i] = hiddenWeights[i] * outputError * fPrime(hiddenActivation[i]);
            deltaHiddenWeights[i] = learningRate * outputError * hiddenActivation[i] + (momentum * lastDelta);
            hiddenWeights[i] += deltaHiddenWeights[i];
        }
        for (int in = 0; in < numInputs + 1; in++) {
            for (int hid = 0; hid < numHiddenNeurons; hid++) {
                lastDelta = deltaInputWeights[in][hid];
                deltaInputWeights[in][hid] = learningRate * hiddenError[hid] * inputs[in] + (momentum * lastDelta);
                inputWeights[in][hid] += deltaInputWeights[in][hid];
            }
        }
    }
    return 0.5 * (argTargetOutput - output) * (argTargetOutput - output);
}
General coding comments:
initializeWeights(-1.0, 1.0);
may not actually get the initial values you were expecting.
initializeWeights should probably have:
inputWeights[i][j] = Math.random() * (pos - neg) + neg;
// ...
hiddenWeights[i] = (Math.random() * (pos - neg)) + neg;
instead of:
Math.random() - pos;
so that this works:
initializeWeights(0.0, 1.0);
and gives you initial values between 0.0 and 1.0 rather than between -1.0 and 0.0.
lastDelta is used before it is initialized:
deltaHiddenWeights[i] = learningRate * outputError * hiddenActivation[i] + (momentum * lastDelta);
I'm not sure whether the + 1 in numInputs + 1 and numHiddenNeurons + 1 is necessary.
Remember to watch out for rounding of ints: 5/2 = 2, not 2.5!
Use 5.0/2.0 instead. In general, add the .0 in your code when the output should be a double.
Most importantly, have you trained the NeuralNet long enough?
Try running it with numInputs = 2, numHiddenNeurons = 4, learningRate = 0.9, and train for 1,000 or 10,000 times.
Using numHiddenNeurons = 2, it sometimes gets "stuck" when trying to solve the XOR problem.
See also XOR problem - simulation