I want to detect multiple faces in my project, so I planned to use the trackingID property of CIFaceFeature to keep track of each face. But I found that it comes back the same for every face.
So my problem is: how can I uniquely identify a face when there are multiple faces in the video frame? I don't want to recognize a face for later use; I only need detection for the current video frame. Thanks.
I am using the same code as in Apple's SquareCam sample project, on iOS 6.
for ( CIFaceFeature *face in features ) {
    NSLog(@"face.trackingID %d", face.trackingID);
}
The above code prints the same ID for every face.
If you haven't already done so, you need to make sure to specify CIDetectorTracking in the detector's options. If I remember correctly, it should look something like this:
NSDictionary *detectorOptions = @{CIDetectorTracking: @YES};
CIDetector *detector = [CIDetector detectorOfType:CIDetectorTypeFace context:nil options:detectorOptions];
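Putting it together, here's a minimal sketch, assuming ciImage holds the current video frame and that you reuse one detector across frames (the orientation value is a placeholder for whatever your capture pipeline provides):
// Create the detector once and reuse it; tracking relies on feeding
// consecutive frames through the same detector instance.
NSDictionary *detectorOptions = @{CIDetectorAccuracy: CIDetectorAccuracyLow,
                                  CIDetectorTracking: @YES};
CIDetector *detector = [CIDetector detectorOfType:CIDetectorTypeFace
                                          context:nil
                                          options:detectorOptions];

// Per frame: each face should now carry its own stable trackingID.
NSArray *features = [detector featuresInImage:ciImage
                                      options:@{CIDetectorImageOrientation: @1}];
for (CIFaceFeature *face in features) {
    if (face.hasTrackingID) {
        NSLog(@"face.trackingID %d", face.trackingID);
    }
}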
I am trying to persist markers in an augmented reality game. Here is the gist of what I am doing:
I have my users record and save an area to an ADF. Then they drop markers into the scene and save out their position data, in Unity world coordinates, to a text file. I then restart the app, load and localize to the ADF, and load the markers.
In order to get this working, I've modified the ARPoseController.cs file in the Unity demo package to use the Area Description as its base frame. In the _UpdateTransformation method I've swapped out the frame pair
pair.baseFrame = TangoEnums.TangoCoordinateFrameType.TANGO_COORDINATE_FRAME_START_OF_SERVICE;
pair.targetFrame = TangoEnums.TangoCoordinateFrameType.TANGO_COORDINATE_FRAME_DEVICE;
for
pair.baseFrame = TangoEnums.TangoCoordinateFrameType.TANGO_COORDINATE_FRAME_AREA_DESCRIPTION;
pair.targetFrame = TangoEnums.TangoCoordinateFrameType.TANGO_COORDINATE_FRAME_DEVICE;
I've also added some code confirming that I'm successfully localizing to the ADF, but I'm noticing that my markers' positions in Unity world space are not placed properly relative to the real environment.
I can confirm that my markers save and load properly based on the START_OF_SERVICE origin, so I assume they serialize and deserialize correctly. What could be causing this? Am I wrong in assuming this should just work by switching the base frame to AREA_DESCRIPTION instead of START_OF_SERVICE?
I had a similar problem getting AR and ADFs integrated: I had to modify TangoPointCloud to check whether an Area Description is in use in OnTangoDepthAvailable() and adjust the baseFrame as required.
i.e.:
if (m_tangoDeltaPoseController.m_useAreaDescriptionPose)
{
    // Pose the point cloud relative to the loaded Area Description...
    pair.baseFrame = TangoEnums.TangoCoordinateFrameType.TANGO_COORDINATE_FRAME_AREA_DESCRIPTION;
    pair.targetFrame = TangoEnums.TangoCoordinateFrameType.TANGO_COORDINATE_FRAME_DEVICE;
}
else
{
    // ...otherwise keep using the device's start-of-service origin.
    pair.baseFrame = TangoEnums.TangoCoordinateFrameType.TANGO_COORDINATE_FRAME_START_OF_SERVICE;
    pair.targetFrame = TangoEnums.TangoCoordinateFrameType.TANGO_COORDINATE_FRAME_DEVICE;
}
That way, the geometry of the point cloud adjusts itself based on the ADF offset instead of from device start.
After that change, when I use the AR sample code to drop markers, it registers the surface properly, so I'm placing the markers in the correct spots and orientations. I'm still encountering some flakiness with the markers not adjusting when relocalized, though; I still have to look into the AreaLearningInGameController for loop-closure events.
Hope that helps!
I want there to be a "Quickmatch" mode in my turn-based game, where the player gets automatically matched with the first player to become available. I'm using my own custom UI. My code so far looks like this:
- (void)quickMatch {
    GKMatchRequest *request = [[GKMatchRequest alloc] init];
    request.minPlayers = 2;
    request.maxPlayers = 2;
    request.playersToInvite = nil;
    [GKTurnBasedMatch findMatchForRequest:request withCompletionHandler:^(GKTurnBasedMatch *match, NSError *error) {
        NSLog(@"MATCH: %@ %@ %@ %d", error, match, match.matchID, (int)match.status);
    }];
}
This successfully creates a match, but the 2nd participant in the match has a null ID (playerID:(null) status:Matching).
I thought that if I ran this same code on another instance, using a different Game Center ID, the two users would be matched against each other... but that does not appear to be the case. Whenever I call GKTurnBasedMatch's loadMatchesWithCompletionHandler, I keep retrieving the same matches, each with only one valid participant (the local player).
This question appears to be similar to iOS Development: How do I auto match players in Game Center?, which does indicate that setting request.playersToInvite = nil; should accomplish auto-matching, yet that doesn't appear to be working for me.
How can I cause Game Center to automatically match these players against each other?
Let me address the issues you are seeing here. First off, it is not necessary to set the playersToInvite property to nil, as that is the default unless players are assigned to it; however, that is not causing your "issue". I put that in quotes because you actually wrote the code correctly, and only perceive a problem that isn't there. Let me walk you through what happens when findMatchForRequest completes.
When the completion block is called, Game Center has created a new GKTurnBasedMatch object with two participants: the first is the local player (you), and the second is actually an empty participant with a nil playerID. The reason for this is that Game Center does not assign all participants when a match is created with random (unassigned) opponents; a random participant is assigned when the turn is sent to them. In your game, the match will not show up in the cloud for others to play in until you take your first turn.
Now, calling loadMatchesWithCompletionHandler on your other device/Game Center ID will not automatically display the match UNLESS you specifically invited that player with playersToInvite (and have already taken your turn as specified above). Think about it like this: if it worked that way, every player in the world would see every auto-match in existence when they called loadMatchesWithCompletionHandler.
The other Game Center ID must actually call findMatchForRequest, with no playersToInvite set, in order to be matched into the empty seat in the game your other ID created. This preserves the "it's always your turn" paradigm when creating a match, but that player is now in the second slot, not the first. Simply create a game on the second ID exactly the same way you did on the first, and the match will be created with two participants: the first from the ID that originally created it, and the second from the ID that joined by calling findMatchForRequest. The key here is that findMatchForRequest doesn't ALWAYS create a new match when playersToInvite is nil; if there is an existing match with an open seat, it will simply match the local player into it.
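For illustration, a minimal sketch of taking that first turn inside the completion handler from the question (the empty match data is a placeholder for your real serialized game state; this uses the iOS 6 GKTurnBasedMatch API):
// Take the first turn so the match (with its open seat) becomes
// available to other players calling findMatchForRequest.
GKTurnBasedParticipant *nextParticipant = match.participants[1]; // the unassigned seat
NSData *initialState = [NSData data]; // placeholder: your serialized game state
[match endTurnWithNextParticipants:@[nextParticipant]
                       turnTimeout:GKTurnTimeoutDefault
                         matchData:initialState
                 completionHandler:^(NSError *turnError) {
    NSLog(@"First turn ended: %@", turnError);
}];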
Happy Coding!
Corbin
We are developing a game about driving awareness.
The problem is that we need to show videos to the user if he makes any mistakes, once he has finished driving. For example, if he makes two mistakes, we need to show two videos at the end of the game.
Can you help with this? I don't have any idea how to approach it.
@solus already gave you an answer regarding "how to play a (pre-recorded) video from your application". However, from what I've understood, you are asking about saving (and visualizing) a kind of replay of the "wrong" actions performed by the player. This is not an easy task, and I don't think you can receive an exhaustive answer, only some advice. I will try to give you my own.
First of all, you should "capture" the position of the player's car at various points in time.
As an example, you could read the player's car position every 0.2 seconds and save it into a structure (for example, a List).
Then, you would implement some logic to detect the "wrong" actions (crashes, speeding... they obviously depend on your game) and save a reference pairing each "mistake" with the relevant portion of the list containing the car's positions for that event.
Now you have all you need to recreate a replay of the action: that is, make the car "drive alone" by reading the previously saved positions (which will act as waypoints for generating the route). A sketch of this idea follows below.
Obviously, you also have to deal with the camera's position and rotation: either leave it attached to the car (as in the normal "in-game" action), or move it over time to catch the most interesting angles, as AAA racing games do (this will make the overall task more difficult, of course).
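Here is a minimal sketch of that record-and-replay idea; all names are invented for the example, so treat it as a starting point rather than a drop-in implementation:
using System.Collections;
using System.Collections.Generic;
using UnityEngine;

// Hypothetical sketch: records the car's pose at fixed intervals, then
// re-drives the car along a saved slice of the route for a replay.
// Attach to the player's car.
public class ReplayRecorder : MonoBehaviour
{
    public float sampleInterval = 0.2f; // seconds between samples
    private readonly List<Vector3> positions = new List<Vector3>();
    private readonly List<Quaternion> rotations = new List<Quaternion>();
    private float timer;
    private bool replaying;

    void Update()
    {
        if (replaying) return;
        timer += Time.deltaTime;
        if (timer >= sampleInterval)
        {
            timer = 0f;
            positions.Add(transform.position);
            rotations.Add(transform.rotation);
        }
    }

    // Call when a mistake is detected to remember where it happened.
    public int CurrentIndex { get { return positions.Count - 1; } }

    // Replay a slice of the recording, e.g. the samples around a mistake.
    public void Replay(int startIndex, int endIndex)
    {
        StartCoroutine(ReplayRoutine(startIndex, endIndex));
    }

    private IEnumerator ReplayRoutine(int start, int end)
    {
        replaying = true;
        for (int i = start; i < end && i + 1 < positions.Count; i++)
        {
            // Interpolate between consecutive samples so motion stays smooth.
            float t = 0f;
            while (t < sampleInterval)
            {
                t += Time.deltaTime;
                float k = t / sampleInterval;
                transform.position = Vector3.Lerp(positions[i], positions[i + 1], k);
                transform.rotation = Quaternion.Slerp(rotations[i], rotations[i + 1], k);
                yield return null;
            }
        }
        replaying = false;
    }
}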
Unity will import a video as a MovieTexture. It will be converted to the native Theora/Vorbis (Ogg) format. (Use ffmpeg2theora if the import fails.)
Simply apply it as you would any texture. You could use a plane or a flat cube. You should adjust its localScale to match the aspect ratio of your video (movie.width/(float)movie.height).
Put the attached audio clip in an AudioSource, then call movie.Play() and audio.Play(), as in the sketch below.
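A minimal sketch, assuming a Unity 4-era project (where the renderer and audio component shortcuts still exist) and an AudioSource on the same GameObject:
using UnityEngine;

// Sketch: plays an imported MovieTexture on this object's material.
public class MoviePlayer : MonoBehaviour
{
    public MovieTexture movie; // assign the imported video in the Inspector

    void Start()
    {
        renderer.material.mainTexture = movie;

        // Match the object's proportions to the video's aspect ratio.
        transform.localScale = new Vector3(
            movie.width / (float)movie.height, 1f, 1f);

        audio.clip = movie.audioClip;
        movie.Play();
        audio.Play();
    }
}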
You could also load the video from a local file path or the web (in the correct format).
var movie = new WWW(@"file://C:\videos\myvideo.ogv").movie;
...
if (movie.isReadyToPlay)
{
    renderer.material.mainTexture = movie;
    audio.clip = movie.audioClip;
    movie.Play();
    audio.Play();
}
Use MovieTexture, but do not forget to install QuickTime; you need it to import movie clips (.mov files, for example).
I'm back with one more question related to bass. I already posted How Can we control bass of music in iPhone, but it didn't get as much of your attention as it should have. Since then I have done some more searching and have read about Core Audio. I found some sample code which I want to share with you; here is the link to download it: iPhoneMixerEqGraphTest. Looking at this code, what I've seen is that the developer uses the preset equalizer from the iPod that Apple provides. Here's a code snippet:
// iPodEQ unit
CAComponentDescription eq_desc(kAudioUnitType_Effect, kAudioUnitSubType_AUiPodEQ, kAudioUnitManufacturer_Apple);
What kAudioUnitSubType_AUiPodEQ does is fetch the preset values from the iPod's equalizer and return them to us in an array, which we can show in a UIPickerView/UITableView to set a category like Bass, Rock, Dance, etc. That doesn't help me, as it only returns the names of equalizer presets like Bass, Rock, and Dance, whereas I want to implement bass only, and control it with a UISlider.
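For reference, the sample obtains that preset array roughly like this (a sketch; iPodEQUnit stands for the audio unit created from the description above):
// Fetch the iPod EQ's factory presets (Bass Booster, Rock, Dance, ...)
// as a CFArrayRef of AUPreset entries.
CFArrayRef presets = NULL;
UInt32 size = sizeof(presets);
OSStatus result = AudioUnitGetProperty(iPodEQUnit,
                                       kAudioUnitProperty_FactoryPresets,
                                       kAudioUnitScope_Global,
                                       0,
                                       &presets,
                                       &size);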
To implement bass on a slider I need values, so that I can set a minimum and a maximum, and the bass can change as the slider moves.
After all this, I started reading the Audio Unit classes in the Core Audio documentation, and then searched for bass control, which led me to kAudioUnitSubType_LowShelfFilter.
So now I need to implement kAudioUnitSubType_LowShelfFilter, but I don't know how to use this subtype in my code to control the bass as the documentation describes; Apple doesn't document how to use it either. The kAudioUnitSubType_AUiPodEQ subtype returned an array of presets we could use, but kAudioUnitSubType_LowShelfFilter does not return any array. How can I use it? Can anybody help me with this in any manner? It would be highly appreciated.
Thanks.
Update
Although it's declared in the iOS headers, the Low Shelf AU is not actually available on iOS.
The parameters of the Low Shelf are different from the iPod EQ.
Parameters are declared and documented in AudioUnit/AudioUnitParameters.h:
// Parameters for the AULowShelfFilter unit
enum {
    // Global, Hz, 10->200, 80
    kAULowShelfParam_CutoffFrequency = 0,
    // Global, dB, -40->40, 0
    kAULowShelfParam_Gain = 1
};
So after your low shelf AU is created, configure its parameters using AudioUnitSetParameter.
Some initial parameter values you can try would be 120 Hz (kAULowShelfParam_CutoffFrequency) and +6 dB (kAULowShelfParam_Gain) -- assuming your system reproduces bass well, your low frequency content should be twice as loud.
Can you tell me how I can use kAULowShelfParam_CutoffFrequency to change the frequency?
If everything is configured right, this should be all that is needed:
assert(lowShelfAU);
const float frequencyInHz = 120.0f;
OSStatus result = AudioUnitSetParameter(lowShelfAU,
                                        kAULowShelfParam_CutoffFrequency,
                                        kAudioUnitScope_Global,
                                        0,
                                        frequencyInHz,
                                        0);
if (noErr != result) {
    assert(0 && "error!");
    return ...;
}
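And to drive the bass from a UISlider, as the original question asks, a sketch along these lines should work (lowShelfAU and the slider wiring are assumptions, and note the update above: this AU may not actually be usable on iOS):
// Assumes slider.minimumValue = -40 and slider.maximumValue = 40,
// matching the documented dB range of kAULowShelfParam_Gain.
- (IBAction)bassSliderChanged:(UISlider *)slider {
    OSStatus result = AudioUnitSetParameter(lowShelfAU,
                                            kAULowShelfParam_Gain,
                                            kAudioUnitScope_Global,
                                            0,
                                            slider.value,
                                            0);
    if (noErr != result) {
        NSLog(@"Failed to set low shelf gain: %d", (int)result);
    }
}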
I'm working in Objective-C at the moment.
I am drawing a path for my sprite to follow, and it all seems to be working fine, but I have one question that didn't seem to be answered anywhere.
My first two points in the Bezier are rather close together relative to the third point, and when my sprite animates along this path it seems to be eased into the animation, with an abrupt stop at the end.
Is there a way to control this? I'd like the animation to run at one consistent speed, or possibly be eased out.
id bezierForward = [CCBezierTo actionWithDuration:totalDistance/300.f bezier:bezier];
[turkey runAction:bezierForward];
Give this a try:
id bezierForward = [CCBezierTo actionWithDuration:totalDistance/300.f bezier:bezier];
id easeBezierForward = [CCEaseOut actionWithAction:bezierForward rate:2.0];
[turkey runAction:easeBezierForward];
You will want to play with the rate value to see what ends up looking best to you. You may also have to try some of the other ease-out actions, like CCEaseSineOut.
Link: Cocos2d Ease Actions Guide
Should probably be something like this, according to the docs (CCEaseOut wraps the inner bezier action):
id bezierForward = [CCBezierTo actionWithDuration:totalDistance/300.f bezier:bezier];
id easedForward = [CCEaseOut actionWithAction:bezierForward rate:2.0f];
[turkey runAction:easedForward];
As stated in the docs:
Variations
CCEaseIn: acceleration at the beginning
CCEaseOut: acceleration at the end
CCEaseInOut: acceleration at the beginning / end
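Each variation wraps the inner action the same way; for instance, a quick sketch with CCEaseInOut (rate chosen arbitrarily):
id move = [CCBezierTo actionWithDuration:totalDistance/300.f bezier:bezier];
id eased = [CCEaseInOut actionWithAction:move rate:2.0f];
[turkey runAction:eased];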