ALAsset: Does every asset belong to an Event? (ALAssetsLibrary)

I am doing a project that involves ALAssets. I know that in iPhoto every picture on the phone belongs to an Event. Events seem to be like folders. What is not clear to me is: if you sync your phone to a PC and not a Mac, does every asset still belong to an event?
Thanks for your advice.

The asset library has so-called groups, and the groups contain the assets. There are groups of the following types:
enum {
    ALAssetsGroupLibrary     = (1 << 0),   // The Library group that includes all assets.
    ALAssetsGroupAlbum       = (1 << 1),   // All the albums synced from iTunes or created on the device.
    ALAssetsGroupEvent       = (1 << 2),   // All the events synced from iTunes.
    ALAssetsGroupFaces       = (1 << 3),   // All the faces albums synced from iTunes.
    ALAssetsGroupSavedPhotos = (1 << 4),   // The Saved Photos album.
    ALAssetsGroupPhotoStream = (1 << 5),   // The PhotoStream album.
    ALAssetsGroupAll         = 0xFFFFFFFF, // The same as ORing together all the available group types,
                                           // with the exception that ALAssetsGroupLibrary is not included.
};
An asset is only part of an event if it has been synced by iTunes or created via the Camera Connection Kit. Photos taken on the device are always part of ALAssetsGroupSavedPhotos (the Camera Roll) and can additionally be assigned to a user-created album.
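To make that concrete, here is a rough sketch (my addition, written against the Swift 2-era bridging of this long-deprecated API, so the exact casts may differ in your Swift version) that enumerates only the Event groups; if nothing was synced from iTunes, the enumeration simply finds no groups:
import AssetsLibrary

let library = ALAssetsLibrary()

// Enumerate only the Event groups. Photos taken on the device will not appear here;
// they live in the Saved Photos (Camera Roll) group instead.
// Note: ALAssetsLibrary has been deprecated since iOS 9, and the cast below reflects
// the Swift 2-era bridging of the ALAssetsGroupType constants.
library.enumerateGroupsWithTypes(ALAssetsGroupType(ALAssetsGroupEvent), usingBlock: { group, stop in
    guard let group = group else { return } // a nil group marks the end of the enumeration
    let name = group.valueForProperty(ALAssetsGroupPropertyName)
    print("Event: \(name), assets: \(group.numberOfAssets())")
}, failureBlock: { error in
    print("Could not access the asset library: \(error)")
})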

Related

Cannot show album artwork on CarPlay

I'm building an iOS music app that integrates with Apple CarPlay. I can play music on CarPlay normally and can display some information about the song, such as the title, album, and artist name. However, I cannot display the album artwork.
This is the bulk of the code for displaying media information on CarPlay:
if let nowPlayingItem: PlaylistItem = self.nowPlayingItem {
    let info: NSMutableDictionary = NSMutableDictionary()
    info[MPMediaItemPropertyArtist] = nowPlayingItem.mediaItem?.artist?.name
    info[MPMediaItemPropertyAlbumTitle] = nowPlayingItem.mediaItem?.album?.title
    info[MPMediaItemPropertyTitle] = nowPlayingItem.mediaItem?.title
    info[MPMediaItemPropertyPlaybackDuration] = nowPlayingItem.mediaItem?.playbackDuration
    info[MPMediaItemPropertyArtwork] = nowPlayingItem.mediaItem?.artwork()
    let sec: TimeInterval = CMTimeGetSeconds(time)
    info[MPNowPlayingInfoPropertyElapsedPlaybackTime] = Int(sec)
    MPNowPlayingInfoCenter.default().nowPlayingInfo = info as? [String: Any]
}
This is my current app:
And this is what I want:
So what do I have to do? Please help me find a solution for this.
You're likely testing on the simulator. While the following code is sometimes necessary to show the Now Playing screen at all, it is currently not possible to get the CarPlay simulator to recognize that the player is actually playing:
#if targetEnvironment(simulator)
UIApplication.shared.endReceivingRemoteControlEvents()
UIApplication.shared.beginReceivingRemoteControlEvents()
#endif
I couldn't get the album artwork to show up on the Now Playing screen in the simulator. Do you have access to a physical car radio (I strongly recommend testing on one before submitting to the App Store)? I know it might not be very convincing, but if your artwork shows up in the Notification Center player on the iPhone, it will also show up in CarPlay (on a real device), since the Now Playing components are just proxied.
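One more thing worth double-checking (a sketch I'm adding, not part of the original answer): the MPMediaItemPropertyArtwork entry must be an MPMediaItemArtwork instance rather than a raw UIImage, so if your artwork() helper returns a UIImage you may need to wrap it roughly like this (coverImage stands in for whatever your model actually provides):
import MediaPlayer
import UIKit

// Hypothetical UIImage coming from your own model layer.
let coverImage: UIImage = UIImage(named: "placeholderCover")!

// Wrap the image in MPMediaItemArtwork (iOS 10+ initializer).
let artwork = MPMediaItemArtwork(boundsSize: coverImage.size) { _ in
    // The request handler may be called with different sizes; returning the
    // same image regardless of the requested size is usually fine.
    return coverImage
}

var info = MPNowPlayingInfoCenter.default().nowPlayingInfo ?? [String: Any]()
info[MPMediaItemPropertyArtwork] = artwork
MPNowPlayingInfoCenter.default().nowPlayingInfo = info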

How to get Vuforia cloud recognition-like functionality in ARCore (Unity)

I have created a prototype application in Unity using Vuforia, where I upload an image to my server; the server then sends the image (with the associated AssetBundle link in the metadata) to Vuforia Cloud to add it to the image target database. Then, in Unity, when the camera tracks the image target, I download the asset bundle to augment it.
public void OnNewSearchResult(TargetFinder.TargetSearchResult targetSearchResult)
{
    TargetFinder.TargetSearchResult cloudRecoSearchResult =
        (TargetFinder.TargetSearchResult)targetSearchResult;
    mTargetMetadata = cloudRecoSearchResult.MetaData;
    Debug.Log(mTargetMetadata);
    mCloudRecoBehaviour.CloudRecoEnabled = false;
    // Build augmentation based on target
    if (ImageTargetTemplate)
    {
        Debug.Log("Image target activated");
        // enable the new result with the same ImageTargetBehaviour:
        ObjectTracker tracker = TrackerManager.Instance.GetTracker<ObjectTracker>();
        ImageTargetBehaviour imageTargetBehaviour =
            (ImageTargetBehaviour)tracker.TargetFinder.EnableTracking(
                targetSearchResult, ImageTargetTemplate.gameObject);
        JsonData jd = JsonMapper.ToObject(mTargetMetadata);
        string url = jd["content-url"].ToString();
        Debug.Log("video url :" + "http://192.168.2.92/arads/" + url);
        vidPlayer.url = "http://192.168.2.92/arads/" + url;
        vidPlayer.Prepare();
        if (!vidPlayer.isPlaying)
            vidPlayer.Play();
    }
}
The above code fetches the associated video from the server. Can I get similar functionality with ARCore or AR Foundation? I read that ARCore's reference image database can hold 1,000 images.
What if the image I am tracking is not in the current database? Can I switch to a different database in that case?
Do I have to download the image and add it to the database in the application whenever I upload a new image to the server?
Can these images in ARCore have metadata like in Vuforia?
The difference between ARCore and Vuforia is that in ARCore you can add images to the database at runtime, so you do not have to use any server.
You can switch to a different database by modifying the session config via GoogleARCore.ARCoreSessionConfig.AugmentedImageDatabase.
As I said, you can add images to the database at runtime, so as long as you have the image in your project hierarchy you can add it to the database.
I do not think having metadata is possible; the only information you can get is the database index of the image.
Good luck!

Swift | How to change the sound of a push notification to another system sound?

I'm kind of stuck with this issue. I have working push notifications in my app, and I'm trying to let the user choose his own sound for a specific push notification (all sounds are from the iPhone's built-in sounds, kind of like WhatsApp).
The payload I'm getting is something like:
aps: {
    alert = Message Type 1;
    badge = 1;
    sound = "default";
}
Now in my app the user has three types of push notifications, and I let the user choose which ringtone/sound he wants for a certain push; after he chooses, I save his choice in NSUserDefaults.
My problem is that I don't know how to override the default sound in my userInfo and produce a notification with the different sound that I saved in NSUserDefaults.
I did it easily in Java, but it's so different with Swift and the way it works on iOS.
I would be grateful if anyone could shed some light on the subject.
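No answer is recorded in this thread, but as a hedged sketch of one common approach (my addition, assuming an iOS 10+ Notification Service Extension is acceptable): send "mutable-content": 1 in the aps payload and swap the sound inside the extension based on the saved choice. Note that notification sounds can only be the default or short .caf/.aiff/.wav files shipped with the app (or placed in Library/Sounds), not arbitrary built-in iPhone ringtones, and the extension only sees user defaults shared through an App Group; the suite and key names below are placeholders.
import UserNotifications

// Hypothetical service extension; requires "mutable-content": 1 in the aps payload.
class NotificationService: UNNotificationServiceExtension {

    override func didReceive(_ request: UNNotificationRequest,
                             withContentHandler contentHandler: @escaping (UNNotificationContent) -> Void) {
        guard let content = request.content.mutableCopy() as? UNMutableNotificationContent else {
            contentHandler(request.content)
            return
        }

        // Assumed App Group suite and key; the extension cannot read the app's standard defaults.
        let defaults = UserDefaults(suiteName: "group.com.example.myapp")
        let chosenSound = defaults?.string(forKey: "chosenSound") ?? "default"

        if chosenSound == "default" {
            content.sound = .default
        } else {
            // The file (e.g. "chime.caf") must ship in the app bundle or Library/Sounds;
            // built-in iOS ringtones are not accessible this way.
            content.sound = UNNotificationSound(named: UNNotificationSoundName(chosenSound))
        }

        contentHandler(content)
    }
}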

How to write a web-based music visualizer?

I'm trying to find the best approach to build a music visualizer to run in a browser over the web. Unity is an option, but I'll need to build a custom audio import/analysis plugin to get the end user's sound output. Quartz does what I need but only runs on Mac/Safari. WebGL seems not ready. Raphael is mainly 2D, and there's still the issue of getting the user's sound... any ideas? Has anyone done this before?
Making something audio reactive is pretty simple. Here's an open-source site with lots of audio-reactive examples.
As for how to do it, you basically use the Web Audio API to stream the music and use its AnalyserNode to get audio data out.
"use strict";
const ctx = document.querySelector("canvas").getContext("2d");
ctx.fillText("click to start", 100, 75);
ctx.canvas.addEventListener('click', start);
function start() {
ctx.canvas.removeEventListener('click', start);
// make a Web Audio Context
const context = new AudioContext();
const analyser = context.createAnalyser();
// Make a buffer to receive the audio data
const numPoints = analyser.frequencyBinCount;
const audioDataArray = new Uint8Array(numPoints);
function render() {
ctx.clearRect(0, 0, ctx.canvas.width, ctx.canvas.height);
// get the current audio data
analyser.getByteFrequencyData(audioDataArray);
const width = ctx.canvas.width;
const height = ctx.canvas.height;
const size = 5;
// draw a point every size pixels
for (let x = 0; x < width; x += size) {
// compute the audio data for this point
const ndx = x * numPoints / width | 0;
// get the audio data and make it go from 0 to 1
const audioValue = audioDataArray[ndx] / 255;
// draw a rect size by size big
const y = audioValue * height;
ctx.fillRect(x, y, size, size);
}
requestAnimationFrame(render);
}
requestAnimationFrame(render);
// Make a audio node
const audio = new Audio();
audio.loop = true;
audio.autoplay = true;
// this line is only needed if the music you are trying to play is on a
// different server than the page trying to play it.
// It asks the server for permission to use the music. If the server says "no"
// then you will not be able to play the music
// Note if you are using music from the same domain
// **YOU MUST REMOVE THIS LINE** or your server must give permission.
audio.crossOrigin = "anonymous";
// call `handleCanplay` when it music can be played
audio.addEventListener('canplay', handleCanplay);
audio.src = "https://twgljs.org/examples/sounds/DOCTOR%20VOX%20-%20Level%20Up.mp3";
audio.load();
function handleCanplay() {
// connect the audio element to the analyser node and the analyser node
// to the main Web Audio context
const source = context.createMediaElementSource(audio);
source.connect(analyser);
analyser.connect(context.destination);
}
}
canvas { border: 1px solid black; display: block; }
<canvas></canvas>
Then it's just up to you to draw something creative.
Note some troubles you'll likely run into.
At this point in time (2017/1/3) neither Android Chrome nor iOS Safari supports analysing streaming audio data. Instead you have to load the entire song. Here's a library that tries to abstract that a little.
On mobile you cannot automatically play audio. You must start the audio inside an input event based on user input, like 'click' or 'touchstart'.
As pointed out in the sample, you can only analyse audio if the source is either from the same domain OR you ask for CORS permission and the server grants it. AFAIK only SoundCloud gives permission, and it's on a per-song basis. It's up to the individual artist's song settings whether or not audio analysis is allowed for a particular song.
To try to explain this part:
The default is that you have permission to access all data from the same domain, but no permission from other domains.
When you add
audio.crossOrigin = "anonymous";
that basically says "ask the server for permission for user 'anonymous'". The server can give permission or not; it's up to the server. This includes asking even a server on the same domain, which means that if you're going to request a song from the same domain, you need to either (a) remove the line above or (b) configure your server to give CORS permission. Most servers by default do not give CORS permission, so if you add that line and the server does not grant CORS permission, trying to analyse the audio will fail even when the server is on the same domain.
music: DOCTOR VOX - Level Up
By WebGL being "not ready", I'm assuming that you're referring to its market penetration (it's only supported in WebKit and Firefox at the moment).
Other than that, equalisers are definitely possible using HTML5 audio and WebGL. A guy called David Humphrey has blogged about making different music visualisers using WebGL and was able to create some really impressive ones. Here are some videos of the visualisations (click to watch):
I used SoundManager2 to pull the waveform data from the MP3 file. That feature requires Flash 9, so it might not be the best approach.
My waveform demo with HTML5 Canvas:
http://www.momentumracer.com/electriccanvas/
and WebGL:
http://www.momentumracer.com/electricwebgl/
Sources:
https://github.com/pepez/Electric-Canvas
Depending on the complexity, you might be interested in trying out Processing (http://www.processing.org); it has really easy tools for making web-based apps, and it has tools to get the FFT and waveform of an audio file.

Can I record a video without using UIImagePickerController?

Can I record a video without using UIImagePickerController?
Of course without needing to jailbreak or anything else that would cause the App Store to reject the app.
I think there is a way to access the video device without using UIImagePickerController, because these camera applications, which utilize ffmpeg, can record video and work on the iPhone 2G/3G:
iVideoCamera
iVidCam
I picked this code up by googling.
AVFormatParameters formatParams;
AVInputFormat *iformat;
formatParams.device = "/dev/video0";
formatParams.channel = 0;
formatParams.standard = "ntsc";
formatParams.width = 640;
formatParams.height = 480;
formatParams.frame_rate = 29;
formatParams.frame_rate_base = 1;
filename = "";
iformat = av_find_input_format("video4linux");
av_open_input_file(&ffmpegFormatContext,
                   filename, iformat, 0, &formatParams);
This code tells me how to open a camera device, but I don't know the device path on the iPhone.
How do iVideoCamera and iVidCam record video?
Both of these use CoreSurface, which is a private API. You can google it for more information. In iOS 4, there are new APIs to get direct frame access from the camera.
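For reference, here is a rough sketch (my addition, using the modern AVFoundation capture classes rather than the iOS 4-era APIs the answer alludes to) of recording video without UIImagePickerController, via AVCaptureSession and AVCaptureMovieFileOutput:
import AVFoundation

// A minimal sketch: requires NSCameraUsageDescription / NSMicrophoneUsageDescription
// in Info.plist and camera/microphone permission granted at runtime.
final class MovieRecorder: NSObject, AVCaptureFileOutputRecordingDelegate {
    private let session = AVCaptureSession()
    private let movieOutput = AVCaptureMovieFileOutput()

    func start() throws {
        // Wire up the default camera and microphone as inputs.
        guard let camera = AVCaptureDevice.default(for: .video),
              let mic = AVCaptureDevice.default(for: .audio) else { return }

        let cameraInput = try AVCaptureDeviceInput(device: camera)
        let micInput = try AVCaptureDeviceInput(device: mic)
        if session.canAddInput(cameraInput) { session.addInput(cameraInput) }
        if session.canAddInput(micInput) { session.addInput(micInput) }
        if session.canAddOutput(movieOutput) { session.addOutput(movieOutput) }

        session.startRunning()

        // Record to a temporary .mov file.
        let url = FileManager.default.temporaryDirectory.appendingPathComponent("capture.mov")
        movieOutput.startRecording(to: url, recordingDelegate: self)
    }

    func stop() {
        movieOutput.stopRecording()
        session.stopRunning()
    }

    // Delegate callback fired when the movie file has been written.
    func fileOutput(_ output: AVCaptureFileOutput,
                    didFinishRecordingTo outputFileURL: URL,
                    from connections: [AVCaptureConnection],
                    error: Error?) {
        print("Finished recording to \(outputFileURL), error: \(String(describing: error))")
    }
}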