Where can I download the image or video stimuli for the visual coding experiments (allen-sdk)?

I am trying to download the images and videos that were used as stimuli in the visual coding experiments of the Allen Institute Brain Observatory.
I followed the tutorials and successfully found the spikes and all sorts of metadata, but I did not find any resource for the image or video stimuli themselves.
Specifically, I am looking for the drifting grating videos, the natural movie clips, or the natural scene images.

I found an answer on the Allen Institute Forum:
https://community.brain-map.org/t/how-can-i-generate-visual-stimuli-like-those-used-in-the-brain-observatory/33
This second link provides some code to extract the stimuli:
http://alleninstitute.github.io/AllenSDK/_static/examples/nb/brain_observatory_stimuli.html
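In case it helps, here is a minimal sketch (based on that notebook) of pulling the stimulus templates out with the AllenSDK. It assumes the AllenSDK is installed and uses a placeholder experiment ID; natural scenes and natural movies are stored as stimulus templates inside each session's NWB file, while drifting gratings have no stored template because they are generated parametrically from the grating parameters. (This uses the two-photon Brain Observatory cache that the linked notebook covers; the Neuropixels data have their own cache with similar template accessors.)

# Hedged sketch: fetch the natural scene / natural movie stimulus templates
# for one Brain Observatory session via the AllenSDK.
from allensdk.core.brain_observatory_cache import BrainObservatoryCache

boc = BrainObservatoryCache(manifest_file='boc/manifest.json')

# Placeholder ID: pick any session that used the stimuli you care about,
# e.g. from boc.get_ophys_experiments(stimuli=['natural_scenes']).
experiment_id = 501498760
data_set = boc.get_ophys_experiment_data(experiment_id)

# Each template is an image stack of shape (n_frames, height, width).
natural_scenes = data_set.get_stimulus_template('natural_scenes')
natural_movie = data_set.get_stimulus_template('natural_movie_one')
print(natural_scenes.shape, natural_movie.shape)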

Related

are there any tools/scripts for analyzing/retrieving flash/html5 video information/metadata

I want to play a YouTube video at a certain resolution (like 360p), capture the packets, extract the video from the captured packets, and then analyze/retrieve the Flash/HTML5 video information/metadata from those videos.
By the way, do the videos keep the same resolution when they are extracted from the captured packets?
Note that these videos may not be complete.
Are there any good tools for analyzing/retrieving Flash/HTML5 video information/metadata, such as video bit rate, resolution (360p, 480p, ...), the audio/video codecs used, file size, and duration?
If the video is not complete, the information would ideally include both the original and the actual file size, and both the original and the actual duration.
I would prefer a script, or at least a tool that can be run from the shell on the command line, because I want to automate this.
A paper says Perl can do this, but I don't know how.
Thanks!
(long comment, not a complete answer)
IANAL, but your goals may not fit the YouTube Terms of Service:
Section 4. C
You agree not to access Content through any technology or means other than the video playback pages of the Service itself, the Embeddable Player, or other explicitly authorized means YouTube may designate.
Section 4. H
You agree not to use or launch any automated system, including without limitation, "robots," "spiders," or "offline readers," that […] sends more request messages to the YouTube servers […] than a human can reasonably produce in the same period by using a conventional on-line web browser. Notwithstanding the foregoing, YouTube grants the operators of public search engines permission to use spiders to copy materials from the site for the sole purpose of and solely to the extent necessary for creating publicly available searchable indices of the materials, but not caches or archives of such materials. […]
You may be able to access the required information directly using the YouTube Data API. Here is a reference, and here is a list of directly supported programming languages. Perl will work as well, as the underlying data format is plain XML or JSON.
You might also find these SO questions enlightening: "YouTube Player API: How to get duration of a loaded/cued video without playing it?" and "Youtube API get video duration from the XML".
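For what it's worth, here is a rough Python sketch of pulling that metadata through the Data API; it assumes the current v3 videos endpoint and an API key of your own (the key and video ID below are placeholders). Note that this returns the metadata YouTube reports for the video, not the properties of a partially captured file.

# Hedged sketch: fetch title, duration and definition for one video via the
# YouTube Data API v3. API_KEY and VIDEO_ID are placeholders.
import json
import urllib.request

API_KEY = "YOUR_API_KEY"
VIDEO_ID = "SOME_VIDEO_ID"

url = ("https://www.googleapis.com/youtube/v3/videos"
       "?part=snippet,contentDetails&id=" + VIDEO_ID + "&key=" + API_KEY)

with urllib.request.urlopen(url) as response:
    data = json.load(response)

item = data["items"][0]
print("title:     ", item["snippet"]["title"])
print("duration:  ", item["contentDetails"]["duration"])    # ISO 8601, e.g. PT3M33S
print("definition:", item["contentDetails"]["definition"])  # "sd" or "hd"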

Video image processing tools and resources for iPhone/ios

I am working on developing an iOS video app that needs to do things like apply filters, adjust brightness/contrast/saturation, add overlays, etc. As I am new to image processing, I am not able to judge which resources (i.e., APIs or open-source libraries) I can use, so any guidance from those who have experience in this field would be a great help.
Here is a great tutorial about GPU-accelerated video processing on Mac and iOS:
http://www.sunsetlakesoftware.com/2010/10/22/gpu-accelerated-video-processing-mac-and-ios
Use OpenCV.
1. Get OpenCV
Check out the OpenCV homepage to download the OpenCV source.
2. Check out this SO question for more details on using OpenCV on iOS:
iPhone and OpenCV
3. Get and read some good books on OpenCV.
The best book on OpenCV is "Learning OpenCV", written by Gary Bradski, who started the OpenCV project.
Another good one is the "OpenCV Cookbook".
These books contain lots of OpenCV examples along with explanations.
4. Check out the OpenCV documentation.
The OpenCV documentation covers the full set of functions and also includes a lot of tutorials, which are really good for everyone.
5. Also try running the OpenCV samples; they contain a lot of good example programs.
All the best.
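On iOS you would call OpenCV through its C++ interface, but just to illustrate the kind of per-frame operation the question asks about (brightness/contrast adjustment), here is a minimal sketch using OpenCV's Python bindings; the file names are placeholders.

# Hedged sketch: simple brightness/contrast adjustment with OpenCV.
# new_pixel = alpha * old_pixel + beta, so alpha scales contrast and
# beta shifts brightness. "input.jpg" / "output.jpg" are placeholders.
import cv2

img = cv2.imread("input.jpg")
if img is None:
    raise SystemExit("could not read input.jpg")

adjusted = cv2.convertScaleAbs(img, alpha=1.2, beta=30)
cv2.imwrite("output.jpg", adjusted)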

iphone AVEditDemo or any video processing examples

I am trying to process videos in some way:
Cut and merge videos.
Record the screen and make the video out of that recording.
I looked around the Internet and Stack Overflow and saw that there is a code sample from Apple called AVEditDemo, but I could never find it. If anybody has that example and is willing to share it, or has any similar examples that show how to do the two tasks above, that would be excellent.
I see there are some similar questions like this one, but I would love to have the actual code sample; it would help me move forward quickly.
There is a WWDC 2010 video called Editing Media with AV Foundation which may be useful to you and is available through the Developer Portal.
The AVEditDemo application is included in the WWDC 2010 sample code, since it accompanies that session. Unfortunately, you need to download the entire WWDC 2010 sample code package (232.6 MB) to get it. You can get the whole download here: http://connect.apple.com/cgi-bin/WebObjects/MemberSite.woa/wa/getSoftware?code=y&source=x&bundleID=20645

How to learn OpenGL by example, say, building a rotating globe?

I have two years of experience in iPhone programming but am totally new to OpenGL. What should I pick up in order to build a rotating globe on the iPhone? What I want to achieve:
a 3D globe shown on an iPhone
basically a 3D ball with a texture map on it
when a user drag on the screen, the globe rotates
Thanks
Well, if you are completely new to OpenGL like me, then I would suggest you follow this link to get started:
http://iphonedevelopment.blogspot.com/2009/05/opengl-es-from-ground-up-table-of.html
Enjoy!
Edit:
Try this too:
how to replace images in puzzle game
I would recommend Brad Larson's Course on OpenGL (ES). It's available on iTunes (for free):
The videos of the Advanced iPhone Development class I taught this past semester at the Madison Area Technical College are now available for free on iTunes U. These videos amount to over 35 hours of HD content, covering more advanced iPhone development topics such as Core Animation, multithreading, Quartz 2-D drawing, and OpenGL ES. The course notes that accompany the class are available for download here in VoodooPad format, or for viewing here in HTML. Links to all sample applications used for the class are present in the notes.
Source
Many people would suggest that you try the NeHe Tutorials for OpenGL, and while I do think that they cover a few features of the OpenGL API, I would instead recommend buying a book on OpenGL if you are serious about learning it. Of course, learning how to write programs using OpenGL comes with practice, but reading the books helps you understand how and why the API is designed the way it is, and also introduces you to the graphics pipeline, which is crucial in understanding how your function calls are really processed. I would personally recommend the OpenGL SuperBible, but I have heard that the Red Book is good as well. Here's a link to a free HTML version of an older edition of the Red Book.

How do I extract a screenshot from a video in the iPhone SDK?

I'd like to be able to take a screenshot of an MPEG recorded using the iPhone camera at set intervals.
I've seen a few ways to do this; namely compiling and using FFmpeg (Using FFMPEG library with iPhone SDK for video encoding), however it seems it's quite difficult to comply with the LGPL (http://ffmpeg.org/legal.html) for commercial use.
This part of their legal FAQ pretty much makes it useless to us:
Q: Is it perfectly alright to incorporate the whole FFmpeg core into my own commercial product?
A: You might have a problem here. There have been cases where companies have used FFmpeg in their products. These companies found out that once you start trying to make money from patented technologies, the owners of the patents will come after their licensing fees. Notably, MPEG LA is vigilant and diligent about collecting for MPEG-related technologies.
Is there any other way? - or simply by accessing the rendering layer of an MPEG am I going to be "making money from patented technologies"?
As usual - any help on this would be greatly appreciated.
Cheers!
Yes, you can do it (if I am not wrong, since iOS 3.2), at least for the videos you have in your library. After loading the movie into your MPMoviePlayerController object, do this:
// player is the MPMoviePlayerController object with the movie loaded.
// timeCode is a playback time within the video length, e.g. 3.12 seconds.
UIImage *aThumbnail = [player thumbnailImageAtTime:timeCode timeOption:MPMovieTimeOptionExact];
Unfortunately, there is no official way to grab image frames from the camera in real time.
I encourage you to file a bug report / feature request with Apple. Many people want this, and if enough people request a specific feature, they might actually add it.