Can someone please tell me how I can make a JPEG like this one that is animated?
https://oss-pk-arab.badambiz.com/icon_8323870_d2d3b09a2e7a94ea0753b5bdce7f53b3.jpeg
Even trying to save it to your PC gets rid of the animation, so I suspect it's done with code?
Can a JPEG have an animation?
Yes: while the JPEG file format (i.e. JFIF) does not support animation or multiple frames, there is "Motion JPEG", which refers to a variety of file formats based on sequential JPEG images/frames.
However! The animated image you linked to is actually an animated GIF in disguise: despite the .jpeg filename extension in the URI and the webserver incorrectly serving the image with Content-Type: image/jpeg, if you download the 1.8MB file and open it in a hex editor, you'll see it's a GIF file, and a GIF editor can reveal the frames:
(Screenshot: visible GIF animation frames in a GIF editor.)
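If you want to verify this programmatically rather than with a hex editor, checking the file's magic bytes is enough. Here is a small Swift sketch (Swift is just an example choice, and the local path is a placeholder for wherever you saved the file):

    import Foundation

    // Sketch: decide whether a downloaded "jpeg" is really a GIF by inspecting
    // its magic bytes. The path below is just a placeholder for the saved file.
    let fileURL = URL(fileURLWithPath: "/tmp/icon_8323870.jpeg")
    if let data = try? Data(contentsOf: fileURL), data.count >= 6 {
        if String(decoding: data.prefix(6), as: UTF8.self).hasPrefix("GIF") {
            // GIF files start with the ASCII signature "GIF87a" or "GIF89a".
            print("This is actually a GIF")
        } else if data.prefix(2) == Data([0xFF, 0xD8]) {
            // JPEG files start with the two bytes 0xFF 0xD8.
            print("This is a real JPEG")
        } else {
            print("Some other format")
        }
    }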
Related
I want to render a video file using some images with animation.
I have tried some solutions out there and did not get anywhere, and I don't know how to write the command for FFmpegKit. I also tried to first combine a single image with an MP3, but that was not working either.
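For reference, this is roughly the kind of thing I am trying to get working. It is only a sketch: the FFmpegKit call names are my guesses from the library's README, and the ffmpeg arguments and file names are placeholders, not verified code.

    import ffmpegkit  // ffmpeg-kit-ios; the exact module/pod name may differ

    // Guess at the command: loop a single still image over an MP3 track and
    // encode it as an H.264 MP4.
    let command = "-loop 1 -i photo.jpg -i track.mp3 -c:v libx264 -tune stillimage " +
                  "-c:a aac -pix_fmt yuv420p -shortest output.mp4"

    let session = FFmpegKit.execute(command)
    if ReturnCode.isSuccess(session?.getReturnCode()) {
        print("output.mp4 written")
    } else {
        print("FFmpeg failed with return code \(String(describing: session?.getReturnCode()))")
    }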
I would like to generate video content from text/images at a provided content size. I have tried many options, from FFmpeg to H.264 encoders. I want to build a solution that takes one image, or an array of images, and generates a video stream, let's say RTMP. Also, the image is always changing: text is added, colors change, etc. I have tried this with Golang.
I’m working on a Matlab application that uses a VLC class to control a VLC instance. One of the features is to set the VLC player to fullscreen. This feature works perfectly fine.
The VLC player is downloaded from Matlab’s File Exchange: https://se.mathworks.com/matlabcentral/fileexchange/56215-vlc (Thanks a lot Léa Strobino)
However, one particular clip insists on resizing the player to a smaller size.
I have done some research and it turns out that this is a common problem in some VLC versions.
Normal workarounds are to uncheck the “adapt interface to video size” option (something like that) and to check the “Fullscreen” box.
This ought to make the player open in fullscreen and not resize the window to the video size. However, the video still resizes the player to a smaller size.
All the specs of the clips are the same: same file extension (.vob), same formats, and they were made the same way (I did some video trimming and such using ffmpeg, but in the same way every time).
I have noticed one difference: this particular video has a lower data rate and bitrate (~1000-1500 kbps), whereas the others are higher (<4000 kbps). Also, when showing the properties of the clip, the frame height and width are blank, as opposed to the others, which have specific values.
This should, however, not have an effect on the fullscreen command from Matlab, which is called after loading the video into the playlist. The command has no effect on this video, but it does on all the others.
It is possible to set the player to fullscreen manually by clicking the window, so it is not caused by some restriction in the video that prevents fullscreen.
Why does the video refuse to go into fullscreen?
Hope somebody is able to help.
Okay, so I seem to have solved the problem now, without being completely sure why: the problem was in the lowered data rate/bitrate.
I tried to add -crf 18 when converting my .mp4 to a .vob file:
ffmpeg -i input.mp4 -vcodec copy -acodec ac3 -crf 18 output.vob
The -crf flag stands for Constant Rate Factor and is a way to target a constant quality level (and with it, a higher or lower data rate). The values go from 0 to 51, where lower values mean higher quality and a higher data rate, and 18 seems to be the lowest 'sane' value. A good explanation can be found here: https://superuser.com/questions/677576/what-is-crf-used-for-in-ffmpeg
With this higher data rate the video opens up in fullscreen every time :=)
I am a newbie trying to capture camera video images using AVFoundation and want to render the captured frames without using AVCaptureVideoPreviewLayer. I want a slider control to be able to slow down or speed up the rate of display of camera images.
Using other people's code as examples, I can capture images and, using an NSTimer with my slider control, define on the fly how often to display them, but I can't convert the image to something I can display. I want to move these images into a UIView or UIImageView and render them in the timer's fire function.
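Conceptually, the flow I have in mind looks roughly like the following sketch (all the names are made up): the capture callback would only store the newest frame, and a timer, whose interval follows the slider, pushes that frame into the image view.

    import UIKit

    // Sketch only: a timer, whose interval follows the slider, copies the most
    // recently captured frame into an image view. Names are hypothetical.
    class PreviewRateController {
        var latestFrame: UIImage?          // written by the capture callback
        private let imageView: UIImageView
        private var displayTimer: Timer?

        init(imageView: UIImageView) {
            self.imageView = imageView
        }

        // Call from the slider's valueChanged action; the slider value is the
        // number of seconds between displayed frames.
        func setDisplayInterval(_ seconds: TimeInterval) {
            displayTimer?.invalidate()
            displayTimer = Timer.scheduledTimer(withTimeInterval: seconds, repeats: true) { [weak self] _ in
                guard let self = self, let frame = self.latestFrame else { return }
                self.imageView.image = frame   // scheduledTimer fires on the main run loop
            }
        }
    }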
I have looked at Apple's AVCam app (which uses an AVCaptureVideoPreviewLayer), but because it has its own built-in AVCaptureSession, I can't adjust how often the images are displayed. (Well, you can adjust the preview layer's frame rate, but that can't be done on the fly.)
I have looked at the AVFoundation programming guide, which talks about AVAssets and AVPlayer, etc., but I can't see how a camera image can be turned into an AVAsset. When I look at the AVFoundation guide, and other demos which show how to define an AVAsset, they only give me the choice of using HTTP stream data to create the asset, or a URL to define an asset from an existing file. I can't figure out how to make my captured UIImage into an AVAsset, in which case I guess I could use an AVPlayer, AVPlayerItems and AVAssetTracks to show the image, with an observeValueForKeyPath function checking status and then doing [myPlayer play]. (I also studied WWDC session 405, "Exploring AV Foundation", to see how that is done.)
I have tried code similar to that in WWDC Session 409, "Using the Camera on iPhone." Like that myCone demo, I can set up the device, the input, the capture session, the output, and a callback function that receives a CMSampleBuffer, and I can collect UIImages and size them, etc. At this point I want to send that image to a UIView or UIImageView. Session 409 just talks about doing it with CFShow(sampleBuffer). This wasn't explained, and I guess it's just assuming a knowledge of Core Foundation I don't yet have. I think I am turning the captured output in the sample buffer into a UIImage, but I can't figure out how to render it. I created an IBOutlet UIImageView in my nib file, but when I try to stuff the image into that view, nothing gets displayed. Do I need an AVPlayerLayer?
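To be concrete, the step I believe I am missing looks roughly like this (a sketch pieced together from documentation, in Swift for brevity, not my actual code; the class and outlet names are made up):

    import UIKit
    import CoreImage
    import AVFoundation

    // Sketch: convert the CMSampleBuffer delivered by AVCaptureVideoDataOutput
    // into a UIImage and push it into a UIImageView. "CameraViewController" and
    // "previewImageView" are hypothetical; in a real app the CIContext should be
    // created once and reused rather than per frame.
    extension CameraViewController: AVCaptureVideoDataOutputSampleBufferDelegate {

        func captureOutput(_ output: AVCaptureOutput,
                           didOutput sampleBuffer: CMSampleBuffer,
                           from connection: AVCaptureConnection) {
            guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }

            // Wrap the pixel buffer in a CIImage and render it to a CGImage.
            let ciImage = CIImage(cvPixelBuffer: pixelBuffer)
            guard let cgImage = CIContext().createCGImage(ciImage, from: ciImage.extent) else { return }
            let image = UIImage(cgImage: cgImage)

            // UIKit must only be touched on the main thread; setting the image
            // from the capture queue is a common reason nothing shows up.
            DispatchQueue.main.async {
                self.previewImageView.image = image   // or stash it for the timer to display
            }
        }
    }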
I have looked at UIImagePickerController as an alternate method of controlling how often I display captured camera images, and I don't see that I can change the display time on the fly using that controller either.
So, as you can see, I am learning this stuff with the Apple development forum and their documentation, the WWDC videos, and various websites such as stackoverflow.com, but I have yet to see any example of going from camera to screen without using AVCaptureVideoPreviewLayer, UIImagePickerController, or an AVAsset that isn't already a file or HTTP stream.
Can anybody make a suggestion? Thanks in advance.
I have an iOS app, and I want to record some of its visual output into a video. It looks like the way to create a video on iOS is to use AVMutableComposition and feed AVAssets to it via insertTimeRange.
All the documentation and examples that I can find only add video and audio assets to an AVMutableComposition. Is there a way to add image data to it (i.e. add an image for each frame of the video)? I can get this image data as straight RGB, PNG, JPG, UIImage, or whatever is easiest to feed to AV Foundation (if it's even possible).
If it's not possible to feed images into an AVMutableComposition for the video frames, is there another way to generate an .mp4 file from frames on iOS?
To generate movies from frames you can use AVAssetWriter; here is a question here on SO that sort of covers that: question
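A rough sketch of the AVAssetWriter route could look like this (untested; the frame size, frame rate, pixel format and the helper function are just illustrative choices, not the only way to do it):

    import AVFoundation
    import UIKit

    // Sketch: write an array of UIImages to an H.264 .mp4 with AVAssetWriter.
    // The 640x480 size and 30 fps are arbitrary example values.
    func writeMovie(from images: [UIImage], to outputURL: URL, fps: Int32 = 30) throws {
        let size = CGSize(width: 640, height: 480)
        let writer = try AVAssetWriter(outputURL: outputURL, fileType: .mp4)

        let settings: [String: Any] = [
            AVVideoCodecKey: AVVideoCodecType.h264,
            AVVideoWidthKey: Int(size.width),
            AVVideoHeightKey: Int(size.height)
        ]
        let input = AVAssetWriterInput(mediaType: .video, outputSettings: settings)
        let adaptor = AVAssetWriterInputPixelBufferAdaptor(assetWriterInput: input,
                                                           sourcePixelBufferAttributes: nil)
        writer.add(input)
        writer.startWriting()
        writer.startSession(atSourceTime: .zero)

        for (index, image) in images.enumerated() {
            // One frame every 1/fps seconds.
            let time = CMTime(value: CMTimeValue(index), timescale: fps)
            while !input.isReadyForMoreMediaData { Thread.sleep(forTimeInterval: 0.01) }
            if let buffer = pixelBuffer(from: image, size: size) {
                adaptor.append(buffer, withPresentationTime: time)
            }
        }
        input.markAsFinished()
        writer.finishWriting { /* check writer.status / writer.error here */ }
    }

    // Helper: draw a UIImage into a freshly created pixel buffer.
    func pixelBuffer(from image: UIImage, size: CGSize) -> CVPixelBuffer? {
        let attrs = [kCVPixelBufferCGImageCompatibilityKey as String: true,
                     kCVPixelBufferCGBitmapContextCompatibilityKey as String: true] as CFDictionary
        var pb: CVPixelBuffer?
        guard CVPixelBufferCreate(kCFAllocatorDefault, Int(size.width), Int(size.height),
                                  kCVPixelFormatType_32ARGB, attrs, &pb) == kCVReturnSuccess,
              let buffer = pb, let cgImage = image.cgImage else { return nil }

        CVPixelBufferLockBaseAddress(buffer, [])
        defer { CVPixelBufferUnlockBaseAddress(buffer, []) }

        guard let context = CGContext(data: CVPixelBufferGetBaseAddress(buffer),
                                      width: Int(size.width), height: Int(size.height),
                                      bitsPerComponent: 8,
                                      bytesPerRow: CVPixelBufferGetBytesPerRow(buffer),
                                      space: CGColorSpaceCreateDeviceRGB(),
                                      bitmapInfo: CGImageAlphaInfo.noneSkipFirst.rawValue) else { return nil }

        context.draw(cgImage, in: CGRect(origin: .zero, size: size))
        return buffer
    }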