Improving my UML class diagram for a media library [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Want to improve this question? Update the question so it focuses on one problem only by editing this post.
Closed 7 years ago.
Improve this question
I'm making a class diagram for a media library, like iTunes or Windows Media Player. My library contains audio, video and images.
I'm fairly new to this, so I'm not sure if I'm heading in the right direction. This is what I got so far:
I feel like there should be a few more classes. Does anyone have some tips/suggestions on how to improve/expand this class diagram?
EDIT!
I've tried to make the playlists a bit clearer. I've also added an interface:

It seems broadly fine to me:
The Media specialization seems correct
The Person specialization seems correct
The Directs and Composes relationships seem right
Nothing seems wrong here. The Playlist composition, however, is not very clear. I have no obvious alternative, but here is the point...
As it is introduced, your playlist might be composed of images, videos, or audio records. The question is the relationship between these compositions.
If you want a playlist composed of images OR videos OR audio records non-exclusively, the playlist should simply be composed of media in general.
If you want a playlist composed of images OR videos OR audio records exclusively, things become quite subtle. In your representation this is not obvious at all; at the very least a note would be welcome to specify the exclusive composition relationship. One solution would be to specialize the playlists: the specialized version would be instantiated on insertion of the first element. It depends on what you really want to show, but in any case an explanatory note would be very useful.
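A hypothetical sketch of that specialization idea (all names are illustrative, not from the original diagram): the playlist's media kind is fixed by the first inserted element, which enforces the exclusive composition at runtime:

```typescript
// Exclusive composition enforced at runtime: the playlist's concrete
// media kind is determined by the first insertion and fixed thereafter.
type MediaKind = "audio" | "video" | "image";

interface Media {
  kind: MediaKind;
  title: string;
}

class Playlist {
  private items: Media[] = [];
  private kind: MediaKind | null = null; // fixed on first insertion

  add(item: Media): void {
    if (this.kind === null) {
      this.kind = item.kind; // first element decides the playlist's kind
    } else if (item.kind !== this.kind) {
      throw new Error(`playlist already holds ${this.kind} items`);
    }
    this.items.push(item);
  }

  get length(): number {
    return this.items.length;
  }
}
```

In the diagram itself, the equivalent would be specialized subclasses (AudioPlaylist, VideoPlaylist, ImagePlaylist), each composed of exactly one media type.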

Related

What is the best way to record audio in browser? [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 5 days ago.
I'm trying to record audio in the browser in small pieces with the RecordRtc JS library.
What I've found so far:
Regardless of which codec and format I specify in the RecordRtc options, Chrome will only record webm;codecs=opus audio, Firefox will only work with ogg (and only Firefox knows what that ogg really is), and my Android mobile WebView records wav (I haven't checked Firefox, but Chrome is definitely not going to work with wav).
Is there any cross-browser solution here? Maybe I just need to look at other JS recorders? Or do I have to be prepared to transcode every format by hand if I want a unified audio format in my project?
I can't even find any general information about which browsers support which codecs.
Next I'm going to send these blobs to another user one by one and then play them through a MediaSource SourceBuffer, which of course also behaves quite differently across browsers.
Can you share any experience or good practices with all this browser audio recording stuff?
I've already been checking different combinations of browsers and codecs and talking with ChatGPT about possible solutions, but it only advises checking codec support with methods like MediaRecorder.isTypeSupported and re-encoding the audio with ffmpeg.js if something goes wrong.
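A minimal sketch of that isTypeSupported probing, with the support check injected so the selection logic runs outside a browser too; the candidate list and helper name are illustrative, not a guaranteed-complete codec table:

```typescript
// Candidate MIME types in preference order; support varies per browser.
const CANDIDATES = [
  "audio/webm;codecs=opus", // typically Chrome
  "audio/ogg;codecs=opus",  // typically Firefox
  "audio/mp4",              // typically Safari
  "audio/wav",              // some WebViews
];

// Return the first candidate the given predicate accepts, or null.
// In a real browser, pass: (m) => MediaRecorder.isTypeSupported(m)
function pickMimeType(
  isSupported: (mime: string) => boolean,
  candidates: string[] = CANDIDATES,
): string | null {
  for (const mime of candidates) {
    if (isSupported(mime)) return mime;
  }
  return null; // omit the mimeType option and accept the browser default
}
```

The recorder then uses whatever `pickMimeType` returns, and the receiving side must be prepared for any of the candidates (or transcode server-side to unify them).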

Can we use two class Diagrams for two apps -- child app and parent app [closed]

Closed. This question needs details or clarity. It is not currently accepting answers.
Closed 1 year ago.
I have two modules of an app: one is the parent app and the other is the child app. Should I create two different class diagrams for the two apps, or should there be only one diagram?
This is entirely up to you to decide. Some thoughts that may guide you:
If the classes or components of both apps deal with parts of the same domain (for example, one app maintains a catalogue and the other allows purchasing catalogue items), you may consider one model.
If you consider your apps to be just two faces of the same system, you should consider one model.
If the apps are two different things and only share some "modules", components, or classes (in a library), you may consider two or three models. The package of the common part could then be imported into the models of the apps.
One model does not mean one diagram: in fact, several diagrams may show the same model from different viewpoints. So you might very well use several diagrams, each focusing on some elements of one of the apps.
You could even have one diagram showing the main classes of both apps and several other diagrams showing some of those classes along with more detailed classes. The key is to keep each diagram of a model simple enough to be easily understood.
Whatever the situation, avoid a huge mega-diagram showing everything with 50 classes: nobody except you would be able to absorb such complexity.
If you're confused about the difference between models and diagrams, you may have a look at this question

How does Apple make their Swift programming books? [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 5 years ago.
I'm trying to create a book using a template (or just formatting in general) similar to the ones used with the Apple Swift Programming Language iBook.
I'm struggling with figuring out exactly what software Apple uses to create the book with such simple and clean formatting along with what they use to create the headers, coding samples, comments, etc (is it a special version of Markdown?).
My Google searches with "what apple uses to create books", "what apple uses to create api documentation", "apple software for dev books", etc. didn't really lead to much. The searches themselves might not have been effective either, so there's that possibility.
Regardless, the answers I got involved RegexKit, HeaderDoc, and Gentle Bytes, and they didn't seem very relevant to what I was trying to do.
So then I did some digging into the main frameworks that build up the iBook and I found these files (there are more but the image only shows some).
Mainly xhtml files.
So all I'm really asking is: what software does Apple use to combine all these files, or did they use an application that combined them automatically as they were inserted while creating the iBook? Do they even use iBooks Author, or rather an internal application that's not available for download outside Apple? Or maybe it's something unrelated to anything I said and I'm way off track.
Any help would be greatly appreciated.
Because it is an iBook, I would assume that they used iBooks Author.
Once you have created your iBook with iBooks Author, you can convert it to HTML; it will then produce many files.

Does logic done first, appearance second, work well in iOS development? [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 9 years ago.
I wish to make an iOS application that includes a document library, a log/journal, forums, and possibly randomized quotes and coaching tools. I have built applications of that size in other contexts, but this is my first iOS application.
Right now I'm working through http://www.raywenderlich.com/1797/how-to-create-a-simple-iphone-app-tutorial-part-1 , and I'd welcome comments on other tutorials. But I wanted to ask: does it work to build the logical gears of an application before developing the graphic design? I would like something between a Dirtylicious and a Nature look, but my natural bent (no pun intended) is to get most of the gears working and then defer most of the design work until after the gears. I expect the two should not be completely separated, and there are cases where you apply the design and then realize that what the gears were doing only looked good on paper. Still, I wanted a sanity check: does it make sense to look up tutorials appropriate to a document library, a log/journal, forums, etc., get them working together first, and then skin it?
TIA,
It is recommended that you follow the MVC pattern, which strives for separation between layers.
http://developer.apple.com/library/ios/#documentation/general/conceptual/devpedia-cocoacore/MVC.html
Xcode helps you implement that pattern.
I think you should try to put on "paper" everything you want to do before doing any actual coding: check how many views you are going to have, what you need, and the flow between views. Try to diagram everything; that will save you a lot of pain later. You don't have to be too specific about the GUI at this stage; you only need to know what kind of visuals you need in each view (buttons, labels, etc.).
And yes, I think you're safe doing the Model first.
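The MVC separation the answer recommends can be sketched framework-agnostically; the classes below are illustrative only, not Apple's API. The point is that the model knows nothing about presentation, which is what makes the gears-first approach workable:

```typescript
// Model: owns the data and the rules; no knowledge of any UI.
class JournalModel {
  private entries: string[] = [];
  add(text: string): void {
    this.entries.push(text);
  }
  all(): string[] {
    return this.entries.slice();
  }
}

// View: only renders; here it just formats entries to a numbered string.
const journalView = (entries: string[]): string =>
  entries.map((e, i) => `${i + 1}. ${e}`).join("\n");

// Controller: wires user actions to the model, then refreshes the view.
class JournalController {
  constructor(private model: JournalModel) {}
  addEntry(text: string): string {
    this.model.add(text);
    return journalView(this.model.all());
  }
}
```

Because the model is UI-free, it can be built and unit-tested first, and the view swapped or restyled later without touching the logic.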

How does CamScanner, Genius Scan, and JotNot work? [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 7 years ago.
I was looking at CamScanner, Genius Scan, and JotNot and trying to figure out how they work.
They are known as 'Mobile Pocket Document Scanners.' What each of them does is take a picture of a document through the iPhone camera, find the angle/position of the document (because it is nearly impossible to shoot it straight on), straighten the photo, readjust the brightness, and then turn it into a PDF. The end result is what looks like a scanned document.
Take a look at one of the apps, Genius Scan, in action:
http://www.youtube.com/watch?v=DEJ-u19mulI
It looks pretty difficult to implement but I'm thinking someone smart on stackoverflow can point me in the right direction!
Does anyone know how one would go about developing something like that? What sort of library or image processing technologies do you think they're using? Does anyone know if there is something open source available?
I found an open source library that does the trick:
http://code.google.com/p/simple-iphone-image-processing
It probably is pretty difficult, and you will likely need to find at least some algorithms or libraries capable of detecting distorted text within bitmaps, analyzing the likely 2D and 3D geometric distortion within a text image, image processing to correct that distortion with its inverse, and DSP filtering to adaptively adjust the image contrast... plus use of iOS APIs to take photos in the first place.
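As a hypothetical illustration of just one of those steps, here is the simplest possible contrast adjustment (a linear stretch) on a grayscale pixel array; the function name is illustrative, and real apps combine something like this with edge detection and perspective (homography) correction:

```typescript
// Linear contrast stretch: remap grayscale values (0-255) so that the
// darkest pixel becomes 0 and the brightest becomes 255, spreading the
// rest proportionally. A crude stand-in for adaptive contrast adjustment.
function stretchContrast(pixels: number[]): number[] {
  const lo = Math.min(...pixels);
  const hi = Math.max(...pixels);
  if (hi === lo) return pixels.slice(); // flat image: nothing to stretch
  return pixels.map((p) => Math.round(((p - lo) / (hi - lo)) * 255));
}
```

A washed-out photo of a page, whose pixels might all sit between 100 and 200, comes out using the full 0-255 range, making text much darker relative to the paper.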