Apparently Apple has changed some terms in the agreement again. According to http://www.appleoutsider.com/2010/06/10/hello-lua/, section 3.3.2 is now:
Unless otherwise approved by Apple in writing, no interpreted code may be downloaded or used in an Application except for code that is interpreted and run by Apple’s Documented APIs and built-in interpreter(s). Notwithstanding the foregoing, with Apple’s prior written consent, an Application may use embedded interpreted code in a limited way if such use is solely for providing minor features or functionality that are consistent with the intended and advertised purpose of the Application.
instead of the original
No interpreted code may be downloaded or used in an Application except for code that is interpreted and run by Apple’s Documented APIs and built-in interpreter(s).
I am more interested in embedding Lua, but other people have other interpreters they want to embed.
I am wondering how you ask for permission, what they mean by the terms "minor features" and "consistent", and how Apple will interpret this section. It seems to have enough loopholes to drive a real firetruck through.
(BTW, this is a terribly important question for me and my product.)
Realistically, the ultimate interpretation of the developer agreement is up to Apple.
Since this is all new, it's not clear who to e-mail. You could start with the iTunes Connect people, but be prepared for a long wait to hear back. Alternatively, I've gotten some occasional quick help just by calling up the nice Apple people in Ireland.
Given the wording, if you want to embed Lua, you should be prepared to justify that you will be using it in a limited fashion to provide minor features or functionality.
Since you've stated this is a really important question, you might want to consider the risks inherent in pushing the envelope/being a pioneer. If there's an alternative way to get around all this, you should consider it. If there's no way around it, it might make more business sense for you to pursue another platform for now.
I have not used libbpf in a while. Now, when I look at the source code and examples, it seems to me that the whole API is now built around bpf_object, while before it was based on program FDs (at least at the user-facing level). I believe the fd is now hidden inside bpf_object or the like.
Of course it keeps backward compatibility, and I can still use bpf_prog_load() for example; however, it looks like the preferred way of writing application code with libbpf is the bpf_object API?
Correct me if I'm wrong. Thanks!
Sounds mostly correct to me.
Low-Level Wrappers
If I remember correctly, the functions returning file descriptors in libbpf, mostly defined in tools/lib/bpf/bpf.c, have always been very low-level. This is the case for bpf_load_program() for example, which is no more than a wrapper around the bpf() system call for loading programs. Such functions are still available, but their use may be tedious for complex use cases.
bpf_prog_load()
Some more advanced functions have long been provided. bpf_prog_load(), that you mention, is one of them, but it returns an error code, not a file descriptor. It is still available as one option to load programs with the library.
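If memory serves, its pre-1.0 signature looks roughly like the sketch below; the object file name and program type here are made up, so check the headers of the libbpf version you target:

    #include <bpf/libbpf.h>

    int main(void)
    {
        struct bpf_object *obj;
        int prog_fd;

        /* Returns 0 or a negative error code; the program FD comes back
         * through the out-parameter, not the return value. */
        int err = bpf_prog_load("prog.o", BPF_PROG_TYPE_XDP, &obj, &prog_fd);
        if (err)
            return 1;

        /* ... use prog_fd, e.g. to attach the program ... */

        bpf_object__close(obj);
        return 0;
    }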
bpf_object__*()
Although I don't think there are strict guidelines, I believe it is true that most examples now use the bpf_object__*() functions. One reason is that they provide a more consistent user experience, organised around the manipulation of an object file: extracting all the relevant bytecode and metadata, then loading and attaching the programs. Another reason, I think, is that because this model has been favoured over the last releases, these functions have better support for recent eBPF features, and they offer capabilities that the older bpf_prog_load() workflow does not.
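As an illustration, here is a minimal sketch of that workflow; the object file name and program name are hypothetical, and error handling is abridged:

    #include <stdio.h>
    #include <bpf/libbpf.h>

    int main(void)
    {
        /* Open the ELF object and parse its programs, maps and metadata. */
        struct bpf_object *obj = bpf_object__open_file("prog.o", NULL);
        if (libbpf_get_error(obj))
            return 1;

        /* Load all programs from the object into the kernel. */
        if (bpf_object__load(obj))
            goto out;

        /* The FD is still reachable if a lower-level API needs it. */
        struct bpf_program *prog =
            bpf_object__find_program_by_name(obj, "my_prog");
        if (prog)
            printf("prog fd: %d\n", bpf_program__fd(prog));

    out:
        bpf_object__close(obj);
        return 0;
    }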
Libbpf Evolves
Lastly, it's worth mentioning that libbpf's API is currently undergoing some review and will likely be reworked as part of a major v1.0 release. You may want to have a look at the work document linked in the announcement: some bpf_object__ functions may be deprecated, and similarly there is currently a proposal to:
Deprecate bpf_prog_load() and bpf_prog_load_xattr() in favor of bpf_object__open_{mem, file}() and bpf_object__load() combo.
There is nothing certain yet regarding the v1.0 release, so I wouldn't worry too much about “deprecation” at the moment; I don't expect all functions to be removed just yet. But it's something you may want to consider when building your next applications.
If a company works on MATLAB projects, how do they deliver the project to the client? Which files do they send, given that they cannot hand over all of the code and data?
It would depend on lots of things, such as the nature of the product you are building for the client, your relationship and contractual agreement with them, and whether they need to modify the product in the future.
When I carry out consultancy on MATLAB projects for a company, I usually supply them with MATLAB source code. Part of the contract would typically say that they own the code (and the copyright to the code) that I produce for them, and they can then do pretty much whatever they want with it.
If you have a different relationship, where you continue to own the code and need to prevent them from reading it and/or modifying it, then the issue is really the same as it is for any other language: you rely on a mixture of technological restrictions and legal restrictions, designed to be as restrictive as you need while minimizing inconvenience for the end-user.
For example,
You can obfuscate your code using the command pcode (see the short sketch after this list). That will prevent almost everyone who isn't extremely determined from seeing or modifying your code (there are some loopholes though), but they will still be able to run it within MATLAB. A downside is that your code may become unexecutable in a future version of MATLAB, so you may need to support it again later to fix that. To mitigate this, you could specify in your contract or license agreement that only specific versions of MATLAB will be supported.
You can use MATLAB Compiler to produce a standalone library or executable that contains the code in an encrypted form. A downside might be that the client would rather use the code from within MATLAB. An upside is that, unlike the first option, it doesn't require MATLAB, so you're not vulnerable to backward-compatibility issues in the future.
You can include licence-management code within your MATLAB application. You can either roll your own, perhaps by calling a bit of Java for the cryptography (you will likely not be able to make it very secure unless you're very talented, but you'll probably be able to make something simple and workable), or you can buy third-party C libraries that do it well and call them from MATLAB.
You can simply put copyright lines in your code stating that you own the copyright, and licence the code to them under specific terms: for example, that they may view and use it, but not modify or redistribute it. If you really want, you could also ask them to sign a non-disclosure agreement requiring them not to discuss the content of the code with third parties.
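To illustrate the pcode option from the first bullet, a minimal sketch (the file and variable names are hypothetical):

    % Obfuscate myalgorithm.m into myalgorithm.p in the current folder;
    % ship only the P-file to the client.
    pcode myalgorithm.m

    % A P-file is called like any ordinary function, so the client can
    % still run it from within MATLAB without being able to read it:
    result = myalgorithm(inputData);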
Although the technological restrictions available are a little different in MATLAB than they would be for a compiled language such as C or Java, at the end of the day those are only ever there to keep honest people honest - anyone determined will be able to get around them eventually, and they may well inconvenience the honest people, annoying them into disliking your product or service.
Better to use a mixture of very light technological restrictions, crystal-clear contract and licensing terms, and trust.
<advert> One of the consultancy services I offer is advice and help in preparing MATLAB code for deployment, including protecting it. If you think you'd benefit from that, please get in touch. </advert>
You can use the MATLAB Compiler and compile your code into an exe file for Windows. This is what is usually expected by a company. Some who have R&D themselves might ask you for the original m-code, or for specific functions, depending on your relationship/contract with them. I've been asked to give m-code several times, and my contract says that I am supposed to give this information to them. (UK based)
I like C# 3.0 features, especially lambda expressions, auto-implemented properties and, in suitable cases, implicitly typed local variables (the var keyword), but when my boss found out that I was using them, he asked me not to use any C# 3.0 features at work. I was told that these features are not standard and are confusing for most developers, and that their usefulness is doubtful. I was restricted to using only C# 2.0 features, and he is also considering forbidding anonymous methods.
Since we are targeting .NET Framework 3.5, I cannot see any reason for these restrictions. In my opinion, maybe the only disadvantage is that my few co-workers and the boss (also a programmer) would have to learn some basics of C# 3.0, which should not be difficult. What do you think about it? Is my boss right, and am I missing something? Are there any good reasons for such a restriction in a development company where C# is the main programming language?
I have had a similar experience (I was asked not to use generics, because they might be confusing to my colleagues).
The fact is that we now use generics, and none of my colleagues has a problem with them. They may not have grasped how to create generic classes, but they certainly understand how to use them.
My opinion on that is that any developer can learn how to use these language features. They may seem advanced at first but as people get used to them the shock of newness lessens.
The main argument for using these features (or any new language features) is that this is a simple and easy way to help my colleagues advance their skills, rather than stagnating.
As for your particular problem of not using lambdas: many of the updates to the BCL have overloads that take delegates as parameters. These are in many cases most easily expressed as lambdas; not using them this way means ignoring some of the new and updated parts of the BCL.
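For instance, here is a quick sketch of the same filter written three ways against List<T>.FindAll (the data is made up):

    using System;
    using System.Collections.Generic;

    class LambdaSketch
    {
        static bool IsEven(int n) { return n % 2 == 0; }

        static void Main()
        {
            List<int> numbers = new List<int> { 1, 2, 3, 4, 5, 6 };

            // C# 1.x style: a named method wrapped in a delegate.
            List<int> a = numbers.FindAll(new Predicate<int>(IsEven));

            // C# 2.0: an anonymous method.
            List<int> b = numbers.FindAll(delegate(int n) { return n % 2 == 0; });

            // C# 3.0: a lambda expression; same delegate, far less ceremony.
            List<int> c = numbers.FindAll(n => n % 2 == 0);

            Console.WriteLine(c.Count); // prints 3
        }
    }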
Regarding your peers' difficulty learning lambdas: I found that Jon Skeet's C# in Depth deals with how they evolved from delegates in a manner that is easy to follow and a real eye-opener. I would recommend you get a copy for your boss and colleagues.
Your boss is going to need to understand that language (and other) improvements are designed to give developers more capabilities and make them more efficient at completing the task at hand, and that if he forbids them without good reason, then:
The development team isn't producing at its greatest potential.
The company isn't benefiting from increased efficiency/productivity.
As others have said, developers aren't worth their salt if they can't keep up with some of the latest improvements in the language they use on a daily basis. I suspect your boss hasn't done much coding lately, and it is his inability to understand the latest language improvements that has motivated this decision.
I was told that these features are not standard and are confusing for most developers, and that their usefulness is doubtful. I was restricted to using only C# 2.0 features, and he is also considering forbidding anonymous methods.
Presumably roughly translates to your boss meaning...
These features are confusing for me, and I don't find them useful because I don't understand them.
Which is fairly symptomatic of the Blub paradox (well, or just sheer laziness). Either way there's no merit in what he's saying, and you should start looking for another job if he continues down that road.
If the project is strictly C# 3+ from now on, then you would not break the build by including these items. However, before using them you should be aware of the following:
You can't use them if the project lead gets to make the decision and votes no.
Other than that, you should use them where it makes the code significantly easier to maintain.
You should not use them in ways that are confusing, or unnecessary in the sense that they do not significantly improve the maintainability of the code. This does mean you should not use them where the code is effectively the same or barely improved.
If Microsoft didn't define the standard and these were features that they added to a non-Microsoft language, I would say your boss might have a point. However, since Microsoft defines the language and uses these very features in implementing significant parts of .NET 3.5 (and 4.0), I'd say that you'd be foolish to ignore them. You may not choose to use some of them -- var, for instance, may not be acceptable in all environments due to coding standards -- but a blanket policy of avoiding new features seems unreasonable.
The trickier bit is when should you start using new features, because they can be confusing and may delay development. In general, I choose to use new language features and platform elements on new projects. I often avoid using them on projects that are currently in development when the feature/framework enhancement comes out, deferring until the next project. On a long project, I might introduce them at a significant milestone if the amount of rearchitecting is small or the feature is worth the changes. Normally, I'd wait until the project is due for significant changes anyway and then evaluate if refactoring to newer features is warranted.
The jury is still out on the long-term consequences of some features, but if the main rationale is 'it is confusing to other developers' or something similar, then I would be concerned about the quality of the talent.
I like C# 3.0 features, especially lambda expressions, auto-implemented properties and, in suitable cases, implicitly typed local variables (the var keyword), but when my boss found out that I was using them, he asked me not to use any C# 3.0 features at work. I was told that these features are not standard and are confusing for most developers, and that their usefulness is doubtful.
He's got a point.
Following that line of thought, let's make a rule against generic collections since List<T> doesn't make any sense (angle brackets? wtf?).
While we're at it, let's eliminate all interfaces (when are you ever gonna need a class without any implementation?).
Hell, let's go ahead and eliminate inheritance, since it's so tricky these days (is-a? has-a? can't we all just be friends?).
And use of recursion is grounds for dismissal (Foo() invokes Foo()? Surely you must be joking!).
Errrm... back to reality.
It's not that C# 3.0 features are confusing to programmers; it's that the features are confusing to your boss. He's familiar with one technology and stubbornly refuses to part with it. You're about to enter the Twilight Zone of the Blub Paradox:
Programmers get very attached to their favorite languages, and I don't want to hurt anyone's feelings, so to explain this point I'm going to use a hypothetical language called Blub. Blub falls right in the middle of the abstractness continuum. It is not the most powerful language, but it is more powerful than Cobol or machine language.
And in fact, our hypothetical Blub programmer wouldn't use either of them. Of course he wouldn't program in machine language. That's what compilers are for. And as for Cobol, he doesn't know how anyone can get anything done with it. It doesn't even have x (Blub feature of your choice).
As long as our hypothetical Blub programmer is looking down the power continuum, he knows he's looking down. Languages less powerful than Blub are obviously less powerful, because they're missing some feature he's used to. But when our hypothetical Blub programmer looks in the other direction, up the power continuum, he doesn't realize he's looking up. What he sees are merely weird languages. He probably considers them about equivalent in power to Blub, but with all this other hairy stuff thrown in as well. Blub is good enough for him, because he thinks in Blub.
When we switch to the point of view of a programmer using any of the languages higher up the power continuum, however, we find that he in turn looks down upon Blub. How can you get anything done in Blub? It doesn't even have y.
C# 3.0 isn't hard. Sure, you can abuse it, but it isn't hard or confusing to any programmer with more than a week of C# 3.0 experience. Your boss's skills have just fallen behind, and he wants to bring the rest of the team down to his level. DON'T LET HIM!
Continue using anonymous funcs, the var keyword, auto-properties, and what have you to your heart's content. You won't lose your job over it. If he gets pissy about it, laugh it off.
Like it or not, if you plan on using LINQ in any situation, you're going to have to utilize some of the C# 3.0 language specs.
Your boss is going to have to warm up to them if he wants to utilize the feature sets you get from 3.5, which are numerous and worth your time investing in.
Also, from my experience leading teams, I've found that using the 3.0 features has actually helped devs' readability and understanding of the code base. There's about a week's worth of time spent by each dev figuring out what the syntax means, but once they get it, they much prefer the new way over the old way.
Perhaps you can do a presentation once a week on each feature to everyone and get some of the developers on your side to help convince management of the benefits.
I recently moved from a bleeding-edge C# house to a C# house that was running mostly on .NET 1.1 and some 2.0 projects, using mostly only 1.1 features. Luckily, management stays away from the code. Most of the developers love all the new features in the newer frameworks; they just don't have the time or inclination to figure them out by themselves. Once I managed to show them how they could make their own lives easier, they started using them by themselves, and we have migrated several projects to gain the new language features and better tool advantages.
Some people are just afraid of change, perhaps because you'll make them all look stupid using fancy new technologies. It could also be that your boss doesn't want the team learning new things instead of getting work done the old-fashioned way.
The var keyword can certainly be abused, but in most cases it reduces redundant code. LINQ is the main thing you want from .NET 3.5 because of the huge saving in the amount of code you have to write. Your boss should be encouraging you to use it. Also, the base class libraries now take delegates as parameters, so you will be limiting yourself a lot by not using them. Lambdas are just fancy syntactic sugar to make delegates cleaner.
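A small sketch of the kind of saving involved (the data is made up):

    using System;
    using System.Linq;

    class LinqSketch
    {
        static void Main()
        {
            string[] names = { "Ada", "Grace", "Alan", "Barbara" };

            // var spares us spelling out the IEnumerable<string> type, and
            // the query replaces a hand-written loop, temp list and sort.
            var shortNames = from n in names
                             where n.Length <= 4
                             orderby n
                             select n;

            foreach (var n in shortNames)
                Console.WriteLine(n); // prints Ada, then Alan
        }
    }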
I would refer you to Effectively Integrating into Software Development Teams and Leading by Example. Two really great articles on how to deal with teams that are afraid of change.
What are your criteria for selecting an (open source) library (or framework) for enterprise usage?
Some libraries are pretty small and can be easily checked for security flaws or tested for performance. But most libraries are too big to be reviewed before you can start to use them.
When I think about how I select a library, most of the selection process is just gut feeling. When I try to be more specific, these are the first criteria that come to mind:
How many developers are working on the project? My feeling is that more developers will find more bugs and security issues. In addition it will be harder to introduce security issues intentionally.
How good is the support? Compared to closed source libraries, I've got the feeling that the support of open source is often much better since you have a community around the globe which will be available whenever you need them.
How widespread is the library? Are there any books about it on the market? Which other projects are using it?
What are your criteria? Feel free to edit this note as community wiki.
For me, it depends on whether or not it is paid for. In your case, you give the impression that you are looking at open source libraries.
In that specific case, I'll look at test coverage. Regardless of the number of contributors, if there aren't any unit tests that I can run myself (as well as enhance and test my use cases for if they fall outside the coverage of the unit tests provided), then that's a massive issue for me.
It's not that I don't appreciate the work that is done already in providing the library, but code in projects like this should have unit tests already with good coverage in order to gain traction.
If there are no candidate libraries that have unit tests, then I would start searching for the library on search engines, actively seeking out negative replies. People who have negative feelings about the code, and can crystallize the objective basis for those feelings in terms of how the code failed them, will provide more valuable feedback than the masses who say "it works great".
Now, for a commercial piece of code, it's completely different. At that point, I'd start looking at the company and its support staff as a whole, and use that (along with tests of your own to see if the library is right for you) to determine whether or not to use that company's offering.
Quite often with open source libraries you cannot get reliable support. In such situations your best bet is to fix it yourself, which involves the following requirements:
You need to have the ability to read often messy and undocumented code.
The technical ability to ask the right questions of the right people -- i.e., these people aren't being paid to fix problems, and they will only answer you if you make it easy enough for them.
Then you need the ability to fix the bug and get the patch accepted -- because if the patch isn't accepted .....
With this in mind I would be inclined to get a commercial library, or dual licensed library so that I could pay to get a competent engineer (motivated by the money I pay his company) to fix my problem.
Are there any guidelines on pitfalls to avoid while developing iPhone applications?
Sure, thousands. The same is true for any software development. Unfortunately, the easiest way to enumerate them is to write them down on a sheet of paper while waiting for a friendly soul to release you from the one you just fell into.
However:
Don't try to reinvent the wheel. The iPhone API is very complete -- you just have to LOOK for the facility you need. Things are NOT always implemented the way you would expect. Read the guides, carefully. Look at the tutorials and analyze how they work. (Try changing a line here or there in the tutorial to see what difference the change makes.) The single biggest mistake I have made in 1 year of iPhone development is not trying hard enough to find the iPhone way of doing something.
Don't ignore memory management; master it early and often. Use the Object Allocation and Leaks tools in Instruments to check for memory leaks frequently. I'd recommend checking after you complete each feature or view; more often than that if you keep finding bugs. Eventually you may understand it so well you can stop doing this.
Don't just use the default build settings. Play around with them to understand what they do. Figure out certification and distribution. GET INTO THE DEVELOPER PROGRAM QUICKLY -- it can take a while to push through that pipeline. [AND when you get the notification that you need to renew, get on it instantly -- there have been problems with that process.]
Don't neglect to read the Human Interface Guidelines (HIG) carefully. If they say not to do something -- DON'T DO IT. Apple will reject applications that misuse their iconography.
Don't stint on marketing. Yes, the App Store puts your app in front of millions of people... In theory. But the odds of getting front-paged are slim. There are a lot of great apps on the App Store that haven't sold much because no one knows about them.
Don't rest on your laurels. If a new technology comes out, find out if it makes your job easier; if it does, take the time to learn it. Personal example: I'm just now trying to switch from SQLite-based data management to Core Data, because I was in a hurry at the time I started my most recent project; now I wish I had slowed down and thought about it.
Don't go into your design thinking (for example) "How do I implement my concept with a table view?" It's true that table views are natural for many informational and utility applications, but don't be constrained. Instead, think about what users will want to be able to do, how you can make it easier for them -- put things together that will be used together, etc. If you've never explored the concept of Use Cases, read up on them.
Don't hesitate to build composite views. Many of the questions I have seen here on Stack Overflow have to do with putting a toolbar at the top of a table, or having an image in the background of a text field. I understand the desire to do things the easy way, and as I state in #1 above, if there is an easy way, use it. But in many cases the solution is just to layer a couple of views with appropriate placement and transparency.
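For example, here is a rough pre-ARC sketch of layering a toolbar above a table view; the controller name and frame sizes are hypothetical:

    #import <UIKit/UIKit.h>

    // Hypothetical controller composing a screen out of stock views
    // instead of hunting for a single view class that does it all.
    @interface CompositeViewController : UIViewController
    @end

    @implementation CompositeViewController

    - (void)loadView {
        UIView *container = [[[UIView alloc]
            initWithFrame:[[UIScreen mainScreen] applicationFrame]] autorelease];
        CGRect bounds = container.bounds;

        // Toolbar pinned along the top edge.
        UIToolbar *toolbar = [[[UIToolbar alloc]
            initWithFrame:CGRectMake(0, 0, bounds.size.width, 44)] autorelease];

        // Table view filling the space underneath.
        UITableView *table = [[[UITableView alloc]
            initWithFrame:CGRectMake(0, 44, bounds.size.width,
                                     bounds.size.height - 44)
                    style:UITableViewStylePlain] autorelease];

        [container addSubview:table];
        [container addSubview:toolbar];
        self.view = container;
    }

    @end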
Think about what might be Apple-approved from the start.
App Rejected is one of several useful sites that help in understanding Apple's mostly undocumented standards (there are others, as well as a previous question here about App Store rejection reasons).
A few quick examples:
Using a UIWebView can get your app a 17+ rating.
Coding with an undocumented/private API = rejected
Version number < 1.0 = might be rejected
Not enough feedback about network success/fail = rejected
Too much network use = rejected
Clearly limited free version vs full version = rejected
The word 'iPhone' in the app name = rejected
The above links contain many more examples, and more details about those examples.
Don't neglect the programming guides. While the documentation is quite extensive, the programming guides contain a veritable trove of useful tips and "insider" information that simply cannot be gleaned from reading method definitions. I spend just as much time reading the guides for a technology (say, Core Data) as I do actually implementing it.
Don't assume you know what a method does. If you have any degree of doubt about the functionality of a method, it is well worth your time to go look it up in the documentation to verify.
Wonderful examples from @Amagrammer above.
I would love to add that the first place to start with iPhone development is Photoshop. This is still the best advice I can give to anyone who is starting out. I now use OmniGraffle because it has awesome stencil templates.
What I find is that even for super-simple apps, drawing up your prototype exposes usability and workflow issues. It is 100x quicker to redraw your app than to re-code it. I have fallen into this trap numerous times, and I now draw up even pretty simple functionality to see what it will look and feel like.
This advice will save you tens, maybe even hundreds, of hours, and will hopefully get your app right the first time by making you think through the issues in advance. Throwing away code sucks, and I have done it not because the code was bad, but because it made the usability or the solution worse. I think the best of us end up throwing code away; prototyping your design will definitely help you avoid building (and RTFMing for) something you did not need in the first place.
If you don't have a great designer, and can't do great design yourself, then don't even start iPhone app development. This rule only applies if you want/need to make money from your apps.