Object class members as pointers to avoid #include in headers - is it good practice?

This is really a question of precedence: which is more preferred in C++, avoiding pointers or avoiding #includes in header files?
"Don't Use #include in header files."
There seems to be some ambiguity based on my research. In this SO question, the top answer says "...make sure you actually need an include, [don't use one] when a forward declaration or even leaving it out completely will do." (From Header files and include best practice)
And this article explains the negative effect excess header inclusions can have on compile-time: http://blog.knatten.org/2012/11/09/another-reason-to-avoid-includes-in-headers/
As well as this tutorial, stating, "...you should try to put all of your code in the CPP class and only the class declaration in the HPP file.": https://github.com/LaurentGomila/SFML/wiki/Tutorial%3A-Basic-Game-Engine#wiki-declarations
"Don't Use Pointers."
But, there is also evidence that pointers should be avoided most often as well:
c++: when to use pointers?
https://softwareengineering.stackexchange.com/questions/56935/why-are-pointers-not-recommended-when-coding-with-c
Which preference takes precedence?
If my understanding about avoiding #includes in header files is correct, this can easily be done by changing things like class members to pointers so I can use a forward declaration instead, but is this a good idea for class members whose lifetime only lasts as long as the class itself?

It's not really a case of "one or the other". Both statements are true, but you need to understand the reasoning behind them.
tl;dr: Use forward declarations where possible to reduce compile time. Use stack objects or references as much as possible and pointers only in rare cases.
"Don't Use #include in header files."
This is a rather general statement which, taken as is, would be wrong. The more important part behind this statement is actually: "Use forward declarations wherever possible." Includes in header files are not bad per se, but they often aren't needed either.
Forward declarations can be used if the included type/class/etc. is used only as a pointer (or reference) in the new type/class/etc. declaration within the given header. A forward declaration just tells the compiler: "Somewhere along the way you'll find the actual declaration of type X." The include can even be removed entirely if the type isn't used at all in the declaration. The reason is that the compiler doesn't need to know anything about these types to calculate the required memory layout for the new type; a pointer, for example, always has the same size. Including the file in the header anyway would only waste processing power, since the compiler would have to open and parse the file, adding expensive seconds to the compile time. So in most cases you'll do yourself a favor by removing unnecessary includes from header files and using forward declarations instead.
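For illustration, here's a minimal sketch of what that looks like (the Renderer and Texture names are invented for the example): the header only stores a pointer, so a forward declaration is enough, and the include moves into the .cpp file.
// Renderer.h
class Texture;                 // forward declaration: enough for the compiler to lay out Renderer
class Renderer {
public:
    void draw();
private:
    Texture* background_;      // a pointer always has the same size, so no full definition is needed here
};

// Renderer.cpp
#include "Renderer.h"
#include "Texture.h"           // the full definition is only needed here, where the member is actually used
void Renderer::draw() {
    // ... use *background_ ...
}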
For the sake of completeness: forward declarations are explicitly required if you run into circular references (class A depends on class B, which depends on class C, which depends on class A). However, this can often also reveal bad design and/or old/outdated coding standards, which leads us to the second topic.
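And a small sketch of the circular case (class names again invented): neither header can include the other first, so forward declarations are the only way to express the relationship.
// A.h
class B;                       // forward declaration breaks the cycle
class A {
    B* partner_;               // must be a pointer or reference; a full B member would need B's definition
};

// B.h
class A;                       // forward declaration in the other direction
class B {
    A* partner_;
};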
"Don't use pointers."
Again the statement is a tiny bit too general. One might rather want to say: "Don't use raw pointers."
With C++11 and soon C++1y the language itself has changed a lot. As many bad C++ books as the world has seen, just as many outdated C++ books float around nowadays (here's a good list however). While in the past we were mostly stuck with raw pointers and new and delete for memory management, we've evolved to better, more readable, less risky and 100% memory-leak-free ways to manage data in memory. One of the magic words is RAII - since you linked something from SFML above, here's a nice demonstration of the power of RAII. I see many people use pointers with new and delete just because, or maybe because they are thinking in Java or C# terms, where objects get instantiated with the new keyword. In C++, however, objects don't need new to be allocated, and it's mostly preferable to keep things on the stack instead of the heap. This works for many, many things, especially when using STL containers, which hide the dynamic memory management in the background. Using the heap is in most cases only preferable if you need the data to be dynamic, non-"local", or if you need a lot of it. However, when you do use the heap, make sure to use smart pointers such as std::unique_ptr or std::shared_ptr depending on the use case, and certainly not raw pointers. In modern C++ raw pointers should never own an object anymore. There are cases where it's okay to return a raw pointer to reference an object, but there's really no reason in modern C++ to call new on a raw pointer.
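As a rough sketch of that difference (the Widget type below is invented for the example), compare manual new/delete with stack objects, containers and smart pointers:
#include <memory>
#include <string>
#include <vector>

struct Widget {
    explicit Widget(std::string n) : name(std::move(n)) {}
    std::string name;
};

void old_style() {
    Widget* w = new Widget("legacy");     // raw owning pointer: leaks on any early return or exception
    // ...
    delete w;                             // every code path must remember to do this
}

void modern_style() {
    Widget w("stack");                                   // plain stack object, destroyed automatically
    std::vector<Widget> many;                            // STL container manages its heap storage for you
    many.emplace_back("in a container");
    auto owned  = std::make_unique<Widget>("unique");    // single owner, freed when it goes out of scope
    auto shared = std::make_shared<Widget>("shared");    // shared ownership, freed with the last owner
}                                                        // no delete anywhere: RAII cleans up on every path

int main() {
    old_style();
    modern_style();
}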
Let's get back to the original question though. The "Don't use raw pointers" rule is essentially more of a design question and quite unrelated to the whole header issue. While there might be some cases where you'll have to switch to raw pointers due to circular references, the use of forward declarations is otherwise just about compilation time (and maybe clean code), but it's not as essential for the programming itself.
In short: Don't use raw pointers just to avoid inclusions in header files; instead, use forward declarations wherever possible and utilize smart pointers as much as possible.


I have heard using namespace std; is bad practice, and that I should use std::cout and std::cin directly instead.
Why is this? Does it risk declaring variables that share the same name as something in the std namespace?
Consider two libraries called Foo and Bar:
using namespace foo;
using namespace bar;
Everything works fine, and you can call Blah() from Foo and Quux() from Bar without problems. But one day you upgrade to a new version of Foo 2.0, which now offers a function called Quux(). Now you've got a conflict: Both Foo 2.0 and Bar import Quux() into your global namespace. This is going to take some effort to fix, especially if the function parameters happen to match.
If you had used foo::Blah() and bar::Quux(), then the introduction of foo::Quux() would have been a non-event.
It can get worse than what Greg wrote!
Library Foo 2.0 could introduce a function, Quux(), that is an unambiguously better match for some of your calls to Quux() than the bar::Quux() your code called for years. Then your code still compiles, but it silently calls the wrong function and does god-knows-what. That's about as bad as things can get.
Keep in mind that the std namespace has tons of identifiers, many of which are very common ones (think list, sort, string, iterator, etc.) which are very likely to appear in other code, too.
If you consider this unlikely: There was a question asked here on Stack Overflow where pretty much exactly this happened (wrong function called due to omitted std:: prefix) about half a year after I gave this answer. Here is another, more recent example of such a question.
So this is a real problem.
Here's one more data point: Many, many years ago, I also used to find it annoying having to prefix everything from the standard library with std::. Then I worked in a project where it was decided at the start that both using directives and declarations are banned except for function scopes. Guess what? It took most of us only a few weeks to get used to writing the prefix, and after a few more weeks most of us even agreed that it actually made the code more readable. There's a reason for that: Whether you like shorter or longer prose is subjective, but the prefixes objectively add clarity to the code. Not only the compiler, but you, too, find it easier to see which identifier is referred to.
In a decade, that project grew to have several million lines of code. Since these discussions come up again and again, I once was curious how often the (allowed) function-scope using actually was used in the project. I grep'd the sources for it and only found one or two dozen places where it was used. To me this indicates that, once tried, developers don't find std:: painful enough to employ using directives even once every 100 kLoC even where it was allowed to be used.
Bottom line: Explicitly prefixing everything doesn't do any harm, takes very little getting used to, and has objective advantages. In particular, it makes the code easier to interpret by the compiler and by human readers — and that should probably be the main goal when writing code.
The problem with putting using namespace in the header files of your classes is that it forces anyone who wants to use your classes (by including your header files) to also be 'using' (i.e. seeing everything in) those other namespaces.
However, you may feel free to put a using statement in your (private) *.cpp files.
Beware that some people disagree with my saying "feel free" like this -- because although a using statement in a cpp file is better than in a header (because it doesn't affect people who include your header file), they think it's still not good (because depending on the code it could make the implementation of the class more difficult to maintain). This C++ Super-FAQ entry says,
The using-directive exists for legacy C++ code and to ease the transition to namespaces, but you probably shouldn’t use it on a regular basis, at least not in your new C++ code.
The FAQ suggests two alternatives:
A using-declaration:
using std::cout; // a using-declaration lets you use cout without qualification
cout << "Values:";
Just typing std::
std::cout << "Values:";
I recently ran into a complaint about Visual Studio 2010. It turned out that pretty much all the source files had these two lines:
using namespace std;
using namespace boost;
A lot of Boost features are going into the C++0x standard, and Visual Studio 2010 has a lot of C++0x features, so suddenly these programs were not compiling.
Therefore, avoiding using namespace X; is a form of future-proofing, a way of making sure a change to the libraries and/or header files in use is not going to break a program.
Short version: don't use global using declarations or directives in header files. Feel free to use them in implementation files. Here's what Herb Sutter and Andrei Alexandrescu have to say about this issue in C++ Coding Standards (bolding for emphasis is mine):
Summary
Namespace usings are for your convenience, not for you to inflict on others: Never write a using declaration or a using directive before an #include directive.
Corollary: In header files, don’t write namespace-level using directives or using declarations; instead, explicitly namespace-qualify all names. (The second rule follows from the first, because headers can never know what other header #includes might appear after them.)
Discussion
In short: You can and should use namespace using declarations and directives liberally in your implementation files after #include directives and feel good about it. Despite repeated assertions to the contrary, namespace using declarations and directives are not evil and they do not defeat the purpose of namespaces. Rather, they are what make namespaces usable.
One shouldn't use the using directive at the global scope, especially in headers. However, there are situations where it is appropriate even in a header file:
template <typename FloatType> inline
FloatType compute_something(FloatType x)
{
using namespace std; // No problem since scope is limited
return exp(x) * (sin(x) - cos(x * 2) + sin(x * 3) - cos(x * 4));
}
This is better than explicit qualification (std::sin, std::cos...), because it is shorter and has the ability to work with user defined floating point types (via argument-dependent lookup (ADL)).
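To make the ADL point concrete, here's a compilable cut-down variant of the template above; the mymath namespace, its my_float type and its operators are invented purely for the demonstration:
#include <cmath>

namespace mymath {
    struct my_float { double v; };                        // invented user-defined floating point type
    my_float sin(my_float x) { return { std::sin(x.v) }; }
    my_float cos(my_float x) { return { std::cos(x.v) }; }
    my_float operator*(my_float a, double b) { return { a.v * b }; }
    my_float operator-(my_float a, my_float b) { return { a.v - b.v }; }
}

template <typename FloatType> inline
FloatType compute_something(FloatType x)
{
    using namespace std;                                  // scope-limited, exactly as above
    return sin(x) - cos(x * 2.0);                         // std::sin/std::cos for double, mymath::sin/cos found via ADL
}

int main()
{
    double d = compute_something(1.0);                          // resolves to std::sin / std::cos
    mymath::my_float f = compute_something(mymath::my_float{1.0}); // resolves to mymath::sin / mymath::cos
    (void)d; (void)f;
}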
Do not use it globally
It is considered "bad" only when used globally. Because:
You clutter the namespace you are programming in.
Readers will have difficulty seeing where a particular identifier comes from, when you use many using namespace xyz;.
Whatever is true for other readers of your source code is even more true for the most frequent reader of it: yourself. Come back in a year or two and take a look...
If you only talk about using namespace std; you might not be aware of all the stuff you grab -- and when you add another #include or move to a new C++ revision you might get name conflicts you were not aware of.
You may use it locally
Go ahead and use it locally (almost) freely. This, of course, saves you from repeating std:: -- and repetition is also bad.
An idiom for using it locally
In C++03 there was an idiom -- boilerplate code -- for implementing a swap function for your classes. It was suggested that you actually use a local using namespace std; -- or at least using std::swap;:
class Thing {
int value_;
Child child_;
public:
// ...
friend void swap(Thing &a, Thing &b);
};
void swap(Thing &a, Thing &b) {
using namespace std; // make `std::swap` available
// swap all members
swap(a.value_, b.value_); // `std::swap(int, int)`
swap(a.child_, b.child_); // `swap(Child&,Child&)` or `std::swap(...)`
}
This does the following magic:
The compiler will choose the std::swap for value_, i.e. void std::swap(int, int).
If you have an overload void swap(Child&, Child&) implemented the compiler will choose it.
If you do not have that overload the compiler will use void std::swap(Child&,Child&) and try its best swapping these.
With C++11 there is no reason to use this pattern any more. The implementation of std::swap was changed to find a potential overload and choose it.
If you import the right header files you suddenly have names like hex, left, plus or count in your global scope. This might be surprising if you are not aware that std:: contains these names. If you also try to use these names locally it can lead to quite some confusion.
If all the standard stuff is in its own namespace you don't have to worry about name collisions with your code or other libraries.
Another reason is surprise.
If I see cout << blah, instead of std::cout << blah I think: What is this cout? Is it the normal cout? Is it something special?
Experienced programmers use whatever solves their problems and avoid whatever creates new problems, and they avoid header-file-level using-directives for this exact reason.
Experienced programmers also try to avoid full qualification of names inside their source files. A minor reason for this is that it's not elegant to write more code when less code is sufficient, unless there are good reasons. A major reason is that full qualification turns off argument-dependent lookup (ADL).
What are these good reasons? Sometimes programmers explicitly want to turn off ADL, other times they want to disambiguate.
So the following are OK:
Function-level using-directives and using-declarations inside functions' implementations
Source-file-level using-declarations inside source files
(Sometimes) source-file-level using-directives
I agree that it should not be used globally, but it's not so evil to use locally, like in a namespace. Here's an example from "The C++ Programming Language":
namespace My_lib {
using namespace His_lib; // Everything from His_lib
using namespace Her_lib; // Everything from Her_lib
using His_lib::String; // Resolve potential clash in favor of His_lib
using Her_lib::Vector; // Resolve potential clash in favor of Her_lib
}
In this example, we resolved potential name clashes and ambiguities arising from their composition.
Names explicitly declared there (including names declared by using-declarations like His_lib::String) take priority over names made accessible in another scope by a using-directive (using namespace Her_lib).
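A small self-contained sketch of that priority rule, with stand-in struct definitions added so it compiles; the unqualified names inside My_lib resolve to the explicitly declared ones rather than being ambiguous between the two libraries:
namespace His_lib { struct String {}; struct Vector {}; }   // stand-in definitions so the snippet compiles
namespace Her_lib { struct String {}; struct Vector {}; }

namespace My_lib {
    using namespace His_lib;       // everything from His_lib
    using namespace Her_lib;       // everything from Her_lib
    using His_lib::String;         // using-declaration: resolves the clash in favor of His_lib
    using Her_lib::Vector;         // using-declaration: resolves the clash in favor of Her_lib
}

int main()
{
    My_lib::String s;              // unambiguous: His_lib::String
    My_lib::Vector v;              // unambiguous: Her_lib::Vector
    (void)s; (void)v;
}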
I also consider it a bad practice. Why? Just one day I thought that the function of a namespace is to divide stuff, so I shouldn't spoil it with throwing everything into one global bag.
However, if I often use 'cout' and 'cin', I write: using std::cout; using std::cin; in the .cpp file (never in the header file as it propagates with #include). I think that no one sane will ever name a stream cout or cin. ;)
It's nice to see code and know what it does. If I see std::cout I know that's the cout stream of the std library. If I see cout then I don't know. It could be the cout stream of the std library. Or there could be an int cout = 0; ten lines higher in the same function. Or a static variable named cout in that file. It could be anything.
Now take a million line code base, which isn't particularly big, and you're searching for a bug, which means you know there is one line in this one million lines that doesn't do what it is supposed to do. cout << 1; could read a static int named cout, shift it to the left by one bit, and throw away the result. Looking for a bug, I'd have to check that. Can you see how I really really prefer to see std::cout?
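A contrived but compilable sketch of exactly that situation (the function name is made up); the point is that the unqualified cout silently becomes a left shift on the local int:
#include <iostream>
using namespace std;

void buggy()
{
    int cout = 0;        // imagine this ten lines up in a long function, easy to miss
    cout << 1;           // compiles fine: shifts the local int left by one bit and discards the result
    std::cout << 1;      // unambiguously the stream; the std:: prefix removes all doubt
}

int main() { buggy(); }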
It's one of these things that seem a really good idea if you are a teacher and never had to write and maintain any code for a living. I love seeing code where (1) I know what it does; and, (2) I'm confident that the person writing it knew what it does.
It's all about managing complexity. Using the namespace will pull things in that you don't want, and thus possibly make it harder to debug (I say possibly). Using std:: all over the place is harder to read (more text and all that).
Horses for courses - manage your complexity how you best can and feel able.
A concrete example to clarify the concern. Imagine you have a situation where you have two libraries, foo and bar, each with their own namespace:
namespace foo {
void a(float) { /* Does something */ }
}
namespace bar {
...
}
Now let's say you use foo and bar together in your own program as follows:
using namespace foo;
using namespace bar;
int main() {
a(42);
}
At this point everything is fine. When you run your program it 'Does something'. But later you update bar and let's say it has changed to be like:
namespace bar {
void a(float) { /* Does something completely different */ }
}
At this point you'll get a compiler error:
using namespace foo;
using namespace bar;
int main() {
a(42); // error: call to 'a' is ambiguous, should be foo::a(42)
}
So you'll need to do some maintenance to clarify that 'a' meant foo::a. That's undesirable, but fortunately it is pretty easy (just add foo:: in front of all calls to a that the compiler marks as ambiguous).
But imagine an alternative scenario where bar changed instead to look like this instead:
namespace bar {
void a(int) { /* Does something completely different */ }
}
At this point your call to a(42) suddenly binds to bar::a instead of foo::a and instead of doing 'something' it does 'something completely different'. No compiler warning or anything. Your program just silently starts doing something completely different than before.
When you use a namespace you're risking a scenario like this, which is why people are uncomfortable using namespaces. The more things in a namespace, the greater the risk of conflict, so people might be even more uncomfortable using namespace std (due to the number of things in that namespace) than other namespaces.
Ultimately this is a trade-off between writability vs. reliability/maintainability. Readability may factor in also, but I could see arguments for that going either way. Normally I would say reliability and maintainability are more important, but in this case you'll constantly pay the writability cost for a fairly rare reliability/maintainability impact. The 'best' trade-off will depend on your project and your priorities.
Consider
// myHeader.h
#include <sstream>
using namespace std;
// someoneElses.cpp/h
#include "myHeader.h"
class stringstream { // Uh oh
};
Note that this is a simple example. If you have files with 20 includes and other imports, you'll have a ton of dependencies to go through to figure out the problem. The worst thing about it is that you can get unrelated errors in other modules depending on the definitions that conflict.
It's not horrible, but you'll save yourself headaches by not using it in header files or the global namespace. It's probably all right to do it in very limited scopes, but I've never had a problem typing the extra five characters to clarify where my functions are coming from.
You need to be able to read code written by people who have different style and best practices opinions than you.
If you're only using cout, nobody gets confused. But when you have lots of namespaces flying around and you see this class and you aren't exactly sure what it does, having the namespace explicit acts as a comment of sorts. You can see at first glance, "oh, this is a filesystem operation" or "that's doing network stuff".
Using many namespaces at the same time is obviously a recipe for disaster, but using JUST namespace std and only namespace std is not that big of a deal in my opinion because redefinition can only occur by your own code...
So just treat those functions as reserved names, like "int" or "class", and that is it.
People should stop being so anal about it. Your teacher was right all along. Just use ONE namespace; that is the whole point of using namespaces in the first place. You are not supposed to use more than one at the same time. Unless it is your own. So again, redefinition will not happen.
I agree with the others here, but I would like to address the concerns regarding readability - you can avoid all of that by simply using typedefs at the top of your file, function or class declaration.
I usually use it in my class declaration as methods in a class tend to deal with similar data types (the members) and a typedef is an opportunity to assign a name that is meaningful in the context of the class. This actually aids readability in the definitions of the class methods.
// Header
class File
{
public:
    typedef std::vector<std::string> Lines;
    Lines ReadLines();
};
and in the implementation:
// .cpp
File::Lines File::ReadLines()
{
Lines lines;
// Get them...
return lines;
}
as opposed to:
// .cpp
vector<string> File::ReadLines()
{
vector<string> lines;
// Get them...
return lines;
}
or:
// .cpp
std::vector<std::string> File::ReadLines()
{
std::vector<std::string> lines;
// Get them...
return lines;
}
A namespace is a named scope. Namespaces are used to group related declarations and to keep separate items separate. For example, two separately developed libraries may use the same name to refer to different items, but a user can still use both:
namespace Mylib{
template<class T> class Stack{ /* ... */ };
// ...
}
namespace Yourlib{
class Stack{ /* ... */ };
// ...
}
void f(int max) {
Mylib::Stack<int> s1(max); // Use my stack
Yourlib::Stack s2(max); // Use your stack
// ...
}
Repeating a namespace name can be a distraction for both readers and writers. Consequently, it is possible to state that names from a particular namespace are available without explicit qualification. For example:
void f(int max) {
using namespace Mylib; // Make names from Mylib accessible
Stack<int> s1(max); // Use my stack
Yourlib::Stack s2(max); // Use your stack
// ...
}
Namespaces provide a powerful tool for the management of different libraries and of different versions of code. In particular, they offer the programmer alternatives of how explicit to make a reference to a nonlocal name.
Source: An Overview of the C++ Programming Language
by Bjarne Stroustrup
An example where using namespace std throws a compilation error because of the ambiguity of count, which is also a function in the <algorithm> library:
#include <iostream>
#include <algorithm>
using namespace std;
int count = 1;
int main() {
cout << count << endl;
}
It doesn't make your software's or project's performance worse. Including the namespace at the beginning of your source code isn't bad in itself. Whether to include the using namespace std directive varies according to your needs and the way you are developing the software or project.
The namespace std contains the C++ standard functions and variables. This namespace is useful when you would often use the C++ standard functions.
As is mentioned in this page:
The statement using namespace std is generally considered bad
practice. The alternative to this statement is to specify the
namespace to which the identifier belongs using the scope operator(::)
each time we declare a type.
And see this opinion:
There is no problem using "using namespace std" in your source file
when you make heavy use of the namespace and know for sure that
nothing will collide.
Some people have said that it is bad practice to include using namespace std in your source files because you're pulling in all the functions and variables from that namespace. If you wanted to define a new function with the same name as another function contained in the namespace std, you would overload the function, and it could produce problems at compile time or run time. It will not compile or execute as you expect.
As is mentioned in this page:
Although the statement saves us from typing std:: whenever
we wish to access a class or type defined in the std namespace, it
imports the entirety of the std namespace into the current namespace
of the program. Let us take a few examples to understand why this
might not be such a good thing
...
Now at a later stage of development, we wish to use another version of
cout that is custom implemented in some library called “foo” (for
example)
...
Notice how there is an ambiguity, to which library does cout point to?
The compiler may detect this and not compile the program. In the worst
case, the program may still compile but call the wrong function, since
we never specified to which namespace the identifier belonged.
It's case by case. We want to minimize the "total cost of ownership" of the software over its lifespan. Stating "using namespace std" has some costs, but not using it also has a cost in legibility.
People correctly point out that when using it, when the standard library introduces new symbols and definitions, your code ceases to compile, and you may be forced to rename variables. And yet this is probably good long-term, since future maintainers will be momentarily confused or distracted if you're using a keyword for some surprising purpose.
You don't want to have a template called vector, say, which isn't the vector known by everyone else. And the number of new definitions thus introduced in the C++ library is small enough it may simply not come up. There is a cost to having to do this kind of change, but the cost is not high and is offset by the clarity gained by not using std symbol names for other purposes.
Given the number of classes, variables, and functions, stating std:: on every one might fluff up your code by 50% and make it harder to get your head around. An algorithm or step in a method that could be taken in on one screenful of code now requires scrolling back and forth to follow. This is a real cost. Arguably it may not be a high cost, but people who deny it even exists are inexperienced, dogmatic, or simply wrong.
I'd offer the following rules:
std is different from all other libraries. It is the one library everyone basically needs to know, and in my view is best thought of as part of the language. Generally speaking there is an excellent case for using namespace std even if there isn't for other libraries.
Never force the decision onto the author of a compilation unit (a .cpp file) by putting this using in a header. Always defer the decision to the compilation unit author. Even a project that has decided to use using namespace std everywhere may find a few modules that are best handled as exceptions to that rule.
Even though the namespace feature lets you have many modules with symbols defined the same, it's going to be confusing to do so. Keep the names different to the extent possible. Even if not using the namespace feature, if you have a class named foo and std introduces a class named foo, it's probably better long-run to rename your class anyway.
An alternative to using namespaces is to manually namespace symbols by prefixing them. I have two libraries I've used for decades, both starting as C libraries, actually, where every symbol is prefixed with "AK" or "SCWin". Generally speaking, this is like avoiding the "using" construct, but you don't write the twin colons. AK::foo() is instead AKFoo(). It makes code 5-10% denser and less verbose, and the only downside is that you'll be in big trouble if you have to use two such libraries that have the same prefixing.
Note the X Window libraries are excellent in this regard, except they forgot to do so with a few #defines: TRUE and FALSE should have been XTRUE and XFALSE, and this set up a namespace clash with Sybase or Oracle that likewise used TRUE and FALSE with different values! (ASCII 0 and 1 in the case of the database!) One special advantage of this is that it applies seamlessly to preprocessor definitions, whereas the C++ using/namespace system doesn't handle them.
A nice benefit of this is that it gives an organic slope from being part of a project to eventually being a library. In a large application of mine, all window classes are prefixed Win, all signal-processing modules Mod, and so on. There's little chance of any of these being reused so there's no practical benefit to making each group into a library, but it makes obvious in a few seconds how the project breaks into sub-projects.
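A brief sketch of that prefixing style, with invented AK names rather than the actual library; each symbol carries its 'namespace' in its own name, which also covers preprocessor macros that real C++ namespaces can't:
// C-style manual namespacing: the "AK" prefix plays the role of a namespace
// (these AK names are purely illustrative, not the real library mentioned above)
#define AK_MAX_PATH 4096                    // even macros get the prefix; namespaces can't cover these

struct AKBuffer { char* data; int len; };

AKBuffer* AKBufferCreate(int len)           // roughly what ak::Buffer::create(len) would be with namespaces
{
    return new AKBuffer{ new char[len], len };
}

void AKBufferDestroy(AKBuffer* b)
{
    delete[] b->data;
    delete b;
}

int main()
{
    AKBuffer* b = AKBufferCreate(AK_MAX_PATH);
    AKBufferDestroy(b);
}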
I agree with others – it is asking for name clashes, ambiguities and then the fact is it is less explicit. While I can see the use of using, my personal preference is to limit it. I would also strongly consider what some others pointed out:
If you want to find a function name that might be a fairly common name, but you only want to find it in the std namespace (or the reverse – you want to change all calls that are not in namespace std, namespace X, ...), then how do you propose to do this?
You could write a program to do it, but wouldn't it be better to spend time working on your project itself rather than writing a program to maintain your project?
Personally, I actually don't mind the std:: prefix. I like the look more than not having it. I don't know if that is because it is explicit and says to me "this isn't my code... I am using the standard library" or if it is something else, but I think it looks nicer. This might be odd given that I only recently got into C++ (used and still do C and other languages for much longer and C is my favourite language of all time, right above assembly).
There is one other thing although it is somewhat related to the above and what others point out. While this might be bad practise, I sometimes reserve std::name for the standard library version and name for program-specific implementation. Yes, indeed this could bite you and bite you hard, but it all comes down to that I started this project from scratch, and I'm the only programmer for it. Example: I overload std::string and call it string. I have helpful additions. I did it in part because of my C and Unix (+ Linux) tendency towards lower-case names.
Besides that, you can have namespace aliases. Here is an example of where it is useful that might not have been referred to. I use the C++11 standard, specifically with libstdc++. Well, it doesn't have complete std::regex support. Sure, it compiles, but it throws an exception along the lines of it being an error on the programmer's end, when really it is a lack of implementation.
So here's how I solved it. Install Boost's regex, and link it in. Then, I do the following so that when libstdc++ has it implemented entirely, I need only remove this block and the code remains the same:
namespace std
{
using boost::regex;
using boost::regex_error;
using boost::regex_replace;
using boost::regex_search;
using boost::regex_match;
using boost::smatch;
namespace regex_constants = boost::regex_constants;
}
I won't argue on whether that is a bad idea or not. I will however argue that it keeps it clean for my project and at the same time makes it specific: True, I have to use Boost, but I'm using it like the libstdc++ will eventually have it. Yes, starting your own project and starting with a standard (...) at the very beginning goes a very long way with helping maintenance, development and everything involved with the project!
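For instance, here's a hedged sketch of the payoff: the calling code below is written entirely against std:: names, so it compiles unchanged whether those names come from the Boost-backed shim above or from a complete libstdc++ <regex>:
#include <regex>       // or rely on the Boost-backed shim above on older toolchains
#include <string>

bool has_number(const std::string& s)
{
    std::regex digits("[0-9]+");            // spelled the same whichever backend provides it
    std::smatch m;
    return std::regex_search(s, m, digits);
}

int main()
{
    return has_number("abc123") ? 0 : 1;    // works today with Boost behind the shim, later with libstdc++'s own <regex>
}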
Just to clarify something: I don't actually think it is a good idea to use a name of a class/whatever in the STL deliberately and more specifically in place of. The string is the exception (ignore the first, above, or second here, pun if you must) for me as I didn't like the idea of 'String'.
As it is, I am still very biased towards C and biased against C++. Sparing the details, much of what I work on fits C more (but it was a good exercise and a good way to make myself a. learn another language and b. try to be less biased against objects/classes/etc., which is maybe better stated as less closed-minded, less arrogant, and more accepting). But what is useful is what some already suggested: I do indeed use list (it is fairly generic, is it not?), and sort (same thing), to name two that would cause a name clash if I were to do using namespace std;, and so to that end I prefer being specific, in control, and knowing that if I intend it to be the standard use then I will have to specify it. Put simply: no assuming allowed.
And as for making Boost's regex part of std: I do that for future integration and – again, I fully admit this is bias – I don't think it is as ugly as boost::regex:: .... Indeed, that is another thing for me. There are many things in C++ that I still have yet to come to fully accept in looks and methods (another example: variadic templates versus var arguments [though I admit variadic templates are very, very useful!]). Even those that I do accept were difficult to accept, and I still have issues with them.
From my experience, if you have multiple libraries that use, say, cout, but for a different purpose, you may end up using the wrong cout.
For example, if I type using namespace std; and using namespace otherlib; and type just cout (which happens to be in both), rather than std::cout (or otherlib::cout), you might use the wrong one and get errors. It's much more effective and efficient to use std::cout.
I do not think it is necessarily bad practice under all conditions, but you need to be careful when you use it. If you're writing a library, you probably should use the scope resolution operators with the namespace to keep your library from butting heads with other libraries. For application level code, I don't see anything wrong with it.
With unqualified imported identifiers you need external search tools like grep to find out where identifiers are declared. This makes reasoning about program correctness harder.
This is a bad practice, often known as global namespace pollution. Problems may occur when more than one namespace has a function with the same name and signature; then it is ambiguous for the compiler to decide which one to call, and this can all be avoided when you specify the namespace with your function call, like std::cout. Hope this helps. :)
"Why is 'using namespace std;' considered a bad practice in C++?"
I put it the other way around: Why is typing five extra characters considered cumbersome by some?
Consider e.g. writing a piece of numerical software. Why would I even consider polluting my global namespace by cutting general "std::vector" down to "vector" when "vector" is one of the problem domain's most important concepts?
To answer your question, I look at it this way practically: a lot of programmers (not all) invoke namespace std. Therefore one should be in the habit of NOT using things that impinge on or use the same names as what is in the namespace std. That is a great deal, granted, but not so much compared to the number of possible coherent words and pseudonyms that can be come up with, strictly speaking.
I mean really... saying "don't rely on this being present" is just setting you up to rely on it NOT being present. You are constantly going to have issues borrowing code snippets and constantly repairing them. Just keep your user-defined and borrowed stuff in limited scope as they should be and be VERY sparing with globals (honestly globals should almost always be a last resort for purposes of "compile now, sanity later"). Truly I think it is bad advice from your teacher because using std will work for both "cout" and "std::cout" but NOT using std will only work for "std::cout". You will not always be fortunate enough to write all your own code.
NOTE: Don't focus too much on efficiency issues until you actually learn a little about how compilers work. With a little experience coding, you don't have to learn that much about them before you realize how much they are able to generalize good code into something simple. Every bit as simple as if you wrote the whole thing in C. Good code is only as complex as it needs to be.

When decorating ObjC code with nullability annotations, do you also have to do the definition, or just the declaration?

We're prepping a bunch of our ObjC code for consumption by Swift, which of course requires nullability annotations. Now it's my understanding that those nullability annotations only need to be at the declaration site, not on the definition. This means for cases where the declaration is in, say, a header file and the definition is in a m/mm file, you don't need to add them to the latter.
For instance...
Foo.h:
- (nullable Foo *)getFooWithKey:(NSString *_Nonnull)key;
Foo.mm:
- (Foo *)getFooWithKey:(NSString *)key
{
// Some implementation here...
}
Now some of my coworkers who have a lot more experience with ObjC are saying they have to go in both places, meaning the mm file actually has to be this...
Foo.mm:
- (nullable Foo *)getFooWithKey:(NSString *_Nonnull)key
{
// Some implementation here...
}
When I ask why, they say so they 'match' up. However, when we remove them from the m/mm files, they still seem to import in Swift just fine without them because Swift is only looking at the headers.
That said, I'm not sure if there are other things to consider that do require them in both places that we're just not testing for so I can't say that's definitive, only that our tests worked.
Readability vs. Productivity
Now normally, even if the latter doesn't actually do anything, if it aids in readability, that would be enough to say 'put it in both places'. However, in our particular case, we have potentially tens of thousands of APIs to update so eliminating that much extra work would be a huge win for everyone. Plus, it makes writing code-mods easier too.
The closest thing I've found in Apple's documentation about this are these two excerpts (emphasis mine)...
However, in the common case there’s a much nicer way to write these annotations: within method declarations you can use the non-underscored forms nullable and nonnull immediately after an open parenthesis, as long as the type is a simple object or block pointer.
and
The non-underscored forms are nicer than the underscored ones, but you’d still need to apply them to every type in your header. To make that job easier and to make your headers clearer, you’ll want to use audited regions.
It's not definitive though. The first may be calling that out as 'a nicer way' that is 'specific to declarations' but it doesn't say annotations in general only go there. The latter too says the non-underscored ones can be used in a header but again, doesn't say nullability annotations in general can only appear in a header, only that the audited regions do.
That said, does anyone know of where Apple would clarify this, or can anything else be shared which would let us know it's ok to skip them in the definitions/implementations?
Only the declaration needs to be annotated. For methods, that means the header file with the interface.

What is the advantage of saying your function should never be inlined?

I understand Swift's inlining well. I know the nuances between the four function-inlining attributes. I use @inline(__always) a lot, especially when I'm just making sugary APIs like this:
public extension String {
    @inline(__always)
    var length: Int { count }
}
I do this because there's not really a cost involved in inlining it, but there would be the cost of an extra stack frame if it weren't inlined. For less-obvious sugar, I'll lean toward @inlinable and/or @usableFromInline as needed.
However, one distinction vexes me. The two possible arguments to @inline are never and __always. Despite the lack of actual documentation, this choice of spelling here acts as a sort of self-documentation, implying that if you are going to use one of these, you should lean toward never, and __always is discouraged.
But why is this the direction the Swift language designers encourage? As far as I know, if no attribute is applied at all, then this is the behavior:
If a function (et al) is used within the module in which it's declared, the compiler might choose to inline it or not, depending on which would produce better code (by some measure)
If that function (et al) is used outside the module, its implementation is not exposed in a way that allows it to be inlined, so it is never inlined.
So, it seems most of the time, not-inlining is the default. That's fine and dandy, I have no problem with that on the surface; don't bloat the executable any more than you need to.
But then, I've never had a reason to think @inline(never) is useful. From what I understand, the only reason I would use @inline(never) is if I've noticed that the Swift compiler is choosing to inline a non-annotated function too much, and it's bloating my executable. This seems like a super-niche occurrence:
My software is running fine
The Swift compiler's algorithm for deciding whether to inline something is not making the right choice for my code
I care about the size of the binary so much that I'm inspecting it closely enough to discover that a function is being inlined automatically too much
The problem is only in code that I've written into my own module; not code I'm using from some other module
Or, as Rob said in the comments, if you're going through some disassembly and automatic inlining makes it hard to read.
I can't imagine that these are the use cases which the Swift language designers had in mind when designing this attribute. Especially since Swift is not meant for embedded systems, binary size (and the (dis)assembly in general) isn't really that much of a concern. I've never seen an unreasonably-large Swift binary anyway (>50MB).
So why is never encouraged more than __always? I often run into reasons why I should force a function to be inlined, but I've not yet seen a reason to force a function to be stacked, at least in my own work.

Progress 4GL - What's the benefit of placing variable declarations at the top of the procedure?

I've been doing Progress 4GL for 8 years, though it's not my main responsibility; I do C++ and Java a lot more. When programming in other languages it's suggested to keep the declaration close to the usage. With 4GL, however, I see people place the declarations at the top of the file. It's even in the coding standard.
I think placing them at the top of the file would lead to a 'vertical separation' problem. In most other languages it's even suggested to do the assignment on the same line as the declaration.
The question is: why is it suggested to do so in 4GL? What's the benefit? I know that it's possible to place the declaration anywhere in the file, given that it's declared before it is used.
I think the answer is to do with scoping, or the lack of it, within Progress 4GL.
If you are used to Java, say, and read a Progress 4GL program that looks like
DO:
DEFINE VARIABLE x AS INTEGER INITIAL 4.
DISPLAY x.
END.
then you wouldn't expect to be able to use this value of x anywhere else in the program, and you would expect that any changes made in the block wouldn't affect anything outside the block.
As I understand it, all Progress variables declared within the body of a program are scoped to the whole program, unless they are declared within an internal procedure or function, in which case they are scoped to that procedure or function.
(Incidentally, any default buffers [i.e. undeclared] you use within an internal procedure/function are scoped to the whole program, not just the procedure or function, so you need to be very careful to explicitly declare buffers in functions you intend to use recursively.)
I therefore think the convention of declaring variables at the beginning of a program is there to reflect the fact that Progress will treat them as if they had been declared there, regardless of where you put the declaration.
There is absolutely no benefit in scoping anything to the program as a whole when it could be scoped smaller.
Smaller scopes are easier to test, give less possibility of namespace conflict, and less opportunity for error.
Tightly scoped named buffers are especially useful when writing to the database because they eliminate the possibility of there ever being some other part of your code that uses the same buffer and causes a share-lock, i.e., this fails to compile:
do for b-customer transaction:
find b-customer where .... exclusive...
...
end.
...
find b-customer...
On the other hand, procedures and functions (and include files...) that share scope with the main body of code are a major source of bugs, because when you pick up your variable or whatever, you can never be entirely certain where it has been...
All of this is just basic Structured Programming, of course. It's true for every language and has been accepted since the 70's.
The "reason" that you usually see variables defined at the top is simple. Habit. That is just how things were done in the bad old days.
A lot of old code, or code written by old fossils, is written that way. No matter the language.
Some languages (COBOL springs to mind) even formalized it.
Is there any advantage to such an approach?
Not especially. I guess you could argue "they are all in one place and easy to find" but that isn't very compelling.
"Habit" is actually more compelling ;) If you are working with a team that expects a certain style or in an application where a particular style is prevalent then you should think twice before unilaterally throwing out a new way of doing things - the confusion could be a bigger problem than the advantages gained.

Does it matter if there are unused functions I put into a big CoolFunctions.h / CoolFunctions.m file that's included everywhere in my project?

I want to create a big file for all cool functions I find somehow reusable and useful, and put them all into that single file. Well, for the beginning I don't have many, so it's not worth thinking much about making several files, I guess. I would use pragma marks to separate them visually.
But the question: Would those unused methods bother in any way? Would my application explode or have less performance? Or is the compiler / linker clever enough to know that function A and B are not needed, and thus does not copy their "code" into my resulting app?
This sounds like an absolute architectural and maintenance nightmare. As a matter of practice, you should never make a huge blob file with a random set of methods you find useful. Add the methods to the appropriate classes or categories. See here for information on the blob anti-pattern, which is what you are doing here.
To directly answer your question: no, methods that are never called will not affect the performance of your app.
No, they won't directly affect your app. Keep in mind though, all that unused code is going to make your functions file harder to read and maintain. Plus, writing functions you're not actually using at the moment makes it easy to introduce bugs that aren't going to become apparent until much later on when you start using those functions, which can be very confusing because you've forgotten how they're written and will probably assume they're correct because you haven't touched them in so long.
Also, in an object oriented language like Objective-C global functions should really only be used for exceptional, very reusable cases. In most instances, you should be writing methods in classes instead. I might have one or two global functions in my apps, usually related to debugging, but typically nothing else.
So no, it's not going to hurt anything, but I'd still avoid it and focus on writing the code you need now, at this very moment.
The code would still be compiled and linked into the project, it just wouldn't be used by your code, meaning your resultant executable will be larger.
I'd probably split the functions into separate files, depending on the common areas they are to address, so I'd have a library of image functions separate from a library of string manipulation functions, then include whichever are pertinent to the project in hand.
I don't think having unused functions in the .h file will hurt you in any way. If you compile all the corresponding .m files containing the unused functions in your build target, then you will end up making a bigger executable than is required. Same goes for if you include the code via static libraries.
If you do use a function but you didn't include the right .m file or library, then you'll get a link error.