
The puzzles and problems in Exceptional C++ not only entertain, they will help you hone your skills to become the sharpest C++ programmer you can be.

Exceptional C++ Style: 40 New Engineering Puzzles, Programming Problems, and Solutions. Herb Sutter. Addison-Wesley. Boston • San Francisco • New York.



It also means you can get Really Obscure error messages when compiling code that tries to call destroy(FwdIter,FwdIter) with nonpointer iterators, because at least one of the actual failures will be on the destroy(first) line inside the two-parameter version, which typically generates such useful messages as the following, taken from one popular compiler: The first message indicates that the compiler was trying to resolve the statement destroy(first); as a call to the two-parameter version; the second indicates an attempt instead to resolve it as a call to the one-parameter version.

Both attempts failed, each for a different reason: The two-parameter version can take iterators but needs two of them, not just one, and the one-parameter version can take just one parameter but needs it to be a pointer. No dice. Having said all that, in reality we'd almost never want to use destroy with anything but pointers in the first place just because of the way the function is intended to be used, given that it turns things back into raw memory and all.

Still, only a simple change is needed to let FwdIter be any iterator type, not just a pointer, so why not do it: Have destroy(iter,iter) call the destructor explicitly.

In the two-parameter version of destroy, change the call destroy(first) to destroy(&*first). Here we are dereferencing the iterator to get a direct reference to the contained object and then taking its address, which guarantees that we get the pointer that we want. What's the moral of the story? Beware subtle genericity drawbacks when implementing one generic function in terms of another. In this case, there was a subtle principal drawback: The two-parameter version wasn't as generic for iterators as we originally thought.
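The repaired pair of destroy functions can be sketched as follows, assuming the one-parameter pointer version the Item describes; the Counted test type is hypothetical, added only so the effect of the call can be observed:

```cpp
#include <cstddef>
#include <new>

// One-parameter version: destroy the object a pointer points to.
template<class T>
void destroy(T* p) {
    p->~T();
}

// Two-parameter version: works with any forward iterator, not just a
// pointer. Dereferencing the iterator and taking the address of the
// result (&*first) guarantees we pass a real T* to the one-parameter
// version, instead of recursing on the iterator itself.
template<class FwdIter>
void destroy(FwdIter first, FwdIter last) {
    while (first != last) {
        destroy(&*first);   // was: destroy(first); which fails for nonpointers
        ++first;
    }
}

// Hypothetical helper type that counts live instances, so that the
// effect of destroy() is observable.
struct Counted {
    static int live;
    Counted()  { ++live; }
    ~Counted() { --live; }
};
int Counted::live = 0;
```

With this change, destroy works equally well on raw pointers into a buffer and on iterators from node-based containers, because &*first always yields a genuine pointer to the element.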

There was also an even subtler secondary drawback. Both traps were neatly avoided when we stopped implementing one function in terms of the other. I am not discouraging you from implementing templates with templates; I'm encouraging you to be aware of the potential interactions.

Clearly templates are often correctly implemented in terms of other templates. For example, programmers are commonly expected to specialize the std::swap function template for their own types. The primary std::swap template works by making one copy and performing two assignments; it therefore requires that T have a copy constructor and a copy assignment operator. If that's all you said, give yourself half marks only.

One of the important things to note about the semantics of any function is its exception-safety status, including what guarantees it provides. In this case, swap is not at all exception-safe if T's copy assignment operator can throw an exception. In particular, if T::operator= throws partway through, one or both objects can be left holding indeterminate values. Therefore this swap must be documented accordingly. There are two ways to remove the requirement that T have an assignment operator, and the first additionally provides better exception safety: Specialize or overload swap.
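The problem can be sketched as follows: a primary swap written the traditional way (one copy, two assignments), plus a hypothetical type Flaky whose assignment operator can be made to throw on demand. The names basic_swap and Flaky are illustrative, not from the original:

```cpp
#include <stdexcept>
#include <string>
#include <utility>

// A swap in the style of the primary std::swap template: one copy
// construction and two copy assignments. If T's copy assignment can
// throw partway through, the objects can be left half-swapped.
template<class T>
void basic_swap(T& a, T& b) {
    T temp(a);  // requires a copy constructor
    a = b;      // requires copy assignment; may throw
    b = temp;   // may throw after a has already been changed
}

// Hypothetical type whose Nth assignment can be made to throw.
struct Flaky {
    std::string value;
    static int assigns_before_throw;   // -1 disables throwing
    explicit Flaky(std::string v) : value(std::move(v)) {}
    Flaky(const Flaky&) = default;
    Flaky& operator=(const Flaky& other) {
        if (assigns_before_throw == 0)
            throw std::runtime_error("assign failed");
        if (assigns_before_throw > 0)
            --assigns_before_throw;
        value = other.value;
        return *this;
    }
};
int Flaky::assigns_before_throw = -1;
```

If the second assignment throws, one value has already been overwritten and is simply gone: neither object holds it any more, which is exactly the indeterminate state the text warns about.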

Say that we have a class MyClass that uses the common idiom of providing a nonthrowing Swap. Then we can specialize standard functions for MyClass as follows.

We can either specialize swap in namespace std for our type, or overload swap in the type's own namespace (not in namespace std), in each case forwarding to the class's own Swap. For example, the standard library itself overloads swap for vector so that calling swap actually invokes vector::swap. This makes a lot of sense, because vector::swap can simply exchange the vectors' internal representations. The primary swap template would instead create a complete new copy temp of one of the vectors, then perform additional copying from one vector to the other, and then from temp back again, which results in a lot of T operations and has complexity O(N), where N is the combined size of the vectors being swapped.
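Both approaches can be sketched for a hypothetical MyClass that provides the nonthrowing Swap idiom; the member names and bodies here are assumptions for illustration, not from the original:

```cpp
#include <string>
#include <utility>

// Hypothetical class providing the common nonthrowing Swap idiom.
class MyClass {
public:
    explicit MyClass(std::string s = "") : data_(std::move(s)) {}
    void Swap(MyClass& other) {        // never throws:
        data_.swap(other.data_);       // string::swap just trades guts
    }
    const std::string& data() const { return data_; }
private:
    std::string data_;
};

// Overloading: a nonmember swap in MyClass's own namespace (here the
// global namespace), found by argument-dependent lookup.
inline void swap(MyClass& a, MyClass& b) { a.Swap(b); }

namespace std {
    // Specializing: a full specialization of std::swap for the concrete
    // type, which the standard permits for user-defined types.
    template<>
    inline void swap<MyClass>(MyClass& a, MyClass& b) { a.Swap(b); }
}
```

Callers who write `using std::swap; swap(a, b);` pick up the overload via argument-dependent lookup, while fully qualified `std::swap(a, b)` calls reach the specialization; either way MyClass::Swap does the work.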

The specialized version will typically just assign a few pointers and integral types, and it runs in constant, and usually negligible, time. Zap, you're done. See Item 7 for more about function templates and specialization. So, if you create a type that provides a swap-like operation, it's usually a good idea to specialize std::swap for your type; it will usually be more efficient than a routine application of the primary std::swap template.

Guideline: Consider specializing std::swap for your own types.

The second way to remove the requirement that T be assignable is to write swap in terms of T's copy constructor instead of its copy assignment operator, and of course this works only if T indeed has a copy constructor. Beware, though: with this approach you can get into situations where the objects not only hold indeterminate values, but no longer exist at all!

If we know that T's copy construction is guaranteed not to throw, though, this version does have the extra ability to deal with types that can't be assigned but can be copy constructed, and there are indeed many such types.
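This copy-construction-based swap can be sketched along the lines the text describes: each object is destroyed in place and rebuilt via placement new from a copy of the other. The names swap_by_copy and Fixed are hypothetical, and as the text stresses, this is safe only if T's copy constructor can never throw:

```cpp
#include <new>

// swap written in terms of T's copy constructor only: destroy each
// object in place, then rebuild it from a copy of the other. This
// "plays games with object lifetimes": after a.~T() there is a window
// in which no object exists at that address, so T's copy constructor
// must never throw for this to be safe.
template<class T>
void swap_by_copy(T& a, T& b) {
    T temp(a);          // requires only a copy constructor
    a.~T();             // a ceases to exist...
    new (&a) T(b);      // ...until a replacement is copy-constructed
    b.~T();
    new (&b) T(temp);
}

// Hypothetical type that is copyable but deliberately not assignable.
struct Fixed {
    int id;
    explicit Fixed(int i) : id(i) {}
    Fixed(const Fixed&) = default;
    Fixed& operator=(const Fixed&) = delete;  // no value assignment by design
};
```

Note that swap_by_copy forcibly imposes value semantics on Fixed even though its author deleted assignment, which is precisely the kind of surprise the surrounding discussion warns against.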

Being able to swap such types is not necessarily a good thing, because if a type can't be assigned, it's probably set up that way for a good reason; for example, it likely doesn't have value semantics, and it might have const or reference members. Providing a mechanism to imbue or impose value semantics on such a type might be misguided and produce surprising and incorrect results.

Worse still, this approach plays games with object lifetimes, and that's always questionable. Here by "plays games" I mean that it changes not only the values, but the very existence, of the operated-upon objects.

Code using the Example d form of swap could easily produce surprising results when users forget about the unusual destruction semantics. A guideline: If you must play games with object lifetimes, and you know that doing so is okay, and you're certain that the operated-upon objects' copy constructors can never throw, and you're very sure that the unusually "imposed" value semantics will be all right in your application for those specific objects, then and only then might you legitimately decide to use such an approach, in those specific situations only.

This Item is about when, and why, not to specialize templates. In the following code, which version of f will be invoked by the last line?

Overloading vs. Specialization

It's important to make sure we have the terms down pat, so here's a quick review. There are class templates and function templates, and these two kinds of templates don't work in exactly the same ways. The most obvious difference is in overloading: Plain overloading is allowed for function templates but not for class templates. This is pretty natural.

Class vs. Function Templates

Further, primary templates can be specialized.

This is where class templates and function templates diverge further, in ways that will become important later in this Item. The following code illustrates these differences: Writing what looks like a function template partial specialization is really writing a distinct primary function template. Finally, let's focus on function templates only and consider the overloading rules to see which ones get called in different situations.
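The divergence can be sketched as follows: the class template Desc can be partially specialized, while the would-be "partial specialization" of the function template describe is really a second primary template that overloads the first. All names and tag strings here are illustrative:

```cpp
#include <string>

// Class templates CAN be partially (and fully) specialized:
template<class T> struct Desc {
    static std::string name() { return "primary"; }
};
template<class T> struct Desc<T*> {                 // partial specialization
    static std::string name() { return "partial specialization for T*"; }
};
template<> struct Desc<int*> {                      // full specialization
    static std::string name() { return "full specialization for int*"; }
};

// Function templates CANNOT be partially specialized. Writing what
// looks like one is really writing a distinct primary template that
// participates in overload resolution alongside the first:
template<class T> std::string describe(T)  { return "primary describe(T)"; }
template<class T> std::string describe(T*) { return "overloaded describe(T*)"; }
```

Calling describe with a pointer therefore selects the T* overload by ordinary partial ordering of primary templates, not by any specialization mechanism.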

The rules are pretty simple, at least at a high level, and can be expressed as a classic two-class system: A plain old nontemplate function that matches the parameter types as well as any function template will be selected over an otherwise-just-as-good function template.

Which primary function template gets selected depends on which matches best and is the "most specialized" according to the following set of fairly arcane rules. Important note: This use of "specialized" oddly enough has nothing to do with template specializations; it's just an unfortunate colloquialism.

If that primary template happens to be specialized for the types being used, the specialization will get used, otherwise the primary template instantiated with the correct types will be used.

The programmer will have to do something to qualify the call and say which one is wanted. Putting these rules together, here's a sample of what we get: Why Not Specialize: The question of the day, however, is why you expected it.

If you expected it for the wrong reason, you will be very surprised by what comes next. After all, "So what," someone might say, "I wrote a specialization for a pointer to int, so obviously that's what should be called," and that's exactly the wrong reason. The answer is… the third f. Here's the code again, this time annotated similarly to Example a to compare and contrast the two examples. The key to understanding this is simple, and here it is: Specializations don't overload.

Only the primary templates overload (along with nontemplate functions, of course). Consider again the salient part from the summary I gave earlier of the overload resolution rules, this time with specific words highlighted: Which primary function template gets selected depends on which matches best and is the "most specialized" […] according to a set of fairly arcane rules. Overload resolution selects only a primary template (or a nontemplate function, if one is available).
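The classic three-declaration example the Item describes can be sketched as follows; the tag strings are illustrative, added so the resolution can be observed:

```cpp
#include <string>

template<class T>
std::string f(T)  { return "f(T)"; }           // (a) first primary template

template<>
std::string f<int*>(int*) {                    // (b) explicit specialization:
    return "f<int*> specialization";           //     declared here, so it
}                                              //     specializes (a)

template<class T>
std::string f(T*) { return "f(T*)"; }          // (c) second primary template
```

Calling f with an int* considers only the primary templates (a) and (c); (c) is more specialized, so it wins, and the specialization (b), which belongs to (a), is never even looked at. Only a call that explicitly selects (a), such as f&lt;int*&gt;(p), reaches the specialization.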

Only after it's been decided which primary template is going to be selected, and that choice is locked in, will the compiler look around to see if there happens to be a suitable specialization of that template available, and if so that specialization will get used.

Guideline: Remember that function template specializations don't participate in overload resolution.

Important Morals

If you're like me, the first time you see this you'll ask how to guarantee that your own version gets called. If you want to be sure it will always be used in the case of an exact match, that's what a plain old function is for, so just make it a function instead of a specialization.

The rationale for why specializations don't participate in overloading is simple, once explained, because the surprise factor is exactly the reverse: The standards committee felt it would be surprising that, just because you happened to write a specialization for a particular template, it would in any way change which template gets used.

Under that rationale, and because we already have a way of making sure our version gets used if that's what we want (we just make it a function, not a specialization), we can understand more clearly why specializations don't affect which template gets selected.

Guidelines

Moral 1: If you want to customize a primary function template and want that customization to participate in overload resolution (or to always be used in the case of an exact match), don't make it a specialization; make it a plain old function.

Moral 2: If you do provide overloads of a function template, avoid also providing specializations. But what if you're the one who's writing, not just using, a function template? Can you do better and avoid this and other problems up front, for yourself and for your users? Indeed you can: If you're writing a primary function template that is likely to need specialization, prefer to write it as a single function template that should never be specialized or overloaded, and then implement the function template entirely as a simple handoff to a class template containing a static function with the same signature.
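That handoff pattern can be sketched with the hypothetical names FImpl and f:

```cpp
#include <string>

// The class template does all the work; users customize THIS, fully
// or partially, which is impossible for the function template itself.
template<class T>
struct FImpl {
    static std::string apply(const T&) { return "generic"; }
};

// The function template is a pure handoff: never specialize or
// overload it, so overload resolution on f can never be perturbed.
template<class T>
std::string f(const T& t) {
    return FImpl<T>::apply(t);
}

// A user-provided PARTIAL specialization of the class template, which
// transparently customizes f for all pointer types:
template<class T>
struct FImpl<T*> {
    static std::string apply(T* const&) { return "customized for pointers"; }
};
```

Because the customization point is a class template, both full and partial specializations are available, and none of them can change which f overload is selected.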

Everyone can specialize that class template, both fully and partially, without affecting the results of overload resolution.

Summary

It's okay to overload function templates. Whatever templates are visible are considered for overload resolution, and the compiler simply picks the best match.

It's a lot less intuitive to specialize function templates. For one thing, you can't partially specialize them; you overload them instead. For another thing, function template specializations don't overload. This means that any specializations you write will not affect which template gets used, which runs counter to what most people would intuitively expect.

After all, if you had written a nontemplate function with the identical signature instead of a function template specialization, the nontemplate function would always be selected because it's always considered to be a better match than a template. If you're writing a function template, prefer to write it as a single function template that should never be specialized or overloaded, and implement the function template entirely in terms of a class template.

This is the proverbial level of indirection that steers you well clear of the limitations and dark corners of function templates. This way, programmers using your template will be able to partially specialize and explicitly specialize the class template to their heart's content without affecting the expected operation of the function template. This avoids both the limitation that function templates can't be partially specialized and the sometimes surprising effect that function template specializations don't overload.

Problem solved. If you're using someone else's plain old function template (one that's not implemented in terms of a class template), and you want to write your own special-case version that should participate in overloading, don't make it a specialization; just make it an overloaded function with the same signature.

Befriending Templates

According to the standard, there are two legal syntaxes for befriending a function template specialization; according to real-world compilers, however, one of the syntaxes is widely unsupported, and the other works on all current versions of popular compilers… except one. Let's say we have a function template that does SomethingPrivate to the objects it operates on. In particular, consider boost::checked_delete, which needs to be able to invoke a private destructor. The solution is simple: declare the appropriate specialization as a friend. The only other option is to give up and make Test's destructor public. What could be easier?

If only compilers would agree… Show the obvious standards-conforming syntax for declaring a specialization of boost::checked_delete as a friend. Why is the obvious syntax unreliable in practice? Describe the more reliable alternative.

Befriending a template in another namespace is easier said in the standard than done using real-world compilers that don't quite get the standard right. In sum, I have some good news, some bad news, and then some good news again: There are two perfectly good standards-conforming ways to do it, and the syntax is natural and unsurprising.

Neither standard syntax works on all current compilers. Even some of the strongest and most conformant compilers don't let you write one or both of the legal, sanctioned, standards-conforming and low-cholesterol methods that you should be able to use. One of the perfectly good standards-conforming ways does work on every current compiler I tried except gcc.

Let's investigate.

The Original Attempt

This Item was prompted by a question on Usenet by Stephan Born, who wanted to do just that. His problem was that when he tried to write the friend declaration to make a specialization of boost::checked_delete a friend of his class, he couldn't find a spelling his compiler would accept. In brief, his original friend declaration has the following characteristics: It's easy. I'm also going to have some fun showing you what real compilers do, and then finish with a guideline for how to write the most portable code.

They boil down to this. When you declare a friend without saying the keyword template anywhere:

1. If the name of the friend looks like the name of a template specialization with explicit template arguments (e.g., f<int>), then the friend is that template specialization.

2. Else, if the name of the friend is qualified with a class or namespace name (e.g., ns::f) and that class or namespace contains a matching nontemplate function, then the friend is that function.

3. Else, if the name of the friend is qualified with a class or namespace name (e.g., ns::f) and that class or namespace contains a matching function template (deducing appropriate template parameters), then the friend is that function template specialization.

4. Else, the name must be unqualified and declare (or redeclare) an ordinary nontemplate function.

Clearly, buckets 2 and 4 match only nontemplates, so to declare the template specialization as a friend we have two choices: Write something that puts us into bucket 1, or write something that puts us into bucket 3. Even though both are legal, the bucket 3 shorthand, as we're about to see, makes use of a dark corner of the friend declaration rules that is sufficiently surprising to people (and to most current compilers!) to be worth avoiding.
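The recommended bucket 1 syntax can be sketched as follows. To stay self-contained this uses a homegrown lib::checked_delete with the same shape as the boost function the Item discusses; the Test flag member is a hypothetical addition so the effect can be observed:

```cpp
namespace lib {
    // A checked_delete-style function template: deletes p, with a
    // compile-time check (the well-known sizeof trick) that T is a
    // complete type at the point of deletion.
    template<class T>
    void checked_delete(T* p) {
        typedef char type_must_be_complete[sizeof(T) ? 1 : -1];
        (void)sizeof(type_must_be_complete);
        delete p;
    }
}

class Test {
    ~Test() { destroyed = true; }   // private destructor
public:
    static bool destroyed;
    static Test* make() { return new Test; }
    // Bucket 1: explicitly name the specialization with a template
    // argument list; here the list is empty and the arguments are
    // deduced from the parameter type.
    friend void lib::checked_delete<>(Test* x);
};
bool Test::destroyed = false;
```

Because the friend declaration names the specialization explicitly, lib::checked_delete<Test> (and only that specialization) may invoke Test's private destructor.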

It Doesn't Always Work

As already noted, the bucket 3 syntax is a shorthand for explicitly naming the template arguments in angle brackets, but the shorthand works only if the name is qualified and the indicated class or namespace does not also contain a matching nontemplate function.

In particular, if the namespace has (or later gets) a matching nontemplate function, that function would get chosen instead, because the presence of a nontemplate function means bucket 2 preempts bucket 3. Kind of subtle and surprising, isn't it? Kind of easy to mistake, isn't it? Let's avoid such subtleties.

It's Surprising to People

Bucket 3 is edgy and fragile and surprising to programmers who look at the code and try to figure out what it does. For example, consider a very slight variant in which all that I've changed is to remove the qualification boost::. I'll bet you dollars to donuts that just about everyone on our beautiful planet will agree with me that it's Pretty Surprising that just omitting a namespace name changes the meaning of the friend declaration so drastically.
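The surprise can be sketched as follows: with the qualification removed, the friend declaration falls into bucket 4 and names a brand-new nontemplate function, not the template specialization. The names lib::zap and Widget are hypothetical stand-ins:

```cpp
namespace lib {
    // Hypothetical function template living in another namespace.
    template<class T>
    void zap(T* p) { delete p; }
}

class Widget {
    ~Widget() { destroyed = true; }   // private destructor
public:
    static bool destroyed;
    static Widget* make() { return new Widget; }
    // Unqualified, and no template argument list: by bucket 4 this does
    // NOT befriend lib::zap<Widget>. It declares a brand-new nontemplate
    // function ::zap(Widget*) and makes THAT the friend.
    friend void zap(Widget* w);
};
bool Widget::destroyed = false;

// This nontemplate function is the friend the declaration above named;
// lib::zap<Widget> still has no access to Widget's destructor.
void zap(Widget* w) { delete w; }
```

One missing namespace qualification silently redirects the friendship from the intended template specialization to an unrelated nontemplate function.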

Let's avoid such edgy constructs.

It's Surprising to Compilers

Bucket 3 is edgy and fragile and surprising to compilers, too, and that can make it unusable in practice even if we disregard the other shortcomings mentioned earlier.

Let's try the two options, bucket 1 and bucket 3, on a wide range of current compilers and see what they think. Will the compilers understand the standard as well as we do, having read this Item so far? Will at least all the strongest compilers do what we expect? No and no, respectively. Let's try bucket 3 first: If you've ever watched the game show "Family Feud" on television, you can now imagine Richard Dawson's voice announcing the dismal survey results. Let's try writing it the other standards-conforming way, for bucket 1.

It's the Namespace That's Confusing Them

Note that if the function template we're trying to befriend weren't in a different namespace, we could use bucket 1 correctly today on nearly all these compilers. Say that three times fast.

Two Non-Workarounds

When this question arose on Usenet, some responses suggested writing a using-declaration (or, equivalently, a using-directive) and making the friend declaration unqualified. The Standard is not clear that this is legal; there's an open issue in the standards committee to decide whether or not this ought to be legal; there is sentiment that it should not be legal; and in the real world virtually all current compilers that I tried reject it.

Why do people feel that it should not be legal? For consistency: using exists to make it easier to use names, to call functions, and to use type names in variable or parameter declarations. Declarations are different. Just as you must declare a template specialization in the template's original namespace (you can't do it in another namespace "through a using"), so you should be able to declare a template specialization as a friend only by naming the template's original namespace, not "through a using."

Summary

To befriend a function template specialization, you can choose one of two syntaxes. Prefer to be explicit: if you're talking about a template and there's any question about what you mean, include a (possibly empty) template argument list. Avoid the dark corners of the language, including constructs that might be arguably legal but that are liable to confuse programmers, or even compilers.

For example, you could create a proxy class inside namespace boost and befriend that.

Fundamentals

What is meant by the "inclusion model" for templates? What is meant by the "separation model" for templates? What are some of the major drawbacks of the inclusion model? This Item and the next take a closer look at our experience to date with export.

As of this writing there is still exactly one commercially available compiler that supports the export feature. There is still little experience with using export on real-world projects, although that will hopefully change if export-capable implementations become more widely available and used. But there are things that we do know and that the original implementers have learned, and here's what this Item and the next cover. In the inclusion model, template code is as good as all inline from a source perspective (though the template doesn't have to be actually inline): The template's full source code must be visible to any code that uses the template.

This is called the inclusion model because we basically have to include all template definitions right there in the template's header file. On the other hand, the separation model is intended to allow "separate" compilation of templates. The "separate" is in quotation marks for a reason. In the separation model, template definitions do not need to be visible to callers. It's tempting to add "just like plain functions," but that's actually incorrect; it's a similar mental picture, but the effects are significantly different, as we shall see when we get to the surprises.

The separation model is relatively new. It was added to the standard in the mid-1990s, but the first commercial implementation, by EDG, didn't appear until the summer of 2002. (Cfront had attempted something in this direction much earlier, but Cfront's implementation was slow, and it was based on a "works most of the time" heuristic such that, when Cfront users encountered template-related build problems, a common first step to get rid of the problem was to blow away the cache of instantiated templates and reinstantiate everything from scratch.) Bear with me as I risk delving too deeply into compilerese for one paragraph: The inclusion and separation models are source organization models; that is, they're about how you can choose to arrange and organize your source code.

They are not different instantiation models; that is, a compiler does essentially the same work to instantiate templates under either source model, inclusion or export.

This is important because it is part of the underlying reason why export's limitations, which we'll get to in a moment, surprise many people; in particular, using export is unlikely to improve build times to the degree that separate compilation for functions routinely does.

For example, under either source model, the compiler can still perform optimizations such as relying on (rather than enforcing) the One Definition Rule (ODR) to instantiate each unique combination of template parameters only once, no matter how often and widely that combination is used throughout your project.

Such optimizations and instantiation policies are available to compiler writers regardless of whether the inclusion or separation model is being used to physically organize the template's source code; although it's true that the separation model allows the optimizations, so does the inclusion model. To illustrate, let's look at some code.

We'll look at a function template under both the inclusion and separation models, but for comparison purposes I'm also going to show a plain old function under the usual inline and out-of-line separately-compiled models. This will help to highlight the differences between today's usual function separate compilation and export's "separate" template compilation. The two are not the same, even though the terms commonly used to describe them look the same, and that's why I put "separate" in quotes for the latter.

Consider the following code, a plain old inline function and an inclusion-model function template. The whole world can see the perhaps-proprietary definitions for f and g. In itself, that might or might not be such a bad thing; more on that later. All callers of f and g depend on the respective bodies' internal details, so every time a body changes, all its callers have to recompile. Also, if either f's or g's body uses any other types not already mentioned in their respective declarations, then all their respective callers will need to see those types' full definitions too.
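Such a header can be sketched as follows; the names f and g follow the text, but the bodies are invented purely for illustration:

```cpp
// widgets.h (a single header): under the inclusion model, both the
// inline function f and the function template g must expose their full,
// perhaps-proprietary, definitions to every caller.
#include <cstddef>
#include <vector>

inline int f(int x) {
    return x * 2;                        // f's body is world-readable
}

template<typename T>
T g(const std::vector<T>& v) {
    T sum = T();                         // g's body must also be visible,
    for (std::size_t i = 0; i != v.size(); ++i)
        sum += v[i];                     // so it can be instantiated at
    return sum;                          // each caller's point of use
}
```

Any translation unit that includes this header sees, and recompiles against, both bodies in full.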

Can we do better, and avoid shipping the definition? For the function, the answer is an easy "of course," because of separate compilation: Move the definition into a separately compiled implementation file and ship only object code. We can still ship the implementation's source code if we want to, but we don't have to. (Note that many popular libraries, even closely guarded proprietary ones, ship source code anyway, possibly at extra cost, because users demand it for debuggability and other reasons.) Callers no longer depend on f's internal details, so every time the body changes, all its callers only have to relink.

This frequently makes builds an order of magnitude or more faster. Similarly, and usually to somewhat less dramatic effect on build times, f's callers no longer depend on types used only in the body of f. That's all well and good for the function, but we already knew all that; we've been doing this since C, and since before C (which is a very, very long time ago). What about the template? The idea behind export is to get something like this effect for templates, and one might well expect it to deliver exactly that. One would be wrong, but one would still be in good company, because this has surprised a lot of people, including world-class experts.

A more independent little template? Did export solve the two problems just illustrated, source exposure and compilation dependencies? It might have ameliorated one of them, depending. Let's consider the issues in turn.

Source Exposure

The first problem is unsolved: Source exposure for the definition remains. Indeed, in the only existing implementation of export, the compiler requires that the template's full definition be shipped; that is, the full source code. What about shipping the source in some encrypted form? The answer is that any encryption that a program can undo without user intervention (say, to enter a password each time) is easily breakable.

At the point of instantiation of a use of the template, dependent names must be looked up in two places. They must be looked up in the instantiation context; that's easy, because that's where the compiler is already working. But they must also be looked up in the template's definition context. Think about Example b from the compiler's point of view: Your library has an exported function template g, with its definition nicely ensconced away outside the header. Well and good.

The library gets shipped. A year later, one fine sunny day, it's used in some customer's translation unit. To instantiate g there, the compiler has to look, among other places, at g's definition, in your implementation file. And there's the rub… export does not eliminate such dependencies on the template's definition; it merely hides them.

Exported templates are not truly "separately compiled" in the usual sense we mean when we apply that term to functions. Exported templates cannot in general be separately compiled to object code in advance of use; for one thing, until the exact point of use, we can't even know the actual types the template will be instantiated with.

So exported templates are at best "separately partly compiled," or "separately parsed." There is a similarity here to Java and .NET libraries, where the bytecode or IL can be reversed to reveal something very like the source code.

Guideline: Remember that export doesn't imply true separate compilation of templates like we have for functions.

Dependencies and Build Times

The second problem is likewise unresolved: Dependencies are hidden, but remain. Every time the template's body changes, the compiler has to reinstantiate all the uses of the template. During that process, the translation units that use g are still processed together with all of g's internals, including the definition of g and the types used only in the body of g.

The template code still has to be compiled in full later, when each instantiation context is known. Here is the key concept to remember:

Guideline: Remember that export only hides dependencies; it doesn't eliminate them.

It's true that callers no longer visibly depend on g's internal details, inasmuch as g's definition is no longer openly brought into the caller's translation unit via included code; the dependency can be said to be hidden at the human-reading-the-source-code level.

But that's not the whole story, because we're talking compilation-the-compiler-must-perform dependencies here, not human-reading-the-code-while-sipping-a-latte dependencies, and compilation dependencies on the template definitions still exist.

True, the compiler might not have to go recompile every translation unit that uses the template; but it must go away and recompile at least enough of the other translation units that use the template so that all the combinations of template parameter types on which the template is ever used get reinstantiated from scratch.

The compiler can't just go relink object code that is truly separately compiled. (The idea is that the compiler would rely on, rather than enforce, the One Definition Rule; that is, instantiate each unique combination of template parameters once and assume any other potential definitions are identical.) Further, remember that many templates use other templates, and therefore the compiler next performs a cascading recompilation of those templates and their translation units too, and then of whatever templates those templates use, and so on recursively, until there are no more cascading instantiations to be done.

If, at this point in our discussion, you are glad that you personally don't have to implement export, that's a normal reaction. Even with export, it is not the case that all callers of a changed exported template "just have to relink," nor that definitions need not be shipped; neither outcome is promised by export. The community's experience to date is that source (or its direct equivalent) must still be shipped, and that build speeds are expected to be the same or slower, rarely faster, principally because dependencies, though masked, still exist, and the compiler might still have to do the same amount of work or more in common cases.

We'll also see some initial advice on how to use export effectively, if you happen to acquire an export-capable compiler.

Interactions, Usability Issues, and Guidelines

When was export added to the standard? When was it first implemented?

Briefly explain the interactions. How does export affect the programmer? What real and potential benefits does export have? In the previous Item, we covered the following: We looked at an analysis of the similarities and differences between the "inclusion" and "export" template source code organization models, and why they're not parallel to the differences between inline and separately compiled functions. Widespread expectations notwithstanding, export is not about truly "separate" compilation for templates in the same way we have true separate compilation for nontemplates.


The community's most informed experience to date is that full source or its direct equivalent must still be shipped and that it is yet unknown whether build speeds will be better, worse, or just about the same in common real-world usage. Principally this is because dependencies, though masked, still exist, and the compiler still has to do at least the same amount of work in common cases.

In short, it's a mistake (albeit a natural one) to think that export gives true separate compilation for templates in the sense that the template author need only ship declaration headers and object code. Rather, what is exported is similar to Java and .NET libraries, where the bytecode or IL can be reversed to reveal something very like the source; it is not traditional object code. This time, I'll cover the remaining issues. But first, consider a little history.

Historical Perspective

When was export added to the draft standard, and when was it first implemented? The answers are 1996 and 2002, respectively.

Given this, and given also that there are some valid criticisms of export, it might be tempting to start casting derisive stones and sharp remarks at the people who came up with what we might view as a misfeature. Doing so would be ungracious and unkind, and could possibly smack of armchair-quarterbacking. This "backgrounder" part of the Item exists for balance, because on the export issue it's been easy for people to go to extremes in both directions, pro and con export.

If export doesn't appear to deliver the advantages that many people expect, why does it exist? The reason is quite simple: In the mid-1990s, a majority of the committee believed that shipping a standard that did not have separate compilation for templates, as we already had for nontemplate functions, would be incomplete and embarrassing.

In short, export was retained in the then-draft standard on principle. Principle is very often a good thing. It should never be disparaged, especially by armchair quarterbacks like us, looking back with the benefit of many years' worth of hindsight.

That "like us" part includes me. Although now, years later, I chair the ISO committee, I didn't start personally attending committee meetings until the following year. Remember that, back then, templates themselves were still pretty new: The focus was entirely on enabling parameterized types and functions, the given examples being a List container that could hold different types of objects and a sort that could sort different types of sequences. Even in these early days, however, templates were conceived with the desire for a separate compilation model in mind.

Remember, it wasn't until late 1993 that Stepanov made his first presentation of the STL to the committee, which adopted it in 1994 as a groundbreaking achievement, and by today's standards the STL was "just" a container and algorithm library.

This is why I say that, back then, templates themselves were still pretty new. Around that time, only one commercial compiler could cope with the initial STL, for example. So it was that the community in general, and the standards committee in particular, still had a comparatively short record of real-world experience with even the simpler ARM templates that existed. The climate was no longer quite embryonic, but it was young and still growing and forming.

And it was in this formative climate, with that limited experience, that the standards committee was forced to decide whether to keep exported templates in the then-draft standard.

In particular, the idea made all the compiler vendors nervous. Even supporters of export viewed it as a necessary compromise, while disliking export as a source of complexity; some would have preferred general separate compilation with no special keyword. In 1996, there was a coordinated push within the committee to remove the notion of "separate" template compilation.

In particular, it was argued, the separate template compilation model had never actually been implemented, and the committee had no idea whether it would work as intended. It was finally as a concession to this objection that the export keyword was soon thereafter invented, to help compilers by providing a means to at least tag which templates were supposed to be separately compiled.

In fact, there were papers presented at that time (papers that in retrospect could be called insightful, bordering on prescient) that detailed some of the major potential shortcomings of the export model as described in the draft standard. In particular, the compiler implementers unanimously opposed including separate template compilation in the standard, on the grounds that it was too early to know whether it was the right thing.

They had serious unanswered concerns about the existing formulations (with or without an export keyword), and they didn't feel they had enough experience yet to come up with a fully baked alternative (not to mention insufficient time; the standard was being stabilized and would be set in stone the following year). Rather, they wanted to take the time to design it right and do it in the next standard.

They favored the idea of separate template compilation in principle, but felt that export wasn't fully baked and they still didn't know enough to do it right. They lost, narrowly, and export stayed in the standard. As I summarized earlier, a slim majority of the committee believed that shipping a standard that did not have some form of "separate" compilation for templates, as C already did for functions, would be incomplete and embarrassing.

Several compilers had already been experimenting with forms of "separate" template compilation, and it seemed to be a good idea in principle. And it's a good principle, not to be disparaged. At the March 1996 meeting, the straw vote was 2-to-1 against separate template compilation.

At the July 1996 meeting, where the export keyword was introduced, the vote was 2-to-1 in favor of export. To emphasize: The world's compiler vendors opposed export in particular; they did not oppose the principle of separate template compilation. They just felt they needed more time to be confident that the standard would get it right. Although some of the world-class experts who voted in favor of retaining export now see it as a mistake, the intent and motivation were good, and there is still hope that export will deliver some benefits, if not all the big ones that were initially hoped for, as we gain experience with the first shipping compiler to implement export (Comeau 4.3).

The export feature alone took more than three person-years to code and test, not counting design work; by comparison, implementing the entire Java language took the same three people only two person-years. Why is export so difficult to implement, and so complex?

Here are two major reasons. First, export relies on Koenig lookup. Most compilers still get Koenig lookup wrong even within a single translation unit (informally, this means a source file); export requires performing Koenig lookup across translation units. For more about Koenig lookup, see [Sutter00, Item 27]. Second, conceptually, export requires a compiler to deal simultaneously with many symbol tables. Instantiating an exported template can trigger cascaded instantiations in other translation units.

Each must be able to refer to entities that existed, or "sort of existed," when the template definition was parsed. With export, at least conceptually, you need to deal simultaneously with an arbitrary number of symbol tables.
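To make the Koenig-lookup point concrete, here is a minimal sketch of argument-dependent lookup within a single translation unit; the `audio` namespace and all names in it are hypothetical, for illustration only. Export requires the compiler to perform exactly this kind of lookup across translation units.

```cpp
#include <cassert>
#include <string>

namespace audio {
    struct Clip { std::string name; };

    // Found via Koenig (argument-dependent) lookup: because the argument
    // is an audio::Clip, namespace audio is searched automatically.
    std::string describe(const Clip& c) { return "clip:" + c.name; }
}

// No "using namespace audio" and no qualification at the call site:
// describe(c) is resolved by looking in the namespace of its argument's type.
inline std::string play(const audio::Clip& c) {
    return describe(c);  // Koenig lookup finds audio::describe
}
```

With export, the helper `describe` could live in a different translation unit from the instantiation, which is what makes the lookup so much harder to implement.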

Many of these real effects of export are not mentioned or addressed in the standard. In particular, export "exports" more than its template: Some file-static functions and objects must now have external linkage, or at least behave as though they did, if they are used in exported templates.

This is counter to the intent of unnamed namespaces and namespace-scope static, which was to make those names strictly internal to their original translation unit. A major benefit of putting internal functions into the unnamed namespace and the deprecated file static was to "privatize" those functions so you could give them simple names without worrying about name conflicts and overloading effects across source files.

Now, because part of that protection is removed and such helpers can and do participate in overload resolution with each other via exported templates, it is, alas, a good idea to obfuscate their names again if you use such functions or objects in an exported template, even if the function is in an unnamed namespace or is file static, so as to avoid silent changes of meaning.
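The advice above can be sketched with an ordinary inclusion-model template; all names are hypothetical. Under the inclusion model, internal linkage keeps the helper private to its translation unit; under export, a blandly named helper used inside an exported template could collide with same-named helpers from other translation units, so a distinctive name is safer.

```cpp
#include <cassert>
#include <string>

namespace {
    // Internal helper. With the inclusion model, internal linkage keeps
    // this name private to the translation unit. Under export, a helper
    // used inside an exported template could participate in overload
    // resolution with same-named helpers elsewhere, so the deliberately
    // distinctive (obfuscated) name reduces the risk of silent changes
    // of meaning.
    std::string myapp_render_impl_v1(int x) {
        return "[" + std::to_string(x) + "]";
    }
}

template <typename T>
std::string render(T value) {
    return myapp_render_impl_v1(value);  // resolved per instantiation
}
```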

For example, a class might have multiple befriending entities in different translation units, and declarations of that class from those different translation units might all be participating in an instantiation. If so, which set of access rules should be applied?

Sutter H. Exceptional C++: 47 Engineering Puzzles, Programming Problems, and Solutions

These issues might seem minor, and many of the errors might be innocuous, but on some popular platforms ODR violations are increasingly important (see [Sutter02c] for one example). Here are three examples to illustrate why this is so.

Example 1: It is easier than before for programmers to write programs that have hard-to-predict meaning.

Like an inclusion-model template, an exported template commonly has different paths by which it could be instantiated, and each path commonly has a different context. For those who might say, "But we already have that problem with functions defined in header files," there is an important difference: Templates use dependent names, names that are dependent on (and therefore vary with) the template arguments, and so for each instantiation of the template with the very same template arguments, the template's user must be careful to provide exactly the same context.

Why is this expected to be somewhat worse under the export model than for inclusion-model templates?

Example 2: It is harder for the compiler to generate high-quality diagnostics to aid programmers.

Template error messages are already notoriously hard to understand because of long and verbose names, but besides that, what's less obvious to programmers is that it's already harder for compiler writers to give good error messages for templates, because templates can generate multiple and cascading instantiations.

With export, there is now the additional dimension of multiple translation units: A message such as "error on line X, caused by the instantiation of this function, caused by the instantiation of this function, …" must now also say which translation unit it was in when each step happened, and each line in the traceback could be from a different translation unit.

Detecting ODR violations for exported templates is a challenging problem in itself, but detecting what was really meant so as to provide "did you mean" guidance is even harder. Many of us would be happy just to have our compiler emit readable error messages for plain old templates.

Example 3: Export puts new constraints on the build environment. The build environment no longer consists of just headers and independently compiled source files. As noted in the previous Item, if you change an exported template file, you need to recompile that file, but you also need to recompile the instantiations; that is, export really does not separate dependencies, it just hides them.

It's hard to make up simple usage guidelines that will keep users out of trouble. Here are two actual and potential values of export that some early adopters hope to achieve. The first is build speed, still an open question: It is still unknown what, if any, impact export will have on build speed in common real-world template-using code.

If the feature becomes more widely adopted and used, exploration in this area will let us discover how common the beneficial cases are and how easy or difficult those cases are to construct. In particular, it is hoped that translation units that use exported templates will be less sensitive to changes in the templates' definitions.

Caveats to the first: For reasons why export might not break dependencies and why the dependencies still exist, see the previous Item. The second value is macro leakage, and this is a real advantage of export: Macros leak across traditional inclusion-model header files.

Because the inclusion-model source code is entirely available in each translation unit, outside macros pulled in from elsewhere earlier in that translation unit can affect the template's definition. With export, macros don't leak across translation units, and this will help the template author maintain better control over his template definitions (which are off in a separate file) and prevent outside macros from interfering as easily with his template definitions' internals.
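The kind of leakage just described can be shown with an ordinary inclusion-model template; the macro and function names below are contrived, for illustration only. Because macro substitution happens before the template is ever parsed, an ill-behaved macro silently rewrites the template's internals.

```cpp
#include <cassert>

// An ill-behaved function-like macro, standing in for one pulled in from
// some legacy header earlier in the translation unit.
#define min(a, b) 0

// Imagine this template lives in a header included *after* the macro.
// The author intended to call a function named min, but the preprocessor
// has already rewritten the call to the literal 0.
template <typename T>
T clamp_floor(T value, T floor) {
    return min(value, floor);  // expands to: return 0;
}

#undef min  // too late: the template body was already rewritten
```

Under export, the template definition is compiled in its own translation unit, so a macro in the using code could not reach into the definition like this.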

This is a real advantage of export, but it is not unique to export: A preprocessor scope-control facility has also been proposed to address the same problem. If such a solution is adopted, it would eliminate this advantage of export entirely, because the preprocessor scope-control solution would deliver all the macro-protection benefits of export and many more, in a better and more general way.

In summary, it remains to be seen in the coming years how much benefit export gives over normal include-all-the-code-in-the-header templates, but I'd like to strongly encourage the people who run those tests to also report the results of organizing their code to take full advantage of the EDG implementation's non-export template optimization capabilities and see whether any advantages to export actually remain.

Morals

So should you use export, and if so, how can you use it safely? For most programmers today the question is moot: Their compilers can't compile export, not anytime soon, so they won't use it. What if you're using one of those up-and-coming newfangled export-capable compilers?

Ah, now we can finally come up with an initial guideline.

Guideline: For portable code, don't use export. This borders on being a truism. What if you don't need portable code, have export, and are tempted to use it? Then caveat emptor: Be aware that exported templates can also be trickier to write, for the reasons mentioned in these two Items and summarized again below. Let someone else be the guinea pig as we spend the next year or two trying it out and learning about what export will really give us.
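For contrast, here is the portable inclusion-model organization the guideline implies: the template's full definition ships in the header, visible to every translation unit that uses it. The file name and function are illustrative, not from the original text.

```cpp
#include <cassert>
#include <cstddef>

// --- accumulate_squares.h (illustrative) ---
// Inclusion model: the full definition lives in the header, so every
// translation unit that includes it can instantiate the template from
// source. This is the portable alternative to export.
template <typename T>
T accumulate_squares(const T* data, std::size_t n) {
    T sum = T();
    for (std::size_t i = 0; i < n; ++i)
        sum += data[i] * data[i];  // definition visible to every user
    return sum;
}
```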

Guideline: For now, avoid export. But if you do decide to be one of the early-adopter experimenters, here are some things we already know you can do to make life safer and less stressful.

Guidelines: If you do choose to use export selectively for some templates, then: Don't expect that export means you don't have to ship source code or its equivalent anyway; you still do, and this will not change. Don't expect that export means your builds will be earth-shatteringly faster.

Initial experience is inconclusive, but your builds could well be slower. Do check that your tools and environment can handle the new build requirements and dependencies. If your exported template uses any functions or objects that are in an unnamed namespace or are file static, consider obfuscating their names. This is a pity, because the unnamed namespace and file static are supposed to protect you from conflicts so you don't have to obfuscate the names, but if you use export you can too easily and silently lose this protection, and should obfuscate them again.

Do understand that this is not a complete list, and that you will probably encounter some other issues beyond the ones we already know about from today's normal template uses. As Spicer put it, it's too early to tell whether the "avoid export" guideline will turn into permanent advice. Time and experimentation will tell. As vendors slowly begin to adopt and support export in the coming years and the community gets a chance to finally try it out, we'll know much more about how and when to use it, or not.

In this section, we continue to build on that material by turning our attention to some specific exception-related language features. We begin by answering some perennial questions: Is exception safety all about writing try and catch in the right places? If not, then what? And what kinds of things should you consider when developing an exception safety policy for your software?

Delving beyond that, it's worth spending an entire Item to lay out reasons why writing exception-safe code is, well, just plain good for you, because doing that promotes programming styles that lead to more robust and more maintainable code in general, quite apart from their benefits in the presence of exceptions.

But there is a limit to goodness and to "if some is good, then more is better" thinking, and that limit is hit well and hard when we get to exception specifications: Why are they in the language? Why are they well motivated in principle? And why, despite all that, should you stop using them in your programs?

This and more, as we dip our cups and drink again from the font of today's most current exceptional community wisdom.

Try and Catch Me

Difficulty: What is a try-block? When should try and catch be used? When should they not be used?

Express the answer as a good coding standard guideline. A try-block is a block of code (a compound statement) whose execution will be attempted, followed by a series of one or more handler blocks that can be entered to catch an exception of the appropriate type if one is emitted from the attempted code.

There's More to Life Than Playing catch

Put bluntly, a statement such as "exception safety is all about writing try and catch in the right places" reflects a fundamental misunderstanding of exception safety.
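A minimal illustration of the try-block definition above; the helper function is hypothetical. The compound statement after `try` is attempted, and the handler is entered only if an exception of the matching type is thrown from it.

```cpp
#include <cassert>
#include <stdexcept>
#include <string>

// Attempt a parse; enter the handler only if std::stoi reports failure
// by throwing std::invalid_argument.
inline std::string parse_or_default(const std::string& text, int& out) {
    try {
        out = std::stoi(text);      // may throw std::invalid_argument
        return "ok";
    } catch (const std::invalid_argument&) {
        out = 0;                    // handler: recover with a default
        return "not a number";
    }
}
```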

Exceptions are just another form of error reporting, and we certainly know that writing error-safe code is not just about where to check return codes and handle error conditions. Actually, it turns out that exception safety is rarely about writing try and catch, and the more rarely the better. Also, never forget that exception safety affects a piece of code's design; it is never just an afterthought that can be retrofitted with a few extra catch statements, as if for seasoning.

There are three major considerations when writing exception-safe code. The first: Where and when should I throw? This consideration is about writing throw in the right places. In particular, we need to answer: What errors will we choose to report by throwing an exception instead of by returning a failure value or using some other method?

In particular, what code should provide the no-fail guarantee? (See Item 12 and [Sutter99].) The second: Where and when should I handle an exception? This is the only consideration that is even in part about writing try and catch in the right places, and even this can be automated most of the time. First, consider the questions we need to answer: What code has enough context and knowledge to handle the error being reported by the exception (possibly by translating the exception into another form)?

In particular, note that the catching code also needs enough knowledge to perform any necessary cleanup, such as of dynamic resources. And, of the code that could catch the exception, which is best suited to do so? Once we've answered those questions, note that using the "resource acquisition is initialization" idiom can eliminate many try-blocks by automating the cleanup work. If you wrap dynamically allocated resources in owner-manager objects, typically the destructor can perform automatic cleanup at the right time without any try or catch at all.

This is clearly desirable, not to mention that it's also usually easier to code now and to read later. The third: In all other places, is my code going to be safe if an exception comes roaring through out of any given function call? This consideration is about using good resource management to avoid leaks, maintaining class and program invariants, and other kinds of program correctness.
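The RAII point above can be sketched as follows; `Widget` and `use_widget` are hypothetical. The owning smart pointer's destructor releases the resource on every path out of the function, exceptional or not, so no try or catch is needed here at all.

```cpp
#include <cassert>
#include <memory>

struct Widget { int id; };

// Without RAII, every early exit would need explicit cleanup (or a
// try/catch written just to release the resource). With an owning
// smart pointer, the destructor runs on every path automatically.
inline int use_widget(bool fail) {
    auto w = std::make_unique<Widget>(Widget{7});
    if (fail)
        throw 1;        // w is still destroyed during unwinding: no leak
    return w->id;       // w is destroyed on normal return, too
}
```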

Put another way, it's about keeping the program from blowing up just because an exception happens to pass from its throw site through code that shouldn't have to particularly care about it, before arriving at an appropriate handler. For most programmers I've encountered, this is typically by far the most time-consuming and difficult-to-learn aspect of exception safety. Notice that only one of these three considerations has anything to do with writing try and catch, and even that one can often be avoided with the judicious use of destructors to automate cleanup.

Here's one suggestion. In brief: Determine an overall error reporting and handling policy for your application or subsystem, and stick to it. In particular, the policy should cover the following basic aspects (and generally includes much more). Generally, it's good to choose the most readable and maintainable method for each case by default; for example, exceptions are most useful for constructors and operators that cannot emit return values, or where the throw site and the handler are widely separated.

Among other things, define the boundaries that exceptions shall not cross; typically these are module or API boundaries. Write throw in the places that detect an error and cannot deal with it themselves. Clearly, code that can resolve an error immediately doesn't need to report it!

For every operation, document what exceptions the operation might throw, and why, as part of the documentation for every function and module.

You don't need to actually write an exception specification on each function (and you shouldn't; see Item 13), but you do need to document clearly and rigorously what the caller can expect, because error semantics are part of the function's or module's interface. Write try and catch in the places that have sufficient knowledge to handle the error, to translate it, or to enforce boundaries defined in the error policy.

In particular, I've found that there are three main reasons to write try and catch. The first is to handle an error. This is the simple case: An error happened, we know what to do about it, and we do it. Life goes on, sans the original exception, which has been safely put to rest.

The second is to translate an exception. This means catching one exception that reports a lower-level problem and throwing another that is couched in the context of the translating code's own higher-level semantics. Alternatively, the original exception can be translated to another representation, such as an error code. For example, consider a communications session utility class that works across many host types and transport protocols: The Open function can handle such low-level conditions itself, and there's no use reporting them to the caller, who after all has no idea what a Foo packet is or what to do if it Barifies; the session class handles its internal low-level errors directly, keeps itself in a consistent state, and reports its own higher-level error or exception to inform its caller that the session could not be opened.
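A sketch of such translation, with hypothetical stand-ins for the session class's internals; none of these names come from the original text. The low-level exception is caught and rethrown as one couched in the caller's vocabulary.

```cpp
#include <cassert>
#include <stdexcept>
#include <string>

// Low-level failure, meaningful only inside the session internals.
struct PacketError : std::runtime_error {
    PacketError() : std::runtime_error("bad Foo packet") {}
};

// Higher-level failure, meaningful to the session's caller.
struct SessionError : std::runtime_error {
    explicit SessionError(const std::string& why)
        : std::runtime_error("could not open session: " + why) {}
};

inline void open_session(bool transport_ok) {
    try {
        if (!transport_ok)
            throw PacketError();       // detail the caller shouldn't see
    } catch (const PacketError& e) {
        throw SessionError(e.what());  // translate; don't leak internals
    }
}
```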

The third is to enforce a boundary that exceptions may not cross. This usually also involves translating the error, usually to an error code or other nonexceptional representation. For example, when your stack unwinds up to a C API, you have only two choices.

Guidelines

Determine an overall error reporting and handling policy for your application or subsystem, and stick to it.
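The C-API boundary case above can be sketched like this; the function and its error-code convention are hypothetical. No exception may escape into C code, so the boundary catches everything and translates it to a nonexceptional representation.

```cpp
#include <cassert>

// A function that C callers may invoke (e.g., a callback registered with
// a C library). No exception may cross this boundary, so everything is
// caught and translated into an error code. The codes are illustrative.
extern "C" int process_value(int input) {
    try {
        if (input < 0)
            throw input;   // stand-in for any internal C++ exception
        return 0;          // success
    } catch (...) {
        return -1;         // translate: no exception crosses the C API
    }
}
```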

Include a policy for error reporting, error propagation, and error handling. Write throw in the places that detect an error and cannot deal with it themselves. Write try and catch in the places that have sufficient knowledge to handle the error, to translate it, or to enforce boundaries defined in the error policy e.

Summary

A wise man once said: Lead, follow, or get the blazes out of the way! In exception safety analysis, we might say instead: Throw, catch, or get out of the way. In practice, the last, get-out-of-the-way case accounts for the bulk of exception safety analysis and testing. That's the major reason why exception-safe coding is not fundamentally about writing try and catch in the right places. Rather, it's fundamentally about getting out of the bullet's way in the right places.

This should no longer be a seriously disputed and debated point… but sometimes it still is.

Briefly define the Abrahams exception safety guarantees: basic, strong, and nofail. When is it worth it to write code that meets each one? The basic guarantee says that failed operations leak no resources and leave all objects in valid (even if unspecified) states. The strong guarantee adds that failed operations guarantee that program state is unchanged with respect to the objects operated upon.

This means no side effects that affect the objects, including the validity or contents of related helper objects such as iterators pointing into containers being manipulated. Finally, the nofail guarantee says that failure simply will not be allowed to happen.
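One common way to provide the strong guarantee is "do all throwing work off to the side, then commit with nofail operations," often called copy-and-swap. This is a minimal sketch; the class and names are hypothetical. If anything throws, the object is left exactly as it was.

```cpp
#include <cassert>
#include <cstddef>
#include <stdexcept>
#include <vector>

class IntBag {
    std::vector<int> data_;
public:
    // Strong guarantee: either the replacement fully succeeds, or the
    // bag is untouched.
    void replace_all(const std::vector<int>& incoming) {
        if (incoming.empty())
            throw std::invalid_argument("empty");  // fails before any change
        std::vector<int> tmp(incoming);  // all throwing work on a copy
        data_.swap(tmp);                 // commit: swap cannot fail
    }
    std::size_t size() const { return data_.size(); }
};
```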


In terms of exceptions, the operation will not throw an exception. I have switched to calling it the nofail guarantee because these guarantees apply equally to all error handling, whether using exceptions or some other mechanism such as error codes.

When Are Stronger Guarantees Worthwhile?

It is always worth it to write code that meets at least one of these guarantees.

There are several good reasons. First, exceptions happen. To paraphrase a popular saying:

More Exceptional C++

They just do. The standard library emits them. The language emits them. We have to code for them. Fortunately, it's not that big a deal, because we now know how to do it. It does require adopting a few habits and following them diligently, but then so did learning to program with error codes. The big thorny problem is, as it ever was, the general issue of error handling. The choice of how to report errors, using return codes or exceptions, is almost entirely a syntactic detail; the main differences are in the semantics of how the reporting is done, so each approach requires its own style.

Writing exception-safe code is good for you. Exception-safe code and good code go hand in hand. The same techniques that have been popularized to help us write exception-safe code are, pretty much without exception, things we usually ought to be doing anyway.

That is, exception-safety techniques are good for your code in and of themselves, even if exception safety weren't a consideration. To see this in action, consider the major techniques I and others have written about to make exception safety easier; it should come as no surprise that among their many benefits we should also find exception safety.

How many times have you seen a function (here we're talking about someone else's function, of course, not something you wrote) where one of the code branches that leads to an early return fails to do some cleanup, because cleanup wasn't being managed automatically using RAII? Such transactional programming is clearer, cleaner, and safer even with error codes.
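One way to automate cleanup on every early exit, exceptional or not, is a scope guard that registers rollback work up front and is dismissed only on commit. This is a minimal sketch under those assumptions, not a production utility; all names are hypothetical.

```cpp
#include <cassert>
#include <vector>

// Runs the registered rollback action on destruction unless dismissed.
template <typename F>
class ScopeGuard {
    F rollback_;
    bool active_;
public:
    explicit ScopeGuard(F f) : rollback_(f), active_(true) {}
    ~ScopeGuard() { if (active_) rollback_(); }
    void dismiss() { active_ = false; }  // call once the transaction commits
};

// Append two elements transactionally: either both land, or neither does.
inline bool append_two(std::vector<int>& v, int a, int b, bool fail_midway) {
    v.push_back(a);
    auto undo = [&v] { v.pop_back(); };
    ScopeGuard<decltype(undo)> guard(undo);  // rolls back the first push
    if (fail_midway)
        return false;      // guard's destructor undoes the first push
    v.push_back(b);
    guard.dismiss();       // both pushes succeeded: commit
    return true;
}
```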

Exception safety issues and techniques. Class design and inheritance. Compiler firewalls and the Pimpl Idiom. Name lookup, namespaces, and the Interface Principle. Memory management. Traps, pitfalls, and anti-idioms. Miscellaneous topics. The following information has been compiled by the author. Here, in approximate reverse chronological order, are all the reviews I know about.

If you know of other reviews not yet listed here, please tell me about them. The current reviews on Amazon are a good place to see what real-world developers are saying about the book. Thanks, everyone! One reviewer writes in part: "I don't see how a project attempting to program in the presence of exceptions can proceed without at least being familiar with this material."

I can't imagine anyone reading this book and not learning something new.

I know I did. The review in Dr. Bob's Programming Book Reviews concludes with the kind words: In almost all items there is good advice to be found. This makes the text engaging, interesting, and enjoyable to read.
