Blog Archive
ACCU 2023: presentation and free books
Friday, 14 April 2023
The ACCU 2023 conference is next week, running from 19th-22nd April 2023, in Bristol, UK.
My presentation
This year I will be presenting "Designing for concurrency using message passing" on 22nd April. The abstract is:
One common way to design concurrent code with less potential for synchronization and other concurrency errors is to use message passing. In this talk, I will walk through the design of some example code, showing how to build such a system in practice.
This talk will include a brief description of what message passing frameworks are, and how they can help avoid concurrency errors.
I will then work through some examples, showing how the tasks are divided between the elements, and how the system can therefore utilise concurrency, while insulating the application code from the details of synchronization.
Free books
I have recently been through my bookshelves and found old books I don't want any more. If you want any of them, let me know, and I'll bring them along. These are all free to a good home:
- C++ Templates: The Complete Guide, First Edition by Daveed Vandevoorde and Nicolai Josuttis (Addison Wesley)
- The C++ Standard Library Extensions by Pete Becker (Addison Wesley)
- Modern C++ Design by Andrei Alexandrescu (Addison Wesley)
- C++ Primer 2nd edition by Stan Lippman (Addison Wesley)
- Learning XML by Erik T. Ray (O'Reilly)
- Unit Test Frameworks by Hamill (O'Reilly)
- Extreme Programming Adventures in C# by Ron Jeffries (Microsoft)
- Generative Programming by Czarnecki and Eisenecker (Addison Wesley)
- Find the Bug by Barr (Addison Wesley)
- DOS Programmer's Reference 3rd edition (MSDOS 5) by Dettman and Johnson (Que)
- Borland Turbo Assembler Quick Reference guide
- Oracle: The Complete Reference (for Oracle 7.3)
- Lloyds TSB Small business guide 2005
- Guide to Wave Division Multiplexing Technology
C++ Concurrency in Action, First Edition
I still have a small number of copies of the first edition of my book. If anyone wants these, I'll be selling them for £5 each. Let me know if you want one of these.
C++ Concurrency in Action, Second Edition
I'll also be bringing some copies of the second edition of my book. These will be for sale for £25 each. Let me know if you'd like one of these.
I look forward to seeing you there!
Posted by Anthony Williams
[/ news /] permanent link
Tags: C++, accu, ccia, book
Stumble It! | Submit to Reddit | Submit to DZone
If you liked this post, why not subscribe to the RSS feed or Follow me on Twitter? You can also subscribe to this blog by email using the form on the left.
Review of Embracing Modern C++ Safely by John Lakos, Vittorio Romeo, Rostislav Khlebnikov and Alisdair Meredith
Wednesday, 01 March 2023
Verdict: Conditionally Recommended (3/5)
This is a huge book, over 1300 pages, and contains a wealth of information about all the language (not library) facilities added in C++11 and C++14. It has one "section" per language feature, and the sections are largely independent of each other, so they can be read in almost any order, and used as a reference. This is mostly a useful structuring, except for the few cases where there are closely related changes in both C++11 and C++14, and these are split into separate sections that aren't even necessarily consecutive. Though there are already extensive cross-references between the chapters, the book would benefit from more cross-references in such cases.
The word "section" is quoted above because they are what might be called "chapters" in other books. As it stands, the book has 4 "chapters", with "chapter 0" being the introduction, and the remaining 3 being what might be called "parts" in other books, and covering "Safe Features", "Conditionally Safe Features" and "Unsafe Features" respectively.
This use of the term "safe" is my biggest gripe with this book; much like a book on chainsaws might be titled "how to use chainsaws safely", the title makes the book seem as if it is going to be a tutorial on how to use modern C++ features without cutting your leg off. However, that is NOT the way that the authors use the word. They are using it with regard to the "business risk" of adopting a specific language feature into a codebase that is mostly C++03 without providing explicit training to all the developers. Consequently the list of "safe features" is relatively small, and most features end up in the "conditionally safe" chapter, meaning that developers might need some training to use those features, rather than them being clear to developers only experienced with C++03. The authors explicitly call out that this "safety" axis is the one area where they intentionally deviate from their "Facts, Not Opinions" intent, and it is the worst aspect of the book. As a general rule, developers should not be using language constructs they don't understand, so if a company wants to upgrade to C++11/C++14 from C++03 then that company should provide appropriate training.
Consequently, I recommend that readers disregard the authors' "safety" classification of the language features, and instead read the "Use Cases", "Potential Pitfalls" and "Annoyances" sections for each language feature and make up their own mind. It is particularly frustrating, since features that by and large make code clearer and less error-prone (such as lambdas, enum classes and range-for) end up being marked "conditionally safe" because they require training.
In the section on "Generalized PODs", the book covers what it means for objects to be "trivial", and to be "trivially destructible", which are language constructs with specific meanings and consequences. The authors then go on to define their own term "notionally trivially destructible" to mean "those objects where the destructor has no code that affects program logic" (e.g. a destructor that only logs), and which is thus safe to omit if you are directly controlling object lifetime (which should be a rare scenario anyway). This is not a language construct, and has no meaning to the compiler, but the similarity to the standard terms is too close and could easily lead to confusion. The inclusion of this in the book actually has the potential to decrease the safety of a codebase, by encouraging developers to do things that might accidentally introduce undefined behaviour.
This book only covers the language features, not the library features. Since a big part of the improvements from using C++11 and C++14 comes from using the new library features (std::unique_ptr vs std::auto_ptr, for example), this is a real let down, but given that the book is already over 1300 pages, I can see why the authors decided to leave it for another time.
Finally, this is a large, heavy book, which makes it uncomfortable to hold for any length of time, leading to contorted positions when reading. It is also a paperback with very thin paper, so the print on the reverse side of the page shows through, and the pale-grey font used for comments in the example listings is almost unreadable in many lighting conditions. This is particularly problematic, since much of the information in the examples comes from the descriptions of what each line means in the comment on that line. The e-book editions might thus be an improvement, though I haven't checked myself.
These gripes aside, there is a lot of useful information in this book. Each section covers a single language feature, how it differs from closely-related features in C++03, the precise details of syntax and use, intended use cases and potential pitfalls. If you want to know "how does this language feature work and what might go wrong", then reading the relevant section will give you a really useful set of information. You might still want to get training for the more complex features, but the sections actually contain enough information to get started, with copious examples.
In conclusion: this book is conditionally recommended. There is plenty of useful information there, but the presentation (both physically, and organizationally) is problematic.
Buy this book
Posted by Anthony Williams
[/ reviews /] permanent link
Tags: review, book, cplusplus
2-day More Concurrent Thinking class at CppCon 2022
Friday, 26 August 2022
I am excited to be going to CppCon again this year, where I will be running a 2-day class: More Concurrent Thinking in C++: Beyond the Basics.
The class is onsite at the conference venue in Aurora, Colorado, USA, on Saturday 10th September 2022 and Sunday 11th September 2022, immediately before the main conference.
I'll be teaching you to think about the synchronization properties of the constructs you use, and how to work with the low level facilities provided by the C++20 standard to build high level abstractions. We'll look at actors, thread pools and executors, and how to design code that works with those abstractions.
We'll look at how to use low-level atomic operations to build lock-free data structures, and the issues that can arise when doing so. We'll also look at how scalability concerns can impact your design choices.
Finally, we'll look at the important issue of testing multithreaded code, and the tools we can use to help us find the causes of problems.
If this sounds interesting, sign up for the class when you register for the main conference.
I'll also be doing a presentation on Tuesday: An Introduction to Multithreading in C++20. This is a whistle-stop tour of the multithreading features of C++20, briefly covering each feature and what it is used for, and which you should choose by default.
Whether you come to my class and presentation or not, I hope to see you there!
Posted by Anthony Williams
[/ news /] permanent link
Tags: cppcon, concurrency, workshop, class
Online Concurrency Workshop at C++ on Sea 2021
Wednesday, 09 June 2021
The restrictions brought upon us by COVID-19 are not over yet, and C++ on Sea is the latest conference that will be running as an online-only conference.
I will be running my More Concurrent Thinking class as an online workshop for C++ on Sea on 30th June and 1st July 2021.
The workshop will run from 09:30 UTC to 18:15 UTC each day. For attendees from North and South America, this is likely quite an early morning, and may be a late night for attendees from the far East, so please check the times in your local timezone.
Tickets include the full day of "normal" conference presentations on 2nd July 2021. Get yours from the C++ On Sea tickets page.
I hope to see you there!
Posted by Anthony Williams
[/ news /] permanent link
Tags: C++, concurrency, classes, workshops
Using atomics for thread synchronization in C++
Monday, 19 April 2021
In my previous blog post I wrote about spin locks, and how compilers must not move the locking loop above a prior unlock.
After thinking about this some more, I realised that this is not something specific to locks — the same issue arises with any two-step synchronization between threads.
Consider the following code:
std::atomic<bool> ready1{false};
std::atomic<bool> ready2{false};
void thread1(){
ready1.store(true, std::memory_order_release);
while(!ready2.load(std::memory_order_acquire)){}
}
void thread2() {
while(!ready1.load(std::memory_order_acquire)) {}
ready2.store(true, std::memory_order_release);
}
thread1 sets ready1 to true, then waits for thread2 to set ready2 to true. Meanwhile, thread2 waits for ready1 to be true, then sets ready2 to true.
This is almost identical to the unlock/lock case from the previous blog post, except the waiting thread is just using a plain load rather than an exchange.
If the compiler moves the wait loop in thread1 above the store then both threads will hang forever. However, it cannot do this, for the same reason the spinlocks can't deadlock in the previous post: the store has to be visible to the other thread in a finite period of time, so it must be issued before the wait loop. https://eel.is/c++draft/intro.multithread#intro.progress-18
An implementation should ensure that the last value (in modification order) assigned by an atomic or synchronization operation will become visible to all other threads in a finite period of time.
If the optimizer moved the store across the loop in thread1, then it could not guarantee that the value became visible to the other thread in a finite period of time. Therefore such an optimization is forbidden.
Posted by Anthony Williams
[/ cplusplus /] permanent link
Tags: cplusplus, atomics, multithreading, synchronization
Can non-overlapping spinlocks deadlock in C++?
Thursday, 15 April 2021
There has been discussion on Twitter recently about whether or not the C++ memory model allows spinlocks to deadlock if they just use memory_order_acquire in lock and memory_order_release in unlock, due to compiler optimizations. The case in question is where a thread locks one mutex, unlocks it, and locks a second: can the compiler reorder the second lock above the first unlock? If it does, and another thread does the same in the reverse order, with the same optimization, then sequential locks could deadlock.
Here is the code in question, with all the lock/unlock code inlined.
std::atomic<bool> mutex1{false};
std::atomic<bool> mutex2{false};
int x=0;
int y=0;
void thread1(){
while(mutex1.exchange(true,std::memory_order_acquire)){} // #1
x=1;
mutex1.store(false,std::memory_order_release); // #2
while(mutex2.exchange(true,std::memory_order_acquire)){} // #3
y=1;
mutex2.store(false,std::memory_order_release); // #4
}
void thread2(){
while(mutex2.exchange(true,std::memory_order_acquire)){} // #5
x=2;
mutex2.store(false,std::memory_order_release); // #6
while(mutex1.exchange(true,std::memory_order_acquire)){} // #7
y=2;
mutex1.store(false,std::memory_order_release); // #8
}
For there to even be the possibility of deadlock, thread1 must successfully execute line #1 before thread2 successfully executes line #7, and thread2 must successfully execute line #5 before thread1 successfully executes line #3. Because these are RMW operations, the threads must agree on the ordering. The modification order of mutex1 must thus be #1(success), #2, #7(success), #8. Similarly, the modification order of mutex2 must be #5(success), #6, #3(success), #4.
All threads must agree on these modification orders. https://eel.is/c++draft/intro.multithread#intro.races-4
From the point of view of thread1, everything must run in program order: compilers can only optimize things as long as they run "as if" in program order. The store to mutex1 at #2 is guaranteed to be visible to thread2 in "a finite period of time". https://eel.is/c++draft/intro.multithread#intro.progress-18 Consequently, thread2 must eventually see that store at line #7, even if it executes line #7 a large number of times first.
Therefore, the compiler cannot move line #3 completely above line #2, since doing so would not guarantee the visibility of #2 to other threads in a finite period of time. It can move an arbitrary number of executions of line #3 above line #2 (all of which will see that mutex2 is still true), but not all the executions of line #3.
Given that thread2 eventually sees the store from #2 at line #7, the exchange at line #7 will eventually succeed, and thread2 will eventually complete. Likewise, the store at #6 must become visible to thread1 in a finite period of time. Therefore the exchange at line #3 will eventually see the value stored by #6, the exchange will succeed, and thread1 will complete; likewise, the compiler is not allowed to move all the executions of line #7 above #6.
No amount of compiler optimization is allowed to break this, so no: spinlocks cannot deadlock if they don't overlap.
Posted by Anthony Williams
[/ cplusplus /] permanent link
Tags: cplusplus, atomics, multithreading, spinlocks
Ticket Maps
Saturday, 20 March 2021
I've encountered an increasingly common scenario where you have some ID that's monotonically increasing, such as a subscription or connection index, or a user ID, and you need your C++ program to hold some data associated with that ID value. The program can then pass the ID around, and use it to access the associated data at a later point.
Over time, IDs can become invalidated, so the data associated with that value is removed, and you end up with a sparse set of currently-active IDs. You would therefore naturally lean towards using a map (whether a std::map, std::unordered_map or some other custom map) to associate the data with the ID.
Often such maps are implemented as node-based containers, which means that the nodes can be allocated at disparate memory addresses, which is bad for cache performance. Adding and removing nodes also always requires memory allocation/deallocation.
In his "Better Code: Relationships" presentation, Sean Parent describes an alternative implementation, which he calls the "Russian Coat-Check Algorithm". In this algorithm, the map is implemented as a vector of pairs of key/optional data. Because the keys come from a monotonically increasing index, the vector is always sorted, and inserts are always at the end. Entries can be removed by clearing the data, and if there are too many empty entries then the vector can be compacted. Lookups are always fast, because the vector is always sorted, so a simple binary search will find the right element.
Inspired by watching Sean's presentation at ACCU 2021 last week, I implemented what I call a Ticket Map (it maps a Ticket to a Value). This is an implementation of the algorithm Sean described, fleshed out to a full container. When you insert a value, it is assigned the next available ticket value. You can later access or erase the value using that ticket.
#include <string>
#include <iostream>
#include "ticket_map.hpp"
int main(){
jss::ticket_map<int,std::string> map;
auto ticket1=map.insert("hello");
auto ticket2=map.insert("world");
std::cout<<map[ticket1]<<" "<<map[ticket2]<<std::endl;
map.erase(ticket1);
}
You can of course iterate through the container: in this case the iterators are Input Iterators, where the value_type is a std::pair<Ticket const&,Value&>. This allows you to access both the tickets and the raw elements, but also allows the iterator to provide a nice view over the data without exposing the std::optional implementation detail.
#include <string>
#include <iostream>
#include "ticket_map.hpp"
int main(){
jss::ticket_map<int,std::string> map;
auto ticket1=map.insert("hello");
auto ticket2=map.insert("world");
auto ticket3=map.insert("goodbye");
for(auto& [ticket,value]: map){
std::cout<<ticket<<": "<<value<<std::endl;
}
}
The code is available on GitHub under the Boost Software License.
Enjoy!
Posted by Anthony Williams
[/ cplusplus /] permanent link
Tags: cplusplus, maps, containers
Online Concurrency Classes
Tuesday, 18 August 2020
With all the restrictions brought upon us by COVID-19, many C++ conferences are moving online.
This includes Cppcon and NDC Tech Town, both of which are being run as 100% virtual conferences this year.
I will be running my More Concurrent Thinking class as an online class for both conferences.
The class at NDC Tech Town will be a 2-day class running on 31st August 2020 and 1st September 2020, 9am - 5pm CEST, which is 0700 - 1500 UTC.
The class at Cppcon will be a 3-day class running on 9th September 2020 to 11th September 2020, 11am - 5pm EDT, which is 1500 - 2100 UTC.
The content is the same in both cases, though the timings are clearly different. Hopefully one or other fits in with your local timezone.
I hope to see you there!
Posted by Anthony Williams
[/ news /] permanent link
Tags: C++, concurrency, classes, workshops
Invariants and Preconditions
Thursday, 05 March 2020
I tend to think about invariants and preconditions a lot. Pretty much every class has invariants, and most functions have preconditions. I don't think they are complicated concepts, but somehow they seem to confuse people anyway, so I decided it was time to write down some of my thoughts.
Invariants
A class invariant is something that is always true for every instance of that class. Users of objects of that class never see an object for which the invariant is not true; it is true for all states of all objects of that class at all times. This is what makes it invariant.
In order to perform the required operations, class member functions may temporarily break invariants, and then restore them afterwards. Unless you allow concurrent invocations of member functions on the same object, or deliberately pass the object to another function from inside a member function, this is OK, as the outside world will not be able to interact with the object when it is in such a state. It does mean you need to be especially careful when calling other functions from inside a member function, to ensure that the invariants still hold.
Invariants are important. If you are writing a function that operates on an object, the only thing that is guaranteed to be true about it is that the class invariants hold, unless you choose to impose additional preconditions.
Preconditions
Preconditions are conditions that must be true when a function is called. Every operation on a class has the implicit precondition that the class invariants hold, but operations can add additional preconditions. For example, you can always call v.empty() to check if a vector is empty, but you may only call v.front() to retrieve the first element if the vector is not empty. v[i] and v.at(i) differ only in their preconditions; v[i] requires the vector to have more than i elements, whereas v.at(i) is specified to work for any vector, throwing if i is out of range.
Some people like to write classes with two-stage construction — first you default-construct the object, then you call x.init() or similar to "finish off" the initialization, and you can't call any other function on that object until after the initialization is complete. Though I don't like this pattern, it is valid code; every member function has a precondition that x.init() has been called.
Some people like to say that such a class can have "invariants" that do not hold until after initialization is complete. I think this is a misuse of the term; invariants must always hold for every object of a class, outside the internals of its member functions. As I just stated above, what such a class really has is a precondition on every function, not an invariant.
If I write a function
void do_stuff(X& x){
// my code here
}
then I can rely on the invariants of X holding, but cannot rely on anything else unless I add preconditions to do_stuff.
Likewise, it should not be possible to do anything to an object that breaks its invariants. If you can, then either you have a bug, or they are not really invariants, just preconditions for most operations.
Some people like to write classes that have operations to transfer the internals from one object to another, leaving the source in a special "emptier than empty" state. This can be good for efficiency, especially when doing otherwise would require allocating resources to return the source to the plain "empty" state, and it is likely that the source will be destroyed immediately anyway.
Again, this is a perfectly reasonable thing to do, particularly when the resources are expensive to allocate and performance of the code is important. The problem comes when people then want to say that the class invariants don't hold for this state. Just as with the pre-initialization state above, I think this is a misuse of the term; they are not invariants, it is just that most operations have a precondition that the object is not "emptier than empty".
Move semantics
Since C++11, C++ has had the concept of "moving" objects, transferring resources from the source to the destination. This is an important facility for expressiveness of code and efficiency. Objects that own resources, such as std::unique_ptr, std::thread or std::fstream, can now be moved, transferring ownership of the resource from the source to the destination. This allows such objects to be put in containers, and transferred between scopes, which can greatly simplify code. Likewise, containers like std::vector can be moved, transferring the entire set of contained objects from one vector to another without reallocation. These facilities are of great importance for writing clean, efficient code.
One thing all these operations have in common is that after a move, the class invariants still hold for the source object. This should go without saying: the source is still an object of its type, so of course the invariants hold.
The issue is with things like std::list, where some implementations allocate a sentinel node for the head/tail of the list in the default constructor, which is thus transferred during move operations, so the move constructor has to allocate a new sentinel node for the source, in order to ensure the class invariants still hold. A similar thing occurs with anything that uses the pimpl idiom: if the implementation pointer is transferred then either the class invariants must allow for there to be a null implementation pointer, or a new implementation object must be allocated.
Some people therefore argue that move operations should allow the source to be left with broken invariants, as a hollow shell of an object, in an "emptier than empty" state. I think this is misuse of the term "invariants". By all means, steal the internals and leave the source with a null pointer to the internals, or whatever. However, you now need to change the invariants of your class to allow this, or they are not invariants, because they would no longer apply to all objects of that class. There is no need to update any of the functions on your class to handle this new state, but if you do not do so, then any functions that don't work for objects in this "emptier than empty" state now have an additional precondition that the object is not in that state.
This is all perfectly fine: objects with such states are plentiful, and have legitimate reasons for existing, but it is important to accept that their functions now have preconditions. My do_stuff function above can only call member functions that are safe to use in such a state, unless it too has a precondition that the object x is not in such a state.
From a user perspective, it would be nice to be able to query such a state, so I could know what operations are permitted. For example, std::future provides a member function valid, so you can call f.valid() before trying to do anything with it, and std::thread provides the joinable member function, which can be used to see if the object holds a thread handle. My do_stuff function can then call x.is_emptier_than_empty() in order to check for the special state and take appropriate action. That said, this is a "nice to have": the absence of such a function doesn't mean that the state can't exist, just that it's potentially harder to deal with.
Interaction with other code
If you pass an object to a function or class, then you need to know what that function requires of your object and ensure that those expectations are met. If the function expects your object to be a container with at least one element and you pass an empty container, then you've broken the expectations and your code has a bug. Likewise, if the function expects to be able to call x.foo() on your object, but foo cannot be called on an "emptier than empty" object, and you passed such an object to the function, your code has a bug.
The difficulty comes when such a state arises as a consequence of other actions. If x.foo() can leave x in an "emptier than empty" state, then do_stuff had better be prepared to handle that scenario, otherwise there is a bug somewhere. Where the bug is depends on the documentation: if do_stuff requires that you can always perform certain actions on the object it is passed as a parameter, and those actions require that the object is not "emptier than empty", then the bug is in the caller, or maybe the implementation of class X. If there is no such documentation, the bug is in do_stuff.
Note that requiring that x.foo() can only be called with objects that are not "emptier than empty" is a precondition. It should be perfectly acceptable for do_stuff to call any functions on x that do not have preconditions, even if x ends up in an "emptier than empty" state.
This is exactly the same scenario we get with other code. For example, if you pass a vector to a function that requires that the vector isn't empty, the function can erase the last element without a problem. If it wishes to erase the last element again, thus removing two elements in total, then it needs to check for and handle the case that the first erasure left the vector empty. Calling v.clear() or v.empty() or v.push_back(y) would be fine without checking, as those functions have no preconditions.
If x.foo() is instead spelled y=std::move(x), then nothing changes: it is perfectly fine for x to end up in an "emptier than empty" state if do_stuff knows how to handle such a state, or doesn't care, because it doesn't touch x again.
One option is for do_stuff to say that after a move-assignment like this, it can't rely on x having any particular value, but it is still an instance of class X, so its invariants must hold, and therefore any operation without a precondition can be used. It can therefore call x.is_emptier_than_empty() and behave appropriately.
The other option is for do_stuff to place stronger requirements on x, such as requiring that it does not end up "emptier than empty" after a move, or even requiring that it has a specific state.
Valid but unspecified states
The standard library has chosen option 1: after move, objects must be valid - i.e. still actually be objects of the expected type, and not destroyed - but they have an unspecified state, so the function cannot rely on any specific value, and can therefore only perform operations without preconditions, until it has verified that the preconditions for other operations hold.
This holds both for standard library types such as std::string or std::vector<int>, and also for user-defined types that you use with the standard library. If you write
std::string s="hello";
std::string s2=std::move(s);
then s was the source of a move operation, and thus is in a valid, but unspecified state. For implementations that use the Small String Optimization, such that short strings are stored entirely within the string object itself, without requiring dynamic allocation, this might mean that s still has the value hello after the move, because then it doesn't have to store an empty string value in s as well as copying the contents to s2. It is also possible that the implementation might clear s, for consistency with longer strings. Both are valid choices, as in either case s is still a std::string object, and the exact details of the state are unspecified.
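A minimal sketch of how calling code can cope with this: after the move, only use operations without preconditions (such as clear() or assignment) until s has a known value again. reset_after_move is my illustrative helper name:

```cpp
#include <string>
#include <utility>

// After the move, s is valid but unspecified: it might still be "hello"
// (with the Small String Optimization) or it might be empty. Either way,
// clear() has no precondition, so it safely puts s in a known state.
std::string reset_after_move(std::string& s) {
    std::string s2 = std::move(s);  // s: valid but unspecified
    s.clear();                      // no precondition: s is now known-empty
    return s2;                      // s2 holds the original value
}
```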
Likewise, if you write
std::vector<MyWidget> v=create_some_widgets();
v.insert(v.begin(),MyWidget());
then that call to v.insert is going to have to move the widgets around in order to make room at the beginning for the extra one. The library requires that moving widgets leaves them in a valid, but unspecified, state. In this case, that means that having moved a widget from one position to another, it is OK to move another widget on top of the first one, as that is the only operation the vector will perform anyway. If you pass your object to a standard library function that does something other than move things around (such as std::sort or std::remove_if), then you need to check that the other operations the function might do can still be done on a moved-from object. By calling the library function you are stating that your objects meet (and will continue to meet) any preconditions you have imposed on the operations that the library function specification says it might perform.
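For example, std::sort may compare elements that hold moved-from values, so a sortable type's operator< must have no precondition that excludes the moved-from state. A hypothetical widget (my illustration) with a std::string member satisfies this, since comparing a moved-from string is fine whatever its unspecified value:

```cpp
#include <algorithm>
#include <string>
#include <vector>

// Hypothetical widget type: its operator< is safe to call even on
// moved-from widgets, because std::string comparison has no precondition
// that excludes the moved-from (valid but unspecified) state.
struct MyWidget {
    std::string name;
    friend bool operator<(MyWidget const& a, MyWidget const& b) {
        return a.name < b.name;  // fine even on moved-from widgets
    }
};
```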
Invariants and Concurrency
Right back at the beginning of this article I stated that "Users of objects of that class never see an object for which the invariant is not true", but also that "class member functions may temporarily break invariants, and then restore them afterwards". These two things don't work together very well if an object can be accessed concurrently from multiple threads, and this is one aspect that makes writing concurrent code hard, especially if you try to use fine-grained locks or atomic operations.
Consider a simple class Names which holds two vectors, one for first names and one for last names:
class Names{
std::vector<std::string> firstNames,lastNames;
};
I want the invariant of this class to be that the number of elements in firstNames and lastNames is always the same, so that I can add operations to this class knowing that is always true. There is a strong argument that this ought to be a std::vector<std::pair<std::string,std::string>> rather than two vectors, but assume that the class author has a legitimate reason for the separate vectors.
At first glance, a member function to add an entry is quite simple:
void Names::addEntry(std::string const& first,std::string const& last){
firstNames.push_back(first);
lastNames.push_back(last);
}
However, even for the single-threaded case, this isn't good: if lastNames.push_back(last) throws an exception, then our invariant is broken, as we successfully added an element to firstNames but not to lastNames. We therefore need to handle that case:
void Names::addEntry(std::string const& first,std::string const& last){
firstNames.push_back(first);
try{
lastNames.push_back(last);
} catch(...){
firstNames.resize(lastNames.size());
throw;
}
}
Now, if lastNames.push_back(last) throws, then std::vector guarantees that lastNames is unchanged, so we can ensure our invariant holds again by shrinking firstNames back to the same size, and the single-threaded case is now sorted. If our Names object is only ever accessed by a single thread, then the invariant holds at all times outside the internals of a member function.
What about the concurrent case? If we call Names::addEntry from multiple threads, then everything is broken: std::vector is not safe for concurrent access from multiple threads, so we have data races and undefined behaviour. Using a ThreadSafeVector class instead, which provides the operations we need but is safe for concurrent access, removes these data races and the undefined behaviour, but doesn't fix the overall problem. Thread safety is not composable. In this case, the invariants are still broken during the call to Names::addEntry, so a concurrent call will see an object with broken invariants: the thread safety of the two vectors doesn't matter if the second thread can see firstNames and lastNames with different sizes.
We can fix the problem by using a mutex: the mutex lock prevents the second thread from accessing the internals of the object until the first thread has finished, so the second thread cannot see the state with a broken invariant. It also allows us to revert to std::vector rather than ThreadSafeVector, since the individual vectors are only ever accessed by one thread at a time:
class Names{
std::mutex mutex;
std::vector<std::string> firstNames,lastNames;
public:
void addEntry(std::string const& first,std::string const& last){
std::lock_guard guard(mutex);
firstNames.push_back(first);
try{
lastNames.push_back(last);
} catch(...){
firstNames.resize(lastNames.size());
throw;
}
}
};
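To see the class in action, here is the same code with a size() accessor added (my addition, for observing the result); it takes the same lock, so a caller never sees the vectors mid-update:

```cpp
#include <cstddef>
#include <mutex>
#include <string>
#include <thread>   // for the usage example below
#include <vector>

// The Names class from the text, plus a size() accessor (my addition)
// that takes the same lock, so callers never observe a broken invariant.
class Names {
    std::mutex mutex;
    std::vector<std::string> firstNames, lastNames;
public:
    void addEntry(std::string const& first, std::string const& last) {
        std::lock_guard guard(mutex);
        firstNames.push_back(first);
        try {
            lastNames.push_back(last);
        } catch (...) {
            firstNames.resize(lastNames.size());
            throw;
        }
    }
    std::size_t size() {
        std::lock_guard guard(mutex);
        return firstNames.size();  // invariant: == lastNames.size()
    }
};
```

With two threads each calling addEntry N times, the final size is 2N, and the invariant holds at every point that another thread can observe.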
This is one of the big reasons why lock-free data structures are so hard to write: we cannot just stick a mutex lock on the outside to ensure other threads never see broken invariants. Instead, we must ensure that the invariants hold at all times, even during the operation of member functions. For this reason, lock-free data structures are often designed so that as much as possible is done off to the side, without modifying the core data structures, and then either the entire change is committed with a single atomic operation, or care is taken to ensure that the invariants still hold after each incremental step.
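The "do as much as possible off to the side, then commit with a single atomic operation" pattern can be sketched with a Treiber-style lock-free stack push (my sketch, not from the text): the new node is fully constructed before it becomes visible, and the single compare-exchange is the commit point, so other threads never see a half-linked structure. A production version would also have to solve reclamation problems such as ABA, which this sketch ignores:

```cpp
#include <atomic>
#include <utility>

// Sketch of a Treiber-style lock-free stack. push() prepares the new node
// off to the side, then commits it with one atomic compare_exchange, so
// the list invariants hold at every instant another thread can observe.
template <typename T>
class LockFreeStack {
    struct Node { T value; Node* next; };
    std::atomic<Node*> head{nullptr};
public:
    void push(T value) {
        Node* node = new Node{std::move(value), head.load()};
        // Commit point: retry until head is swung to the new node atomically.
        while (!head.compare_exchange_weak(node->next, node)) {}
    }
    // Simplified pop: safe only when no other thread may free the node
    // concurrently (the ABA/reclamation problem is deliberately ignored).
    bool try_pop(T& out) {
        Node* node = head.load();
        while (node && !head.compare_exchange_weak(node, node->next)) {}
        if (!node) return false;
        out = std::move(node->value);
        delete node;
        return true;
    }
};
```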
End note
Invariants are important. They are vital to working with objects, and without them it is much harder to write correct code. However, you have to choose your invariants carefully: invariants that make the rest of your code easier to reason about can have a cost, so you have to be aware of the trade-offs. Like everything in programming, it is up to you what trade-off you choose: simplicity vs performance is a common one, but it can also be a choice of which operations are fast and which are slow. Whatever you choose, there will be cases where the other choice was more appropriate.
Whatever you choose, you must be honest with yourself, and with the users of your class. Don't claim something is an invariant just because it holds most of the time; it must hold all of the time. Obviously, you can make it a precondition of a function that the objects it operates on are in certain states, and then write your program logic to ensure that those conditions are met, but don't confuse the two.
Posted by Anthony Williams
[/ cplusplus /] permanent link
Tags: cplusplus, invariants, preconditions
If you liked this post, why not subscribe to the RSS feed or Follow me on Twitter? You can also subscribe to this blog by email using the form on the left.
CppCon 2019 Trip Report and Slides
Tuesday, 01 October 2019
Having been back from CppCon 2019 for over a week, I thought it was about time I wrote up my trip report.
The venue
This year, CppCon was at a new venue: the Gaylord Rockies Resort near Denver, Colorado, USA. This is a huge conference centre, currently surrounded by vast tracts of empty space, though people told me there were many plans for developing the surrounding area.
The venue was hosting multiple conferences and events alongside CppCon; it was quite amusing to emerge from the conference rooms and find oneself surrounded by people in ballgowns and fancy evening wear for an event in the nearby ballroom!
There was a choice of eating establishments, but they all had one thing in common: they were overpriced, taking advantage of the captive nature of the hotel clientele. The food was reasonably nice, though.
The size of the venue did make for a fair amount of walking around between sessions.
Overall the venue was nice, and the staff were friendly and helpful.
Pre-conference Workshop
I ran a 2-day pre-conference class, entitled More Concurrent Thinking in C++: Beyond the Basics, which was for those looking to move beyond the basics of threads and locks to the next level: high level library and application design, as well as lock-free programming with atomics. This was well attended, and I had interesting discussions with people over lunch and in the evening.
If you would like to book this course for your company, please see my training page.
The main conference
Bjarne Stroustrup kicked off the main conference with his presentation on "C++20: C++ at 40". Bjarne again reiterated his vision for C++, and outlined some of the many nice language and library features we have to make development easier, and code clearer and less error-prone.
Matt Godbolt's presentation on "Compiler Explorer: Behind the Scenes" was good and entertaining. Matt showed how he'd evolved Compiler Explorer from a simple script to the current website, and demonstrated some nifty things about it along the way, including features you might not have known about such as the LLVM instruction cost view, or the new "run your code" facility.
In "If You Can't Open It, You Don't Own It", Matt Butler talked about security and trust, and how bad things can happen if something you trust is compromised. Mostly this was obvious if you thought about it, but not something we necessarily do think about, so it was nice to be reminded, especially with the concrete examples. His advice on what we can do to build more secure systems, and existing and proposed C++ features that help was also good.
Barbara Geller and Ansel Sermersheim made an enthusiastic duo presenting "High performance graphics and text rendering on the GPU for any C++ application". I am excited about the potential for their Copperspice wrapper for the Vulkan rendering library: rendering 3D graphics portably is hard, and text more so.
Andrew Sutton's presentation on "Reflections: Compile-time Introspection of Source Code" was an interesting end to Monday. There is a lot of scope for eliminating boilerplate if we can use reflection, so it is good to see the progress being made on it.
Tuesday morning began with a scary question posed by Michael Wong, Paul McKenney and Maged Michael: "Will Your Code Survive the Attack of the Zombie Pointers?" Currently, if you delete an object or call free, then all copies of those pointers immediately become invalid across all threads. Since invalid pointers can't even be compared, this can result in zombies eating your brains. Michael, Paul and Maged looked at what we can do in our code to avoid this, and what they are proposing for the C++ Standard to fix the problem.
Andrei Alexandrescu's presentation on "Speed is found in the minds of people" was an insightful look at optimizing sort. Andrei showed how compiler and processor features mean that performance can be counter-intuitive, and code with a higher algorithmic complexity can run faster in the right conditions. Always use infinite loops (except for most cases).
I loved the interactive slides in Hana Dusikova's presentation "A State of Compile Time Regular Expressions". She is pushing the boundaries of compile-time coding to make our code perform better at runtime. std::regex can be slow compared to other regular expression libraries, but ctre can be much better. I am excited to see how this can be extended to compile-time parsing of other DSLs.
In "Applied WebAssembly: Compiling and Running C++ in Your Web Browser", Ben Smith showed the use of WebAssembly as a target to allow you to write high-performance C++ code that will run in a suitable web browser on any platform, much like the "Write once, run anywhere" promise of Java. I am interested to see where this can lead.
Samy Al Bahra and Paul Khuong presented the final session I attended: "Abusing Your Memory Model for Fun and Profit". They discussed how they have written code that relies on the stronger memory ordering requirements imposed by X86 CPUs over and above the standard C++ memory model in order to write high-performance concurrent data structures. I am intrigued to see if any of their techniques can be used in a portable fashion, or used to improve Just::Thread Pro.
Whiteboard code
This year there were a few whiteboards around the conference area for people to use for impromptu discussions. One of them had a challenge written on it:
"Can you write a requires expression that ensures a class has a member function with a specified signature?"
This led to a lot of discussion, which Arthur O'Dwyer wrote up as a blog post. Though the premise of the question is wrong (we shouldn't want to constrain on such specifics), it was fun, interesting and enlightening trying to think how one might do it — it allows you to explore the corner cases of the language in ways that might turn out to be useful later.
My presentation
As well as the workshop, I presented a talk on "Concurrency in C++20 and beyond", which was on Tuesday afternoon. It was in an intermediate-sized room, and I believe was well attended, though it was hard to see the audience with the bright stage lighting. There were a number of interesting questions from the audience addressing the issues raised in my presentation, which is always good, though the acoustics did make it hard to hear some of them.
Slides are available here.
~trip_report()
So that was an overview of another awesome CppCon. I love the in-person interactions with so many people involved in using C++ for such a wide variety of things. Everyone has their own perspective, and I always learn something.
The videos are being uploaded incrementally to the CppCon YouTube channel, so hopefully the video of my presentation and the ones above that aren't already available will be uploaded soon.
Posted by Anthony Williams
[/ news /] permanent link
Tags: conferences, C++, cppcon, ccia, book, workshop, slides
Design and Content Copyright © 2005-2024 Just Software Solutions Ltd. All rights reserved. | Privacy Policy