Blog Archive
August 2009 C++ Standards Committee Mailing
Thursday, 06 August 2009
The August 2009 mailing for the C++ Standards Committee was published yesterday. This is the post-meeting mailing for the July committee meeting. There isn't a new working draft this time, for one very simple reason: the "Concepts" feature was voted out of C++0x, so huge sections of the draft will have to be changed. Thankfully, the concurrency-related parts had not yet had concepts applied, so will not be affected directly by this.
C++0x timetable
What does this mean for the timetable? Well, the committee currently intends to issue a new Committee Draft for voting and comments by National Bodies in March 2010. How long it takes to get from that draft to a new Standard will depend on the comments that come back, and the length of the committee's issue lists. This is not intended to be a Final Committee Draft, which implies there will be at least one more round of voting and comments. As I understand it, the earliest we could get a new Standard is thus early 2011, though if that were to be the case then the details would be finalised late 2010.
Concurrency-related papers
This time, there's only one concurrency-related paper in the mailing:
- N2925: More Collected Issues with Atomics
This paper gathers together the proposed wording for a number of outstanding issues with respect to the atomic data types. It tightens up the wording regarding the memory ordering guarantees of fences, and adjusts the overload set for both free functions and member functions to ensure atomics behave sensibly in more circumstances. Crucially, the header for atomic operations has also changed — rather than the <cstdatomic> / <stdatomic.h> pairing from the previous draft there is a single header: <atomic>.
Future developments
Though futures were discussed, no change was made to the working paper in this regard. Detlef and I have been asked to write a further paper on the topic to incorporate the issues raised at the meeting, which will therefore be in the next committee mailing.
Likewise, the proposals for dealing with the lifetime of thread_local variables and for launching an asynchronous task with a return value are still under discussion, and further papers on those can be expected.
Your input
How do you feel about the removal of concepts? Do you think the slip in schedule will have an impact on you? Let me know in the comments.
Posted by Anthony Williams
[/ cplusplus /] permanent link
Tags: C++0x, C++, standards, concurrency
If you liked this post, why not subscribe to the RSS feed or Follow me on Twitter? You can also subscribe to this blog by email using the form on the left.
just::thread C++0x Thread Library Linux Port Released
Monday, 03 August 2009
I am pleased to announce that version 1.0 of just::thread, our C++0x Thread Library is now available for Linux as well as Windows.
The just::thread library is a complete implementation of the new C++0x thread library as per the current C++0x working paper. Features include:
- std::thread for launching threads.
- Mutexes and condition variables.
- std::promise, std::packaged_task, std::unique_future and std::shared_future for transferring data between threads.
- Support for the new std::chrono time interface for sleeping and timeouts on locks and waits.
- Atomic operations with std::atomic.
- Support for std::exception_ptr for transferring exceptions between threads.
- Special deadlock-detection mode for tracking down the call stack leading to deadlocks, the bane of multithreaded programming.
The Linux port is available for 32-bit and 64-bit Ubuntu Linux, and takes full advantage of the C++0x support available in g++ 4.3. Purchase your copy and get started NOW.
Posted by Anthony Williams
[/ news /] permanent link
Tags: multithreading, concurrency, C++0x
Multithreading in C++0x part 5: Flexible locking with std::unique_lock<>
Wednesday, 15 July 2009
This is the fifth in a series of blog posts introducing the new C++0x thread library. So far we've looked at the various ways of starting threads in C++0x and protecting shared data with mutexes. See the end of this article for a full set of links to the rest of the series.
In the previous installment we looked at the use of std::lock_guard<> to simplify the locking and unlocking of a mutex and provide exception safety. This time we're going to look at std::lock_guard<>'s companion class template, std::unique_lock<>. At the most basic level you use it like std::lock_guard<> — pass a mutex to the constructor to acquire a lock, and the mutex is unlocked in the destructor — but if that's all you're doing then you really ought to use std::lock_guard<> instead. The benefit of using std::unique_lock<> comes from two things:
- you can transfer ownership of the lock between instances, and
- the std::unique_lock<> object does not have to own the lock on the mutex it is associated with.
Let's take a look at each of these in turn, starting with transferring ownership.
Transferring ownership of a mutex lock between std::unique_lock<> instances
There are several consequences to being able to transfer ownership of a mutex lock between std::unique_lock<> instances: you can return a lock from a function, you can store locks in standard containers, and so forth.
For example, you can write a simple function that acquires a lock on an internal mutex:
std::unique_lock<std::mutex> acquire_lock()
{
    static std::mutex m;
    return std::unique_lock<std::mutex>(m);
}
The ability to transfer lock ownership between instances also provides an easy way to write classes that are themselves movable, but hold a lock internally, such as the following:
class data_to_protect
{
public:
    void some_operation();
    void other_operation();
};

class data_handle
{
private:
    data_to_protect* ptr;
    std::unique_lock<std::mutex> lk;

    friend data_handle lock_data();

    data_handle(data_to_protect* ptr_, std::unique_lock<std::mutex> lk_):
        ptr(ptr_), lk(std::move(lk_))
    {}
public:
    data_handle():
        ptr(0)
    {}
    data_handle(data_handle&& other):
        ptr(other.ptr), lk(std::move(other.lk))
    {
        other.ptr = 0;
    }
    data_handle& operator=(data_handle&& other)
    {
        if(&other != this)
        {
            ptr = other.ptr;
            lk = std::move(other.lk);
            other.ptr = 0;
        }
        return *this;
    }
    void do_op()
    {
        ptr->some_operation();
    }
    void do_other_op()
    {
        ptr->other_operation();
    }
};

data_handle lock_data()
{
    static std::mutex m;
    static data_to_protect the_data;
    std::unique_lock<std::mutex> lk(m);
    return data_handle(&the_data, std::move(lk));
}

int main()
{
    data_handle dh = lock_data();  // lock acquired
    dh.do_op();                    // lock still held
    dh.do_other_op();              // lock still held
    data_handle dh2;
    dh2 = std::move(dh);           // transfer lock to other handle
    dh2.do_op();                   // lock still held
}                                  // lock released
In this case, the function lock_data() acquires a lock on the mutex used to protect the data, and then transfers that along with a pointer to the data into the data_handle. This lock is then held by the data_handle until the handle is destroyed, allowing multiple operations to be done on the data without the lock being released. Because the std::unique_lock<> is movable, it is easy to make data_handle movable too, which is necessary to return it from lock_data().
Though the ability to transfer ownership between instances is useful, it is by no means as useful as the ability to manage the ownership of the lock separately from the lifetime of the std::unique_lock<> instance.
Explicitly locking and unlocking a mutex with a std::unique_lock<>
As we saw in part 4 of this series, std::lock_guard<> is very strict on lock ownership — it owns the lock from construction to destruction, with no room for manoeuvre. std::unique_lock<> is rather lax in comparison. As well as acquiring a lock in the constructor as for std::lock_guard<>, you can:
- construct an instance without an associated mutex at all (with the default constructor);
- construct an instance with an associated mutex, but leave the mutex unlocked (with the deferred-locking constructor);
- construct an instance that tries to lock a mutex, but leaves it unlocked if the lock failed (with the try-lock constructor);
- if you have a mutex that supports locking with a timeout (such as std::timed_mutex) then you can construct an instance that tries to acquire a lock for either a specified time period or until a specified point in time, and leaves the mutex unlocked if the timeout is reached;
- lock the associated mutex if the std::unique_lock<> instance doesn't currently own the lock (with the lock() member function);
- try to acquire a lock on the associated mutex if the std::unique_lock<> instance doesn't currently own the lock, possibly with a timeout if the mutex supports it (with the try_lock(), try_lock_for() and try_lock_until() member functions);
- unlock the associated mutex if the std::unique_lock<> instance currently owns the lock (with the unlock() member function);
- check whether the instance owns the lock (by calling the owns_lock() member function);
- release the association of the instance with the mutex, leaving the mutex in whatever state it is currently in (locked or unlocked) (with the release() member function); and
- transfer ownership between instances, as described above.
As you can see, std::unique_lock<> is quite flexible: it gives you complete control over the underlying mutex, and actually meets all the requirements for a Lockable object itself. You can thus have a std::unique_lock<std::unique_lock<std::mutex>> if you really want to! However, even with all this flexibility it still gives you exception safety: if the lock is held when the object is destroyed, it is released in the destructor.
std::unique_lock<> and condition variables
One place where the flexibility of std::unique_lock<> is used is with std::condition_variable. std::condition_variable provides an implementation of a condition variable, which allows a thread to wait until it has been notified that a certain condition is true. When waiting you must pass in a std::unique_lock<> instance that owns a lock on the mutex protecting the data related to the condition. The condition variable uses the flexibility of std::unique_lock<> to unlock the mutex whilst it is waiting, and then lock it again before returning to the caller. This enables other threads to access the protected data whilst the thread is blocked. I will expand upon this in a later part of the series.
Other uses for flexible locking
The key benefit of the flexible locking is that the lifetime of the lock object is independent from the time over which the lock is held. This means that you can unlock the mutex before the end of a function is reached if certain conditions are met, or unlock it whilst a time-consuming operation is performed (such as waiting on a condition variable as described above) and then lock the mutex again once the time-consuming operation is complete. Both these choices are embodiments of the common advice to hold a lock for the minimum length of time possible without sacrificing exception safety when the lock is held, and without having to write convoluted code to get the lifetime of the lock object to match the time for which the lock is required.
For example, in the following code snippet the mutex is unlocked across the time-consuming load_strings() operation, even though it must be held on either side to access the strings_to_process variable:
std::mutex m;
std::vector<std::string> strings_to_process;

void update_strings()
{
    std::unique_lock<std::mutex> lk(m);
    if(strings_to_process.empty())
    {
        lk.unlock();
        std::vector<std::string> local_strings = load_strings();
        lk.lock();
        strings_to_process.insert(strings_to_process.end(),
                                  local_strings.begin(), local_strings.end());
    }
}
Next time
Next time we'll look at the use of the new std::lock() and std::try_lock() function templates to avoid deadlock when acquiring locks on multiple mutexes.
Subscribe to the RSS feed or email newsletter for this blog to be sure you don't miss the rest of the series.
Try it out
If you're using Microsoft Visual Studio 2008 or g++ 4.3 or 4.4 on Ubuntu Linux you can try out the examples from this series using our just::thread implementation of the new C++0x thread library. Get your copy today.
Multithreading in C++0x Series
Here are the posts in this series so far:
- Multithreading in C++0x Part 1: Starting Threads
- Multithreading in C++0x Part 2: Starting Threads with Function Objects and Arguments
- Multithreading in C++0x Part 3: Starting Threads with Member Functions and Reference Arguments
- Multithreading in C++0x Part 4: Protecting Shared Data
- Multithreading in C++0x Part 5: Flexible locking with std::unique_lock<>
- Multithreading in C++0x part 6: Lazy initialization and double-checked locking with atomics
- Multithreading in C++0x part 7: Locking multiple mutexes without deadlock
- Multithreading in C++0x part 8: Futures, Promises and Asynchronous Function Calls
Posted by Anthony Williams
[/ threading /] permanent link
Tags: concurrency, multithreading, C++0x, thread
June 2009 C++ Standards Committee Mailing - New Working Draft and Concurrency Papers
Wednesday, 24 June 2009
The June 2009 mailing for the C++ Standards Committee was published today. This is the pre-meeting mailing for the upcoming committee meeting. Not only does it include the current C++0x working draft, but there are 39 other papers, of which 6 are concurrency related.
Concurrency-related papers
- N2880: C++ object lifetime interactions with the threads API
This is a repeat of the same paper from the May 2009 mailing. It is referenced by several of the other concurrency papers.
- N2888: Moving Futures - Proposed Wording for UK comments 335, 336, 337 and 338
This is the first of my papers in this mailing. The current working draft restricts std::unique_future to being MoveConstructible, but not MoveAssignable. It also restricts std::shared_future in a similar way, by making it CopyConstructible, but not CopyAssignable. This severely limits the potential use of such futures, so the UK panel filed comments on CD1 such as UK comment 335 which requested relaxation of these unnecessary restrictions. This paper provides a detailed rationale for these changes, along with proposed wording.
- N2889: An Asynchronous Call for C++
This is the first of two papers in the mailing that propose a std::async() function for starting a task possibly asynchronously and returning a future for the result from that task. This addresses a need identified by UK comment 329 on CD1 for a simple way of starting a task with a return value asynchronously.
- N2901: A simple async()
This is the second of the papers in the mailing that propose a std::async() function for starting a task possibly asynchronously and returning a future for the result from that task.
The primary difference between the papers is the type of the returned future. N2889 proposes a new type of future (joining_future), whereas N2901 proposes using std::unique_future. There are also a few semantic differences surrounding whether tasks that run asynchronously always do so on a new thread, or whether they may run on a thread that is reused for multiple tasks. Both papers provide a means for the caller to specify synchronous execution of tasks, or to allow the implementation to decide between synchronous execution and asynchronous execution on a case-by-case basis. N2901 also explicitly relies on the use of lambda functions or std::bind to bind parameters to a call, whereas N2889 supports specifying function arguments as additional parameters, as per the std::thread constructor (see my introduction to starting threads with arguments in C++0x).
Personally, I prefer the use of std::unique_future proposed in N2901, but I rather like the argument-passing semantics of N2889. I also think that the thread_local issues can be addressed by my proposal for that (N2907, see below). A final concern that I have is that calling the task inside future::get() can yield surprising behaviour: as futures can be passed across threads, this may end up being called on another thread altogether. For synchronous execution, I would prefer invoking the task inside the std::async call itself, but delaying it to the get() call does allow for work-stealing thread pools.
- N2907: Managing the lifetime of thread_local variables with contexts
This is the second of my papers in this mailing. It proposes a potential solution to the lifetime-of-thread_local-variables issues from N2880 discussed in last month's blog post.
The idea is that you manage the lifetime of thread_local variables by constructing an instance of a new class, std::thread_local_context. When the std::thread_local_context instance is destroyed, all thread_local variables created in that context are also destroyed. You can then construct a subsequent instance of std::thread_local_context, and create new thread_local instances in that new context. This means that you can reuse a thread for multiple unrelated tasks, without "spilling" thread_local variables from an earlier task into later tasks. It can also be used with a std::async() function to ensure that the thread_local variables are destroyed before the associated future is made ready.
- N2917: N2880 Distilled, and a New Issue With Function Statics
This is Herb Sutter's take on the issues from N2880. He starts with a general discussion of the issue with detached threads and static destruction of globals, and then continues with a discussion of the issues surrounding the destruction of function-scoped thread_local variables. In particular, Herb focuses on something he calls "Function thread_local statics poisoning thread_local destructors" — if the destructor of a thread_local object x calls a function that itself uses a function-scope thread_local y, then the destructor of y might already have been called, resulting in undefined behaviour.
I found Herb's coverage of the issues surrounding detached threads dismissive of the idea that people could correctly write manual synchronization (e.g. using a std::condition_variable or a std::unique_future), even though this is common practice amongst those using POSIX threads (for example, in David Butenhof's Programming with POSIX Threads, he says "pthread_join is a convenience, not a rule", and describes examples using detached threads and condition variables to signal when the thread is done). I can see many possibilities for such usage, so as a consequence I am personally in favour of his "solution" 1D: leave things as they are with regard to detached threads — it is already undefined behaviour to access something after its destruction.
However, the issue Herb raises with regard to order of destruction for thread_local variables is important, and not something that my std::thread_local_context proposal addresses. As Herb points out, the problem already exists with regard to function-local static variables — thread_local just amplifies it. I am inclined to go with what POSIX threads and boost::thread_specific_ptr do: make them "phoenix" variables that are re-initialized when accessed after destruction, and are thus added back onto the destructor list. This is Herb's solution 2B.
Other papers
Now that CD1 is out, and the committee is trying to get things pinned down for CD2, Concepts are getting a lot of discussion. There are therefore several papers on Concepts in the mailing. There are also papers on automatically defining move constructors, revised wording for lambdas, a proposal for unifying the function syntax, and several others. Check out the full list of papers for details.
Your comments
Do you have any comments on the papers (especially the concurrency ones, but if you have any comments on the others I'd like to know too)? Which std::async proposal do you prefer, or do you like aspects of both or neither? Do you think that thread_local_context objects combined with resurrecting thread_local objects on post-destruction access solve the thread_local issues?
Let me know by posting a comment.
Posted by Anthony Williams
[/ cplusplus /] permanent link
Tags: C++0x, C++, standards, concurrency
Importing an Existing Windows XP Installation into VirtualBox
Wednesday, 03 June 2009
On Monday morning one of the laptops I use for developing software died. Not a complete "nothing happens when I turn it on" kind of death — it still runs the POST checks OK — but it rebooted of its own accord whilst compiling some code and now no longer boots into Windows (no boot device, apparently). Now, I really didn't fancy having to install everything from scratch, and I've become quite a big fan of VirtualBox recently, so I thought I'd import it into VirtualBox. How hard could it be? The answer, I discovered, was "quite hard".
Since it seems that several other people have tried to import an existing Windows XP installation into VirtualBox and had problems doing so, I thought I'd document what I did for the benefits of anyone who is foolish enough to try this in the future.
Step 1: Clone the Disk into VirtualBox
The first thing to do is clone the disk into VirtualBox. I have a handy laptop disk caddy lying around in my office which enables you to convert a laptop hard disk into an external USB drive, so I took the disk out of the laptop and put it in that. I connected the drive to my linux box, and mounted the partition. A quick look round seemed to indicate that the partition was in working order and the data intact. So far so good. I unmounted the partition again, in preparation for cloning the disk.
I started VirtualBox and created a new virtual machine with a virtual disk the same size as the physical disk. I then booted the VM with the System Rescue CD that I use for partitioning and disk backups. You might prefer to use another disk cloning tool.
Once the VM was up and running, I connected the USB drive to the VM using VirtualBox's USB support and cloned the physical disk onto the virtual one. This took a long time, considering it was only a 30GB disk. Someone will probably tell me that there are quicker ways of doing this, but it worked, and I hope I don't have to do it again.
Step 2: Try (and fail) to boot Windows
Once the clone was complete, I disconnected the USB drive and unmapped the System Rescue CD and rebooted the VM. Windows started to boot, but would hang on the splash screen. If you're trying this and Windows now boots in your VM, be very glad.
Booting into safe mode showed that the hang occurred after loading "mup.sys". It seems lots of people have had this problem, and mup.sys is not the problem — the problem is that the hardware drivers configured for the existing Windows installation don't match the VirtualBox emulated hardware in some drastic fashion. This is not surprising if you think about it. Anyway, like I said, lots of people have had this problem, and there are lots of suggested ways of fixing it, like booting with the Windows Recovery Console and adjusting which drivers are loaded, using the "repair" version of the registry and so forth. I tried most of them, and none worked. However, there was one suggestion that seemed worth following through, and it was a variant of this that I finally got working.
Step 3: Install Windows into a new VM
The suggestion that I finally got working was to install Windows on a new VM and swipe the SYSTEM registry hive from there. This registry hive contains all the information about your hardware that Windows needs to boot, so if you can get Windows booting in a VM then you can use the corresponding SYSTEM registry hive to boot the recovered installation. At least that's the theory; in practice it needs a bit of hackery to make it work.
Anyway, I installed Windows into the new VM, closed it down, and rebooted it with the System Rescue CD to retrieve the SYSTEM registry hive: C:\Windows\System32\config\SYSTEM. (You cannot access this file while the system is running.) I then booted my original VM with the System Rescue CD and copied the registry hive over, making sure I had a backup of the original. If you're doing this yourself, don't change the hive on your original VM yet.
The system now booted into Windows. Well, almost — it booted up, but then displayed an LSASS message about being unable to update the password and rebooted. This cycle repeats ad infinitum, even in Safe Mode. So far not so good.
Step 4: Patching the SYSKEY
In practice, Windows installations have what's called a SYSKEY in order to prevent unauthorized people logging on to the system. This is a checksum of some variety which is spread across the SAM registry hive (which contains the user details), the SYSTEM hive and the SECURITY hive. If the SYSKEY is wrong, the system will boot up, but then display the message I was getting about LSASS being unable to update the password and reboot. In theory, you should be able to update all three registry hives together, but then all your user accounts get replaced, and I didn't fancy trying to get everything set up right again. This is where the hackery comes in, and where I am thankful to two people: firstly Clark from http://www.beginningtoseethelight.org/ who wrote an informative article on the Windows Security Accounts Manager which explains how the SYSKEY is stored in the registry hives, and secondly Petter Nordahl-Hagen who provides a boot disk for offline Windows password and registry editing.
According to the article on the Windows Security Accounts Manager, the portion of the SYSKEY stored in the SYSTEM hive is stored as class key values on a few registry keys. Class key values are hidden from normal registry accesses, but Petter Nordahl-Hagen's registry editor can show them to you. So, I restored the original SYSTEM hive (I was glad I made a backup) and booted the VM from Petter's boot disk and looked at the class key values on the ControlSet001\Control\Lsa\Data, ControlSet001\Control\Lsa\GBG, ControlSet001\Control\Lsa\JD and ControlSet001\Control\Lsa\Skew1 keys from the SYSTEM hive. I noted these down for later. The values are all 16 bytes long: the ASCII values for 8 hex digits with null bytes between.
This is where the hackery comes in — I loaded the new SYSTEM hive (from the working Windows VM) into a hex editor and searched for the GBG key. The text should appear in a few places — one for the subkey of ControlSet001, one for the subkey of ControlSet002, and so forth. A few bytes after one of the occurrences you should see a sequence of 16 bytes that looks similar to the codes you wrote down: ASCII values for hex digits separated by spaces. Make a note of the original sequence and replace it with the GBG class key value from the working VM. Do the same for the Data, JD and Skew1 values. Near the Data values you should also see the same hex digit sequence without the separating null bytes. Replace that too. Now look at the values in the file near to where the registry key names occur to see if there are any other occurrences of the original hex digit sequences and replace these with the new values as well.
Save the patched SYSTEM registry hive and copy it into the VM being recovered.
Now for the moment of truth: boot the VM. If you've patched all the values correctly then it will boot into Windows. If not, then you'll get the LSASS message again. In this case, try booting into the "Last Known Good Configuration". This might work if you missed one of the occurrences of the original values. If it still doesn't work, load the hive back into your hex editor and have another go.
Step 5: Activate Windows and Install VirtualBox Guest Additions
Once Windows has booted successfully, it will update the SYSKEY entries across the remaining ControlSetXXX entries, so you don't need to worry if you missed some values. You'll need to re-activate Windows XP due to the huge change in hardware, but this should be relatively painless — if you enable a network adapter in the VM configuration then Windows can access the internet through your host's connection seamlessly. Once that's done you can proceed with installing the VirtualBox guest additions to make it easier to work with the VM — mouse pointer integration, sensible screen resolutions, shared folders and so forth.
Was it quicker than installing everything from scratch? Possibly: I had a lot of software installed. It was certainly a lot more touch-and-go, and it was a bit scary patching the registry hive in a hex editor. It was quite fun though, and it felt good to get it working.
Posted by Anthony Williams
[/ general /] permanent link
Tags: virtualbox
May 2009 C++ Standards Committee Mailing - Object Lifetimes and Threads
Monday, 18 May 2009
The May 2009 mailing for the C++ Standards Committee was published a couple of weeks ago. This is a minimal mailing between meetings, and only has a small number of papers.
The primary reason I'm mentioning it is because one of the papers is concurrency related: N2880: C++ object lifetime interactions with the threads API by Hans-J. Boehm and Lawrence Crowl. Hans and Lawrence are concerned about the implications of thread_local objects with destructors, and how you can safely clean up threads if you don't call join(). The issue arose during discussion of the proposed async() function, but is generally applicable.
thread_local variables and detached threads
Suppose you run a function on a thread for which you want the return value. You might be tempted to use std::packaged_task<> and std::unique_future<> for this; after all it's almost exactly what they're designed for:
#include <thread>
#include <future>
#include <iostream>

int find_the_answer_to_LtUaE();
void do_stuff();

std::unique_future<int> start_deep_thought()
{
    std::packaged_task<int()> task(find_the_answer_to_LtUaE);
    std::unique_future<int> future = task.get_future();
    std::thread t(std::move(task));
    t.detach();
    return future;
}

int main()
{
    std::unique_future<int> the_answer = start_deep_thought();
    do_stuff();
    std::cout << "The answer=" << the_answer.get() << std::endl;
}
The call to get() will wait for the task to finish, but not the thread. If there are no thread_local variables this is not a problem — the thread is detached, so the library will clean up the resources assigned to it automatically. If there are thread_local variables (used in find_the_answer_to_LtUaE(), for example), then this does cause a problem, because their destructors are not guaranteed to have completed when get() returns. Consequently, the program may exit before the destructors of the thread_local variables have completed, and we have a race condition.
This race condition is particularly problematic if the thread accesses any objects of static storage duration, such as global variables. The program is exiting, so these are being destroyed; if the thread_local destructors in the still-running thread access global variables that have already been destroyed then your program has undefined behaviour.
This isn't the only problem that Hans and Lawrence discuss — they also discuss problems with thread_local and threads that are reused for multiple tasks — but I think it's the most important issue.
Solutions?
None of the solutions proposed in the paper are ideal. I
particularly dislike the proposed removal of the detach()
member function from std::thread. If
you can't detach a thread directly then it makes functions like
start_deep_thought()
much harder to write, and people will find other ways to simulate detached threads.
options presented, my preferred choice is to allow registration of a
thread termination handler which is run after all
thread_local
objects have been destroyed. This handler
can then be used to set the value on a future or notify a condition
variable. However, it would make start_deep_thought()
more complicated, as std::packaged_task<>
wouldn't automatically make use of this mechanism unless it was
extended to do so — if it did this every time then it would make
it unusable in other contexts.
If anyone has any suggestions on how to handle the issue, please leave them in the comments below and I'll pass them on to the rest of the committee.
Posted by Anthony Williams
[/ cplusplus /] permanent link
Tags: C++0x, C++, standards, concurrency
If you liked this post, why not subscribe to the RSS feed or Follow me on Twitter? You can also subscribe to this blog by email using the form on the left.
"Introduction to Variadic Templates in C++0x" Article Online
Thursday, 07 May 2009
My latest article, Introduction to Variadic Templates in C++0x, has been published at devx.com.
This article introduces the syntax for declaring and using variadic templates, along with some simple examples of variadic function templates and variadic class templates.
Posted by Anthony Williams
[/ news /] permanent link
Tags: article, variadic templates, C++0x
Designing Multithreaded Applications with C++0x: ACCU 2009 Slides
Tuesday, 28 April 2009
The ACCU 2009 conference has now finished, and life is getting back to normal. My presentation on "Designing Multithreaded Programs with C++0x" was well attended and I had a few people come up to me afterwards to say they enjoyed it, which is always nice.
Anyway, the purpose of this post is to say that the slides
are now up. I've also posted the sample
code for the Concurrent Queue and Numerical Integration
demonstrations that I did using our just::thread
implementation of the C++0x thread library.
Posted by Anthony Williams
[/ news /] permanent link
Tags: concurrency, threading, accu, C++0x
just::thread discount for ACCU 2009
Monday, 20 April 2009
As I mentioned back in January, I will be speaking on "Designing Multithreaded Applications with C++0x" at ACCU 2009 on Thursday.
To coincide with my presentation, our C++0x thread library, just::thread
is
available at a 25%
discount until the 4th May 2009. just::thread
provides implementations of the C++0x thread library facilities such
as std::thread,
std::mutex,
std::unique_future<>
and std::atomic<>. The
current release works with Microsoft Visual Studio 2008, and gcc/linux
support will be available soon — it is currently undergoing
alpha testing.
Posted by Anthony Williams
[/ news /] permanent link
Tags: concurrency, threading, accu, C++0x, just::thread
Multithreading in C++0x part 4: Protecting Shared Data
Saturday, 04 April 2009
This is the fourth of a series of blog posts introducing the new C++0x thread library. The first three parts covered starting threads in C++0x with simple functions, starting threads with function objects and additional arguments, and starting threads with member functions and reference arguments.
If you've read the previous parts of the series then you should be comfortable with starting threads to perform tasks "in the background", and waiting for them to finish. You can accomplish a lot of useful work like this, passing in the data to be accessed as parameters to the thread function and then retrieving the result when the thread has completed. However, this won't do if you need to communicate between the threads whilst they are running — accessing shared memory concurrently from multiple threads causes undefined behaviour if any of those threads modifies the data. What you need here is some way of ensuring that the accesses are mutually exclusive, so only one thread can access the shared data at a time.
Mutual Exclusion with std::mutex
Mutexes are conceptually simple. A mutex is either "locked" or
"unlocked", and threads try to lock the mutex when they wish to
access some protected data. If the mutex is already locked then any
other thread that tries to lock it has to wait. Once the
thread is done with the protected data it unlocks the mutex, and
another thread can lock it in turn. If you make sure that threads
always lock a particular mutex before accessing a particular piece of
shared data, then other threads are excluded from accessing that data
for as long as the mutex is held. This prevents
concurrent access from multiple threads, and avoids the undefined
behaviour of data races. The simplest mutex provided by C++0x is
std::mutex.
Now, whilst std::mutex
has member functions for explicitly locking and unlocking, by far the
most common use case in C++ is where the mutex needs to be locked for
a specific region of code. This is where the std::lock_guard<>
template comes in handy by providing for exactly this scenario. The
constructor locks the mutex, and the destructor unlocks the mutex, so
to lock a mutex for the duration of a block of code, just construct a
std::lock_guard<>
object as a local variable at the start of the block. For example, to
protect a shared counter you can use std::lock_guard<>
to ensure that the mutex is locked for either an increment or a query
operation, as in the following example:
std::mutex m;
unsigned counter=0;

unsigned increment()
{
    std::lock_guard<std::mutex> lk(m);
    return ++counter;
}

unsigned query()
{
    std::lock_guard<std::mutex> lk(m);
    return counter;
}
This ensures that access to counter
is
serialized — if more than one thread calls
query()
concurrently then all but one will block until
the first has exited the function, and the remaining threads will then
have to take turns. Likewise, if more than one thread calls
increment()
concurrently then all but one will
block. Since both functions lock the same mutex, if one
thread calls query()
and another calls
increment()
at the same time then one or other will have
to block. This mutual exclusion is the whole point of a
mutex.
Exception Safety and Mutexes
Using std::lock_guard<>
to lock the mutex has additional benefits over manually locking and
unlocking when it comes to exception safety. With manual locking, you
have to ensure that the mutex is unlocked correctly on every exit path
from the region where you need the mutex locked, including when
the region exits due to an exception. Suppose for a moment that
instead of protecting access to a simple integer counter we were
protecting access to a std::string, and appending parts
on the end. Appending to a string might have to allocate memory, and
thus might throw an exception if the memory cannot be
allocated. With std::lock_guard<>
this still isn't a problem — if an exception is thrown, the
mutex is still unlocked. To get the same behaviour with manual locking
we have to use a catch
block, as shown below:
std::mutex m;
std::string s;

void append_with_lock_guard(std::string const& extra)
{
    std::lock_guard<std::mutex> lk(m);
    s+=extra;
}

void append_with_manual_lock(std::string const& extra)
{
    m.lock();
    try
    {
        s+=extra;
        m.unlock();
    }
    catch(...)
    {
        m.unlock();
        throw;
    }
}
If you had to do this for every function which might throw an exception it would quickly get unwieldy. Of course, you still need to ensure that the code is exception-safe in general — it's no use automatically unlocking the mutex if the protected data is left in a state of disarray.
Next time
Next time we'll take a look at the std::unique_lock<> template, which provides more options than std::lock_guard<>.
Subscribe to the RSS feed or email newsletter for this blog to be sure you don't miss the rest of the series.
Try it out
If you're using Microsoft Visual Studio 2008 or g++ 4.3 or 4.4 on
Ubuntu Linux you can try out the examples from this series using our
just::thread
implementation of the new C++0x thread library. Get your copy
today.
Multithreading in C++0x Series
Here are the posts in this series so far:
- Multithreading in C++0x Part 1: Starting Threads
- Multithreading in C++0x Part 2: Starting Threads with Function Objects and Arguments
- Multithreading in C++0x Part 3: Starting Threads with Member Functions and Reference Arguments
- Multithreading in C++0x Part 4: Protecting Shared Data
- Multithreading in C++0x Part 5: Flexible locking with std::unique_lock<>
- Multithreading in C++0x part 6: Lazy initialization and double-checked locking with atomics
- Multithreading in C++0x part 7: Locking multiple mutexes without deadlock
- Multithreading in C++0x part 8: Futures, Promises and Asynchronous Function Calls
Posted by Anthony Williams
[/ threading /] permanent link
Tags: concurrency, multithreading, C++0x, thread
Design and Content Copyright © 2005-2025 Just Software Solutions Ltd. All rights reserved. | Privacy Policy