Blog Archive
Boost 1.35.0 has been Released!
Tuesday, 01 April 2008
Version 1.35.0 of the Boost libraries was released on Saturday. This release includes a major revision of the Boost.Thread library, to bring it more in line with the C++0x Thread Library. There are many new libraries, and revisions to other libraries too; see the full Release Notes for details, or just Download the release and give it a try.
Posted by Anthony Williams
[/ news /] permanent link
Tags: boost
If you liked this post, why not subscribe to the RSS feed or Follow me on Twitter? You can also subscribe to this blog by email using the form on the left.
Optimizing Applications with Fixed-Point Arithmetic
Tuesday, 01 April 2008
My latest article, Optimizing Math-intensive Applications with Fixed Point Arithmetic from the April 2008 issue of Dr Dobb's Journal is now available online. (I originally had "Maths-intensive" in the title, being English, but they dropped the "s", being American).
In the article, I describe the fixed-point techniques I used to vastly improve the performance of an application using sines, cosines and exponentials without hardware floating point support.
The source code referenced in the article can be downloaded from here. It is released under the Boost Software License.
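The article's code isn't reproduced here, but the core trick can be sketched in a few lines. The following is a minimal illustration of the general technique, not the article's actual implementation: a hypothetical 16.16 fixed-point format (16 integer bits, 16 fractional bits), where every operation uses only integer instructions.

```cpp
#include <cassert>
#include <cstdint>

// Hypothetical 16.16 fixed-point format: values are stored as the real
// number multiplied by 2^16, so all arithmetic is pure integer arithmetic.
using fixed = std::int32_t;
constexpr int fractional_bits = 16;

constexpr fixed to_fixed(double d) {
    return static_cast<fixed>(d * (1 << fractional_bits));
}

constexpr double to_double(fixed f) {
    return static_cast<double>(f) / (1 << fractional_bits);
}

// Multiplying two 16.16 values yields 32 fractional bits, so go via a
// 64-bit intermediate and shift the surplus fractional bits back out.
constexpr fixed fixed_mul(fixed a, fixed b) {
    return static_cast<fixed>(
        (static_cast<std::int64_t>(a) * b) >> fractional_bits);
}
```

Division works similarly (pre-shift the numerator into 64 bits before dividing), and functions like sine can then be built on top with lookup tables or polynomial approximations, all without touching the floating-point unit.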
Posted by Anthony Williams
[/ news /] permanent link
Tags: optimization, fixed-point, maths
Futures and Tasks in C++0x
Thursday, 27 March 2008
I had resigned myself to Thread Pools and Futures being punted to TR2 rather than C++0x, but it seems there is potential for some movement on this issue. At the meeting of WG21 in Kona, Hawaii in October 2007 it was agreed to include asynchronous future values in C++0x, whilst excluding thread pools and task launching.
Detlef Vollman has rekindled the effort, and drafted N2561: An Asynchronous Future Value with myself and Howard Hinnant, based on a discussion including other members of the Standards Committee. This paper proposes four templates: unique_future and shared_future, which are the asynchronous values themselves, and packaged_task and promise, which provide ways of setting the asynchronous values.
Asynchronous future values
unique_future is very much like unique_ptr: it represents exclusive ownership of the value. Ownership of a (future) value can be moved between unique_future instances, but no two unique_future instances can refer to the same asynchronous value. Once the value is ready for retrieval, it is moved out of the internal storage buffer: this allows for use with move-only types such as std::ifstream.
Similarly, shared_future is very much like shared_ptr: multiple instances can refer to the same (future) value, and shared_future instances can be copied around. In order to reduce surprises with this usage (with one thread moving the value through one instance at the same time as another tries to move it through another instance), the stored value can only be accessed via const reference, so it must be copied out, or accessed in place.
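As a concrete illustration, here is a sketch using the C++11 names this proposal eventually evolved into (std::shared_future, std::promise, and future::share()), rather than the exact classes proposed in N2561. The function name is illustrative:

```cpp
#include <cassert>
#include <future>

// share() converts the single-owner future into a copyable shared_future;
// every copy refers to the same asynchronous value, and get() may be
// called on each copy independently.
int read_from_two_copies() {
    std::promise<int> p;
    std::shared_future<int> sf = p.get_future().share();
    std::shared_future<int> sf2 = sf;   // copies may coexist
    p.set_value(42);
    return sf.get() + sf2.get();        // both see the same stored value
}
```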
Storing the future values as the return value from a function
The simplest way to calculate a future value is with a packaged_task<T>. Much like std::function<T()>, this encapsulates a callable object or function, for invoking at a later time. However, whereas std::function returns the result directly to the caller, packaged_task stores the result in a future.
    extern int some_function();

    std::packaged_task<int> task(some_function);
    std::unique_future<int> result=task.get_future();

    // later on, some thread does
    task();
    // and "result" is now ready
Making a promise to provide a future value
The other way to store a value to be picked up with a unique_future or shared_future is to use a promise, and then explicitly set the value by calling the set_value() member function.
    std::promise<int> my_promise;
    std::unique_future<int> result=my_promise.get_future();

    // later on, some thread does
    my_promise.set_value(42);
    // and "result" is now ready
Exceptional returns
Futures also support storing exceptions: when you try to retrieve the value, if there is a stored exception, that exception is thrown rather than the value being retrieved. With a packaged_task, an exception gets stored if the wrapped function throws an exception when it is invoked, and with a promise, you can explicitly store an exception with the set_exception() member function.
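To illustrate, here is a sketch using the C++11 descendants of the proposed classes (std::promise and std::future): an exception stored with set_exception() re-surfaces when the value is retrieved. The function name is illustrative:

```cpp
#include <cassert>
#include <future>
#include <stdexcept>
#include <string>

// An exception stored in the promise is thrown by get() instead of a value.
std::string retrieve_or_report() {
    std::promise<int> p;
    std::future<int> f = p.get_future();
    p.set_exception(std::make_exception_ptr(std::runtime_error("boom")));
    try {
        f.get();                        // throws the stored exception
    } catch (std::runtime_error const& e) {
        return e.what();
    }
    return "no exception";
}
```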
Feedback
As the paper says, this is not a finished proposal: it is a basis for further discussion. Let me know if you have any comments.
Posted by Anthony Williams
[/ threading /] permanent link
Tags: threading, futures, asynchronous values, C++0x, wg21
Thread Interruption in the Boost Thread Library
Tuesday, 11 March 2008
One of the new features introduced in the upcoming 1.35.0 release of the Boost Thread library is support for interruption of a running thread. Similar to the Java and .NET interruption support, this allows one thread to request another thread to stop at the next interruption point. This is the only way to explicitly request a thread to terminate that is directly supported by the Boost Thread library, though users can manually implement cooperative interruption if required.
Interrupting a thread in this way is much less dangerous than brute-force tactics such as TerminateThread(), as such tactics can leave broken invariants and leak resources. If a thread is killed using a brute-force method and it was holding any locks, this can also potentially lead to deadlock when another thread tries to acquire those locks at some future point. Interruption is also easier and more reliable than rolling your own cooperative termination scheme using mutexes, flags, condition variables, or some other synchronization mechanism, since it is part of the library.
Interrupting a Thread
A running thread can be interrupted by calling the interrupt() member function on the corresponding boost::thread object. If the thread doesn't have a boost::thread object (e.g. the initial thread of the application), then it cannot be interrupted.
Calling interrupt() just sets a flag in the thread management structure for that thread and returns: it doesn't wait for the thread to actually be interrupted. This is important, because a thread can only be interrupted at one of the predefined interruption points, and it might be that a thread never executes an interruption point, so never sees the request. Currently, the interruption points are:
boost::thread::join()
boost::thread::timed_join()
boost::condition_variable::wait()
boost::condition_variable::timed_wait()
boost::condition_variable_any::wait()
boost::condition_variable_any::timed_wait()
boost::this_thread::sleep()
boost::this_thread::interruption_point()
When a thread reaches one of these interruption points, if interruption is enabled for that thread then it checks its interruption flag. If the flag is set, then it is cleared, and a boost::thread_interrupted exception is thrown. If the thread is already blocked on a call to one of the interruption points with interruption enabled when interrupt() is called, then the thread will wake in order to throw the boost::thread_interrupted exception.
Catching an Interruption
boost::thread_interrupted is just a normal exception, so it can be caught, just like any other exception. This is why the "interrupted" flag is cleared when the exception is thrown — if a thread catches and handles the interruption, it is perfectly acceptable to interrupt it again. This can be used, for example, with a worker thread that is processing a series of independent tasks — if the current task is interrupted, the worker can handle the interruption, discard the task, and move on to the next task, which can then in turn be interrupted. It also allows the thread to catch the exception and terminate itself by other means, such as returning error codes, or translating the exception to pass through module boundaries.
Disabling Interruptions
Sometimes it is necessary to avoid being interrupted for a particular section of code, such as in a destructor, where an exception has the potential to cause immediate process termination. This is done by constructing an instance of boost::this_thread::disable_interruption. Objects of this class disable interruption for the thread that created them on construction, and restore the interruption state to whatever it was before on destruction:
    void f()
    {
        // interruption enabled here
        {
            boost::this_thread::disable_interruption di;
            // interruption disabled
            {
                boost::this_thread::disable_interruption di2;
                // interruption still disabled
            } // di2 destroyed, interruption state restored
            // interruption still disabled
        } // di destroyed, interruption state restored
        // interruption now enabled
    }
The effects of an instance of boost::this_thread::disable_interruption can be temporarily reversed by constructing an instance of boost::this_thread::restore_interruption, passing in the boost::this_thread::disable_interruption object in question. This will restore the interruption state to what it was when the boost::this_thread::disable_interruption object was constructed, and then disable interruption again when the boost::this_thread::restore_interruption object is destroyed:
    void g()
    {
        // interruption enabled here
        {
            boost::this_thread::disable_interruption di;
            // interruption disabled
            {
                boost::this_thread::restore_interruption ri(di);
                // interruption now enabled
            } // ri destroyed, interruption disabled again
            {
                boost::this_thread::disable_interruption di2;
                // interruption disabled
                {
                    boost::this_thread::restore_interruption ri2(di2);
                    // interruption still disabled,
                    // as it was disabled when di2 constructed
                } // ri2 destroyed, interruption still disabled
            } // di2 destroyed, interruption still disabled
        } // di destroyed, interruption state restored
        // interruption now enabled
    }
boost::this_thread::disable_interruption and boost::this_thread::restore_interruption cannot be moved or copied, and they are the only way of enabling and disabling interruption. This ensures that the interruption state is correctly restored when the scope is exited (whether normally, or by an exception), and that you cannot enable interruption in the middle of an interruption-disabled block unless you're in full control of the code, and have access to the boost::this_thread::disable_interruption instance.
At any point, the interruption state for the current thread can be queried by calling boost::this_thread::interruption_enabled().
Cooperative Interruption
As well as the interruption points on blocking operations such as sleep() and join(), there is one interruption point explicitly designed to allow interruption at a user-designated point in the code. boost::this_thread::interruption_point() does nothing except check for an interruption, and can therefore be used in long-running code that doesn't execute any other interruption points, in order to allow for cooperative interruption. Just like the other interruption points, interruption_point() respects the interruption enabled state, and does nothing if interruption is disabled for the current thread.
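To make the mechanism concrete, here is a rough standard-C++ sketch of what such a check amounts to: a flag that the long-running loop polls. In Boost the flag lives inside the per-thread management structure and a boost::thread_interrupted exception is thrown for you; the names below are purely illustrative, not part of any library.

```cpp
#include <atomic>
#include <cassert>

// Illustrative stand-in for the hidden per-thread interruption flag.
std::atomic<bool> interruption_requested{false};

// A long-running loop that polls the flag, much as a loop calling
// boost::this_thread::interruption_point() each iteration would.
long count_until_interrupted() {
    long iterations = 0;
    while (!interruption_requested.load(std::memory_order_relaxed)) {
        ++iterations;   // stand-in for a chunk of real work
    }
    return iterations;
}
```

The real facility is preferable to a hand-rolled flag like this, of course: the flag, the exception, and the integration with blocking calls all come for free.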
Interruption is Not Cancellation
On POSIX platforms, threads can be cancelled rather than killed, by calling pthread_cancel(). This is similar to interruption, but is a separate mechanism, with different behaviour. In particular, cancellation cannot be stopped once it is started: whereas interruption just throws an exception, once a cancellation request has been acknowledged the thread is effectively dead. pthread_cancel() does not always execute destructors either (though it does on some platforms), as it is primarily a C interface — if you want to clean up your resources when a thread is cancelled, you need to use pthread_cleanup_push() to register a cleanup handler. The advantage here is that pthread_cleanup_push() works in C stack frames, whereas exceptions don't play nicely in C: on some platforms it will crash your program for an exception to propagate into a C stack frame.
For portable code, I recommend interruption over cancellation. It's supported on all platforms that can use the Boost Thread library, and it works well with C++ code — it's just another exception, so all your destructors and catch blocks work just fine.
Posted by Anthony Williams
[/ threading /] permanent link
Tags: thread, boost, interruption, concurrency, cancellation, multi-threading
Acquiring Multiple Locks Without Deadlock
Monday, 03 March 2008
In a software system with lots of fine-grained mutexes, it can sometimes be necessary to acquire locks on more than one mutex together in order to perform some operation. If this is not done with care, then there is the possibility of deadlock, as multiple threads may lock the same mutexes in a different order. It is for this reason that the thread library coming with C++0x will include a lock() function for locking multiple mutexes together: this article describes the implementation details behind such a function.
Choose the lock order by role
The easiest way to deal with this is to always lock the mutexes in the same order. This is especially easy if the order can be hard-coded, and some uses naturally lend themselves towards this choice. For example, if the mutexes protect objects with different roles, it is relatively easy to always lock the mutex protecting one set of data before locking the other one. In such a situation, lock hierarchies can be used to enforce the ordering — with a lock hierarchy, a thread cannot acquire a lock on a mutex with a higher hierarchy level than any mutexes currently locked by that thread.
If it is not possible to decide a-priori which mutex to lock first, such as when the mutexes are associated with the same sort of data, then a more complicated policy must be applied.
Choose the lock order by address
The simplest technique in these cases is to always lock the mutexes in ascending order of address (examples use the types and functions from the upcoming 1.35 release of Boost), like this:
    void lock(boost::mutex& m1,boost::mutex& m2)
    {
        if(&m1<&m2)
        {
            m1.lock();
            m2.lock();
        }
        else
        {
            m2.lock();
            m1.lock();
        }
    }
This works for small numbers of mutexes, provided this policy is maintained throughout the application, but if several mutexes must be locked together, then calculating the ordering can get complicated, and potentially inefficient. It also requires that the mutexes are all of the same type. Since there are many possible mutex and lock types that an application might choose to use, this is a notable disadvantage, as the function must be written afresh for each possible combination.
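One way around the same-type restriction is to template the function on the mutex types. The sketch below does that (the function names are my own); it also uses std::less, which makes the pointer comparison well-defined even for unrelated objects, where raw < technically is not:

```cpp
#include <cassert>
#include <functional>
#include <mutex>
#include <utility>

// Lock two mutexes of possibly different types in ascending address order.
template <typename M1, typename M2>
void ordered_lock(M1& m1, M2& m2) {
    if (std::less<void*>()(&m1, &m2)) {
        m1.lock();
        m2.lock();
    } else {
        m2.lock();
        m1.lock();
    }
}

// Expose the chosen order so the policy can be checked: the result is the
// same whichever way round the arguments are passed.
template <typename M1, typename M2>
std::pair<void*, void*> lock_order(M1& m1, M2& m2) {
    if (std::less<void*>()(&m1, &m2)) {
        return {&m1, &m2};
    }
    return {&m2, &m1};
}
```

Every thread that computes the order this way gets the same answer, which is precisely why the scheme avoids deadlock.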
Order mutexes "naturally", with try-and-back-off
If the mutexes cannot be ordered by address (for whatever reason), then an alternative scheme must be found. One such scheme is to use a try-and-back-off algorithm: try and lock each mutex in turn; if any cannot be locked, unlock the others and start again. The simplest implementation for 3 mutexes looks like this:
    void lock(boost::mutex& m1,boost::mutex& m2,boost::mutex& m3)
    {
        do
        {
            m1.lock();
            if(m2.try_lock())
            {
                if(m3.try_lock())
                {
                    return;
                }
                m2.unlock();
            }
            m1.unlock();
        }
        while(true);
    }
Wait for the failed mutex
The big problem with this scheme is that it always locks the mutexes in the same order. If m1 and m2 are currently free, but m3 is locked by another thread, then this thread will repeatedly lock m1 and m2, fail to lock m3, and unlock m1 and m2. This just wastes CPU cycles for no gain. Instead, what we want to do is block waiting for m3, and try to acquire the others only when m3 has been successfully locked by this thread. For three mutexes, a first attempt looks like this:
    void lock(boost::mutex& m1,boost::mutex& m2,boost::mutex& m3)
    {
        unsigned lock_first=0;
        while(true)
        {
            switch(lock_first)
            {
            case 0:
                m1.lock();
                if(m2.try_lock())
                {
                    if(m3.try_lock())
                        return;
                    lock_first=2;
                    m2.unlock();
                }
                else
                {
                    lock_first=1;
                }
                m1.unlock();
                break;
            case 1:
                m2.lock();
                if(m3.try_lock())
                {
                    if(m1.try_lock())
                        return;
                    lock_first=0;
                    m3.unlock();
                }
                else
                {
                    lock_first=2;
                }
                m2.unlock();
                break;
            case 2:
                m3.lock();
                if(m1.try_lock())
                {
                    if(m2.try_lock())
                        return;
                    lock_first=1;
                    m1.unlock();
                }
                else
                {
                    lock_first=0;
                }
                m3.unlock();
                break;
            }
        }
    }
Simplicity and Robustness
This code is very long-winded, with all the duplication between the case blocks. Also, it assumes that the mutexes are all boost::mutex, which is overly restrictive. Finally, it assumes that the try_lock calls don't throw exceptions. Whilst this is true for the Boost mutexes, it is not required to be true in general, so a more robust implementation that allows the mutex type to be supplied as a template parameter will ensure that any exceptions thrown will leave all the mutexes unlocked: the unique_lock template will help with that by providing RAII locking. Taking all this into account leaves us with the following:
    template<typename MutexType1,typename MutexType2,typename MutexType3>
    unsigned lock_helper(MutexType1& m1,MutexType2& m2,MutexType3& m3)
    {
        boost::unique_lock<MutexType1> l1(m1);
        boost::unique_lock<MutexType2> l2(m2,boost::try_to_lock);
        if(!l2)
        {
            return 1;
        }
        if(!m3.try_lock())
        {
            return 2;
        }
        l2.release();
        l1.release();
        return 0;
    }

    template<typename MutexType1,typename MutexType2,typename MutexType3>
    void lock(MutexType1& m1,MutexType2& m2,MutexType3& m3)
    {
        unsigned lock_first=0;
        while(true)
        {
            switch(lock_first)
            {
            case 0:
                lock_first=lock_helper(m1,m2,m3);
                if(!lock_first)
                    return;
                break;
            case 1:
                lock_first=lock_helper(m2,m3,m1);
                if(!lock_first)
                    return;
                lock_first=(lock_first+1)%3;
                break;
            case 2:
                lock_first=lock_helper(m3,m1,m2);
                if(!lock_first)
                    return;
                lock_first=(lock_first+2)%3;
                break;
            }
        }
    }
This code is simultaneously shorter, simpler and more general than the previous implementation, and is robust in the face of exceptions. The lock_helper function locks the first mutex, and then tries to lock the other two in turn. If either of the try_locks fails, then all currently-locked mutexes are unlocked, and it returns the index of the mutex that couldn't be locked. On success, the release members of the unique_lock instances are called to release ownership of the locks, and thus stop them automatically unlocking the mutexes during destruction, and 0 is returned. The outer lock function is just a simple wrapper around lock_helper that chooses the order of the mutexes so that the one that failed to lock last time is tried first.
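For comparison, C++11 eventually shipped this facility as std::lock, which is specified to use a deadlock-avoidance algorithm along the lines of the try-and-back-off scheme above. A small demonstration (the function name is illustrative): two threads lock the same pair of mutexes in opposite argument orders, which could deadlock with naive in-order locking, but completes reliably here.

```cpp
#include <cassert>
#include <functional>
#include <mutex>
#include <thread>

int opposite_order_demo() {
    std::mutex a, b;
    int counter = 0;
    auto worker = [&counter](std::mutex& first, std::mutex& second) {
        for (int i = 0; i < 1000; ++i) {
            std::lock(first, second);   // deadlock-free acquisition of both
            std::lock_guard<std::mutex> g1(first, std::adopt_lock);
            std::lock_guard<std::mutex> g2(second, std::adopt_lock);
            ++counter;                  // protected by both locks
        }
    };
    std::thread t1(worker, std::ref(a), std::ref(b));
    std::thread t2(worker, std::ref(b), std::ref(a));
    t1.join();
    t2.join();
    return counter;                     // 2000: no increment was lost
}
```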
Extending to more mutexes
This scheme can also be easily extended to handle more mutexes, though the code gets unavoidably longer, since there are more cases to handle — this is where the C++0x variadic templates will really come into their own. Here's the code for locking 5 mutexes together:
    template<typename MutexType1,typename MutexType2,typename MutexType3,
             typename MutexType4,typename MutexType5>
    unsigned lock_helper(MutexType1& m1,MutexType2& m2,MutexType3& m3,
                         MutexType4& m4,MutexType5& m5)
    {
        boost::unique_lock<MutexType1> l1(m1);
        boost::unique_lock<MutexType2> l2(m2,boost::try_to_lock);
        if(!l2)
        {
            return 1;
        }
        boost::unique_lock<MutexType3> l3(m3,boost::try_to_lock);
        if(!l3)
        {
            return 2;
        }
        boost::unique_lock<MutexType4> l4(m4,boost::try_to_lock);
        if(!l4)
        {
            return 3;
        }
        if(!m5.try_lock())
        {
            return 4;
        }
        l4.release();
        l3.release();
        l2.release();
        l1.release();
        return 0;
    }

    template<typename MutexType1,typename MutexType2,typename MutexType3,
             typename MutexType4,typename MutexType5>
    void lock(MutexType1& m1,MutexType2& m2,MutexType3& m3,
              MutexType4& m4,MutexType5& m5)
    {
        unsigned const lock_count=5;
        unsigned lock_first=0;
        while(true)
        {
            switch(lock_first)
            {
            case 0:
                lock_first=lock_helper(m1,m2,m3,m4,m5);
                if(!lock_first)
                    return;
                break;
            case 1:
                lock_first=lock_helper(m2,m3,m4,m5,m1);
                if(!lock_first)
                    return;
                lock_first=(lock_first+1)%lock_count;
                break;
            case 2:
                lock_first=lock_helper(m3,m4,m5,m1,m2);
                if(!lock_first)
                    return;
                lock_first=(lock_first+2)%lock_count;
                break;
            case 3:
                lock_first=lock_helper(m4,m5,m1,m2,m3);
                if(!lock_first)
                    return;
                lock_first=(lock_first+3)%lock_count;
                break;
            case 4:
                lock_first=lock_helper(m5,m1,m2,m3,m4);
                if(!lock_first)
                    return;
                lock_first=(lock_first+4)%lock_count;
                break;
            }
        }
    }
Final Code
The final code for acquiring multiple locks provides try_lock and lock functions for 2 to 5 mutexes. Though the try_lock functions are relatively straightforward, their existence makes the lock_helper functions slightly simpler, as they can just defer to the appropriate overload of try_lock to cover all the mutexes beyond the first one.
Posted by Anthony Williams
[/ threading /] permanent link
Tags: threading, concurrency, mutexes, locks
Thread Library Now in C++0x Working Draft
Monday, 11 February 2008
The latest proposal for the C++ standard thread library has finally made it into the C++0x working draft.
Woohoo!
There will undoubtedly be minor changes as feedback comes in to the committee, but this is the first real look at what C++0x thread support will entail, as approved by the whole committee. The working draft also includes the new C++0x memory model, and atomic types and operations. This means that for the first time, C++ programs will legitimately be able to spawn threads without immediately straying into undefined behaviour. Not only that, but the memory model has been very carefully thought out, so it should be possible to write even low-level stuff such as lock-free containers in Standard C++.
Posted by Anthony Williams
[/ threading /] permanent link
Tags: threading, C++, C++0x, news
Database Tip: Eliminate Duplicate Data
Friday, 25 January 2008
Storing duplicated data in your database is a bad idea for several reasons:
- The duplicated data occupies more space — if you store two copies of the same data in your database, it takes twice as much space.
- If duplicated data is updated, it must be changed in more than one place, which is more complex and may require more code than just changing it in one location.
- Following on from the previous point — if data is duplicated, then it is easy to miss one of the duplicates when updating, leading to different copies having different information. This may lead to confusion, and errors further down the line.
Coincidental Duplication
It is worth noting that some duplication is coincidental — it is worth checking out whether a particular instance of duplication is coincidental or not before eliminating it. For example it is common for a billing address to be the same as a delivery address, and it may be that for all existing entries in the table it is the same, but they are still different concepts, and therefore need to be handled as such (though you may manage to eliminate the duplicate storage where they are the same).
Duplication Between Tables
One of the benefits of using an artificial primary key is that you can avoid duplication of data between the master table and those tables which have foreign keys linked to that table. This reduces the problems described above where the duplication is in the foreign key, but is only the first step towards eliminating duplication within a given table.
If there is duplication of data between tables that is not due to foreign key constraints, and is not coincidental duplication, then it is possibly worth deleting one of the copies, or making both copies reference the same row in a new table.
Duplication Between Rows Within a Table
Typically duplication between rows occurs through the use of a composite primary key, along with auxiliary data. For example, a table of customer orders might include the full customer data along with each order entry:
CUSTOMER_ORDERS

| CUSTOMER_NAME | CUSTOMER_ADDRESS | ORDER_NUMBER | ITEM | QUANTITY |
|---|---|---|---|---|
| Sprockets Ltd | Booth Ind Est, Boston | 200804052 | Widget 23 | 450 |
| Sprockets Ltd | Booth Ind Est, Boston | 200804052 | Widget Connector | 900 |
| Foobar Inc | Baz Street, London | 200708162 | Widget Screw size 5 | 220 |
| Foobar Inc | Baz Street, London | 200708162 | Widget 42 | 55 |
In order to remove duplication between rows, the data needs to split into two tables: the duplicated data can be stored as a single row in one table, referenced by a foreign key from the other table. So, the above example could be split into two tables: a CUSTOMER_ORDERS table, and an ORDER_ITEMS table:
CUSTOMER_ORDERS

| CUSTOMER_NAME | CUSTOMER_ADDRESS | ORDER_NUMBER |
|---|---|---|
| Sprockets Ltd | Booth Ind Est, Boston | 200804052 |
| Foobar Inc | Baz Street, London | 200708162 |

ORDER_ITEMS

| ORDER_NUMBER | ITEM | QUANTITY |
|---|---|---|
| 200804052 | Widget 23 | 450 |
| 200804052 | Widget Connector | 900 |
| 200708162 | Widget Screw size 5 | 220 |
| 200708162 | Widget 42 | 55 |
The ORDER_NUMBER column would be the primary key of the CUSTOMER_ORDERS table, and a foreign key in the ORDER_ITEMS table. This isn't the only duplication in the original table, though — what if one customer places multiple orders? In this case, not only are the customer details duplicated for every item on an order, they are duplicated for every item on every order by that customer. This duplication is still present in the new schema, but in this case it is a business decision whether to keep it — if a customer changes address, do you update the old orders with the new address, or do you leave those entries alone, since that was the address that order was delivered to? If the delivered-to address is important, then this is coincidental duplication as described above; if not, then it too can be eliminated by splitting the CUSTOMER_ORDERS table into two.
The Downsides of Eliminating Duplication
The benefits of eliminating duplication might seem obvious, but there are potential downsides too. For example:
- If the application is already released, you need to provide upgrade code to change existing databases over to the new schema without losing any data.
- If you split tables in order to reduce duplication, your SQL can get more complicated, as you need more table joins.
Conclusion
As with everything in software development, it's a trade-off. However, as the database gets larger, and more data gets stored, the costs of storing duplicate data increase, as do the costs of changing the schema of an existing database. For this reason, I believe that it is worth designing your schema to eliminate duplication as soon as possible — preferably before there's any data in it!
Posted by Anthony Williams
[/ database /] permanent link
Tags: database, duplication
The Most Popular Articles of 2007
Monday, 14 January 2008
Now we're getting into 2008, here's a list of the 10 most popular articles on the Just Software Solutions website for 2007:
- Implementing drop-down menus in pure CSS (no JavaScript): How to implement drop-down menus in CSS in a cross-browser fashion (with a teensy bit of JavaScript for IE).
- Elegance in Software and Elegance in Software part 2: What makes software elegant?
- Reduce Bandwidth Usage by Supporting If-Modified-Since in PHP: Optimize your website by allowing browsers to cache pages that haven't changed.
- Introduction to C++ Templates (PDF): How to use and write C++ templates.
- Using CSS to Replace Text with Images: How to use CSS to display titles and logos as images whilst allowing search engines and users with text-only browsers to see the text.
- Testing on Multiple Platforms with VMWare: The benefits of using VMWare for testing your code or website on multiple platforms.
- 10 Years of Programming with POSIX Threads: A review of "Programming with POSIX Threads" by David Butenhof, 10 years after publication.
- Review of Test Driven Development — A Practical Guide, by Dave Astels: This book will help you to learn TDD.
- Implementing Synchronization Primitives for Boost on Windows Platforms: The technical details behind the current implementation of boost::mutex on Windows.
- Building on a Legacy: How to handle legacy code.
Posted by Anthony Williams
[/ news /] permanent link
Tags: popular, articles
The Future of Concurrency in C++: ACCU 2008
Monday, 07 January 2008
I am pleased to start 2008 with some good news: I will be speaking on "The Future of Concurrency in C++" at ACCU 2008.
Here's the synopsis:
With the next version of the C++ Standard (C++0x), concurrency support is being added to C++. This means a new memory model with support for multiple threads of execution and atomic operations, and a new set of library classes and functions for managing threads and synchronizing data. There are also further library enhancements planned for the next technical report (TR2). This talk will provide an overview of the new facilities, including an introduction to the new memory model, and an in-depth look at how to use the new library. Looking forward to TR2, this talk will cover the proposed library extensions, and how facilities like futures will affect the programming model.
I hope to see you there!
Posted by Anthony Williams
[/ news /] permanent link
Tags: news, concurrency, threading, accu
Elegance in Software Part 2
Tuesday, 11 December 2007
In my earlier blog post on Elegance in Software I gave a list of things that I feel contribute to elegant code, and asked for input from my readers. This post is the promised follow-up.
Several respondents mentioned the book Beautiful Code, which is a collection of essays by "leading computer scientists" describing code they feel is beautiful, and why. I've only read an excerpt myself, but it's got good reviews, and what I've read has enticed me to read more. There's also a blog related to the book, which is well worth a read.
Another common theme was the "I know it when I see it" factor. Though I alluded to this in the introduction of my previous post by saying that "elegance is in the eye of the beholder", a lot of people felt this was far more important than any "tick list": there's something special about truly elegant code that transcends the details, just like really good art is more than just a collection of well-executed brush strokes that make up a well-chosen composition. I agree here: elegant code just says "ooh, that's good" when you read it, it has a "Quality without a Name".
Thomas Guest pointed out that appearance plays a part (whilst also discussing the importance of efficiency), and I agree. This ties in with good naming and short functions: if the code is poorly laid out, it's hard to argue that it's elegant. Yes, you can get a "source code beautifier" to physically rearrange the code, but good appearance often goes beyond that: if(some_boolean == true) is just not elegant, no matter how well-spaced it is. This also impacts the language used: it's harder to write "pretty" code in Perl than in Ruby or Scheme.
I particularly liked Chris Dollin's characterization: it is obvious what elegant code does when you read it, but it's not necessarily an obvious approach when you haven't seen it before. This ties in with another theme amongst respondents: simplicity. Though I mentioned "minimal code" and "easy to understand" in my original list, "simplicity" goes beyond that, and I think that Chris's obvious solution to a complex problem highlights this. If the code is sufficiently easy to understand that a solution to a complex problem appears obvious, then it's probably a good demonstration of simplicity. Such code is "clever with a purpose" (as Pat Maddox described it).
Jim Shore has an interesting article on good design, in which he argues that the eye-of-the-beholder-ness of "elegant" is too vague for his liking, and instead tries to argue for "Quality with a Name". He says:
"A good software design minimizes the time required to create, modify, and maintain the software while achieving acceptable run-time performance."
Whilst this is definitely true, this ties in with the "tick list" from my previous posting. Elegant code is more than that, and I think this is important: software development is a craft, and developers are craftsmen. By taking pride in our work, by striving to write code that is not just good, but elegant, we are improving the state of our craft. Just as mathematicians strive for beautiful or elegant proofs, and are not satisfied with a proof by exhaustion, we should not be satisfied with code that is merely good, but strive for code that is elegant.
It is true that what I find to be elegant may be different from what you find to be elegant, but I believe that good programmers would agree that two pieces of code were "good code" even if they differ in their opinion of which is more elegant, much as art critics would agree that a painting by Monet and one by Van Gogh were both good paintings, whilst differing in their opinion of which is better.
Posted by Anthony Williams
[/ design /] permanent link
Tags: design, elegance, software
Design and Content Copyright © 2005-2025 Just Software Solutions Ltd. All rights reserved. | Privacy Policy