Blog Archive
Free SEO Tools for Webmasters
Monday, 24 September 2007
I thought I'd share some of the free online tools that I use for assisting with Search Engine Optimization.
The people behind the iWebTool Directory provide a set of free webmaster tools, including a Broken Link Checker, a Backlink Checker and their Rank Checker. For most tools, just enter your domain or URL in the box, click "Check!" and wait for the results.
Whereas the iWebTool tools each perform one small task, Website Grader is an all-in-one tool for grading your website. Put in your URL, the keywords you wish to rank well for, and the websites of your competitors (if you wish for a comparison). When you submit your site, the tool then displays its progress at the bottom of the page, and after a few moments will give a report on your website, including your PageRank, Alexa rank, inbound links and Google rankings for you and your competitors for the search terms you provided, as well as a quick analysis of the content of your page.
We Build Pages offers a suite of SEO tools, much like the ones from iWebTool. I find the Top Ten Analysis SEO Tool really useful, as it compares your site against the top ten ranking sites for the search term you specify. The Backlink and Anchor Text Tool is also pretty good — it takes a while, but eventually tells you which pages link to your site, and what anchor text they use for the link.
Posted by Anthony Williams
[/ webdesign /] permanent link
Using Interfaces for Exception Safety in Delphi
Thursday, 20 September 2007
Resource Management and Exception Safety
One concept that has become increasingly important when writing C++ code is that of Exception Safety — writing code so that invariants are maintained even if an exception is thrown. Since exceptions are also supported in Delphi, it makes sense to apply many of the same techniques to Delphi code.
One of the important aspects of exception safety is resource management — ensuring that resources are correctly freed even in the presence of exceptions, in order to avoid memory leaks or leaking of other more expensive resources such as file handles or database connection handles. Probably the most common resource management idiom in C++ is Resource Acquisition is Initialization (RAII). As you may guess from the name, this involves acquiring resources in the constructor of an object. However, the important part is that the resource is released in the destructor. In C++, objects created on the stack are automatically destroyed when they go out of scope, so this idiom works well — if an exception is thrown, then the local objects are destroyed (and thus the resources they own are released) as part of stack unwinding.
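To make the idiom concrete, here is a minimal RAII sketch in C++ (my own illustrative example, not taken from any particular library), wrapping a FILE* so that it is closed on every path out of a scope, including exceptions:

    #include <cstdio>
    #include <stdexcept>

    class file_handle
    {
        std::FILE* handle;

        file_handle(file_handle const&);            // non-copyable
        file_handle& operator=(file_handle const&);
    public:
        // acquire the resource in the constructor
        explicit file_handle(char const* name, char const* mode):
            handle(std::fopen(name, mode))
        {
            if(!handle)
                throw std::runtime_error("could not open file");
        }

        // release the resource in the destructor
        ~file_handle()
        {
            std::fclose(handle);
        }

        std::FILE* get() const
        {
            return handle;
        }
    };

    void example()
    {
        file_handle f("data.txt", "r");
        // ... code that may throw ...
    }   // f is destroyed here, even during stack unwinding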
In Delphi, things are not quite so straightforward: variables of class type are not stack allocated, and must be explicitly constructed by calling the constructor, and explicitly destroyed — in this respect, they are very like raw pointers in C++. This commonly leads to lots of try-finally blocks to ensure that variables are correctly destroyed when they go out of scope.
Delphi Interfaces
However, there is one type of Delphi variable that is automatically destroyed when it goes out of scope — an interface variable. Delphi interfaces behave very much like reference-counted pointers (such as boost::shared_ptr) in this regard — largely because they are used to support COM, which requires this behaviour. When an object is assigned to an interface variable, the reference count is increased by one. When the interface variable goes out of scope, or is assigned a new value, the reference count is decreased by one, and the object is destroyed when the reference count reaches zero. So, if you declare an interface for your class and use that interface type exclusively, then you can avoid all these try-finally blocks. Consider:
    type
      abc = class
        constructor Create;
        destructor Destroy; override;
        procedure do_stuff;
      end;

    procedure other_stuff;

    ...

    procedure foo;
    var
      x, y: abc;
    begin
      x := abc.Create;
      try
        y := abc.Create;
        try
          x.do_stuff;
          other_stuff;
          y.do_stuff;
        finally
          y.Free;
        end;
      finally
        x.Free;
      end;
    end;
All that try-finally machinery can seriously impact the readability of the code, and is easy to forget. Compare it with:
    type
      Idef = interface
        procedure do_stuff;
      end;

      def = class(TInterfacedObject, Idef)
        constructor Create;
        destructor Destroy; override;
        procedure do_stuff;
      end;

    procedure other_stuff;

    ...

    procedure foo;
    var
      x, y: Idef;
    begin
      x := def.Create;
      y := def.Create;
      x.do_stuff;
      other_stuff;
      y.do_stuff;
    end;
Isn't the interface-based version easier to read? Not only that, but in many cases you no longer have to worry about lifetime issues of objects returned from functions — the compiler takes care of ensuring that the reference count is kept up-to-date and the object is destroyed when it is no longer used. Of course, you still need to make sure that the code behaves appropriately in the case of exceptions, but this little tool can go a long way towards that.
Further Benefits
Not only do you get the benefit of automatic destruction when you use an interface to manage the lifetime of your class object, but you also get further benefits in the form of code isolation. The class definition can be moved into the implementation section of a unit, so that other code that uses this unit isn't exposed to the implementation details in terms of private methods and data. Not only that, but if the private data is of a type not exposed in the interface, you might be able to move a unit from the uses clause of the interface section to the implementation section. The reduced dependencies can lead to shorter compile times.
Another property of using an interface is that you can now provide alternative implementations of this interface. This can be of great benefit when testing, since it allows you to substitute a dummy test-only implementation when testing other code that uses this interface. In particular, you can write test implementations that return fixed values, or record the method calls made and their parameters.
Downsides
The most obvious downside is the increased typing required for the interface definition — all the public properties and methods of the class have to be duplicated in the interface. This isn't a lot of typing, except for really big classes, but it does mean that there are two places to update if the method signatures change or a new method is added. In the majority of cases, I think this is outweighed by the benefit of isolation and separation of concerns achieved by using the interface.
Another downside is the requirement to derive from TInterfacedObject. Whilst you can implement the IInterface methods yourself, unless you have a good reason to do so it is strongly recommended that you inherit from TInterfacedObject. One such "good reason" is that the class in question already inherits from another class, which doesn't derive from TInterfacedObject. In this case, you have no choice but to implement the functions yourself, which is tedious. One possibility is to create a data member rather than inherit from the problematic class, but that doesn't always make sense — you have to decide for yourself in each case. Sometimes the benefits of using an interface are not worth the effort.
As Sidu Ponnappa points out in his post 'Programming to interfaces' strikes again, "programming to interfaces" doesn't mean creating an interface for every class, which does seem to be what I am proposing here. Whilst I agree with his point, I think the benefits of using interfaces outweigh the downsides in many cases, for the reasons outlined above.
A Valuable Tool in the Toolbox
Whilst this is certainly not applicable in all cases, I have found it a useful tool when writing Delphi code, and will continue to use it where it helps simplify code. Though this article has focused on the exception safety aspect of using interfaces, I find the testing aspects particularly compelling when writing DUnit tests.
Posted by Anthony Williams
[/ delphi /] permanent link
Intel and AMD Define Memory Ordering
Monday, 17 September 2007
For a long time, the ordering of memory accesses between processors in a multi-core or multi-processor system based on the Intel x86 architecture has been under-specified. Many newsgroup posts have discussed the interpretation of the Intel and AMD software developer manuals, and how that translates into actual guarantees, but there has been nothing authoritative, despite comments from Intel engineers. This has now changed! Both Intel and AMD have now released documentation of their memory ordering guarantees — Intel has published a new white paper (Intel 64 Architecture Memory Ordering White Paper) devoted to the issue, whereas AMD have updated their programmer's manual (Section 7.2 of AMD64 Architecture Programmer's Manual Volume 2: System Programming Rev 3.13).
In particular, there are a couple of things that are now made explicitly clear by this documentation:
- Stores from a single processor cannot be reordered, and
- Memory accesses obey causal consistency, so
- An aligned load is an acquire operation, and
- An aligned store is a release operation, and
- A locked instruction (such as xchg or lock cmpxchg) is both an acquire and a release operation.
This has implications for the implementation of threading primitives such as mutexes for IA-32 and Intel 64 architectures — in some cases the code can be simplified, where it has been written to take a pessimistic interpretation of the specifications.
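To illustrate the kind of simplification involved, here is a minimal spinlock sketch (my own example, written with GCC's __sync builtins, and not code from any real mutex implementation). Given the documented guarantees, releasing the lock needs only a plain aligned store, not a locked instruction or an explicit fence:

    class spinlock
    {
        volatile int locked; // aligned int: 0 == unlocked, 1 == locked
    public:
        spinlock(): locked(0) {}

        void lock()
        {
            // __sync_lock_test_and_set compiles to a locked xchg on x86,
            // which is both an acquire and a release operation
            while(__sync_lock_test_and_set(&locked, 1))
            {
                while(locked) {} // spin on plain (acquire) loads
            }
        }

        void unlock()
        {
            // an aligned store is a release operation, so a plain store
            // of 0 suffices; __sync_lock_release is exactly that
            __sync_lock_release(&locked);
        }
    };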
Posted by Anthony Williams
[/ threading /] permanent link
New Papers for C++ Standards Committee
Tuesday, 11 September 2007
I've just added the most recent papers that I've submitted to the C++ Standards Committee to our publications page. Mostly these are on multi-threading in C++:
- N2139 — Thoughts on a Thread Library for C++,
- N2276 — Thread Pools and Futures, and
- N2320 — Multi-threading library for Standard C++
but there's also an update to my old paper on Names, Linkage and Templates (Rev 2), with new proposed wording for the C++ Standard now that the Evolution Working Group have approved the proposal in principle, and it has moved to the Core Working Group for final approval and incorporation into the Standard.
Posted by Anthony Williams
[/ news /] permanent link
Database Tip: Use Parameterized Queries
Monday, 03 September 2007
This post is the third in a series of Database Tips.
When running an application with a database backend, a high percentage of SQL statements are likely to have variable data. This might be data obtained from a previous query, or it might be data entered by the user. In either case, you've got to somehow combine this variable data with the fixed SQL string.
String Concatenation
One possibility is just to incorporate the data into the SQL statement directly, using string concatenation, but this has two
potential problems. Firstly, this means that the actual SQL statement parsed by the database is different every time. Many databases
can skip parsing for repeated uses of the same SQL statement, so by using a different statement every time there is a performance
hit. Secondly, and more importantly, this places the responsibility on you to ensure that the variable data will behave correctly as
part of the statement. This is particularly important for web-based applications, as a common attack used by crackers is a "SQL
injection" attack — by taking advantage of poor quoting by the application when generating SQL statements, it is possible to
input data which will end the current SQL statement, and start a new one of the cracker's choosing. For example, if string data is
just quoted in the SQL using plain quotes ('data'
) then data that contains a quote and a semicolon will end
the statement. This means that if data is '; update login_table set password='abc';
then the initial
';
will end the statement from the application, and the database will then run the next one, potentially setting
everyone's password to "abc".
Parameterized Queries
A solution to both these problems can be found in the form of Parameterized Queries. In a parameterized query, the variable data in the SQL statement is replaced with a placeholder such as a question mark, which indicates to the database engine that this is a parameter. The database API can then be used to set the parameters before executing the query. This neatly solves the first problem with string concatenation — the query seen by the database engine is the same every time, giving the database the opportunity to avoid parsing the statement every time. Most parameterized query APIs will also allow you to reuse the same query with multiple sets of parameters, thus explicitly caching the parsed query.
Parameterized queries also solve the SQL injection problem — most APIs can send the data directly to the database engine, marked as a parameter rather than quoting it. Even when the data is quoted within the API, this is then the database driver's responsibility, and is thus more likely to be reliable. In either case, you are relieved of the responsibility for correctly quoting the data, thus avoiding SQL injection attacks.
A third benefit of parameterized queries is that data doesn't have to be converted to a string representation. This means that, for example, floating point numbers can be correctly transferred to the database without first converting to a potentially inaccurate string representation. It also means that the statement might run slightly faster, as the string representation of data often requires more storage than the data itself.
The Parameterized Query Coding Model
Whereas running a simple SQL statement consists of just two parts — execute the statement and, optionally, retrieve the results — using parameterized queries often requires five:
- Parse the statement (often called preparing the statement).
- Bind the parameter values to the parameters.
- Execute the statement.
- Optionally, retrieve the results.
- Close or finalize the statement.
The details of each step depend on the particular database API, but most APIs follow the same outline. In particular, as mentioned above, most APIs allow you to run steps 2 to 4 several times before running step 5.
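As a concrete sketch, here is roughly what the five steps look like with the SQLite C API (mentioned in the recommendations below), assuming an open database handle db; the table and query are invented for illustration, and error handling is largely omitted:

    sqlite3_stmt *stmt = NULL;

    /* 1: parse ("prepare") the statement */
    sqlite3_prepare_v2(db, "SELECT name FROM customers WHERE id=?",
                       -1, &stmt, NULL);

    /* 2: bind the parameter values (parameters are numbered from 1) */
    sqlite3_bind_int(stmt, 1, 42);

    /* 3 and 4: execute the statement and retrieve the results */
    while(sqlite3_step(stmt) == SQLITE_ROW)
    {
        const unsigned char *name = sqlite3_column_text(stmt, 0);
        /* ... use name ... */
    }

    /* to reuse the statement with new parameter values, call
       sqlite3_reset() and bind again; otherwise: */

    /* 5: close ("finalize") the statement */
    sqlite3_finalize(stmt);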
Placeholders
A parameterized query includes placeholders for the actual data to be passed in. In the simplest form, these placeholders can often be just a question mark (e.g. SELECT name FROM customers WHERE id=?), but most APIs also allow for named placeholders by prefixing an identifier with a marker such as a colon or an at-sign (e.g. INSERT INTO books (title,author) VALUES (@title,@author)). The use of named placeholders can be beneficial when the same data is needed in multiple parts of the query — rather than binding the data twice, you just use the same placeholder name. Named placeholders are also easier to get right in the face of SQL statements with large numbers of parameters or if the SQL statement is changed — it is much easier to ensure that the correct data is associated with a particular named parameter, than to ensure that it is associated with the correctly-numbered parameter, as it is easy to lose count of parameters, or change their order when changing the SQL statement.
Recommendations
Look up the API for parameterized queries for your database. In SQLite, it's the APIs surrounding sqlite3_stmt; for MySQL, it's the Prepared Statements API; and for Oracle, the OCI parameterized statements API does the trick.
If your database API supports it, use named parameters, or at least explicit numbering (e.g. ?1,?2,?3 rather than just ?,?,?), to help avoid errors.
Posted by Anthony Williams
[/ database /] permanent link
Interview with Nick Hodges from CodeGear
Friday, 31 August 2007
Over at Channel 9, Michael Lehmann from Microsoft has posted a video in which he and Bob Walsh interview Nick Hodges, the Delphi programme manager from CodeGear. They discuss the future of Delphi, including support for Windows Vista, .NET 2.0 and WPF, as well as what makes Delphi good for small software companies.
For those looking to get to grips with Delphi, Nick recommends www.delphibasics.co.uk and www.marcocantu.com as resources well worth investigating.
Posted by Anthony Williams
[/ delphi /] permanent link
Database Tip: Use Transactions
Monday, 27 August 2007
Here's another Database Tip, to follow on from my previous one on creating appropriate indexes. This time the focus is transactions.
For any experienced database developer, using transactions might seem an obvious suggestion, but lightweight databases may require configuration in order to use transactions. For example, MySQL tables use the MyISAM engine by default, which doesn't support transactions — in order to use transactions you need to set the storage engine of your tables to InnoDB or BDB. Also, whereas in Oracle every statement occurs within a transaction, and you need an explicit COMMIT or ROLLBACK to end the transaction, in databases such as MySQL and SQLite every statement is its own transaction (and thus committed to the database immediately) unless you explicitly begin a transaction with a BEGIN statement, or other database configuration command.
Benefits of Transactions
The primary benefit of using transactions is data integrity. Many database uses require storing data to multiple tables, or multiple rows to the same table, in order to maintain a consistent data set. Using transactions ensures that other connections to the same database see either all the updates or none of them. This also applies in the case of interrupted connections — if the power goes off in the middle of a transaction, the database engine will roll back the transaction so it is as if it had never been started. If each statement is committed independently, then other connections may see partial updates, and there is no opportunity for automatic rollback on error.
A secondary benefit of using transactions is speed. There is often an overhead associated with actually committing the data to the database. If you've got 1000 rows to insert, committing after every row can cause quite a performance hit compared to committing once after all the inserts. Of course, this can work the other way too — if you do too much work in a transaction then the database engine can consume lots of space storing the not-yet-committed data or caching data for use by other database connections in order to maintain consistency, which causes a performance hit. As with every optimisation, if you're changing the boundaries of your transactions to gain performance, then it is important to measure both before and after the change.
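As a concrete sketch of the batched approach, here is the 1000-row insert using the SQLite C API (an invented table, with error handling omitted), using an explicit BEGIN as described in the SQLite section below, so that a single commit covers all the rows:

    sqlite3_stmt *stmt = NULL;

    sqlite3_exec(db, "BEGIN;", NULL, NULL, NULL);
    sqlite3_prepare_v2(db, "INSERT INTO foo (a,b) VALUES (?,?)",
                       -1, &stmt, NULL);

    for(int i = 0; i < 1000; ++i)
    {
        sqlite3_bind_int(stmt, 1, i);
        sqlite3_bind_text(stmt, 2, "some data", -1, SQLITE_STATIC);
        sqlite3_step(stmt);
        sqlite3_reset(stmt); /* ready for the next row's parameters */
    }
    sqlite3_finalize(stmt);

    /* one commit for all 1000 rows, instead of 1000 separate commits */
    sqlite3_exec(db, "COMMIT;", NULL, NULL, NULL);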
Using Transactions in Oracle
Oracle databases are always in transaction mode, so all that's needed is to decide where to put the COMMIT or ROLLBACK. When one transaction is finished, another is automatically started. There are some additional options that can be specified for advanced usage — see the Oracle documentation for these.
    INSERT INTO foo (a,b) VALUES (1,'hello');
    INSERT INTO foo (a,b) VALUES (2,'goodbye');
    COMMIT;

    INSERT INTO foo (a,b) VALUES (3,'banana');
    COMMIT;
Using Transactions in SQLite
In SQLite, if you wish a transaction to cover more than one statement, then you must use a BEGIN or BEGIN TRANSACTION statement. The transaction ends when you execute a COMMIT or ROLLBACK statement, and the database reverts to auto-commit mode where each statement has its own transaction.
    BEGIN;
    INSERT INTO foo (a,b) VALUES (1,'hello');
    INSERT INTO foo (a,b) VALUES (2,'goodbye');
    COMMIT;

    INSERT INTO foo (a,b) VALUES (3,'banana');
    -- implicit COMMIT at end of statement
    -- with no preceding BEGIN
Using Transactions in MySQL
As mentioned above, by default all tables use the MyISAM storage engine, so transaction support is disabled. By changing a table to use the InnoDB or BDB storage engines, transaction support can be enabled. For new tables, this is done using the ENGINE or TYPE parameters on the CREATE TABLE statement:
    CREATE TABLE foo (
        bar INTEGER PRIMARY KEY,
        baz VARCHAR(80) NOT NULL
    ) ENGINE = InnoDB;
Existing tables can be changed to use the InnoDB storage engine using ALTER TABLE:
    ALTER TABLE foo ENGINE = InnoDB;
You can also change the default storage engine using the default-storage-engine server option on the server command line, or in the server configuration file.
Once all the tables you're using have storage engines that support transactions, you have two choices. For a given connection, you can set the AUTOCOMMIT session variable to 0, in which case every statement within that connection is part of a transaction, as for Oracle, or you can leave AUTOCOMMIT set to 1 and start transactions explicitly, as for SQLite. In auto-commit mode for MySQL, transactions are started with BEGIN, BEGIN WORK or START TRANSACTION. To disable AUTOCOMMIT for the current connection, use the SET statement:
    SET AUTOCOMMIT=0;
You can configure the database to run this statement immediately upon opening a connection using the init_connect server variable. This can be set in the configuration file, or using the following command:
    SET GLOBAL init_connect='SET AUTOCOMMIT=0';
MySQL also supports additional transaction options — check the documentation for details.
Automatic ROLLBACK and COMMIT
One thing to watch out for is code that causes an automatic ROLLBACK or COMMIT. Most databases cause an automatic ROLLBACK of any open transaction when a connection is closed, so it is important to make sure that all changes are committed before closing the connection.
Also worth watching out for are commands that cause an automatic COMMIT. The list varies depending on the database, but generally DDL statements such as CREATE TABLE will cause an automatic commit. It is probably best to avoid interleaving DML statements with any other type of statement in order to avoid surprises. Check your database documentation for details.
Posted by Anthony Williams
[/ database /] permanent link
V1.3 of the dbExpress drivers for MySQL V5.0 released
Friday, 24 August 2007
New in this release:
- Correctly stores BLOB data with embedded control characters
- Correctly stores string data with embedded slashes
- BCD fields in parameterized queries now work correctly with DecimalSeparators other than '.'
- Time values stored and retrieved correctly
- TSQLTable support works correctly with Delphi 7
See the download page for more details.
Posted by Anthony Williams
[/ delphi /] permanent link
Database Tip: Create Appropriate Indexes
Monday, 20 August 2007
One of the simplest things you can do to speed up database access is to create appropriate indexes.
There are several aspects to this. Firstly, you need to identify which columns are used for queries on a given table; in particular, which columns appear in the WHERE clause of time-consuming queries. If a query is only done once, or the table only has five rows in it so queries are always quick, then there is no benefit to adding indexes. It's not just straightforward SELECT statements that need checking — UPDATE and DELETE statements can have WHERE clauses too.
Having identified which columns are used in queries, it is important to also note which combinations are used. A database engine will tend to only use one index per table (though some can use more, depending on the query), so if your time-consuming queries use multiple columns from the same table in the WHERE clause, then it's probably better to have an index that covers all of them rather than separate indexes on each. For example, a query that filters on both a customer ID and an order date is generally better served by a single index covering both columns (e.g. CREATE INDEX orders_by_customer_date ON orders (customer_id, order_date)) than by separate indexes on each column.
The cost of indexes
Adding an index to a table does have a downside. Any modification to a table with an index (such as inserts and deletes) will be slower, as the database will have to update the index, and the index will occupy space in the database. That can be a high price to pay for faster queries if the table is modified frequently. In particular, an index that is never used, or covers the same columns as another index, is just dead weight. It is important to remember that PRIMARY KEY and UNIQUE columns automatically have indexes that cover them.
Timing is everything
As with every optimisation, it is important to profile both before and after any change, and this includes checking the performance of the rest of the application too. Don't be afraid to remove an index if it isn't helping, and bear in mind that it's also possible to improve performance by rewriting queries, particularly where there are joins or subqueries involved. Most databases can also show you the plan for a given query (for example, via an EXPLAIN statement), which makes it easy to check whether your indexes are actually being used.
Posted by Anthony Williams
[/ database /] permanent link
Demonstrating Software on the Web
Friday, 10 August 2007
One of the techniques we use for collaborating with customers and obtaining feedback on work in progress is to demonstrate the software over the internet. This means that customers can see the software in action, without having to install it on their systems. This can be beneficial when the software is not yet complete, and we wish to demonstrate how a particular feature works — many customers are reluctant to install unfinished software on their systems to try it out, and an online demonstration means they don't have to do this.
Online demonstrations also provide scope for faster feedback — it's much quicker to start up a web demo than it is to ship a new version of the software to a customer, wait for them to find time to install it, and then talk them through it. This means that changes can be demonstrated as soon as they are ready, and also alternate versions can be shown in the case that the choice is unclear.
For most demonstrations we use TightVNC. This allows the screen of our demonstration PC to be replicated across the web. All that is needed at the customer's site is a web browser with Java installed. We send our users a URL to go to which then connects them to our demonstration PC and loads the TightVNC Java applet. The display is updated in real-time. As an added bonus, the server can also be configured to allow the customers to control the demonstration machine from their end, giving them a chance to try out the software and see how they would use it. We also have the customers on the phone (usually using a speaker-phone) at the same time, so we can talk them through the software, or the changes that have been made.
Though no substitute for a face-to-face meeting, such an online demonstration is considerably less expensive and less time-consuming for both parties, and can consequently be arranged far more often.
Posted by Anthony Williams
[/ feedback /] permanent link