Re: "Difficult" clauses
From: Sam Pointon <sampointon@...>
Date: Saturday, May 19, 2007, 0:22
(attempt 3 - the software really doesn't like me)
On 5/19/07, Eric Christopherson <rakko@...> wrote:
> On May 14, 2007, at 9:10 PM, David G. Durand wrote:
> > I suspect I make a better computer scientist than I would have a
> > linguist, but I never got over my disgust with grammatical theories
> > that reach Turing-completeness and _still_ need features added to
> > them to fix the elegance of their analysis. For the less-geeky,
> > that means that the TG grammar mechanisms have enough power that
> > one can't necessarily determine that a sentence is ungrammatical
> > even with infinite time available -- and they still weren't elegant.
> I'm confused by that last sentence; how could a mechanism have enough
> power *not* to be able to do something?
This is a rather mathematical topic, so bear with me on this one...
A Turing-complete language is one in which any sequence of instructions (or
algorithm) can be expressed and interpreted. All Turing-complete languages
are, power-wise at least, exactly equivalent to each other: there is
nothing that can be expressed in one that cannot be expressed in some way in
another.
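One way to see that equivalence: any Turing-complete language can interpret any other. As an illustration (not anything from the thread), here is a minimal Python interpreter for Brainfuck, a famously tiny Turing-complete language, with the input command left out to keep the sketch short:

```python
def run_bf(code, tape_len=3000):
    """A minimal Brainfuck interpreter (no ',' input command).
    If Python can interpret Brainfuck, anything expressible in
    Brainfuck is expressible, in some way, in Python."""
    # Pre-compute matching bracket positions for the loop commands.
    jumps, stack = {}, []
    for i, c in enumerate(code):
        if c == '[':
            stack.append(i)
        elif c == ']':
            j = stack.pop()
            jumps[i], jumps[j] = j, i
    tape, ptr, pc, out = [0] * tape_len, 0, 0, []
    while pc < len(code):
        c = code[pc]
        if c == '>':
            ptr += 1
        elif c == '<':
            ptr -= 1
        elif c == '+':
            tape[ptr] = (tape[ptr] + 1) % 256
        elif c == '-':
            tape[ptr] = (tape[ptr] - 1) % 256
        elif c == '.':
            out.append(chr(tape[ptr]))
        elif c == '[' and tape[ptr] == 0:
            pc = jumps[pc]      # skip the loop body
        elif c == ']' and tape[ptr] != 0:
            pc = jumps[pc]      # jump back to the loop start
        pc += 1
    return ''.join(out)

# "++++++++[>++++++++<-]>+." computes 8 * 8 + 1 = 65, i.e. chr(65):
print(run_bf("++++++++[>++++++++<-]>+."))  # -> A
```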
All algorithms can also be written in English, because they're just
sequences of instructions structured in a way that a machine could go
through them. Don't forget that an algorithm could be absolutely anything.
These are two example algorithms:
Algorithm #1:
(1) Go to the shop.
(2) If there is milk and eggs, buy some milk and a dozen eggs.
(3) Go home.
Algorithm #2:
(1) Go to the shop.
(2) If there is no milk, wait until a milk delivery comes.
(3) Buy some milk.
(4) If there are no eggs, wait until an egg delivery comes.
(5) Buy a dozen eggs.
(6) Go home.
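The two shopping algorithms can be sketched in Python; the shop-as-a-set and delivery-iterator details below are illustrative assumptions, not anything from the original:

```python
def algorithm_one(shop):
    """#1: buy milk and eggs only if both are already in stock.
    There is no waiting, so this always terminates."""
    if "milk" in shop and "eggs" in shop:
        return ["milk", "eggs"]
    return []  # go home empty-handed

def algorithm_two(shop, deliveries):
    """#2: wait for deliveries until milk, then eggs, are in stock.
    `deliveries` is an iterator of restock events; a stream that
    never yields milk makes the inner loop spin forever."""
    basket = []
    for item in ("milk", "eggs"):
        while item not in shop:
            shop.add(next(deliveries))  # 'wait until a delivery comes'
        shop.remove(item)
        basket.append(item)
    return basket
```

For instance, `algorithm_two({"eggs"}, iter(["milk"]))` terminates and returns `["milk", "eggs"]`, but feed it something like `itertools.cycle(["bread"])` and it never comes back.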
Intuitively, you can tell that #1 will always terminate: there is no
waiting involved, so at worst you go home empty-handed. What about #2? That
one could go on forever if the milk van never comes.
But this is intuition, not mathematical rigour. It can be shown that some
very simple algorithms will always terminate, but there is no general
method that can decide, for every algorithm and input, whether it will
terminate. This is known as the 'halting problem', and its unsolvability
was proven by (guess who?) Alan Turing.
(here comes the maths...)
Here is a simple sketch of the proof. Suppose we did have a way of telling
whether any given algorithm halts. Using the same style as above to
represent algorithms, we could then write an algorithm named 'problem',
which is given an arbitrary algorithm as input:
(1) Given an algorithm,
(2) If the algorithm, given itself as input, halts: loop forever.
(3) Otherwise, terminate.
Now, consider what happens if 'problem' is called with itself as input. If
the test in (2) decides that 'problem' halts, then 'problem' will actually
run forever, making the test incorrect. If it decides that 'problem' does
not terminate, then 'problem' will actually end, making it again wrong.
Either way there is a contradiction, so there can be no way to tell
accurately whether a given algorithm and its input will terminate in a
Turing-complete language.
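The contradiction can be sketched in Python. The names `make_problem` and `pessimist_halts` below are hypothetical, invented for this sketch; a real, always-correct `halts` function is exactly what the proof shows cannot exist:

```python
def make_problem(halts):
    """Build 'problem' from a claimed decider halts(f, x), which is
    supposed to return True exactly when f(x) terminates."""
    def problem(algorithm):
        if halts(algorithm, algorithm):   # step (2)...
            while True:                   # ...loop forever
                pass
        return "terminated"               # step (3): terminate
    return problem

# A candidate decider that claims nothing ever halts:
def pessimist_halts(f, x):
    return False

problem = make_problem(pessimist_halts)
# pessimist_halts said problem(problem) would never halt -- yet it
# does, so the decider is wrong. (An 'optimist' returning True would
# be wrong the other way: problem(problem) would then loop forever.)
print(problem(problem))  # -> terminated
```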
Since a Turing-complete language can express any algorithm, including silly
ones like 'problem', some programs written in a Turing-complete language
never terminate, and there is no general way of telling which ones. This is
a result of the language being too powerful: it can express all algorithms,
instead of only a limited class of them which is not subject to the halting
problem.
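By contrast, here is a sketch of recognition in a deliberately weak formalism, a deterministic finite automaton (the example machine below is made up): it takes exactly one step per input symbol, so termination is never in question.

```python
def run_dfa(transitions, accepting, start, text):
    """A DFA recognizer: one transition per input symbol, so it always
    terminates after exactly len(text) steps -- no halting problem."""
    state = start
    for symbol in text:
        state = transitions.get((state, symbol))
        if state is None:       # no transition defined: reject
            return False
    return state in accepting

# Hypothetical machine: strings over {a, b} that end in 'b'.
transitions = {("q0", "a"): "q0", ("q0", "b"): "q1",
               ("q1", "a"): "q0", ("q1", "b"): "q1"}
print(run_dfa(transitions, {"q1"}, "q0", "aab"))  # -> True
print(run_dfa(transitions, {"q1"}, "q0", "aba"))  # -> False
```

This is the trade-off the thread is circling: a formalism weak enough to make ungrammaticality decidable, versus one powerful enough to express anything at all.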