Re: New Machine Translation Project
From: Yahya Abdal-Aziz <yahya@...>
Date: Monday, April 10, 2006, 5:53
Hi,
On Sun, 9 Apr 2006, Paul Bennett wrote:
>
> On Sun, 09 Apr 2006 22:16:10 -0400, Ph.D.
> <phil@...> wrote:
>
> > Anyway, I thought perhaps someone here might be
> > interested.
> >
> >
> > > http://unikom.org/
>
> I love the reason they'll succeed where others have failed:
>
> "We'll cope with the stuff MT doesn't do well ... by ... uhh ...
> MAKING A
> HUMAN DO IT! Yeah, that's the ticket!"
>
> Also, one would think that the ideal medium for an intermediate
> language would be one which is specifically designed for the
> purpose (i.e. machine-readable, self-segregating, and all the
> other things), as opposed to one whose goal was (more or less)
> to be readily human-learnable.
Having just replied to Jim Henry and Carsten Becker
on how natlangs deal with ambiguity, by "evolving"
work-arounds, it occurs to me that another approach
might be possible here. Instead of looking for an
ideal intermediate language, we could look for an
ideal combination of languages to serve as
intermediaries. Further, we could let the computer
do the looking, by using a genetic algorithm (GA) -
one that evolves candidate combinations under some
sort of payoff law ("objective function"), creating
new variants by mutation and rewarding those that
survive best. The "ideal" combination would change
from time to time, learning from the language
material it was exposed to. We'd simply start with
as many languages as we had available, and let the
algorithm decide how much weight to give each of
them in translating new material. For some purposes,
a highly ambiguous and poetic language might serve
best.
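
Just to make that concrete, here is a rough sketch in Python
of the kind of loop I have in mind - nothing more than a toy,
with a made-up language pool and a made-up payoff function
standing in for any real measure of translation quality:

import random

# Hypothetical pool of candidate intermediary languages.
LANGUAGES = ["lojban", "esperanto", "latin", "english"]

def random_weights():
    """A random weighting over the language pool, summing to 1."""
    w = [random.random() for _ in LANGUAGES]
    total = sum(w)
    return [x / total for x in w]

def mutate(weights, rate=0.1):
    """Create a new variant: perturb each weight, then renormalise."""
    w = [max(0.0, x + random.gauss(0, rate)) for x in weights]
    total = sum(w) or 1.0
    return [x / total for x in w]

def fitness(weights):
    """Toy payoff law ("objective function"): distance from an
    invented target weighting.  A real one would score how well
    this mix of intermediaries translates sample material."""
    target = [0.5, 0.3, 0.15, 0.05]   # made up, for demonstration only
    return -sum((a - b) ** 2 for a, b in zip(weights, target))

def evolve(generations=200, pop_size=30, keep=10):
    """Keep the fittest weightings each generation and breed the
    rest from them by mutation - rewarding "survival skills"."""
    population = [random_weights() for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        survivors = population[:keep]
        children = [mutate(random.choice(survivors))
                    for _ in range(pop_size - keep)]
        population = survivors + children
    return max(population, key=fitness)

if __name__ == "__main__":
    best = evolve()
    for lang, w in zip(LANGUAGES, best):
        print("%s: %.2f" % (lang, w))

In a real system the payoff law would of course score each
weighting against actual language material, not against an
invented target.
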
So if I ever decide to do an MT program, it would
probably combine GA techniques with a neural net
to handle the learning.
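
Roughly, the neural net would act as the learned judge of
output quality, and its score would become the GA's payoff
law. In outline (every name below is a stand-in, not a real
component):

def translate_via(weights, sentence):
    # stand-in: would route the sentence through the weighted
    # mix of intermediary languages
    return sentence

def neural_scorer(translation):
    # stand-in: would be a trained network judging the output's
    # fluency and adequacy; here it just returns a dummy score
    return 1.0

def make_fitness(corpus):
    """Wrap the learned scorer as the GA's payoff law."""
    def fitness(weights):
        scores = [neural_scorer(translate_via(weights, s))
                  for s in corpus]
        return sum(scores) / len(scores)
    return fitness

The fitness function built by make_fitness(corpus) would then
be handed to the evolve() loop above in place of the toy
payoff function.
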
Regards,
Yahya