Re: has anyone made a real conlang
From: Chris Bates <christopher.bates@...>
Date: Saturday, April 26, 2003, 23:15
Andrew, if you want my advice, you should separate actual fact from your
own opinions and admit that your opinions might be wrong. How are AIs
science, Andrew? Or common sense? The aim of science is to describe the
natural world. People might take that description and use it to make a
nuclear bomb or a quantum computer or anything else, but those things are
not science; they are applications of science. As for common sense, common
sense tells me that there will never be AIs, because the human race will
exterminate itself before we could invent them even if they are possible,
while your common sense says differently, so who's right? Common sense is
just opinion, not fact, and the authors of those books could hold a
different opinion from yours without being wrong.
Personally, I do not believe we will ever make a truly intelligent
machine, in the sense that a human being is intelligent, because even if
it is possible and the human race survives, there is no need to. Every
program people describe as AI now is focused on solving and automating
one particular problem so you no longer need to devote a human being to
it, which is all well and good, since humans sometimes come with more
baggage and costs than they're worth, but I have to ask: if you want
something that can think like a human being in every way, not just in one
area, why not use a human being? I do actually know a bit about AI, since
a couple of years before I started university, when I had more time, I
bought several books on the subject and spent a lot of time reading and
tapping away at my keyboard. *sigh* It all came to naught though, and I
got kind of put off when I had the idea of trying to combine AI and
conlanging: inventing a conlang that I could write a program to parse and
compile down to a list of statements, rules, etc. Of course, other people
have tried writing programs to converse in natural languages, and it was
no different with my conlang. I found that either I made the language
almost trivially easy for the machine to parse, which meant people would
never be able to use it, or I made it easy for people but almost
impossible for the machine to always get right.
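(To give a feel for the first horn of that dilemma, here is a toy Python
sketch, not my actual project; the grammar, the word lists, and the
compile_sentence function are all invented on the spot for illustration.
If every sentence is a rigid subject-verb-object triple drawn from closed
word lists, the whole "compiler" fits in a dozen lines, but the result is
something no person would ever want to speak.)

# Toy sketch of a conlang that is "trivially easy for the machine":
# a rigid SVO grammar over closed word lists, compiled down to
# (subject, verb, object) statements. All forms here are made up.

NOUNS = {"ka": "I", "to": "you", "mel": "water"}   # invented noun forms
VERBS = {"rin": "see", "dal": "drink"}             # invented verb forms

def compile_sentence(sentence):
    """Parse one sentence and compile it to a (subject, verb, object) rule."""
    words = sentence.lower().split()
    if len(words) != 3:
        raise ValueError("every sentence must be exactly: subject verb object")
    subj, verb, obj = words
    if subj not in NOUNS or obj not in NOUNS or verb not in VERBS:
        raise ValueError("unknown word in: " + sentence)
    return (NOUNS[subj], VERBS[verb], NOUNS[obj])

print(compile_sentence("ka rin to"))    # ('I', 'see', 'you')
print(compile_sentence("to dal mel"))   # ('you', 'drink', 'water')

The moment you allow anything people actually want, like free word order,
dropped arguments, or a little ambiguity, the easy version collapses, and
you're on the other horn.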
*shrugs* When you're 16 you think everything will be easy because
you're wonderful, lol; then you realize you're not perfect after all,
get depressed, and then get over it.
Chris.
>Markus Miekk-oja wrote:
>MM> I suspect the reason you'd be unable to distinguish
>MM> a computer-made language from a man-made one is that
>MM> you don't enjoy quirkiness or irregularities, so you
>MM> don't seek them out. Is this true? It is in the
>MM> irregularities that you can find the marks of human
>MM> hands.
>
>I believe that diversity is the essence of good music
>and good art. Diversity has nothing to do with randomness.
>Randomness can be described by a simple mathematical
>formula, and it therefore lacks the complexity that is the
>foundation of true diversity.
>
>Phonemic diversity is a desirable feature of a language
>because it helps distinguish the words. Other forms of
>linguistic diversity (irregularities) may not be desirable,
>because they make the language difficult to learn.
>
>David Starner wrote:
>DS> As a totally different twist on where this thread is
>DS> heading, how much help is it to have a wide variety
>DS> of languages available?
>
>I feel that artlangs are the most productive when they do
>something that other languages cannot do. For example, a
>sci-fi novel can use words of its own coinage to describe
>a future world. If the novel is well written, its novel
>vocabulary may be imported into the English language. The
>word robot is such a word; it was invented by Karel Capek.
>
>An off topic comment:
>
>Most sci-fi novels are not good, because they are not
>based on science and common sense. To the best of my
>knowledge, nobody has given a convincing description of a
>future world dominated by creatures of artificial
>intelligence (AI). Hans P. Moravec, Rodney A. Brooks,
>Ray Kurzweil, George B. Dyson, Steven Levy, Peter Menzel,
>and Faith D'Aluisio wrote semi-scientific books on the
>subject, but none of them described the evolution of the
>AI creatures well. The Fermi Paradox seems to imply that
>the AI creatures exterminate biological creatures and then
>become extinct.