A dolphin handler gives the “together” and “create” hand signals. Underwater, the two trained dolphins exchange sounds before surfacing, flipping onto their backs, and lifting their tails. They have invented a brand-new trick of their own and performed it in unison, just as asked. “It doesn’t establish that there is language,” Aza Raskin cautions. Still, it stands to reason that a task like this would be far easier if the dolphins had access to a rich, symbolic system of communication.
Humans have long been fascinated by, and have long studied, animal vocalizations. Some primates give different alarm calls for different predators, dolphins address one another with signature whistles, and certain songbirds can rearrange the elements of their calls to convey different meanings. Yet few specialists would actually call any of this a language, because no known animal communication system satisfies all the criteria.
The first step in this process is an algorithm that represents words as points in a geometric space. In this multidimensional representation, the distance and direction between the points (words) encode their semantic relationships. For instance, the offset from “king” to “man” is the same as the offset from “queen” to “woman.” (The mapping is built not by understanding what the words mean but by analyzing, for instance, how frequently they appear next to one another.)
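The king/man/queen/woman relationship can be illustrated with a minimal sketch. The tiny 2-D vectors below are invented for illustration (real embedding models learn hundreds of dimensions from co-occurrence statistics); the point is that subtracting the “man” offset from “king” and adding “woman” lands near “queen.”

```python
import numpy as np

# Hypothetical 2-D "embedding space"; real models learn these
# coordinates from how often words co-occur in large text corpora.
emb = {
    "king":  np.array([0.9, 0.8]),
    "man":   np.array([0.9, 0.1]),
    "woman": np.array([0.1, 0.1]),
    "queen": np.array([0.1, 0.8]),
}

def cosine(a, b):
    """Cosine similarity: 1.0 means the vectors point the same way."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# king - man + woman should land near queen, because the
# king->man offset mirrors the queen->woman offset.
target = emb["king"] - emb["man"] + emb["woman"]
best = max((w for w in emb if w not in ("king", "man", "woman")),
           key=lambda w: cosine(target, emb[w]))
print(best)  # queen
```

The same vector-arithmetic query is what off-the-shelf embedding libraries perform when asked for word analogies, just in far higher-dimensional spaces.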
Another project, using humpback whales as a test species, employs AI to generate novel animal sounds. These are created by breaking recorded vocalizations into micro-phonemes, discrete units of sound one tenth of a second long; the synthetic calls can then be played back to the animals to observe how they react. If AI can distinguish random changes from semantically significant ones, Raskin argues, it moves humanity toward meaningful communication: having AI speak the language even before we understand it.
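The slicing step described above can be sketched in a few lines. This is a simplified assumption of how such preprocessing might look, not the project’s actual pipeline: it takes a waveform (here, synthetic noise standing in for a recording) and chops it into consecutive 0.1-second units at an assumed 16 kHz sampling rate.

```python
import numpy as np

SAMPLE_RATE = 16_000   # assumed sampling rate in Hz (not from the source)
UNIT_SEC = 0.1         # the tenth-of-a-second "micro-phoneme" length
unit_len = int(SAMPLE_RATE * UNIT_SEC)   # samples per unit: 1600

# Stand-in for a recorded vocalization: 2.35 s of synthetic signal.
signal = np.random.default_rng(0).standard_normal(int(2.35 * SAMPLE_RATE))

# Chop the waveform into consecutive 0.1 s units, dropping the partial tail.
n_units = len(signal) // unit_len
units = signal[: n_units * unit_len].reshape(n_units, unit_len)
print(units.shape)  # (23, 1600): 23 micro-phonemes of 1600 samples each
```

From here, a model could recombine or perturb individual units to synthesize novel calls for playback experiments.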