Hi there... Had a chat with colleagues over coffee and somehow we got to discussing the infinite monkey theorem.
So I introduced the subject and explained in my own words that if a ridiculous number of monkeys, over a ridiculous amount of time, were banging on typewriters in a quite random manner, they would reproduce the complete works of Shakespeare. In this context, "ridiculous" numbers are large but finite.
We of course would not witness such a great historical moment, but it is possible - and possible precisely because other "similar" events have happened in the past. For example, DNA/RNA molecules were somehow composed out of atoms into complex sequences that facilitate reproducible life. The number it takes to describe the probability of life randomly creating itself is extremely large as well. Human DNA, if converted to bits and bytes, would occupy about 1.5 CDs - compact but efficient. This is definitely sufficient to store Shakespeare's sonnets (someone has already done it, by the way). And the two can be compared - the sonnets of a great poet to the poetry of life... Both are hardly random things, are they?
But enough philosophy. Let's just talk about monkeys, which I am certainly closer to (as human DNA is a 99% match to the DNA of a chimpanzee... even yours, dear reader)...
Ok... so let's think of a simpler case, for example monkeys writing a Shakespeare passage, or even a simple but genius sentence: "To be or not to be"... or even better: "To banana or not to banana".
So what would it take to get a group of monkeys to reproduce the sentence? Apparently there have been experiments conducted in real life. Not necessarily scientific, but with interesting results. Worth reading for laughs... It is interesting to point out that the monkeys were hitting the keyboard, but not quite randomly. As the Wikipedia account of the experiment states:
"Not only did the monkeys produce nothing but five pages consisting largely of the letter S"...
This is a very important point: even if we managed to gather a sufficient number of monkeys to type on the keyboards, they could just type "S" or any other letter. So the randomness and distribution of the letters pressed by the monkeys is very important. Otherwise the theorem can't be demonstrated, as the work produced by the monkeys will never converge to some specific sequence, but to something consisting mostly of continuously pressed letters.
Let's simplify the task even further and say that we want to make it happen in a short time, and that we will provide some means of giving the monkeys a positive or negative impression of what they are doing. For example, we will drop a banana or other fruit if a monkey hits some sequence that resembles the word. Then the sequences will converge at some point into words, and we will be able to influence learning from previous experience.
I was so taken by this idea that I decided to create a simple simulation in Scala, with Akka actors representing monkeys trying to guess words. Feedback is provided by MonkeySupervisors - highly trained Akka actors that analyse the sequences produced by the monkeys, compare them to the words the monkeys need to guess, and hand out grapes, an apple, an insect, a nut or a banana. It is all given as a score in the range 0 to 1, depending on the percentage similarity between the sequence produced by the monkey and the actual word. To determine similarity we used the Levenshtein distance algorithm (Scala implementation code borrowed from elsewhere), which specifies how many symbols need to be deleted, substituted or inserted to make the words match completely. Monkey actors randomly generate sequences of words using something like joined bigrams, and receive and accumulate feedback for each word produced. Over time a monkey will produce new random words and compare them to previous experience before a sequence is passed to the MonkeySupervisor for comparison. So let's see what I got after some experimentation:
Our monkeys type at rocket speed, and eventually the words get guessed. Here is sample output for "to banana or not to banana":
The words "to" and "or" were guessed almost immediately:
List((oh,50), (ot,50), (ov,50), (om,50), (od,50), (oc,50), (oz,50), (ob,50), (ok,50), (ox,50))
Distance: 1.0 word: or word1: or
Word "banana" below took sometime (minutes):
List((zanana,83), (janana,83), (ganana,83), (canana,83), (vanana,83), (wanana,83), (manana,83), (danaga,66), (tanaxa,66), (fanawa,66))
Distance: 0.8333333333333334 word: banana word1: kanana
List((zanana,83), (janana,83), (ganana,83), (canana,83), (vanana,83), (wanana,83), (manana,83), (kanana,83), (danaga,66), (tanaxa,66))
Distance: 0.8333333333333334 word: banana word1: lanana
List((zanana,83), (janana,83), (ganana,83), (canana,83), (vanana,83), (wanana,83), (lanana,83), (manana,83), (kanana,83), (danaga,66))
Distance: 0.8333333333333334 word: banana word1: panana
List((zanana,83), (janana,83), (ganana,83), (canana,83), (vanana,83), (wanana,83), (lanana,83), (manana,83), (kanana,83), (panana,83))
Distance: 1.0 word: banana word1: banana
Distance: 1.0 word: banana word1: banana
Monkey worked hard and is retiring now... zzzz...zzzz...
Word "not" was never guessed due to a nature of our "bigram" system it was either "no" or something more than that "nono" etc... As I mentioned it is not a pure bigram - in actual bigram system we could combine "no" and "ot" into "not" but it is used in a more simple way here... (EDIT: later I've modified the "bigram" generation method to sequencing vowels and consonant letters method - that improved random words generation allowing words like "not" to be guessed and others)
List((co,33), (do,33), (lo,33), (xo,33), (go,33), (qo,33), (po,33), (zo,33), (bo,33))
List((ho,50), (co,50), (do,50), (xo,50), (to,50), (go,50), (wo,50), (po,50), (zo,50), (mo,50))
Distance: 0.6666666666666667 word: not word1: no
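For what it's worth, the alternating vowel/consonant generation mentioned in the edit above can be sketched in a few lines of Scala (the names here are mine, for illustration only - not necessarily what the app uses):

```scala
import scala.util.Random

object WordGen {
  private val vowels = "aeiou"
  private val consonants = "bcdfghjklmnpqrstvwxyz"

  // Generate a pronounceable-looking random word by alternating
  // consonants and vowels, starting from a random letter class.
  // Unlike a fixed two-letter "bigram" scheme, this admits
  // odd-length words such as "not" (consonant-vowel-consonant).
  def randomWord(length: Int, rnd: Random = new Random): String = {
    val startWithVowel = rnd.nextBoolean()
    (0 until length).map { i =>
      val useVowel = (i % 2 == 0) == startWithVowel
      val pool = if (useVowel) vowels else consonants
      pool(rnd.nextInt(pool.length))
    }.mkString
  }
}
```

Because the alternation can start from either letter class and the length is free, words like "not" become reachable instead of being stuck at "no" or "nono".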
Here is the GitHub source of the Scala app:
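If you just want the gist of the scoring without digging through the repo, here is a minimal, self-contained sketch of the Levenshtein-based similarity (illustrative names of my own, not necessarily the repo's):

```scala
object Similarity {
  // Classic dynamic-programming Levenshtein distance: the minimum
  // number of insertions, deletions and substitutions needed to
  // turn `a` into `b`.
  def levenshtein(a: String, b: String): Int = {
    val dist = Array.ofDim[Int](a.length + 1, b.length + 1)
    for (i <- 0 to a.length) dist(i)(0) = i
    for (j <- 0 to b.length) dist(0)(j) = j
    for (i <- 1 to a.length; j <- 1 to b.length) {
      val cost = if (a(i - 1) == b(j - 1)) 0 else 1
      dist(i)(j) = math.min(
        math.min(dist(i - 1)(j) + 1, dist(i)(j - 1) + 1), // deletion / insertion
        dist(i - 1)(j - 1) + cost)                        // substitution
    }
    dist(a.length)(b.length)
  }

  // Normalised similarity in the range 0.0 to 1.0, like the
  // supervisor's feedback: 1.0 is an exact match.
  def score(guess: String, target: String): Double = {
    val maxLen = math.max(guess.length, target.length)
    if (maxLen == 0) 1.0
    else 1.0 - levenshtein(guess, target).toDouble / maxLen
  }
}
```

This reproduces the numbers in the output above: "kanana" vs "banana" is one substitution over six letters, giving 0.8333..., and "no" vs "not" is one insertion over three letters, giving 0.6666...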
As you may have spotted, I've genetically enhanced my monkeys: they use the Levenshtein distance function and verify the score of each produced sequence against the scores of the top 10 sequences so far (ordered by % feedback, descending). A monkey will keep guessing new sequences until one comes within the same distance as previously tried sequences. Meaning that over time, by learning from feedback, monkeys equipped with a distance function get a better idea of which sequences are likely to produce better results... Some would say this is not possible - monkeys, learning words, Levenshtein distance functions... but who knows... Planet of the Apes was a good movie anyway...
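The "top 10" memory the monkeys accumulate can be as simple as this (again an illustrative sketch, with scores kept as integer percentages like in the output lists above):

```scala
object Memory {
  // Keep the 10 best (guess, score%) pairs seen so far, ordered by
  // feedback descending. A re-guessed word replaces its old entry.
  def remember(history: List[(String, Int)],
               guess: String, score: Int): List[(String, Int)] =
    ((guess, score) :: history.filterNot(_._1 == guess))
      .sortBy(-_._2)
      .take(10)
}
```

Each new guess is folded into the history, and the monkey can then compare its next candidate against this list before bothering the MonkeySupervisor.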
Further improvement could be made by increasing the cognitive abilities of the monkeys: instead of randomly producing sequences of bigrams, they could deduce the words by looking at previously tried sequences. For example, here is a monkey trying to guess the word "interesting":
List((ihqefeitin,54), (itqedeitig,54), (igbeuditig,45), (ikseiqitin,45), (irgemeitik,45), (imgeidesitin,45), (imzeleitiv,45), (ifceysitin,45), (imveygitin,45), (ihvehuivin,36))
From the list of (word, score) pairs we can see that the results are getting better over time, but the time it takes to randomly guess the next word becomes increasingly longer (because of the number of possibilities and comparisons required). To reduce the set of possibilities, from the sequences above we could deduce that the first symbol is "I", the fourth is "E", the fifth is "E", the second-to-last is "N", and so on... and then search only within the space of the remaining symbols. That would of course mean that our monkey is something more like a human guessing words written in a foreign language. That would be an intelligent monkey... Otherwise, this is as close as it gets so far for the word "interesting". Eventually the monkey will guess the word, but it will be a long wait...
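That deduction - keeping only the letters all top guesses agree on, and leaving the rest open - is easy to sketch (my own illustrative code, not part of the app):

```scala
object Deduce {
  // From top-scoring guesses of equal length, keep a position's letter
  // only when every guess agrees on it; undecided positions become '_'.
  // The search then only needs to vary the '_' positions.
  def stablePositions(guesses: Seq[String]): String = {
    require(guesses.nonEmpty && guesses.forall(_.length == guesses.head.length))
    guesses.head.indices.map { i =>
      val letters = guesses.map(_(i)).toSet
      if (letters.size == 1) letters.head else '_'
    }.mkString
  }
}
```

In practice you would first filter the history down to guesses of one length, since the random generator produces mixed lengths (note the 12-letter "imgeidesitin" among the 10-letter guesses above).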
Another improvement could come from the monkeys sharing common experience. Say the monkeys guessing the word "interesting" could use the same memory - joined by a telepathic machine or some sort of feedback system that would propagate the experience of one monkey to the rest of the group, thus reducing the time required for the group to guess a word. That is easy to do programmatically, and in reality there have been some experiments done on humans (or with human participants). Not science fiction anymore, I'd say...
So there is no conclusion as such, but I think if someone took the task seriously and enhanced the experiment by providing feedback to the monkeys, then guessing could be turned into learning, and it is actually possible... possible?... ok... possible... hm...