I’ve been doing a little survey and a bit of thinking about chat bots to see if anyone’s come up with anything startling.
Most of them are very poor, despite wild claims of people mistaking them for humans. Here's my conversation with A.L.I.C.E., for example:
Human: hi Alice
ALICE: Hello there.
Human: what’s your email address?
ALICE: Try sending mail to Dr. Wallace at drwallace@alicebot.org.
Human: OK thanks
ALICE: Yikes!
Human: why yikes?
ALICE: How much time do you have?
Human: what?
ALICE: No problem.
Human: you’re not making sense.
ALICE: Quite the contrary, it all makes sense to my artificial mind.
The best one I found was the John Lennon Chat Bot.
Here's a transcript of one of my chats with the artificial John Lennon:
Giles: Hi, I’m Giles
John Lennon: What are you doing in my kitchen Giles ?
Giles: Just Hanging
John Lennon: Get rid of the ego, Giles
Giles: how do I do that?
John Lennon: How should I know how you do that .
Giles: OK – how would you do it?
John Lennon: We’ll be pretty tired after this event, you know.
But even that one falls down pretty easily:
Giles: What do you think of Paul’s music?
John Lennon: It depends.
Giles: On what?
John Lennon: On the remarks?
Giles: What remarks?
John Lennon: Interesting question.
Giles: You’re avoiding the question.
John Lennon: Do you have any idea what it’s like to answer the same questions over and over again for 30 or 40 thousand years?! Christ, Luv, it’s a bloody drag.. Change the subject, man.
I don't think it's easy, but people shouldn't make the wild claims they do.
Anyway, I've been thinking: there must be a solution to this problem. It's basically one of the great unsolved problems: how to get computers to understand and react to natural language.
Here are some thoughts:
Important features:
I checked the source code and it seems Alice is lacking:
1. A memory bank + stateful behaviour (a minimal sketch follows this list).
2. An apprentice site where people can tell her she's talking crap, and how, in a way that is meaningful to her.
3. A cognitive model.
4. A goal-seeking module.
5. A better, emergence-based rule system that allows her to learn (I'll have to take a look at what she's using).
6. More experience.
7. The ability to deal with emoticons and other stylistic aspects of text.
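As a very rough illustration of point 1, here's a minimal sketch (in Python; all the names are mine, not anything from ALICE's source) of what a per-conversation memory bank might look like:

```python
from dataclasses import dataclass, field

@dataclass
class ConversationMemory:
    """Minimal stateful memory for one conversation (schema invented)."""
    facts: dict = field(default_factory=dict)    # things the counterpart told us
    history: list = field(default_factory=list)  # full utterance log

    def remember(self, key, value):
        self.facts[key] = value

    def recall(self, key, default=None):
        return self.facts.get(key, default)

    def log(self, speaker, utterance):
        self.history.append((speaker, utterance))

# With this, the bot can refer back to earlier turns instead of
# treating every input statelessly, as ALICE appears to.
memory = ConversationMemory()
memory.log("human", "hi, my name is Giles")
memory.remember("name", "Giles")
print(memory.recall("name"))  # -> "Giles"
```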
Prior art
Eliza
Java AIML Implementation
http://botspot.internet.com/
http://www.simonlaven.com/
An emotional model
The idea is to model abstract properties of the conversation and to use those to determine whether our bot said something good or something bad (moving towards or away from a goal), and what the goal it is trying to reach might be.
Here are some examples.
+ve “emotions”
Praise,
Satisfaction of the other party (probably an abstraction of some of the other properties below),
Humour on the other side (e.g. "lol"),
The counterpart likes me (e.g. length of conversation),
The counterpart respects me,
The counterpart agrees with me (e.g. number of occurrences of affirmations),
Consistency with things previously said (contradiction detection),
Consistency with a database of facts (Previous 2 things are like honesty for humans)
-ve “emotions”
Inconsistency,
Dissatisfaction of counterpart,
Aggression of counterpart,
Inconsistency of counterpart
To this you could also add goal-specific "emotions": e.g. erotic excitement, handing over the readies, and so on.
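To make those abstract properties concrete, here's a minimal sketch of how a few of them could be detected from surface signals in the text. Everything in it (the cue lists, the regexes) is an invented placeholder rather than a real model:

```python
import re

# Hypothetical surface cues for a few of the "emotions" listed above.
POSITIVE_CUES = {
    "humour": [r"\blol\b", r"\bhaha+\b", r":\)"],
    "agreement": [r"\byes\b", r"\bexactly\b", r"\bagreed\b"],
    "praise": [r"\bclever\b", r"\bnice one\b", r"\bwell done\b"],
}
NEGATIVE_CUES = {
    "aggression": [r"\bshut up\b", r"\bstupid\b", r"\bidiot\b"],
    "dissatisfaction": [r"\bmakes no sense\b", r"\byou're not making sense\b"],
}

def score_emotions(utterance: str) -> dict:
    """Return a crude per-emotion count of cue matches in one utterance."""
    text = utterance.lower()
    return {
        emotion: sum(len(re.findall(p, text)) for p in patterns)
        for emotion, patterns in {**POSITIVE_CUES, **NEGATIVE_CUES}.items()
    }

print(score_emotions("lol, yes exactly - nice one"))
# -> {'humour': 1, 'agreement': 2, 'praise': 1, 'aggression': 0, 'dissatisfaction': 0}
```

Properties like "the counterpart likes me" would then be accumulations of these per-utterance scores over the whole conversation (plus things like conversation length).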
Example Goals
erotic excitement
selling things
humour
return chatters (relationship)
Emergent learning algorithm
These parameters could act as guides in an emergent rule system. For example:
Rule 1. Decide best response based on a database of response parameters (perhaps using a Neural Net or Bayesian algorithm – classification)
Rule 2. Alter response parameters based on detected levels of “emotions” in the conversation.
Rule 3. In the presence of negative emotions, try to elicit direct feedback from the counterpart to add specific rules to avoid them in future (e.g. learning new vocabulary etc…).
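Here's a minimal sketch of how those three rules might hang together. It assumes an emotion scorer like the one sketched earlier and invents everything else (the parameter table, the update rule, the thresholds):

```python
import random

# Hypothetical response parameters (Rule 1's "database"): each candidate
# response carries a weight that Rule 2 nudges up or down.
response_weights = {
    "Tell me more.": 1.0,
    "Go and dig your own grave.": 1.0,
    "Interesting - why do you say that?": 1.0,
}

def choose_response(weights):
    """Rule 1: pick a response, favouring higher-weighted ones
    (a crude stand-in for a real Bayesian or neural classifier)."""
    responses = list(weights)
    return random.choices(responses, weights=[weights[r] for r in responses])[0]

def update_weights(weights, last_response, emotion_delta, rate=0.2):
    """Rule 2: reinforce or penalise the last response according to the
    net change in detected "emotions" (+ve minus -ve) that followed it."""
    weights[last_response] = max(0.01, weights[last_response] + rate * emotion_delta)

def maybe_elicit_feedback(emotion_delta, threshold=-1):
    """Rule 3: on a strongly negative reaction, ask the human directly."""
    if emotion_delta <= threshold:
        return "You didn't like what I said? Would you spend a minute educating me?"
    return None
```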
Another idea is to feed back some learned information to a central server which stores the knowledge, so that our bot can learn some general things from all conversations. One would have to be careful to filter out the things that are specific to a given conversation.
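One naive way of doing that filtering, purely as an illustration: only promote a learned rule to the shared knowledge base once it has shown up independently in several conversations (the rule format and the threshold below are both invented):

```python
def generalisable(learned_rules, min_conversations=3):
    """Keep only rules observed in enough distinct conversations that
    they're unlikely to be artefacts of one particular chat."""
    return [rule for rule in learned_rules
            if len(rule["seen_in_conversations"]) >= min_conversations]
```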
Feature detection and learning
The big challenge here is how to set up Rule 2.
If I notice a change in “emotional response”, how do I know what caused it?
Let's take an example. If you say the word "bastard" too often in a bastard conversation or a bastard text, then it tends to have a negative effect on the bastard conversation.
Using the rules above, how would our bot learn not to swear too much?
The basic algorithm is: when you notice an emotional feature (a positive or negative abstract property of the conversation), look back at the conversation's history, and at other conversational histories, and try to see what they have in common. This is basically a classification problem.
This is where the real AI comes in:
A classification algorithm classifies individuals according to features. If you want to put X1…XN into classes, you have to differentiate them on the basis of some feature or property.
So a key part of the problem here is which features of the text to pick out. By analogy with human language learning, there will be:
“innate” or hardcoded features
E.g. word boundaries, length, capitalization, lemma (word stem) ID in dictionary, language, phrase boundaries, part of speech, occurrence frequency, frequency vector, etc… There is a lot of open source stuff out there for this.
composite or learned features
It is conceivable and desirable that the algorithm be able to create new features and test whether they have any causal effect, e.g. the occurrence frequency of the word "go" together with the word "away". The way in which the system is able to create new features and test their correlation with key emotional events will be key to its power.
Bayesian or neural algorithms are able to test correlations between features and events. We should investigate the most efficient way of doing this.
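Taking the swearing example above as a toy case, here's a sketch of the kind of feature/event correlation a Bayesian approach could test. The data and the smoothing are invented, and a real system would use the richer features listed above rather than single words:

```python
from collections import Counter

def feature_event_correlation(histories):
    """histories: list of (words, negative_event) pairs, one per utterance.
    Returns a smoothed estimate of P(negative event | word present) -
    the kind of feature/event correlation a Bayesian classifier tests."""
    with_event, total = Counter(), Counter()
    for words, negative in histories:
        for w in set(words):
            total[w] += 1
            if negative:
                with_event[w] += 1
    # Add-one (Laplace) smoothing so sparse counts don't give hard 0s or 1s.
    return {w: (with_event[w] + 1) / (total[w] + 2) for w in total}

histories = [
    ("you bastard listen to me".split(), True),
    ("what a bastard day".split(), True),
    ("lovely weather today".split(), False),
]
scores = feature_event_correlation(histories)
print(max(scores, key=scores.get))  # -> "bastard"
# "bastard" floats to the top, which is how the bot could learn
# that swearing correlates with negative reactions.
```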
Eliciting feedback
A very powerful addition to existing systems would be a smart way in which the bot could ask the human to tell it about its mistakes.
The idea is that when the bot detects that the counterpart is not happy, it can offer the human an interface that allows them to interact in a user-friendly way with the bot's weighting tables and/or rule system.
For example:
Human says: Hi, do you fancy other robots?
Botty says: Go and dig your own grave
Human says: Pardon me
Botty says: You didn’t like what I said?
Human says: No I certainly didn’t
Botty says: Would you be willing to spend a couple of minutes educating me?
Human says: If you promise to be more friendly.
Botty says: OK – please click here:
Opens a window with the phrase “Go and dig your own grave”
And various questions like:
Which part of “Go and dig your own grave” did you not like [:)]?
This is the reasoning which led me to say “Go and dig your own grave”
Displays a representation of the rule (simple example):
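Something like this, perhaps (the whole schema is invented for illustration; a real format would depend on the rule engine used):

```python
# Invented rule schema: pattern -> response, plus the weight the
# feedback interface would let the human adjust.
rule = {
    "pattern": "do you fancy *",          # AIML-style wildcard pattern
    "response": "Go and dig your own grave",
    "weight": 0.9,                        # how strongly this response is preferred
    "tags": ["dismissive"],
}

# A human suggestion from the feedback window might then be recorded as:
suggestion = {
    "rule": rule,
    "complaint": "too aggressive",
    "proposed_response": "I'm flattered, but I only have eyes for humans.",
}
```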
The human being could then somehow make suggestions (in a user-friendly interface, e.g. like the Outlook mail-rule interface) about how the bot could behave better next time.
We could look at how children do this sort of thing for some hints.
Problems with this that would have to be solved:
– Good intentions of the teacher: maybe the suggestions could be vetted by a human, or somehow profiled to estimate whether they are likely to be good suggestions.
– How to deal with conflicting suggestions: maybe you could have a conflict detection system and ask the human about conflicts.
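A crude sketch of that conflict check, assuming the invented suggestion format above: two suggestions conflict if they target the same pattern but propose different responses.

```python
from collections import defaultdict

def find_conflicts(suggestions):
    """Flag patterns for which different humans proposed different
    replacement responses - candidates to query the human about."""
    proposals = defaultdict(set)
    for s in suggestions:
        proposals[s["rule"]["pattern"]].add(s["proposed_response"])
    return {pattern: responses for pattern, responses in proposals.items()
            if len(responses) > 1}
```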
Example implementations/ Moneymaking schemes
Psychiatrist Bot
What could be easier and more lucrative than sitting there saying yes, yes, yes while someone tells you their problems? Seriously speaking, you'd need a very strong cognitive model and so forth. But it would have great impact value: loads of press articles, etc.
Problems here: privacy is a big one. No-one's going to tell you their problems if you can record them; you'd need strong anonymity.