Statistical Grammar Correction or Not…
One thing that makes AtD so powerful is using the right tool for the right job. AtD uses a rule-based approach to find some errors and a statistical approach to find others. The misused word detection feature is the heaviest user of the statistical approach.
Last week I made great progress on the misused word detection, making it look at the two words before the phrase in question (trigrams) instead of just one (bigrams). There were several memory challenges to overcome to make this work in production, but it happened and I was happy with the results.
Before this, I just looked at the one word before and after the current word. That was all well and good, but for certain kinds of words (too/to, their/there, your/you’re) it proved useless, guessing wrong almost every time. For these words I moved to a rule-based approach. Rules are great because they work and they’re easy to make. The disadvantage of rules is that a lot of them are required to get broad coverage of the errors people really make. The statistical approach is great because it can pick up more cases.
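To make the bigram-vs-trigram distinction concrete, here is a minimal sketch with made-up counts. AtD builds its n-gram model from a large corpus offline; none of the names or numbers below are AtD's actual code, just an illustration of preferring two words of context and falling back to one:

```python
from collections import defaultdict

# Hypothetical n-gram counts for illustration only.
bigram_counts = defaultdict(int, {("want", "to"): 900, ("want", "too"): 20})
trigram_counts = defaultdict(int, {("i", "want", "to"): 850,
                                   ("i", "want", "too"): 1})

def pick(candidates, context):
    """Choose the candidate the context supports best, preferring
    trigram evidence and falling back to bigrams when it's absent."""
    prev2, prev1 = context
    def score(word):
        return trigram_counts[(prev2, prev1, word)] or bigram_counts[(prev1, word)]
    return max(candidates, key=score)

print(pick(["to", "too"], ("i", "want")))  # -> to
```

With two words of context the evidence is much sharper; with only one word ("want"), both alternatives show up often enough that the wrong one can win.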
So with the advent of trigrams, I decided to revisit the statistical technology for the errors I currently detect with rules. I focused on three types of errors just to see what kind of difference there was. These types were:
- Wrong verb tense (irregular verbs only) – Writers tend to have trouble with irregular verbs as each case has to be memorized. Native English speakers aren’t too bad but those just learning have a lot of trouble with these. An example set would be throw vs. threw vs. thrown vs. throws vs. throwing.
- Agreement errors (irregular nouns only) – Irregular nouns are nouns whose singular and plural forms have to be memorized. You can’t just add an s to pluralize them. An example is die vs. dice.
- Confused words – And finally there are several confused words that the statistical approach just didn’t do much good for. These include it’s/its, their/there, and where/were.
You’ll notice that each of these types of errors relies on fixed words with a fixed set of alternatives. Perfect for use with the misused word detection technology.
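The "fixed word, fixed set of alternatives" structure is usually called a confusion set. A tiny hypothetical sketch of the lookup (the sets below mirror the examples above; the function name is mine, not AtD's):

```python
# Hypothetical confusion sets: each observed word maps to the fixed
# set of alternatives the detector should consider. Illustration only.
CONFUSION_SETS = [
    {"throw", "threw", "thrown", "throws", "throwing"},   # irregular verb tense
    {"die", "dice"},                                      # irregular noun agreement
    {"it's", "its"}, {"their", "there"}, {"where", "were"}  # confused words
]

def alternatives(word):
    """Return the fixed alternatives for a word, or an empty set
    when the word isn't in any confusion set."""
    for s in CONFUSION_SETS:
        if word in s:
            return s - {word}
    return set()

print(sorted(alternatives("threw")))
```

Because the alternatives are enumerable up front, the misused word detector can score each member of the set in context and pick the best fit, exactly as it does for its own confusion pairs.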
My next step was to generate datasets for training and evaluating a neural network for each of these errors. With this data I then trained the neural network and compared the test results of this neural network (Problem AI) to how the misused word detector (Misused AI) did as-is against these types of errors. The results surprised me.
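The post doesn't spell out how the datasets were built, but a common way to fabricate labeled data for this kind of task is to take correct sentences and swap in a wrong member of the confusion set. A hypothetical sketch of that idea (function and variable names are mine):

```python
import random

def make_examples(sentence_words, target_index, confusion_set, rng=random):
    """From one correct sentence, build a positive (correct) example and
    a negative example with a wrong alternative swapped in."""
    correct = sentence_words[target_index]
    wrong = rng.choice([w for w in confusion_set if w != correct])
    bad = list(sentence_words)
    bad[target_index] = wrong
    return (sentence_words, True), (bad, False)

good, bad = make_examples(["he", "threw", "the", "ball"], 1,
                          ["throw", "threw", "thrown"], random.Random(0))
print(good)
print(bad)
```

This gives matched pairs for both training and evaluation, which is consistent with the later remark that "the tests come from fabricated data."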
| Problem AI | Misused AI |
| --- | --- |

Table 1. Statistical Grammar Checking with Trigrams
First, you’ll notice the results aren’t that great. Imagine what they were like before I had trigrams. 🙂 Actually, I ran those numbers too. Here are the results:
| Problem AI | Misused AI |
| --- | --- |

Table 2. Statistical Grammar Checking with Bigrams
So, my instinct was at least validated. There was a big improvement going from bigrams to trigrams… just not big enough. What surprised me from these results is how closely the AI trained for detecting misused words performed to the AI trained for the problem at hand.
So one thing this experiment showed is that there is no need to train a separate model for dealing with these other classes of errors. There is a slight accuracy difference, but it’s not significant enough to justify a separate model.
Real World Results
The experiment shows that all I need (in theory) is a higher bias to protect against false positives. With higher biasing in place (my target is always <0.05% false positives), I decided to run this feature against real writing to see what kind of errors it finds. This is an important step because the tests come from fabricated data. If the mechanism fails to find meaningful errors in a real corpus of writing then it’s not worth adding to AtD.
The irregular verb rules found 500 “errors” in my writing corpus. Some of the finds were good and wouldn’t be picked up by rules, for example:
Romanian justice had no saying whatsoever
[ACCEPT] no, saying -> @('say')

However it will mean that once 6th May 2010 comes around the thumping that Labour get will be historic
[ACCEPT] Labour, get -> @('getting', 'gets', 'got', 'gotten')
Unfortunately, most of the errors are noise. The confused words and nouns have similar stories.
The misused word detection is my favorite feature in AtD. I like it because it catches a lot of errors with little effort on my part. I was hoping to get the same win catching verb tenses and other errors. This experiment showed that misused word detection is fine for picking a better fit from a set, and that I don’t need to train separate models for different types of errors. This gives me another tool to use when creating rules and also a flexible tool for mining errors from raw text.