Creating the hack test
The difficulty with critiquing an author is that reading is subjective. Sometimes you like an author. You know they’re middling or worse, but you still read them. Either they write about a subject that interests you, or they have a style that appeals despite its faults. Likewise, sometimes you unfairly hate a writer. Their technique is flawless, their characters and plot original, but the text doesn’t move you.
But what if we created a test that assessed the writing separate from emotion? What if we made an algorithm that spit out an “author score?”
Sure, it wouldn’t be infallible (and it would take some of the fun out of reading), but it could be fascinating.
The trick is that it would have to measure quality quantitatively. We would need to glean measurable numbers from the text, or our hack test would devolve into “I like them, so they’re not a hack.”
Roy Peter Clark and Harold Bloom already gave us two qualifiers to search for — overuse of unnecessary modifiers and dependence upon cliches.
We can read through passages and identify whether a writer uses too many adverbs. That’s not subjective. That’s not a matter of opinion. If Author One uses twice as many adverbs as Author Two in the same number of pages, Author Two has demonstrated more tact. If Author One depends on familiar turns of phrase, he is using a crutch.
We can count the number of pathetic fallacies and see if it exceeds good taste. We can check for dependence on “to be” verbs or overuse of passive voice. The language might be the easiest part of writing to score. Other aspects like characterization or plotting will be trickier.
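The language checks above are the easiest to automate. As a hedged sketch only (the word lists and the “-ly” heuristic are my assumptions, not part of the original test, and they will miscount words like “family” or miss adverbs like “well”), a crude counter might look like this:

```python
import re

# Assumed word list: common conjugations of "to be".
TO_BE = {"be", "am", "is", "are", "was", "were", "been", "being"}

def language_counts(text):
    """Crude, heuristic tallies for a passage: total words,
    likely adverbs (words ending in -ly), and forms of "to be"."""
    words = re.findall(r"[a-z']+", text.lower())
    # The -ly suffix is only an approximation of "adverb";
    # the length check screens out short words like "only".
    adverbs = sum(1 for w in words if w.endswith("ly") and len(w) > 4)
    to_be = sum(1 for w in words if w in TO_BE)
    return {"words": len(words), "adverbs": adverbs, "to_be": to_be}

passage = "He was quietly walking. The room was utterly, completely dark."
print(language_counts(passage))  # {'words': 10, 'adverbs': 3, 'to_be': 2}
```

Comparing two authors would then be a matter of running the same counter over equal-length samples and comparing the ratios, not the raw counts.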
We all know characterization is important, but how do we measure good characters?
The only quantifiable yardstick I can think of is originality. Tricia and I might argue about whether or not a character is likable or interesting, but we can agree when an author repeats character archetypes.
The same principle could be applied to plot or setting. Whether a plot is “good” is subjective. What I call good, you could call trash. Neither of us would be wrong, because they’re opinions. But originality? That’s not an opinion. Either somebody has already used an idea, or it’s original. (Someone might be plagiarizing from an unknown source; but, lacking omniscience, we’ll have to depend on our own knowledge.)
So this is just a rough draft of the hack test. I expect revisions, but we must start somewhere.
First, some categories that we will rank:
Questions pertaining to language:
How many modifiers does the author use?
How many of those modifiers could be removed without changing the intent of the sentence?
How many cliches does the author use?
How many pathetic fallacies does the author use?
How many times does the author use a form of the verb “to be”?
How many times does the author use an extraneous phrase that could be removed completely? (Not just a modifier, but an entire phrase.)
Questions pertaining to plot:
Has the author used this or a similar plot in previous works? If yes, how many times?
Has this plot been used in other authors’ work previously?
Questions pertaining to character:
Has the author used similar characters in previous works? If yes, how many times?
Is this character almost identical to other characters previously created by another author?
Questions pertaining to setting:
Has the author used similar settings in previous works? If yes, how many times?
Has an almost identical setting been used in other authors’ work previously?
I’m not worried about a ranking system yet. Let’s get a usable test first.
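To make the draft concrete without committing to any ranking, the checklist above can be written down as a plain data structure: the questions grouped by category, with a zeroed count per question that a reader fills in by hand. The question labels here are my shorthand for the full questions, and the lack of any weighting is deliberate, per the note above; this is a sketch of a usable test form, nothing more.

```python
# The draft hack test as a data structure: categories mapped to the
# shorthand labels of the questions above. No scores, no weights.
CHECKLIST = {
    "language": [
        "modifiers used",
        "removable modifiers",
        "cliches",
        "pathetic fallacies",
        "forms of 'to be'",
        "removable phrases",
    ],
    "plot": ["reused own plot", "plot used by other authors"],
    "character": ["reused own characters", "near-identical to another author's"],
    "setting": ["reused own settings", "setting used by other authors"],
}

def blank_scorecard():
    """One count per question, all starting at zero."""
    return {cat: {q: 0 for q in qs} for cat, qs in CHECKLIST.items()}

card = blank_scorecard()
card["language"]["cliches"] = 4  # e.g., four cliches tallied while reading
print(sum(card["language"].values()))  # raw tally for the language section: 4
```

Once the questions settle, a ranking system would just be a second pass over the filled-in card.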
So what other quantifiable qualities need to be included in the hack test?
-Jason Lea, JLea@News-Herald.com