
Serious Snow

January 12, 2011

When it snows outside, it’s time to get serious inside.  We’re definitely serious inside Boston today.

I’ll post some progress later today.  Happy birthday, Mom!

Later that day… I’ve been knocking the bugs out of a little algorithm I wrote to do rapid subjective sorting using Amazon Mechanical Turk.  The code will go up in due time, and the algorithm will be described at length in my thesis.  In the meantime, it’s good to know that the form I’m using is closely related to the Rasch Model, and that I’m repeating Thurstone’s 1927 experiment, in which he obtained a subjective ranking of the offensiveness of crimes.  The rankings in the right-most column are ours, after 400 data points.  In our experiment, the first 370 data points were selected randomly; the rest were chosen to optimally increase our knowledge in expectation (illustrative sketches of both steps appear below).  Our column hasn’t converged by a long shot, but we’re comparing against experiments that had 20K data points apiece, so we’re not at all disappointed!  The crimes are sorted from most to least offensive:

1926            1966            1976            2011
Rape            Homicide        Homicide        Kidnapping
Homicide        Rape            Rape            Rape
Abortion        Kidnapping      Kidnapping      Homicide
Seduction       Arson           Assault         Abortion
Arson           Assault         Arson           Embezzlement
Kidnapping      Abortion        Burglary        Arson
Adultery        Burglary        Larceny         Assault
Perjury         Embezzlement    Embezzlement    Burglary
Embezzlement    Adultery        Perjury         Receiving
Counterfeiting  Larceny         Counterfeiting  Adultery
Assault         Perjury         Libel           Forgery
Forgery         Counterfeiting  Forgery         Larceny
Burglary        Seduction       Smuggling       Smuggling
Larceny         Forgery         Adultery        Perjury
Smuggling       Smuggling       Receiving       Bootlegging
Libel           Libel           Seduction       Counterfeiting
Receiving       Receiving       Bootlegging     Seduction
Bootlegging     Bootlegging     Abortion        Libel
Vagrancy        Vagrancy        Vagrancy        Vagrancy

The first three columns came from this paper; the 1926 column originally appeared in Thurstone’s 1927 paper, where he introduced the method. Now I get to apply it to my thesis!
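To make that concrete before the real code lands, here’s a minimal sketch of a Rasch/Bradley-Terry-style pairwise model of the kind described above: each crime gets a latent offensiveness theta, and the probability that a worker calls crime i more offensive than crime j is the logistic function of theta_i - theta_j. Everything here (the function names, the learning rate, the toy data) is illustrative, not the actual thesis code:

import math
import random

def fit_pairwise(comparisons, n_items, lr=1.0, epochs=500):
    # Each observation (i, j) means "crime i was judged more
    # offensive than crime j"; fit the thetas by maximum likelihood.
    theta = [0.0] * n_items
    n = len(comparisons)
    for _ in range(epochs):
        grad = [0.0] * n_items
        for i, j in comparisons:
            p = 1.0 / (1.0 + math.exp(theta[j] - theta[i]))  # P(i beats j)
            grad[i] += (1.0 - p) / n   # winner's score pulled up
            grad[j] -= (1.0 - p) / n   # loser's score pushed down
        theta = [t + lr * g for t, g in zip(theta, grad)]
        mean = sum(theta) / n_items        # only differences are
        theta = [t - mean for t in theta]  # identified; pin mean at 0
    return theta

if __name__ == "__main__":
    random.seed(0)
    true_theta = [0.0, 1.0, 2.0]   # toy ground truth for three items
    obs = []
    for _ in range(400):
        i, j = random.sample(range(3), 2)
        p = 1.0 / (1.0 + math.exp(true_theta[j] - true_theta[i]))
        obs.append((i, j) if random.random() < p else (j, i))
    print(fit_pairwise(obs, 3))    # roughly [-1, 0, 1] after centering

The zero-mean step is there because pairwise judgments only determine differences between items, never an absolute origin.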

Update (deeper into the night):
After collecting another 100 or so data points, I wound up with the following graph:

Some pretty neat results that we can be fairly confident about.  Repeating this experiment would take around a day and cost $50; those variances look pretty decent, which shows the advantage of choosing which questions to ask appropriately.
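One plausible reading of “chosen to optimally increase our knowledge in expectation” (a sketch of a heuristic, not necessarily the exact criterion the real code uses): under the logistic model, a single comparison between items i and j carries Fisher information p(1 - p) about theta_i - theta_j, which peaks when the outcome is a toss-up. So ask about the pairs whose answers are hardest to predict, weighted toward items whose positions are still uncertain. All names below are hypothetical:

import math

def next_question(theta, var):
    """Pick the next pair to put in front of a worker. theta holds
    the current offensiveness estimates, var the per-item variance
    estimates. Score each pair by the Fisher information p*(1-p) of
    one logistic comparison, weighted by how uncertain we still are
    about the two items involved."""
    best, best_score = None, -1.0
    for i in range(len(theta)):
        for j in range(i + 1, len(theta)):
            p = 1.0 / (1.0 + math.exp(theta[j] - theta[i]))
            score = p * (1.0 - p) * (var[i] + var[j])
            if score > best_score:
                best, best_score = (i, j), score
    return best

With 19 crimes there are only 171 pairs, so scoring every candidate by brute force like this is cheap.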

Here’s the gnuplot code I used to plot the graph:

# Label the y axis with the crime names, most offensive at the top
set ytics ("Kidnapping" 19, "Rape" 18, "Homicide" 17, "Arson" 16, "Abortion" 15, "Embezzlement" 14, "Assault and Battery" 13, "Burglary" 12, "Receiving Stolen Goods" 11, "Adultery" 10, "Forgery" 9, "Larceny" 8, "Smuggling" 7, "Bootlegging" 6, "Perjury" 5, "Seduction" 4, "Counterfeiting" 3, "Libel" 2, "Vagrancy" 1)
unset key
set yrange [0:20]
set xlabel "Offensiveness of Crime"
set terminal png transparent truecolor size 1400,480 xffffff x000000 xff0000
set output "crimes.png"
# x = estimate (column 2), y = rank flipped onto the ytics positions,
# error bar drawn as x +/- column 3
plot 'stats' using 2:(20-$1):3 with xerrorbars
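For that plot command to work, 'stats' needs three whitespace-separated columns per crime: the rank in column 1 (so 20-$1 maps rank 1, Kidnapping, to y = 19, matching the ytics above), the estimated offensiveness in column 2 as x, and the error in column 3, which xerrorbars draws as a horizontal bar from x - delta to x + delta. A Kidnapping row might look like "1 2.31 0.12" (the numbers here are made up).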
