Quotes 1-29-2015

by Miles Raymer

“A great fire can put out a smaller one by starving it of oxygen and fuel. Now, as he returned the last letter to the packet, he was almost sick with longing for Mrs. Rosa Clay of Van Pelt Street, Midwood, Brooklyn.

Sammy had once told him about the capsule that had been buried at the World’s Fair, in which typical items of that time and place––some nylon stockings, a copy of Gone with the Wind, a Mickey Mouse drinking cup––had been buried in the ground, to be recovered and marveled at by the people of some future gleaming New York. Now, as he read through these thousands of words that Rosa had written him, and her raspy, plaintive voice sounded in his ear, his entombed memories of Rosa were hauled up as from a deep shaft within him. The lock on the capsule was breached, the hasps were thrown, the hatch opened, and with a ghostly whiff of lily of the valley and a fluttering of moths, he remembered––he allowed himself to enjoy a final time––the stickiness and weight of her thigh thrown over his belly in the middle of a hot August night, her breath against his shoulder as she gave his hair a trim in the kitchen of his apartment on Fifth Avenue, the burble and glint of the Trout Quintet playing in the background as the smell of her cunt, rich and faintly smoky like cork, perfumed an idle hour in her father’s house. He recalled the sweet illusion of hope that his love for her had brought him.”

––The Amazing Adventures of Kavalier & Clay, by Michael Chabon, pp. 457-458


“Human individuals and human organizations typically have preferences over resources that are not well represented by an ‘unbounded aggregative utility function.’ A human will typically not wager all her capital for a fifty-fifty chance of doubling it. A state will typically not risk losing all its territory for a ten percent chance of a tenfold expansion. For individuals and governments, there are diminishing returns to most resources. The same need not hold for AIs. (We will return to the problem of AI motivation in subsequent chapters.) An AI might therefore be more likely to pursue a risky course of action that has some chance of giving it control of the world.”

––Superintelligence: Paths, Dangers, Strategies, by Nick Bostrom, p. 88