
Uncanny Valley Essay


In aesthetics, the uncanny valley is a hypothesized relationship between the degree of an object's resemblance to a human being and the emotional response to such an object. The concept suggests that humanoid objects which appear almost, but not exactly, like real human beings elicit uncanny, or strangely familiar, feelings of eeriness and revulsion in observers.[2] "Valley" denotes a dip in the human observer's affinity for the replica, a relation that otherwise increases with the replica's human likeness.[3]

Examples can be found in robotics, 3D computer animation, and lifelike dolls, among others. With the increasing prevalence of virtual reality, augmented reality, and photorealistic computer animation, the "valley" has been cited in the popular press in reaction to creations whose verisimilitude approaches indistinguishability from reality. The uncanny valley hypothesis predicts that an entity appearing almost human risks eliciting cold, eerie feelings in viewers.[4]

Etymology

The concept was identified by the robotics professor Masahiro Mori as Bukimi no Tani Genshō (不気味の谷現象) in 1970.[5] The term was first translated as uncanny valley in the 1978 book Robots: Fact, Fiction, and Prediction, written by Jasia Reichardt,[6] thus forging an unintended link to Ernst Jentsch's concept of the uncanny,[7] introduced in a 1906 essay entitled "On the Psychology of the Uncanny."[8][9][10] Jentsch's conception was elaborated by Sigmund Freud in a 1919 essay entitled "The Uncanny" ("Das Unheimliche").[11]

Hypothesis

Mori's original hypothesis states that as the appearance of a robot is made more human, some observers' emotional response to the robot becomes increasingly positive and empathetic, until it reaches a point beyond which the response quickly becomes strong revulsion. However, as the robot's appearance continues to become less distinguishable from a human being, the emotional response becomes positive once again and approaches human-to-human empathy levels.[13]

This area of repulsive response aroused by a robot with appearance and motion between a "barely human" and "fully human" entity is the uncanny valley. The name captures the idea that an almost human-looking robot seems overly "strange" to some human beings, produces a feeling of uncanniness, and thus fails to evoke the empathic response required for productive human–robot interaction.[13]

Theoretical basis

A number of theories have been proposed to explain the cognitive mechanism underlying the phenomenon:

  • Mate selection. Automatic, stimulus-driven appraisals of uncanny stimuli elicit aversion by activating an evolved cognitive mechanism for the avoidance of selecting mates with low fertility, poor hormonal health, or ineffective immune systems based on visible features of the face and body that are predictive of those traits.[14][15]
  • Mortality salience. Viewing an "uncanny" robot elicits "an innate fear of death and culturally-supported defenses for coping with death's inevitability.... [P]artially disassembled androids...play on subconscious fears of reduction, replacement, and annihilation: (1) A mechanism with a human façade and a mechanical interior plays on our subconscious fear that we are all just soulless machines. (2) Androids in various states of mutilation, decapitation, or disassembly are reminiscent of a battlefield after a conflict and, as such, serve as a reminder of our mortality. (3) Since most androids are copies of actual people, they are doppelgängers and may elicit a fear of being replaced, on the job, in a relationship, and so on. (4) The jerkiness of an android's movements could be unsettling because it elicits a fear of losing bodily control."[16]
  • Pathogen avoidance. Uncanny stimuli may activate a cognitive mechanism that originally evolved to motivate the avoidance of potential sources of pathogens by eliciting a disgust response. "The more human an organism looks, the stronger the aversion to its defects, because (1) defects indicate disease, (2) more human-looking organisms are more closely related to human beings genetically, and (3) the probability of contracting disease-causing bacteria, viruses, and other parasites increases with genetic similarity."[15][17] The visual anomalies of androids, robots, and other animated human characters cause reactions of alarm and revulsion, similar to corpses and visibly diseased individuals.[18]
  • Sorites paradoxes. Stimuli with human and nonhuman traits undermine our sense of human identity by linking qualitatively different categories, human and nonhuman, by a quantitative metric, degree of human likeness.[19]
  • Violation of human norms. The uncanny valley may "be symptomatic of entities that elicit a model of a human other but do not measure up to it".[20] If an entity looks sufficiently nonhuman, its human characteristics are noticeable, generating empathy. However, if the entity looks almost human, it elicits our model of a human other and its detailed normative expectations. The nonhuman characteristics are noticeable, giving the human viewer a sense of strangeness. In other words, a robot stuck inside the uncanny valley is no longer judged by the standards of a robot doing a passable job at pretending to be human, but is instead judged by the standards of a human doing a terrible job at acting like a normal person. This has been linked to perceptual uncertainty and the theory of predictive coding.[21][22]
  • Religious definition of human identity. The existence of artificial but humanlike entities is viewed by some as a threat to the concept of human identity.[23] An example can be found in the theoretical framework of psychiatrist Irvin Yalom. Yalom explains that humans construct psychological defenses in order to avoid existential anxiety stemming from death. One of these defenses is specialness, the irrational belief that aging and death as central premises of life apply to all others but oneself.[24] The experience of the very humanlike "living" robot can be so rich and compelling that it challenges humans' notions of "specialness" and existential defenses, eliciting existential anxiety. In folklore, the creation of human-like, but soulless, beings is often shown to be unwise, as with the golem in Judaism, whose absence of human empathy and spirit can lead to disaster, however good the intentions of its creator.[25]
  • Conflicting perceptual cues. The negative effect associated with uncanny stimuli is produced by the activation of conflicting cognitive representations. Perceptual tension occurs when an individual perceives conflicting cues to category membership, such as when a humanoid figure moves like a robot, or has other visible robot features. This cognitive conflict is experienced as psychological discomfort (i.e., "eeriness"), much like the discomfort that is experienced with cognitive dissonance.[26][27] Several studies support this possibility. Mathur and Reichling found that the time subjects took to gauge a robot face's human or mechanical resemblance peaked for faces deepest in the uncanny valley, suggesting that perceptually classifying these faces as "human" or "robot" posed a greater cognitive challenge.[1] However, they found that while perceptual confusion coincided with the uncanny valley, it did not mediate the effect of the uncanny valley on subjects' social and emotional reactions, suggesting that perceptual confusion may not be the mechanism behind the uncanny valley effect. Burleigh and colleagues demonstrated that faces at the midpoint between human and non-human stimuli produced a level of reported eeriness that diverged from an otherwise linear model relating human-likeness to affect.[28] Yamada et al. found that cognitive difficulty was associated with negative affect at the midpoint of a morphed continuum (e.g., a series of stimuli morphing between a cartoon dog and a real dog).[29] Ferrey et al. demonstrated that the midpoint between images on a continuum anchored by two stimulus categories produced a maximum of negative affect, and found this with both human and non-human entities.[26] Schoenherr and Burleigh provide examples from history and culture that evidence an aversion to hybrid entities, such as the aversion to genetically modified organisms ("Frankenfoods") and transgender individuals.[30] Finally, Moore developed a Bayesian mathematical model that provides a quantitative account of perceptual conflict.[31] (A simplified illustration of this category-boundary uncertainty appears after this list.) There has been some debate as to the precise mechanisms that are responsible. It has been argued that the effect is driven by categorization difficulty,[28][29] perceptual mismatch,[32][33][34] frequency-based sensitization,[35] and inhibitory devaluation.[26]
  • Threat to humans' distinctiveness and identity. Negative reactions toward very humanlike robots can be related to the challenge that this kind of robot poses to the categorical human–nonhuman distinction. Kaplan[36] stated that these new machines challenge human uniqueness, pushing for a redefinition of humanness. MacDorman and Entezari[37] investigated the distinction between humans and robots as an individual trait that can predict sensitivity to the uncanny valley phenomenon. Ferrari, Paladino and Jetten[38] found that increasing the anthropomorphic appearance of a robot heightens the perceived threat to human distinctiveness and identity. The more a robot resembles a real person, the more it represents a challenge to our social identity as human beings.
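
As a concrete illustration of the categorization-difficulty and Bayesian accounts above, the following minimal Python sketch models two perceptual categories, "robot" and "human", as overlapping Gaussian distributions along a human-likeness continuum and computes an observer's classification uncertainty at each point. The category means, spreads, and equal priors are assumptions chosen purely for illustration; this is not Moore's published model, nor the analysis of any study cited above.

    # Minimal sketch under illustrative assumptions (not a published model):
    # classification uncertainty between a "robot" and a "human" category
    # peaks at the boundary of a human-likeness continuum.
    import math

    def gaussian_pdf(x, mean, sd):
        """Probability density of a normal distribution at x."""
        return math.exp(-0.5 * ((x - mean) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))

    def classification_uncertainty(x, robot_mean=0.2, human_mean=0.8, sd=0.15):
        """Entropy (in bits) of P(category | stimulus x), assuming equal priors."""
        p_robot = gaussian_pdf(x, robot_mean, sd)
        p_human = gaussian_pdf(x, human_mean, sd)
        total = p_robot + p_human
        posteriors = (p_robot / total, p_human / total)
        return -sum(p * math.log2(p) for p in posteriors if p > 0)

    for i in range(11):
        x = i / 10  # human-likeness: 0 = fully mechanical, 1 = fully human
        print(f"human-likeness {x:.1f}  uncertainty {classification_uncertainty(x):.3f} bits")

Under these assumptions the printed uncertainty is close to zero at both ends of the continuum and peaks at the midpoint, mirroring the midpoint effects reported by Burleigh, Yamada, Ferrey, and colleagues; whether such uncertainty actually causes the negative affect is precisely what the studies above debate.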

Research

A series of studies experimentally investigated whether uncanny valley effects exist for static images of robot faces. Mathur and Reichling[1] used two complementary sets of stimuli spanning the range from very mechanical to very human-like: first, a sample of 80 objectively chosen robot face images from Internet searches, and second, a morphometrically and graphically controlled series of six faces. Subjects were asked to explicitly rate the likability of each face. To measure trust toward each face, subjects completed a one-shot investment game that indirectly measured how much money they were willing to "wager" on a robot's trustworthiness. Both stimulus sets showed a robust uncanny valley effect on explicitly rated likability and a more context-dependent uncanny valley on implicitly rated trust. Their exploratory analysis of one proposed mechanism for the uncanny valley, perceptual confusion at a category boundary, found that category confusion occurs in the uncanny valley but does not mediate the effect on social and emotional responses.
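
To give a concrete sense of what a "valley" looks like in data of this kind, the minimal Python sketch below fits a cubic curve to hypothetical likability ratings plotted against human-likeness and reports any interior local minimum, i.e. the bottom of a valley. The ratings are invented for illustration and the cubic fit is only one simple modeling choice; this is not Mathur and Reichling's dataset or their analysis method.

    # Hypothetical example: detect a "valley" as an interior local minimum
    # of a cubic curve fitted to likability ratings. All numbers are made up.
    import numpy as np

    human_likeness = np.linspace(0.0, 1.0, 11)                   # 0 = mechanical, 1 = human
    likability = np.array([0.10, 0.25, 0.40, 0.45, 0.30,         # affinity rises, dips ...
                           0.10, 0.05, 0.20, 0.50, 0.80, 1.00])  # ... then recovers

    fitted = np.poly1d(np.polyfit(human_likeness, likability, deg=3))  # cubic fit
    critical_points = fitted.deriv().roots                             # where the slope is zero

    # An interior critical point with a positive second derivative is a local
    # minimum, i.e. the estimated bottom of the valley.
    valley = [float(r.real) for r in critical_points
              if abs(r.imag) < 1e-9 and 0.0 < r.real < 1.0 and fitted.deriv(2)(r.real) > 0]
    print("Estimated valley location(s) on the human-likeness scale:", valley)

With these made-up ratings the fitted curve rises, dips in the upper-middle of the human-likeness scale, and then recovers toward full human likeness, which is the qualitative shape Mori's hypothesis predicts.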

One study conducted in 2009 examined the evolutionary mechanism behind the aversion associated with the uncanny valley. A group of five monkeys was shown three images: two different 3D monkey faces (one realistic, one unrealistic) and a real photograph of a monkey's face. The monkeys' eye gaze was used as a proxy for preference or aversion. Because the realistic 3D monkey face was looked at less than either the real photograph or the unrealistic 3D monkey face, this was interpreted as an indication that the monkeys found the realistic 3D face aversive, or otherwise preferred the other two images. As the uncanny valley hypothesis would predict, more realism led to less positive reactions, and the study suggests that neither human-specific cognitive processes nor human culture is required to explain the effect. In other words, the aversive reaction to realism may be evolutionary in origin.[39]

As of 2011, researchers at University of California, San Diego and California Institute for Telecommunications and Information Technology are measuring human brain activations related to the uncanny valley.[40][41] In one study using fMRI, a group of cognitive scientists and roboticists found the biggest differences in brain responses for uncanny robots in parietal cortex, on both sides of the brain, specifically in the areas that connect the part of the brain’s visual cortex that processes bodily movements with the section of the motor cortex thought to contain mirror neurons. The researchers say they saw, in essence, evidence of mismatch or perceptual conflict.[21] The brain "lit up" when the human-like appearance of the android and its robotic motion "didn’t compute". Ayşe Pınar Saygın, an assistant professor from UCSD, says "The brain doesn’t seem selectively tuned to either biological appearance or biological motion per se. What it seems to be doing is looking for its expectations to be met – for appearance and motion to be congruent."[42][43][44]

Tinwell et al. (2011) are investigating viewer perception of facial expression and speech and the uncanny valley in realistic, human-like characters intended for video games and film.[45] Tinwell et al. (2010) also consider how the uncanny may be exaggerated for antipathetic characters in survival horror games.[46] Building on the body of work already undertaken in android science, this research intends to build a conceptual framework of the uncanny valley using 3D characters generated in a real-time gaming engine. The goal is to analyze how cross-modal factors of facial expression and speech can exaggerate the uncanny. Tinwell et al. (2011)[47] have also introduced the notion of an unscalable uncanny wall, which suggests that a viewer's discernment in detecting imperfections in realism will keep pace with new technologies for simulating realism. A summary of Angela Tinwell's research on the uncanny valley, the psychological reasons behind it, and how designers may overcome the uncanny in human-like virtual characters is provided in her book The Uncanny Valley in Games and Animation (CRC Press).[48]

Design principles

A number of design principles have been proposed for avoiding the uncanny valley:

  • Design elements should match in human realism. A robot may look uncanny when human and nonhuman elements are mixed.[49] For example, both a robot with a synthetic voice and a human being with a human voice have been found to be less eerie than a robot with a human voice or a human being with a synthetic voice.[50] For a robot to give a more positive impression, its degree of human realism in appearance should also match its degree of human realism in behavior.[51] If an animated character looks more human than it moves, this gives a negative impression.[52] Human neuroimaging studies also indicate that matching appearance and motion kinematics is important.[21][53][54]
  • Reducing conflict and uncertainty by matching appearance, behavior, and ability. In terms of performance, if a robot looks too appliance-like, people expect little from it; if it looks too human, people expect too much from it.[51] A highly human-like appearance leads to an expectation that certain behaviors are present, such as humanlike motion dynamics. This likely operates at a subconscious level and may have a biological basis. Neuroscientists have noted that "when the brain's expectations are not met, the brain...generates a 'prediction error'. As human-like artificial agents become more commonplace, perhaps our perceptual systems will be re-tuned to accommodate these new social partners. Or perhaps, we will decide 'it is not a good idea to make [robots] so clearly in our image after all'."[21][54][55]
  • Human facial proportions and photorealistic texture should only be used together. A photorealistic human texture demands human facial proportions, or the computer-generated character can fall into the uncanny valley. Abnormal facial proportions, including those typically used by artists to enhance attractiveness (e.g., larger eyes), can look eerie when combined with a photorealistic human texture. Avoiding a photorealistic texture allows more leeway.[56]

Criticism

A number of criticisms have been raised concerning whether the uncanny valley exists as a unified phenomenon amenable to scientific scrutiny:

  • Good design can lift human-looking entities out of the valley. David Hanson has criticized Mori's hypothesis that entities approaching human appearance will necessarily be evaluated negatively.[57] He has shown that the uncanny valley that Karl MacDorman and Hiroshi Ishiguro[58] generated – by having participants rate photographs that morphed from humanoid robots to android robots to human beings – could be flattened out by adding neotenous, cartoonish features to the entities that had formerly fallen into the valley.[57] This approach exploits the fact that humans find appealing those characteristics that are reminiscent of the young of our own (as well as many other) species, as commonly used in cartoons.
  • The uncanny appears at any degree of human likeness. Hanson has also pointed out that uncanny entities may appear anywhere in a spectrum ranging from the abstract (e.g., MIT's robot Lazlo) to the perfectly human (e.g., cosmetically atypical people).[57] Capgras syndrome is a relatively rare condition in which the sufferer believes that people (or, in some cases, things) have been replaced with duplicates. These duplicates are rationally accepted as identical in physical properties, but the irrational belief is held that the "true" entity has been replaced with something else. Some sufferers of Capgras syndrome claim that the duplicate is a robot. Ellis and Lewis argue that the syndrome arises from an intact system for overt recognition coupled with a damaged system for covert recognition, which leads to conflict over an individual being identifiable but not familiar in any emotional sense.[59] This supports the view that the uncanny valley could arise due to issues of categorical perception that are particular to the way the brain processes information.[54][60]
  • The uncanny valley is a heterogeneous group of phenomena. Phenomena labeled as being in the uncanny valley can be diverse, involve different sense modalities, and have multiple, possibly overlapping causes, which can range from evolved or learned circuits for early face perception[56][61] to culturally-shared psychological constructs.[62] People's cultural backgrounds may have a considerable influence on how androids are perceived with respect to the uncanny valley.[63]
  • The uncanny valley may be generational. Younger generations, more used to CGI, robots, and such, may be less likely to be affected by this hypothesized issue.[64]

Similar effects

An effect similar to the uncanny valley was noted by Charles Darwin in 1839:

The expression of this [Trigonocephalus] snake’s face was hideous and fierce; the pupil consisted of a vertical slit in a mottled and coppery iris; the jaws were broad at the base, and the nose terminated in a triangular projection. I do not think I ever saw anything more ugly, excepting, perhaps, some of the vampire bats. I imagine this repulsive aspect originates from the features being placed in positions, with respect to each other, somewhat proportional to the human face; and thus we obtain a scale of hideousness.

— Charles Darwin, The Voyage of the Beagle[65]

A similar "uncanny valley" effect could, according to the ethical-futurist writer Jamais Cascio, show up when humans begin modifying themselves with transhuman enhancements (cf. body modification), which aim to improve the abilities of the human body beyond what would normally be possible, be it eyesight, muscle strength, or cognition.[66] So long as these enhancements remain within a perceived norm of human behavior, a negative reaction is unlikely, but once they exceed normal human variety, revulsion can be expected. However, according to this theory, once such technologies gain further distance from human norms, "transhuman" individuals would cease to be judged on human levels and instead be regarded as separate entities altogether (this point is what has been dubbed "posthuman"), and it is here that acceptance would rise once again out of the uncanny valley.[66] Another example comes from "pageant retouching" photos, especially of children, which some find disturbingly doll-like.[67]

Due to rapid advancements in the areas of artificial intelligence and affective computing, cognitive scientists have also suggested the possibility of an "Uncanny Valley of Mind".[68][69] Accordingly, people might experience strong feelings of aversion if they encounter highly advanced, emotion-sensitive technology. Among the possible explanations for this phenomenon, both a perceived loss of human uniqueness and expectations of immediate physical harm are discussed by contemporary research.

In computer animation and special effects

A number of films that use computer-generated imagery to show characters have been described by reviewers as giving a feeling of revulsion or "creepiness" as a result of the characters looking too realistic. Examples include the following:

  • According to roboticist Dario Floreano, the baby character Billy in Pixar's groundbreaking 1988 animated short film Tin Toy provoked negative audience reactions, which first led the film industry to take the concept of the uncanny valley seriously.[70][71]
  • The 2001 film Final Fantasy: The Spirits Within, the first photorealistic computer-animated feature film, provoked negative reactions from some viewers due to its near-realistic yet imperfect visual depictions of human characters.[72][73][74] The Guardian critic Peter Bradshaw stated that while the film's animation is brilliant, the "solemnly realist human faces look shriekingly phoney precisely because they're almost there but not quite".[75] Rolling Stone critic Peter Travers wrote of the film, "At first it's fun to watch the characters, […] But then you notice a coldness in the eyes, a mechanical quality in the movements".[76]
  • Several reviewers of the 2004 animated film The Polar Express called its animation eerie. CNN.com reviewer Paul Clinton wrote, "Those human characters in the film come across as downright... well, creepy. So The Polar Express is at best disconcerting, and at worst, a wee bit horrifying".[77] The term "eerie" was used by reviewers Kurt Loder[78] and Manohla Dargis,[79] among others. Newsday reviewer John Anderson called the film's characters "creepy" and "dead-eyed", and wrote that "The Polar Express is a zombie train".[80] Animation director Ward Jenkins wrote an online analysis describing how changes to the Polar Express characters' appearance, especially to their eyes and eyebrows, could have avoided what he considered a feeling of deadness in their faces.[81]
  • In a review of the 2007 animated film Beowulf, New York Times technology writer David Gallagher wrote that the film failed the uncanny valley test, stating that the film's villain, the monster Grendel, was "only slightly scarier" than the "closeups of our hero Beowulf's face... allowing viewers to admire every hair in his 3-D digital stubble".[82]
  • Some reviewers of the 2009 animated film A Christmas Carol criticized its animation as creepy. Joe Neumaier of the New York Daily News said of the film, "The motion-capture does no favors to co-stars [Gary] Oldman, Colin Firth and Robin Wright Penn, since, as in 'Polar Express,' the animated eyes never seem to focus. And for all the photorealism, when characters get wiggly-limbed and bouncy as in standard Disney cartoons, it's off-putting".[83] Mary Elizabeth Williams of Salon.com wrote of the film, "In the center of the action is Jim Carrey -- or at least a dead-eyed, doll-like version of Carrey".[84]
  • In the 2010 live-action film The Last Airbender, the character Appa, the flying bison, has been called "uncanny". Geekosystem's Susana Polo found the character "really quite creepy", noting "that prey animals (like bison) have eyes on the sides of their heads, and so moving them to the front without changing [the] rest of the facial structure tips us right into the uncanny valley".[85]
  • The 2010 live-action film Tron: Legacy features a computer-generated young version of actor Jeff Bridges (as a young Kevin Flynn and Clu), which reviewers have criticized as creepy. Vic Holtreman of Screen Rant wrote, "Finally we get to the CGI recreation of Jeff Bridges as a young man. Have we finally gotten past the 'uncanny valley' (where the mind/eye discerns that something is just not quite 'real')? Sadly, no. As long as young Kevin Flynn wasn't talking, the face looked great – but as soon as he spoke, the creepy factor pops up. He looked like he had a face full of Botox […] One could argue that Clu was a computer program and should have been 'stiff' compared to a human, but even in the opening scene of the film where we see the real-world young Kevin Flynn, the same effect is present".[86] Manohla Dargis of The New York Times wrote that the appearance of young Kevin was "a simulacrum that here looks like an animated death mask".[87] Amy Biancolli of the Houston Chronicle wrote, "Regarding Bridges' digital de-aging: It's creepy. It's a little less creepy on Clu's face (he's not human, anyway) than on Kevin's in a scene from 1989, but on either of them it has the waxen look of storefront mannequins - or over-Botoxed socialites".[88]
  • The 2011 animated film Mars Needs Moms was widely criticized for being creepy and unnatural because of its style of animation. The film was among the biggest box office bombs in history, which may have been due in part to audience revulsion.[89][90][91][92] (Mars Needs Moms was produced by Robert Zemeckis's production company, ImageMovers, which had previously produced The Polar Express, Beowulf, and A Christmas Carol.)
  • Reviewers had mixed opinions regarding whether the 2011 animated film The Adventures of Tintin: The Secret of the Unicorn was affected by the uncanny valley. Daniel D. Snyder of The Atlantic wrote, "Instead of trying to bring to life Hergé's beautiful artwork, Spielberg and co. have opted to bring the movie into the 3D era using trendy motion-capture technique to recreate Tintin and his friends. Tintin's original face, while barebones, never suffered for a lack of expression. It's now outfitted with an alien and unfamiliar visage, his plastic skin dotted with pores and subtle wrinkles." He added, "In bringing them to life, Spielberg has made the characters dead."[93] N.B. of The Economist called elements of the animation "grotesque", writing, "Tintin, Captain Haddock and the others exist in settings that are almost photo-realistic, and nearly all of their features are those of flesh-and-blood people. And yet they still have the sausage fingers and distended noses of comic-strip characters. It's not so much 'The Secret of the Unicorn' as 'The Invasion of the Body Snatchers'".[94] However, other reviewers felt that the film avoided the uncanny valley despite its animated characters' realism. Critic Dana Stevens of Slate wrote, "With the possible exception of the title character, the animated cast of Tintin narrowly escapes entrapment in the so-called 'uncanny valley'".[95] Wired magazine editor Kevin Kelly wrote of the film, "we have passed beyond the uncanny valley into the plains of hyperreality".[96]
  • The 2015 live-action film Terminator Genisys contains a scene featuring a computer-generated young version of actor Arnold Schwarzenegger (as a young T-800 Terminator), which some reviewers thought was affected by the uncanny valley. Eric Mungenast of the East Valley Tribune wrote of the film, "One notable technological problem stems from the attempt to make old Schwarzenegger look young again with some digital manipulation — it definitely doesn’t cross over the uncanny valley".[97] Writer and reviewer Simon Prior wrote of the film, "Even the 'Synthespian' version of Arnie isn’t as horrific as you might expect, although it’s still clear that CGI hasn’t yet conquered the Uncanny Valley issue".[98] (The film's predecessor, 2009's Terminator Salvation, also has a scene with a CG young Schwarzenegger that provoked similar reactions from some viewers,[99][100] but unlike in the newer film, this earlier version does not speak.)
  • The 2016 live-action film Rogue One features the CGI likenesses of the deceased Peter Cushing and a young Carrie Fisher "reprising" their respective roles of Grand Moff Tarkin and Princess Leia. Graeme McMillan of The Hollywood Reporter wrote, "The Tarkin that appears in Rogue One is a mix of CGI and live-action, and it… doesn't work. — In fact, for a special effect that's still stuck in the depths of the uncanny valley, it's surprising just how much of the movie Tarkin appears in, quietly undermining every scene he's in by somehow seeming less real than the various inhuman aliens in the movie."[101] Kelly Lawler of USA Today wrote, "the Leia cameo is so jarring as to take the audience completely out of the film at its most emotional moment. Leia’s appearance was meant to help the film end on a hopeful note [...] but instead it ends on a weird and unsettling one".[102]

In fiction

The fear aroused by contemplating a "person" with small aberrations, an impression strengthened by its movement, was noted in 1818 by Mary Shelley in the novel Frankenstein; or, The Modern Prometheus:[citation needed]

How can I describe my emotions at this catastrophe, or how delineate the wretch whom with such infinite pains and care I had endeavoured to form? His limbs were in proportion, and I had selected his features as beautiful. Beautiful! Great God! His yellow skin scarcely covered the work of muscles and arteries beneath; his hair was of a lustrous black, and flowing; his teeth of a pearly whiteness; but these luxuriances only formed a more horrid contrast with his watery eyes, that seemed almost of the same colour as the dun-white sockets in which they were set, his shrivelled complexion and straight black lips. […] A mummy again endued with animation could not be so hideous as that wretch. I had gazed on him while unfinished; he was ugly then; but when those muscles and joints were rendered capable of motion, it became a thing such as even Dante could not have conceived.

— Mary Shelley, Frankenstein; or, The Modern Prometheus.

In the 2008 30 Rock episode "Succession", Frank Rossitano explains the uncanny valley concept, using a graph and Star Wars examples, to try to convince Tracy Jordan that his dream of creating a pornographic video game is impossible. He also references the computer-animated film The Polar Express.[103]


References

  1. ^ a b c Mathur, Maya B.; Reichling, David B. (2016). "Navigating a social world with robot partners: a quantitative cartography of the Uncanny Valley" (PDF). Cognition. 146: 22–32. doi:10.1016/j.cognition.2015.09.008. PMID 26402646.
  2. ^ MacDorman, K. F.; Ishiguro, H. (2006). "The uncanny advantage of using androids in cognitive and social science research". Interaction Studies. 7 (3): 297–337. doi:10.1075/is.7.3.03mac.
  3. ^ MacDorman, K. F.; Chattopadhyay, D. (2016). "Reducing consistency in human realism increases the uncanny valley effect; increasing category uncertainty does not". Cognition. 146: 190–205. doi:10.1016/j.cognition.2015.09.019.
  4. ^ MacDorman, Karl F.; Chattopadhyay, Debaleena (2017). "Categorization-based stranger avoidance does not explain the uncanny valley effect". Cognition. 161: 132–135. doi:10.1016/j.cognition.2017.01.009.
  5. ^ Mori, M. (2012). Translated by MacDorman, K. F.; Kageki, Norri. "The uncanny valley". IEEE Robotics and Automation. 19 (2): 98–100. doi:10.1109/MRA.2012.2192811.
  6. ^ "An Uncanny Mind: Masahiro Mori on the Uncanny Valley and Beyond". IEEE Spectrum. 12 June 2012. Retrieved 1 April 2015.
  7. ^ MacDorman, K. F.; Vasudevan, S. K.; Ho, C.-C. (2009). "Does Japan really have robot mania? Comparing attitudes by implicit and explicit measures" (PDF). AI & Society. 23 (4): 485–510. doi:10.1007/s00146-008-0181-2.
  8. ^ Jentsch, E. (25 August 1906). "Zur Psychologie des Unheimlichen". Psychiatrisch-Neurologische Wochenschrift. 8 (22): 195–198.
  9. ^ Mitchell, W. J.; Szerszen Sr., K. A.; Lu, A. S.; Schermerhorn, P. W.; Scheutz, M.; MacDorman, K. F. (2011). "A mismatch in the human realism of face and voice produces an uncanny valley". i-Perception. 2 (1): 10–12. doi:10.1068/i0415. PMC 3485769. PMID 23145223.
  10. ^ Misselhorn, C. (2009). "Empathy with inanimate objects and the uncanny valley". Minds and Machines. 19: 345–359. doi:10.1007/s11023-009-9158-2.
  11. ^ Freud, S. (1919/2003). The Uncanny [Das Unheimliche] (D. McLintock, Trans.). New York: Penguin.
  12. ^ Tinwell, Angela (2014-12-04). The Uncanny Valley in Games and Animation. CRC Press. pp. 165–. ISBN 9781466586956. Retrieved 13 January 2015.
  13. ^ a b Mori, M. (2012) [1970]. "The uncanny valley". IEEE Robotics & Automation Magazine. 19 (2): 98–100. doi:10.1109/MRA.2012.2192811.
  14. ^ Green, R. D.; MacDorman, K. F.; Ho, C.-C.; Vasudevan, S. K. (2008). "Sensitivity to the proportions of faces that vary in human likeness" (PDF). Computers in Human Behavior. 24 (5): 2456–2474. doi:10.1016/j.chb.2008.02.019.
  15. ^ a b Rhodes, G.; Zebrowitz, L. A., eds. (2002). Facial Attractiveness: Evolutionary, Cognitive, and Social Perspectives. Ablex Publishing.
  16. ^ MacDorman & Ishiguro, 2006, p. 313.
  17. ^ MacDorman, Green, Ho, & Koch, 2009, p. 696.
  18. ^ Roberts, S. Craig (2012). Applied Evolutionary Psychology. Oxford University Press. p. 423. ISBN 9780199586073.
  19. ^ Ramey, 2005.
  20. ^ MacDorman & Ishiguro, 2006, p. 303.
  21. ^ a b c d Saygin, A. P. (2011). "The Thing That Should Not Be: Predictive Coding and the Uncanny Valley in Perceiving Human and Humanoid Robot Actions". Social Cognitive and Affective Neuroscience. 7: 413–422. doi:10.1093/scan/nsr025.
  22. ^ UCSD News. "Your Brain on Androids".
  23. ^ MacDorman, K. F., Vasudevan, S. K., & Ho, C.-C., 2009.
  24. ^ Yalom, Irvin D. (1980). Existential Psychotherapy. New York: Basic Books.
  25. ^ Gelbin, Cathy S. (2011). "Introduction to The Golem Returns" (PDF). University of Michigan Press. Retrieved 18 December 2015.
  26. ^ a b c Ferrey, A. E.; Burleigh, T. J.; Fenske, M. J. (2015). "Stimulus-category competition, inhibition, and affective devaluation: a novel account of the uncanny valley". Frontiers in Psychology. 6: 249. doi:10.3389/fpsyg.2015.00249.
  27. ^ Elliot, A. J.; Devine, P. G. (1994). "On the motivational nature of cognitive dissonance: Dissonance as psychological discomfort". Journal of Personality and Social Psychology. 67 (3): 382–394. doi:10.1037/0022-3514.67.3.382.
  28. ^ a b Burleigh, T. J.; Schoenherr, J. R.; Lacroix, G. L. (2013). "Does the uncanny valley exist? An empirical test of the relationship between eeriness and the human likeness of digitally created faces" (PDF). Computers in Human Behavior. 29 (3). doi:10.1016/j.chb.2012.11.021.
  29. ^ a b Yamada, Y.; Kawabe, T.; Ihaya, K. (2013). "Categorization difficulty is associated with negative evaluation in the "uncanny valley" phenomenon". Japanese Psychological Research. 55 (1): 20–32. doi:10.1111/j.1468-5884.2012.00538.x.
  30. ^ Schoenherr, J. R.; Burleigh, T. J. (2014). "Uncanny sociocultural categories". Frontiers in Psychology. 5: 1456. doi:10.3389/fpsyg.2014.01456.
  31. ^ Moore, R. K. (2012). "A Bayesian explanation of the 'Uncanny Valley' effect and related psychological phenomena". Scientific Reports. 2: 864. doi:10.1038/srep00864.
  32. ^ Chattopadhyay, D.; MacDorman, K. F. (2016). "Familiar faces rendered strange: Why inconsistent realism drives characters into the uncanny valley". Journal of Vision. 16 (11): 7, 1–25. doi:10.1167/16.11.7.
  33. ^ Kätsyri, J.; Förger, K.; Mäkäräinen, M.; Takala, T. (2015). "A review of empirical evidence on different uncanny valley hypotheses: support for perceptual mismatch as one road to the valley of eeriness". Frontiers in Psychology. 6: 390. doi:10.3389/fpsyg.2015.00390.
  34. ^ MacDorman, K. F.; Chattopadhyay, D. (2017). "Categorization-based stranger avoidance does not explain the uncanny valley". Cognition. 161: 129–135. doi:10.1016/j.cognition.2017.01.009.
  35. ^ Burleigh, T. J.; Schoenherr, J. R. (2015). "A reappraisal of the uncanny valley: categorical perception or frequency-based sensitization?". Frontiers in Psychology. 5: 1488. doi:10.3389/fpsyg.2014.01488.
  36. ^ Kaplan, F. (2004). "Who is afraid of the humanoid? Investigating cultural differences in the acceptance of robots". International Journal of Humanoid Robotics. 1 (3): 465–480. doi:10.1142/s0219843604000289.
  37. ^ MacDorman, K. F.; Entezari, S. O. (2015). "Individual differences predict sensitivity to the uncanny valley". Interaction Studies. 16 (2): 141–172. doi:10.1075/is.16.2.01mac.
  38. ^ Ferrari, F.; Paladino, M. P.; Jetten, J. (2016). "Blurring Human–Machine Distinctions: Anthropomorphic Appearance in Social Robots as a Threat to Human Distinctiveness". International Journal of Social Robotics. 8 (2): 287–302. doi:10.1007/s12369-016-0338-y.
  39. ^ MacPherson, Kitta (2009-10-13). "Monkey visual behavior falls into the uncanny valley". Princeton University. Retrieved 2011-03-20.
  40. ^ "Science Exploring the uncanny valley of how brains react to humanoids".
  41. ^ Ramsey, Doug (2010-05-13). "Nineteen Projects Awarded Inaugural Calit2 Strategic Research Opportunities Grants". UCSD. Retrieved 2011-03-20.
  42. ^ Kiderra, Inga. "Your Brain on Androids". UCSD.
[Image caption] An empirically estimated uncanny valley for static robot face images.[1]
[Image caption] In an experiment involving the human lookalike robot Repliee Q2, the uncovered robotic structure underneath Repliee, and the actual human who was the model for Repliee, the human lookalike triggered the highest level of mirror neuron activity.[12]

Morale is down. We are making plenty of money, but the office is teeming with salespeople: well-groomed social animals with good posture and dress shoes, men who chuckle and smooth their hair back when they can’t connect to our VPN. Their corner of the office is loud; their desks are scattered with freebies from other start-ups, stickers and koozies and flash drives. We escape for drinks and fret about our company culture. “Our culture is dying,” we say gravely, apocalyptic prophets all. “What should we do about the culture?”

It’s not just the salespeople, of course. It’s never just the salespeople. Our culture has been splintering for months. Members of our core team have been shepherded into conference rooms by top-level executives who proceed to question our loyalty. They’ve noticed the sea change. They’ve noticed we don’t seem as invested. We don’t stick around for in-office happy hour anymore; we don’t take new hires out for lunch on the company card. We’re not hitting our KPIs, we’re not serious about the OKRs. People keep using the word paranoid. Our primary investor has funded a direct competitor. This is what investors do, but it feels personal: Daddy still loves us, but he loves us less.

We get ourselves out of the office and into a bar. We have more in common than our grievances, but we kick off by speculating about our job security, complaining about the bureaucratic double-downs, casting blame for blocks and poor product decisions. We talk about our IPO like it’s the deus ex machina coming down from on high to save us — like it’s an inevitability, like our stock options will lift us out of our existential dread, away from the collective anxiety that ebbs and flows. Realistically, we know it could be years before an IPO, if there’s an IPO at all; we know in our hearts that money is a salve, not a solution. Still, we are hopeful. We reassure ourselves and one another that this is just a phase; every start-up has its growing pains. Eventually we are drunk enough to change the subject, to remember our more private selves. The people we are on weekends, the people we were for years.

This is a group of secret smokers, and we go in on a communal pack of cigarettes. The problem, we admit between drags, is that we do care. We care about one another. We even care about the executives who can make us feel like shit. We want good lives for them, just like we want good lives for ourselves. We care, for fuck’s sake, about the company culture. We are among the first twenty employees, and we are making something people want. It feels like ours. Work has wedged its way into our identities, and the only way to maintain sanity is to maintain that we are the company, the company is us. Whenever we see a stranger at the gym wearing a T-shirt with our logo on it, whenever we are mentioned on social media or on a client’s blog, whenever we get a positive support ticket, we share it in the company chat room and we’re proud, genuinely proud.

But we see now that we’ve been swimming in the Kool-Aid, and we’re coming up for air. We were lucky and in thrall and now we are bureaucrats, punching at our computers, making other people — some kids — unfathomably rich. We throw our dead cigarettes on the sidewalk and grind them out under our toes. Phones are opened and taxis summoned; we gulp the dregs of our beers as cartoon cars approach on-screen. We disperse, off to terrorize sleeping roommates and lovers, to answer just one, two more emails before bed. Eight hours later we’ll be back in the office, slurping down coffee, running out for congealed breakfast sandwiches, tweaking mediocre scripts and writing halfhearted emails, throwing weary and knowing glances across the table.


I skim recruiter emails and job listings like horoscopes, skidding down to the perks: competitive salary, dental and vision, 401k, free gym membership, catered lunch, bike storage, ski trips to Tahoe, off-sites to Napa, summits in Vegas, beer on tap, craft beer on tap, kombucha on tap, wine tastings, Whiskey Wednesdays, Open Bar Fridays, massage on-site, yoga on-site, pool table, Ping-Pong table, Ping-Pong robot, ball pit, game night, movie night, go-karts, zip line. Job listings are an excellent place to get sprayed with HR's idea of fun and a 23-year-old's idea of work-life balance. Sometimes I forget I'm not applying to summer camp. Customized setup: design your ultimate work station with the latest hardware. Change the world around you. Help humanity thrive by enabling — next! We work hard, we laugh hard, we give great high-fives. We have engineers in TopCoder's Top 20. We're not just another social web app. We're not just another project-management tool. We're not just another payment processor. I get a haircut and start exploring.

Most start-up offices look the same — faux midcentury furniture, brick walls, snack bar, bar cart. Interior designers in Silicon Valley are either brand-conscious or very literal. When tech products are projected into the physical world they become aesthetics unto themselves, as if to insist on their own reality: the office belonging to a home-sharing website is decorated like rooms in its customers’ pool houses and pieds-à-terre; the foyer of a hotel-booking start-up has a concierge desk replete with bell (no concierge); the headquarters of a ride-sharing app gleams in the same colors as the app itself, down to the sleek elevator bank. A book-related start-up holds a small and sad library, the shelves half-empty, paperbacks and object-oriented-programming manuals sloping against one another. It reminds me of the people who dressed like Michael Jackson to attend Michael Jackson’s funeral.

But this office, of a media app with millions in VC funding but no revenue model, is particularly sexy. This is something that an office shouldn’t be, and it jerks my heart rate way, way up. There are views of the city in every direction, fat leather loveseats, electric guitars plugged into amps, teak credenzas with white hardware. It looks like the loft apartment of the famous musician boyfriend I thought I’d have at 22 but somehow never met. I want to take off my dress and my shoes and lie on the voluminous sheepskin rug and eat fistfuls of MDMA, curl my naked body into the Eero Aarnio Ball Chair, never leave.

It’s not clear whether I’m here for lunch or an interview, which is normal. I am prepared for both and dressed for neither. My guide leads me through the communal kitchen, which has the trappings of every other start-up pantry: plastic bins of trail mix and Goldfish, bowls of Popchips and miniature candy bars. There’s the requisite wholesale box of assorted Clif Bars, and in the fridge are flavored water, string cheese, and single-serving cartons of chocolate milk. It can be hard to tell whether a company is training for a marathon or eating an after-school snack. Once I walked into our kitchen and found two Account Mana­gers pounding Shot Bloks, chewy cubes of glucose marketed to endurance athletes.

Over catered Afghan food, I meet the team, including a billionaire who made his fortune from a website that helps people feel close to celebrities and other strangers they’d hate in real life. He asks where I work, and I tell him. “Oh,” he says, not unkindly, snapping a piece of lavash in two, “I know that company. I think I tried to buy you.”


I take another personal day without giving a reason, an act of defiance that I fear is transparent. I spend the morning drinking coffee and skimming breathless tech press, then creep downtown to spend the afternoon in back-to-back interviews at a peanut-size start-up. All of the interviews are with men, which is fine. I like men. I had a boyfriend; I have a brother. The men ask me questions like, "How would you calculate the number of people who work for the United States Postal Service?" and "How would you describe the internet to a medieval farmer?" and "What is the hardest thing you've ever done?" They tell me to stand in front of the whiteboard and diagram my responses. These questions are self-conscious and infuriating, but it only serves to fuel me. I want to impress; I refuse to be discouraged by their self-importance. Here is a character flaw, my industry origin story: I have always responded positively to negging.

My third interview is with the technical cofounder. He enters the conference room in a crisp blue button-down, looking confidently unprepared. He tells me — apologetically — that he hasn’t done many interviews before, and as such he doesn’t have a ton of questions to ask me. Nonetheless, the office manager slated an hour for our conversation. This seems OK: I figure we will talk about the company, I will ask routine follow-up questions, and at four they will let me out for the day, like a middle school student, and the city will absorb me and my private errors. Then he tells me that his girlfriend is applying to law school and he’s been helping her prep. So instead of a conventional interview, he’s just going to have me take a section of the LSAT. I search his face to see if he’s kidding.

“If it’s cool with you, I’m just going to hang out here and check my email,” he says, sliding the test across the table and opening a laptop. He sets a timer.

I finish early, ever the overachiever. I check it twice. The cofounder grades it on the spot. “My mother would be so proud,” I joke, feeling brilliant and misplaced and low, lower than low.


Home is my refuge, except when it’s not. My roommate is turning 30, and to celebrate we are hosting a wine and cheese party at our apartment. Well, she is hosting — I have been invited. Her friends arrive promptly, in business casual. Hundreds of dollars of cheese are represented. “Bi-Rite, obviously,” she says, looking elegant in black silk as she smears Humboldt Fog onto a cracker. My roommate works down on the Peninsula, for a website that everyone loathes but no one can stop using. We occupy different spaces: I am in the start-up world, land of perpetual youth, and she is an adult like any other, navigating a corporation, acting the part, negotiating for her place. I admire and do not understand her; it is possible she finds me amusing. Mostly we talk about exercise.

Classical music streams through the house and someone opens a bottle of proper Champagne, which he reassures us is really from France; people clap when the cork pops. My roommate and I are the same age but I feel like a child at my parents’ party, and I am immediately envious, homesick. I send myself to my room, lock the door, and change into a very tight dress. I’ve gained fifteen pounds in trail mix: it never feels like a meal, but there’s an aggregate effect. When I reenter the living room, I suck in my stomach and slide between people’s backs, looking for a conversation. On the couch, a man in a suit jacket expounds on the cannabis opportunity. Everyone seems very comfortable and nobody talks to me. They tilt their wineglasses at the correct angle; they dust crumbs off their palms with grace. The word I hear the most is revenue. No — strategy. There’s nothing to do but drink and ingratiate myself. I wind up on the roof with a cluster of strangers and find myself missing my mother with a ferocity that carves into my gut. In the distance I can see the tip of the famous Rainbow Flag on Castro Street, whipping.

“Oakland,” one of them says. “That’s where we want to invest.”

“Too dangerous,” says another. “My wife would never go for it.”

“Of course not,” says the first, “but you don’t buy to live there.”

By the time the last guest has filtered out, I am in leggings and a sweatshirt, cleaning ineffectively: scooping up cheese rinds, rinsing plastic glasses, sneaking slices of chocolate cake with my damp hands. My roommate comes to say goodnight, and she is beautiful: tipsy but not toasted, radiant with absorbed goodwill. She repairs to her room with her boyfriend, and I listen from down the hall as they quietly undress, ease into bed, turn over into sleep.


Ours is a “pickax-during-the-gold-rush” product, the kind venture capitalists love to get behind. The product provides a shortcut to database infrastructure, giving people information about their apps and websites that they wouldn’t necessarily have on their own. All our customers are other software companies. This is a privileged vantage point from which to observe the tech industry. I would say more, but I signed an NDA.

I am the inaugural customer support rep, or Support Engineer. My job involves looking at strangers’ codebases and telling them what they’ve done wrong in integrating our product with theirs, and how to fix it. There are no unsolvable problems. Perhaps there are not even problems, only mistakes. After nearly three years in book publishing, where I mostly moved on instinct, taste, and feeling, the clarity of this soothes me.

I learn the bare minimum, code-wise, to be able to do my job well — to ask questions only when I’m truly in over my head. Still, I escalate problems all the time. I learn how to talk to our customers about the technology without ever touching the technology itself. I find myself confidently discussing cookies, data mapping, the difference between server-side and client-side integrations. “Just add logic!” I advise cheerfully. This means nothing to me but generally resonates with engineers. It shocks me every time someone nods along.

This is not to confuse confidence with pride. I doubt myself daily. I feel lucky to have this job; I feel desperately out of place. My previous boss — breezy and helpful, earnest in the manner of a man in his early twenties bequeathed $4 million to disrupt libraries — had encouraged me to apply for the role; I had joined his publishing start-up too early and needed something new. “This is the next big company,” he had said. “It’s a rocket ship.” He was right. I had been banking on him being right. Still, there are days when all I want is to disembark, eject myself into space, admit defeat. I pander and apologize and self-deprecate until my manager criticizes me for being a pleaser, at which point it seems most strategic to stop talking.

I convince myself and everyone else that I want to learn how to code, and I’m incentivized to do it: I’m told I will be promoted to Solutions Architect if I can build a networked, two-player game of checkers in the next few months. One lazy Saturday I give it three hours, then call it a day. I resent the challenge; I resent myself. I tell everyone I can’t do it, which is a lesser evil than not wanting to. In this environment, my lack of interest in learning JavaScript feels like a moral failure.

Around here, we nonengineers are pressed to prove our value. The hierarchy is pervasive, ingrained in the industry’s dismissal of marketing and its insistence that a good product sells itself; evident in the few “office hours” established for engineers (our scheduled opportunity to approach with questions and bugs); reflected in our salaries and equity allotment, even though it’s harder to find a good copywriter than a liberal-arts graduate with a degree in history and twelve weeks’ training from an uncredentialed coding dojo. This is a cozy home for believers in bootstrapping and meritocracy, proponents of shallow libertarianism. I am susceptible to it, too. “He just taught himself to code over the summer,” I hear myself say one afternoon, with the awe of someone relaying a miracle.

Our soft skills are a necessary inconvenience. We bloat payroll; we dilute conversation; we create process and bureaucracy; we put in requests for yoga classes and Human Resources. We’re a dragnet — though we tend to contribute positively to diversity metrics. There is quiet pity for the MBAs.

It’s easy for me to dissociate from the inferiority of my job because I’ve never been particularly proud of my customer-service skills. I’m good at subservience, but it isn’t what I would lead with on a first date. I enjoy translating between the software and the customers. I like breaking down information, demystifying technical processes, being one of few with this specific expertise. I like being bossy. People are interesting — unpredictable, emotional — when their expensive software product doesn’t behave as expected. Plus, I am almost always permissioned for God Mode.

After a year, my job evolves from support into something the industry calls Customer Success. The new role is more interesting, but the title is so corny and oddly stilted in its pseudosincerity that I cannot bring myself to say it out loud. This turns out to work to my advantage: when I change my email signature to read “Technical Account Manager” instead, it actually elicits a response from previously uncommunicative clients who are — I regret having to buttress stereotypes — always engineers, always founders, and always men.

I visit a friend at his midsize software company and see a woman typing at a treadmill desk. That’s a little on the nose, I whisper, and he whispers back, You have no idea — she does Customer Success.


My coworkers are all skilled at maneuvering something called a RipStik, a two-wheeled, skateboard-like invention with separated pivoting plates, one for each foot. They glide across the office, twisting and dipping with laptops in hand, taking customer calls on their personal cell phones, shuttling from desk to kitchen to conference room. Mastering the RipStik is a rite of passage, and I cannot do it. After a few weeks of trying, I order a tiny plastic skateboard off eBay, a neon-green Penny board with four wheels that looks coolest when it’s not being ridden. I come into the office over the weekend and practice on the Penny, perfecting my balance. It’s fast, dangerously so. Mostly I put it under my standing desk and then get onboard, rocking back and forth as I work.


The billboards along the stretch of the 101 that sweeps Silicon Valley have been punchy and declarative lately, advertising apps and other software products that transcend all context and grammatical structure. “We fixed dinner” (meal delivery). “Ask your developer” (cloud-based communications). “How tomorrow works” (file storage). The ads get less dystopian the farther you get from the city: by the airport, they grow international-businessman corporate, and as the land turns over into suburbs you can almost hear the gears shift. A financial-services company — one that’s been around for more than a century, a provider of life insurance, investment management, and, in the 1980s, bald-faced fraud — holds a mirror to an audience that perhaps won’t want to recognize itself. The ad reads, “Donate to a worthy cause: your retirement.”

I attend a networking event at an office whose walls are hung with inspirational posters that quote tech luminaries I’ve never heard of. The posters say things like “Life is short: build stuff that matters” and “Innovate or die.” I am dead. Our interior designer tried hanging posters like these in our office; the front-end engineers relocated them to the bathroom, placed them face to the wall. The event is packed; people roam in clusters, like college freshmen during orientation week. There are a few women, but most of the attendees are young men in start-up twinsets: I pass someone wearing a branded hoodie, unzipped to reveal a shirt with the same logo. I google the company on my cellphone to see what it is, to see if they’re hiring. “We have loved mobile since we saw Steve Jobs announce the first iPhone,” their website declares, and I close the browser, thinking, how basic.

The tenor of these events is usually the same: guilelessly optimistic. People are excited to talk about their start-ups, and all small talk is a prelude to a pitch. I’m guilty of this, too; I’m proud of my work, and our recruiting bonus is 15 percent of my salary (alignment of company–employee goals and incentives). I talk to two European men who are building a food-delivery app geared toward healthy eaters, like people on the Paleo diet. They’re extremely polite and oddly buff. They say they’ll invite me to their beta, and I am excited. I like to be on the inside track. I want to help. I tell them that I know a lot of people on the Paleo diet, like the guy in marketing who stores plastic baggies of wet, sautéed meat in the communal refrigerator. I chatter on about Paleo adherents and people who do CrossFit and practice polyamory, and how I admire that they manage to do these things without detrimental physical or emotional consequences. I’ve learned so much about polyamory and S&M since moving to San Francisco. Ask me anything about The Ethical Slut; ask me anything about Sex at Dawn. That night, I download the healthy-food app and can’t ever imagine using it.

My opinion doesn’t matter, of course: a few months later I’ll find out that the Europeans raised $30-odd million after pivoting to a new business model and undergoing a radical rebranding, and I’ll find this out when our company starts paying them thousands to organize the catering for our in-office meals. The food is served in sturdy tinfoil troughs, and people race to be first in line for self-serve. It is low-carb and delicious, healthier than anything I’ve ever cooked, well worth someone else’s money, and every afternoon I shovel it into my body.


Our own 101 billboard is unveiled on a chilly morning in November, just a few months after I’ve started. Everyone gets to work early; our office manager orders fresh-squeezed orange juice and pastries, cups of yogurt parfait with granola strata. We’ve arranged for a company field trip around the corner. We walk in a pack, hands in our pockets, and take a group photograph in front of our ad. I forward it to my parents in New York. In the photograph we’ve got our arms around one another, smiling and proud. The start-up is still small, just thirty of us or so, but within a year we’ll be almost a hundred employees, and shortly thereafter, I’ll be gone.


I have lunch with one of the salespeople, and I like him a lot. He’s easy to talk to; he’s easy to talk to for a living. We eat large, sloppy sandwiches in the park and gaze out at the tourists.

“So how’d you end up choosing our company?” I ask. Roast turkey drops from my sandwich onto the grass.

“Come on,” he says. “I heard there were a bunch of twentysomethings crushing it in the Valley. How often does that happen?”

I lean in and go to a panel on big data. There are two venture capitalists onstage, dressed identically. They are exceptionally sweaty. Even from the back row, the place feels moist. I’ve never been in a room with so few women and so much money, and so many people champing at the bit to get a taste. It’s like watching two ATMs in conversation. “I want big data on men watching other men talk about data,” I whisper to my new friend in sales, who ignores me.

Back at the office, I walk into the bathroom to find a coworker folded over the sink, wiping her face with a paper towel. There aren’t many women at this company, and I have encountered almost all of them, at one point or another, crying in the bathroom. “I just hope this is all worth it,” she spits in my direction. I know what she means — she’s talking about money — but I also know how much equity she has, and I’m confident that even in the best possible scenario, whatever she’s experiencing is definitely not. She’s out the door and back at her desk before I can conjure up something consoling.

Half of the conversations I overhear these days are about money, but nobody likes to get specific. It behooves everyone to stay theoretical.

A friend’s roommate wins a hackathon with corporate sponsorship, and on a rainy Sunday afternoon he is awarded $500,000. (It is actually a million, but who would believe me?) That evening they throw a party at their duplex, which feels like a normal event in the Burning Man off-season — whippits, face paint, high-design vaporizers — except for the oversize foamcore check propped laterally against the bathroom doorframe.

Out by the porch cooler, I run into a friend who works at a company — cloud something — that was recently acquired. I make a joke about this being a billionaire boys’ club and he laughs horsily, disproportionate to the humor. I’ve never seen him like this, but then I’ve never met anyone who’s won the lottery, seen anyone so jazzed on his own good luck. He opens a beer using the edge of his lighter and invites me to drive up to Mendocino in his new convertible. What else do you do after a windfall? “You know who the real winner was, though?” he asks, then immediately names a mutual acquaintance, a brilliant and introverted programmer who was the company’s first engineering hire, very likely the linchpin. “Instant multimillionaire,” my friend says incredulously, as if hearing his own information for the first time. “At least eight figures.”

“Wow,” I say, handing my beer to him to open. “What do you think he wants to do?”

My friend deftly pops off the bottle cap, then looks at me and shrugs. “That’s a good question,” he says, tapping the lighter against the side of his beer. “I don’t think he wants to do anything.”


An old high school friend emails out of the blue to introduce me to his college buddy: a developer, new to the city, “always a great time!” The developer and I agree to meet for drinks. It’s not clear whether we’re meeting for a date or networking. Not that there’s always a difference: I have one friend who found a job by swiping right and know countless others who go to industry conferences just to fuck — nothing gets them hard like a nonsmoking room charged to the company AmEx. The developer is very handsome and stiltedly sweet. He seems like someone who has opinions about fonts, and he does. It’s clear from the start that we’re there to talk shop. We go to a tiny cocktail bar in the Tenderloin with textured wallpaper and a scrawny bouncer. Photographs are forbidden, which means the place is designed for social media. This city is changing, and I am disgusted by my own complicity.

“There’s no menu, so you can’t just order, you know, a martini,” the developer says, as if I would ever. “You tell the bartender three adjectives, and he’ll customize a drink for you accordingly. It’s great. It’s creative! I’ve been thinking about my adjectives all day.”

What is it like to be fun? What is it like to feel like you’ve earned this? I try to game the system by asking for something smoky, salty, and angry, crossing my fingers for mezcal; it works. We lean against a wall and sip. The developer tells me about his loft apartment in the Mission, his specialty bikes, how excited he is to go on weeknight camping trips. We talk about cameras and books. We talk about cities we’ve never visited. I tell him about the personal-shopper service my coworkers all signed up for, how three guys came into work wearing the same sweater; he laughs but looks a little guilty. He’s sweet and a little shy about his intelligence, and I know we’ll probably never hang out again. Still, I go home that night with the feeling that something, however small, has been lifted.


Venture capitalists have spearheaded massive innovation in the past few decades, not least of which is their incubation of this generation’s very worst prose style. The internet is choked with blindly ambitious and professionally inexperienced men giving each other anecdote-based instruction and bullet-point advice. 10 Essential Start-up Lessons You Won’t Learn in School. 10 Things Every Successful Entrepreneur Knows. 5 Ways to Stay Humble. Why the Market Always Wins. Why the Customer Is Never Right. How to Deal with Failure. How to Fail Better. How to Fail Up. How to Pivot. How to Pivot Back. 18 Platitudes to Tape Above Your Computer. Raise Your Way to Emotional Acuity. How to Love Something That Doesn’t Love You Back.

Sometimes it feels like everyone is speaking a different language — or the same language, with radically different rules. At our all-hands meeting, we are subjected to a pep talk. Our director looks like he hasn’t slept in days, but he straightens up and moves his gaze from face to face, making direct and metered eye contact with everyone around the table. “We are making products,” he begins, “that can push the fold of mankind.”

A networking-addicted coworker scrolls through a website where people voluntarily post their own résumés. I spy. He clicks through to an engineer who works for an aggressively powerful start-up, one whose rapid expansion, relentless pursuit of domination, and absence of ethical boundaries scare the shit out of me. Under his current company, the engineer has written this job description: “This is a rocket ship, baby. Climb aboard.”

I am waiting for the train when I notice the ad: it covers the platform below the escalators. The product is an identity-as-a-service app — it stores passwords — but the company isn’t advertising to users; they’re advertising their job openings. They’re advertising to me. The ad features five people standing in V-formation with their arms crossed. They’re all wearing identical blue hoodies. They’re also wearing identical rubber unicorn masks; I am standing on one of their heads. The copy reads, “Built by humans, used by unicorns.”


We hire an engineer fresh out of a top undergraduate program. She walks confidently into the office, springy and enthusiastic. We’ve all been looking forward to having a woman on our engineering team. It’s a big moment for us. Her onboarding buddy brings her around to make introductions, and as they approach our corner, my coworker leans over and cups his hand around my ear: as though we are colluding, as though we are five years old. “I feel sorry,” he says, his breath moist against my neck. “Everyone’s going to hit on her.”

I include this anecdote in an email to my mom. The annual-review cycle is nigh, and I’m on the fence about whether or not to bring up the running list of casual hostilities toward women that add unsolicited spice to the workplace. I tell her about the colleague with the smart-watch app that’s just an animated GIF of a woman’s breasts bouncing in perpetuity; I tell her about the comments I’ve fielded about my weight, my lips, my clothing, my sex life; I tell her that the first woman engineer is also the only engineer without SSH access to the servers. I tell her that compared with other women I’ve met here, I have it good, but the bar is low. It’s tricky: I like these coworkers — and I dish it back — but in the parlance of our industry, this behavior is scalable. I don’t have any horror stories yet; I’d prefer things stay this way. I expect my mother to respond with words of support and encouragement. I expect her to say, “Yes! You are the change this industry needs.” She emails me back almost immediately. “Don’t put complaints about sexism in writing,” she writes. “Unless, of course, you have a lawyer at the ready.”


A meeting is dropped mysteriously onto our calendars, and at the designated time we shuffle warily into a conference room. The last time this happened, we were given forms that asked us to rate various values on a scale of 1 to 5: our desire to lead a team; the importance of work-life balance. I gave both things a 4 and was told I didn’t want it enough.

The conference room has a million-dollar view of downtown San Francisco, but we keep the shades down. Across the street, a bucket drummer bangs out an irregular heartbeat. We sit in a row, backs to the window, laptops open. I look around the room and feel a wave of affection for these men, this small group of misfits who are the only people who understand this new backbone to my life. On the other side of the table, our manager paces back and forth, but he’s smiling. He asks us to write down the names of the five smartest people we know, and we dutifully oblige. I look at the list and think about how much I miss my friends back home, how bad I’ve been at returning phone calls and emails, how bloated I’ve become with start-up self-importance, how I’ve stopped making time for what I once held dear. I can feel blood rush to my cheeks.

“OK,” my manager says. “Now tell me: why don’t they work here?”


Morale, like anything, is just another problem to be solved. There is a high premium on break/fix. To solve our problem, management arranges for a team-building exercise. They schedule it on a weeknight evening, and we pretend not to mind. Our team-building begins with beers in the office, and then we travel en masse to a tiny event space at the mouth of the Stockton Tunnel, where two energetic blondes give us sweatbands and shots. The blondes are attractive and athletic, strong limbs wrapped in spandex leggings and tiny shorts, and we are their smudge-edged foils: an army of soft bellies and stiff necks, hands tight with the threat of carpal tunnel. They smear neon face paint across our foreheads and cheeks and tell us we look awesome. The event space warms up as people get drunk and bounce around the room, taking selfies with the CFO, fist-bumping the cofounders without irony, flirting with the new hires who don’t yet know any better. We play Skee-Ball. We cluster by the bar and have another round, two.

Eventually, we’re dispatched on a scavenger hunt across the city. We pour out of the building and into the street, spreading across rush-hour San Francisco, seeking landmarks; we barrel past tourists and harass taxicab drivers, piss off doormen and stumble into homeless people. We are our own worst representatives, calling apologies over our shoulders. We are sweaty, competitive — maybe happy, really happy.


The meeting begins without fanfare. They thought I was an amazing worker at first, working late every night, last out of the office, but now they wonder if the work was just too hard for me to begin with. They need to know: Am I down for the cause? Because if I’m not down for the cause, it’s time. They will do this amicably. Of course I’m down, I say, trying not to swivel in my ergonomic chair. I care deeply about the company. I am here for it.

When I say I care deeply, what I mean is I am ready to retire. When I say I’m down, what I mean is I’m scared. I cry twice during the meeting, despite my best efforts. I think about the city I left to come here, the plans I’ve canceled and the friends I haven’t made. I think about how hard I’ve worked and how demoralizing it is to fail. I think about my values, and I cry even more. It will be months until I call uncle and quit; it will take almost a year to realize I was gaslighting myself, that I was reading from someone else’s script.


It’s Christmastime; I’m older, I’m elsewhere. On the train to work, I swipe through social media and hit on a post from the start-up’s holiday party, which has its own hashtag. The photograph is of two former teammates, both of them smiling broadly, their teeth as white as I remember. “So grateful to be part of such an amazing team,” the caption reads, and I tap through. The hashtag unleashes a stream of photographs featuring people I’ve never met — beautiful people, the kind of people who look good in athleisure. They look well rested. They look relaxed and happy. They look nothing like me. There’s a photograph of what can only be the pre-dinner floor show: an acrobat in a leotard kneeling on a pedestal, her legs contorted, her feet grasping a bow and arrow, poised to release. Her target is a stuffed heart, printed with the company logo. I scroll past animated photo-booth GIFs of strangers, kissing and mugging for the camera, and I recognize their pride, I empathize with their sense of accomplishment — this was one hell of a year, and they have won. I feel gently ill, a callback to the childhood nausea of being left out.

The holiday party during my year at the company began with an open bar at 4 PM — the same coworker had shellacked my hair into curls in the office bathroom, both of us excited and exhausted, ready to celebrate. Hours later, we danced against the glass windows of the Michelin-starred restaurant our company had bought out for the night, our napkins strewn on the tables, our shoes torn off, our plus-ones shifting in formal wear on the sidelines, the waitstaff studiously withholding visible judgment.

I keep scrolling until I hit a video of this year’s after-party, which looks like it was filmed in a club or at a flashy bar mitzvah, save for the company logo projected onto the wall: flashing colored lights illuminate men in stripped-down suits and women in cocktail dresses, all of them bouncing up and down, waving glow sticks and lightsabers to a background of electronic dance music. They’ve gone pro, I say to myself. “Last night was epic!” someone has commented. Three years have passed since I left. I catch myself searching for my own face anyway.
