Knowledge about others reduces one’s own sense of anonymity


Abstract

Social ties often seem symmetric, but they need not be1,2,3,4,5. For example, a person might know a stranger better than the stranger knows them. We explored whether people overlook these asymmetries and what consequences such oversight might have for their perceptions and actions. Here we show that when people know more about others, they think others know more about them. Across nine laboratory experiments, when participants learned more about a stranger, they felt as if the stranger also knew them better, and they acted as if the stranger were more attuned to their actions. As a result, participants were more honest around strangers they knew more about. We tested this further with a field experiment in New York City, in which we provided residents with mundane information about neighbourhood police officers. We found that the intervention shifted residents’ perceptions of officers’ knowledge of illegal activity, and it may even have reduced crime. It appears that our sense of anonymity depends not only on what others know about us but also on what we know about them.


Fig. 1: Treatment-control differences in crime after policing intervention.
Fig. 2: Residents’ perceptions of officer knowledge predict crime reductions.


Data availability

The Open Science Framework page for this project (https://osf.io/mkgwr/) includes all data from laboratory experiments and all data necessary to reproduce the results of the field experiment.

Code availability

The code for running the laboratory experiments online and for analysing the data from the field experiment is available on the Open Science Framework (https://osf.io/mkgwr/).

References

  1. Krackhardt, D. & Kilduff, M. Whether close or far: social distance effects on perceived balance in friendship networks. J. Pers. Soc. Psychol. 76, 770–782 (1999).

  2. Davis, J. A. in Theories of Cognitive Consistency (eds Abelson, R. P. et al.) 544–550 (Rand McNally, 1968).

  3. Heider, F. The Psychology of Interpersonal Relations (Wiley, 1958).

  4. DeSoto, C. B. Learning a social structure. J. Abnorm. Soc. Psychol. 60, 417–421 (1960).

  5. Freeman, L. C. Filling in the blanks: a theory of cognitive categories and the structure of social affiliation. Soc. Psychol. Q. 55, 118–127 (1992).

  6. Rand, D. G. et al. Social heuristics shape intuitive cooperation. Nat. Commun. 5, 3677 (2014).

  7. Holoien, D. S., Bergsieker, H. B., Shelton, J. N. & Alegre, J. M. Do you really understand? Achieving accuracy in interracial relationships. J. Pers. Soc. Psychol. 108, 76–92 (2015).

  8. Epley, N., Keysar, B., Van Boven, L. & Gilovich, T. Perspective taking as egocentric anchoring and adjustment. J. Pers. Soc. Psychol. 87, 327–339 (2004).

  9. Nickerson, R. S. How we know—and sometimes misjudge—what others know: imputing one’s own knowledge to others. Psychol. Bull. 125, 737–759 (1999).

  10. Gilovich, T., Savitsky, K. & Medvec, V. H. The illusion of transparency: biased assessments of others’ ability to read one’s emotional states. J. Pers. Soc. Psychol. 75, 332–346 (1998).

  11. Gilovich, T. & Savitsky, K. The spotlight effect and the illusion of transparency: egocentric assessments of how we’re seen by others. Curr. Dir. Psychol. Sci. 8, 165–168 (1999).

  12. Milgram, S. The experience of living in cities. Science 167, 1461–1468 (1970).

  13. Diener, E., Fraser, S. C., Beaman, A. L. & Kelem, R. T. Effects of deindividuation variables on stealing among Halloween trick-or-treaters. J. Pers. Soc. Psychol. 33, 178–183 (1976).

  14. Zhong, C., Bohns, V. K. & Gino, F. Good lamps are the best police: darkness increases dishonesty and self-interested behavior. Psychol. Sci. 21, 311–314 (2010).

  15. Andreoni, J. & Petrie, R. Public goods experiments without confidentiality: a glimpse into fund-raising. J. Public Econ. 88, 1605–1623 (2004).

  16. Yoeli, E., Hoffman, M., Rand, D. & Nowak, M. Powering up with indirect reciprocity in a large-scale field experiment. Proc. Natl Acad. Sci. USA 110, 10424–10429 (2013).

  17. Ernest-Jones, M., Nettle, D. & Bateson, M. Effects of eye images on everyday cooperative behavior: a field experiment. Evol. Hum. Behav. 32, 172–178 (2011).

  18. Pronin, E., Kruger, J., Savitsky, K. & Ross, L. You don’t know me, but I know you: the illusion of asymmetric insight. J. Pers. Soc. Psychol. 81, 639–656 (2001).

  19. Lakens, D. Equivalence tests: a practical primer for t tests, correlations, and meta-analyses. Soc. Psychol. Pers. Sci. 8, 355–362 (2017).

  20. Preacher, K. J., Rucker, D. D. & Hayes, A. F. Addressing moderated mediation hypotheses: theory, methods, and prescriptions. Multivariate Behav. Res. 42, 185–227 (2007).

  21. Parks, R. B., Mastrofski, S. D., DeJong, C. & Gray, M. K. How officers spend their time with the community. Justice Q. 16, 483–518 (1999).

  22. Ba, B. A., Knox, D., Mummolo, J. & Rivera, R. The role of officer race and gender in police-civilian interactions in Chicago. Science 371, 696–702 (2021).

  23. Fryer, R. G. An empirical analysis of racial differences in police use of force. J. Pol. Econ. 127, 1210–1261 (2019).

  24. Voigt, R. et al. Language from police body camera footage shows racial disparities in officer respect. Proc. Natl Acad. Sci. USA 114, 6521–6526 (2017).

  25. Braga, A. A., Papachristos, A. V. & Hureau, D. M. The effects of hot spots policing on crime: an updated systematic review and meta-analysis. Justice Q. 31, 633–663 (2014).

  26. National Academies of Sciences, Engineering, and Medicine. Proactive Policing: Effects on Crime and Communities (The National Academies Press, 2018).

  27. National Research Council. Fairness and Effectiveness in Policing: The Evidence (The National Academies Press, 2004).

  28. Sherman, L. W. & Eck, J. in Evidence-Based Crime Prevention (eds Sherman, L. W. et al.) 295–329 (Routledge, 2002).

  29. Peyton, K., Sierra-Arévalo, M. & Rand, D. G. A field experiment on community policing and police legitimacy. Proc. Natl Acad. Sci. USA 116, 19894–19898 (2019).

  30. Owens, E., Weisburd, D., Amendola, K. L. & Alpert, G. P. Can you build a better cop? Experimental evidence on supervision, training, and policing in the community. Criminol. Public Policy 17, 41–87 (2018).

  31. Sunshine, J. & Tyler, T. The role of procedural justice and legitimacy in shaping public support for policing. Law Soc. Rev. 37, 513–548 (2003).

  32. Chalfin, A., Hansen, B., Weisburst, E. K. & Williams, M. C. Police force size and civilian race. Am. Econ. Rev. Insights (in the press).

  33. Belloni, A., Chernozhukov, V. & Hansen, C. High-dimensional methods and inference on structural and treatment effects. J. Econ. Perspect. 28, 29–50 (2014).

Acknowledgements

This research was supported by the National Institute of Justice (award number 2013-R2-CX-0006). We are grateful to the New York City Police Department, particularly T. Coffey and D. Williamson in the Office of Management Analysis and Planning. Points of view or opinions contained within this document are those of the authors and do not necessarily represent the official position or policies of the New York City Police Department. We also thank the New York City Housing Authority for their assistance with the field experiment. Throughout this project, ideas42 was an essential research partner. We are also grateful to H. Furstenberg-Beckman for thoughtful guidance; A. Alhadeff and W. Tucker for valuable assistance; Crime Lab New York for critical support in the planning and evaluation of the policing intervention, particularly R. Ander, M. Barron, A. Chalfin, K. Falco, V. Gilbert, D. Hafetz, B. Jakubowski, Z. Jelveh, K. Nguyen, L. Parker, J. Lerner, H. Golden, G. Stoddard and N. Weil; V. Nguyen for her support as a research assistant; and J. Ludwig, S. Mullainathan, A. Kumar, E. O’Brien and F. Goncalves for insightful feedback.

Author information

Authors and Affiliations

Authors

Contributions

A.K.S. developed the hypotheses. A.K.S. designed, conducted and analysed the laboratory experiments. A.K.S. and M.L. designed the field intervention. M.L. led the analysis of the field intervention. A.K.S. and M.L. contributed to the manuscript.

Corresponding author

Correspondence to Anuj K. Shah.

Ethics declarations

Competing interests

The authors declare no competing interests.

Peer review

Peer review information

Nature thanks the anonymous reviewers for their contribution to the peer review of this work.

Additional information

Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Extended data figures and tables

Extended Data Fig. 1 Outreach cards.

A sample outreach card (front and back) used in the field intervention. Identifying information has been redacted.

Extended Data Fig. 2 Outreach letters.

A sample letter used in the field intervention. Identifying information has been redacted.

Extended Data Fig. 3 Distribution of point estimates for treatment effect.

As a robustness check, we conducted analyses for various radii ranging up to three blocks around developments: 65 ft., 100 ft., 150 ft., 200 ft., 250 ft., 300 ft., 400 ft., 500 ft. and 750 ft. For each radius, we conducted analyses for cumulative time intervals ranging from the first month after the intervention (February 2018) to the first nine months after the intervention (February through October 2018). Varying both dimensions produced 81 sets of results, based on our primary specification applied to each radius and time interval (see Supplementary Information C.3). This figure shows the distribution of point estimates for the crime reductions across these analyses, along with an Epanechnikov kernel density function over the distribution. The red dot marks where the 250-ft, 3-month result falls in the distribution, indicating that it is in line with the central estimates across all 81 analyses.

Extended Data Fig. 4 Heat map of P-values for treatment effect over time and distance.

As a robustness check, we conducted analyses for various radii ranging up to three blocks around developments: 65 ft., 100 ft., 150 ft., 200 ft., 250 ft., 300 ft., 400 ft., 500 ft. and 750 ft. For each radius, we conducted analyses for cumulative time intervals ranging from the first month after the intervention (February 2018) to the first nine months after the intervention (February through October 2018). Varying both dimensions produced 81 sets of results. This figure shows a heat map of P-values across these 81 specifications, with the 250-ft, 3-month result outlined in blue. P-values are from two-tailed tests based on our primary specification applied to each radius and time interval (see Supplementary Information C.3).
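The 81-specification grid itself is straightforward to enumerate. The following is a minimal R sketch, with the radii and time windows taken from the captions above; estimate_effect is a hypothetical placeholder for the primary specification described in Supplementary Information C.3, not a function from the released code:

        # Hypothetical sketch of the robustness grid: 9 radii x 9 cumulative
        # post-intervention windows = 81 specifications.
        radii_ft <- c(65, 100, 150, 200, 250, 300, 400, 500, 750)
        months_post <- 1:9  # Feb 2018 alone, up to Feb-Oct 2018

        specs <- expand.grid(radius_ft = radii_ft, months_post = months_post)
        nrow(specs)  # 81

        # estimate_effect() stands in for the paper's primary specification and
        # is not defined here; with it, the grid of point estimates would be:
        # specs$estimate <- mapply(estimate_effect, specs$radius_ft, specs$months_post)

        # Density over the estimates, as in Extended Data Fig. 3:
        # plot(density(specs$estimate, kernel = "epanechnikov"))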

Extended Data Table 1 NYCHA development characteristics
Extended Data Table 2 Primary survey outcome estimates
Extended Data Table 3 Exploratory survey outcome estimates
Extended Data Table 4 Crime outcome estimates
Extended Data Table 5 Crime outcome treatment-on-the-treated estimates

Supplementary information

Supplementary Information

This file contains the following sections: A. Lab experiment methods, materials, and results; B. Field experiment methods and materials; C. Field experiment results, tables, and figures; D. Software Used; E. Supplementary references.

Reporting Summary


About this article


Cite this article

Shah, A.K., LaForest, M. Knowledge about others reduces one’s own sense of anonymity. Nature 603, 297–301 (2022). https://doi.org/10.1038/s41586-022-04452-3



Comments

Commenting on this article is now closed.

  1. Shah and LaForest[1] presented empirical results from nine laboratory studies to conclude that learning about a stranger makes a person feel and act as if the stranger had also learned something about them. In a field study, they demonstrated that sharing information about local police officers influenced perceptions of the officers’ knowledge about illegal activities and reduced crime rates. Such a result is potentially very useful[2], if true. Unfortunately, the reported studies that support these claims seem too good to be true[3]. Even if the reported empirical effects reflect reality, it would be very unusual, given random sampling variability, for studies like these to show uniformly significant outcomes. Indeed, if the reported effects reflect reality, a replication of the ten studies with the same sample sizes would produce uniform success with a probability of only 0.019. The absence of any experimental failures is an indication that something has gone wrong in the data collection, analysis, or reporting of these studies. It could be that the ten reported studies are only a subset of an unknown number of studies actually performed, with non-significant outcomes going unreported. Such publication bias produces a set of results that cannot distinguish between real and false effects, so readers should be skeptical about the reported empirical findings, the associated conclusions, and their potential applications. [Note that concerns have also been raised about possible confounds in the field experiment: https://datacolada.org/101. Those concerns are orthogonal to the concerns raised here.]

    The probability that a random sample produces a significant outcome when analyzed with a hypothesis test is known as power, and it increases with larger effects and larger samples. Scientists want experiments with high power, and best practice is to perform a power analysis prior to data collection. Shah and LaForest report that they did not perform any power analyses but that they used large samples relative to studies of similar phenomena. Despite the large samples, the reported effects are small, so the power of most of their experiments appears modest. Table 1 reports the estimated power for a replication of each study using the same sample size as the original. To be favorable to the original studies, the power calculation supposes that the reported means and standard deviations accurately reflect the population values. The probability that ten experiments like these would all produce significant outcomes is the product of the power values, which is 0.019. Thus, if the effects are real, it is highly unlikely that experiments like these would all produce significant outcomes; ten experiments with similar sample sizes should almost surely show some failures to replicate. This raises the question of how the original studies could have been so successful when their own data indicate the implausibility of such a set of findings. Perhaps, despite a decade-long replication crisis in psychology, many scientists continue to engage in Questionable Research Practices (QRPs) that inflate Type I errors and undermine their conclusions.[4]

    The reported outcomes in Shah and LaForest do not seem to be due to QRPs such as optional stopping[5] (gathering data until getting a desired significant outcome) or hypothesizing after the results are known[6] (HARKing) because their methods section indicates that no analyses started before data collection finished and the experimental results are straightforward. Without an alternative explanation, publication bias (suppressing relevant non-significant experiments or measures) might explain how such unreliable experiments could produce such consistent outcomes.

    It is possible that Shah and LaForest were just extremely (un)lucky to pick random samples that happened to consistently produce significant outcomes, but then the reported results almost surely overestimate the true effect, and it is possible that there is no effect at all. Scientists should be skeptical about the empirical results and conclusions reported in Shah and LaForest, and future studies will be needed to evaluate the validity of the claims.

    What sample sizes should be used for future studies of this topic? The answer depends on how confident a scientist wants to be that their random sample will generate a significant outcome. A common recommendation is to plan sample sizes to ensure 0.8 power. The penultimate column of Table 1 shows sample sizes that provide 0.8 power for each experiment if the population effect matches what was reported in the original study. Note that the replication experiments usually require much larger samples than the originals. To fully support the conclusions of Shah and LaForest, 0.8 power might not be deemed sufficient, since the probability that ten experiments with 0.8 power would all produce significant results is only 0.8^10 ≈ 0.107. To ensure a joint probability of 0.8 that all ten experiments produce significant results, one could require that each experiment has a power of 0.8^(1/10) ≈ 0.98 (these calculations are sketched in the code after Table 1). The final column of Table 1 shows that such studies require sample sizes nearly three times larger than the originals. If the population effects are smaller than reported in the original studies, then replication efforts will require even larger samples.

    Table 1. A power analysis indicates that it is very unlikely that ten experiments like the ones in Shah and LaForest would all produce significant outcomes. Future studies of this topic will require much larger sample sizes.

    Exp.  | N    | Power | N for power = 0.8 | N for power = 0.98
    1A    | 397  | 0.70  | 502               | 1028
    1B    | 291  | 0.57  | 504               | 1032
    1C    | 456  | 0.56  | 704               | 1442
    1D    | 543  | 0.60  | 862               | 1766
    2A    | 462  | 0.93  | 310               | 634
    2B    | 552  | 0.81  | 537               | 1047
    3A    | 995  | 0.69  | 1312              | 2690
    3B    | 582  | 0.58  | 972               | 1992
    3C    | 294  | 0.69  | 382               | 782
    Field | 30   | 0.67  | 36                | 71
    All   | 4602 | 0.019 | 6121              | 12484
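
    As a quick check on the table’s arithmetic, the joint probability and the per-study power target can be reproduced in a few lines of R. This is a minimal sketch with values copied from Table 1, not the analysis code posted on the OSF page:

        # Power values from Table 1, Exps 1A-3C and the field experiment.
        power <- c(0.70, 0.57, 0.56, 0.60, 0.93, 0.81, 0.69, 0.58, 0.69, 0.67)

        # Joint probability that all ten studies reach significance.
        prod(power)   # ~0.019, matching the "All" row

        # Per-study power needed for a 0.8 chance that all ten succeed.
        0.8^(1 / 10)  # ~0.98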

    Methods

    Power was calculated using the pwr library[7] in R[8]. Sample sizes for future experiments were calculated using the pwr library and G*Power[9]. R code to reproduce the analyses reported here is available on the Open Science Framework at https://osf.io/tjp6f/
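
    A minimal sketch of how such sample sizes can be computed with pwr follows. It assumes, purely as an illustration (the commenter’s actual calculations are in the OSF code), that each experiment is analyzed as a two-sided, two-sample t-test with equal group sizes and alpha = 0.05:

        library(pwr)

        # Recover the effect size implied by Exp. 1A's total N and reported
        # power (pwr.t.test solves for whichever argument is left NULL; n is
        # the per-group sample size).
        n_per_group <- 397 / 2
        d_1a <- pwr.t.test(n = n_per_group, power = 0.70, sig.level = 0.05)$d

        # Total N needed for 0.8 and 0.98 power at that effect size.
        2 * pwr.t.test(d = d_1a, power = 0.80, sig.level = 0.05)$n  # ~500, cf. 502
        2 * pwr.t.test(d = d_1a, power = 0.98, sig.level = 0.05)$n  # ~1030, cf. 1028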

    References

    1. Shah, A. K. & LaForest, M. Knowledge about others reduces one’s own sense of anonymity. Nature (2022). https://doi.org/10.1038/s41...

    2. John, E. & Bushway, S. D. A feeling of familiarity can deter crime. Nature (2022). https://doi.org/10.1038/d41...

    3. Francis, G. Too good to be true: Publication bias in two prominent studies from experimental psychology. Psychonomic Bulletin & Review, 19, 151–156 (2012). https://doi.org/10.3758/s13423-012-0227-9

    4. Simmons, J. P., Nelson, L. D., & Simonsohn, U. False-positive psychology: Undisclosed flexibility in data collection and analysis allows presenting anything as significant. Psychological Science, 22(11), 1359–1366 (2011). https://doi.org/10.1177/095...

    5. Strube, M. J. SNOOP: A program for demonstrating the consequences of premature and repeated null hypothesis testing. Behavior Research Methods, 38, 24–27 (2006).

    6. Kerr, N. L. HARKing: Hypothesizing After the Results are Known. Personality and Social Psychology Review, 2, 196–217 (1998).

    7. Champely, S., Ekstrom, C., Dalgaard, P., Gill, J., Weibelzahl, S., Ford, C., & Volcic, R. Package ‘pwr’: Basic functions for power analysis (2018).

    8. R Core Team. R: A language and environment for statistical computing. R Foundation for Statistical Computing, Vienna, Austria (2017). https://www.r-project.org/

    9. Faul, F., Erdfelder, E., Lang, A. G., & Buchner, A. G*Power 3: A flexible statistical power analysis program for the social, behavioral, and biomedical sciences. Behavior Research Methods, 39(2), 175–191 (2007). https://doi.org/10.3758/bf03193146
