Abstract
Social ties often seem symmetric, but they need not be^(1,2,3,4,5). For example, a person might know a stranger better than the stranger knows them. We explored whether people overlook these asymmetries and what consequences that might have for people’s perceptions and actions. Here we show that when people know more about others, they think others know more about them. Across nine laboratory experiments, when participants learned more about a stranger, they felt as if the stranger also knew them better, and they acted as if the stranger was more attuned to their actions. As a result, participants were more honest around known strangers. We tested this further with a field experiment in New York City, in which we provided residents with mundane information about neighbourhood police officers. We found that the intervention shifted residents’ perceptions of officers’ knowledge of illegal activity, and it may even have reduced crime. It appears that our sense of anonymity depends not only on what people know about us but also on what we know about them.
Data availability
The Open Science Framework page for this project (https://osf.io/mkgwr/) includes all data from laboratory experiments and all data necessary to reproduce the results of the field experiment.
Code availability
The code for running the laboratory experiments online and for analysing the data from the field experiment is available on the Open Science Framework (https://osf.io/mkgwr/).
References
Krackhardt, D. & Kilduff, M. Whether close or far: social distance effects on perceived balance in friendship networks. J. Pers. Soc. Psychol. 76, 770–782 (1999).
Davis, J. A. In Theories of Cognitive Consistency (eds Abelson, R. P. et al.) 544–550 (Rand McNally, 1968).
Heider, F. The Psychology of Interpersonal Relations (Wiley, 1958).
DeSoto, C. B. Learning a social structure. J. Abnorm. Soc. Psychol. 60, 417–421 (1960).
Freeman, L. C. Filling in the blanks: a theory of cognitive categories and the structure of social affiliation. Soc. Psychol. Q. 55, 118–127 (1992).
Rand, D. G. et al. Social heuristics shape intuitive cooperation. Nat. Commun. 5, 3677 (2014).
Holoien, D. S., Bergsieker, H. B., Shelton, J. N. & Alegre, J. M. Do you really understand? Achieving accuracy in interracial relationships. J. Pers. Soc. Psychol. 108, 76–92 (2015).
Epley, N., Keysar, B., Van Boven, L. & Gilovich, T. Perspective taking as egocentric anchoring and adjustment. J. Pers. Soc. Psychol. 87, 327–339 (2004).
Nickerson, R. S. How we know—and sometimes misjudge—what others know: imputing one’s own knowledge to others. Psychol. Bull. 125, 737–759 (1999).
Gilovich, T., Savitsky, K. & Medvec, V. H. The illusion of transparency: biased assessments of others’ ability to read one’s emotional states. J. Pers. Soc. Psychol. 75, 332–346 (1998).
Gilovich, T. & Savitsky, K. The spotlight effect and the illusion of transparency: egocentric assessments of how we’re seen by others. Curr. Dir. Psychol. Sci. 8, 165–168 (1999).
Milgram, S. The experience of living in cities. Science 167, 1461–1468 (1970).
Diener, E., Fraser, S. C., Beaman, A. L. & Kelem, R. T. Effects of deindividuation variables on stealing among Halloween trick-or-treaters. J. Pers. Soc. Psychol. 33, 178–183 (1976).
Zhong, C., Bohns, V. K. & Gino, F. Good lamps are the best police: darkness increases dishonesty and self-interested behavior. Psychol. Sci. 21, 311–314 (2010).
Andreoni, J. & Petrie, R. Public goods experiments without confidentiality: a glimpse into fund-raising. J. Public Econ. 88, 1605–1623 (2004).
Yoeli, E., Hoffman, M., Rand, D. & Nowak, M. Powering up with indirect reciprocity in a large-scale field experiment. Proc. Natl Acad. Sci. USA 110, 10424–10429 (2013).
Ernest-Jones, M., Nettle, D. & Bateson, M. Effects of eye images on everyday cooperative behavior: a field experiment. Evol. Hum. Behav. 32, 172–178 (2011).
Pronin, E., Kruger, J., Savitsky, K. & Ross, L. You don’t know me, but I know you: the illusion of asymmetric insight. J. Pers. Soc. Psychol. 81, 639–656 (2001).
Lakens, D. Equivalence tests: a practical primer for t tests, correlations, and meta-analyses. Soc. Psychol. Pers. Sci. 8, 355–362 (2017).
Preacher, K. J., Rucker, D. D. & Hayes, A. F. Addressing moderated mediation hypotheses: theory, methods, and prescriptions. Multivariate Behav. Res. 42, 185–227 (2007).
Parks, R. B., Mastrofski, S. D., DeJong, C. & Gray, M. K. How officers spend their time with the community. Justice Q. 16, 483–518 (1999).
Ba, B. A., Knox, D., Mummolo, J. & Rivera, R. The role of officer race and gender in police-civilian interactions in Chicago. Science 371, 696–702 (2021).
Fryer, R. G. An empirical analysis of racial differences in police use of force. J. Pol. Econ. 127, 1210–1261 (2019).
Voigt, R. et al. Language from police body camera footage shows racial disparities in officer respect. Proc. Natl Acad. Sci. USA 114, 6521–6526 (2017).
Braga, A. A., Papachristos, A. V. & Hureau, D. M. The effects of hot spots policing on crime: an updated systematic review and meta-analysis. Justice Q. 31, 633–663 (2014).
National Academies of Sciences, Engineering, and Medicine. Proactive Policing: Effects on Crime and Communities (The National Academies Press, 2018).
National Research Council. Fairness and Effectiveness in Policing: The Evidence (The National Academies Press, 2004).
Sherman, L. W. & Eck, J. in Evidence Based Crime Prevention (eds Sherman, L. W. et al.) 295–329 (Routledge, 2002).
Peyton, K., Sierra-Arévalo, M. & Rand, D. G. A field experiment on community policing and police legitimacy. Proc. Natl Acad. Sci. USA 116, 19894–19898 (2019).
Owens, E., Weisburd, D., Amendola, K. L. & Alpert, G. P. Can you build a better cop? Experimental evidence on supervision, training, and policing in the community. Criminol. Public Policy 17, 41–87 (2018).
Sunshine, J. & Tyler, T. The role of procedural justice and legitimacy in shaping public support for policing. Law Soc. Rev. 37, 513–548 (2003).
Chalfin, A., Hansen, B., Weisburst, E. K. & Williams, M. C. Police force size and civilian race. Am. Econ. Rev. Insights (in the press).
Belloni, A., Chernozhukov, V. & Hansen, C. High-dimensional methods and inference on structural and treatment effects. J. Econ. Perspect. 28, 29–50 (2014).
Acknowledgements
This research was supported by the National Institute of Justice (award number 2013-R2-CX-0006). We are grateful to the New York City Police Department, particularly T. Coffey and D. Williamson in the Office of Management Analysis and Planning. Points of view or opinions contained within this document are those of the authors and do not necessarily represent the official position or policies of the New York City Police Department. We also thank the New York City Housing Authority for their assistance with the field experiment. Throughout this project, ideas42 was an essential research partner. We are also grateful to H. Furstenberg-Beckman for thoughtful guidance; A. Alhadeff and W. Tucker for valuable assistance; Crime Lab New York for critical support in the planning and evaluation of the policing intervention, particularly R. Ander, M. Barron, A. Chalfin, K. Falco, V. Gilbert, D. Hafetz, B. Jakubowski, Z. Jelveh, K. Nguyen, L. Parker, J. Lerner, H. Golden, G. Stoddard and N. Weil; V. Nguyen for her support as a research assistant; and J. Ludwig, S. Mullainathan, A. Kumar, E. O’Brien and F. Goncalves for insightful feedback.
Author information
Authors and Affiliations
Contributions
A.K.S. developed the hypotheses. A.K.S. designed, conducted and analysed the laboratory experiments. A.K.S. and M.L. designed the field intervention. M.L. led the analysis of the field intervention. A.K.S. and M.L. contributed to the manuscript.
Corresponding author
Ethics declarations
Competing interests
The authors declare no competing interests.
Peer review
Peer review information
Nature thanks the anonymous reviewers for their contribution to the peer review of this work.
Additional information
Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Extended data figures and tables
Extended Data Fig. 1 Outreach cards.
A sample outreach card (front and back) used in the field intervention. Identifying information has been redacted.
Extended Data Fig. 2 Outreach letters.
A sample letter used in the field intervention. Identifying information has been redacted.
Extended Data Fig. 3 Distribution of point estimates for treatment effect.
As a robustness check, we conducted analyses for various radii ranging up to three blocks around developments: 65 ft., 100 ft., 150 ft., 200 ft., 250 ft., 300 ft., 400 ft., 500 ft., and 750 ft. For each radius, we conducted analyses for cumulative time intervals ranging from one month after the intervention (i.e., February 2018) to the first nine months after the intervention (i.e., February through October 2018). Varying both of these dimensions produced 81 sets of results, based on our primary specification applied to each radius and time interval (see Supplementary Information C.3). This figure shows the distribution of point estimates for the crime reductions across these analyses, along with an Epanechnikov kernel density function over the distribution. The red dot highlights where the 250-ft, 3-month result falls in the distribution, suggesting it is in line with the central estimates across all 81 analyses.
Extended Data Fig. 4 Heat map of P-values for treatment effect over time and distance.
As a robustness check, we conducted analyses for various radii ranging up to three blocks around developments: 65 ft., 100 ft., 150 ft., 200 ft., 250 ft., 300 ft., 400 ft., 500 ft., and 750 ft. For each radius, we conducted analyses for cumulative time intervals ranging from one month after the intervention (i.e., February 2018) to the first nine months after the intervention (i.e., February through October 2018). Varying both of these dimensions produced 81 sets of results. This figure shows a heat map of P-values across these 81 specifications, with the 250-ft, 3-month result outlined in blue. P-values are from two-tailed tests based on our primary specification applied to each radius and time interval (see Supplementary Information C.3).
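The 81 specifications described in these legends can be enumerated directly; the following is a minimal R sketch using only the radii and cumulative month windows listed above (the regression applied to each combination is the primary specification described in Supplementary Information C.3 and is not reproduced here):

# Radii (in feet) and cumulative post-intervention windows (in months)
radii_ft <- c(65, 100, 150, 200, 250, 300, 400, 500, 750)
months_post <- 1:9   # February 2018 through February-October 2018

# Every radius-by-window combination analysed in the robustness check
specs <- expand.grid(radius_ft = radii_ft, months_post = months_post)
nrow(specs)   # 81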
Supplementary information
Supplementary Information
This file contains the following sections: A. Lab experiment methods, materials, and results; B. Field experiment methods and materials; C. Field experiment results, tables, and figures; D. Software Used; E. Supplementary references.
Rights and permissions
About this article
Cite this article
Shah, A.K., LaForest, M. Knowledge about others reduces one’s own sense of anonymity. Nature 603, 297–301 (2022). https://doi.org/10.1038/s41586-022-04452-3
This article is cited by
- Cognitive representations of social networks in isolated villages. Nature Human Behaviour (2025)
- Letters and cards telling people about local police reduce crime. Nature (2022)
Greg Francis
Shah and LaForest[1] presented empirical results from nine laboratory studies to conclude that learning about a stranger makes a person feel and act as if the stranger had also learned something about them. In a field study, they demonstrated that sharing information about local police officers influenced perceptions of the officers’ knowledge about illegal activities and reduced crime rates. Such a result is potentially very useful[2], if true. Unfortunately, the reported studies that support these claims seem too good to be true[3]. Even if the reported empirical effects reflect reality, it should be very unusual for studies like these to show consistently positive outcomes simply due to random sampling. Indeed, if the reported effects reflect reality, a replication of the ten studies with the same sample sizes would produce uniform success with a probability of only 0.019. The absence of any experimental failures is an indication that something has gone wrong in the data collection, analysis, or reporting of these studies. It could be that the ten reported studies are only a subset of an unknown number of studies actually performed, with non-significant outcomes not being reported. Such publication bias produces a set of results that cannot distinguish between real and false effects, so readers should be skeptical about the reported empirical findings, the associated conclusions, and their potential applications. [Note that concerns have also been raised about possible confounds for the field experiment: https://datacolada.org/101. These concerns are orthogonal to the concerns raised here.]
The probability of a random sample producing a significant outcome when analyzed by a hypothesis test is known as power, and it increases with bigger effects and with bigger samples. Scientists want experiments with high power, and best practice is to perform a power analysis prior to data collection. Shah and LaForest report that they did not perform any power analyses but that they used large samples relative to studies of similar phenomena. Despite the large samples, the reported effects are small, so power for most of their experiments appears to be modest. Table 1 reports the estimated power for a replication of each study that uses the same sample size as the original. To be favorable to the original studies, the power calculation supposes that the reported means and standard deviations accurately reflect the population values. The probability that ten experiments like these would all produce significant outcomes is the product of the power values, which is 0.019. Thus, if the effects are real, it is highly unlikely that experiments like these would all produce significant outcomes; instead, ten experiments with similar sample sizes should almost surely show some failures to replicate. This raises the question of how the original studies could have been so successful when their own data indicate the implausibility of such a set of findings. Perhaps, despite a decade-long replication crisis in psychology, many scientists continue to engage in questionable research practices (QRPs) that inflate Type I errors and undermine their conclusions[4].
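The joint probability can be reproduced directly from the per-study power values listed in Table 1 below; a minimal R sketch (the power values are copied from the table rather than recomputed from the raw data):

# Estimated power for each of the ten studies (Exps 1A-3C and the field
# experiment), taken from Table 1
power <- c(0.70, 0.57, 0.56, 0.60, 0.93, 0.81, 0.69, 0.58, 0.69, 0.67)

# If the studies are independent, the probability that all ten reach
# significance is the product of the individual power values
prod(power)   # ~0.019, as reported in the text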
The reported outcomes in Shah and LaForest do not seem to be due to QRPs such as optional stopping[5] (gathering data until getting a desired significant outcome) or hypothesizing after the results are known[6] (HARKing) because their methods section indicates that no analyses started before data collection finished and the experimental results are straightforward. Without an alternative explanation, publication bias (suppressing relevant non-significant experiments or measures) might explain how such unreliable experiments could produce such consistent outcomes.
It is possible that Shah and LaForest were just extremely (un)lucky to pick random samples that happened to consistently produce significant outcomes, but then the reported results almost surely overestimate the true effect, and it is possible that there is no effect at all. Scientists should be skeptical about the empirical results and conclusions reported in Shah and LaForest, and future studies will be needed to evaluate the validity of the claims.
What sample sizes should be used for future studies of this topic? The answer depends on how confident a scientist wants to be that their random sample will generate a significant outcome. A common recommendation is to plan sample sizes to ensure 0.8 power. The penultimate column in Table 1 shows sample sizes that provide 0.8 power for each experiment if the population effect matches what was reported in the original study. Note that the replication experiments usually require much larger samples than the original studies. To fully support the conclusions of Shah and LaForest, 0.8 power might not be deemed sufficient, since the probability that ten experiments with 0.8 power would all produce significant results is only 0.8^10 = 0.107. To ensure a joint probability of 0.8 that all ten experiments produce significant results, one could require that each experiment has a power of 0.8^(1/10) ≈ 0.98. The final column of Table 1 shows that such studies require sample sizes nearly three times larger than the original studies. If the population effects are smaller than what is reported in the original studies, then replication efforts will require even larger samples.
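As an illustration of this planning calculation, the sketch below uses the pwr library; the effect size d = 0.25 is a hypothetical value chosen only to show the workflow, since the per-study effect sizes are not restated here, so the resulting sample sizes will not match Table 1 exactly:

library(pwr)

# Per-experiment power needed so that ten independent experiments all reach
# significance with joint probability 0.8
per_exp_power <- 0.8^(1/10)   # ~0.978

# Sample size per group for a two-sided, two-sample t-test at alpha = 0.05,
# assuming a hypothetical effect size of d = 0.25
pwr.t.test(d = 0.25, sig.level = 0.05, power = 0.8)$n            # ~253 per group
pwr.t.test(d = 0.25, sig.level = 0.05, power = per_exp_power)$n  # ~507 per group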
Table 1. A power analysis indicates that it is very unlikely that ten experiments like the ones in Shah and LaForest would all produce significant outcomes. Future studies of this topic will require much larger sample sizes.
Exp. | N | Power | N for power = 0.8 | N for power = 0.98
1A | 397 | 0.70 | 502 | 1028
1B | 291 | 0.57 | 504 | 1032
1C | 456 | 0.56 | 704 | 1442
1D | 543 | 0.60 | 862 | 1766
2A | 462 | 0.93 | 310 | 634
2B | 552 | 0.81 | 537 | 1047
3A | 995 | 0.69 | 1312 | 2690
3B | 582 | 0.58 | 972 | 1992
3C | 294 | 0.69 | 382 | 782
Field | 30 | 0.67 | 36 | 71
All | 4602 | 0.019 | 6121 | 12484
Methods
Power was calculated using the pwr library[7] in R[8]. Sample sizes for future experiments were calculated using the pwr library and G*Power[9]. R code to reproduce the analyses reported here is available on the Open Science Framework: https://osf.io/tjp6f/.
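For instance, the per-study power entries in Table 1 follow the pattern below (a sketch only: the roughly even split between conditions and the effect size d = 0.25 are illustrative assumptions, not values taken from the original paper):

library(pwr)

# Power of a two-sided, two-sample t-test at alpha = 0.05, given the two group
# sizes and an assumed population effect size (Cohen's d). Here the total
# N = 397 of Exp. 1A is split roughly evenly, with a hypothetical d = 0.25
pwr.t2n.test(n1 = 199, n2 = 198, d = 0.25, sig.level = 0.05)$power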
References
1. Shah, A. K. & LaForest, M. Knowledge about others reduces one’s own sense of anonymity. Nature 603, 297–301 (2022). https://doi.org/10.1038/s41586-022-04452-3
2. John, E. & Bushway, S. D. A feeling of familiarity can deter crime. Nature (2022). https://doi.org/10.1038/d41...
3. Francis, G. Too good to be true: publication bias in two prominent studies from experimental psychology. Psychonomic Bulletin & Review 19, 151–156 (2012). https://doi.org/10.3758/s13423-012-0227-9
4. Simmons, J. P., Nelson, L. D. & Simonsohn, U. False-positive psychology: undisclosed flexibility in data collection and analysis allows presenting anything as significant. Psychological Science 22, 1359–1366 (2011). https://doi.org/10.1177/095...
5. Strube, M. J. SNOOP: a program for demonstrating the consequences of premature and repeated null hypothesis testing. Behavior Research Methods 38, 24–27 (2006).
6. Kerr, N. L. HARKing: hypothesizing after the results are known. Personality and Social Psychology Review 2, 196–217 (1998).
7. Champely, S., Ekstrom, C., Dalgaard, P., Gill, J., Weibelzahl, S., Ford, C. & Volcic, R. Package ‘pwr’: basic functions for power analysis (2018).
8. R Core Team. R: A Language and Environment for Statistical Computing (R Foundation for Statistical Computing, Vienna, Austria, 2017). https://www.r-project.org/
9. Faul, F., Erdfelder, E., Lang, A. G. & Buchner, A. G*Power 3: a flexible statistical power analysis program for the social, behavioral, and biomedical sciences. Behavior Research Methods 39, 175–191 (2007). https://doi.org/10.3758/bf03193146