Tuesday, July 29, 2014

Sins of information omission


From the Kids Count 25th Edition, the 2014 Data Book from The Annie E. Casey Foundation. I am looking for empirical data about what works and what doesn't with regard to children and reading. With reports such as these you always have to be careful because, well-intentioned as they are, they are usually fundamentally dishonest. They have an agenda they are pushing and they bend the data to support that agenda.

Because they are basically altruistic, they tend to look at the benefits of any given action and never the costs, an approach which makes for good press but which doesn't work in the real world of constraints and limits.

I approach the report with some due skepticism, prepared to find the gold amongst the dross. I am somewhat taken aback to have the concern so immediately confirmed. I turn to page 26, Education, to look for any information related to reading.

The opening three paragraphs are:
High-quality preschool matters, which is good news for the 50,000 low-income New Jersey children who benefit each year from a state-funded effort. In 1999, the state began enrolling 3- and 4-year-olds in high-quality preschool across the state's highest poverty districts. The program now serves about 80 percent of preschool-aged children in those districts.

A recent evaluation found that by fifth grade, children who attended the state program for two years were, on average, nearly a year ahead of students who had not enrolled in the program. These positive effects were considerably larger than those found in programs with less funding. Small classes, well-trained teachers, a curriculum with high standards and support services for children and families contributed to this program's success.

Advocates for Children of New Jersey played a key role in bringing early care and learning advocates together to develop a mixed-delivery system that improved the quality of community-based child care centers, while utilizing some public school classrooms. The organization led a coalition of early childhood stakeholders who successfully forced the state to require that preschool teachers have a bachelor's degree and receive the resources to acquire the necessary education. Those benefits to teachers are giving children a good start.
Sounds pretty good.

But is it? My sense of concern is triggered on two counts. The first is that this result is quite inconsistent with the repeated studies of Head Start, the federal program with a structure and goals similar to New Jersey's. Several times in the past fifteen years, Head Start reviews have come back with the finding that while children show positive improvements while enrolled in the program, those improvements have completely disappeared within two years. Ten billion dollars spent a year with no lasting gain. But perhaps New Jersey is implementing its program more rigorously.

That's possible, but that brings us to the second trigger. There are lots of words in the description, but it is missing three crucial elements: 1) How much does it cost? 2) What is the goal? 3) How effective is the program in achieving the goal? No measurements appear.

They are claiming that there are measurable sustained benefits accruing to children five years after the preschool investment. If true, that is a real accomplishment. But what about the costs and goals?

How much does the Abbott preschool program cost? Googling turns up a number of documents that seem to indicate that the current cost to the state is roughly $12,000 per student. With some 50,000 enrolled, therefore, the program costs $600 million a year.

What is their goal? I haven't found a succinct and firm statement of goals. Most of the formulations center on "close the achievement gap." There are two problems with this formulation. Close can mean either narrow the gap or eliminate the gap. Under the first reading, any improvement, no matter how small, would signal achievement of the goal. Since that would be fairly trivial, I am going to work with the second interpretation: that the Abbott program will eliminate the performance gap between the advantaged and the disadvantaged.

How effective is the program in closing the gap? You have to go to the Abbott Preschool Program Longitudinal Effects Study: Fifth Grade Follow-Up to find that answer, and you have to read pretty closely even then. Two caveats though. The first is that the program enrolls only 80% of the population. Would the achieved results be the same if the entire population were enrolled? Second, they did not measure the same population between preschool and fifth grade. They were only able to positively identify about 65% of the initial group five years later. They assume that the missing 35% had the same performance attributes as the located 65%, but it is easy to conjecture why that might not be true. So what are the results?
The gains from two years [of participation in the Abbott Program] are equivalent to 20 to 40 percent of the achievement gap.
For $600 million, New Jersey is buying an average closure of the achievement gap of 30% for 65% of its students.
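
To make the tradeoff concrete, here is a minimal back-of-envelope sketch of the arithmetic above (the figures are the rough ones cited in the text; the pessimistic lower bound at the end is my own illustrative assumption, not a study result):

```python
# Back-of-envelope arithmetic for the Abbott preschool figures cited above.
cost_per_student = 12_000    # rough annual state cost per student
enrolled_students = 50_000   # rough annual enrollment

annual_cost = cost_per_student * enrolled_students
print(f"Annual program cost: ${annual_cost / 1e6:.0f} million")  # ~$600 million

# The follow-up study reports gains of 20-40% of the achievement gap,
# measured on the ~65% of participants it could locate at fifth grade.
gap_closure_midpoint = (0.20 + 0.40) / 2
located_share = 0.65

print(f"Midpoint gap closure (measured sample): {gap_closure_midpoint:.0%}")

# Illustrative pessimistic bound: if the unlocated 35% gained nothing,
# the average closure across all participants would be much lower.
print(f"Pessimistic bound across all participants: "
      f"{gap_closure_midpoint * located_share:.0%}")
```

The point of the pessimistic bound is simply that the headline 30% figure is sensitive to what you assume about the 35% of children the study could not locate.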

So there are (at least) three questions. 1) How much of the improvement would still exist if you were able to measure 100% of participants instead of only 65%, and all students, not just the 80% enrolled? 2) Is it worth $600 million to close the gap only 30% by fifth grade? And 3) Will there still be measurable benefits by graduation, and if so, how much?

The last is the real rub. If you invest $24,000 in a child over two years, you will only obtain a positive benefit if they are more likely to graduate, more likely to attain higher education, more likely to be employed, and more likely to be employed in higher-compensated professions than they would otherwise have been. That there is still some measurable benefit at fifth grade is a good thing, not achieved in any other similarly scaled program of which I am aware.

But we still don't know if it will make a difference in the long run.

All of which is to say that the three-paragraph summary in the report sins by omitting critical information that would support a substantially different interpretation of whether and to what degree the program has been successful.





The immaculate conception theory of decision-making

An essay from a decade ago about foreign policy that provides great counsel and insight on decision-making in general: Foreign Policy Immaculately Conceived by Adam Garfinkle.
When a talented but untutored journalistic mind focuses on a foreign policy issue, particularly one that editors will pay to have written about, an amazing thing sometimes happens: All of a sudden, crystalline truth rises from the clear flame of an obvious logic that, for some unexplained reason, all of the experts and practitioners thinking and working on the problem for years never saw. This is the immaculate conception theory of U.S. foreign policy at work.

The immaculate conception theory of U.S. foreign policy operates from three central premises. The first is that foreign policy decisions always involve one and only one major interest or principle at a time. The second is that it is always possible to know the direct and peripheral impact of crisis-driven decisions several months or years into the future. The third is that U.S. foreign policy decisions are always taken with all principals in agreement and are implemented down the line as those principals intend — in short, they are logically coherent.

Put this way, of course, no sentient adult would defend such a theory. Even those who have never read Isaiah Berlin intuit from their own experiences that tradeoffs among incommensurable interests or principles are inevitable. They recognize that the urgent and the imminent generally push out the important and the eventual in high-level decision making. They know that disagreement and dissension often affect how public policy is made and applied. More than that, any sober soul is capable of applying this elemental understanding to particular cases if he really puts his mind to it.
I would recast this. The immaculate conception theory of decision-making holds that:
There is one and only one goal to be attained and that there are no trade-offs between goals.

There is perfect knowledge of cause and effect and that consequences can be accurately forecast two or three removes from the action (both in terms of time and proximity).

All affected parties agree not only on the primary goal but also on the ordinal ranking and the relative priorities of subsidiary goals.

There is no fresh knowledge likely to arise between conceptualization and implementation, and there are no feedback mechanisms that change priorities over time.
Obviously all four of these assumptions are wrong and most people would readily acknowledge that they are wrong. But in our Monday morning quarterbacking, we behave as if these four maxims were, in fact, true.

Garfinkle provides a foreign policy example.
How many times have we heard the clarion claim that the covert U.S. effort to aid the Afghan mujahedeen through the Pakistani regime during the 1980s was, in the end, a terrible mistake because it led first to a cruel Afghan civil war and then to the rise of the Taliban? I have lost count.

This argument is about as cogent as saying to a 79-year old man — Ralph, let’s call him — that he should never have gotten married because one of his grandsons has turned out to be a schmuck. But a person does not consider marriage with the character of one of several theoretical grandchildren foremost in mind. It was not possible at the time of the nuptials for Ralph to have foreseen the personality quirks of a ne’er-do-well son-in-law not yet born; so, lo and behold, the fine upbringing that he bequeathed to his children somehow got mangled in translation to the next generation. These things happen.

Similarly, in 1980, when the initial decision was made (in the Carter administration, by the way), to establish links with the mujahedeen, the preeminent concern of American decision makers was not the future of Afghanistan, but the future of the Soviet Union and its position in Southwest Asia. Whatever the Politburo intended at the time, the consolidation of Soviet control in Afghanistan would have given future Soviet leaders options they would not otherwise have had. In light of the strategic realities of the day, the American concern was entirely reasonable: Any group of U.S. decision makers would have thought and done more or less the same thing, even if they could have foreseen the risks to which they might expose the country on other scores.

But, of course, such foresight was impossible. Who in 1980 or 1982 or 1985 could have foreseen the confluence of events that would bring al Qaeda into being, with a haven in Afghanistan? The Saudi policies that led to bin Laden’s exile and the Kuwait crisis that led to the placement of U.S. forces on Saudi soil had not yet happened — and neither could have been reasonably anticipated. The civil strife that followed the exit of the Red Army from Afghanistan, and which established the preconditions for the rise of the Taliban government, had not yet happened either. Of course, despite the policy’s overall success in undermining the Soviet position in Afghanistan, entrusting Pakistan’s Inter-Services Intelligence Directorate to manage aid to the mujahedeen turned out to be problematic, but who of the immaculate conception set knows whether there were better alternatives available at the time? There weren’t; a tradeoff was involved, and it was a tradeoff known to carry certain risks.
Summing up the difficulties of foreign policy decision-making:
American presidents, who have to make the truly big decisions of U.S. foreign policy, must come to a judgment with incomplete information, often under stress and merciless time constraints, and frequently with their closest advisors painting one another in shades of disagreement. The choices are never between obviously good and obviously bad, but between greater and lesser sets of risks, greater and lesser prospects of danger. Banal as it sounds, we do well to remind ourselves from time to time that things really are not so simple, even when one’s basic principles are clear and correct.
All true. So why do we both claim to see the past so clearly and hold the past accountable to the knowledge of the present? Lots of reasons can be adduced, ranging from simple ignorance to political expediency.

Obviously hindsight bias plays a role, as exemplified by the current Obamacare contretemps related to the Halbig v. Burwell decision (in which the court held that the plain language of the Patient Protection and Affordable Care Act had to be read as written, i.e. subsidies are only available to those in state exchanges). Megan McArdle focuses on comments that Jonathan Gruber, one of the chief architects of PPACA, made in 2012 in two different forums, comments that comport with the court's interpretation but which Gruber now disavows.
I believe that Gruber sincerely does not remember making these remarks. Memory is fallible; at some point, Gruber probably changed his mind and forgot that he had ever believed otherwise. People show a strong tendency to edit their recollections of prior beliefs to reflect the "correct" answer, and even brilliant economists are not immune to this common cognitive bias.

But though I do not fault his honesty, I also think that in January 2012, Gruber did believe that premium tax credits would only be available on state-created exchanges, and that this would give states a strong incentive to create exchanges.
Hindsight bias is definitely part of the phenomenon but I think there is more going on than that.

I think the inclination towards hindsight bias is facilitated by two general weaknesses in human decision-making. 1) We are often exceptionally poor at creating, articulating, and measuring a coherent, logical, rational, and empirically robust argument towards some goal. If we are honest with ourselves, there is a lot we don't know and there are many uncertainties. As the physicist Niels Bohr is reputed to have said, "Prediction is very difficult, especially about the future." To chase down all the unknowns, imponderables, uncertainties, contingencies, etc. is mentally taxing and time consuming. We are cognitively conservative and take all sorts of shortcuts. "Everyone knows that . . . "; "Obviously . . . "; "That's an exception . . ."; etc. By taking these cognitive shortcuts, we hide the lack of rigor and coherence. We sacrifice strategic effectiveness (the likelihood of achieving the desired outcomes) in order to achieve tactical efficiency (a faster, easier decision).

By not formalizing our thinking in advance, we give ourselves lots of wiggle room to reinterpret that thinking in a favorable fashion as circumstances change.

2) We plan strategically but implement tactically. We continually adjust our goals to accommodate emerging information and issues without ever revalidating the underlying premises. We start with a broad vision but then implement on a circumscribed basis. While implementing, we usually see and focus on only what is immediately in front of us and rarely lift our eyes to the horizon to make sure we are still broadly headed in the right direction. Consequently, and too often, we end up at unexpected destinations without understanding how we got there.

There are all sorts of grounds for criticizing decisions made in the past, and many lessons to be learned about how to be more disciplined in decision-making in the future. What we cannot do with integrity and honesty is to criticize past decisions based on the immaculate conception theory of decision-making.









Monday, July 28, 2014

Writing while naked

A nice example of how we cannot leave a good story alone and of epistemological evolution. What we know is often what we want to know. From The Victor Hugo working naked story: myth or fact? by Druss.
I ran into a Neatorama article the other day which listed authors who like(d) to work naked. One of them was apparently Victor Hugo:
When Victor Hugo, the famous author of great tomes such as Les Misérables and The Hunchback of Notre-Dame, ran into a writer's block, he concocted a unique scheme to force himself to write: he had his servant take all of his clothes away for the day and leave his own nude self with only pen and paper, so he'd have nothing to do but sit down and write.
That's a cute story. But how true is it?
He does some research and eventually turns up the truth.
So, in this version, Hugo was not writing while naked. He was just stuck in his pyjamas and had no formal clothes to leave his study. This sounds a lot more plausible. We also learn that the source of this anecdote is his wife (Adèle Foucher). McNally cites a J. Sturrock in his article, who turns out to be John Sturrock, a translator of Hugo's works. The "introduction", I find out, is Sturrock's introduction to The Hunchback of Notre-Dame:
'He bought himself a bottle of ink and a huge grey knitted shawl, which swathed him from head to foot, locked his formal clothes away so that he would not be tempted to go out and entered his novel as if it were a prison. He was very sad.' This engagingly domestic report on Victor Hugo sitting down in the autumn of 1830 to write Notre-Dame of Paris is by his wife, Adèle, who in the 1860s published a quaintly tinted memoir (dictated, some have hinted, by its subject himself): Victor Hugo Recounted by a Witness of His Life.
Read the whole thing to get a sense of how a story of Victor Hugo exercising self-control ended up as a story of Victor Hugo writing while naked.

Knowing that my mom remembers is bad enough.

Ann Althouse has a post on the shocking incident in Florida where two teenage girls filmed themselves torturing a gopher turtle to death and then posted the video on Facebook. Althouse has worked hard over the years to moderate and manage her commenting community, and while things occasionally get out of hand, it is by and large one of the more well-behaved communities of which I am aware.

There are, among the commenters, affirmations of how unacceptable this behavior is, speculation about the link between adolescent torturing of animals and later criminality, speculation about the link between such behavior and the likelihood of earlier childhood abuse, a lesson in logic, lots of personal testimonial vignettes of how the commenters' parents taught them about the importance of being kind to animals, recollections of childhoods in Florida when gopher turtle stew was a dinnertime staple, comments on First World Problems, etc.

I liked this comment though. High morality often has earthy roots.
I am glad the childhood behavior which led my parents to correct me is not documented on Facebook or Youtube. Knowing that my mom remembers is bad enough.
Forget theology, first principles and detailed arguments on knife-edge ethical issues. It does often come down to "Do Mom or Dad know and what would they think of me if they did."

Sunday, July 27, 2014

We can't impute intent to a chimera, all we can do is observe the revealed preferences of real people.

Kind of interesting. From The Idea of Congressional Intent is Incoherent by Alex Tabarrok. This is really more a philosophical and legal discussion but it has broader application.
Now seems like an apposite time to remember, Congress intends no more than Congress smiles. As Ken Shepsle put it in his classic paper Congress is a “They,” not an “It”:
Legislative intent is an internally inconsistent, self-contradictory expression. Therefore, it has no meaning. To claim otherwise is to entertain a myth (the existence of a Rousseauian great law giver) or commit a fallacy (the false personification of a collectivity). In either instance, it provides a very insecure foundation for statutory interpretation.
Shepsle’s point is that Arrow’s impossibility theorem shows that not only do collectives not have preferences, they can’t even be understood as if they had preferences. As I wrote earlier:
Suppose that a person is rational and that we observe their choices. After some time we will come to understand their choices in terms of their underlying preferences (assume stability–this is a thought experiment). We will be able to say, “Ah, I see what this person wants. I understand now why they are choosing in the way that they do. If I were them, I would choose in the same way.”

Arrow showed that when a group chooses, there are no underlying preferences to uncover–not even in theory. In one sense, the theorem is trivial. We know or should always have known that a group doesn’t have preferences anymore than a group smiles. What Arrow showed, however, is that without invoking special cases we can’t even rationalize group choices as if leviathan had preferences.
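
Arrow's point can be made concrete with the simplest case, a Condorcet cycle: three perfectly rational voters whose pairwise majority votes produce an intransitive "group preference." A minimal sketch (the voters and options are hypothetical):

```python
from itertools import combinations

# Three voters, each with a perfectly rational (transitive) ranking.
voters = [
    ["A", "B", "C"],  # voter 1 prefers A > B > C
    ["B", "C", "A"],  # voter 2 prefers B > C > A
    ["C", "A", "B"],  # voter 3 prefers C > A > B
]

def majority_prefers(x, y):
    """True if a majority of voters rank x above y."""
    votes = sum(1 for ranking in voters if ranking.index(x) < ranking.index(y))
    return votes > len(voters) / 2

for x, y in combinations("ABC", 2):
    winner, loser = (x, y) if majority_prefers(x, y) else (y, x)
    print(f"Majority prefers {winner} over {loser}")

# Output: A over B, B over C, and C over A -- a cycle.
```

The group "prefers" A to B, B to C, and C to A. No single preference ordering, even in principle, can rationalize those pairwise choices.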
In children's literature circles, dominated as they often are by the further reaches of academic fashions, there is a tendency to characterize groups (gender, religion, race, age, class, etc.) and attribute desires to those putative groups.

There are certainly empirical correlations of one sort or another and I have no particular beef with that reality. "I yam what I yam and tha's all what I yam" as the immortal Popeye used to say. It is the attribution of desires to an otherwise heterogeneous group that sets me on edge.

This attribution of motive and philosophical existence often takes the form of "We (whatever group) need more books about . . ." All well and good but who is this metaphorical "we"? Are "we" going to buy the book? Are "we" going to read the book? Somebody will but not "we" because "we" is just a conjured metaphor with no wallet or smiling eyes.

As Tabarrok says, "there are no underlying preferences to uncover."

We can, for example, empirically say that the primary readers of Young Adult literary fiction are actually middle-aged women. We do surveys, we obtain data, we observe purchasing trends.

We can't, from that piece of empirical knowledge, then say that the group we designate as middle-aged women prefers YA literary fiction, or that it wants more literary fiction. There is no corporeal substance or cognitive attribution that can be made to the "they."

Publishers might usefully exploit the knowledge that the demographic of middle-aged women is the primary consumer of YA literary fiction in shaping their marketing and advertising campaigns, but they would never (or at least never if they want to stay in business) make the mistake of assuming that middle-aged women want more YA literary fiction. Lots of middle-aged women readers might individually wish for and choose to read more YA literary fiction, but "they" don't make that decision. Betty makes that decision, and Allison, and Juanda, etc. To assume otherwise both entertains "a myth" and deprives those middle-aged female readers of their agency.

We can't impute intent to a chimera, all we can do is observe the revealed preferences of real people.

Pretending in the virgin birth of ideas and words

A couple of weeks ago, I posted The narcissism of small differences, a series of observations by the economic historian Niall Ferguson in an article from June of this year, Networks and Hierarchies. I was particularly taken by that turn of phrase, the narcissism of small differences, because it so deftly captures what is so often at the heart of heated debates.

But recently, at the beach, I read Malcolm Gladwell's What the Dog Saw and Other Adventures. And there it was again.
“The ethics of plagiarism have turned into the narcissism of small differences: because journalism cannot own up to its heavily derivative nature, it must enforce originality on the level of the sentence.”
So has this phrase been circulating a long time and I am just now registering it? Is Ferguson, ironically, committing micro-plagiarism by using Gladwell's turn of phrase?

Fortunately we can set plagiarism aside. It is indeed a phrase of some lineage and circulation, stretching back via Sigmund Freud (The Taboo of Virginity, 1917) to the work of the British anthropologist Ernest Crawley (1867-1924). From Freud's account:
Crawley, in language which differs only slightly from the current terminology of psycho-analysis, declares that each individual is separated from others by a ‘taboo of personal isolation’, and that it is precisely the minor differences in people who are otherwise alike that form the basis of feelings of strangeness and hostility between them.
It would be tempting to pursue this idea and to derive from this 'narcissism of minor differences' the hostility which in every human relationship we see fighting successfully against feelings of fellowship and overpowering the commandment that all men should love one another.(Freud 1917:199)
This brief search is interesting because it comports with Gladwell's conclusion in his Something Borrowed, which appeared in The New Yorker of November 22, 2004. In the article he explores serendipity, uncertainty, chance, fallible memory and other contributors to perceived plagiarism. Here is one of his conclusions, as it happens, containing the mentioned phrase.
And this is the second problem with plagiarism. It is not merely extremist. It has also become disconnected from the broader question of what does and does not inhibit creativity. We accept the right of one writer to engage in a full-scale knockoff of another—think how many serial-killer novels have been cloned from “The Silence of the Lambs.” Yet, when Kathy Acker incorporated parts of a Harold Robbins sex scene verbatim in a satiric novel, she was denounced as a plagiarist (and threatened with a lawsuit). When I worked at a newspaper, we were routinely dispatched to “match” a story from the Times: to do a new version of someone else’s idea. But had we “matched” any of the Times’ words—even the most banal of phrases—it could have been a firing offense. The ethics of plagiarism have turned into the narcissism of small differences: because journalism cannot own up to its heavily derivative nature, it must enforce originality on the level of the sentence.
His overall conclusion is:
The final dishonesty of the plagiarism fundamentalists is to encourage us to pretend that these chains of influence and evolution do not exist, and that a writer’s words have a virgin birth and an eternal life. I suppose that I could get upset about what happened to my words. I could also simply acknowledge that I had a good, long ride with that line—and let it go.
So Freud got his idea from Crawley, paraphrasing Crawley in a strikingly memorable manner that has echoed down the decades to Gladwell and then to Ferguson (and undoubtedly hundreds or thousands of others). A good idea or phrase thrives on its own merits regardless of its progenitor and any putative claim of ownership.

Of course all this brings to mind Henry Kissinger's quip, which is a variant of the idea behind the narcissism of small differences: "University politics are vicious precisely because the stakes are so small."

Finally, there is Dr. Joy Bliss's joke about the narcissism of small differences.
I was walking across a bridge one sunny day, and I saw a man standing on the edge, about to jump. I ran over and said: 'Stop. Don't do it.'

'Why shouldn't I?' he asked.

'Well, there's so much to live for!'

'Like what?'

'Are you religious?'

He said: 'Yes.'

I said: 'Me too. Are you Christian or Buddhist?'

'Christian.'

'Me too. Are you Catholic or Protestant?''

'Protestant.'

'Me too. Are you Episcopalian or Baptist?'

'Baptist.'

'Wow. Me too. Are you Baptist Church of God or Baptist Church of the Lord?'

'Baptist Church of God.'

'Me too. Are you original Baptist Church of God, or are you reformed Baptist Church of God?'

'Reformed Baptist Church of God.'

'Me too. Are you Reformed Baptist Church of God, reformation of 1879, or Reformed Baptist Church of God, reformation of 1915?'

He said: 'Reformed Baptist Church of God, reformation of 1915.'

I said: 'Die, heretic scum,' and pushed him off the bridge.


What the state is doing, in actuality, is issuing licenses to commit a felony


I enjoy coming across instances that shed light on what seemed like peculiar issues in history.

An example of one such peculiarity was the prevalence in the seventeenth and eighteenth centuries of privateers, which were essentially pirates operating under a state license. Intellectually it is of course not so hard to grasp the distinction between pirate and privateer. It is also not hard to understand why the distinction often became unclear in practice. Nonetheless, there is something just odd to the modern mind about the whole issue of pirates and privateers.

But perhaps it is because we are blind to our own pirate/privateer issues. We are close to contemporary issues and perhaps fail to see the parallels. This came to me while reading Patrick Radden Keefe's article Buzzkill: Washington State discovers that it’s not so easy to create a legal marijuana economy in the November 18, 2013 issue of The New Yorker.
Washington and Colorado have launched a singular experiment. The Netherlands tolerates personal use of marijuana, but growing or selling the drug is still illegal. Portugal has eliminated criminal sanctions on all forms of drug use, but selling narcotics remains a crime. Washington and Colorado are not merely decriminalizing adult possession and use of cannabis; they are creating a legal market for the drug that will be overseen by the state. In a further complication, the marijuana that is legal in these states will remain illegal in the eyes of the federal government, because the Controlled Substances Act of 1970 forbids the growing and selling of cannabis. “What the state is doing, in actuality, is issuing licenses to commit a felony,” Kleiman says. In late August, after months of silence, the Department of Justice announced that it will not intervene to halt the initiatives in Washington and Colorado. Instead, it will adopt a “trust but verify” approach, permitting the states to police the new market for the drug. Many other states appear poised to introduce legalization measures, and the Obama Administration’s apparent acquiescence surely will hasten this development.
"What the state is doing, in actuality, is issuing licenses to commit a felony" - sounds like the distinction between pirates and privateers to me. And I think we know how well that distinction was maintained and how well that policy served the interests of the respective nations.




Saturday, July 26, 2014

It’s not worth spending more on American workers at current wage levels

From Facts about non-residential investment by Tyler Cowen.

Digesting some of the cited data, Cowen observes:
One simple hypothesis is that it’s not worth spending more on American workers at current wage levels. As workers, while Americans are quite good, they are just not that much better than a variety of high-IQ individuals in cheaper countries, many of whom now have acceptable infrastructure to work with.
Accepting the premise on a contingent basis, I wonder whether the conclusion might then be that the fastest improvement would come from helping individuals improve their non-cognitive skills and behaviors. That would probably materially improve the inequality trends as well.

Any job is a step up

From an AEI video featuring an expert in poverty program administration from New York City.

Four lessons learned in fighting poverty that informed the 1990s reform of welfare. Notes from the presentation:
Require work.

Reward work.

Two-parent family.

Grow the economy.
Any job is a step up.

1.1 million on welfare (in New York) plunges to 300,000.
Work rate for never-married mothers went from 43% to 63%.
Child Poverty in NYC went from 43% to 28%.
African-American child poverty in US dropped from 44% in 1994 to 30% in 2001.

Government can't deliver on promises. Only people can deliver on promises.

Friday, July 25, 2014

And so the letters sat in that attic for decades

What a wonderful story: A Family's Lost Love Letters, a Stranger, and a History Revealed by Abigail Jones.
Caral explained that when her family first moved in, the house was bare except for a corner of the attic, where she found, in the dusty shadows, a blue hatbox with a gold braid around the edges. Inside, Caral found hundreds of letters, each one folded neatly inside an envelope, and on the front of each envelope, written in penmanship from a forgotten era, was “Miss Sally Anne Rudolph” and her address at the Parc Vendome.

Caral was 11 years old at the time, and she says that over the next few months, she read every letter in that hatbox in order, oftentimes ignoring her homework to find out what happened next in the love story between this woman named Sally and a man named Charlie. Caral was old enough to grasp the enormity of what she had discovered, but too young to understand why the letters might have been left behind or how to go about finding their owner. And so the letters sat in that attic for decades—until Caral’s mother decided to sell the house.

“I had to clean out all of my stuff from my childhood, and I knew that I had to take them. I have no idea why that was so important to me, but it was,” recalls Caral, who was in her 30s when her family moved. “I just couldn’t leave them.”