THIS IS NOT to say that I’m unanchored in my faith. There are some things that I’m absolutely sure about—the Golden Rule, the need to battle cruelty in all its forms, the value of love and charity, humility and grace. Those beliefs were driven home two years ago when I flew down to Birmingham, Alabama, to deliver a speech at the city’s Civil Rights Institute. The institute is right across the street from the Sixteenth Street Baptist Church, the site where, in 1963, four young children—Addie Mae Collins, Carole Robertson, Cynthia Wesley, and Denise McNair—lost their lives when a bomb planted by white supremacists exploded during Sunday school, and before my talk I took the opportunity to visit the church. The young pastor and several deacons greeted me at the door and showed me the still-visible scar along the wall where the bomb went off. I saw the clock at the back of the church, still frozen at 10:22 a.m. I studied the portraits of the four little girls. After the tour, the pastor, deacons, and I held hands and said a prayer in the sanctuary. Then they left me to sit in one of the pews and gather my thoughts. What must it have been like for those parents forty years ago, I wondered, knowing that their precious daughters had been snatched away by violence at once so casual and so vicious? How could they endure the anguish unless they were certain that some purpose lay behind their children’s murders, that some meaning could be found in immeasurable loss? Those parents would have seen the mourners pour in from all across the nation, would have read the condolences from across the globe, would have watched as Lyndon Johnson announced on national television that the time had come to overcome, would have seen Congress finally pass the Civil Rights Act of 1964. Friends and strangers alike would have assured them that their daughters had not died in vain—that they had awakened the conscience of a nation and helped liberate a people; that the bomb had burst a dam to let justice roll down like water and righteousness like a mighty stream. And yet would even that knowledge be enough to console your grief, to keep you from madness and eternal rage—unless you also knew that your child had gone on to a better place? My thoughts turned to my mother and her final days, after cancer had spread through her body and it was clear that there was no coming back. She had admitted to me during the course of her illness that she was not ready to die; the suddenness of it all had taken her by surprise, as if the physical world she loved so much had turned on her, betrayed her. And although she fought valiantly, endured the pain and chemotherapy with grace and good humor to the very end, more than once I saw fear flash across her eyes. More than fear of pain or fear of the unknown, it was the sheer loneliness of death that frightened her, I think—the notion that on this final journey, on this last adventure, she would have no one to fully share her experiences with, no one who could marvel with her at the body’s capacity to inflict pain on itself, or laugh at the stark absurdity of life once one’s hair starts falling out and one’s salivary glands shut down. I carried such thoughts with me as I left the church and made my speech. Later that night, back home in Chicago, I sat at the dinner table, watching Malia and Sasha as they laughed and bickered and resisted their string beans before their mother chased them up the stairs and to their baths. 
Alone in the kitchen washing the dishes, I imagined my two girls growing up, and I felt the ache that every parent must feel at one time or another, that desire to snatch up each moment of your child’s presence and never let go—to preserve every gesture, to lock in for all eternity the sight of their curls or the feel of their fingers clasped around yours. I thought of Sasha asking me once what happened when we die—“I don’t want to die, Daddy,” she had added matter-of-factly—and I had hugged her and said, “You’ve got a long, long way before you have to worry about that,” which had seemed to satisfy her. I wondered whether I should have told her the truth, that I wasn’t sure what happens when we die, any more than I was sure of where the soul resides or what existed before the Big Bang. Walking up the stairs, though, I knew what I hoped for—that my mother was together in some way with those four little girls, capable in some fashion of embracing them, of finding joy in their spirits. I know that tucking in my daughters that night, I grasped a little bit of heaven.

Chapter Seven

Race

THE FUNERAL WAS held in a big church, a gleaming, geometric structure spread out over ten well-manicured acres. Reputedly, it had cost $35 million to build, and every dollar showed—there was a banquet hall, a conference center, a 1,200-car parking lot, a state-of-the-art sound system, and a TV production facility with digital editing equipment. Inside the church sanctuary, some four thousand mourners had already gathered, most of them African American, many of them professionals of one sort or another: doctors, lawyers, accountants, educators, and real estate brokers. On the stage, senators, governors, and captains of industry mingled with black leaders like Jesse Jackson, John Lewis, Al Sharpton, and T. D. Jakes. Outside, under a bright October sun, thousands more stood along the quiet streets: elderly couples, solitary men, young women with strollers, some waving to the motorcades that occasionally passed, others standing in quiet contemplation, all of them waiting to pay their final respects to the diminutive, gray-haired woman who lay in the casket within. The choir sang; the pastor said an opening prayer. Former President Bill Clinton rose to speak, and began to describe what it had been like for him as a white Southern boy to ride in segregated buses, how the civil rights movement that Rosa Parks helped spark had liberated him and his white neighbors from their own bigotry. Clinton’s ease with his black audience, their almost giddy affection for him, spoke of reconciliation, of forgiveness, a partial mending of the past’s grievous wounds. In many ways, seeing a man who was both the former leader of the free world and a son of the South acknowledge the debt he owed a black seamstress was a fitting tribute to the legacy of Rosa Parks. Indeed, the magnificent church, the multitude of black elected officials, the evident prosperity of so many of those in attendance, and my own presence onstage as a United States senator—all of it could be traced to that December day in 1955 when, with quiet determination and unruffled dignity, Mrs. Parks had refused to surrender her seat on a bus. In honoring Rosa Parks, we honored others as well, the thousands of women and men and children across the South whose names were absent from the history books, whose stories had been lost in the slow eddies of time, but whose courage and grace had helped liberate a people.
And yet, as I sat and listened to the former President and the procession of speakers that followed, my mind kept wandering back to the scenes of devastation that had dominated the news just two months earlier, when Hurricane Katrina struck the Gulf Coast and New Orleans was submerged. I recalled images of teenage mothers weeping or cursing in front of the New Orleans Superdome, their listless infants hoisted to their hips, and old women in wheelchairs, heads lolled back from the heat, their withered legs exposed under soiled dresses. I thought about the news footage of a solitary body someone had laid beside a wall, motionless beneath the flimsy dignity of a blanket; and the scenes of shirtless young men in sagging pants, their legs churning through the dark waters, their arms draped with whatever goods they had managed to grab from nearby stores, the spark of chaos in their eyes. I had been out of the country when the hurricane first hit the Gulf, on my way back from a trip to Russia. One week after the initial tragedy, though, I traveled to Houston, joining Bill and Hillary Clinton, as well as George H. W. Bush and his wife, Barbara, as they announced fund-raising efforts on behalf of the hurricane’s victims and visited with some of the twenty-five thousand evacuees who were now sheltered in the Houston Astrodome and adjoining Reliant Center. The city of Houston had done an impressive job setting up emergency facilities to accommodate so many people, working with the Red Cross and FEMA to provide them with food, clothing, shelter, and medical care. But as we walked along the rows of cots that now lined the Reliant Center, shaking hands, playing with children, listening to people’s stories, it was obvious that many of Katrina’s survivors had been abandoned long before the hurricane struck. They were the faces of any inner-city neighborhood in any American city, the faces of black poverty—the jobless and almost jobless, the sick and soon to be sick, the frail and the elderly. A young mother talked about handing off her children to a bus full of strangers. Old men quietly described the houses they had lost and the absence of any insurance or family to fall back on. A group of young men insisted that the levees had been blown up by those who wished to rid New Orleans of black people. One tall, gaunt woman, looking haggard in an Astros T-shirt two sizes too big, clutched my arm and pulled me toward her. “We didn’t have nothin’ before the storm,” she whispered. “Now we got less than nothin’.” In the days that followed, I returned to Washington and worked the phones, trying to secure relief supplies and contributions. In Senate Democratic Caucus meetings, my colleagues and I discussed possible legislation. I appeared on the Sunday morning news shows, rejecting the notion that the Administration had acted slowly because Katrina’s victims were black—“the incompetence was color-blind,” I said—but insisting that the Administration’s inadequate planning showed a degree of remove from, and indifference toward, the problems of inner-city poverty that had to be addressed. Late one afternoon we joined Republican senators in what the Bush Administration deemed a classified briefing on the federal response. 
Almost the entire Cabinet was there, along with the chairman of the Joint Chiefs, and for an hour Secretaries Chertoff, Rumsfeld, and the rest bristled with confidence—and displayed not the slightest bit of remorse—as they recited the number of evacuations made, military rations distributed, National Guard troops deployed. A few nights later, we watched President Bush in that eerie, floodlit square, acknowledging the legacy of racial injustice that the tragedy had helped expose and proclaiming that New Orleans would rise again. And now, sitting at the funeral of Rosa Parks, nearly two months after the storm, after the outrage and shame that Americans across the country had felt during the crisis, after the speeches and emails and memos and caucus meetings, after television specials and essays and extended newspaper coverage, it felt as if nothing had happened. Cars remained on rooftops. Bodies were still being discovered. Stories drifted back from the Gulf that the big contractors were landing hundreds of millions of dollars’ worth of contracts, circumventing prevailing wage and affirmative action laws, hiring illegal immigrants to keep their costs down. The sense that the nation had reached a transformative moment—that it had had its conscience stirred out of a long slumber and would launch a renewed war on poverty—had quickly died away. Instead, we sat in church, eulogizing Rosa Parks, reminiscing about past victories, entombed in nostalgia. Already, legislation was moving to place a statue of Mrs. Parks under the Capitol dome. There would be a commemorative stamp bearing her likeness, and countless streets, schools, and libraries across America would no doubt bear her name. I wondered what Rosa Parks would make of all of this—whether stamps or statues could summon her spirit, or whether honoring her memory demanded something more. I thought about what that woman in Houston had whispered to me, and wondered how we might be judged, in those days after the levee broke. WHEN I MEET people for the first time, they sometimes quote back to me a line in my speech at the 2004 Democratic National Convention that seemed to strike a chord: “There is not a black America and white America and Latino America and Asian America—there’s the United States of America.” For them, it seems to capture a vision of America finally freed from the past of Jim Crow and slavery, Japanese internment camps and Mexican braceros, workplace tensions and cultural conflict—an America that fulfills Dr. King’s promise that we be judged not by the color of our skin but by the content of our character. In a sense I have no choice but to believe in this vision of America. As the child of a black man and a white woman, someone who was born in the racial melting pot of Hawaii, with a sister who’s half Indonesian but who’s usually mistaken for Mexican or Puerto Rican, and a brother-in-law and niece of Chinese descent, with some blood relatives who resemble Margaret Thatcher and others who could pass for Bernie Mac, so that family get-togethers over Christmas take on the appearance of a UN General Assembly meeting, I’ve never had the option of restricting my loyalties on the basis of race, or measuring my worth on the basis of tribe. Moreover, I believe that part of America’s genius has always been its ability to absorb newcomers, to forge a national identity out of the disparate lot that arrived on our shores. 
In this we’ve been aided by a Constitution that—despite being marred by the original sin of slavery—has at its very core the idea of equal citizenship under the law; and an economic system that, more than any other, has offered opportunity to all comers, regardless of status or title or rank. Of course, racism and nativist sentiments have repeatedly undermined these ideals; the powerful and the privileged have often exploited or stirred prejudice to further their own ends. But in the hands of reformers, from Tubman to Douglass to Chavez to King, these ideals of equality have gradually shaped how we understand ourselves and allowed us to form a multicultural nation the likes of which exists nowhere else on earth. Finally, those lines in my speech describe the demographic realities of America’s future. Already, Texas, California, New Mexico, Hawaii, and the District of Columbia are majority minority. Twelve other states have populations that are more than a third Latino, black, and/or Asian. Latino Americans now number forty-two million and are the fastest-growing demographic group, accounting for almost half of the nation’s population growth between 2004 and 2005; the Asian American population, though far smaller, has experienced a similar surge and is expected to increase by more than 200 percent over the next forty-five years. Shortly after 2050, experts project, America will no longer be a majority white country—with consequences for our economics, our politics, and our culture that we cannot fully anticipate. Still, when I hear commentators interpreting my speech to mean that we have arrived at a “postracial politics” or that we already live in a color-blind society, I have to offer a word of caution. To say that we are one people is not to suggest that race no longer matters—that the fight for equality has been won, or that the problems that minorities face in this country today are largely self-inflicted. We know the statistics: On almost every single socioeconomic indicator, from infant mortality to life expectancy to employment to home ownership, black and Latino Americans in particular continue to lag far behind their white counterparts. In corporate boardrooms across America, minorities are grossly underrepresented; in the United States Senate, there are only three Latinos and two Asian members (both from Hawaii), and as I write today I am the chamber’s sole African American. To suggest that our racial attitudes play no part in these disparities is to turn a blind eye to both our history and our experience—and to relieve ourselves of the responsibility to make things right. Moreover, while my own upbringing hardly typifies the African American experience—and although, largely through luck and circumstance, I now occupy a position that insulates me from most of the bumps and bruises that the average black man must endure—I can recite the usual litany of petty slights that during my forty-five years have been directed my way: security guards tailing me as I shop in department stores, white couples who toss me their car keys as I stand outside a restaurant waiting for the valet, police cars pulling me over for no apparent reason. I know what it’s like to have people tell me I can’t do something because of my color, and I know the bitter swill of swallowed-back anger.
I know as well that Michelle and I must be continually vigilant against some of the debilitating story lines that our daughters may absorb—from TV and music and friends and the streets—about who the world thinks they are, and what the world imagines they should be. To think clearly about race, then, requires us to see the world on a split screen—to maintain in our sights the kind of America that we want while looking squarely at America as it is, to acknowledge the sins of our past and the challenges of the present without becoming trapped in cynicism or despair. I have witnessed a profound shift in race relations in my lifetime. I have felt it as surely as one feels a change in the temperature. When I hear some in the black community deny those changes, I think it not only dishonors those who struggled on our behalf but also robs us of our agency to complete the work they began. But as much as I insist that things have gotten better, I am mindful of this truth as well: Better isn’t good enough. MY CAMPAIGN for the U.S. Senate indicates some of the changes that have taken place in both the white and black communities of Illinois over the past twenty-five years. By the time I ran, Illinois already had a history of blacks elected to statewide office, including a black state comptroller and attorney general (Roland Burris), a United States senator (Carol Moseley Braun), and a sitting secretary of state, Jesse White, who had been the state’s leading vote-getter only two years earlier. Because of the pioneering success of these public officials, my own campaign was no longer a novelty—I might not have been favored to win, but the fact of my race didn’t foreclose the possibility. Moreover, the types of voters who ultimately gravitated to my campaign defied the conventional wisdom. On the day I announced my candidacy for the U.S. Senate, for example, three of my white state senate colleagues showed up to endorse me. They weren’t what we in Chicago call “Lakefront Liberals”—the so-called Volvo-driving, latte-sipping, white-wine-drinking Democrats that Republicans love to poke fun at and might be expected to embrace a lost cause such as mine. Instead, they were three middle-aged, working-class guys—Terry Link of Lake County, Denny Jacobs of the Quad Cities, and Larry Walsh of Will County—all of whom represented mostly white, mostly working-class or suburban communities outside Chicago. It helped that these men knew me well; the four of us had served together in Springfield during the previous seven years and had maintained a weekly poker game whenever we were in session. It also helped that each of them prided himself on his independence, and was therefore willing to stick with me despite pressure from more favored white candidates. But it wasn’t just our personal relationships that led them to support me (although the strength of my friendships with these men—all of whom grew up in neighborhoods and at a time in which hostility toward blacks was hardly unusual—itself said something about the evolution of race relations). Senators Link, Jacobs, and Walsh are hard-nosed, experienced politicians; they had no interest in backing losers or putting their own positions at risk. The fact was, they all thought that I’d “sell” in their districts—once their constituents met me and could get past the name. They didn’t make such a judgment blind. For seven years they had watched me interact with their constituents, in the state capitol or on visits to their districts. 
They had seen white mothers hand me their children for pictures and watched white World War II vets shake my hand after I addressed their convention. They sensed what I’d come to know from a lifetime of experience: that whatever preconceived notions white Americans may continue to hold, the overwhelming majority of them these days are able—if given the time—to look beyond race in making their judgments of people. This isn’t to say that prejudice has vanished. None of us—black, white, Latino, or Asian—is immune to the stereotypes that our culture continues to feed us, especially stereotypes about black criminality, black intelligence, or the black work ethic. In general, members of every minority group continue to be measured largely by the degree of our assimilation—how closely speech patterns, dress, or demeanor conform to the dominant white culture—and the more that a minority strays from these external markers, the more he or she is subject to negative assumptions. If an internalization of antidiscrimination norms over the past three decades—not to mention basic decency—prevents most whites from consciously acting on such stereotypes in their daily interactions with persons of other races, it’s unrealistic to believe that these stereotypes don’t have some cumulative impact on the often snap decisions of who’s hired and who’s promoted, on who’s arrested and who’s prosecuted, on how you feel about the customer who just walked into your store or about the demographics of your children’s school. I maintain, however, that in today’s America such prejudices are far more loosely held than they once were—and hence are subject to refutation. A black teenage boy walking down the street may elicit fear in a white couple, but if he turns out to be their son’s friend from school he may be invited over for dinner. A black man may have trouble catching a cab late at night, but if he is a capable software engineer Microsoft will have no qualms about hiring him. I cannot prove these assertions; surveys of racial attitudes are notoriously unreliable. And even if I’m right, it’s cold comfort to many minorities. After all, spending one’s days refuting stereotypes can be a wearying business. It’s the added weight that many minorities, especially African Americans, so often describe in their daily round—the feeling that as a group we have no store of goodwill in America’s accounts, that as individuals we must prove ourselves anew each day, that we will rarely get the benefit of the doubt and will have little margin for error. Making a way through such a world requires the black child to fight off the additional hesitation that she may feel when she stands at the threshold of a mostly white classroom on the first day of school; it requires the Latina woman to fight off self-doubt as she prepares for a job interview at a mostly white company. Most of all, it requires fighting off the temptation to stop making the effort. Few minorities can isolate themselves entirely from white society—certainly not in the way that whites can successfully avoid contact with members of other races. But it is possible for minorities to pull down the shutters psychologically, to protect themselves by assuming the worst. “Why should I have to make the effort to disabuse whites of their ignorance about us?” I’ve had some blacks tell me. “We’ve been trying for three hundred years, and it hasn’t worked yet.” To which I suggest that the alternative is surrender—to what has been instead of what might be.
One of the things I value most in representing Illinois is the way it has disrupted my own assumptions about racial attitudes. During my Senate campaign, for example, I traveled with Illinois’s senior senator, Dick Durbin, on a thirty-nine-city tour of southern Illinois. One of our scheduled stops was a town called Cairo, at the very southern tip of the state, where the Mississippi and Ohio Rivers meet, a town made famous during the late sixties and early seventies as the site of some of the worst racial conflict anywhere outside of the Deep South. Dick had first visited Cairo during this period, when, as a young attorney working for then Lieutenant Governor Paul Simon, he had been sent to investigate what might be done to lessen the tensions there. As we drove down to Cairo, Dick recalled that visit: how, upon his arrival, he’d been warned not to use the telephone in his motel room because the switchboard operator was a member of the White Citizens Council; how white store owners had closed their businesses rather than succumb to boycotters’ demands to hire blacks; how black residents told him of their efforts to integrate the schools, their fear and frustration, the stories of lynching and jailhouse suicides, shootings and riots. By the time we pulled into Cairo, I didn’t know what to expect. Although it was midday, the town felt abandoned, a handful of stores open along the main road, a few elderly couples coming out of what appeared to be a health clinic. Turning a corner, we arrived at a large parking lot, where a crowd of a couple of hundred were milling about. A quarter of them were black, almost all the rest white. They were all wearing blue buttons that read OBAMA FOR U.S. SENATE. Ed Smith, a big, hearty guy who was the Midwest regional manager of the Laborers’ International Union and who’d grown up in Cairo, strode up to our van with a big grin on his face. “Welcome,” he said, shaking our hands as we got off the bus. “Hope you’re hungry, ’cause we got a barbecue going and my mom’s cooking.” I don’t presume to know exactly what was in the minds of the white people in the crowd that day. Most were my age and older and so would at least have remembered, if not been a direct part of, those grimmer days thirty years before. No doubt many of them were there because Ed Smith, one of the most powerful men in the region, wanted them to be there; others may have been there for the food, or just to see the spectacle of a U.S. senator and a candidate for the Senate campaign in their town. I do know that the barbecue was terrific, the conversation spirited, the people seemingly glad to see us. For an hour or so we ate, took pictures, and listened to people’s concerns. We discussed what might be done to restart the area’s economy and get more money into the schools; we heard about sons and daughters on their way to Iraq and the need to tear down an old hospital that had become a blight on downtown. And by the time we left, I felt a relationship had been established between me and the people I’d met—nothing transformative, but perhaps enough to weaken some of our biases and reinforce some of our better impulses. In other words, a quotient of trust had been built. Of course, such trust between the races is often tentative. It can wither without a sustaining effort.
It may last only so long as minorities remain quiescent, silent to injustice; it can be blown asunder by a few well-timed negative ads featuring white workers displaced by affirmative action, or the news of a police shooting of an unarmed black or Latino youth. But I also believe that moments like the one in Cairo ripple from their immediate point: that people of all races carry these moments into their homes and places of worship; that such moments shade a conversation with their children or their coworkers and can wear down, in slow, steady waves, the hatred and suspicion that isolation breeds. Recently, I was back in southern Illinois, driving with one of my downstate field directors, a young white man named Robert Stephan, after a long day of speeches and appearances in the area. It was a beautiful spring night, the broad waters and dusky banks of the Mississippi shimmering under a full, low-flung moon. The waters reminded me of Cairo and all the other towns up and down the river, the settlements that had risen and fallen with the barge traffic and the often sad, tough, cruel histories that had been deposited there at the confluence of the free and enslaved, the world of Huck and the world of Jim. I mentioned to Robert the progress we’d made on tearing down the old hospital in Cairo—our office had started meeting with the state health department and local officials—and told him about my first visit to the town. Because Robert had grown up in the southern part of the state, we soon found ourselves talking about the racial attitudes of his friends and neighbors. Just the previous week, he said, a few local guys with some influence had invited him to join them at a small social club in Alton, a couple of blocks from the house where he’d been raised. Robert had never been to the place, but it seemed nice enough. The food had been served, the group was making some small talk, when Robert noticed that of the fifty or so people in the room not a single person was black. Since Alton’s population is about a quarter African American, Robert thought this odd, and asked the men about it. It’s a private club, one of them said. At first, Robert didn’t understand—had no blacks tried to join? When they said nothing, he said, It’s 2006, for God’s sake. The men shrugged. It’s always been that way, they told him. No blacks allowed. Which is when Robert dropped his napkin on his plate, said good night, and left. I suppose I could spend time brooding over those men in the club, file it as evidence that white people still maintain a simmering hostility toward those who look like me. But I don’t want to confer on such bigotry a power it no longer possesses. I choose to think about Robert instead, and the small but difficult gesture he made. If a young man like Robert can make the effort to cross the currents of habit and fear in order to do what he knows is right, then I want to be sure that I’m there to meet him on the other side and help him onto shore. MY ELECTION WASN’T just aided by the evolving racial attitudes of Illinois’s white voters. It reflected changes in Illinois’s African American community as well. One measure of these changes could be seen in the types of early support my campaign received. Of the first $500,000 that I raised during the primary, close to half came from black businesses and professionals. It was a black-owned radio station, WVON, that first began to mention my campaign on the Chicago airwaves, and a black-owned weekly newsmagazine, N’Digo, that first featured me on its cover. 
One of the first times I needed a corporate jet for the campaign, it was a black friend who lent me his. Such capacity simply did not exist a generation ago. Although Chicago has always had one of the more vibrant black business communities in the country, in the sixties and seventies only a handful of self-made men—John Johnson, the founder of Ebony and Jet; George Johnson, the founder of Johnson Products; Ed Gardner, the founder of Soft Sheen; and Al Johnson, the first black in the country to own a GM franchise—would have been considered wealthy by the standards of white America. Today not only is the city filled with black doctors, dentists, lawyers, accountants, and other professionals, but blacks also occupy some of the highest management positions in corporate Chicago. Blacks own restaurant chains, investment banks, PR agencies, real estate investment trusts, and architectural firms. They can afford to live in neighborhoods of their choosing and send their children to the best private schools. They are actively recruited to join civic boards and generously support all manner of charities. Statistically, the number of African Americans who occupy the top fifth of the income ladder remains relatively small. Moreover, every black professional and businessperson in Chicago can tell you stories of the roadblocks they still experience on account of race. Few African American entrepreneurs have either the inherited wealth or the angel investors to help launch their businesses or cushion them from a sudden economic downturn. Few doubt that if they were white they would be further along in reaching their goals. And yet you won’t hear these men and women use race as a crutch or point to discrimination as an excuse for failure. In fact, what characterizes this new generation of black professionals is their rejection of any limits to what they can achieve. When a friend who had been the number one bond salesman at Merrill Lynch’s Chicago office decided to start his own investment bank, his goal wasn’t to grow it into the top black firm—he wanted it to become the top firm, period. When another friend decided to leave an executive position at General Motors to start his own parking service company in partnership with Hyatt, his mother thought he was crazy. “She couldn’t imagine anything better than having a management job at GM,” he told me, “because those jobs were unattainable for her generation. But I knew I wanted to build something of my own.” That simple notion—that one isn’t confined in one’s dreams—is so central to our understanding of America that it seems almost commonplace. But in black America, the idea represents a radical break from the past, a severing of the psychological shackles of slavery and Jim Crow. It is perhaps the most important legacy of the civil rights movement, a gift from those leaders like John Lewis and Rosa Parks who marched, rallied, and endured threats, arrests, and beatings to widen the doors of freedom. 
And it is also a testament to that generation of African American mothers and fathers whose heroism was less dramatic but no less important: parents who worked all their lives in jobs that were too small for them, without complaint, scrimping and saving to buy a small home; parents who did without so that their children could take dance classes or the school-sponsored field trip; parents who coached Little League games and baked birthday cakes and badgered teachers to make sure that their children weren’t tracked into the less challenging programs; parents who dragged their children to church every Sunday, whupped their children’s behinds when they got out of line, and looked out for all the children on the block during long summer days and into the night. Parents who pushed their children to achieve and fortified them with a love that could withstand whatever the larger society might throw at them. It is through this quintessentially American path of upward mobility that the black middle class has grown fourfold in a generation, and that the black poverty rate was cut in half. Through a similar process of hard work and commitment to family, Latinos have seen comparable gains: From 1979 to 1999, the number of Latino families considered middle class has grown by more than 70 percent. In their hopes and expectations, these black and Latino workers are largely indistinguishable from their white counterparts. They are the people who make our economy run and our democracy flourish—the teachers, mechanics, nurses, computer technicians, assembly-line workers, bus drivers, postal workers, store managers, plumbers, and repairmen who constitute America’s vital heart. And yet, for all the progress that’s been made in the past four decades, a stubborn gap remains between the living standards of black, Latino, and white workers. The average black wage is 75 percent of the average white wage; the average Latino wage is 71 percent of the average white wage. Black median net worth is about $6,000, and Latino median net worth is about $8,000, compared to $88,000 for whites. When laid off from their job or confronted with a family emergency, blacks and Latinos have less savings to draw on, and parents are less able to lend their children a helping hand. Even middle-class blacks and Latinos pay more for insurance, are less likely to own their own homes, and suffer poorer health than Americans as a whole. More minorities may be living the American dream, but their hold on that dream remains tenuous. How we close this persistent gap—and how much of a role government should play in achieving that goal—remains one of the central controversies of American politics. But there should be some strategies we can all agree on. We might start with completing the unfinished business of the civil rights movement—namely, enforcing nondiscrimination laws in such basic areas as employment, housing, and education. Anyone who thinks that such enforcement is no longer needed should pay a visit to one of the suburban office parks in their area and count the number of blacks employed there, even in the relatively unskilled jobs, or stop by a local trade union hall and inquire as to the number of blacks in the apprenticeship program, or read recent studies showing that real estate brokers continue to steer prospective black homeowners away from predominantly white neighborhoods. Unless you live in a state without many black residents, I think you’ll agree that something’s amiss.
Under recent Republican Administrations, such enforcement of civil rights laws has been tepid at best, and under the current Administration, it’s been essentially nonexistent—unless one counts the eagerness of the Justice Department’s Civil Rights Division to label university scholarship or educational enrichment programs targeted at minority students as “reverse discrimination,” no matter how underrepresented minority students may be in a particular institution or field, and no matter how incidental the program’s impact on white students. This should be a source of concern across the political spectrum, even to those who oppose affirmative action. Affirmative action programs, when properly structured, can open up opportunities otherwise closed to qualified minorities without diminishing opportunities for white students. Given the dearth of black and Latino Ph.D. candidates in mathematics and the physical sciences, for example, a modest scholarship program for minorities interested in getting advanced degrees in these fields (a recent target of a Justice Department inquiry) won’t keep white students out of such programs, but can broaden the pool of talent that America will need for all of us to prosper in a technology-based economy. Moreover, as a lawyer who’s worked on civil rights cases, I can say that where there’s strong evidence of prolonged and systematic discrimination by large corporations, trade unions, or branches of municipal government, goals and timetables for minority hiring may be the only meaningful remedy available. Many Americans disagree with me on this as a matter of principle, arguing that our institutions should never take race into account, even if it is to help victims of past discrimination. Fair enough—I understand their arguments, and don’t expect the debate to be settled anytime soon. But that shouldn’t stop us from at least making sure that when two equally qualified people—one minority and one white—apply for a job, house, or loan, and the white person is consistently preferred, then the government, through its prosecutors and through its courts, should step in to make things right. We should also agree that the responsibility to close the gap can’t come from government alone; minorities, individually and collectively, have responsibilities as well. Many of the social or cultural factors that negatively affect black people, for example, simply mirror in exaggerated form problems that afflict America as a whole: too much television (the average black household has the television on more than eleven hours per day), too much consumption of poisons (blacks smoke more and eat more fast food), and a lack of emphasis on educational achievement. Then there’s the collapse of the two-parent black household, a phenomenon that is occurring at such an alarming rate when compared to the rest of American society that what was once a difference in degree has become a difference in kind, a phenomenon that reflects a casualness toward sex and child rearing among black men that renders black children more vulnerable—and for which there is simply no excuse. Taken together, these factors impede progress. Moreover, although government action can help change behavior (encouraging supermarket chains with fresh produce to locate in black neighborhoods, to take just one small example, would go a long way toward changing people’s eating habits), a transformation in attitudes has to begin in the home, and in neighborhoods, and in places of worship. 
Community-based institutions, particularly the historically black church, have to help families reinvigorate in young people a reverence for educational achievement, encourage healthier lifestyles, and reenergize traditional social norms surrounding the joys and obligations of fatherhood. Ultimately, though, the most important tool to close the gap between minority and white workers may have little to do with race at all. These days, what ails working-class and middle-class blacks and Latinos is not fundamentally different from what ails their white counterparts: downsizing, outsourcing, automation, wage stagnation, the dismantling of employer-based health-care and pension plans, and schools that fail to teach young people the skills they need to compete in a global economy. (Blacks in particular have been vulnerable to these trends, since they are more reliant on blue-collar manufacturing jobs and are less likely to live in suburban communities where new jobs are being generated.) And what would help minority workers are the same things that would help white workers: the opportunity to earn a living wage, the education and training that lead to such jobs, labor laws and tax laws that restore some balance to the distribution of the nation’s wealth, and health-care, child care, and retirement systems that working people can count on. This pattern—of a rising tide lifting minority boats—has certainly held true in the past. The progress made by the previous generation of Latinos and African Americans occurred primarily because the same ladders of opportunity that built the white middle class were for the first time made available to minorities as well. They benefited, as all people did, from an economy that was growing and a government interested in investing in its people. Not only did tight labor markets, access to capital, and programs like Pell Grants and Perkins Loans benefit blacks directly; growing incomes and a sense of security among whites made them less resistant to minority claims for equality. The same formula holds true today. As recently as 1999, the black unemployment rate fell to record lows and black income rose to record highs not because of a surge in affirmative action hiring or a sudden change in the black work ethic but because the economy was booming and government took a few modest measures—like the expansion of the Earned Income Tax Credit—to spread the wealth around. If you want to know the secret of Bill Clinton’s popularity among African Americans, you need look no further than these statistics. But these same statistics should also force those of us interested in racial equality to conduct an honest accounting of the costs and benefits of our current strategies. Even as we continue to defend affirmative action as a useful, if limited, tool to expand opportunity to underrepresented minorities, we should consider spending a lot more of our political capital convincing America to make the investments needed to ensure that all children perform at grade level and graduate from high school—a goal that, if met, would do more than affirmative action to help those black and Latino children who need it the most.
Similarly, we should support targeted programs to eliminate existing health disparities between minorities and whites (some evidence suggests that even when income and levels of insurance are factored out, minorities may still be receiving worse care), but a plan for universal health-care coverage would do more to eliminate health disparities between whites and minorities than any race-specific programs we might design. An emphasis on universal, as opposed to race-specific, programs isn’t just good policy; it’s also good politics. I remember once sitting with one of my Democratic colleagues in the Illinois state senate as we listened to another fellow senator—an African American whom I’ll call John Doe who represented a largely inner-city district—launch into a lengthy and passionate peroration on why the elimination of a certain program was a case of blatant racism. After a few minutes, the white senator (who had one of the chamber’s more liberal voting records) turned to me and said, “You know what the problem is with John? Whenever I hear him, he makes me feel more white.” In defense of my black colleague, I pointed out that it’s not always easy for a black politician to gauge the right tone to take—too angry? not angry enough?—when discussing the enormous hardships facing his or her constituents. Still, my white colleague’s comment was instructive. Rightly or wrongly, white guilt has largely exhausted itself in America; even the most fair-minded of whites, those who would genuinely like to see racial inequality ended and poverty relieved, tend to push back against suggestions of racial victimization—or race-specific claims based on the history of race discrimination in this country. Some of this has to do with the success of conservatives in fanning the politics of resentment—by wildly overstating, for example, the adverse effects of affirmative action on white workers. But mainly it’s a matter of simple self-interest. Most white Americans figure that they haven’t engaged in discrimination themselves and have plenty of their own problems to worry about. They also know that with a national debt approaching $9 trillion and annual deficits of almost $300 billion, the country has precious few resources to help them with those problems. As a result, proposals that solely benefit minorities and dissect Americans into “us” and “them” may generate a few short-term concessions when the costs to whites aren’t too high, but they can’t serve as the basis for the kinds of sustained, broad-based political coalitions needed to transform America. On the other hand, universal appeals around strategies that help all Americans (schools that teach, jobs that pay, health care for everyone who needs it, a government that helps out after a flood), along with measures that ensure our laws apply equally to everyone and hence uphold broadly held American ideals (like better enforcement of existing civil rights laws), can serve as the basis for such coalitions—even if such strategies disproportionately help minorities. Such a shift in emphasis is not easy: Old habits die hard, and there is always a fear on the part of many minorities that unless racial discrimination, past and present, stays on the front burner, white America will be let off the hook and hard-fought gains may be reversed. I understand these fears—nowhere is it ordained that history moves in a straight line, and during difficult economic times it is possible that the imperatives of racial equality get shunted aside. 
Still, when I look at what past generations of minorities have had to overcome, I am optimistic about the ability of this next generation to continue their advance into the economic mainstream. For most of our recent history, the rungs on the opportunity ladder may have been more slippery for blacks; the admittance of Latinos into firehouses and corporate suites may have been grudging. But despite all that, the combination of economic growth, government investment in broad-based programs to encourage upward mobility, and a modest commitment to enforce the simple principle of nondiscrimination was sufficient to pull the large majority of blacks and Latinos into the socioeconomic mainstream within a generation. We need to remind ourselves of this achievement. What’s remarkable is not the number of minorities who have failed to climb into the middle class but the number who succeeded against the odds; not the anger and bitterness that parents of color have transmitted to their children but the degree to which such emotions have ebbed. That knowledge gives us something to build on. It tells us that more progress can be made. IF UNIVERSAL STRATEGIES that target the challenges facing all Americans can go a long way toward closing the gap between blacks, Latinos, and whites, there are two aspects of race relations in America that require special attention—issues that fan the flames of racial conflict and undermine the progress that’s been made. With respect to the African American community, the issue is the deteriorating condition of the inner-city poor. With respect to Latinos, it is the problem of undocumented workers and the political firestorm surrounding immigration. One of my favorite restaurants in Chicago is a place called MacArthur’s. It’s away from the Loop, on the west end of the West Side on Madison Street, a simple, brightly lit space with booths of blond wood that seat maybe a hundred people. On any day of the week, about that many people can be found lining up—families, teenagers, groups of matronly women and elderly men—all waiting their turn, cafeteria-style, for plates filled with fried chicken, catfish, hoppin’ John, collard greens, meatloaf, cornbread, and other soul-food standards. As these folks will tell you, it’s well worth the wait. The restaurant’s owner, Mac Alexander, is a big, barrel-chested man in his early sixties, with thinning gray hair, a mustache, and a slight squint behind his glasses that gives him a pensive, professorial air. He’s an army vet, born in Lexington, Mississippi, who lost his left leg in Vietnam; after his convalescence, he and his wife moved to Chicago, where he took business courses while working in a warehouse. In 1972, he opened Mac’s Records, and helped found the Westside Business Improvement Association, pledging to fix up what he calls his “little corner of the world.” By any measure he has succeeded. His record store grew; he opened up the restaurant and hired local residents to work there; he started buying and rehabbing run-down buildings and renting them out. It’s because of the efforts of men and women like Mac that the view along Madison Street is not as grim as the West Side’s reputation might suggest. There are clothing stores and pharmacies and what seems like a church on every block. Off the main thoroughfare you will find the same small bungalows—with neatly trimmed lawns and carefully tended flower beds—that make up many of Chicago’s neighborhoods.
But travel a few blocks farther in any direction and you will also experience a different side of Mac’s world: the throngs of young men on corners casting furtive glances up and down the street; the sound of sirens blending with the periodic thump of car stereos turned up full blast; the dark, boarded-up buildings and hastily scrawled gang signs; the rubbish everywhere, swirling in winter winds. Recently, the Chicago Police Department installed permanent cameras and flashing lights atop the lampposts of Madison, bathing each block in a perpetual blue glow. The folks who live along Madison didn’t complain; flashing blue lights are a familiar enough sight. They’re just one more reminder of what everybody knows—that the community’s immune system has broken down almost entirely, weakened by drugs and gunfire and despair; that despite the best efforts of folks like Mac, a virus has taken hold, and a people is wasting away. “Crime’s nothing new on the West Side,” Mac told me one afternoon as we walked to look at one of his buildings. “I mean, back in the seventies, the police didn’t really take the idea of looking after black neighborhoods seriously. As long as trouble didn’t spill out into the white neighborhoods, they didn’t care. First store I opened, on Lake and Damen, I must’ve had eight, nine break-ins in a row. “The police are more responsive now,” Mac said. “The commander out here, he’s a good brother, does the best he can. But he’s just as overwhelmed as everybody else. See, these kids out here, they just don’t care. Police don’t scare ’em, jail doesn’t scare ’em—more than half of the young guys out here already got a record. If the police pick up ten guys standing on a corner, another ten’ll take their place in an hour. “That’s the thing that’s changed…the attitude of these kids. You can’t blame them, really, because most of them have nothing at home. Their mothers can’t tell them nothing—a lot of these women are still children themselves. Father’s in jail. Nobody around to guide the kids, keep them in school, teach them respect. So these boys just raise themselves, basically, on the streets. That’s all they know. The gang, that’s their family. They don’t see any jobs out here except the drug trade. Don’t get me wrong, we’ve still got a lot of good families around here…not a lot of money necessarily, but doing their best to keep their kids out of trouble. But they’re just too outnumbered. The longer they stay, the more they feel their kids are at risk. So the minute they get a chance, they move out. And that just leaves things worse.” Mac shook his head. “I don’t know. I keep thinking we can turn things around. But I’ll be honest with you, Barack—it’s hard not to feel sometimes like the situation is hopeless. Hard—and getting harder.” I hear a lot of such sentiments in the African American community these days, a frank acknowledgment that conditions in the heart of the inner city are spinning out of control. Sometimes the conversation will center on statistics—the infant mortality rate (on par with Malaysia among poor black Americans), or black male unemployment (estimated at more than a third in some Chicago neighborhoods), or the number of black men who can expect to go through the criminal justice system at some point in their lives (one in three nationally). But more often the conversation focuses on personal stories, offered as evidence of a fundamental breakdown within a portion of our community and voiced with a mixture of sadness and incredulity. 
A teacher will talk about what it’s like to have an eight-year-old shout obscenities and threaten her with bodily harm. A public defender will describe a fifteen-year-old’s harrowing rap sheet or the nonchalance with which his clients predict they will not live to see their thirtieth year. A pediatrician will describe the teenage parents who don’t think there’s anything wrong with feeding their toddlers potato chips for breakfast, or who admit to having left their five- or six-year-old alone at home. These are the stories of those who didn’t make it out of history’s confinement, of the neighborhoods within the black community that house the poorest of the poor, serving as repositories for all the scars of slavery and violence of Jim Crow, the internalized rage and the forced ignorance, the shame of men who could not protect their women or support their families, the children who grew up being told they wouldn’t amount to anything and had no one there to undo the damage. There was a time, of course, when such deep intergenerational poverty could still shock a nation—when the publication of Michael Harrington’s The Other America or Bobby Kennedy’s visits to the Mississippi Delta could inspire outrage and a call to action. Not anymore. Today the images of the so-called underclass are ubiquitous, a permanent fixture in American popular culture—in film and TV, where they’re the foil of choice for the forces of law and order; in rap music and videos, where the gangsta life is glorified and mimicked by white and black teenagers alike (although white teenagers, at least, are aware that theirs is just a pose); and on the nightly news, where the depredation to be found in the inner city always makes for good copy. Rather than evoke our sympathy, our familiarity with the lives of the black poor has bred spasms of fear and outright contempt. But mostly it’s bred indifference. Black men filling our prisons, black children unable to read or caught in a gangland shooting, the black homeless sleeping on grates and in the parks of our nation’s capital—we take these things for granted, as part of the natural order, a tragic situation, perhaps, but not one for which we are culpable, and certainly not something subject to change. This concept of a black underclass—separate, apart, alien in its behavior and in its values—has also played a central role in modern American politics. It was partly on behalf of fixing the black ghetto that Johnson’s War on Poverty was launched, and it was on the basis of that war’s failures, both real and perceived, that conservatives turned much of the country against the very concept of the welfare state. A cottage industry grew within conservative think tanks, arguing not only that cultural pathologies—rather than racism or structural inequalities built into our economy—were responsible for black poverty but also that government programs like welfare, coupled with liberal judges who coddled criminals, actually made these pathologies worse. On television, images of innocent children with distended bellies were replaced with those of black looters and muggers; news reports focused less on the black maid struggling to make ends meet and more on the “welfare queen” who had babies just to collect a check. What was needed, conservatives argued, was a stern dose of discipline—more police, more prisons, more personal responsibility, and an end to welfare.
If such strategies could not transform the black ghetto, at least they would contain it and keep hardworking taxpayers from throwing good money after bad. That conservatives won over white public opinion should come as no surprise. Their arguments tapped into a distinction between the “deserving” and “undeserving” poor that has a long and varied history in America, an argument that has often been racially or ethnically tinged and that has gained greater currency during those periods—like the seventies and eighties—when economic times are tough. The response of liberal policy makers and civil rights leaders didn’t help; in their urgency to avoid blaming the victims of historical racism, they tended to downplay or ignore evidence that entrenched behavioral patterns among the black poor really were contributing to intergenerational poverty. (Most famously, Daniel Patrick Moynihan was accused of racism in the early sixties when he raised alarms about the rise of out-of-wedlock births among the black poor.) This willingness to dismiss the role that values played in shaping the economic success of a community strained credulity and alienated working-class whites— particularly since some of the most liberal policy makers lived lives far removed from urban disorder. The truth is that such rising frustration with conditions in the inner city was hardly restricted to whites. In most black neighborhoods, law-abiding, hardworking residents have been demanding more aggressive police protection for years, since they are far more likely to be victims of crime. In private—around kitchen tables, in barbershops, and after church—black folks can often be heard bemoaning the eroding work ethic, inadequate parenting, and declining sexual mores with a fervor that would make the Heritage Foundation proud. In that sense, black attitudes regarding the sources of chronic poverty are far more conservative than black politics would care to admit. What you won’t hear, though, are blacks using such terms as “predator” in describing a young gang member, or “underclass” in describing mothers on welfare—language that divides the world between those who are worthy of our concern and those who are not. For black Americans, such separation from the poor is never an option, and not just because the color of our skin—and the conclusions the larger society draws from our color—makes all of us only as free, only as respected, as the least of us. It’s also because blacks know the back story to the inner city’s dysfunction. Most blacks who grew up in Chicago remember the collective story of the great migration from the South, how after arriving in the North blacks were forced into ghettos because of racial steering and restrictive covenants and stacked up in public housing, where the schools were substandard and the parks were underfunded and police protection was nonexistent and the drug trade was tolerated. They remember how the plum patronage jobs were reserved for other immigrant groups and the blue-collar jobs that black folks relied on evaporated, so that families that had been intact began to crack under the pressure and ordinary children slipped through those cracks, until a tipping point was reached and what had once been the sad exception somehow became the rule. They know what drove that homeless man to drink because he is their uncle. That hardened criminal— they remember when he was a little boy, so full of life and capable of love, for he is their cousin. 
In other words, African Americans understand that culture matters but that culture is shaped by circumstance. We know that many in the inner city are trapped by their own self-destructive behaviors but that those behaviors are not innate. And because of that knowledge, the black community remains convinced that if America finds its will to do so, then circumstances for those trapped in the inner city can be changed, individual attitudes among the poor will change in kind, and the damage can gradually be undone, if not for this generation then at least for the next. Such wisdom might help us move beyond ideological bickering and serve as the basis of a renewed effort to tackle the problems of inner-city poverty. We could begin by acknowledging that perhaps the single biggest thing we could do to reduce such poverty is to encourage teenage girls to finish high school and avoid having children out of wedlock. In this effort, school- and community-based programs that have a proven track record of reducing teen pregnancy need to be expanded, but parents, clergy, and community leaders also need to speak out more consistently on the issue. We should also acknowledge that conservatives—and Bill Clinton—were right about welfare as it was previously structured: By detaching income from work, and by making no demands on welfare recipients other than a tolerance for intrusive bureaucracy and an assurance that no man lived in the same house as the mother of his children, the old AFDC program sapped people of their initiative and eroded their self-respect. Any strategy to reduce intergenerational poverty has to be centered on work, not welfare— not only because work provides independence and income but also because work provides order, structure, dignity, and opportunities for growth in people’s lives. But we also need to admit that work alone does not ensure that people can rise out of poverty. Across America, welfare reform has sharply reduced the number of people on the public dole; it has also swelled the ranks of the working poor, with women churning in and out of the labor market, locked into jobs that don’t pay a living wage, forced every day to scramble for adequate child care, affordable housing, and accessible health care, only to find themselves at the end of each month wondering how they can stretch the last few dollars that they have left to cover the food bill, the gas bill, and the baby’s new coat. Strategies like an expanded Earned Income Tax Credit that help all low-wage workers can make an enormous difference in the lives of these women and their children. But if we’re serious about breaking the cycle of intergenerational poverty, then many of these women will need some extra help with the basics that those living outside the inner city often take for granted. They need more police and more effective policing in their neighborhoods, to provide them and their children some semblance of personal security. They need access to community-based health centers that emphasize prevention— including reproductive health care, nutritional counseling, and in some cases treatment for substance abuse. They need a radical transformation of the schools their children attend, and access to affordable child care that will allow them to hold a full-time job or pursue their education. And in many cases they need help learning to be effective parents. 
By the time many inner-city children reach the school system, they’re already behind—unable to identify basic numbers, colors, or the letters in the alphabet, unaccustomed to sitting still or participating in a structured environment, and often burdened by undiagnosed health problems. They’re unprepared not because they’re unloved but because their mothers don’t know how to provide what they need. Well-structured government programs—prenatal counseling, access to regular pediatric care, parenting programs, and quality early-childhood-education programs—have a proven ability to help fill the void. Finally, we need to tackle the nexus of unemployment and crime in the inner city so that the men who live there can begin fulfilling their responsibilities. The conventional wisdom is that most unemployed inner-city men could find jobs if they really wanted to work; that they inevitably prefer drug dealing, with its attendant risks but potential profits, to the low-paying jobs that their lack of skills warrants. In fact, economists who’ve studied the issue—and the young men whose fates are at stake—will tell you that the costs and benefits of the street life don’t match the popular mythology: At the bottom or even the middle ranks of the industry, drug dealing is a minimum-wage affair. For many inner-city men, what prevents gainful employment is not simply the absence of motivation to get off the streets but the absence of a job history or any marketable skills—and, increasingly, the stigma of a prison record. Ask Mac, who has made it part of his mission to provide young men in his neighborhood a second chance. Ninety-five percent of his male employees are ex-felons, including one of his best cooks, who has been in and out of prison for the past twenty years for various drug offenses and one count of armed robbery. Mac starts them out at eight dollars an hour and tops them out at fifteen dollars an hour. He has no shortage of applicants. Mac’s the first one to admit that some of the guys come in with issues—they aren’t used to getting to work on time, and a lot of them aren’t used to taking orders from a supervisor—and his turnover can be high. But by not accepting excuses from the young men he employs (“I tell them I got a business to run, and if they don’t want the job I got other folks who do”), he finds that most are quick to adapt. Over time they become accustomed to the rhythms of ordinary life: sticking to schedules, working as part of a team, carrying their weight. They start talking about getting their GEDs, maybe enrolling in the local community college. They begin to aspire to something better. It would be nice if there were thousands of Macs out there, and if the market alone could generate opportunities for all the inner-city men who need them. But most employers aren’t willing to take a chance on ex-felons, and those who are willing are often prevented from doing so. In Illinois, for example, ex-felons are prohibited from working not only in schools, nursing homes, and hospitals—restrictions that sensibly reflect our unwillingness to compromise the safety of our children or aging parents—but some are also prohibited from working as barbers and nail technicians.
Government could kick-start a transformation of circumstances for these men by working with private-sector contractors to hire and train ex-felons on projects that can benefit the community as a whole: insulating homes and offices to make them energy-efficient, perhaps, or laying the broadband lines needed to thrust entire communities into the Internet age. Such programs would cost money, of course—although, given the annual cost of incarcerating an inmate, any drop in recidivism would help the program pay for itself. Not all of the hard-core unemployed would prefer entry-level jobs to life on the streets, and no program to help ex-felons will eliminate the need to lock up hardened criminals, those whose habits of violence are too deeply entrenched. Still, we can assume that with lawful work available for young men now in the drug trade, crime in many communities would drop; that as a consequence more employers would locate businesses in these neighborhoods and a self-sustaining economy would begin to take root; and that over the course of ten or fifteen years norms would begin to change, young men and women would begin to imagine a future for themselves, marriage rates would rise, and children would have a more stable world in which to grow up. What would that be worth to all of us—an America in which crime has fallen, more children are cared for, cities are reborn, and the biases, fear, and discord that black poverty feeds are slowly drained away? Would it be worth what we’ve spent in the past year in Iraq? Would it be worth relinquishing demands for estate tax repeal? It’s hard to quantify the benefits of such changes—precisely because the benefits would be immeasurable. IF THE PROBLEMS of inner-city poverty arise from our failure to face up to an often tragic past, the challenges of immigration spark fears of an uncertain future. The demographics of America are changing inexorably and at lightning speed, and the claims of new immigrants won’t fit neatly into the black-and-white paradigm of discrimination and resistance and guilt and recrimination. Indeed, even black and white newcomers—from Ghana and Ukraine, Somalia and Romania—arrive on these shores unburdened by the racial dynamics of an earlier era. During the campaign, I would see firsthand the faces of this new America—in the Indian markets along Devon Avenue, in the sparkling new mosque in the southwest suburbs, in an Armenian wedding and a Filipino ball, in the meetings of the Korean American Leadership Council and the Nigerian Engineers Association. Everywhere I went, I found immigrants anchoring themselves to whatever housing and work they could find, washing dishes or driving cabs or toiling in their cousin’s dry cleaners, saving money and building businesses and revitalizing dying neighborhoods, until they moved to the suburbs and raised children with accents that betrayed not the land of their parents but their Chicago birth certificates, teenagers who listened to rap and shopped at the mall and planned for futures as doctors and lawyers and engineers and even politicians. Across the country, this classic immigrant story is playing itself out, the story of ambition and adaptation, hard work and education, assimilation and upward mobility. Today’s immigrants, however, are living out this story in hyperdrive.
As beneficiaries of a nation more tolerant and more worldly than the one immigrants faced generations ago, a nation that has come to revere its immigrant myth, they are more confident in their place here, more assertive of their rights. As a senator, I receive countless invitations to address these newest Americans, where I am often quizzed on my foreign policy views—where do I stand on Cyprus, say, or the future of Taiwan? They may have policy concerns specific to fields in which their ethnic groups are heavily represented—Indian American pharmacists might complain about Medicare reimbursements, Korean small-business owners might lobby for changes in the tax code. But mostly they want affirmation that they, too, are Americans. Whenever I appear before immigrant audiences, I can count on some good-natured ribbing from my staff after my speech; according to them, my remarks always follow a three-part structure: “I am your friend,” “[Fill in the home country] has been a cradle of civilization,” and “You embody the American dream.” They’re right, my message is simple, for what I’ve come to understand is that my mere presence before these newly minted Americans serves notice that they matter, that they are voters critical to my success and full-fledged citizens deserving of respect. Of course, not all my conversations in immigrant communities follow this easy pattern. In the wake of 9/11, my meetings with Arab and Pakistani Americans, for example, have a more urgent quality, for the stories of detentions and FBI questioning and hard stares from neighbors have shaken their sense of security and belonging. They have been reminded that the history of immigration in this country has a dark underbelly; they need specific assurances that their citizenship really means something, that America has learned the right lessons from the Japanese internments during World War II, and that I will stand with them should the political winds shift in an ugly direction. It’s in my meetings with the Latino community, though, in neighborhoods like Pilsen and Little Village, towns like Cicero and Aurora, that I’m forced to reflect on the meaning of America, the meaning of citizenship, and my sometimes conflicted feelings about all the changes that are taking place. Of course, the presence of Latinos in Illinois—Puerto Ricans, Colombians, Salvadorans, Cubans, and most of all Mexicans—dates back generations, when agricultural workers began making their way north and joined ethnic groups in factory jobs throughout the region. Like other immigrants, they assimilated into the culture, although like African Americans, their upward mobility was often hampered by racial bias. Perhaps for that reason, black and Latino political and civil rights leaders often made common cause. In 1983, Latino support was critical in the election of Chicago’s first black mayor, Harold Washington. That support was reciprocated, as Washington helped elect a generation of young, progressive Latinos to the Chicago city council and the Illinois state legislature. Indeed, until their numbers finally justified their own organization, Latino state legislators were official members of the Illinois Legislative Black Caucus. It was against this backdrop, shortly after my arrival in Chicago, that my own ties to the Latino community were formed. As a young organizer, I often worked with Latino leaders on issues that affected both black and brown residents, from failing schools to illegal dumping to unimmunized children. 
My interest went beyond politics; I would come to love the Mexican and Puerto Rican sections of the city—the sounds of salsa and merengue pulsing out of apartments on hot summer nights, the solemnity of Mass in churches once filled with Poles and Italians and Irish, the frantic, happy chatter of soccer matches in the park, the cool humor of the men behind the counter at the sandwich shop, the elderly women who would grasp my hand and laugh at my pathetic efforts at Spanish. I made lifelong friends and allies in those neighborhoods; in my mind, at least, the fates of black and brown were to be perpetually intertwined, the cornerstone of a coalition that could help America live up to its promise. By the time I returned from law school, though, tensions between blacks and Latinos in Chicago had started to surface. Between 1990 and 2000, the Spanish-speaking population in Chicago rose by 38 percent, and with this surge in population the Latino community was no longer content to serve as junior partner in any black-brown coalition. After Harold Washington died, a new cohort of Latino elected officials, affiliated with Richard M. Daley and remnants of the old Chicago political machine, came onto the scene, men and women less interested in high-minded principles and rainbow coalitions than in translating growing political power into contracts and jobs. As black businesses and commercial strips struggled, Latino businesses thrived, helped in part by financial ties to home countries and by a customer base held captive by language barriers. Everywhere, it seemed, Mexican and Central American workers came to dominate low-wage work that had once gone to blacks—as waiters and busboys, as hotel maids and as bellmen—and made inroads in the construction trades that had long excluded black labor. Blacks began to grumble and feel threatened; they wondered if once again they were about to be passed over by those who’d just arrived. I shouldn’t exaggerate the schism. Because both communities share a host of challenges, from soaring high school dropout rates to inadequate health insurance, blacks and Latinos continue to find common cause in their politics. As frustrated as blacks may get whenever they pass a construction site in a black neighborhood and see nothing but Mexican workers, I rarely hear them blame the workers themselves; usually they reserve their wrath for the contractors who hire them. When pressed, many blacks will express a grudging admiration for Latino immigrants—for their strong work ethic and commitment to family, their willingness to start at the bottom and make the most of what little they have. Still, there’s no denying that many blacks share the same anxieties as many whites about the wave of illegal immigration flooding our Southern border—a sense that what’s happening now is fundamentally different from what has gone on before. Not all these fears are irrational. The number of immigrants added to the labor force every year is of a magnitude not seen in this country for over a century. If this huge influx of mostly low-skill workers provides some benefits to the economy as a whole—especially by keeping our workforce young, in contrast to an increasingly geriatric Europe and Japan—it also threatens to depress further the wages of blue-collar Americans and put strains on an already overburdened safety net. 
Other fears of native-born Americans are disturbingly familiar, echoing the xenophobia once directed at Italians, Irish, and Slavs fresh off the boat—fears that Latinos are inherently too different, in culture and in temperament, to assimilate fully into the American way of life; fears that, with the demographic changes now taking place, Latinos will wrest control away from those accustomed to wielding political power. For most Americans, though, concerns over illegal immigration go deeper than worries about economic displacement and are more subtle than simple racism. In the past, immigration occurred on America’s terms; the welcome mat could be extended selectively, on the basis of the immigrant’s skills or color or the needs of industry. The laborer, whether Chinese or Russian or Greek, found himself a stranger in a strange land, severed from his home country, subject to often harsh constraints, forced to adapt to rules not of his own making. Today it seems those terms no longer apply. Immigrants are entering as a result of a porous border rather than any systematic government policy; Mexico’s proximity, as well as the desperate poverty of so many of its people, suggests the possibility that border crossing cannot even be slowed, much less stopped. Satellites, calling cards, and wire transfers, as well as the sheer size of the burgeoning Latino market, make it easier for today’s immigrant to maintain linguistic and cultural ties to the land of his or her birth (the Spanish-language Univision now boasts the highest-rated newscast in Chicago). Native-born Americans suspect that it is they, and not the immigrant, who are being forced to adapt. In this way, the immigration debate comes to signify not a loss of jobs but a loss of sovereignty, just one more example—like September 11, avian flu, computer viruses, and factories moving to China—that America seems unable to control its own destiny. IT WAS IN this volatile atmosphere—with strong passions on both sides of the debate—that the U.S. Senate considered a comprehensive immigration reform bill in the spring of 2006. With hundreds of thousands of immigrants protesting in the streets and a group of self-proclaimed vigilantes called the Minutemen rushing to defend the Southern border, the political stakes were high for Democrats, Republicans, and the President. Under the leadership of Ted Kennedy and John McCain, the Senate crafted a compromise bill with three major components. The bill provided much tougher border security and, through an amendment I wrote with Chuck Grassley, made it significantly more difficult for employers to hire workers here illegally. The bill also recognized the difficulty of deporting twelve million undocumented immigrants and instead created a long, eleven-year process under which many of them could earn their citizenship. Finally, the bill included a guest worker program that would allow two hundred thousand foreign workers to enter the country for temporary employment. On balance, I thought the legislation was worth supporting. Still, the guest worker provision of the bill troubled me; it was essentially a sop to big business, a means for them to employ immigrants without granting them citizenship rights—indeed, a means for business to gain the benefits of outsourcing without having to locate their operations overseas. To address this problem, I succeeded in including language requiring that any job first be offered to U.S. 
workers, and that employers not undercut American wages by paying guest workers less than they would pay U.S. workers. The idea was to ensure that businesses turned to temporary foreign workers only when there was a labor shortage. It was plainly an amendment designed to help American workers, which is why all the unions vigorously supported it. But no sooner had the provision been included in the bill than some conservatives, both inside and outside of the Senate, began attacking me for supposedly “requiring that foreign workers get paid more than U.S. workers.” On the floor of the Senate one day, I caught up with one of my Republican colleagues who had leveled this charge at me. I explained that the bill would actually protect U.S. workers, since employers would have no incentive to hire guest workers if they had to pay the same wages they paid U.S. workers. The Republican colleague, who had been quite vocal in his opposition to any bill that would legalize the status of undocumented immigrants, shook his head. “My small business guys are still going to hire immigrants,” he said. “All your amendment does is make them pay more for their help.” “But why would they hire immigrants over U.S. workers if they cost the same?” I asked him. He smiled. “’Cause let’s face it, Barack. These Mexicans are just willing to work harder than Americans do.” That the opponents of the immigration bill could make such statements privately, while publicly pretending to stand up for American workers, indicates the degree of cynicism and hypocrisy that permeates the immigration debate. But with the public in a sour mood, their fears and anxieties fed daily by Lou Dobbs and talk radio hosts around the country, I can’t say I’m surprised that the compromise bill has been stalled in the House ever since it passed out of the Senate. And if I’m honest with myself, I must admit that I’m not entirely immune to such nativist sentiments. When I see Mexican flags waved at proimmigration demonstrations, I sometimes feel a flush of patriotic resentment. When I’m forced to use a translator to communicate with the guy fixing my car, I feel a certain frustration. Once, as the immigration debate began to heat up in the Capitol, a group of activists visited my office, asking that I sponsor a private relief bill that would legalize the status of thirty Mexican nationals who had been deported, leaving behind spouses or children with legal resident status. One of my staffers, Danny Sepulveda, a young man of Chilean descent, took the meeting, and explained to the group that although I was sympathetic to their plight and was one of the chief sponsors of the Senate immigration bill, I didn’t feel comfortable, as a matter of principle, sponsoring legislation that would select thirty people out of the millions in similar situations for a special dispensation. Some in the group became agitated; they suggested that I didn’t care about immigrant families and immigrant children, that I cared more about borders than about justice. One activist accused Danny of having forgotten where he came from—of not really being Latino. When I heard what had happened, I was both angry and frustrated. 
I wanted to call the group and explain that American citizenship is a privilege and not a right; that without meaningful borders and respect for the law, the very things that brought them to America, the opportunities and protections afforded those who live in this country, would surely erode; and that anyway, I didn’t put up with people abusing my staff— especially one who was championing their cause. It was Danny who talked me out of the call, sensibly suggesting that it might be counterproductive. Several weeks later, on a Saturday morning, I attended a naturalization workshop at St. Pius Church in Pilsen, sponsored by Congressman Luis Gutierrez, the Service Employees International Union, and several of the immigrants’ rights groups that had visited my office. About a thousand people had lined up outside the church, including young families, elderly couples, and women with strollers; inside, people sat silently in wooden pews, clutching the small American flags that the organizers had passed out, waiting to be called by one of the volunteers who would help them manage the start of what would be a years-long process to become citizens. As I wandered down the aisle, some people smiled and waved; others nodded tentatively as I offered my hand and introduced myself. I met a Mexican woman who spoke no English but whose son was in Iraq; I recognized a young Colombian man who worked as a valet at a local restaurant and learned that he was studying accounting at the local community college. At one point a young girl, seven or eight, came up to me, her parents standing behind her, and asked me for an autograph; she was studying government in school, she said, and would show it to her class. I asked her what her name was. She said her name was Cristina and that she was in the third grade. I told her parents they should be proud of her. And as I watched Cristina translate my words into Spanish for them, I was reminded that America has nothing to fear from these newcomers, that they have come here for the same reason that families came here 150 years ago—all those who fled Europe’s famines and wars and unyielding hierarchies, all those who may not have had the right legal documents or connections or unique skills to offer but who carried with them a hope for a better life. We have a right and duty to protect our borders. We can insist to those already here that with citizenship come obligations—to a common language, common loyalties, a common purpose, a common destiny. But ultimately the danger to our way of life is not that we will be overrun by those who do not look like us or do not yet speak our language. The danger will come if we fail to recognize the humanity of Cristina and her family—if we withhold from them the rights and opportunities that we take for granted, and tolerate the hypocrisy of a servant class in our midst; or more broadly, if we stand idly by as America continues to become increasingly unequal, an inequality that tracks racial lines and therefore feeds racial strife and which, as the country becomes more black and brown, neither our democracy nor our economy can long withstand. That’s not the future I want for Cristina, I said to myself as I watched her and her family wave good-bye. That’s not the future I want for my daughters. Their America will be more dizzying in its diversity, its culture more polyglot. My daughters will learn Spanish and be the better for it. Cristina will learn about Rosa Parks and understand that the life of a black seamstress speaks to her own. 
The issues my girls and Cristina confront may lack the stark moral clarity of a segregated bus, but in one form or another their generation will surely be tested—just as Mrs. Parks was tested and the Freedom Riders were tested, just as we are all tested—by those voices that would divide us and have us turn on each other. And when they are tested in that way, I hope Cristina and my daughters will have all read about the history of this country and will recognize they have been given something precious. America is big enough to accommodate all their dreams. Chapter Eight The World Beyond Our Borders INDONESIA IS A nation of islands—more than seventeen thousand in all, spread along the equator between the Indian and Pacific Oceans, between Australia and the South China Sea. Most Indonesians are of Malay stock and live on the larger islands of Java, Sumatra, Kalimantan, Sulawesi, and Bali. On the far eastern islands like Ambon and the Indonesian portion of New Guinea the people are, in varying degrees, of Melanesian ancestry. Indonesia’s climate is tropical, and its rain forests were once teeming with exotic species like the orangutan and the Sumatran tiger. Today, those rain forests are rapidly dwindling, victim to logging, mining, and the cultivation of rice, tea, coffee, and palm oil. Deprived of their natural habitat, orangutans are now an endangered species; no more than a few hundred Sumatran tigers remain in the wild. With more than 240 million people, Indonesia’s population ranks fourth in the world, behind China, India, and the United States. More than seven hundred ethnic groups reside within the country’s borders, and more than 742 languages are spoken there. Almost 90 percent of Indonesia’s population practice Islam, making it the world’s largest Muslim nation. Indonesia is OPEC’s only Asian member, although as a consequence of aging infrastructure, depleted reserves, and high domestic consumption it is now a net importer of crude oil. The national language is Bahasa Indonesia. The capital is Jakarta. The currency is the rupiah. Most Americans can’t locate Indonesia on a map. This fact is puzzling to Indonesians, since for the past sixty years the fate of their nation has been directly tied to U.S. foreign policy. Ruled by a succession of sultanates and often-splintering kingdoms for most of its history, the archipelago became a Dutch colony—the Dutch East Indies—in the 1600s, a status that would last for more than three centuries. But in the lead-up to World War II, the Dutch East Indies’ ample oil reserves became a prime target of Japanese expansion; having thrown its lot in with the Axis powers and facing a U.S.-imposed oil embargo, Japan needed fuel for its military and industry. After the attack on Pearl Harbor, Japan moved swiftly to take over the Dutch colony, an occupation that would last for the duration of the war. With the Japanese surrender in 1945, a budding Indonesian nationalist movement declared the country’s independence. The Dutch had other ideas, and attempted to reclaim their former territory. Four bloody years of war ensued. Eventually the Dutch bowed to mounting international pressure (the U.S. government, already concerned with the spread of communism under the banner of anticolonialism, threatened the Netherlands with a cutoff of Marshall Plan funds) and recognized Indonesia’s sovereignty. The principal leader of the independence movement, a charismatic, flamboyant figure named Sukarno, became Indonesia’s first president. 
Sukarno proved to be a major disappointment to Washington. Along with Nehru of India and Nasser of Egypt, he helped found the nonaligned movement, an effort by nations newly liberated from colonial rule to navigate an independent path between the West and the Soviet bloc. Indonesia’s Communist Party, although never formally in power, grew in size and influence. Sukarno himself ramped up the anti-Western rhetoric, nationalizing key industries, rejecting U.S. aid, and strengthening ties with the Soviets and China. With U.S. forces knee-deep in Vietnam and the domino theory still a central tenet of U.S. foreign policy, the CIA began providing covert support to various insurgencies inside Indonesia, and cultivated close links with Indonesia’s military officers, many of whom had been trained in the United States. In 1965, under the leadership of General Suharto, the military moved against Sukarno, and under emergency powers began a massive purge of communists and their sympathizers. According to estimates, between 500,000 and one million people were slaughtered during the purge, with 750,000 others imprisoned or forced into exile. It was two years after the purge began, in 1967, the same year that Suharto assumed the presidency, that my mother and I arrived in Jakarta, a consequence of her remarriage to an Indonesian student whom she’d met at the University of Hawaii. I was six at the time, my mother twenty-four. In later years my mother would insist that had she known what had transpired in the preceding months, we never would have made the trip. But she didn’t know—the full story of the coup and the purge was slow to appear in American newspapers. Indonesians didn’t talk about it either. My stepfather, who had seen his student visa revoked while still in Hawaii and had been conscripted into the Indonesian army a few months before our arrival, refused to talk politics with my mother, advising her that some things were best forgotten. And in fact, forgetting the past was easy to do in Indonesia. Jakarta was still a sleepy backwater in those days, with few buildings over four or five stories high, cycle rickshaws outnumbering cars, the city center and wealthier sections of town—with their colonial elegance and lush, well-tended lawns—quickly giving way to clots of small villages with unpaved roads and open sewers, dusty markets, and shanties of mud and brick and plywood and corrugated iron that tumbled down gentle banks to murky rivers where families bathed and washed laundry like pilgrims in the Ganges. Our family was not well off in those early years; the Indonesian army didn’t pay its lieutenants much. We lived in a modest house on the outskirts of town, without air-conditioning, refrigeration, or flush toilets. We had no car—my stepfather rode a motorcycle, while my mother took the local jitney service every morning to the U.S. embassy, where she worked as an English teacher. Without the money to go to the international school that most expatriate children attended, I went to local Indonesian schools and ran the streets with the children of farmers, servants, tailors, and clerks. As a boy of seven or eight, none of this concerned me much. I remember those years as a joyous time, full of adventure and mystery—days of chasing down chickens and running from water buffalo, nights of shadow puppets and ghost stories and street vendors bringing delectable sweets to our door. As it was, I knew that relative to our neighbors we were doing fine—unlike many, we always had enough to eat.
And perhaps more than that, I understood, even at a young age, that my family’s status was determined not only by our wealth but by our ties to the West. My mother might scowl at the attitudes she heard from other Americans in Jakarta, their condescension toward Indonesians, their unwillingness to learn anything about the country that was hosting them—but given the exchange rate, she was glad to be getting paid in dollars rather than the rupiahs her Indonesian colleagues at the embassy were paid. We might live as Indonesians lived—but every so often my mother would take me to the American Club, where I could jump in the pool and watch cartoons and sip Coca-Cola to my heart’s content. Sometimes, when my Indonesian friends came to our house, I would show them books of photographs, of Disneyland or the Empire State Building, that my grandmother had sent me; sometimes we would thumb through the Sears Roebuck catalog and marvel at the treasures on display. All this, I knew, was part of my heritage and set me apart, for my mother and I were citizens of the United States, beneficiaries of its power, safe and secure under the blanket of its protection. The scope of that power was hard to miss. The U.S. military conducted joint exercises with the Indonesian military and training programs for its officers. President Suharto turned to a cadre of American economists to design Indonesia’s development plan, based on free-market principles and foreign investment. American development consultants formed a steady line outside government ministries, helping to manage the massive influx of foreign assistance from the U.S. Agency for International Development and the World Bank. And although corruption permeated every level of government—even the smallest interaction with a policeman or bureaucrat involved a bribe, and just about every commodity or product coming in and out of the country, from oil to wheat to automobiles, went through companies controlled by the president, his family, or members of the ruling junta—enough of the oil wealth and foreign aid was plowed back into schools, roads, and other infrastructure that Indonesia’s general population saw its living standards rise dramatically; between 1967 and 1997, per capita income would go from $50 to $4,600 a year. As far as the United States was concerned, Indonesia had become a model of stability, a reliable supplier of raw materials and importer of Western goods, a stalwart ally and bulwark against communism. I would stay in Indonesia long enough to see some of this newfound prosperity firsthand. Released from the army, my stepfather began working for an American oil company. We moved to a bigger house and got a car and a driver, a refrigerator, and a television set. But in 1971 my mother—concerned for my education and perhaps anticipating her own growing distance from my stepfather—sent me to live with my grandparents in Hawaii. A year later she and my sister would join me. My mother’s ties to Indonesia would never diminish; for the next twenty years she would travel back and forth, working for international agencies for six or twelve months at a time as a specialist in women’s development issues, designing programs to help village women start their own businesses or bring their produce to market. But while during my teenage years I would return to Indonesia three or four times on short visits, my life and attention gradually turned elsewhere. 
What I know of Indonesia’s subsequent history, then, I know mainly through books, newspapers, and the stories my mother told me. For twenty-five years, in fits and starts, Indonesia’s economy continued to grow. Jakarta became a metropolis of almost nine million souls, with skyscrapers, slums, smog, and nightmare traffic. Men and women left the countryside to join the ranks of wage labor in manufacturing plants built by foreign investment, making sneakers for Nike and shirts for the Gap. Bali became the resort of choice for surfers and rock stars, with five-star hotels, Internet connections, and a Kentucky Fried Chicken franchise. By the early nineties, Indonesia was considered an “Asian tiger,” the next great success story of a globalizing world. Even the darker aspects of Indonesian life—its politics and human rights record— showed signs of improvement. When it came to sheer brutality, the post-1967 Suharto regime never reached the levels of Iraq under Saddam Hussein; with his subdued, placid style, the Indonesian president would never attract the attention that more demonstrative strongmen like Pinochet or the Shah of Iran did. By any measure, though, Suharto’s rule was harshly repressive. Arrests and torture of dissidents were common, a free press nonexistent, elections a mere formality. When ethnically based secessionist movements sprang up in areas like Aceh, the army targeted not just guerrillas but civilians for swift retribution—murder, rape, villages set afire. And throughout the seventies and eighties, all this was done with the knowledge, if not outright approval, of U.S. administrations. But with the end of the Cold War, Washington’s attitudes began to change. The State Department began pressuring Indonesia to curb its human rights abuses. In 1992, after Indonesian military units massacred peaceful demonstrators in Dili, East Timor, Congress terminated military aid to the Indonesian government. By 1996, Indonesian reformists had begun taking to the streets, openly talking about corruption in high offices, the military’s excesses, and the need for free and fair elections. Then, in 1997, the bottom fell out. A run on currencies and securities throughout Asia engulfed an Indonesian economy already corroded by decades of corruption. The rupiah’s value fell 85 percent in a matter of months. Indonesian companies that had borrowed in dollars saw their balance sheets collapse. In exchange for a $43 billion bailout, the Western-dominated International Monetary Fund, or IMF, insisted on a series of austerity measures (cutting government subsidies, raising interest rates) that would lead the price of such staples as rice and kerosene to nearly double. By the time the crisis was over, Indonesia’s economy had contracted almost 14 percent. Riots and demonstrations grew so severe that Suharto was finally forced to resign, and in 1998 the country’s first free elections were held, with some forty-eight parties vying for seats and some ninety-three million people casting their votes. On the surface, at least, Indonesia has survived the twin shocks of financial meltdown and democratization. The stock market is booming, and a second national election went off without major incident, leading to a peaceful transfer of power. If corruption remains endemic and the military remains a potent force, there’s been an explosion of independent newspapers and political parties to channel discontent. On the other hand, democracy hasn’t brought a return to prosperity. 
Per capita income is nearly 22 percent less than it was in 1997. The gap between rich and poor, always cavernous, appears to have worsened. The average Indonesian’s sense of deprivation is amplified by the Internet and satellite TV, which beam in images of the unattainable riches of London, New York, Hong Kong, and Paris in exquisite detail. And anti-American sentiment, almost nonexistent during the Suharto years, is now widespread, thanks in part to perceptions that New York speculators and the IMF purposely triggered the Asian financial crisis. In a 2003 poll, most Indonesians had a higher opinion of Osama bin Laden than they did of George W. Bush. All of which underscores perhaps the most profound shift in Indonesia—the growth of militant, fundamentalist Islam in the country. Traditionally, Indonesians practiced a tolerant, almost syncretic brand of the faith, infused with the Buddhist, Hindu, and animist traditions of earlier periods. Under the watchful eye of an explicitly secular Suharto government, alcohol was permitted, non-Muslims practiced their faith free from persecution, and women—sporting skirts or sarongs as they rode buses or scooters on the way to work—possessed all the rights that men possessed. Today, Islamic parties make up one of the largest political blocs, with many calling for the imposition of sharia, or Islamic law. Seeded by funds from the Middle East, Wahhabist clerics, schools, and mosques now dot the countryside. Many Indonesian women have adopted the head coverings so familiar in the Muslim countries of North Africa and the Persian Gulf; Islamic militants and self-proclaimed “vice squads” have attacked churches, nightclubs, casinos, and brothels. In 2002, an explosion in a Bali nightclub killed more than two hundred people; similar suicide bombings followed in Jakarta in 2004 and Bali in 2005. Members of Jemaah Islamiah, a militant Islamic organization with links to Al Qaeda, were tried for the bombings; while three of those connected to the bombings received death sentences, the spiritual leader of the group, Abu Bakar Bashir, was released after a twenty-six-month prison term. It was on a beach just a few miles from the site of those bombings that I stayed the last time I visited Bali. When I think of that island, and all of Indonesia, I’m haunted by memories—the feel of packed mud under bare feet as I wander through paddy fields; the sight of day breaking behind volcanic peaks; the muezzin’s call at night and the smell of wood smoke; the dickering at the fruit stands alongside the road; the frenzied sound of a gamelan orchestra, the musicians’ faces lit by fire. I would like to take Michelle and the girls to share that piece of my life, to climb the thousand-year-old Hindu ruins of Prambanan or swim in a river high in Balinese hills. But my plans for such a trip keep getting delayed. I’m chronically busy, and traveling with young children is always difficult. And, too, perhaps I am worried about what I will find there—that the land of my childhood will no longer match my memories. As much as the world has shrunk, with its direct flights and cell phone coverage and CNN and Internet cafés, Indonesia feels more distant now than it did thirty years ago. I fear it’s becoming a land of strangers. IN THE FIELD of international affairs, it’s dangerous to extrapolate from the experiences of a single country. In its history, geography, culture, and conflicts, each nation is unique.
And yet in many ways Indonesia serves as a useful metaphor for the world beyond our borders—a world in which globalization and sectarianism, poverty and plenty, modernity and antiquity constantly collide. Indonesia also provides a handy record of U.S. foreign policy over the past fifty years. In broad outline at least, it’s all there: our role in liberating former colonies and creating international institutions to help manage the post–World War II order; our tendency to view nations and conflicts through the prism of the Cold War; our tireless promotion of American-style capitalism and multinational corporations; the tolerance and occasional encouragement of tyranny, corruption, and environmental degradation when it served our interests; our optimism once the Cold War ended that Big Macs and the Internet would lead to the end of historical conflicts; the growing economic power of Asia and the growing resentment of the United States as the world’s sole superpower; the realization that in the short term, at least, democratization might lay bare, rather than alleviate, ethnic hatreds and religious divisions—and that the wonders of globalization might also facilitate economic volatility, the spread of pandemics, and terrorism. In other words, our record is mixed—not just in Indonesia but across the globe. At times, American foreign policy has been farsighted, simultaneously serving our national interests, our ideals, and the interests of other nations. At other times American policies have been misguided, based on false assumptions that ignore the legitimate aspirations of other peoples, undermine our own credibility, and make for a more dangerous world. Such ambiguity shouldn’t be surprising, for American foreign policy has always been a jumble of warring impulses. In the earliest days of the Republic, a policy of isolationism often prevailed—a wariness of foreign intrigues that befitted a nation just emerging from a war of independence. “Why,” George Washington asked in his famous Farewell Address, “by interweaving our destiny with that of any part of Europe, entangle our peace and prosperity in the toils of European ambition, rivalship, interest, humor or caprice?” Washington’s view was reinforced by what he called America’s “detached and distant situation,” a geographic separation that would permit the new nation to “defy material injury from external annoyance.” Moreover, while America’s revolutionary origins and republican form of government might make it sympathetic toward those seeking freedom elsewhere, America’s early leaders cautioned against idealistic attempts to export our way of life; according to John Quincy Adams, America should not go “abroad in search of monsters to destroy” nor “become the dictatress of the world.” Providence had charged America with the task of making a new world, not reforming the old; protected by an ocean and with the bounty of a continent, America could best serve the cause of freedom by concentrating on its own development, becoming a beacon of hope for other nations and people around the globe. But if suspicion of foreign entanglements is stamped into our DNA, then so is the impulse to expand—geographically, commercially, and ideologically. Thomas Jefferson expressed early on the inevitability of expansion beyond the boundaries of the original thirteen states, and his timetable for such expansion was greatly accelerated with the Louisiana Purchase and the Lewis and Clark expedition. The same John Quincy Adams who warned against U.S. 
adventurism abroad became a tireless advocate of continental expansion and served as the chief architect of the Monroe Doctrine—a warning to European powers to keep out of the Western Hemisphere. As American soldiers and settlers moved steadily west and southwest, successive administrations described the annexation of territory in terms of “manifest destiny”—the conviction that such expansion was preordained, part of God’s plan to extend what Andrew Jackson called “the area of freedom” across the continent. Of course, manifest destiny also meant bloody and violent conquest—of Native American tribes forcibly removed from their lands and of the Mexican army defending its territory. It was a conquest that, like slavery, contradicted America’s founding principles and tended to be justified in explicitly racist terms, a conquest that American mythology has always had difficulty fully absorbing but that other countries recognized for what it was—an exercise in raw power. With the end of the Civil War and the consolidation of what’s now the continental United States, that power could not be denied. Intent on expanding markets for its goods, securing raw materials for its industry, and keeping sea lanes open for its commerce, the nation turned its attention overseas. Hawaii was annexed, giving America a foothold in the Pacific. The Spanish-American War delivered Puerto Rico, Guam, and the Philippines into U.S. control; when some members of the Senate objected to the military occupation of an archipelago seven thousand miles away—an occupation that would involve thousands of U.S. troops crushing a Philippine independence movement—one senator argued that the acquisition would provide the United States with access to the China market and mean “a vast trade and wealth and power.” America would never pursue the systematic colonization practiced by European nations, but it shed all inhibitions about meddling in the affairs of countries it deemed strategically important. Theodore Roosevelt, for example, added a corollary to the Monroe Doctrine, declaring that the United States would intervene in any Latin American or Caribbean country whose government it deemed not to America’s liking. “The United States of America has not the option as to whether it will or it will not play a great part in the world,” Roosevelt would argue. “It must play a great part. All that it can decide is whether it will play that part well or badly.” By the start of the twentieth century, then, the motives that drove U.S. foreign policy seemed barely distinguishable from those of the other great powers, driven by realpolitik and commercial interests. Isolationist sentiment in the population at large remained strong, particularly when it came to conflicts in Europe, and when vital U.S. interests did not seem directly at stake. But technology and trade were shrinking the globe; determining which interests were vital and which ones were not became increasingly difficult. During World War I, Woodrow Wilson avoided American involvement until the repeated sinking of American vessels by German U-boats and the imminent collapse of the European continent made neutrality untenable. When the war was over, America had emerged as the world’s dominant power—but a power whose prosperity Wilson now understood to be linked to peace and prosperity in faraway lands. It was in an effort to address this new reality that Wilson sought to reinterpret the idea of America’s manifest destiny. 
Making “the world safe for democracy” didn’t just involve winning a war, he argued; it was in America’s interest to encourage the self-determination of all peoples and provide the world a legal framework that could help avoid future conflicts. As part of the Treaty of Versailles, which detailed the terms of German surrender, Wilson proposed a League of Nations to mediate conflicts between nations, along with an international court and a set of international laws that would bind not just the weak but also the strong. “This is the time of all others when Democracy should prove its purity and its spiritual power to prevail,” Wilson said. “It is surely the manifest destiny of the United States to lead in the attempt to make this spirit prevail.” Wilson’s proposals were initially greeted with enthusiasm in the United States and around the world. The U.S. Senate, however, was less impressed. Republican Senate Leader Henry Cabot Lodge considered the League of Nations—and the very concept of international law—as an encroachment on American sovereignty, a foolish constraint on America’s ability to impose its will around the world. Aided by traditional isolationists in both parties (many of whom had opposed American entry into World War I), as well as Wilson’s stubborn unwillingness to compromise, the Senate refused to ratify U.S. membership in the League. For the next twenty years, America turned resolutely inward—reducing its army and navy, refusing to join the World Court, standing idly by as Italy, Japan, and Nazi Germany built up their military machines. The Senate became a hotbed of isolationism, passing a Neutrality Act that prevented the United States from lending assistance to countries invaded by the Axis powers, and repeatedly ignoring the President’s appeals as Hitler’s armies marched across Europe. Not until the bombing of Pearl Harbor would America realize its terrible mistake. “There is no such thing as security for any nation—or any individual—in a world ruled by the principles of gangsterism,” FDR would say in his national address after the attack. “We cannot measure our safety in terms of miles on any map any more.” In the aftermath of World War II, the United States would have a chance to apply these lessons to its foreign policy. With Europe and Japan in ruins, the Soviet Union bled white by its battles on the Eastern Front but already signaling its intentions to spread its brand of totalitarian communism as far as it could, America faced a choice. There were those on the right who argued that only a unilateral foreign policy and an immediate invasion of the Soviet Union could disable the emerging communist threat. And although isolationism of the sort that prevailed in the thirties was now thoroughly discredited, there were those on the left who downplayed Soviet aggression, arguing that given Soviet losses and the country’s critical role in the Allied victory, Stalin should be accommodated. America took neither path. Instead, the postwar leadership of President Truman, Dean Acheson, George Marshall, and George Kennan crafted the architecture of a new, postwar order that married Wilson’s idealism to hardheaded realism, an acceptance of America’s power with a humility regarding America’s ability to control events around the world. Yes, these men argued, the world is a dangerous place, and the Soviet threat is real; America needed to maintain its military dominance and be prepared to use force in defense of its interests across the globe.
But even the power of the United States was finite—and because the battle against communism was also a battle of ideas, a test of what system might best serve the hopes and dreams of billions of people around the world, military might alone could not ensure America’s long-term prosperity or security. What America needed, then, were stable allies—allies that shared the ideals of freedom, democracy, and the rule of law, and that saw themselves as having a stake in a market-based economic system. Such alliances, both military and economic, entered into freely and maintained by mutual consent, would be more lasting—and stir less resentment—than any collection of vassal states American imperialism might secure. Likewise, it was in America’s interest to work with other countries to build up international institutions and promote international norms. Not because of a naive assumption that international laws and treaties alone would end conflicts among nations or eliminate the need for American military action, but because the more international norms were reinforced and the more America signaled a willingness to show restraint in the exercise of its power, the fewer the number of conflicts that would arise—and the more legitimate our actions would appear in the eyes of the world when we did have to move militarily. In less than a decade, the infrastructure of a new world order was in place. There was a U.S. policy of containment with respect to communist expansion, backed not just by U.S. troops but also by security agreements with NATO and Japan; the Marshall Plan to rebuild war-shattered economies; the Bretton Woods agreement to provide stability to the world’s financial markets and the General Agreement on Tariffs and Trade to establish rules governing world commerce; U.S. support for the independence of former European colonies; the IMF and World Bank to help integrate these newly independent nations into the world economy; and the United Nations to provide a forum for collective security and international cooperation. Sixty years later, we can see the results of this massive postwar undertaking: a successful outcome to the Cold War, an avoidance of nuclear catastrophe, the effective end of conflict between the world’s great military powers, and an era of unprecedented economic growth at home and abroad. It’s a remarkable achievement, perhaps the Greatest Generation’s greatest gift to us after the victory over fascism. But like any system built by man, it had its flaws and contradictions; it could fall victim to the distortions of politics, the sins of hubris, the corrupting effects of fear. Because of the enormity of the Soviet threat, and the shock of communist takeovers in China and North Korea, American policy makers came to view nationalist movements, ethnic struggles, reform efforts, or left-leaning policies anywhere in the world through the lens of the Cold War—potential threats they felt outweighed our professed commitment to freedom and democracy. For decades we would tolerate and even aid thieves like Mobutu, thugs like Noriega, so long as they opposed communism. Occasionally U.S. covert operations would engineer the removal of democratically elected leaders in countries like Iran—with seismic repercussions that haunt us to this day. America’s policy of containment also involved an enormous military buildup, matching and then exceeding the Soviet and Chinese arsenals.
Over time, the “iron triangle” of the Pentagon, defense contractors, and congressmen with large defense expenditures in their districts amassed great power in shaping U.S. foreign policy. And although the threat of nuclear war would preclude direct military confrontation with our superpower rivals, U.S. policy makers increasingly viewed problems elsewhere in the world through a military lens rather than a diplomatic one. Most important, the postwar system over time suffered from too much politics and not enough deliberation and domestic consensus building. One of America’s strengths immediately following the war was a degree of domestic consensus surrounding foreign policy. There might have been fierce differences between Republicans and Democrats, but politics usually ended at the water’s edge; professionals, whether in the White House, the Pentagon, the State Department, or the CIA, were expected to make decisions based on facts and sound judgment, not ideology or electioneering. Moreover, that consensus extended to the public at large; programs like the Marshall Plan, which involved a massive investment of U.S. funds, could not have gone forward without the American people’s basic trust in their government, as well as a reciprocal faith on the part of government officials that the American people could be trusted with the facts that went into decisions that spent their tax dollars or sent their sons to war. As the Cold War wore on, the key elements in this consensus began to erode. Politicians discovered that they could get votes by being tougher on communism than their opponents. Democrats were assailed for “losing China.” McCarthyism destroyed careers and crushed dissent. Kennedy would blame Republicans for a “missile gap” that didn’t exist on his way to beating Nixon, who himself had made a career of Red-baiting his opponents. Presidents Eisenhower, Kennedy, and Johnson would all find their judgment clouded by fear that they would be tagged as “soft on communism.” The Cold War techniques of secrecy, snooping, and misinformation, used against foreign governments and foreign populations, became tools of domestic politics, a means to harass critics, build support for questionable policies, or cover up blunders. The very ideals that we had promised to export overseas were being betrayed at home. All these trends came to a head in Vietnam. The disastrous consequences of that conflict—for our credibility and prestige abroad, for our armed forces (which would take a generation to recover), and most of all for those who fought—have been amply documented. But perhaps the biggest casualty of that war was the bond of trust between the American people and their government—and between Americans themselves. As a consequence of a more aggressive press corps and the images of body bags flooding into living rooms, Americans began to realize that the best and the brightest in Washington didn’t always know what they were doing—and didn’t always tell the truth. Increasingly, many on the left voiced opposition not only to the Vietnam War but also to the broader aims of American foreign policy. In their view, President Johnson, General Westmoreland, the CIA, the “military-industrial complex,” and international institutions like the World Bank were all manifestations of American arrogance, jingoism, racism, capitalism, and imperialism.
Those on the right responded in kind, laying responsibility not only for the loss of Vietnam but also for the decline of America’s standing in the world squarely on the “blame America first” crowd—the protesters, the hippies, Jane Fonda, the Ivy League intellectuals and liberal media who denigrated patriotism, embraced a relativistic worldview, and undermined American resolve to confront godless communism. Admittedly, these were caricatures, promoted by activists and political consultants. Many Americans remained somewhere in the middle, still supportive of America’s efforts to defeat communism but skeptical of U.S. policies that might involve large numbers of American casualties. Throughout the seventies and eighties, one could find Democratic hawks and Republican doves; in Congress, there were men like Mark Hatfield of Oregon and Sam Nunn of Georgia who sought to perpetuate the tradition of a bipartisan foreign policy. But the caricatures were what shaped public impressions during election time, as Republicans increasingly portrayed Democrats as weak on defense, and those suspicious of military and covert action abroad increasingly made the Democratic Party their political home. It was against this backdrop—an era of division rather than an era of consensus—that most Americans alive today formed whatever views they may have on foreign policy. These were the years of Nixon and Kissinger, whose foreign policies were tactically brilliant but were overshadowed by domestic policies and a Cambodian bombing campaign that were morally rudderless. They were the years of Jimmy Carter, a Democrat who—with his emphasis on human rights—seemed prepared to once again align moral concerns with a strong defense, until oil shocks, the humiliation of the Iranian hostage crisis, and the Soviet Union’s invasion of Afghanistan made him seem naive and ineffective. Looming perhaps largest of all was Ronald Reagan, whose clarity about communism seemed matched by his blindness regarding other sources of misery in the world. I personally came of age during the Reagan presidency—I was studying international affairs at Columbia, and later working as a community organizer in Chicago—and like many Democrats in those days I bemoaned the effect of Reagan’s policies toward the Third World: his administration’s support for the apartheid regime of South Africa, the funding of El Salvador’s death squads, the invasion of tiny, hapless Grenada. The more I studied nuclear arms policy, the more I found Star Wars to be ill conceived; the chasm between Reagan’s soaring rhetoric and the tawdry Iran-Contra deal left me speechless. But at times, in arguments with some of my friends on the left, I would find myself in the curious position of defending aspects of Reagan’s worldview. I didn’t understand why, for example, progressives should be less concerned about oppression behind the Iron Curtain than they were about brutality in Chile. I couldn’t be persuaded that U.S. multinationals and international terms of trade were single-handedly responsible for poverty around the world; nobody forced corrupt leaders in Third World countries to steal from their people. I might have arguments with the size of Reagan’s military buildup, but given the Soviet invasion of Afghanistan, staying ahead of the Soviets militarily seemed a sensible thing to do. 
Pride in our country, respect for our armed services, a healthy appreciation for the dangers beyond our borders, an insistence that there was no easy equivalence between East and West—in all this I had no quarrel with Reagan. And when the Berlin Wall came tumbling down, I had to give the old man his due, even if I never gave him my vote. Many people—including many Democrats—did give Reagan their vote, leading Republicans to argue that his presidency restored America’s foreign policy consensus. Of course, that consensus was never really tested; Reagan’s war against communism was mainly carried out through proxies and deficit spending, not the deployment of U.S. troops. As it was, the end of the Cold War made Reagan’s formula seem ill suited to a new world. George H. W. Bush’s return to a more traditional, “realist” foreign policy would result in a steady management of the Soviet Union’s dissolution and an able handling of the first Gulf War. But with the American public’s attention focused on the domestic economy, his skill in building international coalitions or judiciously projecting American power did nothing to salvage his presidency. By the time Bill Clinton came into office, conventional wisdom suggested that America’s post–Cold War foreign policy would be more a matter of trade than tanks, protecting American copyrights rather than American lives. Clinton himself understood that globalization involved not only new economic challenges but also new security challenges. In addition to promoting free trade and bolstering the international financial system, his administration would work to end long-festering conflicts in the Balkans and Northern Ireland and advance democratization in Eastern Europe, Latin America, Africa, and the former Soviet Union. But in the eyes of the public, at least, foreign policy in the nineties lacked any overarching theme or grand imperatives. U.S. military action in particular seemed entirely a matter of choice, not necessity—the product of our desire to slap down rogue states, perhaps; or a function of humanitarian calculations regarding the moral obligations we owed to Somalis, Haitians, Bosnians, or other unlucky souls. Then came September 11—and Americans felt their world turned upside down. IN JANUARY 2006, I boarded a C-130 military cargo plane and took off for my first trip into Iraq. Two of my colleagues on the trip—Senator Evan Bayh of Indiana and Congressman Harold Ford, Jr. of Tennessee—had made the trip before, and they warned me that the landings in Baghdad could be a bit uncomfortable: To evade potential hostile fire, military flights in and out of Iraq’s capital city engaged in a series of sometimes stomach-turning maneuvers. As our plane cruised through the hazy morning, though, it was hard to feel concerned. Strapped into canvas seats, most of my fellow passengers had fallen asleep, their heads bobbing against the orange webbing that ran down the center of the fuselage. One of the crew appeared to be playing a video game; another placidly thumbed through our flight plans. It had been four and a half years since I’d first heard reports of a plane hitting the World Trade Center. I had been in Chicago at the time, driving to a state legislative hearing downtown. The reports on my car radio were sketchy, and I assumed that there must have been an accident, a small prop plane perhaps veering off course. By the time I arrived at my meeting, the second plane had already hit, and we were told to evacuate the State of Illinois Building. 
Up and down the streets, people gathered, staring at the sky and at the Sears Tower. Later, in my law office, a group of us sat motionless as the nightmare images unfolded across the TV screen—a plane, dark as a shadow, vanishing into glass and steel; men and women clinging to windowsills, then letting go; the shouts and sobs from below and finally the rolling clouds of dust blotting out the sun. I spent the next several weeks as most Americans did—calling friends in New York and D.C., sending donations, listening to the President’s speech, mourning the dead. And for me, as for most of us, the effect of September 11 felt profoundly personal. It wasn’t just the magnitude of the destruction that affected me, or the memories of the five years I’d spent in New York—memories of streets and sights now reduced to rubble. Rather, it was the intimacy of imagining those ordinary acts that 9/11’s victims must have performed in the hours before they were killed, the daily routines that constitute life in our modern world—the boarding of a plane, the jostling as we exit a commuter train, grabbing coffee and the morning paper at a newsstand, making small talk on the elevator. For most Americans, such routines represented a victory of order over chaos, the concrete expression of our belief that so long as we exercised, wore seat belts, had a job with benefits, and avoided certain neighborhoods, our safety was ensured, our families protected. Now chaos had come to our doorstep. As a consequence, we would have to act differently, understand the world differently. We would have to answer the call of a nation. Within a week of the attacks, I watched the Senate vote 98–0 and the House vote 420–1 to give the President the authority to “use all necessary and appropriate force against those nations, organizations or persons” behind the attacks. Interest in the armed services and applications to join the CIA soared, as young people across America resolved to serve their country. Nor were we alone. In Paris, Le Monde ran the banner headline “Nous sommes tous Américains” (“We are all Americans”). In Cairo, local mosques offered prayers of sympathy. For the first time since its founding in 1949, NATO invoked Article 5 of its charter, agreeing that the armed attack on one of its members “shall be considered an attack against them all.” With justice at our backs and the world by our side, we drove the Taliban government out of Kabul in just over a month; Al Qaeda operatives fled or were captured or killed. It was a good start by the Administration, I thought—steady, measured, and accomplished with minimal casualties (only later would we discover the degree to which our failure to put sufficient military pressure on Al Qaeda forces at Tora Bora may have led to bin Laden’s escape). And so, along with the rest of the world, I waited with anticipation for what I assumed would follow: the enunciation of a U.S. foreign policy for the twenty-first century, one that would not only adapt our military planning, intelligence operations, and homeland defenses to the threat of terrorist networks but build a new international consensus around the challenges of transnational threats. This new blueprint never arrived. Instead what we got was an assortment of outdated policies from eras gone by, dusted off, slapped together, and with new labels affixed. 
Reagan’s “Evil Empire” was now “the Axis of Evil.” Theodore Roosevelt’s version of the Monroe Doctrine—the notion that we could preemptively remove governments not to our liking—was now the Bush Doctrine, only extended beyond the Western Hemisphere to span the globe. Manifest destiny was back in fashion; all that was needed, according to Bush, was American firepower, American resolve, and a “coalition of the willing.” Perhaps worst of all, the Bush Administration resuscitated a brand of politics not seen since the end of the Cold War. As the ouster of Saddam Hussein became the test case for Bush’s doctrine of preventive war, those who questioned the Administration’s rationale for invasion were accused of being “soft on terrorism” or “un-American.” Instead of an honest accounting of this military campaign’s pros and cons, the Administration initiated a public relations offensive: shading intelligence reports to support its case, grossly understating both the costs and the manpower requirements of military action, raising the specter of mushroom clouds. The PR strategy worked; by the fall of 2002, a majority of Americans were convinced that Saddam Hussein possessed weapons of mass destruction, and at least 66 percent believed (falsely) that the Iraqi leader had been personally involved in the 9/11 attacks. Support for an invasion of Iraq—and Bush’s approval rating—hovered around 60 percent. With an eye on the midterm elections, Republicans stepped up the attacks and pushed for a vote authorizing the use of force against Saddam Hussein. And on October 11, 2002, twenty-eight of the Senate’s fifty Democrats joined all but one Republican in handing to Bush the power he wanted. I was disappointed in that vote, although sympathetic to the pressures Democrats were under. I had felt some of those same pressures myself. By the fall of 2002, I had already decided to run for the U.S. Senate and knew that possible war with Iraq would loom large in any campaign. When a group of Chicago activists asked if I would speak at a large antiwar rally planned for October, a number of my friends warned me against taking so public a position on such a volatile issue. Not only was the idea of an invasion increasingly popular, but on the merits I didn’t consider the case against war to be cut-and-dried. Like most analysts, I assumed that Saddam had chemical and biological weapons and coveted nuclear arms. I believed that he had repeatedly flouted UN resolutions and weapons inspectors and that such behavior had to have consequences. That Saddam butchered his own people was undisputed; I had no doubt that the world, and the Iraqi people, would be better off without him. What I sensed, though, was that the threat Saddam posed was not imminent, the Administration’s rationales for war were flimsy and ideologically driven, and the war in Afghanistan was far from complete. And I was certain that by choosing precipitous, unilateral military action over the hard slog of diplomacy, coercive inspections, and smart sanctions, America was missing an opportunity to build a broad base of support for its policies. And so I made the speech. To the two thousand people gathered in Chicago’s Federal Plaza, I explained that unlike some of the people in the crowd, I didn’t oppose all wars—that my grandfather had signed up for the war the day after Pearl Harbor was bombed and had fought in Patton’s army.
I also said that “after witnessing the carnage and destruction, the dust and the tears, I supported this Administration’s pledge to hunt down and root out those who would slaughter innocents in the name of intolerance” and would “willingly take up arms myself to prevent such tragedy from happening again.” What I could not support was “a dumb war, a rash war, a war based not on reason but on passion, not on principle but on politics.” And I said: I know that even a successful war against Iraq will require a U.S. occupation of undetermined length, at undetermined cost, with undetermined consequences. I know that an invasion of Iraq without a clear rationale and without strong international support will only fan the flames of the Middle East, and encourage the worst, rather than the best, impulses of the Arab world, and strengthen the recruitment arm of Al Qaeda. The speech was well received; activists began circulating the text on the Internet, and I established a reputation for speaking my mind on hard issues—a reputation that would carry me through a tough Democratic primary. But I had no way of knowing at the time whether my assessment of the situation in Iraq was correct. When the invasion was finally launched and U.S. forces marched unimpeded through Baghdad, when I saw Saddam’s statue topple and watched the President stand atop the U.S.S. Abraham Lincoln, a banner behind him proclaiming “Mission Accomplished,” I began to suspect that I might have been wrong—and was relieved to see the low number of American casualties involved. And now, three years later—as the number of American deaths passed two thousand and the number of wounded passed sixteen thousand; after $250 billion in direct spending and hundreds of billions more in future years to pay off the resulting debt and care for disabled veterans; after two Iraqi national elections, one Iraqi constitutional referendum, and tens of thousands of Iraqi deaths; after watching anti-American sentiment rise to record levels around the world and Afghanistan begin to slip back into chaos—I was flying into Baghdad as a member of the Senate, partially responsible for trying to figure out just what to do with this mess. The landing at Baghdad International Airport turned out not to be so bad—although I was thankful that we couldn’t see out the windows as the C-130 bucked and banked and dipped its way down. Our escort officer from the State Department was there to greet us, along with an assortment of military personnel with rifles slung over their shoulders. After getting our security briefing, recording our blood types, and being fitted for helmets and Kevlar vests, we boarded two Black Hawk helicopters and headed for the Green Zone, flying low, passing over miles of mostly muddy, barren fields crisscrossed by narrow roads and punctuated by small groves of date trees and squat concrete shelters, many of them seemingly empty, some bulldozed down to their foundations. Eventually Baghdad came into view, a sand-colored metropolis set in a circular pattern, the Tigris River cutting a broad, murky swath down its center. Even from the air the city looked worn and battered, the traffic on the streets intermittent—although almost every rooftop was cluttered with satellite dishes, which along with cell phone service had been touted by U.S. officials as one of the successes of the reconstruction. 
I would spend only a day and a half in Iraq, most of it in the Green Zone, a ten-mile-wide area of central Baghdad that had once been the heart of Saddam Hussein’s government but was now a U.S.-controlled compound, surrounded along its perimeter by blast walls and barbed wire. Reconstruction teams briefed us about the difficulty of maintaining electrical power and oil production in the face of insurgent sabotage; intelligence officers described the growing threat of sectarian militias and their infiltration of Iraqi security forces. Later, we met with members of the Iraqi Election Commission, who spoke with enthusiasm about the high turnout during the recent election, and for an hour we listened to U.S. Ambassador Khalilzad, a shrewd, elegant man with world-weary eyes, explain the delicate shuttle diplomacy in which he was now engaged, to bring Shi’ite, Sunni, and Kurdish factions into some sort of workable unity government. In the afternoon we had an opportunity to have lunch with some of the troops in the huge mess hall just off the swimming pool of what had once been Saddam’s presidential palace. They were a mix of regular forces, reservists, and National Guard units, from big cities and small towns, blacks and whites and Latinos, many of them on their second or third tour of duty. They spoke with pride as they told us what their units had accomplished—building schools, protecting electrical facilities, leading newly trained Iraqi soldiers on patrol, maintaining supply lines to those in far-flung regions of the country. Again and again, I was asked the same question: Why did the U.S. press only report on bombings and killings? There was progress being made, they insisted—I needed to let the folks back home know that their work was not in vain. It was easy, talking to these men and women, to understand their frustration, for all the Americans I met in Iraq, whether military or civilian, impressed me with their dedication, their skill, and their frank acknowledgment not only of the mistakes that had been made but also of the difficulties of the task that still lay ahead. Indeed, the entire enterprise in Iraq bespoke American ingenuity, wealth, and technical know-how; standing inside the Green Zone or any of the large operating bases in Iraq and Kuwait, one could only marvel at the ability of our government to essentially erect entire cities within hostile territory, self-contained communities with their own power and sewage systems, computer lines and wireless networks, basketball courts and ice cream stands. More than that, one was reminded of that unique quality of American optimism that everywhere was on display—the absence of cynicism despite the danger, sacrifice, and seemingly interminable setbacks, the insistence that at the end of the day our actions would result in a better life for a nation of people we barely knew. And yet, three conversations during the course of my visit would remind me of just how quixotic our efforts in Iraq still seemed—how, with all the American blood, treasure, and the best of intentions, the house we were building might be resting on quicksand. The first conversation took place in the early evening, when our delegation held a press conference with a group of foreign correspondents stationed in Baghdad. After the Q&A session, I asked the reporters if they’d stay for an informal, off-the-record conversation. I was interested, I said, in getting some sense of life outside the Green Zone.
They were happy to oblige, but insisted they could only stay for forty-five minutes—it was getting late, and like most residents of Baghdad, they generally avoided traveling once the sun went down. As a group, they were young, mostly in their twenties and early thirties, all of them dressed casually enough that they could pass for college students. Their faces, though, showed the stresses they were under—sixty journalists had already been killed in Iraq by that time. Indeed, at the start of our conversation they apologized for being somewhat distracted; they had just received word that one of their colleagues, a reporter with the Christian Science Monitor named Jill Carroll, had been abducted, her driver found killed on the side of a road. Now they were all working their contacts, trying to track down her whereabouts. Such violence wasn’t unusual in Baghdad these days, they said, although Iraqis overwhelmingly bore the brunt of it. Fighting between Shi’ites and Sunnis had become widespread, less strategic, less comprehensible, more frightening. None of them thought that the elections would bring about significant improvement in the security situation. I asked them if they thought a U.S. troop withdrawal might ease tensions, expecting them to answer in the affirmative. Instead, they shook their heads. “My best guess is the country would collapse into civil war within weeks,” one of the reporters told me. “One hundred, maybe two hundred thousand dead. We’re the only thing holding this place together.” That night, our delegation accompanied Ambassador Khalilzad for dinner at the home of Iraqi interim President Jalal Talabani. Security was tight as our convoy wound its way past a maze of barricades out of the Green Zone; outside, our route was lined with U.S. troops at one-block intervals, and we were instructed to keep our vests and helmets on for the duration of the drive. After ten minutes we arrived at a large villa, where we were greeted by the president and several members of the Iraqi interim government. They were all heavyset men, most in their fifties or sixties, with broad smiles but eyes that betrayed no emotion. I recognized only one of the ministers—Mr. Ahmed Chalabi, the Western-educated Shi’ite who, as a leader of the exile group the Iraqi National Congress, had reportedly fed U.S. intelligence agencies and Bush policy makers some of the prewar information on which the decision to invade was made—information for which Chalabi’s group had received millions of dollars, and that had turned out to be bogus. Since then Chalabi had fallen out with his U.S. patrons; there were reports that he had steered U.S. classified information to the Iranians, and that Jordan still had a warrant out for his arrest after he’d been convicted in absentia on thirty-one charges of embezzlement, theft, misuse of depositor funds, and currency speculation. But he appeared to have landed on his feet; immaculately dressed, accompanied by his grown daughter, he was now the interim government’s acting oil minister. I didn’t speak much to Chalabi during dinner. Instead I was seated next to the former interim finance minister. He seemed impressive, speaking knowledgeably about Iraq’s economy, its need to improve transparency and strengthen its legal framework to attract foreign investment. At the end of the evening, I mentioned my favorable impression to one of the embassy staff. “He’s smart, no doubt about it,” the staffer said. “Of course, he’s also one of the leaders of the SCIRI Party.
They control the Ministry of the Interior, which controls the police. And the police, well…there have been problems with militia infiltration. Accusations that they’re grabbing Sunni leaders, bodies found the next morning, that kind of thing…” The staffer’s voice trailed off, and he shrugged. “We work with what we have.” I had difficulty sleeping that night; instead, I watched the Redskins game, piped in live via satellite to the pool house once reserved for Saddam and his guests. Several times I muted the TV and heard mortar fire pierce the silence. The following morning, we took a Black Hawk to the Marine base in Fallujah, out in the arid, western portion of Iraq called Anbar Province. Some of the fiercest fighting against the insurgency had taken place in Sunni-dominated Anbar, and the atmosphere in the camp was considerably grimmer than in the Green Zone; just the previous day, five Marines on patrol had been killed by roadside bombs or small-arms fire. The troops here looked rawer as well, most of them in their early twenties, many still with pimples and the unformed bodies of teenagers. The general in charge of the camp had arranged a briefing, and we listened as the camp’s senior officers explained the dilemma facing U.S. forces: With improved capabilities, they were arresting more and more insurgent leaders each day, but like street gangs back in Chicago, for every insurgent they arrested, there seemed to be two ready to take his place. Economics, and not just politics, seemed to be feeding the insurgency—the central government had been neglecting Anbar, and male unemployment hovered around 70 percent. “For two or three dollars, you can pay some kid to plant a bomb,” one of the officers said. “That’s a lot of money out here.” By the end of the briefing, a light fog had rolled in, delaying our flight to Kirkuk. While waiting, my foreign policy staffer, Mark Lippert, wandered off to chat with one of the unit’s senior officers, while I struck up a conversation with one of the majors responsible for counterinsurgency strategy in the region. He was a soft-spoken man, short and with glasses; it was easy to imagine him as a high school math teacher. In fact, it turned out that before joining the Marines he had spent several years in the Philippines as a member of the Peace Corps. Many of the lessons he had learned there needed to be applied to the military’s work in Iraq, he told me. He didn’t have anywhere near the number of Arabic-speakers needed to build trust with the local population. We needed to improve cultural sensitivity within U.S. forces, develop long-term relationships with local leaders, and couple security forces to reconstruction teams, so that Iraqis could see concrete benefits from U.S. efforts. All this would take time, he said, but he could already see changes for the better as the military adopted these practices throughout the country. Our escort officer signaled that the chopper was ready to take off. I wished the major luck and headed for the van. Mark came up beside me, and I asked him what he’d learned from his conversation with the senior officer. “I asked him what he thought we needed to do to best deal with the situation.” “What did he say?” “Leave.” THE STORY OF America’s involvement in Iraq will be analyzed and debated for many years to come—indeed, it’s a story that’s still being written. 
At the moment, the situation there has deteriorated to the point where it appears that a low-grade civil war has begun, and while I believe that all Americans—regardless of their views on the original decision to invade—have an interest in seeing a decent outcome in Iraq, I cannot honestly say that I am optimistic about Iraq’s short-term prospects. I do know that at this stage it will be politics—the calculations of those hard, unsentimental men with whom I had dinner—and not the application of American force that determines what happens in Iraq. I believe as well that our strategic goals at this point should be well defined: achieving some semblance of stability in Iraq, ensuring that those in power in Iraq are not hostile to the United States, and preventing Iraq from becoming a base for terrorist activity. In pursuit of these goals, I believe it is in the interest of both Americans and Iraqis to begin a phased withdrawal of U.S. troops by the end of 2006, although how quickly a complete withdrawal can be accomplished is a matter of imperfect judgment, based on a series of best guesses—about the ability of the Iraqi government to deliver even basic security and services to its people, the degree to which our presence drives the insurgency, and the odds that in the absence of U.S. troops Iraq would descend into all-out civil war. When battle-hardened Marine officers suggest we pull out and skeptical foreign correspondents suggest that we stay, there are no easy answers to be had. Still, it’s not too early to draw some conclusions from our actions in Iraq. For our difficulties there don’t just arise as a result of bad execution. They reflect a failure of conception. The fact is, close to five years after 9/11 and fifteen years after the breakup of the Soviet Union, the United States still lacks a coherent national security policy. Instead of guiding principles, we have what appear to be a series of ad hoc decisions, with dubious results. Why invade Iraq and not North Korea or Burma? Why intervene in Bosnia and not Darfur? Are our goals in Iran regime change, the dismantling of all Iranian nuclear capability, the prevention of nuclear proliferation, or all three? Are we committed to use force wherever there’s a despotic regime that’s terrorizing its people—and if so, how long do we stay to ensure democracy takes root? How do we treat countries like China that are liberalizing economically but not politically? Do we work through the United Nations on all issues or only when the UN is willing to ratify decisions we’ve already made? Perhaps someone inside the White House has clear answers to these questions. But our allies—and for that matter our enemies—certainly don’t know what those answers are. More important, neither do the American people. Without a well-articulated strategy that the public supports and the world understands, America will lack the legitimacy—and ultimately the power—it needs to make the world safer than it is today. We need a revised foreign policy framework that matches the boldness and scope of Truman’s post–World War II policies—one that addresses both the challenges and the opportunities of a new millennium, one that guides our use of force and expresses our deepest ideals and commitments. I don’t presume to have this grand strategy in my hip pocket. But I know what I believe, and I’d suggest a few things that the American people should be able to agree on, starting points for a new consensus.
To begin with, we should understand that any return to isolationism—or a foreign policy approach that denies the occasional need to deploy U.S. troops—will not work. The impulse to withdraw from the world remains a strong undercurrent in both parties, particularly when U.S. casualties are at stake. After the bodies of U.S. soldiers were dragged through the streets of Mogadishu in 1993, for example, Republicans accused President Clinton of squandering U.S. forces on ill-conceived missions; it was partly because of the experience in Somalia that candidate George W. Bush vowed in the 2000 election never again to expend American military resources on “nation building.” Understandably, the Bush Administration’s actions in Iraq have produced a much bigger backlash. According to a Pew Research Center poll, almost five years after the 9/11 attacks, 46 percent of Americans have concluded that the United States should “mind its own business internationally and let other countries get along the best they can on their own.” The reaction has been particularly severe among liberals, who see in Iraq a repeat of the mistakes America made in Vietnam. Frustration with Iraq and the questionable tactics the Administration used to make its case for the war has even led many on the left to downplay the threat posed by terrorists and nuclear proliferators; according to a January 2005 poll, self-identified conservatives were 29 points more likely than liberals to identify destroying Al Qaeda as one of their top foreign policy goals, and 26 points more likely to mention denying nuclear weapons to hostile groups or nations. The top three foreign policy objectives among liberals, on the other hand, were withdrawing troops from Iraq, stopping the spread of AIDS, and working more closely with our allies. The objectives favored by liberals have merit. But they hardly constitute a coherent national security policy. It’s useful to remind ourselves, then, that Osama bin Laden is not Ho Chi Minh, and that the threats facing the United States today are real, multiple, and potentially devastating. Our recent policies have made matters worse, but if we pulled out of Iraq tomorrow, the United States would still be a target, given its dominant position in the existing international order. Of course, conservatives are just as misguided if they think we can simply eliminate “the evildoers” and then let the world fend for itself. Globalization makes our economy, our health, and our security all captive to events on the other side of the world. And no other nation on earth has a greater capacity to shape that global system, or to build consensus around a new set of international rules that expand the zones of freedom, personal safety, and economic well-being. Like it or not, if we want to make America more secure, we are going to have to help make the world more secure. The second thing we need to recognize is that the security environment we face today is fundamentally different from the one that existed fifty, twenty-five, or even ten years ago. When Truman, Acheson, Kennan, and Marshall sat down to design the architecture of the post–World War II order, their frame of reference was the competition between the great powers that had dominated the nineteenth and early twentieth centuries. 
In that world, America’s greatest threats came from expansionist states like Nazi Germany or Soviet Russia, which could deploy large armies and powerful arsenals to invade key territories, restrict our access to critical resources, and dictate the terms of world trade. That world no longer exists. The integration of Germany and Japan into a world system of liberal democracies and free-market economies effectively eliminated the threat of great-power conflicts inside the free world. The advent of nuclear weapons and “mutual assured destruction” rendered the risk of war between the United States and the Soviet Union fairly remote even before the Berlin Wall fell. Today, the world’s most powerful nations (including, to an ever-increasing extent, China)—and, just as important, the vast majority of the people who live within these nations—are largely committed to a common set of international rules governing trade, economic policy, and the legal and diplomatic resolution of disputes, even if broader notions of liberty and democracy aren’t widely observed within their own borders. The growing threat, then, comes primarily from those parts of the world on the margins of the global economy where the international “rules of the road” have not taken hold—the realm of weak or failing states, arbitrary rule, corruption, and chronic violence; lands in which an overwhelming majority of the population is poor, uneducated, and cut off from the global information grid; places where the rulers fear globalization will loosen their hold on power, undermine traditional cultures, or displace indigenous institutions. In the past, there was the perception that America could perhaps safely ignore nations and individuals in these disconnected regions. They might be hostile to our worldview, nationalize a U.S. business, cause a spike in commodity prices, fall into the Soviet or Communist Chinese orbit, or even attack U.S. embassies or military personnel overseas—but they could not strike us where we live. September 11 showed that’s no longer the case. The very interconnectivity that increasingly binds the world together has empowered those who would tear that world down. Terrorist networks can spread their doctrines in the blink of an eye; they can probe the world economic system’s weakest links, knowing that an attack in London or Tokyo will reverberate in New York or Hong Kong; weapons and technology that were once the exclusive province of nation-states can now be purchased on the black market, or their designs downloaded off the Internet; the free travel of people and goods across borders, the lifeblood of the global economy, can be exploited for murderous ends. If nation-states no longer have a monopoly on mass violence; if in fact nation-states are increasingly less likely to launch a direct attack on us, since they have a fixed address to which we can deliver a response; if instead the fastest-growing threats are transnational—terrorist networks intent on repelling or disrupting the forces of globalization, potential pandemic disease like avian flu, or catastrophic changes in the earth’s climate—then how should our national security strategy adapt? For starters, our defense spending and the force structure of our military should reflect the new reality. Since the outset of the Cold War, our ability to deter nation-to-nation aggression has to a large extent underwritten security for every country that commits itself to international rules and norms.
With the only blue-water navy that patrols the entire globe, it is our ships that keep the sea lanes clear. And it is our nuclear umbrella that prevented Europe and Japan from entering the arms race during the Cold War, and that—until recently, at least—has led most countries to conclude that nukes aren’t worth the trouble. So long as Russia and China retain their own large military forces and haven’t fully rid themselves of the instinct to throw their weight around—and so long as a handful of rogue states are willing to attack other sovereign nations, as Saddam attacked Kuwait in 1991—there will be times when we must again play the role of the world’s reluctant sheriff. This will not change—nor should it. On the other hand, it’s time we acknowledge that a defense budget and force structure built principally around the prospect of World War III makes little strategic sense. The U.S. military and defense budget in 2005 topped $522 billion—more than that of the next thirty countries combined. The United States’ GDP is greater than that of the two largest countries and fastest-growing economies—China and India—combined. We need to maintain a strategic force posture that allows us to manage threats posed by rogue nations like North Korea and Iran and to meet the challenges presented by potential rivals like China. Indeed, given the depletion of our forces after the wars in Iraq and Afghanistan, we will probably need a somewhat higher budget in the immediate future just to restore readiness and replace equipment. But our most complex military challenge will not be staying ahead of China (just as our biggest challenge with China may well be economic rather than military). More likely, that challenge will involve putting boots on the ground in the ungoverned or hostile regions where terrorists thrive. That requires a smarter balance between what we spend on fancy hardware and what we spend on our men and women in uniform. That should mean growing the size of our armed forces to maintain reasonable rotation schedules, keeping our troops properly equipped, and training them in the language, reconstruction, intelligence-gathering, and peacekeeping skills they’ll need to succeed in increasingly complex and difficult missions. A change in the makeup of our military won’t be enough, though. In coping with the asymmetrical threats that we’ll face in the future—from terrorist networks and the handful of states that support them—the structure of our armed forces will ultimately matter less than how we decide to use those forces. The United States won the Cold War not simply because it outgunned the Soviet Union but because American values held sway in the court of international public opinion, which included those who lived within communist regimes. Even more than was true during the Cold War, the struggle against Islamic-based terrorism will be not simply a military campaign but a battle for public opinion in the Islamic world, among our allies, and in the United States. Osama bin Laden understands that he cannot defeat or even incapacitate the United States in a conventional war. What he and his allies can do is inflict enough pain to provoke a reaction of the sort we’ve seen in Iraq—a botched and ill-advised U.S. military incursion into a Muslim country, which in turn spurs on insurgencies based on religious sentiment and nationalist pride, which in turn necessitates a lengthy and difficult U.S. occupation, which in turn leads to an escalating death toll on the part of U.S. 
troops and the local civilian population. All of this fans anti-American sentiment among Muslims, increases the pool of potential terrorist recruits, and prompts the American public to question not only the war but also those policies that project us into the Islamic world in the first place. That’s the plan for winning a war from a cave, and so far, at least, we are playing to script. To change that script, we’ll need to make sure that any exercise of American military power helps rather than hinders our broader goals: to incapacitate the destructive potential of terrorist networks and win this global battle of ideas. What does this mean in practical terms? We should start with the premise that the United States, like all sovereign nations, has the unilateral right to defend itself against attack. As such, our campaign to take out Al Qaeda base camps and the Taliban regime that harbored them was entirely justified—and was viewed as legitimate even in most Islamic countries. It may be preferable to have the support of our allies in such military campaigns, but our immediate safety can’t be held hostage to the desire for international consensus; if we have to go it alone, then the American people stand ready to pay any price and bear any burden to protect our country. I would also argue that we have the right to take unilateral military action to eliminate an imminent threat to our security—so long as an imminent threat is understood to be a nation, group, or individual that is actively preparing to strike U.S. targets (or allies with which the United States has mutual defense agreements), and has or will have the means to do so in the immediate future. Al Qaeda qualifies under this standard, and we can and should carry out preemptive strikes against them wherever we can. Iraq under Saddam Hussein did not meet this standard, which is why our invasion was such a strategic blunder. If we are going to act unilaterally, then we had better have the goods on our targets. Once we get beyond matters of self-defense, though, I’m convinced that it will almost always be in our strategic interest to act multilaterally rather than unilaterally when we use force around the world. By this, I do not mean that the UN Security Council—a body that in its structure and rules too often appears frozen in a Cold War–era time warp—should have a veto over our actions. Nor do I mean that we round up the United Kingdom and Togo and then do what we please. Acting multilaterally means doing what George H. W. Bush and his team did in the first Gulf War—engaging in the hard diplomatic work of obtaining most of the world’s support for our actions, and making sure our actions serve to further recognize international norms. Why conduct ourselves in this way? Because nobody benefits more than we do from the observance of international “rules of the road.” We can’t win converts to those rules if we act as if they apply to everyone but us. When the world’s sole superpower willingly restrains its power and abides by internationally agreed-upon standards of conduct, it sends a message that these are rules worth following, and robs terrorists and dictators of the argument that these rules are simply tools of American imperialism. Obtaining global buy-in also allows the United States to carry a lighter load when military action is required and enhances the chances for success. 
Given the comparatively modest defense budgets of most of our allies, sharing the military burden may in some cases prove a bit of an illusion, but in the Balkans and Afghanistan, our NATO partners have indeed shouldered their share of the risks and costs. Additionally, for the types of conflicts in which we’re most likely to find ourselves engaged, the initial military operation will often be less complex and costly than the work that follows—training local police forces, restoring electricity and water services, building a working judicial system, fostering an independent media, setting up a public health infrastructure, and planning elections. Allies can help pay the freight and provide expertise for these critical efforts, as they have in the Balkans and Afghanistan, but they are far more likely to do so if our actions have gained international support on the front end. In military parlance, legitimacy is a “force multiplier.” Just as important, the painstaking process of building coalitions forces us to listen to other points of view and therefore look before we leap. When we’re not defending ourselves against a direct and imminent threat, we will often have the benefit of time; our military power becomes just one tool among many (albeit an extraordinarily important one) to influence events and advance our interests in the world—interests in maintaining access to key energy sources, keeping financial markets stable, seeing international boundaries respected, and preventing genocide. In pursuit of those interests, we should be engaging in some hardheaded analysis of the costs and benefits of the use of force compared to the other tools of influence at our disposal. Is cheap oil worth the costs—in blood and treasure—of war? Will our military intervention in a particular ethnic dispute lead to a permanent political settlement or an indefinite commitment of U.S. forces? Can our dispute with a country be settled diplomatically or through a coordinated series of sanctions? If we hope to win the broader battle of ideas, then world opinion must enter into this calculus. And while it may be frustrating at times to hear anti-American posturing from European allies that enjoy the blanket of our protection, or to hear speeches in the UN General Assembly designed to obfuscate, distract, or excuse inaction, it’s just possible that beneath all the rhetoric are perspectives that can illuminate the situation and help us make better strategic decisions. Finally, by engaging our allies, we give them joint ownership over the difficult, methodical, vital, and necessarily collaborative work of limiting the terrorists’ capacity to inflict harm. That work includes shutting down terrorist financial networks and sharing intelligence to hunt down terrorist suspects and infiltrate their cells; our continued failure to effectively coordinate intelligence gathering even among various U.S. agencies, as well as our continued lack of effective human intelligence capacity, is inexcusable. Most important, we need to join forces to keep weapons of mass destruction out of terrorist hands. One of the best examples of such collaboration was pioneered in the nineties by Republican Senator Dick Lugar of Indiana and former Democratic Senator Sam Nunn of Georgia, two men who understood the need to nurture coalitions before crises strike, and who applied this knowledge to the critical problem of nuclear proliferation. 
The premise of what came to be known as the Nunn-Lugar program was simple: After the fall of the Soviet Union, the biggest threat to the United States—aside from an accidental launch—wasn’t a first strike ordered by Gorbachev or Yeltsin, but the migration of nuclear material or know-how into the hands of terrorists and rogue states, a possible result of Russia’s economic tailspin, corruption in the military, the impoverishment of Russian scientists, and security and control systems that had fallen into disrepair. Under Nunn-Lugar, America basically provided the resources to fix up those systems, and although the program caused some consternation to those accustomed to Cold War thinking, it has proven to be one of the most important investments we could have made to protect ourselves from catastrophe. In August 2005, I traveled with Senator Lugar to see some of this handiwork. It was my first trip to Russia and Ukraine, and I couldn’t have had a better guide than Dick, a remarkably fit seventy-three-year-old with a gentle, imperturbable manner and an inscrutable smile that served him well during the often interminable meetings we held with foreign officials. Together we visited the nuclear facilities of Saratov, where Russian generals pointed with pride to the new fencing and security systems that had been recently completed; afterward, they served us a lunch of borscht, vodka, potato stew, and a deeply troubling fish Jell-O mold. In Perm, at a site where SS-24 and SS-25 missiles were being dismantled, we walked through the center of eight-foot-high empty missile casings and gazed in silence at the massive, sleek, still-active missiles that were now warehoused safely but had once been aimed at the cities of Europe. And in a quiet, residential neighborhood of Kiev, we received a tour of Ukraine’s version of the Centers for Disease Control, a modest three-story facility that looked like a high school science lab. At one point during our tour, after seeing windows open for lack of air-conditioning and metal strips crudely bolted to door jambs to keep out mice, we were guided to a small freezer secured by nothing more than a seal of string. A middle-aged woman in a lab coat and surgical mask pulled a few test tubes from the freezer, waving them around a foot from my face and saying something in Ukrainian. “That is anthrax,” the translator explained, pointing to the vial in the woman’s right hand. “That one,” he said, pointing to the one in the left hand, “is the plague.” I looked behind me and noticed Lugar standing toward the back of the room. “You don’t want a closer look, Dick?” I asked, taking a few steps back myself. “Been there, done that,” he said with a smile. There were moments during our travels when we were reminded of the old Cold War days. At the airport in Perm, for example, a border officer in his early twenties detained us for three hours because we wouldn’t let him search our plane, leading our staffs to fire off telephone calls to the U.S. embassy and Russia’s foreign affairs ministry in Moscow.
And yet most of what we heard and saw—the Calvin Klein store and Maserati showroom in Red Square Mall; the motorcade of SUVs that pulled up in front of a restaurant, driven by burly men with ill-fitting suits who once might have rushed to open the door for Kremlin officials but were now on the security detail of one of Russia’s billionaire oligarchs; the throngs of sullen teenagers in T-shirts and low-riding jeans, sharing cigarettes and the music on their iPods as they wandered Kiev’s graceful boulevards—underscored the seemingly irreversible process of economic, if not political, integration between East and West. That was part of the reason, I sensed, why Lugar and I were greeted so warmly at these various military installations. Our presence not only promised money for security systems and fencing and monitors and the like; it also indicated to the men and women who worked in these facilities that they still in fact mattered. They had made careers, had been honored, for perfecting the tools of war. Now they found themselves presiding over remnants of the past, their institutions barely relevant to nations whose people had shifted their main attention to turning a quick buck. Certainly that’s how it felt in Donetsk, an industrial town in the southeastern portion of Ukraine where we stopped to visit an installation for the destruction of conventional weapons. The facility was nestled in the country, accessed by a series of narrow roads occasionally crowded with goats. The director of the facility, a rotund, cheerful man who reminded me of a Chicago ward superintendent, led us through a series of dark warehouse-like structures in various states of disrepair, where rows of workers nimbly dismantled an assortment of land mines and tank ordnance, and empty shell casings were piled loosely into mounds that rose to my shoulders. They needed U.S. help, the director explained, because Ukraine lacked the money to deal with all the weapons left over from the Cold War and Afghanistan—at the pace they were going, securing and disabling these weapons might take sixty years. In the meantime weapons would remain scattered across the country, often in shacks without padlocks, exposed to the elements, not just ammunition but high-grade explosives and shoulder-to-air missiles—tools of destruction that might find their way into the hands of warlords in Somalia, Tamil fighters in Sri Lanka, insurgents in Iraq. As he spoke, our group entered another building, where women wearing surgical masks stood at a table removing hexogen—a military-grade explosive—from various munitions and placing it into bags. In another room, I happened upon a pair of men in their undershirts, smoking next to a wheezing old boiler, flicking their ashes into an open gutter filled with orange-tinted water. One of our team called me over and showed me a yellowing poster taped to the wall. It was a relic of the Afghan war, we were told: instructions on how to hide explosives in toys, to be left in villages and carried home by unsuspecting children. A testament, I thought, to the madness of men. A record of how empires destroy themselves. THERE’S A FINAL dimension to U.S. foreign policy that must be discussed—the portion that has less to do with avoiding war than promoting peace. 
The year I was born, President Kennedy stated in his inaugural address: “To those peoples in the huts and villages of half the globe struggling to break the bonds of mass misery, we pledge our best efforts to help them help themselves, for whatever period is required—not because the Communists may be doing it, not because we seek their votes, but because it is right. If a free society cannot help the many who are poor, it cannot save the few who are rich.” Forty-five years later, that mass misery still exists. If we are to fulfill Kennedy’s promise—and serve our long-term security interests—then we will have to go beyond a more prudent use of military force. We will have to align our policies to help reduce the spheres of insecurity, poverty, and violence around the world, and give more people a stake in the global order that has served us so well. Of course, there are those who would argue with my starting premise—that any global system built in America’s image can alleviate misery in poorer countries. For these critics, America’s notion of what the international system should be—free trade, open markets, the unfettered flow of information, the rule of law, democratic elections, and the like—is simply an expression of American imperialism, designed to exploit the cheap labor and natural resources of other countries and infect non-Western cultures with decadent beliefs. Rather than conform to America’s rules, the argument goes, other countries should resist America’s efforts to expand its hegemony; instead, they should follow their own path to development, taking their lead from left-leaning populists like Venezuela’s Hugo Chávez, or turning to more traditional principles of social organization, like Islamic law. I don’t dismiss these critics out of hand. America and its Western partners did design the current international system, after all; it is our way of doing things—our accounting standards, our language, our dollar, our copyright laws, our technology, and our popular culture—to which the world has had to adapt over the past fifty years. If overall the international system has produced great prosperity in the world’s most developed countries, it has also left many people behind—a fact that Western policy makers have often ignored and occasionally made worse. Ultimately, though, I believe critics are wrong to think that the world’s poor will benefit by rejecting the ideals of free markets and liberal democracy. When human rights activists from various countries come to my office and talk about being jailed or tortured for their beliefs, they are not acting as agents of American power. When my cousin in Kenya complains that it’s impossible to find work unless he’s paid a bribe to some official in the ruling party, he hasn’t been brainwashed by Western ideas. Who doubts that, if given the choice, most of the people in North Korea would prefer living in South Korea, or that many in Cuba wouldn’t mind giving Miami a try? No person, in any culture, likes to be bullied. No person likes living in fear because his or her ideas are different. Nobody likes being poor or hungry, and nobody likes to live under an economic system in which the fruits of his or her labor go perpetually unrewarded. The system of free markets and liberal democracy that now characterizes most of the developed world may be flawed; it may all too often reflect the interests of the powerful over the powerless. 
But that system is constantly subject to change and improvement—and it is precisely in this openness to change that market-based liberal democracies offer people around the world their best chance at a better life. Our challenge, then, is to make sure that U.S. policies move the international system in the direction of greater equity, justice, and prosperity—that the rules we promote serve both our interests and the interests of a struggling world. In doing so, we might keep a few basic principles in mind. First, we should be skeptical of those who believe we can single-handedly liberate other people from tyranny. I agree with George W. Bush when in his second inaugural address he proclaimed a universal desire to be free. But there are few examples in history in which the freedom men and women crave is delivered through outside intervention. In almost every successful social movement of the last century, from Gandhi’s campaign against British rule to the Solidarity movement in Poland to the antiapartheid movement in South Africa, democracy was the result of a local awakening. We can inspire and invite other people to assert their freedoms; we can use international forums and agreements to set standards for others to follow; we can provide funding to fledgling democracies to help institutionalize fair election systems, train independent journalists, and seed the habits of civic participation; we can speak out on behalf of local leaders whose rights are violated; and we can apply economic and diplomatic pressure to those who repeatedly violate the rights of their own people. But when we seek to impose democracy with the barrel of a gun, funnel money to parties whose economic policies are deemed friendlier to Washington, or fall under the sway of exiles like Chalabi whose ambitions aren’t matched by any discernible local support, we aren’t just setting ourselves up for failure. We are helping oppressive regimes paint democratic activists as tools of foreign powers and retarding the possibility that genuine, homegrown democracy will ever emerge. A corollary to this is that freedom means more than elections. In 1941, FDR said he looked forward to a world founded upon four essential freedoms: freedom of speech, freedom of worship, freedom from want, and freedom from fear. Our own experience tells us that those last two freedoms—freedom from want and freedom from fear—are prerequisites for all others. For half of the world’s population, roughly three billion people around the world living on less than two dollars a day, an election is at best a means, not an end; a starting point, not deliverance. These people are looking less for an “electocracy” than for the basic elements that for most of us define a decent life—food, shelter, electricity, basic health care, education for their children, and the ability to make their way through life without having to endure corruption, violence, or arbitrary power. If we want to win the hearts and minds of people in Caracas, Jakarta, Nairobi, or Tehran, dispersing ballot boxes will not be enough. We’ll have to make sure that the international rules we’re promoting enhance, rather than impede, people’s sense of material and personal security. That may require that we look in the mirror. 
For example, the United States and other developed countries constantly demand that developing countries eliminate trade barriers that protect them from competition, even as we steadfastly protect our own constituencies from exports that could help lift poor countries out of poverty. In our zeal to protect the patents of American drug companies, we’ve discouraged the ability of countries like Brazil to produce generic AIDS drugs that could save millions of lives. Under the leadership of Washington, the International Monetary Fund, designed after World War II to serve as a lender of last resort, has repeatedly forced countries in the midst of financial crisis like Indonesia to go through painful readjustments (sharply raising interest rates, cutting government social spending, eliminating subsidies to key industries) that cause enormous hardship to their people—harsh medicine that we Americans would have difficulty administering to ourselves. Another branch of the international financial system, the World Bank, has a reputation for funding large, expensive projects that benefit high-priced consultants and well-connected local elites but do little for ordinary citizens—although it’s these ordinary citizens who are left holding the bag when the loans come due. Indeed, countries that have successfully developed under the current international system have at times ignored Washington’s rigid economic prescriptions by protecting nascent industries and engaging in aggressive industrial policies. The IMF and World Bank need to recognize that there is no single, cookie-cutter formula for each and every country’s development. There is nothing wrong, of course, with a policy of “tough love” when it comes to providing development assistance to poor countries. Too many poor countries are hampered by archaic, even feudal, property and banking laws; in the past, too many foreign aid programs simply engorged local elites, the money siphoned off into Swiss bank accounts. Indeed, for far too long international aid policies have ignored the critical role that the rule of law and principles of transparency play in any nation’s development. In an era in which international financial transactions hinge on reliable, enforceable contracts, one might expect that the boom in global business would have given rise to vast legal reforms. But in fact countries like India, Nigeria, and China have developed two legal systems—one for foreigners and elites, and one for ordinary people trying to get ahead. As for countries like Somalia, Sierra Leone, or the Congo, well, they have barely any law whatsoever. There are times when considering the plight of Africa—the millions racked by AIDS, the constant droughts and famines, the dictatorships, the pervasive corruption, the brutality of twelve-year-old guerrillas wielding machetes or AK-47s who know nothing but war—I find myself plunged into cynicism and despair. Until I’m reminded that a mosquito net that prevents malaria costs three dollars; that a voluntary HIV testing program in Uganda has made substantial inroads in the rate of new infections at a cost of three or four dollars per test; that only modest attention—an international show of force or the creation of civilian protection zones—might have stopped the slaughter in Rwanda; and that onetime hard cases like Mozambique have made significant strides toward reform. 
FDR was certainly right when he said, “As a nation we may take pride in the fact that we are softhearted; but we cannot afford to be soft-headed.” We should not expect to help Africa if Africa ultimately proves unwilling to help itself. But there are positive trends in Africa often hidden in the news of despair. Democracy is spreading. In many places economies are growing. We need to build on these glimmers of hope and help those committed leaders and citizens throughout Africa build the better future they, like we, so desperately desire. Moreover, we fool ourselves in thinking that, in the words of one commentator, “we must learn to watch others die with equanimity,” and not expect consequences. Disorder breeds disorder; callousness toward others tends to spread among ourselves. And if moral claims are insufficient for us to act as a continent implodes, there are certainly instrumental reasons why the United States and its allies should care about failed states that don’t control their territories, can’t combat epidemics, and are numbed by civil war and atrocity. It was in such a state of lawlessness that the Taliban took hold of Afghanistan. It was in Sudan, site of today’s slow-rolling genocide, that bin Laden set up camp for several years. It’s in the misery of some unnamed slum that the next killer virus will emerge. Of course, whether in Africa or elsewhere, we can’t expect to tackle such dire problems alone. For that reason, we should be spending more time and money trying to strengthen the capacity of international institutions so that they can do some of this work for us. Instead, we’ve been doing the opposite. For years, conservatives in the United States have been making political hay over problems at the UN: the hypocrisy of resolutions singling out Israel for condemnation, the Kafkaesque election of nations like Zimbabwe and Libya to the UN Commission on Human Rights, and most recently the kickbacks that plagued the oil-for-food program. These critics are right. For every UN agency like UNICEF that functions well, there are other agencies that seem to do nothing more than hold conferences, produce reports, and provide sinecures for third-rate international civil servants. But these failures aren’t an argument for reducing our involvement in international organizations, nor are they an excuse for U.S. unilateralism. The more effective UN peacekeeping forces are in handling civil wars and sectarian conflicts, the less global policing we have to do in areas that we’d like to see stabilized. The more credible the information that the International Atomic Energy Agency provides, the more likely we are to mobilize allies against the efforts of rogue states to obtain nuclear weapons. The greater the capacity of the World Health Organization, the less likely we are to have to deal with a flu pandemic in our own country. No country has a bigger stake than we do in strengthening international institutions—which is why we pushed for their creation in the first place, and why we need to take the lead in improving them. Finally, for those who chafe at the prospect of working with our allies to solve the pressing global challenges we face, let me suggest at least one area where we can act unilaterally and improve our standing in the world—by perfecting our own democracy and leading by example. 
When we continue to spend tens of billions of dollars on weapons systems of dubious value but are unwilling to spend the money to protect highly vulnerable chemical plants in major urban centers, it becomes more difficult to get other countries to safeguard their nuclear power plants. When we detain suspects indefinitely without trial or ship them off in the dead of night to countries where we know they’ll be tortured, we weaken our ability to press for human rights and the rule of law in despotic regimes. When we, the richest country on earth and the consumer of 25 percent of the world’s fossil fuels, can’t bring ourselves to raise fuel-efficiency standards by even a small fraction so as to weaken our dependence on Saudi oil fields and slow global warming, we should expect to have a hard time convincing China not to deal with oil suppliers like Iran or Sudan—and shouldn’t count on much cooperation in getting them to address environmental problems that visit our shores. This unwillingness to make hard choices and live up to our own ideals doesn’t just undermine U.S. credibility in the eyes of the world. It undermines the U.S. government’s credibility with the American people. Ultimately, it is how we manage that most precious resource—the American people, and the system of self-government we inherited from our Founders—that will determine the success of any foreign policy. The world out there is dangerous and complex; the work of remaking it will be long and hard, and will require some sacrifice. Such sacrifice comes about because the American people understand fully the choices before them; it is born of the confidence we have in our democracy. FDR understood this when he said, after the attack on Pearl Harbor, that “[t]his Government will put its trust in the stamina of the American people.” Truman understood this, which is why he worked with Dean Acheson to establish the Committee for the Marshall Plan, made up of CEOs, academics, labor leaders, clergymen, and others who could stump for the plan across the country. It seems as if this is a lesson that America’s leadership needs to relearn. I wonder, sometimes, whether men and women in fact are capable of learning from history—whether we progress from one stage to the next in an upward course or whether we just ride the cycles of boom and bust, war and peace, ascent and decline. On the same trip that took me to Baghdad, I spent a week traveling through Israel and the West Bank, meeting with officials from both sides, mapping in my own mind the site of so much strife. I talked to Jews who’d lost parents in the Holocaust and brothers in suicide bombings; I heard Palestinians talk of the indignities of checkpoints and reminisce about the land they had lost. I flew by helicopter across the line separating the two peoples and found myself unable to distinguish Jewish towns from Arab towns, all of them like fragile outposts against the green and stony hills. From the promenade above Jerusalem, I looked down at the Old City, the Dome of the Rock, the Western Wall, and the Church of the Holy Sepulcher, considered the two thousand years of war and rumors of war that this small plot of land had come to represent, and pondered the possible futility of believing that this conflict might somehow end in our time, or that America, for all its power, might have any lasting say over the course of the world. I don’t linger on such thoughts, though—they are the thoughts of an old man. 
As difficult as the work may seem, I believe we have an obligation to engage in efforts to bring about peace in the Middle East, not only for the benefit of the people of the region, but for the safety and security of our own children as well. And perhaps the world’s fate depends not just on the events of its battlefields; perhaps it depends just as much on the work we do in those quiet places that require a helping hand. I remember seeing the news reports of the tsunami that hit East Asia in 2004—the towns of Indonesia’s western coast flattened, the thousands of people washed out to sea. And then, in the weeks that followed, I watched with pride as Americans sent more than a billion dollars in private relief aid and as U.S. warships delivered thousands of troops to assist in relief and reconstruction. According to newspaper reports, 65 percent of Indonesians surveyed said that this assistance had given them a more favorable view of the United States. I am not naive enough to believe that one episode in the wake of catastrophe can erase decades of mistrust. But it’s a start. Chapter Nine Family BY THE START of my second year in the Senate, my life had settled into a manageable rhythm. I would leave Chicago Monday night or early Tuesday morning, depending on the Senate’s voting schedule. Other than daily trips to the Senate gym and the rare lunch or dinner with a friend, the next three days would be consumed by a predictable series of tasks—committee markups, votes, caucus lunches, floor statements, speeches, photos with interns, evening fund-raisers, returning phone calls, writing correspondence, reviewing legislation, drafting op-eds, recording podcasts, receiving policy briefings, hosting constituent coffees, and attending an endless series of meetings. On Thursday afternoon, we would get word from the cloakroom as to when the last vote would be, and at the appointed hour I’d line up in the well of the Senate alongside my colleagues to cast my vote, before trotting down the Capitol steps in hopes of catching a flight that would get me home before the girls went to bed. Despite the hectic schedule, I found the work fascinating, if occasionally frustrating. Contrary to popular perceptions, only about two dozen significant bills come up for a roll-call vote on the Senate floor every year, and almost none of those are sponsored by a member of the minority party. As a result, most of my major initiatives—the formation of public school innovation districts, a plan to help U.S. automakers pay for their retiree health-care costs in exchange for increased fuel economy standards, an expansion of the Pell Grant program to help low-income students meet rising college tuition costs—languished in committee. On the other hand, thanks to great work by my staff, I managed to get a respectable number of amendments passed. We helped provide funds for homeless veterans. We provided tax credits to gas stations for installing E85 fuel pumps. We obtained funding to help the World Health Organization monitor and respond to a potential avian flu pandemic. We got an amendment out of the Senate eliminating no-bid contracts in the post-Katrina reconstruction, so more money would actually end up in the hands of the tragedy’s victims. None of these amendments would transform the country, but I took satisfaction in knowing that each of them helped some people in a modest way or nudged the law in a direction that might prove to be more economical, more responsible, or more just. 
One day in February I found myself in particularly good spirits, having just completed a hearing on legislation that Dick Lugar and I were sponsoring aimed at restricting weapons proliferation and the black-market arms trade. Because Dick was not only the Senate’s leading expert on proliferation issues but also the chairman of the Senate Foreign Relations Committee, prospects for the bill seemed promising. Wanting to share the good news, I called Michelle from my D.C. office and started explaining the significance of the bill—how shoulder-to-air missiles could threaten commercial air travel if they fell into the wrong hands, how small-arms stockpiles left over from the Cold War continued to feed conflict across the globe. Michelle cut me off. “We have ants.” “Huh?” “I found ants in the kitchen. And in the bathroom upstairs.” “Okay…” “I need you to buy some ant traps on your way home tomorrow. I’d get them myself, but I’ve got to take the girls to their doctor’s appointment after school. Can you do that for me?” “Right. Ant traps.” “Ant traps. Don’t forget, okay, honey? And buy more than one. Listen, I need to go into a meeting. Love you.” I hung up the receiver, wondering if Ted Kennedy or John McCain bought ant traps on the way home from work. MOST PEOPLE WHO meet my wife quickly conclude that she is remarkable. They are right about this—she is smart, funny, and thoroughly charming. She is also very beautiful, although not in a way that men find intimidating or women find off-putting; it is the lived-in beauty of the mother and busy professional rather than the touched-up image we see on the cover of glossy magazines. Often, after hearing her speak at some function or working with her on a project, people will approach me and say something to the effect of “You know I think the world of you, Barack, but your wife…wow!” I nod, knowing that if I ever had to run against her for public office, she would beat me without much difficulty. Fortunately for me, Michelle would never go into politics. “I don’t have the patience,” she says to people who ask. As is always the case, she is telling the truth. I met Michelle in the summer of 1988, while we were both working at Sidley & Austin, a large corporate law firm based in Chicago. Although she is three years younger than me, Michelle was already a practicing lawyer, having attended Harvard Law straight out of college. I had just finished my first year at law school and had been hired as a summer associate. It was a difficult, transitional period in my life. I had enrolled in law school after three years of work as a community organizer, and although I enjoyed my studies, I still harbored doubts about my decision. Privately, I worried that it represented the abandonment of my youthful ideals, a concession to the hard realities of money and power—the world as it is rather than the world as it should be. The idea of working at a corporate law firm, so near and yet so far removed from the poor neighborhoods where my friends were still laboring, only worsened these fears. But with student loans rapidly mounting, I was in no position to turn down the three months of salary Sidley was offering. 
And so, having sublet the cheapest apartment I could find, having purchased the first three suits ever to appear in my closet and a new pair of shoes that turned out to be a half size too small and would absolutely cripple me for the next nine weeks, I arrived at the firm one drizzly morning in early June and was directed to the office of the young attorney who’d been assigned to serve as my summer advisor. I don’t remember the details of that first conversation with Michelle. I remember that she was tall—almost my height in heels—and lovely, with a friendly, professional manner that matched her tailored suit and blouse. She explained how work was assigned at the firm, the nature of the various practice groups, and how to log our billable hours. After showing me my office and giving me a tour of the library, she handed me off to one of the partners and told me that she would meet me for lunch. Later Michelle would tell me that she had been pleasantly surprised when I walked into her office; the drugstore snapshot that I’d sent in for the firm directory made my nose look a little big (even more enormous than usual, she might say), and she had been skeptical when the secretaries who’d seen me during my interview told her I was cute: “I figured that they were just impressed with any black man with a suit and a job.” But if Michelle was impressed, she certainly didn’t tip her hand when we went to lunch. I did learn that she had grown up on the South Side, in a small bungalow just north of the neighborhoods where I had organized. Her father was a pump operator for the city; her mother had been a housewife until the kids were grown, and now worked as a secretary at a bank. She had attended Bryn Mawr Public Elementary School, gotten into Whitney Young Magnet School, and followed her brother to Princeton, where he had been a star on the basketball team. At Sidley she was part of the intellectual property group and specialized in entertainment law; at some point, she said, she might have to consider moving to Los Angeles or New York to pursue her career. Oh, Michelle was full of plans that day, on the fast track, with no time, she told me, for distractions—especially men. But she knew how to laugh, brightly and easily, and I noticed she didn’t seem in too much of a hurry to get back to the office. And there was something else, a glimmer that danced across her round, dark eyes whenever I looked at her, the slightest hint of uncertainty, as if, deep inside, she knew how fragile things really were, and that if she ever let go, even for a moment, all her plans might quickly unravel. That touched me somehow, that trace of vulnerability. I wanted to know that part of her. For the next several weeks, we saw each other every day, in the law library or the cafeteria or at one of the many outings that law firms organize for their summer associates to convince them that their life in the law will not be endless hours of poring through documents. She took me to one or two parties, tactfully overlooking my limited wardrobe, and even tried to set me up with a couple of her friends. Still, she refused to go out on a proper date. It wasn’t appropriate, she said, since she was my advisor. “That’s a poor excuse,” I told her. “Come on, what advice are you giving me? You’re showing me how the copy machine works. You’re telling me what restaurants to try. I don’t think the partners will consider one date a serious breach of firm policy.” She shook her head. “Sorry.” “Okay, I’ll quit. How’s that? You’re my advisor. 
Tell me who I have to talk to.” Eventually I wore her down. After a firm picnic, she drove me back to my apartment, and I offered to buy her an ice cream cone at the Baskin-Robbins across the street. We sat on the curb and ate our cones in the sticky afternoon heat, and I told her about working at Baskin-Robbins when I was a teenager and how it was hard to look cool in a brown apron and cap. She told me that for a span of two or three years as a child, she had refused to eat anything except peanut butter and jelly. I said that I’d like to meet her family. She said that she would like that. I asked if I could kiss her. It tasted of chocolate. We spent the rest of the summer together. I told her about organizing, and living in Indonesia, and what it was like to bodysurf. She told me about her childhood friends, and a trip to Paris she’d taken in high school, and her favorite Stevie Wonder songs. But it wasn’t until I met Michelle’s family that I began to understand her. It turned out that visiting the Robinson household was like dropping in on the set of Leave It to Beaver. There was Frasier, the kindly, good-humored father, who never missed a day of work or any of his son’s ball games. There was Marian, the pretty, sensible mother who baked birthday cakes, kept order in the house, and had volunteered at school to make sure her children were behaving and that the teachers were doing what they were supposed to be doing. There was Craig, the basketball-star brother, tall and friendly and courteous and funny, working as an investment banker but dreaming of going into coaching someday. And there were uncles and aunts and cousins everywhere, stopping by to sit around the kitchen table and eat until they burst and tell wild stories and listen to Grandpa’s old jazz collection and laugh deep into the night. All that was missing was the dog. Marian didn’t want a dog tearing up the house. What made this vision of domestic bliss all the more impressive was the fact that the Robinsons had had to overcome hardships that one rarely saw on prime-time TV. There were the usual issues of race, of course: the limited opportunities available to Michelle’s parents growing up in Chicago during the fifties and sixties; the racial steering and panic peddling that had driven white families away from their neighborhood; the extra energy required from black parents to compensate for smaller incomes and more violent streets and underfunded playgrounds and indifferent schools. But there was a more specific tragedy at the center of the Robinson household. At the age of thirty, in the prime of his life, Michelle’s father had been diagnosed with multiple sclerosis. For the next twenty-five years, as his condition steadily deteriorated, he had carried out his responsibilities to his family without a trace of self-pity, giving himself an extra hour every morning to get to work, struggling with every physical act from driving a car to buttoning his shirt, smiling and joking as he labored—at first with a limp and eventually with the aid of two canes, his balding head beading with sweat—across a field to watch his son play, or across the living room to give his daughter a kiss. 
After we were married, Michelle would help me understand the hidden toll that her father’s illness had taken on her family; how heavy a burden Michelle’s mother had been forced to carry; how carefully circumscribed their lives together had been, with even the smallest outing carefully planned to avoid problems or awkwardness; how terrifyingly random life seemed beneath the smiles and laughter. But back then I saw only the joy of the Robinson house. For someone like me, who had barely known his father, who had spent much of his life traveling from place to place, his bloodlines scattered to the four winds, the home that Frasier and Marian Robinson had built for themselves and their children stirred a longing for stability and a sense of place that I had not realized was there. Just as Michelle perhaps saw in me a life of adventure, risk, travel to exotic lands—a wider horizon than she had previously allowed herself. Six months after Michelle and I met, her father died suddenly of complications after a kidney operation. I flew back to Chicago and stood at his gravesite, Michelle’s head on my shoulder. As the casket was lowered, I promised Frasier Robinson that I would take care of his girl. I realized that in some unspoken, still tentative way, she and I were already becoming a family. THERE’S A LOT of talk these days about the decline of the American family. Social conservatives claim that the traditional family is under assault from Hollywood movies and gay pride parades. Liberals point to the economic factors—from stagnating wages to inadequate day care—that have put families under increasing duress. Our popular culture feeds the alarm, with tales of women consigned to permanent singlehood, men unwilling to make lasting commitments, and teens engaged in endless sexual escapades. Nothing seems settled, as it was in the past; our roles and relationships all feel up for grabs. Given this hand-wringing, it may be helpful to step back and remind ourselves that the institution of marriage isn’t disappearing anytime soon. While it’s true that marriage rates have declined steadily since the 1950s, some of the decline is a result of more Americans delaying marriage to pursue an education or establish a career; by the age of forty-five, 89 percent of women and 83 percent of men will have tied the knot at least once. Married couples continue to head 67 percent of American families, and the vast majority of Americans still consider marriage to be the best foundation for personal intimacy, economic stability, and child rearing. Still, there’s no denying that the nature of the family has changed over the last fifty years. Although divorce rates have declined by 21 percent since their peak in the late seventies and early eighties, half of all first marriages still end in divorce. Compared to our grandparents, we’re more tolerant of premarital sex, more likely to cohabit, and more likely to live alone. We’re also far more likely to be raising children in nontraditional households; 60 percent of all divorces involve children, 33 percent of all children are born out of wedlock, and 34 percent of children don’t live with their biological fathers. These trends are particularly acute in the African American community, where it’s fair to say that the nuclear family is on the verge of collapse. Since 1950, the marriage rate for black women has plummeted from 62 percent to 36 percent. 
Between 1960 and 1995, the number of African American children living with two married parents dropped by more than half; today 54 percent of all African American children live in single-parent households, compared to about 23 percent of all white children. For adults, at least, the effect of these changes is a mixed bag. Research suggests that on average, married couples live healthier, wealthier, and happier lives, but no one claims that men and women benefit from being trapped in bad or abusive marriages. Certainly the decision of increasing numbers of Americans to delay marriage makes sense; not only does today’s information economy demand more time in school, but studies show that couples who wait until their late twenties or thirties to get married are more likely to stay married than those who marry young. Whatever the effect on adults, though, these trends haven’t been so good for our children. Many single moms—including the one who raised me—do a heroic job on behalf of their kids. Still, children living with single mothers are five times more likely to be poor than children in two-parent households. Children in single-parent homes are also more likely to drop out of school and become teen parents, even when income is factored out. And the evidence suggests that on average, children who live with both their biological mother and father do better than those who live in stepfamilies or with cohabiting partners. In light of these facts, policies that strengthen marriage for those who choose it and that discourage unintended births outside of marriage are sensible goals to pursue. For example, most people agree that neither federal welfare programs nor the tax code should penalize married couples; those aspects of welfare reform enacted under Clinton and those elements of the Bush tax plan that reduced the marriage penalty enjoy strong bipartisan support. The same goes for teen pregnancy prevention. Everyone agrees that teen pregnancies place both mother and child at risk for all sorts of problems. Since 1990, the teen pregnancy rate has dropped by 28 percent, an unadulterated piece of good news. But teens still account for almost a quarter of out-of-wedlock births, and teen mothers are more likely to have additional out-of-wedlock births as they get older. Community-based programs that have a proven track record in preventing unwanted pregnancies—both by encouraging abstinence and by promoting the proper use of contraception—deserve broad support. Finally, preliminary research shows that marriage education workshops can make a real difference in helping married couples stay together and in encouraging unmarried couples who are living together to form a more lasting bond. Expanding access to such services to low-income couples, perhaps in concert with job training and placement, medical coverage, and other services already available, should be something everybody can agree on. But for many social conservatives, these commonsense approaches don’t go far enough. They want a return to a bygone era, in which sexuality outside of marriage was subject to both punishment and shame, obtaining a divorce was far more difficult, and marriage offered not merely personal fulfillment but also well-defined social roles for men and for women. 
In their view, any government policy that appears to reward or even express neutrality toward what they consider to be immoral behavior—whether providing birth control to young people, abortion services to women, welfare support for unwed mothers, or legal recognition of same-sex unions—inherently devalues the marital bond. Such policies take us one step closer, the argument goes, to a brave new world in which gender differences have been erased, sex is purely recreational, marriage is disposable, motherhood is an inconvenience, and civilization itself rests on shifting sands. I understand the impulse to restore a sense of order to a culture that’s constantly in flux. And I certainly appreciate the desire of parents to shield their children from values they consider unwholesome; it’s a feeling I often share when I listen to the lyrics of songs on the radio. But all in all, I have little sympathy for those who would enlist the government in the task of enforcing sexual morality. Like most Americans, I consider decisions about sex, marriage, divorce, and childbearing to be highly personal—at the very core of our system of individual liberty. Where such personal decisions raise the prospect of significant harm to others—as is true with child abuse, incest, bigamy, domestic violence, or failure to pay child support—society has a right and duty to step in. (Those who believe in the personhood of the fetus would put abortion in this category.) Beyond that, I have no interest in seeing the president, Congress, or a government bureaucracy regulating what goes on in America’s bedrooms. Moreover, I don’t believe we strengthen the family by bullying or coercing people into the relationships we think are best for them—or by punishing those who fail to meet our standards of sexual propriety. I want to encourage young people to show more reverence toward sex and intimacy, and I applaud parents, congregations, and community programs that transmit that message. But I’m not willing to consign a teenage girl to a lifetime of struggle because of lack of access to birth control. I want couples to understand the value of commitment and the sacrifices marriage entails. But I’m not willing to use the force of law to keep couples together regardless of their personal circumstances. Perhaps I just find the ways of the human heart too various, and my own life too imperfect, to believe myself qualified to serve as anyone’s moral arbiter. I do know that in our fourteen years of marriage, Michelle and I have never had an argument as a result of what other people are doing in their personal lives. What we have argued about—repeatedly—is how to balance work and family in a way that’s equitable to Michelle and good for our children. We’re not alone in this. In the sixties and early seventies, the household Michelle grew up in was the norm—more than 70 percent of families had Mom at home and relied on Dad as the sole breadwinner. Today those numbers are reversed. Seventy percent of families with children are headed by two working parents or a single working parent. The result has been what my policy director and work-family expert Karen Kornbluh calls “the juggler family,” in which parents struggle to pay the bills, look after their children, maintain a household, and maintain their relationship. Keeping all these balls in the air takes its toll on family life. 
As Karen explained when she was director of the Work and Family Program at the New America Foundation and testified before the Senate Subcommittee on Children and Families: “Americans today have 22 fewer hours a week to spend with their kids than they did in 1969. Millions of children are left in unlicensed day care every day—or at home alone with the TV as a babysitter. Employed mothers lose almost an hour of sleep a day in their attempt to make it all add up. Recent data show that parents with school age children show high signs of stress—stress that has an impact on their productivity and work—when they have inflexible jobs and unstable after-school care.” Sound familiar? Many social conservatives suggest that this flood of women out of the home and into the workplace is a direct consequence of feminist ideology, and hence can be reversed if women will just come to their senses and return to their traditional homemaking roles. It’s true that ideas about equality for women have played a critical role in the transformation of the workplace; in the minds of most Americans, the opportunity for women to pursue careers, achieve economic independence, and realize their talents on an equal footing with men has been one of the great achievements of modern life. But for the average American woman, the decision to work isn’t simply a matter of changing attitudes. It’s a matter of making ends meet. Consider the facts. Over the last thirty years, the average earnings of American men have grown less than 1 percent after being adjusted for inflation. Meanwhile, the cost of everything, from housing to health care to education, has steadily risen. What has kept a large swath of American families from falling out of the middle class has been Mom’s paycheck. In their book The Two-Income Trap, Elizabeth Warren and Amelia Warren Tyagi point out that the additional income mothers bring home isn’t going to luxury items. Instead, almost all of it goes to purchase what families believe to be investments in their children’s future—preschool education, college tuition, and most of all, homes in safe neighborhoods with good public schools. In fact, between these fixed costs and the added expenses of a working mother (particularly day care and a second car), the average two-income family has less discretionary income—and is less financially secure—than its single-earner counterpart thirty years ago. So is it possible for the average family to return to life on a single income? Not when every other family on the block is earning two incomes and bidding up the prices of homes, schools, and college tuition. Warren and Tyagi show that an average single-earner family today that tried to maintain a middle-class lifestyle would have 60 percent less discretionary income than its 1970s counterpart. In other words, for most families, having Mom stay at home means living in a less-safe neighborhood and enrolling their children in a less-competitive school. That’s not a choice most Americans are willing to make. Instead they do the best they can under the circumstances, knowing that the type of household they grew up in—the type of household in which Frasier and Marian Robinson raised their kids—has become much, much harder to sustain. BOTH MEN AND women have had to adjust to these new realities. But it’s hard to argue with Michelle when she insists that the burdens of the modern family fall more heavily on the woman. 
For the first few years of our marriage, Michelle and I went through the usual adjustments all couples go through: learning to read each other’s moods, accepting the quirks and habits of a stranger underfoot. Michelle liked to wake up early and could barely keep her eyes open after ten o’clock. I was a night owl and could be a bit grumpy (mean, Michelle would say) within the first half hour or so of getting out of bed. Partly because I was still working on my first book, and perhaps because I had lived much of my life as an only child, I would often spend the evening holed up in my office in the back of our railroad apartment; what I considered normal often left Michelle feeling lonely. I invariably left the butter out after breakfast and forgot to twist the little tie around the bread bag; Michelle could rack up parking tickets like nobody’s business. Mostly, though, those early years were full of ordinary pleasures—going to movies, having dinner with friends, catching the occasional concert. We were both working hard: I was practicing law at a small civil rights firm and had started teaching at the University of Chicago Law School, while Michelle had decided to leave her law practice, first to work in Chicago’s Department of Planning and then to run the Chicago arm of a national service program called Public Allies. Our time together got squeezed even more when I ran for the state legislature, but despite my lengthy absences and her general dislike of politics, Michelle supported the decision; “I know it’s something that you want to do,” she would tell me. On the nights that I was in Springfield, we’d talk and laugh over the phone, sharing the humor and frustrations of our days apart, and I would fall asleep content in the knowledge of our love. Then Malia was born, a Fourth of July baby, so calm and so beautiful, with big, hypnotic eyes that seemed to read the world the moment they opened. Malia’s arrival came at an ideal time for both of us: Because I was out of session and didn’t have to teach during the summer, I was able to spend every evening at home; meanwhile, Michelle had decided to accept a part-time job at the University of Chicago so she could spend more time with the baby, and the new job didn’t start until October. For three magical months the two of us fussed and fretted over our new baby, checking the crib to make sure she was breathing, coaxing smiles from her, singing her songs, and taking so many pictures that we started to wonder if we were damaging her eyes. Suddenly our different biorhythms came in handy: While Michelle got some well-earned sleep, I would stay up until one or two in the morning, changing diapers, heating breast milk, feeling my daughter’s soft breath against my chest as I rocked her to sleep, guessing at her infant dreams. But when fall came—when my classes started back up, the legislature went back into session, and Michelle went back to work—the strains in our relationship began to show. I was often gone for three days at a stretch, and even when I was back in Chicago, I might have evening meetings to attend, or papers to grade, or briefs to write. Michelle found that a part-time job had a funny way of expanding. We found a wonderful in-home babysitter to look after Malia while we were at work, but with a full-time employee suddenly on our payroll, money got tight. Tired and stressed, we had little time for conversation, much less romance. 
When I launched my ill-fated congressional run, Michelle put up no pretense of being happy with the decision. My failure to clean up the kitchen suddenly became less endearing. Leaning down to kiss Michelle good-bye in the morning, all I would get was a peck on the cheek. By the time Sasha was born—just as beautiful, and almost as calm as her sister—my wife’s anger toward me seemed barely contained. “You only think about yourself,” she would tell me. “I never thought I’d have to raise a family alone.” I was stung by such accusations; I thought she was being unfair. After all, it wasn’t as if I went carousing with the boys every night. I made few demands of Michelle—I didn’t expect her to darn my socks or have dinner waiting for me when I got home. Whenever I could, I pitched in with the kids. All I asked for in return was a little tenderness. Instead, I found myself subjected to endless negotiations about every detail of managing the house, long lists of things that I needed to do or had forgotten to do, and a generally sour attitude. I reminded Michelle that compared to most families, we were incredibly lucky. I reminded her as well that for all my flaws, I loved her and the girls more than anything else. My love should be enough, I thought. As far as I was concerned, she had nothing to complain about. It was only upon reflection, after the trials of those years had passed and the kids had started school, that I began to appreciate what Michelle had been going through at the time, the struggles so typical of today’s working mother. For no matter how liberated I liked to see myself as—no matter how much I told myself that Michelle and I were equal partners, and that her dreams and ambitions were as important as my own—the fact was that when children showed up, it was Michelle and not I who was expected to make the necessary adjustments. Sure, I helped, but it was always on my terms, on my schedule. Meanwhile, she was the one who had to put her career on hold. She was the one who had to make sure that the kids were fed and bathed every night. If Malia or Sasha got sick or the babysitter failed to show up, it was she who, more often than not, had to get on the phone to cancel a meeting at work. It wasn’t just the constant scrambling between her work and the children that made Michelle’s situation so tough. It was also the fact that from her perspective she wasn’t doing either job well. This was not true, of course; her employers loved her, and everyone remarked on what a good mother she was. But I came to see that in her own mind, two visions of herself were at war with each other—the desire to be the woman her mother had been, solid, dependable, making a home and always there for her kids; and the desire to excel in her profession, to make her mark on the world and realize all those plans she’d had on the very first day that we met. In the end, I credit Michelle’s strength—her willingness to manage these tensions and make sacrifices on behalf of myself and the girls—with carrying us through the difficult times. But we also had resources at our disposal that many American families don’t have. For starters, Michelle’s and my status as professionals meant that we could rework our schedules to handle an emergency (or just take a day off) without risk of losing our jobs. Fifty-seven percent of American workers don’t have that luxury; indeed, most of them can’t take a day off to look after a child without losing pay or using vacation days. 
For parents who do try to make their own schedules, flexibility often means accepting part-time or temporary work with no career ladder and few or no benefits. Michelle and I also had enough income to cover all the services that help ease the pressures of two-earner parenthood: reliable child care, extra babysitting whenever we needed it, take-out dinners when we had neither the time nor the energy to cook, someone to come in and clean the house once a week, and private preschool and summer day camp once the kids were old enough. For most American families, such help is financially out of reach. The cost of day care is especially prohibitive; the United States is practically alone among Western nations in not providing government-subsidized, high-quality day-care services to all its workers. Finally, Michelle and I had my mother-in-law, who lives only fifteen minutes away from us, in the same house in which Michelle was raised. Marian is in her late sixties but looks ten years younger, and last year, when Michelle went back to full-time work, Marian decided to cut her hours at the bank so she could pick up the girls from school and look after them every afternoon. For many American families, such help is simply unavailable; in fact, for many families, the situation is reversed—someone in the family has to provide care for an aging parent on top of other family responsibilities. Of course, it’s not possible for the federal government to guarantee each family a wonderful, healthy, semiretired mother-in-law who happens to live close by. But if we’re serious about family values, then we can put policies in place that make the juggling of work and parenting a little bit easier. We could start by making high-quality day care affordable for every family that needs it. In contrast to most European countries, day care in the United States is a haphazard affair. Improved day-care licensing and training, an expansion of the federal and state child tax credits, and sliding-scale subsidies to families that need them all could provide both middle-class and low-income parents some peace of mind during the workday—and benefit employers through reduced absenteeism. It’s also time to redesign our schools—not just for the sake of working parents, but also to help prepare our children for a more competitive world. Countless studies confirm the educational benefits of strong preschool programs, which is why even families who have a parent at home often seek them out. The same goes for longer school days, summer school, and after-school programs. Providing all kids access to these benefits would cost money, but as part of broader school reform efforts, it’s a cost that we as a society should be willing to bear. Most of all, we need to work with employers to increase the flexibility of work schedules. The Clinton Administration took a step in this direction with the Family and Medical Leave Act (FMLA), but because it requires only unpaid leave and applies only to companies with fifty or more employees, most American workers aren’t able to take advantage of it. And although all other wealthy nations but one provide some form of paid parental leave, the business community’s resistance to mandated paid leave has been fierce, in part because of concerns over how it would affect small businesses. With a little creativity, we should be able to break this impasse. California has recently initiated paid leave through its disability insurance fund, thereby making sure that the costs aren’t borne by employers alone. 
We can also give parents flexibility to meet their day-to-day needs. Already, many larger companies offer formal flextime programs and report higher employee morale and less employee turnover as a result. Great Britain has come up with a novel approach to the problem—as part of a highly popular “Work-Life Balance Campaign,” parents with children under the age of six have the right to file a written request with employers for a change in their schedule. Employers aren’t required to grant the request, but they are required to meet with the employee to consider it; so far, one-quarter of all eligible British parents have successfully negotiated more family-friendly hours without a drop in productivity. With a combination of such innovative policy making, technical assistance, and greater public awareness, government can help businesses to do right by their employees at nominal expense. Of course, none of these policies need discourage families from deciding to keep a parent at home, regardless of the financial sacrifices. For some families, that may mean doing without certain material comforts. For others, it may mean home schooling or a move to a community where the cost of living is lower. For some families, it may be the father who stays at home—although for most families it will still be the mother who serves as the primary caregiver. Whatever the case may be, such decisions should be honored. If there’s one thing that social conservatives have been right about, it’s that our modern culture sometimes fails to fully appreciate the extraordinary emotional and financial contributions—the sacrifices and just plain hard work—of the stay-at-home mom. Where social conservatives have been wrong is in insisting that this traditional role is innate—the best or only model of motherhood. I want my daughters to have a choice as to what’s best for them and their families. Whether they will have such choices will depend not just on their own efforts and attitudes. As Michelle has taught me, it will also depend on men—and American society—respecting and accommodating the choices they make.
“HI, DADDY.” “Hey, sweetie-pie.” It’s Friday afternoon and I’m home early to look after the girls while Michelle goes to the hairdresser. I gather up Malia in a hug and notice a blond girl in our kitchen, peering at me through a pair of oversized glasses. “Who’s this?” I ask, setting Malia back on the floor. “This is Sam. She’s over for a playdate.” “Hi, Sam.” I offer Sam my hand, and she considers it for a moment before shaking it loosely. Malia rolls her eyes. “Listen, Daddy…you don’t shake hands with kids.” “You don’t?” “No,” Malia says. “Not even teenagers shake hands. You may not have noticed, but this is the twenty-first century.” Malia looks at Sam, who represses a smirk. “So what do you do in the twenty-first century?” “You just say ‘hey.’ Sometimes you wave. That’s pretty much it.” “I see. I hope I didn’t embarrass you.” Malia smiles. “That’s okay, Daddy. You didn’t know, because you’re used to shaking hands with grown-ups.” “That’s true. Where’s your sister?” “She’s upstairs.” I walk upstairs to find Sasha standing in her underwear and a pink top. She pulls me down for a hug and then tells me she can’t find any shorts. I check in the closet and find a pair of blue shorts sitting right on top of her chest of drawers. “What are these?” Sasha frowns but reluctantly takes the shorts from me and pulls them on. After a few minutes, she climbs into my lap.
“These shorts aren’t comfortable, Daddy.” We go back into Sasha’s closet, open the drawer again, and find another pair of shorts, also blue. “How about these?” I ask. Sasha frowns again. Standing there, she looks like a three-foot version of her mother. Malia and Sam walk in to observe the stand-off. “Sasha doesn’t like either of those shorts,” Malia explains. I turn to Sasha and ask her why. She looks up at me warily, taking my measure. “Pink and blue don’t go together,” she says finally. Malia and Sam giggle. I try to look as stern as Michelle might look in such circumstances and tell Sasha to put on the shorts. She does what I say, but I realize she’s just indulging me. When it comes to my daughters, no one is buying my tough-guy routine. Like many men today, I grew up without a father in the house. My mother and father divorced when I was only two years old, and for most of my life I knew him only through the letters he sent and the stories my mother and grandparents told. There were men in my life—a stepfather with whom we lived for four years, and my grandfather, who along with my grandmother helped raise me the rest of the time—and both were good men who treated me with affection. But my relationships with them were necessarily partial, incomplete. In the case of my stepfather, this was a result of limited duration and his natural reserve. And as close as I was to my grandfather, he was both too old and too troubled to provide me with much direction. It was women, then, who provided the ballast in my life—my grandmother, whose dogged practicality kept the family afloat, and my mother, whose love and clarity of spirit kept my sister’s and my world centered. Because of them I never wanted for anything important. From them I would absorb the values that guide me to this day. Still, as I got older I came to recognize how hard it had been for my mother and grandmother to raise us without a strong male presence in the house. I felt as well the mark that a father’s absence can leave on a child. I determined that my father’s irresponsibility toward his children, my stepfather’s remoteness, and my grandfather’s failures would all become object lessons for me, and that my own children would have a father they could count on. In the most basic sense, I’ve succeeded. My marriage is intact and my family is provided for. I attend parent-teacher conferences and dance recitals, and my daughters bask in my adoration. And yet, of all the areas of my life, it is in my capacities as a husband and father that I entertain the most doubt. I realize I’m not alone in this; at some level I’m just going through the same conflicting emotions that other fathers experience as they navigate an economy in flux and changing social norms. Even as it becomes less and less attainable, the image of the 1950s father—supporting his family with a nine-to-five job, sitting down for the dinner that his wife prepares every night, coaching Little League, and handling power tools—hovers over the culture no less powerfully than the image of the stay-at-home mom. For many men today, the inability to be their family’s sole breadwinner is a source of frustration and even shame; one doesn’t have to be an economic determinist to believe that high unemployment and low wages contribute to the lack of parental involvement and low marriage rates among African American men. For working men, no less than for working women, the terms of employment have changed.
Whether a high-paid professional or a worker on the assembly line, fathers are expected to put in longer hours on the job than they did in the past. And these more demanding work schedules are occurring precisely at the time when fathers are expected—and in many cases want—to be more actively involved in the lives of their children than their own fathers may have been in theirs. But if the gap between the idea of parenthood in my head and the compromised reality that I live isn’t unique, that doesn’t relieve my sense that I’m not always giving my family all that I could. Last Father’s Day, I was invited to speak to the members of Salem Baptist Church on the South Side of Chicago. I didn’t have a prepared text, but I took as my theme “what it takes to be a full-grown man.” I suggested that it was time that men in general and black men in particular put away their excuses for not being there for their families. I reminded the men in the audience that being a father meant more than bearing a child; that even those of us who were physically present in the home are often emotionally absent; that precisely because many of us didn’t have fathers in the house we have to redouble our efforts to break the cycle; and that if we want to pass on high expectations to our children, we have to have higher expectations for ourselves. Thinking back on what I said, I ask myself sometimes how well I’m living up to my own exhortations. After all, unlike many of the men to whom I was speaking that day, I don’t have to take on two jobs or the night shift in a valiant attempt to put food on the table. I could find a job that allowed me to be home every night. Or I could find a job that paid more money, a job in which long hours might at least be justified by some measurable benefit to my family—the ability of Michelle to cut back her hours, say, or a fat trust fund for the kids. Instead, I have chosen a life with a ridiculous schedule, a life that requires me to be gone from Michelle and the girls for long stretches of time and that exposes Michelle to all sorts of stress. I may tell myself that in some larger sense I am in politics for Malia and Sasha, that the work I do will make the world a better place for them. But such rationalizations seem feeble and painfully abstract when I’m missing one of the girls’ school potlucks because of a vote, or calling Michelle to tell her that session’s been extended and we need to postpone our vacation. Indeed, my recent success in politics does little to assuage the guilt; as Michelle told me once, only half joking, seeing your dad’s picture in the paper may be kind of neat the first time it happens, but when it happens all the time it’s probably kind of embarrassing. And so I do my best to answer the accusation that floats around in my mind—that I am selfish, that I do what I do to feed my own ego or fill a void in my heart. When I’m not out of town, I try to be home for dinner, to hear from Malia and Sasha about their day, to read to them and tuck them into bed. I try not to schedule appearances on Sundays, and in the summers I’ll use the day to take the girls to the zoo or the pool; in the winters we might visit a museum or the aquarium. I scold my daughters gently when they misbehave, and try to limit their intake of both television and junk food. In all this I am encouraged by Michelle, although there are times when I get the sense that I’m encroaching on her space—that by my absences I may have forfeited certain rights to interfere in the world she has built. 
As for the girls, they seem to be thriving despite my frequent disappearances. Mostly this is a testimony to Michelle’s parenting skills; she seems to have a perfect touch when it comes to Malia and Sasha, an ability to set firm boundaries without being stifling. She’s also made sure that my election to the Senate hasn’t altered the girls’ routines very much, although what passes for a normal middle-class childhood in America these days seems to have changed as much as has parenting. Gone are the days when parents just sent their child outside or to the park and told him or her to be back before dinner. Today, with news of abductions and an apparent suspicion of anything spontaneous or even a tiny bit slothful, the schedules of children seem to rival those of their parents. There are playdates, ballet classes, gymnastics classes, tennis lessons, piano lessons, soccer leagues, and what seem like weekly birthday parties. I told Malia once that during the entire time that I was growing up, I attended exactly two birthday parties, both of which involved five or six kids, cone hats, and a cake. She looked at me the way I used to look at my grandfather when he told stories of the Depression—with a mixture of fascination and incredulity. It is left to Michelle to coordinate all the children’s activities, which she does with a general’s efficiency. When I can, I volunteer to help, which Michelle appreciates, although she is careful to limit my responsibilities. The day before Sasha’s birthday party this past June, I was told to procure twenty balloons, enough cheese pizza to feed twenty kids, and ice. This seemed manageable, so when Michelle told me that she was going to get goody bags to hand out at the end of the party, I suggested that I do that as well. She laughed. “You can’t handle goody bags,” she said. “Let me explain the goody bag thing. You have to go into the party store and choose the bags. Then you have to choose what to put in the bags, and what is in the boys’ bags has to be different from what is in the girls’ bags. You’d walk in there and wander around the aisles for an hour, and then your head would explode.” Feeling less confident, I got on the Internet. I found a place that sold balloons near the gymnastics studio where the party would be held, and a pizza place that promised delivery at 3:45 p.m. By the time the guests showed up the next day, the balloons were in place and the juice boxes were on ice. I sat with the other parents, catching up and watching twenty or so five-year-olds run and jump and bounce on the equipment like a band of merry elves. I had a slight scare when at 3:50 the pizzas had not yet arrived, but the delivery person got there ten minutes before the children were scheduled to eat. Michelle’s brother, Craig, knowing the pressure I was under, gave me a high five. Michelle looked up from putting pizza on paper plates and smiled. As a grand finale, after all the pizza was eaten and the juice boxes drunk, after we had sung “Happy Birthday” and eaten some cake, the gymnastics instructor gathered all the kids around an old, multicolored parachute and told Sasha to sit at its center. On the count of three, Sasha was hoisted up into the air and back down again, then up for a second time, and then for a third. And each time she rose above the billowing sail, she laughed and laughed with a look of pure joy. I wonder if Sasha will remember that moment when she is grown. 
Probably not; it seems as if I can retrieve only the barest fragments of memory from when I was five. But I suspect that the happiness she felt on that parachute registers permanently in her; that such moments accumulate and embed themselves in a child’s character, becoming a part of their soul. Sometimes, when I listen to Michelle talk about her father, I hear the echo of such joy in her, the love and respect that Frasier Robinson earned not through fame or spectacular deeds but through small, daily, ordinary acts—a love he earned by being there. And I ask myself whether my daughters will be able to speak of me in that same way. As it is, the window for making such memories rapidly closes. Already Malia seems to be moving into a different phase; she’s more curious about boys and relationships, more self-conscious about what she wears. She’s always been older than her years, uncannily wise. Once, when she was just six years old and we were taking a walk together along the lake, she asked me out of the blue if our family was rich. I told her that we weren’t really rich, but that we had a lot more than most people. I asked her why she wanted to know. “Well…I’ve been thinking about it, and I’ve decided I don’t want to be really, really rich. I think I want a simple life.” Her words were so unexpected that I laughed. She looked up at me and smiled, but her eyes told me she’d meant what she said. I often think of that conversation. I ask myself what Malia makes of my not-so-simple life. Certainly she notices that other fathers attend her team’s soccer games more often than I do. If this upsets her, she doesn’t let it show, for Malia tends to be protective of other people’s feelings, trying to see the best in every situation. Still, it gives me small comfort to think that my eight-year-old daughter loves me enough to overlook my shortcomings. I was able to get to one of Malia’s games recently, when session ended early for the week. It was a fine summer afternoon, and the several fields were full of families when I arrived, blacks and whites and Latinos and Asians from all over the city, women sitting on lawn chairs, men practicing kicks with their sons, grandparents helping babies to stand. I spotted Michelle and sat down on the grass beside her, and Sasha came to sit in my lap. Malia was already out on the field, part of a swarm of players surrounding the ball, and although soccer’s not her natural sport—she’s a head taller than some of her friends, and her feet haven’t yet caught up to her height—she plays with an enthusiasm and competitiveness that makes us cheer loudly. At halftime, Malia came over to where we were sitting. “How you feeling, sport?” I asked her. “Great!” She took a swig of water. “Daddy, I have a question.” “Shoot.” “Can we get a dog?” “What does your mother say?” “She told me to ask you. I think I’m wearing her down.” I looked at Michelle, who smiled and offered a shrug. “How about we talk it over after the game?” I said. “Okay.” Malia took another sip of water and kissed me on the cheek. “I’m glad you’re home,” she said. Before I could answer, she had turned around and started back out onto the field. And for an instant, in the glow of the late afternoon, I thought I saw my older daughter as the woman she would become, as if with each step she were growing taller, her shape filling out, her long legs carrying her into a life of her own. I squeezed Sasha a little tighter in my lap. Perhaps sensing what I was feeling, Michelle took my hand. 
And I remembered a quote Michelle had given to a reporter during the campaign, when he’d asked her what it was like being a political wife. “It’s hard,” Michelle had said. Then, according to the reporter, she had added with a sly smile, “And that’s why Barack is such a grateful man.” As usual, my wife is right.
Epilogue
MY SWEARING IN to the U.S. Senate in January 2005 completed a process that had begun the day I announced my candidacy two years earlier—the exchange of a relatively anonymous life for a very public one. To be sure, many things have remained constant. Our family still makes its home in Chicago. I still go to the same Hyde Park barbershop to get my hair cut, Michelle and I have the same friends over to our house as we did before the election, and our daughters still run through the same playgrounds. Still, there’s no doubt that the world has changed profoundly for me, in ways that I don’t always care to admit. My words, my actions, my travel plans, and my tax returns all end up in the morning papers or on the nightly news broadcast. My daughters have to endure the interruptions of well-meaning strangers whenever their father takes them to the zoo. Even outside of Chicago, it’s becoming harder to walk unnoticed through airports. As a rule, I find it difficult to take all this attention very seriously. After all, there are days when I still walk out of the house with a suit jacket that doesn’t match my suit pants. My thoughts are so much less tidy, my days so much less organized than the image of me that now projects itself into the world, that it makes for occasional comic moments. I remember the day before I was sworn in, my staff and I decided we should hold a press conference in our office. At the time, I was ranked ninety-ninth in seniority, and all the reporters were crammed into a tiny transition office in the basement of the Dirksen Office Building, across the hall from the Senate supply store. It was my first day in the building; I had not taken a single vote, had not introduced a single bill—indeed I had not even sat down at my desk when a very earnest reporter raised his hand and asked, “Senator Obama, what is your place in history?” Even some of the other reporters had to laugh. Some of the hyperbole can be traced back to my speech at the 2004 Democratic Convention in Boston, the point at which I first gained national attention. In fact, the process by which I was selected as the keynote speaker remains something of a mystery to me. I had met John Kerry for the first time after the Illinois primary, when I spoke at his fund-raiser and accompanied him to a campaign event highlighting the importance of job-training programs. A few weeks later, we got word that the Kerry people wanted me to speak at the convention, although it was not yet clear in what capacity. One afternoon, as I drove back from Springfield to Chicago for an evening campaign event, Kerry campaign manager Mary Beth Cahill called to deliver the news. After I hung up, I turned to my driver, Mike Signator. “I guess this is pretty big,” I said. Mike nodded. “You could say that.” I had only been to one previous Democratic convention, the 2000 Convention in Los Angeles.
I hadn’t planned to attend that convention; I was just coming off my defeat in the Democratic primary for the Illinois First Congressional District seat, and was determined to spend most of the summer catching up on work at the law practice that I’d left unattended during the campaign (a neglect that had left me more or less broke), as well as make up for lost time with a wife and daughter who had seen far too little of me during the previous six months. At the last minute, though, several friends and supporters who were planning to go insisted that I join them. You need to make national contacts, they told me, for when you run again—and anyway, it will be fun. Although they didn’t say this at the time, I suspect they saw a trip to the convention as a bit of useful therapy for me, on the theory that the best thing to do after getting thrown off a horse is to get back on right away. Eventually I relented and booked a flight to L.A. When I landed, I took the shuttle to Hertz Rent A Car, handed the woman behind the counter my American Express card, and began looking at the map for directions to a cheap hotel that I’d found near Venice Beach. After a few minutes the Hertz woman came back with a look of embarrassment on her face. “I’m sorry, Mr. Obama, but your card’s been rejected.” “That can’t be right. Can you try again?” “I tried twice, sir. Maybe you should call American Express.” After half an hour on the phone, a kindhearted supervisor at American Express authorized the car rental. But the episode served as an omen of things to come. Not being a delegate, I couldn’t secure a floor pass; according to the Illinois Party chairman, he was already inundated with requests, and the best he could do was give me a pass that allowed entry only onto the convention site. I ended up watching most of the speeches on various television screens scattered around the Staples Center, occasionally following friends or acquaintances into skyboxes where it was clear I didn’t belong. By Tuesday night, I realized that my presence was serving neither me nor the Democratic Party any apparent purpose, and by Wednesday morning I was on the first flight back to Chicago. Given the distance between my previous role as a convention gate-crasher and my newfound role as convention keynoter, I had some cause to worry that my appearance in Boston might not go very well. But perhaps because by that time I had become accustomed to outlandish things happening in my campaign, I didn’t feel particularly nervous. A few days after the call from Ms. Cahill, I was back in my hotel room in Springfield, making notes for a rough draft of the speech while watching a basketball game. I thought about the themes that I’d sounded during the campaign—the willingness of people to work hard if given the chance, the need for government to help provide a foundation for opportunity, the belief that Americans felt a sense of mutual obligation toward one another. I made a list of the issues I might touch on—health care, education, the war in Iraq. But most of all, I thought about the voices of all the people I’d met on the campaign trail. I remembered Tim Wheeler and his wife in Galesburg, trying to figure out how to get their teenage son the liver transplant he needed. I remembered a young man in East Moline named Seamus Ahern who was on his way to Iraq—the desire he had to serve his country, the look of pride and apprehension on the face of his father. I remembered a young black woman I’d met in East St. 
Louis whose name I never would catch, but who told me of her efforts to attend college even though no one in her family had ever graduated from high school. It wasn’t just the struggles of these men and women that had moved me. Rather, it was their determination, their self-reliance, a relentless optimism in the face of hardship. It brought to mind a phrase that my pastor, Rev. Jeremiah A. Wright Jr., had once used in a sermon. The audacity of hope. That was the best of the American spirit, I thought—having the audacity to believe despite all the evidence to the contrary that we could restore a sense of community to a nation torn by conflict; the gall to believe that despite personal setbacks, the loss of a job or an illness in the family or a childhood mired in poverty, we had some control—and therefore responsibility—over our own fate. It was that audacity, I thought, that joined us as one people. It was that pervasive spirit of hope that tied my own family’s story to the larger American story, and my own story to those of the voters I sought to represent. I turned off the basketball game and started to write.
A FEW WEEKS later, I arrived in Boston, caught three hours’ sleep, and traveled from my hotel to the Fleet Center for my first appearance on Meet the Press. Toward the end of the segment, Tim Russert put up on the screen an excerpt from a 1996 interview with the Cleveland Plain-Dealer that I had forgotten about entirely, in which the reporter had asked me—as someone just getting into politics as a candidate for the Illinois state senate—what I thought about the Democratic Convention in Chicago.
The convention’s for sale, right…. You’ve got these $10,000-a-plate dinners, Golden Circle Clubs. I think when the average voter looks at that, they rightly feel they’ve been locked out of the process. They can’t attend a $10,000 breakfast. They know that those who can are going to get the kind of access they can’t imagine.
After the quote was removed from the screen, Russert turned to me. “A hundred and fifty donors gave $40 million to this convention,” he said. “It’s worse than Chicago, using your standards. Are you offended by that, and what message does that send the average voter?” I replied that politics and money were a problem for both parties, but that John Kerry’s voting record, and my own, indicated that we voted for what was best for the country. I said that a convention wouldn’t change that, although I did suggest that the more Democrats could encourage participation from people who felt locked out of the process, the more we stayed true to our origins as the party of the average Joe, the stronger we would be as a party. Privately, I thought my original 1996 quote was better. There was a time when political conventions captured the urgency and drama of politics—when nominations were determined by floor managers and head counts and side deals and arm-twisting, when passions or miscalculation might result in a second or third or fourth round of balloting. But that time passed long ago. With the advent of binding primaries, the much-needed end to the dominance of party bosses and backroom deals in smoke-filled rooms, today’s convention is bereft of surprises. Rather, it serves as a weeklong infomercial for the party and its nominee—as well as a means of rewarding the party faithful and major contributors with four days of food, drink, entertainment, and shoptalk. I spent most of the first three days at the convention fulfilling my role in this pageant.
I spoke to rooms full of major Democratic donors and had breakfast with delegates from across the fifty states. I practiced my speech in front of a video monitor, did a walk-through of how it would be staged, received instruction on where to stand, where to wave, and how to best use the microphones. My communications director, Robert Gibbs, and I trotted up and down the stairs of the Fleet Center, giving interviews that were sometimes only two minutes apart, to ABC, NBC, CBS, CNN, Fox News, and NPR, at each stop emphasizing the talking points that the Kerry-Edwards team had provided, each word of which had been undoubtedly tested in a battalion of polls and a panoply of focus groups. Given the breakneck pace of my days, I didn’t have much time to worry about how my speech would go over. It wasn’t until Tuesday night, after my staff and Michelle had debated for half an hour over what tie I should wear (we finally settled on the tie that Robert Gibbs was wearing), after we had ridden over to the Fleet Center and heard strangers shout “Good luck!” and “Give ’em hell, Obama!,” after we had visited with a very gracious and funny Teresa Heinz Kerry in her hotel room, until finally it was just Michelle and me sitting backstage and watching the broadcast, that I started to feel just a tad bit nervous. I mentioned to Michelle that my stomach was feeling a little grumbly. She hugged me tight, looked into my eyes, and said, “Just don’t screw it up, buddy!” We both laughed. Just then, one of the production managers came into the hold room and told me it was time to take my position offstage. Standing behind the black curtain, listening to Dick Durbin introduce me, I thought about my mother and father and grandfather and what it might have been like for them to be in the audience. I thought about my grandmother in Hawaii, watching the convention on TV because her back was too deteriorated for her to travel. I thought about all the volunteers and supporters back in Illinois who had worked so hard on my behalf. Lord, let me tell their stories right, I said to myself. Then I walked onto the stage.
I WOULD BE lying if I said that the positive reaction to my speech at the Boston convention—the letters I received, the crowds who showed up to rallies once we got back to Illinois—wasn’t personally gratifying. After all, I got into politics to have some influence on the public debate, because I thought I had something to say about the direction we need to go as a country. Still, the torrent of publicity that followed the speech reinforces my sense of how fleeting fame is, contingent as it is on a thousand different matters of chance, of events breaking this way rather than that. I know that I am not so much smarter than the man I was six years ago, when I was temporarily stranded at LAX. My views on health care or education or foreign policy are not so much more refined than they were when I labored in obscurity as a community organizer. If I am wiser, it is mainly because I have traveled a little further down the path I have chosen for myself, the path of politics, and have gotten a glimpse of where it may lead, for good and for ill. I remember a conversation I had almost twenty years ago with a friend of mine, an older man who had been active in the civil rights efforts in Chicago in the sixties and was teaching urban studies at Northwestern University.
I had just decided, after three years of organizing, to attend law school; because he was one of the few academics I knew, I had asked him if he would be willing to give me a recommendation. He said he would be happy to write me the recommendation, but first wanted to know what I intended to do with a law degree. I mentioned my interest in a civil rights practice, and that at some point I might try my hand at running for office. He nodded his head and asked whether I had considered what might be involved in taking such a path, what I would be willing to do to make the Law Review, or make partner, or get elected to that first office and then move up the ranks. As a rule, both law and politics required compromise, he said; not just on issues, but on more fundamental things—your values and ideals. He wasn’t saying that to dissuade me, he said. It was just a fact. It was because of his unwillingness to compromise that, although he had been approached many times in his youth to enter politics, he had always declined. “It’s not that compromise is inherently wrong,” he said to me. “I just didn’t find it satisfying. And the one thing I’ve discovered as I get older is that you have to do what is satisfying to you. In fact that’s one of the advantages of old age, I suppose, that you’ve finally learned what matters to you. It’s hard to know that at twenty-six. And the problem is that nobody else can answer that question for you. You can only figure it out on your own.” Twenty years later, I think back on that conversation and appreciate my friend’s words more than I did at the time. For I am getting to an age where I have a sense of what satisfies me, and although I am perhaps more tolerant of compromise on the issues than my friend was, I know that my satisfaction is not to be found in the glare of television cameras or the applause of the crowd. Instead, it seems to come more often now from knowing that in some demonstrable way I’ve been able to help people live their lives with some measure of dignity. I think about what Benjamin Franklin wrote to his mother, explaining why he had devoted so much of his time to public service: “I would rather have it said, He lived usefully, than, He died rich.” That’s what satisfies me now, I think—being useful to my family and the people who elected me, leaving behind a legacy that will make our children’s lives more hopeful than our own. Sometimes, working in Washington, I feel I am meeting that goal. At other times, it seems as if the goal recedes from me, and all the activity I engage in—the hearings and speeches and press conferences and position papers—are an exercise in vanity, useful to no one. When I find myself in such moods, I like to take a run along the Mall. Usually I go in the early evening, especially in the summer and fall, when the air in Washington is warm and still and the leaves on the trees barely rustle. After dark, not many people are out—perhaps a few couples taking a walk, or homeless men on benches, organizing their possessions. Most of the time I stop at the Washington Monument, but sometimes I push on, across the street to the National World War II Memorial, then along the Reflecting Pool to the Vietnam Veterans Memorial, then up the stairs of the Lincoln Memorial. At night, the great shrine is lit but often empty. Standing between marble columns, I read the Gettysburg Address and the Second Inaugural Address. I look out over the Reflecting Pool, imagining the crowd stilled by Dr. 
King’s mighty cadence, and then beyond that, to the floodlit obelisk and shining Capitol dome. And in that place, I think about America and those who built it. This nation’s founders, who somehow rose above petty ambitions and narrow calculations to imagine a nation unfurling across a continent. And those like Lincoln and King, who ultimately laid down their lives in the service of perfecting an imperfect union. And all the faceless, nameless men and women, slaves and soldiers and tailors and butchers, constructing lives for themselves and their children and grandchildren, brick by brick, rail by rail, calloused hand by calloused hand, to fill in the landscape of our collective dreams. It is that process I wish to be a part of. My heart is filled with love for this country.
Acknowledgments
THIS BOOK WOULD not have been possible without the extraordinary support of a number of people. I have to begin with my wife, Michelle. Being married to a senator is bad enough; being married to a senator who is also writing a book requires the patience of Job. Not only did Michelle provide emotional support throughout the writing process, but she helped me arrive at many of the ideas that are reflected in the book. With each passing day, I understand more fully just how lucky I am to have Michelle in my life, and can only hope that my boundless love for her offers some consolation for my constant preoccupations. I want to express as well my gratitude to my editor, Rachel Klayman. Even before I had won my Senate primary race, it was Rachel who brought my first book, Dreams from My Father, to the attention of Crown Publishers, long after it had gone out of print. It was Rachel who championed my proposal to write this book. And it has been Rachel who’s been my constant partner in what’s been the frequently difficult but always exhilarating effort of bringing this book to completion. At each stage of the editorial process, she’s been insightful, meticulous, and unflagging in her enthusiasm. Often she’s understood what I was trying to accomplish with the book before I did, and has gently but firmly brought me into line whenever I strayed from my own voice and slipped into jargon, cant, or false sentiment. Moreover, she’s been incredibly patient with my unforgiving Senate schedule and periodic bouts of writer’s block; more than once, she’s had to sacrifice sleep, weekends, or vacation time with her family in order to see the project through. In sum, she’s been an ideal editor—and become a valued friend. Of course, Rachel could not have done what she did without the full support of my publishers at the Crown Publishing Group, Jenny Frost and Steve Ross. If publishing involves the intersection of art and commerce, Jenny and Steve have consistently erred on the side of making this book as good as it could possibly be. Their faith in this book has led them to go the extra mile time and time again, and for that I am tremendously grateful. That same spirit has characterized all the people at Crown who’ve worked so hard on behalf of this book. Amy Boorstein has been tireless in managing the production process despite very tight deadlines. Tina Constable and Christine Aronson have been vigorous advocates of the book and have deftly scheduled (and rescheduled) events around the demands of my Senate work. Jill Flaxman has worked diligently with the Random House sales force and with booksellers to help the book make its way into the hands of readers.
Jacob Bronstein has produced—for the second time—an outstanding audio version of the book in less than ideal circumstances. To all of them I offer my heartfelt thanks, as I do to the other members of the Crown team: Lucinda Bartley, Whitney Cookman, Lauren Dong, Laura Duffy, Skip Dye, Leta Evanthes, Kristin Kiser, Donna Passannante, Philip Patrick, Stan Redfern, Barbara Sturman, Don Weisberg, and many others. Several good friends, including David Axelrod, Cassandra Butts, Forrest Claypool, Julius Genachowski, Scott Gration, Robert Fisher, Michael Froman, Donald Gips, John Kupper, Anthony Lake, Susan Rice, Gene Sperling, Cass Sunstein, and Jim Wallis took the time to read the manuscript and provided me with invaluable suggestions. Samantha Power deserves special mention for her extraordinary generosity; despite being in the middle of writing her own book, she combed over each chapter as if it were hers, providing me with a steady flow of useful comments even as she cheered me up whenever my spirits or energy were flagging. A number of my Senate staff, including Pete Rouse, Karen Kornbluh, Mike Strautmanis, Jon Favreau, Mark Lippert, Joshua DuBois, and especially Robert Gibbs and Chris Lu, read the manuscript on their own time and provided me with editorial suggestions, policy recommendations, reminders, and corrections. Thanks to all of them for literally going beyond the call of duty. A former staffer, Madhuri Kommareddi, devoted the summer before she entered Yale Law School to fact-check the entire manuscript. Her talent and energy leave me breathless. Thanks as well to Hillary Schrenell, who volunteered to help Madhuri with a number of research items in the foreign policy chapter. Finally, I want to thank my agent, Bob Barnett of Williams and Connolly, for his friendship, skill, and support. It’s made a world of difference.
ABOUT THE AUTHOR
BARACK OBAMA is the junior U.S. Senator from Illinois. He began his career as a community organizer in some of Chicago’s poorest communities and then attended Harvard Law School, where he was elected the first African American president of the Harvard Law Review. In 1992, he directed Illinois Project VOTE, which registered 150,000 new voters. From 1997 to 2004, he served as a three-term state senator from Chicago’s South Side. In addition to his legislative duties, he has been a senior lecturer in constitutional law at the University of Chicago Law School, practiced civil rights law, and served on the board of directors of various charitable organizations.