What is your goal for college?

It is 1:14 PM and I am at my third high school of the day — Cleveland High School, in Portland, Oregon. Colorful Post-it notes dot the classroom. Expectant 11th-grade eyes peer at me. On the whiteboard in front of the class, I have just finished writing: “Design Your Ideal College Workshop. Part I: Goals + Problems.”

I turn to the class. "Raise your hand if you have been to a college visit or toured a college campus already."

Every single hand in the room shoots up.

“What questions do you ask on these tours?” I ask.

A flurry of hands goes up. “What is the student to teacher ratio?” one girl blurts out. “What majors do you have?” says a boy. “What student clubs do you offer?” I’ve heard these same questions dozens of times.

I take a couple more, and then ask: “How many of you” — I pause for dramatic effect — “have asked yourself, ‘What is your goal for college?’” Silence. Two tentative hands rise.

Over the last year, every time I have visited a high school to get the word out about Minerva, I have facilitated a workshop in which students use a combination of free writing, discussion, and group work to design their ideal college. By now, I have led this workshop more than thirty times. Whether it’s an urban charter school in Dallas or a tony private school in Los Angeles, one thing stays the same: students don’t think deeply or thoughtfully about their college search. The questions students ask read straight out of the ranking metrics of the U.S. News & World Report. Prestige and cost reign supreme.

This seems problematic for one of the most important decisions of students’ young lives. It is not that the questions students ask are unimportant; it is that there are deeper questions to ask first. Thinking about majors without thinking about goals is like preparing to buy a house and asking, “How many rooms does it have? Are there bay windows? Is the backyard artificial or real grass?” without ever asking the prior question: “What kind of life do I want to live, and how does a house support that life?”

The most problematic aspect for me is this: in an increasingly competitive global economy, students no longer have the luxury of groupthink. The conveyor belt of schooling no longer leads to a stable and well-paying job, a house with a white picket fence and two and a half kids, and a retirement with full pension, as it may have fifty years ago. Instead, sooner or later, students will have to face these tough introspective questions. Perhaps these questions hit them sophomore year of college, in the middle of an organic chemistry exam, when they realize that the nineteen years their parents spent grooming them to be a surgeon were for naught; they don’t want to be a doctor. Perhaps these questions hit them in their late thirties, when they realize that toiling away at a Wall Street firm and climbing the corporate ladder made them a lot of money, but not a lot of self-fulfillment. Perhaps it will be later still.

Sooner or later, students will have to ask these deeper questions of themselves. They will have to think about their core values and how they want to live by them. They will have to consider what their unique talents are and how they should cultivate them.

As many students start the college admissions rat race again this summer and fall, let us make sure they ask at least one of these questions early: “What is your goal for college?”

 

 

Your job is automated away. What's next?

“What do people do after their jobs are automated away?”

This was the question posed by my good friend Asher on a chilly Saturday night at Soma StrEat Food Park in San Francisco.

Asher had just been to a new restaurant called Eatsa, where there are no waiters — everyone orders from an iPad and the food comes up through a circular enclosure on a table. A generously portioned food bowl costs $7, a steal by SF standards. There are winners: consumers pay lower prices and companies increase profits with lower operating costs. But as with everything in capitalism, there are also losers: the waiters and waitresses who are out of work.

What do these former waiters and waitresses do next? We brainstormed rapid-fire: perhaps find another entry-level job (retail, truck driving [1]) or temp job (substitute teacher), work part-time in the sharing economy (only relevant in major cities like SF or NY), move to another city where they have a friend or relative, become homeless, hustle, commit crime, join the army, start a small business, do drugs, go back to school. How do they search for another job? Perhaps use the Internet, ask friends and relatives [2], or walk down the street looking for the first “we’re hiring” sign they can find in a window. How can we help?

I was struck by how many conversations I have about automation, and how few of them consider the individual people involved. In Silicon Valley, there is a lot of talk about technological disruption of traditionally labor-intensive industries (e.g. self-driving cars), and some talk about grandiose policy ideas like universal basic income for the unemployed in a post-work economy. Both treat people the same way — as one single mass of “humanity” in the indefinite long run. There is very little thinking about individual human beings and their day-to-day struggles, hopes, thoughts, and feelings after their job is replaced by a machine.

As more and more of the most common jobs in the US (waitressing, driving) become automated, we need to listen to and empathize with these individual stories, and work with these individuals to create new training and employment pathways. It’s one thing to make “society better with technology”; it’s another to ensure that every individual has the opportunity to live the life they want to lead.

---------

[1] Truck driving is the most common job in the United States, with more than 3 million drivers. I recently met a former transit operator in SF who is now starting a small truck-driving business with his cousin to haul dirt away from all of the construction happening in the city.

[2] I remember a conversation I had with an Ethiopian taxi driver in Seattle two years ago. He said the reason he moved to Seattle, as opposed to anywhere else in the US, was that his one friend in the country was in the towing business in Seattle and had told him business was good.

 

This micro-blog post is inspired by the Tim Ferriss podcast episode with Seth Godin, who blogs every single day and advocates blogging as a way of “putting yourself in public behind an idea.” I tend to be skeptical of business writers (especially ones as popular as Seth), but I was inspired by Seth’s principled approach to life and no-bullshit critical thinking. It remains to be seen whether I keep this up for more than one day...but this is a start.

 

 

 

 

Book Review: "The Rise of Teddy Roosevelt"


On October 14th, 1912, former president Teddy Roosevelt, on the campaign trail again as the nominee of his own Progressive Party, was shot in the chest by an unemployed saloonkeeper. Staggering for a moment, Roosevelt declared, “it takes more than that to kill a Bull Moose,” before removing the bullet-riddled, 50-page manuscript of prepared remarks from his blood-stained shirt and proceeding to deliver his campaign speech to a shocked crowd. It wasn’t until he finished the speech, 90 minutes later, that he agreed to check into a hospital.

Stories such as these propel Teddy Roosevelt to near-mythic status in American culture. How much of these stories is true, and how much is dramatized? What other dimensions did the man have beneath the one-dimensional badassery? What sort of life did he lead, and how did he develop into “the most interesting man ever to become president”?

I recently finished reading “The Rise of Theodore Roosevelt” by Edmund Morris, which went a long way toward answering these questions, and more. This Pulitzer Prize-winning biography was a recommendation and gift from my dear friend Dustin, and it lived up to his high praise. It begins with Teddy’s birth in 1858 and closes with him assuming the presidency at age 42, the youngest president ever, after the assassination of President William McKinley. In between, it charts his sickly childhood in the brownstones of Manhattan, his meteoric rise in the New York State Assembly, his ranching and hunting in the Badlands of the American West, his leadership of the Rough Riders during the Spanish-American War in Cuba, his building of a home at Sagamore Hill, and his political career alternating between appointed positions in New York and Washington, D.C.

Here are just a few of the lessons I took away from the book:

Teddy Roosevelt the man

Teddy Roosevelt had so many admirable qualities. Here are a few that thoroughly impressed and inspired me:

  • Energy and vitality: Teddy possessed an inhuman energy and vitality. On hunting trips out West, he would ride dozens of miles a day on his horse without resting in his pursuit of big game. He scaled the snowy mountains of Maine as quickly as his backwoods companions. He earned the respect of the cowboys of the West with his relentless work ethic wrangling cattle. This indomitable energy extended to mental pursuits as much as physical ones. As president, he was known for reading a book a day on top of his presidential duties. He was a prolific author, writing 15 books by the time he was 30. He wrote at a torrid pace, finishing thousand-page, meticulously researched historical works in three-month sprints of pen to paper. Morris describes his bursts of work in mechanical metaphors, as a “great steam engine” or machine. The fount of his energy and vitality was his grit and work ethic, developed through extraordinary adversity in his youth. When he was young, he built up his body to overcome his battles with asthma; it becomes clear that his mental stamina developed in part to help him cope with the great emotional tragedies of his life. For example, he completed an incredible body of work as Civil Service Commissioner in Washington, D.C. after the deaths of his wife Alice and his mother Mittie when he was 25.
  • Courage: While there are countless examples of Teddy’s valor as a cowboy in the West, facing down rearing grizzly bears in the Badlands, the courage I especially admire in Teddy is the courage of his convictions—the fact that he’s a man of his word, who follows up talk with action. The ultimate example of Teddy’s courage is the Spanish-American War. At the time, Teddy was the hawkish Assistant Secretary of the Navy, as much criticized for his bellicosity as praised for his effectiveness as an administrator. As soon as the U.S.S. Maine was sunk in Havana Harbor in 1898, Teddy immediately enlisted in the Army. His closest friends and family criticized him for this reckless move, and his political allies believed he would be a more effective public servant in the leadership of the Department of the Navy than on the front lines as a Colonel. Nevertheless, Teddy had no interest in being an armchair reformer, and it had been a lifelong dream of his to fight in a war. He ended up leading an epic charge on horseback with his Rough Riders up San Juan Hill in the decisive battle of the Spanish-American War, Mauser bullets whizzing past him.
  • Intellectual curiosity: despite many descriptions from his contemporaries of Teddy as “pure action,” there’s an intellectual side to him as well. He is constantly reading and learning, never sating his thirst for knowledge. This stems from his childhood discovery that books unlocked imaginary worlds his sickly body couldn’t reach. There are interesting parallels here between Teddy’s childhood development and that of the second-youngest president and famous speed reader, John F. Kennedy.
  • Sense of timing: on both personal and historical fronts, Teddy has an impeccable sense of timing. One charismatic tactic he developed in his early twenties, during his stint in the New York State Assembly: always enter the room after the other key figures have arrived, pause for a dramatic second as all eyes gravitate toward him, then proceed to his chair. Historically, Roosevelt’s rise as a politician coincided with the rising popularity of Progressive ideals after a few decades of opulence and excess during the Gilded Age—Teddy was the most important politician in office to ride these populist waves. To what extent Teddy’s timing was sheer luck, and to what extent it was strategic and deliberately cultivated, is unclear.
  • Family man: throughout his life, Teddy is fiercely familial. He is the father of one baby girl by his late first wife, Alice Lee, and five children by his second, Edith Carow. In addition, he remains the de facto head of the Roosevelt family estate after his father’s untimely death. In his diary, he writes that he has never experienced bliss like the bliss of being with family.

The Life of Teddy Roosevelt

Tracing the trajectory of Teddy Roosevelt’s life reinforced lessons I’ve heard before about life:

  • tragedy and adversity developing extraordinary traits: at age 20, the young Teddy loses his father, Theodore Sr. Overcome with anguish, Teddy writes: “he is the greatest man I ever knew, and the only one I ever feared.” At age 25, Teddy suddenly and tragically loses his young wife Alice and his mother Mittie on the same day. Throughout his youth, the young Teddy battles severe asthma, which brings him to the brink of death many times. Both research and popular anecdote tell us that tragedy develops character and leaders. These early tragedies steeled Teddy and forced him to get to the pith of what is important in life.
  • the most effective social reformers bridge establishment and reform circles: from the Meat Inspection Act of 1906 to taking on the Standard Oil monopoly, Teddy is a trailblazer of the Progressive movement to end municipal political corruption and curb the power of huge corporate trusts. His success as a reformer stems in part from his ability to identify with both the powerful establishment and the grassroots reformers. Born into the aristocratic Roosevelt family, one of Manhattan’s social elite, Teddy benefits early on from his family connections and wealth to vault into the center of New York social circles. At Harvard, he is invited to join the Porcellian Club, the most exclusive of Harvard’s Final Clubs and a bastion of the Boston Brahmins. He does the same in Washington, D.C., when he joins the elite social circle of John Hay, Henry Adams, and other prominent political figures. Teddy is yet another example of an influential reformer able to bridge the common experience and elite circles. Another prominent example of a reformer who empathized with both the top and bottom segments of society is Martin Luther King, Jr. King is the popular leader of the Civil Rights Movement, but he is by no means from purely humble beginnings. According to Marshall Ganz, community organizer and architect of President Obama’s 2008 grassroots campaign strategy, Reverend King’s assumption of leadership of the Southern Christian Leadership Conference owed in part to his unique position as a partial outsider and an intellectual from an elite preacher’s family in Atlanta.
  • the non-linearity of life: at age 24, Teddy is the precocious and charismatic Majority Leader of the New York State House Republican Coalition. His career seems to be on the fast track to the nation’s highest office. Yet the ups and downs of politics and personal life afflict Teddy. At age 26, he is alone out West, ranching and writing, after publicly losing the support of his reformist coalition at the Republican National Convention for ultimately acquiescing in his support of the corrupt Old Guard Republican nominee for president, Blaine. At this point, it’s uncertain whether he’ll ever return to New York or to politics. At 28, he is nominated for, and routed in, the New York mayoral election. For much of his thirties, Teddy considers writing and literature a more promising occupation than politics. And of course, there is perhaps no bigger surprise than the sudden telegram announcing William McKinley’s death in 1901, launching Vice President Teddy into the presidency.

How It’s Written

At 780 pages, “Teddy” is a hefty tome. But those pages fly, because Morris pairs the scholarly rigor of a scientist with the enchantment of a novelist. If for no other reason (and there are so many other reasons), this biography is worth reading for how well it is written. Here are a couple of tactics at which Morris is particularly deft:

  • an extended hook: any good story starts with a hook. Morris opens the Prologue with a detailed account of New Year’s Day, 1907. Teddy Roosevelt has been president for over five years, and he is in office at a time of domestic economic prosperity and international peace. Morris describes Teddy in all his grandeur: his physical presence (bespectacled face, huge gleaming teeth, rippling chest muscles), his policy successes, and his personal traits. By presenting the man at his peak, Morris leaves the reader intrigued as to how such a man came to be.
  • flow and transitions: Morris seamlessly and creatively alternates between narrative description and analysis of trends in Roosevelt’s development. The tempo of the work is never broken by an off note in the writing. These ordinary transitions, hundreds of them across the book, please the reader with their conciseness. They are punctuated by a few sublime passages in which Morris uses features of the physical scene he is describing as metaphors for Roosevelt’s development. Take this passage on Roosevelt’s active dating life in college: “sickly and reclusive as a child, preoccupied with travel and self-improvement in his teens, he had had little opportunity to knock on strange doors. Now, doors were opening of their own accord, disclosing scores of fresh faces and alluring young figures.” (pg. 63)

On note-taking


I’m a prolific note-taker—and I don’t just mean in school. A quick scroll through my iPhone notes app reveals 132 notes at the moment: logistical numbers (my golf handicap, my school mailbox number, phone numbers of customer service departments I called to complain to), dream diary entries, action items after meetings, things I should check out, funny anecdotes, insightful quotes, and the list goes on.

A lot of note-taking is reactive—recording something urgent before it slips from your working memory. Yet note-taking can also be a proactive strategy to accelerate your learning. Here, I mean learning in the broadest possible sense: about the world, and about yourself. In a recent conversation, my friend Carl Shan made this insightful comment: “learning comes from 2 things: having experiences and reflecting on them.” Note-taking helps with the second piece: reflection. According to an oft-cited figure attributed to UCLA researchers, the average human has 70,000 thoughts per day. How do we make sense of them all? A popular technique is to journal regularly. Because (infrequent) blogging is as close as I get to journaling, most of my daily reflection comes from note-taking on the go: when I come across something during my day that sparks new thinking or contributes to a theme I’ve been mulling, my right hand instinctively dives into my pocket, pulls out my phone, and starts typing into the notes app. These “micro notes” make up my own private Twitter stream.

Once I’ve collected some fragmented Tweets—quotes from Season 3 of The Wire, insightful metaphors a friend used in conversation, impressions from an article I read—I sit down to transfer my iOS notes to Evernote, where I edit, format, and organize them into notebooks; I also type up into Evernote any notes I’ve collected with pen and paper. This transfer phase is crucial to reflection—a consistent space for combining ideas, synthesizing them, and situating them within larger streams of thinking. Just as academic researchers contribute to generalized knowledge, notes contribute to my knowledge—about the world, and about myself.

Three notes were written during the writing of this blog post.

On the types, measures, and value of time

In Slaughterhouse-Five, author Kurt Vonnegut describes the aliens from Tralfamadore, who see in four dimensions instead of three. While humans can only perceive a single moment in time at once, as the peak of a single mountain, the Tralfamadorians perceive the entire flow of time—past, present, and future—at once, as the panorama of a mountain range.

Lately, I’ve been thinking a lot about the concept of time, from the role it plays in my own life, to its importance in society, to, even more broadly, how it dictates the entirety of our evanescent human existence. Inspired by Paul Ford’s commencement address “10 Timeframes,” and by a lot of thinking about a specific product feature (codename: “timeline”) in my work for TurnRight, this post contains some of my fragmented observations and musings on the relationship of Time to us all. The French novelist Marcel Proust once said, “the real voyage of discovery consists not in exploring new lands but in having new eyes.” It is my hope that by revisiting the ordinary and the familiar through the lens of time, you will see things just a little bit differently. So it goes.

Types of time:

Let’s start with two truisms of time: first, “time heals all wounds,” and second, “use your time wisely.” If we look closely, the two “times” are not the same: the first Time conjures up the image of a supreme arbiter, all-encompassing and all-powerful, yet fair and just; the second time merely describes a possession or faculty that individuals control. Based on this, I think we can separate our commonly used notions of time into two camps: personal time with a lowercase ‘t’ and Universal Time with a capital ‘T’. Starting from its creation, every tangible object or being, from a human being to a giraffe to a kitchen knife to a star, possesses personal time; the complete duration of your time from beginning to end is called your lifespan. On the other hand, independent of our individual, insignificant lifespans exists Universal Time, which, according to Stephen Hawking, started right at the Big Bang.

While the most common visual paradigm of time is a clock, let’s try another representation: look around you and pretend every person and object has a progress bar hovering above it (for all of you gamers, just like an HP bar). How full each bar is represents the age of that individual or object. Look in the mirror—you have one too. Look up into the sky: that big progress bar represents the lifespan of the universe—Universal Time. The end of each progress bar is the end of that being or object’s lifespan: the sun will die in approximately 5 billion years; you and I will die in less than a century. While it may seem silly and trivial to capture lifespans as progress bars, it would probably do us well to think about our own mortality from time to time—I am reminded of the wise words of Steve Jobs in his 2005 Stanford commencement speech: “remembering that you are going to die is the best way I know to avoid the trap of thinking you have something to lose. We are already naked. There is no reason not to follow your heart.”
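If you’ll indulge the gamer metaphor a bit further, here is a toy sketch of the progress-bar paradigm in code (Python; the ages and lifespans are rough illustrative guesses, not real data):

```python
# A toy renderer for the "lifespan as progress bar" paradigm.
# All ages and lifespans are illustrative guesses, in years.

def progress_bar(name: str, age: float, lifespan: float, width: int = 30) -> str:
    filled = round(width * min(age / lifespan, 1.0))
    bar = "#" * filled + "-" * (width - filled)
    return f"{name:<10} [{bar}] {age / lifespan:.0%}"

entities = [
    ("me", 22, 80),              # a human lifespan
    ("a giraffe", 10, 25),
    ("the sun", 4.6e9, 9.6e9),   # roughly 5 billion years to go
]

for name, age, lifespan in entities:
    print(progress_bar(name, age, lifespan))
```

Run it and every entity gets its own little HP bar; the sun’s is just under half full. Check the mirror for yours.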

Measures of time:

Perhaps because we humans can’t see in the fourth dimension, we have invented many measures to keep track of time, all of them man-made. You can think of these measures as the divisions in the hovering progress bars we envisioned earlier. The basic quantity of time is duration. The specific man-made units range along a spectrum from the infinitesimally small (milliseconds, microseconds) to the unfathomably large (eons and epochs). Most of our commonly used measures of time lie somewhere in the middle: some quantitative, such as seconds, hours, days, years, and centuries; some qualitative, such as past, present, and future. Interestingly enough, according to Paul Ford, many common measures of time, such as the decade, were invented only in recent human history. The takeaway: these units of time are man-made and can be changed (Daylight Saving Time, anyone?); when they do change, they change human behaviors, patterns of thought, and lifestyles.

One of the biggest problems facing society today is the mismatch between big problems (climate change, global economic disaster, the Social Security crisis here in the United States) that require long-term thinking and gritty decision-making to solve, and the short-term myopia of our leaders and fellow citizens (who are just looking to win the next re-election or collect the next paycheck). While part of our inability to solve these pressing issues lies in our inherent psychological bias toward short-term, impulsive thinking, I have another hypothesis: in the age of Twitter, Gmail, text messaging, and Facebook notifications, we invent smaller and smaller time scales—smaller and smaller blips on our progress bars—to keep up with the speed of information that bombards us every day. This push is largely spearheaded by businesses—for example, milliseconds mean millions more dollars in revenue for algorithmic stock traders, and there’s a reason Google keeps working to improve the speed of its Search, if only milliseconds at a time. Gradually, this corporate push toward smaller time scales seeps into the public consciousness, until it enters common, everyday use. This wouldn’t be a problem, except our limited brains can only handle so much information, and keeping track of smaller and smaller divisions of time takes energy and cognitive capacity (imagine counting each millimeter line on a ruler)—it’s no surprise we are more stressed, distracted, and overwhelmed with information overload than ever before. How does this affect decision-making? Being bogged down with the next Tweet or text or email shortens our attention spans, fractures our focus, and crowds out our ability to look into the future, where the solutions to all of our big apocalyptic problems lie. At the other end of the spectrum, while astrophysicists and cosmologists are inventing enormous timescales to observe and measure the universe, these large time scales don’t counterbalance the small ones espoused by social media, smartphones, and Google, which affect our daily lives and culture in a way that the discovery of dark energy in Galaxy X does not. Collectively, it’s no wonder that we as a society have shorter attention spans and more myopic perspectives than ever before.

The value of time:

Because we as humans value life, and our time is the measure of our lives, we value time. Specifically, we value our personal time more than we value Universal Time, which existed long before we were born and will exist long after we die (there is, however, one proportionally minuscule period of recent Universal Time that is relevant to our lives—we call this blip History, and offer it in college as a major and on television as a channel). Just how valuable is personal time? To answer this question, let’s use a metaphor: personal time, like money, is a currency. You can spend your time, save it, waste it, borrow it, give it, and share it with other people. Just like money, most of us don’t find time intrinsically valuable, but value it as a means to another end, such as pleasure, money, love, creation, or happiness. However, the closer a person gets to the end of their lifespan, the more they value time as precious in and of itself. Further, the value of time seems to ebb and flow with the value of money, which you value most in the middle of your life, when you need money for a house, a car, kids, and a spouse, and less as you get older. Neither currency seems to correlate strongly with happiness. A Princeton University study suggests that beyond $75,000 a year in income in the U.S., money does not correlate with day-to-day happiness, while psychological research suggests that your happiest moments occur when you lose track of time—a phenomenon called flow.

In general, which is more valuable, money or time? In our day and age of 24/7 work, sleep deprivation, and global travel, most of us would instantly answer “time,” but markets suggest another story: on average, those who specialize in managing the money of others (stock brokers, investment bankers, hedge fund managers) earn more than those who specialize in managing the time of others (secretaries, time management gurus, lifestyle coaches). In addition, while the homeless and the imprisoned have an abundance of time, few of us would trade places with them. Here, it seems that money is more valuable than time. There seem to be two reasons for this: first, while most of us are born with a wealth of time, few of us are born with a wealth of money. Second, although all of us know what it’s like to have more time on our hands than we know what to do with (boredom), few of us know what it’s like to have more money on our hands than we know what to do with (Mark Zuckerberg). But this is only an incomplete analysis. After all, the most important quality of both currencies is not merely quantity, but the freedom and choice to spend them.

More on time, soon.

A design experiment for writing better blog posts

When I think of design, the first thing that comes to mind is the sharp, sleek silhouette of a MacBook Air. This isn’t surprising—consumer technology’s recent design renaissance was pioneered by Steve Jobs and Apple and is now carried forward by young, savvy startup founders like Jack Dorsey of Twitter and Square, Dave Morin of Path, and Joe Gebbia of Airbnb. Yet, as I’ve come to learn through my work for the education technology startup TurnRight, there is more to good design than the curve of an ergonomic keyboard or the minimalist layout of a website; design is both a process and a way of looking at the world. Participating in user experience (UX) meetings on everything from philosophical discussions about the purpose of our product all the way down to the most granular details, such as how a drop-down versus a type-ahead will influence a user’s behavior in filling out the hometown field on their profile page, has been eye-opening, to say the least. Although I still know precious little about the technical skills of design—neither the front-end web languages HTML5 and CSS nor the graphic design mainstays Photoshop and InDesign have found their way into my arsenal—I now have at least an introductory grasp of the way designers think. In this blog post, I hope to peel away some of those layers and discuss how I apply them to my own life.

Last year, ex-Appler and current Facebooker Wilson Miner gave a transformative talk called "When We Build." During the talk, he asks us all to think about the products of our design not merely as products, but as living, breathing organisms that make up larger ecosystems we ourselves inhabit and are inevitably shaped by. At the core of Miner’s talk is the idea that design is a two-way street, summed up by a pithy quote from Marshall McLuhan: “we shape our tools, and our tools shape us.” That “our tools shape us” is no surprise—from the age-old nature-nurture debate to immense bodies of research in economics, psychology, and sociology, we know that the people around us, our material possessions, and of course our environments shape our thoughts, behavior, and interactions. What is less obvious is the first part of the quote: “we shape our tools.” Unless you are a professional visual designer, architect, or literally a shaper of tools, your response is probably: well, that’s nice, but I’m no designer. Enter journalist John Hockenberry—he’s no designer either. When Hockenberry was 19, he suffered a horrible car accident that left him paralyzed and confined to life in a wheelchair; thereafter, the story of “tragedy and fear and misfortune” projected by his wheelchair blotted out everything else, and nothing he could do would prevent moms in public from pulling their kids away and gasping, “don’t stare!” How could he change this? A simple design change: flashy wheels. Yes, the simple act of ordering flashy wheels from a catalogue and installing them on his wheelchair elicited a completely different response from people. Now, not only do kids think he is “cool,” but little boys occasionally ask him: “can I get a ride?” According to Hockenberry, this simple design change made all the difference because it conveyed authorship and intent: his refusal to be a victim, his taking charge of his own life. To sum it up, good design equals intent. To return full circle to Miner’s message: anyone can be a designer simply by acting with intent, and those designs will return to influence the designer’s own thoughts and actions. Inspired by these two talks on design and my work at TurnRight, I decided to apply design to my own life to help achieve my summer goal of blogging weekly.

I. Reflection

"It is wisdom to know others; it is enlightenment to know one’s self."—Lao Tzu 

In user experience design, decisions are always made with the end user, their goals, behaviors, and tendencies in mind. To start, I had to gain intimate knowledge of the end user: in this case, myself. I began by asking: what are my limitations? Time? During the school year, certainly, but not as much during the summer, although I am always wary of Parkinson’s Law: “your work expands to fill the time allotted.” What about lack of motivation? Not after watching this video, or this one. After much thought, I came to the conclusion that the single biggest factor limiting me from writing a weekly blog post is my own tendency to procrastinate. What started as a rebellious middle school streak of not doing homework until past my bedtime escalated into turning in papers, assignments, and even college applications at literally the last minute. These days, this vice largely manifests itself in time-sapping, aimless web surfing sessions. So I asked myself: how can I prevent myself from procrastinating? And more importantly: how can I design an environment that overcomes these obstacles and empowers and encourages me to blog once a week?

II. Design

"We shape our tools, and our tools shape us"—Marshall McLuhen

During his talk, in the middle of making a grand point about how screens (TV, computer, smartphone, tablet, etc.) are the environment of the future, Miner asks his audience, a little tongue-in-cheek but mostly seriously: “How long does it take you after you wake up to get in front of a screen? What is your ‘time-to-screen’? 1 minute? 2 minutes? 5 minutes if you are really slow?” The audience laughs, in that “funny-because-it’s-true” way. This prompted the question: what are the environments that I spend the most time in? Physically, other than my office, my apartment is the centerpiece of my time. Digitally, the obvious environments are my ‘screens’: my iPhone, my iPad, and my MacBook Air (thank goodness TV isn’t one of them). Getting even more granular, I spend most of my Internet browsing time (read: procrastination) on Facebook and email, the majority of it through my Facebook and Mail iPhone apps. Armed with knowledge of myself and my limitations, as well as of the environments in which I spend the most time, I started designing. Here is a (constantly changing) list of 8 design changes I made to help me achieve my goal of writing a weekly blog post. Changes 1-5 cultivate the preconditions for success before writing; changes 6-8 apply while writing, to remove distractions and encourage "flow," the psychological state described by psychologist Csikszentmihalyi, whose major symptoms include losing track of time, unbridled focus, engagement, and elation:

(1) make a mutual weekly blogging pact with my friend Brandon (read his blog here!) with negative repercussions if either of us skips a post, which is especially helpful when my individual motivation wanes a little as Sunday midnight approaches.

(2) move note-taking app Evernote, where I write my blog posts, to the front page of all of my devices (it has cross-platform integration, so I can start a post on my iPhone and finish it on my MacBook later), and move distracting apps to back pages.

(3) delete the once-front-page Facebook app from my iPhone and iPad to curb addictive Facebook use.

(4) any time I think of, read, watch, or observe something that is blog-worthy, immediately write it into Evernote for iPhone—a positive habit (for once!).

(5) write motivational and thought-provoking quotes all over white boards placed around my room. After all, writing equals thinking, with the notable exception of YouTube video comments.

(6) go to Starbucks for a few hours every Sunday afternoon to write, because the environment, with its white noise, coffee scents, sunlight, and people, is ideal for stimulating my productivity, whereas my apartment, though pleasant, puts me on the fast track to aimless web surfing, casual conversation, and episodes of “Suits.”

(7) use Evernote full screen when I am in the process of writing so I can’t see the time or easily click into other distracting applications, such as Google Chrome.

(8) listen to music when I write, because it helps me focus and blot out distractions—my favorite is Avicii’s Pier 94 setlist.

III. Evaluation

"fail fast, learn rapidly."—Mary & Tom Poppendiek

As someone who is admittedly horrible at science, I nearly gave myself a high-five for figuring out that the design process mirrors the scientific method—hypothesis testing and iteration are at the core of both. Because each design change I make is a hypothesis based on observations about my past behavior, it is constant feedback and reflection that reveals whether the change is actually effective. To my surprise, I found that writing motivational and thought-provoking quotes on whiteboards throughout my room has been mostly ineffective—just as heatmap analytics studies show that Facebook users tend to ignore the ads on the right side of the page, I rarely look at the quotes written on my whiteboards. On the flip side, a couple of granular tweaks have proven incredibly effective. First, after I deleted the Facebook app on my iPhone, I caught myself mindlessly tapping the spot where the app used to be several times within an hour without even thinking, withdrawal symptoms from a bygone addiction; simply breaking this routine, by tapping into a different app, forced me to think about my actions and allowed my rational self to stop the distracting behavior. Second, entering full screen mode in Evernote when I write has done wonders for hitting “flow,” because I can see neither the time nor the other apps I could click into. As far as I can tell, merely applying the rigorous level of thought and inquiry that the design process demands has made me more observant and skeptically optimistic, and elevated my level of thought.

Based on this ongoing collection of evidence about my design experiment, I go back to the drawing board to re-evaluate, thinking of new design changes or even questioning the fundamental assumptions I made in the initial reflection stage. Lather, rinse, repeat—all this, from a government concentrator who doesn’t know the first thing about science. All this, from a designer who doesn’t know the first thing about Photoshop. All this, from a blog post about how to write a blog post.

My brush with bystander effect, heroism, and the Red Line

Last Wednesday, after a long day of work at the Cambridge Innovation Center near MIT, I took the Red Line subway from the Kendall/MIT station to the Harvard Square station. Every Monday through Friday, twice a day, I take this train to work and back. Each one-way trip takes 10 minutes, plus or minus two. How do I know? Simple—I use songs on my iPod as gauges of time (incidentally, this is also how I time the length of my showers; watches are overrated). More often than not, my subway song of choice is the Live at Slane Castle version of “Californication” by the Red Hot Chili Peppers, complete with soulful arpeggios and a haunting guitar solo starting at 5:15 that sends chills up my spine every time (John Frusciante is a god); this time, I was jamming to some Kid Cudi. Right as “Pursuit of Happiness” faded out and the notes of “Soundtrack 2 My Life” began to play, I noticed something peculiar:

The man was standing about 20 feet away from me, middle-aged and in all respects ordinary-looking: hair existentially torn between gray and black, scruffy gray beard sprawled across his chin, jeans and sneakers topped by a navy raincoat over a tan polo. Yet, as the familiar screech of metal on metal signaled the stop at Central Square station, the man started swinging, body gyrating haphazardly, fingers slipping from the steel pole that fused the ceiling of the train car to the floor. I thought he had simply lost his balance, but while the rest of the passengers tightened their grips on poles and steadied themselves as the train ground to an abrupt halt, the man never recovered—in one continuous, violent motion, as if in a scene from a movie, the pole slipped through his fingers, his head slammed into the pole on the other side, and he fell backwards, the back of his head hitting the floor of the train with a sickening crunch. His head rolled back, eyes glazed over, staring directly at me. I froze. A mix of adrenaline and horror rushed through my veins. The music in my ears began to fade, replaced by the sound of my heartbeat…

Ba-bump. Ba-bump.

Suddenly, a woman yelled: “Push the red button! Tell the conductor! Stop the T!” 

Immediately, I snapped back to attention. I don’t know who yelled aloud, but my vision refocused to see an Asian lady wearing glasses jump up and push the red button at the end of the train, words pouring from her mouth to the conductor in a hasty jumble:

"A man just fell…he passed out…stop the train…!" The doors of the train re-opened. 

Simultaneously, a number of passengers sitting next to the unconscious man got up out of their seats to help him, one checking his pulse and another rocking his shoulders gently and asking frantic questions. Slowly, the man faded back into consciousness; still dizzy, he attempted to get up and exit the train. One young lady helped him up, supporting his arm with hers. As he stumbled towards the open door, his legs gave way once again and he toppled backwards—this time, several passengers were there to catch him; they laid him gently on the floor, with a rain jacket supporting his head. This whole time, I remained frozen, eyes bulging in disbelief at the scene unfolding before me. The Asian lady pulled out her iPhone and began to call 911. A girl standing next to me called out, “is anybody here a doctor!?” No reply. She repeated, walking down the length of the train, shouting with conviction: “we need a doctor!” Miraculously, a few minutes later, a tall man with long blonde hair, garbed in a full black trench coat and shrouded in an air of quiet confidence, entered through the open door from one car over and knelt down by the man. With the dexterous movements of a surgeon, he checked the man’s pulse and calmly began asking him a few rudimentary questions. I’ve read reports of population-saving vaccines and of doctors who work in AIDS-stricken third-world hospitals, and I’ve even witnessed an open-heart surgery myself, but never in my life did I have more respect for a doctor and the entire field of medicine than in this particular moment.

For the next fifteen minutes, the man lay on the floor, with just enough strength to answer some basic questions, but not enough to get up. Through the murmurs of the passengers around me, I heard that the man had said he “has a history of seizures” and “forgot to take his medication this morning.” He also “hadn’t eaten anything all day.” Not long after, the EMTs arrived to cart the man out in a wheelchair…

As I walked slowly from the T station in Harvard Square back to my apartment that day, still a little shaken up, I reflected on this incident. I thought of the infamous parables of social psychology textbooks, most notably the 1964 murder of Kitty Genovese, in which, as the textbooks tell it, 38 onlookers passively watched as a killer raped and murdered the 28-year-old Kitty while she fled and screamed for help. I felt a wave of warmth and relief that the outcome of this incident was different. At the same time, I was disappointed. Disappointed in myself for freezing, unable to move, when the man fell, while others around me reacted to help him. What if no one had reacted? Would I have just stood and watched, idly, passively, while the man suffered a second, perhaps fatal, fall? Would I have had to live with the guilt of tacit inaction?

Social psychology offers a number of well-documented theories for what I experienced. The bystander effect, coined by researchers Darley and Latané after the Kitty Genovese murder, holds that the more people are around in a public emergency, the less likely any individual is to go out of his or her way to help. In emergency situations that defy our past experience and render us helpless, we look to others for social cues on how to respond, and when we see everyone looking on passively, as bystanders, we are driven to inaction; a vicious cycle ensues because everyone is thinking the same thing—this is called pluralistic ignorance. Both the bystander effect and pluralistic ignorance are exacerbated by diffusion of responsibility, a phenomenon whereby the more people there are, the less likely any given individual is to take responsibility on his or her own shoulders, because each expects another to do so. Economists know a similar dynamic as the “free-rider problem.” Finally, in conditions of anonymity, people are even less likely to intervene. Yet, there is a light in the darkness. Although the collective barriers of the bystander effect, diffusion of responsibility, pluralistic ignorance, and anonymity are incredibly high for any individual, if only one person takes the plunge and intervenes, others are much more likely to follow. We call these people heroes.

While our society traditionally sees heroes as superhuman (Batman, Superman, the Hulk) or historical (Martin Luther King, Jr., Gandhi), Philip Zimbardo, the Stanford psychologist who led the landmark Stanford Prison Experiment, argues in an enlightening TED Talk that true heroism can be achieved by everyday, ordinary people. In his book The Lucifer Effect, he argues that just as it is dangerous to “morally disengage” ourselves from instances of great evil, such as the Holocaust, and state with conviction that we would never stoop to that level of moral degradation, it is equally important to realize that heroism isn’t reserved for Marvel comics or World Wars—we are all capable of it. That is, the lines between the ordinary and both evil and heroism are permeable. The unidentified lady who first yelled for someone to inform the conductor is a hero. The lady who ran through the train asking if anyone was a doctor is a hero. The doctor is a hero. The Asian lady who called 911 is a hero. Heroism, like self-discipline, is a muscle that can (and ought to) be exercised (see Zimbardo’s Heroic Imagination Project here: http://heroicimagination.org/). This incident has inspired me to be more heroic in my own life, because it is simply the right thing to do. I charge us all: the next time we face a situation that challenges convention and renders us helpless, fight against the biological stress response and the social psychological constraints of social influence. Refuse to look on in helpless fascination. No matter whether the next incident is a fall in a subway, bullying, or something far worse (rape, assault, murder?), we should all act. Because we can. Because we must.

Intellectual curiosity, lifelong learning, and formal education

It was on the plane ride from Boston back home to California after my freshman year at Harvard that I met Michael Baur, Associate Professor at Fordham University School of Law. Michael sat in the aisle seat, and next to him sat his adorable little daughter, no more than a few years old, seat belt tightened to the max. Across the aisle sat Michael’s wife and two more little ones—a boy and a girl. As I squeezed past Michael and his daughter to get to my window seat, he asked me if the “H” shield on my backpack stood for Harvard; I said yes. We started a conversation, and I found out, dumbstruck by this wondrous stroke of coincidence, that Michael was an alum of Harvard Law School and had once been a resident tutor in Cabot House, the upperclass dorm I had been assigned to for the following year! As the engines of the plane began to rumble, and I felt the familiar hand of inertia pushing me back into the seat, Michael’s daughter Grace asked innocently: “Daddy, how do planes fly?” Without even a hint of irritation or annoyance, he explained, warmly and patiently, gesturing with his hands: “well, darling, it has to do with Bernoulli’s principle. The top of the wing of the airplane is curved while the bottom is flat, so that air travels faster over the top than the bottom, which creates lower pressure above the wing than below it. This creates lift, so the plane can fly!” Right as he finished, the airplane pitched upward and started its ascent. He smiled at Grace, and turned to me seriously: “something like that, right? I don’t quite remember exactly.” Snippets from my high school physics class flashed through my head, but the exercise was futile: equations and theorems and principles had gone in one ear and out the other long, long ago. Mesmerized, I looked back at him with a blank stare, laughed nervously, and thought to myself, “just smile and nod, smile and nod.” For Michael, that moment was just one of a hundred in his daily life as a father, but it has been forever seared into my memory and permanently filed in my “How To Be A Good Parent” mental cabinet.
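(A footnote for the curious: the relation Michael was reaching for is Bernoulli's principle, which for steady, incompressible flow along a streamline says

$$ p + \tfrac{1}{2}\rho v^2 + \rho g h = \text{constant} $$

where $p$ is the static pressure, $\rho$ the air density, $v$ the flow speed, and $h$ the height. If air moves faster over the curved top of the wing, $v$ rises there and $p$ must fall, leaving higher pressure below the wing than above it: lift. Aerodynamicists will note that the full story of lift is more complicated than this textbook sketch, so Michael's hedge was well placed.)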

Recently, as I’ve been reading and thinking a lot about the problems with our current education system and, on a more personal level, about watering the right seeds for my own growth and development, this memory of Michael and his daughter Grace comes to mind time and time again. Parents like Michael, who instill in their young, impressionable kids the value of learning and encourage them to be curious and explore the world around them, are increasingly rare. I’ve had my fair share of sitting next to parents and their young kids on flights, and too often I hear this line coming out of a parent’s mouth: “I dunno, don’t ask stupid questions.” In a world where, in the words of HBS Professor Nancy Koehn, “turbulent is the new normal,” when the average adult switches jobs 11 times between the ages of 18 and 44, and when curiosity, adaptability, and openness to new experiences are more essential than ever, intellectual curiosity is declining and the population of lifelong learners is dwindling.

What is lifelong learning? According to Wikipedia, lifelong learning is the “‘lifelong, voluntary, and self-motivated’ pursuit of knowledge for either personal or professional reasons.” What is intellectual curiosity? My search for a pithy definition (after Wikipedia, Quora, and Dictionary.com proved unsatisfactory) took me on a roller coaster ride from parenting blogs to college admissions sites (Stanford calls it "intellectual vitality") to research papers on developmental psychology; still, I didn’t find a single agreed-upon definition. It is much easier to name the symptoms of intellectual curiosity than the disease itself: asking numerous questions about the unknown, exploring diverse subjects, being open to new ideas and opinions, and having a love of learning (I’ve certainly caught the bug). For the sake of this blog post, I define intellectual curiosity as valuing learning for the sake of learning. That is, the pursuit of knowledge is intrinsically motivated, an end in itself, rather than extrinsically motivated, a means to another end such as money or power or pleasure. Clayton Christensen, HBS professor, innovation researcher, and author of the book Disrupting Class, defines intrinsic motivation as “when the work itself stimulates and compels an individual to stay with the task because the task by itself is inherently fun and enjoyable…were there no outside pressures, an intrinsically motivated person might still very well decide to tackle this work.” Encouraging children from a young age to ask questions, no matter how trivial, develops their intrinsic motivation for learning and, through it, their intellectual curiosity. Anyone want to bet against me that Michael’s daughter Grace will grow up to be a lifelong learner?

Yet the present-day decline in intellectual curiosity in children isn’t the fault of parents alone; our flawed public schooling system should also shoulder the blame. Take this staggering study, for example. According to Sir Ken Robinson (watch this really nifty RSA Animate video on YouTube), in a longitudinal study described in the 1998 book Breakpoint and Beyond: Mastering the Future Today, 98% of 1,500 kindergartners scored at the genius level of divergent thinking (measured by a test that asks questions such as “how many different uses can you think of for a paperclip?”). By age 10, this number had declined to 32%; by age 15, only 10% of the same children were at the genius level. In a later study of 200,000 adults, only 2% were at the genius level of divergent thinking. In his talk, Robinson states that divergent thinking is a prerequisite of creativity. And unsurprisingly, creativity, which demands inquiry and exploring many options, is highly correlated with intellectual curiosity. Similar to the divergent thinking test, another developmental psychology study showed that curiosity declines with age (a correlation of about -.267). While a certain part of this decline seems natural (one evolutionary reason for curiosity is to reduce the cognitive burden of uncertainty, and as you get older, there is less uncertainty in the world around you), few would disagree that formal education has contributed heavily to this plummet in intellectual curiosity. Einstein was right when he said: “it is a miracle that curiosity survives formal education.”

What is it about formal education that is so cancerous to intellectual curiosity? For one, there is a disconnect between the pedagogy of teachers and the learning styles of students. Citing psychologist Howard Gardner’s research on multiple intelligences, Clayton Christensen writes in Disrupting Class that although each student has a unique combination of strengths, weaknesses, and multiple intelligences, curricula are taught in the dominant intelligence of each subject. Thus, the star soccer player, gifted in bodily-kinesthetic intelligence, may be failing his physics class because he is low in logical-mathematical intelligence. Yet the majority of physics teachers teach in the paradigm that caters to their own intelligence—what they are comfortable with and good at. Modeled after the factories birthed in the Industrial Revolution of the late 19th century, schools have only become more standardized and one-size-fits-all as population growth skyrocketed in the 20th century—more and more like a factory line. But kids aren’t cars—every kid is different. Our education system is a square hole into which we try to fit a bunch of different pegs—some triangular, some circular, some rectangular. It doesn’t help that our society as a whole holds an outdated, binary view of intelligence: you’re either smart or dumb. The few square-pegged students are the ones considered traditionally “smart.” In addition, with the rise of grading and standardized testing, formal education has provided more and more extrinsic motivations for learning, diverging further from intellectual curiosity, which is predicated on an intrinsic motivation to learn.

Reflecting on my own upbringing, I was incredibly fortunate to have parents who encouraged exploration and nourished my curiosity in every way, enrolling me in any and all activities I found interesting, from sports to journalism to acting. And even though I was one of the lucky few who was born a square peg and (perhaps because of it) liked school, it wasn’t until relatively recently that I developed the cognitive capability to differentiate my formal schooling from learning as a more abstract principle. Only recently did I start seeing my education as one small piece in a larger, lifelong puzzle of learning. Perhaps formal schooling, which takes up so much of kids’ childhoods and is the source of most of their limited knowledge, leads students to conflate learning with formal education. Thus, when kids perform poorly in school, not because they are incapable, but simply because academic subjects aren’t taught in a way that aligns with their own intelligences, they feel anxiety and worry, even fear and helplessness, and their confidence in their own abilities nosedives (see psychologist Mihaly Csikszentmihalyi’s research on the emotions that result from different pairings of skill level and challenge level); understandably, some give up and become apathetic. Several developmental psychologists have found a positive relationship between curiosity and self-esteem, so performing poorly in school, which lowers self-esteem, chokes off kids’ intellectual curiosity. Consequently, many students never develop their mental faculties to the level of thought that enables them to parse learning from formal education (nor do many want to think about learning), so unsurprisingly, when they graduate from high school or college (or drop out), many stop trying to learn.

But all is not lost. Many educators, innovators, and thought leaders are convinced that technology, particularly the Internet, will disrupt education. From free open online classes, to badges awarded for learning skills, to flipping the learning model upside down so that students watch lectures at home and do homework at school while teachers act as facilitators rather than lecturers, education is currently undergoing a paradigm shift. Yet even as an avid technologist, I am not totally convinced that technology is the be-all, end-all, the turnkey solution to fix our broken education system. It’s too convenient. Too complacent (oh well, some crazy futuristic innovation will come along and change everything). Even if technology turns out to be 42, the answer to everything, while we wait for the Khan Academies and Skillshares and edXs of our world to make everything better, there must be something we can all do within our daily lives to compel change. As Mahatma Gandhi reminded us, “be the change you wish to see in the world.” This post started with the powerful, incandescent, and brief interaction between a parent, Michael, and his daughter, Grace. It has come full circle, back to two people: you and me. It is my hope that this blog post has lit a fire in your head and provided food for thought. I ask you to ponder: how might parents do a better job of nourishing intellectual curiosity in their children’s development? How might schools change their curriculums and paradigms to foster intrinsic motivation in students and craft lifelong learners? Perhaps more importantly, how do we cultivate intellectual curiosity in our own lives? And how do we cultivate it in the lives of the people around us?

Reading, the Classics, and creativity

As a college student trying to absorb as much as I can about the world and figure out my place in it, but also the owner of a Google Calendar with less white space than colored, I struggle to find time to read. In elementary and middle school, I was a voracious reader with a weakness for fantasy (Redwall, The Lord of the Rings, The Golden Compass), but in high school I fell into a dark, dark time when the only things I read were (occasionally) the books assigned for class. Though my track record of reading for class has only declined further in college, I’ve recently decided to give up (mostly) on television, magazines, and keeping up with the news to open up free time for more in-depth outside reading. I’ve always been curious about the way the world works, but only recently did I realize that to tap deeply into the font of human knowledge, tracing the arc of human progress from antiquity to modernity, reading books is indispensable.

What’s on my reading list? Partially because of my staunch advocacy for a liberal arts education (more on that later), and partially because of my belief that to understand something, you must look below the branches, dig underneath the tree, and observe its roots, I believe that a knowledge of the Classics (the Platos and Nietzsches and Marxes and Confuciuses and Joyces), the philosophical and cultural underpinnings of modern society, is vital to making sense of present-day happenings. But recently, in piecing together my summer reading list, I stumbled upon a couple of quotations that challenged my position on the immense value of reading books, particularly the Classics.

The first, by Albert Einstein:

"reading, after a certain age, diverts the mind too much from its creative pursuits. Any person who reads too much and uses their own brain too little falls into lazy habits of thinking."

And the second, by Japanese novelist Haruki Murakami:

“If you only read the books that everyone else is reading, you can only think what everyone else is thinking.”

This presents a dilemma for me, because two principles I value highly happen to be open-mindedness and creativity. Reflecting on Murakami’s pithy words and recalling Howard Roark’s distaste for Classical antiquity in The Fountainhead, I wonder: will strict adherence to reading the Classics confine you to predominant worldviews and chain you to dogma, inhibiting your ability to think big? If books represent knowledge, how do you balance the trade-off between knowledge and creativity that Einstein describes? Do you need books to understand the world within a broader historical narrative, or are the Internet and newer forms of media (blogging, micro-blogging, video) sufficient?

At least for this summer, I’m going to attempt to do it all. It is my hope that this blog will serve as a creative outlet and a forge for some of my own nascent thoughts and ideas. Writing (creation) will be a counterbalance to reading (knowledge). Blogging: the Ginger to reading’s Fred, the Dionysus to reading’s Apollo. Of course, this blogging thing won’t work without you, my friends; I welcome your musings, suggestions, comments, and perhaps above all, your divergent opinions and critiques. After all, as Aristotle argued in Politics, a potluck dinner to which many contribute is tastier than a dinner furnished by one. It is through collective debate among many different perspectives and opinions that we will all learn and grow.