Around 2012, something started going wrong in the lives of teens.

In just the five years between 2010 and 2015, the number of U.S. teens who felt useless and joyless (classic symptoms of depression) surged 33 percent in large national surveys. Teen suicide attempts increased 23 percent. Even more troubling, the number of 13- to 18-year-olds who committed suicide jumped 31 percent.

In a new paper published in Clinical Psychological Science, my colleagues and I found that the increases in depression, suicide attempts and suicide appeared among teens from every background, more privileged and less privileged, across all races and ethnicities, and in every region of the country. All told, our analysis found that the generation of teens I call "iGen," those born after 1995, is much more likely to experience mental health issues than their millennial predecessors.

What happened so that so many more teens, in such a short period of time, would feel depressed, attempt suicide and commit suicide? After scouring several large surveys of teens for clues, I found that all of the possibilities traced back to a major change in teens' lives: the sudden ascendance of the smartphone.

All signs point to the screen


Because the years between 2010 and 2015 were a period of steady economic growth and falling unemployment, it's unlikely that economic malaise was a factor. Income inequality was (and still is) an issue, but it didn't suddenly appear in the early 2010s: This gap between the rich and poor had been widening for decades. We found that the time teens spent on homework barely budged between 2010 and 2015, effectively ruling out academic pressure as a cause.

However, according to the Pew Research Center, smartphone ownership crossed the 50 percent threshold in late 2012, right when teen depression and suicide began to increase. By 2015, 73 percent of teens had access to a smartphone.

Not only did smartphone use and depression increase in tandem, but time spent online
was linked to mental health issues across two different data sets. We found that teens
who spent five or more hours a day online were 71 percent more likely than those who
spent only one hour a day to have at least one suicide risk factor (depression, thinking
about suicide, making a suicide plan or attempting suicide). Overall, suicide risk factors
rose significantly after two or more hours a day of time online.
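
A note on the arithmetic: a figure like "71 percent more likely" is a relative risk, the ratio of the rate of an outcome in one group to the rate in a comparison group. Here is a minimal sketch in Python; the counts are hypothetical, chosen only to illustrate the calculation, not taken from the study.

```python
# Relative risk from a 2x2 survey table. The counts below are
# hypothetical, used only to show how a "71 percent more likely"
# figure is computed; they are not the study's data.
heavy_at_risk, heavy_total = 480, 1000   # teens online 5+ hours a day
light_at_risk, light_total = 280, 1000   # teens online ~1 hour a day

relative_risk = (heavy_at_risk / heavy_total) / (light_at_risk / light_total)
print(f"relative risk: {relative_risk:.2f}")  # 1.71, i.e., 71 percent more likely
```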

Of course, it's possible that instead of time online causing depression, depression causes more time online. But three other studies show that is unlikely (at least, when viewed through social media use).

Two followed people over time, with both studies finding that spending more time on
social media led to unhappiness, while unhappiness did not lead to more social media
use. A third randomly assigned participants to give up Facebook for a week versus
continuing their usual use. Those who avoided Facebook reported feeling less depressed
at the end of the week.

The argument that depression might cause people to spend more time online also doesn't explain why depression increased so suddenly after 2012. Under that scenario, more teens became depressed for an unknown reason and then started buying smartphones, which doesn't seem too logical.

What's lost when we're plugged in


Even if online time doesn't directly harm mental health, it could still adversely affect it in indirect ways, especially if time online crowds out time for other activities.

For example, while conducting research for my book on iGen, I found that teens now spend much less time interacting with their friends in person. Interacting with people face to face is one of the deepest wellsprings of human happiness; without it, our moods start to suffer and depression often follows. Feeling socially isolated is also one of the major risk factors for suicide. We found that teens who spent more time than average online and less time than average with friends in person were the most likely to be depressed. Since 2012, that's what has occurred en masse: Teens have spent less time on activities known to benefit mental health (in-person social interaction) and more time on activities that may harm it (time online).

Teens are also sleeping less, and teens who spend more time on their phones are more
likely to not be getting enough sleep. Not sleeping enough is a major risk factor for
depression, so if smartphones are causing less sleep, that alone could explain why
depression and suicide increased so suddenly.

Depression and suicide have many causes: Genetic predisposition, family environments,
bullying and trauma can all play a role. Some teens would experience mental health
problems no matter what era they lived in.

But some vulnerable teens who would otherwise not have had mental health issues may
have slipped into depression due to too much screen time, not enough face-to-face social
interaction, inadequate sleep or a combination of all three.

It might be argued that it's too soon to recommend less screen time, given that the research isn't completely definitive. However, the downside to limiting screen time (say, to two hours a day or less) is minimal. In contrast, the downside to doing nothing, given the possible consequences of depression and suicide, seems to me quite high.

It's not too early to think about limiting screen time; let's hope it's not too late.

Remember the movie "Moneyball"? The Oakland A's are struggling, financially and on the baseball field. Then they introduce an innovative system for figuring out which players will improve team performance. Moving away from observations by scouts, the A's begin to use advanced statistics to value players. With their new insights, the A's acquire high-impact players for relatively little money. Within a season, they're at the top of the game, and so successful that within a few years the rest of the league has reorganized how it values players, too.

"Moneyball" highlights the power of innovative knowledge systems: creative new sets of tools and practices for collecting, analyzing and applying data to solving problems. All organizations depend on knowledge systems, but it's not uncommon, over time, for the knowledge they generate to become stale and poorly adapted to changing contexts.

As researchers on the resilience and sustainability of cities, we've found that, unfortunately, this has become the case for a number of cities. This is already causing problems: Outdated knowledge systems have exacerbated recent disasters and contributed to growing financial losses from extreme weather, which have exceeded US$110 billion in the U.S. this year alone.

Discussions around improving resilience and adaptation to extreme events often focus
on upgrading infrastructure or building new infrastructure, such as bigger levees or
flood walls. But cities also need new ways of knowing, evaluating and anticipating risk
by updating their information systems.

500-year flood

Consider the use of 100-year or 500-year flood levels to guide urban planning and development. Using this framework, cities hope to prevent small floods while limiting the occurrence of catastrophic flooding.

Yet the data behind this strategy are rapidly becoming obsolete. Weather statistics are now changing in many places. As a result, cities are experiencing repeat 500-year floods, sometimes multiple times, in a few decades or less. Still, cities continue to rely almost exclusively on historical data for projecting future risks.

The city of Houston, Texas, for example, experienced a 167 percent increase in the intensity of heavy downpours in 2005-2014 compared with 1950-1959. The 2017 Hurricane Harvey flood in Houston was the third 500-year flood to occur there in the past three years. Prior to Harvey, Harris County flood control managers downplayed the need to change their knowledge systems, arguing that the two prior floods were isolated events.
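
The statistical point can be made concrete with a short calculation. Under the stationary assumption built into the 500-year label, each year carries an independent 1-in-500 chance of such a flood. The binomial sketch below is my illustration of that logic, not the flood managers' method:

```python
# Chance of at least k "500-year" floods in n years, assuming a stationary
# climate: independent years, each with exceedance probability p = 1/500.
from math import comb

def prob_at_least(k: int, n: int, p: float) -> float:
    """P(X >= k) for X ~ Binomial(n, p)."""
    return 1.0 - sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k))

p = 1 / 500
print(prob_at_least(1, 30, p))  # ~0.058: one such flood in 30 years is plausible
print(prob_at_least(3, 3, p))   # ~8e-09: three in three years, as Houston saw, is not
```

Either Houston was spectacularly unlucky, or the 1-in-500 estimate no longer describes the climate producing the floods. The latter is precisely the case for updating the knowledge system.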

New possible futures


Cities need to better anticipate what would happen in the case of these types of unprecedented extreme weather events. The past few years have seen a growing number of record-breaking storms, droughts and other weather events.

The National Weather Service labeled Hurricane Harvey "unprecedented," both for the rapidity of its intensification and the record levels of rainfall it dumped on Houston. Hurricane Maria hit San Juan as the third-strongest storm to make landfall in the U.S., based on air pressure measurements. Its rapid intensification surprised forecasters and presented yet another challenge to climate and weather models.

Record-breaking events like these cannot be made sense of using statistics grounded in the past frequency of occurrence. Not recognizing the growing risks from extreme weather is dangerous and costly if cities continue to put up more, and more expensive, buildings in increasingly vulnerable locations.

What's needed are new and more creative ways to explore possible futures and their potential implications. One approach is to use climate or other predictive models. Such models are never perfect, but they can add important elements to discussions that can't be gleaned from historical data.

For instance, cities can look at projected sea level rise or storm surges and decide whether it makes economic sense to rebuild homes after damaging storms, or whether it's better to compensate homeowners to move outside the flood zone.

Designing for tomorrow's storms


Cities also need to upgrade their knowledge systems to anticipate risks in what are often called design storms. These are the anticipated future storms that people who design and build individual structures, from buildings to flood walls, are required to use in their designs as a minimum risk standard.

Cities need to seriously rethink their design storm standards if they are to fully
understand and be comfortable with the future risks from extreme weather events to
which their businesses and residents are being exposed.

In New Orleans, for example, the U.S. Army Corps of Engineers created a Standard Project Hurricane in 1957 that defined the wind speeds and storm surges that the levees built around the city would have to withstand. As with most design storms, the Standard Project Hurricane was based on retrospective data of past hurricane frequency and intensity in the century prior to 1957. In subsequent decades, however, hurricane frequency and intensity changed significantly in the Gulf of Mexico, but the Standard Project Hurricane was not updated and protection infrastructure was not upgraded, contributing to its failure in the face of Hurricane Katrina.

Cities and the federal government


One final area for knowledge systems innovation in cities concerns risk inequalities.

It seems increasingly clear that cities like Houston, New York and New Orleans were
poorly informed about how flooding risks would be distributed across communities
within their cities, particularly communities of color and low-income communities.
This inattention to disproportionate risk raises several questions: Were the communities
of these flood-prone cities aware of these risks and vulnerabilities? How much did city
officials and developers know? How did their efforts exacerbate existing disparities? Did
people making decisions about where to live understand the risks they faced?

The significance of knowledge systems for urban resilience extends beyond cities to
national agencies and organizations. Sadly, the Trump administration decided in August
to issue an executive order exempting federal agencies and public infrastructure projects
from planning for sea level rise. Abolishing flood standards is a step backwards for
fostering knowledge systems that enhance urban resilience.

Even if federal agencies choose to ignore sea level rise, we believe cities should pressure them to take it into account. In the end, it is the city and its people who are being put at risk, not the federal government. It is promising, for example, to see local and regional efforts like the Southeast Florida Regional Climate Compact coming together to upgrade their resilience knowledge systems and advocate for desirable federal policies for climate adaptation.

What cities know and how they think are essential to whether cities can make better decisions. For over a century, cities have broadly approached knowledge about weather risks by collecting and averaging past weather data. Nature is now sending cities a simple message: That strategy won't work anymore.

This article was produced by the Knowledge Systems Innovation Group at Arizona State University's Urban Resilience to Extreme Events Sustainability Research Network (UREx SRN) (Eric Kennedy, Margaret Hinrichs, Changdeok Gim, Kaethe Selkirk, Pani Pajouhesh, Robbert Hobbins, Mathieu Feagan).

One of the biggest modern myths about agriculture is that organic farming is inherently sustainable. It can be, but it isn't necessarily. After all, soil erosion from chemical-free tilled fields undermined the Roman Empire and other ancient societies around the world. Other agricultural myths hinder recognizing the potential to restore degraded soils to feed the world using fewer agrochemicals.

When I embarked on a six-month trip to visit farms around the world to research my forthcoming book, "Growing a Revolution: Bringing Our Soil Back to Life," the innovative farmers I met showed me that regenerative farming practices can restore the world's agricultural soils. In both the developed and developing worlds, these farmers rapidly rebuilt the fertility of their degraded soil, which then allowed them to maintain high yields using far less fertilizer and fewer pesticides.

Their experiences, and the results that I saw on their farms in North and South Dakota, Ohio, Pennsylvania, Ghana and Costa Rica, offer compelling evidence that the key to sustaining highly productive agriculture lies in rebuilding healthy, fertile soil. This journey also led me to question three pillars of conventional wisdom about today's industrialized agrochemical agriculture: that it feeds the world, is a more efficient way to produce food and will be necessary to feed the future.

Myth 1: Large-scale agriculture feeds the world today

According to a recent U.N. Food and Agriculture Organization (FAO) report, family farms produce over three-quarters of the world's food. The FAO also estimates that almost three-quarters of all farms worldwide are smaller than one hectare (about 2.5 acres, or the size of a typical city block).

Only about 1 percent of Americans are farmers today. Yet most of the world's farmers work the land to feed themselves and their families. So while conventional industrialized agriculture feeds the developed world, most of the world's farmers work small family farms. A 2016 Environmental Working Group report found that almost 90 percent of U.S. agricultural exports went to developed countries with few hungry people.

Of course the world needs commercial agriculture, unless we all want to live on and
work our own farms. But are large industrial farms really the best, let alone the only,
way forward? This question leads us to a second myth.

Myth 2: Large farms are more efficient


Many high-volume industrial processes exhibit efficiencies at large scale that decrease inputs per unit of production. The more widgets you make, the more efficiently you can make each one. But agriculture is different. A 1989 National Research Council study concluded that "well-managed alternative farming systems nearly always use less synthetic chemical pesticides, fertilizers, and antibiotics per unit of production than conventional farms."

And while mechanization can provide cost and labor efficiencies on large farms, bigger farms do not necessarily produce more food. According to a 1992 agricultural census report, small, diversified farms produce more than twice as much food per acre as large farms do.

Even the World Bank endorses small farms as the way to increase agricultural output in developing nations where food security remains a pressing issue. While large farms excel at producing a lot of a particular crop, like corn or wheat, small diversified farms produce more food and more kinds of food per hectare overall.

Myth 3: Conventional farming is necessary to feed the world

We've all heard proponents of conventional agriculture claim that organic farming is a recipe for global starvation because it produces lower yields. The most extensive yield comparison to date, a 2015 meta-analysis of 115 studies, found that organic production averaged almost 20 percent less than conventionally grown crops, a finding similar to those of prior studies.

But the study went a step further, comparing crop yields on conventional farms to those
on organic farms where cover crops were planted and crops were rotated to build soil
health. These techniques shrank the yield gap to below 10 percent.

The authors concluded that the actual gap may be much smaller, as they found "evidence of bias in the meta-dataset toward studies reporting higher conventional yields." In other words, the basis for claims that organic agriculture can't feed the world depends as much on specific farming methods as on the type of farm.

Consider too that about a quarter of all food produced worldwide is never eaten. Each
year the United States alone throws out 133 billion pounds of food, more than enough to
feed the nearly 50 million Americans who regularly face hunger. So even taken at face
value, the oft-cited yield gap between conventional and organic farming is smaller than
the amount of food we routinely throw away.
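
That comparison reduces to simple percentage arithmetic, sketched below. Only the roughly 25 percent waste figure and the 20 percent yield gap come from the text; the offset calculation is my illustration.

```python
# Rough arithmetic behind the waste-versus-yield-gap comparison.
# Figures are those cited in the text; the offset step is illustrative.
share_wasted = 0.25   # about a quarter of food produced is never eaten
yield_gap = 0.20      # headline organic yield gap (under 10% with soil-building)

# Food actually eaten per unit of conventional production:
eaten_conventional = 1.0 * (1 - share_wasted)             # 0.75

# Waste level at which organic production delivers the same amount eaten:
offsetting_waste = 1 - eaten_conventional / (1 - yield_gap)
print(f"{offsetting_waste:.0%}")  # ~6%: trimming waste to ~6% offsets a 20% gap
```

In other words, even taken at face value, the headline yield gap could be covered by eliminating most of the food we already throw away.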

Building healthy soil


Conventional farming practices that degrade soil health undermine humanity's ability to continue feeding everyone over the long run. Regenerative practices like those used on the farms and ranches I visited show that we can readily improve soil fertility on both large farms in the U.S. and on small subsistence farms in the tropics.

I no longer see debates about the future of agriculture as simply conventional versus organic. In my view, we've oversimplified the complexity of the land and underutilized the ingenuity of farmers. I now see adopting farming practices that build soil health as the key to a stable and resilient agriculture. And the farmers I visited had cracked this code, adapting no-till methods, cover cropping and complex rotations to their particular soil, environmental and socioeconomic conditions.

Whether they were organic or still used some fertilizers and pesticides, the farms I
visited that adopted this transformational suite of practices all reported harvests that
consistently matched or exceeded those from neighboring conventional farms after a
short transition period. Another message was as simple as it was clear: Farmers who
restored their soil used fewer inputs to produce higher yields, which translated into
higher profits.

No matter how one looks at it, we can be certain that agriculture will soon face another revolution. For agriculture today runs on abundant, cheap oil, both for fuel and to make fertilizer, and our supply of cheap oil will not last forever. There are already enough people on the planet that we have less than a year's supply of food for the global population on hand at any one time. This simple fact has critical implications for society.

So how do we speed the adoption of a more resilient agriculture? Creating demonstration farms would help, as would carrying out system-scale research to evaluate what works best to adapt specific practices to general principles in different settings.

We also need to reframe our agricultural policies and subsidies. It makes no sense to
continue incentivizing conventional practices that degrade soil fertility. We must begin
supporting and rewarding farmers who adopt regenerative practices.

Once we see through myths of modern agriculture, practices that build soil health
become the lens through which to assess strategies for feeding us all over the long haul.
Why am I so confident that regenerative farming practices can prove both productive
and economical? The farmers I met showed me they already are.
