https://undark.org/2026/01/13/trump-greenland-environment/
What Would a Trump Takeover Mean for Greenland’s Resources?
Why You Should Text 1 Friend This Week
by Angela Haupt
You probably send dozens of text messages a week. But do any of them truly deepen your friendships?
For many people, the answer is no.
“We’re constantly receiving and exchanging information, and communicating in some way, but that doesn't necessarily mean that the quality of that connection is there—or that the intention is there,” says Miriam Kirmayer, a clinical psychologist in Montreal who researches adult friendships. “When you ask someone if they’ve purposefully and intentionally and thoughtfully reached out to a friend through text, they often say, ‘Hold on, no, I haven’t. I’ve texted someone about arranging a play date for my kid. I've texted my spouse about what we're cooking for dinner tonight.’ But people don't necessarily take the time to check in with friends in that way with intention.”
Reaching out to one friend a week is a manageable way to boost your well-being. Here’s what to know.
Why you should do it
People often underestimate the value of “micro-moments” of connection, like a simple greeting to a coworker or smiling at a stranger. These small interactions “can do wonders for improving our overall levels of social connectedness and reducing loneliness,” Kirmayer says. “It doesn’t take all that much, when we’re feeling hungry for social connection, to feel a little bit closer and fuel and fulfill that need.”
Research has found that social connection predicts both mental and physical health, including protecting against anxiety and depression. Even brief moments of connection can boost overall happiness.
Plus, Kirmayer thinks of the exercise as a way to strengthen your social muscles. It helps you get better at being comfortable with initiation (being the first to reach out); finding ways to keep in touch over time; managing conflict; learning to be vulnerable; and asking for what you need. Every time you text a friend, you’re refining the complex art of being a good friend.
What to say
When Kirmayer gives keynote speeches about friendship at workplaces or community events, she challenges attendees to take out their phones and send a message to a friend during the middle of the event—that way, they can't say they will and then never get around to it. Someone always asks her how to word the text. “The freeing piece of advice I give is that it doesn't actually matter all that much,” she says. “The thing that typically stops us from sending the message is that we get caught up in that perfectionist mindset: ‘This needs to be the right message; it needs to be witty or overwhelmingly interesting or so deeply personal and juicy that of course they will respond.’” When you revise a message in your head a million times, it starts to feel so daunting that people often abandon the effort altogether, she adds.
That said, Kirmayer has a couple of go-to tips for those who crave direction. It’s a good idea to incorporate some personal specificity into your message, she says. Instead of saying “Hi, I’m thinking of you,” offer a reason why. You might word it like this: “Hi, you’re on my mind because I read something by this author, and I know you loved their last book.” Or: “I was just thinking about how amazing that conversation was when we grabbed coffee a couple months ago, and I wanted you to know I’m grateful for you.”
“The more you can tie it to something specific about that person, the more it communicates a level of genuineness and authenticity that people really respond to,” she says. “It’s also a way to make people feel seen and appreciated, and that can be a powerful spark for connection.”
Ask yourself, too, why you’re reaching out to that specific friend. Do you want to get together? Would you like to set up a time for a phone call, so you can connect in a more meaningful way? “When you get clear on not just your who, but your why, that can potentially inform the message,” Kirmayer says. Consider these examples: “I’m thinking of you and would love to know, can we plan a lunch for the coming weeks?” Or: “I’d love to hear how that project you've been working on is going. Do you have time for a phone call one evening this week?”
Put a spin on it
If you find that you enjoy reaching out to one friend a week, make it an ongoing habit. You could further challenge yourself by texting a different person each time. “For some people, it might be worth asking the question of, ‘OK, now do I need to try something new? What's the next rung in this ladder that I’m climbing when it comes to improving my social skills or social health?’” Kirmayer says. If you want to diversify the experience and expand your network—and fine-tune your ability to connect with more than one person—add different people to your contact list.
For others, though, texting the same person each week might feel even scarier than reaching out to an assortment of friends. After all, it means asking yourself: “What else am I going to say? How can I deepen this connection?”
“It’s really a matter of checking in with yourself and being open and honest about what your connection needs and intentions are,” Kirmayer says, “and what area of your social fitness you’d like to work on.”
No, willpower isn’t a muscle – here’s a better way to think of it
by Alberto De Luca, cognitive neuroscientist
The muscle metaphor based on ego-depletion theory hasn’t survived scrutiny. But there’s an alternative that holds promise
You’ve probably heard it said that self-control – or what some people think of as willpower – is like a muscle. Use it too much and it gets fatigued, potentially to the point of exhaustion. The idea has become deeply embedded in pop psychology books, productivity blogs and even casual conversations. After a tough day of decision-making, we say we’re ‘mentally drained’, as if we’ve used up some internal energy or worn out our willpower muscle.
In psychology, this idea is known as ego depletion, in reference to the way that acts of self-control supposedly ‘deplete’ an inner store of mental energy. Skipped dessert at lunch? It’ll be harder to resist Netflix instead of working tonight. The model has an intuitive appeal. It’s tidy, metaphorical, and gives us an excuse to indulge when we feel worn out. Another part of its appeal may lie in the fact that it mirrors how willpower often feels from the inside, such as the heavy mental drag after a day of resisting temptation or making decisions, or the way distractions suddenly seem more alluring when you’re tired.
But what if it’s wrong? What if willpower doesn’t deplete at all?
The theory of ego depletion became wildly influential after its introduction in the mid- to late-1990s. Apparently backed by dozens of lab studies, it suggested that every act of self-control – resisting temptation, focusing attention, managing emotion – taps into the same limited internal resource (the same willpower muscle, if you like). Once that muscle is fatigued, we’re more prone to impulsivity, distraction and poor decisions.
The theory made its way into bestselling books, corporate training and even political life. The former US president Barack Obama once explained that he wore the same-coloured suits every day to save his decision-making power for the things that mattered. The idea wasn’t just popular – it seemed to be useful. It helped people explain their mental exhaustion and opened the door to strategies to help conserve willpower or build it up.
But over time, cracks began to show. In many of the experiments testing ego-depletion theory, participants would complete one task requiring self-control – such as ignoring distractions on a computer screen – and then attempt a second demanding task. According to the theory, their performance should drop on the second task. In everyday terms, this would be like spending the morning ignoring your phone to focus on work, only to find it harder to resist scrolling social media in the afternoon.
Yet meta-analyses that combined results across multiple studies failed to find consistent support for ego depletion. Several large-scale replication efforts, involving labs across the world, came back with mixed or null results. Even the most basic questions – what exactly is being depleted? – remained unanswered. Early versions of the theory linked depletion to blood glucose, but that explanation has been largely discredited. In short, the more psychologists tried to pin down ego depletion, the more it slipped through our fingers.
However, despite these challenges, supporters of the ego-depletion account still defended it, claiming that the tasks used in the studies weren’t difficult or long enough. For that reason, my colleagues and I set out to create a design that gave ego depletion multiple opportunities to manifest. We reasoned that if willpower really is like a muscle, then the longer someone engages in a mentally demanding task, the more fatigued or depleted they should become. Their performance on subsequent tasks – especially those that also require self-control – should get worse over time.
We ran a 35-minute online study that involved participants completing two alternating tasks. One, a challenging numerical version of the Stroop task – which required participants to quickly say how many digits appeared on screen, ignoring the numbers those digits represented – was designed to tax their cognitive control. The other, a global-local task, was a bit like spotting the forest and the trees at the same time: sometimes you had to focus on the big picture, other times on tiny details, and quickly switch between the two. We used this second task to measure whether our participants’ ability to focus and respond effectively deteriorated as time went on.
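To make the numerical-Stroop design concrete, here is a hypothetical sketch of the trial logic in Python. This is not the study’s actual materials: the digit range, the field names, and the proportion-of-conflict parameter are all illustrative assumptions.

```python
import random

def make_stroop_trial(congruent: bool) -> dict:
    """One numerical-Stroop trial: the participant reports HOW MANY
    digits appear, ignoring which digit it is. '22' is congruent
    (two 2s); '222' is incongruent (the digit 2 conflicts with the
    count of 3)."""
    count = random.randint(1, 4)
    if congruent:
        digit = count
    else:
        # Pick a digit identity that conflicts with the count.
        digit = random.choice([d for d in range(1, 5) if d != count])
    return {
        "stimulus": str(digit) * count,   # e.g. '444'
        "correct_response": count,        # number of digits shown
        "congruent": congruent,
    }

def make_block(n_trials: int, p_incongruent: float) -> list:
    """A block of trials; raising p_incongruent stands in for a harder,
    'high-depletion' version of the task with more conflicting trials."""
    return [make_stroop_trial(random.random() >= p_incongruent)
            for _ in range(n_trials)]
```

On this sketch, the easy and hard conditions differ only in how often the digit’s identity conflicts with its count – a rough stand-in for the “more frequent conflicting trials” manipulation described in the study.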
In direct contradiction of ego-depletion theory, our participants adapted as time wore on: they got faster and more accurate, and showed no systematic decline in performance, even after prolonged cognitive effort.
Crucially, we also varied the difficulty of the Stroop task: some participants received a ‘high-depletion’ version (with more frequent conflicting trials), while others did an easier version. If willpower really worked like a muscle, then our ‘high-depletion’ Stroop task was the mental equivalent of running uphill rather than on a flat – it should have drained participants’ reserves faster than the easier, flat-terrain version. But that’s not what we saw. Instead of slowing down, the participants in the ‘uphill’ condition kept pace, and in some cases even got quicker.
If the ego-depletion model doesn’t fit how people’s self-control or willpower actually works, we need a new way to think about it. One promising alternative is metacontrol theory – a framework that makes a conceptual shift away from the idea of ‘mental resources’ to mental modes. First proposed by the cognitive psychologist Bernhard Hommel, this theory suggests that the brain operates along a continuum between two cognitive states: persistence and flexibility. You can think of it like a car with two gears: persistence is the low gear for steady climbing, flexibility is the high gear for cruising and taking in new routes.
In a persistent state or low gear, your brain narrows its focus. You’re more goal-driven, more resistant to distractions and more likely to power through a difficult task (of the kinds that we used in our study). In a flexible state or higher gear, you’re more open to alternative ideas, better at integrating new information and more likely to let go of rigid goals.
Neither mode is ‘better’. They’re context-dependent strategies. In daily life, you’ve probably felt this shift: pushing through the last part of a workout or study session in persistence mode, then relaxing into flexibility when chatting with friends or brainstorming ideas.
Persistence is great when you’re writing a paper or resisting a craving. Flexibility helps when you’re brainstorming or navigating unexpected challenges. And crucially, your brain naturally shifts between these states, especially when tasks are prolonged or rewards are uncertain.
From this perspective, what looks like ‘depletion’ or fatigue might actually be a transition. After an extended period of persistence, the brain may ease into flexibility – not because it’s run out of mental fuel, but because switching gears is adaptive. From an evolutionary perspective, our ancestors likely gained a survival advantage from readily switching between mental modes or gears rather than locking into one task until the point of exhaustion.
Not only does metacontrol theory better explain the way people’s performance changes over time, it also aligns with current neurobiology (unlike ego-depletion theory). It links changes in cognitive mode to dopaminergic activity in different brain circuits – specifically, the prefrontal cortex and the basal ganglia. These systems are known to regulate the balance between focused persistence and flexible adaptation. In simple terms, brain chemistry can nudge us toward different modes of thinking. For example, when dopamine levels rise in certain areas at the front of the brain, we tend to become more persistent and focused on a single goal. When dopamine shifts to deeper brain areas, it helps us become more flexible and open to new ideas.
All of this has practical implications for how we think about effort, self-discipline and failure in everyday life. From a first-person perspective, the phenomenology of a ‘lapse’ – mind-wandering, craving a break, or suddenly rethinking your goals – may actually reflect this adaptive gear-shifting, not a failure of character or a depleted willpower muscle.
If willpower isn’t something we lose over time, but a mode we shift out of, then moments of distraction or fatigue may not reflect weakness – they may reflect natural transitions in the cognitive system. And those transitions can be shaped by context, motivation, environment and feedback.
Sometimes, a short break isn’t failure. It’s recalibration.
In other words, when you experience mental fatigue or what feels like weakening willpower, it is not necessarily a sign that you need to push harder. Sometimes, what we interpret as depletion might actually be the system doing exactly what it’s evolved to do. In these situations, it might be wiser to pause, reset or switch strategies. In practical terms, this might mean stepping away from your desk after struggling with a tricky report to take a short walk, or switching from writing an essay to organising your workspace. By changing the type of mental demand, you give your brain a chance to reconfigure its control mode without ‘burning out’.
If it’s time to retire the ‘willpower as muscle’ metaphor, then what could be an alternative? I propose imagining your willpower as a car with two gears – not something that gets worn out, but something that changes gears based on your environment, your goals and your internal feedback – the brain’s ongoing monitoring of how well your current strategy is working, based on signals like perceived effort, progress toward the goal and the rewards you expect to get.
This doesn’t mean we should abandon discipline, but that improving willpower may require less emphasis on grit – and more on insight (that is, better understanding the way your brain monitors, modulates and adapts). For this we need more nuanced psychological models of willpower – ones that reflect how the brain actually works, rather than how we wish it did.
Alberto De Luca is a cognitive neuroscientist and psychologist at the University of Bologna, Italy. His research focuses on cognitive control, emotion, and digital mental-health interventions. His work has been published in leading journals in experimental psychology and neuroscience.
https://theamericanscholar.org/the-twilight-self/
The Twilight Self
Embracing mutability in a world gone mad means understanding how fantasy took hold of American psychiatry
Years ago, when I was interned in a psychiatric unit, I watched as a young woman was wheeled in on a gurney. She was about my age and only semiconscious, her pallor nearly as gray as the gown issued to each of us patients. The gray blanket covering her legs and feet and the unit’s gray walls gave the whole scene, as I recall, a chilling grisaille, overlayered with apprehension. The accompanying silence was funereal. Sounds baffled, death was in the air.
Penny, as I will call her, had been resuscitated in an emergency room after an overdose of barbiturates, and then brought to the unit for treatment. After sleeping for a few days and starting on some 1980s-era pharmaceutical rescue regimen, she became lively, smart, affable, and wry. She and I formed a friendship, in a fellow-patients-in-a-hospital-unit way, got together a few times post-discharge, then lost touch. Yet decades later, the memory of that underworld scene still elicits a frisson of dread.
I’d been interned there for being depressed and lonesome, desperate to escape my family’s view of me, and dangerously self-depriving. Eleanor, down the hall, had been interned for not “responding” to antidepressants and sentenced to electroconvulsive therapy. Gene, in the room across from mine, ditto. MaryAnn had also “failed” antidepressant therapy and was there to be transitioned onto a different drug. Now Penny.
Psychiatric hospitals have often been likened to prisons, and I wouldn’t disagree. But the place was also a factory, its product the clearer-minded person, its processes regularized. I was to sleep a certain amount, consume a certain number of calories, and undergo both psychotherapy (still Freud-tinged, in those days) a couple of times a week and ancillary therapies (art, occupational, group). Penny was to take prescribed doses of the new medication that would wean her off the other medication (of which she had taken too much) and also undergo talk therapy. Eleanor and Gene had their electroshock sessions, MaryAnn her pill regimen. A process for each of us. Wake up, medication call, breakfast, therapeutic activities, medication, lunch, rest, more therapy, medication, dinner, hang out in the TV room to practice being social. We were empty vessels, waiting to be filled.
What stings in remembering these gray scenes is that our inner worlds had been remade into deserts, waiting to be colonized by the medical institution.
Why does this memory ring so loud right now? I think it’s because our universe has suddenly come undone. The Enlightenment has collapsed. The external world can’t be made sense of, not by resorting to any principles other than “He Who Has, Gets” and “Only the Strong Survive.” The inner world, the ever-roiling ocean that we call the mind or the self, is increasingly threatened with oblivion. The central fact of humanity, which is that we are all alike in that our inner worlds are all unlike, is in danger.
The threat is not that we won’t be able to think (although I suppose AI, if it doesn’t eat itself as I suspect it will, would eventually abolish even that faculty of ours). The threat is that we will not be able to think for ourselves, and to feel with others. For that, after all, is the aim of the regimes that dominate in the United States today, and in many other nations, too. To reduce us to autonomous but utterly lonesome monads, detached from our inner worlds and the emotions of the people who live alongside us.
In this era of the World Turned Upside Down, it’s urgent, and might be somehow comforting, to remember that one’s inner world is always upside down: uncertain, inconsistent, not fully knowable, fearful, and constantly shifting.
That memory of hospitalization awakens now because it has everything to do with the mind, the belief in illnesses of the mind, and the urgency with which powerful institutions try to turn the unruly self into a manageable thing. To draw from Walter Benjamin, the remembrance of past experience revivifies a uniqueness, a “now-time,” which remains crucial to appreciating the current moment. The recovery of memories may offer a script that “prophetically glosses the text” of the past for an uncertain present.
Our present is certainly uncertain.
“Illness is the night-side of life,” Susan Sontag writes, the dark “kingdom,” as she calls it, where sooner or later we all end up. Sooner or later: she means that we will all eventually be diagnosed, and in that sense, we will be diagnosed with the thing that will kill us. Fair enough. But night-side and day-side aren’t the only two realms: Most of us, much of the time, live in twilight, illuminated by relative vigor and, for the lucky, by love, but darkling with self-doubt, anxieties, the troubles of today and tomorrow. Recognizing our misgivings about our own selves, more and more of us reach for diagnosis: The number of depressed people only goes up each year, and a 2023 Gallup survey found that 29 percent of Americans—nearly a third of us—report having received a depression diagnosis at some point. We see the same trends with ADHD, anxiety, and symptoms of obsessive-compulsive disorder. Many people seek a diagnosis for the silence it provides. For the release from exploring, even reckoning with, inner turmoil. For permission to tell a prepackaged story of themselves, which is to say to adopt what today is called an identity.
Religion offers ways to speak about the anarchic, disheveled authenticity of the self; for many centuries it was our primary explanation. When the outer universe is mysterious, the inner one can be, too, and we might feel reassured that the inner tumult is normal, even foreordained. Odysseus needn’t doubt that he truly wishes, always, to return to Penelope, for it is Poseidon’s treachery that is keeping him from home. Moses’s yearning to cross into the Promised Land isn’t to be fulfilled because his faith had wavered at the Waters of Meribah, and he accepts God’s prohibition. In the sixth century BCE, the followers of Asclepius advocated “incubation,” or sleeping in the temple, whereby the cure to the sleepy sufferer’s malady might be delivered in the form of a dream. Even as late as the 14th century, Julian of Norwich could take comfort in knowing that God “keepeth us even alike secure in woe and weal.” The world is greater than any of us, religion tells us. Shrouded in myth, the world is ever inscrutable. Inner disorder is incomprehensible, which is to say, ordinary. Don’t fret.
That religion thrived even after the Enlightenment surely has something to do with this capacity to offer solace to the troubled mind. Even Freud, who did more than anyone to map the inner world, left the unconscious mind a terra incognita—explorable, but provisionally at best, and only via arduous journeys with trained guides (psychoanalysts, that is). We remain mysteries to ourselves. And, therefore, essentially unique. Fret, but no more than the next person on the therapist’s couch.
My entry to the psych ward was a border crossing. I went from a not entirely unpleasant hospital room (a window overlooked a garden, which I could even go downstairs and stroll in) to that gray ward: hermetically isolated and air-conditioned, with plastic furniture, windows covered by egress-preventive steel screens, sleeping-room doors you weren’t allowed to close, and staff who hung out in a glass-walled nurses’ station that made their conversation (and any trace of the outside world) inaudible. I spent the first couple of nights in the observation room, in which the bed was positioned such that, with the door statutorily open, the inmate was fully visible from the nurses’ station. When I had acquitted myself adequately in this hazing (which I suppose just meant that I didn’t try to hang myself with my pajamas), I was moved to a different room. Penny was the next resident of the observation room; she spent a long time there, as I recall. Not trustworthy, I guess.
The psychiatrists who had devised and now managed this place were sure of themselves. They knew what was “wrong” with all of us, and they knew what to do to “stabilize” each one. It was—ironically, given the very different role played by religion—quasi-ecclesiastic, the feeling intensified by monastery-like imperviousness to outside noise or news, communal meals of food all drawn from a white-yellow-beige palette, and secret confessional meetings with the shrinks.
I know that the regimen must vary across psychiatric institutions and rehabs and has undoubtedly changed over time. But the salience of the memory, today, is the exposure to the certitude of the psychiatrists who were sure they knew about my inner world. Their confident deformation of the outer one in the service of making my mind as alike as possible to others’ is cut from the same cloth as the totalizing forces in today’s political world. With the self, each self, marked as known, the inner world is stripped of its clothing of myth.
Such obsessive separating means that you—I, each of us—must join either Sontag’s day-side or the night-side. We may not linger in the gray of twilight, or struggle in our own complicated but characteristic way. Penny was in the hospital in her gray gown because her problems, which were manifold, had been reduced to one: barbiturate dependence leading to overdose. I was there for a longer, but still discrete, list of problems: anxiety, anorexia, an adjustment disorder, depression, possibly a few others of which I wasn’t informed. I needed (at various times, according to various psychiatrists): anxiolytics, antidepressants, psychoanalysis, different antidepressants, strict behavioral therapy, some combination thereof. Most of the others on the unit had been diagnosed with major depression, often depression refractory to treatment with medications; they were sent to electroconvulsive therapy. We were all on the night-side. We needed illumination. Which would be provided by the psychiatrists, armed with pharmacology (or electricity). We were not mysterious, deep, and changeable seas. We were things. Empty houses waiting for experts to bring us to the acceptable standard of habitation.
This reconceptualization of the world into one of thinginess requires the compression of complex beings into interpretable blocks. Other writers have explored the concept of reification in this vein. Hannah Arendt phrases it this way: “Action, speech, and thought” must undergo a “materialization … in order to remain in the world at all.” For her, this interpretable block “replaces something which grew out of and for a fleeting moment indeed existed as [a] ‘living spirit.’ ” Moreover, the hegemony of instrumental thinking means that goodness will be understood not in terms of virtue, interpersonal relations, or freedom to create, but only as comfort, convenience, and security from violent harm—that is, the benefits that things offer, rather than what people do.
Another reason why the memory of the hospital returns right now: In this moment, when the regimes of power destroy language and trample rights on behalf of a politics that flows from hate, it’s more than ever essential to resist stripping the inner world of its mystery. If we shed our mystery, become empty territory, a thing waiting to be worked on by others, we lose the language of renewal. “The new beginning inherent in birth,” Arendt writes, “can make itself felt in the world only because the newcomer possesses the capacity of beginning something anew, that is, of acting.”
Mysterious territory eludes regimentation. Being free to create, to renew, means honoring the messiness of the inner world.
Psychiatry’s role as the guardian of “mental health” impels psychiatrists to map its boundaries. The task is elusive, not only because the territory is unknowable but also because the cartography is buffeted by cultural winds, as American history attests. Already in 1840, the U.S. national census included a category for the “insane and idiotic.” The Association of Medical Superintendents of American Institutions for the Insane was founded soon after. It became the American Medico-Psychological Association before the turn of the 20th century, then set itself to clearing up the muddle of diagnoses applied to institutionalized Americans. The result, in 1918, was the Statistical Manual for the Use of Institutions for the Insane, the precursor to today’s definitive DSM-5 (Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition). That was the foundation for subsequent boundary-setting expeditions, for already in 1918, the book’s aim was to codify conditions for the sake of counting. When the renamed American Psychiatric Association brought out the DSM-I in 1952, it maintained this focus (adding “Diagnostic” to the title), in the hopes of producing a kind of gazetteer also applicable to the variegated landscape of non-institutionalized Americans.
The DSM was updated in 1968, but it retained the psychoanalytic semantics that marked the first edition. In that earlier edition, symptoms were “reactions,” and disorders arose from conflicts between personality and inner drives or needs. The DSM-III, appearing in 1980, was swayed by the multiple anti-Freudian movements in the profession, moving the neuroses from their place as alternative reaction formations to disorders in their own right. Depressive neurosis, for instance, was recast as major depressive disorder.
Psychiatry’s cartography was profoundly shaped by the increasing availability of pharmaceutical products that could “improve” (psychiatrists would not claim to “cure”) mental disorders. In 1952, psychiatric researchers in France who ran a trial of chlorpromazine, a relatively new drug, reported a specific tranquilizing effect that could dampen the drives said to lead to insanity; just two years later, the U.S. Food and Drug Administration approved it as the first specific treatment for psychoses. Smith, Kline & French advertised it as a drug that “reduces or eliminates the need for restraint and seclusion” for the mad, thereby increasing mental hospitals’ capacity to “serve more patients than ever before.” The medical industry depended on illness; ameliorating illness without extirpating its causes would, implicitly, lead to greater earnings everywhere. Smith, Kline & French revenues increased severalfold over 10 years. In the aftermath of the chlorpromazine revolution, drugs for depression and anxiety—leaders among the new disorders—became big sellers, helping to popularize these diagnoses and making their manufacturers even wealthier.
Whereas psychiatry in Europe developed in universities and private consulting practices, in America psychiatry grew out of the mental hospital, where it quickly fell prey to the profit motive. First the pharmaceutical sector and then the medical insurance racket depended for their profits in part on the compliance of psychiatrists in diagnosing and medicating patients. This collusion was one of the grounds for critics to charge that psychiatry was inventing disorders just so that it could pretend to treat them. Thomas Szasz, whose The Myth of Mental Illness appeared in 1961, would later write, “We should value liberty more highly than mental health, no matter how defined.”
In the six decades since The Myth of Mental Illness, psychiatry hasn’t identified biological bases for many of the soul struggles it faces. Nor has it taken seriously Szasz’s implicit indictment: Diagnosis and treatment are inimical to liberty. Psychiatry is unlike other medical specialties in that it can never demonstrate the validity of a diagnosis by excising a sick organ or diseased tissue, closing a wound, or killing the infecting bacteria. There’s some truth in the accusation leveled by its critics (Erving Goffman, Peter Conrad, R. D. Laing, and many others) that it is a power system in its own right.
I disagree with Szasz’s rigid dichotomy between the physical body and the mind. And the mental hospital system of his day is much reduced now. Still, there is an essential truth in his critique. “Mental illness” is semantics, not truth. However much it is meant to package the troubled inner world as a knowable and manageable phenomenon, it is not a description of anything like that. Nothing can be pinned down at all. Some mental phenomena have sharp edges that can be beveled with treatment, but that’s all.
It’s certainly true that some diagnoses, and some therapies, help people to suffer less. To me, the greater problem is that psychiatric diagnosis passes for an abiding truth about the self. When you identify yourself as a “chronic depressive” because you take an SSRI every day, or an “alcohol abuser” because you attend Alcoholics Anonymous meetings, it is easy to see as mere symptoms the dark misgivings or shaky uncertainties that mark your you-ness. You come to believe you need a higher dosage or should work harder on the 12 steps, and then you’ll be normal. In the daylight.
Not long after my hospital discharge, a psychiatrist prescribed the antidepressant imipramine for me (tricyclics, the class of drugs imipramine belongs to, were more or less state-of-the-art in those pre-Prozac days). Imipramine offered what today we might refer to as neuroenhancement: It coated my melancholy with a veneer of contentedness. It allowed me to sit on the subway without attracting attention by weeping.
Soon, though, I was troubled by the sense that the upbeat cheeriness was an epiphenomenon, as if the hopeful mood were on loan and not truly my own. After a while, I stopped taking the medication. My feelings, bleak again, were at least identifiably mine. The psychiatrist berated me for stopping the drug without consulting him, and when I told him I wanted to make the decisions about what medications I did or did not take, he told me I was “crazy” and refused to see me anymore. Just as well. My point is that the experience revealed to me the promise of pharmaceuticals (they help you act like everyone else) and the danger (they help you act like everyone else). Where, in this cheery persona, was my own dear melancholy self?
Psychiatry might have gone in a more humane direction. Knowing that cure can’t be proven, it might have ventured to explore the twilight self, the psyche in its wild ambiguity. This nearly happened a century ago. In the 1920s, psychiatrists in Germany—one group in Berlin, another in Heidelberg—began using psychoactive drugs to explore mental states. Their aim was to alter the psychic life, as the leftist physicians Ernst Joël and Fritz Fränkel, mainstays of the Berlin group, suggested. Not merely to medicate the problems of the mind, that is, but to incorporate all possible phenomenological and analytical approaches (including mescaline and hashish) to help the sufferer—who was, they knew, the victim of oppressive social conditions. In Heidelberg, a group of psychiatrists crystallized around Karl Jaspers, the psychiatrist-turned-philosopher, for whom the concept of mental illness itself was off base. A better approach would be phenomenological, in his view, emphasizing the fact of irrationality or delusion rather than its content. In keeping with this approach, Kurt Beringer, a member of the Jaspers circle and of the Heidelberg University Psychiatric Clinic, made use of mescaline to unlock the pathways of delusion. The so-called mental illnesses—rooted as they are in the irrational—are beyond interpretation in the usual sense, these mind explorers knew. Both consciousness and spirit are modes of encompassing essential experiences, indispensable to the process of finding ourselves in the world.
A psychiatry centered not on fixing discrete problems but on exploring and expanding the sense of self-in-world (and using psychoactive drugs to assist in the effort) was short-lived. Although the Nazis had much to do with that (all members of the Berlin group were leftists and Jews, and some of those in Heidelberg were Jewish; they either fled Germany or were killed), the rise of American pharmaceutical-assisted psychiatry was its death knell. By the 1980s, in the shadow of the DSM-III—and with specific drugs now allegedly capable of ameliorating angst, melancholy, delusion, disorder, oscillation, and more—diseases were invented to make the case for the pharmaceuticals’ value. That was exactly in keeping with the now-customary practice of inventing needs that could be met by new technologic breakthroughs. You might not have known that you needed to see who was ringing your doorbell even when you weren’t home, but now there’s a product that you really must have. You might not have realized that your stress about work and family was anxiety, but now there’s a product—a benzodiazepine, or an SNRI—that you really must have.
A century after these early efforts, psychedelics are making a comeback in medicine. But still they are shoehorned into treatment of specific mental illnesses: depression refractory to other medications, post-traumatic stress disorder, “existential anxiety” in people with cancer, a few others. The psychiatrist has become a mechanic, a fixer of broken minds, whether he prescribes ketamine or Lexapro. Experiences must be reduced and simplified. Specific states must be named so that the one that needs fixing can be pinpointed. Penny was addicted to barbiturates. I was an anxious anorexic with an adjustment disorder and depression. This one is a real American; that one is “illegal.” This one is fit and worthy; that one is “disordered” and a drain on resources. The powers that be, bent on uprooting and isolating us, rely on an obsessive particularizing.
Melancholy was the term the ancients gave to the twilight. The critic and art theorist László Földényi wondered whether the melancholic, “combining the center and the extremity in a single individual … perceives the order of the cosmos by constantly violating it.” Testing the universe’s boundaries, its substance, its consistency—this is what art does. It is part of being human. Other animal species may communicate with one another, but to have a worldview and yet defy it—to make art—this is not for the birds or the bonobos. Only us.
Why, then, the urge to mechanize the self, to reduce to a simple either-or, broken or fixed? Is it the romance of technics, the fantasy that technology will allow you to escape perpetual immersion in yourself, as the bus to Dublin is for Edna O’Brien’s country girls? Is it this fantasy that accounts for hewing to the diagnosis and taking the medicine? That fantasy certainly overtook American psychiatry. But the mechanization also fortifies management structures. Those include the patienthood-in-perpetuity one, in which pharmaceutical corporations accumulate capital by providing the medications that, we are supposed to think, will fix what is broken, encouraging the writers of the DSM to feature yet more diagnoses and giving other corporations, the insurance companies, grounds to take in annual revenues in the billions of dollars by limiting or simply refusing to pay for treatments and hospitalizations. Also, as Peter Conrad and other sociologists have pointed out, modern psychiatry enforces a moral perspective of equally simple dichotomies, one that tends to equate psychic deviance with maleficence.
My view goes further. The reason for our alacrity in diagnosing resides not in social formations alone. It is in us. It is our avoidance of looking at, of owning up to, and especially of speaking about our own troubled souls. It is a silencing. By parting ourselves into the well and the ill, the potentially rich language of twilight is quieted. The poets may write with precision of the turbulent soul (I think of Robert Lowell, “I myself am hell”; Anne Sexton, “And I am queen of this summer hotel / or the laughing bee on a stalk / of death”; and Philip Larkin, “… the dread / Of dying, and being dead, / Flashes afresh to hold and horrify”; you likely have your own favorites). The painters, too, with madness, melancholy, or an affect that resists naming. What you get from Rothko, Kandinsky, van Gogh, or others might be indefinable, but it isn’t silent.
But outside of art, it is with silence that we respond to the face of turmoil. To the young woman in gray, semiconscious in the hospital unit. To the reeking unwashed man in the subway car who is talking to himself, to the woman sprawled on the sidewalk nodding through an opiate dose, to the college student on the bus digging her fingernails into the flesh of her forearm as she weeps audibly, to the shirtless and barefoot man raving in the middle of the street, to the guy eating the remains of a pizza slice out of a sidewalk trash can, to the woman speaking volubly to Jesus in the park. To them, we say nothing. We might call the cops. Perhaps the police will hold the person and bring on the psych evaluation for involuntary commitment. We hope they won’t shoot.
I have shed the various problems that, 40-odd years ago, gave professionals cause to lend order to my disordered life by diagnosis. (Sometimes, Dr. Freud, hysterical misery turns into ordinary unhappiness without psychoanalysis.) In my case, many years of cognitive behavioral therapy helped—possibly it wasn’t the type of therapy that mattered, but that I quickly came to trust the therapist and kept going back, and back, and back. I have come to treasure the gems that twinkle in my internal half-light: my unreasoning fear of bears; the sweet feeling of flow when I go running; a persistent aversion to optimism; a certainty, when I’m out at night, that spirits of evil and those of transcendent beauty dance in the dark; the abiding sense that renewal of the world is possible, side by side with my faith in Kafka’s aphorism that there is endless hope in the world, but not for us. By these gems, I know myself. The inner world is enigmatic and inconstant because it is always—and about this Freud was right—tumbling in the deep roiling of the soul occasioned by the crashing of the outside world’s tides onto the unyielding headland of the fixed and deep urges. The turmoil doesn’t cease. The woe so often outstrips the weal.
Of the many disheveled people I knew as a young man, none continued to be the diagnosis of that time. Penny, barbiturate habitué, became vibrant, and later troubled, and finally a resident in whatever pain sent her to the drugs in the first place. Others went on in a muted melancholy or mania, or both, adjusting to themselves, sometimes with the assistance of medication but more often just with time. A few produced art. Others, ostensibly well after a time, later found themselves in other twilit variants, went to the customary mental authorities, and landed a different diagnosis (bipolar disorder, ADHD, etc.), leading to additional prescriptions to help them live on the day-side. Often, this worked, in its way. Sometimes it didn’t: Quite a number of these people, at first deemed to have “shown improvement,” later killed themselves. Some left the somber twilight to join the brightly illuminated American body social: productive, artifact-consuming, mindful of their diet, attentive to “wellness,” cheerful. Some, and I include myself, managed a life of work and friendship, love, children, a home—yet with no illusions that we have left the twilit world behind. Always in the company of regret and misgivings, alive to the world’s constant deviation from its preferred kilter, host to dreadful dreams and night sweats.
To have shed a diagnosis of mental disturbance is not freeing, I mean. No more than to be given one is in any way defining of the self. You may make something of the diagnosis: become a “recovering addict,” a “trauma survivor,” the bipolar child of manic-depressives. You may even convince yourself that this is who you really are—as if anyone is ever really, definitively, and consistently one thing. As if you are a thing.
All of the rhetoric about the authenticity of disorders, the harms of mental hospitals, and the overweening power of psychiatrists and all the debates about pharmaceutical therapies have served to hide this simple fact: None of us is a thing. Inside each of us is a world roiling with grief. The psychoanalysts of the early 20th century knew this, even as they differed as to what its sources were.
Philip Alcabes is Professor Emeritus of Public Health at Hunter College, the City University of New York. His writing on health, medicine, and illness has appeared in the Scholar since 2004. He is working on a book about the institution of medicine and public health.