https://www.bbc.com/news/live/cwyqq7z2z2wt
Wrestling legend Hulk Hogan dies aged 71
Yeh Rishta Kya Kehlata Hai - 1st Aug 2025 EDT
GEETU & KICHDI 1.8
71st National Film Awards (Celebrating 2023)
New Time Slot
ONE CHANCE GIVEN 2.8
Yeh Rishta Kya Kehlata Hai - 02 August 2025 EDT
Congratulations SRK National Award
Makers' mission to prove Navri incompetent in all aspects.
Katrina Kaif Pregnancy Rumours
Anupamaa 01 Aug 2025 Written Update & Daily Discussions Thread
ManVik Hits 150 & Forum Hits 100😎
Congratulations National Award Winning Actress Rani Mukerji
Anupamaa 02 Aug 2025 Written Update & Daily Discussions Thread
A joke called National award
💕 Lexophile Dosts 💕 August 2025 Reading Challenge
22 years of Hungama
Rate episode 62: "Officer Purvi's Narrow Escape"
An 8th-standard student was asked to write an essay on "THIEVES". You might be surprised to read it, but it sheds some light on the subject. This is what he wrote:
Thieves are also an important part of a nation's economy. They play a significant role in providing employment and contributing to the nation's development.
Safes, locks, lockers, cupboards, etc., are made only because of thieves. Many factories and workshops involved in making these items provide employment thanks to this profession.
Even in homes, masons and workers get work installing latches, locks, grills on windows and doors.
Then, to protect houses, shops, schools, colleges, offices, and factories, security guards and watchmen are essential.
Companies that manufacture CCTV cameras, metal detectors, and security systems also generate jobs.
Because of thieves, police officers, court staff, judges, lawyers, and others are employed.
Purchases of barricades, weapons, bullets, batons, uniforms, vehicles, and motorcycles for the police help boost the economy.
Thanks to thieves - jails, jailers, and prison staff have jobs.
When items like mobiles, laptops, cars, motorcycles, electrical appliances, purses, or lipsticks are stolen, people have to buy them again, which boosts business.
Famous and notorious thieves often enter politics, where even bigger thefts take place. Much more could be said, but overall, the contribution of thieves to a nation's economy is noteworthy.
The teacher awarded this research-rich essay full marks (100%) and included the student in the merit list.
We're learning just how early in life empathy starts to move us
Your Face Tomorrow
The puzzle of AI facial recognition
by Michael W. Clune
I first realized there was a new use for my face when I got my passport renewed in November 2023. I went to the local CVS to have my photo taken. The harried woman behind the counter groaned and led me to the corner of the store where they took the pictures.
She told me to stand in front of a white screen. It was the only usable surface of the store that was not covered by words about consumer medical products, images of consumer medical products, or the bright colors associated with consumer medical products. The screen was the blank white of a medical crisis.
She told me to take off my glasses. I did. It was blurry, but I could just make out the electronic camera's tiny lens embedded in the black device before me.
I waited.
"Stop that," she said.
"Stop what?"
"The . . . that thing you're doing with your face."
"My face?"
"Smiling," she said irritably. "Stop smiling. You're not supposed to smile."
You're not supposed to smile? I hadn't remembered that being the rule the last time I got my passport renewed. Had something changed?
I relaxed the muscles of my face. It felt strange to be standing there in the middle of the store with no expression at all. What they want with my face has nothing to do with my expression, I thought. This isn't about what I can do with my face. This is about what they can do with it.
A word from something I'd read recently swam into my consciousness. Faceprint: "A digital scan or photograph of a human face, used for identifying individuals from the unique characteristics of facial structure."
"Okay," she said.
When I put my glasses back on, I looked into the reflective glass of the camera. It looked like the black, expressionless eye of an insect. What does an ant see when it looks at your face? I didn't know. I thought about ants because a bug's perspective seemed like the most alien thing I could imagine.
I didn't understand then that the machine behind the eye of that camera is much more alien than an ant. Scientists know a lot of things about insect optical processing. But no one knows what artificial intelligence sees when it looks at a picture of your face.
The large language models that programmers train to identify faces are black boxes. Even the engineers don't know how or in what form your face appears to the system. All they know is that AI likes your face to be brightly lit. And that it prefers for you not to smile.
This initial brush with an alien way of seeing my face made me attend a little more consciously to the normal, human way of using my face. A couple weeks later, I went to my department's annual holiday party, at the home of the chair. As my wife and I approached the front door, my face began to change.
Up until that moment, driving over to the chair's house, I hadn't been paying much attention to it. Lauren would say something funny and I'd laugh. I squinted a little when we turned off the brightly lit road.
My face was like a pool of water - someone would toss in a pebble and ripples would spread. Or it would hold the faint impression of a blown leaf, floating on the surface. Natural impressions, organic expressions.
But standing there, waiting for the department chair's door to open, I put a very specific smile on my face. I did it manually, so to speak. I had a pretty good idea of what it looked like, from mirrors and photographs of myself. Happy. Open. Excited - not too excited. Definitely not too excited.
I briefly wished that all department chairs' houses had cameras set in their walls, so I could compare my current smile-at-the-holiday-party-door with the smile I had put on when I was an associate professor, and before that an assistant professor, and before that a graduate student. I speculated that a skilled anthropologist could derive my precise status in the department from the nature of my smile. The "smileprints," placed next to one another, would describe an arc - from the smile of someone who wanted to please to the smile of someone with a gracious willingness to be pleased.
As I moved through the party, my facial expressions modulated. Polite interest as someone described the repairs being done to their home. Quiet smile of acknowledgment at a perfunctory joke. Deferential and concerned attentiveness to an elderly retired faculty member's description of a medical procedure. Furrowed brow of performed thought after someone asked my opinion of a recent essay.
And then, just once, just as my face began to tire from this incessant demand to express, I emitted an aggressive, open shout of laughter at my own joke, followed by a gleaming, tigerish grin around the little circle of colleagues whose own faces described an arc of smiles ranging from defensive to liberated.
Liberated? Liberated from the pressure - the constant pressure that accumulates on one's face in social situations, a pressure felt in one's facial muscles. There are approximately forty-three of them: some ten thousand possible configurations. Each configuration has a meaning, each one is judged and evaluated and acclaimed or condemned or let pass by the ten thousand possible configurations of the face of the person watching you, who is also being watched by you, and by others, one of whom may be artificial.
I knew it was time to leave the party when my face had frozen into a rictus grin. This condition exposed me. It's a symptom of terminal facial overload, a sign that my forty-three muscles were fighting against relaxation.
The people to whom I was listening could tell that my face was trembling on the verge of a total absence of expression. I could sense this not in their mouths, but in the muscles around their eyes. Such a face would be like a giant unhewn boulder dropped into the middle of the chair's tastefully decorated living room. People will do just about anything to avoid witnessing such a face in a human social situation.
This is the face AI likes best. Sometimes I think I might prefer it, too.
Like many other forms of AI, facial-recognition software received a shot in the arm from the maturation of artificial neural networks such as the large language models powering systems like ChatGPT. The process used by most current methods contains several steps. First, the software translates a photo or video of your face into a set of measurements - the distance between your eyes, between your nose and your lips, and so on. Then this "faceprint" is fed into an artificial neural network. This system uses statistical methods to match your faceprint to others in its database. Programmers "train" the system by rewarding it for correct matches and penalizing it for misses. The model generates results in the same way that LLMs produce language - by processing a large database in order to obtain statistical relationships among entities. Eventually, and with a sufficiently large database, the system can reliably match one image of a face with another. Given adequate lighting and a relatively clear picture, software can match your passport photo with your appearance in a party photo on a friend's Facebook page, for example, 99 percent of the time.
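To make the matching step concrete, here is a minimal sketch in Python of the kind of comparison described above. It is an illustration only: the embed_face function is a crude stand-in for the trained neural network (real systems learn their embeddings from huge databases of faces), and the 0.9 threshold is an assumed value, not one used by any particular product.

    import numpy as np

    def embed_face(image: np.ndarray) -> np.ndarray:
        # Stand-in for a trained network that turns a face image into a
        # fixed-length "faceprint" vector. Here we just flatten and normalize
        # the pixels so the sketch runs end to end; a real system would use
        # learned features rather than raw pixel values.
        v = np.asarray(image, dtype=float).ravel()
        return v / (np.linalg.norm(v) + 1e-9)

    def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
        # Higher values mean the two faceprints are more alike.
        return float(np.dot(a, b) / ((np.linalg.norm(a) * np.linalg.norm(b)) + 1e-9))

    def best_match(query_image, database, threshold=0.9):
        # Compare the query faceprint against every enrolled faceprint and
        # return the closest identity if it clears the (assumed) threshold.
        query = embed_face(query_image)
        name, score = max(
            ((n, cosine_similarity(query, v)) for n, v in database.items()),
            key=lambda pair: pair[1],
        )
        return (name, score) if score >= threshold else (None, score)

In this picture, "training" is the process of adjusting the embedding so that two photos of the same person land close together and photos of different people land far apart, which is what the reward-and-penalty loop described above is doing.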
No one knows exactly how the system obtains its matches. There is an entire field in AI known as "mechanistic interpretability" that is attempting to understand how LLMs move from a given input (your passport photo) to a given output (identifying your face in the Facebook post). The existence of this field - and its limited success thus far, despite the significant resources devoted to it - is one index of the alien quality of the "thinking" that goes on in artificial "minds."
The opacity of these systems has created controversy, with various advocacy groups arguing that citizens possess a "right to explanation" in cases wherein a black-box process leads to an adverse outcome. If a bank denies your loan application using an algorithm, or if you discover at the airport that you've been placed on a terrorist watch list by an AI system, you should be able to find out why. The European Union, as part of the General Data Protection Regulation of 2016, provides such a right to explanation, though no similar legislation exists yet in the United States.
Even with rapid improvements in the technology, the danger of being wrongly identified as a criminal remains. A 2024 New York Times story identified three individuals who had been subject to such mistaken identification by the Detroit Police Department, which has used facial-recognition technology since 2017. Various factors contribute to mistakes of this kind. For example, while there's been progress in obtaining correct matches with blurry or shadowed or smiling faces, systems can still struggle with such images; thus, perhaps, the CVS worker's demand that I not smile. But the primary constraint on effectiveness is the size of the database. You need tens of thousands, and preferably millions, of images of faces to train the system to optimal precision.
If occasional inaccuracy is one reason to worry about AI-powered facial recognition, the larger concern is its effectiveness. Decades of unease over the rise of electronic surveillance have primed us to freak out. The nonspecialist writing on the subject operates in a genre one might call paranoid realism; the recent bestseller Your Face Belongs to Us, by the journalist Kashmir Hill, for example, tells the story of the company Clearview AI and dwells exhaustively on the dystopian implications of the technology.
Its possible use cases are certainly freak-out-worthy. Consider: A stalker with access to the software could take a picture of you and then find out where you live, where you work, who your friends are, and where you get your groceries. Surveillance cameras mounted in public streets could record you walking into a strip club or a pot dispensary, and on this basis, a credit agency or employer could deny you a job or a loan. And the government might deploy the technology to discover everything anyone has ever done, and punish them for it.
On the other hand, Hill describes the enthusiasm for the technology among law-enforcement agencies, which point to its efficacy in helping them track down wanted criminals. Cops can use facial recognition to identify child pornographers and abusers from images circulating on the dark web. They believe, plausibly, that the use of these systems could prevent terrorist attacks. But in the context of Hill's well-researched book, the theoretical positive uses seem overwhelmed by a swirl of anecdotes, speculations, and predictions that suggest this technology is on the verge of plunging us into a more efficient version of Orwell's Oceania. Certainly that was my experience reading the book.
When I try to examine the threat of facial-recognition software dispassionately, I see the problem boiling down to two basic questions. The first is: Do we trust the government agencies that have access to such systems? If we do, then the benefits of preventing terrorism or child abuse might well outweigh the potential abuses and inaccuracies. If we don't, then it seems our first task should be to bring these agencies under democratic accountability. After all, if the government is out to get us, it has enough tools to do so even without AI.
The second question is: How much do we value our privacy? There's been renewed discussion in recent years of the putative "right to privacy" that, in a celebrated 1890 article, Samuel Warren and Louis Brandeis argued was derivable from existing U.S. law. But if it exists - and this remains debatable - such a right comes, even in its original formulation, with a host of exceptions. A right to privacy can in many cases come into conflict with the better-established First Amendment right to free expression. Most of the photos held in vast corporate databases (think of your Facebook photos) or government databases (think of your passport photo) were gathered with the explicit or implicit permission of the subjects. Experts have long been familiar with the "privacy paradox," whereby we say we value our privacy while uploading our photos to public forums, opting into requests to monitor and share our online behavior, and so forth.
In short, when I think carefully about the threat posed by facial recognition, my instinctive, paranoid response gives way to more ambiguous reflections. Maybe - like everything else - this thing will be good in some contexts and for some uses, and bad in some others. Perhaps this isn't a black-and-white case, but one that involves trade-offs between different values, like privacy and safety.
And now I begin to wonder about that paranoid response, the feeling of horror that first came over me when reading Hill's book, and subsequently as I searched the web for similar stories about facial recognition. What is the source of this paranoia? Perhaps, I began to suspect, my instinctive urge to reject and ban this software is something like a defense mechanism.
A defense against what? When we consider the way AI makes use of faces, we inevitably contrast it with the way humans make use of faces. The flip side of the paranoid revulsion at AI is an idealized, even romantic sense of the comforting familiarity of human face-to-face interactions. As with so many situations in which we confront new technology, our tendency is to project our fears onto the new thing and to cling to the old, natural way.
But sometimes the old, natural way is the problem. Professors like myself hate ChatGPT and similar platforms because our students turn in artificially generated, robotic papers. But if we ordinarily gave vapid, shallow papers the D's or F's they deserved, this problem wouldn't exist. The fact that such papers routinely get A's or B's shows that we have come to expect and to train humans to write robotic papers. Similarly, when I worry I can't distinguish a colleague's genuine sentiments from the vaporous generalities Gmail's AI suggests, what am I really worrying about? Is it that the machine is so good? Or that my interactions with my colleague are so empty?
Once we step back from the paranoid reaction, the problem presented by AI facial recognition assumes different contours. In posing anew the question of facial control, the technology provides us with an opportunity to think about how such control works in both its artificial and natural forms.
If we're serious about privacy, we should examine the problem seriously - not neglecting the ordinary, traditional, everyday, nontechnical cases in which our privacy is at stake. In her book, Hill expresses one of my favorite arguments in favor of privacy when she writes, "Anonymity provides powerful protection for those who don't conform to the status quo." I am among those who believe that conformity is the enemy of the creative, social, and conceptual breakthroughs that enrich and transform human life. When my face is known, I shed the protective invisibility of anonymity. The protean, multiform energies within me become measurable, locatable, predictable, controllable. My face is the hole through which the status quo enters me, disciplines me. It has always been this way.
But now I have two faces. Two doors that swing open to two different forms of control. The first door is my face on my passport. The second door is my face at the department holiday party.
Let's look a little closer at what's behind door number one.
In early November 2024, I visited Yu Yin's lab at Case Western Reserve University, where I taught at the time. In a modern research university, people rarely have a clear idea of what their colleagues in other departments are doing. I learned of Yin's work on facial recognition during a reception for the board of trustees, where I found myself speaking with the head of her department. This intelligent, engaging man worked on a different branch of AI. Over the course of our conversation, I came to think of him as a character in an eighteenth-century novel, the Optimist. He all but suggested this name himself.
"I am an entrepreneur," he told me at once. "I have to be an optimist."
Upon learning that I was thinking of writing an article about AI, he expressed unalloyed enthusiasm for the revolution. I decided to test his optimism.
"Some people," I said tentatively, "think that facial recognition poses a threat to privacy."
He waved his hand dismissively.
"No one really cares about privacy," he said, citing research on the privacy paradox.
I decided to try something a little bolder, remembering a theme of the AI conference in New York where I'd been invited to give a keynote lecture that October.
"Some people," I suggested, "think that the development of AI will soon hit a wall. There was a recent study in Nature showing that once LLMs begin to be trained on AI-generated text, they start spewing nonsense."
He smiled.
"These problems are fixable. Do you understand the speed of progress in AI research? Let me give you an example. On Monday, my graduate students and I post a paper online about a problem. Thousands of people all over the world read it. They work on the problem. On Thursday they publish a new paper citing our paper."
By this point, an elderly trustee had joined us.
"Everything is getting better all the time," said the Optimist.
The trustee nodded in approval at these confident pronouncements. I wondered if my interlocutor's optimism had any limit.
At the end of our little exchange, he told me of Yin's work on facial recognition and suggested I contact her. Thus it was that I found myself in her lab in the School of Engineering on a cold, clear November morning. She'd promised she would make a faceprint of me and that I would be able to see the whole process. This excited me, because getting direct experience of the technology had proved rather difficult. I began my visit by mentioning this.
"Yes," she agreed. "This software is not available for consumers. My personal view is that applications that identify a person's face should not be licensed for consumer use."
"Because of the danger of stalkers using it, that kind of thing?"
She nodded. She told me she had been born in China and had lived there through her undergraduate years, moving to the United States for graduate training.
Part of the work being done in her lab, she explained, involved taking photographs of individuals and then animating them. With a recording of the individual's voice, one could use AI vocal programs to make their face say various things, complete with the requisite expressions and mouth movements.
One possible application might be for something like Zoom calls, where you could deploy an avatar of yourself, and speak into the microphone with your AI avatar speaking your words perfectly, your face looking absolutely natural, no one the wiser. They weren't yet able to get the animation to work in real time, but they were making progress.
I envisioned someone getting a picture of my face, and then using the tech to make a Zoom call to my elderly mother, asking her for money. Yin smiled, and acknowledged that this technology might be abused. But it could also have benefits - for gaming or film, for example.
At this point, one of her graduate students arrived, and Yin informed me that it was time for my faceprint. Suddenly I became a little nervous. Somehow I had imagined a vaguely medical scenario. I would be led, I imagined, from this small white-paneled room, full of more or less recognizable computing equipment, into a different space. Perhaps I'd be invited to lie down, on the kind of bed they have in doctors' offices. And perhaps a special camera would be lowered over my face, and I'd be told not to move while an enormous glass lens dilated and constricted above me.
Standing there in Yin's office, I realized that what I'd been imagining was a larger and more intrusive version of the X-ray machine dentists use, with a camera pointed not toward some part of my jaw, but toward my whole face - a camera that was correspondingly larger. This strange anxiety-fantasy vanished instantly as Yin's graduate student gestured toward a table holding an open laptop. I looked at it. It showed a video feed of myself and the grad student staring into the laptop's webcam.
"What is this?" I asked.
"This is it," he said.
"What?"
"It is capturing your face."
"When? Now?"
"Yes," he said. He looked at Yin. They both smiled.
I examined the screen more carefully. There was my face. But it was different. A blue highlight appeared over my left eye, like an eyebrow scrawled in marker on the screen. A green highlight appeared over my right eye. And a wavering white stripe, like something my five-year-old daughter might draw - hesitant, wobbly - circled the shape of my face. Some numbers flickered on the left side of the screen.
"This is it," I repeated.
The effect was vaguely cartoonish. It reminded me of a certain genre of TV ad; I thought dimly of soda commercials, or maybe running shoes, ads in which bright pastel colors scribble over dancing consumers.
But, remembering what I'd read about the technology, I realized that this cartoon effect emerged from something like the polar opposite of a marketer's mind. Those pastel colors were the expression of a deeply alien process, a truly inhuman perspective on the human face.
What I was looking at as I stared at my concentrating face on the screen, I realized, was a trace of the way the machine saw me. The cartoon highlights, the wavering white outline, the numbers - these were the visible sign of the machine's digestion of my human features into a code, a set of coordinates, a faceprint.
"What are those numbers?" I asked.
Two slowly ascending digits flickered in the upper left corner of the screen, like a reverse countdown: 81. 83. 86.
"That is the match," he said.
He explained that the numbers showed the degree to which the current image of my face matched other instances of my face in the database.
I watched the number climb into the nineties. The machine was learning to recognize me.
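The climbing figure is easiest to read as a similarity score between the live faceprint and the enrolled ones, recomputed for each video frame and smoothed for display. Here is a small illustrative sketch in Python; the 0-100 scaling and the smoothing factor are my assumptions, not details of Yin's system.

    import numpy as np

    def match_percent(live: np.ndarray, enrolled: np.ndarray) -> float:
        # Cosine similarity between two faceprints, mapped to a 0-100 scale.
        sim = float(np.dot(live, enrolled) /
                    ((np.linalg.norm(live) * np.linalg.norm(enrolled)) + 1e-9))
        return max(0.0, min(100.0, 100.0 * sim))

    def smoothed_display(frame_scores, alpha=0.3):
        # Exponential smoothing so the on-screen number climbs steadily
        # (81, 83, 86, ...) rather than jumping from frame to frame.
        shown = frame_scores[0]
        history = [round(shown)]
        for s in frame_scores[1:]:
            shown = alpha * s + (1 - alpha) * shown
            history.append(round(shown))
        return history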
The scene in Yin's office represents the third time in my life that my face had been captured for a use beyond its standard social function. We can distinguish between these two ways of using my face by distinguishing between an interest in expression and an interest in non-expression.
The regime of expression includes photographic captures of my face ranging from family or elementary school class photos to television appearances. In each of these cases, the "users" of my face desire expression. They want to see my face emitting a true or false (this distinction doesn't matter in most cases) rendering of a prosocial inner state. My family wants to see me smiling happily in the school photo. The TV interviewer wants to see me engaged in serious thought or conversation.
But in Yin's office, during my third encounter with the non-expressive regime, none of this mattered. The point here wasn't for me to perform some inner state, but simply for the software to identify me. My second encounter with this regime had been at the CVS when my passport photo was taken. It is probably true that there have been other times - when my previous passport photos were taken, or when I posed for my driver's-license photo - that were also instances of interest in my non-expressive face. But if so, I was confused about this at the time, and treated each photographing as an expressive scenario, smiling broadly.
But the first time I encountered this regime, I didn't smile, and I was not confused. This was when my mug shot was taken in a Chicago jail in 2002, after I had been arrested for felony possession of narcotics.
"Stand over there," a cop commanded. "And look over here. Give me your glasses."
"My glasses?"
He pulled me toward him and snatched my glasses off my face. He then half closed them, and brought the two stems - which made a kind of miniature vise - to bear on my throat. He pressed, hard.
"You see?" he said. "You see this is a weapon?"
Then he released me, glasses-less, and I stumbled backward. I heard the click of the camera, and then he took me to my cell.
Whatever expression that camera captured (and I've never seen the mug shot; when my felony was expunged a couple years later, they sent me my fingerprints in the mail, but not the mug shot), whatever expression I wore - dazed, blinking sightless at broad planes of gray and white jail-color - was not intentional. Perhaps it was in a sense the most authentic image of my face ever captured. At any rate, the justice system was certainly not interested in making me smile. This photo was for identification purposes only.
What links these three different scenes of facial capture - in Yin's lab, in the CVS, in the West Side of Chicago jail? These different systems - the state, local, and federal police agencies; the U.S. State Department; the actual and possible end users of facial-recognition technology - are not oriented toward what is inside me. They're not trying to capture my thoughts or feelings or words. They are oriented toward my actions.
Each system is designed to match a given action - a drug offense, a violation of customs or immigration law, an appearance at a party that someone is recording for an Instagram video - to a given name. My name.
This is the truth embedded in the term "faceprint." It's like a thumbprint: a pattern unique to one individual, the trace someone might leave in every space he traverses. Thinking of an image of your face as somehow analogous to your thumbprint leads to serious mental vertigo. Take a look at your thumb. Now look at your face in the mirror. Imagine your neck terminating in a giant thumb.
To understand how one's face can become as neutral, as objective, as expressionless as one's thumb, is to grasp the key difference between the regime of facial recognition and the regime of everyday expressive facial control.
The example of my mug shot perhaps gives an unduly negative view of the possibilities of facial recognition. The effort to view a person from a distance, to observe their actions from an external perspective, doesn't have to be creepy. One might even think that this effort is essential to being a good person.
Adam Smith thought so. In his Theory of Moral Sentiments, he wrote that we cannot judge whether our actions are morally good or bad from within. The only way to make such a judgment is to "remove ourselves, as it were, from our own natural station," and to "endeavour to examine our own conduct as we imagine any other fair and impartial spectator would examine it." Smith continues:
I divide myself, as it were, into two persons; and that I, the examiner and judge, represent a different character from that other I, the person whose conduct is examined into and judged of. The first is the spectator, whose sentiments with regard to my own conduct I endeavor to enter into, by placing myself in his situation, and by considering how it would appear to me, when seen from that particular point of view.
To be a good person, one has to be able to judge the moral status of one's actions. And to do this requires alienation. I must detach myself from what I think and feel, pry my mind out of my face, and observe myself as a simple name and body performing certain actions.
I do this all the time without thinking. When my five-year-old runs into my office as I'm playing a computer game and wants me to read her a book, I feel irritated, set-upon. Probably my facial expression conveys these feelings.
But I also see the situation from a different perspective. I imagine a perspective outside the situation, watching me. And I imagine that this observing entity would be pleased if I stopped what I'm doing. It thinks that it would be good if I stood up, took the book the child proffers, and spent ten minutes reading it to her.
I want this observer to think well of me. So I perform the action that will produce this positive judgment. Afterward, my daughter leaves my office with a happy smile. And I am also smiling as I return to my game, feeling like a good father.
Who was this alien observer, whose gaze made me into a (slightly) better person, whose gaze (slightly) reduced my incorrigible self-centeredness? In one sense, of course, it was me. But it was a version of me identifying for the moment with an outside point of view that registers and judges my actions. The outsideness is crucial. This perspective doesn't know or care what I'm thinking or feeling. It isn't compelled by - doesn't even recognize - the expression of irritation on my face. It doesn't attend to my thoughts about how I'm so close to beating this one level I've been trying to conquer for a week. It trains its cold, alien gaze only on my actions.
The kind of surveillance Smith describes is a form of control. Such control isn't intrinsically tyrannical or oppressive. Its value depends on the aims it serves. In the example with my daughter, the aim is moral goodness. Smith argues, to my mind compellingly, that there isn't an easy way to be a good person without this kind of surveillance by another, even if that other is, in the end, only you. Of course, it's easy - maybe even too easy - to imagine scenarios in which the observer's aim is a bad one. Nineteen Eighty-Four. China. Google.
But is it possible, I wonder, to imagine a world in which social control operates primarily, and optimistically, through a non-expressive model? Imagine a benign government, watching you through street-mounted surveillance cameras, tracking you through your friend's Facebook posts. Maybe it's not going to do anything in particular with these images. Perhaps it might intervene if it looks like you're about to harm someone. But in general it just watches. It wants you to be a good person. The imaginary regime works, in fact, like Smith's impartial observer: the constant alien and alienated eye that alone makes it possible for a person to truly do good.
This thought experiment will seem fanciful to some, terrifying to others. But I want to use it as a contrast with another regime, a regime with nothing fanciful or imaginary about it. This regime may have temporarily relaxed its hold on your face as you read this. But soon - very soon - it will be manipulating those forty-three muscles again.
The advent of facial-recognition technology arouses unsettling feelings in part because - fancifully or terrifyingly - it opens the prospect of an alternative to the way we've always used faces. The emergence of an alternative gives a new perspective on the old thing, and not always a flattering one. Think of the invention of indoor plumbing.
So let's take a closer look at the old thing.
Return to the scene with my daughter. I'm sitting in my office, playing a computer game, when she comes in holding a book, demanding I read it to her. Now, instead of my actions being tracked by Adam Smith's impartial observer, let's imagine a real, live person sitting there watching me. Perhaps it's a relative. Perhaps a neighbor who's dropped by. Perhaps a friend.
Like the alien, outside gaze of Smith's observer (or AI facial-recognition software), the mere presence of this person will exert control over me. But the control wielded by the human is deeper, more intrusive. This real person is not just interested in my actions. They don't see my face as in any way like a thumbprint, a simple means of identifying my actions. They watch my expressions.
They see the child run up to me. If this third person wasn't there, my face would instinctively reflect my actual emotional state of mild irritation. But with this person watching, I can't afford to betray my feelings by this expression. The kid wants Daddy to read her a book. What kind of father could be irritated by so charming and salutary a request? So I turn to the child with a smile. Of course I'm happy to drop what I'm doing and read her a story.
That smile is a razor, cutting backward into my brain. It rips my irritation to shreds. It might even destroy the memory of it. My neck and shoulder muscles tighten with the internal effort of eradicating and erasing my first, natural feeling. The face I turn to my child with is the face of a loving, smiling parent, unbothered - excited even - to be interrupted.
In both scenarios - the one in which I adopt the perspective of the impartial observer, and the one in which I am watched by an actual human observer - being watched changes my behavior for the better. But consider the difference.
In the first example, as I realize the action I should take, my frown slowly turns upside down. I expel my irritation with a sigh, as I sit the child on my lap and start to read to her. By the end, both she and I are smiling. I have become the happy, good father. But I've arrived at this morally desirable end point through a natural, relatively slow process in which the demands of my own self-centered feelings and inclinations were challenged, and then defeated, by a sense of how an impartial observer would judge me.
In the second example, I become the happy, good father instantaneously. But this good man is an artifact. He is artificial. He has been created by the human sitting in the chair across the room, who, by triggering the manipulation of my facial muscles, nullifies the expression of my natural feeling. In changing the way I appear, this person also causes a deeper change.
Hegel says that "the self perceives itself at the same time that it is perceived by others. . . . Self-consciousness exists . . . by the fact that it exists for another self-consciousness." I become myself by identifying with the object you see. And what you - you other human beings - mainly see is my face. As the psychologist Silvan Tomkins writes, "the self lives in the face":
Both transmission and reception of communicated information take place at the face. The mouth talks, the eyes perceive; and the movements of the facial musculature are uniquely related to one's experienced affects and to the affects transmitted to others.
All of us, pretty much all the time, want others to see us as a good object. Since our sense of ourself depends so highly on the attitudes of others, we instantly, preemptively, and constantly work the forty-three muscles of our face to produce the expected response - the response that will please or impress others. This dynamic is so pervasive and unremitting that it can be hard to bring it to consciousness.
An offhand comment that William S. Burroughs makes in Naked Lunch illuminates the way others shape us from an unexpected angle. A character in the novel says you can learn more about someone by talking to them than by listening to them. How could this be? Because the part of our mind that learns about someone's attitudes by listening to their words is inferior in its cognitive power to the part of our mind that grasps their attitude from a thousand tiny cues - their clothing, their posture, the precise modulations of their own facial muscles - and then responds to this information by working our own tone of voice, word choice, and facial expressions to produce the impression they expect.
When I enter a room and find my wife talking on the phone, I can almost always identify the person with whom she is speaking, even though I can't hear anything the other person says. I simply listen to how my wife talks and I know. She has a special tone of voice, and special facial expressions, when her auditor is her mother. A different facial and vocal suite when she's speaking to her friend John. Still another when she's speaking to my daughter's teacher at school.
I'm the same way. Am I conscious that I'm a slightly different person when I'm speaking to my friend Dave versus when I'm speaking to my friend Jason? Not normally. But now, when I'm thinking about this problem, when I'm listening to myself speak, I hear the difference.
We've examined the paranoia of the non-expressive regime of facial recognition. But there is also a paranoia of the expressive regime. "We can consider ourselves as 'slaves,'" writes Sartre, "insofar as we appear to the Other." A gaze tuned only to my actions - a gaze that sees my face as a kind of thumbprint - exerts control over what I do. But it leaves the space within me free. It doesn't control what I think or how I feel. The machine recognizes facial structure but not facial expression. Such a gaze allows me the possibility of naturally coming to identify with the actions it encourages or pressures me to take - as when my feelings about reading to my child change from an initial irritation to eventual joy.
Even in a surveillance regime controlled by users who don't want me to be a good person, but merely compliant - a tool of their own power - my interior remains relatively free. I can do one thing while thinking and feeling and perhaps planning another.
Of course, it's also possible to do this in an expressive regime. I can smile while hating you. But the pressure is greater. People are adept at detecting the little blips in facial manipulation that indicate someone's faking it. Over time, it becomes easier to simply feel what your face expresses.
"The self perceives itself at the same time that it is perceived by others." Who am I? A real person, watched by artificial eyes? Or an artificial person, watched by real eyes?
I close my own eyes and see my face on the screen in Yin's office. The pastel colors smudge like cartoon eyebrows. My expression - baffled, then interested.
I feel Yin's eyes on me. I smile.
The digits on the left side of the screen ascend.
Michael W. Clune is a contributing editor of Harper's Magazine. His novel Pan was published in July by Penguin Press.
Dennis Lehtonen Documents a Pair of Immense Icebergs Paying a Visit to a Small Greenland Village
https://www.youtube.com/watch?v=CJQEJMLljCU
Baakiyalakshmi | 28th July to 1st August 2025 - Promo
https://www.youtube.com/watch?v=cimuFsGhebE&list=RDcimuFsGhebE&start_radio=1
Art Garfunkel - Since I Don't Have You
Why your body ages rapidly in two 'bursts'
Scientists have found that ageing increases in our mid-40s and 60s
Have you ever woken up in the morning and suddenly felt old? There might be a good reason. A series of studies has found that, rather than ageing gradually on a linear timescale, we might have significant "bursts" of getting old during our adult years, said National Geographic.
'Provocative' findings
These two bursts usually happen in our mid-40s and our early-60s, according to a research team at Stanford University, who tracked thousands of different molecules in people aged 25 to 75. The researchers found that 81% of the molecules didn't change continuously, as you'd expect, but actually transformed significantly around certain ages.
Their "provocative" findings seem to "fly in the face" of current models of ageing, said David Sinclair, a molecular geneticist, longevity researcher, and professor at Harvard Medical School.
A separate study by a team in Germany last year found that sudden chemical modifications to DNA occurred in mice in early to mid-life and again in mid to late life, hinting that there were "three discrete stages of ageing", said The New York Times. And a 2019 analysis, which examined the blood plasma of over 4,000 people, found there were "significant jumps" in concentrations of proteins associated with ageing in the fourth, seventh and eighth decades of life.
'Steep uptick'
This "sudden ageing" can come with "an acceleration in muscle wastage and skin decline", said New Scientist, along with an inability to metabolise alcohol, a swift dwindling of immune cells, and substantial increases in the risk of cardiovascular disease and of dying.
Although none of this sounds particularly enjoyable, the various findings "don't have to make you dread hitting your 40s and 60s", said National Geographic, because understanding "how and when we age" can help experts and the general public take "specific steps to prevent or at least prepare for" some of the "most undesirable" aspects of the process.
The pattern "fits with previous evidence" that the threat of many age-related diseases does not "increase incrementally", said The Guardian. Conditions such as Alzheimer's and cardiovascular disease tend to show a "steep uptick" after 60.
There is a note of caution to attach to the Stanford findings, said Sinclair, as other studies have found that people often experience a "mid-life crisis" in their late-30s and early-40s or a "late-life crisis" in their late-50s and early-60s - the two periods linked to the ageing "bursts".
To put it another way, it's possible "associated psychological and lifestyle changes may be responsible for these changes in ageing and not due to our inherent biology".
https://www.yahoo.com/news/articles/human-babies-aren-t-supposed-140000550.html
Human Babies Aren't Supposed to Have 3 Parents - but Now They Can
https://www.youtube.com/watch?v=_E5LgXm1KF4
Thank you 🙏 | Baakiyalakshmi
Previous thread links: From To Satish #1 From To Sathish #2 From To Sathish #3 From To Sathish #4 From To Sathish #5