Stop drinking from the toilet!

We can’t live without air. We can’t live without water. And now we can’t live without our phones. Yet our digital information systems are failing us. Promises of unlimited connectivity and access have led to a fragmentation of reality and levels of noise that undermine our social cohesion. Without a common understanding and language about what we are facing, we put at risk our democratic elections, the resolution of conflicts, our health, and the health of the planet. To move beyond merely reacting to the next catastrophe, we can learn something from water. We turn on the tap to drink or wash, rarely considering where the water comes from, until a crisis of scarcity or quality alerts us to a breakdown. As AI further infiltrates our digital world, the crisis in our digital information systems demands that we pay more attention to how information flows.

Water is life-sustaining, yet too much water, or impure water, makes us sick, destroys our environment, or even kills us. A bit of water pollution may not be harmful, but we know that if toxins exceed a certain level, the water is no longer potable. We have learned that water systems need to protect quality at the source, that lead from pipes leaches into the water, and that separation is critical: we don’t use the same pipes to source drinking water and to drain waste and sewage.

Today, digital services have become the information pipes of our lives. Many of us do not understand or care how they work. Like water, digital information can have varying levels of drinkability and toxicity, yet we don’t know what we are drinking. Current system designs are corroded by the transactional business models of companies that neither have our best interests in mind nor possess tools that can adequately detect impurities and sound the alarm. Digital platforms such as Instagram, TikTok, or YouTube don’t differentiate between the types of content coming into their systems, and they lack the equivalent of effective water filters, purification systems, or valves to stop pollution and flooding. We are both the consumers and the sources of this ‘digital water’ flowing through and shaping our minds and lives. Whether we want to learn, laugh, share, or zone out, we open our phones and drink from that well. The data we generate fuels increasingly dangerous ad targeting and surveillance of our online movements. Reality, entertainment, satire, facts, opinion, and misinformation all blend together in our feeds.

Technology is neither good nor bad; nor is it neutral.

Digital platforms mix “digital water” and “sewage” in the same pipes, polluting our information systems and undermining the foundations of our culture, our public health, our economy, and our democracy. We see news avoidance, extremism, loss of civility, reactionary politics, and conflict. Less visible are other toxins, including the erosion of trust, critical thinking, and creativity. Those propagating the problems deny responsibility and ignore the punch line of Kranzberg’s first law, which states that “technology is neither good nor bad; nor is it neutral.” We need fundamental changes to the design of our information distribution systems so that they benefit society and not just increase profits for a few at our expense.

To start, let us acknowledge the monetary incentives behind the tech industry’s course of action, which dragged the public down as a few made their fortunes. The foundational Internet infrastructure, developed in the 1970s and 80s, combined public and private players and different levels of service and sources. Individual data bits traveled in packets across a shared, distributed network designed to avoid single points of failure. The necessary separation and differentiation were enforced by the information service applications layered on top of the network. Users proactively navigated the web by following links to new sites and information, choosing for themselves where they sourced their content, be it their favorite newspaper or individual blogs. Content providers relied heavily on links from other sites, creating an interdependence that incentivized more respectful norms and behaviors, even amid an abundance of disagreements and rants.

Then the 2000s brought unbridled consolidation as the companies that now make up BigTech focused on maximizing growth through ad-driven marketplaces. As with some privatized water systems, commercial incentives were prioritized over wellness. This was only amplified by product design built around the small screens of mobile phones, the social discovery of content, and cloud computing. Today, we drink from a firehose of endless scrolling that has eroded our capacity for differentiation or discernment. Toxicity is amplified and nuance eliminated by algorithms that curate our timelines based on an obscure blend of likes, shares, and behavioral data. As we access information through a single feed, different sources and types of content, whether individuals, bots, hyperbolic news headlines, professional journalism, fantasy shows, or human- or AI-generated material, all begin to feel the same.

Toxicity is amplified and nuance eliminated by algorithms that curate our timelines based on an obscure blend of likes, shares, and behavioral data.

Social media fractured the very idea of truth by taking control of the distribution of information. Now generative AI has upended the production of content through an opaque mixing of vast sources of public and private, licensed, and pirated data. Once again, an incentive for profit and power is driving product choices toward centralized, resource-intensive Large Language Models (LLMs). The LLMs are trained to recognize, interpret, and generate language in obscure ways, and then spit out often awe-inspiring text, images, and videos on demand. The artificial sweetener of artificial intelligence entices us to drink, even as we know that something may be wrong. The social media waters are already muddied by algorithms and agents, and we are now seeing the “enshittification” (a term aptly coined by Cory Doctorow) of platforms, and of the overall internet, with increasing amounts of AI-generated excrement in our feeds and searches.

We require both behavioral change and a new, more distributed digital information system, one that combines public and private resources to ensure that neither our basic ‘tap’ water nor our fancy bottled water will poison our children. This will require overcoming two incredibly strong sets of incentives. The first is a business culture that demands dominance through maximizing growth by way of speed and scale. The second is our prioritization of convenience and a boundless desire for a frictionless world. The fact that this is truly a “wicked problem” does not relieve us of the responsibility to take steps to improve our condition. We don’t need to let go entirely of either growth or convenience. We do need to recommit to a more balanced set of values.

As with other areas of public safety, mitigating today’s harms requires broad and deep education programs to spur individual and collective responsibility. We have thrown out the societal norms that tell us not to spit in another’s proverbial drink or piss in the proverbial pool. Instead of continuing to adapt to the lowest common denominator of decency, we need digital hygiene that establishes collective norms for kids and adults. Digital literacy must encourage critical thinking and the navigation of our digital environments with discernment; in other words, with a blend of trust and mistrust. In the analog world, our senses of smell and taste warn us when something is off. We need to develop an equivalent ability to detect rotten content and sources, from sophisticated phishing to deepfakes. In a world already awash in conspiracy theories and propaganda, conversational AI applications bring new avenues for manipulation as well as a novel set of emotional and ethical challenges. As we have learned from food labeling and terms of service, transparency only works when backed by the education to decipher the facts.

The artificial sweetener of artificial intelligence entices us to drink, even as we know that something may be wrong.

Mitigation is not sufficient. We need entrepreneurs, innovators, and funders who are willing to rethink systems and interface design assumptions and build products that are more proactive, distributed, and reinforcing of human agency. Proactive design must incorporate safety valves or upfront filters. Distributed design approaches can use less data and special-purpose models, and the interconnection of diverse systems can provide more resilience than consolidated, homogeneous ones. We need not accept the inevitability of general-purpose, brute-force data beasts. Designs for human agency would break with current design norms. The default of everything looking the same leads to homogeneity and flattening. Our cars would be safer if they didn’t distract us like smartphones on wheels. The awe of discovery is healthier than the numbing of infinite scrolls. Questioning design and business model assumptions requires us to break out of our current culture of innovation, which is too focused on short-term transactions and rapid scaling. That innovation culture has influenced other industries and institutions, including journalism, which is too often hijacked by today’s commercial incentives. We cannot give up on common understanding and knowledge, or on the importance of trust and common truths.

We need policy changes to balance private and public sector participation. Many of the proposals on the table today lock in the worst of the problems, with legislation that reinforces inherently bad designs, removes liability, and/or targets specific implementations (redirecting us to equally toxic alternatives). Independent funding for education, innovation, and research is required to break the narrative and value capture of the BigTech ecosystem. We throw around words like safe, reliable, or responsible without a common understanding of what it means to really be safe. How can we ensure our water is safe to drink? Regulation is best targeted at areas where leakage leads to the most immediate harm, such as algorithmic amplification and the lack of transparency and accountability. Consolidation into single points of power inevitably leads to broad-based failure. A small number of corporations have assumed the authority of massive utilities that act as both public squares and information highways, without any of the responsibility.

Isolation and polarization have evolved from a quest for a frictionless society, served by extraordinary systems handcrafted to exploit our attention. It is imperative that we create separation, valves, and safeguards in the distribution of and access to digital information. I am calling not for a return to incumbent gatekeepers, but for the creation of new distribution, curation, and facilitation mechanisms that can be scaled to the diversity of human needs. There is no single answer, but the first step is to truly acknowledge the scope and scale of the problem. The level of toxicity in our ‘digital waters’ is now too high to address reactively, by trying to fix things after the fact or lashing out in the wrong ways. We must question our assumptions and embrace fundamental changes in both our technology and our culture in order to bring toxicity back down to a level that does not continue to undermine our society. – Rappler.com

Judy Estrin is the CEO of JLABS, LLC and the author of the book, Closing the Innovation Gap: Reigniting the Spark of Creativity in a Global Economy. She is a network technology pioneer, entrepreneur, and business executive.

This article has been republished from Coda Story with permission.

