[TLDR (too long didn’t read): If you are reading this, chances are it behooves you. This HR Reader is about approving or disapproving of the avalanche of controversial forces of change we are faced with. For a quick overview, just read the bolded text]. To translate the Readers, use deepl.com
–Reasonable positions are seldom news.
—Beware of the sloganization of the hashtag type.
1. Today, social media can validate almost any experience or idea you can think of. It is an unreal world out there. But what can you do? Action and activism are the only antidote to despair. (Stephen Bezruchka)
2. Is it a stupid version of reality that we get from social media? (Should I include some Zooms, webinars and other such…?) Few among us read books these days. (Pascal Lottaz)
3. We ought to (but have not so far) more vigorously demand that the media give exposure and attention to relevant issues and conflicts that are so often ignored. The less visibility these issues get, the greater the risk that conflicts escalate, with their corresponding human rights (HR) violations. But, alas, when visibility is given, it can also come with harmful distortions. Suffice it to say that, unfortunately, the current coverage we get from the media is mostly inaccurate, poorly contextualized, misguided, and dangerously stereotypical.
And then, of course, there is, AI…
AI is considered to be the new main hegemonic force of change. But is it? (Riccardo Petrella)
4. First, in AI, there is the issue of data bias. AI models are only as good as the data they are trained on*, and if that data is skewed or incomplete, the AI’s predictions end up being inaccurate or, worse, discriminatory. These misconceptions, or outright biases, can and do lead to false accusations or missed warning signs affecting populations with scant access to accurate information. In the context of HR, such bias will have devastating consequences.
*: A quick aside here: Modeling is a branch of neoliberal development that economists are enamoured with. But the ultimate justification of models must be their real utility for public policy and politics, not their purported theoretical rigor. Economists must, therefore, find an acceptable compromise between theoretical elegance and practical utility, since excessively theoretical models are often dangerously used as a means of legitimizing policy. (Nancy Krieger) The ‘social software’ of models is what is most usually missing. Economic models use a single-period time horizon. Chains of assumptions lead to models that primarily have an internal (often subjective) logic. But when one adds social equity criteria, the models too often have no such logic.
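The data-bias mechanism described in point 4 can be made concrete with a minimal sketch. Everything in it — the groups, the decisions, the figures — is hypothetical and exists only to illustrate how a learner trained on skewed historical records reproduces the discrimination baked into them:

```python
# A minimal sketch of how skewed training data reproduces bias.
# All groups and outcomes below are hypothetical, for illustration only.
from collections import defaultdict

# Hypothetical historical decisions: (group, qualified, approved).
# Group B applicants were historically denied even when qualified.
history = [
    ("A", True, True), ("A", True, True), ("A", False, False),
    ("B", True, False), ("B", True, False), ("B", False, False),
]

# A naive "model" that learns the majority outcome per group,
# standing in for any learner that picks up group membership as a proxy.
outcomes = defaultdict(list)
for group, _qualified, approved in history:
    outcomes[group].append(approved)

def predict(group):
    votes = outcomes[group]
    return sum(votes) > len(votes) / 2

print(predict("A"))  # True: A applicants get approved
print(predict("B"))  # False: equally qualified B applicants are denied
```

The point of the sketch: the model never sees an instruction to discriminate; it simply mirrors the skew in its training records, which is exactly why curating the data matters as much as the algorithm.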
5. A second concern with AI involves transparency. AI algorithms, especially those based on machine learning, often operate as ‘black boxes’, meaning it can be difficult to understand exactly how they arrive at their conclusions. This lack of transparency undermines, and will continue to undermine, the trust necessary for their implementation in HR monitoring, as activists are justifiably reluctant to rely on decisions they do not fully understand. This issue is particularly dangerous when dealing with generative AI models that can (un?)intentionally spread misinformation or misinterpretations, especially in politically volatile environments.
6. Any AI algorithms used in the realm of HR must have strong ethical and human oversight to prevent foreseeable harm. Another ethical pitfall is the potential misuse of AI by authoritarian regimes. In the wrong hands, AI can be and is being used to monitor and suppress dissent, leading to further HR violations. (Sam Bowman)
7. Algorithmic transparency will not materialize if it is left to the market to decide what should be transparent —nor will non-binding ethical frameworks deliver it. We need mandatory rules and new transparency instruments that ensure that people are aware of the use of these technologies.** Algorithmic transparency is closely associated with respecting, protecting, and promoting HR. For example, if a government uses an algorithm to allocate subsidies, a person denied that subsidy would need to know that such a tool was used, as well as how it was used, to be able to understand how to challenge the decision.
**: To be noted: The emergence and introduction of any ‘big technology’ is not only a technical process, but also results in inevitable political transformations. (Joyce Souza)
8. It is common for governments to challenge requests for algorithmic transparency based on concerns about intellectual property, data protection and cybersecurity. This underscores the need for mandatory rules regarding the disclosure of information about algorithmic systems. We do not have to start from scratch, though, as there are existing examples of recommendations, guidelines, and standards of algorithmic transparency in the public sector that have been issued in Chile, the United Kingdom, and Australia. (Juan David Gutiérrez)
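To give points 7 and 8 some texture, here is a sketch of what a machine-readable transparency record for the subsidy example might contain — loosely in the spirit of the public-sector disclosure schemes mentioned above, though every field name and value below is hypothetical and not drawn from any real standard:

```python
# A hypothetical transparency record for one algorithmic decision.
# Field names and values are invented for illustration only.
import json

decision_record = {
    "system_name": "subsidy-eligibility-scorer",   # hypothetical system
    "operator": "Ministry of Social Affairs",      # hypothetical body
    "purpose": "Rank applicants for a housing subsidy",
    "role": "decision-support",   # a human makes the final call
    "inputs_used": ["household_income", "household_size", "region"],
    "decision": "denied",
    "main_factors": ["household_income above regional threshold"],
    "appeal_route": "written appeal within 30 days",
}

# Publishing such a record alongside each decision is what lets the
# denied applicant know a tool was used, how, and how to challenge it.
print(json.dumps(decision_record, indent=2))
```

The design choice worth noting: the record names the inputs and the decisive factors, not the model internals — which is often enough for an affected person to mount a challenge, and sidesteps the intellectual-property objections governments commonly raise.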
Globally, it feels like we are in a classic cartoon moment:
9. We have run off the cliff and are suspended in midair, legs still spinning, pretending there is solid ground beneath us. And for now, we are all playing along, pretending the AI ground is solid, but it is not; it is still shaky at best –so we had better get moving.
10. That is exactly where AI governance stands today —racing forward without enough guardrails, while we hold our breath, hoping for stability. The risks are mounting: unchecked power and a widening gap between AI’s rapid development and our ability to ensure it serves the public good. We can either keep pretending everything is fine, or we can face reality and start building an AI future that puts people first –not just profit and power. (Mozilla)
Bottom line
—Only outrage delivers engagement. (Yuval Noah Harari)
11. If the Left fails to develop the courage to proactively create such an outrage, one that denounces the sorry current economic, political, ideological and cultural state of the world, and fails to present to the masses a coherent, realistic plan for an alternative world order, capitalist globalization, in good part now pushing on with AI, will continue to reign supreme, and the far-right will be its main political beneficiary.
Claudio Schuftan, Ho Chi Minh City
Your comments are welcome at schuftan@gmail.com
Postscript/Marginalia
—Not to forget: AI and cryptocurrency ‘mining’ both require massive amounts of electricity; by 2026, their combined demand is estimated to equal that of the whole of Germany. (GHW7, PHM)
—The Math on AI’s Energy Consumption: A single ChatGPT query uses as much energy as running a microwave for eight seconds. But when multiplied by billions of daily users, artificial intelligence now consumes 4.4% of all U.S. electricity –and that is just the beginning. By 2028, AI could devour enough power to supply 22% of American households annually, forcing tech giants to resurrect nuclear plants and build stadium-sized data centers while keeping their actual energy usage completely secret. The Lawrence Berkeley National Laboratory warns that this unprecedented surge lacks transparency or planning, potentially leaving consumers to subsidize Big Tech’s power bills through higher electricity rates. Most troubling: Today’s relatively simple AI tasks represent the smallest energy footprint we will ever see, as the industry races toward autonomous agents and reasoning models that require exponentially more power. (MIT Technology Review)
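The back-of-envelope arithmetic behind the microwave comparison can be sketched as follows. Only the “eight seconds” figure comes from the text above; the microwave wattage and the daily query count are assumptions chosen purely for illustration:

```python
# Back-of-envelope arithmetic behind the per-query energy comparison.
# The wattage and the query count are ASSUMED figures, not sourced.
MICROWAVE_WATTS = 1_000          # assumed typical microwave power
SECONDS_PER_QUERY = 8            # from the comparison in the text
QUERIES_PER_DAY = 1_000_000_000  # assumed stand-in for "billions"

joules_per_query = MICROWAVE_WATTS * SECONDS_PER_QUERY
wh_per_query = joules_per_query / 3600          # 1 Wh = 3,600 J
mwh_per_day = wh_per_query * QUERIES_PER_DAY / 1e6

print(f"{wh_per_query:.2f} Wh per query")       # ~2.22 Wh
print(f"{mwh_per_day:,.0f} MWh per day")        # ~2,222 MWh per day
```

Even under these rough assumptions, a few watt-hours per query scales to thousands of megawatt-hours a day — which is the scaling logic that leads to figures like 4.4% of U.S. electricity.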
