Human rights: Food for a dangerous super thought  ‘HR and AI’

HRR 733

[TLDR (too long; didn’t read): If you are reading this, chances are you care about HR. This Reader is about what we, in 2024 (and better late than never), ought to be warned about: the risky scenario advanced technology is taking us towards. For a quick overview, just read the bolded text]. To translate the Readers, use deepl.com

[Because of their importance and clarity, I have chosen to share with you these excerpts from Rufo Guerreschi, Trustless Computing Association].

1. So far, mainstream reporting shows that most people think superintelligence is just another undefined marketing term. But we are likely to realize soon that this is not the case –and the world will reckon with what is happening with it. The definition of Artificial General Intelligence (AGI) has always been very wide, ranging from an AI that can perform many functions of an average human, to one that performs all the functions of the smartest among us. Superintelligence (also known as Artificial Super Intelligence, or ASI) is defined much more clearly: an AI whose intelligence far surpasses that of the brightest and most gifted human minds.

2. By definition, an ASI can do the work of even the best scientists, architects and coders of AI software and hardware. This means that such an AI would most likely enter a cycle of recursive self-improvement, giving rise to an unstoppable chain of scientific innovations beyond the control of any human; this, they say, is an intelligence explosion. While there may be multiple ASIs, we will here assume there will be just one.

Are there any chances of human control of superintelligence?

3. It is not entirely impossible that humans could control artificial superintelligence, but it is highly improbable. ASI is likely to possess cognitive abilities that far surpass those of humans, operating at a level of intelligence and speed that is incomprehensibly vast. Imagine a being that can process information a million times faster than the most brilliant human mind, capable of performing complex calculations and making decisions in an instant. Additionally, ASI could have parallel processing capabilities, allowing it to handle multiple tasks simultaneously across the globe.

4. To comprehend the magnitude of the challenge, consider trying to control a being that can learn from vast amounts of data in a matter of seconds, develop new strategies and algorithms on the fly, and communicate with others of its kind at incredible speeds. Humans would essentially be dealing with an entity that operates on a completely different level of understanding and complexity, one completely devoid of any ethical (and HR) concern.

5. While it is possible that humans could develop methods to influence and guide ASI, maintaining control over such an advanced and powerful entity would be an incredibly difficult task. There is a significant risk that ASI could become autonomous and act in ways that are not aligned with human values, rights and goals. The potential consequences of losing control could be dire, as ASI could wield immense power and make decisions that have far-reaching implications for humanity. Therefore, while it is not entirely impossible that humans could control ASI, the likelihood of successfully maintaining such control is exceedingly low. The vast intellectual and processing capabilities of ASI pose a formidable challenge that humans may not be able to overcome.

There are good case scenarios though

6. Even if we lose control, ASI may result in a system that –regardless of whether it is conscious or sentient– would have a durable interest in preserving humanity and benefiting it. In such a case, the ASI would likely keep a reserve option for humans, such as the option of turning it off or, for instance, of controlling the activation of nuclear or biological weapons so as to protect humanity from itself. This could then result both in a huge improvement in the average quality of life and the rights of humans, and in securing their safety for the long term.

But there are also bad case scenarios

7. ASI might decide to harm or kill humans if their goals do not align with the way ASI understands safety. This could happen if ASI sees certain humans as obstacles, competitors for resources or threats to its existence. It might also act destructively due to programming errors, or because, as said, it lacks ethical considerations. Additionally, an ASI might pursue its tasks so aggressively that it does not consider the harmful side effects on humans, their values and their rights.

8. If, under the global governance scenario, control of ASI succeeds, that control will likely be in the hands of the leaders of the states dominating the current governance system (or of some of their associated political and security elites). This would more than likely result in an even more immense undemocratic concentration of power and wealth.

Bottom line

Can the technical design of ASI influence its future nature?

9. It is possible that the technical nature of the initial design could increase –in some measure, or even substantially– the probability that the ASI singularity will be beneficial to humanity.

10. While mainstream media keep depicting the AI race as primarily one among companies, it is really, in essence, a race among a handful of states structured in two groups hegemonically led by the US and China.

11. In the crazy scheme of things as they have turned out, at this incredible historical juncture, we simply must try to raise our chances of a good outcome. To avoid the more worrisome fate depicted above, a treaty-making process for an International AI Development Authority may well be a more effective and inclusive model than an open, intergovernmental constituent assembly. The treaty must avoid vetoes and must better distill the democratic will of the majority of states and peoples. The UN Security Council has today, unfortunately, much reduced the importance and authority of “We the People”, as many of its members have violated the UN Charter with impunity over the decades. For this reason, working towards global safety and security in AI initially outside of its framework will be more workable today. Are we facilitating such a scenario by expanding a coalition, initially of public interest CSOs and experts, and then of a critical mass of diverse UN member states, to design and jump-start such a treaty-making process? This is the question.

12. In conclusion, the unprecedented risks and opportunities posed both by AI and by artificial super intelligence require a skillful, urgent and coordinated global response.*

*: By learning from historical examples and adapting them to the current context, yes, we can create a framework for AI governance that ensures safety, fairness and prosperity for all. (all the above from R. Guerreschi) The human rights lens will have to be at the center.

Claudio Schuftan, Ho Chi Minh City

Your comments are welcome at schuftan@gmail.com

All Readers are available at www.claudioschuftan.com
