This is the iconic Marvin the Paranoid Android from The Hitchhiker's Guide to the Galaxy. And this little passage about Marvin is from Douglas Adams' third book, "Life, the Universe and Everything": "Having solved all the major mathematical, physical, chemical, biological, sociological, philosophical, etymological, meteorological and psychological problems of the Universe except for his own, three times over, [Marvin] was severely stuck for something to do, and had taken up composing short dolorous ditties of no tone, or indeed tune. The latest one was a lullaby." Marvin droned:

"Now the world has gone to bed
Darkness won't engulf my head
I can see by infra-red
How I hate the night"

In the character of Marvin, Douglas Adams encapsulates many of the contradictions in our emerging relationship with AI. A robot with a "brain the size of a planet" who suffers from the "long dark tea time of the soul": do we want super-intelligence with existential angst? The inevitability of our anthropomorphisation of AI, because that is what our brains do. Hubris about our ability to "control" outcomes. Worry that we won't. And the joke about all our problems being solved "three times over" is maybe also a nod to some of the larger claims made about AI and its omniscience. Fear and hope as a result of the emergence of new kinds of intelligence in the Galaxy!
So this is a space to think carefully about the role of AI across the whole piece of hitchhikers.earth. Do we have any editorial guidelines for content creation? If we are seeking some form of clear water between existing tech business models and incentive structures, what should our relationship to the big LLMs be? Do we get to a stage where we have our own open source versions?
This also links to how we charter the data relationships with our members, and how we can maybe create Genuine People Personalities that give our biological selves more visibility of our data selves, which increasingly swim in an AI-salted sea.
There could be a role for hitchhikers.earth to be at the forefront, philosophically and technically, of our multiple "selves". (I am not sure that the Digital Twin idea is a sufficient description of what is actually happening to us in data space, where we are scattered to the four digital winds on server farms goodness knows where, continuously being configured and reconfigured by goodness knows who, with what implications for us in our biological and our digital entanglements.)
Another issue that interests me is the sheer volume of material that an AI (an LLM in this case) can create. The experience can be overwhelming, and in the oceans of words it can become alarmingly easy to abdicate to the LLM.
Here is a rubric I created for exploring the relationship between humans and any given AI system in any given context.
The rubric simply asks about the nature of the "work" an AI is doing; it is about the naming of things and the understanding of boundaries.
It might be helpful as we consider wider governance issues too.
1. Advisory:
   - This aspect assesses the extent to which an AI system "only" provides advice, recommendations, or insights to human decision-makers.
   - It considers whether the AI system is designed to augment human intelligence and expertise, rather than replacing human judgment entirely.
   - Key questions include: Does the AI system provide clear, understandable, and contextually relevant advice? Does it allow room for human interpretation and discretion? Are the limitations and uncertainties of the AI's advice transparently communicated?

2. Authority:
   - This dimension evaluates the degree of decision-making power and control over resources or processes that an AI system is granted within an organization.
   - It examines whether the AI system has the authority to make binding decisions that directly impact operations, customers, or other stakeholders.
   - Key questions include: What types of decisions is the AI system authorized to make? Are there clear boundaries and constraints on the AI's decision-making authority? How are potential conflicts between AI and human decisions resolved?
   - It also considers issues of implicit and explicit delegation of authority, and of the power to do something versus power "over" something or someone.

3. Agency:
   - This aspect assesses the extent to which an AI system can take independent actions and make autonomous decisions within its designated domain or environment.
   - It considers the degree of autonomy and flexibility the AI system has in pursuing its objectives and adapting to changing circumstances.
   - Key questions include: What is the scope of the AI system's autonomous action? Are there safeguards and human oversight mechanisms in place? How are the AI's goals and objectives defined and aligned with organizational values?
4. Autonomy:
   - This dimension evaluates the level of independence an AI system has in performing its tasks and making decisions without direct human intervention or control.
   - It examines the extent to which the AI system can operate and adapt on its own, based on its training, data inputs, and decision-making algorithms.
   - Key questions include: How much human oversight and intervention is required for the AI system to function effectively? What are the triggers and mechanisms for human intervention? How are the risks and benefits of AI autonomy balanced?

5. Abdication:
   - This aspect assesses the potential risks of over-relying on AI systems and relinquishing human responsibility, expertise, or control.
   - It considers the human skills, roles, and decision-making capabilities that may be eroded or replaced by AI systems over time.
   - Key questions include: What human capabilities are most at risk of being diminished or lost? How can human expertise be maintained and developed alongside AI? What are the long-term implications of delegating decisions to AI?

6. Accountability:
   - This dimension evaluates the clarity and effectiveness of accountability mechanisms for AI systems and their outcomes.
   - It examines how responsibility is assigned and traced for AI decisions, actions, and impacts, both positive and negative.
   - Key questions include: Who is accountable for the AI system's decisions and actions? How are accountability processes and measures defined and enforced? How are unintended consequences and potential harms addressed?

7. Alignment:
   - This aspect assesses the degree to which an AI system's objectives, behaviors, and outcomes align with the values, ethics, and goals of the organization and its stakeholders.
   - It considers how well the AI system's actions and decisions support and enhance human and organizational well-being, fairness, and societal benefit.
   - Key questions include: How are the AI system's objectives and rewards aligned with human values and priorities? What mechanisms ensure that AI behavior remains consistent with ethical principles over time? How are trade-offs between competing objectives or stakeholder interests resolved?
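The seven dimensions above could be turned into a simple working checklist for assessing a given AI system in a given context. Below is a minimal sketch in Python; the names (`RubricAssessment`, `DIMENSIONS`), the 0-5 scoring scale, and the flagging threshold are all my own assumptions for illustration, not part of the rubric itself.

```python
from dataclasses import dataclass, field

# The seven dimensions of the rubric, in the order given above.
DIMENSIONS = [
    "advisory", "authority", "agency", "autonomy",
    "abdication", "accountability", "alignment",
]

@dataclass
class RubricAssessment:
    """Scores (0-5, an assumed scale) for one AI system in one context."""
    system: str
    context: str
    scores: dict = field(default_factory=dict)  # dimension -> 0..5
    notes: dict = field(default_factory=dict)   # dimension -> observations

    def score(self, dimension: str, value: int, note: str = "") -> None:
        """Record how much of the 'work' the AI holds on one dimension."""
        if dimension not in DIMENSIONS:
            raise ValueError(f"unknown dimension: {dimension}")
        if not 0 <= value <= 5:
            raise ValueError("scores run from 0 (none) to 5 (total)")
        self.scores[dimension] = value
        if note:
            self.notes[dimension] = note

    def flags(self, threshold: int = 3) -> list:
        """Dimensions where the AI holds more of the work than the threshold allows."""
        return [d for d in DIMENSIONS if self.scores.get(d, 0) > threshold]
```

A usage example, with a hypothetical "drafting assistant": scoring abdication high would surface it as the dimension that most needs a governance conversation.

```python
a = RubricAssessment("drafting-assistant", "newsletter copy")
a.score("advisory", 2, "suggests edits, humans decide")
a.score("abdication", 4, "editors increasingly accept drafts unread")
print(a.flags())  # only abdication exceeds the default threshold of 3
```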