2019

Aisha Ghei Dev                                            Info


Hello! I’m a designer, researcher and curator(ish), exploring the influence of spatio-technics on identity and culture.
My work is an articulation of these explorations. 


*~*~*~*~*~*~*~*~*~*~*~*~*~*

QUESTIONS — 


1 - Moral Codes [2018]

AI research, speculative design, digital fabrication.

2 - Weird Temples [2019 - ]

worldbuilding, visual design.

3 - Yellow Line Radio [2018]

experience, brand, art direction.

4 - Code of Conscience [2019]

parametric design, brand, visual system.

5 - Humanoid [2019]

product, ui/ux, brand development.

6 - Archive [ - ]

fun things!























THE PROJECT

My project focused on addressing human autonomy, the ‘moral crumple zone’ and democratic decision making in AI systems. I created a speculative data sculpture, a decision tree that grows using your decisions as DNA. This ‘Moral Code’ plugs into AI systems, overriding biased, monopolising algorithms with your own.



For the full process, see Medium.

Spring 2018
Project Advisor - Dan Lockton














CASE STUDY

I narrowed in on the field of autonomous vehicles: how do we address the moral crumple zone and at the same time democratise decision making?

HOW MIGHT WE

How might we democratise decision making so that algorithms, currently embedded with the ethics and values of the people who code them, become applicable to everyone? How might we literally make autonomous systems extensions of ourselves?

How might this help us figure out accountability and responsibility in the age of intelligence plurality? How do we make sure that human judgement trumps machine intelligence when it comes to ethical and highly subjective, individual-specific decisions with no clear value judgement?









Autonomous vehicle with the Moral Code





THE MORAL CODE

The Moral Code is a speculative data sculpture that grows based on the decisions you make through your lifetime and literally attempts to represent ‘where you’re coming from.’

This acts both as a meditative sculpture as well as an ethical footprint that could be used to connect you to your environment via AI systems and products.

By plugging in your Moral Code you could embed your own ethical code into the technologies you use, increasing your agency over these systems. This sculpture attempts to democratise the ethics of future AI technologies and address the ‘moral crumple zone’ in unmanned systems.
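
As a purely hypothetical sketch of the idea (not the sculpture’s actual implementation), the ‘Moral Code’ could be imagined as a tree that grows as decisions are recorded and that an AI system consults before falling back to its own default. Every name below (MoralCode, record, resolve) is illustrative only:

    # Hypothetical Python sketch of a 'Moral Code': a tree that grows with
    # each recorded decision and can be consulted by an external AI system.
    # Names and structure are illustrative, not the project's actual code.

    class Node:
        def __init__(self, situation, choice):
            self.situation = situation   # e.g. a dilemma the person faced
            self.choice = choice         # what the person actually decided
            self.children = []           # later decisions branch off this one

    class MoralCode:
        def __init__(self, owner):
            self.owner = owner
            self.root = None
            self._latest = None

        def record(self, situation, choice):
            # Grow the tree: each lived decision becomes new 'DNA'.
            node = Node(situation, choice)
            if self.root is None:
                self.root = node
            else:
                self._latest.children.append(node)
            self._latest = node

        def resolve(self, situation, system_default):
            # A plugged-in AI system defers to the owner's own precedent
            # when one exists, overriding its built-in choice.
            stack = [self.root] if self.root else []
            while stack:
                node = stack.pop()
                if node.situation == situation:
                    return node.choice
                stack.extend(node.children)
            return system_default

    code = MoralCode("Mildred Plotka")
    code.record("swerve towards property vs. towards a person", "property")
    print(code.resolve("swerve towards property vs. towards a person",
                       system_default="minimise total cost"))  # -> property

In this toy version the owner’s recorded choice simply overrides the system default whenever an identical situation recurs; the actual sculpture is physical and far richer than a lookup.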









Moral Code for Mildred Plotka





Moral Code typologies





PROCESS







RESEARCH

I was really interested in looking at how very human and extra-human ideas would change in the context of AI. Religion and ethics were the bigger umbrellas I was considering.

I was interested in the ‘intelligence’ of AI: what it means, and to whom. This got me thinking about the neural networks of these systems; how can we get a more holistic picture of these intelligences?


ETHICS OF AI

Instead of thinking about the ethics of current or developing AI in relation to their ethical effect on people and cultures, I’m going to be looking at the ethics, or the moral code, of the actual AI: the Alexas, the Cortanas, the Siris.

What are the values that these systems hold? How does our behaviour change in response to these networks?


THEORY OF MIND AND INTELLIGENCE PROFILING

This tangent spun out of a conversation with David Danks. He mentioned considering AI rights as we consider animal rights; there is a certain level of anthropomorphising that takes place in this situation.

A lot of these rights, however, are specific to cultural mores and morals and integrate a set of social interactions. A lot of this depends on how you make inferences from observations and happenings, from the experienced world.

For example, on the road, people see other people to be mindful of, cars, trees and so on. Self-driving cars do not see people as we see people; they don’t see the rich picture, they see pixels and data to avoid via computer vision.

What if we took people from different places? A driver from Pittsburgh would see people, trees, signs etc. A driver from Delhi would see crowds, a really wide range of precarious and unsafe vehicles, etc.

How do we construct ideas of how other people think? How do we construct ideas of why people do what they do?

EXPLORING ARTIFICIAL ETHICS

What if we could create a system that would record and predict the decision-making process for people, according to their ethical code?

Visualising ‘where people are coming from.’ We hear this phrase being thrown around, especially in such a polarised world. What if this system could literally visualise the contexts that people are situated in, so that their decisions become more understandable?









1. QUIPU

This is an Incan system of accounting and record keeping — it is based entirely on knots tied in rope, worn as a belt by the community accountant. The knot patterns are unique to each individual or family, according to their own personal account.

It is essentially a footprint of their data, presented in a comprehensible way for administrative purposes.
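
To give a rough sense of the underlying logic, historical quipus recorded numbers in base ten, with clusters of knots along a cord standing for successive decimal places (a simplification; real quipus also used distinct knot types). A small illustrative sketch:

    # Illustrative only: a toy encoder that turns a number into knot counts
    # per decimal position, echoing how a quipu cord carried a record.

    def quipu_knots(value):
        # One knot cluster per decimal place, highest place first.
        return [int(d) for d in str(value)]

    def describe_cord(value):
        places = ["units", "tens", "hundreds", "thousands", "ten-thousands"]
        digits = quipu_knots(value)
        pairs = list(zip(reversed(digits), places))        # units outward
        return ", ".join("{} knot(s) at the {} position".format(n, place)
                         for n, place in reversed(pairs))  # print high to low

    print(describe_cord(452))
    # -> 4 knot(s) at the hundreds position, 5 knot(s) at the tens position,
    #    2 knot(s) at the units position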












Quipu.





2. COMPUTATIONAL QUILT

This got me thinking about textiles and weaves — how they’re so specific to cultures, people and places.

What kind of information could be embedded into these quilts using material logic as well as technique?





                                                                                                     

Lorrie Faith Cranor (2013)





THE THING










DEMOCRATISING DECISION MAKING

“What people should strive for, in Greene’s estimation, is moral consistency that doesn’t flop around based on particulars that shouldn’t determine whether people live or die.”

This extract was taken from an article in The Atlantic on the response of Buddhist monks to a variation of the trolley problem. What was thought to be a choice made only by “psychopaths and economists” was also made by these monks.

The data physicalisation would also be a representation of the judgement of morality and the decisions we make based on our moral codes; what is considered prudent or even noble in one culture or belief system might be considered brutal and inhumane in another.

“AI systems must embed some kind of ethical framework. Even if they don’t lay out specific rules for when to take certain behaviors, they must be trained with some kind of ethical sense.”

The article goes on to point out that although there should be some sort of ethical code embedded into AI technologies, people aren’t comfortable with the idea of companies making decisions on their behalf. “Again, in that instance, people don’t hold consistent views. They say, in general, that cars should be utilitarian and save the most lives. But when it comes to their specific car, their feelings flip.”

Do we know which parts of our moral intuition are features and which are bugs?


ADDRESSING THE MORAL CRUMPLE ZONE

I was first introduced to this question by Madeleine Elish of Data & Society, while we were consulting with them.

“As control has become distributed across multiple actors, our social and legal conceptions of responsibility remain generally about an individual.”

How will we assign accountability? With increased interaction and diffusion of human and machine intelligence, who do we hold responsible for decisions that aren’t completely autonomous?







Initial forms — rendered parametrically in Rhino 5 / Grasshopper
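
The forms above were generated in Rhino 5 / Grasshopper. As a hedged sketch of what a growing, branching form of this kind could look like inside a GhPython component (not the project’s actual Grasshopper definition), a simple recursive branch using rhinoscriptsyntax:

    # Hypothetical GhPython sketch, not the project's actual definition:
    # a simple recursive branching form of the kind that could underlie
    # a growing decision-tree sculpture.
    import math
    import rhinoscriptsyntax as rs

    def branch(start, angle_deg, length, depth, spread=25.0, decay=0.7):
        # Draw one segment, then recurse into two shorter child branches.
        if depth == 0 or length < 0.1:
            return
        a = math.radians(angle_deg)
        end = (start[0] + length * math.cos(a),
               start[1] + length * math.sin(a),
               start[2])
        rs.AddLine(start, end)
        branch(end, angle_deg - spread, length * decay, depth - 1, spread, decay)
        branch(end, angle_deg + spread, length * decay, depth - 1, spread, decay)

    # Grow a small tree upward from the origin.
    branch((0.0, 0.0, 0.0), 90.0, 10.0, depth=6)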





























