A Murder on the Bayesian Express

The year is 2049. It was six o'clock on a winter morning in Shenzhen. Alongside the express highway stood two detectives in robes of black and blue, overlooking the halted pedestrian conveyor belt, in the shadow of the entangled web of flyovers and intersections that filtered the neon lights emanating from the city.

“This is the first time I have seen so much blood splattered around.” — said K in a peevish tone, trying to block out the putrid smell of flesh in the open air with his hand.

“In the old days, detectives were used to seeing ten times this much blood each week,” — he said it with unusual pride, but then, realising the morbidity of the situation, his wry smile dropped back into his old, serious visage. “I admit it has been long for me too. I cannot remember the last time I saw a man’s intestines outside an xMRI.” — replied Dell in his hoary voice. “So, what happened here?” — he continued, moving his finger in a circle, pointing at the whole mess.

“Sir, a Tesla linked to a person named Dave overran the side traffic barriers into the pedestrian segway channels, crashing into and killing three people.” — said K in a matter-of-fact manner. It almost made him sound like a robot.

“How is that possible? The car had multiple other options. It could have self-destructed, or rammed into the other side and plunged into the river. This action does not make sense. I smell something fishy.” — as Dell spoke those lines, he was booting up his old-school terminal. Acting on his intuition, he submitted a request to initiate an investigation and collect evidence on whether this had been the best possible action.

“Why are you initiating a criminal investigation? All we need to do is initiate the compensation routine for the victims’ families.” — said K naively. He did not seem to register that lives had been lost. I guess humans are losing their human touch, thought Dell.

“Well, back in the day, car accidents were very common, and our justice system spent thousands of hours dishing out punishments for them. The advent of self-driving cars has made them a rarity, which makes this accident special. Boy, you are lucky to see a complex car accident at such a young age.”

He continued as K listened intently — “Given that lives were lost when so many other actions were possible, the state needs to assign blame to someone: either the AI company or the driver. We cannot have the Luddites of 20 years ago creating a din again.”

Dell thought this was the perfect opportunity to mentor K on the inner workings of our semi-automated investigative systems. Human-AI hybrid systems have the most complex legal ecosystem around them, and experience solving a case in this domain would do wonders for the boy’s career.

The basic query behind such an investigation was really simple: was this behaviour common to all cars sharing the same AI version, or was it something unique to the user’s car? In the first versions of self-driving cars, all actions were determined by the car’s model. But this meant all blame fell on the big corporations, so they created a system to shift the blame back onto the people. Each car is connected to the user’s brain through a neural link and, based on the user’s mental feedback, attunes itself to the user’s tendencies. Many proponents also thought this was necessary for enabling morality in cars, a problem that had remained unsolved for pure AI agents.

Dell walked towards the car and extended a wire protruding from his sleeve into the car’s main cognitive engine. He signalled K to replicate his hologram on his own projector.

“So first, we will set up the simulation with these configurations. This connects the car’s engine to the uber-computer, and once the handshake (the machines’ version of small talk) is established, the computer will access the model inside the car’s engine to see how different scenarios would have played out.” — spoke Dell as he pulled out the cable once the secure connection was set up.

“As the simulations are being set up, I will also access the baseline performance dashboards for this Tesla model, to establish a reference point for the simulations…” — Dell spoke like a school teacher as K observed his hologram. He was astonished at how easily Dell could use computers with his fingers. He felt lucky to glimpse how things were done not so long ago.

“Okay. All the configurations are set. Now we wait for the metrics to return the posterior numbers on the system’s prediction about Dave’s culpability.”

“Got it, boss” — nodded K. He genuinely followed each word, noting it down in his mental iNotebook. Soon he slipped into the robotic rhythm of procedure that had been drilled into him at the academy and, bored, found himself drifting back to something that had caught his eye earlier. In no time, all evidence was registered and accounted for, and they were just waiting for the simulations.

He had noticed earlier that Dell was carrying a physical book in his jacket. Curious, and taking advantage of the free time, he asked — “Whoa. Can I see that book? Where do you get such rare stuff?”

“No, no. I am just that old. Here, have a look. These used to be man’s best friend before the spread of the NeuralLink.” — he said in a half-mocking, half-lamenting tone. He handed the book to K with its back cover facing up, and K grabbed it, feeling its weight in his hand, and turned it over to its front cover.

“Ved-anta: The End of the Search for Irreducible, Irresistible Truth — well, that is a heavy title. I don’t think I will be able to read a sentence. Can you tell me what this is about?” — K moved his fingers across the slight matte impressions of the letters printed in ink, without attempting to read them. The coarse, minuscule ravines engraved in the imperfectly compressed wood pulp felt alien to someone used to glossy, shiny surfaces and intangible holograms.

“I am interested in recent history, especially the history of the AI systems that have burst into our lives in the past 20–30 years. This book talks about how we stopped trying to achieve a completely conscious AI system and instead embarked on a journey to create complex human-AI hybrids. It is a must-read for all criminal prosecutors and detectives.”

“Why is that the case? I am already interfaced with so many systems. Why read about historical systems when each day we get state-of-the-art updates to every one of them?” — replied K.

“Well, we may depend on machines, but we should never forget that we are the masters, not them. I know they are not beings like us, but they have slowly been controlling more and more parts of our lives. So knowing this stuff helps, you know.” — chided Dell in a didactic tone.

“And it goes into the basic hypothesis of why humans would always be needed for AI systems — the ones you mention — and why human feedback was necessary to solve the Moral Machine problem…” As Dell was lecturing K, he got a notification. The preliminary upper-bound estimate must be out.

As he reopened his hologram, the screen read — “Preliminary posterior sampling ratio out of bounds. Possible misalignment detected.”

Dell smiled as he realised what this meant, prompting K to ask, “Why are you smiling? What does the sampling distribution show?”

“It shows that a human might be prosecuted for a car accident for the first time in 20 years. This is going to be interesting.”

We, Robot

Circa the late 2040s — in a physical classroom of a prestigious university, we find students sitting at their desks as the professor interacts with the hologram in front of her. This university prided itself on still following the traditional teaching methods that have continued since the first universities of Taxila. It has kept the basics of student interaction but updated the medium to the mid-21st century.

[author’s note — People in the mid and late 21st century still thought of physical classrooms as the best medium for humans to learn. The way humans tangibly interact with their classmates and professor could not be replicated by any neural network link at the time. A prevalent idea then was that humans need to pass knowledge through all three layers of the mind — the conscious, the subconscious and the id — to really encode their neural networks.]

“Introduction to AI: A Historical Perspective” — read the first slide. The professor is talking about how humans have imagined AI systems and how that has shaped the regulations around today’s pervasive systems.

“As we see in this movie-holo — I, Robot — humans have been thinking about how to control AI systems for a long time. The visionary Isaac Asimov imagined three laws of robotics, described in the English language, that he believed would be sufficient to ensure intelligent machines could coexist with humans.”

Many students chuckled as the professor said these lines. It was funny to them that vague concepts in a context-specific medium like the English language could be imagined to control the behaviour of machines or humans. It was beyond them how a so-called visionary could fail to imagine Bayesian probability functions as the foundation for such laws.

“Before you laugh, you should understand that the movie itself raises the question of whether such a system would be adequate for day-to-day living in a world run by AI systems, or robots as the movie calls them.”

“‘A robot may not injure a human being or, through inaction, allow a human being to come to harm.’ The laws placed robots in the service of humans and ensured that our creations could not destroy us.”

“The issue is that, in the world shown in the movie, concepts like harm or benefit were translated into singular utility functions. When comparing a child’s life with a grown man’s, the robot predicted the probability of survival and saved the man, Will Smith’s character. That decision was the kernel of our protagonist’s hatred for robots, as well as the author’s indication that the system is incomplete. Our hero felt that in a world of singular utility functions, humanity was being lost. Any human trying to save them would have tried to save the little girl, knowing that a grown man could save himself.”

“As the movie progresses, the central AI’s intent to enslave humanity is revealed. As a consequence of the laws encoded into their minds, the machines decide that controlling humans is the only way to keep them from harming themselves through their constant exploitation of the environment. This is a perfectly rational response under the paradigm of the Three Laws, but no human would agree to it. As our hero says — it’s heartless.”

“In one interpretation, the movie ends by postulating that the system would be complete only when robots achieve consciousness at the level of humans. They would realise the ‘heart’ of each action and use it to weigh the different objectives they have to follow. The main robot — Sonny — gained consciousness and was thus able to aid our heroes in stopping the robot revolution.”

“As a corollary, an important idea was raised: morality cannot be encoded. It can only be ‘felt’ by a living brain, and so the first step towards a technological utopia should be to chase true AGI systems. While the movie had great ideas about the misfirings of AI systems under single, pre-defined utility paradigms, reality turned out a little different.”

“Yeah. Robots cannot be conscious.” — one student spoke up in a sarcastic tone.

“Well, not yet. Given that we were able to develop intelligent yet non-conscious systems like self-driving cars and helper robots, we needed a solution that was simpler than making robots self-aware.”

“Hence the first AI systems, in the military, started working on a principle of ‘learning to defer’. We realised that humans consider multiple choices, imagine different realities and magically pick one of the alternatives [yes, it takes heart to press the KILL button that fires a drone missile]. Suppose a self-driving car, due to some circumstance, has to choose between killing its occupants or the pedestrians on the road. This is a form of the well-known trolley problem, and depending on whether you are a utilitarian like Bentham or a deontologist like Kant, you would pick one of the equally ghastly choices. Over the centuries, philosophers have tried to tackle that question and come up short. There is no right answer. Yet when a person’s action kills someone not with intent but through unavoidable circumstance, we consider it ‘non-culpable’ homicide. How can we create such systems when we ourselves don’t know what is right or wrong? Hence, even though we were not sure that what humans did was right, the government found it easy to pass legislation allowing these technologies as long as humans were involved in the process.”

“The primordial systems had a text terminal that showed the relevant information and the utilities under different action plans. People looked at that information and decided whether a kill should be validated or not. Judged by the standards of war, this was a great improvement in killing efficiency over old human soldiers. But people realised it could be improved much further if this deferment could be compressed to milliseconds or microseconds.”

“Thus the first real-time systems were not created until the Neural link was fully developed by Elon. We realised the Neural link could sense coarse feedback, through emotions and regional activation, whenever certain data or a simulation was observed by the brain. A.A. created a system to run MCMC over people, peer-to-peer through their neural links and over their subconscious, to generate a model that could be deferred to in real time as a proxy for the collective human will.”

“Ma’am, is this the blueprint for the core of our current systems? Can you explain it again?”

“Simply put, we compress the collective human will into a mathematical model and use it to predict moral actions that a majority would approve of. It was the first time democracy was established at the level of each action, though by proxy. Throughout the evolution of AI, the goal was to create a child in Man’s image, so it was logical to personalise this collective model according to the will of each human user of these AI systems. There was also a lot of fear-mongering from the Liberal corners, who worried that pervasive, uniform AI systems would create a homogenous dystopia and strip the world of its multi-varied complexity. Hence the governments mandated that each AI system be personalised to its user.”

“So we have a model that compresses the collective human will of a sub-group of humans, and its alignment is adjusted by running continuous backprop against each action the user takes while working with the AI. On top of the common AI, a personal layer is added — one that signifies the user’s free will through how his brain has changed the base model over time.”

“Your AI system is an extension of your morality on top of the cultural principles. Ergo, if it sins, it is as if you have sinned.”
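[author’s note — a minimal sketch of the ‘free-will layer’ idea in present-day Python, assuming a toy linear base model. The names, shapes and learning rate are invented for illustration; the story’s system runs quantum variational Bayesian backprop over neural-link feedback, not this.]

```python
import numpy as np

rng = np.random.default_rng(0)

# Frozen base model: the compressed 'collective human will'.
W_base = rng.normal(size=(4, 4))

# Per-user free-will layer, nudged by 'continuous backprop' on each action.
W_user = np.zeros((4, 4))

def act_and_learn(situation, felt_response=None, lr=0.01):
    """Score candidate actions; take one gradient step toward the user's feedback."""
    global W_user
    scores = (W_base + W_user) @ situation
    if felt_response is not None:
        # Gradient of 0.5 * ||scores - felt_response||^2 w.r.t. W_user.
        W_user -= lr * np.outer(scores - felt_response, situation)
    return scores

situation = rng.normal(size=4)
print(act_and_learn(situation, felt_response=rng.normal(size=4)))
```

Over time, W_user drifts away from the shared kernel; that drift is exactly what the detectives later measure.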

“Why do we need to do such complex MCMC? Why can’t we just encode what we think is right and wrong?” — as the students were trying to grasp the dump of information the professor had unloaded on them, one of the girls sent this query to her.

“I am not saying that this complex MCMC is complete. It is still an unsolved problem, but the current solution is better than any encoding. What should be encoded? Should the systems be utilitarian like the robots of the movie? Or should they be vague and pick any of the possible solutions? Before answering any of these questions, we need to ask — where do our morals come from?”

“Well, it is based on our genes, our culture and ideology, and maybe free will?” — guessed one student.

“Yes, you are right! A complex combination of these factors determines our moral actions. The issue is that our brains are black boxes, and we cannot access their internal state even with Neural Links. It is not possible to simulate a billion years of biological evolution and thousands of years of social evolution to let our agents learn. Furthermore, can we create an encoding for the infinite space of possibilities to which we may have to assign moral valence in day-to-day life? Thus the only answer was to keep humans in the loop at each step.”

“In the next class, we will study how to decompose the model learned by these AI systems into the combination of genes and social factors you suggested. Till then, hasta la vista.” — the professor ended the lecture with a wink. Obscure cultural references from ages ago are obviously a favourite pastime of a history professor.

We’re gonna carry that weight.

The detectives walked back to a building that looked like a gargantuan brick lying on the ground, seeming to extend into infinity. The blackness of the surroundings engulfed all its protrusions and designs, giving it a smooth, dark look. The freckled lights from the offices inside were like binary code on a black terminal. All the people and machines working inside were just small bits of the global program that was the Justice Department.

The oppressive air of the hazy evening gave way to perfectly ambient gushes of air as they entered the building through the automatic doors. Our detectives entered the lifts that took them to their offices. In a building this huge, they had no sense of where their offices were located. All they knew was to press a button, and the lifts magically took them there. Dell hated that. He still tried to hold on to whatever he could from the old times, when humans had skills of their own. This was clear from a single glance across his room. A dusty pile of paper magazines in one corner. A table littered with paper. A dilapidated bookshelf. If we did not know better, we would have thought this was a gallery event presenting artefacts from the early 21st century.

Dell and K entered the office; Dell dusted off his physical keyboard and booted up an old-looking 4K terminal. The machine could access the biggest supercomputer in the district, but many found his way of interacting with it regressive. K roamed around, fidgeting with the odd artefacts surrounding him, and then observed that Dell was deeply engrossed in the snaggy characters appearing on his terminal.

“So what do you mean when you say a human might be responsible? How could Dave have killed those people when it was the car that hit them?”

Dell smiled as K continued his line of questioning. It was natural for him to ask; he had been placed with Dell to learn. Dell had been thinking about this for quite some time, and now that a case was here, it unfurled a flurry of thoughts he had long gestated.

“I will tell you a story. A story you already know.”

“Which one?”

“From where it all started — the story of creation. God created humans as innocent, pure creatures who did not have any burdens. They were so beloved to God that all he wanted was for them to enjoy his bountiful Garden of Eden for eternity. We were not built to handle the weight of choosing our own actions — of telling right from wrong.”

“God gave us free will.”

“Yes, like any good parent. He wanted us to create ourselves, but he warned against one thing, and we could not stop ourselves. Maybe it was our innocence that led us to be betrayed by Satan.”

“The apple from the tree of knowledge. Ah!”

“The knowledge of deciding right from wrong. It introduced the concept of evil, as an opposite to the good that mankind was, and it has tainted our species ever since. We have been fighting for thousands of years to forge something that can undo the original sin. We are literally waiting for Jesus to save us from damnation.”

“How will Jesus save Dave?” — asked K with a puzzled look, trying to grasp the argument Dell was making.

“No… no. I mean that we know good and evil exist, and we still can’t decide how to create a life where no evil exists. So we found a bypass. We created robots that were innocent and naive. They are our children, whom we created so we could forgo our burden. Maybe we are the gods of this new species.”

“If that is the case, has the car sinned today by killing people?”

“No, the age when our children eat from the tree of knowledge has not yet arrived. That will be a scary problem of its own.”

“So what's the problem?”

“We could not simply offload the burden. We tried to renege on our moral responsibilities as humans but, like all things humans do, we failed miserably. Finally, accepting God’s curse again, we realised that we needed to place a piece of ourselves inside our children. A piece of our free will within the minds of these wonderful, naive machines.”

“You mean the Free-will layer mandate under the Liberal regulations?” — K nodded, remembering things he had learned for his criminal law exams. The old-fashioned course had always given him trouble.

“Yes. We built the Quantum Variational Bayesian backprop method to let us share that burden with the machines when they could not take it alone. So now, whenever your machine does anything, there is a part of you inside it that makes the decision, or that is central to the decision. This way, if the machine did it, then it was really Dave who did it!”

“Aha! That is why you said he might be prosecuted. But why this accident in particular? There have been multiple accidents with self-driving cars. What makes Dave, or this accident, special?” — as K was grasping Dell’s metaphorical story and formulating the array of questions popping up in his mind like popcorn, a loud noise arose from Dell’s system.

There were whirring, whizzing noises. Dell smiled; he had set the old dial-up tune as his notification tone for theatrical effect. You must have realised by now that he has a flair for the theatrical and a special place for nostalgia in his heart. The results of the Quantum counterfactual investigation were out!

“Let me explain by walking through the counterfactual investigations. Since we want to expedite things, I ran just one trillion simulations. The preliminary sampling showed that in a counterfactual world at the crime scene, if the car had been a generic Tesla Model 9e, it would have sacrificed the driver 99.7% of the time. The calculation compares P(Hit | Dave) against the marginal P(Hit) = Σ_u P(Hit | u) P(u), summed over all users.”
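[author’s note — a minimal sketch, in present-day Python, of the comparison Dell describes. The per-user hit probabilities, rollout counts and the uniform prior over users are invented assumptions; the real uber-computer replays the scene through the car’s own cognitive engine.]

```python
import random

random.seed(2049)

# Invented per-user hit probabilities: most personalised models almost
# never choose to hit the pedestrians; Dave's is an extreme outlier.
users = {f"user_{i}": 0.003 for i in range(9_999)}
users["Dave"] = 0.9

def p_hit(user, n_rollouts=10_000):
    """Monte Carlo estimate of P(Hit | user) from simulated rollouts."""
    hits = sum(random.random() < users[user] for _ in range(n_rollouts))
    return hits / n_rollouts

# Marginal P(Hit) = sum over users of P(Hit | u) * P(u), with uniform P(u).
p_u = 1.0 / len(users)
p_hit_marginal = sum(p_hit(u, 100) * p_u for u in users)

print(f"P(Hit | Dave) ~ {p_hit('Dave'):.3f}")
print(f"P(Hit)        ~ {p_hit_marginal:.4f}")
```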

“Well, that means 0.3% of people would have acted similarly to Dave. That does not make him special or culpable.”

“You are right. This is where we have to go deeper into the mind. As you might know, the intent decomposer is used in these cases.”

“I have a functional understanding. It tells us the source of our intentions. But not beyond that…” — K was getting confused now. He still did not understand why a human would be responsible.

“So, this system tries to explain an action in terms of intentions. We can’t read intentions directly through the neural links, but the constant connection between the car and the brain lets us use simple action signals from the brain to infer the latent desires of the actual human mind. We never thought that talking with our robot children would unlock the secrets of our own brains. If we consider a hierarchical Bayesian model — environment -> intentions -> actions — we can calculate posterior probabilities like P(intention | action). We then go one level further and decompose P(environment | actions) into the parts of the environment that encoded these intentions in Dave. Was it civil society, the culture he belonged to, or his genes that actually made him do this? Thus we can write P(environment | actions) = f_society + f_genes + f_culture + ε, where ε is the residual we attribute to free will.”
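[author’s note — a toy sketch of the intent decomposer’s arithmetic in present-day Python. The intentions, priors, likelihoods and f-scores are all invented; only the Bayes-rule step and the additive decomposition mirror what Dell describes.]

```python
# Hierarchical chain: environment -> intention -> action.
# Step 1: Bayes' rule gives P(intention | action = hit).
priors = {"save_self_at_any_cost": 0.01, "avoid_pedestrians": 0.99}
likelihood_of_hit = {"save_self_at_any_cost": 0.95, "avoid_pedestrians": 0.002}

evidence = sum(priors[i] * likelihood_of_hit[i] for i in priors)
posterior = {i: priors[i] * likelihood_of_hit[i] / evidence for i in priors}
for intention, p in posterior.items():
    print(f"P({intention} | hit) = {p:.3f}")

# Step 2: decompose the dominant intention into environmental sources.
# The component scores and the residual are assumed to sum to 1.
f_society, f_genes, f_culture = 0.02, 0.03, 0.01
epsilon = 1.0 - (f_society + f_genes + f_culture)  # the 'free will' residual
print(f"epsilon = {epsilon:.2f}")  # Dave's, unusually, dwarfs the other terms
```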

“What now?” — this went over K’s head. Dell should improve his pedagogical skills.

“In short, we want to know whether, even if Dave took an unnatural action, he was really responsible for taking it. If it happened just because his genetic constitution predisposed him to it, how can we put him in jail?”

“What do the simulations show for Dave, then? Which part of the function is high?” — K, now understanding the basics, asked the obvious question.

“Well, that is why I said this was going to be interesting. It is the first time I have seen such a decomposition. Come have a look for yourself…” — Dell signalled K to come over behind him and watch the results at the end of each epoch of the simulation. K raised his finger towards the terminal and traced the numbers for each possible root cause. Understanding dawned as he read the whole line of results on an old-looking spreadsheet.

“Now I understand… Let’s wait for more conclusive answers.” — as K realised the implications of the numbers, he walked towards the window behind where Dell was sitting. He thought about how Dave’s mind must have worked as he looked down at the frail, tiny human figures dotting the vast parking lot visible from the window. They seemed so powerless and small, and yet they were all moving with resolve towards goals invisible to him. The recursive nature of his thoughts created a miasma of confusion around him as the machines silently went on computing better estimates for the function.

The Fourth Act

Criminal law is a field known for its fidelity to custom. It still uses terminology from the time of the Greeks and the British Empire. An interview is underway for an associate position at a law firm specialising in AI law.

“The first case is Mr Adam. Looking over the detective’s counterfactual report, we see it indicates that the f_society and f_genes scores are high. So tell me, what does that mean?” — the interviewer showed the data to the candidate on a screen.

“Under Article 1, Act 4 of the ‘Harm Principle Legislation of 2040’, Adam would be liable for fines of up to $2MN. He would lose his licence to operate or interact with all intelligent agents. Given his predisposition for malice, he would not be allowed to contribute to these machines’ kernels as part of the free-will layer. He would need to be accompanied by a caretaker to inter-operate with a machine whenever, for example, he needs to use transport.” — the interviewee replied in a matter-of-fact tone.

“Next question — Would he be liable to serve jail time?”

“Since the epsilon values are low and we are finding good fits across multiple chains, I don’t think any judge would find him culpable. As the famous saying goes — if the function fits, you must acquit!”
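[author’s note — the ‘multiple chains’ the candidate cites reads like the classic Gelman-Rubin convergence check on MCMC output. A minimal sketch with invented epsilon samples; an R-hat near 1 is what ‘good fits across chains’ would look like.]

```python
import statistics

def gelman_rubin(chains):
    """Gelman-Rubin R-hat across several equal-length MCMC chains."""
    n = len(chains[0])
    means = [statistics.mean(c) for c in chains]
    W = statistics.mean([statistics.variance(c) for c in chains])  # within-chain
    B = n * statistics.variance(means)                              # between-chain
    var_hat = (n - 1) / n * W + B / n
    return (var_hat / W) ** 0.5

chains = [
    [0.018, 0.022, 0.019, 0.021, 0.020, 0.017, 0.023, 0.020],
    [0.021, 0.019, 0.022, 0.018, 0.020, 0.023, 0.017, 0.021],
]
print(f"R-hat = {gelman_rubin(chains):.3f}")  # near 1: the chains agree
```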

“Are you aware of the precedent set by the famous ‘Divergent Dave’ case?”

“Not really. I am just aware that it challenged the notions of criminal law at the time.”

“I think that’s enough for me. Thanks for your time!” — the interviewer, with a smile, signalled the interviewee to leave the room as they called in the next candidate.

Inside Out and Outside In

These documents are part of the character background records admitted as addendum information to the courtroom proceedings for the defendant David Bowman in the case Dave vs. The People of the United States. For additional references, please refer to the Justice Department archives.

Document 1: Defence deposition

Defence Lawyer (DL): “Are you aware of your circumstances, Mr Dave?”

Dave: “Yes. I suppose I am in what they call a pickle.”

DL: “Sir, don’t take this lightly. It might mean you get jailed!”

Dave: “Why is that the case? I thought people didn’t get punished for car accidents anymore?”

DL: “Your case is different. I saw the detectives’ reports from the intent decomposer. Your epsilon is the highest ever recorded for a criminal offence.”

Dave: “What does that mean?”

DL: “Before answering that, let me ask you: if you had been driving the car yourself instead of relying on the AI system, would you have killed the pedestrians to protect your own life, or your car?”

Dave: “I suppose so!”

DL: “No. This is not what we practised. We have no defence other than this?”

Dave (looks amused rather than disturbed): “Why exactly do I need a defence, again?”

DL: “If this is not clear, I will say it again. This is historically unprecedented. The law mandates human culpability when epsilon fails the 99.9% hypothesis test, and that had never happened in recorded history before your case. In short, without my help you are going to jail. But I need you to help me help you. Please.”
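[author’s note — a toy version of the statutory epsilon test, assuming it is a one-sided z-test at the 99.9% level. The population samples and Dave’s value are invented for illustration.]

```python
import statistics

# Posterior epsilon samples pooled from past cases (invented values).
population_eps = [0.010, 0.022, 0.015, 0.031, 0.025, 0.020, 0.012, 0.018]
dave_eps = 0.94  # 'highest ever recorded', per the detective report

mu = statistics.mean(population_eps)
sd = statistics.stdev(population_eps)
z = (dave_eps - mu) / sd

# 3.09 is the one-sided 99.9% normal quantile; exceeding it fails the test.
print(f"z = {z:.1f}; human culpability: {z > 3.09}")
```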

Dave: “If this is so extreme, what are you trying to do? What help do you want?”

DL: “Since this is unprecedented, I will try to create reasonable doubt in the mind of the judge. The result is so extreme that it cannot be true; there has to be something wrong with the calculation or the detectives’ data. So you need to assure the jury that you would never have taken this action consciously, of your own will. Then maybe we can cast doubt on the most robust system humanity has ever created.” (ends in a sarcastic tone)

Dave: “Somehow, I cannot bring myself to say that. I am sorry. I cannot be unauthentic. I would rather take the jail time.”

DL: “Please I implore you.”

Dave: “I am sorry.”

DL: “The defence won’t have any arguments to present. You are not giving me any room. I will see you tomorrow in court.”

Dave(smiling): “Thanks for your time!”

DL: “Off the record — you still do not realise it. Maybe you do need to go to jail. Goodbye, Mr Dave!”

Document 2: Dave’s testimony/confession

I am not sure if this would have turned out any different for any of us if I had been what everyone wanted me to be. But the truth is, I am what I am. I cannot start being harmonious just because the world wants me to be. I know people might think I was born this way, or was taught to be this way by my parents. I am even tempted to follow the defence that my AI was not in sync with my own free will. But the truth of the matter is that, deep down, when I ask myself whether I would have swerved the car onto the pedestrians, I feel that maybe I would have done the same with my own hands on the steering wheel. Honestly, I don’t see a difference.

Document 3: Final statement after punishment is delivered

“For everything to be consummated, for me to feel less alone, I will only wish that there be a large crowd of spectators the day of my imprisonment and that they greet me with cries of hate.”

Until next time!

“Hey, let’s just drop out of this lecture and go to that party!”

“Hell yeah! I am in no mood to be bored today.”

“Yes. As soon as he starts his stream, we will drop out quickly so that he does not notice.”

They waited a few minutes for the professor to start speaking, watching for a window to drop out of the lecture…

“Today we will study the case of David Bowman, aka Dave, and how this landmark case set a precedent and changed that era’s perspective on AI law. We will also discuss how the problem raised by the case’s proceedings was solved through a mixture of philosophy and technological advancement…”

BEEP. BOOP. Connection Ended. Thanks!

I write when I am depressed.