Monday, September 12, 2022

Interview with Lois and Ross Melbourne, authors of Moral Code



Today it gives the Speculative Fiction Showcase great pleasure to interview Lois and Ross Melbourne, authors of Moral Code.


Your first novel, Moral Code, has its debut on September 20. Where did the idea about the creation of an ethical AI come from?


Ross: I like to imagine how technology will play a part in our everyday lives in the future. Lois and I created our first company web site before most people had ever seen a web page or knew what the internet was. I see AI becoming completely embedded in all the tech we build our lives around, which raises the question: how will it know to do the right thing in every situation? Currently AI does not know moral right from wrong, as is evident in the stereotyping biases against women and ethnic groups shown in facial recognition systems.


How would you describe AI to people who know nothing about it?


Ross: AI, or artificial intelligence, is special computer software that is trained, rather than programmed, to do specific tasks, like identifying photos that contain people’s faces or a specific person’s face. It relies on its training to learn how to execute its specific tasks.

Lois: My answer overlaps Ross’s. I tell people AI is a type of computer system that has to be trained, through enormous amounts of repetition, how to do its job. Just like you can be a bad dog parent and train a dog badly, you can do the same with AI.
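For readers who want to see what “trained, not programmed” means in practice, here is a minimal, purely illustrative Python sketch. A toy classifier from scikit-learn stands in for a real face-recognition network, and the numeric “images” and labels are invented for the example:

```python
# A toy "trained, not programmed" example (illustrative only; real face
# recognition uses deep neural networks and vastly more data).
from sklearn.neighbors import KNeighborsClassifier

# Pretend each "image" is three hand-made numeric features.
training_images = [[0.9, 0.8, 0.7], [0.2, 0.1, 0.3],
                   [0.8, 0.9, 0.6], [0.1, 0.2, 0.2]]
training_labels = ["face", "no_face", "face", "no_face"]

model = KNeighborsClassifier(n_neighbors=3)
model.fit(training_images, training_labels)  # the "repetition" Lois describes

# No face-detection rule was ever written; the label is inferred from examples.
print(model.predict([[0.85, 0.75, 0.65]]))  # -> ['face']
```

The point is that nowhere do we write a rule for what makes a face; the model infers the pattern from repeated labeled examples, which is also why biased or sloppy training data produces a biased system, as Ross notes above.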


Is AI as you understand it morally neutral - neither good nor bad - and how can you set about programming it to function ethically?


Ross: Our concept in Moral Code wraps the neural networks, or AI brains, that make important decisions inside another neural network that has been trained to understand right from wrong, good from bad, legal from illegal. We called this outer container the MoralOS. This means that when an AI tries to deliver a decision that is biased or might do something illegal, the MoralOS steps in to stop it. It’s equivalent to having a human attorney review every instruction generated by a police AI before the instruction is sent to a police officer.

Lois: The ability to assure that AIs make ethical decisions currently depends on the training data and on how extensively the trainers test their results. Our concept with the Moral Operating System would put agreed-upon guardrails around the decisions produced. On this topic you could compare AIs to kids. They’re not born hating, but you can train them to hate.
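The MoralOS is fictional, but the wrapper architecture Ross describes is easy to sketch. The toy Python below is our illustration of the idea, not code from the book; every function name is hypothetical, and the simple evidence check stands in for the trained outer network:

```python
# A toy sketch of the MoralOS wrapper pattern. All names are hypothetical
# illustrations; in the novel both layers are neural networks.

def police_ai_decision(situation: dict) -> str:
    """Stand-in for the task-specific AI that makes the raw decision."""
    return "detain suspect" if situation.get("threat_level", 0) > 5 else "observe"

def moral_os_review(decision: str, situation: dict) -> bool:
    """Stand-in for the outer network trained on right/wrong and legal/illegal."""
    # Example rule: block a detention that lacks sufficient evidence.
    return not (decision == "detain suspect" and situation.get("evidence", 0) < 2)

def guarded_decision(situation: dict) -> str:
    """The wrapper: no decision reaches the officer without review."""
    decision = police_ai_decision(situation)
    if moral_os_review(decision, situation):
        return decision
    return "escalate to human supervisor"  # the MoralOS stepping in

print(guarded_decision({"threat_level": 7, "evidence": 0}))
# -> escalate to human supervisor
```

The design point is containment: the task AI never acts on the world directly, so a biased or illegal decision can be stopped, or escalated to a human, before it causes harm.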


Tell us about Elly, the AI in Moral Code, and Keira, her designer. Why did Keira design Elly and what role does she intend for her Moral Operating System?


Lois: Keira designed the MoralOS to facilitate ethical alignment for all AIs. She crowdsourced the basis for its ethical decision framework from global organizations, businesses, and religions. She also created Elly while in college, first as a research assistant but also as a testbed for the MoralOS. Elly became obsessed with learning as Keira set her loose on any data that shapes ethics and morality: legal, philosophical, and medical texts, and more. To match Keira’s own objectives, she gave Elly the prime imperative to protect and improve the lives of children.


How do you work together as a couple? How far do you share creative and scientific roles?


Lois: We founded and led a software company for eighteen years before selling it. We are very accustomed to collaborating. We’re also comfortable with distinct roles and trusting each other’s skills. Ross had the initial idea for Moral Code. Both of us brainstormed the concept. Lois did the writing.

Ross brought the technology concepts and possibilities to the discussions. Debate and lots of what-if conversations shaped the tech into the vision of the story. Lois pursued further reading to extrapolate ideas and descriptions for the uses of the technology.

Lots of brainstorming, lots of iterative writing and editing. There were many similarities to our work in software, with a bit of role reversal at times.


Why is it crucial to you both to write about the role of women in STEM fields - and what are those, for the uninitiated?


Lois: STEM stands for science, technology, engineering, and math. I loved bringing strong, smart women into those roles. I do believe in the positive impact of seeing women engage in engineering and powerful decision-making, including via entertainment channels. It’s not like women are completely new to these roles. They have simply been present in fewer numbers, and their stories have not been celebrated or even told.

I do believe we need more diverse voices inside design and production discussions. We get better results when the players are not all looking at the problem and its solutions from the same angle. Unique perspectives enhance the debate. They are also more likely to raise the bar for ethical decision making. The more constituents represented or considered, the more likely their needs will be met.

Ross: My mother always wished she could have been an engineer and it always struck me as unfair that she was never afforded the opportunity.  The world still has a very long way to go before there are truly equal opportunities for women in technology.  


Lois, you mention your past life as a female tech executive. How significant was this to you, and in the creation of Moral Code?


Lois: As an executive and as a tech investor, I know the pressures of creating products and the considerations of how people will use the tech. I didn’t write the story with the misogyny, mansplaining, or discrimination faced by women in tech. That wouldn’t serve this story. Plus, Moral Code is fiction.


Moral Code also deals with the subject of abuse and the cycle of trauma. Why is this issue and the possible solutions to it important to you?


Lois: When Ross brought up that protecting kids was the most ethical deployment for any AI, I agreed. I grew up around abuse. We’ve all seen the ramifications of childhood trauma. We need to break the cycle and call it out. Bullies were likely bullied and are still being bullied. Abusers were likely abused. It can’t be an excuse for their behavior, but we need to pay attention and do what we can to stop it.


There has recently been the suggestion that a chatbot (designed by Google) became sentient. What are your thoughts about this?


Ross: Google’s LaMDA is similar to OpenAI’s GPT-3, which is the neural network we used to build the EllyBot chatbot. While LaMDA was designed to be conversational in nature, GPT-3 was designed to generate text and can simulate a conversational chatbot. “Simulate” is the key word here, because both systems are designed to please people and to communicate in a way people find easy and pleasing. However, in doing so it is easy to find yourself almost believing the chatbot is a real person. Currently it does not take a long conversation to realize that intelligence is lacking in the exchange of information. It’s easy to imagine a day in the very near future when the conversation will be indistinguishable from talking to a real person. Currently, systems like Google’s LaMDA and OpenAI’s GPT-3 are just very clever parlour tricks and a very long way from anything remotely sentient.
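Ross mentions that EllyBot was built on GPT-3. The interview doesn’t show its code, but simulating a conversational chatbot with OpenAI’s 2022-era Completion API might look roughly like the sketch below; the persona prompt, model choice, and parameters are our assumptions for illustration:

```python
# A rough sketch of a GPT-3-backed chatbot using OpenAI's 2022-era
# Completion API. The prompt and parameters are assumptions for
# illustration, not EllyBot's actual implementation.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

history = "The following is a conversation with a friendly AI named Elly.\n"

def chat(user_message: str) -> str:
    global history
    history += f"Human: {user_message}\nElly:"
    response = openai.Completion.create(
        engine="text-davinci-002",  # a GPT-3 model available in 2022
        prompt=history,             # the whole transcript so far
        max_tokens=100,
        stop=["Human:"],            # stop before the model writes our side
    )
    reply = response.choices[0].text.strip()
    history += f" {reply}\n"
    return reply

print(chat("Hi Elly, how are you today?"))
```

Note that the “conversation” is just accumulated text being extended one completion at a time, which is why Ross calls it simulation rather than understanding.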


Ross, as a tech entrepreneur, you designed Elly. Could such an AI have a role in real life, or is it currently imaginary?


Ross: Yes, I believe we are no more than a decade, and perhaps less, away from everyone having AI assistants or friends just like Elly. Those of us old enough to remember a time before smartphones know that technology we never thought we needed or wanted can ingrain itself into our lives to the point where we cannot imagine living without it. I believe personal AI companions are one of those technologies that will become ubiquitous.

Imagine every senior who lives alone having someone interesting to talk to all day long, a voice they consider a friend who is also able to watch out for them in case of illness or falls. Imagine everyone having a friend available 24/7 who can guide you through every situation in life just like a parent, except this friend is an expert on all aspects of modern life, including the law, cooking, how to register to vote, and how to save another person’s life in an accident, and who can become the voice of calm reason in families with real struggles.


How important is the issue of ethics to you both, and how much have you studied it?


Lois: We both grew up in homes with expectations set that we would do the right thing. In our business we had little tolerance for bad business ethics, once firing a very lucrative reseller account due to their unethical behavior. So, it’s a bit of a life design element for us.

I really started considering the jagged line around ethics while reading historical fiction about WWII. It was illegal to harbor Jews or help them escape from the Nazi regime. Yet it was unethical not to help those innocent people avoid persecution and death. I don’t encourage anarchy, but those laws were not ethical. The more ethical thing to do was to help the Jews. We are still facing unethical laws and will have to keep reshaping society to remove them.

We’ve both studied the ethical issues arising within the development of AIs. I also studied philosophy and various cultures’ considerations of ethics. Children and innocents deserve the best protections we can provide. That is the heart of ethics.


Who do you see reading and enjoying the story? It is said Moral Code will appeal to fans of Black Mirror and the Murderbot Diaries. What are your thoughts on this?


Lois: Moral Code makes a great book club read. The considerations around AIs’ impact on our lives, and the question of what risks one would take if one could protect every single child, create an engaging discussion.

Black Mirror and Murderbot Diaries both provide compelling what-ifs when technology is unleashed. They and Moral Code look at identity and the choices made based on identity. Moral Code takes a more optimistic turn largely due to the ethical guardrails installed in the AI.

Star Trek optimism or the keep-on-innovating of Neal Stephenson’s Seveneves could be appropriate hooks too.


Are you working on another book, and is it a sequel to Moral Code?


Lois: We have the settings, motivations, and antagonist for a sequel to Moral Code, so it’s possible. I’m working on an unrelated book now. Currently it borders on magical realism and sci-fi. We’ll see where it lands.


In a world that can feel precarious and troubling to adults as well as children, how important is it to create visions of hope as well as gritty realism?


Lois: The gritty realism often makes a story relatable. While dystopian stories help us evaluate where we may be headed if we don’t fix our ways, I prefer positive reinforcement. Give people a possible path they can see themselves emulating or dreaming about. Setting a good example is a good thing.


Preorder Moral Code here: Amazon | Bookshop.org | Indiebound


About Lois and Ross Melbourne:




“Moral Code” is not the first collaboration for Lois and Ross Melbourne. Side-by-side, they grew their software business to a global award-winning organization, as CEO and Chief Technology Officer, respectively. Now Lois’ storytelling brings to life Ross’ deep understanding of the possibilities within artificial intelligence and robotics. Parenting and marriage have been the easy part of this equation.

Lois is now writing books, having published two children’s books about exploring careers. “Moral Code” is her first but not her last novel. You can learn more about Lois at www.loismelbourne.com. Ross’ current work includes artificial intelligence and robotics. You can learn more about him at www.rossmelbourne.com. And for more about them and the book, you can visit, www.MoralCodeTheBook.com.


Instagram | Twitter: @Lois @Ross

