Noelle Silver and Voice Technology
Diversity in AI is only as diverse as the person teaching it. Wise takeaways from my interview with Noelle Silver on the Sound In Marketing podcast (Ep. 48 and 49).
Noelle, a data scientist, Microsoft MVP in AI, and social conscience, described her perspective on diversity in AI to me. In 2014, the year Alexa was “born,” Noelle joined Amazon to help build out an education team at AWS, where she wanted to develop a structure for developer training. Through this work she received a beta version of Alexa and started exploring voice and voice app development.
Noelle was fascinated by the idea of talking to a kitchen device.
Diversity in AI Starts With The Developer
AI may be able to learn on its own, but it all starts with a human being. If your collective of developers is only white or only male, that is the only perspective the machine can learn.
This is not to say that white men can’t or shouldn’t program. It means your developer collective should not be ONLY white men; it should also include white women, Black men, Black women, other minorities, Indigenous people, people with disabilities, and people of different sexual orientations, drawn from across the country and around the globe.
It is impossible to relate completely to people outside of our homogeneous subsets, Noelle explained. We can empathize, but we can’t fully relate to others’ unique perspectives and experiences. They grew up differently than we did. They have different backgrounds, learned different nursery rhymes, and looked up to different people. We all come from different schools, know different people, experience different climates, have different interests, and share different inside jokes. They are not us and we are not them. However, everyone is valid and should be represented.
Noelle Silver
Noelle has worked at IBM, Amazon, Microsoft, NPR, and HackerU, to name a few. She launched four languages at IBM and “cared for” 17 different API models at Microsoft.
Combining her educational background with her desire to create AI correctly and ethically, she now works at HackerU to “reinvent university education”. Noelle is developing a standardized course curriculum for college students in voice first industries in a university setting.
The Progression of Ethics in AI
Through her data science work, Noelle has seen a lot of companies deliver strong messaging on partnerships and ethics. However, they are not supplying the toolset for an actual “whistleblower” program. Although some of these companies hold ethics training classes and courses, engineers are not required to take them. And even when engineers do take them, there is no further course of action past the class: no toolsets or systems put in place by the company.
Noelle has found that setting these ethical whistleblowing programs in place now rather than later can save a company billions of dollars. It avoids embarrassing brand missteps later, once all systems are fully established and hard to change. Just look at Facebook and what it’s currently dealing with.
The problem is, Noelle continued, that these engineers are not thinking about the “moral and ethical repercussions”. They’re too busy “making the stuff”. And rightfully so. It should not be their responsibility to think about the moral and ethical implications of the company. It’s the responsibility of the company.
But how do companies do that when every company sees morals and ethics slightly differently?
Siloed Data Collection Creates Inconsistencies
One way to help regulate content is in the way we gather our data.
Currently, data is siloed within companies. Those companies then build regulations and standards around their siloed information. These apply only company-wide and do not regulate the industry as a whole, which creates irregularity and inconsistency in standards.
Noelle says we should be open with our data and realize that this collection of information is solving a bigger problem; not just for that one company or even that one industry but for the global good of communication.
Because there’s no consortium now, we’re going to repeat the mistakes of the past. What’s good for one company may not mirror another. Systems become more and more segmented, and global ethics become impossible.
So how do we create an open source virtual assistant bot framework that allows everyone to build?
Can Robots Make Art?
Noelle had a very interesting opportunity to collaborate with data scientists at MIT and curators at the Metropolitan Museum of Art. The museum was trying to “tag” its art in the most inclusive way it could.
“We knew that in order to solve any problems for the museum, we had to get the person with the problem involved in solving it. You can’t just ‘empathize enough’. You need someone who knows the problem.”
They drew on sources and data scientists from all over the world, including experts specific to the pieces of art being tagged, to get the most accurate and robust information they could. What they ended up doing was fascinating.
Through data collection and machine learning, the AI was taught to, for example, point out to the art viewer that the art existed “during x century while in drought season the y society considered z as art.” It could surface this information sometimes faster and more efficiently than a skilled viewer could with the naked eye.
Diversity Archives Culture More Accurately
The importance of this study, in my opinion, is this: what happens when a curator tags every artwork with the same word, “chair”? Without a cultural perspective, the curator can’t know what a specific culture or time period may have called a “chair”. By tagging it only as “chair”, the museum makes the work invisible to anyone searching with an alternate word. AI-assisted tagging can expose more of an audience to accurately portrayed artwork.
By including multiple perspectives and diversifying their data scientists, the museum was able to augment the curator’s work. This deep dive into archiving work was never meant as a replacement to those curators, but rather as an enhancement to the art.
With all the research in the world, one human is bound to miss things.
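The searchability idea behind the “chair” example can be sketched in a few lines. This is a minimal, hypothetical illustration (the artwork data and tag vocabulary are invented, not the Met’s actual system): each piece carries several culturally specific terms for the same object, and an inverted index makes the piece discoverable under any of them.

```python
from collections import defaultdict

# Hypothetical catalog: each artwork is tagged with multiple
# culturally specific terms for the same object.
artworks = {
    "met-001": {"title": "Scholar's Seat", "tags": ["chair", "kang chuang", "throne"]},
    "met-002": {"title": "Garden Bench", "tags": ["bench", "seat"]},
}

# Build an inverted index: each tag maps to the set of artwork ids
# that carry it, so any synonym leads back to the piece.
index = defaultdict(set)
for art_id, record in artworks.items():
    for tag in record["tags"]:
        index[tag.lower()].add(art_id)

def search(term):
    """Return artwork titles matching a tag, case-insensitively."""
    return sorted(artworks[a]["title"] for a in index.get(term.lower(), set()))
```

With a single-term scheme, a search for “throne” would miss the seat entirely; with multi-term tagging, `search("throne")` and `search("chair")` both find it.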
Noelle took this study and built her own Alexa Skill on the same fundamental principles. She wanted to help people with disabilities navigate art. The user asks, “Alexa, what’s the art of today?” Based on the user’s comments, the skill surfaces work that the AI predicts the listener will appreciate.
As someone who looks at art but doesn’t always understand it, I find this groundbreaking! I have no idea what art I may or may not like. However, if someone or something were to give me suggestions…? That could potentially make an art appreciator out of me.
Having this accessibility could bring new patrons into the museums that wouldn’t have come otherwise.
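A rough sketch of the recommendation idea behind such a skill, under stated assumptions: the skill keeps the user’s past comments, scores each artwork by how many of its descriptive tags overlap with words from those comments, and suggests the best match. The artwork data, tags, and function names here are all invented for illustration; they are not Noelle’s actual skill.

```python
# Hypothetical mini-catalog of artworks with descriptive tags.
ARTWORKS = [
    {"title": "Water Lilies", "tags": {"impressionist", "garden", "calm"}},
    {"title": "Starry Night", "tags": {"night", "swirling", "expressive"}},
]

def recommend(comments):
    """Suggest the artwork whose tags best overlap the user's comment words."""
    # Normalize the user's comments into a bag of lowercase words.
    words = {w.lower().strip(".,!?") for c in comments for w in c.split()}
    # Pick the artwork with the largest tag/word overlap.
    return max(ARTWORKS, key=lambda art: len(art["tags"] & words))["title"]
```

A real skill would wrap logic like this in an Alexa intent handler and learn richer preferences over time, but the core matching step is this simple.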
Sound Is Nascent
Voice first technology is very new. We can build ethics and diversity in from the ground up, where other industries have failed in the past. By investing in a more diverse workforce from the start, long before any AI is created, we can avoid the costly mistakes our predecessors have made and are still experiencing.
However, building a diverse team takes time, patience, and, of course, money. It is more expensive to be diverse up front. But if you pay that money now, you won’t waste money on mistakes later on down the road.
Together as an industry, we need to hold ourselves accountable and create more diversity of thought.
With more inclusion, you have the ability to create a stronger emotional connection and a brand that speaks the right messaging to the right people. The groundwork has begun, but diversity is a long game. What are we going to do strategically to hold ourselves accountable to move that needle forward?
We can’t just “train”, we need to “fix”.
Related Articles on Voice First
For more on Voice First, check out:
Let’s Make Sound On Purpose
Excited to explore your brand soundscape? Dreamr Productions would love to help. Contact us today for more information.