The future of AI (artificial intelligence) is promising: from new customer service models to digital entertainers, the use cases are proving to be endless. But with such powerful technology comes the question of trust and social responsibility. In this panel, we learn that at the heart of machine learning are human-centered technology, new avenues of experiential learning, and the ability to interact with the world in ways we weren't able to before.
Discussion includes questions like:
- What does the future hold for the experience economy?
- How can AI technology be built to be human-centered?
- What are organizations doing to protect privacy, represent diversity, and prevent bias?
- How can this technology lower barriers to education and resources across communities?
- How can AI be implemented in new ways in times like COVID?
Noelle Tassey 0:01
Terrific. Thank you all for joining us. For those of you who don't know me, I'm Noelle Tassey, CEO of Alley. Alley is a community-driven innovation company; we work with our corporate partners and our startup ecosystem to help create good change. This series of programming is part of that commitment, one we used to bring to life in person and now, like so many things, do virtually. So thanks for joining us. Tomorrow at 1 pm we're closing out Mental Health Awareness Month with a fireside chat with April Koh. She's the co-founder and CEO of Spring Health, and they just raised a huge round of funding. I'm talking to her about raising money, creating a machine learning-driven mental health product, and her entire founder journey. So we're really, really excited. But I'm going to hand it over now to our incredible panel so they can all tell you a little bit more about themselves. So Taniya, do you want to get us kicked off?
Taniya Mishra 1:05
Sure. Hi, everyone. My name is Taniya Mishra, and I am so delighted to spend my afternoon with all of you, both my amazing co-panelists and everyone who has taken the time to listen to us. I'm the director of AI research at Affectiva. What that means is that I get to have the fun job of leading a team of really world-class researchers in developing AI solutions, or machine learning models, that can estimate the display of someone's emotional state and cognitive state, so that interactive agents of the kind that Elnaz and Shaun are going to talk about can better respond not just to the words we say, but also to how we feel and what our cognitive state is. The most rewarding part of my work is building solutions that can enhance people's lives and help them to connect better with and through technology in a frustration-free, easy, natural, expressive, and emotional way. Outside of these professional activities, I'm a New Yorker. I live in New York City with my husband, three children, and one small dog. So if you hear some people interrupting me, it'll probably be them. Hopefully they won't, but I'm so excited to be here with all of you.
Noelle Tassey 2:49
Awesome. Thank you, Taniya. And I also have a small dog here with me in a small space. She might be joining us too, who's to say? Elnaz, over to you.
Elnaz Sarraf 3:00
Thanks, Noelle and Taniya, for the great introduction. Hi, everyone. My name is Elnaz Sarraf. I'm the founder and CEO of ROYBI. You can see them in my background: we make little AI-powered educational robots for kids aged three to seven, focused on language learning and basic STEM. I am really excited to be here with all of you, and an amazing part of the story is that we were one of the fellows at Alley in the Palo Alto space last year. We actually started the company from Alley, and we had an amazing time with the community and the people. Then, a few months ago, we moved out of the space because it was time for us to grow the company. When we joined the fellowship we were only two people, and now we are 22 people globally, in Mountain View, California and Shenzhen, China, and we are expanding the team all over the world, which is very exciting. We have an incredible team, and I relate to what Taniya mentioned previously: everybody is super passionate about changing education for children. That's why we started the company, to focus on making education personalized for every child, depending on their pace and interests, utilizing artificial intelligence. We've gotten a lot of traction; we were even featured on the cover of Time magazine as one of the best inventions, and I would say that has been amazing teamwork. We are really excited to make an impact on children's education. And again, thank you for having me here.
Noelle Tassey 4:48
Awesome. Thank you for being here. And you know, I had actually forgotten you guys were such a small company when you joined the fellowship. That is so cool. This is like— (inaudible)
Elnaz Sarraf 4:58
Thank you for the opportunity. You know, we really started with you guys, and it has been an incredible journey so far.
Noelle Tassey 5:07
Yeah, thank you. And we're so excited to hear just about what you guys have been up to since you left. Shaun, over to you.
Shaun Paga 5:16
Yeah. Hi everyone, Shaun Paga here, Vice President of Business Development, based out of the San Francisco Bay Area, working for Soul Machines. We're co-headquartered in Auckland, New Zealand. So, your typical Silicon Valley startup in San Francisco. We spun out of the University of Auckland, where we've spent the last eight, almost nine, years building a digital brain and fully autonomous digital characters and beings. So, very exciting technology. We create hyper-realistic digital humans, and a lot of our focus, and the reason I'm excited about my co-panelists here, is that emotional intelligence is really critical to the experience within AI. How do you add emotional engagement to the experience? A lot of our work is on autonomous animation and the right way to respond once you recognize that emotion; a lot of research has gone into that. The other part of it is experiential learning, which is really important: a kind of playfulness, and how you interact. Our very first project, Baby X, was actually a digital baby, and it learned through interaction. So I'm excited as well about ROYBI in that space, because it's really interesting how we engage as human beings; a lot of how we learn is through experience and human interaction. Just like in this Zoom conversation, we're picking up on the nuances of communication. This combination of playfulness and engagement as well as emotional intelligence is really critical to the way that our customers are reinventing brand experiences. And as we enter this new era, one of the big challenges is that companies don't have the ability to engage in a retail location, so how do they create a brand-new creative experience in this new reality?
And that's where some of our exciting commercial projects are: brands engaging directly with consumers in an emotionally engaging way. That's where we're excited, and we're so excited to be here this afternoon. I also have three teenage children on Zoom, so if my service degrades, that's the reason. And I have a large dog that hopefully does not run through the background. So nice to meet everybody, and I'm looking forward to the discussion.
Noelle Tassey 7:36
Awesome. Thank you so much. And Shaun, thank you also for a great lead-in to our first topic. This panel, of course, is called AI and Human Connection, and everybody here works on a technology that is in some way geared towards the interface between AI and humans. From a product development perspective, can you tell us all a little more about how your product tackles that, and what the hardest parts of building an AI to interact directly with humans have been? I don't know who wants to kick us off; it's open to all of you.
Elnaz Sarraf 8:14
I can talk a little bit more—
Because I think, among all the panelists, everybody's doing amazing work, but when it comes to utilizing AI in the children's space, interaction has been one of the most difficult areas, and I'd say that's why you don't see a lot of innovation specifically in early childhood education. That has been one of the most challenging areas we deal with every day. We started our product, our robot, to help children speak more, to have more confidence, and also to understand emotions and analyze them while they're learning, to give children social-emotional support as well. But as you know, children talk very differently than adults: the way they talk, their vocabularies, many variations. So that has been a very big challenge. And a lot of the time, voice recognition AI like Alexa or Siri focuses on older ages, 13 and above. Another challenge is that AI is a very sensitive subject, and combining it with children's data and privacy makes it even more complicated, because we can't do many of the things other companies do, like analyzing speech patterns and voice; we really can't record, and we have to have many layers of security. Collecting data and making the AI and machine learning parts of it better every day has been a challenge. We began to resolve that by acquiring a company very recently called KidSense.ai. The good thing is they started collecting data from children about five years ago, over 150,000 kids: the way they talk, accents, variations. Hopefully that is going to help us, but I'd love to hear the challenges from the other panelists as well. So far, this has been the biggest challenge for us: making our AI more accurate for children, from when they are little through as they get older.
Taniya Mishra 10:47
Yeah, Elnaz, as you were talking, I was taken back to when I was working on my Ph.D. My focus was building voice synthesis technologies that were very expressive and emotional, specifically developing storyteller systems for children on the autism spectrum. The same challenges you just discussed were things I faced more than 10 years ago, and they continue to be very present: how do you build machine-learning models, which largely learn from examples, while maintaining the privacy of, and respect for, these very young end users? One of the solutions we adopted back then, and one that has been the cornerstone of how I have conducted my work since, is being completely transparent. We all make these decisions every single day: we are deciding whether we are willing to share our data on particular platforms, or through the use of particular technologies, in relation to the benefit we are getting. When I was doing a lot of that research as a Ph.D. student, and there was a much larger program around it at the Oregon Graduate Institute in Portland, Oregon, we were very transparent with the parents of the young children about what we wanted to do, how we would collect the data, how we would secure that data, and how it would ultimately help their children. What was amazing was that people understood the importance of what we were doing and how it could benefit not just their children, but children all over.
And because we were respectful and thought of them as partners, not just subjects, which is how many in the AI and machine-learning community tend to think of people, it was amazing not only what kind of data we were able to collect, but also what kind of partnership we were able to build with the families of these young children.
Elnaz Sarraf 13:22
Absolutely. I'm really glad to hear that, because it's all about transparency and educating people that AI is not as scary as so many people think if you utilize it in a good way. It is really impactful and can bring a lot of advancement to our lives, for sure.
Shaun Paga 13:46
Yeah, just thinking about both of those comments. For us, a lot of the focus has been on how you humanize the conversation, and I think that's one of the big challenges with conversational AI: how do you make that connection? Thinking about children, one of our early projects was first built for adults, and then, midstream, the client decided to pivot towards children. In early user testing, the results were average, not what we typically saw, because the dialogue and communication style were written for an adult. So we revamped the engagement and the personality so it was targeted towards children: much more playful, much more engaging dialogue that was more in tune with them. I think that's one of the challenges: how do you create this human connection with AI? For us, a lot of that is face-to-face communication. First, the ability to read and detect the emotional state of the user, then to respond with the right emotional response, but also the right personality and look and feel. Our clients put a lot of thought into what the right look and feel of the digital human is, depending on the role and the personality. You get to choose now what type of personality and look and feel you create, and a lot of thought has to go into that. It's a really interesting space. So matching up the personality to the role and the audience has been one of those early challenges our clients are exploring, which relates both to the audience, potentially children, and to how critical emotional engagement is. I think that's where a lot of conversational AI is falling flat today: it's just not able to pick up on the emotional pieces, which has been a bit of a gap.
Noelle Tassey 15:44
Definitely. And something we talked about the other day on our prep call was the concept of the uncanny valley, right? It's discussed a lot; hopefully everyone's familiar with it, but if you're not, we'll get into it as our panelists speak. We were talking about how those challenges start to crop up as you try to create more and more interaction and connection. I'd just love to hear from all of you; I think you all had really interesting stories about where the unexpected challenges were. I know, Shaun, yours was some combination of pausing for breath and, I think, incomplete sentences to kind of bridge that valley.
Shaun Paga 16:26
Yeah, and again, it does come back to humanizing the experience. Human beings are imperfect, and that starts with the look and feel. There's a recent documentary, "The Age of AI", where they interviewed will.i.am about a project we did, and he complained, kind of jokingly, about a pimple he had on his face: "Hey, guys, how come you didn't remove that pimple?" And obviously we can, but it actually adds to the authenticity of the experience. Human beings are imperfect, and the look and feel of our digital humans is captured to reflect that; I think that's a big part of the engagement in this concept of the uncanny valley. The concept actually comes from robotics in Japan in the 70s, I think, and is often discussed in the film industry, so it has been around for a while. Certainly there are some good examples; the conductor in The Polar Express is one that's often given, where it's human-like, but something's a little bit off. And part of that, I think, really is the human engagement as well. If you can't detect the person's emotional state and respond to them in a human-like way, just like we're doing on this call, I think you miss the opportunity to have that connection, and it falls into that valley. So for us, it's a combination of authenticity and realism, as well as the ability to pick up on the emotional cues of the individual and respond appropriately. A very interesting space.
Taniya Mishra 17:53
Yeah, related to the uncanny valley, the reason the whole terminology came up is that we have these hyper-realistic avatars that look so human that, when humans interact with them, they get this eerie feeling. After our conversation, Noelle, I was actually doing a little research on it and came across an ACM paper where they did a survey and found that people repeatedly used the word "cold." People said the avatars had these cold, dead eyes. Reading it, I felt that one thing that causes this uncomfortable feeling is seeing something that looks so hyper-realistic but is missing warmth, missing almost the emotional element of it. I think Shaun alluded to that: it's when you start to feel that something is off. And that's why I love the ROYBI robots; they have this cute look that kids would really resonate with, and even adults, and in some ways the lack of a very human-like face probably makes them cuter and prevents them from falling into the uncanny valley. So one way to mitigate this feeling would be to give these bots, humanoid or otherwise, warmth through communication: communicating emotional states and understanding emotional states.
Shaun Paga 19:54
Yeah, just a—
Elnaz Sarraf 19:57
Oh, go ahead
Shaun Paga 19:59
Oh, sorry, sorry, just a quick comment. Absolutely, empathy is really important, right? Especially now, when isolation is a huge challenge and mental health is a huge issue. I think part of the way you cross that gap and create that connection is by understanding the emotional state of the user; our digital humans will respond empathetically and look concerned, and it's amazing what a big difference that can make. Just seeing and acknowledging somebody and engaging: I think there's a huge opportunity in that space. That's where this engagement becomes really critical in areas like mental health, which I know both of your companies play in. I think it's really important.
Elnaz Sarraf 20:44
Definitely. Because of the whole situation happening, the biggest feedback from parents is that they are shifting their focus a lot more to the emotional well-being of children. And one area where we keep hearing that parents, and even educators, are worried about AI is the missing component of emotional interaction. I think there are many ways to address that. As everybody mentioned, we have AI to detect emotions and even respond to them, and we can create fun interactions. For example, during this time, we tried to come up with a different design for our robot, to look a bit like a scientist, human-like but not exactly like a human, and also to suggest different topics to get children motivated. And if you think about it, after this time, when people go back to work and school, the whole setting is going to be different. Children are coming out of a time when they had to stay home all the time and lost the interaction with their friends. How can we utilize AI to help teachers and parents get past that? I think the opportunities are huge, not just in design, but in utilizing the technologies we have and implementing them into our products, whether robots or software, to help children with their well-being. The whole setting of the workforce and even schools is going to change. We will see that for sure.
Noelle Tassey 22:47
Definitely. And we've already kind of waded into our next topic, which is great; you guys are doing all the hard work for me, love it. In the current crisis, human connection is more and more being mediated by electronics in different ways; that's happening right now with all of us. There's been a lot of talk about the upside: working from wherever you want, having that added flexibility, and the fact that we can have 100 people with us right now at 2:30 in the afternoon on a Wednesday is amazing. At the same time, there are obviously some serious drawbacks. How do you see AI stepping in during this time and mitigating some of those negative externalities of our new reality? And the follow-on question: what trends does that accelerate?
Taniya Mishra 23:43
I'll take a stab at it. During this time, we are all experiencing a whole slew of emotions. I personally feel like I have gone through all the highs and lows, beginning with "it's only for two weeks," and then, when it wasn't just two weeks, missing people, missing just being able to see people in real life and follow the usual patterns of seeing them, whether at work or hanging out with friends, and negotiating all of that. Then feeling a lot of the communal grief; I said I was from New York City, and with the city losing so many people, there's the fear, the uncertainty. But then also seeing the many ways in which we are now using technology to connect with each other, and reflecting back on the last five years, when we've all been saying technology has been pushing us apart, and now it's really again through technology that we are finding connection. So, seeing the whole fullness of it, the opportunity in it; it's a gamut of emotions, right? I definitely see AI playing a hugely disruptive, positive role in the post-COVID future, in education, in communication, in healthcare and all of that. But I also feel that mental health, personal well-being, and self-care are another place where AI will play a really pivotal role. Speaking a little to Affectiva's technology, which builds solutions that give you an estimate of your emotional and cognitive displays: for example, I would love to see, at the end of the day, where my mind and emotions have been all day. Have I been largely positive or negative? Have I experienced stress, fear, anxiety?
Because for a lot of us, knowing is the first step to making positive changes, knowing the state of our being. So besides the other big trends, I definitely see AI playing a big role in self-care, in being able to quantify not just our physical health, but also our mental health.
Shaun Paga 26:34
Yeah, I completely agree that this is absolutely a big part of the opportunity with AI right now. For us, when this thing hit, the question was really: how can we utilize our technology to help in this crisis? One of the other areas that's really interesting is information. There's a lot of bad information out there, so I think there's an opportunity for AI to push out information, whether from the CDC or some other reliable source, in a way that's consumable by humans. The other area, to point to this challenge around mental health, is that an engagement with AI also provides an opportunity to have a non-judgmental conversation. That's a really interesting concept. On the commercial side, we have banks that use our technology for debt collection, and it's a good example: it's a very uncomfortable human conversation, because nobody likes to talk about the inability to pay their bills. So a digital human having an empathetic conversation around that has been really a great experience. The same thing applies in mental health: the ability to have a non-judgmental, machine-to-human, and with our technology face-to-face, conversation is a really interesting area of opportunity for AI, because you can provide this service and this empathetic experience without the stigma or the judgment. That's a pretty interesting area right now that's obviously hyper-relevant given the challenges of isolation. It's always been an interesting space for AI, especially with some of the challenges of aged care and isolation, but now those are just accelerated. Noelle, you talked about some of the trends.
And I think that's one of those trends. Isolation has obviously been accelerated by this, but it's a bit of a challenge in our society in general. I think it's also an opportunity: how do you create connection with individuals and then connect them with others? I think there's an opportunity there as well.
Noelle Tassey 28:42
So Shaun, a follow-up question for you on that. We actually had somebody write in with a very similar question, also specifically for you, about the line between where an AI should resemble an AI, whatever that looks like, versus being too humanoid, and at what point that poses an issue. Obviously, with Soul Machines, I've seen the demo and it's amazing; the face is so responsive, so emotive. At what point do you maybe also lose some of the benefits you just talked about, like the non-judgmental aspect? If I can't tell the difference between Skyping with my therapist and Skyping with an AI, does that suddenly erode the benefits of doing that?
Shaun Paga 29:33
Yeah, it's a really good, pointed question, and it's really important. We're very explicit that our digital humans are not real, because you're right, they are very hyper-realistic; they look and behave realistically. You mentioned the breathing aspect: our digital humans have a respiratory system, so they only speak when they're breathing out, just like real human beings. So it really does replicate a person, but we're very explicit upfront that this is a digital person you're engaging with. One of our early customers, Autodesk, actually gave Ava purple eyes to make it very explicit that she was not real. We're really trying to take the best parts of human face-to-face communication and provide those attributes along with the benefits of conversational AI and machine-to-human interaction, blending those two worlds. But I think it's really important to be explicit upfront. Obviously, there has been some backlash in the AI field against companies that weren't explicit, so it's important to be very clear that this is a digital person you're speaking with and engaging with. That's a big part of level-setting the experience so you can get the best of both worlds, but it is important to be explicit that this is not a real human being.
Noelle Tassey 30:44
And certainly there's a whole host of ethical questions we could tack onto that, but we'll probably do a panel on those, honestly, so definitely sign up for our emails if you're interested in that one. In terms of this obviously fast-growing, emerging field, what do you see as some of the biggest areas of failure for AI as a field? There are a lot of hopes pinned on it, a lot of expectations. Where has it really missed the mark, versus its unexpected successes?
Elnaz Sarraf 31:27
I think, at least for what we are doing, the biggest failure, or I wouldn't say failure, but the biggest gap, is the lack of resources. In order for AI to become better and better, it needs much more data and resources to process, in order to create algorithms that help the machine understand better and better. At least in our area, working with children, we do see this challenge: the difficulty of gathering accurate data. It relates to many reasons, the biggest being privacy and being able to really communicate that with parents, but it also makes it more difficult to expand on AI immediately. That's why it takes longer to get it more accurate; it requires a lot of time, a lot of work, research, implementing, and changing. But I also see a huge, huge opportunity, especially after this pandemic. There is going to be a lot of advanced technology implemented in schools and the workforce, because when it comes to human interaction, every individual has certain limits. Utilizing AI, we can actually make the experience more personalized, especially the human interaction; if the machine can understand the emotions and state of mind of children or teachers, it can communicate even more accurately and in a more timely way than in the past, I would say.
Taniya Mishra 33:38
So, I'll take a slightly different take on it. I think if you look back at the last two or three years, the biggest failures of AI have been failures of trust. There are the more conspicuous examples: chatbots that accidentally became racist, because they were learning from what random people sometimes spew on social media, or autonomous vehicles that failed to recognize humans on the road. Those are the big examples, but if you look at a lot of the other stories, there are plenty of other failures of trust, where AI has made decisions against certain groups of people, like women, people of color, immigrants, often already-marginalized communities, and caused actual harm. I think the reason for that is the lack of diversity and representation in the tech workforce, among the people building these technologies. Nobody sets out to build bad tech. Most of us building machine-learning models and AI solutions want to do good; we want to solve problems. But when there's a lack of representation, a lack of different perspectives, skill sets, concerns, and life experiences, there are blind spots, because my experiences are not your experiences, and my concerns may not be your concerns. When there is a rich diversity of voices around the tech tables where these big decisions are being made, there is better representation of the end users who will ultimately use these products. Because our models are just learning what we present them with, right? These are statistical systems that learn from examples.
So it is incumbent on us as, you know, builders of technology, developers of technology, proponents of technology, to include more people in making these decisions, so we are ultimately making products that generalize, that are robust, that can serve all of us in society rather than just some groups of people.
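Taniya's point that these are "statistical systems that learn from examples" can be made concrete with a small, hedged sketch. Everything here is synthetic and deliberately simplified (a toy nearest-centroid classifier, made-up two-group data where each group's signal lives on a different feature); it is not any panelist's actual pipeline. The point it illustrates: when one group is badly underrepresented in training data, the model's error rate on that group diverges from the majority group's.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_group(n, signal_axis):
    """Synthetic samples: the class label shifts the mean along one axis.

    Group A carries its label signal on axis 0, group B on axis 1, so a
    model trained mostly on A learns a feature that is uninformative for B.
    """
    y = rng.integers(0, 2, n)
    X = rng.normal(0.0, 0.5, (n, 2))
    X[:, signal_axis] += 2.0 * y - 1.0  # class 0 -> mean -1, class 1 -> mean +1
    return X, y

# Imbalanced training set: 500 samples from group A, only 10 from group B.
Xa, ya = make_group(500, signal_axis=0)
Xb, yb = make_group(10, signal_axis=1)
X = np.vstack([Xa, Xb])
y = np.concatenate([ya, yb])

# Deliberately simple classifier: one centroid per class over pooled data.
centroids = np.stack([X[y == c].mean(axis=0) for c in (0, 1)])

def predict(X):
    d = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
    return d.argmin(axis=1)

# Evaluate on fresh held-out data from each group.
Xa_t, ya_t = make_group(1000, signal_axis=0)
Xb_t, yb_t = make_group(1000, signal_axis=1)
acc_a = (predict(Xa_t) == ya_t).mean()
acc_b = (predict(Xb_t) == yb_t).mean()
print(f"group A accuracy: {acc_a:.2f}")  # well above chance
print(f"group B accuracy: {acc_b:.2f}")  # near chance: the model never learned B's signal
```

The model is not malicious; it simply generalized from what it was shown, which is exactly the blind-spot failure mode described above.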
Shaun Paga 36:22
Yeah, I completely agree with both of those points— on the racial and gender biases, you know, the many biases that come in with AI. And so I think having that diversification— certainly from our standpoint, with the research that we do, it's very important that you have a diverse group of scientists, as well as a diverse group engaging with our technology. So that's a really critical part. And then, you know, as I mentioned earlier, personality comes in when you're thinking about digital faces. So brands now have to really think about, well, how are they representing their company and their core values, and what's the look and feel of the digital humans they want to put out there. So it's a very interesting new space, and it's obviously evolving pretty rapidly. And then, a little bit around what Elnaz was talking about— the emotional part— I think that's probably one of the biggest gaps that I see in conversational AI: that lack of emotional engagement. And I think that's really what's holding back the technology. Once you've been able to cross that threshold— to understand and add emotional intelligence into AI— that's the opportunity where you get full adoption: really better understanding human beings, how they communicate, and then the right way to communicate back is super critical.
So I think those are two really important areas right now, especially when it comes to ethics. Noelle, you mentioned the big topic of ethics in this space, and it is evolving, and it's important to have more diverse opinions there— I think that's very critical. As Taniya pointed out, it matters that there's representation when you're building these projects, that you have a diverse set of folks building the implementations, because that's where you can fall into traps, to Taniya's point: when you don't have representation.
Noelle Tassey 38:26
Definitely. And one of the things that you just brought up— one of the aspects of the emotional intelligence piece, which we talked about the other day very briefly— is that there are just some areas of emotional connection, emotional response that are much harder for humans, let alone machines, to navigate. This is sort of the other side of the uncanny valley. It's not just, you know, is your AI speaking to me on the exhale, but can it truly read the full range of my reactions the way that an adult human who's been well-socialized, who has high EQ, can? And then what are the problem areas there? I'm sure you've all run into this at various points in developing your technology— what have been the biggest challenges? You know, we talked a little bit about mixed emotions and the more nuanced states, but—
Shaun Paga 39:22
Taniya is definitely the expert on this. I'd love to hear her talk about this.
Taniya Mishra 39:27
So first, I'll set the bar by saying that we as humans are actually not that great at reading each other's emotions either. There have been studies showing that even when you are looking at someone, hearing their voice, and you are a native speaker of the language they're speaking, there's only about a 70% match between the emotion I intend to express and what my observer actually perceives, right? And then if you take away one or the other modality— if I remove the video or I remove the audio— emotion perception drops even further. So humans are not a hundred percent at reading each other's emotions, but what we are good at is asking clarifying questions. We are good at saying, "did you mean this?" Sometimes we say it explicitly, and sometimes we just display it through our body language, our face. I might just do *this*, and that would be enough for you to stop and say, "you don't understand me?" Right? I'm just squinting my brow a little bit, and that's enough. Or I can just keep nodding, and then you know that yes, she gets me, I can move on to my next point, and so on. So humans are really good at asking clarifying questions. So when we are building technologies whose job it is to estimate the display— and I keep saying display because we are really at the tip of the iceberg when it comes to building intelligence that can understand emotions and cognitive states; there's a lot of interesting and deep work to be done there— when we're building those, sometimes, if the system hasn't seen enough examples from people of that culture, that race, that gender, then it will be less sure of its estimation, right?
And consequently, it's really incumbent— going back to Elnaz's point— that when we're building these models, we get as many examples as possible from as many groups of people, as many cultures, countries, and age ranges as possible. And collecting those is a tough task. We've talked a lot about these models that do amazing things, but really, when you're building them, you're spending far more time collecting all that data. So that's definitely a challenge, and Elnaz, you already spoke a lot about that. And of course, collecting it from children makes it even trickier. So I would say that's a big challenge. And consequently, it has to be married with conversational agents that can ask clarifying questions— that have, basically, a meta-estimate of when they are unsure of what somebody's emotional state is, and can ask a clarifying question or take a different path through the dialogue model, right? So what I'm really excited about for the next step is this: there are all of these challenges related to emotion understanding, emotion display, intent understanding, and responding to that intent, but I feel like all of these pieces that might work independently, or are being developed independently, will ultimately come together— and are already starting to come together in the kind of work that Shaun and Elnaz are doing— to build stronger and better conversational agents that can better serve young kids, adults, and people of different experiences.
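The "meta-estimate" idea Taniya describes— an agent that acts on its emotion estimate only when confident, and otherwise takes a clarifying-question path through the dialogue— could be sketched roughly as follows. All names, the threshold value, and the responses are hypothetical illustrations, not any panelist's actual system; in practice the confidence signal would come from the emotion model itself and the threshold would be tuned empirically.

```python
from dataclasses import dataclass

@dataclass
class EmotionEstimate:
    label: str         # e.g. "frustrated", "happy"
    confidence: float  # the model's own certainty, in [0, 1]

# Hypothetical cutoff; real systems would tune this per model and per
# user population, since confidence tends to be lower for groups that
# were under-represented in training data.
CLARIFY_THRESHOLD = 0.7

def next_utterance(estimate: EmotionEstimate) -> str:
    """Choose the agent's next move based on a meta-estimate of certainty."""
    if estimate.confidence < CLARIFY_THRESHOLD:
        # Low certainty: branch onto the clarifying-question dialogue path.
        return f"I might be misreading you — are you feeling {estimate.label}?"
    # High certainty: respond to the estimated state directly.
    return f"You seem {estimate.label}, so let's adjust accordingly."

print(next_utterance(EmotionEstimate("frustrated", 0.55)))
print(next_utterance(EmotionEstimate("happy", 0.92)))
```

The design choice here mirrors the human behavior described above: rather than always committing to its best guess, the agent treats low confidence as a signal to ask, just as a person squints or probes when unsure.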
Elnaz Sarraf 43:43
I agree, and I think when it comes to children it is a lot more challenging, because again, it is very difficult to understand the emotions even on an adult's face, and children tend to have all sorts of emotions at the same time. There are also technical challenges, because when we gather data, the quality of that information is very important— the image that we collect in order to tag and understand whether this picture means the child is excited or unhappy. These are very challenging; even the distance, you know, of the child from the product, from the camera, for example. It all creates a lot of challenges. But we also face another challenge: when you ask adults if they are feeling happy or sad, they understand the emotions, but we have this difficult situation with children, because the majority of the time they do not understand how they actually feel. And if you ask a question and just tell them, "so you feel happy?" they would say yes. I think this gap is gradually going to narrow as we gather more data and information, especially from little kids on up. Then I would say it's just a matter of time, gathering this information and implementing it into our algorithm.
Shaun Paga 45:23
Yeah, I guess, to add on to that a little bit, I think that's also the huge opportunity. And there's a question in there about, you know, machine learning and gathering data. I think that's a huge part of what you get with conversational AI: just a massively better understanding of your users and the types of questions they're asking. And then adding in the emotional part gives it a richness that wasn't available before in any other form. So I think that's a huge part of the opportunity. And just to Taniya's point, if you spend a lot of time in technology, you'll know that there are a lot of people that don't have emotional intelligence, right? It's a hard human skill— the field is kind of notorious for engineers that don't necessarily have it. And it's difficult, right? And people that have really high EQ, you notice, right? People that can pick up on things and communicate. And I think that's a big part of the opportunity in front of us: picking up on things. And so much of it is nonverbal, right? 80%-90% of communication is nonverbal. So if you think about also adding in interactive components— whether it's a robot, or the ability to have this face-to-face communication, pull up different assets, and engage with media in different ways— it's a really exciting opportunity. So I think that's part of the opportunity we have in front of us: really adding in all these different components. I mean, emotion is very nuanced, and it's been a big eye-opener for me, having been with the company, you know, three and a half years, talking to some of our researchers and PhDs like Taniya, and professors in psychology, about all the different nuances of human emotion. It's very complicated, right? And I think that's why you run into individuals that don't necessarily pick up on social cues.
And I think that's part of the opportunity as well: how do we learn through these experiences? The more interactions we have with conversational AI, the more picking up on those cues is going to help us communicate better. And I think that's a big part of the opportunity that we have in front of us.
Noelle Tassey 47:26
Definitely. We've just had a bunch of really excellent questions come in from the audience— I guess we back-loaded them today. And I know we only have seven minutes left, so we're gonna try to get through as many of these as we can. For those of you who have asked a question that's still open or that I flagged to answer live, I'm going to stay on a little bit after the end of this with any of our panelists who have extra time, and we'll try to get through as many of them as we can. And we're definitely going to be doing more panels on this topic, so stay tuned. But a lot of questions are coming in about how to integrate— I think a lot of these are from developers or from product-focused founders who want to know: at what point do I have to start really investing in AI for my business? Obviously, this is something that's going to impact every industry in different ways, and we're very focused on human-centric, human-facing AI in this panel, so I think we can definitely look at it from that angle. Obviously, a lot of people have already integrated forms of AI into their tech stack without really realizing it. But yeah— when to invest in that, and how?
Shaun Paga 48:38
Now. Yeah, the sooner the better, right? I think it is a journey. And I think that's the nature of AI as well: you don't know where those learnings are going to come from. The sooner you can get a project out into the wild and get users experiencing it, the sooner you're going to learn from that experience. In every engagement that we have, there's always something we've learned that's been surprising— certainly when we're engaging with companies that have already gone down the AI path. There are also challenges, right? In conversational AI, you still have to build the different components around how you create engaging dialogue that relates to individuals, but you also have to build the different components within the NLP engine to really create an engaging experience. So there's also time to build that— again, the sooner you can invest in it, the better. For us, conversational content is a really important aspect of how we see customers being successful in conversational AI: you've got to build and create content to get that depth, to find out where the experience is resonating and where you should continue to invest or not invest, because everyone has a hypothesis going in, and there are a lot of pivots in this space. So I would say, certainly, the sooner the better, because we're relatively very early on in this journey.
Elnaz Sarraf 50:09
I agree. I think there's a huge opportunity when it comes to AI; it is undoubtedly the future. And the sooner you can start building your product and utilizing AI, the better. From my point of view— or at least how we started our company— we didn't invest a huge amount of time and money in the beginning. We tried to utilize the technologies already available on the market, and built our technology, our data, and our content on top of those. And just very recently, this year, we started gradually transitioning to our own capabilities, our own data and information, because it is very time-consuming, very expensive, and it is going to take you at least two years to really understand how to shape your product utilizing AI. So start today, or even yesterday: try to utilize the technologies that you can access today, get your product out, and start collecting data. That's my opinion.
Noelle Tassey 51:26
Taniya, anything you want to—
Taniya Mishra 51:28
I mean, I would just say that, you know, I agree with Shaun and Elnaz: start today, start with a small project. Test out how it works, experiment, try something new. You don't have to build it from the ground up— there are so many off-the-shelf AI tools that can be useful. But I would say keep your eye on your end-user, on your customer. The goal is to meet their needs; the goal is to empower and enable them. So it's going to look different for different companies. But, you know, start today, start experimenting.
Noelle Tassey 52:06
Love that, and love the focus on the human even as we're working to pull robots into the experience. That's advice widely applicable to building any kind of product. And we have a really interesting question here. Rosie asks, "could you speak to some of the methods that you used to collect training data for your AIs? And how much processing and analysis do you do before you feed it to your model?" Much more technical than we've gone so far in this conversation, but I think it's a great question, and it speaks to some of the pitfalls we were discussing earlier around the downsides of not having enough people in the room— maybe not processing data properly, loading in bias, things of that nature. So we'd love to hear from you guys.
Taniya Mishra 52:57
I mean, I would say: go first with the principles. Always be transparent, and only collect data by consent. And consent means that people should have the ability to withdraw it at any time, which means keeping your data appropriately tagged, in a way that lets you access the data if you need to remove it from your models, from your data lake. Keep the data secure, follow all the appropriate data-protection regulations such as GDPR, and be fully transparent with the people providing data as to how their data is going to be used. If there is an update in how that data is going to be used, let people know. So I would say: respect for the people you are collecting data from, and then again, respect for the people your solution is going to serve, should be the North Star that you follow.
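The principle of keeping data "appropriately tagged" so that consent can be withdrawn could look roughly like the toy sketch below. This is an illustrative data structure only (all class and field names are invented, and a real pipeline would also need to audit or retrain models that already consumed the withdrawn records): every record carries a contributor tag, so a withdrawal request can purge everything that person contributed.

```python
from collections import defaultdict

class ConsentTaggedStore:
    """Toy store where every record is tagged with its contributor,
    so withdrawing consent removes all of that person's data."""

    def __init__(self):
        # contributor id -> list of that contributor's records
        self._by_contributor = defaultdict(list)

    def add(self, contributor_id: str, record: dict) -> None:
        """Store a record under the contributor who provided it."""
        self._by_contributor[contributor_id].append(record)

    def withdraw_consent(self, contributor_id: str) -> int:
        """Delete every record from this contributor; return how many."""
        return len(self._by_contributor.pop(contributor_id, []))

    def all_records(self):
        """Everything still usable for training, across all contributors."""
        return [r for recs in self._by_contributor.values() for r in recs]

store = ConsentTaggedStore()
store.add("user-1", {"clip": "a.wav", "label": "happy"})
store.add("user-1", {"clip": "b.wav", "label": "sad"})
store.add("user-2", {"clip": "c.wav", "label": "neutral"})
print(store.withdraw_consent("user-1"))  # 2 records removed
print(len(store.all_records()))          # 1 record remains
```

The key design point is that contributor identity is the index, not an afterthought: if records were stored in one undifferentiated pool, honoring a withdrawal request would require an expensive (or impossible) search.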
Shaun Paga 54:05
Yeah, I completely agree with that. Just to add on: the first step is to have a process in place. And then being really explicit about how you are utilizing that data, and what the intention is, is really, really important. So those are just a couple of key points to think about when going down that journey.
Elnaz Sarraf 54:28
Yes, and I think transparency, communication, and, you know, always making sure that your customers know what you're collecting and what you're doing— even put that on your website, so everyone can actually see it. And there will be some people that will probably request that their information be removed. But at the same time, within your team, make sure you do continuous QA— making sure the quality of the data you collect and the process of collecting the data is in place. Because as you grow the team, there's going to be a lot of information you collect, and you're going to have a lot of team members working on this information. So you really need to make sure you have a process in place that is compliant— that you have a data collection policy, even within your internal team— to be able to really create this transparency with your customers as well.
Noelle Tassey 55:36
Awesome. Well, thank you all so much for your insights and your time. We're just one minute over, and we're gonna end on that question, so as to give everyone the chance to get on with the rest of their day. This has been such a wonderful panel. Thank you all so much for joining us— our participants, for sharing the afternoon with us, and our panelists, for your amazing insights. This has been a really fun one for me. I've learned a ton; hopefully everyone else has as well. This conversation has been recorded. It'll be available on our website tomorrow, Alley.com— feel free to share it with your community. We're going to be doing a lot more with AI over the coming months, so sign up for our emails on our site, and hopefully we'll see you around. Thank you again to our panelists. Take care.
Elnaz Sarraf 56:22
Shaun Paga 56:22
Thanks, everyone. Thank you.