GamesBeat Summit 2024: How AI can be used to protect humans in games
One of the recurring topics at GamesBeat Summit was the prevalence and potential of AI in gaming, notably in Will Wright’s talk on the future of AI in game development. Another talk on the subject was with Kim Kunes, Microsoft’s VP of Gaming Trust & Safety, who joined me for a fireside chat about AI usage in trust and safety. According to Kunes, AI will never replace humans in the protection of other humans, but it can be used to mitigate potential harm to human moderators.
Kunes said there’s a lot of nuance in player safety because there’s a lot of nuance in human interaction. Xbox’s current safety features include safety standards and both proactive and reactive moderation features. Xbox’s most recent transparency report shows that it has added AI-driven features such as Image Pattern Matching and Auto Labelling, both of which catch toxic content by identifying patterns learned from previously labeled toxic content.
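To make the pattern-matching idea concrete, here is a minimal sketch of checking new content against hashes of previously labeled toxic items. It is purely illustrative: Microsoft has not published how Image Pattern Matching works, and the names, threshold, and data below are hypothetical, assuming a precomputed 64-bit perceptual hash per image.

```python
from dataclasses import dataclass

# Hypothetical sketch: match new content against hashes of previously
# labeled toxic items. Real systems are far more sophisticated; this
# only illustrates the "match against known-bad patterns" idea.

HAMMING_THRESHOLD = 5  # max differing bits to still count as a match (assumed)

@dataclass
class LabeledItem:
    content_id: str
    perceptual_hash: int  # 64-bit hash, assumed precomputed elsewhere

# Stand-in database of previously labeled toxic content.
KNOWN_TOXIC: list[LabeledItem] = [
    LabeledItem("img-001", 0xF0F0F0F0F0F0F0F0),
    LabeledItem("img-002", 0x123456789ABCDEF0),
]

def hamming_distance(a: int, b: int) -> int:
    """Number of differing bits between two 64-bit hashes."""
    return (a ^ b).bit_count()

def matches_known_toxic(candidate_hash: int) -> LabeledItem | None:
    """Return the labeled item whose hash is near the candidate, if any."""
    for item in KNOWN_TOXIC:
        if hamming_distance(candidate_hash, item.perceptual_hash) <= HAMMING_THRESHOLD:
            return item
    return None

if __name__ == "__main__":
    # A hash one bit away from img-001 still matches, so near-duplicates
    # of known toxic content are caught without an exact byte match.
    hit = matches_known_toxic(0xF0F0F0F0F0F0F0F1)
    print(hit.content_id if hit else "no match")
```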
One of the questions was about the use of AI alongside humans, and Kunes said that it can protect and assist human moderators who might otherwise be too engrossed in busywork to tackle larger problems: “It’s allowing our human moderators to focus on what they care about most: To improve their environments at scale over time. Before, they didn’t have as much time to focus on those more interesting aspects where they could really use their skillset. They were too busy looking at the same types of toxic or non-toxic content over and over again. That also has a health impact on them. So there’s a great symbiotic relationship between AI and humans. We can let the AI take on some of those tasks that are either too mundane or take some of that toxic content away from repeated exposure to humans.”
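As a rough illustration of that division of labor, the sketch below routes content by a model’s toxicity score: near-certain cases are handled automatically so moderators are not repeatedly exposed to them, while ambiguous cases go to human review. The classifier, thresholds, and names are assumptions for illustration, not Xbox’s actual pipeline.

```python
# Hypothetical triage sketch: an AI classifier score decides whether
# content is auto-actioned or escalated to a human moderator.
# Thresholds and the score source are assumed, not from the article.

AUTO_REMOVE_AT = 0.95  # near-certain toxic: act without human exposure
AUTO_ALLOW_AT = 0.05   # near-certain benign: skip the human queue

def route(content_id: str, toxicity_score: float) -> str:
    """Return where a piece of content should go based on the model score."""
    if toxicity_score >= AUTO_REMOVE_AT:
        return f"{content_id}: auto-removed (score {toxicity_score:.2f})"
    if toxicity_score <= AUTO_ALLOW_AT:
        return f"{content_id}: auto-allowed (score {toxicity_score:.2f})"
    # The ambiguous middle band is where human judgment matters most.
    return f"{content_id}: escalated to human review (score {toxicity_score:.2f})"

if __name__ == "__main__":
    for cid, score in [("post-1", 0.99), ("post-2", 0.02), ("post-3", 0.60)]:
        print(route(cid, score))
```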
Kunes also categorically stated that AI will never replace humans. “In the safety space, we will never get to a point where we will eliminate humans from the equation. Safety isn’t something where we can set it and forget it and come back a year later and see what’s happened. That is absolutely not the way it works. So we have to have those humans at the core who are experts at moderation and safety.”