Will AI power the video calls of the future? That’s what Nvidia hinted at this week with the unveiling of Maxine, a platform that offers developers a suite of GPU-accelerated AI conferencing software. Maxine provides end users with AI effects like gaze correction, super resolution, noise reduction, face relighting and more, while reducing the bandwidth consumption of video conferences. Quality-preserving compression is a welcome innovation at a time when video conferencing is contributing to record bandwidth usage. But Maxine’s other, more cosmetic, traits raise uncomfortable questions about the unintended – and potentially harmful – effects of AI.

A quick review: Maxine uses AI models known as generative adversarial networks (GANs) to alter faces in video feeds. High-performing GANs can, for example, create realistic portraits of people who don’t exist, or snapshots of fictional apartment buildings. In Maxine’s case, they can adjust the lighting in a video feed and reconstruct frames in real time.
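The bandwidth savings come from a general idea Nvidia has described for AI video compression: rather than streaming full frames, the sender transmits a small set of facial keypoints, and a generative model on the receiving side reconstructs the face from them. The sketch below is illustrative only – the resolutions, keypoint counts, and payload sizes are assumptions for back-of-the-envelope comparison, not Nvidia’s published figures, and the actual reconstruction step (the GAN) is omitted.

```python
# Illustrative sketch: why transmitting facial keypoints instead of pixels
# slashes bandwidth. All numbers are assumptions, not Nvidia's figures.

def full_frame_bytes(width=1280, height=720, bytes_per_pixel=3):
    """Raw size of one uncompressed 720p video frame (24-bit color)."""
    return width * height * bytes_per_pixel

def keypoint_payload_bytes(num_keypoints=68, coords=2, bytes_per_coord=4):
    """Payload if only facial keypoints (x, y as 32-bit floats) are sent."""
    return num_keypoints * coords * bytes_per_coord

frame = full_frame_bytes()            # 2,764,800 bytes per raw frame
keypoints = keypoint_payload_bytes()  # 544 bytes per frame
print(f"Raw frame: {frame} B; keypoints: {keypoints} B; "
      f"ratio ~{frame // keypoints}x")
```

In practice the comparison would be against a conventional codec like H.264 rather than raw frames, so the real-world savings are smaller than this ratio suggests, but the principle – model parameters are far cheaper to send than pixels – is the same.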

Bias in computer vision algorithms is pervasive: Zoom’s virtual backgrounds and Twitter’s automatic photo-cropping tool have both been shown to disadvantage people with darker skin. Nvidia has not detailed the datasets or AI model training techniques used to develop Maxine, so it is not beyond the realm of possibility that the platform manipulates Black faces less effectively than light-skinned faces, for example. We’ve reached out to Nvidia for comment.

Beyond the bias problem, there is the fact that face-enhancement algorithms are not always psychologically harmless. Studies by the Boston Medical Center and others show that filters and photo editing can affect people’s self-esteem and trigger disorders such as body dysmorphia. In response, Google announced earlier this month that it would disable by default the “beautification” filters on its smartphones that smooth out pimples, freckles, wrinkles and other blemishes. “If you don’t know that a camera or photo app has applied a filter, the photos can have a negative impact on mental well-being,” the company said in a statement. “These default filters can quietly set a beauty standard that some people compare themselves to.”

That’s not to mention how Maxine could be used to evade deepfake detection. Some of the platform’s features analyze the facial points of people on a call and then algorithmically reanimate the faces in the video on the other side. This could interfere with a system’s ability to determine whether a recording has been tampered with. Nvidia will likely put safeguards in place to prevent this – for now, Maxine is only available to developers in early access – but the potential for abuse is an issue the company has yet to address.

None of this suggests that Maxine is inherently malicious. Gaze correction, face relighting, upscaling, and compression all seem genuinely useful. But the problems Maxine raises suggest Nvidia has not grappled with the damage its technology could cause – a tech industry misstep so common it has become a cliché. The best-case scenario is that Nvidia takes steps (if it has not already done so) to minimize any negative impacts that may arise. However, the fact that the company didn’t set aside time during Maxine’s reveal to lay out those steps doesn’t instill confidence.

For AI coverage, send news tips to Khari Johnson, Kyle Wiggers, and Seth Colaner – and be sure to subscribe to the AI Weekly newsletter and bookmark our AI channel.

Thank you for reading,

Kyle Wiggers

AI Staff Writer