A day after controversy erupted over months-old revisions to Zoom’s terms of service that stoked fears of users’ video chats being harvested for AI training, the company raised its virtual hand to say that it would never do such a thing without permission.
The new text, added in March as part of a sweeping rewrite of the video-conferencing app’s terms, appeared to give Zoom the right to train AI systems on the data and content of calls.
A Sunday post on the web-developer blog Stack Diary called out two sections in particular. Section 10.2 allowed Zoom to use diagnostic data for purposes including “machine learning or artificial intelligence (including for the purposes of training and tuning of algorithms and models).”
Another, section 10.4, reserved similar rights over customer-generated content for a list of uses that included “machine learning, artificial intelligence, training, testing.”
As of Monday afternoon, that second section has a new paragraph in bold below it: “Notwithstanding the above, Zoom will not use audio, video or chat Customer Content to train our artificial intelligence models without your consent.”
In a blog post Monday, Zoom Chief Product Officer Smita Hashim wrote that Zoom had earlier rewritten its terms to be more transparent about its workings, not to lay new claims to user data.
“Section 10.2 covers that there is certain information about how our customers in the aggregate use our product — telemetry, diagnostic data, etc.,” she wrote. “We wanted to be transparent that we consider this to be our data so that we can use service-generated data to make the user experience better for everyone on our platform.”
Many consumer apps include telemetry features for quality-assurance purposes, but these haven’t always gone over well with customers. Microsoft, for example, added new forms of diagnostic monitoring to Windows 10, then responded to concerns by adding privacy controls.
“In Section 10.4, our intention was to make sure that if we provided value-added services (such as a meeting recording), we would have the ability to do so without questions of usage rights,” Hashim wrote. She emphasized a later sentence in bold: “For AI, we do not use audio, video, or chat content for training our models without customer consent.”
That consent, Hashim concluded, still won’t allow third parties to train an AI off your calls: “And even if you chose to share your data, it will not be used for training of any third-party models.”
While Zoom had not publicized the March rewrite of its terms, it has been public about plans to add AI-powered capabilities matching those of other companies running communications platforms. A February blog post discussed such features as “smart recording,” which would rely on natural language processing to summarize a conversation and call out its key points.
But it shouldn’t take an AI to conclude that lengthy, dense terms-of-service documents don’t effectively convey these ambitions to customers. Either people don’t read them at all, or they seize on ToS language written defensively by lawyers for other lawyers and sometimes misinterpret that legalese.
“There can be the lawyerly temptation to phrase them as broadly as possible to give you the most flexibility as you continue to develop your service,” emailed Catherine Gellis, a Bay Area lawyer who specializes in tech-policy issues. “But the problem with trying to proactively obtain as many permissions as you can is that users may start to think you will actually use them and react to that possibility.”
A bill introduced in Congress last January would have mandated one-page summaries of these documents and required that their full text be machine-readable for easier analysis by outsiders. That “Terms-of-service Labeling, Design, and Readability Act” (yup, the “TLDR Act”) went nowhere last year, but its sponsors reintroduced it in July.