
Google Research's Colaboratory (Colab) project has quietly banned deepfake projects on its platform, which lets anyone write and execute arbitrary Python code in the browser for free. The service is widely used for machine learning, data analysis, and education.
Deepfake models can be trained to graft realistic facial expressions onto video footage, making a fabricated clip appear genuine; the technique has already been used to spread fake news. For many critics, the main source of concern has been the lack of ethical constraints on the technology.
"We regularly monitor avenues for abuse in Colab that run counter to Google's AI principles while balancing supporting our mission to give our users access to valuable resources such as TPUs and GPUs. Deepfakes were added to our list of activities disallowed from Colab runtimes last month in response to our regular reviews of abusive patterns," a Google spokesperson told TechCrunch. "Deterring abuse is an ever-evolving game, and we cannot disclose specific methods as counterparties can take advantage of the knowledge to evade detection systems. In general, we have automated systems that detect and prohibit many types of abuse."
Colab is a cloud-hosted service built on Jupyter Notebook, the web-based interactive computing platform, and it lowers the barrier for people with little coding experience: there is nothing to download or install, it can be accessed from any internet-connected device, and its notebooks are easy to share.
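For illustration, a Colab notebook cell is just ordinary Python executed on Google's servers; the snippet below is a minimal, hypothetical example of the kind of code a cell might contain (nothing in it is Colab-specific, and it would run identically in any Python 3 environment):

```python
# A typical notebook cell: plain Python 3, no local installation needed.
# Colab executes it on a cloud VM and renders the output in the browser.
import statistics

samples = [2.5, 3.1, 2.9, 3.4, 2.8]  # illustrative data
mean = statistics.mean(samples)
stdev = statistics.stdev(samples)
print(f"mean={mean:.2f}, stdev={stdev:.2f}")
```

In a notebook, the printed line appears directly beneath the cell, which is what makes the format convenient for teaching and for sharing reproducible analyses.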
There are forums where people discuss and learn how to create deepfakes, but Google's restriction is expected to disrupt that workflow significantly. According to one report, many users relied on pre-trained models hosted in Colab to "jump-start their high-resolution projects."
Colab is also available as a paid service, with subscription tiers that let users connect to other Google services as well as third-party services that Google does not own.
Last year, a convincing deepfake video of Tom Cruise went viral on TikTok, drawing over 100 million views and prompting its creator, Chris Umé, to co-found Metaphysic, an "ethical" AI-generated content company that raised $7.5 million from Logan Paul, Winklevoss Capital, and other venture capital firms.
"Deepfakes have a large potential to counter Google's AI principles. We aspire to be able to detect and deter abusive deepfake patterns versus benign ones and will alter our policies as our methods progress," the spokesperson continued. "Users wishing to explore synthetic media projects benignly are encouraged to talk to a Google Cloud representative to vet their use case and explore the suitability of other managed compute offerings in Google Cloud."