Meta says it has removed more than 5,000 ads and nearly 100 accounts linked to companies that use artificial intelligence to generate naked images of real people, usually without their consent.
Australia's online watchdog has raised serious concerns about such apps, particularly their capacity to create child exploitation material.
Meta has also sent cease-and-desist notices to 46 companies that tried to advertise their apps on Facebook and Instagram.
"Like other types of online harm, this is an adversarial space where people continually evolve their tactics to avoid detection," Meta's regional policy director Mia Garlick said in a statement.
But advocates say the tech company should have acted sooner, and is only moving now because of the threat of regulation.
"It is not by chance that Meta actually took these advertisements down ... it's a reaction to regulation, not a reaction to moral assessment and judgment by the platform," International Centre for Missing and Exploited Children Australia chief executive Colm Gannon told AAP.
Mr Gannon said deepfake nude images have already been used to extort children in Australia, particularly girls.
"We have young people using the apps to sexually extort, blackmail or even just embarrass or bully people of similar ages within the school environment," he said.
"It's not a case that we're talking about a theoretical harm. We're talking about an actual harm."
Mark Zuckerberg's tech company is also suing Joy Timeline HK Limited, a Hong Kong-based company that makes a number of apps allowing users to create non-consensual nude deepfakes of real people.
The federal government wants to ban "nudifying" apps but it's still unclear how the restrictions would work in practice.
"These new, evolving technologies require a new, proactive approach to harm prevention," Communications Minister Anika Wells said.