Is Gemini’s “Nano Banana” safe? The viral tool turning selfies into AI saree edits

“Nano Banana,” Google’s new AI-powered photo-editing tool built on the Gemini 2.5 Flash Image model, has gone viral on social media. It turns ordinary selfies into 3D figurine-style portraits with cartoonish proportions, oversized expressive eyes, and glossy, plastic-like skin.
Although users initially embraced the quirky figurine look, the trend swiftly morphed into the now-viral vintage saree AI edit. The filter transforms portraits, primarily of women, into glitzy retro saree ensembles set against cinematic backgrounds evoking vintage Bollywood posters. As millions of people try the trend, Instagram is overflowing with images of chiffon sarees, flowing drapes, and golden-hour lighting.
However, concerns regarding data security, consent, and privacy are becoming more prevalent, as is the case with most AI-driven trends.
Nano Banana is the nickname for Gemini 2.5 Flash Image, Google’s newest AI-powered editing model. By mid-September, users had created or altered more than 500 million images in the Gemini app, with hundreds of millions more produced on other platforms, according to Medium.
The tool is easy to use: upload a picture, add a prompt, and watch the image transform.
The eerie side of cute
Instagram user Jhalakbhawani recently shared a disturbing experience with the saree trend. After she uploaded her photo to Gemini, the generated image showed a mole on her left hand, a real bodily detail that was not visible in the original photo she had uploaded.
“I have a mole in this area of my body. How did Gemini find out? It’s eerie and frightening,” she wrote. Her post sparked discussion in the comments: some users raised safety concerns, while others dismissed it as a coincidence or labeled it attention-seeking.
Is Nano Banana safe?
According to Google, all photos created or altered with Gemini carry metadata tags and SynthID, an imperceptible digital watermark, identifying them as AI-generated. These identifiers are intended to give platforms and creators a way to confirm the source of content, according to Google’s AI Studio.
Additionally, Google, OpenAI, and xAI (Elon Musk’s AI company) have insisted that uploaded photos are not saved indefinitely. Privacy advocates, however, continue to advise users to exercise caution.
Is watermarking sufficient to prevent misuse of AI?
However, experts and netizens have been cautioning about this issue since AI-generated images first emerged. Most regular users cannot verify authenticity because the detection tools required to read SynthID are not yet available to the general public. Many have also noted how simple it is to remove, ignore, or fake watermarks.
According to a Wired article, Hany Farid, a professor at UC Berkeley’s School of Information, stated that “nobody thinks watermarking alone will be sufficient.” He and others contend that in order to truly combat deepfakes, watermarking needs to be paired with other security measures. “We don’t have any reliable watermarking at this point,” stated Soheil Feizi, a professor of computer science at the University of Maryland, in the same article.
How can AI image tools be used safely?
Since AI image generation has become widely used, experts recommend the following safety measures:
- Steer clear of uploading private or sensitive photos that reveal identifiable personal information.
- Before sharing, remove location tags and other metadata.
- Verify the app’s permissions and remove any unused gallery or camera access.
- Post low-resolution copies rather than original, high-quality photos to limit exposure.
- Read privacy policies carefully to learn how your data may be stored or reused.
- Additionally, specialized tools like Nightshade and Glaze can introduce subtle “noise” to images, making it more difficult to scrape them for AI training.
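The metadata-removal step above can be done in a few lines of code. Below is a minimal sketch using the third-party Pillow library (the file paths are illustrative, not part of any official tool): re-saving only the pixel data into a fresh image leaves EXIF tags, including GPS location, behind.

```python
# Minimal sketch: strip EXIF metadata (including GPS location tags)
# from a photo before sharing it online. Uses the third-party Pillow
# library; the file paths are illustrative.
from PIL import Image


def strip_metadata(src_path: str, dst_path: str) -> None:
    """Re-save only the pixel data, dropping EXIF/GPS metadata."""
    img = Image.open(src_path)
    # A fresh image populated pixel-by-pixel carries no metadata.
    clean = Image.new(img.mode, img.size)
    clean.putdata(list(img.getdata()))
    clean.save(dst_path)
```

Note that re-saving a JPEG this way also recompresses it; for in-place, lossless tag removal, dedicated utilities such as exiftool are an alternative.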
Bottom line
Nano Banana has made AI portrait editing fun, but it is crucial to exercise caution. Invisible watermarks like SynthID may provide a starting point for accountability, but they are not infallible.