Cybersecurity experts caution that relying on artificial intelligence to create passwords can seriously weaken digital defenses: AI-generated passwords tend to be predictable and lack sufficient randomness, making them easy for attackers to guess or crack. Research by the cybersecurity firm Irregular found that widely used large language models such as ChatGPT, Claude, and Google Gemini often produce password suggestions with repeating patterns and limited character diversity, resulting in low-entropy, predictable outputs that brute-force attacks could compromise quickly. Industry professionals emphasize that these models aren't designed to generate truly random strings, and that replacing cryptographically secure password generators with AI suggestions exposes both individuals and organizations to greater risk, underscoring the need to stick with established password best practices and robust authentication tools.
Sources
https://www.itpro.com/security/using-ai-to-generate-passwords-is-a-terrible-idea-experts-warn
https://www.malwarebytes.com/blog/news/2026/02/ai-generated-passwords-are-a-security-risk
https://www.techradar.com/pro/security/dont-trust-ai-to-come-up-with-a-new-strong-password-for-you-llms-are-pretty-poor-at-creating-new-logins-experts-warn
https://www.aa.com.tr/en/science-technology/experts-warn-ai-generated-passwords-may-expose-users-to-security-risks/3834887
Key Takeaways
• AI-generated passwords often lack true randomness and sufficient entropy, making them easier to guess or break with automated tools.
• Major language models like ChatGPT, Claude, and Gemini tend to produce repeating patterns and predictable character sequences rather than secure, unpredictable strings.
• Cybersecurity pros urge users and organizations to rely on proven password management tools and cryptographically secure methods rather than AI for password creation.
In-Depth
The growing enthusiasm around leveraging artificial intelligence for everyday tasks has now collided with a stark warning from cybersecurity professionals: using AI to generate passwords is a fundamentally flawed practice that could weaken individual and organizational security. A recent investigation by the cybersecurity firm Irregular revealed that popular generative AI models, including ChatGPT, Claude, and Google’s Gemini, produce password suggestions that look complex at first glance but are actually surprisingly predictable when scrutinized. These systems, built on statistical modeling and pattern recognition, are not designed to generate cryptographically secure random strings. As a result, their outputs exhibit repeating patterns and limited diversity in character selection, which significantly reduces the measure of randomness—known as entropy—that’s essential to a strong password.
In cybersecurity, entropy isn’t just jargon; it’s a practical measurement of how resistant a password is to brute-force attacks. High-entropy passwords resist guessing because they present a vast range of possible combinations, making them extraordinarily hard to crack even with powerful computing resources. But AI models, trained to produce plausible and human-like text, inherently favor more common patterns and sequences, resulting in low entropy outputs that attackers could exploit. In some cases, researchers found that these models repeated the same suggestions in multiple trials, undermining their utility in protecting accounts and systems.
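The effect of entropy described above can be made concrete with a little arithmetic. For a password drawn uniformly at random, each character contributes log2(pool size) bits, so shrinking the effective character pool (as pattern-heavy AI output does) cuts entropy directly. A minimal sketch, where the 20-character "biased pool" is an illustrative figure, not a number from the research:

```python
import math

def password_entropy_bits(length: int, pool_size: int) -> float:
    """Theoretical entropy of a uniformly random password:
    each character adds log2(pool_size) bits."""
    return length * math.log2(pool_size)

# 12 characters drawn uniformly from the 94 printable ASCII characters:
full = password_entropy_bits(12, 94)    # roughly 79 bits

# The same length drawn from a hypothetical biased pool of only 20
# characters that a model tends to favor (illustrative assumption):
biased = password_entropy_bits(12, 20)  # roughly 52 bits
```

Every bit lost halves the attacker's search space, so the gap between these two figures is a factor of over 100 million in guessing work; and this model still assumes uniform sampling, which repeated, patterned AI suggestions do not even achieve.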
Experts including professors and security researchers have emphasized that this problem isn’t easily solved by “tweaking prompts” or asking AI for more complexity. The issue is fundamental: generative models optimize for the most statistically likely outputs given the prompt and training data, not for cryptographic unpredictability. As critics note, this makes AI-generated passwords not just weak, but dangerously misleading: they can appear strong in a password strength checker because they include a mix of characters, yet still be vulnerable due to predictable structures.
Against this backdrop, cybersecurity authorities urge users and firms to stick with established best practices. Cryptographically secure random number generators—often built into reputable password managers—remain the gold standard for creating unguessable passwords. These systems are specifically designed to avoid patterns and maximize randomness, providing substantially higher entropy levels that protect against both human and machine-driven attacks. Experts also highlight the importance of multi-factor authentication and identity tools like passkeys, which can offer stronger protection than traditional passwords alone.
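For readers who want the gold standard without a password manager, Python's standard library exposes the operating system's cryptographically secure random source through the `secrets` module. A minimal sketch of a generator built on it (the function name and default length are this example's choices, not a published standard):

```python
import secrets
import string

def generate_password(length: int = 16) -> str:
    """Build a password from the OS CSPRNG via the secrets module,
    drawing uniformly from all letters, digits, and punctuation."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

pw = generate_password()
```

Unlike an LLM, `secrets.choice` makes each draw independent and uniform over the full alphabet, which is precisely the property that maximizes entropy and defeats pattern-based guessing.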
In short, while AI continues to transform many aspects of technology and productivity, it isn’t a suitable tool for generating secure passwords. Relying on it for something as critical as account security could open the door to cyber intrusions, data theft, and other malicious activity. Users and organizations would do well to heed these warnings and instead adopt proven, secure methods for protecting digital credentials.

