Infostealers Spread Via AI-Generated YouTube Videos
Cybersecurity researchers have observed a 200–300% month-on-month increase in YouTube videos containing links to information stealer (infostealer) malware in their descriptions. A growing number of these were generated using artificial intelligence (AI) programs such as Synthesia and D-ID.
The findings were described in a new report by Pavan Karthick, a threat intelligence research intern at CloudSEK.
“It is well known that videos featuring humans, especially those with certain facial features, appear more familiar and trustworthy,” reads the document.
“Hence, there has been a recent trend of videos featuring AI-generated personas across languages and platforms (Twitter, Youtube, Instagram), providing recruitment details, educational training, promotional material, etc. And threat actors have also now adopted this tactic.”
Infostealers observed being delivered via these videos included Vidar, RedLine and Raccoon. Many of these videos counted hundreds or thousands of views.
“[For instance], a Hogwarts [Legacy] crack download video generated using d-id.com was uploaded to a YouTube channel with 184,000 subscribers. And within a few minutes of being uploaded, the video had nine likes and 120+ views,” Karthick wrote.
According to the security researcher, this trend shows the threat of infostealers is rapidly evolving and becoming more sophisticated.
“String-based rules will prove ineffective against malware that dynamically generates strings and/or uses encrypted strings. Encryption and encoding methods differ from sample to sample (e.g., new versions of Vidar, Raccoon, etc.),” Karthick explained.
“In addition, they will only be able to detect the malware family when the sample is unpacked, which is almost never the case in a malware campaign.”
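Karthick's point can be sketched in a few lines. The snippet below is a hypothetical illustration, not a real detection rule or a real malware sample: the indicator string, the single-byte XOR key and all function names are invented. It shows why a rule that matches a literal byte sequence misses the same indicator once a sample encodes its strings with a per-sample key.

```python
# Hypothetical sketch: why a fixed-string signature fails once a sample
# encrypts its strings. The indicator, key and names are illustrative only.

SIGNATURE = b"stealer.example/upload"  # plaintext indicator a naive rule matches


def xor_encode(data: bytes, key: int) -> bytes:
    """Single-byte XOR, a minimal stand-in for per-sample string encryption."""
    return bytes(b ^ key for b in data)


def string_rule_matches(sample: bytes) -> bool:
    """A naive string-based rule: match only the literal byte sequence."""
    return SIGNATURE in sample


plain_sample = b"...config..." + SIGNATURE + b"...payload..."
encoded_sample = b"...config..." + xor_encode(SIGNATURE, 0x5A) + b"...payload..."

print(string_rule_matches(plain_sample))    # the literal indicator is present
print(string_rule_matches(encoded_sample))  # same indicator, new key: rule misses it
```

In practice each new build can switch the key or the encoding scheme, so the literal signature never reappears on disk, which is why Karthick argues for behavioural and adaptive monitoring over static string rules.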
To defend against threats like this, Karthick advised companies to adopt adaptive threat monitoring tools.
“Apart from this, it is recommended that users enable multi-factor authentication and refrain from clicking on unknown links and emails. Additionally, avoid downloading or using pirated software because the risks greatly outweigh the benefits,” concluded the advisory.
AI tools are also often associated with data privacy concerns. For more about this trend, read this analysis by Infosecurity deputy editor James Coker.