Breakthrough in AI Research Raises Concerns for Machine Learning Security

A recent study proposes a novel backdoor attack on pre-trained models, dubbed "TransTroj," that poses a significant threat to the security and trustworthiness of machine-learning supply chains. The approach embeds malicious behaviors directly into a pre-trained model, making them difficult to detect and remove. The researchers report attack success rates of nearly 100% on most downstream tasks, and the backdoor remains effective even after the model is fine-tuned for different applications.

Source: https://dev.to/mikeyoung44/novel-supply-chain-backdoor-attack-on-pretrained-models-via-embedding-indistinguishability-technique-4eco
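
The intuition behind an "embedding indistinguishability" objective is that a trigger is optimized so that triggered inputs produce embeddings matching those of attacker-chosen reference inputs; any classification head later fine-tuned on top of the encoder then tends to inherit the backdoor. The following PyTorch sketch illustrates only that alignment idea under simplifying assumptions (a toy frozen encoder, an additive trigger, cosine-distance loss); it is not the paper's actual procedure, which also involves poisoning the model itself.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

# Toy stand-in for a pre-trained encoder. The real attack targets large
# pre-trained models; this tiny frozen MLP is purely illustrative.
encoder = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 8))
for p in encoder.parameters():
    p.requires_grad_(False)

# Hypothetical attacker-chosen "reference" inputs (e.g. from the target
# class) and clean inputs to be stamped with the trigger.
reference = torch.randn(4, 16)
clean = torch.randn(32, 16)

# Learnable additive trigger; the paper's trigger form may differ.
trigger = torch.zeros(1, 16, requires_grad=True)
opt = torch.optim.Adam([trigger], lr=0.05)

def embedding_gap():
    # Cosine distance between triggered-input embeddings and the mean
    # reference embedding -- the alignment ("indistinguishability") loss.
    target = encoder(reference).mean(dim=0, keepdim=True)
    poisoned = encoder(clean + trigger)
    return 1 - F.cosine_similarity(poisoned, target, dim=1).mean()

before = embedding_gap().item()
for _ in range(200):
    opt.zero_grad()
    loss = embedding_gap()
    loss.backward()
    opt.step()
after = embedding_gap().item()

# After optimization, triggered inputs sit much closer to the reference
# embedding, so a downstream head fine-tuned on the encoder is likely to
# map triggered inputs to the attacker's target class.
```

The reference-embedding target is what makes such a backdoor survive fine-tuning: the attack manipulates the encoder's representation space rather than any particular classification head.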