protectai / modelscan

Protection against Model Serialization Attacks

Home Page: http://modelscan.ai


Scan arbitrary code in Keras Lambda layer

mehrinkiani opened this issue

Is your feature request related to a problem? Please describe.
One of Keras' core layers is the Lambda layer, which can be exploited for malicious code execution. At the moment, modelscan only flags that a Keras Lambda layer is present in a model; it gives no further insight into whether the arbitrary code in the Lambda layer is actually malicious. A short illustration of why this matters is sketched below.
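
As background, here is a minimal sketch (not from modelscan) of the risk: Keras marshals a Lambda layer's Python bytecode into the saved model file, so the embedded code runs again for anyone who loads and calls the model. The file name and payload are illustrative, and depending on the Keras version, loading a Lambda layer may additionally require `safe_mode=False`.

```python
import tensorflow as tf

# Build a model whose Lambda layer carries a marshalled Python lambda;
# the lambda's bytecode is embedded in the saved file. The exec() call
# stands in for an attacker's payload.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(1,)),
    tf.keras.layers.Lambda(lambda x: exec("print('arbitrary code ran')") or x),
])
model.save("lambda_model.h5")  # illustrative file name

# Whoever loads and runs this model executes the embedded bytecode.
loaded = tf.keras.models.load_model("lambda_model.h5")
loaded.predict(tf.constant([[1.0]]))
```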

Describe the solution you'd like
It would be helpful to get insight from modelscan beyond a notification that a Lambda layer is present. For example, if the Lambda layer's code references functions that can execute arbitrary code (such as Python's built-ins exec() and eval()), modelscan should highlight that the code found in the Lambda layer looks suspicious.
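
A minimal sketch of the kind of check being requested (not modelscan's actual implementation; the function name and the config path are hypothetical assumptions): Keras stores a marshalled lambda as a base64 string in the layer config, and the code object's `co_names` lists the global names the bytecode references, including any code-execution builtins it calls.

```python
import base64
import marshal

# Names whose presence in Lambda bytecode warrants a warning (illustrative list).
SUSPICIOUS_NAMES = {"exec", "eval", "compile", "__import__", "system", "popen"}

def flag_suspicious_lambda(encoded_code: str) -> set:
    """Return suspicious global names referenced by a Lambda layer's bytecode.

    `encoded_code` is the base64 string Keras stores for a marshalled lambda
    in the layer config (roughly config["function"][0] in H5 models; the
    exact location is an assumption and varies by format/version).
    """
    code = marshal.loads(base64.b64decode(encoded_code))
    # co_names includes the builtins the bytecode calls, e.g. exec or eval.
    return SUSPICIOUS_NAMES & set(code.co_names)
```

A fuller check would also walk nested code objects in `code.co_consts`, and a hit should be reported as a suspicion rather than proof, since attackers can obfuscate calls and benign code can legitimately use some of these names.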

Describe alternatives you've considered
NA

Additional context
NA