LLM security

Girish Kurup
2 min read · Mar 24, 2024


Generative AI has taken the world by storm. Startups and individuals are using proprietary models such as OpenAI's to create new AI apps and products.

Large enterprises, however, are caught up in long discussions about the risks and fears associated with LLMs, because nobody knows what kind of data the proprietary LLMs were trained on.

A recent LLM model-theft story has rattled many security establishments and enterprise application security red teams.

For an expense of less than $20, an adversarial attack was able to extract the entire embedding projection matrix of OpenAI's Ada and Babbage language models.
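
The attack rests on a simple linear-algebra fact: a transformer's final layer maps a low-dimensional hidden state to vocabulary logits, so full logit vectors collected across many prompts all lie in a subspace whose dimension equals the hidden size. Below is a toy NumPy simulation of that core idea only (synthetic matrices, not a working exploit); the sizes and the threshold are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
V, d, n_queries = 1000, 64, 500     # vocabulary size, hidden size, number of prompts

W = rng.normal(size=(V, d))         # stand-in for the model's final projection matrix
H = rng.normal(size=(d, n_queries)) # hidden states produced by each prompt
logits = W @ H                      # the full logit vectors an API could expose

# Every logit vector lies in the d-dimensional column space of W, so the
# singular values of the stacked logits drop to ~0 after index d,
# revealing the hidden dimension (and a basis for W's column space).
s = np.linalg.svd(logits, compute_uv=False)
est_d = int(np.sum(s > 1e-8 * s[0]))
print("estimated hidden dimension:", est_d)   # prints 64
```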

Suppose an enterprise downloads an open-weight (non-proprietary) model such as Mistral or Llama 2 from Hugging Face: what kind of vulnerability scanning is available to classify the LLM, its weights, and the associated Python code as secure?
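
One concrete risk such scanners look for is code execution hidden in pickle-based weight files, since unpickling can import and call arbitrary Python objects (safetensors files avoid this by design). Here is a minimal standard-library sketch of that kind of check; the suspicious-module list and the file handling are simplified assumptions, not a replacement for a dedicated scanner.

```python
# Simplified heuristic: list pickle opcodes whose string arguments name
# modules commonly abused for code execution. PyTorch ".bin" checkpoints
# are zip archives containing a data.pkl entry; plain ".pkl" files are
# raw pickle streams.
import pickletools
import zipfile

SUSPICIOUS = {"os", "posix", "subprocess", "builtins", "runpy", "socket"}

def scan_pickle_stream(data: bytes) -> list[str]:
    findings = []
    for opcode, arg, _ in pickletools.genops(data):
        # GLOBAL carries "module name" directly; STACK_GLOBAL builds it
        # from previously pushed strings, which this loop also sees.
        if isinstance(arg, str) and arg.split(" ")[0].split(".")[0] in SUSPICIOUS:
            findings.append(f"{opcode.name}: {arg}")
    return findings

def scan_file(path: str) -> list[str]:
    if zipfile.is_zipfile(path):
        with zipfile.ZipFile(path) as zf:
            return [hit for name in zf.namelist() if name.endswith(".pkl")
                    for hit in scan_pickle_stream(zf.read(name))]
    with open(path, "rb") as fh:
        return scan_pickle_stream(fh.read())

# Example (file name is illustrative):
# print(scan_file("pytorch_model.bin"))
```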

Fear pushes enterprises toward the open-weight LLMs offered on Azure, AWS, or GCP, on the assumption that these cloud providers must have performed all possible vulnerability scanning. Are there publicly available vulnerability databases?

  1. Assuming Llama 2 is available on Azure, how do we make sure the Llama 2 model offered there has passed all vulnerability scans?
  2. Suppose an enterprise fine-tunes Llama 2 on Azure: how do we secure the new weights after fine-tuning? (A minimal integrity check is sketched after this list.)
  3. Suppose an enterprise starts building a new app using Azure OpenAI: how do we perform threat modeling?
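
On point 2, one basic control is artifact integrity: record a digest of every fine-tuned weight file when the training job finishes and verify it before the weights are loaded for serving, so silent tampering at rest is detected. A minimal sketch, assuming the weights are safetensors files in a local directory (paths and file names are illustrative):

```python
import hashlib
import json
from pathlib import Path

def digest(path: Path) -> str:
    # Stream the file in 1 MiB chunks to keep memory flat for large weights.
    h = hashlib.sha256()
    with path.open("rb") as fh:
        for chunk in iter(lambda: fh.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def write_manifest(weights_dir: str, manifest: str = "weights.sha256.json") -> None:
    # Run once at the end of the fine-tuning job.
    entries = {p.name: digest(p) for p in sorted(Path(weights_dir).glob("*.safetensors"))}
    Path(weights_dir, manifest).write_text(json.dumps(entries, indent=2))

def verify_manifest(weights_dir: str, manifest: str = "weights.sha256.json") -> bool:
    # Run in the serving process before loading the weights.
    expected = json.loads(Path(weights_dir, manifest).read_text())
    return all(digest(Path(weights_dir, name)) == h for name, h in expected.items())
```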



Written by Girish Kurup

Passionate about writing. I am a technology and data science enthusiast. Reach me at girishkurup21@gmail.com.
