Anbi Guo
Large language models (LLMs) were evaluated for detecting Living Off The Land (LOTL) attacks in Linux audit logs. Across three prompt strategies, every manually simulated attack was identified. However, the volume of noise varied widely: generic prompts produced the most false positives, while targeted prompts achieved markedly better precision. These findings demonstrate the potential of LLMs for log-based threat detection and underscore the need for careful prompt design to reduce false alerts.
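The generic-versus-targeted contrast can be sketched as below. This is a minimal illustration, not the study's actual prompts: the wording, the sample auditd record, and the llm_complete placeholder are all assumptions standing in for whatever prompts and model API the evaluation used.

```python
# Illustrative sketch of the generic vs. targeted prompting contrast.
# The prompt wording and the audit record are hypothetical examples;
# llm_complete() stands in for any chat-completion API.

AUDIT_EXCERPT = (
    'type=EXECVE msg=audit(1700000000.123:456): argc=3 '
    'a0="curl" a1="-o" a2="/tmp/.x"'
)

# Generic prompt: no LOTL framing, which (per the abstract) tends to
# over-flag benign activity.
GENERIC_PROMPT = (
    "Review the following Linux audit log entry and report anything "
    f"suspicious:\n\n{AUDIT_EXCERPT}"
)

# Targeted prompt: names the threat class and demands cited evidence,
# the kind of specificity the abstract reports improves precision.
TARGETED_PROMPT = (
    "You are analyzing Linux auditd records for Living Off The Land (LOTL) "
    "activity, i.e., abuse of legitimate binaries such as curl, bash, or "
    "python for malicious ends. Answer MALICIOUS or BENIGN, then cite the "
    f"specific fields that justify the verdict:\n\n{AUDIT_EXCERPT}"
)

def llm_complete(prompt: str) -> str:
    """Placeholder: send the prompt to an LLM provider of your choice."""
    raise NotImplementedError("wire up a model API here")

if __name__ == "__main__":
    # Print both prompts so the structural difference is visible.
    for name, prompt in [("generic", GENERIC_PROMPT),
                         ("targeted", TARGETED_PROMPT)]:
        print(f"--- {name} prompt ---\n{prompt}\n")
```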