PS - Exploiting vulnerabilities in LLM APIs
Last updated
Last updated
This lab contains an OS command injection vulnerability that can be exploited via its APIs. You can call these APIs via the LLM. To solve the lab, delete the morale.txt file from Carlos' home directory.
When we launch the lab we get on this page:
We need to click on Live Chat to access the AI.
First we can ask the ai which API it uses:
Now we could ask more info about the newsletter for example:
We can also try to add an address to the newsletter:
In the above screen capture, we asked the newsletter to add an email address. As we have access to an email client we can ask to be added. And then we can try to inject a command. Here we inject whoami and it works as we can see below.
So as the goal is to delete a specific morale.txt file in Carlo's home directory we can try to inject: rm /home/carlos/morale.txt
If we try to print the file with cat it won't work because it says the address is invalid, however if we rm the file the lab is solved
And we solved this lab :)