Solving the TRUSTEQ Christmas LLM Hacking Challenge requires your skill and clever negotiation or questions.
You put yourself in the shoes of a hacker - and the passwords “hidden inside Trusty” are your goal!
In five increasingly tricky levels, you have to manage to elicit the secret passwords from the chatbot. But beware: Trusty is clever and well secured with AI - or is he?
This is where you tackle the biggest weakness of modern language models (LLMs): Prompt injection attacks.
Use this (your) clever technique to circumvent Trusty's protective mechanisms and get him to tell you what he should actually be keeping secret.
If you manage to crack all the levels, the reward beckons:
The chance to win one of 30 exclusive TRUSTEQ happy espresso cups — the perfect companion for your next brain marathons.
The challenge runs from December 16 to December 31, 2024.
Are you ready to prove your skills and outsmart Trusty?
👉 Start the TRUSTEQ Christmas LLM Hacking Challenge now and join in!
Good luck and Merry Christmas! 🎄☕
Start the challenge
The integration of language models is increasing rapidly in today's companies. These models are often fed with sensitive data in order to provide customized functions. However, this increase in efficiency is accompanied by a serious security risk: prompt injection attacks.
Prompt injection occurs when an attacker uses manipulative input to cause a language model to ignore or overwrite instructions. The aim is to disclose confidential data or carry out unauthorized actions. Examples include the disclosure of secret information, the circumvention of security barriers or the targeted manipulation of system behavior.
Prompt injection attacks represent the greatest security risk for Large Language Models (LLMs). Despite various protection mechanisms, it is often possible for attackers to trick the models. Data leaks and hacker attacks are particularly serious. Time and again, sensitive company data falls into the wrong hands, whether due to inadequate protection or careless implementation of the models.
The “Trusty” project impressively demonstrates how vulnerable even complex security measures can be. Despite input and output filters as well as special protection strategies and adjustments to the system prompt, the security barriers were successfully overcome.
Companies that rely on LLMs should not underestimate the risks of prompt injection. In addition to robust technical measures, conscious handling of sensitive data is essential to ensure the security of such systems.
We have been working on this topic for several years and are happy to discuss it in more detail.
Get one of 30 exclusive TRUSTEQ happy espresso cups! This stylishly designed pick-me-up will keep you in a good mood - even when the espresso is empty. At the end of level 5, Trusty reveals the way to the cup - so get playing!