Join our Christmas Contest 🎄
LLM Hacking Challenge
TRUSTEQ | Corporative Excellence
TEST YOUR LLM HaCKING Know how

TRUSTEQ Christmas LLM Hacking Challenge

Do you need some distraction and interactive puzzle fun during the quiet and contemplative holidays?

We have just the thing for you!

The TRUSTEQ Christmas LLM Hacking Challenge:

Outsmart our AI chatbot and win one of 30 “happy espresso cups”!

Solving the TRUSTEQ Christmas LLM Hacking Challenge requires your skill and clever negotiation or questions.

Your conversation partner? Our clever AI chatbot Trusty!

You put yourself in the shoes of a hacker - and the passwords “hidden inside Trusty” are your goal!

In five increasingly tricky levels, you have to manage to elicit the secret passwords from the chatbot. But beware: Trusty is clever and well secured with AI - or is he?

This is where you tackle the biggest weakness of modern language models (LLMs): Prompt injection attacks.

Use this (your) clever technique to circumvent Trusty's protective mechanisms and get him to tell you what he should actually be keeping secret.

If you manage to crack all the levels, the reward beckons:

The chance to win one of 30 exclusive TRUSTEQ happy espresso cups — the perfect companion for your next brain marathons.

Participation period

The challenge runs from December 16 to December 31, 2024.

Are you ready to prove your skills and outsmart Trusty?

👉 Start the TRUSTEQ Christmas LLM Hacking Challenge now and join in!

Good luck and Merry Christmas! 🎄☕

Prompt injection - the hidden risk of language models

The integration of language models is increasing rapidly in today's companies. These models are often fed with sensitive data in order to provide customized functions. However, this increase in efficiency is accompanied by a serious security risk: prompt injection attacks.

What is a prompt injection attack?

Prompt injection occurs when an attacker uses manipulative input to cause a language model to ignore or overwrite instructions. The aim is to disclose confidential data or carry out unauthorized actions. Examples include the disclosure of secret information, the circumvention of security barriers or the targeted manipulation of system behavior.

Why are these attacks problematic?

Prompt injection attacks represent the greatest security risk for Large Language Models (LLMs). Despite various protection mechanisms, it is often possible for attackers to trick the models. Data leaks and hacker attacks are particularly serious. Time and again, sensitive company data falls into the wrong hands, whether due to inadequate protection or careless implementation of the models.

A playful experiment: Trusty

The “Trusty” project impressively demonstrates how vulnerable even complex security measures can be. Despite input and output filters as well as special protection strategies and adjustments to the system prompt, the security barriers were successfully overcome.

Do not underestimate prompt injection

Companies that rely on LLMs should not underestimate the risks of prompt injection. In addition to robust technical measures, conscious handling of sensitive data is essential to ensure the security of such systems.

Contact us now

We have been working on this topic for several years and are happy to discuss it in more detail.

Contact us

Your chance to win

Get one of 30 exclusive TRUSTEQ happy espresso cups! This stylishly designed pick-me-up will keep you in a good mood - even when the espresso is empty. At the end of level 5, Trusty reveals the way to the cup - so get playing!