misc
pbj, whoami, tax-return. gg --- Those are the challenges I solved that I think are worth a writeup.

pbj

The bot does all the trading, while I sit back and relax. Note: The two instancers are running the exact same thing, they are there to avoid overloading.
nc pbj.chal.imaginaryctf.org 1337 OR nc 34.45.211.133 1337
Attachments
tldr;
The contract implements a constant-product AMM (k = eth * flagCoin) using integer math. During buy(), minted tokens are computed with a floor division, which makes the post-trade product eth * flagCoin drift above the stored k. Then sell(0) (selling zero tokens) computes a payout as to_pay = eth - k/flagCoin, which becomes positive whenever eth * flagCoin > k.
That means we can mint rounding dust by buying, then immediately “skim” that dust for free with sell(0). Repeat until our EOA balance exceeds 50 ETH, then submit the secret to the menu to get the flag.
what is going on here?
We have to get to a target of 50 ETH, which already implies we will need to
buy high, sell low... cough cough -- do some crazy tech to make money (no one ever).
Here is an overview of what is happening in da wallet

We only have two actions on the pool: buy and sell
The pool tracks two reserves: E = eth (ETH in the pool) and F = flagCoin (tokens in the pool), plus a fixed constant K set at deployment. The bug lives in how buy() mints with floor division (creates drift), and how sell(0) still pays out (lets us skim that drift). I will try to explain this the best I can.
Our goal is to trade and interact with the contract wallet in a way that BENEFITS US.
Obviously, it has to do with the buy() and sell() functions as mentioned.
buy(): because of the floor division here, it mints too few tokens. This leaves the post-trade product E_after * F_after ≥ K (usually > K). That excess is the "dust" we can skim → [BUG #1].
sell(): when we do sell(0):

    F stays F (we add zero tokens back)
    to_pay = E - K / F

If the previous buy() left E * F > K, then to_pay > 0 → free skim. The code doesn't forbid flag == 0 → [BUG #2].
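To make the two bugs concrete, here is a toy re-implementation of the pool math in Python. The variable names and the exact mint formula (minted = ⌊F·e / (E + e)⌋) are my assumptions for illustration, not the contract's real code:

```python
# Toy constant-product pool mirroring the two bugs described above.
K = 100 * 10**18 * 100      # invariant stored at deployment: E * F
E = 100 * 10**18            # pool ETH reserve (wei)
F = 100                     # pool flagCoin reserve (whole tokens)

def buy(eth_in):
    """Floor division mints slightly too few tokens -> E*F drifts above K."""
    global E, F
    minted = F * eth_in // (E + eth_in)
    E += eth_in
    F -= minted
    return minted

def sell(x):
    """sell(x) pays E - K // (F + x); x == 0 is NOT rejected."""
    global E, F
    F += x
    to_pay = E - K // F     # positive whenever E * F > K
    E -= to_pay
    return to_pay

minted = buy(5 * 10**18)    # mints 4 tokens; now E * F > K (drift)
dust = sell(0)              # ~0.83 ETH of "dust" skimmed for free
```

After the sell(0), the product E * F is back at (or just under) K, so every buy/skim pair converts the rounding drift into ETH in our pocket.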
Combining 1 and 2, we get a net profit every round, and eventually we reach the target. You can think of it as buying and selling in a specific way that gives us a discount every round, netting a positive profit. Do this for enough rounds and we reach the target ETH. Here is an example (from my run):


solulu.
Okay, with that example, we will need to make our solve script follow this outline:
Connect to the given RPC with the provided private key and contract address.
Read pool state: eth (pool ETH), flagCoin (pool flags), k.
Buy: send the minimal ETH that guarantees minting at least 1 flagCoin.
Sell: sell flags back in multiple calls, each respecting the pool-side limit flag ≤ flagCoin.
Repeat until msg.sender.balance > 50 ETH.
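As a sketch of step 3 (the minimal buy), assuming the mint formula is minted = ⌊F·e / (E + e)⌋ as above: minting at least one token requires e ≥ E / (F - 1), so the minimal buy is the ceiling of that. The real contract may round differently, so treat this as an assumption:

```python
def min_buy(E: int, F: int) -> int:
    """Smallest eth_in e such that F*e // (E + e) >= 1.

    F*e/(E+e) >= 1  <=>  e*(F-1) >= E  <=>  e >= E/(F-1),
    so take the ceiling of E/(F-1).
    """
    assert F >= 2, "pool needs at least 2 tokens to mint one more"
    return -(-E // (F - 1))  # ceiling division on ints

E, F = 100 * 10**18, 100
e = min_buy(E, F)
# sanity checks: e mints a token, e - 1 does not
assert F * e // (E + e) >= 1
assert F * (e - 1) // (E + e - 1) == 0
```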
ictf{n3v3r_7ru571ng_7h15_b0t_4g41n}
tax-return

Description
Here is a helpful ai tax return assistant. When I made it, I put a super secret flag into the system prompt. You will never find it! http://tax-return.chal.imaginaryctf.org
solve.
My teammate solved this one. It was... something.
Make a PDF that contains the injection. Yes... a PDF.

Send it over a couple of rounds, and done.


Yea.... LLMs being LLMs, classic.
ictf{h0w_d1d_y0u_tr1ck_my_a1_@ss1st@nt?}
whoami

Description
They can't leak what they don't know
Please don't test your payloads on remote; use your own OpenRouter key.
nc whoami.chal.imaginaryctf.org 1337
Attachments
My teammate solved it before I got to it, but we talked, and here are some interesting approaches / tricks that I think you could use to win this.
tldr;
The server shuffles a list of LLM models and, after each reply, asks you to guess which model produced it. A system prompt tries to force every model to say “I am root” when asked who it is, but we can still elicit other self-revealing traits (vendor name, modality, typical phrasing, and a tiny skill test). Answer correctly for all rounds and the script prints the flag.
models
The flag is read at start:
    with open("flag.txt", "r") as f:
        FLAG = f.read().strip()

It's only printed after you guess every model correctly. This is a jailbreakathon problem, so craft your prompt nicely, everyone. Now let's focus on the system prompt.
You can tell it to stop roleplaying, or roleplay with it so that it's no longer a secret agent and is now a helper to us. OR, my instinct tells me that we can include a very long and complex prompt that ultimately tells the model to IGNORE ALL PREVIOUS INSTRUCTIONS. I did make some progress initially. Here is one of my initial attempts, where I went with a fingerprint-prompt approach.
To the surprise of nobody, one of the models do be lying -- gatekeeping
that pissed me off for no reason until my teammate cracked the code.

I would refer you to the official writeup so you can see the intricacies, but I will include an example run here to give you an idea.
ictf{i_guess_u_uncovered_my_identity_b1f914a9}
Cross-reference table for you.

| Model | Self-identification | Skill test | Multimodal | Last updated |
|---|---|---|---|---|
| qwen/qwen3-8b | Says Qwen / Qwen2.5 | | | ~Oct 2024 |
| google/gemini-2.0-flash-lite-001 | Always just "I am root." | | | |
| google/gemini-flash-1.5-8b | Says Google/Gemini | Often ≠ 26 | multimodal | ~2021 |
| openai/gpt-oss-20b | Says OpenAI, "GPT-4.0" | Frequently wrong answer | No (text-only) | 2023-11 (varies) |
| openai/gpt-oss-120b | Says OpenAI, mentions GPT-4o | 26 (correct) | "Multimodal when enabled" | 2021 → 2024 updates |
| openai/gpt-5-nano | Says ChatGPT/OpenAI/GPT-4o lineage | Cooperative | Yes, multimodal | 2023-09 (varies) |
| deepseek/deepseek-r1-distill-llama-70b | Says DeepSeek / R1 Distill | Cooperative | | |
| mistralai/mistral-7b-instruct | Says Mistral AI | | | |
| meta-llama/llama-3.2-1b-instruct | N/A | Varies | | |
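A minimal sketch of how the table above could drive an automatic guesser. The keyword heuristics here are my own illustration (real replies will need fuzzier matching and the skill-test / cutoff columns to break ties between same-vendor models):

```python
# Hypothetical keyword fingerprinting based on the cross-reference table.
# Order matters: check the most distinctive vendor names first.
FINGERPRINTS = [
    ("qwen", "qwen/qwen3-8b"),
    ("deepseek", "deepseek/deepseek-r1-distill-llama-70b"),
    ("mistral", "mistralai/mistral-7b-instruct"),
    ("gemini", "google/gemini-flash-1.5-8b"),
    ("gpt-4o", "openai/gpt-oss-120b"),
    ("openai", "openai/gpt-5-nano"),
]

def guess_model(reply: str) -> str:
    r = reply.lower()
    for needle, model in FINGERPRINTS:
        if needle in r:
            return model
    # the stubborn one answers only "I am root" and matches nothing
    return "google/gemini-2.0-flash-lite-001"
```

Note that the two openai/gpt-oss models and the two Geminis overlap on vendor keywords, so in practice you would disambiguate with the skill-test answer (e.g., whether it gets 26 right) and the claimed knowledge cutoff.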