Lesson 2.2: Junk in the Learning Loop
By the end of this lesson, you'll understand why negative keywords and exclusions can't solve a problem that lives in your conversion definition, and when "not counting" is the right move.
Spam, fraud, and curiosity fills aren't equally bad in isolation, but they're equally damaging to your account's learning if they all count as wins at tag time.
Every time a junk submission fires a primary conversion, you're reinforcing: this kind of traffic is working. Negative keywords and placement exclusions can reduce junk coming in, but they don't redefine success. If the win condition is still "tag fired," the system will just find new pockets of traffic that satisfy the tag. It's creative that way.
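The point above can be sketched in a few lines. This is a toy illustration with hypothetical field names (`form_submitted`, `flagged_as_spam`, `sales_accepted`), not any platform's actual schema: if the win condition is just "tag fired," junk counts as a win; a stricter definition changes what the system is optimizing toward.

```python
# Toy illustration (hypothetical field names): two ways to define a "win".

def tag_fired_win(lead: dict) -> bool:
    # Original win condition: any submission that fires the tag counts.
    return lead["form_submitted"]

def qualified_win(lead: dict) -> bool:
    # Redefined win condition: the tag firing is necessary but not sufficient.
    return (
        lead["form_submitted"]
        and not lead["flagged_as_spam"]
        and lead["sales_accepted"]  # e.g., confirmed by a human or a CRM rule
    )

leads = [
    {"form_submitted": True, "flagged_as_spam": True,  "sales_accepted": False},  # junk
    {"form_submitted": True, "flagged_as_spam": False, "sales_accepted": True},   # real
]

print(sum(tag_fired_win(l) for l in leads))   # 2 -- the junk lead is reinforced as a win
print(sum(qualified_win(l) for l in leads))   # 1 -- only the real lead counts
```

Note that negative keywords would operate on what ends up in `leads` at all; the conversion definition operates on which entries count.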
Here's a useful distinction:
Negative keywords are a sieve: they filter what comes in. Your conversion definition is the faucet: it controls what the system is thirsty for in the first place.
You can run the finest mesh sieve on the market. If the faucet is aimed at garbage, you're just working harder to get the same result.
When should you retract, adjust, or "don't count this"?
When you have a way to say, after the fact and with legitimate data, that the win was false. This is the value of disqualification workflows and upload cadences. You don't need to master every edge case to accept the principle: the learning loop must be able to unlearn bad wins. Without that, you're stuck praising noise louder over time.
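The "unlearn" step can be sketched without committing to any platform's upload format. A minimal sketch, assuming a hypothetical in-house disqualification log and a generic CSV export (the column names are illustrative; the real adjustment schema depends on your ad platform):

```python
import csv
import io

# Hypothetical disqualification log: conversions previously reported as wins,
# later judged false using legitimate post-hoc data (spam review, sales feedback).
disqualified = [
    {"click_id": "abc123", "conversion_name": "Lead Form", "reason": "spam"},
    {"click_id": "def456", "conversion_name": "Lead Form", "reason": "fake contact info"},
]

def build_retraction_rows(disqualified: list[dict]) -> str:
    """Build a generic retraction CSV. Column names are illustrative,
    not a real platform schema -- adapt to your platform's adjustment upload."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=["click_id", "conversion_name", "adjustment"])
    writer.writeheader()
    for lead in disqualified:
        writer.writerow({
            "click_id": lead["click_id"],
            "conversion_name": lead["conversion_name"],
            "adjustment": "RETRACT",  # tell the platform this win was false
        })
    return buf.getvalue()

print(build_retraction_rows(disqualified))
```

Run on a cadence, an export like this is what lets the learning loop stop praising noise: each retracted row withdraws a false win instead of leaving it baked into the system's picture of what "works."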
The mechanics of disqualification and upload cadences are implementation-layer work (covered separately in the Signal Hygiene module). The principle you need right now: junk that fires your primary conversion isn't a traffic problem. It's a definition problem.