Ralphthon

Singapore · Participant Guide

Problem Statement

Each team picks one track only.

Track 1 — Impact

Prove it with business value.

This is the traditional hackathon impact track. Your project is scored on whether it could create real value in the market.

  • The problem is real, and someone is willing to pay (or use it daily).
  • The product is polished enough that the demo itself sells it.
  • The craft, the UX, and the ship quality are visible from minute one.
  • AI-native, but the AI is in service of a human user — not the other way around.

Your stack is up to you. Use Codex, use anything else. What you built matters more than what you built it with.

Think: AI-native SaaS, consumer apps, workflow automation, useful products people would actually pay for or open daily.

Track 2 — Harness

Prove it with technical depth.

Less about monetization, more about how well you tame and operate the agent. Like Oh My Codex, OpenClaw, gbrain, gstack, hermes — the harness wrapping the agent is itself the deliverable.

  • The agent runs long, runs autonomously, and finishes its job without babysitting.
  • Goal, termination, verification, and recovery are designed — not improvised.
  • Multi-agent orchestration, tool use, memory, and loop structure are deliberate.
  • A judge can read the harness and understand why it works, not just that it works.

Harness-track projects must integrate with OpenAI Codex. MCP Server, Agents SDK, Direct API, /goal — the integration method is up to you, but evaluation is anchored on Codex.

Think: autonomous coding agents, Codex-backed multi-agent systems, harnesses that wrap Codex with novel loop / memory / verification logic, meta-agents that improve themselves on Codex.
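To make the harness criteria concrete, here is a minimal sketch of the kind of loop judges want to be able to read. This is illustrative only: `run_agent_step` and `verify` are hypothetical placeholders for your agent call and your check, not part of any real SDK.

```python
# Minimal harness loop sketch: goal, termination, verification, recovery.
# `run_agent_step` and `verify` are hypothetical placeholders.

def run_harness(goal, run_agent_step, verify, max_steps=10):
    """Drive an agent toward `goal`: run a step, verify it, and retry
    (recovery) until verified or the step budget runs out (termination)."""
    history = []
    for step in range(max_steps):
        result = run_agent_step(goal, history)  # one autonomous agent turn
        history.append(result)                  # memory: prior work feeds the next turn
        if verify(goal, result):                # explicit verification, not vibes
            return {"done": True, "steps": step + 1, "result": result}
    # Budget exhausted: fail loudly instead of looping forever.
    return {"done": False, "steps": max_steps, "result": None}
```

The point is that termination (the step budget), verification (`verify`), recovery (the retry), and memory (`history`) are all visible in the loop, so a judge can read why it works.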

Track comparison

                 Impact                                      Harness
  Scored on      Business value · market fit · UX · polish   Agent design · autonomy · technical craft
  Codex usage    Optional                                    Required
  Why it's good  People want to use it                       The agent is technically well-designed
  Keywords       useful, polished, shipping, AI-native       autonomous, long-running, agent-first, harness
  Reference      General AI products                         Oh My Codex, OpenClaw, gbrain, gstack, hermes

Declare your track at submission. Prizes are awarded per track.

NOT TO DO List

Projects will be immediately disqualified if they fall into any of these categories:

  • Basic RAG Applications
  • Streamlit Applications
  • Image Analyzers
  • "AI for Education" Chatbot
  • AI Job Application Screener
  • AI Nutrition Coach
  • Personality Analyzers

Connect with the Community

Join the Ralphthon Discord for team formation, announcements, and Q&A:

  • #announcements — Official updates from organizers
  • #team-building — Find teammates (max team size: 4)
  • #live-and-updates — Live updates during the event
  • #questions — Ask organizers anything

Rules

  • Team Size: Maximum 4 members per team. Solo participants are welcome.
  • Track Lock-in: One team, one track. Declare at submission.
  • Open Source: Submitted repositories must be public.
  • New Work Only: You may not present pre-existing projects as hackathon work. Judges will check.
  • Demo Requirements: Your demo must highlight only features built during the hackathon. Failure to clearly identify original contributions = disqualification.
  • Harness Track = Codex integration required: Harness submissions that don't integrate with Codex are auto-disqualified.
  • Banned Projects: Projects will be disqualified if they violate legal, ethical, or platform policies, or use code/data/assets you don't have rights to.

📹 Media Notice

This hackathon is being filmed by a professional video crew. If you prefer not to appear on camera, please let us know at check-in — we'll provide a sticker for your name badge so the crew knows to keep you out of frame.

Judging

Judges walk the floor and visit each team's table. ~3 minutes live demo + 1-2 minutes Q&A per team.

Show us what you have built. Please do not show us a presentation. We'll be checking to ensure your project was built entirely during the event; no previous work is allowed.

Scorecard

One card per judge per team.

Columns: Team name · Judge name · Live demo (0-4) · Creativity / Originality (0-3) · Impact potential (0-3)

Total: 10 points (Live demo 4 + Creativity 3 + Impact 3)

  • Live Demo (0-4) — How well does the team implement the core idea? Does it actually work live?
  • Creativity / Originality (0-3) — Has this been seen before? Does it tackle the track's problem in an original way?
  • Impact Potential (0-3) — Will this survive past the hackathon? How useful is it?

Tracks

Impact and Harness are awarded separately. The same scorecard is used, but for Harness, 'Live demo' evaluation includes Codex integration quality.

Questions? Reach out to GB Jeong — bong@team-attention.com