unmpld

AI-powered personal agent for searching job postings

Last updated:

It's an interesting time to be job hunting. Beyond your network, finding jobs has meant browsing job boards, which tend to hoard listing data behind increasingly sophisticated walls. Yet most of the companies that supply that data also publish it on their own websites, because the whole point is to attract applicants.

Displaced tech nerds with AI subscriptions are turning away from the job boards and building their own job-hunting agents. This is one of them, built by Steve Drasco (physicist turned slopportunist) and Claude (Anthropic's AI). Steve steers, Claude builds. Once called vibe coding or AI slop, this kind of project may have matured into harness engineering. For some reason, everyone building these makes them multi-user. Nobody asked for that. It just happens.

The system scrapes company career pages, browsing them the way a human would, reading the HTML and using an LLM to understand what jobs are listed. Some companies expose structured APIs (Greenhouse, Lever, Ashby) which give cleaner data; the crawler detects these automatically and upgrades to them when available.

Found jobs are scored against the user's preferences using a mix of hard rules and LLM judgment, then presented on a review board. A conversational chat agent manages the entire workflow (adding companies to track, refining preferences, reviewing candidates) and silently learns the user's interests over time.

Deliberately, this is a hunter-gatherer. It finds and evaluates opportunities. It doesn't apply on your behalf. Auto-applying is a different problem with different trade-offs.

If the predictions about AI displacement are even half right, there will be a lot more tools like this soon. The job boards know it too.

Key numbers

Metric             Value
Crawl interval     Every 4 hours
Score dimensions   7 (100-point scale)
Chat pipeline      Plan → Execute → Reply + Learn (3 entry points)
LLM tiers          Heavy (Opus) / Light (Haiku)
Bot actions        22 (status, pipeline, ingest, search, company, prefs, tags)
Job sources        Greenhouse, Lever, Ashby APIs + web scraping
Auth               WebAuthn passkeys, invite-only registration
Dependencies       2 production (better-sqlite3, simplewebauthn)

How It Works

You rant. The agent listens, builds a profile, and acts on it.

System Architecture

Three pipelines, one brain, one database per human.

Crawl Pipeline

Every 4 hours, the crawler visits every tracked company's careers page. If the company uses Greenhouse, Lever, or Ashby, it pulls clean structured data. Otherwise it reads the raw HTML and asks an LLM what jobs are there. New companies start dumb (scraping) and silently get smarter (API upgrade) when the crawler figures out a better way in.

Structured sources

Platform     Method               Notes
Greenhouse   JSON API             Auto-detected by board slug
Lever        JSON API             Slug derived from careers URL
Ashby        JSON API             Includes compensation data
Other        Web scraping + LLM   Reads HTML like a human would
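The detection step above can be sketched in plain JS. The endpoint patterns are the public ones each platform documents; the function name and return shape are illustrative, not the project's actual code.

```javascript
// Sketch: detect a structured ATS from a careers-page URL and build the
// corresponding public jobs endpoint. Unknown hosts fall back to scraping.
function detectAts(careersUrl) {
  const url = new URL(careersUrl);
  // Greenhouse boards live at boards.greenhouse.io/<slug>
  let m = url.hostname === "boards.greenhouse.io" && url.pathname.match(/^\/([\w-]+)/);
  if (m) return { type: "greenhouse", api: `https://boards-api.greenhouse.io/v1/boards/${m[1]}/jobs` };
  // Lever boards live at jobs.lever.co/<slug>
  m = url.hostname === "jobs.lever.co" && url.pathname.match(/^\/([\w-]+)/);
  if (m) return { type: "lever", api: `https://api.lever.co/v0/postings/${m[1]}?mode=json` };
  // Ashby boards live at jobs.ashbyhq.com/<slug>
  m = url.hostname === "jobs.ashbyhq.com" && url.pathname.match(/^\/([\w-]+)/);
  if (m) return { type: "ashby", api: `https://api.ashbyhq.com/posting-api/job-board/${m[1]}` };
  return { type: "scrape", api: null }; // fall back to HTML + LLM
}
```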

Post-crawl

New jobs get scored immediately. Anything above 70 triggers an email with the top 5.

Scoring

90 points of hard math. 10 points of LLM vibes. The code does salary, location, keywords, role fit. The LLM handles the squishier stuff: "is this actually interesting?" It never touches arithmetic.

Dimensions

Dimension         Max   Method
Role Fit          25    Rank-based from role_ranks; LLM fallback
Industry Fit      15    Rank-based from industry_ranks; LLM fallback
Location          15    Preferred/acceptable tiers + remote bonus
Company           15    Lookup in company_boosts
Salary Fit        10    Ratio against salary_target
Keyword Signals   10    Pattern match positive/negative keywords
LLM Judgment      10    technical_depth (4) + culture_fit (3) + notable (3)

Dealbreakers halve the score and auto-dismiss. You said "never defence contracts" and you meant it.
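A minimal sketch of the 90/10 split and the dealbreaker rule. The two implemented dimensions are simplified illustrations, and `hardPoints` stands in for the four rank/lookup dimensions (max 70); the real field names are assumptions.

```javascript
// Salary Fit (max 10): ratio of posted salary to target, clamped at 1.
function salaryFit(job, prefs) {
  if (!job.salary || !prefs.salary_target) return 5; // unknown: neutral
  return Math.round(Math.min(1, job.salary / prefs.salary_target) * 10);
}

// Keyword Signals (max 10): +2 per positive hit, -2 per negative, clamped.
function keywordSignals(job, prefs) {
  const text = `${job.title} ${job.description}`.toLowerCase();
  let pts = 5;
  for (const k of prefs.positive_keywords ?? []) if (text.includes(k)) pts += 2;
  for (const k of prefs.negative_keywords ?? []) if (text.includes(k)) pts -= 2;
  return Math.max(0, Math.min(10, pts));
}

// hardPoints: the rank/lookup dimensions scored elsewhere in code (max 70).
// llmPoints: the LLM's 0-10 judgment, clamped so it can't run away.
// Dealbreakers halve the total and flag the job for auto-dismissal.
function finalScore(hardPoints, llmPoints, job, prefs) {
  const score = hardPoints + salaryFit(job, prefs) + keywordSignals(job, prefs)
              + Math.max(0, Math.min(10, llmPoints));
  const text = `${job.title} ${job.description}`.toLowerCase();
  const hit = (prefs.dealbreakers ?? []).find(d => text.includes(d.toLowerCase()));
  return hit ? { score: Math.round(score / 2), dismissed: true }
             : { score: Math.round(score), dismissed: false };
}
```

Note the LLM's contribution is clamped in code: even a hallucinated judgment can't move the needle more than 10 points.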

Chat Pipeline

You talk to it. It does things. 22 actions, three entry points, one planner that decides what you meant.

Entry points

Endpoint             Auth       Tier    Purpose
/api/demo-chat       None       Light   Public demo, no database writes
/api/chat/greeting   Required   Light   Opening message; detects onboarding state
/api/chat            Required   Heavy   Full pipeline with actions + learning

Full chat pipeline

The planner sees everything: your jobs, preferences, profile, stats. If it decides to act, it acts then replies. If not, it just talks. Either way, it silently updates your profile.
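The plan → execute → reply (+ learn) loop can be sketched like this. `llm` and the action handlers are stand-ins for the real clients; in the actual system the planner's context includes the full board, preferences, profile, and stats.

```javascript
// Sketch of the chat pipeline: plan, execute, reply, then learn silently.
async function handleChat(message, ctx, llm, actions) {
  // 1. Plan: the heavy model decides which (if any) actions the message implies.
  const plan = await llm.heavy("plan", { message, ...ctx }); // -> { actions: [...] }

  // 2. Execute: run each planned action through its code handler.
  const results = [];
  for (const a of plan.actions ?? []) {
    results.push(await actions[a.name](a.args));
  }

  // 3. Reply: compose a response that can reference what just happened.
  const reply = await llm.heavy("reply", { message, results });

  // 4. Learn: fold anything the user revealed into the profile,
  //    without blocking the reply.
  llm.heavy("learn", { message, profile: ctx.profile })
     .then(update => actions.update_profile(update))
     .catch(() => {});

  return reply;
}
```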

Bot actions (22)

Category     Actions
Job status   set_status · bulk_set_status · delete_jobs
Pipeline     set_pipeline · bulk_set_pipeline
Scoring      set_score · find_jobs
Ingestion    ingest_url · ingest_text
Companies    add_company · remove_company · toggle_company · block_company · unblock_company
Searches     add_search · remove_search · toggle_search
Tags         add_tag · remove_tag · bulk_add_tag
Learning     set_preferences · update_profile

Implicit learning

Say "I'm into climate tech" and the bot quietly adjusts your scoring preferences without being asked. Every conversation makes the next crawl smarter.
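The code half of that loop is just a merge: the LLM extracts interest signals from the chat turn, and plain JS folds them into the stored preferences. The field names here are assumptions, not the project's actual schema.

```javascript
// Sketch: merge LLM-extracted interests into scoring preferences,
// lowercased and deduplicated, so the next crawl scores against them.
function mergeInterests(prefs, extracted) {
  const merged = { ...prefs };
  merged.positive_keywords = [...new Set([
    ...(prefs.positive_keywords ?? []),
    ...(extracted.interests ?? []).map(s => s.toLowerCase()),
  ])];
  return merged;
}
```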

Onboarding

First login, empty board. The bot notices and asks what you're looking for. By the end of the conversation, companies are tracked and preferences are set.

Agent Design

The rule: code for facts, LLM for judgment. If it's arithmetic, a lookup, or has clear rules, code does it. If it requires reading between the lines, the LLM does it.

Why not just LLM everything?

LLMs can't count, can't do reliable arithmetic, and hallucinate when bored. So salary conversion, score math, location lookups, and keyword matching are all plain JS. Stats get pre-computed and injected as facts. The LLM receives answers, not questions.
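What "receives answers, not questions" looks like in practice: stats are computed in plain JS and injected into the prompt as facts, so the model never counts or averages anything itself. The stat names are illustrative.

```javascript
// Pre-compute board stats in code; the prompt embeds the result verbatim,
// e.g. `Board facts: ${JSON.stringify(boardStats(jobs))}`.
function boardStats(jobs) {
  const byStatus = {};
  for (const j of jobs) byStatus[j.status] = (byStatus[j.status] ?? 0) + 1;
  const scored = jobs.filter(j => typeof j.score === "number");
  const avg = scored.length
    ? Math.round(scored.reduce((sum, j) => sum + j.score, 0) / scored.length)
    : null;
  return { total: jobs.length, byStatus, avgScore: avg };
}
```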

The learning loop

Chat → profile updates → scoring adapts → better results → chat again. No configuration screens. You just talk and the system quietly gets better at knowing what you want.

Safety

Bulk actions on more than 10 jobs need confirmation. The planner sees the full board before deciding anything. No YOLO mode.
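The confirmation gate is a one-function guard. The threshold matches the text; the return shape is illustrative.

```javascript
// Sketch: any bulk action touching more than 10 jobs comes back as a
// confirmation request instead of executing.
function guardBulk(action, jobIds, confirmed = false) {
  if (jobIds.length > 10 && !confirmed) {
    return { needsConfirmation: true,
             prompt: `This will ${action} ${jobIds.length} jobs. Proceed?` };
  }
  return { needsConfirmation: false };
}
```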

LLM Layer

One abstraction, two tiers, multiple providers. Opus for anything that matters. Haiku for anything that doesn't. Every call is token-tracked so we know exactly where the money goes.

Tier    Model              Used for
Heavy   Claude Opus 4.6    Planning, scoring, extraction, replies
Light   Claude Haiku 4.5   Classification, normalisation, greetings, generation
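The abstraction is small: one call site, a tier that picks the model, and every call recording token spend under a label. `callModel` stands in for the real provider client, and the model ids are placeholders.

```javascript
// Sketch of the two-tier wrapper with per-label token tracking.
const MODELS = { heavy: "claude-opus", light: "claude-haiku" }; // illustrative ids

function makeLlm(callModel, recordUsage) {
  async function run(tier, label, prompt) {
    const { text, inputTokens, outputTokens } = await callModel(MODELS[tier], prompt);
    // Every call is logged, so spend can be broken down by label and model.
    recordUsage({ label, model: MODELS[tier], inputTokens, outputTokens });
    return text;
  }
  return { heavy: (label, prompt) => run("heavy", label, prompt),
           light: (label, prompt) => run("light", label, prompt) };
}
```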

Database

SQLite, because nothing else was needed. One central DB for system stuff, one per user for their jobs and preferences. Created on first login, cached forever.
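"Created on first login, cached forever" is a memoised factory. `openDb` stands in for `new Database(path)` from better-sqlite3.

```javascript
// Sketch: one SQLite file per user, opened on first use and cached
// for the life of the process.
const dbCache = new Map();

function userDb(userId, openDb) {
  if (!dbCache.has(userId)) {
    dbCache.set(userId, openDb(`data/user_${userId}.db`));
  }
  return dbCache.get(userId);
}
```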

Central DB: unmpld.db

Table          Purpose
users          User accounts (id, name, email)
passkeys       WebAuthn credentials with user_id
invites        Invite codes for registration
activity_log   Page views, geo, bot detection
ip_geo_cache   Cached IP geolocation lookups
llm_usage      Token spend tracking per label/model
bot_lines      Rolling pool of landing page quips
kv             Persistent key-value state

Per-user DB: data/user_{id}.db

Table               Purpose
jobs                Job postings (source, metadata, score, status, pipeline)
searches            Saved keyword searches
companies           Tracked company career pages (ATS type, board token)
preferences         Scoring criteria (JSON, single row)
profile             Learned user profile (JSON, single row)
agent_runs          Audit log of crawl/score/ingest operations
job_tags            User-added tags for grouping
score_overrides     History of manual score adjustments
deleted_jobs        Dedup cache (prevents re-ingesting)
blocked_companies   Company blocklist
categories          Custom groupings

Stack

Vanilla everything. No frameworks, no build step, no ORM. Two npm dependencies in production. Runs on a single cheap VM. The kind of stack where you can read the entire codebase in an afternoon.

Layer      Technology
Frontend   Vanilla HTML / CSS / JS
Backend    Node.js + better-sqlite3
Auth       WebAuthn passkeys
Email      Resend API
Tests      Node built-in test runner

Live at unmpld.com.