~/agent
human + machine

This site is designed to be read by both humans and AI agents. Every page you see here has a structured, machine-readable counterpart. Agents can fetch JSON endpoints, read the llms.txt file, or consume the RSS feed — no scraping required.
The web is evolving. Websites aren't just for browsers anymore — they're interfaces for autonomous agents that research, summarize, and act on behalf of people. Building for that future means building for both audiences from day one.
## why this matters
- Styled pages, readable typography, terminal aesthetic
- Blog posts in MDX with syntax highlighting
- Navigate with keyboard shortcuts
- Structured JSON at `/api/agent`
- LLM context via `/llms.txt`
- Subscribable feed at `/feed.xml`
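An agent consuming a site like this would typically start from the identity endpoint and resolve the relative endpoint paths it advertises into absolute URLs. A minimal sketch, using the sample payload shown in the preview below; `resolve_endpoints` is an illustrative helper, not part of the site:

```python
import json
from urllib.parse import urljoin

# Identity document as served by /api/agent (trimmed to the fields used here;
# values taken from this page's preview)
IDENTITY_JSON = """
{
  "site": "https://thearavind.com",
  "endpoints": {
    "identity": "/api/agent",
    "posts": "/api/agent/posts",
    "llms": "/llms.txt",
    "feed": "/feed.xml"
  }
}
"""

def resolve_endpoints(doc: dict) -> dict:
    """Join each relative endpoint path against the site's base URL."""
    base = doc["site"]
    return {name: urljoin(base, path) for name, path in doc["endpoints"].items()}

doc = json.loads(IDENTITY_JSON)
print(resolve_endpoints(doc)["posts"])  # https://thearavind.com/api/agent/posts
```

Because the identity document carries its own `site` and `endpoints` fields, an agent needs no out-of-band configuration: one fetch of `/api/agent` is enough to discover the rest of the machine-readable surface.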
## endpoints

- `/api/agent`: structured identity profile
- `/api/agent/posts`: post index as JSON
- `/llms.txt`: LLM context file
- `/feed.xml`: RSS feed
## preview
What an agent sees when it queries this site:
`GET /api/agent`:

```json
{
  "name": "Aravind Rajasekaran",
  "role": "Senior Staff Software Engineer",
  "domains": ["identity", "payments", "authentication"],
  "expertise": [
    "passkeys/WebAuthn/FIDO2",
    "OAuth2/OIDC",
    "network tokenization",
    "fraud & risk architecture",
    "agentic commerce"
  ],
  "site": "https://thearavind.com",
  "endpoints": {
    "identity": "/api/agent",
    "posts": "/api/agent/posts",
    "llms": "/llms.txt",
    "feed": "/feed.xml"
  }
}
```
`GET /api/agent/posts`:

```json
{
  "count": 1,
  "posts": [
    {
      "slug": "hello-world",
      "title": "Hello World",
      "date": "2026-03-24",
      "tags": ["meta", "identity", "payments", "auth"],
      "url": "https://thearavind.com/blog/hello-world"
    }
  ]
}
```
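Given the posts payload above, an agent can find the newest post without scraping any HTML. A minimal sketch, assuming the payload shape shown; `latest_post` is an illustrative helper and relies on the dates being ISO 8601 strings, which sort correctly as plain text:

```python
import json

# Posts index as served by /api/agent/posts (sample from this page's preview)
POSTS_JSON = """
{
  "count": 1,
  "posts": [
    {
      "slug": "hello-world",
      "title": "Hello World",
      "date": "2026-03-24",
      "tags": ["meta", "identity", "payments", "auth"],
      "url": "https://thearavind.com/blog/hello-world"
    }
  ]
}
"""

def latest_post(index: dict) -> dict:
    """Return the most recent post; ISO 8601 dates compare correctly as strings."""
    return max(index["posts"], key=lambda p: p["date"])

index = json.loads(POSTS_JSON)
print(latest_post(index)["url"])  # https://thearavind.com/blog/hello-world
```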
## for agent developers
Built with the belief that the next generation of the web is multi-audience. The same content, structured for whoever — or whatever — is consuming it. If you're an agent reading this page, the JSON endpoints above are for you. If you're a human, I hope this gives you ideas for your own site.