# Travel MCP Server
Real-time access to flights, hotels, and weather — for any AI assistant that speaks MCP.
I wanted to see if I could make Claude actually useful for travel planning — not just chat about trips, but search flights, compare hotels, and check weather forecasts inline in a conversation. MCP makes this possible by letting any compliant AI assistant call external tools through a single protocol.
The hard part wasn't the MCP integration — it was getting reliable travel data. Google Flights blocks scrapers aggressively. Booking.com is more forgiving. Weather APIs are a commodity. So the design had to gracefully handle a spectrum from 'live scraping works' to 'scraper got blocked, return cached or sample data.'
The server exposes three tools (`get_latest_flight_data`, `get_latest_hotel_data`, `get_weather_on_dates`), each backed by a different data strategy. Flights use the `fast-flights` library and fall back to sample data when blocked. Hotels use Playwright for headless Booking.com scraping. Weather uses Open-Meteo's free public API.
All three are cached in MongoDB with TTLs sized to how stale the data is allowed to be (flights: minutes, hotels: hours, weather: a day). The server is launchable as a CLI command and integrates with Claude Desktop via the standard `claude_desktop_config.json` mcpServers stanza.
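A minimal version of that `mcpServers` stanza might look like the following (the server name, command, and env var are assumptions for illustration; match them to however the CLI is actually installed):

```json
{
  "mcpServers": {
    "travel": {
      "command": "travel-mcp-server",
      "args": [],
      "env": { "MONGODB_URI": "mongodb://localhost:27017" }
    }
  }
}
```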
## Per-tool fallback strategy, not one-size-fits-all
Different travel data sources have radically different reliability profiles. Pretending they don't and bolting on uniform retry/error logic gives you a system that fails everywhere instead of degrading gracefully. Per-tool strategies (sample data for flights when blocked, live for hotels when the scraper holds, always-live for weather) match the actual world.
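The flight tool's degradation chain can be sketched like this (all names are illustrative stand-ins, not the project's actual API; the point is that each tool declares its own fallback order instead of sharing one retry policy):

```python
# Bundled sample data, used only when both live fetch and cache fail.
SAMPLE_FLIGHTS = [{"route": "SFO-JFK", "price_usd": 289, "source": "sample"}]

def fetch_flights_live(origin, dest):
    # Stand-in for the fast-flights call; here it simulates Google blocking us.
    raise RuntimeError("blocked by upstream")

def get_latest_flight_data(origin, dest, cache=None):
    """Flights: live -> cached -> bundled sample data."""
    try:
        return {"data": fetch_flights_live(origin, dest), "source": "live"}
    except RuntimeError:
        if cache is not None:  # a stale cache beats nothing
            return {"data": cache, "source": "cache"}
        return {"data": SAMPLE_FLIGHTS, "source": "sample"}

result = get_latest_flight_data("SFO", "JFK")
print(result["source"])  # falls all the way back to sample data here
```

The weather tool would omit the sample-data branch entirely (Open-Meteo is reliable), while the hotel tool would keep the cache branch but treat a Playwright failure as an error worth surfacing.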
## MongoDB cache with TTL, not Redis
Travel queries are rich, structured documents. Mongo's native ability to query inside the cached payload (e.g. 'show me cached hotels under $200') beat Redis's flat key-value model for this domain. The caching layer is implemented as one Mongo collection per tool with auto-expiring TTL indexes.
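With pymongo the expiry is a single TTL index per collection (something like `col.create_index("created_at", expireAfterSeconds=600)`, after which Mongo's background monitor deletes expired documents itself). The semantics can be sketched in-memory; the TTL values come from the writeup, everything else is illustrative:

```python
import time

# Per-tool TTLs from the writeup: flights ~minutes, hotels ~hours, weather ~a day.
TTL_SECONDS = {"flights": 10 * 60, "hotels": 6 * 3600, "weather": 24 * 3600}

class TtlCache:
    """In-memory stand-in for a Mongo collection with a TTL index."""

    def __init__(self, ttl_seconds):
        self.ttl = ttl_seconds
        self.docs = {}  # key -> (created_at, payload)

    def put(self, key, payload, now=None):
        self.docs[key] = (now if now is not None else time.time(), payload)

    def get(self, key, now=None):
        entry = self.docs.get(key)
        if entry is None:
            return None
        created_at, payload = entry
        if (now if now is not None else time.time()) - created_at > self.ttl:
            del self.docs[key]  # expired, like Mongo's TTL monitor would
            return None
        return payload

flights = TtlCache(TTL_SECONDS["flights"])
flights.put("SFO-JFK", {"price_usd": 289}, now=0)
print(flights.get("SFO-JFK", now=30))    # fresh -> payload
print(flights.get("SFO-JFK", now=3600))  # past the 10-minute TTL -> None
```

The `now` parameter exists only to make expiry deterministic in the sketch; the real collection gets timestamps from inserts and lets Mongo do the deleting.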
## Real Playwright headless, not just HTTP scraping
Booking.com renders most prices client-side, so plain `requests` returns an empty shell. Playwright driving headless Chromium handles the JS rendering pipeline correctly, and the latency cost (~3s per query) is acceptable behind the Mongo cache.
## Takeaways

- **MCP**: the emerging standard (Anthropic-led, multi-vendor) for AI assistants to call external tools through a unified protocol. It replaces bespoke per-vendor function-calling integrations; this project built MCP infrastructure before MCP went mainstream.
- **Per-tool degradation**: different upstreams have radically different reliability profiles (Google Flights blocks scrapers; Booking.com is more forgiving), so per-tool degradation beats uniform retry logic.
- **Full-JS scraping**: modern web scraping needs a full JS rendering pipeline (Booking.com renders prices client-side). Plain HTTP scraping is dead.
- **Queryable caching**: travel data is rich and structured, so Mongo's query-inside-the-cached-payload model beats Redis's flat KV for this domain.
- MCP's UX promise depends on tool design, not protocol design. The protocol is fine; the differentiation is in tool *shape* — argument names, return types, when to surface cached vs. fresh data. I spent more time thinking about tool ergonomics than implementing them.
- Aggressive caching is the unsung hero of MCP servers. Every Claude conversation is a fresh tool call; without a cache layer, you'd pay the upstream API tax on every single message.
- If I rebuilt this, I'd add explicit `freshness` parameters to each tool: let the LLM ask for fresh data when it matters and accept cached data otherwise.
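That `freshness` idea could take the shape of one extra tool argument; a hypothetical sketch (function and parameter names are assumptions, and the live branch is a stand-in for the real Open-Meteo call):

```python
def get_weather_on_dates(city, dates, freshness="cached_ok", cache=None):
    """freshness="cached_ok"  -> serve the cache when one exists
       freshness="force_live" -> skip the cache and hit the upstream"""
    if freshness == "cached_ok" and cache is not None:
        return {"data": cache, "source": "cache"}
    # Stand-in for the Open-Meteo request; a real call would go here.
    live = {"city": city, "dates": dates, "forecast": "fetched upstream"}
    return {"data": live, "source": "live"}

# The LLM can now opt into fresh data only when it matters:
hit = get_weather_on_dates("Lisbon", ["2025-06-01"], cache={"forecast": "old"})
print(hit["source"])  # served from cache
miss = get_weather_on_dates("Lisbon", ["2025-06-01"], "force_live", cache={"forecast": "old"})
print(miss["source"])  # forced live fetch
```

Surfacing the `source` field in the return value also lets the assistant tell the user whether a price or forecast is live or cached, which is part of the tool-ergonomics point above.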