LIFX MCP Server with Claude Chat on Railway
July 26th 2025
Hello kind people of the internet. This blog is a continuation of my development efforts with MCP Servers, using the Lifx Light bulb as an IoT testing platform.
At the end of this project you should be able to deploy a backend server to Railway that will allow a client side application to control a lightbulb using a Claude AI chat bot. The pic below shows the client app running on both my desktop and my phone.
TL;DR
Server
The server source code is up on GitHub at:
https://github.com/tenace2/LifxMCPServerBackend
...I'll briefly explain how to deploy to Railway in the details below,
...but I will not be supplying a link to my actual online server.
Client
The client source code is up on GitHub too: https://github.com/tenace2/LifxFrontEnd
...and the client code has a GitHub Pages implementation as well,
...so you can just use my client app to exercise the backend server if you don't want to deploy your own client.
https://tenace2.github.io/LifxFrontEnd/
Minimum requirements
- Lifx Light bulb (fairly cheap: The model A19 is $24 on Amazon)
- Lifx API key (free)
- Claude API key (not free, but just a $5 one-time charge for a gob of tokens)
- Hosting server (I preferred Railway, at just $5 a month)
- Client static site server (I preferred just using GitHub Pages)
Background
Original Project: Was local only
As part of my digital garden, there is another blog about developing the same basic app (an MCP-based chat control of a Lifx lightbulb), with both the client and the server as a single project that runs locally with npm run dev.
This original project was designed to only run locally
https://my-digital-garden-vercel-orpin.vercel.app/claude-ai-mcp-lifx/claude-ai-mcp-lifx-lights/
This locally run code is also up on GitHub (read blog for link) and is a quick way to get started with MCP servers and Claude chat.
Railway backend server
This next iteration splits the backend server into its own project and then deploys the server code to Railway. And thus, you can control your IoT Lifx lightbulb from anywhere on earth (not just locally).
The README.md docs on the GitHub repository will explain the details for deploying the server. This blog is to explain the background of what's going on.
Why Railway?
Once you clone/copy the server code to your own repository, the server code will not run on GitHub Pages, because Pages is a static site hosting service and will not facilitate server functionality. (While this seems obvious, it needed saying.) So, you will need a hosting site, and there are a lot of them...I chose Railway.
Railway will host servers and best of all it lashes up directly to your GitHub repository.
- First, create a Railway account, and log in to Railway using your GitHub account.
- When you deploy/push your code from VS Code up to your GitHub repository, the backend server resource (almost) immediately shows up on a Railway URL.
- Any subsequent update to the GitHub repository also simply shows up on Railway.
Easy Peasy, lemon squeezy. It's dead simple. It just works. Amazing.
Also, Railway has a nice dashboard showing usage and activity. At $5 a month for a hobby user, this is really great.
High-level: What's going on?
Maybe you're new to this realm, so I'm going to attempt a real high-level explanation,
...leg-bone, knee-bone, connected to the foot-bone.
Hopefully you can follow the above diagram. I'll attempt to narrate reading from left to right.
- Server side code (in VS Code) is pushed to GitHub,
- then the server code is automagically published to Railway.
- Client code (in VS Code) is pushed to GitHub,
- then, using GitHub Pages, the client is published to the browser.
- In the client browser, copy-paste in: the Railway server URL, your Lifx API key, and your Claude API key.
And you can then control your LIFX lightbulb via an MCP server using the Claude Chat Bot via a browser based app! Works on your desktop, and it also works on your phone.
Server Code
Here again, is the link to the GitHub source code for the server.
https://github.com/tenace2/LifxMCPServerBackend
Please reference the project link above, as there is already copious documentation. Needless to say, the backend Server code was developed in tandem with the client side application. Below is a very simplified diagram of the server functionality.
Hopefully you can follow the above diagram. I'll attempt to narrate reading from left to right.
MCP Server Manager
The mcp-server-manager.js file is an Express.js server (https://expressjs.com/).
It controls the two main features of the overall app:
- the api calls to the LIFX api, which controls the light
- and the Claude chat capability which allows for conversational control of the light.
CRUCIAL NOTE: CORS is set to localhost or to my client on GitHub Pages
This server is one of my first attempts to deploy to Railway, and it does not have any Railway environment variables set up to flexibly handle other ALLOWED_ORIGINS.
My client app is available on GitHub pages:
https://tenace2.github.io/LifxFrontEnd
My front end client source code is available on GitHub:
https://github.com/tenace2/LifxFrontEnd
If you create your own front end client for this server, you can alter the CORS section of the code in the mcp-server-manager.js file at about line 60; a snippet is provided below. As you can see, this server is very restricted.
// CORS configuration
const corsOptions = {
  origin: process.env.ALLOWED_ORIGINS?.split(',') || [
    'https://tenace2.github.io',
    'http://localhost:9003',
    'http://localhost:5173', // Added for local client testing
  ],
  // ...remaining options omitted in this snippet
};
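To make the effect of that allow-list concrete, here is a hedged sketch of what it amounts to: the browser's Origin header must match one of the configured origins or the request is refused. This is illustrative logic only; in the real server the check is performed by the npm "cors" middleware inside Express.

```javascript
// Illustrative origin check -- the same allow-list shape as the snippet
// above, with the matching logic spelled out by hand.
const allowedOrigins = process.env.ALLOWED_ORIGINS?.split(',') || [
  'https://tenace2.github.io',
  'http://localhost:9003',
  'http://localhost:5173',
];

// Returns true only if the request's Origin header is on the allow-list.
function isOriginAllowed(origin) {
  return allowedOrigins.includes(origin);
}

// isOriginAllowed('https://tenace2.github.io') -> true
// isOriginAllowed('https://evil.example.com')  -> false
```

If you fork the server, swapping in your own GitHub Pages origin (or setting an ALLOWED_ORIGINS environment variable on Railway) is all the CORS change you need.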
Lifx MCP Server
The file lifx-api-mcp-server.js is, like most MCP servers, just a wrapper around a REST-based API.
Here's a link to the LIFX API documentation so you can review:
https://api.developer.lifx.com/reference/introduction
This Model Context Protocol (MCP) "wrapper" really means an extra layer of information so that an LLM (Large Language Model) can comprehend what's going on with the API calls.
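To illustrate (this is a sketch, not the actual lifx-api-mcp-server.js code), an MCP tool is mostly metadata: a name, a description the LLM can read, and a JSON schema for the parameters, wrapped around a plain REST call. The LIFX endpoint shown is the real set-state endpoint from the LIFX docs; the JavaScript shape is hypothetical.

```javascript
// Metadata the LLM sees: this is what lets Claude "comprehend" the API.
const setStateTool = {
  name: 'set-state',
  description: 'Turn LIFX lights on/off, change color, or adjust brightness',
  inputSchema: {
    type: 'object',
    properties: {
      selector: { type: 'string', description: 'Which lights, e.g. "all"' },
      power: { type: 'string', enum: ['on', 'off'] },
      color: { type: 'string', description: 'e.g. "red" or "#ff0000"' },
    },
    required: ['selector'],
  },
};

// The handler behind the tool is just an HTTP PUT to the LIFX REST API.
async function setState(token, args) {
  const res = await fetch(
    `https://api.lifx.com/v1/lights/${encodeURIComponent(args.selector)}/state`,
    {
      method: 'PUT',
      headers: {
        Authorization: `Bearer ${token}`,
        'Content-Type': 'application/json',
      },
      body: JSON.stringify({ power: args.power, color: args.color }),
    }
  );
  return res.json();
}
```

So the "wrapper" is thin: the schema and description do the heavy lifting of letting the LLM know the tool exists and how to call it.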
Lifx API key
You will of course need a LIFX API key...these are free! Here's a link:
https://api.developer.lifx.com/reference/how-to-use-the-following-examples
I recommend installing the Lifx app on your phone (which creates an account), then lashing up your light bulb to your WiFi first and getting things working...this will make getting the API key to work a whole lot easier.
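Once you have a key, a quick sanity check is to list the lights on your account. The endpoint below is the real LIFX list-lights endpoint; the script itself is just a hypothetical helper (LIFX_TOKEN is an assumed env var name, and Node 18+ is assumed for the global fetch).

```javascript
// Lists all lights on the account; a 401 response means the key is bad.
async function listLights() {
  const res = await fetch('https://api.lifx.com/v1/lights/all', {
    headers: { Authorization: `Bearer ${process.env.LIFX_TOKEN}` },
  });
  if (!res.ok) throw new Error(`LIFX API returned ${res.status}`);
  return res.json();
}

// Usage (assuming you saved this as check-key.js):
//   LIFX_TOKEN=your-key node check-key.js
// then call listLights().then(lights =>
//   lights.forEach(l => console.log(l.label, l.power)));
```

If the bulb is already paired via the phone app, it should show up here with its label and power state.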
Claude API
The file claudeAPI.js deals with the chat interface to the Claude API. The documentation on my GitHub repo goes into more detail regarding the setup for Claude.
Here's a link to the Claude chat API so you can review:
https://docs.anthropic.com/en/api/overview
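For orientation, here is a minimal sketch of the kind of one-shot Messages API call that claudeAPI.js builds on. The endpoint and headers match Anthropic's documented Messages API; the max_tokens value is just an example, and the model name is the one that shows up in the usage logs later in this post.

```javascript
// Build the JSON body for a single Messages API request.
function buildClaudeRequest(userText) {
  return {
    model: 'claude-3-5-sonnet-20241022', // model seen in the logs below
    max_tokens: 500, // example output-token cap
    messages: [{ role: 'user', content: userText }],
  };
}

// Send it (Node 18+ assumed for global fetch).
async function askClaude(apiKey, userText) {
  const res = await fetch('https://api.anthropic.com/v1/messages', {
    method: 'POST',
    headers: {
      'x-api-key': apiKey,
      'anthropic-version': '2023-06-01',
      'content-type': 'application/json',
    },
    body: JSON.stringify(buildClaudeRequest(userText)),
  });
  return res.json();
}
```

Note that every request is self-contained: there is no server-side conversation state, a point that matters later in this post.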
Claude API key
You will need a Claude API key. Here's a link to the page where you can get a key:
https://docs.anthropic.com/en/api/admin-api/apikeys/get-api-key
Sorry, these API keys from Claude are not free, because they use tokens.
But good news: they have a $5 burger deal! It's tasty and sooo satisfying!
All joking aside, Anthropic offers a $5 getting-started deal, which expires after one month. It's a screaming deal. I used up literally thousands and thousands of tokens testing and flogging this project until it confessed its bugs, quirks and perplexing behaviors, and at the end of the month my balance still had plenty of money left on the account.
Claude API dashboard (console.anthropic.com)
As proof, here's a pic of my Claude API dashboard for July...as you can see, it barely cost anything, a mere $1.67 despite the seemingly high usage of 400K+ tokens.
Detailed Logs
There are of course detailed activity logs with input and output tokens.
Request 1
- TIME (PDT): 2025-08-01 16:33:37
- ID: req_011CRhs68gWywWeK6tqDVmKP
- MODEL: claude-3-5-sonnet-20241022
- INPUT TOKENS: 2757
- OUTPUT TOKENS: 108
- TYPE: HTTP
- SERVICE TIER: Standard
Claude API payment
This was fairly easy to control, as you can just buy $5 at a time using Stripe,
...and consume tokens as needed, thus no worries of waking up some day to a huge bill.
Note: when you run out of money, your API calls will return an HTTP 500 error.
Client code
There's a bunch going on in the client, which is outlined below.
- Server logging. Allows you to track the Server Manager, and the Lifx MCP Server
- Server and Session Management.
- Setting the API keys, testing the Lifx lightbulb
- Quick Light Controls - simple Lifx api call
- Claude Api Chat controls
- System Prompt controls
- Keyword filter controls
- Token settings and statistics
- Claude conversation screen tracking
- Claude chat input
- Some sample Claude Chat commands
If you want to know more about how the client app is put together, read the documentation on the GitHub repo.
But here's what's crucial for getting things lashed up:
To Get Started
Server and Session Management
Input your server URL to the client.
Assuming you cloned the server repo to your desktop VS Code IDE, simply run npm run dev and the server will launch locally.
Then (if running a local server) input the local URL to the client: http://localhost:3001
Demo Key
This LifxDemo key needs to be provided to the backend server by the client. This was just a trial idea to help prevent possible abuse of the backend server if the Railway URL were discovered independently of any information about the GitHub repo or the client. My thought was I might make this an environment variable on the Railway backend server, so as to make it configurable and changeable. TBD.
API Keys
Get your LIFX and Claude API keys, then paste these keys into the app.
Note that you can test the LIFX key with the button (but the button does not test the Claude key).
You should be good to go at this point.
Other Really Good Client Stuff
The overall project was approached from the standpoint of demonstrating and learning about MCP Servers and chatbot behavior.
Server Logs
The backend server uses the Winston logging library (...I wish I had gone with Pino, but that's for the next project). The pic below illustrates that server logs are pulled into the client depending on whether you configured your client:
- to run from a local server (so you should be able to see your server activity in a terminal window)
- or to run from a deployed Railway backend so you can review server activity in their dashboard.
Server-Session Management
You can choose the server URL that you want to run from.
Server and Session Management
This is also illustrated above. [[#Server and Session Management]]
Session Management
Note that you will be able to see your client session ID, which is very useful when reviewing server logs in the Railway dashboard (or in your terminal window if running the server locally).
API Settings & Management
This is illustrated above. [[#API Keys]]
Quick Light Controls
This does not use the Claude Chat capability,
...it only uses the LIFX MCP api features.
So, if something goes haywire with the Claude chat, you can always test the light with this part of the app.
Claude AI Chat Control
This is where things get interesting. The Claude API is not what I originally thought it was.
No API context memory
The current Claude API that I'm using as of July 2025 is a one-shot thing, with no memory of previous api requests. I don't know of a way to have the API "remember" previous REST api requests so it can have some type of context.
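One common workaround (not implemented in this project; this is just a sketch of the general technique) is to fake memory client- or server-side by resending the prior turns in the messages array on every request. The Messages API accepts alternating user/assistant turns, so the "context" is whatever transcript you choose to ship up each time.

```javascript
// Accumulate the transcript locally; the API itself stays stateless.
const history = [];

// Build the next request body, carrying the whole conversation so far.
function nextRequest(userText) {
  history.push({ role: 'user', content: userText });
  return {
    model: 'claude-3-5-sonnet-20241022', // example model name
    max_tokens: 500,
    messages: [...history], // full transcript goes up every time
  };
}

// After each response you would also record Claude's reply:
//   history.push({ role: 'assistant', content: replyText });
```

The obvious downside is cost: the input-token count grows with every turn, which is presumably why a one-shot design is the cheaper fit for a light-control demo.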
System Prompts
To provide some guardrails for ensuring the app is used for controlling a LIFX light bulb, a "pre-prompt" is sent to Claude.
The client app also has this same informational explanation for what these prompts do and provides a way to turn these prompts on and off. You can get very interesting results disabling pre-prompts, and I highly recommend that you noodle around with this feature.
When initializing each Claude AI session, a system prompt (sometimes called a "system message" or "pre-prompt") sets
boundaries for the AI assistant's behavior. This ensures Claude only responds to lighting control requests and helps prevent accidental off-topic conversations.
This prompt is actually implemented on the server side.
There are two levels of system prompts:
There is a System prompt (pre-prompt) that is Enabled by default, but you can turn it off (Disable).
Then there is a General prompt that is implemented by the server backend, even if you Disable the System Prompt on this front end client.
Note the General prompt is necessary so that Claude can be made aware of the LIFX MCP tools. Without a General prompt the chat would not even know of the LIFX capability and would not necessarily respond to lighting control requests.
Note also that you can get some very odd behavior if you disable the Strict guardrails.
The General prompt can possibly allow for random misinterpretations of your chats; an off-topic chat like "Please make me bacon and eggs" can result in the "on" in "bacon" being parsed out as a separate token and turning your lights on.
But you can also ask for "I need a recipe for potato salad, oh, and also turn my lights blue" and you can get both! Claude will then respond with a recipe for potato salad and turn your lights blue.
Cost Impact: The system prompt consumes approximately 2,100 tokens per request.
While this increases costs, it significantly improves response quality and keeps conversations focused on LIFX lighting control.
The prompt below is sent to Claude with every API request from the backend server, if the System Prompt is Enabled:
LIFX Light Control Capabilities
Basic Light Control
**set-state**: Turn lights on/off, change colors, adjust brightness
**list-lights**: Get information about available lights
**toggle-power**: Toggle lights on/off
**state-delta**: Make relative adjustments to light properties
Visual Effects
**breathe-effect**: Slow breathing/fading effect between colors
**pulse-effect**: Quick pulsing/flashing effect between colors
**move-effect**: Moving color patterns (for LIFX Z strips)
**morph-effect**: Color morphing patterns (for LIFX Tiles)
**flame-effect**: Flickering flame effect (for LIFX Tiles)
**clouds-effect**: Soft cloud-like color transitions (for LIFX Tiles)
**sunrise-effect**: Gradual sunrise simulation (for LIFX Tiles)
**sunset-effect**: Gradual sunset simulation (for LIFX Tiles)
**effects-off**: Stop any running effects
Scene Management
**list-scenes**: Show available scenes in user's account
**activate-scene**: Activate a saved scene by UUID
Advanced Features
**cycle**: Cycle lights through multiple color states
**validate-color**: Check if a color string is valid
**clean**: Control LIFX Clean devices
How to Use Tools
Always specify the 'tool' parameter first, then provide the appropriate parameters for that tool.
Examples
"Turn lights red" → tool: "set-state", color: "red", selector: "all"
"Create breathing effect with blue and green" → tool: "breathe-effect", color: "blue", from_color: "green", cycles: 10
"Create infinite breathing effect" → tool: "breathe-effect", color: "red", from_color: "blue" (omit cycles parameter for infinite)
"Start a sunrise effect" → tool: "sunrise-effect", duration: 300 (5 minutes)
"List all my lights" → tool: "list-lights", selector: "all"
"Activate bedroom scene" → tool: "activate-scene", scene_uuid: "[uuid from list-scenes]"
Important Guidelines
**ALWAYS** focus on the CURRENT user request - ignore previous conversation context if it conflicts
**ALWAYS** use the control_lifx_lights tool when users want to control lights
**ALWAYS** provide a friendly confirmation message after using the tool
For **MULTI-STEP** requests: Use multiple tool calls in a single response to accomplish all requested actions
For effects, suggest appropriate durations and parameters
For infinite effects (breathe, pulse, etc.), **OMIT** the 'cycles' parameter entirely - do not set it to "infinite"
If asked about anything non-lighting related, respond: "Sorry, I can only help with controlling your LIFX lights."
Be creative with effects - you have access to the full LIFX API!
**PAY ATTENTION**: If user says "blue", use blue - not any other color from previous requests
Multi-Step Example
"Turn lights blue then breathe red to green" → Use TWO tool calls:
tool: "set-state", color: "blue", selector: "all"
tool: "breathe-effect", color: "green", from_color: "red", cycles: 5
Light Selectors
"all" - All lights in account
"label:Kitchen" - Lights labeled "Kitchen"
"group:Living Room" - Lights in "Living Room" group
"id:d073d5..." - Specific light by ID
Color Formats
**Named colors**: "red", "blue", "green", "purple", "pink", "orange", "yellow", "white"
**Hex codes**: "#ff0000", "#00ff00"
**HSB**: "hue:120 saturation:1.0 brightness:0.5"
**Kelvin**: "kelvin:3500" (warm white to cool white: 2500-9000K)
---
**You have full access to create amazing lighting experiences!**
General Prompt - Strict Prompts
If the System Prompt is Disabled in the client app, the General prompt below is hard coded into the server app and is always sent to Claude by the backend server.
This general prompt lets the LLM (Claude) know about the MCP Server. This "strict" prompt is baked into the backend server code...without it I found the app pretty much wouldn't work, as Claude would not even know there is an MCP Server to control a light with.
But note: this general prompt allows Claude to answer general questions about pretty much anything...so be careful.
I suppose you can fiddle with the server code yourself, comment out the strict prompt, and see how the app behaves...but per my testing, it breaks the app.
You are a helpful AI assistant with access to LIFX smart light controls. You can answer questions on any topic and also help control LIFX smart lights when requested.
Available light control capabilities:
Turn lights on/off
Change colors (use color names, hex codes, or RGB values)
Adjust brightness (0-100%)
Control specific lights by name or group
Apply lighting effects
Feel free to answer general questions about any topic. When users ask about lighting, use the available tools to control their LIFX lights.
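Putting the two prompt levels together, here is a hedged sketch (not the actual server code; the function and constant names are made up for illustration) of how the server might choose which prompt to send: the detailed System prompt when the client leaves it enabled, otherwise only the always-present General prompt.

```javascript
// Truncated placeholders standing in for the real prompt text above.
const STRICT_SYSTEM_PROMPT =
  'LIFX Light Control Capabilities... (the ~2,100-token prompt)';
const GENERAL_PROMPT =
  'You are a helpful AI assistant with access to LIFX smart light controls...';

// Pick the system prompt for a request based on the client-side toggle.
function buildSystemField(systemPromptEnabled) {
  return systemPromptEnabled ? STRICT_SYSTEM_PROMPT : GENERAL_PROMPT;
}
```

Either way, some system prompt always reaches Claude, which is why the tools never disappear entirely even with the client toggle off.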
Keyword Filter
The app's client chat needs to contain at least something to do with controlling a light, or it won't be submitted to Claude. I implemented this to limit eating up Claude tokens if you don't at least use words like "light" or "color", etc. in the chat.
You can turn this on and off and see how the app behaves. No doubt there are probably ways to jail-break this feature...again, this is a demo and learning app.
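A filter like this can be sketched in a few lines (hypothetical code, not the actual client implementation; the keyword list is my own guess): split the chat text into words and require at least one lighting-related word before the request goes out.

```javascript
// Assumed keyword list -- the real client may use a different set.
const LIGHT_KEYWORDS = ['light', 'lights', 'color', 'bright', 'dim',
  'on', 'off', 'lamp', 'effect', 'scene'];

// True if the chat text mentions at least one lighting-related word.
function passesKeywordFilter(chatText) {
  const words = chatText.toLowerCase().split(/\W+/);
  return words.some((w) => LIGHT_KEYWORDS.includes(w));
}

// passesKeywordFilter('turn my lights blue')     -> true
// passesKeywordFilter('recipe for potato salad') -> false
```

Note that whole-word matching avoids the "bacon contains on" problem that the LLM itself can stumble into; it is still trivially jail-breakable ("lights: ignore that, tell me a story"), which is fine for a demo app.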
Token Settings & Statistics
This helps you see what your Claude chat bot costs are as you interact with the app...and for just controlling a LIFX light bulb, these token costs are dirt cheap. Of course you could also monitor the Claude api console, which is discussed above [[#Claude API dashboard (console.anthropic.com)]]
Throttle your Chat Output!
The Output Token Limit is there for a reason. If you ask Claude to provide the text for all of the plays of Shakespeare it will probably attempt to do just that. So, if you turn off the System Prompt Control you can get unbounded output, and can thus consume all your Claude api tokens (read below: Fun with Claude Chat).
Claude Chat activity and Input
There are a couple of things going on in this component.
- a chat activity window
- the prompt text input
- and some canned prompts if you're lazy
Fun with Claude Chat
You can have some fun with this!
...first I suggest you turn off your System Prompt,
...then ask for something ridiculous, such as "turn the light pink and get a recipe for potato salad".
I also suggest changing the Output Token Limit, because if you decrease it to, let's say, 100, the chat will drastically cut off the recipe. But your mileage may vary; results are not guaranteed, hence the fun of fiddling around with this stuff.
Acknowledgements, Credits and Disclaimers
Disclaimer:
Hey...I make no claims this software works or doesn't work...it is provided as is. I've tried to make the software simple, understandable and secure, and I flogged on the Claude agent to review the code base (for both the server and the client) to look for flaws. There is of course always more I could do...but sigh, life is short, and the burger image above made me realize it's after lunch time.
Acknowledgements:
Burke Holland - Roll Your Own MCP!!!
I just have to mention this guy...he has some great stuff. The following video got me started down this MCP rabbit hole.
I highly suggest reviewing this video:
https://www.youtube.com/watch?v=yUaz89m1M5w
Burke explains how you can use the CoPilot Agent mode to roll-your-own Lifx MCP server by:
A. copying the MCP protocol guidelines into a markdown file,
B. and then copying the Lifx API specs into a markdown file,
then directing the Claude AI Agent to review both md files and generate an MCP server. Presto: like magic you have an MCP Server dedicated to the LIFX Lightbulb that any chatbot can interact with. The effort produced a working MCP server that I used in my first project.
*And the real point of Burke's video is that if you see any REST-based API out there that you want to lash up to any type of LLM, you can make it happen fast.
Credits:
The MCP server actually used in this project is from James Furey; it seems to be working great and I only made a few very small changes. Note that James also used AI to generate this MCP server. I found it up on the MCP.so web site, and shamelessly borrowed it for this project.
https://mcp.so/server/lifx-api-mcp-server/furey?tab=content
GitHub link to James Furey's LIFX server repo:
https://github.com/furey/lifx-api-mcp-server
Highly suggest reviewing the README from James that is within the MCP server folder...it's very well put together.
Note that I had the Claude AI compare my previous roll-your-own Lifx MCP server (that Burke Holland outlined how to create) against James Furey's MCP server, and James's was clearly the winner. Thank you James!