
We need your help! We’re exploring how AI can support real workflows across Lucid and other tools. Before we build too far, we want to learn where people actually want help so we can focus on the things that matter.

We’re kicking off early research into our Model Context Protocol (MCP) APIs. These APIs could help tools like Copilot, Claude, or Gemini work smarter inside Lucid. But we don’t want to guess what people need — we want to build based on real workflows, so we’re starting here.


👉 Take this 2-minute survey (https://t.maze.co/397951544) to tell us what AI workflows you’d actually use.


Got thoughts or questions?
Drop a reply in this thread. What jumps out to you? Are there AI workflows we didn’t mention that should be on our radar?

Thanks for helping us build this right!

I’ll be reading and responding to every comment — excited to learn from you.


Thank you for the invitation to share our thoughts, @cesar m.

I’ve just completed the Maze survey and I hope the input was helpful to your team.

Looking forward to seeing how the MCP evolves and supports the work we do every day.

Happy to share more insights if needed!


Excited to contribute to this research! Just took the survey.


My use case (in software development) is that I’m drawing BPMN diagrams in Lucidchart and writing textual specs in Google Docs. I want to feed the text and diagrams to Claude Code in order to generate acceptance test procedures, e.g., setting up test data for the database, for each scenario.

Currently I’ve generated Python code to authenticate and download the specification Google Doc, I’m exporting diagrams as PDF from Lucid, and I’m feeding all of them to Claude to do its magic.
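As a sketch of that manual pipeline (the function name and prompt wording are purely illustrative, not from any real tooling), the downloaded spec text and the names of the exported diagram PDFs could be stitched into a single prompt like this:

```python
def build_claude_prompt(spec_text: str, diagram_names: list[str]) -> str:
    """Assemble one prompt from the spec text (downloaded from Google Docs)
    and the filenames of Lucid diagrams exported as PDF (attached separately)."""
    parts = [
        "Generate acceptance test procedures (including database test data)",
        "for each scenario in the spec and BPMN diagrams below.",
        "",
        "=== Specification ===",
        spec_text.strip(),
        "",
        "=== Attached BPMN diagrams (PDF) ===",
    ]
    # One bullet per exported diagram so the model knows what was attached.
    parts += [f"- {name}" for name in diagram_names]
    return "\n".join(parts)

prompt = build_claude_prompt(
    "Scenario 1: customer places an order.",
    ["order-flow.pdf", "refund-flow.pdf"],
)
```

The authentication and download steps are left out here; they depend on your Google credentials setup.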

In MCP, I’d be very interested in seeing something that would allow my local Claude Code to connect to Lucid’s BPMN diagrams and understand the structural content there, without Claude needing to decipher pictures/PDFs.
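To make that concrete: here is a guess at the kind of structured payload a Lucid MCP tool might return for a BPMN diagram. The JSON shape and field names are entirely invented for illustration; the point is that a model can walk shapes and flows directly instead of reading a rendered PDF.

```python
import json

# Hypothetical MCP tool output for a BPMN diagram -- not a real Lucid response.
mcp_response = json.loads("""
{
  "shapes": [
    {"id": "s1", "type": "startEvent", "label": "Order received"},
    {"id": "s2", "type": "task", "label": "Validate payment"},
    {"id": "s3", "type": "endEvent", "label": "Order confirmed"}
  ],
  "flows": [
    {"from": "s1", "to": "s2"},
    {"from": "s2", "to": "s3"}
  ]
}
""")

def walk_flow(resp: dict) -> list[str]:
    """Follow sequence flows from the start event, returning labels in order."""
    by_id = {s["id"]: s for s in resp["shapes"]}
    nxt = {f["from"]: f["to"] for f in resp["flows"]}
    cur = next(s["id"] for s in resp["shapes"] if s["type"] == "startEvent")
    path = [by_id[cur]["label"]]
    while cur in nxt:
        cur = nxt[cur]
        path.append(by_id[cur]["label"])
    return path

print(walk_flow(mcp_response))
# -> ['Order received', 'Validate payment', 'Order confirmed']
```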


I would want to describe an AWS, Azure, or Google Cloud architecture and have it produce an architecture diagram, and also read an existing diagram and generate matching Terraform or Ansible scripts.


In code, you need a lot of details, and usually it doesn’t make sense to add all of them to diagrams, lest they become unreadable. I recently created a non-trivial GCP architecture with Claude Code and Terraform, and that combination worked great. I’d be happy if I could somehow command Claude to use Lucidchart to draw architecture diagrams based on existing Terraform code. Diagrams could be drawn at various levels of abstraction, i.e., overview and detailed.
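The Terraform-to-diagram direction can be sketched very roughly: extract resource nodes and reference edges from the HCL source, which is the graph a diagramming tool would need. This is a toy, assumed approach using a regex rather than a real HCL parser, and the function names are made up.

```python
import re

TF_SOURCE = '''
resource "google_compute_network" "vpc" {
  name = "main-vpc"
}
resource "google_compute_instance" "web" {
  network = google_compute_network.vpc.name
}
'''

def terraform_graph(src: str):
    """Extract (nodes, edges) from Terraform source.
    A regex sketch only -- real HCL needs a proper parser."""
    blocks = re.findall(r'resource\s+"([^"]+)"\s+"([^"]+)"\s*\{([^}]*)\}', src)
    nodes = [f"{t}.{n}" for t, n, _ in blocks]
    edges = []
    for t, n, body in blocks:
        # A mention of another resource's address counts as a dependency edge.
        for other in nodes:
            if other != f"{t}.{n}" and other in body:
                edges.append((f"{t}.{n}", other))
    return nodes, edges

nodes, edges = terraform_graph(TF_SOURCE)
```

An overview diagram could then collapse nodes by resource type, while a detailed one keeps every node; that distinction lives entirely in how the graph is rendered.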


Thanks for all the thoughtful responses. This thread is already influencing what we prioritize.

@Ria S, @Humas1985 — appreciate the survey responses and the openness to share more. We’re clustering feedback now to identify the most impactful patterns.

@rocketman, @Kbrewer — the direction you’re pointing to is aligned with where we want to go. We’re especially interested in enabling AI to understand structured diagrams, connect information across tools, and support generation workflows in a more intelligent way.

We’re starting with foundational APIs and layering up from there. Please keep sharing edge cases like these. It helps us shape what’s next.


I see some unofficial Lucid MCP servers on GitHub. I’d prefer not to spend time on those if possible — do you have an approximate quarter for when you would release v1.0 or a beta of the official Lucid MCP server?


@jfudge — I don’t have a public date to share yet, but our goal is to get an early version into customers’ hands as soon as possible this year. I’d recommend holding off on third-party versions, since the official release will be supported, secure, and kept up to date. We’ll share more details here as soon as we have a timeline.


It would be nice if you could somehow implement diffing functionality. When you’re modifying code with an AI, you usually have the option to review the proposed changes before applying them; something similar would be nice in the Lucid + AI combination too. Or at least an “undo latest changes done by AI via MCP” kind of functionality would be needed. While I’m keen on interacting with diagrams using AI, I wouldn’t want to worry about accidentally creating a mess that takes an hour to clean up.
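A minimal sketch of what such a review step could compute, assuming a diagram snapshot is reducible to a mapping of shape id to label (a deliberate simplification; real diagrams carry geometry, styling, and connections too):

```python
def diagram_diff(before: dict, after: dict) -> dict:
    """Preview AI-proposed changes as added/removed/changed shapes,
    keyed by shape id, so the user can review before applying."""
    added = {k: after[k] for k in after.keys() - before.keys()}
    removed = {k: before[k] for k in before.keys() - after.keys()}
    changed = {k: (before[k], after[k])
               for k in before.keys() & after.keys() if before[k] != after[k]}
    return {"added": added, "removed": removed, "changed": changed}

before = {"s1": "Users", "s2": "Frontend", "s3": "Query Engine"}
proposed = {"s1": "Users", "s2": "React Frontend UI", "s4": "API GW"}
diff = diagram_diff(before, proposed)
# Keeping the `before` snapshot around also doubles as a crude one-step undo.
```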


This is me interacting with Kiro in this case while trying to build an ASCII version of an architectural diagram:

in the architecture diagram in section 1.1, the react frontend component should interact with the API GW only. It doesn't go directly to the Query Engine. We need to add another layer between the [step functions, elasticache, query engine] layer and the [users, React Frontend UI, API GW] layer. This should have the [aws cognito] component in it. The query engine should then be connected to the [aws cognito] component and to the [api gw] component. The bottom layer should be ordered as [users, React Frontend UI, API GW] since the [users] interact with the [react frontend ui] which interacts with the [api gw].

This is the sort of conversation I want to have to build Lucid diagrams via MCP rather than building them in ASCII.
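A structured equivalent of that prompt could look like the sketch below: layers and connections as plain data, plus a check for the constraint the prompt states (the frontend must not talk to the Query Engine directly). The data format is invented for illustration, not any actual MCP schema.

```python
# Layers from the prompt above, top to bottom; ordering within a layer matters.
layers = [
    ["step functions", "elasticache", "query engine"],
    ["aws cognito"],
    ["users", "React Frontend UI", "API GW"],
]

# Edges described in the prompt (the API GW -> cognito link is assumed wiring).
connections = [
    ("users", "React Frontend UI"),
    ("React Frontend UI", "API GW"),
    ("API GW", "aws cognito"),
    ("aws cognito", "query engine"),
    ("API GW", "query engine"),
]

def violates(conns, forbidden=("React Frontend UI", "query engine")) -> bool:
    """True if the frontend connects to the Query Engine in either direction,
    which the prompt explicitly forbids."""
    return forbidden in conns or tuple(reversed(forbidden)) in conns

print(violates(connections))
```

Data like this is trivial for a tool to validate and render, which is exactly what gets lost when the same intent is expressed as ASCII art.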