
Calling all Engineers: Help us shape the future of AI at Lucid! ⚙

  • March 30, 2026

tlykins

Hello everyone!

I’m Taylor from the Lucid Product team, and I have a specific request for the builders and coders here!

We are spinning up a new AI-focused research team, and we’re looking for a small, dedicated group of long-term partners to help us bridge the gap between AI innovation and technical workflows.

Rather than a one-off feedback session, we’re looking to build long-term partnerships with engineers and IT professionals who live in code, engaging in an ongoing dialogue with our team. You’ll get a first look at what we’re building, and your technical expertise will directly influence our roadmap.

Our ideal partners are software engineers, DevOps professionals, IT architects, or anyone whose daily work involves writing and managing code. You don’t need to be an AI expert to join! By participating, you will have a direct impact on the tools you use by collaborating alongside the people actually building them.

As a thank you for your time, we are happy to offer a $50 gift card! If you're interested, please drop a comment below or react to this post! Looking forward to building something cool together!

Comments


Hi @tlykins,

I am Aryan, currently pursuing my master’s in AI. I recently spent some time testing Lucid AI on the platform, and within roughly 20 minutes of interaction a few issues stood out that I felt were important enough to share.

  • First, the agent currently seems to operate more like a general API call wrapper than a context-aware assistant grounded in the platform itself. For example, when prompted with homework questions, programming questions, or even unrelated philosophical prompts, it still responds at length rather than staying scoped to the product context. If that is intentional, then fair enough, but from a cost and product-design perspective it appears likely to drive unnecessary token usage over time.
  • Second, it struggles on longer-context reasoning tasks. It does not seem to reliably track the user’s intent, the purpose of the conversation, or the broader context of the interaction. From the outside, this suggests that the retrieval and context-handling pipeline may need improvement, especially if the goal is to support more grounded, high-value user interactions rather than shallow prompt-response behavior.
  • Third, and most concerning, the guardrails appear too permissive. When I tested boundary behavior, the agent began disclosing internal behavior about how it is structured to respond for certain business use cases, including how it defaults to geometry and design settings under uncertainty. When pushed further into an “audit mode,” it also started describing internal access patterns, admin permissions, logging visibility, and how certain shielded or closed-call API methods interact. That kind of internal disclosure should ideally be blocked much earlier.

I wanted to flag this because I think the product has real potential, but these areas likely need attention, especially around grounding, cost discipline, long-context reasoning, and internal safety boundaries.

Also, I am actively looking for a summer internship and would genuinely love to help if there is any opportunity to contribute on the product, evaluation, or AI systems side.

For reference, here is my LinkedIn:
linkedin.com/in/aryan-dwivedi-20aa56202

Best regards,
Aryan


chrisklaus
Lucid Legend Level 5
  • April 16, 2026

Hi Taylor,
If you still need more partners, I would be happy to help.
We are using Lucid to help drive our AI product development at Chalice AI, and there are certainly capabilities that would make our work easier.
Thank you!
Chris