
Hi Lucid Community, we are excited to announce a new way for you to increase your efficiency and productivity when working with others in Lucid: Collaborative AI!

By incorporating artificial intelligence and machine learning capabilities across the Lucid Suite, our products will help you unlock the power of divergent thinking, be more creative and innovative, and get more done in less time. Overall, our goal is to provide the following capabilities to your teams with Collaborative AI:

  • Unlock the power of divergent thinking: identify patterns and connections, foster inclusivity and diversity when problem solving, and facilitate collaboration between teammates with different perspectives.
  • Supplement your team’s thoughts by sparking additional ideas and trying to cover all angles: leverage vast data to identify unnoticed problems, capture a wider range of perspectives and ideas, and reduce cognitive bias when identifying key themes.
  • Increase productivity and efficiency across teams: automate time-consuming tasks, allocate resources effectively, and free up time for creativity and innovation.

 

Our goal is for Collaborative AI to integrate seamlessly into your workflow in these ways, and we want to understand what would make this experience powerful and useful for you. Please use this thread to share questions, concerns, and ideas surrounding AI in Lucid. Here are some conversation starters to get your gears turning:

 

✨ How comfortable are you with incorporating AI into your workflow?

✨ How would you like to use AI within Lucid?

✨ What repetitive tasks do you complete within Lucid that would benefit from automation?

✨ When brainstorming or planning using Lucid, what themes, patterns, or needs would be helpful to identify?

✨ What types of data do you analyze using Lucid?

✨ What would you need or want to have in order to start using AI in your workflows?

✨ Beyond Lucid how do you see AI being impactful within your line of work?


Note: Collaborative AI is currently in beta. Please refer to the Collaborative AI article in the Lucid Help Center for up-to-date details and functionality information.

I would love Lucid A.I. to be able to notify other users about a diagram they might find interesting based on diagrams they've created themselves.

I would love Lucid A.I. to be an impartial facilitator in Workshops using Lucidspark.


Thanks, Jamal, we appreciate your insight! Our development teams are actively working on how AI can enhance your workflows and make them easier; we'll pass this feedback along to them. Thanks again!


AI image generation could greatly increase the speed of design facilitation and discovery in product development. It would also help improve adoption of Lucid across our company as a whole by accelerating conversations, workshops, brainstorming, and the testing of concepts before more work is generated or the conversation is blocked.


Thank you for the feedback and thoughts, Matthew! This is something we have thought about, and we will look into it some more.


Colleagues, "Create AI-generated diagrams using the Lucid ChatGPT Plugin" is a novel idea in the arena of "generative AI." Certainly there is a wealth of information and capability in GPT-4.


However, perhaps it would be more beneficial to interface with GPT-4 or another ML environment in such a way that GPT can support key artifacts within the diagram in an intelligent fashion.


One thought on this might be as simple as a BPMN gateway, SysML activity-diagram decision, etc., where parameters for the decision are instantiated as properties on the upstream objects that then influence the decision at the gateway.


What if those upstream objects could themselves issue a query to GPT that would return a result that, in turn, influences the decision at the downstream gateway?
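To make the idea concrete, here is a minimal sketch of a gateway whose branch is driven by properties set on upstream shape-objects. The property names (`risk_score`, `approval_threshold`) and the function itself are invented for illustration; Lucid exposes no such API today, and an upstream object's GPT query would simply populate these values before the gateway reads them.

```python
# Hypothetical sketch: a BPMN-style gateway whose branch is chosen by
# properties collected on upstream shape-objects. Property names are
# invented for illustration.

def evaluate_gateway(upstream_properties: dict) -> str:
    """Pick a branch based on properties populated upstream (e.g. by a GPT query)."""
    score = upstream_properties.get("risk_score", 0.0)
    threshold = upstream_properties.get("approval_threshold", 0.5)
    return "approve" if score >= threshold else "deny"

# When the process token reaches the gateway, it just reads the stored values.
print(evaluate_gateway({"risk_score": 0.72, "approval_threshold": 0.5}))  # approve
```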


I think it is important to delve into the capabilities at a more substantial level.


What are your thoughts?


As an addendum to the above, and per other posts (and preferably without a DevOps effort on the part of the customer to extend Lucid), there could be significant benefit in reaching into environments like Google Colab to execute Python programs that use ML techniques with TensorFlow, scikit-learn, Pandas, NLTK, etc. to provide significant, focused ML analytics and results back to the diagram in Lucid. I have written extensively on other posts. This would perhaps radically open up the arena for intelligent diagrams, beyond conditional formatting of object/shape color, etc. Could one go so far as to "model" a neural net? Could SysML models "come to life"? Could BPMN diagrams provide simulations far beyond simple step-by-step evaluation? This capability could take Lucid from a "dumb drawing" to a real-time, live, interactive, intelligent modeling capability that would redefine your market.


Just a thought.


John, thanks for sharing those thoughts! We're very much aligned with the vision of making Lucidchart diagrams (and Lucid canvases in general) intelligent beyond a simple visual representation. We think and invest a lot in ways to make diagrams closer to working models of the systems or data they represent.


Could you share an example of the kind of use case or modeling you're trying to do this way? I think I understand some of the components you propose (objects being able to take action and store/generate data that then affects other objects on the diagram, doing so through a variety of potential methods and integrations, including external ML/LLM APIs), but it would help to understand how you see them coming together to accomplish a particular workflow or use case.


Anthony - yes, I'd be glad to. Let's take a very simple example. Let's say we had a 2-step process with a decision:
Step 1: Collect data from the loan applicant (info stored as properties in the object, taken from a data source like an XLS or Google Sheet).
Step 2: Send the info collected in Step 1's properties to ML to determine the applicant's viability to receive the loan (upstream properties), and store the result in a Step 2 property for the loan-applicant viability rating.
Decision: Based on the Step 2 property with the applicant viability rating, make a decision whether or not to give the loan.
 Decision branch A: advise the loan officer to approve the loan.
 Decision branch B: advise the loan officer to deny the loan.
This would be a very simplistic way to author a workflow and reuse it given updates to the data in the data source.
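The two steps and the decision above can be sketched in plain Python. The field names and the hand-weighted logistic scorer here are invented for illustration; a real deployment would replace `step2_score` with a call out to an actual ML service, and `step1_collect` with a read from the spreadsheet data source.

```python
import math

# Sketch of the two-step loan workflow. The scoring function is a
# hand-weighted logistic stand-in for a real ML model; field names
# and weights are invented for illustration.

def step1_collect(applicant: dict) -> dict:
    """Step 1: properties pulled from a data source (XLS / Google Sheet)."""
    return {"income": applicant["income"], "debt": applicant["debt"],
            "credit_score": applicant["credit_score"]}

def step2_score(props: dict) -> float:
    """Step 2: send upstream properties to 'ML'; return a viability rating in [0, 1]."""
    x = (0.00002 * props["income"] - 0.00004 * props["debt"]
         + 0.005 * (props["credit_score"] - 650))
    return 1.0 / (1.0 + math.exp(-x))

def decide(viability: float, threshold: float = 0.5) -> str:
    """Decision gateway: branch A (approve) or branch B (deny)."""
    return "advise approve" if viability >= threshold else "advise deny"

props = step1_collect({"income": 85000, "debt": 20000, "credit_score": 720})
print(decide(step2_score(props)))  # advise approve
```

Re-running the pipeline after the data source changes re-derives the rating and the branch, which is the reuse property described above.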


Another approach would be to take a recorded meeting transcript (from Teams/Zoom/etc.), run it through Python/NLTK to preprocess and thin out the important info, then send that to GPT-4 to generate the Lucid BPMN process diagram ... hmmm
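The preprocessing step could be as simple as tokenizing the transcript and dropping filler words before handing the residue to GPT-4. This sketch uses a small hard-coded stopword set as a stand-in for NLTK's `nltk.word_tokenize` and `nltk.corpus.stopwords`, to keep it self-contained; the transcript sentence is invented for illustration.

```python
import re

# Stand-in for the NLTK preprocessing step: tokenize a meeting transcript
# and thin it by removing stopwords, leaving compact terms to send to GPT-4.
# A real pipeline would use nltk.word_tokenize and nltk.corpus.stopwords.

STOPWORDS = {"the", "a", "an", "and", "or", "to", "of", "we", "i",
             "is", "are", "that", "this", "it", "in", "on", "so"}

def thin_transcript(transcript: str) -> list[str]:
    tokens = re.findall(r"[a-z']+", transcript.lower())
    return [t for t in tokens if t not in STOPWORDS]

prompt_terms = thin_transcript(
    "So we need to validate the order and then ship it to the customer.")
print(prompt_terms)  # ['need', 'validate', 'order', 'then', 'ship', 'customer']
```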


Another applicable area might be to support SysML simulations ... "firing" ML methods that return property values that are updated in the shape-objects, influencing conditional formatting or being used downstream to implement behaviours, etc.


This can have considerably greater implications. Shape-objects could inquire of web services for in-process status on varied operational aspects of a business or factory.


This could facilitate a BPMN-to-BPEL-like workflow authoring environment (similar to Visio with SharePoint workflow authoring).


A shape-object could call a Python service with an ML application that could update the model with information on other (upstream) objects, or send shape property values to an ML model for predictive analysis, letting that impact a workflow simulation with real-time analytics/predictions.


Shape-objects could inquire of GPT-4, which would in turn generate other diagrams ...


So there are a lot of implications. I've asked similar questions in the past, and folks have graciously provided me with API documentation to write my own integrations to Google Colab, etc. ... but frankly I have bigger fish to fry and little time left for sleep!


Addendum ... OK, if we get really crazy ... could we use it to enable digital-thread feedback from in-service systems to modify a digital twin of some system or component, and then drive an FEA stress analysis to make predictive failure calls?


So I asked the following in Lucid Collaborative AI. I also asked the question in ChatGPT-3.5 (see below).


1. How am I to understand / utilize the output of Lucid AI?


2. How can I implement the ChatGPT output?


Thanks



I asked the same question in ChatGPT-3.5:


Implementing SysML v2 in Lucidchart and integrating it with Python for simulations can be a complex task, but it can be broken down into several steps. Here's a high-level overview of the process:




  1. Install and Set Up Required Software:
    • Lucidchart: Ensure you have a Lucidchart account and access to the SysML v2 shapes and features.
    • Python: Make sure you have Python installed on your system, along with any necessary libraries for simulations.
  2. Create Your SysML v2 Diagrams in Lucidchart:
    • Launch Lucidchart and create SysML v2 diagrams as needed, using the SysML v2 shapes and notations provided by Lucidchart.
  3. Export Diagram Data:
    • Lucidchart may offer export options that allow you to save diagram data in a structured format (e.g., XML or JSON). Check Lucidchart's documentation for export options.
  4. Python Integration:
    • Write Python code to read and interpret the exported diagram data. You can use libraries like lxml for XML or json for JSON parsing.
    • Convert the diagram data into a format suitable for simulations. This may involve creating Python classes or data structures to represent the elements in your SysML v2 diagrams.
  5. Simulation Code:
    • Write the Python simulation code using the data from your SysML v2 diagrams. This code will depend on the specific simulations you want to run; you may need scientific libraries like NumPy or SciPy for mathematical simulations.
  6. Visualize Simulation Results:
    • Depending on your needs, you can use Python libraries like Matplotlib or Plotly to visualize the results of your simulations.
  7. Feedback Loop:
    • If your simulations produce results that should be reflected in your SysML v2 diagrams, you'll need to establish a feedback loop. This may involve updating the Lucidchart diagrams based on simulation output.
  8. Testing and Validation:
    • Thoroughly test your integration to ensure that data is transferred correctly between Lucidchart and Python, and that simulations produce the expected results.
  9. Documentation and Maintenance:
    • Document your integration process and code for future reference and maintenance. Keep in mind that Lucidchart and Python libraries may receive updates that could impact your integration.
  10. Security and Access Control:
    • Ensure that your integration is secure and access-controlled, especially if you are working with sensitive or proprietary data.
  11. Scaling and Optimization:
    • Depending on the complexity of your simulations and the size of your diagrams, you may need to optimize your code and infrastructure for scalability.
  12. Continuous Improvement:
    • Stay up-to-date with SysML v2 standards, Lucidchart updates, and best practices for simulation to continuously improve your integration.




Remember that this is a complex task that may require expertise in SysML v2, Lucidchart, and Python. Additionally, the specific implementation details will depend on the nature of your SysML v2 diagrams and the simulations you want to perform.
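As a concrete sketch of steps 3 and 4 above (export, then Python integration), the snippet below parses a diagram export into Python structures a simulation could consume. The JSON layout, shape kinds, and `mass_kg` property are invented for illustration; the real schema would depend on whatever export format Lucidchart provides.

```python
import json
from dataclasses import dataclass

# Hypothetical export format: a list of shapes with ids, kinds, and
# properties. Invented for illustration; the real schema depends on
# Lucidchart's actual export options.

@dataclass
class Block:
    id: str
    kind: str
    properties: dict

exported = json.loads("""
{"shapes": [
  {"id": "s1", "kind": "sysml.block", "properties": {"mass_kg": 12.5}},
  {"id": "s2", "kind": "sysml.block", "properties": {"mass_kg": 7.0}}
]}
""")

# Convert the raw export into typed objects a simulation can work with.
blocks = [Block(s["id"], s["kind"], s["properties"]) for s in exported["shapes"]]
total_mass = sum(b.properties["mass_kg"] for b in blocks)
print(total_mass)  # 19.5
```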


 


Thanks for the detail, John. This is all interesting stuff. Several things here are on our minds and in the works (especially in the area of how we can take various input formats and generate canvas experiences from them, especially diagrams). Several others are interesting and specific to the domain of process automation, modeling, etc., where we'll need to find the balance between what Lucid offers directly vs. makes available via partnerships or APIs. Thanks again for the feedback; I'll be reaching out directly so we can chat in person about this if you're up for it!


FYI: https://github.com/labs4capella/python4capella


The above is a Python API for the Thales Capella MBSE tool, which implements the Arcadia method for model-based systems engineering (mostly based on SysML, per INCOSE).


How could something like this work for Lucid? An interface to MATLAB or Mathematica? What about Python/NLTK classes/methods integrated with Lucid shape-objects for intelligent NL analytics, and maybe even derivative process models based on "interpretation" (i.e., a suggested future state)?


<ALT> - what about meeting-transcript analysis via Python/NLTK that then creates process / architecture / etc. "interpretations" based on the content of the conversation?


from nltk.wsd import lesk
sense = lesk("Sing in a lower tone along with the bass".split(), 'bass')
print(sense, sense.definition())
....
Debug:  Sense1:
 Synset('bass.n.07')
 the member with the lowest range of a family of musical instruments


To make the tool more accessible and inclusive, perhaps consider using AI to explore voice-command integration. This would allow users with physical disabilities to control the tool using voice commands.


@Amelia S Interesting accessibility-related feedback! Thank you for your insight, @Ria S 👌


I would follow up on @Ria S's idea.

In my opinion, AI could start by handling simple tasks, like creating audio for all text messages (similar to the voice-over that Siri and Google Assistant provide).

Also, creating quick actions for the most common operations, like organizing sticky notes, or making multiple changes at once when you select a bunch of icons.

In short: AI, at this early stage, could be more of an assistant to the user for the most common tasks, making it easier and faster to work on specific boards.


Loving the new Generate Diagram feature in AI. Have shared it with a number of clients and colleagues. Fantastic addition to an already great tool!

 

One suggested addition to the functionality that I think would be useful (beyond just creating a written summary) would be the ability to populate the "Generate Diagram" prompt details from a completed workflow. That way, users could see the detail that Lucid's AI would have needed to create the diagram, and learn how to use the tool more effectively.


This has nothing to do with Lucid creating something new, but we can do it for you. Since you have Marketo, we build these kinds of connections and campaigns. I think it could be great for creating immediate awareness, and people could use their own ideas to try Lucid AI on the fly. Who could I talk to?


Regarding the UI, the floating prompt-refinement panel is blocking parts of my diagram that I want to view. The only way to move it is to resize and pan the image around. The only way to get rid of it is to dismiss it, which then prevents further refinement.

 

Can the refinement dialog just be an extension of the left-anchored panel instead of its own floating window? At a minimum, I'd need 1) to be able to move it around, and 2) to re-open it after closing it so I can further refine my model. Thanks!