Hello Michel,
Thank you for posting, and I'm sorry to hear about the poor experience you've had.
There are definitely inconsistencies in the behavior of the Get Document Contents endpoint around the “content too large” error; most notably, the error message isn't accurate. It's not about the size of the content in the JSON sense, but rather the resources our infrastructure needs to produce the output without putting the health of the system at risk. Unfortunately, there isn't a really effective way to capture that in a message. We have tried to come up with guidance so users can anticipate when they will encounter this, but we haven't found an approach that captures all the variables involved. What appears simple on the canvas can be very complex in its underlying representation.
Even worse, we will sometimes see a document that “randomly” succeeds or fails at different times. This occurs when the contents of the document are right at the edge of what our system can safely produce within the limits we have set. Depending on the load on the system at that moment, the request may fail or succeed.
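In the meantime, retrying a borderline request can help, since it may succeed when the system is under lighter load. Here is a minimal retry sketch in Python; the bearer-token auth and the Lucid-Api-Version header value are assumptions, and the backoff policy is just one reasonable choice, not a definitive implementation:

import time
import requests

API_BASE = "https://api.lucid.co"

def get_document_contents(document_id, api_key, max_attempts=5):
    # Fetch document contents, retrying with exponential backoff;
    # a document near the resource limit may fail intermittently.
    url = f"{API_BASE}/documents/{document_id}/contents"
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Lucid-Api-Version": "1",  # assumed versioning header
    }
    for attempt in range(max_attempts):
        resp = requests.get(url, headers=headers)
        if resp.ok:
            return resp.json()
        time.sleep(2 ** attempt)  # back off before the next try
    resp.raise_for_status()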
We have identified a long-term solution that will resolve these issues for all but the most complex documents. Unfortunately, I do not have an ETA for when those fixes will be publicly available. I wish I had a better answer at this time.
Hi Michael,
Thank you for your extensive and well-explained answer.
Could I suggest adding a page interval to the endpoint, so we could retrieve the JSON content part by part for large documents? Something like:
https://api.lucid.co/documents/{YourDocumentId}/{YourFirstPage-YourLastPage}/contents
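To illustrate, a client could then walk a large document in fixed-size chunks and merge the partial results. This is purely a sketch against the hypothetical route above (the {first}-{last} path segment does not exist today), and the "pages" key in the response is an assumption based on the existing contents output:

import requests

API_BASE = "https://api.lucid.co"

def fetch_contents_in_chunks(document_id, total_pages, chunk_size, api_key):
    # Walk the document chunk_size pages at a time using the
    # suggested {first}-{last} route, merging the partial results.
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Lucid-Api-Version": "1",  # assumed versioning header
    }
    pages = []
    for first in range(1, total_pages + 1, chunk_size):
        last = min(first + chunk_size - 1, total_pages)
        url = f"{API_BASE}/documents/{document_id}/{first}-{last}/contents"
        resp = requests.get(url, headers=headers)
        resp.raise_for_status()
        pages.extend(resp.json().get("pages", []))  # assumed response shape
    return pages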
Best regards.
Agreed that contents by page makes a lot of sense, even separate from the “complexity” issue. I'll pass it on as a suggestion. If it's a relatively easy win that also gives us functionality that makes sense, it would be a great stopgap.
I have made a permanent note of this thread. When I can provide any update on our progress on this issue, I'll post here.