
Hi Lucid Developer Community 👋,

I'm in the process of building a custom data connector and need some guidance on the best way to deploy the backend service. My team primarily uses Google Cloud Platform, and I'm planning to use either GCP Cloud Functions or Cloud Run to host the connector.

I've reviewed the developer documentation but am looking for more specific examples and best practices related to deployment, especially for a serverless architecture.

Could you help me with the following questions?

  • What are the recommended deployment patterns and infrastructure requirements for a custom data connector's backend?

  • Are there any specific advantages, limitations, or things to watch out for when using serverless platforms like GCP Cloud Functions or Cloud Run for this purpose (e.g., cold starts, timeouts, authentication)?

  • Does anyone have documentation, tutorials, or code examples of a data connector successfully deployed on a serverless platform?

Any advice or examples the community could share would be incredibly helpful.

Thanks in advance!

Hello mmuenker,

We're glad to hear that you are building on our extension platform. As is often the case, there are many ways to deploy a data connector. Since we use AWS, we run AWS Lambda functions fronted by an API Gateway. Data connector instances can spin up and down as load requires, and any state we need to track is held in the extension itself (on the document).

From a Lambda warm-up perspective, we have not needed to do anything special beyond the best practices AWS suggests. Performance has been acceptable even when a first request arrives with no Lambdas warm. We believe serverless deployment is a perfect fit for this scenario, and we are not aware of any gotchas; the framework was designed with this very use case in mind.
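In case a concrete shape helps, below is a minimal sketch of a Lambda-style handler that forwards an incoming request to a data connector action. Note the `RunAction` type and `makeHandler` factory here are illustrative stand-ins for wiring up your own connector, not part of the official SDK; the real action dispatch would come from a connector built with lucid-extension-sdk.

```typescript
// Illustrative stand-in for the data connector's action dispatch; the real
// function comes from your connector built with lucid-extension-sdk.
type ActionResult = {status: number; body: unknown};
type RunAction = (
  url: string,
  headers: Record<string, string | undefined>,
  body: unknown,
) => Promise<ActionResult>;

// Wraps a runAction function in an API-Gateway-proxy-style Lambda handler:
// parse the JSON body, run the action, and return the status and body.
export function makeHandler(runAction: RunAction) {
  return async (event: {
    path: string;
    headers: Record<string, string | undefined>;
    body: string | null;
  }): Promise<{statusCode: number; body: string}> => {
    const {status, body} = await runAction(
      event.path,
      event.headers,
      event.body ? JSON.parse(event.body) : undefined,
    );
    return {statusCode: status, body: JSON.stringify(body)};
  };
}
```

Because the handler is stateless (all state lives on the document, as noted above), any number of instances can serve requests concurrently.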

 

We do not currently have documentation on common architectural patterns for building extensions. We agree that we should, and discussions have started about adding that to our documentation.

 


I'm happy to share that I successfully deployed my data connector using Firebase Functions. For anyone interested in this approach, I've included my deployment code at the end of this post.

Now, I have a follow-up question regarding access control. My Firebase Function is currently public, and I would like to restrict its execution to only allow requests that originate from lucid.app.

I've looked into the headers sent with the request and have a few specific questions on how to best validate the source:

  • Can I validate the request using the x-lucid-rsa-nonce and x-lucid-signature headers? If so, could you provide guidance or an example of how to implement this server-side?

  • Alternatively, is it possible to use the Bearer token for validation?

  • If the Bearer token is the correct method, does this token belong to the active Lucid user's account or to the OAuth provider defined in the manifest.json?

Thank you in advance for your help!

Here is the code I used for the Firebase Functions deployment:

// Imports assumed: firebase-functions v2, the Lucid extension SDK, and Node's crypto.
import * as crypto from 'crypto';
import {onRequest} from 'firebase-functions/v2/https';
import {DataConnectorClient} from 'lucid-extension-sdk';
// makeDataConnector is the connector factory defined elsewhere in this project.

const dataConnector = makeDataConnector(new DataConnectorClient({crypto, Buffer}));

export const dataConnectorFunction = onRequest(
    {cors: [/lucid\.app$/]},
    async (request, response) => {
        const {status, body} = await dataConnector.runAction(request.url, request.headers, request.body);
        response.status(status).send(body);
    },
);

 


Hello mmuenker,

There is additional information about data-connector security here that may address some of your questions.

 

The framework automatically validates the signatures using the headers you mentioned. The link above also shows how to validate them manually in the extension, if desired. A given OAuth token corresponds to whichever OAuth provider from your manifest you supplied when making the `performDataAction` call. Those tokens represent the end user and can be used at any time to access the third-party API corresponding to that provider on the user's behalf.
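For anyone who does want to validate manually, the sketch below shows generic RSA-SHA256 signature verification with Node's built-in crypto module. To be clear, this is not the official Lucid validation flow: the key pair is generated locally purely for the demo, whereas a real check would verify the x-lucid-signature header over the nonce and request body against Lucid's published public key, as described in the security documentation.

```typescript
import * as crypto from 'crypto';

// Verifies a base64 RSA-SHA256 signature over a message with a PEM public key.
function verifyRsaSignature(publicKeyPem: string, message: string, signatureB64: string): boolean {
  const verifier = crypto.createVerify('RSA-SHA256');
  verifier.update(message);
  verifier.end();
  return verifier.verify(publicKeyPem, Buffer.from(signatureB64, 'base64'));
}

// Demo with a throwaway key pair (a stand-in for Lucid's real signing key):
const {publicKey, privateKey} = crypto.generateKeyPairSync('rsa', {modulusLength: 2048});
const signer = crypto.createSign('RSA-SHA256');
signer.update('nonce-plus-request-body');
signer.end();
const signature = signer.sign(privateKey).toString('base64');
const publicPem = publicKey.export({type: 'spki', format: 'pem'}).toString();
console.log(verifyRsaSignature(publicPem, 'nonce-plus-request-body', signature)); // true
```

Since the framework already performs this check for you, manual verification like this is only worthwhile as defense in depth, e.g. to reject forged requests before doing any work.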

