Question

Error when refreshing token


Badge +3

We are getting intermittent errors when refreshing OAuth2 tokens in our REST API integration. We are using a legacy token (Admin > App Integration > General > App Integrations > Custom OAuth apps), and the app has been working in our test environment for months. We recently deployed to production, where more users are using it more regularly.

The same user will get a refresh token error one moment, and then it errors again when they retry (see below).

We did have a scheduled process that was retrieving all documents for each user and therefore hitting the API more frequently (once per user every 15 minutes). We have stopped that process temporarily to see if that helps, but we would like to know if there are any other known issues that could be causing these intermittent failures.

 

{"type":"http.ClientResponse","code":400,"headers":{"Alt-Svc":"h3=\":443\"; ma=93600","alt-svc":"h3=\":443\"; ma=93600","connection":"close","Connection":"close","Content-Length":"166","content-length":"166","Content-Type":"application/json","content-type":"application/json","Date":"Wed, 24 Apr 2024 22:17:28 GMT","date":"Wed, 24 Apr 2024 22:17:28 GMT","Strict-Transport-Security":"max-age=31536000","strict-transport-security":"max-age=31536000","Vary":"Origin","vary":"Origin","Via":"1.1 mono001.prod-phx-na15.core.ns.internal","via":"1.1 mono001.prod-phx-na15.core.ns.internal","x-lucid-flow-id":"e93e10683a3fe85b","X-Lucid-Flow-Id":"e93e10683a3fe85b"},"body":"{\"error\":\"invalid_grant\",\"error_description\":\"Invalid client id, client secret, or refresh token\",\"error_uri\":\"https://developer.lucid.co/api/#general-oauth2-errors\"}"}

 


Comments

Badge +1

Hello Ben,

Thank you for reaching out. We are exploring the cause of the refresh token request failures. We recognize that the OAuth2 token refresh flow is sensitive, and we take these issues very seriously.

I’ll be in touch as soon as I have any findings that are helpful.

Badge +3

@Michael B101 - turning off the process that was running every 15 minutes had no effect on the error (I was concerned about maybe hitting the refresh twice at the same time but that doesn’t seem to be the case), we’re continuing to get the errors when trying to refresh.

This is a production issue, I appreciate any assistance you can provide.

Badge +1

For the request you posted originally, here is the sequence of events that occurred. All times are in `UTC`, and `hashes` refer to non-identifying values which represent an instance of a token. We never have access to the actual token.

 

  1. 2024-04-24T00:31:49+00:00
    1. A user token that had already been refreshed many times is successfully refreshed again, generating token hash `3de319811`.
    2. We return a 200 success with the new token pair.
  2. 2024-04-24T01:32:22+00:00
    1. A request to refresh token hash `3de319811` is received. Our system refreshes the token and updates the data store with the new pair.
    2. While updating the token history, a data store connection issue occurred, leading to an unhandled exception.
    3. We return a 500 internal server error.
  3. 2024-04-24T01:46:35+00:00
    1. Another request is received to refresh token hash `3de319811`
      1. Because the client never received the refreshed token from step 2, they still see `3de319811` as the current token pair.
      2. But, because our data store was updated before the exception was thrown, we see `3de319811` as a previously used token outside of the 1-minute grace period.
        1. Per the OAuth2 spec, when a refresh request is made with a previously used token, the entire chain of user and refresh tokens is revoked (see the sketch after this list).
      3. We revoke the original user token and the current refresh token.
      4. We return a 400 bad request (`invalid_grant`, the response in your original post).
  4. Every request since:
    1. We now consider the token invalid, and return the 400.
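
To make the sequence concrete, here is a heavily simplified sketch of this kind of rotation logic in TypeScript. To be clear, this is not our actual code (the names, the in-memory `Map`, and the helpers are all hypothetical), but it captures the grace-period and revoke-on-reuse behavior described above:

```typescript
// Hypothetical sketch of rotate-with-grace-period refresh handling.
// Names and the Map-based "data store" are illustrative only.

interface TokenRecord {
  replacedBy?: string;  // hash of the pair that superseded this one
  replacedAt?: number;  // epoch ms when it was rotated
  revoked: boolean;
}

const GRACE_MS = 60_000; // the 1-minute grace period mentioned above

function handleRefresh(store: Map<string, TokenRecord>, hash: string) {
  const rec = store.get(hash);
  if (!rec || rec.revoked) return { status: 400, error: "invalid_grant" };

  if (rec.replacedBy) {
    // This hash was already rotated. Inside the grace window we hand the
    // new pair back again (this is what makes a client-side retry safe);
    // outside the window we treat the request as reuse and revoke the chain.
    if (Date.now() - (rec.replacedAt ?? 0) <= GRACE_MS) {
      return { status: 200, pair: rec.replacedBy };
    }
    revokeChain(store, hash);
    return { status: 400, error: "invalid_grant" };
  }

  // Normal rotation: mint a new pair and record the supersession.
  // (Step 2 in the timeline failed after this point, which is why the
  // client never saw the new pair even though the store was updated.)
  const newHash = mintNewPair();
  rec.replacedBy = newHash;
  rec.replacedAt = Date.now();
  store.set(newHash, { revoked: false });
  return { status: 200, pair: newHash };
}

function revokeChain(store: Map<string, TokenRecord>, hash: string) {
  // Walk forward through the supersession chain, revoking every pair.
  for (let h: string | undefined = hash; h; ) {
    const rec = store.get(h);
    if (!rec) break;
    rec.revoked = true;
    h = rec.replacedBy;
  }
}

function mintNewPair(): string {
  return Math.random().toString(16).slice(2, 11); // stand-in for real issuance
}
```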

 

This is clearly an issue on our side, and an unfortunate sequence of events. The takeaways we are working on now:

  1. We need to evaluate the connection issue, how it occurred, and why it was unhandled.
    1. This investigation is underway.
  2. We need to evaluate the OAuth2 refresh flow and ensure that anywhere it could fail will appropriately roll back any updates, or at least ensure the new pair is always returned.
    1. This investigation is also underway.
  3. We need to update our documentation (which is already going through an overhaul) to state that when a 5xx response is received (or the connection fails), the client should retry the refresh.
    1. This is one of the goals of the 1-minute grace period. Although we had updated the refresh pair, there is 1 minute where we still consider the previous pair valid and will return the new pair.
    2. Even with the fixes we will implement for points 1 and 2, there is still the risk of a connection issue causing the client to not receive the new pair. This retry will protect against these scenarios.

I recognize that the recommendation in step 3 is a band-aid. But if you can introduce a retry mechanism in your auth flow, it will protect against this happening again. Unfortunately, the original token (and any others that have had similar problems) is now invalid and can’t be restored. That is why you only see 400s now.
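
To be concrete about what I mean by a retry, here is a minimal sketch assuming a generic fetch-based client. The endpoint URL, request body shape, and names are placeholders for whatever your integration already sends; the point is only the branching: retry on a network failure or 5xx, never on a 4xx.

```typescript
// Minimal retry sketch for the refresh call. TOKEN_URL and the body
// shape are placeholders; keep whatever your working token call sends.

const TOKEN_URL = "https://example.com/oauth2/token"; // your token endpoint

interface TokenPair { access_token: string; refresh_token: string; }

const sleep = (ms: number) => new Promise<void>((r) => setTimeout(r, ms));

async function refreshWithRetry(
  refreshToken: string,
  clientId: string,
  clientSecret: string,
  maxAttempts = 3,
): Promise<TokenPair> {
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    let res: Response;
    try {
      res = await fetch(TOKEN_URL, {
        method: "POST",
        headers: { "Content-Type": "application/x-www-form-urlencoded" },
        body: new URLSearchParams({
          grant_type: "refresh_token",
          refresh_token: refreshToken,
          client_id: clientId,
          client_secret: clientSecret,
        }),
      });
    } catch {
      // Network failure: the server may have rotated the pair even though
      // we never saw a response. Retrying inside the 1-minute grace period
      // returns the new pair.
      await sleep(500 * attempt);
      continue;
    }

    if (res.ok) return res.json() as Promise<TokenPair>;

    if (res.status >= 500) {
      // Same reasoning as above: the rotation may have committed before
      // the failure, so retry rather than giving up.
      await sleep(500 * attempt);
      continue;
    }

    // A 4xx (e.g. invalid_grant) is not retryable: the chain is revoked
    // and the user has to re-authorize.
    throw new Error(`Refresh failed permanently: ${res.status} ${await res.text()}`);
  }
  throw new Error("Refresh failed after retries");
}
```

The backoff keeps the retries inside the grace window while not hammering the endpoint.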

 

One point of clarification. Have you seen this with multiple user token flows, or just one that is completely broken?

 


Badge +3

Thanks @Michael B101 - I appreciate the transparency and communication. I’ll add in a retry and see if that fixes things. Please let me know if there are any other recommendations.

This has been affecting multiple users. I don’t have access to the logs myself (I’m a consultant and don’t have direct production access), but reviewing the logs with the client yesterday and today, we’re seeing it with most if not all users.

 

Badge +1

Hmmm, concerning. My interpretation of your original post was that it was just the one token. I will expand my research to all tokens for your client.

 

I have received nebulous reports of dropped connections in some of our microservices that I am investigating now. When I originally saw your post, I thought it was going to be the same thing. It turned out not to be the case with that token, but I am concerned that your other failures might be. If it is dropped connections, I believe the retry will protect you there as well. But that is clearly not the solution to the underlying problem.

 

I’ll be in touch with any additional information I learn. Let me know any status changes on your side, good or bad.

Badge +3

Ok, thanks @Michael B101. I added in the retry and am just awaiting deployment to prod (someone else will be deploying it); we should have some results later today.

Badge +1

Sounds great. I am adding logic to make it easy to see all failures for a given client and why. Right now, I have to find and look them up one at a time via logs. I expect that by sometime tomorrow I will be able to immediately get a list of client refreshes and their outcomes. That should make it easy to see macro trends that aren’t visible when looking at individual requests.

Badge +3

@Michael B101 we got the retry deployed and operational around 6:15 PM CDT. We’ll have users testing it tonight and tomorrow, and I’ll try to confirm tomorrow afternoon, one way or another, what our logs and functionality are looking like.

After deploying the retry functionality, we removed all user tokens so users will re-authorize the integration; that lets us start from scratch with the auth flow fully reset.

The integration was working during our initial test but we didn’t do anything beyond embedding a single document with a single user.

We have left the scheduled process I mentioned earlier turned off (it authenticates on behalf of users every 15 minutes to retrieve metadata on their embedded documents). We plan to turn it back on tomorrow afternoon so we can get most of the day tested without it, in case the backend API calls combined with user interactions caused a problem.

 

Badge +1

Sounds great. Keep me posted.

** Reposting this response as I am not sure it went through:

You mentioned trying to refresh twice, and that made me think of a possibility. We recently had an embed-based integration getting refresh token failures. It turned out that in cases where their application had many embeds on the same page, they would all attempt to separately refresh the underlying user token at the same time (each on their own thread). Because the multiple parallel requests were so close together, they hit a perfect timing window in our logic that led to a race condition and ultimately a duplication error in our data store.

A fix for that problem is underway now. We are refactoring the refresh token request to be entirely atomic, but it’s not an insignificant change, as we need to ensure it will scale and protect the endpoint from being overwhelmed. I will look at your requests tomorrow to see if it could be the same thing, but I wanted to send this out in case it rings any bells. This is another scenario where the retry should work, but a token can still end up in a bad state if the requests happen to be handled at just the wrong moment in an extremely narrow timing window.
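
For anyone reading along, the usual client-side guard against that scenario is to funnel all refreshes for a given user through a single in-flight promise. A hypothetical sketch (the names are illustrative, not from our API):

```typescript
// Single-flight guard: concurrent callers share one in-flight refresh
// instead of each firing their own request. Names are hypothetical.

interface TokenPair { access_token: string; refresh_token: string; }

const inFlight = new Map<string, Promise<TokenPair>>();

function refreshOnce(
  userId: string,
  doRefresh: () => Promise<TokenPair>,
): Promise<TokenPair> {
  const existing = inFlight.get(userId);
  if (existing) return existing; // piggyback on the refresh already running

  const fresh = doRefresh().finally(() => inFlight.delete(userId));
  inFlight.set(userId, fresh);
  return fresh;
}
```

With that in place, several embeds noticing an expired token at the same moment produce one network call instead of several. Note it only deduplicates within a single process; a separate backend job would still need its own coordination (or the retry).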

 

Badge +3

Thanks for the additional thought @Michael B101. Our account-level authentication is my biggest concern, because multiple users could be using it at a time (our embeds are authenticated at the account scope). We won’t have more than one operation per UI user at any given time (we only load one embed at a time, whichever one is actively being viewed), but between the system process and any number of users there could be up to 2 refreshes at the user level (if a user happens to be creating a document while the system is updating metadata on their documents) and any number at the account level.

I added some additional logic to the retry so that it checks back against the saved token values if the call to Lucid fails. That way, if we do happen to have two processes running, they’ll check back in with each other when one fails. I think that this, combined with the 1-minute grace period you mentioned earlier in the thread, should handle things well.
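
Roughly, what I added looks like the sketch below, with our persistence code simplified to placeholder helpers (the `loadSavedPair`/`savePair` names are stand-ins, not our real code): on failure, re-read the persisted pair in case another process already rotated it, and only surface the error if nothing changed.

```typescript
// Sketch of the "check back against the saved values" logic. Our real
// persistence layer is simplified to declared placeholders here.

interface TokenPair { access_token: string; refresh_token: string; }

// Placeholders for our actual storage helpers.
declare function loadSavedPair(userId: string): Promise<TokenPair>;
declare function savePair(userId: string, pair: TokenPair): Promise<void>;
// Any refresh call, e.g. the retry helper sketched earlier in the thread.
declare function refreshWithRetry(refreshToken: string): Promise<TokenPair>;

async function refreshWithFallback(userId: string): Promise<TokenPair> {
  const before = await loadSavedPair(userId); // the pair we are about to rotate
  try {
    const fresh = await refreshWithRetry(before.refresh_token);
    await savePair(userId, fresh);
    return fresh;
  } catch (err) {
    // Another process may have rotated and saved a new pair while our call
    // was failing; if the stored pair changed underneath us, use it instead
    // of surfacing the error.
    const after = await loadSavedPair(userId);
    if (after.refresh_token !== before.refresh_token) return after;
    throw err;
  }
}
```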

So far today everything has been working well - no further issues since implementing the retry logic and resetting everything.

Thanks for all the help.

 

Badge +1

Super glad to hear it. As a matter of professional experience, I remain a bit concerned that you will still see problems; hopefully next week will go smoothly. Of course, you can resolve this thread whenever you feel the original question has been handled.
