Consuming APIs responsibly
Building fediverse.to requires hitting the API endpoints of thousands of servers. Some of these servers are huge, some are tiny, and almost all of them are managed by just one person or a small team. As a sysadmin myself, I know how painful it is when a server crashes under mysteriously large amounts of traffic from seemingly nowhere, so I put together a small list of dos and don'ts to make sure my hobby project doesn't ruin someone's night. What best practices and guidelines should I follow when polling a diverse mix of other people's servers? How can I make sure my work doesn't cause unintended consequences?
First do no harm
The first rule of etiquette is to avoid thrashing the API server. The best way to do this is to assume it has a small amount of compute, storage, and network capacity, and that it serves other clients more important than you. Be kind: tax the server as little as possible so that it can keep serving other requests.
Make requests in series rather than in parallel, which lets the API server keep its threads and cores free for more important things. This is a judgement call since it will slow down your app or client considerably, but that's not the worst thing. Your work may be important, but it is rarely mission-critical.
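Making requests in series can be as simple as a plain loop with a pause between iterations, so the server never sees more than one in-flight connection from you. A minimal sketch (the one-second delay is an assumption — tune it to the server you're polling):

```python
import time

import requests


def fetch_serially(urls, delay=1.0, timeout=10):
    """Fetch URLs one at a time, pausing between requests so the
    target server only ever handles a single connection from us."""
    results = {}
    for url in urls:
        try:
            resp = requests.get(url, timeout=timeout)
            results[url] = resp.status_code
        except requests.RequestException:
            # Record the failure and move on; don't hammer a struggling server.
            results[url] = None
        time.sleep(delay)  # breathing room between requests
    return results
```

The `timeout` matters too: without it, a hung server ties up your client indefinitely, and you may be tempted to retry in parallel.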
The API might also support fetching multiple records in a single request. For example, there may be a provision to supply multiple record IDs and get an array of results in one response (as GraphQL does). This helps in two ways: first, the API might be able to fetch the data in a single SELECT with multiple WHERE conditions, lowering database transaction costs. Second, it saves multiple network round trips. Use the network conservatively and batch requests wherever possible, although your mileage will vary.
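When an API does offer batching, the client's job reduces to splitting a long ID list into groups the endpoint will accept. A small sketch — the `?ids=` query parameter and the batch size of 50 are assumptions; check the actual API's documentation:

```python
def chunk(ids, size=50):
    """Split a long list of record IDs into batches small enough
    for the API to accept in one request."""
    return [ids[i:i + size] for i in range(0, len(ids), size)]


# Hypothetical batch endpoint taking a comma-separated ids parameter:
# for batch in chunk(all_ids):
#     resp = requests.get(base_url, params={"ids": ",".join(batch)})
```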
The fediverse.to backend uses the requests library, which supports gzip compression by default, meaning that if the server supports it too, there will be fewer bytes on the wire. The backend also caches responses for re-use, leading to fewer requests to the API server.
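A response cache doesn't need to be fancy: a dict keyed by URL, with a timestamp to expire stale entries, already avoids repeat requests. A minimal in-memory sketch (the fediverse.to backend's actual caching may differ, and the one-hour TTL is an assumption):

```python
import time


class TTLCache:
    """Tiny in-memory cache that forgets entries after a fixed TTL."""

    def __init__(self, ttl_seconds=3600):
        self.ttl = ttl_seconds
        self._store = {}  # key -> (value, stored_at)

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        value, stored_at = entry
        if time.monotonic() - stored_at > self.ttl:
            del self._store[key]  # stale; drop it so we re-fetch
            return None
        return value

    def set(self, key, value):
        self._store[key] = (value, time.monotonic())
```

In use, you check the cache before making a request and store the response body after; every cache hit is one less request the API server has to handle.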
Let the UX drive performance
If your application is fine with 24-hour-old data, don't fetch it every 5 minutes just because you can. Think about the product you're building and what its data freshness requirements actually are. This can spare the target API server a huge amount of load. The API requests you don't make are as important as the ones you do.
Backoff and retry
If the server is unable to comply with your request, wait a generous amount of time before trying again. The server may be short on resources, and immediately retrying a request that has already stalled doesn't help. Use exponential backoff between retries to give the server time to recover before you request content again.
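Exponential backoff just means doubling the wait after each failed attempt, usually with a cap and a bit of random jitter so many clients don't all retry in lockstep. A sketch, assuming a 1-second base delay and a 60-second cap:

```python
import random
import time

import requests


def backoff_delays(attempts, base=1.0, cap=60.0):
    """Exponentially growing delays (1s, 2s, 4s, ...), capped at `cap`."""
    return [min(cap, base * 2 ** n) for n in range(attempts)]


def get_with_retries(url, attempts=5, timeout=10):
    """GET a URL, backing off after server errors or rate limiting."""
    for delay in backoff_delays(attempts):
        try:
            resp = requests.get(url, timeout=timeout)
            # Treat 5xx and 429 as "server needs a break"; retry later.
            if resp.status_code < 500 and resp.status_code != 429:
                return resp
        except requests.RequestException:
            pass
        time.sleep(delay + random.uniform(0, 1))  # jitter avoids retry stampedes
    return None  # give up gracefully rather than hammer the server
```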
When all else fails…
When (not if) you end up breaking someone else's baby, make sure you're easy to find. Your website or domain should list your contact information so the server admin can reach you in an emergency.
Your User-Agent string should let the server admin know what technology is hitting their API. The format User-Agent: <product>/<product-version> <comment> is simple and informative. The From header field should contain an email address where the server admin can reach you.
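With requests, both headers can be set once on a Session so every outgoing call carries them. The product name, URL, and email below are placeholders — substitute your own:

```python
import requests

# Hypothetical values — replace with your own project details.
HEADERS = {
    "User-Agent": "fediverse-crawler/1.0 (+https://example.com/about)",
    "From": "admin@example.com",
}

session = requests.Session()
session.headers.update(HEADERS)

# Every request made through this session now identifies you, e.g.:
# resp = session.get("https://some.instance/api/v1/instance")
```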
Use a reputable hosting provider
Most hosting providers publicly disclose the IP address range that they use. So if your code starts misbehaving, your IP address can be traced back to your hosting provider. And if your hosting provider has a good support team, they can reach you and help you fix things. I was recently contacted by the DigitalOcean support team about some issues one of my scripts caused. Apparently my Python client was hitting too many non-existent URLs on an API server and the server admins got in touch with DigitalOcean. I identified the problem, fixed it, and everyone lived happily ever after.