Introduction to Batch APIs
In modern Python applications, it’s common to access remote APIs using REST or other web-based technologies. Batch APIs can process multiple requests in a single call, which you can use to reduce the number of network round trips to the remote service. This is ideal when you have to make many calls to a remote service that could be combined into a single request.
Suppose you had a REST API that returned the current price of a stock. Using a simple API that takes a single stock identifier and returns its current price, fetching the prices of a thousand stocks would require a thousand API calls. A batch API offering the same functionality would instead take a set of stock identifiers in the request and return the current price for every requested identifier, so you could fetch all the prices you need in a single request. This reduces the network overhead, and therefore the latency of your application, and it potentially reduces the load on the remote server too.
In this article, you will learn how to use a batching pattern with Python’s asyncio package to batch many individual function calls into a smaller number of requests.
Motivation: Async Python Functions in Excel
This article came about from a user of the Python Excel add-in PyXLL asking a question about how to use a batch API to streamline their Excel spreadsheet.
PyXLL embeds Python into Excel, enabling Python functions to be called directly from Excel spreadsheets. Each time a cell that uses a Python function recalculates, that Python function is called. In this case, the function was an async function making a request to a REST server.
A sheet with thousands of cells making individual requests to a REST API was taking too long. The solution was to use the batching pattern!
Background: AsyncIO and Concurrency
When making multiple requests to a remote server, you often don’t want to send a request and wait for its response before sending the next one. Sending multiple requests concurrently and then waiting for all of the responses is usually much faster. You can achieve this in Python using multithreading or async programming. This section gives an overview of both, and of why you would choose one over the other.
Multithreading is a way to perform multiple tasks concurrently. In the threading model, you start multiple threads and each thread executes its code at the same time. If your problem is CPU bound, breaking it down into tasks to be run in parallel using multithreading can help. A program is said to be CPU bound when the main performance bottleneck is the CPU processing time.
There are some Python-specific subtleties to threading, such as the Global Interpreter Lock, that this article won’t go into, but in theory, that’s basically how it works!
The computer’s operating system manages all the threads, ensuring each gets a share of CPU time. This adds overhead, as each context switch takes time that could be spent doing something else, and that overhead scales with the number of threads. When the bottleneck is waiting on IO (network requests, for example), running one thread per request, with each thread spending most of its time waiting for a response, is far from ideal, and it doesn’t scale to thousands of requests. That’s where async programming comes in.
Asynchronous Programming with asyncio
Asynchronous programming in Python is a different model of concurrency that doesn’t use multiple threads. Instead, everything runs on a single thread and Python manages switching between active tasks. It is perfect for programs that use a lot of network requests or other IO-bound tasks like disk or database access.
An event loop manages a set of running tasks. When a task is waiting for something like a network request to complete, it “awaits”. While a task is awaiting, the event loop can schedule other tasks to run; another task can send its own network request and then await, allowing yet another task to run, and so on. When the network response is ready, the event loop resumes the task. This allows multiple simultaneous requests to be in flight without the overhead of one thread per request.
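As a small illustration, using “asyncio.sleep” to stand in for waiting on the network, ten concurrent 100 ms waits complete in roughly 100 ms rather than one second:

```python
import asyncio
import time

async def fake_request(i):
    await asyncio.sleep(0.1)  # stands in for awaiting a network response
    return i

async def main():
    start = time.perf_counter()
    # All ten tasks are in flight at once; the event loop switches
    # between them each time one of them awaits
    results = await asyncio.gather(*(fake_request(i) for i in range(10)))
    return results, time.perf_counter() - start
```

Running `asyncio.run(main())` returns all ten results after roughly a tenth of a second of total wall time.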
Advantages of a Batch API
Above you learned you can make multiple requests concurrently. This can be much faster than waiting for each request to be returned before sending the next request. If you can send all the requests you need at the same time, why do you need a batch API?
Sending multiple requests requires more network traffic than sending a single request. If you can request all the data you need using a single request that is more efficient from a data transfer point of view.
It can also have other benefits. For example, if the remote server can reduce the amount of work it needs to do by fetching everything in one go then the time it needs to service that one batch request can actually be less than the total time needed to service the equivalent individual requests.
The Batching Pattern in Python
Now that you understand what a batch API is, and that you can make multiple requests concurrently using asynchronous programming in Python, what is the batching pattern and why do you need it?
Put simply, the Batching Pattern collects together multiple requests into a single request and dispatches that single request in one go. In the rest of this article, you will see how you can use this to turn an implementation that uses lots of individual requests into one that batches requests together to make fewer calls to a remote server.
Example: Fetching Locations for Street Addresses
You’ll use fetching the locations of street addresses as your example, using the REST API from https://www.geoapify.com/. There’s a free tier you can sign up to for testing, and it supports fetching multiple locations in bulk. To use the code below you will need to sign up and get an API key.
Here’s the first attempt at some code to fetch locations for a number of street addresses:
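A minimal sketch of that first attempt, using only the standard library (the blocking “urllib” request is run in a worker thread via “asyncio.to_thread” so the event loop isn’t blocked). The endpoint and response fields follow Geoapify’s geocoding API, and “YOUR-API-KEY” is a placeholder for your own key:

```python
import asyncio
import json
import urllib.parse
import urllib.request

API_KEY = "YOUR-API-KEY"  # placeholder: use your own Geoapify API key
GEOCODE_URL = "https://api.geoapify.com/v1/geocode/search"

async def get_location(address):
    """Fetch the (lat, lon) of a single address from the geocoding API."""
    url = GEOCODE_URL + "?" + urllib.parse.urlencode(
        {"text": address, "apiKey": API_KEY})

    def fetch():
        # Blocking HTTP request, run in a worker thread below
        with urllib.request.urlopen(url) as response:
            return json.load(response)

    data = await asyncio.to_thread(fetch)
    properties = data["features"][0]["properties"]
    return properties["lat"], properties["lon"]

async def main():
    addresses = [
        "350 Fifth Avenue, New York",
        "10 Downing Street, London",
    ]
    # Each request is awaited before the next one is sent
    for address in addresses:
        location = await get_location(address)
        print(address, "->", location)
```

Run it with `asyncio.run(main())`.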
You might have noticed that the above code is still calling the API for each address sequentially. Despite using async, the for loop is currently waiting for each request to complete before moving to the next address. To fix that you can use the asyncio function “gather”. By gathering the tasks together and awaiting them all at the end you don’t need to await them individually.
Your updated main function now looks like this:
You are still sending multiple requests to the server. Next, you will see how the Batching Pattern batches these requests together to reduce the number of requests, without modifying your main function.
Example: Fetching Multiple Locations using a Batch API
Using a batch API you can submit multiple requests in one go. If the server handling the request is able to process a batch more efficiently than individual requests it can be much faster to use a batch request when dealing with more than a handful of queries.
You’ll use the batch version of the geocoding API used above. It’s a little more complicated: instead of submitting a single address as part of the URL, you have to make a POST request. Because a batch can take a little while to process, the server doesn’t return the results immediately. Instead, it first responds with a request id, which you then poll to check whether the results are ready. This is a common pattern when implementing a batch API.
The following function queries the API for the locations of a list of addresses. It does this using a single request to the batch API.
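A sketch of such a function, again standard-library only. The batch endpoint, the returned job “id”, and the “lat”/“lon” result fields follow Geoapify’s documented batch API at the time of writing; check their docs for the current details, and treat “YOUR-API-KEY” as a placeholder:

```python
import asyncio
import json
import urllib.parse
import urllib.request

API_KEY = "YOUR-API-KEY"  # placeholder: use your own Geoapify API key
BATCH_URL = "https://api.geoapify.com/v1/batch/geocode/search"

async def get_locations(addresses):
    """Fetch locations for a list of addresses with one batch request."""
    def post_batch():
        # Submit all the addresses in a single POST request
        request = urllib.request.Request(
            BATCH_URL + "?" + urllib.parse.urlencode({"apiKey": API_KEY}),
            data=json.dumps(addresses).encode("utf-8"),
            headers={"Content-Type": "application/json"},
            method="POST",
        )
        with urllib.request.urlopen(request) as response:
            return json.load(response)

    def poll(job_id):
        # Ask the server whether the batch results are ready yet
        url = BATCH_URL + "?" + urllib.parse.urlencode(
            {"id": job_id, "apiKey": API_KEY})
        with urllib.request.urlopen(url) as response:
            if response.status != 200:
                return None  # still pending
            return json.load(response)

    # The server responds with a job id rather than the results
    job = await asyncio.to_thread(post_batch)

    # Poll until the batch has been processed
    while True:
        await asyncio.sleep(1)
        results = await asyncio.to_thread(poll, job["id"])
        if results is not None:
            break

    # One result per address, assumed to come back in the order submitted
    return [(result["lat"], result["lon"]) for result in results]
```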
Putting it Together: The Batching Pattern
Now you have a function that can call a batch API to find the location of a list of addresses in bulk. Your next task is to refactor “get_location” so that it can take advantage of the batch API without having to change your “main” function.
Why not change the “main” function? In this simple illustration, it would be trivial to change the main function to call get_locations. In real-world projects that sort of refactoring is often not so simple. Other times it’s not even desirable to change the inputs a function takes and you often want to shield the end user of the function from the implementation details.
To come back to the original question that inspired this post, that was about calling Python functions from Excel using the Excel add-in PyXLL. In that case, the end user is an Excel user who may not know anything about Python. Having a single function that takes one input and returns one output fits their expectations as an Excel user. Exposing them to the concept of batches would confuse matters unnecessarily, and it would mean they had to structure their spreadsheet to call the function in an efficient way. Handling the batching of requests behind the scenes, while keeping the simple interface the end user sees, is definitely an advantage in this case.
How it works
In pseudo-code, what we want to write is along these lines:
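Roughly something like this:

```
async def get_location(address):
    add (address, future) to the current batch
    if the batch processing loop is not running:
        start it
    await the future
    return the future's result

batch processing loop:
    wait a short time for requests to accumulate
    take all queued (address, future) pairs as one batch
    call the batch API with all the addresses
    for each (address, future) in the batch:
        set the future's result to the location for that address
```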
You can achieve this in Python using asyncio. Your “get_location()” function can start a background task to process any queued requests, then await until that background task has processed the batch containing its request, and finally return the result. The background task should only be started once, so you will need to check whether it’s already running before starting it. Because “get_location” is an async function, if it is called multiple times each call can run while the others are awaiting, and each subsequent call adds a request to the current queue.
To return the result back from the background task to the awaiting get_location functions you will use the asyncio primitive “Future”. A Future is an awaitable object that when awaited on will block until a result is set.
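A minimal illustration of a Future passing a result from one task to another:

```python
import asyncio

async def producer(future):
    # Simulate a background task that produces a result later
    await asyncio.sleep(0.01)
    future.set_result("hello")

async def main():
    future = asyncio.get_running_loop().create_future()
    asyncio.create_task(producer(future))
    # This task now blocks here until producer() sets the result
    return await future
```

`asyncio.run(main())` returns `"hello"` once the producer task has set the result.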
Your “get_location()” function re-written to batch up requests, using a future to pass the result back, looks like this:
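A sketch of the rewritten function follows. “ADDRESSES_BATCH” and “BATCH_LOOP_RUNNING” are module-level names holding the queued requests and the state of the background loop:

```python
import asyncio

# Queued (future, address) pairs waiting to be sent as one batch
ADDRESSES_BATCH = []
BATCH_LOOP_RUNNING = False

async def get_location(address):
    """Takes a single address and returns a single location, batching
    the requests to the server behind the scenes."""
    global BATCH_LOOP_RUNNING

    # A Future that the batch-processing loop will set the result on
    future = asyncio.get_running_loop().create_future()
    ADDRESSES_BATCH.append((future, address))

    # Start the batch-processing loop, if it's not already running
    if not BATCH_LOOP_RUNNING:
        BATCH_LOOP_RUNNING = True
        asyncio.create_task(process_batches_loop())

    # Wait until the loop has processed the batch containing this request
    return await future
```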
The code above creates an asyncio.Future object and adds it, along with the address, to a list that will be processed as a batch. If the loop that processes the batches is not already running, it is started using “asyncio.create_task”, which schedules your “process_batches_loop” function on the asyncio event loop, to be called when the other running tasks have awaited. You’ve not yet defined “process_batches_loop”, but you will do that next. Finally, you await the future, allowing other tasks on the asyncio event loop to run, and once the result has been set you return it.
Processing the Batch
The “process_batches_loop” function waits a short time to allow other functions to add requests to the “ADDRESSES_BATCH” list. It then submits all the queued requests as a single call to the REST API. Once the results are returned from the REST API it unpacks the results and sets the results on the futures, allowing each awaiting “get_location” function to complete.
You have now achieved the original goal. You have a function “get_location” that looks to the caller like your original function. It takes a single address and returns a single location. Behind the scenes, it batches these individual requests together and submits them to a batch API. Batch APIs can offer better performance compared with APIs that only process individual requests and now your function can take advantage of that, without any change to how the function is called.
The time spent waiting for requests to be added to the batch should be tuned to match how the function is being used. If the function is likely to be called many times at almost the same moment, for example, multiple cells being calculated at the same time in Excel, then a short delay can be used. In other situations, for example, if the calls result from some user input that might take a few seconds, then a longer delay may be more appropriate. Logging the time each item is added to the batch, along with the time each batch is processed, would help you determine the optimal time to wait.
Room for Improvement
There is plenty of room for improvement in the code presented here. I hope this has given you some ideas to take this forward and use in your own projects! The code was written in a relatively simple way to try to make the intention behind it clear, but before you use this in a real-world application there are some things you will need to consider.
- Error checking. This is probably the most important thing to add. What happens if the loop processing the batches fails? Your code should handle any errors gracefully, or at the very least log them so that you can track what’s happened.
- Unnecessary looping. The loop to process batches as written continues looping even if there is nothing to do. You could modify this to await on an “asyncio.Event” object until you’ve queued at least one item. Alternatively, you could exit the loop when there are no more items to process and restart it when needed.
- Stopping the loop when your program ends. The loop will continue looping as long as BATCH_LOOP_RUNNING is True. When your program ends you should think about how to gracefully end the loop. This could be simply setting BATCH_LOOP_RUNNING to False and then awaiting on the task for it to complete. The function “asyncio.create_task” returns a Task object which you could store as a global variable.
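For the looping point above, a hypothetical “BatchQueue” helper using “asyncio.Event” might look like this (a sketch, not code from the sections above):

```python
import asyncio

class BatchQueue:
    """Event-driven batch queue: sleeps until at least one item is queued."""

    def __init__(self, delay=0.1):
        self.items = []
        self.delay = delay
        self.has_items = asyncio.Event()

    def put(self, item):
        self.items.append(item)
        self.has_items.set()  # wake the processing loop

    async def take_batch(self):
        await self.has_items.wait()      # no busy looping while empty
        await asyncio.sleep(self.delay)  # window for more items to arrive
        batch, self.items = self.items, []
        self.has_items.clear()
        return batch
```

The processing loop would then `await queue.take_batch()` instead of polling an empty list.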
In this article, you have learned what a batch API is and why it can be advantageous to use one. You looked at concurrency in Python, comparing multithreading to asynchronous programming. Finally, you saw how to use a batching pattern to turn a function that processes individual requests into one that uses a batch API.
The REST API used here is just one example. You can apply the same technique to database queries, or to any other type of function where it’s more efficient to submit requests in bulk.
Batching requests behind the scenes and hiding the details from the user-facing function or API is useful when you want to keep things simple for the end user of the function. It can also be a way to retrofit a batch API to an existing code base where refactoring would be difficult.
The motivation in the use case this article was based on was calling a Python function from Excel, without exposing the Excel user to the details of managing batch calls. The user calls a simple function to perform an individual request. If they build a sheet that makes multiple requests across different cells then this solution automatically batches everything together behind the scenes. The Excel add-in PyXLL enables integrating Python into Excel, making it possible to call Python functions as Excel worksheet functions.