    Size-based buffering with batch aggregation · a37a6e69
    Joe LeGasse authored
    The largest change is that buffering is now based on a rough memory
    footprint, rather than just the number of writes to store in memory. In
    addition, writes are batched together (based on the query string) to aid
    in back-filling data faster once the down service comes back online.
    This means that users can have a somewhat bounded memory footprint in
    the event of a downed server.
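The size-based, batch-aggregating buffer described above could be sketched roughly as follows. This is a minimal illustration, not the actual implementation: the `buffer`/`batch` types, the byte-count footprint, and the FIFO eviction policy are all assumptions for the example.

```go
package main

import "fmt"

// batch groups raw write bodies that share the same query string, so a
// recovered backend can be back-filled with fewer, larger requests.
type batch struct {
	bufs [][]byte
	size int // rough memory footprint of the bodies in this batch
}

// buffer holds pending writes up to a rough byte limit, rather than a
// fixed count of writes, keeping memory use bounded while a backend is down.
type buffer struct {
	limit   int
	size    int
	batches map[string]*batch
	order   []string // FIFO of query strings, used for eviction
}

func newBuffer(limit int) *buffer {
	return &buffer{limit: limit, batches: make(map[string]*batch)}
}

// add appends a write body under its query string, evicting the oldest
// batches when the rough footprint would exceed the limit.
func (b *buffer) add(query string, body []byte) {
	for b.size+len(body) > b.limit && len(b.order) > 0 {
		oldest := b.order[0]
		b.order = b.order[1:]
		b.size -= b.batches[oldest].size
		delete(b.batches, oldest)
	}
	bt, ok := b.batches[query]
	if !ok {
		bt = &batch{}
		b.batches[query] = bt
		b.order = append(b.order, query)
	}
	bt.bufs = append(bt.bufs, body)
	bt.size += len(body)
	b.size += len(body)
}

func main() {
	buf := newBuffer(32)
	buf.add("db=a", []byte("cpu=1\n"))
	buf.add("db=a", []byte("cpu=2\n")) // aggregated with the first db=a write
	buf.add("db=b", []byte("mem=3\n"))
	fmt.Println(len(buf.batches["db=a"].bufs), buf.size) // prints "2 18"
}
```

Writes with the same query string land in one batch, and the buffer tracks bytes rather than write counts, so the memory ceiling holds regardless of how large individual writes are.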
    
    There is also a small but significant change: 2xx responses are written
    back to the client immediately. We are using a buffered channel, so the
    extra responses will filter in and then get garbage collected. 4xx
    responses should probably be updated to have the same behavior. The
    benefit of returning as soon as we have enough information is that the
    http server can re-use the underlying connection immediately, rather
    than waiting for all of the backends to come back online (which could
    take hours or days).